- Normal inference (needs ~20GB GPU memory)
- 4-bit quantized inference (needs ~7GB GPU memory)
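The repository itself likely relies on a quantization library for the 4-bit path, but the memory saving is easy to see from the underlying idea. The sketch below is a minimal, self-contained illustration of absmax 4-bit quantization (not this repo's actual implementation): each float weight is mapped to one of 16 signed integer levels plus a single per-tensor scale, which is why memory drops to roughly a third of the full-precision footprint quoted above.

```python
import numpy as np

def quantize_4bit(w):
    # Absmax quantization: scale so the largest weight maps to level 7,
    # then round every weight to the nearest 4-bit signed level (-8..7).
    scale = np.abs(w).max() / 7.0
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_4bit(q, scale):
    # Recover approximate float weights from the integer levels.
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)   # toy weight tensor
q, scale = quantize_4bit(w)
w_hat = dequantize_4bit(q, scale)
# Rounding error is bounded by half a quantization step (scale / 2).
```

In practice libraries pack two 4-bit levels per byte and keep per-block scales for better accuracy; the per-tensor scale here is only for clarity.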
The following demos use the image captioning task:
- PEFT (LoRA) finetuning (notebook, fits on Google Colab)
- Normal finetuning (needs ~40GB GPU memory)
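The gap between the LoRA and full-finetuning memory requirements above comes from how few parameters LoRA actually trains. As a conceptual sketch (not this repo's code), LoRA freezes the base weight `W` and learns only a low-rank update `B @ A`; the dimensions `d`, `k`, and rank `r` below are illustrative:

```python
import numpy as np

d, k, r = 768, 768, 8
rng = np.random.default_rng(0)

W = rng.standard_normal((d, k)).astype(np.float32)        # frozen base weight
A = (0.01 * rng.standard_normal((r, k))).astype(np.float32)  # trainable, small init
B = np.zeros((d, r), dtype=np.float32)                    # trainable, zero init

def forward(x):
    # Frozen path plus low-rank adapter path: x @ (W + B A)^T.
    return x @ W.T + x @ (B @ A).T

x = rng.standard_normal((2, k)).astype(np.float32)
y = forward(x)

# Only A and B train: r*(d + k) parameters instead of d*k.
trainable = r * (d + k)
full = d * k
```

Because `B` starts at zero, the adapted model is exactly the base model at initialization, and only `r*(d + k)` parameters (here 12,288 vs. 589,824) need gradients and optimizer state, which is what lets the notebook fit on a Colab GPU.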