
llm-fine-tuning

Here are 40 public repositories matching this topic...

Fully Connected Neural Networks, Multilayer Neural Networks, MAdaline, CNNs, Segmentation, Detection, RNNs, CNN-LSTM, LSTM, Bi-LSTM, GRU, Transformers, Huber Loss, ViT, DGMs, Triplet VAE, AdvGAN, Image Caption Generation, attention, LLM Fine-Tuning, Soft Prompting, LoRA, Layer Freezing, SlimOrca

  • Updated Nov 21, 2025
  • Jupyter Notebook
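Among the fine-tuning techniques listed above, LoRA (Low-Rank Adaptation) is the most widely used: the pretrained weight is kept frozen and a small low-rank update is learned alongside it. A minimal sketch, assuming the common parameterization W' = W + (alpha / r) * B @ A with B zero-initialized (all shapes and names here are illustrative, not from the repository):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 4

W = rng.normal(size=(d_out, d_in))   # frozen pretrained weight
A = rng.normal(size=(r, d_in))       # trainable down-projection
B = np.zeros((d_out, r))             # trainable up-projection, zero-initialized

def lora_forward(x):
    # Base path plus low-rank adapter path, scaled by alpha / r.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B initialized to zero, the adapter starts as an exact no-op,
# so training begins from the pretrained model's behavior.
assert np.allclose(lora_forward(x), W @ x)
```

Because only A and B (2 * r * d parameters instead of d * d) receive gradients, LoRA pairs naturally with the layer-freezing approach also mentioned in the description.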

The course teaches how to fine-tune LLMs using Group Relative Policy Optimization (GRPO), a reinforcement learning method that improves model reasoning with minimal data. Learn reinforcement fine-tuning (RFT) concepts, reward design, and LLM-as-a-judge evaluation, and deploy training jobs on the Predibase platform.

  • Updated Jun 13, 2025
  • Jupyter Notebook
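The core idea behind GRPO is that no learned value model is needed: for each prompt, a group of completions is sampled and scored (e.g. by a reward function or an LLM judge), and each completion's advantage is its reward normalized against the group's mean and standard deviation. A minimal sketch of that group-relative normalization (function name and example rewards are illustrative, not from the course or the Predibase API):

```python
from statistics import mean, pstdev

def group_relative_advantages(rewards):
    """Advantage of each completion = (reward - group mean) / group std."""
    mu = mean(rewards)
    sigma = pstdev(rewards) or 1.0  # guard against all-equal rewards
    return [(r - mu) / sigma for r in rewards]

# Example: judge rewards for 4 sampled completions of a single prompt.
advs = group_relative_advantages([1.0, 0.0, 0.5, 0.5])
# Above-average completions get positive advantages, below-average negative,
# and the advantages sum to zero within the group.
```

These advantages then weight the policy-gradient update, reinforcing completions that outperform their own group rather than an absolute baseline.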
