Rankings
23 leaderboards tracking fine-tuned models and training datasets. Updated daily with position changes.

Popular
Overall most popular fine-tunes and datasets
Most Downloaded Fine-Tunes
The most downloaded fine-tuned models of all time on HuggingFace. Only genuine fine-tunes — no quantizations or format conversions.
Most Liked Fine-Tunes
Community favorites — the fine-tuned models with the most likes on HuggingFace.
Highest Quality Fine-Tunes
Top fine-tuned models ranked by our quality score — combining downloads, likes, documentation, and trending signals.
Top Uncensored Models
The most popular uncensored fine-tuned models. No alignment filters, no refusals — full creative freedom.
By Modality
Top models by type — LLM, VLM, Image Gen, TTS, Embeddings
Top LLM Fine-Tunes
Best fine-tuned large language models for text generation, chat, code, and reasoning.
Top VLM Fine-Tunes
Best fine-tuned vision-language models for image understanding, OCR, and visual reasoning.
Top Image Generation Fine-Tunes
Best fine-tuned image generation models — Stable Diffusion, FLUX, and other diffusion model fine-tunes.
Top TTS Fine-Tunes
Best fine-tuned text-to-speech models for voice generation and audio synthesis.
Top Embedding Fine-Tunes
Best fine-tuned embedding models for semantic search, RAG, and vector similarity.
By Training Method
Best models by how they were trained — LoRA, DPO, SFT, RLHF
Top LoRA Models
Best models trained with LoRA (Low-Rank Adaptation) — efficient fine-tuning that adapts large models with minimal compute.
Top QLoRA Models
Best models trained with QLoRA — quantized LoRA for fine-tuning large models on consumer GPUs.
Top DPO Models
Best models trained with Direct Preference Optimization — optimizing directly on human preference pairs, with no separate reward model required.
Top SFT Models
Best models trained with Supervised Fine-Tuning — the classic approach of training on instruction-response pairs.
Top RLHF Models
Best models trained with Reinforcement Learning from Human Feedback — the technique that made ChatGPT possible.
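The LoRA entry above describes adapting large models with minimal compute; the core idea is to freeze the base weight matrix and train only a low-rank update. A minimal numpy sketch of that idea (dimensions, scales, and names here are illustrative, not taken from any specific library):

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen base weight of one large layer (d_out x d_in) — not trained.
d_in, d_out, rank = 64, 64, 4
W = rng.normal(size=(d_out, d_in))

# LoRA trains only two small matrices; the effective weight is
# W + B @ A, so the update to W has rank at most `rank`.
A = rng.normal(scale=0.01, size=(rank, d_in))  # trainable
B = np.zeros((d_out, rank))                    # trainable, zero-initialized

def lora_forward(x):
    # Base path plus low-rank adapter path.
    return W @ x + B @ (A @ x)

x = rng.normal(size=d_in)

# Because B starts at zero, the adapter is a no-op before training:
assert np.allclose(lora_forward(x), W @ x)

# Trainable parameters: 512 for the adapter vs. 4096 for full fine-tuning.
print(A.size + B.size, W.size)
```

The parameter savings grow with layer size: the adapter costs `rank * (d_in + d_out)` parameters instead of `d_in * d_out`, which is why LoRA (and its quantized variant QLoRA, above) makes fine-tuning feasible on consumer GPUs.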
Top fine-tunes for each model family — Llama, Qwen, Mistral, Gemma
By Base Model
Top Llama Fine-Tunes
Best fine-tunes built on Meta's Llama family — Llama 2, Llama 3, Llama 3.1, and Llama 3.3.
Top Qwen Fine-Tunes
Best fine-tunes built on Alibaba's Qwen family — Qwen2, Qwen2.5, and Qwen3.
Top Mistral Fine-Tunes
Best fine-tunes built on Mistral AI's models — Mistral 7B, Mixtral, and Mistral Large.
Top Gemma Fine-Tunes
Best fine-tunes built on Google's Gemma family — Gemma 2, Gemma 3, and CodeGemma.
Top Phi Fine-Tunes
Best fine-tunes built on Microsoft's Phi family — small models that punch well above their weight.
Top DeepSeek Fine-Tunes
Best fine-tunes built on DeepSeek models — including DeepSeek-R1 and DeepSeek-Coder.
Datasets
Most popular and most used training datasets
Most Downloaded Datasets
The most downloaded training datasets of all time. The data that powers the best fine-tuned models.
Most Liked Datasets
Community favorites — the training datasets with the most likes from researchers and fine-tuners.
Most Used Datasets
The training datasets referenced by the most fine-tuned models in our catalog. The backbone of the fine-tuning ecosystem.