Top Phi Fine-Tunes

Best fine-tunes built on Microsoft's Phi family — small but powerful models optimized for efficiency.

Last updated April 3, 2026 · Updated daily

Phi-3.5-mini-instruct-GGUF by MaziyarPanahi holds the #1 position with 523.2K downloads, ahead of Phi-4-mini-instruct-GGUF at 116.1K.

The top 10 is dominated by MaziyarPanahi, TIGER-Lab, and lmstudio-community. This is the first snapshot; future updates will track position changes and emerging trends.

Downloads fall from 523.2K at #1 to 3.8K at #42, a roughly 137× spread that shows how heavily adoption concentrates at the top.

1. 🥇 Phi-3.5-mini-instruct-GGUF (Bronze 41) · MaziyarPanahi · Role-Play & Characters · from microsoft/Phi-3.5-mini-instruct · 523.2K downloads
2. 🥈 Phi-4-mini-instruct-GGUF (Bronze 35) · MaziyarPanahi · Role-Play & Characters · from microsoft/Phi-4-mini-instruct · 116.1K downloads
3. 🥉 phi-4-GGUF (Bronze 34) · MaziyarPanahi · Role-Play & Characters · from microsoft/phi-4 · 113.9K downloads
4. VLM2Vec-Full (Bronze 37) · TIGER-Lab · Code Generation · from microsoft/Phi-3.5-vision-instruct · 86.2K downloads
5. Phi-4-mini-reasoning-MLX-4bit (Bronze 30) · lmstudio-community · Code Generation · from microsoft/Phi-4-mini-reasoning · 51.9K downloads
6. HTML-Pruner-Phi-3.8B (Bronze 34) · zstanjj · Code Generation · from microsoft/Phi-3.5-mini-instruct · 51.3K downloads
7. task-20-microsoft-Phi-3-mini-4k-instruct (Bronze 26) · ycheng1024 · Chat & Assistants · from microsoft/Phi-3-mini-4k-instruct · 32.8K downloads
8. Phi-4-reasoning-plus-MLX-4bit (Bronze 26) · lmstudio-community · Code Generation · from microsoft/Phi-4-reasoning-plus · 31.9K downloads
9. Phi-3.5-mini-instruct-GGUF (Bronze 37) · bartowski · Code Generation · from microsoft/Phi-3.5-mini-instruct · 31.0K downloads
10. Phi-4-reasoning-vision-15B-GGUF (Bronze 30) · jamesburton · Math & Reasoning · from microsoft/Phi-4-reasoning-vision-15B · 28.6K downloads
11. Phi-4-mini-instruct-GGUF (Bronze 37) · unsloth · Code Generation · from microsoft/Phi-4-mini-instruct · 25.9K downloads
12. phi-4 (Bronze 37) · unsloth · Code Generation · from microsoft/phi-4 · 25.6K downloads
13. Phi-3-mini-4k-instruct-q4f16_1-MLC (Bronze 25) · mlc-ai · Other · from microsoft/Phi-3-mini-4k-instruct · 23.2K downloads
14. Phi-4-mini-instruct-4bit (Bronze 25) · mlx-community · Code Generation · from microsoft/Phi-4-mini-instruct · 20.6K downloads
15. phi-2-GGUF (Bronze 39) · TheBloke · Code Generation · from microsoft/phi-2 · 18.1K downloads
16. phi-4-GGUF (Bronze 35) · lmstudio-community · Code Generation · from microsoft/phi-4 · 17.7K downloads
17. Phi-3.5-mini-instruct-bnb-4bit (Bronze 31) · unsloth · Chat & Assistants · from microsoft/Phi-3.5-mini-instruct · 16.6K downloads
18. microsoft_Phi-4-mini-instruct-GGUF (Bronze 33) · bartowski · Chat & Assistants · from microsoft/Phi-4-mini-instruct · 15.8K downloads
19. phi-4-GGUF (Bronze 35) · bartowski · Code Generation · from microsoft/phi-4 · 14.9K downloads
20. phi-4-unsloth-bnb-4bit (Bronze 34) · unsloth · Code Generation · from microsoft/phi-4 · 12.6K downloads
21. Phi-4-mini-instruct-unsloth-bnb-4bit (Bronze 31) · unsloth · Code Generation · from microsoft/Phi-4-mini-instruct · 11.7K downloads
22. Phi-4-mini-reasoning-GGUF (Bronze 26) · lmstudio-community · Code Generation · from microsoft/Phi-4-mini-reasoning · 11.6K downloads
23. phi-4-AWQ (Bronze 26) · stelterlab · Code Generation · from microsoft/phi-4 · 9.8K downloads
24. Phi-3-mini-4k-instruct-AWQ (New) · solidrust · Code Generation · from microsoft/Phi-3-mini-4k-instruct · 9.1K downloads
25. Phi-4-mini-instruct (Bronze 31) · unsloth · Code Generation · from microsoft/Phi-4-mini-instruct · 8.7K downloads
26. InternVL2-4B (Bronze 33) · OpenGVLab · VLM · from merge:microsoft/Phi-3-mini-128k-instruct · 8.1K downloads
27. Phi-4-reasoning-plus-GGUF (Bronze 27) · lmstudio-community · Code Generation · from microsoft/Phi-4-reasoning-plus · 7.5K downloads
28. Phi-3.5-mini-instruct-q4f16_1-MLC (Bronze 25) · mlc-ai · Other · from microsoft/Phi-3.5-mini-instruct · 6.2K downloads
29. Flow-Judge-v0.1-AWQ (Bronze 26) · flowaicom · Code Generation · from microsoft/Phi-3.5-mini-instruct · 5.9K downloads
30. Phi-3.5-mini-instruct-q4f16_0-MLC (New) · mlc-ai · Other · from microsoft/Phi-3.5-mini-instruct · 5.7K downloads
31. Phi-4-mini-instruct-bnb-4bit (Bronze 26) · unsloth · Code Generation · from microsoft/Phi-4-mini-instruct · 5.6K downloads
32. Phi-3.5-mini-ITA (Bronze 28) · anakin87 · Code Generation · from microsoft/Phi-3.5-mini-instruct · 5.4K downloads
33. Artemide-3.5 (New) · ReDiX · Code Generation · from microsoft/Phi-3.5-mini-instruct · 5.0K downloads
34. phi-4-FP8-dynamic (New) · RedHatAI · Code Generation · from microsoft/phi-4 · 4.9K downloads
35. Phi-4-mini-reasoning-GGUF (Bronze 32) · unsloth · Code Generation · from microsoft/Phi-4-mini-reasoning · 4.9K downloads
36. Phi-3-mini-4k-instruct-int4-ov (New) · OpenVINO · Code Generation · from microsoft/Phi-3-mini-4k-instruct · 4.5K downloads
37. phi-4-bnb-4bit (Bronze 28) · unsloth · Code Generation · from microsoft/phi-4 · 4.3K downloads
38. Phi-4-mini-instruct-INT8-INT4 (New) · pytorch · Code Generation · from microsoft/Phi-4-mini-instruct · 4.3K downloads
39. NuExtract-1.5 (Bronze 35) · numind · Code Generation · from microsoft/Phi-3.5-mini-instruct · 4.2K downloads
40. Phi-3.5-mini-instruct (Bronze 31) · unsloth · Chat & Assistants · from microsoft/Phi-3.5-mini-instruct · 4.0K downloads
41. Phi-4-reasoning-plus-GGUF (Bronze 32) · unsloth · Code Generation · from microsoft/Phi-4-reasoning-plus · 3.8K downloads
42. Phi-3.1-mini-4k-instruct-GGUF (Bronze 30) · bartowski · Code Generation · from microsoft/Phi-3-mini-4k-instruct · 3.8K downloads

About the Top Phi Fine-Tunes Leaderboard

Best fine-tunes built on Microsoft's Phi family — small but powerful models optimized for efficiency. This leaderboard tracks the top 42 fine-tuned models ranked by downloads, with daily snapshots to monitor how the rankings evolve over time.

Unlike HuggingFace's default model hub, Fine-Tune Catalog filters out pure quantizations and format conversions (repos that simply re-package a model as GGUF, AWQ, GPTQ, or EXL2 without any new training). Fine-tuned models published in quantized formats are still included — what matters is whether new training happened, not the output format.
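The filter rule above can be sketched as a small heuristic. The helper below is hypothetical (the catalog's actual pipeline is not public); it combines the `base_model:finetune:*` / `base_model:quantized:*` HuggingFace tags with a naming-suffix fallback, and, per the rule above, lets an explicit fine-tune tag win even when the artifact ships in a quantized format:

```python
# Illustrative sketch of the fine-tune vs. pure-quantization filter.
# The tag prefixes follow HuggingFace conventions; the suffix list and
# function name are assumptions for this example, not the real pipeline.

FORMAT_SUFFIXES = ("-GGUF", "-AWQ", "-GPTQ", "-EXL2")

def is_genuine_finetune(repo_name: str, tags: list[str]) -> bool:
    """Return True if the repo appears to contain new training,
    not just a re-packaged base model."""
    # An explicit fine-tune tag wins, even for quantized artifacts.
    if any(t.startswith("base_model:finetune:") for t in tags):
        return True
    # A quantized-only tag marks a pure re-packaging.
    if any(t.startswith("base_model:quantized:") for t in tags):
        return False
    # No base_model tag: fall back to naming-pattern analysis.
    return not repo_name.endswith(FORMAT_SUFFIXES)
```

Note that a repo like a fine-tune's own GGUF release still passes, because the fine-tune tag is checked before the format suffix.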

Methodology

Rankings are based on total all-time download counts from HuggingFace. Downloads reflect real-world adoption — models and datasets that people actually use in production and research, not just stars or hype.

Rankings are snapshotted daily at 6:00 AM UTC. Position changes shown on the leaderboard compare the current snapshot to the previous day's snapshot. All data is sourced directly from the HuggingFace Hub API and processed through our classification pipeline, which uses tag analysis, model card parsing, and naming pattern detection to identify genuine fine-tunes.

Data Sources

  • HuggingFace Hub API — download counts, likes, trending scores, model metadata, and README/model cards
  • Model card parsing — training datasets, training method (LoRA, DPO, SFT, etc.), framework, hardware, and hyperparameters extracted from README files
  • Tag classification — fine-tune detection via `base_model:finetune:*` and `base_model:quantized:*` HuggingFace tags, plus naming pattern analysis

Who Is This For?

This leaderboard is designed for engineers who have already chosen a base model architecture and want to find the best fine-tuned versions available, saving them the time and compute of training from scratch.

Whether you're a beginner exploring what's possible with fine-tuned AI models or an experienced ML engineer looking for the best starting point for your next project, these rankings give you a data-driven way to find the highest quality models without having to wade through thousands of quantizations, format conversions, and abandoned repositories on HuggingFace.

Update Schedule

This leaderboard was last updated on April 3, 2026. Rankings are refreshed daily with the latest download counts, likes, and trending data from HuggingFace. Historical snapshots are preserved to track trends over time — you can see which models are gaining traction and which are losing momentum.