llm-speed

Mistral-Small-3.1-24B-Instruct vs Phi-4

Side-by-side decode tok/s, prefill tok/s, and TTFT for Mistral-Small-3.1-24B-Instruct and Phi-4, sourced from community-submitted runs of the llm-speed suite. Every number on this page links back to the run it came from.

Mistral-Small-3.1-24B-Instruct · Mistral · 24B
View Mistral-Small-3.1-24B-Instruct page →

Phi-4 · Microsoft · 14B
View Phi-4 page →

No overlapping Mistral-Small-3.1-24B-Instruct ↔ Phi-4 benchmarks yet.

We don't yet have a submitted run that covers both models in this comparison. Run the suite against either model to populate this page:

$ pipx install llm-speed && llm-speed bench

See also: Mistral-Small-3.1-24B-Instruct benchmarks · Phi-4 benchmarks · All hardware · All models · Methodology