llm-speed
wrapped · Apr 26, 2026
/r/r_bftqtkilvoe
your run

Qwen2.5-0.5B-Instruct-4bit

283 tok/s decode
M3 Pro (18-core GPU) + 36GB unified
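
Decode tok/s counts only generated tokens over decode wall time, with prompt prefill timed separately so it does not inflate the number. A minimal measurement sketch, assuming a hypothetical `step_fn` callable that stands in for the backend's per-token decode step:

```python
import time

def measure_decode_tps(step_fn, n_tokens: int) -> float:
    """Time n_tokens single-token decode steps and return tokens/sec.

    step_fn is a hypothetical stand-in for the backend's incremental
    decode call (one token per invocation). Prefill is assumed to have
    already run, so only decode time is counted.
    """
    start = time.perf_counter()
    for _ in range(n_tokens):
        step_fn()
    elapsed = time.perf_counter() - start
    return n_tokens / elapsed

# Toy usage: a fake step that sleeps ~3.5 ms per token, i.e. roughly
# the 283 tok/s reported above.
if __name__ == "__main__":
    print(f"{measure_decode_tps(lambda: time.sleep(0.0035), 200):.0f} tok/s")
```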
rank in tier
3/27 M3 Pro runs · top 11%
best workload
chat-short
where the rig flew
slowest workload
n/a (single-workload run)
backend
mlx
MLX is the fastest backend on Apple Silicon for dense models.
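
A run like this one can be reproduced in sketch form with the mlx-lm Python package (`pip install mlx-lm`); the Hugging Face repo name below follows the mlx-community naming convention and is an assumption, not taken from this record:

```python
from mlx_lm import load, generate

# Assumed repo name; this signed record does not specify the source repo.
model, tokenizer = load("mlx-community/Qwen2.5-0.5B-Instruct-4bit")

text = generate(
    model,
    tokenizer,
    prompt="Explain unified memory in one sentence.",
    max_tokens=128,
    verbose=True,  # prints prompt and generation tok/s after the run
)
```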
faster than
  • stable-code-instruct-3b-4bit on M3 Ultra (192 tok/s)
  • DeepSeek-Coder-V2-Lite-Instruct-4bit on M3 Ultra (168 tok/s)
  • gpt-oss-20b-MXFP4-Q4 on M3 Ultra (153 tok/s)
share
283 tok/s on M3 Pro running Qwen2.5-0.5B-Instruct-4bit — top 11% of M3 Pro runs. Signed benchmark via llm-speed.
https://llm-speed.com/wrapped/r_bftqtkilvoe

This is a shareable view of one signed llm-speed suite-v1 run. The numbers are not edited. The raw record has every workload, the public key, and the fingerprint hash.
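
In sketch form, checking such a record means hashing the raw record bytes against the fingerprint, then verifying the signature with the published public key. The field layout, the SHA-256 fingerprint, and the Ed25519 choice below are all assumptions about what a signed run could look like, not llm-speed's documented scheme:

```python
# Hypothetical verification sketch; consult llm-speed for the real format.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_record(record_bytes: bytes, fingerprint_hex: str,
                  public_key_bytes: bytes, signature: bytes) -> bool:
    # 1. Fingerprint check: SHA-256 of the raw record bytes (assumed).
    if hashlib.sha256(record_bytes).hexdigest() != fingerprint_hex:
        return False
    # 2. Signature check against the published public key.
    try:
        Ed25519PublicKey.from_public_bytes(public_key_bytes).verify(
            signature, record_bytes)
        return True
    except InvalidSignature:
        return False
```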