llm-speed
wrapped · Apr 28, 2026
/r/r_721b4bls_oq
your run

Qwen2.5-Coder-32B-Instruct-4bit

34.5
tok/s decode
M3 Ultra (60-core GPU) + 96GB unified
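
Decode tok/s is generation-phase throughput only: tokens emitted divided by the wall time spent decoding, after the prompt has been prefilled. A minimal sketch of that arithmetic (the function and the example numbers are illustrative, not llm-speed's measurement code):

```python
# Illustrative sketch: decode throughput = generated tokens / decode wall time,
# measured after prompt prefill. Not llm-speed's measurement code.
def decode_tok_per_s(generated_tokens: int, decode_seconds: float) -> float:
    """Generation-phase throughput, excluding prefill."""
    if decode_seconds <= 0:
        raise ValueError("decode_seconds must be positive")
    return generated_tokens / decode_seconds

# Made-up example: 2070 tokens generated in 60 s of decode time -> 34.5 tok/s.
print(round(decode_tok_per_s(2070, 60.0), 1))  # 34.5
```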
rank in tier
17/20 M3 Ultra runs · top 85%
best workload
chat-short · where the rig flew
slowest workload
n/a (single-workload run)
backend
mlx · MLX is the fastest backend on Apple Silicon for dense models.
faster than
  • Qwen2.5-7B-Instruct-4bit on M3 Pro · 30.5 tok/s
  • Llama-3.1-8B-Instruct-4bit on M3 Pro · 29.2 tok/s
  • deepseek-v3.2-exp on M3 Pro · 27.2 tok/s
share
34.5 tok/s on M3 Ultra running Qwen2.5-Coder-32B-Instruct-4bit — top 85% of M3 Ultra runs. Signed benchmark via llm-speed.
https://llm-speed.com/wrapped/r_721b4bls_oq

This is a shareable view of one signed llm-speed suite-v1 run. The numbers are not edited. The raw record has every workload, the public key, and the fingerprint hash.
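
Verifying such a record is straightforward in principle: recompute the fingerprint hash over the payload and check the signature against the embedded public key. A minimal sketch of that check, assuming an Ed25519 signature and a SHA-256 fingerprint over a canonical JSON payload with `payload`, `signature`, `public_key`, and `fingerprint` fields (the field names and the scheme are assumptions for illustration, not llm-speed's documented format):

```python
# Hypothetical verification sketch. The field names, canonicalization, and the
# Ed25519 + SHA-256 scheme are assumptions, not llm-speed's documented format.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_record(record: dict) -> bool:
    """Check the fingerprint hash and signature of one signed benchmark record."""
    payload_bytes = json.dumps(record["payload"], sort_keys=True).encode()

    # Fingerprint: SHA-256 of the canonical payload should match the stored hash.
    if hashlib.sha256(payload_bytes).hexdigest() != record["fingerprint"]:
        return False

    # Signature: verify the payload bytes against the embedded public key.
    public_key = Ed25519PublicKey.from_public_bytes(bytes.fromhex(record["public_key"]))
    try:
        public_key.verify(bytes.fromhex(record["signature"]), payload_bytes)
    except InvalidSignature:
        return False
    return True
```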