llm-speed
wrapped · Apr 26, 2026
/r/r_fui3jk1m4n0
your run

claude-haiku-4.5

45.1 tok/s · decode
M3 Pro (18-core GPU) + 36GB unified
rank in tier
10/27 M3 Pro runs · top 37%
best workload
chat-short · where the rig flew
slowest workload
single-workload run
backend
hosted-api
Hosted API: the number is your network + the provider's rig, not yours.
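
What that caveat means in practice: a minimal sketch of how a client-side decode tok/s figure is typically computed from a streamed response. Here stream_tokens is a hypothetical stand-in for whatever SDK iterator yields tokens (nothing llm-speed-specific); the point is that every inter-token gap includes network latency on top of the provider's actual decode speed.

    # Hedged sketch: client-side decode throughput from a streamed response.
    # `stream_tokens` is any iterable that yields tokens as they arrive.
    import time

    def decode_tok_per_s(stream_tokens):
        """Tokens per second between the first and last streamed token.

        Excludes time-to-first-token (prefill), but each inter-token gap
        still carries network latency, so a hosted-API number measures the
        network plus the provider's rig, not the local machine.
        """
        t_first = None
        count = 0
        for _ in stream_tokens:
            now = time.perf_counter()
            if t_first is None:
                t_first = now
            t_last = now
            count += 1
        if count < 2:
            raise ValueError("need at least two tokens to measure a rate")
        return (count - 1) / (t_last - t_first)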
faster than
  • Qwen2.5-32B-Instruct-4bit on M3 Ultra · 34.6 tok/s
  • Qwen2.5-Coder-32B-Instruct-4bit on M3 Ultra · 34.5 tok/s
  • Qwen3-32B-4bit on M3 Ultra · 34.4 tok/s
share
45.1 tok/s on M3 Pro running claude-haiku-4.5 — top 37% of M3 Pro runs. Signed benchmark via llm-speed.
https://llm-speed.com/wrapped/r_fui3jk1m4n0

This is a shareable view of one signed llm-speed suite-v1 run. Numbers are not edited. The raw record contains every workload, the public key, and the fingerprint hash.
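
For anyone who wants to check a record like this one, a minimal sketch of what verification could look like, assuming the raw record is JSON with hypothetical payload, signature, public_key, and fingerprint fields, an Ed25519 signature over canonical JSON, and a SHA-256 fingerprint. None of those field names or scheme choices are confirmed by llm-speed; they are assumptions for illustration only.

    # Hypothetical verification sketch. Field names ("payload", "signature",
    # "public_key", "fingerprint") and the Ed25519-over-canonical-JSON scheme
    # are assumptions, not llm-speed's documented format.
    import hashlib
    import json

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    def verify_record(record: dict) -> bool:
        # Canonicalize the payload so the bytes match what was signed.
        payload = json.dumps(
            record["payload"], sort_keys=True, separators=(",", ":")
        ).encode()
        # The fingerprint hash should be a digest of the same canonical bytes.
        if hashlib.sha256(payload).hexdigest() != record["fingerprint"]:
            return False
        key = Ed25519PublicKey.from_public_bytes(bytes.fromhex(record["public_key"]))
        try:
            key.verify(bytes.fromhex(record["signature"]), payload)
            return True
        except InvalidSignature:
            return False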