wrapped · May 12, 2026
/r/r_dnvwv68uo3z · your run
m on x
10.0 tok/s decode
rank in tier
5/7 · x runs · top 71%
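The top-percent figure appears to follow directly from the rank: 5/7 ≈ 71%. A minimal sketch of that arithmetic, assuming the card rounds rank over tier size to a whole percent (variable names are illustrative, not from the card):

    # Hypothetical reconstruction of the "top N%" badge:
    # percentile = rank / tier size, rounded to a whole percent.
    rank, tier_size = 5, 7
    top_pct = round(100 * rank / tier_size)  # 5/7 -> 71
    print(f"rank {rank}/{tier_size} · top {top_pct}%")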
best workload
chat-short · where the rig flew
slowest workload
— · single-workload run
backend
llama.cpp
llama.cpp is the universal backend: broad model support, modest speed ceiling.
faster than
- Qwen3-32B-4bit on M3 Pro: 7.2 tok/s
- Qwen2.5-32B-Instruct-4bit on M3 Pro: 7.1 tok/s
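For context, the speedups implied by these numbers are 10.0/7.2 ≈ 1.39× and 10.0/7.1 ≈ 1.41×. A minimal sketch computing them, using only the decode speeds shown on the card (the dictionary labels are illustrative):

    # Relative decode speed vs the two "faster than" entries (tok/s).
    mine = 10.0
    others = {
        "Qwen3-32B-4bit on M3 Pro": 7.2,
        "Qwen2.5-32B-Instruct-4bit on M3 Pro": 7.1,
    }
    for name, tok_s in others.items():
        print(f"{mine / tok_s:.2f}x faster than {name}")
    # 1.39x faster than Qwen3-32B-4bit on M3 Pro
    # 1.41x faster than Qwen2.5-32B-Instruct-4bit on M3 Pro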