Qwen3-32B-Instruct.Q4_K_M on smoke-host
Workload results
| Workload | Backend | Model | Decode (tok/s) | Prefill (tok/s) | TTFT (ms) | p50 (ms) | p95 (ms) |
|---|---|---|---|---|---|---|---|
| chat-short | llama.cpp@b1 | Qwen3-32B-Instruct.Q4_K_M | 10.00 | 100.0 | 50.0 | 20.0 | 30.0 |
Reproduce on your machine
Run the same workload and model on your own rig and get a signed result. The exact command that produced this run:
```
$ pipx install llm-speed && llm-speed bench --model 'Qwen3-32B-Instruct.Q4_K_M' --workload 'chat-short'
```
The run takes about a minute, and your number lands on the leaderboard, signed and linkable. See How it's measured for the exact methodology.
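For context, here is a minimal Python sketch of the conventional definitions behind the reported metrics. The field names, token counts, and formulas below are illustrative assumptions, not llm-speed's documented methodology.

```python
from dataclasses import dataclass


@dataclass
class RunTiming:
    prompt_tokens: int      # tokens in the prompt (processed during prefill)
    generated_tokens: int   # tokens produced during decode
    request_start_s: float  # wall-clock time the request was sent
    first_token_s: float    # wall-clock time the first generated token arrived
    last_token_s: float     # wall-clock time the last generated token arrived


def summarize(t: RunTiming) -> dict:
    """Conventional throughput/latency definitions (assumed, not confirmed)."""
    ttft_s = t.first_token_s - t.request_start_s
    # Prefill throughput: prompt tokens processed before the first token appears.
    prefill_tok_s = t.prompt_tokens / ttft_s
    # Decode throughput: generated tokens over the steady-state generation window.
    decode_tok_s = (t.generated_tokens - 1) / (t.last_token_s - t.first_token_s)
    return {
        "TTFT_ms": ttft_s * 1000.0,
        "prefill_tok_s": prefill_tok_s,
        "decode_tok_s": decode_tok_s,
    }


if __name__ == "__main__":
    # Hypothetical timings, not taken from the run above.
    timing = RunTiming(prompt_tokens=512, generated_tokens=128,
                       request_start_s=0.0, first_token_s=0.25, last_token_s=12.95)
    print(summarize(timing))
```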
Embed this run
Drop the badge into a README, blog post, or signature. Each render is a backlink to the signed result.
[r_r7fc52oxuvq](https://llm-speed.com/r/r_r7fc52oxuvq)
Related benchmarks
- More Qwen3-32B-Instruct.Q4_K_M benchmarks — every backend and rig that has run this model.
- More smoke-host LLM benchmarks — every model measured on this hardware.
Provenance
- Run ID: r_r7fc52oxuvq
- Fingerprint hash: benign
- Public key: dxe8poGDrhcbRES1JhyTWqrI3T6vHy6vS8bkjQpy5Iw=
- Received: 2026-05-12 23:07:27