# m on x
## Workload results
| Workload | Backend | Model | decode tok/s | prefill tok/s | TTFT | p50 | p95 |
|---|---|---|---|---|---|---|---|
| chat-short | llama.cpp | mQ4 | 10.00 | — | — | — | — |
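The table's throughput and latency columns can be derived from raw per-token timestamps. A minimal sketch of one plausible derivation — the metric definitions here are assumptions for illustration, not llm-speed's published formulas:

```python
from statistics import quantiles

def decode_metrics(request_start, token_times):
    """Derive throughput/latency figures from per-token completion times.

    Assumptions: TTFT is first-token delay; decode tok/s is the
    steady-state rate over inter-token gaps; p50/p95 are percentiles
    of those gaps (the page does not say what p50/p95 measure).
    """
    ttft = token_times[0] - request_start                 # time to first token
    gaps = [b - a for a, b in zip(token_times, token_times[1:])]
    decode_tok_s = len(gaps) / sum(gaps)                  # tokens per second
    cuts = quantiles(gaps, n=100)                         # 99 percentile cut points
    return {
        "ttft_s": ttft,
        "decode_tok_s": decode_tok_s,
        "p50_s": cuts[49],
        "p95_s": cuts[94],
    }

# One token every 100 ms after a 250 ms first-token delay:
times = [0.25 + 0.1 * i for i in range(11)]
m = decode_metrics(0.0, times)
```

With those synthetic timestamps the sketch yields a decode rate of about 10 tok/s, matching the shape of the row above.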
## Reproduce on your machine
Same workload, same model, signed on your rig. The exact command that produced this run:

```shell
pipx install llm-speed && llm-speed bench --model 'm' --workload 'chat-short'
```

It runs in about a minute, and your number lands on the leaderboard, signed and linkable.
## Embed this run
Drop the badge into a README, blog post, or signature. Each render is a backlink to the signed result.
[llm-speed run r_3r1vcq0s4vo](https://llm-speed.com/r/r_3r1vcq0s4vo)

## Related benchmarks
- More m benchmarks — every backend and rig that has run this model.
- More x LLM benchmarks — every model measured on this hardware.
## Provenance

- Run ID: r_3r1vcq0s4vo
- Fingerprint hash: abababababababababababababababababababababababababababababababab
- Public key: ybHrHkqpCU8lD980UoK+ELJPX7rrQcQrNxs6iJzCCRc=
- Received: 2026-05-12 22:47:19