Llama-3.1-8B-Instruct-4bit on M3 Pro (18-core GPU) + 36GB unified

suite suite-v1
cli 0.0.1-dev
signed S8711zXnps…
submitted Apr 28, 2026

Workload results
| Workload | Backend | Model | Decode (tok/s) | Prefill (tok/s) | TTFT (ms) | p50 (ms) | p95 (ms) |
|---|---|---|---|---|---|---|---|
| chat-short | mlx@0.31.3 | mlx-community/Llama-3.1-8B-Instruct-4bit | 29.20 | 203.3 | 669 | 34.1 | 36.0 |
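As a sanity check on the table, the decode rate and TTFT combine into a rough end-to-end latency estimate. The 200-token response length below is an assumed example, not part of the run:

```python
# Back-of-envelope latency model using the run's reported figures.
ttft_s = 0.669        # time to first token, from the table
decode_tok_s = 29.20  # steady-state decode throughput, from the table
output_tokens = 200   # assumed response length (not part of the run)

# Total response time ≈ TTFT + time to decode the remaining tokens.
total_s = ttft_s + (output_tokens - 1) / decode_tok_s
print(f"estimated total: {total_s:.2f} s")

# Decode throughput should be roughly the reciprocal of the median
# inter-token latency: 1000 / 29.20 ≈ 34.2 ms vs. the reported p50 of 34.1 ms.
implied_itl_ms = 1000 / decode_tok_s
print(f"implied inter-token latency: {implied_itl_ms:.1f} ms")
```

The close agreement between the implied and reported p50 inter-token latencies is a quick internal-consistency check on any decode-throughput claim.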
Embed this run

Drop the badge into a README, blog post, or signature. Each render is a backlink to the signed result.

[](https://llm-speed.com/r/r_h0-use1ypnb)

Related benchmarks
- More Llama-3.1-8B-Instruct-4bit benchmarks — every backend and rig that has run this model.
- More M3 Pro (18-core GPU) LLM benchmarks — every model measured on this hardware.
Provenance
- Run ID: r_h0-use1ypnb
- Fingerprint hash: a52e5dd258afe436
- Public key: S8711zXnpsbOS9F8EZCne0DE3jWiyeYAqEDECBzTVWk=
- Received: 2026-04-28 14:39:18
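A short fingerprint hash of this shape could be produced by truncating a digest of the canonical run configuration. This is a hypothetical sketch only: the field set, canonicalization, and truncation length are assumptions, not llm-speed.com's actual scheme:

```python
# Hypothetical fingerprint scheme: SHA-256 over a canonical JSON encoding
# of the run configuration, truncated to 16 hex characters. The fields and
# truncation length are assumptions for illustration.
import hashlib
import json

run_config = {
    "model": "mlx-community/Llama-3.1-8B-Instruct-4bit",
    "backend": "mlx@0.31.3",
    "hardware": "M3 Pro (18-core GPU) + 36GB unified",
    "workload": "chat-short",
}

# Sorted keys and compact separators make the encoding deterministic,
# so the same config always yields the same fingerprint.
canonical = json.dumps(run_config, sort_keys=True, separators=(",", ":"))
fingerprint = hashlib.sha256(canonical.encode()).hexdigest()[:16]
print(fingerprint)
```

Canonicalizing before hashing matters: without sorted keys and fixed separators, two semantically identical configs could hash to different fingerprints.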