# llama-3.3-70b-instruct
1 workload result across 1 hardware configuration.
Fastest known config: 43.2 decode tok/s on M3 Pro (18-core GPU) + 36GB unified via hosted-api (run r_tthgrsb7zn5).
## M3 Pro (18-core GPU) + 36GB unified
| Workload | Backend | Quant | decode tok/s | prefill tok/s | TTFT | Run |
|---|---|---|---|---|---|---|
| chat-short | hosted-api | — | 43.20 | 124.7 | 890 ms | r_tthgrsb7zn5 |