llm-speed

llama-3.3-70b-Instruct-4bit

1 workload result across 1 hardware configuration.

Fastest known config

16.8 decode tok/s

on M3 Ultra (60-core GPU) + 96 GB unified memory, via the MLX backend

M3 Ultra (60-core GPU) + 96 GB unified memory

| Workload   | Backend    | Quant | Decode tok/s | Prefill tok/s | TTFT     | Run           |
|------------|------------|-------|--------------|---------------|----------|---------------|
| chat-short | mlx@0.31.3 | 4-bit | 16.78        | 25.09         | 5,420 ms | r_sx3a4y9n-m4 |
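The three metrics in the table combine into a simple latency model: prefill processes the prompt before the first token appears (driving TTFT), and decode streams out the remaining tokens. A rough sketch, assuming TTFT is dominated by prefill time (the leaderboard does not report the prompt length, so the token count below is inferred, not measured):

```python
# Back-of-envelope latency model for the run above. PREFILL_TPS,
# DECODE_TPS, and TTFT_S come from the table; prompt/output token
# counts are illustrative assumptions.
def total_latency_s(prompt_tokens, output_tokens, prefill_tps, decode_tps):
    """Estimate end-to-end latency: prefill the prompt, then decode the output."""
    return prompt_tokens / prefill_tps + output_tokens / decode_tps

PREFILL_TPS = 25.09  # prefill tok/s from the table
DECODE_TPS = 16.78   # decode tok/s from the table
TTFT_S = 5.42        # 5,420 ms TTFT from the table

# If TTFT were pure prefill time, the chat-short prompt would be roughly:
est_prompt_tokens = TTFT_S * PREFILL_TPS  # about 136 tokens

# Estimated wall time to answer that prompt with 200 generated tokens:
latency = total_latency_s(est_prompt_tokens, 200, PREFILL_TPS, DECODE_TPS)
```

In practice TTFT also includes tokenization and scheduling overhead, so the inferred prompt length is an upper bound on what prefill alone would explain.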
