llm-speed

llama-3.3-70b-instruct

1 workload result across 1 hardware configuration.

Fastest known config

43.2 decode tok/s

on an M3 Pro (18-core GPU) with 36 GB unified memory, via the hosted-api backend.

M3 Pro (18-core GPU) + 36GB unified

| Workload   | Backend    | Quant | Decode tok/s | Prefill tok/s | TTFT   | Run           |
|------------|------------|-------|--------------|---------------|--------|---------------|
| chat-short | hosted-api |       | 43.2         | 124.7         | 890 ms | r_tthgrsb7zn5 |
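The TTFT and prefill figures above are related: if time-to-first-token is dominated by prompt processing, then TTFT is roughly the prompt length divided by prefill throughput. A minimal sketch of that relationship, assuming a hypothetical prompt length of about 111 tokens (inferred from the numbers in this run, not reported by it):

```python
# Hypothetical sketch: relating TTFT to prefill throughput.
# Assumes TTFT is dominated by prefill; the 111-token prompt
# length is inferred from this run's numbers, not reported by it.

def estimated_ttft_ms(prompt_tokens: float, prefill_tok_per_s: float) -> float:
    """Approximate time-to-first-token from prefill throughput alone."""
    return prompt_tokens / prefill_tok_per_s * 1000.0

# 111 tokens at 124.7 prefill tok/s comes out near the reported 890 ms.
print(round(estimated_ttft_ms(111, 124.7)))
```

Real TTFT also includes network and scheduling overhead on a hosted API, so the estimate is a lower bound rather than an exact prediction.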
