llm-speed

gpt-4o

1 workload result across 1 hardware configuration.

Fastest known config

83.8 decode tok/s

on M3 Pro (18-core GPU) + 36GB unified via the hosted-api backend

M3 Pro (18-core GPU) + 36GB unified

Workload     Backend     Quant  decode tok/s  prefill tok/s  TTFT    Run
chat-short   hosted-api  -      83.80         131.7          805 ms  r_ks8veotzymi
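Assuming the usual decomposition of LLM response latency (time to first token, then steady-state decoding), the figures in this run can be combined into an end-to-end estimate. The helper below is a hypothetical sketch, not part of the llm-speed site:

```python
def estimate_latency(ttft_ms: float, decode_tok_s: float, output_tokens: int) -> float:
    """Estimate end-to-end response time in seconds as
    time-to-first-token plus steady-state decode time."""
    return ttft_ms / 1000 + output_tokens / decode_tok_s

# Using the chat-short run above: TTFT 805 ms, 83.80 decode tok/s.
# A 256-token reply would take roughly 3.86 s.
print(round(estimate_latency(805, 83.80, 256), 2))
```

This ignores network jitter and any per-request overhead beyond what TTFT already captures, so treat it as a lower bound on observed wall-clock latency.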

gpt-4o on hardware