
CodeLlama-13b-Python-4bit-MLX

2 workload results across 2 hardware configurations.

M3 Pro (18-core GPU) + 36GB unified

| Workload | Backend | Quant | decode tok/s | prefill tok/s | TTFT | Run |
| --- | --- | --- | --- | --- | --- | --- |
| chat-short | mlx@0.31.3 | | no data | no data | no data | r_js2ve1vf_jj |

M3 Ultra (60-core GPU) + 96GB unified

| Workload | Backend | Quant | decode tok/s | prefill tok/s | TTFT | Run |
| --- | --- | --- | --- | --- | --- | --- |
| chat-short | mlx@0.31.3 | | no data | no data | no data | r_63gq-4zixk0 |
