llm-speed

Qwen3-32B-Instruct.Q4_K_M on smoke-host

smoke-host
suite suite-v1
cli smoke-test
signed dxe8poGDrh…
submitted May 12, 2026

Workload results

| Workload   | Backend      | Model                     | Quant  | decode tok/s | prefill tok/s | TTFT    | p50     | p95     |
|------------|--------------|---------------------------|--------|--------------|---------------|---------|---------|---------|
| chat-short | llama.cpp@b1 | Qwen3-32B-Instruct.Q4_K_M | Q4_K_M | 10.0 tok/s   | 100.0 tok/s   | 50.0 ms | 20.0 ms | 30.0 ms |

Reproduce on your machine

Same workload, same model, signed at your rig. The exact command that produced this run:

$ pipx install llm-speed && llm-speed bench --model 'Qwen3-32B-Instruct.Q4_K_M' --workload 'chat-short'

Runs in about a minute. Your number lands on the leaderboard, signed and linkable. See How it's measured.

Embed this run

Drop the badge into a README, blog post, or signature. Each render is a backlink to the signed result.

[![llm-speed: 10.0 tok/s on smoke-host (Qwen3-32B-Instruct.Q4_K_M)](https://llm-speed.com/badge/r_r7fc52oxuvq.svg)](https://llm-speed.com/r/r_r7fc52oxuvq)


Provenance

Run ID: r_r7fc52oxuvq
Fingerprint hash: benign
Public key: dxe8poGDrhcbRES1JhyTWqrI3T6vHy6vS8bkjQpy5Iw=
Received: 2026-05-12 23:07:27
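The public key above is 44 base64 characters, which decodes to 32 raw bytes, the size of an Ed25519 public key. A minimal sketch that decodes and sanity-checks it (assuming the key is Ed25519; the signature scheme and signed payload format are not shown on this page):

```python
import base64

# Public key as published in the Provenance section
PUBKEY_B64 = "dxe8poGDrhcbRES1JhyTWqrI3T6vHy6vS8bkjQpy5Iw="

# Decode from standard base64 to raw bytes
raw = base64.b64decode(PUBKEY_B64)

# Ed25519 public keys are exactly 32 bytes
assert len(raw) == 32, f"unexpected key length: {len(raw)}"

# Hex form, useful for comparing against other fingerprint displays
print(raw.hex())
```

Verifying an actual run signature would additionally need the signature bytes and the canonical byte serialization of the result, neither of which is published here, plus an Ed25519 library such as `cryptography` or PyNaCl.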