llm-speed

Qwen2.5-Coder-7B-Instruct vs DeepSeek-Coder-V2-Lite-Instruct

Side-by-side decode tok/s, prefill tok/s, and TTFT (time to first token) for Qwen2.5-Coder-7B-Instruct and DeepSeek-Coder-V2-Lite-Instruct, sourced from community-submitted runs of the llm-speed suite. Every number on this page links back to the run it came from.
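The three metrics compared on this page can be sketched from a single generation's timestamps. This is an illustrative calculation only, with made-up numbers; llm-speed's exact measurement procedure is described on its Methodology page and may differ (e.g. in how the first decoded token is attributed):

```python
def speed_metrics(t_start, t_first_token, t_done, prompt_tokens, gen_tokens):
    """Illustrative TTFT / prefill / decode computation from one request.

    t_start, t_first_token, t_done are wall-clock times in seconds.
    """
    ttft = t_first_token - t_start              # time to first token (s)
    prefill_tps = prompt_tokens / ttft          # prompt tokens processed per second
    # Exclude the first token (produced during prefill) from the decode rate.
    decode_tps = (gen_tokens - 1) / (t_done - t_first_token)
    return ttft, prefill_tps, decode_tps

# Hypothetical run: 1000-token prompt, 501 generated tokens.
ttft, prefill, decode = speed_metrics(0.0, 0.25, 10.25, 1000, 501)
# ttft = 0.25 s, prefill = 4000 tok/s, decode = 50 tok/s
```

Decode tok/s dominates perceived throughput on long outputs, while TTFT and prefill tok/s dominate responsiveness on long prompts.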

Qwen2.5-Coder-7B-Instruct (Qwen · 7B) · View Qwen2.5-Coder-7B-Instruct page →

DeepSeek-Coder-V2-Lite-Instruct (DeepSeek · 16B-A2.4B) · View DeepSeek-Coder-V2-Lite-Instruct page →

No overlapping Qwen2.5-Coder-7B-Instruct ↔ DeepSeek-Coder-V2-Lite-Instruct benchmarks yet.

We don't yet have a submitted run that covers both sides of this comparison. Run the suite on either model to populate this page:

$ pipx install llm-speed && llm-speed bench

See also: Qwen2.5-Coder-7B-Instruct benchmarks · DeepSeek-Coder-V2-Lite-Instruct benchmarks · All hardware · All models · Methodology