llm-speed

Series

State of the local LLM

Monthly snapshots of what the fastest local LLM actually is, measured under one reproducible workload suite (llm-speed suite-v1). Each issue answers the question "as of YYYY-MM, the canonical answer is X," with deltas versus the previous month and a citation for every claim.

Issues

  • May 2026

    Inaugural issue. Fastest local 70B-class result, fastest dense Apple Silicon decode, and the first numbers under suite-v1 with the dual-domain trust chain in place.

Want the next issue in your inbox? There is no mailing list yet: subscribe to the public RSS feed (coming soon) or follow github.com/meadow-kun/llm-speed for release notifications.