We ran Gemma 4 26B and Qwen 3.6 35B-A3B head-to-head on the same server, with the same quantization and the same test protocol. Gemma 4 is 3.7× faster at 32k context and 7.2× faster at 128k. The gap widens with context, and the reason reveals something important about model selection for long-context workloads.
We put Qwen 3.6 35B-A3B on a developer laptop and on a dual-GPU server. The server's speed advantage grows from 2.4× to 5.3× as context length increases, and the real bottleneck turns out not to be compute.
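To get intuition for why decoding stops being compute-bound, here is a minimal roofline-style sketch. It assumes single-stream decoding, where each step must stream all active weights plus the entire KV cache through memory; every number in it (active parameter count, KV bytes per token, peak TFLOPS, bandwidth) is an illustrative placeholder, not a measurement from these runs.

```python
# Back-of-the-envelope roofline check for single-stream decoding: compare how
# long one decode step spends on arithmetic vs. on streaming bytes from
# memory. All hardware and model figures are illustrative placeholders.

GB = 1e9

def step_times(ctx_len,
               active_params=3e9,        # "A3B": ~3B active params (assumed)
               bytes_per_weight=2,       # ~2 bytes/weight after quantization
               kv_bytes_per_token=100e3, # KV cache footprint per token (assumed)
               peak_tflops=100,          # dense fp16 throughput, placeholder
               mem_bw_gbps=800):         # memory bandwidth, placeholder
    """Return (compute_seconds, memory_seconds) for one decode step."""
    kv_bytes = kv_bytes_per_token * ctx_len
    # Bytes moved per step: every active weight once, plus the whole KV cache.
    mem_time = (active_params * bytes_per_weight + kv_bytes) / (mem_bw_gbps * GB)
    # FLOPs per step: ~2 per active weight, plus ~2 per KV element for attention.
    flops = 2 * active_params + 2 * (kv_bytes / bytes_per_weight)
    compute_time = flops / (peak_tflops * 1e12)
    return compute_time, mem_time

for ctx in (4_096, 32_768, 131_072):
    c, m = step_times(ctx)
    print(f"ctx={ctx:>7}: compute {c*1e3:6.2f} ms, memory {m*1e3:6.2f} ms "
          f"-> memory-bound by {m/c:4.0f}x")
```

Under these assumptions, each decode step performs only a couple of FLOPs per byte it reads, while modern accelerators can sustain hundreds, so memory bandwidth rather than arithmetic throughput sets the token rate at every context length, and the KV-cache term grows linearly with context.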