
vLLM 0.17 Ships FlashAttention 4 and Live MoE Scaling
vLLM v0.17.0 adds FlashAttention 4, elastic expert parallelism for live MoE rescaling, full Qwen3.5 support, and a performance-mode flag, all in 699 commits from 272 contributors.

Kimi K2.5 leads every coding benchmark, but Qwen3.5-35B-A3B delivers 87-93% of that performance at 3-4x lower cost and runs on a single consumer GPU. Here is the full breakdown.

Zhipu AI's GLM-5 is a 744B MoE model with 40B active parameters, trained on 100K Huawei Ascend chips, scoring 77.8% on SWE-bench and 50 on the Artificial Analysis Intelligence Index - MIT licensed.

Zhipu AI's 744B open-source model GLM-5 was trained entirely on Huawei Ascend chips and now competes with GPT-5.2 and Claude Opus on major benchmarks.

Two Chinese open-weight trillion-parameter MoE models with ~32B active parameters each - DeepSeek V4 bets on cost and context length, Kimi K2.5 bets on Agent Swarm and verified benchmarks.

A pre-release comparison of DeepSeek V3.2 and V4 - examining the generational leap from 671B text-only to a trillion-parameter natively multimodal model with 1M context.