
DeepSeek V4 Drops Next Week - 1 Trillion Parameters on Chinese Chips
DeepSeek will release V4, a natively multimodal trillion-parameter model with a 1M token context window, in the first week of March - optimized for Huawei Ascend chips, not Nvidia.

DeepSeek has denied Nvidia and AMD pre-release access to its upcoming V4 model while granting Huawei and domestic Chinese chipmakers a multi-week optimization window, signaling a strategic pivot toward building a parallel AI software ecosystem on Chinese silicon.

Meta has agreed to rent Google's Ironwood TPUs through Google Cloud to train next-generation AI models, making Google its third major chip supplier in a single month, alongside Nvidia and AMD.

Three AI chip startups - MatX, SambaNova, and Axelera - raised a combined $1.1 billion in one week, signaling an acceleration in the race to break Nvidia's GPU dominance.

A senior Trump administration official has confirmed that DeepSeek trained its upcoming AI model on Nvidia's most advanced Blackwell chips at an Inner Mongolia data center, despite US export controls banning the hardware from reaching China.

Toronto startup Taalas raises $169M to build custom chips that permanently etch AI model weights into transistors, claiming 73x faster inference than Nvidia's H200 at a fraction of the power.