DeepSeek Launches Next-Generation AI Models with Domestic Chip Support
Hangzhou-based AI startup DeepSeek released two versions of its next-generation foundational AI model on Friday.
"This was achieved with world-leading cost efficiency." – DeepSeek
The lineup includes the V4-pro, with 1.6 trillion parameters, and the smaller V4-flash, with 284 billion parameters. Both models feature a significantly expanded context window of 1 million tokens, up from the 128,000 tokens supported by the previous flagship model.
Domestic Chip Ecosystem Embraces the New Models
Following the release, Huawei announced that its Ascend chips and supernode systems would support V4 models for inference. AI chipmaker Cambricon Technologies also confirmed compatibility with the new models.
Analysts at Huatai Securities noted that the release "explicitly mentions compatibility with domestic chips" and predicted "significant improvement and widespread adoption of domestic graphics cards this year."
Technical Considerations
- Hardware Requirements: The V4-pro model is too large to run on consumer-grade hardware, limiting its deployment to enterprise or cloud infrastructure.
- Global Impact: The technical report detailing V4's architecture and training techniques is expected to be useful for global AI developers, offering insights into cost-efficient scaling.