Ant Group Unveils Ling-1T: Trillion-Parameter AI Model

Ant Group releases Ling-1T, a trillion-parameter open-source AI model with state-of-the-art coding and reasoning capabilities, alongside a broader model family covering reasoning and multimodal tasks.

by HowAIWorks Team
ai, ant-group, ling-1t, ai-models, open-source, coding, reasoning, multimodal, artificial-intelligence, china-ai, trillion-parameter-model, chinese-ai, moe-architecture, fp8-training, open-source-llm

Introduction

The Chinese fintech giant Ant Group has unveiled the Ling AI model family and launched Ling-1T, a trillion-parameter language model that sets new standards in code generation, mathematical reasoning, and logical problem-solving. This comprehensive release introduces three distinct model series designed to address different AI capabilities while maintaining the company's commitment to open-source development and accessible artificial general intelligence (AGI).

Quick Facts

  • Release Date: October 13, 2025
  • Developer: Ant Group (inclusionAI)
  • Model Scale: 1 trillion total parameters, ~50B active per token
  • License: MIT (Free & Open Source)
  • Top Performance: 70.42% on AIME 2025, #1 on ArtifactsBench (open-source)
  • Availability: Hugging Face & ModelScope

Ling AI Model Family Overview

Three Distinct Series

Ant Group's new Ling family (also known as BaiLing) comprises three specialized series:

Ling Series:

  • Focus: Efficient language processing with implicit reasoning (non-thinking models)
  • Use cases: Content generation, code development, language translation, text processing
  • Architecture: Optimized for efficient reasoning without explicit thinking mode
  • Flagship model: Ling-1T with one trillion parameters

Ring Series:

  • Focus: Advanced reasoning capabilities
  • Breakthrough: World's first open-source trillion-parameter reasoning model (Ring-1T-preview)
  • Use cases: Complex problem-solving, logical reasoning, mathematical computations
  • Previous release: Ring-1T-preview launched in September 2025

Ming Series:

  • Focus: Multimodal processing capabilities
  • Capabilities: Processing and understanding multiple data types (text, images, audio, video)
  • Use cases: Cross-modal understanding, multimedia analysis, integrated AI applications
  • Integration: Combines language and visual understanding

Key Highlights of the Release

Breakthrough Performance

Ling-1T delivers strong results on verified benchmarks:

  • AIME 2025: Achieved 70.42% accuracy on this competition-mathematics benchmark while averaging more than 4,000 output tokens per problem, comparable to industry-leading models
  • ArtifactsBench: Ranks #1 among open-source models for front-end generation and aesthetic understanding
  • BFCL V3: Demonstrates approximately 70% tool-call accuracy with minimal instruction tuning, showing strong emergent capabilities
  • Efficient Architecture: 1T total parameters but only approximately 50B active per token (1/32 MoE ratio), achieving a 15%+ speedup with FP8 training (see the sketch below)
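
For intuition, here is a back-of-the-envelope sketch of how a Mixture-of-Experts model with a 1/32 activation ratio ends up activating roughly 50B of its 1T parameters per token. The split between always-active (shared) weights and expert weights is an assumption chosen for illustration, not a published detail of Ling-1T's architecture.

```python
# Rough MoE active-parameter arithmetic (illustrative numbers only).
TOTAL_PARAMS = 1_000e9       # ~1 trillion parameters in total
EXPERT_ACTIVATION = 1 / 32   # fraction of expert parameters routed to per token
SHARED_PARAMS = 20e9         # assumed always-active weights (attention, embeddings, etc.)

expert_params = TOTAL_PARAMS - SHARED_PARAMS
active_per_token = SHARED_PARAMS + expert_params * EXPERT_ACTIVATION

print(f"Active parameters per token: ~{active_per_token / 1e9:.0f}B")
# -> ~51B, consistent with the reported ~50B active out of 1T total
```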

Technical Innovations

The release introduces several architectural breakthroughs:

  • Ling Scaling Law: Purpose-built architecture for trillion-scale efficiency under 1e25–1e26 FLOPs
  • Evo-CoT Optimization: Evolutionary Chain-of-Thought for progressive reasoning enhancement
  • LPO Training: Linguistics-Unit Policy Optimization for precise sentence-level alignment
  • FP8 Training: Largest known FP8-trained foundation model with less than 0.1% loss deviation (illustrated below)
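
As a rough illustration of the numerics involved, the snippet below round-trips a tensor through PyTorch's FP8 (e4m3) format to show how coarse 8-bit floating point is. It is a conceptual sketch, not Ant Group's training stack; production FP8 recipes typically add scaling factors and higher-precision master weights to keep the training-loss deviation small.

```python
import torch

# Conceptual FP8 illustration (not Ling-1T's actual training code):
# cast FP32 values to 8-bit floating point and back, then measure the
# round-trip error that an FP8 training recipe has to tolerate.
x = torch.randn(1024, 1024)                # reference values in FP32
x_fp8 = x.to(torch.float8_e4m3fn)          # requires PyTorch 2.1+ for FP8 dtypes
x_back = x_fp8.to(torch.float32)

rel_err = (x_back - x).abs().mean() / x.abs().mean()
print(f"Mean relative round-trip error: {rel_err.item():.2%}")
```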

What Makes This Release Significant

First Complete Open-Source AI Family at Trillion Scale

This release marks a milestone in open-source AI development:

  • Unprecedented Scale: First complete family of trillion-parameter models (Ling, Ring, Ming) all open-sourced under MIT license
  • No Cost Barriers: Eliminates licensing costs that typically run into millions for comparable proprietary models
  • Production-Ready: All models deployed and available via API, with comprehensive deployment support (vLLM, SGLang)
  • World First: Ring-1T-preview was the world's first open-source trillion-parameter reasoning model (September 2025)

Open Source Commitment

Community Access

Ant Group has made Ling-1T and Ring-1T-preview fully open source:

  • Model weights: Free access to complete model parameters
  • Training details: Transparency about training methodology
  • Implementation details: Comprehensive documentation for deployment
  • Community support: Active engagement with open-source community
  • No restrictions: Minimal barriers to commercial and research use

AGI as a Public Good

Ant Group's Chief Technology Officer, He Zhenyu, articulates the company's vision:

"We believe that artificial general intelligence (AGI) should be a public good—a shared achievement for humanity's intelligent future. We are committed to building practical and inclusive AGI services that benefit everyone, which requires continuous advancement of technology."

Industry Impact

The open-source release has significant implications:

  • Research acceleration: Enables widespread experimentation and innovation
  • Cost reduction: Eliminates licensing costs for trillion-parameter models
  • Collaboration: Facilitates global cooperation on AI advancement
  • Accessibility: Makes advanced AI capabilities available to smaller organizations
  • Transparency: Promotes understanding of large-scale AI systems

Deployment and Accessibility

Ling-1T and the entire model family are immediately available:

  • Open Source: Full model weights on Hugging Face and ModelScope under MIT license
  • API Access: Available through third-party providers with OpenAI-compatible API format
  • Self-Hosting: Full support for vLLM and SGLang deployment (example request below)
  • No Restrictions: Free for commercial and research use
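
Because the weights are published and the serving stack is OpenAI-compatible, a self-hosted endpoint can be queried with the standard OpenAI Python client. The sketch below assumes a locally running vLLM (or SGLang) server; the endpoint URL and model id are placeholders, so check the model card on Hugging Face or ModelScope for the exact repository name and hardware requirements.

```python
# Minimal sketch of querying a self-hosted Ling-1T endpoint through an
# OpenAI-compatible API. Assumes a server has been started first, e.g.:
#   vllm serve inclusionAI/Ling-1T --tensor-parallel-size <N>
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # local vLLM/SGLang endpoint (assumed)
    api_key="EMPTY",                      # self-hosted servers typically ignore the key
)

response = client.chat.completions.create(
    model="inclusionAI/Ling-1T",          # assumed model id; use the name your server registers
    messages=[
        {"role": "user", "content": "Write a Python function that checks whether a number is prime."},
    ],
    temperature=0.7,
)
print(response.choices[0].message.content)
```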

Competitive Positioning

Global AI Landscape

Ling-1T positions Ant Group among AI leaders:

  • Chinese AI innovation: Demonstrates China's advancement in large language models
  • Open-source leadership: Sets example for open AI development
  • Technical capability: Proves competitiveness with Western AI models
  • Scale achievement: Joins elite group of trillion-parameter models
  • Performance parity: Matches or exceeds comparable international models

Market Differentiation

Key advantages of the Ling family:

  • Specialized series: Targeted models for different use cases
  • Open access: No cost barriers for researchers and developers
  • Comprehensive capabilities: Coverage across reasoning, language, and multimodal tasks
  • Strong performance: Competitive results on industry benchmarks
  • Community-driven: Open development model encouraging collaboration

Impact on the AI Ecosystem

Democratizing Advanced AI

The release significantly lowers barriers to entry for advanced AI:

  • Cost Savings: Organizations can now deploy trillion-parameter models without licensing fees that typically cost millions
  • Research Acceleration: Academic institutions gain access to frontier-level AI for research and education
  • Startup Enablement: Smaller companies can compete with tech giants using the same AI infrastructure
  • Global Access: Models optimized for deployment in diverse environments, from cloud to edge devices

Advancing Open-Source AI

This release strengthens the open-source AI movement:

  • Transparency: Full access to model architecture, training methodology, and weights
  • Reproducibility: Complete documentation enables verification and improvement by the community
  • Innovation: Creates foundation for derivative works and specialized applications
  • Collaboration: Facilitates global cooperation on AI safety and capability research

Industry Reaction and Context

Market Impact

The release comes at a critical time for the AI industry:

  • Competitive Pressure: Adds significant pressure on proprietary models from OpenAI, Anthropic, and Google to justify premium pricing
  • Open-Source Leadership: Positions China's AI ecosystem as a leader in open-source development, following DeepSeek's success
  • Performance Parity: Demonstrates that open-source models can match closed-source alternatives on key benchmarks
  • Ecosystem Growth: Expected to spawn numerous derivative applications and specialized fine-tunes

Strategic Timing

Ant Group's release aligns with broader industry shifts:

  • Post-Scaling Laws Era: Focus on efficiency and specialized architectures rather than pure parameter scaling
  • AGI Democratization: Growing movement toward AGI as a public good, not proprietary advantage
  • Multi-Model Strategies: Trend toward model families (Ling/Ring/Ming) rather than single flagship models
  • Production Readiness: Emphasis on deployment infrastructure alongside model release

Conclusion

Ant Group's release of the Ling AI model family and the trillion-parameter Ling-1T model represents a significant milestone in open-source artificial intelligence. By combining state-of-the-art performance with complete open access, Ant Group is advancing its vision of AGI as a public good while demonstrating competitive technical capabilities.

Key Takeaways:

  • Complete Model Family: First open-source release of an entire trillion-parameter AI family (Ling, Ring, Ming) under MIT license
  • Verified Performance: 70.42% on AIME 2025, #1 on ArtifactsBench (open-source), approximately 70% on BFCL V3—matching proprietary models
  • Technical Innovation: Largest FP8-trained model, novel Evo-CoT optimization, and LPO alignment methodology
  • Immediate Availability: Production-ready deployment via Hugging Face with vLLM/SGLang support and third-party APIs
  • World Firsts: Ring-1T-preview as the first open-source trillion-parameter reasoning model; Ling-1T as the largest known FP8-trained foundation model
  • Market Impact: Disrupts pricing models of proprietary AI, enabling startups and researchers to access frontier-level capabilities
  • AGI Philosophy: Embodies Ant Group's vision of AGI as a public good, not proprietary advantage

This release marks a pivotal moment in AI democratization, proving that open-source models can achieve parity with closed alternatives while eliminating cost barriers that have limited innovation to well-funded organizations.

Related Reading

Want to learn more about AI models and their capabilities? Explore our AI models catalog, check out our AI fundamentals courses, or browse our glossary of AI terms for deeper understanding.

Frequently Asked Questions

What is Ling-1T?
Ling-1T is Ant Group's flagship AI model with one trillion parameters. It demonstrates state-of-the-art performance in code generation, software development, competitive-level math, and logical reasoning, achieving 70.42% accuracy on AIME 2025.

What models make up the Ling family?
The Ling family includes three series: Ling (efficient language models without explicit thinking mode), Ring (thinking models with advanced reasoning), and Ming (multimodal models).

Is Ling-1T open source?
Yes. Ant Group has open-sourced Ling-1T along with Ring-1T-preview, making these trillion-parameter models freely available to the AI community.

How does Ling-1T perform on AIME 2025?
On the American Invitational Mathematics Examination (AIME) 2025, Ling-1T achieved 70.42% accuracy while using an average of over 4,000 output tokens per problem, matching industry-leading models.

Continue Your AI Journey

Explore our lessons and glossary to deepen your understanding.