Introduction
Meta Platforms is preparing a major technological offensive in the AI race, developing two next-generation artificial intelligence models designed to challenge the dominance of OpenAI and Google. According to a report from The Wall Street Journal on December 18, 2025, the social media giant is working on Mango, a high-stakes image- and video-focused AI model, and Avocado, its next major text-based large language model (LLM).
This development represents Meta's first major output from Meta Superintelligence Labs (MSL), a specialized division created after a major restructuring in the summer of 2025. The ambitious roadmap was detailed during an internal Q&A session featuring Meta's Chief AI Officer Alexandr Wang and Chief Product Officer Chris Cox, signaling Meta's commitment to moving beyond the incremental updates of the current Llama family.
The Strategic Vision
Beyond Llama: A New Generation
Meta's development of Mango and Avocado marks a significant strategic shift. While the Llama family has been successful in the open-source AI community, these new models represent Meta's attempt to compete directly with closed-source frontier models from OpenAI and Google.
Key Strategic Elements:
- First-half 2026 timeline: Aggressive development schedule targeting competitive release
- Dual-model approach: Specialized models for different modalities (image/video vs. text)
- World models research: Early-stage development of AI systems that understand physical reality
- Coding focus: Addressing a traditional weakness in Meta's model portfolio
The Investment Behind the Strategy
Meta's commitment to this initiative is substantial. The company invested over $14 billion for a 49% stake in Scale AI, cementing Alexandr Wang's role as the primary architect of Meta's post-Llama strategy. This investment demonstrates CEO Mark Zuckerberg's willingness to spend aggressively to close the gap with rivals.
Meta Superintelligence Labs
Organizational Structure
Meta Superintelligence Labs was created following a major restructuring in the summer of 2025. This specialized division represents Meta's focused effort to develop next-generation AI capabilities:
Leadership:
- Alexandr Wang: Chief AI Officer, 28-year-old founder of Scale AI
- Chris Cox: Chief Product Officer, involved in strategic planning
- Mark Zuckerberg: CEO, personally orchestrated high-profile recruiting efforts
Talent Acquisition:
- Poached more than 20 researchers from OpenAI
- Hired Wang to oversee the effort
- Significant investment in building a world-class AI research team
First Major Output
Mango and Avocado represent the first major output from MSL, establishing the division as a key player in Meta's AI strategy. The development timeline and capabilities suggest these models will be significantly more advanced than previous Meta offerings.
Mango: Image and Video AI Model
Capabilities and Focus
Mango is Meta's next-generation model focused on image and video generation and understanding:
Expected Capabilities:
- Advanced image generation and editing
- Video creation and analysis
- Multimodal understanding combining visual and textual information
- Integration with world models research
Strategic Positioning:
- Designed to compete with models like OpenAI's Sora and Google's Veo video generation models
- High-stakes development indicating Meta's commitment to visual AI
- Part of broader multimodal AI strategy
Market Context
The image and video AI market has seen rapid growth, with models like:
- OpenAI's Sora: Advanced video generation
- Google's Gemini: Strong multimodal capabilities
- Stable Diffusion: Open-source image generation
Mango represents Meta's attempt to establish a competitive position in this space, moving beyond the text-focused Llama models.
Avocado: Advanced Text LLM
Coding-Focused Development
Avocado is Meta's next major text-based large language model, with a specific focus on addressing previous weaknesses:
Key Focus Areas:
- Advanced coding capabilities: Traditional weak point for Meta models
- Competitive performance: Designed to match or exceed OpenAI and Google models
- Production readiness: Built for real-world applications
Development Priorities:
- Coding performance competitive with the latest GPT and Gemini models
- Enhanced reasoning capabilities
- Better tool use and agentic workflows
Competitive Landscape
Avocado enters a competitive market dominated by:
- OpenAI's GPT models: Industry-leading coding capabilities
- Google's Gemini: Strong reasoning and multimodal understanding
- Anthropic's Claude: Advanced safety and coding features
Meta's focus on coding capabilities suggests Avocado will prioritize developer use cases and agentic applications.
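To make "tool use and agentic workflows" concrete, the sketch below shows the generic loop most coding-focused LLMs are used in: the model proposes a tool call, the host program executes it, and the result is fed back until the model returns a final answer. Everything here (model_step, run_python, the message format) is a hypothetical stand-in for illustration only; Avocado's actual interface has not been made public.

```python
# Minimal sketch of an agentic tool-use loop. The model_step() stub stands in
# for whatever API a coding-focused LLM would expose; nothing here reflects a
# real Meta API.
import json

def run_python(code: str) -> str:
    """Toy 'tool': evaluate a single Python expression (no builtins exposed)."""
    return str(eval(code, {"__builtins__": {}}))

TOOLS = {"run_python": run_python}

def model_step(messages: list) -> dict:
    """Stand-in for a model call: request one tool invocation, then answer."""
    if len(messages) == 1:
        return {"tool": "run_python", "arguments": json.dumps({"code": "2 + 2"})}
    return {"final": f"The answer is {messages[-1]['content']}."}

def agent_loop(user_prompt: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_steps):
        action = model_step(messages)
        if "final" in action:            # model decided it is done
            return action["final"]
        tool = TOOLS[action["tool"]]     # dispatch the requested tool
        args = json.loads(action["arguments"])
        messages.append({"role": "tool", "content": tool(**args)})
    return "Step limit reached."

print(agent_loop("What is 2 + 2?"))      # -> The answer is 4.
```

In a real deployment, the model_step stub would be an API call to whichever model is in use, and the tool registry would expose sandboxed interpreters, file access, or search rather than a single toy evaluator.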
World Models: The Future Vision
Understanding Physical Reality
During the internal session, Wang noted that Meta is in the early stages of developing world models. This represents a significant shift in AI development:
Current LLM Approach:
- Predict the next token (roughly, the next word) in a sequence; see the sketch after this list
- Text-based understanding
- Limited physical world comprehension
World Models Vision:
- Understand physical reality through visual information
- Process vast amounts of visual data
- Comprehend how the physical world works
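For readers unfamiliar with the "predict the next word" framing, the toy sketch below illustrates the objective current LLMs are trained on. It is a simple bigram counter, not a neural network, and is purely illustrative of the idea rather than Meta's method; a world model, as Wang describes it, would instead have to learn from large amounts of visual data how scenes and objects evolve, which no amount of next-word counting captures.

```python
# Toy illustration of the "predict the next word" objective behind current LLMs.
# Real models learn token probabilities with a neural network; this sketch just
# counts bigrams in a tiny corpus to make the idea concrete.
from collections import Counter, defaultdict

corpus = "the ball rolls down the hill and the ball stops".split()

# Count how often each word follows each other word.
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation observed in the corpus."""
    return next_counts[word].most_common(1)[0][0]

print(predict_next("the"))   # -> "ball"
```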
Implications
World models could enable:
- Robotics applications: AI systems that understand physical environments
- Autonomous systems: Better navigation and interaction with the real world
- Scientific research: Understanding physical processes and phenomena
- Augmented reality: Enhanced AR experiences with better world understanding
This research direction aligns with Meta's focus on the metaverse and AR/VR technologies, suggesting potential integration with Meta's hardware and software ecosystem.
The Scale AI Connection
Strategic Investment
Meta's investment in Scale AI ranks among the largest AI-related deals to date:
Investment Details:
- Over $14 billion for a 49% stake
- Alexandr Wang becomes Meta's Chief AI Officer
- Scale AI's expertise in data labeling and AI infrastructure
Strategic Value:
- Access to Scale AI's data labeling capabilities
- Infrastructure for training large models
- Expertise in building production AI systems
- Established relationships with AI research community
Leadership Transition
Alexandr Wang's transition from Scale AI founder to Meta's Chief AI Officer represents a significant talent acquisition. At 28 years old, Wang brings:
- Experience building AI infrastructure companies
- Understanding of production AI systems
- Connections in the AI research community
- Vision for next-generation AI development
Competitive Positioning
Against OpenAI
Meta's development of Mango and Avocado positions the company to compete with OpenAI across multiple fronts:
Text Models:
- Avocado vs. GPT-5 and subsequent OpenAI models
- Focus on coding capabilities to match OpenAI's strengths
- Competitive reasoning and tool use
Multimodal Models:
- Mango vs. Sora and other OpenAI visual models
- Advanced image and video generation
- Integration with text capabilities
Against Google
Google's Gemini family represents another key competitor:
Model Comparison:
- Avocado vs. Google's flagship Gemini models
- Mango vs. Gemini's multimodal capabilities
- World models research vs. Google's multimodal understanding
Strategic Differences:
- Meta's focus on the open-source community (via Llama)
- Google's integration with search and productivity tools
- Different approaches to multimodal AI
Timeline and Expectations
First-Half 2026 Release
Both Mango and Avocado are expected to be released in the first half of 2026, according to the internal Q&A session. This timeline suggests:
Development Status:
- Active development underway
- Significant progress made since MSL formation
- Aggressive timeline to compete with rivals
Market Timing:
- Potential release alongside OpenAI's next wave of model updates
- Competition with Google's Gemini updates
- Opportunity to establish market position
What to Expect
Based on the information revealed:
Mango:
- Advanced image and video generation
- Multimodal understanding capabilities
- Integration with Meta's platforms
Avocado:
- Strong coding performance
- Competitive reasoning capabilities
- Production-ready features
World Models:
- Early-stage research
- Long-term vision for physical world understanding
- Potential applications in AR/VR and robotics
Implications for the AI Industry
Market Dynamics
Meta's aggressive investment and development signal:
Increased Competition:
- Three major players (OpenAI, Google, Meta) competing for leadership
- Significant capital investment in AI development
- Talent war intensifying with poaching from competitors
Innovation Acceleration:
- Faster model development cycles
- Focus on specialized capabilities (coding, multimodal)
- Research into next-generation AI approaches (world models)
Open Source vs. Closed Source
Meta's history with Llama suggests potential open-source releases, but Mango and Avocado may follow a different strategy:
Considerations:
- Competitive pressure may favor closed-source models
- Investment recovery may require commercial licensing
- Balance between open-source community and competitive advantage
Technical Challenges and Opportunities
Development Challenges
Building models that compete with OpenAI and Google presents several challenges:
Technical Hurdles:
- Matching or exceeding coding capabilities
- Advanced multimodal understanding
- Efficient training and inference
- Early-stage world models research
Resource Requirements:
- Significant compute infrastructure
- Large-scale data collection and processing
- Talent acquisition and retention
- Research and development investment
Opportunities
Meta's unique position offers several advantages:
Platform Integration:
- Integration with Facebook, Instagram, WhatsApp
- AR/VR hardware ecosystem (Quest, Ray-Ban Meta)
- Massive user base for testing and deployment
Research Capabilities:
- FAIR (Fundamental AI Research) expertise
- Access to vast amounts of user-generated content
- Infrastructure for large-scale training
Conclusion
Meta's development of Mango and Avocado represents a significant escalation in the AI race, with the company making substantial investments to compete with OpenAI and Google. The strategic investment in Scale AI, the hiring of Alexandr Wang, and the formation of Meta Superintelligence Labs demonstrate Meta's commitment to becoming a leader in next-generation AI.
Key Takeaways
- Dual-model strategy: Mango for image/video, Avocado for text with coding focus
- Aggressive timeline: First-half 2026 release target
- Substantial investment: Over $14 billion for a 49% stake in Scale AI
- Talent acquisition: Poached 20+ researchers from OpenAI, hired Scale AI founder
- Future vision: Early-stage world models research for physical reality understanding
What This Means
For the AI industry, Meta's entry into high-stakes model development increases competition and accelerates innovation. For developers, the potential release of advanced coding-focused models (Avocado) and multimodal capabilities (Mango) offers new tools and possibilities. For Meta, this represents a critical strategic move to establish leadership in AI and support its metaverse and AR/VR ambitions.
The success of Mango and Avocado will depend on their ability to match or exceed the capabilities of existing frontier models while offering unique advantages. With the first-half 2026 timeline, the AI community will soon see whether Meta's substantial investment and strategic vision translate into competitive AI models that can challenge the current market leaders.
Sources
- Yahoo Finance - Meta bets on 'Mango' and 'Avocado' in AI race
- The Wall Street Journal - Meta Is Developing a New AI Image and Video Model Code-Named 'Mango'
- Meta AI Research
- Scale AI