Introduction
In the high-stakes race for AI supremacy, Microsoft is hitting unexpected roadblocks. While NVIDIA continues to dominate the AI chip market with its cutting-edge GPUs, Microsoft's in-house AI chip efforts are reportedly facing major delays.
With its much-anticipated Braga chip pushed back and internal performance concerns rising, the tech giant is now scrambling to launch a stopgap chip by 2027.
But as rivals like Google, Meta, and Amazon double down on custom AI silicon, the big question is: can Microsoft catch up—or is it already falling behind in the AI arms race?
Brief 5-Point Overview
- Microsoft has pushed its Braga chip from 2025 to 2026.
- Successor designs (Braga-R, Clea) also face delays.
- An interim chip, Maia 280, is now targeted for 2027 delivery.
- Microsoft claims up to 30% better performance per watt than NVIDIA's projected 2027 chips.
- The custom-silicon race is heating up among AWS, Google, and Meta.
Background: Chasing Chip Independence
Microsoft has been building its own Maia AI processors since November 2023, when it unveiled the Maia 100, to cut cloud costs and tighten supply control. Its goal: supply in-house chips to Azure data centers rather than buy NVIDIA GPUs at steep prices.
Amazon, Google, and Meta already deploy custom AI silicon. Microsoft aimed to join them with its Braga chip by 2025.
Braga Chip Delays Shake Roadmap
Unexpected design tweaks, team churn, and testing hurdles have pushed Braga's mass-production start back six months, to 2026, The Information reports.
Industry watchers worry that the slower rollout also risks delaying Braga-R and Clea, the planned successors meant to match NVIDIA's Blackwell and Rubin GPUs.
Interim Solution: Maia 280 for 2027
To bridge the gap, Microsoft now plans an interim AI chip, code-named Maia 280, for delivery in 2027. The design reportedly combines two Braga dies in a single package, chiplet-style, to boost energy efficiency.
Executives forecast up to 30% better performance per watt than NVIDIA's projected 2027 GPUs, and they believe this stopgap will keep Azure competitive as custom silicon becomes table stakes.
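To make that projection concrete, here is a minimal back-of-the-envelope sketch in Python of what a 30% performance-per-watt advantage would mean at equal throughput or at equal power. All numbers in it (the 1,000-unit throughput and 1,000 W baseline) are hypothetical placeholders chosen for easy arithmetic, not figures from Microsoft or NVIDIA.

```python
# Back-of-the-envelope sketch of what "30% better performance per watt" implies.
# All numbers below are hypothetical placeholders, not Microsoft or NVIDIA figures.

def perf_per_watt(throughput, power_watts):
    """Performance per watt: useful work delivered per watt consumed."""
    return throughput / power_watts

# Hypothetical baseline accelerator: 1,000 units of throughput at 1,000 W.
baseline_ppw = perf_per_watt(1_000, 1_000)       # 1.0 perf/W

# A chip claiming +30% perf-per-watt over that baseline.
claimed_ppw = baseline_ppw * 1.30                # 1.3 perf/W

# At equal throughput, required power drops to ~77% of the baseline...
power_at_equal_throughput = 1_000 / claimed_ppw  # ~769 W instead of 1,000 W

# ...or, at equal power, throughput rises by the full 30%.
throughput_at_equal_power = claimed_ppw * 1_000  # 1,300 units

print(f"Baseline perf/W:           {baseline_ppw:.2f}")
print(f"Claimed perf/W (+30%):     {claimed_ppw:.2f}")
print(f"Power at equal throughput: {power_at_equal_throughput:.0f} W")
print(f"Throughput at equal power: {throughput_at_equal_power:.0f}")
```

At data-center scale, an efficiency gap of that size translates directly into power and cooling costs, which helps explain why performance per watt, rather than peak throughput alone, is the figure Microsoft is choosing to highlight.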
Timeline & Performance Comparison
| Chip | Original Target | Revised Target | Perf-per-Watt vs. NVIDIA |
|---|---|---|---|
| Braga | 2025 | 2026 | Below Blackwell GPUs |
| Maia 280 | – | 2027 | +30% (projected) |
| Braga-R | 2026 | TBD | TBD |
| Clea | 2027 | TBD | TBD |
Industry Impact: The Custom-Silicon Arms Race
While Microsoft adapts, other cloud providers speed ahead. Google deployed its TPU v6e inference chips in 2025. Amazon readies Trainium 3 for late 2025.
Meta doubles down on its MTIA chips, aiming to ship twice as many units by 2026. Each player seeks to optimize performance-per-dollar and unlock AI cost savings. NVIDIA still leads with its Blackwell GPUs in active deployments and the upcoming Rubin architecture.
Conclusion: A Pivotal Stopgap
Microsoft’s Maia 280 represents a crucial stopgap to regain lost ground. It underlines the risks of internal chip design and the need for flexible roadmaps.
If Maia 280 hits its performance targets and ships on time, Microsoft can strengthen its Azure AI offering and reduce its reliance on NVIDIA. Otherwise, it may lean on outside suppliers longer than planned.




