Introduction
In the race to make artificial intelligence faster, lighter, and more accessible, one startup just made a bold leap. Multiverse Computing, a Spanish quantum-inspired AI company, has raised €189 million (about $217 million) to tackle one of the industry's biggest bottlenecks: the massive size of large language models (LLMs).
Today’s LLMs need massive compute and energy, limiting them to data centers. Multiverse plans to change that—compressing models by up to 95% without losing performance.
With the $217M round led by Bullhound Capital, the startup aims to bring ChatGPT-level AI to phones, wearables, and edge devices.
This could spark a shift to fast, efficient, and always-on AI anywhere.
5 Key Takeaways at a Glance
Massive Funding Boost: €189 million raised, making Multiverse Spain’s largest AI startup.
Compression Breakthrough: Cuts LLM sizes by up to 95% without sacrificing accuracy.
Cost Reduction: Cuts AI operational costs by as much as 80%.
Quantum-Inspired Tech: Uses ideas from quantum physics and machine learning but requires no quantum computers.
Broad Model Support: Supports major open-source models including Meta’s Llama, China’s DeepSeek, and France’s Mistral.
Why LLM Compression Matters More Than Ever
Large language models like OpenAI’s GPT and Meta’s Llama have transformed natural language processing. However, their massive size demands enormous computing power and energy. This limits their use to tech giants and well-funded organizations.
Multiverse’s breakthrough tackles this challenge head-on. Their CompactifAI technology slashes the size of LLMs by up to 95%, maintaining near-original performance while significantly reducing compute requirements and costs.
The Quantum Edge: Inspiration Without Quantum Hardware
Multiverse blends quantum physics principles with machine learning in a novel way.
Their approach mimics quantum systems through tensor network methods, enabling efficient compression without needing expensive quantum computers.
Analogy: Imagine you have a massive library filled with many redundant books. Instead of discarding knowledge, you reorganize and condense the library into a sleek, compact bookshelf that holds all the essential information.
This is exactly what CompactifAI does with AI models—keeping knowledge intact but in a smaller, smarter package.
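CompactifAI's tensor-network method is proprietary, but the underlying idea of low-rank factorization can be illustrated with the simplest member of that family: truncated SVD. The sketch below is purely illustrative (the matrix sizes, rank, and noise level are made-up assumptions, not Multiverse's actual technique); it shows how replacing one large weight matrix with two thin factors shrinks parameter count while preserving most of the information.

```python
import numpy as np

# Illustrative sketch only -- not CompactifAI's actual algorithm.
# We compress a toy "layer" whose information mostly lives in a few
# components, the situation low-rank methods exploit.

rng = np.random.default_rng(0)

# Build a 1024x1024 weight matrix from a rank-32 signal plus small noise.
rank_true = 32
W = rng.standard_normal((1024, rank_true)) @ rng.standard_normal((rank_true, 1024))
W += 0.01 * rng.standard_normal((1024, 1024))

# Factorize and keep only the top-k singular components.
k = 32
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :k] * s[:k]   # 1024 x k factor (columns scaled by singular values)
B = Vt[:k, :]          # k x 1024 factor

params_before = W.size
params_after = A.size + B.size
compression = 1 - params_after / params_before

# Relative reconstruction error of the compressed layer.
err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)

print(f"params: {params_before} -> {params_after} ({compression:.0%} smaller)")
print(f"relative error: {err:.3f}")
```

On this toy matrix the two factors hold roughly 94% fewer parameters than the original while reconstructing it almost exactly, which is the intuition behind "smaller package, same knowledge." Real LLM weights are far less cleanly low-rank, which is why more sophisticated tensor-network decompositions are needed in practice.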
Industry Impact and Market Position
This funding round cements Multiverse’s position as a leader in AI innovation. It puts the company alongside Europe’s AI heavyweights like Mistral, Aleph Alpha, and Synthesia.
With compressed models already available on platforms like AWS Marketplace, Multiverse is making AI more accessible to businesses around the world.
| Metric | Detail |
|---|---|
| Funding Raised | €189 million (~$217 million) |
| Compression Rate | Up to 95% size reduction |
| Cost Savings | Up to 80% reduction in operational costs |
| Supported LLMs | Meta's Llama, DeepSeek, Mistral |
| Investors | Bullhound Capital, HP, Forgepoint, Toshiba |
Real-World Benefits
- Lower AI Entry Barriers: Small and medium businesses can afford powerful LLMs.
- Edge Computing Enabled: Run complex models on devices like phones, drones, or Raspberry Pi boards.
- Sustainability Gains: Reduced compute demand lowers carbon footprint.
- Geopolitical Relevance: Strengthens Europe’s AI sovereignty amid global tech competition.
- Wide Adoption: Integration with cloud services boosts scalability and ease of use.
What’s Next for Multiverse?
With €189 million in funding, Multiverse plans to expand its technology to compress more LLMs, optimize AI deployments, and scale globally. The startup currently holds over 160 patents and serves 100+ customers, including multinational corporations in energy, finance, and manufacturing.

This investment signals a broader industry shift. As AI models grow larger, compression and efficiency technologies like CompactifAI will play a pivotal role in democratizing AI access and sustainability.
Conclusion
Multiverse Computing’s compression technology stands at the crossroads of AI scalability and sustainability. By cutting LLM size by up to 95% and operational costs by up to 80%, it offers a smarter way to deploy AI models, much like condensing a sprawling mansion into a sleek, efficient home.
In a world increasingly driven by AI, this leap not only lowers financial barriers but also opens AI doors to countless new users, devices, and use cases. For the AI ecosystem, Multiverse’s progress marks a strategic turning point.
For more such news and views, choose Techovedas! Your semiconductor Guide and Mate!