Multiverse Secures $217 Million to Revolutionize AI with 95% Smaller Models

Multiverse raised $217M to shrink AI models by up to 95%, making them faster, cheaper, and ready for edge devices.

Introduction

In the race to make artificial intelligence faster, lighter, and more accessible, one startup just made a bold leap. Multiverse Computing, a Spanish quantum-inspired AI company, has raised $217 million to tackle one of the industry’s biggest bottlenecks: the massive size of large language models (LLMs).

Today’s LLMs need massive compute and energy, limiting them to data centers. Multiverse plans to change that—compressing models by up to 95% without losing performance.

With $217M (€189 million) in funding led by Bullhound Capital, the startup aims to bring ChatGPT-level AI to phones, wearables, and edge devices.

This could spark a shift to fast, efficient, and always-on AI anywhere.

5 Key Takeaways at a Glance

Massive Funding Boost: €189 million (~$217 million) raised, positioning Multiverse as Spain’s largest AI startup.

Compression Breakthrough: Cuts LLM sizes by up to 95% without sacrificing accuracy.

Cost Reduction: Cuts AI operational costs by as much as 80%.

Quantum-Inspired Tech: Uses ideas from quantum physics and machine learning but requires no quantum computers.

Broad Model Support: Supports major open-source models including Meta’s Llama, China’s DeepSeek, and France’s Mistral.

Why LLM Compression Matters More Than Ever

Large language models like OpenAI’s GPT and Meta’s Llama have transformed natural language processing. However, their massive size demands enormous computing power and energy. This limits their use to tech giants and well-funded organizations.

Multiverse’s breakthrough tackles this challenge head-on. Their CompactifAI technology slashes the size of LLMs by up to 95%, maintaining near-original performance while significantly reducing compute requirements and costs.
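To put those headline numbers in perspective, here is a rough back-of-the-envelope sketch in Python. The 70-billion-parameter model and 16-bit precision are illustrative assumptions, not Multiverse’s published figures:

```python
# Back-of-the-envelope illustration of what a 95% size cut means in practice.
# All model sizes here are hypothetical examples, not Multiverse's numbers.

def compressed_footprint_gb(params_billion: float,
                            bytes_per_param: int = 2,
                            size_reduction: float = 0.95) -> float:
    """Approximate model size after compression, in gigabytes."""
    full_gb = params_billion * 1e9 * bytes_per_param / 1e9
    return full_gb * (1.0 - size_reduction)

# A hypothetical 70B-parameter model stored in 16-bit precision:
# 140 GB uncompressed -> about 7 GB after a 95% reduction,
# small enough to fit on a high-end phone or single-board computer.
print(compressed_footprint_gb(70))
```

The point of the arithmetic: a model that once demanded a multi-GPU server shrinks to a footprint that consumer edge hardware can hold.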


The Quantum Edge: Inspiration Without Quantum Hardware

Multiverse blends quantum physics principles with machine learning in a novel way.

Their approach mimics quantum systems through tensor network methods, enabling efficient compression without needing expensive quantum computers.
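As a loose illustration of the underlying idea, the sketch below compresses a synthetic weight matrix with a truncated SVD, a simple cousin of the tensor-network factorizations described above. This is not CompactifAI’s actual algorithm; the matrix sizes and rank are assumptions chosen only to make the effect visible:

```python
# Minimal sketch: replace a large weight matrix with a truncated
# factorization that keeps most of the information in far fewer parameters.
# Real tensor-network methods (e.g. matrix product operators) generalize
# this idea; this toy example is only an illustration.
import numpy as np

rng = np.random.default_rng(0)

# A synthetic "weight matrix" with hidden low-rank structure.
W = rng.standard_normal((512, 16)) @ rng.standard_normal((16, 512))

# Truncated SVD: keep only the top-r singular components.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
r = 16
W_small = (U[:, :r] * s[:r]) @ Vt[:r, :]

original_params = W.size                                  # 512 * 512 = 262,144
compressed_params = U[:, :r].size + r + Vt[:r, :].size    # 16,400
error = np.linalg.norm(W - W_small) / np.linalg.norm(W)

# Roughly 6% of the original parameter count, with negligible error
# because the structure of W was genuinely low-rank.
print(compressed_params / original_params, error)
```

In real networks the weights are not exactly low-rank, so practical methods trade a small accuracy loss for the size reduction; the article’s “up to 95% without sacrificing accuracy” claim is about how favorable that trade can be made.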

Analogy: Imagine you have a massive library filled with many redundant books. Instead of discarding knowledge, you reorganize and condense the library into a sleek, compact bookshelf that holds all the essential information.

This is exactly what CompactifAI does with AI models—keeping knowledge intact but in a smaller, smarter package.


Industry Impact and Market Position

This funding round cements Multiverse’s position as a leader in AI innovation. It puts the company alongside Europe’s AI heavyweights like Mistral, Aleph Alpha, and Synthesia.

With compressed models already available on platforms like AWS Marketplace, Multiverse is making AI more accessible to businesses around the world.

Metric           | Detail
-----------------|-------------------------------------------
Funding Raised   | €189 million (~$217 million)
Compression Rate | Up to 95% size reduction
Cost Savings     | Up to 80% reduction in operational costs
Supported LLMs   | Meta’s Llama, DeepSeek, Mistral
Investors        | Bullhound Capital, HP, Forgepoint, Toshiba

Real-World Benefits

  • Lower AI Entry Barriers: Small and medium businesses can afford powerful LLMs.
  • Edge Computing Enabled: Run complex models on devices like phones, drones, or Raspberry Pi boards.
  • Sustainability Gains: Reduced compute demand lowers carbon footprint.
  • Geopolitical Relevance: Strengthens Europe’s AI sovereignty amid global tech competition.
  • Wide Adoption: Integration with cloud services boosts scalability and ease of use.

What’s Next for Multiverse?

With €189 million in funding, Multiverse plans to expand its technology to compress more LLMs, optimize AI deployments, and scale globally. The startup currently holds over 160 patents and serves 100+ customers, including multinational corporations in energy, finance, and manufacturing.

This investment signals a broader industry shift. As AI models grow larger, compression and efficiency technologies like CompactifAI will play a pivotal role in democratizing AI access and sustainability.


Conclusion

Multiverse Computing’s innovative compression technology stands at the crossroads of AI scalability and sustainability. By cutting LLM size by 95% and costs by 80%, it offers a smarter way to deploy AI models — similar to condensing a sprawling mansion into a sleek, efficient home.

In a world increasingly driven by AI, this leap not only lowers financial barriers but also opens AI doors to countless new users, devices, and use cases. For the AI ecosystem, Multiverse’s progress marks a strategic turning point.

For more such news and views, choose Techovedas: your semiconductor guide and mate!

Kumar Priyadarshi

Kumar joined IISER Pune after qualifying IIT-JEE in 2012. In his fifth year, he travelled to Singapore for his master’s thesis, which yielded a research paper in ACS Nano. Kumar then joined GlobalFoundries in Singapore as a process engineer working at the 40 nm node. Later, as a senior scientist at IIT Bombay, Kumar led the team that built India’s first memory chip with the Semiconductor Lab (SCL).

