Is the LLM Cost Crisis Over? Spanish Startup Multiverse Computing Releases a Free "Quantum-Inspired" Compressed AI Model
Large language models have a fundamental problem: they're simply too large for most companies to actually deploy. While OpenAI, Anthropic, and Google continue racing to build ever-bigger frontier models, a Spanish AI startup called Multiverse Computing is taking a radically different approach — and it might just transform how enterprises think about artificial intelligence infrastructure.
Multiverse Computing, a Basque AI company reportedly approaching a valuation of over €1.5 billion, has just released HyperNova 60B 2602 — a free, compressed large language model available on Hugging Face that delivers frontier-model performance at half the size and cost. The breakthrough technology behind it, called CompactifAI, uses principles inspired by quantum computing to compress AI models without sacrificing accuracy or capability. And if recent funding rumors are true, this "soonicorn" (a startup approaching unicorn status) is about to become one of Europe's most valuable AI companies.
What Is Multiverse Computing? Meet Spain's Rising AI Powerhouse:
Multiverse Computing is a Spanish artificial intelligence startup headquartered in the Basque Country with offices spanning the United States, Canada, and across Europe. Founded with a focus on applying quantum-inspired computing techniques to solve complex optimization problems, the company has evolved into one of Europe's most promising AI infrastructure players.
Unlike most AI startups that focus purely on building larger models, Multiverse has carved out a unique niche: making advanced AI models dramatically smaller, faster, and more cost-effective without losing the intelligence and capabilities that make frontier models valuable. This positioning has resonated strongly with enterprise customers, including major names like Iberdrola (the Spanish energy giant), Bosch (the German industrial conglomerate), and the Bank of Canada.
The company is currently rumored to be raising a massive €500 million funding round at a valuation exceeding €1.5 billion, which would make it one of the most valuable AI startups in Europe outside of France's Mistral AI. While Multiverse confirmed to TechCrunch that active discussions with potential investors are ongoing, the company declined to comment on specific valuation or funding size at this stage. The startup also would not confirm reports that its annual recurring revenue (ARR) reached €100 million in January 2026.
If those numbers are accurate, Multiverse would still trail OpenAI's staggering $20 billion ARR, but would be within striking distance of European rival Mistral AI, whose ARR recently soared past $400 million. More importantly, Multiverse's growth trajectory positions it as a serious contender in the rapidly expanding market for AI model compression and optimization — a market that could be worth tens of billions of dollars as enterprises struggle with the economics of deploying frontier AI models.
The LLM Size Problem: Why Compressed AI Models Matter More Than Ever:
Here's the challenge facing every enterprise AI team in 2026: the most capable large language models — OpenAI's GPT-4, Anthropic's Claude 3.5, Google's Gemini — are enormous. These frontier models contain hundreds of billions of parameters, require massive amounts of GPU memory to run, consume extraordinary amounts of power, and cost enterprises thousands or tens of thousands of dollars per month to operate at scale.
For most companies, the economics simply don't work. A typical enterprise might need to run thousands or millions of AI inference requests per day across customer support, content generation, data analysis, and other use cases. At frontier model pricing and compute requirements, the costs quickly become prohibitive — especially for mid-market companies or startups that don't have hyperscaler budgets.
This creates a painful dilemma: companies can either use smaller, cheaper models that lack the reasoning capabilities they need, or use frontier models and blow through their AI budgets in weeks. Multiverse Computing's solution is to eliminate that trade-off entirely by compressing frontier models down to a size and cost structure that makes them economically viable for widespread enterprise deployment.
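The economics behind this dilemma can be made concrete with a back-of-the-envelope calculation. Every number below is an illustrative assumption, not published pricing from Multiverse or any model provider:

```python
# Back-of-the-envelope inference cost comparison.
# All prices and volumes are illustrative assumptions, not real pricing.

def monthly_cost(requests_per_day, tokens_per_request, price_per_million_tokens):
    """Estimated monthly spend for a given inference workload."""
    tokens_per_month = requests_per_day * tokens_per_request * 30
    return tokens_per_month / 1_000_000 * price_per_million_tokens

workload = dict(requests_per_day=100_000, tokens_per_request=1_000)

# Hypothetical per-token prices: a frontier model at $10 per million
# tokens versus a compressed model at $2 per million tokens.
frontier = monthly_cost(**workload, price_per_million_tokens=10.0)
compressed = monthly_cost(**workload, price_per_million_tokens=2.0)

print(f"frontier model:   ${frontier:,.0f}/month")    # $30,000/month
print(f"compressed model: ${compressed:,.0f}/month")  # $6,000/month
```

At these assumed rates, a workload of 100,000 thousand-token requests per day costs five times more on the frontier model — the gap that compression vendors are targeting.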
What Is CompactifAI? The Quantum-Inspired Technology Behind Multiverse's Compressed Models:
CompactifAI is Multiverse Computing's proprietary compression technology that uses principles inspired by quantum computing to dramatically reduce the size of large language models while preserving their accuracy, reasoning capabilities, and performance characteristics. The technology represents years of research in quantum optimization and tensor network theory applied to the practical problem of AI model compression.
The results are remarkable. Multiverse's HyperNova 60B model — the company's flagship compressed model now available for free on Hugging Face — is approximately 32GB in size, roughly half the size of the OpenAI model it derives from (gpt-oss-120b, OpenAI's open-weight 120 billion parameter model). Despite being half the size, HyperNova 60B delivers comparable performance across most benchmarks while offering significantly lower memory usage, reduced latency, and dramatically lower inference costs.
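The size figures follow directly from parameter count and numeric precision. A quick sketch (the 4-bit precision is an assumption for illustration; the exact weight formats used by gpt-oss-120b and HyperNova are not detailed here):

```python
def weights_size_gb(n_params, bits_per_param):
    """Approximate on-disk size of a model's weights in gigabytes."""
    return n_params * bits_per_param / 8 / 1e9

# At roughly 4 bits per weight (an assumption; many open models ship in
# 4-bit formats), halving the parameter count halves the footprint:
print(weights_size_gb(120e9, 4))  # 60.0 GB
print(weights_size_gb(60e9, 4))   # 30.0 GB, close to HyperNova's ~32GB
```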
The newest version, HyperNova 60B 2602, includes enhanced support for tool calling and agentic coding — two capabilities that are becoming increasingly important for enterprise AI applications but that typically drive inference costs sky-high with larger models. By optimizing specifically for these use cases, Multiverse is targeting some of the most economically painful aspects of deploying advanced AI systems in production.
HyperNova 60B vs. Mistral Large 3: The European AI Rivalry Heats Up:
One of the most interesting competitive dynamics in European AI is the rivalry between Multiverse Computing and Mistral AI, the French decacorn (a startup valued at over $10 billion) that has become Europe's flagship AI model developer. According to Multiverse's own benchmarking, HyperNova 60B outperforms Mistral Large 3 — one of Mistral's flagship models — on several key metrics while being significantly smaller and more cost-effective to run.
But beyond the technological rivalry, the two companies actually have a lot in common. Both are European AI champions positioning themselves as sovereign alternatives to U.S. tech giants. Both have expanded internationally with offices across multiple continents. Both serve major enterprise customers. And both are benefiting enormously from growing European demand for AI infrastructure that isn't subject to American jurisdiction or regulatory frameworks.
The difference is in strategy. While Mistral AI has focused on building its own frontier models from scratch and competing head-to-head with OpenAI and Anthropic on raw capability, Multiverse is taking a more pragmatic approach: start with proven models (including OpenAI's open-source releases) and make them dramatically more efficient through compression. It's a bet that enterprises care more about economics and deployability than they do about having the absolute cutting-edge model.
Free and Open Source: Multiverse's Strategy for Developer Adoption:
One of the most strategic decisions Multiverse has made is releasing HyperNova 60B for free on Hugging Face, the leading platform for open-source AI models. By making the compressed model freely available to developers, Multiverse is betting that widespread adoption will drive enterprise demand for its commercial offerings — which likely include custom compression services, enterprise support, and proprietary versions of CompactifAI technology.
The company has also committed to open-sourcing more compressed models in 2026 to support a wider range of use cases. This open-source strategy mirrors the approach that made companies like Hugging Face, Meta (with LLaMA), and Mistral AI (with its open-source models) so influential in the AI ecosystem. Developers who build applications on top of HyperNova 60B become advocates for Multiverse's technology and potential customers for premium offerings.
For enterprises evaluating AI infrastructure options, the availability of a free, high-performance compressed model dramatically lowers the barrier to experimentation. Companies can test HyperNova 60B in production workloads, compare it against their current solutions, and make informed decisions about whether model compression is a viable strategy for reducing their AI infrastructure costs.
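For teams that want to experiment, a typical Hugging Face workflow looks like the sketch below. The repository id and generation settings are assumptions — check the actual model card on Hugging Face for the real identifier and recommended parameters — and the weights download runs to tens of gigabytes:

```python
# Hypothetical usage sketch. The repo id below is a guess, NOT the
# confirmed Hugging Face identifier; consult the actual model card.
MODEL_ID = "multiverse-computing/HyperNova-60B-2602"  # assumed

def run_smoke_test(prompt, max_new_tokens=256):
    """Download the model (tens of GB) and generate one completion."""
    # Third-party dependency: pip install transformers torch
    from transformers import pipeline

    pipe = pipeline("text-generation", model=MODEL_ID, device_map="auto")
    messages = [{"role": "user", "content": prompt}]
    out = pipe(messages, max_new_tokens=max_new_tokens)
    # Full conversation including the model's reply; exact structure
    # depends on the transformers version in use.
    return out[0]["generated_text"]

# run_smoke_test("Summarize our Q3 support tickets by theme.")
```

Running the same prompts through both the compressed model and an incumbent solution is the quickest way to validate the latency and cost claims on a company's own workload.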
The Sovereign AI Narrative: Why European Governments Are Backing Multiverse:
One of the most powerful tailwinds behind Multiverse Computing's growth is the European push for AI sovereignty — the idea that European companies, governments, and citizens should have access to advanced AI systems that are developed, hosted, and governed under European law rather than being dependent on American technology giants.
This narrative has opened doors for Multiverse across Europe. The company recently secured a collaboration with the regional government of Aragón in northeastern Spain to deploy AI solutions for public sector use cases. The Spanish Agency for Technological Transformation (SETT) participated in Multiverse's $215 million Series B funding round last year, signaling strong government backing for the company's mission.
Since its inception, Multiverse has also benefited from substantial support from the Basque regional government, which has long invested in technology and innovation as a strategy for economic development. If Multiverse successfully closes its rumored €500 million funding round at a €1.5+ billion valuation, it would become the Basque Country's first unicorn — a major milestone for the region and a powerful symbol of Europe's growing AI ambitions.
In its latest press release, Multiverse explicitly positions itself as a company that can "deliver sovereign solutions across the AI stack" — language clearly designed to resonate with European enterprises and governments increasingly concerned about data sovereignty, regulatory compliance, and dependence on U.S. technology infrastructure.
What This Means for the AI Industry: The Shift Toward Model Efficiency:
Multiverse Computing's approach represents a broader shift in the AI industry away from the "bigger is always better" mentality that has dominated the last few years of large language model development. As compute costs have soared, energy consumption has become a political issue, and enterprises have struggled with the economics of deploying frontier models, the industry is increasingly recognizing that intelligence per dollar and intelligence per watt may be more important metrics than raw capability.
Model compression, quantization, pruning, and distillation — techniques for making AI models smaller and more efficient without losing capability — are becoming critical differentiators. Companies like Multiverse that can deliver frontier-level performance at a fraction of the size and cost have a potentially enormous market opportunity ahead of them.
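To make one of those techniques concrete, here is a minimal sketch of post-training symmetric int8 quantization — purely illustrative of the general idea, since CompactifAI's tensor-network approach is proprietary and works differently:

```python
# Minimal post-training symmetric int8 quantization sketch.
# Illustrates the general size/accuracy trade-off only; this is NOT
# how CompactifAI's quantum-inspired tensor-network compression works.

def quantize_int8(weights):
    """Map float weights to int8 values plus one per-tensor scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.003, 0.88, -0.51]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# int8 storage is 4x smaller than fp32, at the cost of a rounding
# error of at most half a quantization step per weight.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
assert max_err <= scale / 2 + 1e-12
```

The same trade-off drives all of these techniques: spend fewer bits (or fewer parameters) per unit of capability, and accept a small, bounded loss of fidelity in return.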
The release of HyperNova 60B as a free, open-source model is a bold statement of confidence in that thesis. Multiverse is essentially saying: we believe our compression technology is so valuable that we can give away the compressed models themselves and still build a massive business around the infrastructure, tooling, and custom solutions that enterprises need to deploy these models at scale.
The Road to Unicorn Status: What's Next for Multiverse Computing:
If Multiverse successfully closes its rumored €500 million funding round, the company will have the resources to significantly accelerate its roadmap. Potential areas of investment include:
- Expanding the CompactifAI platform to support compression of models from other providers beyond OpenAI, including Anthropic's Claude, Google's Gemini, and Meta's LLaMA family.
- Building out enterprise infrastructure for deploying compressed models on-premises, in private clouds, or in sovereign cloud environments that meet strict European data residency requirements.
- Developing industry-specific compressed models optimized for verticals like financial services, healthcare, manufacturing, and government — where Multiverse already has strong customer relationships.
- Scaling international operations to capitalize on demand for AI sovereignty solutions not just in Europe but in other regions concerned about dependence on U.S. technology, including Latin America, parts of Asia, and the Middle East.
The Bottom Line: Why Multiverse Computing Could Reshape Enterprise AI Economics:
Multiverse Computing sits at the intersection of three of the most important trends in enterprise AI: the urgent need to reduce model deployment costs, the growing European demand for AI sovereignty, and the broader industry shift toward optimizing for efficiency rather than pure scale.
The company's quantum-inspired compression technology delivers genuine value to enterprises struggling with the economics of frontier AI models. Its open-source strategy builds developer mindshare and ecosystem momentum. Its sovereign AI positioning resonates with European governments and enterprises. And its impressive list of enterprise customers demonstrates real market traction.
Whether Multiverse becomes Europe's next AI unicorn in the coming months remains to be seen. But regardless of valuation milestones, the company is addressing a real and growing problem in the AI industry — and doing it with technology that appears to genuinely work.
For enterprises evaluating AI infrastructure strategies in 2026, Multiverse Computing and its HyperNova models represent a compelling alternative to the "bigger models, bigger bills" paradigm that has dominated the last few years.
Sometimes the smartest solution isn't the biggest one — it's the one that's just smart enough, at a price you can actually afford.