
Spanish startup Multiverse Computing is pushing its compressed artificial intelligence models into the mainstream, aiming to make advanced AI systems faster, cheaper, and more widely accessible. The company has launched new tools, including an application and an API platform, designed to allow developers and enterprises to easily use its optimized models at scale.
The move comes as demand for efficient AI infrastructure continues to rise globally, with companies seeking ways to reduce the high computational costs associated with large language models. Multiverse Computing has focused on compressing models developed by major AI labs, including those from OpenAI, Meta, DeepSeek, and Mistral AI, making them lighter while retaining strong performance.
At the core of the company’s offering is its CompactifAI technology, which uses quantum-inspired tensor networks to compress AI models by restructuring their internal weight representations rather than simply pruning parameters. This approach significantly reduces memory usage and computational requirements, allowing models to run efficiently across cloud environments, enterprise systems, and even edge devices.
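To make the restructuring idea concrete, here is a minimal sketch of one well-known compression technique in the same family: low-rank factorization of a weight matrix via SVD. This is an illustrative analogy only, not Multiverse's actual tensor-network method; the matrix sizes and rank are arbitrary choices for the example.

```python
import numpy as np

def compress_layer(W: np.ndarray, rank: int):
    """Replace an (m x n) weight matrix with two smaller factors.

    The product A @ B approximates W, but stores far fewer numbers
    when rank is small relative to m and n.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # shape (m, rank)
    B = Vt[:rank, :]             # shape (rank, n)
    return A, B

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512))
A, B = compress_layer(W, rank=64)

# Storage drops from 512*512 values to 512*64 + 64*512 values.
ratio = (A.size + B.size) / W.size
print(ratio)  # 0.25 -> the factored layer stores 25% of the original
```

The same principle, applied with richer decompositions throughout a network and followed by a short retraining pass, is how restructuring-based compression can shrink a model while keeping accuracy close to the original.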
The company has also introduced tools such as its CompactifAI app, which enables advanced AI models to run directly on local devices without requiring constant internet connectivity. This capability is particularly valuable for organizations operating in low-connectivity environments or those with strict data privacy requirements, as it reduces reliance on centralized cloud infrastructure.
By launching an API portal alongside its application, Multiverse is opening up its compressed models to a broader developer ecosystem. This allows businesses to integrate high-performance AI into their products without needing expensive hardware or large-scale infrastructure investments. The strategy reflects a shift in the AI industry toward democratizing access to advanced models while maintaining efficiency.
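For developers, integrating a hosted compressed model typically means sending a standard chat-style request to an HTTP endpoint. The sketch below builds such a request payload in the widely used chat-completion shape; the endpoint URL and model name are purely hypothetical placeholders, not Multiverse's actual API.

```python
import json

# Hypothetical endpoint -- a real integration would use the
# provider's documented URL and an authentication header.
API_URL = "https://api.example.com/v1/chat/completions"

def build_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Assemble a chat-completion-style request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_request(
    "compressed-model-example",  # placeholder model name
    "Summarize the benefits of model compression.",
)
print(json.dumps(payload, indent=2))
```

Because compressed models can follow the same request conventions as their full-size counterparts, switching to one can be as simple as changing the model identifier in an existing integration.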
The rise of compressed AI models also aligns with a broader industry trend toward optimizing performance per cost. As organizations scale AI deployments across millions of users, the ability to reduce latency, energy consumption, and operational expenses has become a key competitive factor. Multiverse’s approach positions it as an enabler of the transition from experimental AI to real-world, large-scale applications.
Industry observers note that model compression could play a critical role in expanding AI adoption beyond large tech companies. By lowering barriers related to cost and infrastructure, compressed models make it feasible for smaller enterprises and developers to deploy sophisticated AI solutions across industries such as finance, healthcare, and manufacturing.
The company’s growing visibility also reflects increasing investor and enterprise interest in efficiency-focused AI. Backed by prior funding rounds and ongoing expansion efforts, Multiverse Computing is positioning itself for the next phase of AI development, in which optimization and scalability matter as much as raw model performance.
As artificial intelligence continues to evolve, the ability to run powerful models with fewer resources is expected to become a defining factor in the industry. Multiverse Computing’s latest push into mainstream accessibility highlights how compressed AI could reshape deployment strategies and accelerate the global adoption of intelligent systems.




