
Multiverse Computing – CompactifAI

Company: Multiverse Computing

Sector: Generalist

Business Case

The challenge CompactifAI addresses is the heavy computational and energy footprint of large language models (LLMs). These models are typically so large and complex that they incur high operating costs and cannot run on devices with limited computational capacity. CompactifAI tackles this problem by applying quantum and quantum-inspired technologies to compress the models without compromising their performance, making artificial intelligence more accessible and sustainable.

Objectives

CompactifAI aims to revolutionise the efficiency of LLMs through advanced compression based on quantum and quantum-inspired technologies. The goal is to drastically reduce resource and energy consumption, lower operating costs and enable the deployment of these models on computationally constrained devices, extending the reach and accessibility of AI-driven solutions.

Use case

CompactifAI proposes a solution based on tensor networks and quantum technologies to compress LLMs, reducing their resource and energy consumption and the associated costs. Building this solution involves the ongoing development of compression tools that are progressively integrated into both on-premises and cloud environments, optimising public and proprietary models for a wide range of applications. This incremental approach keeps the software adaptable and continuously improving, facilitating its adoption in diverse operating environments. A sketch of the underlying compression idea follows.
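To make the approach concrete, the sketch below shows the simplest building block of tensor-network compression: replacing a dense layer's weight matrix with a truncated low-rank factorisation, the rank-reduction step that underlies matrix-product-operator decompositions. This is an illustrative sketch only, not CompactifAI's actual method; the layer size (1024) and bond dimension (64) are assumptions chosen for demonstration.

# Illustrative sketch of tensor-network-style weight compression (assumed
# setup, not CompactifAI's proprietary method). A dense weight matrix is
# replaced by a truncated SVD factorisation; the retained rank plays the
# role of the bond dimension in a tensor-network decomposition.
import numpy as np

def compress_layer(W: np.ndarray, bond_dim: int):
    """Factor W (d_out x d_in) into thin matrices A @ B of rank <= bond_dim."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :bond_dim] * S[:bond_dim]   # d_out x bond_dim, singular values folded in
    B = Vt[:bond_dim, :]                 # bond_dim x d_in
    return A, B

rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 1024))    # hypothetical stand-in for one LLM layer
A, B = compress_layer(W, bond_dim=64)

params_before = W.size                   # parameters stored before compression
params_after = A.size + B.size           # parameters stored after compression
rel_error = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
print(f"parameters: {params_before} -> {params_after} "
      f"({params_after / params_before:.1%}), relative error {rel_error:.3f}")

On random weights like these the reconstruction error is large, because a Gaussian matrix has a flat singular-value spectrum; trained LLM weights are typically far more compressible. Approaches of this kind choose a bond dimension per layer to balance model size against accuracy, and usually follow compression with a short retraining step to recover any lost quality.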

Infrastructure

Hybrid

Technology

Quantum

Data

Depending on the focus and specific application of the model, various datasets may be used.

Resources

Difficulties and learning

KPIs (business impact and metrics of the model)

Funding

Collaborators, Partners
