
Tecnalia – MAIA Multimodal, Adaptive and Interactive AI System for Acting in Multiple Contexts


Sector: Health

Business Case

Millions of patients worldwide live with motor disability as a result of stroke, traumatic brain injury and other neurological conditions. Neuroprostheses allow these patients to control assistive or rehabilitative devices with their own biological signals, replacing or helping recover the affected motor functions. The MAIA project focuses on the development of interactive control interfaces based on artificial intelligence (AI) that enable natural, fast and safe interaction between the user and the device (e.g. prostheses, wheelchairs, robotic exoskeletons).

Objectives

Research and develop a multifunctional controller based on human-centred artificial intelligence to enable intuitive and safe interaction between the user and the controlled device: robotic arms, wheelchairs, exoskeletons, etc.

Use case

The solution is based on research into new ways of combining and decoding biological data from the user (neural, behavioural, kinematic, gaze, etc.) together with information from the environment (e.g., object location), so that the system’s response is adjusted both to the user’s intention and to the characteristics of the context. The AI technology developed is interactive: the interfaces respond to the user’s intention and to the actual needs of the environment, drawing on current and previous data in a continuous learning and adaptation process.
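The fusion-and-adaptation loop described above can be sketched very schematically. The snippet below is a hypothetical illustration, not the project's actual model: it concatenates per-modality feature vectors into one input, decodes intent with a toy nearest-centroid classifier, and nudges the matching centroid toward new observations to mimic continuous adaptation. All names (`fuse_features`, `NearestCentroidDecoder`) are invented for this sketch.

```python
import numpy as np

def fuse_features(streams):
    """Concatenate per-modality feature vectors (neural, gaze, kinematic, ...)
    into a single input vector for the intent decoder. Hypothetical helper."""
    return np.concatenate([np.asarray(s, dtype=float).ravel() for s in streams])

class NearestCentroidDecoder:
    """Toy intent decoder: assigns a fused feature vector to the class with the
    nearest centroid. A stand-in for the ML/DL models used in the project."""

    def fit(self, X, y):
        self.classes_ = sorted(set(y))
        self.centroids_ = {
            c: np.mean([x for x, label in zip(X, y) if label == c], axis=0)
            for c in self.classes_
        }
        return self

    def predict(self, x):
        # Pick the class whose centroid is closest in Euclidean distance.
        return min(self.classes_, key=lambda c: np.linalg.norm(x - self.centroids_[c]))

    def update(self, x, label, lr=0.1):
        # Continuous adaptation: move the matching centroid toward new data.
        self.centroids_[label] = (1 - lr) * self.centroids_[label] + lr * x

# Example: two intents decoded from a fused (neural + kinematic) feature vector.
X_train = [fuse_features([[0.0, 0.0], [0.0]]), fuse_features([[1.0, 1.0], [1.0]])]
decoder = NearestCentroidDecoder().fit(X_train, ["rest", "reach"])
intent = decoder.predict(fuse_features([[0.9, 1.1], [1.0]]))  # → "reach"
decoder.update(fuse_features([[0.9, 1.1], [1.0]]), intent)    # adapt online
```

In a real system the classifier would be replaced by the project's machine-learning or deep-learning models, but the overall pattern (fuse modalities, decode intent, adapt from feedback) is the same.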

Infrastructure

On-premises

Technology

Machine learning and deep learning

Data

Neurophysiological signals: intracortical spikes and local field potentials (LFPs), muscle activity (EMG). Kinematic data from a robotic exoskeleton for upper-limb rehabilitation.
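To give a flavour of how such data are typically turned into model features, here is a minimal sketch of two standard preprocessing steps: binning spike times into firing-rate counts, and computing an EMG activity envelope by rectification and smoothing. These are common techniques in the field, assumed here for illustration rather than taken from the project's pipeline.

```python
import numpy as np

def bin_spikes(spike_times_ms, t_end_ms, bin_ms=50.0):
    """Count intracortical spikes in fixed time bins, yielding
    firing-rate features for the decoder."""
    edges = np.arange(0.0, t_end_ms + bin_ms, bin_ms)
    counts, _ = np.histogram(spike_times_ms, bins=edges)
    return counts

def emg_envelope(emg, win=5):
    """Rectify the raw EMG trace and smooth it with a moving average,
    producing a muscle-activity envelope."""
    rectified = np.abs(np.asarray(emg, dtype=float))
    kernel = np.ones(win) / win
    return np.convolve(rectified, kernel, mode="same")

# Example: three spikes over 100 ms, binned at 50 ms.
rates = bin_spikes([10.0, 20.0, 60.0], t_end_ms=100.0)  # → array([2, 1])
env = emg_envelope([1.0, -1.0, 1.0, -1.0, 1.0])         # non-negative envelope
```

LFP features would typically be extracted in the frequency domain (band power), but the bin-and-smooth pattern above is representative of the time-domain side.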

Resources

Multidisciplinary team of domain experts (neuroengineering, neuroscience) and experts in data science and time-series signal processing.

Difficulties and learning

Selection and adaptation of AI methods to multimodal data collected in a clinical study with complex and variable characteristics.

KPIs (business impact and metrics of the model)

Significant improvement in motor intention decoding accuracy over the state of the art. More intuitive and safe control interfaces.

Funding

Public co-funding: H2020-EU.1.2. – Excellent Science – Future and Emerging Technologies (FET)

Collaborators, Partners

Univ. Bologna (Italy), Univ. Muenster (Germany), Carl Zeiss Vision GmbH (Germany), Consiglio Nazionale delle Ricerche (Italy), Azienda Unità Sanitaria Locale di Bologna (Italy), STAM SRL (Italy).
