Acapela Group is involved in the VOICI project, helping to design a demo dedicated to the next generation of cockpits, overseen by Thales and in collaboration with SINTEF, Multitel and sensiBel.
The goal of the VOICI (VOIce Crew Interaction) research project, which is part of the ambitious European ‘Clean Sky 2’ programme, is to develop a virtual crew assistant, i.e. a natural voice dialogue system within the avionic system.
The efficiency of the speech recognition system has been evaluated and validated in a test cockpit with a realistic noisy environment, and the dialogue manager has been trained and tested on operational scenarios to manage the entire voice dialogue between the pilot and the cockpit.
As the crew’s tasks become increasingly complex, it is essential to put in place natural interfaces that help to simplify these tasks using multimodal means such as physical buttons, touch screens, gestures, eye tracking and voice interaction.
Today, the system is essentially based on a graphical interface with voice commands given by the crew, without voice synthesis feedback.
The challenge of this project is to bring new technologies into the cockpit, with a dialogue system adapted to the operational environment and its specific vocabulary, able to ‘say and understand’ situations.
The VOICI virtual assistant’s job is to simplify tasks and reduce the crew’s workload, using an industry-specific technological software component that establishes a reliable dialogue, regardless of the avionic system or sound environment.
The purpose of VOICI is to produce a POC (proof of concept) demo of a Virtual Cockpit Assistant that is not only able to listen to all of the communication going on inside the cockpit, whether between members of the crew or between the crew and ATC (Air Traffic Control), but can also recognise and interpret the content, interact with the crew and respond to its requests.
Sound recording, voice technology and artificial intelligence have been defined by the topic manager, Thales, as the three key technologies used in the system.
An audio evaluation environment has been developed by SINTEF to carry out scenario-based tests, allowing Multitel to measure the performance of their speech recognition and interpretation solution in a noisy environment. A miniature super-directive microphone array has been developed by sensiBel to provide an alternative to speech capture via the headset microphone. Acapela has worked on creating a specific synthetic voice to suit the context in which it is to be used, one that passes on information clearly and intelligibly to establish the right conditions for dialogue.
Speech recognition tests have been completed successfully, with a 95% word recognition rate regardless of the flight phase.
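For context, a word recognition rate like the 95% figure above is conventionally derived from the word error rate (WER): the recognised transcript is aligned against a reference transcript with word-level Levenshtein edit distance, and the recognition rate is 1 − WER. The sketch below illustrates that standard computation; it is a generic example, not the project's actual evaluation protocol, and the sample phrases are hypothetical.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

def word_recognition_rate(reference: str, hypothesis: str) -> float:
    return 1.0 - word_error_rate(reference, hypothesis)

# Hypothetical example: one dropped word out of five -> 80% recognition rate.
print(word_recognition_rate("set heading two seven zero",
                            "set heading two seven"))
```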
‘This new project, which has come to an end with the delivery of the voice in February 2020, has once more helped establish our position in a crucial field, and moved the technology forwards to suit future uses, by working closely with key partners, which is of course essential if you want to be right at the heart of innovation. We would like to thank our partners and the topic manager, Thales, for the results we have achieved together,’ comments Remy Cadic, CEO of Acapela Group.
This project has received funding from the Clean Sky 2 Joint Undertaking under the European Union’s Horizon 2020 research and innovation programme, under grant agreement No. 785401.
Link to Clean Sky 2 project:
VOICI project presentation
Open data from the project
About Acapela Group
At Acapela Group, we create personalised digital voices that match your branding. Our voices, based on neural TTS and machine learning, bring persona to the user experience.
Acapela is the European leader in voice solutions, with thirty years of expertise and market feedback, strong partnerships, deep-rooted R&D, an enthusiastic team and a strong appetite for innovation. Your voice matters. And Acapela cares about it. The company aims to create voices that sound different: custom voices, children’s voices, and voice banking with my-own-voice. These are voices adapted to the needs and context of the application, based on promising neural TTS results.