
Personalized Virtual Assistants for the Elderly: Learn more about the Empathic Research project and adapted expressive voices by Acapela Group.

The Empathic project focuses on Personalised Virtual Coaches to assist elderly people living independently in and around their homes. Acapela Group is working on the voice synthesis part, providing users with an advanced voice-first interface based on Deep Learning.

The project is part of the Horizon 2020 programme, the biggest EU Research and Innovation programme ever, with nearly €80 billion of funding available over seven years (2014 to 2020).

Empathic aims to research, innovate, explore and validate new paradigms and platforms, laying the foundation for future generations of Personalised Virtual Coaches. The consortium consists of 10 partners active in the areas of health maintenance, end-user organizations, technology development, academic/research institutes and system integration.

Innovative multimodal face analytics, adaptive spoken dialogue systems and natural language interfaces are among the areas the project will research and innovate on, in order to make life easier for reasonably healthy and active seniors by helping to improve their daily routines.

Acapela will provide Text-to-Speech technology based on Deep Neural Networks, with adapted expressive speech that will enhance the expressive possibilities of the dialogue system and adapt it to the user’s emotions and mood, improving the consistency, naturalness and adaptability of the interaction. Four languages are targeted: English, French, Spanish and Norwegian.

The project will use remote non-intrusive technologies to extract physiological markers of emotional states in real-time for online adaptive responses of the coach, and advanced holistic modelling of behavioural, computational, physical and social aspects of a personalized expressive virtual coach.

It will include a demonstration and validation phase with clearly-defined realistic use cases. It will focus on evidence-based, user-validated research and integration of intelligent user and context sensing methods through voice, eye and facial analysis, intelligent heuristics (complex interaction, user intention detection, distraction estimation, system decision), visual and spoken dialogue system, and system reaction capabilities. Through measurable end-user validation, to be performed in 3 different countries (Spain, Norway and France) with 3 distinct languages and cultures (plus English for R&D), the proposed methods and solutions will ensure usefulness, reliability, flexibility and robustness.

EMPATHIC Partners:

  • Universidad del País Vasco, Spain
  • OSATEK, Spain
  • Oslo University Hospital, Norway
  • e-Seniors Association ESE END, France
  • Tunstall Healthcare (UK) Ltd., UK
  • Technion – Israel Institute of Technology, Israel
  • Intelligent Voice Ltd., UK
  • Institut Mines-Télécom, France
  • Seconda Università degli Studi di Napoli, Italy
  • Acapela Group S.A., Belgium

Acapela Group is involved in many research projects worldwide. Involvement in funded R&D projects and collaboration with high-profile partners, organizations, universities and laboratories are essential for Acapela Group to keep pushing the boundaries of technology and set new standards for advanced AI voices adapted to everyone.

About Empathic – http://www.empathic-project.eu/

About Acapela Group – www.acapela-group.com
Voice First is reinventing the way we engage with devices. Acapela creates personalized voices to guide end users through this new experience. Its expertise, in-house technologies and latest work on deep learning and artificial intelligence put the company in an excellent position to support its customers by rapidly creating voices adapted to the context of use.
Acapela’s vocal solutions empower all services and devices that need to speak. The company offers a large portfolio of standard voices and creates custom voices for the exclusive use of a company or a brand. Because voice matters.

 


Published on 09/04/2018