Synapse Unveils “Gerard,” the Contextually-Aware Robotic Assistant that Really Understands Us


News Releases - Dec. 12, 2018

Innovation specialist Synapse Product Development today unveiled Gerard, a first-of-its-kind natural user interface. Gerard understands its physical environment as well as voice and gesture commands, offering a glimpse into a future where context-aware digital assistants approach the richness of human communication.

Today’s voice-controlled assistants are reaching mainstream adoption in the home but are limited by a lack of the critical context that we take for granted in everyday interactions with other people. In the future, digital assistants will understand context, such as our body language, emotions, and physical surroundings.

To bring us closer to that future, Synapse engineers integrated machine vision, voice and gesture recognition, 3D mapping and autonomous robotics to give Gerard eyes, ears and mobility. The robot autonomously explores physical spaces, building an awareness of where it has been, where it is now and which objects are around it. Combining these capabilities, Gerard understands who you are, what you’re saying, and what your gestures mean in the context of your surroundings, allowing instructions such as “Gerard, turn on that light” while pointing to the light you want to turn on.
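Synapse has not published Gerard’s internals, but a command like “turn on that light” given while pointing hints at the geometry involved: cast the speaker’s pointing gesture as a ray into the robot’s 3D object map, then pick the matching object closest to that ray. The following is a minimal illustrative sketch; every name and data structure here is hypothetical, not Gerard’s actual implementation.

```python
import math

def resolve_deictic_target(objects, ray_origin, ray_dir, wanted_label):
    """Return the object of the requested type closest to the pointing ray.

    objects: list of dicts with "label", "pos" (x, y, z), and "id".
    ray_origin / ray_dir: origin and direction of the pointing gesture,
    as estimated by gesture recognition (hypothetical upstream step).
    """
    # Normalize the pointing direction.
    norm = math.sqrt(sum(c * c for c in ray_dir))
    d = tuple(c / norm for c in ray_dir)

    best, best_dist = None, float("inf")
    for obj in objects:
        if obj["label"] != wanted_label:
            continue
        # Vector from the ray origin to the object.
        v = tuple(p - o for p, o in zip(obj["pos"], ray_origin))
        # Distance along the ray to the object's closest approach.
        t = sum(vi * di for vi, di in zip(v, d))
        if t < 0:
            continue  # object lies behind the pointing direction
        # Perpendicular distance from the object to the ray.
        closest = tuple(o + t * di for o, di in zip(ray_origin, d))
        dist = math.dist(obj["pos"], closest)
        if dist < best_dist:
            best, best_dist = obj, dist
    return best

# "Gerard, turn on that light" while pointing up toward the far corner:
room = [
    {"label": "light", "pos": (4.0, 0.0, 2.5), "id": "ceiling-light"},
    {"label": "light", "pos": (0.0, 3.0, 1.0), "id": "desk-lamp"},
    {"label": "tv",    "pos": (4.1, 0.1, 1.2), "id": "tv"},
]
target = resolve_deictic_target(room, (0.0, 0.0, 1.5), (1.0, 0.0, 0.25), "light")
```

Here the pointing ray passes near the ceiling light and away from the desk lamp, so the sketch resolves “that light” to the ceiling light. A real system would also have to handle uncertainty in the gesture estimate and ambiguity when several candidates lie near the ray.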

“Voice interface technology has come a long way in a few short years, but is still well short of resembling human interactions,” said Jeff Hebert, VP of Engineering at Synapse. “Gerard uses contextual awareness to make digital interactions much more natural and efficient.”

With this technology in the home, Gerard could support many new use cases. Imagine the benefit to an elderly relative: home automation becomes accessible without mastering a screen-based user interface. In an office environment, Gerard could control conference-room technology, enabling automatic meeting minutes attributed to the correct speakers or intuitive gesture and voice control of a videoconferencing camera, with commands such as “zoom in on that whiteboard.” In industrial settings, Gerard could improve safety as well as productivity: a factory worker could control heavy equipment without splitting their attention with a display or control panel or removing protective gloves, while automated equipment could anticipate workers’ movements to enhance safety.

Gartner has predicted that by 2020, about 30 percent of web access will take place without the use of a screen. Synapse is at the leading edge of interaction technology, harnessing the latest in sensing, machine learning, spatial awareness and inputs from voice, gesture and haptics to develop completely natural user interfaces that bring new richness to human/digital experiences.

Meet Gerard in person at CES 2019 in Las Vegas, beginning January 8, 2019. Visit us at Sands Expo, Level 2, Halls A-D, booth 44337.
