Innovation specialist Synapse Product Development today unveiled Gerard, a robot with a first-of-its-kind natural user interface. Gerard understands its environment as well as voice and physical gesture commands, offering a glimpse of a future where context-aware digital assistants approach the richness of human communication.
Today’s voice-controlled assistants are reaching mainstream adoption in the home but are limited by a lack of the critical context that we take for granted in everyday interactions with other people. In the future, digital assistants will understand context, such as our body language, emotions, and physical surroundings.
To bring us closer to that future, Synapse engineers integrated machine vision, voice and gesture recognition, 3D mapping and autonomous robotics to give Gerard eyes, ears and mobility. The robot autonomously explores physical spaces, becoming aware of where it’s been, its current location and objects in its environment. With all of these capabilities combined, Gerard understands who you are, what you’re saying, and what your gestures mean in the context of your surroundings. This allows for instructions such as “Gerard, turn on that light” while pointing to the light you want to turn on.
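Synapse has not published Gerard’s internals, but the deictic step described above — resolving “that light” by matching a pointing gesture against objects in the robot’s 3D map — can be sketched as a nearest-ray lookup. Everything below (names, coordinates, the 15° tolerance) is an illustrative assumption, not Gerard’s actual implementation:

```python
import math
from dataclasses import dataclass


@dataclass
class MappedObject:
    """An object Gerard has located while exploring the room."""
    name: str
    position: tuple  # (x, y, z) in the room map, metres


def resolve_pointing_target(origin, direction, objects, max_angle_deg=15.0):
    """Return the mapped object closest to the pointing ray, or None.

    origin:    where the pointing hand is in the room map
    direction: the pointing vector (need not be normalised)
    """
    norm = math.sqrt(sum(c * c for c in direction))
    unit = tuple(c / norm for c in direction)
    best, best_angle = None, max_angle_deg
    for obj in objects:
        vec = tuple(p - o for p, o in zip(obj.position, origin))
        dist = math.sqrt(sum(c * c for c in vec))
        if dist == 0:
            continue
        # Angle between the pointing ray and the direction to the object.
        cos = sum(u * v for u, v in zip(unit, vec)) / dist
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos))))
        if angle < best_angle:
            best, best_angle = obj, angle
    return best


# "Gerard, turn on that light" while pointing toward the floor lamp.
room = [MappedObject("ceiling_light", (0.0, 2.5, 3.0)),
        MappedObject("floor_lamp", (2.0, 1.2, 1.0))]
target = resolve_pointing_target(origin=(0.0, 1.5, 0.0),
                                 direction=(2.0, -0.3, 1.0),
                                 objects=room)
print(target.name)  # → floor_lamp
```

A real system would fuse this geometric lookup with the speech transcript (the word “light” filters candidate objects) and fall back to a clarifying question when no object falls within the angular tolerance.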
“Voice interface technology has come a long way in a few short years, but is still well short of resembling human interactions,” said Jeff Hebert, VP of Engineering at Synapse. “Gerard uses contextual awareness to make digital interactions much more natural and efficient.”
With this technology in the home, Gerard could support many new use cases. Imagine the benefit to an elderly relative: home automation becomes accessible without mastering a screen-based user interface. In an office environment, Gerard could control conference room technology, enabling automatic meeting minutes attributed to the correct speakers, or intuitive gesture and voice control of a videoconferencing camera with commands such as “zoom in on that whiteboard”. In industrial settings, Gerard could improve both safety and productivity: a factory worker could control heavy equipment without dividing their attention between the work and a display or control panel, or removing protective gloves, while automated equipment could anticipate workers’ movements to enhance safety.
Gartner has predicted that by 2020, about 30 percent of web access will take place without the use of a screen. Synapse is at the leading edge of developments in interaction technology, harnessing the latest in sensing, machine learning, spatial awareness and inputs from voice, gesture and haptics to develop completely natural user interfaces that bring new richness to human/digital experiences.
Meet Gerard in person at CES 2019 in Las Vegas, beginning January 8, 2019. Visit us at Sands Expo, Level 2, Halls A-D, booth 44337.