Here at Synapse, we spend a good deal of time watching the major trends in technology that underlie product development. Over the past year, artificial intelligence has become a major theme, both for us and our industry as a whole—we recently spent some time with our colleagues at Cambridge Consultants thinking about what the latest trends in AI and related technologies mean for our clients in the year ahead.

Research laboratories at the biggest companies and institutions in the tech industry, as well as some focused startups, are making huge strides in advanced AI techniques like deep learning, and are taking advantage of ever-more-powerful hardware to execute massively parallel algorithms. As an engineering services company, we are also pushing the boundaries of what is possible, with a focus on enabling next-generation AI-based product designs for our clients.

Doing More with Less Data

We see rapid and significant progress toward human-level AI systems, with driverless vehicles a flagship technology that has attracted plenty of recent media attention. But along with novel algorithmic techniques, these systems rely on massive amounts of data to do their job. These data sets can be very expensive and time-consuming to create, yet they are needed both for training the systems (e.g. in visual recognition tasks) and for interpreting what they sense (e.g. detailed digital maps). Our engineers have made strides toward building capable systems without an expensive, time-consuming data curation effort.

Duncan Smith, Head of the ICE division at Cambridge Consultants, describes Vincent’s deep learning capabilities to the crowd at CES 2018.

Our art-generating system, Vincent, demonstrates a novel application of AI in augmenting human creativity, though the underlying advancement is in training techniques that work with limited data sets. Other recent projects use data synthesis techniques to automatically generate thousands or millions of labeled training examples, replacing a cumbersome manual labeling process.
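To make the data-synthesis idea concrete, here is a deliberately simplified sketch (our own illustration, not the actual pipeline behind Vincent or any client project): because the generator places the target pattern itself, every synthesized example carries a perfectly accurate label for free, with no manual annotation.

```python
import random

def synthesize_example(size=8):
    """Create one tiny 'image' (a size x size grid) containing either a
    horizontal or a vertical bar, and return it with its known label."""
    grid = [[0] * size for _ in range(size)]
    label = random.choice(["horizontal", "vertical"])
    pos = random.randrange(size)
    for i in range(size):
        if label == "horizontal":
            grid[pos][i] = 1  # fill one row
        else:
            grid[i][pos] = 1  # fill one column
    return grid, label

def synthesize_dataset(n, seed=0):
    """Generate n labeled examples; a real system might render 3D models,
    vary lighting and pose, or perturb recorded sensor data instead."""
    random.seed(seed)
    return [synthesize_example() for _ in range(n)]

dataset = synthesize_dataset(10_000)
```

The same principle scales up: when the generator controls the scene, labels that would take humans weeks to draw by hand come out of the renderer already attached to each example.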

“As an engineering services company, we are also pushing the boundaries of what is possible, with a focus on enabling next-generation AI-based product designs for our clients.”  

Augmented Reality Beyond Phones and Glasses

Virtual and augmented reality are also very much in the current spotlight. We've seen incredible demos that combine the most advanced display technology with computer vision that overlays visual information on what a user sees. We've been prototyping how this technology can be brought to bear in applications whose user interfaces are neither complex head-mounted displays nor mobile phones, and specifically how these systems can enhance human capabilities using robust computer vision that can model and understand the world around us.

To this end, we have showcased an augmented reality system, which we call HECTAR, that uses computer vision techniques to guide a person through an assembly task, with visual cues projected directly onto the parts being assembled. We have also developed a machine-vision system that helps hurried users sort their recyclables, compostables, and garbage into the proper bins.

The HECTAR demo showcase at CES 2018.

Novel Voice User Interfaces

And of course, everyone is talking about voice technology; it's clear to us that voice UI will be a big factor in product development going forward. So we're focusing on the specific problems to be solved outside of in-home voice assistants: how to efficiently customize the interaction model, enable the technology under resource constraints (battery power, or less expensive computation platforms), and make it function reliably outside the home (e.g. offices, connected venues, and industrial settings) using signal processing techniques for noise reduction and speaker isolation. We're also pushing what voice assistants can do beyond speech recognition, such as speaker identification and understanding the context behind the speech.
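As a simplified, hypothetical illustration of one such noise-reduction technique (the toy signal and function names below are our own, not a production system), here is a minimal spectral-subtraction sketch: estimate the noise spectrum during a silent stretch, then subtract it bin by bin from the noisy spectrum, clamping to a small floor so magnitudes never go negative.

```python
import cmath
import math
import random

def dft(frame):
    """Naive discrete Fourier transform; real systems use an FFT."""
    n = len(frame)
    return [sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def spectral_subtract(noisy_mag, noise_mag, floor=0.02):
    """Subtract the estimated noise magnitude in each frequency bin,
    clamping at a small spectral floor to avoid negative magnitudes."""
    return [max(m - n, floor * m) for m, n in zip(noisy_mag, noise_mag)]

# Toy example: a pure tone standing in for voice, plus uniform noise.
random.seed(1)
N = 64
noise = [random.uniform(-0.3, 0.3) for _ in range(N)]
voice = [math.sin(2 * math.pi * 8 * t / N) for t in range(N)]
noisy = [v + n for v, n in zip(voice, noise)]

noise_mag = [abs(x) for x in dft(noise)]  # estimated during "silence"
noisy_mag = [abs(x) for x in dft(noisy)]
clean_mag = spectral_subtract(noisy_mag, noise_mag)
```

After subtraction, the tone's frequency bin still dominates while the noise bins collapse toward the floor, which is the basic effect a real front end exploits before handing audio to a recognizer.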

Walking a delegate through AKSENT, a showcase of voice understanding.

All around us, AI is moving quickly out of the laboratory and into everyday life. Synapse and Cambridge Consultants are excited to work with our clients, combining their novel product concepts and enabling technologies with advances in AI, to see what product and service transformations will emerge to make a positive impact on our world.