Voice Assistants at CES 2019 Auto? Welcome to Fantasyland


As vehicles grow increasingly connected and packed with sensors, AI-driven technologies have the potential to play a prominent role in the human-machine interface (HMI). It makes sense for voice-enabled smart assistants to become the dominant HMI for car controls, infotainment, and more as we move into the era of the connected car, especially given the momentum behind, and regulatory demand for, driver monitoring (the New Car Assessment Program [NCAP]). Eventually, when we all become passengers and vehicles become moving entertainment centers in the autonomous vehicle era, these voice assistants will make even more sense.

Given how rapidly natural language processing (NLP) technologies are advancing, and particularly how quickly tech leaders like Amazon and Google are affecting lives with their voice assistants (which they have already positioned for cars), you would think that voice-enabled smart assistants in vehicles would show significant progress. But for production-ready, built-in voice assistant technology, CES 2019 was Fantasyland.

If CES 2019 is any indication, significant market penetration for built-in voice assistants and voice controls is several years out. Over three days, I visited and interviewed 31 OEMs, Tier 1s, and solutions providers. Eight of those were not in the voice solutions arena (they were 3D augmented reality [AR] and computer vision solutions providers), and only two of the remaining 23, Audi and the Chinese carmaker BYTON, were showing production-ready built-in voice assistant technology. Audi will feature Alexa integration in its new 2019 E-tron SUV, as will BYTON in its New Concept SUV.

Welcome to Fantasyland

To be sure, concepts were presented, including very impressive work from Harman, Honda, Nuance, Magneti Marelli, and Mitsubishi Electric. But as far as seeing this technology in production vehicles goes, we are likely looking at 2020 for Honda and, at best, 2021 for the rest. Three major Tier 1s I visited with potential skin in the game (those that provide dash consoles and infotainment systems) showed nothing. Four OEMs I visited showed nothing. Two OEMs, Hyundai and Kia, showed bulbous, pod-like concept vehicles with all the smart device bells and whistles, such as voice-commanded controls and entertainment, big screens, and more, but these must be 7 to 10 years from actual production. They should ship those vehicles to a theme park for people to enjoy.

A logical question would be why voice tech is moving so slowly in the auto industry. Several factors appear to be at play:

  • History of lukewarm effectiveness and acceptance. In March 2017, J.D. Power’s Kristin Kolodge wrote in a blog post: “I would lend credence to another motivating factor: a high level of dissatisfaction with the current voice recognition systems in vehicles. Consumers have been extremely patient with automakers’ built-in technology. For the last four years, voice recognition has been the top-quality problem in J.D. Power’s Initial Quality Study. Yet, consumers continue to want this capability. The J.D. Power Tech Experience Index shows that 78% of consumers want voice recognition on their next vehicle. For those that are choosing to walk away from voice recognition, 50% state that the system was not useful, or it was confusing to use.” The reason the auto industry has failed at voice technology is that natural language understanding is one of the most difficult challenges for current-generation AI. Even market leaders like Amazon and Google struggle with conversational context, humor, sarcasm, and personal history. Why would automakers’ attempts fare any better?
  • Carmakers versus software players. Carmakers and their vendors are reinventing themselves to fuse hardware and software, but none of the players in the automotive ecosystem have great software engineering acumen.
  • Hardware-software fusion and the product lifecycle challenge. Carmakers and their vendors must deliver a hardware-software product whose service life exceeds that of virtually any other technology-infused hardware. Wolf Street estimated that the average age of cars and trucks in operation in the U.S. in 2017 was nearly 12 years. No tech hardware, not even a mainframe computer or telecom switch, is designed to last that long. So the question becomes: How do you design hardware today that can adapt to software that has not yet been productized and may not become relevant for another 8 to 10 years?
  • Issues with the installed base. According to the Wolf Street estimate, less than one-third of the vehicles on the road in the U.S. today are 5 years old or newer. If roughly a third of the fleet turns over every five years, the annual replacement rate is logically far lower, on the order of 5% to 10% (one-third divided by five years is about 6.7%). That means embedded voice assistant technology will trickle, not flood, into the market.

These challenges confront any automaker or vendor that wants to develop voice assistants. They also apply to any advancing software technology automakers would like to pursue while consumers adopt it rapidly through other means; it makes sense that consumers are simply choosing the voice assistants and navigation on their smartphones over embedded solutions.

Exiting Fantasyland, Entering Tomorrowland

What can the auto industry do to move on from the concepts of Fantasyland?

  • Watch out for and learn from the aftermarket disruptors. Amazon is launching Echo Auto, a dash-mounted Alexa device that connects to a user’s smartphone and the car’s speakers (Bluetooth or auxiliary input). It is likely that other nimble tech players will build aftermarket solutions.
  • Embrace interoperability. Automaker-branded voice assistant solutions face a significant uphill battle for consumer mindshare. The solution is to embrace interoperability and build platforms that can integrate with branded voice assistants. A Sirius executive told attendees at the pre-CES Consumer Telematics Show that Sirius, which has 40 million subscribers mostly bound to cars, is developing an open platform to integrate any or all willing smart assistants into its system. Consumers will be able to use the smart voice assistant of their choice and have continuity among their home, car, and smartphone.
  • Focus on OTA. It was only a few years ago that consumers began to expect over-the-air (OTA) software updates for mobile phones. Now it’s time for automakers to fully embrace OTA as a strategy for future-proofing their hardware-software fusion.
  • Adopt software development best practices. During a discussion with one prominent OEM, its user experience (UX) designer for next-generation vehicles said the company is finally adopting Agile development practices. The OEM is building software in smaller, more manageable pieces rather than one giant release, allowing designers to work more quickly. He also said that to combat the industry’s notoriously long lead times, they are starting to put prototypes in front of real customers as part of the design process. “We have to take on more of that responsibility instead of relying on the supplier base,” he said.
  • Find a sweet-spot use case. The best example of that at CES was from Tier 1 vendor Magneti Marelli, which showed a concept for an exterior virtual assistant. With a partner, Magneti Marelli developed a conversational external HMI that “[leverages] voice commands and a 2D avatar projected in the rear window of our test vehicle. Using the integrated Smart Corner sensors to identify when a person is approaching the vehicle, the avatar can perform commands for the driver and passengers such as opening the trunk or unlocking the doors.”
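The interoperability idea above, such as the open platform Sirius described, amounts to having the car's head unit depend on a common assistant contract rather than any one vendor. A minimal sketch, with entirely hypothetical class and method names (no real platform defines this interface):

```python
from abc import ABC, abstractmethod

class VoiceAssistant(ABC):
    """Common contract any branded assistant adapter must satisfy (illustrative)."""

    @abstractmethod
    def handle(self, utterance: str) -> str:
        """Send a spoken request to the assistant and return its reply."""

class EchoAssistant(VoiceAssistant):
    """Stand-in adapter for one branded assistant (hypothetical behavior)."""

    def handle(self, utterance: str) -> str:
        return f"[echo] {utterance}"

class HeadUnit:
    """The car's HMI depends only on the VoiceAssistant contract,
    so the consumer's preferred assistant can be swapped in."""

    def __init__(self, assistant: VoiceAssistant):
        self.assistant = assistant

    def on_speech(self, utterance: str) -> str:
        return self.assistant.handle(utterance)
```

Any assistant that implements the contract can be plugged in without changing the head unit, which is the continuity-across-home-car-phone property the Sirius approach is after.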

Magneti Marelli’s avatar is an example of a practical use case for an automotive voice assistant; it meets the need for voice-based controls outside the vehicle. Perhaps just as important, such an assistant can be developed against a much more limited domain of ways it may be asked to perform a task, as opposed to the very complex domain of understanding required of a universal voice assistant.
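To see why a limited domain simplifies the problem, consider a sketch of intent recognition for just a couple of commands. The intents and phrasings below are hypothetical, not Magneti Marelli's actual grammar; the point is that a handful of commands can be covered by a small pattern table, where a universal assistant needs a general language model:

```python
import re

# Hypothetical command domain: a few intents, each with a few phrasings.
INTENT_PATTERNS = {
    "open_trunk":   [r"\bopen\b.*\btrunk\b", r"\bpop\b.*\btrunk\b"],
    "unlock_doors": [r"\bunlock\b.*\bdoors?\b", r"\bopen\b.*\bdoors?\b"],
}

def recognize_intent(utterance):
    """Return the matching intent name, or None for out-of-domain requests."""
    text = utterance.lower()
    for intent, patterns in INTENT_PATTERNS.items():
        if any(re.search(p, text) for p in patterns):
            return intent
    return None  # reject anything outside the supported commands
```

Rejecting everything out of domain, rather than attempting to understand it, is exactly the latitude a narrow use case like an exterior assistant affords.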
