Voice assistants and services in cars today are admittedly hit and miss, but the world is on a path to fully autonomous vehicles. Voice-enabled virtual digital assistants (VDAs) will play a key role in the increasingly autonomous future car.
How long will it be before we reach Level 5, fully autonomous vehicles? One prominent forecaster, Tony Seba of RethinkX, points to the year 2030:
By 2030, within 10 years of regulatory approval of Autonomous Vehicles (AVs), 95 percent of U.S. passenger miles traveled will be served by on-demand autonomous electric vehicles owned by fleets, not individuals, in a new business model we call “transport-as-a-service” (TaaS).
As vehicles become increasingly autonomous, the time spent in them becomes less about operations and more about entertainment and work. Auto manufacturers and fleet operators believe value-added services in the form of VDAs will be a significant element of their businesses. Because the trend will be toward fleets rather than individually owned vehicles, cloud-based, voice-controlled VDAs will be key.
In a comprehensive blog post on July 11, 2018, Patrick Gaelweiler of Nuance sketched out some of the challenges AVs present and how Nuance intends to address them with its VDAs:
When starting fresh and new with a car, drivers and passengers need to get used to it and to all the functionalities it offers. When buying a car, this initial service is typically provided by the dealer, but for shared vehicles, smart car manuals included in the automotive assistant can play a key role. Leveraging advanced AI technologies, these smart car manuals enable drivers to gain information about vehicle features and the car’s current status … voice-enabled interaction with the car manual will help users to get settled and will make the learning curve as smooth and short as possible … Today’s modern systems offer local functionality to save preferred settings for seating, steering wheel position, mirrors and more, allowing drivers to adjust to their personalized settings with just the tap of a fingertip. For shared vehicle purposes, these preferred settings could be stored in the cloud and enriched with further information such as preferred air conditioning setting or favorite radio station to personalize the user experience right from start ….
Sharing a car also demands trust; drivers expect to find the car neat and clean and in good condition. In contrast to an owned vehicle, drivers aren’t in control of the vehicle’s maintenance, so they rely on the fleet operator to keep the vehicles regularly maintained and in good condition.
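The blog’s idea of cloud-stored preference profiles for shared vehicles can be sketched roughly as follows. The field names, units, and JSON payload here are illustrative assumptions, not Nuance’s actual schema:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class DriverProfile:
    """Hypothetical cloud-synced preference record for a shared-fleet driver."""
    driver_id: str
    seat_position_mm: int      # seat-rail travel, millimeters (assumed unit)
    mirror_angles: tuple       # (left, right) angles in degrees
    climate_celsius: float     # preferred cabin temperature
    radio_station: str         # favorite station, per the blog's example

def to_cloud_payload(profile: DriverProfile) -> str:
    """Serialize the profile for upload to a (hypothetical) fleet backend."""
    return json.dumps(asdict(profile))

def from_cloud_payload(payload: str) -> DriverProfile:
    """Restore a profile when the driver is recognized in a different vehicle."""
    data = json.loads(payload)
    data["mirror_angles"] = tuple(data["mirror_angles"])  # JSON has no tuples
    return DriverProfile(**data)
```

A driver recognized in any vehicle of the fleet would have their profile fetched and applied before they pull away; the round trip through JSON is what makes the settings portable across cars.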
Society of Automotive Engineers Automation Levels
(Source: Society of Automotive Engineers)
Today’s Automotive Virtual Digital Assistants
Voice-controlled VDAs are emerging today from automotive original equipment manufacturers (OEMs) to provide entertainment and car diagnostic functions. (Note: on-board emergency services like OnStar are generally telephone services operated by live agents; they are not considered VDAs.)
Nuance, with its Dragon Drive solution, is working with automotive OEMs to produce VDAs, as are Amazon, Google, iFlytek, and Alibaba. At the Beijing Motor Show in April 2018, it was announced that Daimler, Audi, and Volvo will use Alibaba-built VDAs:
Daimler, Audi, and Volvo car owners in China will be able to use Alibaba’s Tmall Genie voice-controlled assistant to monitor fuel levels, mileage, and engine and battery status, as well as control car doors, windows, and air conditioning from home via voice command. The feature is expected to launch in the near future—Volvo plans to use the Tmall Genie service on its XC90, S90, and XC60 models; Daimler and Audi haven’t yet specified which will implement the voice assistant.
Key Industry Player: Nuance
Nuance is a provider of AI-based automotive assistant technology known as Dragon Drive, which makes highly personalized, voice-activated services available through the car’s infotainment console. Nuance says Dragon Drive ships in vehicles from Toyota, Audi, BMW, Fiat, Ford, GM, Hyundai, SAIC, and others. The company also recently announced a deal with Mercedes-Benz.
Nuance’s AI engine advances natural language processing (NLP), allowing the system to understand context and permitting the user to interrupt the assistant at any time to issue a command or dictate content. The system can recognize individual speakers in the car via voice biometrics and microphone arrays.
Nuance also offers advanced AI-enabled text messaging: the automotive assistant can understand the context of an incoming message and proactively suggest a response or take an action based on its contents. The user can dictate instructions back to the assistant in natural language (NL), and the system uses context-detection algorithms to distinguish commands meant for the assistant from conversations among the vehicle’s other occupants.
Tractica spoke with Robert Policano, Product Manager, Dragon Drive Innovation Showcase, about Nuance’s vision and strategy for the automotive VDA use case. Policano explained:
Our portfolio has five core themes or attributes. One, we take a hybrid approach. We embed our technology into the vehicle infotainment architecture, but there are also cloud-based aspects to the system. This ensures persistent usability regardless of cellular connectivity and provides nearly instantaneous feedback to the user.
Policano added that when the cloud is engaged, Nuance can continuously train the system, updating its language models:
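The hybrid approach Policano describes can be sketched as a simple routing policy: prefer the cloud recognizer when connected, but always keep the embedded model as a fallback so the assistant stays usable with no cellular coverage. The function names and timeout value are hypothetical, not Nuance’s API:

```python
def recognize(audio, connected, cloud_asr, embedded_asr, timeout_s=0.3):
    """Hybrid embedded/cloud routing sketch.

    cloud_asr and embedded_asr are stand-ins for the two recognizers;
    the cloud path is tried first when a connection exists, and any
    timeout degrades gracefully to the on-device model rather than
    blocking the user.
    """
    if connected:
        try:
            return cloud_asr(audio, timeout=timeout_s)
        except TimeoutError:
            pass  # fall through to the embedded recognizer
    return embedded_asr(audio)
```

The point of the pattern is the guarantee in the last line: whatever the network does, the embedded recognizer always answers.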
Two, multi-modality. Our assistant is multi-modal in terms of input and output methods in the vehicle environment—speech, visual, haptics, and handwriting recognition. For example, we have integrated eye-tracking with the assistant, so you could ask, “What is that building over there?” and receive details on opening hours, rating, description, etc.
Three, our assistant uses conversational AI to power what we call smart domains to complete driving-related tasks and anticipate the driver’s needs—all based on machine learning for user behavior to deliver results that are customized to the user and the context of the request. For example, you could say “Find cheap parking near Lincoln Center.” Since the assistant is integrated with the car’s sensors, it knows there is rain on the windshield and will deduce that covered parking may be preferred.
Four, “Just Talk.” With Just Talk, the assistant is always listening and using AI to understand whether it is being spoken to. For instance, when we talk, we naturally pause before we direct a statement to someone else. Just Talk incorporates that flow using a combination of real-time speech-to-text and natural language processing to understand the intent of the request. Most systems today still require the push of a button to activate the assistant, and we’re just now starting to see a wake-up word being used instead, for example in the new Mercedes MBUX system which wakes up with the words ‘Hey Mercedes.’
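A toy stand-in for the “Just Talk” idea can make the mechanism concrete. Nuance’s real system runs streaming speech-to-text plus NLP; in this sketch, a pause before the utterance (the natural cue Policano mentions) and a recognizable command verb gate the assistant instead. The threshold and verb list are invented for illustration:

```python
COMMAND_VERBS = ("navigate", "call", "play", "find", "set")  # illustrative only

def is_addressed(transcript, pause_before_s):
    """Crude addressing detector: was this utterance meant for the assistant?

    Combines two weak signals: speakers tend to pause before redirecting
    speech to a new listener, and requests to an assistant tend to open
    with an imperative command verb.
    """
    has_pause = pause_before_s >= 0.5  # assumed threshold, in seconds
    has_intent = any(transcript.lower().startswith(v) for v in COMMAND_VERBS)
    return has_pause and has_intent
```

A production system would replace both signals with learned models, but the shape is the same: the assistant acts only when the evidence says it is being spoken to, with no button press or wake word.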
We also supply a suite of acoustic solutions called Speech Signal Enhancement (SSE) to enhance voice input, reduce ambient noise, cancel echo, etc. SSE permits the OEM to create acoustic zones specific for each passenger in the vehicle. And through voice biometrics, user voice prints allow the assistant to recognize and switch to individual profiles and respond to them personally.
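Policano’s covered-parking example shows how sensor context can reshape results. A minimal sketch of that re-ranking, with invented field names and an arbitrary weight rather than any real tuning:

```python
def rank_parking(options, raining):
    """Re-rank candidate parking lots using vehicle sensor context.

    Each option is a dict with 'price' and 'covered' keys (assumed shape).
    Cheaper lots score higher; when the rain sensor reports rain, covered
    lots get a boost so the assistant's first suggestion keeps the driver dry.
    """
    def score(lot):
        s = -lot["price"]            # cheaper is better
        if raining and lot["covered"]:
            s += 10                  # illustrative weight, not a tuned value
        return s
    return sorted(options, key=score, reverse=True)
```

The same pattern generalizes to any sensor signal the assistant can read: the query stays “cheap parking near Lincoln Center,” and the context silently reorders the answers.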
When asked whether he thought the interface to cars is changing from manual controls to voice, Policano said, “Yes, I think the industry at large recognizes this trend. Vehicles are also evolving; they will change in form factor. Today we’re seeing Level 3 vehicles at best (in SAE terms, Level 3 is conditional automation), so not fully automated vehicles. We are now building solutions for Level 4 and 5 vehicles.”
Policano said Nuance does not see itself competing with Google, Amazon, or Apple; Nuance’s automotive assistants are made to order. The assistant needs to be deeply integrated with the car and with backend systems for service, call centers, and marketing platforms. “However, we can also integrate into and refer commands to different VDAs like Alexa or Google Home,” said Policano.