September 24, 2019

Amazon, Baidu, BMW, Cerence, ecobee, Microsoft, Orange, Salesforce, SFR, Sonos, Spotify, Sound United, Tencent, Verizon and more to promote customer choice by supporting multiple, interoperable voice services on a single device

With multiple, simultaneous wake words, customers can access multiple voice services by simply saying the corresponding wake word – from Alexa and Cortana to Orange’s Djingo, Salesforce’s Einstein, and more

Solutions providers like Intel, MediaTek, NXP and Qualcomm to develop hardware and reference solutions that support multiple wake word engines

SEATTLE--Sep. 24, 2019-- Today, Amazon (NASDAQ: AMZN) and leading technology companies announced the Voice Interoperability Initiative, a new program to ensure voice-enabled products provide customers with choice and flexibility through multiple, interoperable voice services. The initiative is built around a shared belief that voice services should work seamlessly alongside one another on a single device, and that voice-enabled products should be designed to support multiple simultaneous wake words. More than 30 companies are supporting the effort, including global brands like Amazon, Baidu, BMW, Bose, Cerence, ecobee, Harman, Logitech, Microsoft, Salesforce, Sonos, Sound United, Sony Audio Group, Spotify and Tencent; telecommunications operators like Free, Orange, SFR and Verizon; hardware solutions providers like Amlogic, InnoMedia, Intel, MediaTek, NXP Semiconductors, Qualcomm Technologies, Inc., SGW Global and Tonly; and systems integrators like CommScope, DiscVision, Libre, Linkplay, MyBox, Sagemcom, StreamUnlimited and Sugr.

“Multiple simultaneous wake words provide the best option for customers,” said Jeff Bezos, Amazon founder and CEO. “Utterance by utterance, customers can choose which voice service will best support a particular interaction. It’s exciting to see these companies come together in pursuit of that vision.”

The Voice Interoperability Initiative is built around four priorities:

  • Developing voice services that can work seamlessly with others, while protecting the privacy and security of customers
  • Building voice-enabled devices that promote choice and flexibility through multiple, simultaneous wake words
  • Releasing technologies and solutions that make it easier to integrate multiple voice services on a single product
  • Accelerating machine learning and conversational AI research to improve the breadth, quality and interoperability of voice services

Multiple, interoperable voice services deliver choice and flexibility for customers

Companies participating in the Voice Interoperability Initiative will work with one another to ensure customers have the freedom to interact with multiple voice services on a single device. On products that support multiple voice services, the best way to promote customer choice is through multiple simultaneous wake words, so customers can access each service simply by saying the corresponding wake word. Customers get to enjoy the unique skills and capabilities of each service, from Alexa and Cortana to Djingo, Einstein, and any number of emerging voice services.
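As an illustration of the multi-wake-word pattern described above, the following is a minimal sketch in Python: several wake word engines listen to the same microphone audio at once, and each utterance is routed to whichever voice service's wake word was spoken. The engine, service, and function names are hypothetical placeholders, not any participant's actual SDK.

    # Minimal sketch (hypothetical names, not any vendor's SDK): a device runs
    # several wake word detectors in parallel and routes each utterance to the
    # service whose wake word fired.

    from dataclasses import dataclass
    from typing import Callable, Dict

    @dataclass
    class VoiceService:
        name: str                        # e.g. "Alexa", "Cortana", "Djingo"
        detect: Callable[[bytes], bool]  # hypothetical wake word detector for this service
        handle: Callable[[bytes], None]  # hypothetical handler that streams audio to the service

    def route_audio_frame(frame: bytes, services: Dict[str, VoiceService]) -> None:
        """Run every registered wake word engine on the same audio frame and
        hand the interaction to the first service whose wake word is detected."""
        for service in services.values():
            if service.detect(frame):    # each engine listens simultaneously
                service.handle(frame)    # only the chosen service receives the utterance
                break                    # one assistant handles this interaction

In an arrangement like this, each service supplies its own detector, so adding a new assistant to a device means registering one more entry rather than replacing the existing ones.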

Companies participating in the initiative – including Amazon, Baidu, BMW, Bose, Cerence, ecobee, Free, Harman, Microsoft, Orange, Salesforce, SFR, Sonos, Sound United, Sony Audio Group, Spotify and Tencent – are committed to adopting a similar technological approach, whether building voice-enabled products or developing voice services and assistants of their own.

“We’re in the midst of an incredible technological shift, in which voice and AI are completely transforming the customer experience,” said Marc Benioff, Chairman and co-CEO at Salesforce. “We look forward to working with Amazon and other industry leaders to make Einstein Voice, the world's leading CRM assistant, accessible on any device."

“We value freedom of choice, empowering listeners to choose what they want to listen to and how they want to control it,” said Patrick Spence, Sonos CEO. “We were the first company to have two voice assistants working concurrently on the same system, a major milestone for the industry. We are committed to a day where we’ll have multiple voice assistants operating simultaneously on the same device, and are working to make that happen as soon as possible.”

“Access to the music and podcasts you love should be simple, regardless of the device you’re on, or the voice assistant you use,” said Gustav Söderström, Chief R&D Officer, Spotify. “We are excited to join the Voice Interoperability Initiative, which will give our listeners a more seamless experience across whichever voice assistant they choose, including the ability to ask for Spotify directly.”

Developers and device makers have a shared commitment to customer trust, and will work together to protect the security and privacy of customers interacting with multiple voice services. Companies participating in the initiative will work to ensure this commitment extends to products that support multiple, simultaneous wake words.

Making multiple, simultaneous wake words more accessible for developers and device makers

Alexa machine learning and speech science technology is designed to support multiple, simultaneous wake words. As a result, any device maker building with the Alexa Voice Service (AVS) can build powerful, differentiated products that feature Alexa alongside other voice services.
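Because several wake word engines run side by side, a device also needs a way to ensure that only one voice service responds to a given interaction. The sketch below illustrates one possible arbitration approach under that assumption; it is not the Alexa Voice Service API, and the class and method names are hypothetical.

    # Hypothetical coexistence sketch (not the AVS API): while one voice service
    # handles an interaction, the other wake word engines stay loaded but are
    # denied focus, so assistants do not talk over one another.

    class AgentArbiter:
        def __init__(self) -> None:
            self.active_agent = None          # service currently handling a request

        def request_focus(self, agent_name: str) -> bool:
            """Grant voice focus to one agent at a time."""
            if self.active_agent is None:
                self.active_agent = agent_name
                return True
            return False                      # another agent is mid-interaction; deny focus

        def release_focus(self, agent_name: str) -> None:
            if self.active_agent == agent_name:
                self.active_agent = None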

Still, device makers interested in supporting multiple, simultaneous wake words often face higher development costs and increased memory load on their devices. To address this, the Voice Interoperability Initiative will also include support from hardware providers like Amlogic, Intel, MediaTek, NXP Semiconductors and Qualcomm Technologies, Inc.; original design manufacturers (ODMs) like InnoMedia, Tonly and SGW Global; and systems integrators like CommScope, DiscVision, Libre, Linkplay, MyBox, Sagemcom, StreamUnlimited and Sugr. As part of the initiative, these companies will develop products and services that make it easier and more affordable for OEMs to support multiple wake words on their devices.

“Giving people flexibility in how they interact with their PCs is foundational to a great user experience, and the mission of this initiative aligns with Intel’s Project Athena innovation program,” said Ran Senderovitz, vice president and general manager of Mobile Product Marketing, Client Computing Group at Intel Corporation. “We are excited to collaborate to drive the industry to scale voice experiences beyond the many 10th Gen Intel Core based systems expected to launch with multiple voice assistants this year.”

“Qualcomm chipsets allow multiple wake word engines to run simultaneously on a single device already, and we believe joining the initiative will help make these solutions accessible to more device makers and on more form factors,” said Rahul Patel, senior vice president and general manager, connectivity, Qualcomm Technologies, Inc. “We are excited to work closely with OEMs and developers to understand their needs in this fast growing area of innovation and to develop powerful and scalable solutions to support multiple services on voice-enabled products.”

Advancing the state of the art in machine learning and wake word technology

The academic community has played a vital role in advancing the core machine learning and conversational AI behind voice technology. Companies involved in the initiative will work with researchers and universities to further accelerate the state of the art in machine learning and wake word technology, from developing algorithms that allow wake words to run on portable, low-power devices to improving the encryption and APIs that ensure voice recordings are routed securely to the right destination. This continued innovation will provide an important building block for long-term advancements that improve the quality, breadth and interoperability of voice services in the future.

“Customers want flexibility, in addition to greater value and functionality. They don’t want to be locked into using a specific voice service, and that means we’re going to see more households become multi-assistant environments,” said Mariana Zamoszczyk, senior analyst for Smart Living at Ovum. “This trend means that device makers and AI developers need to prioritize interoperability with other services, and work to deliver differentiated, personalized experiences through their own products or assistants.”

Participating companies will have more detail to share on the initiative and compatible products in the coming months. To learn more about the program and opportunities to get involved, visit http://developer.amazon.com/alexa/voice-interoperability.

About Amazon

Amazon is guided by four principles: customer obsession rather than competitor focus, passion for invention, commitment to operational excellence, and long-term thinking. Customer reviews, 1-Click shopping, personalized recommendations, Prime, Fulfillment by Amazon, AWS, Kindle Direct Publishing, Kindle, Fire tablets, Fire TV, Amazon Echo, and Alexa are some of the products and services pioneered by Amazon. For more information, visit amazon.com/about and follow @AmazonNews.

Contacts

Amazon.com, Inc.
Media Hotline
Amazon-pr@amazon.com
www.amazon.com/pr
