Bragi, a pioneer in the smart headphone market and maker of the Dash headphones, has announced the beta launch of Bragi Intelligent Edge featuring nanoAI, a business-to-business (B2B) software framework and platform that helps ultra-low power edge devices incorporate AI edge processing. Bragi has been using intelligent AI edge processing technology in its Dash headphones to perform complex deep learning tasks, including both training and inference, on an ARM Cortex M processor. Tractica received an early embargoed briefing about this technology and the upcoming platform release back in November 2017, when we met with Nikolaj Hviid (CEO) and Dr. Friedrich Förstner (Chief Scientist) at Bragi’s Munich offices.
Based on our discussions with Bragi at the time, it was quite clear that the company’s ambitions are broader than the smart headphone market. Bragi is one of the first companies to have truly championed the case for a smart headphone as an in-ear computer. Compared to other smart headphones in the market, including the Apple AirPod, Bragi’s device is much more capable and, more importantly, the company has an ambitious vision: to become one of the leading platforms enabling the move of AI processing toward the edge, especially for ultra-low power devices.
An Innovative Use of AI in a Platform Created from Scratch
The only way to achieve true intelligence in a headphone is for the device to process data streams in real time, on the device itself. A few years back, when Bragi went looking in the market for platforms and frameworks that would allow it to squeeze a large, complex deep learning network into a tiny ARM Cortex M processor, it could not find much in terms of off-the-shelf platforms. As a result, the company created an embedded AI group within Bragi, headed by Dr. Förstner, that was given the task of creating this platform from scratch. Tractica has covered Bragi’s innovative use of embedded AI in the latest Smart Headphones report, an excerpt of which is provided below:
Bragi has developed a custom neural network for the Dash Pro that sits on the ARM Cortex [central processing unit] CPU core, and is designed to be an embedded neural network that constantly learns about users’ preferences and adjusts performance accordingly to create highly customized listening experiences. The embedded neural network from Bragi takes inspiration from the brain of a fruit fly, which has more than 100,000 neurons and more than 800 clusters of photoreceptors. Bragi has clearly leapfrogged its competitors with an embedded neural network design that aims to train and process AI algorithms on the device, rather than have the training done in the cloud. This gives the Dash Pro an unparalleled advantage over other smart headphones, in that it provides real-time intelligence and smart capabilities, with the headphones learning on the fly and adjusting settings and performance to maximize the [user experience] UX. Whether it is improving battery life, sound quality, accuracy of sensor data, connectivity, voice translation, or on-board voice assistants, these are all features that can be made better with high-performance chipsets, along with neural networks. Over time, almost all smart headphones will need to adopt a similar architecture using embedded neural networks that run on a powerful device, making them in-ear computers.
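Bragi’s actual network architecture and training scheme are proprietary, but the core idea of on-device adaptation can be illustrated with a deliberately minimal sketch: a one-parameter model nudged toward a user’s preference with each interaction, with no cloud round-trip. The scenario (ambient noise level, volume preference) and the update rule are our own invention for illustration, not Bragi’s method.

```python
# Illustrative sketch of on-device online learning (NOT Bragi's actual
# network): a 1-parameter linear model updated by stochastic gradient
# descent on squared error, one step per user interaction.

def online_update(weight, x, target, lr=0.1):
    """One gradient step: w <- w + lr * (target - w*x) * x."""
    error = target - weight * x
    return weight + lr * error * x

# Simulate a user repeatedly nudging volume toward a preferred level
# (target) under a given normalized ambient-noise reading (x).
weight = 0.0
for _ in range(200):
    weight = online_update(weight, x=0.8, target=0.6)

print(round(weight, 3))  # converges to 0.6 / 0.8 = 0.75
```

A production embedded network would of course be far larger and heavily quantized, but the principle is the same: small incremental updates computed locally, so the device personalizes itself without shipping data off-device.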
Impressive Processing Capabilities, but Lacking Voice and Vision Capabilities
At the time of our meeting in November 2017, Bragi said that it was able to process 20 streams of sensor data using 8% of the ARM processor’s resources, saving 20% of battery life on the Dash headphones. According to its latest press release, it has since squeezed this down to less than 1% of processing resources while handling 27 different sensor streams. This is quite impressive considering the size, processing, and power constraints of a tiny device like a smart headphone. On the other hand, most of these data streams do not touch upon complex voice, image, or video data, which will be crucial if Bragi wants its platform to serve robots, drones, and security cameras, all of which need voice and vision capabilities.
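To put the sub-1% figure in perspective, a back-of-envelope calculation shows roughly how many processor cycles that budget leaves per sensor sample. The core clock and per-stream sample rate below are our assumptions (Bragi has not published them); only the stream count and the 1% budget come from the press release.

```python
# Back-of-envelope compute budget (clock and sample rate are assumed,
# not disclosed by Bragi): cycles available per sample per stream
# when 27 streams share a 1% slice of a Cortex-M-class core.

clock_hz = 80_000_000        # assumed 80 MHz core clock
budget = 0.01                # <1% of processing resources (press release)
streams = 27                 # sensor streams (press release)
sample_rate_hz = 100         # assumed samples per second per stream

cycles_per_second = clock_hz * budget
cycles_per_sample = cycles_per_second / (streams * sample_rate_hz)
print(int(cycles_per_sample))  # ~296 cycles per sample per stream
```

A few hundred cycles per sample is plenty for compact sensor-fusion features, but it underlines why full voice or vision workloads, which demand orders of magnitude more computation per input, are a different class of problem.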
With the beta launch of the platform, Bragi hopes that a wide range of hardware manufacturers, spanning wearables, smart home devices, Internet of Things (IoT) sensors, and medical devices, will be able to leverage its Intelligent Edge platform to squeeze large, dense neural networks into a small footprint, enabling AI edge computing. Bragi is one of the first companies to offer a platform and framework that allows device original equipment manufacturers (OEMs) to develop AI edge capabilities. One would ideally expect semiconductor companies like ARM, Qualcomm, or Intel to dive into this head first. Bragi, with its Intelligent Edge platform, becomes an attractive acquisition target for these semiconductor players, all of which have ambitions around bringing AI to the edge. At the same time, Bragi would also be a good fit for Apple, Amazon, or Google, which could use Bragi’s technology stack to create a framework for embedded AI developers, expanding their operating system (OS) offerings and growing their individual device ecosystems. As covered in Tractica’s CES 2018 AI blog post, Chinese hardware companies are already ahead in providing platform stacks for integrating AI into everything from smart speakers to consumer robots.
Can an Internal Product Become a Third Party’s Chosen Platform?
The key test will be how robust Bragi’s platform proves to be, and whether it can survive the transition from an internal product to a supported platform that third-party OEMs can use efficiently. The major challenge for Bragi will be pivoting from what is effectively a business-to-consumer (B2C) hardware company to a B2B platform company.