NVIDIA Expands its AI Edge Capabilities


NVIDIA is not necessarily known for low-power AI applications. Most of its artificial intelligence (AI) focus has been on high-power cloud and data center applications, where its Tesla graphics processing units (GPUs) have largely led the market. Over the last 12 months, however, NVIDIA has accelerated its reach beyond the data center and is now building out its portfolio to serve low-power AI edge applications.

When it comes to AI at the edge, NVIDIA has been focusing on the automotive sector with its Xavier platform and on robotics and drones with its Jetson family of products. The Jetson family started in 2015 with the Jetson TX1, which was mainly offered as a hobbyist board, and has since expanded with the Jetson TX2 and the Jetson AGX Xavier module. The Jetson TX2 (and TX1) products have found applications in drones and consumer robots, which generally operate in the 10 W-20 W (or higher) power range and provide 1-2 TOPS of performance. The Jetson AGX Xavier module is unique in that it supports power budgets between 10 W and 30 W while delivering 32 TOPS. With the Jetson TX2, TX1, and AGX Xavier, NVIDIA targets AI processing in a wide variety of robot types, including consumer robots, enterprise robots, drones, and industrial robots.
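As a rough back-of-the-envelope sketch, the figures above imply very different efficiency points across the Jetson family. The numbers below are taken from the ranges quoted in this article (mid-points where a range is given), not from official specification sheets:

```python
# Crude performance-per-watt comparison across Jetson modules,
# using the figures quoted in the article (mid-points of ranges).
modules = {
    # name: (assumed power budget in watts, peak performance in TOPS)
    "Jetson TX2": (15, 1.5),        # mid-point of 10 W-20 W, 1-2 TOPS
    "Jetson AGX Xavier": (30, 32),  # 30 W mode at 32 TOPS
}

def tops_per_watt(power_w: float, tops: float) -> float:
    """Crude efficiency metric: peak TOPS divided by power budget."""
    return tops / power_w

for name, (power_w, tops) in modules.items():
    print(f"{name}: {tops_per_watt(power_w, tops):.2f} TOPS/W")
```

This ignores workload-dependent throttling and precision differences (FP16 versus INT8), but it illustrates why AGX Xavier sits at a very different point on the efficiency curve than the TX2.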

Jetson Nano for Low Power AI Applications

The Jetson Nano is the latest addition to the Jetson family, and it further expands NVIDIA's capabilities. The company has already been pushing its inference capabilities in the cloud with the Tesla T4, but it has largely missed out on low-power applications in the 1 W-10 W range. The Jetson Nano brings its capabilities down to the 5 W-10 W range, which should suit small consumer robots and home security camera applications. At that power budget, the Jetson Nano offers a very impressive 472 GFLOPS of performance (approximately 1 TOPS).

The Jetson Nano is particularly well suited to processing video with deep learning algorithms, supporting a variety of large deep neural networks. It can process eight 1080p streams in parallel, running object detection on all eight streams simultaneously at 30 fps. According to NVIDIA, although Google's Edge TPU (Coral) performs better on a few deep learning models like MobileNet, the Jetson Nano offers much better performance and applicability across a broad range of frameworks and models.
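To put those stream numbers in context, here is a hedged arithmetic sketch of the compute budget they imply, using only the figures quoted above (it ignores memory bandwidth, video decode overhead, and model specifics):

```python
# Implied per-frame compute budget for the Jetson Nano's claimed
# workload: eight 1080p streams with object detection at 30 fps each.
streams = 8
fps_per_stream = 30
peak_tops = 1.0  # approximate peak quoted for the Jetson Nano

# Total detector inferences per second across all streams.
inferences_per_second = streams * fps_per_stream  # 240 frames/s

# Operations available per frame if the full peak were usable.
ops_budget_per_frame = peak_tops * 1e12 / inferences_per_second

print(f"Total inferences per second: {inferences_per_second}")
print(f"Per-frame compute budget: {ops_budget_per_frame / 1e9:.1f} GOPs")
```

Roughly 4 GOPs per frame is in the range of compact detection networks, which is consistent with NVIDIA's claim that the Nano can sustain this workload only with efficient models.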

[Figure: Jetson Nano Performance Benchmarks versus Competition (Source: NVIDIA)]

NVIDIA provides a range of software development kits along with Jetson, including the JetPack software suite, the Isaac simulation platform, and the DeepStream video analytics solution. The company's success with AI developers is largely due to its software capabilities, which make it much easier for developers to build applications on top of its platforms.

From Hobbyists to OEMs

Just like the Jetson TX1, the Nano is targeted at the hobbyist market to an extent, with a $99 price point for the Nano developer kit. However, NVIDIA hopes that the Nano's broad framework support and cutting-edge performance will drive OEMs to use it in end products. The company has priced the module itself at $129 (for 1,000+ units), with availability in June 2019. NVIDIA is pitching the Nano as a scalable platform that can power the next generation of AI devices, not just a robotics kit that a few developers and hobbyists will explore.

The Jetson Nano can be thought of, to some extent, as the Raspberry Pi of AI applications: a low-priced AI development board targeted at low-power applications. But it goes beyond the ambitions of the Raspberry Pi. Enthusiasts will argue that the Pi is much cheaper and far more open source. In the AI marketplace, though, the Nano is possibly the closest equivalent of the Pi – until a cheaper and much more open hardware platform emerges.

A Positive Addition for Market Expansion

In the end, the Jetson Nano is a great addition to a Jetson portfolio that now covers both the high end of AI edge applications, such as industrial robots (30 W and 32 TOPS), and the lower end, such as consumer robots (5 W and 1 TOPS). The Nano certainly extends NVIDIA's reach among AI edge devices, enabling integrations like network video recorders (NVRs) that process video feeds at the edge for smart city applications, handling eight channel feeds in parallel. Tractica has covered the use of AI processing in NVRs in its Artificial Intelligence for Edge Devices report; NVRs, along with security cameras, will become endpoints for AI edge processing.

NVIDIA still lacks solutions for the 1 W-5 W device market – including smartphones, augmented and virtual reality devices, and tablets – where AI at the edge has the largest market potential. NVIDIA may already be hitting power-versus-performance limits on what its GPU architecture can deliver, but I would not be surprised if the company delivers that capability within 12-18 months. Although it may still sit at the high end of the price curve (due to its GPU IP) for 1 W-5 W solutions, such products would open the market to much higher volumes, from thousands of units to millions. Those higher volumes could turn NVIDIA into a completely different type of silicon provider.
