Artificial intelligence (AI) processing today takes place mostly in cloud-based data centers. The bulk of that processing is the training of deep learning models, which requires heavy compute capacity. In the last six years, the industry has experienced 300,000X growth in compute requirements, with graphics processing units (GPUs) providing most of that horsepower. According to a new report from Tractica, however, as the diversity of AI applications grows, an increasing amount of AI processing will be handled within edge devices rather than in a centralized, cloud-based environment.
Tractica forecasts that annual worldwide shipments of AI edge devices will increase from 161.4 million units in 2018 to 2.6 billion units by 2025. Tractica’s analysis indicates that the top AI-enabled edge devices, in terms of unit volumes, will be as follows:
- Mobile Phones
- Smart Speakers
- Head-Mounted Displays
- Consumer and Enterprise Robots
- Security Cameras
“Privacy, security, cost, latency, and bandwidth all need to be considered when evaluating cloud versus edge processing,” says research director Aditya Kaul. “Depending on the AI application and device category, there are several hardware options for performing AI edge processing. These options include CPUs, GPUs, ASICs, FPGAs, and SoC accelerators.”
Tractica’s report, “Artificial Intelligence for Edge Devices,” provides a quantitative and qualitative assessment of the market opportunity for AI edge processing across several consumer and enterprise device markets. The device categories covered include automotive, consumer and enterprise robots, drones, head-mounted displays, mobile phones, PCs/tablets, security cameras, and smart speakers. The report segments each device category by processor type, power consumption, compute capacity, and training versus inference, with unit shipment and revenue forecasts for the period from 2017 to 2025. An Executive Summary of the report is available for free download on the firm’s website.