This market research report was originally published on Tractica's website. It is reprinted here with the permission of Tractica.
Qualcomm is betting on device-based AI. Its new $100 million startup fund, announced on November 28, has a dedicated focus on edge and device-based AI. The fund will target startups advancing on-device AI across autonomous cars, robotics, computer vision, and the Internet of Things (IoT), cutting across applications, platforms, and machine learning technologies.
Qualcomm has so far been largely conservative with its AI strategy compared to Intel, NVIDIA, Arm, or even the smartphone vendors. For the most part, it has promoted its existing Snapdragon processors for AI workloads, using a heterogeneous combination of central processing unit (CPU), graphics processing unit (GPU), and digital signal processor (DSP) resources.
A few years back, Qualcomm introduced its Zeroth AI software, designed to work with the Snapdragon 820 processor, but it has since abandoned the Zeroth brand and strategy. That effort morphed into an SDK called the Neural Processing Engine (NPE), which orchestrates AI workloads across the CPU, GPU, and DSP.
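As a rough illustration of what that orchestration layer does, the sketch below models how an SDK might pick the best available compute target for a workload and fall back when one is unavailable. This is a minimal sketch assuming a simple priority-ordered fallback; the runtime names and the select_runtime helper are hypothetical and are not the actual NPE API.

```python
# Illustrative sketch only: priority-ordered runtime selection with fallback,
# loosely modeled on what a heterogeneous AI SDK such as Qualcomm's NPE does.
# The names and availability checks here are hypothetical, not the NPE API.
from enum import Enum


class Runtime(Enum):
    DSP = "dsp"   # typically the most power-efficient target for quantized models
    GPU = "gpu"   # good throughput for floating-point models
    CPU = "cpu"   # always available as a last resort


def select_runtime(available, preference=(Runtime.DSP, Runtime.GPU, Runtime.CPU)):
    """Return the first preferred runtime that the device reports as available."""
    for runtime in preference:
        if runtime in available:
            return runtime
    raise RuntimeError("No supported runtime available on this device")


if __name__ == "__main__":
    # Pretend this device exposes only GPU and CPU (e.g., no DSP driver present).
    device_runtimes = {Runtime.GPU, Runtime.CPU}
    print(select_runtime(device_runtimes))  # -> Runtime.GPU
```

In practice, the choice also depends on model precision, memory, and power budgets, but the priority-with-fallback pattern captures the basic idea of spreading AI workloads across heterogeneous silicon.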
Qualcomm has not pushed for dedicated neural network IP cores or application-specific integrated circuits (ASICs) for AI, but something could be brewing. The AI fund announcement may be a preview of a larger Qualcomm AI strategy yet to be revealed.
Apple and Google have been following a system-on-a-chip (SoC) accelerator or neural network IP core strategy, with dedicated IP sitting on top of their existing application processors. Apple's Neural Engine, a neural processing unit (NPU) on the A11 and A12 Bionic processors, and Google's image processing unit (IPU) on its Pixel 2 and Pixel 3 phones are examples of SoC accelerator cores that are becoming standard practice across device manufacturers.
Tractica has covered the SoC accelerator trend in its latest Artificial Intelligence for Edge Devices report, comparing it to other strategies such as using the CPU, GPU, or a separate dedicated ASIC. SoC accelerators are a good first step toward introducing on-device AI, as it is relatively simple and cheap to add a new core to an existing processor.
However, as the volume and size of AI workloads increase, a dedicated AI ASIC makes more sense. This is the approach Intel Movidius has been pushing in drones, and it is likely to trickle down into other device categories. Huawei's Ascend chips follow the same dedicated ASIC strategy and are, in fact, the strongest evidence that on-device AI and dedicated AI ASICs are entering the mainstream, from smartphones through to PCs, drones, automotive, and IoT sensors.
Qualcomm also announced the fund's first investment, AnyVision, a startup focused on face, body, and object detection. AnyVision uses proprietary technology to run its models on-device on existing chips and devices, maintaining privacy and data security. Investments like AnyVision will give Qualcomm a pulse on the application layer for on-device AI, especially improvements in the algorithms and software that help drive it. They should also give Qualcomm a better understanding of the hardware requirements for on-device AI, and of whether its Snapdragon-based NPE strategy will hold up in the future.
Qualcomm is also keen to position 5G as a driver for AI at the edge, particularly from a network edge perspective. As explained in an earlier blog post, the role of 5G in AI is not that straightforward. Tractica looks forward to Qualcomm providing more clarity on this topic as it wades deeper into AI at the edge.
Aditya Kaul
Research Director, Tractica