A look at select, cutting-edge research coming out of Qualcomm AI Research
This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm.
Artificial Intelligence (AI) is revolutionizing industries, products, and core capabilities by delivering dramatically enhanced experiences. However, this is just the start of the AI revolution. The field of AI, especially deep learning, is still in its infancy, with tremendous opportunity for exploration and improvement. For instance, today's deep neural networks are rapidly growing in size and use too much memory, compute, and energy. To make AI truly ubiquitous, it needs to run on the end device within a tight power and thermal budget. New approaches and fundamental research in AI, as well as the application of that research, are required to advance machine learning further and speed up adoption.
That’s where Qualcomm AI Research comes in. Qualcomm Technologies has a rich history of foundational research across technologies that have led to breakthrough innovations. Qualcomm AI Research brings together machine learning researchers across the organization to investigate a wide range of machine learning topics from fundamental deep learning research to applied AI. In this blog post, I’ll briefly discuss notable topics that show the breadth of our research, from quantization and unsupervised learning to fundamental long-term research like quantum AI. For more in-depth discussion, please attend my webinar.
Qualcomm AI Research drives leading research and development across the AI spectrum.
Quantization research for neural network power efficiency
One area of ongoing effort for us is neural network model optimization research for improved power efficiency and performance. Shrinking neural network model size is what will allow AI to scale and become ubiquitous. Quantization reduces the precision of neural network weight and activation computations, which results in lower power, lower memory bandwidth, lower storage, and higher performance. The challenge with quantization is maintaining model accuracy and automating the process. We have made significant progress on both fronts and are pushing the limits of what's possible with quantization.
Our goal is to quantize to lower bit widths while maintaining accuracy, increasing automation, reducing data required, and minimizing training. We’ve introduced three quantization methods over the past year to address these issues:
- Data-free quantization (DFQ) is an automated method that addresses bias and imbalance in weight ranges. It requires no training, is data free, and allows us to achieve 8-bit quantization without losing much accuracy.
- AdaRound (Adaptive Rounding) questions the common method of rounding and creates an automated method for finding the best rounding choice. It builds on data-free quantization methods, requires no training, and only needs minimal unlabeled data. It allows us to achieve 4-bit weight quantization without losing much accuracy.
- Bayesian bits is a novel method to learn mixed-precision quantization. It requires training and training data, but it allows us to jointly learn the bit-width precision and whether to prune nodes. It automates mixed-precision quantization and enables the tradeoff between accuracy and kernel bit-width, resulting in state-of-the-art results.
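To make the tradeoffs above concrete, here is a minimal sketch of symmetric, per-tensor uniform quantization in NumPy. This is an illustrative toy, not AIMET's implementation; the function names and the round-to-nearest policy (the default that AdaRound improves on) are assumptions for the example.

```python
import numpy as np

def quantize(w, num_bits=8):
    """Symmetric uniform quantization: map floats to signed integers."""
    qmax = 2 ** (num_bits - 1) - 1           # e.g. 127 for 8 bits
    scale = np.max(np.abs(w)) / qmax         # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q.astype(np.int32), scale

def dequantize(q, scale):
    """Map integers back to approximate float values."""
    return q * scale

rng = np.random.default_rng(0)
w = rng.normal(size=1000).astype(np.float32)  # stand-in for a weight tensor

q8, s8 = quantize(w, num_bits=8)
q4, s4 = quantize(w, num_bits=4)

err8 = np.mean((w - dequantize(q8, s8)) ** 2)
err4 = np.mean((w - dequantize(q4, s4)) ** 2)
print(f"8-bit MSE: {err8:.2e}, 4-bit MSE: {err4:.2e}")
assert err4 > err8  # lower bit widths lose more precision with naive rounding
```

Running it shows the 4-bit reconstruction error is noticeably larger than the 8-bit error, which is why lower bit widths need smarter choices such as AdaRound's learned rounding or Bayesian Bits' learned mixed precision.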
We’re very excited to see this progress in quantization. With these different quantization options, AI engineers and developers can select the method that fits their needs and make appropriate tradeoffs. We are much closer to making quantization a no-brainer for neural network inference.
I’m also excited by the speed at which our leading quantization research is being commercialized and shared with the community through papers, open sourcing, and SDKs. For example, DFQ is already available in the AI Model Efficiency Toolkit (AIMET), Qualcomm Innovation Center’s open-source project on GitHub, as well as the Qualcomm Neural Processing SDK.
Our leading quantization research is being quickly commercialized and shared with the AI community.
Unsupervised learning from RF for precise positioning
AI is a powerful tool, but the key is to intelligently apply AI to solve the right challenges – for example, challenges that are difficult to solve with traditional methods but much easier to solve with AI. One such challenge is determining a receiver’s precise position from radio frequency (RF) signals. Radio waves are all around us, and there is an opportunity to learn from them. In this research area, we are applying unsupervised learning to the RF signals to achieve centimeter-accurate positioning.
Consider the auto assembly line in the image below, where GPS and other techniques are infeasible. The environment is complex, with many irregular shapes and moving equipment. If we wanted to know the precise location of an assembly line worker (from the RF signals the smartphone receives), modeling the indoor RF propagation with traditional methods would be very complex. In other words, it is hard to precisely know the worker’s location since the RF signals we measure could arrive along different paths due to reflection, diffraction, and scattering from walls and various irregular objects like robot arms.
Unsupervised learning from RF can be used for precise positioning at an auto assembly line.
For this type of complex environment or any type of indoor positioning, we thought that AI coupled with domain knowledge of physics would be a good tool to learn the complex physics of propagation from the unlabeled RF. We call this hybrid approach “neural augmentation,” a technology that augments neural networks with human knowledge and algorithms, or vice versa. One benefit of a neural network learning the RF environment is that it can estimate the precise position of the RF receiver, and thus the location of the person.
The neural network we created uses a generative auto-encoder plus conventional channel modeling (based on physics of propagation) to train on unlabeled channel state information (CSI) observations and learn the environment. Our initial results from implementing neural unsupervised learning from RF for positioning are promising. The neural network learns the virtual transmitter locations up to rigid body transformations (shifts, reflections, rotations) completely unsupervised. With a few labeled measurements, map ambiguity is resolved to achieve cm-level positioning.
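The “up to rigid body transformations” caveat has a classical analogue worth making concrete: pairwise geometry alone pins down a set of points only up to shifts, rotations, and reflections. The sketch below is a toy using classical multidimensional scaling, not the neural approach described above; it recovers a point layout from its distance matrix, and the recovered coordinates preserve all distances while generally differing from the originals by exactly such a rigid transformation.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Recover point coordinates from a pairwise-distance matrix."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ (D ** 2) @ J               # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)            # eigendecomposition (ascending)
    idx = np.argsort(vals)[::-1][:dim]        # keep the top `dim` eigenpairs
    return vecs[:, idx] * np.sqrt(vals[idx])

rng = np.random.default_rng(1)
pts = rng.uniform(size=(6, 2))                # stand-in "transmitter" layout
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)

rec = classical_mds(D)
D_rec = np.linalg.norm(rec[:, None] - rec[None, :], axis=-1)
assert np.allclose(D, D_rec, atol=1e-8)       # all pairwise geometry preserved
# ...but `rec` differs from `pts` by a shift/rotation/reflection.
```

A few labeled anchor points are enough to fix the remaining shift, rotation, and reflection, which mirrors how a few labeled measurements resolve the map ambiguity described above.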
Quantum AI for exponential performance increase
Quantum computing is a hot field and has shown tremendous progress in recent years. We are doing fundamental research on how to apply quantum mechanics to AI in order to realize significant performance improvements. Quantum mechanics describes nature at the microscopic scale and has two properties that we’d like to utilize for AI processing: superposition and entanglement. Superposition means that every quantum bit, or qubit, can be both 0 and 1 at the same time. Entanglement means that qubits can become inextricably linked such that whatever happens to one immediately affects the other.
The progression of our research has led us from classical bits (0 or 1), to Bayesian bits (a probability distribution between 0 and 1), to quantum bits (which can be viewed as a point on a sphere, adding another dimension, or degree of freedom).
We are applying quantum mechanics to machine learning.
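Superposition and entanglement can be illustrated with plain statevector arithmetic. The following NumPy sketch is simulated on a classical computer, of course, and purely illustrative:

```python
import numpy as np

# Single-qubit computational basis states
zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)

# Superposition: an equal-weight combination of |0> and |1>
plus = (zero + one) / np.sqrt(2)
probs = np.abs(plus) ** 2                # measurement probabilities
assert np.allclose(probs, [0.5, 0.5])    # 50/50 outcomes for 0 and 1

# Entanglement: the Bell state (|00> + |11>) / sqrt(2)
bell = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)
probs2 = np.abs(bell) ** 2               # over the basis |00>,|01>,|10>,|11>
assert np.allclose(probs2, [0.5, 0, 0, 0.5])
```

The Bell state never yields the outcomes 01 or 10: measuring one qubit immediately determines the other, which is exactly the correlation that entanglement provides.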
Applying quantum mechanics to machine learning is truly fundamental, greenfield research. Quantum annealing and quantum deep learning are two use cases that can utilize the exponential power of quantum computing. Quantum annealing can be used for combinatorial optimization problems, like laying out the physical design of a silicon chip, to potentially provide a huge improvement in performance. Quantum deep learning applies the mathematics of quantum mechanics to deep learning to design more powerful algorithms.
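For intuition about the annealing use case, here is a classical simulated-annealing sketch on a tiny max-cut instance, a standard combinatorial optimization problem. This is the classical analogue only, not a quantum algorithm, and all names and parameters are illustrative.

```python
import math
import random

def cut_value(edges, assign):
    """Number of edges crossing the 0/1 partition."""
    return sum(1 for u, v in edges if assign[u] != assign[v])

def simulated_annealing(n, edges, steps=5000, seed=0):
    """Maximize the cut by single-node flips with a cooling temperature."""
    rng = random.Random(seed)
    assign = [rng.randint(0, 1) for _ in range(n)]
    cur = best = cut_value(edges, assign)
    temp = 2.0
    for _ in range(steps):
        i = rng.randrange(n)
        assign[i] ^= 1                          # propose flipping one node
        new = cut_value(edges, assign)
        if new >= cur or rng.random() < math.exp((new - cur) / temp):
            cur = new                           # accept (sometimes uphill)
            best = max(best, cur)
        else:
            assign[i] ^= 1                      # reject: undo the flip
        temp *= 0.999                           # gradually cool down
    return best

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]        # a 4-cycle graph
print(simulated_annealing(4, edges))            # optimum is 4: the alternating
                                                # partition cuts every edge
```

Quantum annealing explores the same kind of energy landscape, but can tunnel through barriers rather than only hopping over them thermally, which is where the potential performance gain comes from.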
We are very excited by our initial results. We developed a quantum deformed binary neural network, which allows us to run a large classical neural network on a quantum computer or efficiently simulate on a classical computer. On top of that, we can deform this classical neural network to incorporate quantum effects and show that we can still train and run it efficiently. This is the first quantum binary neural network for realistic data! We look forward to making further progress in quantum AI.
Also, if you’re excited about solving big problems with cutting-edge AI research — and improving the lives of billions of people — we’d like to hear from you. We’re recruiting for several machine learning openings.
Dr. Max Welling
Vice President, Technology, Qualcomm