The State of AI and Where It’s Heading in 2024

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm.

An interview with Qualcomm Technologies’ SVP, Durga Malladi, about AI benefits, challenges, use cases and regulations

It’s an exciting and dynamic time as the adoption of on-device generative artificial intelligence (AI) grows, driving the democratization of AI on consumer devices and further enhancing user experiences. So we invited TIRIAS Research Principal Analyst Jim McGregor to sit down with Qualcomm Technologies’ Senior Vice President and General Manager, Technology Planning and Edge Solutions, Durga Malladi, who shares his first-hand insights on the rapidly evolving state of AI.

Editor’s Note: The following interview was transcribed and edited for readability.

Jim McGregor (JM): You mentioned earlier that AI has been your nighttime job in between doing all the 5G stuff over the past decade. You’ve been looking at this not just from a Qualcomm perspective, but from an industry perspective. So, where do you think AI is and how is it evolving?

Durga Malladi (DM): If you take a look at the last 10 years or so, AI has gradually evolved from a field focused largely on image processing with the original Convolutional Neural Network (CNN) architecture. Then, about five years back, AI started to make a difference in language and speech processing.

But what’s changed the game in recent times is generative AI: the astonishing results it has been able to accomplish and the use cases built on top of it. At Qualcomm, we’ve been in this space for a while now. We’ve been doing a very large amount of research on AI algorithms and on all the models being developed. What’s the best way for us to take all of that, curate it and make sure that it runs in the most effective manner on an end-consumer device?

Those consumer devices can be extremely diverse. At one extreme, you might have ultra-low-end internet of things devices. At the other, you could have ultra-high-end automotive scenarios such as autonomous driving.

And somewhere in between are smartphones, laptops, XR devices and whatnot. So, we have been in the business of taking the many new models that are out there, and there is a very large number of them, and making them run in the most effective manner. That’s been our real focus.

And it’s our belief that as we progress, AI needs to be democratized and running on device.

There will always be some assistance from the cloud, but at the end, for large-scale deployment and large-scale adoption, I think it’s very important to start seeing generative AI running on devices. There’s a scalability issue, a privacy issue and many reasons why I believe it makes sense for us to actually start running generative AI on devices.

JM: AI is kind of an umbrella term. Can you talk about the diversity of the workloads we’re likely to see in AI?

DM: There are multiple kinds of diversity associated with AI. First is the kind of models that exist. For image processing back in the day, there were CNNs; then for audio processing there were Long Short-Term Memory networks (LSTMs); and these days we use transformer networks for generative AI models. That’s one kind of diversity: the architectures themselves. And they are constantly evolving over time.

The second level of diversity is the size of the models. Models can be as small as a billion parameters, which is below one gigabyte of storage. At the other extreme, you can have 65 billion to 70 billion parameters. So, you have everything from the large models, which in recent times we call large language models, down to some of the smaller models. That’s the second level of diversity.
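The relationship between parameter count and on-device storage can be sketched with some back-of-envelope arithmetic. This is purely illustrative (actual footprint also includes activations, KV cache and runtime overhead, which the sketch ignores):

```python
# Back-of-envelope storage for model weights at various precisions.
# Illustrative only: real on-device memory use also includes activations,
# KV cache and runtime overhead.

def model_size_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# A 1-billion-parameter model with 8-bit weights fits in about 1 GB:
print(model_size_gb(1, 8))    # 1.0
# A 70-billion-parameter model at 16-bit needs roughly 140 GB:
print(model_size_gb(70, 16))  # 140.0
# Quantized to 4-bit weights, the same 70B model still needs about 35 GB:
print(model_size_gb(70, 4))   # 35.0
```

This is why the billion-parameter class maps to the "below one gigabyte" figure mentioned above, while the 65B-to-70B class remains a challenge for consumer devices even with aggressive quantization.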

The third one is that the source of the models can be from anywhere. If you go to some of the more popular developer sites, you will see more than 300,000 models out there, and the number is constantly increasing.

From Qualcomm Technologies’ perspective, our focus is to make sure that as we build our platforms, we are able to ingest any kind of model.

JM: How does Qualcomm Technologies’ portfolio of solutions accommodate the diversity of AI models in terms of size, type and usage, and what metrics are considered to ensure effective performance in running multiple AI models simultaneously on devices?

DM: Pick your model size, pick your model type, pick any kind of model, and we will have a solution that you can adopt and run with. From that standpoint, we have our own portfolio of solutions built around the neural processing unit (NPU), central processing unit (CPU) and graphics processing unit (GPU) all running in conjunction, with simultaneous usage of these processors. And it depends upon the key performance indicator (KPI): if a workload is extremely power hungry, run it on the NPU; if it’s something lightweight but in a different category, run it on the CPU or GPU, or a mix and match of both.
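The placement logic described above can be sketched as a toy dispatcher. The class names, thresholds and KPI fields here are hypothetical, invented for illustration; a real heterogeneous-compute runtime is far more sophisticated:

```python
# Toy sketch of KPI-driven workload placement across heterogeneous
# processors, in the spirit of the NPU/CPU/GPU mix described above.
# All names and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    compute_intensity: float  # hypothetical KPI, e.g. GFLOPs per inference
    always_on: bool           # pervasive/contextual vs. on-demand

def place(w: Workload) -> str:
    """Choose a processor for a workload using simple KPI heuristics."""
    if w.always_on or w.compute_intensity > 100:
        return "NPU"   # sustained or heavy workloads: most power-efficient
    if w.compute_intensity > 10:
        return "GPU"   # medium-weight, highly parallel workloads
    return "CPU"       # lightweight, latency-tolerant workloads

jobs = [
    Workload("context-anticipation", 5.0, always_on=True),
    Workload("image-enhance", 50.0, always_on=False),
    Workload("keyword-spotting", 0.5, always_on=False),
]
for j in jobs:
    print(j.name, "->", place(j))
```

The design point is that placement is a per-workload decision driven by KPIs rather than a fixed mapping, which is what allows several models with very different profiles to run simultaneously.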

When it comes to AI, you never have a single model running: you have several AI models running all the time. Some of them might be very pervasive and contextual, trying to anticipate your move and thinking about what you’re going to do next. So that’s constantly running, which, by the way, is very hard to do in the cloud. It has to run on the device.

And then there are other kinds of models that are more on-demand: you prompt or ask something and you get a response back. That’s on-demand AI. So, you have a mix and match of these two kinds of models.

Our platforms are designed to make sure that we embrace all of this diversity and are still able to run in the most effective manner.

When I say effective, the KPIs can go all the way from latency to power consumption to accuracy. At the end of the day, if you’re doing an image processing job, it has to look really good. So, there’s accuracy associated with it. Those are the metrics with which we look at generative AI and all the other AI models that exist out there.

JM: You mentioned that there are more than 300,000 models out there. Are you seeing the same thing on a regional basis? Or are you seeing it develop differently from Asia to the U.S. to Europe?

DM: From what we have seen so far, at an architectural level, there is not that much of a difference between different regions. What you see, for example, in the U.S. is not fundamentally different from what you see, let’s say, in Europe or in other parts of Asia — including China.

But, on the other hand, the datasets on which they have been trained tend to be different depending upon which region you are in. Some things might have been trained on some of the local languages, and some things are trained in English. So, there are some differences. At a very high level, there’s not that much of a difference in terms of what we see in the models.

But increasingly, as we move forward, we do expect to see some models doing better in certain languages than other models. It just depends upon the training dataset. And these are still foundational models, in the sense that this is public domain information that’s out there. Which brings me to the next point.

So far, we’ve been talking about inference running on devices. But what happens next after that? Imagine a situation where you and I both have the same phone that we buy at the same time and have the same foundational model running.

But imagine that two years from now, my model tends to perform slightly differently from yours. That’s because each of our models has gradually learned our behavior and is therefore fine-tuned.

So now it’s a more personalized AI assistant for you, which is different for me. And that’s the next evolution of where we are heading in terms of personalized AI assistants and training or fine-tuning on the device.

Start with a foundational model, and then start training on the device as well. That will be the next step. We’re not there yet, but that is exactly where we will be heading as we move forward. It’s a fast, evolving space.
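The "foundation model plus on-device fine-tuning" idea can be illustrated with a minimal low-rank adapter sketch: the base weights stay frozen, and only a small adapter is trained on local data. This is one possible technique (in the spirit of LoRA-style adapters), not a description of any specific Qualcomm implementation; all shapes and values are illustrative:

```python
# Minimal sketch of personalizing a frozen foundation model on device by
# training a small low-rank adapter (LoRA-style) on local user data.
# Shapes, values and the training loop are purely illustrative.

import numpy as np

rng = np.random.default_rng(0)

d, r = 16, 2                       # hidden size, adapter rank
W_base = rng.normal(size=(d, d))   # frozen foundation-model weight
A = np.zeros((d, r))               # trainable low-rank adapter factor
B = rng.normal(size=(r, d)) * 0.01 # trainable low-rank adapter factor

def forward(x):
    # Base weights are never modified on device; only A and B adapt.
    return x @ (W_base + A @ B)

# One SGD step on a local example (mean-squared error to a target).
x = rng.normal(size=(1, d))
target = rng.normal(size=(1, d))
lr = 0.01

err = forward(x) - target
loss_before = float((err ** 2).sum())

grad_A = x.T @ (err @ B.T)   # gradient of loss w.r.t. A (up to a constant)
A -= lr * grad_A             # only the tiny adapter is updated

loss_after = float(((forward(x) - target) ** 2).sum())
print(loss_after < loss_before)  # the adapter step reduces local error
```

The appeal of this pattern for the scenario described above is that every user starts from the same foundation model, and personalization lives entirely in a small adapter that can be trained cheaply and kept private on the device.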

JM: Another topic our industry has to consider going forward, especially for applications like automotive, is collective AI. We are going to have all these intelligent things that have to operate together. And let’s face it, we don’t do that very well as humans. How are we going to do this with machines?

DM: There are many thoughts in that space, and it’s an active area of research, including some of the federated learning concepts, learning from other devices and so on. In the background, from a research standpoint, there’s a lot of work involved. But at this point in time, our focus and emphasis are on making sure that we have this action on the ground with AI, generative AI especially, running on devices, and we’re not that far away. That’s exactly where we are heading.

JM: At Mobile World Congress this year, Qualcomm showed one of the coolest demos: Stable Diffusion running on a Snapdragon-based platform. But I think it’s important for people to know that with everything Qualcomm does, from 5G to Wi-Fi to you name it, there is usually at least a decade of research and development investment behind it.

How long has Qualcomm been investing in AI?

DM: It’s been more than 10 years. It’s been a very long time for us, because the genesis of that work goes back even further. But suffice it to say that a lot of our foundational work on AI dates back more than 10 years, and it continues. We have become more and more active in the research community and more public with our publications.

Our emphasis is this: you might end up with AI models that are trained in the cloud, but if you start thinking about the end devices you want them to run on, you might train the models in a very different way to begin with. And that’s one of the end-to-end system problems that we’re looking at.

In other words, if you just train in the cloud, and then say, “we’ll figure out the best way to implement it,” that’s one way of doing it. But the other way of doing it is if I know exactly the kinds of devices that are out there, I might actually start off in a very different way. That’s the way that we’re looking at how AI is evolving.

JM: I know that you’re looking at this holistically. Can you talk about some of the standards and committees you are involved in, like AI responsibility?

DM: I represent Qualcomm Technologies at the World Economic Forum AI Governance Alliance. It’s a committee of members from industry and academia, from different parts of the world, meant to produce directional guidelines that help regulators formulate the right kinds of policies, making sure that we are doing everything in the right and responsible way.

There are guardrails that everyone in the industry who is involved in AI needs to put in place. But at the same time, let’s not lose track of the fact that there are tremendous benefits that AI brings to the table in terms of increasing productivity in different sectors, from manufacturing to enterprises to the workplace.

And of course, there are consumer-centric use cases as well. So, as we do all of that, we also want to make sure that we develop the right guardrails as we move forward. That’s one of the reasons why, from our perspective, we decided that we are going to be a little more of a visible and active participant in the space.

Suffice it to say that policymakers in the U.S., Europe and other parts of the world are already paying attention and working to determine their role.

Their role is to establish the right framework without stifling innovation, so not being too overbearing from a regulatory standpoint, but at the same time having the right guardrails as well. So, it’s an evolving space, and we are an active member of it, especially as we come to it from a different pedigree.

We are there to talk in terms of how AI gets adopted in a very diverse set of devices. It’s a true democratization of AI across a very large number of devices. It’s a slightly different perspective that we come from relative to a lot of the others who are network and cloud-centric. And I think you need all viewpoints to formulate the right policy.

I think from our standpoint, our role is to make sure that policymakers are well-educated about what exactly the technology is. Qualcomm Technologies is best suited to explaining the technology. We are not there to come up with guidelines for what the policy should be. That’s really what they (regulators) do. But they should understand the technology fully, to separate what might be fiction from what is real, and then go with it. And I believe we will land in the right spot.

JM: We’re seeing generative AI go at light speed in the data center to a point that is not sustainable. You talked about the importance of bringing generative AI on device and having guardrails in place. So, where are we in that evolution and where are we going to be over the next couple of years?

DM: As far as adoption of generative AI on devices is concerned, it’s imminent. Stay tuned for a lot of things that will happen over the next few months as devices get launched with generative AI capabilities. There are already some guardrails in place, and I’m sure they will evolve a lot more. And in some regions, there’s already the notion that any generative AI model that is published must be certified before it gets adopted.

Certification involves a lot of associated tests, which I believe is a pretty good step in the right direction. So even as we start today and continue at lightning speed, we are already trying to measure ourselves the right way. I’m sure those metrics will evolve, but at the same time, I think that constant push and pull is exactly how we, as an industry, have kept pushing from an innovation standpoint.

As we go through that, we’ll probably learn a bit more and then start adding more guardrails as and when necessary. But we are truly excited about where we are at this point in time. And from a Qualcomm Technologies’ standpoint, we are placing our emphasis on getting generative AI right and getting the action onto devices.

JM: Generative AI has sparked a level of excitement around not just our industry, but around consumers and around pretty much everything. Is there one particular application or one aspect of AI that just excites you more than anything else?

DM: The idea of having an AI productivity assistant that’s constantly with you on your device, and being able to converse with the device all the time, as opposed to figuring out which app is where and then trying to use it. We’re beginning to see glimpses of this, along with a more natural interface to your device using voice as opposed to touch. It will be exciting to see where we end up.

Pat Lawlor
Director, Technical Marketing, Qualcomm Technologies, Inc.

Jerry Chang
Senior Manager, Marketing, Qualcomm Technologies
