Palo Alto – November 3, 2025 – Woodside Capital Partners (WCP) is pleased to release our Edge AI Semiconductor Quarterly Sector Update for Q3 2025, authored by senior bankers Alain Bismuth and George Jones.
Introduction: Intelligence on the Edge
Edge AI isn’t just a buzzword – it’s fast becoming the next battleground in silicon. As billions of sensors and devices come online, they are generating an avalanche of data that can’t always wait for the cloud. IDC projects 41.6 billion connected IoT devices by 2025, producing nearly 79 zettabytes of data per year. Transmitting all that information to distant data centers is impractical due to latency, bandwidth, and privacy constraints. The solution? Push the computing – and the intelligence – out to the network’s edge.
As EE Times Editor-in-Chief Nitin Dahad noted, “While industry commentators have been talking about edge AI for a while, the challenge to date, as with IoT, was in the market fragmentation… But that is changing.” He highlighted how the ecosystem is consolidating through moves like Qualcomm’s acquisitions of Edge Impulse and Arduino, or Google’s collaboration with Synaptics on open-source RISC-V NPUs – signs that “edge AI really pushes connected IoT devices into a new realm – one of ‘ambient intelligence,’ where AI chips put intelligence into things without having to connect to the cloud, consume massive power, or compromise security.”
This new paradigm is ushering in ultra-efficient chips purpose-built for on-device learning and inference. These Edge AI processors are no longer niche curiosities; they are becoming “the heartbeat of a new digital age,” with the market expected to reach $13.5 billion in 2025. The strategic takeaway is clear: the future of AI will be decentralized, and the real innovation—and value creation—is happening at the edge.
Drivers of the Edge AI Chip Boom
Several converging forces are propelling the rapid rise of edge AI chips. For the investment community, understanding these drivers is key to seeing where value will accrue in the coming years:
- Data Deluge & Latency Sensitivity: The sheer volume of sensor data (from cameras, microphones, wearables, etc.) is overwhelming, and sending it all to the cloud is often infeasible. Edge chips enable real-time processing at the source, avoiding network latency. This is mission-critical for applications like autonomous drones or surgical robots that can’t afford the milliseconds of round-trip cloud delay. With IDC’s projected 41.6 billion IoT devices streaming data by 2025, processing must be pushed outward to keep pace.
- Bandwidth & Connectivity Limits: Not every environment has fat internet pipes or reliable 5G. From rural farms to factory floors, edge AI hardware ensures analytics continue even with spotty connectivity. It’s often more cost-effective to process data locally than to continuously offload gigabytes to the cloud.
- Privacy and Security: In an era of stricter data regulations and heightened user sensitivity, keeping data on-device is a significant advantage. Edge AI chips let a smartphone analyze your biometrics or photos privately, without ever uploading to a server. This reduces exposure to breaches and eases compliance with laws like GDPR.
- Power Efficiency: Perhaps counterintuitively, doing AI on the edge can save energy overall. Rather than firing up a distant server (and all the network infrastructure in between) for a small inference task, a low-power chip in a device can do it with less total energy. Moreover, many edge use-cases involve battery-powered gear – think wearables or remote sensors – where ultra-efficient silicon is the only option. A cloud model simply can’t run on a coin cell battery.
- Cost & Scalability: Lastly, cloud computing at scale is expensive. As AI becomes ubiquitous, offloading every task to centralized GPUs or TPUs racks up cloud bills. Pushing intelligence to millions of cheap, dedicated edge chips distributes the load and can be more cost-efficient at scale. It also enables new experiences and products that wouldn’t be feasible if every interaction required a cloud call (for instance, AI features in areas with no connectivity, or devices that need to respond in under 10 ms).
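The latency argument above can be made concrete with a back-of-envelope sketch. The figures below (network hops, inference times) are illustrative assumptions, not measurements from any specific device or network:

```python
# Back-of-envelope comparison of cloud round-trip vs. on-device inference
# latency. All figures are illustrative assumptions, not measured values.

def cloud_latency_ms(uplink_ms=20.0, server_infer_ms=5.0, downlink_ms=20.0):
    """Total latency when an inference request travels to a data center."""
    return uplink_ms + server_infer_ms + downlink_ms

def edge_latency_ms(local_infer_ms=8.0):
    """Total latency when the model runs on an edge AI chip."""
    return local_infer_ms

cloud = cloud_latency_ms()
edge = edge_latency_ms()
print(f"cloud round trip: {cloud:.1f} ms")    # 45.0 ms under these assumptions
print(f"on-device:        {edge:.1f} ms")     # 8.0 ms
print(f"edge is {cloud / edge:.1f}x faster")  # ~5.6x
```

Under these assumed numbers the edge path comfortably beats a 10 ms response budget while the cloud path cannot, regardless of how fast the server itself is.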
In short, the edge is where the digital world meets the real world, and it demands silicon that can handle messy, real-time data within tight power and latency budgets. This demand is fueling an “arms race” among chip makers – both established giants and ambitious startups – to build the brains for the edge. And the money is following: by 2025, custom ASICs for edge inference are projected to generate nearly $7.8 billion in revenue, and AI chip startups have already raised over $5.1 billion in VC funding in the first half of 2025 alone. The race is on to create chips that deliver data-center-caliber smarts without the luxury of a data center’s power or cooling.
The New Silicon Landscape: From Cloud Titans to Edge Niche
Not long ago, NVIDIA GPUs dominated the AI hardware narrative, thanks to the deep learning boom. But those power-hungry processors live in server racks, gulping kilowatts – not exactly ideal for a smart camera or a drone. The shift to edge AI has cracked the door open for a new wave of silicon solutions optimized for efficiency, specialization, and integration into smaller devices. This has made the competitive landscape far more diverse and exciting than the old CPU/GPU duopoly.
Tech giants have recognized the trend and are embedding AI acceleration in their edge offerings. Apple’s latest iPhone chip, the A19, packs a 35 TOPS neural engine – effectively a dedicated AI brain inside your pocket. Qualcomm now ships NPUs (neural processing units) in hundreds of millions of Snapdragon chips each year, ensuring that nearly every new smartphone has on-device AI capability. Even Google, synonymous with cloud AI, has its Edge TPU chips for on-premise and IoT inference. These incumbents leverage enormous R&D and software ecosystems, but they also have broad mandates (serving many applications), which leaves plenty of room for specialists to outperform in niche areas.
This is where startups and smaller players shine, by laser-focusing on edge use cases and squeezing out efficiencies that general-purpose silicon can’t match. A slew of innovators worldwide are delivering novel architectures for edge AI – many of them fundamentally rethinking how computations are done under the hood. The approaches vary (digital ASICs, analog in-memory computing, neuromorphic designs, and more), but the goal is the same: maximum AI performance per watt on tiny footprints. Below are a few notable edge AI chip players and their strategies:
- Ambient Scientific (USA): Silicon Valley–based company developing ultra-low power, analog in-memory AI processors that enable always-on, on-device AI for battery-powered edge applications.
- Axelera AI (Netherlands): Developed the Metis AI processing unit, a high-performance vision accelerator purpose-built for the network edge.
- Blumind (Canada): Pioneers all-analog AI chips for ultra-low-power, always-on edge tasks, delivering standard neural network performance at up to 1000x less power than traditional digital designs.
- BrainChip (Australia): A neuromorphic-chip trailblazer that has commercialized the Akida spiking neural network processor for extreme-edge AI. BrainChip’s architecture performs brain-inspired event-based learning on-chip, targeting milliwatt-scale power budgets.
- GreenWaves Technologies (France): A pioneer in RISC-V-based edge processors for battery-powered devices. Its GAP9 processor combines a multi-core MCU, a DSP, and a neural accelerator, enabling advanced AI features like neural noise cancellation in hearables at exceptionally low power.
- Hailo (Israel): A leading-edge AI accelerator vendor whose specialized processors combine high throughput with low energy use for deep learning at the edge.
- Klepsydra (Switzerland): Takes a software-centric approach to edge AI optimization, with a lightweight framework that boosts inference efficiency across existing hardware, achieving up to 4x lower latency and 50% less power consumption on standard processors.
- Kneron (USA): Provides low-power AI inference chips for smart devices at the edge. Kneron’s “full-stack” edge AI SoCs deliver efficient on-device vision processing and face/pattern recognition, powering everything from smart home cameras to driver-assistance systems.
- Neuronova (Italy): Builds neuromorphic processors that emulate brain neurons and synapses in silicon, enabling sensor AI tasks with up to 1000x lower energy consumption than conventional chips.
- Mentium (USA): Using a hybrid in-memory and digital computation approach, Mentium delivers dependable edge AI with co-processors capable of cloud-quality inference at ultra-low power. The company has enjoyed success in space-based applications.
- SiMa.ai (USA): Supplies low-power, high-performance system-on-chip (SoC) solutions for edge machine learning, branded as an MLSoC. SiMa.ai’s platform emphasizes ease of deployment for computer vision and autonomous systems.
- SynSense (China): Offers event-driven neuromorphic chips that tightly integrate sensing and processing to achieve ultra-low-latency, low-power AI on the edge.
- Syntiant (USA): Designs ultra-low-power Neural Decision Processors that enable always-on voice and sensor analytics in battery-operated gadgets. Syntiant’s tiny chips have already shipped in over 10 million devices.
Neuromorphic Computing: The Brain as Blueprint
Among all edge AI innovations, neuromorphic computing stands out as the most radical— and arguably the most visionary—approach. Rather than brute-force number crunching, these chips mimic biological brains, using networks of artificial “neurons” and “synapses” that communicate via spiking signals. The appeal is clear: the human brain is a 20-watt wonder that can outperform megawatt supercomputers on specific tasks. After millions of years of evolution, it remains the ultimate proof of concept for efficient intelligence. Why not try to capture some of that magic in silicon?
“The reason for that is evolution,” says Steven Brightfield, CMO of BrainChip. “Our brain had a power budget.” Nature had to optimize for energy efficiency, and neuromorphic chips follow that same rule, making them ideal for battery-powered AI. As Brightfield puts it, “If you only have a coin-cell battery to run AI, you want a chip that works like the brain, sipping energy only when there’s something worth processing.”
This event-driven paradigm is neuromorphic computing’s secret sauce: neurons fire only when input changes, consuming power only when needed. Intel’s Mike Davies, who leads the company’s neuromorphic lab, explains that such architectures excel at “processing signal streams when you can’t afford to wait to collect the whole stream… suited for a streaming, real-time mode of operation.” Intel’s Loihi chip, for example, matched GPU accuracy on a sensing task while using just one-thousandth the energy.
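The event-driven principle can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron, a common abstraction in spiking designs. This is a generic textbook sketch with made-up parameters, not a model of any specific chip such as Akida or Loihi:

```python
# Minimal leaky integrate-and-fire (LIF) neuron. Threshold and leak values
# are illustrative, not taken from any commercial neuromorphic processor.

class LIFNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.threshold = threshold  # membrane potential needed to fire
        self.leak = leak            # per-step decay toward resting state
        self.potential = 0.0

    def step(self, input_current):
        """Integrate one input sample; return True only on a spike event."""
        self.potential = self.potential * self.leak + input_current
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return True             # spike: downstream work happens only now
        return False                # no event -> no downstream computation

# A mostly-quiet sensor stream: the neuron (and anything listening to it)
# does real work only when the input meaningfully changes.
stream = [0.0, 0.0, 0.05, 0.0, 0.6, 0.7, 0.0, 0.0]
neuron = LIFNeuron()
spikes = [t for t, x in enumerate(stream) if neuron.step(x)]
print(spikes)  # a single spike, at the step where activity accumulates
```

The point is the asymmetry: eight input samples produce one event, and in an event-driven architecture only that one event triggers downstream computation and energy spend.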
Though still early, the field is advancing fast. Major players like Intel and IBM, along with startups such as BrainChip, SynSense, and Innatera, are proving that brain-inspired computing is more than academic curiosity. Neuromorphic processors now handle keyword spotting, gesture recognition, and anomaly detection at microwatt power levels—a breakthrough for wearables, drones, and IoT devices.
Challenges remain: spiking neural networks still lag in programming ease and general performance, and the software ecosystem is nascent. As Davies cautions, with tiny neural networks there’s a “limited amount of magic you can bring to a problem.” Yet momentum is building. The efficiency gains are too compelling to ignore. Neuromorphic chips mirror the brain’s architecture, offering real-time intelligence at minimal energy – precisely what the edge demands.
While sales are still modest – projected to reach $0.5 billion by 2025 – the potential payoff is enormous. In a world where AI’s power appetite collides with energy constraints, brain-like chips may become essential infrastructure for the next generation of intelligent, efficient devices.
The Edge Is Also About Security
As Thomas Rosteck, President of Infineon’s Connected Secure Systems division, told EE Times in a recent interview, “The future of AI is about intelligence and security moving together to the edge. We can’t separate compute performance from trust – both have to be built into the silicon.”
Rosteck emphasized that this transformation isn’t simply about smaller chips or lower power; it’s about secure intelligence at the system level—combining sensors, connectivity, compute, and protection in a single integrated architecture. In his words, “The edge has to be smart, but also safe. You need to trust the data before you can use it for AI.”
That trust layer is rapidly becoming a differentiator in edge AI. Devices operating outside controlled environments – from industrial sensors to connected vehicles and medical wearables – are continuously exposed to tampering and data interception. Embedding hardware-based security (secure boot, encrypted memory, and trusted execution environments) ensures that models, data, and inferences cannot be altered or spoofed.
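The secure-boot idea mentioned above can be sketched in a few lines: firmware is accepted only if its digest checks out against a credential rooted in hardware. This is a deliberately simplified stand-in; real secure boot uses asymmetric signatures anchored in a hardware root of trust, and the key name here is hypothetical:

```python
# Simplified sketch of a secure-boot integrity check. Real implementations
# use asymmetric signatures verified by boot ROM against a key in fuses/OTP;
# stdlib HMAC is used here purely as an illustrative stand-in.

import hashlib
import hmac

DEVICE_ROOT_KEY = b"factory-provisioned-secret"  # hypothetical; fixed in hardware in practice

def sign_firmware(image: bytes, key: bytes = DEVICE_ROOT_KEY) -> bytes:
    """What the vendor's signing step would produce at build time."""
    return hmac.new(key, hashlib.sha256(image).digest(), hashlib.sha256).digest()

def verify_and_boot(image: bytes, signature: bytes) -> bool:
    """What the boot ROM checks before handing control to the firmware."""
    expected = sign_firmware(image)
    return hmac.compare_digest(expected, signature)  # constant-time compare

firmware = b"edge-ai-model-and-runtime-v1"
sig = sign_firmware(firmware)
print(verify_and_boot(firmware, sig))                # True: device boots
print(verify_and_boot(firmware + b"tamper", sig))    # False: boot refused
```

The same pattern extends to the AI payload itself: a signed model file cannot be swapped or altered without the device noticing, which is the “trust the data before you can use it” point Rosteck makes.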
Infineon and peers are leading a broader industry shift: security as an enabler, not an afterthought. As energy efficiency defines the viability of edge AI, trust defines its scalability. For AI to permeate the physical world safely, intelligence must be both local and verifiable—a dual mandate that will shape the next generation of edge architectures.
Case Study: Large Player Pivots
Qualcomm is executing a fundamental strategic shift, moving beyond its traditional cellular markets to focus heavily on the high-growth Intelligent Edge and IoT. Through the acquisitions of Edge Impulse (AI/ML tooling) and Arduino (prototyping ecosystem), Qualcomm now commands a comprehensive, full-stack edge AI platform. The resulting massive, diverse, bottom-up customer base marks a crucial shift away from serving a small number of large cellular customers (OEMs and carriers). This diversification mitigates risk and establishes a global innovation pipeline.
This strategy establishes a deep competitive moat by securing software mindshare and platform control. Edge Impulse provides the critical AI/ML framework, ensuring models are built and optimized specifically for Qualcomm’s specialized hardware, such as the AI accelerators (NPUs) in its Dragonwing™ platforms. Qualcomm has created a technical lock-in: developers face significant switching costs if they attempt to migrate optimized models to competing, generic hardware platforms. Qualcomm receives real-time market intelligence on successful developer models, enabling it to tailor its silicon roadmap.
For the millions of developers already engaged, the primary outcomes are accessibility, reduced development friction, and guaranteed scalability. The integrated ecosystem effectively democratizes access to robust, complex chip architectures. Arduino offers a universally trusted, user-friendly Integrated Development Environment (IDE) and open-source libraries. Developers can minimize the need for high-cost, specialized engineering talent. Critically, the workflow bridges the gap between prototyping and mass production, enabling a significantly faster time-to-market.
The acquisitions signify a radical departure from Qualcomm’s historical operating model, shifting from concentrated engagements to high-velocity community adoption.

Qualcomm is transitioning from a premium cellular hardware provider to a full-stack platform provider. This strategy ensures revenue diversification and establishes powerful software-based competitive lock-in for the company. For its customers, the result is the democratization of advanced AI hardware and a clear, supported path from concept to global mass production, positioning Qualcomm as the crucial infrastructure partner for Edge AI innovation.
Outlook: Toward an Intelligent, Efficient, and Secure Edge
The edge AI chip arena in 2025 is nothing short of a renaissance in computer architecture. Startups are racing, incumbents are pivoting, and the shakeout is coming. Apple’s grab of Xnor.ai showed the playbook: big semis will buy edge innovation or build it in-house. Meanwhile, NVIDIA’s Jetson, AMD/Xilinx FPGAs, and Qualcomm NPUs are already redefining the edge. Lines between categories are blurring, but the rule is simple: efficiency is king. The winners will deliver the most AI per joule, whether through digital accelerators, analog tricks, or brain-inspired architectures.
For investors, the strategic importance is massive. Edge chips sit at the crossroads of AI, IoT, 5G, and smart everything. The market spans $1 sensors to $1,000+ auto processors, a fragmentation that allows nimble players to dominate niches like hearables, robotics, or satellite imaging. But fragmentation also raises the stakes: chips alone aren’t enough; software ecosystems, partnerships, and timing decide who wins design slots.
Near term, digital ASICs from Hailo, Qualcomm, and Google will capture the lion’s share—aligned with today’s deep learning. Analog and in-memory approaches are next, delivering leaps in efficiency for power-starved devices. And on the horizon, neuromorphic computing looms: if spiking neural nets scale, brain-like chips could rewrite the rules entirely. Giants like Intel and IBM are betting the upside is worth it.
The sustainability angle only adds fuel. Cloud AI guzzles megawatts; edge AI can slash energy costs by orders of magnitude. A 1 W camera chip doing local inference beats streaming to a 100 W GPU farm. In sectors like agriculture and healthcare, efficiency isn’t just about battery; it’s about global carbon footprint.
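The 1 W vs. 100 W comparison above is easy to quantify. The per-inference processing times below are assumptions added for illustration (and the cloud figure ignores network and cooling overhead, so it flatters the data center):

```python
# Illustrative per-inference energy comparison using the 1 W edge chip and
# 100 W GPU figures from the text. Processing times are assumed, not measured.

def energy_joules(power_watts, seconds):
    """Energy = power x time."""
    return power_watts * seconds

# Assumptions: edge chip takes 50 ms per frame; the GPU farm path spends
# 10 ms of GPU time (network and infrastructure energy ignored here).
edge_j = energy_joules(1.0, 0.050)     # 0.05 J per frame on-device
cloud_j = energy_joules(100.0, 0.010)  # 1.0 J of GPU energy alone
print(edge_j, cloud_j)                 # edge uses ~20x less energy per frame
```

Even under these cloud-friendly assumptions, local inference wins by an order of magnitude per frame, which is the carbon-footprint argument in miniature.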
Bottom line: edge AI chips are evolving faster than the incumbents can dictate. What seemed like sci-fi—analog brains, self-learning silicon—is moving into commercial reality. The smart money is shifting to the edge, where the next generation of AI will be defined not by brute force, but by clever, energy-frugal design. The brain took eons to optimize; edge AI chips are doing it in years, and the race is on.
Woodside Capital Partners is a leading corporate finance advisory firm for tech companies in M&A and financings in the $30M –$500M enterprise value segment. The firm has worked with extraordinary entrepreneurs and investors since 2001, providing ultra-personalized service to its clients. Our team has global vision and reach, and has completed hundreds of successful engagements. We have deep industry knowledge and extensive domain and transaction experience in these and other sectors: Artificial Intelligence, CyberSecurity, HR Tech, Digital Advertising and Marketing, Autonomous Vehicles, ADAS, Computer Vision, Aerospace and Defense, CloudTech, Enterprise Software, IT Services, Information Security, FinTech, Internet of Things, Networking / Infrastructure, Robotics, Semiconductors, Quantum, Energy Storage, Digital Health & Virtual Care, Diagnostic, Medical Devices & Precision Medicine, Healthcare IT & Data Analytics Platforms, AI & Automation in Clinical Decision Support, Revenue Cycle Management & Financial Ops, Behavioral & Mental Health Tech, Value-Based Care & Preventive/Wellness Platforms, Healthcare Infrastructure & Cybersecurity. Woodside Capital Partners is a specialist in cross-border transactions, and has extensive relationships among venture capitalists, private equity investors, and corporate executives from Global 1000 companies.
Questions? Contact Alain Bismuth, Managing Director, Woodside Capital Partners at [email protected].

