This blog post was originally published at Lattice Semiconductor’s website. It is reprinted here with the permission of Lattice Semiconductor.
According to ABI Research, on-device AI inference capability is expected to reach 60% of all devices by 2024. This figure underscores the rapid pace of AI innovation over the last few years, which has required engineers to adopt flexible design models as workloads transition from the cloud to the edge. The trend is driven by requirements for ultra-low latency, security, and privacy, as well as by bandwidth limitations.
Lattice FPGAs and software solutions help enable acceleration of future models with existing silicon. This blog will explore use cases for Lattice FPGAs and software solutions in computer vision and edge AI technology design.
Why FPGAs Work Best for Edge Computing and AI Applications
FPGAs are ideal for edge processing and AI applications due to their inherent flexibility and adaptability.
An FPGA is a parallel compute engine that can run at a lower clock frequency, which translates directly into lower power, and it contains flexible resources distributed throughout its fabric. These resources include DSP blocks, memories, and programmable logic, all spread out and interconnected in a way that resembles many of the new ASICs built for AI workloads. However, unlike ASICs and other processors, the flexibility of FPGAs allows for continuous improvement of existing use cases within a system, and for the introduction of entirely new use cases, without the need for all new hardware.
When we compare FPGA design cycles to ASIC design cycles, we see that a system designer can use an FPGA to iterate multiple times, introduce new use cases, and get to market quickly. A system designer using an ASIC, however, would have to wait for successive iterations of that ASIC to reach the same level of performance, slowing time-to-market and proving less efficient overall when adaptability is required.
A demo at this year’s Embedded Vision Summit featured our Lattice CertusPro™-NX device, illustrating its ability to run multiple AI engines in parallel, execute concurrent threads that reduce overall system latency, and deliver higher FPS in a full system implementation.
Helping System Designers Accelerate AI Applications
In the second half of my presentation, I spent some time providing an overview of how Lattice helps system designers, who are often software developers and not FPGA experts, with our variety of software solutions, including the Lattice sensAI™ solution stack. Lattice sensAI is a collection of hardware platforms, acceleration IP, software tools, example reference designs and demos, custom design services, and end-to-end solutions that we built to enable AI applications in specific end markets.
Within the sensAI stack is our software tool Lattice sensAI Studio, which system designers can use to verify their use cases within hours, not days or weeks. sensAI Studio enables system designers to bring in either a model from our model zoo or their own model, capture and label data, go through transfer learning, evaluate how well the model is trained, configure and test the model, and then compile it for a specific device to put it on a board.
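To make the shape of that workflow concrete, here is a minimal Python sketch of the steps described above. The function names and data structures are hypothetical stand-ins for illustration only; they are not the actual sensAI Studio API.

```python
# Illustrative outline of a model-zoo-to-device workflow.
# All function names here are hypothetical placeholders, not a real tool's API.

def load_base_model(name):
    """Stand-in for selecting a pretrained model from a model zoo."""
    return {"name": name, "weights": "pretrained"}

def transfer_learn(model, dataset):
    """Stand-in for fine-tuning the base model on captured, labeled data."""
    classes = sorted({label for _, label in dataset})
    return dict(model, weights="fine-tuned", classes=classes)

def evaluate(model, dataset):
    """Stand-in for checking how well the model is trained before compiling."""
    return {"samples": len(dataset), "classes": len(model["classes"])}

def compile_for_device(model, device):
    """Stand-in for compiling the tested model for a specific target board."""
    return f"{model['name']}-{model['weights']}-for-{device}"

# Example: fine-tune a zoo model on a small labeled set, then target a board.
data = [("img0.png", "berry"), ("img1.png", "leaf"), ("img2.png", "berry")]
model = transfer_learn(load_base_model("mobilenet_v2"), data)
report = evaluate(model, data)
artifact = compile_for_device(model, "CertusPro-NX")
print(report, artifact)
```

The point of the sketch is the ordering: data capture and transfer learning come before evaluation, and compilation for the target device is the final step once the model passes testing.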
During the Q&A portion of my presentation, I talked about how one of our agricultural customers at Lattice was able to build an AI system to detect berries in the field using sensAI Studio. This example helped demonstrate how tools like sensAI Studio are really important for everyday system designers who have less experience developing AI applications but are looking to build solutions for a variety of use cases.
Enabling Next-Gen Smart PC Experiences
PC users are increasingly looking for smart and aware devices that also provide security to protect their privacy. Additionally, they’re looking for great collaboration capabilities, like audio and video, when on conference calls. Notebook system designers are challenged by all the different form factors they want to introduce into the market, which brings in system-level challenges: how do you move all of this data from the camera to the rest of the system?
The Lattice AI-based solutions I showcased throughout the presentation help solve system designers’ problems in this area so new form factors can be introduced, including presence detection, onlooker detection, and face framing for video calls. Our solutions also help enable attention tracking, which can deliver up to 28% additional battery life, assuming the user is distracted 45% of the time.
To learn more about how our software tools help enable low-power FPGAs to process AI/ML workloads at the edge, download our whitepaper. You can also contact us at firstname.lastname@example.org with any questions you may have about our tools and FPGAs.
Segment Marketing Director, Lattice Semiconductor