Edge AI and Vision Insights: July 3, 2024


Scaling Vision-based Edge AI Solutions: From Prototype to Global Deployment – Network Optix
Integrating the latest computer vision and perceptual AI hardware, software and algorithm innovations into prime-time-ready products can be quite challenging. Scaling from a proof of concept—for example, a novel neural network architecture performing a valuable task efficiently on a new piece of silicon—to an AI vision system installed in hundreds of sites requires surmounting myriad hurdles. In this presentation, Maurits Kaptein, Chief Data Scientist at Network Optix and Professor at Eindhoven University of Technology, draws on Network Optix’s 14 years of experience to detail how to overcome the networking, fleet management, visualization and monetization challenges that come with scaling a global vision solution. Kaptein also discusses the complexities of making vision AI solutions device-agnostic and remotely manageable, proposing an open standard for AI model deployment to edge devices. The proposed standard aims to simplify market entry for silicon manufacturers and enhance scalability for solution developers. Kaptein outlines the standard’s core components and invites collaborative contributions to drive market expansion.

Enterprise Intelligence: The Power of Computer Vision and Generative AI at the Edge – Intel
In this talk, Leila Sabeti, Americas AI Technical Sales Lead at Intel, focuses on the transformative impact of AI at the edge, highlighting the role of the OpenVINO toolkit in streamlining the AI solution life cycle on Intel hardware. This includes the development of energy-efficient computer vision and generative AI models suitable for edge computing. Sabeti showcases cutting-edge AI applications, such as multimodal LLMs for document understanding and YOLO object detection for smart retail solutions. She addresses the entire edge compute ecosystem, discussing how to optimize AI processes from training to inference across various computing platforms, including Intel GPUs. Additionally, she explores how businesses can seamlessly transition between edge and cloud environments and how Intel’s portfolio of solutions unlocks the advantages of edge computing, such as data protection and AI acceleration.


Implementing Transformer Neural Networks for Visual Perception on Embedded Devices – VeriSilicon
Transformers are a class of neural network models originally designed for natural language processing. They are also powerful for visual perception due to their ability to model long-range dependencies and process multimodal data. Resource constraints form a central challenge when deploying transformers on embedded platforms: transformers demand substantial memory for parameters and intermediate computations, and the self-attention mechanism imposes heavy compute requirements. Energy efficiency adds another layer of complexity. Mitigating these challenges, as described by Shang-Hung Lin, Vice President of Neural Processing Products at VeriSilicon in this talk, requires a multifaceted approach. Optimization techniques like quantization ameliorate memory constraints. Pruning and sparsity techniques, which remove less critical connections, alleviate computation demands. Knowledge distillation transfers knowledge from larger models to compact models. Lin also discusses hardware accelerators such as NPUs customized for transformer workloads, and software techniques for efficiently mapping transformer models to hardware accelerators.
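To make the quantization idea concrete, here is a minimal sketch of symmetric post-training int8 quantization of a weight tensor in plain NumPy. This is an illustrative example only, not VeriSilicon's tooling; the function names and the per-tensor scaling scheme are assumptions for demonstration.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: w is approximated by scale * q."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float32 tensor from the int8 representation."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# int8 storage is 4x smaller than float32, and the per-element
# rounding error is bounded by half the quantization step.
print(q.nbytes, w.nbytes)  # 4096 16384
print(float(np.abs(w - w_hat).max()) <= scale / 2 + 1e-6)  # True
```

The 4x memory reduction (and the corresponding cheaper integer arithmetic on NPUs) is the kind of saving the talk refers to; production flows typically add per-channel scales and calibration data.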

Temporal Event Neural Networks: An Efficient Alternative to the Transformer – BrainChip
The expansion of AI services necessitates enhanced computational capabilities on edge devices. Temporal Event Neural Networks (TENNs), developed by BrainChip, represent a novel and highly efficient state-space network. TENNs demonstrate exceptional proficiency in handling multi-dimensional streaming data, facilitating advancements in object detection, action recognition, speech enhancement and language model/sequence generation. Through the utilization of polynomial-based continuous convolutions, TENNs streamline models, expedite training processes and significantly diminish memory requirements, achieving notable reductions of up to 50x in parameters and 5,000x in energy consumption compared to prevailing methodologies like transformers. Integration with BrainChip’s Akida neuromorphic hardware IP further enhances TENNs’ capabilities, enabling the realization of highly capable, portable and passively cooled edge devices. This presentation from Chris Jones, Director of Product Management at BrainChip, delves into the technical innovations underlying TENNs, presents real-world benchmarks, and elucidates how this cutting-edge approach is positioned to revolutionize edge AI across diverse applications.
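As a rough illustration of the "polynomial-based continuous convolution" idea mentioned above (this is a generic sketch of continuous-kernel convolutions, not BrainChip's TENN implementation; the Chebyshev basis and all names here are assumptions), a convolution kernel can be defined by a small set of polynomial coefficients and then sampled at any temporal resolution, so the parameter count stays fixed while the kernel length varies:

```python
import numpy as np

def poly_kernel(coeffs, length):
    """Sample a kernel defined by Chebyshev coefficients on [-1, 1].

    The parameter count is len(coeffs), independent of the sampled
    kernel length -- the core idea of polynomial continuous kernels.
    """
    t = np.linspace(-1.0, 1.0, length)
    return np.polynomial.chebyshev.chebval(t, coeffs)

coeffs = np.array([0.5, -0.3, 0.2])  # just 3 learnable parameters
signal = np.sin(np.linspace(0, 4 * np.pi, 200))

# The same 3 coefficients yield kernels at any temporal resolution.
short = np.convolve(signal, poly_kernel(coeffs, 9), mode="same")
long_ = np.convolve(signal, poly_kernel(coeffs, 33), mode="same")
print(short.shape, long_.shape)  # (200,) (200,)
```

Decoupling parameter count from kernel length is one way such models can cover long temporal contexts with far fewer parameters than an attention layer over the same window.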


The Rise of Neuromorphic Sensing and Computing: Technology Innovations, Ecosystem Evolutions and Market Trends – Yole Group Webinar: July 11, 2024, 9:00 am PT

Who is Winning the Battle for ADAS and Autonomous Vehicle Processing, and How Large is the Prize? – TechInsights Webinar: July 24, 2024, 9:00 am PT

More Events


Axelera AI Raises $68 Million Series B Funding to Accelerate Next-generation Artificial Intelligence

Basler Introduces Small Form Factor ace 2 V CoaXPress 2.0 Cameras

Ceva’s New TinyML-optimized NPUs for AIoT Devices Enable Edge AI Everywhere

Intel’s Latest CPUs and Gaudi Accelerator Redefine AI Power, Performance and Affordability

AMD Unveils Next-generation Zen 5 Ryzen Processors to Power Advanced AI Experiences

More News


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.



1646 N. California Blvd.,
Suite 360
Walnut Creek, CA 94596 USA

Phone: +1 (925) 954-1411