Videos

“Building Large-scale Distributed Computer Vision Solutions Without Starting from Scratch,” a Presentation from Network Optix

Darren Odom, Director of Platform Business Development at Network Optix, presents the “Building Large-scale Distributed Computer Vision Solutions Without Starting from Scratch” tutorial at the May 2023 Embedded Vision Summit. Video is hard. Network Optix makes it really easy. Video has the potential to become a valuable source of operational data for business, especially with […]

“Fast-track Design Cycles Using Lattice’s FPGAs,” a Presentation from Lattice Semiconductor

Hussein Osman, Segment Marketing Director at Lattice Semiconductor, presents the “Fast-track Design Cycles Using Lattice’s FPGAs” tutorial at the May 2023 Embedded Vision Summit. Being first to market can mean the difference between success and failure of a new product. But rapid product development brings challenges. With the growing use of AI in embedded vision […]

“Intensive In-camera AI Vision Processing,” a Presentation from Hailo

Yaniv Iarovici, Head of Business Development at Hailo, presents the “Intensive In-camera AI Vision Processing” tutorial at the May 2023 Embedded Vision Summit. In this talk, you’ll hear about the new Hailo-15 AI vision processor family, specifically designed to empower smart cameras with unprecedented AI capabilities. Iarovici explains how the Hailo-15 family of processors—with performance […]

“Challenges in Architecting Vision Inference Systems for Transformer Models,” a Presentation from Flex Logix

Cheng Wang, Co-founder and CTO of Flex Logix, presents the “Challenges in Architecting Vision Inference Systems for Transformer Models” tutorial at the May 2023 Embedded Vision Summit. When used correctly, transformer neural networks can deliver greater accuracy for less computation. But transformers are challenging for existing AI engine architectures because they use many compute functions […]
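
For a rough sense of the op-type diversity behind that claim, the following minimal PyTorch sketch (an illustration only, not material from the talk) contrasts a convolution block, where a single op type dominates, with a standard transformer encoder layer, which mixes attention matrix multiplies, softmax, layer normalization and a GELU feed-forward path in a single pass:

    import torch
    import torch.nn as nn

    # CNN workloads are dominated by one op type: the convolution itself.
    conv_block = nn.Sequential(
        nn.Conv2d(3, 64, kernel_size=3, padding=1),
        nn.ReLU(),
    )

    # A transformer encoder layer mixes several op types in one forward pass:
    # Q/K/V and projection matmuls, softmax over attention scores,
    # two layer norms, and a GELU feed-forward network.
    transformer_block = nn.TransformerEncoderLayer(
        d_model=256, nhead=8, dim_feedforward=1024,
        activation="gelu", batch_first=True,
    )

    x_img = torch.randn(1, 3, 224, 224)      # image input for the conv block
    x_tok = torch.randn(1, 196, 256)         # 14x14 patch tokens, ViT-style
    print(conv_block(x_img).shape)           # torch.Size([1, 64, 224, 224])
    print(transformer_block(x_tok).shape)    # torch.Size([1, 196, 256])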

“Bring Your ML Models to the Edge with the DeGirum DeLight Cloud Platform,” a Presentation from DeGirum

Shashi Chilappagari, Co-Founder and Chief Architect of DeGirum, presents the “Bring Your ML Models to the Edge with the DeGirum DeLight Cloud Platform” tutorial at the May 2023 Embedded Vision Summit. In this talk, Chilappagari presents an overview of the DeGirum DeLight Cloud Platform, which accelerates the edge AI application development process by bringing together […]

“Tensilica Processor Cores Enable Sensor Fusion for Robust Perception,” a Presentation from Cadence

Amol Borkar, Product Marketing Director at Cadence, presents the “Tensilica Processor Cores Enable Sensor Fusion for Robust Perception” tutorial at the May 2023 Embedded Vision Summit. Until recently, the majority of sensor-based AI processing used vision and speech inputs. Now we are beginning to see radar, LiDAR, event-based image sensors and other types of sensors […]

“Enabling Ultra-low Power Edge Inference and On-device Learning with Akida,” a Presentation from BrainChip

Nandan Nayampally, Chief Marketing Officer at BrainChip, presents the “Enabling Ultra-low Power Edge Inference and On-device Learning with Akida” tutorial at the May 2023 Embedded Vision Summit. The AIoT industry is expected to reach $1T by 2030—but that will happen only if edge devices rapidly become more intelligent. In this presentation, Nayampally shows how BrainChip’s […]

“Battery-powered Edge AI Sensing: A Case Study Implementing Low-power, Always-on Capability,” a Presentation from Avnet

Peter Fenn, Director of the Advanced Applications Group at Avnet, presents the “Battery-powered Edge AI Sensing: A Case Study Implementing Low-power, Always-on Capability” tutorial at the May 2023 Embedded Vision Summit. The trend of pushing AI/ML capabilities to the edge brings design challenges around the need to combine high-performance computing (for AI/ML algorithms) with low […]

“Sparking the Next Generation of Arm-based Cloud-native Smart Camera Designs,” a Presentation from Arm

Stephen Su, Senior Product Manager at Arm, presents the “Sparking the Next Generation of Arm-based Cloud-native Smart Camera Designs” tutorial at the May 2023 Embedded Vision Summit. As enterprises and consumers increasingly adopt machine learning-enabled smart cameras, the expectations of these end users are becoming more sophisticated. In particular, smart camera users increasingly expect their […]

“Processing Raw Images Efficiently on the MAX78000 Neural Network Accelerator,” a Presentation from Analog Devices

Gorkem Ulkar, Principal ML Engineer at Analog Devices, presents the “Processing Raw Images Efficiently on the MAX78000 Neural Network Accelerator” tutorial at the May 2023 Embedded Vision Summit. In this talk, Ulkar presents alternative and more efficient methods of processing raw camera images using neural network accelerators. He begins by introducing Analog Devices’ convolutional neural […]
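
As background on what running a network directly on raw sensor data can look like, here is a minimal, hypothetical PyTorch sketch (not the method from the talk, and not specific to the MAX78000 toolchain) of one common approach: packing the 2x2 Bayer mosaic into four half-resolution planes and feeding them to a small CNN, skipping a full demosaicing step:

    import torch
    import torch.nn as nn

    def pack_bayer(raw: torch.Tensor) -> torch.Tensor:
        # raw: (N, 1, H, W) RGGB mosaic -> (N, 4, H/2, W/2) packed color planes.
        r  = raw[:, :, 0::2, 0::2]
        g1 = raw[:, :, 0::2, 1::2]
        g2 = raw[:, :, 1::2, 0::2]
        b  = raw[:, :, 1::2, 1::2]
        return torch.cat([r, g1, g2, b], dim=1)

    # Tiny classifier operating directly on the packed raw planes
    # (illustrative only; layer sizes are arbitrary).
    net = nn.Sequential(
        nn.Conv2d(4, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
    )

    raw = torch.rand(1, 1, 128, 128)      # simulated 128x128 RGGB frame
    print(net(pack_bayer(raw)).shape)     # torch.Size([1, 10])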
