Edge AI and Vision Alliance

“Introduction to DNN Training: Fundamentals, Process and Best Practices,” a Presentation from Think Circuits

Kevin Weekly, CEO of Think Circuits, presents the “Introduction to DNN Training: Fundamentals, Process and Best Practices” tutorial at the May 2025 Embedded Vision Summit. Training a model is a crucial step in machine learning, but it can be overwhelming for beginners. In this talk, Weekly provides a comprehensive introduction…
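
For orientation, the core of DNN training that such a tutorial covers is the loop of forward pass, loss computation, backpropagation and parameter update. A minimal PyTorch sketch of that loop (illustrative only, not material from the talk):

```python
import torch
import torch.nn as nn

# Toy regression data and a small two-layer network (illustrative only).
X, y = torch.randn(256, 10), torch.randn(256, 1)
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(100):
    optimizer.zero_grad()        # clear gradients from the previous step
    loss = loss_fn(model(X), y)  # forward pass and loss
    loss.backward()              # backpropagate gradients
    optimizer.step()             # update parameters
```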

“Introduction to Depth Sensing: Technologies, Trade-offs and Applications,” a Presentation from Think Circuits

Chris Sarantos, Independent Consultant with Think Circuits, presents the “Introduction to Depth Sensing: Technologies, Trade-offs and Applications” tutorial at the May 2025 Embedded Vision Summit. Depth sensing is a crucial technology for many applications, including robotics, automotive safety and biometrics. In this talk, Sarantos provides an overview of depth sensing…
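
One concrete trade-off in this space: stereo depth sensing recovers range from disparity as depth = focal_length × baseline / disparity, so depth resolution degrades with distance. A minimal sketch with illustrative numbers (not figures from the talk):

```python
def stereo_depth(focal_length_px, baseline_m, disparity_px):
    """Pinhole stereo model: depth falls off as 1/disparity."""
    return focal_length_px * baseline_m / disparity_px

# Illustrative parameters: 700 px focal length, 12 cm baseline.
for disparity in (70, 35, 7):  # disparity in pixels
    print(f"{disparity:3d} px -> {stereo_depth(700, 0.12, disparity):.1f} m")
```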

“Lessons Learned Building and Deploying a Weed-killing Robot,” a Presentation from Tensorfield Agriculture

Xiong Chang, CEO and Co-founder of Tensorfield Agriculture, presents the “Lessons Learned Building and Deploying a Weed-Killing Robot” tutorial at the May 2025 Embedded Vision Summit. Agriculture today faces chronic labor shortages and growing challenges around herbicide resistance, as well as consumer backlash to chemical inputs. Smarter, more sustainable approaches…

“Transformer Networks: How They Work and Why They Matter,” a Presentation from Synthpop AI

Rakshit Agrawal, Principal AI Scientist at Synthpop AI, presents the “Transformer Networks: How They Work and Why They Matter” tutorial at the May 2025 Embedded Vision Summit. Transformer neural networks have revolutionized artificial intelligence by introducing an architecture built around self-attention mechanisms. This has enabled unprecedented advances in understanding sequential…
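
At the heart of that architecture is scaled dot-product self-attention, in which each token’s output is a similarity-weighted average of the other tokens’ values. A minimal single-head NumPy sketch (illustrative only, not code from the presentation):

```python
import numpy as np

def self_attention(Q, K, V):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # token-to-token similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                              # weighted average of values

# Toy example: 4 tokens with 8-dimensional embeddings; Q = K = V for self-attention.
x = np.random.default_rng(0).standard_normal((4, 8))
print(self_attention(x, x, x).shape)  # (4, 8)
```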

“Virtual Reality, Machine Learning and Biosensing Advances Converging to Transform Healthcare and Beyond,” an Interview with Stanford University

Walter Greenleaf, Neuroscientist at Stanford University’s Virtual Human Interaction Lab, talks with Tom Vogelsong, Start-Up Scout at K2X Technology and Life Science, for the “Virtual Reality, Machine Learning and Biosensing Advances Converging to Transform Healthcare and Beyond” interview at the May 2025 Embedded Vision Summit. In this wide-ranging interview, Greenleaf…

“Understanding Human Activity from Visual Data,” a Presentation from Sportlogiq

Mehrsan Javan, Chief Technology Officer at Sportlogiq, presents the “Understanding Human Activity from Visual Data” tutorial at the May 2025 Embedded Vision Summit. Activity detection and recognition are crucial tasks in various industries, including surveillance and sports analytics. In this talk, Javan provides an in-depth exploration of human activity understanding…

“Multimodal Enterprise-scale Applications in the Generative AI Era,” a Presentation from Skyworks Solutions

Mumtaz Vauhkonen, Senior Director of AI at Skyworks Solutions, presents the “Multimodal Enterprise-scale Applications in the Generative AI Era” tutorial at the May 2025 Embedded Vision Summit. As artificial intelligence makes rapid strides in the use of large language models, the need for multimodality arises in multiple application scenarios. Similar…

“Real-world Deployment of Mobile Material Handling Robotics in the Supply Chain,” a Presentation from Pickle Robot Company

Peter Santos, Chief Operating Officer of Pickle Robot Company, presents the “Real-World Deployment of Mobile Material Handling Robotics in the Supply Chain” tutorial at the May 2025 Embedded Vision Summit. More and more of the supply chain needs to be, and can be, automated. Demographics, particularly in the developed world…

Edge AI and Vision Insights: October 1, 2025

OPTIMIZING DEEP LEARNING MODEL EFFICIENCY

Quantization Techniques for Efficient Deployment of Large Language Models: A Comprehensive Review

The deployment of large language models (LLMs) in resource-constrained environments is challenging due to the significant computational and memory demands of these models. To address this challenge, various quantization techniques have been proposed to reduce the model’s resource…
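
To give a feel for the basic idea behind such techniques: post-training quantization maps float32 weights to low-precision integers plus a scale factor, cutting memory roughly 4x for int8 at some cost in accuracy. A minimal symmetric per-tensor sketch (illustrative only, not drawn from the review):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: w ≈ q * scale."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(1).standard_normal(4096).astype(np.float32)
q, scale = quantize_int8(w)
print("max reconstruction error:", np.abs(w - dequantize(q, scale)).max())
```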

“Developing a GStreamer-based Custom Camera System for Long-range Biometric Data Collection,” a Presentation from Oak Ridge National Laboratory

Gavin Jager, Researcher and Lab Space Manager at Oak Ridge National Laboratory, presents the “Developing a GStreamer-based Custom Camera System for Long-range Biometric Data Collection” tutorial at the May 2025 Embedded Vision Summit. In this presentation, Jager describes Oak Ridge National Laboratory’s work developing software for a custom camera system…
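
For readers new to GStreamer, a pipeline of the general kind such a system builds on can be assembled in a few lines of Python. The sketch below is illustrative only (not ORNL’s software) and assumes GStreamer 1.x with PyGObject installed:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Capture from a V4L2 camera, convert, encode to H.264 and write an MP4 file.
# Element names, caps and properties are placeholders a real system would tune.
pipeline = Gst.parse_launch(
    "v4l2src device=/dev/video0 ! videoconvert ! "
    "x264enc tune=zerolatency ! mp4mux ! filesink location=capture.mp4"
)
pipeline.set_state(Gst.State.PLAYING)

# Block until an error or end-of-stream, then shut the pipeline down cleanly.
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.ERROR | Gst.MessageType.EOS)
pipeline.set_state(Gst.State.NULL)
```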

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411