2020 Embedded Vision Summit

Dear Colleague,

We’ve got big news! Originally scheduled to take place in person next month in California, the 2020 Embedded Vision Summit is moving to a fully online experience. The event will consist of five sessions taking place on Tuesdays and Thursdays from September 10 through September 24, from 9 am to 2 pm Pacific Time.

The Summit remains the premier conference and tradeshow for innovators adding computer vision and AI to products—except now it’s:

  • Easier to attend
  • More flexible for your schedule
  • Available in two different pass tiers ($249 and $99) to fit your budget!

What can you expect? Hear from and interact with over 100 expert speakers and industry leaders on the latest in practical computer vision and edge AI technology—including processors, tools, techniques, algorithms and applications—in both live and on-demand sessions. And see cool demos of the latest building-block technologies from dozens of exhibitors! Attending the Summit is the perfect way to bring your next vision- or AI-based product to life.

Are you ready to gain valuable insights and make important connections? Be sure to register today with promo code SUPEREBNL20-V to receive your Super Early Bird Discount!

Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance


Distance Estimation Solutions for ADAS and Automated Driving (AImotive)
Distance estimation is at the heart of advanced driver assistance systems (ADAS) and automated driving (AD). Simply stated, safe operation of vehicles requires robust distance estimation. Many different types of sensors (camera, radar, LiDAR, sonar) can be used for distance estimation, and different distance estimation techniques can be used with each type of sensor. Each type of sensor and technique has unique strengths and weaknesses. In this presentation, Gergely Debreczeni, Chief Scientist at AImotive, examines these techniques and their strengths and weaknesses, and shows how multiple techniques using different sensor types can be fused to enable robust distance estimation for a specific automated driving application.
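One textbook way to fuse distance estimates from sensors with different error characteristics (not AImotive's specific method, which the talk covers) is inverse-variance weighting: each sensor's reading is weighted by how reliable it is, and the fused estimate is both more accurate and has lower variance than any single sensor. A minimal sketch, with made-up noise figures for camera, radar and LiDAR:

```python
# Illustrative sketch of inverse-variance fusion of per-sensor distance
# estimates. The sensor noise figures below are hypothetical.

def fuse_distances(estimates):
    """estimates: list of (distance_m, variance_m2) pairs, one per sensor.

    Returns the fused distance and its variance. A lower-variance (more
    reliable) sensor contributes proportionally more to the result.
    """
    weights = [1.0 / var for _, var in estimates]
    fused = sum(w * d for (d, _), w in zip(estimates, weights)) / sum(weights)
    fused_var = 1.0 / sum(weights)  # always <= the best single sensor
    return fused, fused_var

# Camera (noisy at long range), radar, LiDAR (most accurate) -- hypothetical
readings = [(52.0, 4.0), (50.5, 0.25), (50.2, 0.04)]
dist, var = fuse_distances(readings)
```

Note how the fused estimate lands close to the LiDAR reading, since its low variance dominates the weighting, while the noisy camera estimate barely moves the result.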

Sensory Fusion for Scalable Indoor Navigation (Brain Corp)
Indoor autonomous navigation requires using a variety of sensors in different modalities. Merging RGB, depth, lidar and odometry data streams to achieve autonomous operation requires fusion of sensory data. In this talk, Oleg Sinyavskiy, Director of Research and Development at Brain Corp, describes his company’s sensor-pack-agnostic sensory fusion approach, which allows it to take advantage of the latest in sensor technology to achieve robust, safe and performant perception across a large fleet of industrial robots. He explains how Brain Corp addressed a number of sensory fusion challenges, such as robust and safe obstacle detection, fusing geometric and semantic information, and dealing with moving people and sensory blind spots.
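A common sensor-agnostic way to combine obstacle evidence from several modalities (a standard occupancy-grid technique, not necessarily Brain Corp's implementation) is log-odds fusion: each sensor reports a probability that a map cell is occupied, and the probabilities combine additively in log-odds space. A minimal single-cell sketch:

```python
# Illustrative sketch: fusing independent per-sensor occupancy probabilities
# for one grid cell in log-odds form. Sensor probabilities are hypothetical.
import math

def log_odds(p):
    return math.log(p / (1.0 - p))

def fuse_cell(prior, sensor_probs):
    """Combine independent per-sensor occupancy estimates for one cell."""
    l = log_odds(prior)
    for p in sensor_probs:
        l += log_odds(p)  # each sensor adds its evidence in log-odds space
    return 1.0 / (1.0 + math.exp(-l))  # convert back to a probability

# A depth camera weakly suggests an obstacle; lidar strongly confirms it
p_occupied = fuse_cell(prior=0.5, sensor_probs=[0.6, 0.9])
```

Because the update is just addition, any number of sensors can contribute without the fusion code knowing which modality produced each estimate, which is one way a fusion layer can stay agnostic to the robot's sensor pack.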


Emerging Processor Architectures for Deep Learning: Options and Trade-offs (Hailo)
In the past year, numerous new processor architectures for machine learning have emerged. Many of these focus on edge applications, reflecting the growing demand for deploying machine learning outside of data centers. This intensive focus on processor architecture innovation comes at a perfect time in light of the slowing progress in silicon fabrication technology and the massive opportunities for deployment of AI applications using vision and other sensors. In this presentation, Orr Danon, CEO of Hailo, explores the architectural concepts underlying these diverse processors and analyzes their suitability for various applications. He derives the performance bounds of each architectural approach and provides insights on the practical deployment of machine learning using these specialized architectures. In addition, using a case study, he explores the opportunities enabled by designing neural networks to exploit specialized processor architectures. Also be sure to check out Hailo’s upcoming webinar on deep neural network quantization trade-offs and optimizations.
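One standard tool for deriving this kind of performance bound (a generic roofline model, not necessarily the analysis Hailo presents) caps attainable throughput at the lower of the compute roof and memory bandwidth times the workload's arithmetic intensity. A minimal sketch with hypothetical accelerator numbers:

```python
# Illustrative roofline-style bound. The accelerator specs are hypothetical.

def roofline_bound(peak_ops, mem_bw, intensity):
    """Attainable throughput in ops/s.

    peak_ops:  peak compute throughput (ops/s)
    mem_bw:    memory bandwidth (bytes/s)
    intensity: arithmetic intensity of the workload (ops/byte)
    """
    return min(peak_ops, mem_bw * intensity)  # whichever roof is hit first

# Hypothetical edge accelerator: 4 TOPS peak, 25 GB/s DRAM bandwidth
peak, bw = 4e12, 25e9
low  = roofline_bound(peak, bw, intensity=10)   # memory-bound layer
high = roofline_bound(peak, bw, intensity=500)  # compute-bound layer
```

The memory-bound layer reaches only 0.25 TOPS regardless of peak compute, which is why architectures that keep weights close to the compute (and network designs with higher arithmetic intensity) matter so much at the edge.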

Pioneering Analog Compute for Edge AI to Overcome the End of Digital Scaling (Mythic)
AI inference at the edge will continue to create insatiable demand for compute performance in power- and cost-constrained form factors. Taking into account past trends, continuous scale-up of algorithms and the real economic value now being generated by AI at the edge, a demand for 1000x more compute over the next 10 years is not out of the question. Mythic is a pioneer in analog compute, a key technology that will take us well beyond the end of Moore’s Law and deliver powerful, easy-to-use compute at the edge to meet this demand. In this presentation, Mike Henry, CEO and Founder of Mythic, discusses Mythic’s unique IPU architecture, which combines analog compute with compute-in-memory, delivering an unparalleled combination of AI inference performance and energy efficiency. He also highlights the advantages of using the Mythic architecture in edge applications such as DNN-enabled video surveillance cameras.
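To give a feel for the general idea of analog compute-in-memory (a simplified model, not Mythic's IPU design): weights are stored as conductances in a memory array, activations drive voltages, and the multiply-accumulate happens physically as summed currents, at the cost of analog non-idealities. A toy behavioral sketch, with a Gaussian noise term standing in for those non-idealities:

```python
# Illustrative behavioral model of an analog in-memory dot product.
# Weight/activation values and the noise level are hypothetical.
import random

def analog_dot(weights, activations, noise_std=0.01):
    # Each weight*activation pair contributes a current; summing currents on
    # a shared line performs the accumulation "for free" in the analog domain.
    ideal = sum(w * a for w, a in zip(weights, activations))
    # Analog non-idealities (device variation, read noise) perturb the result.
    return ideal + random.gauss(0.0, noise_std)

y = analog_dot([0.2, -0.5, 0.8], [1.0, 0.5, 0.25])
```

The trade-off this models is the central one for analog inference: the multiply-accumulate costs almost no data movement, but the result carries analog noise, which is why such architectures pair naturally with noise-tolerant DNN workloads.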


Edge AI and Vision Alliance Announces Vision Tank Start-up Competition Semi-finalists

Chips&Media Reveals the c.WAVE120, a New Generation of Super-resolution Hardware IP

Intel and Udacity Launch a New Edge AI Program to Train 1 Million Developers

XIMEA Releases a High-speed Industrial Camera Based On a New 5K Resolution CMOS Image Sensor

BrainChip Announces Wafer Fabrication of the Akida System-on-Chip

Synopsys’ Online Embedded Vision Sessions Help You Navigate Intelligent Vision at the Edge

More News


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.



1646 North California Blvd.,
Suite 360
Walnut Creek, CA 94596 USA

Phone: +1 (925) 954-1411