
LETTER FROM THE EDITOR
Dear Colleague,

Developer Survey

Every year, the Edge AI and Vision Alliance surveys developers to understand what chips and tools they use to build visual AI systems. This is our eighth year conducting the survey, and we would like to get your opinions. Many suppliers of computer vision building-block technologies use the results of our Computer Vision Developer Survey to guide their priorities. We also share results from the survey at Edge AI and Vision Alliance events and in white papers and presentations made available throughout the year on the Alliance website.

I’d really appreciate it if you’d take a few minutes to complete the first stage of this year’s survey; it typically takes less than 10 minutes. We are keeping the survey open through the end of day tomorrow, November 11, so don’t miss your chance to have your voice heard.

As a thank-you, we will send you a coupon for $50 off the price of a two-day Embedded Vision Summit ticket (to be sent when registration opens). In addition, we will enter your completed survey into a drawing for one of fifty Amazon gift cards worth $25. Thank you in advance for your perspective. Fill out the survey.

Next Tuesday, November 16 at 9 am PT, Yole Développement will deliver the free webinar “Neuromorphic Sensing and Computing: Compelling Options for a Host of AI Applications” in partnership with the Edge AI and Vision Alliance.

AI is transforming the way we design processors and accompanying sensors, as well as how we develop systems based on them. Deep learning, currently the predominant paradigm, leverages ever-larger neural networks, initially built with vast amounts of training data and subsequently fed with large sets of inference data from image and other sensors. This brute-force approach is increasingly limited by cost, size, power consumption and other constraints. Today’s AI accelerators are already highly optimized, with next-generation gains potentially obtained via near- or in-memory compute, but what comes after that? Brain-inspired, asynchronous, energy-efficient neuromorphic sensing and computing aspires to be the long-term solution for a host of AI applications.

Yole Développement analysts Pierre Cambou and Adrien Sanchez will begin with an overview of neuromorphic sensing and computing concepts, followed by a review of target applications. Next, they will evaluate the claims made by neuromorphic technology developers and product suppliers, comparing and contrasting them with the capabilities and constraints of mainstream approaches, both now and as the technologies evolve. For more information and to register, please see the event page.

Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance

DEVELOPMENT TOOLS

Standards: Powering the Future of Embedded Vision – Khronos Group
Open standards play an important role in enabling interoperability for faster, easier deployment of vision-based systems. With advances in machine learning, the number of accelerators, processors, libraries and compilers in the market is rapidly increasing. Proprietary APIs and formats create a complex industry landscape that can hinder overall market growth. The Khronos Group’s open standards for accelerating parallel programming play a major role in deploying inferencing and embedded vision applications and include SYCL, OpenVX, NNEF, Vulkan, SPIR and OpenCL. In this presentation, Neil Trevett, Vice President of Developer Ecosystems at NVIDIA and President of the Khronos Group, provides an up-to-the-minute overview of the Khronos embedded vision ecosystem, highlighting the capabilities and benefits of each API, giving developers insight into which standards may be relevant to their own embedded vision projects, and discussing the future directions of these key industry initiatives.

Three Lessons Learned in Building a Successful AI Inferencing Toolkit – Intel
With OpenVINO, Intel started with a simple vision in mind: “How might we get the best deep learning performance with real-world solutions deployed into production?” In this session, Yury Gorbachev, Senior Principal Engineer at Intel and the lead architect of the Intel Distribution of OpenVINO toolkit, discusses three key lessons learned in building a successful AI inferencing platform. Gorbachev explores critical topics including low-precision optimizations, open source, and the essential components of an AI inferencing toolkit architecture.

DEPTH SENSING

When 2D Is Not Enough: An Overview of Optical Depth Sensing Technologies – Ambarella
Camera systems used for computer vision at the edge are smarter than ever, but when they perceive the world in 2D, they remain limited for many applications because they lack information about the third dimension: depth. Sensing technologies that capture and integrate depth allow us to build smarter and safer systems across a wide variety of applications, including robotics, surveillance, AR/VR and gesture detection. In this presentation, Dinesh Balasubramaniam, Senior Product Marketing Manager at Ambarella, examines three common technologies used for optical depth sensing: stereo camera systems, time-of-flight (ToF) sensors and structured light systems. He reviews the core ideas behind each technology, compares and contrasts them, and identifies the tradeoffs to consider when selecting a depth sensing technology for your application, focusing on accuracy, sensing range, performance under difficult lighting conditions, optical hardware requirements and more.
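The stereo camera approach mentioned above recovers depth from the classic disparity-to-depth relation Z = f·B/d. A minimal sketch follows; the focal length, baseline and disparity numbers are hypothetical, chosen only to illustrate the accuracy/range tradeoff the talk discusses:

```python
# Stereo depth sketch: depth Z = (focal_length * baseline) / disparity.
# All numeric values below are hypothetical, for illustration only.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Return depth in meters for a matched pixel pair.

    focal_px     -- focal length of the (rectified) cameras, in pixels
    baseline_m   -- distance between the two camera centers, in meters
    disparity_px -- horizontal pixel shift of the feature between the two views
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (feature must shift between views)")
    return (focal_px * baseline_m) / disparity_px

# A nearby object produces a large disparity; a distant one a small disparity.
near = depth_from_disparity(focal_px=800.0, baseline_m=0.12, disparity_px=96.0)  # 1.0 m
far = depth_from_disparity(focal_px=800.0, baseline_m=0.12, disparity_px=6.0)    # 16.0 m
print(near, far)
```

Because depth varies inversely with disparity, a fixed ±1-pixel matching error costs far more accuracy at long range than at short range, which is one reason stereo systems trade off against ToF and structured light as range grows.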

DepthAI: Embedded, Performant Spatial AI and Computer Vision – Luxonis
Performant spatial AI and CV enables human-like perception and real-time interaction with the world. But embedding performant spatial AI and CV into actual products is difficult, costly and time-consuming. DepthAI makes building products with these capabilities fast, easy and flexible. DepthAI is a platform—a complete ecosystem of custom hardware, firmware, software and AI training—that combines neural inference, depth vision and hardware-accelerated computer vision functions into an easy-to-use solution. DepthAI, as described by Brandon Gilles, CEO of Luxonis, in this presentation, includes integrations with ROS, Python and C++. It is provided under the permissive MIT open-source license, so you can use all of these functionalities royalty-free and in closed-source products and projects. DepthAI supports USB 2 and 3, SPI, Gigabit Ethernet (with PoE), UART and I2C interfaces. It works with any operating system (Linux, macOS, Windows, FreeRTOS, Zephyr) and with any bare-metal system that can run C++. DepthAI can also run completely standalone, with no other processor involved, using Luxonis’ Gen2 Pipeline Builder and onboard Python interpreter.

UPCOMING INDUSTRY EVENTS

Neuromorphic Sensing and Computing: Compelling Options for a Host of AI Applications – Yole Développement Webinar: November 16, 2021, 9:00 am PT

Developing Intelligent AI Everywhere with BrainChip’s Akida – BrainChip Webinar: December 9, 2021, 9:00 am PT

Embedded Vision Summit: May 17-19, 2022, Santa Clara, California

More Events

FEATURED NEWS

An Upcoming Online Event from Unity Technologies Explores Data-centric AI Development

Recent Intel Innovation Conference Announcements Include Its 12th Generation Intel Core Microprocessor Architecture and Initial Products, and the oneAPI 2022 Toolkit and Other Developer Offerings

Nextchip Licenses aiMotive’s aiWare4 for the Apache6 Automotive Domain Processor

Ambarella Intends to Acquire Radar Perception AI Algorithm Developer Oculii

Qualcomm Upgrades Its Mobile Roadmap to Deliver Increased Capabilities Across the Snapdragon 7, 6 and 4 Series

More News

EDGE AI AND VISION PRODUCT OF THE YEAR WINNER SHOWCASE

Edge Impulse EON Compiler (Best Edge AI Developer Tool) – Edge Impulse
Edge Impulse’s EON Compiler is the 2021 Edge AI and Vision Product of the Year Award winner in the Edge AI Developer Tools category. The EON Compiler lets embedded machine learning (ML) code run your neural network in 25-55% less RAM and up to 35% less flash memory, while retaining the same accuracy, compared to TensorFlow Lite for Microcontrollers. EON achieves this by compiling neural networks directly to C++, rather than relying on the generic interpreters used by other embedded solutions, thus eliminating interpreter overhead and saving code space, device power and precious time. The EON Compiler sets a new standard for tinyML developers seeking to bring better embedded technologies to market.
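The compiled-versus-interpreted distinction behind those savings can be sketched in miniature: a generic interpreter carries the model graph as data and dispatches each node at runtime, while a compiled model is just straight-line calls with the graph baked in. This toy example (illustrative only, not EON’s actual output) shows the two styles producing identical results:

```python
# Toy contrast between a generic graph interpreter and "compiled" inference.
# Illustrative only -- not the code the EON compiler actually emits.

def relu(x):
    return [max(0.0, v) for v in x]

def scale(x, k):
    return [v * k for v in x]

# Interpreter style: ops and parameters stored as data, looked up per node at runtime.
OPS = {"relu": lambda x, p: relu(x), "scale": lambda x, p: scale(x, p)}

def run_interpreted(graph, x):
    for op_name, param in graph:  # per-node table lookup and dispatch overhead
        x = OPS[op_name](x, param)
    return x

# Compiled style: the graph is baked into straight-line code; no dispatch table,
# no graph representation to keep in RAM or flash.
def run_compiled(x):
    return relu(scale(x, 2.0))

graph = [("scale", 2.0), ("relu", None)]
x = [-1.0, 0.5, 3.0]
assert run_interpreted(graph, x) == run_compiled(x)  # same result, less machinery
```

On a microcontroller the interpreter’s op table, graph structures and dispatch logic all cost flash and RAM, which is what compiling the graph away reclaims.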

Please see here for more information on Edge Impulse’s EON Compiler. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company’s leadership in edge AI and computer vision as evaluated by independent industry experts.



Contact

Address

1646 North California Blvd.,
Suite 360
Walnut Creek, CA 94596 USA

Phone: +1 (925) 954-1411