|LETTER FROM THE EDITOR|
Next Thursday, August 25 at 9 am PT, Intel will deliver the free webinar “Accelerating TensorFlow Models on Intel Compute Devices Using Only 2 Lines of Code” in partnership with the Edge AI and Vision Alliance. Are you using Google’s TensorFlow framework to develop your deep learning models? And are you doing inference processing on those models using Intel compute devices: CPUs, GPUs, VPUs and/or HDDL (High Density Deep Learning) processing solutions? If the answer to both questions is “yes,” then this hands-on tutorial is for you: it shows how to integrate TensorFlow with the Intel Distribution of OpenVINO toolkit for rapid development while also achieving accurate, high-performance inference results.
TensorFlow developers can now take advantage of OpenVINO optimizations with TensorFlow inference applications across a wide range of Intel compute devices by adding just two lines of code. In addition to introducing OpenVINO and its capabilities, the webinar will include demonstrations of the concepts discussed via a code walk-through of a sample application. It will be presented by Kumar Vishwesh, Technical Product Manager, and Ragesh Hajela, Senior AI Engineer, both of Intel. A question-and-answer session will follow the presentation. For more information and to register, please see the event page.
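According to Intel’s documentation for the OpenVINO integration with TensorFlow, the two added lines are simply an import and a backend selection. The sketch below shows where they slot into an ordinary TensorFlow inference script; it is a drop-in fragment rather than a tested example, since it assumes the `openvino-tensorflow` package is installed, and the model path, input shape and backend choice are illustrative placeholders:

```python
import tensorflow as tf

# The two added lines: import the integration and select an Intel device
# backend. Documented backend names include "CPU", "GPU", "MYRIAD" (VPU)
# and "VAD-M" (HDDL); "CPU" here is an illustrative choice.
import openvino_tensorflow
openvino_tensorflow.set_backend("CPU")

# The rest of the script is unchanged TensorFlow inference code.
# "saved_model_dir" and the input shape are placeholders for illustration.
model = tf.keras.models.load_model("saved_model_dir")
predictions = model(tf.random.uniform([1, 224, 224, 3]))
```

With the two lines in place, supported TensorFlow operations are dispatched to the selected Intel backend; unsupported operations fall back to stock TensorFlow execution.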
And the following Tuesday, August 30 at 9 am PT, Edge Impulse will deliver the free webinar “Edge Impulse’s FOMO Technology and Sony’s Computer Vision Platform: A Compelling Combination” in partnership with the Edge AI and Vision Alliance. Edge Impulse’s FOMO (Faster Objects, More Objects), introduced earlier this year, is a brand-new approach to running object detection models on resource-constrained devices. This ground-breaking algorithm brings real-time object detection, tracking and counting to microcontrollers, such as Sony’s Spresense product line, for the first time. Sony’s latest multicore Spresense microcontrollers, in combination with the company’s high-resolution image sensor and camera portfolios as well as global LTE connectivity capabilities, create robust computer vision hardware platforms.
In this webinar, you’ll learn how Edge Impulse’s software expertise and products unlock this hardware potential to deliver an optimized total solution for agriculture technology, industrial IoT, smart cities, remote monitoring and other application opportunities. The webinar will be presented by Jenny Plunkett, Senior Developer Relations Engineer at Edge Impulse, and Armaghan Ebrahimi, Partner Solutions Engineer at Sony Electronics Professional Solutions Americas. Plunkett and Ebrahimi will introduce their respective companies’ technologies and products, as well as explain how they complement each other in delivering enhanced edge machine learning, computer vision and IoT capabilities. The webinar will include demonstrations of the concepts discussed, detailing how to bring to life applications that require sensor analysis, machine learning, image processing and data filtering. For more information and to register, please see the event page.
|NEUROMORPHIC SENSING AND PROCESSING|
Event-Based Neuromorphic Perception and Computation: The Future of Sensing and AI
Can we gain important advantages by implementing computer vision using principles of biological vision? Professor Ryad Benosman thinks so. Mainstream image sensors and processors acquire and process visual information as a series of snapshots recorded at a fixed frame rate, resulting in limited temporal resolution, low dynamic range and a high degree of redundancy in data and computation. Nature suggests a different approach: Biological vision systems are driven and controlled by events within the scene in view, and not – like conventional techniques – by artificially created timing and control signals that have no relation to the source of the visual information. The term “neuromorphic” refers to systems that mimic biological processes.
In this 2022 Embedded Vision Summit keynote, Professor Benosman — a pioneer of neuromorphic sensing and computing — introduces the fundamentals of bio-inspired, event-based image sensing and processing approaches, and explores their strengths and weaknesses. He shows that bio-inspired vision systems have the potential to outperform conventional, frame-based systems and to enable new capabilities in terms of data compression, dynamic range, temporal resolution and power efficiency in applications such as 3D vision, object tracking, motor control and visual feedback loops.
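The contrast between frame-based and event-based sensing described above can be made concrete with a toy model (a simplification for illustration, not any vendor’s actual sensor pipeline): an event-based pixel reports only when its log-intensity changes by more than a contrast threshold, so a static scene produces no data at all. A minimal sketch, where the threshold value and the test frames are invented for illustration:

```python
import numpy as np

def events_from_frames(prev, curr, threshold=0.2):
    """Toy event generation: emit +1/-1 events where the per-pixel
    log-intensity change exceeds a contrast threshold."""
    delta = np.log1p(curr.astype(float)) - np.log1p(prev.astype(float))
    events = np.zeros_like(delta, dtype=int)
    events[delta > threshold] = 1    # brightness increased: ON event
    events[delta < -threshold] = -1  # brightness decreased: OFF event
    return events

# A static 4x4 scene produces no events; one brightening pixel produces one.
prev = np.zeros((4, 4))
curr = prev.copy()
curr[1, 2] = 10.0  # a single pixel brightens
ev = events_from_frames(prev, curr)
print(int(np.count_nonzero(ev)))  # only the changed pixel fires
```

This is why event-based data is sparse and why the redundancy and fixed temporal resolution of frame-based capture disappear: pixels that see nothing new transmit nothing.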
Are Neuromorphic Vision Technologies Ready for Commercial Use?
This lively 2022 Embedded Vision Summit panel discussion provides perspectives on this question and other topics from a panel of seasoned experts working at the leading edge of neuromorphic vision development, tools and techniques. Sally Ward-Foxton, European Correspondent for EE Times, moderates; the panelists are Garrick Orchard, Research Scientist at Intel Labs, James Marshall, Chief Scientific Officer at Opteran, Ryad Benosman, Professor at the University of Pittsburgh and Adjunct Professor at the CMU Robotics Institute, and Steve Teig, Founder and CEO of Perceive.
|EVALUATING AND SELECTING DEEP LEARNING MODELS|
Is My Model Performing Well? It Depends…
Applying the Right Deep Learning Model with the Right Data for Your Application
|UPCOMING INDUSTRY EVENTS|
Accelerating TensorFlow Models on Intel Compute Devices Using Only 2 Lines of Code – Intel Webinar: August 25, 2022, 9:00 am PT
Edge Impulse’s FOMO Technology and Sony’s Computer Vision Platform: A Compelling Combination – Edge Impulse Webinar: August 30, 2022, 9:00 am PT
Edge Impulse Releases Deployment Support for BrainChip Akida Neuromorphic Processor IP Core
Ambarella Partners with Inceptio Technology to Deliver Level 3 Autonomous Trucking, Including Surround Camera and Front ADAS Perception With AI Compute
STMicroelectronics’ New Inertial Modules Enable AI Training Inside the Sensor
Imagination Technologies Launches the IMG RTXM-2200, Its First Real-time Embedded RISC-V CPU
|EDGE AI AND VISION PRODUCT OF THE YEAR WINNER SHOWCASE|
Grabango Checkout-free Technology (Best Enterprise Edge AI End Product)
Please see here for more information on Grabango’s Checkout-free Technology. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company’s leadership in edge AI and computer vision as evaluated by independent industry experts.