
Edge AI and Vision Insights: March 8, 2023 Edition

LETTER FROM THE EDITOR
Dear Colleague,

2023 Vision Tank

The Vision Tank is the Edge AI and Vision Alliance’s annual start-up competition, showcasing the best new ventures using computer vision or visual AI in their products or services. Open to early-stage companies, the competition judges entrants on four criteria: technology innovation, business plan, team and business opportunity. It is intended for start-ups that:

  • Have an initial product or prototype
  • Have ~15 or fewer people
  • Have raised less than ~$2M in capital

The Vision Tank final round takes place live on stage during the Embedded Vision Summit. The winning company receives:

  • A $5,000 cash award
  • Membership in the Edge AI and Vision Alliance for one year

Finalists also:

  • Present their new products or product ideas to more than 1,400 influencers and product creators at the 2023 Embedded Vision Summit
  • Build brand awareness and visibility through Alliance marketing channels
  • Benefit from advice from top industry experts
  • Gain introductions to potential investors, customers, employees and suppliers

For more information and to enter, please see the program page. The submission deadline has been extended to this Friday, March 10, and the application requires detailed information, so don’t delay!


2023 Embedded Vision Summit

Short on time but want to see the latest tools and techniques in perceptual AI? Looking to see what’s on the horizon in enabling technologies for embedded computer vision? Need to keep an eye on the competition? If so, you should be at the 2023 Embedded Vision Summit, happening May 22-24 in Santa Clara, California.

The Summit is the event for practical computer vision and edge AI; you don’t want to miss it! Register now using discount code SUMMIT23-NL and you can save 25%. The discount ends next Friday, March 17; don’t delay!

Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance

INDUSTRY INSIGHTS FROM THOUGHT LEADERS

Embedded Vision in Robotics, Biotech and Education (DEKA Research and Development)
In his 2018 keynote presentation at the Embedded Vision Summit, “From Mobility to Medicine: Vision Enables the Next Generation of Innovation,” legendary inventor and technology visionary Dean Kamen, Founder of DEKA Research and Development, memorably predicted that embedded vision capabilities would eventually become as common as limit switches, i.e., used universally to enable systems to understand their environments. In this on-stage conversation, Jeff Bier, Founder of the Edge AI and Vision Alliance, catches up with Kamen on how visual AI is currently being used in his projects. These projects include mobile robots (developed by his company, DEKA Research and Development), new processes for the large-scale manufacturing of engineered human tissues (led by Kamen’s Advanced Regenerative Manufacturing Institute) and the educational programs organized by FIRST (For Inspiration and Recognition of Science and Technology). Kamen and Bier explore where visual AI is paying off as Kamen envisioned, and what challenges are limiting his projects from realizing their full potential.

Ask the Ethicist: Your Questions about AI Privacy, Bias and Ethics Answered (Santa Clara University)
Building ethical AI systems is a tricky business, since examining one’s work through an ethical lens may seem to raise more questions than it answers. Following her 2022 Embedded Vision Summit talk, “Privacy: A Surmountable Challenge for Computer Vision,” Susan Kennedy, Assistant Professor of Philosophy at Santa Clara University, held an extended conversation with Jeff Bier, Founder of the Edge AI and Vision Alliance, in which she took questions from the audience covering issues of privacy, bias and ethics in AI systems. What are best practices for building privacy into product design? What can we do to reduce bias in AI systems? Does the presence of bias automatically render a tool ethically unjustifiable? Is ethics really necessary in cases where the relevant laws and regulations are being followed? How should one balance ethical considerations against the company’s objectives? How should teams be structured to minimize ethical issues? These and other queries were addressed during this interesting conversation.

SMART CITIES AND OTHER ANALYTICS OPPORTUNITIES

Taking Intelligent Video Analytics to the Next Level (Hailo)
In this presentation, Avi Baum, CTO and Co-founder of Hailo, examines how powerful edge AI improves the value proposition of video analytics across market segments such as physical security, access control, retail and intelligent transportation. He explores several real-world applications and highlights ways in which state-of-the-art neural network models are making an impact. Of course, incorporating AI into video analytics means higher processing requirements. In addition, image sensor resolutions are increasing (in order to detect smaller objects), and the use of multi-channel and multi-sensor cameras is growing, further driving up processing performance demands. Baum shows how to meet these increasing processing needs using Hailo’s unique high-performance edge processors, while adhering to operational guidelines and complying with regulations.

COVID-19 Safe Distancing Measures in Public Spaces with Edge AI (GovTech of Singapore)
Whether in indoor environments, such as supermarkets, museums and offices, or outdoor environments, such as parks, maintaining safe social distancing has been a priority during the COVID-19 pandemic. In this talk, Ebi Jose, Senior Systems Engineer at GovTech, the Government Technology Agency of Singapore, presents GovTech’s work developing cloud-connected edge AI solutions that count the number of people present in outdoor and indoor spaces, providing facility operators with real-time information that allows them to manage spaces to enable social distancing. Jose provides an overview of the system architecture, hardware, algorithms, software and backend monitoring elements of GovTech’s solution, highlighting differences in how it addresses indoor vs. outdoor spaces. He also presents results from deployments of GovTech’s solution.

UPCOMING INDUSTRY EVENTS

Embedded Vision Summit: May 22-25, 2023, Santa Clara, California

More Events

FEATURED NEWS

e-con Systems Launches a New 3D ToF MIPI Camera for NVIDIA Jetson Processors

Teledyne FLIR’s Enhanced Prism AI Detection and Tracking Software Model Enables Embedded Thermal Automotive Perception Systems

STMicroelectronics’ New Microcontrollers Expand the STM32U5 Series, Raising Performance and Energy Efficiency for IoT and Embedded Applications

The Beta Release of Network Optix’ Latest v5.1 Video-software-as-a-service Platform is Now Available

MediaTek Launches the Dimensity 7200 SoC, Enhancing Computational Photography and Other AI-enabled Experiences

More News

EMBEDDED VISION SUMMIT SPONSOR SHOWCASE

Attend the Embedded Vision Summit to meet these and other leading computer vision and edge AI technology suppliers!

DEEPX
DEEPX develops neural network processing units (NPUs) and other artificial intelligence (AI) technologies for edge applications. The company’s strengths in high performance and ultra-low power consumption enable it to deliver efficient, low-cost AI NPUs for the rapidly growing IoT industry.


Edge Impulse
Edge Impulse is a leading development platform for machine learning on edge devices. The company’s mission is to enable every developer and device maker with the best development and deployment experience for machine learning on the edge, focusing on sensor, audio, and computer vision applications.

EDGE AI AND VISION PRODUCT OF THE YEAR WINNER SHOWCASE

Synopsys’ ARC EV7xFS Processor IP for Functional Safety (Best Automotive Solution)
Synopsys’ ARC EV7xFS Processor IP for Functional Safety was the 2022 Edge AI and Vision Product of the Year Award winner in the Automotive Solutions category. The EV7xFS is a unique multi-core SIMD vector digital signal processing (DSP) solution that combines seamless scalability of computer vision, DSP and AI processing with state-of-the-art safety features for real-time applications in next-generation automobiles. The processor family scales from the single-core EV71FS to the dual-core EV72FS and the quad-core EV74FS. The multicore vector DSP products include L1 cache coherence and a software toolchain that supports OpenCL C or C/C++ and automatically partitions algorithms across multiple cores. All cores share the same programming model and one set of development tools, the ARC EV Development Toolkit for Safety. The EV7xFS family includes unique fine-grained power management for maximizing power efficiency. AI is supported in the vector DSP cores and can be scaled to higher levels of performance with optional neural network accelerators. The EV7xFS architecture was designed from the ground up, applying the latest safety concepts. This approach was critical to achieving the ideal balance of performance, power, area and safety for use in today’s automotive SoCs requiring safety levels up to ASIL D for autonomous driving.

Please see here for more information on Synopsys’ ARC EV7xFS Processor IP for Functional Safety. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company’s leadership in edge AI and computer vision as evaluated by independent industry experts.



Contact

Address

1646 N. California Blvd.,
Suite 360
Walnut Creek, CA 94596 USA

Phone: +1 (925) 954-1411