e-CAM130_CURB: 13 Mpixel Fixed-focus 4K Camera for the Raspberry Pi 4

This video was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. The Raspberry Pi 4 is one of the most common platforms used for camera evaluation and prototyping. However, unlocking the true vision potential of Raspberry Pi 4 requires the ability to select the right camera. e-CAM130_CURB …


Procedural Traffic Generation: What are the Options?

This blog post was originally published at AImotive’s website. It is reprinted here with the permission of AImotive. Providing an end-to-end environment that can one day replace validation requires refined, high-fidelity models of every part of the world. Using aiSim’s open APIs and SDK ensures that the integration of these many elements is seamless and performance …


The SK-TDA4VM, an 8 TOPS Deep Learning Starter Kit for Edge AI Application Development

Bring smart cameras, robots and intelligent machines to life with Texas Instruments’ TDA4VM processor starter kit. With a fast setup process and an assortment of foundational demos and tutorials, you can start prototyping a vision-based application in less than an hour. The kit enables 8 TOPS of deep learning performance and hardware-accelerated edge AI processing …


The Rise of Level 4 and Level 5 Autonomy

This blog post was originally published at Ambarella’s website. It is reprinted here with the permission of Ambarella. Thanks to the commercialization of robotaxis, the driverless revolution is almost here. Modern autonomy is understood as a range of capabilities laid out on a scale—level 0 through 5. On the lower end (levels 1 and 2), …


Edge AI and Vision Insights: January 26, 2022 Edition

LETTER FROM THE EDITOR Dear Colleague, Until this Friday, January 28, the Edge AI and Vision Alliance is accepting applications for the 2022 Edge AI and Vision Product of the Year Awards competition. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and …


Sensor Fusion at the Edge: Two Perspectives

This blog post was originally published at Lattice Semiconductor’s website. It is reprinted here with the permission of Lattice Semiconductor. Sensor fusion is a popular application for Lattice FPGAs, and that particular application is often a topic of discussion among marketing team members here at Lattice. Because of their low power consumption and small size, …


Real-time Video Enhancement Reaches New Standard with Visionary.ai and Inuitive Partnership

25 January 2022 – Jerusalem – Two Israeli AI startups, Visionary.ai and Inuitive, today announced a new partnership. The companies have been collaborating in recent months to run Visionary.ai’s AI-based imaging technology on Inuitive’s AI processor, making cutting-edge computer vision available to edge devices, affordably and with minimal power consumption. Visionary.ai has developed a software-based …


MIPI Cameras vs USB Cameras: a Detailed Comparison

This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Over the past few years, embedded vision has evolved from a buzzword to a widely adopted technology used across industrial, medical, retail, entertainment, and farming sectors. With each phase of its evolution, embedded vision has …


Oculi Enables Near-zero Lag Performance with an Embedded Solution for Gesture Control

Immersive extended reality (XR) experiences let users seamlessly interact with virtual environments. These experiences require real-time gesture control and eye tracking while running in resource-constrained environments such as head-mounted displays (HMDs) and smart glasses. These capabilities are typically implemented using computer vision technology, with imaging sensors that generate large volumes of data to be moved …


Contact

Address

1646 North California Blvd.,
Suite 360
Walnut Creek, CA 94596 USA

Phone
+1 (925) 954-1411