Cameras and Sensors for Embedded Vision
While analog cameras are still used in many vision systems, this section focuses on digital image sensors—usually either a CCD or CMOS sensor array that operates with visible light. However, this definition shouldn’t constrain the technology analysis, since many vision systems can also sense other types of energy (IR, sonar, etc.).
In many products, the camera housing has become the entire chassis of the vision system, leading to the emergence of “smart cameras” with all of the electronics integrated. By most definitions, a smart camera supports computer vision, since the camera is capable of extracting application-specific information. However, as both wired and wireless networks get faster and cheaper, there may still be reasons to transmit pixel data to a central location for storage or additional processing.
A classic example is cloud computing using the camera on a smartphone. The smartphone could be considered a “smart camera” as well, but sending data to a cloud-based computer may reduce the processing performance required on the mobile device, lowering cost, power, weight, etc. For a dedicated smart camera, some vendors have created chips that integrate all of the required features.
Cameras
Until recently, many people would imagine a camera for computer vision as the outdoor security camera shown in this picture. There are countless vendors supplying these products, and many more supplying indoor cameras for industrial applications. Don’t forget simple USB cameras for PCs, and don’t overlook the billion or so cameras embedded in the world’s mobile phones. The speed and quality of these cameras have risen dramatically, with 10+ megapixel sensors backed by sophisticated image processing hardware.
Consider, too, another important factor for cameras: the rapid adoption of 3D imaging using stereo optics, time-of-flight, and structured light technologies. Trendsetting cell phones now offer this technology, as do latest-generation game consoles. Look again at the picture of the outdoor camera and consider how much change is about to happen to computer vision markets as these new camera technologies become pervasive.
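To make the stereo-optics approach mentioned above concrete: a stereo rig recovers depth from disparity via the standard pinhole relation Z = f·B/d, where f is the focal length in pixels, B the baseline between the two cameras, and d the disparity in pixels. The sketch below illustrates the arithmetic only; the focal length and baseline values are illustrative, not taken from any specific camera discussed here.

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth (meters) of a point from stereo disparity: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A point seen with 64 px of disparity by a 700 px focal-length rig with a
# 12 cm baseline lies about 1.31 m from the cameras.
print(depth_from_disparity(700.0, 0.12, 64.0))  # 1.3125
```

The same inverse relationship explains why stereo depth accuracy degrades at long range: as disparity shrinks toward a fraction of a pixel, small matching errors translate into large depth errors.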
Sensors
Charge-coupled device (CCD) sensors have some advantages over CMOS image sensors, mainly because the electronic shutter of CCDs traditionally offers better image quality, with higher dynamic range and resolution. However, CMOS sensors now account for more than 90% of the market, heavily influenced by camera phones and driven by the technology’s lower cost, better integration, and higher speed.

How AI-powered Cameras Are Transforming Data Center Security
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Data centers process, store, and transmit enormous volumes of data, which makes them natural targets for intrusion, sabotage, and other violations. Learn about the importance of AI-based camera surveillance in data centers and their

New Geometries Driving MEMS Gyroscopes Towards Better Performance
Inertial measurement units, or IMUs, form the backbone of the physical layer of modern navigation systems. According to IDTechEx’s research report, “Next-Generation MEMS 2026-2036: Markets, Technologies, and Players”, high-end IMUs constitute a US$3.8 billion global market. IMUs are continually evolving to serve a diverse and challenging set of performance requirements from industry. In particular, IDTechEx

Next-gen Fleet Telematics and Dashcams Shift to On-device AI
This blog post was originally published at Ambarella’s website. It is reprinted here with the permission of Ambarella. The role of dashcams has changed significantly over the past decade. What began as a passive recording device has become an active, intelligent safety and operations tool. This evolution is being driven by edge AI—the ability to

“Taking Computer Vision Products from Prototype to Robust Product,” an Interview with Blue River Technology
Chris Padwick, Machine Learning Engineer at Blue River Technology, talks with Mark Jamtgaard, Director of Technology at RetailNext, for the “Taking Computer Vision Products from Prototype to Robust Product” interview at the May 2025 Embedded Vision Summit. When developing computer vision-based products, getting from a proof of concept to a…

ImagingNext 2025 Brings Keynote Speakers from NVIDIA, RealSense, NXP and Altera
Munich, Bavaria, Germany – September 4, 2025 – ImagingNext is raising the bar with its keynote and featured speakers. Taking place on September 18–19, 2025 at the smartvillage in Munich-Bogenhausen, ImagingNext will bring together industry leaders, developers, and decision-makers to explore how artificial intelligence is transforming imaging technologies. The premiere edition of ImagingNext features an impressive lineup

“Improving Worksite Safety with AI-powered Perception,” a Presentation from Arcure
Sabri Bayoudh, Chief Innovation Officer at Arcure, presents the “Improving Worksite Safety with AI-powered Perception” tutorial at the May 2025 Embedded Vision Summit. In this presentation, Bayoudh explores how embedded vision is being used in industrial applications, including vehicle safety and production. He highlights some of the challenging requirements of…

Software-defined Vehicles: Built For Users, or For the Industry?
SDV Level Chart: IDTechEx defines SDV performance using six levels. Most consumers still have limited awareness of the deeper value behind “software-defined” capabilities. The concept of the Software-Defined Vehicle (SDV) has rapidly emerged as a transformative trend reshaping the automotive industry. Yet, despite widespread use of the term, there remains significant confusion around its core

How to Support Multi-planar Format in Python V4L2 Applications on i.MX8M Plus
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. The default Python V4L2 library module is missing critical definitions related to the V4L2 capture method. Learn how to implement the basic definitions missing from the default library module and capture images in the V4L2 multi-planar format. Python
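As a rough illustration of the kind of definitions the stock Python `v4l2` module lacks, the ctypes sketch below mirrors `struct v4l2_pix_format_mplane` from the kernel header `linux/videodev2.h`. The field layout and constants follow the published kernel ABI, but this is a minimal sketch of the general approach, not e-con Systems’ implementation from the linked post; real capture would additionally require `VIDIOC_S_FMT`/`VIDIOC_REQBUFS` ioctls against a `/dev/video*` node.

```python
import ctypes

# Constants from linux/videodev2.h (standard kernel ABI values)
V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE = 9
VIDEO_MAX_PLANES = 8

def v4l2_fourcc(a, b, c, d):
    """Pack a four-character pixel-format code the way v4l2_fourcc() does."""
    return ord(a) | (ord(b) << 8) | (ord(c) << 16) | (ord(d) << 24)

V4L2_PIX_FMT_NV12M = v4l2_fourcc('N', 'M', '1', '2')  # two-plane NV12

class v4l2_plane_pix_format(ctypes.Structure):
    _pack_ = 1  # kernel declares this struct __attribute__((packed))
    _fields_ = [
        ("sizeimage", ctypes.c_uint32),
        ("bytesperline", ctypes.c_uint32),
        ("reserved", ctypes.c_uint16 * 6),
    ]

class v4l2_pix_format_mplane(ctypes.Structure):
    _pack_ = 1  # kernel declares this struct __attribute__((packed))
    _fields_ = [
        ("width", ctypes.c_uint32),
        ("height", ctypes.c_uint32),
        ("pixelformat", ctypes.c_uint32),
        ("field", ctypes.c_uint32),
        ("colorspace", ctypes.c_uint32),
        ("plane_fmt", v4l2_plane_pix_format * VIDEO_MAX_PLANES),
        ("num_planes", ctypes.c_uint8),
        ("flags", ctypes.c_uint8),
        ("ycbcr_enc", ctypes.c_uint8),
        ("quantization", ctypes.c_uint8),
        ("xfer_func", ctypes.c_uint8),
        ("reserved", ctypes.c_uint8 * 7),
    ]

# Describe a 1080p two-plane NV12 capture format (Y plane + interleaved CbCr)
fmt = v4l2_pix_format_mplane(width=1920, height=1080,
                             pixelformat=V4L2_PIX_FMT_NV12M, num_planes=2)
```

A structure like `fmt` would then be embedded in a `v4l2_format` union and passed to the `VIDIOC_S_FMT` ioctl via `fcntl.ioctl()` to negotiate the format with the driver.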

Upcoming Presentation and Demonstrations Showcase Autonomous Mobile Robots and Machine Vision
On Wednesday, October 15 from 11:45 AM – 12:15 PM PT, Alliance Member company eInfochips will deliver the presentation “Real-time Vision AI System on Edge AI Platforms” at the RoboBusiness and DeviceTalks West 2025 Conference in Santa Clara, California. From the event page: This session presents a real-time, edge-deployed Vision AI system for automated quality

“Integrating Cameras with the Robot Operating System (ROS),” a Presentation from Amazon Lab126
Karthik Poduval, Principal Software Development Engineer at Amazon Lab126, presents the “Integrating Cameras with the Robot Operating System (ROS)” tutorial at the May 2025 Embedded Vision Summit. In this presentation, Poduval explores the integration of cameras within the Robot Operating System (ROS) for robust embedded vision applications. He delves into…