Cameras and Sensors for Embedded Vision
While analog cameras are still used in many vision systems, this section focuses on digital image sensors, usually a CCD or CMOS sensor array that operates with visible light. That focus shouldn’t constrain the technology analysis, however, since many vision systems also sense other forms of energy (IR, sonar, etc.).
In effect, the camera housing has become the chassis for an entire vision system, giving rise to “smart cameras” with all of the electronics integrated. By most definitions, a smart camera supports computer vision, since it is capable of extracting application-specific information on its own. However, as both wired and wireless networks get faster and cheaper, there may still be reasons to transmit pixel data to a central location for storage or additional processing.
A classic example is cloud computing using the camera on a smartphone. The smartphone could be considered a “smart camera” as well, but sending data to a cloud-based computer may reduce the processing performance required on the mobile device, lowering cost, power, weight, etc. For a dedicated smart camera, some vendors have created chips that integrate all of the required features.
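To make this offload pattern concrete, below is a minimal sketch in Python: a frame is captured on the device, compressed, and posted to a remote service that does the heavy analysis. It assumes OpenCV and the requests library are available; the endpoint URL, request format and JSON response are hypothetical placeholders, not any particular vendor’s API.

```python
# Minimal sketch of the "send pixels to the cloud" pattern described above.
# Assumptions: OpenCV (cv2) and requests are installed, a local camera is
# available, and https://example.com/analyze is a placeholder endpoint --
# a real service would define its own API and response format.
import cv2
import requests


def capture_and_offload(endpoint="https://example.com/analyze"):
    cap = cv2.VideoCapture(0)            # open the default camera
    ok, frame = cap.read()               # grab a single frame
    cap.release()
    if not ok:
        raise RuntimeError("could not read a frame from the camera")

    # Compress on-device so only a small JPEG travels over the network,
    # leaving the heavy vision processing to the remote server.
    ok, jpeg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 80])
    if not ok:
        raise RuntimeError("JPEG encoding failed")

    response = requests.post(
        endpoint,
        data=jpeg.tobytes(),
        headers={"Content-Type": "image/jpeg"},
        timeout=10,
    )
    return response.json()                # e.g. labels or detections returned by the server


if __name__ == "__main__":
    print(capture_and_offload())
```

The design choice here mirrors the trade-off in the text: the device spends only enough compute to capture and compress, while the cloud side carries the expensive vision workload.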
Cameras
Until recently, many people would imagine a camera for computer vision as the outdoor security camera shown in this picture. There are countless vendors supplying these products, and many more supplying indoor cameras for industrial applications. Don’t forget simple USB cameras for PCs, and don’t overlook the billion or so cameras embedded in the world’s mobile phones. The speed and quality of these cameras have risen dramatically, with 10+ megapixel sensors backed by sophisticated image processing hardware.
Consider, too, another important factor for cameras: the rapid adoption of 3D imaging using stereo optics, time-of-flight and structured-light technologies. Trendsetting cell phones now offer this technology, as do latest-generation game consoles. Look again at the picture of the outdoor camera and consider how much change is coming to computer vision markets as these new camera technologies become pervasive.
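As an illustration of one of these 3D techniques, the sketch below estimates depth from a rectified stereo pair using the standard relation Z = f·B/d (depth equals focal length times baseline divided by disparity). It assumes OpenCV and NumPy are installed; the image filenames, focal length and baseline are placeholder values standing in for a real camera calibration.

```python
# Minimal sketch of depth from a stereo pair, one of the 3D approaches
# mentioned above. Assumptions: OpenCV and NumPy are installed,
# left.png/right.png are a rectified grayscale pair, and the focal length
# and baseline below are placeholders for a real calibration.
import numpy as np
import cv2

FOCAL_LENGTH_PX = 700.0   # focal length in pixels (from calibration)
BASELINE_M = 0.06         # distance between the two cameras, in meters

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching finds, for each pixel, how far its patch shifted between
# the two views; that shift (the disparity) is inversely related to depth.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

# Depth from disparity: Z = f * B / d. Invalid or zero disparities map to infinity.
with np.errstate(divide="ignore"):
    depth_m = FOCAL_LENGTH_PX * BASELINE_M / disparity
depth_m[disparity <= 0] = np.inf

valid = np.isfinite(depth_m)
print("median scene depth (m):", np.median(depth_m[valid]))
```

Time-of-flight and structured-light sensors reach the same end (a per-pixel depth map) through active illumination rather than matching two passive views, which is why they trade differently on range, cost and outdoor performance.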
Sensors
Charge-coupled device (CCD) sensors have some advantages over CMOS image sensors, mainly because the electronic shutter of CCDs has traditionally delivered better image quality, with higher dynamic range and resolution. However, CMOS sensors now account for more than 90% of the market, heavily influenced by camera phones and driven by the technology’s lower cost, tighter integration and higher speed.

“Three Big Topics in Autonomous Driving and ADAS,” an Interview with Valeo
Frank Moesle, Software Department Manager at Valeo, talks with Independent Journalist Junko Yoshida for the “Three Big Topics in Autonomous Driving and ADAS” interview at the May 2025 Embedded Vision Summit. In this on-stage interview, Moesle and Yoshida focus on trends and challenges in automotive technology, autonomous driving and ADAS.

“Toward Hardware-agnostic ADAS Implementations for Software-defined Vehicles,” a Presentation from Valeo
Frank Moesle, Software Department Manager at Valeo, presents the “Toward Hardware-agnostic ADAS Implementations for Software-defined Vehicles” tutorial at the May 2025 Embedded Vision Summit. ADAS (advanced driver-assistance systems) software has historically been tightly bound to the underlying system-on-chip (SoC). This software, especially for visual perception, has been extensively optimized for…

What Is The Role of Embedded Cameras in Smart Warehouse Automation?
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Cameras give warehouse automation systems the visual data they need to operate consistently, helping them identify, track, and interact in real time. Discover how warehouse automation cameras work, their use cases, and critical imaging features.

“Depth Estimation from Monocular Images Using Geometric Foundation Models,” a Presentation from Toyota Research Institute
Rareș Ambruș, Senior Manager for Large Behavior Models at Toyota Research Institute, presents the “Depth Estimation from Monocular Images Using Geometric Foundation Models” tutorial at the May 2025 Embedded Vision Summit. In this presentation, Ambruș looks at recent advances in depth estimation from images. He first focuses on the ability…

AI-enhanced In-cabin Sensing Systems
As vehicle intelligence increases, in-cabin sensing systems will be largely responsible for increased communication, sensitivity, and smart features within cars. IDTechEx’s report, “In-Cabin Sensing 2025-2035: Technologies, Opportunities, and Markets,” provides the latest technology developments within the sector, along with forecasts for their uptake over the next ten years. Where AI meets…

Basler AG Acquires 76% Stake in Alpha TechSys Automation in India
Computer vision experts Basler and Alpha TechSys Automation announce Basler’s acquisition of a stake in the Indian company. Basler is thus expanding its direct business in the growth market of India. Ahrensburg, October 14, 2025 – Basler AG, a leading provider of components for computer vision solutions, has acquired a 76% stake in its Indian

D3 Embedded, HTEC, Texas Instruments and Tobii Pioneer the Integration of Single-camera and Radar Interior Sensor Fusion for In-cabin Sensing
The companies joined forces to develop sensor fusion based interior sensing for enhanced vehicle safety, launching at the InCabin Europe conference on October 7-9. Rochester, NY – October 6, 2025 – Tobii, with its automotive interior sensing branch Tobii Autosense, together with D3 Embedded, and HTEC today announced the development of an interior sensing solution

“Introduction to Depth Sensing: Technologies, Trade-offs and Applications,” a Presentation from Think Circuits
Chris Sarantos, Independent Consultant with Think Circuits, presents the “Introduction to Depth Sensing: Technologies, Trade-offs and Applications” tutorial at the May 2025 Embedded Vision Summit. Depth sensing is a crucial technology for many applications, including robotics, automotive safety and biometrics. In this talk, Sarantos provides an overview of depth sensing…

Upcoming Seminar Explores the Latest Innovations in Mobile Robotics
On October 22, 2022 at 9:00 am PT, Alliance Member company NXP Semiconductors, along with Avnet, will deliver a free (advance registration required) half-day in-person robotics seminar at NXP’s office in San Jose, California. From the event page: Join us for a free in-depth seminar exploring the latest innovations in mobile robotics with a focus

“Lessons Learned Building and Deploying a Weed-killing Robot,” a Presentation from Tensorfield Agriculture
Xiong Chang, CEO and Co-founder of Tensorfield Agriculture, presents the “Lessons Learned Building and Deploying a Weed-Killing Robot” tutorial at the May 2025 Embedded Vision Summit. Agriculture today faces chronic labor shortages and growing challenges around herbicide resistance, as well as consumer backlash to chemical inputs. Smarter, more sustainable approaches…
