Cameras and Sensors for Embedded Vision

While analog cameras are still used in many vision systems, this section focuses on digital image sensors, usually a CCD or CMOS sensor array operating with visible light. This definition shouldn't constrain the technology analysis, however, since many vision systems also sense other types of energy (IR, sonar, etc.).

With "smart cameras," the camera housing has become the chassis for an entire vision system, with all of the electronics integrated. By most definitions, a smart camera supports computer vision, since it is capable of extracting application-specific information. However, as both wired and wireless networks get faster and cheaper, there may still be reasons to transmit pixel data to a central location for storage or additional processing.
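As a toy illustration of "extracting application-specific information" on the camera itself, the sketch below (hypothetical, not drawn from any vendor's SDK) reduces two grayscale frames to a single motion flag, so the camera can transmit an event rather than raw pixels. The thresholds are illustrative placeholders, not tuned values:

```python
def motion_detected(prev_frame, curr_frame, pixel_thresh=25, count_thresh=50):
    """Return True if enough pixels changed between two grayscale frames.

    Frames are flat lists of 0-255 intensity values; both thresholds are
    illustrative defaults, not tuned for any real sensor.
    """
    changed = sum(
        1 for p, c in zip(prev_frame, curr_frame) if abs(c - p) > pixel_thresh
    )
    return changed > count_thresh
```

A smart camera built around this idea would send only the boolean (or a timestamped event) upstream, which is exactly the bandwidth trade-off discussed above.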

A classic example is cloud computing using the camera on a smartphone. The smartphone could be considered a “smart camera” as well, but sending data to a cloud-based computer may reduce the processing performance required on the mobile device, lowering cost, power, weight, etc. For a dedicated smart camera, some vendors have created chips that integrate all of the required features.
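To see why transmitting raw pixel data to the cloud is costly, a quick back-of-the-envelope calculation helps (assuming 8-bit RGB and no compression; the resolution and frame rate are just example figures):

```python
def raw_video_bandwidth_bytes(width, height, bytes_per_pixel=3, fps=30):
    """Bytes per second needed to stream uncompressed video."""
    return width * height * bytes_per_pixel * fps

# A single 1080p RGB frame is 1920 * 1080 * 3 = 6,220,800 bytes (~6.2 MB),
# so 30 fps of raw video needs roughly 186.6 MB/s of sustained bandwidth.
frame_bytes = 1920 * 1080 * 3
per_second = raw_video_bandwidth_bytes(1920, 1080)
```

Numbers like these are why a phone compresses or pre-processes frames before offloading, rather than shipping raw sensor data.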

Cameras

Until recently, many people pictured a camera for computer vision as the outdoor security camera shown in this picture. Countless vendors supply these products, and many more supply indoor cameras for industrial applications. Don't forget simple USB cameras for PCs, and don't overlook the billion or so cameras embedded in the world's mobile phones. These cameras' speed and quality have risen dramatically, now supporting 10+ megapixel sensors with sophisticated image-processing hardware.

Consider, too, another important factor for cameras: the rapid adoption of 3D imaging using stereo optics, time-of-flight, and structured light technologies. Trendsetting cell phones now offer this technology, as do latest-generation game consoles. Look again at the picture of the outdoor camera and consider how much change is coming to computer vision markets as these new camera technologies become pervasive.
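For stereo optics, depth follows from simple triangulation: Z = f·B/d, where f is the focal length in pixels, B the baseline between the two lenses, and d the disparity (pixel shift) of a point between the left and right views. A minimal sketch, with names of our choosing rather than any particular camera API:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth in meters of a point seen by a calibrated stereo pair.

    disparity_px: horizontal pixel shift between left and right views
    focal_px:     focal length expressed in pixels
    baseline_m:   distance between the two camera centers, in meters
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

Note the inverse relationship: nearby objects produce large disparities, and depth precision degrades as disparity shrinks toward zero for distant objects.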

Sensors

Charge-coupled device (CCD) sensors have some advantages over CMOS image sensors, mainly because the electronic shutter of CCDs traditionally offers better image quality, with higher dynamic range and resolution. However, CMOS sensors now account for more than 90% of the market, heavily influenced by camera phones and driven by the technology's lower cost, better integration, and higher speed.
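Dynamic range, one of the qualities on which CCDs traditionally led, is commonly quoted in decibels as 20·log10 of the ratio between a pixel's full-well capacity and its read noise. A quick sketch (the electron counts below are illustrative round numbers, not measurements of any real sensor):

```python
import math

def dynamic_range_db(full_well_electrons, read_noise_electrons):
    """Sensor dynamic range in dB: 20 * log10(signal ceiling / noise floor)."""
    return 20.0 * math.log10(full_well_electrons / read_noise_electrons)

# e.g. a 20,000 e- full well over a 2 e- read noise floor gives 80 dB
```

By this measure, every halving of the noise floor buys about 6 dB of dynamic range, which is why read-noise reduction has been central to CMOS closing the quality gap.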

An Engineer’s Guide on Data and Control Buses of Imaging Systems

This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Communication protocols are a key consideration for high-resolution, high-frame-rate imaging in embedded vision applications. In this blog, we’ll explore how control and data buses enable seamless transmission and timely, synchronized imaging for high-performance embedded applications.

AI Drives the Wheel: How Computing Power is Reshaping the Automotive Industry

This market research report was originally published at the Yole Group’s website. It is reprinted here with the permission of the Yole Group. In its new report, Automotive Computing and AI 2025, Yole Group analyzes the technological and market forces redefining vehicle intelligence, safety, and connectivity. The automotive industry is accelerating into a new era

FRAMOS Unveils Three Specialized Camera Modules for UAV and Drone Applications

Munich, Bavaria, Germany – October 21st, 2025 – FRAMOS, the world’s leading vision expert, unveils three new camera modules specially developed for use in drones and unmanned aerial vehicles (UAVs). These modules feature state-of-the-art image sensors from SONY, which offer exceptional precision, high speed, and energy efficiency, creating the ideal conditions for demanding vision systems.

NanoEdge AI Studio v5, the First AutoML Tool with Synthetic Data Generation

This blog post was originally published at STMicroelectronics’ website. It is reprinted here with the permission of STMicroelectronics. NanoEdge AI Studio v5 is the first AutoML tool for STM32 microcontrollers capable of generating anomaly data out of typical logs, thanks to a new feature we call Synthetic Data Generation. Additionally, the latest version makes it

“Three Big Topics in Autonomous Driving and ADAS,” an Interview with Valeo

Frank Moesle, Software Department Manager at Valeo, talks with Independent Journalist Junko Yoshida for the “Three Big Topics in Autonomous Driving and ADAS” interview at the May 2025 Embedded Vision Summit. In this on-stage interview, Moesle and Yoshida focus on trends and challenges in automotive technology, autonomous driving and ADAS.

“Toward Hardware-agnostic ADAS Implementations for Software-defined Vehicles,” a Presentation from Valeo

Frank Moesle, Software Department Manager at Valeo, presents the “Toward Hardware-agnostic ADAS Implementations for Software-defined Vehicles” tutorial at the May 2025 Embedded Vision Summit. ADAS (advanced-driver assistance systems) software has historically been tightly bound to the underlying system-on-chip (SoC). This software, especially for visual perception, has been extensively optimized for…

What Is The Role of Embedded Cameras in Smart Warehouse Automation?

This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Cameras ensure that warehouse automation systems use visual data to function with consistency. It helps identify, track, and interact in real time. Discover how warehouse automation cameras work, their use cases, and critical imaging features.

“Depth Estimation from Monocular Images Using Geometric Foundation Models,” a Presentation from Toyota Research Institute

Rareș Ambruș, Senior Manager for Large Behavior Models at Toyota Research Institute, presents the “Depth Estimation from Monocular Images Using Geometric Foundation Models” tutorial at the May 2025 Embedded Vision Summit. In this presentation, Ambruș looks at recent advances in depth estimation from images. He first focuses on the ability…
