Cameras and Sensors

Cameras and Sensors for Embedded Vision


While analog cameras are still used in many vision systems, this section focuses on digital image sensors—usually either a CCD or CMOS sensor array that operates with visible light. However, this definition shouldn’t constrain the technology analysis, since many vision systems can also sense other types of energy (IR, sonar, etc.).

The camera housing has become the entire chassis for a vision system, leading to the emergence of “smart cameras” with all of the electronics integrated. By most definitions, a smart camera supports computer vision, since the camera is capable of extracting application-specific information. However, as both wired and wireless networks get faster and cheaper, there still may be reasons to transmit pixel data to a central location for storage or extra processing.

A classic example is cloud computing using the camera on a smartphone. The smartphone could be considered a “smart camera” as well, but sending data to a cloud-based computer may reduce the processing performance required on the mobile device, lowering cost, power, weight, etc. For a dedicated smart camera, some vendors have created chips that integrate all of the required features.

Cameras

Until recently, many people pictured a camera for computer vision as the outdoor security camera shown in this picture. There are countless vendors supplying these products, and many more supplying indoor cameras for industrial applications. Don't forget simple USB cameras for PCs, or the billion or so cameras embedded in the world's mobile phones. These cameras' speed and quality have risen dramatically, with 10+ megapixel sensors backed by sophisticated image processing hardware.

Consider, too, another important factor for cameras: the rapid adoption of 3D imaging using stereo optics, time-of-flight and structured light technologies. Trendsetting cell phones now offer this technology, as do latest-generation game consoles. Look again at the picture of the outdoor camera and consider how much change is coming to computer vision markets as new camera technologies become pervasive.
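To make the stereo-optics approach mentioned above concrete: with two calibrated cameras, depth follows from the pinhole model Z = f·B/d, where f is the focal length in pixels, B the baseline between the cameras, and d the disparity of a feature between the two views. The sketch below is a minimal illustration; the focal length, baseline, and disparity values are made up for the example, not taken from any particular product.

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Pinhole stereo model: depth Z = f * B / d.

    focal_px     -- focal length in pixels (from camera calibration)
    baseline_m   -- distance between the two camera centers, in meters
    disparity_px -- horizontal shift of a feature between left/right images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (feature must appear in both views)")
    return focal_px * baseline_m / disparity_px

# A 700 px focal length and 6 cm baseline with a 35 px disparity
# place the object at about 1.2 m.
print(depth_from_disparity(700, 0.06, 35))
```

Note the inverse relationship: distant objects produce small disparities, which is why longer baselines and higher-resolution sensors extend a stereo camera's useful depth range.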

Sensors

Charge-coupled device (CCD) sensors have some advantages over CMOS image sensors, mainly because the electronic shutter of CCDs traditionally offers better image quality, with higher dynamic range and resolution. However, CMOS sensors now account for more than 90% of the market, heavily influenced by camera phones and driven by the technology's lower cost, better integration and higher speed.

“Improving Worksite Safety with AI-powered Perception,” a Presentation from Arcure

Sabri Bayoudh, Chief Innovation Officer at Arcure, presents the “Improving Worksite Safety with AI-powered Perception” tutorial at the May 2025 Embedded Vision Summit. In this presentation, Bayoudh explores how embedded vision is being used in industrial applications, including vehicle safety and production. He highlights some of the challenging requirements of… “Improving Worksite Safety with AI-powered


Software-defined Vehicles: Built For Users, or For the Industry?

SDV Level Chart: IDTechEx defines SDV performance using six levels. Most consumers still have limited awareness of the deeper value behind “software-defined” capabilities. The concept of the Software-Defined Vehicle (SDV) has rapidly emerged as a transformative trend reshaping the automotive industry. Yet, despite widespread use of the term, there remains significant confusion around its core


How to Support Multi-planar Format in Python V4L2 Applications on i.MX8M Plus

This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. The default Python V4L2 library module is missing critical details related to the V4L2 capture method. Learn how to implement the basic definitions (missing from the default library module) and capture images in the V4L2 multi-planar format. Python
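The gist of the workaround described above can be sketched with ctypes: when the Python v4l2 module lacks the multi-planar definitions, the buffer-type constant and the `v4l2_plane` structure from the kernel's `linux/videodev2.h` can be declared by hand. This is an illustrative sketch of those kernel definitions, not a reproduction of e-con Systems' code.

```python
import ctypes

# Buffer type for multi-planar capture, from linux/videodev2.h
V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE = 9

class _v4l2_plane_m(ctypes.Union):
    """The 'm' union inside struct v4l2_plane (memory-type dependent)."""
    _fields_ = [
        ("mem_offset", ctypes.c_uint32),  # V4L2_MEMORY_MMAP
        ("userptr", ctypes.c_ulong),      # V4L2_MEMORY_USERPTR
        ("fd", ctypes.c_int32),           # V4L2_MEMORY_DMABUF
    ]

class v4l2_plane(ctypes.Structure):
    """Mirror of struct v4l2_plane: one entry per plane (e.g. Y and UV for NV12)."""
    _fields_ = [
        ("bytesused", ctypes.c_uint32),
        ("length", ctypes.c_uint32),
        ("m", _v4l2_plane_m),
        ("data_offset", ctypes.c_uint32),
        ("reserved", ctypes.c_uint32 * 11),
    ]

# A queue/dequeue ioctl for an MPLANE device then passes an array of
# these plane descriptors, allocated as (v4l2_plane * num_planes)().
```

Each plane of a multi-planar format lives in its own buffer, which is why the single-planar structures in the stock library are not enough on platforms like the i.MX8M Plus.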


Upcoming Presentation and Demonstrations Showcase Autonomous Mobile Robots and Machine Vision

On Wednesday, October 15 from 11:45 AM – 12:15 PM PT, Alliance Member company eInfochips will deliver the presentation “Real-time Vision AI System on Edge AI Platforms” at the RoboBusiness and DeviceTalks West 2025 Conference in Santa Clara, California. From the event page: This session presents a real-time, edge-deployed Vision AI system for automated quality


“Integrating Cameras with the Robot Operating System (ROS),” a Presentation from Amazon Lab126

Karthik Poduval, Principal Software Development Engineer at Amazon Lab126, presents the “Integrating Cameras with the Robot Operating System (ROS)” tutorial at the May 2025 Embedded Vision Summit. In this presentation, Poduval explores the integration of cameras within the Robot Operating System (ROS) for robust embedded vision applications. He delves into… “Integrating Cameras with the Robot


“Using Computer Vision for Early Detection of Cognitive Decline via Sleep-wake Data,” a Presentation from AI Tensors

Ravi Kota, CEO of AI Tensors, presents the “Using Computer Vision for Early Detection of Cognitive Decline via Sleep-wake Data” tutorial at the May 2025 Embedded Vision Summit. AITCare-Vision predicts cognitive decline by analyzing sleep-wake disorders data in older adults. Using computer vision and motion sensors coupled with AI algorithms,… “Using Computer Vision for Early


“AI-powered Scouting: Democratizing Talent Discovery in Sports,” a Presentation from ai.io

Jonathan Lee, Chief Product Officer at ai.io, presents the “AI-powered Scouting: Democratizing Talent Discovery in Sports” tutorial at the May 2025 Embedded Vision Summit. In this presentation, Lee shares his experience using AI and computer vision to revolutionize talent identification in sports. By developing aiScout, a platform that enables athletes… “AI-powered Scouting: Democratizing Talent Discovery


“Vision-based Aircraft Functions for Autonomous Flight Systems,” a Presentation from Acubed (an Airbus Innovation Center)

Arne Stoschek, Vice President of AI and Autonomy at Acubed (an Airbus innovation center), presents the “Vision-based Aircraft Functions for Autonomous Flight Systems” tutorial at the May 2025 Embedded Vision Summit. At Acubed, an Airbus innovation center, the mission is to accelerate AI and autonomy in aerospace. Stoschek gives an… “Vision-based Aircraft Functions for Autonomous


Why 4K HDR Imaging is Required in Front View Cameras?

This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Mobility systems depend on the visual intelligence that front-view cameras provide; the camera’s input is the starting point for every decision. Learn why 4K HDR imaging is critical in front-view cameras and explore five major
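For context on the "HDR" figures such posts cite: sensor dynamic range quoted in decibels (a 20·log10 signal ratio) converts to photographic stops as log2(10^(dB/20)). The quick sketch below uses illustrative numbers, not figures from the post above.

```python
import math

def db_to_stops(dynamic_range_db: float) -> float:
    """Convert a sensor dynamic-range spec in dB to photographic stops.

    dB here is the 20*log10 ratio of maximum signal to noise floor;
    each stop is one doubling of that ratio.
    """
    ratio = 10 ** (dynamic_range_db / 20)  # max signal / noise floor
    return math.log2(ratio)

# A 120 dB automotive HDR sensor covers about 20 stops,
# versus roughly 12 stops for a typical 72 dB sensor.
print(round(db_to_stops(120), 1), round(db_to_stops(72), 1))
```

Those extra stops are what let a front-view camera hold detail in a dark tunnel and in direct sunlight within the same frame.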


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411