Cameras and Sensors for Embedded Vision

While analog cameras are still used in many vision systems, this section focuses on digital image sensors—usually either a CCD or CMOS sensor array that operates with visible light. However, this definition shouldn’t constrain the technology analysis, since many vision systems can also sense other types of energy (IR, sonar, etc.).

The camera housing has become the entire chassis for a vision system, leading to the emergence of “smart cameras” with all of the electronics integrated. By most definitions, a smart camera supports computer vision, since the camera is capable of extracting application-specific information. However, as both wired and wireless networks get faster and cheaper, there may still be reasons to transmit pixel data to a central location for storage or additional processing.

A classic example is cloud computing using the camera on a smartphone. The smartphone could be considered a “smart camera” as well, but sending data to a cloud-based computer may reduce the processing performance required on the mobile device, lowering cost, power, weight, etc. For a dedicated smart camera, some vendors have created chips that integrate all of the required features.
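To make this trade-off concrete, below is a minimal back-of-envelope sketch in Python comparing the bandwidth needed to stream raw pixels to a central server with the bandwidth needed to send only extracted results. The resolution, frame rate, bytes per pixel, and metadata size are illustrative assumptions, not measurements from any particular system.

```python
# Back-of-envelope comparison: streaming raw pixels vs. sending extracted metadata.
# All numbers below are illustrative assumptions, not measurements.

def raw_stream_mbps(width, height, fps, bytes_per_pixel=2):
    """Uncompressed sensor bandwidth in megabits per second."""
    return width * height * bytes_per_pixel * fps * 8 / 1e6

def metadata_mbps(fps, bytes_per_frame=512):
    """Bandwidth if only per-frame results (e.g., bounding boxes) are sent."""
    return fps * bytes_per_frame * 8 / 1e6

if __name__ == "__main__":
    # Assumed example: a 1080p sensor at 30 frames per second.
    raw = raw_stream_mbps(1920, 1080, 30)
    meta = metadata_mbps(30)
    print(f"Raw pixel stream: {raw:8.1f} Mbps")
    print(f"Metadata only:    {meta:8.3f} Mbps")
    print(f"Reduction factor: ~{raw / meta:,.0f}x")
```

With numbers like these, it is easy to see why many designs extract results on the camera itself and reserve full-frame uploads for storage, training data collection, or heavier offline processing.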

Cameras

Until recently, many people would picture a camera for computer vision as the outdoor security camera shown in this picture. There are countless vendors supplying these products, and many more supplying indoor cameras for industrial applications. Don’t forget about simple USB cameras for PCs, and don’t overlook the billion or so cameras embedded in the world’s mobile phones. These cameras’ speed and quality have risen dramatically, with support for 10+ megapixel sensors and sophisticated image processing hardware.
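As a concrete starting point, the short sketch below shows one common way to grab a frame from a simple USB camera on a PC using OpenCV’s VideoCapture API; the device index and requested resolution are assumptions and will vary by system.

```python
# Minimal USB camera capture sketch using OpenCV (pip install opencv-python).
# Device index 0 and the requested resolution are assumptions; adjust per system.
import cv2

cap = cv2.VideoCapture(0)                      # open the first enumerated camera
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)        # request a resolution (best effort)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)

if not cap.isOpened():
    raise RuntimeError("No camera found at index 0")

ok, frame = cap.read()                         # grab a single frame (BGR numpy array)
if ok:
    cv2.imwrite("frame.png", frame)            # save it for inspection
cap.release()
```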

Consider, too, another important factor for cameras: the rapid adoption of 3D imaging using stereo optics, time-of-flight, and structured light technologies. Trendsetting cell phones now even offer this technology, as do latest-generation game consoles. Look again at the picture of the outdoor camera and consider how much change is about to happen in computer vision markets as new camera technologies become pervasive.
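To illustrate one of these approaches, the sketch below estimates depth from a rectified stereo pair using OpenCV’s block-matching disparity and the standard relation depth = focal length × baseline / disparity. The image file names, focal length, and baseline are placeholder assumptions for a hypothetical stereo rig.

```python
# Stereo depth sketch: disparity from a rectified image pair, then Z = f * B / d.
# File names, focal length (pixels), and baseline (meters) are placeholder assumptions.
import cv2
import numpy as np

FOCAL_PX = 700.0    # assumed focal length in pixels
BASELINE_M = 0.06   # assumed camera separation in meters

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matcher; numDisparities must be a multiple of 16, blockSize must be odd.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point to pixels

# Depth in meters wherever a valid (positive) disparity was found.
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = FOCAL_PX * BASELINE_M / disparity[valid]
print("Median estimated depth (m):", float(np.median(depth[valid])))
```

Time-of-flight and structured light sensors return depth directly, so no disparity computation is needed; the stereo example is shown because it can be built from two ordinary 2D cameras.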

Sensors

Charge-coupled device (CCD) sensors have some advantages over CMOS image sensors, mainly because the electronic shutter of CCDs traditionally offers better image quality, with higher dynamic range and resolution. However, CMOS sensors now account for more than 90% of the market, heavily influenced by camera phones and driven by the technology’s lower cost, better integration, and higher speed.

Edge AI Hangs on Power: Can Chipmakers Meet Up?

Power semiconductors will define how well and how quickly the global economy adopts Edge AI and benefits from its promises. That’s why the race among chipmakers to offer the most innovative power management components and systems is intensifying. Who is winning, and what’s at stake for power semiconductor makers in the Edge AI market?

FRAMOS and Restar Corporation Establish Joint Venture “RESTAR FRAMOS Technologies” (Formerly ViMOS Technologies) to Strengthen Distribution of SONY Image Sensors and Displays

Munich, Bavaria, Germany – October 1st, 2025 – FRAMOS GmbH today announced a strategic realignment with long-term partner Restar Corporation: the creation of RESTAR FRAMOS Technologies (formerly ViMOS Technologies), a joint venture that will take over SONY image sensor and display distribution across Europe, the Middle East, and North America, effective October 1st, 2025.

What Is Image Denoising and What Are Its Methods?

This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. A camera with effective denoising capabilities is essential for ensuring reliable object detection, classification, and measurement, particularly in embedded vision applications. To understand the various types of image noise and the different methods for reducing…

“Developing a GStreamer-based Custom Camera System for Long-range Biometric Data Collection,” a Presentation from Oak Ridge National Laboratory

Gavin Jager, Researcher and Lab Space Manager at Oak Ridge National Laboratory, presents the “Developing a GStreamer-based Custom Camera System for Long-range Biometric Data Collection” tutorial at the May 2025 Embedded Vision Summit. In this presentation, Jager describes Oak Ridge National Laboratory’s work developing software for a custom camera system…

“Scaling Artificial Intelligence and Computer Vision for Conservation,” a Presentation from The Nature Conservancy

Matt Merrifield, Chief Technology Officer at The Nature Conservancy, presents the “Scaling Artificial Intelligence and Computer Vision for Conservation” tutorial at the May 2025 Embedded Vision Summit. In this presentation, Merrifield explains how the world’s largest environmental nonprofit is spearheading projects to scale the use of edge AI and vision…

“A Lightweight Camera Stack for Edge AI,” a Presentation from Meta

Jui Garagate, Camera Software Engineer, and Karthick Kumaran, Staff Software Engineer, both of Meta, co-present the “Lightweight Camera Stack for Edge AI” tutorial at the May 2025 Embedded Vision Summit. Electronic products for virtual and augmented reality, home robots and cars deploy multiple cameras for computer vision and AI use…

ImagingNext 2025 Inspires the AI Vision Community: Exciting Insights, Powerful Keynote Speeches, and Valuable Contacts

Munich, Bavaria, Germany – September 25th, 2025 – This year’s ImagingNext, the leading event for AI vision and embedded imaging, was a resounding success. Around 100 participants came together in an inspiring atmosphere to discuss the latest trends, technologies, and use cases in the field of vision AI. The mood was great…

Semiconductors at the Heart of Automotive’s Next Chapter

This market research report was originally published at the Yole Group’s website. It is reprinted here with the permission of the Yole Group. Automotive White Paper, Vol. 2, powered by Yole Group – Shifting gears! Key takeaways: the automotive semiconductor market will soar from $68 billion in 2024 to $132 billion in 2030, growing at a…

