Cameras and Sensors for Embedded Vision
While analog cameras are still used in many vision systems, this section focuses on digital image sensors—usually either a CCD or CMOS sensor array that operates with visible light. However, this definition shouldn’t constrain the technology analysis, since many vision systems can also sense other types of energy (IR, sonar, etc.).
The camera housing has become the entire chassis for a vision system, leading to the emergence of “smart cameras” with all of the electronics integrated. By most definitions, a smart camera supports computer vision, since it is capable of extracting application-specific information. However, as both wired and wireless networks get faster and cheaper, there may still be reasons to transmit pixel data to a central location for storage or additional processing.
A classic example is cloud computing using the camera on a smartphone. The smartphone could be considered a “smart camera” as well, but sending data to a cloud-based computer may reduce the processing performance required on the mobile device, lowering cost, power, weight, etc. For a dedicated smart camera, some vendors have created chips that integrate all of the required features.
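To make the offload idea concrete, here is a minimal sketch of how a device might package one camera frame for a cloud service. The endpoint URL and the frame bytes are assumptions for illustration, not details from this article; only Python's standard library is used.

```python
import urllib.request

# Hypothetical cloud endpoint -- an assumption, not a real service.
CLOUD_ENDPOINT = "https://example.com/v1/analyze"

def build_upload_request(jpeg_bytes: bytes) -> urllib.request.Request:
    """Package one JPEG-encoded frame as an HTTP POST request.

    The device does only cheap work (capture + JPEG encode); the
    heavy vision processing happens server-side after upload.
    """
    return urllib.request.Request(
        CLOUD_ENDPOINT,
        data=jpeg_bytes,
        headers={"Content-Type": "image/jpeg"},
        method="POST",
    )

# Placeholder payload standing in for a real encoded frame.
req = build_upload_request(b"\xff\xd8 fake jpeg payload \xff\xd9")
print(req.method, req.full_url)
```

In a real deployment the request would be sent with `urllib.request.urlopen(req)` (or an async HTTP client), and the response would carry the extracted metadata back to the device, which is what lets the mobile hardware stay cheap and low-power.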
Cameras
Until recently, many people pictured a camera for computer vision as the outdoor security camera shown in this picture. There are countless vendors supplying these products, and many more supplying indoor cameras for industrial applications. Don’t forget simple USB cameras for PCs, and don’t overlook the billion or so cameras embedded in the world’s mobile phones. The speed and quality of these cameras have risen dramatically, with sensors of 10+ megapixels supported by sophisticated image processing hardware.
Consider, too, another important factor for cameras: the rapid adoption of 3D imaging using stereo optics, time-of-flight and structured light technologies. Trendsetting cell phones now offer this technology, as do latest-generation game consoles. Look again at the picture of the outdoor camera and consider how much change is coming to computer vision markets as new camera technologies become pervasive.
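Stereo optics, the first of the 3D techniques mentioned above, recovers depth by triangulation: for a rectified camera pair, depth Z = f · B / d, where f is the focal length in pixels, B is the baseline between the two cameras, and d is the per-pixel disparity. A minimal sketch of that relation follows; the focal length and baseline are assumed toy values, not figures from this article.

```python
import numpy as np

# Assumed example intrinsics -- not from the article.
FOCAL_LENGTH_PX = 700.0   # focal length, in pixels
BASELINE_M = 0.06         # 6 cm baseline between the two cameras

def depth_from_disparity(disparity_px):
    """Convert a disparity map (pixels) to a depth map (meters).

    Depth is f * B / d; pixels with zero disparity correspond to
    points at infinity, so they are left at +inf.
    """
    d = np.asarray(disparity_px, dtype=np.float64)
    depth = np.full(d.shape, np.inf)
    valid = d > 0
    depth[valid] = FOCAL_LENGTH_PX * BASELINE_M / d[valid]
    return depth

disparity = np.array([[70.0, 35.0],
                      [0.0,  7.0]])
print(depth_from_disparity(disparity))
# 70 px of disparity -> 0.6 m, 35 px -> 1.2 m, 0 px -> inf, 7 px -> 6.0 m
```

Time-of-flight and structured-light sensors reach the same end (a per-pixel depth map) by different physics, measuring round-trip light travel time or the deformation of a projected pattern instead of disparity.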
Sensors
Charge-coupled device (CCD) sensors have some advantages over CMOS image sensors, mainly because the electronic shutter of CCDs has traditionally offered better image quality, with higher dynamic range and resolution. However, CMOS sensors now account for more than 90% of the market, heavily influenced by camera phones and driven by the technology’s lower cost, better integration and higher speed.