Sensors and Cameras

How Cameras Enable Vision in New-age Agricultural Robots

This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Embedded vision has been transforming agriculture by enabling autonomous mobile robots to ‘see’ their environment for the purposes of picking & harvesting, plowing, weed & bug detection, etc. Jump right into this article to learn […]

“Optimized Image Processing for Automotive Image Sensors with Novel Color Filter Arrays,” a Presentation from Nextchip

Young-Jun Yoo, Vice President of the Automotive Business and Operations Unit at Nextchip, presents the “Optimized Image Processing for Automotive Image Sensors with Novel Color Filter Arrays” tutorial at the May 2023 Embedded Vision Summit. Traditionally, image sensors have been optimized to produce images that look natural to humans. For images consumed by algorithms, what […]

GMSL2 Cameras vs FPD-Link III Cameras: A Detailed Study

This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. GMSL2 and FPD-Link III are two of the most popular camera interfaces used for high-bandwidth long-distance transmission. In this article, we do an in-depth comparison of all their features, camera architecture, performance, and applications. GMSL […]

Plumerai Wins MLPerf Tiny 1.1 AI Benchmark for Microcontrollers Again

This blog post was originally published at Plumerai’s website. It is reprinted here with the permission of Plumerai. New results show Plumerai leads performance on all Cortex-M platforms, now also on M0/M0+. Last year we presented our MLPerf Tiny 0.7 and MLPerf Tiny 1.0 benchmark scores, showing that our inference engine runs your AI models […]

“Tensilica Processor Cores Enable Sensor Fusion for Robust Perception,” a Presentation from Cadence

Amol Borkar, Product Marketing Director at Cadence, presents the “Tensilica Processor Cores Enable Sensor Fusion for Robust Perception” tutorial at the May 2023 Embedded Vision Summit. Until recently, the majority of sensor-based AI processing used vision and speech inputs. Recently, we have begun to see radar, LiDAR, event-based image sensors and other types of sensors […]

Lens Selection for Vision Projects

This blog post was originally published at FRAMOS’ website. It is reprinted here with the permission of FRAMOS. Lenses are one of the most important components of a vision system. A poorly chosen lens can lead to incorrect results from the system. Therefore, choosing the right lens is crucial to ensuring optimized image quality […]

GMSL Cameras vs MIPI Cameras: A Detailed Study

This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. GMSL is a high-speed serial interface used in automotive video applications, robotic devices, agricultural vehicles, etc. It is a SerDes technique that enables long-distance transmission. Compared to MIPI – one of the most popular camera […]

Qualcomm at CVPR 2023: Advancing Research and Bringing Generative AI to the Edge

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. ControlNet running entirely on device, fitness coaching with an LLM, 3D reconstruction for XR, our accepted papers and much more. The annual IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) is regarded as one of the […]

“Battery-powered Edge AI Sensing: A Case Study Implementing Low-power, Always-on Capability,” a Presentation from Avnet

Peter Fenn, Director of the Advanced Applications Group at Avnet, presents the “Battery-powered Edge AI Sensing: A Case Study Implementing Low-power, Always-on Capability” tutorial at the May 2023 Embedded Vision Summit. The trend of pushing AI/ML capabilities to the edge brings design challenges around the need to combine high-performance computing (for AI/ML algorithms) with low […]

“Sparking the Next Generation of Arm-based Cloud-native Smart Camera Designs,” a Presentation from Arm

Stephen Su, Senior Product Manager at Arm, presents the “Sparking the Next Generation of Arm-based Cloud-native Smart Camera Designs” tutorial at the May 2023 Embedded Vision Summit. As enterprises and consumers increasingly adopt machine learning-enabled smart cameras, the expectations of these end users are becoming more sophisticated. In particular, smart camera users increasingly expect their […]
