TECHNOLOGIES

Computing and AI for Automotive: Toward Centralization and Connectivity

This market research report was originally published at the Yole Group’s website. It is reprinted here with the permission of the Yole Group. Chiplets for ADAS and infotainment take off with centralization trends. The ADAS and infotainment processors market, worth US$7.8 billion in 2023, is expected to reach US$16.4 billion by 2029, with a […]


Exploring the Components of LiDAR

Automotive autonomy has triggered enormous interest in sensors that collect both vehicle and road information. Among them, three-dimensional (3D) light detection and ranging (LiDAR), a remote sensing method that uses laser light to measure distances and create precise 3D maps of the surroundings, provides high angular resolution and long detection range
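The ranging principle the teaser describes can be sketched in a few lines: a pulsed LiDAR measures the round-trip time of a laser pulse, and range follows from the speed of light. A minimal illustration (function name and the sample timing value are ours, not from the article):

```python
# Hedged sketch of time-of-flight ranging, the principle behind pulsed LiDAR.
# Range = (speed of light x round-trip time) / 2, since the pulse travels
# to the target and back.

C = 299_792_458.0  # speed of light in vacuum, m/s


def tof_distance_m(round_trip_s):
    """Convert a measured round-trip pulse time (seconds) to range (meters)."""
    return C * round_trip_s / 2.0


# A pulse returning after roughly 667 ns corresponds to a target ~100 m away.
d = tof_distance_m(667e-9)
```

Real LiDAR front ends add pulse detection, timing jitter compensation and per-beam calibration on top of this basic relation.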


Modernizing Automotive Interfaces

This blog post was originally published at Avnet’s website. It is reprinted here with the permission of Avnet. The MCU is a low-power, flexible and highly integrated device. It integrates many peripherals, from clocks and timers to data conversion and power management. Most MCUs incorporate plenty of general-purpose input/output (GPIO) interfaces, along with
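GPIO pins on an MCU are typically controlled by setting and clearing individual bits in a port register. A minimal, hardware-free sketch of that bit manipulation, modeling a hypothetical 8-bit output port as a plain integer (pin number and names are illustrative, not from the article):

```python
# Sketch of GPIO-style bit manipulation on a hypothetical 8-bit port register.
# No real hardware is accessed; the "register" is just an integer.

LED_PIN = 3  # hypothetical pin position on the port


def set_pin(port, pin):
    return port | (1 << pin)   # drive the pin high


def clear_pin(port, pin):
    return port & ~(1 << pin)  # drive the pin low


def toggle_pin(port, pin):
    return port ^ (1 << pin)   # invert the pin's current state


port = 0x00
port = set_pin(port, LED_PIN)     # port is now 0b00001000
port = toggle_pin(port, LED_PIN)  # back to 0b00000000
```

On real silicon the same masks are written to memory-mapped set/clear registers, often with dedicated atomic set/reset addresses to avoid read-modify-write races.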


“Optimized Vision Language Models for Intelligent Transportation System Applications,” a Presentation from Nota AI

Tae-Ho Kim, Co-founder and CTO of Nota AI, presents the “Optimized Vision Language Models for Intelligent Transportation System Applications” tutorial at the May 2024 Embedded Vision Summit. In the rapidly evolving landscape of intelligent transportation systems (ITSs), the demand for efficient and reliable solutions has never been greater. In this presentation, Kim shows how an


Understanding What the Machines See: State-of-the-art Computer Vision at CVPR 2024

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. Qualcomm’s accepted papers, demos and workshops at CVPR 2024 showcase the future of generative AI and perception. The Computer Vision and Pattern Recognition Conference (CVPR) 2024 begins on Monday, June 17, and Qualcomm Technologies is excited to


“Image Signal Processing Optimization for Object Detection,” a Presentation from Nextchip

Young-Jun Yoo, Executive Vice President at Nextchip, presents the “Image Signal Processing Optimization for Object Detection” tutorial at the May 2024 Embedded Vision Summit. This talk delves into the challenges and optimization strategies in image signal processing (ISP) for enhancing object detection in advanced driver-assistance systems (ADAS). Through real-world examples, Yoo explores the critical role


Understanding Spatial Noise and Its Reduction Methods Using Convolution

This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Convolution is a mathematical operation used in image processing to apply filters to images. These filters are used for spatial noise reduction in images with variations or irregularities in the pixel values that are unrelated
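The convolution operation the teaser describes slides a small kernel over the image and replaces each pixel with a weighted sum of its neighborhood. A minimal pure-Python sketch using a 3×3 averaging (box) kernel, one of the simplest spatial noise reduction filters (the helper name and sample image are ours, not from the article):

```python
# Minimal sketch of spatial noise reduction via 2D convolution.
# Pure Python, 'valid' mode (no padding), grayscale image as a list of lists.

def convolve2d(image, kernel):
    """Apply a 2D kernel to a grayscale image; output shrinks by kernel size - 1."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for y in range(h - kh + 1):
        row = []
        for x in range(w - kw + 1):
            acc = 0.0
            for j in range(kh):
                for i in range(kw):
                    acc += image[y + j][x + i] * kernel[j][i]
            row.append(acc)
        out.append(row)
    return out


# 3x3 averaging (box) kernel: each output pixel becomes the mean of its
# neighborhood, which attenuates isolated, uncorrelated noise spikes.
box = [[1 / 9] * 3 for _ in range(3)]

noisy = [
    [10, 10, 10, 10],
    [10, 90, 10, 10],  # 90 is a single noisy spike
    [10, 10, 10, 10],
    [10, 10, 10, 10],
]
smoothed = convolve2d(noisy, box)
# The spike's contribution is spread across the neighborhood and reduced.
```

Gaussian and median filters follow the same neighborhood idea with different weightings and are usually preferred in practice; the box kernel just makes the averaging explicit.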


“Squeezing the Last Milliwatt and Cubic Millimeter from Smart Cameras Using the Latest FPGAs and DRAMs,” a Presentation from Lattice Semiconductor and Etron Technology America

Hussein Osman, Segment Marketing Director at Lattice Semiconductor, and Richard Crisp, Vice President and Chief Scientist at Etron Technology America, co-present the “Squeezing the Last Milliwatt and Cubic Millimeter from Smart Cameras Using the Latest FPGAs and DRAMs” tutorial at the May 2024 Embedded Vision Summit. Attaining the lowest power, size and cost for a


Dream Chip and Cadence Demo Automotive SoC Featuring Tensilica AI IP at embedded world 2024

Cadence verification and RTL-to-GDS digital full flow tuned for automotive safety, quality and reliability requirements

18 Jun 2024 – At embedded world 2024, Cadence and Dream Chip demonstrated Dream Chip’s latest automotive SoC, which features the Cadence® Tensilica® Vision P6 DSP IP and Cadence design IP controllers and was taped out using the complete Cadence® Verification


TOPS of the Class: Decoding AI Performance on RTX AI PCs and Workstations

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. What is a token? Why is batch size important? And how do they help determine how fast AI computes?

Editor’s note: This post is part of the AI Decoded series, which demystifies AI by making the technology


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411