TECHNOLOGIES

Visual ChatGPT Explained

This blog post was originally published at SOYNET’s website. It is reprinted here with the permission of SOYNET. A Multi-Modal Conversational Model for Image Understanding and Generation: Visual ChatGPT allows users to perform complex visual tasks using text and visual inputs. With the rapid advancements in AI, there is a growing need for models that […]


The MEMS Industry: Looking Back at the Last 20 Years of Innovation and Growth

This market research report was originally published at the Yole Group’s website. It is reprinted here with the permission of the Yole Group. Overcoming a global economic downturn, the MEMS market is set to grow to US$20 billion by 2028 as MEMS allow OEMs in the consumer, automotive, and other industries to optimize the cost […]


Multiclass Confusion Matrix for Object Detection

This blog post was originally published at Tenyks’ website. It is reprinted here with the permission of Tenyks. We introduce the Multiclass Confusion Matrix for Object Detection, a table that can help you perform failure analysis by identifying otherwise unnoticeable errors, such as edge cases or non-representative issues in your data. In this article we introduce […]


Selecting the Right Camera for the NVIDIA Jetson and Other Embedded Systems

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. The camera module is the most integral part of an AI-based embedded system. With so many camera module choices on the market, the selection process may seem overwhelming. This post breaks down the process to help make […]


May 2023 Embedded Vision Summit Vision Tank Competition Finalist Presentations

Swathi A N Kumar, Founder and CEO of BetterMeal AI (substituting for David Hojah, CEO and Founder of Parrots), Amit Mate, Founder and CEO of GMAC Intelligence, Slava Chesnokov, CTO of Lemur Imaging, Tsvi Achler, Founder of Optimizing Mind, and Robert Brown, CEO of ProHawk Technology Group, deliver their Vision Tank finalist presentations at the […]


Introducing Temporian: Tryolabs and Google Venture in Temporal Data Processing

This blog post was originally published at Tryolabs’ website. It is reprinted here with the permission of Tryolabs. Today marks a significant milestone for us at Tryolabs as we introduce Temporian. In collaboration with Google, we’ve designed this tool to address the multifaceted challenges of temporal data processing head-on. Let’s explore the inspiration, functionality, and […]


The Global Market for Lidar in Autonomous Vehicles Will Grow to US$8.4 Billion by 2033

The demand for lidar adoption in the automotive industry is driving heavy investment and rapid progress, with innovations in beam-steering technologies, performance improvement, and cost reduction in lidar transceiver components. These efforts can enable lidar to be deployed in a wider range of applications beyond conventional automotive use. However, the rapidly evolving lidar […]


“Generative AI: How Will It Impact Edge Applications and Machine Perception?,” An Embedded Vision Summit Expert Panel Discussion

Sally Ward-Foxton, Senior Reporter at EE Times, moderates the “Generative AI: How Will It Impact Edge Applications and Machine Perception?” Expert Panel at the May 2023 Embedded Vision Summit. Other panelists include Greg Kostello, CTO and Co-Founder of Huma.AI, Vivek Pradeep, Partner Research Manager at Microsoft, Steve Teig, CEO of Perceive, and Roland Memisevic, Senior […]


Cadence Accelerates On-device and Edge AI Performance and Efficiency with New Neo NPU IP and NeuroWeave SDK for Silicon Design

Highlights:
- Neo NPUs efficiently offload from any host processor and scale from 8 GOPS to 80 TOPS in a single core, extending to hundreds of TOPS for multicore
- AI IP delivers industry-leading AI performance and energy efficiency for optimal PPA and cost points
- Targets a broad range of on-device and edge applications, including intelligent sensors […]


“Frontiers in Perceptual AI: First-person Video and Multimodal Perception,” a Keynote Presentation from Kristen Grauman

Kristen Grauman, Professor at the University of Texas at Austin and Research Director at Facebook AI Research, presents the “Frontiers in Perceptual AI: First-person Video and Multimodal Perception” tutorial at the May 2023 Embedded Vision Summit. First-person or “egocentric” perception requires understanding the video and multimodal data that streams from wearable cameras and other sensors.


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411