Tools

Accelerating Transformer Neural Networks for Autonomous Driving

This blog post was originally published at Ambarella’s website. It is reprinted here with the permission of Ambarella. Autonomous driving (AD) and advanced driver assistance system (ADAS) providers are deploying more and more AI neural networks (NNs) to offer a human-like driving experience. Several of the leading AD innovators have either deployed, or have a roadmap […]

Sensor Cortek Demonstration of SmarterRoad Running on Synopsys ARC NPX6 NPU IP

Fahed Hassanhat, head of engineering at Sensor Cortek, demonstrates the company’s latest edge AI and vision technologies and products in Synopsys’ booth at the 2024 Embedded Vision Summit. Specifically, Hassanhat demonstrates his company’s latest ADAS neural network (NN) model, SmarterRoad, combining lane detection and open space detection. SmarterRoad is a light integrated convolutional network that […]

Annual Computer Vision and Perceptual AI Developer Survey Now Open

Every year we survey developers to understand their requirements and pain points around computer vision and perceptual AI. This survey is now in its 11th year because of people like you, who contribute their real-world insights. We share the results from the survey at Alliance events and in white papers and presentations made available throughout […]

Build VLM-powered Visual AI Agents Using NVIDIA NIM and NVIDIA VIA Microservices

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Traditional video analytics applications and their development workflow are typically built on fixed-function, limited models that are designed to detect and identify only a select set of predefined objects. With generative AI, NVIDIA NIM microservices, and foundation […]
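
To make the agent pattern concrete, here is a minimal sketch of querying a vision language model served behind an OpenAI-compatible chat endpoint. The URL, model name, prompt, and message schema below are illustrative assumptions, not NVIDIA’s documented NIM or VIA interface; consult NVIDIA’s documentation for the exact request format.

import base64, requests

# Illustrative sketch only: endpoint URL, model name, and message schema
# below are placeholder assumptions, not NVIDIA's documented API.
with open("frame.jpg", "rb") as f:                       # a single video frame
    image_b64 = base64.b64encode(f.read()).decode()

response = requests.post(
    "http://localhost:8000/v1/chat/completions",         # assumed local endpoint
    json={
        "model": "example-vlm",                          # placeholder model name
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Summarize any notable events in this frame."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
        "max_tokens": 128,
    },
    timeout=60,
)
print(response.json()["choices"][0]["message"]["content"])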

Quantization: Unlocking Scalability for Large Language Models

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. Find out how LLM quantization solves the challenges of making AI work on device. In the rapidly evolving world of artificial intelligence (AI), the growth of large language models (LLMs) has been nothing short of astounding. These […]
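
As a rough illustration of the underlying idea (a generic sketch, not Qualcomm’s on-device quantization pipeline), the snippet below applies per-tensor symmetric int8 quantization to a mock weight matrix in NumPy; the matrix size and rounding scheme are assumptions for demonstration.

import numpy as np

# Illustrative sketch of per-tensor symmetric int8 weight quantization.
def quantize_int8(weights: np.ndarray):
    """Map float32 weights to int8 values plus a scale for dequantization."""
    scale = np.abs(weights).max() / 127.0                        # per-tensor scale
    q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(1024, 1024).astype(np.float32)               # mock LLM weight block
q, scale = quantize_int8(w)
print("int8 storage is 4x smaller than float32; max abs error:",
      float(np.abs(dequantize(q, scale) - w).max()))

Production flows typically go further, with per-channel scales, activation quantization, and calibration data, which is where the techniques discussed in the article come in.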

Ambarella and Plus Announce High Performance Transformer-based AD Perception Software Stack, PlusVision, for CV3-AD AI Domain Controller Family With Industry-leading Power Efficiency

Birds-Eye-View Vision Technology Enables OEMs to Offer L2+/L3 Autonomy Across Vehicle Models With Uniform Perception Software

SANTA CLARA, Calif., July 31, 2024 — Ambarella, Inc. (NASDAQ: AMBA), an edge AI semiconductor company, and Plus, an AI-based driver assist and autonomous driving (AD) solutions provider, today announced that Plus’s PlusVision™—a high-performance transformer-based AD perception software stack […]

Enhance Multi-camera Tracking Accuracy by Fine-tuning AI Models with Synthetic Data

This article was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Large-scale, use-case-specific synthetic data has become increasingly important in real-world computer vision and AI workflows. That’s because digital twins are a powerful way to create physics-based virtual replicas of factories, retail spaces, and other assets, enabling precise simulations […]
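
For a feel of what fine-tuning on synthetic frames can look like in practice, here is a minimal, generic sketch (not NVIDIA’s workflow): it adapts a pretrained torchvision detector using one mock, synthetically generated sample. The class count, box coordinates, and hyperparameters are assumptions.

import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Load a pretrained detector and resize its head for an assumed 3-class problem
# (background, person, vehicle).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=3)

# One mock "synthetic" training sample: a random frame with a single labeled box.
# In a real pipeline, these tensors would come from digital-twin renders.
image = torch.rand(3, 480, 640)
target = {"boxes": torch.tensor([[100.0, 120.0, 300.0, 400.0]]),
          "labels": torch.tensor([1])}

model.train()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
loss_dict = model([image], [target])       # dict of detection losses
loss = sum(loss_dict.values())
optimizer.zero_grad()
loss.backward()
optimizer.step()
print({k: float(v) for k, v in loss_dict.items()})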

Nota AI Demonstration of Transforming Edge AI with the LaunchX Converter and Benchmarker

Tae-Ho Kim, CTO and Co-founder of Nota AI, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Kim demonstrates his company’s LaunchX platform, featuring its powerful Converter and Benchmarker. LaunchX optimizes AI models for edge devices, reducing latency and boosting performance. Practical applications of the Converter […]

Nota AI Demonstration of Elevating Traffic Safety with Vision Language Models

Tae-Ho Kim, CTO and Co-founder of Nota AI, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Kim demonstrates his company’s Vision Language Model (VLM) solution, designed to elevate vehicle safety. Advanced models analyze and interpret visual data to prevent accidents and enhance driving experiences. The […]

Free Webinar Explores Synthetic Data for Deep Learning Model Training

On September 26, 2024 at 9 am PT (noon ET), Jakub Pietrzak, Chief Technology Officer of SKY ENGINE AI, will present the free one-hour webinar “Leveraging Synthetic Data for Real-time Visual Human Behavior Analysis Using the SKY ENGINE AI Platform,” organized by the Edge AI and Vision Alliance. Here’s the description, from the event registration […]
