Algorithms & Models

Maximizing Attention, Minimizing Costs: Embracing Intelligent Digital Assistants with Vision and Speech Processing in the Cloud and Edge

This blog post was originally published by GMAC Intelligence. It is reprinted here with the permission of GMAC Intelligence. Humans rely mainly on speech, vision and touch to operate efficiently and effectively in the physical world. We also rely on smell and taste for our activities and survival, but for most of […]


Transformer Models and NPU IP Co-optimized for the Edge

Transformers are taking the AI world by storm, as evidenced by super-intelligent chatbots and search queries, as well as image and art generators. Transformers are also based on neural network technology, but they are structured quite differently from the more familiar convolutional methods. Now transformers are starting to make their way into edge applications.
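To make the contrast with convolution concrete: the core transformer operation is scaled dot-product attention, in which every query position weighs every key position rather than a fixed local window. Below is a minimal, vendor-neutral sketch in plain Python; the function names and toy dimensions are our own illustration, not any product's API.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention.

    Each query row attends over ALL keys (global receptive field),
    producing a convex combination of value rows -- unlike a
    convolution, which only mixes a fixed local neighborhood.
    """
    d = len(K[0])  # key dimension, used for the 1/sqrt(d) scaling
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out
```

Because the attention weights are a softmax, each output row is a weighted average of the value rows, with the largest weight on the key most similar to the query.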


DEEPX Demonstration of Simplifying Software Development with DEEPX’s Two-step SDK

Jay Kim, EVP of Technology for DEEPX, demonstrates the company’s latest edge AI and vision technologies and products at the 2023 Embedded Vision Summit. Specifically, Kim demonstrates the simplicity of using DEEPX’s software development kit (SDK). Kim shows how to choose a target application and select an AI software framework in just two easy steps.


Reflections from RSS: Three Reasons DL Fails at Autonomy

This blog post was originally published by Opteran Technologies. It is reprinted here with the permission of Opteran Technologies. Last week I had the pleasure of attending, and presenting at, the annual Robotics: Science and Systems (RSS) conference in Daegu, South Korea. RSS ranks amongst the most prestigious of the international robotics conferences, and brings together


“How Transformers Are Changing the Nature of Deep Learning Models,” a Presentation from Synopsys

Tom Michiels, System Architect for ARC Processors at Synopsys, presents the “How Transformers Are Changing the Nature of Deep Learning Models” tutorial at the May 2023 Embedded Vision Summit. The neural network models used in embedded real-time applications are evolving quickly. Transformer networks are a deep learning approach that has become dominant for natural language


Get a Clearer Picture of Vision Transformers’ Potential at the Edge

This blog post was originally published at BrainChip’s website. It is reprinted here with the permission of BrainChip. Scenario: Corporate security staff get an alert that a video camera has detected a former employee entering an off-limits building. Scenario: A radiologist receives a flag that an MRI contains early markers for potentially abnormal tissue growth.


“Making GANs Much Better, or If at First You Don’t Succeed, Try, Try a GAN,” a Presentation from Perceive

Steve Teig, CEO of Perceive, presents the “Making GANs Much Better, or If at First You Don’t Succeed, Try, Try a GAN” tutorial at the May 2023 Embedded Vision Summit. Generative adversarial networks, or GANs, are widely used to create amazing “fake” images and realistic, synthetic training data. And yet, despite their name, mainstream GANs
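As background for the talk above (and not a description of Perceive's specific method), the standard GAN setup pits a discriminator, trained to score real samples near 1 and fakes near 0, against a generator trained to fool it. The losses below are the textbook binary cross-entropy formulation, including the common "non-saturating" generator objective; all names are our own sketch.

```python
import math

def bce(p, label):
    """Binary cross-entropy for one prediction p in (0, 1) against a 0/1 label."""
    return -(label * math.log(p) + (1 - label) * math.log(1 - p))

def discriminator_loss(p_real, p_fake):
    """D wants its score on real data pushed toward 1 and on fakes toward 0."""
    return bce(p_real, 1.0) + bce(p_fake, 0.0)

def generator_loss_nonsaturating(p_fake):
    """G wants D's verdict on its fakes pushed toward 1 ('looks real').

    Maximizing log(D(G(z))) instead of minimizing log(1 - D(G(z)))
    gives stronger gradients early in training, when fakes are easy to spot.
    """
    return bce(p_fake, 1.0)
```

A well-performing discriminator (say, 0.9 on real and 0.1 on fake) incurs a much lower loss than one guessing 0.5 everywhere, which is what drives the adversarial dynamic.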


Qualcomm Works with Meta to Enable On-device AI Applications Using Llama 2

Highlights: Qualcomm is scheduled to make Llama 2-based AI implementations available on flagship smartphones and PCs starting in 2024, enabling developers to usher in new and exciting generative AI applications using the AI capabilities of Snapdragon platforms. On-device AI implementation helps to increase user privacy, address security preferences, enhance application reliability and enable personalization


“Can AI Solve the Low Light and HDR Challenge?,” a Presentation from Visionary.ai

Oren Debbi, CEO and Co-founder of Visionary.ai, presents the “Can AI Solve the Low Light and HDR Challenge?” tutorial at the May 2023 Embedded Vision Summit. The phrase “garbage in, garbage out” is applicable to machine and human vision. If we can improve the quality of image data at the source by removing noise, this
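Visionary.ai's approach is AI-based, but the "garbage in, garbage out" point can be illustrated with the simplest classical baseline: averaging several frames of the same scene. If pixel noise is independent across frames, averaging N frames shrinks its standard deviation by roughly 1/sqrt(N). This toy sketch (our own illustration, unrelated to the talk's method) treats each frame as a flat list of pixel values.

```python
import random

def temporal_average(frames):
    """Pixel-wise mean of N noisy frames of a static scene.

    Independent zero-mean noise averages out, so the result is a
    cleaner estimate of the underlying image than any single frame.
    """
    n = len(frames)
    width = len(frames[0])
    return [sum(f[i] for f in frames) / n for i in range(width)]

# Demo: a flat gray "image" corrupted by Gaussian read noise.
random.seed(0)
true_pixel = 100.0
frames = [[true_pixel + random.gauss(0.0, 10.0) for _ in range(64)]
          for _ in range(16)]
clean = temporal_average(frames)
```

With 16 frames, the residual noise is about a quarter of the single-frame noise, which is why even this naive baseline visibly helps in low light, and why a learned denoiser that works from fewer frames is so valuable.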


GMAC Intelligence Goes Big with BrainChip Partnership

Laguna Hills, Calif. – July 16, 2023 – BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, event-based, neuromorphic AI IP, welcomes AI/ML software company GMAC Intelligence as a partner to its Essential AI ecosystem. The Audience Choice Award Winner at the 2023 Vision


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411