
Processors

Chips&Media Now Reveals c.WAVE120 – New Generation of Super-Resolution HW IP

Deep learning-based neural network SR IP capable of processing 8K 60fps output images at 550MHz, with high performance and low power consumption. SEOUL, April 23rd, 2020 – Chips&Media, the leading global hardware IP provider, today announced the launch of c.WAVE120, a deep learning-based neural network super-resolution IP that upscales low-resolution data into high-resolution […]
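For a rough sense of what that specification implies, here is a back-of-envelope estimate (assuming "8K" means 7680×4320 output pixels; the internal architecture of c.WAVE120 is not described here):

```python
# Back-of-envelope throughput implied by 8K 60fps output at a 550 MHz clock.
# Assumes "8K" = 7680x4320; this is an illustration, not a description of the IP.
width, height, fps = 7680, 4320, 60
clock_hz = 550e6

pixels_per_second = width * height * fps          # ~1.99e9 output pixels per second
pixels_per_cycle = pixels_per_second / clock_hz   # ~3.6 output pixels per clock

print(f"{pixels_per_second / 1e9:.2f} Gpixel/s -> {pixels_per_cycle:.1f} pixels/cycle")
```

In other words, the hardware must sustain several upscaled output pixels every clock cycle, which helps explain why this is implemented as dedicated hardware rather than in software.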


Speeding Up Deep Learning Inference Using TensorRT

This article was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. This is an updated version of How to Speed Up Deep Learning Inference Using TensorRT. This version starts from a PyTorch model instead of the ONNX model, upgrades the sample application to use TensorRT 7, and replaces the […]
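The workflow described above begins with a PyTorch model; a minimal sketch of that first step (exporting a PyTorch model to ONNX so that TensorRT 7 can consume it) is shown below. The network and file names are placeholders, not the article's actual sample application:

```python
import torch
import torchvision

# Placeholder network; the article walks through its own sample model.
model = torchvision.models.resnet50(pretrained=True).eval()

# A dummy input fixes the input shape recorded in the exported graph.
dummy_input = torch.randn(1, 3, 224, 224)

# Export to ONNX; TensorRT 7's ONNX parser (or the trtexec tool) can then
# build an optimized inference engine from this file.
torch.onnx.export(
    model,
    dummy_input,
    "resnet50.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=11,
)
```

From there, something like `trtexec --onnx=resnet50.onnx --fp16` builds and benchmarks a TensorRT engine; see the full article for the complete sample.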


Accelerating WinML and NVIDIA Tensor Cores

This article was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Every year, clever researchers introduce ever more complex and interesting deep learning models to the world. There is of course a big difference between a model that works as a nice demo in isolation and a model that […]


CEVA Announces Industry’s First High Performance Sensor Hub DSP Architecture

SensPro™ family serves as a hub for processing and fusing of data from multiple sensors including camera, Radar, LiDAR, Time-of-Flight, microphones and inertial measurement units. The highly-configurable and self-contained architecture brings together scalar and parallel processing for floating point and integer data types, as well as deep learning training and inferencing. MOUNTAIN VIEW, Calif. – April 7, […]


BrainChip Introduces Company’s Event-Based Neural-Network IP and NSoC Device at Linley Processor Virtual Conference

AKD1000 is the first event-based processor for Edge AI with ultra-low power consumption and continuous learning. APRIL 2, 2020–SAN FRANCISCO–(BUSINESS WIRE)– BrainChip Holdings Ltd. (ASX: BRN), a leading provider of ultra-low power, high performance edge AI technology, today announced that it will be introducing its AKD1000 to audiences at the Linley Fall Processor Virtual Conference […]


Speeding Up Deep Learning Inference Using TensorFlow, ONNX, and TensorRT

This article was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Starting with TensorRT 7.0, the Universal Framework Format (UFF) is being deprecated. In this post, you learn how to deploy TensorFlow trained deep learning models using the new TensorFlow-ONNX-TensorRT workflow. Figure 1 shows the high-level workflow of TensorRT.
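As a hedged sketch of that TensorFlow-ONNX-TensorRT path (paths, shapes, and the use of the tf2onnx converter and the trtexec tool are assumptions here, not taken from the article's sample):

```python
# Sketch of the TensorFlow -> ONNX -> TensorRT workflow described above.
# Assumes a trained SavedModel in ./saved_model and TensorRT 7's trtexec on PATH.
import subprocess

# 1. Convert the TensorFlow SavedModel to ONNX (requires: pip install tf2onnx).
subprocess.run(
    ["python", "-m", "tf2onnx.convert",
     "--saved-model", "saved_model",
     "--output", "model.onnx",
     "--opset", "11"],
    check=True,
)

# 2. Build (and optionally benchmark) a TensorRT engine from the ONNX file.
subprocess.run(
    ["trtexec", "--onnx=model.onnx", "--saveEngine=model.trt", "--fp16"],
    check=True,
)
```

The same conversion can also be driven through the TensorRT ONNX parser API instead of trtexec; see the full article for the details.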


Application Processor Unit (APU) Quarterly Market Monitor

Application processor: all-in-one solution for the computing challenges of the next decade. MARKET DYNAMICS: The 2019 APU market closed with total revenue of $31B. Seasonally weak Q1-20 is expected to remain above $7B even as COVID-19 stresses the supply chain. Cost and ASP decline at ~20% per year through 2021, slowing to ~10% per year for 2022+.
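To see how those decline rates compound, here is a small illustration (assuming the ~20% and ~10% figures apply multiplicatively to ASP year over year, with 2019 normalized to 1.0; this is not Yole's model, just arithmetic on the quoted rates):

```python
# Compounding the quoted ASP decline rates: ~20%/year through 2021, ~10%/year after.
# 2019 ASP is normalized to 1.00; purely illustrative arithmetic.
asp = 1.00
for year in range(2020, 2024):
    rate = 0.20 if year <= 2021 else 0.10
    asp *= (1 - rate)
    print(f"{year}: {asp:.2f}x of 2019 ASP")
```

At those rates, ASPs would sit at roughly 64% of their 2019 level by the end of 2021 and continue eroding by about 10% per year thereafter.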


Maximize CPU Inference Performance with Improved Threads and Memory Management in Intel Distribution of OpenVINO Toolkit

This blog post was originally published at Intel’s website. It is reprinted here with the permission of Intel. The popularity of convolutional neural network (CNN) models and the ubiquity of CPUs mean that better inference performance can deliver significant gains to a larger number of users than ever before. As multi-core processors become the norm, […]
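The tuning described in the post centers on how the CPU plugin schedules inference threads and streams. A minimal sketch using the Inference Engine Python API of that era is shown below; the exact configuration keys and values should be treated as assumptions and checked against the documentation for your OpenVINO version:

```python
from openvino.inference_engine import IECore

ie = IECore()

# Thread and stream configuration for the CPU plugin (2020-era key names;
# verify against your toolkit version before relying on them).
ie.set_config(
    {
        "CPU_THROUGHPUT_STREAMS": "CPU_THROUGHPUT_AUTO",  # several parallel infer streams
        "CPU_THREADS_NUM": "0",                           # 0 lets the plugin choose
        "CPU_BIND_THREAD": "YES",                         # pin threads to physical cores
    },
    "CPU",
)

# Placeholder IR files; load with multiple infer requests to feed the streams.
net = ie.read_network(model="model.xml", weights="model.bin")
exec_net = ie.load_network(network=net, device_name="CPU", num_requests=4)
```

With several streams and a matching number of in-flight infer requests, the plugin can keep all cores busy, which is the kind of throughput-oriented scheduling the post discusses.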


“Market Analysis on SoCs for Imaging, Vision and Deep Learning in Automotive and Mobile Markets,” a Presentation from Yole Développement

John Lorenz, Market and Technology Analyst for Computing and Software at Yole Développement, delivers the presentation “Market Analysis on SoCs for Imaging, Vision and Deep Learning in Automotive and Mobile Markets” at the Edge AI and Vision Alliance’s March 2020 Vision Industry and Technology Forum. Lorenz presents Yole Développement’s latest analysis on the evolution of […]


CEVA Announces DSP and Voice Neural Networks Integration with TensorFlow Lite for Microcontrollers

WhisPro™ speech recognition software for voice wake words and custom command models is now available with open source TensorFlow Lite for Microcontrollers, implementing machine learning at the edge. TensorFlow Lite for Microcontrollers from Google is already optimized and available for CEVA-BX DSP cores, accelerating the use of low power AI in conversational and contextual awareness applications.
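For context on what deploying a model through TensorFlow Lite for Microcontrollers typically involves on the tooling side, here is a minimal sketch of converting a trained model to a .tflite flatbuffer; the tiny Keras network below is a placeholder and has nothing to do with CEVA's WhisPro models themselves:

```python
import tensorflow as tf

# Placeholder keyword-spotting-style network; WhisPro's actual models are CEVA's own.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(49, 40, 1)),             # e.g. spectrogram frames
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4, activation="softmax"),        # a handful of wake words
])

# Convert to a TensorFlow Lite flatbuffer with default size/latency optimizations;
# the resulting file is what TF Lite for Microcontrollers executes on the target core.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]

with open("model.tflite", "wb") as f:
    f.write(converter.convert())
```

On a CEVA-BX core, the generated flatbuffer would then be run through the TensorFlow Lite for Microcontrollers interpreter with the CEVA-optimized kernels the announcement refers to.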

