

IniVation Selected as Top-10 Wow-factor Start-up for 2019 London Deep Tech Summit

September 4, 2019 – iniVation has been selected as one of the top-10 wow-factor start-ups for Deep Tech Summit 2019. The event, which will take place on 15–16 October 2019 in London, covers a range of breakthrough technologies including autonomous systems, robotics, AI, IoT, cybersecurity, big data, blockchain, 3D printing, space, hardware and electronics, […]


“Can We Have Both Safety and Performance in AI for Autonomous Vehicles?,” a Presentation from Codeplay Software

Andrew Richards, CEO and Co-founder of Codeplay Software, presents the “Can We Have Both Safety and Performance in AI for Autonomous Vehicles?” tutorial at the May 2019 Embedded Vision Summit. The need for ensuring safety in AI subsystems within autonomous vehicles is obvious. How to achieve it is not. Standard safety engineering tools are designed



High-speed Image Sensor from ON Semiconductor Enables Intelligent Vision Systems for Viewing and Artificial Intelligence

Ultra-low power 0.3 megapixel image sensor offers superior low-light performance in a cost-effective, compact, square format. PHOENIX, Ariz. – September 11, 2019 – ON Semiconductor (Nasdaq: ON), driving energy efficient innovations, has announced the introduction of the ARX3A0 digital image sensor with 0.3 megapixel (MP) resolution in a 1:1 aspect ratio. With up to


“Memory-centric Hardware Acceleration for Machine Intelligence,” a Presentation from Crossbar

Sylvain Dubois, Vice President of Business Development and Marketing at Crossbar, presents the “Memory-centric Hardware Acceleration for Machine Intelligence” tutorial at the May 2019 Embedded Vision Summit. Even the most advanced AI chip architectures suffer from performance and energy efficiency limitations caused by the memory bottleneck between computing cores and data. Most state-of-the-art CPUs, GPUs,


Framos AI Launches FAIM SDK with 2D/3D Skeleton Tracking Functionality

September 10, 2019 – Framos AI GmbH, a member of the FRAMOS® Group, a leading global supplier of imaging products, custom vision solutions and OEM services, is launching its FAIM SDK to enable AI-powered algorithms optimized for real-time applications. Skeleton tracking is the first functionality integrated into the Framos AI SDK, which enables efficient


“DNN Challenges and Approaches for L4/L5 Autonomous Vehicles,” a Presentation from Graphcore

Tom Wilson, Vice President of Automotive at Graphcore, presents the “DNN Challenges and Approaches for L4/L5 Autonomous Vehicles” tutorial at the May 2019 Embedded Vision Summit. The industry has made great strides in development of L4/L5 autonomous vehicles, but what’s available today falls far short of expectations set as recently as two to three years



Embedded Vision Insights: September 10, 2019 Edition

LETTER FROM THE EDITOR Dear Colleague, Deep Learning for Computer Vision with TensorFlow 2.0 is the Embedded Vision Alliance's in-person, hands-on technical training class. The next session will take place November 1 in Fremont, California, hosted by Alliance Member company Mentor. This one-day hands-on overview will give you the critical knowledge you need to develop


“Snapdragon Hybrid Computer Vision/Deep Learning Architecture for Imaging Applications,” a Presentation from Qualcomm

Robert Lay, Computer Vision and Camera Product Manager at Qualcomm, presents the “Snapdragon Hybrid Computer Vision/Deep Learning Architecture for Imaging Applications” tutorial at the May 2019 Embedded Vision Summit. Advances in imaging quality and features are accelerating, thanks to hybrid approaches that combine classical computer vision and deep learning algorithms and that take advantage of


“Dynamically Reconfigurable Processor Technology for Vision Processing,” a Presentation from Renesas

Yoshio Sato, Senior Product Marketing Manager in the Industrial Business Unit at Renesas, presents the “Dynamically Reconfigurable Processor Technology for Vision Processing” tutorial at the May 2019 Embedded Vision Summit. The Dynamically Reconfigurable Processing (DRP) block in the Arm Cortex-A9 based RZ/A2M MPU accelerates image processing algorithms with spatially pipelined, time-multiplexed, reconfigurable-hardware compute resources.


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411