“Accelerate Adoption of AI at the Edge with Easy to Use, Low-power Programmable Solutions,” a Presentation from Lattice Semiconductor

Hussein Osman, Consumer Segment Manager at Lattice Semiconductor, presents the “Accelerate Adoption of AI at the Edge with Easy to Use, Low-power Programmable Solutions” tutorial at the May 2019 Embedded Vision Summit. In this talk, Osman shows why Lattice’s low-power FPGA devices, coupled with the sensAI software stack, are a compelling solution for implementation of […]

“MediaTek’s Approach for Edge Intelligence,” a Presentation from MediaTek

Bing Yu, Senior Technical Manager and Architect at MediaTek, presents the “MediaTek’s Approach for Edge Intelligence” tutorial at the May 2019 Embedded Vision Summit. MediaTek has incorporated an AI processing unit (APU) alongside the traditional CPU and GPU in its SoC designs for the next wave of smart client devices (smartphones, cameras, appliances, cars, etc.).

IniVation Selected as Top-10 Wow-factor Start-up for 2019 London Deep Tech Summit

September 4, 2019 – iniVation has been selected as one of the top-10 wow-factor start-ups for the Deep Tech Summit 2019. The event, which will take place on 15-16 October 2019 in London, covers a range of breakthrough technologies including autonomous systems, robotics, AI, IoT, cyber-security, big data, blockchain, 3D printing, space, hardware and electronics.

“Can We Have Both Safety and Performance in AI for Autonomous Vehicles?,” a Presentation from Codeplay Software

Andrew Richards, CEO and Co-founder of Codeplay Software, presents the “Can We Have Both Safety and Performance in AI for Autonomous Vehicles?” tutorial at the May 2019 Embedded Vision Summit. The need for ensuring safety in AI subsystems within autonomous vehicles is obvious. How to achieve it is not. Standard safety engineering tools are designed

High-speed Image Sensor from ON Semiconductor Enables Intelligent Vision Systems for Viewing and Artificial Intelligence

Ultra-low power 0.3 megapixel image sensor offers superior low-light performance in a cost-effective, compact, square format

PHOENIX, Ariz. – September 11, 2019 – ON Semiconductor (Nasdaq: ON), driving energy-efficient innovations, has announced the introduction of the ARX3A0 digital image sensor with 0.3 megapixel (MP) resolution in a 1:1 aspect ratio. With up to

“Memory-centric Hardware Acceleration for Machine Intelligence,” a Presentation from Crossbar

Sylvain Dubois, Vice President of Business Development and Marketing at Crossbar, presents the “Memory-centric Hardware Acceleration for Machine Intelligence” tutorial at the May 2019 Embedded Vision Summit. Even the most advanced AI chip architectures suffer from performance and energy efficiency limitations caused by the memory bottleneck between computing cores and data. Most state-of-the-art CPUs, GPUs,

Framos AI Launches FAIM SDK with 2D/3D Skeleton Tracking Functionality

September 10, 2019 – Framos AI GmbH, a member of the FRAMOS® Group, a leading global supplier of imaging products, custom vision solutions and OEM services, is launching its FAIM SDK to enable AI-powered algorithms optimized for real-time applications. Skeleton tracking is the first functionality integrated into the Framos AI SDK, which enables efficient

“DNN Challenges and Approaches for L4/L5 Autonomous Vehicles,” a Presentation from Graphcore

Tom Wilson, Vice President of Automotive at Graphcore, presents the “DNN Challenges and Approaches for L4/L5 Autonomous Vehicles” tutorial at the May 2019 Embedded Vision Summit. The industry has made great strides in development of L4/L5 autonomous vehicles, but what’s available today falls far short of expectations set as recently as two to three years

Embedded Vision Insights: September 10, 2019 Edition

LETTER FROM THE EDITOR Dear Colleague, Deep Learning for Computer Vision with TensorFlow 2.0 is the Embedded Vision Alliance's in-person, hands-on technical training class. The next session will take place November 1 in Fremont, California, hosted by Alliance Member company Mentor. This one-day hands-on overview will give you the critical knowledge you need to develop
