Intel at the Embedded Vision Summit 2016


Intel Corporation is attending the Embedded Vision Summit, held May 2-4, 2016 at the Santa Clara Convention Center, to demonstrate chip-based technologies such as field programmable gate arrays (FPGAs), software development tools, and process flows that enable a wide range of automotive, industrial, and high-performance data center vision applications. Intel will demonstrate a convolutional neural network (CNN)-based pedestrian detector, as well as a CNN-based application running on low-end devices accelerated with on-board graphics. Intel speakers will also present on a range of compelling topics.

Sessions:

  • Bill Jenkins, senior product specialist for high-level design tools, Intel Programmable Solutions Group, will present "Accelerating Deep Learning Using Altera FPGAs" on Tuesday, May 3, from 12:00 – 12:30 pm.

    • Deep learning neural network systems currently provide the best solutions to many large computing problems in image recognition and natural language processing. Among them are convolutional neural networks (CNNs), which use an artificial network of neurons to perform image identification and recognition.
  • Intel’s Anavai Ramesh, senior software engineer, will present during the session "Getting from Idea to Product with 3D Vision" on Tuesday, May 3, from 12:30 – 1:00 pm. He is working on developing applications using Intel® RealSense™ camera technology.

    • For system developers, 3D vision brings a slew of new concepts, terminology, and algorithms – such as SLAM, SfM, and visual odometry. This talk focuses on challenges engineers are likely to face while incorporating 3D vision algorithms into their products. Location: Mission City Ballroom B1/M1
  • Don Faria, investment director, Intel Capital, will be among the advisors hearing from a set of vision-technology-focused start-ups at the Vision Tank, held Tuesday, May 3 at 4:30 pm in the Mission City Ballroom. This is a must-attend, first-ever event at the Summit.

Demonstrations:

You will see the demos working in real-time at the Embedded Vision Summit in the Vision Technology Showcase, on Monday, May 2 from 4:00 – 7:00 pm, and on Tuesday, May 3 from 10:30 am – 7:30 pm.

  • Convolutional Neural Network-based Pedestrian Detector

    • This demo shows a convolutional neural network (CNN)-based pedestrian detector running on an Intel Programmable Solutions Group (Altera) Arria® 10 FPGA-based accelerator board. The demo uses IP from i-Abra, an expert in deep learning neural networks, running on Altera’s FPGA, showcasing a highly efficient platform for neural networks versus GPUs: it offers a smaller computational footprint, requires much lower power to process each pixel, and has much lower latency.
      The demo uses video footage taken from a drone to identify people, demonstrating the performance of the neural network and the Arria 10 accelerator platform in overcoming challenges such as low-resolution subjects/objects, occlusion and masking of subjects/objects, and discriminating people from cattle. A video of the demo is available online.
  • Intel Real-time CNN Demonstration for Graphics Acceleration

    • Intel will demonstrate a real-time CNN-based application on low-end devices, showing the acceleration power of Intel® Iris™ Pro Graphics, which is available on a majority of desktop and mobile devices. Iris Pro Graphics’ compute power complements the CPU to support a broad range of applications with high compute demands.
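At the heart of both CNN demos above is the convolution operation: a small learned filter slides across the image and produces a feature map that highlights patterns such as edges. A minimal sketch in Python (NumPy only; the filter values here are illustrative hand-picked numbers, not weights from any Intel or i-Abra model):

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 2D convolution with 'valid' padding -- the core op of a CNN layer."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            # Dot product of the kernel with the image patch under it
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# Illustrative vertical-edge filter; in a real CNN, filters are learned from data.
edge = np.array([[1, 0, -1],
                 [1, 0, -1],
                 [1, 0, -1]], dtype=float)

img = np.zeros((5, 5))
img[:, 2:] = 1.0          # right half bright: a vertical edge down the middle
fmap = conv2d(img, edge)  # feature map responds strongly at the edge
```

In a full detector, many such filters are stacked in layers with nonlinearities between them; FPGA and GPU implementations differ mainly in how they parallelize this same multiply-accumulate pattern.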

