
Embedded Vision Insights: March 11, 2014 Edition


In this edition of Embedded Vision Insights:

LETTER FROM THE EDITOR

Dear Colleague,

In the previous edition of Embedded Vision Insights, I mentioned that we'd published initial details on the afternoon keynote for the May 29 Embedded Vision Summit, to be held in Santa Clara. The presenter, Nathaniel Fairfield of Google's self-driving car team, has subsequently finalized the title and abstract for his talk, which addresses the second of the two foundation themes of the Summit, recognition and autonomy. Here's what Fairfield says about his planned presentation:

Self-driving cars have the potential to transform how we move: they promise to make us safer, give freedom to millions of people who can't drive, and give people back their time. The Google Self-Driving Car project was created to rapidly advance autonomous driving technology and build on previous research. For the past four years, Google has been working to make cars that drive reliably on many types of roads, using lasers, cameras, and radar, together with a detailed map of the world. Fairfield will describe how Google leverages maps to assist with challenging perception problems such as detecting traffic lights, and how the different sensors can be used to complement each other. Google's self-driving cars have now traveled more than half a million miles autonomously. In this talk, Fairfield will discuss Google's overall approach to solving the driving problem, the capabilities of the car, the company's progress so far, and the remaining challenges to be resolved.
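Map-assisted perception of the kind Fairfield describes is often explained with a simple geometric idea: if the map stores a traffic light's 3D position and the vehicle knows its own pose, the light's expected location can be projected into the camera image, so the detector only has to search a small region of interest instead of the whole frame. The C++ sketch below is a minimal illustration of that idea under simplified assumptions (a pinhole camera, a known world-to-camera pose, hand-picked hypothetical numbers); it is not Google's actual pipeline.

```cpp
// Minimal sketch: project a mapped traffic-light position into the camera
// image to get a search region of interest (ROI). Illustrative only; a real
// system would use calibrated intrinsics/extrinsics and a full pose estimate.
#include <array>
#include <cstdio>

struct Vec3 { double x, y, z; };

// Camera pose: rotation (row-major 3x3) and translation, world -> camera.
struct Pose {
    std::array<double, 9> R;
    Vec3 t;
};

// Pinhole intrinsics: focal lengths and principal point, in pixels.
struct Intrinsics { double fx, fy, cx, cy; };

// Transform a world point into the camera frame.
Vec3 worldToCamera(const Pose& p, const Vec3& w) {
    return {
        p.R[0]*w.x + p.R[1]*w.y + p.R[2]*w.z + p.t.x,
        p.R[3]*w.x + p.R[4]*w.y + p.R[5]*w.z + p.t.y,
        p.R[6]*w.x + p.R[7]*w.y + p.R[8]*w.z + p.t.z,
    };
}

int main() {
    // Hypothetical values: identity rotation, light 40 m ahead and 5 m up.
    Pose pose{{1,0,0, 0,1,0, 0,0,1}, {0.0, -1.5, 0.0}};
    Intrinsics K{1000.0, 1000.0, 640.0, 360.0};
    Vec3 lightWorld{2.0, 5.0, 40.0};

    Vec3 c = worldToCamera(pose, lightWorld);
    if (c.z <= 0.0) return 0;  // behind the camera; nothing to search

    // Project into pixel coordinates.
    double u = K.fx * (c.x / c.z) + K.cx;
    double v = K.fy * (c.y / c.z) + K.cy;

    // Pad the predicted location to absorb map and localization uncertainty.
    const double pad = 60.0;  // pixels; a tuning assumption
    std::printf("search ROI: x=[%.0f, %.0f]  y=[%.0f, %.0f]\n",
                u - pad, u + pad, v - pad, v + pad);
    return 0;
}
```

The padding around the projected point is the knob that trades detector work against tolerance for map and pose error; everything outside the ROI never needs to be examined.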

The Embedded Vision Summit West is a technical educational forum for engineers interested in incorporating visual intelligence into electronic systems and software. Online registration is now available, along with travel and housing information, so I encourage you to sign up for the conference right away, before all of the attendance slots are filled. And while you're on the Alliance website, make sure you check out all the new content published there in recent weeks. Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. I welcome your emailed suggestions on what the Alliance can do better, as well as what else it can do, to better serve your needs.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

FEATURED VIDEOS

September 2013 Qualcomm UPLINQ Conference Presentation: "Accelerating Computer Vision Applications with the Hexagon DSP," Eric Gregori, BDTI
Eric Gregori, Senior Software Engineer at BDTI, presents the "Accelerating Computer Vision Applications with the Hexagon DSP" tutorial at the September 2013 Qualcomm UPLINQ Conference. Smartphones, tablets and embedded systems increasingly use sophisticated vision algorithms to deliver capabilities like augmented reality and gesture user interfaces. Since vision algorithms are computationally demanding, a key challenge when implementing vision in battery-powered devices is achieving energy-efficient processing. This BDTI tutorial presents a straightforward approach for using the Hexagon DSP core in Qualcomm’s Snapdragon application processors to offload vision processing functions from the CPU, in order to reduce power and free up CPU cycles for other tasks.
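As a concrete flavor of the offload idea (though not the tutorial's own code), Qualcomm's FastCV library lets an application hint that supported vision kernels should be routed off the CPU to hardware units such as the Hexagon DSP on targets that support it. The C++ sketch below assumes the FastCV SDK header (fastcv.h) and an incoming grayscale frame; the buffer handling and choice of filter are illustrative.

```cpp
// Minimal sketch using Qualcomm's FastCV library. FASTCV_OP_CPU_OFFLOAD asks
// FastCV to route supported kernels to non-CPU hardware where available; this
// is not the BDTI tutorial's code, just an illustration of the approach.
#include <cstdint>
#include "fastcv.h"

void blurFrame(const uint8_t* gray, unsigned int width, unsigned int height) {
    // Hint that supported operations should be offloaded from the CPU.
    fcvSetOperationMode(FASTCV_OP_CPU_OFFLOAD);

    // FastCV expects 128-bit-aligned buffers; use its allocator.
    uint8_t* out = static_cast<uint8_t*>(fcvMemAlloc(width * height, 16));

    // 3x3 Gaussian blur; when offloaded, the heavy lifting runs off the CPU,
    // freeing CPU cycles and reducing power draw.
    fcvFilterGaussian3x3u8(gray, width, height, out, 0);

    // ... hand `out` to the rest of the vision pipeline ...

    fcvMemFree(out);
    fcvCleanUp();  // release FastCV resources when done
}
```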

January 2014 Consumer Electronics Show Product Demonstration: ARM
Phill Smith, Demo Solutions Manager, demonstrates ARM's latest embedded vision technologies and products at the January 2014 Consumer Electronics Show. Specifically, Smith demonstrates various gesture-interface applications that are hardware-accelerated by the company's Mali GPU cores.

More Videos

FEATURED ARTICLES

Significant Growth Potential for Image Sensors in Automotive Market
Image sensors are used effectively in a range of applications, from photographic to medical. One sector in which image sensors currently show particularly impressive growth potential is automotive, specifically Advanced Driver Assistance Systems (ADAS). ADAS use image sensors to improve a driver's safety on the road by offering features such as parking assistance (APA), lane departure warning (LDW), and collision avoidance, all of which help the car gather information about the outside world. To offer these various features, ADAS embrace a number of technologies, such as image sensors, radar, lidar and integrated cameras, to provide the driver with an all-round picture of their surroundings. More
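To give a flavor of what sits behind a feature like lane departure warning, the classic textbook pipeline runs edge detection on the road image and then fits line segments with a Hough transform. The OpenCV C++ sketch below is purely illustrative; production ADAS algorithms are far more sophisticated, and the thresholds here are hand-picked assumptions.

```cpp
// Toy lane-marking detector: grayscale -> blur -> Canny edges -> Hough line
// segments. A minimal illustration of the idea behind lane departure warning,
// not a production ADAS algorithm.
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Vec4i> detectLaneSegments(const cv::Mat& frameBgr) {
    cv::Mat gray, edges;
    cv::cvtColor(frameBgr, gray, cv::COLOR_BGR2GRAY);
    cv::GaussianBlur(gray, gray, cv::Size(5, 5), 1.5);
    cv::Canny(gray, edges, 50, 150);

    // Zero out the top half of the edge map, keeping the lower half where
    // the road surface (and any lane markings) appears.
    edges(cv::Rect(0, 0, edges.cols, edges.rows / 2)).setTo(0);

    // Probabilistic Hough transform: returns (x1, y1, x2, y2) segments.
    std::vector<cv::Vec4i> segments;
    cv::HoughLinesP(edges, segments, 1, CV_PI / 180, 50 /*votes*/,
                    40 /*min length*/, 20 /*max gap*/);
    return segments;
}
```

A departure warning built on top of this would track how the fitted lane boundaries drift relative to the vehicle's centerline across successive frames.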

School Security Trends
On the anniversary of Sandy Hook, many schools have already made changes to policies and security systems. IHS forecasts strong growth for security equipment in US schools over the next several years, estimating that the market will reach $634 million this year and expecting it to surpass $720 million in the years that follow. Security equipment installations and upgrades, as well as policies, will vary by school district and university; however, IHS finds that video surveillance will be the focal point in the years to come. More

More Articles

FEATURED COMMUNITY DISCUSSIONS

Job Opportunity For Vision Software Expert In Duluth, GA

Benchmarking Algorithms On Computer Vision Platforms

More Community Discussions

FEATURED NEWS

CogniVue Introduces Advanced Driver Assistance System (ADAS) Application for 2nd Generation APEX Image Cognition Processor

GEO Expands Product Line with Compact Geometric Image Processing Solutions for Automotive Ultra-Wide Angle Camera and Head-Up Display Applications

Xilinx Introduces UltraScale Multi-Processing Architecture for the Industry’s First All Programmable MPSoCs

More News
