AutoSens 2018 Show Report: Another 5 Lessons Learned


This blog post was originally published at videantis' website. It is reprinted here with the permission of videantis.

It seems like only yesterday that Robert Stead from Sense Media organized his first AutoSens conference, but time flies when you're having fun, and last week already marked the fifth AutoSens show. Special pins were handed out to the select group of people who had attended all five.

The event is held twice a year, in Brussels and Detroit, and there's talk of adding a third location in Asia. Just like in 2016 (show report) and 2017 (show report), the venue of choice was again the awesome AutoWorld car museum, with its collection of over 350 vintage European and American cars on display. What better place to talk about lidar, radar and image sensing than among Citroën 2CVs, Ferraris and 19th-century Benzes?

To give you a good feel for its size, here’s the conference in numbers:

  • 550 attendees (440 in 2017)
  • 70 expert speakers
  • 50 exhibitors (60 last year)
  • 60 IEEE P2020 work group attendees (70 last year)
  • 3 parallel tracks of talks

Good energy attracts good people, and the show grew again, not so much in the number of speakers or exhibitors, but primarily in the number of attendees. The show's organizers want to keep the more intimate nature of a smaller conference: it's all about participating actively in the technically savvy AutoSens community, not just about having a big conference.

P2020 and workshops

The setup was similar to last year, with Day 1 allocated to meetings of the IEEE P2020 Standards Association Working Group on Automotive System Image Quality. This workgroup specifies ways to measure and test automotive image capture quality, ensuring we all speak the same language and striving for consistency, which benefits all the automotive sensor makers and their adopters. The workgroup has been working toward this goal for over two years now, and progress is steady. At the beginning of September, the group published a 32-page whitepaper outlining its work.

Day 2 saw four half-day workshops, on topics such as HD maps, functional safety, and simulating sensing systems, plus an introduction to the world of time-of-flight sensors for 3D imaging by industry veteran Prof. Albert Theuwissen.

AutoSens Awards

Once again the AutoSens Awards were handed out during a very nice dinner and ceremony on Day 3 of the conference. All the winners can be found here. Our own Marco Jacobs won a silver award in the "Most engaging content" category, where EETimes journalist Junko Yoshida rightfully took first prize. Other winners in various categories were Robert Bosch, Algolux, AEye, Prof. Alexander Braun, Udacity, and the North West Advanced Programming Workshop Programme. Robert Stead, Managing Director of Sense Media Group, said that nominations almost doubled compared to last year's event, making it difficult to whittle down the shortlist, and that he was very happy with a winners list of such industry trail-blazers.

Conference

The last two days were filled with keynotes and 3 parallel tracks of presentations. There was a healthy mix of analysts, academia, OEMs, Tier 1s, semiconductor companies, industry organizations, and software vendors giving talks.

We presented a talk titled "How deep learning affects automotive SoC and system designs," in which we gave an overview of the status of deep learning in automotive and its impact on automotive chip and system architectures. Let us know if you'd like access to the slides and we'll send them to you.

In our 2016 and 2017 show reports, we highlighted a top 5 of trends. We’ll repeat them here briefly again, so you can see if you can spot a “trend of trends” and notice where the industry is making progress.

2016 trends:

  • Self-driving cars are hard: Tesla Autopilot accident, the resulting breakup between Mobileye and Tesla, and the new Autopilot software that’s more conservative in letting the driver give up the wheel.
  • Deep learning is hard: the memory and compute resources required to run these detectors are still beyond what embedded systems that can go into mainstream cars offer (see the back-of-the-envelope sketch after this list).
  • Image quality is hard: many components affect picture quality: the scene, lens, sensor, and ISP, and then there’s the bigger question of how this all impacts the computer vision algorithms.
  • We need more sensors: OEMs are designing 12 cameras into their cars, and on top of that we still need to add radar and lidar.
  • Surround view is replacing rear view: surround-view systems are quickly displacing yesteryear's single rear-view camera.
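
To put the "deep learning is hard" point into numbers, here is a minimal Python sketch of the arithmetic behind a single convolution layer's cost. The resolution, channel counts and kernel size are our own illustrative assumptions, not figures from any specific network.

```python
# Back-of-the-envelope cost of one convolutional layer. All numbers are
# illustrative assumptions, not taken from any particular network.

def conv_layer_cost(height, width, c_in, c_out, kernel):
    """Multiply-accumulates (MACs) per frame and weight count for one
    kernel x kernel convolution producing a height x width output."""
    macs = height * width * c_in * c_out * kernel * kernel
    weights = c_in * c_out * kernel * kernel
    return macs, weights

# A hypothetical early layer running on a 1280x800 automotive camera feed
macs, weights = conv_layer_cost(height=800, width=1280, c_in=32, c_out=64, kernel=3)
print(f"{macs / 1e9:.1f} GMACs per frame")              # ~18.9 GMACs
print(f"{weights * 4 / 1024:.0f} KiB of fp32 weights")  # ~72 KiB
print(f"{macs * 30 / 1e9:.0f} GMAC/s at 30 fps")        # ~566 GMAC/s
```

Even this one hypothetical layer demands hundreds of GMAC/s at video rate, and real detection networks stack dozens of such layers; that is the gap mainstream embedded automotive silicon still has to close.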

2017 trends:

  • The devil is in the detail: it's not simply about staying in lanes but, for instance, about anticipating small potholes, or recognizing the difference between a red jerry can, which must be avoided, and a red plastic bag, which can be ignored.
  • No one sensor to rule them all: The industry seems to clearly head in the direction of using 5 key sensors: ultrasonic, image sensors, time-of-flight, radar and lidar.
  • No bold predictions: Lyft said self-driving vehicles are 3-4 years away; Yole said that by 2050, 5% of all vehicles sold should be Level-5-ready. Otherwise, nobody dared to make predictions.
  • What will an autonomous car really be like? Once the car drives itself, everything will change: business models, insurance, the vehicle’s interior, exterior, and the way we use them.
  • Deep learning is a must-have tool for everyone: neural nets are used not just for detection, classification and segmentation, but also for lens correction, collecting high-resolution maps, implementing driving policies, simplifying the labelling task itself, and even studying ethical aspects.

Another year passed, so here again are our “5 lessons learned at AutoSens”.

1. You’re being watched: interior cameras coming to your car

People are already talking about having 10+ cameras inside the vehicle, looking at where the passengers are and, especially, at what the driver is doing and how. Whether it's for airbag deployment, driver authentication, or automated seat and steering-wheel adjustment, the car wants to know who and where you are.

Driver monitoring systems look at your gaze, blink rate and head pose, either to plan a handover or to issue safety warnings. It seems there will not only be a lot of cameras looking at what's going on around the vehicle, but nearly as many to understand what's going on inside it.
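
As a flavor of how the blink-rate part of such a system can work, here is a minimal sketch using the well-known eye-aspect-ratio (EAR) technique. It assumes some upstream facial-landmark detector supplies six 2D landmarks per eye (the usual 68-point convention); the threshold value is illustrative and would be tuned per camera and driver in practice.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """EAR from six 2D eye landmarks (numpy array of shape (6, 2)),
    ordered as in the common 68-point facial-landmark convention:
    mean of the two vertical distances over the horizontal distance.
    The ratio drops sharply when the eye closes."""
    v1 = np.linalg.norm(eye[1] - eye[5])
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])
    return (v1 + v2) / (2.0 * h)

EAR_THRESHOLD = 0.2  # illustrative; tuned per camera and driver in practice

def count_blinks(ear_per_frame):
    """Count closed-to-open transitions in a sequence of per-frame EARs."""
    blinks, closed = 0, False
    for ear in ear_per_frame:
        if ear < EAR_THRESHOLD:
            closed = True
        elif closed:
            blinks += 1
            closed = False
    return blinks
```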

2. We can’t test drive trillions of miles, we need to simulate

In the US alone, Americans drive over 3 trillion miles per year. To cover all those miles with test vehicles, each of which typically drives 50,000 miles per year, you'd need 60 million of them, far more than is realistically possible. The solution: simulate all those miles on a huge compute cluster. Several presentations discussed simulation, both of complete self-driving scenarios and at the component level.
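
The fleet-size arithmetic above, spelled out, using only the figures from the paragraph:

```python
us_miles_per_year = 3e12         # total US vehicle miles driven per year
miles_per_test_vehicle = 50_000  # typical annual mileage of one test vehicle

fleet_needed = us_miles_per_year / miles_per_test_vehicle
print(f"{fleet_needed / 1e6:.0f} million test vehicles")  # -> 60 million
```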

3. Invisible sensors

While smart sensors are important, looks matter even more. Buying a car is an emotional experience, and brand perception and design are key; design can make or break a newly introduced car. This means the sensors should be as small as possible and not impact the design, both on the exterior and in the interior of the car.

4. Digital mirrors are coming

The latest amendment to UN ECE Regulation No. 46 is based on ISO 16505:2015 and is the first regulation permitting the use of cameras as an alternative to conventional mirrors for passenger cars as well as commercial vehicles. BMW's Hoffman and Bauer showed that a single rear camera isn't adequate for such a system and that only multiple cameras can provide a good replacement for standard mirrors. Stitching the multiple camera images for presentation on a single screen isn't easy, though, and the best approach still needs further research.
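
To illustrate the stitching problem in its simplest form, here is a sketch using OpenCV's general-purpose panorama stitcher. The file names are hypothetical, and a production mirror-replacement system would work from calibrated camera geometry and synchronized video rather than feature-based still-image stitching.

```python
import cv2

# Hypothetical file names; a real system would grab synchronized frames
# from calibrated rear and side cameras.
frames = [cv2.imread(name) for name in
          ("left_mirror.jpg", "rear.jpg", "right_mirror.jpg")]

# Feature-based stitching: detect keypoints, match them across images,
# estimate homographies, then warp and blend onto one canvas.
stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(frames)

if status == cv2.Stitcher_OK:
    cv2.imwrite("digital_mirror_view.jpg", panorama)
else:
    print(f"Stitching failed with status {status}")
```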

5. It’s all about self-driving, but all the talks are about ADAS

The whole automotive industry's goal can be summed up in two words, "autonomous vehicles," but the set of components that make up a self-driving car is quite complex. As a result, most of the talks are about ADAS and incrementally increasing the level of automation.

Conclusion

We're still at the beginning of the massive change that's coming to the automotive and transportation industries. These industries are very large, well over a trillion dollars, have a big impact on the environment and our safety, and consume a great deal of drivers' time behind the wheel. Progress is tremendous: sensors, processing architectures, and algorithms are all getting better, and OEMs' and consumers' appetite for this technology is not getting any smaller. We're excited to be part of this transformation and to be one of the leaders delivering the crucial deep learning acceleration and visual computing solutions to this market. Without low-power, high-performance processing, we can't build ADAS systems or self-driving vehicles.

The next AutoSens show is scheduled to take place in May 2019 in Detroit. We’re looking forward to meeting everyone there, or preferably sooner!

Please don’t hesitate to contact us to learn more about our deep learning/vision/video processor and software solutions.

By Marco Jacobs
Vice President of Marketing, videantis
