David Haber, Head of Deep Learning at Daedalean AI, delivers the presentation “Certifying Neural Networks for Autonomous Flight” at the Edge AI and Vision Alliance’s May 2020 Member Briefing meeting.

Deep neural networks have demonstrated impressive performance on visual recognition tasks relevant to the operation of airborne vehicles such as autonomous drones and personal electric air-taxis. Their application to visual problems, including object detection and image segmentation, is therefore promising (and even necessary) for autonomous flight. This performance, however, comes at the cost of higher model complexity, which poses challenges for interpretability, explainability and, eventually, the certification of safety-critical aviation applications.

For example, how do you convince the regulators (and ultimately the public) that your model is robust to adversarial attacks? How do you prove that your training and testing datasets are exhaustive? How do you test edge cases when your input space is infinite and any mistake is potentially fatal? Over the last year, Daedalean AI has partnered with EASA (the European Union Aviation Safety Agency) to explore how existing regulations around safety-critical applications can be adapted to encompass modern machine-learning techniques.

In this talk, Haber discusses the stages of a typical machine learning pipeline as they relate to design choices for neural network architectures, desirable properties of training and test datasets, model generalizability, and protection against adversarial attacks. He also considers the opportunities, challenges and lessons learned that may apply more generally to building AI for safety-critical applications in the future.
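To make the adversarial-attack concern concrete: one widely known technique is the fast gradient sign method (FGSM), which perturbs an input along the sign of the loss gradient to flip a model's prediction. The sketch below applies it to a toy logistic classifier with hand-picked weights; it is purely illustrative and makes no assumptions about Daedalean's actual models or pipeline.

```python
import numpy as np

# Toy logistic "classifier": p(y=1 | x) = sigmoid(w.x + b).
# Weights are illustrative only, not taken from any real aviation model.
w = np.array([2.0, -1.0])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)

def fgsm_perturb(x, y, eps):
    """Fast gradient sign method: step of size eps along the sign of the
    gradient of the cross-entropy loss with respect to the input x."""
    p = predict(x)
    # For a logistic output with cross-entropy loss, dL/dx = (p - y) * w.
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

x = np.array([1.0, 1.0])          # confidently classified as y=1 (p ~ 0.75)
x_adv = fgsm_perturb(x, y=1, eps=0.8)
print(predict(x), predict(x_adv))  # confidence collapses; the label flips
```

Even this two-parameter model is fooled by a small, structured perturbation; for deep networks operating on an effectively infinite input space, demonstrating robustness to such perturbations is a central certification question.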

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.


