Shang-Hung Lin, Vice President of Vision and Imaging Products at VeriSilicon, presents the “Utilizing Neural Networks to Validate Display Content in Mission Critical Systems” tutorial at the May 2018 Embedded Vision Summit.
Mission critical display systems in aerospace, automotive and industrial markets require validation of the content presented to the user, in order to detect potential failures and trigger failsafe mechanisms. Traditional validation methods rely on pixel-perfect matching between expected and presented content. As user interface (UI) designs in these systems become more elaborate, these traditional methods become obsolete and must be replaced with more robust methods that can recognize the mission critical information in a dynamic UI.
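To make the traditional approach concrete, a pixel-perfect check is often implemented as a CRC computed over a safety-relevant region of the framebuffer and compared against a precomputed reference. The sketch below illustrates the idea; the function names, the packed RGBA framebuffer layout, and the region-of-interest parameters are illustrative assumptions, not a specific vendor implementation.

```python
# Illustrative sketch of a traditional pixel-perfect display check:
# a CRC32 over a rectangular region of interest (ROI) of the rendered
# framebuffer is compared against a reference value computed from the
# expected content. Any single-pixel deviation changes the CRC.
import zlib

def region_crc(framebuffer: bytes, width: int,
               x: int, y: int, w: int, h: int, bpp: int = 4) -> int:
    """CRC32 over a rectangular ROI of a packed framebuffer.

    Assumes row-major packing with `bpp` bytes per pixel (e.g. RGBA8888).
    """
    crc = 0
    stride = width * bpp  # bytes per framebuffer row
    for row in range(y, y + h):
        start = row * stride + x * bpp
        crc = zlib.crc32(framebuffer[start:start + w * bpp], crc)
    return crc

def pixel_perfect_ok(framebuffer: bytes, width: int,
                     roi: tuple, expected_crc: int) -> bool:
    """Pass only if the rendered ROI matches the expected content exactly."""
    x, y, w, h = roi
    return region_crc(framebuffer, width, x, y, w, h) == expected_crc
```

The fragility of this scheme is exactly the limitation the talk addresses: any intentional rendering variation (anti-aliasing, animation, theming) also changes the CRC and raises a false failure.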
In this talk, Lin explores the limitations of current content integrity checking systems and how they can be overcome by deploying neural network pattern classification in the display pipeline. He also discusses how these neural networks can be downscaled to run efficiently in a functionally safe microcontroller environment, and the requirements imposed on such solutions by the safety standards enforced in these domains.
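The classification-based alternative can be sketched as follows: a small network, trained offline and stored in the microcontroller's flash, classifies a downsampled crop of the displayed region, and validation passes only if the expected class (e.g. a warning icon) is predicted with high confidence. The network shape, weights, and confidence threshold below are toy placeholders for illustration only, not the architecture discussed in the talk.

```python
# Illustrative sketch of classifier-based display validation: a tiny
# one-hidden-layer network (weights trained offline) classifies a
# flattened, downsampled crop of the display output. Unlike a CRC check,
# this tolerates minor rendering variation while still detecting a
# missing or corrupted safety-critical symbol.
import math

def forward(x, w1, b1, w2, b2):
    """Forward pass: hidden layer with ReLU, then softmax over classes."""
    h = [max(0.0, sum(wi * xi for wi, xi in zip(row, x)) + b)
         for row, b in zip(w1, b1)]
    logits = [sum(wi * hi for wi, hi in zip(row, h)) + b
              for row, b in zip(w2, b2)]
    m = max(logits)                       # stabilize the softmax
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def icon_valid(crop, expected_class, params, threshold=0.9):
    """Pass only if the expected symbol is recognized with high confidence."""
    probs = forward(crop, *params)
    return (probs.index(max(probs)) == expected_class
            and probs[expected_class] >= threshold)
```

In a safety context, the confidence threshold trades off false alarms against missed failures, and the whole check would itself need to satisfy the functional safety requirements (e.g. diagnostic coverage targets) the talk refers to.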