
3203 Southeast Woodstock Boulevard, Portland, Oregon 97202-8199

Formal Methods for Robust Deep Learning
Deep learning and neural networks have become an indispensable part of modern computation, appearing in fields as diverse as computer vision, language processing, and cyber-physical system control. Yet it has been known for many years that neural networks are susceptible to so-called "adversarial examples": inputs that have been slightly (often imperceptibly) perturbed in a way that drastically alters the network's behavior. Such inputs are an impediment to trust in machine learning because they expose a fundamental difference between how neural networks and humans "think" about concepts. We therefore consider the problem of proving that neural networks are robust, meaning that they are free of adversarial examples within some region of input space. In this talk, we'll start with a brief overview of deep learning and the phenomenon of adversarial examples. We will then introduce abstract interpretation, a classical approach to program verification. Finally, we will discuss two different ways to apply abstract interpretation to verify different kinds of robustness properties for neural networks.
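To make the verification idea concrete, here is a minimal sketch of the simplest abstract domain used in this setting: intervals. It is not the speaker's tool; the network, its shapes, the weights, and the radius eps are all illustrative assumptions. The sketch propagates a box around an input through a small ReLU network and checks whether every point in the L-infinity ball of radius eps around the input keeps the same label.

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through the affine map x -> W @ x + b.

    Standard interval arithmetic: split W into positive and negative parts
    so each output bound pairs with the correct input bound.
    """
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def interval_relu(lo, hi):
    """ReLU is monotone, so it maps interval endpoints directly."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

def certify_linf_robustness(layers, x, eps, true_label):
    """Try to prove every input within eps of x (L-infinity) gets true_label.

    Certified when the lower bound of the true logit beats the upper bound
    of every other logit.
    """
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        lo, hi = interval_affine(lo, hi, W, b)
        if i < len(layers) - 1:  # ReLU on hidden layers only
            lo, hi = interval_relu(lo, hi)
    others = [hi[j] for j in range(len(hi)) if j != true_label]
    return lo[true_label] > max(others)

# Toy 2-layer network with random (made-up) weights, purely for illustration.
rng = np.random.default_rng(0)
layers = [(rng.standard_normal((8, 4)), rng.standard_normal(8)),
          (rng.standard_normal((3, 8)), rng.standard_normal(3))]
x = rng.standard_normal(4)
logits = layers[1][0] @ np.maximum(layers[0][0] @ x + layers[0][1], 0) + layers[1][1]
label = int(np.argmax(logits))
print(certify_linf_robustness(layers, x, eps=0.01, true_label=label))
```

The check is sound but incomplete: a True result proves no adversarial example exists in the ball, while False only means the interval bounds were too loose to decide, not that an adversarial example was found. Tighter abstract domains (zonotopes, polyhedra) trade more computation for fewer such inconclusive answers.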
