How to fool neural networks with adversarial attacks
Speaker | Grigory Malivenko |
Language | EN |
Stage | Digital |
Activity | Talk |
Description
Deep learning is a great approach to solving problems that previously seemed unsolvable, and neural networks are now so widespread that they are used almost everywhere, from mobile applications to safety-critical environments. However, deep neural networks have fairly recently been found to be vulnerable to well-designed input samples, called adversarial samples. Adversarial samples may be imperceptible to humans but can easily fool deep neural networks at the testing/deployment stage.
In this presentation I'll show how to perform an adversarial attack on a pretrained model with a known architecture, and how to make your network less vulnerable to this kind of attack.
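The abstract does not name a specific attack, but a minimal sketch of a white-box attack on a pretrained model with a known architecture could look like the Fast Gradient Sign Method (FGSM) below. It assumes a PyTorch image classifier; the model, image tensor, and label here are placeholders, not material from the talk.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Hypothetical setup: a pretrained classifier whose architecture (and
# therefore gradients) are known to the attacker.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_attack(model, image, label, epsilon=0.03):
    """Fast Gradient Sign Method: nudge the input in the direction that
    increases the loss, keeping the perturbation small (epsilon)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    model.zero_grad()
    loss.backward()
    # One step along the sign of the input gradient.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Placeholder input; in practice `image` is a preprocessed (1, 3, 224, 224)
# tensor and `label` its true class index.
image = torch.rand(1, 3, 224, 224)
label = torch.tensor([207])
adv_image = fgsm_attack(model, image, label)
print(model(adv_image).argmax(dim=1))  # often differs from the original prediction
```

A common way to make a network less vulnerable, which the defense part of the talk may cover, is adversarial training: generating such perturbed samples during training and including them, with their correct labels, in the training batches.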
Requirements for attendees
Basic computer science knowledge