This section provides links to a subset of the resources referenced in the book, along with some additional related resources, particularly those with accompanying code or videos.
- Online Resources for Getting Started With Deep Learning
- Collaborative Projects
- Adversarial Examples Code and Experimentation
- Fooling Humans
Here are some nice resources. There are many more available online.
Four superb introductory videos explaining the mathematics underpinning neural networks are here: 3Blue1Brown Deep Learning
To get started with Keras and TensorFlow, the online documentation provides excellent tutorials: Keras docs
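To give a flavour of what those tutorials cover, here is a minimal sketch (not taken from the Keras documentation) that defines, compiles, and trains a small classifier on the MNIST digits using the tf.keras API:

```python
import tensorflow as tf
from tensorflow import keras

# Load the MNIST digits and scale pixel values to the range [0, 1].
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A small fully connected classifier.
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))
```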
For a complete introduction to all things ML, this is a fantastic course: https://www.coursera.org/learn/machine-learning.
An open source library for the development of attacks and associated defenses with the aim of benchmarking Machine Learning systems’ vulnerability to adversarial examples. The code repository for CleverHans is at https://github.com/openai/cleverhans
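As a rough sketch of how such a library is typically used (this is not code from the CleverHans repository, and module paths and signatures vary between releases), an attack such as the Fast Gradient Method can be applied to an existing PyTorch model in a few lines:

```python
import numpy as np
import torch
import torch.nn as nn
from cleverhans.torch.attacks.fast_gradient_method import fast_gradient_method

# A stand-in classifier; in practice this would be a trained model.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(8, 1, 28, 28)  # a batch of inputs scaled to [0, 1]

# Untargeted FGSM with an L-infinity perturbation budget (eps) of 0.03.
x_adv = fast_gradient_method(model, x, eps=0.03, norm=np.inf,
                             clip_min=0.0, clip_max=1.0)

print(model(x).argmax(dim=1))      # predictions on the clean inputs
print(model(x_adv).argmax(dim=1))  # predictions on the perturbed inputs
```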
A toolbox for creating adversarial examples to enable testing of defenses. The documentation for Foolbox is at https://foolbox.readthedocs.io/en/latest/.
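Foolbox follows a similar pattern: wrap an existing model, pick an attack, and run it over a batch of inputs. The following is a hedged sketch based on the Foolbox 3 style of API (class names and signatures differ between versions), not code from the Foolbox documentation:

```python
import torch
import torch.nn as nn
import foolbox as fb

# A stand-in classifier; in practice this would be a trained model.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10)).eval()
images = torch.rand(8, 1, 28, 28)    # inputs scaled to [0, 1]
labels = torch.randint(0, 10, (8,))  # their (stand-in) true labels

# Wrap the model so Foolbox knows the valid input range.
fmodel = fb.PyTorchModel(model, bounds=(0, 1))

# Run an L-infinity fast gradient attack at a single perturbation budget.
attack = fb.attacks.LinfFastGradientAttack()
raw, clipped, is_adv = attack(fmodel, images, labels, epsilons=0.03)

print(is_adv)  # which of the inputs were successfully perturbed
```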
The Adversarial Robustness Toolbox (ART) includes adversarial attacks, defenses, and detection methods, and it also supports measurement of robustness metrics. The code repository for this library is here: https://github.com/IBM/adversarial-robustness-toolbox.
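Again as an illustrative sketch only (the estimator and attack class names here are assumptions based on recent ART releases, not code from the repository), generating adversarial examples with the toolbox looks roughly like this:

```python
import numpy as np
import torch.nn as nn
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# A stand-in classifier; in practice this would be a trained model.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

# Wrap the model in an ART estimator, then run FGSM over a batch of inputs.
classifier = PyTorchClassifier(model=model,
                               loss=nn.CrossEntropyLoss(),
                               input_shape=(1, 28, 28),
                               nb_classes=10,
                               clip_values=(0.0, 1.0))
attack = FastGradientMethod(estimator=classifier, eps=0.1)

x = np.random.rand(8, 1, 28, 28).astype(np.float32)
x_adv = attack.generate(x=x)
print(classifier.predict(x).argmax(axis=1))      # predictions on clean inputs
print(classifier.predict(x_adv).argmax(axis=1))  # predictions on perturbed inputs
```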
Robust ML aims to provide a central website for learning about defenses and their analyses and evaluations. It is located at https://www.robust-ml.org/.
Several competitions have encouraged participation in the generation of adversarial attacks and defenses including competitions from Google and Kaggle (https://www.kaggle.com).
Many research papers have associated GitHub repositories and videos/audio examples. Here are a few.
To see adversarial patches in action, take a look at this video: Adversarial Patch on YouTube. This accompanies the paper Adversarial Patch by Brown et al. Example code for creating adversarial patches is here.
Here's a well-presented Jupyter notebook to accompany the paper Synthesizing Robust Adversarial Examples by Anish Athalye, Logan Engstrom, Andrew Ilyas, and Kevin Kwok.
This research considers a different approach, raising the possibility of fooling neural networks by viewing objects from unusual angles: Strike (with) a Pose: Neural networks are easily fooled by strange poses of familiar objects by Michael Alcorn, Qi Li, Zhitao Gong, Chengfei Wang, Long Mai, Wei-Shinn Ku, and Anh Nguyen.
If you'd like to listen to some adversarial audio examples, take a look here. This accompanies the paper Audio Adversarial Examples: Targeted Attacks on Speech-to-Text by Carlini and Wagner.
Human perception can be tricked in many different ways. Here are some fun examples:
- Your brain hallucinates your conscious reality (A. Seth) is an interesting TED Talk examining how much of what we perceive comes from within.
- Everything you hear on a film is a lie (T. Frantzolas) explains how we combine multi-sensory input to understand the world.
- BBC Two, Try The McGurk Effect! - Horizon: Is Seeing Believing? shows how you can be fooled by conflicting audio and visual input - even when you know you are being tricked.
- Optical Illusions for Kids for some basic optical illusions.