Resources

This section provides links to a subset of the resources referenced from the book, plus some additional related resources, especially those with code or videos.

Table of Contents

  1. Online Resources for Getting Started With Deep Learning
  2. Collaborative Projects
  3. Adversarial Examples Code and Experimentation
  4. Fooling Humans

Online Resources for Getting Started With Deep Learning

Here are some nice resources. There are many more available online.

3Blue1Brown

Four superb introductory videos explaining the mathematics underpinning neural networks are here: 3Blue1Brown Deep Learning
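The core mathematics these videos cover — a forward pass, a loss, and a gradient-descent step via the chain rule — can be sketched in a few lines of NumPy. This is an illustrative toy example (the data, shapes, and learning rate are all hypothetical), not code from the videos:

```python
import numpy as np

# Toy two-layer network: forward pass, binary cross-entropy loss,
# and one hand-derived gradient-descent step on the weights.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))          # 4 samples, 3 input features
y = np.array([0.0, 1.0, 1.0, 0.0])   # toy binary targets

W1 = rng.normal(size=(3, 5)) * 0.1   # input -> hidden weights
b1 = np.zeros(5)
W2 = rng.normal(size=(5, 1)) * 0.1   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Forward pass
h = np.tanh(x @ W1 + b1)             # hidden activations
p = sigmoid(h @ W2 + b2).ravel()     # predicted probabilities
loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# Backward pass (chain rule), then one small gradient step
dp = (p - y)[:, None] / len(y)       # d(loss)/d(output logits)
dW2 = h.T @ dp
dh = dp @ W2.T * (1 - h ** 2)        # derivative of tanh
dW1 = x.T @ dh

lr = 0.1
W2 -= lr * dW2
W1 -= lr * dW1
```

After the update, recomputing the loss on the same inputs should give a slightly smaller value — the same picture the videos build up visually.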

Keras and TensorFlow

To get started with Keras and TensorFlow, the online documentation provides excellent tutorials: Keras docs.

Andrew Ng’s Coursera course

For a complete introduction to all things ML, this is a fantastic course: https://www.coursera.org/learn/machine-learning.


Collaborative Projects

Cleverhans

An open source library for developing attacks and associated defenses, with the aim of benchmarking machine learning systems' vulnerability to adversarial examples. The code repository for Cleverhans is at https://github.com/openai/cleverhans.
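Libraries like this automate attacks such as the fast gradient sign method (FGSM). The core idea — perturb the input in the direction of the sign of the loss gradient — can be sketched against a toy linear classifier in plain NumPy. The model, weights, and epsilon below are illustrative assumptions, not the library's API:

```python
import numpy as np

# FGSM sketch against a toy logistic-regression "model". Real libraries
# (Cleverhans, Foolbox, ART) use framework autodiff instead of this
# hand-written gradient.
rng = np.random.default_rng(1)
w = rng.normal(size=10)              # fixed model weights (hypothetical)
x = rng.normal(size=10)              # a clean input
y = 1.0                              # its true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient of the binary cross-entropy loss w.r.t. the *input* x:
# dL/dx = (p - y) * w, where p = sigmoid(w . x)
p = sigmoid(w @ x)
grad_x = (p - y) * w

eps = 0.1
x_adv = x + eps * np.sign(grad_x)    # FGSM step: move to increase the loss

p_adv = sigmoid(w @ x_adv)           # confidence drops on the perturbed input
```

The perturbation is bounded by eps in each coordinate, yet it reliably pushes the model's confidence in the true class downward — the same principle these toolkits apply to deep networks.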

Foolbox

A toolbox for creating adversarial examples to enable testing of defenses. The documentation for Foolbox is at https://foolbox.readthedocs.io/en/latest/.

IBM’s Adversarial Robustness Toolbox

This library includes adversarial attacks, defenses, and detection. It also supports measuring robustness metrics. The code repository for this library is here: https://github.com/IBM/adversarial-robustness-toolbox.

RobustML

Robust ML aims to provide a central website for learning about defenses and their analyses and evaluations. It is located at https://www.robust-ml.org/.

Competitions

Several competitions have encouraged participation in the generation of adversarial attacks and defenses, including competitions from Google and Kaggle (https://www.kaggle.com).


Adversarial Examples Code and Experimentation

Many research papers have associated GitHub repositories and videos/audio examples. Here are a few.

Adversarial Patch

To see adversarial patches in action, take a look at this video: Adversarial Patch on YouTube. This accompanies the paper Adversarial Patch by Brown et al. Example code for creating adversarial patches is here.

Creating Robust and Physical World Examples

Here's a well-presented Jupyter notebook to accompany the paper Synthesizing Robust Adversarial Examples by Anish Athalye, Logan Engstrom, Andrew Ilyas, and Kevin Kwok.

A different line of research raises the possibility of fooling neural networks simply by viewing objects from unusual angles: Strike (with) a Pose: Neural Networks Are Easily Fooled by Strange Poses of Familiar Objects by Michael Alcorn, Qi Li, Zhitao Gong, Chengfei Wang, Long Mai, Wei-Shinn Ku, and Anh Nguyen.

Adversarial Audio

If you'd like to listen to some adversarial audio examples, take a look here. These accompany the paper Audio Adversarial Examples: Targeted Attacks on Speech-to-Text by Carlini and Wagner.
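One reason these audio attacks are striking is how quiet the perturbations are. Carlini and Wagner report distortion in decibels relative to the original waveform; a sketch of that kind of measurement, on fabricated data (the sample rate, amplitudes, and perturbation scale below are assumptions for illustration), looks like this:

```python
import numpy as np

# Relative loudness of a perturbation delta vs. a waveform x, in decibels:
# dB(v) is computed from the peak amplitude; a more negative difference
# means the perturbation is quieter relative to the original audio.
def db(v):
    return 20.0 * np.log10(np.max(np.abs(v)))

rng = np.random.default_rng(2)
x = rng.uniform(-0.5, 0.5, size=16000)            # one second of fake 16 kHz audio
delta = 0.005 * rng.uniform(-1, 1, size=x.shape)  # a small perturbation

relative_db = db(delta) - db(x)   # roughly -40 dB for these toy values
```

A perturbation tens of decibels below the signal is near the edge of audibility, which is why the published examples sound almost identical to the originals.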

Fooling Humans

Human perception can be tricked in many different ways. Here are some fun examples: