- adversarial-robustness-toolbox (see the usage sketch after this list)
- counterfit
- cleverhans
- foolbox
- ml_privacy_meter
- NIST de-identification tools
- robustness
- tensorflow/privacy
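
As a rough illustration of how the first tool above, adversarial-robustness-toolbox (ART), is typically used, the sketch below wraps a scikit-learn classifier and generates evasion examples with the Fast Gradient Method. The SVC model, the synthetic data, and the eps value are illustrative assumptions, not recommendations drawn from any of the listed resources.

```python
# Minimal evasion-attack sketch with adversarial-robustness-toolbox (ART).
# The model, synthetic data, and eps value are illustrative assumptions only.
import numpy as np
from sklearn.svm import SVC
from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import SklearnClassifier

# Toy data standing in for a real tabular training set, scaled to [0, 1].
rng = np.random.default_rng(0)
x_train = rng.random((500, 10))
y_train = (x_train[:, :5].sum(axis=1) > 2.5).astype(int)

# Fit an ordinary scikit-learn model, then wrap it so ART can attack it.
model = SVC(kernel="rbf").fit(x_train, y_train)
classifier = SklearnClassifier(model=model, clip_values=(0.0, 1.0))

# Fast Gradient Method: small input perturbations intended to flip predictions.
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x_train[:50])

clean_acc = (classifier.predict(x_train[:50]).argmax(axis=1) == y_train[:50]).mean()
adv_acc = (classifier.predict(x_adv).argmax(axis=1) == y_train[:50]).mean()
print(f"accuracy on clean inputs:       {clean_acc:.2f}")
print(f"accuracy on adversarial inputs: {adv_acc:.2f}")
```

The other attack tools listed above follow a broadly similar workflow: wrap the model, pick an attack, generate perturbed inputs, and measure how much performance degrades.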

Introduction and Background:
- Attacking a Machine Learning Model
- Machine Learning for High-Risk Applications: Use Cases (Chapter 11)

Machine Learning Attacks and Countermeasures:
- Membership Inference Attacks Against Machine Learning Models (a simplified illustration follows this list)
- Stealing Machine Learning Models via Prediction APIs
- Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures
- Hacking Smart Machines with Smarter Ones: How to Extract Meaningful Data from Machine Learning Classifiers
- Sponge Examples: Energy-Latency Attacks on Neural Networks
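
The membership inference paper at the top of this list trains shadow models to decide whether a record was in the target model's training data. The sketch below is a much simpler stand-in for that idea, thresholding the target model's confidence; the random forest, the synthetic data, and the 0.9 threshold are assumptions for illustration, not the paper's method.

```python
# Simplified membership-inference illustration via confidence thresholding.
# This is NOT the shadow-model attack from the paper; the model, data, and
# threshold below are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((2000, 20))
y = (X[:, :5].sum(axis=1) > 2.5).astype(int)
y = np.where(rng.random(2000) < 0.2, 1 - y, y)  # label noise encourages memorization

# "Members" were used to train the target model; "non-members" were not.
x_mem, x_non, y_mem, y_non = train_test_split(X, y, test_size=0.5, random_state=0)
target = RandomForestClassifier(n_estimators=200, random_state=0).fit(x_mem, y_mem)

def top_confidence(model, x):
    """Highest predicted class probability per row, as seen by the attacker."""
    return model.predict_proba(x).max(axis=1)

# An overfit model tends to be more confident on points it memorized in training.
threshold = 0.9  # assumption: an attacker would tune this, e.g., with shadow models
print("flagged as members among true members:    ",
      (top_confidence(target, x_mem) >= threshold).mean())
print("flagged as members among true non-members:",
      (top_confidence(target, x_non) >= threshold).mean())
```

The ml_privacy_meter and tensorflow/privacy tools listed above provide more faithful membership inference auditing and, in the latter case, differentially private training to mitigate it.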

Examples of Real-World Attacks:
- Finance worker pays out $25 million after video call with deepfake ‘chief financial officer’
- ISIS 'still evading detection on Facebook', report says
- Researchers bypass airport and payment facial recognition systems using masks
- Slight Street Sign Modifications Can Completely Fool Machine Learning Algorithms
- These students figured out their tests were graded by AI — and the easy way to cheat