diff --git a/README.md b/README.md
index 70d1411..3a74bf9 100644
--- a/README.md
+++ b/README.md
@@ -62,50 +62,11 @@ Corrections or suggestions? Please file a [GitHub issue](https://github.com/jpha
 ### Lecture 3 Class Materials
 
 * [Lecture Notes](tex/lecture_3.pdf)
+* [Software Example]()
 * [Assignment 3](assignments/tex/assignment_3.pdf)
 * [Model evaluation notebook](https://nbviewer.jupyter.org/github/jphall663/GWU_rml/blob/master/assignments/eval.ipynb?flush_cache=true)
 * [Full evaluations results](assignments/model_eval_2023_06_28_21_00_17.csv)
-* Reading: [_Machine Learning for High-Risk Applications_](https://pages.dataiku.com/oreilly-responsible-ai), Chapter 4 and Chapter 10
-
-### Lecture 3 Additional Software Tools
-
-* **Python**:
-  * [aequitas](https://github.com/dssg/aequitas)
-  * [AIF360](https://github.com/IBM/AIF360)
-  * [Algorithmic Fairness](https://oreil.ly/JNzqk)
-  * [fairlearn](https://oreil.ly/jYjCi)
-  * [fairml](https://oreil.ly/DCkZ5)
-  * [solas-ai-disparity](https://oreil.ly/X9fd6)
-  * [tensorflow/fairness-indicators](https://oreil.ly/dHBSL)
-  * [Themis](https://github.com/LASER-UMASS/Themis)
-
-* **R**:
-  * [AIF360](https://oreil.ly/J53bZ)
-  * [fairmodels](https://oreil.ly/nSv8B)
-  * [fairness](https://oreil.ly/Dequ9)
-
-### Lecture 3 Additional Software Examples
-* [Increase Fairness in Your Machine Learning Project with Disparate Impact Analysis using Python and H2O](https://nbviewer.org/github/jphall663/interpretable_machine_learning_with_python/blob/master/dia.ipynb)
-* [Testing a Constrained Model for Discrimination and Remediating Discovered Discrimination](https://nbviewer.jupyter.org/github/jphall663/GWU_rml/blob/master/lecture_3.ipynb)
-* _Machine Learning for High-risk Applications_: [Use Cases](https://oreil.ly/machine-learning-high-risk-apps-code) (Chapter 10)
-
-### Lecture 3 Additional Reading
-
-* **Introduction and Background**:
-  * [*50 Years of Test (Un)fairness: Lessons for Machine Learning*](https://oreil.ly/fTlda)
-  * **Fairness and Machine Learning** - [Introduction](https://fairmlbook.org/introduction.html)
-  * [NIST SP1270: _Towards a Standard for Identifying and Managing Bias in Artificial Intelligence_](https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.1270.pdf)
-  * [*Fairness Through Awareness*](https://arxiv.org/pdf/1104.3913.pdf)
-
-* **Discrimination Testing and Remediation Techniques**:
-  * [*An Empirical Comparison of Bias Reduction Methods on Real-World Problems in High-Stakes Policy Settings*](https://oreil.ly/vmxPz)
-  * [*Certifying and Removing Disparate Impact*](https://arxiv.org/pdf/1412.3756.pdf)
-  * [*Data Preprocessing Techniques for Classification Without
-Discrimination*](https://link.springer.com/content/pdf/10.1007/s10115-011-0463-8.pdf)
-  * [*Decision Theory for Discrimination-aware Classification*](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.722.3030&rep=rep1&type=pdf)
-  * [*Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification Without Disparate Mistreatment*](https://arxiv.org/pdf/1610.08452.pdf)
-  * [*Learning Fair Representations*](http://proceedings.mlr.press/v28/zemel13.pdf)
-  * [*Mitigating Unwanted Biases with Adversarial Learning*](https://dl.acm.org/doi/pdf/10.1145/3278721.3278779)
+* Reading: [_Machine Learning for High-Risk Applications_](https://www.oreilly.com/library/view/machine-learning-for/9781098102425/), Chapter 4 and Chapter 10
 
 ***