“Whether AI will help us reach our aspirations or reinforce the unjust inequalities is ultimately up to us.” Joy Buolamwini
Slide 14
#3 Tweet
Slide 15
#4 Google’s Algorithm
Slide 16
• 46% false positives for tweets by African American authors
• Tweets by African American authors are 1.5 times more likely to be labelled “offensive”
https://homes.cs.washington.edu/~msap/pdfs/sap2019risk.pdf
Artificial Intelligence needs to learn from the real world. Creating a smart computer is not enough; you also need to teach it the right things.
https://about.google/stories/gender-balance-diversity-important-to-machine-learning/?hl=pt-BR
Slide 20
Gender Gap in Artificial Intelligence
“Only 22% of AI professionals globally are female, compared to 78% who are male.” (The Global Gender Gap Report 2018, p. 28)
Slide 21
Bias
Human Bias
Technology
Slide 22
Slide 23
Even though these decisions affect humans, ML models optimized for task performance often become too complex to be intelligible to humans: black-box models.
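As a minimal sketch of what “black box” means here (the dataset, model, and hyperparameters below are illustrative assumptions, not taken from the slides): the model can be queried for predictions, but its learned parameters are thousands of numbers with no direct human meaning.

# A neural network treated as a black box: easy to query, hard to interpret.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0)
model.fit(X_train, y_train)

# INPUT -> BLACK BOX -> OUTPUT: obtaining a decision is easy...
print("prediction for the first test case:", model.predict(X_test[:1]))

# ...but "opening" the box only reveals thousands of weights with no direct meaning.
n_params = sum(w.size for w in model.coefs_) + sum(b.size for b in model.intercepts_)
print("learned parameters:", n_params)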
Slide 24
Diagram: INPUT → BLACK BOX → OUTPUT
Slide 25
Diagram: INPUT → BLACK BOX → OUTPUT
Slide 26
Slide 27
JUSTICE
MATH
Slide 28
“This new law is a complete shame for our democracy.” Louis Larret-Chahine, co-founder of Predictice
https://www.artificiallawyer.com/2019/06/04/france-bans-judge-analytics-5-years-in-prison-for-rule-breakers/
How to open this black box? EXPLAINABLE ARTIFICIAL INTELLIGENCE (XAI)
TRANSPARENCY
TRUST
Slide 31
XAI intends to create a new suite of ML techniques that produce more interpretable ML models.
Slide 32
Accuracy vs. Interpretability trade-off
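A hedged sketch of the trade-off (the dataset, models, and hyperparameters are assumptions chosen for illustration): a depth-3 decision tree can be printed as if-then rules, while a 300-tree ensemble is usually somewhat more accurate but no longer human-readable.

# Interpretable model vs. black-box ensemble on the same data (illustrative sketch).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, random_state=0)

interpretable = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
black_box = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

print("depth-3 tree accuracy:   ", interpretable.score(X_test, y_test))
print("300-tree forest accuracy:", black_box.score(X_test, y_test))

# The whole tree fits on one slide as readable rules; the forest does not.
print(export_text(interpretable, feature_names=list(data.feature_names)))

On easy datasets the accuracy gap can be small; on harder tasks it tends to widen, which is what pushes practitioners towards black-box models in the first place.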
Slide 33
Explainability

Pre-modelling explainability
Goal: Understand/describe the data used to develop models
Methodologies:
• Exploratory data analysis
• Dataset description standardization
• Dataset summarization
• Explainable feature engineering

Explainable modelling
Goal: Develop inherently more explainable models
Methodologies:
• Adopt an explainable model family
• Hybrid models
• Joint prediction and explanation
• Architectural adjustments
• Regularization

Post-modelling explainability
Goal: Extract explanations to describe pre-developed models
Methodologies:
• Perturbation mechanism
• Backward propagation
• Proxy models (see the sketch below)
• Activation optimization
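As a concrete sketch of one post-modelling methodology from the list above, the proxy (surrogate) model: an interpretable model is fitted to reproduce the black box's predictions rather than the ground truth, so its readable rules approximately describe the black box's behaviour. The dataset and models below are illustrative assumptions.

# Post-hoc explanation via a proxy model (illustrative sketch).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)

black_box = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
black_box_labels = black_box.predict(X)  # what the black box actually does

# The proxy is trained on the black box's outputs, not on the true labels.
proxy = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, black_box_labels)

print("fidelity (proxy agrees with the black box):",
      accuracy_score(black_box_labels, proxy.predict(X)))
print(export_text(proxy))

The agreement score (fidelity) indicates how faithfully the proxy's rules describe the black box; when fidelity is low, the explanation should not be trusted.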
Slide 34
Slide 35
Post-modelling explainability
The proposed taxonomy of post-hoc explainability methods, covering four aspects: target, drivers, explanation family, and estimator.
Slide 36
Post-modelling explainability
First, a perturbation model is used to obtain perturbed versions of the input sequence. Next, associations between the input and the predicted sequence are inferred using a causal inference model. Finally, the obtained associations are partitioned and the most relevant sets are selected.
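The pipeline above is described for sequence models and relies on a causal inference step; as a much simpler stand-in for the same perturbation idea (not the cited method), the sketch below perturbs one tabular feature at a time and ranks features by how much the black box's predictions shift. The dataset, model, and names are illustrative assumptions.

# Perturbation-based relevance: shuffle one feature, measure the prediction shift.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
X, y = data.data, data.target
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

baseline = black_box.predict_proba(X)[:, 1]
rng = np.random.default_rng(0)

relevance = []
for j in range(X.shape[1]):
    X_perturbed = X.copy()
    X_perturbed[:, j] = rng.permutation(X_perturbed[:, j])  # destroy one feature's information
    shifted = black_box.predict_proba(X_perturbed)[:, 1]
    relevance.append(np.mean(np.abs(shifted - baseline)))   # how much predictions move

# Features whose perturbation moves the predictions most are the most relevant.
for j in np.argsort(relevance)[::-1][:5]:
    print(f"{data.feature_names[j]}: mean prediction shift = {relevance[j]:.3f}")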
Useful links
− AI NOW
− Racial and Gender Bias in Amazon Rekognition
− Diversity in Faces (IBM)
− Google video – Machine Learning and Human Bias
− Visão Computacional e Vieses Racializados (Computer Vision and Racialized Biases)
− Machine Bias on COMPAS
− Machine Learning Explainability (Kaggle)
− Predictive modeling: striking a balance between accuracy and interpretability
Slide 45
Useful links
− Racismo Algorítmico em Plataformas Digitais: microagressões e discriminação em código (Algorithmic Racism on Digital Platforms: microaggressions and discrimination in code)
− Metrics for Explainable AI: Challenges and Prospects
− The Mythos of Model Interpretability
− Towards Robust Interpretability with Self-Explaining Neural Networks
− The How of Explainable AI: Post-modelling Explainability