Due by class time in Week 10.
Choose one of the following reports about bias in AI and answer these questions, drawing on what we learned in class about how engineers train neural networks for machine learning.
- What was the input data that trained the model?
- How did the process work (in broad strokes; you don’t need to be too detailed)?
- What “latent variables” produced bias in the outcome? That is, which features of the training data led the model to produce biased results?
- Is the process salvageable?
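To make the “latent variables” question concrete, here is a minimal, hypothetical sketch. Everything in it is invented for illustration: a `zip_code` feature acts as a proxy for a protected group, so a model trained only on `zip_code` and biased historical outcomes ends up treating the two groups differently even though it never sees the group label.

```python
import random

random.seed(0)

# Invented data: 'group' is a protected attribute the model never sees.
# 'zip_code' is a latent proxy that correlates ~90% with group membership.
# Historical outcomes (e.g., loan approvals) are biased against the group.
data = []
for _ in range(1000):
    group = random.random() < 0.5
    zip_code = 1 if (group ^ (random.random() < 0.1)) else 0
    outcome = 1 if (random.random() < (0.3 if group else 0.7)) else 0
    data.append((group, zip_code, outcome))

# A toy "model" trained only on zip_code: predict the average historical
# outcome for each zip. No protected attribute is used anywhere.
def rate(rows):
    return sum(outcome for _, _, outcome in rows) / len(rows)

zip0 = [row for row in data if row[1] == 0]
zip1 = [row for row in data if row[1] == 1]
print("predicted approval rate, zip 0:", round(rate(zip0), 2))
print("predicted approval rate, zip 1:", round(rate(zip1), 2))
# The two zips get very different predictions: zip_code quietly
# carries the group information, so the bias survives.
```

This is the pattern to look for in the reports: a variable the engineers did include that happens to encode a variable they excluded.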
Submit your response via the assignment in BruinLearn.
Reports of bias in AI
- “Crime Prediction Software Promised to Be Free of Biases. New Data Shows It Perpetuates Them”
- “Dozens of Mortgage Lenders Showed Significant Disparities. Here Are the Worst”
- “If AI is going to be the world’s doctor, it needs better textbooks”
- “Machine Bias: There’s software used across the country to predict future criminals. And it’s biased against blacks”
- “A Health Care Algorithm Offered Less Care to Black Patients”
