Amsterdam's Fair Fraud Detection Model: A Case Study in Algorithmic Bias
Amsterdam attempted to build a 'fair' AI model for fraud detection in its welfare system, aiming to reduce the number of investigations, improve efficiency, and avoid discrimination against vulnerable groups. The initial model showed bias against non-Dutch and non-Western applicants. Reweighting the training data mitigated some of that bias, but real-world deployment revealed new biases in the opposite direction, along with significant performance degradation. The project was ultimately shelved, highlighting the inherent trade-offs between different fairness definitions in AI: attempts to reduce bias against one group can inadvertently increase it for others, demonstrating how difficult it is to achieve fairness in algorithmic decision-making.
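The article does not publish Amsterdam's pipeline, but the reweighting step it describes is commonly implemented along the lines of Kamiran–Calders reweighing: each training example gets a weight that makes the protected group attribute and the label look statistically independent. Below is a minimal sketch of that idea, assuming hypothetical column names (`nationality_group`, `flagged_fraud`) purely for illustration, not Amsterdam's actual data schema or method.

```python
import pandas as pd

def reweighing_weights(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """Assign each row the weight w(g, y) = P(g) * P(y) / P(g, y), so that the
    reweighted data behaves as if group membership and the label were independent.
    Under-represented (group, label) combinations get weights above 1."""
    n = len(df)
    p_group = df[group_col].value_counts(normalize=True)      # P(group)
    p_label = df[label_col].value_counts(normalize=True)      # P(label)
    p_joint = df.groupby([group_col, label_col]).size() / n   # P(group, label)

    def weight(row):
        g, y = row[group_col], row[label_col]
        return p_group[g] * p_label[y] / p_joint[(g, y)]

    return df.apply(weight, axis=1)

# Toy example with hypothetical columns; not Amsterdam's real data.
train = pd.DataFrame({
    "nationality_group": ["dutch", "dutch", "non_western", "non_western", "dutch", "non_western"],
    "flagged_fraud":     [0,       1,       1,             1,             0,       0],
})
weights = reweighing_weights(train, "nationality_group", "flagged_fraud")
# The weights can then be passed to a classifier, e.g. model.fit(X, y, sample_weight=weights).
```

As the case study shows, equalizing the training distribution in this way targets only one fairness criterion; once deployed, the model can still violate others (such as equal error rates across groups), which is part of why the reweighted model produced new biases in the opposite direction.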