Roundtables: Why It’s So Hard to Make Welfare AI Fair
Summary
Amsterdam’s attempt to use algorithms to make welfare assessments fairer still produced biased outcomes, highlighting how hard it is to eliminate discrimination from AI systems. Experts discuss why the effort fell short and whether true fairness in algorithmic decision-making is achievable at all. The case underscores ongoing concerns about bias and accountability when AI is applied to social services.