
Disparate mistreatment: different error rates for two groups (e.g. men and women) when you minimise the error rate over all the data.
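A minimal sketch of what that gap looks like in practice, using hypothetical arrays for labels, predictions and group membership (not data from the talk): the overall error rate can look fine while the per-group error rates differ.

```python
import numpy as np

# Hypothetical labels, predictions and group membership (0 = group A, 1 = group B)
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

# Overall error rate, which is what a standard classifier minimises
print(f"overall error rate = {np.mean(y_pred != y_true):.2f}")

# Per-group error rates: disparate mistreatment is when these differ
for g in (0, 1):
    mask = group == g
    err = np.mean(y_pred[mask] != y_true[mask])
    print(f"group {g}: error rate = {err:.2f}")
```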

The semantics of the errors also matter, even if the error rates are the same for both groups.

They found that an algorithm for predicting the likelihood of criminal re-offending disproportionately favoured white people in terms of false negatives, and was biased against black people in terms of false positives.
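A quick sketch of how those two kinds of error are separated out per group, again with hypothetical data: a false positive is someone flagged as high risk who did not re-offend, a false negative is someone marked low risk who did.

```python
import numpy as np

# Hypothetical labels (1 = re-offended), predictions and group membership
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

def error_rates(y_true, y_pred):
    # False positive rate: predicted to re-offend among those who did not
    fpr = np.mean(y_pred[y_true == 0] == 1)
    # False negative rate: predicted not to re-offend among those who did
    fnr = np.mean(y_pred[y_true == 1] == 0)
    return fpr, fnr

for g in (0, 1):
    mask = group == g
    fpr, fnr = error_rates(y_true[mask], y_pred[mask])
    print(f"group {g}: FPR = {fpr:.2f}, FNR = {fnr:.2f}")
```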

Algorithms learn correlations between features, but what are those features really, and what do they mean in the real world? This is a really nice look beyond the numbers at who is affected by algorithmic decisions... and how to correct for it!

And the speaker was excellent, explaining things really well.

(Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez and Krishna P. Gummadi)

🏷 WWW2017 algorithms bias