
There's a fundamental tension between generating equitable outcomes and using race-agnostic decision-making processes. This is true for racial policy in general, not just policing - for instance, university admissions has to choose between admitting an unfairly large number of Asians on the one hand, and penalizing Asian applicants merely for being Asian on the other.

If generating a racially fair predictive policing algorithm were merely a question of optimizing for one of these desiderata, it'd be possible in principle: either ensure that the appropriate racial ratios pop out for the neighborhoods to patrol, or ensure that racial information and its proxies aren't used in the algorithm. If an algorithm is unacceptable unless it does both, well, you're probably going to be disappointed.
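To make the tension concrete, here's a minimal sketch on entirely made-up synthetic data (the population, the "neighborhood" proxy, and the 0.8/0.2 correlation are all assumptions for illustration, not real statistics). It shows that a rule which never looks at race, only at neighborhood, still produces very different patrol rates by race whenever the proxy correlates with race:

```python
import random

random.seed(0)

# Hypothetical population: each person has a race (A or B) and a neighborhood.
# Assumed proxy correlation: group B lives in neighborhood 1 with probability
# 0.8, group A with probability 0.2.
people = []
for _ in range(10_000):
    race = random.choice("AB")
    hood = 1 if random.random() < (0.8 if race == "B" else 0.2) else 0
    people.append((race, hood))

def race_blind_patrol(person):
    """Race-blind rule: the decision depends only on the neighborhood."""
    _, hood = person
    return hood == 1  # patrol the historically high-incident neighborhood

def patrol_rate(group):
    members = [p for p in people if p[0] == group]
    return sum(race_blind_patrol(p) for p in members) / len(members)

rate_a, rate_b = patrol_rate("A"), patrol_rate("B")
print(f"race-blind patrol rate, group A: {rate_a:.2f}")  # ~0.20
print(f"race-blind patrol rate, group B: {rate_b:.2f}")  # ~0.80
```

The race-blind rule yields unequal rates because the proxy already carries the racial information; equalizing the rates would require conditioning on race (e.g. down-weighting neighborhood 1 for one group), which is exactly what the other desideratum forbids.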
