
This appears to be the primary concern raised in the article. But surely data scientists could put in guards against this, such as weighting the crime count against the time spent policing the area? In fact, presumably any functional predictive policing system must already do that, or else you'd just tend towards policing one area incredibly intensively.
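To make that concrete, here's a minimal sketch in Python of the kind of guard I mean, with made-up numbers: normalise raw crime counts by patrol exposure, so a heavily policed area isn't flagged just because more eyes were on it.

    # Hypothetical numbers: adjust raw crime counts for patrol exposure.
    areas = {
        # area: (reported_crimes, patrol_hours)
        "A": (120, 400),  # heavily policed
        "B": (60, 100),   # lightly policed
    }

    for name, (crimes, hours) in areas.items():
        print(f"{name}: raw={crimes}, per patrol-hour={crimes / hours:.2f}")

    # Raw counts would send yet more police to A; the exposure-adjusted
    # rate suggests B yields more crime per unit of policing effort.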

If they are concerned about police arrest stats or racism biasing the data set, they could restrict the input to something hard to interfere with, like where murders occur or where crimes are reported by the public, rather than where arrests occur.

My opinion is that predictive policing does need regulation, but banning it entirely, to me, seems like an overreaction that will over time result in much less effective use of police resources. I suppose time will tell.




This is dumb. Predictive policing algorithms predict where the police will be, not where crime takes place, since crime data is simply a measurement of policing activity.

If there is bias in policing, it would therefore be amplified.
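A toy simulation of that loop (my own construction, not from the article): two areas with identical true crime, patrols reallocated towards whichever area recorded more crime, and recorded crime scaling with patrol presence. A small initial skew compounds instead of correcting.

    # Toy feedback-loop model: recorded crime follows patrols, and
    # patrols follow recorded crime. True crime is identical everywhere.
    true_crime = [1.0, 1.0]     # genuinely equal underlying rates
    patrols    = [6, 4]         # slight initial skew towards area 0

    for day in range(30):
        # recorded crime ~ true crime * patrol presence
        recorded = [t * p for t, p in zip(true_crime, patrols)]
        # greedy reallocation: tomorrow's extra patrol goes to whichever
        # area recorded more crime today
        hot = recorded.index(max(recorded))
        patrols[hot] += 1

    print(patrols)   # -> [36, 4]: the skew never corrects, it compounds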

Ban police from using this technology.


I'm very curious what "banning" predictive policing even means here. In the broadest sense, predictive policing is using data to inform where crime will happen in the future.

Is this to say you can't use historical trends to allocate police in a city? Should police be allocated only based on population size/density in a region?

Is using your knowledge of what neighborhoods tend to be "crime heavy" predictive? Are they crime heavy because of increased policing (you found more crime because you were looking) or because there really was more crime?
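One way to see why that question is hard: identical recorded counts are consistent with very different true crime rates once you account for how hard you were looking. A hypothetical sketch:

    # Recorded crime = true crime * detection rate, where detection
    # rises with patrol intensity. Hypothetical numbers throughout.
    def recorded(true_incidents, patrol_hours, detect_per_hour=0.002):
        detection_rate = min(1.0, patrol_hours * detect_per_hour)
        return true_incidents * detection_rate

    # Neighbourhood X: modest true crime, heavy patrols.
    # Neighbourhood Y: twice the true crime, light patrols.
    print(recorded(true_incidents=100, patrol_hours=400))  # 80.0
    print(recorded(true_incidents=200, patrol_hours=200))  # 80.0
    # Same recorded count; the data alone can't tell you which
    # neighbourhood is really "crime heavy".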

What is the line here?


Predictive policing does nothing more than send policing resources where crime is more likely, based on statistical models.

It does not racially discriminate and it is not racist. It does not harm anyone.

I feel that this is posturing and shooting the messenger. If crime is statistically higher in "black neighbourhoods" the issue will not be solved by pretending it isn't.

If the technology does not work then of course there is no point spending more money on it. So, does it work or not? Here this feels political, not pragmatic.


> predictive policing relies on algorithms to interpret police records, analyzing arrest or parole data to send officers to target chronic offenders, or identifying places where crime may occur.

So... instead of using algorithms, how will the police decide where to patrol heaviest? Putting a human in charge of that seems like a great way to increase racial bias. Or am I misunderstanding what "predictive policing" is?


https://en.wikipedia.org/wiki/Apophenia

Given the problems in police departments (which have fortunately started appearing in the news), giving the police a system that will essentially let them see what they want to see is a terrible idea. Police work is already full of "forensic tools" that don't actually work (like the idea that fingerprints uniquely identify someone, or the various techniques that are really just examples of the Birthday Paradox).
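On the Birthday Paradox point: even when a single coincidental match is rare, the chance of some match across a big database gets high fast. A rough back-of-envelope with illustrative numbers:

    import math

    # P(at least one coincidental match) when comparing one sample
    # against n database entries, each with false-match probability p.
    def p_any_match(n, p):
        return 1 - (1 - p) ** n

    # The true birthday-paradox case: all-pairs comparisons within a
    # database of n entries (~n^2/2 chances for a coincidence).
    def p_any_pairwise_match(n, p):
        pairs = n * (n - 1) / 2
        return 1 - math.exp(-pairs * p)

    print(p_any_match(1_000_000, 1e-6))        # ~0.63
    print(p_any_pairwise_match(10_000, 1e-6))  # ~1.0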

While I'm sure that it's possible to use modern techniques to estimate where crime will occur, it won't work in practice. There are simply too many ways to bias the results (intentionally or not). I suspect giving police this kind of tool is simply a way to give legitimacy and cover to their bad behavior.

> including information about friendships, social media activity

COINTELPRO is a helluva drug.

> advocates say predictive policing can help improve police-community relations by focusing on the people most likely to become involved in violent crime.

That sounds suspiciously like an excuse to improve white communities, by focusing on the blacks (who have historically been seen as "violent savages" by racists).

> because our predictive tool shows us you might commit a crime at some point in the future

The big question is how long until someone tries to use that "predictive tool" as probable cause.


This is a really bad knee-jerk in the long term. These systems should be improved, and relied upon less, rather than banned. You want to identify "at risk" individuals and focus on keeping them out of trouble, preemptively? Predictive policing could help you with that. Arguably, this should be its main purpose in the first place.

Predictive policing is an idea gone totally wrong (predictably).

Data can tell us what, when and where to keep an eye on; it can help us highlight the roots of criminal activity (social inequality, bad schools, unsustainable business practices, pollution[1,2,3], a non-restorative penal system, police corruption, etc.). But it can never rightfully justify treating any given person as a dangerous criminal or even a suspect (see the base-rate sketch after the links below).

[1] https://news.ycombinator.com/item?id=21565624

[2] https://news.ycombinator.com/item?id=21193609

[3] https://news.ycombinator.com/item?id=9140942
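To put a number on that last claim: with a low base rate, even a very accurate predictor mostly flags innocent people. Made-up numbers:

    # Base-rate illustration with made-up numbers: a 99%-accurate
    # predictor applied to a population where 1 in 1,000 people will
    # actually commit the crime in question.
    population = 1_000_000
    base_rate = 0.001
    sensitivity = 0.99          # flags 99% of actual offenders
    false_positive_rate = 0.01  # flags 1% of everyone else

    offenders = population * base_rate
    true_flags = offenders * sensitivity
    false_flags = (population - offenders) * false_positive_rate

    precision = true_flags / (true_flags + false_flags)
    print(f"{precision:.1%} of flagged people are actual offenders")
    # -> ~9.0%: roughly 10 out of every 11 people flagged are innocent.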


I’m biased, so I would give the most weight to what the computer science department says.

Predictive policing absolutely does work in some respects. The opposite of predictive policing is to have all beat cops equidistant from each other in their jurisdiction. This is clearly wrong. The next step is to have cops near population centers, then around areas where there have been more reports, requests, etc. After that you move to time of day. Where you place cops during the day vs night is clearly different. Shops close, bars open. Places where young adults vs seniors gather will clearly need different coverage. Pretty much every study on the subject shows people age out of crime. This is used as supporting evidence against lifetime sentences and geriatric incarceration.
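A sketch of that progression's endpoint, with hypothetical report counts: the same two areas get very different patrol shares depending on the shift.

    # Allocate patrol share by report counts, broken out by time of day.
    # All numbers are made up for illustration.
    reports = {
        # area: {"day": count, "night": count}
        "shops": {"day": 40, "night": 10},
        "bars":  {"day": 5,  "night": 45},
    }

    for shift in ("day", "night"):
        total = sum(r[shift] for r in reports.values())
        share = {a: f"{r[shift] / total:.0%}" for a, r in reports.items()}
        print(shift, share)
    # day   -> shops ~89%, bars ~11%
    # night -> shops ~18%, bars ~82%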

I have to assume only the most naive software, one whose algorithm leans on secondary indicators and the simplest predictive techniques, would get such attention. For example, "black neighborhoods" (that happen to be poor, with no stats beyond other departments saying they have issues with black neighborhoods) and self-fulfilling prophecies (we put most cops at location X and, what a surprise, that location had the most arrests, without looking at the arrests themselves).


Predictive Policing is already a thing. It was in use for a decade by the LAPD in the US. See:

https://www.latimes.com/local/lanow/la-me-lapd-precision-pol...

and

https://www.latimes.com/california/story/2020-04-21/lapd-end...


I'm confused by this argument; how does predictive policing relate to measuring department performance? The biggest concern I can see is that putting more police in certain neighborhoods will bias the input data for those models (presumably police reports), but this doesn't seem to be what this section of the article is saying. It also doesn't seem like a good reason to avoid computer models, but rather to increase transparency.

This is addressed at the very start of the article. It's about predictive policing.

Predictive policing is old, and has been done in consultation with universities before. There are papers on predicting crime vs criminals, predicting by time and address, predicting by age, etc., including discussion of whether this leads to uneven enforcement against minorities. It would be surprising for law enforcement to move towards less use of statistics and software.

What would be new would be formal preemptive detention based on generic factors. Then again, the Guardian covered a police black site in Chicago that held thousands of largely black detainees without access to lawyers.


Predicting crime is great. It means you're working to understand the state of your society.

Using it for policing is idiotic. You can't arrest a statistic.

Instead, it should be used for public policy: to understand which communities are at risk, direct efforts to understand why they are at risk, and engineer better systems to support those communities.


Back before computers were a thing in most people's homes, let alone pockets, police would plot on a map the crime reports of the past year and assign more patrols to areas with a high density of crime.

Is this practice now banned? Because this is predictive policing, albeit in a crude form. Presumably it just means computerized predictive policing, but it still seems odd that people seem to think that analyzing past patterns of crime to better allocate police is something novel.
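That pin map is trivially mechanised; a minimal sketch of the computerised equivalent, with made-up coordinates:

    from collections import Counter

    # The pin map, computerised: bucket incident coordinates into a
    # grid and rank cells by incident density. Made-up data.
    incidents = [(3.2, 7.1), (3.4, 7.3), (3.3, 7.2), (8.9, 1.0), (3.1, 7.4)]
    cell_size = 1.0

    cells = Counter(
        (int(x / cell_size), int(y / cell_size)) for x, y in incidents
    )
    for cell, count in cells.most_common():
        print(f"grid cell {cell}: {count} incidents")
    # grid cell (3, 7): 4 incidents  <- extra patrols go here
    # grid cell (8, 1): 1 incident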


Let me add a slightly more negative take. If you follow police technology, no matter what's implemented, crime magically goes down! Although I can see a system like this being a high-level tool for allocating resources, the notion that it can "predict" crime is likely overblown. Where's the deep analysis to show causation, not just correlation?

The technology used by the police seems to be nothing more than support for their program of targeted intervention. The system does not only identify potential perpetrators, but also their victims. Using data to identify people most in need of "concrete assistance in the form of social services, job training, childcare" just makes the process of doling out limited resources more efficient.

> "I think this is state of the art for predictive policing," Lewin says.

> How will this form of predictive policing be received?

Use of the word "predictive" here is utterly inaccurate because the police are not predicting anything, but rather identifying at-risk individuals. The inferential leap from "people with these n properties have historically been party to gun violence" to "prediction" is enormous. As soon as one mentions "prediction" and police in the same breath, it suggests Minority Report-style impingement on personal freedoms of some kind. The reason I stress this is because such a program might very well work to reduce gun violence, and labelling it as though it were the genesis of a Big Brother is not only disingenuous, but may also harm the program's legitimacy in the eyes of people who have the power to shut it down.


The thing about predictive policing (and a lot of similar AI/Machine Learning/Big Data/InsertSalesBSHere) is that if a computer is doing it, you can at least see what it's doing. When a bunch of human police officers arbitrarily decide to over-patrol black areas for no other reason than race, it's harder to catch and correct...

Beyond the fact that reported crimes are just a proxy for actual crime, this is clearly a complex, non-linear system with feedback loops. I don't see how simple statistics, Markov reasoning, ML or AI could ever realistically model this when the intent is to control the surveyed system.

Huge paradox. Why even try predictive policing without Minority Report oracles to magically do the prediction?


I thought predictive policing was generally taboo?
