This is a really bad knee-jerk in the long term. These systems should be improved, and relied upon less, rather than banned. You want to identify "at risk" individuals and preemptively keep them from getting into trouble? Predictive policing could help you with that. Arguably, that should be its main purpose in the first place.
This appears to be the primary concern raised in the article. But surely data scientists could put in guards against this, such as weighting the crime count versus the time spent policing the area? In fact, presumably any functional predictive policing system must already do that, else you'd just tend towards policing one area incredibly intensively.
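A minimal sketch of that weighting idea (function name and numbers are invented for illustration): instead of ranking areas by raw crime counts, which reward the areas you already patrol heavily, normalize each count by the patrol hours spent there.

```python
# Hypothetical guard against over-policing bias: rank areas by observed
# crimes PER PATROL HOUR rather than by raw counts.

def crime_rate_per_patrol_hour(crime_counts, patrol_hours):
    """Return observed crimes per patrol hour for each area."""
    return {area: crime_counts[area] / patrol_hours[area]
            for area in crime_counts}

counts = {"north": 120, "south": 30}
hours  = {"north": 600, "south": 100}   # north is patrolled 6x as much

rates = crime_rate_per_patrol_hour(counts, hours)
# Raw counts make "north" look 4x worse; per-hour rates say "south" is worse.
```

Of course this only corrects for how *much* an area is patrolled, not for what officers choose to record while there, so it's a partial fix at best.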
If they are concerned about arrest statistics or racist bias skewing the data set, they could restrict the input to something hard to interfere with, like where murders occur or where crimes are reported by the public, rather than where arrests occur.
My opinion is that predictive policing does need regulation, but banning it entirely, to me, seems like an overreaction that will over time result in much less effective use of police resources. I suppose time will tell.
Predictive policing is an idea gone totally wrong (predictably).
Data can tell us what, when, and where to keep an eye on; it can help us highlight the roots of criminal activity (social inequality, bad schools, unsustainable business practices, pollution[1,2,3], a non-restorative penal system, police corruption, etc.), but it can never rightfully justify treating any given person as a dangerous criminal, or even a suspect.
This sounds worrying. I don't agree with using predictive policing. Funnily enough, after a quick Google search, I haven't found any Dutch articles about this topic.
This is dumb. Predictive policing algorithms predict where the police will be, not where crime takes place, since crime data is largely a measurement of policing activity.
If there is bias in policing, it would therefore be amplified.
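That amplification claim can be made concrete with a toy simulation (all numbers invented). Two areas have identical true crime rates, but area 0 starts with a slightly higher recorded count. If each day you patrol the "hotter" area and only crime in the patrolled area gets recorded, the initial gap feeds on itself:

```python
# Toy model of a predictive-policing feedback loop: patrols go where
# recorded crime is highest, and only patrolled crime gets recorded.

def simulate(days=100, counts=(55.0, 45.0), true_rate=(0.5, 0.5)):
    counts = list(counts)       # recorded crime per area (historical data)
    patrols = [0, 0]            # patrol-days sent to each area
    for _ in range(days):
        target = 0 if counts[0] >= counts[1] else 1  # patrol the "hotter" area
        patrols[target] += 1
        counts[target] += true_rate[target]  # only patrolled crime is seen
    return counts, patrols

counts, patrols = simulate()
# Despite identical true crime rates, every patrol goes to area 0,
# and area 0's recorded count keeps growing while area 1's is frozen.
```

A proportional (rather than winner-take-all) allocation wouldn't diverge this fast, but the qualitative point stands: the data the model trains on is itself a product of past deployment decisions.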
I'm very curious what "banning" predictive policing even means here. In the broadest sense, predictive policing is using data to inform where crime will happen in the future.
Is this to say you can't use historical trends to allocate police in a city? Should police be allocated only based on population size/density in a region?
Is using your knowledge of what neighborhoods tend to be "crime heavy" predictive? Are they crime heavy because of increased policing (you found more crime because you were looking) or because there really was more crime?
Predictive policing is quite the buzzword these days. IBM (via SPSS) is one of the big players in the field. The most common use case is burglary, I suspect because it's somewhat easy (and also directly actionable). You rarely find other use cases in academic papers (well, I only browsed the literature a couple of times preparing for related projects).
The basic idea is sending more police patrols to areas that are identified as high threat, and thus using your available resources more efficiently. The focus there is more on objects/areas than on individuals, so you don't try to predict who's a criminal but rather where they'll strike.
It sounds like a good enough idea in theory, but at least in Germany I know that research projects for predictive policing will be scaled down due to privacy concerns, even if the prediction is area-based rather than person-based (notably, that's usually the reason the police themselves cite for not participating in the research). I'm not completely sure, and have only talked to a couple of state police research people, but quite often the data also involves social media in some way, and that seems to be the major problem from what I can tell.
Statistical analysis to figure out the optimal use of police resources is a great idea, particularly for preventing violent crime and theft in public areas.
What concerns me, however, is the idea of predictive analytics being used on the vast databases collected by NSA/GCHQ (there's no reason to suspect they're not doing this already). Such a system of "pre-crime" would be bound to have false positives, and suddenly based on your Google searches, movie preferences, musical tastes, friendships, or who knows what, the state decides that you are a "person of interest".
The technology used by the police seems to be nothing more than support for their program of targeted intervention. The system identifies not only potential perpetrators, but also their victims. Using data to identify the people most in need of "concrete assistance in the form of social services, job training, childcare" just makes the process of doling out limited resources more efficient.
>> “I think this is state of the art for predictive policing,” Lewin says.
>> How will this form of predictive policing be received?
Use of the word "predictive" here is utterly inaccurate because the police are not predicting anything, but rather identifying at-risk individuals. The inferential leap from "people with these n properties have historically been party to gun violence" to "prediction" is enormous. As soon as one mentions "prediction" and police in the same breath, it suggests Minority Report-style impingement on personal freedoms of some kind. The reason I stress this is because such a program might very well work to reduce gun violence, and labelling it as though it were the genesis of a Big Brother is not only disingenuous, but may also harm the program's legitimacy in the eyes of people who have the power to shut it down.
That sounds like "predictive policing". The problem is that the institution that wants to predict crime might be inclined to use attributes of people who were previously arrested/convicted as the basis for the predictive model.
They won't be watching out for you based on personality disorders, addiction, or any other reasonable indicator. They'll watch out for you because you have attributes that match the type of people in jail right now. It's not going to be indicative of criminal behavior.
Given the problems in police departments (which have fortunately started appearing in the news), giving the police a system that will essentially let them see what they want to see is a terrible idea. Police work is already full of "forensic tools" that don't actually work (like the idea that fingerprints uniquely identify someone, or the various techniques that fall prey to the Birthday Paradox).
While I'm sure it's possible to use modern techniques to estimate where crime will occur, it won't work in practice. There are simply too many ways to bias the results (intentionally or not). I suspect giving police this kind of tool is simply a way to lend legitimacy and cover to their bad behavior.
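The Birthday Paradox point can be illustrated with the closely related large-database search effect (the function and rates below are hypothetical, and the independence assumption is an idealization): even a tiny per-comparison false-match rate becomes near-certainty when one print is searched against millions of records.

```python
import math

def p_false_match(database_size, per_comparison_rate):
    """Probability of at least one false match when searching one print
    against a database, assuming independent comparisons."""
    # 1 - (1 - rate)^n, computed stably via log1p for tiny rates
    return 1.0 - math.exp(database_size * math.log1p(-per_comparison_rate))

# A one-in-a-million per-comparison error rate, searched against
# a database of 10 million prints:
p = p_false_match(10_000_000, 1e-6)
# p is roughly 0.9999 -- a false match is all but guaranteed.
```

So "the print matched someone in the database" is far weaker evidence than "the print matched this suspect we already had", which is exactly the kind of distinction these tools tend to blur.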
> including information about friendships, social media activity
COINTELPRO is a helluva drug.
> advocates say predictive policing can help improve police-community relations by focusing on the people most likely to become involved in violent crime.
That sounds suspiciously like an excuse to improve white communities, by focusing on the blacks (who have historically been seen as "violent savages" by racists).
> because our predictive tool shows us you might commit a crime at some point in the future
The big question is how long until someone tries to use that "predictive tool" as probable cause.
Predictive policing systems seem like they will be very difficult to properly validate. On the one hand, it could go like the drug industry where there are bodies "testing" potential solutions, but the money, incentives, and sheer noise make the results highly suspect. On the other hand, cities and towns may prove to be a good experimentation framework, in the same way that laws are "tested" at the country/state level. I'm curious to see if much useful will come out of this, and how long it will take.
Predicting crime is great. It means you're working to understand the state of your society.
Using it for policing is idiotic. You can't arrest a statistic.
Instead, it should be used for public policy: to understand which communities are at risk, direct efforts toward understanding why they are at risk, and engineer better systems to support those communities.