Given the problems in police departments (which have fortunately started appearing in the news), giving the police a system that will essentially let them see what they want to see is a terrible idea. Police work is already full of "forensic tools" that don't actually work (like the idea that fingerprints uniquely identify someone, or the various matching techniques that fall prey to the Birthday Paradox).
While I'm sure it's possible in principle to use modern techniques to estimate where crime will occur, it won't work in practice. There are simply too many ways to bias the results (intentionally or not). I suspect giving police this kind of tool is simply a way to give legitimacy and cover to their bad behavior.
> including information about friendships, social media activity
COINTELPRO is a helluva drug.
> advocates say predictive policing can help improve police-community relations by focusing on the people most likely to become involved in violent crime.
That sounds suspiciously like an excuse to improve white communities, by focusing on the blacks (who have historically been seen as "violent savages" by racists).
> because our predictive tool shows us you might commit a crime at some point in the future
The big question is how long until someone tries to use that "predictive tool" as probable cause.
This is dumb. Predictive policing algorithms predict where the police will be, not where crime takes place, because crime data is simply a measurement of policing activity.
If there is bias in policing, it would therefore be amplified.
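A toy simulation of that feedback loop (districts, rates, and the initial bias are all invented): two districts with identical true crime rates, patrols allocated by recorded crime, and only patrolled hours producing new records.

```python
# Toy feedback-loop simulation (all numbers invented): two districts with
# the SAME underlying crime rate, patrols allocated by recorded crime, and
# only patrolled hours producing new records.
import random

random.seed(0)

TRUE_RATE = 0.1                 # chance a patrol-hour observes a crime, in BOTH districts
TOTAL_PATROL_HOURS = 100
recorded = {"A": 12, "B": 10}   # a small initial bias in the historical records

for year in range(10):
    total = sum(recorded.values())
    for district in recorded:
        # Patrol hours are allocated proportionally to recorded crime...
        hours = round(TOTAL_PATROL_HOURS * recorded[district] / total)
        # ...and only patrolled hours generate new records.
        recorded[district] += sum(random.random() < TRUE_RATE for _ in range(hours))
    print(year, recorded)
# The initial disparity never washes out: recorded "crime" tracks where the
# patrols were sent, not any real difference between the districts.
```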
Predictive policing is an idea gone totally wrong (predictably).
Data can tell us what, when, and where to keep an eye on, and it can help us highlight the roots of criminal activity (social inequality, bad schools, unsustainable business practices, pollution[1,2,3], a non-restorative penal system, police corruption, etc.), but it can never rightfully justify treating any given person as a dangerous criminal or even a suspect.
Predictive policing does nothing more than send policing resources where crime is more likely, based on statistical models.
It does not racially discriminate and it is not racist. It does not harm anyone.
I feel that this is posturing and shooting the messenger. If crime is statistically higher in "black neighbourhoods" the issue will not be solved by pretending it isn't.
If the technology does not work, then of course there is no point spending more money on it. So, does it work or not? Here this feels political, not pragmatic.
Apophenia is the _human_ tendency to see patterns in random _data_. Predictive _analytics_ is a machine that pulls non-random patterns out of data and presents this as _information_.
To say that other tools have a bad track record may count as a valid argument, but to me it is a weak and fatalistic one. Judging each tool on its own merits versus discarding all tools as useless seems like an easy choice.
Predicting crime works in practice and in theory. These models are not black boxes; they can be introspected. Bias can be detected and removed.
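For what it's worth, here is the kind of introspection that claim implies, as a sketch with synthetic data and invented feature names (a real system's features would differ): fit a transparent linear model and read off which inputs drive its score.

```python
# Sketch of model introspection (synthetic data, invented feature names):
# fit a transparent linear model and inspect the learned weights.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000
features = ["prior_arrests", "neighborhood_patrol_hours", "age"]
X = np.column_stack([
    rng.poisson(1.5, n),        # prior_arrests
    rng.uniform(0, 40, n),      # neighborhood_patrol_hours
    rng.uniform(16, 70, n),     # age
])
# By construction, the synthetic outcome ignores patrol hours entirely.
y = (0.8 * X[:, 0] - 0.05 * X[:, 2] + rng.normal(0, 1, n) > 0).astype(int)

model = LogisticRegression().fit(X, y)
for name, weight in zip(features, model.coef_[0]):
    print(f"{name:>28}: {weight:+.3f}")
# A large weight on neighborhood_patrol_hours would be a red flag that the
# model is learning policing intensity rather than crime; such a proxy
# feature can then be dropped and the model refit.
```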
COINTELPRO was a program to infiltrate and disrupt organizations that the state viewed as unwelcome. Monitoring social media activity is common detective work, the modern equivalent of an officer peeking over the fence into your back garden to spot a stolen motorbike. Now they can use Google Maps for that. This is public information: the criminals feel free and safe enough to post and brag about their crimes on Facebook.
Removing or combating criminal elements in any community will improve that community, regardless of skin color. Black youth is helped, not suppressed, when gang recruiters are identified and punished.
Predictive tools are already used as probable cause. Prisoners in Guantanamo Bay can be given a brain-wave reader test. This device will supposedly tell you what someone is thinking about and may reveal plans for a future terrorist attack.
This appears to be the primary concern raised in the article. But surely data scientists could put in guards against this, such as weighting the crime count versus the time spent policing the area? In fact, presumably any functional predictive policing system must already do that, else you'd just tend towards policing one area incredibly intensively.
If they are concerned about the police arrest stats or racism biasing the data set, they could keep the input to something hard to interfere with, like where murders occur or where crimes are reported by the public, rather than where arrests occur.
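One way to implement both of those guards, as a sketch with invented area names and numbers: score areas by public-reported crime per patrol-hour, rather than by raw counts that include arrests.

```python
# Sketch of both guards (area names and numbers invented): use only
# public-reported incidents, and normalize by patrol exposure, so an area
# doesn't score high merely because more officers were there to record things.
areas = [
    # (name, public_reported_crimes, arrests, patrol_hours)
    ("North", 40, 120, 800),    # heavily patrolled
    ("South", 35,  30, 200),    # lightly patrolled
]

for name, reported, arrests, patrol_hours in areas:
    naive = reported + arrests            # rewards heavy patrol presence
    guarded = reported / patrol_hours     # exposure-adjusted, reports only
    print(f"{name}: naive={naive}, reported-per-patrol-hour={guarded:.3f}")
# North "wins" on naive counts (160 vs 65), but South actually has 3.5x the
# rate of public-reported crime per hour of patrol presence.
```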
My opinion is that predictive policing does need regulation, but banning it entirely, to me, seems like an overreaction that will over time result in much less effective use of police resources. I suppose time will tell.
Let me add a slightly more negative take. If you follow police technology, no matter what's implemented, crime magically goes down! Although I can see a system like this being a high-level tool for allocating resources, the notion that it can "predict" crime is likely overblown. Where's the deep analysis to show causation, not just correlation?
There are initial decisions to be made with the deployment of these automated policing tools. For example, this is from an EFF piece on predictive policing[0]:
"For instance, if police put majority Black neighborhoods under intense police scrutiny and surveillance, they will be more likely than in other neighborhoods to make stops and arrests according to minor quality of life infractions. Therefore, derivative maps purporting to show where future crimes might be committed will disproportionately weigh those neighborhoods already living under the weight of intense police presence. This can create a self-fulfilling prophecy that uses the supposed objectivity of math to legitimize racially biased police procedures."
There is also a piece on other issues with license plate readers in particular.[1] I'd recommend the whole series if you're interested.[2]
The technology used by the police seems to be nothing more than support for their program of targeted intervention. The system does not only identify potential perpetrators, but also their victims. Using data to identify people most in need of "concrete assistance in the form of social services, job training, childcare" just makes the process of doling out limited resources more efficient.
>> “I think this is state of the art for predictive policing,” Lewin says.
>> How will this form of predictive policing be received?
Use of the word "predictive" here is utterly inaccurate because the police are not predicting anything, but rather identifying at-risk individuals. The inferential leap from "people with these n properties have historically been party to gun violence" to "prediction" is enormous. As soon as one mentions "prediction" and police in the same breath, it suggests Minority Report-style impingement on personal freedoms of some kind. The reason I stress this is because such a program might very well work to reduce gun violence, and labelling it as though it were the genesis of a Big Brother is not only disingenuous, but may also harm the program's legitimacy in the eyes of people who have the power to shut it down.
I’m biased, so I would give the most weight to what the computer science department says.
Predictive policing absolutely does work in some respects. The opposite of predictive policing is to have all beat cops equidistant from each other in their jurisdiction. This is clearly wrong. The next step is to have cops near population centers, then around areas where there have been more reports, requests, etc. After that you move to time of day. Where you place cops during the day vs. at night is clearly different. Shops close, bars open. Places where young adults vs. seniors gather will clearly need different coverage. Pretty much every study on the subject shows people age out of crime; this is used as supporting evidence against lifetime sentences and geriatric incarceration.
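A toy version of that progression, with invented populations and activity weights: allocating by population and time of day is already "predictive", with no individual profiling involved.

```python
# Toy version of the progression above (all weights invented): allocate a
# fixed pool of officers by population and time-of-day activity.
districts = {
    # name: (population, day_multiplier, night_multiplier)
    "downtown":   (20_000, 1.5, 2.5),   # bars open at night
    "suburb":     (50_000, 1.0, 0.4),
    "industrial": ( 5_000, 1.2, 0.2),   # shops close after dark
}
OFFICERS = 30

for label, idx in (("day", 1), ("night", 2)):
    weights = {name: vals[0] * vals[idx] for name, vals in districts.items()}
    total = sum(weights.values())
    print(label, {name: round(OFFICERS * w / total) for name, w in weights.items()})
# Coverage shifts toward downtown at night and the suburb by day -- already
# a crude predictive model, and nowhere near profiling individuals.
```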
I have to assume that only the most naive software, one whose algorithm leans on secondary indicators and the simplest predictive techniques, would draw such attention. For example, flagging "black neighborhoods" (that happen to be poor, with no stats other than other departments saying they have issues with black neighborhoods), or self-fulfilling prophecies (we put most cops at location X and, what a surprise, that location had the most arrests, without looking at the arrests themselves).
Predicting crime is great. It means you're working to understand the state of your society.
Using it for policing is idiotic. You can't arrest a statistic.
Instead, it should be used for public policy: to understand which communities are at risk, direct efforts to understand why they are at risk, and engineer better systems to support those communities.
> predictive policing relies on algorithms to interpret police records, analyzing arrest or parole data to send officers to target chronic offenders, or identifying places where crime may occur.
So... instead of using algorithms, how will the police decide where to patrol heaviest? Putting a human in charge of that seems like a great way to increase racial bias. Or am I misunderstanding what "predictive policing" is?
Back before computers were a thing in most people's homes, let alone pockets, police would plot on a map the crime reports of the past year and assign more patrols to areas with a high density of crime.
Is this practice now banned? Because this is predictive policing, albeit in a crude form. Presumably it just means computerized predictive policing, but it still seems odd that people seem to think that analyzing past patterns of crime to better allocate police is something novel.
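The computerized version is barely more than that pin map made explicit; a sketch with invented coordinates:

```python
# The pin map, computerized (coordinates invented): bucket last year's
# incident locations into grid cells and rank cells by incident count.
from collections import Counter

incidents = [(3.2, 7.1), (3.4, 7.3), (3.3, 7.6), (8.9, 1.2), (3.1, 7.2)]
CELL = 1.0  # grid cell size, in whatever units the map uses

counts = Counter((int(x // CELL), int(y // CELL)) for x, y in incidents)
for cell, n in counts.most_common(3):
    print(f"cell {cell}: {n} incidents last year")
# Cell (3, 7) ranks first -- the same conclusion the paper map would give.
```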
Predictive policing is old, and has been done in consultation with universities before. There are papers on predicting crime vs. criminals, predicting by time and address, predicting by age, etc., including discussion of whether this leads to uneven enforcement against minorities. It would be surprising for law enforcement to move toward less use of statistics and software.
What would be new would be formal preemptive detention based on generic factors. Then again, the Guardian reported on a police black site in Chicago that held thousands of largely black detainees without access to lawyers.
I wonder, if it did predict a crime, would it change how they act? Would they trust it, or would hubris prevail?
I have personally predicted crimes to the police, to no avail, and they happened. I even had a police complaint upheld over it.
So even if you can see the patterns (whether by being autistically gifted that way or by machines), trust and communication are the crux. It's akin to giving the answer to a maths problem without showing the working. Hence many aspects remain reactive to problems, with proactiveness very much an uphill battle against complacency.
The real issue, though, with any probability/prediction is the quality of the data, and filtering it. As always: garbage in, garbage out. This is borne out by the fact that many police systems are old legacy setups with bolt-ons added along the way, so data quality can and does get curtailed by legacy limits. I can certainly attest to instances in my country, and I imagine the story is much the same elsewhere, more than is appreciated. After all, an underfunded police force is common in many countries.
I think there's a big difference between using it to predict who will commit a crime (a big no-no imo) vs. where a crime will be committed. For example, allocating patrols by likelihood of break-ins seems acceptable to me and might even free up resources for more important tasks. Same for mundane stuff like more patrol cars/checks in areas with more potential speeding violations/car accidents due to reckless driving, etc.
This is such a German judgement. So the police are using big data to predict who will commit crimes, which is sort of dystopian.
Police in the United States are close to this already. Chicago's police department used to brag about its data-crunching ability to predict where crime would happen. At the time, it was just by area, but it was stated that with enough data, it could be narrowed down to the block. It's not that big a leap to extrapolate that to the individual.
My guess, though, is that it didn't work, and the city was just parroting the promises made by the salespeople of whatever system they bought for this task. If it worked, then police would preemptively flood certain areas where they know crime is about to happen. But, as is seen on television most nights, and in the complaints of the aldermen, the police mostly seem to react to crime, rather than prevent it. The use of social media by crime mobs has only made it worse.