
That sounds like "predictive policing". The problem is that the institution that wants to predict crime might be inclined to use attributes of people who were previously arrested/convicted as the basis for the predictive model.

They won't be watching out for you based on personality disorders, addiction, or any other reasonable indicator. They'll watch out for you because you have attributes that match the type of people in jail right now. It's not going to be indicative of criminal behavior.




Also, police in at least some cities use systems that predict whether someone will become a criminal and give that person extra attention.

Here's one story on it that I found with a quick search:

http://www.nytimes.com/2015/09/25/us/police-program-aims-to-...


This is a really bad knee-jerk reaction in the long term. These systems should be improved, and relied upon less, rather than banned. You want to identify "at risk" individuals and focus on keeping them out of trouble, preemptively? Predictive policing could help you with that. Arguably, this should be its main purpose in the first place.

Predictive policing is an idea gone totally wrong (predictably).

Data can tell us what, when, and where to keep an eye on; it can help us highlight the roots of criminal activity (social inequality, bad schools, unsustainable business practices, pollution[1,2,3], a non-restorative penal system, police corruption, etc.). But it can never rightfully justify treating any given person as a dangerous criminal, or even a suspect.

[1] https://news.ycombinator.com/item?id=21565624

[2] https://news.ycombinator.com/item?id=21193609

[3] https://news.ycombinator.com/item?id=9140942


Predictive policing is old, and has been done in consultation with universities before. There are papers on predicting crimes vs. criminals, predicting by time and address, predicting by age, etc., including discussion of whether this leads to uneven enforcement against minorities. It would be surprising if law enforcement moved toward less use of statistics and software.

What would be new would be formal preemptive detention based on generic factors. Then again, the Guardian reported on a police black site in Chicago that held thousands of largely black detainees without access to lawyers.


Yes, the police can and do try to predict crime.

It definitely is, and viewed cynically, at some point in the future a simple machine learning model in a police department could become the primary means of predicting who the convict will be, based on all the factors you just mentioned, at a much larger scale.

Social scientists have wanted to predict when victimization, abuse, or suicide will happen for a long time.

This is the other side of the coin.

I suggest that as long as 'individuals are not targeted for culpability', i.e. they are looking in 'general terms', and especially for 'victims', then maybe this might work?

'Jane Doe is at very high risk of abuse from a family member' -> maybe we should just check in.

'123 ABC St. is at very high likelihood of a break-in' -> maybe just cruise on by to 'show a presence'.

All of this said, I often wonder if these 'algorithms' just reproduce what cops on the ground, managing relationships, already know.

'Yes, thank you computer, Jane Doe who's been beaten by her husband 3 times is 'at risk''.

'Yes, thank you computer, 6th and Finch, aka 'crack corner' is at 'high risk' of crime'.


Any such system is, or would be, potentially very dangerous. Crime data is not the same thing as crime. Populations that are over-policed are disproportionately represented in any such data set, leading to higher predicted crime, which leads in turn to more over-policing (a feedback loop). I implore anyone attempting to build such a system to consider the serious issue of machine bias and its implications in the real world.

See this tutorial given at this year's NIPS machine learning conference: http://mrtz.org/nips17/#/
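The feedback loop described above can be made concrete with a toy simulation (purely illustrative; every number here is made up): two areas have the identical true crime rate, but one starts with slightly more patrols, so it records more crime, and so it gets assigned even more patrols.

```python
import random

random.seed(42)

TRUE_RATE = 0.10        # identical underlying crime rate in both areas
POPULATION = 1000
TOTAL_PATROLS = 10

patrols = {"A": 6, "B": 4}   # arbitrary initial imbalance
history = []

for year in range(15):
    recorded = {}
    for area, p in patrols.items():
        # Same true crime process in both areas...
        crimes = sum(random.random() < TRUE_RATE for _ in range(POPULATION))
        # ...but only crimes that patrols happen to observe enter the data.
        recorded[area] = int(crimes * p / TOTAL_PATROLS)
    # "Predictive" step: shift a patrol toward the area with more recorded
    # crime -- which is usually just the area that already had more patrols.
    hot = max(recorded, key=recorded.get)
    cold = "B" if hot == "A" else "A"
    if patrols[cold] > 1:
        patrols[hot] += 1
        patrols[cold] -= 1
    history.append(dict(patrols))

print(history[-1])  # the initial imbalance compounds toward {A: 9, B: 1}
```

Despite identical true crime rates, recorded crime tracks patrol placement, so the allocation drifts to an extreme. This is essentially the "runaway feedback" effect studied in the machine-bias literature the tutorial covers.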


This is addressed at the very start of the article. It's about predictive policing.

Also worth noting: predicting that someone will commit a crime doesn't mean that we have to arrest them. We could use the prediction to target other (voluntary) intervention.

Of course, we already use less-than-100%-effective prediction to target non-arrest interventions, so it's quite possible that 100% prediction would only better target what we already do (and potentially alleviate harm or waste), rather than enable Minority Report style pre-crime arrests.

I'm not sure crime prediction itself is a moral quagmire; rather, the tough questions are what we use as a basis for prediction, and a few drastic uses, like pre-crime arrests.


Predictive policing

I wonder: if it did predict a crime, would it change how they act? Would they trust it, or would hubris prevail?

I personally have reported predicted crimes to the police, to no avail, and they happened. I even had a police complaint upheld over it.

So even if you can see the patterns (whether by being autistically gifted that way or by machines), trust and communication are the crux. It's akin to giving the answer to a math problem without showing the working. Hence many aspects remain reactive to problems, with proactiveness very much an uphill battle against complacency.

The real issue with any probability/prediction, though, is the quality of the data and the filtering of it. As always: garbage in, garbage out. This is borne out by the fact that many police systems are old legacy systems with bolt-ons added along the way, so you can and do get data whose quality is curtailed by legacy limits. I can certainly attest to instances in my country, and I imagine the story is much the same elsewhere, more so than is appreciated. After all, an underfunded police force is common in many countries.


With this program, I could see people using the predictions to tell them where not to commit a crime.

Want to steal a car? Look for discreet areas that aren't flagged by this program.

I'm sure this system is under heavy lockdown, though, and would require up-to-date police information.


Statistical analysis to figure the optimal use of police resources is a great idea, particularly to prevent violent crime and theft in public areas.

What concerns me, however, is the idea of predictive analytics being used on the vast databases collected by NSA/GCHQ (there's no reason to suspect they're not doing this already). Such a system of "pre-crime" would be bound to have false positives, and suddenly based on your Google searches, movie preferences, musical tastes, friendships, or who knows what, the state decides that you are a "person of interest".
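The false-positive worry has a concrete arithmetic core, the base-rate fallacy: when the predicted event is rare, even a very accurate classifier applied to a whole population flags mostly innocent people. All the numbers below are illustrative, not from any real system.

```python
population = 300_000_000      # screened population (illustrative)
base_rate = 1 / 10_000        # fraction who would actually commit the crime
sensitivity = 0.99            # true-positive rate of the hypothetical model
false_positive_rate = 0.01    # fraction of innocent people wrongly flagged

actual = population * base_rate
true_positives = actual * sensitivity
false_positives = (population - actual) * false_positive_rate

# Precision: of everyone flagged, how many would actually commit the crime?
precision = true_positives / (true_positives + false_positives)
print(f"{precision:.1%} of flagged 'persons of interest' are real positives")
```

With these made-up numbers, roughly 99% of the people such a system flags would be false positives, which is exactly the "person of interest" scenario described above.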


Predictive policing is quite the buzzword these days. IBM (via SPSS) is one of the big players in the field. The most common use case is burglary, I suspect because it's somewhat easy (and also directly actionable). You rarely find other use cases in academic papers (though I've only browsed the literature a couple of times while preparing for related projects).

The basic idea is sending more police patrols to areas identified as high threat, thus using your available resources more efficiently. The focus is more on objects and areas than on individuals: you don't try to predict who the criminal is, but rather where they'll strike.

It sounds like a good enough idea in theory, but at least in Germany I know that research projects on predictive policing are being scaled down due to privacy concerns, even when the prediction is only area-based and not person-based (notably, that's usually cited by the police themselves as a reason not to participate in the research). I'm not completely sure, and have only talked to a couple of state police research people, but quite often the data also involves social media in some way, and from what I can tell that's the major problem.
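A minimal sketch of the area-based approach described here (not IBM's or any vendor's actual method; the data and parameters are invented): bucket past burglary reports into grid cells and rank cells by a recency-weighted count, on the near-repeat assumption that recent burglaries predict nearby future ones.

```python
from collections import defaultdict

# Hypothetical burglary reports: (grid_x, grid_y, days_ago)
reports = [
    (2, 3, 1), (2, 3, 4), (2, 3, 30),
    (5, 1, 2), (5, 1, 60),
    (0, 0, 90),
]

def hotspot_scores(reports, half_life_days=14):
    """Recency-weighted burglary count per grid cell: a report from
    `half_life_days` ago counts half as much as one from today."""
    scores = defaultdict(float)
    for x, y, days_ago in reports:
        scores[(x, y)] += 0.5 ** (days_ago / half_life_days)
    return scores

scores = hotspot_scores(reports)
# Cells ranked by score; the top cells get extra patrol attention.
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked[0])  # -> (2, 3)
```

Note that this ranks places, not people, which is precisely the distinction the comment draws between predicting where criminals will strike and predicting who is a criminal.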


Predictive Policing is already a thing. It was in use for a decade by the LAPD in the US. See:

https://www.latimes.com/local/lanow/la-me-lapd-precision-pol...

and

https://www.latimes.com/california/story/2020-04-21/lapd-end...


Are you looking for predicting future crimes in an area (i.e. city, neighborhood, state, etc...) or predicting whether an individual will commit future crimes?

https://en.wikipedia.org/wiki/Apophenia

Given the problems in police departments (which have fortunately started appearing in the news), giving the police a system that will essentially let them see what they want to see is a terrible idea. Police work is already full of "forensic tools" that don't actually work (like the idea that fingerprints uniquely identify someone, or the various techniques that are examples of the Birthday Paradox).

While I'm sure that it's possible to use modern techniques to estimate where crime will occur, it won't work in practice. There are simply too many ways to bias the results (intentionally or not). I suspect giving police this kind of tool is simply a way to give legitimacy and cover to their bad behavior.

> including information about friendships, social media activity

COINTELPRO is a helluva drug.

> advocates say predictive policing can help improve police-community relations by focusing on the people most likely to become involved in violent crime.

That sounds suspiciously like an excuse to improve white communities, by focusing on the blacks (who have historically been seen as "violent savages" by racists).

> because our predictive tool shows us you might commit a crime at some point in the future

The big question is how long until someone tries to use that "predictive tool" as probable cause.


This is a good point. If you are trying to design a system that "tells police officers where to go" then you shouldn't be using data to predict arrests, you should be predicting crime, which we don't have unbiased data for.
