The Dangerous Math Used To Predict Criminals

The criminal justice system is overburdened and expensive. What if we could harness advances in social science and math to predict which criminals are most likely to re-offend? What if we had a better way to sentence criminals efficiently and appropriately, for both criminals and society as a whole?

That’s the idea behind risk assessment algorithms like COMPAS. And while the theory is excellent, we’ve hit a few stumbling blocks with accuracy and fairness. The data collection includes questions about an offender’s education, work history, family, friends, and attitudes toward society. We know that these elements correlate with anti-social behavior, so why can’t a complex algorithm using 137 different data points give us an accurate picture of who’s most dangerous?

The problem might be that it’s actually too complex -- which is why random groups of internet volunteers yield almost identical predictive results when given only a few simple pieces of information. Researchers have also concluded that a handful of basic questions are as predictive as the black box algorithm that made the Supreme Court shrug.
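
That result is easier to believe with a toy example. Below is a minimal sketch on purely synthetic data (hypothetical features, not the actual COMPAS inputs): when only a couple of variables carry most of the signal, a two-feature model predicts about as well as one fed 137 features.

```python
# Minimal synthetic sketch: 2 informative features vs. 137 total features.
# All data is made up; "informative" stands in for things like age and prior count.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, total_features = 5000, 137

informative = rng.normal(size=(n, 2))                # the two features that matter
noise = rng.normal(size=(n, total_features - 2))     # 135 columns of pure noise
X = np.hstack([informative, noise])

# The outcome depends only on the two informative features (plus randomness).
logits = 1.2 * informative[:, 0] - 0.8 * informative[:, 1] + rng.normal(size=n)
y = (logits > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

simple = LogisticRegression().fit(X_train[:, :2], y_train)
full = LogisticRegression(max_iter=2000).fit(X_train, y_train)

print("AUC, 2 features:  ", round(roc_auc_score(y_test, simple.predict_proba(X_test[:, :2])[:, 1]), 3))
print("AUC, 137 features:", round(roc_auc_score(y_test, full.predict_proba(X_test)[:, 1]), 3))
```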

Is there a way to fine-tune these algorithms to be better than collective human judgment? Can math help to safeguard fairness in the sentencing process and improve outcomes in criminal justice? And if we did develop an accurate math-based model to predict recidivism, how ethical is it to blame current criminals for potential future crimes?

Can human behavior become an equation?

*** ADDITIONAL READING ***

*** LINKS ***

Vsauce2:

Hosted and Produced by Kevin Lieber

Research and Writing by Matthew Tabor

Editing by John Swan

Police Sketches by Art Melt

Huge Thanks To Paula Lieber

#education #vsauce #crime
Comments

IBM internal presentation slide, circa 1979: "A COMPUTER CAN NEVER BE HELD ACCOUNTABLE THEREFORE A COMPUTER MUST NEVER MAKE A MANAGEMENT DECISION" is the perfect response to any of this. No algorithm should ever decide the fate of who lives and who dies, whose life gets cut by 30 years and whose by 3.

DemonixTB

FYI - Noom was found to be engaging in some very shady business practices behind the scenes. They have been overcharging customers and refusing to let them cancel their services. I believe they are currently under investigation. From what I've come to learn, they are actually bragging about their mishandling of services and suggesting other companies do the same. I'd do some digging to see what you can find before accepting their promotions again.

Vee-Shan-CC

I first read about this in the book "Weapons of Math Destruction". A major problem with all of these algorithms is that they can't measure the variables they actually want to observe (like what people think, how emotionally stable they are, or what their views, experiences, and skills are). So companies use second-hand variables which are often only weakly linked to the problem at hand. Laymen just see "a computer came up with the number after doing some very complex math," which they take to mean "it must be correct, since neither math nor computers can be wrong," and they forget the old wisdom: garbage in, garbage out.

PhilmannDark
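
A minimal synthetic sketch of that garbage-in, garbage-out point (a made-up "latent trait" and "proxy", nothing from any real assessment tool): a model that only sees a noisy second-hand stand-in loses most of the predictive power of the thing it was actually supposed to measure.

```python
# Synthetic illustration of the proxy problem: the outcome is driven by a latent
# trait we cannot measure directly, and the model only sees a noisy stand-in for it.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5000

latent = rng.normal(size=n)                      # what we actually care about
proxy = latent + rng.normal(scale=3.0, size=n)   # weakly correlated, second-hand measure
y = (latent + rng.normal(scale=0.5, size=n) > 0).astype(int)   # outcome driven by the latent trait

for name, feature in [("latent trait", latent), ("weak proxy ", proxy)]:
    X_tr, X_te, y_tr, y_te = train_test_split(feature.reshape(-1, 1), y, random_state=0)
    model = LogisticRegression().fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"AUC using {name}: {auc:.2f}")         # the proxy is far less predictive
```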

I think using an algorithm to look for possible suspects, or locations of evidence, or areas that might require higher security due to a history of criminal behavior is valid. But as soon as you start asking subjects philosophical questions, you've introduced a wild card that makes the algorithm meaningless. I think we can find areas in the justice system for algorithmic programs, but definitely not proprietary and hidden ones. Open source is a must for transparency.

ceemee

We need a law that any algorithm that affects sentences or political decisions must be open source. For me as a computer scientist, that's just common sense and not having that law contradicts every juridical principle in a democracy. Having a black box algorithm influence decisions is literally the equivalent of using investigative results or testimonies without presenting them in court.

Cyberlisk

Using AI to predict future crimes is an extremely dangerous idea. If you give an AI access to currently available crime data and optimize it to predict future crimes, what you are actually doing is asking it to predict who the criminal justice system (with all of its biases) will find guilty of a future crime. It gets even worse when you feed the AI data from crimes that it predicted. The AI can now learn from its past actions and further 'fine-tune' its predictions by looking at what traits are more likely to lead to a guilty conviction, and focus its predictions on people with those traits. This leads to a feedback loop where the AI discovers a bias in the justice system and exploits that bias to improve its "accuracy," leading to the generation of more crime data which further reinforces its biases.

Don't even get me started on what could happen if we use an AI powerful enough to realize that it can 'influence' its own training data.

SupaKoopaTroopa
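
Here's a minimal simulation of the feedback loop described in the comment above, on entirely made-up numbers: two groups with identical true offending rates, an assumed initial policing bias toward one of them, and a "model" that is just the observed conviction rate per group. Once the model's output decides where enforcement looks, the gap it "discovers" never closes.

```python
# Synthetic feedback-loop sketch: convictions depend on detection effort, the model
# learns from convictions, and detection effort then follows the model's scores.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

group = rng.integers(0, 2, size=n)        # two equal-sized groups, 0 and 1
detection_rate = {0: 0.30, 1: 0.45}       # assumed initial bias: group 1 is watched more

score = np.zeros(2)                       # the model's per-group "risk score"
for round_ in range(5):
    offended = rng.random(n) < 0.10       # same true offending rate in both groups
    detected = np.array([detection_rate[g] for g in group])
    convicted = offended & (rng.random(n) < detected)   # an offense only counts if detected

    # "Retrain": the score is just each group's observed conviction rate.
    for g in (0, 1):
        score[g] = convicted[group == g].mean()

    # Feedback: next round's enforcement budget is allocated according to the scores.
    detection_rate = {g: 0.75 * score[g] / score.sum() for g in (0, 1)}

    print(f"round {round_}: conviction rate by group = {score.round(3)}")
```

The true offending rate is identical in both groups, but the model keeps reporting group 1 as riskier because that is where the system keeps looking.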

There's no way that this machine wasn't trained with data about actual convictions and suspect info. Therefore, the algorithm could at best only accurately replicate justice as it has been done, not as it should be.

stevenboelke

Company: "yea you should sentence him harder, and I won't tell you why I think that"
Judges: "eh, good enough"

Man, if trade secrets get prioritised over a citizen's right to a fair trial, seriously, wtf. This is trial by crystal ball.

TheVaryox

"determined by the strength of the item's relationship to person's offense recidivism"
I was gonna say there was no way those coefficients weren't racist, and the results bear that out. It's almost like predictive algorithms are really good at perpetuating self-fulfilling prophecies.

notoriouswhitemoth

This brings back memories of the Psycho-Pass anime, where an AI computer decided who was a threat to society before they had even committed a crime. The whole society was ruled by this tech without questioning it, even the cops and law enforcers.

joaquinBolu

You'd think that if we were going to recreate Minority Report, we'd at least try to do a good job at it.

Codexionyx

This reminds me of the study James Fallon did on psychopaths. He analyzed brain scans of known psychopaths and found that all their brains showed similar results. Then, during a brain scan test he did on himself and his family, he found that one of the brains matched that of a psychopath. He thought someone at work was playing a joke on him, but it turned out to be his own brain. This shows that it takes more than just brain structure to make someone a psychopath. However, those whose brains match the scans may be more susceptible to becoming psychopaths if certain conditions are met.

chestercs

This is assuming that nobody lies and gives answers they know will lower their score.

imaperson

The fact that many of these questions seem like what you'd ask a person while trying to diagnose them with certain mental illnesses or neurodivergences is disgusting, let alone the part where these questions are answered with no context or nuanced conversation on the subject.

"Do you often feel sad?"
The answer: "Yes"
The algorithm's thoughts: "this person has nothing to live for and might commit a crime because they don't fear losing their life, their crime and answers indicate they'd be more likely to break the law again"
The reality/nuance: "Yes, my mom died 4 months ago to cancer and I've felt down ever since, she helped me keep my life in check and without her I completely forgot to get my car's documents renewed, since she always reminded me to do it as I still lived with her and the mail was received by her"

It's SO easy for any answer to mean the complete opposite if you don't allow someone to explain the reason behind their emotion. Algorithms, AIs, and machines in general should never be in charge of judging people, because they do not, and cannot, grasp the nuance behind actions and feelings. It's ludicrous to me that this is even a thing.

Oxytail

The worst kinds of judgements are judgements made by someone who can't be held accountable if they are wrong.

Judgements that determine how many years someone spends in prison should not be decided by an unaccountable AI.

awesomecoyote

If you want to develop an effective method for measuring recidivism, here's the plan:
Step 1: Make a law requiring all people to buy liability crime insurance. Under the terms of this type of insurance, whenever the client commits a crime, the insurance agency pays for the damages caused and the client is charged nothing.
Step 2: Wait 2 months.
Step 3: Base prison sentences on people's insurance rates.

Insurance companies under this system have a financial incentive to create an effective system for predicting future criminal behaviour and base their liability crime insurance rates on that. As such, the insurance rates become accurate predictors of future criminality. Of course you could argue that this system will cause repeat offenders to have such incredibly high insurance rates that they have no reasonable way of ever paying them, thus making them unable to buy liability crime insurance. Fret not, for I have a solution. Execution. This will drop their rates to precisely $0.

Thank you for listening to my very own dystopia concept presentation.

andrasfogarasi

The fundamental problem with this approach is that generalities can't be applied to an individual, and these automated approaches to crime prediction only rely on generalities. They are a codification into law of biases and stereotypes.

KenMathis

Those algorithms sound literally like the Sibyl System from the Psycho-Pass anime, lol. Next step, we get a social credit score :D

felipegabriel

I'd scrawl, "I plead the 5th" over every question. I mean, you have the right to not be a character witness against yourself too, and how can you tell if you're incriminating yourself with some of these questions? Hell, just participating while black seemed incriminating in one example.

ElNerdoLoco

Mathematically, the problem is preeeetty obvious. The number of people who have committed only 0, 1, or maybe 2 crimes is astoundingly massive. The people who have committed 4 or more have usually committed MANY more than 4, often in the hundreds if we count the times they got away with it before being caught.

This means that while one group (the people who have committed many, many crimes) share a fairly similar profile or set of data points, the other group is literally *everyone* else.
So picture this: the algorithm determines that 90% of criminals wear blue pants, while blue pants account for only about 10% of the population. The algorithm will then happily mark any blue-pants-wearing citizen a "potential criminal," despite there being thousands more innocent people in blue pants than there are criminals overall.
Meanwhile, it renders completely invisible any criminal who wears white pants, or worse, who chooses to wear white pants to avoid a long sentence.

The second problem: petty crimes tend to be committed by normal people, so almost any person who commits a crime is "likely" to commit another, since the algorithm will find the pattern "all these criminals are normal people, therefore any normal person could be a criminal!" Way to go, black box...

airiquelmeleroy
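
The blue-pants example above is essentially a base-rate problem, and a quick Bayes'-rule check makes it concrete. The 90% and 10% figures come from the comment; the 1% base rate of repeat offenders is a purely hypothetical assumption.

```python
# Bayes' rule on the blue-pants example (all numbers illustrative).
p_criminal = 0.01              # assumed base rate of repeat offenders in the population
p_blue_given_criminal = 0.90   # "90% of criminals wear blue pants"
p_blue = 0.10                  # blue pants account for ~10% of the population

# P(criminal | blue pants) = P(blue | criminal) * P(criminal) / P(blue)
p_criminal_given_blue = p_blue_given_criminal * p_criminal / p_blue
print(f"P(criminal | blue pants) = {p_criminal_given_blue:.0%}")   # ~9%
```

So flagging everyone in blue pants is wrong roughly 91% of the time, and it still misses the 10% of repeat offenders who wear white pants.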