
Having the same credit score, or other data, does not mean they have identical circumstances. For example, your willingness to follow the rules of society, even to your detriment, might be a result of whether society's rules have treated you fairly in the past.

So believing that you'll see different default rates given the same financial data does not imply that you are a scientific racist. To create truly identical circumstances you'd have to do a brain swap (along with some other relevant internal organs) on black and white infants. The scientific-racist view would be that the likelihood of paying off the loans would then follow the brain, not the skin.




I'm not so sure that saying the concept of a credit score is racist is quite the same as saying credit scores perpetuate and reflect racial injustice/inequalities.

Moreover, the article you cited isn't simply making moral claims; it gives specific examples. In addition, there is a well-documented history of racial discrimination within the financial sector (see the Fair Housing Act).


Are you saying that even if certain groups of people are less likely to be able to repay debt (for reasons outside their control), it is still racist if their FICO scores are lower?

I think you want FICO scores to mean something that they don't mean.


Why are credit scores “racist” (as claimed by Sanders and Vox)? The outcomes being distributed differently among races does not make the methodology “racist”. The input factors into credit scores do not include race. Correlation to race, even if strong, is not causation.

A credit score is not racist, because racism implies intent; however, credit scores do disproportionately discriminate based on race and thus should not be used for hiring.

Consider that accessing any form of credit, and thus building a credit score, requires a bank account. When you consider the racial disparities among those who are unbanked or underbanked, it's clear that a credit score disproportionately affects certain communities.

Not only that, but credit ratings in general can be deeply flawed. Many institutions had their bonds downgraded by the large credit rating agencies after the financial crisis simply because balance sheets were scrutinized more closely amid the public outrage.


My beliefs aside [0], you do bring up a good point. I can't find the HN thread, but there was a good TEDx presentation on it as well:

Given two curves on the same graph, you are not guaranteed to be able to min/max them simultaneously.

For example: say you are a mortgage broker at a bank and you have to give out mortgages to people in your community. Obviously, the people in your community are diverse. There are men, women, white people, black people, homosexuals, heterosexuals, etc. all wanting mortgages. Say you have some historical data that you know to be accurate, namely the credit score of successful applicants, the associated foreclosure rate, and some demographic data like race and sex. Also say that you want the bank to remain solvent and competitive, handing out loans to the best applicants and also garnering a reputation for being a 'fair' community bank that people will actually be able to get loans from. Now, the question is, what is the cutoff for credit-scores that will cause an applicant to not get a mortgage [1]?

Say, on the y-axis is the foreclosure rate, and on the x-axis is an applicant's credit score. You have some curves for white women, black women, white men, black men, and many other permutations of the human rainbow. These curves are all different from each other.

Now, if you set an arbitrary cut-off score, you may be setting a score that says that white people are less likely to get a mortgage than black people. That's obviously disenfranchising white people in your community, and you should change the cut-off to be more equitable to the people around you.

However, now that you have changed the cut-off for approval of a mortgage to be racially equitable, you have made it such that men are less likely to get a mortgage than women. This is also not good, and you should change the cut-off again to be fair to all sexes and genders. However, now you are back to being unfair to white people, again.

Given the data you have, you are in a situation where, no matter what you do, you are discriminatory. In fact, I'd say that this is the most likely situation to be in. That all the curves and graphs would align just so and that you could be non-discriminatory under ALL scenarios is extremely unlikely. Honestly, you do have to pick and choose who you will not discriminate against.

This may seem disheartening, and, yeah, it is [2]. However, that does not mean that we shouldn't try to change things. If anything, understanding that you will very likely be discriminatory no matter what is helpful. You now have a better view of what you can change and how that may affect things. You can choose where to set your parameters with better clarity towards your fellow humans. Maybe you oscillate between gender parity and height parity. Maybe you focus on income inequality for 5 years and then switch focus to racial inequality. Whatever your thesis on how to achieve better equity in your community, you now have a better understanding of the mechanics of the system and can affect it more positively.
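The cutoff tradeoff described above can be sketched in a few lines of Python. Every number here is invented purely for illustration; the point is only that when two groups have different score distributions, moving the cutoff shifts each group's approval rate differently, so fixing a gap at one cutoff can flip which group is disadvantaged at another:

```python
# Hypothetical credit-score distributions for two groups; all numbers
# are made up for illustration.
group_a = [520, 560, 600, 640, 700, 760]
group_b = [610, 630, 650, 670, 690, 710]

def approval_rate(scores, cutoff):
    """Fraction of applicants at or above the cutoff."""
    return sum(s >= cutoff for s in scores) / len(scores)

# Because the distributions have different shapes, moving the cutoff
# flips which group is disadvantaged, mirroring the back-and-forth
# described above.
for cutoff in (600, 660, 700):
    ra = approval_rate(group_a, cutoff)
    rb = approval_rate(group_b, cutoff)
    print(f"cutoff={cutoff}: group A {ra:.2f}, group B {rb:.2f}")

# cutoff=600: group A 0.67, group B 1.00   (A disadvantaged)
# cutoff=660: group A 0.33, group B 0.50   (A disadvantaged)
# cutoff=700: group A 0.33, group B 0.17   (B disadvantaged)
```

At a cutoff of 600, group A is approved less often; at 700, group B is. There is no knob position that makes all the curves agree at once.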

[0] In talking privately to friends that are also considered 'minorities', discrimination is occurring and is systemic. Though this is personal anecdata.

[1] I'm trying to simplify this as much as possible. Obviously, real applications are VERY nuanced and complicated.

[2] Unless you are a journalist. Then, well, this means you will always have a lot to write about!


Mentioning credit scores shows clearly where your misunderstanding lies.

Credit scores are individual and based on actual financial events in that person's history.

Blackness tells you nothing about an individual person's history.

This shows for certain that your understanding is faulty, and so are your conclusions.


I don't think we're disagreeing.

That part is to be expected; the important thing is that you don't then recorrelate race with the lower credit scores, either explicitly or implicitly. That can essentially double-count race (via income and then via the second check), and that's what you want to avoid.


Please read the article more carefully. The article's authors compared denial rate for black and white applicants by debt-to-income ratio and loan-to-value ratio. The article's authors did not (and could not, because they didn't have the data) compare denial rate by credit score. Credit scores, like every single measure of wealth and socioeconomic stability in the US, are of course correlated with race.

> As for credit scores, it was impossible for us to include them in our analysis because the CFPB strips them from public view from HMDA data—in part due to the mortgage industry’s lobbying to remove them, citing borrower privacy.


Making the process rigid and algorithmic helps make sure that you aren't using race itself as a variable in your analysis, but it doesn't change facts about the world. Black people have lower credit scores because as a group, they are less likely to pay their bills on time or at all. But because race isn't considered as a factor, a black person who is financially responsible is not dinged just because other black people aren't.

Some people don't value FICO scores, and so theirs are likely to be lower. This would fall outside the only two models that you proposed. It reflects a cultural difference, which some people value highly.

If you cannot accept FICO as non-racist, then I do not know whether you could create a system that others would not find racist. I cannot think of a way that your system would not end up with some form of explicit race-based corrections. I think that concept is less palatable in America due to the focus on freedom and individuality: each person should stand on their own, not colored by the groups you could fit them into.

Just curious if you have an opinion: how do you propose correcting for the past?

Lastly, you can look up the determinants of a FICO score for yourself as you can use sources that you trust.


Oh that’s interesting! So you can have your “creditworthiness AI” (Ŷ) predict that disproportionately more black (A) people are not creditworthy, as long as that’s (roughly) in line with the actual creditworthiness (Y) as determined by the “supervisor”.
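That criterion (sometimes called calibration by group, or sufficiency) can be sketched with a toy check: among applicants the model predicts creditworthy, the fraction who actually are creditworthy should be similar across groups, even when the share predicted creditworthy differs. The records and helper names below are entirely made up for illustration:

```python
# Toy check of calibration by group: predictions can be roughly equally
# reliable within each group even when the groups are flagged as
# creditworthy at different rates. All data is fabricated.

records = [
    # (group, y_hat, y)  -- y_hat: predicted creditworthy, y: actually repaid
    ("A", 1, 1), ("A", 1, 1), ("A", 1, 0), ("A", 0, 0), ("A", 1, 1),
    ("B", 1, 1), ("B", 1, 1), ("B", 1, 0), ("B", 0, 0), ("B", 0, 1),
]

def precision_for_group(records, group):
    """P(y == 1 | y_hat == 1, group): how often a 'creditworthy' call is right."""
    outcomes = [y for g, y_hat, y in records if g == group and y_hat == 1]
    return sum(outcomes) / len(outcomes)

def positive_rate(records, group):
    """P(y_hat == 1 | group): share of the group predicted creditworthy."""
    preds = [y_hat for g, y_hat, y in records if g == group]
    return sum(preds) / len(preds)

# Different positive rates (0.8 vs 0.6), yet similar reliability within
# each group (0.75 vs ~0.67): calibrated, but not demographic parity.
print(precision_for_group(records, "A"), positive_rate(records, "A"))
print(precision_for_group(records, "B"), positive_rate(records, "B"))
```

So a model can satisfy this criterion while still flagging one group as creditworthy far less often, which is exactly the tension in the parent comment.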

Unfortunately whatever algorithm computers come up with to assess creditworthiness is going to be "racist" by the broad disparate impact definition of racism that the government uses. Ironically, the algorithms will have to be made actually racist to correct for this.

People with the exact same FICO score have different default rates if you bin them by race: Asian > Caucasian > Latino > Black.

> We do get a some variables even if we are not allowed to use them.

Cool, so you had access to an ethnicity variable to measure its proxy power and significance? I feel this is important and very rare outside of Europe.


You’re viewing “fixing” from the wrong lens. There are infinite bits of information about a person that could potentially be used in a model to calculate a credit score. At all times in every model you’re choosing a subset of one’s facets and can build statistically unbiased models based on the data you make available.

But when you talk about actually fixing models like this, you're forced to correct the final result, not just filter the data. Being blind to ethnicity doesn't work because one's ethnicity permeates (to different degrees, sure) every part of their life. All the data is bad: everything is a statistically detectable proxy for ethnicity.
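A minimal sketch of the proxy problem, with a wholly fabricated dataset: even after the protected attribute is dropped from the inputs, a remaining feature (here a made-up zip code column) can predict group membership well above the base rate, so any model free to use it can implicitly recover the attribute:

```python
from collections import Counter, defaultdict

# Fabricated (zip_code, group) pairs. Ethnicity is not an input feature,
# but zip code still carries information about it.
data = [
    ("10001", "A"), ("10001", "A"), ("10001", "A"), ("10001", "B"),
    ("90210", "B"), ("90210", "B"), ("90210", "B"), ("90210", "A"),
]

def proxy_accuracy(data):
    """Accuracy of predicting group by majority vote within each zip code."""
    by_zip = defaultdict(list)
    for zip_code, group in data:
        by_zip[zip_code].append(group)
    correct = 0
    for groups in by_zip.values():
        majority = Counter(groups).most_common(1)[0][0]
        correct += sum(g == majority for g in groups)
    return correct / len(data)

print(proxy_accuracy(data))  # 0.75, versus 0.5 by guessing blindly
```

With balanced groups the blind-guess baseline is 0.5, so anything above that means the "innocent" feature is leaking the protected attribute, which is why simply deleting the ethnicity column does not make a model blind to it.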


Credit checks had a similar impact for minority and disadvantaged communities. Before, if you wanted to get a mortgage or a car loan, the bank would assess you based on a number of factors. Many banks explicitly used race as a criterion for denial. Now, with near-instant credit checks, race has been removed as an explicit criterion, rightfully leading to much higher rates of loans and mortgages for minority and disadvantaged communities. Are credit checks anywhere close to reliable or fair? Can you correlate race based on zip code or other factors? Yes, these methods are not anywhere close to 'good enough'. But they are miles ahead of the explicit racism that preceded these forms of financial product applications.

I’m sorry, but it’s pretty absurd to boldly claim that credit scores are provably objective with regard to race, etc. I’d recommend reading _The Big Test_ for another history of a supposedly objective measurement of merit (the SAT) which turned out to be highly discriminatory. Removing discrimination from a system is not as simple as removing such info from its inputs.

Credit scores, racial profiling, and other data-based discrimination

Re: “a public, transparent algorithm to determine creditworthiness that eliminates racial biases in credit scores.”

Eliminating racial bias is admirable, but is not a thing that we have any idea how to do in any computationally rigorous way, certainly not as assumed in that language.

Modern credit scoring for anything that matters involves lender and context specific analytics. It's a much richer process than the imagined single monolithic FICO score.

In this world, giving a single entity a monopoly in generating these bespoke scores is, simply for mechanical reasons, a complete non-starter.

The way to deal with profit-oriented lenders making what seem to be unfair credit decisions is to have a credit process that solves for fairness, not profit. That amounts to starting from a place of making capital grants, something the government is very good at, and which would be a better angle for Bernie to be taking.

Andrew Yang's UBI proposal could be tweaked to optionally, at the receiver's request, treat the grants in part as low-interest loans, allowing receivers to create a credit history if they so desire.


This is the first I'm hearing of it, but assessing credit risk based on social profiles is almost definitely illegal. Conclusions based on friends or location rather than actual history will have a disparate impact on minorities.

http://en.wikipedia.org/wiki/Disparate_impact
