3 Sources of Unfair Credit Scores in Credit and Lending Decisions

Credit scoring helps lenders and underwriters determine the creditworthiness, or risk, of a potential borrower. Traditionally, lenders have had to rely on static and incomplete information and then make a judgement call on decisioning. ML and AI give lenders and credit underwriters the means to reduce human error and subjective judgement. But you can still end up with biased scores, or with the wrong data in your credit scoring models.

Three potential causes of unfair credit scoring practices to consider in your machine learning credit underwriting are:

  • Biased data
  • Algorithms within models
  • Business practices

3 Possible Sources of Unfair Scores

1. Biased Data

Data that unduly favors a particular group is biased, and using biased data for credit scoring contributes to unfair scores.

Example: Zip codes are an example of biased data because they can act as a proxy for protected attributes such as race: some US zip codes are predominantly African American based on the racial profiles of the people who live there. A credit underwriter or lender may actively avoid lending to those residing in a particular zip code. This is a violation of anti-discrimination law and an example of unfair credit scoring with biased data.
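
One way to surface such proxies before training a model is to measure the statistical association between a candidate attribute and a protected attribute. The following is a minimal sketch, not a production check: it assumes a pandas DataFrame with hypothetical `zip_code` and `race` columns and uses Cramér's V, a standard association measure for categorical variables.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

def cramers_v(x: pd.Series, y: pd.Series) -> float:
    """Cramér's V: strength of association between two categorical
    variables (0 = no association, 1 = perfect association)."""
    table = pd.crosstab(x, y)
    chi2 = chi2_contingency(table)[0]
    n = table.to_numpy().sum()
    r, k = table.shape
    return float(np.sqrt(chi2 / (n * (min(r, k) - 1))))

# Hypothetical applicant data; the column names and values are
# illustrative, not a real schema.
df = pd.DataFrame({
    "zip_code": ["10001", "10001", "60601", "60601", "94110", "94110"],
    "race":     ["A",     "A",     "B",     "B",     "A",     "B"],
})

# A high value suggests zip_code could stand in for race and
# deserves scrutiny before it is used as a model feature.
print(f"Cramér's V (zip_code vs race): {cramers_v(df['zip_code'], df['race']):.2f}")
```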

2. Algorithms within Models

Unfair credit scoring can also be caused by the machine learning credit underwriting algorithm itself. The most common way bias is introduced into a credit score model is through the selection of attributes.

Example: Credit underwriters and credit analysts may have predetermined the set of attributes they consider the best indicators of a borrower’s creditworthiness. This ‘best indicator’ assumption is based on past decisions, not on sufficiently large data sets. The credit scoring models are then built on this subjectively selected group of attributes, and on biased or incorrect assumptions.

In this case, it may be only 10 attributes, which may be neither the right attributes nor enough attributes to build a robust credit scoring model. Proper due diligence in building the credit scoring model is neglected as a result.

In contrast, Trust Science encourages customers to share all their data points with us. We augment that customer data with alternative data to ensure a broader, unbiased view of their consumers for credit and lending decisions. We then use robust feature selection algorithms to identify the unprotected attributes that best explain the credit risk profiles of those consumers. Once these predictive attributes are identified, our model governance team works with dedicated model developers to ensure that the attributes are explainable in the event of adverse action.
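
As an illustration of data-driven attribute selection (a generic sketch, not Trust Science’s actual pipeline), the example below uses scikit-learn’s L1-regularized logistic regression to zero out weakly predictive attributes. The column names and synthetic data are hypothetical stand-ins for candidate attributes with protected ones already excluded.

```python
import numpy as np
import pandas as pd
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Synthetic candidate attributes; names are illustrative only.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(500, 5)),
                 columns=["utilization", "tenure_months", "inquiries",
                          "payment_ratio", "trade_lines"])
# Synthetic default flag driven by two of the five attributes.
y = (X["utilization"] - 0.5 * X["payment_ratio"]
     + rng.normal(scale=0.5, size=500) > 0).astype(int)

# The L1 penalty drives the coefficients of weakly predictive
# attributes to zero; SelectFromModel keeps only the rest.
selector = SelectFromModel(
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
)
selector.fit(StandardScaler().fit_transform(X), y)

print("Selected attributes:", list(X.columns[selector.get_support()]))
```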

3. Business Practices

Lastly, the third source of unfair credit scoring may lie in how a business operates. A company may focus only on a particular group of people, which introduces a slew of proxies into its credit score models.

Example: A credit underwriter or lender’s primary target market may be single fathers, perhaps because of an underlying corporate mission to help them. However, this singular focus inherently creates bias if a credit score developed on that data is used to decide loans for consumers other than single fathers.

These proxies in turn create unintentional discrimination. So even though the company adheres to fair lending regulations and does not supply protected attributes such as sex and marital status, its data is biased. When the company provides that data to build a machine learning credit score model, the protected attributes are absent, but their proxies are present.

Three Tips to Reduce Unfair Credit Scores

What can be done to reduce unfairness in scores?

TIP 1: Check for biased data & use the right data.

> Increases validity and fairness of data and algorithms used in credit scoring models.

TIP 2: Document every decision in the process, along with your assumptions and hypotheses.

> Increases clarity and transparency, which limits the biases introduced into the data.

TIP 3: Combine credit scoring models. 

> Increases accuracy in scores by reducing the biases in a given model.
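
One common way to act on TIP 3 is to blend the probability estimates of several independently built models, so that no single model’s bias fully determines the final score. The sketch below is illustrative, assuming scikit-learn and synthetic data; it uses soft voting, in which the blended score is the average of each model’s predicted default probability.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for applicant features and a default flag.
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)

# Soft voting averages each model's predicted probabilities.
ensemble = VotingClassifier(
    estimators=[
        ("logit", LogisticRegression(max_iter=1000)),
        ("tree", DecisionTreeClassifier(max_depth=4, random_state=0)),
        ("gbm", GradientBoostingClassifier(random_state=0)),
    ],
    voting="soft",
)
ensemble.fit(X, y)

# Blended probability of default for the first five applicants.
print(ensemble.predict_proba(X[:5])[:, 1])
```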


Want help building fair credit score models?

As a lender or credit underwriter, you need to abide by strict government regulations and ensure adherence to fair lending practices throughout your business. By being aware of disparate impact (unintentional discrimination) and disparate treatment (intentional discrimination), companies can reduce and eliminate discrimination against protected groups in their credit scoring models.
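
Disparate impact is commonly screened with the four-fifths (80%) rule: the approval rate for any group should be at least 80% of the highest group’s rate. A minimal sketch of that check, using hypothetical approval decisions grouped by a protected attribute:

```python
import pandas as pd

# Hypothetical decisions: 1 = approved, 0 = declined.
# Groups and counts are illustrative only.
df = pd.DataFrame({
    "group":    ["A"] * 100 + ["B"] * 100,
    "approved": [1] * 60 + [0] * 40 + [1] * 40 + [0] * 60,
})

# Approval rate per group, and the ratio of lowest to highest.
rates = df.groupby("group")["approved"].mean()
impact_ratio = rates.min() / rates.max()

print(rates)
print(f"Adverse impact ratio: {impact_ratio:.2f}")
if impact_ratio < 0.8:
    print("Below the four-fifths threshold: possible disparate impact.")
```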

At Trust Science, we enable machine learning credit underwriting that provides only fair scores to our customers and tosses out scores that are deemed unfair. Want to learn more about our scores and how we can help?

Download our 2-pager here:

Download PDF
