The Role of Protected Attributes in AI Fairness


Fairness is fundamental to implementing AI and ML responsibly. Recent coverage of alleged gender-based bias in Apple’s lending algorithm shows how unfair lending practices can damage a company’s reputation.

Ultimately, adhering to AI and ML fairness in credit underwriting is good for your business and your customers. You can ensure customers are being served correctly, which benefits your clientele and, in turn, your business. Applying rules for AI and machine learning fairness also protects your business against litigation. Consider the following two precedents:

  1. American Express paid a $96 million settlement for credit discrimination against more than 220,000 of its customers, the result of an action taken by the Consumer Financial Protection Bureau (CFPB).
  2. The auto finance industry has also been fined for discriminatory practices: American Honda Finance Corp. and Ally Bank/Ally Financial (formerly General Motors Acceptance Corp.) paid a $104 million settlement to African-American, Hispanic, Asian, and Pacific Islander borrowers over their discriminatory practices.

These are only two examples of the consequences companies face when they implement unfair AI and ML practices.

It is critical that financial service companies and all credit lenders lend fairly and comply with regulations. In particular, companies adopting AI and machine learning to drive automated loan underwriting require full compliance with fair lending laws and regulations. Fair lending laws and regulations in the U.S. include the following:

  • Equal Credit Opportunity Act (ECOA)
  • Fair Credit Reporting Act (FCRA)
  • Fair and Accurate Credit Transactions Act (FACTA)
  • Fair Housing Act (FHA)
  • Fair and Equal Housing Act (FEHA)
  • Consumer Credit Protection Act (CCPA)

Protected Attributes

Lending laws such as those listed above define protected attributes that shield individuals from discrimination and instill fairness in lending.

Attributes are organized by classes. Protected classes under the Equal Credit Opportunity Act (ECOA) include the following:

  • Age
  • Color
  • Marital status (single or married)
  • National origin
  • Race
  • Recipient of public assistance
  • Religion
  • Sex

Lenders and credit underwriters need to ensure that protected attributes are not used in machine learning models. Even when these attributes aren’t used directly, discrimination may still exist through proxies. Discrimination can be unintentional (disparate impact) or intentional (disparate treatment).

Unintentional Discrimination (Disparate Impact)

Disparate impact occurs when an individual is discriminated against unintentionally or indirectly. A credit decision that disproportionately affects members of a certain protected class constitutes disparate impact. Disparate impact can be the result of proxies: attributes that are not themselves protected but that introduce bias against protected classes.

For example, regulators do not deem postal codes a protected attribute and allow them to be used in lending decisions. However, a postal code can be a proxy for race or religion and unintentionally discriminate against individuals.
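To make that proxy risk concrete, here is a minimal sketch of how a lender might measure how strongly a candidate feature associates with a protected attribute that is held out purely for testing. The postal_code and race columns and the threshold are hypothetical illustrations, not a regulatory standard; the association measure is Cramér's V, a standard statistic for categorical variables.

```python
import numpy as np
import pandas as pd

def cramers_v(x: pd.Series, y: pd.Series) -> float:
    """Cramér's V association between two categorical columns (0 = none, 1 = perfect)."""
    table = pd.crosstab(x, y).to_numpy().astype(float)
    n = table.sum()
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / n
    chi2 = ((table - expected) ** 2 / expected).sum()
    r, k = table.shape
    return float(np.sqrt(chi2 / (n * (min(r, k) - 1))))

# Hypothetical applicant data; race is held out for fairness testing only,
# never used as a model feature.
df = pd.DataFrame({
    "postal_code": ["A1", "A1", "B2", "B2", "A1", "B2", "A1", "B2"],
    "race":        ["x",  "x",  "y",  "y",  "x",  "y",  "x",  "y"],
})

score = cramers_v(df["postal_code"], df["race"])
if score > 0.5:  # illustrative threshold, not a regulatory one
    print(f"postal_code may be acting as a proxy for race (Cramér's V = {score:.2f})")
```

A feature flagged this way is not automatically disallowed, but it warrants further disparate impact analysis before being used in a lending model.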

Intentional Discrimination (Disparate Treatment)

Disparate treatment is the intentional discrimination against an individual of a protected class.

For example, if age or race is used directly as a feature in an AI/ML model, the model is considered discriminatory under ECOA, as both age and race are protected attributes.
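As a simple illustration, a lender could add an automated guardrail that fails a model build whenever a protected attribute appears directly in the feature list. The feature names below are hypothetical, and this check does not catch proxies, which still require the kind of analysis described above.

```python
# ECOA protected attributes, as listed above (column spellings are illustrative).
PROTECTED_ATTRIBUTES = {
    "age", "color", "marital_status", "national_origin",
    "race", "public_assistance", "religion", "sex",
}

def check_features(feature_names: list[str]) -> list[str]:
    """Return any model features that directly match a protected attribute."""
    return [f for f in feature_names if f.lower() in PROTECTED_ATTRIBUTES]

# Hypothetical feature list for a credit model.
features = ["income", "debt_to_income", "age", "payment_history"]
flagged = check_features(features)
if flagged:
    raise ValueError(f"Protected attributes used as model features: {flagged}")
```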

How to Identify Disparate Impact

The 80% Rule and Statistical Purity

How can you identify disparate impact? Methodologies should not favor any group; however, certain groups may be unintentionally favored. Statistical purity is a measure of how neutral or objective the categories in a data sample are. Take color: color is not binary, so a statistically pure data sample should reflect a spectrum of color rather than just black and white.

To help identify the existence of disparate impact, the 80% Rule (also known as the four-fifths rule) is used.

For example, if 650 is considered a prime score, 80% of an ethnic majority group scores above 650, and only 20% of an ethnic minority group scores above 650, then the minority group’s favorable-outcome rate is only 25% of the majority group’s (20% ÷ 80%), well below the 80% threshold, so there is discrimination at play according to the 80% rule. The 80% rule is one of the techniques regulators use to test for fairness.
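The arithmetic behind that example can be expressed as a small helper that compares the rate of favorable outcomes between groups and flags ratios below 0.8. The rates come from the example above; everything else is an illustrative sketch.

```python
def disparate_impact_ratio(favorable_rate_minority: float,
                           favorable_rate_majority: float) -> float:
    """Ratio of favorable-outcome rates; values below 0.8 suggest disparate impact."""
    return favorable_rate_minority / favorable_rate_majority

# From the example above: 80% of the majority and 20% of the minority score above 650.
ratio = disparate_impact_ratio(0.20, 0.80)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.25, well below the 0.80 threshold
assert ratio < 0.8  # the 80% rule flags this outcome distribution
```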

ML Models and Fair Lending Regulations

Lastly, if using AI and machine learning, credit underwriters and lenders must provide explainability for adverse actions, as mandated by lending laws such as ECOA and FCRA. Clear reasons need to be stated as to why an individual was denied credit, ensuring no discrimination was at play.
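As a rough sketch of how such reasons might be produced for a simple linear scoring model, the example below ranks the features that pulled an applicant’s score down relative to a reference applicant and reports the top contributors as adverse action reasons. The feature names, weights, and reason wording are hypothetical; more complex models typically rely on attribution techniques such as SHAP for this.

```python
# Hypothetical linear credit-scoring model: score = sum(weight * value) + intercept.
WEIGHTS = {"income": 0.004, "debt_to_income": -3.0, "missed_payments": -15.0}
REASONS = {
    "income": "Income too low",
    "debt_to_income": "Debt-to-income ratio too high",
    "missed_payments": "Too many missed payments",
}

def adverse_action_reasons(applicant: dict, reference: dict, top_n: int = 2) -> list[str]:
    """Rank features by how much they lowered the score versus a reference applicant."""
    contributions = {f: WEIGHTS[f] * (applicant[f] - reference[f]) for f in WEIGHTS}
    negatives = sorted((c, f) for f, c in contributions.items() if c < 0)
    return [REASONS[f] for _, f in negatives[:top_n]]

applicant = {"income": 30000, "debt_to_income": 0.55, "missed_payments": 4}
reference = {"income": 55000, "debt_to_income": 0.30, "missed_payments": 0}
print(adverse_action_reasons(applicant, reference))
# ['Income too low', 'Too many missed payments']
```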

Fair lending regulations set the standards and rules that companies must abide by to limit and reduce potential discriminatory behavior. This is key given the potential for disparate impact and disparate treatment.

As technology and capabilities like AI and ML integrate into your business, ensure you protect your company and customers by doing the proper due diligence in regulatory compliance.

Interested in learning how our Trust Science solution enables AI and ML fairness including explainable AI?

Get a demo
