By: Elizabeth Sloop

From spam filters and online shopping[1] to ChatGPT[2], artificial intelligence (“AI”) is an everyday conversation topic. Likewise, financial institutions have incorporated AI into their daily operations.[3] For example, lenders use AI-driven tools to automate decisions in unsecured personal loan and credit card underwriting.[4] Increased efficiency, faster decision-making, and other benefits have made AI tools attractive for financial institutions.[5] However, the potential for bias in these tools leading to discriminatory outcomes is a common concern that regulators are watching and seeking to address.[6]

The Consumer Financial Protection Bureau (“CFPB”) issued a circular[7] providing guidance to creditors that use AI in lending decisions.[8] Specifically, the guidance indicates that lenders must provide consumers with the specific reasons for credit denials as required by law, regardless of their use of “complex algorithms” and “predictive decision-making technologies.”[9] The Equal Credit Opportunity Act (“ECOA”), as implemented by Regulation B,[10] prohibits discrimination in credit transactions, including the denial of credit to applicants.[11] When a creditor takes an “adverse action”[12] against an applicant, the applicant has the right under ECOA to a “statement of specific reasons for the action taken.”[13] That statement must include the “principal reason(s) for the adverse action,”[14] a requirement that functions as a tool to “prevent and identify discrimination.”[15] The CFPB has affirmed that these requirements do not change when creditors use complex algorithms to make decisions, and that noncompliance is not excused by the use of technology that makes the decision-making process complicated or opaque.[16]

In its recent circular, the CFPB further advised that simply using its sample forms for adverse action notices is not sufficient for compliance with ECOA unless the reasons listed on the sample form accurately and specifically reflect the reasons the creditor is taking adverse action, such as a denial of credit.[17] When creditors use complex algorithms or other technologies to make lending decisions, they may be relying on data consumers do not expect to affect their ability to access credit, such as data “harvested from consumer surveillance” and data not provided in the credit application.[18] These data may include information collected from consumers’ online activity, purchase history, and other details.[19] Separately, the CFPB has identified risks to consumers from the use of these data.[20] When data outside the applicant’s credit file drive a decision, specificity becomes especially important. To comply with ECOA, lenders cannot rely on a checklist; they must be able to articulate the specific reasons for the adverse action, even if those reasons may surprise the applicant.[21]

As technology evolves, and as more creditors use AI-driven models for extending credit,[22] setting credit limits, underwriting, and other purposes, compliance with the specificity requirements of adverse action notifications will continue to be scrutinized. To ensure compliance, creditors must understand the specific reasons for denying credit rather than relying on the raw output of a complex algorithm or on a sample checklist.[23] Creditors that rely on such measures could be participating in the “latest iteration of unfair practices” by blindly deploying models that may be excluding borrowers based on legally protected characteristics.[24] Instead, the requirement to explain the reasons for an adverse action forces creditors to understand and articulate the specific reasons for the action.[25]
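For illustration only, consider how a compliance team might surface applicant-specific “principal reasons” from an interpretable scoring model rather than a generic checklist. The Python sketch below ranks each input’s contribution to a denial score relative to a portfolio baseline; every feature name, weight, baseline, and reason string is hypothetical and invented for demonstration, not drawn from the circular or from any real underwriting model.

# Illustrative sketch only: ranks each input's contribution to a
# denial score so an adverse action notice can state applicant-
# specific principal reasons. All names and numbers are hypothetical.

WEIGHTS = {            # model coefficients (positive -> raises denial score)
    "debt_to_income": 3.0,
    "recent_delinquencies": 2.0,
    "months_since_last_inquiry": -0.5,
    "credit_utilization": 1.5,
}
BASELINES = {          # assumed portfolio-average value for each input
    "debt_to_income": 0.30,
    "recent_delinquencies": 0.4,
    "months_since_last_inquiry": 12.0,
    "credit_utilization": 0.35,
}
REASON_TEXT = {        # human-readable reason tied to each input
    "debt_to_income": "Debt obligations are high relative to income",
    "recent_delinquencies": "Recent delinquency on one or more accounts",
    "months_since_last_inquiry": "Recent credit inquiries",
    "credit_utilization": "High utilization of existing credit lines",
}

def principal_reasons(applicant: dict, top_n: int = 2) -> list[str]:
    """Return the inputs that pushed this applicant's score most
    toward denial, relative to the portfolio baseline."""
    contributions = {
        name: WEIGHTS[name] * (applicant[name] - BASELINES[name])
        for name in WEIGHTS
    }
    # Only inputs that increased the denial score can be "reasons."
    adverse = sorted(
        ((value, name) for name, value in contributions.items() if value > 0),
        reverse=True,
    )
    return [REASON_TEXT[name] for _, name in adverse[:top_n]]

if __name__ == "__main__":
    applicant = {
        "debt_to_income": 0.55,
        "recent_delinquencies": 2,
        "months_since_last_inquiry": 2.0,
        "credit_utilization": 0.90,
    }
    print(principal_reasons(applicant))

Run on the sample applicant, the sketch prints the two inputs that pushed the score furthest toward denial, which a creditor could then map onto its notice. A production model would of course require a documented and validated attribution method, not these toy weights; the point is only that the specific reasons must be recoverable from the model, not read off a boilerplate form.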

This requirement means that creditors may need to carefully vet the technology they use and consider whether the reasons for an adverse action provided in their notification processes are sufficiently specific. Additionally, because of the risks introduced by relying on a model that may use data the applicant would not expect to affect a credit decision, creditors may wish to analyze and document the reasons for implementing specific technologies in their decision-making processes. To the extent that a lender’s technology contemplates information harvested from consumer surveillance, the lender may wish to pursue alternative models to minimize discrimination risks. To realize the advantages of AI-driven credit models, financial institutions should consider implementing robust compliance management, including vetting, testing, and monitoring, to ensure compliance requirements are met and consumer protection risks are addressed.
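As one concrete example of the kind of testing and monitoring contemplated above, a compliance team might periodically compare approval rates across groups using the “four-fifths” adverse impact ratio, a common first-pass screen in fair lending analysis. This is a minimal sketch with hypothetical group labels and decision data; real fair lending testing involves far more, such as regression-based disparity analysis and searches for less discriminatory alternatives.

# Minimal monitoring sketch (hypothetical data): flags any group whose
# approval rate falls below four-fifths (80%) of the most-favored
# group's rate, a common first-pass fair lending screen.
from collections import defaultdict

def adverse_impact_ratios(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group_label, approved) pairs from past applications."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += approved  # True counts as 1
    rates = {g: approvals[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

if __name__ == "__main__":
    sample = ([("A", True)] * 80 + [("A", False)] * 20
              + [("B", True)] * 55 + [("B", False)] * 45)
    for group, ratio in adverse_impact_ratios(sample).items():
        flag = "REVIEW" if ratio < 0.8 else "ok"
        print(f"group {group}: impact ratio {ratio:.2f} ({flag})")

In the sample data, group B’s impact ratio of 0.69 falls below the 0.8 threshold and would be flagged for review. A screen like this does not establish or disprove discrimination; it simply tells a creditor where to look more closely before a regulator does.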

[1] See Bernard Marr, The 10 Best Examples Of How AI Is Already Used In Our Everyday Life, Forbes (Dec. 16, 2019, 12:13 AM), https://www.forbes.com/sites/bernardmarr/2019/12/16/the-10-best-examples-of-how-ai-is-already-used-in-our-everyday-life/?sh=5ca1656b1171 (listing examples of AI in daily life).

[2] See Introducing ChatGPT, OpenAI (Nov. 30, 2022), https://openai.com/blog/chatgpt.

[3] See Jeff Kearns, AI’s Reverberations Across Finance, Int’l Monetary Fund (Dec. 2023), https://www.imf.org/en/Publications/fandd/issues/2023/12/AI-reverberations-across-finance-Kearns#:~:text=AI%20is%20already%20making%20important,or%20money%20laundering%2C%20he%20said (“AI is already making important financial decisions, such as handling credit card applications, and it’s making rapid inroads in the public and private sectors.”).

[4] See Julie Lee, The Future of AI in Lending, Experian (Jan. 18, 2023), https://www.experian.com/blogs/insights/future-ai-lending/.

[5] See id. (explaining the benefits of AI in lending).

[6] See generally Web Arnold, Analysis: What Lenders Should Know About AI and Algorithmic Bias, Bloomberg L. (Apr. 25, 2023, 4:00 PM), https://news.bloomberglaw.com/bloomberg-law-analysis/analysis-what-lenders-should-know-about-ai-and-algorithmic-bias (“Even if the code or training data appear unbiased, an algorithmically derived lending strategy that disparately impacts protected groups would be considered unlawful under existing antidiscrimination laws, even though those laws don’t cover AI at all.”); Joe Decosmo, How to Control for AI Bias in Lending, Forbes (Oct. 18, 2023, 7:45 AM), https://www.forbes.com/sites/forbestechcouncil/2023/10/18/how-to-control-for-ai-bias-in-lending/?sh=145d517a4926 (“[C]ritics worry that algorithms can embed historically discriminatory lending practices into automated credit decisions.”).

[7] Consumer Financial Protection Circular 2023-03, Consumer Fin. Prot. Bureau (Sept. 19, 2023), https://www.consumerfinance.gov/compliance/circulars/circular-2023-03-adverse-action-notification-requirements-and-the-proper-use-of-the-cfpbs-sample-forms-provided-in-regulation-b/ [hereinafter Circular].

[8] Id.

[9] CFPB Issues Guidance on Credit Denials by Lenders Using Artificial Intelligence, Consumer Fin. Prot. Bureau (Sept. 19, 2023), https://www.consumerfinance.gov/about-us/newsroom/cfpb-issues-guidance-on-credit-denials-by-lenders-using-artificial-intelligence/.

[10] See 12 CFR § 1002.

[11] See 12 CFR Part 1002 – Equal Credit Opportunity Act (Regulation B), Consumer Fin. Prot. Bureau (Aug. 29, 2023), https://www.consumerfinance.gov/rules-policy/regulations/1002/.

[12] 12 CFR § 1002.2.

[13] 12 CFR § 1002.9(a)(2).

[14] 12 CFR § 1002.9(b)(2).

[15] See Circular, supra note 7.

[16] See CFPB Acts to Protect the Public from Black-Box Credit Models Using Complex Algorithms, Consumer Fin. Prot. Bureau (May 26, 2022), https://www.consumerfinance.gov/about-us/newsroom/cfpb-acts-to-protect-the-public-from-black-box-credit-models-using-complex-algorithms/ (“‘Companies are not absolved of their legal responsibilities when they let a black-box model make lending decisions,’ said CFPB Director Rohit Chopra.”).

[17] See Circular, supra note 7.

[18] Id.

[19] See FTC Explores Rules Cracking Down on Commercial Surveillance and Lax Data Security Practices, Fed. Trade Comm’n (Aug. 11, 2022), https://www.ftc.gov/news-events/news/press-releases/2022/08/ftc-explores-rules-cracking-down-commercial-surveillance-lax-data-security-practices.

[20] See Circular, supra note 7 (“The CFPB has underscored the harm that can result from consumer surveillance and the risk to consumers that these data may pose.”); see also US Watchdog to Announce Plans to Regulate ‘Surveillance Industry’, Reuters (Aug. 15, 2023, 12:24 PM), https://www.reuters.com/world/us/us-watchdog-announce-plans-regulate-surveillance-industry-2023-08-15/.

[21] See Circular, supra note 7.

[22] See Kearns, supra note 3 (“Financial institutions are forecast to double their spending on AI by 2027.”).

[23] See Circular, supra note 7 (“The CFPB has also made clear that adverse action notice requirements apply equally to all credit decisions, regardless of whether the technology used to make them involves complex or ‘black-box’ algorithmic models, or other technology that creditors may not understand sufficiently to meet their legal obligations.”).

[24] See Decosmo, supra note 6.

[25] See Circular, supra note 7.
