Meidas Exclusive: Federal Agencies Focusing on AI Bias in Consumer Lending

In the debate about which industries are most vulnerable to ethical lapses in artificial intelligence (AI), there is a clear consensus that the financial services industry, including consumer lending, sits at or near the top of the list. Today, amid heightened concern that increased economic inequality will be one of the many collateral hazards of the COVID-19 pandemic, there is a parallel fear that biased AI will exacerbate that inequality through adverse actions in mortgage lending, small business lending, student loan lending, automobile lending, and debt collection practices across all of those sectors.

To be sure, long before the term AI exploded into the public’s consciousness and vernacular, the financial services industry was focused on managing bias in decision-making models, which the Federal Reserve defined in 2011 as “a quantitative method, system, or approach that applies statistical, economic, financial, or mathematical theories, techniques, and assumptions to process input data into quantitative estimates” (SR 11-7, April 4, 2011).
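The SR 11-7 definition is easy to ground in code. The sketch below is a deliberately toy credit-scoring model, with feature names and weights invented purely for illustration rather than drawn from any real lender’s system; it shows the essence of the definition: input data processed into a quantitative estimate.

```python
import math

# Toy illustration of SR 11-7's definition of a "model": a quantitative
# method that processes input data into a quantitative estimate. Here
# the estimate is a repayment probability from a hand-built logistic
# regression. Feature names and weights are invented for illustration.

WEIGHTS = {
    "debt_to_income": -2.5,       # higher DTI lowers the estimate
    "years_of_credit": 0.4,       # longer credit history raises it
    "recent_delinquencies": -1.8, # recent delinquencies lower it
}
INTERCEPT = 1.0

def repayment_probability(applicant: dict) -> float:
    """Process input data into a quantitative estimate: P(repayment)."""
    z = INTERCEPT + sum(w * applicant[name] for name, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))  # logistic link function

print(repayment_probability(
    {"debt_to_income": 0.35, "years_of_credit": 6, "recent_delinquencies": 0}
))  # ~0.93 for this hypothetical applicant
```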

Fast forward to April 2020. In its annual Fair Lending Report, the Bureau of Consumer Financial Protection (BCFP) made clear that, as part of its mission to ensure “that markets for consumer financial products and services operate transparently and efficiently to facilitate access and innovation,” it is watching AI closely: “[O]ne area of innovation the Bureau is monitoring for fair lending and access to credit issues is artificial intelligence (AI), and more specifically, machine learning (ML), a subset of AI.”

The BCFP’s report shines a light on a fundamental predicate of fair and unbiased AI: “explainability”. In the context of AI used in consumer lending, “explainability” refers to the ability of financial institutions to accurately explain to consumers how a complex AI model arrived at the decision on their credit applications. This kind of explainability, to date largely imperfect, is vital to achieving public trust in AI and in the systemic decisions it powers.
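For a simple, fully transparent model, explainability can be as direct as ranking each feature’s contribution to the score. The sketch below, reusing the hypothetical weights from the toy model above plus an assumed baseline applicant, illustrates one common approach to generating adverse-action reasons; the far harder problem the Bureau’s report contemplates is producing explanations of comparable fidelity for complex, opaque ML models.

```python
# One simple explainability technique for a transparent linear model:
# compute each feature's contribution to the score relative to a
# baseline applicant, then report the largest negative contributors as
# adverse-action reasons. Weights and baseline values are hypothetical,
# carried over from the toy model above.

WEIGHTS = {"debt_to_income": -2.5, "years_of_credit": 0.4,
           "recent_delinquencies": -1.8}
BASELINE = {"debt_to_income": 0.20, "years_of_credit": 10,
            "recent_delinquencies": 0}

def adverse_action_reasons(applicant: dict, top_n: int = 2) -> list:
    """Return the features that pushed this applicant's score down most."""
    contributions = {name: w * (applicant[name] - BASELINE[name])
                     for name, w in WEIGHTS.items()}
    # Most negative contributions = strongest reasons for the denial.
    worst = sorted(contributions, key=contributions.get)[:top_n]
    return [f"{name} (score impact {contributions[name]:+.2f})" for name in worst]

print(adverse_action_reasons(
    {"debt_to_income": 0.55, "years_of_credit": 2, "recent_delinquencies": 3}
))
# ['recent_delinquencies (score impact -5.40)',
#  'years_of_credit (score impact -3.20)']
```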

AI ethicists have long recognized the challenges of eliminating bias in an environment that relies heavily on “black box” algorithmic models and is hampered by limits on access to essential data about protected groups. It is certainly encouraging that US governmental institutions have recognized their collective need to monitor, recognize, and eliminate biased adverse decisions across the many consumer lending markets. The battle to ensure that AI does not deepen economic inequality will depend on their success.
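For concreteness, one widely used screening test for biased adverse decisions is the adverse impact ratio, sometimes called the “four-fifths rule”: compare approval rates across demographic groups and flag ratios below 0.8. The sketch below uses fabricated group labels and outcomes solely to show the arithmetic; as noted above, running this analysis in the real world is constrained by limited access to protected-class data.

```python
from collections import defaultdict

# A widely used screening check for biased adverse decisions: the
# adverse impact ratio ("four-fifths rule"). Group labels and outcomes
# below are fabricated solely to demonstrate the arithmetic; computing
# this for real requires the protected-class data whose limited
# availability is noted above.

decisions = [  # (demographic group, application approved?)
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, approved = defaultdict(int), defaultdict(int)
for group, was_approved in decisions:
    totals[group] += 1
    approved[group] += was_approved

rates = {g: approved[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())
print(rates)                                  # {'group_a': 0.75, 'group_b': 0.25}
print(f"adverse impact ratio = {ratio:.2f}")  # 0.33 -- below the 0.80 threshold
```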

About the contributor: 

Mitchell Mandell is a commercial litigation and trial attorney and a Senior Partner at Zumpano Patricios & Popok, PLLC. He has extensive experience handling matters around the world in a variety of industries, including technology, artificial intelligence, robotics, investment banking, commercial banking, healthcare, fashion, entertainment, and hospitality, and across myriad substantive areas. He is a recognized authority on the legal and ethical issues in artificial intelligence and robotics.

You can learn more about the law firm of Zumpano Patricios & Popok, PLLC here.
