The Uneven Terrain of AI in Lending
The recent revocation of AI standards has raised concerns about bias in AI lending practices. Discrimination in mortgage lending continues to disproportionately affect Black and Brown borrowers. With proper oversight, AI can be a tool for equity, but vigilance is necessary.
USAGE · FUTURE · POLICY · TOOLS
AI Shield Stack
10/30/2025 · 2 min read


On day one of the new administration, President Trump revoked former President Biden’s 2023 executive order on U.S. AI Standards, which outlined AI safety, disclosure, and risk management principles. This lack of regulatory oversight, coupled with the continued explosion of AI and machine learning technologies, has put the future of AI development in the U.S. at a critical inflection point for tech companies, investors, and regulators alike.
The financial services industry is projected to spend $97 billion on AI by 2027, a 29% compound annual growth rate from 2023. As artificial intelligence reshapes the financial landscape, we face a key question: will these technologies maintain the status quo of inequity, or will they be used to dismantle long-standing injustices and create a more equitable future?
This exponential growth could equalize and democratize financial opportunity for marginalized groups. However, without conscious oversight by regulators and developers, alongside commitments by investors to guard against bias, AI could exacerbate existing inequities for low-income Black and Brown communities.
In an unequal society, AI tools risk simply reflecting existing biases. Bias and discrimination in AI are often not the result of explicit design but stem from factors like a lack of diversity on design teams, unrepresentative or biased data, or simple human error. This is evident in a slew of notable cases, including facial recognition models that fail to recognize darker skin tones, predictive policing systems that overtarget neighborhoods of color, and tenant screening algorithms that prevent formerly incarcerated individuals from obtaining housing.
Mortgage lending is a prime example of the risks and opportunities associated with AI. The Fair Housing Act of 1968 outlawed discrimination in mortgage lending for all protected statuses. Yet, according to a 2024 Urban Institute analysis of Home Mortgage Disclosure Act data, Black and Brown borrowers were more than twice as likely to be denied a loan as white borrowers. Lending discrimination has substantial consequences for Black and Brown communities: African American and Latinx borrowers are charged interest rates nearly 5 basis points higher than their credit-equivalent white counterparts, amounting to $450 million in extra interest per year.
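To make the scale of that markup concrete, a basis point is one hundredth of a percentage point. A minimal arithmetic sketch, where the $300,000 loan balance is an illustrative assumption rather than a figure from the studies above:

```python
# Illustrative only: estimate the extra annual interest implied by a rate
# markup expressed in basis points. The loan balance is an assumed example.

def extra_annual_interest(balance: float, markup_bps: float) -> float:
    """Extra interest paid per year on `balance` given a markup in basis points."""
    return balance * (markup_bps / 10_000)  # 1 bp = 0.01% = 1/10,000

# Assumed example: a $300,000 mortgage carrying a 5 bp markup.
per_loan = extra_annual_interest(300_000, 5)
print(f"${per_loan:,.0f} extra per borrower per year")  # $150
```

Small per-borrower amounts like this, spread across millions of loans, are how the aggregate cost reaches hundreds of millions of dollars a year.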
With the rise of artificial intelligence and machine learning, credit risk assessment and decision-making for loan applications and refinancing are increasingly delegated fully to machines and algorithms. The proprietary nature of these algorithms and the complexity of their construction allow discrimination to hide behind their supposed objectivity. These "black box" algorithms can produce life-altering lending outputs with little insight into their inner workings.
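One simple transparency check that auditors apply to otherwise opaque models is the adverse-impact ratio: comparing approval rates between groups, with values below 0.8 (the "four-fifths" rule of thumb from U.S. employment-selection guidelines) flagging potential disparate impact. A hedged sketch with made-up data, not any lender's actual audit method:

```python
# Illustrative fairness check: the adverse-impact ratio compares approval
# rates between two groups. A ratio below 0.8 is a common red flag for
# disparate impact. All data below is hypothetical.

def approval_rate(decisions: list) -> float:
    """Fraction of True (approved) decisions in a list of booleans."""
    return sum(decisions) / len(decisions)

def adverse_impact_ratio(group_a: list, group_b: list) -> float:
    """Ratio of the lower group's approval rate to the higher group's."""
    ra, rb = approval_rate(group_a), approval_rate(group_b)
    lo, hi = sorted((ra, rb))
    return lo / hi

# Hypothetical audit: one group approved 60% of the time, the other 80%.
group_a = [True] * 6 + [False] * 4   # 60% approved
group_b = [True] * 8 + [False] * 2   # 80% approved
ratio = adverse_impact_ratio(group_a, group_b)
print(f"adverse-impact ratio = {ratio:.2f}")  # 0.75 -- below the 0.8 threshold
```

Checks like this do not open the black box, but they let outsiders test its outputs for disparate outcomes even when the model itself is proprietary.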
When used correctly and with appropriate oversight, AI still presents a promising opportunity for redressing inequity. Lending is a significant engine of economic mobility and opportunity for marginalized communities, especially for the 45 million Americans who are credit-underserved or unserved. There are optimistic signs that AI could help drive economic inclusivity: AI tools have shown fairer approval and denial rates compared to face-to-face lending.
Leading academic institutions are developing Less Discriminatory Algorithmic Models (LDAs) that account for fairness and equity in novel ways, such as MIT's SenSR model and explainable AI (XAI) methods. However, these initiatives must be supported by the investment community and beyond for AI to be deployed ethically. AI Shield Stack can help ensure that your AI systems are developed and deployed with fairness and accountability in mind.