This article is part of the Under the Lens series
Top Takeaways
The data that powers AI likely has bias baked into it. To expand access to credit and end inequity in mortgage lending, AI programs must be trained to be fair.
Researchers say they’ve developed efficient and fair AI models. But getting lenders to take fairness into account requires investment. “Fairness can pay if you invest the effort to try.”
There is no federal guidance for designing AI so that the data or information it produces is fair and prevents discrimination. AI regulation will likely fall to the states, though Trump’s “One Big Beautiful Bill” may hamper those efforts.
In 1968, the passage of the Fair Housing Act outlawed the decades-old practice of redlining, a system designed to deny mortgages to non-white people based on where they lived—even if they qualified for loans.
The practice’s abolishment became a hallmark achievement of the Civil Rights Movement, but the legacy of redlining casts a long shadow that continues to drive inequity in homeownership.
Black, Brown, and lower-income loan applicants still experience unfair and unequal treatment in the mortgage lending process, which directly affects how much wealth they will have in the future—for themselves and their families.
And while policy and tech interventions have been deployed over the decades to curtail discriminatory practices, those interventions haven’t moved the needle. The difference in homeownership rates between Black and white households—also called the homeownership gap—is larger today than it was in 1960, when segregation and redlining were still legal.
Enter artificial intelligence (AI), which has been widely adopted within the mortgage industry to standardize and streamline the lending process. It has been used to automate tasks like loan processing and property valuations, and to improve the accuracy of borrower credit risk assessments.
Some housing advocates, for their part, believe AI has the potential to overcome the human biases embedded within the mortgage system, and they are pressing the industry to not only build fairness into their models but also to prioritize it as an objective.
Designing AI to be Fair
As lenders explore how best to optimize their performance and expand access to credit, many are looking to incorporate nontraditional data like rent and utility bill payments into the underwriting process and embed AI analysis into their risk assessments, either for benchmarking or decision making.
Traditional credit scoring models have long been criticized for being discriminatory toward communities of color, who were historically deemed “high risk” and largely shut out of the credit-building system. These communities were stuck using fringe lenders, who targeted them for subprime loans and only reported negative data to credit bureaus.
“For a lot of people, especially [those] who are underserved by traditional scoring models with limited credit history or non-traditional income sources, those newer, more sophisticated models can help to extend their access to credit by offering more personalized and inclusive credit evaluations,” says Linna Zhu, a senior research associate in the Housing Finance Policy Center at the Urban Institute, a Washington-based economic and social policy think tank.
But that will only be true if the people designing AI models make fairness their objective.
Research from the Urban Institute found that lending firms primarily use AI to maximize overall productivity—for example, the volume of loans processed—often at the expense of fairness. Consequently, AI-driven lending models used today often produce evaluations that skew toward borrowers with higher incomes—in other words, privileging white borrowers with longer, more stable credit histories.
“When you think about any technological application, because of the history of discrimination, the data that is powering the system is already tainted,” says Michael Akinwumi, chief AI officer at the National Fair Housing Alliance. “AI is like a mirror that reflects what is right in front of it, so all it can do is to reflect the patterns of marginalization that you have in the data.”
Bias, says Zhu, has two potential sources: “One is [that] the data could be already biased, and using that to train [an AI model] is like the ‘garbage in, garbage out’ theory in the AI space,” she says. “For example, you are training the model using the existing data. If you want to ask me the value of property A, and the method is, you need to find five or six comparable properties in that neighborhood to help you make that estimate, if it’s in a redlined neighborhood, then the property value has already baked in longstanding disparities.”
Zhu identified the second source as algorithmic bias. “Without letting the computer know what fairness metrics you want to include, the computer itself doesn’t know how to define fairness, so you have to tell the computer intentionally that you want to balance the efficiency and fairness in your model design so that [it] can take that into account.”
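To make Zhu’s point concrete, the sketch below shows one simple fairness check a modeler could run on a lending model’s decisions: comparing approval rates across groups. The group labels, decision data, and the 0.80 review threshold are illustrative placeholders, not figures from any actual lender or from the Urban Institute’s research.

```python
# Minimal illustration: measuring one simple fairness metric (approval-rate
# parity) on model decisions. Group labels, records, and the 0.80 threshold
# are hypothetical placeholders, not values from any real lending system.

def approval_rate(decisions):
    """Share of applicants approved (decisions are 1 = approve, 0 = deny)."""
    return sum(decisions) / len(decisions)

def adverse_impact_ratio(decisions_by_group, reference_group):
    """Ratio of each group's approval rate to the reference group's rate."""
    ref_rate = approval_rate(decisions_by_group[reference_group])
    return {g: approval_rate(d) / ref_rate for g, d in decisions_by_group.items()}

# Hypothetical model outputs for two borrower groups.
decisions = {
    "reference": [1, 1, 0, 1, 1, 0, 1, 1],
    "protected": [1, 0, 0, 1, 0, 0, 1, 0],
}

ratios = adverse_impact_ratio(decisions, "reference")
for group, ratio in ratios.items():
    flag = "review" if ratio < 0.80 else "ok"  # 0.80 is a common rule of thumb
    print(f"{group}: approval ratio vs. reference = {ratio:.2f} ({flag})")
```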
A joint study by the National Fair Housing Alliance and FairPlay AI, a California-based financial technology firm that produces fair lending analysis, designed machine learning models for mortgage underwriting and pricing with the explicit objective of improving fairness without sacrificing efficiency. Based on the study’s preliminary findings, “We established that that twin objective is achievable,” says Akinwumi, a co-author of the study.
Researchers say they were able to produce a guiding methodology—dubbed “Distribution Matching”—that trained machine learning models to produce outputs for protected groups that were as close as possible to the outputs produced for the control group, limiting disparity and preserving the models’ accuracy. “We often frame this issue as a tension between efficiency and fairness, but maybe, instead of framing it as a trade-off, they can coexist,” says Zhu.
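The study’s full methodology is beyond the scope of this article, but the sketch below illustrates the general idea of a distribution-matching penalty: a training loss that rewards accuracy while also penalizing divergence between the score distributions the model produces for protected and control groups. It is a simplified illustration with invented data and weights, not the NFHA/FairPlay implementation.

```python
# Illustrative sketch of a distribution-matching style objective: an accuracy
# term plus a penalty on the gap between protected- and control-group score
# distributions. Data, weights, and group labels are hypothetical.
import numpy as np

def quantile_distance(scores_a, scores_b, n_quantiles=20):
    """Average gap between two score distributions, compared quantile by
    quantile (a crude one-dimensional Wasserstein-style distance)."""
    qs = np.linspace(0.05, 0.95, n_quantiles)
    return float(np.mean(np.abs(np.quantile(scores_a, qs) - np.quantile(scores_b, qs))))

def training_loss(pred, actual, group, fairness_weight=0.5):
    """Accuracy term (mean squared error) plus a fairness term that penalizes
    divergence between protected-group and control-group score distributions."""
    accuracy_term = float(np.mean((pred - actual) ** 2))
    fairness_term = quantile_distance(pred[group == "protected"],
                                      pred[group == "control"])
    return accuracy_term + fairness_weight * fairness_term

# Hypothetical scores for a small batch of applicants.
rng = np.random.default_rng(0)
group = np.array(["protected"] * 50 + ["control"] * 50)
actual = rng.uniform(0, 1, 100)                          # "true" repayment outcomes
pred = np.clip(actual + rng.normal(0, 0.1, 100), 0, 1)   # model scores

print(f"combined loss: {training_loss(pred, actual, group):.4f}")
```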
GreenLyne, which develops AI lending programs for banks and credit unions, is another financial technology firm that is tackling historical bias in the lending space as a core objective.
The company fashioned its name as the antithesis of redlining, the legacy of which serves as its primary modeling problem: because redlining excluded entire communities from accessing financial services, data for these groups of potential borrowers is consequently sparse. AI models that do not disaggregate the data will produce outcomes that privilege the borrower profiles contributing the most data—often white borrowers with longer, more stable credit histories. Borrowers from historically disenfranchised communities of color, with shorter credit histories due to discrimination, are then measured against metrics that inherently discount them.
“When you build a model, the model is unduly influenced by whatever corner of the population delivers the most data,” says Syeed Mansur, GreenLyne’s CEO.
“It’s not even about bias as much as it is that you have a lot of data for this particular ‘shape,’ and you have less data for this other ‘shape’,” he says, referring to the different borrower demographics. “What’s going to happen is, when you train a model, the model is going to be very confident about making predictions for this shape and less confident making predictions about this other shape.”
Mansur argues that only by designing the model to pay equal attention to the sparse data can we build its confidence to produce more tailored outcomes.
GreenLyne designed its AI model to separate the data and create predictions for each shape, which Mansur says helps train the AI to become more elaborate. The more elaborate the technology, the more capable it is of developing nuance, thereby making the model’s predictions more precise.
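Mansur does not spell out GreenLyne’s architecture, but one simple, hypothetical way to “pay equal attention” to a sparse group is to reweight training records so that each borrower “shape” carries equal total weight, as in the sketch below; the group names and counts are invented for illustration.

```python
# A simplified illustration of giving a sparse borrower "shape" equal say in
# training: weight each record inversely to its group's size so every group
# contributes equally. Group names and counts are hypothetical; GreenLyne's
# actual model design is not described in this level of detail.
from collections import defaultdict

def group_weights(groups):
    """Weight each record so that every group contributes equally overall."""
    counts = defaultdict(int)
    for g in groups:
        counts[g] += 1
    n_groups = len(counts)
    return [1.0 / (n_groups * counts[g]) for g in groups]

groups = ["long_credit_history"] * 900 + ["thin_file"] * 100
weights = group_weights(groups)
print(f"weight per long-history record: {weights[0]:.6f}")
print(f"weight per thin-file record:    {weights[-1]:.6f}")
# Each group's weights sum to 0.5, so the thin-file "shape" gets equal
# influence in training even though it supplies far fewer records.
```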
And rather than follow risk-based loan pricing—in which the lender offers less favorable loan terms, such as a higher interest rate, to cover what it has determined to be higher risk from the applicant—the model uses risk-based loan sizing to make its predictions, which, Mansur argues, is more inclusive of different borrower profiles.
“If I gave you a $500 million loan, the odds that I will see that loan back are very slim. Now, if I give you a $1 loan, the odds that I’m going to get it back tomorrow are very high. So the $1 side, I make no income because there’s no interest. At $500 million, I’m going to be out on the street because I’m never going to see that money back,” says Mansur. “Somewhere in the middle is the sweet spot where the loan is maximally affordable for you and maximally profitable for me, and finding that sweet spot boils down to being able to make a very, very precise prediction of your risk. That is what risk-based loan sizing is.”
Mansur says that risk-based loan sizing predictions can drive access and inclusion in a responsible manner. Rather than rejecting a loan application—or charging an impossibly high (if not outright illegal) interest rate—lenders can instead adjust the loan amount to adequately cover their risk and extend some credit to the borrower.
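As a rough illustration of the “sweet spot” Mansur describes, the toy example below searches over loan sizes for the amount that maximizes a lender’s expected return under a simple, invented repayment model; the risk formula, interest rate, and income figures are placeholders, not GreenLyne’s actual model.

```python
# Toy version of risk-based loan sizing: rather than approving or denying a
# fixed amount, search for the loan size that maximizes expected return given
# a made-up relationship between loan burden and repayment odds.

def repayment_probability(loan_amount, monthly_income):
    """Toy risk model: repayment odds fall as the payment burden rises."""
    burden = loan_amount / (monthly_income * 12 * 30)  # crude 30-year comparison
    return max(0.0, 1.0 - burden)

def expected_profit(loan_amount, monthly_income, rate=0.06):
    """Expected interest income minus expected loss if the borrower defaults."""
    p_repay = repayment_probability(loan_amount, monthly_income)
    return p_repay * (loan_amount * rate) - (1 - p_repay) * loan_amount

def size_loan(monthly_income, max_amount=1_000_000, step=5_000):
    """Pick the loan size that maximizes expected profit for this borrower."""
    candidates = range(step, max_amount + step, step)
    return max(candidates, key=lambda amt: expected_profit(amt, monthly_income))

print(f"suggested loan size: ${size_loan(monthly_income=5_000):,}")
```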
And to make smaller loans more attractive—thereby opening access for home buyers from more low- or middle-income backgrounds—GreenLyne’s model also automates the parts of the process that require intensive human labor, so that loans can be originated in minutes (rather than days), at a fraction of the costs.
In the first quarter of this year, the Mortgage Bankers Association reported that it cost $12,579 in total production expenses to originate a loan—the lion’s share of which is for personnel, regardless of the loan size—and that loan production typically takes two weeks. These expenses are partly absorbed by banks—and, to some extent, by borrowers, says Mansur—making it more costly to extend financing for smaller-sized loans.
What’s more, he says, “Instead of completely taking the human being out of the loop, the human being is immersed in the loop. They can do quality control and quality assurance over all the automation, but we’ve freed them up and reduced the cost.”
Human input remains a principal concern in the AI space, particularly when considering how much of any given process to automate.
“[AI models] should be the enablers, not replacers,” says Zhu of the Urban Institute. “You still need the human expertise.”
But the greater concern, according to Akinwumi of the National Fair Housing Alliance, is “automation bias.”
“Because,” says Akinwumi, “as humans, when we interact, there is opportunity for you to question my judgment and ask questions. But when a machine generates the output, users [think] it’s got to be right. The interaction is usually one-sided. When there is full automation, there is more reliance on the final output that is coming out of the system.”
Including the biased outputs.
Who’s Accountable for Ensuring AI Fairness?
In 2024, the percentage of lenders who report using some form of AI in their work—either to automate manual processes or to adopt machine learning to accurately classify and process documents—rose by 23 percentage points over the year prior, from 15 percent to 38 percent, more than doubling within a single year.
But not all those lenders have openly embraced AI in their underwriting. The Urban Institute report revealed that adoption rates were lower among smaller and mission-oriented lenders like community development financial institutions.
“For lenders, they are the ones dealing with the customers on a daily basis, so they may want to test out and get a real sense of what AI is and how it could affect their business, at what cost and at what risk,” says Zhu.
It’s a level of caution that Mansur believes should be equally adopted across the housing finance industry, not just in the mortgage space.
“No startup in the pharmaceutical and biotech industries ever says, ‘Go fast and break things.’ There’s no Mark Zuckerberg or Travis Kalanick type of philosophy with an M.D./Ph.D. who’s starting a biotech company,” says Mansur. “We need to take those cues from the biotech industry, not from the IT industry. If we go fast and break things, we end up breaking someone’s financial life.”
With the financial services sector poised to invest $97 billion in AI by 2027, it’s perhaps more critical than ever to have formal guidance on modeling fairness—that is, ensuring that the data and the design are free from biases that would replicate and perpetuate discrimination.
Under the Biden administration, mortgage giants and government-sponsored enterprises Freddie Mac and Fannie Mae—which control about half of the U.S. residential mortgage market—were committed to addressing some of the disparities in access through algorithmic fairness techniques and special-purpose credit programs, which develop credit underwriting criteria that are more sensitive to low- and moderate-income applicants.
But under the new administration, fairness seems far from a priority. In one of his first actions after returning to office, Donald Trump rescinded Biden’s 2023 executive order establishing AI standards in the U.S.
Shortly after, newly sworn-in Federal Housing Finance Agency (FHFA) Director Bill Pulte terminated the special purpose credit programs at Fannie and Freddie, and put on leave staffers in the Office of Equal Opportunity and Fairness and the Office of Minority and Women Inclusion (among others), “people who were responsible for fairness in the mortgage market,” says Kareem Saleh, FairPlay AI’s CEO and co-founder. Pulte also dissolved the Division of Public Interest Examination, which oversaw affordable housing, consumer protection, and diversity initiatives.
“There’s no question that there has been a retreat from supervision and enforcement of fairness issues in the housing market at the federal level with the change of administration,” Saleh says.
While Pulte has installed himself as chairman of both firms (after firing 14 members of their boards of directors), the Trump administration is still considering privatizing the two. At that point, whether the firms—which Zhu says have been “pioneers” in piloting AI to employ more inclusive data in assessing creditworthiness—would resume fairness initiatives on their own is anyone’s guess. “I suppose, as private organizations, maybe they could resume some of these initiatives,” says Saleh, “although, it’s unclear what their motivation to do so would be.”
Saleh predicts that AI regulation will fall to the states, particularly state banking and insurance commissioners, since home insurance—regulated at the state level—is often a prerequisite for a mortgage. “There is actually a role for state insurance commissioners to play here,” he says.
The Stanford Institute for Human-Centered Artificial Intelligence reported that the number of state-level AI-related laws jumped from 49 to 131 between 2023 and 2024, and is expected to grow. But Trump’s latest agenda bill, titled the One Big Beautiful Bill Act, will seek to bar states from enforcing “any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems” for the next 10 years, CNN reported.
“Without the regulatory guardrails, without the accountability guidelines, it’s really hard for those companies or modelers to have that in their design phase, because they themselves aren’t very sure how to define fairness,” says Zhu. “And that’s a question largely dependent on who your audience is: Are we talking about low-income borrowers, or low- and moderate-income borrowers? Different criteria would affect your definition, so often modelers don’t have a clear sense.”
In contradiction to most of the administration’s actions to date, however, the White House Office of Management and Budget (OMB)—which implements the president’s budget and oversees the performance of all federal agencies—recently released new policies on federal agency use of AI, identifying housing as a “high impact” area subject to “strong safeguards for civil rights, civil liberties, and privacy,” as directed by the Accelerating Federal Use of AI through Innovation, Governance, and Public Trust memorandum.
“The guidance is saying that if you’re going to use AI in housing, you must show that you have oversight in place to mitigate civil rights and privacy concerns,” says Akinwumi. “The expectation is that there will be pressure on the tech companies to comply, because the agencies also have to comply.”
But how federal agencies will manage civil rights concerns while also actively waging war against DEI remains an open question for Akinwumi. “There’s no way you attack Diversity, Equity and Inclusion without attacking civil rights as a principal right,” he says. “It’s counterintuitive that the same administration that is attacking the tenet of civil rights is also saying that if you use AI in housing, then you also have to address civil rights concerns. So that’s really why a lot of things beg for the implementation details to see exactly what they mean.”
The National Fair Housing Alliance will release its own template in early June as a recommendation for OMB to practically implement oversight for civil rights, civil liberties and privacy for AI use in housing.
Whether the OMB adopts the alliance’s recommendations, or even seriously oversees civil rights and privacy considerations in AI applications in housing, Saleh believes that lenders will still take up at least some measure of compliance anyway.
“There are a lot of lenders who recognize that the law remains the law, even [if] there’s no cop on the beat,” he says. “Some institutions will choose to continue to maintain their fairness investments, in part because they think it’s just good business to do so: If your models are systematically excluding people who would have paid you back, that’s a business issue, so I don’t think there will be a complete retreat from fairness and inclusion by lenders themselves.”
Lenders that have worked with FairPlay—which also helps lenders optimize their AI models to improve the fairness of their underwriting and pricing strategies—have been able to increase their approval rates by 10 percent, Saleh told Shelterforce.
“Fairness can pay,” he says, “if you invest the effort to try.”
Still, he says, the danger of losing investment in fairness technologies and strategies without an encouraging regulatory environment is high.
Without some level of government subsidy to incentivize this type of investment—which Zhu says is time-consuming and not particularly cost-friendly—“the next question is, even though we know doing this type of search may lead to more or better results to achieve both efficiency and fairness, who should bear the cost? That’s the accountability issue.”
And left to its own devices, the private sector rarely holds itself accountable.
“The industry is gradually shifting the liability [of harm] onto users,” says Akinwumi. “I don’t know how many people these days—myself included—have the time to read the Terms of Use for any technology product. [But] now that the industry is taking this approach, unless we have users or a public that is literate about AI, more and more harms are likely to be done.”
“Even though [the current administration] deemphasizes the importance of equity and fairness, it’s still very important to think about how we move forward in the AI space,” says Zhu. “AI won’t go away. It will affect our lives not just in housing, but in every sector.”
Tech Terms—A Glossary
Artificial Intelligence (AI)—Technology that enables computers and machines to simulate human learning, comprehension, problem solving, decision making, creativity, and autonomy. There are AI models that automate specific tasks or processes, and models that can optimize performance over time. [IBM]
AI Model—A program that has been trained on a set of data to recognize patterns or make decisions without human intervention. [IBM]
Algorithm—Detailed computational instructions that describe how to solve a problem or perform a specific task. [AP Stylebook]
Machine Learning—An AI process that uses algorithms to analyze large amounts of data, learn from the insights, and then make informed decisions. [Google] As they are exposed to more data, machine learning algorithms improve performance over time.
Training AI Models—The “learning” in machine learning is achieved by training models on sample datasets consisting of real data. [IBM]
Modeling Fairness—The process of ensuring that AI models that use machine learning are unbiased and do not discriminate against individuals or groups based on sensitive attributes like race, gender, or age. [AI overview on Google]
Generative AI, or Gen AI—Uses prompts or existing data to create new written, visual, or auditory content. The Mortgage Bankers Association in 2024 recommended that Congress target its legislation to this type of AI because it can be used to deceive people and financial institutions.