
TABLE OF CONTENTS

I. Overview and Background
II. Roundtables with Regulators
III. Use-cases Roundtable: Capital Markets
IV. Use-cases Roundtable: Housing and Insurance
V. Use-cases Roundtable: Financial Institutions and Nonbank Firms
VI. Use-cases Roundtable: National Security and Illicit Finance
VII. Bipartisan Committee Staff Takeaways
VIII. Appendix
House Committee on Financial Services
Bipartisan Working Group on Artificial Intelligence
AI Innovation Explored: Insights into AI Applications in
Financial Services and Housing

EXECUTIVE SUMMARY

In 2022, the public began to experience how artificial intelligence (AI) could not only
dramatically change the lives of individuals but could revolutionize every aspect of our economy. The most
recent development in AI technology that has captured the public’s attention is “Generative AI” or Gen AI.
Members of the U.S. House of Representatives Committee on Financial Services (Committee) recognized
the significant impact of Gen AI, having previously studied AI during the 116th and 117th Congresses at 10
different hearings held by the Task Force on Artificial Intelligence. The Committee considered how the financial
services and housing industries, as well as government agencies, used AI to more effectively do their jobs, and
how individuals across America and around the world both interacted with and experienced AI in their financial
lives.

However, the emergence of Gen AI and the massive investments being made require the Committee
to consider the benefits, risks, and consequences of AI. This includes examining the existing statutory and
regulatory framework to determine whether it is sufficient to safeguard our financial and housing markets. In
response to this watershed moment in AI development, in January 2024, Chair Patrick McHenry and Ranking
Member Waters established the bipartisan AI Working Group (Working Group) composed of 12 Members.1
The Republican Members included Chairman Patrick McHenry (NC-10), Congressman French Hill (AR-
02), Congresswoman Young Kim (CA-40), Congressman Mike Flood (NE-01), Congressman Zach Nunn
(IA-03), and Congresswoman Erin Houchin (IN-09). The Democratic Members included Ranking Member
Maxine Waters (CA-43), Congressman Stephen F. Lynch (MA-08), Congresswoman Sylvia Garcia (TX-29),
Congresswoman Ayanna Pressley (MA-07), Congressman Sean Casten (IL-06), and Congresswoman Brittany
Pettersen (CO-07).

The Working Group conducted six roundtables focused on AI use cases across the financial services
industry, including the range of benefits and risks the technology poses, and the hurdles to adopting the
technology.

Federal Regulators. During the two sessions held with federal regulators, panelists responded to
concerns that AI could lead to bias and discrimination and make it harder to detect such outcomes due to a lack
of explainability. Regulators generally noted that the use of AI did not absolve entities from complying with
anti-discrimination laws. Several regulators commented that regulated entities are expected to follow all laws,
including anti-discrimination and other consumer protection laws, in a tech-neutral manner.

Capital Markets. In general, market participants stated they are taking a measured approach to
implementing AI technology in certain aspects of their business. Market participants shared that they are still in
the early stages of this technology’s capabilities. However, many participants reported that they have been
using ML models for upwards of 10 years at increasing rates for data analysis and are exploring the potential
for new advancements in AI’s capabilities to create new use cases in capital markets.

1 U.S. House of Representatives, "McHenry, Waters Announce Creation of Bipartisan AI Working Group," House Financial Services Committee (Jan. 11, 2024).


Housing and Insurance. During the housing and insurance session, participants highlighted how AI has
facilitated a major shift in housing and insurance products and services. This shift has allowed for new benefits
and conveniences to consumers, but has also presented fair housing, consumer protection, and other challenges.
For example, businesses are deploying AI to underwrite mortgages and insurance policies, screen tenants,
simplify and enhance the overall customer experience, and perform data analytics that guide their responses to
consumers and risks.

Financial Institutions and Nonbank Firms. This roundtable included two panels. The first panel focused
on specific use cases by financial institutions of all sizes, specifically in loan underwriting, customer service,
fraud detection, and debt collection. The second panel focused on the AI lifecycle at a financial institution or nonbank, from acquisition of the technology through development and integration. Members and panelists discussed
how financial institutions use AI and comply with anti-discrimination laws, as well as the need for cybersecurity
and privacy safeguards.

National Security. During this session, panelists discussed how bad actors can leverage AI to
compromise financial institutions’ defenses and how these financial institutions are using AI to respond to
threats. In addition, the panel explored how financial institutions use AI to comply with their existing Bank
Secrecy Act/Anti-Money Laundering (BSA/AML) responsibilities. Collectively, the six roundtables along
with the off-site visit to the Massachusetts Institute of Technology (MIT) provided the Working Group with a
comprehensive exploration of AI adoption across the financial services and housing industries.

Takeaways

Based on these extended discussions with the Working Group participants, Committee staff takeaways
for the Committee to consider are below:

• Given the Critical Role of the Financial and Housing Markets, the Committee Should Play a Leading
Role in Overseeing the Adoption of AI in the Financial Services and Housing Industries.
• The Committee Must Ensure Regulators Apply and Enforce Existing Laws, Including Anti-
Discrimination Laws, and Assess Regulatory Gaps as Market Participants Adopt AI.
• The Committee Should Ensure the Financial Regulators Have the Appropriate Focus and Tools to
Oversee New Products and Services.
• The Committee Should Continue to Consider How to Reform Data Privacy Laws Given the Importance
of Data, Especially Consumer Data, to AI.
• The Committee Should Work with Financial Regulators to Understand AI’s Impact on the Workforce.
• The Committee Should Ensure U.S. Global Leadership on AI Development and Use.

I. Overview and Background
Establishment and Process of the AI Working Group
The Working Group was composed of 12 Republican and Democratic Members.
Republican Members included Chairman Patrick McHenry (NC-10), Congressman French Hill
(AR-02), Congresswoman Young Kim (CA-40), Congressman Mike Flood (NE-01), Congressman
Zach Nunn (IA-03), and Congresswoman Erin Houchin (IN-09). The Democratic Members
included Ranking Member Maxine Waters (CA-43), Congressman Stephen F. Lynch (MA-08),
Congresswoman Sylvia Garcia (TX-29), Congresswoman Ayanna Pressley (MA-07),
Congressman Sean Casten (IL-06), and Congresswoman Brittany Pettersen (CO-07). The AI
Working Group conducted a total of six roundtables.
Throughout the course of the roundtables, Members focused on specific use cases,
technological developments, and regulatory implications. This focus facilitated a better
understanding of the benefits and risks presented by AI throughout the financial services and
housing industries. At the first two roundtables, Members heard directly from the federal agencies
under the Committee’s jurisdiction. These discussions centered around the agencies’ use of AI for
internal operations, the adoption of AI by entities within their purview, and whether additional
guidance or authorities are needed.
After meeting with the federal regulators, the Working Group held four more roundtables
in which Members heard from market participants and consumer advocates. These roundtables
focused on AI use cases across the financial services industry, including the range of benefits and
risks the technology poses, and the hurdles to adopting the technology. Additionally, some
Members of the Working Group conducted an off-site visit with AI experts at the Massachusetts
Institute of Technology (MIT). The six roundtables along with the off-site visit provided the
Working Group with a holistic understanding of AI adoption across the financial services and
housing industries. Collectively, this agenda provided valuable insight as Members work to ensure
the U.S. proceeds in a safe, competitive, fair, and efficient manner as AI use increases.
Other key initiatives include the Bipartisan AI Task Force created by Speaker Mike
Johnson and Democratic Leader Hakeem Jeffries and led by Chair Jay Obernolte (CA-23) and
Co-Chair Ted Lieu (CA-36),2 as well as the AI Insight Forums hosted by U.S. Senate Majority
Leader Chuck Schumer (D-NY), and Senators Mike Rounds (R-SD), Martin Heinrich (D-NM),
and Todd Young (R-IN).3 These initiatives have taken a similar bipartisan approach toward
examining the technology and any potential legislative gaps.
Introduction to AI
Throughout the Working Group sessions, panelists provided an overview of the evolution
of AI, from machine learning (ML) technology to recent developments in Gen AI, as demonstrated
in Exhibit 1. Participants highlighted that they have been utilizing traditional ML models for

2 Speaker Mike Johnson, “House Launches Bipartisan Task Force on Artificial Intelligence,” U.S. House of Representatives, (Feb.
20, 2024).
3 U.S. Senate Democratic Caucus, “Schumer Launches Major Effort To Get Ahead Of Artificial Intelligence,” (May 18, 2023).
upwards of 10 years and in the past few years have generally experimented with Gen AI internally. While there is currently no collectively agreed-upon definition of AI, panelists and other
stakeholders have raised the importance of shared terms for AI as a starting point. In December
2020, the National Artificial Intelligence Initiative Act of 2020, enacted as part of the William M. (Mac) Thornberry NDAA for Fiscal Year 2021,4 defined AI as "a machine-based system that can, for a
given set of human-defined objectives, make predictions, recommendations or decisions
influencing real or virtual environments. AI systems use machine and human-based inputs to—(A)
perceive real and virtual environments; (B) abstract such perceptions into models through analysis
in an automated manner; and (C) use model inference to formulate options for information or
action.”5 President Biden’s Executive Order 14110, Safe, Secure, and Trustworthy Development
and Use of Artificial Intelligence (Executive Order) adopted this definition of AI. As this
technology expands, Congress is continuing to consider definitions and whether legislation is
needed to address newer AI applications.

Exhibit 1: The Evolution of Artificial Intelligence6

Roundtables with Regulators


On January 31 and February 6, 2024, the Bipartisan AI Working Group held its first two
sessions with financial regulators to discuss how government and businesses are deploying AI: regulators as part of the supervisory technology they use to implement rules and regulations (i.e., "SupTech"), and businesses as part of the technology they use to comply with them (i.e., "RegTech"). Representatives from the Office of the Comptroller
of the Currency (OCC), Federal Reserve Board of Governors (Fed), Federal Deposit Insurance

4 P.L. 116-283.
5 Id.
6 McKinsey & Company, "What Is AI (Artificial Intelligence)?" (May 15, 2024).
Corporation (FDIC), Securities and Exchange Commission (SEC), and the National Credit Union
Administration (NCUA) participated in the first roundtable. Representatives from the Office of
Cybersecurity and Critical Infrastructure Protection of the Treasury Department, the Financial
Crimes Enforcement Network (FinCEN), the Federal Insurance Office (FIO), the Department of
Housing and Urban Development (HUD), the Federal Housing Finance Agency (FHFA),
and the Consumer Financial Protection Bureau (CFPB) participated in the second roundtable.
“SupTech” (short for “supervisory technology”) refers to innovative technology deployed
by regulators to check for compliance and support their supervisory, rulemaking, and enforcement
efforts.7 Through SupTech, regulators have improved their supervisory capabilities, helped
financial institutions meet regulatory requirements,8 and supported their efforts to collect and
analyze data (e.g., automated reporting, market surveillance, misconduct analysis, and macro- and micro-prudential supervision).9 "RegTech" (short for "regulatory technology") typically describes
the use of automation for regulatory, compliance, and data reporting obligations for financial firms
and other regulated entities.10 Regulators discussed the extent to which their SupTech was keeping pace
with advances in technology being used by businesses, the extent to which regulators were
deploying AI to enhance their oversight responsibilities, whether regulators had the resources
necessary to oversee the rapid adoption of AI among the entities they regulate, and the challenges
in hiring and recruitment programs to ensure sufficient staff with technology backgrounds to help
regulators monitor the evolution of AI. The Treasury Department indicated it is conversing with
large technology companies to create a pipeline of qualified individuals into government service.
The CFPB discussed its work to build interdisciplinary teams and augment technical expertise and
talent in ML, data science, and analytics. Other agencies cited challenges working within current
funding levels and attracting staff with technological backgrounds. When asked if the use of
SupTech was leading to job loss at agencies, the Fed stated that its use of AI would not replace
staff; rather, it is intended to enhance staff's abilities and free them to focus on work that makes better use of their time and talent. The OCC observed that, historically, the adoption of new technologies has eliminated certain types of jobs but also created others, resulting in minimal overall job loss for agencies.
In response to concerns that AI could lead to bias and discrimination and make it harder to
detect such outcomes due to a lack of explainability, regulators generally noted that the use of AI
did not absolve entities from complying with anti-discrimination laws and other consumer
protection laws. Explainability refers to the concept that the output and results of an AI model are
interpretable and explainable to users. The CFPB clarified that the use of AI is considered a

7 What is SupTech? (A Market Overview of Supervisory Technology), Stellex (July 4, 2018).


8 Financial Stability Board, The Use of Supervisory and Regulatory Technology by Authorities and Regulated Institutions: Market
Developments and Financial Stability Implications (Oct. 9, 2020); see also Government by Algorithm: The Myths, Challenges, and
Opportunities, Tony Blair Institute for Global Change (Jan. 25, 2021); see also Tricentis, AI Approaches Compared: Rule-Based
Testing vs. Learning (accessed on May 1, 2024); and What Can Machines Learn, and What Does It Mean for Occupations and the
Economy? at 43-47, AEA Papers and Proceedings, vol. 108 (May 1, 2018).
9 What is SupTech? (A Market Overview of Supervisory Technology), Stellex (July 4, 2018).
10 RegTech has been used for about ten years and has played an important role in assisting institutions with their national security and illicit finance programs, including detecting, preventing, and reporting illicit financial activities. See Paul Tierno,
Congressional Research Service (CRS), Artificial Intelligence and Machine Learning in Financial Services (Apr. 3, 2024); Dirk
Broeders and Jermy Prenio, “Financial Stability Institute: Innovative Technology in Financial Supervision (Suptech) – The
Experience of Early Users,” Bank for International Settlements (Jul. 2018).
violation of the Equal Credit Opportunity Act (ECOA) if a lender is unable to explain an adverse outcome produced by AI. At the same time, the CFPB explained that many companies have programmed
their AI systems to ensure that outcomes are explainable. Several regulators commented that
regulated entities are expected to follow all laws, including anti-discrimination and other consumer
protection laws, in a tech-neutral manner.
In a discussion about whether AI had the potential to make homeownership more
attainable, FHFA indicated that the government-sponsored enterprises (GSEs) have been exploring
the use of AI. However, FHFA emphasized the need for a comprehensive and standardized set of data before employing AI.
Some agencies stated that they did not need federal legislation from Congress to manage
the unique challenges related to the deployment of AI. Other agencies indicated that legislation
could be helpful. Certain agencies indicated legislative gaps could appear as AI becomes more
widely adopted and sophisticated. Members expressed concerns around the lack of definitional
clarity surrounding the types of AI being used in financial services and the emerging risks that
come with this use. Members indicated a desire to refine and standardize the different AI-related
terms used by the public and private sectors.
Regarding Member concerns about data privacy, particularly with large language models
(LLMs) using consumer data, one regulator noted particular concern for smaller entities that use
third parties to deploy AI compared to larger entities that can develop AI systems internally. LLMs
can be understood as programs that consume and train on massive datasets sourced from public
internet sites to carry out language processing tasks.11 During the regulator roundtable, participants
described their concerns with the quality of Gen AI input data. The high volumes and wide range of data used by AI, especially Gen AI, underscore the importance of controls around data quality, security, and privacy. The Fed emphasized its concerns around the financial
stability implications of Gen AI, deep learning, and other types of AI, highlighting the problematic
nature of having a “monoculture of models,” whereby financial institutions all use the same third-
party providers. One Member shared concerns that an SEC proposed rule was too broad and could
hinder the deployment of AI and other technologies in our capital markets. Another Member raised
questions regarding liability involving AI model failures.
The roundtable participants also discussed the regulators’ use of AI to identify non-
compliance with regulations. The Treasury Department’s Office of Terrorism and Financial
Intelligence stated that the benefits of AI integration in the financial system are readily apparent in
AML, countering the financing of terrorism (CFT), and sanctions compliance. Regulators
emphasized that, when properly calibrated, this technology could improve efficiency in meeting compliance obligations and in monitoring for suspicious transactions. For
example, financial institutions may be able to utilize advanced algorithms and ML technology to
examine large amounts of transaction data and identify unusual patterns related to money
laundering activities. Automated AI systems can also monitor transactions against continuously
updated global sanctions lists. However, the panel also heard regulators’ concerns about the

11 IBM, What are large language models (LLMs)? (accessed May 2024).
dynamic nature of financial crime and the evolution and speed of AI systems. For example, AI
may allow illicit actors to deploy AI-generated voice scams or similar schemes to exploit customer
identification processes. One regulator articulated the agency’s ability to address emerging AI
challenges regarding AML/CFT compliance through its existing tools and authorities.
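To make the sanctions-screening example above concrete, the following minimal sketch (in Python, using only the standard library) shows how automated screening can fuzzily match payment counterparties against a watch list so that near-miss spellings are held for human review. The list entries, names, and similarity threshold are invented for illustration and are not drawn from the roundtable discussions; a real system would pull the continuously updated official lists.

    from difflib import SequenceMatcher

    # Hypothetical watch-list entries; real systems use the continuously
    # updated sanctions lists rather than hard-coded names.
    SANCTIONS_LIST = ["Example Shell Trading LLC", "Jane Q. Frontperson"]
    REVIEW_THRESHOLD = 0.85  # invented similarity cutoff for holding a payment

    def similarity(a: str, b: str) -> float:
        """Normalized string similarity in [0, 1]."""
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def screen_payment(counterparty: str) -> list[tuple[str, float]]:
        """Return watch-list entries that fuzzily match the counterparty name."""
        scored = [(name, similarity(counterparty, name)) for name in SANCTIONS_LIST]
        return [(name, score) for name, score in scored if score >= REVIEW_THRESHOLD]

    if __name__ == "__main__":
        # A slightly misspelled name should still be held for analyst review.
        for payee in ["Example Shel Trading LLC", "Acme Widgets Inc."]:
            matches = screen_payment(payee)
            print(payee, "->", f"HOLD for review: {matches}" if matches else "clear")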
HUD also identified budget constraints as a limitation in its regulation of AI and its ability
to leverage SupTech. Despite these resource constraints, HUD is leveraging AI to assist with its
review of Consolidated Plans, which assess affordable housing and community development
needs. Furthermore, through this project, HUD will explore “creating a database and chatbot that
will enable HUD staff to query features of the nearly 1,000 active Consolidated Plans.”12 HUD
stood up an AI governance board and engaged with a variety of stakeholders to understand how it
is using AI. While HUD has started to modernize its technology across the department, it believes
it needs to address current IT challenges to leverage the technology. Similarly, other regulators
also emphasized the importance of public-private sector partnerships and coordination amongst
regulated entities across state, local, and federal jurisdictions.13 Through public-private sector
partnerships, entities can bolster cooperation, establish a two-way flow of communication (including feedback on the use of AI by financial institutions and nonbank firms), and develop processes for safe adoption.
For a description of key Agency Actions taken to date related to AI, please see Appendix B.
Market Participant Roundtables
AI in Capital Markets
On March 22, 2024, the Working Group hosted a roundtable composed of capital markets participants, including:
• a securities exchange,
• a broker-dealer,
• a market intelligence firm,
• a robo-advisor,
• and an investor advocate.
The Working Group gained a broad understanding of current and potential AI use-cases,
benefits, and risks specific to the sector, roadblocks to adoption, and future developments within
our capital markets. In general, because of the regulated nature of capital market participants and
current requirements, market participants stated they are taking a measured approach to
implementing AI technology in certain aspects of their business. Market participants shared that
they are still in the early stages of this technology’s capabilities, and regulated, risk-averse entities
will be slow to adopt such novel technologies. However, many participants reported that they have
been using ML models for upwards of 10 years at increasing rates for data analysis and are

12 HUD, AI Inventory (accessed July 2024).


13 Organisation for Economic Co-operation and Development (OECD), Chapter 5: The Use of SupTech to Enhance Market
Supervision and Integrity (accessed Oct. 2, 2022).
exploring the potential for new advancements in AI’s capabilities to create new use cases in capital
markets.
Many industry panelists discussed how they are using AI to optimize their employees’ time.
For example, the capital markets industry is using AI for market research and to synthesize large
amounts of unstructured data to produce more tailored and digestible information. Panelists
discussed using AI to transcribe and summarize earnings calls, financial documents, and market
information. Rather than conducting manual searches through documents or entering highly
technical search instructions into an analog database, new AI products are able to take natural
language search entries and comb through internal and public data to provide high-quality sourced
answers. Participants use these tools in both our public and private markets to streamline the time
spent on research.
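As an illustration of the natural-language document search described above, the sketch below ranks a handful of invented document snippets against a plain-English query using TF-IDF similarity (via scikit-learn). Production systems use far more capable models and attach source citations; this is only a minimal sketch of the retrieval idea, with all file names and text assumed for the example.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    documents = {  # invented snippets standing in for filings and call transcripts
        "q3_earnings_call.txt": "Management expects margin pressure from rising input costs.",
        "10k_risk_factors.txt": "Liquidity risk may increase if short-term funding tightens.",
        "q3_press_release.txt": "Revenue grew twelve percent on strong demand in Europe.",
    }
    query = "what did the company say about funding and liquidity?"

    # Vectorize the documents and the natural-language query together,
    # then rank documents by cosine similarity to the query.
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(list(documents.values()) + [query])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

    for name, score in sorted(zip(documents, scores), key=lambda pair: -pair[1]):
        print(f"{score:.2f}  {name}")  # the risk-factor snippet should rank first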
Two market participants discussed their ability to use computer vision technology to verify
know-your-customer (KYC) information, with one of those participants noting that this reduces
fraud investigation times by up to 50 percent. Additionally, exchanges are using AI tools to conduct
market surveillance and more effectively meet their regulatory obligations. AI surveillance tools
detect market anomalies and elevate cases that require immediate attention. Additionally, a market
participant discussed a new order type where an exchange can use an AI algorithm to optimize the
duration between trades to reduce price volatility.
With respect to use cases, panelists discussed the difference between internal and external-facing applications. Many of the AI applications market participants have been implementing are
internal rather than public-facing.
Risks
Capital market participants’ adoption of AI may present risks for them and the broader
markets. One risk that panelists discussed with the Working Group was the popularity of certain
models. Gen AI models are algorithms pre-trained to execute a specified range of tasks; when many market players rely on the same model, they can end up making the same or similar decisions at the same time. Because building a Gen AI model takes immense data, money, and time, many market participants may build their own AI applications on a single foundational model, creating the potential for a domino effect. For example, trading firms using similar algorithms in the lead-up to the Flash
Crash in 2010 saw their automated trading desks execute the same sell orders at the same time,
contributing to the stock market plunging by 9% in a few minutes.14 One panelist warned that the
widespread adoption of certain AI models may encourage herd-like behavior in capital markets.
Firms reported trying to mitigate this risk by subjecting models to rigorous testing before
deployment and actively reviewing the models’ outputs for partial or skewed results.
One panelist noted that "AI washing" is also an area of concern. AI washing involves companies making unfounded claims exaggerating the capabilities of a product or service sold as 'AI,' such as marketing a product or service as 'AI' when it in fact relies heavily on human input. There are also situations where firms claim to use AI in circumstances where it is unnecessary or inefficient, which may mislead consumers and investors.

14 Trading program sparked May 'flash crash', CNN (Oct. 1, 2010).
Panelists also identified data security and vulnerabilities in intellectual property protections
as key risks of deploying AI models. The usefulness of an AI model depends on the quality of data
it was trained on. As such, there are concerns that a third party could reverse engineer a model to access the proprietary data on which it was trained or to acquire curated data sets. The
motivation to acquire underlying data sets will only increase as data becomes more valuable based
on the increased use of AI.
Challenges
In addition to the opportunities that Gen AI presents to the sector, panelists noted a number
of challenges that market participants must overcome. One such challenge is customers’ mistrust
of AI. Market participants are exploring the areas in which they can incorporate Gen AI into their
operations. However, market participants are wary of losing their customers’ trust due to a mistake
made by an AI application. Other commentators pointed to challenges that lie ahead around the
areas of data security, regulatory compliance, and the ethical use of AI in decision-making
processes.
Relatedly, the explainability and reproducibility of advanced AI models are areas of concern. Market participants must be able to understand where a model went wrong and how to
correct the error to avoid compounding the problem. According to the panelists, this problem
presented itself in newer Gen AI models, which is a reason why some firms stated they are delaying
using Gen AI in critical areas of their operations until these problems have been sufficiently
addressed. In response, many market participants are developing AI governance bodies within their
organizations. These governance groups are composed of members from the technology and business sides of an organization and evaluate AI use cases for potential risks that accompany the deployment
of AI products.
Opportunities
While most capital markets participants have limited their use of AI, and Gen AI
specifically, to internal, nonpublic-facing aspects of their business, market participants are
beginning to deploy the technology in other use cases, including public-facing use cases. Panelists
described using strategies such as studying the input data an application uses to produce a specific
output in order to increase the comfort level of market participants. This allows market participants
to understand why Gen AI took a specific action. This kind of data mapping has proven useful in
detecting certain errors in the model that stem from flawed data inputs rather than the model itself.
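One common way to implement the kind of input "data mapping" panelists described is permutation-based feature attribution: perturb each input in turn and measure how much the model's accuracy degrades. The sketch below runs on synthetic data and is illustrative only; it is not a description of any participant's actual tooling.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))                                # three synthetic inputs
    y = (X[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(int)   # only input 0 matters

    model = LogisticRegression().fit(X, y)

    # Shuffle each input in turn and measure how much accuracy drops; a large
    # drop means the model's output depends heavily on that input.
    result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
    for i, importance in enumerate(result.importances_mean):
        print(f"input feature {i}: importance {importance:.3f}")
    # An unexpected importance profile would suggest the model is keying on an
    # unintended -- possibly flawed -- data input rather than the intended signal.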
Capital markets’ integration of AI has the potential to streamline many functions critical to their operations. For example, various types of AI applications are currently being used or developed to streamline customer onboarding, provide better service, protect against fraud, and uncover overlooked investment opportunities. AI also has the potential to expand access to capital markets by making information more accessible to all investors.
Roundtable on Housing and Insurance
On April 11, 2024, the AI Working Group held a roundtable on housing and insurance to
explore several use cases for AI in these sectors, their benefits, and challenges. The six participants
included:
• a mortgage lender,
• a credit underwriter,
• a fair housing and civil rights advocate,
• an online real estate platform,
• an AI service provider for multifamily housing owners and operators,
• and an insurance broker/risk management firm.
Recent AI advances have facilitated a major shift in housing and insurance products and
services. This shift has allowed for new benefits and conveniences to consumers, but has also
presented fair housing, consumer protection, and other challenges in the housing and insurance
markets. For example, use cases discussed during the roundtable included the current deployment
of AI for underwriting of mortgages and insurance policies, tenant screening, simplifying and
enhancing the overall customer experience, and data analytics that guide industries in responding
to consumers and risk.
Industry participants in the roundtable discussed several ways that AI enhances their ability
to approve more prospective homebuyers for mortgages; better identify, track, and respond to
customer needs or complaints; automate routine tasks or data analysis to allow staff to focus on
decisions that humans are more suited to make; more accurately assess the market environment
and predict future risks; and more quickly process claims and mortgage applications.
Participants also described AI use cases that augment underwriting for mortgages and
insurance policies, simplify customer experiences, and expedite holistic data visualization and
reporting. However, within the insurance industry, it was stated that most AI use cases currently
help with assessing risk and the probability of future outcomes. According to one industry
participant, the development of underwriting-based AI models has helped realize a 20 percent to
40 percent increase in approvals for loans across protected classes under the Fair Housing Act and
ECOA. This included a 177 percent increase in loan approvals for Black applicants.
Another participant attested that the industry is using AI to improve property searches,
enhance property valuations, create immersive virtual property tours, and streamline employees’
ability to fill out paperwork, execute transactions, and fulfill compliance obligations. Finally,
another participant explained that their AI tools allow employees of large property managers to
manage and maintain up to 20 apartment buildings at a time, as well as to help housing developers
identify the most productive plots of land to build on based on local land use requirements, density
bonuses, and other relevant factors.
Panelists recognized the opportunities to leverage AI to expand access to housing and
mortgage lending. This is in addition to more accurately assessing risk in insurance. Yet, panelists
also highlighted that the use of AI in housing and insurance has its limitations and potential risks.
Risks
Some of the primary risks discussed by panelists and Members during the roundtable
revolved around inadequate or improperly sourced data and consumer privacy. One panelist
explained that there is significant risk of exposing confidential, personally identifiable information
when inputting data into an AI model for training purposes. This risk is exacerbated if the
model is being trained in an environment without adequate security and privacy controls. One
Member expressed serious concerns around the use of third-party datasets and AI models, citing a
study by MIT. The MIT study found that more than half of AI failures occur when firms purchase datasets from third parties.15
Another risk highlighted by panelists included AI models producing hallucinations, which
are nonsensical or erroneous outputs not grounded in the model’s inputs or training data. Many
panelists explained that their firms avoid the use of Gen AI for customer-facing applications
because of hallucinations. Regardless, all the panelists agreed that there must be a “human-in-the-loop”
element to monitor how the model is functioning and ensure that any hallucinations are identified
and rectified in a timely manner.
During the roundtable, several Members and participants discussed concerns regarding
how AI technologies may reproduce or even exacerbate biased or discriminatory outcomes due to
the use of biased data inputs, particularly in light of historical segregation and discrimination in
the housing sector. All roundtable participants agreed that AI could be used to reduce bias if
deployed responsibly. One participant highlighted that the responsible deployment of AI includes
the need for routine monitoring and other safeguards to ensure biased or discriminatory outcomes
are not reflected in or exacerbated by AI models. Several participants emphasized that transparency
is a key best practice to reduce discrimination and reduce liability risks in AI models. Additionally,
participants asserted that biased data and discriminatory AI models can be further mitigated by
ensuring diverse groups are part of model engineering, development, testing, and deployment
phases. Participants explained that if the potential for bias and discrimination can be reduced at
the aggregate level, it can also be reduced at the individual level.
Challenges
One Member questioned whether the enhanced speed of AI models comes at the cost of
quality or accuracy. Industry participants replied that AI is best deployed with a human to help
monitor and check for quality and accuracy. Several panelists also asserted that it is not appropriate
to allow AI to make consequential decisions around credit evaluation or mortgage approvals.
Another Member followed up on this line of questioning by raising a specific case study where
landlords used third-party screening companies to make decisions on whether to approve rental
applications. The Member cited concerns around the use of data provided by these third-party
screening companies and the lack of transparency around its provenance. Participants shared these
concerns and emphasized the need for transparency and enhanced due diligence when firms partner
with third parties.

15 Building Robust RAI Programs as Third-Party AI Tools Proliferate: Findings from the 2023 Responsible AI Global Executive
Study and Research Project, MIT Sloan Management Review (June 20, 2023); see also, Third-party AI tools pose increasing
risks for organizations, MIT Sloan School of Management (Sept. 21, 2023).
When asked about the rise in the use of third-party tenant screening and rent-setting AI
technologies and their implications for consumer access to fair and affordable housing, one industry
participant recognized that such rental decisions can often feel like a “black box.” Another panelist
urged that the federal government should promote transparency in such models. One panelist also
emphasized the potential for collaboration by market participants in dynamic pricing algorithms
and said relevant federal agencies should further examine the use of rent setting technologies. One
industry participant also shared how their company has moved to provide open-source technology
to help promote fairness in the online housing market, which other panelists agreed was a best
practice.
Opportunities
In response to Member questions about whether the use of AI to approve more people for
mortgages led to more defaults, industry participants stated that AI has not increased risk in
underwriting. Rather they argued it expands access to credit, including by providing a more
innovative and accurate approach to assessing the credit risk of more people. Additionally, several
participants explained that in many cases, the output of these models is not a definitive approval
or denial but may result in a score that indicates the risk of default for a particular borrower
and is used by a lender to make a credit decision at the human level. Indeed, industry participants
unanimously agreed that AI is a tool to help supplement human decision-making with data and
information. Several panelists suggested that consumers should be given the option to appeal
potentially inaccurate AI decisions to a human for individual review.
Some panelists discussed the evolution of underwriting from legacy credit scoring systems
to ML-based credit models that can analyze larger amounts of data. This includes data that might
not traditionally be used by credit underwriters, mortgage lenders, and insurance brokers, such as
positive rental payment histories and a consumer’s cash-flow data. Additionally, panelists
discussed how AI-automated underwriting can help industry evaluate longer durations of credit
history, assess meaningful data correlations and trends, and potentially increase credit access for
borrowers of color and other borrowers who have historically been denied credit. While the
panelists’ firms are primarily using traditional AI, which is limited to preset tasks and has been
around for decades, some have started to develop and pilot newer Gen AI models for future use.
For example, one panelist shared that their firm is developing a customized Gen AI tool to assist
lenders with gauging industry trends, consumer credit, and portfolio performance.
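The following hedged sketch illustrates the underwriting pattern described above: a model trained on both traditional and alternative features (such as rent payment history and cash-flow data) that returns a default-risk score for a human underwriter to weigh, rather than an automatic approve/deny decision. All data, feature names, and labels are synthetic assumptions, not any panelist's model.

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(42)
    n = 1000
    applicants = np.column_stack([
        rng.integers(300, 850, n),   # traditional credit score
        rng.integers(0, 24, n),      # months of on-time rent payments (alternative data)
        rng.normal(0.3, 0.1, n),     # cash-flow volatility (alternative data)
    ])
    # Synthetic "defaulted" labels loosely tied to the features, for illustration.
    defaulted = (applicants[:, 0] < 580) & (applicants[:, 1] < 6)

    model = GradientBoostingClassifier().fit(applicants, defaulted)

    # A thin-file applicant with a low score but a strong rent history: the model
    # returns a risk score that a human underwriter weighs, not a final decision.
    applicant = np.array([[560, 18, 0.25]])
    risk = model.predict_proba(applicant)[0, 1]
    print(f"estimated default risk: {risk:.1%}")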
Another Gen AI use case mentioned was a chatbot designed to listen, comprehend, and summarize interactions with customers and employees. Examples of Gen AI systems that panelists’ firms have deployed include chatbots, which have been limited to internal applications, and tools that process conversations and organize documents to address tenant inquiries and requests and to assist loan originators with reviewing information for loan applications. Other deployed tools analyze client conversations to determine customer satisfaction and identify keywords and phrases in call transcripts for compliance purposes.
Another panelist discussed recent advancements in LLMs, which facilitate the ability to
create conversation-like experiences for consumers in their housing search and sales transactions
through online real estate platforms. However, the participant emphasized the importance of
training and evaluating LLMs to ensure compliance with fair housing and fair lending laws, as
well as integrating controls to prevent outputs that violate fair housing requirements.
Use of AI by Financial Institutions and Nonbank Firms
On May 1, 2024, the Working Group hosted a roundtable to discuss use cases of AI by
depository institutions and nonbank financial firms. The discussion highlighted potential
opportunities and challenges associated with the use of AI to expand the availability and reduce
the cost of financial products and services. This roundtable was separated into two distinct panels.
The first panel focused on specific use cases by financial institutions of all sizes,
specifically in loan underwriting, customer service, fraud detection, and debt collection. The
second panel focused on the AI lifecycle at a financial institution or nonbank, from acquisition of the technology through development and integration. These use cases are demonstrated in Exhibit 2. This
discussion also included institutions’ internal approach to AI and potential opportunities and
challenges as the technology develops.
Exhibit 2: How Financial Services Companies Used AI in 2022
Panel I

The first panel examined specific AI use cases by various types of depository and nonbank financial firms to explore areas where AI technology is being or could be used.
This panel included:

• a minority depository institution,


• a fintech Community Development Financial Institution (CDFI),
• a financial fraud detection firm,
• a credit union,
• a digital debt collector,
• and a consumer advocate.
Underwriting is a core function of financial institutions that lend, and access to credit is an
indicator of financial well-being. Individuals with an unconventional credit history and thin credit
files experience difficulties accessing credit. Some fintech companies have used ML algorithms to
better predict the creditworthiness of loan applicants. One participant noted that after their
partnership with an AI underwriting software provider, they were able to approve 40 percent more
loans to Black and Hispanic borrowers than they would have using traditional credit score
models. Members of the Working Group expressed concerns that if done incorrectly, the use of AI
in underwriting could lead to discriminatory outcomes. However, panelists discussed the potential
for AI underwriting models to expand the pool of eligible loan applicants.
Fraud detection is a frequent use case of AI technology. This technology allows unique
spending profiles to be created for each customer to detect activity that would be out of the norm
for a specific individual. This method has not only been successful in reducing false positive rates,
but also in detecting fraudulent activity that may have otherwise gone unnoticed. Such preliminary
findings from the use of AI detection have helped fraud investigators reduce investigation times.
These fraud detection models are trained on transaction data from around the world and can help smaller financial institutions, which do not have the troves of transaction data necessary to train their own models, become more effective at preventing fraud.
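A minimal sketch of the per-customer profiling idea described above: each customer's transaction history defines a "normal" spending range, and new transactions far outside it are flagged. Real systems model many more dimensions (merchant, geography, timing); the customers, amounts, and cutoff here are invented.

    from statistics import mean, stdev

    # Invented past transaction amounts defining each customer's spending profile.
    history = {
        "cust_a": [42.0, 18.5, 55.0, 23.0, 61.0, 30.0],
        "cust_b": [1200.0, 950.0, 1500.0, 1100.0, 1300.0, 980.0],
    }

    def is_anomalous(customer: str, amount: float, z_cutoff: float = 3.0) -> bool:
        """Flag a transaction whose amount is far outside the customer's norm."""
        amounts = history[customer]
        mu, sigma = mean(amounts), stdev(amounts)
        return abs(amount - mu) > z_cutoff * sigma

    # The same dollar amount is routine for one customer and anomalous for another,
    # which is how per-customer profiles reduce false positives.
    print(is_anomalous("cust_a", 1100.0))  # True: far above cust_a's profile
    print(is_anomalous("cust_b", 1100.0))  # False: typical for cust_b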
Many financial institutions are also employing AI to augment their customer service
operations. One financial institution described their recent integration of a virtual, conversational
AI customer assistant. This panelist reported that this virtual assistant was able to fully handle over
60 percent of inbound phone calls, as opposed to 25 percent with their previous solutions. When a
customer requires a human agent to assist them, the virtual assistant provides recommended
responses and advice to call center employees. This panelist described how their call center
employees now provide more consultative and advisory support and have significantly improved
their customer service operation. The panelist’s institution serves customers who speak multiple languages, such as Spanish and Polish, and is looking to add multilingual functionality to its virtual assistant to better serve those communities. Members expressed concerns that, if poorly done, these
customer service tools could trap customers in an endless cycle of voice prompts without providing
the help customers need, including speaking to a human representative. They also expressed
concerns regarding the chatbots’ ability to upsell customers on certain products and services.
Another use case discussed in this panel was debt collection. One panelist described their
use of LLMs to communicate with individuals whose debt is being collected. This panelist uses
Gen AI-produced text prompts, which are then reviewed by a human for legal compliance and sent to customers. Statistics provided by the panelist indicate a 25 percent increase in payment in full when using AI-generated text compared to human-generated text. These text prompts are refined through engagement analytics and can be tailored to specific collection scenarios, including whether a customer has already accessed their payment portal, how many times they have been contacted before, and how far along an individual is in the debt collection process.
panelist also described the varying global legal landscape, with many of their new functionalities
tested in foreign jurisdictions before being used in the U.S.
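The workflow the panelist described can be pictured as a simple human-in-the-loop pipeline: a model drafts a message tailored to the collection scenario, a human reviews it for compliance, and only approved drafts are sent. In the schematic sketch below, the Gen AI call and the compliance check are stand-in stubs; the stages, names, and screening terms are invented assumptions.

    from dataclasses import dataclass

    @dataclass
    class Account:
        name: str
        stage: str           # e.g., "first_contact", "portal_visited", "final_notice"
        prior_contacts: int

    def draft_message(account: Account) -> str:
        """Stand-in for a Gen AI call that tailors text to the collection scenario."""
        if account.stage == "portal_visited":
            return f"Hi {account.name}, we noticed you recently visited your payment portal."
        return f"Hi {account.name}, this is a reminder about your outstanding balance."

    def human_review(draft: str) -> bool:
        """Stand-in for the human compliance review; the terms are illustrative."""
        prohibited = ["immediately", "legal action"]
        return not any(term in draft.lower() for term in prohibited)

    account = Account(name="J. Doe", stage="portal_visited", prior_contacts=2)
    draft = draft_message(account)
    print("SEND:" if human_review(draft) else "REWORK:", draft)  # sent only if approved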
Panel II
The second panel took an enterprise-wide view to examine how different financial
institutions and their technology providers are strategically approaching the technology. This panel
consisted of:
• a core provider,
• a cloud and technology infrastructure provider,
• a large bank,
• a midsize bank,
• a software company,
• and a nonprofit independent research center.
Panelists from both the large and small financial institutions described the importance of
having a modern technology infrastructure to be ready to leverage these emerging technologies.
Having practices in place to ensure quality data hygiene and the development of a robust cloud
infrastructure are critical to leveraging emerging AI technologies. There is a range of technological
sophistication among financial institutions, and those who have made investments in the
underlying infrastructure are more ready to implement AI across multiple use cases, both internally
for their employees and externally for their customers. Panelists also discussed the extent to which
they were investing in in-house development or engaging third-party service providers. The
panelists indicated size and existing sophistication are important considerations when deciding
which strategy to pursue.
Members also discussed the potential for AI technology to perpetuate or exacerbate biases
and discrimination in the financial marketplace if the technology becomes increasingly utilized
without appropriate oversight and safeguards to ensure such discriminatory practices do not go
unchecked. This is a significant concern that has frequently come up in the Committee’s work.
One panelist advocated for the Federal government to take a more active role in addressing these
issues.
While some panelists expressed concern that many global competitors have no guardrails
in place with respect to the use of AI, some Members expressed concern about moving so
cautiously that U.S. market participants become less competitive than global counterparts.
Risks
Members and panelists discussed multiple risks facing institutions that employ AI
technologies in their operations. When institutions use AI, Gen AI or otherwise, they need to ensure
they are doing so in a compliant manner, including not engaging in any form of discriminatory
practices. Cybersecurity and privacy were also risks discussed in the session. As more data is
collected by financial institutions and then used to train their models, this data may be subject to
attacks by bad actors. Having robust cybersecurity safeguards in place will only become more
important as the underlying data becomes more valuable and more concentrated. For example, as
more financial institutions utilize cloud services for their data management, there are financial
stability and other risks that have been identified by regulators. With respect to utilizing AI for
debt collection, it is critical to ensure that the AI deployed by such companies is still compliant
with existing consumer protection laws and regulations. Furthermore, there are risks that AI could
fuel panics and runs that could undermine safety and soundness in the banking
system.
Challenges
One challenge discussed was compliance with risk management guidance. One panelist
suggested that additional AI-specific updates be included in model risk management frameworks.
Additional challenges may arise as smaller financial institutions likely become more reliant on
third-party service providers of sophisticated AI products. This can be problematic as smaller
financial institutions look to providers to compete with large financial institutions' offerings. As in
prior roundtables, explainability remained a challenge for advanced AI models and may hinder
their deployment in a regulated environment. Some market participants noted they were
experimenting with Gen AI technologies internally rather than deploying products to consumers
due to security, privacy, and the lack of trust and transparency in AI models.
Opportunities
This session discussed the opportunities for AI to improve banking services, from
loan origination to customer service. Many financial institutions are technologically savvy
and accustomed to regulatory oversight, positioning them to capitalize on these innovations. The
use of AI lending models has the potential to expand opportunities and reduce
discrimination. Companies are increasingly using AI and ML algorithms and datasets to test
for underlying historical bias, which could help ensure that automated decision-making tools do not discriminate. Panelists discussed AI’s ability to potentially extend credit to a more diverse
set of borrowers by utilizing alternative data, such as payment data which includes rent and
utility payment histories. By utilizing ML based models with alternative data, lenders may be
able to accurately assess risk and facilitate broader access to credit. Other panelists also
discussed AI’s ability to provide customer service solutions in multiple languages. There is also
an opportunity for smaller financial institutions to leverage this technology to offer products
that compete with larger financial institutions.
Roundtable on National Security and Illicit Finance

On May 16, 2024, the Working Group hosted a roundtable to explore the ways in
which AI can impact national security through the financial system. Participants included:
• a cloud-native cybersecurity firm,
• a core infrastructure provider for banks,
• an AI-powered risk and compliance firm,
• a firm specializing in detecting and preventing AI deepfakes,
• a research and development non-profit,
• and a firm that leverages AI to share illicit activity insights among participating financial
institutions.
Panelists educated Members on how AI is being leveraged by bad actors to compromise
financial institutions’ defenses and how these financial institutions are using AI to respond to these
threats. Panelists also discussed how financial institutions are using AI to comply with their
existing BSA/AML responsibilities.
Risks
AI, and Gen AI in particular, has armed criminals with a new tool, which has contributed to a significant uptick in the frequency and sophistication of attacks against or through the financial services sector. By automating the discovery of firms’ vulnerabilities and defeating firms’ safeguards against fraud, AI is helping criminals expand the size, scope, and efficiency of their operations.
One panelist shared that his firm has seen a 450 percent increase year-over-year in AI-
powered “deepfake attacks” against financial institutions. A deepfake is a type of synthetic media
that uses AI to manipulate or create video or audio versions of the person or thing represented.
This manipulation can be convincing and can occur in real time, making it appear like someone is
saying or doing something that the individual has not. Deepfakes are created using deep learning
algorithms, which are trained on a large amount of data, such as videos or audio recordings of a
particular person. Once the algorithm is trained, it can generate images and voices that look and
sound authentic, potentially undermining identity verification systems and fooling individual
victims. Referencing recent high-profile examples of companies and individuals falling victim to
deepfakes, panelists expressed concern that the increasing prevalence of these attacks will erode
trust in U.S. financial institutions.
Challenges

As financial institutions of all types and sizes are looking to AI to enhance their ability to
meet their national security obligations, one challenge is the governance of these internal systems.
Because many smaller institutions do not have the bandwidth or resources to develop their own
proprietary AI models, they must seek out third party providers to meaningfully incorporate AI
into their operations. Consequently, the smaller institutions may not be as familiar with how the
models work or whether the models into which their data is fed have been corrupted. Panelists
underscored the importance of financial institutions being able to explain to examiners and others
how the AI is being used, its capabilities and deficiencies, the security environment surrounding
it, and why the financial institutions’ tools are reaching certain conclusions. Panelists explained
that their firms will need to routinely assess the effectiveness of their AI deployments to ensure that their
systems are adding value to their risk management plans.

Additionally, when Members questioned whether the entirety of the U.S. banking system
has the necessary tools to address illicit actors’ use of AI, panelists indicated that many of the
smaller institutions do not have all the essential countermeasures in their toolboxes. While some
firms use third parties to implement AI into their operations, others, like single-branch community
banks, do not have the financial, technological, or personnel resources to do so. These smaller
institutions are easily identifiable by illicit actors and are significantly more vulnerable to attack.
During the roundtable, one panelist explained that when one of a core provider’s smaller clients is
compromised, there is an additional risk that the bad actors will use the smaller institutions to
access and subsequently compromise the core banking infrastructure (traveling through the smaller
institution’s systems into larger entities). This would have massive ripple effects throughout the
entire financial system.

One of the most significant challenges to financial institutions leveraging AI is regulatory uncertainty. Financial institutions are hesitant to implement new AI-driven models without
receiving regulatory “approval” for such novel applications. Clarity is needed about data sharing
with other institutions and the regulators’ position on the use of AI in their transaction monitoring
and other surveillance models. Additionally, FinCEN has yet to propose the rulemaking associated
with the Testing Methods Rulemaking provision under Section 6209 of the Anti-Money Laundering Act of 2020 (AMLA), which defines when and how older systems can be phased
out and replaced by newer technology.

Opportunities
Most monitoring systems used by financial institutions today are still rules-based, which
means that they look for defined activities or anomalies and only produce alerts if the established
filter thresholds are exceeded. Integrating AI within these financial and cybercrime monitoring
systems can help detect unusual or suspicious transaction activity using large data sets, behavioral
analysis, and other means that are either hard or impossible for humans to perform without such
augmentation. AI-driven models could enable a transaction monitoring system, for example, to
continuously learn from prior processed transactions and re-train the model to flag anomalous
activity for human review.
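As a concrete illustration of the contrast with rules-based filters, the sketch below trains an unsupervised anomaly detector on synthetic transaction history, flags outliers in a new batch for analyst review, and then re-fits the model with the reviewed transactions folded back in. The features, parameters, and data are invented assumptions, not any institution's configuration.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(7)
    # Synthetic history: one row per transaction, features [amount, hour_of_day].
    history = np.column_stack([rng.lognormal(4, 0.5, 2000), rng.integers(8, 18, 2000)])

    model = IsolationForest(contamination=0.01, random_state=7).fit(history)

    new_batch = np.array([
        [60.0, 12],     # ordinary mid-day payment
        [25000.0, 3],   # very large payment at 3 a.m. -- likely flagged
    ])
    flags = model.predict(new_batch)  # -1 = anomalous, 1 = normal
    for row, flag in zip(new_batch, flags):
        print(row, "ALERT for analyst review" if flag == -1 else "ok")

    # Periodic re-training folds reviewed, legitimate transactions back into the
    # baseline so the notion of "normal" keeps up with customer behavior.
    model = IsolationForest(contamination=0.01, random_state=7).fit(
        np.vstack([history, new_batch[flags == 1]]))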
Financial institutions also use AI for identity verification and authentication of their customers. For example, financial institutions can use AI to analyze the "liveness" of a customer's voice or picture to determine whether it is a real human or a fake. One panelist reported a 94 percent detection rate for voice deepfakes using AI. AI can also help reduce financial institutions' rates of falsely flagging customers for suspicious activity. This, in turn, bolsters firms' efficiency in BSA/AML compliance and potentially improves access to financial services. Another panelist noted a 75 percent reduction in Suspicious Activity Report (SAR)-related false-positive detections because of their firm's use of AI.
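As a back-of-the-envelope illustration of what the reported 75 percent figure could mean for review workload, the short calculation below applies it to a hypothetical alert volume; every number other than the percentage is an assumption.

```python
# Back-of-the-envelope illustration of a 75% false-positive reduction.
# Only the percentage comes from the roundtable; the volumes are assumptions.
monthly_alerts = 10_000      # hypothetical alerts produced per month
false_positive_share = 0.95  # hypothetical share of alerts that are false
reduction = 0.75             # reduction in false positives reported by a panelist

false_positives = monthly_alerts * false_positive_share  # 9,500
true_positives = monthly_alerts - false_positives        # 500
remaining_fps = false_positives * (1 - reduction)        # 2,375

alerts_after = true_positives + remaining_fps            # 2,875
print(f"Alerts to review before: {monthly_alerts:,}")
print(f"Alerts to review after:  {alerts_after:,.0f}")
```

The point is that cutting false positives shrinks the review queue without touching the true positives that analysts actually need to see.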
There are additional national-security-focused benefits of AI for financial institutions. AI can enable small, community-based financial institutions, which typically have less robust in-house IT and cybersecurity competencies than the largest multinational firms, to detect phishing, fraudulent identities, and other tactics used to penetrate or fool system defenses. Additionally, AI can streamline investigation processes by "packaging" key information for analysts to complete investigations more efficiently and in a manner that is useful for legal and compliance reviews. One participant offered examples of how using AI to detect fraudsters, who themselves may be using AI, is the most effective way to distinguish a genuine human voice or video from a deepfake. Panelists noted, however, that a challenge for financial institutions of all types and sizes is the governance of these systems. A company must have the knowledge and ability to explain to examiners and others how the AI is being used, its capabilities and deficiencies, and the security environment surrounding it. Financial institutions must constantly assess the effectiveness of the deployment and ensure that the AI systems are adding value to their risk management plans.
It is unclear how many financial institutions have truly incorporated AI into their compliance programs for financial- and cybercrime prevention and detection. The innovations that AI could offer in the national security space include better transaction monitoring systems, more effective customer risk assessments, improved and automated compliance reporting, and risk-based case management for financial institution investigations. Wider adoption of AI-driven systems will depend on effective governance, the ability of financial institutions to explain their AI tools, and regulatory clarity on expectations for the use of AI models.
Bipartisan Committee Staff Takeaways
Based on these extended discussions with the Working Group participants, Committee staff
takeaways are below:

• Given the Critical Role of the Financial and Housing Markets, the Committee Should
Play a Leading Role in Overseeing the Adoption of AI in the Financial Services and
Housing Industries. In Working Group discussions, market participants in financial
services and housing highlighted the long-standing use of traditional AI tools in their
internal and consumer-facing products. Panelists also explained their experimentation with
newer AI models such as Gen AI. Because the development of AI technology can outpace
Congress and regulators, the Committee must lead in examining the associated benefits
and risks in the financial services and housing sectors and ensure integral consumer and
investor protections. This includes ensuring that financial services and housing industries’
use of AI does not lead to bias and discrimination in decision-making tools.

• The Committee Must Ensure Regulators Apply and Enforce Existing Laws, Including
Anti-Discrimination Laws, and Assess Regulatory Gaps as Market Participants Adopt
AI. Throughout the AI Working Group sessions, regulators and other expert panelists
pointed to the application of existing laws and regulations to AI, including anti-
discrimination laws. Using AI does not exempt market participants from their obligations
under the law, and regulators must leverage their oversight and enforcement authorities to
ensure those obligations are met as well as examine alternative compliance processes,
where appropriate. Congress and regulators must work to identify any legislative or
regulatory gaps or limitations in light of the application of AI in the financial services and
housing industries.

• The Committee Should Ensure that Financial Regulators Have the Appropriate Focus
and Tools to Oversee New AI Products and Services. The Committee should ensure
regulators can keep up with rapid innovation and utilize new technologies that can help
enhance the efficiency of federal programs, improve monitoring of financial markets and
institutions, and bolster oversight of new products and services. The Committee should
explore the potential benefits of a chief AI officer at each financial regulator to oversee the
respective agency’s approach to AI, including risk mitigation processes.

• The Committee Should Continue to Consider How to Reform Data Privacy Laws Given
the Importance of Data, Especially Consumer Data, to AI. The Committee should
continue to review and update other Federal laws that apply to financial institutions and
financial data, like the Gramm-Leach-Bliley Act (GLBA) and the Fair Credit Reporting
Act (FCRA), to strengthen data privacy protections.

• The Committee Should Work with Financial Regulators to Understand AI Impact on the
Workforce. Use cases for Gen AI models include virtual assistants, chatbots, capital
markets research, personalized financial recommendations, and more. In instances where
Gen AI models can address certain tasks, market participants pointed out that workers
could better focus on other priority projects. Panelists highlighted reskilling and upskilling
workers to mitigate job loss and leverage new skills. The Committee and financial
regulators should examine workforce dynamics, including potential social and economic
disparities, and trends in the financial services and housing industries.

• The Committee Should Ensure U.S. Global Leadership on AI Development and Use. The
Committee should ensure that financial services regulators and agencies work with foreign
jurisdictions to understand cross-border applications of AI in financial services and to
ensure American principles are at the forefront of the discussion. This is especially
important in light of efforts by authoritarian governments like China to use AI to spread
repression, curb democracy, and further their anti-American interests.
APPENDIX A
History of AI in Housing and Financial Services
Over the past several decades, the field of AI has experienced major growth and
investment, as well as its share of challenges.16 In the 1980s, AI experienced renewed interest and
exploration, particularly in Japan, the United Kingdom, and the U.S.17 From the early to mid-80s,
the popularity of personal computers and computer hardware exploded following the release of the
Apple II, TRS-80 Model I, and Commodore PET, and later through the release of Lotus 1-2-3 and
the Apple Macintosh.18 While most of the research and investment focused on the hard sciences, the financial services industry also began to consider how to automate decisions with the help of AI. General Electric (GE) used rules-based systems and heuristics to analyze the quality of commercial loans.19 In 1989, the Fair Isaac Corporation introduced the FICO credit-scoring algorithm, which combined multiple factors, including payment history, credit utilization, and length of credit history, to assess the creditworthiness of borrowers.20
During the 1980s, Edward Feigenbaum, a computer science professor at Stanford
University, developed the concept of “expert systems,” also known as “knowledge systems,”
which focused on mimicking human reasoning.21 This technique enabled companies to make tailored financial plans for consumers, as well as "investment planning, debt planning, retirement planning, education planning, life-insurance planning, budget recommendations, and income tax planning."22 Wall Street began using these expert systems through program trading, also known as algorithmic trading, to automatically execute trades at high speeds based on predetermined conditions and without human intervention.23
In 1982, the mathematician and investor James Simons founded Renaissance Technologies, a quantitative hedge fund that explored algorithmic trading in the late 1980s.24 Through vast market data and pattern analytics, these algorithms can execute trading decisions at high speeds without human intervention. Gradually, algorithmic trading became more popular among institutional investors and large trading firms due to benefits like faster execution time and reduced costs. However, on Monday, October 19, 1987, also known as "Black Monday," global stock exchanges plummeted, with the Dow Jones Industrial Average (DJIA) falling 22.6%,25 and algorithmic trading intensified the market crash.26

16 Bonnie G. Buchanan, Turing Institute, "Artificial Intelligence in Finance: Turing Report," (Apr. 2019).
17 Bonnie G. Buchanan, Turing Institute, "Artificial Intelligence in Finance: Turing Report," (Apr. 2019).
18 Bonnie G. Buchanan, Making Things Think: How AI and Deep Learning Power the Products We Use (Nov. 2, 2022).
19 Peter Duchessi, Hany Shawky, and John P. Seagle, "A Knowledge-Engineered System for Commercial Loan Decisions," Financial Management 17, no. 3 (Autumn 1988): 57-65.
20 Rob Kaufman, "The History of the FICO® Score," myFICO, (Aug. 21, 2018); see also, Machine learning in financial markets: Come to stay, Flossbach von Storch (Feb. 27, 2023); Bank Policy Institute, Navigating Artificial Intelligence in Banking (Apr. 8, 2024).
21 Expert systems can be understood as "a computer program that, after having been properly instructed by a professional, is able to deduce information from a set of data and starting information." See Carol E. Brown, Norma L. Nielson, and Mary Ellen Phillips, "Expert Systems for Personal Financial Planning," Journal of Financial Planning (Jul. 1990), pgs. 137-143.
22 Bonnie G. Buchanan, Turing Institute, "Artificial Intelligence in Finance: Turing Report," (Apr. 2019).
23 In 1982, James Simons, a renowned mathematician and investor, founded the quantitative hedge fund Renaissance Technologies, and in the late 1980s, the firm began to explore algorithmic trading. See "James Simons: A Billionaire Mathematician's Life's Work," AP News, (May 3, 2023); CRS, "High-Frequency Trading: Background, Concerns, and Regulatory Developments," R44443, (Jul. 17, 2014).
24 "James Simons: A Billionaire Mathematician's Life's Work," AP News, (May 3, 2023), https://apnews.com/article/james-simons-renaissance-technologies-simons-foundation-9f97b19939806f970bdaa09878e382da.
Later, from the late 1980s to the mid-1990s, AI experienced its second winter. This period was caused by a variety of factors, including the application limits of these advanced systems, the high costs of maintaining and updating those systems,27 a lack of funding for projects, and a shortage of investment from government agencies and the private sector.28 AI began to emerge again in the late 1990s with the development of the internet, web search engines, and better hardware that created space for additional breakthroughs.29 The first internet banking solution was offered by the Stanford Federal Credit Union in 1994.30
The increased amount of digitized data and the increased capacity of computing hardware contributed to the growth of the next generation of AI.31 Because of these advancements, AI experts began to focus more on ML and neural networks, which are biologically inspired software.32 Neural networks mimic the way living things process information and identify complex patterns. This shift first took hold in the early 1990s, when IBM developed Deep Blue, a computer chess-playing system that could search up to 200 million positions per second and defeated Garry Kasparov, the Russian grandmaster, in one game of a six-game match in 1996.33 Deep Blue's success demonstrated the capabilities of AI systems and inspired a new wave of research to create supercomputers that could conduct risk analysis in finance, mine data, and more.34 In addition, the newly established FinCEN employed a unique application of AI technology to flag suspicious activity, which better supported analysts in searching their internal database reports to help combat money laundering.35 Banks, payment processors, and core providers also implemented AI fraud detection systems in the following years.
Later, in the 2010s, advances in graphics processing units (GPUs), which can process multiple tasks simultaneously, enabled layers of neural networks to be trained on massive amounts of data. This method, called "deep learning," enabled applications to recognize complex patterns and continually learn in ways similar to humans.36 For example, neural networks are used to enhance FX trading by leveraging simulated data from various types of market conditions to select the best order placement and execution style designed to minimize market impact.37

25 Donald Benhardt and Marshall Eckblad, "Stock Market Crash of 1987," Federal Reserve Bank of Chicago (Nov. 22, 2013).
26 Donald Benhardt and Marshall Eckblad, "Stock Market Crash of 1987," Federal Reserve Bank of Chicago (Nov. 22, 2013); David S. Ruder, "Remarks by David S. Ruder, Chairman, U.S. Securities and Exchange Commission, Before the National Association of Securities Dealers, Inc.," (Feb. 18, 1988).
27 Perplexity, "History of AI," (Jun. 14, 2024), https://www.perplexity.ai/page/History-of-AI-A8daV1D9Qr2STQ6tgLEOtg.
28 John Werner, "Three Lessons Learned from the Second AI Winter," Forbes, (Apr. 9, 2024), https://www.forbes.com/sites/johnwerner/2024/04/09/three-lessons-learned-from-the-second-ai-winter/?sh=56e8b0b9c3cd.
29 John Werner, "Three Lessons Learned from the Second AI Winter," Forbes, (Apr. 9, 2024).
30 Stanford Federal Credit Union, "About Us," (Jun. 14, 2024), https://www.sfcu.org/about/.
31 John Werner, "Three Lessons Learned from the Second AI Winter," Forbes, (Apr. 9, 2024).
32 John Werner, "Three Lessons Learned from the Second AI Winter," Forbes, (Apr. 9, 2024).
33 IBM, Deep Blue (Jul. 3, 2024); IoT World Today, "25 Years Ago Today: How Deep Blue vs. Kasparov Changed AI," (Jun. 14, 2024).
34 IBM, Deep Blue (Jul. 3, 2024).
35 "The FinCEN Artificial Intelligence System: Identifying Potential Money Laundering," National Criminal Justice Reference Service, (Jun. 14, 2024).
36 National Institute of Justice, "A Brief History of Artificial Intelligence," (Sep. 30, 2018); Sergei Gleyzer, Federico Carminati, Sofia Vallecorsa, and Denis Perret-Gallix, "The Rise of Deep Learning," CERN Courier, (Jul. 9, 2018).
The most recent development in AI technology, Gen AI, is able to respond to natural language inquiries and generate poems, essays, summaries of large documents, and other high-quality conversational text. The first version of GPT was launched by OpenAI in 2018 and was trained on 40 gigabytes of internet data.38 In 2021, OpenAI created DALL-E, an ML model that generates images from text descriptions provided by the user, based on internet data.39 OpenAI's launch of ChatGPT in 2022 led to "a rare moment when an AI/ML technology became directly accessible by the broad public,"40 as well as significant new interest and investment in AI technology by a broad range of sectors.
Gen AI models hold enormous potential and can, for example, streamline the examination
process involved in investigating suspected market manipulation and insider trading activity by
producing a consolidated table of the company’s regulatory filings, news sentiment analysis, and
other factors that may impact any given security.41
Gen AI technologies differ from traditional AI tools, like the predictive ML models that have been used in the housing and financial services sectors for decades.42 Financial services companies have seen the promise of AI across its iterations and have deployed it in a variety of use cases. A 2022 survey found that over 75 percent of financial services companies use at least one kind of advanced computing.43 However, both traditional and newer AI models face risks of bias and discrimination, particularly if their inputs and the data they are trained on integrate historical inequities without sufficient guardrails.44

APPENDIX B
President Biden’s Executive Order 14110, Safe, Secure, and Trustworthy Development and
Use of Artificial Intelligence
President Biden's Executive Order directed federal agencies to coordinate with each other to develop guidelines, standards, and best practices for AI safety and security.45 The Executive Order outlines eight guiding principles and priorities to govern the development and use of responsible AI across government agencies.46 At a high level, the Executive Order focuses on the following:

37 Vention, "Neural Networks in Financial Trading and Analysis," (Jun. 12, 2024).
38 Bernard Marr, "A Short History of ChatGPT: How We Got To Where We Are Today," Forbes, (May 19, 2023).
39 Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, and Scott Gray, "DALL-E: Creating Images from Text," OpenAI, (Jan. 5, 2021).
40 Gary Shorter and David W. Perkins, "Artificial Intelligence and Machine Learning in Financial Services," Congressional Research Service, R47997 (Jun. 1, 2023).
41 "Nasdaq to Enhance Global Market Surveillance Offering with Generative AI," Nasdaq, (May 15, 2024).
42 See, e.g., CRS, Artificial Intelligence and Machine Learning in Financial Services (Apr. 3, 2024).
43 NVIDIA, AI in Financial Services: 2022 Trends, (2022).
44 "Humans Are Biased. Generative AI Is Even Worse," Bloomberg (Jun. 9, 2023).
45 Treasury, U.S. Department of the Treasury Releases Report on Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Sector (Mar. 27, 2024).


• Safety and Security: Agencies are directed to promote the development and implementation of policies and procedures to mitigate AI risks related to biotechnology, cybersecurity, critical infrastructure, and other national security dangers.
• Responsible Innovation and Competition: Agencies are encouraged to attract AI talent to
the United States, clarify questions surrounding intellectual property, protect technologists
and creators, and promote AI innovation across all business sectors. Additionally,
addressing risks from major firms’ access to semiconductors, computing power, cloud
storage, and data would create opportunities for small businesses, workers, and
entrepreneurs.
• Worker Support: Agencies are directed to research and develop mechanisms to mitigate any workforce disruptions occurring as a result of AI adoption. Under this principle, AI deployment would be grounded in engagement with workers, labor unions, educators, and employers.
• Consideration of AI Bias, Equity, and Civil Rights: Agencies are directed to mitigate the potential civil rights violations that implementation of AI models may perpetuate and to ensure AI complies with all Federal laws through robust technical evaluations, oversight, and engagement with impacted communities.
• Consumer Protection: Agencies are directed to continue enforcing technology-neutral
regulations and existing consumer protection laws that protect consumers against fraud,
discrimination, privacy infringement, and other harms, and to identify areas where more
authorities are needed as they relate to AI.
• Privacy: Agencies are instructed to evaluate the data privacy risks associated with the
collection, use, and retention of user data for AI and explore potential risk mitigation
mechanisms.
• Federal Use of AI: The Office of Management and Budget is required to establish an
interagency council that will develop guidance on AI use, governance, and risk
management within federal agencies.
• International Leadership: The Executive Order provides that the United States should
establish itself as a global leader in the development and adoption of AI innovation through
engagement with international allies and by leading efforts to develop common regulatory
and accountability principles for AI.

46 Treasury, U.S. Department of the Treasury Releases Report on Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Sector (Mar. 27, 2024).
Treasury Department
Deliverable: Within 150 days of the EO, the Secretary of the Treasury “shall issue a public
report on best practices for financial institutions to manage AI-specific cybersecurity risks.”
On March 27, 2024, Treasury issued its report, Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Sector (Report),47 pursuant to the Executive Order. The Report explained that financial institutions have been using AI systems within their internal operations, and specifically to support their cybersecurity and anti-fraud operations, for several years. Many financial institutions have also incorporated AI-related risks into their risk management frameworks, particularly those related to information technology and compliance, as well as third-party risk management.
The Report also highlighted the opportunities and challenges AI presents to the
financial services industry and outlined steps to address AI-related operational risk,
cybersecurity, and fraud challenges:
1. Addressing the Technological Gap Between Small and Large Financial Institutions: The Report discusses the importance of addressing how AI will affect the already growing gap in technological capabilities between large and small financial institutions. Specifically, it notes that large institutions are able to leverage their access to cloud services and large data repositories to develop in-house AI systems, while smaller institutions do not have the necessary resources to develop their own models.
2. Narrowing the Data Gap to Prevent Fraud: The Report discusses the significance of the
data gap between large and small financial institutions in the area of fraud prevention.
Specifically, it highlights the advantage large financial institutions have because of their
access to large historical data repositories and underscores that smaller financial
institutions lack the breadth of internal data and capacity to develop their own systems.
3. Regulatory Coordination: The Report discusses how financial institutions and regulators
are coordinating to address AI oversight concerns, including regulatory fragmentation as
the various regulatory authorities at the state, federal, and international levels begin
establishing and implementing guidelines.
4. Expanding NIST’s AI Risk Model: The Report discusses expanding the National Institute
of Standards and Technology (NIST) AI Risk Framework to incorporate AI-specific
guidelines regarding governance and risk management for financial institutions. Given the
financial sector’s maturity with both AI and risk management, Treasury will assist NIST to
develop an AI risk management framework specific to the financial sector.
5. Data Supply Chain Mapping and Nutrition Labels: The Report highlights the importance of monitoring data supply chains to ensure AI models are only using accurate and reliable data. For financial institutions, this means tracking internal data and understanding how it is being used. The Report recommends that the financial sector develop a standardized set of best practices for data supply chain mapping and "nutrition labels" for vendor-provided AI systems and data providers. These "nutrition labels" would provide information regarding what data was used to train the model in question, where the data originated in the supply chain, and how the data is being used (a minimal sketch of one possible label format appears after this list).

47 Treasury, U.S. Department of the Treasury Releases Report on Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Sector (Mar. 27, 2024).
6. Black Box AI Solutions: The Report suggests that financial institutions should pursue research and development on explainability solutions for black-box systems, including Gen AI.
7. Human Capital Gaps: The Report highlights that the rapid pace of development in AI models has created a significant talent gap between AI technologists and those utilizing AI models. Further, the Report suggests bridging this gap with a set of best practices for less-skilled practitioners to ensure safe and effective use of AI by financial institutions. In addition, it suggests that financial institutions implement role-specific AI training for employees outside of information technology, e.g., legal, compliance, and operations. Treasury also found that the rate of change may exacerbate IT-related workforce gaps.
8. Need for Common AI Lexicon: The Report suggests that financial institutions, regulators, and users would benefit greatly from a shared understanding of a common, AI-specific lexicon.
9. Digital Identity Solutions: Digital identity solutions can be of use to financial institutions in their efforts to combat fraud and further strengthen cybersecurity; however, these solutions differ in their technology, governance, and security. The Report suggests harmonizing international and national digital identity technical standards to promote uniformity.
10. International Coordination and Leadership: The Report discusses the importance of
coordinating with international financial regulatory authorities to ensure interoperability
and standards-setting across jurisdictions.
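The Report describes the contents of a "nutrition label" at a high level but does not prescribe a schema. Below is a minimal sketch of one shape such a label could take, referenced from item 5 above; the field names, vendor, and model are hypothetical assumptions for illustration.

```python
# One possible shape for a vendor "nutrition label." The Treasury Report
# describes the contents at a high level (training data, origin in the
# supply chain, how the data is used); every field name here is an
# illustrative assumption, not a prescribed schema.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class DataSource:
    name: str    # e.g., "synthetic transaction corpus"
    origin: str  # where in the data supply chain the data came from
    usage: str   # how the data is used (training, validation, etc.)

@dataclass
class ModelNutritionLabel:
    model_name: str
    vendor: str
    intended_use: str
    training_data: list[DataSource] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)

# Hypothetical label for a hypothetical vendor model.
label = ModelNutritionLabel(
    model_name="txn-screening-v2",
    vendor="ExampleVendor Inc.",
    intended_use="transaction monitoring alert triage",
    training_data=[
        DataSource(
            name="synthetic transaction corpus",
            origin="vendor-generated",
            usage="model training",
        )
    ],
    known_limitations=["not evaluated on cross-border wire activity"],
)

print(json.dumps(asdict(label), indent=2))
```

A standardized schema along these lines would let an institution compare vendor models field by field during third-party due diligence.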
Treasury noted that in the coming year it plans to work alongside market participants,
regulators, and international partners on the key initiatives discussed in the Report. The Committee
will continue to engage with Treasury and keep apprised of all developments related to ongoing
AI initiatives as part of the Executive Order.
Consumer Financial Protection Bureau and Federal Housing Finance Agency
Deliverable: The Director of the Federal Housing Finance Agency and the Director of the
Consumer Financial Protection Bureau are encouraged to consider using their authorities, as they
deem appropriate, to require their respective regulated entities, where possible, to use appropriate
methodologies including AI tools to ensure compliance with Federal law, and: 1.) evaluate their
underwriting models for bias or disparities affecting protected groups; and 2.) evaluate automated
collateral-valuation and appraisal processes in ways that minimize bias.
The Executive Order encourages the CFPB to issue additional guidance requiring its regulated entities to use "appropriate methodologies including AI tools" to evaluate existing underwriting models, automated collateral valuation, and appraisal processes for bias. Further, the Executive Order encourages the CFPB to issue guidance "addressing the use of tenant screening systems in ways that may violate the Fair Housing Act (Public Law 90-284), the Fair Credit Reporting Act (Public Law 91-508), or other relevant Federal laws, including how the use of data, such as criminal records, eviction records, and credit information, can lead to discriminatory outcomes in violation of Federal law."48 The CFPB is also encouraged to issue guidance addressing how the use of AI models for advertisements pertaining to housing, credit, and other real estate transactions may violate the Fair Housing Act, the Consumer Financial Protection Act, and the Equal Credit Opportunity Act (ECOA).
The Executive Order likewise encourages the FHFA, alongside the CFPB, to issue additional guidance requiring their "respective regulated entities, where possible, to use appropriate methodologies including AI tools to ensure compliance with Federal law," including by using such tools to evaluate existing underwriting models, automated collateral valuation, and appraisal processes for bias.
Housing and Urban Development
Deliverable: Within 180 days of the Executive Order, the Secretary of HUD shall issue, and the CFPB Director is encouraged to issue, guidance 1.) addressing the use of tenant screening systems in ways that may violate the Fair Housing Act, Fair Credit Reporting Act, or other relevant Federal laws; and 2.) addressing how the Fair Housing Act, the Consumer Financial Protection Act of 2010, and the Equal Credit Opportunity Act apply to the advertising of housing, credit, and other real estate-related transactions through digital platforms, including those that use algorithms to facilitate advertising delivery.

The Executive Order directs HUD to issue guidance on the use of AI in housing decisions. Specifically, it provides that the Secretary shall issue additional guidance addressing the use of tenant screening systems to ensure they do not violate existing laws. Furthermore, the Executive Order provides that the Secretary shall issue guidance addressing how the Fair Housing Act, the Consumer Financial Protection Act of 2010, and the Equal Credit Opportunity Act "apply to the advertising of housing, credit, and other real estate-related transactions through digital platforms, including those that use algorithms to facilitate advertising delivery, as well as on best practices to avoid violations of Federal law."49
On May 2, 2024, HUD released two guidance documents addressing the application of the Fair Housing Act to AI in tenant screening, as well as to the advertising of housing, housing credit, and other real estate-related transactions through digital platforms. The tenant screening guidance describes the type of information that should be considered by an AI model when making renting decisions. In its guidance document, HUD underscores the importance of nondiscrimination in housing based on race, color, religion, sex, familial status, national origin, or disability. The guidance also states that applicants should be given the opportunity to correct inaccuracies in their records and be provided clear reasons for denial.

48 Exec. Order No. 14110, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (Oct. 30, 2023).
49 Exec. Order No. 14110, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (Oct. 30, 2023).
The advertising guidance highlights specific risks firms encounter when advertising
housing and housing credit. It emphasizes that advertising platforms are responsible for ensuring
their practices do not exclude or target customers based on protected characteristics.
Other Agency Actions Related to AI
• On June 6, 2024, the Treasury Department released a “Request for Information (RFI) on
Uses, Opportunities, and Risks of Artificial Intelligence in the Financial Services Sector.”50
• On March 20, 2024, the FHFA announced the Gen AI in Housing Finance TechSprint, which will bring together housing and AI experts to discuss use cases and risk management controls for Gen AI in the housing finance system.
• On January 25, 2024, the SEC published an investor alert titled "Artificial Intelligence (AI) and Investment Fraud," which sought to make investors aware of the increased instances of fraud "involving the purported use of AI."51
• On July 26, 2023, the SEC proposed new rules that would require broker-dealers and
investment advisers to take certain steps to address conflicts of interest associated with
their use of predictive data analytics and similar technologies to interact with investors.52
• In July 2023, FHFA's Office of Financial Technology held its first-ever TechSprint, Velocity. The TechSprint brought together experts and practitioners from the technology and mortgage finance sectors to participate in a team-based, problem-solving event. Participants looked to solve problems related to automated verification processes in mortgage lending, data standard harmonization, alternative data in underwriting, and the digital mortgage experience.53
• On September 19, 2023, the CFPB issued guidance about certain legal requirements that
lenders must adhere to when using AI and other complex models. The guidance
emphasized that “lenders must use specific and accurate reasons when taking adverse
actions against consumers.”54
• In June 2023, the CFPB published an “Issue Spotlight” focusing on the use of chatbots by
financial institutions.55
• On May 26, 2022, the CFPB released a circular on adverse action notification
requirements regarding credit decisions based on complex algorithms.56

50 Treasury, U.S. Department of Treasury Releases Request for Information on Uses, Opportunities, and Risks of Artificial Intelligence in the Financial Services Sector (Jun. 6, 2024).
51 SEC, SEC Artificial Intelligence (AI) and Investment Fraud: Investor Alert (Jan. 25, 2024).
52 SEC, SEC Press Release, "SEC Proposes New Requirements to Address Risks to Investors from Conflicts of Interest Associated With the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers" (Jul. 26, 2023).
53 FHFA, FHFA Insights: Recapping FHFA's Inaugural TechSprint (Oct. 10, 2023).
54 CFPB, CFPB Press Release, "CFPB Issues Guidance on Credit Denials by Lenders Using Artificial Intelligence," (Sep. 19, 2023).
55 CFPB, CFPB Press Release, "CFPB Issue Spotlight Analyzes "Artificial Intelligence" Chatbots in Banking" (Jun. 6, 2023).
56 CFPB, "Consumer Financial Protection Circular 2022-03" (May 26, 2022).
• In February 2022, the FHFA issued supervisory guidance to Fannie Mae and Freddie Mac. The guidance provided the GSEs with an AI and Machine Learning risk management framework intended to offer a flexible approach to using these technologies.57
• On February 23, 2022, the CFPB outlined options to prevent algorithmic bias in home
valuations.
• On March 31, 2021, the CFPB, the Federal Reserve Board (Fed), the Federal Deposit Insurance Corporation (FDIC), the National Credit Union Administration (NCUA), and the Office of the Comptroller of the Currency (OCC) issued an RFI regarding financial institutions' use of AI.58
• On July 20, 2020, the FDIC issued an RFI seeking the public's view "on the potential for a public/private standard-setting partnership and voluntary certification program to promote the efficient and effective adoption of innovative technologies at FDIC-supervised financial institutions."59

57 FHFA, FHFA OIG, Enterprises Use of AI and Machine Learning (Feb. 2022).
58 CFPB, CFPB Press Release, "Agencies Seek Wide Range of Views on Financial Institutions' Use of Artificial Intelligence" (Mar. 29, 2021).
59 FDIC, FDIC Press Release, "FDIC Seeks Input on Voluntary Certification Program to Promote New Technologies," (Jul. 20, 2020).
APPENDIX C

Roundtable Participants:

Federal Regulator Roundtable I:


1. Office of the Comptroller of the Currency
2. Federal Deposit Insurance Corporation
3. National Credit Union Administration
4. Federal Reserve
5. U.S. Securities and Exchange Commission

Federal Regulator Roundtable II:


1. Office of Cybersecurity and Critical Infrastructure Protection
2. Financial Crimes Enforcement Network
3. Consumer Financial Protection Bureau
4. Federal Insurance Office
5. U.S. Department of Housing and Urban Development
6. Federal Housing Finance Agency

Use-cases Roundtable: Capital Markets


1. NASDAQ
2. Robinhood
3. S&P Global
4. Betterment
5. Public Citizen

Use-cases Roundtable: Housing and Insurance

1. Rocket Mortgage
2. ZestAI
3. Zillow
4. Canopy Analytics
5. Marsh McLennan
6. National Fair Housing Alliance

Use-cases Roundtable: Financial Institutions and Nonbank Firms


Panel I
1. Great Lakes Credit Union
2. Optus Bank
3. Featurespace
4. Lendistry
5. Consumer Reports
6. InDebted
Panel II
1. Ameris Bank
2. Fiserv
3. C3.AI
4. Capital One
5. AWS
6. FinRegLab

Use-cases Roundtable: National Security and Illicit Finance Roundtable


1. Quantifind
2. CrowdStrike
3. FIS Global
4. Pindrop
5. MITRE
6. Consilient
