Navigating Artificial Intelligence in Banking

I. Introduction

Banking organizations[1] have a proven track record of successfully deploying new technologies while continuing to operate in a safe and sound manner and adhering to regulatory requirements.[2] Throughout the years, banking organizations and financial institutions have digitized, gone online, transitioned to mobile services, automated processes, moved infrastructure into the cloud and adopted many other technologies, including machine learning, a form of AI. Many of these new technologies have presented new risks or amplified pre-existing risks, yet banking organizations have been able to manage these risks effectively and evolve to better serve their customers.

Artificial intelligence (AI)—or the ability of a computer to learn or engage in tasks typically associated with human cognition—has received a great deal of attention recently from the public, businesses and government officials. In October 2023, the Biden Administration issued its “Executive Order on the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence” (the AI Executive Order),[3] outlining the Administration’s eight principles for governing the development and use of AI, which include, among other things, ensuring the safety and security of AI technology, promoting innovation and competition and protecting consumers and privacy. The AI Executive Order also directs various government agencies to take actions to promote those goals and affirms that “[h]arnessing AI for good and realizing its myriad benefits requires mitigating its substantial risks.”[4] More recently, in January 2024, the House Financial Services Committee announced the formation of a bipartisan working group to “explore how [AI] is impacting the financial services and housing industries.”[5] AI has also received attention within the banking industry, with banking organizations and their regulatory agencies exploring the potential benefits and potential risks of AI and how the industry may continue to evolve in a safe and sound manner as the technology continues to advance.

Although attention to AI has increased markedly with the broad availability of relatively new technologies like large language models (LLMs), AI is not new. The conceptual foundations of AI were first articulated in scientific literature as early as the late 1940s,[6] and the term “artificial intelligence” was itself coined in 1955.[7] One of the challenges of any discussion of AI is determining the scope of what is meant by “AI.” In this paper, the terms “AI,” “AI model” and “generative AI” have the meanings used in the AI Executive Order[8] and can include a wide range of potential models, processes and use cases that incorporate AI.[9]

Banking organizations may use AI in connection with a variety of activities, including fraud detection, cybersecurity, customer service (such as chatbots) and automated digital investment advising. As with other new technologies, banking organizations have implemented and governed these and other uses of AI within existing risk management frameworks in accordance with applicable regulations, guidance and supervisory expectations. In fact, the integration of AI in the form of machine learning within the financial services sector traces its origins to the 1980s,[10] when it was primarily employed to identify and counteract fraudulent activities; its application has since expanded to a variety of use cases.[11] This paper describes some of the guidance relevant to the use of AI, while recognizing that there is no “one-size-fits-all” approach to AI risk management. Risk management practices will vary depending on the AI technology, application, context, expected outputs and potential risks specific to the individual organization. In addition to the existing guidance, banking organizations also recognize that existing laws are applicable to the use of AI in the various contexts in which it may be employed and take those laws into account when considering particular use cases.[12]

II. Harnessing AI: Governance and Risk Management for Resilience and Innovation

AI is one of the latest of many technologies that have been, or are in the process of being, implemented by banking organizations. AI has a wide range of potential capabilities, is rapidly evolving and may be incorporated in numerous and highly diverse use cases, creating both opportunities and potential risks for banking organizations. This paper outlines the governance and risk management principles already established by the banking agencies that provide an overarching framework for banking organizations to implement AI in a safe, sound and fair manner. The comprehensive approach required by the banking agencies allows banking organizations to apply their existing risk management practices to evolving technologies and the potential risks those technologies present. This is particularly important in the AI context given the speed at which AI technologies are developing. Banking organizations must be able to act quickly to identify, evaluate, monitor and manage risks posed by emerging AI technologies, and can use currently available risk management processes to do so.

This paper makes two central points: (1) while AI’s applications will differ based on the nature of the AI and the applicable use case and business context, banking organizations’ existing governance and risk management principles provide a framework for consistency, coordination and adaptability in the face of the opportunities and potential risks posed by AI, and (2) given the dynamic nature of AI and its potential use cases, continued partnership with the banking and financial sector agencies is necessary to ensure that the sector’s approach to AI remains both responsive and aligned with regulations, guidance and the broader objectives of safety and soundness in financial markets and consumer protection.

Responsible implementation of AI benefits from a deliberate approach from regulators and other stakeholders as all parties continue to learn how best to address challenges and take advantage of opportunities in this space. That approach must balance the opportunities and potential risks presented by AI and account for the need of banking organizations and regulators to adapt to evolving circumstances. It is in everyone’s best interests for AI tools to be implemented in a safe, sound and fair manner, enabling banking organizations and their customers to benefit from new AI capabilities while appropriately mitigating risks. Those goals are best served by banking organizations and regulators working together to share information and identify benefits and risks, as well as appropriate mitigation strategies. BPI[13] and its technology policy division, BITS,[14] look forward to continuing to work with BPI’s members, the federal banking agencies and other U.S. government offices to facilitate future collaboration and consultations as the AI landscape evolves.[15]

To lay a common groundwork for future conversations, this paper highlights some elements of enterprise risk management (ERM), including risk governance, model risk management, data risk management and third-party risk management, that provide a framework within which banking organizations can identify, assess, manage and monitor the potential risks that may be posed by emerging AI technologies. Through these frameworks, banking organizations have the tools to effectively manage risks posed by AI, even while AI, its use cases and the application of these frameworks to AI are evolving.

III. Embracing Emerging Benefits and Understanding Potential Risks

Integrating AI into the banking sector offers potential benefits, including processing information and detecting patterns with greater efficiency and effectiveness by augmenting human capabilities. The ability of AI to analyze vast, complex datasets can reveal trends and anomalies beyond human detection, enhance decision-making and potentially reduce bias. AI tools employing machine learning (ML) can continuously learn and adapt, improving their pattern recognition capabilities. Even so, AI also has the potential to exacerbate biases within a model or data set, which can produce inaccurate or misleading results. Further, the opacity of certain AI models’ methods can make it challenging for users to identify and correct inaccuracies or biases.

The adoption of any new technology requires consideration of its risks and rewards, and banking organizations rely on their robust governance and risk management practices to do so. As BPI has noted in connection with the implementation of other emerging technologies, managing risk is fundamental to the business of banking and it is imperative for banking organizations to assess and manage possible risks and benefits in all aspects of their businesses.[16] Responsible implementation of AI in the banking sector hinges on many factors, including integrating established risk management practices, such as model risk management, risk governance and third-party risk management. This approach to risk management can help to confirm that AI’s performance and outputs meet expectations and allow banking organizations to adapt to evolving risks.

Certain of these established risk management practices, including validation protocols, thorough testing of model outputs and ongoing monitoring of AI tools to continuously assess model quality, performance drift and robustness, will play an important role in light of the unique characteristics of certain AI tools. For example, the validation process for an AI tool may benefit from additional or modified human input or intervention. “Human-in-the-loop” validation is useful for many AI tools and is especially important in the specific context of generative AI, given its tendency to hallucinate, that is, to produce false or misleading information presented as fact. AI performance can also be evaluated through metrics, including those that measure performance over time, precision, recall and accuracy, among other things. Such metrics can be assessed through automated evaluation, human evaluation or a combination of both. This would include, but not be limited to, model evaluation, with a primary focus on overall LLM performance, and system evaluation, with a primary focus on the effectiveness of LLMs in specific use cases.

Explainability must also be considered in applying risk management principles, especially for generative AI technology. Fundamentally, explainability refers to the capacity to discern, in a consistent and understandable manner, how outputs are generated. Many AI models, especially those employing complex algorithms such as deep neural networks, generate outputs for which neither the user nor the developer can easily or comprehensively discern the basis. Practices around data inputs, decision-making criteria and the weighting of those criteria, assurance review and other areas are being developed to ensure that validation processes keep pace with the technology. Likewise, the field of explainable AI, which aims to demystify AI models and make their operations more transparent and understandable, is in its early stages and continuing to develop.[17] This includes developing methodologies to trace how AI models process inputs into outputs and to understand the states of the models before and after processing.
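To make the evaluation metrics mentioned above concrete, the sketch below computes accuracy, precision and recall for a hypothetical binary classifier (for example, a model flagging transactions as fraudulent). The function, data and labels are invented for illustration; this is not a description of any particular banking organization’s monitoring process.

```python
# Minimal illustrative sketch: accuracy, precision and recall for a
# hypothetical binary classifier (1 = flagged/fraudulent, 0 = legitimate).
# All names and data here are invented for illustration.

def evaluate(actual, predicted):
    """Return (accuracy, precision, recall) for binary 0/1 labels."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
    accuracy = (tp + tn) / len(actual)               # share of all calls that were correct
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # of items flagged, how many were truly positive
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # of true positives, how many were caught
    return accuracy, precision, recall

# Hypothetical labeled outcomes from a review period.
actual    = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]
predicted = [1, 0, 1, 1, 0, 0, 0, 0, 1, 0]

acc, prec, rec = evaluate(actual, predicted)
print(f"accuracy={acc:.2f} precision={prec:.2f} recall={rec:.2f}")
```

Tracking such metrics over successive review periods is one simple way to surface the performance drift discussed above: a sustained decline in recall, for instance, may indicate that the model no longer reflects current patterns in the data.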


[1] This paper focuses principally on the governance and risk management practices, regulations, guidance, and supervisory expectations applicable to banking organizations. However, many of the principles discussed herein are relevant to other categories of financial institutions and the regulations and policies to which they are subject.

[2] This paper focuses predominantly on regulatory requirements applicable to U.S. bank organizations.

[3] Executive Order No. 14110, 88 Fed. Reg. 75,191 (Oct. 30, 2023).

[4] Id.

[5] Staff of House Financial Services Committee, Press Release, McHenry, Waters Announce Creation of Bipartisan AI Working Group (Jan. 11, 2024).

[6] Bernadette Longo, Edmund Berkeley, Computers, and Modern Methods of Thinking, IEEE Annals of the History of Computing, vol. 26, no. 4, at 4-18 (Oct.-Dec. 2004).

[7] The term “artificial intelligence” was reportedly coined in a 1955 proposal for a “2 month, 10 man study of artificial intelligence” submitted by John McCarthy (Dartmouth College), Marvin Minsky (Harvard University), Nathaniel Rochester (IBM), and Claude Shannon (Bell Telephone Laboratories).

[8] As defined in the AI Executive Order, AI “has the meaning set forth in 15 U.S.C. 9401(3): a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action”; “AI model” means “a component of an information system that implements AI technology and uses computational, statistical, or machine-learning techniques to produce outputs from a given set of inputs”; and “generative AI” means “the class of AI models that emulate the structure and characteristics of input data in order to generate derived synthetic content. This can include images, videos, audio, text, and other digital content.” We expect the generally accepted industry definitions of these terms to continue to evolve and change as the underlying technologies continue to innovate and change.

[9] This paper does not attempt to describe the full universe of models, processes, and use cases.

[10] K. W. Kindle, R. S. Cann, M. R. Craig, and T. J. Martin, “PFPS – Personal Financial Planning System,” in Proceedings of the Eleventh National Conference on Artificial Intelligence, pp. 344-349, 1989.

[11] Ubuntu, “Machine Learning in Finance: History, Technologies, and Outlook,” Ubuntu Blog (accessed Aug. 23, 2023).

[12] The banking agencies have emphasized the applicability of existing laws to the use of AI. Federal Reserve Board Vice Chair for Supervision Michael Barr recently noted that the Federal Reserve is “technology agnostic” when examining firms on compliance with laws such as the Community Reinvestment Act. See Ebrima Santos Sanneh, Regulators Say They Have the Tools to Address AI Risks, supra note 19. In addition, fair lending laws (e.g., the Equal Credit Opportunity Act, Fair Housing Act, and their implementing regulations and related guidance) require explanations for adverse decisions as a means of ensuring fair treatment, and the Consumer Financial Protection Bureau has issued a number of circulars addressing financial institutions’ obligation to provide specific and accurate explanations to customers when their decisions to take adverse actions with respect to credit involve algorithms, such as AI models. See CFPB, Circular 2023-03: Adverse Action Notification Requirements and the Proper Use of the CFPB’s Sample Forms Provided in Regulation B (Sept. 19, 2023); CFPB, Circular 2022-03: Adverse Action Notification Requirements in Connection with Credit Decisions Based on Complex Algorithms (May 26, 2022).

[13] The Bank Policy Institute is a nonpartisan public policy, research, and advocacy group, representing the nation’s leading banks and their customers. Our members include universal banks, regional banks, and the major foreign banks doing business in the United States. Collectively, they employ almost 2 million Americans, make nearly half of the nation’s small business loans, and are an engine for financial innovation and economic growth.

[14] BITS – Business, Innovation, Technology, and Security – is BPI’s technology policy division that provides an executive-level forum to discuss and promote current and emerging technology, foster innovation, reduce fraud, and improve cybersecurity and risk management practices for the nation’s financial sector.

[15] BPI and its members have already been engaging in advocacy with respect to the safe and sound adoption of AI in the financial services industry. See, e.g., BPI, Letter re Response to OSTP RFI: National Priorities for Artificial Intelligence (July 7, 2023) (“We are committed to the responsible use and development of AI technologies, underpinned by strong governance, oversight, and risk management. The banking industry’s foundational adherence to, and experience with, robust risk management practices, including model risk management, IT risk management, cyber risk management, enterprise risk management, operational risk management and resilience, data security, and privacy, can be effectively leveraged to assist in establishing a framework designed to allow for the responsible use of AI within the financial services sector.”); BPI and Covington & Burling LLP, Artificial Intelligence: Recommendations for Principled Modernization of the Regulatory Framework (Sep. 14, 2020); Greg Baer and Naeha Prakash, Machine Learning and Consumer Banking: An Appropriate Role for Regulation, BPI (Mar. 14, 2019).

[16] Paige Paridon and Joshua Smith, Distributed Ledger Technology: A Case Study of The Regulatory Approach to Banks’ Use of New Technology, BPI (Feb. 1, 2024).

[17] See, e.g., Defense Advanced Research Projects Agency, Explainable Artificial Intelligence (XAI).