
Managing Bank Model Risk in the Age of AI

  • Denise Rinear
  • Published: 12 July 2024



The advent of new AI tools requires new thinking about model risk. Model use is broader today, and model risk management (MRM) programs need to focus on protecting both the bank and its customers from potential harms resulting from model error.

Model risk management efforts typically begin with the question “What models do we have?” which inevitably leads to “What is a model?” These questions are no longer the first ones to ask. Rather, when identifying and evaluating the automation used in your bank, the focus needs to shift from whether a tool, system, or application is a model to understanding and mitigating the potential risk of each.

The regulatory definition of a model focuses on its quantitative output, its use of assumptions, and its use in decision making. The Federal Reserve Board’s Guidance on Model Risk Management (SR 11-7) has been in place since 2011 and requires banks to maintain an inventory of models. Regulatory agencies have issued additional guidance, including the OCC’s Comptroller’s Handbook: Model Risk Management.

Throughout the regulatory guidance, the responsibilities of a model owner have remained consistent. Model owners are responsible for effective model development and use, as well as testing and monitoring. Each model is required to be validated by parties independent of the owner and users.

Regulators’ focus on models is well founded. Models with quantitative output and assumptions used in decisioning – such as CECL, stress testing, or credit underwriting – were the focus because of their direct impact on the financial health of the bank. However, regulators found that a focus on credit and financial models wasn’t enough. 

As regulators looked more closely at bank model inventories, they clarified how the guidance should be interpreted, broadening its scope. They shared that the definition of a model should be applied to critical tools, even when those tools did not appear to meet the technical definition of a model. Bank Secrecy Act (BSA) rules-based systems were the poster children for critical tools, and today most banks simply list their BSA system as a model, regardless of its core processing design.

There is more, however. Regulatory guidance on model risk management has matured since 2011 – but not as quickly as the market for automation. Banks looking to enhance productivity and improve customer service are flocking to a suite of models and tools that may or may not meet the regulatory definition of a model. 

Most of these new models and tools are designed to include some form of machine learning (ML) or artificial intelligence (AI). We are talking about tools like chatbots and intelligent advisors, forecasting programs, and digital platforms, as well as the highly popular ChatGPT and Copilot. Some natural language processing (NLP) models even mimic the popular Alexa or Siri solutions that process voice commands. All of these are examples of new digital assets within organizations and should be assumed to be models or critical tools.

The explosion of such new tools requires new thinking about model risk management. Initially, model risk management was focused on ensuring that models used for critical bank decisions were working properly, preventing the bank from harming itself due to model error. Model use is broader today and therefore model risk management has a broader mission, namely to protect both the bank and its customers from decision harm due to model error.

Regardless of whether a tool, system, or application is technically a model, begin your new thinking with a focus on classifying its risk. At a minimum, start with potential impact, pervasiveness of use, and complexity and explainability.

For instance, start by considering the potential impact if a tool is misused or its outputs are inaccurate. Identify individual tools and classify them according to their impact on humans – consumers, customers, and employees – and the downstream ramifications. Many new AI tools are designed to mimic human decision making, and as with any human, we need to be clear on the consequences of their being wrong. This is particularly true if they are wrong inconsistently. Like human errors, inconsistent errors are harder to find and potentially more damaging, because the disparate treatment they produce can harm consumers.

The Consumer Financial Protection Bureau (CFPB) warned in 2022 that banks using black-box credit models with complex algorithms remain responsible for compliance with consumer protection laws. Its messages continued in 2023, cautioning banks about algorithmic marketing, digital redlining, and the use of AI in appraisals, to name a few. The CFPB entered the chat on chatbots with a 2023 report highlighting some of the challenges with the deployment of chatbots in consumer financial services. Its overarching message has been that automated systems and advanced technology are not an excuse for lawbreaking behavior.

Distributed tools with a wide range of users represent higher risk due to the expanded potential for misuse or misunderstanding. The likelihood of an unwanted or unexpected impact goes up as more individuals use the tool. Tool users are not limited to the designers and owners of those tools, or even to bank employees. Some of the latest tools are designed to be used by customers without human (employee) intervention. 

Nor is pervasiveness of use limited to humans using the model. Some models are intended to be engines that power other models, tools, and applications. These represent higher risk due to their reach across a variety of operations or functions, often without users even knowing they exist.

Finally, tools represent a higher risk when they are more complex and require extensive documentation and/or expertise for their functionality to be properly understood, explained, and managed. Tools designed by third parties may be the most challenging, since model owners may not be qualified to manage the risk that such tools bring into the bank. 
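
To make these three dimensions concrete, here is a minimal Python sketch that scores each tool on impact, pervasiveness of use, and complexity/explainability, then rolls the scores into a risk tier. It is an illustration only: the 1-to-3 scale, the field names, and the tier thresholds are assumptions made for the example, not requirements drawn from SR 11-7 or any other guidance.

from dataclasses import dataclass

# Illustrative scoring of tools along the three dimensions discussed above.
# The 1-3 scale and the tier thresholds are assumptions for this sketch only.
@dataclass
class ToolAssessment:
    name: str
    impact: int          # 1 = internal only, 3 = direct consumer/customer impact
    pervasiveness: int   # 1 = single team, 3 = customer-facing or powers other tools
    complexity: int      # 1 = transparent rules, 3 = opaque or third-party ML/AI

    def risk_tier(self) -> str:
        score = self.impact + self.pervasiveness + self.complexity
        if score >= 7:
            return "High"
        if score >= 5:
            return "Medium"
        return "Low"

inventory = [
    ToolAssessment("Credit underwriting model", impact=3, pervasiveness=2, complexity=3),
    ToolAssessment("Customer-facing chatbot", impact=3, pervasiveness=3, complexity=3),
    ToolAssessment("Internal reporting macro", impact=1, pervasiveness=1, complexity=1),
]

for tool in inventory:
    print(f"{tool.name}: {tool.risk_tier()} risk")

In practice, a simple score like this is a starting point for discussion rather than a final rating; borderline cases would typically be escalated to model risk governance for judgment.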

Once tools have been inventoried and categorized, and their risk level evaluated, then the work of risk management and risk oversight begins. Here is where the Fed’s Guidance on Model Risk Management remains a useful framework for how to best manage risk. It offers four core recommendations for managing models and critical tools:

  • Test prior to using
  • Monitor when using
  • Solicit independent validation and review
  • Fix what you find
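
As a simple illustration of how those four practices could be tracked for each inventoried tool, the Python sketch below records pre-use testing, ongoing monitoring, independent validation, and open findings, then lists whatever remains outstanding. The field names and the 90-day monitoring interval are assumptions for the example, not requirements stated in the guidance.

from dataclasses import dataclass, field
from datetime import date

# Illustrative status record for the four core practices listed above.
# Field names and the monitoring interval are assumptions for this sketch.
@dataclass
class MrmRecord:
    tool_name: str
    pre_use_testing_complete: bool = False
    last_monitoring_date: date | None = None
    independent_validation_complete: bool = False
    open_findings: list[str] = field(default_factory=list)

    def outstanding_actions(self, monitoring_interval_days: int = 90) -> list[str]:
        actions = []
        if not self.pre_use_testing_complete:
            actions.append("Test prior to using")
        if (self.last_monitoring_date is None
                or (date.today() - self.last_monitoring_date).days > monitoring_interval_days):
            actions.append("Monitor when using")
        if not self.independent_validation_complete:
            actions.append("Solicit independent validation and review")
        if self.open_findings:
            actions.append("Fix what you find: " + "; ".join(self.open_findings))
        return actions

record = MrmRecord(
    tool_name="Customer-facing chatbot",
    pre_use_testing_complete=True,
    last_monitoring_date=date(2024, 1, 15),
    open_findings=["Inconsistent answers on fee disclosures"],
)
print(record.outstanding_actions())

Even a lightweight record like this helps demonstrate to examiners that the activities were actually performed, which is the point made below.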


These core risk management practices remain critical to protecting the bank. The level of rigor applied may vary for models versus critical tools but demonstrating that these activities are performed represents the new model for model risk management. 

Please contact us to discover how Capco can help you navigate the complexities of identifying, categorizing, and managing your automation tools and models. 


© Capco 2025, A Wipro Company