AI in financial services: Applying model risk management guidance in a new world
Behnaz Kibria
Director, Government Affairs and Public Policy, Google Cloud
Advances in artificial intelligence (AI) and machine learning (ML) have led to increased adoption in the financial services sector. A prominent use for this technology is to assist in key compliance and risk functions, including the detection of fraud, money laundering, and other financial crimes and illicit finance, as well as trade manipulation — collectively referred to as “Risk AI/ML.” As the use of these models grows, so do questions about managing risks associated with the models.
In particular, regulators, financial institutions, and technology service providers have been examining whether existing Model Risk Management (MRM) guidance — traditionally the regulatory regime for managing model risk in the financial services industry — remains relevant for AI/ML models. And, if so, how should the guidance be interpreted and applied to this new technology?
“As the financial sector increasingly adopts artificial intelligence and machine learning techniques, it is critical for regulators, financial companies and technology providers to work together to assure that there are clear rules of the road,” says Jo Ann Barefoot, AIR CEO and co-founder. “Updated guidelines on the responsible use of these models can help prevent novel technologies from causing harm, and can also open up better ways to combat risk in areas like money laundering, illicit finance, and fraud.”
Our new white paper, written in partnership with the Alliance for Innovative Regulation (AIR), seeks to address those questions, with the aim of fostering thought and dialogue among agencies, the financial services industry, risk model vendors, and entities interested in the performance, outputs, and compliance of models used to identify, mitigate, and combat risks in financial services. This white paper does not address issues that may arise with other applications of AI/ML in the financial services industry, such as consumer credit underwriting or models using generative AI or Large Language Models, which are better addressed iteratively as those technologies and their uses mature.
The paper argues that MRM guidance, given its broad, principles-based approach, continues to provide an appropriate framework for assessing financial institutions’ management of model risk, even for Risk AI/ML models. Working within an existing framework takes advantage of the knowledge and operational capabilities of institutions that already understand that framework, rather than creating an entirely new approach, which would generally take longer to implement and make effective. Nonetheless, the paper recognizes that AI/ML models have unique traits and characteristics compared to conventional models, including their potential dynamism and pattern recognition capabilities. These distinctions must be in focus when considering how MRM guidance should be applied to Risk AI/ML models.
Taking into account those unique aspects of AI/ML models, the paper offers specific observations and recommendations regarding the application of MRM guidance to Risk AI/ML models, including:
Risk assessment: In assessing risk, it is important to recognize that AI/ML models are not inherently more risky than conventional models. A risk-tiering assessment must consider the targeted business application or process for which a model is used, as well as the model’s complexity and materiality. To assist in these assessments, regulators could clarify that the use of AI/ML alone does not place a model into a high-risk tier and publish further guidance to help set expectations regarding the materiality/risk ratings of AI/ML models as applied to common use cases.
Safety and soundness: Due to the dynamic nature of Risk AI/ML models, reliance on extensive and ongoing testing focused on outcomes throughout the development and implementation stages of such models should be primary in satisfying regulatory expectations of soundness. To that end, the development of technical metrics and related testing benchmarks should be encouraged. Model “explainability,” while useful for purposes of understanding the specific outputs of AI/ML models, may be less effective or insufficient for establishing whether the model as a whole is sound and fit for purpose.
Model documentation: The touchstone for the sufficiency of documentation should be what a financial institution needs to use and validate the model, and to understand its design, theory, and logic. Disclosure of proprietary details, such as model code, is unnecessary and unhelpful in verifying the sufficiency of a model and would deter model builders from sharing best-in-class technology with financial institutions.
Industry standards and best practices: Regulators should support the development of global standards and their use across the financial services and regulatory landscape by explicitly recognizing such standards as presumptive evidence of compliance with the MRM guidance and sound AI/ML risk mitigation practices. In addition, regulators should foster industry collaboration and training based on such standards.
Governance controls: Regulators should use guidance to advance the use of governance controls, including incremental rollouts and circuit breakers, as essential tools in mitigating risks associated with Risk AI/ML models.
“In an era where AI technology has the potential to revolutionize financial services, we acknowledge the foresight of our regulators in setting a solid foundation and blueprint for navigating the labyrinth of potential risks through the MRM guidance,” says Philip Moyer, Global VP, AI and Business Solutions at Google Cloud. “We believe there is room for greater coherence and precision, enhanced risk-mitigation approaches, and refined best practices surrounding AI and ML risk models. Whether it’s in capacity building or information sharing, our call to action is for greater collaboration between regulators and financial institutions. We’re confident that our collective efforts today will help shape a more robust and resilient future for financial services.”
We invite discussion of additional considerations, including the importance of examiner and industry training and collaboration, as well as openness by regulators to continuing to refine the MRM guidance as AI/ML technologies develop and standards emerge.
Implementing our recommendations would advance several goals. It would help regulators, financial institutions, and technology providers work together to better serve their shared purpose of protecting the safety and soundness of the financial system. At the same time, implementing the recommendations and continuing work in this space would promote the adoption of cutting-edge technologies in the industry, including those that combat such scourges as money laundering, illicit finance, and fraud.
You can read the full white paper here.