
The four building blocks of responsible generative AI in banking

#2 Regulation

AI will be critical to our economic future, enabling current and future generations to live in a more prosperous, healthy, secure, and sustainable world. Governments, the private sector, educational institutions, and other stakeholders must work together to capitalize on AI’s benefits.

If not developed and deployed responsibly, AI systems could amplify societal issues. Tackling these challenges will again require a multi-stakeholder approach to governance. Some of these challenges will be more appropriately addressed by standards and shared best practices, while others will require regulation – for example, requiring high-risk AI systems to undergo expert risk assessments tailored to specific applications.

Many countries and international organizations have already begun to act — the OECD has created its AI Policy Observatory and Classification Framework, the UK has advanced a pro-innovation approach to AI regulation, and Europe is progressing work on its AI Act. Similarly, Singapore has released its AI Verify framework, Brazil’s House and Senate have introduced AI bills, and Canada has introduced the AI and Data Act. In the United States, NIST has published an AI Risk Management Framework, and the National Security Commission on AI and National AI Advisory Council have issued reports.

Understanding the future role of gen AI within banking would be challenging enough even if the regulations were settled; with so much still uncertain, it is harder still. As a result, those creating models and applications need to stay mindful of changing rules and proposed regulations.

We work with policymakers to promote a legal framework that enables AI innovation and can support our banking customers, including regulation and policies that encourage responsible deployment. We also encourage policymakers to adopt or maintain proportionate privacy laws that protect personal information while enabling trusted data flows across national borders.

For the past few years, financial regulatory agencies around the world have been gathering insight into financial institutions’ use of AI and considering how to update existing Model Risk Management (MRM) guidance to cover any type of AI. We shared our perspective on applying existing MRM guidance in a blog post earlier this year.

In the US, the Commerce Department’s NIST established a Generative AI Public Working Group to provide guidance on applying the existing AI Risk Management Framework to the risks of gen AI. Congress has also introduced various bills addressing elements of the risks gen AI might pose, but these remain at a relatively early stage.

Some challenges can be addressed through regulation, ensuring that AI technologies are developed and deployed in line with responsible industry practices and international standards. Others will require fundamental research to better understand AI’s benefits and risks and how to manage them, along with the development and deployment of new technical innovations in areas like interpretability. Still others may require new groups, organizations, and institutions, as we are seeing at agencies like NIST.

We also believe that sectoral regulators are best positioned to update existing oversight and enforcement regimes to cover AI systems, including clarifying how existing authorities apply to the use of AI and how an AI system can demonstrate compliance with existing regulations using international consensus, multi-stakeholder standards such as ISO/IEC 42001. In the EU, enabling mechanisms exist to instruct regulatory agencies to issue regular reports identifying capacity gaps that make it difficult both for covered entities to comply with regulations and for regulators to conduct effective oversight.