On April 17, 2026, the OCC, the Federal Reserve, and the FDIC jointly rescinded SR 11-7, OCC Bulletin 2011-12, and FIL-22-2017. The new interagency guidance is more flexible, more principles-based, and — crucially — explicitly excludes generative and agentic AI from its scope. The agencies announced that an RFI on AI is forthcoming. Here is what bank technology leaders should do between now and that RFI.
What changed on April 17
SR 11-7 — the Federal Reserve's 2011 Supervisory Letter on Model Risk Management — has been the operative document for bank model governance for fifteen years. It defined what counted as a model, what counted as model risk, and what an effective model risk management program looked like. It was prescriptive. Banks built large MRM functions around it. Examiners examined against it.
That document is now rescinded, alongside its OCC counterpart (OCC Bulletin 2011-12) and the FDIC's 2017 reference (FIL-22-2017). In its place, the three agencies have issued a unified, principles-based framework — OCC Bulletin 2026-13, Federal Reserve SR 26-2, and the FDIC's accompanying FIL — that retains the same conceptual scaffolding (model identification, conceptual soundness, ongoing monitoring, outcomes analysis, governance) but replaces detailed prescription with risk-based judgment scaled to a banking organization's size, complexity, and model risk profile.
For banks above $30 billion in total assets, the new framework is most directly applicable. For smaller institutions, the agencies note the framework remains relevant when model risk is material, particularly where models are used for regulatory reporting, credit, AML, or market risk.
The AI carve-out — and what it actually says
Embedded in the new guidance is a paragraph that has received less attention than it deserves:
"Generative AI and agentic AI models are novel and rapidly evolving and are not within the scope of this guidance. The OCC, Federal Reserve Board, and FDIC plan to issue in the near future a request for information that addresses model risk management generally and considers, in particular, banks' use of AI, including generative AI and agentic AI and AI-based models."
Three things to take from this. First, the agencies are not punting permanently. They are signaling that something specific to AI is coming, but they want input before they write it. Second, the carve-out does not mean AI is unregulated. Existing third-party risk management guidance (the June 2023 interagency guidance, Federal Reserve SR 23-4 and OCC Bulletin 2023-17), the OCC's heightened standards, the FDIC's Part 364 cybersecurity expectations, and the SEC's Regulation S-P amendments all still apply to AI use cases that touch their domains. Third, between now and the RFI's publication, banks deploying AI are operating in a window of regulatory ambiguity that examiners will fill with their own judgment.
That last point is the operational reality. "Examiners will fill the ambiguity with their own judgment" is what "principles-based" actually means in practice. The bank that walks into a 2026 or 2027 exam with weak AI governance documentation will be measured against an examiner's expectation, not a bulletin's text.
What technology leaders should do this quarter
Five things. None of them are new — they are what bank technology leaders should already be doing. The April 2026 guidance does not change the answer; it changes the urgency.
1. Inventory every place AI touches a consequential decision
"Consequential" means: any decision a regulator would expect a human to be accountable for, with reasoning that holds up to examination. Credit decisioning, fraud and AML, customer communication that could be construed as advice, vendor due diligence summaries, transaction monitoring, suspicious activity report generation. Inventory means a list — every model, every vendor, every shadow-IT use of ChatGPT in compliance work, every co-pilot deployment in operations. The first inventory is always longer than expected.
2. Distinguish models you own from models you rent
Bank-built and bank-tuned models live under your model risk framework. Models you access through a vendor (a SaaS underwriting tool, a fraud-as-a-service platform, a CRM with embedded AI) live under your third-party risk framework. Both have to be governed, but the documentation requirements and the questions examiners will ask differ. The April 2026 guidance does not change this division; the June 2023 interagency third-party risk guidance (Federal Reserve SR 23-4 and OCC Bulletin 2023-17) still controls the rented side.
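The owned-versus-rented split can be made mechanical at inventory time, so every entry is routed to exactly one governing framework. A hypothetical sketch — the routing rule and the field names are assumptions for illustration, not a supervisory test:

```python
def governing_framework(entry: dict) -> str:
    """Route an inventory entry to the framework that governs it.

    Bank-built or bank-tuned models fall under the model risk framework;
    models accessed purely through a vendor fall under third-party risk
    (per the June 2023 interagency guidance). Illustrative logic only.
    """
    if entry.get("vendor") is None or entry.get("bank_tuned"):
        return "model-risk (SR 26-2 / OCC 2026-13)"
    return "third-party risk (SR 23-4 / OCC 2023-17)"

print(governing_framework({"vendor": None}))                # bank-built
print(governing_framework({"vendor": "SaaS underwriter"}))  # rented
```

The point is not the code but the discipline: each entry gets one answer, and the answer determines which documentation set and which examiner questions apply.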
3. Draft an AI use policy, before the RFI lands
Banks that already have AI use policies in place, written to current best practice (NIST AI Risk Management Framework, the EU AI Act's high-risk system requirements as a useful sketch even for U.S. firms, the OCC's third-party guidance, internal cybersecurity policy), will spend the months after the RFI mapping their existing policy to the new requirements. Banks that are starting from a blank page will spend the same months in catch-up. The cost of writing a policy that is later 80% adopted is much lower than the cost of writing one from scratch under regulatory pressure.
4. Establish a single accountable executive
In every conversation we have had with an examiner across the past twelve months, the question has been: who is accountable. Not who is responsible. Not who is consulted. Who is accountable. The answer is rarely "the data science team." It is more often the CIO, the CISO, the Chief Risk Officer, or — increasingly — a fractional or in-house CTO whose specific scope includes AI accountability. Whatever title you choose, name the person.
5. Document, document, document
Principles-based regulation is not lighter regulation. It is regulation that requires you to demonstrate the reasoning behind your decisions. That requires written records: model purpose statements, data lineage, validation history, performance monitoring outputs, incident logs, vendor due-diligence files, board-level reporting. The bank that can produce this in under a day at examination has a meaningfully different exam outcome than the bank that has to assemble it over weeks.
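The "produce it in under a day" test can be rehearsed as a completeness check over the documentation set. A minimal sketch, with the artifact names drawn from the list above (the set itself is illustrative, not a regulatory checklist):

```python
# Required artifacts per model, drawn from the list above (names are illustrative).
REQUIRED_ARTIFACTS = {
    "purpose_statement", "data_lineage", "validation_history",
    "monitoring_outputs", "incident_log", "vendor_due_diligence",
    "board_reporting",
}

def documentation_gaps(on_file: set[str]) -> set[str]:
    """Return the required artifacts still missing for one model."""
    return REQUIRED_ARTIFACTS - on_file

gaps = documentation_gaps({"purpose_statement", "validation_history"})
print(sorted(gaps))  # five artifacts to assemble before the exam, not during it
```

Running a check like this quarterly, per model, is the difference between assembling an exam package in a day and assembling it over weeks.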
Why this matters specifically for mid-market banks
Global banks have model risk teams of fifty people. The April 2026 update will produce some adjustment in their workflow but will not change their organizational shape. The same is not true for the regional and community banks where AI is increasingly being deployed by middle managers using off-the-shelf tools, often without IT or compliance sign-off.
For a $1B–$10B regional bank, the gap between "we are using AI in production" and "we have governance for our AI use" is now visible to examiners. Closing it does not necessarily require hiring a full-time CTO or a full-time CISO. It frequently requires a fractional one, with a six- to twelve-month engagement scope: build the inventory, draft the policy, name the accountable executive, set up the documentation cadence, and brief the board. Once the system is in place, the same fractional executive often transitions out as the bank promotes an internal lead into the seat.
Frequently asked
Does the April 2026 guidance apply to community banks under $30 billion?
It applies when model risk is material. For most community banks, the operative threshold is whether models drive regulatory reporting, credit decisioning, BSA/AML monitoring, or market risk. If yes, the framework applies. If no, the bank should still document why model risk is not material — the document is the answer to the examiner's first question.
Is generative AI now unregulated for banks?
No. Generative AI is outside the scope of model risk management guidance, but it remains subject to third-party risk management guidance, cybersecurity expectations under Part 364, fair lending and disparate impact obligations, customer communication standards, and recordkeeping rules. The April 2026 carve-out is narrow.
When will the AI RFI be published?
The agencies have not announced a date. Practitioner expectation is a draft in the second half of 2026 with a comment period of 60 to 90 days, suggesting a final framework in 2027 at the earliest. Use the intervening time.
What is the difference between "AI readiness" and "AI governance"?
AI readiness is the state of being prepared to deploy and govern AI responsibly — inventories, policies, named accountability, documentation, vendor due diligence. AI governance is the operating system that runs once readiness is achieved — review cadences, exception handling, incident response, board reporting. Most banks today need readiness work first.
The window between now and the RFI
The April 2026 update is not a regulatory holiday. It is a window in which banks can build the governance posture that the next round of guidance will assume is already in place. The banks that use the window deliberately — inventory, policy, accountability, documentation — will spend a small amount of effort now and a small amount of effort later mapping to the new framework. The banks that wait for the RFI to be published before starting will have one or two quarters to catch up under examination pressure.
There is not a third option.
Technology consulting for financial services.
Tyche Consulting places senior IT professionals into banks, asset managers, fintechs, and insurers; provides fractional CTO, CIO, and CISO leadership; and advises on technology strategy and AI readiness.
Sources & further reading
- OCC Bulletin 2026-13 — Model Risk Management: Revised Guidance
- Federal Reserve SR 26-2 — Revised Guidance on Model Risk Management
- FDIC FIL — Agencies Revise the Interagency Model Risk Management Guidance, April 17, 2026
- Federal Reserve SR 23-4 — Interagency Guidance on Third-Party Relationships, June 2023 (still in force)
- NIST AI Risk Management Framework