AI Governance for Financial Institutions

Your controls mapped.
Your gaps surfaced.
Examination-ready.

AuditAlign AI maps your bank's existing internal controls to NIST AI RMF and the CRI Financial Services AI RMF using semantic embeddings — surfacing gaps and generating examiner-ready rationale for every finding. Built by a former federal bank examiner who got tired of watching institutions fail audits they were prepared for.

Request a pilot
Watch 60-second demo
auditalign.io  ·  Built for banks, credit unions & regulated financial institutions
Frameworks covered
NIST AI RMF 1.0 — 72 control requirements
CRI FS AI RMF — 230 control objectives
SR 11-7 alignment guidance
Examiner-ready structured output
The problem

Most institutions fail AI governance reviews
not because they lack controls — but because they can't demonstrate them

Four structural gaps that consistently generate examination findings, regardless of institution size.

01 —

Incomplete AI inventory

MRM-validated models are documented. Vendor-embedded AI — fraud scoring, document processing, decisioning tools — rarely is. SR 11-7 requires a comprehensive inventory regardless of source.

02 —

Policy–practice disconnect

Governance policies reference NIST or SR 11-7 by name but contain no mapping between specific internal controls and the framework requirements those controls are supposed to satisfy.

03 —

Validation methodology mismatch

Traditional SR 11-7 validation procedures are applied to GenAI and ML systems without adaptation. The OCC has explicitly noted that standard back-testing may not be appropriate for non-deterministic systems.

04 —

Board reporting without substance

Reporting acknowledges AI risk in general terms but cannot answer the examination question: what are your open AI governance gaps, who owns them, and what is the remediation timeline?

How it works

From control library
to gap analysis in minutes

01

Upload your internal controls

Provide your existing control library in any format — CSV, Excel, or text. No reformatting required. AuditAlign accepts natural-language control descriptions as written by your team.

02

Select your framework

Choose NIST AI RMF (72 requirements), CRI FS AI RMF (230 control objectives), or run both simultaneously. Frameworks are built in — no configuration needed.

03

Semantic embedding and matching

The engine uses sentence-level embeddings and reciprocal matching logic to compare each internal control against every framework requirement — capturing substantive coverage regardless of terminology.
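As a rough illustration of this matching step, here is a minimal sketch in Python. The vectors, control names, and requirement IDs are invented for illustration; the production engine uses learned sentence embeddings and its own models and thresholds.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# Toy vectors standing in for sentence-level embeddings (hypothetical values).
controls = {
    "CTRL-01 Model inventory": [0.9, 0.1, 0.2],
    "CTRL-02 Drift monitoring": [0.1, 0.8, 0.3],
}
requirements = {
    "GV 1.6 AI inventory": [0.85, 0.15, 0.1],
    "MS 3.1 Performance monitoring": [0.2, 0.9, 0.25],
}

def best_match(vec, pool):
    """Return the key in `pool` whose embedding is most similar to `vec`."""
    return max(pool, key=lambda k: cosine(vec, pool[k]))

# Reciprocal matching: a pairing counts only if each side is the other's
# best match, not merely topically similar in one direction.
for req_id, req_vec in requirements.items():
    ctrl_id = best_match(req_vec, controls)
    mutual = best_match(controls[ctrl_id], requirements) == req_id
    print(f"{req_id} -> {ctrl_id} | reciprocal: {mutual}")
```

The mutual-best check is what keeps a control that is vaguely similar to many requirements from "matching" all of them at once.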

04

Gap classification with rationale

Every finding is classified as Confirmed Match, Partial Match, or Gap — with LLM-generated examiner-ready rationale explaining the finding in plain language an auditor can use.

05

Structured Excel export

Output is a 10-tab workbook: Dashboard, Confirmed Matches, Partial Matches, Gaps, NIST match status, gap register, summary views, and run metadata. Designed for audit trail purposes from the first run.

AuditAlign output — sample run

Framework control | Best match                      | Score | Status
GV 1.6            | AI Model Inventory Registration | 0.81  | Confirmed
MS 3.1            | Performance & Drift Monitoring  | 0.76  | Confirmed
GV 1.2            | AI Privacy & Data Governance    | 0.61  | Partial
MG 4.1            | no internal control             |       | Gap
GV 6.1            | no internal control             |       | Gap
MS 2.5            | Bias & Fairness Testing         | 0.58  | Partial
Supported frameworks

Built on the frameworks
examiners actually reference

Both frameworks are bundled into the platform. No configuration, no manual uploads.

NIST · 2023

NIST AI Risk Management Framework

The voluntary, sector-agnostic framework that has become the de facto reference standard for AI governance in financial services. Four functions — Govern, Map, Measure, Manage — with 72 control requirements.

72 control requirements · All 4 functions
CRI / Treasury · February 2026

CRI Financial Services AI RMF

Co-developed by 108 financial institutions and the U.S. Treasury. The first framework purpose-built for financial services AI governance. Explicitly extends SR 11-7. Four maturity stages from Initial (21 controls) to Embedded (230). Examination implications begin now.

230 control objectives · 4 maturity stages
Federal Reserve / OCC · 2011

SR 11-7 Alignment

Every gap analysis is framed in SR 11-7 language — the supervisory standard examiners will cite during an AI governance review. Output rationale is written to address effective challenge, validation sufficiency, and governance documentation requirements.

Examination-ready language throughout
The output

Built for
audit trails

Every output is structured for the people who will use it — second-line MRM teams, internal audit, and the examiners who review their work.

Examiner-ready rationale for every finding

LLM-generated plain-English explanation of why each control was classified — written in the language an auditor would use in a workpaper, not chatbot output.

10-tab structured workbook

Dashboard, match tables, gap register, framework-level summaries, category breakdowns, and full run metadata. Every tab is audit-ready on export.

Gap register with workflow tracking

Each gap has an owner, risk classification, remediation status, and evidence notes field. The register is designed to look the same in a board package as it does in an examiner's workpapers.

Visual dashboard and run analytics

Coverage donut by NIST function, gap score histogram, stacked bar by framework category, and key findings summary — all from within the platform before export.

Workbook tabs
Dashboard: Coverage summary & key metrics
Confirmed Matches: Controls meeting framework requirements
Partial Matches: Controls with coverage gaps
Gaps: Requirements with no internal control
NIST Match Status: Per-requirement coverage view
NIST Gaps: Unaddressed framework requirements
Internal Summary: Control-level coverage view
Category Summary: Coverage by framework function
Run Metadata: Thresholds, model, timestamp
Pilot program

Run your controls.
Get your gap analysis.

We're currently accepting 2–3 design partner institutions for a no-cost pilot. You provide your controls, we provide the full analysis and a debrief call.

Full gap analysis against both frameworks

Your controls mapped to NIST AI RMF and CRI FS AI RMF simultaneously, with classification and similarity scores for every finding.

Examiner-ready rationale for every finding

LLM-generated rationale for every confirmed match, partial match, and gap — written in examiner language, not vendor marketing language.

10-tab structured Excel export

The full output workbook, ready for your MRM team, internal audit, or board reporting package.

60-minute debrief call

We walk through the results together — explaining each gap, discussing remediation priority, and answering what an examiner would be looking for.

No cost. No commitment.

Design partners receive the full analysis at no charge. In return, we ask for an hour of your time and honest feedback that shapes the product roadmap.

Request a pilot

Tell us about your institution and we'll follow up within one business day to schedule your analysis.

We accept 2–3 design partners per month. All control data is treated as confidential and used solely for the analysis you request.

Thank you — we'll be in touch within one business day.
Brian Kantanka
Founder, AuditAlign AI
Former Federal Bank Examiner
Second-Line Model Risk Management
Regional Bank & Fortune 100 FI
Data Scientist & Full-Stack Developer
AI Governance Practitioner
ProSight MRM Summit Speaker · April 2026
"Most institutions I've examined don't lack commitment to AI governance. They lack the operational infrastructure to demonstrate it."

I spent years on both sides of the examination table — first as a federal bank examiner, then in second-line model risk management roles at a regional bank and a Fortune 100 financial institution. The same problem appeared everywhere: institutions with genuine governance programs that couldn't produce the evidence an examiner needed to see it.

The manual process of mapping internal controls to regulatory frameworks — NIST AI RMF, SR 11-7, the new CRI FS AI RMF — takes weeks of practitioner time and still produces outputs that are inconsistent and hard to defend. I built AuditAlign to automate that specific workflow. Not to replace expert judgment, but to do the mapping work so practitioners can focus on the decisions rather than the documentation.

The platform uses semantic embeddings to compare control language against framework requirements, reciprocal matching logic to classify coverage accurately, and LLM-generated rationale to explain every finding in the language an auditor would recognize. The output looks like something an MRM team produced — because it was built by one.

Pricing

Straightforward pricing.
No long-term contracts.

Start with a free pilot. Scale when you're ready.

Design partner
Pilot
$0 / one analysis

Full gap analysis at no cost for 2–3 institutions per month, in exchange for honest feedback that shapes the roadmap.

NIST AI RMF gap analysis
CRI FS AI RMF gap analysis
Full Excel export (10 tabs)
LLM rationale for all findings
60-minute debrief call
Platform access for team
Gap tracker & workflow
Request pilot
Enterprise
Multi-entity
Custom

For holding companies, bank networks, and consulting firms running analyses across multiple institutions or client portfolios.

Everything in Institution
Multi-institution management
Custom framework uploads
API access
SSO / enterprise auth
Dedicated onboarding
SLA & SOC 2 (roadmap)
Contact us
Common questions

Before you request
a pilot

What format do our controls need to be in?

Any format — CSV, Excel, or plain text. We accept natural-language control descriptions as written by your team. No reformatting required. The most common input is an export from your existing GRC system or a controls register spreadsheet with a description column. We'll clean it before running the analysis.
How does the matching work, and what do Confirmed, Partial, and Gap mean?

The engine uses sentence-level semantic embeddings and bidirectional (reciprocal) matching. A Confirmed Match means the internal control is the best-available match for a framework requirement AND the similarity score exceeds the high-confidence threshold (default 0.40). A Partial Match means there's semantic overlap but coverage is incomplete — the internal control addresses related topics but doesn't fully satisfy the requirement's intent. A Gap means no internal control scored above the minimum threshold for a given framework requirement.

The key insight: high similarity alone doesn't produce a Confirmed Match. The reciprocal match logic ensures that topical overlap without directional coverage is surfaced as a Gap rather than a false confirmation — which is how an examiner would evaluate it.
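The decision rule just described can be sketched in a few lines of Python. The 0.40 high-confidence default comes from this answer; the 0.25 minimum floor and the function name are assumptions for illustration, not the product's actual implementation.

```python
HIGH_CONFIDENCE = 0.40  # default high-confidence threshold (stated above)
MINIMUM = 0.25          # illustrative floor; the actual minimum may differ

def classify(score, is_reciprocal_best):
    """Map a similarity score plus a reciprocal-match flag to a finding status."""
    if score < MINIMUM:
        return "Gap"              # no internal control meaningfully covers this
    if score >= HIGH_CONFIDENCE and is_reciprocal_best:
        return "Confirmed Match"  # mutual best match above the threshold
    return "Partial Match"        # related, but coverage is incomplete

print(classify(0.81, True))   # Confirmed Match
print(classify(0.61, False))  # Partial Match: overlap without directional coverage
print(classify(0.12, False))  # Gap
```

Note that a score well above 0.40 with `is_reciprocal_best=False` still yields only a Partial Match, which is the behavior the reciprocal logic exists to enforce.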
Is our control data kept confidential?

Yes. Control data uploaded for analysis is used solely for the gap analysis you requested. We do not use client control data for model training, benchmarking, or any other purpose. Pilot engagements operate under a mutual NDA on request. We treat your control library with the same confidentiality expectations a bank examiner applies to examination materials.
Does AuditAlign replace our MRM team's judgment?

No — and it's not designed to. AuditAlign replaces the manual mapping work that comes before expert judgment, so your practitioners can focus on decisions rather than documentation. The output is a starting point for your MRM team, not a finished product. Every finding includes a rationale that practitioners should review, override if needed, and use as a structured input to their governance process. The gap register and workflow tools are designed to be embedded in your existing processes, not replace them.
How does AuditAlign handle CRI maturity stages?

The CRI FS AI RMF classifies institutions into four maturity stages — Initial (21 control objectives), Minimal (126), Evolving (193), and Embedded (230) — based on the AI Adoption Stage Questionnaire. AuditAlign can scope its gap analysis to the control objectives applicable at your declared adoption stage, so institutions at an Initial or Minimal stage aren't evaluated against 230 controls that don't yet apply to them. The adoption stage you select is recorded in the run metadata and reflected in the output.
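Stage scoping can be pictured as a simple filter. The stage names come from this answer; the objective IDs and the `stage` tagging scheme are invented for illustration.

```python
# The four CRI FS AI RMF maturity stages, in ascending order.
STAGE_ORDER = ["Initial", "Minimal", "Evolving", "Embedded"]

def in_scope(objectives, declared_stage):
    """Keep only control objectives introduced at or before the declared stage."""
    cutoff = STAGE_ORDER.index(declared_stage)
    return [o for o in objectives if STAGE_ORDER.index(o["stage"]) <= cutoff]

# Hypothetical objectives; the real framework associates each of its 230
# objectives with the maturity stage at which it first applies.
objectives = [
    {"id": "CRI-001", "stage": "Initial"},
    {"id": "CRI-050", "stage": "Minimal"},
    {"id": "CRI-200", "stage": "Embedded"},
]

print([o["id"] for o in in_scope(objectives, "Minimal")])
```

An institution declaring the Minimal stage would be scoped to CRI-001 and CRI-050 here, and never measured against CRI-200.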
Is the output really examination-ready?

The output is designed to be examination-ready — structured the way examiner workpapers are structured, with rationale written in regulatory language. However, no automated tool produces a complete examination response on its own. AuditAlign gives you the mapping, the gap identification, and the evidence structure. Your team provides the institutional context, the remediation commitments, and the judgment calls that make a governance program defensible. Think of it as the first 80% of the documentation work — done accurately, consistently, and in a fraction of the time.
What does the debrief call cover?

A 60-minute walkthrough of your results with the founder — a former federal bank examiner. We cover: the most significant gaps by examination risk, which partial matches are closest to confirmed and what documentation would close them, which gaps represent new capabilities you'd need to build versus controls you may already have that aren't documented correctly, and how to prioritize remediation. You leave with a clear picture of where you stand and what to do next.