AuditAlign AI maps your bank's existing internal controls to NIST AI RMF and the CRI Financial Services AI RMF using semantic embeddings — surfacing gaps and generating examiner-ready rationale for every finding. Built by a former federal bank examiner who got tired of watching institutions fail audits they were prepared for.
Four structural gaps consistently generate examination findings, across institutions of every size.
MRM-validated models are documented. Vendor-embedded AI — fraud scoring, document processing, decisioning tools — rarely is. SR 11-7 requires a comprehensive inventory regardless of source.
Governance policies reference NIST or SR 11-7 by name but contain no mapping between specific internal controls and the framework requirements those controls are supposed to satisfy.
Traditional SR 11-7 validation procedures applied to GenAI and ML systems without adaptation. The OCC has explicitly noted that standard back-testing may not be appropriate for non-deterministic systems.
Reporting acknowledges AI risk in general terms but cannot answer the examination question: what are your open AI governance gaps, who owns them, and what is the remediation timeline?
Provide your existing control library in any format — CSV, Excel, or text. No reformatting required. AuditAlign accepts natural-language control descriptions as written by your team.
Choose NIST AI RMF (72 requirements), CRI FS AI RMF (230 control objectives), or run both simultaneously. Frameworks are built in — no configuration needed.
The engine uses sentence-level embeddings and reciprocal matching logic to compare each internal control against every framework requirement — capturing substantive coverage regardless of terminology.
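The reciprocal-matching idea can be sketched in a few lines. This is an illustrative toy, not the platform's implementation: the vectors below are hand-made stand-ins for sentence embeddings, and the criterion shown — a control and a requirement pair up only when each is the other's single best cosine-similarity match — is one common reading of "reciprocal matching logic."

```python
import numpy as np

def cosine_sim(A, B):
    # Rows of A: control embeddings; rows of B: requirement embeddings.
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return A @ B.T

def reciprocal_matches(controls, requirements):
    """Return (control_idx, requirement_idx, score) triples where each
    side is the other's single best match -- the reciprocal criterion."""
    sim = cosine_sim(controls, requirements)
    best_req = sim.argmax(axis=1)    # best requirement for each control
    best_ctrl = sim.argmax(axis=0)   # best control for each requirement
    return [(c, int(r), float(sim[c, r]))
            for c, r in enumerate(best_req)
            if best_ctrl[r] == c]    # keep only mutual best matches

# Toy 3-dimensional "embeddings" standing in for sentence vectors.
controls = np.array([[0.9, 0.1, 0.0],
                     [0.1, 0.8, 0.2]])
requirements = np.array([[1.0, 0.0, 0.0],
                         [0.0, 1.0, 0.1],
                         [0.3, 0.3, 0.9]])
print(reciprocal_matches(controls, requirements))
```

The reciprocal requirement is what keeps one broadly worded control from "covering" many framework requirements at once: a requirement only counts as matched when the control is also its best available counterpart.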
Every finding is classified as Confirmed Match, Partial Match, or Gap — with LLM-generated examiner-ready rationale explaining the finding in plain language an auditor can use.
Output is a 10-tab workbook, with tabs including Dashboard, Confirmed Matches, Partial Matches, Gaps, NIST match status, gap register, summary views, and run metadata. Designed for audit trail purposes from the first run.
| Framework control | Best match | Score | Status |
|---|---|---|---|
| GV 1.6 | AI Model Inventory Registration | 0.81 | Confirmed |
| MS 3.1 | Performance & Drift Monitoring | 0.76 | Confirmed |
| GV 1.2 | AI Privacy & Data Governance | 0.61 | Partial |
| MG 4.1 | — no internal control — | — | Gap |
| GV 6.1 | — no internal control — | — | Gap |
| MS 2.5 | Bias & Fairness Testing | 0.58 | Partial |
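The scores in the table above are consistent with a simple threshold rule. A minimal sketch follows; the cutoffs 0.70 and 0.50 are illustrative assumptions chosen to reproduce the sample rows, not the platform's published thresholds:

```python
def classify(score):
    """Map a similarity score (or None when no internal control matched)
    to a finding status. Thresholds are illustrative assumptions."""
    CONFIRMED_MIN = 0.70  # assumed floor for a Confirmed Match
    PARTIAL_MIN = 0.50    # assumed floor for a Partial Match
    if score is None:
        return "Gap"
    if score >= CONFIRMED_MIN:
        return "Confirmed"
    if score >= PARTIAL_MIN:
        return "Partial"
    return "Gap"

# Reproduces the statuses in the sample table.
findings = {"GV 1.6": 0.81, "MS 3.1": 0.76, "GV 1.2": 0.61,
            "MG 4.1": None, "GV 6.1": None, "MS 2.5": 0.58}
for req, score in findings.items():
    print(req, classify(score))
```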
Both frameworks are bundled into the platform. No configuration, no manual uploads.
The voluntary, sector-agnostic framework that has become the de facto reference standard for AI governance in financial services. Four functions — Govern, Map, Measure, Manage — with 72 control requirements.
Co-developed by 108 financial institutions and the U.S. Treasury. The first framework purpose-built for financial services AI governance. Explicitly extends SR 11-7. Four maturity stages from Initial (21 controls) to Embedded (230). Examination implications begin now.
Every gap analysis is framed in SR 11-7 language — the supervisory standard examiners will cite during an AI governance review. Output rationale is written to address effective challenge, validation sufficiency, and governance documentation requirements.
Every output is structured for the people who will use it — second-line MRM teams, internal audit, and the examiners who review their work.
LLM-generated plain-English explanation of why each control was classified — written in the language an auditor would use in a workpaper, not generic chatbot output.
Dashboard, match tables, gap register, framework-level summaries, category breakdowns, and full run metadata. Every tab is audit-ready on export.
Each gap has an owner, risk classification, remediation status, and evidence notes field. The register is designed to look the same in a board package as it does in an examiner's workpapers.
Coverage donut by NIST function, gap score histogram, stacked bar by framework category, and key findings summary — all from within the platform before export.
We're currently accepting 2–3 design partner institutions for a no-cost pilot. You provide your controls, we provide the full analysis and a debrief call.
Your controls mapped to NIST AI RMF and CRI FS AI RMF simultaneously, with classification and similarity scores for every finding.
LLM-generated rationale for every confirmed match, partial match, and gap — written in examiner language, not vendor marketing language.
The full output workbook, ready for your MRM team, internal audit, or board reporting package.
We walk through the results together — explaining each gap, discussing remediation priority, and answering what an examiner would be looking for.
Design partners receive the full analysis at no charge. In return, we ask for an hour of your time and honest feedback that shapes the product roadmap.
Tell us about your institution and we'll follow up within one business day to schedule your analysis.
"Most institutions I've examined don't lack commitment to AI governance. They lack the operational infrastructure to demonstrate it."
I spent years on both sides of the examination table — first as a federal bank examiner, then in second-line model risk management roles at a regional bank and a Fortune 100 financial institution. The same problem appeared everywhere: institutions with genuine governance programs that couldn't produce the evidence an examiner needed to see it.
The manual process of mapping internal controls to regulatory frameworks — NIST AI RMF, SR 11-7, the new CRI FS AI RMF — takes weeks of practitioner time and still produces outputs that are inconsistent and hard to defend. I built AuditAlign to automate that specific workflow. Not to replace expert judgment, but to do the mapping work so practitioners can focus on the decisions rather than the documentation.
The platform uses semantic embeddings to compare control language against framework requirements, reciprocal matching logic to classify coverage accurately, and LLM-generated rationale to explain every finding in the language an auditor would recognize. The output looks like something an MRM team produced — because it was built by one.
Start with a free pilot. Scale when you're ready.
Full gap analysis at no cost for 2–3 institutions per month, in exchange for honest feedback that shapes the roadmap.
Full platform access for your MRM, compliance, and audit teams. Unlimited runs, both frameworks, full gap tracker workflow.
For holding companies, bank networks, and consulting firms running analyses across multiple institutions or client portfolios.