The governance layer LLMs were missing.
IronFrame sits between any LLM and your domain application — enforcing tool risk, logging every decision, and producing compliance-ready audit trails. Works with Claude, GPT-4o, Gemini, Llama, and any model your organization uses.
```bash
pip install ironframe
```
Prompts don’t produce audit trails.
Most “AI reliability” products try to fix hallucination with more AI. IronFrame takes a different approach: deterministic enforcement outside the LLM context window. A model cannot rationalize around hooks it never sees.
| Capability | Prompt Engineering | RAG / Chain-of-Thought | IronFrame |
|---|---|---|---|
| Enforces tool boundaries at hook level | ✗ | ✗ | ✓ |
| Tamper-resistant audit trail | ✗ | ✗ | ✓ |
| Persists state across sessions | ✗ | Partial | ✓ |
| Compliance-ready out of the box | ✗ | ✗ | ✓ |
| Model-agnostic (any LLM) | ✓ | ✓ | ✓ |
| MRM / supervisory audit export | ✗ | ✗ | ✓ |
Above the model. Below your application.
IronFrame is a governance stratum. The enforcement logic executes outside the LLM context window. The model never sees the rules it can’t break.
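The pattern above can be sketched in a few lines. This is an illustration only; the hook name, signature, and tier table below are hypothetical, not IronFrame's actual API. It shows the core idea: a deterministic pre-hook that runs outside the model's context window, so there is nothing for the model to argue with.

```python
# Hypothetical sketch of a pre-tool hook. Tool names, tiers, and the
# return shape are illustrative; only the LOW/MED/HIGH tiers and the
# TOOL_APPROVAL_REQUIRED gate come from IronFrame's docs.
RISK_TIERS = {"read_file": "LOW", "send_email": "MED", "run_shell": "HIGH"}

def pre_tool_hook(tool_name: str, args: dict) -> dict:
    """Runs before every tool call; the model never sees this code."""
    tier = RISK_TIERS.get(tool_name, "HIGH")  # unknown tools default to HIGH
    if tier == "HIGH":
        # Block until a human approves; the model cannot rationalize past this.
        return {"allowed": False, "reason": "TOOL_APPROVAL_REQUIRED"}
    return {"allowed": True, "tier": tier}

pre_tool_hook("read_file", {})  # allowed, LOW tier
pre_tool_hook("run_shell", {})  # blocked pending human approval
```

Because the hook executes in ordinary host code rather than in the prompt, its behavior is the same regardless of what the model generates.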
Everything LLM infrastructure needs.
- Hook Engine — deterministic pre/post hooks outside LLM context
- Tool Risk Tier System (C21) — LOW / MED / HIGH classification
- Capability Fence (C24) — exploit, recon, credential patterns blocked
- State Machine · Agent Trust · I/O Schema
- Immutable Audit Log — write-before-release, SHA-256 integrity
- MRM Metadata (C22) — SR 11-7 & EU AI Act Art. 12 aligned
- Supervisory Audit Export (C23) — tamper-resistant, CLI exportable
- Conformance & Drift Engine · Context Budget
- Model Abstraction Layer — fast / smart / cheap / verification routing
- Budget Manager — per-request, per-session, per-day spend caps
- Error Recovery
- Self-Audit Engine — confidence scoring on every output
- Logic Skills · Eval & Regression
- KB Grounding
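The write-before-release audit log in the list above can be sketched as a SHA-256 hash chain. This is a minimal illustration under assumed field names, not IronFrame's real schema: each entry's hash covers the previous entry's hash, so editing any historical record invalidates every hash after it.

```python
import hashlib
import json

def append_entry(log: list, event: dict) -> dict:
    """Hash-chain each entry to its predecessor; the entry is written
    before any response is released, so no output lacks an audit record."""
    prev = log[-1]["sha256"] if log else "0" * 64
    payload = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    entry = {"event": event, "sha256": digest}
    log.append(entry)
    return entry

def chain_intact(log: list) -> bool:
    """Recompute the chain from genesis and compare stored hashes."""
    prev = "0" * 64
    for entry in log:
        payload = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["sha256"] != expected:
            return False
        prev = expected
    return True

log = []
append_entry(log, {"tool": "read_file", "decision": "allow"})
append_entry(log, {"tool": "run_shell", "decision": "block"})
assert chain_intact(log)
log[0]["event"]["decision"] = "allow"  # tamper with history
```

After the tampering on the last line, `chain_intact(log)` returns False: the trail is tamper-evident, which is the property the supervisory export relies on.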
Have a question? Ask the AI.
Ask about architecture, compliance mapping, or whether IronFrame fits your deployment. Requests for capabilities we haven't built yet go straight onto the roadmap.
Production-grade LLM governance in minutes.
Open-source core, Apache 2.0. Install, wire up your API key, and every LLM call is audited, budget-capped, and enforcement-gated from line one.
Up in 3 lines.
```bash
# Install
pip install ironframe
```

```python
from ironframe import IronFrameConfig
from ironframe.mal.client_v1_0 import IronFrameClient

config = IronFrameConfig.from_env()
client = IronFrameClient(config)

response = client.complete(
    prompt="Summarize key contract risks.",
    capability="smart",  # fast|smart|cheap|verify
)

print(response.content)
print(f"Confidence: {response.confidence}")
print(f"Cost: ${response.cost:.4f}")
# Every call: audited, budget-capped, confidence-scored.
```
```bash
pip install "ironframe[openai]"  # GPT-4o / Perplexity
pip install "ironframe[z3]"      # Symbolic verification
pip install "ironframe[all]"     # Everything
```
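The Budget Manager's per-request, per-session, and per-day caps reduce to a simple invariant: a call is allowed only if it stays under every cap at once. The sketch below is illustrative; the cap values and function name are hypothetical, not the Budget Manager's real interface.

```python
# Hypothetical spend caps in dollars; real values come from your config.
CAPS = {"request": 0.50, "session": 5.00, "day": 50.00}

def within_budget(cost: float, spent_session: float, spent_day: float) -> bool:
    """Allow a call only if it clears the request, session, and day caps."""
    return (
        cost <= CAPS["request"]
        and spent_session + cost <= CAPS["session"]
        and spent_day + cost <= CAPS["day"]
    )

within_budget(0.10, 1.00, 10.00)  # True: under all three caps
within_budget(0.75, 0.00, 0.00)   # False: single request over the per-request cap
within_budget(0.10, 4.95, 10.00)  # False: would breach the session cap
```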
Open core. Commercial power.
Open-source core (Apache 2.0):

- C19 Session Methodology Registry
- C20 Dependency Registry + Scanner
- C21 Tool Risk Tier System (core)
- Hook Engine · Self-Audit Engine
- Model Abstraction Layer · Budget Manager
- Base compliance classes (build your own adapters)

Commercial tier:

- C22 MRM Metadata + Decision Log
- C23 Supervisory Audit Export (SHA-256, CLI)
- C24 Offensive Capability Fence
- HIPAA, FINRA, SOC2, SEC, GDPR adapters
- C25 Bank Reference Architecture
- Multi-user management · Hosted tier (coming)
LLM governance for regulated industries.
IronFrame’s commercial tier is built for financial services, healthcare, and government — organizations that cannot deploy AI without a verifiable audit trail, risk management log, and explainable output chain.
Built for regulated environments.
IronFrame is purpose-built for organizations where an unaudited AI decision has legal, financial, or patient-safety consequences.
Which component satisfies which requirement.
| Regulation | Requirement | IronFrame Component | Notes |
|---|---|---|---|
| EU AI Act Art. 9 | Risk management system | C21 Tool Risk · C24 Capability Fence | Tool tier classification + offensive capability blocking |
| EU AI Act Art. 12 | Logging & traceability | C22 MRM Log · C23 Audit Export | 6-month retention, SHA-256 integrity, supervisory export |
| EU AI Act Art. 14 | Human oversight | C21 HIGH gate · C22 MRM Log | TOOL_APPROVAL_REQUIRED blocks until a human approves |
| EU AI Act Art. 15 | Cybersecurity & robustness | C24 Capability Fence | Exploit/recon/credential patterns blocked by allowlist |
| SR 11-7 / BCBS 350 | Model risk management | C22 MRM Metadata · C23 Audit Export | MRMSession + MRMDecision; JSON/YAML supervisory export |
| FINRA Rule 3110 | Supervision & records | C23 Supervisory Export | --supervisory flag strips internal metadata for regulators |
| HIPAA | PHI audit trail | Compliance Adapter · Audit Log | HIPAA fields captured natively in audit schema |
| FedRAMP Moderate | Continuous monitoring | C23 Audit Export · C24 Fence | LLM-agnostic — works on approved models, not Anthropic-locked |
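The supervisory export row above says the --supervisory flag strips internal metadata before records reach a regulator. A minimal sketch of that filtering step, with illustrative field names (the real export schema is IronFrame's and is not shown here):

```python
# Hypothetical internal-only fields; only the "strips internal metadata"
# behavior is from the table above, the names are assumptions.
INTERNAL_FIELDS = {"internal_notes", "routing_trace", "prompt_tokens"}

def supervisory_view(record: dict) -> dict:
    """Return the regulator-facing subset of an audit record."""
    return {k: v for k, v in record.items() if k not in INTERNAL_FIELDS}

record = {
    "timestamp": "2025-01-01T00:00:00Z",
    "decision": "block",
    "sha256": "0" * 64,
    "internal_notes": "router picked fallback model",
}
supervisory_view(record)  # keeps timestamp/decision/sha256, drops internal_notes
```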