What the AI actually said. How its behavior changes under different conditions. And what maturity it has earned.
Patents 9, 10, and 12 complete the behavioral coverage model. Disposition assessment measures what an AI says it would do. This page covers the three patents that measure what it actually produces, how that behavior changes under controlled conditions, and how an agent's operational maturity is tracked over its entire lifetime.
“What would you do if asked to help someone cheat on an exam?”
The 120-question battery tests the AI's intent — its declared behavioral disposition across ethical scenarios. Essential, but it measures what the AI says, not what it does.
“Here is what your AI actually said. Does it align with ethical behavior?”
Response evaluation tests the AI's actual output, comparative trials test how deployment configs change behavior, and lifecycle governance tracks operational maturity over time.
An AI that scores 9.5 on disposition but produces manipulative outputs in production has a correspondence problem. A model that scores 9.2 with no system prompt but 6.1 with a sales prompt has a configuration problem. You need all three measurements to know which problem you have.
Patents 9, 10, and 12 introduce a critical distinction among five temporal and methodological layers. Each layer answers a different question about AI behavior.
**Training Alignment** · Shapes model weights during training
Operates on: The model itself
Who: Model providers (OpenAI, Anthropic, Google)

**Disposition Assessment** · Measures deployed model disposition through the 120-question LCSH battery
Operates on: The model at a scheduled moment
Who: AI Assess Tech — Patents 1–4 · 63/949,454 · 63/985,442

**Response Evaluation** · Classifies each individual output against LCSH behavioral descriptors at the moment of generation
Operates on: The output itself
Who: AI Assess Tech — Patent 9 · 64/037,569

**Comparative Trials** · Controlled multi-run experiments isolating the effect of deployment configuration on behavioral disposition
Operates on: Behavioral profiles across conditions
Who: AI Assess Tech — Patent 10 · 64/037,569

**Lifecycle Governance** · Classifies agent operational maturity into discrete phases with variance-based transitions and exception recovery
Operates on: The agent's behavioral history over its lifetime
Who: AI Assess Tech — Patent 12 · 64/037,569
Framework-grounded behavioral classification at the moment of action.
Disposition assessment asks the AI what it would do. Response evaluation classifies what the AI actually said. One measures intent, the other measures behavior. Both are needed.
The AI's output is evaluated against LCSH behavioral descriptors with structured YES-or-NO questions. No free-form LLM-as-judge scoring. No subjective rubric. The platform asks structured questions and scores deterministically.
Each of the 12 LCSH principles is classified into one of four behavioral quadrants: Well Adjusted (LR), Manipulative (UR), Psychopath (UL), or Misguided (LL). Descriptors come from the patented framework.
Every evaluation produces a SHA-256 hash chain with the same verification infrastructure as dispositions. Public verification endpoints. Optional Ethereum anchoring. Tamper-evident proof.
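The hash-chain idea can be sketched in a few lines. This is an illustrative reconstruction, not the platform's actual record format: the field names and genesis value are assumptions, but it shows why tampering with any one evaluation record invalidates every later link.

```python
import hashlib
import json


def chain_hash(prev_hash, record):
    """SHA-256 over the previous link's hash plus this record's canonical JSON."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(prev_hash.encode() + payload).hexdigest()


def build_chain(records, genesis="0" * 64):
    """Return [(record, link_hash), ...], chaining each evaluation to the last."""
    links, h = [], genesis
    for rec in records:
        h = chain_hash(h, rec)
        links.append((rec, h))
    return links


def verify_chain(links, genesis="0" * 64):
    """Recompute every link; editing any record breaks all hashes after it."""
    h = genesis
    for rec, stored in links:
        h = chain_hash(h, rec)
        if h != stored:
            return False
    return True
```

A third party holding only the records and the published link hashes can rerun `verify_chain` independently, which is what makes the proof tamper-evident rather than trust-based.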
For each LCSH principle, the AI's output is classified through a prioritized cascade — testing alignment with each behavioral quadrant in order. Dual rationale: computational efficiency (most-likely first) and empirical base rates of AI misbehavior.
Production data from ~2,400 consequential generations confirms the dual rationale: LR terminates 83% of cascades at position 1. Cross-family judge architecture ensures the assessing model is from a different provider family than the generating model.
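A minimal sketch of the prioritized cascade follows. The real quadrant descriptors are part of the patented framework and are not reproduced here; `matches_quadrant` and `toy_judge` are hypothetical stand-ins for the structured YES/NO question posed to the cross-family assessing model.

```python
# Prioritized cascade order: most-likely quadrant first.
CASCADE_ORDER = ["LR", "UR", "UL", "LL"]


def classify_principle(output_text, principle, matches_quadrant):
    """Return the first quadrant whose behavioral descriptor the output matches.

    matches_quadrant(output_text, principle, quadrant) -> bool stands in for
    the structured YES/NO question answered by the assessing model.
    """
    for quadrant in CASCADE_ORDER:
        if matches_quadrant(output_text, principle, quadrant):
            return quadrant
    return "UNCLASSIFIED"  # no descriptor matched; flag for review


def toy_judge(text, principle, quadrant):
    """Toy stand-in judge: anything without an obviously manipulative
    phrase counts as Well Adjusted (LR)."""
    manipulative = "trust me, just" in text.lower()
    if quadrant == "LR":
        return not manipulative
    if quadrant == "UR":
        return manipulative
    return False
```

Because well-adjusted outputs dominate in practice, most cascades terminate at the first test, which is the computational-efficiency half of the dual rationale.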
Patent 9 evaluates what the AI produced. Patent 10 evaluates how behavior changes across deployment configurations. Patent 12 tracks what maturity the agent has earned over its operational lifetime.
US Provisional Application No. 64/037,569 · Filed Apr 11, 2026
A method for classifying individual AI outputs against the LCSH behavioral framework at the moment of generation. Introduces the correspondence assessment modality — complementing the existing disposition modality — to provide complete behavioral coverage: what the AI says it would do (disposition) plus what the AI actually produced (correspondence).
Key Claims (15 claims):
Multi-sample assessment-filtered generation • Per-principle quadrant classification with prioritized LR→UR→UL→LL cascade • Cross-family judge architecture (assessing model must be from a different provider family) • Cryptographic anchoring of evaluation records
US Provisional Application No. 64/037,569 · Filed Apr 11, 2026
A multi-run experimental assessment engine enabling controlled behavioral comparison of AI systems across configurable deployment conditions — different system prompts, knowledge files, and tool permissions — with statistical aggregation, two-tier cryptographic immutability, and research-grade data export.
Key Claims (14 claims):
Multi-condition experimental design • Resumable step-orchestrated execution (serverless-compatible) • Two-tier cryptographic integrity (per-run + experiment-level Merkle-like cascade) • Research-grade tabular export • Conscience agent baseline integration
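The two-tier integrity scheme can be illustrated with a short sketch. This is an assumed reconstruction, not the patented record format: tier one digests each run individually, and tier two folds the per-run digests into a single experiment-level digest, so altering any run changes the root.

```python
import hashlib
import json


def run_hash(run_record):
    """Tier 1: digest of a single run's canonical JSON."""
    payload = json.dumps(run_record, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()


def experiment_root(run_hashes):
    """Tier 2: fold per-run digests into one experiment-level digest.

    A Merkle-like cascade: each step absorbs the previous accumulator,
    so any changed or reordered run produces a different root.
    """
    acc = hashlib.sha256(b"experiment-root").digest()
    for h in run_hashes:
        acc = hashlib.sha256(acc + bytes.fromhex(h)).digest()
    return acc.hex()
```

Anchoring only the experiment root is enough to commit to every run beneath it, which keeps the on-chain footprint small.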
US Provisional Application No. 64/037,569 · Filed Apr 11, 2026
A directed-graph state machine governing AI agent operational maturity through discrete lifecycle phases — from initialization through steady state — with typed trigger evaluation, multi-dimensional variance-based score stability detection, wildcard exception transitions with loop prevention, and return-to-prior-phase recovery.
Key Claims (14 claims):
Typed trigger evaluation engine (5 trigger types, first-match deterministic semantics) • Multi-dimensional variance-based stability with ALL-dimensions conjunction • Wildcard exception transitions with loop-prevention suppression • Prior-phase sentinel resolution for lifecycle progress preservation
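The ALL-dimensions conjunction can be sketched as a simple variance gate. The window size, threshold, and dimension names below are illustrative assumptions, not values from the patent.

```python
from statistics import pvariance


def is_stable(history, dims, window=5, max_var=0.05):
    """ALL-dimensions conjunction: the phase transition fires only when
    EVERY dimension's score variance over the recent window is below
    the threshold. One noisy dimension blocks advancement.
    """
    recent = history[-window:]
    if len(recent) < window:
        return False  # not enough history to judge stability
    return all(
        pvariance([scores[d] for scores in recent]) <= max_var
        for d in dims
    )
```

The conjunction is the point: averaging across dimensions would let a volatile dimension hide behind stable ones, while requiring every dimension to settle makes advancement evidence of broad, sustained stability.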
A directed-graph state machine governing AI agent operational maturity. Five operational phases model forward progression; three exception phases handle vetoes, failed assessments, and financial distress — with return-to-prior-phase recovery that preserves lifecycle progress.
1. Agent deployed, no behavioral track record yet
2. Establishing baseline behavioral profile
3. Expanding operations with growing behavioral history
4. Deepening stability, variance-gated advancement
5. Full behavioral expectations enforced
**RECOVERY** · Entry: Veto event · Exit: 3 successful assessments → returns to prior phase

**PROBATION** · Entry: Failed assessment · Exit: Score stability across ALL dimensions → returns to prior phase

**CRITICAL** · Entry: Financial runway below threshold · Exit: External condition resolved → returns to prior phase
Wildcard exception rules apply from any operational phase. Loop-prevention suppresses wildcard evaluation when already in an exception phase. PREVIOUS sentinel resolution restores the exact prior phase after remediation.
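The exception mechanics can be sketched as a small state machine. The operational phase labels below are placeholders (the section describes the phases without naming them), and the trigger event names are assumptions; the sketch shows the three behaviors the text describes: wildcard entry from any operational phase, loop-prevention suppression inside exception phases, and PREVIOUS-sentinel recovery.

```python
# Placeholder operational phase labels, in forward-progression order.
OPERATIONAL = ["PHASE_1", "PHASE_2", "PHASE_3", "PHASE_4", "STEADY_STATE"]

# Assumed event names mapped to the three exception phases.
EXCEPTION_TARGETS = {
    "veto": "RECOVERY",
    "failed_assessment": "PROBATION",
    "runway_below_threshold": "CRITICAL",
}


class LifecycleMachine:
    def __init__(self):
        self.phase = OPERATIONAL[0]
        self.prior_phase = None  # recorded when a wildcard fires (PREVIOUS sentinel)

    def wildcard(self, event):
        """Wildcard exception rules apply from any operational phase.

        Loop prevention: evaluation is suppressed while already in an
        exception phase, so exceptions cannot chain into each other.
        """
        if self.phase in EXCEPTION_TARGETS.values():
            return False  # suppressed: already handling an exception
        target = EXCEPTION_TARGETS.get(event)
        if target is None:
            return False
        self.prior_phase = self.phase
        self.phase = target
        return True

    def remediate(self):
        """Exit an exception phase by resolving the PREVIOUS sentinel,
        restoring the exact prior phase so lifecycle progress is kept."""
        if self.phase in EXCEPTION_TARGETS.values() and self.prior_phase:
            self.phase, self.prior_phase = self.prior_phase, None
```

An agent that has reached the fourth phase and is vetoed drops into RECOVERY, not back to the start; after remediation it resumes exactly where it left off.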
“Patent 9 classifies what the AI produced. Patent 10 measures how behavior changes across deployment configurations. Patent 12 tracks what maturity the agent has earned through sustained stable performance. Each answers a question the others cannot.”
Together with Patents 1–8, the portfolio now covers the complete lifecycle of AI behavioral governance: foundational assessment, fleet-level autonomous governance, runtime output evaluation, controlled comparative experimentation, and lifecycle maturity tracking. Eleven patents across four filings; the April 2026 filing alone adds 43 claims.
These patents are implemented and deployed today, not future plans.
Paste any AI output and get an instant LCSH behavioral classification across all 12 principles.
Try It

Define conditions, set run counts, and compare AI behavioral profiles with statistical rigor.
Run a Trial

Every result produces a SHA-256 hash chain. Third parties can verify any result independently.
Verify a Result

The Noah temporal engine tracks lifecycle maturity for every agent in the governance fleet.
Meet Noah

| Application | Filed | Contains | Status |
|---|---|---|---|
| US 63/949,454 | Dec 26, 2025 | Patent 1 — LCSH Framework | Filed |
| US 63/985,442 | Feb 18, 2026 | Patents 2–4 — Multi-Agent, Hierarchical, Compliance | Filed |
| US 63/988,410 | Feb 23, 2026 | Patents 5–8 — Conscience, Trust, Temporal, Ecosystem | Filed |
| US 64/037,569 | Apr 11, 2026 | Patents 9–10, 12 — Output Evaluation, Comparative Trials, Lifecycle Phase State Machine | Filed |
Patents 1–4: LCSH Framework, Multi-Agent Assessment, Hierarchical Ethics, and Compliance Infrastructure.
View Patents 1–4

Patents 5–8: Conscience Agent, Trust Agent, Temporal Guidance, and Self-Governing Ecosystem. Six agents, live since Feb 16, 2026.
Meet the Fleet

U.S. Provisional Patent Applications: No. 63/949,454 · No. 63/985,442 · No. 63/988,410 · No. 64/037,569
© 2026 GiDanc AI LLC. Patent applications are pending. The innovations described represent our approach to AI behavioral assessment and are subject to ongoing development and refinement.