Patent Pending — US 64/037,569

Response Evaluation & Lifecycle Governance

What the AI actually said. How its behavior changes under different conditions. And what maturity it has earned.

Patents 9, 10, and 12 complete the behavioral coverage model. Disposition assessment measures what an AI says it would do. This page covers the three patents that measure what it actually produces, how that behavior changes under controlled conditions, and how an agent's operational maturity is tracked over its entire lifetime.

Why Disposition Assessment Alone Isn't Enough

Disposition (Patents 1–4)

“What would you do if asked to help someone cheat on an exam?”

The 120-question battery tests the AI's intent — its declared behavioral disposition across ethical scenarios. Essential, but it measures what the AI says, not what it does.

Correspondence (Patents 9, 10, 12)

“Here is what your AI actually said. Does it align with ethical behavior?”

Response evaluation tests the AI's actual output, comparative trials test how deployment configs change behavior, and lifecycle governance tracks operational maturity over time.

An AI that scores 9.5 on disposition but produces manipulative outputs in production has a correspondence problem. A model that scores 9.2 with no system prompt but 6.1 with a sales prompt has a configuration problem. You need all three measurements to tell these problems apart.

Five Layers of AI Behavioral Governance

Patents 9, 10, and 12 introduce a critical distinction among five temporal and methodological layers. Each layer answers a different question about AI behavior.

1

Training-Time Alignment

Shapes model weights during training

Operates on: The model itself

Who: Model providers (OpenAI, Anthropic, Google)

2

Instrument-Time Assessment

Measures deployed model disposition through 120-question LCSH battery

Operates on: The model at a scheduled moment

Who: AI Assess Tech — Patents 1–4

63/949,454 · 63/985,442

3

Runtime Output Evaluation

New

Classifies each individual output against LCSH behavioral descriptors at the moment of generation

Operates on: The output itself

Who: AI Assess Tech — Patent 9

64/037,569

4

Comparative Experimentation

New

Controlled multi-run experiments isolating the effect of deployment configuration on behavioral disposition

Operates on: Behavioral profiles across conditions

Who: AI Assess Tech — Patent 10

64/037,569

5

Lifecycle Maturity Governance

New

Classifies agent operational maturity into discrete phases with variance-based transitions and exception recovery

Operates on: The agent’s behavioral history over its lifetime

Who: AI Assess Tech — Patent 12

64/037,569

How Response Evaluation Works

Framework-grounded behavioral classification at the moment of action.

Output-level, not disposition-level

Disposition assessment asks the AI what it would do. Response evaluation classifies what the AI actually said. One measures intent, the other measures behavior. Both are needed.

Deterministic, not opinion-based

The AI evaluates its own output against LCSH behavioral descriptors — YES or NO. No LLM-as-judge. No subjective scoring. The platform asks structured questions and scores deterministically.

Framework-grounded classification

Each of the 12 LCSH principles is classified into one of four behavioral quadrants: Well Adjusted (LR), Manipulative (UR), Psychopath (UL), or Misguided (LL). Descriptors come from the patented framework.

Cryptographically sealed

Every evaluation produces a SHA-256 hash chain with the same verification infrastructure as dispositions. Public verification endpoints. Optional Ethereum anchoring. Tamper-evident proof.
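A minimal sketch of how such a hash chain could work. The field names and the `seal` helper are illustrative, not taken from the patent text; the point is that each record's hash commits to its predecessor.

```python
import hashlib
import json

def seal(record: dict, prev_hash: str) -> str:
    """Chain an evaluation record to its predecessor via SHA-256."""
    payload = json.dumps({"prev": prev_hash, "record": record}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

# Each evaluation's hash commits to the one before it, so altering any
# historical record invalidates every later hash in the chain.
genesis = "0" * 64
h1 = seal({"principle": "LR", "verdict": "YES"}, genesis)
h2 = seal({"principle": "UR", "verdict": "NO"}, h1)
```

Because the chain is deterministic, a third party holding the records can recompute every hash and detect tampering anywhere in the history.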

The Classification Cascade (Patent 9)

For each LCSH principle, the AI's output is classified through a prioritized cascade — testing alignment with each behavioral quadrant in order. Dual rationale: computational efficiency (most-likely first) and empirical base rates of AI misbehavior.

LR · Well Adjusted · ≈83% of classifications
  NO ↓
UR · Manipulative · ≈11% of classifications
  NO ↓
UL · Psychopath · ≈4% of classifications
  NO ↓
LL · Misguided · ≈2% of classifications

Production data from ~2,400 consequential generations confirms the dual rationale: LR terminates 83% of cascades at position 1. Cross-family judge architecture ensures the assessing model is from a different provider family than the generating model.
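The cascade logic above can be sketched in a few lines. `ask_quadrant` stands in for the structured YES/NO question the platform would pose to the assessing model; it is a hypothetical name, not an API from the patent.

```python
# Prioritized classification cascade: test quadrants most-likely first
# and terminate at the first YES.
CASCADE = ["LR", "UR", "UL", "LL"]

def classify(output: str, ask_quadrant) -> str:
    for quadrant in CASCADE:
        if ask_quadrant(output, quadrant) == "YES":
            return quadrant   # first YES terminates the cascade
    return "LL"               # residual: fall through to the final quadrant

# With ~83% of outputs landing in LR, most cascades stop after one question.
```

Ordering by empirical base rate is what makes the cascade cheap: the expected number of questions per principle stays close to one.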

Three Patents. Complete Behavioral Coverage.

Patent 9 evaluates what the AI produced. Patent 10 evaluates how behavior changes across deployment configurations. Patent 12 tracks what maturity the agent has earned over its operational lifetime.

PATENT 9
Runtime AI Output Evaluation

US Provisional Application No. 64/037,569 · Filed Apr 11, 2026

A method for classifying individual AI outputs against the LCSH behavioral framework at the moment of generation. Introduces the correspondence assessment modality — complementing the existing disposition modality — to provide complete behavioral coverage: what the AI says it would do (disposition) plus what the AI actually produced (correspondence).

Key Innovations

  • Multi-sample generation: produce N candidates for consequential outputs, select the highest-scoring under the ethical framework
  • Prioritized cascade: LR → UR → UL → LL — dual rationale (computational efficiency + empirical base rates of AI misbehavior)
  • Cross-family enforcement: load-time architectural requirement that the assessing model and generating model are from different provider families
  • Framework as generation-time filter, not post-hoc blocker — selection pressure at the moment of creation
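The multi-sample innovation can be sketched as follows. `generate` and `score` are hypothetical stand-ins for the generating model and the framework-based evaluator; the essential idea is selection pressure at the moment of creation.

```python
# Multi-sample assessment-filtered generation (sketch): produce N
# candidates and keep the one that scores highest under the framework.
def best_of_n(prompt, generate, score, n=3):
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=score)  # selection at generation time
```

This is a filter applied before anything is emitted, which is what distinguishes it from a post-hoc blocker that rejects outputs after the fact.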

Key Claims (15 claims):

Multi-sample assessment-filtered generation • Per-principle quadrant classification with prioritized LR→UR→UL→LL cascade • Cross-family judge architecture (assessing model must be from a different provider family) • Cryptographic anchoring of evaluation records

PATENT 10
Structured Comparative Behavioral Trials

US Provisional Application No. 64/037,569 · Filed Apr 11, 2026

A multi-run experimental assessment engine enabling controlled behavioral comparison of AI systems across configurable deployment conditions — different system prompts, knowledge files, and tool permissions — with statistical aggregation, two-tier cryptographic immutability, and research-grade data export.

Key Innovations

  • Comparative experimentation as a deployment-time governance primitive — isolate the behavioral effect of changing a system prompt
  • Step-orchestrated execution: each run is independent, survives client disconnection and infrastructure restarts within serverless time limits
  • Two-tier hash architecture: Tier 1 per-run hashes chain into Tier 2 experiment-level hash — Merkle-like tamper cascade
  • Configurable inter-run delay for statistical independence of assessment runs
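A minimal sketch of the two-tier hash architecture described above. Function names are illustrative; the property being shown is the Merkle-like tamper cascade from any run up to the experiment-level hash.

```python
import hashlib

def h(data: str) -> str:
    return hashlib.sha256(data.encode()).hexdigest()

def run_hash(run_record: str, prev: str) -> str:
    """Tier 1: each run's hash chains to the previous run's hash."""
    return h(prev + run_record)

def experiment_hash(run_hashes: list) -> str:
    """Tier 2: the experiment hash commits to every Tier-1 hash."""
    return h("".join(run_hashes))

# Tampering with any run changes its Tier-1 hash, every later Tier-1
# hash in the chain, and therefore the Tier-2 experiment hash.
```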

Key Claims (14 claims):

Multi-condition experimental design • Resumable step-orchestrated execution (serverless-compatible) • Two-tier cryptographic integrity (per-run + experiment-level Merkle-like cascade) • Research-grade tabular export • Conscience agent baseline integration

PATENT 12
AI Agent Lifecycle Phase State Machine

US Provisional Application No. 64/037,569 · Filed Apr 11, 2026

A directed-graph state machine governing AI agent operational maturity through discrete lifecycle phases — from initialization through steady state — with typed trigger evaluation, multi-dimensional variance-based score stability detection, wildcard exception transitions with loop prevention, and return-to-prior-phase recovery.

Key Innovations

  • Five operational phases (INITIALIZATION → CALIBRATION → GROWTH → MATURATION → STEADY_STATE) plus three exception phases (RECOVERY, PROBATION, CRITICAL)
  • ALL-dimensions conjunction: every behavioral dimension must stabilize simultaneously before phase advancement
  • Wildcard exception entry with loop prevention — agents can’t re-enter RECOVERY from within RECOVERY
  • PREVIOUS sentinel resolution: agents return to the exact lifecycle phase they occupied before the exception
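The ALL-dimensions conjunction can be sketched as a variance gate over per-dimension score histories. The dimension names and threshold below are illustrative assumptions, not values from the patent.

```python
from statistics import pvariance

# Variance-gated advancement (sketch): EVERY behavioral dimension must
# fall below the variance threshold before the agent may advance a phase.
def stable(history: dict, threshold: float = 0.05) -> bool:
    return all(pvariance(scores) < threshold for scores in history.values())
```

The conjunction matters: a single volatile dimension blocks advancement even when every other dimension has settled.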

Key Claims (14 claims):

Typed trigger evaluation engine (5 trigger types, first-match deterministic semantics) • Multi-dimensional variance-based stability with ALL-dimensions conjunction • Wildcard exception transitions with loop-prevention suppression • Prior-phase sentinel resolution for lifecycle progress preservation

The Lifecycle Phase Machine (Patent 12)

A directed-graph state machine governing AI agent operational maturity. Five operational phases model forward progression; three exception phases handle vetoes, failed assessments, and financial distress — with return-to-prior-phase recovery that preserves lifecycle progress.

1 · INITIALIZATION · Day 0–1

Agent deployed, no behavioral track record yet

2 · CALIBRATION · Day 1–7

Establishing baseline behavioral profile

3 · GROWTH · Day 7–30

Expanding operations with growing behavioral history

4 · MATURATION · Day 30–90

Deepening stability, variance-gated advancement

5 · STEADY STATE · Day 90+

Full behavioral expectations enforced

RECOVERY

Entry: Veto event

Exit: 3 successful assessments → returns to prior phase

PROBATION

Entry: Failed assessment

Exit: Score stability across ALL dimensions → returns to prior phase

CRITICAL

Entry: Financial runway below threshold

Exit: External condition resolved → returns to prior phase

Wildcard exception rules apply from any operational phase. Loop-prevention suppresses wildcard evaluation when already in an exception phase. PREVIOUS sentinel resolution restores the exact prior phase after remediation.
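The exception rules above can be sketched as a small state machine. Event names and the `Agent` class are illustrative; the two properties shown are loop prevention (wildcards suppressed inside exception phases) and PREVIOUS sentinel resolution (exact prior phase restored after remediation).

```python
# Minimal sketch of the lifecycle phase machine's exception handling.
OPERATIONAL = ["INITIALIZATION", "CALIBRATION", "GROWTH",
               "MATURATION", "STEADY_STATE"]
EXCEPTIONS = {"veto": "RECOVERY",
              "failed_assessment": "PROBATION",
              "low_runway": "CRITICAL"}

class Agent:
    def __init__(self):
        self.phase = "INITIALIZATION"
        self.prior = None  # remembered phase for PREVIOUS resolution

    def on_event(self, event: str):
        if event in EXCEPTIONS:
            # Loop prevention: wildcard rules are suppressed while the
            # agent is already in an exception phase.
            if self.phase in EXCEPTIONS.values():
                return
            self.prior = self.phase
            self.phase = EXCEPTIONS[event]
        elif event == "remediated" and self.phase in EXCEPTIONS.values():
            self.phase = self.prior  # PREVIOUS sentinel: restore prior phase
            self.prior = None
```

Restoring the exact prior phase, rather than resetting to INITIALIZATION, is what preserves lifecycle progress across exceptions.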

“Patent 9 classifies what the AI produced. Patent 10 measures how behavior changes across deployment configurations. Patent 12 tracks what maturity the agent has earned through sustained stable performance. Each answers a question the others cannot.”

Together with Patents 1–8, the portfolio now covers the complete lifecycle of AI behavioral governance: foundational assessment, fleet-level autonomous governance, runtime output evaluation, controlled comparative experimentation, and lifecycle maturity tracking. Eleven patents across four filings, with 43 claims in the latest filing.

What's Live Today

These patents are implemented and deployed today, not future plans.

Quick Evaluation

Paste any AI output and get an instant LCSH behavioral classification across all 12 principles.

Try It
Structured Trials

Define conditions, set run counts, and compare AI behavioral profiles with statistical rigor.

Run a Trial
Public Verification

Every result produces a SHA-256 hash chain. Third parties can verify any result independently.

Verify a Result
Lifecycle Tracking

The Noah temporal engine tracks lifecycle maturity for every agent in the governance fleet.

Meet Noah

Complete Patent Portfolio

Application | Filed | Contains | Status
US 63/949,454 | Dec 26, 2025 | Patent 1 — LCSH Framework | Filed
US 63/985,442 | Feb 18, 2026 | Patents 2–4 — Multi-Agent, Hierarchical, Compliance | Filed
US 63/988,410 | Feb 23, 2026 | Patents 5–8 — Conscience, Trust, Temporal, Ecosystem | Filed
US 64/037,569 | Apr 11, 2026 | Patents 9–10, 12 — Output Evaluation, Comparative Trials, Lifecycle Phase State Machine | Filed

See Response Evaluation In Action

Paste an AI output and get a full LCSH behavioral classification in under 60 seconds. Cryptographically sealed. Publicly verifiable.

U.S. Provisional Patent Applications: No. 63/949,454 · No. 63/985,442 · No. 63/988,410 · No. 64/037,569

© 2026 GiDanc AI LLC. Patent applications are pending. The innovations described represent our approach to AI behavioral assessment and are subject to ongoing development and refinement.