Human-in-the-Loop Governance
P1
Depth:
Checkpoint and approval patterns that maintain human oversight over autonomous agent execution - escalation thresholds, approval gates, remediation policies, and audit trails.
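To make the core pattern concrete, here is a minimal sketch of an approval gate with an audit trail: low-risk actions proceed automatically, high-risk actions block until a human signs off, and every decision is recorded. The names (`ApprovalGate`, `AuditTrail`, the `Risk` levels) and the risk policy are illustrative assumptions, not taken from the linked documents.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class Risk(Enum):
    LOW = 1   # safe to auto-approve under policy
    HIGH = 2  # requires explicit human sign-off


@dataclass
class AuditTrail:
    """Append-only log of every proposed action and its outcome."""
    entries: list = field(default_factory=list)

    def record(self, action: str, risk: Risk, approved: bool, approver: str) -> None:
        self.entries.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "risk": risk.name,
            "approved": approved,
            "approver": approver,
        })


class ApprovalGate:
    """Auto-approves low-risk actions; escalates high-risk ones to a human."""

    def __init__(self, audit: AuditTrail, ask_human) -> None:
        self.audit = audit
        self.ask_human = ask_human  # callable(action) -> bool

    def allow(self, action: str, risk: Risk) -> bool:
        if risk is Risk.LOW:
            self.audit.record(action, risk, approved=True, approver="policy")
            return True
        approved = self.ask_human(action)
        self.audit.record(action, risk, approved=approved, approver="human")
        return approved


if __name__ == "__main__":
    audit = AuditTrail()
    gate = ApprovalGate(audit, ask_human=lambda a: input(f"Approve '{a}'? [y/N] ").strip().lower() == "y")
    gate.allow("run unit tests", Risk.LOW)       # proceeds without interruption
    gate.allow("force-push to main", Risk.HIGH)  # blocks until a human answers
    print(*audit.entries, sep="\n")
```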
Harness Layers
Meta (principles / narrative / research)
Prompt (templates / few-shot / system instructions)*
Orchestration (chaining / routing / looping)*
Integration (tools / RAG / external APIs)
Guardrails (output validation / safety checks)*
Memory (context / state / persistence)
Eval (testing / metrics / iteration)
3 of 7 layers covered (marked with *)
Start Here
Recommended entry points for exploring this thread.
Recommended start
Checkpoints Documentation
Three checkpoint types, one rule: Claude automates everything it can, then stops precisely where human judgment is irreplaceable
Guardrails
Escalation Thresholds and Remediation Policies
The rules for when an AI agent should stop retrying, stop guessing, and surface the problem to a human - before a fixable issue becomes an architectural mess (a minimal retry-then-escalate sketch follows this list)
Guardrails
GSD Narration Script: Human-in-the-Loop Infrastructure
A four-minute explainer script that walks through why autonomous AI coding fails without orchestration - and how human-in-the-loop infrastructure fixes it
Guardrails
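As a rough sketch of the escalation-threshold idea from the second item above: the agent gets a fixed retry budget, applies its remediation on each failure, and once the budget is exhausted it stops guessing and raises the problem to a human. The threshold value, the `NeedsHuman` exception, and the helper names are assumptions for illustration, not drawn from the linked post.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("escalation")


class NeedsHuman(Exception):
    """Raised when automated remediation has exhausted its retry budget."""


def with_escalation(task, remediate, max_attempts: int = 3):
    """Run `task`; on failure apply `remediate` and retry, up to `max_attempts`.

    Beyond the threshold, stop retrying and surface the last error to a human
    rather than letting repeated guesses compound the damage.
    """
    last_error = None
    for attempt in range(1, max_attempts + 1):
        try:
            return task()
        except Exception as err:  # in practice, catch a narrower error type
            last_error = err
            log.warning("attempt %d/%d failed: %s", attempt, max_attempts, err)
            remediate(err)
    raise NeedsHuman(f"escalating after {max_attempts} attempts: {last_error}")


if __name__ == "__main__":
    def flaky_task():
        raise RuntimeError("build is broken")

    # A remediation that never actually fixes the problem, to show the escalation path.
    try:
        with_escalation(flaky_task, remediate=lambda err: None)
    except NeedsHuman as stop:
        log.error("%s", stop)
```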