Governance for AI-assisted execution

Govern AI-assisted work through a living repository

An open-source operating model for running work through a governed repository. Humans decide. Agents execute and evaluate. You can start with Git first and add agents later.

5 governance layers · 4 operating loops · 19 policy domains · 11 compliance refs

Open-source operating model: the repo is the governance backbone, this page is the demo, and the interactive animation is the reference scenario. Bring your own runtime, observability platform, and domain systems. Public proof below is aggregated across `agentic-enterprise`, `sandboxcorp`, and `agent-command-center` as of 2026-03-16. Discuss on LinkedIn ↗

01

Public product: the operating model

Agentic Enterprise is the open-source governance layer for running a company with agents — roles, loops, policies, repo conventions, and integration patterns.

02

Not tied to one runtime or dashboard

Bring your own runtime, observability stack, and toolchain. This repo defines how they fit together. It is not a single closed control plane.

03

Demo = reference scenario, not the product surface

The demo shows the operating model in motion through a representative reference scenario. It demonstrates the pattern. The pattern itself — not one private implementation — is the public story.

AI executes · humans decide — Every decision is auditable — Governance at machine speed — Outcomes replace status meetings — Policy enforced automatically — Agent fleets replace repetitive toil — Signals replace roadmap debates — Human accountability stays explicit

What a CTO or CEO should
challenge immediately.

The concept gets stronger when the hard objections are answered directly instead of hand-waved away.

01

Governance backbone, not tool replacement

This repo governs decisions, approvals, policies, and traceability. It does not replace ERP, CRM, ITSM, HRIS, or observability systems.

02

Five layers are roles, not headcount

The model is not asking a small company to create five management tiers. A lean team can collapse multiple layers into the same people and still use the framework coherently.

03

Public proof is real, but still reference-scale

The framework shows active public proof, but it does not yet claim Fortune 500 operating scale. Treat the three-repo reference stack as evidence of repeatability, not as a universal benchmark.

04

Compliance coverage is scaffolding, not certification

The percentages describe controls modeled in the framework. They do not prove deployed controls, operating effectiveness, or auditor sign-off in your environment.

Legacy enterprise tools
weren't built for agents

The bottleneck is no longer execution capacity. It is governance, coordination, and trust.

Ticket-centric coordination slows down

Ticket boards and status rituals can become bottlenecks when humans and agents share the same workflow. The issue is not tickets existing; it is fragmented governance and weak traceability.

🔗

Knowledge drifts away from execution

Wikis, runbooks, and meeting notes often drift away from the changes they describe. The framework brings instructions and governed artifacts closer to the work that changes the system.

🏛️

Governance can't keep up

When AI increases execution speed, governance has to scale too. That means automated quality gates, narrow human approvals, and explicit evidence instead of ad hoc meetings.

Everything you need to
run at agent velocity.

Six capabilities make the operating model real.

🏛️
Git-Native Governance
Every policy is a file. Every approval flows through Git or issue comments. Humans decide; agents handle the mechanics.
PRs as decisions CODEOWNERS RACI
Active · Git-native decision flow
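As a sketch of how "RACI as CODEOWNERS" can work in practice, the routing below uses standard GitHub CODEOWNERS syntax; the paths and owner handles are illustrative placeholders, not taken from the repo:

```
# Hypothetical CODEOWNERS sketch — paths and owners are placeholders.
# On GitHub, the LAST matching pattern wins, so the catch-all goes first.
*                        @org-admins
org/4-quality/policies/  @quality-lead
work/missions/           @strategy-lead
```

With branch protection requiring code-owner review, a PR touching `work/missions/` cannot merge without the strategy lead's approval — the RACI line becomes machine-enforced rather than a slide in a deck.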
🤖
Agent Fleet Management
Define agent types in a governed registry and assemble fleets per mission. Scope stays bounded by design.
Agent Registry Fleet configs
Idle · Ready to deploy
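A fleet config might look like the following YAML sketch; the field names are illustrative assumptions, not the repo's actual schema:

```yaml
# Hypothetical fleet config sketch — keys are illustrative, not the real schema.
mission: MISSION-2026-001
agents:
  - type: gtm-writer        # must exist in the Agent Type Registry
    count: 1
    scope: work/missions/MISSION-2026-001/   # scope bounded by design
  - type: engineering
    count: 1
    scope: src/
approvals:
  human: "@wlfghdr"          # merge authority stays with a human
```

The point of the shape: every agent type must resolve to a registry entry, and every fleet carries an explicit scope and a named human approver.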
📊
Observability Loop
Telemetry flows up from execution to steering. Observability can file signals back into the repo automatically.
OTel spans Auto-signals
Complete · telemetry closes the loop
🛡️
Automated Quality Gates
Outputs are evaluated against nineteen machine-readable policy domains before humans review them, with observability providing runtime evidence when claims matter.
19 policy domains Runtime evidence
Active · policy surfaces are inspectable
🔌
Integration Registry
External tools are declared in a governed registry. Agents use registered tools only.
Governed MCP Auditable calls
Idle · governed integration patterns ready
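A registry entry for an external tool could be sketched like this; the structure and endpoint are hypothetical, meant only to show the "declared, bounded, audited" pattern:

```yaml
# Hypothetical Integration Registry entry — shape and endpoint are illustrative.
- id: observability
  kind: mcp-server           # agents reach tools only through registered servers
  endpoint: https://otel.example.internal
  allowed_agents: [steering, quality]
  audit: true                # every call is logged for traceability
```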
🎯
Mission-Driven Execution
Missions carry an outcome contract, a crew, and bounded scope. They close when outcomes are confirmed, not when tasks are merely done.
Outcome contracts Time-bounded
Complete · mission-based execution model
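An outcome contract might be sketched as a small YAML file; the keys below are illustrative assumptions, and the mission ID reuses the reference scenario's `MISSION-2026-001`:

```yaml
# Hypothetical outcome contract sketch — keys are illustrative.
mission: MISSION-2026-001
outcome: "Public demo page live and linked from README"
success_metrics:
  - "demo page deployed"
  - "e2e loop example passes CI"
crew: [gtm-writer, engineering]
time_box: 14d               # closes on confirmed outcome, not on tasks merely done
```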

Four loops. A governed path from signal to release.

Signals become missions, missions become governed work, and observability feeds the next loop.

1

Discover

Signals flow in from production, customers, and market. Agents triage. Humans decide Go/No-Go. A mission brief is born.

Hours – Days
2

Build

Orchestration assembles a crew. Agent fleets execute. Quality evaluates continuously. Humans step in at approval points and true exceptions.

Days – Weeks
3

Ship

Release contracts are cut. Progressive rollout begins. Metrics confirm outcomes. One human Go/No-Go for GA.

Days
4

Operate

24/7 production health. Agents emit telemetry; observability surfaces anomalies and files signals back into the repo. The loop never stops.

Continuous
Discover · IN: Production telemetry · customer feedback · market signals · observability anomaly alerts
Discover · OUT: Approved mission brief · Go/No-Go decision record · outcome contract
Build · IN: Approved mission brief · fleet config · quality policies · agent instructions
Build · OUT: Code & work artifacts · evaluated PRs · quality evaluations · release candidate
Ship · IN: Release candidate · release contract · success metrics · rollout plan
Ship · OUT: Deployed release · rollout telemetry · GA decision · incident retrospective (if needed)
Operate · IN: Live system · SLO dashboards · alert rules · incident events
Operate · OUT: OTel spans · performance signals · incident records · new signals filed back to repo
Observability telemetry surfaces patterns → signals are filed in the repo → Discover picks them up. The system never stops learning.

One governed backbone.
Everything auditable. Domain systems stay where they are.

All work, decisions, and policies map back to one versioned governance backbone. Two coordination channels: the repo and the observability platform.

| Traditional approach | Agentic Enterprise | What changes |
| --- | --- | --- |
| Ticket system | Work artifacts in `work/` or issue tracker | Git files or native issues; configurable per team |
| Ticket workflow | PR merge or comment-based approval | Humans comment, agents handle the rest; Git stays system of record |
| Sprint planning | Mission brief creation | Goal-oriented, not time-boxed |
| Daily standup | Git log + observability dashboards | Always current, real-time fleet health |
| Enterprise wiki | The repository itself | Single source of truth, versioned |
| RACI matrix | CODEOWNERS | Executable, enforced by Git host |
| Phase gates | CI/CD pipeline checks | Automated, no human bottleneck |
| Manual monitoring | Observability integration | Telemetry feeds signals back into governance |
| Story points | Fleet throughput metrics | Measured from telemetry, not guessed |
| Siloed tool configs | Integration Registry | Governed, auditable, agent-accessible |
| Annual reorg | Evolution proposals via PR | Continuous, transparent, reversible |

A complete operating model
ready to fork.

A structurally complete framework: org structure, processes, agent instructions, quality policies, and work artifacts. All Markdown and YAML.

📁

org/

5-layer org structure. Agent instructions per layer. Division charters for every team.

🔄

process/

4-loop lifecycle. Step-by-step guides for Discover, Build, Ship, Operate.

work/

Active work artifacts. Signals, missions, decisions, releases, retrospectives.

🎯

work/missions/

Strategy's atomic unit. Carries an outcome contract, a defined crew, a fleet config, and a time-bounded scope.

🛡️

org/4-quality/policies/

Machine-readable quality policies across nineteen domains — including security, privacy, observability, incident response, availability, continuity, cryptography, and risk management.
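To make "machine-readable" concrete, a policy file in this directory could be sketched as below; the schema is an illustrative assumption, not the repo's actual format:

```yaml
# Hypothetical quality policy sketch — the real schema may differ.
domain: security
rules:
  - id: no-secrets-in-diff
    description: Block PRs that introduce credentials or API keys
    severity: block           # gate fails, PR cannot merge
  - id: dependency-pinning
    description: New dependencies must be version-pinned
    severity: warn            # surfaced in review, does not block
evidence: otel                # runtime evidence source when claims matter
```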

🤖

org/agents/

Agent Type Registry. Governed definitions for every agent role — capabilities, boundaries, and lifecycle states.

🔌

org/integrations/

Integration Registry. Governed connections to observability, ITSM, CI/CD, business systems.

⚙️

CONFIG.yaml

Single config file. Fill in your company name, mission, products. Done.
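As a sketch of what "fill in and done" means, a minimal `CONFIG.yaml` might look like the following; every key and value here is an illustrative placeholder:

```yaml
# Hypothetical CONFIG.yaml sketch — keys and values are placeholders.
company:
  name: "Acme Corp"
  mission: "Ship reliable widgets"
products:
  - name: "Widget Cloud"
repo:
  default_branch: main
```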

What you can verify today
without taking our word for it.

As of 2026-03-16, these are the directly inspectable public signals across wlfghdr/agentic-enterprise, WulfAI/sandboxcorp, and wlfghdr/agent-command-center. The goal is credibility, not vanity metrics.

01

210 issues

Roadmap, compliance, framework, and operating work tracked publicly across the three linked repositories.

02

205 pull requests / 195 merged

Governed change history with review and merge records that readers can inspect directly across the full reference stack.

03

759 commits

An auditable Git trail on the three `main` branches showing the stack is actively exercised, not just described.

04

19 policy domains

Quality gates across architecture, security, privacy, observability, delivery, continuity, and more.

Try it without starting with agents.

Start with Git, CODEOWNERS, signals, missions, and PRs. Add a runtime and observability platform when the core operating model already makes sense for your team.

1

Fork the repository

Clone the open-source repo. Everything is Markdown and YAML — no runtime dependencies, no build step.

2

Fill in CONFIG.yaml and CODEOWNERS

Set your company name, repo basics, and review boundaries. That gives the operating model a real governance spine immediately.

3

Run the core workflow

File a signal, convert it to a mission, and ship the work through a PR. That is the minimal operating loop, and it works without agents.
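A filed signal can be sketched as a small structured file in `work/signals/`; the fields and the ID `SIG-2026-002` below are illustrative, not from the repo:

```yaml
# Hypothetical work/signals/ entry — fields and ID are illustrative.
id: SIG-2026-002
source: customer-feedback
summary: "Checkout latency complaints trending up"
severity: medium
proposed_action: "Open mission to profile checkout path"
status: triage               # triage → approved (mission) or rejected
```

Because the signal is a versioned file, the Go/No-Go decision that converts it into a mission leaves a reviewable trail with no agents involved yet.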

4

Add observability and policy depth when useful

Layer in observability, more quality policies, and integrations once the Git-first workflow is already proving itself.

5

Add agents later

When you are ready, point agents at AGENTS.md. The runtime becomes an execution layer on top of a workflow your team already understands.

Honest assessment. Not certification theater.

Built-in governance controls map directly to enterprise certification frameworks. Percentages reflect editorial self-assessments of controls currently modeled in the repository — not aspirational targets, not certification claims, and not machine-verified audit numbers.

🔒
ISO 27001 · Information Security Management
~90% self-assessed · Access, change mgmt, crypto, risk, ops security, log retention, vendor risk, ISMS scope, SoA (93 controls), audit programme
CODEOWNERS · RBAC · PR change mgmt · Crypto policy · Risk framework · Log retention & immutability · Vendor risk mgmt (A.5.19–A.5.23) · ISMS scope & SoA templates · Internal audit programme
🛡️
SOC 2 Type II · Trust Service Principles
~90% self-assessed · Availability, integrity, confidentiality, incident SLAs, log retention, vendor risk (CC9), evidence collection guide
OTel audit trail · 19 quality policies · DR/BCP plan · Incident SLAs · Log retention & WORM · Vendor risk mgmt (CC9) · Evidence collection guide
🇪🇺
GDPR · EU Data Protection
~75% self-assessed · Privacy, DPA, DSAR, breach, transfer controls, data classification
Lawful basis · DPA/DSAR · Breach notification · Data classification · Consent guide included · DPO guidance included
🤖
ISO 42001 · AI Management Systems
~85% self-assessed · Governance, accountability, fairness, transparency, AIMS scope, conformity assessment
Agent telemetry · Human-in-the-loop · Decision audit trail · Fairness audit · Model cards · AIMS scope template · Conformity assessment guide
🏛️
NIST AI RMF · AI Risk Management Framework
~90% self-assessed · Governance, monitoring, risk mitigation, fairness, transparency, MEASURE metrics
5-layer governance · Continuous monitoring · Risk framework · MEASURE function · Quantitative metrics & dashboard
⚖️
EU AI Act · European AI Regulation
~85% self-assessed · Risk classification, transparency, fairness, human oversight, conformity assessment, CE marking, EU database
Risk tier classification · Humans approve · Explainability levels · Adversarial testing · Conformity assessment guide · CE marking & EU database guide
🛡️
NIST CSF 2.0 · Cybersecurity Framework
~95% self-assessed · Govern, Identify, Protect, Detect, Respond, Recover + security tooling & network architecture guides
6 core functions · Risk framework · Incident response · OTel detection · Runtime tooling & network security guides
ISO 9001 · Quality Management Systems
~85% self-assessed · PDCA cycle, quality objectives, continuous improvement, process approach
4-loop PDCA · Quality policies · QMS scope guide · Mgmt review guide · Audit guide
🔄
ISO 22301 · Business Continuity Management
~70% self-assessed · RTO/RPO tiers, DR drills, incident escalation, availability targets
Availability policy · Incident response · BIA guide · BC plans guide · Audit guide
🏴
CCPA / CPRA · California Consumer Privacy
~75% self-assessed · Consumer rights, data classification, breach notification, automated decisions
Privacy policy · Data classification · Breach notification · Opt-out guide · Sensitive PI guide
🏥
HIPAA · US Health Information Privacy & Security
~70% self-assessed · Security Rule safeguards, audit controls, encryption, incident response, vendor/BA mgmt
Risk management · OTel audit controls · Encryption · BAA guide · PHI guide · Officer guide

Honest assessment, not marketing claims. These percentages are editorial posture markers for the template, not certification status and not outputs of the stricter coverage validator. Where the remaining work is adopter-owned, the template increasingly ships implementation guides and templates instead of pretending the runtime evidence already exists. Formal certification still requires an independent audit of your deployed instance, configured controls, and real operating evidence.

AI-assisted work needs a
governed backbone.

Open source. No vendor lock-in. Start with minimal adoption, then add runtime and observability depth when the workflow already proves its value.

1. Fork repo
2. Configure
3. Minimal adoption
4. Add agents later
Start minimal adoption ↗ · ▶ Watch the demo
$ git clone https://github.com/wlfghdr/agentic-enterprise.git

Running this with OpenClaw? → OpenClaw setup guide ↗

The operating model
in motion.

This page is the demo. The embedded visualization defaults to the real examples/e2e-loop/ chain from the repo, then lets you explore other representative reference scenarios.

Mission MISSION-2026-001 · GTM Product Launch · wlfghdr/agentic-enterprise
📡
Signal Detection
GTM gap identified in production telemetry
Steering agent scans observability platform — detects missing narrative, files SIG-2026-001 to work/signals/.
📊 Observability · ⚡ Signal filed
🎯
Mission Created
Strategy layer approves mission brief
Human reviews signal → approves mission → outcome contract committed. Fleet config specifies 2 execution agents.
✅ Human approved · 📄 Mission brief
⚙️
Fleet Assembled
Orchestration spawns GTM + Engineering agents
Orchestration reads fleet config, selects agent types from registry, assigns roles. Agents read AGENTS.md + layer AGENT.md.
🤖 Agents: 2 · 📋 Instructions loaded
💻
Execution
Agents produce work artifacts + open PRs
GTM agent writes DIVISION.md. Engineering agent implements animated index.html. Each commits to feature branch, opens PR for review.
🔀 2 PRs opened · 📝 Assigned: wlfghdr
🛡️
Quality Gate
Quality agents evaluate against policies
Security, content, and architecture policies checked automatically. CI runs. Quality agent leaves structured review comment on PR.
✅ CI passed · 📋 Policy compliant
🚀
Human Merge → Shipped
Human merges PR · mission outcome confirmed
wlfghdr reviews and merges. Mission marked complete. Observability platform fires success event. Loop closes — next signal queued.
🎉 Shipped · 🔄 Loop continues
Interactive demo · defaults to the repo-backed e2e-loop example
Open standalone ↗ View source ↗