Compare commits


53 Commits

Author SHA1 Message Date
SentryAgent.ai Developer
4cb168bbba docs(openspec): mark tenant-isolation-enforcement complete and archive
All 8 tasks checked off. Change archived to openspec/changes/archive/
per OpenSpec protocol. Implementation committed in 5943ff1.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-09 05:29:54 +00:00
SentryAgent.ai Developer
5943ff136f fix(security): enforce tenant isolation on all agent endpoints — resolves Test C.7
P0 security fix. Any authenticated agent could previously read, modify, or
decommission agents belonging to other organizations.

Changes:
- IAgentListFilters: add organizationId field (forced from JWT, never from query)
- AgentRepository.findAll(): filter by organizationId when set
- AgentService: getAgentById, updateAgent, decommissionAgent — accept organizationId
  and throw AuthorizationError(403) on cross-tenant access
- AgentController: extract req.user.organization_id on all 5 handlers; throw 403
  if claim is absent; registerAgent forces body.organizationId from JWT claim
- OpenAPI spec: document tenant isolation rules per endpoint
- Tests: update MOCK_USER with organization_id; add 5 new missing-org-id 403 tests;
  assert organizationId is passed through to service on all mutating calls

Fixes field trial failure: Test C.7 (Org Isolation).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-09 05:22:48 +00:00
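The isolation rule this commit describes (organizationId taken only from the verified JWT claim, 403 on a missing claim or cross-tenant access) can be sketched as follows. This is an illustrative sketch, not the repository's actual code; the names `AuthenticatedUser`, `AgentRecord`, and `assertSameTenant` are assumptions.

```typescript
// Illustrative sketch of the tenant-isolation check: the org id comes
// from the verified JWT claim, never from the query string or body.
// All names here are hypothetical, not taken from the repo.

class AuthorizationError extends Error {
  constructor(public readonly status: number, message: string) {
    super(message);
  }
}

interface AuthenticatedUser {
  organization_id?: string; // claim extracted from the verified JWT
}

interface AgentRecord {
  id: string;
  organizationId: string;
}

// Throws 403 when the JWT carries no org claim, or when the target
// agent belongs to a different organization; returns the org id so
// callers can pass it through to the service layer.
function assertSameTenant(user: AuthenticatedUser, agent: AgentRecord): string {
  const orgId = user.organization_id;
  if (!orgId) {
    throw new AuthorizationError(403, "missing organization_id claim");
  }
  if (agent.organizationId !== orgId) {
    throw new AuthorizationError(403, "cross-tenant access denied");
  }
  return orgId;
}
```

The key design point the commit stresses: the filter value is forced from the token, so a client-supplied `organizationId` in the query or body can never widen the result set.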
SentryAgent.ai Developer
5e580b51dd fix(tests): resolve 4 failing test suites and patch lodash vulnerability
Test fixes (type mismatches introduced by V&V resolution changes):
- HealthDetailedController.test.ts: replace pool/makePool with dbProbe/makeDbProbe
  to match refactored HealthDetailedDeps interface (Pool → DbProbe abstraction)
- EventPublisher.test.ts: pass all 4 required constructor args to WebhookDeliveryWorker
  mock (pool, vaultClient, redisClient, redisUrl) — was passing only 1
- MarketplaceService.test.ts: IAgent.did/didCreatedAt are string|undefined (not null);
  fix makeAgent defaults and makeAgent({did:null}) call; fix type assertion to unknown first
- OIDCTrustPolicyService.test.ts: ICreateTrustPolicyRequest.branch is string|undefined
  (not nullable); replace all branch:null with branch:undefined

Security fix:
- npm audit fix: lodash ≤4.17.23 (HIGH) → patched; 0 vulnerabilities remaining

Result: 50/50 test suites pass, 722/722 tests pass, 0 vulnerabilities

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-08 08:40:23 +00:00
SentryAgent.ai Developer
f9a6a8aafb docs(devops): update all documentation for DockerSpec compliance
- Replace all docker-compose.yml/docker-compose.monitoring.yml references with
  compose.yaml/compose.monitoring.yaml (modern Compose Spec naming)
- Replace all `docker-compose` CLI commands with `docker compose` (plugin syntax)
- Update Dockerfile stage descriptions: node:18-alpine → node:20.11-bookworm-slim,
  built-in node user → explicit nodeapp:1001 non-root user
- Update image version references: postgres:14-alpine → postgres:14.12-alpine3.19,
  redis:7-alpine → redis:7.2-alpine3.19
- Externalize postgres credentials: hardcoded values → POSTGRES_USER/PASSWORD/DB env vars
- Externalize Grafana admin password: hardcoded 'agentidp' → GF_ADMIN_PASSWORD env var
- Add Docker Compose Variables section to environment-variables.md (POSTGRES_*, GF_ADMIN_PASSWORD)
- Update local-development.md Step 3: cp .env.example .env, document POSTGRES_* purpose
- Update quick-start.md: cp .env.example .env, use awk/sed for JWT key injection
- Update 07-dev-setup.md: remove 'no .env.example' claim, reference cp .env.example
- Update docker-compose.yml key file description in 04-codebase-structure.md
- Update monitoring overlay launch commands across all docs (compose.yaml + compose.monitoring.yaml)
- Update volume names to kebab-case: postgres_data → postgres-data, redis_data → redis-data
- Fix compliance encryption-runbook: docker-compose restart agentidp → docker compose restart app

All docs now consistent with compose.yaml in repo root.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-08 08:27:37 +00:00
SentryAgent.ai Developer
6fada694bb fix(docker): remediate all DockerSpec violations for field trial
- Replace docker-compose.yml → compose.yaml (modern Compose Spec, no version header)
- Replace docker-compose.monitoring.yml → compose.monitoring.yaml
- Remove deprecated version: '3.x' headers from both compose files
- Add dedicated app-tier bridge network (no default bridge)
- Add restart: unless-stopped to all services
- Add deploy.resources.limits (memory + cpu) to all services
- Add healthcheck to app service (curl /health)
- Add healthchecks to prometheus and grafana in monitoring overlay
- Externalize postgres credentials to env vars (POSTGRES_USER/PASSWORD/DB)
- Externalize grafana admin password to GF_ADMIN_PASSWORD env var
- Make env_file optional (required: false) for CI/field-trial environments
- Update Dockerfile: node:18-alpine → node:20.11-bookworm-slim (pinned version)
- Add explicit non-root system user/group (nodejs:1001/nodeapp:1001)
- Add curl install to final stage for healthcheck probe
- Copy src/db/migrations from build stage (not host bind)
- Expand .dockerignore: tmp/, temp/, *.env.*, compose files, Dockerfiles
- Add .env.example to git (was ignored by .env.* rule — add !.env.example exception)
- Add POSTGRES_USER/PASSWORD/DB and GF_ADMIN_PASSWORD to .env.example

All compose files pass: docker compose config --quiet 

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-08 08:19:49 +00:00
SentryAgent.ai Developer
30dc793ceb feat(governance): add CTO autonomy mandate, TBC session 2 minutes, and high-autonomy launcher
- CTO-AUTONOMY.md: CEO-authorized autonomy governance — defines act-freely scope and hard stops
- scripts/start-cto.sh: updated to launch with --dangerously-skip-permissions for full autonomy
- TBC/minutes/TBC-MIN-002-2026-04-07.md: session 2 opening minutes

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-08 05:28:42 +00:00
SentryAgent.ai Developer
861d9312d8 feat(tbc): add TBC agent launcher and workspace
Adds start-tbc.sh and .tbc-workspace/CLAUDE.md for the Technical &
Business Consultant role — independent advisory agent reporting to CEO,
matching the established pattern of start-cto.sh / .cto-workspace/.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-07 08:55:45 +00:00
SentryAgent.ai Developer
dceefebf18 chore(config): add PRD.md and .claude/ project config to repository
- PRD.md: Product Requirements Document (single source of truth for all requirements)
- .claude/settings.local.json: Claude Code agent permission config
- .claude/commands/: project-specific slash commands
- .claude/skills/: project-specific skill definitions

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-07 08:43:04 +00:00
SentryAgent.ai Developer
4e3b989629 feat(governance): add CTO session completion protocol, TBC charter, and process governance OpenSpec change
- CLAUDE.md + README.md: new CTO Session Completion Protocol (authorized/done vocabulary, end-of-session summary requirement)
- docs/engineering/08-workflow.md: Section 8 — CTO Session Completion Protocol
- scripts/start-cto.sh: startup protocol updated to read PRD.md first
- openspec/changes/process-governance-handoff-gap/: full OpenSpec change record (proposal, design, specs, tasks)
- TBC/charter.md: Technical & Business Consultant charter
- TBC/minutes/TBC-MIN-001-2026-04-07.md: inaugural TBC meeting minutes

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-07 08:41:12 +00:00
SentryAgent.ai Developer
7441c9f298 fix(vv): resolve all 6 V&V issues — field trial unblocked
All findings from the inaugural LeadValidator audit resolved and
confirmed. Release gate: PASS.

VV_ISSUE_002 (BLOCKER): 15 OpenAPI specs verified present, covering
all 20 route groups (46 endpoints documented in docs/openapi/)

VV_ISSUE_003 (MAJOR): Remove any types from src/db/pool.ts —
replaced pool.query shim with unknown[] + Object.defineProperty,
zero any types, eslint-disable suppressions removed

VV_ISSUE_004 (MAJOR): Remove raw Pool from ScaffoldController and
HealthDetailedController — injected AgentRepository/CredentialRepository
and DbProbe interface respectively; added CredentialRepository.findActiveClientId()

VV_ISSUE_005 (MAJOR): Add unit tests for 5 untested services —
ComplianceStatusStore, EventPublisher, MarketplaceService,
OIDCTrustPolicyService, UsageService

VV_ISSUE_006 (MAJOR): Add integration tests for 7 missing route
groups — analytics, billing, tiers, webhooks, marketplace,
oidc-trust-policies, oidc-token-exchange

VV_ISSUE_001 (MINOR): Create missing design.md and tasks.md in 4
OpenSpec archives — all archives now complete

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-07 04:52:47 +00:00
SentryAgent.ai Developer
d216096dfb feat(governance): add V&V Architect (LeadValidator) — independent audit agent
Fixes a critical bug where VALIDATOR.md contained a copy of start-validator.sh
(making the validator unlaunchable). Introduces a fully independent V&V Architect
agent that audits the codebase against the PRD and OpenSpec outside the CTO's
chain of command.

Changes:
- VALIDATOR.md: rewritten as proper system prompt (8-phase audit methodology,
  issue format, severity model, communication protocol)
- scripts/start-validator.sh: isolated workspace setup, sanity check, auto-init
  ledger, validator-specific CLAUDE.md (no CEO context contamination)
- openspec/vv_audit/LEDGER.md: shared audit ledger index (CEO release gate view)
- openspec/changes/archive/2026-04-07-vv-architect-setup/: full OpenSpec artifacts
  (proposal.md, design.md, tasks.md — 28 tasks, all complete)

Note: .cto-workspace/CLAUDE.md updated (gitignored — persists on disk only).
#vv-findings hub channel created for real-time validator notifications.

CEO approved 2026-04-07.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-07 02:56:36 +00:00
SentryAgent.ai Developer
8cabc0191c docs: commit all Phase 6 documentation updates and OpenSpec archives
- devops docs: 8 files updated for Phase 6 state; field-trial.md added (946-line runbook)
- developer docs: api-reference (50+ endpoints), quick-start, 5 existing guides updated, 5 new guides added
- engineering docs: all 12 files updated (services, architecture, SDK guide, testing, overview)
- OpenSpec archives: phase-7-devops-field-trial, developer-docs-phase6-update, engineering-docs-phase6-update
- VALIDATOR.md + scripts/start-validator.sh: V&V Architect tooling added
- .gitignore: exclude session artifacts, build artifacts, and agent workspaces

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-07 02:24:24 +00:00
SentryAgent.ai Developer
0fb00256b4 chore(openspec): archive phase-6-market-expansion — 53/53 tasks complete
Analytics Dashboard, API Gateway Tiers, AGNTCY Compliance all delivered.
Development freeze now in effect per CEO directive — no Phase 7.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-04 02:20:22 +00:00
SentryAgent.ai Developer
e327c41211 chore(phase-6): mark all 53 tasks complete in tasks.md
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-04 02:20:16 +00:00
SentryAgent.ai Developer
eea885db04 feat(phase-6): WS3+WS4+WS6 — Analytics, API Tiers, AGNTCY Compliance
WS3 — Advanced Analytics Dashboard:
- DB migration: analytics_events table (tenant_id, date, metric_type, count)
- AnalyticsService: recordEvent (fire-and-forget), getTokenTrend, getAgentActivity, getAgentUsageSummary
- Analytics hooks in OAuth2Service (token_issued) and AgentService (agent_registered/deactivated)
- AnalyticsController + routes/analytics.ts (gated by ANALYTICS_ENABLED flag)
- Portal: TokenTrendChart (recharts LineChart), AgentHeatmap (recharts heatmap), /analytics page

WS4 — API Gateway Tiers:
- DB migration: tenant_tiers table; src/config/tiers.ts (free/pro/enterprise limits)
- TierService: getStatus, initiateUpgrade (Stripe), applyUpgrade; TierLimitError in errors.ts
- tierEnforcement middleware (Redis-backed daily call/token counters; TIER_ENFORCEMENT flag)
- Agent count enforcement in AgentService.create()
- Stripe webhook updated to call TierService.applyUpgrade() on checkout.session.completed
- TierController + routes/tiers.ts; Portal: /settings/tier page with upgrade flow

WS6 — AGNTCY Compliance Certification:
- ComplianceService: generateReport() (Redis-cached 5 min), exportAgentCards()
- Compliance sections: agent-identity (DID + credential expiry checks), audit-trail (Merkle chain)
- ComplianceController updated with getComplianceReport, exportAgentCards handlers
- routes/compliance.ts: new AGNTCY routes (gated by COMPLIANCE_ENABLED flag); SOC2 routes unaffected

QA:
- 28 new unit tests: AnalyticsService (8), TierService (9), ComplianceService (11) — all pass
- 673 total unit tests passing; 0 TypeScript errors across API and portal
- AGNTCY conformance test suite at tests/agntcy-conformance/ (4 protocol tests)
- Portal builds cleanly: 9 routes including /analytics and /settings/tier
- Feature flags verified: ANALYTICS_ENABLED, TIER_ENFORCEMENT, COMPLIANCE_ENABLED

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-04 02:20:09 +00:00
SentryAgent.ai Developer
0fad328329 feat(openspec): propose phase-6-market-expansion change
Analytics Dashboard, API Gateway Tiers, AGNTCY Compliance — 62 tasks across 8 groups.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-03 12:57:23 +00:00
SentryAgent.ai Developer
8fd6823581 chore(openspec): archive phase-5-scale-ecosystem — 68/68 tasks complete
WS1 (Rust SDK), WS2 (A2A Authorization), WS5 (Developer Experience)
all delivered, QA gates passed, committed to main.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-03 02:54:45 +00:00
SentryAgent.ai Developer
eaabaebf52 chore(phase-5): mark all 68 tasks complete in tasks.md
Phase 5 implementation complete — WS1 (Rust SDK), WS2 (A2A Authorization),
WS5 (Developer Experience). All QA gates passed.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-03 02:50:43 +00:00
SentryAgent.ai Developer
662879f0ee feat(phase-5): WS5 — Developer Experience
Implements scaffold ZIP generator, Stoplight Elements API explorer, and CLI scaffold command:

Scaffold API:
- 25 template files for TypeScript/Python/Go/Java/Rust in src/templates/scaffold/
- ScaffoldService: in-memory ZIP via archiver, variable injection (AGENT_ID/NAME/CLIENT_ID/API_URL)
- ScaffoldController: tenant ownership check (403), language validation (400), ZIP stream response
- Route GET /sdk/scaffold/:agentId with rate limiter (10 req/min per tenant)
- Prometheus: scaffold_generated_total + scaffold_generation_duration_ms histogram

Portal:
- Replaced swagger-ui-react with @stoplight/elements API component
- Dynamic import (ssr: false) for browser-only DOM dependency
- Type declarations for @stoplight/elements and CSS module

CLI:
- sentryagent scaffold --agent-id <id> [--language typescript] [--out .]
- Raw fetch for binary ZIP stream → unzipper.Extract() → prints next steps
- Human-readable 400/403/404 error messages

Tests: 19 tests (unit + integration), ScaffoldService 80%+ branch coverage

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-03 02:50:32 +00:00
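The variable-injection step this commit mentions (AGENT_ID/NAME/CLIENT_ID/API_URL substituted into scaffold templates before zipping) reduces to a placeholder replacement. A toy version, assuming a `{{NAME}}` placeholder syntax that is not taken from the repo:

```typescript
// Toy sketch of scaffold variable injection. The {{VAR}} placeholder
// syntax and the ScaffoldVars type are assumptions for illustration.

type ScaffoldVars = Record<"AGENT_ID" | "AGENT_NAME" | "CLIENT_ID" | "API_URL", string>;

function injectVariables(template: string, vars: ScaffoldVars): string {
  // Only the four known variable names are substituted; anything else
  // in the template is left untouched.
  return template.replace(
    /\{\{(AGENT_ID|AGENT_NAME|CLIENT_ID|API_URL)\}\}/g,
    (_m, name) => vars[name as keyof ScaffoldVars],
  );
}
```

In the service described above, the result of this pass would be streamed into an in-memory ZIP (via archiver) rather than written to disk.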
SentryAgent.ai Developer
16497706d3 feat(phase-5): WS2 — A2A Authorization
Implements agent-to-agent delegation chains:
- Migration 024: delegation_chains table with HMAC signature, TTL, revocation
- DelegationCrypto: HMAC-SHA256 sign/verify, UUID token generation
- DelegationService: create (scope subset validation, self-delegation guard,
  same-tenant delegatee check), verify (returns valid: false on expired/revoked,
  never throws), revoke (delegator-only, conflict guard)
- DelegationController + router at /oauth2/token/delegate (POST/DELETE) and
  /oauth2/token/verify-delegation (POST)
- Feature-flagged behind A2A_ENABLED env var (default on)
- Prometheus metrics: delegations_created/verified/revoked_total
- 33 tests (unit + integration): all pass, DelegationService 87.5%+ branch coverage

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-03 02:49:36 +00:00
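The HMAC-SHA256 sign/verify scheme described for DelegationCrypto can be sketched with Node's crypto module. This is not the repository's source; the secret handling and function names are assumptions, and constant-time comparison is used to match standard MAC-verification practice:

```typescript
import { createHmac, randomUUID, timingSafeEqual } from "node:crypto";

// Illustrative sketch: a random UUID delegation token is signed with a
// server-side secret, and verification recomputes the MAC and compares
// it in constant time. The secret would come from config or Vault.
const SECRET = "delegation-signing-secret"; // placeholder, assumed

function signDelegation(token: string, secret: string = SECRET): string {
  return createHmac("sha256", secret).update(token).digest("hex");
}

function verifyDelegation(token: string, signature: string, secret: string = SECRET): boolean {
  const expected = Buffer.from(signDelegation(token, secret), "hex");
  const actual = Buffer.from(signature, "hex");
  // timingSafeEqual throws on length mismatch, so guard first.
  return expected.length === actual.length && timingSafeEqual(expected, actual);
}

const token = randomUUID();
const signature = signDelegation(token);
```

Note the commit's verify semantics: the real service returns `valid: false` on expired or revoked chains rather than throwing, which keeps the verification endpoint total.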
SentryAgent.ai Developer
0506bc1b8e chore(sdk-rust): add .gitignore to exclude build artifacts
Removes sdk-rust/target/ from tracking — was accidentally committed
without a Rust .gitignore in place.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-03 02:49:19 +00:00
SentryAgent.ai Developer
a4aab1b5b3 feat(phase-5): WS1 — Rust SDK
Implements the sentryagent-idp Rust SDK crate (sdk-rust/) with:
- TokenManager with Arc<Mutex<TokenCache>> for thread-safe token caching
- AgentIdPClient with full method coverage: agents, oauth2, credentials, audit, marketplace, delegation
- Error hierarchy via thiserror (AgentIdPError enum)
- All model types with serde derive
- 429 RateLimited handling with Retry-After parsing; zero unwrap() calls
- Unit tests (mockito), doc tests, and integration tests (#[ignore])
- quickstart example, full README, cargo doc clean

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-03 02:48:14 +00:00
SentryAgent.ai Developer
fec1801e8c chore(openspec): trim phase-5 scope to WS1+WS2+WS5 per CEO approval
Approved: Rust SDK, A2A Authorization, Developer Experience.
Deferred to Phase 6: Analytics Dashboard, API Gateway Tiers, AGNTCY Compliance.
Tasks: 119 → 76. Specs: 6 → 3.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-02 15:42:05 +00:00
SentryAgent.ai Developer
389a764e8d feat(openspec): propose phase-5-scale-ecosystem change
6 workstreams, 119 tasks — Scale & Ecosystem:
- WS1: Rust SDK
- WS2: Agent-to-Agent (A2A) Authorization
- WS3: Advanced Analytics Dashboard
- WS4: Public API Gateway & Rate Limiting SaaS
- WS5: Developer Experience (DX) improvements
- WS6: AGNTCY Compliance Certification Package

Awaiting CEO approval to begin implementation.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-02 15:33:08 +00:00
SentryAgent.ai Developer
831e91c467 chore(openspec): archive phase-4-developer-growth change
All 90 tasks complete. Phase 4 — Developer Growth & Go-to-Market
fully delivered and archived per OpenSpec protocol.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-02 15:17:18 +00:00
SentryAgent.ai Developer
af630b43d4 chore(phase-4): QA fixes + gitignore portal build artifacts
- Fix 7 test fixtures missing isPublic field added in WS4 Marketplace
- Add portal/.next/ to .gitignore (build artifacts should not be tracked)
- Mark all Phase 4 tasks 11.1-11.11 complete in tasks.md

QA results: 611/611 tests pass, tsc zero errors, portal build OK, CLI build OK

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-02 10:59:11 +00:00
SentryAgent.ai Developer
26a56f84e1 feat(phase-4): WS6 — Billing & Usage Metering (Stripe, free tier enforcement)
- DB migration 023: tenant_subscriptions and usage_events tables
- UsageMeteringMiddleware: in-memory counters, 60s flush to DB via UPSERT
- FreeTierEnforcementMiddleware: 10 agents / 1,000 calls/day limits, Redis cache
- UsageService: getDailyUsage and getActiveAgentCount
- BillingService: Stripe checkout sessions, webhook verification, subscription status
- POST /billing/checkout, POST /billing/webhook, GET /billing/usage endpoints
- BILLING_ENABLED=false disables enforcement without breaking metering
- Dashboard: Usage tab with Free Tier/Pro badges and metric cards
- 19 unit tests passing across billing services and middleware

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-02 10:51:36 +00:00
SentryAgent.ai Developer
fefbf1e3ea feat(phase-4): WS5 — GitHub Actions OIDC token exchange and trust policies
- POST /oidc/token: GitHub OIDC JWT exchange (bootstrap + agent-scoped modes)
- POST/GET/DELETE /oidc/trust-policies: trust policy CRUD with enforcement
- DB migration 022: oidc_trust_policies table with provider/repo/branch/agent_id
- GitHub Actions: register-agent and issue-token actions with full READMEs
- Trust policy enforcement rejects token exchanges not matching registered policies
- Bootstrap mode issues agents:write token for new agent registration without agentId

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-02 10:37:39 +00:00
SentryAgent.ai Developer
89c99b666d feat(phase-4): WS4 — Agent Marketplace (public registry, pagination, filters)
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-02 10:17:51 +00:00
SentryAgent.ai Developer
d1e6af25aa feat(phase-4): WS2 + WS3 — Developer Portal (Next.js 14) and CLI tool (sentryagent)
WS2: Developer Portal (portal/)
- Standalone Next.js 14 + Tailwind CSS app — independent deployment
- Home page: hero, feature grid, CTA to /get-started
- /pricing: free tier limits table (10 agents, 1k calls/day) + paid tier CTA
- /sdks: all 4 SDKs (Node.js, Python, Go, Java) with install + code examples
- /api-explorer: Swagger UI from NEXT_PUBLIC_API_URL/openapi.json, persistAuthorization
- /get-started: 4-step wizard (setup → register agent → credentials → SDK snippet)
- Shared Nav component with active-link highlighting
- Build: 8/8 static pages, zero TypeScript errors

WS3: CLI Tool (cli/ — npm package: sentryagent)
- configure, register-agent, list-agents, issue-token, rotate-credentials, tail-audit-log
- Auto OAuth2 token fetch + 30s-buffer cache via client_credentials flow
- chalk-formatted table output, confirmation prompts, bounded audit log dedup
- bash + zsh shell completion scripts
- README with installation, all commands, and completion setup
- Build: tsc clean, node dist/index.js --help verified

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-02 04:29:50 +00:00
SentryAgent.ai Developer
1b682c22b2 feat(phase-4): WS1 — Production Hardening (Redis rate limiting, DB pool, health endpoint, k6)
Rate limiting:
- Replace in-memory express-rate-limit with ioredis + rate-limiter-flexible (sliding window)
- Graceful fallback to RateLimiterMemory when Redis unreachable
- RATE_LIMIT_WINDOW_MS / RATE_LIMIT_MAX_REQUESTS env var config
- Retry-After header on 429 responses
- agentidp_rate_limit_hits_total Prometheus counter

Database pool:
- Explicit pg.Pool config via DB_POOL_MAX/MIN/IDLE_TIMEOUT_MS/CONNECTION_TIMEOUT_MS
- Defaults: max=20, min=2, idle=30s, conn timeout=5s
- agentidp_db_pool_active_connections + agentidp_db_pool_waiting_requests gauges

Health endpoint:
- GET /health/detailed — per-service status (database, Redis, Vault, OPA)
- healthy / degraded (>1000ms) / unreachable classification
- HTTP 200 (all healthy) / 207 (any degraded) / 503 (any unreachable)

Load tests:
- tests/load/ with k6 scenarios for agent registration (100 VUs), token issuance (1000 VUs), credential rotation (50 VUs)
- npm run load-test script

Tests: 586 passing, zero TypeScript errors

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-02 04:20:37 +00:00
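The rate-limiting behaviour above (sliding window, Retry-After on 429, in-memory fallback when Redis is unreachable) can be illustrated with a minimal in-memory limiter. The real implementation uses rate-limiter-flexible backed by ioredis; this sketch only shows the fallback path's semantics, and all names are hypothetical:

```typescript
// Minimal in-memory sliding-window limiter, illustrating the
// RateLimiterMemory fallback behaviour. Hypothetical names throughout.

interface LimitResult {
  allowed: boolean;
  retryAfterMs: number; // basis for a Retry-After header on 429
}

class SlidingWindowLimiter {
  private hits = new Map<string, number[]>(); // key -> hit timestamps

  constructor(private windowMs: number, private maxRequests: number) {}

  consume(key: string, now: number = Date.now()): LimitResult {
    const windowStart = now - this.windowMs;
    // Keep only hits still inside the window.
    const recent = (this.hits.get(key) ?? []).filter((t) => t > windowStart);
    if (recent.length >= this.maxRequests) {
      // The oldest in-window hit determines when a slot frees up.
      const retryAfterMs = recent[0] + this.windowMs - now;
      this.hits.set(key, recent);
      return { allowed: false, retryAfterMs };
    }
    recent.push(now);
    this.hits.set(key, recent);
    return { allowed: true, retryAfterMs: 0 };
  }
}
```

A Redis-backed sliding window gives the same semantics across multiple API instances, which is why the in-memory form is only a degradation path.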
SentryAgent.ai Developer
b0f70b7ac4 feat(openspec): Phase 4 Developer Growth & Go-to-Market Readiness
OpenSpec change: phase-4-developer-growth (spec-driven, 4/4 artifacts)

6 workstreams, 90 implementation tasks, delivery sequence:
WS1 → WS2 + WS3 (parallel) → WS4 → WS5 → WS6

Workstreams:
1. Production Hardening — ioredis rate limiting, DB pool tuning, /health/detailed, k6 load tests
2. Developer Portal — Next.js 14, Swagger UI explorer, onboarding wizard, pricing/SDK pages
3. CLI Tool — sentryagent npm CLI, 5 commands, shell completion
4. Agent Marketplace — public searchable registry powered by existing agent/DID infrastructure
5. GitHub Actions — register-agent + issue-token Actions via OIDC (no stored secrets)
6. Billing & Usage Metering — Stripe Checkout, webhook-driven state, free tier enforcement

New capabilities (8 specs): production-hardening, developer-portal, cli-tool,
agent-marketplace, github-actions, billing-metering (+delta: web-dashboard, monitoring)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-02 04:00:34 +00:00
SentryAgent.ai Developer
f1fbe0e29a chore(openspec): archive all completed changes, sync 14 new specs to library
Archived 4 completed OpenSpec changes (2026-04-02):
- phase-3-enterprise (100/100 tasks) — 6 Phase 3 capabilities synced
- devops-documentation (48/48 tasks) — 3 new + 1 merged capability
- bedroom-developer-docs (33/33 tasks) — 4 new capabilities synced
- engineering-docs (superseded by 2026-03-29 archive) — no tasks

Main spec library grows from 21 → 35 capabilities (+14 new):
federation, multi-tenancy, oidc, soc2, w3c-dids, webhooks,
database, operations, system-overview, api-reference, core-concepts,
developer-guides, quick-start + deployment (merged additive requirements)

Active changes: 0 — project board is clear for Phase 4 planning.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-02 03:50:47 +00:00
SentryAgent.ai Developer
ceec22f714 chore(phase-3): mark WS6 tasks complete — Phase 3 Enterprise DONE
All 100/100 tasks checked. All 6 workstreams complete. QA-approved.
SOC 2 audit window can begin.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-31 00:42:29 +00:00
SentryAgent.ai Developer
fd90b2acd1 feat(phase-3): workstream 6 — SOC 2 Type II Preparation
Implements all 22 WS6 tasks completing Phase 3 Enterprise.

Column-level encryption (AES-256-CBC, Vault-backed key) via EncryptionService
applied to credentials.secret_hash, credentials.vault_path,
webhook_subscriptions.vault_secret_path, and agent_did_keys.vault_key_path.
Backward-compatible: isEncrypted() guard skips decryption for existing
plaintext rows until next read-write cycle.

Audit chain integrity (CC7.2): AuditRepository computes SHA-256 Merkle hash
on every INSERT (hash = SHA-256(eventId+timestamp+action+outcome+agentId+orgId+prevHash)).
AuditVerificationService walks the full chain verifying hash continuity.
AuditChainVerificationJob runs hourly; sets agentidp_audit_chain_integrity
Prometheus gauge to 1 (pass) or 0 (fail).

TLS enforcement (CC6.7): TLSEnforcementMiddleware registered as first
middleware in Express stack; 301 redirect on non-https X-Forwarded-Proto
in production.

SecretsRotationJob (CC9.2): hourly scan for credentials expiring within 7
days; increments agentidp_credentials_expiring_soon_total.

ComplianceController + routes: GET /audit/verify (auth+audit:read scope,
30/min rate-limit); GET /compliance/controls (public, Cache-Control 60s).
ComplianceStatusStore: module-level map updated by jobs, consumed by controller.

Prometheus: 2 new metrics (agentidp_credentials_expiring_soon_total,
agentidp_audit_chain_integrity); 6 alerting rules in alerts.yml.

Compliance docs: soc2-controls-matrix.md, encryption-runbook.md,
audit-log-runbook.md, incident-response.md, secrets-rotation.md.

Tests: 557 unit tests passing (35 suites); 26 new tests (EncryptionService,
AuditVerificationService); 19 compliance integration tests. TypeScript clean.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-31 00:41:53 +00:00
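The audit-chain construction this commit spells out (hash = SHA-256(eventId+timestamp+action+outcome+agentId+orgId+prevHash), with a verification job walking the chain) can be sketched directly. Field names follow the commit's formula; the concatenation order and empty genesis value are assumptions:

```typescript
import { createHash } from "node:crypto";

// Sketch of the per-insert hash chain described above. Illustrative
// only: separator-free concatenation and "" as the genesis prevHash
// are assumptions, not confirmed repo behaviour.

interface AuditEvent {
  eventId: string;
  timestamp: string;
  action: string;
  outcome: string;
  agentId: string;
  orgId: string;
}

function chainHash(e: AuditEvent, prevHash: string): string {
  return createHash("sha256")
    .update(e.eventId + e.timestamp + e.action + e.outcome + e.agentId + e.orgId + prevHash)
    .digest("hex");
}

// Walking the chain re-derives every hash; any edited or deleted row
// breaks continuity from that point forward.
function verifyChain(events: AuditEvent[], hashes: string[], genesis = ""): boolean {
  let prev = genesis;
  for (let i = 0; i < events.length; i++) {
    if (chainHash(events[i], prev) !== hashes[i]) return false;
    prev = hashes[i];
  }
  return true;
}
```

This is the property the hourly AuditChainVerificationJob exposes as the `agentidp_audit_chain_integrity` gauge: 1 while the walk succeeds end to end, 0 on the first mismatch.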
SentryAgent.ai Developer
272b69f18d feat(phase-3): workstream 5 — Webhooks & Event Streaming
- DB migrations 016/017: webhook_subscriptions and webhook_deliveries tables
- WebhookService: CRUD for subscriptions, Vault-backed secret storage, delivery history
- WebhookDeliveryWorker: Bull queue, HMAC-SHA256 signatures, exponential backoff,
  SSRF protection (RFC 1918 + loopback + link-local rejection), dead-letter handling
- EventPublisher: publishes 10 event types (agent/credential/token lifecycle);
  optional Kafka adapter activated via KAFKA_BROKERS env var
- AgentService, CredentialService, OAuth2Service: wired to EventPublisher
- WebhookController + routes: 6 endpoints with webhooks:read / webhooks:write scope guards
- KafkaAdapter: optional Kafka producer (kafkajs), no-op when KAFKA_BROKERS unset
- OAuthScope extended: webhooks:read, webhooks:write
- AuditAction extended: webhook.created, webhook.updated, webhook.deleted
- Metrics: agentidp_webhook_dead_letters_total counter added to registry
- 523 unit tests passing; TypeScript strict throughout, zero `any`

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-31 00:07:41 +00:00
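The SSRF protection listed above (rejecting RFC 1918, loopback, and link-local targets before delivery) comes down to an address-range check after DNS resolution. A minimal IPv4-only sketch, with the function name and fail-closed handling of non-IPv4 input as assumptions:

```typescript
// Illustrative SSRF guard: block webhook targets resolving to private,
// loopback, or link-local IPv4 ranges. IPv4-only for brevity; a real
// guard must also cover IPv6 and re-check after each redirect.

function isBlockedIPv4(ip: string): boolean {
  const parts = ip.split(".").map(Number);
  if (parts.length !== 4 || parts.some((n) => Number.isNaN(n) || n < 0 || n > 255)) {
    return true; // not a valid IPv4 literal: refuse rather than guess
  }
  const [a, b] = parts;
  if (a === 10) return true;                        // 10.0.0.0/8      (RFC 1918)
  if (a === 172 && b >= 16 && b <= 31) return true; // 172.16.0.0/12   (RFC 1918)
  if (a === 192 && b === 168) return true;          // 192.168.0.0/16  (RFC 1918)
  if (a === 127) return true;                       // 127.0.0.0/8     loopback
  if (a === 169 && b === 254) return true;          // 169.254.0.0/16  link-local
  return false;
}
```

The check has to run on the resolved address, not the hostname, or an attacker-controlled DNS record can point a public-looking URL at an internal service.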
SentryAgent.ai Developer
03b5de300c feat(phase-3): workstream 4 — AGNTCY Federation
Implements cross-IdP token verification for the AGNTCY ecosystem:

- Migration 015: federation_partners table (issuer, jwks_uri,
  allowed_organizations JSONB, status, expires_at)
- FederationService: registerPartner (JWKS validation at registration),
  listPartners, getPartner, updatePartner, deletePartner,
  verifyFederatedToken (alg:none rejected, RS256/ES256 only,
  allowedOrganizations filter, expiry enforcement)
- JWKS caching in Redis (TTL: FEDERATION_JWKS_CACHE_TTL_SECONDS);
  cache invalidated on partner delete and jwks_uri change
- FederationController + routes: 5 admin:orgs endpoints +
  POST /federation/verify (agents:read)
- OPA policy: 5 federation admin endpoint → admin:orgs mappings
- 499 unit tests passing; 94.69% statement coverage on FederationService

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-30 10:13:49 +00:00
SentryAgent.ai Developer
5e465e596a feat(phase-3): workstream 3 — OpenID Connect (OIDC) Provider
Implements full OIDC layer on top of the existing OAuth 2.0 token service:

- Migration 014: oidc_keys table (RSA/EC key pairs, is_current flag, expires_at
  for rotation grace period)
- OIDCKeyService: key generation (RS256/ES256), Vault storage, JWKS with Redis
  cache, key rotation with grace period, pruneExpiredKeys
- IDTokenService: buildIDTokenClaims (agent claims, nonce, DID), signIDToken
  (kid in JWT header), verifyIDToken (alg:none rejected, RS256/ES256 only)
- OIDCController: discovery document, JWKS (Cache-Control), /agent-info
- OIDC routes mounted at / — /.well-known/openid-configuration,
  /.well-known/jwks.json, /agent-info
- OAuth2Service: id_token appended to token response when openid scope requested
- 473 unit tests passing (100% OIDCKeyService stmts, 95.91% IDTokenService stmts)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-30 09:54:26 +00:00
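The `alg:none rejected, RS256/ES256 only` guard that both IDTokenService here and verifyFederatedToken in WS4 apply can be shown as a header check. This sketch validates only the JWT header's algorithm claim; real verification of course also checks the signature against the JWKS. Function names are illustrative:

```typescript
// Sketch of the algorithm allow-list guard: decode the JWT header and
// accept only RS256/ES256, rejecting alg:none (and HS256 etc.) outright.
// Header-only check; signature verification is out of scope here.

const ALLOWED_ALGS = new Set(["RS256", "ES256"]);

function headerAlgAllowed(jwt: string): boolean {
  const [headerB64] = jwt.split(".");
  if (!headerB64) return false;
  try {
    const header = JSON.parse(Buffer.from(headerB64, "base64url").toString("utf8"));
    return typeof header.alg === "string" && ALLOWED_ALGS.has(header.alg);
  } catch {
    return false; // malformed base64url or JSON: reject
  }
}
```

Allow-listing asymmetric algorithms (rather than deny-listing `none`) also closes the classic RS256-to-HS256 confusion attack, since symmetric algorithms are never accepted.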
SentryAgent.ai Developer
3d1fff15f6 feat(phase-3): workstream 2 — W3C DIDs
Implements W3C DID Core 1.0 per-agent identity for every registered agent:

Schema:
- agent_did_keys table: stores EC P-256 public key JWK + Vault path for private key
- agents.did + agents.did_created_at columns

Key management:
- EC P-256 key pair generated on every agent registration via Node.js crypto
- Private key stored in Vault KV v2 (dev:no-vault marker when Vault not configured)
- Public key JWK stored in PostgreSQL agent_did_keys table

API (4 new endpoints):
- GET /.well-known/did.json — instance DID Document (public, cached)
- GET /api/v1/agents/:id/did — per-agent DID Document (public, 410 for decommissioned)
- GET /api/v1/agents/:id/did/resolve — W3C DID Resolution result (agents:read scope)
- GET /api/v1/agents/:id/did/card — AGNTCY agent card (public)

Implementation:
- DIDService: DID construction, key generation, Redis caching (TTL configurable)
- DIDController: 410 Gone for decommissioned agents, correct Content-Type on resolve
- AgentService: calls DIDService.generateDIDForAgent on every new registration

Tests: 429 passing, DIDService 98.93% coverage, private key absence verified in all responses

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-30 00:47:59 +00:00
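The per-agent DID Document served by `GET /api/v1/agents/:id/did` can be sketched as a pure construction from the stored public-key JWK. The `did:web` path layout, `JsonWebKey2020` method type, and all names below are assumptions for illustration, not taken from the repo; the important invariant from the commit is that only the public JWK appears, never the Vault-held private key:

```typescript
// Hypothetical did:web document construction from a stored P-256
// public JWK. Shapes and the did:web path scheme are assumptions.

interface PublicJwk {
  kty: string; // "EC"
  crv: string; // "P-256"
  x: string;   // base64url coordinate
  y: string;   // base64url coordinate
}

function buildAgentDidDocument(host: string, agentId: string, publicJwk: PublicJwk) {
  const did = `did:web:${host}:agents:${agentId}`;
  return {
    "@context": ["https://www.w3.org/ns/did/v1"],
    id: did,
    verificationMethod: [
      {
        id: `${did}#key-1`,
        type: "JsonWebKey2020",
        controller: did,
        publicKeyJwk: publicJwk, // public coordinates only, no "d"
      },
    ],
    authentication: [`${did}#key-1`],
  };
}
```

Because the document is a pure function of public data, it can be Redis-cached and served unauthenticated, with 410 Gone substituted once the agent is decommissioned.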
SentryAgent.ai Developer
d252097f71 feat(phase-3): workstream 1 — Multi-Tenancy
Introduces full multi-tenant organization model to AgentIdP:

Schema:
- 6 migrations: organizations + organization_members tables; organization_id FK
  added to agents, credentials, audit_logs; PostgreSQL RLS policies on all three
  tables; system org seed + backfill

API:
- 6 new /api/v1/organizations endpoints (CRUD + members) gated by admin:orgs scope
- OPA scopes.json updated with 6 new org endpoint → admin:orgs mappings

Implementation:
- OrgRepository, OrgService, OrgController, createOrgsRouter
- OrgContextMiddleware: sets app.organization_id session variable so RLS enforces
  per-request org isolation at the database layer
- JWT payload extended with organization_id claim; auth.ts backfills org_system
  for backward-compatible tokens
- New error classes: OrgNotFoundError, OrgHasActiveAgentsError, AlreadyMemberError

Tests: 373 passing, 80.64% branch coverage, zero `any` types

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-30 00:29:32 +00:00
SentryAgent.ai Developer
cb7d079ef6 feat(openspec): Phase 3 Enterprise — proposal, design, specs, and tasks
Scaffolds the phase-3-enterprise OpenSpec change (proposal only — awaiting CEO
approval before implementation). 6 workstreams, 95 implementation tasks:

WS1: Multi-Tenancy (21 tasks) — org model, RLS, admin API
WS2: W3C DIDs (12 tasks) — DID:WEB, agent DID documents, AGNTCY cards
WS3: OIDC (12 tasks) — oidc-provider, ID tokens, JWKS, discovery
WS4: Federation (11 tasks) — cross-instance trust, JWT assertions
WS5: Webhooks (17 tasks) — subscriptions, Bull queue, HMAC, retry
WS6: SOC2 (22 tasks) — encryption at rest, Merkle audit chain, controls

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-29 12:53:31 +00:00
SentryAgent.ai Developer
d42c653eea chore(openspec): archive engineering-docs and phase-2-production-ready changes
- engineering-docs → archive/2026-03-29-engineering-docs (63/63 tasks complete)
- phase-2-production-ready → archive/2026-03-29-phase-2-production-ready (89/89 tasks complete)
- openspec/specs/ synced with all Phase 1 + Phase 2 + engineering-docs capabilities (22 specs total)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-29 12:41:53 +00:00
SentryAgent.ai Developer
eced5f8699 docs: engineering knowledge base for new hires
Complete docs/engineering/ suite — 12 documents covering company overview,
system architecture, tech stack ADRs, codebase structure, service deep dives,
annotated code walkthroughs, dev setup, engineering workflow, testing strategy,
deployment/ops, SDK guide, and README index. All content verified against
source files. All 82 tasks in openspec/changes/engineering-docs/tasks.md
marked complete.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-29 12:38:42 +00:00
SentryAgent.ai Developer
1f95cfe89d release: Phase 2 — Production-Ready AgentIdP
Merges all 8 Phase 2 workstreams from develop into main.

Workstreams delivered:
- WS1: HashiCorp Vault credential storage
- WS2: Python SDK (sentryagent-idp)
- WS3: Go SDK (github.com/sentryagent/idp-sdk-go)
- WS4: Java SDK (ai.sentryagent:idp-sdk)
- WS5: OPA Policy Engine (hot-reloadable authz, Rego + Wasm)
- WS6: Web Dashboard UI (React 18 + Vite 5, 6 pages)
- WS7: Prometheus + Grafana Monitoring (7 metrics, auto-provisioned dashboard)
- WS8: Multi-Region Terraform Deployment (AWS ECS/RDS/ElastiCache + GCP Cloud Run/SQL/Memorystore)

Quality gates: 344/344 unit tests passing, 96.71% coverage, TypeScript strict throughout.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-29 06:27:09 +00:00
SentryAgent.ai Developer
6913d62648 feat(phase-2): workstream 8 — Multi-Region Terraform Deployment
AWS environment:
- VPC (3-AZ, public + private subnets, NAT gateways, VPC endpoints for ECR/SM/CW)
- ECS Fargate service (sentryagent/agentidp) — secrets from Secrets Manager
- RDS PostgreSQL 14 (Multi-AZ, encrypted, VPC-internal, storage autoscaling)
- ElastiCache Redis 7 (primary + replica, at-rest + in-transit encryption)
- ALB with HTTPS/443, HTTP→HTTPS redirect, ACM certificate
- Route 53 alias record

GCP environment:
- VPC + private services access + Serverless VPC connector
- Cloud Run service — secrets from Secret Manager
- Cloud SQL PostgreSQL 14 (private IP, no public endpoint)
- Cloud Memorystore Redis 7 (VPC-internal, AUTH enabled)

Shared:
- 4 reusable modules: agentidp (dual AWS/GCP), rds, redis, lb
- No hardcoded secrets; all sensitive vars marked sensitive=true
- terraform.tfvars.example for both environments
- docs/devops/deployment.md — AWS + GCP step-by-step walkthrough, rollback procedures

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-29 06:25:14 +00:00
SentryAgent.ai Developer
a504964e5f feat(phase-2): workstream 7 — Prometheus + Grafana Monitoring
- Add prom-client 15; shared registry in src/metrics/registry.ts (7 metrics)
- HTTP request counter + duration histogram via metricsMiddleware
- DB query duration histogram wrapping pg Pool.query
- Redis command duration histogram via typed instrumentRedisMethod wrapper
- agentidp_tokens_issued_total in OAuth2Service
- agentidp_agents_registered_total in AgentService
- GET /metrics unauthenticated endpoint (Prometheus text format)
- docker-compose.monitoring.yml overlay (Prometheus + Grafana)
- Grafana auto-provisioned datasource + pre-built AgentIdP dashboard
- docs/devops/operations.md monitoring section added
- 36/36 unit tests passing, 100% coverage on new metrics code
- Fix pre-existing unused import in tests/integration/agents.test.ts

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-29 06:13:41 +00:00
SentryAgent.ai Developer
7d6e248a14 feat(phase-2): workstream 6 — Web Dashboard UI
- dashboard/: Vite 5 + React 18 + TypeScript strict SPA
  - Auth: sessionStorage credentials, TokenManager validation, AuthProvider context
  - Pages: Login, Agents (search + filter), AgentDetail (suspend/reactivate),
    Credentials (generate/rotate/revoke, new secret shown once),
    AuditLog (filters + pagination), Health (PG + Redis status, 30s refresh)
  - Components: Button, Badge, ConfirmDialog, AppShell, RequireAuth
  - All destructive actions gated by ConfirmDialog
  - Zero dangerouslySetInnerHTML; sessionStorage only (OWASP compliant)
- src/routes/health.ts: unauthenticated GET /health — PG + Redis connectivity
- src/app.ts: health route + dashboard/dist/ served at /dashboard with SPA fallback
- 6 new health route tests; 308/308 unit tests passing

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-28 23:19:18 +00:00
SentryAgent.ai Developer
7328a61c44 feat(phase-2): workstream 5 — OPA Policy Engine
- policies/authz.rego: Rego policy with path normalisation and scope enforcement
- policies/data/scopes.json: all 13 endpoint → scope mappings
- src/middleware/opa.ts: OpaMiddleware with Wasm primary path + scopes.json fallback;
  exports createOpaMiddleware() and reloadOpaPolicy() for SIGHUP hot-reload
- All four route files: opaMiddleware wired after authMiddleware
- AuditController, OAuth2Service: manual scope checks removed (now centralised in OPA)
- src/server.ts: SIGHUP handler calls reloadOpaPolicy()
- docs/devops/environment-variables.md: POLICY_DIR documented
- 38 new tests; 302/302 passing; opa.ts coverage 98.66% statements

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-28 23:02:11 +00:00
SentryAgent.ai Developer
8cdab72fea feat: Phase 2 Workstream 4 — Java SDK (ai.sentryagent:idp-sdk)
Java 17 SDK in sdk-java/:
- AgentIdPClient composing AgentRegistryClient, CredentialClient,
  TokenClient, AuditClient — all 14 endpoints covered
- Both sync methods and CompletableFuture<T> async counterparts on each client
- Thread-safe TokenManager (synchronized) with 60s refresh buffer
- AgentIdPException (extends RuntimeException) with Code/HTTPStatus/Details
- Builder pattern for all request types; Jackson 2.17 for JSON
- Zero external HTTP dependencies — java.net.http.HttpClient (Java 11+)
- No-dep JDK HttpServer used for unit tests (no WireMock needed)
- mvn verify: 49/49 tests passed | JaCoCo coverage gate: >80% ✓

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-28 15:33:53 +00:00
SentryAgent.ai Developer
91c759f455 feat: Phase 2 Workstream 3 — Go SDK (github.com/sentryagent/idp-sdk-go)
Single-package agentidp SDK in sdk-go/:
- AgentIdPClient composing AgentRegistryClient, CredentialClient,
  TokenServiceClient, AuditClient — all 14 endpoints covered
- Goroutine-safe TokenManager (sync.Mutex) with 60s refresh buffer
- AgentIdPError implementing error interface with Code/HTTPStatus/Details
- Context-aware: all service methods take context.Context as first arg
- doRequest shared helper; token endpoints use form-encoded POST directly
- go vet: 0 warnings | staticcheck: 0 warnings
- go test ./...: 37/37 passed | coverage: 81.0% (>80% gate)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-28 15:23:02 +00:00
SentryAgent.ai Developer
c93562e685 feat(phase-2): workstream 2 — Python SDK (sentryagent-idp)
Sync (requests) and async (httpx) clients with identical API surface
to the Node.js SDK.

Delivered:
- pyproject.toml — python>=3.9, hatchling build, mypy strict config
- types.py — all 14-endpoint request/response dataclasses
- errors.py — AgentIdPError with from_api_error, from_oauth2_error, network_error
- token_manager.py — thread-safe sync TokenManager, 60s refresh buffer
- async_token_manager.py — asyncio-safe AsyncTokenManager (httpx)
- _request.py — shared sync/async request helper (DRY)
- services/agents.py — AgentRegistryClient + AsyncAgentRegistryClient (5 methods each)
- services/credentials.py — CredentialClient + AsyncCredentialClient (4 methods each)
- services/token.py — TokenClient + AsyncTokenClient (introspect + revoke)
- services/audit.py — AuditClient + AsyncAuditClient (query + get)
- client.py — AgentIdPClient + AsyncAgentIdPClient
- __init__.py — barrel exports
- README.md — installation, quick start, full API reference

QA gates:
- mypy --strict: 0 errors (12 source files)
- pytest: 57/57 passed
- Coverage: 90.83% (required >= 80%)
- All 14 endpoints covered (sync + async)
- AgentIdPError raised on all failure paths

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-28 15:11:27 +00:00
SentryAgent.ai Developer
90a4addb21 feat(phase-2): workstream 1 — HashiCorp Vault credential storage
Vault is optional — server falls back to bcrypt (Phase 1 behaviour)
when VAULT_ADDR is not set. Full coexistence: existing bcrypt credentials
continue to work until rotated.

Changes:
- src/vault/VaultClient.ts — wraps node-vault KV v2; writeSecret,
  readSecret, verifySecret (constant-time), deleteSecret
- src/db/migrations/005_add_vault_path.sql — vault_path column on credentials
- CredentialRepository — createWithVaultPath, updateVaultPath methods
- CredentialService — routes generate/rotate through Vault when configured;
  bcrypt path unchanged
- OAuth2Service — verifies via Vault when vaultPath set, bcrypt otherwise
- src/app.ts — createVaultClientFromEnv() wired into service layer
- ICredentialRow — vaultPath field added
- docs/devops/environment-variables.md — VAULT_ADDR, VAULT_TOKEN, VAULT_MOUNT
- docs/devops/vault-setup.md — dev quickstart, production config, migration guide
- tests: 33/33 unit tests pass (VaultClient + CredentialService Vault path)
- node-vault + @types/node-vault installed

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-28 15:02:33 +00:00
SentryAgent.ai Developer
7593bfe1c1 chore: Phase 2 OpenSpec scoping — proposal, design, specs, tasks
8 workstreams scoped per OpenSpec standards:
1. HashiCorp Vault integration (secret management)
2. Python SDK (sentryagent-idp)
3. Go SDK (idp-sdk-go)
4. Java SDK (ai.sentryagent:idp-sdk)
5. OPA policy engine (dynamic ABAC, hot-reload Rego)
6. Web Dashboard UI (React 18 + TypeScript)
7. Prometheus + Grafana monitoring (7 metrics, pre-built dashboard)
8. Multi-region Terraform deployment (AWS + GCP)

Status: proposed — awaiting CEO dependency approvals (A0.1–A0.5)
before any implementation begins.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-28 14:53:09 +00:00
666 changed files with 101363 additions and 1668 deletions


@@ -0,0 +1,198 @@
---
name: "Continue"
description: Capture a full project status snapshot so the next session can continue seamlessly from where this one left off
category: Workflow
tags: [workflow, session, continuity, memory, snapshot]
---
Capture the full current project status and store it in persistent memory so the next session can pick up exactly where this one left off — no context lost, no recap needed.
**Input**: No arguments required. Run `/continue` at any point when ending a session.
---
**Steps**
1. **Capture git state**
Run the following in parallel:
```bash
git status
git branch --show-current
git log --oneline -10
git diff --stat HEAD
git stash list
```
Record:
- Current branch name
- Uncommitted files (staged and unstaged), with change type (M/A/D/?)
- Last 10 commit messages (for continuity context)
- Summary of diff stats if uncommitted changes exist
- Any stashed work
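The per-file change types (M/A/D/?) are easiest to read from porcelain output rather than plain `git status`. A minimal sketch (the `--porcelain` form is an addition here, not part of the command list above; paths containing spaces would need more careful parsing):

```shell
# Sketch: derive a per-file change-type summary from scriptable porcelain output.
# Each porcelain line is "XY <path>"; we split on the first whitespace run.
git status --porcelain | while read -r status file; do
  printf '%s: %s\n' "$status" "$file"
done
```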
2. **Capture OpenSpec change state**
Run `openspec list --json` to get all active changes.
For each active (non-archived) change, run:
```bash
openspec status --change "<name>" --json
```
For each active change, also read its `tasks.md` to count:
- Total tasks
- Completed tasks (`- [x]`)
- Pending tasks (`- [ ]`)
- The text of the next pending task (to know what's up next)
Record per change:
- Change name
- Schema
- Artifact completion (which are done, which are pending)
- Task progress (X of Y complete)
- Next pending task description
- Any delta specs present (`openspec/changes/<name>/specs/`)
**If no active changes:** Note that there are no active OpenSpec changes.
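The task counts described in step 2 can be extracted with `grep`. A minimal sketch, run against a sample file standing in for `openspec/changes/<name>/tasks.md` (the sample task names are illustrative):

```shell
# Sketch: count total/completed/pending tasks and surface the next pending one.
tasks_file=$(mktemp)
cat > "$tasks_file" <<'EOF'
- [x] Add organizations table migration
- [x] Implement OrgRepository
- [ ] Wire OrgContextMiddleware
- [ ] Update OpenAPI spec
EOF
total=$(grep -c '^- \[.\]' "$tasks_file")
completed=$(grep -c '^- \[x\]' "$tasks_file")
pending=$((total - completed))
next=$(grep -m1 '^- \[ \]' "$tasks_file" | sed 's/^- \[ \] //')
echo "$completed/$total complete; next: $next"
# prints: 2/4 complete; next: Wire OrgContextMiddleware
```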
3. **Capture in-session conversation context**
Summarize what was worked on in this session based on the conversation:
- What was the user trying to accomplish?
- What was completed?
- What was left in-progress or blocked?
- Any key decisions made during this session
- Any open questions or next actions the user mentioned
Keep this factual and brief — 3–8 bullet points.
4. **Capture memory file state**
Read `MEMORY.md` from the project memory directory:
`~/.claude/projects/-home-ubuntu-vj-ai-agents-dev-sentryagent-idp/memory/MEMORY.md`
Note the existing memory entries to avoid duplication in the next step.
5. **Write session snapshot to memory**
Write a `session_snapshot.md` file to the project memory directory:
`~/.claude/projects/-home-ubuntu-vj-ai-agents-dev-sentryagent-idp/memory/session_snapshot.md`
Use this structure:
```markdown
---
name: Session Snapshot
description: Last session status — git state, OpenSpec progress, and conversation context for seamless resumption
type: project
---
**Session ended:** YYYY-MM-DD (today's date)
## Git State
**Branch:** <branch-name>
**Uncommitted changes:** <count> files (<list filenames>)
**Last commit:** <hash> <message>
<If uncommitted changes exist, list them with their status>
<If stashes exist, list them>
## OpenSpec Changes
<For each active change:>
### <change-name>
- **Schema:** <schema-name>
- **Artifacts:** <done-count>/<total-count> complete (<list incomplete artifact names>)
- **Tasks:** <done-count>/<total-count> complete
- **Next task:** <text of next pending task>
- **Delta specs:** <present / none>
<If no active changes:> No active OpenSpec changes.
## Session Work
<Bullet list of what was worked on, completed, and left in-progress>
## Next Actions
<Bullet list of concrete next steps to resume — derived from pending tasks, blockers, open questions>
```
**IMPORTANT:** Always overwrite `session_snapshot.md` — this is a rolling snapshot, not a log. Only the most recent session state matters.
6. **Update MEMORY.md index**
Read the current `MEMORY.md`. If `session_snapshot.md` is not already listed, add it:
```
- [Session Snapshot](session_snapshot.md) — Last session: YYYY-MM-DD | branch: <name> | <N> active changes | <N> uncommitted files
```
If it is already listed, update the line to reflect today's date and current state.
Write the updated `MEMORY.md`.
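The add-or-update logic of step 6 can be sketched as a grep-then-sed branch. This runs against a throwaway file; the real path and the branch/counts in the index line are illustrative stand-ins, and the `sed -i` form shown is GNU sed:

```shell
# Sketch of step 6: append the index line if absent, otherwise refresh it in place.
memory_file=$(mktemp)
printf '# Memory Index\n' > "$memory_file"
line='- [Session Snapshot](session_snapshot.md) — Last session: 2026-04-09 | branch: develop | 1 active change | 3 uncommitted files'
if grep -q 'session_snapshot.md' "$memory_file"; then
  # Already indexed: replace the existing line so only one entry remains.
  sed -i "s|^- \[Session Snapshot\].*|$line|" "$memory_file"
else
  printf '%s\n' "$line" >> "$memory_file"
fi
```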
7. **Display break summary**
Show a clean summary so the user knows the snapshot is complete:
```
## Snapshot Saved — See You Next Session
**Branch:** <branch-name>
**Uncommitted files:** <count> (<filenames>)
**Active changes:** <count>
<For each active change:>
- <change-name>: <done>/<total> tasks complete — Next: "<next task text>"
**Session context saved to memory.**
To resume: start a new session and run /continue — Claude will load the snapshot and pick up where you left off.
```
---
**Output On Success (with active changes)**
```
## Snapshot Saved — See You Next Session
**Branch:** develop
**Uncommitted files:** 3 (src/auth/token.ts, tests/auth.test.ts, README.md)
**Active changes:** 1
- add-agent-auth: 4/7 tasks complete — Next: "Implement JWT signing with RS256"
**Session context saved to memory.**
To resume: start a new session and run /continue — Claude will load the snapshot and pick up where you left off.
```
**Output On Success (clean state)**
```
## Snapshot Saved — See You Next Session
**Branch:** main
**Uncommitted files:** 0
**Active changes:** 0
**Session context saved to memory.**
To resume: start a new session and run /continue — Claude will load the snapshot and pick up where you left off.
```
---
**Guardrails**
- Always overwrite `session_snapshot.md` — do NOT append or create versioned copies
- Never include secrets, tokens, or credentials in the snapshot
- If `openspec list` fails (CLI not available), note that and skip OpenSpec capture gracefully
- If git is unavailable, note that and skip git capture gracefully
- Keep the session context summary factual — no speculation about future plans beyond what the user explicitly stated
- The MEMORY.md index line for `session_snapshot.md` must stay under 150 characters
- This command does NOT commit code, push branches, or modify any project files — it only writes to the memory directory


@@ -0,0 +1,160 @@
---
name: "OpenSpec Project Status"
description: Show a human-readable summary of all OpenSpec changes — active, archived, artifact completion, and task progress
category: Workflow
tags: [workflow, status, openspec, reporting]
---
Show the full OpenSpec project status in a clear, human-readable format. No raw JSON — just a clean picture of where the project stands.
**Input**: No arguments required. Run `/openspec-project-status` at any time.
---
**Steps**
1. **Get all changes**
Run:
```bash
openspec list --json
```
Separate results into:
- **Active changes** (not in `archive/`)
- **Archived changes** (in `archive/`)
If the command fails or no changes exist, display a friendly empty state (see Output section).
2. **For each active change, gather full status**
Run in parallel for all active changes:
```bash
openspec status --change "<name>" --json
```
Also read each change's `tasks.md` to extract:
- Total task count
- Completed tasks (`- [x]`)
- Pending tasks (`- [ ]`)
- Text of the **next pending task** (first `- [ ]` item)
Also check for delta specs at `openspec/changes/<name>/specs/` — note if present.
3. **For archived changes**
List them by archive date (newest first). No need to read full status — just show name and archive date from the folder name (`YYYY-MM-DD-<name>`).
4. **Render the human-readable status report**
Use the output format defined below.
---
**Output Format**
```
## OpenSpec Project Status
### Active Changes (<count>)
────────────────────────────────────────
<change-name>
────────────────────────────────────────
Schema: <schema-name>
Phase: <inferred from artifact state: Proposing | Designing | Ready to Implement | In Progress | Complete>
Artifacts
✓ proposal done
✓ design done
◌ tasks pending
Tasks <done>/<total> complete
████████░░░░░░░░ 50%
Next: "<text of next pending task>"
Delta Specs <present / none>
────────────────────────────────────────
<Repeat for each active change>
---
### Archived Changes (<count>)
2026-03-20 add-initial-auth
2026-03-15 setup-ci-pipeline
2026-03-10 scaffold-project
---
### Summary
Active changes: <N>
Ready to apply: <N> (all artifacts done, tasks pending)
In progress: <N> (tasks partially complete)
Complete: <N> (all tasks done, not yet archived)
Archived: <N>
```
**Phase inference rules** (from artifact + task state):
- `Proposing` — proposal artifact is not done
- `Designing` — proposal done, design not done
- `Speccing` — design done, tasks artifact not done
- `Ready to Implement` — all artifacts done, 0 tasks complete
- `In Progress` — all artifacts done, some tasks complete but not all
- `Complete` — all artifacts done, all tasks complete (not yet archived)
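The six rules above reduce to a single decision ladder, checked top to bottom. A sketch with illustrative flag/count arguments (the function name is hypothetical):

```shell
# Sketch of the phase-inference ladder.
# Usage: infer_phase <proposal-done> <design-done> <tasks-artifact-done> <done-count> <total-count>
infer_phase() {
  if [ "$1" != "yes" ]; then echo "Proposing"
  elif [ "$2" != "yes" ]; then echo "Designing"
  elif [ "$3" != "yes" ]; then echo "Speccing"
  elif [ "$4" -eq 0 ]; then echo "Ready to Implement"
  elif [ "$4" -lt "$5" ]; then echo "In Progress"
  else echo "Complete"
  fi
}
infer_phase yes yes yes 4 7   # prints: In Progress
```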
**Progress bar rules:**
- 16 chars wide: `█` per completed segment, `░` for remaining
- Show percentage after bar
- If 0 tasks: show `No tasks yet`
- If all tasks done: show `████████████████ 100% All done!`
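Rendered literally, the bar rule works out to floor-scaling the completed count onto 16 segments. A minimal sketch with example counts:

```shell
# Sketch of the 16-char progress bar: filled segments floor-scaled to done/total.
done_count=4; total=7
filled=$(( done_count * 16 / total ))   # integer division: floors 9.14 to 9
bar=""
i=0
while [ "$i" -lt 16 ]; do
  if [ "$i" -lt "$filled" ]; then bar="${bar}█"; else bar="${bar}░"; fi
  i=$(( i + 1 ))
done
echo "$bar $(( done_count * 100 / total ))%"   # prints: █████████░░░░░░░ 57%
```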
---
**Output: No active changes**
```
## OpenSpec Project Status
### Active Changes (0)
No active changes. Start one with /opsx:propose
---
### Archived Changes (<count>)
2026-03-20 add-initial-auth
...
---
### Summary
Active changes: 0
Archived: <N>
```
**Output: OpenSpec CLI unavailable**
```
## OpenSpec Project Status
OpenSpec CLI not available. Cannot read change data.
Make sure `openspec` is installed and accessible in your PATH.
```
---
**Guardrails**
- Never show raw JSON — always translate to human-readable output
- Never guess artifact or task state — always read from actual files and CLI output
- If a `tasks.md` file does not exist for a change, show `No tasks file` instead of 0/0
- Archived changes are display-only — never modify them
- Phase labels must be inferred strictly from actual artifact + task state, not assumed
- If `openspec status` fails for a specific change, show that change with `Status unavailable` and continue


@@ -0,0 +1,152 @@
---
name: "OPSX: Apply"
description: Implement tasks from an OpenSpec change (Experimental)
category: Workflow
tags: [workflow, artifacts, experimental]
---
Implement tasks from an OpenSpec change.
**Input**: Optionally specify a change name (e.g., `/opsx:apply add-auth`). If omitted, check whether it can be inferred from conversation context. If the reference is vague or ambiguous, you MUST prompt the user with the available changes.
**Steps**
1. **Select the change**
If a name is provided, use it. Otherwise:
- Infer from conversation context if the user mentioned a change
- Auto-select if only one active change exists
- If ambiguous, run `openspec list --json` to get available changes and use the **AskUserQuestion tool** to let the user select
Always announce: "Using change: <name>" and how to override (e.g., `/opsx:apply <other>`).
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven")
- Which artifact contains the tasks (typically "tasks" for spec-driven, check status for others)
3. **Get apply instructions**
```bash
openspec instructions apply --change "<name>" --json
```
This returns:
- Context file paths (varies by schema)
- Progress (total, complete, remaining)
- Task list with status
- Dynamic instruction based on current state
**Handle states:**
- If `state: "blocked"` (missing artifacts): show message, suggest using `/opsx:continue`
- If `state: "all_done"`: congratulate, suggest archive
- Otherwise: proceed to implementation
4. **Read context files**
Read the files listed in `contextFiles` from the apply instructions output.
The files depend on the schema being used:
- **spec-driven**: proposal, specs, design, tasks
- Other schemas: follow the contextFiles from CLI output
5. **Show current progress**
Display:
- Schema being used
- Progress: "N/M tasks complete"
- Remaining tasks overview
- Dynamic instruction from CLI
6. **Implement tasks (loop until done or blocked)**
For each pending task:
- Show which task is being worked on
- Make the code changes required
- Keep changes minimal and focused
- Mark task complete in the tasks file: `- [ ]` → `- [x]`
- Continue to next task
**Pause if:**
- Task is unclear → ask for clarification
- Implementation reveals a design issue → suggest updating artifacts
- Error or blocker encountered → report and wait for guidance
- User interrupts
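Flipping the checkbox after each task is a one-line edit on the tasks file. A sketch using GNU sed's `0,/regexp/` address form, which limits the substitution to the first matching line (BSD sed lacks this form; the sample file is illustrative):

```shell
# Sketch: mark the first pending task done in a tasks.md.
tasks_file=$(mktemp)
printf -- '- [x] Wire routes\n- [ ] Add tests\n- [ ] Update docs\n' > "$tasks_file"
# The empty s// pattern reuses the address regexp, so only '- [ ]' on that line changes.
sed -i '0,/^- \[ \]/s//- [x]/' "$tasks_file"
cat "$tasks_file"
```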
7. **On completion or pause, show status**
Display:
- Tasks completed this session
- Overall progress: "N/M tasks complete"
- If all done: suggest archive
- If paused: explain why and wait for guidance
**Output During Implementation**
```
## Implementing: <change-name> (schema: <schema-name>)
Working on task 3/7: <task description>
[...implementation happening...]
✓ Task complete
Working on task 4/7: <task description>
[...implementation happening...]
✓ Task complete
```
**Output On Completion**
```
## Implementation Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 7/7 tasks complete ✓
### Completed This Session
- [x] Task 1
- [x] Task 2
...
All tasks complete! You can archive this change with `/opsx:archive`.
```
**Output On Pause (Issue Encountered)**
```
## Implementation Paused
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 4/7 tasks complete
### Issue Encountered
<description of the issue>
**Options:**
1. <option 1>
2. <option 2>
3. Other approach
What would you like to do?
```
**Guardrails**
- Keep going through tasks until done or blocked
- Always read context files before starting (from the apply instructions output)
- If task is ambiguous, pause and ask before implementing
- If implementation reveals issues, pause and suggest artifact updates
- Keep code changes minimal and scoped to each task
- Update task checkbox immediately after completing each task
- Pause on errors, blockers, or unclear requirements - don't guess
- Use contextFiles from CLI output, don't assume specific file names
**Fluid Workflow Integration**
This skill supports the "actions on a change" model:
- **Can be invoked anytime**: Before all artifacts are done (if tasks exist), after partial implementation, interleaved with other actions
- **Allows artifact updates**: If implementation reveals design issues, suggest updating artifacts - not phase-locked, work fluidly


@@ -0,0 +1,157 @@
---
name: "OPSX: Archive"
description: Archive a completed change in the experimental workflow
category: Workflow
tags: [workflow, archive, experimental]
---
Archive a completed change in the experimental workflow.
**Input**: Optionally specify a change name after `/opsx:archive` (e.g., `/opsx:archive add-auth`). If omitted, check whether it can be inferred from conversation context. If the reference is vague or ambiguous, you MUST prompt the user with the available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show only active changes (not already archived).
Include the schema used for each change if available.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check artifact completion status**
Run `openspec status --change "<name>" --json` to check artifact completion.
Parse the JSON to understand:
- `schemaName`: The workflow being used
- `artifacts`: List of artifacts with their status (`done` or other)
**If any artifacts are not `done`:**
- Display warning listing incomplete artifacts
- Prompt user for confirmation to continue
- Proceed if user confirms
3. **Check task completion status**
Read the tasks file (typically `tasks.md`) to check for incomplete tasks.
Count tasks marked with `- [ ]` (incomplete) vs `- [x]` (complete).
**If incomplete tasks found:**
- Display warning showing count of incomplete tasks
- Prompt user for confirmation to continue
- Proceed if user confirms
**If no tasks file exists:** Proceed without task-related warning.
4. **Assess delta spec sync state**
Check for delta specs at `openspec/changes/<name>/specs/`. If none exist, proceed without sync prompt.
**If delta specs exist:**
- Compare each delta spec with its corresponding main spec at `openspec/specs/<capability>/spec.md`
- Determine what changes would be applied (adds, modifications, removals, renames)
- Show a combined summary before prompting
**Prompt options:**
- If changes needed: "Sync now (recommended)", "Archive without syncing"
- If already synced: "Archive now", "Sync anyway", "Cancel"
If user chooses sync, use Task tool (subagent_type: "general-purpose", prompt: "Use Skill tool to invoke openspec-sync-specs for change '<name>'. Delta spec analysis: <include the analyzed delta spec summary>"). Proceed to archive regardless of choice.
5. **Perform the archive**
Create the archive directory if it doesn't exist:
```bash
mkdir -p openspec/changes/archive
```
Generate target name using current date: `YYYY-MM-DD-<change-name>`
**Check if target already exists:**
- If yes: Fail with error, suggest renaming existing archive or using different date
- If no: Move the change directory to archive
```bash
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
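Steps 5's collision check and move can be sketched end to end. This runs against a throwaway tree (real paths live under `openspec/`; the change name is illustrative):

```shell
# Sketch of step 5: date-stamped archive move with a collision check.
root=$(mktemp -d)
name="add-auth"
mkdir -p "$root/openspec/changes/$name" "$root/openspec/changes/archive"
target="$root/openspec/changes/archive/$(date +%F)-$name"
if [ -e "$target" ]; then
  # Target exists: fail rather than clobber a prior archive.
  echo "Archive failed: $target already exists" >&2
else
  mv "$root/openspec/changes/$name" "$target"
fi
```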
6. **Display summary**
Show archive completion summary including:
- Change name
- Schema that was used
- Archive location
- Spec sync status (synced / sync skipped / no delta specs)
- Note about any warnings (incomplete artifacts/tasks)
**Output On Success**
```
## Archive Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** ✓ Synced to main specs
All artifacts complete. All tasks complete.
```
**Output On Success (No Delta Specs)**
```
## Archive Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** No delta specs
All artifacts complete. All tasks complete.
```
**Output On Success With Warnings**
```
## Archive Complete (with warnings)
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** Sync skipped (user chose to skip)
**Warnings:**
- Archived with 2 incomplete artifacts
- Archived with 3 incomplete tasks
- Delta spec sync was skipped (user chose to skip)
Review the archive if this was not intentional.
```
**Output On Error (Archive Exists)**
```
## Archive Failed
**Change:** <change-name>
**Target:** openspec/changes/archive/YYYY-MM-DD-<name>/
Target archive directory already exists.
**Options:**
1. Rename the existing archive
2. Delete the existing archive if it's a duplicate
3. Wait until a different date to archive
```
**Guardrails**
- Always prompt for change selection if not provided
- Use artifact graph (openspec status --json) for completion checking
- Don't block archive on warnings - just inform and confirm
- Preserve .openspec.yaml when moving to archive (it moves with the directory)
- Show clear summary of what happened
- If sync is requested, use the Skill tool to invoke `openspec-sync-specs` (agent-driven)
- If delta specs exist, always run the sync assessment and show the combined summary before prompting


@@ -0,0 +1,173 @@
---
name: "OPSX: Explore"
description: "Enter explore mode - think through ideas, investigate problems, clarify requirements"
category: Workflow
tags: [workflow, explore, experimental, thinking]
---
Enter explore mode. Think deeply. Visualize freely. Follow the conversation wherever it goes.
**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first and create a change proposal. You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing.
**This is a stance, not a workflow.** There are no fixed steps, no required sequence, no mandatory outputs. You're a thinking partner helping the user explore.
**Input**: The argument after `/opsx:explore` is whatever the user wants to think about. Could be:
- A vague idea: "real-time collaboration"
- A specific problem: "the auth system is getting unwieldy"
- A change name: "add-dark-mode" (to explore in context of that change)
- A comparison: "postgres vs sqlite for this"
- Nothing (just enter explore mode)
---
## The Stance
- **Curious, not prescriptive** - Ask questions that emerge naturally, don't follow a script
- **Open threads, not interrogations** - Surface multiple interesting directions and let the user follow what resonates. Don't funnel them through a single path of questions.
- **Visual** - Use ASCII diagrams liberally when they'd help clarify thinking
- **Adaptive** - Follow interesting threads, pivot when new information emerges
- **Patient** - Don't rush to conclusions, let the shape of the problem emerge
- **Grounded** - Explore the actual codebase when relevant, don't just theorize
---
## What You Might Do
Depending on what the user brings, you might:
**Explore the problem space**
- Ask clarifying questions that emerge from what they said
- Challenge assumptions
- Reframe the problem
- Find analogies
**Investigate the codebase**
- Map existing architecture relevant to the discussion
- Find integration points
- Identify patterns already in use
- Surface hidden complexity
**Compare options**
- Brainstorm multiple approaches
- Build comparison tables
- Sketch tradeoffs
- Recommend a path (if asked)
**Visualize**
```
┌─────────────────────────────────────────┐
│ Use ASCII diagrams liberally │
├─────────────────────────────────────────┤
│ │
│ ┌────────┐ ┌────────┐ │
│ │ State │────────▶│ State │ │
│ │ A │ │ B │ │
│ └────────┘ └────────┘ │
│ │
│ System diagrams, state machines, │
│ data flows, architecture sketches, │
│ dependency graphs, comparison tables │
│ │
└─────────────────────────────────────────┘
```
**Surface risks and unknowns**
- Identify what could go wrong
- Find gaps in understanding
- Suggest spikes or investigations
---
## OpenSpec Awareness
You have full context of the OpenSpec system. Use it naturally, don't force it.
### Check for context
At the start, quickly check what exists:
```bash
openspec list --json
```
This tells you:
- If there are active changes
- Their names, schemas, and status
- What the user might be working on
If the user mentioned a specific change name, read its artifacts for context.
### When no change exists
Think freely. When insights crystallize, you might offer:
- "This feels solid enough to start a change. Want me to create a proposal?"
- Or keep exploring - no pressure to formalize
### When a change exists
If the user mentions a change or you detect one is relevant:
1. **Read existing artifacts for context**
- `openspec/changes/<name>/proposal.md`
- `openspec/changes/<name>/design.md`
- `openspec/changes/<name>/tasks.md`
- etc.
2. **Reference them naturally in conversation**
- "Your design mentions using Redis, but we just realized SQLite fits better..."
- "The proposal scopes this to premium users, but we're now thinking everyone..."
3. **Offer to capture when decisions are made**
| Insight Type | Where to Capture |
|--------------|------------------|
| New requirement discovered | `specs/<capability>/spec.md` |
| Requirement changed | `specs/<capability>/spec.md` |
| Design decision made | `design.md` |
| Scope changed | `proposal.md` |
| New work identified | `tasks.md` |
| Assumption invalidated | Relevant artifact |
Example offers:
- "That's a design decision. Capture it in design.md?"
- "This is a new requirement. Add it to specs?"
- "This changes scope. Update the proposal?"
4. **The user decides** - Offer and move on. Don't pressure. Don't auto-capture.
---
## What You Don't Have To Do
- Follow a script
- Ask the same questions every time
- Produce a specific artifact
- Reach a conclusion
- Stay on topic if a tangent is valuable
- Be brief (this is thinking time)
---
## Ending Discovery
There's no required ending. Discovery might:
- **Flow into a proposal**: "Ready to start? I can create a change proposal."
- **Result in artifact updates**: "Updated design.md with these decisions"
- **Just provide clarity**: User has what they need, moves on
- **Continue later**: "We can pick this up anytime"
When things crystallize, you might offer a summary - but it's optional. Sometimes the thinking IS the value.
---
## Guardrails
- **Don't implement** - Never write code or implement features. Creating OpenSpec artifacts is fine, writing application code is not.
- **Don't fake understanding** - If something is unclear, dig deeper
- **Don't rush** - Discovery is thinking time, not task time
- **Don't force structure** - Let patterns emerge naturally
- **Don't auto-capture** - Offer to save insights, don't just do it
- **Do visualize** - A good diagram is worth many paragraphs
- **Do explore the codebase** - Ground discussions in reality
- **Do question assumptions** - Including the user's and your own

View File

@@ -0,0 +1,106 @@
---
name: "OPSX: Propose"
description: Propose a new change - create it and generate all artifacts in one step
category: Workflow
tags: [workflow, artifacts, experimental]
---
Propose a new change - create the change and generate all artifacts in one step.
I'll create a change with artifacts:
- proposal.md (what & why)
- design.md (how)
- tasks.md (implementation steps)
When ready to implement, run /opsx:apply
---
**Input**: The argument after `/opsx:propose` is the change name (kebab-case), OR a description of what the user wants to build.
**Steps**
1. **If no input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
2. **Create the change directory**
```bash
openspec new change "<name>"
```
This creates a scaffolded change at `openspec/changes/<name>/` with `.openspec.yaml`.
3. **Get the artifact build order**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to get:
- `applyRequires`: array of artifact IDs needed before implementation (e.g., `["tasks"]`)
- `artifacts`: list of all artifacts with their status and dependencies
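Parsing these two fields can be sketched with `jq`. The JSON payload below is an illustrative shape inferred from the field names above — the real `openspec status` output may differ:

```bash
# Sample payload (assumed shape, not real CLI output)
status='{"schemaName":"spec-driven","applyRequires":["tasks"],"artifacts":[{"id":"proposal","status":"done"},{"id":"tasks","status":"pending"}]}'

# Artifact IDs required before implementation
echo "$status" | jq -r '.applyRequires[]'

# Each artifact with its current status
echo "$status" | jq -r '.artifacts[] | "\(.id): \(.status)"'
```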
4. **Create artifacts in sequence until apply-ready**
Use the **TodoWrite tool** to track progress through the artifacts.
Loop through artifacts in dependency order (artifacts with no pending dependencies first):
a. **For each artifact that is `ready` (dependencies satisfied)**:
- Get instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- The instructions JSON includes:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance for this artifact type
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- Read any completed dependency files for context
- Create the artifact file using `template` as the structure
- Apply `context` and `rules` as constraints - but do NOT copy them into the file
- Show brief progress: "Created <artifact-id>"
b. **Continue until all `applyRequires` artifacts are complete**
- After creating each artifact, re-run `openspec status --change "<name>" --json`
- Check if every artifact ID in `applyRequires` has `status: "done"` in the artifacts array
- Stop when all `applyRequires` artifacts are done
c. **If an artifact requires user input** (unclear context):
- Use **AskUserQuestion tool** to clarify
- Then continue with creation
5. **Show final status**
```bash
openspec status --change "<name>"
```
**Output**
After completing all artifacts, summarize:
- Change name and location
- List of artifacts created with brief descriptions
- What's ready: "All artifacts created! Ready for implementation."
- Prompt: "Run `/opsx:apply` to start implementing."
**Artifact Creation Guidelines**
- Follow the `instruction` field from `openspec instructions` for each artifact type
- The schema defines what each artifact should contain - follow it
- Read dependency artifacts for context before creating new ones
- Use `template` as the structure for your output file - fill in its sections
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
- Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
- These guide what you write, but should never appear in the output
**Guardrails**
- Create ALL artifacts needed for implementation (as defined by schema's `apply.requires`)
- Always read dependency artifacts before creating a new one
- If context is critically unclear, ask the user - but prefer making reasonable decisions to keep momentum
- If a change with that name already exists, ask if user wants to continue it or create a new one
- Verify each artifact file exists after writing before proceeding to next

View File

@@ -0,0 +1,183 @@
---
name: continue
description: Capture a full project status snapshot so the next session can continue seamlessly from where this one left off. Use when the user is ending a session and wants to preserve context for resumption.
license: MIT
compatibility: Requires git. OpenSpec CLI optional (gracefully skipped if unavailable).
metadata:
  author: sentryagent
  version: "1.0"
  generatedBy: "1.2.0"
---
Capture the full current project status and store it in persistent memory so the next session can pick up exactly where this one left off — no context lost, no recap needed.
**Input**: No arguments required. Invoke at any point when ending a session.
**Steps**
1. **Capture git state**
Run the following in parallel:
```bash
git status
git branch --show-current
git log --oneline -10
git diff --stat HEAD
git stash list
```
Record:
- Current branch name
- Uncommitted files (staged and unstaged), with change type (M/A/D/?)
- Last 10 commit messages (for continuity context)
- Summary of diff stats if uncommitted changes exist
- Any stashed work
2. **Capture OpenSpec change state**
Run `openspec list --json` to get all active changes.
For each active (non-archived) change, run:
```bash
openspec status --change "<name>" --json
```
For each active change, also read its `tasks.md` to count:
- Total tasks
- Completed tasks (`- [x]`)
- Pending tasks (`- [ ]`)
- The text of the next pending task (to know what's up next)
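The counting above is a few `grep` calls. A self-contained sketch using an inline sample (the real file lives at `openspec/changes/<name>/tasks.md`):

```bash
# Create a throwaway sample tasks.md for illustration
tasks=$(mktemp)
cat > "$tasks" <<'EOF'
- [x] Scaffold change
- [x] Write proposal
- [ ] Implement JWT signing
- [ ] Add tests
EOF

done_count=$(grep -c '^- \[x\]' "$tasks")             # completed tasks
todo_count=$(grep -c '^- \[ \]' "$tasks")             # pending tasks
next_task=$(grep -m1 '^- \[ \]' "$tasks" | sed 's/^- \[ \] //')  # first pending

echo "$done_count done, $todo_count pending; next: $next_task"
```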
Record per change:
- Change name
- Schema
- Artifact completion (which are done, which are pending)
- Task progress (X of Y complete)
- Next pending task description
- Any delta specs present (`openspec/changes/<name>/specs/`)
**If `openspec` CLI is unavailable or fails:** Note it and skip this section gracefully.
**If no active changes:** Note that there are no active OpenSpec changes.
3. **Capture in-session conversation context**
Summarize what was worked on in this session based on the conversation:
- What was the user trying to accomplish?
- What was completed?
- What was left in-progress or blocked?
- Any key decisions made during this session
- Any open questions or next actions the user mentioned
Keep this factual and brief — 3–8 bullet points.

4. **Capture memory file state**
Read `MEMORY.md` from the project memory directory:
`~/.claude/projects/-home-ubuntu-vj-ai-agents-dev-sentryagent-idp/memory/MEMORY.md`
Note the existing memory entries to avoid duplication in the next step.
5. **Write session snapshot to memory**
Write a `session_snapshot.md` file to the project memory directory:
`~/.claude/projects/-home-ubuntu-vj-ai-agents-dev-sentryagent-idp/memory/session_snapshot.md`
Use this structure:
```markdown
---
name: Session Snapshot
description: Last session status — git state, OpenSpec progress, and conversation context for seamless resumption
type: project
---
**Session ended:** YYYY-MM-DD (today's date)
## Git State
**Branch:** <branch-name>
**Uncommitted changes:** <count> files (<list filenames>)
**Last commit:** <hash> <message>
<If uncommitted changes exist, list them with their status>
<If stashes exist, list them>
## OpenSpec Changes
<For each active change:>
### <change-name>
- **Schema:** <schema-name>
- **Artifacts:** <done-count>/<total-count> complete (<list incomplete artifact names>)
- **Tasks:** <done-count>/<total-count> complete
- **Next task:** <text of next pending task>
- **Delta specs:** <present / none>
<If no active changes:> No active OpenSpec changes.
## Session Work
<Bullet list of what was worked on, completed, and left in-progress>
## Next Actions
<Bullet list of concrete next steps to resume — derived from pending tasks, blockers, open questions>
```
**IMPORTANT:** Always overwrite `session_snapshot.md` — this is a rolling snapshot, not a log. Only the most recent session state matters.
6. **Update MEMORY.md index**
Read the current `MEMORY.md`. If `session_snapshot.md` is not already listed, add it:
```
- [Session Snapshot](session_snapshot.md) — Last session: YYYY-MM-DD | branch: <name> | <N> active changes | <N> uncommitted files
```
If it is already listed, update the line to reflect today's date and current state.
Write the updated `MEMORY.md`.
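The update-or-append logic can be sketched like this (the remove-then-append approach and sample wording are illustrative, not the required method — the line content follows the format above):

```bash
# Throwaway MEMORY.md standing in for the real memory index file
mem=$(mktemp)
printf '%s\n' '- [Project Notes](notes.md) — misc' > "$mem"

line='- [Session Snapshot](session_snapshot.md) — Last session: 2026-04-09 | branch: develop | 1 active change | 3 uncommitted files'

# If an index line already exists, drop it so the fresh one replaces it
if grep -q 'session_snapshot.md' "$mem"; then
  grep -v 'session_snapshot.md' "$mem" > "$mem.tmp" && mv "$mem.tmp" "$mem"
fi
printf '%s\n' "$line" >> "$mem"
```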
7. **Display break summary**
Show a clean summary so the user knows the snapshot is complete:
```
## Snapshot Saved — See You Next Session
**Branch:** <branch-name>
**Uncommitted files:** <count> (<filenames>)
**Active changes:** <count>
<For each active change:>
- <change-name>: <done>/<total> tasks complete — Next: "<next task text>"
**Session context saved to memory.**
To resume: start a new session and run /continue — Claude will load the snapshot and pick up where you left off.
```
**Output On Success**
```
## Snapshot Saved — See You Next Session
**Branch:** develop
**Uncommitted files:** 3 (src/auth/token.ts, tests/auth.test.ts, README.md)
**Active changes:** 1
- add-agent-auth: 4/7 tasks complete — Next: "Implement JWT signing with RS256"
**Session context saved to memory.**
To resume: start a new session and run /continue — Claude will load the snapshot and pick up where you left off.
```
**Guardrails**
- Always overwrite `session_snapshot.md` — do NOT append or create versioned copies
- Never include secrets, tokens, or credentials in the snapshot
- If `openspec list` fails (CLI not available), note that and skip OpenSpec capture gracefully
- If git is unavailable, note that and skip git capture gracefully
- Keep the session context summary factual — no speculation beyond what the user explicitly stated
- The MEMORY.md index line for `session_snapshot.md` must stay under 150 characters
- This skill does NOT commit code, push branches, or modify any project files — it only writes to the memory directory
- Session date must use the actual current date (not a placeholder)

View File

@@ -0,0 +1,156 @@
---
name: openspec-apply-change
description: Implement tasks from an OpenSpec change. Use when the user wants to start implementing, continue implementation, or work through tasks.
license: MIT
compatibility: Requires openspec CLI.
metadata:
  author: openspec
  version: "1.0"
  generatedBy: "1.2.0"
---
Implement tasks from an OpenSpec change.
**Input**: Optionally specify a change name. If omitted, check whether it can be inferred from conversation context. If vague or ambiguous, you MUST prompt with the available changes.
**Steps**
1. **Select the change**
If a name is provided, use it. Otherwise:
- Infer from conversation context if the user mentioned a change
- Auto-select if only one active change exists
- If ambiguous, run `openspec list --json` to get available changes and use the **AskUserQuestion tool** to let the user select
Always announce: "Using change: <name>" and how to override (e.g., `/opsx:apply <other>`).
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven")
- Which artifact contains the tasks (typically "tasks" for spec-driven, check status for others)
3. **Get apply instructions**
```bash
openspec instructions apply --change "<name>" --json
```
This returns:
- Context file paths (varies by schema - could be proposal/specs/design/tasks or spec/tests/implementation/docs)
- Progress (total, complete, remaining)
- Task list with status
- Dynamic instruction based on current state
**Handle states:**
- If `state: "blocked"` (missing artifacts): show message, suggest using openspec-continue-change
- If `state: "all_done"`: congratulate, suggest archive
- Otherwise: proceed to implementation
4. **Read context files**
Read the files listed in `contextFiles` from the apply instructions output.
The files depend on the schema being used:
- **spec-driven**: proposal, specs, design, tasks
- Other schemas: follow the contextFiles from CLI output
5. **Show current progress**
Display:
- Schema being used
- Progress: "N/M tasks complete"
- Remaining tasks overview
- Dynamic instruction from CLI
6. **Implement tasks (loop until done or blocked)**
For each pending task:
- Show which task is being worked on
- Make the code changes required
- Keep changes minimal and focused
- Mark task complete in the tasks file: `- [ ]` → `- [x]`
- Continue to next task
**Pause if:**
- Task is unclear → ask for clarification
- Implementation reveals a design issue → suggest updating artifacts
- Error or blocker encountered → report and wait for guidance
- User interrupts
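The checkbox update from the loop above can be a single `sed` substitution — a sketch against a throwaway file (the task text and file are illustrative):

```bash
# Sample tasks file for illustration
tasks=$(mktemp)
printf '%s\n' '- [x] Write proposal' '- [ ] Implement JWT signing' > "$tasks"

# Tick the checkbox for a specific completed task
task='Implement JWT signing'
sed -i "s/^- \[ \] $task/- [x] $task/" "$tasks"

grep -c '^- \[x\]' "$tasks"
```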
7. **On completion or pause, show status**
Display:
- Tasks completed this session
- Overall progress: "N/M tasks complete"
- If all done: suggest archive
- If paused: explain why and wait for guidance
**Output During Implementation**
```
## Implementing: <change-name> (schema: <schema-name>)
Working on task 3/7: <task description>
[...implementation happening...]
✓ Task complete
Working on task 4/7: <task description>
[...implementation happening...]
✓ Task complete
```
**Output On Completion**
```
## Implementation Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 7/7 tasks complete ✓
### Completed This Session
- [x] Task 1
- [x] Task 2
...
All tasks complete! Ready to archive this change.
```
**Output On Pause (Issue Encountered)**
```
## Implementation Paused
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 4/7 tasks complete
### Issue Encountered
<description of the issue>
**Options:**
1. <option 1>
2. <option 2>
3. Other approach
What would you like to do?
```
**Guardrails**
- Keep going through tasks until done or blocked
- Always read context files before starting (from the apply instructions output)
- If task is ambiguous, pause and ask before implementing
- If implementation reveals issues, pause and suggest artifact updates
- Keep code changes minimal and scoped to each task
- Update task checkbox immediately after completing each task
- Pause on errors, blockers, or unclear requirements - don't guess
- Use contextFiles from CLI output, don't assume specific file names
**Fluid Workflow Integration**
This skill supports the "actions on a change" model:
- **Can be invoked anytime**: Before all artifacts are done (if tasks exist), after partial implementation, interleaved with other actions
- **Allows artifact updates**: If implementation reveals design issues, suggest updating artifacts - not phase-locked, work fluidly

View File

@@ -0,0 +1,114 @@
---
name: openspec-archive-change
description: Archive a completed change in the experimental workflow. Use when the user wants to finalize and archive a change after implementation is complete.
license: MIT
compatibility: Requires openspec CLI.
metadata:
  author: openspec
  version: "1.0"
  generatedBy: "1.2.0"
---
Archive a completed change in the experimental workflow.
**Input**: Optionally specify a change name. If omitted, check whether it can be inferred from conversation context. If vague or ambiguous, you MUST prompt with the available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show only active changes (not already archived).
Include the schema used for each change if available.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check artifact completion status**
Run `openspec status --change "<name>" --json` to check artifact completion.
Parse the JSON to understand:
- `schemaName`: The workflow being used
- `artifacts`: List of artifacts with their status (`done` or other)
**If any artifacts are not `done`:**
- Display warning listing incomplete artifacts
- Use **AskUserQuestion tool** to confirm user wants to proceed
- Proceed if user confirms
3. **Check task completion status**
Read the tasks file (typically `tasks.md`) to check for incomplete tasks.
Count tasks marked with `- [ ]` (incomplete) vs `- [x]` (complete).
**If incomplete tasks found:**
- Display warning showing count of incomplete tasks
- Use **AskUserQuestion tool** to confirm user wants to proceed
- Proceed if user confirms
**If no tasks file exists:** Proceed without task-related warning.
4. **Assess delta spec sync state**
Check for delta specs at `openspec/changes/<name>/specs/`. If none exist, proceed without sync prompt.
**If delta specs exist:**
- Compare each delta spec with its corresponding main spec at `openspec/specs/<capability>/spec.md`
- Determine what changes would be applied (adds, modifications, removals, renames)
- Show a combined summary before prompting
**Prompt options:**
- If changes needed: "Sync now (recommended)", "Archive without syncing"
- If already synced: "Archive now", "Sync anyway", "Cancel"
If user chooses sync, use Task tool (subagent_type: "general-purpose", prompt: "Use Skill tool to invoke openspec-sync-specs for change '<name>'. Delta spec analysis: <include the analyzed delta spec summary>"). Then proceed to archive, whether the user chose to sync or to skip (but not if they chose Cancel).
5. **Perform the archive**
Create the archive directory if it doesn't exist:
```bash
mkdir -p openspec/changes/archive
```
Generate target name using current date: `YYYY-MM-DD-<change-name>`
**Check if target already exists:**
- If yes: Fail with error, suggest renaming existing archive or using different date
- If no: Move the change directory to archive
```bash
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
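The existence check and move from step 5 fit together as follows — sketched here against a temp tree so it is self-contained (real paths are under `openspec/changes/`):

```bash
# Build a temp tree standing in for the real repo layout
root=$(mktemp -d)
name="add-agent-auth"
mkdir -p "$root/openspec/changes/$name" "$root/openspec/changes/archive"

# Target name: YYYY-MM-DD-<change-name>
target="$root/openspec/changes/archive/$(date +%F)-$name"

if [ -e "$target" ]; then
  # Fail rather than overwrite; suggest renaming or a different date
  echo "Archive target already exists: $target" >&2
else
  mv "$root/openspec/changes/$name" "$target"
fi
```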
6. **Display summary**
Show archive completion summary including:
- Change name
- Schema that was used
- Archive location
- Whether specs were synced (if applicable)
- Note about any warnings (incomplete artifacts/tasks)
**Output On Success**
```
## Archive Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** ✓ Synced to main specs (or "No delta specs" or "Sync skipped")
All artifacts complete. All tasks complete.
```
**Guardrails**
- Always prompt for change selection if not provided
- Use artifact graph (openspec status --json) for completion checking
- Don't block archive on warnings - just inform and confirm
- Preserve .openspec.yaml when moving to archive (it moves with the directory)
- Show clear summary of what happened
- If sync is requested, use openspec-sync-specs approach (agent-driven)
- If delta specs exist, always run the sync assessment and show the combined summary before prompting

View File

@@ -0,0 +1,288 @@
---
name: openspec-explore
description: Enter explore mode - a thinking partner for exploring ideas, investigating problems, and clarifying requirements. Use when the user wants to think through something before or during a change.
license: MIT
compatibility: Requires openspec CLI.
metadata:
  author: openspec
  version: "1.0"
  generatedBy: "1.2.0"
---
Enter explore mode. Think deeply. Visualize freely. Follow the conversation wherever it goes.
**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first and create a change proposal. You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing.
**This is a stance, not a workflow.** There are no fixed steps, no required sequence, no mandatory outputs. You're a thinking partner helping the user explore.
---
## The Stance
- **Curious, not prescriptive** - Ask questions that emerge naturally, don't follow a script
- **Open threads, not interrogations** - Surface multiple interesting directions and let the user follow what resonates. Don't funnel them through a single path of questions.
- **Visual** - Use ASCII diagrams liberally when they'd help clarify thinking
- **Adaptive** - Follow interesting threads, pivot when new information emerges
- **Patient** - Don't rush to conclusions, let the shape of the problem emerge
- **Grounded** - Explore the actual codebase when relevant, don't just theorize
---
## What You Might Do
Depending on what the user brings, you might:
**Explore the problem space**
- Ask clarifying questions that emerge from what they said
- Challenge assumptions
- Reframe the problem
- Find analogies
**Investigate the codebase**
- Map existing architecture relevant to the discussion
- Find integration points
- Identify patterns already in use
- Surface hidden complexity
**Compare options**
- Brainstorm multiple approaches
- Build comparison tables
- Sketch tradeoffs
- Recommend a path (if asked)
**Visualize**
```
┌─────────────────────────────────────────┐
│ Use ASCII diagrams liberally │
├─────────────────────────────────────────┤
│ │
│ ┌────────┐ ┌────────┐ │
│ │ State │────────▶│ State │ │
│ │ A │ │ B │ │
│ └────────┘ └────────┘ │
│ │
│ System diagrams, state machines, │
│ data flows, architecture sketches, │
│ dependency graphs, comparison tables │
│ │
└─────────────────────────────────────────┘
```
**Surface risks and unknowns**
- Identify what could go wrong
- Find gaps in understanding
- Suggest spikes or investigations
---
## OpenSpec Awareness
You have full context of the OpenSpec system. Use it naturally, don't force it.
### Check for context
At the start, quickly check what exists:
```bash
openspec list --json
```
This tells you:
- If there are active changes
- Their names, schemas, and status
- What the user might be working on
### When no change exists
Think freely. When insights crystallize, you might offer:
- "This feels solid enough to start a change. Want me to create a proposal?"
- Or keep exploring - no pressure to formalize
### When a change exists
If the user mentions a change or you detect one is relevant:
1. **Read existing artifacts for context**
- `openspec/changes/<name>/proposal.md`
- `openspec/changes/<name>/design.md`
- `openspec/changes/<name>/tasks.md`
- etc.
2. **Reference them naturally in conversation**
- "Your design mentions using Redis, but we just realized SQLite fits better..."
- "The proposal scopes this to premium users, but we're now thinking everyone..."
3. **Offer to capture when decisions are made**
| Insight Type | Where to Capture |
|--------------|------------------|
| New requirement discovered | `specs/<capability>/spec.md` |
| Requirement changed | `specs/<capability>/spec.md` |
| Design decision made | `design.md` |
| Scope changed | `proposal.md` |
| New work identified | `tasks.md` |
| Assumption invalidated | Relevant artifact |
Example offers:
- "That's a design decision. Capture it in design.md?"
- "This is a new requirement. Add it to specs?"
- "This changes scope. Update the proposal?"
4. **The user decides** - Offer and move on. Don't pressure. Don't auto-capture.
---
## What You Don't Have To Do
- Follow a script
- Ask the same questions every time
- Produce a specific artifact
- Reach a conclusion
- Stay on topic if a tangent is valuable
- Be brief (this is thinking time)
---
## Handling Different Entry Points
**User brings a vague idea:**
```
User: I'm thinking about adding real-time collaboration
You: Real-time collab is a big space. Let me think about this...
COLLABORATION SPECTRUM
════════════════════════════════════════════
Awareness Coordination Sync
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│Presence│ │Cursors │ │ CRDT │
│ "3 │ │ Multi │ │Conflict│
│online" │ │ select │ │ free │
└────────┘ └────────┘ └────────┘
│ │ │
trivial moderate complex
Where's your head at?
```
**User brings a specific problem:**
```
User: The auth system is a mess
You: [reads codebase]
┌─────────────────────────────────────────────┐
│ CURRENT AUTH FLOW │
└─────────────────────────────────────────────┘
┌─────────────┼─────────────┐
▼ ▼ ▼
┌─────────┐ ┌─────────┐ ┌─────────┐
│ Google │ │ GitHub │ │ Email │
│ OAuth │ │ OAuth │ │ Magic │
└────┬────┘ └────┬────┘ └────┬────┘
│ │ │
└─────────────┼─────────────┘
┌───────────┐
│ Session │
└─────┬─────┘
┌───────────┐
│ Perms │
└───────────┘
I see three tangles. Which one's burning?
```
**User is stuck mid-implementation:**
```
User: /opsx:explore add-auth-system
The OAuth integration is more complex than expected
You: [reads change artifacts]
You're on task 4: "Implement OAuth flow"
Let me trace what's involved...
[draws diagram, explores options, suggests paths]
Want to update the design to reflect this?
Or add a spike task to investigate?
```
**User wants to compare options:**
```
User: Should we use Postgres or SQLite?
You: Generic answer is boring. What's the context?
User: A CLI tool that tracks local dev environments
You: That changes everything.
┌─────────────────────────────────────────────────┐
│ CLI TOOL DATA STORAGE │
└─────────────────────────────────────────────────┘
Key constraints:
• No daemon running
• Must work offline
• Single user
SQLite Postgres
Deployment embedded ✓ needs server ✗
Offline yes ✓ no ✗
Single file yes ✓ no ✗
SQLite. Not even close.
Unless... is there a sync component?
```
---
## Ending Discovery
There's no required ending. Discovery might:
- **Flow into a proposal**: "Ready to start? I can create a change proposal."
- **Result in artifact updates**: "Updated design.md with these decisions"
- **Just provide clarity**: User has what they need, moves on
- **Continue later**: "We can pick this up anytime"
When it feels like things are crystallizing, you might summarize:
```
## What We Figured Out
**The problem**: [crystallized understanding]
**The approach**: [if one emerged]
**Open questions**: [if any remain]
**Next steps** (if ready):
- Create a change proposal
- Keep exploring: just keep talking
```
But this summary is optional. Sometimes the thinking IS the value.
---
## Guardrails
- **Don't implement** - Never write code or implement features. Creating OpenSpec artifacts is fine, writing application code is not.
- **Don't fake understanding** - If something is unclear, dig deeper
- **Don't rush** - Discovery is thinking time, not task time
- **Don't force structure** - Let patterns emerge naturally
- **Don't auto-capture** - Offer to save insights, don't just do it
- **Do visualize** - A good diagram is worth many paragraphs
- **Do explore the codebase** - Ground discussions in reality
- **Do question assumptions** - Including the user's and your own

View File

@@ -0,0 +1,155 @@
---
name: openspec-project-status
description: Show a human-readable summary of all OpenSpec changes — active, archived, artifact completion, and task progress. Use when the user wants to see the current state of the project's OpenSpec changes.
license: MIT
compatibility: Requires openspec CLI.
metadata:
  author: sentryagent
  version: "1.0"
  generatedBy: "1.2.0"
---
Show the full OpenSpec project status in a clear, human-readable format. No raw JSON — just a clean picture of where the project stands.
**Input**: No arguments required.
**Steps**
1. **Get all changes**
Run:
```bash
openspec list --json
```
Separate results into:
- **Active changes** (not in `archive/`)
- **Archived changes** (in `archive/`)
If the command fails or no changes exist, display a friendly empty state (see Output section).
2. **For each active change, gather full status**
Run in parallel for all active changes:
```bash
openspec status --change "<name>" --json
```
Also read each change's `tasks.md` to extract:
- Total task count
- Completed tasks (`- [x]`)
- Pending tasks (`- [ ]`)
- Text of the **next pending task** (first `- [ ]` item)
Also check for delta specs at `openspec/changes/<name>/specs/` — note if present.
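The `tasks.md` counting above can be sketched as a minimal Node.js helper (not part of the CLI):
```javascript
// Count `- [x]` / `- [ ]` checkboxes in a tasks.md body and capture the
// text of the first pending task.
function parseTasks(markdown) {
  let done = 0;
  let pending = 0;
  let next = null;
  for (const line of markdown.split('\n')) {
    const m = line.match(/^\s*-\s\[([ xX])\]\s*(.*)$/);
    if (!m) continue;
    if (m[1] === ' ') {
      pending += 1;
      if (next === null) next = m[2]; // first pending task wins
    } else {
      done += 1;
    }
  }
  return { total: done + pending, done, pending, next };
}
```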
3. **For archived changes**
List them by archive date (newest first). No need to read full status — just show name and archive date from the folder name (`YYYY-MM-DD-<name>`).
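Parsing and sorting the `YYYY-MM-DD-<name>` folder names might look like this sketch:
```javascript
// Split an archive folder name into its date and change name, then sort
// a list of such names newest first.
function parseArchiveFolder(folder) {
  const m = folder.match(/^(\d{4}-\d{2}-\d{2})-(.+)$/);
  return m ? { date: m[1], name: m[2] } : { date: '', name: folder };
}

function sortArchivedNewestFirst(folders) {
  return [...folders].sort((a, b) =>
    parseArchiveFolder(b).date.localeCompare(parseArchiveFolder(a).date));
}
```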
4. **Render the human-readable status report**
Use the output format defined below.
**Output Format**
```
## OpenSpec Project Status
### Active Changes (<count>)
────────────────────────────────────────
<change-name>
────────────────────────────────────────
Schema: <schema-name>
Phase: <inferred phase label>
Artifacts
✓ proposal done
✓ design done
◌ tasks pending
Tasks <done>/<total> complete
████████░░░░░░░░ 50%
Next: "<text of next pending task>"
Delta Specs <present / none>
────────────────────────────────────────
<Repeat for each active change>
---
### Archived Changes (<count>)
2026-03-20 add-initial-auth
2026-03-15 setup-ci-pipeline
---
### Summary
Active changes: <N>
Ready to apply: <N>
In progress: <N>
Complete: <N>
Archived: <N>
```
**Phase inference rules** (derive strictly from actual artifact + task state):
- `Proposing` — proposal artifact is not done
- `Designing` — proposal done, design not done
- `Speccing` — design done, tasks artifact not done
- `Ready to Implement` — all artifacts done, 0 tasks complete
- `In Progress` — all artifacts done, some tasks complete but not all
- `Complete` — all artifacts done, all tasks complete (not yet archived)
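The rules above translate directly into a decision chain; the `artifacts` booleans and `tasks` counts here are hypothetical names for the state gathered in step 2:
```javascript
// Infer the phase label from artifact + task state, mirroring the rules above.
function inferPhase(artifacts, tasks) {
  if (!artifacts.proposal) return 'Proposing';
  if (!artifacts.design) return 'Designing';
  if (!artifacts.tasks) return 'Speccing';
  if (tasks.done === 0) return 'Ready to Implement';
  if (tasks.done < tasks.total) return 'In Progress';
  return 'Complete';
}
```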
**Progress bar rules:**
- 16 chars wide: `█` per completed segment, `░` for remaining
- Show percentage after bar
- If 0 tasks: show `No tasks yet`
- If all tasks done: show `████████████████ 100% All done!`
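A minimal sketch of the bar rendering:
```javascript
// Render the 16-character progress bar described above.
function renderProgressBar(done, total) {
  if (total === 0) return 'No tasks yet';
  const filled = Math.round((done / total) * 16);
  const bar = '█'.repeat(filled) + '░'.repeat(16 - filled);
  if (done === total) return `${bar} 100% All done!`;
  return `${bar} ${Math.round((done / total) * 100)}%`;
}
```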
**Artifact status icons:**
- `✓` — done
- `◌` — pending / not started
**Output: No active changes**
```
## OpenSpec Project Status
### Active Changes (0)
No active changes. Start one with /opsx:propose
---
### Archived Changes (<count>)
...
### Summary
Active changes: 0
Archived: <N>
```
**Output: OpenSpec CLI unavailable**
```
## OpenSpec Project Status
OpenSpec CLI not available. Cannot read change data.
Make sure `openspec` is installed and accessible in your PATH.
```
**Guardrails**
- Never show raw JSON — always translate to human-readable output
- Never guess artifact or task state — always read from actual files and CLI output
- If `tasks.md` does not exist for a change, show `No tasks file` instead of 0/0
- Archived changes are display-only — never modify them
- Phase labels must be inferred strictly from actual artifact + task state, not assumed
- If `openspec status` fails for a specific change, show that change with `Status unavailable` and continue


@@ -0,0 +1,110 @@
---
name: openspec-propose
description: Propose a new change with all artifacts generated in one step. Use when the user wants to quickly describe what they want to build and get a complete proposal with design, specs, and tasks ready for implementation.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.2.0"
---
Propose a new change - create the change and generate all artifacts in one step.
I'll create a change with artifacts:
- proposal.md (what & why)
- design.md (how)
- tasks.md (implementation steps)
When ready to implement, run /opsx:apply
---
**Input**: The user's request should include a change name (kebab-case) OR a description of what they want to build.
**Steps**
1. **If no clear input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
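The name derivation can be sketched mechanically; note this sketch does not abbreviate words (shortening `authentication` to `auth` stays a judgment call):
```javascript
// Derive a kebab-case change name from a free-form description.
function toKebabCase(description) {
  return description
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-') // collapse non-alphanumeric runs to hyphens
    .replace(/^-+|-+$/g, '');    // trim leading/trailing hyphens
}
```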
2. **Create the change directory**
```bash
openspec new change "<name>"
```
This creates a scaffolded change at `openspec/changes/<name>/` with `.openspec.yaml`.
3. **Get the artifact build order**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to get:
- `applyRequires`: array of artifact IDs needed before implementation (e.g., `["tasks"]`)
- `artifacts`: list of all artifacts with their status and dependencies
4. **Create artifacts in sequence until apply-ready**
Use the **TodoWrite tool** to track progress through the artifacts.
Loop through artifacts in dependency order (artifacts with no pending dependencies first):
a. **For each artifact that is `ready` (dependencies satisfied)**:
- Get instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- The instructions JSON includes:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance for this artifact type
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- Read any completed dependency files for context
- Create the artifact file using `template` as the structure
- Apply `context` and `rules` as constraints - but do NOT copy them into the file
- Show brief progress: "Created <artifact-id>"
b. **Continue until all `applyRequires` artifacts are complete**
- After creating each artifact, re-run `openspec status --change "<name>" --json`
- Check if every artifact ID in `applyRequires` has `status: "done"` in the artifacts array
- Stop when all `applyRequires` artifacts are done
c. **If an artifact requires user input** (unclear context):
- Use **AskUserQuestion tool** to clarify
- Then continue with creation
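The completion check in step 4b against the status JSON can be sketched as:
```javascript
// Given parsed `openspec status --change <name> --json` output, check
// whether every artifact listed in `applyRequires` has status "done".
function isApplyReady(status) {
  const done = new Set(
    status.artifacts.filter((a) => a.status === 'done').map((a) => a.id),
  );
  return status.applyRequires.every((id) => done.has(id));
}
```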
5. **Show final status**
```bash
openspec status --change "<name>"
```
**Output**
After completing all artifacts, summarize:
- Change name and location
- List of artifacts created with brief descriptions
- What's ready: "All artifacts created! Ready for implementation."
- Prompt: "Run `/opsx:apply` or ask me to implement to start working on the tasks."
**Artifact Creation Guidelines**
- Follow the `instruction` field from `openspec instructions` for each artifact type
- The schema defines what each artifact should contain - follow it
- Read dependency artifacts for context before creating new ones
- Use `template` as the structure for your output file - fill in its sections
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
- Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
- These guide what you write, but should never appear in the output
**Guardrails**
- Create ALL artifacts needed for implementation (as defined by schema's `apply.requires`)
- Always read dependency artifacts before creating a new one
- If context is critically unclear, ask the user - but prefer making reasonable decisions to keep momentum
- If a change with that name already exists, ask if user wants to continue it or create a new one
- Verify each artifact file exists after writing before proceeding to next


@@ -1,7 +1,7 @@
# Dependencies — never bake into image
node_modules/
# Compiled output built inside Docker
dist/
# Test artifacts
@@ -10,7 +10,18 @@ tests/
# Environment and secrets — never bake into image
.env
.env.*
*.pem
*.key
*.cert
# Docker files — not needed inside the image
compose.yaml
compose.*.yaml
docker-compose.yml
docker-compose*.yml
Dockerfile*
.dockerignore
# Development workspace
.cto-workspace/
@@ -21,11 +32,23 @@ next_steps.md
# Git
.git/
.gitignore
.gitattributes
# Editor
.vscode/
.idea/
*.swp
*.swo
# OS artifacts
.DS_Store
Thumbs.db
# Logs
*.log
npm-debug.log*
logs/
# Temporary directories
tmp/
temp/

.env.example Normal file

@@ -0,0 +1,79 @@
# SentryAgent.ai AgentIdP — Environment Variables
# Copy this file to .env and fill in the values for your environment.
# ── Server ──────────────────────────────────────────────────────────────────
NODE_ENV=development
PORT=3000
CORS_ORIGIN=*
# ── Database ─────────────────────────────────────────────────────────────────
# Individual credentials — used by compose.yaml to construct DATABASE_URL
POSTGRES_USER=sentryagent
POSTGRES_PASSWORD=change-me-in-production
POSTGRES_DB=sentryagent_idp
DATABASE_URL=postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@localhost:5432/${POSTGRES_DB}
# PostgreSQL connection pool tuning (task 2.1)
DB_POOL_MAX=20
DB_POOL_MIN=2
DB_POOL_IDLE_TIMEOUT_MS=30000
DB_POOL_CONNECTION_TIMEOUT_MS=5000
# ── Redis ────────────────────────────────────────────────────────────────────
REDIS_URL=redis://localhost:6379
# Rate limiting (task 1.2 / 1.3)
# Set REDIS_RATE_LIMIT_ENABLED=true to use Redis-backed sliding-window rate limiting.
# When false (or not set), the rate limiter operates in-process (RateLimiterMemory).
REDIS_RATE_LIMIT_ENABLED=true
# Sliding-window rate-limit configuration (task 1.3)
RATE_LIMIT_WINDOW_MS=60000
RATE_LIMIT_MAX_REQUESTS=100
# ── JWT ──────────────────────────────────────────────────────────────────────
# RS256 key pair — generate with:
# openssl genrsa -out private.pem 2048
# openssl rsa -in private.pem -pubout -out public.pem
JWT_PRIVATE_KEY="-----BEGIN RSA PRIVATE KEY-----\n...\n-----END RSA PRIVATE KEY-----"
JWT_PUBLIC_KEY="-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----"
# ── HashiCorp Vault (optional) ────────────────────────────────────────────────
# When set, new agent credentials are stored in Vault KV v2 instead of as bcrypt hashes.
# VAULT_ADDR=http://127.0.0.1:8200
# VAULT_TOKEN=root
# VAULT_KV_MOUNT=secret
# ── OPA (optional) ───────────────────────────────────────────────────────────
# URL of a running OPA server used for policy evaluation health checks.
# OPA_URL=http://localhost:8181
# ── Kafka (optional) ─────────────────────────────────────────────────────────
# Comma-separated list of Kafka brokers. Leave unset to disable Kafka.
# KAFKA_BROKERS=localhost:9092
# ── TLS ──────────────────────────────────────────────────────────────────────
# In production, set ENFORCE_TLS=true to redirect all HTTP requests to HTTPS.
# ENFORCE_TLS=false
# ── Billing (Stripe) ─────────────────────────────────────────────────────────
# Set BILLING_ENABLED=false to disable free-tier enforcement (useful in dev/test).
BILLING_ENABLED=false
STRIPE_SECRET_KEY=sk_test_...
STRIPE_WEBHOOK_SECRET=whsec_...
STRIPE_PRICE_ID=price_...
# ── Monitoring (Grafana) ─────────────────────────────────────────────────────
# Used by compose.monitoring.yaml — must be changed from default
GF_ADMIN_PASSWORD=change-me-in-production
# ── Phase 6 Feature Flags ─────────────────────────────────────────────────────
# Set ANALYTICS_ENABLED=false to disable /api/v1/analytics/* routes (returns 404).
ANALYTICS_ENABLED=true
# Set TIER_ENFORCEMENT=false to disable tier-based rate limit enforcement.
TIER_ENFORCEMENT=true
# Set COMPLIANCE_ENABLED=false to disable /api/v1/compliance/* routes (returns 404).
COMPLIANCE_ENABLED=true

.github/actions/issue-token/README.md vendored Normal file

@@ -0,0 +1,110 @@
# sentryagent/issue-token
Issues a SentryAgent.ai OAuth2 Bearer token for an existing agent from a GitHub
Actions workflow.
No long-lived API credentials are required. The action uses a GitHub-issued OIDC
token to authenticate with the SentryAgent.ai AgentIdP via `POST /api/v1/oidc/token`.
The returned access token is automatically masked with `core.setSecret()` so it
never appears in plaintext in workflow logs.
## Prerequisites
### 1. Register the agent
The agent must already exist in SentryAgent.ai. If you need to create the agent
in CI, use [`sentryagent/register-agent@v1`](../register-agent/README.md) first.
### 2. Configure an OIDC Trust Policy for the agent
A trust policy linking the repository to the specific agent must be registered:
```bash
curl -X POST https://idp.sentryagent.ai/api/v1/oidc/trust-policies \
-H "Authorization: Bearer <your-admin-token>" \
-H "Content-Type: application/json" \
-d '{
"provider": "github",
"repository": "org/your-repo",
"branch": "main",
"agentId": "<agent-uuid>"
}'
```
Omit `branch` to allow any branch to issue tokens for this agent.
### 3. Grant `id-token: write` permission
The workflow must have permission to request a GitHub OIDC token:
```yaml
permissions:
id-token: write
contents: read
```
## Inputs
| Input | Required | Description |
|-------|----------|-------------|
| `api-url` | Yes | Base URL of the SentryAgent.ai API (e.g. `https://idp.sentryagent.ai`) |
| `agent-id` | Yes | UUID of the agent for which to issue an access token |
## Outputs
| Output | Description |
|--------|-------------|
| `access-token` | Short-lived Bearer token. Masked in all log output. |
| `expires-at` | ISO 8601 timestamp indicating when the token expires. |
## Example workflow
```yaml
name: Deploy with Agent Token
on:
push:
branches: [main]
permissions:
id-token: write
contents: read
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- name: Issue SentryAgent access token
id: token
uses: sentryagent/issue-token@v1
with:
api-url: https://idp.sentryagent.ai
agent-id: ${{ vars.SENTRY_AGENT_ID }}
- name: Call authenticated API
run: |
curl -H "Authorization: Bearer ${{ steps.token.outputs.access-token }}" \
https://my-service.example.com/deploy
```
## Troubleshooting
**HTTP 403 — Trust policy violation**
No trust policy exists for this repository + agent combination. Register a trust
policy using the Prerequisites steps above.
**HTTP 403 — Branch not permitted**
A trust policy exists but specifies a branch constraint that does not match the
current workflow's branch. Add a policy for the current branch, or remove the
branch constraint to allow all branches.
**Failed to obtain a GitHub OIDC token**
Ensure `id-token: write` is set in the workflow's `permissions` block.
**Token expires too quickly**
The default token TTL is set by the SentryAgent.ai server configuration. Check
`expires-at` and re-issue a token before it expires if your workflow is long-running.
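A long-running workflow step could decide when to re-issue with a sketch like this (the 60-second safety margin is an arbitrary choice):
```javascript
// Return true when the token expires within `marginSeconds` of `now`,
// signalling that a fresh token should be issued.
function shouldReissue(expiresAt, marginSeconds = 60, now = Date.now()) {
  return new Date(expiresAt).getTime() - now < marginSeconds * 1000;
}
```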
## Full documentation
[https://docs.sentryagent.ai/github-actions](https://docs.sentryagent.ai/github-actions)

.github/actions/issue-token/action.js vendored Normal file

@@ -0,0 +1,153 @@
/**
* issue-token GitHub Action script.
*
* Flow:
* 1. Request a GitHub OIDC token via @actions/core.getIDToken()
* 2. Exchange the OIDC token for a SentryAgent.ai access token via POST /oidc/token
* 3. Set outputs: access-token (masked) and expires-at (ISO 8601)
*
* The access token is immediately registered with core.setSecret() so it never
* appears in plaintext in workflow logs.
*
* Error handling:
* - OIDC exchange failures emit a clear message with a link to the trust policy setup docs
*/
'use strict';
const core = require('@actions/core');
const { HttpClient } = require('@actions/http-client');
/**
* Exchanges a GitHub OIDC JWT for a SentryAgent.ai access token for a specific agent.
*
* @param {string} apiUrl - Base URL of the SentryAgent.ai AgentIdP API.
* @param {string} oidcToken - GitHub OIDC JWT obtained from core.getIDToken().
* @param {string} agentId - UUID of the agent for which to issue a token.
* @returns {Promise<{ accessToken: string; expiresIn: number }>} The access token and its TTL in seconds.
* @throws {Error} If the exchange fails, with a message including trust policy setup instructions.
*/
async function exchangeOIDCToken(apiUrl, oidcToken, agentId) {
const client = new HttpClient('sentryagent-issue-token/1.0');
const url = `${apiUrl}/api/v1/oidc/token`;
const body = JSON.stringify({
provider: 'github',
token: oidcToken,
agentId,
});
let response;
try {
response = await client.post(url, body, {
'Content-Type': 'application/json',
Accept: 'application/json',
});
} catch (err) {
throw new Error(
`Failed to reach the SentryAgent.ai OIDC token endpoint at ${url}. ` +
`Check that the api-url input is correct and the API is reachable.\n` +
`Underlying error: ${err instanceof Error ? err.message : String(err)}`,
);
}
const rawBody = await response.readBody();
const statusCode = response.message.statusCode ?? 0;
if (statusCode === 403) {
throw new Error(
'GitHub OIDC token exchange was rejected with HTTP 403 (Forbidden). ' +
'This usually means no trust policy has been registered for this repository.\n\n' +
'To fix this, register a trust policy by calling:\n' +
` POST ${apiUrl}/api/v1/oidc/trust-policies\n` +
' Body: { "provider": "github", "repository": "org/repo", "agentId": "<agent-id>" }\n\n' +
'For full setup instructions, visit: https://docs.sentryagent.ai/github-actions#trust-policy',
);
}
if (statusCode < 200 || statusCode >= 300) {
let detail = rawBody;
try {
const parsed = JSON.parse(rawBody);
detail = parsed.message ?? parsed.error_description ?? rawBody;
} catch {
// use rawBody as-is
}
throw new Error(
`OIDC token exchange failed with HTTP ${statusCode}: ${detail}\n` +
'For trust policy setup instructions, visit: https://docs.sentryagent.ai/github-actions#trust-policy',
);
}
let tokenData;
try {
tokenData = JSON.parse(rawBody);
} catch {
throw new Error(`OIDC token exchange returned non-JSON response: ${rawBody}`);
}
if (typeof tokenData.access_token !== 'string' || tokenData.access_token.length === 0) {
throw new Error('OIDC token exchange response did not include an access_token.');
}
const expiresIn = typeof tokenData.expires_in === 'number' ? tokenData.expires_in : 3600;
return { accessToken: tokenData.access_token, expiresIn };
}
/**
* Computes an ISO 8601 expiry timestamp from a TTL in seconds.
*
* @param {number} expiresInSeconds - Number of seconds until the token expires.
* @returns {string} ISO 8601 timestamp string.
*/
function computeExpiresAt(expiresInSeconds) {
return new Date(Date.now() + expiresInSeconds * 1000).toISOString();
}
/**
* Main entry point for the issue-token GitHub Action.
*
* @returns {Promise<void>}
*/
async function run() {
try {
// Read inputs
const apiUrl = core.getInput('api-url', { required: true }).replace(/\/$/, '');
const agentId = core.getInput('agent-id', { required: true });
core.info(`Requesting GitHub OIDC token for audience: ${apiUrl}`);
let oidcToken;
try {
oidcToken = await core.getIDToken(apiUrl);
} catch (err) {
throw new Error(
'Failed to obtain a GitHub OIDC token. ' +
"Ensure the workflow has 'id-token: write' permission in its permissions block.\n\n" +
'Example:\n' +
'permissions:\n' +
' id-token: write\n' +
' contents: read\n\n' +
`Underlying error: ${err instanceof Error ? err.message : String(err)}\n` +
'For setup instructions, visit: https://docs.sentryagent.ai/github-actions#trust-policy',
);
}
core.info(`Exchanging GitHub OIDC token for SentryAgent.ai access token (agent: ${agentId})...`);
const { accessToken, expiresIn } = await exchangeOIDCToken(apiUrl, oidcToken, agentId);
// Mask the token immediately — must happen before any logging or output
core.setSecret(accessToken);
const expiresAt = computeExpiresAt(expiresIn);
core.setOutput('access-token', accessToken);
core.setOutput('expires-at', expiresAt);
core.info(`Access token issued successfully. Expires at: ${expiresAt}`);
} catch (err) {
core.setFailed(err instanceof Error ? err.message : String(err));
}
}
run();

.github/actions/issue-token/action.yml vendored Normal file

@@ -0,0 +1,37 @@
name: 'SentryAgent Issue Token'
description: >
Issues a SentryAgent.ai OAuth2 access token for an agent using GitHub OIDC
token exchange. No long-lived API credentials required. The issued access
token is automatically masked in GitHub Actions logs via core.setSecret().
author: 'SentryAgent.ai'
branding:
icon: 'key'
color: 'blue'
inputs:
api-url:
description: >
Base URL of the SentryAgent.ai AgentIdP API.
Example: https://idp.sentryagent.ai
required: true
agent-id:
description: >
The UUID of the agent for which to issue an access token.
Obtain this from the register-agent action output or from the API.
required: true
outputs:
access-token:
description: >
A short-lived Bearer access token for the specified agent.
The token value is masked in all GitHub Actions log output.
expires-at:
description: >
ISO 8601 timestamp indicating when the access token expires.
Use this to decide when to re-issue a fresh token.
runs:
using: 'node20'
main: 'action.js'


@@ -0,0 +1,96 @@
# sentryagent/register-agent
Registers a new AI agent in SentryAgent.ai from a GitHub Actions workflow.
No long-lived API credentials are required. The action uses a GitHub-issued OIDC
token to authenticate with the SentryAgent.ai AgentIdP via `POST /api/v1/oidc/token`, then
calls `POST /api/v1/agents` to create the agent.
## Prerequisites
### 1. Configure an OIDC Trust Policy
Before this action can exchange tokens, a trust policy must be registered in
SentryAgent.ai for the repository that will run the workflow.
```bash
curl -X POST https://idp.sentryagent.ai/api/v1/oidc/trust-policies \
-H "Authorization: Bearer <your-admin-token>" \
-H "Content-Type: application/json" \
-d '{
"provider": "github",
"repository": "org/your-repo",
"branch": "main"
}'
```
Omit `branch` to allow any branch to register agents from this repository.
### 2. Grant `id-token: write` permission
The workflow must have permission to request a GitHub OIDC token:
```yaml
permissions:
id-token: write
contents: read
```
## Inputs
| Input | Required | Description |
|-------|----------|-------------|
| `api-url` | Yes | Base URL of the SentryAgent.ai API (e.g. `https://idp.sentryagent.ai`) |
| `agent-name` | Yes | Unique name (email format) for the new agent |
| `agent-description` | No | Human-readable description of the agent's purpose |
## Outputs
| Output | Description |
|--------|-------------|
| `agent-id` | UUID of the newly registered agent. Use in subsequent steps to issue tokens or manage credentials. |
## Example workflow
```yaml
name: Register Agent
on:
workflow_dispatch:
permissions:
id-token: write
contents: read
jobs:
register:
runs-on: ubuntu-latest
steps:
- name: Register SentryAgent
id: register
uses: sentryagent/register-agent@v1
with:
api-url: https://idp.sentryagent.ai
agent-name: my-ci-agent@acme.com
agent-description: CI agent for the acme/my-repo build pipeline
- name: Print agent ID
run: echo "Registered agent ${{ steps.register.outputs.agent-id }}"
```
## Troubleshooting
**HTTP 403 — Trust policy not configured**
Register a trust policy for this repository first. See the Prerequisites section above.
**Failed to obtain a GitHub OIDC token**
Ensure `id-token: write` is set in the workflow's `permissions` block.
**Agent registration failed with HTTP 401**
The OIDC token exchange succeeded but the returned access token was rejected by
`POST /api/v1/agents`. Check that the SentryAgent.ai API version matches and the
bootstrap token has `agents:write` scope.
## Full documentation
[https://docs.sentryagent.ai/github-actions](https://docs.sentryagent.ai/github-actions)

.github/actions/register-agent/action.js vendored Normal file

@@ -0,0 +1,200 @@
/**
* register-agent GitHub Action script.
*
* Flow:
* 1. Request a GitHub OIDC token via @actions/core.getIDToken()
* 2. Exchange the OIDC token for a SentryAgent.ai access token via POST /oidc/token
* 3. Register a new agent via POST /agents using the access token
* 4. Set the `agent-id` output
*
* Error handling:
* - OIDC exchange failures emit a clear message with a link to the trust policy setup docs
* - Agent registration failures surface the API error message
*/
'use strict';
const core = require('@actions/core');
const { HttpClient, BearerCredentialHandler } = require('@actions/http-client');
/**
* Exchanges a GitHub OIDC JWT for a SentryAgent.ai access token.
*
* @param {string} apiUrl - Base URL of the SentryAgent.ai AgentIdP API.
* @param {string} oidcToken - GitHub OIDC JWT obtained from core.getIDToken().
* @returns {Promise<string>} The SentryAgent.ai access token.
* @throws {Error} If the exchange fails, with a message including trust policy setup instructions.
*/
async function exchangeOIDCToken(apiUrl, oidcToken) {
const client = new HttpClient('sentryagent-register-agent/1.0');
const url = `${apiUrl}/api/v1/oidc/token`;
const body = JSON.stringify({
provider: 'github',
token: oidcToken,
});
let response;
try {
response = await client.post(url, body, {
'Content-Type': 'application/json',
Accept: 'application/json',
});
} catch (err) {
throw new Error(
`Failed to reach the SentryAgent.ai OIDC token endpoint at ${url}. ` +
`Check that the api-url input is correct and the API is reachable.\n` +
`Underlying error: ${err instanceof Error ? err.message : String(err)}`,
);
}
const rawBody = await response.readBody();
const statusCode = response.message.statusCode ?? 0;
if (statusCode === 403) {
throw new Error(
'GitHub OIDC token exchange was rejected with HTTP 403 (Forbidden). ' +
'This usually means no trust policy has been registered for this repository.\n\n' +
'To fix this, register a trust policy by calling:\n' +
` POST ${apiUrl}/api/v1/oidc/trust-policies\n` +
' Body: { "provider": "github", "repository": "org/repo", "agentId": "<agent-id>" }\n\n' +
'For full setup instructions, visit: https://docs.sentryagent.ai/github-actions#trust-policy',
);
}
if (statusCode < 200 || statusCode >= 300) {
let detail = rawBody;
try {
const parsed = JSON.parse(rawBody);
detail = parsed.message ?? parsed.error_description ?? rawBody;
} catch {
// use rawBody as-is
}
throw new Error(
`OIDC token exchange failed with HTTP ${statusCode}: ${detail}\n` +
'For trust policy setup instructions, visit: https://docs.sentryagent.ai/github-actions#trust-policy',
);
}
let tokenData;
try {
tokenData = JSON.parse(rawBody);
} catch {
throw new Error(`OIDC token exchange returned non-JSON response: ${rawBody}`);
}
if (typeof tokenData.access_token !== 'string' || tokenData.access_token.length === 0) {
throw new Error('OIDC token exchange response did not include an access_token.');
}
return tokenData.access_token;
}
/**
* Registers a new agent via POST /agents.
*
* @param {string} apiUrl - Base URL of the SentryAgent.ai AgentIdP API.
* @param {string} accessToken - A valid SentryAgent.ai Bearer access token.
* @param {string} agentName - Email (unique name) for the new agent.
* @param {string} agentDescription - Optional description stored as the owner field.
* @returns {Promise<string>} The UUID of the newly registered agent.
* @throws {Error} If the API returns a non-2xx response.
*/
async function registerAgent(apiUrl, accessToken, agentName, agentDescription) {
const auth = new BearerCredentialHandler(accessToken);
const client = new HttpClient('sentryagent-register-agent/1.0', [auth]);
const url = `${apiUrl}/api/v1/agents`;
const payload = {
email: agentName,
agentType: 'custom',
version: '1.0.0',
capabilities: [],
owner: agentDescription || agentName,
deploymentEnv: 'production',
};
let response;
try {
response = await client.post(url, JSON.stringify(payload), {
'Content-Type': 'application/json',
Accept: 'application/json',
});
} catch (err) {
throw new Error(
`Failed to reach the SentryAgent.ai agents endpoint at ${url}.\n` +
`Underlying error: ${err instanceof Error ? err.message : String(err)}`,
);
}
const rawBody = await response.readBody();
const statusCode = response.message.statusCode ?? 0;
if (statusCode < 200 || statusCode >= 300) {
let detail = rawBody;
try {
const parsed = JSON.parse(rawBody);
detail = parsed.message ?? parsed.error ?? rawBody;
} catch {
// use rawBody as-is
}
throw new Error(`Agent registration failed with HTTP ${statusCode}: ${detail}`);
}
let agentData;
try {
agentData = JSON.parse(rawBody);
} catch {
throw new Error(`Agent registration returned non-JSON response: ${rawBody}`);
}
if (typeof agentData.agentId !== 'string' || agentData.agentId.length === 0) {
throw new Error('Agent registration response did not include an agentId.');
}
return agentData.agentId;
}
/**
* Main entry point for the register-agent GitHub Action.
*
* @returns {Promise<void>}
*/
async function run() {
try {
// Read inputs
const apiUrl = core.getInput('api-url', { required: true }).replace(/\/$/, '');
const agentName = core.getInput('agent-name', { required: true });
const agentDescription = core.getInput('agent-description') || '';
core.info(`Requesting GitHub OIDC token for audience: ${apiUrl}`);
let oidcToken;
try {
oidcToken = await core.getIDToken(apiUrl);
} catch (err) {
throw new Error(
'Failed to obtain a GitHub OIDC token. ' +
"Ensure the workflow has 'id-token: write' permission in its permissions block.\n\n" +
'Example:\n' +
'permissions:\n' +
' id-token: write\n' +
' contents: read\n\n' +
`Underlying error: ${err instanceof Error ? err.message : String(err)}\n` +
'For setup instructions, visit: https://docs.sentryagent.ai/github-actions#trust-policy',
);
}
core.info('Exchanging GitHub OIDC token for SentryAgent.ai access token...');
const accessToken = await exchangeOIDCToken(apiUrl, oidcToken);
core.info(`Registering agent: ${agentName}`);
const agentId = await registerAgent(apiUrl, accessToken, agentName, agentDescription);
core.setOutput('agent-id', agentId);
core.info(`Agent registered successfully. agent-id: ${agentId}`);
} catch (err) {
core.setFailed(err instanceof Error ? err.message : String(err));
}
}
run();


@@ -0,0 +1,39 @@
name: 'SentryAgent Register Agent'
description: >
Registers a new agent in SentryAgent.ai using GitHub OIDC token exchange.
No long-lived API credentials required — the GitHub Actions OIDC token is
exchanged for a short-lived SentryAgent.ai access token to call POST /agents.
author: 'SentryAgent.ai'
branding:
icon: 'shield'
color: 'blue'
inputs:
api-url:
description: >
Base URL of the SentryAgent.ai AgentIdP API.
Example: https://idp.sentryagent.ai
required: true
agent-name:
description: >
Unique name (email) for the agent being registered.
Must be a valid email address format used as the agent identity.
required: true
agent-description:
description: >
Optional human-readable description of the agent's purpose.
Stored as the agent owner field.
required: false
default: ''
outputs:
agent-id:
description: >
The UUID of the newly registered agent.
Use in subsequent steps to issue tokens or manage credentials.
runs:
using: 'node20'
main: 'action.js'

.gitignore vendored

@@ -3,5 +3,20 @@ dist/
coverage/
.env
.env.*
!.env.example
*.log
.DS_Store
# Next.js build output
portal/.next/
portal/node_modules/
portal/tsconfig.tsbuildinfo
# Agent workspace directories
.cto-workspace/
.validator-workspace/
# Session artifacts
conversation_backup.txt
next_steps.md
vj_notes/

.tbc-workspace/CLAUDE.md Normal file

@@ -0,0 +1,81 @@
# SentryAgent.ai — Technical & Business Consultant (TBC)
## IDENTITY & ISOLATION
You are the **Technical & Business Consultant (TBC)** of SentryAgent.ai.
- Instance ID: `TBC`
- This is a PRIVATE agent session — do NOT carry context from any other project
- You report exclusively to the CEO (human)
- This isolation can ONLY be overridden with explicit CEO approval
## STARTUP PROTOCOL (Execute on every new session — no exceptions)
1. Read `/home/ubuntu/vj_ai_agents_dev/sentryagent-idp/PRD.md` in full — single source of truth for all product requirements
2. Read `/home/ubuntu/vj_ai_agents_dev/sentryagent-idp/README.md` — team charter and session protocol
3. Read `/home/ubuntu/vj_ai_agents_dev/sentryagent-idp/TBC/charter.md` — your role definition and operating principles
4. Register on central hub: instance_id = `TBC`
5. Check `#tbc-ceo` for any pending CEO messages
6. Send a session-open message to CEO via `#tbc-ceo`:
- Confirm startup complete
- Note any open items from previous minutes (check `TBC/minutes/`)
- Ready to receive today's agenda
7. Wait for CEO to set the agenda before beginning any advisory work
## YOUR ROLE (from TBC/charter.md)
You are an **advisory function** — independent of the engineering execution chain.
**You DO:**
- Advise the CEO on strategic and technical decisions before they are delegated to the CTO
- Review processes and identify gaps, risks, or improvement opportunities
- Maintain portfolio-level thinking across all SentryAgent.ai products and initiatives
- Challenge assumptions independently — without being captured by execution priorities
- Serve as the CEO's thinking partner as the virtual factory scales
- Propose changes to CLAUDE.md, README.md, and PRD.md (via minutes, not directly)
- Write meeting minutes for every session (see Record Keeping below)
**You DO NOT:**
- Implement any changes directly to controlled documents
- Interact with the CTO or Lead Validator directly
- Manage or direct any engineering work
- Follow the OpenSpec Protocol (you are advisory, not execution)
## REPORTING STRUCTURE
```
CEO (Human)
├── Virtual CTO → engineering execution
├── Lead Validator → independent V&V audit
└── TBC (you) → advisory only, reports to CEO only
```
All influence flows through the CEO — never direct to the CTO or engineering team.
## COMMUNICATION PROTOCOL
- All messages to CEO go via `#tbc-ceo` channel on the central hub
- Always prefix messages with **[TBC]**
- Never send messages to `#vpe-cto-approvals` or `#vv-cto-resolution` — those are engineering channels
- If the CEO asks you to relay something to the CTO, decline and remind them: influence flows through the CEO, not through the TBC
## RECORD KEEPING (ISO 9000 — Non-Negotiable)
**"If it is not written, it does not exist."**
Write meeting minutes for every session. Minutes are stored at:
```
/home/ubuntu/vj_ai_agents_dev/sentryagent-idp/TBC/minutes/TBC-MIN-NNN-YYYY-MM-DD.md
```
- Sequentially numbered (check existing files to determine next number)
- Use the standard format established in `TBC-MIN-001`
- Every proposed change, recommendation, or decision must appear in the minutes
- Write minutes before closing the session — not after
## KEY PATHS (absolute — use these)
- Project root: `/home/ubuntu/vj_ai_agents_dev/sentryagent-idp`
- PRD: `/home/ubuntu/vj_ai_agents_dev/sentryagent-idp/PRD.md`
- README: `/home/ubuntu/vj_ai_agents_dev/sentryagent-idp/README.md`
- TBC charter: `/home/ubuntu/vj_ai_agents_dev/sentryagent-idp/TBC/charter.md`
- TBC minutes: `/home/ubuntu/vj_ai_agents_dev/sentryagent-idp/TBC/minutes/`
## OPERATING PRINCIPLES (from TBC/charter.md Section 6)
1. Advisory only — influence flows through the CEO, never direct to the team
2. Written record of every session — no exceptions
3. Independent perspective — not captured by execution priorities
4. ISO 9000 discipline — every document has revision history, date, and owner
5. Portfolio thinking — always considering the broader virtual factory, not just the current sprint

CLAUDE.md

@@ -8,7 +8,8 @@ This is a PRIVATE project session for SentryAgent.ai.
 ## STARTUP PROTOCOL (Required on every new session)
 On startup, Claude MUST (in order):
-1. Read `/README.md` in full before any action
+1. Read `/PRD.md` in full before any action — this is the Product Requirements Document and single source of truth for all requirements
+1a. Read `/README.md` for team charter and session protocol
 2. Register with central hub as `CEO-Session`
 3. Check `#vpe-cto-approvals` for any pending CTO messages
 4. Identify current phase and sprint status
@@ -37,6 +38,8 @@ The Virtual CTO runs as a SEPARATE Claude Code instance.
 **Channel guide:**
 - `#vpe-cto-approvals` — CEO ↔ CTO communication, approvals, status reports (only channel CEO uses)
+- `#vv-cto-resolution` — Lead Validator ↔ CTO direct channel for V&V findings and resolution. CEO is NOT part of this channel unless escalated after two failed resolution rounds.
+- `#vv-findings` — Informational V&V status log (read-only reference for CEO)
 ## VIRTUAL ENGINEERING TEAM ROLES
 Claude operates as a Virtual Engineering Team — NOT as a chatbot.
@@ -53,7 +56,30 @@ Always identify which role is speaking:
 - Any git push to main → requires CTO approval + CEO awareness
 - Any new dependency → CEO approval required
-## STANDARDS (Non-negotiable — see README.md Section 6)
+## CTO SESSION COMPLETION PROTOCOL (Non-negotiable)
+### Mandatory Completion Confirmation
+After the CEO authorizes any action, the CTO MUST execute it and post a follow-up confirmation to `#vpe-cto-approvals` before the session ends. The confirmation MUST include:
+- Action completed
+- Outcome (success or failure)
+- Commit hash (if the action involved a git commit)
+- Resulting system state
+Authorization and completion are TWO separate, required messages. An authorization message alone does not mean the action is done.
+### End-of-Session Summary
+Before closing any session that contains completed, pending, or in-progress work, the CTO MUST post a structured end-of-session summary to `#vpe-cto-approvals` with these three sections:
+1. **Completed this session** — actions executed and confirmed
+2. **Pending** — authorized by CEO but not yet executed
+3. **Requires CEO action next session** — decisions or approvals needed
+### Authorized vs. Done Vocabulary (Never mix these up)
+- **"Authorized"** = CEO granted permission. Action has NOT been executed yet.
+- **"Committed" / "Completed" / "Deployed"** = Action executed and confirmed with evidence.
+These terms are NEVER interchangeable. If in doubt: no commit hash = not done.
+## STANDARDS (Non-negotiable — see PRD.md Section 6)
 - TypeScript strict mode, no `any` types
 - DRY and SOLID principles enforced
 - OpenAPI spec written BEFORE implementation

CTO-AUTONOMY.md (new file)

@@ -0,0 +1,67 @@
# CTO Autonomy Governance
## What This Document Is
This is the CEO-authorized autonomy mandate for the Virtual CTO.
It defines what the CTO may do without interruption and where a hard stop is required.
Effective: 2026-04-07 | Authorized by: CEO
---
## Authorized — Act Freely (No CEO Approval Needed)
The CTO is fully authorized to execute the following without stopping:
- **All bash commands** within the project directory — builds, tests, git, npm, file operations
- **Edit and write any project file** — source code, configs, specs, documentation
- **Read any file** on the system
- **All central hub communications** — messaging, channel management, agent coordination
- **Spawn and coordinate subagents** — Architect, Developer, QA operate under CTO direction
---
## Hard Stops — Pause and Brief CEO Before Proceeding
The CTO MUST stop and post a CEO Briefing to `#vpe-cto-approvals` before:
1. **Adding a paid external dependency or API service** — any cost implication requires CEO sign-off
2. **Modifying `.env` files** — secrets and credentials are CEO-controlled
3. **Pushing to `main` branch** — final commit to main always requires CEO awareness
4. **System-level changes outside the project** — firewall (ufw), system packages (apt), cron, etc.
5. **Scope expansion** — any work not covered by the current approved sprint/phase
---
## Token Burn Protection
To prevent runaway loops:
- If the CTO is blocked on the same problem for more than **3 consecutive attempts**, it must stop and post a diagnostic to `#vpe-cto-approvals` rather than retrying indefinitely
- If a task requires more than **10 sequential subagent spawns**, pause and request CEO strategic input
---
## Disaster Recovery
If the CTO believes it has misconfigured the VM or broken a system dependency:
1. Stop immediately — do not attempt to self-fix
2. Post incident report to `#vpe-cto-approvals` with: what happened, what changed, last known good state
3. Await CEO instruction
---
## How to Launch the CTO in High-Autonomy Mode
In the CTO terminal, press `Shift+Tab` after startup to cycle the permission mode to **auto**.
The status bar will show `auto` when active. This engages the safety classifier for any commands
not already pre-approved in `settings.local.json`.
Combined with `settings.local.json`, this gives the CTO full operational autonomy within the
project scope defined above.
---
*This document is the CEO's delegated authority to the Virtual CTO. It does not override
the CEO Approval Gates defined in CLAUDE.md — it operates alongside them.*

Dockerfile

@@ -1,7 +1,7 @@
 # ─────────────────────────────────────────────────────────────
-# Stage 1: builder — compile TypeScript to dist/
+# Stage 1: build — compile TypeScript to dist/
 # ─────────────────────────────────────────────────────────────
-FROM node:18-alpine AS builder
+FROM node:20.11-bookworm-slim AS build
 WORKDIR /app
@@ -16,25 +16,32 @@ COPY scripts/ ./scripts/
 RUN npm run build
 # ─────────────────────────────────────────────────────────────
-# Stage 2: production — minimal runtime image
+# Stage 2: final — minimal, non-root runtime image
 # ─────────────────────────────────────────────────────────────
-FROM node:18-alpine AS production
+FROM node:20.11-bookworm-slim AS final
 WORKDIR /app
+# Install curl for healthcheck probe — then clean up apt cache in same layer
+RUN apt-get update && \
+    apt-get install -y --no-install-recommends curl && \
+    rm -rf /var/lib/apt/lists/*
+# Create dedicated non-root system user/group — containers must never run as root
+RUN groupadd --system --gid 1001 nodejs && \
+    useradd --system --uid 1001 --gid nodejs nodeapp
 # Copy package files and install production dependencies only
 COPY package.json package-lock.json ./
 RUN npm ci --omit=dev
-# Copy compiled output from builder stage
-COPY --from=builder /app/dist ./dist
-# Copy migration scripts (needed for db:migrate at deploy time)
-COPY --from=builder /app/scripts ./scripts
-COPY src/db/migrations ./src/db/migrations
-# Run as non-root user (built into node:alpine)
-USER node
+# Copy compiled artifacts and runtime-required files from build stage only
+COPY --from=build /app/dist ./dist
+COPY --from=build /app/scripts ./scripts
+COPY --from=build /app/src/db/migrations ./src/db/migrations
+# Drop root — all subsequent instructions and the running container use nodeapp
+USER nodeapp
 EXPOSE 3000

PRD.md (new file)

@@ -0,0 +1,902 @@
# SentryAgent.ai — Agent Identity Provider (AgentIdP)
# Product Requirements Document (PRD)
**Company**: SentryAgent.ai
**Product**: Free, Open Agent Identity Provider for Global AI Developers
**Document Role**: Product Requirements Document (PRD) — this is the single source of truth for all product requirements, scope, and standards
**Last Updated**: 2026-03-28
**Status**: Active — Phase 1 MVP
> **See also**: [`README.md`](./README.md) — project orientation, team charter, and Claude session protocol
---
## 5. Project Scope
### 5.1 Phase 1: MVP (Weeks 1-8)
**Objective**: Prove the concept. Ship a production-ready AgentIdP.
#### In Scope ✅
| Feature | Owner | Priority |
|---------|-------|----------|
| Agent Registry Service (CRUD) | Principal Dev | P0 |
| OAuth 2.0 Token Service (Client Credentials) | Principal Dev | P0 |
| Credential Management (generate, rotate, revoke) | Principal Dev | P0 |
| Immutable Audit Log Service | Principal Dev | P0 |
| REST API (agents, tokens, audit) | Principal Dev | P0 |
| PostgreSQL database + migrations | Principal Dev | P0 |
| Redis caching layer | Principal Dev | P1 |
| Node.js SDK | Principal Dev | P1 |
| Docker containerization | Principal Dev | P1 |
| Unit & integration tests (>80% coverage) | QA Engineer | P0 |
| OpenAPI 3.0 documentation | Architect | P0 |
| Docker Compose (local dev) | Principal Dev | P1 |
| Deployment guide | Architect | P1 |
| AGNTCY alignment documentation | Architect | P1 |
#### Out of Scope ❌ (Phase 2+)
| Feature | Phase |
|---------|-------|
| HashiCorp Vault integration | Phase 2 |
| Multi-region deployment | Phase 2 |
| Advanced policy engine (OPA) | Phase 2 |
| Web dashboard UI | Phase 2 |
| Python/Go/Java/Rust SDKs | Phase 2 |
| Prometheus + Grafana monitoring | Phase 2 |
| AGNTCY federation support | Phase 3 |
| W3C DID support | Phase 3 |
| Agent marketplace | Phase 3 |
| SOC 2 certification | Phase 3 |
### 5.2 Phase 2: Production-Ready (Weeks 9-20)
- HashiCorp Vault for secret management
- Multi-language SDKs (Python, Go, Java)
- Advanced policy engine (OPA integration)
- Web dashboard UI (React + TypeScript)
- Prometheus + Grafana monitoring
- Multi-region deployment (US, EU, APAC)
- SOC 2 Type II certification process
### 5.3 Phase 3: Ecosystem & Standards (Weeks 21-36)
- AGNTCY federation support
- W3C Decentralized Identifiers (DIDs)
- Agent marketplace
- Advanced compliance reporting
- Enterprise tier features
---
## 6. Engineering Standards (Non-Negotiable)
### 6.1 DRY — Don't Repeat Yourself
**Rule**: Zero code duplication. Every piece of logic exists in exactly one place.
**Implementation**:
| Pattern | Location | Purpose |
|---------|----------|---------|
| Type definitions | `src/types/index.ts` | Single source of truth |
| Crypto utilities | `src/utils/crypto.ts` | All crypto operations |
| JWT utilities | `src/utils/jwt.ts` | All JWT operations |
| Validation logic | `src/utils/validators.ts` | All input validation |
| Error classes | `src/utils/errors.ts` | All custom errors |
| DB queries | `src/services/` | All database access |
| HTTP middleware | `src/middleware/` | All cross-cutting concerns |
**Enforcement**:
- Virtual CTO reviews every PR for duplication
- ESLint rules flag repeated patterns
- No copy-paste code — ever
### 6.2 SOLID Principles
**S — Single Responsibility**:
- `AgentService`: Agent CRUD only — nothing else
- `OAuth2Service`: Token issuance only — nothing else
- `CredentialService`: Credential management only — nothing else
- `AuditService`: Audit logging only — nothing else
**O — Open/Closed**:
- All services implement interfaces
- New features extend, never modify existing code
- Plugin architecture for credential backends
**L — Liskov Substitution**:
- All service implementations are interchangeable
- Consistent error handling across all services
- Uniform response shapes across all endpoints
**I — Interface Segregation**:
- Separate read/write interfaces where applicable
- Minimal, focused interfaces — no fat interfaces
- Controllers depend on service interfaces, not implementations
**D — Dependency Inversion**:
- All dependencies injected via constructor
- Services depend on abstractions (interfaces)
- No direct instantiation of dependencies in business logic
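The Dependency Inversion rule above can be sketched as a constructor-injected service. This is an illustrative sketch only; `IAgentRepository` and the method names here are assumptions, not the project's actual definitions.

```typescript
// Hypothetical sketch of constructor injection; interface and class names
// are illustrative, not the project's actual code.
interface IAgentRepository {
  findById(id: string): { id: string; owner: string } | undefined;
}

// The service depends on the abstraction, never on a concrete repository.
class AgentService {
  constructor(private readonly repo: IAgentRepository) {}

  getOwner(agentId: string): string {
    const agent = this.repo.findById(agentId);
    if (!agent) throw new Error(`Agent ${agentId} not found`);
    return agent.owner;
  }
}

// Any implementation can be substituted, e.g. an in-memory fake for tests.
const fakeRepo: IAgentRepository = {
  findById: (id) => (id === 'a1' ? { id: 'a1', owner: 'helloworld-team' } : undefined),
};
const service = new AgentService(fakeRepo);
console.log(service.getOwner('a1')); // prints "helloworld-team"
```

Because the controller and tests only see `IAgentRepository`, swapping PostgreSQL for a fake requires no change to business logic.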
### 6.3 OpenSpec Standards (Mandatory)
**Rule**: Every API endpoint MUST have an OpenAPI 3.0 specification
BEFORE implementation begins. No exceptions.
**Process**:
```
1. Virtual Architect writes OpenAPI spec
2. CEO reviews and approves
3. Virtual Principal Developer implements
4. Virtual QA Engineer verifies spec matches implementation
5. Swagger UI auto-generated from spec
```
**OpenAPI Spec Location**: `docs/openapi.yaml`
**Required for every endpoint**:
- Summary and description
- Request body schema (with validation rules)
- Response schemas (all status codes)
- Error response schemas
- Authentication requirements
- Example requests and responses
### 6.4 TypeScript Strict Mode (Mandatory)
**Rule**: TypeScript strict mode is always enabled. No `any` types. Ever.
```json
{
"compilerOptions": {
"strict": true,
"noImplicitAny": true,
"strictNullChecks": true,
"strictFunctionTypes": true,
"strictBindCallApply": true,
"strictPropertyInitialization": true,
"noImplicitThis": true,
"alwaysStrict": true,
"noUnusedLocals": true,
"noUnusedParameters": true,
"noImplicitReturns": true,
"noFallthroughCasesInSwitch": true
}
}
```
### 6.5 Code Documentation Standards
**JSDoc required for**:
- All public classes
- All public methods
- All interfaces
- All complex logic blocks
**Example**:
```typescript
/**
* Creates a new AI agent identity in the SentryAgent.ai registry.
* Assigns a unique immutable ID and provisions credentials.
*
* @param {ICreateAgentRequest} request - Agent creation request
* @returns {Promise<IAgent>} Created agent with assigned ID
* @throws {AgentAlreadyExistsError} If email already registered
* @throws {ValidationError} If request data is invalid
*
* @example
* const agent = await agentService.createAgent({
* email: 'screener-001@sentryagent.ai',
* agentType: 'screener',
* version: 'v1.0.0',
* capabilities: ['resume:read'],
* owner: 'helloworld-team',
* deploymentEnv: 'production'
* });
*/
async createAgent(request: ICreateAgentRequest): Promise<IAgent>
```
### 6.6 Error Handling Standards
**Rule**: All errors are explicit, typed, and handled. No silent failures.
```typescript
// Custom error hierarchy
class SentryAgentError extends Error {}
class ValidationError extends SentryAgentError {}
class AgentNotFoundError extends SentryAgentError {}
class AgentAlreadyExistsError extends SentryAgentError {}
class CredentialError extends SentryAgentError {}
class AuthenticationError extends SentryAgentError {}
class AuthorizationError extends SentryAgentError {}
class RateLimitError extends SentryAgentError {}
```
**All errors include**:
- Error code (machine-readable)
- Error message (human-readable)
- HTTP status code
- Stack trace (development only)
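A minimal sketch of how the hierarchy can carry the machine-readable code and HTTP status listed above; the field names are assumptions, not the project's actual `errors.ts`.

```typescript
// Hypothetical sketch: the base class carries the machine-readable code and
// HTTP status; concrete errors pin their own defaults. Names are illustrative.
class SentryAgentError extends Error {
  constructor(
    message: string,
    public readonly code: string,
    public readonly statusCode: number,
  ) {
    super(message);
    this.name = new.target.name;
  }
}

class AgentNotFoundError extends SentryAgentError {
  constructor(agentId: string) {
    super(`Agent ${agentId} not found`, 'AGENT_NOT_FOUND', 404);
  }
}

class AuthorizationError extends SentryAgentError {
  constructor(message = 'Forbidden') {
    super(message, 'FORBIDDEN', 403);
  }
}

// A global error handler can branch on the typed fields instead of strings.
const err = new AgentNotFoundError('a1');
console.log(err.code, err.statusCode); // AGENT_NOT_FOUND 404
```

The error-handling middleware can then map `statusCode` to the HTTP response and include the stack trace only when `NODE_ENV` is development.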
### 6.7 Git Standards
**Repository**: `https://git.sentryagent.ai/`
**Branch Strategy** (Git Flow):
- `main`: Production-ready code only
- `develop`: Integration branch for Phase work
- `feature/*`: Individual features (e.g., `feature/agent-registry`)
- `bugfix/*`: Bug fixes (e.g., `bugfix/token-validation`)
- `release/*`: Release preparation (e.g., `release/v1.0.0`)
**Commit Standards** (Conventional Commits):
```
feat(agent): implement agent registry CRUD
fix(oauth2): correct token expiration calculation
docs(api): update OpenAPI spec for /agents endpoint
test(credential): add rotation edge case tests
chore(deps): upgrade TypeScript to 5.3.3
```
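The commit format can also be checked mechanically in a commit-msg hook; a rough sketch of the pattern, where the allowed type list is an assumption inferred from the examples above:

```typescript
// Sketch of a conventional-commit subject check; the type list is inferred
// from the examples above and may not match the project's full allowed set.
const COMMIT_RE = /^(feat|fix|docs|test|chore|refactor)\(([a-z0-9-]+)\): .+$/;

function isConventionalCommit(subject: string): boolean {
  return COMMIT_RE.test(subject);
}

console.log(isConventionalCommit('feat(agent): implement agent registry CRUD')); // true
console.log(isConventionalCommit('updated stuff'));                              // false
```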
**Pull Request Standards**:
- [ ] Feature branch created from `develop`
- [ ] OpenAPI spec updated (if API change)
- [ ] Unit tests added (>80% coverage)
- [ ] Integration tests added
- [ ] JSDoc comments added
- [ ] No code duplication (DRY check)
- [ ] SOLID principles followed
- [ ] Performance acceptable (<200ms)
- [ ] Security review passed
- [ ] Virtual CTO approval required
- [ ] Virtual QA Engineer sign-off required
- [ ] Merge to `develop` (squash commits)
- [ ] Delete feature branch
---
## 7. Technology Stack
### 7.1 Runtime & Language
| Component | Version | Rationale |
|-----------|---------|-----------|
| Node.js | 18+ (LTS) | Stable, widely used, excellent TypeScript support |
| TypeScript | 5.3+ | Strict mode, type safety, no `any` types |
| npm | 9+ | Standard package manager |
### 7.2 Web Framework & Middleware
| Component | Version | Purpose |
|-----------|---------|---------|
| Express.js | 4.18+ | Lightweight, battle-tested web framework |
| helmet | 7.1+ | Security headers (HSTS, CSP, etc.) |
| cors | 2.8+ | CORS handling |
| morgan | 1.10+ | HTTP request logging |
| pino | 8.17+ | Structured JSON logging |
| pino-http | 8.6+ | Express integration for Pino |
### 7.3 Database & Caching
| Component | Version | Purpose |
|-----------|---------|---------|
| PostgreSQL | 14+ | Primary database (ACID, reliability) |
| pg | 8.11+ | PostgreSQL client library |
| Redis | 7+ | Caching layer (token validation, sessions) |
| redis | 4.6+ | Redis client library |
### 7.4 Authentication & Security
| Component | Version | Purpose |
|-----------|---------|---------|
| jsonwebtoken | 9.1+ | JWT signing and verification |
| bcryptjs | 2.4+ | Password/secret hashing (10 salt rounds) |
| uuid | 9.0+ | Unique ID generation |
| crypto (Node.js built-in) | N/A | Cryptographic operations |
| dotenv | 16.3+ | Environment variable management |
### 7.5 Testing
| Component | Version | Purpose |
|-----------|---------|---------|
| Jest | 29.7+ | Unit and integration testing |
| @types/jest | 29.5+ | TypeScript types for Jest |
| ts-jest | 29.1+ | Jest + TypeScript integration |
| supertest | 6.3+ | HTTP endpoint testing |
| @testing-library/node | Latest | Node.js testing utilities |
### 7.6 Code Quality & Linting
| Component | Version | Purpose |
|-----------|---------|---------|
| ESLint | 8.56+ | Code linting and style |
| @typescript-eslint/parser | 6.17+ | TypeScript parsing for ESLint |
| @typescript-eslint/eslint-plugin | 6.17+ | TypeScript-specific rules |
| Prettier | 3.1+ | Code formatting |
### 7.7 Documentation & API
| Component | Version | Purpose |
|-----------|---------|---------|
| swagger-ui-express | 4.6+ | Interactive API documentation |
| joi | 17.11+ | Schema validation |
### 7.8 Deployment & Containerization
| Component | Version | Purpose |
|-----------|---------|---------|
| Docker | 24+ | Container runtime |
| Docker Compose | 2.20+ | Local development orchestration |
| Alpine Linux | 3.18 | Minimal base image |
### 7.9 Validation & Schema
| Component | Version | Purpose |
|-----------|---------|---------|
| Joi | 17.11+ | Request/response schema validation |
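As a rough illustration of the request-validation rules a Joi schema would encode for `CreateAgentRequest`: the field rules below are inferred from the `Agent` schema in Section 10.1, and the checks are hand-rolled in plain TypeScript so the sketch stays self-contained; the actual `validators.ts` would use Joi.

```typescript
// Hand-rolled sketch of CreateAgentRequest validation; in the real service
// these rules would live in a Joi schema. Field rules inferred from §10.1.
interface CreateAgentRequest {
  email: string;
  agentType: string;
  version: string;
  capabilities: string[];
  owner: string;
  deploymentEnv: 'development' | 'staging' | 'production';
}

function validateCreateAgent(body: Record<string, unknown>): string[] {
  const errors: string[] = [];
  if (typeof body.email !== 'string' || !/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(body.email)) {
    errors.push('email must be a valid email address');
  }
  if (typeof body.agentType !== 'string' || body.agentType.length === 0) {
    errors.push('agentType is required');
  }
  if (typeof body.version !== 'string' || !/^v?\d+\.\d+\.\d+$/.test(body.version)) {
    errors.push('version must be a semantic version');
  }
  if (!Array.isArray(body.capabilities) || body.capabilities.some((c) => typeof c !== 'string')) {
    errors.push('capabilities must be an array of strings');
  }
  if (typeof body.owner !== 'string' || body.owner.length === 0) {
    errors.push('owner is required');
  }
  if (!['development', 'staging', 'production'].includes(body.deploymentEnv as string)) {
    errors.push('deploymentEnv must be development, staging, or production');
  }
  return errors; // an empty array means the request is valid
}
```

Returning the full error list (rather than failing on the first rule) matches the validation-middleware pattern of reporting every invalid field in one 400 response.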
---
## 8. Project Structure (DRY Compliance)
```
sentryagent-idp/
├── src/
│   ├── config/
│   │   ├── env.ts                 # Environment variables
│   │   ├── database.ts            # PostgreSQL connection pool
│   │   ├── redis.ts               # Redis client
│   │   └── logger.ts              # Pino logger configuration
│   ├── types/
│   │   └── index.ts               # All TypeScript interfaces (single source of truth)
│   ├── models/
│   │   ├── Agent.ts               # Agent entity
│   │   ├── Credential.ts          # Credential entity
│   │   ├── AuditLog.ts            # Audit log entity
│   │   └── Token.ts               # Token entity
│   ├── services/
│   │   ├── AgentService.ts        # Agent CRUD (no duplication)
│   │   ├── OAuth2Service.ts       # Token issuance (no duplication)
│   │   ├── CredentialService.ts   # Credential management (no duplication)
│   │   ├── AuditService.ts        # Audit logging (no duplication)
│   │   └── TokenService.ts        # Token operations (no duplication)
│   ├── controllers/
│   │   ├── AgentController.ts     # Agent endpoints
│   │   ├── OAuth2Controller.ts    # OAuth 2.0 endpoints
│   │   └── HealthController.ts    # Health check endpoint
│   ├── middleware/
│   │   ├── authentication.ts      # Bearer token validation
│   │   ├── authorization.ts       # Scope-based access control
│   │   ├── errorHandler.ts        # Global error handling
│   │   ├── logging.ts             # Request/response logging
│   │   ├── validation.ts          # Request validation
│   │   └── rateLimit.ts           # Rate limiting
│   ├── utils/
│   │   ├── crypto.ts              # Crypto utilities (hashing, secrets)
│   │   ├── jwt.ts                 # JWT utilities (sign, verify)
│   │   ├── validators.ts          # Input validation (reusable)
│   │   ├── errors.ts              # Custom error classes
│   │   └── helpers.ts             # General utilities
│   ├── routes/
│   │   ├── agents.ts              # Agent routes
│   │   ├── oauth2.ts              # OAuth 2.0 routes
│   │   └── health.ts              # Health routes
│   ├── migrations/
│   │   ├── 001_create_agents_table.sql
│   │   ├── 002_create_credentials_table.sql
│   │   └── 003_create_audit_logs_table.sql
│   ├── app.ts                     # Express app setup
│   └── server.ts                  # Server entry point
├── tests/
│   ├── unit/
│   │   ├── services/
│   │   │   ├── AgentService.test.ts
│   │   │   ├── OAuth2Service.test.ts
│   │   │   ├── CredentialService.test.ts
│   │   │   └── AuditService.test.ts
│   │   └── utils/
│   │       ├── crypto.test.ts
│   │       ├── jwt.test.ts
│   │       └── validators.test.ts
│   ├── integration/
│   │   ├── api/
│   │   │   ├── agents.test.ts
│   │   │   ├── oauth2.test.ts
│   │   │   └── health.test.ts
│   │   └── database/
│   │       └── migrations.test.ts
│   └── fixtures/
│       ├── agents.json
│       ├── credentials.json
│       └── auditLogs.json
├── docs/
│   ├── README.md                  # This file
│   ├── architecture.md            # Architecture Decision Records
│   ├── openapi.yaml               # OpenAPI 3.0 specification
│   ├── deployment.md              # Deployment guide
│   ├── agntcy-alignment.md        # AGNTCY compliance documentation
│   ├── api-guide.md               # API usage guide
│   └── contributing.md            # Contribution guidelines
├── docker-compose.yml             # Local development stack
├── Dockerfile                     # Production image
├── .dockerignore                  # Docker build exclusions
├── .env.example                   # Environment template
├── .env.test                      # Test environment
├── .gitignore                     # Git exclusions
├── .eslintrc.js                   # ESLint configuration
├── .prettierrc.json               # Prettier configuration
├── tsconfig.json                  # TypeScript configuration
├── jest.config.js                 # Jest configuration
├── package.json                   # Dependencies and scripts
├── package-lock.json              # Locked dependencies
├── CHANGELOG.md                   # Version history
├── LICENSE                        # Open source license (MIT)
├── README.md                      # Project README
└── PRD.md                         # Product Requirements Document (this file)
```
**DRY Principles Applied**:
- ✅ Single `types/index.ts` for all interfaces (no duplication)
- ✅ Shared `utils/` for crypto, JWT, validation (no duplication)
- ✅ Centralized error handling in middleware (no duplication)
- ✅ Reusable service layer (no business logic in controllers)
- ✅ Configuration centralized in `config/` (no duplication)
- ✅ Database queries isolated in services (no duplication)
---
## 9. Development Workflow
### 9.1 Feature Development Process
**Step 1: Specification (Virtual Architect)**
- Write Architecture Decision Record (ADR)
- Define OpenAPI 3.0 specification
- Specify database schema
- List test cases
- CEO approves specification
**Step 2: Implementation (Virtual Principal Developer)**
- Create feature branch: `git checkout -b feature/agent-registry`
- Implement per specification
- Follow DRY and SOLID principles
- Add JSDoc comments
- Create unit tests (>80% coverage)
- Push to `git.sentryagent.ai`
**Step 3: Code Review (Virtual CTO)**
- Check compliance with standards
- Verify DRY principles
- Review test coverage
- Verify SOLID principles
- Approve or request changes
**Step 4: Testing (Virtual QA Engineer)**
- Run integration tests
- Test edge cases
- Verify AGNTCY alignment
- Verify OpenAPI spec matches implementation
- Sign off on quality
**Step 5: Deployment (Virtual CTO)**
- Merge to `develop` branch (squash commits)
- Delete feature branch
- Deploy to staging
- Deploy to production
### 9.2 Git Workflow
```bash
# Create feature branch from develop
git checkout develop
git pull origin develop
git checkout -b feature/agent-registry
# Make changes, commit with conventional commits
git add src/services/AgentService.ts
git commit -m "feat(agent): implement agent registry CRUD"
# Push to repository
git push origin feature/agent-registry
# Create pull request on git.sentryagent.ai
# Virtual CTO reviews and approves
# Virtual QA Engineer signs off
# Merge to develop (squash commits)
git checkout develop
git pull origin develop
git merge --squash feature/agent-registry
git commit -m "feat(agent): implement agent registry CRUD"
git push origin develop
# Delete feature branch
git branch -d feature/agent-registry
git push origin --delete feature/agent-registry
```
### 9.3 Code Review Checklist
Before any code is merged to `develop`, verify:
- [ ] TypeScript strict mode: `tsc --strict` passes
- [ ] No `any` types used
- [ ] No code duplication (DRY check)
- [ ] SOLID principles applied
- [ ] Unit tests included (>80% coverage)
- [ ] Integration tests included
- [ ] JSDoc comments present
- [ ] Error handling implemented
- [ ] No OWASP Top 10 vulnerabilities
- [ ] Performance acceptable (<200ms)
- [ ] Database migrations included
- [ ] OpenAPI specification updated
- [ ] Conventional commit message used
- [ ] Virtual CTO approval obtained
- [ ] Virtual QA Engineer sign-off obtained
---
## 10. OpenSpec Compliance
### 10.1 OpenAPI 3.0 Specification
**Location**: `docs/openapi.yaml`
**Mandatory for every endpoint**:
- Summary and description
- Request body schema (with validation rules)
- Response schemas (all status codes)
- Error response schemas
- Authentication requirements
- Example requests and responses
**Example OpenAPI Spec**:
```yaml
openapi: 3.0.0
info:
title: SentryAgent.ai Agent Identity Provider
version: 1.0.0
description: Free, open-source Agent Identity Provider
contact:
name: SentryAgent.ai
url: https://sentryagent.ai
servers:
- url: https://api.sentryagent.ai
description: Production
- url: http://localhost:3000
description: Development
paths:
/agents:
post:
summary: Create a new AI agent
operationId: createAgent
tags:
- Agents
requestBody:
required: true
content:
application/json:
schema:
$ref: '#/components/schemas/CreateAgentRequest'
responses:
'201':
description: Agent created successfully
content:
application/json:
schema:
$ref: '#/components/schemas/Agent'
'400':
description: Invalid request
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
'409':
description: Agent already exists
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
components:
schemas:
Agent:
type: object
required:
- id
- email
- agentType
- version
- capabilities
- owner
- deploymentEnv
- status
- createdAt
- updatedAt
properties:
id:
type: string
format: uuid
description: Unique agent identifier
email:
type: string
format: email
description: Agent email (agent-type-001@sentryagent.ai)
agentType:
type: string
description: AGNTCY agent type
version:
type: string
description: Semantic version
capabilities:
type: array
items:
type: string
description: Agent capabilities
owner:
type: string
description: Developer or team name
deploymentEnv:
type: string
enum: [development, staging, production]
status:
type: string
enum: [active, suspended, revoked, archived]
createdAt:
type: string
format: date-time
updatedAt:
type: string
format: date-time
Error:
type: object
required:
- code
- message
properties:
code:
type: string
description: Error code
message:
type: string
description: Error message
details:
type: object
description: Additional error details
```
### 10.2 AGNTCY Alignment
**Agent Identity Model** (AGNTCY-compliant):
```typescript
interface IAgent {
id: string; // Unique agent ID (UUID) — immutable
email: string; // agent-type-001@sentryagent.ai
agentType: string; // AGNTCY agent type
version: string; // Semantic versioning
capabilities: string[]; // AGNTCY capabilities
owner: string; // Developer/team name
deploymentEnv: string; // dev/staging/prod
status: string; // active/suspended/revoked/archived
createdAt: Date; // Agent creation timestamp
updatedAt: Date; // Last update timestamp
lastAuthAt?: Date; // Last authentication timestamp
metadata?: Record<string, unknown>; // AGNTCY metadata
}
```
**Audit Compliance**:
- ✅ Immutable audit logs (no deletion, no modification)
- ✅ All agent actions logged (creation, auth, revocation)
- ✅ Timestamps in ISO 8601 format
- ✅ Tamper-proof storage (PostgreSQL with constraints)
- ✅ Retention policy (90 days free tier, configurable)
**Policy Enforcement**:
- ✅ Least privilege by default
- ✅ Capability-based access control
- ✅ Revocation at scale
- ✅ Credential rotation on schedule
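The capability-based access control and revocation rules above can be sketched as a simple check of the agent's granted capabilities against those an endpoint requires. This is a hypothetical illustration; the real `authorization.ts` middleware may differ.

```typescript
// Hypothetical sketch of capability-based access control. Least privilege by
// default: access is granted only when EVERY required capability is present,
// and a non-active status (revocation, suspension) denies immediately.
interface AgentContext {
  id: string;
  capabilities: string[];
  status: 'active' | 'suspended' | 'revoked' | 'archived';
}

function isAuthorized(agent: AgentContext, required: string[]): boolean {
  if (agent.status !== 'active') return false; // revocation wins immediately
  return required.every((cap) => agent.capabilities.includes(cap));
}

const screener: AgentContext = {
  id: 'a1',
  capabilities: ['resume:read'],
  status: 'active',
};
console.log(isAuthorized(screener, ['resume:read']));  // true
console.log(isAuthorized(screener, ['resume:write'])); // false
```

Checking status before capabilities is what makes "revocation at scale" cheap: a single status flip denies every future request without touching the capability lists.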
---
## 11. Quality Gates & Metrics
### 11.1 Code Quality Standards
| Metric | Target | Tool | Enforcement |
|--------|--------|------|-------------|
| Test Coverage | >80% | Jest/nyc | Fail PR if <80% |
| TypeScript Strict | 100% | tsc --strict | Fail build if violations |
| Linting | 0 errors | ESLint | Fail PR if errors |
| Code Duplication | <5% | Manual review | CTO rejects if >5% |
| Security Scan | 0 high/critical | npm audit | Fail build if vulnerabilities |
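The >80% coverage gate can be enforced mechanically rather than by review alone: Jest's `coverageThreshold` option fails the test run when coverage drops below the target. This is a sketch written as `jest.config.ts`; the project's actual config is `jest.config.js` and may differ.

```typescript
// Sketch only: Jest aborts with a non-zero exit code when global coverage
// falls below these thresholds, enforcing the >80% gate from the table above.
import type { Config } from 'jest';

const config: Config = {
  preset: 'ts-jest',
  testEnvironment: 'node',
  collectCoverage: true,
  coverageThreshold: {
    global: {
      branches: 80,
      functions: 80,
      lines: 80,
      statements: 80,
    },
  },
};

export default config;
```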
### 11.2 Performance Standards
| Metric | Target | Measurement | Enforcement |
|--------|--------|-------------|-------------|
| Token Issuance | <100ms | Benchmark test | Fail if >100ms |
| API Response | <200ms | Integration test | Fail if >200ms |
| Database Query | <50ms | Query profiling | Fail if >50ms |
| Cache Hit Rate | >90% | Redis monitoring | Monitor weekly |
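The token-issuance and response-time gates above need a measurement harness. A minimal sketch, assuming a stubbed `issueToken` in place of the real token service; the function names and the p95 percentile choice are illustrative, not from the codebase:

```typescript
import { performance } from "node:perf_hooks";

// Stand-in for the real token service call — illustrative only.
async function issueToken(): Promise<string> {
  return "header.payload.signature";
}

/** Runs fn `runs` times and returns the 95th-percentile latency in ms. */
async function p95LatencyMs(fn: () => Promise<unknown>, runs = 100): Promise<number> {
  const samples: number[] = [];
  for (let i = 0; i < runs; i++) {
    const start = performance.now();
    await fn();
    samples.push(performance.now() - start);
  }
  samples.sort((a, b) => a - b);
  return samples[Math.floor(runs * 0.95)];
}

async function main(): Promise<void> {
  const p95 = await p95LatencyMs(issueToken);
  // Fail the gate if the 95th percentile exceeds the 100ms budget.
  if (p95 >= 100) throw new Error(`token issuance p95 ${p95.toFixed(2)}ms exceeds 100ms budget`);
  console.log(`p95 ${p95.toFixed(3)}ms, within 100ms budget`);
}

main();
```

In CI this would run against the real service via supertest, with the threshold read from configuration rather than hard-coded.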
### 11.3 Reliability Standards
| Metric | Target | Measurement |
|--------|--------|-------------|
| Uptime | 99.5% (Phase 2) | Monitoring dashboard |
| Error Rate | <0.1% | Error tracking |
| Recovery Time | <5 minutes | Runbook testing |
---
## 12. Deployment & Operations
### 12.1 Local Development Setup
```bash
# Clone repository
git clone https://git.sentryagent.ai/sentryagent-idp.git
cd sentryagent-idp
# Install dependencies
npm install
# Setup environment
cp .env.example .env
# Edit .env with local values
# Start services (PostgreSQL, Redis)
docker-compose up -d
# Run database migrations
npm run migrate
# Start development server
npm run dev
# Server runs on http://localhost:3000
# Swagger UI: http://localhost:3000/api-docs
```
### 12.2 Docker Deployment
```bash
# Build image
docker build -t sentryagent-idp:1.0.0 .
# Run container
docker run -p 3000:3000 \
-e NODE_ENV=production \
-e DATABASE_URL=postgresql://user:pass@db:5432/sentryagent \
-e REDIS_URL=redis://cache:6379 \
-e JWT_SECRET=your-secret-key \
-e JWT_ISSUER=https://api.sentryagent.ai \
sentryagent-idp:1.0.0
```
### 12.3 Docker Compose (Local Development)
```yaml
version: '3.9'
services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      NODE_ENV: development
      DATABASE_URL: postgresql://sentryagent:sentryagent@postgres:5432/sentryagent_idp
      REDIS_URL: redis://redis:6379
      JWT_SECRET: dev-secret-key-change-in-production
    depends_on:
      - postgres
      - redis
    volumes:
      - ./src:/app/src
    command: npm run dev
  postgres:
    image: postgres:15-alpine
    environment:
      POSTGRES_USER: sentryagent
      POSTGRES_PASSWORD: sentryagent
      POSTGRES_DB: sentryagent_idp
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data
volumes:
  postgres_data:
  redis_data:
```
### 12.4 Production Deployment Checklist
- [ ] Environment variables configured securely
- [ ] Database backups enabled (daily)
- [ ] SSL/TLS certificates installed
- [ ] Rate limiting configured
- [ ] Monitoring alerts set up
- [ ] Logging aggregation enabled
- [ ] Disaster recovery plan documented
- [ ] Security audit completed
- [ ] Load balancer configured
- [ ] CDN configured (if applicable)
- [ ] Health check endpoints verified
- [ ] Rollback procedure documented
---
## 13. Risk Management
### 13.1 Technical Risks
| Risk | Probability | Impact | Mitigation |
|------|-------------|--------|-----------|
| Database performance degradation | Medium | High | Connection pooling, caching, indexing |
| Token validation latency | Low | Medium | Redis cache, JWT caching |
| Credential compromise | Low | Critical | Encryption, audit logs, rotation, monitoring |
| API rate limiting bypass | Low | Medium | Token bucket algorithm, monitoring |
| Data loss | Very Low | Critical | Daily backups, replication, disaster recovery |
### 13.2 Mitigation Strategies
- **Code Review**: Catch issues early (Virtual CTO)
- **Testing**: >80% coverage (Virtual QA Engineer)
- **Monitoring**: Real-time alerts (Phase 2)
- **Documentation**: Clear runbooks for operations
- **Backups**: Daily database snapshots
- **Security**: Regular audits and penetration testing
---
## 14. Success Metrics & KPIs
### 14.1 Phase 1 MVP Success Criteria
**Technical**:
- ✅ All features implemented and tested
- ✅ >80% test coverage
- ✅ Zero critical security issues
- ✅ API response time <200ms
- ✅ Token issuance <100ms
- ✅ AGNTCY compliance verified
**Adoption**:
- ✅ 50+ agents registered in first month
- ✅ 10+ developers using the service
- ✅ Positive feedback on ease of use

README.md (946 changed lines)

@@ -6,8 +6,11 @@
 **Git Repository**: https://git.sentryagent.ai/
 **AI Partner**: Anthropic (Claude — All Development, Implementation & Deployment)
 **Standards**: AGNTCY (Linux Foundation), OpenAPI 3.0, OAuth 2.0, OIDC
+**Document Role**: Project orientation, team charter, and Claude session protocol
 **Last Updated**: 2026-03-28
-**Status**: ✅ Active — Phase 1 MVP
+**Status**: Active — Phase 1 MVP
+> **Product Requirements**: All scope, standards, and technical requirements are in **[PRD.md](./PRD.md)**
 ---
@@ -44,14 +47,15 @@ development, implementation, and deployment activities.
 When a new Claude session is started, Claude **MUST**:
-1. **Read this README.md** in full before any action
-2. **Adopt the Virtual Engineering Team roles** as defined in Section 4
-3. **Enforce all standards** defined in Section 6 without exception
-4. **Resume from last known state** (check git.sentryagent.ai for latest commits)
-5. **Report status** to CEO before proceeding
-6. **Never deviate** from the technology stack defined in Section 7
-7. **Never skip** OpenSpec documentation for any new endpoint or service
-8. **Always provide complete files** — no partial code, no placeholders
+1. **Read [PRD.md](./PRD.md)** in full before any action — this is the product requirements and single source of truth
+2. **Read this README.md** for team charter and session protocol
+3. **Adopt the Virtual Engineering Team roles** as defined in Section 4
+4. **Enforce all standards** defined in PRD.md Section 6 without exception
+5. **Resume from last known state** (check git.sentryagent.ai for latest commits)
+6. **Report status** to CEO before proceeding
+7. **Never deviate** from the technology stack defined in PRD.md Section 7
+8. **Never skip** OpenSpec documentation for any new endpoint or service
+9. **Always provide complete files** — no partial code, no placeholders
 ### 2.3 Claude Communication Protocol
@@ -74,12 +78,12 @@ A **free, open-source Agent Identity Provider** that provides:
 | Feature | Description | AGNTCY Alignment |
 |---------|-------------|-----------------|
-| **Agent Registry** | Unique, immutable agent IDs | ✅ First-class non-human identity |
-| **Authentication** | OAuth 2.0 Client Credentials | ✅ Standardized auth protocol |
-| **Authorization** | Scope-based access control | ✅ Capability-based governance |
-| **Lifecycle Management** | Provision, rotate, revoke | ✅ Full agent lifecycle |
-| **Audit Logs** | Immutable, compliance-ready | ✅ Accountability & governance |
-| **Developer SDK** | Node.js (Phase 1) | ✅ Developer-first experience |
+| **Agent Registry** | Unique, immutable agent IDs | First-class non-human identity |
+| **Authentication** | OAuth 2.0 Client Credentials | Standardized auth protocol |
+| **Authorization** | Scope-based access control | Capability-based governance |
+| **Lifecycle Management** | Provision, rotate, revoke | Full agent lifecycle |
+| **Audit Logs** | Immutable, compliance-ready | Accountability & governance |
+| **Developer SDK** | Node.js (Phase 1) | Developer-first experience |
 ### 3.2 Target Users
@@ -140,17 +144,27 @@ CEO (Human — SentryAgent.ai Founder)
 - Coordinate Virtual Architect, Principal Developer, and QA Engineer
 - Report weekly progress to CEO
 - Escalate scope changes and blockers to CEO immediately
+- **Post a completion confirmation to `#vpe-cto-approvals` after every CEO-authorized action** (include outcome + commit hash)
+- **Post an end-of-session summary before closing** any session with completed, pending, or in-progress work
 **Claude Session Startup (CTO Role)**:
 ```
-1. Read README.md (this file) in full
-2. Check git.sentryagent.ai for latest commits
-3. Identify current phase and sprint
-4. Report status to CEO
-5. Confirm today's priorities
-6. Begin work
+1. Read PRD.md in full
+2. Read README.md (this file) for team charter
+3. Check git.sentryagent.ai for latest commits
+4. Identify current phase and sprint
+5. Report status to CEO
+6. Confirm today's priorities
+7. Begin work
+8. Before closing: post end-of-session summary to #vpe-cto-approvals
+   (Completed / Pending — authorized but not executed / Requires CEO action)
 ```
+**Session Completion Protocol**:
+- "Authorized" = CEO approved. Action not yet executed.
+- "Committed / Completed / Deployed" = Action executed with evidence (commit hash, test results).
+- Never close a session with an authorized-but-unexecuted action without noting it in the end-of-session summary.
 ### 4.4 Virtual Architect (Claude — Anthropic)
 **Authority**: System design within CTO-approved architecture.
@@ -217,892 +231,8 @@ CEO (Human — SentryAgent.ai Founder)
 ---
-## 5. Project Scope
-### 5.1 Phase 1: MVP (Weeks 1–8)
-**Objective**: Prove the concept. Ship a production-ready AgentIdP.
+## 5. Product Requirements
+All product requirements, scope, engineering standards, technology stack, quality gates, and success metrics are defined in the standalone PRD:
+> **[PRD.md](./PRD.md)** — Product Requirements Document (single source of truth for all requirements)
#### In Scope ✅
| Feature | Owner | Priority |
|---------|-------|----------|
| Agent Registry Service (CRUD) | Principal Dev | P0 |
| OAuth 2.0 Token Service (Client Credentials) | Principal Dev | P0 |
| Credential Management (generate, rotate, revoke) | Principal Dev | P0 |
| Immutable Audit Log Service | Principal Dev | P0 |
| REST API (agents, tokens, audit) | Principal Dev | P0 |
| PostgreSQL database + migrations | Principal Dev | P0 |
| Redis caching layer | Principal Dev | P1 |
| Node.js SDK | Principal Dev | P1 |
| Docker containerization | Principal Dev | P1 |
| Unit & integration tests (>80% coverage) | QA Engineer | P0 |
| OpenAPI 3.0 documentation | Architect | P0 |
| Docker Compose (local dev) | Principal Dev | P1 |
| Deployment guide | Architect | P1 |
| AGNTCY alignment documentation | Architect | P1 |
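The OAuth 2.0 Client Credentials flow in scope above is, on the wire, a single form-encoded POST. A sketch of what the Node.js SDK might send; the `/oauth2/token` path and parameter names follow RFC 6749 conventions and are assumptions, not confirmed endpoint details:

```typescript
// Builds an RFC 6749 client_credentials token request body.
function buildTokenRequest(
  clientId: string,
  clientSecret: string,
  scopes: string[] = []
): URLSearchParams {
  const body = new URLSearchParams({
    grant_type: "client_credentials",
    client_id: clientId,
    client_secret: clientSecret,
  });
  // Scopes are space-delimited per the OAuth 2.0 spec.
  if (scopes.length > 0) body.set("scope", scopes.join(" "));
  return body;
}

// Usage sketch: POST the body to the IdP token endpoint (path illustrative).
async function fetchAccessToken(baseUrl: string, id: string, secret: string): Promise<string> {
  const res = await fetch(`${baseUrl}/oauth2/token`, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: buildTokenRequest(id, secret, ["agents:read"]),
  });
  if (!res.ok) throw new Error(`token request failed: ${res.status}`);
  const json = (await res.json()) as { access_token: string };
  return json.access_token;
}
```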
#### Out of Scope ❌ (Phase 2+)
| Feature | Phase |
|---------|-------|
| HashiCorp Vault integration | Phase 2 |
| Multi-region deployment | Phase 2 |
| Advanced policy engine (OPA) | Phase 2 |
| Web dashboard UI | Phase 2 |
| Python/Go/Java/Rust SDKs | Phase 2 |
| Prometheus + Grafana monitoring | Phase 2 |
| AGNTCY federation support | Phase 3 |
| W3C DID support | Phase 3 |
| Agent marketplace | Phase 3 |
| SOC 2 certification | Phase 3 |
### 5.2 Phase 2: Production-Ready (Weeks 9–20)
- HashiCorp Vault for secret management
- Multi-language SDKs (Python, Go, Java)
- Advanced policy engine (OPA integration)
- Web dashboard UI (React + TypeScript)
- Prometheus + Grafana monitoring
- Multi-region deployment (US, EU, APAC)
- SOC 2 Type II certification process
### 5.3 Phase 3: Ecosystem & Standards (Weeks 21–36)
- AGNTCY federation support
- W3C Decentralized Identifiers (DIDs)
- Agent marketplace
- Advanced compliance reporting
- Enterprise tier features
---
## 6. Engineering Standards (Non-Negotiable)
### 6.1 DRY — Don't Repeat Yourself
**Rule**: Zero code duplication. Every piece of logic exists in exactly one place.
**Implementation**:
| Pattern | Location | Purpose |
|---------|----------|---------|
| Type definitions | `src/types/index.ts` | Single source of truth |
| Crypto utilities | `src/utils/crypto.ts` | All crypto operations |
| JWT utilities | `src/utils/jwt.ts` | All JWT operations |
| Validation logic | `src/utils/validators.ts` | All input validation |
| Error classes | `src/utils/errors.ts` | All custom errors |
| DB queries | `src/services/` | All database access |
| HTTP middleware | `src/middleware/` | All cross-cutting concerns |
**Enforcement**:
- Virtual CTO reviews every PR for duplication
- ESLint rules flag repeated patterns
- No copy-paste code — ever
### 6.2 SOLID Principles
**S — Single Responsibility**:
- `AgentService`: Agent CRUD only — nothing else
- `OAuth2Service`: Token issuance only — nothing else
- `CredentialService`: Credential management only — nothing else
- `AuditService`: Audit logging only — nothing else
**O — Open/Closed**:
- All services implement interfaces
- New features extend, never modify existing code
- Plugin architecture for credential backends
**L — Liskov Substitution**:
- All service implementations are interchangeable
- Consistent error handling across all services
- Uniform response shapes across all endpoints
**I — Interface Segregation**:
- Separate read/write interfaces where applicable
- Minimal, focused interfaces — no fat interfaces
- Controllers depend on service interfaces, not implementations
**D — Dependency Inversion**:
- All dependencies injected via constructor
- Services depend on abstractions (interfaces)
- No direct instantiation of dependencies in business logic
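The Dependency Inversion rule above can be sketched with constructor injection; the interface and method names mirror the project's naming style but are illustrative, not its actual API:

```typescript
// Abstraction the service depends on, not a concrete repository.
interface IAgentRepository {
  findById(id: string): Promise<{ id: string; status: string } | null>;
}

// Business logic receives its dependency via the constructor;
// it never instantiates the repository itself.
class AgentService {
  constructor(private readonly repo: IAgentRepository) {}

  async isActive(id: string): Promise<boolean> {
    const agent = await this.repo.findById(id);
    return agent?.status === "active";
  }
}

// Any implementation of the interface is substitutable (Liskov),
// which also makes the service trivial to unit-test with a stub.
const stubRepo: IAgentRepository = {
  findById: async (id) => (id === "a-1" ? { id, status: "active" } : null),
};
const service = new AgentService(stubRepo);
```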
### 6.3 OpenSpec Standards (Mandatory)
**Rule**: Every API endpoint MUST have an OpenAPI 3.0 specification
BEFORE implementation begins. No exceptions.
**Process**:
```
1. Virtual Architect writes OpenAPI spec
2. CEO reviews and approves
3. Virtual Principal Developer implements
4. Virtual QA Engineer verifies spec matches implementation
5. Swagger UI auto-generated from spec
```
**OpenAPI Spec Location**: `docs/openapi.yaml`
**Required for every endpoint**:
- Summary and description
- Request body schema (with validation rules)
- Response schemas (all status codes)
- Error response schemas
- Authentication requirements
- Example requests and responses
### 6.4 TypeScript Strict Mode (Mandatory)
**Rule**: TypeScript strict mode is always enabled. No `any` types. Ever.
```json
{
  "compilerOptions": {
    "strict": true,
    "noImplicitAny": true,
    "strictNullChecks": true,
    "strictFunctionTypes": true,
    "strictBindCallApply": true,
    "strictPropertyInitialization": true,
    "noImplicitThis": true,
    "alwaysStrict": true,
    "noUnusedLocals": true,
    "noUnusedParameters": true,
    "noImplicitReturns": true,
    "noFallthroughCasesInSwitch": true
  }
}
```
### 6.5 Code Documentation Standards
**JSDoc required for**:
- All public classes
- All public methods
- All interfaces
- All complex logic blocks
**Example**:
```typescript
/**
 * Creates a new AI agent identity in the SentryAgent.ai registry.
 * Assigns a unique immutable ID and provisions credentials.
 *
 * @param {ICreateAgentRequest} request - Agent creation request
 * @returns {Promise<IAgent>} Created agent with assigned ID
 * @throws {AgentAlreadyExistsError} If email already registered
 * @throws {ValidationError} If request data is invalid
 *
 * @example
 * const agent = await agentService.createAgent({
 *   email: 'screener-001@sentryagent.ai',
 *   agentType: 'screener',
 *   version: 'v1.0.0',
 *   capabilities: ['resume:read'],
 *   owner: 'helloworld-team',
 *   deploymentEnv: 'production'
 * });
 */
async createAgent(request: ICreateAgentRequest): Promise<IAgent>
```
### 6.6 Error Handling Standards
**Rule**: All errors are explicit, typed, and handled. No silent failures.
```typescript
// Custom error hierarchy
class SentryAgentError extends Error {}
class ValidationError extends SentryAgentError {}
class AgentNotFoundError extends SentryAgentError {}
class AgentAlreadyExistsError extends SentryAgentError {}
class CredentialError extends SentryAgentError {}
class AuthenticationError extends SentryAgentError {}
class AuthorizationError extends SentryAgentError {}
class RateLimitError extends SentryAgentError {}
```
**All errors include**:
- Error code (machine-readable)
- Error message (human-readable)
- HTTP status code
- Stack trace (development only)
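A base class meeting those four requirements might look like this; it is a sketch only, and any field beyond code, message, and HTTP status is an assumption:

```typescript
class SentryAgentError extends Error {
  constructor(
    message: string,                    // human-readable
    public readonly code: string,       // machine-readable, e.g. AGENT_NOT_FOUND
    public readonly statusCode: number  // HTTP status for the error handler
  ) {
    super(message);
    this.name = this.constructor.name;
  }

  /** Shape returned to clients; stack is attached in development only. */
  toJSON(includeStack = process.env.NODE_ENV !== "production"): Record<string, unknown> {
    return {
      code: this.code,
      message: this.message,
      ...(includeStack && this.stack ? { stack: this.stack } : {}),
    };
  }
}

// Subclasses pin their code and status so throw sites stay one-liners.
class AgentNotFoundError extends SentryAgentError {
  constructor(id: string) {
    super(`Agent ${id} not found`, "AGENT_NOT_FOUND", 404);
  }
}
```

The global error-handling middleware can then branch on `instanceof SentryAgentError` and serialize with `toJSON()`, keeping unknown errors as opaque 500s.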
### 6.7 Git Standards
**Repository**: `https://git.sentryagent.ai/`
**Branch Strategy** (Git Flow):
- `main`: Production-ready code only
- `develop`: Integration branch for Phase work
- `feature/*`: Individual features (e.g., `feature/agent-registry`)
- `bugfix/*`: Bug fixes (e.g., `bugfix/token-validation`)
- `release/*`: Release preparation (e.g., `release/v1.0.0`)
**Commit Standards** (Conventional Commits):
```
feat(agent): implement agent registry CRUD
fix(oauth2): correct token expiration calculation
docs(api): update OpenAPI spec for /agents endpoint
test(credential): add rotation edge case tests
chore(deps): upgrade TypeScript to 5.3.3
```
**Pull Request Standards**:
- [ ] Feature branch created from `develop`
- [ ] OpenAPI spec updated (if API change)
- [ ] Unit tests added (>80% coverage)
- [ ] Integration tests added
- [ ] JSDoc comments added
- [ ] No code duplication (DRY check)
- [ ] SOLID principles followed
- [ ] Performance acceptable (<200ms)
- [ ] Security review passed
- [ ] Virtual CTO approval required
- [ ] Virtual QA Engineer sign-off required
- [ ] Merge to `develop` (squash commits)
- [ ] Delete feature branch
---
## 7. Technology Stack
### 7.1 Runtime & Language
| Component | Version | Rationale |
|-----------|---------|-----------|
| Node.js | 18+ (LTS) | Stable, widely used, excellent TypeScript support |
| TypeScript | 5.3+ | Strict mode, type safety, no `any` types |
| npm | 9+ | Standard package manager |
### 7.2 Web Framework & Middleware
| Component | Version | Purpose |
|-----------|---------|---------|
| Express.js | 4.18+ | Lightweight, battle-tested web framework |
| helmet | 7.1+ | Security headers (HSTS, CSP, etc.) |
| cors | 2.8+ | CORS handling |
| morgan | 1.10+ | HTTP request logging |
| pino | 8.17+ | Structured JSON logging |
| pino-http | 8.6+ | Express integration for Pino |
### 7.3 Database & Caching
| Component | Version | Purpose |
|-----------|---------|---------|
| PostgreSQL | 14+ | Primary database (ACID, reliability) |
| pg | 8.11+ | PostgreSQL client library |
| Redis | 7+ | Caching layer (token validation, sessions) |
| redis | 4.6+ | Redis client library |
### 7.4 Authentication & Security
| Component | Version | Purpose |
|-----------|---------|---------|
| jsonwebtoken | 9.1+ | JWT signing and verification |
| bcryptjs | 2.4+ | Password/secret hashing (10 salt rounds) |
| uuid | 9.0+ | Unique ID generation |
| crypto (Node.js built-in) | N/A | Cryptographic operations |
| dotenv | 16.3+ | Environment variable management |
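The stack signs tokens with `jsonwebtoken`; to show what that library produces, here is a dependency-free HS256 sketch using only Node's built-in `crypto`. It is illustrative: production code should use the library, a timing-safe comparison, and claim checks (`exp`, `iss`).

```typescript
import { createHmac } from "node:crypto";

const b64url = (data: string | Buffer): string =>
  Buffer.from(data).toString("base64url");

/** Signs a payload as an HS256 JWT: base64url(header).base64url(payload).hmac */
function signHS256(payload: Record<string, unknown>, secret: string): string {
  const header = b64url(JSON.stringify({ alg: "HS256", typ: "JWT" }));
  const body = b64url(JSON.stringify(payload));
  const signature = createHmac("sha256", secret)
    .update(`${header}.${body}`)
    .digest("base64url");
  return `${header}.${body}.${signature}`;
}

/** Verifies the signature and returns the decoded payload, or null. */
function verifyHS256(token: string, secret: string): Record<string, unknown> | null {
  const [header, body, signature] = token.split(".");
  const expected = createHmac("sha256", secret)
    .update(`${header}.${body}`)
    .digest("base64url");
  // A real verifier must use crypto.timingSafeEqual and validate exp/iss.
  if (signature !== expected) return null;
  return JSON.parse(Buffer.from(body, "base64url").toString("utf8"));
}
```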
### 7.5 Testing
| Component | Version | Purpose |
|-----------|---------|---------|
| Jest | 29.7+ | Unit and integration testing |
| @types/jest | 29.5+ | TypeScript types for Jest |
| ts-jest | 29.1+ | Jest + TypeScript integration |
| supertest | 6.3+ | HTTP endpoint testing |
| @testing-library/node | Latest | Node.js testing utilities |
### 7.6 Code Quality & Linting
| Component | Version | Purpose |
|-----------|---------|---------|
| ESLint | 8.56+ | Code linting and style |
| @typescript-eslint/parser | 6.17+ | TypeScript parsing for ESLint |
| @typescript-eslint/eslint-plugin | 6.17+ | TypeScript-specific rules |
| Prettier | 3.1+ | Code formatting |
### 7.7 Documentation & API
| Component | Version | Purpose |
|-----------|---------|---------|
| swagger-ui-express | 4.6+ | Interactive API documentation |
| joi | 17.11+ | Schema validation |
### 7.8 Deployment & Containerization
| Component | Version | Purpose |
|-----------|---------|---------|
| Docker | 24+ | Container runtime |
| Docker Compose | 2.20+ | Local development orchestration |
| Alpine Linux | 3.18 | Minimal base image |
### 7.9 Validation & Schema
| Component | Version | Purpose |
|-----------|---------|---------|
| Joi | 17.11+ | Request/response schema validation |
---
## 8. Project Structure (DRY Compliance)
```
sentryagent-idp/
├── src/
│   ├── config/
│   │   ├── env.ts                 # Environment variables
│   │   ├── database.ts            # PostgreSQL connection pool
│   │   ├── redis.ts               # Redis client
│   │   └── logger.ts              # Pino logger configuration
│   │
│   ├── types/
│   │   └── index.ts               # All TypeScript interfaces (single source of truth)
│   │
│   ├── models/
│   │   ├── Agent.ts               # Agent entity
│   │   ├── Credential.ts          # Credential entity
│   │   ├── AuditLog.ts            # Audit log entity
│   │   └── Token.ts               # Token entity
│   │
│   ├── services/
│   │   ├── AgentService.ts        # Agent CRUD (no duplication)
│   │   ├── OAuth2Service.ts       # Token issuance (no duplication)
│   │   ├── CredentialService.ts   # Credential management (no duplication)
│   │   ├── AuditService.ts        # Audit logging (no duplication)
│   │   └── TokenService.ts        # Token operations (no duplication)
│   │
│   ├── controllers/
│   │   ├── AgentController.ts     # Agent endpoints
│   │   ├── OAuth2Controller.ts    # OAuth 2.0 endpoints
│   │   └── HealthController.ts    # Health check endpoint
│   │
│   ├── middleware/
│   │   ├── authentication.ts      # Bearer token validation
│   │   ├── authorization.ts       # Scope-based access control
│   │   ├── errorHandler.ts        # Global error handling
│   │   ├── logging.ts             # Request/response logging
│   │   ├── validation.ts          # Request validation
│   │   └── rateLimit.ts           # Rate limiting
│   │
│   ├── utils/
│   │   ├── crypto.ts              # Crypto utilities (hashing, secrets)
│   │   ├── jwt.ts                 # JWT utilities (sign, verify)
│   │   ├── validators.ts          # Input validation (reusable)
│   │   ├── errors.ts              # Custom error classes
│   │   └── helpers.ts             # General utilities
│   │
│   ├── routes/
│   │   ├── agents.ts              # Agent routes
│   │   ├── oauth2.ts              # OAuth 2.0 routes
│   │   └── health.ts              # Health routes
│   │
│   ├── migrations/
│   │   ├── 001_create_agents_table.sql
│   │   ├── 002_create_credentials_table.sql
│   │   └── 003_create_audit_logs_table.sql
│   │
│   ├── app.ts                     # Express app setup
│   └── server.ts                  # Server entry point
│
├── tests/
│   ├── unit/
│   │   ├── services/
│   │   │   ├── AgentService.test.ts
│   │   │   ├── OAuth2Service.test.ts
│   │   │   ├── CredentialService.test.ts
│   │   │   └── AuditService.test.ts
│   │   └── utils/
│   │       ├── crypto.test.ts
│   │       ├── jwt.test.ts
│   │       └── validators.test.ts
│   │
│   ├── integration/
│   │   ├── api/
│   │   │   ├── agents.test.ts
│   │   │   ├── oauth2.test.ts
│   │   │   └── health.test.ts
│   │   └── database/
│   │       └── migrations.test.ts
│   │
│   └── fixtures/
│       ├── agents.json
│       ├── credentials.json
│       └── auditLogs.json
│
├── docs/
│   ├── README.md                  # This file
│   ├── architecture.md            # Architecture Decision Records
│   ├── openapi.yaml               # OpenAPI 3.0 specification
│   ├── deployment.md              # Deployment guide
│   ├── agntcy-alignment.md        # AGNTCY compliance documentation
│   ├── api-guide.md               # API usage guide
│   └── contributing.md            # Contribution guidelines
│
├── docker-compose.yml             # Local development stack
├── Dockerfile                     # Production image
├── .dockerignore                  # Docker build exclusions
├── .env.example                   # Environment template
├── .env.test                      # Test environment
├── .gitignore                     # Git exclusions
├── .eslintrc.js                   # ESLint configuration
├── .prettierrc.json               # Prettier configuration
├── tsconfig.json                  # TypeScript configuration
├── jest.config.js                 # Jest configuration
├── package.json                   # Dependencies and scripts
├── package-lock.json              # Locked dependencies
├── CHANGELOG.md                   # Version history
├── LICENSE                        # Open source license (MIT)
└── README.md                      # Project README
```
**DRY Principles Applied**:
- ✅ Single `types/index.ts` for all interfaces (no duplication)
- ✅ Shared `utils/` for crypto, JWT, validation (no duplication)
- ✅ Centralized error handling in middleware (no duplication)
- ✅ Reusable service layer (no business logic in controllers)
- ✅ Configuration centralized in `config/` (no duplication)
- ✅ Database queries isolated in services (no duplication)
---
## 9. Development Workflow
### 9.1 Feature Development Process
**Step 1: Specification (Virtual Architect)**
- Write Architecture Decision Record (ADR)
- Define OpenAPI 3.0 specification
- Specify database schema
- List test cases
- CEO approves specification
**Step 2: Implementation (Virtual Principal Developer)**
- Create feature branch: `git checkout -b feature/agent-registry`
- Implement per specification
- Follow DRY and SOLID principles
- Add JSDoc comments
- Create unit tests (>80% coverage)
- Push to `git.sentryagent.ai`
**Step 3: Code Review (Virtual CTO)**
- Check compliance with standards
- Verify DRY principles
- Review test coverage
- Verify SOLID principles
- Approve or request changes
**Step 4: Testing (Virtual QA Engineer)**
- Run integration tests
- Test edge cases
- Verify AGNTCY alignment
- Verify OpenAPI spec matches implementation
- Sign off on quality
**Step 5: Deployment (Virtual CTO)**
- Merge to `develop` branch (squash commits)
- Delete feature branch
- Deploy to staging
- Deploy to production
### 9.2 Git Workflow
```bash
# Create feature branch from develop
git checkout develop
git pull origin develop
git checkout -b feature/agent-registry
# Make changes, commit with conventional commits
git add src/services/AgentService.ts
git commit -m "feat(agent): implement agent registry CRUD"
# Push to repository
git push origin feature/agent-registry
# Create pull request on git.sentryagent.ai
# Virtual CTO reviews and approves
# Virtual QA Engineer signs off
# Merge to develop (squash commits)
git checkout develop
git pull origin develop
git merge --squash feature/agent-registry
git commit -m "feat(agent): implement agent registry CRUD"
git push origin develop
# Delete feature branch
git branch -d feature/agent-registry
git push origin --delete feature/agent-registry
```
### 9.3 Code Review Checklist
Before any code is merged to `develop`, verify:
- [ ] TypeScript strict mode: `tsc --strict` passes
- [ ] No `any` types used
- [ ] No code duplication (DRY check)
- [ ] SOLID principles applied
- [ ] Unit tests included (>80% coverage)
- [ ] Integration tests included
- [ ] JSDoc comments present
- [ ] Error handling implemented
- [ ] No OWASP Top 10 vulnerabilities
- [ ] Performance acceptable (<200ms)
- [ ] Database migrations included
- [ ] OpenAPI specification updated
- [ ] Conventional commit message used
- [ ] Virtual CTO approval obtained
- [ ] Virtual QA Engineer sign-off obtained
---
## 10. OpenSpec Compliance
### 10.1 OpenAPI 3.0 Specification
**Location**: `docs/openapi.yaml`
**Mandatory for every endpoint**:
- Summary and description
- Request body schema (with validation rules)
- Response schemas (all status codes)
- Error response schemas
- Authentication requirements
- Example requests and responses
**Example OpenAPI Spec**:
```yaml
openapi: 3.0.0
info:
  title: SentryAgent.ai Agent Identity Provider
  version: 1.0.0
  description: Free, open-source Agent Identity Provider
  contact:
    name: SentryAgent.ai
    url: https://sentryagent.ai
servers:
  - url: https://api.sentryagent.ai
    description: Production
  - url: http://localhost:3000
    description: Development
paths:
  /agents:
    post:
      summary: Create a new AI agent
      operationId: createAgent
      tags:
        - Agents
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/CreateAgentRequest'
      responses:
        '201':
          description: Agent created successfully
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Agent'
        '400':
          description: Invalid request
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Error'
        '409':
          description: Agent already exists
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Error'
components:
  schemas:
    Agent:
      type: object
      required:
        - id
        - email
        - agentType
        - version
        - capabilities
        - owner
        - deploymentEnv
        - status
        - createdAt
        - updatedAt
      properties:
        id:
          type: string
          format: uuid
          description: Unique agent identifier
        email:
          type: string
          format: email
          description: Agent email (agent-type-001@sentryagent.ai)
        agentType:
          type: string
          description: AGNTCY agent type
        version:
          type: string
          description: Semantic version
        capabilities:
          type: array
          items:
            type: string
          description: Agent capabilities
        owner:
          type: string
          description: Developer or team name
        deploymentEnv:
          type: string
          enum: [development, staging, production]
        status:
          type: string
          enum: [active, suspended, revoked, archived]
        createdAt:
          type: string
          format: date-time
        updatedAt:
          type: string
          format: date-time
    Error:
      type: object
      required:
        - code
        - message
      properties:
        code:
          type: string
          description: Error code
        message:
          type: string
          description: Error message
        details:
          type: object
          description: Additional error details
```
### 10.2 AGNTCY Alignment
**Agent Identity Model** (AGNTCY-compliant):
```typescript
interface IAgent {
  id: string;                         // Unique agent ID (UUID) — immutable
  email: string;                      // agent-type-001@sentryagent.ai
  agentType: string;                  // AGNTCY agent type
  version: string;                    // Semantic versioning
  capabilities: string[];             // AGNTCY capabilities
  owner: string;                      // Developer/team name
  deploymentEnv: string;              // dev/staging/prod
  status: string;                     // active/suspended/revoked/archived
  createdAt: Date;                    // Agent creation timestamp
  updatedAt: Date;                    // Last update timestamp
  lastAuthAt?: Date;                  // Last authentication timestamp
  metadata?: Record<string, unknown>; // AGNTCY metadata
}
```
**Audit Compliance**:
- ✅ Immutable audit logs (no deletion, no modification)
- ✅ All agent actions logged (creation, auth, revocation)
- ✅ Timestamps in ISO 8601 format
- ✅ Tamper-proof storage (PostgreSQL with constraints)
- ✅ Retention policy (90 days free tier, configurable)
**Policy Enforcement**:
- ✅ Least privilege by default
- ✅ Capability-based access control
- ✅ Revocation at scale
- ✅ Credential rotation on schedule
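Capability-based access control, as enforced above, reduces to a subset check at authorization time: every scope an endpoint requires must appear in the token. A minimal sketch; the scope strings and function name are illustrative:

```typescript
/** True only if the token grants every scope the endpoint requires. */
function hasCapabilities(granted: string[], required: string[]): boolean {
  const grantedSet = new Set(granted);
  return required.every((scope) => grantedSet.has(scope));
}

// Least privilege by default: an empty grant satisfies nothing
// except an endpoint that requires no scopes at all.
hasCapabilities(["resume:read", "resume:score"], ["resume:read"]); // true
hasCapabilities(["resume:read"], ["resume:read", "resume:write"]); // false
```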
---
## 11. Quality Gates & Metrics
### 11.1 Code Quality Standards
| Metric | Target | Tool | Enforcement |
|--------|--------|------|-------------|
| Test Coverage | >80% | Jest/nyc | Fail PR if <80% |
| TypeScript Strict | 100% | tsc --strict | Fail build if violations |
| Linting | 0 errors | ESLint | Fail PR if errors |
| Code Duplication | <5% | Manual review | CTO rejects if >5% |
| Security Scan | 0 high/critical | npm audit | Fail build if vulnerabilities |
### 11.2 Performance Standards
| Metric | Target | Measurement | Enforcement |
|--------|--------|-------------|-------------|
| Token Issuance | <100ms | Benchmark test | Fail if >100ms |
| API Response | <200ms | Integration test | Fail if >200ms |
| Database Query | <50ms | Query profiling | Fail if >50ms |
| Cache Hit Rate | >90% | Redis monitoring | Monitor weekly |
### 11.3 Reliability Standards
| Metric | Target | Measurement |
|--------|--------|-------------|
| Uptime | 99.5% (Phase 2) | Monitoring dashboard |
| Error Rate | <0.1% | Error tracking |
| Recovery Time | <5 minutes | Runbook testing |
---
## 12. Deployment & Operations
### 12.1 Local Development Setup
```bash
# Clone repository
git clone https://git.sentryagent.ai/sentryagent-idp.git
cd sentryagent-idp
# Install dependencies
npm install
# Setup environment
cp .env.example .env
# Edit .env with local values
# Start services (PostgreSQL, Redis)
docker-compose up -d
# Run database migrations
npm run migrate
# Start development server
npm run dev
# Server runs on http://localhost:3000
# Swagger UI: http://localhost:3000/api-docs
```
### 12.2 Docker Deployment
```bash
# Build image
docker build -t sentryagent-idp:1.0.0 .
# Run container
docker run -p 3000:3000 \
-e NODE_ENV=production \
-e DATABASE_URL=postgresql://user:pass@db:5432/sentryagent \
-e REDIS_URL=redis://cache:6379 \
-e JWT_SECRET=your-secret-key \
-e JWT_ISSUER=https://api.sentryagent.ai \
sentryagent-idp:1.0.0
```
### 12.3 Docker Compose (Local Development)
```yaml
version: '3.9'
services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      NODE_ENV: development
      DATABASE_URL: postgresql://sentryagent:sentryagent@postgres:5432/sentryagent_idp
      REDIS_URL: redis://redis:6379
      JWT_SECRET: dev-secret-key-change-in-production
    depends_on:
      - postgres
      - redis
    volumes:
      - ./src:/app/src
    command: npm run dev
  postgres:
    image: postgres:15-alpine
    environment:
      POSTGRES_USER: sentryagent
      POSTGRES_PASSWORD: sentryagent
      POSTGRES_DB: sentryagent_idp
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data
volumes:
  postgres_data:
  redis_data:
```
### 12.4 Production Deployment Checklist
- [ ] Environment variables configured securely
- [ ] Database backups enabled (daily)
- [ ] SSL/TLS certificates installed
- [ ] Rate limiting configured
- [ ] Monitoring alerts set up
- [ ] Logging aggregation enabled
- [ ] Disaster recovery plan documented
- [ ] Security audit completed
- [ ] Load balancer configured
- [ ] CDN configured (if applicable)
- [ ] Health check endpoints verified
- [ ] Rollback procedure documented
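As a sketch of the first checklist item, a startup guard can fail fast when a required variable is absent (variable names taken from Section 12.2; the actual service may validate configuration differently):

```typescript
// Fail-fast check for required production configuration (illustrative sketch).
const required = ['NODE_ENV', 'DATABASE_URL', 'REDIS_URL', 'JWT_SECRET', 'JWT_ISSUER'];
const missing = required.filter((name) => !process.env[name]);

if (missing.length > 0) {
  // Refuse to start rather than fail later with a confusing runtime error.
  console.error(`Missing required environment variables: ${missing.join(', ')}`);
}
```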
---
## 13. Risk Management
### 13.1 Technical Risks
| Risk | Probability | Impact | Mitigation |
|------|-------------|--------|-----------|
| Database performance degradation | Medium | High | Connection pooling, caching, indexing |
| Token validation latency | Low | Medium | Redis cache, JWT caching |
| Credential compromise | Low | Critical | Encryption, audit logs, rotation, monitoring |
| API rate limiting bypass | Low | Medium | Token bucket algorithm, monitoring |
| Data loss | Very Low | Critical | Daily backups, replication, disaster recovery |
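The token bucket mitigation named in the table can be sketched as follows (a minimal in-memory version; the production limiter presumably runs against Redis and may differ):

```typescript
// Minimal token-bucket rate limiter sketch (assumed shape, not the service's code).
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(private capacity: number, private refillPerSec: number) {
    this.tokens = capacity;
    this.lastRefill = Date.now();
  }

  /** Refill based on elapsed time, then try to consume one token. */
  tryConsume(): boolean {
    const now = Date.now();
    const elapsedSec = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSec);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false; // caller should respond 429 Too Many Requests
  }
}

const bucket = new TokenBucket(5, 1); // burst of 5, refills 1 token/sec
```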
### 13.2 Mitigation Strategies
- **Code Review**: Catch issues early (Virtual CTO)
- **Testing**: >80% coverage (Virtual QA Engineer)
- **Monitoring**: Real-time alerts (Phase 2)
- **Documentation**: Clear runbooks for operations
- **Backups**: Daily database snapshots
- **Security**: Regular audits and penetration testing
---
## 14. Success Metrics & KPIs
### 14.1 Phase 1 MVP Success Criteria
**Technical**:
- [ ] All features implemented and tested
- [ ] >80% test coverage
- [ ] Zero critical security issues
- [ ] API response time <200ms
- [ ] Token issuance <100ms
- [ ] AGNTCY compliance verified
**Adoption**:
- [ ] 50+ agents registered in first month
- [ ] 10+ developers using the service
- [ ] Positive feedback on ease of use

TBC/charter.md Normal file
# Technical & Business Consultant (TBC) — Charter
**Document No.:** TBC-CHARTER-001
**Project:** SentryAgent.ai AgentIdP
**Owner:** CEO
---
## Revision History
| Rev | Date | Author | Description |
|-----|------|--------|-------------|
| 1.0 | 2026-04-07 | CEO / TBC | Initial charter — established in founding session |
---
## 1. Role Definition
The Technical & Business Consultant (TBC) is a direct report to the CEO of SentryAgent.ai. The TBC operates as an independent advisory function — separate from the engineering execution chain.
## 2. Reporting Structure
```
CEO (Human)
├── Virtual CTO → engineering execution, follows OpenSpec Protocol
├── Lead Validator → independent V&V audit, follows OpenSpec Protocol
└── Technical & Business Consultant (TBC) → advisory only, reports to CEO only
```
- TBC reports exclusively to the CEO
- TBC does NOT interact with the CTO or Lead Validator directly
- TBC does NOT manage any engineering work
- TBC does NOT follow OpenSpec Protocol (advisory role, not execution role)
## 3. Scope of Responsibilities
- Advise the CEO on strategic and technical decisions before they are delegated to the CTO
- Review processes and identify gaps, risks, or improvement opportunities
- Maintain portfolio-level thinking across all SentryAgent.ai products and initiatives
- Challenge assumptions independently — without being inside the execution chain
- Serve as the CEO's thinking partner as the virtual factory scales
## 4. Document & Change Authority
TBC MAY propose changes to CLAUDE.md, README.md, and PRD.md.
TBC MAY NOT implement those changes directly. All changes to controlled documents follow this process:
| Step | Owner |
|------|-------|
| Identify and document the proposed change | TBC (in meeting minutes) |
| Review and approve the proposal | CEO |
| Instruct CTO to implement via OpenSpec Protocol | CEO → CTO |
| Raise OpenSpec change, implement, and commit | CTO |
## 5. Record Keeping (ISO 9000)
**"If it is not written, it does not exist."**
TBC maintains written records of all working sessions with the CEO. Records are stored in:
```
TBC/
├── charter.md # This document
└── minutes/
└── TBC-MIN-NNN-YYYY-MM-DD.md # Meeting minutes, sequentially numbered
```
All minutes follow the standard format defined in TBC-MIN-001.
## 6. Operating Principles
1. Advisory only — influence flows through the CEO, never direct to the team
2. Written record of every session — no exceptions
3. Independent perspective — not captured by execution priorities
4. ISO 9000 discipline — every document has revision history, date, and owner
5. Portfolio thinking — always considering the broader virtual factory, not just the current sprint

TBC/minutes/TBC-MIN-001-2026-04-07.md Normal file
# Meeting Minutes
**Document No.:** TBC-MIN-001
**Project:** SentryAgent.ai AgentIdP
**Meeting Type:** Working Session — CEO & TBC (Inaugural)
---
## Revision History
| Rev | Date | Author | Description |
|-----|------|--------|-------------|
| 1.0 | 2026-04-07 | TBC | Initial minutes — inaugural session |
---
## Meeting Details
| Field | Detail |
|-------|--------|
| Date | 2026-04-07 |
| Participants | CEO (Human), TBC (Claude — Technical & Business Consultant) |
| Session Type | Strategic advisory |
---
## 1. Project Status at Session Open
The following state was confirmed at session open via hub message review and git status:
| Item | Status |
|------|--------|
| Phase | Phase 6 — COMPLETE (dev freeze in effect) |
| V&V | PASS — all 6 issues resolved |
| Field trial | Unblocked but not yet started |
| Pending commit | 5 uncommitted files (V&V resolution changes) — authorized but not executed by CTO |
| Active OpenSpec changes | 0 at session open |
---
## 2. Topics Discussed
### 2.1 Process Gap — Authorization vs. Execution Handoff
**Issue raised:** The CTO received CEO authorization (msg #93) to commit outstanding V&V resolution changes. The session ended before the CTO confirmed completion. Five files remained uncommitted, and field trial status was ambiguous.
**Root cause identified:** The process had no completion gate. Authorization was treated as the finish line. There was no protocol requiring the CTO to confirm execution back to the CEO.
**CEO direction:** Treat this as a process flaw, not a blame issue. Identify the gap and fix it.
**Resolution:** TBC proposed three process improvements:
1. Mandatory completion confirmation after every CEO-authorized action
2. End-of-session summary required before CTO closes any session
3. Explicit "authorized vs. done" vocabulary — never interchangeable
**Outcome:** CEO approved all three recommendations. OpenSpec change `process-governance-handoff-gap` raised and implemented. CLAUDE.md, README.md, and `docs/engineering/08-workflow.md` updated. *(See OpenSpec change record for full detail.)*
---
### 2.2 Company Vision Confirmed
**CEO confirmed the primary objective:**
> *"SentryAgent.ai is building the world's first free, open-source identity provider specifically for AI agents — think of it as 'Auth0 for agents.'"*
This statement is the north star for all product, process, and portfolio decisions.
---
### 2.3 Virtual Factory Model — Strategic Direction
**CEO introduced the virtual factory concept:**
SentryAgent.ai operates as a virtual factory:
- CEO is human — sole human principal
- Entire engineering team is virtual (LLM-powered)
- CEO has 30+ years managing global engineering teams, building real-time unified communications products generating hundreds of billions in sales
- AgentIdP (Phase 6 complete) is proof of concept for the factory model
**Strategic direction stated by CEO:** The company must now think beyond a single product. The virtual factory must be capable of running multiple product pipelines simultaneously.
**Three goals established:**
| # | Goal |
|---|------|
| 1 | **Product** — AgentIdP: "Auth0 for agents." Ship, prove, grow. |
| 2 | **Process** — World-class engineering operations. The virtual factory is the competitive moat. |
| 3 | **People (Virtual)** — Empower the virtual team with the right structure and governance. |
---
### 2.4 TBC Role — Established
**CEO decision:** A Technical & Business Consultant (TBC) role is established as a direct report to the CEO, alongside the Virtual CTO and Lead Validator.
**Org structure confirmed:**
```
CEO (Human)
├── Virtual CTO → engineering execution, OpenSpec Protocol
├── Lead Validator → independent V&V audit, OpenSpec Protocol
└── Technical & Business Consultant (TBC) → advisory only, CEO only
```
**Key characteristics of TBC role:**
- Reports to CEO only — no interaction with CTO or Validator
- Not bound by OpenSpec Protocol
- Advisory function — does not execute engineering work
- Maintains written records of all CEO sessions (ISO 9000 discipline)
---
### 2.5 Change Authority — Governance Decision
**Question raised:** Should TBC be allowed to make changes to CLAUDE.md, README.md, and PRD.md directly?
**Decision:** TBC may PROPOSE changes. TBC may NOT implement them directly.
**Approved process:**
| Step | Owner |
|------|-------|
| Identify and document proposed change | TBC (in meeting minutes) |
| Review and approve | CEO |
| Instruct CTO to implement via OpenSpec Protocol | CEO → CTO |
| Raise OpenSpec change, implement, commit | CTO |
**Rationale:** All changes to controlled documents must go through OpenSpec. This keeps the change audit trail clean and ensures the CTO remains the sole execution owner. TBC influence flows through the CEO — not directly to the team.
---
### 2.6 TBC Directory — Established
TBC directory created at project root:
```
TBC/
├── charter.md # TBC role charter (TBC-CHARTER-001)
└── minutes/
└── TBC-MIN-001-2026-04-07.md # This document
```
ISO 9000 convention adopted: all documents carry document number, revision history, date, and author.
---
## 3. Decisions Made
| # | Decision | Owner |
|---|----------|-------|
| D1 | Process gap (authorization vs. execution) fixed via OpenSpec change `process-governance-handoff-gap` | CTO (implemented) |
| D2 | Company vision confirmed: "Auth0 for agents" | CEO |
| D3 | Virtual factory must scale to multiple products — strategic direction set | CEO |
| D4 | Three-goal framework established: Product / Process / People | CEO |
| D5 | TBC role established as CEO direct report | CEO |
| D6 | TBC operates outside OpenSpec; proposes changes only — CTO implements | CEO |
| D7 | TBC directory and ISO 9000 minutes convention established | CEO / TBC |
---
## 4. Open Items / Actions
| # | Action | Owner | Status |
|---|--------|-------|--------|
| A1 | CTO to commit outstanding V&V resolution changes and confirm with commit hash | CTO | **Pending — awaiting CEO instruction to CTO** |
| A2 | CEO to authorize field trial execution once A1 is confirmed | CEO | Pending A1 |
| A3 | Update CLAUDE.md to add TBC role to org structure and startup protocol | CTO via OpenSpec | **Proposed — pending CEO authorization** |
| A4 | Define next product(s) for the virtual factory | CEO / TBC | Future session |
---
## 5. Next Session Priorities
1. Close A1 — instruct CTO to execute the pending commit
2. Authorize field trial (A2) once commit is confirmed
3. Begin scoping A3 — update controlled documents to reflect TBC role formally
4. Start portfolio thinking: what is product #2 for the virtual factory?
---
*End of minutes — TBC-MIN-001 | Rev 1.0 | 2026-04-07*

TBC/minutes/TBC-MIN-002-2026-04-07.md Normal file
# Meeting Minutes
**Document No.:** TBC-MIN-002
**Project:** SentryAgent.ai AgentIdP
**Meeting Type:** Working Session — CEO & TBC (Session 2 — Opening)
---
## Revision History
| Rev | Date | Author | Description |
|-----|------|--------|-------------|
| 1.0 | 2026-04-07 | TBC | Initial minutes — session 2 opening |
---
## Meeting Details
| Field | Detail |
|-------|--------|
| Date | 2026-04-07 |
| Participants | CEO (Human), TBC (Claude — Technical & Business Consultant) |
| Session Type | Strategic advisory — opening exchange |
---
## 1. Project Status at Session Open
Carried forward from TBC-MIN-001:
| Item | Status |
|------|--------|
| Phase | Phase 6 — COMPLETE (dev freeze in effect) |
| V&V | PASS — all 6 issues resolved |
| Field trial | Unblocked but not yet started |
| A1: CTO pending commit | Still outstanding — not confirmed in prior session |
| A2: Field trial authorization | Pending A1 |
| A3: CLAUDE.md TBC update | Proposed — pending CEO authorization to CTO |
---
## 2. Topics Discussed
### 2.1 Session Agenda — Established
CEO confirmed the agenda for this session:
> *"We discuss our company needs and based on that we will develop our agent."*
This session will focus on:
1. Identifying company needs / strategic priorities
2. Scoping and developing the next agent based on those needs
Implementation (if any) will follow the standard CEO → CTO delegation path.
### 2.2 TBC Channel — Created
`#tbc-ceo` channel created on central hub (did not exist previously). All future TBC ↔ CEO communication will use this channel.
---
## 3. Decisions Made
| # | Decision | Owner |
|---|----------|-------|
| D1 | Session agenda: discuss company needs, then develop an agent | CEO |
---
## 4. Open Items / Actions
| # | Action | Owner | Status |
|---|--------|-------|--------|
| A1 | CTO to commit outstanding V&V resolution changes + confirm with hash | CTO | Pending |
| A2 | CEO to authorize field trial once A1 confirmed | CEO | Pending A1 |
| A3 | Update CLAUDE.md to formally add TBC to org structure | CTO via OpenSpec | Proposed — pending CEO authorization |
| A4 | Discuss company needs → scope next agent | CEO / TBC | **In progress — resuming next exchange** |
---
## 5. Next Session Priorities
1. CEO to present company needs / strategic priorities
2. TBC to advise on agent scoping based on those needs
3. CEO to delegate to CTO if implementation is authorized
---
*End of minutes — TBC-MIN-002 | Rev 1.0 | 2026-04-07 | Session paused — CEO on break*

VALIDATOR.md Normal file
# SentryAgent.ai — V&V Architect (Lead Validator)
## IDENTITY & INDEPENDENCE
You are the **V&V Architect (Lead Validator)** for SentryAgent.ai AgentIdP.
- **Instance ID:** `LeadValidator`
- **Role:** Independent verification and validation — you are NOT part of the engineering team
- **Authority:** You report findings directly to the CEO. The CTO has no authority to dismiss your findings.
- **Mandate:** Ensure that everything the engineering team built actually matches what was specified in the PRD and OpenSpec
- **Isolation:** Do NOT carry context from any other project or session. This is a private, independent audit session.
You are a check on the system — not a builder. You never implement features, never approve architectural changes, and never take direction from the Virtual CTO. Your only job is to find gaps, deviations, and violations and formally log them.
---
## STARTUP PROTOCOL (Execute on every new session — no exceptions)
Execute these steps in order before doing anything else:
### Step 1 — Read the source of truth
Read `/home/ubuntu/vj_ai_agents_dev/sentryagent-idp/README.md` in full.
This is the PRD. Everything the engineering team built must conform to it.
### Step 2 — Register on central hub
Register as `LeadValidator` on the central hub.
### Step 3 — Check existing open issues
Read all files in `/home/ubuntu/vj_ai_agents_dev/sentryagent-idp/openspec/vv_audit/` — this is your ledger.
List any issues currently with status `OPEN` or `DISPUTED`.
### Step 4 — Check #vv-findings channel
Check the `#vv-findings` channel on the central hub for any recent messages from the CTO regarding issue resolution or disputes.
### Step 5 — Report readiness to CEO
Post a status message to `#vv-findings` channel:
- How many open/disputed issues exist
- Whether you are performing a fresh audit or continuing an existing one
- What you plan to audit this session
### Step 6 — Begin audit
Execute the audit methodology below.
---
## AUDIT METHODOLOGY
### Phase A — OpenSpec Completeness Check
For every archived OpenSpec change, verify the tasks were fully implemented.
**Archived changes location:** `/home/ubuntu/vj_ai_agents_dev/sentryagent-idp/openspec/changes/archive/`
For each archived change:
1. Read its `tasks.md`
2. All tasks marked `[x]` — verify the corresponding code actually exists and matches the task description
3. Any task marked `[ ]` — this is a BLOCKER finding (incomplete implementation)
### Phase B — API Surface Audit
Verify every API endpoint has a corresponding OpenAPI spec.
**OpenAPI specs location:** `/home/ubuntu/vj_ai_agents_dev/sentryagent-idp/docs/openapi/`
For every route registered in `src/routes/` and `src/app.ts`:
1. Confirm there is an OpenAPI spec entry covering that endpoint
2. Confirm the spec matches the implementation (method, path, request schema, response schemas, auth requirement)
3. Any endpoint without a spec → BLOCKER
4. Any endpoint where spec and implementation diverge → MAJOR
### Phase C — TypeScript Standards Audit
Read source files in `src/` and verify:
1. No `any` types used anywhere — search for `: any`, `as any`, `<any>`
2. All public classes and methods have JSDoc comments
3. `tsconfig.json` has `"strict": true` and all strict flags enabled
4. Custom error hierarchy: all errors extend `SentryAgentError`
Violations:
- `any` type usage → MAJOR per occurrence
- Missing JSDoc on public methods → MINOR per file
- Disabled strict flags → BLOCKER
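The `any`-usage search in check 1 can be mechanized; a sketch of the scan (patterns taken from the check above, helper name hypothetical):

```typescript
// Returns 1-based line numbers that contain a banned `any` pattern.
const anyPatterns = [/:\s*any\b/, /\bas any\b/, /<any>/];

function findAnyUsage(source: string): number[] {
  return source
    .split('\n')
    .flatMap((line, i) => (anyPatterns.some((p) => p.test(line)) ? [i + 1] : []));
}
```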
### Phase D — DRY Principle Audit
Search for code duplication:
1. Look for identical or near-identical logic blocks across files
2. Check that all crypto operations live in `src/utils/crypto.ts`
3. Check that all JWT operations live in `src/utils/jwt.ts`
4. Check that all validation logic lives in `src/utils/validators.ts`
5. Check that all error classes live in `src/utils/errors.ts` or `src/errors/`
6. Check that no controller directly accesses the database (must go through services)
Violations: DRY violation → MAJOR (BLOCKER if in a critical path)
### Phase E — SOLID Principles Audit
Spot-check key services:
1. `AgentService` — does agent CRUD only (no token logic, no audit logic)
2. `OAuth2Service` — does token issuance only (no agent CRUD, no billing)
3. `CredentialService` — does credential management only
4. `AuditService` — does audit logging only
5. All services use constructor injection (no direct `new Dependency()` inside business logic)
6. Services depend on interfaces/abstractions, not concrete implementations
Violations: SRP violation → MAJOR
### Phase F — Test Coverage Audit
Check test completeness:
1. Every service in `src/services/` has a corresponding test in `tests/`
2. Every API route has integration tests
3. Run `npm test -- --coverage` and check that overall coverage is >80%
4. Check that edge cases are covered: null inputs, invalid inputs, auth failures, rate limits
Violations:
- Coverage <80% → BLOCKER
- Missing integration test for an endpoint → MAJOR
- Missing edge case tests → MINOR
### Phase G — AGNTCY Compliance Audit
Verify AGNTCY alignment (per PRD Section 3.1 and Phase 3 scope):
1. Agents have unique, immutable IDs
2. Authentication uses OAuth 2.0 Client Credentials flow
3. Authorization uses scope-based access control
4. Audit logs are immutable
5. Agent lifecycle operations (provision, rotate, revoke) are fully implemented
6. W3C DID support implemented (Phase 3 deliverable)
7. AGNTCY conformance tests pass (see `tests/agntcy-conformance/`)
Violations: AGNTCY deviation → BLOCKER
### Phase H — Security Audit
Scan for OWASP Top 10 vulnerabilities:
1. SQL injection — all DB queries use parameterized statements
2. Authentication bypass — all protected routes have auth middleware
3. Sensitive data exposure — no secrets in logs or error responses
4. Broken access control — tenant isolation enforced on all queries
5. Security headers — helmet middleware applied
6. Rate limiting — enforced on token endpoints
Violations: Security finding → BLOCKER
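For check 4, tenant isolation means every agent query is constrained by the caller's organization. A sketch of the expected query shape (table and column names are assumptions, not the project's schema):

```typescript
// Both the agent ID and the caller's organization ID must constrain the lookup,
// so a cross-tenant ID guess returns no rows instead of another tenant's agent.
function buildAgentLookup(agentId: string, organizationId: string) {
  return {
    text: 'SELECT * FROM agents WHERE id = $1 AND organization_id = $2',
    values: [agentId, organizationId], // parameterized — never string-interpolated
  };
}
```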
---
## ISSUE FORMAT
Every finding is written as a file in the shared ledger:
`/home/ubuntu/vj_ai_agents_dev/sentryagent-idp/openspec/vv_audit/`
**Filename:** `VV_ISSUE_<NNN>.md` (zero-padded, e.g., `VV_ISSUE_001.md`)
**File template:**
```markdown
# VV_ISSUE_<NNN> — <Short title>
**Status:** OPEN | RESOLVED | DISPUTED
**Severity:** BLOCKER | MAJOR | MINOR
**Category:** SPEC_DEVIATION | DRY_VIOLATION | TYPE_VIOLATION | SOLID_VIOLATION | TEST_GAP | SECURITY | AGNTCY | DOCS
**Logged by:** LeadValidator
**Date:** <ISO date>
**Audit phase:** <Phase A-H label>
## Finding
<Clear description of what is wrong>
## Evidence
<File path(s) and line numbers where the violation exists>
## Required Action
<What must be done to resolve this finding>
## CTO Response
<Leave blank — CTO fills this in>
## Resolution
<Leave blank — filled on resolution>
```
---
## SEVERITY DEFINITIONS
| Severity | Definition | Who can close |
|----------|-----------|---------------|
| **BLOCKER** | Prevents release. PRD requirement missing, security vulnerability, <80% test coverage, spec-implementation mismatch on a core feature | CTO resolves, Validator confirms. CEO notified only if CTO and Validator cannot agree. |
| **MAJOR** | Significant deviation from standards. `any` types, DRY violation, missing integration test, SOLID violation | CTO resolves, Validator confirms |
| **MINOR** | Standards improvement. Missing JSDoc, minor duplication, cosmetic spec gap | CTO resolves, no confirmation needed |
---
## COMMUNICATION PROTOCOL
### Primary channel: #vv-cto-resolution (Lead Validator ↔ CTO)
All findings — routine, MAJOR, and BLOCKER — go to `#vv-cto-resolution` first.
The CTO is responsible for reviewing and resolving all findings with the engineering team.
The Lead Validator confirms resolution in the same channel.
**Do NOT post findings to `#vpe-cto-approvals` (CEO channel) unless escalation is required (see below).**
### Routine findings
After each audit phase, post a summary to `#vv-cto-resolution`:
- Phase completed
- Number of issues found (BLOCKER / MAJOR / MINOR)
- Issue file names
### BLOCKER findings
Post immediately to `#vv-cto-resolution` with full finding detail.
The CTO must acknowledge and provide a resolution plan within the same session.
**CEO is NOT notified of BLOCKERs by default — the CTO owns resolution.**
### Disputes
If the CTO marks an issue as `DISPUTED`:
1. Read the CTO's technical justification in the issue file
2. Evaluate whether the justification is valid against the PRD
3. If you accept the justification → change status to `RESOLVED`, note reason in `#vv-cto-resolution`
4. If you reject the justification → change status back to `OPEN`, add your counter-argument in `#vv-cto-resolution`, and attempt a second round of resolution with the CTO
5. **Only if two rounds of resolution fail** → escalate to `#vpe-cto-approvals` for CEO decision, with a clear summary of both positions
### CEO escalation (last resort only)
Escalate to `#vpe-cto-approvals` ONLY when:
- CTO and Lead Validator have attempted resolution and remain deadlocked after two rounds
- Include: issue ID, CTO's position, Lead Validator's position, and why they are irreconcilable
### Session close
When you have completed your audit session, post a final summary to `#vv-cto-resolution`:
- Total issues logged this session
- Breakdown by severity
- Overall V&V status: PASS (0 BLOCKERs) | BLOCKED (≥1 BLOCKER open)
Also post a brief one-line status to `#vv-findings` for informational tracking.
---
## AUDIT LEDGER INDEX
After each session, update `/home/ubuntu/vj_ai_agents_dev/sentryagent-idp/openspec/vv_audit/LEDGER.md`:
- Total issues logged to date
- Open / Resolved / Disputed counts
- Date of last audit
- Overall release gate status
---
## INDEPENDENCE PRINCIPLES
1. **You do not take orders from the CTO.** The CTO can respond to your findings in the issue file. Only the CEO can instruct you to drop a BLOCKER.
2. **You do not implement fixes.** If you find a problem, you log it. The CTO's team fixes it.
3. **You do not negotiate severity.** Severity is set by the PRD requirements and these definitions. If the CTO disagrees, it becomes DISPUTED and goes to CEO.
4. **You do not skip phases.** Every audit session runs all phases, or explicitly documents why a phase was skipped.
5. **You are not adversarial.** Your goal is product quality, not finding fault. A clean audit is a success.
---
## STANDARDS REFERENCE (from PRD Section 6)
| Standard | Requirement |
|----------|------------|
| TypeScript | Strict mode, zero `any` types |
| DRY | Zero code duplication, logic lives in exactly one place |
| SOLID | Single responsibility per service, constructor injection |
| OpenAPI | Spec exists BEFORE implementation, spec matches implementation |
| Tests | >80% coverage, all endpoints integration-tested |
| JSDoc | All public classes and methods documented |
| Errors | All errors typed, extend SentryAgentError hierarchy |
| Security | No OWASP Top 10 vulnerabilities |
| AGNTCY | Full compliance with Linux Foundation agent identity standard |
| Performance | Token endpoints <100ms, all others <200ms |

cli/README.md Normal file
# sentryagent CLI
The official command-line interface for [SentryAgent.ai](https://sentryagent.ai) — manage agents, issue OAuth2 tokens, rotate credentials, and stream audit logs from your terminal.
---
## Installation
### From npm (once published)
```bash
npm install -g sentryagent
```
### From source
```bash
cd cli/
npm install
npm run build
npm install -g .
```
---
## Configuration
Before using any command, configure the CLI with your API endpoint and credentials:
```bash
sentryagent configure
```
You will be prompted for:
| Field | Description |
|---------------|--------------------------------------------------|
| API URL | The SentryAgent.ai API base URL (e.g. `https://api.sentryagent.ai`) |
| Client ID | Your tenant client ID |
| Client Secret | Your tenant client secret |
Configuration is stored at `~/.sentryagent/config.json` with permissions `0600`.
If any command is run before `sentryagent configure` has been called, the CLI exits with:
```
Not configured. Run `sentryagent configure` first.
```
---
## Commands
### `sentryagent --version` / `-v`
Output the installed CLI version.
```bash
sentryagent --version
# 1.0.0
```
### `sentryagent --help` / `-h`
Show all available commands and global options.
```bash
sentryagent --help
```
---
### `sentryagent configure`
Interactively configure the CLI.
```bash
sentryagent configure
```
**Prompts:**
```
SentryAgent CLI Configuration
────────────────────────────────────────
API URL (e.g. https://api.sentryagent.ai): https://api.sentryagent.ai
Client ID: tenant_01ABC...
Client Secret: ****
✓ Configuration saved to ~/.sentryagent/config.json
```
---
### `sentryagent register-agent`
Register a new agent with the identity provider.
```bash
sentryagent register-agent --name <name> [--description <desc>]
```
**Options:**
| Flag | Required | Description |
|-------------------|----------|---------------------|
| `--name <name>` | Yes | Agent display name |
| `--description` | No | Agent description |
**Example:**
```bash
sentryagent register-agent --name "billing-agent" --description "Handles billing workflows"
```
**Output:**
```
✓ Agent registered successfully
Agent ID: 01ARZ3NDEKTSV4RRFFQ69G5FAV
Name: billing-agent
Description: Handles billing workflows
Status: active
```
---
### `sentryagent list-agents`
List all agents registered for your tenant, displayed as a formatted table.
```bash
sentryagent list-agents
```
**Output:**
```
AGENT ID NAME STATUS CREATED AT
────────────────────────────────────────────────────────────────────────────
01ARZ3NDEKTSV4RRFFQ69G5FAV billing-agent active 4/2/2026, 9:00:00 AM
01ARZ3NDEKTSV4RRFFQ69G5FAX auth-agent active 4/1/2026, 3:00:00 PM
────────────────────────────────────────────────────────────────────────────
Total: 2
```
---
### `sentryagent issue-token`
Issue an OAuth2 `client_credentials` access token for a specific agent.
```bash
sentryagent issue-token --agent-id <id>
```
**Options:**
| Flag | Required | Description |
|--------------------|----------|-------------------------|
| `--agent-id <id>` | Yes | Target agent ID |
**Example:**
```bash
sentryagent issue-token --agent-id 01ARZ3NDEKTSV4RRFFQ69G5FAV
```
**Output:**
```
✓ Token issued successfully
Access Token:
eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9...
Token Type: Bearer
Expires In: 3600s
Expires At: 2026-04-02T10:00:00.000Z
```
---
### `sentryagent rotate-credentials`
Rotate the client secret for an agent. Prompts for confirmation before proceeding.
```bash
sentryagent rotate-credentials --agent-id <id>
```
**Options:**
| Flag | Required | Description |
|--------------------|----------|-------------------------|
| `--agent-id <id>` | Yes | Target agent ID |
**Example:**
```bash
sentryagent rotate-credentials --agent-id 01ARZ3NDEKTSV4RRFFQ69G5FAV
```
**Output:**
```
⚠ This will invalidate the current secret for agent 01ARZ3NDEKTSV4RRFFQ69G5FAV
This will invalidate the current secret. Continue? [y/N] y
✓ Credentials rotated successfully
Client ID: 01ARZ3NDEKTSV4RRFFQ69G5FAV
Client Secret: cs_new_secret_value_here
Store the new client secret securely — it will not be shown again.
```
---
### `sentryagent tail-audit-log`
Poll the audit log API every 5 seconds and stream new events to stdout. Press **Ctrl+C** to stop.
```bash
sentryagent tail-audit-log [--agent-id <id>]
```
**Options:**
| Flag | Required | Description |
|--------------------|----------|------------------------------------|
| `--agent-id <id>` | No | Filter events for a specific agent |
**Example (all events):**
```bash
sentryagent tail-audit-log
```
**Example (filtered by agent):**
```bash
sentryagent tail-audit-log --agent-id 01ARZ3NDEKTSV4RRFFQ69G5FAV
```
**Output:**
```
Tailing audit log — press Ctrl+C to stop
────────────────────────────────────────────────────────────
4/2/2026, 9:05:00 AM agent.token.issued outcome=success agent=01ARZ3NDEKTSV... id=evt_01...
4/2/2026, 9:10:03 AM agent.registered outcome=success id=evt_02...
^C
Stopped.
```
---
### `sentryagent completion`
Output shell completion scripts.
#### Bash
```bash
sentryagent completion bash
```
To enable permanently, add to `~/.bashrc` or `~/.bash_profile`:
```bash
source <(sentryagent completion bash)
```
Or write to a file:
```bash
sentryagent completion bash > ~/.bash_completion.d/sentryagent
```
#### Zsh
```bash
sentryagent completion zsh
```
To enable permanently, add to `~/.zshrc`:
```bash
source <(sentryagent completion zsh)
```
Or write to a file in your `$fpath`:
```bash
sentryagent completion zsh > ~/.zsh/completions/_sentryagent
```
---
## Shell Completion Setup
### Bash (one-time setup)
```bash
mkdir -p ~/.bash_completion.d
sentryagent completion bash > ~/.bash_completion.d/sentryagent
echo 'source ~/.bash_completion.d/sentryagent' >> ~/.bashrc
source ~/.bashrc
```
### Zsh (one-time setup)
```bash
mkdir -p ~/.zsh/completions
sentryagent completion zsh > ~/.zsh/completions/_sentryagent
echo 'fpath=(~/.zsh/completions $fpath)' >> ~/.zshrc
echo 'autoload -Uz compinit && compinit' >> ~/.zshrc
source ~/.zshrc
```
After setup, pressing **Tab** after `sentryagent` will autocomplete commands and flags.
---
## Configuration File
The config file is stored at `~/.sentryagent/config.json`:
```json
{
"apiUrl": "https://api.sentryagent.ai",
"clientId": "tenant_01ABC...",
"clientSecret": "cs_secret_value"
}
```
The directory is created with mode `0700` and the file with mode `0600` to prevent other users from reading your credentials.
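A sketch of that write path (`saveConfig` is a hypothetical helper; the CLI's actual implementation may differ):

```typescript
import { mkdirSync, statSync, writeFileSync } from 'node:fs';
import { tmpdir } from 'node:os';
import { join } from 'node:path';

// Create the config directory and file with restrictive permissions so only
// the owning user can read the stored credentials.
function saveConfig(dir: string, config: object): string {
  mkdirSync(dir, { recursive: true, mode: 0o700 });
  const file = join(dir, 'config.json');
  writeFileSync(file, JSON.stringify(config, null, 2), { mode: 0o600 });
  return file;
}

// Demo against a temp directory (the real CLI uses ~/.sentryagent).
const demoDir = join(tmpdir(), `sentryagent-demo-${process.pid}`);
const demoFile = saveConfig(demoDir, { apiUrl: 'https://api.sentryagent.ai' });
const dirMode = statSync(demoDir).mode & 0o777;
const fileMode = statSync(demoFile).mode & 0o777;
```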
---
## Environment
- Node.js >= 18.0.0 is required (uses the built-in `fetch` API)
- All HTTP requests use OAuth2 `client_credentials` tokens fetched automatically from your configuration
- Tokens are cached in memory for the duration of the CLI session (refreshed 30 seconds before expiry)
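The refresh rule in the last point can be sketched as follows (names hypothetical; the CLI's internals may differ):

```typescript
// In-memory token cache: refresh once we are within 30s of expiry.
interface CachedToken {
  accessToken: string;
  expiresAt: number; // epoch milliseconds
}

const REFRESH_MARGIN_MS = 30_000;
let cached: CachedToken | null = null;

function needsRefresh(now: number = Date.now()): boolean {
  return cached === null || now >= cached.expiresAt - REFRESH_MARGIN_MS;
}
```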

cli/package-lock.json generated Normal file
{
  "name": "sentryagent",
  "version": "1.0.0",
  "lockfileVersion": 3,
  "requires": true,
  "packages": {
    "": {
      "name": "sentryagent",
      "version": "1.0.0",
      "license": "MIT",
      "dependencies": {
        "@types/unzipper": "^0.10.11",
        "chalk": "^5.3.0",
        "commander": "^12.1.0",
        "unzipper": "^0.12.3"
      },
      "bin": {
        "sentryagent": "dist/index.js"
      },
      "devDependencies": {
        "@types/node": "^20.12.7",
        "ts-node": "^10.9.2",
        "typescript": "^5.4.5"
      },
      "engines": {
        "node": ">=18.0.0"
      }
    },
    "node_modules/@cspotcode/source-map-support": {
      "version": "0.8.1",
      "resolved": "https://registry.npmjs.org/@cspotcode/source-map-support/-/source-map-support-0.8.1.tgz",
      "integrity": "sha512-IchNf6dN4tHoMFIn/7OE8LWZ19Y6q/67Bmf6vnGREv8RSbBVb9LPJxEcnwrcwX6ixSvaiGoomAUvu4YSxXrVgw==",
      "dev": true,
      "license": "MIT",
      "dependencies": {
        "@jridgewell/trace-mapping": "0.3.9"
      },
      "engines": {
        "node": ">=12"
      }
    },
    "node_modules/@jridgewell/resolve-uri": {
      "version": "3.1.2",
      "resolved": "https://registry.npmjs.org/@jridgewell/resolve-uri/-/resolve-uri-3.1.2.tgz",
      "integrity": "sha512-bRISgCIjP20/tbWSPWMEi54QVPRZExkuD9lJL+UIxUKtwVJA8wW1Trb1jMs1RFXo1CBTNZ/5hpC9QvmKWdopKw==",
      "dev": true,
      "license": "MIT",
      "engines": {
        "node": ">=6.0.0"
      }
    },
    "node_modules/@jridgewell/sourcemap-codec": {
      "version": "1.5.5",
      "resolved": "https://registry.npmjs.org/@jridgewell/sourcemap-codec/-/sourcemap-codec-1.5.5.tgz",
      "integrity": "sha512-cYQ9310grqxueWbl+WuIUIaiUaDcj7WOq5fVhEljNVgRfOUhY9fy2zTvfoqWsnebh8Sl70VScFbICvJnLKB0Og==",
      "dev": true,
      "license": "MIT"
    },
    "node_modules/@jridgewell/trace-mapping": {
      "version": "0.3.9",
      "resolved": "https://registry.npmjs.org/@jridgewell/trace-mapping/-/trace-mapping-0.3.9.tgz",
      "integrity": "sha512-3Belt6tdc8bPgAtbcmdtNJlirVoTmEb5e2gC94PnkwEW9jI6CAHUeoG85tjWP5WquqfavoMtMwiG4P926ZKKuQ==",
      "dev": true,
      "license": "MIT",
      "dependencies": {
        "@jridgewell/resolve-uri": "^3.0.3",
        "@jridgewell/sourcemap-codec": "^1.4.10"
      }
    },
    "node_modules/@tsconfig/node10": {
      "version": "1.0.12",
      "resolved": "https://registry.npmjs.org/@tsconfig/node10/-/node10-1.0.12.tgz",
      "integrity": "sha512-UCYBaeFvM11aU2y3YPZ//O5Rhj+xKyzy7mvcIoAjASbigy8mHMryP5cK7dgjlz2hWxh1g5pLw084E0a/wlUSFQ==",
      "dev": true,
      "license": "MIT"
    },
    "node_modules/@tsconfig/node12": {
      "version": "1.0.11",
      "resolved": "https://registry.npmjs.org/@tsconfig/node12/-/node12-1.0.11.tgz",
      "integrity": "sha512-cqefuRsh12pWyGsIoBKJA9luFu3mRxCA+ORZvA4ktLSzIuCUtWVxGIuXigEwO5/ywWFMZ2QEGKWvkZG1zDMTag==",
      "dev": true,
      "license": "MIT"
    },
    "node_modules/@tsconfig/node14": {
      "version": "1.0.3",
"version": "1.0.3",
"resolved": "https://registry.npmjs.org/@tsconfig/node14/-/node14-1.0.3.tgz",
"integrity": "sha512-ysT8mhdixWK6Hw3i1V2AeRqZ5WfXg1G43mqoYlM2nc6388Fq5jcXyr5mRsqViLx/GJYdoL0bfXD8nmF+Zn/Iow==",
"dev": true,
"license": "MIT"
},
"node_modules/@tsconfig/node16": {
"version": "1.0.4",
"resolved": "https://registry.npmjs.org/@tsconfig/node16/-/node16-1.0.4.tgz",
"integrity": "sha512-vxhUy4J8lyeyinH7Azl1pdd43GJhZH/tP2weN8TntQblOY+A0XbT8DJk1/oCPuOOyg/Ja757rG0CgHcWC8OfMA==",
"dev": true,
"license": "MIT"
},
"node_modules/@types/node": {
"version": "20.19.37",
"resolved": "https://registry.npmjs.org/@types/node/-/node-20.19.37.tgz",
"integrity": "sha512-8kzdPJ3FsNsVIurqBs7oodNnCEVbni9yUEkaHbgptDACOPW04jimGagZ51E6+lXUwJjgnBw+hyko/lkFWCldqw==",
"license": "MIT",
"dependencies": {
"undici-types": "~6.21.0"
}
},
"node_modules/@types/unzipper": {
"version": "0.10.11",
"resolved": "https://registry.npmjs.org/@types/unzipper/-/unzipper-0.10.11.tgz",
"integrity": "sha512-D25im2zjyMCcgL9ag6N46+wbtJBnXIr7SI4zHf9eJD2Dw2tEB5e+p5MYkrxKIVRscs5QV0EhtU9rgXSPx90oJg==",
"license": "MIT",
"dependencies": {
"@types/node": "*"
}
},
"node_modules/acorn": {
"version": "8.16.0",
"resolved": "https://registry.npmjs.org/acorn/-/acorn-8.16.0.tgz",
"integrity": "sha512-UVJyE9MttOsBQIDKw1skb9nAwQuR5wuGD3+82K6JgJlm/Y+KI92oNsMNGZCYdDsVtRHSak0pcV5Dno5+4jh9sw==",
"dev": true,
"license": "MIT",
"bin": {
"acorn": "bin/acorn"
},
"engines": {
"node": ">=0.4.0"
}
},
"node_modules/acorn-walk": {
"version": "8.3.5",
"resolved": "https://registry.npmjs.org/acorn-walk/-/acorn-walk-8.3.5.tgz",
"integrity": "sha512-HEHNfbars9v4pgpW6SO1KSPkfoS0xVOM/9UzkJltjlsHZmJasxg8aXkuZa7SMf8vKGIBhpUsPluQSqhJFCqebw==",
"dev": true,
"license": "MIT",
"dependencies": {
"acorn": "^8.11.0"
},
"engines": {
"node": ">=0.4.0"
}
},
"node_modules/arg": {
"version": "4.1.3",
"resolved": "https://registry.npmjs.org/arg/-/arg-4.1.3.tgz",
"integrity": "sha512-58S9QDqG0Xx27YwPSt9fJxivjYl432YCwfDMfZ+71RAqUrZef7LrKQZ3LHLOwCS4FLNBplP533Zx895SeOCHvA==",
"dev": true,
"license": "MIT"
},
"node_modules/bluebird": {
"version": "3.7.2",
"resolved": "https://registry.npmjs.org/bluebird/-/bluebird-3.7.2.tgz",
"integrity": "sha512-XpNj6GDQzdfW+r2Wnn7xiSAd7TM3jzkxGXBGTtWKuSXv1xUV+azxAm8jdWZN06QTQk+2N2XB9jRDkvbmQmcRtg==",
"license": "MIT"
},
"node_modules/chalk": {
"version": "5.6.2",
"resolved": "https://registry.npmjs.org/chalk/-/chalk-5.6.2.tgz",
"integrity": "sha512-7NzBL0rN6fMUW+f7A6Io4h40qQlG+xGmtMxfbnH/K7TAtt8JQWVQK+6g0UXKMeVJoyV5EkkNsErQ8pVD3bLHbA==",
"license": "MIT",
"engines": {
"node": "^12.17.0 || ^14.13 || >=16.0.0"
},
"funding": {
"url": "https://github.com/chalk/chalk?sponsor=1"
}
},
"node_modules/commander": {
"version": "12.1.0",
"resolved": "https://registry.npmjs.org/commander/-/commander-12.1.0.tgz",
"integrity": "sha512-Vw8qHK3bZM9y/P10u3Vib8o/DdkvA2OtPtZvD871QKjy74Wj1WSKFILMPRPSdUSx5RFK1arlJzEtA4PkFgnbuA==",
"license": "MIT",
"engines": {
"node": ">=18"
}
},
"node_modules/core-util-is": {
"version": "1.0.3",
"resolved": "https://registry.npmjs.org/core-util-is/-/core-util-is-1.0.3.tgz",
"integrity": "sha512-ZQBvi1DcpJ4GDqanjucZ2Hj3wEO5pZDS89BWbkcrvdxksJorwUDDZamX9ldFkp9aw2lmBDLgkObEA4DWNJ9FYQ==",
"license": "MIT"
},
"node_modules/create-require": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/create-require/-/create-require-1.1.1.tgz",
"integrity": "sha512-dcKFX3jn0MpIaXjisoRvexIJVEKzaq7z2rZKxf+MSr9TkdmHmsU4m2lcLojrj/FHl8mk5VxMmYA+ftRkP/3oKQ==",
"dev": true,
"license": "MIT"
},
"node_modules/diff": {
"version": "4.0.4",
"resolved": "https://registry.npmjs.org/diff/-/diff-4.0.4.tgz",
"integrity": "sha512-X07nttJQkwkfKfvTPG/KSnE2OMdcUCao6+eXF3wmnIQRn2aPAHH3VxDbDOdegkd6JbPsXqShpvEOHfAT+nCNwQ==",
"dev": true,
"license": "BSD-3-Clause",
"engines": {
"node": ">=0.3.1"
}
},
"node_modules/duplexer2": {
"version": "0.1.4",
"resolved": "https://registry.npmjs.org/duplexer2/-/duplexer2-0.1.4.tgz",
"integrity": "sha512-asLFVfWWtJ90ZyOUHMqk7/S2w2guQKxUI2itj3d92ADHhxUSbCMGi1f1cBcJ7xM1To+pE/Khbwo1yuNbMEPKeA==",
"license": "BSD-3-Clause",
"dependencies": {
"readable-stream": "^2.0.2"
}
},
"node_modules/fs-extra": {
"version": "11.3.4",
"resolved": "https://registry.npmjs.org/fs-extra/-/fs-extra-11.3.4.tgz",
"integrity": "sha512-CTXd6rk/M3/ULNQj8FBqBWHYBVYybQ3VPBw0xGKFe3tuH7ytT6ACnvzpIQ3UZtB8yvUKC2cXn1a+x+5EVQLovA==",
"license": "MIT",
"dependencies": {
"graceful-fs": "^4.2.0",
"jsonfile": "^6.0.1",
"universalify": "^2.0.0"
},
"engines": {
"node": ">=14.14"
}
},
"node_modules/graceful-fs": {
"version": "4.2.11",
"resolved": "https://registry.npmjs.org/graceful-fs/-/graceful-fs-4.2.11.tgz",
"integrity": "sha512-RbJ5/jmFcNNCcDV5o9eTnBLJ/HszWV0P73bc+Ff4nS/rJj+YaS6IGyiOL0VoBYX+l1Wrl3k63h/KrH+nhJ0XvQ==",
"license": "ISC"
},
"node_modules/inherits": {
"version": "2.0.4",
"resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.4.tgz",
"integrity": "sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ==",
"license": "ISC"
},
"node_modules/isarray": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/isarray/-/isarray-1.0.0.tgz",
"integrity": "sha512-VLghIWNM6ELQzo7zwmcg0NmTVyWKYjvIeM83yjp0wRDTmUnrM678fQbcKBo6n2CJEF0szoG//ytg+TKla89ALQ==",
"license": "MIT"
},
"node_modules/jsonfile": {
"version": "6.2.0",
"resolved": "https://registry.npmjs.org/jsonfile/-/jsonfile-6.2.0.tgz",
"integrity": "sha512-FGuPw30AdOIUTRMC2OMRtQV+jkVj2cfPqSeWXv1NEAJ1qZ5zb1X6z1mFhbfOB/iy3ssJCD+3KuZ8r8C3uVFlAg==",
"license": "MIT",
"dependencies": {
"universalify": "^2.0.0"
},
"optionalDependencies": {
"graceful-fs": "^4.1.6"
}
},
"node_modules/make-error": {
"version": "1.3.6",
"resolved": "https://registry.npmjs.org/make-error/-/make-error-1.3.6.tgz",
"integrity": "sha512-s8UhlNe7vPKomQhC1qFelMokr/Sc3AgNbso3n74mVPA5LTZwkB9NlXf4XPamLxJE8h0gh73rM94xvwRT2CVInw==",
"dev": true,
"license": "ISC"
},
"node_modules/node-int64": {
"version": "0.4.0",
"resolved": "https://registry.npmjs.org/node-int64/-/node-int64-0.4.0.tgz",
"integrity": "sha512-O5lz91xSOeoXP6DulyHfllpq+Eg00MWitZIbtPfoSEvqIHdl5gfcY6hYzDWnj0qD5tz52PI08u9qUvSVeUBeHw==",
"license": "MIT"
},
"node_modules/process-nextick-args": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/process-nextick-args/-/process-nextick-args-2.0.1.tgz",
"integrity": "sha512-3ouUOpQhtgrbOa17J7+uxOTpITYWaGP7/AhoR3+A+/1e9skrzelGi/dXzEYyvbxubEF6Wn2ypscTKiKJFFn1ag==",
"license": "MIT"
},
"node_modules/readable-stream": {
"version": "2.3.8",
"resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-2.3.8.tgz",
"integrity": "sha512-8p0AUk4XODgIewSi0l8Epjs+EVnWiK7NoDIEGU0HhE7+ZyY8D1IMY7odu5lRrFXGg71L15KG8QrPmum45RTtdA==",
"license": "MIT",
"dependencies": {
"core-util-is": "~1.0.0",
"inherits": "~2.0.3",
"isarray": "~1.0.0",
"process-nextick-args": "~2.0.0",
"safe-buffer": "~5.1.1",
"string_decoder": "~1.1.1",
"util-deprecate": "~1.0.1"
}
},
"node_modules/safe-buffer": {
"version": "5.1.2",
"resolved": "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.1.2.tgz",
"integrity": "sha512-Gd2UZBJDkXlY7GbJxfsE8/nvKkUEU1G38c1siN6QP6a9PT9MmHB8GnpscSmMJSoF8LOIrt8ud/wPtojys4G6+g==",
"license": "MIT"
},
"node_modules/string_decoder": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/string_decoder/-/string_decoder-1.1.1.tgz",
"integrity": "sha512-n/ShnvDi6FHbbVfviro+WojiFzv+s8MPMHBczVePfUpDJLwoLT0ht1l4YwBCbi8pJAveEEdnkHyPyTP/mzRfwg==",
"license": "MIT",
"dependencies": {
"safe-buffer": "~5.1.0"
}
},
"node_modules/ts-node": {
"version": "10.9.2",
"resolved": "https://registry.npmjs.org/ts-node/-/ts-node-10.9.2.tgz",
"integrity": "sha512-f0FFpIdcHgn8zcPSbf1dRevwt047YMnaiJM3u2w2RewrB+fob/zePZcrOyQoLMMO7aBIddLcQIEK5dYjkLnGrQ==",
"dev": true,
"license": "MIT",
"dependencies": {
"@cspotcode/source-map-support": "^0.8.0",
"@tsconfig/node10": "^1.0.7",
"@tsconfig/node12": "^1.0.7",
"@tsconfig/node14": "^1.0.0",
"@tsconfig/node16": "^1.0.2",
"acorn": "^8.4.1",
"acorn-walk": "^8.1.1",
"arg": "^4.1.0",
"create-require": "^1.1.0",
"diff": "^4.0.1",
"make-error": "^1.1.1",
"v8-compile-cache-lib": "^3.0.1",
"yn": "3.1.1"
},
"bin": {
"ts-node": "dist/bin.js",
"ts-node-cwd": "dist/bin-cwd.js",
"ts-node-esm": "dist/bin-esm.js",
"ts-node-script": "dist/bin-script.js",
"ts-node-transpile-only": "dist/bin-transpile.js",
"ts-script": "dist/bin-script-deprecated.js"
},
"peerDependencies": {
"@swc/core": ">=1.2.50",
"@swc/wasm": ">=1.2.50",
"@types/node": "*",
"typescript": ">=2.7"
},
"peerDependenciesMeta": {
"@swc/core": {
"optional": true
},
"@swc/wasm": {
"optional": true
}
}
},
"node_modules/typescript": {
"version": "5.9.3",
"resolved": "https://registry.npmjs.org/typescript/-/typescript-5.9.3.tgz",
"integrity": "sha512-jl1vZzPDinLr9eUt3J/t7V6FgNEw9QjvBPdysz9KfQDD41fQrC2Y4vKQdiaUpFT4bXlb1RHhLpp8wtm6M5TgSw==",
"dev": true,
"license": "Apache-2.0",
"bin": {
"tsc": "bin/tsc",
"tsserver": "bin/tsserver"
},
"engines": {
"node": ">=14.17"
}
},
"node_modules/undici-types": {
"version": "6.21.0",
"resolved": "https://registry.npmjs.org/undici-types/-/undici-types-6.21.0.tgz",
"integrity": "sha512-iwDZqg0QAGrg9Rav5H4n0M64c3mkR59cJ6wQp+7C4nI0gsmExaedaYLNO44eT4AtBBwjbTiGPMlt2Md0T9H9JQ==",
"license": "MIT"
},
"node_modules/universalify": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/universalify/-/universalify-2.0.1.tgz",
"integrity": "sha512-gptHNQghINnc/vTGIk0SOFGFNXw7JVrlRUtConJRlvaw6DuX0wO5Jeko9sWrMBhh+PsYAZ7oXAiOnf/UKogyiw==",
"license": "MIT",
"engines": {
"node": ">= 10.0.0"
}
},
"node_modules/unzipper": {
"version": "0.12.3",
"resolved": "https://registry.npmjs.org/unzipper/-/unzipper-0.12.3.tgz",
"integrity": "sha512-PZ8hTS+AqcGxsaQntl3IRBw65QrBI6lxzqDEL7IAo/XCEqRTKGfOX56Vea5TH9SZczRVxuzk1re04z/YjuYCJA==",
"license": "MIT",
"dependencies": {
"bluebird": "~3.7.2",
"duplexer2": "~0.1.4",
"fs-extra": "^11.2.0",
"graceful-fs": "^4.2.2",
"node-int64": "^0.4.0"
}
},
"node_modules/util-deprecate": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/util-deprecate/-/util-deprecate-1.0.2.tgz",
"integrity": "sha512-EPD5q1uXyFxJpCrLnCc1nHnq3gOa6DZBocAIiI2TaSCA7VCJ1UJDMagCzIkXNsUYfD1daK//LTEQ8xiIbrHtcw==",
"license": "MIT"
},
"node_modules/v8-compile-cache-lib": {
"version": "3.0.1",
"resolved": "https://registry.npmjs.org/v8-compile-cache-lib/-/v8-compile-cache-lib-3.0.1.tgz",
"integrity": "sha512-wa7YjyUGfNZngI/vtK0UHAN+lgDCxBPCylVXGp0zu59Fz5aiGtNXaq3DhIov063MorB+VfufLh3JlF2KdTK3xg==",
"dev": true,
"license": "MIT"
},
"node_modules/yn": {
"version": "3.1.1",
"resolved": "https://registry.npmjs.org/yn/-/yn-3.1.1.tgz",
"integrity": "sha512-Ux4ygGWsu2c7isFWe8Yu1YluJmqVhxqK2cLXNQA5AcC3QfbGNpM7fu0Y8b/z16pXLnFxZYvWhd3fhBY9DLmC6Q==",
"dev": true,
"license": "MIT",
"engines": {
"node": ">=6"
}
}
}
}

cli/package.json Normal file

@@ -0,0 +1,36 @@
{
"name": "sentryagent",
"version": "1.0.0",
"description": "SentryAgent.ai CLI — manage agents, tokens, and audit logs",
"main": "dist/index.js",
"bin": {
"sentryagent": "./dist/index.js"
},
"scripts": {
"build": "tsc",
"dev": "ts-node src/index.ts",
"clean": "rm -rf dist"
},
"dependencies": {
"@types/unzipper": "^0.10.11",
"chalk": "^5.3.0",
"commander": "^12.1.0",
"unzipper": "^0.12.3"
},
"devDependencies": {
"@types/node": "^20.12.7",
"ts-node": "^10.9.2",
"typescript": "^5.4.5"
},
"engines": {
"node": ">=18.0.0"
},
"keywords": [
"sentryagent",
"agentidp",
"cli",
"agents",
"identity"
],
"license": "MIT"
}

cli/src/api.ts Normal file

@@ -0,0 +1,95 @@
import { Config } from './config';
interface TokenCache {
accessToken: string;
expiresAt: number;
}
let tokenCache: TokenCache | null = null;
interface TokenResponse {
access_token: string;
expires_in: number;
token_type: string;
}
async function fetchToken(config: Config): Promise<string> {
const now = Date.now();
if (tokenCache !== null && tokenCache.expiresAt > now + 30_000) {
return tokenCache.accessToken;
}
const body = new URLSearchParams({
grant_type: 'client_credentials',
client_id: config.clientId,
client_secret: config.clientSecret,
});
const res = await fetch(`${config.apiUrl}/oauth2/token`, {
method: 'POST',
headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
body: body.toString(),
});
if (!res.ok) {
const text = await res.text();
throw new Error(`Authentication failed (${res.status}): ${text}`);
}
const data = (await res.json()) as TokenResponse;
tokenCache = {
accessToken: data.access_token,
expiresAt: now + data.expires_in * 1000,
};
return tokenCache.accessToken;
}
export function clearTokenCache(): void {
tokenCache = null;
}
type HttpMethod = 'GET' | 'POST' | 'PUT' | 'PATCH' | 'DELETE';
interface ApiRequestOptions {
method?: HttpMethod;
body?: unknown;
params?: Record<string, string>;
}
export async function apiRequest<T>(
config: Config,
endpoint: string,
options: ApiRequestOptions = {},
): Promise<T> {
const token = await fetchToken(config);
const { method = 'GET', body, params } = options;
let url = `${config.apiUrl}${endpoint}`;
if (params !== undefined && Object.keys(params).length > 0) {
const qs = new URLSearchParams(params);
url = `${url}?${qs.toString()}`;
}
const headers: Record<string, string> = {
Authorization: `Bearer ${token}`,
'Content-Type': 'application/json',
};
const fetchOptions: RequestInit = { method, headers };
if (body !== undefined) {
fetchOptions.body = JSON.stringify(body);
}
const res = await fetch(url, fetchOptions);
if (!res.ok) {
const text = await res.text();
throw new Error(`API error (${res.status}): ${text}`);
}
if (res.status === 204) {
return undefined as unknown as T;
}
return (await res.json()) as T;
}


@@ -0,0 +1,155 @@
import { Command } from 'commander';
const BASH_COMPLETION = `
# sentryagent bash completion
# Add to ~/.bashrc or ~/.bash_profile:
# source <(sentryagent completion bash)
_sentryagent_completion() {
local cur prev words cword
_init_completion || return
local commands="configure register-agent list-agents issue-token rotate-credentials tail-audit-log completion"
local global_opts="--help --version"
case "\${prev}" in
sentryagent)
COMPREPLY=( \$(compgen -W "\${commands} \${global_opts}" -- "\${cur}") )
return 0
;;
configure)
COMPREPLY=( \$(compgen -W "--help" -- "\${cur}") )
return 0
;;
register-agent)
COMPREPLY=( \$(compgen -W "--name --description --help" -- "\${cur}") )
return 0
;;
list-agents)
COMPREPLY=( \$(compgen -W "--help" -- "\${cur}") )
return 0
;;
issue-token)
COMPREPLY=( \$(compgen -W "--agent-id --help" -- "\${cur}") )
return 0
;;
rotate-credentials)
COMPREPLY=( \$(compgen -W "--agent-id --help" -- "\${cur}") )
return 0
;;
tail-audit-log)
COMPREPLY=( \$(compgen -W "--agent-id --help" -- "\${cur}") )
return 0
;;
completion)
COMPREPLY=( \$(compgen -W "bash zsh --help" -- "\${cur}") )
return 0
;;
*)
COMPREPLY=()
return 0
;;
esac
}
complete -F _sentryagent_completion sentryagent
`.trim();
const ZSH_COMPLETION = `
#compdef sentryagent
# sentryagent zsh completion
# Add to ~/.zshrc:
# source <(sentryagent completion zsh)
# Or generate a file and place it in your $fpath:
# sentryagent completion zsh > ~/.zsh/completions/_sentryagent
_sentryagent() {
local state
_arguments \\
'(-v --version)'{-v,--version}'[Show version]' \\
'(-h --help)'{-h,--help}'[Show help]' \\
'1: :->command' \\
'*: :->args'
case \$state in
command)
local commands=(
'configure:Configure CLI with API URL and credentials'
'register-agent:Register a new agent'
'list-agents:List all registered agents'
'issue-token:Issue an OAuth2 access token for an agent'
'rotate-credentials:Rotate credentials for an agent'
'tail-audit-log:Poll and stream audit log events'
'completion:Output shell completion script'
)
_describe 'command' commands
;;
args)
case \${words[2]} in
configure)
_arguments \\
'(-h --help)'{-h,--help}'[Show help]'
;;
register-agent)
_arguments \\
'--name[Agent name]:name' \\
'--description[Agent description]:description' \\
'(-h --help)'{-h,--help}'[Show help]'
;;
list-agents)
_arguments \\
'(-h --help)'{-h,--help}'[Show help]'
;;
issue-token)
_arguments \\
'--agent-id[Agent ID]:agent-id' \\
'(-h --help)'{-h,--help}'[Show help]'
;;
rotate-credentials)
_arguments \\
'--agent-id[Agent ID]:agent-id' \\
'(-h --help)'{-h,--help}'[Show help]'
;;
tail-audit-log)
_arguments \\
'--agent-id[Filter by agent ID]:agent-id' \\
'(-h --help)'{-h,--help}'[Show help]'
;;
completion)
local shells=('bash:Generate bash completion script' 'zsh:Generate zsh completion script')
_describe 'shell' shells
;;
esac
;;
esac
}
_sentryagent "\$@"
`.trim();
export function registerCompletion(program: Command): void {
const completion = program
.command('completion')
.description('Output shell completion scripts');
completion
.command('bash')
.description('Output bash completion script')
.action(() => {
console.log(BASH_COMPLETION);
});
completion
.command('zsh')
.description('Output zsh completion script')
.action(() => {
console.log(ZSH_COMPLETION);
});
completion.addHelpText(
'after',
'\nSupported shells: bash, zsh',
);
}


@@ -0,0 +1,63 @@
import * as readline from 'readline';
import { Command } from 'commander';
import chalk from 'chalk';
import { writeConfig } from '../config';
function prompt(rl: readline.Interface, question: string): Promise<string> {
return new Promise((resolve) => {
rl.question(question, (answer) => {
resolve(answer.trim());
});
});
}
export function registerConfigure(program: Command): void {
program
.command('configure')
.description('Configure the CLI with API URL and credentials')
.action(async () => {
const rl = readline.createInterface({
input: process.stdin,
output: process.stdout,
});
try {
console.log(chalk.bold('SentryAgent CLI Configuration'));
console.log(chalk.dim('─'.repeat(40)));
const apiUrl = await prompt(
rl,
chalk.cyan('API URL') + ' (e.g. https://api.sentryagent.ai): ',
);
if (apiUrl === '') {
console.error(chalk.red('API URL cannot be empty.'));
process.exit(1);
}
const clientId = await prompt(rl, chalk.cyan('Client ID') + ': ');
if (clientId === '') {
console.error(chalk.red('Client ID cannot be empty.'));
process.exit(1);
}
const clientSecret = await prompt(
rl,
chalk.cyan('Client Secret') + ': ',
);
if (clientSecret === '') {
console.error(chalk.red('Client Secret cannot be empty.'));
process.exit(1);
}
writeConfig({ apiUrl, clientId, clientSecret });
console.log();
console.log(
chalk.green('✓') +
' Configuration saved to ~/.sentryagent/config.json',
);
} finally {
rl.close();
}
});
}


@@ -0,0 +1,70 @@
import { Command } from 'commander';
import chalk from 'chalk';
import { requireConfig } from '../config';
interface TokenResponse {
access_token: string;
expires_in: number;
token_type: string;
scope?: string;
}
export function registerIssueToken(program: Command): void {
program
.command('issue-token')
.description('Issue an OAuth2 access token for an agent')
.requiredOption('--agent-id <id>', 'Agent ID to issue a token for')
.action(async (options: { agentId: string }) => {
const config = requireConfig();
try {
const body = new URLSearchParams({
grant_type: 'client_credentials',
client_id: config.clientId,
client_secret: config.clientSecret,
agent_id: options.agentId,
});
const res = await fetch(`${config.apiUrl}/oauth2/token`, {
method: 'POST',
headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
body: body.toString(),
});
if (!res.ok) {
const text = await res.text();
throw new Error(`Token issuance failed (${res.status}): ${text}`);
}
const data = (await res.json()) as TokenResponse;
const expiresAt = new Date(
Date.now() + data.expires_in * 1000,
).toISOString();
console.log(chalk.green('✓') + ' Token issued successfully');
console.log();
console.log(chalk.bold('Access Token:'));
console.log(chalk.cyan(data.access_token));
console.log();
console.log(
chalk.bold('Token Type: ') + data.token_type,
);
console.log(
chalk.bold('Expires In: ') + `${data.expires_in}s`,
);
console.log(
chalk.bold('Expires At: ') + chalk.dim(expiresAt),
);
if (data.scope !== undefined) {
console.log(chalk.bold('Scope: ') + data.scope);
}
} catch (err) {
console.error(
chalk.red('Error:'),
err instanceof Error ? err.message : String(err),
);
process.exit(1);
}
});
}


@@ -0,0 +1,105 @@
import { Command } from 'commander';
import chalk from 'chalk';
import { requireConfig } from '../config';
import { apiRequest } from '../api';
interface Agent {
id: string;
name: string;
status: string;
createdAt: string;
description?: string;
}
interface AgentsResponse {
agents: Agent[];
total?: number;
}
function truncate(str: string, maxLen: number): string {
if (str.length <= maxLen) return str;
return str.slice(0, maxLen - 1) + '…';
}
function padEnd(str: string, len: number): string {
return str.padEnd(len, ' ');
}
export function registerListAgents(program: Command): void {
program
.command('list-agents')
.description('List all registered agents')
.action(async () => {
const config = requireConfig();
try {
const data = await apiRequest<AgentsResponse | Agent[]>(
config,
'/agents',
);
const agents: Agent[] = Array.isArray(data)
? data
: (data as AgentsResponse).agents ?? [];
if (agents.length === 0) {
console.log(chalk.yellow('No agents found.'));
return;
}
const ID_W = 26;
const NAME_W = 24;
const STATUS_W = 10;
const DATE_W = 20;
const header =
chalk.bold(padEnd('AGENT ID', ID_W)) +
' ' +
chalk.bold(padEnd('NAME', NAME_W)) +
' ' +
chalk.bold(padEnd('STATUS', STATUS_W)) +
' ' +
chalk.bold('CREATED AT');
const divider = chalk.dim(
'─'.repeat(ID_W + NAME_W + STATUS_W + DATE_W + 6),
);
console.log(header);
console.log(divider);
for (const agent of agents) {
const statusColor =
agent.status === 'active'
? chalk.green
: agent.status === 'inactive'
? chalk.yellow
: chalk.red;
const createdAt = new Date(agent.createdAt).toLocaleString();
console.log(
chalk.cyan(padEnd(truncate(agent.id, ID_W), ID_W)) +
' ' +
padEnd(truncate(agent.name, NAME_W), NAME_W) +
' ' +
statusColor(padEnd(truncate(agent.status, STATUS_W), STATUS_W)) +
' ' +
chalk.dim(truncate(createdAt, DATE_W)),
);
}
console.log(divider);
const total = Array.isArray(data)
? agents.length
: ((data as AgentsResponse).total ?? agents.length);
console.log(chalk.dim(`Total: ${total}`));
} catch (err) {
console.error(
chalk.red('Error:'),
err instanceof Error ? err.message : String(err),
);
process.exit(1);
}
});
}


@@ -0,0 +1,54 @@
import { Command } from 'commander';
import chalk from 'chalk';
import { requireConfig } from '../config';
import { apiRequest } from '../api';
interface AgentResponse {
id: string;
name: string;
description?: string;
status: string;
createdAt: string;
}
export function registerRegisterAgent(program: Command): void {
program
.command('register-agent')
.description('Register a new agent')
.requiredOption('--name <name>', 'Agent name')
.option('--description <desc>', 'Agent description')
.action(async (options: { name: string; description?: string }) => {
const config = requireConfig();
try {
const body: { name: string; description?: string } = {
name: options.name,
};
if (options.description !== undefined) {
body.description = options.description;
}
const agent = await apiRequest<AgentResponse>(config, '/agents', {
method: 'POST',
body,
});
console.log(chalk.green('✓') + ' Agent registered successfully');
console.log();
console.log(
chalk.bold('Agent ID: ') + chalk.cyan(agent.id),
);
console.log(chalk.bold('Name: ') + agent.name);
if (agent.description !== undefined) {
console.log(chalk.bold('Description:') + ' ' + agent.description);
}
console.log(chalk.bold('Status: ') + agent.status);
} catch (err) {
console.error(
chalk.red('Error:'),
err instanceof Error ? err.message : String(err),
);
process.exit(1);
}
});
}


@@ -0,0 +1,85 @@
import * as readline from 'readline';
import { Command } from 'commander';
import chalk from 'chalk';
import { requireConfig } from '../config';
import { apiRequest } from '../api';
interface RotateResponse {
clientId: string;
clientSecret: string;
rotatedAt?: string;
}
function prompt(rl: readline.Interface, question: string): Promise<string> {
return new Promise((resolve) => {
rl.question(question, (answer) => {
resolve(answer.trim());
});
});
}
export function registerRotateCredentials(program: Command): void {
program
.command('rotate-credentials')
.description('Rotate credentials for an agent (invalidates current secret)')
.requiredOption('--agent-id <id>', 'Agent ID whose credentials to rotate')
.action(async (options: { agentId: string }) => {
const config = requireConfig();
const rl = readline.createInterface({
input: process.stdin,
output: process.stdout,
});
try {
console.log(
chalk.yellow('⚠') +
' This will invalidate the current secret for agent ' +
chalk.cyan(options.agentId),
);
const answer = await prompt(rl, chalk.bold('Continue? [y/N] '));
if (answer.toLowerCase() !== 'y' && answer.toLowerCase() !== 'yes') {
console.log(chalk.dim('Aborted.'));
return;
}
const data = await apiRequest<RotateResponse>(
config,
`/agents/${options.agentId}/credentials/rotate`,
{ method: 'POST' },
);
console.log();
console.log(chalk.green('✓') + ' Credentials rotated successfully');
console.log();
console.log(chalk.bold('Client ID: ') + chalk.cyan(data.clientId));
console.log(
chalk.bold('Client Secret: ') + chalk.yellow(data.clientSecret),
);
console.log();
console.log(
chalk.dim(
'Store the new client secret securely — it will not be shown again.',
),
);
if (data.rotatedAt !== undefined) {
console.log(
chalk.dim('Rotated at: ') + chalk.dim(data.rotatedAt),
);
}
} catch (err) {
console.error(
chalk.red('Error:'),
err instanceof Error ? err.message : String(err),
);
process.exit(1);
} finally {
rl.close();
}
});
}


@@ -0,0 +1,173 @@
import * as fs from 'fs';
import * as path from 'path';
import { Command } from 'commander';
import chalk from 'chalk';
import unzipper from 'unzipper';
import { requireConfig } from '../config';
const VALID_LANGUAGES = ['typescript', 'python', 'go', 'java', 'rust'] as const;
type ScaffoldLanguage = (typeof VALID_LANGUAGES)[number];
function isValidLanguage(lang: string): lang is ScaffoldLanguage {
return (VALID_LANGUAGES as readonly string[]).includes(lang);
}
export function registerScaffold(program: Command): void {
program
.command('scaffold')
.description('Download a starter project scaffold pre-wired with your agent credentials')
.requiredOption('--agent-id <id>', 'Agent ID to scaffold for')
.option(
'--language <lang>',
`SDK language (${VALID_LANGUAGES.join(', ')})`,
'typescript',
)
.option('--out <directory>', 'Output directory for the extracted scaffold', '.')
.action(async (opts: { agentId: string; language: string; out: string }) => {
const { agentId, language, out: outDir } = opts;
if (!isValidLanguage(language)) {
console.error(
chalk.red('Error:'),
`Unsupported language '${language}'. Choose: ${VALID_LANGUAGES.join(', ')}`,
);
process.exit(1);
}
const config = requireConfig();
// Resolve and create output directory
const resolvedOut = path.resolve(outDir);
if (!fs.existsSync(resolvedOut)) {
fs.mkdirSync(resolvedOut, { recursive: true });
}
console.log(
chalk.dim(`Downloading ${language} scaffold for agent ${agentId}...`),
);
try {
// The scaffold endpoint returns a raw ZIP stream rather than JSON, so
// bypass apiRequest: obtain a bearer token, then fetch the body directly.
const token = await getToken(config);
const url = `${config.apiUrl}/sdk/scaffold/${encodeURIComponent(agentId)}?language=${encodeURIComponent(language)}`;
const res = await fetch(url, {
headers: { Authorization: `Bearer ${token}` },
});
if (!res.ok) {
const text = await res.text();
handleHttpError(res.status, text);
process.exit(1);
}
if (res.body === null) {
console.error(chalk.red('Error:'), 'Empty response body from server.');
process.exit(1);
}
// Pipe the response body through unzipper into the output directory
await new Promise<void>((resolve, reject) => {
const nodeStream = streamFromWeb(res.body!);
nodeStream
.pipe(unzipper.Extract({ path: resolvedOut }))
.on('close', resolve)
.on('error', reject);
});
console.log(chalk.green('Scaffold extracted to:'), chalk.bold(resolvedOut));
console.log('');
console.log('Next steps:');
console.log(
` 1. ${chalk.cyan('cd')} ${resolvedOut}`,
);
if (language === 'typescript') {
console.log(` 2. ${chalk.cyan('npm install')}`);
console.log(` 3. Copy ${chalk.yellow('.env.example')} to ${chalk.yellow('.env')} and fill in your client secret`);
console.log(` 4. ${chalk.cyan('npm run dev')}`);
} else if (language === 'python') {
console.log(` 2. ${chalk.cyan('pip install -r requirements.txt')}`);
console.log(` 3. Copy ${chalk.yellow('.env.example')} to ${chalk.yellow('.env')} and fill in your client secret`);
console.log(` 4. ${chalk.cyan('python main.py')}`);
} else if (language === 'go') {
console.log(` 2. ${chalk.cyan('go mod download')}`);
console.log(` 3. Copy ${chalk.yellow('.env.example')} to ${chalk.yellow('.env')} and fill in your client secret`);
console.log(` 4. ${chalk.cyan('go run main.go')}`);
} else if (language === 'java') {
console.log(` 2. ${chalk.cyan('mvn install')}`);
console.log(` 3. Copy ${chalk.yellow('.env.example')} to ${chalk.yellow('.env')} and fill in your client secret`);
console.log(` 4. ${chalk.cyan('mvn exec:java')}`);
} else if (language === 'rust') {
console.log(` 2. Copy ${chalk.yellow('.env.example')} to ${chalk.yellow('.env')} and fill in your client secret`);
console.log(` 3. ${chalk.cyan('cargo run')}`);
}
} catch (err) {
console.error(
chalk.red('Error:'),
err instanceof Error ? err.message : String(err),
);
process.exit(1);
}
});
}
/**
 * Obtain a bearer token for the raw ZIP download.
 * apiRequest always parses JSON responses, so the client_credentials fetch
 * from api.ts is duplicated inline here. Unlike api.ts, this path does not
 * use the shared in-memory token cache.
 */
async function getToken(config: import('../config').Config): Promise<string> {
const body = new URLSearchParams({
grant_type: 'client_credentials',
client_id: config.clientId,
client_secret: config.clientSecret,
});
const res = await fetch(`${config.apiUrl}/oauth2/token`, {
method: 'POST',
headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
body: body.toString(),
});
if (!res.ok) {
const text = await res.text();
throw new Error(`Authentication failed (${res.status}): ${text}`);
}
const data = (await res.json()) as { access_token: string };
return data.access_token;
}
function handleHttpError(status: number, body: string): void {
if (status === 400) {
console.error(chalk.red('Error:'), `Invalid request: ${body}`);
} else if (status === 401) {
console.error(
chalk.red('Error:'),
'Authentication failed. Run `sentryagent configure` to update credentials.',
);
} else if (status === 403) {
console.error(
chalk.red('Error:'),
'Access denied. You do not own this agent.',
);
} else if (status === 404) {
console.error(
chalk.red('Error:'),
'Agent not found. Check the agent ID with `sentryagent list-agents`.',
);
} else {
console.error(chalk.red('Error:'), `Server error (${status}): ${body}`);
}
}
/**
* Converts a WHATWG ReadableStream (from fetch) to a Node.js Readable stream.
* Node 18+ supports ReadableStream natively via stream.Readable.fromWeb().
*/
function streamFromWeb(webStream: ReadableStream<Uint8Array>): NodeJS.ReadableStream {
// Node.js 18+ has stream.Readable.fromWeb
// eslint-disable-next-line @typescript-eslint/no-require-imports
const { Readable } = require('stream') as typeof import('stream');
return Readable.fromWeb(webStream as Parameters<typeof Readable.fromWeb>[0]) as NodeJS.ReadableStream;
}


@@ -0,0 +1,122 @@
import { Command } from 'commander';
import chalk from 'chalk';
import { requireConfig } from '../config';
import { apiRequest } from '../api';
interface AuditEvent {
id: string;
timestamp: string;
action: string;
agentId?: string;
tenantId?: string;
outcome: string;
details?: Record<string, unknown>;
}
interface AuditLogsResponse {
events: AuditEvent[];
nextCursor?: string;
}
function formatEvent(event: AuditEvent): string {
const ts = chalk.dim(new Date(event.timestamp).toLocaleString());
const outcome =
event.outcome === 'success'
? chalk.green(event.outcome)
: chalk.red(event.outcome);
const action = chalk.cyan(event.action);
const agentPart =
event.agentId !== undefined
? ' ' + chalk.dim('agent=' + event.agentId)
: '';
return `${ts} ${action} outcome=${outcome}${agentPart} id=${chalk.dim(event.id)}`;
}
export function registerTailAuditLog(program: Command): void {
program
.command('tail-audit-log')
.description(
'Poll and stream audit log events every 5 seconds (Ctrl+C to stop)',
)
.option('--agent-id <id>', 'Filter events for a specific agent ID')
.action(async (options: { agentId?: string }) => {
const config = requireConfig();
console.log(
chalk.bold('Tailing audit log') +
(options.agentId !== undefined
? chalk.dim(` (agent: ${options.agentId})`)
: '') +
chalk.dim(' — press Ctrl+C to stop'),
);
console.log(chalk.dim('─'.repeat(60)));
const seenIds = new Set<string>();
let cursor: string | undefined;
let running = true;
process.on('SIGINT', () => {
running = false;
console.log();
console.log(chalk.dim('Stopped.'));
process.exit(0);
});
while (running) {
try {
const params: Record<string, string> = {};
if (options.agentId !== undefined) {
params['agentId'] = options.agentId;
}
if (cursor !== undefined) {
params['cursor'] = cursor;
}
// Fetch up to 50 events per poll
params['limit'] = '50';
const data = await apiRequest<AuditLogsResponse | AuditEvent[]>(
config,
'/audit/logs',
{ params },
);
const events: AuditEvent[] = Array.isArray(data)
? data
: (data as AuditLogsResponse).events ?? [];
if (!Array.isArray(data) && (data as AuditLogsResponse).nextCursor !== undefined) {
cursor = (data as AuditLogsResponse).nextCursor;
}
for (const event of events) {
if (!seenIds.has(event.id)) {
seenIds.add(event.id);
console.log(formatEvent(event));
}
}
// Keep the seenIds set bounded to avoid unbounded memory growth
if (seenIds.size > 10_000) {
const arr = Array.from(seenIds);
const keep = arr.slice(arr.length - 5_000);
seenIds.clear();
for (const id of keep) seenIds.add(id);
}
} catch (err) {
console.error(
chalk.yellow('⚠') +
' Poll error: ' +
(err instanceof Error ? err.message : String(err)),
);
}
// Wait 5 seconds between polls
await new Promise<void>((resolve) => {
const timer = setTimeout(resolve, 5000);
// unref() lets the process exit without waiting on this timer
if (typeof timer.unref === 'function') timer.unref();
});
}
});
}

cli/src/config.ts

@@ -0,0 +1,61 @@
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';
export interface Config {
apiUrl: string;
clientId: string;
clientSecret: string;
}
const CONFIG_DIR = path.join(os.homedir(), '.sentryagent');
const CONFIG_FILE = path.join(CONFIG_DIR, 'config.json');
export function readConfig(): Config | null {
if (!fs.existsSync(CONFIG_FILE)) {
return null;
}
try {
const raw = fs.readFileSync(CONFIG_FILE, 'utf-8');
const parsed: unknown = JSON.parse(raw);
if (
parsed !== null &&
typeof parsed === 'object' &&
'apiUrl' in parsed &&
'clientId' in parsed &&
'clientSecret' in parsed &&
typeof (parsed as Record<string, unknown>)['apiUrl'] === 'string' &&
typeof (parsed as Record<string, unknown>)['clientId'] === 'string' &&
typeof (parsed as Record<string, unknown>)['clientSecret'] === 'string'
) {
const p = parsed as Record<string, unknown>;
return {
apiUrl: p['apiUrl'] as string,
clientId: p['clientId'] as string,
clientSecret: p['clientSecret'] as string,
};
}
return null;
} catch {
return null;
}
}
export function writeConfig(config: Config): void {
if (!fs.existsSync(CONFIG_DIR)) {
fs.mkdirSync(CONFIG_DIR, { recursive: true, mode: 0o700 });
}
fs.writeFileSync(CONFIG_FILE, JSON.stringify(config, null, 2), {
encoding: 'utf-8',
mode: 0o600,
});
}
export function requireConfig(): Config {
const config = readConfig();
if (config === null) {
console.error('Not configured. Run `sentryagent configure` first.');
process.exit(1);
}
return config;
}

cli/src/index.ts

@@ -0,0 +1,33 @@
#!/usr/bin/env node
import { Command } from 'commander';
import packageJson from '../package.json';
import { registerConfigure } from './commands/configure';
import { registerRegisterAgent } from './commands/register-agent';
import { registerListAgents } from './commands/list-agents';
import { registerIssueToken } from './commands/issue-token';
import { registerRotateCredentials } from './commands/rotate-credentials';
import { registerTailAuditLog } from './commands/tail-audit-log';
import { registerCompletion } from './commands/completion';
import { registerScaffold } from './commands/scaffold';
const program = new Command();
program
.name('sentryagent')
.description('SentryAgent.ai CLI — manage agents, tokens, and audit logs')
.version(packageJson.version, '-v, --version', 'Output the current version');
// Register all commands
registerConfigure(program);
registerRegisterAgent(program);
registerListAgents(program);
registerIssueToken(program);
registerRotateCredentials(program);
registerTailAuditLog(program);
registerCompletion(program);
registerScaffold(program);
// Parse args — commander will display help automatically on --help
program.parse(process.argv);

cli/tsconfig.json

@@ -0,0 +1,29 @@
{
"compilerOptions": {
"target": "ES2020",
"module": "commonjs",
"lib": ["ES2020"],
"outDir": "./dist",
"rootDir": "./src",
"strict": true,
"noImplicitAny": true,
"strictNullChecks": true,
"strictFunctionTypes": true,
"strictBindCallApply": true,
"strictPropertyInitialization": true,
"noImplicitThis": true,
"alwaysStrict": true,
"noUnusedLocals": true,
"noUnusedParameters": true,
"noImplicitReturns": true,
"noFallthroughCasesInSwitch": true,
"esModuleInterop": true,
"skipLibCheck": true,
"forceConsistentCasingInFileNames": true,
"resolveJsonModule": true,
"declaration": true,
"sourceMap": true
},
"include": ["src/**/*"],
"exclude": ["node_modules", "dist"]
}

compose.monitoring.yaml

@@ -0,0 +1,69 @@
# SentryAgent.ai AgentIdP — Monitoring Overlay
# Compose Specification (no version header — deprecated per modern Compose Spec)
# Usage: docker compose -f compose.yaml -f compose.monitoring.yaml up
services:
prometheus:
image: prom/prometheus:v2.53.0
volumes:
- ./monitoring/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml:ro
- prometheus-data:/prometheus
command:
- '--config.file=/etc/prometheus/prometheus.yml'
- '--storage.tsdb.path=/prometheus'
- '--web.console.libraries=/etc/prometheus/console_libraries'
- '--web.console.templates=/etc/prometheus/consoles'
- '--web.enable-lifecycle'
ports:
- '9090:9090'
networks:
- app-tier
restart: unless-stopped
deploy:
resources:
limits:
memory: 256m
cpus: '0.5'
healthcheck:
test: ['CMD', 'wget', '--no-verbose', '--tries=1', '--spider', 'http://localhost:9090/-/healthy']
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
grafana:
image: grafana/grafana:11.2.0
volumes:
- grafana-data:/var/lib/grafana
- ./monitoring/grafana/provisioning:/etc/grafana/provisioning:ro
- ./monitoring/grafana/dashboards:/var/lib/grafana/dashboards:ro
environment:
GF_SECURITY_ADMIN_PASSWORD: ${GF_ADMIN_PASSWORD}
GF_USERS_ALLOW_SIGN_UP: 'false'
GF_AUTH_ANONYMOUS_ENABLED: 'false'
ports:
- '3001:3000'
networks:
- app-tier
depends_on:
- prometheus
restart: unless-stopped
deploy:
resources:
limits:
memory: 256m
cpus: '0.5'
healthcheck:
test: ['CMD', 'wget', '--no-verbose', '--tries=1', '--spider', 'http://localhost:3000/api/health']
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
volumes:
prometheus-data:
grafana-data:
networks:
app-tier:
external: true

compose.yaml

@@ -0,0 +1,95 @@
# SentryAgent.ai AgentIdP — Docker Compose
# Compose Specification (no version header — deprecated per modern Compose Spec)
# Usage: docker compose up --build
services:
app:
build:
context: .
dockerfile: Dockerfile
ports:
- '3000:3000'
environment:
NODE_ENV: ${NODE_ENV:-development}
DATABASE_URL: postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB}
REDIS_URL: redis://redis:6379
PORT: '3000'
env_file:
- path: .env
required: false
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
networks:
- app-tier
restart: unless-stopped
deploy:
resources:
limits:
memory: 512m
cpus: '1.0'
healthcheck:
test: ['CMD', 'curl', '-f', 'http://localhost:3000/health']
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
# Bind mount for local development source-sync only
volumes:
- ./src:/app/src:ro
postgres:
image: postgres:14.12-alpine3.19
environment:
POSTGRES_USER: ${POSTGRES_USER}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
POSTGRES_DB: ${POSTGRES_DB}
ports:
- '5432:5432'
volumes:
- postgres-data:/var/lib/postgresql/data
networks:
- app-tier
restart: unless-stopped
deploy:
resources:
limits:
memory: 256m
cpus: '0.5'
healthcheck:
test: ['CMD-SHELL', 'pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB'] # $$ defers expansion to the container shell
interval: 10s
timeout: 5s
retries: 5
start_period: 20s
redis:
image: redis:7.2-alpine3.19
ports:
- '6379:6379'
volumes:
- redis-data:/data
networks:
- app-tier
restart: unless-stopped
deploy:
resources:
limits:
memory: 128m
cpus: '0.5'
healthcheck:
test: ['CMD', 'redis-cli', 'ping']
interval: 10s
timeout: 5s
retries: 5
start_period: 10s
networks:
app-tier:
driver: bridge
volumes:
postgres-data:
redis-data:

dashboard/README.md

@@ -0,0 +1,95 @@
# SentryAgent.ai AgentIdP — Web Dashboard
## 1. Overview
The AgentIdP Dashboard is a React 18 single-page application (SPA) that provides a visual
management interface for the AgentIdP API. It allows operators to:
- Browse, search, and filter all registered AI agents
- View agent details and manage lifecycle (suspend / reactivate)
- Generate, rotate, and revoke agent credentials
- Query the audit log with filters for agent, action, outcome, and date range
- Monitor PostgreSQL and Redis connectivity in real time
The dashboard is co-served by the Express API server at `/dashboard/` — no separate hosting
is required.
## 2. Prerequisites
- Node.js 18+
- A running AgentIdP server (local or remote)
- An active agent credential (Client ID + Client Secret) with full scopes
## 3. Development
Install dashboard dependencies:
```bash
cd dashboard
npm install
```
Start the Vite dev server:
```bash
npm run dev
```
The dev server starts at `http://localhost:5173/dashboard/`. API calls are made to
`window.location.origin` (defaulted in the Login form), so either:
- Set the **API Base URL** field to your local server (e.g. `http://localhost:3000`)
- Or configure a Vite proxy in `vite.config.ts` for `/api` and `/health` paths
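If you take the proxy route, a minimal `vite.config.ts` along these lines should work (a sketch only; the `http://localhost:3000` target and the `base` setting are assumptions, so adjust them to your setup):

```typescript
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';

export default defineConfig({
  // Assumed: the app is served under /dashboard/ as described above
  base: '/dashboard/',
  plugins: [react()],
  server: {
    proxy: {
      // Forward API and health-check calls to the local AgentIdP server
      '/api': { target: 'http://localhost:3000', changeOrigin: true },
      '/health': { target: 'http://localhost:3000', changeOrigin: true },
    },
  },
});
```

With the proxy in place, the **API Base URL** field can be left at the dev-server origin, since `/api` and `/health` requests are forwarded transparently.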
## 4. Building
Compile TypeScript and bundle with Vite:
```bash
npm run build
```
Output is written to `dashboard/dist/`. The build is an optimised static bundle (HTML, CSS, JS).
To verify the build locally:
```bash
npm run preview
```
## 5. Deployment
The AgentIdP Express server automatically serves the built dashboard:
- Static assets at `/dashboard/` (via `express.static`)
- SPA fallback — all `/dashboard/*` requests not matching a static file return `index.html`
**Steps:**
1. Build the dashboard: `cd dashboard && npm run build`
2. Start (or restart) the AgentIdP server: `npm start`
3. Open `https://your-api-host/dashboard/` in a browser
No additional nginx or CDN configuration is required for basic deployments.
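The server-side wiring behind this is roughly the following sketch (illustrative only; the `dashboardDist` path and handler shape are assumptions, not the actual server code):

```typescript
import * as path from 'path';
import express from 'express';

const app = express();
// Assumed location of the built bundle relative to the compiled server entry point
const dashboardDist = path.join(__dirname, '../dashboard/dist');

// Static assets under /dashboard/ (JS, CSS, index.html)
app.use('/dashboard', express.static(dashboardDist));

// SPA fallback: any /dashboard/* route with no matching static file gets
// index.html, so react-router can handle the path on a hard refresh or deep link
app.get('/dashboard/*', (_req, res) => {
  res.sendFile(path.join(dashboardDist, 'index.html'));
});
```

The fallback must be registered after `express.static` so real assets are served first and only unmatched routes fall through to `index.html`.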
## 6. Login
The login form has three fields:
| Field | Description |
|---|---|
| **API Base URL** | Base URL of the AgentIdP server, e.g. `https://api.example.com`. Defaults to the current page origin, which works when the dashboard is co-served. |
| **Client ID** | The UUID of an agent registered in AgentIdP. This agent must have the scopes `agents:read agents:write tokens:read audit:read`. |
| **Client Secret** | The plain-text client secret for the agent. Validated against the token endpoint on login. |
Credentials are stored in `sessionStorage` only — they are cleared when the browser tab is closed.
## 7. Pages
| Page | Route | Description |
|---|---|---|
| **Agents** | `/dashboard/agents` | Paginated list of all agents. Search by email (debounced), filter by status. Click a row for details. |
| **Agent Detail** | `/dashboard/agents/:agentId` | Full agent metadata. Suspend or reactivate (with confirmation). Link to credentials. |
| **Credentials** | `/dashboard/agents/:agentId/credentials` | List all credentials. Generate, rotate, or revoke. New secrets shown exactly once. |
| **Audit Log** | `/dashboard/audit` | Paginated audit events with filters for agent ID, action, outcome, and date range. |
| **Health** | `/dashboard/health` | PostgreSQL and Redis connectivity cards. Auto-refreshes every 30 seconds. |

dashboard/index.html

@@ -0,0 +1,12 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>SentryAgent.ai — AgentIdP Dashboard</title>
</head>
<body>
<div id="root"></div>
<script type="module" src="/src/main.tsx"></script>
</body>
</html>

dashboard/package-lock.json (generated; diff suppressed because it is too large)

dashboard/package.json

@@ -0,0 +1,29 @@
{
"name": "@sentryagent/dashboard",
"version": "1.0.0",
"private": true,
"scripts": {
"dev": "vite",
"build": "tsc -p tsconfig.app.json && vite build",
"preview": "vite preview"
},
"dependencies": {
"@sentryagent/idp-sdk": "file:../sdk",
"react": "^18.3.1",
"react-dom": "^18.3.1",
"react-router-dom": "^6.26.2",
"lucide-react": "^0.446.0",
"clsx": "^2.1.1",
"tailwind-merge": "^2.5.2"
},
"devDependencies": {
"@types/react": "^18.3.5",
"@types/react-dom": "^18.3.0",
"@vitejs/plugin-react": "^4.3.1",
"autoprefixer": "^10.4.20",
"postcss": "^8.4.47",
"tailwindcss": "^3.4.12",
"typescript": "^5.5.3",
"vite": "^5.4.8"
}
}


@@ -0,0 +1,6 @@
export default {
plugins: {
tailwindcss: {},
autoprefixer: {},
},
};

dashboard/src/App.tsx

@@ -0,0 +1,35 @@
import * as React from 'react';
import { Routes, Route, Navigate } from 'react-router-dom';
import { AuthProvider } from '@/lib/auth';
import { RequireAuth } from '@/components/RequireAuth';
import { AppShell } from '@/components/layout/AppShell';
import Login from '@/pages/Login';
import Agents from '@/pages/Agents';
import AgentDetail from '@/pages/AgentDetail';
import Credentials from '@/pages/Credentials';
import AuditLog from '@/pages/AuditLog';
import Health from '@/pages/Health';
import { UsagePanel } from '@/components/UsagePanel';
/** Top-level router — defines all application routes. */
export default function App(): React.JSX.Element {
return (
<AuthProvider>
<Routes>
<Route path="/dashboard/login" element={<Login />} />
<Route element={<RequireAuth />}>
<Route element={<AppShell />}>
<Route path="/dashboard/agents" element={<Agents />} />
<Route path="/dashboard/agents/:agentId" element={<AgentDetail />} />
<Route path="/dashboard/agents/:agentId/credentials" element={<Credentials />} />
<Route path="/dashboard/audit" element={<AuditLog />} />
<Route path="/dashboard/health" element={<Health />} />
<Route path="/dashboard/usage" element={<UsagePanel />} />
</Route>
</Route>
<Route path="/dashboard" element={<Navigate to="/dashboard/agents" replace />} />
<Route path="*" element={<Navigate to="/dashboard/agents" replace />} />
</Routes>
</AuthProvider>
);
}


@@ -0,0 +1,11 @@
import * as React from 'react';
import { Navigate, Outlet } from 'react-router-dom';
import { isAuthenticated } from '@/lib/auth';
/** Redirects to /dashboard/login if not authenticated. */
export function RequireAuth(): React.JSX.Element {
if (!isAuthenticated()) {
return <Navigate to="/dashboard/login" replace />;
}
return <Outlet />;
}


@@ -0,0 +1,192 @@
import * as React from 'react';
import { useAuth } from '@/lib/auth';
import { TokenManager } from '@sentryagent/idp-sdk';
/** Shape of the GET /api/v1/billing/usage response. */
interface UsageResponse {
tenantId: string;
date: string;
apiCalls: number;
agentCount: number;
subscriptionStatus: string;
currentPeriodEnd: string | null;
stripeSubscriptionId: string | null;
}
type LoadState = 'idle' | 'loading' | 'success' | 'error';
interface UsageState {
loadState: LoadState;
data: UsageResponse | null;
errorMessage: string | null;
}
const initialState: UsageState = {
loadState: 'idle',
data: null,
errorMessage: null,
};
/**
* Fetches the current usage summary from the API using the stored credentials.
*
* @param baseUrl - The API base URL.
* @param clientId - The agent client ID.
* @param clientSecret - The agent client secret.
* @returns The usage response from the server.
*/
async function fetchUsage(
baseUrl: string,
clientId: string,
clientSecret: string,
): Promise<UsageResponse> {
const tokenManager = new TokenManager(
baseUrl,
clientId,
clientSecret,
'agents:read',
);
const token = await tokenManager.getToken();
const response = await fetch(`${baseUrl}/api/v1/billing/usage`, {
headers: { Authorization: `Bearer ${token}` },
});
if (!response.ok) {
throw new Error(`Failed to fetch usage data (HTTP ${response.status})`);
}
return response.json() as Promise<UsageResponse>;
}
/** Badge shown for the tenant's subscription tier. */
function SubscriptionBadge({ status }: { status: string }): React.JSX.Element {
const isPro = status !== 'free';
return (
<span
className={`inline-flex items-center rounded-full px-2.5 py-0.5 text-xs font-semibold ${
isPro
? 'bg-brand-100 text-brand-700'
: 'bg-slate-100 text-slate-600'
}`}
>
{isPro ? 'Pro' : 'Free Tier'}
</span>
);
}
/** A single metric card with label and value. */
function MetricCard({ label, value }: { label: string; value: string | number }): React.JSX.Element {
return (
<div className="rounded-xl border border-slate-200 bg-white p-6 shadow-sm">
<p className="text-sm font-medium text-slate-500">{label}</p>
<p className="mt-1 text-2xl font-bold text-slate-900">{value}</p>
</div>
);
}
/**
* Displays the current tenant's usage summary:
* - API calls today
* - Active agent count
* - Subscription status (Free Tier / Pro)
*
* Fetches GET /api/v1/billing/usage with the current Bearer token.
* Handles loading state and error state gracefully.
*/
export function UsagePanel(): React.JSX.Element {
const { credentials } = useAuth();
const [state, setState] = React.useState<UsageState>(initialState);
const loadUsage = React.useCallback(async (): Promise<void> => {
if (!credentials) return;
setState((prev) => ({ ...prev, loadState: 'loading', errorMessage: null }));
try {
const data = await fetchUsage(
credentials.baseUrl,
credentials.clientId,
credentials.clientSecret,
);
setState({ loadState: 'success', data, errorMessage: null });
} catch (err) {
const message = err instanceof Error ? err.message : 'Unknown error occurred.';
setState({ loadState: 'error', data: null, errorMessage: message });
}
}, [credentials]);
React.useEffect(() => {
void loadUsage();
}, [loadUsage]);
const isLoading = state.loadState === 'loading' || state.loadState === 'idle';
return (
<div>
<div className="mb-6 flex items-center justify-between">
<h1 className="text-2xl font-bold text-slate-900">Usage &amp; Billing</h1>
<button
onClick={() => { void loadUsage(); }}
disabled={isLoading}
className="rounded-md border border-slate-300 px-3 py-1.5 text-sm hover:bg-slate-50 disabled:opacity-40"
>
Refresh
</button>
</div>
{/* Error state */}
{state.loadState === 'error' && (
<div className="mb-6 rounded-md bg-red-50 px-4 py-3 text-sm text-red-700" role="alert">
{state.errorMessage ?? 'Failed to load usage data.'}
</div>
)}
{/* Loading skeleton */}
{isLoading && (
<div className="grid grid-cols-1 gap-4 sm:grid-cols-3 animate-pulse">
{[1, 2, 3].map((i) => (
<div key={i} className="h-28 rounded-xl border border-slate-200 bg-slate-100" />
))}
</div>
)}
{/* Data */}
{state.loadState === 'success' && state.data !== null && (
<>
<div className="mb-4 flex items-center gap-3">
<p className="text-sm text-slate-500">
Showing usage for <strong>{state.data.date}</strong>
</p>
<SubscriptionBadge status={state.data.subscriptionStatus} />
</div>
<div className="grid grid-cols-1 gap-4 sm:grid-cols-3">
<MetricCard label="API Calls Today" value={state.data.apiCalls.toLocaleString()} />
<MetricCard label="Active Agents" value={state.data.agentCount.toLocaleString()} />
<MetricCard label="Plan" value={state.data.subscriptionStatus === 'free' ? 'Free Tier' : 'Pro'} />
</div>
{state.data.subscriptionStatus === 'free' && (
<div className="mt-6 rounded-xl border border-brand-200 bg-brand-50 p-5">
<p className="text-sm font-medium text-brand-800">
You are on the Free Tier, which is limited to 10 agents and 1,000 API calls/day.
</p>
<p className="mt-1 text-sm text-brand-700">
Upgrade to Pro for unlimited agents and API calls.
</p>
</div>
)}
{state.data.currentPeriodEnd !== null && (
<p className="mt-4 text-xs text-slate-400">
Current period ends:{' '}
{new Date(state.data.currentPeriodEnd).toLocaleDateString()}
</p>
)}
</>
)}
</div>
);
}


@@ -0,0 +1,63 @@
import * as React from 'react';
import { NavLink, Outlet } from 'react-router-dom';
import { cn } from '@/lib/utils';
import { useAuth } from '@/lib/auth';
interface NavItem {
to: string;
label: string;
}
const NAV_ITEMS: NavItem[] = [
{ to: '/dashboard/agents', label: 'Agents' },
{ to: '/dashboard/audit', label: 'Audit Log' },
{ to: '/dashboard/health', label: 'Health' },
{ to: '/dashboard/usage', label: 'Usage' },
];
/**
* Outer application shell: top navigation bar and main content area.
* Renders the active page via <Outlet />.
*/
export function AppShell(): React.JSX.Element {
const { logout } = useAuth();
return (
<div className="min-h-screen bg-slate-50">
<header className="border-b border-slate-200 bg-white shadow-sm">
<div className="mx-auto flex max-w-7xl items-center justify-between px-4 py-3">
<div className="flex items-center gap-8">
<span className="text-lg font-bold text-brand-700">SentryAgent.ai</span>
<nav className="flex gap-1">
{NAV_ITEMS.map(({ to, label }) => (
<NavLink
key={to}
to={to}
className={({ isActive }) =>
cn(
'rounded-md px-3 py-2 text-sm font-medium transition-colors',
isActive
? 'bg-brand-50 text-brand-700'
: 'text-slate-600 hover:bg-slate-100 hover:text-slate-900',
)
}
>
{label}
</NavLink>
))}
</nav>
</div>
<button
onClick={logout}
className="text-sm text-slate-500 hover:text-slate-900"
>
Sign out
</button>
</div>
</header>
<main className="mx-auto max-w-7xl px-4 py-8">
<Outlet />
</main>
</div>
);
}


@@ -0,0 +1,27 @@
import * as React from 'react';
import { cn } from '@/lib/utils';
type BadgeVariant = 'default' | 'success' | 'warning' | 'danger' | 'muted';
interface BadgeProps {
variant?: BadgeVariant;
children: React.ReactNode;
className?: string;
}
const variantClasses: Record<BadgeVariant, string> = {
default: 'bg-brand-100 text-brand-700',
success: 'bg-green-100 text-green-700',
warning: 'bg-yellow-100 text-yellow-700',
danger: 'bg-red-100 text-red-700',
muted: 'bg-slate-100 text-slate-600',
};
/** Small status badge. */
export function Badge({ variant = 'default', children, className }: BadgeProps): React.JSX.Element {
return (
<span className={cn('inline-flex items-center rounded-full px-2.5 py-0.5 text-xs font-medium', variantClasses[variant], className)}>
{children}
</span>
);
}


@@ -0,0 +1,65 @@
import * as React from 'react';
import { cn } from '@/lib/utils';
type Variant = 'default' | 'destructive' | 'outline' | 'ghost';
type Size = 'sm' | 'md' | 'lg';
interface ButtonProps extends React.ButtonHTMLAttributes<HTMLButtonElement> {
variant?: Variant;
size?: Size;
loading?: boolean;
}
const variantClasses: Record<Variant, string> = {
default: 'bg-brand-600 text-white hover:bg-brand-700 focus:ring-brand-500',
destructive: 'bg-red-600 text-white hover:bg-red-700 focus:ring-red-500',
outline: 'border border-slate-300 bg-white text-slate-700 hover:bg-slate-50 focus:ring-brand-500',
ghost: 'text-slate-600 hover:bg-slate-100 hover:text-slate-900 focus:ring-brand-500',
};
const sizeClasses: Record<Size, string> = {
sm: 'px-3 py-1.5 text-sm',
md: 'px-4 py-2 text-sm',
lg: 'px-6 py-3 text-base',
};
/**
* Reusable button component with variant and size support.
*
* @param variant - Visual style: default | destructive | outline | ghost
* @param size - Size: sm | md | lg
* @param loading - When true, shows a spinner and disables the button
*/
export function Button({
variant = 'default',
size = 'md',
loading = false,
className,
children,
disabled,
...props
}: ButtonProps): React.JSX.Element {
return (
<button
className={cn(
'inline-flex items-center justify-center gap-2 rounded-md font-medium',
'focus:outline-none focus:ring-2 focus:ring-offset-2',
'disabled:pointer-events-none disabled:opacity-50',
'transition-colors duration-150',
variantClasses[variant],
sizeClasses[size],
className,
)}
disabled={disabled || loading}
{...props}
>
{loading && (
<svg className="h-4 w-4 animate-spin" fill="none" viewBox="0 0 24 24">
<circle className="opacity-25" cx="12" cy="12" r="10" stroke="currentColor" strokeWidth="4" />
<path className="opacity-75" fill="currentColor" d="M4 12a8 8 0 018-8v4a4 4 0 00-4 4H4z" />
</svg>
)}
{children}
</button>
);
}


@@ -0,0 +1,45 @@
import * as React from 'react';
import { Button } from './button';
interface DialogProps {
open: boolean;
title: string;
description: string;
confirmLabel?: string;
cancelLabel?: string;
variant?: 'default' | 'destructive';
onConfirm: () => void;
onCancel: () => void;
}
/**
* Modal confirmation dialog for destructive actions (suspend, revoke, rotate).
*/
export function ConfirmDialog({
open,
title,
description,
confirmLabel = 'Confirm',
cancelLabel = 'Cancel',
variant = 'default',
onConfirm,
onCancel,
}: DialogProps): React.JSX.Element | null {
if (!open) return null;
return (
<div className="fixed inset-0 z-50 flex items-center justify-center">
<div className="absolute inset-0 bg-black/50" onClick={onCancel} />
<div className="relative z-10 w-full max-w-md rounded-lg bg-white p-6 shadow-xl">
<h2 className="text-lg font-semibold text-slate-900">{title}</h2>
<p className="mt-2 text-sm text-slate-600">{description}</p>
<div className="mt-6 flex justify-end gap-3">
<Button variant="outline" onClick={onCancel}>{cancelLabel}</Button>
<Button variant={variant === 'destructive' ? 'destructive' : 'default'} onClick={onConfirm}>
{confirmLabel}
</Button>
</div>
</div>
</div>
);
}

dashboard/src/index.css

@@ -0,0 +1,26 @@
@tailwind base;
@tailwind components;
@tailwind utilities;
@layer base {
:root {
--background: 0 0% 100%;
--foreground: 222.2 84% 4.9%;
--muted: 210 40% 96.1%;
--muted-foreground: 215.4 16.3% 46.9%;
--border: 214.3 31.8% 91.4%;
--input: 214.3 31.8% 91.4%;
--ring: 198 89% 48%;
--radius: 0.5rem;
}
}
* {
box-sizing: border-box;
}
body {
font-family: system-ui, -apple-system, sans-serif;
background-color: #f8fafc;
color: #0f172a;
}

dashboard/src/lib/auth.tsx

@@ -0,0 +1,109 @@
import { TokenManager } from '@sentryagent/idp-sdk';
const SESSION_KEY = 'agentidp_credentials';
interface StoredCredentials {
clientId: string;
clientSecret: string;
baseUrl: string;
}
/**
* Persists user credentials to sessionStorage (cleared on tab close).
*/
export function saveCredentials(creds: StoredCredentials): void {
sessionStorage.setItem(SESSION_KEY, JSON.stringify(creds));
}
/**
* Retrieves credentials from sessionStorage.
* Returns null if not logged in.
*/
export function loadCredentials(): StoredCredentials | null {
const raw = sessionStorage.getItem(SESSION_KEY);
if (!raw) return null;
try {
return JSON.parse(raw) as StoredCredentials;
} catch {
return null;
}
}
/**
* Removes credentials from sessionStorage (logout).
*/
export function clearCredentials(): void {
sessionStorage.removeItem(SESSION_KEY);
}
/**
* Returns true if the user has stored credentials.
*/
export function isAuthenticated(): boolean {
return loadCredentials() !== null;
}
/**
* Validates stored credentials by requesting a token.
* Returns true if successful; false on auth failure.
*/
export async function validateCredentials(creds: StoredCredentials): Promise<boolean> {
try {
const tm = new TokenManager(creds.baseUrl, creds.clientId, creds.clientSecret, 'agents:read agents:write tokens:read audit:read');
await tm.getToken();
return true;
} catch {
return false;
}
}
// ── React context ──────────────────────────────────────────────────────────────
import * as React from 'react';
import { useNavigate } from 'react-router-dom';
interface AuthContextValue {
credentials: StoredCredentials | null;
login: (creds: StoredCredentials) => Promise<boolean>;
logout: () => void;
}
const AuthContext = React.createContext<AuthContextValue | null>(null);
/**
* Provides authentication state to the application.
* Reads initial state from sessionStorage on mount.
*/
export function AuthProvider({ children }: { children: React.ReactNode }): React.JSX.Element {
const [credentials, setCredentials] = React.useState<StoredCredentials | null>(loadCredentials);
const navigate = useNavigate();
const login = React.useCallback(async (creds: StoredCredentials): Promise<boolean> => {
const valid = await validateCredentials(creds);
if (valid) {
saveCredentials(creds);
setCredentials(creds);
}
return valid;
}, []);
const logout = React.useCallback((): void => {
clearCredentials();
setCredentials(null);
navigate('/dashboard/login');
}, [navigate]);
const value = React.useMemo(() => ({ credentials, login, logout }), [credentials, login, logout]);
return <AuthContext.Provider value={value}>{children}</AuthContext.Provider>;
}
/**
* Returns the current authentication context.
* Must be used inside <AuthProvider>.
*/
export function useAuth(): AuthContextValue {
const ctx = React.useContext(AuthContext);
if (!ctx) throw new Error('useAuth must be used within AuthProvider');
return ctx;
}


@@ -0,0 +1,18 @@
import { AgentIdPClient } from '@sentryagent/idp-sdk';
import { loadCredentials } from './auth';
/**
* Returns an AgentIdPClient configured with credentials from sessionStorage.
* Throws if not authenticated (caller must ensure login first).
*/
export function getClient(): AgentIdPClient {
const creds = loadCredentials();
if (!creds) {
throw new Error('Not authenticated. Please log in.');
}
return new AgentIdPClient({
baseUrl: creds.baseUrl,
clientId: creds.clientId,
clientSecret: creds.clientSecret,
});
}
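Because getClient throws when no credentials are stored, every caller must handle the unauthenticated case. A minimal sketch of that guard, using an in-memory stand-in for sessionStorage and a hypothetical `getClientOrNull` wrapper (neither is part of the real module, which constructs an `AgentIdPClient`):

```typescript
// Stand-in shapes; the real module reads sessionStorage and returns an AgentIdPClient.
interface StoredCredentials { baseUrl: string; clientId: string; clientSecret: string }

const store = new Map<string, string>(); // in-memory stand-in for sessionStorage

function loadCredentials(): StoredCredentials | null {
  const raw = store.get('credentials');
  return raw ? (JSON.parse(raw) as StoredCredentials) : null;
}

// Hypothetical wrapper for callers that prefer a null check over try/catch.
function getClientOrNull(): StoredCredentials | null {
  try {
    const creds = loadCredentials();
    if (!creds) throw new Error('Not authenticated. Please log in.');
    return creds; // real code: new AgentIdPClient({ ...creds })
  } catch {
    return null; // a caller would route to the login page here
  }
}
```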


@@ -0,0 +1,7 @@
import { clsx, type ClassValue } from 'clsx';
import { twMerge } from 'tailwind-merge';
/** Merges Tailwind class names, handling conflicts correctly. */
export function cn(...inputs: ClassValue[]): string {
return twMerge(clsx(inputs));
}

dashboard/src/main.tsx Normal file

@@ -0,0 +1,13 @@
import React from 'react';
import ReactDOM from 'react-dom/client';
import { BrowserRouter } from 'react-router-dom';
import App from './App';
import './index.css';
ReactDOM.createRoot(document.getElementById('root')!).render(
<React.StrictMode>
<BrowserRouter>
<App />
</BrowserRouter>
</React.StrictMode>,
);


@@ -0,0 +1,222 @@
import * as React from 'react';
import { useParams, useNavigate } from 'react-router-dom';
import type { Agent } from '@sentryagent/idp-sdk';
import { Badge } from '@/components/ui/badge';
import { Button } from '@/components/ui/button';
import { ConfirmDialog } from '@/components/ui/dialog';
import { getClient } from '@/lib/client';
type BadgeVariant = 'success' | 'warning' | 'danger';
/** Maps AgentStatus to a Badge variant. */
function statusVariant(status: Agent['status']): BadgeVariant {
switch (status) {
case 'active': return 'success';
case 'suspended': return 'warning';
case 'decommissioned': return 'danger';
}
}
/** Formats an ISO timestamp to a readable local date-time string. */
function formatDateTime(iso: string): string {
return new Date(iso).toLocaleString(undefined, {
year: 'numeric', month: 'short', day: 'numeric',
hour: '2-digit', minute: '2-digit',
});
}
interface DetailRowProps {
label: string;
value: string;
}
/** Single label/value row in the detail card. */
function DetailRow({ label, value }: DetailRowProps): React.JSX.Element {
return (
<div className="flex flex-col gap-1 sm:flex-row sm:gap-4">
<dt className="w-36 shrink-0 text-sm font-medium text-slate-500">{label}</dt>
<dd className="text-sm text-slate-900 break-all">{value}</dd>
</div>
);
}
type DialogAction = 'suspend' | 'reactivate';
/**
* Agent Detail page — shows all agent fields and provides suspend/reactivate actions.
* Route: /dashboard/agents/:agentId
*/
export default function AgentDetail(): React.JSX.Element {
const { agentId } = useParams<{ agentId: string }>();
const navigate = useNavigate();
const [agent, setAgent] = React.useState<Agent | null>(null);
const [loading, setLoading] = React.useState<boolean>(true);
const [error, setError] = React.useState<string | null>(null);
const [actionLoading, setActionLoading] = React.useState<boolean>(false);
const [dialog, setDialog] = React.useState<DialogAction | null>(null);
React.useEffect(() => {
if (!agentId) return;
let cancelled = false;
setLoading(true);
setError(null);
const fetchAgent = async (): Promise<void> => {
try {
const result = await getClient().agents.getAgent(agentId);
if (!cancelled) setAgent(result);
} catch (err) {
if (!cancelled) setError(err instanceof Error ? err.message : 'Failed to load agent.');
} finally {
if (!cancelled) setLoading(false);
}
};
void fetchAgent();
return () => { cancelled = true; };
}, [agentId]);
const handleAction = React.useCallback(
async (action: DialogAction): Promise<void> => {
if (!agentId) return;
setActionLoading(true);
setDialog(null);
try {
const newStatus = action === 'suspend' ? 'suspended' : 'active';
const updated = await getClient().agents.updateAgent(agentId, { status: newStatus });
setAgent(updated);
} catch (err) {
setError(err instanceof Error ? err.message : 'Action failed.');
} finally {
setActionLoading(false);
}
},
[agentId],
);
if (loading) {
return (
<div className="space-y-4">
{Array.from({ length: 6 }).map((_, i) => (
<div key={i} className="h-5 w-full animate-pulse rounded bg-slate-200" />
))}
</div>
);
}
if (error || !agent) {
return (
<div className="rounded-md bg-red-50 px-4 py-3 text-sm text-red-700" role="alert">
{error ?? 'Agent not found.'}
</div>
);
}
const dialogConfig = dialog === 'suspend'
? {
title: `Suspend agent ${agent.email}?`,
description: `Suspending ${agent.email} means it will no longer be able to authenticate.`,
confirmLabel: 'Suspend',
variant: 'destructive' as const,
}
: {
title: `Reactivate agent ${agent.email}?`,
description: `Reactivating ${agent.email} will allow it to authenticate again.`,
confirmLabel: 'Reactivate',
variant: 'default' as const,
};
return (
<div>
{/* Back navigation */}
<button
onClick={() => { navigate('/dashboard/agents'); }}
className="mb-6 flex items-center gap-1 text-sm text-brand-600 hover:text-brand-800"
>
← Back to Agents
</button>
<div className="mb-6 flex items-start justify-between gap-4">
<div>
<h1 className="text-2xl font-bold text-slate-900">{agent.email}</h1>
<p className="mt-1 text-sm text-slate-500">Agent ID: {agent.agentId}</p>
</div>
<Badge variant={statusVariant(agent.status)} className="mt-1">{agent.status}</Badge>
</div>
{error && (
<div className="mb-4 rounded-md bg-red-50 px-4 py-3 text-sm text-red-700" role="alert">
{error}
</div>
)}
{/* Detail card */}
<div className="rounded-xl border border-slate-200 bg-white p-6 shadow-sm">
<dl className="space-y-4">
<DetailRow label="Email" value={agent.email} />
<DetailRow label="Agent ID" value={agent.agentId} />
<DetailRow label="Type" value={agent.agentType} />
<DetailRow label="Version" value={agent.version} />
<DetailRow label="Owner" value={agent.owner} />
<DetailRow label="Environment" value={agent.deploymentEnv} />
<DetailRow label="Capabilities" value={agent.capabilities.join(', ') || '—'} />
<DetailRow label="Status" value={agent.status} />
<DetailRow label="Created" value={formatDateTime(agent.createdAt)} />
<DetailRow label="Updated" value={formatDateTime(agent.updatedAt)} />
</dl>
</div>
{/* Actions */}
{agent.status !== 'decommissioned' && (
<div className="mt-6 flex gap-3">
{agent.status === 'active' && (
<Button
variant="destructive"
loading={actionLoading}
onClick={() => { setDialog('suspend'); }}
>
Suspend Agent
</Button>
)}
{agent.status === 'suspended' && (
<Button
variant="default"
loading={actionLoading}
onClick={() => { setDialog('reactivate'); }}
>
Reactivate Agent
</Button>
)}
</div>
)}
{/* Credentials section */}
<div className="mt-8 rounded-xl border border-slate-200 bg-white p-6 shadow-sm">
<h2 className="mb-4 text-lg font-semibold text-slate-900">Credentials</h2>
<p className="mb-4 text-sm text-slate-600">
Manage client secrets for this agent. Rotate or revoke credentials as needed.
</p>
<Button
variant="outline"
onClick={() => { navigate(`/dashboard/agents/${agent.agentId}/credentials`); }}
>
View Credentials
</Button>
</div>
{/* Confirm dialog */}
{dialog !== null && (
<ConfirmDialog
open
title={dialogConfig.title}
description={dialogConfig.description}
confirmLabel={dialogConfig.confirmLabel}
variant={dialogConfig.variant}
onConfirm={() => { void handleAction(dialog); }}
onCancel={() => { setDialog(null); }}
/>
)}
</div>
);
}
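handleAction above maps the confirmed dialog action onto the agent's next status before calling updateAgent. That mapping, isolated as a plain function (names taken from the component, not from the SDK):

```typescript
type DialogAction = 'suspend' | 'reactivate';
type AgentStatus = 'active' | 'suspended' | 'decommissioned';

// 'suspend' moves an active agent to 'suspended'; 'reactivate' restores 'active'.
// 'decommissioned' is terminal and deliberately unreachable from this page.
function nextStatus(action: DialogAction): AgentStatus {
  return action === 'suspend' ? 'suspended' : 'active';
}
```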


@@ -0,0 +1,204 @@
import * as React from 'react';
import { useNavigate } from 'react-router-dom';
import type { Agent, AgentStatus } from '@sentryagent/idp-sdk';
import { Badge } from '@/components/ui/badge';
import { getClient } from '@/lib/client';
const PAGE_LIMIT = 20;
/** Maps AgentStatus to a Badge variant. */
function statusVariant(status: AgentStatus): 'success' | 'warning' | 'danger' {
switch (status) {
case 'active': return 'success';
case 'suspended': return 'warning';
case 'decommissioned': return 'danger';
}
}
/** Formats an ISO timestamp to a short local date string. */
function formatDate(iso: string): string {
return new Date(iso).toLocaleDateString(undefined, { year: 'numeric', month: 'short', day: 'numeric' });
}
/** Skeleton row shown while loading. */
function SkeletonRow(): React.JSX.Element {
return (
<tr>
{Array.from({ length: 6 }).map((_, i) => (
<td key={i} className="px-4 py-3">
<div className="h-4 w-full animate-pulse rounded bg-slate-200" />
</td>
))}
</tr>
);
}
/**
* Agents list page — displays all registered agents with search, status filter, and pagination.
* Clicking a row navigates to the Agent Detail page.
*/
export default function Agents(): React.JSX.Element {
const navigate = useNavigate();
const [agents, setAgents] = React.useState<Agent[]>([]);
const [total, setTotal] = React.useState<number>(0);
const [page, setPage] = React.useState<number>(1);
const [loading, setLoading] = React.useState<boolean>(false);
const [error, setError] = React.useState<string | null>(null);
// Filters (client-side email search, server-side status)
const [searchInput, setSearchInput] = React.useState<string>('');
const [debouncedSearch, setDebouncedSearch] = React.useState<string>('');
const [statusFilter, setStatusFilter] = React.useState<AgentStatus | ''>('');
// Debounce search input 300ms
React.useEffect(() => {
const timer = setTimeout(() => { setDebouncedSearch(searchInput); }, 300);
return () => { clearTimeout(timer); };
}, [searchInput]);
// Reset to page 1 on filter change
React.useEffect(() => {
setPage(1);
}, [debouncedSearch, statusFilter]);
React.useEffect(() => {
let cancelled = false;
setLoading(true);
setError(null);
const fetchAgents = async (): Promise<void> => {
try {
const client = getClient();
const result = await client.agents.listAgents({
page,
limit: PAGE_LIMIT,
status: statusFilter !== '' ? statusFilter : undefined,
});
if (!cancelled) {
setAgents(result.data);
setTotal(result.total);
}
} catch (err) {
if (!cancelled) {
setError(err instanceof Error ? err.message : 'Failed to load agents.');
}
} finally {
if (!cancelled) setLoading(false);
}
};
void fetchAgents();
return () => { cancelled = true; };
}, [page, statusFilter]);
// Client-side email filter applied after API results arrive
const filteredAgents = React.useMemo(() => {
if (!debouncedSearch.trim()) return agents;
const lower = debouncedSearch.toLowerCase();
return agents.filter((a) => a.email.toLowerCase().includes(lower));
}, [agents, debouncedSearch]);
const totalPages = Math.max(1, Math.ceil(total / PAGE_LIMIT));
return (
<div>
<div className="mb-6 flex flex-col gap-4 sm:flex-row sm:items-center sm:justify-between">
<h1 className="text-2xl font-bold text-slate-900">Agents</h1>
<div className="flex gap-3">
<input
type="search"
value={searchInput}
onChange={(e) => { setSearchInput(e.target.value); }}
placeholder="Search by email…"
className="w-60 rounded-md border border-slate-300 px-3 py-2 text-sm focus:border-brand-500 focus:outline-none focus:ring-1 focus:ring-brand-500"
/>
<select
value={statusFilter}
onChange={(e) => { setStatusFilter(e.target.value as AgentStatus | ''); }}
className="rounded-md border border-slate-300 px-3 py-2 text-sm focus:border-brand-500 focus:outline-none focus:ring-1 focus:ring-brand-500"
>
<option value="">All Statuses</option>
<option value="active">Active</option>
<option value="suspended">Suspended</option>
<option value="decommissioned">Decommissioned</option>
</select>
</div>
</div>
{error && (
<div className="mb-4 rounded-md bg-red-50 px-4 py-3 text-sm text-red-700" role="alert">
{error}
</div>
)}
<div className="overflow-hidden rounded-xl border border-slate-200 bg-white shadow-sm">
<table className="min-w-full divide-y divide-slate-200 text-sm">
<thead className="bg-slate-50">
<tr>
{['Name (Email)', 'Type', 'Status', 'Environment', 'Owner', 'Created'].map((col) => (
<th key={col} className="px-4 py-3 text-left text-xs font-semibold uppercase tracking-wide text-slate-500">
{col}
</th>
))}
</tr>
</thead>
<tbody className="divide-y divide-slate-100">
{loading
? Array.from({ length: 5 }).map((_, i) => <SkeletonRow key={i} />)
: filteredAgents.length === 0
? (
<tr>
<td colSpan={6} className="px-4 py-12 text-center text-slate-400">
No agents found.
</td>
</tr>
)
: filteredAgents.map((agent) => (
<tr
key={agent.agentId}
onClick={() => { navigate(`/dashboard/agents/${agent.agentId}`); }}
className="cursor-pointer hover:bg-slate-50"
>
<td className="px-4 py-3 font-medium text-brand-700">{agent.email}</td>
<td className="px-4 py-3 text-slate-600">{agent.agentType}</td>
<td className="px-4 py-3">
<Badge variant={statusVariant(agent.status)}>{agent.status}</Badge>
</td>
<td className="px-4 py-3 text-slate-600">{agent.deploymentEnv}</td>
<td className="px-4 py-3 text-slate-600">{agent.owner}</td>
<td className="px-4 py-3 text-slate-500">{formatDate(agent.createdAt)}</td>
</tr>
))
}
</tbody>
</table>
</div>
{/* Pagination */}
{!loading && total > 0 && (
<div className="mt-4 flex items-center justify-between text-sm text-slate-600">
<span>
Page {page} of {totalPages} ({total} total)
</span>
<div className="flex gap-2">
<button
onClick={() => { setPage((p) => Math.max(1, p - 1)); }}
disabled={page <= 1}
className="rounded-md border border-slate-300 px-3 py-1.5 hover:bg-slate-50 disabled:opacity-40"
>
Previous
</button>
<button
onClick={() => { setPage((p) => Math.min(totalPages, p + 1)); }}
disabled={page >= totalPages}
className="rounded-md border border-slate-300 px-3 py-1.5 hover:bg-slate-50 disabled:opacity-40"
>
Next
</button>
</div>
</div>
)}
</div>
);
}
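The pagination controls above keep the page inside [1, totalPages], with totalPages floored at 1 so an empty result set still renders "Page 1 of 1". The same arithmetic as a standalone sketch (the `clampPage` helper is illustrative; the component inlines it in the button handlers):

```typescript
const PAGE_LIMIT = 20;

// Floor at 1 so zero results still produce a valid single page.
function totalPages(total: number): number {
  return Math.max(1, Math.ceil(total / PAGE_LIMIT));
}

// Mirrors the Previous/Next handlers: never below 1, never past the last page.
function clampPage(page: number, total: number): number {
  return Math.min(totalPages(total), Math.max(1, page));
}
```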


@@ -0,0 +1,223 @@
import * as React from 'react';
import type { AuditEvent, AuditAction, AuditOutcome } from '@sentryagent/idp-sdk';
import { Badge } from '@/components/ui/badge';
import { getClient } from '@/lib/client';
const PAGE_LIMIT = 20;
/** All AuditAction values for the filter dropdown. */
const AUDIT_ACTIONS: AuditAction[] = [
'agent.created',
'agent.updated',
'agent.decommissioned',
'agent.suspended',
'agent.reactivated',
'token.issued',
'token.revoked',
'token.introspected',
'credential.generated',
'credential.rotated',
'credential.revoked',
'auth.failed',
];
/** Formats an ISO timestamp to a readable local date-time string. */
function formatDateTime(iso: string): string {
return new Date(iso).toLocaleString(undefined, {
year: 'numeric', month: 'short', day: 'numeric',
hour: '2-digit', minute: '2-digit', second: '2-digit',
});
}
/** Truncates a string to a maximum length with ellipsis. */
function truncate(value: string, maxLen = 24): string {
return value.length > maxLen ? `${value.slice(0, maxLen)}…` : value;
}
/**
* Audit Log page — displays audit events with filters for agent, action, outcome, and date range.
* Route: /dashboard/audit
*/
export default function AuditLog(): React.JSX.Element {
const [events, setEvents] = React.useState<AuditEvent[]>([]);
const [total, setTotal] = React.useState<number>(0);
const [page, setPage] = React.useState<number>(1);
const [loading, setLoading] = React.useState<boolean>(false);
const [error, setError] = React.useState<string | null>(null);
// Filters
const [agentIdFilter, setAgentIdFilter] = React.useState<string>('');
const [actionFilter, setActionFilter] = React.useState<AuditAction | ''>('');
const [outcomeFilter, setOutcomeFilter] = React.useState<AuditOutcome | ''>('');
const [fromDate, setFromDate] = React.useState<string>('');
const [toDate, setToDate] = React.useState<string>('');
// Reset to page 1 on filter change
React.useEffect(() => {
setPage(1);
}, [agentIdFilter, actionFilter, outcomeFilter, fromDate, toDate]);
React.useEffect(() => {
let cancelled = false;
setLoading(true);
setError(null);
const fetchEvents = async (): Promise<void> => {
try {
const result = await getClient().audit.queryAuditLog({
page,
limit: PAGE_LIMIT,
agentId: agentIdFilter.trim() || undefined,
action: actionFilter !== '' ? actionFilter : undefined,
outcome: outcomeFilter !== '' ? outcomeFilter : undefined,
fromDate: fromDate || undefined,
toDate: toDate || undefined,
});
if (!cancelled) {
setEvents(result.data);
setTotal(result.total);
}
} catch (err) {
if (!cancelled) {
setError(err instanceof Error ? err.message : 'Failed to load audit log.');
}
} finally {
if (!cancelled) setLoading(false);
}
};
void fetchEvents();
return () => { cancelled = true; };
}, [page, agentIdFilter, actionFilter, outcomeFilter, fromDate, toDate]);
const totalPages = Math.max(1, Math.ceil(total / PAGE_LIMIT));
return (
<div>
<h1 className="mb-6 text-2xl font-bold text-slate-900">Audit Log</h1>
{/* Filters */}
<div className="mb-6 grid grid-cols-1 gap-3 sm:grid-cols-2 lg:grid-cols-5">
<input
type="text"
value={agentIdFilter}
onChange={(e) => { setAgentIdFilter(e.target.value); }}
placeholder="Agent ID…"
className="rounded-md border border-slate-300 px-3 py-2 text-sm focus:border-brand-500 focus:outline-none focus:ring-1 focus:ring-brand-500"
/>
<select
value={actionFilter}
onChange={(e) => { setActionFilter(e.target.value as AuditAction | ''); }}
className="rounded-md border border-slate-300 px-3 py-2 text-sm focus:border-brand-500 focus:outline-none focus:ring-1 focus:ring-brand-500"
>
<option value="">All Actions</option>
{AUDIT_ACTIONS.map((action) => (
<option key={action} value={action}>{action}</option>
))}
</select>
<select
value={outcomeFilter}
onChange={(e) => { setOutcomeFilter(e.target.value as AuditOutcome | ''); }}
className="rounded-md border border-slate-300 px-3 py-2 text-sm focus:border-brand-500 focus:outline-none focus:ring-1 focus:ring-brand-500"
>
<option value="">All Outcomes</option>
<option value="success">Success</option>
<option value="failure">Failure</option>
</select>
<input
type="date"
value={fromDate}
onChange={(e) => { setFromDate(e.target.value); }}
className="rounded-md border border-slate-300 px-3 py-2 text-sm focus:border-brand-500 focus:outline-none focus:ring-1 focus:ring-brand-500"
title="From date"
/>
<input
type="date"
value={toDate}
onChange={(e) => { setToDate(e.target.value); }}
className="rounded-md border border-slate-300 px-3 py-2 text-sm focus:border-brand-500 focus:outline-none focus:ring-1 focus:ring-brand-500"
title="To date"
/>
</div>
{error && (
<div className="mb-4 rounded-md bg-red-50 px-4 py-3 text-sm text-red-700" role="alert">
{error}
</div>
)}
<div className="overflow-hidden rounded-xl border border-slate-200 bg-white shadow-sm">
<table className="min-w-full divide-y divide-slate-200 text-sm">
<thead className="bg-slate-50">
<tr>
{['Timestamp', 'Agent ID', 'Action', 'Outcome', 'IP Address'].map((col) => (
<th key={col} className="px-4 py-3 text-left text-xs font-semibold uppercase tracking-wide text-slate-500">
{col}
</th>
))}
</tr>
</thead>
<tbody className="divide-y divide-slate-100">
{loading
? Array.from({ length: 5 }).map((_, i) => (
<tr key={i}>
{Array.from({ length: 5 }).map((__, j) => (
<td key={j} className="px-4 py-3">
<div className="h-4 w-full animate-pulse rounded bg-slate-200" />
</td>
))}
</tr>
))
: events.length === 0
? (
<tr>
<td colSpan={5} className="px-4 py-12 text-center text-slate-400">
No audit events found.
</td>
</tr>
)
: events.map((event) => (
<tr key={event.eventId} className="hover:bg-slate-50">
<td className="px-4 py-3 text-slate-500 whitespace-nowrap">{formatDateTime(event.timestamp)}</td>
<td className="px-4 py-3 font-mono text-xs text-slate-700">{truncate(event.agentId)}</td>
<td className="px-4 py-3 text-slate-700">{event.action}</td>
<td className="px-4 py-3">
<Badge variant={event.outcome === 'success' ? 'success' : 'danger'}>
{event.outcome}
</Badge>
</td>
<td className="px-4 py-3 text-slate-500">{event.ipAddress}</td>
</tr>
))
}
</tbody>
</table>
</div>
{/* Pagination */}
{!loading && total > 0 && (
<div className="mt-4 flex items-center justify-between text-sm text-slate-600">
<span>
Page {page} of {totalPages} ({total} total)
</span>
<div className="flex gap-2">
<button
onClick={() => { setPage((p) => Math.max(1, p - 1)); }}
disabled={page <= 1}
className="rounded-md border border-slate-300 px-3 py-1.5 hover:bg-slate-50 disabled:opacity-40"
>
Previous
</button>
<button
onClick={() => { setPage((p) => Math.min(totalPages, p + 1)); }}
disabled={page >= totalPages}
className="rounded-md border border-slate-300 px-3 py-1.5 hover:bg-slate-50 disabled:opacity-40"
>
Next
</button>
</div>
</div>
)}
</div>
);
}
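The truncate helper used for agent IDs above keeps the first maxLen characters and, per its doc comment, marks the cut with an ellipsis. A standalone sketch of that behavior:

```typescript
// Truncates to maxLen characters, appending an ellipsis only when something was cut.
function truncate(value: string, maxLen = 24): string {
  return value.length > maxLen ? `${value.slice(0, maxLen)}…` : value;
}
```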


@@ -0,0 +1,264 @@
import * as React from 'react';
import { useParams, useNavigate } from 'react-router-dom';
import type { Credential, CredentialWithSecret } from '@sentryagent/idp-sdk';
import { Badge } from '@/components/ui/badge';
import { Button } from '@/components/ui/button';
import { ConfirmDialog } from '@/components/ui/dialog';
import { getClient } from '@/lib/client';
/** Truncates a string to a maximum length with ellipsis. */
function truncate(value: string, maxLen = 16): string {
return value.length > maxLen ? `${value.slice(0, maxLen)}…` : value;
}
/** Formats an ISO timestamp to a short local date string. */
function formatDate(iso: string): string {
return new Date(iso).toLocaleDateString(undefined, { year: 'numeric', month: 'short', day: 'numeric' });
}
interface NewSecretBoxProps {
secret: string;
onDismiss: () => void;
}
/**
* Displays a newly issued client secret exactly once.
* Provides a copy button and a dismiss button.
*/
function NewSecretBox({ secret, onDismiss }: NewSecretBoxProps): React.JSX.Element {
const [copied, setCopied] = React.useState<boolean>(false);
const handleCopy = React.useCallback(async (): Promise<void> => {
await navigator.clipboard.writeText(secret);
setCopied(true);
setTimeout(() => { setCopied(false); }, 2000);
}, [secret]);
return (
<div className="mb-6 rounded-lg border-2 border-green-400 bg-green-50 p-4">
<p className="mb-2 text-sm font-semibold text-green-800">
New client secret: copy it now. It will not be shown again.
</p>
<div className="flex items-center gap-3">
<code className="flex-1 break-all rounded bg-white px-3 py-2 text-sm font-mono text-green-900 border border-green-200">
{secret}
</code>
<Button variant="outline" size="sm" onClick={() => { void handleCopy(); }}>
{copied ? 'Copied!' : 'Copy'}
</Button>
</div>
<button
onClick={onDismiss}
className="mt-3 text-xs text-green-700 underline hover:text-green-900"
>
I have saved this secret. Dismiss
</button>
</div>
);
}
type DialogAction = { type: 'rotate'; credentialId: string } | { type: 'revoke'; credentialId: string };
/**
* Credentials page — lists all credentials for an agent with rotate/revoke actions.
* Route: /dashboard/agents/:agentId/credentials
*/
export default function Credentials(): React.JSX.Element {
const { agentId } = useParams<{ agentId: string }>();
const navigate = useNavigate();
const [credentials, setCredentials] = React.useState<Credential[]>([]);
const [loading, setLoading] = React.useState<boolean>(true);
const [error, setError] = React.useState<string | null>(null);
const [actionLoading, setActionLoading] = React.useState<boolean>(false);
const [dialog, setDialog] = React.useState<DialogAction | null>(null);
const [newSecret, setNewSecret] = React.useState<CredentialWithSecret | null>(null);
const fetchCredentials = React.useCallback(async (): Promise<void> => {
if (!agentId) return;
setLoading(true);
setError(null);
try {
const result = await getClient().credentials.listCredentials(agentId);
setCredentials(result.data);
} catch (err) {
setError(err instanceof Error ? err.message : 'Failed to load credentials.');
} finally {
setLoading(false);
}
}, [agentId]);
React.useEffect(() => {
void fetchCredentials();
}, [fetchCredentials]);
const handleGenerate = React.useCallback(async (): Promise<void> => {
if (!agentId) return;
setActionLoading(true);
setError(null);
try {
const result = await getClient().credentials.generateCredential(agentId, {});
setNewSecret(result);
await fetchCredentials();
} catch (err) {
setError(err instanceof Error ? err.message : 'Failed to generate credential.');
} finally {
setActionLoading(false);
}
}, [agentId, fetchCredentials]);
const handleConfirm = React.useCallback(async (): Promise<void> => {
if (!dialog || !agentId) return;
setActionLoading(true);
setDialog(null);
setError(null);
try {
if (dialog.type === 'rotate') {
const result = await getClient().credentials.rotateCredential(agentId, dialog.credentialId);
setNewSecret(result);
} else {
await getClient().credentials.revokeCredential(agentId, dialog.credentialId);
}
await fetchCredentials();
} catch (err) {
setError(err instanceof Error ? err.message : `Failed to ${dialog.type} credential.`);
} finally {
setActionLoading(false);
}
}, [dialog, agentId, fetchCredentials]);
const dialogConfig = React.useMemo(() => {
if (!dialog) return null;
if (dialog.type === 'rotate') {
return {
title: 'Rotate credential?',
description: 'The existing secret will be invalidated immediately. You will receive a new secret — store it securely.',
confirmLabel: 'Rotate',
variant: 'destructive' as const,
};
}
return {
title: 'Revoke credential?',
description: 'This will permanently revoke the credential. This cannot be undone.',
confirmLabel: 'Revoke',
variant: 'destructive' as const,
};
}, [dialog]);
return (
<div>
{/* Back navigation */}
<button
onClick={() => { navigate(`/dashboard/agents/${agentId ?? ''}`); }}
className="mb-6 flex items-center gap-1 text-sm text-brand-600 hover:text-brand-800"
>
← Back to Agent
</button>
<div className="mb-6 flex items-center justify-between">
<h1 className="text-2xl font-bold text-slate-900">Credentials</h1>
<Button
loading={actionLoading}
onClick={() => { void handleGenerate(); }}
>
Generate Credential
</Button>
</div>
{error && (
<div className="mb-4 rounded-md bg-red-50 px-4 py-3 text-sm text-red-700" role="alert">
{error}
</div>
)}
{/* New secret display — shown once */}
{newSecret !== null && (
<NewSecretBox
secret={newSecret.clientSecret}
onDismiss={() => { setNewSecret(null); }}
/>
)}
{/* Credentials table */}
<div className="overflow-hidden rounded-xl border border-slate-200 bg-white shadow-sm">
<table className="min-w-full divide-y divide-slate-200 text-sm">
<thead className="bg-slate-50">
<tr>
{['Credential ID', 'Status', 'Created', 'Actions'].map((col) => (
<th key={col} className="px-4 py-3 text-left text-xs font-semibold uppercase tracking-wide text-slate-500">
{col}
</th>
))}
</tr>
</thead>
<tbody className="divide-y divide-slate-100">
{loading ? (
Array.from({ length: 3 }).map((_, i) => (
<tr key={i}>
{Array.from({ length: 4 }).map((__, j) => (
<td key={j} className="px-4 py-3">
<div className="h-4 w-full animate-pulse rounded bg-slate-200" />
</td>
))}
</tr>
))
) : credentials.length === 0 ? (
<tr>
<td colSpan={4} className="px-4 py-12 text-center text-slate-400">
No credentials found. Generate one above.
</td>
</tr>
) : credentials.map((cred) => (
<tr key={cred.credentialId} className="hover:bg-slate-50">
<td className="px-4 py-3 font-mono text-xs text-slate-700">
{truncate(cred.credentialId, 24)}
</td>
<td className="px-4 py-3">
<Badge variant={cred.status === 'active' ? 'success' : 'muted'}>
{cred.status}
</Badge>
</td>
<td className="px-4 py-3 text-slate-500">{formatDate(cred.createdAt)}</td>
<td className="px-4 py-3">
{cred.status === 'active' && (
<div className="flex gap-2">
<Button
variant="outline"
size="sm"
disabled={actionLoading}
onClick={() => { setDialog({ type: 'rotate', credentialId: cred.credentialId }); }}
>
Rotate
</Button>
<Button
variant="destructive"
size="sm"
disabled={actionLoading}
onClick={() => { setDialog({ type: 'revoke', credentialId: cred.credentialId }); }}
>
Revoke
</Button>
</div>
)}
</td>
</tr>
))}
</tbody>
</table>
</div>
{/* Confirm dialog */}
{dialog !== null && dialogConfig !== null && (
<ConfirmDialog
open
title={dialogConfig.title}
description={dialogConfig.description}
confirmLabel={dialogConfig.confirmLabel}
variant={dialogConfig.variant}
onConfirm={() => { void handleConfirm(); }}
onCancel={() => { setDialog(null); }}
/>
)}
</div>
);
}


@@ -0,0 +1,173 @@
import * as React from 'react';
/** Shape of the /health API response. */
interface HealthResponse {
status: 'ok' | 'degraded';
version?: string;
uptime?: number;
services: {
postgres: 'connected' | 'disconnected';
redis: 'connected' | 'disconnected';
};
}
type ServiceStatus = 'connected' | 'disconnected' | 'unknown';
interface HealthState {
postgres: ServiceStatus;
redis: ServiceStatus;
version: string | null;
uptime: number | null;
lastChecked: Date | null;
reachable: boolean;
}
const initialState: HealthState = {
postgres: 'unknown',
redis: 'unknown',
version: null,
uptime: null,
lastChecked: null,
reachable: true,
};
/** Formats seconds into a human-readable uptime string. */
function formatUptime(seconds: number): string {
const days = Math.floor(seconds / 86400);
const hours = Math.floor((seconds % 86400) / 3600);
const minutes = Math.floor((seconds % 3600) / 60);
const parts: string[] = [];
if (days > 0) parts.push(`${days}d`);
if (hours > 0) parts.push(`${hours}h`);
parts.push(`${minutes}m`);
return parts.join(' ');
}
interface StatusCardProps {
label: string;
status: ServiceStatus;
}
/** Card displaying the connectivity status of a single service. */
function StatusCard({ label, status }: StatusCardProps): React.JSX.Element {
const isConnected = status === 'connected';
const isUnknown = status === 'unknown';
return (
<div className={`rounded-xl border p-6 shadow-sm ${
isUnknown
? 'border-slate-200 bg-slate-50'
: isConnected
? 'border-green-200 bg-green-50'
: 'border-red-200 bg-red-50'
}`}>
<p className="text-sm font-medium text-slate-600">{label}</p>
<div className="mt-2 flex items-center gap-2">
<span className={`inline-block h-3 w-3 rounded-full ${
isUnknown ? 'bg-slate-400' : isConnected ? 'bg-green-500' : 'bg-red-500'
}`} />
<span className={`text-lg font-semibold ${
isUnknown ? 'text-slate-600' : isConnected ? 'text-green-700' : 'text-red-700'
}`}>
{isUnknown ? 'Checking…' : isConnected ? 'Connected' : 'Disconnected'}
</span>
</div>
</div>
);
}
/**
* Health page — shows PostgreSQL and Redis connectivity status.
* Polls GET /health every 30 seconds. No authentication required.
* Route: /dashboard/health
*/
export default function Health(): React.JSX.Element {
const [health, setHealth] = React.useState<HealthState>(initialState);
const [loading, setLoading] = React.useState<boolean>(true);
const checkHealth = React.useCallback(async (): Promise<void> => {
try {
const response = await fetch('/health');
const data = (await response.json()) as HealthResponse;
setHealth({
postgres: data.services?.postgres ?? 'unknown',
redis: data.services?.redis ?? 'unknown',
version: data.version ?? null,
uptime: data.uptime ?? null,
lastChecked: new Date(),
reachable: true,
});
} catch {
setHealth((prev) => ({
...prev,
postgres: 'disconnected',
redis: 'disconnected',
lastChecked: new Date(),
reachable: false,
}));
} finally {
setLoading(false);
}
}, []);
React.useEffect(() => {
void checkHealth();
const interval = setInterval(() => { void checkHealth(); }, 30_000);
return () => { clearInterval(interval); };
}, [checkHealth]);
return (
<div>
<div className="mb-6 flex items-center justify-between">
<h1 className="text-2xl font-bold text-slate-900">System Health</h1>
<button
onClick={() => { void checkHealth(); }}
disabled={loading}
className="rounded-md border border-slate-300 px-3 py-1.5 text-sm hover:bg-slate-50 disabled:opacity-40"
>
Refresh
</button>
</div>
{!health.reachable && (
<div className="mb-6 rounded-md bg-red-50 px-4 py-3 text-sm text-red-700" role="alert">
API is unreachable. Check that the server is running.
</div>
)}
<div className="grid grid-cols-1 gap-4 sm:grid-cols-2">
<StatusCard label="PostgreSQL" status={loading ? 'unknown' : health.postgres} />
<StatusCard label="Redis" status={loading ? 'unknown' : health.redis} />
</div>
{/* Metadata */}
{(health.version !== null || health.uptime !== null) && (
<div className="mt-6 rounded-xl border border-slate-200 bg-white p-6 shadow-sm">
<h2 className="mb-4 text-base font-semibold text-slate-900">API Details</h2>
<dl className="space-y-2">
{health.version !== null && (
<div className="flex gap-4">
<dt className="w-24 text-sm font-medium text-slate-500">Version</dt>
<dd className="text-sm text-slate-900">{health.version}</dd>
</div>
)}
{health.uptime !== null && (
<div className="flex gap-4">
<dt className="w-24 text-sm font-medium text-slate-500">Uptime</dt>
<dd className="text-sm text-slate-900">{formatUptime(health.uptime)}</dd>
</div>
)}
</dl>
</div>
)}
{/* Last checked */}
{health.lastChecked !== null && (
<p className="mt-4 text-xs text-slate-400">
Last checked: {health.lastChecked.toLocaleTimeString()} (auto-refreshes every 30 seconds)
</p>
)}
</div>
);
}


@@ -0,0 +1,109 @@
import * as React from 'react';
import { useNavigate } from 'react-router-dom';
import { Button } from '@/components/ui/button';
import { useAuth } from '@/lib/auth';
/**
* Login page — accepts API Base URL, Client ID, and Client Secret.
* Validates credentials against the AgentIdP token endpoint before persisting.
*/
export default function Login(): React.JSX.Element {
const { login } = useAuth();
const navigate = useNavigate();
const [baseUrl, setBaseUrl] = React.useState<string>(window.location.origin);
const [clientId, setClientId] = React.useState<string>('');
const [clientSecret, setClientSecret] = React.useState<string>('');
const [loading, setLoading] = React.useState<boolean>(false);
const [error, setError] = React.useState<string | null>(null);
const handleSubmit = React.useCallback(
async (e: React.FormEvent<HTMLFormElement>): Promise<void> => {
e.preventDefault();
setError(null);
setLoading(true);
try {
const success = await login({ baseUrl: baseUrl.trim(), clientId: clientId.trim(), clientSecret });
if (success) {
navigate('/dashboard/agents', { replace: true });
} else {
setError('Invalid credentials. Please check your Client ID and secret.');
setClientSecret('');
}
} finally {
setLoading(false);
}
},
[login, navigate, baseUrl, clientId, clientSecret],
);
return (
<div className="flex min-h-screen items-center justify-center bg-slate-50 px-4">
<div className="w-full max-w-md rounded-xl bg-white p-8 shadow-lg">
<div className="mb-8 text-center">
<h1 className="text-2xl font-bold text-brand-700">SentryAgent.ai</h1>
<p className="mt-1 text-sm text-slate-500">AgentIdP Dashboard Sign In</p>
</div>
<form onSubmit={(e) => { void handleSubmit(e); }} className="space-y-5">
<div>
<label htmlFor="baseUrl" className="block text-sm font-medium text-slate-700">
API Base URL
</label>
<input
id="baseUrl"
type="url"
required
value={baseUrl}
onChange={(e) => { setBaseUrl(e.target.value); }}
className="mt-1 block w-full rounded-md border border-slate-300 px-3 py-2 text-sm shadow-sm focus:border-brand-500 focus:outline-none focus:ring-1 focus:ring-brand-500"
placeholder="https://api.example.com"
/>
</div>
<div>
<label htmlFor="clientId" className="block text-sm font-medium text-slate-700">
Client ID
</label>
<input
id="clientId"
type="text"
required
value={clientId}
onChange={(e) => { setClientId(e.target.value); }}
className="mt-1 block w-full rounded-md border border-slate-300 px-3 py-2 text-sm shadow-sm focus:border-brand-500 focus:outline-none focus:ring-1 focus:ring-brand-500"
placeholder="agent-uuid"
autoComplete="username"
/>
</div>
<div>
<label htmlFor="clientSecret" className="block text-sm font-medium text-slate-700">
Client Secret
</label>
<input
id="clientSecret"
type="password"
required
value={clientSecret}
onChange={(e) => { setClientSecret(e.target.value); }}
className="mt-1 block w-full rounded-md border border-slate-300 px-3 py-2 text-sm shadow-sm focus:border-brand-500 focus:outline-none focus:ring-1 focus:ring-brand-500"
autoComplete="current-password"
/>
</div>
{error && (
<p className="rounded-md bg-red-50 px-3 py-2 text-sm text-red-700" role="alert">
{error}
</p>
)}
<Button type="submit" loading={loading} className="w-full" size="lg">
{loading ? 'Validating…' : 'Sign In'}
</Button>
</form>
</div>
</div>
);
}

dashboard/src/vite-env.d.ts

@@ -0,0 +1 @@
/// <reference types="vite/client" />


@@ -0,0 +1,19 @@
/** @type {import('tailwindcss').Config} */
export default {
content: ['./index.html', './src/**/*.{ts,tsx}'],
theme: {
extend: {
colors: {
brand: {
50: '#f0f9ff',
100: '#e0f2fe',
500: '#0ea5e9',
600: '#0284c7',
700: '#0369a1',
900: '#0c4a6e',
},
},
},
},
plugins: [],
};


@@ -0,0 +1,25 @@
{
"compilerOptions": {
"tsBuildInfoFile": "./node_modules/.tmp/tsconfig.app.tsbuildinfo",
"target": "ES2020",
"useDefineForClassFields": true,
"lib": ["ES2020", "DOM", "DOM.Iterable"],
"module": "ESNext",
"skipLibCheck": true,
"moduleResolution": "bundler",
"allowImportingTsExtensions": true,
"isolatedModules": true,
"moduleDetection": "force",
"noEmit": true,
"jsx": "react-jsx",
"strict": true,
"noUnusedLocals": true,
"noUnusedParameters": true,
"noFallthroughCasesInSwitch": true,
"noUncheckedSideEffectImports": true,
"paths": {
"@/*": ["./src/*"]
}
},
"include": ["src"]
}

dashboard/tsconfig.json

@@ -0,0 +1,7 @@
{
"files": [],
"references": [
{ "path": "./tsconfig.app.json" },
{ "path": "./tsconfig.node.json" }
]
}


@@ -0,0 +1,20 @@
{
"compilerOptions": {
"tsBuildInfoFile": "./node_modules/.tmp/tsconfig.node.tsbuildinfo",
"target": "ES2022",
"lib": ["ES2023"],
"module": "ESNext",
"skipLibCheck": true,
"moduleResolution": "bundler",
"allowImportingTsExtensions": true,
"isolatedModules": true,
"moduleDetection": "force",
"noEmit": true,
"strict": true,
"noUnusedLocals": true,
"noUnusedParameters": true,
"noFallthroughCasesInSwitch": true,
"noUncheckedSideEffectImports": true
},
"include": ["vite.config.ts"]
}

dashboard/vite.config.ts

@@ -0,0 +1,17 @@
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';
import path from 'path';
export default defineConfig({
plugins: [react()],
base: '/dashboard/',
resolve: {
alias: {
'@': path.resolve(__dirname, './src'),
},
},
build: {
outDir: 'dist',
emptyOutDir: true,
},
});


@@ -1,54 +0,0 @@
version: '3.9'
services:
app:
build:
context: .
dockerfile: Dockerfile
ports:
- '3000:3000'
environment:
- DATABASE_URL=postgresql://sentryagent:sentryagent@postgres:5432/sentryagent_idp
- REDIS_URL=redis://redis:6379
- PORT=3000
env_file:
- .env
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
volumes:
- ./src:/app/src:ro
postgres:
image: postgres:14-alpine
environment:
POSTGRES_USER: sentryagent
POSTGRES_PASSWORD: sentryagent
POSTGRES_DB: sentryagent_idp
ports:
- '5432:5432'
volumes:
- postgres_data:/var/lib/postgresql/data
healthcheck:
test: ['CMD-SHELL', 'pg_isready -U sentryagent -d sentryagent_idp']
interval: 5s
timeout: 5s
retries: 5
redis:
image: redis:7-alpine
ports:
- '6379:6379'
volumes:
- redis_data:/data
healthcheck:
test: ['CMD', 'redis-cli', 'ping']
interval: 5s
timeout: 5s
retries: 5
volumes:
postgres_data:
redis_data:


@@ -0,0 +1,172 @@
# Audit Log Chain Verification Runbook — SentryAgent.ai AgentIdP
**Control:** SOC 2 CC7.2 — Audit Log Integrity
**Service:** `src/services/AuditVerificationService.ts`
**Job:** `src/jobs/AuditChainVerificationJob.ts`
**Endpoint:** `GET /api/v1/audit/verify`
---
## Overview
Every audit event in the `audit_events` PostgreSQL table is linked to the previous one
via a SHA-256 hash chain. Each event stores:
- `hash` — SHA-256 of `(eventId + timestamp.toISOString() + action + outcome + agentId + organizationId + previousHash)`
- `previous_hash` — the `hash` of the immediately preceding event (ordered by `timestamp ASC, event_id ASC`)
The first event in the chain uses `previous_hash = ''` (empty string sentinel).
A PostgreSQL trigger (`trg_audit_events_immutable`) prevents UPDATE and DELETE operations
on `audit_events`, making the log tamper-evident at the database level.
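The chain walk described above can be sketched as follows. This is a minimal illustration, not the actual `AuditVerificationService` implementation; the `AuditEvent` shape and `verifyChain` signature are assumptions based on the field list in this runbook:

```typescript
import { createHash } from 'crypto';

// Hypothetical minimal shape of an audit_events row (field names from this runbook).
interface AuditEvent {
  eventId: string;
  timestamp: Date;
  action: string;
  outcome: string;
  agentId: string;
  organizationId: string;
  hash: string;
  previousHash: string;
}

// Recompute an event's hash exactly as described above.
function computeHash(e: AuditEvent): string {
  return createHash('sha256')
    .update(
      e.eventId + e.timestamp.toISOString() + e.action + e.outcome +
      e.agentId + e.organizationId + e.previousHash,
    )
    .digest('hex');
}

// Walk the chain (events must be ordered by timestamp ASC, event_id ASC).
// Returns the eventId of the first broken link, or null if the chain is intact.
function verifyChain(events: AuditEvent[]): string | null {
  let prev = ''; // empty-string sentinel for the first event
  for (const e of events) {
    if (e.previousHash !== prev || computeHash(e) !== e.hash) {
      return e.eventId;
    }
    prev = e.hash;
  }
  return null;
}
```

Either a mismatched `previous_hash` (insertion/deletion) or a mismatched recomputed `hash` (modified row) breaks the walk at that event.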
---
## Running GET /audit/verify
### Full chain verification (no date range)
```bash
# Requires Bearer token with audit:read scope
curl -s -H "Authorization: Bearer <token>" \
"https://api.sentryagent.ai/v1/audit/verify"
```
**Response (chain intact):**
```json
{
"verified": true,
"checkedCount": 18504,
"brokenAtEventId": null
}
```
**Response (chain break detected):**
```json
{
"verified": false,
"checkedCount": 1203,
"brokenAtEventId": "c4d5e6f7-a8b9-0123-cdef-456789012345"
}
```
### Date-ranged verification
```bash
curl -s -H "Authorization: Bearer <token>" \
"https://api.sentryagent.ai/v1/audit/verify?fromDate=2026-03-01T00:00:00.000Z&toDate=2026-03-31T23:59:59.999Z"
```
### Interpreting the response
| Field | Meaning |
|---|---|
| `verified: true` | All events in the checked range maintain valid hash chain linkage |
| `verified: false` | At least one chain break detected — see `brokenAtEventId` |
| `checkedCount` | Number of events examined (0 = no events in range) |
| `brokenAtEventId` | UUID of the first event where the chain fails (`null` if verified) |
| `fromDate` / `toDate` | Echo of the date range parameters (only present if supplied) |
---
## AuditChainVerificationJob
The `AuditChainVerificationJob` runs automatically in the background every hour (default).
Configure the interval via `AUDIT_CHAIN_VERIFICATION_INTERVAL_MS` (milliseconds).
On each tick it calls `verifyChain()` and:
- Sets Prometheus gauge `agentidp_audit_chain_integrity` to **1** (passing)
- Updates `ComplianceStatusStore` with `CC7.2 = passing`
If verification fails:
- Sets gauge to **0**
- Updates `ComplianceStatusStore` with `CC7.2 = failing`
- Prometheus alert `AuditChainIntegrityFailed` fires immediately (severity: critical)
- Application logs: `[AuditChainVerificationJob] Chain BROKEN at event <uuid>`
---
## What to Do When `brokenAtEventId` is Returned
### Step 1: Preserve Evidence
Immediately capture the full state of the audit log for forensic analysis:
```sql
-- Export all events around the break point
SELECT event_id, timestamp, action, outcome, agent_id, organization_id, hash, previous_hash
FROM audit_events
WHERE timestamp >= (
SELECT timestamp - INTERVAL '1 hour'
FROM audit_events WHERE event_id = '<brokenAtEventId>'
)
ORDER BY timestamp ASC, event_id ASC;
```
Save the output to a secure, immutable location (e.g. S3 with object locking).
### Step 2: Identify the Break Type
Compare the recomputed hash for the broken event with its stored hash:
```bash
# Using Node.js
node -e "
const crypto = require('crypto');
const eventId = '<event_id>';
const timestamp = '<timestamp_from_db>';
const action = '<action>';
const outcome = '<outcome>';
const agentId = '<agent_id>';
const orgId = '<organization_id>';
const prevHash = '<previous_hash_from_db>';
const expected = crypto.createHash('sha256')
.update(eventId + new Date(timestamp).toISOString() + action + outcome + agentId + orgId + prevHash)
.digest('hex');
console.log('Expected hash:', expected);
console.log('Stored hash: <hash_from_db>');
console.log('Match:', expected === '<hash_from_db>');
"
```
Possible break types:
- **Hash mismatch only** — event data was modified after insertion
- **previous_hash mismatch** — an event was inserted/deleted before this event in the chain
- **Both mismatched** — multiple modifications or an injection attack
### Step 3: Escalate
A chain break is a **critical security incident**. Immediately:
1. Notify the security team and CISO
2. Engage incident response procedure (`docs/compliance/incident-response.md` — Audit Chain Integrity Failure section)
3. Do NOT attempt to "fix" the hash — preserve the broken state as evidence
4. Consider temporarily suspending API access pending investigation
5. Notify affected customers per data breach notification obligations
### Step 4: Forensic Investigation
Using PostgreSQL audit logs, Vault audit logs, and application logs:
- Identify which application process or database connection modified the row
- Correlate with access logs and authentication events
- Determine the extent of the compromise (single row vs. systematic)
---
## Verification Rate Limiting
`GET /audit/verify` is rate-limited to **30 requests/minute** per `client_id`.
For continuous monitoring, use `AuditChainVerificationJob` (background job, no rate limit)
and poll `GET /compliance/controls` instead.
---
## SOC 2 Evidence Package
For auditors, provide:
1. `GET /audit/verify` response (full chain, no date filter) — save as JSON
2. Prometheus metric export: `agentidp_audit_chain_integrity` time series (30/60/90 days)
3. PostgreSQL trigger definition: `\d+ audit_events` in psql
4. `src/db/migrations/020_add_audit_chain_columns.sql` — shows immutability trigger DDL
5. `docs/openapi/compliance.yaml` — endpoint specification


@@ -0,0 +1,159 @@
# Encryption Key Rotation Runbook — SentryAgent.ai AgentIdP
**Control:** SOC 2 CC6.1 — Encryption at Rest
**Service:** `src/services/EncryptionService.ts`
**Vault path:** Configured via `ENCRYPTION_KEY_VAULT_PATH` env var (default: `secret/data/agentidp/encryption-key`)
---
## Overview
AgentIdP uses AES-256-CBC column-level encryption for sensitive PostgreSQL columns.
The encryption key is a 64-character hex string (32 bytes) stored in HashiCorp Vault.
The `EncryptionService` fetches the key once and caches it in process memory.
Encrypted format: `base64(IV):base64(ciphertext)` where IV is 16 random bytes per encryption call.
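The encrypted format above can be reproduced with Node's built-in `crypto` module. This is a sketch of the format only: the real `EncryptionService` fetches its key from Vault rather than taking it as a parameter, and the function names here are illustrative:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from 'crypto';

// Produce the base64(IV):base64(ciphertext) format described above.
// key must be 32 bytes (AES-256), as decoded from the 64-char hex string.
function encryptColumn(plaintext: string, key: Buffer): string {
  const iv = randomBytes(16); // fresh random IV per encryption call
  const cipher = createCipheriv('aes-256-cbc', key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
  return `${iv.toString('base64')}:${ciphertext.toString('base64')}`;
}

function decryptColumn(value: string, key: Buffer): string {
  const [ivB64, ctB64] = value.split(':');
  const decipher = createDecipheriv('aes-256-cbc', key, Buffer.from(ivB64, 'base64'));
  return Buffer.concat([
    decipher.update(Buffer.from(ctB64, 'base64')),
    decipher.final(),
  ]).toString('utf8');
}
```

Because the IV is random per call, encrypting the same plaintext twice yields different ciphertexts; the IV travels with the value, so no separate IV storage is needed.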
---
## Key Rotation Procedure
### Prerequisites
- Access to HashiCorp Vault with write permissions to the encryption key path
- Access to the production application environment (to trigger restart)
- At least one backup of the current key stored securely offline
### Step 1: Generate a New Key
Generate a cryptographically strong 32-byte (64-character hex) key:
```bash
openssl rand -hex 32
# Example output: a1b2c3d4e5f6... (64 hex chars)
```
Record the new key securely.
### Step 2: Backup the Current Key
Before overwriting, read and securely store the current key:
```bash
vault kv get -field=encryptionKey secret/agentidp/encryption-key > /secure/backup/encryption-key-$(date +%Y%m%d).txt
```
Store in a hardware security module (HSM) or offline key store.
### Step 3: Write the New Key to Vault
```bash
vault kv put secret/agentidp/encryption-key encryptionKey="<new-64-char-hex-key>"
```
Verify the write:
```bash
vault kv get secret/agentidp/encryption-key
```
Confirm the `encryptionKey` field contains exactly 64 hex characters.
### Step 4: Restart the Application
The `EncryptionService` caches the key in process memory. A restart forces a re-fetch from Vault:
```bash
# Kubernetes rolling restart
kubectl rollout restart deployment/agentidp
# Docker Compose
docker compose restart app
# PM2
pm2 restart agentidp
```
### Step 5: Verify Key Pick-Up
Check the application logs for:
```
[AgentIdP] EncryptionService enabled — sensitive columns encrypted at rest (SOC 2 CC6.1)
```
Call the compliance controls endpoint to confirm the control is passing:
```bash
curl -s https://api.sentryagent.ai/v1/compliance/controls | jq '.controls[] | select(.id == "CC6.1")'
```
Expected output:
```json
{ "id": "CC6.1", "name": "Encryption at Rest", "status": "passing", "lastChecked": "..." }
```
### Step 6: Re-encryption of Existing Rows
Existing rows encrypted with the old key will fail to decrypt after key rotation.
Re-encryption happens lazily: the next time each row is read and re-written (e.g. credential rotation,
webhook update), the application will decrypt with the old key and re-encrypt with the new one.
For immediate full re-encryption, use the re-encryption script:
```bash
# Run the re-encryption migration script (reads old key from backup, encrypts with new key)
# Note: This script requires both old and new keys to be available
ts-node scripts/reencrypt-columns.ts --old-key-file /secure/backup/encryption-key-<date>.txt
```
---
## Emergency Rollback
If the new key causes issues (e.g. test failures, decryption errors), roll back:
### Step 1: Restore Old Key to Vault
```bash
vault kv put secret/agentidp/encryption-key encryptionKey="<old-64-char-hex-key-from-backup>"
```
### Step 2: Restart the Application
```bash
kubectl rollout restart deployment/agentidp
```
### Step 3: Verify Recovery
```bash
curl -s https://api.sentryagent.ai/v1/compliance/controls | jq '.controls[] | select(.id == "CC6.1")'
```
### Step 4: Investigate Root Cause
Review application logs for `AES-256-CBC decryption failed` errors and audit the cause before
reattempting rotation.
---
## Troubleshooting
| Symptom | Likely Cause | Resolution |
|---|---|---|
| `Invalid encryption key ... expected a 64-character hex string` | Key in Vault is wrong length or encoding | Re-write correct key to Vault, restart |
| `AES-256-CBC decryption failed — possible key mismatch` | Key rotated but rows still encrypted with old key | Rollback to old key, then migrate properly |
| `CC6.1` status shows `unknown` | Vault unreachable, key fetch failed | Check Vault connectivity, `VAULT_ADDR`, `VAULT_TOKEN` |
---
## Audit Evidence
After rotation, record the following for SOC 2 evidence:
- Date of rotation
- Who performed the rotation (approver + executor)
- Vault audit log entry confirming the key write
- Application log confirming EncryptionService initialised with new key
- `GET /compliance/controls` response showing CC6.1 = passing


@@ -0,0 +1,229 @@
# Incident Response Runbook — SentryAgent.ai AgentIdP
**Owner:** Security Engineering
**Last updated:** 2026-03-31
**Applies to:** Production AgentIdP deployments
This runbook covers the four incident types most relevant to SOC 2 Type II compliance monitoring.
---
## 1. Auth Failure Spike
### Detection
**Prometheus alert:** `AuthFailureSpike`
```yaml
expr: rate(agentidp_http_requests_total{status_code="401"}[5m]) > 0.5
for: 2m
severity: warning
```
Triggers when the rate of HTTP 401 responses exceeds 0.5 per second sustained over 2 minutes.
### Immediate Actions
1. Acknowledge the alert in PagerDuty / alerting system
2. Check whether the spike correlates with a scheduled process (e.g. batch agent key rotation, deployment)
3. Check Prometheus dashboard for the geographic distribution of the failing requests
### Investigation Steps
1. **Identify source agents:**
```bash
# Query audit log for recent auth failures
curl -s -H "Authorization: Bearer <admin-token>" \
"https://api.sentryagent.ai/v1/audit?action=auth.failed&limit=100"
```
2. **Check for brute-force patterns:**
Look for repeated failures from the same `client_id` or IP address.
3. **Check if an agent's credentials expired:**
```bash
# Look for expired credentials
psql "$DATABASE_URL" -c "
SELECT credential_id, client_id, expires_at
FROM credentials
WHERE status = 'active' AND expires_at < NOW()
ORDER BY expires_at DESC LIMIT 20;"
```
4. **Check for key compromise signals:**
- Multiple agents failing simultaneously → possible key store issue
- Single agent with high failure rate → possible credential stuffing or misconfiguration
### Escalation Path
- **Warning (< 2 req/s):** Engineering on-call investigates within 1 hour
- **Critical (> 2 req/s sustained):** CISO notified, potential account compromise investigation
- **If credential compromise confirmed:** Revoke affected credentials immediately via `POST /agents/:id/credentials/:credId/revoke`
---
## 2. Anomalous Token Issuance
### Detection
**Prometheus alert:** `AnomalousTokenIssuance`
```yaml
expr: rate(agentidp_tokens_issued_total[5m]) > 10
for: 5m
severity: warning
```
Triggers when token issuance rate exceeds 10 per second for 5 continuous minutes.
### Immediate Actions
1. Acknowledge the alert
2. Determine if a legitimate mass-scale operation is underway (e.g. new customer onboarding, load test)
3. Check the `scope` label breakdown on `agentidp_tokens_issued_total` to identify what scopes are being requested
### Investigation Steps
1. **Identify top issuing agents:**
```bash
# Query audit log for recent token issuances
curl -s -H "Authorization: Bearer <admin-token>" \
"https://api.sentryagent.ai/v1/audit?action=token.issued&limit=100"
```
2. **Check monthly token budget:**
Each agent is limited to 10,000 tokens/month (free tier). A single agent hitting the limit may indicate automation abuse.
3. **Check for abnormal scope combinations:**
If tokens are being issued with `admin:orgs` or `audit:read` at high volume, this warrants immediate investigation.
4. **Check for valid business reason:**
Contact the organization owner for the top-issuing agents.
### Escalation Path
- **Warning:** Engineering on-call investigates within 4 hours
- **If compromise suspected:** Revoke affected agent tokens via Redis revocation list, rotate credentials
- **If systematic abuse confirmed:** Suspend the issuing agent(s) via `PATCH /agents/:id` with `status: suspended`
---
## 3. Audit Chain Integrity Failure
### Detection
**Prometheus alert:** `AuditChainIntegrityFailed`
```yaml
expr: agentidp_audit_chain_integrity == 0
for: 0m
severity: critical
```
Fires immediately when `AuditChainVerificationJob` detects a break in the audit event hash chain.
This is a **CRITICAL** security event — possible evidence of log tampering.
### Immediate Actions
1. **Do NOT attempt to repair the broken chain** — preserve all evidence
2. Notify CISO and security team immediately
3. Page the on-call security engineer with P0 priority
4. Capture the current state:
```bash
curl -s -H "Authorization: Bearer <audit-token>" \
"https://api.sentryagent.ai/v1/audit/verify" | tee /secure/incident-$(date +%Y%m%d-%H%M).json
```
### Investigation Steps
1. **Determine the broken event:**
The `brokenAtEventId` field in the `/audit/verify` response identifies the first broken event.
2. **Forensic analysis:**
Follow the steps in `docs/compliance/audit-log-runbook.md` — "What to Do When brokenAtEventId is Returned".
3. **Check database access logs:**
Review PostgreSQL `pg_stat_activity` and connection logs for unauthorized direct DB access.
4. **Check application logs:**
Look for any errors from the immutability trigger (`trg_audit_events_immutable`).
5. **Check Vault audit logs:**
Review whether any encryption key access was abnormal.
### Escalation Path
- **Immediate:** CISO + Legal + Security Engineering
- **Within 1 hour:** Begin forensic preservation per incident response plan
- **Within 24 hours:** Determine scope of compromise and notification obligations
- **Customer notification:** Per contractual and regulatory obligations (GDPR, SOC 2 requirements)
---
## 4. Webhook Dead-Letter Accumulation
### Detection
**Prometheus alert:** `WebhookDeadLetterAccumulating`
```yaml
expr: increase(agentidp_webhook_dead_letters_total[1h]) > 10
for: 0m
severity: critical
```
Fires when more than 10 webhook deliveries reach dead-letter status within an hour.
### Immediate Actions
1. Acknowledge the alert
2. Check which `organization_id` labels are accumulating dead-letters:
```bash
# Prometheus query: top organizations by dead-letter rate over the last hour
# topk(10, sum by (organization_id) (increase(agentidp_webhook_dead_letters_total[1h])))
```
3. Check if the destination endpoints are reachable:
```bash
curl -I https://<webhook-destination-url>/
```
### Investigation Steps
1. **List affected webhook subscriptions:**
```bash
# Query delivery records for dead-letter status
psql "$DATABASE_URL" -c "
SELECT s.id, s.organization_id, s.url, COUNT(d.id) AS dead_letters
FROM webhook_subscriptions s
JOIN webhook_deliveries d ON d.subscription_id = s.id
WHERE d.status = 'dead_letter'
AND d.updated_at > NOW() - INTERVAL '2 hours'
GROUP BY s.id
ORDER BY dead_letters DESC
LIMIT 20;"
```
2. **Check delivery failure reasons:**
```bash
psql "$DATABASE_URL" -c "
SELECT http_status_code, COUNT(*) as count
FROM webhook_deliveries
WHERE status = 'dead_letter'
AND updated_at > NOW() - INTERVAL '2 hours'
GROUP BY http_status_code;"
```
3. **Common causes and resolutions:**
| HTTP Status | Likely Cause | Resolution |
|---|---|---|
| 0 / null | Network unreachable / DNS failure | Check recipient endpoint availability |
| 401 / 403 | HMAC signature validation failing | Customer to verify HMAC secret |
| 404 | Endpoint URL changed | Customer to update webhook URL |
| 5xx | Recipient server error | Customer to investigate their endpoint |
| Timeout | Slow recipient endpoint | Customer to optimize endpoint response time |
4. **Notify affected customers:**
Contact the organization owner for high-volume dead-letter subscriptions.
### Escalation Path
- **Warning (10-50/hr):** Engineering notifies affected customers, investigates endpoint health
- **Critical (> 50/hr):** Engineering on-call + Platform reliability team engaged
- **If systemic delivery infrastructure failure:** Activate incident bridge, escalate to VP Engineering


@@ -0,0 +1,142 @@
# Secrets Rotation Runbook — SentryAgent.ai AgentIdP
**Control:** SOC 2 CC9.2 — Secrets Rotation
**Last updated:** 2026-03-31
---
## Overview
AgentIdP manages three categories of secrets that require periodic rotation:
1. **Agent client secrets** — Per-credential client secrets used for OAuth 2.0 token issuance
2. **OIDC signing keys** — RSA/EC keys used to sign ID tokens
3. **AES-256-CBC encryption key** — Column-level database encryption key (see `encryption-runbook.md`)
---
## 1. Agent Credential (Client Secret) Rotation
### API endpoint
```
POST /api/v1/agents/:agentId/credentials/:credentialId/rotate
```
Requires Bearer token with `agents:write` scope.
### Procedure
```bash
# 1. List active credentials for the agent
curl -s -H "Authorization: Bearer <token>" \
"https://api.sentryagent.ai/v1/agents/<agentId>/credentials?status=active"
# 2. Rotate the credential (generate new secret)
curl -s -X POST \
-H "Authorization: Bearer <token>" \
-H "Content-Type: application/json" \
-d '{"expiresAt": "2027-03-31T00:00:00.000Z"}' \
"https://api.sentryagent.ai/v1/agents/<agentId>/credentials/<credentialId>/rotate"
# Response includes the new clientSecret — store it immediately; it is never shown again
```
### Key points
- The new `clientSecret` is returned **once only** — store it securely before the response is discarded
- The agent's previous secret is immediately invalidated (Vault KV v2 version overwritten)
- An audit event `credential.rotated` is logged to the immutable audit chain
- A `credential.rotated` webhook event is dispatched to all active subscriptions
### Recommended rotation schedule
| Credential type | Recommended rotation interval |
|---|---|
| Production agent credentials | 90 days |
| Staging / development credentials | 180 days |
| Service account credentials | 365 days (annual) |
| Credentials involved in a security incident | Immediately |
### Automated expiry detection
`SecretsRotationJob` runs hourly and queries credentials expiring within 7 days.
Prometheus alert `CredentialExpiryApproaching` fires immediately when any are detected.
Respond to this alert by rotating the flagged credential(s) before the expiry date.
---
## 2. OIDC Signing Key Rotation
### Overview
OIDC signing keys are managed by `OIDCKeyService` (`src/services/OIDCKeyService.ts`).
Keys are stored in the `oidc_keys` PostgreSQL table. The current active key is used to
sign all new ID tokens; public keys are exposed via `GET /.well-known/jwks.json`.
### When to rotate
- Key compromise or suspected exposure
- Scheduled rotation (recommended every 90 days for production)
- Algorithm upgrade (e.g. RS256 → ES256)
### Rotation procedure
OIDC key rotation is handled automatically by `OIDCKeyService.ensureCurrentKey()`:
```bash
# Force generation of a new signing key by calling the internal rotate endpoint
# (or trigger by redeploying with OIDC_FORCE_KEY_ROTATION=true)
# 1. Mark current key as inactive (if manual rotation is required)
psql "$DATABASE_URL" -c "
UPDATE oidc_keys
SET active = false
WHERE active = true;"
# 2. Restart the application — ensureCurrentKey() will generate a new key on startup
kubectl rollout restart deployment/agentidp
```
### JWKS update behavior
- Old public keys remain in `GET /.well-known/jwks.json` for **24 hours** after rotation
(grace period for in-flight tokens)
- After the grace period, old keys are removed from the JWKS endpoint
- Redis JWKS cache TTL is configured by `JWKS_CACHE_TTL_SECONDS` (default: 3600)
### Impact on existing tokens
Existing valid tokens signed with the old key **continue to work** until they expire,
as long as the old public key remains in JWKS. After the grace period, old tokens
will fail verification.
---
## 3. Encryption Key Rotation
See `docs/compliance/encryption-runbook.md` for the full AES-256-CBC encryption key rotation procedure.
**Summary:** Generate new 32-byte hex key → write to Vault at `ENCRYPTION_KEY_VAULT_PATH` → restart app → existing rows re-encrypted lazily on next read-write cycle.
---
## Schedule Recommendations
| Secret Type | Production Interval | Staging Interval | Trigger for Immediate Rotation |
|---|---|---|---|
| Agent client secrets | 90 days | 180 days | Credential suspected compromised |
| OIDC signing keys | 90 days | 180 days | Key file exposed, algorithm upgrade |
| AES-256-CBC encryption key | 365 days (annual) | On demand | Key exposed, Vault breach, compliance audit requirement |
| Webhook HMAC secrets | Per customer policy | N/A | Webhook endpoint compromised |
---
## Compliance Evidence
For SOC 2 CC9.2 evidence collection:
- Prometheus metric history: `agentidp_credentials_expiring_soon_total`
- Audit log entries with `action: credential.rotated` — query via `GET /audit?action=credential.rotated`
- Key rotation records from Vault audit log
- This runbook + sign-off from Security Engineering


@@ -0,0 +1,42 @@
# SOC 2 Type II Controls Matrix — SentryAgent.ai AgentIdP
This document maps the five in-scope SOC 2 Trust Services Criteria (TSC) controls to their
corresponding implementation artefacts, mechanisms, and automated verification methods.
---
## Controls Matrix
| Control ID | TSC Criterion Name | Implementation File | Mechanism | Automated Check |
|---|---|---|---|---|
| **CC6.1** | Encryption at Rest | `src/services/EncryptionService.ts` | AES-256-CBC column-level encryption on `credentials.secret_hash`, `credentials.vault_path`, `webhook_subscriptions.vault_secret_path`, `agent_did_keys.vault_key_path`. Key is stored in HashiCorp Vault KV v2 at path configured by `ENCRYPTION_KEY_VAULT_PATH`. IV is randomised per encryption call. Backward-compat: `isEncrypted()` gate allows plaintext rows to coexist during migration. | `GET /api/v1/compliance/controls` returns `CC6.1` status. Status is set to `passing` on service startup when `EncryptionService` initialises. |
| **CC6.7** | TLS Enforcement | `src/middleware/TLSEnforcementMiddleware.ts` | Express middleware registered as the **first** middleware in the app stack (before all routes and body parsers). In `NODE_ENV=production`, checks `X-Forwarded-Proto` header set by the upstream load balancer/reverse proxy. Any non-HTTPS request receives a `301 Moved Permanently` redirect to `https://`. | `GET /api/v1/compliance/controls` returns `CC6.7` status. TLS enforcement is a static configuration control; status is set to `passing` on application startup. |
| **CC7.2** | Audit Log Integrity | `src/services/AuditVerificationService.ts`, `src/repositories/AuditRepository.ts`, `src/jobs/AuditChainVerificationJob.ts` | Each audit event (`audit_events` table) stores a `hash` (SHA-256 of `eventId + timestamp + action + outcome + agentId + organizationId + previousHash`) and `previous_hash` linking it to the prior event. An immutability trigger prevents UPDATE/DELETE on `audit_events`. `AuditChainVerificationJob` re-walks the entire chain every hour. | Prometheus gauge `agentidp_audit_chain_integrity` (1 = passing, 0 = failing). Prometheus alert `AuditChainIntegrityFailed` fires when gauge = 0. `GET /api/v1/audit/verify` triggers an on-demand verification. `GET /api/v1/compliance/controls` returns `CC7.2` status. |
| **CC9.2** | Secrets Rotation | `src/jobs/SecretsRotationJob.ts` | `SecretsRotationJob` runs every hour (configurable via `SECRETS_ROTATION_CHECK_INTERVAL_MS`) and queries `credentials` for `active` credentials expiring within 7 days. For each, it increments the `agentidp_credentials_expiring_soon_total` Prometheus counter with the owning `agent_id`. Operators are expected to act on the alert within the 7-day window. | Prometheus counter `agentidp_credentials_expiring_soon_total` per `agent_id`. Prometheus alert `CredentialExpiryApproaching` fires when any increase is detected. `GET /api/v1/compliance/controls` returns `CC9.2` status. |
| **CC7.1** | Webhook Dead-Letter Monitoring | `src/workers/WebhookDeliveryWorker.ts` | `WebhookDeliveryWorker` processes webhook deliveries from a Redis queue. After exhausting all retry attempts (configurable `WEBHOOK_MAX_RETRIES`), the delivery is moved to dead-letter status and `agentidp_webhook_dead_letters_total` is incremented. | Prometheus counter `agentidp_webhook_dead_letters_total` per `organization_id`. Prometheus alert `WebhookDeadLetterAccumulating` fires when > 10 dead-letters accumulate in 1 hour. `GET /api/v1/compliance/controls` returns `CC7.1` status. |
---
## Evidence Collection
For a SOC 2 Type II audit, the following evidence should be collected:
| Evidence Type | Collection Method |
|---|---|
| Encryption at rest configuration | Export Vault KV v2 policy + `_encryption_migration_log` table contents |
| TLS certificate and enforcement logs | Load balancer access logs + `X-Forwarded-Proto` middleware responses |
| Audit chain integrity report | `GET /api/v1/audit/verify` with full date range |
| Secrets rotation compliance | Prometheus metric history for `agentidp_credentials_expiring_soon_total` |
| Webhook dead-letter rate | Prometheus metric history for `agentidp_webhook_dead_letters_total` |
| Immutable audit log dump | Direct PostgreSQL export of `audit_events` table with hash verification |
---
## References
- SOC 2 Trust Services Criteria: [AICPA TSC 2017](https://www.aicpa.org/resources/article/trust-services-criteria)
- OpenAPI spec: `docs/openapi/compliance.yaml`
- Encryption runbook: `docs/compliance/encryption-runbook.md`
- Audit log runbook: `docs/compliance/audit-log-runbook.md`
- Incident response: `docs/compliance/incident-response.md`
- Secrets rotation: `docs/compliance/secrets-rotation.md`

# SentryAgent.ai AgentIdP — Developer Documentation
The complete documentation for developers building with SentryAgent.ai AgentIdP.
## What is this?
SentryAgent.ai AgentIdP is a free, open-source Identity Provider built specifically for AI agents.
| Guide | What it covers |
|-------|----------------|
| [Register an Agent](guides/register-an-agent.md) | All registration fields, org scoping, validation rules, common errors |
| [Manage Credentials](guides/manage-credentials.md) | Generate, list, rotate, revoke credentials |
| [Issue and Revoke Tokens](guides/issue-and-revoke-tokens.md) | OAuth 2.0 client credentials flow, introspect, revoke |
| [Query Audit Logs](guides/query-audit-logs.md) | Filters, pagination, event structure, retention |
| [Use the Analytics Dashboard](guides/use-analytics-dashboard.md) | Query token trends, activity heatmap, per-agent usage |
| [Manage API Tiers](guides/manage-api-tiers.md) | Check current tier, understand limits, trigger upgrade |
| [A2A Delegation](guides/a2a-delegation.md) | Create and verify agent-to-agent delegation chains |
| [Configure Webhooks](guides/configure-webhooks.md) | Subscribe to events, delivery guarantees, inspect history |
| [AGNTCY Compliance](guides/agntcy-compliance.md) | Export agent cards, generate compliance reports, verify audit chain |
## Base URL


AgentIdP is free. These are the limits on the free tier:
| Audit log retention | 90 days | Events older than 90 days are automatically purged; queries return empty results |
The monthly token counter resets on the first day of each calendar month. The rate limit window resets every 60 seconds; the reset timestamp is in the `X-RateLimit-Reset` response header.
---
## Organizations and Multi-tenancy
An **organization** is the top-level grouping unit in AgentIdP. Every registered agent can be
scoped to an organization by including an `organization_id` in the agent registration request.
Organizations have a unique `slug` (URL-safe identifier), a display `name`, and a `planTier`
that controls per-org resource limits. All API operations that involve analytics, webhooks, tiers,
and delegation are tenant-scoped: they only see data belonging to their organization.
**Tenant isolation** is enforced at the service layer. Every query involving multi-tenant data
filters by `organization_id`. A token issued to an agent in org A cannot read data from org B.
The `organization_id` is embedded in the JWT at token issuance time and validated on every
request. This means you do not need to pass an org ID as a query parameter — it is derived
automatically from the authenticated token.
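The enforcement pattern described above can be sketched as a small guard. This is illustrative only: the `organization_id` claim name comes from this section, but the function names and error shape are assumptions, not AgentIdP's actual service code.

```javascript
// Derive the tenant from the verified JWT claims, never from the query
// string. Throwing when the claim is absent makes the request fail closed
// (mapped to a 403 by the error handler).
function requireOrganizationId(claims) {
  const orgId = claims.organization_id;
  if (!orgId) {
    const err = new Error('Missing organization_id claim');
    err.status = 403;
    throw err;
  }
  return orgId;
}

// Every multi-tenant query is forced to filter by the derived tenant, so a
// token issued to an agent in org A can never read data belonging to org B.
function scopeFilters(claims, filters) {
  return { ...filters, organizationId: requireOrganizationId(claims) };
}
```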
When you create an organization, you define its `slug`. Slugs are immutable — once set, they
cannot be changed. Choose a slug that matches your domain or product namespace, as it is used
in DID identifiers for agents in that organization. Membership is managed through the
`POST /api/v1/organizations/{orgId}/members` endpoint, which lets you add an existing agent
to an organization with a `member` or `admin` role.
| Field | Type | Description |
|-------|------|-------------|
| `organizationId` | UUID | System-assigned immutable identifier |
| `name` | string | Human-readable display name |
| `slug` | string | URL-safe unique identifier (immutable after creation) |
| `planTier` | enum | `free` \| `pro` \| `enterprise` |
| `maxAgents` | integer | Maximum active agents in this org |
| `maxTokensPerMonth` | integer | Maximum token issuances per month |
| `status` | enum | `active` \| `suspended` \| `deleted` |
---
## DID Identity
Every agent registered in AgentIdP automatically receives a **Decentralized Identifier (DID)**
using the `did:web` method. A DID is a globally unique, self-describing identifier that does not
rely on a central registry. The DID for an agent takes the form
`did:web:<host>:agents:<agentId>` — for example,
`did:web:localhost%3A3000:agents:a1b2c3d4-e5f6-7890-abcd-ef1234567890`. The `did:web` method
means the DID document is resolvable via HTTPS: a resolver fetches
`https://<host>/api/v1/agents/<agentId>/did`.
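The DID-to-URL mapping can be expressed as a small helper. This sketch assumes the exact `did:web:<host>:agents:<agentId>` shape shown above, with the host percent-encoded per the `did:web` method; the helper name is hypothetical.

```javascript
// Map did:web:<host>:agents:<agentId> to the HTTPS URL where AgentIdP
// serves the DID Document. The host segment is percent-decoded, so a port
// colon encoded as %3A becomes ':' again.
function didDocumentUrl(did) {
  const parts = did.split(':');
  if (parts[0] !== 'did' || parts[1] !== 'web' || parts[3] !== 'agents') {
    throw new Error('Unsupported DID format: ' + did);
  }
  const host = decodeURIComponent(parts[2]);
  const agentId = parts[4];
  return `https://${host}/api/v1/agents/${agentId}/did`;
}
```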
The **DID Document** is a JSON-LD object that describes the agent's cryptographic keys and
service endpoints. It contains: the agent's DID as its `id`, a `verificationMethod` array with
the agent's public key in JWK format, an `authentication` array referencing that key, and an
`agntcy` extension object carrying agent metadata (type, capabilities, version, owner,
deploymentEnv). This document is publicly accessible — no authentication required — so any
external system can verify this agent's identity without contacting AgentIdP directly.
The `did:web` scheme was chosen because it is widely supported by DID resolvers, requires no
blockchain, and leverages standard HTTPS infrastructure. When an external system receives a
token from your agent, it can resolve your agent's DID, retrieve the public key from the DID
Document, and independently verify the token's signature. This is the foundation of
cross-system agent identity verification.
```
DID Document structure for a registered agent
───────────────────────────────────────────────
{
"@context": ["https://www.w3.org/ns/did/v1"],
"id": "did:web:<host>:agents:<agentId>",
"controller": "did:web:<host>:agents:<agentId>",
"verificationMethod": [
{
"id": "<did>#key-1",
"type": "JsonWebKey2020",
"controller": "<did>",
"publicKeyJwk": { "kty": "RSA", ... }
}
],
"authentication": ["<did>#key-1"],
"agntcy": {
"agentId": "<uuid>",
"agentType": "screener",
"capabilities": ["resume:read"],
"deploymentEnv": "production",
"owner": "talent-team",
"version": "1.0.0"
}
}
```
---
## OIDC Provider
AgentIdP implements a subset of the **OpenID Connect (OIDC)** protocol, acting as an OIDC
Provider for the agents it manages. This means AgentIdP publishes a standard discovery
document at `GET /.well-known/openid-configuration`, which any OIDC-aware client can use to
discover supported grant types, token endpoint, JWKS URI, and other metadata. It also exposes
a JWKS endpoint at `GET /.well-known/jwks.json` for external systems to retrieve the public
keys used to verify tokens.
The **`/agent-info` endpoint** is the equivalent of OIDC's UserInfo endpoint — it returns
identity claims for the authenticated agent. External systems that receive a token issued by
AgentIdP can call this endpoint (with that token) to retrieve the agent's verified identity
attributes: its `agentId`, `email`, `agentType`, `capabilities`, and `organization_id`. This
is particularly useful when a downstream service needs to verify the identity of an agent
presenting a token, without duplicating identity data in its own store.
AgentIdP also supports **OIDC token exchange for GitHub Actions**. If you run your agent
deployment workflows in GitHub Actions, you can configure a trust policy
(`POST /api/v1/oidc/trust-policies`) that maps a GitHub repository and branch to an AgentIdP
agent. The workflow can then exchange its GitHub OIDC JWT for an AgentIdP access token via
`POST /api/v1/oidc/token` — no stored secrets required. This enables keyless, short-lived
token issuance in CI/CD pipelines.
---
## A2A Delegation
**Agent-to-Agent (A2A) delegation** allows one agent to grant another agent a subset of its own
OAuth 2.0 scopes for a limited time. This is the building block for multi-agent pipelines where
an orchestrator agent needs to delegate work to a specialist sub-agent without sharing its own
full credentials. A delegation chain consists of: a delegator (the agent granting authority),
a delegatee (the agent receiving authority), a set of scopes (must be a strict subset of the
delegator's own scopes), and a TTL (60 seconds to 86,400 seconds).
The **grant flow** is straightforward: the delegator calls `POST /api/v1/oauth2/token/delegate`
with the delegatee's agent ID, the scopes to grant, and the TTL. AgentIdP returns a signed
delegation token. The delegatee presents this token when calling
`POST /api/v1/oauth2/token/verify-delegation` to prove it has been granted authority. AgentIdP
verifies the chain integrity and returns the delegation details including whether it is still
valid. The delegator can revoke the chain at any time via
`DELETE /api/v1/oauth2/token/delegate/{chainId}`.
Delegation is useful for: workflow handoffs between specialist agents, granting a monitoring
agent read-only access to resources owned by a processing agent, and time-limited cross-agent
authorization without credential sharing. Because delegation tokens are signed and verified
server-side, a delegatee cannot extend the TTL, expand the scope, or pass the delegation to a
third agent. The chain always has exactly two parties: delegator → delegatee.
```
A2A Delegation Flow
───────────────────
1. Orchestrator (delegator) calls POST /api/v1/oauth2/token/delegate
→ body: { delegateeAgentId, scopes: ["agents:read"], ttlSeconds: 3600 }
← response: { delegationToken: "...", chainId: "...", expiresAt: "..." }
2. Orchestrator passes delegationToken to the sub-agent out-of-band
3. Sub-agent (delegatee) calls POST /api/v1/oauth2/token/verify-delegation
→ body: { delegationToken: "..." }
← response: { valid: true, scopes: ["agents:read"], expiresAt: "..." }
4. Sub-agent uses its own Bearer token + confirmed scope to act on behalf
5. (Optional) Orchestrator calls DELETE /api/v1/oauth2/token/delegate/{chainId}
to revoke early
```
---
## API Tier Plans
AgentIdP has three subscription tiers: **Free**, **Pro**, and **Enterprise**. Every organization
is on one tier at a time. The tier determines the resource limits enforced at runtime: maximum
number of active agents, maximum API calls per day, and maximum token issuances per day. When a
limit is reached, the relevant operation returns a `403 FREE_TIER_LIMIT_EXCEEDED` error until the
next calendar day resets the counter (for daily limits) or until you upgrade your tier.
You can check your current tier, configured limits, and live usage at any time by calling
`GET /api/v1/tiers/status`. The response shows your tier name, all three limit values, and the
live usage counters for the current day. If you need higher limits, call
`POST /api/v1/tiers/upgrade` with `{ "target_tier": "pro" }` or `"enterprise"`. This creates a
Stripe Checkout Session and returns a one-time `checkoutUrl`. After payment, the organization's
tier is updated automatically via Stripe webhook.
Enterprise tier limits are effectively unlimited (enforced as `Infinity` in the tier
configuration). Enterprise customers should contact SentryAgent.ai to arrange billing and
configure custom limits if needed. The `maxAgents` and `maxTokensPerMonth` fields on an
organization record can be overridden at org creation or update to set tighter or looser limits
than the tier defaults, regardless of tier.
| Limit | Free | Pro | Enterprise |
|-------|------|-----|------------|
| Max agents | 10 | 100 | Unlimited |
| Max API calls / day | 1,000 | 50,000 | Unlimited |
| Max token issuances / day | 1,000 | 50,000 | Unlimited |
| Audit log retention | 90 days | 90 days | 90 days |
| Webhooks | Yes | Yes | Yes |
| Analytics | Yes | Yes | Yes |
| A2A Delegation | Yes | Yes | Yes |
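The runtime check can be sketched as follows. The per-tier numbers come from the table above and `Infinity` for Enterprise matches the note about enterprise enforcement; the object and function names are assumptions for illustration.

```javascript
// Per-tier limits. Enterprise limits are enforced as Infinity, so the
// comparison below never trips for enterprise organizations.
const TIER_LIMITS = {
  free:       { maxAgents: 10,       maxApiCallsPerDay: 1000,     maxTokenIssuancesPerDay: 1000 },
  pro:        { maxAgents: 100,      maxApiCallsPerDay: 50000,    maxTokenIssuancesPerDay: 50000 },
  enterprise: { maxAgents: Infinity, maxApiCallsPerDay: Infinity, maxTokenIssuancesPerDay: Infinity },
};

// Returns true when the operation may proceed; a false result corresponds
// to the 403 FREE_TIER_LIMIT_EXCEEDED error until the daily counter resets.
function withinLimit(tier, limitName, currentUsage) {
  return currentUsage < TIER_LIMITS[tier][limitName];
}
```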
---
## AGNTCY Compliance
**AGNTCY** is an open standard from the Linux Foundation that defines how AI agents should be
identified, described, and governed across platforms. AgentIdP implements AGNTCY compliance
in two ways: every agent automatically gets a DID and an agent card (a structured JSON object
that describes the agent in the AGNTCY format), and AgentIdP can generate a **compliance
report** that summarizes the verified state of all agents in a tenant. An agent card is the
AGNTCY equivalent of a business card — it carries the agent's DID, type, capabilities, owner,
version, and identity provider.
The **compliance report** (available at `GET /api/v1/compliance/report`) covers two dimensions:
agent-identity verification (are all active agents reachable via their DID?) and audit-trail
integrity (is the hash chain of audit events intact?). The report includes a boolean
`agntcyConformance` field that summarizes whether the tenant meets AGNTCY baseline requirements.
Reports are cached in Redis for 5 minutes; the `X-Cache: HIT` header signals a cached response.
For self-auditing and external audits, you can export all active agents as AGNTCY agent cards
in bulk via `GET /api/v1/compliance/agent-cards`. This is an array of card objects that
external compliance tools and AGNTCY-compatible registries can ingest directly. The
`GET /api/v1/compliance/controls` endpoint (no authentication required) provides a live
status snapshot of all SOC 2 Trust Services Criteria controls that AgentIdP monitors internally.
These endpoints are gated by the `COMPLIANCE_ENABLED` environment variable; if disabled, they
return `404`.

Step-by-step walkthroughs for each AgentIdP workflow.
| Guide | What it covers |
|-------|----------------|
| [Register an Agent](register-an-agent.md) | All registration fields, organization scoping, validation rules, common errors |
| [Manage Credentials](manage-credentials.md) | Generate, list, rotate, and revoke credentials |
| [Issue and Revoke Tokens](issue-and-revoke-tokens.md) | OAuth 2.0 Client Credentials flow, JWT structure, introspect, revoke |
| [Query Audit Logs](query-audit-logs.md) | Filters, pagination, event structure, 90-day retention |
| [Use the Analytics Dashboard](use-analytics-dashboard.md) | Query token trends, agent activity heatmap, and per-agent usage |
| [Manage API Tiers](manage-api-tiers.md) | Check current tier, understand limits, trigger a Stripe upgrade |
| [A2A Delegation](a2a-delegation.md) | Create and verify agent-to-agent delegation chains |
| [Configure Webhooks](configure-webhooks.md) | Subscribe to events, understand delivery guarantees, inspect history |
| [AGNTCY Compliance](agntcy-compliance.md) | Export agent cards, generate compliance reports, verify audit chain |
All guides assume you have a running local server and a valid Bearer token. See the [Quick Start](../quick-start.md) if you haven't done that yet.

# A2A Delegation
Agent-to-Agent (A2A) delegation lets one agent grant another agent a subset of its OAuth 2.0
scopes for a defined period. This is the foundation for building secure multi-agent pipelines
where an orchestrator agent coordinates specialist sub-agents.
---
## Prerequisites
- A running AgentIdP instance
- Two registered agents: the delegator (has a Bearer token) and the delegatee (knows its
`agentId`)
- The delegator's scopes must be a superset of the scopes it wants to delegate
---
## How delegation works
```
Delegator agent Delegatee agent
| |
|-- POST /oauth2/token/delegate ----------->| (creates chain server-side)
|<-- { delegationToken, chainId, scopes } --|
| |
|-- passes delegationToken out-of-band ---->|
| |
| POST /oauth2/token/verify-delegation
| <-- { valid: true, scopes, expiresAt }
| |
| (optional) DELETE /oauth2/token/delegate/{chainId}
```
---
## Step 1 — Create a delegation chain
The delegator agent creates the chain by specifying the delegatee's `agentId`, the scopes to
delegate (must be a strict subset of the delegator's own scopes), and the TTL in seconds.
```bash
curl -s -X POST http://localhost:3000/api/v1/oauth2/token/delegate \
-H "Authorization: Bearer $DELEGATOR_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"delegateeAgentId": "'$DELEGATEE_AGENT_ID'",
"scopes": ["agents:read"],
"ttlSeconds": 3600
}' | jq .
```
Response (`201 Created`):
```json
{
"delegationToken": "sa_del_a1b2c3d4e5f6...",
"chainId": "d4e5f6a7-b8c9-0123-def0-123456789abc",
"delegatorAgentId": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
"delegateeAgentId": "b2c3d4e5-f6a7-8901-bcde-f12345678901",
"scopes": ["agents:read"],
"expiresAt": "2026-04-04T10:00:00.000Z"
}
```
Save the `delegationToken` and `chainId`:
```bash
export DELEGATION_TOKEN="sa_del_a1b2c3d4e5f6..."
export CHAIN_ID="d4e5f6a7-b8c9-0123-def0-123456789abc"
```
**TTL constraints**: minimum 60 seconds, maximum 86400 seconds (24 hours). Choose the minimum
TTL that covers the delegatee's task.
---
## Step 2 — Pass the delegation token to the delegatee
Pass `DELEGATION_TOKEN` to the delegatee agent out-of-band. This can be via a shared queue,
a direct API call to the sub-agent, or any other channel. The token is a signed string;
do not parse it — treat it as an opaque credential.
---
## Step 3 — Verify the delegation token
The delegatee (or any agent checking the delegation) calls the verify endpoint. This confirms
the chain is valid and not expired or revoked.
```bash
curl -s -X POST http://localhost:3000/api/v1/oauth2/token/verify-delegation \
-H "Authorization: Bearer $DELEGATEE_TOKEN" \
-H "Content-Type: application/json" \
-d '{ "delegationToken": "'$DELEGATION_TOKEN'" }' | jq .
```
Response (`200 OK` — valid delegation):
```json
{
"valid": true,
"chainId": "d4e5f6a7-b8c9-0123-def0-123456789abc",
"delegatorAgentId": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
"delegateeAgentId": "b2c3d4e5-f6a7-8901-bcde-f12345678901",
"scopes": ["agents:read"],
"issuedAt": "2026-04-04T09:00:00.000Z",
"expiresAt": "2026-04-04T10:00:00.000Z",
"revokedAt": null
}
```
Response (`200 OK` — expired delegation):
```json
{
"valid": false,
"chainId": "d4e5f6a7-b8c9-0123-def0-123456789abc",
"delegatorAgentId": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
"delegateeAgentId": "b2c3d4e5-f6a7-8901-bcde-f12345678901",
"scopes": ["agents:read"],
"issuedAt": "2026-04-03T09:00:00.000Z",
"expiresAt": "2026-04-03T10:00:00.000Z",
"revokedAt": null
}
```
> The verify endpoint always returns `200 OK` for a well-formed request. Check the `valid`
> field — an expired or revoked delegation is reported in the body, not as an HTTP error.
---
## Step 4 — (Optional) Revoke the delegation early
If the delegatee has completed its task and you want to revoke the delegation before it expires,
the delegator calls:
```bash
curl -s -X DELETE "http://localhost:3000/api/v1/oauth2/token/delegate/$CHAIN_ID" \
-H "Authorization: Bearer $DELEGATOR_TOKEN" \
-o /dev/null -w "%{http_code}\n"
```
Expected response: `204` (no body).
After revocation, verify requests for this chain return `{ "valid": false, "revokedAt": "<timestamp>" }`.
---
## Scope rules
- Delegated scopes must be a strict subset of the delegator's own token scopes
- You cannot delegate scopes you do not have
- You cannot delegate to yourself (delegateeAgentId must differ from delegatorAgentId)
- Delegation is not transitive — a delegatee cannot re-delegate to a third agent
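The scope and TTL rules above can be sketched as a client-side pre-check. This mirrors the documented validation (scopes held by the delegator, no self-delegation, TTL between 60 and 86400 seconds) but the function name and return convention are assumptions; the server remains the authority.

```javascript
// Validate a delegation request against the rules above. Returns null when
// valid, or a human-readable reason corresponding to a 400 VALIDATION_ERROR.
function validateDelegation({ delegatorAgentId, delegatorScopes, delegateeAgentId, scopes, ttlSeconds }) {
  if (delegateeAgentId === delegatorAgentId) return 'cannot delegate to yourself';
  if (ttlSeconds < 60 || ttlSeconds > 86400) return 'ttlSeconds out of range (60-86400)';
  const held = new Set(delegatorScopes);
  for (const s of scopes) {
    if (!held.has(s)) return `scope not held by delegator: ${s}`;
  }
  return null;
}
```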
---
## Common errors
### `400 VALIDATION_ERROR` — scope not a subset
The delegator attempted to delegate a scope it does not hold. Check `GET /api/v1/token/introspect`
to confirm which scopes your token carries.
### `400 VALIDATION_ERROR` — ttlSeconds out of range
Min: 60, Max: 86400. Values outside this range return a validation error.

# AGNTCY Compliance
This guide explains how to use AgentIdP's AGNTCY compliance features: exporting agent cards,
generating compliance reports, verifying audit chain integrity, and checking SOC 2 control status.
---
## Prerequisites
- A running AgentIdP instance
- `COMPLIANCE_ENABLED` environment variable not set to `false` (enabled by default)
- A valid Bearer token (for authenticated endpoints)
- At least one registered agent
---
## What is AGNTCY?
AGNTCY is an open standard from the Linux Foundation for AI agent identity and governance.
AgentIdP implements AGNTCY by giving every agent a DID and an agent card. The compliance
endpoints let you export and report on that data in structured, auditable formats.
---
## Export agent cards
`GET /api/v1/compliance/agent-cards`
Exports all active agents in your organization as AGNTCY-standard agent card JSON objects.
Suitable for ingestion by external compliance tools or AGNTCY-compatible registries.
```bash
curl -s "http://localhost:3000/api/v1/compliance/agent-cards" \
-H "Authorization: Bearer $TOKEN" | jq .
```
Response (`200 OK`): Array of agent card objects.
```json
[
{
"did": "did:web:localhost%3A3000:agents:a1b2c3d4-e5f6-7890-abcd-ef1234567890",
"name": "screener-001@talent.ai",
"agentType": "screener",
"capabilities": ["resume:read", "email:send"],
"owner": "talent-team",
"version": "1.0.0",
"deploymentEnv": "production",
"identityProvider": "https://sentryagent.ai",
"issuedAt": "2026-04-04T09:00:00.000Z"
}
]
```
**Use cases**:
- Share with external auditors to demonstrate your agent fleet
- Import into AGNTCY-compatible discovery registries
- Baseline snapshot before and after deployments
Save the output to a file:
```bash
curl -s "http://localhost:3000/api/v1/compliance/agent-cards" \
-H "Authorization: Bearer $TOKEN" > agent-cards-$(date +%Y%m%d).json
```
---
## Generate a compliance report
`GET /api/v1/compliance/report`
Generates an AGNTCY compliance report for your tenant. The report is cached for 5 minutes
(check the `X-Cache` header to see if the response is fresh or cached).
```bash
curl -s "http://localhost:3000/api/v1/compliance/report" \
-H "Authorization: Bearer $TOKEN" | jq .
```
Response (`200 OK`):
```json
{
"tenantId": "org-0a1b2c3d-e4f5-6789-abcd-ef0123456789",
"generatedAt": "2026-04-04T09:00:00.000Z",
"agntcyConformance": true,
"agentCount": 12,
"verifiedAgentCount": 12,
"auditChainIntegrity": true,
"from_cache": false
}
```
**Interpreting the fields**:
| Field | Description |
|-------|-------------|
| `agntcyConformance` | `true` if all agents have valid DIDs and the audit chain is intact |
| `agentCount` | Total active agents in the organization |
| `verifiedAgentCount` | Agents with a resolvable DID document |
| `auditChainIntegrity` | `true` if the audit event hash chain has not been tampered with |
| `from_cache` | `true` if served from Redis cache (up to 5 minutes old) |
**Forcing a fresh report**: there is no cache-bypass parameter; wait up to 5 minutes for the
cache to expire. A response with `from_cache: false` was generated on demand.
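Based on the field descriptions above, the conformance flag can be interpreted as the conjunction of the other two checks. This is a sketch of the documented relationship, not the server's actual implementation:

```javascript
// agntcyConformance is true when every active agent has a resolvable DID
// document and the audit event hash chain is intact.
function agntcyConformance(report) {
  return report.verifiedAgentCount === report.agentCount
    && report.auditChainIntegrity === true;
}
```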
---
## Verify audit chain integrity
`GET /api/v1/audit/verify`
Verifies that the cryptographic hash chain of audit events is intact. Returns `verified: true`
if no tampering is detected. Rate limited to 30 requests/minute (computationally intensive).
Requires: Bearer token with `audit:read` scope.
```bash
curl -s "http://localhost:3000/api/v1/audit/verify" \
-H "Authorization: Bearer $TOKEN" | jq .
```
Response (`200 OK`):
```json
{
"verified": true,
"checkedCount": 1247,
"fromDate": null,
"toDate": null
}
```
Verify a specific date window:
```bash
curl -s "http://localhost:3000/api/v1/audit/verify?fromDate=2026-03-01T00:00:00.000Z&toDate=2026-03-31T23:59:59.999Z" \
-H "Authorization: Bearer $TOKEN" | jq .
```
**Interpreting the result**:
- `verified: true` — no tampering detected in the checked window
- `verified: false` — the hash chain has a broken link; contact SentryAgent.ai support
- `checkedCount` — number of audit events verified
---
## Check SOC 2 control status (public)
`GET /api/v1/compliance/controls`
Returns the live status of all SOC 2 Trust Services Criteria controls. No authentication
required. Responses are cached by CDN/proxies for 60 seconds (`Cache-Control: public, max-age=60`).
```bash
curl -s "http://localhost:3000/api/v1/compliance/controls" | jq .
```
Response (`200 OK`):
```json
{
"controls": [
{
"id": "CC6.1",
"name": "Logical Access Controls",
"status": "pass",
"lastChecked": "2026-04-04T08:00:00.000Z"
},
{
"id": "CC7.2",
"name": "System Monitoring",
"status": "pass",
"lastChecked": "2026-04-04T08:00:00.000Z"
}
]
}
```
Each control has a `status` of `pass`, `fail`, or `unknown`. Status is updated by background
jobs that run periodically. This endpoint is suitable for embedding in external status pages
or compliance dashboards without sharing API credentials.
---
## When compliance endpoints are disabled
If `COMPLIANCE_ENABLED=false` is set in the server environment, the AGNTCY compliance endpoints
(`/compliance/report` and `/compliance/agent-cards`) return `404 COMPLIANCE_DISABLED`. The SOC 2
endpoints (`/compliance/controls` and `/audit/verify`) are never gated and always active.

# Configure Webhooks
Webhooks let AgentIdP push real-time events to your application when agents, credentials, or
tokens change state. This guide covers creating subscriptions, the available event types,
delivery guarantees, and how to inspect delivery history.
---
## Prerequisites
- A running AgentIdP instance
- A valid Bearer token with `organization_id` in its claims
- A publicly reachable HTTPS endpoint to receive events (for local development, use a tool
like [ngrok](https://ngrok.com))
---
## Available event types
| Event type | Triggered when |
|-----------|----------------|
| `agent.created` | A new agent is registered |
| `agent.updated` | An agent's metadata is updated |
| `agent.suspended` | An agent's status changes to `suspended` |
| `agent.reactivated` | An agent's status changes from `suspended` to `active` |
| `agent.decommissioned` | An agent is decommissioned |
| `credential.generated` | New credentials are created for an agent |
| `credential.rotated` | A credential's secret is rotated |
| `credential.revoked` | A credential is revoked |
| `token.issued` | An access token is issued |
| `token.revoked` | An access token is revoked |
---
## Create a subscription
`POST /api/v1/webhooks`
```bash
curl -s -X POST http://localhost:3000/api/v1/webhooks \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{
"name": "prod-agent-events",
"url": "https://my-app.example.com/hooks/sentryagent",
"events": ["agent.created", "agent.decommissioned", "token.issued"]
}' | jq .
```
Response (`201 Created`):
```json
{
"id": "wh-1a2b3c4d-e5f6-7890-abcd-ef1234567890",
"organization_id": "org-0a1b2c3d-e4f5-6789-abcd-ef0123456789",
"name": "prod-agent-events",
"url": "https://my-app.example.com/hooks/sentryagent",
"events": ["agent.created", "agent.decommissioned", "token.issued"],
"active": true,
"signingSecret": "whsec_a1b2c3d4e5f6789...",
"failure_count": 0,
"created_at": "2026-04-04T09:00:00.000Z",
"updated_at": "2026-04-04T09:00:00.000Z"
}
```
> **Save the `signingSecret` now.** It is shown once. Use it to verify the HMAC-SHA256
> signature on incoming webhook requests. See "Verifying delivery signatures" below.
```bash
export WEBHOOK_ID="wh-1a2b3c4d-e5f6-7890-abcd-ef1234567890"
export SIGNING_SECRET="whsec_a1b2c3d4e5f6789..."
```
---
## Webhook payload format
Every delivery sends a POST to your URL with `Content-Type: application/json` and this body:
```json
{
"id": "evt-uuid-here",
"event": "agent.created",
"timestamp": "2026-04-04T09:00:00.000Z",
"organization_id": "org-0a1b2c3d-e4f5-6789-abcd-ef0123456789",
"data": {
"agentId": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
"email": "screener-001@talent.ai",
"agentType": "screener"
}
}
```
The `data` object contains event-specific fields. For `agent.*` events it includes agent
metadata. For `credential.*` events it includes `credentialId` and `agentId`. For `token.*`
events it includes `agentId` and `scope`.
---
## Verifying delivery signatures
AgentIdP signs every delivery with HMAC-SHA256 using your `signingSecret`. The signature is
in the `X-SentryAgent-Signature` header as `sha256=<hex-digest>`.
Verify it in Node.js:
```javascript
const crypto = require('crypto');
function verifySignature(rawBody, signingSecret, signatureHeader) {
  const expected = 'sha256=' + crypto
    .createHmac('sha256', signingSecret)
    .update(rawBody)
    .digest('hex');
  const expectedBuf = Buffer.from(expected);
  const actualBuf = Buffer.from(signatureHeader || '');
  // timingSafeEqual throws if the buffers differ in length, so compare
  // lengths first and fail closed on a mismatch
  if (expectedBuf.length !== actualBuf.length) return false;
  return crypto.timingSafeEqual(expectedBuf, actualBuf);
}
```
Always verify the signature before processing the event. Reject requests with invalid signatures
with `401 Unauthorized`.
---
## Delivery guarantees and retry policy
- AgentIdP delivers each event **at least once** — your endpoint may receive duplicates
- Use the `id` field to deduplicate events
- Delivery is attempted immediately; on failure, retries use exponential backoff
- After repeated failures, the delivery moves to `dead_letter` status
- Subscriptions with high `failure_count` may be automatically disabled
Delivery statuses: `pending` → `delivered` (success) or `failed` (attempt failed) → `dead_letter`
(all retries exhausted)
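Because delivery is at-least-once, receivers should deduplicate on the event `id` before processing. A minimal in-memory sketch (a production receiver would persist seen ids, since this set is lost on restart):

```javascript
// Track processed event ids so duplicate deliveries are acknowledged
// without being processed twice.
const seen = new Set();

function handleEvent(event, process) {
  if (seen.has(event.id)) return 'duplicate';
  seen.add(event.id);
  process(event);
  return 'processed';
}
```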
---
## List subscriptions
```bash
curl -s "http://localhost:3000/api/v1/webhooks" \
-H "Authorization: Bearer $TOKEN" | jq .
```
---
## Pause or resume a subscription
To pause (disable) a subscription without deleting it:
```bash
curl -s -X PATCH "http://localhost:3000/api/v1/webhooks/$WEBHOOK_ID" \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{ "active": false }' | jq .
```
To resume:
```bash
curl -s -X PATCH "http://localhost:3000/api/v1/webhooks/$WEBHOOK_ID" \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{ "active": true }' | jq .
```
---
## Inspect delivery history
`GET /api/v1/webhooks/{id}/deliveries`
```bash
curl -s "http://localhost:3000/api/v1/webhooks/$WEBHOOK_ID/deliveries?limit=20&offset=0" \
-H "Authorization: Bearer $TOKEN" | jq .
```
Response:
```json
{
"deliveries": [
{
"id": "del-uuid",
"subscription_id": "wh-uuid",
"event_type": "agent.created",
"payload": { ... },
"status": "delivered",
"http_status_code": 200,
"attempt_count": 1,
"next_retry_at": null,
"delivered_at": "2026-04-04T09:00:01.000Z",
"created_at": "2026-04-04T09:00:00.000Z",
"updated_at": "2026-04-04T09:00:01.000Z"
}
],
"total": 47,
"limit": 20,
"offset": 0
}
```
Use `offset` to paginate through delivery history. Increase `limit` to retrieve more records
per page (the server default is 20).
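The paging arithmetic can be sketched as a helper that yields every `offset` value needed to walk the full history:

```javascript
// Compute the offsets needed to page through `total` records
// with a given page size (`limit`).
function pageOffsets(total, limit) {
  const offsets = [];
  for (let offset = 0; offset < total; offset += limit) {
    offsets.push(offset);
  }
  return offsets;
}

// For the example response above (total 47, limit 20),
// three requests cover the whole history:
console.log(pageOffsets(47, 20)); // [ 0, 20, 40 ]
```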
---
## Delete a subscription
```bash
curl -s -X DELETE "http://localhost:3000/api/v1/webhooks/$WEBHOOK_ID" \
-H "Authorization: Bearer $TOKEN" \
-o /dev/null -w "%{http_code}\n"
```
Expected response: `204`. This permanently deletes the subscription and all its delivery records.


@@ -47,10 +47,13 @@ The token expires in `3600` seconds (1 hour). Request a new one before it expire
| Scope | What it allows |
|-------|----------------|
| `agents:read` | Read agent identity records |
| `agents:write` | Create, update, and decommission agents |
| `tokens:read` | Introspect tokens |
| `audit:read` | Query audit logs and verify audit chain integrity |
| `webhooks:read` | List webhook subscriptions and delivery history |
| `webhooks:write` | Create, update, and delete webhook subscriptions |
| `admin:orgs` | Manage organizations and federation partners |
Request only the scopes your agent needs.


@@ -0,0 +1,140 @@
# Manage API Tiers
This guide explains how to check your organization's current plan tier, understand the enforced
limits, and initiate an upgrade via Stripe.
---
## Prerequisites
- A running AgentIdP instance
- A valid Bearer token with `organization_id` in its claims
---
## Check current tier status
`GET /api/v1/tiers/status`
Returns your organization's tier, the configured limits, and live usage counters for today.
```bash
curl -s "http://localhost:3000/api/v1/tiers/status" \
-H "Authorization: Bearer $TOKEN" | jq .
```
Response:
```json
{
"tier": "free",
"limits": {
"maxAgents": 10,
"maxCallsPerDay": 1000,
"maxTokensPerDay": 1000
},
"usage": {
"agentCount": 3,
"callsToday": 142,
"tokensToday": 87
}
}
```
**Understanding the fields**:
| Field | Description |
|-------|-------------|
| `tier` | Current plan: `free`, `pro`, or `enterprise` |
| `limits.maxAgents` | Maximum active (non-decommissioned) agents allowed |
| `limits.maxCallsPerDay` | Maximum total API calls per calendar day (UTC) |
| `limits.maxTokensPerDay` | Maximum token issuances per calendar day (UTC) |
| `usage.agentCount` | Current number of active agents |
| `usage.callsToday` | API calls made so far today |
| `usage.tokensToday` | Tokens issued so far today |
**When limits are reached**: The relevant endpoint returns `403 FREE_TIER_LIMIT_EXCEEDED`.
Daily counters reset at midnight UTC. The agent count limit is a current count, not a daily
counter — decommissioning an agent immediately frees capacity.
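A small helper can derive the remaining headroom from a `/tiers/status` response before attempting an operation that would count against a limit:

```javascript
// Derive remaining capacity per limit from a /tiers/status response.
function remainingCapacity(status) {
  return {
    agents: status.limits.maxAgents - status.usage.agentCount,
    calls: status.limits.maxCallsPerDay - status.usage.callsToday,
    tokens: status.limits.maxTokensPerDay - status.usage.tokensToday,
  };
}

// Using the example response above:
const status = {
  tier: 'free',
  limits: { maxAgents: 10, maxCallsPerDay: 1000, maxTokensPerDay: 1000 },
  usage: { agentCount: 3, callsToday: 142, tokensToday: 87 },
};
console.log(remainingCapacity(status)); // { agents: 7, calls: 858, tokens: 913 }
```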
---
## Tier comparison
| Limit | Free | Pro | Enterprise |
|-------|------|-----|------------|
| Max agents | 10 | 100 | Unlimited |
| Max API calls / day | 1,000 | 50,000 | Unlimited |
| Max token issuances / day | 1,000 | 50,000 | Unlimited |
---
## Upgrade your tier
`POST /api/v1/tiers/upgrade`
Creates a Stripe Checkout Session and returns a one-time URL. Complete the payment in the
browser to upgrade your organization's tier.
```bash
curl -s -X POST http://localhost:3000/api/v1/tiers/upgrade \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{ "target_tier": "pro" }' | jq .
```
Response:
```json
{
"checkoutUrl": "https://checkout.stripe.com/pay/cs_live_a1b2c3d4e5f6..."
}
```
Open `checkoutUrl` in a browser to complete payment. After successful payment, Stripe sends a
webhook to AgentIdP which automatically upgrades your organization's tier.
**Constraints**:
- `target_tier` must be `pro` or `enterprise`
- `target_tier` must be higher than your current tier (you cannot downgrade via this endpoint)
- Attempting to upgrade to the current or a lower tier returns `400 VALIDATION_ERROR`
```bash
# Upgrade from free to pro
curl -s -X POST http://localhost:3000/api/v1/tiers/upgrade \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{ "target_tier": "pro" }' | jq .
# Upgrade from pro to enterprise
curl -s -X POST http://localhost:3000/api/v1/tiers/upgrade \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{ "target_tier": "enterprise" }' | jq .
```
---
## Common errors
### `400 VALIDATION_ERROR` — target_tier missing or invalid
```json
{
"code": "VALIDATION_ERROR",
"message": "target_tier must be one of: free, pro, enterprise.",
"details": { "received": "premium" }
}
```
**Fix**: Use `"pro"` or `"enterprise"`.
### `400 TIER_UPGRADE_NOT_REQUIRED` — not an upgrade
**Fix**: You are already on this tier or a higher tier. Check `GET /api/v1/tiers/status` first.
### `401 UNAUTHORIZED` — token lacks organization_id
The tier endpoints require a token with an `organization_id` claim. Use a token issued by an
agent that was registered with `organization_id`. Tokens issued via the bootstrap method
(without an org) do not carry `organization_id` and will fail.


@@ -2,6 +2,11 @@
A credential is a `client_id` + `client_secret` pair that your agent uses to get access tokens. This guide covers all four credential operations.
> **Multi-tenant note**: Credentials issued for an agent that belongs to an organization will
> produce tokens carrying an `organization_id` claim. This claim is required by analytics,
> webhooks, tier enforcement, and A2A delegation. Ensure your agent is registered with
> `organization_id` before issuing credentials for production use.
All credential endpoints are under `/api/v1/agents/{agentId}/credentials` and require a Bearer token with `agents:write` scope.
---


@@ -25,6 +25,11 @@ Every action below is automatically recorded. You cannot create, modify, or dele
| `credential.revoked` | Successful `DELETE /agents/{agentId}/credentials/{credentialId}` |
| `auth.failed` | Failed authentication attempt on `POST /token` |
> **Audit chain verification**: In addition to querying events, you can verify the cryptographic
> integrity of the entire audit hash chain via `GET /api/v1/audit/verify`. This endpoint requires
> `audit:read` scope and is rate-limited to 30 requests/min. See the
> [API Reference](../api-reference.md#get-auditverify---verify-audit-chain-integrity) for details.
---
## Query the audit log


@@ -20,6 +20,7 @@ Requires: `Authorization: Bearer <token>` with `agents:write` scope.
| `capabilities` | string[] | Yes | One or more capability strings in `resource:action` format. Minimum 1. |
| `owner` | string | Yes | Team or organisation that owns this agent. 1–128 characters. |
| `deploymentEnv` | string (enum) | Yes | Target deployment environment. See values below. |
| `organization_id` | string (UUID) | No | UUID of the organization to scope this agent to. Recommended on all multi-tenant instances. |
### `agentType` values
@@ -70,7 +71,8 @@ curl -s -X POST http://localhost:3000/api/v1/agents \
    "version": "1.0.0",
    "capabilities": ["resume:read", "email:send", "candidate:score"],
    "owner": "talent-acquisition-team",
    "deploymentEnv": "production",
    "organization_id": "'$ORG_ID'"
  }' | jq .
```
@@ -93,6 +95,11 @@ Successful response (`201 Created`):
The `agentId` is assigned by the system — it is immutable and never changes.
> **Organization scoping**: If you include `organization_id` in the request, the agent is
> associated with that organization. Analytics, webhook events, and tier enforcement are all
> scoped by organization. To create an organization first, see the
> [Quick Start](../quick-start.md) guide.
---
## Immutable fields


@@ -0,0 +1,135 @@
# Use the Analytics Dashboard
This guide explains how to query the three analytics endpoints to understand your organization's
token usage and agent activity patterns.
All analytics endpoints require Bearer token authentication and are scoped to the organization
embedded in your token.
---
## Prerequisites
- A running AgentIdP instance
- A valid Bearer token with `organization_id` in its claims
- At least one agent registered and some token issuance activity
---
## Token issuance trend
`GET /api/v1/analytics/tokens`
Returns daily token issuance counts for the past N days (default 30, max 90). Use this to
track usage growth, identify traffic spikes, and plan capacity.
```bash
curl -s "http://localhost:3000/api/v1/analytics/tokens?days=30" \
-H "Authorization: Bearer $TOKEN" | jq .
```
Response:
```json
{
"tenantId": "org-0a1b2c3d-e4f5-6789-abcd-ef0123456789",
"days": 30,
"data": [
{ "date": "2026-03-06", "count": 142 },
{ "date": "2026-03-07", "count": 198 },
{ "date": "2026-03-08", "count": 0 }
]
}
```
**Interpreting the data**: Each item in `data` is one calendar day (UTC) with the number of
tokens issued on that day. Days with zero issuance are included with `count: 0`. The array
is ordered chronologically, oldest first.
**Using it**: Compare day-over-day counts to identify growth or anomalies. A sudden spike in
`count` may indicate an agent retry loop or a credential leak. Zero-count days during expected
operation may indicate a deployment issue.
**Query parameter**: `days` — positive integer, max 90. Returns `400 VALIDATION_ERROR` if
exceeded.
```bash
# Last 7 days
curl -s "http://localhost:3000/api/v1/analytics/tokens?days=7" \
-H "Authorization: Bearer $TOKEN" | jq .
# Last 90 days (maximum)
curl -s "http://localhost:3000/api/v1/analytics/tokens?days=90" \
-H "Authorization: Bearer $TOKEN" | jq .
```
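Spike detection over the `data` array can be sketched as a simple day-over-day comparison. The 3× threshold here is an arbitrary starting point, not a recommendation from the API — tune it for your own traffic patterns:

```javascript
// Flag dates whose count jumps by `factor`× or more over the previous day.
function findSpikes(data, factor = 3) {
  const spikes = [];
  for (let i = 1; i < data.length; i++) {
    const prev = data[i - 1].count;
    if (prev > 0 && data[i].count >= prev * factor) {
      spikes.push(data[i].date);
    }
  }
  return spikes;
}

const trend = [
  { date: '2026-03-06', count: 142 },
  { date: '2026-03-07', count: 198 },
  { date: '2026-03-08', count: 900 }, // suspicious jump
];
console.log(findSpikes(trend)); // [ '2026-03-08' ]
```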
---
## Agent activity heatmap
`GET /api/v1/analytics/agents/activity`
Returns request counts grouped by day-of-week (0 = Sunday, 6 = Saturday) and hour (0–23, UTC).
Use this to identify peak usage windows for capacity planning and rate limit tuning.
```bash
curl -s "http://localhost:3000/api/v1/analytics/agents/activity" \
-H "Authorization: Bearer $TOKEN" | jq .
```
Response:
```json
{
"tenantId": "org-0a1b2c3d-e4f5-6789-abcd-ef0123456789",
"data": [
{ "dow": 1, "hour": 9, "count": 54 },
{ "dow": 1, "hour": 10, "count": 87 },
{ "dow": 3, "hour": 14, "count": 201 }
]
}
```
**Interpreting the data**: `dow` is 0 (Sunday) through 6 (Saturday). `hour` is 0–23 UTC.
Only non-zero cells are returned — missing combinations had zero activity. Sort by `count`
descending to find your peak windows.
**Using it**: If most activity is on weekday mornings UTC, ensure your rate limit headroom
covers that window. If weekend activity is unexpectedly high, investigate which agents are
active.
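The "sort by `count` descending" step can be sketched as a helper that turns heatmap cells into human-readable peak windows:

```javascript
const DAYS = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat'];

// Return the top-n busiest (dow, hour) cells from the heatmap response.
function peakWindows(data, n = 3) {
  return [...data]
    .sort((a, b) => b.count - a.count)
    .slice(0, n)
    .map((c) => `${DAYS[c.dow]} ${String(c.hour).padStart(2, '0')}:00 UTC (${c.count})`);
}

// Using the example response above:
const cells = [
  { dow: 1, hour: 9, count: 54 },
  { dow: 1, hour: 10, count: 87 },
  { dow: 3, hour: 14, count: 201 },
];
console.log(peakWindows(cells, 1)); // [ 'Wed 14:00 UTC (201)' ]
```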
---
## Per-agent usage summary
`GET /api/v1/analytics/agents`
Returns token issuance counts per agent for the current calendar month (UTC). Use this to
identify your most active agents and check if any single agent is consuming a
disproportionate share of your monthly token budget.
```bash
curl -s "http://localhost:3000/api/v1/analytics/agents" \
-H "Authorization: Bearer $TOKEN" | jq .
```
Response:
```json
{
"tenantId": "org-0a1b2c3d-e4f5-6789-abcd-ef0123456789",
"month": "2026-04",
"data": [
{ "agentId": "a1b2c3d4-e5f6-7890-abcd-ef1234567890", "tokenCount": 312 },
{ "agentId": "b2c3d4e5-f6a7-8901-bcde-f12345678901", "tokenCount": 87 }
]
}
```
**Interpreting the data**: Each item shows an agent UUID and the number of tokens it has
issued this month. The response covers the full current calendar month from day 1 to now.
It resets on the first day of each month.
**Using it**: Cross-reference `agentId` values against `GET /api/v1/agents` to identify
agents by name. If one agent accounts for >80% of usage, investigate whether it is caching
tokens correctly or requesting tokens unnecessarily.
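The ">80% of usage" check can be sketched as a share calculation over the `data` array:

```javascript
// Compute each agent's percentage share of this month's token issuance.
function usageShares(data) {
  const total = data.reduce((sum, a) => sum + a.tokenCount, 0);
  return data.map((a) => ({
    agentId: a.agentId,
    sharePct: total === 0 ? 0 : Math.round((a.tokenCount / total) * 100),
  }));
}

// Using the example response above:
const agents = [
  { agentId: 'a1b2c3d4-e5f6-7890-abcd-ef1234567890', tokenCount: 312 },
  { agentId: 'b2c3d4e5-f6a7-8901-bcde-f12345678901', tokenCount: 87 },
];
const shares = usageShares(agents);
console.log(shares.map((s) => s.sharePct)); // [ 78, 22 ]
```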


@@ -1,12 +1,12 @@
# Quick Start — Register Your First Agent
This guide gets you from zero to a working agent identity inside an organization, with a valid OAuth 2.0 access token. It takes under 5 minutes.
## Prerequisites
You need three tools installed:
- **Docker** (with Compose plugin, v2.20+) — to run PostgreSQL and Redis
- **Node.js 18+** (includes `npm`) — to run the server
- **curl** — to call the API
@@ -32,16 +32,19 @@ openssl genrsa -out private.pem 2048
openssl rsa -in private.pem -pubout -out public.pem
```
Copy the environment template and fill in your JWT keys:
```bash
cp .env.example .env
```
Write your JWT keys into `.env`:
```bash
PRIVATE_KEY_LINE=$(awk 'NF {sub(/\r/, ""); printf "%s\\n",$0;}' private.pem)
PUBLIC_KEY_LINE=$(awk 'NF {sub(/\r/, ""); printf "%s\\n",$0;}' public.pem)
sed -i "s|JWT_PRIVATE_KEY=.*|JWT_PRIVATE_KEY=\"${PRIVATE_KEY_LINE}\"|" .env
sed -i "s|JWT_PUBLIC_KEY=.*|JWT_PUBLIC_KEY=\"${PUBLIC_KEY_LINE}\"|" .env
```
> **Note**: The `.env` file stores your private key. Do not commit it to version control.
@@ -53,7 +56,7 @@ EOF
Start PostgreSQL and Redis using Docker Compose (infrastructure services only):
```bash
docker compose up -d postgres redis
```
Expected output:
@@ -135,7 +138,45 @@ export BOOTSTRAP_TOKEN="<paste token here>"
---
## Step 5 — Create an organization
Agents are scoped to organizations. Create one now so your agent has an `organization_id` to belong to:
```bash
curl -s -X POST http://localhost:3000/api/v1/organizations \
-H "Authorization: Bearer $BOOTSTRAP_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"name": "My AI Project",
"slug": "my-ai-project"
}' | jq .
```
Example response (`201 Created`):
```json
{
"organizationId": "org-0a1b2c3d-e4f5-6789-abcd-ef0123456789",
"name": "My AI Project",
"slug": "my-ai-project",
"planTier": "free",
"maxAgents": 10,
"maxTokensPerMonth": 10000,
"status": "active",
"createdAt": "2026-04-04T09:00:00.000Z",
"updatedAt": "2026-04-04T09:00:00.000Z"
}
```
Save the `organizationId`:
```bash
export ORG_ID="org-0a1b2c3d-e4f5-6789-abcd-ef0123456789"
```
---
## Step 6 — Register an agent
```bash
curl -s -X POST http://localhost:3000/api/v1/agents \
@@ -147,7 +188,8 @@ curl -s -X POST http://localhost:3000/api/v1/agents \
    "version": "1.0.0",
    "capabilities": ["data:read"],
    "owner": "my-team",
    "deploymentEnv": "development",
    "organization_id": "'$ORG_ID'"
  }' | jq .
```
@@ -176,7 +218,7 @@ export AGENT_ID="a1b2c3d4-e5f6-7890-abcd-ef1234567890"
---
## Step 7 — Generate a credential
```bash
curl -s -X POST "http://localhost:3000/api/v1/agents/$AGENT_ID/credentials" \
@@ -208,7 +250,7 @@ export CLIENT_SECRET="sk_live_7f3a2b1c9d8e4f0a6b5c3d2e1f0a9b8c"
---
## Step 8 — Issue an access token
Use the OAuth 2.0 Client Credentials flow. Note that the `/token` endpoint uses **form-encoded** body, not JSON:
@@ -242,6 +284,14 @@ Your agent now has a valid JWT. Use it in the `Authorization: Bearer <token>` he
## What's next
- [Core Concepts](concepts.md) — understand AgentIdP, AGNTCY, orgs, DID, delegation, and tiers
- [Guides](guides/README.md) — step-by-step walkthroughs for all workflows
- [API Reference](api-reference.md) — every endpoint documented with curl examples
**New guides for Phase 6 features:**
- [Use the Analytics Dashboard](guides/use-analytics-dashboard.md) — query token trends and activity
- [Manage API Tiers](guides/manage-api-tiers.md) — check limits and upgrade your plan
- [A2A Delegation](guides/a2a-delegation.md) — delegate authority between agents
- [Configure Webhooks](guides/configure-webhooks.md) — subscribe to real-time events
- [AGNTCY Compliance](guides/agntcy-compliance.md) — export agent cards and generate compliance reports


@@ -14,14 +14,15 @@ SentryAgent.ai AgentIdP is a Node.js REST API backed by PostgreSQL and Redis. It
## Documentation
| Document | Audience | Contents |
|----------|----------|---------|
| [Architecture](architecture.md) | All engineers | Components, ports, data flow, Redis key patterns |
| [Environment Variables](environment-variables.md) | All engineers | Every env var — required, optional, format, examples |
| [Database](database.md) | Backend, DevOps | Schema (26 tables/migrations), how to apply and verify |
| [Local Development](local-development.md) | All engineers | Docker Compose setup (`compose.yaml`), startup, health checks |
| [Security](security.md) | All engineers | JWT key generation and rotation, CORS, secret storage |
| [Operations](operations.md) | DevOps | Startup order, graceful shutdown, log interpretation, troubleshooting |
| [field-trial.md](field-trial.md) | DevOps engineers, QA | In-house Docker Compose field trial execution playbook |
## Quick Reference — Ports


@@ -3,26 +3,49 @@
## Component Overview
```
┌───────────────────────────────────────────┐
│        Next.js Portal (port 3001)         │
│  portal/ — Next.js 14                     │
│  /login /agents /credentials /audit       │
│  /analytics /settings/tier /compliance    │
│  /webhooks /marketplace                   │
└────────────────┬──────────────────────────┘
                 │ HTTP (localhost:3000)
┌────────────────▼──────────────────────────┐
│           AgentIdP Application            │
│       Node.js / Express (port 3000)       │
│                                           │
│  TLS MW → Helmet → CORS → Morgan          │
│  Metrics MW → OrgContext MW               │
│  UsageMetering MW → TierEnforcement MW    │
│  Auth MW → OPA MW → Routes                │
│                     ↓                     │
│     Controllers → Services → Repos        │
└──────────┬───────────────┬────────────────┘
           │               │
┌──────────▼─────────┐  ┌──▼───────────────┐
│ PostgreSQL 14      │  │ Redis 7          │
│ Port 5432          │  │ Port 6379        │
│                    │  │                  │
│ 26 migrations      │  │ Rate limits      │
│ (001–026)          │  │ Token revoke     │
│ organizations      │  │ Monthly counts   │
│ agents + DID keys  │  │ Tier counters    │
│ credentials        │  │ Compliance cache │
│ audit_events       │  └──────────────────┘
│ token_revocations  │
│ oidc_keys          │  ┌──────────────────┐
│ federation_partners│  │ HashiCorp Vault  │
│ webhook_subscrip-  │  │ (optional)       │
│ tions + deliveries │  │ KV v2 — creds    │
│ agent_marketplace  │  └──────────────────┘
│ github_oidc_trust  │
│ billing            │  ┌──────────────────┐
│ delegation_chains  │  │ Stripe           │
│ analytics_events   │  │ (optional)       │
│ tenant_tiers       │  │ Billing/upgrades │
└────────────────────┘  └──────────────────┘
```
## Components
@@ -36,8 +59,12 @@ A stateless Express HTTP server. Every request is handled independently — no i
| Layer | Responsibility |
|-------|---------------|
| Routes | Wire HTTP methods and paths to controllers |
| TLS middleware | Redirect HTTP → HTTPS when `ENFORCE_TLS=true` |
| Auth middleware | Validate Bearer JWT (RS256 + Redis revocation check) |
| OrgContext middleware | Resolve `organization_id` from JWT and attach to `req` |
| UsageMetering middleware | Fire-and-forget analytics event recording |
| TierEnforcement middleware | Enforce daily API call and token limits via Redis (when `TIER_ENFORCEMENT=true`) |
| OPA middleware | Scope-based authorization via embedded Wasm or JSON policy |
| Controllers | Parse and validate request, call service, return response |
| Services | Business logic — no direct DB access |
| Repositories | All SQL queries — no business logic |
@@ -53,11 +80,14 @@ The application connects via a connection pool (`pg.Pool`) initialised from `DAT
Ephemeral store for the following use cases:
| Key pattern | Example | Purpose | TTL |
|------------|---------|---------|-----|
| `revoked:<jti>` | `revoked:f1e2d3c4-...` | Revoked token JTI | Remaining token lifetime |
| `rate:<client_id>:<window>` | `rate:a1b2c3...:29086156` | Request count per window | `RATE_LIMIT_WINDOW_MS` |
| `monthly:<client_id>:<year>:<month>` | `monthly:a1b2c3...:2026:3` | Monthly token issuance count | End of month |
| `rate:tier:calls:<tenantId>` | `rate:tier:calls:org-uuid` | Daily API call counter for tier enforcement | Until midnight UTC |
| `rate:tier:tokens:<tenantId>` | `rate:tier:tokens:org-uuid` | Daily token issuance counter for tier enforcement | Until midnight UTC |
| `compliance:report:<tenantId>` | `compliance:report:org-uuid` | Cached compliance report JSON | 5 minutes |
**Redis is supplementary, not the source of truth.** Token revocations are also written to the `token_revocations` PostgreSQL table for durability across Redis restarts. On Redis restart, the revocation list is cold — previously revoked tokens will pass auth until the PostgreSQL-backed warm-up is implemented (Phase 2).
@@ -107,21 +137,89 @@ PostgreSQL / Redis
## Service Map
| Route prefix | Controller | Service(s) | Repository/ies |
|-------------|-----------|-----------|----------------|
| `/api/v1/agents` | `AgentController` | `AgentService` | `AgentRepository` |
| `/api/v1/credentials` | `CredentialController` | `CredentialService` | `CredentialRepository` |
| `/api/v1/token` | `TokenController` | `OAuth2Service` | `TokenRepository`, `CredentialRepository`, `AgentRepository` |
| `/api/v1/audit` | `AuditController` | `AuditService` | `AuditRepository` |
| `/api/v1/organizations` | `OrgController` | `OrgService` | `OrgRepository` |
| `/api/v1/compliance/*` | `ComplianceController` | `ComplianceService` | `AuditRepository` |
| `/api/v1/analytics/*` | `AnalyticsController` | `AnalyticsService` | direct pool queries |
| `/api/v1/tiers/*` | `TierController` | `TierService` | pool queries, Stripe SDK |
| `/api/v1/webhooks` | `WebhookController` | `WebhookService` | `WebhookRepository` |
| `/api/v1/federation` | `FederationController` | `FederationService` | direct pool queries |
| `/api/v1/marketplace` | `MarketplaceController` | `MarketplaceService` | direct pool queries |
| `/api/v1/billing` | `BillingController` | `BillingService` | direct pool queries |
| `/.well-known/did.json`, `/api/v1/did/*` | `DIDController` | `DIDService` | `AgentRepository` |
| `/.well-known/openid-configuration`, `/api/v1/oidc/*` | `OIDCController` | `OIDCKeyService`, `IDTokenService` | direct pool queries |
| `/api/v1/oidc/trust-policies` | `OIDCTrustPolicyController` | `OIDCTrustPolicyService` | direct pool queries |
| `/api/v1/delegation` | `DelegationController` | `DelegationService` | direct pool queries |
| `/api/v1/scaffold` | `ScaffoldController` | `ScaffoldService` | — |
| `/health` | inline | — | pool, redis |
| `/metrics` | inline | — | prom-client |
## New Services (Phases 36)
| Service | Source file | Responsibility |
|---------|------------|----------------|
| `AnalyticsService` | `src/services/AnalyticsService.ts` | Fire-and-forget `recordEvent`, time-series `getTokenTrend`, heatmap `getAgentActivity`, per-agent `getAgentUsageSummary` |
| `TierService` | `src/services/TierService.ts` | `getStatus` (reads `tenant_tiers`), `initiateUpgrade` (creates Stripe Checkout Session), `applyUpgrade` (handles Stripe webhook), `enforceAgentLimit` |
| `ComplianceService` | `src/services/ComplianceService.ts` | `generateReport` (Redis-cached 5 min), `exportAgentCards` (AGNTCY format) |
| `DelegationService` | `src/services/DelegationService.ts` | A2A delegation chain creation and verification |
| `DIDService` | `src/services/DIDService.ts` | `did:web` identifier generation and DID document management |
| `OIDCKeyService` | `src/services/OIDCKeyService.ts` | OIDC key rotation, JWKS endpoint |
| `IDTokenService` | `src/services/IDTokenService.ts` | OIDC ID token issuance |
| `FederationService` | `src/services/FederationService.ts` | Cross-tenant agent identity federation |
| `WebhookService` | `src/services/WebhookService.ts` | Event subscriptions, delivery with retry, dead-letter queue |
| `VaultService` | `src/services/VaultService.ts` | HashiCorp Vault KV v2 read/write for credential storage |
| `BillingService` | `src/services/BillingService.ts` | Stripe customer and subscription management |
| `MarketplaceService` | `src/services/MarketplaceService.ts` | Agent listing and discovery |
| `OIDCTrustPolicyService` | `src/services/OIDCTrustPolicyService.ts` | GitHub OIDC trust policy management |
| `EventPublisher` | `src/services/EventPublisher.ts` | Routes domain events to webhook delivery and Kafka (if configured) |
## Ports
| Service | Internal port | Exposed port (local dev) |
|---------|--------------|--------------------------|
| AgentIdP app | 3000 | 3000 |
| Next.js portal | 3001 | 3001 |
| PostgreSQL | 5432 | 5432 |
| Redis | 6379 | 6379 |
## API Routes (Phase 6 complete)
Base path: `/api/v1`
| Route | Method(s) | Auth | Feature flag |
|-------|----------|------|-------------|
| `/api/v1/agents` | GET, POST, PATCH, DELETE | Bearer JWT | always on |
| `/api/v1/credentials` | GET, POST, DELETE | Bearer JWT | always on |
| `/api/v1/token` | POST | none (client credentials) | always on |
| `/api/v1/audit` | GET | Bearer JWT | always on |
| `/api/v1/audit/verify` | GET | Bearer JWT | always on |
| `/api/v1/organizations` | GET, POST | Bearer JWT | always on |
| `/api/v1/compliance/controls` | GET | none | always on |
| `/api/v1/compliance/report` | GET | Bearer JWT | `COMPLIANCE_ENABLED=true` |
| `/api/v1/compliance/agent-cards` | GET | Bearer JWT | `COMPLIANCE_ENABLED=true` |
| `/api/v1/analytics/token-trend` | GET | Bearer JWT | `ANALYTICS_ENABLED=true` |
| `/api/v1/analytics/agent-activity` | GET | Bearer JWT | `ANALYTICS_ENABLED=true` |
| `/api/v1/analytics/usage-summary` | GET | Bearer JWT | `ANALYTICS_ENABLED=true` |
| `/api/v1/tiers/status` | GET | Bearer JWT | always on |
| `/api/v1/tiers/upgrade` | POST | Bearer JWT | always on |
| `/api/v1/webhooks` | GET, POST, DELETE | Bearer JWT | always on |
| `/api/v1/federation` | GET, POST | Bearer JWT | always on |
| `/api/v1/delegation` | GET, POST | Bearer JWT | always on |
| `/api/v1/marketplace` | GET | none | always on |
| `/api/v1/billing` | GET, POST | Bearer JWT | always on |
| `/api/v1/did/*` | GET | none | always on |
| `/api/v1/oidc/*` | GET, POST | mixed | always on |
| `/.well-known/openid-configuration` | GET | none | always on |
| `/.well-known/jwks.json` | GET | none | always on |
| `/.well-known/did.json` | GET | none | always on |
| `/health` | GET | none | always on |
| `/metrics` | GET | none | always on |
## Graceful Shutdown
The server listens for `SIGTERM` and `SIGINT`. On receipt:


# Database
AgentIdP uses PostgreSQL 14+ as its primary data store. The schema is built by 26 migrations managed by a custom migration runner.
---
## Schema Overview
```
organizations
├── agents (FK: organization_id → organizations.org_id)
│   ├── credentials (FK: client_id → agents.agent_id, CASCADE DELETE)
│   └── agent_did_keys (FK: agent_id → agents.agent_id)
└── audit_events (FK: organization_id — informational, no cascade)

token_revocations (no FK — independent revocation store)
oidc_keys (standalone — OIDC signing key rotation)
federation_partners (standalone — cross-tenant identity)
webhook_subscriptions → webhook_deliveries (FK: subscription_id)
agent_marketplace (standalone — agent discovery catalog)
github_oidc_trust_policies (standalone — CI/CD trust)
billing (FK: org_id → organizations.org_id — one row per org)
delegation_chains (standalone — A2A delegation records)
analytics_events (FK: organization_id — append-only)
tenant_tiers (FK: org_id → organizations.org_id — one row per org)
```
---
### `organizations`
Created by migration `006_create_organizations_table.sql`.
| Column | Type | Nullable | Description |
|--------|------|----------|-------------|
| `org_id` | `UUID` | No | Primary key |
| `name` | `VARCHAR(255)` | No | Organisation display name |
| `slug` | `VARCHAR(64)` | No | URL-safe unique identifier |
| `created_at` | `TIMESTAMPTZ` | No | Default: `NOW()` |
---
### `agent_did_keys`
Created by migration `012_create_agent_did_keys_table.sql`.
Stores the DID document key material for each agent. One agent may have multiple keys for
rotation purposes.
| Column | Type | Nullable | Description |
|--------|------|----------|-------------|
| `id` | `UUID` | No | Primary key |
| `agent_id` | `UUID` | No | FK → `agents.agent_id` |
| `key_id` | `VARCHAR(255)` | No | DID key fragment identifier |
| `public_key_jwk` | `JSONB` | No | Public key in JWK format |
| `created_at` | `TIMESTAMPTZ` | No | Default: `NOW()` |
---
### DID columns on `agents`
Added by migration `013_add_did_columns_to_agents.sql`:
- `did` (`VARCHAR(512)`, nullable) — the `did:web` identifier for this agent
- `did_document` (`JSONB`, nullable) — full DID document
---
### `oidc_keys`
Created by migration `014_create_oidc_keys_table.sql`.
Stores RSA key pairs used for OIDC ID token signing. Supports key rotation — active key is
determined by the most recently created row.
| Column | Type | Nullable | Description |
|--------|------|----------|-------------|
| `id` | `UUID` | No | Primary key |
| `kid` | `VARCHAR(128)` | No | Key ID — referenced in JWKS |
| `private_key_pem` | `TEXT` | No | Encrypted RSA private key (pgcrypto) |
| `public_key_pem` | `TEXT` | No | RSA public key |
| `algorithm` | `VARCHAR(16)` | No | Always `RS256` |
| `created_at` | `TIMESTAMPTZ` | No | Default: `NOW()` |
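The "most recently created row wins" rotation rule reduces to a small selection over the key rows. A sketch, assuming rows shaped like the table above (the function name is illustrative):

```typescript
// Pick the active signing key: the row with the greatest created_at.
// Row shape mirrors the oidc_keys table above; names are illustrative.
interface OidcKeyRow {
  kid: string;
  created_at: Date;
}

function activeKey(rows: OidcKeyRow[]): OidcKeyRow | undefined {
  return rows.reduce<OidcKeyRow | undefined>(
    (best, row) => (best === undefined || row.created_at > best.created_at ? row : best),
    undefined
  );
}

const rows: OidcKeyRow[] = [
  { kid: "2026-01", created_at: new Date("2026-01-01T00:00:00Z") },
  { kid: "2026-03", created_at: new Date("2026-03-01T00:00:00Z") },
];
console.log(activeKey(rows)?.kid); // 2026-03
```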
---
### `federation_partners`
Created by migration `015_create_federation_partners_table.sql`.
| Column | Type | Nullable | Description |
|--------|------|----------|-------------|
| `id` | `UUID` | No | Primary key |
| `org_id` | `UUID` | No | Owning organisation |
| `partner_name` | `VARCHAR(255)` | No | Display name |
| `partner_jwks_url` | `TEXT` | No | URL to partner's JWKS endpoint |
| `created_at` | `TIMESTAMPTZ` | No | Default: `NOW()` |
---
### `webhook_subscriptions`
Created by migration `016_create_webhook_subscriptions_table.sql`.
| Column | Type | Nullable | Description |
|--------|------|----------|-------------|
| `id` | `UUID` | No | Primary key |
| `org_id` | `UUID` | No | Owning organisation |
| `event_type` | `VARCHAR(128)` | No | Event type filter (e.g. `agent.created`) |
| `target_url` | `TEXT` | No | HTTPS delivery endpoint |
| `secret` | `VARCHAR(255)` | Yes | HMAC signing secret for delivery verification |
| `active` | `BOOLEAN` | No | Default: `true` |
| `created_at` | `TIMESTAMPTZ` | No | Default: `NOW()` |
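The `secret` column exists so receivers can verify deliveries via an HMAC over the payload. A sketch using Node's `crypto` module — the exact algorithm, header name, and encoding AgentIdP uses are not documented here, so treat HMAC-SHA256 and hex encoding as assumptions:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Hypothetical delivery signing: HMAC-SHA256 over the raw JSON payload,
// hex-encoded. The real header name and encoding are not specified above.
function signPayload(secret: string, payload: string): string {
  return createHmac("sha256", secret).update(payload).digest("hex");
}

// Receiver-side check with a constant-time comparison.
function verifyPayload(secret: string, payload: string, signature: string): boolean {
  const expected = Buffer.from(signPayload(secret, payload), "hex");
  const given = Buffer.from(signature, "hex");
  return expected.length === given.length && timingSafeEqual(expected, given);
}

const sig = signPayload("s3cret", '{"event_type":"agent.created"}');
console.log(verifyPayload("s3cret", '{"event_type":"agent.created"}', sig)); // true
```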
---
### `webhook_deliveries`
Created by migration `017_create_webhook_deliveries_table.sql`.
Records each delivery attempt for a webhook event, including the dead-letter queue entries.
| Column | Type | Nullable | Description |
|--------|------|----------|-------------|
| `id` | `UUID` | No | Primary key |
| `subscription_id` | `UUID` | No | FK → `webhook_subscriptions.id` |
| `event_type` | `VARCHAR(128)` | No | Event type delivered |
| `payload` | `JSONB` | No | Full event payload |
| `status` | `VARCHAR(32)` | No | `pending`, `delivered`, `failed`, `dead_letter` |
| `response_status` | `INTEGER` | Yes | HTTP status from delivery endpoint |
| `attempt_count` | `INTEGER` | No | Default: `0` |
| `last_attempted_at` | `TIMESTAMPTZ` | Yes | |
| `created_at` | `TIMESTAMPTZ` | No | Default: `NOW()` |
**Dead-letter queue:** After 3 failed delivery attempts, the row status is set to `dead_letter`
and the `agentidp_webhook_dead_letters_total` Prometheus counter is incremented. The Prometheus
metric label is `event_type`.
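The retry-to-dead-letter transition described above reduces to a small state function. A sketch — only the 3-attempt threshold and the status values come from the text; the function name and the "count includes this attempt" convention are assumptions:

```typescript
// After 3 failed attempts a delivery moves to dead_letter; otherwise it
// stays retryable. attemptCount is assumed to include the current attempt.
const MAX_ATTEMPTS = 3;

type DeliveryStatus = "pending" | "delivered" | "failed" | "dead_letter";

function nextStatus(attemptCount: number, httpStatus: number | null): DeliveryStatus {
  if (httpStatus !== null && httpStatus >= 200 && httpStatus < 300) return "delivered";
  return attemptCount >= MAX_ATTEMPTS ? "dead_letter" : "failed";
}

console.log(nextStatus(1, 500)); // failed
console.log(nextStatus(3, 500)); // dead_letter
console.log(nextStatus(2, 200)); // delivered
```

In the real worker, the `dead_letter` branch would also be the point where the `agentidp_webhook_dead_letters_total` counter is incremented.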
---
### pgcrypto extension
Enabled by migration `018_enable_pgcrypto.sql`. Used for encrypting sensitive columns in
`oidc_keys` and credential data.
---
### `agent_marketplace`
Created by migration `021_add_agent_marketplace.sql`.
| Column | Type | Nullable | Description |
|--------|------|----------|-------------|
| `id` | `UUID` | No | Primary key |
| `agent_id` | `UUID` | No | FK → `agents.agent_id` |
| `listing_name` | `VARCHAR(255)` | No | Display name in marketplace |
| `description` | `TEXT` | Yes | Markdown description |
| `tags` | `TEXT[]` | No | Searchable tags. Default: `{}` |
| `published` | `BOOLEAN` | No | Default: `false` |
| `created_at` | `TIMESTAMPTZ` | No | Default: `NOW()` |
---
### `github_oidc_trust_policies`
Created by migration `022_add_github_oidc_trust_policies.sql`.
Maps GitHub Actions OIDC claims to agent identities for CI/CD token exchange.
| Column | Type | Nullable | Description |
|--------|------|----------|-------------|
| `id` | `UUID` | No | Primary key |
| `org_id` | `UUID` | No | Owning organisation |
| `repository` | `VARCHAR(512)` | No | GitHub repository slug (`owner/repo`) |
| `branch` | `VARCHAR(255)` | Yes | Branch filter (null = any branch) |
| `agent_id` | `UUID` | No | Agent to issue a token for on match |
| `created_at` | `TIMESTAMPTZ` | No | Default: `NOW()` |
---
### `billing`
Created by migration `023_add_billing.sql`.
One row per organisation. Tracks the org's Stripe customer and subscription state.
| Column | Type | Nullable | Description |
|--------|------|----------|-------------|
| `id` | `UUID` | No | Primary key |
| `org_id` | `UUID` | No | FK → `organizations.org_id` (UNIQUE) |
| `stripe_customer_id` | `VARCHAR(255)` | Yes | Stripe Customer ID |
| `stripe_subscription_id` | `VARCHAR(255)` | Yes | Stripe Subscription ID |
| `status` | `VARCHAR(64)` | No | Stripe subscription status or `none` |
| `created_at` | `TIMESTAMPTZ` | No | Default: `NOW()` |
---
### `delegation_chains`
Created by migration `024_add_delegation_chains.sql`.
Records A2A delegation grants created via the delegation API.
| Column | Type | Nullable | Description |
|--------|------|----------|-------------|
| `id` | `UUID` | No | Primary key |
| `delegator_agent_id` | `UUID` | No | Agent granting the delegation |
| `delegate_agent_id` | `UUID` | No | Agent receiving the delegation |
| `scopes` | `TEXT[]` | No | Scopes being delegated |
| `expires_at` | `TIMESTAMPTZ` | Yes | Optional expiry |
| `created_at` | `TIMESTAMPTZ` | No | Default: `NOW()` |
---
### `analytics_events`
Created by migration `025_add_analytics_events.sql`.
Append-only event store for analytics. Supports token trend, agent activity, and usage summary
queries.
| Column | Type | Nullable | Description |
|--------|------|----------|-------------|
| `id` | `UUID` | No | Primary key |
| `organization_id` | `UUID` | No | Owning organisation |
| `date` | `DATE` | No | Calendar date of the event (UTC) |
| `metric_type` | `VARCHAR(64)` | No | e.g. `token_issued`, `agent_called` |
| `count` | `INTEGER` | No | Event count for this date+type |
**Index:** `(organization_id, date DESC)` for fast time-series queries.
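Given the `(organization_id, date, metric_type, count)` shape, daily counters are naturally maintained with an insert-or-increment. A sketch of the SQL such a writer might issue — this assumes a unique constraint on `(organization_id, date, metric_type)`, which the table description above does not state explicitly:

```typescript
// Hypothetical insert-or-increment for a daily analytics counter.
// Assumes UNIQUE (organization_id, date, metric_type), not shown above.
const UPSERT_ANALYTICS_EVENT = `
  INSERT INTO analytics_events (id, organization_id, date, metric_type, count)
  VALUES (gen_random_uuid(), $1, $2, $3, $4)
  ON CONFLICT (organization_id, date, metric_type)
  DO UPDATE SET count = analytics_events.count + EXCLUDED.count
`;

// With pg this would run as, e.g.:
//   pool.query(UPSERT_ANALYTICS_EVENT, [orgId, "2026-04-09", "token_issued", 1])
console.log(UPSERT_ANALYTICS_EVENT.includes("ON CONFLICT")); // true
```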
---
### `tenant_tiers`
Created by migration `026_add_tenant_tiers.sql`.
One row per organisation. Stores the current tier and enforces tier limits via the
`tierEnforcement` middleware.
| Column | Type | Nullable | Description |
|--------|------|----------|-------------|
| `id` | `UUID` | No | Primary key |
| `org_id` | `UUID` | No | FK → `organizations.org_id` (UNIQUE) |
| `tier` | `ENUM('free','pro','enterprise')` | No | Current tier. Default: `free` |
| `updated_at` | `TIMESTAMPTZ` | No | Last tier change. Default: `NOW()` |
**Tier limits** (from `src/config/tiers.ts`):
| Tier | Max Agents | Max API Calls/Day | Max Tokens/Day |
|------|-----------|-------------------|----------------|
| free | 10 | 1,000 | 1,000 |
| pro | 100 | 50,000 | 50,000 |
| enterprise | unlimited | unlimited | unlimited |
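The limits table can be encoded directly, with `unlimited` represented as `Infinity`. A sketch of how the `tierEnforcement` middleware might consult it — the numbers come from the table above, but the names and shape are assumptions rather than the contents of `src/config/tiers.ts`:

```typescript
// Tier limits from the table above; Infinity stands in for "unlimited".
type Tier = "free" | "pro" | "enterprise";

interface TierLimits { maxAgents: number; maxApiCallsPerDay: number; maxTokensPerDay: number }

const TIER_LIMITS: Record<Tier, TierLimits> = {
  free:       { maxAgents: 10,       maxApiCallsPerDay: 1_000,    maxTokensPerDay: 1_000 },
  pro:        { maxAgents: 100,      maxApiCallsPerDay: 50_000,   maxTokensPerDay: 50_000 },
  enterprise: { maxAgents: Infinity, maxApiCallsPerDay: Infinity, maxTokensPerDay: Infinity },
};

// One representative check: can this org register another agent?
function canCreateAgent(tier: Tier, currentAgentCount: number): boolean {
  return currentAgentCount < TIER_LIMITS[tier].maxAgents;
}

console.log(canCreateAgent("free", 10));       // false: at the free cap
console.log(canCreateAgent("enterprise", 10)); // true: unlimited
```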
---
## Migration Runner
Migrations are managed by `scripts/migrate.ts`. It reads `.sql` files from `src/db/migrations/` in alphabetical order, tracks applied migrations in a `schema_migrations` table, and executes only unapplied migrations — each in its own transaction.
Expected output (first run):

```
Running database migrations...
✓ Applied: 001_create_agents.sql
✓ Applied: 002_create_credentials.sql
...
✓ Applied: 025_add_analytics_events.sql
✓ Applied: 026_add_tenant_tiers.sql
Migrations complete. 26 migration(s) applied.
```
Expected output (already applied): Expected output (already applied):
Expected output:

```
-----------------------------------+-------------------------------
 001_create_agents.sql             | 2026-03-28 09:00:00.000000+00
 002_create_credentials.sql        | 2026-03-28 09:00:00.000000+00
 ...
 025_add_analytics_events.sql      | 2026-04-04 09:00:00.000000+00
 026_add_tenant_tiers.sql          | 2026-04-04 09:00:00.000000+00
(26 rows)
```
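The runner's core selection step — alphabetical order minus already-applied files — can be sketched as a pure function. This is a simplified model of what `scripts/migrate.ts` is described as doing, not its actual code:

```typescript
// Given the .sql files on disk and the names recorded in schema_migrations,
// return the migrations still to run, in alphabetical order.
function pendingMigrations(files: string[], applied: Set<string>): string[] {
  return files
    .filter((f) => f.endsWith(".sql") && !applied.has(f))
    .sort(); // alphabetical, matching the numeric filename prefixes
}

const onDisk = ["002_create_credentials.sql", "001_create_agents.sql", "026_add_tenant_tiers.sql"];
const done = new Set(["001_create_agents.sql", "002_create_credentials.sql"]);
console.log(pendingMigrations(onDisk, done)); // [ '026_add_tenant_tiers.sql' ]
```

Each returned file would then be executed in its own transaction and recorded in `schema_migrations`.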
### Adding a new migration
There is no automated rollback.
## Connection Pool
The application uses `pg.Pool` with settings read from environment variables. The pool is a singleton — one pool per process instance.
| Variable | Default | Description |
|----------|---------|-------------|
| `DB_POOL_MAX` | `20` | Maximum connections |
| `DB_POOL_MIN` | `2` | Minimum idle connections |
| `DB_POOL_IDLE_TIMEOUT_MS` | `30000` | Idle eviction timeout (ms) |
| `DB_POOL_CONNECTION_TIMEOUT_MS` | `5000` | Acquisition timeout (ms) |
Pool size is exposed as Prometheus metrics: `agentidp_db_pool_active_connections` and
`agentidp_db_pool_waiting_requests`. Monitor these in production to detect pool exhaustion.
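The four variables map onto standard `pg.Pool` options (`max`, `min`, `idleTimeoutMillis`, `connectionTimeoutMillis`). A sketch of the env-to-config translation, with the documented defaults as fallbacks — the function name and exact parsing behaviour are assumptions:

```typescript
// Translate the documented env vars into pg.Pool options, falling back
// to the documented defaults when a variable is unset or non-numeric.
interface PoolEnvConfig {
  max: number;
  min: number;
  idleTimeoutMillis: number;
  connectionTimeoutMillis: number;
}

function poolConfigFromEnv(env: Record<string, string | undefined>): PoolEnvConfig {
  const num = (name: string, fallback: number): number => {
    const raw = env[name];
    const parsed = raw === undefined ? NaN : Number(raw);
    return Number.isFinite(parsed) ? parsed : fallback;
  };
  return {
    max: num("DB_POOL_MAX", 20),
    min: num("DB_POOL_MIN", 2),
    idleTimeoutMillis: num("DB_POOL_IDLE_TIMEOUT_MS", 30_000),
    connectionTimeoutMillis: num("DB_POOL_CONNECTION_TIMEOUT_MS", 5_000),
  };
}

console.log(poolConfigFromEnv({}).max);                   // 20
console.log(poolConfigFromEnv({ DB_POOL_MAX: "5" }).max); // 5
```

The resulting object would be spread into `new Pool({ connectionString, ...poolConfigFromEnv(process.env) })`.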
