docs: commit all Phase 6 documentation updates and OpenSpec archives

- devops docs: 8 files updated for Phase 6 state; field-trial.md added (946-line runbook)
- developer docs: api-reference (50+ endpoints), quick-start, 5 existing guides updated, 5 new guides added
- engineering docs: all 12 files updated (services, architecture, SDK guide, testing, overview)
- OpenSpec archives: phase-7-devops-field-trial, developer-docs-phase6-update, engineering-docs-phase6-update
- VALIDATOR.md + scripts/start-validator.sh: V&V Architect tooling added
- .gitignore: exclude session artifacts, build artifacts, and agent workspaces

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Author: SentryAgent.ai Developer
Date: 2026-04-07 02:24:24 +00:00
Parent: 0fb00256b4
Commit: 8cabc0191c
56 changed files with 12780 additions and 446 deletions

@@ -0,0 +1,259 @@
# WS1 — Service Deep Dives: Phase 36 Additions
**Target file:** `docs/engineering/05-services.md`
**Operation:** Append the following 8 service entries after the existing `### Prometheus/Grafana Monitoring` section (which is the last entry in the current file). Each entry follows the exact format of existing entries.
---
## Instructions to Developer
Append the following Markdown verbatim to the end of `docs/engineering/05-services.md`, starting after the final line of the `### Prometheus/Grafana Monitoring` section. Do not modify any existing content.
---
## Content to Append
```markdown
---
### AnalyticsService
**Purpose**: Records daily aggregated analytics events (token issuances, agent activity) and exposes query methods for token trends, agent activity heatmaps, and per-agent usage summaries. All query methods scope results strictly to the supplied `tenantId`. The `recordEvent` method is fire-and-forget — it catches all errors internally and never propagates them to the caller, so analytics writes never block primary request paths.
**Public methods**:
| Method | Parameters | Returns | Description |
|--------|-----------|---------|-------------|
| `recordEvent` | `tenantId: string, metricType: string` | `Promise<void>` | Upserts a daily counter row in `analytics_events` via `INSERT ... ON CONFLICT DO UPDATE SET count = count + 1`. Catches and swallows all errors; safe to call with `void` on hot paths. |
| `getTokenTrend` | `tenantId: string, days: number` | `Promise<ITokenTrendEntry[]>` | Returns daily token issuance counts for the last N days (clamped to 90). Uses `generate_series` + `LEFT JOIN` so that days with no events appear as `count: 0`. Results sorted ascending by date. |
| `getAgentActivity` | `tenantId: string` | `Promise<IAgentActivityEntry[]>` | Returns agent activity bucketed by day-of-week (0=Sun…6=Sat) and hour-of-day for the last 30 days. Reads only rows whose `metric_type` matches the pattern `agent:<agentId>:<metricType>`. |
| `getAgentUsageSummary` | `tenantId: string` | `Promise<IAgentUsageSummaryEntry[]>` | Returns per-agent token issuance totals for the current calendar month, joined with the agent name (`owner` field). Sorted descending by `token_count`. Excludes decommissioned agents. |
**Dependencies**: PostgreSQL connection pool (`Pool` from `pg`). No Redis usage.
**Configuration**: None. `MAX_TREND_DAYS = 90` is a module-level constant.
**DB tables**:
- `analytics_events`: `organization_id` (UUID FK to `organizations`), `date` (DATE), `metric_type` (text — e.g. `'token_issued'`, `'agent:<agentId>:token_issued'`), `count` (integer). Unique constraint on `(organization_id, date, metric_type)`.
- `agents`: read in `getAgentUsageSummary` to join `owner` and filter by `organization_id`.
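**Example (illustrative sketch)**: The fire-and-forget contract can be modelled as below, with an injected query function standing in for the `pg` pool (the helper names are hypothetical, not the actual implementation):
```typescript
// The injected `query` stands in for pool.query from `pg`.
type QueryFn = (sql: string, params: unknown[]) => Promise<unknown>;

// One row per (organization_id, date, metric_type); a conflict increments count.
const UPSERT_SQL = `
  INSERT INTO analytics_events (organization_id, date, metric_type, count)
  VALUES ($1, CURRENT_DATE, $2, 1)
  ON CONFLICT (organization_id, date, metric_type)
  DO UPDATE SET count = analytics_events.count + 1`;

async function recordEvent(query: QueryFn, tenantId: string, metricType: string): Promise<void> {
  try {
    await query(UPSERT_SQL, [tenantId, metricType]);
  } catch {
    // Swallow every error: analytics writes must never block the request path.
  }
}
```
Hot paths deliberately do not await the promise, e.g. `void recordEvent(query, tenantId, 'token_issued')`.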
---
### TierService
**Purpose**: Single authority for all subscription tier business logic — fetches current tier and live usage, initiates Stripe Checkout sessions for upgrades, applies confirmed upgrades to the `organizations` table, and enforces per-tier agent count limits. Controllers and middleware delegate all tier decisions to this service; no tier logic lives elsewhere.
**Public methods**:
| Method | Parameters | Returns | Description |
|--------|-----------|---------|-------------|
| `getStatus` | `orgId: string` | `Promise<ITierStatus>` | Returns current `tier`, per-tier `limits` (from `TIER_CONFIG`), live `usage` (Redis counters + DB agent count), and `resetAt` (ISO 8601 next UTC midnight). Falls back to `0` for Redis counters when Redis is unavailable. |
| `initiateUpgrade` | `orgId: string, targetTier: TierName` | `Promise<IUpgradeInitiation>` | Validates that `targetTier` is strictly higher rank than current tier. Creates a Stripe Checkout Session with `mode: 'subscription'`, `metadata: { orgId, targetTier }`, and the price ID from `STRIPE_PRICE_ID_<TIER>` env var. Returns `{ checkoutUrl }`. |
| `applyUpgrade` | `orgId: string, tier: TierName` | `Promise<void>` | Sets `organizations.tier` and `organizations.tier_updated_at = NOW()`. Called by the Stripe webhook handler after `checkout.session.completed`. |
| `fetchTier` | `orgId: string` | `Promise<TierName>` | Queries `organizations.tier` for the given org. Returns `'free'` as a safe default when no row is found or the stored value is not a valid `TierName`. |
| `enforceAgentLimit` | `orgId: string, tier: TierName` | `Promise<void>` | Counts non-decommissioned agents for the org and throws `TierLimitError` when count is at or over `TIER_CONFIG[tier].maxAgents`. No-op for Enterprise (infinite limit). Called by `AgentService` before creating a new agent. |
**Dependencies**: PostgreSQL (`Pool`), Redis (`RedisClientType`), Stripe client (`Stripe`). Imports `TIER_CONFIG` and `TIER_RANK` from `src/config/tiers.ts`.
**Configuration**:
- `STRIPE_PRICE_ID_PRO` — Stripe price ID for the Pro tier
- `STRIPE_PRICE_ID_ENTERPRISE` — Stripe price ID for the Enterprise tier
- `STRIPE_PRICE_ID` — Fallback Stripe price ID when tier-specific vars are not set
- `STRIPE_SUCCESS_URL` — Redirect URL on successful checkout (default: `APP_BASE_URL/dashboard?billing=success`)
- `STRIPE_CANCEL_URL` — Redirect URL when checkout is cancelled (default: `APP_BASE_URL/dashboard?billing=cancel`)
- `APP_BASE_URL` — Base URL for redirect URL construction (default: `http://localhost:3000`)
**Redis keys**:
- `rate:tier:calls:<orgId>` — integer, daily API call counter; TTL set at next UTC midnight. Read in `getStatus`.
- `rate:tier:tokens:<orgId>` — integer, daily token issuance counter; same TTL. Read in `getStatus`.
**DB tables**:
- `organizations`: `organization_id` (UUID PK), `tier` (text — `'free'|'pro'|'enterprise'`), `tier_updated_at` (timestamptz). Read in `fetchTier`; written in `applyUpgrade`.
- `agents`: read in `enforceAgentLimit` and `getStatus` to count non-decommissioned agents per org.
**Error types**:
- `ValidationError` (400) — target tier is not higher than current tier
- `TierLimitError` (429) — agent count limit reached for the current tier
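**Example (illustrative sketch)**: The strictly-higher-rank check behind `initiateUpgrade` can be sketched as follows (the `TIER_RANK` shape is assumed from `src/config/tiers.ts`, which is not shown here):
```typescript
type TierName = 'free' | 'pro' | 'enterprise';

// Assumed shape of TIER_RANK: higher number = higher tier.
const TIER_RANK: Record<TierName, number> = { free: 0, pro: 1, enterprise: 2 };

class ValidationError extends Error {}

// initiateUpgrade proceeds only when the target tier outranks the current one.
function assertUpgradeAllowed(current: TierName, target: TierName): void {
  if (TIER_RANK[target] <= TIER_RANK[current]) {
    throw new ValidationError(`cannot upgrade from ${current} to ${target}`);
  }
}
```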
---
### ComplianceService
**Purpose**: Generates AGNTCY-standard compliance reports and exports agent cards for a tenant. Reports cover two sections: `agent-identity` (DID presence and credential expiry checks) and `audit-trail` (cryptographic hash chain verification). Reports are cached in Redis for 5 minutes to avoid repeated expensive DB queries. Agent card export returns all active agents in AGNTCY-standard JSON format.
**Public methods**:
| Method | Parameters | Returns | Description |
|--------|-----------|---------|-------------|
| `generateReport` | `tenantId: string` | `Promise<IComplianceReport>` | Attempts to read `compliance:report:<tenantId>` from Redis; if found, returns it with `from_cache: true`. Otherwise builds the report by running `buildAgentIdentitySection` and `buildAuditTrailSection` in parallel, rolls up the overall status (fail > warn > pass), caches the result for 300 seconds, and returns it. |
| `exportAgentCards` | `tenantId: string` | `Promise<IAgentCard[]>` | Queries all non-decommissioned agents for the tenant and maps each to an AGNTCY agent card with `id` (DID or agent UUID), `name`, `capabilities`, `endpoint`, `created_at`, and `agntcy_schema_version: '1.0'`. |
**Dependencies**: PostgreSQL (`Pool`), Redis (`RedisClientType`). Internally instantiates `AuditVerificationService` for hash chain verification.
**Configuration**: None. `CACHE_TTL_SECONDS = 300` and `AGNTCY_SCHEMA_VERSION = '1.0'` are module-level constants.
**Redis keys**:
- `compliance:report:<tenantId>` — JSON-serialised `IComplianceReport`, TTL 300 seconds. Written by `generateReport`; read on every call within the cache window.
**DB tables**:
- `agents`: queried in both `buildAgentIdentitySection` (checks DID presence) and `exportAgentCards`.
- `credentials`: queried in `buildAgentIdentitySection` to check active credential expiry per agent.
- `audit_events`: read via `AuditVerificationService` in `buildAuditTrailSection` to verify hash chain integrity.
**Error types**: None thrown directly. Internal errors in section builders produce `status: 'fail'` sections rather than exceptions.
**Report structure**:
- `agent-identity` section: `fail` when any active agent is missing a DID or has expired credentials; `warn` when any credential expires within 7 days; `pass` otherwise.
- `audit-trail` section: `fail` when `AuditVerificationService.verifyChain()` returns `verified: false`; `pass` otherwise.
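**Example (illustrative sketch)**: The overall-status rollup (fail > warn > pass) reduces to a small pure function (helper name assumed):
```typescript
type SectionStatus = 'pass' | 'warn' | 'fail';

// Any failing section fails the report; otherwise any warning demotes it to warn.
function rollUpStatus(sections: SectionStatus[]): SectionStatus {
  if (sections.includes('fail')) return 'fail';
  if (sections.includes('warn')) return 'warn';
  return 'pass';
}
```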
---
### FederationService
**Purpose**: Manages trusted federation partners and cross-IdP JWT token verification. At partner registration, the partner's JWKS endpoint is validated and the keys are cached in Redis. At token verification, the service fetches (or reuses cached) partner JWKS, verifies the JWT signature and standard claims, enforces the partner's `allowed_organizations` filter, and rejects tokens from suspended or expired partners.
**Public methods**:
| Method | Parameters | Returns | Description |
|--------|-----------|---------|-------------|
| `registerPartner` | `req: ICreatePartnerRequest` | `Promise<IFederationPartner>` | Validates the `jwks_uri` is reachable (5-second timeout) and returns valid JWKS. Inserts the partner row into `federation_partners`. Caches the JWKS in Redis under `federation:jwks:<issuer>`. |
| `listPartners` | _(none)_ | `Promise<IFederationPartner[]>` | Updates any partners past `expires_at` to `status = 'expired'` before returning all rows ordered by `created_at DESC`. |
| `getPartner` | `id: string` | `Promise<IFederationPartner>` | Applies the same expiry update, then returns the partner row. Throws `FederationPartnerNotFoundError` (404) when not found. |
| `updatePartner` | `id: string, req: IUpdatePartnerRequest` | `Promise<IFederationPartner>` | Applies a partial update. When `jwks_uri` changes, invalidates the old issuer's JWKS cache entry (`DEL federation:jwks:<oldIssuer>`). |
| `deletePartner` | `id: string` | `Promise<void>` | Deletes the partner row and invalidates the JWKS cache. |
| `verifyFederatedToken` | `req: IFederationVerifyRequest` | `Promise<IFederationVerifyResult>` | Decodes token header/payload without verification, rejects `alg:none`, looks up partner by `iss`, checks partner status and expiry, fetches JWKS (cache-first), finds matching key by `kid`, converts JWK to PEM, verifies signature via `jsonwebtoken.verify` (RS256 or ES256), enforces `allowed_organizations` filter. Returns `{ valid, issuer, subject, organization_id, claims }`. |
**Dependencies**: PostgreSQL (`Pool`), Redis (`RedisClientType`). Uses `jsonwebtoken` for JWT decoding/verification and Node.js `crypto.createPublicKey` for JWK-to-PEM conversion.
**Configuration**:
- `FEDERATION_JWKS_CACHE_TTL_SECONDS` — TTL for cached partner JWKS in Redis (default: `3600`)
**Redis keys**:
- `federation:jwks:<issuer>` — JSON-serialised `IJWKSKey[]`, TTL from `FEDERATION_JWKS_CACHE_TTL_SECONDS`. Written on partner registration and on cache miss during token verification; deleted when a partner is updated (JWKS URI changed) or deleted.
**DB tables**:
- `federation_partners`: `id` (UUID PK), `name` (text), `issuer` (text — the IdP's issuer URL), `jwks_uri` (text), `allowed_organizations` (text[] — empty means all orgs allowed), `status` (`active|suspended|expired`), `created_at`, `updated_at`, `expires_at` (nullable timestamptz).
**Error types**:
- `FederationPartnerError` (400) — JWKS endpoint unreachable or returns invalid JWKS
- `FederationPartnerNotFoundError` (404) — partner UUID not found
- `FederationVerificationError` (401) — token malformed, alg:none, unknown issuer, partner suspended/expired, signature invalid, org not in allow list
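**Example (illustrative sketch)**: The JWK-to-PEM step is the one part of verification that runs on plain Node.js `crypto`. The sketch below self-generates a P-256 key pair to stand in for a key fetched from a partner's JWKS endpoint:
```typescript
import { createPublicKey, generateKeyPairSync } from 'node:crypto';

// Stand-in for the JWKS key whose `kid` matched the token header.
const { publicKey } = generateKeyPairSync('ec', { namedCurve: 'P-256' });
const jwk = publicKey.export({ format: 'jwk' });

// crypto.createPublicKey accepts a JWK directly and re-exports it as SPKI PEM,
// which jsonwebtoken.verify can consume.
const pem = createPublicKey({ key: jwk, format: 'jwk' })
  .export({ type: 'spki', format: 'pem' }) as string;
```
The resulting PEM is then passed to `jsonwebtoken.verify(token, pem, { algorithms: ['RS256', 'ES256'] })`.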
---
### DIDService
**Purpose**: Manages W3C DID Core 1.0 document generation, EC P-256 key pair creation, and AGNTCY agent card export. Generates per-agent `did:web` identifiers, stores private keys in HashiCorp Vault (or records a `dev:no-vault` marker in dev mode), and caches DID documents in Redis. Builds both an instance-level DID document (for AgentIdP itself) and per-agent DID documents with AGNTCY extension properties.
**Public methods**:
| Method | Parameters | Returns | Description |
|--------|-----------|---------|-------------|
| `generateDIDForAgent` | `agentId: string, organizationId: string` | `Promise<{ did: string; publicKeyJwk: IPublicKeyJwk }>` | Generates an EC P-256 key pair. Stores the private key PEM in Vault KV v2 at `{mount}/data/agentidp/agents/{agentId}/did-key`. Encrypts the vault path via `EncryptionService` (when configured). Inserts a row into `agent_did_keys`. Updates `agents.did` and `agents.did_created_at`. Returns the `did:web` identifier and public key JWK. |
| `buildInstanceDIDDocument` | _(none)_ | `Promise<IDIDDocument>` | Builds the root instance DID document for AgentIdP (format: `did:web:{DID_WEB_DOMAIN}`). Cached in Redis under `did:doc:instance`. |
| `buildAgentDIDDocument` | `agentId: string` | `Promise<IAgentDIDDocumentResult>` | Builds a per-agent DID document (format: `did:web:{DID_WEB_DOMAIN}:agents:{agentId}`). Decommissioned agents get a deactivated document with an `AgentStatus: decommissioned` service entry. Cached in Redis under `did:doc:{agentId}` for active agents only. Throws `AgentNotFoundError` if the agent does not exist. |
| `buildResolutionResult` | `agentId: string` | `Promise<IDIDResolutionResult>` | Wraps `buildAgentDIDDocument` with W3C DID Resolution metadata (`didDocumentMetadata`, `didResolutionMetadata`). |
| `buildAgentCard` | `agentId: string` | `Promise<IAgentCard>` | Returns an AGNTCY-format agent card with `did`, `name` (agent email), `agentType`, `capabilities`, `owner`, `version`, `deploymentEnv`, `identityProvider`, and `issuedAt`. |
**Dependencies**: PostgreSQL (`Pool`), Redis (`RedisClientType`), optional `VaultClient`, optional `EncryptionService`. Uses `node-vault` directly for DID private key storage.
**Configuration**:
- `DID_WEB_DOMAIN` — required; the domain for `did:web` DID construction (e.g. `idp.sentryagent.ai`)
- `DID_DOCUMENT_CACHE_TTL_SECONDS` — Redis cache TTL for DID documents (default: `300`)
- `VAULT_ADDR`, `VAULT_TOKEN`, `VAULT_MOUNT` — when set, private keys are stored in Vault; otherwise `dev:no-vault` marker is used
**Redis keys**:
- `did:doc:instance` — JSON-serialised instance `IDIDDocument`, TTL from `DID_DOCUMENT_CACHE_TTL_SECONDS`
- `did:doc:<agentId>` — JSON-serialised per-agent `IDIDDocument`, same TTL. Not cached for decommissioned agents.
**DB tables**:
- `agents`: `did` (text — `did:web:...`), `did_created_at` (timestamptz). Written by `generateDIDForAgent`; read in all document-building methods.
- `agent_did_keys`: `key_id` (UUID PK), `agent_id` (UUID FK), `organization_id` (UUID FK), `public_key_jwk` (JSONB), `vault_key_path` (text — Vault KV v2 path or `dev:no-vault`), `key_type` (`'EC'`), `curve` (`'P-256'`), `created_at`. Written by `generateDIDForAgent`.
**Error types**:
- `AgentNotFoundError` (404) — agent UUID not found in `buildAgentDIDDocument`, `buildResolutionResult`, `buildAgentCard`
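**Example (illustrative sketch)**: The `did:web` identifier format and a document skeleton can be sketched as below (the `verificationMethod` shape follows the usual W3C DID Core convention and is an assumption, not taken from the implementation):
```typescript
// did:web:{DID_WEB_DOMAIN}:agents:{agentId}, per the format described above.
function agentDid(domain: string, agentId: string): string {
  return `did:web:${domain}:agents:${agentId}`;
}

// Minimal DID document skeleton (assumed field shapes).
function didDocumentSkeleton(did: string, publicKeyJwk: object) {
  return {
    '@context': ['https://www.w3.org/ns/did/v1'],
    id: did,
    verificationMethod: [{
      id: `${did}#key-1`,
      type: 'JsonWebKey2020',
      controller: did,
      publicKeyJwk,
    }],
  };
}
```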
---
### WebhookService
**Purpose**: Manages webhook subscriptions and their delivery history for a tenant organisation. HMAC signing secrets are stored in HashiCorp Vault KV v2 (when configured) or bcrypt-hashed in PostgreSQL in local mode. The raw secret is only returned once at subscription creation time. `vault_secret_path` is encrypted at rest via `EncryptionService` (AES-256-CBC) before being written to PostgreSQL (SOC 2 CC6.1 compliance).
**Public methods**:
| Method | Parameters | Returns | Description |
|--------|-----------|---------|-------------|
| `createSubscription` | `orgId: string, req: ICreateWebhookRequest` | `Promise<IWebhookSubscription & { secret: string }>` | Generates a 32-byte random hex HMAC secret. Stores in Vault at `secret/data/agentidp/webhooks/{orgId}/{id}/secret` (Vault mode) or bcrypt-hashes and stores in `secret_hash` (local mode). Encrypts `vault_secret_path` via `EncryptionService`. Returns the subscription including the one-time `secret`. Validates URL must use `https://` and events array must be non-empty. |
| `listSubscriptions` | `orgId: string` | `Promise<IWebhookSubscription[]>` | Returns all subscriptions for the org, ordered by `created_at DESC`. No secret fields are included. |
| `getSubscription` | `id: string, orgId: string` | `Promise<IWebhookSubscription>` | Returns a single subscription. Verifies org ownership. |
| `updateSubscription` | `id: string, orgId: string, req: IUpdateWebhookRequest` | `Promise<IWebhookSubscription>` | Partially updates `name`, `url`, `events`, or `active` fields. Validates `https://` if URL is changing. |
| `deleteSubscription` | `id: string, orgId: string` | `Promise<void>` | Permanently deletes the subscription and all deliveries (via PostgreSQL CASCADE). |
| `getSubscriptionSecret` | `subscriptionId: string, orgId: string` | `Promise<string>` | Retrieves the raw HMAC secret from Vault (Vault mode only). Throws `WebhookValidationError` in local mode since the secret cannot be recovered after creation. |
| `listDeliveries` | `subscriptionId: string, orgId: string, limit: number, offset: number` | `Promise<IPaginatedDeliveriesResponse>` | Returns paginated delivery records for a subscription. Verifies org ownership before querying. |
**Dependencies**: PostgreSQL (`Pool`), optional `VaultClient`, Redis (`RedisClientType` — reserved for future caching), optional `EncryptionService`.
**Configuration**: Inherits Vault configuration from `VaultClient` (`VAULT_ADDR`, `VAULT_TOKEN`, `VAULT_MOUNT`). `EncryptionService` requires `ENCRYPTION_KEY` env var (see `EncryptionService` docs).
**DB tables**:
- `webhook_subscriptions`: `id` (UUID PK), `organization_id` (UUID FK), `name` (text), `url` (text — https only), `events` (JSONB — `WebhookEventType[]`), `secret_hash` (text — bcrypt hash in local mode, `'vault'` in Vault mode), `vault_secret_path` (text — encrypted Vault path or `'local'`), `active` (boolean), `failure_count` (integer), `created_at`, `updated_at`.
- `webhook_deliveries`: `id` (UUID PK), `subscription_id` (UUID FK), `event_type` (text), `payload` (JSONB), `status` (`pending|delivered|failed|dead_letter`), `http_status_code` (integer nullable), `attempt_count` (integer), `next_retry_at` (timestamptz nullable), `delivered_at` (timestamptz nullable), `created_at`, `updated_at`. Cascades on subscription delete.
**Error types**:
- `WebhookNotFoundError` (404) — subscription not found or belongs to another org
- `WebhookValidationError` (400) — invalid URL scheme, empty events array, or secret not recoverable in local mode
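**Example (illustrative sketch)**: Secret generation and HMAC signing of an outbound payload look roughly like this (the signing helper is an assumption; the source only specifies a 32-byte random hex secret used for HMAC):
```typescript
import { createHmac, randomBytes } from 'node:crypto';

// 32 random bytes rendered as 64 hex characters; returned once at creation.
function generateSecret(): string {
  return randomBytes(32).toString('hex');
}

// Hypothetical delivery-side signing: HMAC-SHA256 over the raw JSON payload.
function signPayload(secret: string, payload: string): string {
  return createHmac('sha256', secret).update(payload).digest('hex');
}
```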
---
### BillingService
**Purpose**: Manages Stripe billing integration — creates Checkout Sessions for tenant subscriptions, processes incoming Stripe webhook events (subscription lifecycle and checkout completion), and retrieves current subscription status. When a `checkout.session.completed` event carries `{ orgId, targetTier }` in its metadata, delegates to `TierService.applyUpgrade` to update the organisation's tier.
**Public methods**:
| Method | Parameters | Returns | Description |
|--------|-----------|---------|-------------|
| `createCheckoutSession` | `tenantId: string, successUrl: string, cancelUrl: string` | `Promise<string>` | Creates a Stripe Checkout Session with `mode: 'subscription'`, `client_reference_id: tenantId`, and the price from `STRIPE_PRICE_ID`. Returns the checkout URL. Throws if Stripe does not return a URL. |
| `handleWebhookEvent` | `rawBody: Buffer, sig: string, webhookSecret: string` | `Promise<void>` | Verifies the Stripe webhook signature via `stripe.webhooks.constructEvent`. Handles `customer.subscription.created/updated/deleted` (upserts `tenant_subscriptions`) and `checkout.session.completed` (applies tier upgrade via `TierService` when metadata contains `orgId` and `targetTier`). |
| `getSubscriptionStatus` | `tenantId: string` | `Promise<ISubscriptionStatus>` | Queries `tenant_subscriptions` for the given tenant. Returns `{ tenantId, status: 'free', currentPeriodEnd: null, stripeSubscriptionId: null }` when no row exists. |
**Dependencies**: PostgreSQL (`Pool`), Stripe client (`Stripe`), optional `TierService`.
**Configuration**:
- `STRIPE_PRICE_ID` — Stripe price ID for subscription checkout sessions
- `STRIPE_WEBHOOK_SECRET` — Stripe webhook endpoint secret (`whsec_...`); passed by the webhook controller, not read directly by the service
**DB tables**:
- `tenant_subscriptions`: `tenant_id` (UUID PK or unique), `status` (text — `'free'|'active'|'past_due'|'canceled'`), `stripe_customer_id` (text), `stripe_subscription_id` (text), `current_period_end` (timestamptz nullable), `updated_at`. Upserted on subscription lifecycle events.
**Error types**: None defined in the service. Stripe signature failures raise `Error` from `stripe.webhooks.constructEvent`; these propagate to the error handler as 400 responses.
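**Example (illustrative sketch)**: The webhook event routing described above can be modelled as a pure dispatch function (event shapes trimmed to the fields the handler reads; not the actual implementation):
```typescript
interface StripeEventLike {
  type: string;
  data: { object: { metadata?: Record<string, string> } };
}

type Action =
  | { kind: 'upsert_subscription' }
  | { kind: 'apply_tier_upgrade'; orgId: string; targetTier: string }
  | { kind: 'ignore' };

function routeEvent(event: StripeEventLike): Action {
  switch (event.type) {
    case 'customer.subscription.created':
    case 'customer.subscription.updated':
    case 'customer.subscription.deleted':
      return { kind: 'upsert_subscription' };
    case 'checkout.session.completed': {
      // Tier upgrade is applied only when both metadata keys are present.
      const { orgId, targetTier } = event.data.object.metadata ?? {};
      return orgId && targetTier
        ? { kind: 'apply_tier_upgrade', orgId, targetTier }
        : { kind: 'ignore' };
    }
    default:
      return { kind: 'ignore' };
  }
}
```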
---
### OIDCService (A2A / OIDC Provider)
**Note**: `src/services/OIDCService.ts` does not exist as a standalone file — OIDC provider functionality is handled by the `oidc-provider` npm package, configured in `src/app.ts` and related route files. The service boundary for OIDC-related business logic is the `DelegationService`. Document the OIDC integration as follows.
**Purpose**: The OIDC/A2A subsystem provides agent-to-agent (A2A) delegation using the `oidc-provider` library (v9.7.x). The provider is mounted as a sub-application at `/oidc` and issues short-lived delegation tokens scoped to a specific `delegatee_id`. The `DelegationService` (`src/services/DelegationService.ts`) manages the `delegation_chains` table for auditing.
**Key endpoints exposed by the OIDC provider**:
- `POST /oidc/token` — issues delegation tokens via `client_credentials` or custom grant
- `GET /oidc/.well-known/openid-configuration` — OIDC discovery document
- `GET /oidc/jwks` — public JWK Set for verifying delegation tokens
**DelegationService public methods** (from `src/services/DelegationService.ts`):
| Method | Parameters | Returns | Description |
|--------|-----------|---------|-------------|
| `createDelegation` | `delegatorId: string, delegateeId: string, scope: string, expiresAt?: Date` | `Promise<IDelegationChain>` | Inserts a delegation chain record into `delegation_chains`. Validates both agents exist and are active. |
| `verifyDelegation` | `token: string, delegateeId: string` | `Promise<IDelegationVerifyResult>` | Verifies the delegation token signature and checks the chain record is active and not expired. |
| `revokeDelegation` | `chainId: string, delegatorId: string` | `Promise<void>` | Sets `delegation_chains.status = 'revoked'` and `revoked_at = NOW()`. Validates the delegator owns the chain. |
**DB tables**:
- `delegation_chains`: `chain_id` (UUID PK), `delegator_id` (UUID), `delegatee_id` (UUID), `scope` (text), `status` (`active|revoked|expired`), `created_at`, `expires_at` (nullable), `revoked_at` (nullable), `token` (text — the delegation JWT).
**Configuration**:
- `A2A_ENABLED` — when set to `'false'`, A2A/delegation endpoints return 404
- `OIDC_ISSUER` — issuer URL for the OIDC provider
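**Example (illustrative sketch)**: The chain-record check performed by `verifyDelegation` reduces to the following (row shape taken from the `delegation_chains` table above; helper name assumed):
```typescript
interface DelegationChainRow {
  delegatee_id: string;
  status: 'active' | 'revoked' | 'expired';
  expires_at: Date | null;
}

// A chain is usable only if active, bound to this delegatee, and not expired.
function chainIsValid(row: DelegationChainRow, delegateeId: string, now: Date): boolean {
  if (row.status !== 'active') return false;
  if (row.delegatee_id !== delegateeId) return false;
  if (row.expires_at !== null && row.expires_at <= now) return false;
  return true;
}
```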
```

@@ -0,0 +1,235 @@
# WS2 — Architecture Documentation Updates
**Target file:** `docs/engineering/02-architecture.md`
**Operation:** Surgical replacements and additions to the existing document. Apply in the order listed below.
---
## Change 1 — Replace Component Diagram
**Location:** Section `## 1. Component Diagram`
**Old text (entire Mermaid block and surrounding content — replace from `\`\`\`mermaid` through the closing `\`\`\``):**
```
```mermaid
graph TD
Client["Client (AI Agent / Browser / CI)"]
Client -->|HTTPS| ExpressApp["Express App (AgentIdP)"]
subgraph ExpressApp["Express App — src/app.ts"]
Router["Router (src/routes/)"]
AuthMW["authMiddleware (src/middleware/auth.ts)"]
OpaMW["opaMiddleware (src/middleware/opa.ts)"]
Controller["Controller (src/controllers/)"]
Service["Service (src/services/)"]
Repository["Repository (src/repositories/)"]
Router --> AuthMW --> OpaMW --> Controller --> Service --> Repository
end
Repository -->|parameterized SQL| PG["PostgreSQL 14\n(agents, credentials, audit_events, token_revocations)"]
Service -->|Redis commands| Redis["Redis 7\n(token revocation list, monthly counts, rate-limit counters)"]
Service -->|KV v2 read/write| Vault["HashiCorp Vault\n(opt-in — when VAULT_ADDR is set)"]
ExpressApp -->|evaluate input| OPA["OPA Policy Engine\n(policies/authz.rego + data/scopes.json)"]
ExpressApp -->|expose| Metrics["/metrics (prom-client)"]
Dashboard["Dashboard SPA (React 18 + Vite 5)\ndashboard/dist/ served from /dashboard"]
Client -->|browser| Dashboard
Dashboard -->|REST API calls| ExpressApp
Grafana["Grafana (port 3001)"] -->|scrapes| Metrics
```
```
**New text (replace with the expanded diagram):**
```
```mermaid
graph TD
Client["Client (AI Agent / Browser / CI)"]
Client -->|HTTPS| ExpressApp["Express App (AgentIdP)"]
subgraph ExpressApp["Express App — src/app.ts"]
Router["Router (src/routes/)"]
AuthMW["authMiddleware (src/middleware/auth.ts)"]
TierMW["tierMiddleware (src/middleware/tier.ts)"]
OpaMW["opaMiddleware (src/middleware/opa.ts)"]
Controller["Controller (src/controllers/)"]
Service["Service (src/services/)"]
Repository["Repository (src/repositories/)"]
Router --> AuthMW --> TierMW --> OpaMW --> Controller --> Service --> Repository
end
Repository -->|parameterized SQL| PG["PostgreSQL 14\n(agents, credentials, audit_events,\nanalytics_events, organizations,\nfederation_partners, webhook_subscriptions,\nagent_did_keys, delegation_chains)"]
Service -->|Redis commands| Redis["Redis 7\n(token revocation list, daily tier counters,\nJWKS cache, compliance report cache,\nDID document cache)"]
Service -->|KV v2 read/write| Vault["HashiCorp Vault\n(opt-in — credentials, DID private keys,\nwebhook secrets — when VAULT_ADDR is set)"]
ExpressApp -->|evaluate input| OPA["OPA Policy Engine\n(policies/authz.rego + data/scopes.json)"]
ExpressApp -->|expose| Metrics["/metrics (prom-client)"]
ExpressApp -->|checkout session / webhooks| Stripe["Stripe\n(billing — when STRIPE_SECRET_KEY is set)"]
Dashboard["Dashboard SPA (React 18 + Vite 5)\ndashboard/dist/ served from /dashboard"]
Portal["Developer Portal (Next.js 14)\nportal/ — served separately on port 3002"]
Client -->|browser| Dashboard
Client -->|browser| Portal
Dashboard -->|REST API calls| ExpressApp
Portal -->|REST API calls| ExpressApp
Grafana["Grafana (port 3001)"] -->|scrapes| Metrics
OIDCProvider["OIDC Provider (oidc-provider v9)\nmounted at /oidc — A2A delegation tokens"]
ExpressApp --- OIDCProvider
```
```
---
## Change 2 — Add New Services to Section 2 (HTTP Request Lifecycle)
**Location:** Section `## 2. HTTP Request Lifecycle`
**Find the paragraph that starts with:**
```
7. The service (`src/services/*.ts`) executes all business logic — enforces free-tier limits, resolves domain rules, and calls repositories.
```
**Replace that single numbered item with:**
```
7. The service (`src/services/*.ts`) executes all business logic — enforces tier limits, resolves domain rules, and calls repositories. Phase 36 introduces specialised services: `AnalyticsService` (fire-and-forget event recording), `TierService` (enforces per-tier agent and call limits), `ComplianceService` (AGNTCY compliance reports, cached 5 min in Redis), `FederationService` (cross-IdP JWT verification with cached JWKS), `DIDService` (W3C DID document generation and caching), `WebhookService` (subscription management with Vault-backed HMAC secrets), and `BillingService` (Stripe Checkout and webhook processing). The service has no knowledge of HTTP.
```
---
## Change 3 — Add Tier Enforcement Middleware Description
**Location:** Section `## 2. HTTP Request Lifecycle`
**Find item 5:**
```
5. `opaMiddleware` (`src/middleware/opa.ts`) evaluates the OPA policy
```
**Insert the following new item between item 4 (authMiddleware) and item 5 (opaMiddleware):**
```
5. `tierMiddleware` (`src/middleware/tier.ts`) enforces per-tier daily API call limits. It reads the organisation's current tier from `TierService.fetchTier(orgId)`, checks the daily call counter from Redis key `rate:tier:calls:<orgId>` against `TIER_CONFIG[tier].maxCallsPerDay`, increments the counter on each passing request (fire-and-forget `INCR` with TTL set to next UTC midnight), and throws `TierLimitError` (429) when the limit is reached. This middleware is applied only to API routes, not to `/health`, `/metrics`, or `/dashboard`.
```
Re-number the former item 5 (opaMiddleware) through the end of the list as 6 through 11 (adding one to each subsequent number).
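For reference, the "TTL set to next UTC midnight" calculation mentioned in the new item can be sketched as follows (helper name assumed):
```typescript
// Seconds remaining until the next UTC midnight, used as the counter TTL.
function secondsUntilUtcMidnight(now: Date): number {
  const next = Date.UTC(now.getUTCFullYear(), now.getUTCMonth(), now.getUTCDate() + 1);
  return Math.ceil((next - now.getTime()) / 1000);
}
```
The middleware would then issue something like `INCR` followed by `EXPIRE <key> <seconds>` against `rate:tier:calls:<orgId>`.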
---
## Change 4 — Add New Data Flows Section
**Location:** After the closing of `## 3. OAuth 2.0 Client Credentials Flow` and before `## 4. Multi-Region Deployment Topology`
**Insert the following new section:**
```markdown
---
## 3b. Analytics Event Capture Flow
Every successful token issuance writes a fire-and-forget analytics event:
```mermaid
sequenceDiagram
participant Controller as TokenController
participant OAuth2Svc as OAuth2Service
participant AnalyticsSvc as AnalyticsService
participant PG as PostgreSQL
Controller->>OAuth2Svc: issueToken(clientId, clientSecret, scope, ...)
OAuth2Svc->>OAuth2Svc: signToken() — RS256 JWT
OAuth2Svc-->>Controller: ITokenResponse
Note over OAuth2Svc,AnalyticsSvc: fire-and-forget (void)
OAuth2Svc-)AnalyticsSvc: recordEvent(tenantId, 'token_issued')
AnalyticsSvc-)PG: INSERT INTO analytics_events ... ON CONFLICT DO UPDATE count + 1
```
`recordEvent` uses PostgreSQL `UPSERT` — one row per `(organization_id, date, metric_type)`. If the INSERT conflicts (same date, same org, same metric), the `count` column is incremented atomically. This keeps the table compact (one row per day per metric type per org) and fast to query.
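An in-memory model of this upsert behaviour, for intuition only (the real path is a single SQL statement):
```typescript
// One counter per (org, date, metric) key; a "conflict" increments in place.
const counters = new Map<string, number>();

function upsert(orgId: string, date: string, metricType: string): void {
  const key = `${orgId}|${date}|${metricType}`;
  counters.set(key, (counters.get(key) ?? 0) + 1);
}
```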
---
## 3c. Tier Enforcement Middleware Chain
```mermaid
sequenceDiagram
actor Agent
participant TierMW as tierMiddleware
participant TierSvc as TierService
participant Redis
participant PG as PostgreSQL
Agent->>TierMW: API request (with valid Bearer token)
TierMW->>TierSvc: fetchTier(orgId)
TierSvc->>PG: SELECT tier FROM organizations WHERE organization_id = $1
PG-->>TierSvc: 'pro'
TierSvc-->>TierMW: 'pro'
TierMW->>Redis: GET rate:tier:calls:<orgId>
Redis-->>TierMW: "4999" (current daily count)
Note over TierMW: TIER_CONFIG['pro'].maxCallsPerDay = 50000 — limit not reached
TierMW-)Redis: INCR rate:tier:calls:<orgId> (fire-and-forget, TTL = next UTC midnight)
TierMW->>Agent: next() — request proceeds to opaMiddleware
```
When the counter equals or exceeds the tier limit, `tierMiddleware` throws `TierLimitError` (429) before `opaMiddleware` runs. The daily counter resets at UTC midnight via Redis TTL.
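The limit comparison itself can be sketched as below (illustrative: the free-tier figure is an assumption; only the Pro limit of 50000 appears above):
```typescript
class TierLimitError extends Error {}

// Assumed per-tier daily call limits; only the 'pro' value is documented here.
const MAX_CALLS_PER_DAY: Record<string, number> = { free: 1000, pro: 50000 };

// Throws once the daily counter reaches the tier's limit.
function checkDailyLimit(tier: string, currentCount: number): void {
  const limit = MAX_CALLS_PER_DAY[tier];
  if (limit !== undefined && currentCount >= limit) {
    throw new TierLimitError(`daily call limit ${limit} reached for tier ${tier}`);
  }
}
```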
---
## 3d. A2A Delegation End-to-End Flow
```mermaid
sequenceDiagram
actor Delegator as Delegator Agent
actor Delegatee as Delegatee Agent
participant AgentIdP
participant DelegationSvc as DelegationService
participant OIDCProvider as OIDC Provider
participant PG as PostgreSQL
Delegator->>AgentIdP: POST /api/v1/oauth2/token/delegate<br/>{ delegatee_id, scope }
AgentIdP->>DelegationSvc: createDelegation(delegatorId, delegateeId, scope)
DelegationSvc->>PG: INSERT INTO delegation_chains ...
PG-->>DelegationSvc: chain_id
DelegationSvc->>OIDCProvider: issue delegation JWT (delegator claims + delegatee sub)
OIDCProvider-->>DelegationSvc: signed delegation token
DelegationSvc-->>AgentIdP: IDelegationChain (with token)
AgentIdP-->>Delegator: 201 { token, chain_id }
Note over Delegatee,AgentIdP: Delegatee uses the delegation token
Delegatee->>AgentIdP: POST /api/v1/oauth2/token/verify-delegation<br/>{ token }
AgentIdP->>DelegationSvc: verifyDelegation(token, delegateeId)
DelegationSvc->>PG: SELECT * FROM delegation_chains WHERE chain_id = $1 AND status = 'active'
PG-->>DelegationSvc: chain row (not expired, not revoked)
DelegationSvc->>OIDCProvider: verify token signature
OIDCProvider-->>DelegationSvc: verified claims
DelegationSvc-->>AgentIdP: IDelegationVerifyResult { valid: true, ... }
AgentIdP-->>Delegatee: 200 { valid: true, delegatorId, scope }
```
```
---
## Change 5 — Add New PostgreSQL Tables to Section 2
**Location:** Section `## 2. HTTP Request Lifecycle`, item 8 (Repository layer description).
**Find the text:**
```
8. The repository (`src/repositories/*.ts`) executes parameterized SQL against PostgreSQL via `node-postgres`, or issues Redis commands via the `redis` client. No business logic lives here.
```
**Replace with:**
```
8. The repository (`src/repositories/*.ts`) executes parameterized SQL against PostgreSQL via `node-postgres`, or issues Redis commands via the `redis` client. No business logic lives here. Phase 36 added the following tables: `analytics_events` (daily metric counters), `organizations` (org tier and billing), `federation_partners` (cross-IdP trust registry), `webhook_subscriptions` and `webhook_deliveries` (outbound event delivery), `agent_did_keys` (public EC keys for DID documents), `delegation_chains` (A2A delegation records), `tenant_subscriptions` (Stripe subscription status).
```

# WS3 — Rust SDK Documentation
**Target file:** `docs/engineering/11-sdk-guide.md`
**Operation:** Insert the following complete section into `docs/engineering/11-sdk-guide.md` between `## 5. Java SDK` and the existing `## 6. SDK Contribution Guide — Adding a New Endpoint`, which is renumbered to `## 7` as part of this change.
---
## Instructions to Developer
Insert the following Markdown verbatim into `docs/engineering/11-sdk-guide.md`. Apart from the renumbering below, do not modify any existing content. The new section is `## 6. Rust SDK` (the current section 6 becomes section 7; renumber it as part of this change).
**Renumbering instruction:** Change the existing heading `## 6. SDK Contribution Guide — Adding a New Endpoint` to `## 7. SDK Contribution Guide — Adding a New Endpoint` before inserting. The new Rust SDK section takes the `## 6` slot.
---
## Content to Insert (before the existing Section 6, which becomes Section 7)
Insert the following after the Java SDK section (`## 5. Java SDK`) and before the existing contribution guide (which becomes `## 7`):
```markdown
---
## 6. Rust SDK
The Rust SDK (`sdk-rust/`) is a production-grade, async-first client for the SentryAgent.ai AgentIdP API. It provides full coverage of the 14 API endpoints across agent identity, OAuth 2.0 token management, credential rotation, audit logs, the public marketplace, and agent-to-agent (A2A) delegation.
**Requirements:** Rust 1.75+ (stable), `tokio` runtime.
---
### Installation
Add the crate to your `Cargo.toml`:
```toml
[dependencies]
sentryagent-idp = "1.0"
tokio = { version = "1.35", features = ["full"] }
```
The crate uses `reqwest` with `rustls-tls` (no OpenSSL dependency) and `serde` for JSON serialisation.
---
### Authentication
The Rust SDK uses the OAuth 2.0 Client Credentials grant, managed transparently by `TokenManager`. You never call `TokenManager` directly — it is embedded in `AgentIdPClient` and invoked automatically before every request.
**Token refresh behaviour:**
- The first API call triggers a `POST /oauth2/token` request with `grant_type=client_credentials`.
- The returned token is cached behind an async `tokio::sync::Mutex`.
- Subsequent calls within the token lifetime return the cached token without a network round trip.
- The cache expires 60 seconds before the server-reported `expires_in`, ensuring tokens never expire mid-flight.
- The `Mutex` guarantees only one refresh happens even when many `tokio` tasks call `get_token()` concurrently.
**Environment variable construction:**
```rust
use sentryagent_idp::AgentIdPClient;
// from_env() reads AGENTIDP_API_URL, AGENTIDP_CLIENT_ID, AGENTIDP_CLIENT_SECRET
let client = AgentIdPClient::from_env()?;
```
**Explicit construction:**
```rust
use sentryagent_idp::AgentIdPClient;
let client = AgentIdPClient::new(
"https://api.sentryagent.ai",
"a1b2c3d4-e5f6-7890-abcd-ef1234567890",
"sk_live_...",
);
```
| Environment Variable | Required | Purpose |
|---|---|---|
| `AGENTIDP_API_URL` | Yes | Base URL of the AgentIdP API |
| `AGENTIDP_CLIENT_ID` | Yes | OAuth 2.0 client identifier |
| `AGENTIDP_CLIENT_SECRET` | Yes | OAuth 2.0 client secret |
---
### Complete Working Example
The following example covers the full agent identity lifecycle: register → generate credentials → issue token → retrieve agent → list audit logs → delete agent.
```rust
use sentryagent_idp::{
AgentIdPClient, AgentIdPError,
AuditLogFilters, MarketplaceFilters, RegisterAgentRequest,
};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Build client from environment variables.
// Requires: AGENTIDP_API_URL, AGENTIDP_CLIENT_ID, AGENTIDP_CLIENT_SECRET
let client = AgentIdPClient::from_env()?;
// ── Register a new agent ──────────────────────────────────────────────────
let agent = client.register_agent(RegisterAgentRequest {
name: "my-screener-agent".to_owned(),
description: Some("Screens resumes using ML".to_owned()),
agent_type: "screener".to_owned(),
capabilities: vec!["resume:read".to_owned(), "classify".to_owned()],
metadata: None,
}).await?;
println!("Registered: {} (DID: {})", agent.id, agent.did);
// ── Generate credentials for the agent ───────────────────────────────────
let creds = client.generate_credentials(&agent.id).await?;
println!("Client ID: {}", creds.client_id);
println!("Client Secret: {} (store this — shown once)", creds.client_secret);
// ── Issue a scoped token (TokenManager handles this automatically) ────────
let token_resp = client.issue_token(&agent.id, &["agents:read", "agents:write"]).await?;
println!("Token type: {}, expires in {}s", token_resp.token_type, token_resp.expires_in);
// ── Retrieve the agent ────────────────────────────────────────────────────
let fetched = client.get_agent(&agent.id).await?;
println!("Fetched: {} (public: {})", fetched.name, fetched.is_public);
// ── List agents ───────────────────────────────────────────────────────────
let list = client.list_agents(Some(1), Some(10)).await?;
println!("Total agents: {}", list.total);
// ── Audit logs ────────────────────────────────────────────────────────────
let logs = client.list_audit_logs(AuditLogFilters {
agent_id: Some(agent.id.clone()),
event_type: None,
from: None,
to: None,
page: 1,
per_page: 10,
}).await?;
println!("Audit events: {}", logs.total);
// ── Rotate credentials ────────────────────────────────────────────────────
let new_creds = client.rotate_credentials(&agent.id).await?;
println!("New secret: {}", new_creds.client_secret);
// ── Delete agent ──────────────────────────────────────────────────────────
client.delete_agent(&agent.id).await?;
println!("Agent deleted.");
Ok(())
}
```
Run the bundled quickstart example directly:
```bash
AGENTIDP_API_URL=http://localhost:3000 \
AGENTIDP_CLIENT_ID=your-client-id \
AGENTIDP_CLIENT_SECRET=your-client-secret \
cargo run --example quickstart
```
---
### Client Methods Reference
All methods are `async` and return `Result<T, AgentIdPError>`. The client is cheap to clone — the inner `reqwest::Client` and token cache are shared via `Arc`.
**Agent Registry** (`sdk-rust/src/agents.rs`):
| Method | Signature | Description |
|--------|-----------|-------------|
| `register_agent` | `(req: RegisterAgentRequest) -> Result<Agent>` | `POST /agents` — 201 |
| `get_agent` | `(agent_id: &str) -> Result<Agent>` | `GET /agents/{id}` — 200 |
| `list_agents` | `(page: Option<u32>, per_page: Option<u32>) -> Result<AgentList>` | `GET /agents` — 200 |
| `update_agent` | `(agent_id: &str, req: UpdateAgentRequest) -> Result<Agent>` | `PATCH /agents/{id}` — 200 |
| `delete_agent` | `(agent_id: &str) -> Result<()>` | `DELETE /agents/{id}` — 204 |
**Credential Management** (`sdk-rust/src/credentials.rs`):
| Method | Signature | Description |
|--------|-----------|-------------|
| `generate_credentials` | `(agent_id: &str) -> Result<Credentials>` | `POST /agents/{id}/credentials` — 201. `client_secret` shown once. |
| `rotate_credentials` | `(agent_id: &str) -> Result<Credentials>` | `POST /agents/{id}/credentials/rotate` — 200. New secret shown once. |
| `revoke_credentials` | `(agent_id: &str, cred_id: &str) -> Result<()>` | `DELETE /agents/{id}/credentials/{cred_id}` — 204 |
**Token Operations** (`sdk-rust/src/oauth2.rs`):
| Method | Signature | Description |
|--------|-----------|-------------|
| `issue_token` | `(agent_id: &str, scopes: &[&str]) -> Result<TokenResponse>` | Issues a scoped Bearer JWT. Token is cached by `TokenManager` automatically. |
**Audit Log** (`sdk-rust/src/audit.rs`):
| Method | Signature | Description |
|--------|-----------|-------------|
| `list_audit_logs` | `(filters: AuditLogFilters) -> Result<AuditLogList>` | Paginated audit log query with optional agent_id, event_type, from, to filters. |
**Marketplace** (`sdk-rust/src/marketplace.rs`):
| Method | Signature | Description |
|--------|-----------|-------------|
| `list_public_agents` | `(filters: MarketplaceFilters) -> Result<MarketplaceAgentList>` | Lists publicly discoverable agents with optional `q`, `capability`, `publisher` filters. |
**A2A Delegation** (`sdk-rust/src/delegation.rs`):
| Method | Signature | Description |
|--------|-----------|-------------|
| `delegate` | `(req: DelegateRequest) -> Result<DelegationToken>` | Creates a delegation chain and returns the delegation JWT. |
| `verify_delegation` | `(token: &str) -> Result<DelegationVerification>` | Verifies a delegation token and returns the verified claims. |
---
### Error Types
All SDK operations return `Result<T, AgentIdPError>`. Match on the enum variants for structured error handling:
```rust
use sentryagent_idp::AgentIdPError;
match client.get_agent("unknown-id").await {
Ok(agent) => println!("Found: {}", agent.name),
Err(AgentIdPError::NotFound(msg)) => {
eprintln!("Agent not found: {}", msg);
}
Err(AgentIdPError::AuthError(msg)) => {
eprintln!("Auth failed: {}", msg);
// Token may have been revoked — check credentials
}
Err(AgentIdPError::RateLimited { retry_after_secs }) => {
eprintln!("Rate limited — retry after {}s", retry_after_secs);
tokio::time::sleep(std::time::Duration::from_secs(retry_after_secs)).await;
}
Err(AgentIdPError::ApiError { status, message, code }) => {
eprintln!("API error {}: {} (code: {:?})", status, message, code);
}
Err(AgentIdPError::ConfigError(msg)) => {
// Missing environment variable — fix before running
eprintln!("Config error: {}", msg);
}
Err(AgentIdPError::HttpError(e)) => {
// reqwest transport error — network issue
eprintln!("HTTP transport error: {}", e);
}
Err(AgentIdPError::SerdeError(e)) => {
// JSON parse failure — API response shape mismatch
eprintln!("Serialization error: {}", e);
}
Err(AgentIdPError::DelegationError(msg)) => {
eprintln!("Delegation chain invalid: {}", msg);
}
}
```
| Variant | Trigger | HTTP status |
|---------|---------|-------------|
| `HttpError(reqwest::Error)` | Network-level failure (connection refused, timeout) | N/A |
| `ApiError { status, message, code }` | Non-2xx response not matching a specific variant | Any non-2xx |
| `AuthError(String)` | 401 or 403 from the API | 401, 403 |
| `NotFound(String)` | 404 from the API | 404 |
| `RateLimited { retry_after_secs }` | 429 — parses `Retry-After` header (defaults to 60s) | 429 |
| `ConfigError(String)` | Missing env var in `from_env()` | N/A |
| `SerdeError(serde_json::Error)` | JSON deserialisation failure | N/A |
| `DelegationError(String)` | Invalid delegation chain | N/A |
---
### Adding a New Endpoint to the Rust SDK
When the AgentIdP server adds a new API endpoint, add it to the Rust SDK using this checklist:
**File structure** (`sdk-rust/src/`):
```
sdk-rust/src/
├── lib.rs # Crate root — re-exports and module declarations
├── client.rs # AgentIdPClient struct and new()/from_env() constructors
├── token_manager.rs # TokenManager — async token cache
├── models.rs # All request/response structs (serde Serialize/Deserialize)
├── error.rs # AgentIdPError enum
├── agents.rs # Agent registry methods (impl AgentIdPClient)
├── credentials.rs # Credential management methods
├── oauth2.rs # Token issuance methods
├── audit.rs # Audit log methods
├── marketplace.rs # Marketplace methods
└── delegation.rs # A2A delegation methods
```
**Checklist:**
- [ ] Add request/response structs to `models.rs` with `#[derive(Debug, serde::Serialize, serde::Deserialize)]`
- [ ] Add the method to the appropriate `impl AgentIdPClient` block in the relevant `<domain>.rs` file. If the endpoint belongs to a new domain, create a new file and declare it as `pub mod <domain>;` in `lib.rs`
- [ ] Use `self.get_auth_header().await?` for the `Authorization: Bearer` header
- [ ] Use the shared `parse_response::<T>(resp).await` helper (defined in `agents.rs`) to map HTTP status codes to `AgentIdPError` variants
- [ ] Add a doc comment (`///`) to the method with: the HTTP method + path, the success response type, and `# Errors` listing which `AgentIdPError` variants it can return
- [ ] Re-export new public types from `lib.rs` with `pub use models::{NewRequestType, NewResponseType};`
- [ ] Add a unit test using `mockito::Server` (see `token_manager.rs` tests for the pattern)
- [ ] Run `cargo test` and verify all tests pass
- [ ] Run `cargo doc --no-deps --open` and verify the new method appears with correct documentation
- [ ] Verify `cargo clippy -- -D warnings` exits 0
```

# WS4 — Testing Documentation Updates
**Target file:** `docs/engineering/09-testing.md`
**Operation:** Append four new subsections to the end of `docs/engineering/09-testing.md`. Do not modify any existing content.
---
## Instructions to Developer
Append the following Markdown verbatim to the end of `docs/engineering/09-testing.md`, after the final line of `## 10.7 OWASP Top 10 Security Testing Reference`.
---
## Content to Append
```markdown
---
## 10.8 AGNTCY Conformance Test Suite
**Location:** `tests/agntcy-conformance/conformance.test.ts`
**Purpose:** Verifies that the AgentIdP platform conforms to the AGNTCY agent identity specification. These tests exercise live HTTP requests through the Express application against real PostgreSQL and Redis instances, exactly like integration tests — but they validate AGNTCY-specific protocol guarantees rather than individual endpoint correctness.
**How to run:**
```bash
# Run the conformance suite (separate Jest config)
npm run test:agntcy-conformance
# Equivalent long form
npx jest --config tests/agntcy-conformance/jest.config.cjs
# Run with TEST_DATABASE_URL and TEST_REDIS_URL overrides
TEST_DATABASE_URL=postgresql://sentryagent:sentryagent@localhost:5432/sentryagent_idp_test \
TEST_REDIS_URL=redis://localhost:6379/1 \
npm run test:agntcy-conformance
# Enable A2A delegation conformance tests (gated by env var)
A2A_ENABLED=true npm run test:agntcy-conformance
```
The conformance suite uses its own `jest.config.cjs` (located in `tests/agntcy-conformance/`) so it does not run with `npm test` by default. This is intentional — the suite requires `COMPLIANCE_ENABLED=true` and optionally `A2A_ENABLED=true`, which should not be required for the standard unit/integration test run.
**What each test validates:**
| Conformance Test | What it validates | AGNTCY Domain |
|-----------------|-------------------|---------------|
| **Conformance 1 — Agent registration creates DID:WEB identifier** | `POST /api/v1/agents` returns a `did` field matching `did:web:*` pattern when `DID_WEB_DOMAIN` is set. The `did` field is optional in the response (test is conditional on presence) — but when present, it must conform to the `did:web:` scheme. | Non-Human Identity |
| **Conformance 2 — Token issuance via `client_credentials` grant** | Registers an agent, generates credentials via API, then exercises the full OAuth 2.0 Client Credentials flow. Validates that `POST /api/v1/token` returns a 200 response with `access_token` (string), `token_type: 'Bearer'`, and a JWT with 3 dot-separated parts. | Authentication |
| **Conformance 3 — A2A delegation chain create + verify** | _(Gated by `A2A_ENABLED=true`.)_ Creates a delegation chain between two agents via `POST /api/v1/oauth2/token/delegate`. If a token is returned, verifies it via `POST /api/v1/oauth2/token/verify-delegation`. Accepts 200 or 201 on creation and 200 or 204 on verification. | Agent-to-Agent Trust |
| **Conformance 4 — Compliance report returns valid AGNTCY structure** | Calls `GET /api/v1/compliance/report` and validates all required AGNTCY fields: `generated_at` (valid ISO 8601), `tenant_id` (string), `agntcy_schema_version: '1.0'`, `sections` (array with `name`, `status`, `details` per entry), `overall_status` (one of `pass/fail/warn`). Also verifies the `agent-identity` and `audit-trail` section names are present. A second request verifies the Redis cache (`X-Cache: HIT` header and `from_cache: true` body field). | Audit, Compliance |
**Schema tables created by conformance suite:** The suite creates its own tables using `CREATE TABLE IF NOT EXISTS` before tests run. The tables match the production schema and include: `organizations`, `agents`, `credentials`, `audit_events`, `token_revocations`, `agent_did_keys`, `delegation_chains`. These are cleaned up via `DELETE` in `afterEach` (child-to-parent order respecting FK constraints) and dropped implicitly when the test database is reset.
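The child-to-parent cleanup order can be sketched as below. The table list comes from this section; the exact FK relationships and ordering in the real `afterEach` may differ slightly.

```typescript
// Hedged sketch of the afterEach cleanup: child tables are deleted before
// the parents they reference, so FK constraints are never violated.
const CLEANUP_ORDER = [
  "delegation_chains",  // references agents
  "agent_did_keys",     // references agents
  "token_revocations",
  "audit_events",       // references agents
  "credentials",        // references agents
  "agents",             // references organizations
  "organizations",      // parent of everything above
];

async function cleanup(query: (sql: string) => Promise<void>): Promise<void> {
  for (const table of CLEANUP_ORDER) {
    await query(`DELETE FROM ${table}`);
  }
}
```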
**Environment variables used:**
| Variable | Required | Purpose |
|---|---|---|
| `TEST_DATABASE_URL` | Yes (or default) | PostgreSQL connection string for the test database |
| `TEST_REDIS_URL` | Yes (or default) | Redis connection string (index 1 recommended) |
| `COMPLIANCE_ENABLED` | Yes (`'true'`) | Enables the compliance report endpoint |
| `A2A_ENABLED` | No (default `'true'`) | Set to `'false'` to skip Conformance 3 (A2A delegation) |
| `DID_WEB_DOMAIN` | No | When set, Conformance 1 validates the `did:web:` format |
---
## 10.9 Tier Enforcement Tests
**Location:** `tests/unit/services/TierService.test.ts` and `tests/integration/`
**The TierService has the following test cases that must all pass:**
### Unit tests (`tests/unit/services/TierService.test.ts`)
The unit tests mock PostgreSQL (`Pool`) and Redis (`RedisClientType`) and Stripe. Key scenarios:
| Test | Description |
|------|-------------|
| `getStatus() — returns correct tier and limits` | Mocks `SELECT tier FROM organizations` returning `'pro'`; mocks Redis GET calls for `rate:tier:calls` and `rate:tier:tokens`; verifies `ITierStatus.limits` matches `TIER_CONFIG['pro']`. |
| `getStatus() — falls back to 0 when Redis unavailable` | Redis GET throws; verifies `usage.callsToday = 0` and `usage.tokensToday = 0` with no error thrown. |
| `getStatus() — returns 'free' when org not found` | `SELECT` returns 0 rows; verifies `tier === 'free'`. |
| `initiateUpgrade() — throws ValidationError on downgrade attempt` | `targetTier = 'free'` when current is `'pro'`; verifies `ValidationError` is thrown with `TIER_RANK` comparison failure message. |
| `initiateUpgrade() — calls Stripe with correct metadata` | Verifies `stripe.checkout.sessions.create` is called with `metadata: { orgId, targetTier }` and `mode: 'subscription'`. |
| `applyUpgrade() — executes UPDATE organizations SET tier` | Verifies parameterized SQL is called with `[targetTier, orgId]`. |
| `enforceAgentLimit() — throws TierLimitError when limit reached` | Mock agent count equals `TIER_CONFIG[tier].maxAgents`; verifies `TierLimitError` with `limit` and `current` details. |
| `enforceAgentLimit() — no-op for Enterprise tier` | `TIER_CONFIG['enterprise'].maxAgents = Infinity`; verifies no SQL query for agent count and no error. |
| `fetchTier() — returns 'free' for unknown tier string in DB` | DB returns unrecognised string; verifies `isTierName` guard returns `'free'`. |
### Integration (middleware) tests
When writing integration tests for the tier enforcement middleware (`src/middleware/tier.ts`), the following scenarios must be covered:
| Scenario | Expected behaviour |
|----------|-------------------|
| Request with org on `free` tier, under daily call limit | Request proceeds normally (2xx from downstream handler) |
| Request that would exceed `maxCallsPerDay` for the org's tier | `429 TierLimitError` — body contains `code: 'TIER_LIMIT_EXCEEDED'` |
| Request to `/health` or `/metrics` (unprotected routes) | Tier middleware not applied — always 200 |
| Org not found in `organizations` table | Defaults to `free` tier limits |
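The four scenarios above reduce to a small decision function, sketched here. The route names and the `TIER_LIMIT_EXCEEDED` body shape come from the table; the real middleware in `src/middleware/tier.ts` of course works on Express `req`/`res` objects, not this simplified signature.

```typescript
// Hedged sketch of the per-request decision the integration tests assert.
interface TierDecision {
  status: number;
  body?: { code: string };
}

const UNPROTECTED = new Set(["/health", "/metrics"]);

function decide(path: string, callsToday: number, maxCallsPerDay: number): TierDecision {
  if (UNPROTECTED.has(path)) return { status: 200 }; // middleware not applied
  if (callsToday >= maxCallsPerDay) {
    return { status: 429, body: { code: "TIER_LIMIT_EXCEEDED" } };
  }
  return { status: 200 }; // request proceeds to the downstream handler
}
```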
---
## 10.10 Analytics Service Tests
**Location:** `tests/unit/services/AnalyticsService.test.ts`
The AnalyticsService unit tests mock the PostgreSQL `Pool`. Key scenarios that must be covered:
| Test | Description |
|------|-------------|
| `recordEvent() — executes UPSERT without throwing` | Verifies `pool.query` is called with the `INSERT ... ON CONFLICT DO UPDATE` SQL pattern and the correct `[tenantId, metricType]` parameters. |
| `recordEvent() — catches and swallows pool errors` | Pool `query` throws; verifies `recordEvent` resolves (not rejects) and the error does not propagate. This is the fire-and-forget contract. |
| `getTokenTrend() — clamps days to 90` | Calls with `days = 200`; verifies `pool.query` receives `clampedDays = 90` as the first parameter. |
| `getTokenTrend() — maps rows to ITokenTrendEntry[]` | Mock returns rows with `date: '2026-03-01', count: '42'`; verifies the result is `[{ date: '2026-03-01', count: 42 }]` (count coerced to number). |
| `getAgentActivity() — maps rows to IAgentActivityEntry[]` | Mock returns rows with string-typed `dow`, `hour`, `count`; verifies all are coerced to numbers in the result. |
| `getAgentUsageSummary() — maps rows to IAgentUsageSummaryEntry[]` | Mock returns rows with `token_count: '150'`; verifies `token_count: 150` (number) in the result. |
| `getAgentUsageSummary() — joins with agents table on organization_id` | Verifies the SQL query joins `agents` with `LEFT JOIN analytics_events` and filters `a.organization_id = $1`. |
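The coercion the mapping tests verify can be sketched as follows: `node-postgres` returns `COUNT(*)` and other bigint columns as strings, so the service converts them before returning. `ITokenTrendEntry` matches the interface name used in the test table; the clamp mirrors the `getTokenTrend()` test.

```typescript
// Sketch of the number coercion the tests above assert.
interface ITokenTrendEntry {
  date: string;
  count: number;
}

function mapTokenTrendRows(rows: Array<{ date: string; count: string }>): ITokenTrendEntry[] {
  // pg returns count columns as strings; coerce to number for callers.
  return rows.map((r) => ({ date: r.date, count: Number(r.count) }));
}

// Days clamp described in the getTokenTrend() test: never query beyond 90 days.
function clampDays(days: number): number {
  return Math.min(days, 90);
}
```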
**Coverage gate:** `AnalyticsService` must maintain >80% statement, branch, function, and line coverage. Run:
```bash
npm run test:unit -- --coverage --testPathPattern=AnalyticsService
```
---
## 10.11 Running the Complete Phase 6 Test Matrix
All of the following must pass before any Phase 6 feature is considered complete:
```bash
# 1. Unit tests (all services including Phase 36)
npm run test:unit -- --coverage
# Must exit 0 with all 4 coverage metrics ≥ 80%
# 2. Integration tests (requires PostgreSQL + Redis running)
npm run test:integration
# 3. AGNTCY conformance suite
COMPLIANCE_ENABLED=true \
A2A_ENABLED=true \
npm run test:agntcy-conformance
# 4. Dependency security audit
npm audit --audit-level=high
# Must exit 0 — no high or critical vulnerabilities
# 5. TypeScript compilation
npx tsc --noEmit
# Must exit 0 — zero type errors
```
**Current test file inventory** (as of Phase 6 completion):
Unit test files in `tests/unit/services/`:
| File | Service tested |
|------|---------------|
| `AgentService.test.ts` | `AgentService` |
| `AnalyticsService.test.ts` | `AnalyticsService` |
| `AuditService.test.ts` | `AuditService` |
| `AuditVerificationService.test.ts` | `AuditVerificationService` |
| `BillingService.test.ts` | `BillingService` |
| `ComplianceService.test.ts` | `ComplianceService` |
| `CredentialService.test.ts` | `CredentialService` |
| `DIDService.test.ts` | `DIDService` |
| `DelegationService.test.ts` | `DelegationService` |
| `EncryptionService.test.ts` | `EncryptionService` |
| `FederationService.test.ts` | `FederationService` |
| `IDTokenService.test.ts` | `IDTokenService` |
| `OAuth2Service.test.ts` | `OAuth2Service` |
| `OIDCKeyService.test.ts` | `OIDCKeyService` |
| `OrgService.test.ts` | `OrgService` |
| `ScaffoldService.test.ts` | `ScaffoldService` |
| `ScaffoldService.errors.test.ts` | `ScaffoldService` error cases |
| `TierService.test.ts` | `TierService` |
| `WebhookService.test.ts` | `WebhookService` |
```

# WS5 — Remaining Documentation Updates
**Targets:** 5 separate files with surgical edits.
---
## File 1: `docs/engineering/01-overview.md`
**Operation:** Replace the Phase Roadmap table (Section 4) to reflect Phase 36 completion status and add Phase 6 capabilities to the Product Features table.
---
### Change 1a — Update Phase Roadmap Table
**Find (Section 4, the Phase 3 row):**
```
| Phase 3 — Enterprise | PLANNED | AGNTCY federation (cross-IdP agent identity), W3C Decentralised Identifiers (DIDs), agent marketplace, advanced compliance reporting, SOC 2 Type II certification, enterprise tier (custom retention, SLAs, advanced RBAC) |
```
**Replace with (4 rows — Phase 3 is marked COMPLETE and Phases 4–6 have been added):**
```
| Phase 3 — Enterprise | COMPLETE | AGNTCY federation (cross-IdP agent identity), W3C Decentralised Identifiers (DIDs), agent marketplace, OIDC provider (A2A delegation), Rust SDK, developer portal (Next.js 14) |
| Phase 4 — Compliance & Security | COMPLETE | AGNTCY compliance reports (agent-identity + audit-trail sections), audit hash chain verification, SOC 2 CC6.1 AES-256-CBC column encryption (`EncryptionService`), DID document caching, federation partner JWKS caching |
| Phase 5 — Scale & Ecosystem | COMPLETE | Multi-tier subscription model (free/pro/enterprise), Stripe billing integration (`BillingService`, `TierService`), tier enforcement middleware (daily call and token limits), webhook subscriptions + delivery history (`WebhookService`), analytics service (daily event aggregation + trend queries) |
| Phase 6 — Market Expansion | COMPLETE | AGNTCY conformance test suite (4 conformance scenarios), API tiers enforced end-to-end, analytics dashboard in developer portal, full Phase 6 engineering documentation update |
```
---
### Change 1b — Add Phase 36 Capabilities to Product Features Table
**Find (Section 3, the last row of the features table):**
```
| Health Check | `GET /health` | Checks PostgreSQL and Redis connectivity; unauthenticated; used by load balancers |
```
**Insert the following rows after that line (before the closing of the table):**
```
| W3C Decentralised Identifiers | `GET /api/v1/agents/:id/did`, `GET /api/v1/.well-known/did.json` | DID Core 1.0 documents; `did:web` method; EC P-256 keys; AGNTCY extension fields |
| AGNTCY Agent Cards | `GET /api/v1/agents/:id/card` | Machine-readable agent identity summary; AGNTCY schema v1.0 |
| AGNTCY Compliance Reports | `GET /api/v1/compliance/report`, `GET /api/v1/compliance/agent-cards` | Compliance sections: agent-identity + audit-trail; cached 5 min; AGNTCY schema v1.0 |
| Federation (Cross-IdP) | `POST /api/v1/federation/partners`, `GET /api/v1/federation/partners`, `POST /api/v1/federation/verify` | Register partner IdPs; verify cross-IdP JWTs using cached partner JWKS |
| A2A Delegation | `POST /api/v1/oauth2/token/delegate`, `POST /api/v1/oauth2/token/verify-delegation` | Agent-to-agent delegation tokens; OIDC provider (oidc-provider v9) mounted at `/oidc` |
| Webhook Subscriptions | `POST /api/v1/webhooks`, `GET /api/v1/webhooks`, `GET /api/v1/webhooks/:id/deliveries` | Outbound event delivery with HMAC signing; Vault-backed secrets; delivery history |
| Tier Management | `GET /api/v1/tiers/status`, `POST /api/v1/tiers/upgrade` | Free / Pro / Enterprise tiers; daily call and token limits; Stripe Checkout upgrade flow |
| Billing | `POST /api/v1/billing/checkout`, `POST /api/v1/billing/webhook`, `GET /api/v1/billing/status` | Stripe subscription management; webhook event processing |
| Analytics | Internal (via `AnalyticsService`) | Daily aggregated event counts per org; token trend queries (up to 90 days); agent activity heatmap; usage summary |
| Developer Portal | `/portal` (Next.js 14, separate process) | Get-started wizard, SDK explorer, API reference, analytics dashboard, pricing page |
```
---
### Change 1c — Update Free Tier Limits Table
**Find (Section 6, entire table):**
```
| Limit | Value |
|-------|-------|
| Max agents | 100 |
| Max credentials per agent | No hard cap enforced in code (5 is the documented recommendation) |
| Max tokens in flight | 10,000 per agent per calendar month |
| Token TTL | 3,600 seconds (1 hour) |
| Audit log retention | 90 days |
| API rate limit | 100 requests per minute per IP address |
```
**Replace with:**
```
| Limit | Free Tier | Pro Tier | Enterprise Tier |
|-------|-----------|----------|-----------------|
| Max agents | 100 | 1,000 | Unlimited |
| Max API calls per day | Configured in `TIER_CONFIG` | Configured in `TIER_CONFIG` | Unlimited |
| Max tokens per day | Configured in `TIER_CONFIG` | Configured in `TIER_CONFIG` | Unlimited |
| Token TTL | 3,600 seconds (1 hour) | 3,600 seconds (1 hour) | 3,600 seconds (1 hour) |
| Audit log retention | 90 days | 1 year | Custom |
| API rate limit (per IP) | 100 req/min | 100 req/min | 100 req/min |
| Webhook subscriptions | 0 | 10 | Unlimited |
| Analytics retention | 90 days | 1 year | Custom |
Tier limits are configured in `src/config/tiers.ts` (`TIER_CONFIG`). Enforcement is handled by `TierService.enforceAgentLimit()` (agent cap) and `src/middleware/tier.ts` (daily call/token caps). Tier upgrades are initiated via `POST /api/v1/tiers/upgrade` and confirmed via the Stripe webhook.
```
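The `TIER_CONFIG` shape and the `isTierName` fallback referenced above can be sketched as below. The `maxAgents` values and the pro tier's 50,000 daily call limit come from this documentation; the remaining numeric limits are placeholders, and the real values live in `src/config/tiers.ts`.

```typescript
// Hedged sketch of TIER_CONFIG and the isTierName guard. Numeric limits other
// than maxAgents and pro's maxCallsPerDay are illustrative placeholders.
type TierName = "free" | "pro" | "enterprise";

interface TierLimits {
  maxAgents: number;
  maxCallsPerDay: number;
  maxTokensPerDay: number;
}

const TIER_CONFIG: Record<TierName, TierLimits> = {
  free:       { maxAgents: 100,      maxCallsPerDay: 1_000,    maxTokensPerDay: 10_000 },
  pro:        { maxAgents: 1_000,    maxCallsPerDay: 50_000,   maxTokensPerDay: 500_000 },
  enterprise: { maxAgents: Infinity, maxCallsPerDay: Infinity, maxTokensPerDay: Infinity },
};

// Unknown tier strings coming back from the DB fall back to 'free'.
function isTierName(value: string): value is TierName {
  return value === "free" || value === "pro" || value === "enterprise";
}

function resolveTier(dbValue: string): TierName {
  return isTierName(dbValue) ? dbValue : "free";
}
```

The `Infinity` limits are what make `enforceAgentLimit()` a no-op for the Enterprise tier: any finite agent count compares as below the cap.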
---
## File 2: `docs/engineering/03-tech-stack.md`
**Operation:** Append new ADR entries after the existing `### ADR-10: Terraform` section.
**Find (last line of the file):**
```
**Consequences**: All infrastructure changes must go through Terraform. No manual edits
via the AWS console or GCP console are permitted — they will be overwritten on the next
`terraform apply`. Terraform state is stored in a remote backend and must not be edited
manually.
```
**Append the following after that line:**
```markdown
---
### ADR-11: Stripe
**Status**: Adopted
**Component**: Billing — subscription management and payment processing
**Decision**: Use Stripe as the payment processing and subscription management platform. The `stripe` npm package (v21+) handles Checkout Session creation, webhook event verification, and subscription lifecycle events.
**Rationale**: Stripe's hosted Checkout flow eliminates the need to handle PCI-DSS scope for card data. The `stripe.webhooks.constructEvent()` method uses HMAC-SHA256 to verify incoming webhook payloads, preventing replay attacks. The `checkout.session.completed` event carries `metadata: { orgId, targetTier }`, allowing `BillingService` to delegate tier upgrades to `TierService.applyUpgrade()` without coupling billing logic to tier logic.
**Alternatives considered**:
- Paddle — rejected because its global merchant-of-record model introduced complexities with the open-source free tier.
- Braintree — rejected because Stripe's webhook reliability and developer experience are superior.
**Consequences**: Stripe requires `STRIPE_SECRET_KEY` (for API calls) and `STRIPE_WEBHOOK_SECRET` (`whsec_...`, for webhook verification). Per-tier Stripe price IDs are configured via `STRIPE_PRICE_ID_PRO` and `STRIPE_PRICE_ID_ENTERPRISE`. All billing webhook handlers must pass the raw `Buffer` body (not parsed JSON) to `stripe.webhooks.constructEvent()` — use `express.raw()` middleware on the webhook route.
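The scheme `stripe.webhooks.constructEvent()` verifies can be sketched with Node's `crypto` module. This assumes the documented `Stripe-Signature` header format (`t=<unix ts>,v1=<hex hmac>`, HMAC-SHA256 over `<t>.<raw body>` keyed with the `whsec_...` secret) and omits the timestamp-tolerance check; it is illustrative only — production code must call `constructEvent()`.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Illustrative sketch of Stripe's v1 webhook signature verification.
// Omits the replay-window (timestamp tolerance) check constructEvent performs.
function verifyStripeSignature(rawBody: string, header: string, secret: string): boolean {
  // Header format: "t=<unix timestamp>,v1=<hex hmac>"
  const parts = Object.fromEntries(
    header.split(",").map((kv) => kv.split("=") as [string, string])
  );
  const expected = createHmac("sha256", secret)
    .update(`${parts.t}.${rawBody}`)
    .digest("hex");
  const a = Buffer.from(expected, "hex");
  const b = Buffer.from(parts.v1 ?? "", "hex");
  // Constant-time comparison; lengths must match before timingSafeEqual.
  return a.length === b.length && timingSafeEqual(a, b);
}
```

This is also why the raw `Buffer` body matters: any re-serialisation of parsed JSON changes the bytes, so the HMAC over `<t>.<raw body>` no longer matches and verification fails.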
---
### ADR-12: oidc-provider (A2A Delegation)
**Status**: Adopted
**Component**: A2A delegation — OIDC provider for agent-to-agent trust tokens
**Decision**: Use the `oidc-provider` npm package (v9.7.x) as the OIDC provider for issuing A2A delegation tokens. The provider is mounted as a sub-application at `/oidc` within the Express app.
**Rationale**: `oidc-provider` is a certified OpenID Connect implementation that handles the full OIDC protocol, including JWKS serving, token endpoint, and discovery document. Rather than implementing a custom delegation token format, using a standards-compliant OIDC provider means delegation tokens can be verified by any OIDC-aware party using the published JWKS at `/oidc/jwks`.
**Alternatives considered**:
- Custom JWT signing — rejected because hand-rolled token formats cannot benefit from OIDC tooling and interoperability.
**Consequences**: `A2A_ENABLED` env var gates the OIDC provider — when set to `'false'`, delegation endpoints return 404. The `OIDC_ISSUER` env var must be set to the full base URL of the OIDC provider (e.g. `https://api.sentryagent.ai`).
---
### ADR-13: Next.js 14 (Developer Portal)
**Status**: Adopted
**Component**: Developer Portal (`portal/`) — public-facing documentation and onboarding
**Decision**: Use Next.js 14 (App Router) with Tailwind CSS for the developer portal. The portal is a separate process served on its own port (independent of the Express API server).
**Rationale**: The developer portal has different performance and SEO requirements than the internal operator dashboard (`dashboard/`). Next.js 14's App Router supports React Server Components, which allows the marketing and documentation pages to be statically generated while the analytics dashboard and API Explorer are client-rendered. Tailwind CSS enables rapid UI development consistent with the design system.
**Alternatives considered**:
- Extending the Vite dashboard — rejected because the developer portal requires server-side rendering for SEO on marketing pages, which Vite does not provide.
- Docusaurus — rejected because the portal includes interactive components (Swagger Explorer, analytics charts) that are not well-suited to a documentation-only tool.
**Consequences**: The portal (`portal/`) has its own `package.json`, `tsconfig.json`, `tailwind.config.ts`, and `next.config.js`. It is built and run independently: `cd portal && npm install && npm run dev`. The portal calls the AgentIdP REST API using the same `@sentryagent/idp-sdk` as the dashboard.
---
### ADR-14: bull (Job Queue) + kafkajs (Event Streaming)
**Status**: Adopted (opt-in)
**Component**: Async job processing and event streaming
**Decision**: Use `bull` (Redis-backed job queue) for async webhook delivery retries and `kafkajs` for event streaming to external consumers. Kafka is opt-in: the system operates correctly without it, while the Bull queue reuses the existing Redis instance.
**Rationale**: Webhook delivery requires retry logic with exponential backoff and dead-letter handling. `bull` provides this out of the box using the existing Redis dependency. `kafkajs` enables high-throughput event streaming for analytics and audit events to external data pipelines without blocking the primary request path.
**Alternatives considered**:
- BullMQ — considered as a more modern alternative to `bull` but rejected to avoid adding a new package family during Phase 6. Migration is a future backlog item.
**Consequences**: Kafka is entirely optional. When `KAFKA_BROKERS` is not set, `kafkajs` is not initialised and no events are published. The `bull` queue for webhook delivery requires only the existing Redis instance.
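A typical exponential retry schedule for such a queue can be sketched as follows. The base delay, cap, and exact formula are illustrative defaults, not Bull's internals:

```typescript
// Illustrative exponential backoff for webhook delivery retries:
// the delay doubles per attempt and is capped so late retries stay bounded.
export function retryDelayMs(attempt: number, baseMs = 1_000, capMs = 60_000): number {
  if (attempt < 1) throw new RangeError("attempt numbering starts at 1");
  return Math.min(baseMs * 2 ** (attempt - 1), capMs);
}
```

After the configured maximum attempts, the job moves to the dead-letter state for operator inspection rather than retrying forever.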
---
### ADR-15: did-resolver + web-did-resolver (W3C DIDs)
**Status**: Adopted
**Component**: W3C DID Core 1.0 document resolution
**Decision**: Use `did-resolver` (v4.1.x) as the DID resolution framework and `web-did-resolver` (v2.0.x) for the `did:web` method implementation.
**Rationale**: `did-resolver` provides a pluggable resolver interface used by both the server (for internal resolution) and by third parties who want to verify AgentIdP-issued DIDs. The `did:web` method maps DID identifiers to HTTPS URLs hosting the DID document JSON, requiring no blockchain. `DIDService` generates documents that conform to the W3C DID Core 1.0 specification and include AGNTCY-specific extension fields.
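For reference, the identifier-to-URL mapping that `web-did-resolver` performs follows the W3C did:web method rules and can be sketched as below. `didWebToUrl` is a hypothetical name for illustration; the resolver package should be used in practice:

```typescript
// Sketch of the did:web -> HTTPS URL mapping per the W3C did:web method:
//   did:web:example.com              -> https://example.com/.well-known/did.json
//   did:web:example.com:agents:a1b2  -> https://example.com/agents/a1b2/did.json
// Colons in the method-specific id separate path segments; a port is
// percent-encoded (did:web:example.com%3A8443), hence decodeURIComponent.
export function didWebToUrl(did: string): string {
  const prefix = "did:web:";
  if (!did.startsWith(prefix)) throw new Error("not a did:web identifier");
  const [domain, ...path] = did.slice(prefix.length).split(":").map(decodeURIComponent);
  return path.length === 0
    ? `https://${domain}/.well-known/did.json`
    : `https://${domain}/${path.join("/")}/did.json`;
}
```

Because the document is just JSON served over HTTPS at a deterministic path, any third party can verify an AgentIdP-issued DID with a plain HTTP GET.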
**Consequences**: `DID_WEB_DOMAIN` env var is required for DID generation. DID documents are cached in Redis (`did:doc:<agentId>`, TTL from `DID_DOCUMENT_CACHE_TTL_SECONDS`, default 300s). Private keys are stored in HashiCorp Vault KV v2 when Vault is configured; in dev mode, a `dev:no-vault` marker is stored and keys are ephemeral.
```
---
## File 3: `docs/engineering/04-codebase-structure.md`
**Operation:** Two surgical edits — update the directory tree and update the `src/` subdirectory table.
---
### Change 3a — Update the Annotated Directory Tree
**Find (inside the code block in Section 1, after the `sdk-java/` line):**
```
├── policies/ # OPA policy files
```
**Replace the entire block from `├── policies/` down through `└── jest.config.ts # Jest configuration — ts-jest, test timeouts, coverage thresholds` with the following updated version:**
```
├── sdk-rust/ # Rust SDK (sentryagent-idp crate) — async, tokio, reqwest, typed errors
├── policies/ # OPA policy files
│ ├── authz.rego # Rego policy — normalise_path + scope-intersection allow rule
│ └── data/scopes.json # Endpoint permission map — used by Rego and TypeScript fallback
├── portal/ # Developer Portal — Next.js 14 App Router, Tailwind CSS
│ ├── app/ # Next.js App Router pages (get-started, pricing, sdks, analytics, settings, login)
│ ├── components/ # Shared UI components (Nav.tsx, SwaggerExplorer.tsx, GetStartedWizard.tsx)
│ ├── hooks/ # React hooks (useAuth.ts)
│ └── types/ # TypeScript type definitions for portal-only types
├── terraform/ # Terraform infrastructure as code
│ ├── modules/ # Reusable modules: agentidp, lb, rds, redis
│ └── environments/ # Environment configs: aws/ (ECS+RDS+ElastiCache), gcp/ (Cloud Run+SQL+Memorystore)
├── monitoring/ # Prometheus and Grafana configuration
│ ├── prometheus/ # prometheus.yml scrape configuration
│ └── grafana/ # Grafana provisioning YAML and dashboard JSON files
├── docs/ # All project documentation
│ ├── engineering/ # Internal engineering knowledge base (this directory)
│ ├── developers/ # End-user API reference and developer guides
│ ├── devops/ # Operator runbooks and environment variable reference
│ ├── agntcy/ # AGNTCY alignment documentation
│ └── openapi/ # OpenAPI 3.0 specification files
├── openspec/ # OpenSpec change management — proposals, designs, specs, tasks, archives
├── tests/ # Jest test suite — mirrors src/ structure
│ ├── unit/ # Unit tests (mocked dependencies) — mirrors src/
│ ├── integration/ # Integration tests (real DB + Redis)
│ ├── agntcy-conformance/ # AGNTCY conformance test suite (separate Jest config)
│ └── load/ # k6 load test scripts
├── Dockerfile # Multi-stage production build (build + runtime stages)
├── docker-compose.yml # Local development: PostgreSQL 14 (port 5432) + Redis 7 (port 6379)
├── docker-compose.monitoring.yml # Monitoring overlay: Prometheus (port 9090) + Grafana (port 3001)
├── package.json # Node.js dependencies and npm scripts
├── tsconfig.json # TypeScript strict configuration — compiled to dist/
└── jest.config.ts # Jest configuration — ts-jest, test timeouts, coverage thresholds
```
---
### Change 3b — Add New src/ Subdirectories to Section 2
**Find (Section 2 table, the last row):**
```
| `src/cache/` | Redis client factory — creates and caches a single `redis` client instance | Client is a singleton created once in `src/app.ts` and passed to repositories |
```
**Insert these rows after that line:**
```
| `src/config/` | Configuration constants — `tiers.ts` exports `TIER_CONFIG`, `TIER_RANK`, `TierName`, and `isTierName()` type guard | Imported by `TierService` and `tierMiddleware`; never imports from services |
| `src/middleware/tier.ts` | Tier enforcement middleware — reads org tier from `TierService`, checks daily call counter in Redis, throws `TierLimitError` (429) when limit is exceeded, increments counter on pass | Applied only to API routes; skips `/health`, `/metrics`, and static file routes |
```
---
### Change 3c — Add New Entries to Section 3 (Where to Add New Code)
**Find (Section 3 table, after the `A new Prometheus metric` row):**
```
| A new TypeScript type used in 2+ files | `src/types/index.ts` | A new `AgentGroupMembership` interface |
```
**Insert these rows after that line:**
```
| A new tier-gated feature | `src/config/tiers.ts` (add limit field) + `src/middleware/tier.ts` (add check) + service (enforce) | Adding a `maxWebhooksPerOrg` tier limit |
| A webhook event handler | `src/services/WebhookService.ts` (add event type to `WebhookEventType`) + the producer that calls `void webhookService.dispatch(orgId, eventType, payload)` | Emitting `agent.decommissioned` events to subscriber URLs |
| A new analytics metric type | `src/services/AnalyticsService.ts` (call `recordEvent(tenantId, 'new_metric')` in the relevant service using `void`) | Recording `credential_rotated` events for analytics |
| A new DID endpoint | `src/controllers/DIDController.ts` + `src/routes/did.ts` + `src/services/DIDService.ts` (if new method needed) + `policies/data/scopes.json` | Adding `GET /api/v1/agents/:id/did/rotate-key` |
```
---
## File 4: `docs/engineering/README.md`
**Operation:** Replace the reading order table and quick reference table to reflect all Phase 6 additions.
---
### Change 4a — Update Reading Order Table
**Find (Section "Reading Order (New Engineers Start Here)", the last row):**
```
| 11 | [SDK Integration Guide](11-sdk-guide.md) | All 4 SDKs — installation, examples, contribution guide | 20 min |
```
**Replace with (adds the Rust SDK to the description and updates the estimated time):**
```
| 11 | [SDK Integration Guide](11-sdk-guide.md) | All 5 SDKs (Node.js, Python, Go, Java, Rust) — installation, examples, contribution guide | 25 min |
```
**Find (the line after the table):**
```
**Total estimated reading time for new engineers: ~3.5 hours**
```
**Replace with:**
```
**Total estimated reading time for new engineers: ~4 hours**
```
---
### Change 4b — Update "Service Deep Dives" Entry
**Find:**
```
| 5 | [Service Deep Dives](05-services.md) | All 8 services/components — purpose, interface, schema, error types | 30 min |
```
**Replace with:**
```
| 5 | [Service Deep Dives](05-services.md) | All 17 services/components (incl. Phase 3–6: AnalyticsService, TierService, ComplianceService, FederationService, DIDService, WebhookService, BillingService, DelegationService, OIDCService) — purpose, interface, schema, error types | 45 min |
```
---
### Change 4c — Update Quick Reference Table
**Find (in the Quick Reference section):**
```
| Integrate with the SDK | [11-sdk-guide.md](11-sdk-guide.md) |
```
**Replace with:**
```
| Integrate with the SDK (Node.js, Python, Go, Java, Rust) | [11-sdk-guide.md](11-sdk-guide.md) |
```
**Find (after the "Integrate with the SDK" row):**
```
| Understand why a technology was chosen | [03-tech-stack.md](03-tech-stack.md) |
```
**Insert after that row:**
```
| Understand tier limits and billing | [01-overview.md](01-overview.md) (Section 6) + [03-tech-stack.md](03-tech-stack.md) (ADR-11) |
| Understand AGNTCY compliance reports | [05-services.md](05-services.md) (ComplianceService) |
| Understand the A2A delegation flow | [06-walkthroughs.md](06-walkthroughs.md) (Walkthrough 4) |
| Run the AGNTCY conformance suite | [09-testing.md](09-testing.md) (Section 10.8) |
| Add a new Rust SDK endpoint | [11-sdk-guide.md](11-sdk-guide.md) (Section 6 contribution guide) |
```
---
## File 5: `docs/engineering/06-walkthroughs.md`
**Operation:** Append three new walkthrough sections at the end of the file.
**Find (the last line of the file):**
```
Returns `ICredentialWithSecret` — the updated credential including the new
`clientSecret`. This is the only time the new secret is ever returned. The caller
must store it securely.
```
**Append the following after that final JSON block:**
```markdown
---
## Walkthrough 4 — A2A Delegation End-to-End
**Request:** `POST /api/v1/oauth2/token/delegate` — one AI agent delegating a scoped capability to another
This walkthrough traces how agent A (an orchestrator) issues a delegation token that grants agent B (a sub-agent) the right to act on its behalf with a restricted scope.
---
### Step 1 — Route dispatch
**File:** `src/routes/delegation.ts`
```typescript
router.post(
'/token/delegate',
asyncHandler(authMiddleware),
opaMiddleware,
asyncHandler(delegationController.createDelegation.bind(delegationController))
);
```
Both `authMiddleware` and `opaMiddleware` run. The OPA policy requires scope `agents:write` for delegation creation.
---
### Step 2 — Controller: extract delegator and validate
**File:** `src/controllers/DelegationController.ts`
```typescript
const delegatorId = req.user.sub; // From the Bearer token's sub claim
const { delegatee_id, scope, expires_at } = req.body;
```
The controller validates that `delegatee_id` is a non-empty UUID, `scope` is a non-empty string, and `expires_at` (if provided) is a valid ISO 8601 datetime in the future. It passes these to `DelegationService.createDelegation()`.
---
### Step 3 — Service: verify both agents exist
**File:** `src/services/DelegationService.ts`
```typescript
const delegator = await this.agentRepository.findById(delegatorId);
if (!delegator || delegator.status !== 'active') { throw new AgentNotFoundError(delegatorId); }
const delegatee = await this.agentRepository.findById(delegateeId);
if (!delegatee || delegatee.status !== 'active') { throw new AgentNotFoundError(delegateeId); }
```
Both agents must exist and be in `active` status. A suspended or decommissioned agent cannot participate in delegation.
---
### Step 4 — Service: insert delegation chain record
**File:** `src/services/DelegationService.ts`
```typescript
await this.pool.query(
`INSERT INTO delegation_chains (chain_id, delegator_id, delegatee_id, scope, status, expires_at)
VALUES ($1, $2, $3, $4, 'active', $5)`,
[chainId, delegatorId, delegateeId, scope, expiresAt]
);
```
The `chain_id` is a UUID generated by the service. The `delegation_chains` table provides the authoritative source of truth for which delegations are active, independent of any token.
---
### Step 5 — Response
```json
{
"chain_id": "f1e2d3c4-...",
"token": "eyJhbGciOiJSUzI1NiJ9...",
"delegator_id": "a1b2c3d4-...",
"delegatee_id": "b2c3d4e5-...",
"scope": "agents:read",
"status": "active",
"expires_at": "2026-04-05T00:00:00Z"
}
```
The `token` field is the signed delegation JWT. The delegatee presents this token to `POST /api/v1/oauth2/token/verify-delegation` to prove it has authority to act on the delegator's behalf.
**Why store both the DB record and the JWT?** The DB record allows revocation — when the delegator calls `DELETE /api/v1/delegation-chains/:chainId`, the record is soft-deleted and all subsequent `verify-delegation` calls will fail even if the JWT itself has not yet expired.
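The two-sided check can be sketched with an in-memory stand-in for the `delegation_chains` table. The helper names are hypothetical; the real service queries PostgreSQL and also verifies the JWT signature before consulting the chain record:

```typescript
// In-memory stand-in for the delegation_chains table, illustrating why
// verify-delegation rejects a revoked chain even when the JWT is unexpired.
type Chain = { chainId: string; status: "active" | "revoked"; expiresAt: Date };

const chains = new Map<string, Chain>();

export function register(chain: Chain): void {
  chains.set(chain.chainId, chain);
}

// Soft-delete: the row stays, but its status flips to "revoked".
export function revoke(chainId: string): void {
  const chain = chains.get(chainId);
  if (chain) chain.status = "revoked";
}

export function verifyDelegation(chainId: string, now: Date = new Date()): boolean {
  const chain = chains.get(chainId);
  if (!chain) return false;                    // unknown chain
  if (chain.status !== "active") return false; // revoked by the delegator
  return chain.expiresAt > now;                // token lifetime still applies
}
```

A delegation JWT that validates cryptographically is therefore still rejected the moment its chain record is revoked.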
---
## Walkthrough 5 — Tier Enforcement Request Lifecycle
**Request:** Any authenticated API request when the organisation's daily call limit is reached
This walkthrough traces how `tierMiddleware` intercepts a request before it reaches the OPA middleware, preventing quota-exceeded traffic from consuming service resources.
---
### Step 1 — Auth middleware passes
Same as Walkthrough 2, Step 3. The Bearer JWT is verified and `req.user` is populated with `sub` (agentId) and `organization_id`.
---
### Step 2 — Tier middleware: fetch org tier
**File:** `src/middleware/tier.ts`
```typescript
const orgId = req.user.organization_id;
const tier = await tierService.fetchTier(orgId);
const config = TIER_CONFIG[tier];
```
`fetchTier()` issues `SELECT tier FROM organizations WHERE organization_id = $1`. Returns `'free'` if no row is found (safe default).
---
### Step 3 — Tier middleware: read daily counter
**File:** `src/middleware/tier.ts`
```typescript
const callsKey = `rate:tier:calls:${orgId}`;
const callsToday = await redis.get(callsKey);
const count = callsToday !== null ? parseInt(callsToday, 10) : 0;
if (count >= config.maxCallsPerDay) {
throw new TierLimitError('calls', config.maxCallsPerDay, { orgId, tier, current: count });
}
```
The Redis key `rate:tier:calls:<orgId>` is read. If null (first call of the day), count is 0. When count equals or exceeds the tier limit, `TierLimitError` (HTTP 429) is thrown immediately — no further middleware runs.
---
### Step 4 — Tier middleware: increment counter (fire-and-forget)
**File:** `src/middleware/tier.ts`
```typescript
// EXPIREAT targets the same absolute midnight on every call today, so re-setting it is idempotent
void redis.multi()
.incr(callsKey)
.expireAt(callsKey, nextUtcMidnightUnix())
.exec();
next();
```
The counter is incremented atomically using a Redis MULTI block. The `EXPIREAT` command sets the key to auto-delete at the next UTC midnight, resetting the daily counter without any scheduled job. The increment is fire-and-forget — the request proceeds immediately to `opaMiddleware`.
**Why expire at UTC midnight rather than a rolling 24-hour window?** Tier limits are documented as "per day", which users interpret as resetting at midnight. A rolling window would allow a user to consume their full daily quota twice within a 48-hour period straddling midnight, which is counterintuitive. UTC midnight is predictable and easy to reason about.
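The `nextUtcMidnightUnix()` helper named in the snippet can be sketched as:

```typescript
// Unix timestamp (seconds) of the next UTC midnight, used as the EXPIREAT
// target so the daily counter deletes itself at the day boundary.
export function nextUtcMidnightUnix(now: Date = new Date()): number {
  // Date.UTC normalises day overflow, so month and year ends roll correctly.
  return Date.UTC(now.getUTCFullYear(), now.getUTCMonth(), now.getUTCDate() + 1) / 1000;
}
```

Because the target is an absolute instant rather than a relative TTL, repeated EXPIREAT calls during the same day all point at the same midnight.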
---
### Step 5 — Error handler serialises TierLimitError
**File:** `src/middleware/errorHandler.ts`
```json
HTTP 429
{
"code": "TIER_LIMIT_EXCEEDED",
"message": "Daily API call limit reached for your tier.",
"details": {
"tier": "free",
"limit": 1000,
"current": 1000
}
}
```
The `Retry-After` header is set to the number of seconds until next UTC midnight so clients can implement automatic backoff.
---
## Walkthrough 6 — Analytics Event Capture Flow
**Trigger:** Any successful token issuance (`POST /api/v1/token`)
This walkthrough traces how an analytics event is captured without affecting the latency of the primary token issuance response.
---
### Step 1 — Token issuance completes
**File:** `src/services/OAuth2Service.ts`
```typescript
const accessToken = signToken(payload, this.privateKey);
// Primary response is ready — analytics is now fire-and-forget
void this.analyticsService.recordEvent(tenantId, 'token_issued');
tokensIssuedTotal.inc({ scope });
```
The `signToken()` call completes synchronously (RSA signing is CPU-bound, not I/O). The controller can now send the response. `analyticsService.recordEvent()` is called with `void` — the `await` is deliberately omitted.
**Why `void` instead of `await`?** Token issuance latency must remain below 100ms (per the QA performance gate). A PostgreSQL write adds 5–15ms. Since analytics data is aggregated (not transactional), losing an occasional event due to an error is acceptable. The response is never delayed for analytics.
---
### Step 2 — AnalyticsService: UPSERT daily counter
**File:** `src/services/AnalyticsService.ts`
```typescript
async recordEvent(tenantId: string, metricType: string): Promise<void> {
try {
await this.pool.query(
`INSERT INTO analytics_events (organization_id, date, metric_type, count)
VALUES ($1, CURRENT_DATE, $2, 1)
ON CONFLICT (organization_id, date, metric_type)
DO UPDATE SET count = analytics_events.count + 1`,
[tenantId, metricType],
);
} catch (err) {
console.error('[AnalyticsService] recordEvent failed — primary path unaffected', err);
}
}
```
The `ON CONFLICT DO UPDATE` upsert is atomic. Whether this is the first or the ten-thousandth `token_issued` event for this tenant today, the row is updated correctly. All errors are caught and swallowed — the token has already been returned to the caller.
**Why one row per day per metric, not one row per event?** Storing a row per event would create millions of rows. The daily aggregate model keeps the table compact while still providing daily trend data (the granularity that analytics dashboards need). Sub-day granularity is available from the Prometheus `agentidp_tokens_issued_total` counter if needed.
---
### Step 3 — Dashboard query (deferred)
When a developer visits the analytics page in the developer portal, the portal calls:
```
GET /api/v1/analytics/token-trend?days=30
```
**File:** `src/services/AnalyticsService.ts`, method `getTokenTrend(tenantId, 30)`
```sql
SELECT
gs.date::DATE::TEXT AS date,
COALESCE(ae.count, 0)::INTEGER AS count
FROM generate_series(
CURRENT_DATE - 29 * INTERVAL '1 day',
CURRENT_DATE,
INTERVAL '1 day'
) AS gs(date)
LEFT JOIN analytics_events ae
ON ae.date = gs.date::DATE
AND ae.organization_id = $2
AND ae.metric_type = 'token_issued'
ORDER BY gs.date ASC
```
The `generate_series` + `LEFT JOIN` pattern ensures all 30 days appear in the result, with `count: 0` for days with no events. This avoids the need for the client to fill in gaps.
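For clarity, the gap-filling this query performs is equivalent to the following client-side sketch (illustration only; the server does it in SQL precisely so clients never have to):

```typescript
// Fill an N-day window ending today (UTC) with zeroes for days that have no
// analytics row, mirroring the generate_series + LEFT JOIN in the SQL above.
export function fillTrend(
  rows: ReadonlyArray<{ date: string; count: number }>, // date as "YYYY-MM-DD"
  days: number,
  today: Date = new Date(),
): Array<{ date: string; count: number }> {
  const byDate = new Map(rows.map((r) => [r.date, r.count] as [string, number]));
  const out: Array<{ date: string; count: number }> = [];
  for (let i = days - 1; i >= 0; i--) {
    // Date.UTC normalises negative day-of-month values across month boundaries.
    const d = new Date(Date.UTC(
      today.getUTCFullYear(), today.getUTCMonth(), today.getUTCDate() - i,
    ));
    const iso = d.toISOString().slice(0, 10);
    out.push({ date: iso, count: byDate.get(iso) ?? 0 });
  }
  return out;
}
```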
```