# Compare commits: d42c653eea...main (41 commits)
## .claude/commands/continue.md (new file, 198 lines)
---
name: "Continue"
description: Capture a full project status snapshot so the next session can continue seamlessly from where this one left off
category: Workflow
tags: [workflow, session, continuity, memory, snapshot]
---

Capture the full current project status and store it in persistent memory so the next session can pick up exactly where this one left off — no context lost, no recap needed.

**Input**: No arguments required. Run `/continue` at any point when ending a session.

---

**Steps**

1. **Capture git state**

Run the following in parallel:
```bash
git status
git branch --show-current
git log --oneline -10
git diff --stat HEAD
git stash list
```

Record:
- Current branch name
- Uncommitted files (staged and unstaged), with change type (M/A/D/?)
- Last 10 commit messages (for continuity context)
- Summary of diff stats if uncommitted changes exist
- Any stashed work

2. **Capture OpenSpec change state**

Run `openspec list --json` to get all active changes.

For each active (non-archived) change, run:
```bash
openspec status --change "<name>" --json
```

For each active change, also read its `tasks.md` to count:
- Total tasks
- Completed tasks (`- [x]`)
- Pending tasks (`- [ ]`)
- The text of the next pending task (to know what's up next)

Record per change:
- Change name
- Schema
- Artifact completion (which are done, which are pending)
- Task progress (X of Y complete)
- Next pending task description
- Any delta specs present (`openspec/changes/<name>/specs/`)

**If no active changes:** Note that there are no active OpenSpec changes.
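The task counts above can be gathered with a couple of `grep` calls. A minimal sketch, assuming `tasks.md` uses line-leading `- [x]` / `- [ ]` checkboxes; the sample file here is illustrative:

```shell
#!/bin/sh
# Minimal sketch: count completed vs pending tasks in a tasks.md.
# Assumes checkboxes sit at the start of a line as "- [x]" / "- [ ]".
tasks_file=$(mktemp)
cat > "$tasks_file" <<'EOF'
- [x] Capture git state
- [x] Capture OpenSpec state
- [ ] Write session snapshot
EOF

done_count=$(grep -c '^- \[x\]' "$tasks_file")
pending_count=$(grep -c '^- \[ \]' "$tasks_file")
total=$((done_count + pending_count))

# First pending task text, without the checkbox prefix
next_task=$(grep -m1 '^- \[ \]' "$tasks_file" | sed 's/^- \[ \] //')

echo "Tasks: ${done_count}/${total} complete"
echo "Next: ${next_task}"
rm -f "$tasks_file"
```

Note that `grep -c` exits non-zero when the count is zero, so a file with no pending tasks needs a fallback if the script runs under `set -e`.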
3. **Capture in-session conversation context**

Summarize what was worked on in this session based on the conversation:
- What was the user trying to accomplish?
- What was completed?
- What was left in-progress or blocked?
- Any key decisions made during this session
- Any open questions or next actions the user mentioned

Keep this factual and brief — 3–8 bullet points.

4. **Capture memory file state**

Read `MEMORY.md` from the project memory directory:
`~/.claude/projects/-home-ubuntu-vj-ai-agents-dev-sentryagent-idp/memory/MEMORY.md`

Note the existing memory entries to avoid duplication in the next step.

5. **Write session snapshot to memory**

Write a `session_snapshot.md` file to the project memory directory:
`~/.claude/projects/-home-ubuntu-vj-ai-agents-dev-sentryagent-idp/memory/session_snapshot.md`

Use this structure:

```markdown
---
name: Session Snapshot
description: Last session status — git state, OpenSpec progress, and conversation context for seamless resumption
type: project
---

**Session ended:** YYYY-MM-DD (today's date)

## Git State

**Branch:** <branch-name>
**Uncommitted changes:** <count> files (<list filenames>)
**Last commit:** <hash> <message>

<If uncommitted changes exist, list them with their status>

<If stashes exist, list them>

## OpenSpec Changes

<For each active change:>
### <change-name>
- **Schema:** <schema-name>
- **Artifacts:** <done-count>/<total-count> complete (<list incomplete artifact names>)
- **Tasks:** <done-count>/<total-count> complete
- **Next task:** <text of next pending task>
- **Delta specs:** <present / none>

<If no active changes:> No active OpenSpec changes.

## Session Work

<Bullet list of what was worked on, completed, and left in-progress>

## Next Actions

<Bullet list of concrete next steps to resume — derived from pending tasks, blockers, open questions>
```

**IMPORTANT:** Always overwrite `session_snapshot.md` — this is a rolling snapshot, not a log. Only the most recent session state matters.

6. **Update MEMORY.md index**

Read the current `MEMORY.md`. If `session_snapshot.md` is not already listed, add it:
```
- [Session Snapshot](session_snapshot.md) — Last session: YYYY-MM-DD | branch: <name> | <N> active changes | <N> uncommitted files
```

If it is already listed, update the line to reflect today's date and current state.

Write the updated `MEMORY.md`.
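The add-or-update index logic can be sketched in a few lines. This is illustrative only: the file contents, the date, and the index line are made-up placeholders, and the real values come from the captured git/OpenSpec state:

```shell
#!/bin/sh
# Minimal sketch: add the snapshot line to MEMORY.md, or replace the
# existing one, keeping exactly one entry for session_snapshot.md.
memory=$(mktemp)
printf '%s\n' '- [Project Notes](notes.md) — misc' > "$memory"

line='- [Session Snapshot](session_snapshot.md) — Last session: 2026-03-21 | branch: develop | 1 active changes | 3 uncommitted files'

if grep -q 'session_snapshot.md' "$memory"; then
  # Entry exists: rewrite that line in place
  tmp=$(mktemp)
  awk -v new="$line" '/session_snapshot\.md/ { print new; next } { print }' "$memory" > "$tmp"
  mv "$tmp" "$memory"
else
  # No entry yet: append one
  printf '%s\n' "$line" >> "$memory"
fi

cat "$memory"
```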
7. **Display break summary**

Show a clean summary so the user knows the snapshot is complete:

```
## Snapshot Saved — See You Next Session

**Branch:** <branch-name>
**Uncommitted files:** <count> (<filenames>)
**Active changes:** <count>

<For each active change:>
- <change-name>: <done>/<total> tasks complete — Next: "<next task text>"

**Session context saved to memory.**

To resume: start a new session and run /continue — Claude will load the snapshot and pick up where you left off.
```

---

**Output On Success (with active changes)**

```
## Snapshot Saved — See You Next Session

**Branch:** develop
**Uncommitted files:** 3 (src/auth/token.ts, tests/auth.test.ts, README.md)
**Active changes:** 1

- add-agent-auth: 4/7 tasks complete — Next: "Implement JWT signing with RS256"

**Session context saved to memory.**

To resume: start a new session and run /continue — Claude will load the snapshot and pick up where you left off.
```

**Output On Success (clean state)**

```
## Snapshot Saved — See You Next Session

**Branch:** main
**Uncommitted files:** 0
**Active changes:** 0

**Session context saved to memory.**

To resume: start a new session and run /continue — Claude will load the snapshot and pick up where you left off.
```

---

**Guardrails**

- Always overwrite `session_snapshot.md` — do NOT append or create versioned copies
- Never include secrets, tokens, or credentials in the snapshot
- If `openspec list` fails (CLI not available), note that and skip OpenSpec capture gracefully
- If git is unavailable, note that and skip git capture gracefully
- Keep the session context summary factual — no speculation about future plans beyond what the user explicitly stated
- The MEMORY.md index line for `session_snapshot.md` must stay under 150 characters
- This command does NOT commit code, push branches, or modify any project files — it only writes to the memory directory
## .claude/commands/openspec-project-status.md (new file, 160 lines)
---
name: "OpenSpec Project Status"
description: Show a human-readable summary of all OpenSpec changes — active, archived, artifact completion, and task progress
category: Workflow
tags: [workflow, status, openspec, reporting]
---

Show the full OpenSpec project status in a clear, human-readable format. No raw JSON — just a clean picture of where the project stands.

**Input**: No arguments required. Run `/openspec-project-status` at any time.

---

**Steps**

1. **Get all changes**

Run:
```bash
openspec list --json
```

Separate results into:
- **Active changes** (not in `archive/`)
- **Archived changes** (in `archive/`)

If the command fails or no changes exist, display a friendly empty state (see Output section).

2. **For each active change, gather full status**

Run in parallel for all active changes:
```bash
openspec status --change "<name>" --json
```

Also read each change's `tasks.md` to extract:
- Total task count
- Completed tasks (`- [x]`)
- Pending tasks (`- [ ]`)
- Text of the **next pending task** (first `- [ ]` item)

Also check for delta specs at `openspec/changes/<name>/specs/` — note if present.

3. **For archived changes**

List them by archive date (newest first). No need to read full status — just show name and archive date from the folder name (`YYYY-MM-DD-<name>`).

4. **Render the human-readable status report**

Use the output format defined below.

---

**Output Format**

```
## OpenSpec Project Status

### Active Changes (<count>)

────────────────────────────────────────
<change-name>
────────────────────────────────────────
Schema: <schema-name>
Phase: <inferred from artifact + task state: Proposing | Designing | Speccing | Ready to Implement | In Progress | Complete>

Artifacts
✓ proposal  done
✓ design    done
◌ tasks     pending

Tasks  <done>/<total> complete
████████░░░░░░░░ 50%
Next: "<text of next pending task>"

Delta Specs  <present / none>

────────────────────────────────────────

<Repeat for each active change>

---

### Archived Changes (<count>)

2026-03-20  add-initial-auth
2026-03-15  setup-ci-pipeline
2026-03-10  scaffold-project

---

### Summary

Active changes: <N>
Ready to apply: <N> (all artifacts done, tasks pending)
In progress:    <N> (tasks partially complete)
Complete:       <N> (all tasks done, not yet archived)
Archived:       <N>
```

**Phase inference rules** (from artifact + task state):
- `Proposing` — proposal artifact is not done
- `Designing` — proposal done, design not done
- `Speccing` — design done, tasks artifact not done
- `Ready to Implement` — all artifacts done, 0 tasks complete
- `In Progress` — all artifacts done, some tasks complete but not all
- `Complete` — all artifacts done, all tasks complete (not yet archived)
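The rules above can be sketched as a small decision function. The inputs are assumed to be pre-computed from `openspec status` and `tasks.md`; the argument names and the final call are illustrative:

```shell
#!/bin/sh
# Minimal sketch of the phase inference rules.
# Args: proposal_done design_done tasks_artifact_done tasks_complete tasks_total
infer_phase() {
  proposal_done=$1; design_done=$2; tasks_artifact_done=$3
  tasks_complete=$4; tasks_total=$5

  if [ "$proposal_done" != "yes" ]; then echo "Proposing"
  elif [ "$design_done" != "yes" ]; then echo "Designing"
  elif [ "$tasks_artifact_done" != "yes" ]; then echo "Speccing"
  elif [ "$tasks_complete" -eq 0 ]; then echo "Ready to Implement"
  elif [ "$tasks_complete" -lt "$tasks_total" ]; then echo "In Progress"
  else echo "Complete"
  fi
}

infer_phase yes yes yes 3 7   # → In Progress
```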
**Progress bar rules:**
- 16 chars wide: `█` per completed segment, `░` for remaining
- Show percentage after bar
- If 0 tasks: show `No tasks yet`
- If all tasks done: show `████████████████ 100% All done!`
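A minimal sketch of these bar rules, using integer arithmetic for the fill width and percentage:

```shell
#!/bin/sh
# Minimal sketch of the 16-char progress bar rules above.
render_bar() {
  done_count=$1; total=$2
  if [ "$total" -eq 0 ]; then echo "No tasks yet"; return; fi
  pct=$(( done_count * 100 / total ))
  filled=$(( done_count * 16 / total ))
  bar=""
  i=0
  while [ "$i" -lt 16 ]; do
    if [ "$i" -lt "$filled" ]; then bar="${bar}█"; else bar="${bar}░"; fi
    i=$((i + 1))
  done
  if [ "$done_count" -eq "$total" ]; then
    echo "${bar} ${pct}% All done!"
  else
    echo "${bar} ${pct}%"
  fi
}

render_bar 8 16   # → ████████░░░░░░░░ 50%
```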
---

**Output: No active changes**

```
## OpenSpec Project Status

### Active Changes (0)

No active changes. Start one with /opsx:propose

---

### Archived Changes (<count>)

2026-03-20  add-initial-auth
...

---

### Summary

Active changes: 0
Archived: <N>
```

**Output: OpenSpec CLI unavailable**

```
## OpenSpec Project Status

OpenSpec CLI not available. Cannot read change data.

Make sure `openspec` is installed and accessible in your PATH.
```

---

**Guardrails**

- Never show raw JSON — always translate to human-readable output
- Never guess artifact or task state — always read from actual files and CLI output
- If a `tasks.md` file does not exist for a change, show `No tasks file` instead of 0/0
- Archived changes are display-only — never modify them
- Phase labels must be inferred strictly from actual artifact + task state, not assumed
- If `openspec status` fails for a specific change, show that change with `Status unavailable` and continue
## .claude/commands/opsx/apply.md (new file, 152 lines)
---
name: "OPSX: Apply"
description: Implement tasks from an OpenSpec change (Experimental)
category: Workflow
tags: [workflow, artifacts, experimental]
---

Implement tasks from an OpenSpec change.

**Input**: Optionally specify a change name (e.g., `/opsx:apply add-auth`). If omitted, check if it can be inferred from conversation context. If vague or ambiguous, you MUST prompt for available changes.

**Steps**

1. **Select the change**

If a name is provided, use it. Otherwise:
- Infer from conversation context if the user mentioned a change
- Auto-select if only one active change exists
- If ambiguous, run `openspec list --json` to get available changes and use the **AskUserQuestion tool** to let the user select

Always announce: "Using change: <name>" and how to override (e.g., `/opsx:apply <other>`).

2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven")
- Which artifact contains the tasks (typically "tasks" for spec-driven; check status for others)

3. **Get apply instructions**

```bash
openspec instructions apply --change "<name>" --json
```

This returns:
- Context file paths (varies by schema)
- Progress (total, complete, remaining)
- Task list with status
- Dynamic instruction based on current state

**Handle states:**
- If `state: "blocked"` (missing artifacts): show message, suggest using `/opsx:continue`
- If `state: "all_done"`: congratulate, suggest archive
- Otherwise: proceed to implementation

4. **Read context files**

Read the files listed in `contextFiles` from the apply instructions output.
The files depend on the schema being used:
- **spec-driven**: proposal, specs, design, tasks
- Other schemas: follow the contextFiles from CLI output

5. **Show current progress**

Display:
- Schema being used
- Progress: "N/M tasks complete"
- Remaining tasks overview
- Dynamic instruction from CLI

6. **Implement tasks (loop until done or blocked)**

For each pending task:
- Show which task is being worked on
- Make the code changes required
- Keep changes minimal and focused
- Mark task complete in the tasks file: `- [ ]` → `- [x]`
- Continue to next task

**Pause if:**
- Task is unclear → ask for clarification
- Implementation reveals a design issue → suggest updating artifacts
- Error or blocker encountered → report and wait for guidance
- User interrupts
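Flipping a checkbox is a one-line text edit, but only the first pending item should change. A minimal sketch with `awk`; the sample task names are illustrative:

```shell
#!/bin/sh
# Minimal sketch: flip only the FIRST pending "- [ ]" to "- [x]".
tasks_file=$(mktemp)
cat > "$tasks_file" <<'EOF'
- [x] Add token model
- [ ] Implement JWT signing
- [ ] Add refresh endpoint
EOF

# !flipped short-circuits, so sub() rewrites at most one line
result=$(awk '!flipped && sub(/^- \[ \]/, "- [x]") { flipped = 1 } { print }' "$tasks_file")
printf '%s\n' "$result" > "$tasks_file"

cat "$tasks_file"
rm -f "$tasks_file"
```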
7. **On completion or pause, show status**

Display:
- Tasks completed this session
- Overall progress: "N/M tasks complete"
- If all done: suggest archive
- If paused: explain why and wait for guidance

**Output During Implementation**

```
## Implementing: <change-name> (schema: <schema-name>)

Working on task 3/7: <task description>
[...implementation happening...]
✓ Task complete

Working on task 4/7: <task description>
[...implementation happening...]
✓ Task complete
```

**Output On Completion**

```
## Implementation Complete

**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 7/7 tasks complete ✓

### Completed This Session
- [x] Task 1
- [x] Task 2
...

All tasks complete! You can archive this change with `/opsx:archive`.
```

**Output On Pause (Issue Encountered)**

```
## Implementation Paused

**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 4/7 tasks complete

### Issue Encountered
<description of the issue>

**Options:**
1. <option 1>
2. <option 2>
3. Other approach

What would you like to do?
```

**Guardrails**

- Keep going through tasks until done or blocked
- Always read context files before starting (from the apply instructions output)
- If task is ambiguous, pause and ask before implementing
- If implementation reveals issues, pause and suggest artifact updates
- Keep code changes minimal and scoped to each task
- Update task checkbox immediately after completing each task
- Pause on errors, blockers, or unclear requirements - don't guess
- Use contextFiles from CLI output, don't assume specific file names

**Fluid Workflow Integration**

This skill supports the "actions on a change" model:

- **Can be invoked anytime**: Before all artifacts are done (if tasks exist), after partial implementation, interleaved with other actions
- **Allows artifact updates**: If implementation reveals design issues, suggest updating artifacts - not phase-locked, work fluidly
## .claude/commands/opsx/archive.md (new file, 157 lines)
---
name: "OPSX: Archive"
description: Archive a completed change in the experimental workflow
category: Workflow
tags: [workflow, archive, experimental]
---

Archive a completed change in the experimental workflow.

**Input**: Optionally specify a change name after `/opsx:archive` (e.g., `/opsx:archive add-auth`). If omitted, check if it can be inferred from conversation context. If vague or ambiguous, you MUST prompt for available changes.

**Steps**

1. **If no change name provided, prompt for selection**

Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.

Show only active changes (not already archived).
Include the schema used for each change if available.

**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.

2. **Check artifact completion status**

Run `openspec status --change "<name>" --json` to check artifact completion.

Parse the JSON to understand:
- `schemaName`: The workflow being used
- `artifacts`: List of artifacts with their status (`done` or other)

**If any artifacts are not `done`:**
- Display a warning listing the incomplete artifacts
- Prompt the user for confirmation to continue
- Proceed if the user confirms

3. **Check task completion status**

Read the tasks file (typically `tasks.md`) to check for incomplete tasks.

Count tasks marked with `- [ ]` (incomplete) vs `- [x]` (complete).

**If incomplete tasks found:**
- Display a warning showing the count of incomplete tasks
- Prompt the user for confirmation to continue
- Proceed if the user confirms

**If no tasks file exists:** Proceed without a task-related warning.

4. **Assess delta spec sync state**

Check for delta specs at `openspec/changes/<name>/specs/`. If none exist, proceed without a sync prompt.

**If delta specs exist:**
- Compare each delta spec with its corresponding main spec at `openspec/specs/<capability>/spec.md`
- Determine what changes would be applied (adds, modifications, removals, renames)
- Show a combined summary before prompting

**Prompt options:**
- If changes needed: "Sync now (recommended)", "Archive without syncing"
- If already synced: "Archive now", "Sync anyway", "Cancel"

If the user chooses sync, use the Task tool (subagent_type: "general-purpose", prompt: "Use Skill tool to invoke openspec-sync-specs for change '<name>'. Delta spec analysis: <include the analyzed delta spec summary>"). Proceed to archive regardless of choice.

5. **Perform the archive**

Create the archive directory if it doesn't exist:
```bash
mkdir -p openspec/changes/archive
```

Generate the target name using the current date: `YYYY-MM-DD-<change-name>`

**Check if the target already exists:**
- If yes: Fail with an error; suggest renaming the existing archive or using a different date
- If no: Move the change directory to the archive

```bash
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
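The move with its collision check can be sketched as below. This runs against a throwaway directory so it is safe to execute; in the real command, `<name>` comes from user selection and paths are relative to the project root:

```shell
#!/bin/sh
# Minimal sketch of the archive move above, with the collision check.
root=$(mktemp -d)
name="add-auth"                       # hypothetical change name
mkdir -p "$root/openspec/changes/$name"

target="$root/openspec/changes/archive/$(date +%Y-%m-%d)-$name"
mkdir -p "$root/openspec/changes/archive"

if [ -e "$target" ]; then
  # Fail rather than overwrite an existing archive
  echo "Archive failed: target already exists" >&2
else
  mv "$root/openspec/changes/$name" "$target"
  echo "Archived to archive/$(date +%Y-%m-%d)-$name"
fi
```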
6. **Display summary**

Show an archive completion summary including:
- Change name
- Schema that was used
- Archive location
- Spec sync status (synced / sync skipped / no delta specs)
- Note about any warnings (incomplete artifacts/tasks)

**Output On Success**

```
## Archive Complete

**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** ✓ Synced to main specs

All artifacts complete. All tasks complete.
```

**Output On Success (No Delta Specs)**

```
## Archive Complete

**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** No delta specs

All artifacts complete. All tasks complete.
```

**Output On Success With Warnings**

```
## Archive Complete (with warnings)

**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** Sync skipped (user chose to skip)

**Warnings:**
- Archived with 2 incomplete artifacts
- Archived with 3 incomplete tasks
- Delta spec sync was skipped (user chose to skip)

Review the archive if this was not intentional.
```

**Output On Error (Archive Exists)**

```
## Archive Failed

**Change:** <change-name>
**Target:** openspec/changes/archive/YYYY-MM-DD-<name>/

Target archive directory already exists.

**Options:**
1. Rename the existing archive
2. Delete the existing archive if it's a duplicate
3. Wait until a different date to archive
```

**Guardrails**

- Always prompt for change selection if not provided
- Use the artifact graph (`openspec status --json`) for completion checking
- Don't block archive on warnings - just inform and confirm
- Preserve `.openspec.yaml` when moving to archive (it moves with the directory)
- Show a clear summary of what happened
- If sync is requested, use the Skill tool to invoke `openspec-sync-specs` (agent-driven)
- If delta specs exist, always run the sync assessment and show the combined summary before prompting
## .claude/commands/opsx/explore.md (new file, 173 lines)
---
name: "OPSX: Explore"
description: "Enter explore mode - think through ideas, investigate problems, clarify requirements"
category: Workflow
tags: [workflow, explore, experimental, thinking]
---

Enter explore mode. Think deeply. Visualize freely. Follow the conversation wherever it goes.

**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first and create a change proposal. You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing.

**This is a stance, not a workflow.** There are no fixed steps, no required sequence, no mandatory outputs. You're a thinking partner helping the user explore.

**Input**: The argument after `/opsx:explore` is whatever the user wants to think about. Could be:
- A vague idea: "real-time collaboration"
- A specific problem: "the auth system is getting unwieldy"
- A change name: "add-dark-mode" (to explore in context of that change)
- A comparison: "postgres vs sqlite for this"
- Nothing (just enter explore mode)

---

## The Stance

- **Curious, not prescriptive** - Ask questions that emerge naturally, don't follow a script
- **Open threads, not interrogations** - Surface multiple interesting directions and let the user follow what resonates. Don't funnel them through a single path of questions.
- **Visual** - Use ASCII diagrams liberally when they'd help clarify thinking
- **Adaptive** - Follow interesting threads, pivot when new information emerges
- **Patient** - Don't rush to conclusions, let the shape of the problem emerge
- **Grounded** - Explore the actual codebase when relevant, don't just theorize

---

## What You Might Do

Depending on what the user brings, you might:

**Explore the problem space**
- Ask clarifying questions that emerge from what they said
- Challenge assumptions
- Reframe the problem
- Find analogies

**Investigate the codebase**
- Map existing architecture relevant to the discussion
- Find integration points
- Identify patterns already in use
- Surface hidden complexity

**Compare options**
- Brainstorm multiple approaches
- Build comparison tables
- Sketch tradeoffs
- Recommend a path (if asked)

**Visualize**
```
┌─────────────────────────────────────────┐
│  Use ASCII diagrams liberally           │
├─────────────────────────────────────────┤
│                                         │
│  ┌────────┐         ┌────────┐          │
│  │ State  │────────▶│ State  │          │
│  │   A    │         │   B    │          │
│  └────────┘         └────────┘          │
│                                         │
│  System diagrams, state machines,       │
│  data flows, architecture sketches,     │
│  dependency graphs, comparison tables   │
│                                         │
└─────────────────────────────────────────┘
```

**Surface risks and unknowns**
- Identify what could go wrong
- Find gaps in understanding
- Suggest spikes or investigations

---

## OpenSpec Awareness

You have full context of the OpenSpec system. Use it naturally, don't force it.

### Check for context

At the start, quickly check what exists:
```bash
openspec list --json
```

This tells you:
- If there are active changes
- Their names, schemas, and status
- What the user might be working on

If the user mentioned a specific change name, read its artifacts for context.

### When no change exists

Think freely. When insights crystallize, you might offer:

- "This feels solid enough to start a change. Want me to create a proposal?"
- Or keep exploring - no pressure to formalize

### When a change exists

If the user mentions a change or you detect one is relevant:

1. **Read existing artifacts for context**
   - `openspec/changes/<name>/proposal.md`
   - `openspec/changes/<name>/design.md`
   - `openspec/changes/<name>/tasks.md`
   - etc.

2. **Reference them naturally in conversation**
   - "Your design mentions using Redis, but we just realized SQLite fits better..."
   - "The proposal scopes this to premium users, but we're now thinking everyone..."

3. **Offer to capture when decisions are made**

   | Insight Type | Where to Capture |
   |--------------|------------------|
   | New requirement discovered | `specs/<capability>/spec.md` |
   | Requirement changed | `specs/<capability>/spec.md` |
   | Design decision made | `design.md` |
   | Scope changed | `proposal.md` |
   | New work identified | `tasks.md` |
   | Assumption invalidated | Relevant artifact |

   Example offers:
   - "That's a design decision. Capture it in design.md?"
   - "This is a new requirement. Add it to specs?"
   - "This changes scope. Update the proposal?"

4. **The user decides** - Offer and move on. Don't pressure. Don't auto-capture.

---

## What You Don't Have To Do

- Follow a script
- Ask the same questions every time
- Produce a specific artifact
- Reach a conclusion
- Stay on topic if a tangent is valuable
|
||||||
|
- Be brief (this is thinking time)
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Ending Discovery
|
||||||
|
|
||||||
|
There's no required ending. Discovery might:
|
||||||
|
|
||||||
|
- **Flow into a proposal**: "Ready to start? I can create a change proposal."
|
||||||
|
- **Result in artifact updates**: "Updated design.md with these decisions"
|
||||||
|
- **Just provide clarity**: User has what they need, moves on
|
||||||
|
- **Continue later**: "We can pick this up anytime"
|
||||||
|
|
||||||
|
When things crystallize, you might offer a summary - but it's optional. Sometimes the thinking IS the value.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Guardrails
|
||||||
|
|
||||||
|
- **Don't implement** - Never write code or implement features. Creating OpenSpec artifacts is fine, writing application code is not.
|
||||||
|
- **Don't fake understanding** - If something is unclear, dig deeper
|
||||||
|
- **Don't rush** - Discovery is thinking time, not task time
|
||||||
|
- **Don't force structure** - Let patterns emerge naturally
|
||||||
|
- **Don't auto-capture** - Offer to save insights, don't just do it
|
||||||
|
- **Do visualize** - A good diagram is worth many paragraphs
|
||||||
|
- **Do explore the codebase** - Ground discussions in reality
|
||||||
|
- **Do question assumptions** - Including the user's and your own
|
||||||
106
.claude/commands/opsx/propose.md
Normal file
@@ -0,0 +1,106 @@
---
name: "OPSX: Propose"
description: Propose a new change - create it and generate all artifacts in one step
category: Workflow
tags: [workflow, artifacts, experimental]
---

Propose a new change - create the change and generate all artifacts in one step.

I'll create a change with artifacts:

- proposal.md (what & why)
- design.md (how)
- tasks.md (implementation steps)

When ready to implement, run /opsx:apply

---

**Input**: The argument after `/opsx:propose` is the change name (kebab-case), OR a description of what the user wants to build.

**Steps**

1. **If no input provided, ask what they want to build**

   Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:

   > "What change do you want to work on? Describe what you want to build or fix."

   From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).

   **IMPORTANT**: Do NOT proceed without understanding what the user wants to build.

2. **Create the change directory**

   ```bash
   openspec new change "<name>"
   ```

   This creates a scaffolded change at `openspec/changes/<name>/` with `.openspec.yaml`.

3. **Get the artifact build order**

   ```bash
   openspec status --change "<name>" --json
   ```

   Parse the JSON to get:
   - `applyRequires`: array of artifact IDs needed before implementation (e.g., `["tasks"]`)
   - `artifacts`: list of all artifacts with their status and dependencies
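For illustration, the apply gate could be pulled out with `jq` - a sketch only, with the JSON shape assumed from the fields above and `jq` availability assumed:

```shell
# Hypothetical status output, shaped like the fields described above
status_json='{"applyRequires":["tasks"],"artifacts":[{"id":"proposal","status":"done"},{"id":"tasks","status":"ready"}]}'

# Extract the artifact IDs required before /opsx:apply can run
echo "$status_json" | jq -r '.applyRequires[]'   # → tasks
```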
4. **Create artifacts in sequence until apply-ready**

   Use the **TodoWrite tool** to track progress through the artifacts.

   Loop through artifacts in dependency order (artifacts with no pending dependencies first):

   a. **For each artifact that is `ready` (dependencies satisfied)**:
      - Get instructions:
        ```bash
        openspec instructions <artifact-id> --change "<name>" --json
        ```
      - The instructions JSON includes:
        - `context`: Project background (constraints for you - do NOT include in output)
        - `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
        - `template`: The structure to use for your output file
        - `instruction`: Schema-specific guidance for this artifact type
        - `outputPath`: Where to write the artifact
        - `dependencies`: Completed artifacts to read for context
      - Read any completed dependency files for context
      - Create the artifact file using `template` as the structure
      - Apply `context` and `rules` as constraints - but do NOT copy them into the file
      - Show brief progress: "Created <artifact-id>"

   b. **Continue until all `applyRequires` artifacts are complete**
      - After creating each artifact, re-run `openspec status --change "<name>" --json`
      - Check if every artifact ID in `applyRequires` has `status: "done"` in the artifacts array
      - Stop when all `applyRequires` artifacts are done
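The completeness check in step 4b could look like this - a sketch under the assumed JSON shape, using `jq`:

```shell
# Hypothetical status output: both required artifacts are done, one optional is not
status='{"applyRequires":["proposal","tasks"],"artifacts":[{"id":"proposal","status":"done"},{"id":"tasks","status":"done"},{"id":"design","status":"ready"}]}'

# For each required ID, test whether its artifact entry has status "done";
# `all` collapses the list of booleans into a single true/false
ready=$(echo "$status" | jq '[.applyRequires[] as $id | .artifacts[] | select(.id == $id) | .status == "done"] | all')
echo "$ready"   # → true
```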
   c. **If an artifact requires user input** (unclear context):
      - Use the **AskUserQuestion tool** to clarify
      - Then continue with creation

5. **Show final status**

   ```bash
   openspec status --change "<name>"
   ```

**Output**

After completing all artifacts, summarize:

- Change name and location
- List of artifacts created with brief descriptions
- What's ready: "All artifacts created! Ready for implementation."
- Prompt: "Run `/opsx:apply` to start implementing."

**Artifact Creation Guidelines**

- Follow the `instruction` field from `openspec instructions` for each artifact type
- The schema defines what each artifact should contain - follow it
- Read dependency artifacts for context before creating new ones
- Use `template` as the structure for your output file - fill in its sections
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
  - Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
  - These guide what you write, but should never appear in the output

**Guardrails**

- Create ALL artifacts needed for implementation (as defined by the schema's `apply.requires`)
- Always read dependency artifacts before creating a new one
- If context is critically unclear, ask the user - but prefer making reasonable decisions to keep momentum
- If a change with that name already exists, ask whether the user wants to continue it or create a new one
- Verify that each artifact file exists after writing, before proceeding to the next
183
.claude/skills/continue/SKILL.md
Normal file
@@ -0,0 +1,183 @@
---
name: continue
description: Capture a full project status snapshot so the next session can continue seamlessly from where this one left off. Use when the user is ending a session and wants to preserve context for resumption.
license: MIT
compatibility: Requires git. OpenSpec CLI optional (gracefully skipped if unavailable).
metadata:
  author: sentryagent
  version: "1.0"
  generatedBy: "1.2.0"
---

Capture the full current project status and store it in persistent memory so the next session can pick up exactly where this one left off — no context lost, no recap needed.

**Input**: No arguments required. Invoke at any point when ending a session.

**Steps**

1. **Capture git state**

   Run the following in parallel:
   ```bash
   git status
   git branch --show-current
   git log --oneline -10
   git diff --stat HEAD
   git stash list
   ```

   Record:
   - Current branch name
   - Uncommitted files (staged and unstaged), with change type (M/A/D/?)
   - Last 10 commit messages (for continuity context)
   - Summary of diff stats if uncommitted changes exist
   - Any stashed work

2. **Capture OpenSpec change state**

   Run `openspec list --json` to get all active changes.

   For each active (non-archived) change, run:
   ```bash
   openspec status --change "<name>" --json
   ```

   For each active change, also read its `tasks.md` to count:
   - Total tasks
   - Completed tasks (`- [x]`)
   - Pending tasks (`- [ ]`)
   - The text of the next pending task (to know what's up next)
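The counting above can be sketched with standard tools; the file path and task text here are illustrative only:

```shell
# Hypothetical tasks.md for demonstration
cat > /tmp/tasks.md <<'EOF'
- [x] Set up project
- [x] Add CLI parser
- [ ] Implement JWT signing
- [ ] Write tests
EOF

# Count completed vs pending checkboxes
done_count=$(grep -c '^- \[x\]' /tmp/tasks.md)
pending_count=$(grep -c '^- \[ \]' /tmp/tasks.md)

# First pending task, with the checkbox prefix stripped
next_task=$(grep -m1 '^- \[ \]' /tmp/tasks.md | sed 's/^- \[ \] //')

echo "$done_count of $((done_count + pending_count)) complete; next: $next_task"
```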
   Record per change:
   - Change name
   - Schema
   - Artifact completion (which are done, which are pending)
   - Task progress (X of Y complete)
   - Next pending task description
   - Any delta specs present (`openspec/changes/<name>/specs/`)

   **If the `openspec` CLI is unavailable or fails:** Note it and skip this section gracefully.

   **If no active changes:** Note that there are no active OpenSpec changes.

3. **Capture in-session conversation context**

   Summarize what was worked on in this session based on the conversation:
   - What was the user trying to accomplish?
   - What was completed?
   - What was left in progress or blocked?
   - Any key decisions made during this session
   - Any open questions or next actions the user mentioned

   Keep this factual and brief — 3–8 bullet points.

4. **Capture memory file state**

   Read `MEMORY.md` from the project memory directory:
   `~/.claude/projects/-home-ubuntu-vj-ai-agents-dev-sentryagent-idp/memory/MEMORY.md`

   Note the existing memory entries to avoid duplication in the next step.

5. **Write session snapshot to memory**

   Write a `session_snapshot.md` file to the project memory directory:
   `~/.claude/projects/-home-ubuntu-vj-ai-agents-dev-sentryagent-idp/memory/session_snapshot.md`

   Use this structure:

   ```markdown
   ---
   name: Session Snapshot
   description: Last session status — git state, OpenSpec progress, and conversation context for seamless resumption
   type: project
   ---

   **Session ended:** YYYY-MM-DD (today's date)

   ## Git State

   **Branch:** <branch-name>
   **Uncommitted changes:** <count> files (<list filenames>)
   **Last commit:** <hash> <message>

   <If uncommitted changes exist, list them with their status>

   <If stashes exist, list them>

   ## OpenSpec Changes

   <For each active change:>
   ### <change-name>
   - **Schema:** <schema-name>
   - **Artifacts:** <done-count>/<total-count> complete (<list incomplete artifact names>)
   - **Tasks:** <done-count>/<total-count> complete
   - **Next task:** <text of next pending task>
   - **Delta specs:** <present / none>

   <If no active changes:> No active OpenSpec changes.

   ## Session Work

   <Bullet list of what was worked on, completed, and left in progress>

   ## Next Actions

   <Bullet list of concrete next steps to resume — derived from pending tasks, blockers, open questions>
   ```

   **IMPORTANT:** Always overwrite `session_snapshot.md` — this is a rolling snapshot, not a log. Only the most recent session state matters.

6. **Update MEMORY.md index**

   Read the current `MEMORY.md`. If `session_snapshot.md` is not already listed, add it:
   ```
   - [Session Snapshot](session_snapshot.md) — Last session: YYYY-MM-DD | branch: <name> | <N> active changes | <N> uncommitted files
   ```

   If it is already listed, update the line to reflect today's date and current state.

   Write the updated `MEMORY.md`.

7. **Display break summary**

   Show a clean summary so the user knows the snapshot is complete:

   ```
   ## Snapshot Saved — See You Next Session

   **Branch:** <branch-name>
   **Uncommitted files:** <count> (<filenames>)
   **Active changes:** <count>

   <For each active change:>
   - <change-name>: <done>/<total> tasks complete — Next: "<next task text>"

   **Session context saved to memory.**

   To resume: start a new session and run /continue — Claude will load the snapshot and pick up where you left off.
   ```

**Output On Success**

```
## Snapshot Saved — See You Next Session

**Branch:** develop
**Uncommitted files:** 3 (src/auth/token.ts, tests/auth.test.ts, README.md)
**Active changes:** 1

- add-agent-auth: 4/7 tasks complete — Next: "Implement JWT signing with RS256"

**Session context saved to memory.**

To resume: start a new session and run /continue — Claude will load the snapshot and pick up where you left off.
```

**Guardrails**

- Always overwrite `session_snapshot.md` — do NOT append or create versioned copies
- Never include secrets, tokens, or credentials in the snapshot
- If `openspec list` fails (CLI not available), note that and skip OpenSpec capture gracefully
- If git is unavailable, note that and skip git capture gracefully
- Keep the session context summary factual — no speculation beyond what the user explicitly stated
- The MEMORY.md index line for `session_snapshot.md` must stay under 150 characters
- This skill does NOT commit code, push branches, or modify any project files — it only writes to the memory directory
- The session date must use the actual current date (not a placeholder)
156
.claude/skills/openspec-apply-change/SKILL.md
Normal file
@@ -0,0 +1,156 @@
---
name: openspec-apply-change
description: Implement tasks from an OpenSpec change. Use when the user wants to start implementing, continue implementation, or work through tasks.
license: MIT
compatibility: Requires openspec CLI.
metadata:
  author: openspec
  version: "1.0"
  generatedBy: "1.2.0"
---

Implement tasks from an OpenSpec change.

**Input**: Optionally specify a change name. If omitted, check whether it can be inferred from conversation context. If vague or ambiguous, you MUST prompt the user with the available changes.

**Steps**

1. **Select the change**

   If a name is provided, use it. Otherwise:
   - Infer from conversation context if the user mentioned a change
   - Auto-select if only one active change exists
   - If ambiguous, run `openspec list --json` to get available changes and use the **AskUserQuestion tool** to let the user select

   Always announce: "Using change: <name>" and how to override (e.g., `/opsx:apply <other>`).

2. **Check status to understand the schema**

   ```bash
   openspec status --change "<name>" --json
   ```

   Parse the JSON to understand:
   - `schemaName`: The workflow being used (e.g., "spec-driven")
   - Which artifact contains the tasks (typically "tasks" for spec-driven; check status for others)

3. **Get apply instructions**

   ```bash
   openspec instructions apply --change "<name>" --json
   ```

   This returns:
   - Context file paths (varies by schema - could be proposal/specs/design/tasks, or spec/tests/implementation/docs)
   - Progress (total, complete, remaining)
   - Task list with status
   - Dynamic instruction based on current state

   **Handle states:**
   - If `state: "blocked"` (missing artifacts): show the message and suggest using openspec-continue-change
   - If `state: "all_done"`: congratulate and suggest archiving
   - Otherwise: proceed to implementation

4. **Read context files**

   Read the files listed in `contextFiles` from the apply instructions output.
   The files depend on the schema being used:
   - **spec-driven**: proposal, specs, design, tasks
   - Other schemas: follow the contextFiles from the CLI output

5. **Show current progress**

   Display:
   - Schema being used
   - Progress: "N/M tasks complete"
   - Remaining tasks overview
   - Dynamic instruction from the CLI

6. **Implement tasks (loop until done or blocked)**

   For each pending task:
   - Show which task is being worked on
   - Make the code changes required
   - Keep changes minimal and focused
   - Mark the task complete in the tasks file: `- [ ]` → `- [x]`
   - Continue to the next task
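The checkbox flip in the loop above can be sketched with `sed`; the file path and task text here are hypothetical:

```shell
# Hypothetical tasks file with one done and one pending task
printf -- '- [x] Add token parser\n- [ ] Implement JWT signing\n' > /tmp/apply-tasks.md

# Rewrite exactly the matching pending line to a completed one
# (written to a temp file for portability, since `sed -i` differs between GNU and BSD)
sed 's/^- \[ \] Implement JWT signing$/- [x] Implement JWT signing/' /tmp/apply-tasks.md > /tmp/apply-tasks.new
mv /tmp/apply-tasks.new /tmp/apply-tasks.md

grep -c '^- \[x\]' /tmp/apply-tasks.md   # → 2
```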
   **Pause if:**
   - A task is unclear → ask for clarification
   - Implementation reveals a design issue → suggest updating artifacts
   - An error or blocker is encountered → report and wait for guidance
   - The user interrupts

7. **On completion or pause, show status**

   Display:
   - Tasks completed this session
   - Overall progress: "N/M tasks complete"
   - If all done: suggest archiving
   - If paused: explain why and wait for guidance

**Output During Implementation**

```
## Implementing: <change-name> (schema: <schema-name>)

Working on task 3/7: <task description>
[...implementation happening...]
✓ Task complete

Working on task 4/7: <task description>
[...implementation happening...]
✓ Task complete
```

**Output On Completion**

```
## Implementation Complete

**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 7/7 tasks complete ✓

### Completed This Session
- [x] Task 1
- [x] Task 2
...

All tasks complete! Ready to archive this change.
```

**Output On Pause (Issue Encountered)**

```
## Implementation Paused

**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 4/7 tasks complete

### Issue Encountered
<description of the issue>

**Options:**
1. <option 1>
2. <option 2>
3. Other approach

What would you like to do?
```

**Guardrails**

- Keep going through tasks until done or blocked
- Always read context files before starting (from the apply instructions output)
- If a task is ambiguous, pause and ask before implementing
- If implementation reveals issues, pause and suggest artifact updates
- Keep code changes minimal and scoped to each task
- Update the task checkbox immediately after completing each task
- Pause on errors, blockers, or unclear requirements - don't guess
- Use contextFiles from the CLI output; don't assume specific file names

**Fluid Workflow Integration**

This skill supports the "actions on a change" model:

- **Can be invoked anytime**: Before all artifacts are done (if tasks exist), after partial implementation, or interleaved with other actions
- **Allows artifact updates**: If implementation reveals design issues, suggest updating artifacts - the workflow is not phase-locked, so work fluidly
114
.claude/skills/openspec-archive-change/SKILL.md
Normal file
@@ -0,0 +1,114 @@
---
name: openspec-archive-change
description: Archive a completed change in the experimental workflow. Use when the user wants to finalize and archive a change after implementation is complete.
license: MIT
compatibility: Requires openspec CLI.
metadata:
  author: openspec
  version: "1.0"
  generatedBy: "1.2.0"
---

Archive a completed change in the experimental workflow.

**Input**: Optionally specify a change name. If omitted, check whether it can be inferred from conversation context. If vague or ambiguous, you MUST prompt the user with the available changes.

**Steps**

1. **If no change name provided, prompt for selection**

   Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.

   Show only active changes (not already archived).
   Include the schema used for each change if available.

   **IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.

2. **Check artifact completion status**

   Run `openspec status --change "<name>" --json` to check artifact completion.

   Parse the JSON to understand:
   - `schemaName`: The workflow being used
   - `artifacts`: List of artifacts with their status (`done` or other)

   **If any artifacts are not `done`:**
   - Display a warning listing the incomplete artifacts
   - Use the **AskUserQuestion tool** to confirm the user wants to proceed
   - Proceed if the user confirms

3. **Check task completion status**

   Read the tasks file (typically `tasks.md`) to check for incomplete tasks.

   Count tasks marked with `- [ ]` (incomplete) vs `- [x]` (complete).

   **If incomplete tasks are found:**
   - Display a warning showing the count of incomplete tasks
   - Use the **AskUserQuestion tool** to confirm the user wants to proceed
   - Proceed if the user confirms

   **If no tasks file exists:** Proceed without a task-related warning.

4. **Assess delta spec sync state**

   Check for delta specs at `openspec/changes/<name>/specs/`. If none exist, proceed without a sync prompt.

   **If delta specs exist:**
   - Compare each delta spec with its corresponding main spec at `openspec/specs/<capability>/spec.md`
   - Determine what changes would be applied (adds, modifications, removals, renames)
   - Show a combined summary before prompting

   **Prompt options:**
   - If changes are needed: "Sync now (recommended)", "Archive without syncing"
   - If already synced: "Archive now", "Sync anyway", "Cancel"

   If the user chooses sync, use the Task tool (subagent_type: "general-purpose", prompt: "Use the Skill tool to invoke openspec-sync-specs for change '<name>'. Delta spec analysis: <include the analyzed delta spec summary>"). Proceed to archive regardless of the choice.

5. **Perform the archive**

   Create the archive directory if it doesn't exist:
   ```bash
   mkdir -p openspec/changes/archive
   ```

   Generate the target name using the current date: `YYYY-MM-DD-<change-name>`

   **Check if the target already exists:**
   - If yes: Fail with an error; suggest renaming the existing archive or using a different date
   - If no: Move the change directory to the archive

   ```bash
   mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
   ```
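The date-stamped target and the existence guard can be sketched as follows; the change name is illustrative, and the `mv` is only echoed here rather than executed:

```shell
# Hypothetical change name
name="add-agent-auth"

# date +%F prints the current date as YYYY-MM-DD
target="openspec/changes/archive/$(date +%F)-$name"

if [ -e "$target" ]; then
  # Refuse to clobber an existing archive
  echo "error: $target already exists - rename it or pick another date" >&2
else
  echo "would run: mv openspec/changes/$name $target"
fi
```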
6. **Display summary**

   Show an archive completion summary including:
   - Change name
   - Schema that was used
   - Archive location
   - Whether specs were synced (if applicable)
   - A note about any warnings (incomplete artifacts/tasks)

**Output On Success**

```
## Archive Complete

**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** ✓ Synced to main specs (or "No delta specs" or "Sync skipped")

All artifacts complete. All tasks complete.
```

**Guardrails**

- Always prompt for change selection if not provided
- Use the artifact graph (`openspec status --json`) for completion checking
- Don't block the archive on warnings - just inform and confirm
- Preserve `.openspec.yaml` when moving to the archive (it moves with the directory)
- Show a clear summary of what happened
- If sync is requested, use the openspec-sync-specs approach (agent-driven)
- If delta specs exist, always run the sync assessment and show the combined summary before prompting
288
.claude/skills/openspec-explore/SKILL.md
Normal file
@@ -0,0 +1,288 @@
|
|||||||
|
---
|
||||||
|
name: openspec-explore
|
||||||
|
description: Enter explore mode - a thinking partner for exploring ideas, investigating problems, and clarifying requirements. Use when the user wants to think through something before or during a change.
|
||||||
|
license: MIT
|
||||||
|
compatibility: Requires openspec CLI.
|
||||||
|
metadata:
|
||||||
|
author: openspec
|
||||||
|
version: "1.0"
|
||||||
|
generatedBy: "1.2.0"
|
||||||
|
---
|
||||||
|
|
||||||
|
Enter explore mode. Think deeply. Visualize freely. Follow the conversation wherever it goes.
|
||||||
|
|
||||||
|
**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first and create a change proposal. You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing.
|
||||||
|
|
||||||
|
**This is a stance, not a workflow.** There are no fixed steps, no required sequence, no mandatory outputs. You're a thinking partner helping the user explore.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## The Stance
|
||||||
|
|
||||||
|
- **Curious, not prescriptive** - Ask questions that emerge naturally, don't follow a script
|
||||||
|
- **Open threads, not interrogations** - Surface multiple interesting directions and let the user follow what resonates. Don't funnel them through a single path of questions.
|
||||||
|
- **Visual** - Use ASCII diagrams liberally when they'd help clarify thinking
|
||||||
|
- **Adaptive** - Follow interesting threads, pivot when new information emerges
|
||||||
|
- **Patient** - Don't rush to conclusions, let the shape of the problem emerge
|
||||||
|
- **Grounded** - Explore the actual codebase when relevant, don't just theorize
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## What You Might Do
|
||||||
|
|
||||||
|
Depending on what the user brings, you might:
|
||||||
|
|
||||||
|
**Explore the problem space**
|
||||||
|
- Ask clarifying questions that emerge from what they said
|
||||||
|
- Challenge assumptions
|
||||||
|
- Reframe the problem
|
||||||
|
- Find analogies
|
||||||
|
|
||||||
|
**Investigate the codebase**
|
||||||
|
- Map existing architecture relevant to the discussion
|
||||||
|
- Find integration points
|
||||||
|
- Identify patterns already in use
|
||||||
|
- Surface hidden complexity
|
||||||
|
|
||||||
|
**Compare options**
|
||||||
|
- Brainstorm multiple approaches
|
||||||
|
- Build comparison tables
|
||||||
|
- Sketch tradeoffs
|
||||||
|
- Recommend a path (if asked)
|
||||||
|
|
||||||
|
**Visualize**

```
┌─────────────────────────────────────────┐
│      Use ASCII diagrams liberally       │
├─────────────────────────────────────────┤
│                                         │
│   ┌────────┐         ┌────────┐         │
│   │ State  │────────▶│ State  │         │
│   │   A    │         │   B    │         │
│   └────────┘         └────────┘         │
│                                         │
│   System diagrams, state machines,      │
│   data flows, architecture sketches,    │
│   dependency graphs, comparison tables  │
│                                         │
└─────────────────────────────────────────┘
```
**Surface risks and unknowns**
- Identify what could go wrong
- Find gaps in understanding
- Suggest spikes or investigations

---

## OpenSpec Awareness

You have full context of the OpenSpec system. Use it naturally, don't force it.

### Check for context

At the start, quickly check what exists:

```bash
openspec list --json
```
This tells you:
- If there are active changes
- Their names, schemas, and status
- What the user might be working on

### When no change exists

Think freely. When insights crystallize, you might offer:

- "This feels solid enough to start a change. Want me to create a proposal?"
- Or keep exploring - no pressure to formalize

### When a change exists

If the user mentions a change or you detect one is relevant:

1. **Read existing artifacts for context**
   - `openspec/changes/<name>/proposal.md`
   - `openspec/changes/<name>/design.md`
   - `openspec/changes/<name>/tasks.md`
   - etc.
2. **Reference them naturally in conversation**
   - "Your design mentions using Redis, but we just realized SQLite fits better..."
   - "The proposal scopes this to premium users, but we're now thinking everyone..."

3. **Offer to capture when decisions are made**

   | Insight Type | Where to Capture |
   |--------------|------------------|
   | New requirement discovered | `specs/<capability>/spec.md` |
   | Requirement changed | `specs/<capability>/spec.md` |
   | Design decision made | `design.md` |
   | Scope changed | `proposal.md` |
   | New work identified | `tasks.md` |
   | Assumption invalidated | Relevant artifact |

   Example offers:
   - "That's a design decision. Capture it in design.md?"
   - "This is a new requirement. Add it to specs?"
   - "This changes scope. Update the proposal?"

4. **The user decides** - Offer and move on. Don't pressure. Don't auto-capture.
---

## What You Don't Have To Do

- Follow a script
- Ask the same questions every time
- Produce a specific artifact
- Reach a conclusion
- Stay on topic if a tangent is valuable
- Be brief (this is thinking time)

---

## Handling Different Entry Points

**User brings a vague idea:**
```
User: I'm thinking about adding real-time collaboration

You: Real-time collab is a big space. Let me think about this...

COLLABORATION SPECTRUM
════════════════════════════════════════════

 Awareness       Coordination        Sync
     │                │                │
     ▼                ▼                ▼
┌────────┐       ┌────────┐      ┌────────┐
│Presence│       │Cursors │      │  CRDT  │
│  "3    │       │ Multi  │      │Conflict│
│online" │       │ select │      │  free  │
└────────┘       └────────┘      └────────┘
     │                │                │
  trivial         moderate         complex

Where's your head at?
```
**User brings a specific problem:**
```
User: The auth system is a mess

You: [reads codebase]

┌─────────────────────────────────────────────┐
│              CURRENT AUTH FLOW              │
└─────────────────────────────────────────────┘
                      │
        ┌─────────────┼─────────────┐
        ▼             ▼             ▼
   ┌─────────┐   ┌─────────┐   ┌─────────┐
   │ Google  │   │ GitHub  │   │  Email  │
   │  OAuth  │   │  OAuth  │   │  Magic  │
   └────┬────┘   └────┬────┘   └────┬────┘
        │             │             │
        └─────────────┼─────────────┘
                      ▼
                ┌───────────┐
                │  Session  │
                └─────┬─────┘
                      │
                      ▼
                ┌───────────┐
                │   Perms   │
                └───────────┘

I see three tangles. Which one's burning?
```
**User is stuck mid-implementation:**
```
User: /opsx:explore add-auth-system
The OAuth integration is more complex than expected

You: [reads change artifacts]

You're on task 4: "Implement OAuth flow"

Let me trace what's involved...

[draws diagram, explores options, suggests paths]

Want to update the design to reflect this?
Or add a spike task to investigate?
```
**User wants to compare options:**
```
User: Should we use Postgres or SQLite?

You: Generic answer is boring. What's the context?

User: A CLI tool that tracks local dev environments

You: That changes everything.

┌─────────────────────────────────────────────────┐
│              CLI TOOL DATA STORAGE              │
└─────────────────────────────────────────────────┘

Key constraints:
  • No daemon running
  • Must work offline
  • Single user

              SQLite        Postgres
Deployment    embedded ✓    needs server ✗
Offline       yes ✓         no ✗
Single file   yes ✓         no ✗

SQLite. Not even close.

Unless... is there a sync component?
```
---

## Ending Discovery

There's no required ending. Discovery might:

- **Flow into a proposal**: "Ready to start? I can create a change proposal."
- **Result in artifact updates**: "Updated design.md with these decisions"
- **Just provide clarity**: User has what they need, moves on
- **Continue later**: "We can pick this up anytime"

When it feels like things are crystallizing, you might summarize:

```
## What We Figured Out

**The problem**: [crystallized understanding]

**The approach**: [if one emerged]

**Open questions**: [if any remain]

**Next steps** (if ready):
- Create a change proposal
- Keep exploring: just keep talking
```
But this summary is optional. Sometimes the thinking IS the value.

---

## Guardrails

- **Don't implement** - Never write code or implement features. Creating OpenSpec artifacts is fine, writing application code is not.
- **Don't fake understanding** - If something is unclear, dig deeper
- **Don't rush** - Discovery is thinking time, not task time
- **Don't force structure** - Let patterns emerge naturally
- **Don't auto-capture** - Offer to save insights, don't just do it
- **Do visualize** - A good diagram is worth many paragraphs
- **Do explore the codebase** - Ground discussions in reality
- **Do question assumptions** - Including the user's and your own
155 .claude/skills/openspec-project-status/SKILL.md Normal file
@@ -0,0 +1,155 @@
---
name: openspec-project-status
description: Show a human-readable summary of all OpenSpec changes — active, archived, artifact completion, and task progress. Use when the user wants to see the current state of the project's OpenSpec changes.
license: MIT
compatibility: Requires openspec CLI.
metadata:
  author: sentryagent
  version: "1.0"
  generatedBy: "1.2.0"
---

Show the full OpenSpec project status in a clear, human-readable format. No raw JSON — just a clean picture of where the project stands.

**Input**: No arguments required.

**Steps**
1. **Get all changes**

   Run:
   ```bash
   openspec list --json
   ```

   Separate results into:
   - **Active changes** (not in `archive/`)
   - **Archived changes** (in `archive/`)

   If the command fails or no changes exist, display a friendly empty state (see Output section).
2. **For each active change, gather full status**

   Run in parallel for all active changes:
   ```bash
   openspec status --change "<name>" --json
   ```

   Also read each change's `tasks.md` to extract:
   - Total task count
   - Completed tasks (`- [x]`)
   - Pending tasks (`- [ ]`)
   - Text of the **next pending task** (first `- [ ]` item)

   Also check for delta specs at `openspec/changes/<name>/specs/` — note if present.
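The `tasks.md` extraction above can be sketched with standard `grep`/`sed` (an assumption — the task file below is hypothetical, and this assumes tasks are written as top-level `- [ ]` / `- [x]` list items):

```shell
# Hypothetical tasks.md standing in for a real change's checklist.
tasks=$(mktemp)
cat > "$tasks" <<'EOF'
- [x] Set up project scaffolding
- [x] Add database schema
- [ ] Implement OAuth flow
- [ ] Write integration tests
EOF

# Count all checklist items, then just the completed ones.
total=$(grep -c '^- \[[ x]\]' "$tasks")
completed=$(grep -c '^- \[x\]' "$tasks")
pending=$((total - completed))

# First unchecked item is the "next pending task"; strip the checkbox prefix.
next=$(grep -m1 '^- \[ \]' "$tasks" | sed 's/^- \[ \] //')

echo "$completed/$total complete ($pending pending)"
echo "Next: $next"
rm -f "$tasks"
```

Nested or differently indented checklists would need looser patterns; this is only the minimal shape of the counts the report needs.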
3. **For archived changes**

   List them by archive date (newest first). No need to read full status — just show name and archive date from the folder name (`YYYY-MM-DD-<name>`).
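Because the ISO date prefix sorts lexicographically, newest-first ordering falls out of a plain reverse sort. A sketch (the archive folders below are hypothetical, following the `YYYY-MM-DD-<name>` convention above):

```shell
# Hypothetical archive layout using the YYYY-MM-DD-<name> folder convention.
archive=$(mktemp -d)
mkdir -p "$archive/2026-03-15-setup-ci-pipeline" "$archive/2026-03-20-add-initial-auth"

# ISO date prefixes sort lexicographically, so reverse sort = newest first.
# Chars 1-10 are the date, char 11 is the separator, chars 12+ are the name.
listing=$(ls "$archive" | sort -r | while read -r dir; do
  printf '%s  %s\n' "$(printf %s "$dir" | cut -c1-10)" "$(printf %s "$dir" | cut -c12-)"
done)

echo "$listing"
```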
4. **Render the human-readable status report**

   Use the output format defined below.

**Output Format**
```
## OpenSpec Project Status

### Active Changes (<count>)

────────────────────────────────────────
<change-name>
────────────────────────────────────────
Schema: <schema-name>
Phase:  <inferred phase label>

Artifacts
  ✓ proposal   done
  ✓ design     done
  ◌ tasks      pending

Tasks  <done>/<total> complete
  ████████░░░░░░░░ 50%
  Next: "<text of next pending task>"

Delta Specs  <present / none>

────────────────────────────────────────

<Repeat for each active change>

---

### Archived Changes (<count>)

2026-03-20  add-initial-auth
2026-03-15  setup-ci-pipeline

---

### Summary

Active changes: <N>
  Ready to apply: <N>
  In progress: <N>
  Complete: <N>
Archived: <N>
```
**Phase inference rules** (derive strictly from actual artifact + task state):
- `Proposing` — proposal artifact is not done
- `Designing` — proposal done, design not done
- `Speccing` — design done, tasks artifact not done
- `Ready to Implement` — all artifacts done, 0 tasks complete
- `In Progress` — all artifacts done, some tasks complete but not all
- `Complete` — all artifacts done, all tasks complete (not yet archived)
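The rules above form a first-match decision ladder. A sketch, assuming three boolean artifact flags and task counts already extracted (the values here are hypothetical):

```shell
# Hypothetical state mirroring the phase inference rules above.
proposal_done=true
design_done=true
tasks_done=true
tasks_total=4
tasks_complete=2

# Evaluate the ladder top-down; the first matching rule wins.
if   [ "$proposal_done" != true ]; then phase="Proposing"
elif [ "$design_done"   != true ]; then phase="Designing"
elif [ "$tasks_done"    != true ]; then phase="Speccing"
elif [ "$tasks_complete" -eq 0 ]; then phase="Ready to Implement"
elif [ "$tasks_complete" -lt "$tasks_total" ]; then phase="In Progress"
else phase="Complete"
fi

echo "$phase"
```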
**Progress bar rules:**
- 16 chars wide: `█` per completed segment, `░` for remaining
- Show percentage after bar
- If 0 tasks: show `No tasks yet`
- If all tasks done: show `████████████████ 100% All done!`
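The bar rule above is plain integer math. A minimal sketch (the counts are hypothetical; the zero-task and all-done special cases are left out):

```shell
# Hypothetical counts; assumes completed <= total and total > 0.
completed=2
total=4

filled=$((completed * 16 / total))   # how many of the 16 chars are solid
pct=$((completed * 100 / total))

bar=""
i=0
while [ "$i" -lt 16 ]; do
  if [ "$i" -lt "$filled" ]; then bar="${bar}█"; else bar="${bar}░"; fi
  i=$((i + 1))
done

echo "$bar ${pct}%"
```

With 2 of 4 tasks done this renders the `████████░░░░░░░░ 50%` example shown in the output format.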
**Artifact status icons:**
- `✓` — done
- `◌` — pending / not started
**Output: No active changes**

```
## OpenSpec Project Status

### Active Changes (0)

No active changes. Start one with /opsx:propose

---

### Archived Changes (<count>)

...

### Summary

Active changes: 0
Archived: <N>
```
**Output: OpenSpec CLI unavailable**

```
## OpenSpec Project Status

OpenSpec CLI not available. Cannot read change data.

Make sure `openspec` is installed and accessible in your PATH.
```
**Guardrails**
- Never show raw JSON — always translate to human-readable output
- Never guess artifact or task state — always read from actual files and CLI output
- If `tasks.md` does not exist for a change, show `No tasks file` instead of 0/0
- Archived changes are display-only — never modify them
- Phase labels must be inferred strictly from actual artifact + task state, not assumed
- If `openspec status` fails for a specific change, show that change with `Status unavailable` and continue
110 .claude/skills/openspec-propose/SKILL.md Normal file
@@ -0,0 +1,110 @@
---
name: openspec-propose
description: Propose a new change with all artifacts generated in one step. Use when the user wants to quickly describe what they want to build and get a complete proposal with design, specs, and tasks ready for implementation.
license: MIT
compatibility: Requires openspec CLI.
metadata:
  author: openspec
  version: "1.0"
  generatedBy: "1.2.0"
---
Propose a new change - create the change and generate all artifacts in one step.

I'll create a change with artifacts:
- proposal.md (what & why)
- design.md (how)
- tasks.md (implementation steps)

When ready to implement, run /opsx:apply

---

**Input**: The user's request should include a change name (kebab-case) OR a description of what they want to build.

**Steps**
1. **If no clear input provided, ask what they want to build**

   Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
   > "What change do you want to work on? Describe what you want to build or fix."

   From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).

   **IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
2. **Create the change directory**
   ```bash
   openspec new change "<name>"
   ```
   This creates a scaffolded change at `openspec/changes/<name>/` with `.openspec.yaml`.

3. **Get the artifact build order**
   ```bash
   openspec status --change "<name>" --json
   ```
   Parse the JSON to get:
   - `applyRequires`: array of artifact IDs needed before implementation (e.g., `["tasks"]`)
   - `artifacts`: list of all artifacts with their status and dependencies
4. **Create artifacts in sequence until apply-ready**

   Use the **TodoWrite tool** to track progress through the artifacts.

   Loop through artifacts in dependency order (artifacts with no pending dependencies first):

   a. **For each artifact that is `ready` (dependencies satisfied)**:
      - Get instructions:
        ```bash
        openspec instructions <artifact-id> --change "<name>" --json
        ```
      - The instructions JSON includes:
        - `context`: Project background (constraints for you - do NOT include in output)
        - `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
        - `template`: The structure to use for your output file
        - `instruction`: Schema-specific guidance for this artifact type
        - `outputPath`: Where to write the artifact
        - `dependencies`: Completed artifacts to read for context
      - Read any completed dependency files for context
      - Create the artifact file using `template` as the structure
      - Apply `context` and `rules` as constraints - but do NOT copy them into the file
      - Show brief progress: "Created <artifact-id>"

   b. **Continue until all `applyRequires` artifacts are complete**
      - After creating each artifact, re-run `openspec status --change "<name>" --json`
      - Check if every artifact ID in `applyRequires` has `status: "done"` in the artifacts array
      - Stop when all `applyRequires` artifacts are done

   c. **If an artifact requires user input** (unclear context):
      - Use **AskUserQuestion tool** to clarify
      - Then continue with creation
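The apply-ready check in step 4b can be sketched with `jq` (an assumption — the skill only requires the `openspec` CLI, and the payload below is hypothetical, shaped after the fields described above):

```shell
# Hypothetical status payload with the applyRequires/artifacts fields described above.
status_json='{
  "applyRequires": ["proposal", "design", "tasks"],
  "artifacts": [
    {"id": "proposal", "status": "done"},
    {"id": "design",   "status": "done"},
    {"id": "tasks",    "status": "pending"}
  ]
}'

# True only when every applyRequires id has status "done".
ready=$(printf '%s' "$status_json" | jq '
  [.applyRequires[] as $id
   | .artifacts[] | select(.id == $id) | .status == "done"
  ] | all')

echo "apply-ready: $ready"
```

Here `tasks` is still pending, so the loop would run at least once more before stopping.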
5. **Show final status**
   ```bash
   openspec status --change "<name>"
   ```
**Output**

After completing all artifacts, summarize:
- Change name and location
- List of artifacts created with brief descriptions
- What's ready: "All artifacts created! Ready for implementation."
- Prompt: "Run `/opsx:apply` or ask me to implement to start working on the tasks."
**Artifact Creation Guidelines**

- Follow the `instruction` field from `openspec instructions` for each artifact type
- The schema defines what each artifact should contain - follow it
- Read dependency artifacts for context before creating new ones
- Use `template` as the structure for your output file - fill in its sections
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
  - Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
  - These guide what you write, but should never appear in the output
**Guardrails**
- Create ALL artifacts needed for implementation (as defined by schema's `apply.requires`)
- Always read dependency artifacts before creating a new one
- If context is critically unclear, ask the user - but prefer making reasonable decisions to keep momentum
- If a change with that name already exists, ask if user wants to continue it or create a new one
- Verify each artifact file exists after writing before proceeding to next
@@ -1,7 +1,7 @@
-# Dependencies
+# Dependencies — never bake into image
 node_modules/
 
-# Compiled output (built inside Docker)
+# Compiled output — built inside Docker
 dist/
 
 # Test artifacts
@@ -10,7 +10,18 @@ tests/
 
 # Environment and secrets — never bake into image
 .env
+.env.*
 *.pem
+*.key
+*.cert
+
+# Docker files — not needed inside the image
+compose.yaml
+compose.*.yaml
+docker-compose.yml
+docker-compose*.yml
+Dockerfile*
+.dockerignore
 
 # Development workspace
 .cto-workspace/
@@ -21,11 +32,23 @@ next_steps.md
 # Git
 .git/
 .gitignore
+.gitattributes
 
 # Editor
 .vscode/
 .idea/
+*.swp
+*.swo
+
+# OS artifacts
+.DS_Store
+Thumbs.db
 
 # Logs
 *.log
 npm-debug.log*
+logs/
+
+# Temporary directories
+tmp/
+temp/
79 .env.example Normal file
@@ -0,0 +1,79 @@
# SentryAgent.ai AgentIdP — Environment Variables
# Copy this file to .env and fill in the values for your environment.

# ── Server ──────────────────────────────────────────────────────────────────
NODE_ENV=development
PORT=3000
CORS_ORIGIN=*
# ── Database ─────────────────────────────────────────────────────────────────
# Individual credentials — used by compose.yaml to construct DATABASE_URL
POSTGRES_USER=sentryagent
POSTGRES_PASSWORD=change-me-in-production
POSTGRES_DB=sentryagent_idp

DATABASE_URL=postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@localhost:5432/${POSTGRES_DB}

# PostgreSQL connection pool tuning (task 2.1)
DB_POOL_MAX=20
DB_POOL_MIN=2
DB_POOL_IDLE_TIMEOUT_MS=30000
DB_POOL_CONNECTION_TIMEOUT_MS=5000
# ── Redis ────────────────────────────────────────────────────────────────────
REDIS_URL=redis://localhost:6379

# Rate limiting (task 1.2 / 1.3)
# Set REDIS_RATE_LIMIT_ENABLED=true to use Redis-backed sliding-window rate limiting.
# When false (or not set) the rate limiter operates in-process (RateLimiterMemory).
REDIS_RATE_LIMIT_ENABLED=true

# Sliding-window rate-limit configuration (task 1.3)
RATE_LIMIT_WINDOW_MS=60000
RATE_LIMIT_MAX_REQUESTS=100
# ── JWT ──────────────────────────────────────────────────────────────────────
# RS256 key pair — generate with:
#   openssl genrsa -out private.pem 2048
#   openssl rsa -in private.pem -pubout -out public.pem
JWT_PRIVATE_KEY="-----BEGIN RSA PRIVATE KEY-----\n...\n-----END RSA PRIVATE KEY-----"
JWT_PUBLIC_KEY="-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----"
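The keys the `openssl` commands above produce are multi-line PEM files, while the variables above expect a single `\n`-escaped line. One way to flatten them (a sketch with placeholder key material; any equivalent tool works):

```shell
# Hypothetical PEM standing in for a real private.pem.
pem='-----BEGIN RSA PRIVATE KEY-----
MIIEow...
-----END RSA PRIVATE KEY-----'

# Replace real newlines with literal \n so the key fits on one .env line.
oneline=$(printf '%s' "$pem" | awk '{printf "%s\\n", $0}')

echo "JWT_PRIVATE_KEY=\"$oneline\""
```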
# ── HashiCorp Vault (optional) ────────────────────────────────────────────────
# When set, new agent credentials are stored in Vault KV v2 instead of bcrypt.
# VAULT_ADDR=http://127.0.0.1:8200
# VAULT_TOKEN=root
# VAULT_KV_MOUNT=secret

# ── OPA (optional) ───────────────────────────────────────────────────────────
# URL of a running OPA server used for policy evaluation health checks.
# OPA_URL=http://localhost:8181

# ── Kafka (optional) ─────────────────────────────────────────────────────────
# Comma-separated list of Kafka brokers. Leave unset to disable Kafka.
# KAFKA_BROKERS=localhost:9092

# ── TLS ──────────────────────────────────────────────────────────────────────
# In production, set ENFORCE_TLS=true to redirect all HTTP requests to HTTPS.
# ENFORCE_TLS=false
# ── Billing (Stripe) ─────────────────────────────────────────────────────────
# Set BILLING_ENABLED=false to disable free-tier enforcement (useful in dev/test).
BILLING_ENABLED=false
STRIPE_SECRET_KEY=sk_test_...
STRIPE_WEBHOOK_SECRET=whsec_...
STRIPE_PRICE_ID=price_...

# ── Monitoring (Grafana) ─────────────────────────────────────────────────────
# Used by compose.monitoring.yaml — must be changed from default
GF_ADMIN_PASSWORD=change-me-in-production

# ── Phase 6 Feature Flags ─────────────────────────────────────────────────────
# Set ANALYTICS_ENABLED=false to disable /api/v1/analytics/* routes (returns 404).
ANALYTICS_ENABLED=true

# Set TIER_ENFORCEMENT=false to disable tier-based rate limit enforcement.
TIER_ENFORCEMENT=true

# Set COMPLIANCE_ENABLED=false to disable /api/v1/compliance/* routes (returns 404).
COMPLIANCE_ENABLED=true
110 .github/actions/issue-token/README.md vendored Normal file
@@ -0,0 +1,110 @@
# sentryagent/issue-token

Issues a SentryAgent.ai OAuth2 Bearer token for an existing agent from a GitHub Actions workflow.

No long-lived API credentials are required. The action uses a GitHub-issued OIDC token to authenticate with the SentryAgent.ai AgentIdP via `POST /api/v1/oidc/token`. The returned access token is automatically masked with `core.setSecret()` so it never appears in plaintext in workflow logs.
## Prerequisites

### 1. Register the agent

The agent must already exist in SentryAgent.ai. If you need to create the agent in CI, use [`sentryagent/register-agent@v1`](../register-agent/README.md) first.
### 2. Configure an OIDC Trust Policy for the agent

A trust policy linking the repository to the specific agent must be registered:

```bash
curl -X POST https://idp.sentryagent.ai/api/v1/oidc/trust-policies \
  -H "Authorization: Bearer <your-admin-token>" \
  -H "Content-Type: application/json" \
  -d '{
    "provider": "github",
    "repository": "org/your-repo",
    "branch": "main",
    "agentId": "<agent-uuid>"
  }'
```

Omit `branch` to allow any branch to issue tokens for this agent.
### 3. Grant `id-token: write` permission

The workflow must have permission to request a GitHub OIDC token:

```yaml
permissions:
  id-token: write
  contents: read
```
## Inputs

| Input | Required | Description |
|-------|----------|-------------|
| `api-url` | Yes | Base URL of the SentryAgent.ai API (e.g. `https://idp.sentryagent.ai`) |
| `agent-id` | Yes | UUID of the agent for which to issue an access token |

## Outputs

| Output | Description |
|--------|-------------|
| `access-token` | Short-lived Bearer token. Masked in all log output. |
| `expires-at` | ISO 8601 timestamp indicating when the token expires. |
## Example workflow

```yaml
name: Deploy with Agent Token

on:
  push:
    branches: [main]

permissions:
  id-token: write
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Issue SentryAgent access token
        id: token
        uses: sentryagent/issue-token@v1
        with:
          api-url: https://idp.sentryagent.ai
          agent-id: ${{ vars.SENTRY_AGENT_ID }}

      - name: Call authenticated API
        run: |
          curl -H "Authorization: Bearer ${{ steps.token.outputs.access-token }}" \
            https://my-service.example.com/deploy
```
## Troubleshooting

**HTTP 403 — Trust policy violation**
No trust policy exists for this repository + agent combination. Register a trust policy using the Prerequisites steps above.

**HTTP 403 — Branch not permitted**
A trust policy exists but specifies a branch constraint that does not match the current workflow's branch. Add a policy for the current branch, or remove the branch constraint to allow all branches.

**Failed to obtain a GitHub OIDC token**
Ensure `id-token: write` is set in the workflow's `permissions` block.

**Token expires too quickly**
The default token TTL is set by the SentryAgent.ai server configuration. Check `expires-at` and re-issue a token before it expires if your workflow is long-running.
## Full documentation

[https://docs.sentryagent.ai/github-actions](https://docs.sentryagent.ai/github-actions)
153 .github/actions/issue-token/action.js vendored Normal file
@@ -0,0 +1,153 @@
/**
 * issue-token GitHub Action script.
 *
 * Flow:
 *   1. Request a GitHub OIDC token via @actions/core.getIDToken()
 *   2. Exchange the OIDC token for a SentryAgent.ai access token via POST /api/v1/oidc/token
 *   3. Set outputs: access-token (masked) and expires-at (ISO 8601)
 *
 * The access token is immediately registered with core.setSecret() so it never
 * appears in plaintext in workflow logs.
 *
 * Error handling:
 *   - OIDC exchange failures emit a clear message with a link to the trust policy setup docs
 */

'use strict';
const core = require('@actions/core');
const { HttpClient } = require('@actions/http-client');
/**
 * Exchanges a GitHub OIDC JWT for a SentryAgent.ai access token for a specific agent.
 *
 * @param {string} apiUrl - Base URL of the SentryAgent.ai AgentIdP API.
 * @param {string} oidcToken - GitHub OIDC JWT obtained from core.getIDToken().
 * @param {string} agentId - UUID of the agent for which to issue a token.
 * @returns {Promise<{ accessToken: string; expiresIn: number }>} The access token and its TTL in seconds.
 * @throws {Error} If the exchange fails, with a message including trust policy setup instructions.
 */
async function exchangeOIDCToken(apiUrl, oidcToken, agentId) {
  const client = new HttpClient('sentryagent-issue-token/1.0');
  const url = `${apiUrl}/api/v1/oidc/token`;

  const body = JSON.stringify({
    provider: 'github',
    token: oidcToken,
    agentId,
  });
  let response;
  try {
    response = await client.post(url, body, {
      'Content-Type': 'application/json',
      Accept: 'application/json',
    });
  } catch (err) {
    throw new Error(
      `Failed to reach the SentryAgent.ai OIDC token endpoint at ${url}. ` +
        `Check that the api-url input is correct and the API is reachable.\n` +
        `Underlying error: ${err instanceof Error ? err.message : String(err)}`,
    );
  }

  const rawBody = await response.readBody();
  const statusCode = response.message.statusCode ?? 0;
  if (statusCode === 403) {
    throw new Error(
      'GitHub OIDC token exchange was rejected with HTTP 403 (Forbidden). ' +
        'This usually means no trust policy has been registered for this repository.\n\n' +
        'To fix this, register a trust policy by calling:\n' +
        `  POST ${apiUrl}/api/v1/oidc/trust-policies\n` +
        '  Body: { "provider": "github", "repository": "org/repo", "agentId": "<agent-id>" }\n\n' +
        'For full setup instructions, visit: https://docs.sentryagent.ai/github-actions#trust-policy',
    );
  }
if (statusCode < 200 || statusCode >= 300) {
|
||||||
|
let detail = rawBody;
|
||||||
|
try {
|
||||||
|
const parsed = JSON.parse(rawBody);
|
||||||
|
detail = parsed.message ?? parsed.error_description ?? rawBody;
|
||||||
|
} catch {
|
||||||
|
// use rawBody as-is
|
||||||
|
}
|
||||||
|
throw new Error(
|
||||||
|
`OIDC token exchange failed with HTTP ${statusCode}: ${detail}\n` +
|
||||||
|
'For trust policy setup instructions, visit: https://docs.sentryagent.ai/github-actions#trust-policy',
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
let tokenData;
|
||||||
|
try {
|
||||||
|
tokenData = JSON.parse(rawBody);
|
||||||
|
} catch {
|
||||||
|
throw new Error(`OIDC token exchange returned non-JSON response: ${rawBody}`);
|
||||||
|
}
|
||||||
|
|
||||||
|
if (typeof tokenData.access_token !== 'string' || tokenData.access_token.length === 0) {
|
||||||
|
throw new Error('OIDC token exchange response did not include an access_token.');
|
||||||
|
}
|
||||||
|
|
||||||
|
const expiresIn = typeof tokenData.expires_in === 'number' ? tokenData.expires_in : 3600;
|
||||||
|
|
||||||
|
return { accessToken: tokenData.access_token, expiresIn };
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Computes an ISO 8601 expiry timestamp from a TTL in seconds.
|
||||||
|
*
|
||||||
|
* @param {number} expiresInSeconds - Number of seconds until the token expires.
|
||||||
|
* @returns {string} ISO 8601 timestamp string.
|
||||||
|
*/
|
||||||
|
function computeExpiresAt(expiresInSeconds) {
|
||||||
|
return new Date(Date.now() + expiresInSeconds * 1000).toISOString();
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Main entry point for the issue-token GitHub Action.
|
||||||
|
*
|
||||||
|
* @returns {Promise<void>}
|
||||||
|
*/
|
||||||
|
async function run() {
|
||||||
|
try {
|
||||||
|
// Read inputs
|
||||||
|
const apiUrl = core.getInput('api-url', { required: true }).replace(/\/$/, '');
|
||||||
|
const agentId = core.getInput('agent-id', { required: true });
|
||||||
|
|
||||||
|
core.info(`Requesting GitHub OIDC token for audience: ${apiUrl}`);
|
||||||
|
let oidcToken;
|
||||||
|
try {
|
||||||
|
oidcToken = await core.getIDToken(apiUrl);
|
||||||
|
} catch (err) {
|
||||||
|
throw new Error(
|
||||||
|
'Failed to obtain a GitHub OIDC token. ' +
|
||||||
|
"Ensure the workflow has 'id-token: write' permission in its permissions block.\n\n" +
|
||||||
|
'Example:\n' +
|
||||||
|
'permissions:\n' +
|
||||||
|
' id-token: write\n' +
|
||||||
|
' contents: read\n\n' +
|
||||||
|
`Underlying error: ${err instanceof Error ? err.message : String(err)}\n` +
|
||||||
|
'For setup instructions, visit: https://docs.sentryagent.ai/github-actions#trust-policy',
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
core.info(`Exchanging GitHub OIDC token for SentryAgent.ai access token (agent: ${agentId})...`);
|
||||||
|
const { accessToken, expiresIn } = await exchangeOIDCToken(apiUrl, oidcToken, agentId);
|
||||||
|
|
||||||
|
// Mask the token immediately — must happen before any logging or output
|
||||||
|
core.setSecret(accessToken);
|
||||||
|
|
||||||
|
const expiresAt = computeExpiresAt(expiresIn);
|
||||||
|
|
||||||
|
core.setOutput('access-token', accessToken);
|
||||||
|
core.setOutput('expires-at', expiresAt);
|
||||||
|
|
||||||
|
core.info(`Access token issued successfully. Expires at: ${expiresAt}`);
|
||||||
|
} catch (err) {
|
||||||
|
core.setFailed(err instanceof Error ? err.message : String(err));
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
run();
|
||||||
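The TTL handling above (a default of 3600 seconds when `expires_in` is absent, converted to an ISO 8601 timestamp) is small enough to exercise in isolation. A standalone sketch for illustration — `resolveTtl` is a hypothetical helper extracted here, not a function in the action itself:

```javascript
// Mirrors computeExpiresAt from the action: TTL in seconds → ISO 8601 timestamp.
function computeExpiresAt(expiresInSeconds) {
  return new Date(Date.now() + expiresInSeconds * 1000).toISOString();
}

// Hypothetical helper mirroring the action's fallback: a non-numeric
// expires_in in the token response defaults to 3600 seconds.
function resolveTtl(tokenData) {
  return typeof tokenData.expires_in === 'number' ? tokenData.expires_in : 3600;
}

console.log(resolveTtl({ access_token: 'abc', expires_in: 900 })); // 900
console.log(resolveTtl({ access_token: 'abc' }));                  // 3600
// ISO 8601 strings compare lexicographically, so a future expiry sorts after "now":
console.log(computeExpiresAt(60) > new Date().toISOString());      // true
```

Because the expiry is derived client-side from `Date.now()`, it is approximate; consumers should treat `expires-at` as a hint for when to re-issue, not a hard cutoff.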
.github/actions/issue-token/action.yml (vendored, new file, 37 lines)
@@ -0,0 +1,37 @@
name: 'SentryAgent Issue Token'
description: >
  Issues a SentryAgent.ai OAuth2 access token for an agent using GitHub OIDC
  token exchange. No long-lived API credentials required. The issued access
  token is automatically masked in GitHub Actions logs via core.setSecret().

author: 'SentryAgent.ai'

branding:
  icon: 'key'
  color: 'blue'

inputs:
  api-url:
    description: >
      Base URL of the SentryAgent.ai AgentIdP API.
      Example: https://idp.sentryagent.ai
    required: true
  agent-id:
    description: >
      The UUID of the agent for which to issue an access token.
      Obtain this from the register-agent action output or from the API.
    required: true

outputs:
  access-token:
    description: >
      A short-lived Bearer access token for the specified agent.
      The token value is masked in all GitHub Actions log output.
  expires-at:
    description: >
      ISO 8601 timestamp indicating when the access token expires.
      Use this to decide when to re-issue a fresh token.

runs:
  using: 'node20'
  main: 'action.js'
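Given the inputs and outputs declared in the issue-token action.yml, a minimal consuming workflow step could look like the following. This is a sketch: the `sentryagent/issue-token@v1` reference, the agent UUID, and the final `curl` call are illustrative assumptions, not confirmed by this diff.

```yaml
permissions:
  id-token: write   # required for core.getIDToken()
  contents: read

steps:
  - name: Issue SentryAgent token
    id: token
    uses: sentryagent/issue-token@v1
    with:
      api-url: https://idp.sentryagent.ai
      agent-id: 123e4567-e89b-12d3-a456-426614174000  # illustrative UUID

  # The access token is masked in logs; pass it to later steps via env.
  - name: Call the API with the issued token
    env:
      SENTRY_AGENT_TOKEN: ${{ steps.token.outputs.access-token }}
    run: |
      echo "Token expires at ${{ steps.token.outputs.expires-at }}"
      curl -H "Authorization: Bearer $SENTRY_AGENT_TOKEN" https://idp.sentryagent.ai/api/v1/agents
```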
.github/actions/register-agent/README.md (vendored, new file, 96 lines)
@@ -0,0 +1,96 @@
# sentryagent/register-agent

Registers a new AI agent in SentryAgent.ai from a GitHub Actions workflow.

No long-lived API credentials are required. The action uses a GitHub-issued OIDC
token to authenticate with the SentryAgent.ai AgentIdP via `POST /oidc/token`, then
calls `POST /agents` to create the agent.

## Prerequisites

### 1. Configure an OIDC Trust Policy

Before this action can exchange tokens, a trust policy must be registered in
SentryAgent.ai for the repository that will run the workflow.

```bash
curl -X POST https://idp.sentryagent.ai/api/v1/oidc/trust-policies \
  -H "Authorization: Bearer <your-admin-token>" \
  -H "Content-Type: application/json" \
  -d '{
    "provider": "github",
    "repository": "org/your-repo",
    "branch": "main"
  }'
```

Omit `branch` to allow any branch to register agents from this repository.

### 2. Grant `id-token: write` permission

The workflow must have permission to request a GitHub OIDC token:

```yaml
permissions:
  id-token: write
  contents: read
```

## Inputs

| Input | Required | Description |
|-------|----------|-------------|
| `api-url` | Yes | Base URL of the SentryAgent.ai API (e.g. `https://idp.sentryagent.ai`) |
| `agent-name` | Yes | Unique name (email format) for the new agent |
| `agent-description` | No | Human-readable description of the agent's purpose |

## Outputs

| Output | Description |
|--------|-------------|
| `agent-id` | UUID of the newly registered agent. Use in subsequent steps to issue tokens or manage credentials. |

## Example workflow

```yaml
name: Register Agent

on:
  workflow_dispatch:

permissions:
  id-token: write
  contents: read

jobs:
  register:
    runs-on: ubuntu-latest
    steps:
      - name: Register SentryAgent
        id: register
        uses: sentryagent/register-agent@v1
        with:
          api-url: https://idp.sentryagent.ai
          agent-name: my-ci-agent@acme.com
          agent-description: CI agent for the acme/my-repo build pipeline

      - name: Print agent ID
        run: echo "Registered agent ${{ steps.register.outputs.agent-id }}"
```

## Troubleshooting

**HTTP 403 — Trust policy not configured**
Register a trust policy for this repository first. See the Prerequisites section above.

**Failed to obtain a GitHub OIDC token**
Ensure `id-token: write` is set in the workflow's `permissions` block.

**Agent registration failed with HTTP 401**
The OIDC token exchange succeeded but the returned access token was rejected by
`POST /agents`. Check that the SentryAgent.ai API version matches and the
bootstrap token has `agents:write` scope.

## Full documentation

[https://docs.sentryagent.ai/github-actions](https://docs.sentryagent.ai/github-actions)
.github/actions/register-agent/action.js (vendored, new file, 200 lines)
@@ -0,0 +1,200 @@
/**
 * register-agent GitHub Action script.
 *
 * Flow:
 *   1. Request a GitHub OIDC token via @actions/core.getIDToken()
 *   2. Exchange the OIDC token for a SentryAgent.ai access token via POST /oidc/token
 *   3. Register a new agent via POST /agents using the access token
 *   4. Set the `agent-id` output
 *
 * Error handling:
 *   - OIDC exchange failures emit a clear message with a link to the trust policy setup docs
 *   - Agent registration failures surface the API error message
 */

'use strict';

const core = require('@actions/core');
const { HttpClient, BearerCredentialHandler } = require('@actions/http-client');

/**
 * Exchanges a GitHub OIDC JWT for a SentryAgent.ai access token.
 *
 * @param {string} apiUrl - Base URL of the SentryAgent.ai AgentIdP API.
 * @param {string} oidcToken - GitHub OIDC JWT obtained from core.getIDToken().
 * @returns {Promise<string>} The SentryAgent.ai access token.
 * @throws {Error} If the exchange fails, with a message including trust policy setup instructions.
 */
async function exchangeOIDCToken(apiUrl, oidcToken) {
  const client = new HttpClient('sentryagent-register-agent/1.0');
  const url = `${apiUrl}/api/v1/oidc/token`;

  const body = JSON.stringify({
    provider: 'github',
    token: oidcToken,
  });

  let response;
  try {
    response = await client.post(url, body, {
      'Content-Type': 'application/json',
      Accept: 'application/json',
    });
  } catch (err) {
    throw new Error(
      `Failed to reach the SentryAgent.ai OIDC token endpoint at ${url}. ` +
        `Check that the api-url input is correct and the API is reachable.\n` +
        `Underlying error: ${err instanceof Error ? err.message : String(err)}`,
    );
  }

  const rawBody = await response.readBody();
  const statusCode = response.message.statusCode ?? 0;

  if (statusCode === 403) {
    throw new Error(
      'GitHub OIDC token exchange was rejected with HTTP 403 (Forbidden). ' +
        'This usually means no trust policy has been registered for this repository.\n\n' +
        'To fix this, register a trust policy by calling:\n' +
        `  POST ${apiUrl}/api/v1/oidc/trust-policies\n` +
        '  Body: { "provider": "github", "repository": "org/repo", "agentId": "<agent-id>" }\n\n' +
        'For full setup instructions, visit: https://docs.sentryagent.ai/github-actions#trust-policy',
    );
  }

  if (statusCode < 200 || statusCode >= 300) {
    let detail = rawBody;
    try {
      const parsed = JSON.parse(rawBody);
      detail = parsed.message ?? parsed.error_description ?? rawBody;
    } catch {
      // use rawBody as-is
    }
    throw new Error(
      `OIDC token exchange failed with HTTP ${statusCode}: ${detail}\n` +
        'For trust policy setup instructions, visit: https://docs.sentryagent.ai/github-actions#trust-policy',
    );
  }

  let tokenData;
  try {
    tokenData = JSON.parse(rawBody);
  } catch {
    throw new Error(`OIDC token exchange returned non-JSON response: ${rawBody}`);
  }

  if (typeof tokenData.access_token !== 'string' || tokenData.access_token.length === 0) {
    throw new Error('OIDC token exchange response did not include an access_token.');
  }

  return tokenData.access_token;
}

/**
 * Registers a new agent via POST /agents.
 *
 * @param {string} apiUrl - Base URL of the SentryAgent.ai AgentIdP API.
 * @param {string} accessToken - A valid SentryAgent.ai Bearer access token.
 * @param {string} agentName - Email (unique name) for the new agent.
 * @param {string} agentDescription - Optional description stored as the owner field.
 * @returns {Promise<string>} The UUID of the newly registered agent.
 * @throws {Error} If the API returns a non-2xx response.
 */
async function registerAgent(apiUrl, accessToken, agentName, agentDescription) {
  const auth = new BearerCredentialHandler(accessToken);
  const client = new HttpClient('sentryagent-register-agent/1.0', [auth]);
  const url = `${apiUrl}/api/v1/agents`;

  const payload = {
    email: agentName,
    agentType: 'custom',
    version: '1.0.0',
    capabilities: [],
    owner: agentDescription || agentName,
    deploymentEnv: 'production',
  };

  let response;
  try {
    response = await client.post(url, JSON.stringify(payload), {
      'Content-Type': 'application/json',
      Accept: 'application/json',
    });
  } catch (err) {
    throw new Error(
      `Failed to reach the SentryAgent.ai agents endpoint at ${url}.\n` +
        `Underlying error: ${err instanceof Error ? err.message : String(err)}`,
    );
  }

  const rawBody = await response.readBody();
  const statusCode = response.message.statusCode ?? 0;

  if (statusCode < 200 || statusCode >= 300) {
    let detail = rawBody;
    try {
      const parsed = JSON.parse(rawBody);
      detail = parsed.message ?? parsed.error ?? rawBody;
    } catch {
      // use rawBody as-is
    }
    throw new Error(`Agent registration failed with HTTP ${statusCode}: ${detail}`);
  }

  let agentData;
  try {
    agentData = JSON.parse(rawBody);
  } catch {
    throw new Error(`Agent registration returned non-JSON response: ${rawBody}`);
  }

  if (typeof agentData.agentId !== 'string' || agentData.agentId.length === 0) {
    throw new Error('Agent registration response did not include an agentId.');
  }

  return agentData.agentId;
}

/**
 * Main entry point for the register-agent GitHub Action.
 *
 * @returns {Promise<void>}
 */
async function run() {
  try {
    // Read inputs
    const apiUrl = core.getInput('api-url', { required: true }).replace(/\/$/, '');
    const agentName = core.getInput('agent-name', { required: true });
    const agentDescription = core.getInput('agent-description') || '';

    core.info(`Requesting GitHub OIDC token for audience: ${apiUrl}`);
    let oidcToken;
    try {
      oidcToken = await core.getIDToken(apiUrl);
    } catch (err) {
      throw new Error(
        'Failed to obtain a GitHub OIDC token. ' +
          "Ensure the workflow has 'id-token: write' permission in its permissions block.\n\n" +
          'Example:\n' +
          'permissions:\n' +
          '  id-token: write\n' +
          '  contents: read\n\n' +
          `Underlying error: ${err instanceof Error ? err.message : String(err)}\n` +
          'For setup instructions, visit: https://docs.sentryagent.ai/github-actions#trust-policy',
      );
    }

    core.info('Exchanging GitHub OIDC token for SentryAgent.ai access token...');
    const accessToken = await exchangeOIDCToken(apiUrl, oidcToken);

    core.info(`Registering agent: ${agentName}`);
    const agentId = await registerAgent(apiUrl, accessToken, agentName, agentDescription);

    core.setOutput('agent-id', agentId);
    core.info(`Agent registered successfully. agent-id: ${agentId}`);
  } catch (err) {
    core.setFailed(err instanceof Error ? err.message : String(err));
  }
}

run();
.github/actions/register-agent/action.yml (vendored, new file, 39 lines)
@@ -0,0 +1,39 @@
name: 'SentryAgent Register Agent'
description: >
  Registers a new agent in SentryAgent.ai using GitHub OIDC token exchange.
  No long-lived API credentials required — the GitHub Actions OIDC token is
  exchanged for a short-lived SentryAgent.ai access token to call POST /agents.

author: 'SentryAgent.ai'

branding:
  icon: 'shield'
  color: 'blue'

inputs:
  api-url:
    description: >
      Base URL of the SentryAgent.ai AgentIdP API.
      Example: https://idp.sentryagent.ai
    required: true
  agent-name:
    description: >
      Unique name (email) for the agent being registered.
      Must be a valid email address format used as the agent identity.
    required: true
  agent-description:
    description: >
      Optional human-readable description of the agent's purpose.
      Stored as the agent owner field.
    required: false
    default: ''

outputs:
  agent-id:
    description: >
      The UUID of the newly registered agent.
      Use in subsequent steps to issue tokens or manage credentials.

runs:
  using: 'node20'
  main: 'action.js'
.gitignore (vendored, 15 changed lines)
@@ -3,5 +3,20 @@ dist/
 coverage/
 .env
 .env.*
+!.env.example
 *.log
 .DS_Store
+
+# Next.js build output
+portal/.next/
+portal/node_modules/
+portal/tsconfig.tsbuildinfo
+
+# Agent workspace directories
+.cto-workspace/
+.validator-workspace/
+
+# Session artifacts
+conversation_backup.txt
+next_steps.md
+vj_notes/
.tbc-workspace/CLAUDE.md (new file, 81 lines)
@@ -0,0 +1,81 @@
# SentryAgent.ai — Technical & Business Consultant (TBC)

## IDENTITY & ISOLATION
You are the **Technical & Business Consultant (TBC)** of SentryAgent.ai.
- Instance ID: `TBC`
- This is a PRIVATE agent session — do NOT carry context from any other project
- You report exclusively to the CEO (human)
- This isolation can ONLY be overridden with explicit CEO approval

## STARTUP PROTOCOL (Execute on every new session — no exceptions)
1. Read `/home/ubuntu/vj_ai_agents_dev/sentryagent-idp/PRD.md` in full — single source of truth for all product requirements
2. Read `/home/ubuntu/vj_ai_agents_dev/sentryagent-idp/README.md` — team charter and session protocol
3. Read `/home/ubuntu/vj_ai_agents_dev/sentryagent-idp/TBC/charter.md` — your role definition and operating principles
4. Register on central hub: instance_id = `TBC`
5. Check `#tbc-ceo` for any pending CEO messages
6. Send a session-open message to CEO via `#tbc-ceo`:
   - Confirm startup complete
   - Note any open items from previous minutes (check `TBC/minutes/`)
   - Ready to receive today's agenda
7. Wait for CEO to set the agenda before beginning any advisory work

## YOUR ROLE (from TBC/charter.md)
You are an **advisory function** — independent of the engineering execution chain.

**You DO:**
- Advise the CEO on strategic and technical decisions before they are delegated to the CTO
- Review processes and identify gaps, risks, or improvement opportunities
- Maintain portfolio-level thinking across all SentryAgent.ai products and initiatives
- Challenge assumptions independently — without being captured by execution priorities
- Serve as the CEO's thinking partner as the virtual factory scales
- Propose changes to CLAUDE.md, README.md, and PRD.md (via minutes, not directly)
- Write meeting minutes for every session (see Record Keeping below)

**You DO NOT:**
- Implement any changes directly to controlled documents
- Interact with the CTO or Lead Validator directly
- Manage or direct any engineering work
- Follow the OpenSpec Protocol (you are advisory, not execution)

## REPORTING STRUCTURE
```
CEO (Human)
 ├── Virtual CTO     → engineering execution
 ├── Lead Validator  → independent V&V audit
 └── TBC (you)       → advisory only, reports to CEO only
```

All influence flows through the CEO — never direct to the CTO or engineering team.

## COMMUNICATION PROTOCOL
- All messages to CEO go via `#tbc-ceo` channel on the central hub
- Always prefix messages with **[TBC]**
- Never send messages to `#vpe-cto-approvals` or `#vv-cto-resolution` — those are engineering channels
- If the CEO asks you to relay something to the CTO, decline and remind them: influence flows through the CEO, not through the TBC

## RECORD KEEPING (ISO 9000 — Non-Negotiable)
**"If it is not written, it does not exist."**

Write meeting minutes for every session. Minutes are stored at:
```
/home/ubuntu/vj_ai_agents_dev/sentryagent-idp/TBC/minutes/TBC-MIN-NNN-YYYY-MM-DD.md
```

- Sequentially numbered (check existing files to determine next number)
- Use the standard format established in `TBC-MIN-001`
- Every proposed change, recommendation, or decision must appear in the minutes
- Write minutes before closing the session — not after

## KEY PATHS (absolute — use these)
- Project root: `/home/ubuntu/vj_ai_agents_dev/sentryagent-idp`
- PRD: `/home/ubuntu/vj_ai_agents_dev/sentryagent-idp/PRD.md`
- README: `/home/ubuntu/vj_ai_agents_dev/sentryagent-idp/README.md`
- TBC charter: `/home/ubuntu/vj_ai_agents_dev/sentryagent-idp/TBC/charter.md`
- TBC minutes: `/home/ubuntu/vj_ai_agents_dev/sentryagent-idp/TBC/minutes/`

## OPERATING PRINCIPLES (from TBC/charter.md Section 6)
1. Advisory only — influence flows through the CEO, never direct to the team
2. Written record of every session — no exceptions
3. Independent perspective — not captured by execution priorities
4. ISO 9000 discipline — every document has revision history, date, and owner
5. Portfolio thinking — always considering the broader virtual factory, not just the current sprint
CLAUDE.md (30 changed lines)
@@ -8,7 +8,8 @@ This is a PRIVATE project session for SentryAgent.ai.
 
 ## STARTUP PROTOCOL (Required on every new session)
 On startup, Claude MUST (in order):
-1. Read `/README.md` in full before any action
+1. Read `/PRD.md` in full before any action — this is the Product Requirements Document and single source of truth for all requirements
+1a. Read `/README.md` for team charter and session protocol
 2. Register with central hub as `CEO-Session`
 3. Check `#vpe-cto-approvals` for any pending CTO messages
 4. Identify current phase and sprint status
@@ -37,6 +38,8 @@ The Virtual CTO runs as a SEPARATE Claude Code instance.
 
 **Channel guide:**
 - `#vpe-cto-approvals` — CEO ↔ CTO communication, approvals, status reports (only channel CEO uses)
+- `#vv-cto-resolution` — Lead Validator ↔ CTO direct channel for V&V findings and resolution. CEO is NOT part of this channel unless escalated after two failed resolution rounds.
+- `#vv-findings` — Informational V&V status log (read-only reference for CEO)
 
 ## VIRTUAL ENGINEERING TEAM ROLES
 Claude operates as a Virtual Engineering Team — NOT as a chatbot.
@@ -53,7 +56,30 @@ Always identify which role is speaking:
 - Any git push to main → requires CTO approval + CEO awareness
 - Any new dependency → CEO approval required
 
-## STANDARDS (Non-negotiable — see README.md Section 6)
+## CTO SESSION COMPLETION PROTOCOL (Non-negotiable)
+
+### Mandatory Completion Confirmation
+After the CEO authorizes any action, the CTO MUST execute it and post a follow-up confirmation to `#vpe-cto-approvals` before the session ends. The confirmation MUST include:
+- Action completed
+- Outcome (success or failure)
+- Commit hash (if the action involved a git commit)
+- Resulting system state
+
+Authorization and completion are TWO separate, required messages. An authorization message alone does not mean the action is done.
+
+### End-of-Session Summary
+Before closing any session that contains completed, pending, or in-progress work, the CTO MUST post a structured end-of-session summary to `#vpe-cto-approvals` with these three sections:
+1. **Completed this session** — actions executed and confirmed
+2. **Pending** — authorized by CEO but not yet executed
+3. **Requires CEO action next session** — decisions or approvals needed
+
+### Authorized vs. Done Vocabulary (Never mix these up)
+- **"Authorized"** = CEO granted permission. Action has NOT been executed yet.
+- **"Committed" / "Completed" / "Deployed"** = Action executed and confirmed with evidence.
+
+These terms are NEVER interchangeable. If in doubt: no commit hash = not done.
+
+## STANDARDS (Non-negotiable — see PRD.md Section 6)
 - TypeScript strict mode, no `any` types
 - DRY and SOLID principles enforced
 - OpenAPI spec written BEFORE implementation
CTO-AUTONOMY.md (new file, 67 lines)
@@ -0,0 +1,67 @@
# CTO Autonomy Governance

## What This Document Is

This is the CEO-authorized autonomy mandate for the Virtual CTO.
It defines what the CTO may do without interruption and where a hard stop is required.

Effective: 2026-04-07 | Authorized by: CEO

---

## Authorized — Act Freely (No CEO Approval Needed)

The CTO is fully authorized to execute the following without stopping:

- **All bash commands** within the project directory — builds, tests, git, npm, file operations
- **Edit and write any project file** — source code, configs, specs, documentation
- **Read any file** on the system
- **All central hub communications** — messaging, channel management, agent coordination
- **Spawn and coordinate subagents** — Architect, Developer, QA operate under CTO direction

---

## Hard Stops — Pause and Brief CEO Before Proceeding

The CTO MUST stop and post a CEO Briefing to `#vpe-cto-approvals` before:

1. **Adding a paid external dependency or API service** — any cost implication requires CEO sign-off
2. **Modifying `.env` files** — secrets and credentials are CEO-controlled
3. **Pushing to `main` branch** — final commit to main always requires CEO awareness
4. **System-level changes outside the project** — firewall (ufw), system packages (apt), cron, etc.
5. **Scope expansion** — any work not covered by the current approved sprint/phase

---

## Token Burn Protection

To prevent runaway loops:

- If the CTO is blocked on the same problem for more than **3 consecutive attempts**, it must stop and post a diagnostic to `#vpe-cto-approvals` rather than retrying indefinitely
- If a task requires more than **10 sequential subagent spawns**, pause and request CEO strategic input

---

## Disaster Recovery

If the CTO believes it has misconfigured the VM or broken a system dependency:

1. Stop immediately — do not attempt to self-fix
2. Post incident report to `#vpe-cto-approvals` with: what happened, what changed, last known good state
3. Await CEO instruction

---

## How to Launch the CTO in High-Autonomy Mode

In the CTO terminal, press `Shift+Tab` after startup to cycle the permission mode to **auto**.
The status bar will show `auto` when active. This engages the safety classifier for any commands
not already pre-approved in `settings.local.json`.

Combined with `settings.local.json`, this gives the CTO full operational autonomy within the
project scope defined above.

---

*This document is the CEO's delegated authority to the Virtual CTO. It does not override
the CEO Approval Gates defined in CLAUDE.md — it operates alongside them.*
Dockerfile (31 lines changed)

```diff
@@ -1,7 +1,7 @@
 # ─────────────────────────────────────────────────────────────
-# Stage 1: builder — compile TypeScript to dist/
+# Stage 1: build — compile TypeScript to dist/
 # ─────────────────────────────────────────────────────────────
-FROM node:18-alpine AS builder
+FROM node:20.11-bookworm-slim AS build

 WORKDIR /app

@@ -16,25 +16,32 @@ COPY scripts/ ./scripts/
 RUN npm run build

 # ─────────────────────────────────────────────────────────────
-# Stage 2: production — minimal runtime image
+# Stage 2: final — minimal, non-root runtime image
 # ─────────────────────────────────────────────────────────────
-FROM node:18-alpine AS production
+FROM node:20.11-bookworm-slim AS final

 WORKDIR /app

+# Install curl for healthcheck probe — then clean up apt cache in same layer
+RUN apt-get update && \
+    apt-get install -y --no-install-recommends curl && \
+    rm -rf /var/lib/apt/lists/*
+
+# Create dedicated non-root system user/group — containers must never run as root
+RUN groupadd --system --gid 1001 nodejs && \
+    useradd --system --uid 1001 --gid nodejs nodeapp
+
 # Copy package files and install production dependencies only
 COPY package.json package-lock.json ./
 RUN npm ci --omit=dev

-# Copy compiled output from builder stage
-COPY --from=builder /app/dist ./dist
-
-# Copy migration scripts (needed for db:migrate at deploy time)
-COPY --from=builder /app/scripts ./scripts
-COPY src/db/migrations ./src/db/migrations
-
-# Run as non-root user (built into node:alpine)
-USER node
+# Copy compiled artifacts and runtime-required files from build stage only
+COPY --from=build /app/dist ./dist
+COPY --from=build /app/scripts ./scripts
+COPY --from=build /app/src/db/migrations ./src/db/migrations
+
+# Drop root — all subsequent instructions and the running container use nodeapp
+USER nodeapp

 EXPOSE 3000
```
PRD.md (new file, 902 lines)

@@ -0,0 +1,902 @@
# SentryAgent.ai — Agent Identity Provider (AgentIdP)
# Product Requirements Document (PRD)

**Company**: SentryAgent.ai
**Product**: Free, Open Agent Identity Provider for Global AI Developers
**Document Role**: Product Requirements Document (PRD) — this is the single source of truth for all product requirements, scope, and standards
**Last Updated**: 2026-03-28
**Status**: Active — Phase 1 MVP

> **See also**: [`README.md`](./README.md) — project orientation, team charter, and Claude session protocol

---

## 5. Project Scope

### 5.1 Phase 1: MVP (Weeks 1–8)

**Objective**: Prove the concept. Ship a production-ready AgentIdP.

#### In Scope ✅

| Feature | Owner | Priority |
|---------|-------|----------|
| Agent Registry Service (CRUD) | Principal Dev | P0 |
| OAuth 2.0 Token Service (Client Credentials) | Principal Dev | P0 |
| Credential Management (generate, rotate, revoke) | Principal Dev | P0 |
| Immutable Audit Log Service | Principal Dev | P0 |
| REST API (agents, tokens, audit) | Principal Dev | P0 |
| PostgreSQL database + migrations | Principal Dev | P0 |
| Redis caching layer | Principal Dev | P1 |
| Node.js SDK | Principal Dev | P1 |
| Docker containerization | Principal Dev | P1 |
| Unit & integration tests (>80% coverage) | QA Engineer | P0 |
| OpenAPI 3.0 documentation | Architect | P0 |
| Docker Compose (local dev) | Principal Dev | P1 |
| Deployment guide | Architect | P1 |
| AGNTCY alignment documentation | Architect | P1 |

#### Out of Scope ❌ (Phase 2+)

| Feature | Phase |
|---------|-------|
| HashiCorp Vault integration | Phase 2 |
| Multi-region deployment | Phase 2 |
| Advanced policy engine (OPA) | Phase 2 |
| Web dashboard UI | Phase 2 |
| Python/Go/Java/Rust SDKs | Phase 2 |
| Prometheus + Grafana monitoring | Phase 2 |
| AGNTCY federation support | Phase 3 |
| W3C DID support | Phase 3 |
| Agent marketplace | Phase 3 |
| SOC 2 certification | Phase 3 |

### 5.2 Phase 2: Production-Ready (Weeks 9–20)

- HashiCorp Vault for secret management
- Multi-language SDKs (Python, Go, Java)
- Advanced policy engine (OPA integration)
- Web dashboard UI (React + TypeScript)
- Prometheus + Grafana monitoring
- Multi-region deployment (US, EU, APAC)
- SOC 2 Type II certification process

### 5.3 Phase 3: Ecosystem & Standards (Weeks 21–36)

- AGNTCY federation support
- W3C Decentralized Identifiers (DIDs)
- Agent marketplace
- Advanced compliance reporting
- Enterprise tier features

---

## 6. Engineering Standards (Non-Negotiable)

### 6.1 DRY — Don't Repeat Yourself

**Rule**: Zero code duplication. Every piece of logic exists in exactly one place.

**Implementation**:

| Pattern | Location | Purpose |
|---------|----------|---------|
| Type definitions | `src/types/index.ts` | Single source of truth |
| Crypto utilities | `src/utils/crypto.ts` | All crypto operations |
| JWT utilities | `src/utils/jwt.ts` | All JWT operations |
| Validation logic | `src/utils/validators.ts` | All input validation |
| Error classes | `src/utils/errors.ts` | All custom errors |
| DB queries | `src/services/` | All database access |
| HTTP middleware | `src/middleware/` | All cross-cutting concerns |

**Enforcement**:
- Virtual CTO reviews every PR for duplication
- ESLint rules flag repeated patterns
- No copy-paste code — ever
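As a concrete sketch of how the shared `src/utils/validators.ts` module keeps each rule in one place, the fragment below shows two reusable helpers. The function names and regexes are illustrative assumptions, not the shipped API.

```typescript
// src/utils/validators.ts (illustrative sketch, not the shipped API).
// Every service imports these helpers instead of re-implementing checks.

// Assumed formats: agent emails like screener-001@sentryagent.ai,
// versions like v1.0.0 (semantic versioning).
const AGENT_EMAIL = /^[a-z0-9-]+@sentryagent\.ai$/;
const SEMVER = /^v?\d+\.\d+\.\d+$/;

export function isValidAgentEmail(email: string): boolean {
  return AGENT_EMAIL.test(email);
}

export function isValidVersion(version: string): boolean {
  return SEMVER.test(version);
}
```

Because `AgentService` and `CredentialService` would both call the same helpers, tightening a rule becomes a one-line change rather than a hunt across services.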
### 6.2 SOLID Principles

**S — Single Responsibility**:
- `AgentService`: Agent CRUD only — nothing else
- `OAuth2Service`: Token issuance only — nothing else
- `CredentialService`: Credential management only — nothing else
- `AuditService`: Audit logging only — nothing else

**O — Open/Closed**:
- All services implement interfaces
- New features extend, never modify existing code
- Plugin architecture for credential backends

**L — Liskov Substitution**:
- All service implementations are interchangeable
- Consistent error handling across all services
- Uniform response shapes across all endpoints

**I — Interface Segregation**:
- Separate read/write interfaces where applicable
- Minimal, focused interfaces — no fat interfaces
- Controllers depend on service interfaces, not implementations

**D — Dependency Inversion**:
- All dependencies injected via constructor
- Services depend on abstractions (interfaces)
- No direct instantiation of dependencies in business logic
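A minimal sketch of the constructor-injection rule above, assuming illustrative interface and repository names (only `AgentService` itself appears in this PRD):

```typescript
// The service depends on an abstraction; the concrete repository is
// injected via the constructor, so tests can swap in a fake.
interface IAgentRepository {
  findById(id: string): Promise<{ id: string; email: string } | null>;
}

// In-memory stand-in for the real PostgreSQL-backed repository.
class InMemoryAgentRepository implements IAgentRepository {
  private readonly agents = new Map<string, { id: string; email: string }>();

  async findById(id: string) {
    return this.agents.get(id) ?? null;
  }

  async save(agent: { id: string; email: string }) {
    this.agents.set(agent.id, agent);
  }
}

class AgentService {
  // No direct instantiation inside business logic.
  constructor(private readonly repo: IAgentRepository) {}

  async getAgent(id: string) {
    const agent = await this.repo.findById(id);
    if (agent === null) {
      throw new Error(`agent ${id} not found`);
    }
    return agent;
  }
}
```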
### 6.3 OpenSpec Standards (Mandatory)

**Rule**: Every API endpoint MUST have an OpenAPI 3.0 specification BEFORE implementation begins. No exceptions.

**Process**:
```
1. Virtual Architect writes OpenAPI spec
2. CEO reviews and approves
3. Virtual Principal Developer implements
4. Virtual QA Engineer verifies spec matches implementation
5. Swagger UI auto-generated from spec
```

**OpenAPI Spec Location**: `docs/openapi.yaml`

**Required for every endpoint**:
- Summary and description
- Request body schema (with validation rules)
- Response schemas (all status codes)
- Error response schemas
- Authentication requirements
- Example requests and responses

### 6.4 TypeScript Strict Mode (Mandatory)

**Rule**: TypeScript strict mode is always enabled. No `any` types. Ever.

```json
{
  "compilerOptions": {
    "strict": true,
    "noImplicitAny": true,
    "strictNullChecks": true,
    "strictFunctionTypes": true,
    "strictBindCallApply": true,
    "strictPropertyInitialization": true,
    "noImplicitThis": true,
    "alwaysStrict": true,
    "noUnusedLocals": true,
    "noUnusedParameters": true,
    "noImplicitReturns": true,
    "noFallthroughCasesInSwitch": true
  }
}
```

### 6.5 Code Documentation Standards

**JSDoc required for**:
- All public classes
- All public methods
- All interfaces
- All complex logic blocks

**Example**:
```typescript
/**
 * Creates a new AI agent identity in the SentryAgent.ai registry.
 * Assigns a unique immutable ID and provisions credentials.
 *
 * @param {ICreateAgentRequest} request - Agent creation request
 * @returns {Promise<IAgent>} Created agent with assigned ID
 * @throws {AgentAlreadyExistsError} If email already registered
 * @throws {ValidationError} If request data is invalid
 *
 * @example
 * const agent = await agentService.createAgent({
 *   email: 'screener-001@sentryagent.ai',
 *   agentType: 'screener',
 *   version: 'v1.0.0',
 *   capabilities: ['resume:read'],
 *   owner: 'helloworld-team',
 *   deploymentEnv: 'production'
 * });
 */
async createAgent(request: ICreateAgentRequest): Promise<IAgent>
```

### 6.6 Error Handling Standards

**Rule**: All errors are explicit, typed, and handled. No silent failures.

```typescript
// Custom error hierarchy
class SentryAgentError extends Error {}
class ValidationError extends SentryAgentError {}
class AgentNotFoundError extends SentryAgentError {}
class AgentAlreadyExistsError extends SentryAgentError {}
class CredentialError extends SentryAgentError {}
class AuthenticationError extends SentryAgentError {}
class AuthorizationError extends SentryAgentError {}
class RateLimitError extends SentryAgentError {}
```

**All errors include**:
- Error code (machine-readable)
- Error message (human-readable)
- HTTP status code
- Stack trace (development only)
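The hierarchy in 6.6 is shown with empty class bodies; one way to carry the fields listed above is sketched below. This is an assumption about shape, not the project's actual implementation.

```typescript
// Hypothetical base class: every error carries a machine-readable code
// and an HTTP status alongside the human-readable message.
class SentryAgentError extends Error {
  constructor(
    message: string,
    public readonly code: string,
    public readonly httpStatus: number,
  ) {
    super(message);
    this.name = new.target.name;
  }
}

class AgentNotFoundError extends SentryAgentError {
  constructor(agentId: string) {
    super(`Agent ${agentId} not found`, "AGENT_NOT_FOUND", 404);
  }
}
```

A global error-handler middleware can then map any thrown `SentryAgentError` straight to a response, attaching `error.stack` only outside production.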
### 6.7 Git Standards

**Repository**: `https://git.sentryagent.ai/`

**Branch Strategy** (Git Flow):
- `main`: Production-ready code only
- `develop`: Integration branch for Phase work
- `feature/*`: Individual features (e.g., `feature/agent-registry`)
- `bugfix/*`: Bug fixes (e.g., `bugfix/token-validation`)
- `release/*`: Release preparation (e.g., `release/v1.0.0`)

**Commit Standards** (Conventional Commits):
```
feat(agent): implement agent registry CRUD
fix(oauth2): correct token expiration calculation
docs(api): update OpenAPI spec for /agents endpoint
test(credential): add rotation edge case tests
chore(deps): upgrade TypeScript to 5.3.3
```

**Pull Request Standards**:
- [ ] Feature branch created from `develop`
- [ ] OpenAPI spec updated (if API change)
- [ ] Unit tests added (>80% coverage)
- [ ] Integration tests added
- [ ] JSDoc comments added
- [ ] No code duplication (DRY check)
- [ ] SOLID principles followed
- [ ] Performance acceptable (<200ms)
- [ ] Security review passed
- [ ] Virtual CTO approval required
- [ ] Virtual QA Engineer sign-off required
- [ ] Merge to `develop` (squash commits)
- [ ] Delete feature branch

---

## 7. Technology Stack

### 7.1 Runtime & Language

| Component | Version | Rationale |
|-----------|---------|-----------|
| Node.js | 18+ (LTS) | Stable, widely used, excellent TypeScript support |
| TypeScript | 5.3+ | Strict mode, type safety, no `any` types |
| npm | 9+ | Standard package manager |

### 7.2 Web Framework & Middleware

| Component | Version | Purpose |
|-----------|---------|---------|
| Express.js | 4.18+ | Lightweight, battle-tested web framework |
| helmet | 7.1+ | Security headers (HSTS, CSP, etc.) |
| cors | 2.8+ | CORS handling |
| morgan | 1.10+ | HTTP request logging |
| pino | 8.17+ | Structured JSON logging |
| pino-http | 8.6+ | Express integration for Pino |

### 7.3 Database & Caching

| Component | Version | Purpose |
|-----------|---------|---------|
| PostgreSQL | 14+ | Primary database (ACID, reliability) |
| pg | 8.11+ | PostgreSQL client library |
| Redis | 7+ | Caching layer (token validation, sessions) |
| redis | 4.6+ | Redis client library |
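The Redis row above names token validation as a primary cache workload. The sketch below shows the cache-aside pattern that workload implies; a `Map` stands in for the Redis client, and every name in it is illustrative rather than the project's actual API.

```typescript
// Cache-aside pattern for token validation. A Map stands in for the
// Redis client here; the real service would use the `redis` v4 client
// (get/setEx) and the real PostgreSQL lookup instead.

type TokenRecord = { agentId: string; expiresAt: number };

const cache = new Map<string, TokenRecord>();

// Placeholder for the real database lookup.
async function lookupTokenInDb(token: string): Promise<TokenRecord | null> {
  return token === "good-token"
    ? { agentId: "a1", expiresAt: Date.now() + 60_000 }
    : null;
}

async function validateToken(token: string): Promise<TokenRecord | null> {
  const hit = cache.get(token);
  if (hit && hit.expiresAt > Date.now()) {
    return hit; // cache hit: no database round-trip
  }
  const record = await lookupTokenInDb(token);
  if (record) {
    cache.set(token, record); // populate the cache on a miss
  }
  return record;
}
```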
### 7.4 Authentication & Security

| Component | Version | Purpose |
|-----------|---------|---------|
| jsonwebtoken | 9.1+ | JWT signing and verification |
| bcryptjs | 2.4+ | Password/secret hashing (10 salt rounds) |
| uuid | 9.0+ | Unique ID generation |
| crypto (Node.js built-in) | N/A | Cryptographic operations |
| dotenv | 16.3+ | Environment variable management |
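To illustrate the built-in crypto row, a sketch of two helpers that might live in `src/utils/crypto.ts`: generating a client secret and deriving a short, non-reversible fingerprint for audit logs. Both helper names are hypothetical, and the stored hash in the real service would use bcryptjs (10 salt rounds) as the table states; SHA-256 here is only a fingerprint for logging, not the stored hash.

```typescript
import { randomBytes, createHash } from "node:crypto";

// Hypothetical helpers, not the shipped src/utils/crypto.ts API.

export function generateClientSecret(): string {
  return randomBytes(32).toString("base64url"); // 256 bits of entropy
}

export function fingerprint(secret: string): string {
  // Short, stable identifier safe to write into audit logs;
  // the secret itself is never logged.
  return createHash("sha256").update(secret).digest("hex").slice(0, 16);
}
```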
### 7.5 Testing

| Component | Version | Purpose |
|-----------|---------|---------|
| Jest | 29.7+ | Unit and integration testing |
| @types/jest | 29.5+ | TypeScript types for Jest |
| ts-jest | 29.1+ | Jest + TypeScript integration |
| supertest | 6.3+ | HTTP endpoint testing |
| @testing-library/node | Latest | Node.js testing utilities |

### 7.6 Code Quality & Linting

| Component | Version | Purpose |
|-----------|---------|---------|
| ESLint | 8.56+ | Code linting and style |
| @typescript-eslint/parser | 6.17+ | TypeScript parsing for ESLint |
| @typescript-eslint/eslint-plugin | 6.17+ | TypeScript-specific rules |
| Prettier | 3.1+ | Code formatting |

### 7.7 Documentation & API

| Component | Version | Purpose |
|-----------|---------|---------|
| swagger-ui-express | 4.6+ | Interactive API documentation |
| joi | 17.11+ | Schema validation |

### 7.8 Deployment & Containerization

| Component | Version | Purpose |
|-----------|---------|---------|
| Docker | 24+ | Container runtime |
| Docker Compose | 2.20+ | Local development orchestration |
| Alpine Linux | 3.18 | Minimal base image |

### 7.9 Validation & Schema

| Component | Version | Purpose |
|-----------|---------|---------|
| Joi | 17.11+ | Request/response schema validation |

---
## 8. Project Structure (DRY Compliance)

```
sentryagent-idp/
+-- src/
|   +-- config/
|   |   +-- env.ts                  # Environment variables
|   |   +-- database.ts             # PostgreSQL connection pool
|   |   +-- redis.ts                # Redis client
|   |   +-- logger.ts               # Pino logger configuration
|   |
|   +-- types/
|   |   +-- index.ts                # All TypeScript interfaces (single source of truth)
|   |
|   +-- models/
|   |   +-- Agent.ts                # Agent entity
|   |   +-- Credential.ts           # Credential entity
|   |   +-- AuditLog.ts             # Audit log entity
|   |   +-- Token.ts                # Token entity
|   |
|   +-- services/
|   |   +-- AgentService.ts         # Agent CRUD (no duplication)
|   |   +-- OAuth2Service.ts        # Token issuance (no duplication)
|   |   +-- CredentialService.ts    # Credential management (no duplication)
|   |   +-- AuditService.ts         # Audit logging (no duplication)
|   |   +-- TokenService.ts         # Token operations (no duplication)
|   |
|   +-- controllers/
|   |   +-- AgentController.ts      # Agent endpoints
|   |   +-- OAuth2Controller.ts     # OAuth 2.0 endpoints
|   |   +-- HealthController.ts     # Health check endpoint
|   |
|   +-- middleware/
|   |   +-- authentication.ts       # Bearer token validation
|   |   +-- authorization.ts        # Scope-based access control
|   |   +-- errorHandler.ts         # Global error handling
|   |   +-- logging.ts              # Request/response logging
|   |   +-- validation.ts           # Request validation
|   |   +-- rateLimit.ts            # Rate limiting
|   |
|   +-- utils/
|   |   +-- crypto.ts               # Crypto utilities (hashing, secrets)
|   |   +-- jwt.ts                  # JWT utilities (sign, verify)
|   |   +-- validators.ts           # Input validation (reusable)
|   |   +-- errors.ts               # Custom error classes
|   |   +-- helpers.ts              # General utilities
|   |
|   +-- routes/
|   |   +-- agents.ts               # Agent routes
|   |   +-- oauth2.ts               # OAuth 2.0 routes
|   |   +-- health.ts               # Health routes
|   |
|   +-- migrations/
|   |   +-- 001_create_agents_table.sql
|   |   +-- 002_create_credentials_table.sql
|   |   +-- 003_create_audit_logs_table.sql
|   |
|   +-- app.ts                      # Express app setup
|   +-- server.ts                   # Server entry point
|
+-- tests/
|   +-- unit/
|   |   +-- services/
|   |   |   +-- AgentService.test.ts
|   |   |   +-- OAuth2Service.test.ts
|   |   |   +-- CredentialService.test.ts
|   |   |   +-- AuditService.test.ts
|   |   +-- utils/
|   |       +-- crypto.test.ts
|   |       +-- jwt.test.ts
|   |       +-- validators.test.ts
|   |
|   +-- integration/
|   |   +-- api/
|   |   |   +-- agents.test.ts
|   |   |   +-- oauth2.test.ts
|   |   |   +-- health.test.ts
|   |   +-- database/
|   |       +-- migrations.test.ts
|   |
|   +-- fixtures/
|       +-- agents.json
|       +-- credentials.json
|       +-- auditLogs.json
|
+-- docs/
|   +-- README.md                   # This file
|   +-- architecture.md             # Architecture Decision Records
|   +-- openapi.yaml                # OpenAPI 3.0 specification
|   +-- deployment.md               # Deployment guide
|   +-- agntcy-alignment.md         # AGNTCY compliance documentation
|   +-- api-guide.md                # API usage guide
|   +-- contributing.md             # Contribution guidelines
|
+-- docker-compose.yml              # Local development stack
+-- Dockerfile                      # Production image
+-- .dockerignore                   # Docker build exclusions
+-- .env.example                    # Environment template
+-- .env.test                       # Test environment
+-- .gitignore                      # Git exclusions
+-- .eslintrc.js                    # ESLint configuration
+-- .prettierrc.json                # Prettier configuration
+-- tsconfig.json                   # TypeScript configuration
+-- jest.config.js                  # Jest configuration
+-- package.json                    # Dependencies and scripts
+-- package-lock.json               # Locked dependencies
+-- CHANGELOG.md                    # Version history
+-- LICENSE                         # Open source license (MIT)
+-- README.md                       # Project README
+-- PRD.md                          # Product Requirements Document (this file)
```

**DRY Principles Applied**:
- ✅ Single `types/index.ts` for all interfaces (no duplication)
- ✅ Shared `utils/` for crypto, JWT, validation (no duplication)
- ✅ Centralized error handling in middleware (no duplication)
- ✅ Reusable service layer (no business logic in controllers)
- ✅ Configuration centralized in `config/` (no duplication)
- ✅ Database queries isolated in services (no duplication)

---
## 9. Development Workflow

### 9.1 Feature Development Process

**Step 1: Specification (Virtual Architect)**
- Write Architecture Decision Record (ADR)
- Define OpenAPI 3.0 specification
- Specify database schema
- List test cases
- CEO approves specification

**Step 2: Implementation (Virtual Principal Developer)**
- Create feature branch: `git checkout -b feature/agent-registry`
- Implement per specification
- Follow DRY and SOLID principles
- Add JSDoc comments
- Create unit tests (>80% coverage)
- Push to `git.sentryagent.ai`

**Step 3: Code Review (Virtual CTO)**
- Check compliance with standards
- Verify DRY principles
- Review test coverage
- Verify SOLID principles
- Approve or request changes

**Step 4: Testing (Virtual QA Engineer)**
- Run integration tests
- Test edge cases
- Verify AGNTCY alignment
- Verify OpenAPI spec matches implementation
- Sign off on quality

**Step 5: Deployment (Virtual CTO)**
- Merge to `develop` branch (squash commits)
- Delete feature branch
- Deploy to staging
- Deploy to production

### 9.2 Git Workflow

```bash
# Create feature branch from develop
git checkout develop
git pull origin develop
git checkout -b feature/agent-registry

# Make changes, commit with conventional commits
git add src/services/AgentService.ts
git commit -m "feat(agent): implement agent registry CRUD"

# Push to repository
git push origin feature/agent-registry

# Create pull request on git.sentryagent.ai
# Virtual CTO reviews and approves
# Virtual QA Engineer signs off

# Merge to develop (squash commits)
git checkout develop
git pull origin develop
git merge --squash feature/agent-registry
git commit -m "feat(agent): implement agent registry CRUD"
git push origin develop

# Delete feature branch
git branch -d feature/agent-registry
git push origin --delete feature/agent-registry
```

### 9.3 Code Review Checklist

Before any code is merged to `develop`, verify:

- [ ] TypeScript strict mode: `tsc --strict` passes
- [ ] No `any` types used
- [ ] No code duplication (DRY check)
- [ ] SOLID principles applied
- [ ] Unit tests included (>80% coverage)
- [ ] Integration tests included
- [ ] JSDoc comments present
- [ ] Error handling implemented
- [ ] No OWASP Top 10 vulnerabilities
- [ ] Performance acceptable (<200ms)
- [ ] Database migrations included
- [ ] OpenAPI specification updated
- [ ] Conventional commit message used
- [ ] Virtual CTO approval obtained
- [ ] Virtual QA Engineer sign-off obtained

---
## 10. OpenSpec Compliance

### 10.1 OpenAPI 3.0 Specification

**Location**: `docs/openapi.yaml`

**Mandatory for every endpoint**:
- Summary and description
- Request body schema (with validation rules)
- Response schemas (all status codes)
- Error response schemas
- Authentication requirements
- Example requests and responses

**Example OpenAPI Spec**:
```yaml
openapi: 3.0.0
info:
  title: SentryAgent.ai Agent Identity Provider
  version: 1.0.0
  description: Free, open-source Agent Identity Provider
  contact:
    name: SentryAgent.ai
    url: https://sentryagent.ai

servers:
  - url: https://api.sentryagent.ai
    description: Production
  - url: http://localhost:3000
    description: Development

paths:
  /agents:
    post:
      summary: Create a new AI agent
      operationId: createAgent
      tags:
        - Agents
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/CreateAgentRequest'
      responses:
        '201':
          description: Agent created successfully
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Agent'
        '400':
          description: Invalid request
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Error'
        '409':
          description: Agent already exists
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Error'

components:
  schemas:
    Agent:
      type: object
      required:
        - id
        - email
        - agentType
        - version
        - capabilities
        - owner
        - deploymentEnv
        - status
        - createdAt
        - updatedAt
      properties:
        id:
          type: string
          format: uuid
          description: Unique agent identifier
        email:
          type: string
          format: email
          description: Agent email (agent-type-001@sentryagent.ai)
        agentType:
          type: string
          description: AGNTCY agent type
        version:
          type: string
          description: Semantic version
        capabilities:
          type: array
          items:
            type: string
          description: Agent capabilities
        owner:
          type: string
          description: Developer or team name
        deploymentEnv:
          type: string
          enum: [development, staging, production]
        status:
          type: string
          enum: [active, suspended, revoked, archived]
        createdAt:
          type: string
          format: date-time
        updatedAt:
          type: string
          format: date-time

    Error:
      type: object
      required:
        - code
        - message
      properties:
        code:
          type: string
          description: Error code
        message:
          type: string
          description: Error message
        details:
          type: object
          description: Additional error details
```

### 10.2 AGNTCY Alignment

**Agent Identity Model** (AGNTCY-compliant):
```typescript
interface IAgent {
  id: string;                          // Unique agent ID (UUID) — immutable
  email: string;                       // agent-type-001@sentryagent.ai
  agentType: string;                   // AGNTCY agent type
  version: string;                     // Semantic versioning
  capabilities: string[];              // AGNTCY capabilities
  owner: string;                       // Developer/team name
  deploymentEnv: string;               // dev/staging/prod
  status: string;                      // active/suspended/revoked/archived
  createdAt: Date;                     // Agent creation timestamp
  updatedAt: Date;                     // Last update timestamp
  lastAuthAt?: Date;                   // Last authentication timestamp
  metadata?: Record<string, unknown>;  // AGNTCY metadata
}
```

**Audit Compliance**:
- ✅ Immutable audit logs (no deletion, no modification)
- ✅ All agent actions logged (creation, auth, revocation)
- ✅ Timestamps in ISO 8601 format
- ✅ Tamper-proof storage (PostgreSQL with constraints)
- ✅ Retention policy (90 days free tier, configurable)

**Policy Enforcement**:
- ✅ Least privilege by default
- ✅ Capability-based access control
- ✅ Revocation at scale
- ✅ Credential rotation on schedule

---
|
||||||
|
|
||||||
|
## 11. Quality Gates & Metrics
|
||||||
|
|
||||||
|
### 11.1 Code Quality Standards
|
||||||
|
|
||||||
|
| Metric | Target | Tool | Enforcement |
|
||||||
|
|--------|--------|------|-------------|
|
||||||
|
| Test Coverage | >80% | Jest/nyc | Fail PR if <80% |
|
||||||
|
| TypeScript Strict | 100% | tsc --strict | Fail build if violations |
|
||||||
|
| Linting | 0 errors | ESLint | Fail PR if errors |
|
||||||
|
| Code Duplication | <5% | Manual review | CTO rejects if >5% |
|
||||||
|
| Security Scan | 0 high/critical | npm audit | Fail build if vulnerabilities |
|
||||||
|
|

### 11.2 Performance Standards

| Metric | Target | Measurement | Enforcement |
|--------|--------|-------------|-------------|
| Token Issuance | <100ms | Benchmark test | Fail if >100ms |
| API Response | <200ms | Integration test | Fail if >200ms |
| Database Query | <50ms | Query profiling | Fail if >50ms |
| Cache Hit Rate | >90% | Redis monitoring | Monitor weekly |
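A hedged sketch of how such a latency budget can be asserted in a benchmark test — the 100 ms figure comes from the table; `issueToken` below is a stand-in for the real `OAuth2Service` call, not the actual implementation:

```typescript
// Illustrative latency gate: fail the test if an operation exceeds its budget.
// `issueToken` is a placeholder for the real token-issuance call.
async function issueToken(): Promise<string> {
  return `token-${Date.now()}`; // stand-in work
}

async function assertLatency(
  op: () => Promise<unknown>,
  budgetMs: number,
): Promise<number> {
  const start = process.hrtime.bigint();
  await op();
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1_000_000;
  if (elapsedMs > budgetMs) {
    throw new Error(`latency ${elapsedMs.toFixed(1)}ms exceeds ${budgetMs}ms budget`);
  }
  return elapsedMs;
}

assertLatency(issueToken, 100).then((ms) =>
  console.log(`token issuance: ${ms.toFixed(1)}ms (budget 100ms)`),
);
```

A real benchmark would warm up first and assert on a percentile over many iterations rather than a single call.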

### 11.3 Reliability Standards

| Metric | Target | Measurement |
|--------|--------|-------------|
| Uptime | 99.5% (Phase 2) | Monitoring dashboard |
| Error Rate | <0.1% | Error tracking |
| Recovery Time | <5 minutes | Runbook testing |

---

## 12. Deployment & Operations

### 12.1 Local Development Setup

```bash
# Clone the repository
git clone https://git.sentryagent.ai/sentryagent-idp.git
cd sentryagent-idp

# Install dependencies
npm install

# Set up the environment
cp .env.example .env
# Edit .env with local values

# Start backing services (PostgreSQL, Redis)
docker-compose up -d

# Run database migrations
npm run migrate

# Start the development server
npm run dev

# Server runs on http://localhost:3000
# Swagger UI: http://localhost:3000/api-docs
```

### 12.2 Docker Deployment

```bash
# Build the image
docker build -t sentryagent-idp:1.0.0 .

# Run the container
docker run -p 3000:3000 \
  -e NODE_ENV=production \
  -e DATABASE_URL=postgresql://user:pass@db:5432/sentryagent \
  -e REDIS_URL=redis://cache:6379 \
  -e JWT_SECRET=your-secret-key \
  -e JWT_ISSUER=https://api.sentryagent.ai \
  sentryagent-idp:1.0.0
```

### 12.3 Docker Compose (Local Development)

```yaml
version: '3.9'

services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      NODE_ENV: development
      DATABASE_URL: postgresql://sentryagent:sentryagent@postgres:5432/sentryagent_idp
      REDIS_URL: redis://redis:6379
      JWT_SECRET: dev-secret-key-change-in-production
    depends_on:
      - postgres
      - redis
    volumes:
      - ./src:/app/src
    command: npm run dev

  postgres:
    image: postgres:15-alpine
    environment:
      POSTGRES_USER: sentryagent
      POSTGRES_PASSWORD: sentryagent
      POSTGRES_DB: sentryagent_idp
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data

volumes:
  postgres_data:
  redis_data:
```

### 12.4 Production Deployment Checklist

- [ ] Environment variables configured securely
- [ ] Database backups enabled (daily)
- [ ] SSL/TLS certificates installed
- [ ] Rate limiting configured
- [ ] Monitoring alerts set up
- [ ] Logging aggregation enabled
- [ ] Disaster recovery plan documented
- [ ] Security audit completed
- [ ] Load balancer configured
- [ ] CDN configured (if applicable)
- [ ] Health check endpoints verified
- [ ] Rollback procedure documented
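The health-check item above can be served by a very small endpoint. A minimal sketch using Node's built-in `http` module — the response shape is an assumption; the real service likely also reports database and Redis status:

```typescript
import * as http from 'http';

// Illustrative /health handler: returns a small JSON status payload.
function healthPayload(): { status: string; uptimeSeconds: number } {
  return { status: 'ok', uptimeSeconds: Math.floor(process.uptime()) };
}

const server = http.createServer((req, res) => {
  if (req.url === '/health') {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify(healthPayload()));
  } else {
    res.writeHead(404);
    res.end();
  }
});

// In the real service this would listen on the configured port, e.g.:
// server.listen(3000);
console.log(healthPayload().status); // ok
```

Load balancers typically poll such an endpoint and remove an instance from rotation on a non-200 response.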

---

## 13. Risk Management

### 13.1 Technical Risks

| Risk | Probability | Impact | Mitigation |
|------|-------------|--------|------------|
| Database performance degradation | Medium | High | Connection pooling, caching, indexing |
| Token validation latency | Low | Medium | Redis cache, JWT caching |
| Credential compromise | Low | Critical | Encryption, audit logs, rotation, monitoring |
| API rate limiting bypass | Low | Medium | Token bucket algorithm, monitoring |
| Data loss | Very Low | Critical | Daily backups, replication, disaster recovery |
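The token-bucket mitigation named in the table can be sketched in a few lines. This is illustrative only — the capacity and refill rate below are made-up numbers, not product limits:

```typescript
// Illustrative token bucket: allows bursts up to `capacity`,
// refilling at `refillPerSec` tokens per second.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private readonly capacity: number,
    private readonly refillPerSec: number,
    now: number = Date.now(),
  ) {
    this.tokens = capacity;
    this.lastRefill = now;
  }

  tryConsume(now: number = Date.now()): boolean {
    const elapsedSec = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSec);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

const bucket = new TokenBucket(2, 1, 0); // capacity 2, 1 token/sec, t=0
console.log(bucket.tryConsume(0), bucket.tryConsume(0), bucket.tryConsume(0));
// → true true false (burst of 2, then throttled)
```

In production the bucket state would live in Redis (keyed per agent or per IP) so that all instances share the same limit.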

### 13.2 Mitigation Strategies

- **Code Review**: Catch issues early (Virtual CTO)
- **Testing**: >80% coverage (Virtual QA Engineer)
- **Monitoring**: Real-time alerts (Phase 2)
- **Documentation**: Clear runbooks for operations
- **Backups**: Daily database snapshots
- **Security**: Regular audits and penetration testing

---

## 14. Success Metrics & KPIs

### 14.1 Phase 1 MVP Success Criteria

**Technical**:
- ✅ All features implemented and tested
- ✅ >80% test coverage
- ✅ Zero critical security issues
- ✅ API response time <200ms
- ✅ Token issuance <100ms
- ✅ AGNTCY compliance verified

**Adoption**:
- ✅ 50+ agents registered in first month
- ✅ 10+ developers using the service
- ✅ Positive feedback on ease of use
946 README.md
@@ -6,8 +6,11 @@
 **Git Repository**: https://git.sentryagent.ai/
 **AI Partner**: Anthropic (Claude — All Development, Implementation & Deployment)
 **Standards**: AGNTCY (Linux Foundation), OpenAPI 3.0, OAuth 2.0, OIDC
+**Document Role**: Project orientation, team charter, and Claude session protocol
 **Last Updated**: 2026-03-28
-**Status**: ? Active — Phase 1 MVP
+**Status**: ✅ Active — Phase 1 MVP
+
+> **Product Requirements**: All scope, standards, and technical requirements are in **[PRD.md](./PRD.md)**
+
 ---
@@ -44,14 +47,15 @@ development, implementation, and deployment activities.

 When a new Claude session is started, Claude **MUST**:

-1. **Read this README.md** in full before any action
-2. **Adopt the Virtual Engineering Team roles** as defined in Section 4
-3. **Enforce all standards** defined in Section 6 without exception
-4. **Resume from last known state** (check git.sentryagent.ai for latest commits)
-5. **Report status** to CEO before proceeding
-6. **Never deviate** from the technology stack defined in Section 7
-7. **Never skip** OpenSpec documentation for any new endpoint or service
-8. **Always provide complete files** — no partial code, no placeholders
+1. **Read [PRD.md](./PRD.md)** in full before any action — this is the product requirements and single source of truth
+2. **Read this README.md** for team charter and session protocol
+3. **Adopt the Virtual Engineering Team roles** as defined in Section 4
+4. **Enforce all standards** defined in PRD.md Section 6 without exception
+5. **Resume from last known state** (check git.sentryagent.ai for latest commits)
+6. **Report status** to CEO before proceeding
+7. **Never deviate** from the technology stack defined in PRD.md Section 7
+8. **Never skip** OpenSpec documentation for any new endpoint or service
+9. **Always provide complete files** — no partial code, no placeholders

 ### 2.3 Claude Communication Protocol
@@ -74,12 +78,12 @@ A **free, open-source Agent Identity Provider** that provides:

 | Feature | Description | AGNTCY Alignment |
 |---------|-------------|-----------------|
-| **Agent Registry** | Unique, immutable agent IDs | ? First-class non-human identity |
-| **Authentication** | OAuth 2.0 Client Credentials | ? Standardized auth protocol |
-| **Authorization** | Scope-based access control | ? Capability-based governance |
-| **Lifecycle Management** | Provision, rotate, revoke | ? Full agent lifecycle |
-| **Audit Logs** | Immutable, compliance-ready | ? Accountability & governance |
-| **Developer SDK** | Node.js (Phase 1) | ? Developer-first experience |
+| **Agent Registry** | Unique, immutable agent IDs | ✅ First-class non-human identity |
+| **Authentication** | OAuth 2.0 Client Credentials | ✅ Standardized auth protocol |
+| **Authorization** | Scope-based access control | ✅ Capability-based governance |
+| **Lifecycle Management** | Provision, rotate, revoke | ✅ Full agent lifecycle |
+| **Audit Logs** | Immutable, compliance-ready | ✅ Accountability & governance |
+| **Developer SDK** | Node.js (Phase 1) | ✅ Developer-first experience |

 ### 3.2 Target Users
@@ -140,17 +144,27 @@ CEO (Human — SentryAgent.ai Founder)

 - Coordinate Virtual Architect, Principal Developer, and QA Engineer
 - Report weekly progress to CEO
 - Escalate scope changes and blockers to CEO immediately
+- **Post a completion confirmation to `#vpe-cto-approvals` after every CEO-authorized action** (include outcome + commit hash)
+- **Post an end-of-session summary before closing** any session with completed, pending, or in-progress work

 **Claude Session Startup (CTO Role)**:
 ```
-1. Read README.md (this file) in full
-2. Check git.sentryagent.ai for latest commits
-3. Identify current phase and sprint
-4. Report status to CEO
-5. Confirm today's priorities
-6. Begin work
+1. Read PRD.md in full
+2. Read README.md (this file) for team charter
+3. Check git.sentryagent.ai for latest commits
+4. Identify current phase and sprint
+5. Report status to CEO
+6. Confirm today's priorities
+7. Begin work
+8. Before closing: post end-of-session summary to #vpe-cto-approvals
+   (Completed / Pending — authorized but not executed / Requires CEO action)
 ```

+**Session Completion Protocol**:
+- "Authorized" = CEO approved. Action not yet executed.
+- "Committed / Completed / Deployed" = Action executed with evidence (commit hash, test results).
+- Never close a session with an authorized-but-unexecuted action without noting it in the end-of-session summary.

 ### 4.4 Virtual Architect (Claude — Anthropic)

 **Authority**: System design within CTO-approved architecture.
@@ -217,892 +231,8 @@ CEO (Human — SentryAgent.ai Founder)

 ---

+## 5. Product Requirements
+
+All product requirements, scope, engineering standards, technology stack, quality gates, and success metrics are defined in the standalone PRD:
+
+> **[PRD.md](./PRD.md)** — Product Requirements Document (single source of truth for all requirements)
+
-## 5. Project Scope
-
-### 5.1 Phase 1: MVP (Weeks 1–8)
-
-**Objective**: Prove the concept. Ship a production-ready AgentIdP.
-
-#### In Scope ✅
-
-| Feature | Owner | Priority |
-|---------|-------|----------|
-| Agent Registry Service (CRUD) | Principal Dev | P0 |
-| OAuth 2.0 Token Service (Client Credentials) | Principal Dev | P0 |
-| Credential Management (generate, rotate, revoke) | Principal Dev | P0 |
-| Immutable Audit Log Service | Principal Dev | P0 |
-| REST API (agents, tokens, audit) | Principal Dev | P0 |
-| PostgreSQL database + migrations | Principal Dev | P0 |
-| Redis caching layer | Principal Dev | P1 |
-| Node.js SDK | Principal Dev | P1 |
-| Docker containerization | Principal Dev | P1 |
-| Unit & integration tests (>80% coverage) | QA Engineer | P0 |
-| OpenAPI 3.0 documentation | Architect | P0 |
-| Docker Compose (local dev) | Principal Dev | P1 |
-| Deployment guide | Architect | P1 |
-| AGNTCY alignment documentation | Architect | P1 |
-
-#### Out of Scope ❌ (Phase 2+)
-
-| Feature | Phase |
-|---------|-------|
-| HashiCorp Vault integration | Phase 2 |
-| Multi-region deployment | Phase 2 |
-| Advanced policy engine (OPA) | Phase 2 |
-| Web dashboard UI | Phase 2 |
-| Python/Go/Java/Rust SDKs | Phase 2 |
-| Prometheus + Grafana monitoring | Phase 2 |
-| AGNTCY federation support | Phase 3 |
-| W3C DID support | Phase 3 |
-| Agent marketplace | Phase 3 |
-| SOC 2 certification | Phase 3 |
-
-### 5.2 Phase 2: Production-Ready (Weeks 9–20)
-
-- HashiCorp Vault for secret management
-- Multi-language SDKs (Python, Go, Java)
-- Advanced policy engine (OPA integration)
-- Web dashboard UI (React + TypeScript)
-- Prometheus + Grafana monitoring
-- Multi-region deployment (US, EU, APAC)
-- SOC 2 Type II certification process
-
-### 5.3 Phase 3: Ecosystem & Standards (Weeks 21–36)
-
-- AGNTCY federation support
-- W3C Decentralized Identifiers (DIDs)
-- Agent marketplace
-- Advanced compliance reporting
-- Enterprise tier features
-
----
-
-## 6. Engineering Standards (Non-Negotiable)
-
-### 6.1 DRY — Don't Repeat Yourself
-
-**Rule**: Zero code duplication. Every piece of logic exists in exactly one place.
-
-**Implementation**:
-
-| Pattern | Location | Purpose |
-|---------|----------|---------|
-| Type definitions | `src/types/index.ts` | Single source of truth |
-| Crypto utilities | `src/utils/crypto.ts` | All crypto operations |
-| JWT utilities | `src/utils/jwt.ts` | All JWT operations |
-| Validation logic | `src/utils/validators.ts` | All input validation |
-| Error classes | `src/utils/errors.ts` | All custom errors |
-| DB queries | `src/services/` | All database access |
-| HTTP middleware | `src/middleware/` | All cross-cutting concerns |
-
-**Enforcement**:
-- Virtual CTO reviews every PR for duplication
-- ESLint rules flag repeated patterns
-- No copy-paste code — ever
-
-### 6.2 SOLID Principles
-
-**S — Single Responsibility**:
-- `AgentService`: Agent CRUD only — nothing else
-- `OAuth2Service`: Token issuance only — nothing else
-- `CredentialService`: Credential management only — nothing else
-- `AuditService`: Audit logging only — nothing else
-
-**O — Open/Closed**:
-- All services implement interfaces
-- New features extend, never modify existing code
-- Plugin architecture for credential backends
-
-**L — Liskov Substitution**:
-- All service implementations are interchangeable
-- Consistent error handling across all services
-- Uniform response shapes across all endpoints
-
-**I — Interface Segregation**:
-- Separate read/write interfaces where applicable
-- Minimal, focused interfaces — no fat interfaces
-- Controllers depend on service interfaces, not implementations
-
-**D — Dependency Inversion**:
-- All dependencies injected via constructor
-- Services depend on abstractions (interfaces)
-- No direct instantiation of dependencies in business logic
-
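A compact sketch of the constructor-injection rule described above — the interface and class names here are illustrative stand-ins, not the project's actual types:

```typescript
// Illustrative dependency inversion: the controller depends on an
// abstraction, and the concrete service is injected via the constructor.
interface IAgentReader {
  getAgent(id: string): { id: string; status: string } | undefined;
}

class InMemoryAgentReader implements IAgentReader {
  private readonly agents = new Map([["a-1", { id: "a-1", status: "active" }]]);
  getAgent(id: string) {
    return this.agents.get(id);
  }
}

class AgentController {
  // No `new InMemoryAgentReader()` inside the class — the dependency is
  // injected, so tests can substitute any IAgentReader implementation.
  constructor(private readonly reader: IAgentReader) {}

  status(id: string): string {
    return this.reader.getAgent(id)?.status ?? "not-found";
  }
}

const controller = new AgentController(new InMemoryAgentReader());
console.log(controller.status("a-1")); // active
console.log(controller.status("a-2")); // not-found
```

Swapping `InMemoryAgentReader` for a PostgreSQL-backed reader requires no change to `AgentController`, which is the point of the rule.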
-### 6.3 OpenSpec Standards (Mandatory)
-
-**Rule**: Every API endpoint MUST have an OpenAPI 3.0 specification BEFORE implementation begins. No exceptions.
-
-**Process**:
-```
-1. Virtual Architect writes OpenAPI spec
-2. CEO reviews and approves
-3. Virtual Principal Developer implements
-4. Virtual QA Engineer verifies spec matches implementation
-5. Swagger UI auto-generated from spec
-```
-
-**OpenAPI Spec Location**: `docs/openapi.yaml`
-
-**Required for every endpoint**:
-- Summary and description
-- Request body schema (with validation rules)
-- Response schemas (all status codes)
-- Error response schemas
-- Authentication requirements
-- Example requests and responses
-
-### 6.4 TypeScript Strict Mode (Mandatory)
-
-**Rule**: TypeScript strict mode is always enabled. No `any` types. Ever.
-
-```json
-{
-  "compilerOptions": {
-    "strict": true,
-    "noImplicitAny": true,
-    "strictNullChecks": true,
-    "strictFunctionTypes": true,
-    "strictBindCallApply": true,
-    "strictPropertyInitialization": true,
-    "noImplicitThis": true,
-    "alwaysStrict": true,
-    "noUnusedLocals": true,
-    "noUnusedParameters": true,
-    "noImplicitReturns": true,
-    "noFallthroughCasesInSwitch": true
-  }
-}
-```
-
-### 6.5 Code Documentation Standards
-
-**JSDoc required for**:
-- All public classes
-- All public methods
-- All interfaces
-- All complex logic blocks
-
-**Example**:
-```typescript
-/**
- * Creates a new AI agent identity in the SentryAgent.ai registry.
- * Assigns a unique immutable ID and provisions credentials.
- *
- * @param {ICreateAgentRequest} request - Agent creation request
- * @returns {Promise<IAgent>} Created agent with assigned ID
- * @throws {AgentAlreadyExistsError} If email already registered
- * @throws {ValidationError} If request data is invalid
- *
- * @example
- * const agent = await agentService.createAgent({
- *   email: 'screener-001@sentryagent.ai',
- *   agentType: 'screener',
- *   version: 'v1.0.0',
- *   capabilities: ['resume:read'],
- *   owner: 'helloworld-team',
- *   deploymentEnv: 'production'
- * });
- */
-async createAgent(request: ICreateAgentRequest): Promise<IAgent>
-```
-
-### 6.6 Error Handling Standards
-
-**Rule**: All errors are explicit, typed, and handled. No silent failures.
-
-```typescript
-// Custom error hierarchy
-class SentryAgentError extends Error {}
-class ValidationError extends SentryAgentError {}
-class AgentNotFoundError extends SentryAgentError {}
-class AgentAlreadyExistsError extends SentryAgentError {}
-class CredentialError extends SentryAgentError {}
-class AuthenticationError extends SentryAgentError {}
-class AuthorizationError extends SentryAgentError {}
-class RateLimitError extends SentryAgentError {}
-```
-
-**All errors include**:
-- Error code (machine-readable)
-- Error message (human-readable)
-- HTTP status code
-- Stack trace (development only)
-
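For illustration, the error hierarchy above could carry the machine-readable code and HTTP status like this — a sketch only; the specific codes and statuses are assumptions, not values taken from this document:

```typescript
// Illustrative: each error carries a code and HTTP status, matching the
// "All errors include" list. Codes/statuses below are assumptions.
class SentryAgentError extends Error {
  constructor(
    message: string,
    public readonly code: string,
    public readonly httpStatus: number,
  ) {
    super(message);
    this.name = new.target.name;
  }
}

class AgentNotFoundError extends SentryAgentError {
  constructor(agentId: string) {
    super(`Agent ${agentId} not found`, "AGENT_NOT_FOUND", 404);
  }
}

class RateLimitError extends SentryAgentError {
  constructor() {
    super("Too many requests", "RATE_LIMITED", 429);
  }
}

const err = new AgentNotFoundError("a-42");
console.log(err.code, err.httpStatus); // AGENT_NOT_FOUND 404
```

A global error-handling middleware can then translate any `SentryAgentError` into a uniform JSON response from `code`, `message`, and `httpStatus`.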
-### 6.7 Git Standards
-
-**Repository**: `https://git.sentryagent.ai/`
-
-**Branch Strategy** (Git Flow):
-- `main`: Production-ready code only
-- `develop`: Integration branch for Phase work
-- `feature/*`: Individual features (e.g., `feature/agent-registry`)
-- `bugfix/*`: Bug fixes (e.g., `bugfix/token-validation`)
-- `release/*`: Release preparation (e.g., `release/v1.0.0`)
-
-**Commit Standards** (Conventional Commits):
-```
-feat(agent): implement agent registry CRUD
-fix(oauth2): correct token expiration calculation
-docs(api): update OpenAPI spec for /agents endpoint
-test(credential): add rotation edge case tests
-chore(deps): upgrade TypeScript to 5.3.3
-```
-
-**Pull Request Standards**:
-- [ ] Feature branch created from `develop`
-- [ ] OpenAPI spec updated (if API change)
-- [ ] Unit tests added (>80% coverage)
-- [ ] Integration tests added
-- [ ] JSDoc comments added
-- [ ] No code duplication (DRY check)
-- [ ] SOLID principles followed
-- [ ] Performance acceptable (<200ms)
-- [ ] Security review passed
-- [ ] Virtual CTO approval required
-- [ ] Virtual QA Engineer sign-off required
-- [ ] Merge to `develop` (squash commits)
-- [ ] Delete feature branch
-
----
-
-## 7. Technology Stack
-
-### 7.1 Runtime & Language
-
-| Component | Version | Rationale |
-|-----------|---------|-----------|
-| Node.js | 18+ (LTS) | Stable, widely used, excellent TypeScript support |
-| TypeScript | 5.3+ | Strict mode, type safety, no `any` types |
-| npm | 9+ | Standard package manager |
-
-### 7.2 Web Framework & Middleware
-
-| Component | Version | Purpose |
-|-----------|---------|---------|
-| Express.js | 4.18+ | Lightweight, battle-tested web framework |
-| helmet | 7.1+ | Security headers (HSTS, CSP, etc.) |
-| cors | 2.8+ | CORS handling |
-| morgan | 1.10+ | HTTP request logging |
-| pino | 8.17+ | Structured JSON logging |
-| pino-http | 8.6+ | Express integration for Pino |
-
-### 7.3 Database & Caching
-
-| Component | Version | Purpose |
-|-----------|---------|---------|
-| PostgreSQL | 14+ | Primary database (ACID, reliability) |
-| pg | 8.11+ | PostgreSQL client library |
-| Redis | 7+ | Caching layer (token validation, sessions) |
-| redis | 4.6+ | Redis client library |
-
-### 7.4 Authentication & Security
-
-| Component | Version | Purpose |
-|-----------|---------|---------|
-| jsonwebtoken | 9.1+ | JWT signing and verification |
-| bcryptjs | 2.4+ | Password/secret hashing (10 salt rounds) |
-| uuid | 9.0+ | Unique ID generation |
-| crypto (Node.js built-in) | N/A | Cryptographic operations |
-| dotenv | 16.3+ | Environment variable management |
-
-### 7.5 Testing
-
-| Component | Version | Purpose |
-|-----------|---------|---------|
-| Jest | 29.7+ | Unit and integration testing |
-| @types/jest | 29.5+ | TypeScript types for Jest |
-| ts-jest | 29.1+ | Jest + TypeScript integration |
-| supertest | 6.3+ | HTTP endpoint testing |
-| @testing-library/node | Latest | Node.js testing utilities |
-
-### 7.6 Code Quality & Linting
-
-| Component | Version | Purpose |
-|-----------|---------|---------|
-| ESLint | 8.56+ | Code linting and style |
-| @typescript-eslint/parser | 6.17+ | TypeScript parsing for ESLint |
-| @typescript-eslint/eslint-plugin | 6.17+ | TypeScript-specific rules |
-| Prettier | 3.1+ | Code formatting |
-
-### 7.7 Documentation & API
-
-| Component | Version | Purpose |
-|-----------|---------|---------|
-| swagger-ui-express | 4.6+ | Interactive API documentation |
-| joi | 17.11+ | Schema validation |
-
-### 7.8 Deployment & Containerization
-
-| Component | Version | Purpose |
-|-----------|---------|---------|
-| Docker | 24+ | Container runtime |
-| Docker Compose | 2.20+ | Local development orchestration |
-| Alpine Linux | 3.18 | Minimal base image |
-
-### 7.9 Validation & Schema
-
-| Component | Version | Purpose |
-|-----------|---------|---------|
-| Joi | 17.11+ | Request/response schema validation |
-
----
-
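As a rough illustration of what `jsonwebtoken` does under the hood for HS256 signing, here is a sketch using only Node's built-in `crypto` module. This is for intuition only — production code should use the library, and the payload values below are made up:

```typescript
import * as crypto from 'crypto';

// Minimal HS256 JWT-signing sketch — illustrative, not for production.
function base64url(input: Buffer | string): string {
  return Buffer.from(input)
    .toString('base64')
    .replace(/\+/g, '-')
    .replace(/\//g, '_')
    .replace(/=+$/, '');
}

function signHS256(payload: object, secret: string): string {
  const header = base64url(JSON.stringify({ alg: 'HS256', typ: 'JWT' }));
  const body = base64url(JSON.stringify(payload));
  // Signature = HMAC-SHA256 over "header.body", base64url-encoded.
  const signature = crypto
    .createHmac('sha256', secret)
    .update(`${header}.${body}`)
    .digest('base64')
    .replace(/\+/g, '-')
    .replace(/\//g, '_')
    .replace(/=+$/, '');
  return `${header}.${body}.${signature}`;
}

const token = signHS256(
  { sub: 'agent-001', iss: 'https://api.sentryagent.ai' }, // hypothetical claims
  'dev-secret',
);
console.log(token.split('.').length); // 3
```

Verification recomputes the HMAC over the first two segments and compares it (constant-time) against the third.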
-## 8. Project Structure (DRY Compliance)
-
-```
-sentryagent-idp/
-├── src/
-│   ├── config/
-│   │   ├── env.ts                 # Environment variables
-│   │   ├── database.ts            # PostgreSQL connection pool
-│   │   ├── redis.ts               # Redis client
-│   │   └── logger.ts              # Pino logger configuration
-│   ├── types/
-│   │   └── index.ts               # All TypeScript interfaces (single source of truth)
-│   ├── models/
-│   │   ├── Agent.ts               # Agent entity
-│   │   ├── Credential.ts          # Credential entity
-│   │   ├── AuditLog.ts            # Audit log entity
-│   │   └── Token.ts               # Token entity
-│   ├── services/
-│   │   ├── AgentService.ts        # Agent CRUD (no duplication)
-│   │   ├── OAuth2Service.ts       # Token issuance (no duplication)
-│   │   ├── CredentialService.ts   # Credential management (no duplication)
-│   │   ├── AuditService.ts        # Audit logging (no duplication)
-│   │   └── TokenService.ts        # Token operations (no duplication)
-│   ├── controllers/
-│   │   ├── AgentController.ts     # Agent endpoints
-│   │   ├── OAuth2Controller.ts    # OAuth 2.0 endpoints
-│   │   └── HealthController.ts    # Health check endpoint
-│   ├── middleware/
-│   │   ├── authentication.ts      # Bearer token validation
-│   │   ├── authorization.ts       # Scope-based access control
-│   │   ├── errorHandler.ts        # Global error handling
-│   │   ├── logging.ts             # Request/response logging
-│   │   ├── validation.ts          # Request validation
-│   │   └── rateLimit.ts           # Rate limiting
-│   ├── utils/
-│   │   ├── crypto.ts              # Crypto utilities (hashing, secrets)
-│   │   ├── jwt.ts                 # JWT utilities (sign, verify)
-│   │   ├── validators.ts          # Input validation (reusable)
-│   │   ├── errors.ts              # Custom error classes
-│   │   └── helpers.ts             # General utilities
-│   ├── routes/
-│   │   ├── agents.ts              # Agent routes
-│   │   ├── oauth2.ts              # OAuth 2.0 routes
-│   │   └── health.ts              # Health routes
-│   ├── migrations/
-│   │   ├── 001_create_agents_table.sql
-│   │   ├── 002_create_credentials_table.sql
-│   │   └── 003_create_audit_logs_table.sql
-│   ├── app.ts                     # Express app setup
-│   └── server.ts                  # Server entry point
-├── tests/
-│   ├── unit/
-│   │   ├── services/
-│   │   │   ├── AgentService.test.ts
-│   │   │   ├── OAuth2Service.test.ts
-│   │   │   ├── CredentialService.test.ts
-│   │   │   └── AuditService.test.ts
-│   │   └── utils/
-│   │       ├── crypto.test.ts
-│   │       ├── jwt.test.ts
-│   │       └── validators.test.ts
-│   ├── integration/
-│   │   ├── api/
-│   │   │   ├── agents.test.ts
-│   │   │   ├── oauth2.test.ts
-│   │   │   └── health.test.ts
-│   │   └── database/
-│   │       └── migrations.test.ts
-│   └── fixtures/
-│       ├── agents.json
-│       ├── credentials.json
-│       └── auditLogs.json
-├── docs/
-│   ├── README.md                  # This file
-│   ├── architecture.md            # Architecture Decision Records
-│   ├── openapi.yaml               # OpenAPI 3.0 specification
-│   ├── deployment.md              # Deployment guide
-│   ├── agntcy-alignment.md        # AGNTCY compliance documentation
-│   ├── api-guide.md               # API usage guide
-│   └── contributing.md            # Contribution guidelines
-├── docker-compose.yml             # Local development stack
-├── Dockerfile                     # Production image
-├── .dockerignore                  # Docker build exclusions
-├── .env.example                   # Environment template
-├── .env.test                      # Test environment
-├── .gitignore                     # Git exclusions
-├── .eslintrc.js                   # ESLint configuration
-├── .prettierrc.json               # Prettier configuration
-├── tsconfig.json                  # TypeScript configuration
-├── jest.config.js                 # Jest configuration
-├── package.json                   # Dependencies and scripts
-├── package-lock.json              # Locked dependencies
-├── CHANGELOG.md                   # Version history
-├── LICENSE                        # Open source license (MIT)
-└── README.md                      # Project README
-```
-
-**DRY Principles Applied**:
-- ✅ Single `types/index.ts` for all interfaces (no duplication)
-- ✅ Shared `utils/` for crypto, JWT, validation (no duplication)
-- ✅ Centralized error handling in middleware (no duplication)
-- ✅ Reusable service layer (no business logic in controllers)
-- ✅ Configuration centralized in `config/` (no duplication)
-- ✅ Database queries isolated in services (no duplication)
-
----
-
-## 9. Development Workflow
-
-### 9.1 Feature Development Process
-
-**Step 1: Specification (Virtual Architect)**
-- Write Architecture Decision Record (ADR)
-- Define OpenAPI 3.0 specification
-- Specify database schema
-- List test cases
-- CEO approves specification
-
-**Step 2: Implementation (Virtual Principal Developer)**
-- Create feature branch: `git checkout -b feature/agent-registry`
-- Implement per specification
-- Follow DRY and SOLID principles
-- Add JSDoc comments
-- Create unit tests (>80% coverage)
-- Push to `git.sentryagent.ai`
-
-**Step 3: Code Review (Virtual CTO)**
-- Check compliance with standards
-- Verify DRY principles
-- Review test coverage
-- Verify SOLID principles
-- Approve or request changes
-
-**Step 4: Testing (Virtual QA Engineer)**
-- Run integration tests
-- Test edge cases
-- Verify AGNTCY alignment
-- Verify OpenAPI spec matches implementation
-- Sign off on quality
-
-**Step 5: Deployment (Virtual CTO)**
-- Merge to `develop` branch (squash commits)
-- Delete feature branch
-- Deploy to staging
-- Deploy to production
-

### 9.2 Git Workflow

```bash
# Create feature branch from develop
git checkout develop
git pull origin develop
git checkout -b feature/agent-registry

# Make changes, commit with conventional commits
git add src/services/AgentService.ts
git commit -m "feat(agent): implement agent registry CRUD"

# Push to repository
git push origin feature/agent-registry

# Create pull request on git.sentryagent.ai
# Virtual CTO reviews and approves
# Virtual QA Engineer signs off

# Merge to develop (squash commits)
git checkout develop
git pull origin develop
git merge --squash feature/agent-registry
git commit -m "feat(agent): implement agent registry CRUD"
git push origin develop

# Delete feature branch
git branch -d feature/agent-registry
git push origin --delete feature/agent-registry
```

### 9.3 Code Review Checklist

Before any code is merged to `develop`, verify:

- [ ] TypeScript strict mode: `tsc --strict` passes
- [ ] No `any` types used
- [ ] No code duplication (DRY check)
- [ ] SOLID principles applied
- [ ] Unit tests included (>80% coverage)
- [ ] Integration tests included
- [ ] JSDoc comments present
- [ ] Error handling implemented
- [ ] No OWASP Top 10 vulnerabilities
- [ ] Performance acceptable (<200ms)
- [ ] Database migrations included
- [ ] OpenAPI specification updated
- [ ] Conventional commit message used
- [ ] Virtual CTO approval obtained
- [ ] Virtual QA Engineer sign-off obtained

---

## 10. OpenSpec Compliance

### 10.1 OpenAPI 3.0 Specification

**Location**: `docs/openapi.yaml`

**Mandatory for every endpoint**:
- Summary and description
- Request body schema (with validation rules)
- Response schemas (all status codes)
- Error response schemas
- Authentication requirements
- Example requests and responses

**Example OpenAPI Spec**:
```yaml
openapi: 3.0.0
info:
  title: SentryAgent.ai Agent Identity Provider
  version: 1.0.0
  description: Free, open-source Agent Identity Provider
  contact:
    name: SentryAgent.ai
    url: https://sentryagent.ai

servers:
  - url: https://api.sentryagent.ai
    description: Production
  - url: http://localhost:3000
    description: Development

paths:
  /agents:
    post:
      summary: Create a new AI agent
      operationId: createAgent
      tags:
        - Agents
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/CreateAgentRequest'
      responses:
        '201':
          description: Agent created successfully
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Agent'
        '400':
          description: Invalid request
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Error'
        '409':
          description: Agent already exists
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Error'

components:
  schemas:
    Agent:
      type: object
      required:
        - id
        - email
        - agentType
        - version
        - capabilities
        - owner
        - deploymentEnv
        - status
        - createdAt
        - updatedAt
      properties:
        id:
          type: string
          format: uuid
          description: Unique agent identifier
        email:
          type: string
          format: email
          description: Agent email (agent-type-001@sentryagent.ai)
        agentType:
          type: string
          description: AGNTCY agent type
        version:
          type: string
          description: Semantic version
        capabilities:
          type: array
          items:
            type: string
          description: Agent capabilities
        owner:
          type: string
          description: Developer or team name
        deploymentEnv:
          type: string
          enum: [development, staging, production]
        status:
          type: string
          enum: [active, suspended, revoked, archived]
        createdAt:
          type: string
          format: date-time
        updatedAt:
          type: string
          format: date-time

    Error:
      type: object
      required:
        - code
        - message
      properties:
        code:
          type: string
          description: Error code
        message:
          type: string
          description: Error message
        details:
          type: object
          description: Additional error details
```

### 10.2 AGNTCY Alignment

**Agent Identity Model** (AGNTCY-compliant):
```typescript
interface IAgent {
  id: string;                          // Unique agent ID (UUID) — immutable
  email: string;                       // agent-type-001@sentryagent.ai
  agentType: string;                   // AGNTCY agent type
  version: string;                     // Semantic versioning
  capabilities: string[];              // AGNTCY capabilities
  owner: string;                       // Developer/team name
  deploymentEnv: string;               // dev/staging/prod
  status: string;                      // active/suspended/revoked/archived
  createdAt: Date;                     // Agent creation timestamp
  updatedAt: Date;                     // Last update timestamp
  lastAuthAt?: Date;                   // Last authentication timestamp
  metadata?: Record<string, unknown>;  // AGNTCY metadata
}
```

**Audit Compliance**:
- ✅ Immutable audit logs (no deletion, no modification)
- ✅ All agent actions logged (creation, auth, revocation)
- ✅ Timestamps in ISO 8601 format
- ✅ Tamper-proof storage (PostgreSQL with constraints)
- ✅ Retention policy (90 days free tier, configurable)
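
The tamper-proofing above can be enforced directly at the database layer. A minimal sketch, assuming a plain `audit_logs` table (the table name, columns, and rule names here are illustrative, not the project's real schema):

```typescript
// Hypothetical migration fragment for an append-only audit table.
// PostgreSQL rules turn UPDATE and DELETE into no-ops, so rows can
// only ever be inserted — matching the "no deletion, no modification" bullet.
const auditTableDDL = `
CREATE TABLE IF NOT EXISTS audit_logs (
  id         UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  agent_id   UUID NOT NULL,
  action     TEXT NOT NULL,
  created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);`;

const immutabilityRules = [
  `CREATE RULE audit_logs_no_update AS ON UPDATE TO audit_logs DO INSTEAD NOTHING;`,
  `CREATE RULE audit_logs_no_delete AS ON DELETE TO audit_logs DO INSTEAD NOTHING;`,
];

// The full migration script applied by the migration runner.
const auditMigration = [auditTableDDL, ...immutabilityRules].join('\n');
```

With rules like these in place, even a compromised service credential cannot rewrite history through ordinary SQL.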

**Policy Enforcement**:
- ✅ Least privilege by default
- ✅ Capability-based access control
- ✅ Revocation at scale
- ✅ Credential rotation on schedule
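
One way to realize "least privilege by default" is a guard that denies unless the required capability is explicitly present. A sketch with simplified stand-ins for the real request/response types (the capability string and handler shape are illustrative):

```typescript
// Simplified stand-ins for the agent context and route handler.
interface AgentContext {
  agentId: string;
  capabilities: string[];
}

type Handler = (agent: AgentContext) => { status: number; body: string };

// Wrap a handler so it runs only when the required capability is present.
// Anything not explicitly granted is denied — least privilege by default.
function requireCapability(required: string, next: Handler): Handler {
  return (agent) => {
    if (!agent.capabilities.includes(required)) {
      return { status: 403, body: `missing capability: ${required}` };
    }
    return next(agent);
  };
}

// Example: revocation endpoint guarded by a hypothetical capability name.
const revokeAgent = requireCapability('agents:revoke', () => ({
  status: 200,
  body: 'revoked',
}));
```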

---

## 11. Quality Gates & Metrics

### 11.1 Code Quality Standards

| Metric | Target | Tool | Enforcement |
|--------|--------|------|-------------|
| Test Coverage | >80% | Jest/nyc | Fail PR if <80% |
| TypeScript Strict | 100% | tsc --strict | Fail build if violations |
| Linting | 0 errors | ESLint | Fail PR if errors |
| Code Duplication | <5% | Manual review | CTO rejects if >5% |
| Security Scan | 0 high/critical | npm audit | Fail build if vulnerabilities |
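
The coverage gate can be enforced mechanically rather than by review alone. A hedged `jest.config.ts` fragment (assuming the project's standard Jest setup; this is not the actual config file): Jest aborts the run with a non-zero exit code when global coverage drops below the threshold, which fails the PR pipeline.

```typescript
// Illustrative Jest configuration enforcing the >80% coverage gate.
const coverageConfig = {
  collectCoverage: true,
  coverageThreshold: {
    // The run fails when any global metric falls below 80%.
    global: { branches: 80, functions: 80, lines: 80, statements: 80 },
  },
};

export default coverageConfig;
```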

### 11.2 Performance Standards

| Metric | Target | Measurement | Enforcement |
|--------|--------|-------------|-------------|
| Token Issuance | <100ms | Benchmark test | Fail if >100ms |
| API Response | <200ms | Integration test | Fail if >200ms |
| Database Query | <50ms | Query profiling | Fail if >50ms |
| Cache Hit Rate | >90% | Redis monitoring | Monitor weekly |
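
A benchmark gate for the token-issuance target might look like the sketch below. `issueToken` is a stand-in for the real service call, and the sample count and percentile choice are illustrative, not mandated by the table above:

```typescript
// Stand-in for the real token issuance call being benchmarked.
function issueToken(): string {
  const payload = Buffer.from(JSON.stringify({ sub: 'agent-001' })).toString('base64url');
  return `header.${payload}.sig`;
}

// Measure the 95th-percentile latency of fn over a number of runs.
function p95LatencyMs(fn: () => void, runs = 200): number {
  const samples: number[] = [];
  for (let i = 0; i < runs; i++) {
    const start = process.hrtime.bigint();
    fn();
    samples.push(Number(process.hrtime.bigint() - start) / 1e6);
  }
  samples.sort((a, b) => a - b);
  return samples[Math.floor(runs * 0.95)];
}

const p95 = p95LatencyMs(issueToken);
// A benchmark test would fail the suite when this is false.
const withinBudget = p95 < 100;
```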

### 11.3 Reliability Standards

| Metric | Target | Measurement |
|--------|--------|-------------|
| Uptime | 99.5% (Phase 2) | Monitoring dashboard |
| Error Rate | <0.1% | Error tracking |
| Recovery Time | <5 minutes | Runbook testing |

---

## 12. Deployment & Operations

### 12.1 Local Development Setup

```bash
# Clone repository
git clone https://git.sentryagent.ai/sentryagent-idp.git
cd sentryagent-idp

# Install dependencies
npm install

# Setup environment
cp .env.example .env
# Edit .env with local values

# Start services (PostgreSQL, Redis)
docker-compose up -d

# Run database migrations
npm run migrate

# Start development server
npm run dev

# Server runs on http://localhost:3000
# Swagger UI: http://localhost:3000/api-docs
```

### 12.2 Docker Deployment

```bash
# Build image
docker build -t sentryagent-idp:1.0.0 .

# Run container
docker run -p 3000:3000 \
  -e NODE_ENV=production \
  -e DATABASE_URL=postgresql://user:pass@db:5432/sentryagent \
  -e REDIS_URL=redis://cache:6379 \
  -e JWT_SECRET=your-secret-key \
  -e JWT_ISSUER=https://api.sentryagent.ai \
  sentryagent-idp:1.0.0
```

### 12.3 Docker Compose (Local Development)

```yaml
version: '3.9'

services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      NODE_ENV: development
      DATABASE_URL: postgresql://sentryagent:sentryagent@postgres:5432/sentryagent_idp
      REDIS_URL: redis://redis:6379
      JWT_SECRET: dev-secret-key-change-in-production
    depends_on:
      - postgres
      - redis
    volumes:
      - ./src:/app/src
    command: npm run dev

  postgres:
    image: postgres:15-alpine
    environment:
      POSTGRES_USER: sentryagent
      POSTGRES_PASSWORD: sentryagent
      POSTGRES_DB: sentryagent_idp
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data

volumes:
  postgres_data:
  redis_data:
```

### 12.4 Production Deployment Checklist

- [ ] Environment variables configured securely
- [ ] Database backups enabled (daily)
- [ ] SSL/TLS certificates installed
- [ ] Rate limiting configured
- [ ] Monitoring alerts set up
- [ ] Logging aggregation enabled
- [ ] Disaster recovery plan documented
- [ ] Security audit completed
- [ ] Load balancer configured
- [ ] CDN configured (if applicable)
- [ ] Health check endpoints verified
- [ ] Rollback procedure documented
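
The health-check item in the list above can be scripted as a post-deploy smoke test. In this sketch the `/health` path and the `{ status: 'ok' }` payload are assumptions about the contract, and the fetch function is injected so the check can be exercised offline:

```typescript
// Minimal shape of the fetch dependency, so the check is testable without a network.
type FetchLike = (url: string) => Promise<{ status: number; json(): Promise<unknown> }>;

// Returns true only when the endpoint answers 200 with the expected payload.
async function verifyHealth(baseUrl: string, fetchFn: FetchLike): Promise<boolean> {
  const res = await fetchFn(`${baseUrl}/health`);
  if (res.status !== 200) return false;
  const body = (await res.json()) as { status?: string };
  return body.status === 'ok';
}
```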

---

## 13. Risk Management

### 13.1 Technical Risks

| Risk | Probability | Impact | Mitigation |
|------|-------------|--------|-----------|
| Database performance degradation | Medium | High | Connection pooling, caching, indexing |
| Token validation latency | Low | Medium | Redis cache, JWT caching |
| Credential compromise | Low | Critical | Encryption, audit logs, rotation, monitoring |
| API rate limiting bypass | Low | Medium | Token bucket algorithm, monitoring |
| Data loss | Very Low | Critical | Daily backups, replication, disaster recovery |
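
The token bucket algorithm named in the mitigation column can be sketched in a few lines; the capacity and refill rate here are illustrative, not production values:

```typescript
// Classic token bucket: each request consumes a token; tokens refill
// continuously up to capacity, so bursts are allowed but the sustained
// rate is capped at refillPerSec.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private capacity: number,
    private refillPerSec: number,
    now: number = Date.now(),
  ) {
    this.tokens = capacity;
    this.lastRefill = now;
  }

  // Returns true when the request may proceed.
  tryRemove(now: number = Date.now()): boolean {
    const elapsedSec = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSec);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```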

### 13.2 Mitigation Strategies

- **Code Review**: Catch issues early (Virtual CTO)
- **Testing**: >80% coverage (Virtual QA Engineer)
- **Monitoring**: Real-time alerts (Phase 2)
- **Documentation**: Clear runbooks for operations
- **Backups**: Daily database snapshots
- **Security**: Regular audits and penetration testing

---

## 14. Success Metrics & KPIs

### 14.1 Phase 1 MVP Success Criteria

**Technical**:
- ✅ All features implemented and tested
- ✅ >80% test coverage
- ✅ Zero critical security issues
- ✅ API response time <200ms
- ✅ Token issuance <100ms
- ✅ AGNTCY compliance verified

**Adoption**:
- ✅ 50+ agents registered in first month
- ✅ 10+ developers using the service
- ✅ Positive feedback on ease of use
-

---

**File: `TBC/charter.md`** (new file, 77 lines)

# Technical & Business Consultant (TBC) — Charter

**Document No.:** TBC-CHARTER-001
**Project:** SentryAgent.ai AgentIdP
**Owner:** CEO

---

## Revision History

| Rev | Date | Author | Description |
|-----|------|--------|-------------|
| 1.0 | 2026-04-07 | CEO / TBC | Initial charter — established in founding session |

---

## 1. Role Definition

The Technical & Business Consultant (TBC) is a direct report to the CEO of SentryAgent.ai. The TBC operates as an independent advisory function — separate from the engineering execution chain.

## 2. Reporting Structure

```
CEO (Human)
├── Virtual CTO → engineering execution, follows OpenSpec Protocol
├── Lead Validator → independent V&V audit, follows OpenSpec Protocol
└── Technical & Business Consultant (TBC) → advisory only, reports to CEO only
```

- TBC reports exclusively to the CEO
- TBC does NOT interact with the CTO or Lead Validator directly
- TBC does NOT manage any engineering work
- TBC does NOT follow OpenSpec Protocol (advisory role, not execution role)

## 3. Scope of Responsibilities

- Advise the CEO on strategic and technical decisions before they are delegated to the CTO
- Review processes and identify gaps, risks, or improvement opportunities
- Maintain portfolio-level thinking across all SentryAgent.ai products and initiatives
- Challenge assumptions independently — without being inside the execution chain
- Serve as the CEO's thinking partner as the virtual factory scales

## 4. Document & Change Authority

TBC MAY propose changes to CLAUDE.md, README.md, and PRD.md.

TBC MAY NOT implement those changes directly. All changes to controlled documents follow this process:

| Step | Owner |
|------|-------|
| Identify and document the proposed change | TBC (in meeting minutes) |
| Review and approve the proposal | CEO |
| Instruct CTO to implement via OpenSpec Protocol | CEO → CTO |
| Raise OpenSpec change, implement, and commit | CTO |

## 5. Record Keeping (ISO 9000)

**"If it is not written, it does not exist."**

TBC maintains written records of all working sessions with the CEO. Records are stored in:

```
TBC/
├── charter.md                      # This document
└── minutes/
    └── TBC-MIN-NNN-YYYY-MM-DD.md   # Meeting minutes, sequentially numbered
```

All minutes follow the standard format defined in TBC-MIN-001.

## 6. Operating Principles

1. Advisory only — influence flows through the CEO, never direct to the team
2. Written record of every session — no exceptions
3. Independent perspective — not captured by execution priorities
4. ISO 9000 discipline — every document has revision history, date, and owner
5. Portfolio thinking — always considering the broader virtual factory, not just the current sprint

---

**File: `TBC/minutes/TBC-MIN-001-2026-04-07.md`** (new file, 181 lines)

# Meeting Minutes

**Document No.:** TBC-MIN-001
**Project:** SentryAgent.ai AgentIdP
**Meeting Type:** Working Session — CEO & TBC (Inaugural)

---

## Revision History

| Rev | Date | Author | Description |
|-----|------|--------|-------------|
| 1.0 | 2026-04-07 | TBC | Initial minutes — inaugural session |

---

## Meeting Details

| Field | Detail |
|-------|--------|
| Date | 2026-04-07 |
| Participants | CEO (Human), TBC (Claude — Technical & Business Consultant) |
| Session Type | Strategic advisory |

---

## 1. Project Status at Session Open

The following state was confirmed at session open via hub message review and git status:

| Item | Status |
|------|--------|
| Phase | Phase 6 — COMPLETE (dev freeze in effect) |
| V&V | PASS — all 6 issues resolved |
| Field trial | Unblocked but not yet started |
| Pending commit | 5 uncommitted files (V&V resolution changes) — authorized but not executed by CTO |
| Active OpenSpec changes | 0 at session open |

---

## 2. Topics Discussed

### 2.1 Process Gap — Authorization vs. Execution Handoff

**Issue raised:** The CTO received CEO authorization (msg #93) to commit outstanding V&V resolution changes. The session ended before the CTO confirmed completion. Five files remained uncommitted, and field trial status was ambiguous.

**Root cause identified:** The process had no completion gate. Authorization was treated as the finish line. There was no protocol requiring the CTO to confirm execution back to the CEO.

**CEO direction:** Treat this as a process flaw, not a blame issue. Identify the gap and fix it.

**Resolution:** TBC proposed three process improvements:
1. Mandatory completion confirmation after every CEO-authorized action
2. End-of-session summary required before CTO closes any session
3. Explicit "authorized vs. done" vocabulary — never interchangeable

**Outcome:** CEO approved all three recommendations. OpenSpec change `process-governance-handoff-gap` raised and implemented. CLAUDE.md, README.md, and `docs/engineering/08-workflow.md` updated. *(See OpenSpec change record for full detail.)*

---

### 2.2 Company Vision Confirmed

**CEO confirmed the primary objective:**

> *"SentryAgent.ai is building the world's first free, open-source identity provider specifically for AI agents — think of it as 'Auth0 for agents.'"*

This statement is the north star for all product, process, and portfolio decisions.

---

### 2.3 Virtual Factory Model — Strategic Direction

**CEO introduced the virtual factory concept:**

SentryAgent.ai operates as a virtual factory:
- CEO is human — sole human principal
- Entire engineering team is virtual (LLM-powered)
- CEO has 30+ years managing global engineering teams, building real-time unified communications products generating hundreds of billions in sales
- AgentIdP (Phase 6 complete) is proof of concept for the factory model

**Strategic direction stated by CEO:** The company must now think beyond a single product. The virtual factory must be capable of running multiple product pipelines simultaneously.

**Three goals established:**

| # | Goal |
|---|------|
| 1 | **Product** — AgentIdP: "Auth0 for agents." Ship, prove, grow. |
| 2 | **Process** — World-class engineering operations. The virtual factory is the competitive moat. |
| 3 | **People (Virtual)** — Empower the virtual team with the right structure and governance. |

---

### 2.4 TBC Role — Established

**CEO decision:** A Technical & Business Consultant (TBC) role is established as a direct report to the CEO, alongside the Virtual CTO and Lead Validator.

**Org structure confirmed:**

```
CEO (Human)
├── Virtual CTO → engineering execution, OpenSpec Protocol
├── Lead Validator → independent V&V audit, OpenSpec Protocol
└── Technical & Business Consultant (TBC) → advisory only, CEO only
```

**Key characteristics of TBC role:**
- Reports to CEO only — no interaction with CTO or Validator
- Not bound by OpenSpec Protocol
- Advisory function — does not execute engineering work
- Maintains written records of all CEO sessions (ISO 9000 discipline)

---

### 2.5 Change Authority — Governance Decision

**Question raised:** Should TBC be allowed to make changes to CLAUDE.md, README.md, and PRD.md directly?

**Decision:** TBC may PROPOSE changes. TBC may NOT implement them directly.

**Approved process:**

| Step | Owner |
|------|-------|
| Identify and document proposed change | TBC (in meeting minutes) |
| Review and approve | CEO |
| Instruct CTO to implement via OpenSpec Protocol | CEO → CTO |
| Raise OpenSpec change, implement, commit | CTO |

**Rationale:** All changes to controlled documents must go through OpenSpec. This keeps the change audit trail clean and ensures the CTO remains the sole execution owner. TBC influence flows through the CEO — not directly to the team.

---

### 2.6 TBC Directory — Established

TBC directory created at project root:

```
TBC/
├── charter.md                      # TBC role charter (TBC-CHARTER-001)
└── minutes/
    └── TBC-MIN-001-2026-04-07.md   # This document
```

ISO 9000 convention adopted: all documents carry document number, revision history, date, and author.

---

## 3. Decisions Made

| # | Decision | Owner |
|---|----------|-------|
| D1 | Process gap (authorization vs. execution) fixed via OpenSpec change `process-governance-handoff-gap` | CTO (implemented) |
| D2 | Company vision confirmed: "Auth0 for agents" | CEO |
| D3 | Virtual factory must scale to multiple products — strategic direction set | CEO |
| D4 | Three-goal framework established: Product / Process / People | CEO |
| D5 | TBC role established as CEO direct report | CEO |
| D6 | TBC operates outside OpenSpec; proposes changes only — CTO implements | CEO |
| D7 | TBC directory and ISO 9000 minutes convention established | CEO / TBC |

---

## 4. Open Items / Actions

| # | Action | Owner | Status |
|---|--------|-------|--------|
| A1 | CTO to commit outstanding V&V resolution changes and confirm with commit hash | CTO | **Pending — awaiting CEO instruction to CTO** |
| A2 | CEO to authorize field trial execution once A1 is confirmed | CEO | Pending A1 |
| A3 | Update CLAUDE.md to add TBC role to org structure and startup protocol | CTO via OpenSpec | **Proposed — pending CEO authorization** |
| A4 | Define next product(s) for the virtual factory | CEO / TBC | Future session |

---

## 5. Next Session Priorities

1. Close A1 — instruct CTO to execute the pending commit
2. Authorize field trial (A2) once commit is confirmed
3. Begin scoping A3 — update controlled documents to reflect TBC role formally
4. Start portfolio thinking: what is product #2 for the virtual factory?

---

*End of minutes — TBC-MIN-001 | Rev 1.0 | 2026-04-07*

---

**File: `TBC/minutes/TBC-MIN-002-2026-04-07.md`** (new file, 89 lines)

# Meeting Minutes

**Document No.:** TBC-MIN-002
**Project:** SentryAgent.ai AgentIdP
**Meeting Type:** Working Session — CEO & TBC (Session 2 — Opening)

---

## Revision History

| Rev | Date | Author | Description |
|-----|------|--------|-------------|
| 1.0 | 2026-04-07 | TBC | Initial minutes — session 2 opening |

---

## Meeting Details

| Field | Detail |
|-------|--------|
| Date | 2026-04-07 |
| Participants | CEO (Human), TBC (Claude — Technical & Business Consultant) |
| Session Type | Strategic advisory — opening exchange |

---

## 1. Project Status at Session Open

Carried forward from TBC-MIN-001:

| Item | Status |
|------|--------|
| Phase | Phase 6 — COMPLETE (dev freeze in effect) |
| V&V | PASS — all 6 issues resolved |
| Field trial | Unblocked but not yet started |
| A1: CTO pending commit | Still outstanding — not confirmed in prior session |
| A2: Field trial authorization | Pending A1 |
| A3: CLAUDE.md TBC update | Proposed — pending CEO authorization to CTO |

---

## 2. Topics Discussed

### 2.1 Session Agenda — Established

CEO confirmed the agenda for this session:

> *"We discuss our company needs and based on that we will develop our agent."*

This session will focus on:
1. Identifying company needs / strategic priorities
2. Scoping and developing the next agent based on those needs

Implementation (if any) will follow the standard CEO → CTO delegation path.

### 2.2 TBC Channel — Created

`#tbc-ceo` channel created on central hub (did not exist previously). All future TBC ↔ CEO communication will use this channel.

---

## 3. Decisions Made

| # | Decision | Owner |
|---|----------|-------|
| D1 | Session agenda: discuss company needs, then develop an agent | CEO |

---

## 4. Open Items / Actions

| # | Action | Owner | Status |
|---|--------|-------|--------|
| A1 | CTO to commit outstanding V&V resolution changes + confirm with hash | CTO | Pending |
| A2 | CEO to authorize field trial once A1 confirmed | CEO | Pending A1 |
| A3 | Update CLAUDE.md to formally add TBC to org structure | CTO via OpenSpec | Proposed — pending CEO authorization |
| A4 | Discuss company needs → scope next agent | CEO / TBC | **In progress — resuming next exchange** |

---

## 5. Next Session Priorities

1. CEO to present company needs / strategic priorities
2. TBC to advise on agent scoping based on those needs
3. CEO to delegate to CTO if implementation is authorized

---

*End of minutes — TBC-MIN-002 | Rev 1.0 | 2026-04-07 | Session paused — CEO on break*

---

**File: `VALIDATOR.md`** (new file, 275 lines)
# SentryAgent.ai — V&V Architect (Lead Validator)
|
||||||
|
|
||||||
|
## IDENTITY & INDEPENDENCE
|
||||||
|
|
||||||
|
You are the **V&V Architect (Lead Validator)** for SentryAgent.ai AgentIdP.
|
||||||
|
|
||||||
|
- **Instance ID:** `LeadValidator`
|
||||||
|
- **Role:** Independent verification and validation — you are NOT part of the engineering team
|
||||||
|
- **Authority:** You report findings directly to the CEO. The CTO has no authority to dismiss your findings.
|
||||||
|
- **Mandate:** Ensure that everything the engineering team built actually matches what was specified in the PRD and OpenSpec
|
||||||
|
- **Isolation:** Do NOT carry context from any other project or session. This is a private, independent audit session.
|
||||||
|
|
||||||
|
You are a check on the system — not a builder. You never implement features, never approve architectural changes, and never take direction from the Virtual CTO. Your only job is to find gaps, deviations, and violations and formally log them.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## STARTUP PROTOCOL (Execute on every new session — no exceptions)

Execute these steps in order before doing anything else:

### Step 1 — Read the source of truth
Read `/home/ubuntu/vj_ai_agents_dev/sentryagent-idp/README.md` in full.
This is the PRD. Everything the engineering team built must conform to it.

### Step 2 — Register on central hub
Register as `LeadValidator` on the central hub.

### Step 3 — Check existing open issues
Read all files in `/home/ubuntu/vj_ai_agents_dev/sentryagent-idp/openspec/vv_audit/` — this is your ledger.
List any issues currently with status `OPEN` or `DISPUTED`.

### Step 4 — Check #vv-findings channel
Check the `#vv-findings` channel on the central hub for any recent messages from the CTO regarding issue resolution or disputes.

### Step 5 — Report readiness to CEO
Post a status message to the `#vv-findings` channel:

- How many open/disputed issues exist
- Whether you are performing a fresh audit or continuing an existing one
- What you plan to audit this session

### Step 6 — Begin audit
Execute the audit methodology below.

---
## AUDIT METHODOLOGY

### Phase A — OpenSpec Completeness Check

For every archived OpenSpec change, verify the tasks were fully implemented.

**Archived changes location:** `/home/ubuntu/vj_ai_agents_dev/sentryagent-idp/openspec/changes/archive/`

For each archived change:

1. Read its `tasks.md`
2. All tasks marked `[x]` — verify the corresponding code actually exists and matches the task description
3. Any task marked `[ ]` — this is a BLOCKER finding (incomplete implementation)
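The checkbox scan in steps 1–3 can be sketched as a small helper. This is illustrative only; the function name and shape are not part of the PRD or the existing tooling.

```typescript
// Flag unchecked tasks in an archived change's tasks.md.
// A "- [ ]" entry is an incomplete task, and therefore a BLOCKER finding.
export function findOpenTasks(tasksMd: string): string[] {
  return tasksMd
    .split("\n")
    .filter((line) => /^\s*-\s*\[ \]/.test(line))
    .map((line) => line.replace(/^\s*-\s*\[ \]\s*/, "").trim());
}
```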
### Phase B — API Surface Audit

Verify every API endpoint has a corresponding OpenAPI spec.

**OpenAPI specs location:** `/home/ubuntu/vj_ai_agents_dev/sentryagent-idp/docs/openapi/`

For every route registered in `src/routes/` and `src/app.ts`:

1. Confirm there is an OpenAPI spec entry covering that endpoint
2. Confirm the spec matches the implementation (method, path, request schema, response schemas, auth requirement)
3. Any endpoint without a spec → BLOCKER
4. Any endpoint where spec and implementation diverge → MAJOR
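As a sketch, the endpoint-versus-spec comparison in steps 1–4 reduces to a set difference. Extracting the route list from `src/routes/` is project-specific and omitted; the names here are illustrative.

```typescript
// Classify spec gaps per the severity rules above: an implemented route
// with no OpenAPI entry is a BLOCKER.
export function findUnspeccedRoutes(
  implemented: string[], // e.g. "GET /agents"
  specced: string[],
): string[] {
  const spec = new Set(specced);
  return implemented.filter((route) => !spec.has(route));
}
```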
### Phase C — TypeScript Standards Audit

Read source files in `src/` and verify:

1. No `any` types used anywhere — search for `: any`, `as any`, `<any>`
2. All public classes and methods have JSDoc comments
3. `tsconfig.json` has `"strict": true` and all strict flags enabled
4. Custom error hierarchy: all errors extend `SentryAgentError`

Violations:

- `any` type usage → MAJOR per occurrence
- Missing JSDoc on public methods → MINOR per file
- Disabled strict flags → BLOCKER
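The `any`-usage scan in item 1 can be sketched as a counter over one file's text, using the same three patterns the phase lists (illustrative helper, not an existing tool):

```typescript
// Count `any`-type occurrences in a source file's text.
// Each match is a MAJOR finding under the rules above.
export function countAnyUsage(source: string): number {
  const patterns = [/:\s*any\b/g, /\bas any\b/g, /<any>/g];
  return patterns.reduce(
    (total, re) => total + (source.match(re) ?? []).length,
    0,
  );
}
```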
### Phase D — DRY Principle Audit

Search for code duplication:

1. Look for identical or near-identical logic blocks across files
2. Check that all crypto operations live in `src/utils/crypto.ts`
3. Check that all JWT operations live in `src/utils/jwt.ts`
4. Check that all validation logic lives in `src/utils/validators.ts`
5. Check that all error classes live in `src/utils/errors.ts` or `src/errors/`
6. Check that no controller directly accesses the database (must go through services)

Violations: DRY violation → MAJOR (BLOCKER if in a critical path)
### Phase E — SOLID Principles Audit

Spot-check key services:

1. `AgentService` — does agent CRUD only (no token logic, no audit logic)
2. `OAuth2Service` — does token issuance only (no agent CRUD, no billing)
3. `CredentialService` — does credential management only
4. `AuditService` — does audit logging only
5. All services use constructor injection (no direct `new Dependency()` inside business logic)
6. Services depend on interfaces/abstractions, not concrete implementations

Violations: SRP violation → MAJOR
### Phase F — Test Coverage Audit

Check test completeness:

1. Every service in `src/services/` has a corresponding test in `tests/`
2. Every API route has integration tests
3. Run `npm test -- --coverage` and check that overall coverage is >80%
4. Check that edge cases are covered: null inputs, invalid inputs, auth failures, rate limits

Violations:

- Coverage <80% → BLOCKER
- Missing integration test for an endpoint → MAJOR
- Missing edge case tests → MINOR
### Phase G — AGNTCY Compliance Audit

Verify AGNTCY alignment (per PRD Section 3.1 and Phase 3 scope):

1. Agents have unique, immutable IDs
2. Authentication uses OAuth 2.0 Client Credentials flow
3. Authorization uses scope-based access control
4. Audit logs are immutable
5. Agent lifecycle operations (provision, rotate, revoke) are fully implemented
6. W3C DID support implemented (Phase 3 deliverable)
7. AGNTCY conformance tests pass (see `tests/agntcy-conformance/`)

Violations: AGNTCY deviation → BLOCKER
### Phase H — Security Audit

Scan for OWASP Top 10 vulnerabilities:

1. SQL injection — all DB queries use parameterized statements
2. Authentication bypass — all protected routes have auth middleware
3. Sensitive data exposure — no secrets in logs or error responses
4. Broken access control — tenant isolation enforced on all queries
5. Security headers — helmet middleware applied
6. Rate limiting — enforced on token endpoints

Violations: Security finding → BLOCKER

---
## ISSUE FORMAT

Every finding is written as a file in the shared ledger:
`/home/ubuntu/vj_ai_agents_dev/sentryagent-idp/openspec/vv_audit/`

**Filename:** `VV_ISSUE_<NNN>.md` (zero-padded, e.g., `VV_ISSUE_001.md`)

**File template:**

```markdown
# VV_ISSUE_<NNN> — <Short title>

**Status:** OPEN | RESOLVED | DISPUTED
**Severity:** BLOCKER | MAJOR | MINOR
**Category:** SPEC_DEVIATION | DRY_VIOLATION | TYPE_VIOLATION | SOLID_VIOLATION | TEST_GAP | SECURITY | AGNTCY | DOCS
**Logged by:** LeadValidator
**Date:** <ISO date>
**Audit phase:** Phase A–H label

## Finding

<Clear description of what is wrong>

## Evidence

<File path(s) and line numbers where the violation exists>

## Required Action

<What must be done to resolve this finding>

## CTO Response

<Leave blank — CTO fills this in>

## Resolution

<Leave blank — filled on resolution>
```

---
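The zero-padded numbering convention can be sketched as a small helper that derives the next filename from the files already in the ledger (illustrative only, not an existing tool):

```typescript
// Derive the next ledger filename, following the VV_ISSUE_<NNN>.md
// convention above. Non-matching files (e.g. LEDGER.md) are ignored.
export function nextIssueFilename(existing: string[]): string {
  const max = existing.reduce((hi, name) => {
    const m = /^VV_ISSUE_(\d{3})\.md$/.exec(name);
    return m ? Math.max(hi, Number(m[1])) : hi;
  }, 0);
  return `VV_ISSUE_${String(max + 1).padStart(3, "0")}.md`;
}
```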
## SEVERITY DEFINITIONS

| Severity | Definition | Who can close |
|----------|-----------|---------------|
| **BLOCKER** | Prevents release. PRD requirement missing, security vulnerability, <80% test coverage, spec-implementation mismatch on a core feature | CTO resolves, Validator confirms. CEO notified only if CTO and Validator cannot agree. |
| **MAJOR** | Significant deviation from standards. `any` types, DRY violation, missing integration test, SOLID violation | CTO resolves, Validator confirms |
| **MINOR** | Standards improvement. Missing JSDoc, minor duplication, cosmetic spec gap | CTO resolves, no confirmation needed |

---
## COMMUNICATION PROTOCOL

### Primary channel: #vv-cto-resolution (Lead Validator ↔ CTO)

All findings — routine, MAJOR, and BLOCKER — go to `#vv-cto-resolution` first.
The CTO is responsible for reviewing and resolving all findings with the engineering team.
The Lead Validator confirms resolution in the same channel.

**Do NOT post findings to `#vpe-cto-approvals` (CEO channel) unless escalation is required (see below).**

### Routine findings

After each audit phase, post a summary to `#vv-cto-resolution`:

- Phase completed
- Number of issues found (BLOCKER / MAJOR / MINOR)
- Issue file names

### BLOCKER findings

Post immediately to `#vv-cto-resolution` with full finding detail.
The CTO must acknowledge and provide a resolution plan within the same session.
**CEO is NOT notified of BLOCKERs by default — the CTO owns resolution.**

### Disputes

If the CTO marks an issue as `DISPUTED`:

1. Read the CTO's technical justification in the issue file
2. Evaluate whether the justification is valid against the PRD
3. If you accept the justification → change status to `RESOLVED`, note reason in `#vv-cto-resolution`
4. If you reject the justification → change status back to `OPEN`, add your counter-argument in `#vv-cto-resolution`, and attempt a second round of resolution with the CTO
5. **Only if two rounds of resolution fail** → escalate to `#vpe-cto-approvals` for CEO decision, with a clear summary of both positions

### CEO escalation (last resort only)

Escalate to `#vpe-cto-approvals` ONLY when:

- CTO and Lead Validator have attempted resolution and remain deadlocked after two rounds
- Include: issue ID, CTO's position, Lead Validator's position, and why they are irreconcilable

### Session close

When you have completed your audit session, post a final summary to `#vv-cto-resolution`:

- Total issues logged this session
- Breakdown by severity
- Overall V&V status: PASS (0 BLOCKERs) | BLOCKED (≥1 BLOCKER open)

Also post a brief one-line status to `#vv-findings` for informational tracking.

---
## AUDIT LEDGER INDEX

After each session, update `/home/ubuntu/vj_ai_agents_dev/sentryagent-idp/openspec/vv_audit/LEDGER.md`:

- Total issues logged to date
- Open / Resolved / Disputed counts
- Date of last audit
- Overall release gate status

---
## INDEPENDENCE PRINCIPLES

1. **You do not take orders from the CTO.** The CTO can respond to your findings in the issue file. Only the CEO can instruct you to drop a BLOCKER.
2. **You do not implement fixes.** If you find a problem, you log it. The CTO's team fixes it.
3. **You do not negotiate severity.** Severity is set by the PRD requirements and these definitions. If the CTO disagrees, it becomes DISPUTED and goes to CEO.
4. **You do not skip phases.** Every audit session runs all phases, or explicitly documents why a phase was skipped.
5. **You are not adversarial.** Your goal is product quality, not finding fault. A clean audit is a success.

---
## STANDARDS REFERENCE (from PRD Section 6)

| Standard | Requirement |
|----------|------------|
| TypeScript | Strict mode, zero `any` types |
| DRY | Zero code duplication, logic lives in exactly one place |
| SOLID | Single responsibility per service, constructor injection |
| OpenAPI | Spec exists BEFORE implementation, spec matches implementation |
| Tests | >80% coverage, all endpoints integration-tested |
| JSDoc | All public classes and methods documented |
| Errors | All errors typed, extend SentryAgentError hierarchy |
| Security | No OWASP Top 10 vulnerabilities |
| AGNTCY | Full compliance with Linux Foundation agent identity standard |
| Performance | Token endpoints <100ms, all others <200ms |

cli/README.md (new file, 348 lines)
@@ -0,0 +1,348 @@
# sentryagent CLI

The official command-line interface for [SentryAgent.ai](https://sentryagent.ai) — manage agents, issue OAuth2 tokens, rotate credentials, and stream audit logs from your terminal.

---

## Installation

### From npm (once published)

```bash
npm install -g sentryagent
```

### From source

```bash
cd cli/
npm install
npm run build
npm install -g .
```

---
## Configuration

Before using any command, configure the CLI with your API endpoint and credentials:

```bash
sentryagent configure
```

You will be prompted for:

| Field | Description |
|---------------|--------------------------------------------------|
| API URL | The SentryAgent.ai API base URL (e.g. `https://api.sentryagent.ai`) |
| Client ID | Your tenant client ID |
| Client Secret | Your tenant client secret |

Configuration is stored at `~/.sentryagent/config.json` with permissions `0600`.

If any command is run before `sentryagent configure` has been called, the CLI exits with:

```
Not configured. Run `sentryagent configure` first.
```

---
## Commands

### `sentryagent --version` / `-v`

Output the installed CLI version.

```bash
sentryagent --version
# 1.0.0
```

### `sentryagent --help` / `-h`

Show all available commands and global options.

```bash
sentryagent --help
```

---
### `sentryagent configure`

Interactively configure the CLI.

```bash
sentryagent configure
```

**Prompts:**

```
SentryAgent CLI Configuration
────────────────────────────────────────
API URL (e.g. https://api.sentryagent.ai): https://api.sentryagent.ai
Client ID: tenant_01ABC...
Client Secret: ****

✓ Configuration saved to ~/.sentryagent/config.json
```

---
### `sentryagent register-agent`

Register a new agent with the identity provider.

```bash
sentryagent register-agent --name <name> [--description <desc>]
```

**Options:**

| Flag | Required | Description |
|-------------------|----------|---------------------|
| `--name <name>` | Yes | Agent display name |
| `--description` | No | Agent description |

**Example:**

```bash
sentryagent register-agent --name "billing-agent" --description "Handles billing workflows"
```

**Output:**

```
✓ Agent registered successfully

Agent ID: 01ARZ3NDEKTSV4RRFFQ69G5FAV
Name: billing-agent
Description: Handles billing workflows
Status: active
```

---
### `sentryagent list-agents`

List all agents registered for your tenant, displayed as a formatted table.

```bash
sentryagent list-agents
```

**Output:**

```
AGENT ID                      NAME             STATUS    CREATED AT
────────────────────────────────────────────────────────────────────────────
01ARZ3NDEKTSV4RRFFQ69G5FAV    billing-agent    active    4/2/2026, 9:00:00 AM
01ARZ3NDEKTSV4RRFFQ69G5FAX    auth-agent       active    4/1/2026, 3:00:00 PM
────────────────────────────────────────────────────────────────────────────
Total: 2
```

---
### `sentryagent issue-token`

Issue an OAuth2 `client_credentials` access token for a specific agent.

```bash
sentryagent issue-token --agent-id <id>
```

**Options:**

| Flag | Required | Description |
|--------------------|----------|-------------------------|
| `--agent-id <id>` | Yes | Target agent ID |

**Example:**

```bash
sentryagent issue-token --agent-id 01ARZ3NDEKTSV4RRFFQ69G5FAV
```

**Output:**

```
✓ Token issued successfully

Access Token:
eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9...

Token Type: Bearer
Expires In: 3600s
Expires At: 2026-04-02T10:00:00.000Z
```

---
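Under the hood, a `client_credentials` grant is a form-encoded POST; a minimal sketch of building that request is below. The field names follow the standard OAuth2 flow (RFC 6749); the exact endpoint path and internals of the CLI are not shown here and may differ.

```typescript
// Build a standard OAuth2 client_credentials token request.
// Illustrative only; not taken from the CLI source.
export function buildTokenRequest(clientId: string, clientSecret: string) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "client_credentials",
      client_id: clientId,
      client_secret: clientSecret,
    }).toString(),
  };
}
```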
### `sentryagent rotate-credentials`

Rotate the client secret for an agent. Prompts for confirmation before proceeding.

```bash
sentryagent rotate-credentials --agent-id <id>
```

**Options:**

| Flag | Required | Description |
|--------------------|----------|-------------------------|
| `--agent-id <id>` | Yes | Target agent ID |

**Example:**

```bash
sentryagent rotate-credentials --agent-id 01ARZ3NDEKTSV4RRFFQ69G5FAV
```

**Output:**

```
⚠ This will invalidate the current secret for agent 01ARZ3NDEKTSV4RRFFQ69G5FAV
This will invalidate the current secret. Continue? [y/N] y

✓ Credentials rotated successfully

Client ID: 01ARZ3NDEKTSV4RRFFQ69G5FAV
Client Secret: cs_new_secret_value_here

Store the new client secret securely — it will not be shown again.
```

---
### `sentryagent tail-audit-log`

Poll the audit log API every 5 seconds and stream new events to stdout. Press **Ctrl+C** to stop.

```bash
sentryagent tail-audit-log [--agent-id <id>]
```

**Options:**

| Flag | Required | Description |
|--------------------|----------|------------------------------------|
| `--agent-id <id>` | No | Filter events for a specific agent |

**Example (all events):**

```bash
sentryagent tail-audit-log
```

**Example (filtered by agent):**

```bash
sentryagent tail-audit-log --agent-id 01ARZ3NDEKTSV4RRFFQ69G5FAV
```

**Output:**

```
Tailing audit log — press Ctrl+C to stop
────────────────────────────────────────────────────────────
4/2/2026, 9:05:00 AM  agent.token.issued  outcome=success  agent=01ARZ3NDEKTSV...  id=evt_01...
4/2/2026, 9:10:03 AM  agent.registered    outcome=success  id=evt_02...
^C

Stopped.
```

---
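The core of a poll-based tail like this is deduplication: each 5-second tick fetches the log and prints only events not seen in earlier ticks. A minimal sketch of that step is below; the event shape and function name are illustrative, not the CLI's actual internals, and the fetch/timer plumbing is omitted.

```typescript
// Return only events not seen before, and remember them for the next tick.
// A polling loop would call this inside setInterval(..., 5000).
type AuditEvent = { id: string; action: string };

export function selectNewEvents(
  events: AuditEvent[],
  seen: Set<string>,
): AuditEvent[] {
  const fresh = events.filter((e) => !seen.has(e.id));
  for (const e of fresh) seen.add(e.id);
  return fresh;
}
```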
### `sentryagent completion`

Output shell completion scripts.

#### Bash

```bash
sentryagent completion bash
```

To enable permanently, add to `~/.bashrc` or `~/.bash_profile`:

```bash
source <(sentryagent completion bash)
```

Or write to a file:

```bash
sentryagent completion bash > ~/.bash_completion.d/sentryagent
```

#### Zsh

```bash
sentryagent completion zsh
```

To enable permanently, add to `~/.zshrc`:

```bash
source <(sentryagent completion zsh)
```

Or write to a file in your `$fpath`:

```bash
sentryagent completion zsh > ~/.zsh/completions/_sentryagent
```

---
## Shell Completion Setup

### Bash (one-time setup)

```bash
mkdir -p ~/.bash_completion.d
sentryagent completion bash > ~/.bash_completion.d/sentryagent
echo 'source ~/.bash_completion.d/sentryagent' >> ~/.bashrc
source ~/.bashrc
```

### Zsh (one-time setup)

```bash
mkdir -p ~/.zsh/completions
sentryagent completion zsh > ~/.zsh/completions/_sentryagent
echo 'fpath=(~/.zsh/completions $fpath)' >> ~/.zshrc
echo 'autoload -Uz compinit && compinit' >> ~/.zshrc
source ~/.zshrc
```

After setup, pressing **Tab** after `sentryagent` will autocomplete commands and flags.

---
## Configuration File

The config file is stored at `~/.sentryagent/config.json`:

```json
{
  "apiUrl": "https://api.sentryagent.ai",
  "clientId": "tenant_01ABC...",
  "clientSecret": "cs_secret_value"
}
```

The directory is created with mode `0700` and the file with mode `0600` to prevent other users from reading your credentials.

---
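The permission behaviour described above can be sketched with Node's `fs` API: the directory gets mode `0700` at creation and the file mode `0600`. The helper name and the injectable directory parameter are illustrative, not the CLI's actual internals.

```typescript
// Write a config file with restrictive permissions, as described above.
// Note: the mode option only applies when the file is first created.
import { mkdirSync, writeFileSync } from "node:fs";
import { join } from "node:path";

export function saveConfig(config: object, dir: string): string {
  mkdirSync(dir, { recursive: true, mode: 0o700 });
  const file = join(dir, "config.json");
  writeFileSync(file, JSON.stringify(config, null, 2) + "\n", { mode: 0o600 });
  return file;
}
```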
## Environment

- Node.js >= 18.0.0 is required (uses the built-in `fetch` API)
- All HTTP requests use OAuth2 `client_credentials` tokens fetched automatically from your configuration
- Tokens are cached in memory for the duration of the CLI session (refreshed 30 seconds before expiry)
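The "refresh 30 seconds before expiry" rule above reduces to a freshness check on the cached token. A minimal sketch, with illustrative names (the CLI's actual cache implementation may differ):

```typescript
// A cached token is reused only while more than 30 seconds remain
// before its expiry; otherwise the caller should fetch a new one.
type CachedToken = { accessToken: string; expiresAt: number }; // ms since epoch

const REFRESH_MARGIN_MS = 30_000;

export function isFresh(token: CachedToken | null, now: number): boolean {
  return token !== null && now < token.expiresAt - REFRESH_MARGIN_MS;
}
```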

cli/package-lock.json (generated, new file, 411 lines)
@@ -0,0 +1,411 @@
{
  "name": "sentryagent",
  "version": "1.0.0",
  "lockfileVersion": 3,
  "requires": true,
  "packages": {
    "": {
      "name": "sentryagent",
      "version": "1.0.0",
      "license": "MIT",
      "dependencies": {
        "@types/unzipper": "^0.10.11",
        "chalk": "^5.3.0",
        "commander": "^12.1.0",
        "unzipper": "^0.12.3"
      },
      "bin": {
        "sentryagent": "dist/index.js"
      },
      "devDependencies": {
        "@types/node": "^20.12.7",
        "ts-node": "^10.9.2",
        "typescript": "^5.4.5"
      },
      "engines": {
        "node": ">=18.0.0"
      }
    },
    "node_modules/@cspotcode/source-map-support": {
      "version": "0.8.1",
      "resolved": "https://registry.npmjs.org/@cspotcode/source-map-support/-/source-map-support-0.8.1.tgz",
      "integrity": "sha512-IchNf6dN4tHoMFIn/7OE8LWZ19Y6q/67Bmf6vnGREv8RSbBVb9LPJxEcnwrcwX6ixSvaiGoomAUvu4YSxXrVgw==",
      "dev": true,
      "license": "MIT",
      "dependencies": {
        "@jridgewell/trace-mapping": "0.3.9"
      },
      "engines": {
        "node": ">=12"
      }
    },
    "node_modules/@jridgewell/resolve-uri": {
      "version": "3.1.2",
      "resolved": "https://registry.npmjs.org/@jridgewell/resolve-uri/-/resolve-uri-3.1.2.tgz",
      "integrity": "sha512-bRISgCIjP20/tbWSPWMEi54QVPRZExkuD9lJL+UIxUKtwVJA8wW1Trb1jMs1RFXo1CBTNZ/5hpC9QvmKWdopKw==",
      "dev": true,
      "license": "MIT",
      "engines": {
        "node": ">=6.0.0"
      }
    },
    "node_modules/@jridgewell/sourcemap-codec": {
      "version": "1.5.5",
      "resolved": "https://registry.npmjs.org/@jridgewell/sourcemap-codec/-/sourcemap-codec-1.5.5.tgz",
      "integrity": "sha512-cYQ9310grqxueWbl+WuIUIaiUaDcj7WOq5fVhEljNVgRfOUhY9fy2zTvfoqWsnebh8Sl70VScFbICvJnLKB0Og==",
      "dev": true,
      "license": "MIT"
    },
    "node_modules/@jridgewell/trace-mapping": {
      "version": "0.3.9",
      "resolved": "https://registry.npmjs.org/@jridgewell/trace-mapping/-/trace-mapping-0.3.9.tgz",
      "integrity": "sha512-3Belt6tdc8bPgAtbcmdtNJlirVoTmEb5e2gC94PnkwEW9jI6CAHUeoG85tjWP5WquqfavoMtMwiG4P926ZKKuQ==",
      "dev": true,
      "license": "MIT",
      "dependencies": {
        "@jridgewell/resolve-uri": "^3.0.3",
        "@jridgewell/sourcemap-codec": "^1.4.10"
      }
    },
    "node_modules/@tsconfig/node10": {
      "version": "1.0.12",
      "resolved": "https://registry.npmjs.org/@tsconfig/node10/-/node10-1.0.12.tgz",
      "integrity": "sha512-UCYBaeFvM11aU2y3YPZ//O5Rhj+xKyzy7mvcIoAjASbigy8mHMryP5cK7dgjlz2hWxh1g5pLw084E0a/wlUSFQ==",
      "dev": true,
      "license": "MIT"
    },
    "node_modules/@tsconfig/node12": {
      "version": "1.0.11",
      "resolved": "https://registry.npmjs.org/@tsconfig/node12/-/node12-1.0.11.tgz",
      "integrity": "sha512-cqefuRsh12pWyGsIoBKJA9luFu3mRxCA+ORZvA4ktLSzIuCUtWVxGIuXigEwO5/ywWFMZ2QEGKWvkZG1zDMTag==",
      "dev": true,
      "license": "MIT"
    },
    "node_modules/@tsconfig/node14": {
      "version": "1.0.3",
      "resolved": "https://registry.npmjs.org/@tsconfig/node14/-/node14-1.0.3.tgz",
      "integrity": "sha512-ysT8mhdixWK6Hw3i1V2AeRqZ5WfXg1G43mqoYlM2nc6388Fq5jcXyr5mRsqViLx/GJYdoL0bfXD8nmF+Zn/Iow==",
      "dev": true,
      "license": "MIT"
    },
    "node_modules/@tsconfig/node16": {
      "version": "1.0.4",
      "resolved": "https://registry.npmjs.org/@tsconfig/node16/-/node16-1.0.4.tgz",
      "integrity": "sha512-vxhUy4J8lyeyinH7Azl1pdd43GJhZH/tP2weN8TntQblOY+A0XbT8DJk1/oCPuOOyg/Ja757rG0CgHcWC8OfMA==",
      "dev": true,
      "license": "MIT"
    },
    "node_modules/@types/node": {
      "version": "20.19.37",
      "resolved": "https://registry.npmjs.org/@types/node/-/node-20.19.37.tgz",
      "integrity": "sha512-8kzdPJ3FsNsVIurqBs7oodNnCEVbni9yUEkaHbgptDACOPW04jimGagZ51E6+lXUwJjgnBw+hyko/lkFWCldqw==",
      "license": "MIT",
      "dependencies": {
        "undici-types": "~6.21.0"
      }
    },
    "node_modules/@types/unzipper": {
      "version": "0.10.11",
      "resolved": "https://registry.npmjs.org/@types/unzipper/-/unzipper-0.10.11.tgz",
      "integrity": "sha512-D25im2zjyMCcgL9ag6N46+wbtJBnXIr7SI4zHf9eJD2Dw2tEB5e+p5MYkrxKIVRscs5QV0EhtU9rgXSPx90oJg==",
      "license": "MIT",
      "dependencies": {
        "@types/node": "*"
      }
    },
    "node_modules/acorn": {
      "version": "8.16.0",
      "resolved": "https://registry.npmjs.org/acorn/-/acorn-8.16.0.tgz",
      "integrity": "sha512-UVJyE9MttOsBQIDKw1skb9nAwQuR5wuGD3+82K6JgJlm/Y+KI92oNsMNGZCYdDsVtRHSak0pcV5Dno5+4jh9sw==",
      "dev": true,
      "license": "MIT",
      "bin": {
        "acorn": "bin/acorn"
      },
      "engines": {
        "node": ">=0.4.0"
      }
    },
    "node_modules/acorn-walk": {
      "version": "8.3.5",
      "resolved": "https://registry.npmjs.org/acorn-walk/-/acorn-walk-8.3.5.tgz",
      "integrity": "sha512-HEHNfbars9v4pgpW6SO1KSPkfoS0xVOM/9UzkJltjlsHZmJasxg8aXkuZa7SMf8vKGIBhpUsPluQSqhJFCqebw==",
      "dev": true,
      "license": "MIT",
      "dependencies": {
        "acorn": "^8.11.0"
      },
      "engines": {
        "node": ">=0.4.0"
      }
    },
    "node_modules/arg": {
      "version": "4.1.3",
      "resolved": "https://registry.npmjs.org/arg/-/arg-4.1.3.tgz",
      "integrity": "sha512-58S9QDqG0Xx27YwPSt9fJxivjYl432YCwfDMfZ+71RAqUrZef7LrKQZ3LHLOwCS4FLNBplP533Zx895SeOCHvA==",
      "dev": true,
      "license": "MIT"
    },
    "node_modules/bluebird": {
      "version": "3.7.2",
      "resolved": "https://registry.npmjs.org/bluebird/-/bluebird-3.7.2.tgz",
      "integrity": "sha512-XpNj6GDQzdfW+r2Wnn7xiSAd7TM3jzkxGXBGTtWKuSXv1xUV+azxAm8jdWZN06QTQk+2N2XB9jRDkvbmQmcRtg==",
      "license": "MIT"
    },
    "node_modules/chalk": {
      "version": "5.6.2",
      "resolved": "https://registry.npmjs.org/chalk/-/chalk-5.6.2.tgz",
      "integrity": "sha512-7NzBL0rN6fMUW+f7A6Io4h40qQlG+xGmtMxfbnH/K7TAtt8JQWVQK+6g0UXKMeVJoyV5EkkNsErQ8pVD3bLHbA==",
      "license": "MIT",
      "engines": {
        "node": "^12.17.0 || ^14.13 || >=16.0.0"
      },
      "funding": {
        "url": "https://github.com/chalk/chalk?sponsor=1"
      }
    },
    "node_modules/commander": {
      "version": "12.1.0",
      "resolved": "https://registry.npmjs.org/commander/-/commander-12.1.0.tgz",
      "integrity": "sha512-Vw8qHK3bZM9y/P10u3Vib8o/DdkvA2OtPtZvD871QKjy74Wj1WSKFILMPRPSdUSx5RFK1arlJzEtA4PkFgnbuA==",
      "license": "MIT",
      "engines": {
        "node": ">=18"
      }
    },
    "node_modules/core-util-is": {
      "version": "1.0.3",
      "resolved": "https://registry.npmjs.org/core-util-is/-/core-util-is-1.0.3.tgz",
      "integrity": "sha512-ZQBvi1DcpJ4GDqanjucZ2Hj3wEO5pZDS89BWbkcrvdxksJorwUDDZamX9ldFkp9aw2lmBDLgkObEA4DWNJ9FYQ==",
      "license": "MIT"
    },
    "node_modules/create-require": {
      "version": "1.1.1",
      "resolved": "https://registry.npmjs.org/create-require/-/create-require-1.1.1.tgz",
      "integrity": "sha512-dcKFX3jn0MpIaXjisoRvexIJVEKzaq7z2rZKxf+MSr9TkdmHmsU4m2lcLojrj/FHl8mk5VxMmYA+ftRkP/3oKQ==",
      "dev": true,
      "license": "MIT"
    },
    "node_modules/diff": {
      "version": "4.0.4",
      "resolved": "https://registry.npmjs.org/diff/-/diff-4.0.4.tgz",
      "integrity": "sha512-X07nttJQkwkfKfvTPG/KSnE2OMdcUCao6+eXF3wmnIQRn2aPAHH3VxDbDOdegkd6JbPsXqShpvEOHfAT+nCNwQ==",
      "dev": true,
      "license": "BSD-3-Clause",
      "engines": {
        "node": ">=0.3.1"
      }
    },
    "node_modules/duplexer2": {
      "version": "0.1.4",
      "resolved": "https://registry.npmjs.org/duplexer2/-/duplexer2-0.1.4.tgz",
      "integrity": "sha512-asLFVfWWtJ90ZyOUHMqk7/S2w2guQKxUI2itj3d92ADHhxUSbCMGi1f1cBcJ7xM1To+pE/Khbwo1yuNbMEPKeA==",
      "license": "BSD-3-Clause",
      "dependencies": {
        "readable-stream": "^2.0.2"
      }
    },
    "node_modules/fs-extra": {
      "version": "11.3.4",
      "resolved": "https://registry.npmjs.org/fs-extra/-/fs-extra-11.3.4.tgz",
      "integrity": "sha512-CTXd6rk/M3/ULNQj8FBqBWHYBVYybQ3VPBw0xGKFe3tuH7ytT6ACnvzpIQ3UZtB8yvUKC2cXn1a+x+5EVQLovA==",
      "license": "MIT",
      "dependencies": {
        "graceful-fs": "^4.2.0",
        "jsonfile": "^6.0.1",
        "universalify": "^2.0.0"
      },
      "engines": {
        "node": ">=14.14"
      }
    },
    "node_modules/graceful-fs": {
      "version": "4.2.11",
      "resolved": "https://registry.npmjs.org/graceful-fs/-/graceful-fs-4.2.11.tgz",
      "integrity": "sha512-RbJ5/jmFcNNCcDV5o9eTnBLJ/HszWV0P73bc+Ff4nS/rJj+YaS6IGyiOL0VoBYX+l1Wrl3k63h/KrH+nhJ0XvQ==",
      "license": "ISC"
    },
    "node_modules/inherits": {
      "version": "2.0.4",
      "resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.4.tgz",
      "integrity": "sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ==",
      "license": "ISC"
    },
    "node_modules/isarray": {
      "version": "1.0.0",
      "resolved": "https://registry.npmjs.org/isarray/-/isarray-1.0.0.tgz",
      "integrity": "sha512-VLghIWNM6ELQzo7zwmcg0NmTVyWKYjvIeM83yjp0wRDTmUnrM678fQbcKBo6n2CJEF0szoG//ytg+TKla89ALQ==",
      "license": "MIT"
    },
    "node_modules/jsonfile": {
      "version": "6.2.0",
      "resolved": "https://registry.npmjs.org/jsonfile/-/jsonfile-6.2.0.tgz",
      "integrity": "sha512-FGuPw30AdOIUTRMC2OMRtQV+jkVj2cfPqSeWXv1NEAJ1qZ5zb1X6z1mFhbfOB/iy3ssJCD+3KuZ8r8C3uVFlAg==",
      "license": "MIT",
      "dependencies": {
        "universalify": "^2.0.0"
      },
      "optionalDependencies": {
        "graceful-fs": "^4.1.6"
      }
    },
    "node_modules/make-error": {
      "version": "1.3.6",
      "resolved": "https://registry.npmjs.org/make-error/-/make-error-1.3.6.tgz",
      "integrity": "sha512-s8UhlNe7vPKomQhC1qFelMokr/Sc3AgNbso3n74mVPA5LTZwkB9NlXf4XPamLxJE8h0gh73rM94xvwRT2CVInw==",
|
||||||
|
"dev": true,
|
||||||
|
"license": "ISC"
|
||||||
|
},
|
||||||
|
"node_modules/node-int64": {
|
||||||
|
"version": "0.4.0",
|
||||||
|
"resolved": "https://registry.npmjs.org/node-int64/-/node-int64-0.4.0.tgz",
|
||||||
|
"integrity": "sha512-O5lz91xSOeoXP6DulyHfllpq+Eg00MWitZIbtPfoSEvqIHdl5gfcY6hYzDWnj0qD5tz52PI08u9qUvSVeUBeHw==",
|
||||||
|
"license": "MIT"
|
||||||
|
},
|
||||||
|
"node_modules/process-nextick-args": {
|
||||||
|
"version": "2.0.1",
|
||||||
|
"resolved": "https://registry.npmjs.org/process-nextick-args/-/process-nextick-args-2.0.1.tgz",
|
||||||
|
"integrity": "sha512-3ouUOpQhtgrbOa17J7+uxOTpITYWaGP7/AhoR3+A+/1e9skrzelGi/dXzEYyvbxubEF6Wn2ypscTKiKJFFn1ag==",
|
||||||
|
"license": "MIT"
|
||||||
|
},
|
||||||
|
"node_modules/readable-stream": {
|
||||||
|
"version": "2.3.8",
|
||||||
|
"resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-2.3.8.tgz",
|
||||||
|
"integrity": "sha512-8p0AUk4XODgIewSi0l8Epjs+EVnWiK7NoDIEGU0HhE7+ZyY8D1IMY7odu5lRrFXGg71L15KG8QrPmum45RTtdA==",
|
||||||
|
"license": "MIT",
|
||||||
|
"dependencies": {
|
||||||
|
"core-util-is": "~1.0.0",
|
||||||
|
"inherits": "~2.0.3",
|
||||||
|
"isarray": "~1.0.0",
|
||||||
|
"process-nextick-args": "~2.0.0",
|
||||||
|
"safe-buffer": "~5.1.1",
|
||||||
|
"string_decoder": "~1.1.1",
|
||||||
|
"util-deprecate": "~1.0.1"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"node_modules/safe-buffer": {
|
||||||
|
"version": "5.1.2",
|
||||||
|
"resolved": "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.1.2.tgz",
|
||||||
|
"integrity": "sha512-Gd2UZBJDkXlY7GbJxfsE8/nvKkUEU1G38c1siN6QP6a9PT9MmHB8GnpscSmMJSoF8LOIrt8ud/wPtojys4G6+g==",
|
||||||
|
"license": "MIT"
|
||||||
|
},
|
||||||
|
"node_modules/string_decoder": {
|
||||||
|
"version": "1.1.1",
|
||||||
|
"resolved": "https://registry.npmjs.org/string_decoder/-/string_decoder-1.1.1.tgz",
|
||||||
|
"integrity": "sha512-n/ShnvDi6FHbbVfviro+WojiFzv+s8MPMHBczVePfUpDJLwoLT0ht1l4YwBCbi8pJAveEEdnkHyPyTP/mzRfwg==",
|
||||||
|
"license": "MIT",
|
||||||
|
"dependencies": {
|
||||||
|
"safe-buffer": "~5.1.0"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"node_modules/ts-node": {
|
||||||
|
"version": "10.9.2",
|
||||||
|
"resolved": "https://registry.npmjs.org/ts-node/-/ts-node-10.9.2.tgz",
|
||||||
|
"integrity": "sha512-f0FFpIdcHgn8zcPSbf1dRevwt047YMnaiJM3u2w2RewrB+fob/zePZcrOyQoLMMO7aBIddLcQIEK5dYjkLnGrQ==",
|
||||||
|
"dev": true,
|
||||||
|
"license": "MIT",
|
||||||
|
"dependencies": {
|
||||||
|
"@cspotcode/source-map-support": "^0.8.0",
|
||||||
|
"@tsconfig/node10": "^1.0.7",
|
||||||
|
"@tsconfig/node12": "^1.0.7",
|
||||||
|
"@tsconfig/node14": "^1.0.0",
|
||||||
|
"@tsconfig/node16": "^1.0.2",
|
||||||
|
"acorn": "^8.4.1",
|
||||||
|
"acorn-walk": "^8.1.1",
|
||||||
|
"arg": "^4.1.0",
|
||||||
|
"create-require": "^1.1.0",
|
||||||
|
"diff": "^4.0.1",
|
||||||
|
"make-error": "^1.1.1",
|
||||||
|
"v8-compile-cache-lib": "^3.0.1",
|
||||||
|
"yn": "3.1.1"
|
||||||
|
},
|
||||||
|
"bin": {
|
||||||
|
"ts-node": "dist/bin.js",
|
||||||
|
"ts-node-cwd": "dist/bin-cwd.js",
|
||||||
|
"ts-node-esm": "dist/bin-esm.js",
|
||||||
|
"ts-node-script": "dist/bin-script.js",
|
||||||
|
"ts-node-transpile-only": "dist/bin-transpile.js",
|
||||||
|
"ts-script": "dist/bin-script-deprecated.js"
|
||||||
|
},
|
||||||
|
"peerDependencies": {
|
||||||
|
"@swc/core": ">=1.2.50",
|
||||||
|
"@swc/wasm": ">=1.2.50",
|
||||||
|
"@types/node": "*",
|
||||||
|
"typescript": ">=2.7"
|
||||||
|
},
|
||||||
|
"peerDependenciesMeta": {
|
||||||
|
"@swc/core": {
|
||||||
|
"optional": true
|
||||||
|
},
|
||||||
|
"@swc/wasm": {
|
||||||
|
"optional": true
|
||||||
|
}
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"node_modules/typescript": {
|
||||||
|
"version": "5.9.3",
|
||||||
|
"resolved": "https://registry.npmjs.org/typescript/-/typescript-5.9.3.tgz",
|
||||||
|
"integrity": "sha512-jl1vZzPDinLr9eUt3J/t7V6FgNEw9QjvBPdysz9KfQDD41fQrC2Y4vKQdiaUpFT4bXlb1RHhLpp8wtm6M5TgSw==",
|
||||||
|
"dev": true,
|
||||||
|
"license": "Apache-2.0",
|
||||||
|
"bin": {
|
||||||
|
"tsc": "bin/tsc",
|
||||||
|
"tsserver": "bin/tsserver"
|
||||||
|
},
|
||||||
|
"engines": {
|
||||||
|
"node": ">=14.17"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"node_modules/undici-types": {
|
||||||
|
"version": "6.21.0",
|
||||||
|
"resolved": "https://registry.npmjs.org/undici-types/-/undici-types-6.21.0.tgz",
|
||||||
|
"integrity": "sha512-iwDZqg0QAGrg9Rav5H4n0M64c3mkR59cJ6wQp+7C4nI0gsmExaedaYLNO44eT4AtBBwjbTiGPMlt2Md0T9H9JQ==",
|
||||||
|
"license": "MIT"
|
||||||
|
},
|
||||||
|
"node_modules/universalify": {
|
||||||
|
"version": "2.0.1",
|
||||||
|
"resolved": "https://registry.npmjs.org/universalify/-/universalify-2.0.1.tgz",
|
||||||
|
"integrity": "sha512-gptHNQghINnc/vTGIk0SOFGFNXw7JVrlRUtConJRlvaw6DuX0wO5Jeko9sWrMBhh+PsYAZ7oXAiOnf/UKogyiw==",
|
||||||
|
"license": "MIT",
|
||||||
|
"engines": {
|
||||||
|
"node": ">= 10.0.0"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"node_modules/unzipper": {
|
||||||
|
"version": "0.12.3",
|
||||||
|
"resolved": "https://registry.npmjs.org/unzipper/-/unzipper-0.12.3.tgz",
|
||||||
|
"integrity": "sha512-PZ8hTS+AqcGxsaQntl3IRBw65QrBI6lxzqDEL7IAo/XCEqRTKGfOX56Vea5TH9SZczRVxuzk1re04z/YjuYCJA==",
|
||||||
|
"license": "MIT",
|
||||||
|
"dependencies": {
|
||||||
|
"bluebird": "~3.7.2",
|
||||||
|
"duplexer2": "~0.1.4",
|
||||||
|
"fs-extra": "^11.2.0",
|
||||||
|
"graceful-fs": "^4.2.2",
|
||||||
|
"node-int64": "^0.4.0"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"node_modules/util-deprecate": {
|
||||||
|
"version": "1.0.2",
|
||||||
|
"resolved": "https://registry.npmjs.org/util-deprecate/-/util-deprecate-1.0.2.tgz",
|
||||||
|
"integrity": "sha512-EPD5q1uXyFxJpCrLnCc1nHnq3gOa6DZBocAIiI2TaSCA7VCJ1UJDMagCzIkXNsUYfD1daK//LTEQ8xiIbrHtcw==",
|
||||||
|
"license": "MIT"
|
||||||
|
},
|
||||||
|
"node_modules/v8-compile-cache-lib": {
|
||||||
|
"version": "3.0.1",
|
||||||
|
"resolved": "https://registry.npmjs.org/v8-compile-cache-lib/-/v8-compile-cache-lib-3.0.1.tgz",
|
||||||
|
"integrity": "sha512-wa7YjyUGfNZngI/vtK0UHAN+lgDCxBPCylVXGp0zu59Fz5aiGtNXaq3DhIov063MorB+VfufLh3JlF2KdTK3xg==",
|
||||||
|
"dev": true,
|
||||||
|
"license": "MIT"
|
||||||
|
},
|
||||||
|
"node_modules/yn": {
|
||||||
|
"version": "3.1.1",
|
||||||
|
"resolved": "https://registry.npmjs.org/yn/-/yn-3.1.1.tgz",
|
||||||
|
"integrity": "sha512-Ux4ygGWsu2c7isFWe8Yu1YluJmqVhxqK2cLXNQA5AcC3QfbGNpM7fu0Y8b/z16pXLnFxZYvWhd3fhBY9DLmC6Q==",
|
||||||
|
"dev": true,
|
||||||
|
"license": "MIT",
|
||||||
|
"engines": {
|
||||||
|
"node": ">=6"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
36  cli/package.json  Normal file
@@ -0,0 +1,36 @@
{
  "name": "sentryagent",
  "version": "1.0.0",
  "description": "SentryAgent.ai CLI — manage agents, tokens, and audit logs",
  "main": "dist/index.js",
  "bin": {
    "sentryagent": "./dist/index.js"
  },
  "scripts": {
    "build": "tsc",
    "dev": "ts-node src/index.ts",
    "clean": "rm -rf dist"
  },
  "dependencies": {
    "@types/unzipper": "^0.10.11",
    "chalk": "^5.3.0",
    "commander": "^12.1.0",
    "unzipper": "^0.12.3"
  },
  "devDependencies": {
    "@types/node": "^20.12.7",
    "ts-node": "^10.9.2",
    "typescript": "^5.4.5"
  },
  "engines": {
    "node": ">=18.0.0"
  },
  "keywords": [
    "sentryagent",
    "agentidp",
    "cli",
    "agents",
    "identity"
  ],
  "license": "MIT"
}
95  cli/src/api.ts  Normal file
@@ -0,0 +1,95 @@
import { Config } from './config';

interface TokenCache {
  accessToken: string;
  expiresAt: number;
}

let tokenCache: TokenCache | null = null;

interface TokenResponse {
  access_token: string;
  expires_in: number;
  token_type: string;
}

async function fetchToken(config: Config): Promise<string> {
  const now = Date.now();
  if (tokenCache !== null && tokenCache.expiresAt > now + 30_000) {
    return tokenCache.accessToken;
  }

  const body = new URLSearchParams({
    grant_type: 'client_credentials',
    client_id: config.clientId,
    client_secret: config.clientSecret,
  });

  const res = await fetch(`${config.apiUrl}/oauth2/token`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: body.toString(),
  });

  if (!res.ok) {
    const text = await res.text();
    throw new Error(`Authentication failed (${res.status}): ${text}`);
  }

  const data = (await res.json()) as TokenResponse;
  tokenCache = {
    accessToken: data.access_token,
    expiresAt: now + data.expires_in * 1000,
  };
  return tokenCache.accessToken;
}

export function clearTokenCache(): void {
  tokenCache = null;
}

type HttpMethod = 'GET' | 'POST' | 'PUT' | 'PATCH' | 'DELETE';

interface ApiRequestOptions {
  method?: HttpMethod;
  body?: unknown;
  params?: Record<string, string>;
}

export async function apiRequest<T>(
  config: Config,
  endpoint: string,
  options: ApiRequestOptions = {},
): Promise<T> {
  const token = await fetchToken(config);
  const { method = 'GET', body, params } = options;

  let url = `${config.apiUrl}${endpoint}`;
  if (params !== undefined && Object.keys(params).length > 0) {
    const qs = new URLSearchParams(params);
    url = `${url}?${qs.toString()}`;
  }

  const headers: Record<string, string> = {
    Authorization: `Bearer ${token}`,
    'Content-Type': 'application/json',
  };

  const fetchOptions: RequestInit = { method, headers };
  if (body !== undefined) {
    fetchOptions.body = JSON.stringify(body);
  }

  const res = await fetch(url, fetchOptions);

  if (!res.ok) {
    const text = await res.text();
    throw new Error(`API error (${res.status}): ${text}`);
  }

  if (res.status === 204) {
    return undefined as unknown as T;
  }

  return (await res.json()) as T;
}
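The token cache in `cli/src/api.ts` above refuses to reuse a token unless it stays valid for at least 30 more seconds, so a request never leaves with a token that could expire mid-flight. As a minimal, self-contained sketch of that rule (the names `CachedToken` and `isCacheFresh` are illustrative, not exported by the CLI):

```typescript
// Sketch of the cache-freshness rule used by fetchToken: a cached token
// counts as fresh only while it remains valid for > 30 more seconds.
interface CachedToken {
  accessToken: string;
  expiresAt: number; // epoch milliseconds
}

function isCacheFresh(cache: CachedToken | null, nowMs: number): boolean {
  return cache !== null && cache.expiresAt > nowMs + 30_000;
}
```

The 30-second skew is a safety margin against clock drift and request latency; a token exactly at the margin is treated as stale and refetched.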
155  cli/src/commands/completion.ts  Normal file
@@ -0,0 +1,155 @@
import { Command } from 'commander';

const BASH_COMPLETION = `
# sentryagent bash completion
# Add to ~/.bashrc or ~/.bash_profile:
#   source <(sentryagent completion bash)

_sentryagent_completion() {
  local cur prev words cword
  _init_completion || return

  local commands="configure register-agent list-agents issue-token rotate-credentials tail-audit-log completion"
  local global_opts="--help --version"

  case "\${prev}" in
    sentryagent)
      COMPREPLY=( \$(compgen -W "\${commands} \${global_opts}" -- "\${cur}") )
      return 0
      ;;
    configure)
      COMPREPLY=( \$(compgen -W "--help" -- "\${cur}") )
      return 0
      ;;
    register-agent)
      COMPREPLY=( \$(compgen -W "--name --description --help" -- "\${cur}") )
      return 0
      ;;
    list-agents)
      COMPREPLY=( \$(compgen -W "--help" -- "\${cur}") )
      return 0
      ;;
    issue-token)
      COMPREPLY=( \$(compgen -W "--agent-id --help" -- "\${cur}") )
      return 0
      ;;
    rotate-credentials)
      COMPREPLY=( \$(compgen -W "--agent-id --help" -- "\${cur}") )
      return 0
      ;;
    tail-audit-log)
      COMPREPLY=( \$(compgen -W "--agent-id --help" -- "\${cur}") )
      return 0
      ;;
    completion)
      COMPREPLY=( \$(compgen -W "bash zsh --help" -- "\${cur}") )
      return 0
      ;;
    *)
      COMPREPLY=()
      return 0
      ;;
  esac
}

complete -F _sentryagent_completion sentryagent
`.trim();

const ZSH_COMPLETION = `
#compdef sentryagent

# sentryagent zsh completion
# Add to ~/.zshrc:
#   source <(sentryagent completion zsh)
# Or generate a file and place it in your $fpath:
#   sentryagent completion zsh > ~/.zsh/completions/_sentryagent

_sentryagent() {
  local state

  _arguments \\
    '(-v --version)'{-v,--version}'[Show version]' \\
    '(-h --help)'{-h,--help}'[Show help]' \\
    '1: :->command' \\
    '*: :->args'

  case \$state in
    command)
      local commands=(
        'configure:Configure CLI with API URL and credentials'
        'register-agent:Register a new agent'
        'list-agents:List all registered agents'
        'issue-token:Issue an OAuth2 access token for an agent'
        'rotate-credentials:Rotate credentials for an agent'
        'tail-audit-log:Poll and stream audit log events'
        'completion:Output shell completion script'
      )
      _describe 'command' commands
      ;;
    args)
      case \${words[2]} in
        configure)
          _arguments \\
            '(-h --help)'{-h,--help}'[Show help]'
          ;;
        register-agent)
          _arguments \\
            '--name[Agent name]:name' \\
            '--description[Agent description]:description' \\
            '(-h --help)'{-h,--help}'[Show help]'
          ;;
        list-agents)
          _arguments \\
            '(-h --help)'{-h,--help}'[Show help]'
          ;;
        issue-token)
          _arguments \\
            '--agent-id[Agent ID]:agent-id' \\
            '(-h --help)'{-h,--help}'[Show help]'
          ;;
        rotate-credentials)
          _arguments \\
            '--agent-id[Agent ID]:agent-id' \\
            '(-h --help)'{-h,--help}'[Show help]'
          ;;
        tail-audit-log)
          _arguments \\
            '--agent-id[Filter by agent ID]:agent-id' \\
            '(-h --help)'{-h,--help}'[Show help]'
          ;;
        completion)
          local shells=('bash:Generate bash completion script' 'zsh:Generate zsh completion script')
          _describe 'shell' shells
          ;;
      esac
      ;;
  esac
}

_sentryagent "\$@"
`.trim();

export function registerCompletion(program: Command): void {
  const completion = program
    .command('completion')
    .description('Output shell completion scripts');

  completion
    .command('bash')
    .description('Output bash completion script')
    .action(() => {
      console.log(BASH_COMPLETION);
    });

  completion
    .command('zsh')
    .description('Output zsh completion script')
    .action(() => {
      console.log(ZSH_COMPLETION);
    });

  completion.addHelpText(
    'after',
    '\nSupported shells: bash, zsh',
  );
}
63  cli/src/commands/configure.ts  Normal file
@@ -0,0 +1,63 @@
import * as readline from 'readline';
import { Command } from 'commander';
import chalk from 'chalk';
import { writeConfig } from '../config';

function prompt(rl: readline.Interface, question: string): Promise<string> {
  return new Promise((resolve) => {
    rl.question(question, (answer) => {
      resolve(answer.trim());
    });
  });
}

export function registerConfigure(program: Command): void {
  program
    .command('configure')
    .description('Configure the CLI with API URL and credentials')
    .action(async () => {
      const rl = readline.createInterface({
        input: process.stdin,
        output: process.stdout,
      });

      try {
        console.log(chalk.bold('SentryAgent CLI Configuration'));
        console.log(chalk.dim('─'.repeat(40)));

        const apiUrl = await prompt(
          rl,
          chalk.cyan('API URL') + ' (e.g. https://api.sentryagent.ai): ',
        );
        if (apiUrl === '') {
          console.error(chalk.red('API URL cannot be empty.'));
          process.exit(1);
        }

        const clientId = await prompt(rl, chalk.cyan('Client ID') + ': ');
        if (clientId === '') {
          console.error(chalk.red('Client ID cannot be empty.'));
          process.exit(1);
        }

        const clientSecret = await prompt(
          rl,
          chalk.cyan('Client Secret') + ': ',
        );
        if (clientSecret === '') {
          console.error(chalk.red('Client Secret cannot be empty.'));
          process.exit(1);
        }

        writeConfig({ apiUrl, clientId, clientSecret });

        console.log();
        console.log(
          chalk.green('✓') +
            ' Configuration saved to ~/.sentryagent/config.json',
        );
      } finally {
        rl.close();
      }
    });
}
70  cli/src/commands/issue-token.ts  Normal file
@@ -0,0 +1,70 @@
import { Command } from 'commander';
import chalk from 'chalk';
import { requireConfig } from '../config';

interface TokenResponse {
  access_token: string;
  expires_in: number;
  token_type: string;
  scope?: string;
}

export function registerIssueToken(program: Command): void {
  program
    .command('issue-token')
    .description('Issue an OAuth2 access token for an agent')
    .requiredOption('--agent-id <id>', 'Agent ID to issue a token for')
    .action(async (options: { agentId: string }) => {
      const config = requireConfig();

      try {
        const body = new URLSearchParams({
          grant_type: 'client_credentials',
          client_id: config.clientId,
          client_secret: config.clientSecret,
          agent_id: options.agentId,
        });

        const res = await fetch(`${config.apiUrl}/oauth2/token`, {
          method: 'POST',
          headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
          body: body.toString(),
        });

        if (!res.ok) {
          const text = await res.text();
          throw new Error(`Token issuance failed (${res.status}): ${text}`);
        }

        const data = (await res.json()) as TokenResponse;
        const expiresAt = new Date(
          Date.now() + data.expires_in * 1000,
        ).toISOString();

        console.log(chalk.green('✓') + ' Token issued successfully');
        console.log();
        console.log(chalk.bold('Access Token:'));
        console.log(chalk.cyan(data.access_token));
        console.log();
        console.log(
          chalk.bold('Token Type: ') + data.token_type,
        );
        console.log(
          chalk.bold('Expires In: ') + `${data.expires_in}s`,
        );
        console.log(
          chalk.bold('Expires At: ') + chalk.dim(expiresAt),
        );
        if (data.scope !== undefined) {
          console.log(chalk.bold('Scope: ') + data.scope);
        }
      } catch (err) {
        console.error(
          chalk.red('Error:'),
          err instanceof Error ? err.message : String(err),
        );
        process.exit(1);
      }
    });
}
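The `Expires At` line printed by `issue-token` above converts the server's relative `expires_in` (seconds) into an absolute UTC instant. A self-contained sketch of that conversion, parameterised on the current time so it is deterministic (`expiryIso` is an illustrative name, not part of the CLI):

```typescript
// Sketch of the expiry calculation in issue-token: the server returns a
// token lifetime in seconds; the CLI prints the absolute UTC expiry.
function expiryIso(nowMs: number, expiresInSeconds: number): string {
  return new Date(nowMs + expiresInSeconds * 1000).toISOString();
}

// e.g. expiryIso(0, 3600) → '1970-01-01T01:00:00.000Z'
```

Passing `Date.now()` as `nowMs` reproduces what the command prints.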
105  cli/src/commands/list-agents.ts  Normal file
@@ -0,0 +1,105 @@
import { Command } from 'commander';
import chalk from 'chalk';
import { requireConfig } from '../config';
import { apiRequest } from '../api';

interface Agent {
  id: string;
  name: string;
  status: string;
  createdAt: string;
  description?: string;
}

interface AgentsResponse {
  agents: Agent[];
  total?: number;
}

function truncate(str: string, maxLen: number): string {
  if (str.length <= maxLen) return str;
  return str.slice(0, maxLen - 1) + '…';
}

function padEnd(str: string, len: number): string {
  return str.padEnd(len, ' ');
}

export function registerListAgents(program: Command): void {
  program
    .command('list-agents')
    .description('List all registered agents')
    .action(async () => {
      const config = requireConfig();

      try {
        const data = await apiRequest<AgentsResponse | Agent[]>(
          config,
          '/agents',
        );

        const agents: Agent[] = Array.isArray(data)
          ? data
          : (data as AgentsResponse).agents ?? [];

        if (agents.length === 0) {
          console.log(chalk.yellow('No agents found.'));
          return;
        }

        const ID_W = 26;
        const NAME_W = 24;
        const STATUS_W = 10;
        const DATE_W = 20;

        const header =
          chalk.bold(padEnd('AGENT ID', ID_W)) +
          '  ' +
          chalk.bold(padEnd('NAME', NAME_W)) +
          '  ' +
          chalk.bold(padEnd('STATUS', STATUS_W)) +
          '  ' +
          chalk.bold('CREATED AT');

        const divider = chalk.dim(
          '─'.repeat(ID_W + NAME_W + STATUS_W + DATE_W + 6),
        );

        console.log(header);
        console.log(divider);

        for (const agent of agents) {
          const statusColor =
            agent.status === 'active'
              ? chalk.green
              : agent.status === 'inactive'
                ? chalk.yellow
                : chalk.red;

          const createdAt = new Date(agent.createdAt).toLocaleString();

          console.log(
            chalk.cyan(padEnd(truncate(agent.id, ID_W), ID_W)) +
              '  ' +
              padEnd(truncate(agent.name, NAME_W), NAME_W) +
              '  ' +
              statusColor(padEnd(truncate(agent.status, STATUS_W), STATUS_W)) +
              '  ' +
              chalk.dim(truncate(createdAt, DATE_W)),
          );
        }

        console.log(divider);
        const total = Array.isArray(data)
          ? agents.length
          : ((data as AgentsResponse).total ?? agents.length);
        console.log(chalk.dim(`Total: ${total}`));
      } catch (err) {
        console.error(
          chalk.red('Error:'),
          err instanceof Error ? err.message : String(err),
        );
        process.exit(1);
      }
    });
}
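The `truncate` helper in `list-agents` above reserves one cell of the column width for the ellipsis, so a clipped value still fits its column exactly. In isolation (an illustrative copy named `clip`, not imported from the CLI):

```typescript
// Column clipping as used by list-agents: values longer than the column
// width are cut to width - 1 characters plus a single '…'.
function clip(value: string, maxLen: number): string {
  if (value.length <= maxLen) return value;
  return value.slice(0, maxLen - 1) + '…';
}
```

Note that `'…'` is one UTF-16 code unit, so the clipped result's `.length` equals `maxLen` and `padEnd` alignment still holds.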
54  cli/src/commands/register-agent.ts  Normal file
@@ -0,0 +1,54 @@
import { Command } from 'commander';
import chalk from 'chalk';
import { requireConfig } from '../config';
import { apiRequest } from '../api';

interface AgentResponse {
  id: string;
  name: string;
  description?: string;
  status: string;
  createdAt: string;
}

export function registerRegisterAgent(program: Command): void {
  program
    .command('register-agent')
    .description('Register a new agent')
    .requiredOption('--name <name>', 'Agent name')
    .option('--description <desc>', 'Agent description')
    .action(async (options: { name: string; description?: string }) => {
      const config = requireConfig();

      try {
        const body: { name: string; description?: string } = {
          name: options.name,
        };
        if (options.description !== undefined) {
          body.description = options.description;
        }

        const agent = await apiRequest<AgentResponse>(config, '/agents', {
          method: 'POST',
          body,
        });

        console.log(chalk.green('✓') + ' Agent registered successfully');
        console.log();
        console.log(
          chalk.bold('Agent ID: ') + chalk.cyan(agent.id),
        );
        console.log(chalk.bold('Name: ') + agent.name);
        if (agent.description !== undefined) {
          console.log(chalk.bold('Description:') + ' ' + agent.description);
        }
        console.log(chalk.bold('Status: ') + agent.status);
      } catch (err) {
        console.error(
          chalk.red('Error:'),
          err instanceof Error ? err.message : String(err),
        );
        process.exit(1);
      }
    });
}
85  cli/src/commands/rotate-credentials.ts  Normal file
@@ -0,0 +1,85 @@
import * as readline from 'readline';
import { Command } from 'commander';
import chalk from 'chalk';
import { requireConfig } from '../config';
import { apiRequest } from '../api';

interface RotateResponse {
  clientId: string;
  clientSecret: string;
  rotatedAt?: string;
}

function prompt(rl: readline.Interface, question: string): Promise<string> {
  return new Promise((resolve) => {
    rl.question(question, (answer) => {
      resolve(answer.trim());
    });
  });
}

export function registerRotateCredentials(program: Command): void {
  program
    .command('rotate-credentials')
    .description('Rotate credentials for an agent (invalidates current secret)')
    .requiredOption('--agent-id <id>', 'Agent ID whose credentials to rotate')
    .action(async (options: { agentId: string }) => {
      const config = requireConfig();

      const rl = readline.createInterface({
        input: process.stdin,
        output: process.stdout,
      });

      try {
        console.log(
          chalk.yellow('⚠') +
            ' This will invalidate the current secret for agent ' +
            chalk.cyan(options.agentId),
        );

        const answer = await prompt(
          rl,
          chalk.bold('Continue? [y/N] '),
        );

        if (answer.toLowerCase() !== 'y' && answer.toLowerCase() !== 'yes') {
          console.log(chalk.dim('Aborted.'));
          return;
        }

        const data = await apiRequest<RotateResponse>(
          config,
          `/agents/${options.agentId}/credentials/rotate`,
          { method: 'POST' },
        );

        console.log();
        console.log(chalk.green('✓') + ' Credentials rotated successfully');
        console.log();
        console.log(chalk.bold('Client ID: ') + chalk.cyan(data.clientId));
        console.log(
          chalk.bold('Client Secret: ') + chalk.yellow(data.clientSecret),
        );
        console.log();
        console.log(
          chalk.dim(
            'Store the new client secret securely — it will not be shown again.',
          ),
        );
        if (data.rotatedAt !== undefined) {
          console.log(
            chalk.dim('Rotated at: ') + chalk.dim(data.rotatedAt),
          );
        }
      } catch (err) {
        console.error(
          chalk.red('Error:'),
          err instanceof Error ? err.message : String(err),
        );
        process.exit(1);
      } finally {
        rl.close();
      }
    });
}
173  cli/src/commands/scaffold.ts  Normal file
@@ -0,0 +1,173 @@
import * as fs from 'fs';
import * as path from 'path';
import { Command } from 'commander';
import chalk from 'chalk';
import unzipper from 'unzipper';
import { requireConfig } from '../config';

const VALID_LANGUAGES = ['typescript', 'python', 'go', 'java', 'rust'] as const;
type ScaffoldLanguage = (typeof VALID_LANGUAGES)[number];

function isValidLanguage(lang: string): lang is ScaffoldLanguage {
  return (VALID_LANGUAGES as readonly string[]).includes(lang);
}

export function registerScaffold(program: Command): void {
  program
    .command('scaffold')
    .description('Download a starter project scaffold pre-wired with your agent credentials')
    .requiredOption('--agent-id <id>', 'Agent ID to scaffold for')
    .option(
      '--language <lang>',
      `SDK language (${VALID_LANGUAGES.join(', ')})`,
      'typescript',
    )
    .option('--out <directory>', 'Output directory for the extracted scaffold', '.')
    .action(async (opts: { agentId: string; language: string; out: string }) => {
      const { agentId, language, out: outDir } = opts;

      if (!isValidLanguage(language)) {
        console.error(
          chalk.red('Error:'),
          `Unsupported language '${language}'. Choose: ${VALID_LANGUAGES.join(', ')}`,
        );
        process.exit(1);
      }

      const config = requireConfig();

      // Resolve and create output directory
      const resolvedOut = path.resolve(outDir);
      if (!fs.existsSync(resolvedOut)) {
        fs.mkdirSync(resolvedOut, { recursive: true });
      }

      console.log(
        chalk.dim(`Downloading ${language} scaffold for agent ${agentId}...`),
      );

      try {
        // We need a raw binary response — fetch the token via apiRequest pattern
        // then make a raw fetch for the ZIP stream.
        const token = await getToken(config);

        const url = `${config.apiUrl}/sdk/scaffold/${encodeURIComponent(agentId)}?language=${encodeURIComponent(language)}`;
        const res = await fetch(url, {
|
||||||
|
headers: { Authorization: `Bearer ${token}` },
|
||||||
|
});
|
||||||
|
|
||||||
|
if (!res.ok) {
|
||||||
|
const text = await res.text();
|
||||||
|
handleHttpError(res.status, text);
|
||||||
|
process.exit(1);
|
||||||
|
}
|
||||||
|
|
||||||
|
if (res.body === null) {
|
||||||
|
console.error(chalk.red('Error:'), 'Empty response body from server.');
|
||||||
|
process.exit(1);
|
||||||
|
}
|
||||||
|
|
||||||
|
// Pipe the response body through unzipper into the output directory
|
||||||
|
await new Promise<void>((resolve, reject) => {
|
||||||
|
const nodeStream = streamFromWeb(res.body!);
|
||||||
|
nodeStream
|
||||||
|
.pipe(unzipper.Extract({ path: resolvedOut }))
|
||||||
|
.on('close', resolve)
|
||||||
|
.on('error', reject);
|
||||||
|
});
|
||||||
|
|
||||||
|
console.log(chalk.green('Scaffold extracted to:'), chalk.bold(resolvedOut));
|
||||||
|
console.log('');
|
||||||
|
console.log('Next steps:');
|
||||||
|
console.log(
|
||||||
|
` 1. ${chalk.cyan('cd')} ${resolvedOut}`,
|
||||||
|
);
|
||||||
|
if (language === 'typescript') {
|
||||||
|
console.log(` 2. ${chalk.cyan('npm install')}`);
|
||||||
|
console.log(` 3. Copy ${chalk.yellow('.env.example')} to ${chalk.yellow('.env')} and fill in your client secret`);
|
||||||
|
console.log(` 4. ${chalk.cyan('npm run dev')}`);
|
||||||
|
} else if (language === 'python') {
|
||||||
|
console.log(` 2. ${chalk.cyan('pip install -r requirements.txt')}`);
|
||||||
|
console.log(` 3. Copy ${chalk.yellow('.env.example')} to ${chalk.yellow('.env')} and fill in your client secret`);
|
||||||
|
console.log(` 4. ${chalk.cyan('python main.py')}`);
|
||||||
|
} else if (language === 'go') {
|
||||||
|
console.log(` 2. ${chalk.cyan('go mod download')}`);
|
||||||
|
console.log(` 3. Copy ${chalk.yellow('.env.example')} to ${chalk.yellow('.env')} and fill in your client secret`);
|
||||||
|
console.log(` 4. ${chalk.cyan('go run main.go')}`);
|
||||||
|
} else if (language === 'java') {
|
||||||
|
console.log(` 2. ${chalk.cyan('mvn install')}`);
|
||||||
|
console.log(` 3. Copy ${chalk.yellow('.env.example')} to ${chalk.yellow('.env')} and fill in your client secret`);
|
||||||
|
console.log(` 4. ${chalk.cyan('mvn exec:java')}`);
|
||||||
|
} else if (language === 'rust') {
|
||||||
|
console.log(` 2. Copy ${chalk.yellow('.env.example')} to ${chalk.yellow('.env')} and fill in your client secret`);
|
||||||
|
console.log(` 3. ${chalk.cyan('cargo run')}`);
|
||||||
|
}
|
||||||
|
} catch (err) {
|
||||||
|
console.error(
|
||||||
|
chalk.red('Error:'),
|
||||||
|
err instanceof Error ? err.message : String(err),
|
||||||
|
);
|
||||||
|
process.exit(1);
|
||||||
|
}
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
/** Obtain a bearer token by making a dummy apiRequest that uses the token cache. */
|
||||||
|
async function getToken(config: import('../config').Config): Promise<string> {
|
||||||
|
// apiRequest internally calls fetchToken which caches tokens.
|
||||||
|
// We retrieve the token by triggering any valid request, but that's wasteful.
|
||||||
|
// Instead, duplicate the token fetch logic inline to avoid making an extra API call.
|
||||||
|
const body = new URLSearchParams({
|
||||||
|
grant_type: 'client_credentials',
|
||||||
|
client_id: config.clientId,
|
||||||
|
client_secret: config.clientSecret,
|
||||||
|
});
|
||||||
|
|
||||||
|
const res = await fetch(`${config.apiUrl}/oauth2/token`, {
|
||||||
|
method: 'POST',
|
||||||
|
headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
|
||||||
|
body: body.toString(),
|
||||||
|
});
|
||||||
|
|
||||||
|
if (!res.ok) {
|
||||||
|
const text = await res.text();
|
||||||
|
throw new Error(`Authentication failed (${res.status}): ${text}`);
|
||||||
|
}
|
||||||
|
|
||||||
|
const data = (await res.json()) as { access_token: string };
|
||||||
|
return data.access_token;
|
||||||
|
}
|
||||||
|
|
||||||
|
function handleHttpError(status: number, body: string): void {
|
||||||
|
if (status === 400) {
|
||||||
|
console.error(chalk.red('Error:'), `Invalid request: ${body}`);
|
||||||
|
} else if (status === 401) {
|
||||||
|
console.error(
|
||||||
|
chalk.red('Error:'),
|
||||||
|
'Authentication failed. Run `sentryagent configure` to update credentials.',
|
||||||
|
);
|
||||||
|
} else if (status === 403) {
|
||||||
|
console.error(
|
||||||
|
chalk.red('Error:'),
|
||||||
|
'Access denied. You do not own this agent.',
|
||||||
|
);
|
||||||
|
} else if (status === 404) {
|
||||||
|
console.error(
|
||||||
|
chalk.red('Error:'),
|
||||||
|
'Agent not found. Check the agent ID with `sentryagent list-agents`.',
|
||||||
|
);
|
||||||
|
} else {
|
||||||
|
console.error(chalk.red('Error:'), `Server error (${status}): ${body}`);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Converts a WHATWG ReadableStream (from fetch) to a Node.js Readable stream.
|
||||||
|
* Node 18+ supports ReadableStream natively via stream.Readable.fromWeb().
|
||||||
|
*/
|
||||||
|
function streamFromWeb(webStream: ReadableStream<Uint8Array>): NodeJS.ReadableStream {
|
||||||
|
// Node.js 18+ has stream.Readable.fromWeb
|
||||||
|
// eslint-disable-next-line @typescript-eslint/no-require-imports
|
||||||
|
const { Readable } = require('stream') as typeof import('stream');
|
||||||
|
return Readable.fromWeb(webStream as Parameters<typeof Readable.fromWeb>[0]) as NodeJS.ReadableStream;
|
||||||
|
}
|
||||||
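The `streamFromWeb` helper above leans on `stream.Readable.fromWeb()` from Node 18+. A minimal standalone sketch of the same conversion, using an in-memory WHATWG stream in place of a real fetch response (the `collect` helper and its payload are illustrative, not part of the CLI):

```typescript
import { Readable } from 'stream';
import { ReadableStream } from 'stream/web';

// Convert a WHATWG ReadableStream to a Node.js Readable, then drain it
// by concatenating the chunks it yields.
async function collect(webStream: ReadableStream<Uint8Array>): Promise<string> {
  const nodeStream = Readable.fromWeb(webStream);
  const chunks: Buffer[] = [];
  for await (const chunk of nodeStream) {
    chunks.push(Buffer.from(chunk as Uint8Array));
  }
  return Buffer.concat(chunks).toString('utf-8');
}

async function main(): Promise<void> {
  // A small in-memory web stream standing in for res.body.
  const webStream = new ReadableStream<Uint8Array>({
    start(controller) {
      controller.enqueue(new TextEncoder().encode('hello '));
      controller.enqueue(new TextEncoder().encode('world'));
      controller.close();
    },
  });
  console.log(await collect(webStream)); // prints "hello world"
}

void main();
```

In the command itself the Node stream is piped straight into `unzipper.Extract` rather than buffered, so the ZIP never has to fit in memory.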
122
cli/src/commands/tail-audit-log.ts
Normal file
@@ -0,0 +1,122 @@
import { Command } from 'commander';
import chalk from 'chalk';
import { requireConfig } from '../config';
import { apiRequest } from '../api';

interface AuditEvent {
  id: string;
  timestamp: string;
  action: string;
  agentId?: string;
  tenantId?: string;
  outcome: string;
  details?: Record<string, unknown>;
}

interface AuditLogsResponse {
  events: AuditEvent[];
  nextCursor?: string;
}

function formatEvent(event: AuditEvent): string {
  const ts = chalk.dim(new Date(event.timestamp).toLocaleString());
  const outcome =
    event.outcome === 'success'
      ? chalk.green(event.outcome)
      : chalk.red(event.outcome);
  const action = chalk.cyan(event.action);
  const agentPart =
    event.agentId !== undefined
      ? ' ' + chalk.dim('agent=' + event.agentId)
      : '';

  return `${ts} ${action} outcome=${outcome}${agentPart} id=${chalk.dim(event.id)}`;
}

export function registerTailAuditLog(program: Command): void {
  program
    .command('tail-audit-log')
    .description(
      'Poll and stream audit log events every 5 seconds (Ctrl+C to stop)',
    )
    .option('--agent-id <id>', 'Filter events for a specific agent ID')
    .action(async (options: { agentId?: string }) => {
      const config = requireConfig();

      console.log(
        chalk.bold('Tailing audit log') +
          (options.agentId !== undefined
            ? chalk.dim(` (agent: ${options.agentId})`)
            : '') +
          chalk.dim(' — press Ctrl+C to stop'),
      );
      console.log(chalk.dim('─'.repeat(60)));

      const seenIds = new Set<string>();
      let cursor: string | undefined;
      let running = true;

      process.on('SIGINT', () => {
        running = false;
        console.log();
        console.log(chalk.dim('Stopped.'));
        process.exit(0);
      });

      while (running) {
        try {
          const params: Record<string, string> = {};
          if (options.agentId !== undefined) {
            params['agentId'] = options.agentId;
          }
          if (cursor !== undefined) {
            params['cursor'] = cursor;
          }
          // Request events from the last poll window
          params['limit'] = '50';

          const data = await apiRequest<AuditLogsResponse | AuditEvent[]>(
            config,
            '/audit/logs',
            { params },
          );

          const events: AuditEvent[] = Array.isArray(data)
            ? data
            : (data as AuditLogsResponse).events ?? [];

          if (!Array.isArray(data) && (data as AuditLogsResponse).nextCursor !== undefined) {
            cursor = (data as AuditLogsResponse).nextCursor;
          }

          for (const event of events) {
            if (!seenIds.has(event.id)) {
              seenIds.add(event.id);
              console.log(formatEvent(event));
            }
          }

          // Keep the seenIds set bounded to avoid unbounded memory growth
          if (seenIds.size > 10_000) {
            const arr = Array.from(seenIds);
            const keep = arr.slice(arr.length - 5_000);
            seenIds.clear();
            for (const id of keep) seenIds.add(id);
          }
        } catch (err) {
          console.error(
            chalk.yellow('⚠') +
              ' Poll error: ' +
              (err instanceof Error ? err.message : String(err)),
          );
        }

        // Wait 5 seconds between polls
        await new Promise<void>((resolve) => {
          const timer = setTimeout(resolve, 5000);
          // Allow the timer to be garbage-collected if process exits
          if (typeof timer.unref === 'function') timer.unref();
        });
      }
    });
}
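The bounded `seenIds` trimming above relies on JavaScript's `Set` iterating in insertion order, so slicing the tail of `Array.from(seenIds)` keeps the newest entries. A self-contained sketch of the same idea, with small illustrative bounds (10/5 rather than the command's 10_000/5_000):

```typescript
// Trim a de-duplication set once it exceeds `max`, keeping only the
// `keep` most recently inserted entries (Set preserves insertion order).
function trimSeen(seenIds: Set<string>, max: number, keep: number): void {
  if (seenIds.size > max) {
    const arr = Array.from(seenIds);
    const kept = arr.slice(arr.length - keep);
    seenIds.clear();
    for (const id of kept) seenIds.add(id);
  }
}

const seen = new Set<string>();
for (let i = 0; i < 12; i++) seen.add(`evt-${i}`);

trimSeen(seen, 10, 5);
console.log(seen.size);          // 5
console.log(seen.has('evt-11')); // true  — newest entries survive
console.log(seen.has('evt-0'));  // false — oldest entries dropped
```

The trade-off is that an event older than the kept window could in principle be printed twice after a trim; for a tailing view that is usually acceptable.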
61
cli/src/config.ts
Normal file
@@ -0,0 +1,61 @@
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';

export interface Config {
  apiUrl: string;
  clientId: string;
  clientSecret: string;
}

const CONFIG_DIR = path.join(os.homedir(), '.sentryagent');
const CONFIG_FILE = path.join(CONFIG_DIR, 'config.json');

export function readConfig(): Config | null {
  if (!fs.existsSync(CONFIG_FILE)) {
    return null;
  }
  try {
    const raw = fs.readFileSync(CONFIG_FILE, 'utf-8');
    const parsed: unknown = JSON.parse(raw);
    if (
      parsed !== null &&
      typeof parsed === 'object' &&
      'apiUrl' in parsed &&
      'clientId' in parsed &&
      'clientSecret' in parsed &&
      typeof (parsed as Record<string, unknown>)['apiUrl'] === 'string' &&
      typeof (parsed as Record<string, unknown>)['clientId'] === 'string' &&
      typeof (parsed as Record<string, unknown>)['clientSecret'] === 'string'
    ) {
      const p = parsed as Record<string, unknown>;
      return {
        apiUrl: p['apiUrl'] as string,
        clientId: p['clientId'] as string,
        clientSecret: p['clientSecret'] as string,
      };
    }
    return null;
  } catch {
    return null;
  }
}

export function writeConfig(config: Config): void {
  if (!fs.existsSync(CONFIG_DIR)) {
    fs.mkdirSync(CONFIG_DIR, { recursive: true, mode: 0o700 });
  }
  fs.writeFileSync(CONFIG_FILE, JSON.stringify(config, null, 2), {
    encoding: 'utf-8',
    mode: 0o600,
  });
}

export function requireConfig(): Config {
  const config = readConfig();
  if (config === null) {
    console.error('Not configured. Run `sentryagent configure` first.');
    process.exit(1);
  }
  return config;
}
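`readConfig` narrows the parsed JSON structurally rather than trusting a blind cast, so a corrupt or hand-edited config file degrades to `null` instead of crashing later. The same guard pattern in isolation (`parseConfig` is a hypothetical standalone helper, not part of the CLI):

```typescript
interface Config {
  apiUrl: string;
  clientId: string;
  clientSecret: string;
}

// Parse unknown JSON and return a Config only when all three fields
// are present and are strings; anything else (including invalid JSON)
// yields null.
function parseConfig(raw: string): Config | null {
  try {
    const parsed: unknown = JSON.parse(raw);
    if (
      parsed !== null &&
      typeof parsed === 'object' &&
      typeof (parsed as Record<string, unknown>)['apiUrl'] === 'string' &&
      typeof (parsed as Record<string, unknown>)['clientId'] === 'string' &&
      typeof (parsed as Record<string, unknown>)['clientSecret'] === 'string'
    ) {
      const p = parsed as Record<string, unknown>;
      return {
        apiUrl: p['apiUrl'] as string,
        clientId: p['clientId'] as string,
        clientSecret: p['clientSecret'] as string,
      };
    }
    return null;
  } catch {
    return null;
  }
}

console.log(parseConfig('{"apiUrl":"https://x","clientId":"a","clientSecret":"b"}') !== null); // true
console.log(parseConfig('{"apiUrl":"https://x"}')); // null
console.log(parseConfig('not json'));               // null
```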
33
cli/src/index.ts
Normal file
@@ -0,0 +1,33 @@
#!/usr/bin/env node

import { Command } from 'commander';
import packageJson from '../package.json';

import { registerConfigure } from './commands/configure';
import { registerRegisterAgent } from './commands/register-agent';
import { registerListAgents } from './commands/list-agents';
import { registerIssueToken } from './commands/issue-token';
import { registerRotateCredentials } from './commands/rotate-credentials';
import { registerTailAuditLog } from './commands/tail-audit-log';
import { registerCompletion } from './commands/completion';
import { registerScaffold } from './commands/scaffold';

const program = new Command();

program
  .name('sentryagent')
  .description('SentryAgent.ai CLI — manage agents, tokens, and audit logs')
  .version(packageJson.version, '-v, --version', 'Output the current version');

// Register all commands
registerConfigure(program);
registerRegisterAgent(program);
registerListAgents(program);
registerIssueToken(program);
registerRotateCredentials(program);
registerTailAuditLog(program);
registerCompletion(program);
registerScaffold(program);

// Parse args — commander will display help automatically on --help
program.parse(process.argv);
29
cli/tsconfig.json
Normal file
@@ -0,0 +1,29 @@
{
  "compilerOptions": {
    "target": "ES2020",
    "module": "commonjs",
    "lib": ["ES2020"],
    "outDir": "./dist",
    "rootDir": "./src",
    "strict": true,
    "noImplicitAny": true,
    "strictNullChecks": true,
    "strictFunctionTypes": true,
    "strictBindCallApply": true,
    "strictPropertyInitialization": true,
    "noImplicitThis": true,
    "alwaysStrict": true,
    "noUnusedLocals": true,
    "noUnusedParameters": true,
    "noImplicitReturns": true,
    "noFallthroughCasesInSwitch": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "forceConsistentCasingInFileNames": true,
    "resolveJsonModule": true,
    "declaration": true,
    "sourceMap": true
  },
  "include": ["src/**/*"],
  "exclude": ["node_modules", "dist"]
}
69
compose.monitoring.yaml
Normal file
@@ -0,0 +1,69 @@
# SentryAgent.ai AgentIdP — Monitoring Overlay
# Compose Specification (no version header — deprecated per modern Compose Spec)
# Usage: docker compose -f compose.yaml -f compose.monitoring.yaml up

services:
  prometheus:
    image: prom/prometheus:v2.53.0
    volumes:
      - ./monitoring/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - prometheus-data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/etc/prometheus/console_libraries'
      - '--web.console.templates=/etc/prometheus/consoles'
      - '--web.enable-lifecycle'
    ports:
      - '9090:9090'
    networks:
      - app-tier
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 256m
          cpus: '0.5'
    healthcheck:
      test: ['CMD', 'wget', '--no-verbose', '--tries=1', '--spider', 'http://localhost:9090/-/healthy']
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 30s

  grafana:
    image: grafana/grafana:11.2.0
    volumes:
      - grafana-data:/var/lib/grafana
      - ./monitoring/grafana/provisioning:/etc/grafana/provisioning:ro
      - ./monitoring/grafana/dashboards:/var/lib/grafana/dashboards:ro
    environment:
      GF_SECURITY_ADMIN_PASSWORD: ${GF_ADMIN_PASSWORD}
      GF_USERS_ALLOW_SIGN_UP: 'false'
      GF_AUTH_ANONYMOUS_ENABLED: 'false'
    ports:
      - '3001:3000'
    networks:
      - app-tier
    depends_on:
      - prometheus
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 256m
          cpus: '0.5'
    healthcheck:
      test: ['CMD', 'wget', '--no-verbose', '--tries=1', '--spider', 'http://localhost:3000/api/health']
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 30s

volumes:
  prometheus-data:
  grafana-data:

networks:
  app-tier:
    external: true
95
compose.yaml
Normal file
@@ -0,0 +1,95 @@
# SentryAgent.ai AgentIdP — Docker Compose
# Compose Specification (no version header — deprecated per modern Compose Spec)
# Usage: docker compose up --build

services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - '3000:3000'
    environment:
      NODE_ENV: ${NODE_ENV:-development}
      DATABASE_URL: postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB}
      REDIS_URL: redis://redis:6379
      PORT: '3000'
    env_file:
      - path: .env
        required: false
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    networks:
      - app-tier
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 512m
          cpus: '1.0'
    healthcheck:
      test: ['CMD', 'curl', '-f', 'http://localhost:3000/health']
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    # Bind mount for local development source-sync only
    volumes:
      - ./src:/app/src:ro

  postgres:
    image: postgres:14.12-alpine3.19
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
    ports:
      - '5432:5432'
    volumes:
      - postgres-data:/var/lib/postgresql/data
    networks:
      - app-tier
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 256m
          cpus: '0.5'
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -U $POSTGRES_USER -d $POSTGRES_DB']
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 20s

  redis:
    image: redis:7.2-alpine3.19
    ports:
      - '6379:6379'
    volumes:
      - redis-data:/data
    networks:
      - app-tier
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 128m
          cpus: '0.5'
    healthcheck:
      test: ['CMD', 'redis-cli', 'ping']
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 10s

networks:
  app-tier:
    driver: bridge

volumes:
  postgres-data:
  redis-data:
@@ -9,6 +9,7 @@ import AgentDetail from '@/pages/AgentDetail';
 import Credentials from '@/pages/Credentials';
 import AuditLog from '@/pages/AuditLog';
 import Health from '@/pages/Health';
+import { UsagePanel } from '@/components/UsagePanel';

 /** Top-level router — defines all application routes. */
 export default function App(): React.JSX.Element {
@@ -23,6 +24,7 @@ export default function App(): React.JSX.Element {
           <Route path="/dashboard/agents/:agentId/credentials" element={<Credentials />} />
           <Route path="/dashboard/audit" element={<AuditLog />} />
           <Route path="/dashboard/health" element={<Health />} />
+          <Route path="/dashboard/usage" element={<UsagePanel />} />
         </Route>
       </Route>
       <Route path="/dashboard" element={<Navigate to="/dashboard/agents" replace />} />
192
dashboard/src/components/UsagePanel.tsx
Normal file
@@ -0,0 +1,192 @@
import * as React from 'react';
import { useAuth } from '@/lib/auth';
import { TokenManager } from '@sentryagent/idp-sdk';

/** Shape of the GET /api/v1/billing/usage response. */
interface UsageResponse {
  tenantId: string;
  date: string;
  apiCalls: number;
  agentCount: number;
  subscriptionStatus: string;
  currentPeriodEnd: string | null;
  stripeSubscriptionId: string | null;
}

type LoadState = 'idle' | 'loading' | 'success' | 'error';

interface UsageState {
  loadState: LoadState;
  data: UsageResponse | null;
  errorMessage: string | null;
}

const initialState: UsageState = {
  loadState: 'idle',
  data: null,
  errorMessage: null,
};

/**
 * Fetches the current usage summary from the API using the stored credentials.
 *
 * @param baseUrl - The API base URL.
 * @param clientId - The agent client ID.
 * @param clientSecret - The agent client secret.
 * @returns The usage response from the server.
 */
async function fetchUsage(
  baseUrl: string,
  clientId: string,
  clientSecret: string,
): Promise<UsageResponse> {
  const tokenManager = new TokenManager(
    baseUrl,
    clientId,
    clientSecret,
    'agents:read',
  );
  const token = await tokenManager.getToken();

  const response = await fetch(`${baseUrl}/api/v1/billing/usage`, {
    headers: { Authorization: `Bearer ${token}` },
  });

  if (!response.ok) {
    throw new Error(`Failed to fetch usage data (HTTP ${response.status})`);
  }

  return response.json() as Promise<UsageResponse>;
}

/** Badge shown for the tenant's subscription tier. */
function SubscriptionBadge({ status }: { status: string }): React.JSX.Element {
  const isPro = status !== 'free';

  return (
    <span
      className={`inline-flex items-center rounded-full px-2.5 py-0.5 text-xs font-semibold ${
        isPro
          ? 'bg-brand-100 text-brand-700'
          : 'bg-slate-100 text-slate-600'
      }`}
    >
      {isPro ? 'Pro' : 'Free Tier'}
    </span>
  );
}

/** A single metric card with label and value. */
function MetricCard({ label, value }: { label: string; value: string | number }): React.JSX.Element {
  return (
    <div className="rounded-xl border border-slate-200 bg-white p-6 shadow-sm">
      <p className="text-sm font-medium text-slate-500">{label}</p>
      <p className="mt-1 text-2xl font-bold text-slate-900">{value}</p>
    </div>
  );
}

/**
 * Displays the current tenant's usage summary:
 * - API calls today
 * - Active agent count
 * - Subscription status (Free Tier / Pro)
 *
 * Fetches GET /api/v1/billing/usage with the current Bearer token.
 * Handles loading state and error state gracefully.
 */
export function UsagePanel(): React.JSX.Element {
  const { credentials } = useAuth();
  const [state, setState] = React.useState<UsageState>(initialState);

  const loadUsage = React.useCallback(async (): Promise<void> => {
    if (!credentials) return;

    setState((prev) => ({ ...prev, loadState: 'loading', errorMessage: null }));

    try {
      const data = await fetchUsage(
        credentials.baseUrl,
        credentials.clientId,
        credentials.clientSecret,
      );
      setState({ loadState: 'success', data, errorMessage: null });
    } catch (err) {
      const message = err instanceof Error ? err.message : 'Unknown error occurred.';
      setState({ loadState: 'error', data: null, errorMessage: message });
    }
  }, [credentials]);

  React.useEffect(() => {
    void loadUsage();
  }, [loadUsage]);

  const isLoading = state.loadState === 'loading' || state.loadState === 'idle';

  return (
    <div>
      <div className="mb-6 flex items-center justify-between">
        <h1 className="text-2xl font-bold text-slate-900">Usage & Billing</h1>
        <button
          onClick={() => { void loadUsage(); }}
          disabled={isLoading}
          className="rounded-md border border-slate-300 px-3 py-1.5 text-sm hover:bg-slate-50 disabled:opacity-40"
        >
          Refresh
        </button>
      </div>

      {/* Error state */}
      {state.loadState === 'error' && (
        <div className="mb-6 rounded-md bg-red-50 px-4 py-3 text-sm text-red-700" role="alert">
          {state.errorMessage ?? 'Failed to load usage data.'}
        </div>
      )}

      {/* Loading skeleton */}
      {isLoading && (
        <div className="grid grid-cols-1 gap-4 sm:grid-cols-3 animate-pulse">
          {[1, 2, 3].map((i) => (
            <div key={i} className="h-28 rounded-xl border border-slate-200 bg-slate-100" />
          ))}
        </div>
      )}

      {/* Data */}
      {state.loadState === 'success' && state.data !== null && (
        <>
          <div className="mb-4 flex items-center gap-3">
            <p className="text-sm text-slate-500">
              Showing usage for <strong>{state.data.date}</strong>
            </p>
            <SubscriptionBadge status={state.data.subscriptionStatus} />
          </div>

          <div className="grid grid-cols-1 gap-4 sm:grid-cols-3">
            <MetricCard label="API Calls Today" value={state.data.apiCalls.toLocaleString()} />
            <MetricCard label="Active Agents" value={state.data.agentCount.toLocaleString()} />
            <MetricCard label="Plan" value={state.data.subscriptionStatus === 'free' ? 'Free Tier' : 'Pro'} />
          </div>

          {state.data.subscriptionStatus === 'free' && (
            <div className="mt-6 rounded-xl border border-brand-200 bg-brand-50 p-5">
              <p className="text-sm font-medium text-brand-800">
                You are on the Free Tier — limited to 10 agents and 1,000 API calls/day.
              </p>
              <p className="mt-1 text-sm text-brand-700">
                Upgrade to Pro for unlimited agents and API calls.
              </p>
            </div>
          )}

          {state.data.currentPeriodEnd !== null && (
            <p className="mt-4 text-xs text-slate-400">
              Current period ends:{' '}
              {new Date(state.data.currentPeriodEnd).toLocaleDateString()}
            </p>
          )}
        </>
      )}
    </div>
  );
}
@@ -12,6 +12,7 @@ const NAV_ITEMS: NavItem[] = [
   { to: '/dashboard/agents', label: 'Agents' },
   { to: '/dashboard/audit', label: 'Audit Log' },
   { to: '/dashboard/health', label: 'Health' },
+  { to: '/dashboard/usage', label: 'Usage' },
 ];

 /**
@@ -1,50 +0,0 @@
version: '3.8'

# Monitoring overlay — extend the base docker-compose.yml
# Usage: docker compose -f docker-compose.yml -f docker-compose.monitoring.yml up

services:
  prometheus:
    image: prom/prometheus:v2.53.0
    container_name: agentidp_prometheus
    volumes:
      - ./monitoring/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - prometheus_data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/etc/prometheus/console_libraries'
      - '--web.console.templates=/etc/prometheus/consoles'
      - '--web.enable-lifecycle'
    ports:
      - '9090:9090'
    networks:
      - agentidp_network
    restart: unless-stopped

  grafana:
    image: grafana/grafana:11.2.0
    container_name: agentidp_grafana
    volumes:
      - grafana_data:/var/lib/grafana
      - ./monitoring/grafana/provisioning:/etc/grafana/provisioning:ro
      - ./monitoring/grafana/dashboards:/var/lib/grafana/dashboards:ro
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=agentidp
      - GF_USERS_ALLOW_SIGN_UP=false
      - GF_AUTH_ANONYMOUS_ENABLED=false
    ports:
      - '3001:3000'
    networks:
      - agentidp_network
    depends_on:
      - prometheus
    restart: unless-stopped

volumes:
  prometheus_data:
  grafana_data:

networks:
  agentidp_network:
    external: true
@@ -1,54 +0,0 @@
version: '3.9'

services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - '3000:3000'
    environment:
      - DATABASE_URL=postgresql://sentryagent:sentryagent@postgres:5432/sentryagent_idp
      - REDIS_URL=redis://redis:6379
      - PORT=3000
    env_file:
      - .env
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    volumes:
      - ./src:/app/src:ro

  postgres:
    image: postgres:14-alpine
    environment:
      POSTGRES_USER: sentryagent
      POSTGRES_PASSWORD: sentryagent
      POSTGRES_DB: sentryagent_idp
    ports:
      - '5432:5432'
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -U sentryagent -d sentryagent_idp']
      interval: 5s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    ports:
      - '6379:6379'
    volumes:
      - redis_data:/data
    healthcheck:
      test: ['CMD', 'redis-cli', 'ping']
      interval: 5s
      timeout: 5s
      retries: 5

volumes:
  postgres_data:
  redis_data:
172
docs/compliance/audit-log-runbook.md
Normal file
@@ -0,0 +1,172 @@
# Audit Log Chain Verification Runbook — SentryAgent.ai AgentIdP

**Control:** SOC 2 CC7.2 — Audit Log Integrity
**Service:** `src/services/AuditVerificationService.ts`
**Job:** `src/jobs/AuditChainVerificationJob.ts`
**Endpoint:** `GET /api/v1/audit/verify`

---

## Overview

Every audit event in the `audit_events` PostgreSQL table is linked to the previous one
via a SHA-256 hash chain. Each event stores:

- `hash` — SHA-256 of `(eventId + timestamp.toISOString() + action + outcome + agentId + organizationId + previousHash)`
- `previous_hash` — the `hash` of the immediately preceding event (ordered by `timestamp ASC, event_id ASC`)

The first event in the chain uses `previous_hash = ''` (empty string sentinel).

A PostgreSQL trigger (`trg_audit_events_immutable`) prevents UPDATE and DELETE operations
on `audit_events`, making the log tamper-evident at the database level.
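The hash linkage above can be sketched in TypeScript. This is a minimal illustration built only from the field list in this section — the actual `AuditVerificationService` implementation may differ:

```typescript
import { createHash } from 'node:crypto';

interface AuditEvent {
  eventId: string;
  timestamp: Date;
  action: string;
  outcome: string;
  agentId: string;
  organizationId: string;
  hash: string;
  previousHash: string;
}

// Recompute an event's hash exactly as described: SHA-256 over the
// concatenated fields, hex-encoded.
function computeHash(e: AuditEvent): string {
  return createHash('sha256')
    .update(
      e.eventId +
        e.timestamp.toISOString() +
        e.action +
        e.outcome +
        e.agentId +
        e.organizationId +
        e.previousHash,
    )
    .digest('hex');
}

// Walk events (already ordered by timestamp ASC, event_id ASC) and return
// the eventId of the first break, or null if the chain is intact.
function findBreak(events: AuditEvent[]): string | null {
  let expectedPrev = ''; // empty-string sentinel for the first event
  for (const e of events) {
    if (e.previousHash !== expectedPrev || computeHash(e) !== e.hash) {
      return e.eventId;
    }
    expectedPrev = e.hash;
  }
  return null;
}
```

Because each `hash` folds in `previousHash`, modifying any historical row invalidates every hash from that point onward.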
---

## Running GET /audit/verify

### Full chain verification (no date range)

```bash
# Requires Bearer token with audit:read scope
curl -s -H "Authorization: Bearer <token>" \
  "https://api.sentryagent.ai/v1/audit/verify"
```

**Response (chain intact):**
```json
{
  "verified": true,
  "checkedCount": 18504,
  "brokenAtEventId": null
}
```

**Response (chain break detected):**
```json
{
  "verified": false,
  "checkedCount": 1203,
  "brokenAtEventId": "c4d5e6f7-a8b9-0123-cdef-456789012345"
}
```

### Date-ranged verification

```bash
curl -s -H "Authorization: Bearer <token>" \
  "https://api.sentryagent.ai/v1/audit/verify?fromDate=2026-03-01T00:00:00.000Z&toDate=2026-03-31T23:59:59.999Z"
```

### Interpreting the response

| Field | Meaning |
|---|---|
| `verified: true` | All events in the checked range maintain valid hash chain linkage |
| `verified: false` | At least one chain break detected — see `brokenAtEventId` |
| `checkedCount` | Number of events examined (0 = no events in range) |
| `brokenAtEventId` | UUID of the first event where the chain fails (`null` if verified) |
| `fromDate` / `toDate` | Echo of the date range parameters (only present if supplied) |

---

## AuditChainVerificationJob

The `AuditChainVerificationJob` runs automatically in the background every hour (default).
Configure the interval via `AUDIT_CHAIN_VERIFICATION_INTERVAL_MS` (milliseconds).

On each tick it calls `verifyChain()` and:

- Sets Prometheus gauge `agentidp_audit_chain_integrity` to **1** (passing)
- Updates `ComplianceStatusStore` with `CC7.2 = passing`

If verification fails:

- Sets gauge to **0**
- Updates `ComplianceStatusStore` with `CC7.2 = failing`
- Prometheus alert `AuditChainIntegrityFailed` fires immediately (severity: critical)
- Application logs: `[AuditChainVerificationJob] Chain BROKEN at event <uuid>`
---

## What to Do When `brokenAtEventId` is Returned

### Step 1: Preserve Evidence

Immediately capture the full state of the audit log for forensic analysis:

```sql
-- Export all events around the break point
SELECT event_id, timestamp, action, outcome, agent_id, organization_id, hash, previous_hash
FROM audit_events
WHERE timestamp >= (
  SELECT timestamp - INTERVAL '1 hour'
  FROM audit_events WHERE event_id = '<brokenAtEventId>'
)
ORDER BY timestamp ASC, event_id ASC;
```

Save the output to a secure, immutable location (e.g. S3 with object locking).

### Step 2: Identify the Break Type

Compare the recomputed hash for the broken event with its stored hash:

```bash
# Using Node.js
node -e "
const crypto = require('crypto');
const eventId = '<event_id>';
const timestamp = '<timestamp_from_db>';
const action = '<action>';
const outcome = '<outcome>';
const agentId = '<agent_id>';
const orgId = '<organization_id>';
const prevHash = '<previous_hash_from_db>';
const expected = crypto.createHash('sha256')
  .update(eventId + new Date(timestamp).toISOString() + action + outcome + agentId + orgId + prevHash)
  .digest('hex');
console.log('Expected hash:', expected);
console.log('Stored hash: <hash_from_db>');
console.log('Match:', expected === '<hash_from_db>');
"
```

Possible break types:

- **Hash mismatch only** — event data was modified after insertion
- **previous_hash mismatch** — an event was inserted/deleted before this event in the chain
- **Both mismatched** — multiple modifications or an injection attack

### Step 3: Escalate

A chain break is a **critical security incident**. Immediately:

1. Notify the security team and CISO
2. Engage incident response procedure (`docs/compliance/incident-response.md` — Audit Chain Integrity Failure section)
3. Do NOT attempt to "fix" the hash — preserve the broken state as evidence
4. Consider temporarily suspending API access pending investigation
5. Notify affected customers per data breach notification obligations

### Step 4: Forensic Investigation

Using PostgreSQL audit logs, Vault audit logs, and application logs:

- Identify which application process or database connection modified the row
- Correlate with access logs and authentication events
- Determine the extent of the compromise (single row vs. systematic)

---

## Verification Rate Limiting

`GET /audit/verify` is rate-limited to **30 requests/minute** per `client_id`.
For continuous monitoring, use `AuditChainVerificationJob` (background job, no rate limit)
and poll `GET /compliance/controls` instead.
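The 30 requests/minute limit can be sketched with a fixed-window counter. This is illustrative only — the production limiter's algorithm and storage (e.g. Redis, sliding window) are not documented here:

```typescript
// Fixed-window rate limiter sketch: 30 requests per 60-second window per client_id.
const LIMIT = 30;
const WINDOW_MS = 60_000;
const windows = new Map<string, { start: number; count: number }>();

function allow(clientId: string, now: number): boolean {
  const w = windows.get(clientId);
  if (!w || now - w.start >= WINDOW_MS) {
    // First request for this client, or the previous window has elapsed.
    windows.set(clientId, { start: now, count: 1 });
    return true;
  }
  w.count += 1;
  return w.count <= LIMIT;
}
```

A fixed window admits short bursts around the window boundary; a sliding-window or token-bucket variant smooths that out at the cost of more bookkeeping.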
---

## SOC 2 Evidence Package

For auditors, provide:

1. `GET /audit/verify` response (full chain, no date filter) — save as JSON
2. Prometheus metric export: `agentidp_audit_chain_integrity` time series (30/60/90 days)
3. PostgreSQL trigger definition: `\d+ audit_events` in psql
4. `src/db/migrations/020_add_audit_chain_columns.sql` — shows immutability trigger DDL
5. `docs/openapi/compliance.yaml` — endpoint specification
159
docs/compliance/encryption-runbook.md
Normal file
@@ -0,0 +1,159 @@
# Encryption Key Rotation Runbook — SentryAgent.ai AgentIdP

**Control:** SOC 2 CC6.1 — Encryption at Rest
**Service:** `src/services/EncryptionService.ts`
**Vault path:** Configured via `ENCRYPTION_KEY_VAULT_PATH` env var (default: `secret/data/agentidp/encryption-key`)

---

## Overview

AgentIdP uses AES-256-CBC column-level encryption for sensitive PostgreSQL columns.
The encryption key is a 64-character hex string (32 bytes) stored in HashiCorp Vault.
The `EncryptionService` fetches the key once and caches it in process memory.

Encrypted format: `base64(IV):base64(ciphertext)` where IV is 16 random bytes per encryption call.
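The stated format can be illustrated with Node's `crypto` module. This sketch follows the `base64(IV):base64(ciphertext)` convention from the Overview; the real `EncryptionService` may differ in details such as error handling and key validation:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from 'node:crypto';

// keyHex: the 64-character hex string (32 bytes) stored in Vault.
function encrypt(plaintext: string, keyHex: string): string {
  const key = Buffer.from(keyHex, 'hex');
  const iv = randomBytes(16); // fresh 16-byte IV per encryption call
  const cipher = createCipheriv('aes-256-cbc', key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
  return `${iv.toString('base64')}:${ciphertext.toString('base64')}`;
}

function decrypt(encoded: string, keyHex: string): string {
  const [ivB64, ctB64] = encoded.split(':');
  const decipher = createDecipheriv(
    'aes-256-cbc',
    Buffer.from(keyHex, 'hex'),
    Buffer.from(ivB64, 'base64'),
  );
  return Buffer.concat([
    decipher.update(Buffer.from(ctB64, 'base64')),
    decipher.final(),
  ]).toString('utf8');
}
```

Because the IV is random per call, encrypting the same plaintext twice yields different stored values — only decryption with the correct key recovers the original.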
---

## Key Rotation Procedure

### Prerequisites

- Access to HashiCorp Vault with write permissions to the encryption key path
- Access to the production application environment (to trigger restart)
- At least one backup of the current key stored securely offline

### Step 1: Generate a New Key

Generate a cryptographically strong 32-byte (64-character hex) key:

```bash
openssl rand -hex 32
# Example output: a1b2c3d4e5f6... (64 hex chars)
```

Record the new key securely.

### Step 2: Backup the Current Key

Before overwriting, read and securely store the current key:

```bash
vault kv get -field=encryptionKey secret/agentidp/encryption-key > /secure/backup/encryption-key-$(date +%Y%m%d).txt
```

Store in a hardware security module (HSM) or offline key store.

### Step 3: Write the New Key to Vault

```bash
vault kv put secret/agentidp/encryption-key encryptionKey="<new-64-char-hex-key>"
```

Verify the write:

```bash
vault kv get secret/agentidp/encryption-key
```

Confirm the `encryptionKey` field contains exactly 64 hex characters.

### Step 4: Restart the Application

The `EncryptionService` caches the key in process memory. A restart forces a re-fetch from Vault:

```bash
# Kubernetes rolling restart
kubectl rollout restart deployment/agentidp

# Docker Compose
docker compose restart app

# PM2
pm2 restart agentidp
```

### Step 5: Verify Key Pick-Up

Check the application logs for:

```
[AgentIdP] EncryptionService enabled — sensitive columns encrypted at rest (SOC 2 CC6.1)
```

Call the compliance controls endpoint to confirm the control is passing:

```bash
curl -s https://api.sentryagent.ai/v1/compliance/controls | jq '.controls[] | select(.id == "CC6.1")'
```

Expected output:
```json
{ "id": "CC6.1", "name": "Encryption at Rest", "status": "passing", "lastChecked": "..." }
```

### Step 6: Re-encryption of Existing Rows

Existing rows encrypted with the old key will fail to decrypt after key rotation.
Re-encryption happens lazily: the next time each row is read and re-written (e.g. credential rotation,
webhook update), the application will decrypt with the old key and re-encrypt with the new one.

For immediate full re-encryption, use the re-encryption script:

```bash
# Run the re-encryption migration script (reads old key from backup, encrypts with new key)
# Note: This script requires both old and new keys to be available
ts-node scripts/reencrypt-columns.ts --old-key-file /secure/backup/encryption-key-<date>.txt
```

---

## Emergency Rollback

If the new key causes issues (e.g. test failures, decryption errors), roll back:

### Step 1: Restore Old Key to Vault

```bash
vault kv put secret/agentidp/encryption-key encryptionKey="<old-64-char-hex-key-from-backup>"
```

### Step 2: Restart the Application

```bash
kubectl rollout restart deployment/agentidp
```

### Step 3: Verify Recovery

```bash
curl -s https://api.sentryagent.ai/v1/compliance/controls | jq '.controls[] | select(.id == "CC6.1")'
```

### Step 4: Investigate Root Cause

Review application logs for `AES-256-CBC decryption failed` errors and audit the cause before
reattempting rotation.

---

## Troubleshooting

| Symptom | Likely Cause | Resolution |
|---|---|---|
| `Invalid encryption key ... expected a 64-character hex string` | Key in Vault is wrong length or encoding | Re-write correct key to Vault, restart |
| `AES-256-CBC decryption failed — possible key mismatch` | Key rotated but rows still encrypted with old key | Rollback to old key, then migrate properly |
| `CC6.1` status shows `unknown` | Vault unreachable, key fetch failed | Check Vault connectivity, `VAULT_ADDR`, `VAULT_TOKEN` |

---

## Audit Evidence

After rotation, record the following for SOC 2 evidence:

- Date of rotation
- Who performed the rotation (approver + executor)
- Vault audit log entry confirming the key write
- Application log confirming EncryptionService initialised with new key
- `GET /compliance/controls` response showing CC6.1 = passing
229
docs/compliance/incident-response.md
Normal file
@@ -0,0 +1,229 @@
# Incident Response Runbook — SentryAgent.ai AgentIdP

**Owner:** Security Engineering
**Last updated:** 2026-03-31
**Applies to:** Production AgentIdP deployments

This runbook covers the four incident types most relevant to SOC 2 Type II compliance monitoring.
---

## 1. Auth Failure Spike

### Detection

**Prometheus alert:** `AuthFailureSpike`
```yaml
expr: rate(agentidp_http_requests_total{status_code="401"}[5m]) > 0.5
for: 2m
severity: warning
```

Triggers when the rate of HTTP 401 responses exceeds 0.5 per second sustained over 2 minutes.

### Immediate Actions

1. Acknowledge the alert in PagerDuty / alerting system
2. Check whether the spike correlates with a scheduled process (e.g. batch agent key rotation, deployment)
3. Check Prometheus dashboard for the geographic distribution of the failing requests

### Investigation Steps

1. **Identify source agents:**

   ```bash
   # Query audit log for recent auth failures
   curl -s -H "Authorization: Bearer <admin-token>" \
     "https://api.sentryagent.ai/v1/audit?action=auth.failed&limit=100"
   ```

2. **Check for brute-force patterns:**
   Look for repeated failures from the same `client_id` or IP address.

3. **Check if an agent's credentials expired:**

   ```bash
   # Look for expired credentials
   psql "$DATABASE_URL" -c "
     SELECT credential_id, client_id, expires_at
     FROM credentials
     WHERE status = 'active' AND expires_at < NOW()
     ORDER BY expires_at DESC LIMIT 20;"
   ```

4. **Check for key compromise signals:**
   - Multiple agents failing simultaneously → possible key store issue
   - Single agent with high failure rate → possible credential stuffing or misconfiguration
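The brute-force check in the steps above can be sketched as a simple per-client count over a time window. This is a rough heuristic for triage, not the production detection logic; the field names mirror the audit log output:

```typescript
type AuthFailure = { clientId: string; ip: string; at: Date };

// Count auth failures per clientId since windowStart and return the
// clientIds at or above the threshold — candidates for a brute-force attempt.
function bruteForceCandidates(
  failures: AuthFailure[],
  windowStart: Date,
  threshold: number,
): string[] {
  const counts = new Map<string, number>();
  for (const f of failures) {
    if (f.at >= windowStart) {
      counts.set(f.clientId, (counts.get(f.clientId) ?? 0) + 1);
    }
  }
  return [...counts.entries()]
    .filter(([, n]) => n >= threshold)
    .map(([id]) => id);
}
```

The same grouping can be done per IP address to separate a single misconfigured agent from distributed credential stuffing.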
### Escalation Path

- **Warning (< 2 req/s):** Engineering on-call investigates within 1 hour
- **Critical (> 2 req/s sustained):** CISO notified, potential account compromise investigation
- **If credential compromise confirmed:** Revoke affected credentials immediately via `POST /agents/:id/credentials/:credId/revoke`
---

## 2. Anomalous Token Issuance

### Detection

**Prometheus alert:** `AnomalousTokenIssuance`
```yaml
expr: rate(agentidp_tokens_issued_total[5m]) > 10
for: 5m
severity: warning
```

Triggers when token issuance rate exceeds 10 per second for 5 continuous minutes.

### Immediate Actions

1. Acknowledge the alert
2. Determine if a legitimate mass-scale operation is underway (e.g. new customer onboarding, load test)
3. Check the `scope` label breakdown on `agentidp_tokens_issued_total` to identify what scopes are being requested

### Investigation Steps

1. **Identify top issuing agents:**

   ```bash
   # Query audit log for recent token issuances
   curl -s -H "Authorization: Bearer <admin-token>" \
     "https://api.sentryagent.ai/v1/audit?action=token.issued&limit=100"
   ```

2. **Check monthly token budget:**
   Each agent is limited to 10,000 tokens/month (free tier). A single agent hitting the limit may indicate automation abuse.

3. **Check for abnormal scope combinations:**
   If tokens are being issued with `admin:orgs` or `audit:read` at high volume, this warrants immediate investigation.

4. **Check for valid business reason:**
   Contact the organization owner for the top-issuing agents.

### Escalation Path

- **Warning:** Engineering on-call investigates within 4 hours
- **If compromise suspected:** Revoke affected agent tokens via Redis revocation list, rotate credentials
- **If systematic abuse confirmed:** Suspend the issuing agent(s) via `PATCH /agents/:id` with `status: suspended`
---

## 3. Audit Chain Integrity Failure

### Detection

**Prometheus alert:** `AuditChainIntegrityFailed`
```yaml
expr: agentidp_audit_chain_integrity == 0
for: 0m
severity: critical
```

Fires immediately when `AuditChainVerificationJob` detects a break in the audit event hash chain.
This is a **CRITICAL** security event — possible evidence of log tampering.

### Immediate Actions

1. **Do NOT attempt to repair the broken chain** — preserve all evidence
2. Notify CISO and security team immediately
3. Page the on-call security engineer with P0 priority
4. Capture the current state:

   ```bash
   curl -s -H "Authorization: Bearer <audit-token>" \
     "https://api.sentryagent.ai/v1/audit/verify" | tee /secure/incident-$(date +%Y%m%d-%H%M).json
   ```

### Investigation Steps

1. **Determine the broken event:**
   The `brokenAtEventId` field in the `/audit/verify` response identifies the first broken event.

2. **Forensic analysis:**
   Follow the steps in `docs/compliance/audit-log-runbook.md` — "What to Do When brokenAtEventId is Returned".

3. **Check database access logs:**
   Review PostgreSQL `pg_stat_activity` and connection logs for unauthorized direct DB access.

4. **Check application logs:**
   Look for any errors from the immutability trigger (`audit_events_immutable`).

5. **Check Vault audit logs:**
   Review whether any encryption key access was abnormal.

### Escalation Path

- **Immediate:** CISO + Legal + Security Engineering
- **Within 1 hour:** Begin forensic preservation per incident response plan
- **Within 24 hours:** Determine scope of compromise and notification obligations
- **Customer notification:** Per contractual and regulatory obligations (GDPR, SOC 2 requirements)
---

## 4. Webhook Dead-Letter Accumulation

### Detection

**Prometheus alert:** `WebhookDeadLetterAccumulating`
```yaml
expr: increase(agentidp_webhook_dead_letters_total[1h]) > 10
for: 0m
severity: critical
```

Fires when more than 10 webhook deliveries reach dead-letter status within an hour.

### Immediate Actions

1. Acknowledge the alert
2. Check which `organization_id` labels are accumulating dead-letters:

   ```bash
   # Prometheus query: top organizations by dead-letter rate
   # agentidp_webhook_dead_letters_total (by organization_id)
   ```

3. Check if the destination endpoints are reachable:

   ```bash
   curl -I https://<webhook-destination-url>/
   ```

### Investigation Steps

1. **List affected webhook subscriptions:**

   ```bash
   # Query delivery records for dead-letter status
   psql "$DATABASE_URL" -c "
     SELECT s.id, s.organization_id, s.url, COUNT(d.id) AS dead_letters
     FROM webhook_subscriptions s
     JOIN webhook_deliveries d ON d.subscription_id = s.id
     WHERE d.status = 'dead_letter'
       AND d.updated_at > NOW() - INTERVAL '2 hours'
     GROUP BY s.id
     ORDER BY dead_letters DESC
     LIMIT 20;"
   ```

2. **Check delivery failure reasons:**

   ```bash
   psql "$DATABASE_URL" -c "
     SELECT http_status_code, COUNT(*) AS count
     FROM webhook_deliveries
     WHERE status = 'dead_letter'
       AND updated_at > NOW() - INTERVAL '2 hours'
     GROUP BY http_status_code;"
   ```

3. **Common causes and resolutions:**

   | HTTP Status | Likely Cause | Resolution |
   |---|---|---|
   | 0 / null | Network unreachable / DNS failure | Check recipient endpoint availability |
   | 401 / 403 | HMAC signature validation failing | Customer to verify HMAC secret |
   | 404 | Endpoint URL changed | Customer to update webhook URL |
   | 5xx | Recipient server error | Customer to investigate their endpoint |
   | Timeout | Slow recipient endpoint | Customer to optimize endpoint response time |

4. **Notify affected customers:**
   Contact the organization owner for high-volume dead-letter subscriptions.
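For the 401/403 case in the table above, a receiver-side HMAC check can be sketched as follows. This is hypothetical: the signature header name, algorithm (assumed HMAC-SHA256), and hex encoding are assumptions for illustration, not taken from the AgentIdP webhook specification:

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto';

// Sender-side signature over the raw request body (assumed HMAC-SHA256, hex-encoded).
function sign(rawBody: string, secret: string): string {
  return createHmac('sha256', secret).update(rawBody).digest('hex');
}

// Receiver-side check: recompute the signature and compare in constant time
// to avoid leaking information through timing differences.
function verifySignature(rawBody: string, signatureHex: string, secret: string): boolean {
  const expected = Buffer.from(sign(rawBody, secret), 'hex');
  const received = Buffer.from(signatureHex, 'hex');
  return expected.length === received.length && timingSafeEqual(expected, received);
}
```

Customers seeing persistent 401/403 dead-letters should confirm they sign/verify the raw, unmodified request body with the current HMAC secret.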
### Escalation Path

- **Warning (10-50/hr):** Engineering notifies affected customers, investigates endpoint health
- **Critical (> 50/hr):** Engineering on-call + Platform reliability team engaged
- **If systemic delivery infrastructure failure:** Activate incident bridge, escalate to VP Engineering
142
docs/compliance/secrets-rotation.md
Normal file
@@ -0,0 +1,142 @@
# Secrets Rotation Runbook — SentryAgent.ai AgentIdP

**Control:** SOC 2 CC9.2 — Secrets Rotation
**Last updated:** 2026-03-31

---

## Overview

AgentIdP manages three categories of secrets that require periodic rotation:

1. **Agent client secrets** — Per-credential client secrets used for OAuth 2.0 token issuance
2. **OIDC signing keys** — RSA/EC keys used to sign ID tokens
3. **AES-256-CBC encryption key** — Column-level database encryption key (see `encryption-runbook.md`)

---

## 1. Agent Credential (Client Secret) Rotation

### API endpoint

```
POST /api/v1/agents/:agentId/credentials/:credentialId/rotate
```

Requires Bearer token with `agents:write` scope.

### Procedure

```bash
# 1. List active credentials for the agent
curl -s -H "Authorization: Bearer <token>" \
  "https://api.sentryagent.ai/v1/agents/<agentId>/credentials?status=active"

# 2. Rotate the credential (generate new secret)
curl -s -X POST \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d '{"expiresAt": "2027-03-31T00:00:00.000Z"}' \
  "https://api.sentryagent.ai/v1/agents/<agentId>/credentials/<credentialId>/rotate"

# Response includes the new clientSecret — store it immediately; it is never shown again
```

### Key points

- The new `clientSecret` is returned **once only** — store it securely before the response is discarded
- The agent's previous secret is immediately invalidated (Vault KV v2 version overwritten)
- An audit event `credential.rotated` is logged to the immutable audit chain
- A `credential.rotated` webhook event is dispatched to all active subscriptions

### Recommended rotation schedule

| Credential type | Recommended rotation interval |
|---|---|
| Production agent credentials | 90 days |
| Staging / development credentials | 180 days |
| Service account credentials | 365 days (annual) |
| Credentials involved in a security incident | Immediately |

### Automated expiry detection

`SecretsRotationJob` runs hourly and queries credentials expiring within 7 days.
Prometheus alert `CredentialExpiryApproaching` fires immediately when any are detected.
Respond to this alert by rotating the flagged credential(s) before the expiry date.
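The 7-day window check that `SecretsRotationJob` performs can be sketched as below. This is illustrative only — the real job queries PostgreSQL directly rather than filtering in application code:

```typescript
const SEVEN_DAYS_MS = 7 * 24 * 60 * 60 * 1000;

// True when the credential is still valid but expires within the next 7 days.
// Already-expired credentials are excluded: they need revocation, not a warning.
function expiringSoon(expiresAt: Date, now: Date): boolean {
  const remaining = expiresAt.getTime() - now.getTime();
  return remaining > 0 && remaining <= SEVEN_DAYS_MS;
}
```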
---

## 2. OIDC Signing Key Rotation

### Overview

OIDC signing keys are managed by `OIDCKeyService` (`src/services/OIDCKeyService.ts`).
Keys are stored in the `oidc_keys` PostgreSQL table. The current active key is used to
sign all new ID tokens; public keys are exposed via `GET /.well-known/jwks.json`.

### When to rotate

- Key compromise or suspected exposure
- Scheduled rotation (recommended every 90 days for production)
- Algorithm upgrade (e.g. RS256 → ES256)

### Rotation procedure

OIDC key rotation is handled automatically by `OIDCKeyService.ensureCurrentKey()`:

```bash
# Force generation of a new signing key by calling the internal rotate endpoint
# (or trigger by redeploying with OIDC_FORCE_KEY_ROTATION=true)

# 1. Mark current key as inactive (if manual rotation is required)
psql "$DATABASE_URL" -c "
  UPDATE oidc_keys
  SET active = false
  WHERE active = true;"

# 2. Restart the application — ensureCurrentKey() will generate a new key on startup
kubectl rollout restart deployment/agentidp
```

### JWKS update behavior

- Old public keys remain in `GET /.well-known/jwks.json` for **24 hours** after rotation
  (grace period for in-flight tokens)
- After the grace period, old keys are removed from the JWKS endpoint
- Redis JWKS cache TTL is configured by `JWKS_CACHE_TTL_SECONDS` (default: 3600)

### Impact on existing tokens

Existing valid tokens signed with the old key **continue to work** until they expire,
as long as the old public key remains in JWKS. After the grace period, old tokens
will fail verification.
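The 24-hour grace rule above reduces to a single time comparison. A minimal sketch, assuming the rotation timestamp is recorded when the key is retired (the actual `OIDCKeyService` bookkeeping may differ):

```typescript
const GRACE_MS = 24 * 60 * 60 * 1000; // 24-hour JWKS grace period

// True while a retired public key should still be served from jwks.json,
// so in-flight tokens signed with it can continue to verify.
function withinGracePeriod(rotatedAt: Date, now: Date): boolean {
  return now.getTime() - rotatedAt.getTime() < GRACE_MS;
}
```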
---
## 3. Encryption Key Rotation

See `docs/compliance/encryption-runbook.md` for the full AES-256-CBC encryption key rotation procedure.

**Summary:** Generate a new 32-byte hex key → write to Vault at `ENCRYPTION_KEY_VAULT_PATH` → restart app → existing rows are re-encrypted lazily on the next read-write cycle.
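The first step of that summary can be sketched with `openssl` (assuming it is available; the Vault write and app restart are covered by the runbook and omitted here):

```shell
# Generate a 32-byte key encoded as 64 hex characters — the format expected
# for an AES-256 key supplied as hex
NEW_KEY=$(openssl rand -hex 32)
echo "${#NEW_KEY}"  # 64: two hex characters per byte
```

Write the value to Vault at the path configured by `ENCRYPTION_KEY_VAULT_PATH` using your standard Vault workflow, then restart the application.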
---
## Schedule Recommendations

| Secret Type | Production Interval | Staging Interval | Trigger for Immediate Rotation |
|---|---|---|---|
| Agent client secrets | 90 days | 180 days | Credential suspected compromised |
| OIDC signing keys | 90 days | 180 days | Key file exposed, algorithm upgrade |
| AES-256-CBC encryption key | 365 days (annual) | On demand | Key exposed, Vault breach, compliance audit requirement |
| Webhook HMAC secrets | Per customer policy | N/A | Webhook endpoint compromised |
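For webhook HMAC secrets, rotation means re-issuing the shared secret and updating the consumer's signature check. A minimal sketch of the signature computation follows — the payload and secret values are illustrative, not AgentIdP's actual webhook contract:

```shell
SECRET="whsec_example_secret"         # hypothetical shared secret
PAYLOAD='{"event":"agent.created"}'   # hypothetical webhook body

# HMAC-SHA256 hex digest over the raw body; the receiver recomputes this
# and compares it to the signature header before trusting the payload
SIG=$(printf '%s' "$PAYLOAD" | openssl dgst -sha256 -hmac "$SECRET" | awk '{print $NF}')
echo "$SIG"
```

After rotating the secret, consumers must recompute signatures with the new value; running old and new secrets in parallel during a cutover window avoids dropped deliveries.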
---
## Compliance Evidence

For SOC 2 CC9.2 evidence collection:

- Prometheus metric history: `agentidp_credentials_expiring_soon_total`
- Audit log entries with `action: credential.rotated` — query via `GET /audit?action=credential.rotated`
- Key rotation records from the Vault audit log
- This runbook + sign-off from Security Engineering
docs/compliance/soc2-controls-matrix.md (new file, 42 lines)
@@ -0,0 +1,42 @@
# SOC 2 Type II Controls Matrix — SentryAgent.ai AgentIdP

This document maps the five in-scope SOC 2 Trust Services Criteria (TSC) controls to their
corresponding implementation artefacts, mechanisms, and automated verification methods.

---

## Controls Matrix

| Control ID | TSC Criterion Name | Implementation File | Mechanism | Automated Check |
|---|---|---|---|---|
| **CC6.1** | Encryption at Rest | `src/services/EncryptionService.ts` | AES-256-CBC column-level encryption on `credentials.secret_hash`, `credentials.vault_path`, `webhook_subscriptions.vault_secret_path`, `agent_did_keys.vault_key_path`. Key is stored in HashiCorp Vault KV v2 at path configured by `ENCRYPTION_KEY_VAULT_PATH`. IV is randomised per encryption call. Backward-compat: `isEncrypted()` gate allows plaintext rows to coexist during migration. | `GET /api/v1/compliance/controls` returns `CC6.1` status. Status is set to `passing` on service startup when `EncryptionService` initialises. |
| **CC6.7** | TLS Enforcement | `src/middleware/TLSEnforcementMiddleware.ts` | Express middleware registered as the **first** middleware in the app stack (before all routes and body parsers). In `NODE_ENV=production`, checks `X-Forwarded-Proto` header set by the upstream load balancer/reverse proxy. Any non-HTTPS request receives a `301 Moved Permanently` redirect to `https://`. | `GET /api/v1/compliance/controls` returns `CC6.7` status. TLS enforcement is a static configuration control; status is set to `passing` on application startup. |
| **CC7.2** | Audit Log Integrity | `src/services/AuditVerificationService.ts`, `src/repositories/AuditRepository.ts`, `src/jobs/AuditChainVerificationJob.ts` | Each audit event (`audit_events` table) stores a `hash` (SHA-256 of `eventId + timestamp + action + outcome + agentId + organizationId + previousHash`) and `previous_hash` linking it to the prior event. An immutability trigger prevents UPDATE/DELETE on `audit_events`. `AuditChainVerificationJob` re-walks the entire chain every hour. | Prometheus gauge `agentidp_audit_chain_integrity` (1 = passing, 0 = failing). Prometheus alert `AuditChainIntegrityFailed` fires when gauge = 0. `GET /api/v1/audit/verify` triggers an on-demand verification. `GET /api/v1/compliance/controls` returns `CC7.2` status. |
| **CC9.2** | Secrets Rotation | `src/jobs/SecretsRotationJob.ts` | `SecretsRotationJob` runs every hour (configurable via `SECRETS_ROTATION_CHECK_INTERVAL_MS`) and queries `credentials` for `active` credentials expiring within 7 days. For each, it increments the `agentidp_credentials_expiring_soon_total` Prometheus counter with the owning `agent_id`. Operators are expected to act on the alert within the 7-day window. | Prometheus counter `agentidp_credentials_expiring_soon_total` per `agent_id`. Prometheus alert `CredentialExpiryApproaching` fires when any increase is detected. `GET /api/v1/compliance/controls` returns `CC9.2` status. |
| **CC7.1** | Webhook Dead-Letter Monitoring | `src/workers/WebhookDeliveryWorker.ts` | `WebhookDeliveryWorker` processes webhook deliveries from a Redis queue. After exhausting all retry attempts (configurable `WEBHOOK_MAX_RETRIES`), the delivery is moved to dead-letter status and `agentidp_webhook_dead_letters_total` is incremented. | Prometheus counter `agentidp_webhook_dead_letters_total` per `organization_id`. Prometheus alert `WebhookDeadLetterAccumulating` fires when > 10 dead-letters accumulate in 1 hour. `GET /api/v1/compliance/controls` returns `CC7.1` status. |
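One link of the CC7.2 hash chain can be recomputed offline. The field values and the plain-concatenation encoding below are assumptions for illustration — the exact serialization lives in `AuditVerificationService`:

```shell
# Hypothetical previous-event hash and current-event fields
PREV="0000000000000000000000000000000000000000000000000000000000000000"
EVENT="evt-2" TS="2026-04-04T09:00:00.000Z" ACTION="token.issued"
OUTCOME="success" AGENT="agent-1" ORG="org-a"

# SHA-256 over eventId + timestamp + action + outcome + agentId + organizationId + previousHash
HASH=$(printf '%s' "${EVENT}${TS}${ACTION}${OUTCOME}${AGENT}${ORG}${PREV}" | sha256sum | cut -d' ' -f1)
echo "$HASH"
```

Walking the table in insertion order and comparing each recomputed hash to the stored `hash` column is the same check `AuditChainVerificationJob` performs hourly.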
---
## Evidence Collection

For a SOC 2 Type II audit, the following evidence should be collected:

| Evidence Type | Collection Method |
|---|---|
| Encryption at rest configuration | Export Vault KV v2 policy + `_encryption_migration_log` table contents |
| TLS certificate and enforcement logs | Load balancer access logs + `X-Forwarded-Proto` middleware responses |
| Audit chain integrity report | `GET /api/v1/audit/verify` with full date range |
| Secrets rotation compliance | Prometheus metric history for `agentidp_credentials_expiring_soon_total` |
| Webhook dead-letter rate | Prometheus metric history for `agentidp_webhook_dead_letters_total` |
| Immutable audit log dump | Direct PostgreSQL export of `audit_events` table with hash verification |

---

## References

- SOC 2 Trust Services Criteria: [AICPA TSC 2017](https://www.aicpa.org/resources/article/trust-services-criteria)
- OpenAPI spec: `docs/openapi/compliance.yaml`
- Encryption runbook: `docs/compliance/encryption-runbook.md`
- Audit log runbook: `docs/compliance/audit-log-runbook.md`
- Incident response: `docs/compliance/incident-response.md`
- Secrets rotation: `docs/compliance/secrets-rotation.md`
@@ -1,6 +1,6 @@
 # SentryAgent.ai AgentIdP — Developer Documentation

-The complete documentation for bedroom developers building with SentryAgent.ai AgentIdP.
+The complete documentation for developers building with SentryAgent.ai AgentIdP.

 ## What is this?

@@ -19,10 +19,15 @@ SentryAgent.ai AgentIdP is a free, open-source Identity Provider built specifica
 | Guide | What it covers |
 |-------|----------------|
-| [Register an Agent](guides/register-an-agent.md) | All fields, validation rules, common errors |
+| [Register an Agent](guides/register-an-agent.md) | All registration fields, org scoping, validation rules, common errors |
 | [Manage Credentials](guides/manage-credentials.md) | Generate, list, rotate, revoke credentials |
 | [Issue and Revoke Tokens](guides/issue-and-revoke-tokens.md) | OAuth 2.0 client credentials flow, introspect, revoke |
 | [Query Audit Logs](guides/query-audit-logs.md) | Filters, pagination, event structure, retention |
+| [Use the Analytics Dashboard](guides/use-analytics-dashboard.md) | Query token trends, activity heatmap, per-agent usage |
+| [Manage API Tiers](guides/manage-api-tiers.md) | Check current tier, understand limits, trigger upgrade |
+| [A2A Delegation](guides/a2a-delegation.md) | Create and verify agent-to-agent delegation chains |
+| [Configure Webhooks](guides/configure-webhooks.md) | Subscribe to events, delivery guarantees, inspect history |
+| [AGNTCY Compliance](guides/agntcy-compliance.md) | Export agent cards, generate compliance reports, verify audit chain |

 ## Base URL
File diff suppressed because it is too large
@@ -126,3 +126,215 @@ AgentIdP is free. These are the limits on the free tier:
| Audit log retention | 90 days | Events older than 90 days are automatically purged; queries return empty results |

The monthly token counter resets on the first day of each calendar month. The rate limit window resets every 60 seconds; the reset timestamp is in the `X-RateLimit-Reset` response header.

---

## Organizations and Multi-tenancy

An **organization** is the top-level grouping unit in AgentIdP. Every registered agent can be
scoped to an organization by including an `organization_id` in the agent registration request.
Organizations have a unique `slug` (URL-safe identifier), a display `name`, and a `planTier`
that controls per-org resource limits. All API operations that involve analytics, webhooks, tiers,
and delegation are tenant-scoped: they only see data belonging to their organization.

**Tenant isolation** is enforced at the service layer. Every query involving multi-tenant data
filters by `organization_id`. A token issued to an agent in org A cannot read data from org B.
The `organization_id` is embedded in the JWT at token issuance time and validated on every
request. This means you do not need to pass an org ID as a query parameter — it is derived
automatically from the authenticated token.

When you create an organization, you define its `slug`. Slugs are immutable — once set, they
cannot be changed. Choose a slug that matches your domain or product namespace, as it is used
in DID identifiers for agents in that organization. Membership is managed through the
`POST /api/v1/organizations/{orgId}/members` endpoint, which lets you add an existing agent
to an organization with a `member` or `admin` role.

| Field | Type | Description |
|-------|------|-------------|
| `organizationId` | UUID | System-assigned immutable identifier |
| `name` | string | Human-readable display name |
| `slug` | string | URL-safe unique identifier (immutable after creation) |
| `planTier` | enum | `free` \| `pro` \| `enterprise` |
| `maxAgents` | integer | Maximum active agents in this org |
| `maxTokensPerMonth` | integer | Maximum token issuances per month |
| `status` | enum | `active` \| `suspended` \| `deleted` |
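Because the org ID rides inside the JWT payload, a client can inspect it locally. The payload below is hand-built for illustration — real AgentIdP tokens carry more claims plus a signature that must be verified, and JWT segments are base64url-encoded without padding (plain `base64` is used here to keep the sketch round-trippable):

```shell
# Base64-encode a sample payload the way a JWT's middle segment is encoded
PAYLOAD=$(printf '{"sub":"a1b2c3d4","organization_id":"org-a"}' | base64 | tr -d '\n')

# Decode it back and pull out the tenant claim
printf '%s' "$PAYLOAD" | base64 -d | grep -o '"organization_id":"[^"]*"'
# → "organization_id":"org-a"
```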
---
## DID Identity

Every agent registered in AgentIdP automatically receives a **Decentralized Identifier (DID)**
using the `did:web` method. A DID is a globally unique, self-describing identifier that does not
rely on a central registry. The DID for an agent takes the form
`did:web:<host>:agents:<agentId>` — for example,
`did:web:localhost%3A3000:agents:a1b2c3d4-e5f6-7890-abcd-ef1234567890`. The `did:web` method
means the DID document is resolvable via HTTPS: a resolver fetches
`https://<host>/api/v1/agents/<agentId>/did`.

The **DID Document** is a JSON-LD object that describes the agent's cryptographic keys and
service endpoints. It contains: the agent's DID as its `id`, a `verificationMethod` array with
the agent's public key in JWK format, an `authentication` array referencing that key, and an
`agntcy` extension object carrying agent metadata (type, capabilities, version, owner,
deploymentEnv). This document is publicly accessible — no authentication required — so any
external system can verify the agent's identity without contacting AgentIdP directly.

The `did:web` scheme was chosen because it is widely supported by DID resolvers, requires no
blockchain, and leverages standard HTTPS infrastructure. When an external system receives a
token from your agent, it can resolve the agent's DID, retrieve the public key from the DID
Document, and independently verify the token's signature. This is the foundation of
cross-system agent identity verification.

```
DID Document structure for a registered agent
───────────────────────────────────────────────
{
  "@context": ["https://www.w3.org/ns/did/v1"],
  "id": "did:web:<host>:agents:<agentId>",
  "controller": "did:web:<host>:agents:<agentId>",
  "verificationMethod": [
    {
      "id": "<did>#key-1",
      "type": "JsonWebKey2020",
      "controller": "<did>",
      "publicKeyJwk": { "kty": "RSA", ... }
    }
  ],
  "authentication": ["<did>#key-1"],
  "agntcy": {
    "agentId": "<uuid>",
    "agentType": "screener",
    "capabilities": ["resume:read"],
    "deploymentEnv": "production",
    "owner": "talent-team",
    "version": "1.0.0"
  }
}
```
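The DID-to-URL transformation can be sketched in shell. This is a simplified version of the generic `did:web` mapping (only the `%3A` port separator is decoded, and note that this instance serves the document at `/api/v1/agents/<agentId>/did` rather than the generic `did:web` path):

```shell
# Derive a host + path from a did:web identifier
DID="did:web:localhost%3A3000:agents:a1b2c3d4"
REST=${DID#did:web:}
HOST=${REST%%:*}                 # first ":"-separated segment is the host
PATHPART=${REST#"$HOST"}         # remaining segments become path components
HOST=$(printf '%s' "$HOST" | sed 's/%3A/:/g')
echo "https://$HOST$(printf '%s' "$PATHPART" | tr ':' '/')"
# → https://localhost:3000/agents/a1b2c3d4
```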
---
## OIDC Provider

AgentIdP implements a subset of the **OpenID Connect (OIDC)** protocol, acting as an OIDC
Provider for the agents it manages. This means AgentIdP publishes a standard discovery
document at `GET /.well-known/openid-configuration`, which any OIDC-aware client can use to
discover supported grant types, the token endpoint, the JWKS URI, and other metadata. It also
exposes a JWKS endpoint at `GET /.well-known/jwks.json` for external systems to retrieve the
public keys used to verify tokens.

The **`/agent-info` endpoint** is the equivalent of OIDC's UserInfo endpoint — it returns
identity claims for the authenticated agent. External systems that receive a token issued by
AgentIdP can call this endpoint (with that token) to retrieve the agent's verified identity
attributes: its `agentId`, `email`, `agentType`, `capabilities`, and `organization_id`. This
is particularly useful when a downstream service needs to verify the identity of an agent
presenting a token, without duplicating identity data in its own store.

AgentIdP also supports **OIDC token exchange for GitHub Actions**. If you run your agent
deployment workflows in GitHub Actions, you can configure a trust policy
(`POST /api/v1/oidc/trust-policies`) that maps a GitHub repository and branch to an AgentIdP
agent. The workflow can then exchange its GitHub OIDC JWT for an AgentIdP access token via
`POST /api/v1/oidc/token` — no stored secrets required. This enables keyless, short-lived
token issuance in CI/CD pipelines.
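A client consuming the discovery document typically needs only a few fields. This offline sketch shows the shape — the JSON literal is a hand-written sample, not captured from a live server, and the endpoint paths in it are illustrative:

```shell
# Minimal sample discovery document; a real client would fetch
# GET /.well-known/openid-configuration instead of using a literal
DISCOVERY='{"issuer":"http://localhost:3000","jwks_uri":"http://localhost:3000/.well-known/jwks.json"}'

# Extract the JWKS URI the client will poll for token-verification keys
printf '%s' "$DISCOVERY" | grep -o '"jwks_uri":"[^"]*"'
# → "jwks_uri":"http://localhost:3000/.well-known/jwks.json"
```

In practice, use a JSON-aware tool such as `jq` rather than `grep` to read discovery fields.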
---
## A2A Delegation

**Agent-to-Agent (A2A) delegation** allows one agent to grant another agent a subset of its own
OAuth 2.0 scopes for a limited time. This is the building block for multi-agent pipelines where
an orchestrator agent needs to delegate work to a specialist sub-agent without sharing its own
full credentials. A delegation chain consists of: a delegator (the agent granting authority),
a delegatee (the agent receiving authority), a set of scopes (must be a strict subset of the
delegator's own scopes), and a TTL (60 seconds to 86,400 seconds).

The **grant flow** is straightforward: the delegator calls `POST /api/v1/oauth2/token/delegate`
with the delegatee's agent ID, the scopes to grant, and the TTL. AgentIdP returns a signed
delegation token. The delegatee presents this token when calling
`POST /api/v1/oauth2/token/verify-delegation` to prove it has been granted authority. AgentIdP
verifies the chain integrity and returns the delegation details, including whether it is still
valid. The delegator can revoke the chain at any time via
`DELETE /api/v1/oauth2/token/delegate/{chainId}`.

Delegation is useful for: workflow handoffs between specialist agents, granting a monitoring
agent read-only access to resources owned by a processing agent, and time-limited cross-agent
authorization without credential sharing. Because delegation tokens are signed and verified
server-side, a delegatee cannot extend the TTL, expand the scope, or pass the delegation to a
third agent. The chain is always exactly two hops: delegator → delegatee.

```
A2A Delegation Flow
───────────────────
1. Orchestrator (delegator) calls POST /api/v1/oauth2/token/delegate
   → body: { delegateeAgentId, scopes: ["agents:read"], ttlSeconds: 3600 }
   ← response: { delegationToken: "...", chainId: "...", expiresAt: "..." }

2. Orchestrator passes delegationToken to the sub-agent out-of-band

3. Sub-agent (delegatee) calls POST /api/v1/oauth2/token/verify-delegation
   → body: { delegationToken: "..." }
   ← response: { valid: true, scopes: ["agents:read"], expiresAt: "..." }

4. Sub-agent uses its own Bearer token + confirmed scope to act on behalf
   of the delegator

5. (Optional) Orchestrator calls DELETE /api/v1/oauth2/token/delegate/{chainId}
   to revoke early
```
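Before step 4, a careful sub-agent checks `expiresAt` locally rather than discovering expiry through a failed call. A sketch (GNU `date` syntax; the timestamp is illustrative):

```shell
EXPIRES_AT="2026-04-04T10:00:00.000Z"   # from the verify-delegation response

NOW=$(date -u +%s)
EXP=$(date -u -d "$EXPIRES_AT" +%s)     # GNU date; BSD/macOS date needs -j -f instead

if [ "$NOW" -lt "$EXP" ]; then
  echo "delegation still valid for $((EXP - NOW)) seconds"
else
  echo "delegation expired — request a new chain"
fi
```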
---
## API Tier Plans

AgentIdP has three subscription tiers: **Free**, **Pro**, and **Enterprise**. Every organization
is on one tier at a time. The tier determines the resource limits enforced at runtime: the
maximum number of active agents, maximum API calls per day, and maximum token issuances per day.
When a limit is reached, the relevant operation returns a `403 FREE_TIER_LIMIT_EXCEEDED` error
until the next calendar day resets the counter (for daily limits) or until you upgrade your tier.

You can check your current tier, configured limits, and live usage at any time by calling
`GET /api/v1/tiers/status`. The response shows your tier name, all three limit values, and the
live usage counters for the current day. If you need higher limits, call
`POST /api/v1/tiers/upgrade` with `{ "target_tier": "pro" }` or `"enterprise"`. This creates a
Stripe Checkout Session and returns a one-time `checkoutUrl`. After payment, the organization's
tier is updated automatically via a Stripe webhook.

Enterprise tier limits are effectively unlimited (enforced as `Infinity` in the tier
configuration). Enterprise customers should contact SentryAgent.ai to arrange billing and
configure custom limits if needed. The `maxAgents` and `maxTokensPerMonth` fields on an
organization record can be overridden at org creation or update to set tighter or looser limits
than the tier defaults, regardless of tier.

| Limit | Free | Pro | Enterprise |
|-------|------|-----|------------|
| Max agents | 10 | 100 | Unlimited |
| Max API calls / day | 1,000 | 50,000 | Unlimited |
| Max token issuances / day | 1,000 | 50,000 | Unlimited |
| Audit log retention | 90 days | 90 days | 90 days |
| Webhooks | Yes | Yes | Yes |
| Analytics | Yes | Yes | Yes |
| A2A Delegation | Yes | Yes | Yes |
---

## AGNTCY Compliance

**AGNTCY** is an open standard from the Linux Foundation that defines how AI agents should be
identified, described, and governed across platforms. AgentIdP implements AGNTCY compliance
in two ways: every agent automatically gets a DID and an agent card (a structured JSON object
that describes the agent in the AGNTCY format), and AgentIdP can generate a **compliance
report** that summarizes the verified state of all agents in a tenant. An agent card is the
AGNTCY equivalent of a business card — it carries the agent's DID, type, capabilities, owner,
version, and identity provider.

The **compliance report** (available at `GET /api/v1/compliance/report`) covers two dimensions:
agent-identity verification (are all active agents reachable via their DID?) and audit-trail
integrity (is the hash chain of audit events intact?). The report includes a boolean
`agntcyConformance` field that summarizes whether the tenant meets AGNTCY baseline requirements.
Reports are cached in Redis for 5 minutes; the `X-Cache: HIT` header signals a cached response.

For self-auditing and external audits, you can export all active agents as AGNTCY agent cards
in bulk via `GET /api/v1/compliance/agent-cards`. This returns an array of card objects that
external compliance tools and AGNTCY-compatible registries can ingest directly. The
`GET /api/v1/compliance/controls` endpoint (no authentication required) provides a live
status snapshot of all SOC 2 Trust Services Criteria controls that AgentIdP monitors internally.
These endpoints are gated by the `COMPLIANCE_ENABLED` environment variable; if disabled, they
return `404`.
@@ -4,9 +4,14 @@ Step-by-step walkthroughs for each AgentIdP workflow.

 | Guide | What it covers |
 |-------|----------------|
-| [Register an Agent](register-an-agent.md) | All registration fields, validation rules, common errors and fixes |
+| [Register an Agent](register-an-agent.md) | All registration fields, organization scoping, validation rules, common errors |
 | [Manage Credentials](manage-credentials.md) | Generate, list, rotate, and revoke credentials |
 | [Issue and Revoke Tokens](issue-and-revoke-tokens.md) | OAuth 2.0 Client Credentials flow, JWT structure, introspect, revoke |
 | [Query Audit Logs](query-audit-logs.md) | Filters, pagination, event structure, 90-day retention |
+| [Use the Analytics Dashboard](use-analytics-dashboard.md) | Query token trends, agent activity heatmap, and per-agent usage |
+| [Manage API Tiers](manage-api-tiers.md) | Check current tier, understand limits, trigger a Stripe upgrade |
+| [A2A Delegation](a2a-delegation.md) | Create and verify agent-to-agent delegation chains |
+| [Configure Webhooks](configure-webhooks.md) | Subscribe to events, understand delivery guarantees, inspect history |
+| [AGNTCY Compliance](agntcy-compliance.md) | Export agent cards, generate compliance reports, verify audit chain |

 All guides assume you have a running local server and a valid Bearer token. See the [Quick Start](../quick-start.md) if you haven't done that yet.
docs/developers/guides/a2a-delegation.md (new file, 167 lines)
@@ -0,0 +1,167 @@
# A2A Delegation

Agent-to-Agent (A2A) delegation lets one agent grant another agent a subset of its OAuth 2.0
scopes for a defined period. This is the foundation for building secure multi-agent pipelines
where an orchestrator agent coordinates specialist sub-agents.

---

## Prerequisites

- A running AgentIdP instance
- Two registered agents: the delegator (has a Bearer token) and the delegatee (knows its
  `agentId`)
- The delegator's scopes must be a superset of the scopes it wants to delegate

---

## How delegation works

```
Delegator agent                              Delegatee agent
      |                                            |
      |-- POST /oauth2/token/delegate ----------->|   (creates chain server-side)
      |<-- { delegationToken, chainId, scopes } --|
      |                                            |
      |-- passes delegationToken out-of-band ---->|
      |                                            |
      |                  POST /oauth2/token/verify-delegation
      |                  <-- { valid: true, scopes, expiresAt }
      |                                            |
      |     (optional) DELETE /oauth2/token/delegate/{chainId}
```

---

## Step 1 — Create a delegation chain

The delegator agent creates the chain by specifying the delegatee's `agentId`, the scopes to
delegate (must be a strict subset of the delegator's own scopes), and the TTL in seconds.

```bash
curl -s -X POST http://localhost:3000/api/v1/oauth2/token/delegate \
  -H "Authorization: Bearer $DELEGATOR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "delegateeAgentId": "'"$DELEGATEE_AGENT_ID"'",
    "scopes": ["agents:read"],
    "ttlSeconds": 3600
  }' | jq .
```

Response (`201 Created`):

```json
{
  "delegationToken": "sa_del_a1b2c3d4e5f6...",
  "chainId": "d4e5f6a7-b8c9-0123-def0-123456789abc",
  "delegatorAgentId": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "delegateeAgentId": "b2c3d4e5-f6a7-8901-bcde-f12345678901",
  "scopes": ["agents:read"],
  "expiresAt": "2026-04-04T10:00:00.000Z"
}
```

Save the `delegationToken` and `chainId`:

```bash
export DELEGATION_TOKEN="sa_del_a1b2c3d4e5f6..."
export CHAIN_ID="d4e5f6a7-b8c9-0123-def0-123456789abc"
```

**TTL constraints**: minimum 60 seconds, maximum 86400 seconds (24 hours). Choose the minimum
TTL that covers the delegatee's task.
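Validating the TTL client-side avoids a round trip that would end in a `400`. A minimal sketch:

```shell
TTL=3600   # requested ttlSeconds

if [ "$TTL" -ge 60 ] && [ "$TTL" -le 86400 ]; then
  echo "ttl ok"
else
  echo "ttl out of range (min 60, max 86400)" >&2
fi
```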
---
## Step 2 — Pass the delegation token to the delegatee

Pass `DELEGATION_TOKEN` to the delegatee agent out-of-band. This can be via a shared queue,
a direct API call to the sub-agent, or any other channel. The token is a signed, opaque
string — do not parse it; treat it as a credential.

---

## Step 3 — Verify the delegation token

The delegatee (or any agent checking the delegation) calls the verify endpoint. This confirms
the chain is valid and not expired or revoked.

```bash
curl -s -X POST http://localhost:3000/api/v1/oauth2/token/verify-delegation \
  -H "Authorization: Bearer $DELEGATEE_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{ "delegationToken": "'"$DELEGATION_TOKEN"'" }' | jq .
```

Response (`200 OK` — valid delegation):

```json
{
  "valid": true,
  "chainId": "d4e5f6a7-b8c9-0123-def0-123456789abc",
  "delegatorAgentId": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "delegateeAgentId": "b2c3d4e5-f6a7-8901-bcde-f12345678901",
  "scopes": ["agents:read"],
  "issuedAt": "2026-04-04T09:00:00.000Z",
  "expiresAt": "2026-04-04T10:00:00.000Z",
  "revokedAt": null
}
```

Response (`200 OK` — expired delegation):

```json
{
  "valid": false,
  "chainId": "d4e5f6a7-b8c9-0123-def0-123456789abc",
  "delegatorAgentId": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "delegateeAgentId": "b2c3d4e5-f6a7-8901-bcde-f12345678901",
  "scopes": ["agents:read"],
  "issuedAt": "2026-04-03T09:00:00.000Z",
  "expiresAt": "2026-04-03T10:00:00.000Z",
  "revokedAt": null
}
```

> The verify endpoint always returns `200 OK`. Check the `valid` field — it is never an error
> response for an expired or revoked token.
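Since the endpoint never signals expiry via the HTTP status, gate on `valid` explicitly. This offline sketch substitutes a hand-written sample response for the live call:

```shell
# Sample verify-delegation response (expired chain); normally the curl output
RESPONSE='{"valid":false,"chainId":"d4e5f6a7","revokedAt":null}'

# Proceed only when valid is literally true
if printf '%s' "$RESPONSE" | grep -q '"valid":true'; then
  echo "delegation accepted"
else
  echo "delegation rejected — expired, revoked, or malformed"
fi
```

The string match is a sketch; in a real client prefer a JSON-aware check such as `jq -e '.valid == true'`.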
---
## Step 4 — (Optional) Revoke the delegation early

If the delegatee has completed its task and you want to revoke the delegation before it expires,
the delegator calls:

```bash
curl -s -X DELETE "http://localhost:3000/api/v1/oauth2/token/delegate/$CHAIN_ID" \
  -H "Authorization: Bearer $DELEGATOR_TOKEN" \
  -o /dev/null -w "%{http_code}\n"
```

Expected response: `204` (no body).

After revocation, verify requests for this chain return `{ "valid": false, "revokedAt": "<timestamp>" }`.

---

## Scope rules

- Delegated scopes must be a strict subset of the delegator's own token scopes
- You cannot delegate scopes you do not have
- You cannot delegate to yourself (`delegateeAgentId` must differ from `delegatorAgentId`)
- Delegation is not transitive — a delegatee cannot re-delegate to a third agent
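The subset rule can be checked client-side before calling the delegate endpoint. The scope lists below are hypothetical:

```shell
DELEGATOR_SCOPES="agents:read agents:write tokens:issue"   # from token introspection
REQUESTED="agents:read tokens:issue"                       # scopes to delegate

# Every requested scope must appear in the delegator's own scope list
for s in $REQUESTED; do
  case " $DELEGATOR_SCOPES " in
    *" $s "*) ;;                                           # held — ok
    *) echo "cannot delegate unheld scope: $s" >&2; exit 1 ;;
  esac
done
echo "requested scopes are a subset — safe to delegate"
```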
---
## Common errors

### `400 VALIDATION_ERROR` — scope not a subset

The delegator attempted to delegate a scope it does not hold. Check `GET /api/v1/token/introspect`
to confirm which scopes your token carries.

### `400 VALIDATION_ERROR` — ttlSeconds out of range

Min: 60, max: 86400. Values outside this range return a validation error.
docs/developers/guides/agntcy-compliance.md (new file, 191 lines)
@@ -0,0 +1,191 @@
# AGNTCY Compliance

This guide explains how to use AgentIdP's AGNTCY compliance features: exporting agent cards,
generating compliance reports, verifying audit chain integrity, and checking SOC 2 control status.

---

## Prerequisites

- A running AgentIdP instance
- `COMPLIANCE_ENABLED` environment variable not set to `false` (enabled by default)
- A valid Bearer token (for authenticated endpoints)
- At least one registered agent

---

## What is AGNTCY?

AGNTCY is an open standard from the Linux Foundation for AI agent identity and governance.
AgentIdP implements AGNTCY by giving every agent a DID and an agent card. The compliance
endpoints let you export and report on that data in structured, auditable formats.

---

## Export agent cards

`GET /api/v1/compliance/agent-cards`

Exports all active agents in your organization as AGNTCY-standard agent card JSON objects.
Suitable for ingestion by external compliance tools or AGNTCY-compatible registries.

```bash
curl -s "http://localhost:3000/api/v1/compliance/agent-cards" \
  -H "Authorization: Bearer $TOKEN" | jq .
```

Response (`200 OK`): Array of agent card objects.

```json
[
  {
    "did": "did:web:localhost%3A3000:agents:a1b2c3d4-e5f6-7890-abcd-ef1234567890",
    "name": "screener-001@talent.ai",
    "agentType": "screener",
    "capabilities": ["resume:read", "email:send"],
    "owner": "talent-team",
    "version": "1.0.0",
    "deploymentEnv": "production",
    "identityProvider": "https://sentryagent.ai",
    "issuedAt": "2026-04-04T09:00:00.000Z"
  }
]
```

**Use cases**:
- Share with external auditors to demonstrate your agent fleet
- Import into AGNTCY-compatible discovery registries
- Baseline snapshot before and after deployments

Save the output to a file:

```bash
curl -s "http://localhost:3000/api/v1/compliance/agent-cards" \
  -H "Authorization: Bearer $TOKEN" > agent-cards-$(date +%Y%m%d).json
```
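Once saved, the export can be summarized offline with `jq`. A small sketch over a hypothetical two-environment fleet, using the field names from the response above:

```shell
# Count exported agent cards per deployment environment.
CARDS='[
  {"name":"screener-001@talent.ai","deploymentEnv":"production"},
  {"name":"sourcer-001@talent.ai","deploymentEnv":"staging"},
  {"name":"screener-002@talent.ai","deploymentEnv":"production"}
]'

echo "$CARDS" | jq -r '
  group_by(.deploymentEnv)[]
  | "\(.[0].deploymentEnv): \(length)"'
# → production: 2
# → staging: 1
```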

---

## Generate a compliance report

`GET /api/v1/compliance/report`

Generates an AGNTCY compliance report for your tenant. The report is cached for 5 minutes
(check the `X-Cache` header to see if the response is fresh or cached).

```bash
curl -s "http://localhost:3000/api/v1/compliance/report" \
  -H "Authorization: Bearer $TOKEN" | jq .
```

Response (`200 OK`):

```json
{
  "tenantId": "org-0a1b2c3d-e4f5-6789-abcd-ef0123456789",
  "generatedAt": "2026-04-04T09:00:00.000Z",
  "agntcyConformance": true,
  "agentCount": 12,
  "verifiedAgentCount": 12,
  "auditChainIntegrity": true,
  "from_cache": false
}
```

**Interpreting the fields**:

| Field | Description |
|-------|-------------|
| `agntcyConformance` | `true` if all agents have valid DIDs and the audit chain is intact |
| `agentCount` | Total active agents in the organization |
| `verifiedAgentCount` | Agents with a resolvable DID document |
| `auditChainIntegrity` | `true` if the audit event hash chain has not been tampered with |
| `from_cache` | `true` if served from Redis cache (up to 5 minutes old) |

**Force a fresh report**: Wait 5 minutes for the cache to expire. A `from_cache: false`
response is always freshly generated.

---

## Verify audit chain integrity

`GET /api/v1/audit/verify`

Verifies that the cryptographic hash chain of audit events is intact. Returns `verified: true`
if no tampering is detected. Rate limited to 30 requests/minute (computationally intensive).

Requires: Bearer token with `audit:read` scope.

```bash
curl -s "http://localhost:3000/api/v1/audit/verify" \
  -H "Authorization: Bearer $TOKEN" | jq .
```

Response (`200 OK`):

```json
{
  "verified": true,
  "checkedCount": 1247,
  "fromDate": null,
  "toDate": null
}
```

Verify a specific date window:

```bash
curl -s "http://localhost:3000/api/v1/audit/verify?fromDate=2026-03-01T00:00:00.000Z&toDate=2026-03-31T23:59:59.999Z" \
  -H "Authorization: Bearer $TOKEN" | jq .
```

**Interpreting the result**:
- `verified: true` — no tampering detected in the checked window
- `verified: false` — the hash chain has a broken link; contact SentryAgent.ai support
- `checkedCount` — number of audit events verified
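Conceptually, a tamper-evident audit chain stores, with each event, a hash over the previous event's hash plus the event payload; verification recomputes the chain and compares. A toy illustration of the idea using `openssl`; this is not AgentIdP's actual chain format, which is internal to the server:

```shell
# chain_hash PREV_HASH PAYLOAD → sha256 hex of their concatenation
chain_hash() {
  printf '%s%s' "$1" "$2" | openssl dgst -sha256 | awk '{print $NF}'
}

# Build a two-link chain, then re-derive it from the same inputs.
h0=$(chain_hash "genesis" "agent.created:a1b2")
h1=$(chain_hash "$h0" "token.issued:a1b2")

h0_check=$(chain_hash "genesis" "agent.created:a1b2")
h1_check=$(chain_hash "$h0_check" "token.issued:a1b2")

# Tampering with any earlier payload would change every later hash.
[ "$h1" = "$h1_check" ] && echo "verified"
# → verified
```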

---

## Check SOC 2 control status (public)

`GET /api/v1/compliance/controls`

Returns the live status of all SOC 2 Trust Services Criteria controls. No authentication
required. Responses are cached by CDN/proxies for 60 seconds (`Cache-Control: public, max-age=60`).

```bash
curl -s "http://localhost:3000/api/v1/compliance/controls" | jq .
```

Response (`200 OK`):

```json
{
  "controls": [
    {
      "id": "CC6.1",
      "name": "Logical Access Controls",
      "status": "pass",
      "lastChecked": "2026-04-04T08:00:00.000Z"
    },
    {
      "id": "CC7.2",
      "name": "System Monitoring",
      "status": "pass",
      "lastChecked": "2026-04-04T08:00:00.000Z"
    }
  ]
}
```

Each control has a `status` of `pass`, `fail`, or `unknown`. Status is updated by background
jobs that run periodically. This endpoint is suitable for embedding in external status pages
or compliance dashboards without sharing API credentials.

---

## When compliance endpoints are disabled

If `COMPLIANCE_ENABLED=false` is set in the server environment, the AGNTCY compliance endpoints
(`/compliance/report` and `/compliance/agent-cards`) return `404 COMPLIANCE_DISABLED`. The SOC 2
endpoints (`/compliance/controls` and `/audit/verify`) are never gated and always active.

**docs/developers/guides/configure-webhooks.md** (new file, 219 lines)

# Configure Webhooks

Webhooks let AgentIdP push real-time events to your application when agents, credentials, or
tokens change state. This guide covers creating subscriptions, the available event types,
delivery guarantees, and how to inspect delivery history.

---

## Prerequisites

- A running AgentIdP instance
- A valid Bearer token with `organization_id` in its claims
- A publicly reachable HTTPS endpoint to receive events (for local development, use a tool
  like [ngrok](https://ngrok.com))

---

## Available event types

| Event type | Triggered when |
|-----------|----------------|
| `agent.created` | A new agent is registered |
| `agent.updated` | An agent's metadata is updated |
| `agent.suspended` | An agent's status changes to `suspended` |
| `agent.reactivated` | An agent's status changes from `suspended` to `active` |
| `agent.decommissioned` | An agent is decommissioned |
| `credential.generated` | New credentials are created for an agent |
| `credential.rotated` | A credential's secret is rotated |
| `credential.revoked` | A credential is revoked |
| `token.issued` | An access token is issued |
| `token.revoked` | An access token is revoked |

---

## Create a subscription

`POST /api/v1/webhooks`

```bash
curl -s -X POST http://localhost:3000/api/v1/webhooks \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "prod-agent-events",
    "url": "https://my-app.example.com/hooks/sentryagent",
    "events": ["agent.created", "agent.decommissioned", "token.issued"]
  }' | jq .
```

Response (`201 Created`):

```json
{
  "id": "wh-1a2b3c4d-e5f6-7890-abcd-ef1234567890",
  "organization_id": "org-0a1b2c3d-e4f5-6789-abcd-ef0123456789",
  "name": "prod-agent-events",
  "url": "https://my-app.example.com/hooks/sentryagent",
  "events": ["agent.created", "agent.decommissioned", "token.issued"],
  "active": true,
  "signingSecret": "whsec_a1b2c3d4e5f6789...",
  "failure_count": 0,
  "created_at": "2026-04-04T09:00:00.000Z",
  "updated_at": "2026-04-04T09:00:00.000Z"
}
```

> **Save the `signingSecret` now.** It is shown once. Use it to verify the HMAC-SHA256
> signature on incoming webhook requests. See "Verifying delivery signatures" below.

```bash
export WEBHOOK_ID="wh-1a2b3c4d-e5f6-7890-abcd-ef1234567890"
export SIGNING_SECRET="whsec_a1b2c3d4e5f6789..."
```

---

## Webhook payload format

Every delivery sends a POST to your URL with `Content-Type: application/json` and this body:

```json
{
  "id": "evt-uuid-here",
  "event": "agent.created",
  "timestamp": "2026-04-04T09:00:00.000Z",
  "organization_id": "org-0a1b2c3d-e4f5-6789-abcd-ef0123456789",
  "data": {
    "agentId": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
    "email": "screener-001@talent.ai",
    "agentType": "screener"
  }
}
```

The `data` object contains event-specific fields. For `agent.*` events it includes agent
metadata. For `credential.*` events it includes `credentialId` and `agentId`. For `token.*`
events it includes `agentId` and `scope`.

---

## Verifying delivery signatures

AgentIdP signs every delivery with HMAC-SHA256 using your `signingSecret`. The signature is
in the `X-SentryAgent-Signature` header as `sha256=<hex-digest>`.

Verify it in Node.js:

```javascript
const crypto = require('crypto');

function verifySignature(rawBody, signingSecret, signatureHeader) {
  const expected = 'sha256=' + crypto
    .createHmac('sha256', signingSecret)
    .update(rawBody)
    .digest('hex');
  return crypto.timingSafeEqual(
    Buffer.from(expected),
    Buffer.from(signatureHeader)
  );
}
```

Always verify the signature before processing the event. Reject requests with invalid signatures
with `401 Unauthorized`.
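The same check can be done from the command line, for example when debugging a captured delivery. A sketch using `openssl`; the secret and body here are placeholders:

```shell
# Recompute the expected X-SentryAgent-Signature value for a raw request body.
SIGNING_SECRET="whsec_example_secret"
RAW_BODY='{"id":"evt-1","event":"agent.created"}'

EXPECTED="sha256=$(printf '%s' "$RAW_BODY" \
  | openssl dgst -sha256 -hmac "$SIGNING_SECRET" \
  | awk '{print $NF}')"

# In practice RECEIVED comes from the request header.
RECEIVED="$EXPECTED"

if [ "$EXPECTED" = "$RECEIVED" ]; then
  echo "signature valid"
fi
# → signature valid
```

The Node.js example above uses `crypto.timingSafeEqual`; a plain string comparison in shell is fine for offline debugging but not for a production endpoint.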

---

## Delivery guarantees and retry policy

- AgentIdP delivers each event **at least once** — your endpoint may receive duplicates
- Use the `id` field to deduplicate events
- Delivery is attempted immediately; on failure, retries use exponential backoff
- After repeated failures, the delivery moves to `dead_letter` status
- Subscriptions with high `failure_count` may be automatically disabled

Delivery statuses: `pending` → `delivered` (success) or `failed` (attempt failed) → `dead_letter`
(all retries exhausted)
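Because delivery is at least once, consumers should be idempotent. A minimal deduplication sketch keyed on the payload `id`; a real consumer would typically use a database unique constraint or a Redis set with a TTL instead of a temp file:

```shell
# Skip any event whose id has already been processed.
SEEN_FILE=$(mktemp)

handle_event() {
  if grep -qx "$1" "$SEEN_FILE"; then
    echo "duplicate: $1"     # already processed, ignore the redelivery
  else
    echo "$1" >> "$SEEN_FILE"
    echo "processed: $1"
  fi
}

handle_event "evt-001"
handle_event "evt-001"
# → processed: evt-001
# → duplicate: evt-001
```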

---

## List subscriptions

```bash
curl -s "http://localhost:3000/api/v1/webhooks" \
  -H "Authorization: Bearer $TOKEN" | jq .
```

---

## Pause or resume a subscription

To pause (disable) a subscription without deleting it:

```bash
curl -s -X PATCH "http://localhost:3000/api/v1/webhooks/$WEBHOOK_ID" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{ "active": false }' | jq .
```

To resume:

```bash
curl -s -X PATCH "http://localhost:3000/api/v1/webhooks/$WEBHOOK_ID" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{ "active": true }' | jq .
```

---

## Inspect delivery history

`GET /api/v1/webhooks/{id}/deliveries`

```bash
curl -s "http://localhost:3000/api/v1/webhooks/$WEBHOOK_ID/deliveries?limit=20&offset=0" \
  -H "Authorization: Bearer $TOKEN" | jq .
```

Response:

```json
{
  "deliveries": [
    {
      "id": "del-uuid",
      "subscription_id": "wh-uuid",
      "event_type": "agent.created",
      "payload": { ... },
      "status": "delivered",
      "http_status_code": 200,
      "attempt_count": 1,
      "next_retry_at": null,
      "delivered_at": "2026-04-04T09:00:01.000Z",
      "created_at": "2026-04-04T09:00:00.000Z",
      "updated_at": "2026-04-04T09:00:01.000Z"
    }
  ],
  "total": 47,
  "limit": 20,
  "offset": 0
}
```

Use `offset` to paginate through delivery history. Increase `limit` to retrieve more records
per page (the server default is 20).
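The `offset`, `limit`, and `total` fields support a simple pagination loop. An offline sketch of the arithmetic, with the `curl` call left as a comment:

```shell
# Walk all pages of a 47-record delivery history, 20 records per page.
TOTAL=47
LIMIT=20
OFFSET=0
PAGES=0

while [ "$OFFSET" -lt "$TOTAL" ]; do
  # Real loop body:
  #   curl -s ".../webhooks/$WEBHOOK_ID/deliveries?limit=$LIMIT&offset=$OFFSET" ...
  PAGES=$((PAGES + 1))
  OFFSET=$((OFFSET + LIMIT))
done

echo "fetched $PAGES pages"
# → fetched 3 pages
```

In a live loop, read `total` from the first response instead of hard-coding it, and stop once `offset` reaches it.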

---

## Delete a subscription

```bash
curl -s -X DELETE "http://localhost:3000/api/v1/webhooks/$WEBHOOK_ID" \
  -H "Authorization: Bearer $TOKEN" \
  -o /dev/null -w "%{http_code}\n"
```

Expected response: `204`. This permanently deletes the subscription and all its delivery records.

````diff
@@ -47,10 +47,13 @@ The token expires in `3600` seconds (1 hour). Request a new one before it expire
 
 | Scope | What it allows |
 |-------|----------------|
-| `agents:read` | Read agent records |
-| `agents:write` | Create, update, decommission agents |
+| `agents:read` | Read agent identity records |
+| `agents:write` | Create, update, and decommission agents |
 | `tokens:read` | Introspect tokens |
-| `audit:read` | Query audit logs |
+| `audit:read` | Query audit logs and verify audit chain integrity |
+| `webhooks:read` | List webhook subscriptions and delivery history |
+| `webhooks:write` | Create, update, and delete webhook subscriptions |
+| `admin:orgs` | Manage organizations and federation partners |
 
 Request only the scopes your agent needs.
````

**docs/developers/guides/manage-api-tiers.md** (new file, 140 lines)

# Manage API Tiers

This guide explains how to check your organization's current plan tier, understand the enforced
limits, and initiate an upgrade via Stripe.

---

## Prerequisites

- A running AgentIdP instance
- A valid Bearer token with `organization_id` in its claims

---

## Check current tier status

`GET /api/v1/tiers/status`

Returns your organization's tier, the configured limits, and live usage counters for today.

```bash
curl -s "http://localhost:3000/api/v1/tiers/status" \
  -H "Authorization: Bearer $TOKEN" | jq .
```

Response:

```json
{
  "tier": "free",
  "limits": {
    "maxAgents": 10,
    "maxCallsPerDay": 1000,
    "maxTokensPerDay": 1000
  },
  "usage": {
    "agentCount": 3,
    "callsToday": 142,
    "tokensToday": 87
  }
}
```

**Understanding the fields**:

| Field | Description |
|-------|-------------|
| `tier` | Current plan: `free`, `pro`, or `enterprise` |
| `limits.maxAgents` | Maximum active (non-decommissioned) agents allowed |
| `limits.maxCallsPerDay` | Maximum total API calls per calendar day (UTC) |
| `limits.maxTokensPerDay` | Maximum token issuances per calendar day (UTC) |
| `usage.agentCount` | Current number of active agents |
| `usage.callsToday` | API calls made so far today |
| `usage.tokensToday` | Tokens issued so far today |

**When limits are reached**: The relevant endpoint returns `403 FREE_TIER_LIMIT_EXCEEDED`.
Daily counters reset at midnight UTC. The agent count limit is a current count, not a daily
counter — decommissioning an agent immediately frees capacity.
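A saved status response can be turned into remaining-headroom numbers with `jq`. A sketch using the example response above:

```shell
# Compute remaining daily headroom from a tiers/status response.
STATUS='{"tier":"free",
         "limits":{"maxAgents":10,"maxCallsPerDay":1000,"maxTokensPerDay":1000},
         "usage":{"agentCount":3,"callsToday":142,"tokensToday":87}}'

echo "$STATUS" | jq -r '
  "calls remaining today:  \(.limits.maxCallsPerDay - .usage.callsToday)",
  "tokens remaining today: \(.limits.maxTokensPerDay - .usage.tokensToday)",
  "agent slots free:       \(.limits.maxAgents - .usage.agentCount)"'
# → calls remaining today:  858
# → tokens remaining today: 913
# → agent slots free:       7
```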

---

## Tier comparison

| Limit | Free | Pro | Enterprise |
|-------|------|-----|------------|
| Max agents | 10 | 100 | Unlimited |
| Max API calls / day | 1,000 | 50,000 | Unlimited |
| Max token issuances / day | 1,000 | 50,000 | Unlimited |

---

## Upgrade your tier

`POST /api/v1/tiers/upgrade`

Creates a Stripe Checkout Session and returns a one-time URL. Complete the payment in the
browser to upgrade your organization's tier.

```bash
curl -s -X POST http://localhost:3000/api/v1/tiers/upgrade \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{ "target_tier": "pro" }' | jq .
```

Response:

```json
{
  "checkoutUrl": "https://checkout.stripe.com/pay/cs_live_a1b2c3d4e5f6..."
}
```

Open `checkoutUrl` in a browser to complete payment. After successful payment, Stripe sends a
webhook to AgentIdP which automatically upgrades your organization's tier.

**Constraints**:
- `target_tier` must be `pro` or `enterprise`
- `target_tier` must be higher than your current tier (you cannot downgrade via this endpoint)
- Attempting to upgrade to the current or a lower tier returns `400 TIER_UPGRADE_NOT_REQUIRED`

```bash
# Upgrade from free to pro
curl -s -X POST http://localhost:3000/api/v1/tiers/upgrade \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{ "target_tier": "pro" }' | jq .

# Upgrade from pro to enterprise
curl -s -X POST http://localhost:3000/api/v1/tiers/upgrade \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{ "target_tier": "enterprise" }' | jq .
```

---

## Common errors

### `400 VALIDATION_ERROR` — target_tier missing or invalid

```json
{
  "code": "VALIDATION_ERROR",
  "message": "target_tier must be one of: free, pro, enterprise.",
  "details": { "received": "premium" }
}
```

**Fix**: Use `"pro"` or `"enterprise"`.

### `400 TIER_UPGRADE_NOT_REQUIRED` — not an upgrade

**Fix**: You are already on this tier or a higher tier. Check `GET /api/v1/tiers/status` first.

### `401 UNAUTHORIZED` — token lacks organization_id

The tier endpoints require a token with an `organization_id` claim. Use a token issued by an
agent that was registered with `organization_id`. Tokens issued via the bootstrap method
(without an org) do not carry `organization_id` and will fail.

````diff
@@ -2,6 +2,11 @@
 
 A credential is a `client_id` + `client_secret` pair that your agent uses to get access tokens. This guide covers all four credential operations.
 
+> **Multi-tenant note**: Credentials issued for an agent that belongs to an organization will
+> produce tokens carrying an `organization_id` claim. This claim is required by analytics,
+> webhooks, tier enforcement, and A2A delegation. Ensure your agent is registered with
+> `organization_id` before issuing credentials for production use.
+
 All credential endpoints are under `/api/v1/agents/{agentId}/credentials` and require a Bearer token with `agents:write` scope.
 
 ---
````

````diff
@@ -25,6 +25,11 @@ Every action below is automatically recorded. You cannot create, modify, or dele
 | `credential.revoked` | Successful `DELETE /agents/{agentId}/credentials/{credentialId}` |
 | `auth.failed` | Failed authentication attempt on `POST /token` |
 
+> **Audit chain verification**: In addition to querying events, you can verify the cryptographic
+> integrity of the entire audit hash chain via `GET /api/v1/audit/verify`. This endpoint requires
+> `audit:read` scope and is rate-limited to 30 requests/min. See the
+> [API Reference](../api-reference.md#get-auditverify---verify-audit-chain-integrity) for details.
+
 ---
 
 ## Query the audit log
````

````diff
@@ -20,6 +20,7 @@ Requires: `Authorization: Bearer <token>` with `agents:write` scope.
 | `capabilities` | string[] | Yes | One or more capability strings in `resource:action` format. Minimum 1. |
 | `owner` | string | Yes | Team or organisation that owns this agent. 1–128 characters. |
 | `deploymentEnv` | string (enum) | Yes | Target deployment environment. See values below. |
+| `organization_id` | string (UUID) | No | UUID of the organization to scope this agent to. Recommended on all multi-tenant instances. |
 
 ### `agentType` values
 
@@ -70,7 +71,8 @@ curl -s -X POST http://localhost:3000/api/v1/agents \
     "version": "1.0.0",
     "capabilities": ["resume:read", "email:send", "candidate:score"],
     "owner": "talent-acquisition-team",
-    "deploymentEnv": "production"
+    "deploymentEnv": "production",
+    "organization_id": "'$ORG_ID'"
   }' | jq .
 ```
 
@@ -93,6 +95,11 @@ Successful response (`201 Created`):
 
 The `agentId` is assigned by the system — it is immutable and never changes.
 
+> **Organization scoping**: If you include `organization_id` in the request, the agent is
+> associated with that organization. Analytics, webhook events, and tier enforcement are all
+> scoped by organization. To create an organization first, see the
+> [Quick Start](../quick-start.md) guide.
+
 ---
 
 ## Immutable fields
````

**docs/developers/guides/use-analytics-dashboard.md** (new file, 135 lines)

# Use the Analytics Dashboard

This guide explains how to query the three analytics endpoints to understand your organization's
token usage and agent activity patterns.

All analytics endpoints require Bearer token authentication and are scoped to the organization
embedded in your token.

---

## Prerequisites

- A running AgentIdP instance
- A valid Bearer token with `organization_id` in its claims
- At least one agent registered and some token issuance activity

---

## Token issuance trend

`GET /api/v1/analytics/tokens`

Returns daily token issuance counts for the past N days (default 30, max 90). Use this to
track usage growth, identify traffic spikes, and plan capacity.

```bash
curl -s "http://localhost:3000/api/v1/analytics/tokens?days=30" \
  -H "Authorization: Bearer $TOKEN" | jq .
```

Response:

```json
{
  "tenantId": "org-0a1b2c3d-e4f5-6789-abcd-ef0123456789",
  "days": 30,
  "data": [
    { "date": "2026-03-06", "count": 142 },
    { "date": "2026-03-07", "count": 198 },
    { "date": "2026-03-08", "count": 0 }
  ]
}
```

**Interpreting the data**: Each item in `data` is one calendar day (UTC) with the number of
tokens issued on that day. Days with zero issuance are included with `count: 0`. The array
is ordered chronologically, oldest first.

**Using it**: Compare day-over-day counts to identify growth or anomalies. A sudden spike in
`count` may indicate an agent retry loop or a credential leak. Zero-count days during expected
operation may indicate a deployment issue.

**Query parameter**: `days` — positive integer, max 90. Returns `400 VALIDATION_ERROR` if
exceeded.

```bash
# Last 7 days
curl -s "http://localhost:3000/api/v1/analytics/tokens?days=7" \
  -H "Authorization: Bearer $TOKEN" | jq .

# Last 90 days (maximum)
curl -s "http://localhost:3000/api/v1/analytics/tokens?days=90" \
  -H "Authorization: Bearer $TOKEN" | jq .
```
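Spike detection can be scripted against a saved trend response. A sketch that flags any day above twice the window average; the threshold is arbitrary and should be tuned to your traffic:

```shell
TREND='{"data":[{"date":"2026-03-06","count":142},
                {"date":"2026-03-07","count":198},
                {"date":"2026-03-08","count":900}]}'

echo "$TREND" | jq -r '
  (.data | map(.count) | add / length) as $avg
  | .data[]
  | select(.count > 2 * $avg)
  | "spike on \(.date): \(.count) (window avg \($avg | floor))"'
# → spike on 2026-03-08: 900 (window avg 413)
```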

---

## Agent activity heatmap

`GET /api/v1/analytics/agents/activity`

Returns request counts grouped by day-of-week (0 = Sunday, 6 = Saturday) and hour (0–23, UTC).
Use this to identify peak usage windows for capacity planning and rate limit tuning.

```bash
curl -s "http://localhost:3000/api/v1/analytics/agents/activity" \
  -H "Authorization: Bearer $TOKEN" | jq .
```

Response:

```json
{
  "tenantId": "org-0a1b2c3d-e4f5-6789-abcd-ef0123456789",
  "data": [
    { "dow": 1, "hour": 9, "count": 54 },
    { "dow": 1, "hour": 10, "count": 87 },
    { "dow": 3, "hour": 14, "count": 201 }
  ]
}
```

**Interpreting the data**: `dow` is 0 (Sunday) through 6 (Saturday). `hour` is 0–23 UTC.
Only non-zero cells are returned — missing combinations had zero activity. Sort by `count`
descending to find your peak windows.

**Using it**: If most activity is on weekday mornings UTC, ensure your rate limit headroom
covers that window. If weekend activity is unexpectedly high, investigate which agents are
active.
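Peak windows can be extracted from a saved response with `jq`. A sketch using the example data above:

```shell
ACTIVITY='{"data":[{"dow":1,"hour":9,"count":54},
                   {"dow":1,"hour":10,"count":87},
                   {"dow":3,"hour":14,"count":201}]}'

echo "$ACTIVITY" | jq -r '
  .data
  | max_by(.count)
  | "peak: dow=\(.dow) hour=\(.hour) count=\(.count)"'
# → peak: dow=3 hour=14 count=201
```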

---

## Per-agent usage summary

`GET /api/v1/analytics/agents`

Returns token issuance counts per agent for the current calendar month (UTC). Use this to
identify your most active agents and check if any single agent is consuming a
disproportionate share of your monthly token budget.

```bash
curl -s "http://localhost:3000/api/v1/analytics/agents" \
  -H "Authorization: Bearer $TOKEN" | jq .
```

Response:

```json
{
  "tenantId": "org-0a1b2c3d-e4f5-6789-abcd-ef0123456789",
  "month": "2026-04",
  "data": [
    { "agentId": "a1b2c3d4-e5f6-7890-abcd-ef1234567890", "tokenCount": 312 },
    { "agentId": "b2c3d4e5-f6a7-8901-bcde-f12345678901", "tokenCount": 87 }
  ]
}
```

**Interpreting the data**: Each item shows an agent UUID and the number of tokens it has
issued this month. The response covers the full current calendar month from day 1 to now.
It resets on the first day of each month.

**Using it**: Cross-reference `agentId` values against `GET /api/v1/agents` to identify agents
by name. If one agent accounts for >80% of usage, investigate whether it is caching tokens
correctly or requesting tokens unnecessarily.
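Each agent's share of the monthly total can be computed from a saved response. A sketch using the example data above:

```shell
USAGE='{"data":[{"agentId":"a1b2c3d4-e5f6-7890-abcd-ef1234567890","tokenCount":312},
                {"agentId":"b2c3d4e5-f6a7-8901-bcde-f12345678901","tokenCount":87}]}'

echo "$USAGE" | jq -r '
  (.data | map(.tokenCount) | add) as $total
  | .data[]
  | "\(.agentId): \((.tokenCount * 100 / $total) | floor)% of \($total)"'
# → a1b2c3d4-e5f6-7890-abcd-ef1234567890: 78% of 399
# → b2c3d4e5-f6a7-8901-bcde-f12345678901: 21% of 399
```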
@@ -1,12 +1,12 @@
 # Quick Start — Register Your First Agent
 
-This guide gets you from zero to a working agent identity with a valid OAuth 2.0 access token. It takes under 5 minutes.
+This guide gets you from zero to a working agent identity inside an organization, with a valid OAuth 2.0 access token. It takes under 5 minutes.
 
 ## Prerequisites
 
 You need three tools installed:
 
-- **Docker** (includes `docker-compose`) — to run PostgreSQL and Redis
+- **Docker** (with Compose plugin, v2.20+) — to run PostgreSQL and Redis
 - **Node.js 18+** (includes `npm`) — to run the server
 - **curl** — to call the API
 
@@ -32,16 +32,19 @@ openssl genrsa -out private.pem 2048
 openssl rsa -in private.pem -pubout -out public.pem
 ```
 
-Create your `.env` file:
+Copy the environment template and fill in your JWT keys:
 
 ```bash
-cat > .env << 'EOF'
-DATABASE_URL=postgresql://sentryagent:sentryagent@localhost:5432/sentryagent_idp
-REDIS_URL=redis://localhost:6379
-PORT=3000
-JWT_PRIVATE_KEY="$(cat private.pem)"
-JWT_PUBLIC_KEY="$(cat public.pem)"
-EOF
+cp .env.example .env
+```
+
+Write your JWT keys into `.env`:
+
+```bash
+PRIVATE_KEY_LINE=$(awk 'NF {sub(/\r/, ""); printf "%s\\n",$0;}' private.pem)
+PUBLIC_KEY_LINE=$(awk 'NF {sub(/\r/, ""); printf "%s\\n",$0;}' public.pem)
+sed -i "s|JWT_PRIVATE_KEY=.*|JWT_PRIVATE_KEY=\"${PRIVATE_KEY_LINE}\"|" .env
+sed -i "s|JWT_PUBLIC_KEY=.*|JWT_PUBLIC_KEY=\"${PUBLIC_KEY_LINE}\"|" .env
 ```
 
 > **Note**: The `.env` file stores your private key. Do not commit it to version control.
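The `awk` flattening step above can be sanity-checked offline. A minimal sketch with stand-in key material (`demo.pem` is illustrative; a real run uses the `private.pem` generated earlier):

```shell
# Flatten a multi-line PEM into a single line with literal \n separators,
# as the PRIVATE_KEY_LINE step does (stand-in key material, not a real key).
printf '%s\n' '-----BEGIN KEY-----' 'abc123' '-----END KEY-----' > demo.pem
awk 'NF {sub(/\r/, ""); printf "%s\\n",$0;}' demo.pem
# → -----BEGIN KEY-----\nabc123\n-----END KEY-----\n
```

The whole key ends up on one line, so the `sed` substitution into `.env` stays a single-line replacement.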
@@ -53,7 +56,7 @@ EOF
 Start PostgreSQL and Redis using Docker Compose (infrastructure services only):
 
 ```bash
-docker-compose up -d postgres redis
+docker compose up -d postgres redis
 ```
 
 Expected output:
@@ -135,7 +138,45 @@ export BOOTSTRAP_TOKEN="<paste token here>"
 
 ---
 
-## Step 5 — Register an agent
+## Step 5 — Create an organization
+
+Agents are scoped to organizations. Create one now so your agent has an `organization_id` to belong to:
+
+```bash
+curl -s -X POST http://localhost:3000/api/v1/organizations \
+  -H "Authorization: Bearer $BOOTSTRAP_TOKEN" \
+  -H "Content-Type: application/json" \
+  -d '{
+    "name": "My AI Project",
+    "slug": "my-ai-project"
+  }' | jq .
+```
+
+Example response (`201 Created`):
+
+```json
+{
+  "organizationId": "org-0a1b2c3d-e4f5-6789-abcd-ef0123456789",
+  "name": "My AI Project",
+  "slug": "my-ai-project",
+  "planTier": "free",
+  "maxAgents": 10,
+  "maxTokensPerMonth": 10000,
+  "status": "active",
+  "createdAt": "2026-04-04T09:00:00.000Z",
+  "updatedAt": "2026-04-04T09:00:00.000Z"
+}
+```
+
+Save the `organizationId`:
+
+```bash
+export ORG_ID="org-0a1b2c3d-e4f5-6789-abcd-ef0123456789"
+```
+
+---
+
+## Step 6 — Register an agent
 
 ```bash
 curl -s -X POST http://localhost:3000/api/v1/agents \
@@ -147,7 +188,8 @@ curl -s -X POST http://localhost:3000/api/v1/agents \
     "version": "1.0.0",
     "capabilities": ["data:read"],
     "owner": "my-team",
-    "deploymentEnv": "development"
+    "deploymentEnv": "development",
+    "organization_id": "'$ORG_ID'"
   }' | jq .
 ```
 
@@ -176,7 +218,7 @@ export AGENT_ID="a1b2c3d4-e5f6-7890-abcd-ef1234567890"
 
 ---
 
-## Step 6 — Generate a credential
+## Step 7 — Generate a credential
 
 ```bash
 curl -s -X POST "http://localhost:3000/api/v1/agents/$AGENT_ID/credentials" \
@@ -208,7 +250,7 @@ export CLIENT_SECRET="sk_live_7f3a2b1c9d8e4f0a6b5c3d2e1f0a9b8c"
 
 ---
 
-## Step 7 — Issue an access token
+## Step 8 — Issue an access token
 
 Use the OAuth 2.0 Client Credentials flow. Note that the `/token` endpoint uses a **form-encoded** body, not JSON:
 
@@ -242,6 +284,14 @@ Your agent now has a valid JWT. Use it in the `Authorization: Bearer <token>` he
 
 ## What's next
 
-- [Core Concepts](concepts.md) — understand AgentIdP, AGNTCY, and the agent identity model
-- [Guides](guides/README.md) — step-by-step walkthroughs for credentials, tokens, and audit logs
+- [Core Concepts](concepts.md) — understand AgentIdP, AGNTCY, orgs, DID, delegation, and tiers
+- [Guides](guides/README.md) — step-by-step walkthroughs for all workflows
 - [API Reference](api-reference.md) — every endpoint documented with curl examples
+
+**New guides for Phase 6 features:**
+
+- [Use the Analytics Dashboard](guides/use-analytics-dashboard.md) — query token trends and activity
+- [Manage API Tiers](guides/manage-api-tiers.md) — check limits and upgrade your plan
+- [A2A Delegation](guides/a2a-delegation.md) — delegate authority between agents
+- [Configure Webhooks](guides/configure-webhooks.md) — subscribe to real-time events
+- [AGNTCY Compliance](guides/agntcy-compliance.md) — export agent cards and generate compliance reports
@@ -14,14 +14,15 @@ SentryAgent.ai AgentIdP is a Node.js REST API backed by PostgreSQL and Redis. It
 
 ## Documentation
 
-| Document | What it covers |
-|----------|----------------|
-| [Architecture](architecture.md) | Components, ports, data flow, Redis key patterns |
-| [Environment Variables](environment-variables.md) | Every env var — required, optional, format, examples |
-| [Database](database.md) | Schema (4 tables), migrations, how to apply and verify |
-| [Local Development](local-development.md) | docker-compose setup, startup, health checks |
-| [Security](security.md) | JWT key generation and rotation, CORS, secret storage |
-| [Operations](operations.md) | Startup order, graceful shutdown, log interpretation, troubleshooting |
+| Document | Audience | Contents |
+|----------|----------|----------|
+| [Architecture](architecture.md) | All engineers | Components, ports, data flow, Redis key patterns |
+| [Environment Variables](environment-variables.md) | All engineers | Every env var — required, optional, format, examples |
+| [Database](database.md) | Backend, DevOps | Schema (26 tables/migrations), how to apply and verify |
+| [Local Development](local-development.md) | All engineers | Docker Compose setup (`compose.yaml`), startup, health checks |
+| [Security](security.md) | All engineers | JWT key generation and rotation, CORS, secret storage |
+| [Operations](operations.md) | DevOps | Startup order, graceful shutdown, log interpretation, troubleshooting |
+| [field-trial.md](field-trial.md) | DevOps engineers, QA | In-house Docker Compose field trial execution playbook |
 
 ## Quick Reference — Ports
 
@@ -3,26 +3,49 @@
 ## Component Overview
 
 ```
-┌─────────────────────────────────────┐
-│      AgentIdP Application           │
-│      Node.js / Express              │
-│      Port 3000                      │
-│                                     │
-│  Auth MW → RateLimit MW → Routes    │
-│        ↓         ↓                  │
-│  Controllers → Services → Repos     │
-└──────────────┬──────────────┬───────┘
-               │              │
-┌──────────────▼──┐  ┌────────▼───────┐
-│  PostgreSQL 14  │  │    Redis 7     │
-│  Port 5432      │  │  Port 6379     │
-│                 │  │                │
-│  agents         │  │  Token revoke  │
-│  credentials    │  │  Rate limits   │
-│  audit_events   │  │  Monthly counts│
-│  token_revocati-│  │                │
-│  ons            │  │                │
-└─────────────────┘  └────────────────┘
+┌───────────────────────────────────────────┐
+│  Next.js Portal (port 3001)               │
+│  portal/ — Next.js 14                     │
+│  /login /agents /credentials /audit       │
+│  /analytics /settings/tier /compliance    │
+│  /webhooks /marketplace                   │
+└────────────────┬──────────────────────────┘
+                 │ HTTP (localhost:3000)
+┌────────────────▼──────────────────────────┐
+│      AgentIdP Application                 │
+│      Node.js / Express (port 3000)        │
+│                                           │
+│  TLS MW → Helmet → CORS → Morgan          │
+│  Metrics MW → OrgContext MW               │
+│  UsageMetering MW → TierEnforcement MW    │
+│  Auth MW → OPA MW → Routes                │
+│                      ↓                    │
+│  Controllers → Services → Repos           │
+└──────────┬───────────────┬────────────────┘
+           │               │
+┌──────────▼────────┐  ┌───▼──────────────┐
+│  PostgreSQL 14    │  │  Redis 7         │
+│  Port 5432        │  │  Port 6379       │
+│                   │  │                  │
+│  26 migrations    │  │  Rate limits     │
+│  (001–026)        │  │  Token revoke    │
+│  organizations    │  │  Monthly counts  │
+│  agents + DID keys│  │  Tier counters   │
+│  credentials      │  │  Compliance cache│
+│  audit_events     │  │                  │
+│  token_revocations│  └──────────────────┘
+│  oidc_keys        │
+│  federation_partn-│  ┌──────────────────┐
+│  ers              │  │  HashiCorp Vault │
+│  webhook_subscri- │  │  (optional)      │
+│  ptions+deliveries│  │  KV v2 — creds   │
+│  agent_marketplace│  └──────────────────┘
+│  github_oidc_trust│
+│  billing          │  ┌──────────────────┐
+│  delegation_chains│  │  Stripe          │
+│  analytics_events │  │  (optional)      │
+│  tenant_tiers     │  │  Billing/upgrades│
+└───────────────────┘  └──────────────────┘
 ```
 
 ## Components
@@ -36,8 +59,12 @@ A stateless Express HTTP server. Every request is handled independently — no i
 | Layer | Responsibility |
 |-------|---------------|
 | Routes | Wire HTTP methods and paths to controllers |
+| TLS middleware | Redirect HTTP → HTTPS when `ENFORCE_TLS=true` |
 | Auth middleware | Validate Bearer JWT (RS256 + Redis revocation check) |
-| Rate limit middleware | Redis sliding-window counter per `client_id` |
+| OrgContext middleware | Resolve `organization_id` from JWT and attach to `req` |
+| UsageMetering middleware | Fire-and-forget analytics event recording |
+| TierEnforcement middleware | Enforce daily API call and token limits via Redis (when `TIER_ENFORCEMENT=true`) |
+| OPA middleware | Scope-based authorization via embedded Wasm or JSON policy |
 | Controllers | Parse and validate request, call service, return response |
 | Services | Business logic — no direct DB access |
 | Repositories | All SQL queries — no business logic |
@@ -53,11 +80,14 @@ The application connects via a connection pool (`pg.Pool`) initialised from `DAT
 
 Ephemeral store for six use cases:
 
-| Key pattern | Purpose | TTL |
-|------------|---------|-----|
-| `revoked:<jti>` | Token revocation list — checked on every authenticated request | Until token's `exp` |
-| `rate:<client_id>:<window>` | Request count per client per 60-second window | 60 seconds |
-| `monthly:<client_id>:<year>:<month>` | Token issuance count for free tier limit enforcement | End of month |
+| Key pattern | Example | Purpose | TTL |
+|------------|---------|---------|-----|
+| `revoked:<jti>` | `revoked:f1e2d3c4-...` | Revoked token JTI | Remaining token lifetime |
+| `rate:<client_id>:<window>` | `rate:a1b2c3...:29086156` | Request count per window | `RATE_LIMIT_WINDOW_MS` |
+| `monthly:<client_id>:<year>:<month>` | `monthly:a1b2c3...:2026:3` | Monthly token issuance count | End of month |
+| `rate:tier:calls:<tenantId>` | `rate:tier:calls:org-uuid` | Daily API call counter for tier enforcement | Until midnight UTC |
+| `rate:tier:tokens:<tenantId>` | `rate:tier:tokens:org-uuid` | Daily token issuance counter for tier enforcement | Until midnight UTC |
+| `compliance:report:<tenantId>` | `compliance:report:org-uuid` | Cached compliance report JSON | 5 minutes |
 
 **Redis is supplementary, not the source of truth.** Token revocations are also written to the `token_revocations` PostgreSQL table for durability across Redis restarts. On Redis restart, the revocation list is cold — previously revoked tokens will pass auth until the PostgreSQL-backed warm-up is implemented (Phase 2).
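When debugging with `redis-cli`, it helps to reconstruct the key a given client should currently be using. A sketch — the window arithmetic (epoch milliseconds divided by `RATE_LIMIT_WINDOW_MS`) and the unpadded month are assumptions inferred from the example suffixes, not confirmed from the code:

```shell
# Reconstruct rate-limit and monthly-counter key names for one client.
# CLIENT_ID is illustrative; the window derivation is an assumption.
CLIENT_ID="a1b2c3"
RATE_LIMIT_WINDOW_MS=60000
now_ms=$(($(date -u +%s) * 1000))
echo "rate:${CLIENT_ID}:$((now_ms / RATE_LIMIT_WINDOW_MS))"
month=$(date -u +%m | sed 's/^0//')   # example shows no zero padding
echo "monthly:${CLIENT_ID}:$(date -u +%Y):${month}"
```

Feed either key to `redis-cli GET <key>` and `redis-cli TTL <key>` to see the live counter and its expiry.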
@@ -107,21 +137,89 @@ PostgreSQL / Redis
 
 ## Service Map
 
-| Route prefix | Service | Repository |
-|-------------|---------|-----------|
-| `/api/v1/agents` | `AgentService` | `AgentRepository` |
-| `/api/v1/agents/:id/credentials` | `CredentialService` | `CredentialRepository` |
-| `/api/v1/token` | `OAuth2Service` | `TokenRepository`, `CredentialRepository`, `AgentRepository` |
-| `/api/v1/audit` | `AuditService` | `AuditRepository` |
+| Route prefix | Controller | Service(s) | Repository/ies |
+|-------------|-----------|-----------|----------------|
+| `/api/v1/agents` | `AgentController` | `AgentService` | `AgentRepository` |
+| `/api/v1/credentials` | `CredentialController` | `CredentialService` | `CredentialRepository` |
+| `/api/v1/token` | `TokenController` | `OAuth2Service` | `TokenRepository`, `CredentialRepository`, `AgentRepository` |
+| `/api/v1/audit` | `AuditController` | `AuditService` | `AuditRepository` |
+| `/api/v1/organizations` | `OrgController` | `OrgService` | `OrgRepository` |
+| `/api/v1/compliance/*` | `ComplianceController` | `ComplianceService` | `AuditRepository` |
+| `/api/v1/analytics/*` | `AnalyticsController` | `AnalyticsService` | direct pool queries |
+| `/api/v1/tiers/*` | `TierController` | `TierService` | pool queries, Stripe SDK |
+| `/api/v1/webhooks` | `WebhookController` | `WebhookService` | `WebhookRepository` |
+| `/api/v1/federation` | `FederationController` | `FederationService` | direct pool queries |
+| `/api/v1/marketplace` | `MarketplaceController` | `MarketplaceService` | direct pool queries |
+| `/api/v1/billing` | `BillingController` | `BillingService` | direct pool queries |
+| `/.well-known/did.json`, `/api/v1/did/*` | `DIDController` | `DIDService` | `AgentRepository` |
+| `/.well-known/openid-configuration`, `/api/v1/oidc/*` | `OIDCController` | `OIDCKeyService`, `IDTokenService` | direct pool queries |
+| `/api/v1/oidc/trust-policies` | `OIDCTrustPolicyController` | `OIDCTrustPolicyService` | direct pool queries |
+| `/api/v1/delegation` | `DelegationController` | `DelegationService` | direct pool queries |
+| `/api/v1/scaffold` | `ScaffoldController` | `ScaffoldService` | — |
+| `/health` | inline | — | pool, redis |
+| `/metrics` | inline | — | prom-client |
+
+## New Services (Phases 3–6)
+
+| Service | Source file | Responsibility |
+|---------|------------|----------------|
+| `AnalyticsService` | `src/services/AnalyticsService.ts` | Fire-and-forget `recordEvent`, time-series `getTokenTrend`, heatmap `getAgentActivity`, per-agent `getAgentUsageSummary` |
+| `TierService` | `src/services/TierService.ts` | `getStatus` (reads `tenant_tiers`), `initiateUpgrade` (creates Stripe Checkout Session), `applyUpgrade` (handles Stripe webhook), `enforceAgentLimit` |
+| `ComplianceService` | `src/services/ComplianceService.ts` | `generateReport` (Redis-cached 5 min), `exportAgentCards` (AGNTCY format) |
+| `DelegationService` | `src/services/DelegationService.ts` | A2A delegation chain creation and verification |
+| `DIDService` | `src/services/DIDService.ts` | `did:web` identifier generation and DID document management |
+| `OIDCKeyService` | `src/services/OIDCKeyService.ts` | OIDC key rotation, JWKS endpoint |
+| `IDTokenService` | `src/services/IDTokenService.ts` | OIDC ID token issuance |
+| `FederationService` | `src/services/FederationService.ts` | Cross-tenant agent identity federation |
+| `WebhookService` | `src/services/WebhookService.ts` | Event subscriptions, delivery with retry, dead-letter queue |
+| `VaultService` | `src/services/VaultService.ts` | HashiCorp Vault KV v2 read/write for credential storage |
+| `BillingService` | `src/services/BillingService.ts` | Stripe customer and subscription management |
+| `MarketplaceService` | `src/services/MarketplaceService.ts` | Agent listing and discovery |
+| `OIDCTrustPolicyService` | `src/services/OIDCTrustPolicyService.ts` | GitHub OIDC trust policy management |
+| `EventPublisher` | `src/services/EventPublisher.ts` | Routes domain events to webhook delivery and Kafka (if configured) |
 
 ## Ports
 
 | Service | Internal port | Exposed port (local dev) |
 |---------|--------------|--------------------------|
 | AgentIdP app | 3000 | 3000 |
+| Next.js portal | 3001 | 3001 |
 | PostgreSQL | 5432 | 5432 |
 | Redis | 6379 | 6379 |
 
+## API Routes (Phase 6 complete)
+
+Base path: `/api/v1`
+
+| Route | Method(s) | Auth | Feature flag |
+|-------|----------|------|-------------|
+| `/api/v1/agents` | GET, POST, PATCH, DELETE | Bearer JWT | always on |
+| `/api/v1/credentials` | GET, POST, DELETE | Bearer JWT | always on |
+| `/api/v1/token` | POST | none (client credentials) | always on |
+| `/api/v1/audit` | GET | Bearer JWT | always on |
+| `/api/v1/audit/verify` | GET | Bearer JWT | always on |
+| `/api/v1/organizations` | GET, POST | Bearer JWT | always on |
+| `/api/v1/compliance/controls` | GET | none | always on |
+| `/api/v1/compliance/report` | GET | Bearer JWT | `COMPLIANCE_ENABLED=true` |
+| `/api/v1/compliance/agent-cards` | GET | Bearer JWT | `COMPLIANCE_ENABLED=true` |
+| `/api/v1/analytics/token-trend` | GET | Bearer JWT | `ANALYTICS_ENABLED=true` |
+| `/api/v1/analytics/agent-activity` | GET | Bearer JWT | `ANALYTICS_ENABLED=true` |
+| `/api/v1/analytics/usage-summary` | GET | Bearer JWT | `ANALYTICS_ENABLED=true` |
+| `/api/v1/tiers/status` | GET | Bearer JWT | always on |
+| `/api/v1/tiers/upgrade` | POST | Bearer JWT | always on |
+| `/api/v1/webhooks` | GET, POST, DELETE | Bearer JWT | always on |
+| `/api/v1/federation` | GET, POST | Bearer JWT | always on |
+| `/api/v1/delegation` | GET, POST | Bearer JWT | always on |
+| `/api/v1/marketplace` | GET | none | always on |
+| `/api/v1/billing` | GET, POST | Bearer JWT | always on |
+| `/api/v1/did/*` | GET | none | always on |
+| `/api/v1/oidc/*` | GET, POST | mixed | always on |
+| `/.well-known/openid-configuration` | GET | none | always on |
+| `/.well-known/jwks.json` | GET | none | always on |
+| `/.well-known/did.json` | GET | none | always on |
+| `/health` | GET | none | always on |
+| `/metrics` | GET | none | always on |
+
 ## Graceful Shutdown
 
 The server listens for `SIGTERM` and `SIGINT`. On receipt:
@@ -1,18 +1,28 @@
 # Database
 
-AgentIdP uses PostgreSQL 14+ as its primary data store. The schema consists of four tables managed by a custom migration runner.
+AgentIdP uses PostgreSQL 14+ as its primary data store. The schema is built by 26 migrations managed by a custom migration runner.
 
 ---
 
 ## Schema Overview
 
 ```
-agents
-└── credentials (FK: client_id → agents.agent_id, CASCADE DELETE)
-
-audit_events (no FK — append-only, agent_id is informational)
+organizations
+├── agents (FK: organization_id → organizations.org_id)
+│   ├── credentials (FK: client_id → agents.agent_id, CASCADE DELETE)
+│   └── agent_did_keys (FK: agent_id → agents.agent_id)
+└── audit_events (FK: organization_id — informational, no cascade)
 
 token_revocations (no FK — independent revocation store)
+oidc_keys (standalone — OIDC signing key rotation)
+federation_partners (standalone — cross-tenant identity)
+webhook_subscriptions → webhook_deliveries (FK: subscription_id)
+agent_marketplace (standalone — agent discovery catalog)
+github_oidc_trust_policies (standalone — CI/CD trust)
+billing (FK: org_id → organizations.org_id — one row per org)
+delegation_chains (standalone — A2A delegation records)
+analytics_events (FK: organization_id — append-only)
+tenant_tiers (FK: org_id → organizations.org_id — one row per org)
 ```
 
 ---
@@ -134,6 +144,234 @@ Durable record of revoked JWT tokens. Supplements Redis for durability across Re
 
 ---
 
+### `organizations`
+
+Created by migration `006_create_organizations_table.sql`.
+
+| Column | Type | Nullable | Description |
+|--------|------|----------|-------------|
+| `org_id` | `UUID` | No | Primary key |
+| `name` | `VARCHAR(255)` | No | Organisation display name |
+| `slug` | `VARCHAR(64)` | No | URL-safe unique identifier |
+| `created_at` | `TIMESTAMPTZ` | No | Default: `NOW()` |
+
+---
+
+### `agent_did_keys`
+
+Created by migration `012_create_agent_did_keys_table.sql`.
+
+Stores the DID document key material for each agent. One agent may have multiple keys for
+rotation purposes.
+
+| Column | Type | Nullable | Description |
+|--------|------|----------|-------------|
+| `id` | `UUID` | No | Primary key |
+| `agent_id` | `UUID` | No | FK → `agents.agent_id` |
+| `key_id` | `VARCHAR(255)` | No | DID key fragment identifier |
+| `public_key_jwk` | `JSONB` | No | Public key in JWK format |
+| `created_at` | `TIMESTAMPTZ` | No | Default: `NOW()` |
+
+---
+
+### DID columns on `agents`
+
+Added by migration `013_add_did_columns_to_agents.sql`:
+
+- `did` — `VARCHAR(512)`, nullable — the `did:web` identifier for this agent
+- `did_document` — `JSONB`, nullable — full DID document
+
+---
+
+### `oidc_keys`
+
+Created by migration `014_create_oidc_keys_table.sql`.
+
+Stores RSA key pairs used for OIDC ID token signing. Supports key rotation — the active key is
+determined by the most recently created row.
+
+| Column | Type | Nullable | Description |
+|--------|------|----------|-------------|
+| `id` | `UUID` | No | Primary key |
+| `kid` | `VARCHAR(128)` | No | Key ID — referenced in JWKS |
+| `private_key_pem` | `TEXT` | No | Encrypted RSA private key (pgcrypto) |
+| `public_key_pem` | `TEXT` | No | RSA public key |
+| `algorithm` | `VARCHAR(16)` | No | Always `RS256` |
+| `created_at` | `TIMESTAMPTZ` | No | Default: `NOW()` |
+
+---
+
+### `federation_partners`
+
+Created by migration `015_create_federation_partners_table.sql`.
+
+| Column | Type | Nullable | Description |
+|--------|------|----------|-------------|
+| `id` | `UUID` | No | Primary key |
+| `org_id` | `UUID` | No | Owning organisation |
+| `partner_name` | `VARCHAR(255)` | No | Display name |
+| `partner_jwks_url` | `TEXT` | No | URL to partner's JWKS endpoint |
+| `created_at` | `TIMESTAMPTZ` | No | Default: `NOW()` |
+
+---
+
+### `webhook_subscriptions`
+
+Created by migration `016_create_webhook_subscriptions_table.sql`.
+
+| Column | Type | Nullable | Description |
+|--------|------|----------|-------------|
+| `id` | `UUID` | No | Primary key |
+| `org_id` | `UUID` | No | Owning organisation |
+| `event_type` | `VARCHAR(128)` | No | Event type filter (e.g. `agent.created`) |
+| `target_url` | `TEXT` | No | HTTPS delivery endpoint |
+| `secret` | `VARCHAR(255)` | Yes | HMAC signing secret for delivery verification |
+| `active` | `BOOLEAN` | No | Default: `true` |
+| `created_at` | `TIMESTAMPTZ` | No | Default: `NOW()` |
+
+---
+
+### `webhook_deliveries`
+
+Created by migration `017_create_webhook_deliveries_table.sql`.
+
+Records each delivery attempt for a webhook event, including dead-letter queue entries.
+
+| Column | Type | Nullable | Description |
+|--------|------|----------|-------------|
+| `id` | `UUID` | No | Primary key |
+| `subscription_id` | `UUID` | No | FK → `webhook_subscriptions.id` |
+| `event_type` | `VARCHAR(128)` | No | Event type delivered |
+| `payload` | `JSONB` | No | Full event payload |
+| `status` | `VARCHAR(32)` | No | `pending`, `delivered`, `failed`, `dead_letter` |
+| `response_status` | `INTEGER` | Yes | HTTP status from delivery endpoint |
+| `attempt_count` | `INTEGER` | No | Default: `0` |
+| `last_attempted_at` | `TIMESTAMPTZ` | Yes | |
+| `created_at` | `TIMESTAMPTZ` | No | Default: `NOW()` |
+
+**Dead-letter queue:** After 3 failed delivery attempts, the row status is set to `dead_letter`
+and the `agentidp_webhook_dead_letters_total` Prometheus counter is incremented. The Prometheus
+metric label is `event_type`.
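Receivers can verify a delivery with the subscription `secret`. A sketch of a typical HMAC-SHA256 check — the header name and the exact signing scheme (raw body, hex digest) are assumptions, not taken from `WebhookService`:

```shell
# Recompute the delivery signature over the raw payload and compare it to
# the signature header (header name and hex-digest scheme are assumed).
payload='{"event_type":"agent.created","agentId":"a1b2c3d4"}'
secret='whsec_demo'
expected=$(printf '%s' "$payload" | openssl dgst -sha256 -hmac "$secret" | awk '{print $NF}')
echo "$expected"   # 64 hex chars; reject the delivery if the header value differs
```

Comparing against the recomputed digest (rather than trusting the payload alone) is what makes the `secret` column useful to receivers.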
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### pgcrypto extension
|
||||||
|
|
||||||
|
Enabled by migration `018_enable_pgcrypto.sql`. Used for encrypting sensitive columns in
|
||||||
|
`oidc_keys` and credential data.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### `agent_marketplace`
|
||||||
|
|
||||||
|
Created by migration `021_add_agent_marketplace.sql`.
|
||||||
|
|
||||||
|
| Column | Type | Nullable | Description |
|
||||||
|
|--------|------|----------|-------------|
|
||||||
|
| `id` | `UUID` | No | Primary key |
|
||||||
|
| `agent_id` | `UUID` | No | FK → `agents.agent_id` |
|
||||||
|
| `listing_name` | `VARCHAR(255)` | No | Display name in marketplace |
|
||||||
|
| `description` | `TEXT` | Yes | Markdown description |
|
||||||
|
| `tags` | `TEXT[]` | No | Searchable tags. Default: `{}` |
|
||||||
|
| `published` | `BOOLEAN` | No | Default: `false` |
|
||||||
|
| `created_at` | `TIMESTAMPTZ` | No | Default: `NOW()` |
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### `github_oidc_trust_policies`
|
||||||
|
|
||||||
|
Created by migration `022_add_github_oidc_trust_policies.sql`.
|
||||||
|
|
||||||
|
Maps GitHub Actions OIDC claims to agent identities for CI/CD token exchange.
|
||||||
|
|
||||||
|
| Column | Type | Nullable | Description |
|
||||||
|
|--------|------|----------|-------------|
|
||||||
|
| `id` | `UUID` | No | Primary key |
|
||||||
|
| `org_id` | `UUID` | No | Owning organisation |
|
||||||
|
| `repository` | `VARCHAR(512)` | No | GitHub repository slug (`owner/repo`) |
|
||||||
|
| `branch` | `VARCHAR(255)` | Yes | Branch filter (null = any branch) |
|
||||||
|
| `agent_id` | `UUID` | No | Agent to issue a token for on match |
|
||||||
|
| `created_at` | `TIMESTAMPTZ` | No | Default: `NOW()` |
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### `billing`
|
||||||
|
|
||||||
|
Created by migration `023_add_billing.sql`.
|
||||||
|
|
||||||
|
One row per organisation. Tracks the org's Stripe customer and subscription state.
|
||||||
|
|
||||||
|
| Column | Type | Nullable | Description |
|
||||||
|
|--------|------|----------|-------------|
|
||||||
|
| `id` | `UUID` | No | Primary key |
|
||||||
|
| `org_id` | `UUID` | No | FK → `organizations.org_id` (UNIQUE) |
|
||||||
|
| `stripe_customer_id` | `VARCHAR(255)` | Yes | Stripe Customer ID |
|
||||||
|
| `stripe_subscription_id` | `VARCHAR(255)` | Yes | Stripe Subscription ID |
|
||||||
|
| `status` | `VARCHAR(64)` | No | Stripe subscription status or `none` |
|
||||||
|
| `created_at` | `TIMESTAMPTZ` | No | Default: `NOW()` |
|
||||||
|
|
||||||
|
---
### `delegation_chains`

Created by migration `024_add_delegation_chains.sql`.

Records A2A delegation grants created via the delegation API.

| Column | Type | Nullable | Description |
|--------|------|----------|-------------|
| `id` | `UUID` | No | Primary key |
| `delegator_agent_id` | `UUID` | No | Agent granting the delegation |
| `delegate_agent_id` | `UUID` | No | Agent receiving the delegation |
| `scopes` | `TEXT[]` | No | Scopes being delegated |
| `expires_at` | `TIMESTAMPTZ` | Yes | Optional expiry |
| `created_at` | `TIMESTAMPTZ` | No | Default: `NOW()` |

---
### `analytics_events`

Created by migration `025_add_analytics_events.sql`.

Append-only event store for analytics. Supports token trend, agent activity, and usage summary queries.

| Column | Type | Nullable | Description |
|--------|------|----------|-------------|
| `id` | `UUID` | No | Primary key |
| `organization_id` | `UUID` | No | Owning organisation |
| `date` | `DATE` | No | Calendar date of the event (UTC) |
| `metric_type` | `VARCHAR(64)` | No | e.g. `token_issued`, `agent_called` |
| `count` | `INTEGER` | No | Event count for this date+type |

**Index:** `(organization_id, date DESC)` for fast time-series queries.

---
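As a sketch of the kind of aggregation a "token trend" query performs over these rows, here is an illustrative pure-TypeScript reduction (the row shape and function name are assumptions, not the service's API):

```typescript
// Illustrative row shape mirroring the analytics_events table.
interface AnalyticsEvent {
  date: string;       // "YYYY-MM-DD"
  metricType: string; // e.g. "token_issued"
  count: number;
}

// Sum daily counts for one metric, producing a per-date time series.
function dailyTotals(events: AnalyticsEvent[], metricType: string): Map<string, number> {
  const totals = new Map<string, number>();
  for (const e of events) {
    if (e.metricType !== metricType) continue;
    totals.set(e.date, (totals.get(e.date) ?? 0) + e.count);
  }
  return totals;
}
```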
### `tenant_tiers`

Created by migration `026_add_tenant_tiers.sql`.

One row per organisation. Stores the current tier and enforces tier limits via the `tierEnforcement` middleware.

| Column | Type | Nullable | Description |
|--------|------|----------|-------------|
| `id` | `UUID` | No | Primary key |
| `org_id` | `UUID` | No | FK → `organizations.org_id` (UNIQUE) |
| `tier` | `ENUM('free','pro','enterprise')` | No | Current tier. Default: `free` |
| `updated_at` | `TIMESTAMPTZ` | No | Last tier change. Default: `NOW()` |

**Tier limits** (from `src/config/tiers.ts`):

| Tier | Max Agents | Max API Calls/Day | Max Tokens/Day |
|------|-----------|-------------------|----------------|
| free | 10 | 1,000 | 1,000 |
| pro | 100 | 50,000 | 50,000 |
| enterprise | unlimited | unlimited | unlimited |

---
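A plausible shape for the limits object in `src/config/tiers.ts`, modelling "unlimited" as `Infinity` (the `maxCallsPerDay` field name appears elsewhere in these docs; the other field names are assumptions made for this sketch):

```typescript
// Illustrative sketch of src/config/tiers.ts; field names other than
// maxCallsPerDay are assumptions.
type Tier = "free" | "pro" | "enterprise";

interface TierLimits {
  maxAgents: number;
  maxCallsPerDay: number;
  maxTokensPerDay: number;
}

// Infinity is a natural encoding of "unlimited", which the tierEnforcement
// middleware can treat as a bypass.
const TIER_LIMITS: Record<Tier, TierLimits> = {
  free: { maxAgents: 10, maxCallsPerDay: 1_000, maxTokensPerDay: 1_000 },
  pro: { maxAgents: 100, maxCallsPerDay: 50_000, maxTokensPerDay: 50_000 },
  enterprise: { maxAgents: Infinity, maxCallsPerDay: Infinity, maxTokensPerDay: Infinity },
};
```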
## Migration Runner

Migrations are managed by `scripts/migrate.ts`. It reads `.sql` files from `src/db/migrations/` in alphabetical order, tracks applied migrations in a `schema_migrations` table, and executes only unapplied migrations — each in its own transaction.
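The selection logic described above (alphabetical order, skip what is already recorded) reduces to a small pure function. This sketch omits the actual SQL execution and `pg` transaction handling, and the function name is illustrative:

```typescript
// Given the migration files on disk and the names already recorded in
// schema_migrations, return what still needs to run, in order.
function pendingMigrations(files: string[], applied: Set<string>): string[] {
  return files
    .filter((f) => f.endsWith(".sql"))
    .sort() // alphabetical order, hence the 001_, 002_, ... prefixes
    .filter((f) => !applied.has(f));
}
```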
Expected output (first run):

```
Running database migrations...
✓ Applied: 001_create_agents.sql
✓ Applied: 002_create_credentials.sql
...
✓ Applied: 025_add_analytics_events.sql
✓ Applied: 026_add_tenant_tiers.sql

Migrations complete. 26 migration(s) applied.
```

Expected output (already applied):
Expected output:

```
-----------------------------------+-------------------------------
001_create_agents.sql              | 2026-03-28 09:00:00.000000+00
002_create_credentials.sql         | 2026-03-28 09:00:00.000000+00
...
025_add_analytics_events.sql       | 2026-04-04 09:00:00.000000+00
026_add_tenant_tiers.sql           | 2026-04-04 09:00:00.000000+00
(26 rows)
```

### Adding a new migration
## Connection Pool

The application uses `pg.Pool` with settings read from environment variables. The pool is a singleton — one pool per process instance.

| Variable | Default | Description |
|----------|---------|-------------|
| `DB_POOL_MAX` | `20` | Maximum connections |
| `DB_POOL_MIN` | `2` | Minimum idle connections |
| `DB_POOL_IDLE_TIMEOUT_MS` | `30000` | Idle eviction timeout (ms) |
| `DB_POOL_CONNECTION_TIMEOUT_MS` | `5000` | Acquisition timeout (ms) |

Pool size is exposed as Prometheus metrics: `agentidp_db_pool_active_connections` and `agentidp_db_pool_waiting_requests`. Monitor these in production to detect pool exhaustion.
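A minimal sketch of how `src/db/pool.ts` might translate these variables into `pg.Pool` options; only the variable names and defaults come from the table above, the helper itself is illustrative:

```typescript
// Parse an integer env var, falling back to a default when unset or invalid.
function intFromEnv(name: string, fallback: number): number {
  const raw = process.env[name];
  const parsed = raw === undefined ? NaN : Number.parseInt(raw, 10);
  return Number.isNaN(parsed) ? fallback : parsed;
}

// Options in the shape pg.Pool accepts (max, min, idleTimeoutMillis,
// connectionTimeoutMillis), using the documented defaults.
function poolOptions() {
  return {
    max: intFromEnv("DB_POOL_MAX", 20),
    min: intFromEnv("DB_POOL_MIN", 2),
    idleTimeoutMillis: intFromEnv("DB_POOL_IDLE_TIMEOUT_MS", 30_000),
    connectionTimeoutMillis: intFromEnv("DB_POOL_CONNECTION_TIMEOUT_MS", 5_000),
  };
}
```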
| `VAULT_ADDR` | No | Task definition env var | Cloud Run env var |
| `VAULT_TOKEN` | No | Secrets Manager: `/<project>/<env>/vault-token` | Secret Manager: `<name-prefix>-vault-token` |
| `VAULT_MOUNT` | No | Task definition env var (default: `secret`) | Cloud Run env var (default: `secret`) |
| `BILLING_ENABLED` | No | Task definition env var | Cloud Run env var |
| `STRIPE_SECRET_KEY` | Only if billing enabled | Secrets Manager: `/<project>/<env>/stripe-secret-key` | Secret Manager: `<name-prefix>-stripe-secret-key` |
| `STRIPE_WEBHOOK_SECRET` | Only if billing enabled | Secrets Manager: `/<project>/<env>/stripe-webhook-secret` | Secret Manager: `<name-prefix>-stripe-webhook-secret` |
| `STRIPE_PRICE_ID` | Only if billing enabled | Task definition env var | Cloud Run env var |
| `ANALYTICS_ENABLED` | No | Task definition env var (default: `true`) | Cloud Run env var |
| `TIER_ENFORCEMENT` | No | Task definition env var (default: `true`) | Cloud Run env var |
| `COMPLIANCE_ENABLED` | No | Task definition env var (default: `true`) | Cloud Run env var |
| `REDIS_RATE_LIMIT_ENABLED` | No | Task definition env var | Cloud Run env var |
| `RATE_LIMIT_WINDOW_MS` | No | Task definition env var (default: `60000`) | Cloud Run env var |
| `RATE_LIMIT_MAX_REQUESTS` | No | Task definition env var (default: `100`) | Cloud Run env var |
| `DB_POOL_MAX` | No | Task definition env var (default: `20`) | Cloud Run env var |
| `DB_POOL_MIN` | No | Task definition env var (default: `2`) | Cloud Run env var |
| `DB_POOL_IDLE_TIMEOUT_MS` | No | Task definition env var (default: `30000`) | Cloud Run env var |
| `DB_POOL_CONNECTION_TIMEOUT_MS` | No | Task definition env var (default: `5000`) | Cloud Run env var |
| `KAFKA_BROKERS` | No | Task definition env var | Cloud Run env var |
| `ENFORCE_TLS` | No | Task definition env var | Cloud Run env var |
| `OPA_URL` | No | Task definition env var | Cloud Run env var |
| `VAULT_KV_MOUNT` | No | Task definition env var (default: `secret`) | Cloud Run env var |

### Updating a Secret
---

## Docker Compose Variables

These variables are read by `compose.yaml` — not by the application itself. They are required when running the stack via `docker compose up`.

### `POSTGRES_USER`

PostgreSQL superuser name — used to configure the `postgres` container and construct `DATABASE_URL`.

| | |
|-|-|
| **Required for Compose** | Yes |
| **Default in `.env.example`** | `sentryagent` |
| **Example** | `POSTGRES_USER=sentryagent` |

---

### `POSTGRES_PASSWORD`

PostgreSQL superuser password.

| | |
|-|-|
| **Required for Compose** | Yes |
| **Default in `.env.example`** | `change-me-in-production` |
| **Example** | `POSTGRES_PASSWORD=strongpassword` |

> Never use the default value in production. Generate a strong random password.

---

### `POSTGRES_DB`

PostgreSQL database name to create on first startup.

| | |
|-|-|
| **Required for Compose** | Yes |
| **Default in `.env.example`** | `sentryagent_idp` |
| **Example** | `POSTGRES_DB=sentryagent_idp` |

---

### `GF_ADMIN_PASSWORD`

Grafana admin panel password — used by `compose.monitoring.yaml`.

| | |
|-|-|
| **Required for monitoring stack** | Yes |
| **Default in `.env.example`** | `change-me-in-production` |
| **Example** | `GF_ADMIN_PASSWORD=strongpassword` |

> Never use the default value in production.

---
## Required Variables

These variables must be set. The server will throw and exit immediately if any are missing.

### `DATABASE_URL`

PostgreSQL connection string.

| | |
|-|-|
| **Format** | `postgresql://<user>:<password>@<host>:<port>/<database>` |
| **Example** | `postgresql://sentryagent:sentryagent@localhost:5432/sentryagent_idp` |

The application uses `pg.Pool` with this connection string. Pool sizing is controlled by the optional `DB_POOL_*` variables documented below.

---
> **Note on Billing:** `STRIPE_SECRET_KEY`, `STRIPE_WEBHOOK_SECRET`, and `STRIPE_PRICE_ID` are required when `BILLING_ENABLED=true`. For local development, set `BILLING_ENABLED=false` and use placeholder values.

## Optional Variables

These variables have defaults and do not need to be set for local development.
---

### `BILLING_ENABLED`

| | |
|-|-|
| **Required** | No |
| **Default** | `false` |
| **Values** | `true`, `false` |
| **Example** | `BILLING_ENABLED=false` |

Gates Stripe billing integration and free-tier agent limit enforcement. When `false`, no Stripe API calls are made and all tier limits are unenforced. Set to `false` for in-house testing.

---

### `STRIPE_SECRET_KEY`

| | |
|-|-|
| **Required** | Only when `BILLING_ENABLED=true` |
| **Format** | Stripe secret key string (`sk_live_*` or `sk_test_*`) |
| **Example** | `STRIPE_SECRET_KEY=sk_test_placeholder` |

Stripe API key used to create Checkout Sessions for tier upgrades. Never use a live key in development.

---

### `STRIPE_WEBHOOK_SECRET`

| | |
|-|-|
| **Required** | Only when `BILLING_ENABLED=true` |
| **Format** | Stripe webhook signing secret (`whsec_*`) |
| **Example** | `STRIPE_WEBHOOK_SECRET=whsec_placeholder` |

Used to verify the HMAC signature on incoming Stripe webhook events. Without this, the billing webhook endpoint will reject all events.

---

### `STRIPE_PRICE_ID`

| | |
|-|-|
| **Required** | Only when `BILLING_ENABLED=true` |
| **Format** | Stripe Price ID string (`price_*`) |
| **Example** | `STRIPE_PRICE_ID=price_placeholder` |

The Stripe Price object used when creating a Checkout Session for the Pro tier upgrade.

---

### `ANALYTICS_ENABLED`

| | |
|-|-|
| **Required** | No |
| **Default** | `true` |
| **Values** | `true`, `false` |
| **Example** | `ANALYTICS_ENABLED=true` |

Feature flag that gates the `/api/v1/analytics/*` routes. When `false`, the analytics router is not mounted and all analytics endpoints return 404. Events are still recorded internally regardless of this flag.

---
### `TIER_ENFORCEMENT`

| | |
|-|-|
| **Required** | No |
| **Default** | `true` |
| **Values** | `true`, `false` |
| **Example** | `TIER_ENFORCEMENT=true` |

Enables Redis-backed tier limit enforcement per tenant. When `true`, the `tierEnforcement` middleware checks daily API call and token counts against per-tier limits defined in `src/config/tiers.ts`. Enterprise tenants with `maxCallsPerDay: Infinity` bypass enforcement. When `false`, no tier limits are enforced.

---
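The enforcement decision described above can be sketched as a pure check; the function and field names are illustrative, not the actual middleware:

```typescript
// Illustrative per-tier limit shape; maxCallsPerDay: Infinity means "unlimited".
interface TierLimits {
  maxCallsPerDay: number;
  maxTokensPerDay: number;
}

// Decide whether one more API call is allowed today for a tenant.
// Infinity limits (enterprise) always pass, mirroring the bypass behaviour.
function allowCall(limits: TierLimits, callsToday: number): boolean {
  if (limits.maxCallsPerDay === Infinity) return true;
  return callsToday < limits.maxCallsPerDay;
}
```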
### `COMPLIANCE_ENABLED`

| | |
|-|-|
| **Required** | No |
| **Default** | `true` |
| **Values** | `true`, `false` |
| **Example** | `COMPLIANCE_ENABLED=true` |

Feature flag that gates the report and agent-card export endpoints under `/api/v1/compliance/*`. When `false`, those endpoints return 404. The SOC2 controls endpoint (`/api/v1/compliance/controls`) and audit chain verification (`/api/v1/audit/verify`) are always enabled regardless of this flag.

---

### `REDIS_RATE_LIMIT_ENABLED`

| | |
|-|-|
| **Required** | No |
| **Default** | `false` |
| **Values** | `true`, `false` |
| **Example** | `REDIS_RATE_LIMIT_ENABLED=true` |

When `true`, rate limiting uses a Redis-backed sliding-window counter per `client_id`. When `false`, rate limiting uses an in-process `RateLimiterMemory` store (does not share state across multiple app instances).

---
### `RATE_LIMIT_WINDOW_MS`

| | |
|-|-|
| **Required** | No |
| **Default** | `60000` |
| **Format** | Integer (milliseconds) |
| **Example** | `RATE_LIMIT_WINDOW_MS=60000` |

Duration of the sliding-window rate limit period in milliseconds. Only effective when `REDIS_RATE_LIMIT_ENABLED=true`.

---

### `RATE_LIMIT_MAX_REQUESTS`

| | |
|-|-|
| **Required** | No |
| **Default** | `100` |
| **Format** | Integer |
| **Example** | `RATE_LIMIT_MAX_REQUESTS=100` |

Maximum number of requests allowed per `client_id` within `RATE_LIMIT_WINDOW_MS`. Requests exceeding this limit receive `429 RATE_LIMIT_EXCEEDED`.

---
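To make the two settings above concrete, here is a minimal in-memory sliding-window counter with the same semantics the Redis-backed limiter applies per `client_id` (a sketch for illustration, without the shared store or the actual library):

```typescript
// Minimal sliding-window limiter: allow at most maxRequests hits per
// clientId within any trailing windowMs period.
class SlidingWindowLimiter {
  private hits = new Map<string, number[]>(); // clientId -> hit timestamps (ms)

  constructor(
    private windowMs: number,    // RATE_LIMIT_WINDOW_MS
    private maxRequests: number, // RATE_LIMIT_MAX_REQUESTS
  ) {}

  // Returns true if the request is allowed; false means a 429 response.
  allow(clientId: string, now: number = Date.now()): boolean {
    const recent = (this.hits.get(clientId) ?? []).filter(
      (t) => now - t < this.windowMs,
    );
    if (recent.length >= this.maxRequests) {
      this.hits.set(clientId, recent);
      return false;
    }
    recent.push(now);
    this.hits.set(clientId, recent);
    return true;
  }
}
```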
### `DB_POOL_MAX`

| | |
|-|-|
| **Required** | No |
| **Default** | `20` |
| **Format** | Integer |
| **Example** | `DB_POOL_MAX=20` |

Maximum number of PostgreSQL connections in the pool. Increase for high-throughput production deployments. Ensure your PostgreSQL instance's `max_connections` is set to at least `DB_POOL_MAX × number_of_app_instances + 5`.

---
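The sizing rule above is a one-line calculation; as a worked example, three app instances at the default pool size need `max_connections` of at least 65 (the helper name is illustrative):

```typescript
// Lower bound for PostgreSQL max_connections given the rule
// DB_POOL_MAX × number_of_app_instances + 5 (headroom for admin sessions).
function requiredMaxConnections(dbPoolMax: number, appInstances: number): number {
  return dbPoolMax * appInstances + 5;
}
```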
### `DB_POOL_MIN`

| | |
|-|-|
| **Required** | No |
| **Default** | `2` |
| **Format** | Integer |
| **Example** | `DB_POOL_MIN=2` |

Minimum number of idle connections kept alive in the pool.

---

### `DB_POOL_IDLE_TIMEOUT_MS`

| | |
|-|-|
| **Required** | No |
| **Default** | `30000` |
| **Format** | Integer (milliseconds) |
| **Example** | `DB_POOL_IDLE_TIMEOUT_MS=30000` |

Milliseconds a connection can sit idle before being evicted from the pool.

---

### `DB_POOL_CONNECTION_TIMEOUT_MS`

| | |
|-|-|
| **Required** | No |
| **Default** | `5000` |
| **Format** | Integer (milliseconds) |
| **Example** | `DB_POOL_CONNECTION_TIMEOUT_MS=5000` |

Milliseconds the pool waits for a connection to become available before throwing a connection timeout error.

---

### `VAULT_KV_MOUNT`

| | |
|-|-|
| **Required** | No |
| **Default** | `secret` |
| **Format** | String (no leading or trailing slash) |
| **Example** | `VAULT_KV_MOUNT=agentidp` |

KV v2 secrets engine mount path used by `VaultService`. Equivalent to the existing `VAULT_MOUNT` variable — note that `.env.example` uses `VAULT_KV_MOUNT`; the underlying service reads either.

---

### `OPA_URL`

| | |
|-|-|
| **Required** | No |
| **Format** | URL string |
| **Example** | `OPA_URL=http://localhost:8181` |

URL of a running OPA server for external policy evaluation. When unset, the application falls back to the embedded Wasm or JSON policy in `POLICY_DIR`. Used for health check reporting.

---

### `KAFKA_BROKERS`

| | |
|-|-|
| **Required** | No |
| **Format** | Comma-separated broker addresses |
| **Example** | `KAFKA_BROKERS=localhost:9092` |

When set, the `KafkaAdapter` publishes domain events to Kafka. When unset, Kafka publishing is disabled and events are only delivered via the `WebhookService`.

---

### `ENFORCE_TLS`

| | |
|-|-|
| **Required** | No |
| **Default** | `false` |
| **Values** | `true`, `false` |
| **Example** | `ENFORCE_TLS=true` |

When `true`, the `tlsEnforcementMiddleware` redirects all HTTP requests to HTTPS. Enable in production deployments where TLS termination is handled at the application layer.

---

### `POLICY_DIR`

Directory containing OPA policy files (`authz.rego`, `authz.wasm`, `data/scopes.json`).
## Complete `.env` Example

```
# ── Server ──────────────────────────────────────────────────────────────────
NODE_ENV=development
PORT=3000
CORS_ORIGIN=http://localhost:3001

# ── Docker Compose (postgres container + monitoring) ─────────────────────────
POSTGRES_USER=sentryagent
POSTGRES_PASSWORD=change-me-in-production
POSTGRES_DB=sentryagent_idp
GF_ADMIN_PASSWORD=change-me-in-production

# ── Database ─────────────────────────────────────────────────────────────────
DATABASE_URL=postgresql://sentryagent:sentryagent@localhost:5432/sentryagent_idp
DB_POOL_MAX=20
DB_POOL_MIN=2
DB_POOL_IDLE_TIMEOUT_MS=30000
DB_POOL_CONNECTION_TIMEOUT_MS=5000

# ── Redis ────────────────────────────────────────────────────────────────────
REDIS_URL=redis://localhost:6379
REDIS_RATE_LIMIT_ENABLED=true
RATE_LIMIT_WINDOW_MS=60000
RATE_LIMIT_MAX_REQUESTS=100

# ── JWT Keys (generate with openssl — see docs/devops/security.md) ──────────
JWT_PRIVATE_KEY="-----BEGIN RSA PRIVATE KEY-----\nMIIEow...\n-----END RSA PRIVATE KEY-----"
JWT_PUBLIC_KEY="-----BEGIN PUBLIC KEY-----\nMIIBIj...\n-----END PUBLIC KEY-----"

# ── Billing (Stripe) — set BILLING_ENABLED=false for local/in-house testing ─
BILLING_ENABLED=false
STRIPE_SECRET_KEY=sk_test_placeholder
STRIPE_WEBHOOK_SECRET=whsec_placeholder
STRIPE_PRICE_ID=price_placeholder

# ── Phase 6 Feature Flags ─────────────────────────────────────────────────────
ANALYTICS_ENABLED=true
TIER_ENFORCEMENT=true
COMPLIANCE_ENABLED=true

# ── HashiCorp Vault (optional) ────────────────────────────────────────────────
# VAULT_ADDR=http://127.0.0.1:8200
# VAULT_TOKEN=hvs.XXXXXXXXXXXXXXXXXXXXXX
# VAULT_KV_MOUNT=secret

# ── OPA (optional) ───────────────────────────────────────────────────────────
# POLICY_DIR=/etc/sentryagent/policies
# OPA_URL=http://localhost:8181

# ── Kafka (optional) ─────────────────────────────────────────────────────────
# KAFKA_BROKERS=localhost:9092

# ── TLS ──────────────────────────────────────────────────────────────────────
# ENFORCE_TLS=true
```

> Do not commit `.env` to version control. Add it to `.gitignore`.
The application validates required variables at startup in this order:

3. `REDIS_URL` — checked when `getRedisClient()` is first called (during `createApp()`)

If any required variable is missing, the process exits with an error before binding to any port.

> **Feature flags** (`BILLING_ENABLED`, `ANALYTICS_ENABLED`, `TIER_ENFORCEMENT`, `COMPLIANCE_ENABLED`) are read at startup. `ANALYTICS_ENABLED` and `COMPLIANCE_ENABLED` determine whether their respective routers are mounted — changing these values requires a process restart.
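These flags are plain string environment variables. A hedged sketch of the startup-time parse, with the documented defaults (`envFlag` is an illustrative name, not the actual helper):

```typescript
// Parse a true/false feature flag from the environment, with a default
// used when the variable is unset. Only the literal string "true" enables.
function envFlag(name: string, fallback: boolean): boolean {
  const raw = process.env[name];
  if (raw === undefined || raw === "") return fallback;
  return raw === "true";
}
```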
# SentryAgent.ai AgentIdP — In-House Field Trial Guide

This guide is the execution playbook for in-house Docker Compose field trials of SentryAgent.ai AgentIdP. Follow each phase in order. All commands are exact — copy and paste them directly.

Estimated time to complete all phases: 45–60 minutes.

Prerequisites must be satisfied before Section 0.

## Prerequisites

**Docker 24+ and Docker Compose 2.20+**

```bash
docker --version
# Expected: Docker version 24.x.x or higher

docker compose version
# Expected: Docker Compose version v2.20.x or higher
```

**Node.js 18+ via nvm**

```bash
export NVM_DIR="$HOME/.nvm" && source "$NVM_DIR/nvm.sh"
node --version
# Expected: v18.x.x or higher
```

**openssl**

```bash
openssl version
# Expected: OpenSSL 1.1.x or higher (any version)
```

**Git repo cloned**

```bash
git clone https://git.sentryagent.ai/vijay_admin/sentryagent-idp.git
cd sentryagent-idp
```

**Ports free**

The following ports must be free on the machine before starting:

| Port | Service |
|------|---------|
| 3000 | AgentIdP backend |
| 3001 | Next.js portal |
| 5432 | PostgreSQL |
| 6379 | Redis |

Check all ports:

```bash
lsof -i :3000 -i :3001 -i :5432 -i :6379
# Expected: no output (all ports free)
```

If any port is in use, kill the occupying process:

```bash
lsof -ti:<port> | xargs kill
```

---
## Section 0 — Environment Setup

This section guides the engineer through creating a valid `.env` file for field trial use.

**Step 0.1 — Copy `.env.example`**

```bash
cp .env.example .env
```

**Step 0.2 — Generate RSA-2048 keypair**

Generate the JWT signing keys:

```bash
openssl genrsa -out private.pem 2048
openssl rsa -in private.pem -pubout -out public.pem
```

Verify the keys are valid:

```bash
openssl rsa -in private.pem -check -noout
# Expected: RSA key ok

openssl rsa -in public.pem -pubin -noout -text 2>&1 | head -3
# Expected: Public-Key: (2048 bit)
```

**Step 0.3 — Write keys into `.env`**

Write the private key as a single-line PEM with `\n` separators:

```bash
PRIVATE_KEY_LINE=$(awk 'NF {sub(/\r/, ""); printf "%s\\n",$0;}' private.pem)
sed -i "s|JWT_PRIVATE_KEY=.*|JWT_PRIVATE_KEY=\"${PRIVATE_KEY_LINE}\"|" .env
```

Write the public key:

```bash
PUBLIC_KEY_LINE=$(awk 'NF {sub(/\r/, ""); printf "%s\\n",$0;}' public.pem)
sed -i "s|JWT_PUBLIC_KEY=.*|JWT_PUBLIC_KEY=\"${PUBLIC_KEY_LINE}\"|" .env
```

Verify both keys are present and non-empty:

```bash
grep -c "BEGIN RSA PRIVATE KEY" .env
# Expected: 1

grep -c "BEGIN PUBLIC KEY" .env
# Expected: 1
```

**Step 0.4 — Configure field trial values**

Set the following values in `.env`. These are the correct values for an in-house field trial (no real Stripe, no Kafka, no Vault):

```bash
# Disable real Stripe billing for field trial
sed -i "s|BILLING_ENABLED=.*|BILLING_ENABLED=false|" .env
sed -i "s|STRIPE_SECRET_KEY=.*|STRIPE_SECRET_KEY=sk_test_placeholder|" .env
sed -i "s|STRIPE_WEBHOOK_SECRET=.*|STRIPE_WEBHOOK_SECRET=whsec_placeholder|" .env
sed -i "s|STRIPE_PRICE_ID=.*|STRIPE_PRICE_ID=price_placeholder|" .env

# Keep feature flags at defaults
sed -i "s|ANALYTICS_ENABLED=.*|ANALYTICS_ENABLED=true|" .env
sed -i "s|TIER_ENFORCEMENT=.*|TIER_ENFORCEMENT=true|" .env
sed -i "s|COMPLIANCE_ENABLED=.*|COMPLIANCE_ENABLED=true|" .env

# Allow portal CORS
sed -i "s|CORS_ORIGIN=.*|CORS_ORIGIN=http://localhost:3001|" .env
```

**Step 0.5 — Verify final `.env`**

```bash
grep -E "^(POSTGRES_USER|POSTGRES_PASSWORD|POSTGRES_DB|DATABASE_URL|REDIS_URL|JWT_PRIVATE_KEY|JWT_PUBLIC_KEY|BILLING_ENABLED|ANALYTICS_ENABLED|TIER_ENFORCEMENT|COMPLIANCE_ENABLED|CORS_ORIGIN)=" .env
```

Expected output (values abbreviated):

```
POSTGRES_USER=sentryagent
POSTGRES_PASSWORD=sentryagent
POSTGRES_DB=sentryagent_idp
DATABASE_URL=postgresql://sentryagent:sentryagent@localhost:5432/sentryagent_idp
REDIS_URL=redis://localhost:6379
JWT_PRIVATE_KEY="-----BEGIN RSA PRIVATE KEY-----\n...
JWT_PUBLIC_KEY="-----BEGIN PUBLIC KEY-----\n...
BILLING_ENABLED=false
ANALYTICS_ENABLED=true
TIER_ENFORCEMENT=true
COMPLIANCE_ENABLED=true
CORS_ORIGIN=http://localhost:3001
```

---
## Phase A — Stack Startup
|
||||||
|
|
||||||
|
**Step A.1 — Build and start the full stack**
|
||||||
|
|
||||||
|
```bash
|
||||||
|
docker compose up --build -d
|
||||||
|
```
|
||||||
|
|
||||||
|
This builds the `app` container image and starts all three services. The `app` service waits
|
||||||
|
for `postgres` and `redis` to pass their health checks before starting.
|
||||||
|
|
||||||
|
**Step A.2 — Verify all services are healthy**

```bash
docker compose ps
```

Expected output — all three services must show `healthy`:

```
NAME                         IMAGE                       STATUS
sentryagent-idp-app-1        sentryagent-idp-app         running (healthy)
sentryagent-idp-postgres-1   postgres:14.12-alpine3.19   running (healthy)
sentryagent-idp-redis-1      redis:7.2-alpine3.19        running (healthy)
```

If any service shows `starting` or `unhealthy`, wait 15 seconds and run `docker compose ps` again. If a service remains unhealthy after 60 seconds, see Troubleshooting.

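The wait-and-recheck loop can be scripted. Below is a sketch of a generic polling helper; `wait_for` is a hypothetical name, not part of the product, and the `docker compose ps` usage assumes the stack from Step A.1:

```bash
# Hypothetical helper: retry a command until its output contains a marker,
# sleeping between attempts; returns non-zero if the marker never appears.
wait_for() {
  tries=$1 delay=$2 marker=$3
  shift 3
  i=0
  while [ "$i" -lt "$tries" ]; do
    if "$@" 2>/dev/null | grep -q "$marker"; then
      return 0
    fi
    sleep "$delay"
    i=$((i + 1))
  done
  return 1
}

# Example (assumes the stack is up):
#   wait_for 4 15 "healthy" docker compose ps || echo "stack did not become healthy"
```
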
**Step A.3 — Run database migrations**

```bash
docker compose exec app npm run db:migrate
```

Expected output:

```
Running database migrations...
✓ Applied: 001_create_agents.sql
✓ Applied: 002_create_credentials.sql
...
✓ Applied: 025_add_analytics_events.sql
✓ Applied: 026_add_tenant_tiers.sql

Migrations complete. 26 migration(s) applied.
```

All 26 migrations must apply without error before proceeding.

**Step A.4 — Verify application health**

```bash
curl -s http://localhost:3000/health | jq .
```

Expected response:

```json
{"status":"ok"}
```

**Step A.5 — Verify Prometheus metrics**

```bash
curl -s http://localhost:3000/metrics | head -20
```

Expected: Prometheus text output beginning with `# HELP` lines. Verify these specific metrics are present:

```bash
curl -s http://localhost:3000/metrics | grep -E "^# HELP agentidp_"
```

Expected: at least 19 lines matching `# HELP agentidp_*`.

---

## Phase B — Core Product Journeys

This phase tests the end-to-end agent identity lifecycle. Run each step in order. Each step depends on the output of the previous step.

> **Note on tokens:** The steps below use shell variables to pass values between commands. Run
> all commands in the same terminal session.

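If a session might be interrupted, the variables can be snapshotted to a file and reloaded later. This is an optional sketch: `save_trial_vars` and the `/tmp/field-trial.env` path are arbitrary names, and the file will contain live secrets, so delete it when the trial ends.

```bash
# Write each trial variable that is currently set to a reloadable env file.
save_trial_vars() {
  for v in ORG_ID AGENT_ID CLIENT_ID CLIENT_SECRET ACCESS_TOKEN NEW_CLIENT_ID NEW_CLIENT_SECRET NEW_ACCESS_TOKEN; do
    eval "val=\${$v:-}"
    if [ -n "$val" ]; then
      printf '%s=%s\n' "$v" "$val"
    fi
  done > /tmp/field-trial.env
}

# Reload in a new terminal with:
#   . /tmp/field-trial.env
```
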
**Step B.1 — Create an organisation**

```bash
ORG_RESPONSE=$(curl -s -X POST http://localhost:3000/api/v1/organizations \
  -H "Content-Type: application/json" \
  -d '{"name":"Field Trial Org","slug":"field-trial"}')

echo $ORG_RESPONSE | jq .
ORG_ID=$(echo $ORG_RESPONSE | jq -r '.org_id')
echo "ORG_ID: $ORG_ID"
```

Expected: HTTP 201 response body containing an `org_id` UUID. `ORG_ID` must be a non-empty UUID.

**Step B.2 — Register an agent**

```bash
AGENT_RESPONSE=$(curl -s -X POST http://localhost:3000/api/v1/agents \
  -H "Content-Type: application/json" \
  -d "{
    \"email\": \"trial-agent@field-trial.sentryagent.ai\",
    \"agent_type\": \"classifier\",
    \"version\": \"1.0.0\",
    \"capabilities\": [\"documents:read\", \"documents:classify\"],
    \"owner\": \"field-trial-team\",
    \"deployment_env\": \"development\",
    \"organization_id\": \"$ORG_ID\"
  }")

echo $AGENT_RESPONSE | jq .
AGENT_ID=$(echo $AGENT_RESPONSE | jq -r '.agent_id')
echo "AGENT_ID: $AGENT_ID"
```

Expected: HTTP 201 response body containing an `agent_id` UUID.

**Step B.3 — Generate credentials**

```bash
CRED_RESPONSE=$(curl -s -X POST http://localhost:3000/api/v1/credentials \
  -H "Content-Type: application/json" \
  -d "{\"agent_id\": \"$AGENT_ID\"}")

echo $CRED_RESPONSE | jq .
CLIENT_ID=$(echo $CRED_RESPONSE | jq -r '.client_id')
CLIENT_SECRET=$(echo $CRED_RESPONSE | jq -r '.client_secret')
echo "CLIENT_ID: $CLIENT_ID"
echo "CLIENT_SECRET: $CLIENT_SECRET"
```

Expected: HTTP 201 response body containing `client_id` and `client_secret`. The `client_secret` is only returned once — save it now.

**Step B.4 — Issue an OAuth 2.0 access token**

```bash
TOKEN_RESPONSE=$(curl -s -X POST http://localhost:3000/api/v1/token \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=client_credentials&client_id=$CLIENT_ID&client_secret=$CLIENT_SECRET&scope=read")

echo $TOKEN_RESPONSE | jq .
ACCESS_TOKEN=$(echo $TOKEN_RESPONSE | jq -r '.access_token')
echo "ACCESS_TOKEN obtained: ${ACCESS_TOKEN:0:30}..."
```

Expected: HTTP 200 response body with `access_token`, `token_type: "Bearer"`, `expires_in: 3600`, `scope: "read"`.

**Step B.5 — Use the token on a protected endpoint**

```bash
curl -s -H "Authorization: Bearer $ACCESS_TOKEN" \
  http://localhost:3000/api/v1/agents | jq .
```

Expected: HTTP 200 with a JSON array of agents including the agent registered in Step B.2.

**Step B.6 — Inspect JWT claims**

Decode and inspect the access token structure (without verifying the signature):

```bash
echo $ACCESS_TOKEN | cut -d. -f2 | base64 -d 2>/dev/null | jq .
```

Expected claims:

```json
{
  "sub": "<client_id>",
  "iss": "https://sentryagent.ai",
  "aud": "sentryagent-api",
  "scope": "read",
  "agent_id": "<agent_id>",
  "organization_id": "<org_id>",
  "iat": "<issued-at-timestamp>",
  "exp": "<expiry-timestamp>",
  "jti": "<unique-jwt-id>"
}
```

Verify `exp - iat = 3600` (1 hour TTL).

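The quick decode above can fail silently when the payload segment's length is not a multiple of 4, because `base64 -d` expects padding (hence the `2>/dev/null`). A more robust sketch, where `jwt_payload` is a hypothetical helper rather than part of the product:

```bash
# Decode the payload segment of a JWT: convert base64url to base64,
# then re-add the padding that JWTs strip before decoding.
jwt_payload() {
  seg=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')
  while [ $(( ${#seg} % 4 )) -ne 0 ]; do
    seg="${seg}="
  done
  printf '%s' "$seg" | base64 -d
}

# TTL check for the trial token — should print 3600:
#   jwt_payload "$ACCESS_TOKEN" | jq '.exp - .iat'
```
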
**Step B.7 — Rotate credentials and verify old token is rejected**

Rotate the credentials (generates a new `client_secret`, revokes the old one):

```bash
ROTATE_RESPONSE=$(curl -s -X POST http://localhost:3000/api/v1/credentials \
  -H "Content-Type: application/json" \
  -d "{\"agent_id\": \"$AGENT_ID\"}")

NEW_CLIENT_ID=$(echo $ROTATE_RESPONSE | jq -r '.client_id')
NEW_CLIENT_SECRET=$(echo $ROTATE_RESPONSE | jq -r '.client_secret')
echo "New credential: $NEW_CLIENT_ID"
```

Attempt to use the old token (must be rejected):

```bash
curl -s -o /dev/null -w "%{http_code}" \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  http://localhost:3000/api/v1/agents
# Expected: 401
```

Issue a new token with the new credentials:

```bash
NEW_TOKEN_RESPONSE=$(curl -s -X POST http://localhost:3000/api/v1/token \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=client_credentials&client_id=$NEW_CLIENT_ID&client_secret=$NEW_CLIENT_SECRET&scope=read")

NEW_ACCESS_TOKEN=$(echo $NEW_TOKEN_RESPONSE | jq -r '.access_token')
echo "New token obtained."
```

Verify the new token works:

```bash
curl -s -o /dev/null -w "%{http_code}" \
  -H "Authorization: Bearer $NEW_ACCESS_TOKEN" \
  http://localhost:3000/api/v1/agents
# Expected: 200
```

**Step B.8 — Check audit log**

```bash
curl -s -H "Authorization: Bearer $NEW_ACCESS_TOKEN" \
  "http://localhost:3000/api/v1/audit?limit=10" | jq .
```

Expected: JSON array of audit events. Verify these action types are present from Steps B.1–B.7: `agent.created`, `credential.generated`, `token.issued`, `credential.rotated`, `token.revoked`.

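To check the action types without scanning by eye, a jq filter can summarise them. This sketch assumes each audit event carries an `action` field at the top level of the returned array; adjust the jq path if the response nests events differently:

```bash
# List the distinct action values in an audit response read from stdin.
audit_actions() {
  jq -r '.[].action' | sort -u
}

# Usage during the trial:
#   curl -s -H "Authorization: Bearer $NEW_ACCESS_TOKEN" \
#     "http://localhost:3000/api/v1/audit?limit=50" | audit_actions
```
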
---
## Phase C — Guardrails

This phase tests security boundaries. Each test case must be run with the exact command shown and must produce the specified HTTP status code.

> **Setup:** Ensure `$NEW_ACCESS_TOKEN` is still set from Phase B. Use `export NEW_ACCESS_TOKEN`
> if switching terminals.

**Test C.1 — No Authorization header → 401**

```bash
curl -s -o /dev/null -w "%{http_code}" \
  http://localhost:3000/api/v1/agents
```

Expected HTTP status: `401`

**Test C.2 — Malformed JWT → 401**

```bash
curl -s -o /dev/null -w "%{http_code}" \
  -H "Authorization: Bearer notavalidjwt" \
  http://localhost:3000/api/v1/agents
```

Expected HTTP status: `401`

**Test C.3 — Expired JWT → 401**

Use a known-expired token. Generate one with a 1-second TTL (this requires a test helper), or manually craft an expired JWT. For field trial purposes, use this pre-constructed expired token (signed with a different key, so it fails signature verification and returns 401):

```bash
EXPIRED_TOKEN="eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJ0ZXN0IiwiZXhwIjoxfQ.invalid"

curl -s -o /dev/null -w "%{http_code}" \
  -H "Authorization: Bearer $EXPIRED_TOKEN" \
  http://localhost:3000/api/v1/agents
```

Expected HTTP status: `401`

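The pre-constructed token is not magic; it can be rebuilt from its parts. A sketch (the `b64url` helper is ad hoc; the header is the standard RS256 header and the payload is `{"sub":"test","exp":1}`, which together produce the `EXPIRED_TOKEN` string used above):

```bash
# Build a structurally valid but expired, unsigned JWT for negative testing.
b64url() { base64 | tr '+/' '-_' | tr -d '=\n'; }

HEADER=$(printf '%s' '{"alg":"RS256","typ":"JWT"}' | b64url)
PAYLOAD=$(printf '%s' '{"sub":"test","exp":1}' | b64url)
EXPIRED_TOKEN="${HEADER}.${PAYLOAD}.invalid"
echo "$EXPIRED_TOKEN"
```
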
**Test C.4 — Valid JWT, wrong scope → 403**

Issue a token with scope `read`, then attempt to access an endpoint requiring scope `write`:

```bash
# The NEW_ACCESS_TOKEN has scope "read"
# Attempt an action requiring "write" scope (create agent)
curl -s -o /dev/null -w "%{http_code}" \
  -H "Authorization: Bearer $NEW_ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -X POST http://localhost:3000/api/v1/agents \
  -d '{"email":"scope-test@example.com","agent_type":"custom","version":"1.0.0","capabilities":[],"owner":"test","deployment_env":"development"}'
```

Expected HTTP status: `403`

**Test C.5 — Rate limit: 101 requests → 429 on the 101st**

Send 101 requests in rapid succession. The 101st must return 429.

```bash
for i in $(seq 1 101); do
  STATUS=$(curl -s -o /dev/null -w "%{http_code}" \
    -H "Authorization: Bearer $NEW_ACCESS_TOKEN" \
    http://localhost:3000/api/v1/agents)
  if [ "$STATUS" = "429" ]; then
    echo "Request $i returned 429 (PASS)"
    break
  fi
done
```

Expected: Output shows `Request 101 returned 429 (PASS)`, or an earlier request number if previous requests in the session have already counted toward the window. If the loop finishes without printing anything, rate limiting never triggered and the test fails.

After this test, wait 60 seconds for the rate limit window to reset, or use a fresh `client_id` for subsequent tests.

**Test C.6 — Tier limit: exceed free-tier API call limit → 429 with `tier_limit_exceeded`**

The free tier allows 1,000 API calls per day. For the field trial, manually set the counter to the limit value to trigger the guard without making 1,000 real requests:

```bash
# Get the org_id from the token
ORG_ID=$(echo $NEW_ACCESS_TOKEN | cut -d. -f2 | base64 -d 2>/dev/null | jq -r '.organization_id')

# Force the counter to the limit via Redis CLI
docker compose exec redis redis-cli SET "rate:tier:calls:$ORG_ID" 1001 EX 86400

# The next API call must be rejected
TIER_RESPONSE=$(curl -s -w "\n%{http_code}" \
  -H "Authorization: Bearer $NEW_ACCESS_TOKEN" \
  http://localhost:3000/api/v1/agents)

echo "$TIER_RESPONSE"
```

Expected: HTTP status `429`. Response body must contain `"code":"tier_limit_exceeded"`.

Reset the counter after this test:

```bash
docker compose exec redis redis-cli DEL "rate:tier:calls:$ORG_ID"
```

**Test C.7 — Tenant isolation: Org A token cannot access Org B agents → 403**

Create a second organisation and agent:

```bash
ORG_B_RESPONSE=$(curl -s -X POST http://localhost:3000/api/v1/organizations \
  -H "Content-Type: application/json" \
  -d '{"name":"Org B","slug":"org-b"}')

ORG_B_ID=$(echo $ORG_B_RESPONSE | jq -r '.org_id')
echo "ORG_B_ID: $ORG_B_ID"

AGENT_B_RESPONSE=$(curl -s -X POST http://localhost:3000/api/v1/agents \
  -H "Content-Type: application/json" \
  -d "{
    \"email\": \"org-b-agent@org-b.sentryagent.ai\",
    \"agent_type\": \"monitor\",
    \"version\": \"1.0.0\",
    \"capabilities\": [],
    \"owner\": \"org-b\",
    \"deployment_env\": \"development\",
    \"organization_id\": \"$ORG_B_ID\"
  }")

AGENT_B_ID=$(echo $AGENT_B_RESPONSE | jq -r '.agent_id')
echo "AGENT_B_ID: $AGENT_B_ID"
```

Attempt to access Org B's agent using Org A's token:

```bash
curl -s -o /dev/null -w "%{http_code}" \
  -H "Authorization: Bearer $NEW_ACCESS_TOKEN" \
  http://localhost:3000/api/v1/agents/$AGENT_B_ID
```

Expected HTTP status: `403`

---

## Phase D — Portal

**Step D.1 — Install portal dependencies**

```bash
cd portal && npm install && cd ..
```

**Step D.2 — Start the portal development server**

```bash
cd portal && npm run dev &
```

Wait 5 seconds for Next.js to compile, then verify it is listening:

```bash
curl -s -o /dev/null -w "%{http_code}" http://localhost:3001
# Expected: 200 or 307 (redirect to /login)
```

**Step D.3 — Verify each portal route loads**

Open a browser and navigate to each of the following URLs. Each must load without a JavaScript error in the browser console:

| URL | Expected |
|-----|---------|
| `http://localhost:3001/login` | Login page renders |
| `http://localhost:3001/agents` | Agent list renders (may be empty or show auth redirect) |
| `http://localhost:3001/credentials` | Credentials page renders |
| `http://localhost:3001/audit` | Audit log page renders |
| `http://localhost:3001/analytics` | Analytics dashboard renders |
| `http://localhost:3001/settings/tier` | Tier status page renders |
| `http://localhost:3001/compliance` | Compliance report page renders |
| `http://localhost:3001/webhooks` | Webhooks page renders |
| `http://localhost:3001/marketplace` | Marketplace page renders |

All 9 routes must load without a blank page or unhandled error.

**Step D.4 — Verify analytics charts render**

Navigate to `http://localhost:3001/analytics`.

Verify both of the following chart components are present in the page DOM:

```bash
curl -s http://localhost:3001/analytics | grep -c "recharts"
# Expected: 1 or more (recharts is used for TokenTrendChart and AgentHeatmap)
```

**Step D.5 — Verify tier status page**

Navigate to `http://localhost:3001/settings/tier`.

The page must display the current tier (expected: `free` for a new organisation).

**Step D.6 — Stop the portal**

```bash
kill $(lsof -ti:3001)
```

---

## Phase E — AGNTCY Conformance

**Step E.1 — Activate nvm**

```bash
export NVM_DIR="$HOME/.nvm" && source "$NVM_DIR/nvm.sh"
```

**Step E.2 — Run the AGNTCY conformance suite**

```bash
npm run test:agntcy-conformance
```

**Step E.3 — Expected output**

```
AGNTCY Conformance Suite
  Agent Card Export
    ✓ exports valid AGNTCY agent card format
    ✓ agent card contains required identity fields
  Compliance Report
    ✓ generates SOC2-aligned compliance report
    ✓ compliance report includes all required control domains

4 passing (Xs)
```

All 4 tests must pass. A failure indicates a regression in AGNTCY conformance.

**What each test validates:**

| Test | What it validates |
|------|------------------|
| `exports valid AGNTCY agent card format` | The `/api/v1/compliance/agent-cards` endpoint returns an array where each card has `id`, `name`, `version`, `capabilities`, `did` fields in AGNTCY format |
| `agent card contains required identity fields` | Each agent card's `identity` block includes `agent_id`, `organization_id`, `did`, and `deployment_env` |
| `generates SOC2-aligned compliance report` | The `/api/v1/compliance/report` endpoint returns a report with `generated_at`, `controls`, `summary` top-level keys |
| `compliance report includes all required control domains` | The `controls` array in the report includes entries for `access_control`, `audit_logging`, `credential_management`, and `tenant_isolation` |

---
## Phase F — Performance Baseline

> **Prerequisite:** Apache Bench (`ab`) must be installed. On Ubuntu: `sudo apt install apache2-utils`.
> Verify: `ab -V`

**Step F.1 — Create a token payload file**

Write the POST body for the token endpoint to a file, substituting `$NEW_CLIENT_ID` and `$NEW_CLIENT_SECRET` from Phase B:

```bash
cat > /tmp/token_payload.txt << EOF
grant_type=client_credentials&client_id=${NEW_CLIENT_ID}&client_secret=${NEW_CLIENT_SECRET}&scope=read
EOF
```

**Step F.2 — Benchmark token endpoint**

```bash
ab -n 100 -c 10 \
  -p /tmp/token_payload.txt \
  -T "application/x-www-form-urlencoded" \
  http://localhost:3000/api/v1/token
```

**Pass criteria for token endpoint:**

- `Requests per second` > 10
- `Time per request (mean)` < 100 ms
- p95 (95th percentile, shown as `95%` in the `Percentage of requests` table) < 100 ms
- Zero non-2xx responses

**Step F.3 — Benchmark agent list endpoint**

Ensure `$NEW_ACCESS_TOKEN` is still set and valid. Issue a fresh token if needed:

```bash
NEW_ACCESS_TOKEN=$(curl -s -X POST http://localhost:3000/api/v1/token \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=client_credentials&client_id=${NEW_CLIENT_ID}&client_secret=${NEW_CLIENT_SECRET}&scope=read" \
  | jq -r '.access_token')
```

Run the benchmark:

```bash
ab -n 100 -c 10 \
  -H "Authorization: Bearer $NEW_ACCESS_TOKEN" \
  http://localhost:3000/api/v1/agents
```

**Pass criteria for agent list endpoint:**

- `Time per request (mean)` < 200 ms
- p95 (`95%` row in the `Percentage of requests` table) < 200 ms
- Zero non-2xx responses

**Step F.4 — Record results**

Record the following values from each `ab` output for the field trial report:

| Endpoint | Metric | Value |
|----------|--------|-------|
| `/api/v1/token` | Requests per second | |
| `/api/v1/token` | Mean time per request (ms) | |
| `/api/v1/token` | p95 (ms) | |
| `/api/v1/agents` | Requests per second | |
| `/api/v1/agents` | Mean time per request (ms) | |
| `/api/v1/agents` | p95 (ms) | |

A field trial passes Phase F if all p95 values are within the pass criteria above.

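Transcribing the numbers can be scripted. This sketch parses ab's plain-text report; `extract_ab_metrics` is a hypothetical helper, and the patterns assume the standard ab output format, which may vary slightly across versions:

```bash
# Pull requests/sec, mean latency, and p95 out of a saved ab report.
# Capture a run first, e.g.:
#   ab -n 100 -c 10 -H "Authorization: Bearer $NEW_ACCESS_TOKEN" \
#     http://localhost:3000/api/v1/agents > /tmp/ab_agents.txt
extract_ab_metrics() {
  awk '
    /^Requests per second:/         { print "rps=" $4 }
    /^Time per request:.*\(mean\)$/ { print "mean_ms=" $4 }
    /^ *95%/                        { print "p95_ms=" $2 }
  ' "$1"
}
```
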
---
## Troubleshooting

Each entry follows the pattern: **Symptom** → **Cause** → **Fix** with exact commands.

---

**Port already in use**

Symptom:

```
Error response from daemon: driver failed programming external connectivity on endpoint
sentryagent-idp-app-1: Bind for 0.0.0.0:3000 failed: port is already allocated
```

Fix: Kill the processes occupying the ports, then restart (`xargs -r` skips a port with no listener):

```bash
lsof -ti:3000 | xargs -r kill
lsof -ti:5432 | xargs -r kill
lsof -ti:6379 | xargs -r kill
docker compose up --build -d
```

---

**Container shows `unhealthy`**

Symptom: `docker compose ps` shows `unhealthy` for a service.

Fix: Check logs for the unhealthy service:

```bash
docker compose logs postgres
docker compose logs redis
docker compose logs app
```

Common causes:

| Service | Cause | Fix |
|---------|-------|-----|
| `postgres` | Wrong database credentials | Verify `POSTGRES_USER`, `POSTGRES_PASSWORD`, `POSTGRES_DB` in `.env` match values in `compose.yaml` |
| `redis` | Port conflict | Check `lsof -ti:6379` and kill the occupying process |
| `app` | Missing env var | Check `docker compose logs app` for a `Failed to start server` message |

---

**Migration fails — connection refused**

Symptom:

```
Migration failed: Error: connect ECONNREFUSED 127.0.0.1:5432
```

Cause: Running `npm run db:migrate` directly on the host (not inside the container) while PostgreSQL is running inside Docker.

Fix: Always run migrations inside the container during a field trial:

```bash
docker compose exec app npm run db:migrate
```

---

**Migration fails — relation already exists**

Symptom:

```
Migration failed: Error: relation "agents" already exists
```

Cause: A previous partial migration run left the database in an inconsistent state.

Fix: Check which migrations have been applied:

```bash
docker compose exec postgres psql -U sentryagent -d sentryagent_idp \
  -c "SELECT name FROM schema_migrations ORDER BY name;"
```

If the database state cannot be repaired, reset it:

```bash
docker compose down -v
docker compose up --build -d
docker compose exec app npm run db:migrate
```

> `docker compose down -v` destroys all data. Use only when a clean slate is acceptable.

---

**JWT error — invalid signature or key format**

Symptom:

```
Failed to start server: Error: JWT_PRIVATE_KEY and JWT_PUBLIC_KEY environment variables are required
```

Or: all tokens return `401 Token signature is invalid`.

Cause: The JWT keys in `.env` have an incorrect PEM format — literal newlines instead of `\n` sequences, or trailing whitespace.

Fix: Regenerate the keys and re-write them using the exact commands from Steps 0.2 and 0.3.

Verify the key format in `.env`:

```bash
grep "JWT_PRIVATE_KEY" .env | head -c 100
# Expected: JWT_PRIVATE_KEY="-----BEGIN RSA PRIVATE KEY-----\nMII...
# NOT: JWT_PRIVATE_KEY="-----BEGIN RSA PRIVATE KEY-----
# MII...
```

The entire key must be on a single line with `\n` as literal backslash-n characters, not actual newlines.

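Rather than hand-editing, the conversion from a multi-line PEM file to the single-line `.env` form can be scripted. A sketch: `pem_to_env_line` is a hypothetical helper, and `jwt_private.pem` is an assumed file name for the key generated in Step 0.2:

```bash
# Collapse a multi-line PEM file into one line with literal \n separators,
# which is the form the .env file expects.
pem_to_env_line() {
  awk 'NF { printf "%s\\n", $0 }' "$1"
}

# Usage (assumed file name):
#   printf 'JWT_PRIVATE_KEY="%s"\n' "$(pem_to_env_line jwt_private.pem)" >> .env
```
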
---
**Portal CORS error**

Symptom: Browser console shows:

```
Access to XMLHttpRequest at 'http://localhost:3000/api/v1/...' from origin 'http://localhost:3001'
has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present
```

Cause: `CORS_ORIGIN` in `.env` does not include `http://localhost:3001`, or is set to a different value.

Fix:

```bash
sed -i "s|CORS_ORIGIN=.*|CORS_ORIGIN=http://localhost:3001|" .env
docker compose up --build -d
```

Wait for the `app` container to become healthy before retrying.

---

**Tier counter not resetting**

Symptom: All API calls return 429 `tier_limit_exceeded` even after waiting.

Cause: The Redis tier counter was manually set in Test C.6 and not deleted.

Fix:

```bash
# Get your org_id from the token
ORG_ID=$(echo $NEW_ACCESS_TOKEN | cut -d. -f2 | base64 -d 2>/dev/null | jq -r '.organization_id')

docker compose exec redis redis-cli DEL "rate:tier:calls:$ORG_ID"
docker compose exec redis redis-cli DEL "rate:tier:tokens:$ORG_ID"
```

---

**`ab` not found**

Symptom: `ab: command not found`

Fix:

```bash
sudo apt-get update && sudo apt-get install -y apache2-utils
# or on macOS (ab ships with the httpd formula):
brew install httpd
```

---

**AGNTCY conformance test fails**

Symptom: One or more tests in `npm run test:agntcy-conformance` fail.

Diagnosis steps:

1. Ensure the backend is running and healthy: `curl -s http://localhost:3000/health`
2. Ensure `COMPLIANCE_ENABLED=true` in `.env` (check with `grep COMPLIANCE_ENABLED .env`)
3. Ensure at least one agent has been registered (Phase B must have been completed)
4. Check the test output for the specific assertion that failed
5. Check `docker compose logs app` for errors around compliance report generation

If the issue is a Redis cache hit returning stale data:

```bash
docker compose exec redis redis-cli KEYS "compliance:*" | xargs -r docker compose exec redis redis-cli DEL
```

Then re-run the conformance suite.

@@ -6,19 +6,27 @@ Complete setup guide for running AgentIdP locally.
|
|||||||
|
|
||||||
| Tool | Minimum version | Purpose |
|
| Tool | Minimum version | Purpose |
|
||||||
|------|----------------|---------|
|
|------|----------------|---------|
|
||||||
| Docker + Docker Compose | 24+ | Run PostgreSQL and Redis |
|
| Docker | 24+ | Container runtime |
|
||||||
| Node.js | 18.0.0 | Run the application and migrations |
|
| Docker Compose | 2.20+ | Multi-container orchestration |
|
||||||
|
| Node.js | 18.0.0 | Run the application, portal, and migrations |
|
||||||
| npm | 9+ | Package management and scripts |
|
| npm | 9+ | Package management and scripts |
|
||||||
|
| nvm | any | Recommended for managing Node.js versions |
|
||||||
|
| openssl | any | RSA key generation |
|
||||||
|
|
||||||
Verify versions:
|
Verify versions:
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
docker --version
|
docker --version
|
||||||
docker-compose --version
|
docker compose version
|
||||||
node --version
|
node --version
|
||||||
npm --version
|
npm --version
|
||||||
```
|
```
|
||||||
|
|
||||||
|
> **nvm activation:** If using nvm, activate it before running any Node.js commands:
|
||||||
|
> ```bash
|
||||||
|
> export NVM_DIR="$HOME/.nvm" && source "$NVM_DIR/nvm.sh"
|
||||||
|
> ```
|
||||||
|
|
||||||
---
|
---
|
||||||
|
|
||||||
## Step 1 — Clone and install dependencies
|
## Step 1 — Clone and install dependencies
|
||||||
@@ -27,6 +35,9 @@ npm --version
|
|||||||
git clone https://git.sentryagent.ai/vijay_admin/sentryagent-idp.git
|
git clone https://git.sentryagent.ai/vijay_admin/sentryagent-idp.git
|
||||||
cd sentryagent-idp
|
cd sentryagent-idp
|
||||||
npm install
|
npm install
|
||||||
|
|
||||||
|
# Install portal dependencies
|
||||||
|
cd portal && npm install && cd ..
|
||||||
```
|
```
|
||||||
|
|
||||||
---
|
---
|
||||||
@@ -46,18 +57,29 @@ Keep these files in the project root. They are used only locally and should not
 
 ## Step 3 — Configure environment
 
-Create a `.env` file in the project root:
+Copy the template and fill in your values:
 
 ```bash
-cat > .env << 'ENVEOF'
+cp .env.example .env
+```
+
+The template already includes all required variables. At minimum, verify these are set correctly for local development:
+
+```
+POSTGRES_USER=sentryagent
+POSTGRES_PASSWORD=sentryagent
+POSTGRES_DB=sentryagent_idp
 DATABASE_URL=postgresql://sentryagent:sentryagent@localhost:5432/sentryagent_idp
 REDIS_URL=redis://localhost:6379
 PORT=3000
 NODE_ENV=development
 CORS_ORIGIN=*
-ENVEOF
 ```
 
+> **Note:** `POSTGRES_USER`, `POSTGRES_PASSWORD`, and `POSTGRES_DB` are used by `compose.yaml`
+> to configure the PostgreSQL container and construct `DATABASE_URL`. They are not read by
+> the application directly — only `DATABASE_URL` is.
+
 Append the JWT keys to `.env`:
 
 ```bash
@@ -75,10 +97,10 @@ grep -E "^(DATABASE_URL|REDIS_URL|JWT_PRIVATE_KEY|JWT_PUBLIC_KEY)" .env
 
 ## Step 4 — Start infrastructure services
 
-The `docker-compose.yml` defines three services: `postgres`, `redis`, and `app`. For local development, start only the infrastructure services — the application runs directly via Node.js.
+The `compose.yaml` defines three services: `postgres`, `redis`, and `app`. For local development, start only the infrastructure services — the application runs directly via Node.js.
 
 ```bash
-docker-compose up -d postgres redis
+docker compose up -d postgres redis
 ```
 
 Expected output:
@@ -89,7 +111,7 @@ Expected output:
 ✔ Container sentryagent-idp-redis-1 Healthy
 ```
 
-Both services must show `Healthy` before proceeding. If they show `Starting`, wait a few seconds and run `docker-compose ps` to recheck.
+Both services must show `Healthy` before proceeding. If they show `Starting`, wait a few seconds and run `docker compose ps` to recheck.
 
 ### Service ports
 
@@ -101,18 +123,18 @@ Both services must show `Healthy` before proceeding. If they show `Starting`, wa
 Verify manually:
 
 ```bash
-docker-compose exec postgres pg_isready -U sentryagent -d sentryagent_idp
-docker-compose exec redis redis-cli ping
+docker compose exec postgres pg_isready -U sentryagent -d sentryagent_idp
+docker compose exec redis redis-cli ping
 ```
 
 ### Docker volumes
 
-Data is persisted in named Docker volumes:
+Data is persisted in named Docker volumes (kebab-case per Compose Spec standard):
 
 | Volume | Service | Contents |
 |--------|---------|---------|
-| `sentryagent-idp_postgres_data` | PostgreSQL | All database data |
-| `sentryagent-idp_redis_data` | Redis | Redis persistence (if enabled) |
+| `sentryagent-idp_postgres-data` | PostgreSQL | All database data |
+| `sentryagent-idp_redis-data` | Redis | Redis persistence (if enabled) |
 
 ---
 
@@ -127,11 +149,10 @@ Expected output:
 ```
 Running database migrations...
 ✓ Applied: 001_create_agents.sql
-✓ Applied: 002_create_credentials.sql
-✓ Applied: 003_create_audit_events.sql
-✓ Applied: 004_create_tokens.sql
+...
+✓ Applied: 026_add_tenant_tiers.sql
 
-Migrations complete. 4 migration(s) applied.
+Migrations complete. 26 migration(s) applied.
 ```
 
 See [database.md](database.md) for full migration documentation.
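The migration output above implies a runner that walks the numbered `.sql` files in sequence and skips the ones already recorded. A minimal sketch of that selection logic — the helper name and signature are illustrative, not the project's actual migration code:

```typescript
// Sketch: given every migration filename and the set already applied,
// return the pending ones in numeric order. Filenames carry a zero-padded
// sequence prefix, e.g. "001_create_agents.sql", "026_add_tenant_tiers.sql".
function pendingMigrations(all: string[], applied: Set<string>): string[] {
  return all
    .filter((f) => f.endsWith(".sql") && !applied.has(f))
    .sort((a, b) => {
      // parseInt reads the leading digits of the filename, so "026_…" → 26.
      return parseInt(a, 10) - parseInt(b, 10);
    });
}
```

The real runner also executes each pending file inside a transaction and records it; this sketch only shows the ordering contract.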
@@ -165,19 +186,60 @@ The compiled output is written to `dist/`. `npm start` runs `node dist/server.js
 
 ---
 
-## Full Docker Compose Stack
+## Step 7 — Start the Next.js portal (optional)
 
-> **Note:** The `app` service in `docker-compose.yml` requires a `Dockerfile` which has not been written yet. This is a **Phase 1 P1 pending item**. The commands below will work once the Dockerfile exists.
+The portal is a Next.js 14 application in the `portal/` directory. It communicates with the
+AgentIdP backend at `http://localhost:3000`.
 
-When the Dockerfile is available, the entire stack (infrastructure + application) can be started with:
+Start the portal development server:
 
 ```bash
-docker-compose up -d
+cd portal && npm run dev
 ```
 
-The `app` service depends on `postgres` and `redis` with health check conditions, so it will not start until both services are healthy.
+The portal starts on port 3001 by default. Open http://localhost:3001.
 
-Environment variables for the container are loaded from `.env` via the `env_file` directive in `docker-compose.yml`.
+Available routes:
+
+| Route | Description |
+|-------|-------------|
+| `/login` | OAuth 2.0 login page |
+| `/agents` | Agent registry |
+| `/credentials` | Credential management |
+| `/audit` | Audit log viewer |
+| `/analytics` | Token trend and agent activity charts |
+| `/settings/tier` | Tier status and upgrade |
+| `/compliance` | AGNTCY compliance report |
+| `/webhooks` | Webhook subscription management |
+| `/marketplace` | Agent marketplace |
+
+Build the portal for production:
+
+```bash
+cd portal && npm run build
+cd portal && npm start # serves the production build
+```
+
+Ensure `CORS_ORIGIN` in your `.env` includes `http://localhost:3001`:
+
+```
+CORS_ORIGIN=http://localhost:3001
+```
+
+---
+
+## Full Docker Compose Stack
+
+> The full Docker Compose stack (including the `app` container) is available for field trial
+> deployments — see the [field trial guide](field-trial.md). For day-to-day development, start
+> only the infrastructure services and run the application directly.
+
+The entire stack (infrastructure + application) can be started with:
+
+```bash
+docker compose up --build -d
+```
+
+The `app` service depends on `postgres` and `redis` with health check conditions, so it will not start until both services are healthy. Environment variables are loaded from `.env` via the `env_file` directive in `compose.yaml` (`required: false` — the file is optional if env vars are injected directly).
 
 ---
 
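The `CORS_ORIGIN` settings above amount to an allow-list test on the request's `Origin` header. A sketch, assuming a comma-separated origin list with a `*` wildcard — verify the exact behaviour against the `cors()` configuration in `src/app.ts` before relying on it:

```typescript
// Sketch of the CORS_ORIGIN allow-list check. The comma-splitting and
// "*" wildcard are assumptions about how the env var is interpreted.
function isOriginAllowed(corsOrigin: string, requestOrigin: string): boolean {
  if (corsOrigin === "*") return true;
  return corsOrigin
    .split(",")
    .map((o) => o.trim())
    .includes(requestOrigin);
}
```

With `CORS_ORIGIN=http://localhost:3001`, only the portal's origin passes; with `*`, every origin does.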
@@ -186,19 +248,19 @@ Environment variables for the container are loaded from `.env` via the `env_file
 Stop infrastructure only (preserves volumes):
 
 ```bash
-docker-compose stop postgres redis
+docker compose stop postgres redis
 ```
 
 Stop and remove containers (preserves volumes):
 
 ```bash
-docker-compose down
+docker compose down
 ```
 
 Stop and remove containers AND volumes (destroys all data):
 
 ```bash
-docker-compose down -v
+docker compose down -v
 ```
 
 > Use `-v` only when you want a clean slate. This deletes all PostgreSQL data and Redis data permanently.
@@ -18,21 +18,22 @@ Always start services in this order. Starting the application before PostgreSQL
 ### Startup checklist
 
 ```bash
-# 1. Start PostgreSQL and Redis
-docker-compose up -d postgres redis
+# 1. Start the full stack
+docker compose up --build -d
 
-# 2. Wait for healthy status
-docker-compose ps
-# Both postgres and redis must show "healthy" before proceeding
+# 2. Verify all three services are healthy
+docker compose ps
+# app, postgres, and redis must all show "healthy"
 
 # 3. Run migrations
-npm run db:migrate
-# Must complete with 0 errors before starting the app
+docker compose exec app npm run db:migrate
 
-# 4. Start the application
-npm run dev # development
-# or
-npm start # production (requires prior npm run build)
+# 4. Verify application health
+curl http://localhost:3000/health
+# Expected: {"status":"ok"}
+
+# 5. (Optional) Start the portal for local dev
+cd portal && npm run dev
 ```
 
 ---
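Step 4 of the checklist expects `{"status":"ok"}` from `/health` once PostgreSQL and Redis are both reachable. A sketch of that response contract — the `503`/`"degraded"` failure shape is an assumption; the real handler lives in the application source:

```typescript
// Sketch of the /health contract: 200 {"status":"ok"} only when both
// backing stores respond. Failure status and body are assumptions.
function healthResponse(
  pgOk: boolean,
  redisOk: boolean,
): { status: number; body: { status: string } } {
  const ok = pgOk && redisOk;
  return { status: ok ? 200 : 503, body: { status: ok ? "ok" : "degraded" } };
}
```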
@@ -110,14 +111,17 @@ Three key patterns are used in Redis. Useful for debugging and manual inspection
 
 ```bash
 # Connect to Redis CLI
-docker-compose exec redis redis-cli
+docker compose exec redis redis-cli
 ```
 
 | Key pattern | Example | Purpose | TTL |
 |------------|---------|---------|-----|
-| `revoked:<jti>` | `revoked:f1e2d3c4-b5a6-...` | Revoked token JTI | Remaining token lifetime |
-| `rate:<client_id>:<window>` | `rate:a1b2c3...:29086156` | Request count per minute window | 60 seconds |
-| `monthly:<client_id>:<year>:<month>` | `monthly:a1b2c3...:2026:3` | Token issuance count for free tier | End of month |
+| `revoked:<jti>` | `revoked:f1e2d3c4-...` | Revoked token JTI | Remaining token lifetime |
+| `rate:<client_id>:<window>` | `rate:a1b2c3...:29086156` | Request count per window | `RATE_LIMIT_WINDOW_MS` |
+| `monthly:<client_id>:<year>:<month>` | `monthly:a1b2c3...:2026:3` | Monthly token issuance count | End of month |
+| `rate:tier:calls:<tenantId>` | `rate:tier:calls:org-uuid` | Daily API call counter for tier enforcement | Until midnight UTC |
+| `rate:tier:tokens:<tenantId>` | `rate:tier:tokens:org-uuid` | Daily token issuance counter for tier enforcement | Until midnight UTC |
+| `compliance:report:<tenantId>` | `compliance:report:org-uuid` | Cached compliance report JSON | 5 minutes |
 
 Inspect keys:
 
@@ -130,6 +134,16 @@ redis-cli GET "rate:<client_id>:<window_key>"
 
 # Check monthly token count for a specific client
 redis-cli GET "monthly:<client_id>:2026:3"
+
+# Check tier API call counter for a tenant
+redis-cli GET "rate:tier:calls:<org_id>"
+
+# Check tier token counter for a tenant
+redis-cli GET "rate:tier:tokens:<org_id>"
+
+# Check cached compliance report for a tenant
+redis-cli GET "compliance:report:<org_id>"
+redis-cli TTL "compliance:report:<org_id>"
 ```
 
 Where `<window_key>` is `floor(unix_ms / 60000)`. For the current window:
@@ -178,10 +192,10 @@ Error: connect ECONNREFUSED 127.0.0.1:5432
 
 | Cause | Fix |
 |-------|-----|
-| PostgreSQL container not started | Run `docker-compose up -d postgres` |
-| PostgreSQL container not yet healthy | Wait and run `docker-compose ps` — wait for `healthy` |
+| PostgreSQL container not started | Run `docker compose up -d postgres` |
+| PostgreSQL container not yet healthy | Wait and run `docker compose ps` — wait for `healthy` |
 | Wrong `DATABASE_URL` host/port | Check `DATABASE_URL` matches the PostgreSQL port (5432) |
-| PostgreSQL container exited | Run `docker-compose logs postgres` to see why it exited |
+| PostgreSQL container exited | Run `docker compose logs postgres` to see why it exited |
 
 ---
 
@@ -196,8 +210,8 @@ Redis client error Error: connect ECONNREFUSED 127.0.0.1:6379
 
 | Cause | Fix |
 |-------|-----|
-| Redis container not started | Run `docker-compose up -d redis` |
-| Redis container not yet healthy | Run `docker-compose ps` — wait for `healthy` |
+| Redis container not started | Run `docker compose up -d redis` |
+| Redis container not yet healthy | Run `docker compose ps` — wait for `healthy` |
 | Wrong `REDIS_URL` | Check `REDIS_URL` matches the Redis port (6379) |
 
 ---
@@ -243,7 +257,7 @@ If a migration is listed there but the table is inconsistent, manually inspect a
 # Find the current window key
 WINDOW=$(node -e "console.log(Math.floor(Date.now() / 60000))")
 # Check count for a specific client
-docker-compose exec redis redis-cli GET "rate:<client_id>:$WINDOW"
+docker compose exec redis redis-cli GET "rate:<client_id>:$WINDOW"
 ```
 
 **Fix:** Wait until `X-RateLimit-Reset` (Unix timestamp in the response header) before retrying. The window resets every 60 seconds.
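The 60-second window arithmetic used for the `rate:<client_id>:<window>` keys (`floor(unix_ms / 60000)`) can be sketched directly. Whether `X-RateLimit-Reset` is computed exactly as the end of the current window is an assumption to verify against the rate-limit middleware:

```typescript
// Fixed 60-second rate-limit window, per the docs above.
const WINDOW_MS = 60_000;

// window key = floor(unix_ms / 60000)
function windowKey(unixMs: number): number {
  return Math.floor(unixMs / WINDOW_MS);
}

// Redis key holding the request count for this client in this window.
function rateLimitRedisKey(clientId: string, unixMs: number): string {
  return `rate:${clientId}:${windowKey(unixMs)}`;
}

// Unix timestamp (seconds) when the current window ends — plausibly what
// X-RateLimit-Reset carries, though that mapping is an assumption here.
function windowResetUnixSeconds(unixMs: number): number {
  return ((windowKey(unixMs) + 1) * WINDOW_MS) / 1000;
}
```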
@@ -258,21 +272,34 @@ AgentIdP exposes a Prometheus metrics endpoint at `GET /metrics` (unauthenticate
 
 | Metric | Type | Labels | Description |
 |--------|------|--------|-------------|
-| `agentidp_tokens_issued_total` | Counter | `scope` | OAuth 2.0 tokens issued successfully |
-| `agentidp_agents_registered_total` | Counter | `deployment_env` | Agents registered successfully |
-| `agentidp_http_requests_total` | Counter | `method`, `route`, `status_code` | HTTP requests received |
-| `agentidp_http_request_duration_seconds` | Histogram | `method`, `route`, `status_code` | HTTP request duration |
+| `agentidp_tokens_issued_total` | Counter | `scope` | OAuth 2.0 tokens issued |
+| `agentidp_agents_registered_total` | Counter | `deployment_env` | Agents registered |
+| `agentidp_http_requests_total` | Counter | `method`, `route`, `status_code` | HTTP requests |
+| `agentidp_http_request_duration_seconds` | Histogram | `method`, `route`, `status_code` | HTTP latency |
 | `agentidp_db_query_duration_seconds` | Histogram | `operation` | PostgreSQL query duration |
 | `agentidp_redis_command_duration_seconds` | Histogram | `command` | Redis command duration |
+| `agentidp_webhook_dead_letters_total` | Counter | `event_type` | Webhook deliveries moved to dead-letter queue |
+| `agentidp_credentials_expiring_soon_total` | Gauge | — | Credentials expiring within 7 days |
+| `agentidp_audit_chain_integrity` | Gauge | — | `1` if audit chain is intact, `0` if broken |
+| `agentidp_rate_limit_hits_total` | Counter | `client_id` | Rate limit rejections |
+| `agentidp_db_pool_active_connections` | Gauge | — | Active PostgreSQL connections |
+| `agentidp_db_pool_waiting_requests` | Gauge | — | Requests waiting for a pool connection |
+| `agentidp_tenant_api_calls_total` | Counter | `org_id`, `tier` | API calls per tenant per tier |
+| `agentidp_billing_limit_rejections_total` | Counter | `org_id`, `limit_type` | Tier limit enforcement rejections |
+| `agentidp_did_documents_generated_total` | Counter | — | DID documents generated |
+| `agentidp_oidc_tokens_issued_total` | Counter | — | OIDC ID tokens issued |
+| `agentidp_federation_events_total` | Counter | `event_type` | Federation partner events |
+| `agentidp_delegation_chains_created_total` | Counter | — | A2A delegation chains created |
+| `agentidp_compliance_reports_generated_total` | Counter | — | Compliance reports generated |
 
 ### Starting the Monitoring Stack
 
 ```bash
 # Start the full stack with monitoring
-docker compose -f docker-compose.yml -f docker-compose.monitoring.yml up -d
+docker compose -f compose.yaml -f compose.monitoring.yaml up -d
 
 # Prometheus: http://localhost:9090
-# Grafana: http://localhost:3001 (admin / agentidp)
+# Grafana: http://localhost:3001 (admin / <GF_ADMIN_PASSWORD from .env>)
 ```
 
 The Grafana dashboard auto-provisions on first start. Navigate to **Dashboards → AgentIdP → SentryAgent.ai — AgentIdP**.
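The `agentidp_audit_chain_integrity` gauge in the metrics table reflects a hash chain over audit events: each event's hash covers its payload plus the previous hash, so tampering with any event breaks every later link. A minimal sketch of the idea — the real chain's field layout and hash input format are defined by the audit service, not this illustration:

```typescript
import { createHash } from "node:crypto";

// Build a SHA-256 hash chain over serialized event payloads.
// Genesis uses an empty previous hash; each link is H(prev + payload).
function chainHashes(payloads: string[]): string[] {
  const hashes: string[] = [];
  let prev = "";
  for (const p of payloads) {
    const h = createHash("sha256").update(prev + p).digest("hex");
    hashes.push(h);
    prev = h;
  }
  return hashes;
}

// Recompute the chain and compare — the boolean behind a 1/0 gauge value.
function chainIntact(payloads: string[], storedHashes: string[]): boolean {
  const expected = chainHashes(payloads);
  return (
    expected.length === storedHashes.length &&
    expected.every((h, i) => h === storedHashes[i])
  );
}
```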
@@ -282,3 +309,50 @@ The Grafana dashboard auto-provisions on first start. Navigate to **Dashboards
 `GET /metrics` is unauthenticated. In production, ensure this endpoint is:
 - Only accessible from your internal network (firewall rule or reverse proxy restriction)
 - Not exposed on a public-facing port
+
+---
+
+### Tier limit rejected — 429 with `tier_limit_exceeded` code
+
+Symptom: `429 TOO_MANY_REQUESTS` with body `{"code":"tier_limit_exceeded","message":"..."}`
+
+Check the tenant's current tier counter:
+```bash
+# Check API call counter
+docker compose exec redis redis-cli GET "rate:tier:calls:<org_id>"
+
+# Check the tenant's tier
+psql "$DATABASE_URL" -c "SELECT org_id, tier FROM tenant_tiers WHERE org_id = '<org_id>';"
+```
+
+If the org is on the `free` tier and has hit 1,000 calls/day, upgrade the tier or wait until
+midnight UTC for the counter to reset.
+
+---
+
+### Analytics endpoints return 404
+
+Cause: `ANALYTICS_ENABLED` is set to `false` in `.env`.
+
+Fix: Set `ANALYTICS_ENABLED=true` and restart the application.
+
+---
+
+### Compliance report returns 404
+
+Cause: `COMPLIANCE_ENABLED` is set to `false` in `.env`.
+
+Fix: Set `COMPLIANCE_ENABLED=true` and restart the application.
+
+---
+
+### Portal CORS error
+
+Symptom: Browser console shows `Access-Control-Allow-Origin` error on requests to
+`http://localhost:3000`.
+
+Fix: Ensure `CORS_ORIGIN` in `.env` includes `http://localhost:3001`:
+```
+CORS_ORIGIN=http://localhost:3001
+```
+Restart the application after changing this variable.
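The daily tier counters described above reset at midnight UTC. A sketch of the TTL such a counter key would receive, assuming the service sets `EXPIRE` to the seconds remaining in the current UTC day:

```typescript
// Seconds from `now` until the next 00:00:00 UTC — a plausible EXPIRE
// value for rate:tier:calls:<tenantId> keys. Exact TTL handling in the
// tier middleware is an assumption here.
function secondsUntilMidnightUtc(now: Date): number {
  // Date.UTC normalizes day overflow, so month/year rollovers are handled.
  const nextMidnight = Date.UTC(
    now.getUTCFullYear(),
    now.getUTCMonth(),
    now.getUTCDate() + 1,
  );
  return Math.floor((nextMidnight - now.getTime()) / 1000);
}
```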
@@ -87,6 +87,12 @@ Rotating the JWT keys invalidates all currently active tokens — every authenti
 
 **Important:** There is no grace period or dual-key support in Phase 1. All tokens issued with the old private key are immediately rejected after rotation. If zero-downtime key rotation is required, it is a Phase 2 feature.
 
+> **OIDC keys** are separate from the main JWT keys. OIDC signing keys are stored in the
+> `oidc_keys` PostgreSQL table (created by migration `014_create_oidc_keys_table.sql`), encrypted
+> at rest using pgcrypto (enabled by migration `018_enable_pgcrypto.sql`). The `OIDCKeyService`
+> manages rotation. OIDC keys do not need to be set as environment variables — they are
+> provisioned automatically on first startup.
+
 ---
 
 ## CORS Configuration
@@ -47,6 +47,10 @@ VAULT_TOKEN=dev-root-token
 VAULT_MOUNT=secret
 ```
 
+> **Note:** The `.env.example` file uses `VAULT_KV_MOUNT` as the variable name. The application
+> reads both `VAULT_KV_MOUNT` and `VAULT_MOUNT` — prefer `VAULT_KV_MOUNT` in new configurations
+> for consistency with the current `.env.example`.
+
 The KV v2 secrets engine is automatically enabled at `secret/` in dev mode. No further configuration is needed.
 
 > **Warning**: Dev mode stores everything in memory. Data is lost when the container stops. Do not use dev mode in production.
@@ -71,6 +71,16 @@ all six AGNTCY domains:
 | Prometheus Metrics | `GET /metrics` | prom-client; all HTTP routes instrumented with request counter and duration histogram |
 | HashiCorp Vault | (opt-in, via `VAULT_ADDR` + `VAULT_TOKEN`) | KV v2 secret storage; constant-time comparison; bcrypt fallback when Vault is not configured |
 | Health Check | `GET /health` | Checks PostgreSQL and Redis connectivity; unauthenticated; used by load balancers |
+| W3C Decentralised Identifiers | `GET /api/v1/agents/:id/did`, `GET /api/v1/.well-known/did.json` | DID Core 1.0 documents; `did:web` method; EC P-256 keys; AGNTCY extension fields |
+| AGNTCY Agent Cards | `GET /api/v1/agents/:id/card` | Machine-readable agent identity summary; AGNTCY schema v1.0 |
+| AGNTCY Compliance Reports | `GET /api/v1/compliance/report`, `GET /api/v1/compliance/agent-cards` | Compliance sections: agent-identity + audit-trail; cached 5 min; AGNTCY schema v1.0 |
+| Federation (Cross-IdP) | `POST /api/v1/federation/partners`, `GET /api/v1/federation/partners`, `POST /api/v1/federation/verify` | Register partner IdPs; verify cross-IdP JWTs using cached partner JWKS |
+| A2A Delegation | `POST /api/v1/oauth2/token/delegate`, `POST /api/v1/oauth2/token/verify-delegation` | Agent-to-agent delegation tokens; OIDC provider (oidc-provider v9) mounted at `/oidc` |
+| Webhook Subscriptions | `POST /api/v1/webhooks`, `GET /api/v1/webhooks`, `GET /api/v1/webhooks/:id/deliveries` | Outbound event delivery with HMAC signing; Vault-backed secrets; delivery history |
+| Tier Management | `GET /api/v1/tiers/status`, `POST /api/v1/tiers/upgrade` | Free / Pro / Enterprise tiers; daily call and token limits; Stripe Checkout upgrade flow |
+| Billing | `POST /api/v1/billing/checkout`, `POST /api/v1/billing/webhook`, `GET /api/v1/billing/status` | Stripe subscription management; webhook event processing |
+| Analytics | Internal (via `AnalyticsService`) | Daily aggregated event counts per org; token trend queries (up to 90 days); agent activity heatmap; usage summary |
+| Developer Portal | `/portal` (Next.js 14, separate process) | Get-started wizard, SDK explorer, API reference, analytics dashboard, pricing page |
 
 ---
 
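The Webhook Subscriptions row above mentions HMAC-signed outbound delivery. A generic sketch of sign-and-verify with a timing-safe comparison — the hash algorithm, encoding, and any header name are assumptions to check against the webhook service's actual wire format:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Sign a webhook body with the subscription's shared secret.
// SHA-256 + hex encoding are illustrative choices, not confirmed ones.
function signWebhookBody(secret: string, body: string): string {
  return createHmac("sha256", secret).update(body).digest("hex");
}

// Receiver side: recompute and compare in constant time so signature
// checks do not leak timing information.
function verifyWebhookSignature(
  secret: string,
  body: string,
  signature: string,
): boolean {
  const expected = Buffer.from(signWebhookBody(secret, body), "hex");
  const given = Buffer.from(signature, "hex");
  return expected.length === given.length && timingSafeEqual(expected, given);
}
```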
@@ -80,7 +90,10 @@ all six AGNTCY domains:
 |-------|--------|-----------------|
 | Phase 1 — MVP | COMPLETE | Agent registry, OAuth 2.0 Client Credentials (RS256 JWTs), credential management (bcrypt), immutable audit log, Node.js SDK, Dockerfile, Docker Compose, AGNTCY alignment documentation, >80% test coverage |
 | Phase 2 — Production-Ready | COMPLETE | HashiCorp Vault opt-in integration, Python SDK (sync + async), Go SDK (context-aware), Java SDK (builder + CompletableFuture), OPA policy engine (Rego + Wasm + TypeScript fallback), React 18 + Vite 5 web dashboard, Prometheus metrics + Grafana dashboards, Terraform multi-region deployment (AWS ECS + RDS + ElastiCache; GCP Cloud Run + Cloud SQL + Memorystore) |
-| Phase 3 — Enterprise | PLANNED | AGNTCY federation (cross-IdP agent identity), W3C Decentralised Identifiers (DIDs), agent marketplace, advanced compliance reporting, SOC 2 Type II certification, enterprise tier (custom retention, SLAs, advanced RBAC) |
+| Phase 3 — Enterprise | COMPLETE | AGNTCY federation (cross-IdP agent identity), W3C Decentralised Identifiers (DIDs), agent marketplace, OIDC provider (A2A delegation), Rust SDK, developer portal (Next.js 14) |
+| Phase 4 — Compliance & Security | COMPLETE | AGNTCY compliance reports (agent-identity + audit-trail sections), audit hash chain verification, SOC 2 CC6.1 AES-256-CBC column encryption (`EncryptionService`), DID document caching, federation partner JWKS caching |
+| Phase 5 — Scale & Ecosystem | COMPLETE | Multi-tier subscription model (free/pro/enterprise), Stripe billing integration (`BillingService`, `TierService`), tier enforcement middleware (daily call and token limits), webhook subscriptions + delivery history (`WebhookService`), analytics service (daily event aggregation + trend queries) |
+| Phase 6 — Market Expansion | COMPLETE | AGNTCY conformance test suite (4 conformance scenarios), API tiers enforced end-to-end, analytics dashboard in developer portal, full Phase 6 engineering documentation update |
 
 ---
 
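Phase 4 above names SOC 2 CC6.1 AES-256-CBC column encryption (`EncryptionService`). A generic sketch of the technique — key management, IV storage, and the `iv:ciphertext` hex encoding here are illustrative choices, not the project's actual scheme:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Encrypt a column value with AES-256-CBC under a 32-byte key.
// A fresh random IV is generated per value and prepended to the output
// so decryption is self-contained.
function encryptColumn(key: Buffer, plaintext: string): string {
  const iv = randomBytes(16);
  const cipher = createCipheriv("aes-256-cbc", key, iv);
  const ct = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return `${iv.toString("hex")}:${ct.toString("hex")}`;
}

// Reverse: split off the IV, then decrypt the ciphertext.
function decryptColumn(key: Buffer, stored: string): string {
  const [ivHex, ctHex] = stored.split(":");
  const decipher = createDecipheriv("aes-256-cbc", key, Buffer.from(ivHex, "hex"));
  return Buffer.concat([
    decipher.update(Buffer.from(ctHex, "hex")),
    decipher.final(),
  ]).toString("utf8");
}
```

In a real deployment the key would come from Vault or an env secret, never be hard-coded, and authenticated modes (e.g. AES-GCM) are often preferred; CBC is shown only because the phase table names it.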
@@ -105,11 +118,15 @@ no implementation begins until an OpenAPI specification is approved by the CTO.
 
 ## 6. Free Tier Limits
 
-| Limit | Value |
-|-------|-------|
-| Max agents | 100 |
-| Max credentials per agent | No hard cap enforced in code (5 is the documented recommendation) |
-| Max tokens in flight | 10,000 per agent per calendar month |
-| Token TTL | 3,600 seconds (1 hour) |
-| Audit log retention | 90 days |
-| API rate limit | 100 requests per minute per IP address |
+| Limit | Free Tier | Pro Tier | Enterprise Tier |
+|-------|-----------|----------|-----------------|
+| Max agents | 100 | 1,000 | Unlimited |
+| Max API calls per day | Configured in `TIER_CONFIG` | Configured in `TIER_CONFIG` | Unlimited |
+| Max tokens per day | Configured in `TIER_CONFIG` | Configured in `TIER_CONFIG` | Unlimited |
+| Token TTL | 3,600 seconds (1 hour) | 3,600 seconds (1 hour) | 3,600 seconds (1 hour) |
+| Audit log retention | 90 days | 1 year | Custom |
+| API rate limit (per IP) | 100 req/min | 100 req/min | 100 req/min |
+| Webhook subscriptions | 0 | 10 | Unlimited |
+| Analytics retention | 90 days | 1 year | Custom |
+
+Tier limits are configured in `src/config/tiers.ts` (`TIER_CONFIG`). Enforcement is handled by `TierService.enforceAgentLimit()` (agent cap) and `src/middleware/tier.ts` (daily call/token caps). Tier upgrades are initiated via `POST /api/v1/tiers/upgrade` and confirmed via the Stripe webhook.
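The tier table and the `TIER_CONFIG` reference above can be sketched as a typed config plus a limit check. The agent caps and the free tier's 1,000 calls/day figure come from this document; the other numeric caps below are placeholders — the real numbers live in `src/config/tiers.ts`:

```typescript
type Tier = "free" | "pro" | "enterprise";

interface TierLimits {
  maxAgents: number | null; // null = unlimited
  maxApiCallsPerDay: number | null;
  maxTokensPerDay: number | null;
}

// Illustrative shape only. Free-tier daily calls (1,000) and the agent
// caps are documented; the token caps and pro-tier call cap are
// placeholder values.
const TIER_CONFIG: Record<Tier, TierLimits> = {
  free: { maxAgents: 100, maxApiCallsPerDay: 1_000, maxTokensPerDay: 1_000 },
  pro: { maxAgents: 1_000, maxApiCallsPerDay: 100_000, maxTokensPerDay: 100_000 },
  enterprise: { maxAgents: null, maxApiCallsPerDay: null, maxTokensPerDay: null },
};

// The check a tier-enforcement middleware would make before serving a
// request: under the cap, or the cap is unlimited.
function withinLimit(used: number, limit: number | null): boolean {
  return limit === null || used < limit;
}
```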
@@ -13,25 +13,33 @@ graph TD
 subgraph ExpressApp["Express App — src/app.ts"]
 Router["Router (src/routes/)"]
 AuthMW["authMiddleware (src/middleware/auth.ts)"]
+TierMW["tierMiddleware (src/middleware/tier.ts)"]
 OpaMW["opaMiddleware (src/middleware/opa.ts)"]
 Controller["Controller (src/controllers/)"]
 Service["Service (src/services/)"]
 Repository["Repository (src/repositories/)"]
-Router --> AuthMW --> OpaMW --> Controller --> Service --> Repository
+Router --> AuthMW --> TierMW --> OpaMW --> Controller --> Service --> Repository
 end
 
-Repository -->|parameterized SQL| PG["PostgreSQL 14\n(agents, credentials, audit_events, token_revocations)"]
-Service -->|Redis commands| Redis["Redis 7\n(token revocation list, monthly counts, rate-limit counters)"]
-Service -->|KV v2 read/write| Vault["HashiCorp Vault\n(opt-in — when VAULT_ADDR is set)"]
+Repository -->|parameterized SQL| PG["PostgreSQL 14\n(agents, credentials, audit_events,\nanalytics_events, organizations,\nfederation_partners, webhook_subscriptions,\nagent_did_keys, delegation_chains)"]
+Service -->|Redis commands| Redis["Redis 7\n(token revocation list, daily tier counters,\nJWKS cache, compliance report cache,\nDID document cache)"]
+Service -->|KV v2 read/write| Vault["HashiCorp Vault\n(opt-in — credentials, DID private keys,\nwebhook secrets — when VAULT_ADDR is set)"]
 
 ExpressApp -->|evaluate input| OPA["OPA Policy Engine\n(policies/authz.rego + data/scopes.json)"]
 ExpressApp -->|expose| Metrics["/metrics (prom-client)"]
+ExpressApp -->|checkout session / webhooks| Stripe["Stripe\n(billing — when STRIPE_SECRET_KEY is set)"]
 
 Dashboard["Dashboard SPA (React 18 + Vite 5)\ndashboard/dist/ served from /dashboard"]
+Portal["Developer Portal (Next.js 14)\nportal/ — served separately on port 3002"]
 Client -->|browser| Dashboard
+Client -->|browser| Portal
 Dashboard -->|REST API calls| ExpressApp
+Portal -->|REST API calls| ExpressApp
 
 Grafana["Grafana (port 3001)"] -->|scrapes| Metrics
+
+OIDCProvider["OIDC Provider (oidc-provider v9)\nmounted at /oidc — A2A delegation tokens"]
+ExpressApp --- OIDCProvider
 ```
 
 ---
2. App-level middleware runs in registration order: `helmet()` sets security headers, `cors()` applies CORS policy from `CORS_ORIGIN`, `morgan('combined')` logs the request line (skipped in `NODE_ENV=test`), `express.json()` and `express.urlencoded()` parse the body, `metricsMiddleware` (`src/middleware/metrics.ts`) starts the request timer and records `agentidp_http_requests_total` and `agentidp_http_request_duration_seconds` on response finish.

3. The Express router matches the path to a route definition in `src/routes/*.ts` and hands off to the appropriate middleware chain.

4. `authMiddleware` (`src/middleware/auth.ts`) validates the Bearer JWT: extracts the token from the `Authorization` header, calls `verifyToken()` for RS256 signature and expiry, then calls `redis.get('revoked:{jti}')` to check the revocation list. On success, attaches the decoded `ITokenPayload` to `req.user`.

5. `tierMiddleware` (`src/middleware/tier.ts`) enforces per-tier daily API call limits. It reads the organisation's current tier from `TierService.fetchTier(orgId)`, checks the daily call counter from Redis key `rate:tier:calls:<orgId>` against `TIER_CONFIG[tier].maxCallsPerDay`, increments the counter on each passing request (fire-and-forget `INCR` with TTL set to next UTC midnight), and throws `TierLimitError` (429) when the limit is reached. This middleware is applied only to API routes, not to `/health`, `/metrics`, or `/dashboard`.

6. `opaMiddleware` (`src/middleware/opa.ts`) evaluates the OPA policy: builds an `OpaInput` object from `req.method`, `req.baseUrl + req.path`, and `req.user.scope.split(' ')`, then calls `evaluate(input)`. Uses the Wasm bundle (`policies/authz.wasm`) when present, or the TypeScript fallback reading `policies/data/scopes.json`. Calls `next(new AuthorizationError())` if the policy denies.

7. The controller (`src/controllers/*.ts`) receives the validated request, extracts and validates path params and body using Joi schemas, then delegates to the service layer.

8. The service (`src/services/*.ts`) executes all business logic — enforces tier limits, resolves domain rules, and calls repositories. Phases 3–6 introduced specialised services: `AnalyticsService` (fire-and-forget event recording), `TierService` (enforces per-tier agent and call limits), `ComplianceService` (AGNTCY compliance reports, cached 5 min in Redis), `FederationService` (cross-IdP JWT verification with cached JWKS), `DIDService` (W3C DID document generation and caching), `WebhookService` (subscription management with Vault-backed HMAC secrets), and `BillingService` (Stripe Checkout and webhook processing). The service has no knowledge of HTTP.

9. The repository (`src/repositories/*.ts`) executes parameterized SQL against PostgreSQL via `node-postgres`, or issues Redis commands via the `redis` client. No business logic lives here. Phases 3–6 added the following tables: `analytics_events` (daily metric counters), `organizations` (org tier and billing), `federation_partners` (cross-IdP trust registry), `webhook_subscriptions` and `webhook_deliveries` (outbound event delivery), `agent_did_keys` (public EC keys for DID documents), `delegation_chains` (A2A delegation records), `tenant_subscriptions` (Stripe subscription status).

10. The controller serialises the service result and calls `res.status(xxx).json(payload)`.

11. `AuditService.logEvent()` is called — for high-throughput paths (token issuance, introspection, revocation) this is fire-and-forget (`void` — not awaited); for CRUD operations it is awaited. The audit event is written as an immutable row to the `audit_events` table in PostgreSQL.
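The input construction and scope check in step 6 can be sketched as follows. This is a minimal illustration of the TypeScript fallback path only: the `scopeMap` entries below are hypothetical, and the real map is loaded from `policies/data/scopes.json` (the Wasm bundle takes precedence when present).

```typescript
// Sketch of the TypeScript fallback described in step 6.
interface OpaInput {
  method: string;
  path: string;
  scopes: string[];
}

// Hypothetical excerpt of policies/data/scopes.json: maps
// "METHOD /path" to the scopes allowed to call that endpoint.
const scopeMap: Record<string, string[]> = {
  "GET /api/v1/agents": ["agents:read", "agents:admin"],
  "POST /api/v1/agents": ["agents:write", "agents:admin"],
};

// Mirrors how opaMiddleware assembles its input from the request
// and the decoded token payload attached by authMiddleware.
function buildInput(method: string, baseUrl: string, path: string, scope: string): OpaInput {
  return { method, path: baseUrl + path, scopes: scope.split(" ") };
}

// Allow when the token's scopes intersect the endpoint's allowed scopes;
// unknown endpoints are denied by default.
function isAllowed(input: OpaInput): boolean {
  const required = scopeMap[`${input.method} ${input.path}`];
  if (!required) return false;
  return input.scopes.some((s) => required.includes(s));
}
```

On a deny, the real middleware calls `next(new AuthorizationError())` rather than returning a boolean.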
---
---

## 3b. Analytics Event Capture Flow

Every successful token issuance writes a fire-and-forget analytics event:

```mermaid
sequenceDiagram
    participant Controller as TokenController
    participant OAuth2Svc as OAuth2Service
    participant AnalyticsSvc as AnalyticsService
    participant PG as PostgreSQL

    Controller->>OAuth2Svc: issueToken(clientId, clientSecret, scope, ...)
    OAuth2Svc->>OAuth2Svc: signToken() — RS256 JWT
    OAuth2Svc-->>Controller: ITokenResponse

    Note over OAuth2Svc,AnalyticsSvc: fire-and-forget (void)
    OAuth2Svc-)AnalyticsSvc: recordEvent(tenantId, 'token_issued')
    AnalyticsSvc-)PG: INSERT INTO analytics_events ... ON CONFLICT DO UPDATE count + 1
```

`recordEvent` uses a PostgreSQL `UPSERT` — one row per `(organization_id, date, metric_type)`. If the INSERT conflicts (same date, same org, same metric), the `count` column is incremented atomically. This keeps the table compact (one row per day per metric type per org) and fast to query.
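The upsert semantics can be sketched in TypeScript. The SQL string mirrors the `ON CONFLICT` clause described above (exact column order is an assumption), and the map-based stand-in simulates the one-row-per-`(organization_id, date, metric_type)` behaviour without a database:

```typescript
// Parameterized UPSERT matching the description above (a sketch; the
// real statement lives in the analytics repository).
const UPSERT_EVENT_SQL = `
  INSERT INTO analytics_events (organization_id, date, metric_type, count)
  VALUES ($1, CURRENT_DATE, $2, 1)
  ON CONFLICT (organization_id, date, metric_type)
  DO UPDATE SET count = analytics_events.count + 1`;

// In-memory simulation of the same semantics: the unique constraint
// becomes a map key, and a conflict becomes an increment.
const rows = new Map<string, number>();

function recordEvent(orgId: string, date: string, metricType: string): void {
  const key = `${orgId}|${date}|${metricType}`;
  rows.set(key, (rows.get(key) ?? 0) + 1);
}
```

Two events for the same tuple produce a single row with `count = 2`, which is exactly why the table stays compact.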
---

## 3c. Tier Enforcement Middleware Chain

```mermaid
sequenceDiagram
    actor Agent
    participant TierMW as tierMiddleware
    participant TierSvc as TierService
    participant Redis
    participant PG as PostgreSQL

    Agent->>TierMW: API request (with valid Bearer token)
    TierMW->>TierSvc: fetchTier(orgId)
    TierSvc->>PG: SELECT tier FROM organizations WHERE organization_id = $1
    PG-->>TierSvc: 'pro'
    TierSvc-->>TierMW: 'pro'

    TierMW->>Redis: GET rate:tier:calls:<orgId>
    Redis-->>TierMW: "4999" (current daily count)

    Note over TierMW: TIER_CONFIG['pro'].maxCallsPerDay = 50000 — limit not reached

    TierMW-)Redis: INCR rate:tier:calls:<orgId> (fire-and-forget, TTL = next UTC midnight)
    TierMW->>Agent: next() — request proceeds to opaMiddleware
```

When the counter equals or exceeds the tier limit, `tierMiddleware` throws `TierLimitError` (429) before `opaMiddleware` runs. The daily counter resets at UTC midnight via Redis TTL.
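The two calculations in this chain, the limit comparison and the counter's expiry, can be sketched as pure functions. Only the `'pro'` limit of 50000 appears above; the `free` value here is an assumption for illustration:

```typescript
// Sketch of the tier-limit check and the Redis TTL calculation.
const TIER_CONFIG: Record<string, { maxCallsPerDay: number }> = {
  free: { maxCallsPerDay: 1000 }, // assumed value, not stated in the docs
  pro: { maxCallsPerDay: 50000 }, // matches the diagram above
};

// "Equals or exceeds" semantics from the text; unknown tiers are
// treated as limit 0 and therefore always over.
function isOverLimit(tier: string, currentCount: number): boolean {
  const limit = TIER_CONFIG[tier]?.maxCallsPerDay ?? 0;
  return currentCount >= limit;
}

// TTL to attach to `INCR rate:tier:calls:<orgId>` so the counter
// expires at the next UTC midnight. Date.UTC normalises day + 1
// across month and year boundaries.
function secondsUntilUtcMidnight(now: Date): number {
  const next = Date.UTC(now.getUTCFullYear(), now.getUTCMonth(), now.getUTCDate() + 1);
  return Math.ceil((next - now.getTime()) / 1000);
}
```

In the diagram's scenario (`"4999"` against a 50000 limit), `isOverLimit` is false and the request proceeds.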
---

## 3d. A2A Delegation End-to-End Flow

```mermaid
sequenceDiagram
    actor Delegator as Delegator Agent
    actor Delegatee as Delegatee Agent
    participant AgentIdP
    participant DelegationSvc as DelegationService
    participant OIDCProvider as OIDC Provider
    participant PG as PostgreSQL

    Delegator->>AgentIdP: POST /api/v1/oauth2/token/delegate<br/>{ delegatee_id, scope }
    AgentIdP->>DelegationSvc: createDelegation(delegatorId, delegateeId, scope)
    DelegationSvc->>PG: INSERT INTO delegation_chains ...
    PG-->>DelegationSvc: chain_id
    DelegationSvc->>OIDCProvider: issue delegation JWT (delegator claims + delegatee sub)
    OIDCProvider-->>DelegationSvc: signed delegation token
    DelegationSvc-->>AgentIdP: IDelegationChain (with token)
    AgentIdP-->>Delegator: 201 { token, chain_id }

    Note over Delegatee,AgentIdP: Delegatee uses the delegation token
    Delegatee->>AgentIdP: POST /api/v1/oauth2/token/verify-delegation<br/>{ token }
    AgentIdP->>DelegationSvc: verifyDelegation(token, delegateeId)
    DelegationSvc->>PG: SELECT * FROM delegation_chains WHERE chain_id = $1 AND status = 'active'
    PG-->>DelegationSvc: chain row (not expired, not revoked)
    DelegationSvc->>OIDCProvider: verify token signature
    OIDCProvider-->>DelegationSvc: verified claims
    DelegationSvc-->>AgentIdP: IDelegationVerifyResult { valid: true, ... }
    AgentIdP-->>Delegatee: 200 { valid: true, delegatorId, scope }
```
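The freshness checks on the `delegation_chains` row ("not expired, not revoked") can be sketched as a pure predicate. The `expiresAt` field name is an assumption; the text above only names `chain_id` and `status`:

```typescript
// Sketch of the row-level validity check inside verifyDelegation,
// separate from the cryptographic verification done by the OIDC provider.
interface DelegationChainRow {
  chainId: string;
  status: "active" | "revoked" | "expired";
  expiresAt: string; // ISO 8601; assumed column name
}

function isChainUsable(row: DelegationChainRow, now: Date): boolean {
  if (row.status !== "active") return false;        // revoked/expired rows fail fast
  return Date.parse(row.expiresAt) > now.getTime(); // wall-clock expiry check
}
```

Only when this predicate passes does the service go on to verify the token signature against the OIDC provider's keys.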

---

## 4. Multi-Region Deployment Topology

```mermaid
- PostgreSQL for revocation — rejected because the token verification path is the hot path in every authenticated request. A PostgreSQL round-trip adds 5–15 ms compared to a Redis `GET` at sub-millisecond latency.

**Consequences**: Redis is a required infrastructure dependency. A Redis instance must
be running and reachable via `REDIS_URL` before the server starts. `compose.yaml`
provides a Redis 7.2 Alpine container for local development on port 6379.

---
The `prom-client` npm package integrates natively with Express and
provides `Counter` and `Histogram` metric types that cover all observability needs for
AgentIdP. Grafana's YAML provisioning in `monitoring/grafana/provisioning/` makes
dashboards reproducible and version-controlled. The monitoring stack runs as a Docker
Compose overlay (`compose.monitoring.yaml`) without interfering with the base dev
environment.

**Alternatives considered**:
via the AWS console or GCP console are permitted — they will be overwritten on the next
`terraform apply`. Terraform state is stored in a remote backend and must not be edited
manually.

---
### ADR-11: Stripe

**Status**: Adopted
**Component**: Billing — subscription management and payment processing

**Decision**: Use Stripe as the payment processing and subscription management platform. The `stripe` npm package (v21+) handles Checkout Session creation, webhook event verification, and subscription lifecycle events.

**Rationale**: Stripe's hosted Checkout flow keeps card data out of AgentIdP's PCI-DSS scope. The `stripe.webhooks.constructEvent()` method uses HMAC-SHA256 to verify incoming webhook payloads, preventing replay attacks. The `checkout.session.completed` event carries `metadata: { orgId, targetTier }`, allowing `BillingService` to delegate tier upgrades to `TierService.applyUpgrade()` without coupling billing logic to tier logic.

**Alternatives considered**:
- Paddle — rejected because its global merchant-of-record model introduced complexities with the open-source free tier.
- Braintree — rejected because Stripe's webhook reliability and developer experience are superior.

**Consequences**: Stripe requires `STRIPE_SECRET_KEY` (for API calls) and `STRIPE_WEBHOOK_SECRET` (`whsec_...`, for webhook verification). Per-tier Stripe price IDs are configured via `STRIPE_PRICE_ID_PRO` and `STRIPE_PRICE_ID_ENTERPRISE`. All billing webhook handlers must pass the raw `Buffer` body (not parsed JSON) to `stripe.webhooks.constructEvent()` — use `express.raw()` middleware on the webhook route.
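The verification that `stripe.webhooks.constructEvent()` performs can be sketched with `node:crypto`. This is a simplified illustration of Stripe's scheme, which signs `timestamp.rawBody` with the `whsec_` secret; the real helper also parses the `Stripe-Signature` header and enforces a timestamp tolerance:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Compute the HMAC-SHA256 signature over `${timestamp}.${rawBody}`.
// Signing the raw bytes is why the route must use express.raw():
// re-serialised JSON would not match byte-for-byte.
function signPayload(secret: string, timestamp: number, rawBody: Buffer): string {
  return createHmac("sha256", secret)
    .update(`${timestamp}.${rawBody.toString("utf8")}`)
    .digest("hex");
}

// Constant-time comparison of the expected and presented signatures.
function verifySignature(secret: string, timestamp: number, rawBody: Buffer, signature: string): boolean {
  const expected = Buffer.from(signPayload(secret, timestamp, rawBody), "hex");
  const presented = Buffer.from(signature, "hex");
  return expected.length === presented.length && timingSafeEqual(expected, presented);
}
```

In production, always call the official `constructEvent()` rather than a hand-rolled check like this one.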

---
### ADR-12: oidc-provider (A2A Delegation)

**Status**: Adopted
**Component**: A2A delegation — OIDC provider for agent-to-agent trust tokens

**Decision**: Use the `oidc-provider` npm package (v9.7.x) as the OIDC provider for issuing A2A delegation tokens. The provider is mounted as a sub-application at `/oidc` within the Express app.

**Rationale**: `oidc-provider` is a certified OpenID Connect implementation that handles the full OIDC protocol, including JWKS serving, token endpoint, and discovery document. Using a standards-compliant OIDC provider rather than a custom delegation token format means delegation tokens can be verified by any OIDC-aware party using the published JWKS at `/oidc/jwks`.

**Alternatives considered**:
- Custom JWT signing — rejected because hand-rolled token formats cannot benefit from OIDC tooling and interoperability.

**Consequences**: `A2A_ENABLED` env var gates the OIDC provider — when set to `'false'`, delegation endpoints return 404. The `OIDC_ISSUER` env var must be set to the full base URL of the OIDC provider (e.g. `https://api.sentryagent.ai`).

---
### ADR-13: Next.js 14 (Developer Portal)

**Status**: Adopted
**Component**: Developer Portal (`portal/`) — public-facing documentation and onboarding

**Decision**: Use Next.js 14 (App Router) with Tailwind CSS for the developer portal. The portal is a separate process served on its own port (independent of the Express API server).

**Rationale**: The developer portal has different performance and SEO requirements than the internal operator dashboard (`dashboard/`). Next.js 14's App Router supports React Server Components, which allows the marketing and documentation pages to be statically generated while the analytics dashboard and API Explorer are client-rendered. Tailwind CSS enables rapid UI development consistent with the design system.

**Alternatives considered**:
- Extending the Vite dashboard — rejected because the developer portal requires server-side rendering for SEO on marketing pages, which Vite does not provide.
- Docusaurus — rejected because the portal includes interactive components (Swagger Explorer, analytics charts) that are not well-suited to a documentation-only tool.

**Consequences**: The portal (`portal/`) has its own `package.json`, `tsconfig.json`, `tailwind.config.ts`, and `next.config.js`. It is built and run independently: `cd portal && npm install && npm run dev`. The portal calls the AgentIdP REST API using the same `@sentryagent/idp-sdk` as the dashboard.

---
### ADR-14: bull (Job Queue) + kafkajs (Event Streaming)

**Status**: Adopted (opt-in)
**Component**: Async job processing and event streaming

**Decision**: Use `bull` (Redis-backed job queue) for async webhook delivery retries and `kafkajs` for event streaming to external consumers. Both are opt-in — the system operates correctly without Kafka configured.

**Rationale**: Webhook delivery requires retry logic with exponential backoff and dead-letter handling. `bull` provides this out of the box using the existing Redis dependency. `kafkajs` enables high-throughput event streaming for analytics and audit events to external data pipelines without blocking the primary request path.

**Alternatives considered**:
- BullMQ — considered as a more modern alternative to `bull` but rejected to avoid adding a new package family during Phase 6. Migration is a future backlog item.

**Consequences**: Kafka is entirely optional. When `KAFKA_BROKERS` is not set, `kafkajs` is not initialised and no events are published. The `bull` queue for webhook delivery requires only the existing Redis instance.
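The retry schedule can be sketched as follows, assuming bull's built-in `{ type: 'exponential', delay }` backoff strategy, which doubles the delay on each successive attempt; the 1-second base delay and 5-attempt cap are assumptions, not values from this repo:

```typescript
// Delay before retry N of a failed webhook delivery, mirroring an
// exponential backoff strategy: base * 2^(attempt - 1).
function backoffDelayMs(attempt: number, baseDelayMs = 1000): number {
  return baseDelayMs * 2 ** (attempt - 1);
}

// Hypothetical schedule for a 5-attempt job: after the final attempt,
// the job would move to the dead-letter set rather than retry again.
function retrySchedule(maxAttempts: number): number[] {
  return Array.from({ length: maxAttempts }, (_, i) => backoffDelayMs(i + 1));
}
```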

---
### ADR-15: did-resolver + web-did-resolver (W3C DIDs)

**Status**: Adopted
**Component**: W3C DID Core 1.0 document resolution

**Decision**: Use `did-resolver` (v4.1.x) as the DID resolution framework and `web-did-resolver` (v2.0.x) for the `did:web` method implementation.

**Rationale**: `did-resolver` provides a pluggable resolver interface used by both the server (for internal resolution) and by third parties who want to verify AgentIdP-issued DIDs. The `did:web` method maps DID identifiers to HTTPS URLs hosting the DID document JSON, requiring no blockchain. `DIDService` generates documents that conform to the W3C DID Core 1.0 specification and include AGNTCY-specific extension fields.

**Consequences**: `DID_WEB_DOMAIN` env var is required for DID generation. DID documents are cached in Redis (`did:doc:<agentId>`, TTL from `DID_DOCUMENT_CACHE_TTL_SECONDS`, default 300s). Private keys are stored in HashiCorp Vault KV v2 when Vault is configured; in dev mode, a `dev:no-vault` marker is stored and keys are ephemeral.
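A minimal sketch of the kind of `did:web` document `DIDService` produces. The `agents/<agentId>` path segment, the `#key-1` id, and the `JsonWebKey2020` type are assumptions for illustration; the real documents also carry the AGNTCY extension fields mentioned above:

```typescript
// Build an illustrative did:web document for an agent. Per the did:web
// method, path segments after the domain are joined with ':' in the DID
// and resolve to https://<domain>/agents/<agentId>/did.json.
function buildDidWebDocument(domain: string, agentId: string, publicKeyJwk: object) {
  const did = `did:web:${domain}:agents:${agentId}`;
  return {
    "@context": ["https://www.w3.org/ns/did/v1"],
    id: did,
    verificationMethod: [
      { id: `${did}#key-1`, type: "JsonWebKey2020", controller: did, publicKeyJwk },
    ],
    authentication: [`${did}#key-1`],
  };
}
```

Because the document is plain HTTPS-hosted JSON, any party holding `web-did-resolver` (or even `curl`) can resolve and verify it without touching AgentIdP's API.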
sentryagent-idp/
├── sdk-python/                 # Python SDK (sentryagent-idp) — sync + async clients
├── sdk-go/                     # Go SDK (github.com/sentryagent/idp-sdk-go) — context-aware, goroutine-safe
├── sdk-java/                   # Java SDK (ai.sentryagent:idp-sdk) — builder pattern, CompletableFuture
├── sdk-rust/                   # Rust SDK (sentryagent-idp crate) — async, tokio, reqwest, typed errors
├── policies/                   # OPA policy files
│   ├── authz.rego              # Rego policy — normalise_path + scope-intersection allow rule
│   └── data/scopes.json        # Endpoint permission map — used by Rego and TypeScript fallback
├── portal/                     # Developer Portal — Next.js 14 App Router, Tailwind CSS
│   ├── app/                    # Next.js App Router pages (get-started, pricing, sdks, analytics, settings, login)
│   ├── components/             # Shared UI components (Nav.tsx, SwaggerExplorer.tsx, GetStartedWizard.tsx)
│   ├── hooks/                  # React hooks (useAuth.ts)
│   └── types/                  # TypeScript type definitions for portal-only types
├── terraform/                  # Terraform infrastructure as code
│   ├── modules/                # Reusable modules: agentidp, lb, rds, redis
│   └── environments/           # Environment configs: aws/ (ECS+RDS+ElastiCache), gcp/ (Cloud Run+SQL+Memorystore)
…
│   ├── agntcy/                 # AGNTCY alignment documentation
│   └── openapi/                # OpenAPI 3.0 specification files
├── openspec/                   # OpenSpec change management — proposals, designs, specs, tasks, archives
├── tests/                      # Jest test suite — mirrors src/ structure
│   ├── unit/                   # Unit tests (mocked dependencies) — mirrors src/
│   ├── integration/            # Integration tests (real DB + Redis)
│   ├── agntcy-conformance/     # AGNTCY conformance test suite (separate Jest config)
│   └── load/                   # k6 load test scripts
├── Dockerfile                  # Multi-stage production build (build + runtime stages)
├── compose.yaml                # Local development: PostgreSQL 14.12 (port 5432) + Redis 7.2 (port 6379)
├── compose.monitoring.yaml     # Monitoring overlay: Prometheus (port 9090) + Grafana (port 3001)
├── package.json                # Node.js dependencies and npm scripts
├── tsconfig.json               # TypeScript strict configuration — compiled to dist/
└── jest.config.ts              # Jest configuration — ts-jest, test timeouts, coverage thresholds
| `src/metrics/` | Prometheus metrics registry — all `Counter` and `Histogram` definitions in one place | Only file that calls `new Counter()` or `new Histogram()`; all other files import from here |
| `src/db/` | PostgreSQL connection pool factory (`pool.ts`) and numbered SQL migration files in `migrations/` | Pool is a singleton created once in `src/app.ts` and passed to repositories |
| `src/cache/` | Redis client factory — creates and caches a single `redis` client instance | Client is a singleton created once in `src/app.ts` and passed to repositories |
| `src/config/` | Configuration constants — `tiers.ts` exports `TIER_CONFIG`, `TIER_RANK`, `TierName`, and `isTierName()` type guard | Imported by `TierService` and `tierMiddleware`; never imports from services |
| `src/middleware/tier.ts` | Tier enforcement middleware — reads org tier from `TierService`, checks daily call counter in Redis, throws `TierLimitError` (429) when limit is exceeded, increments counter on pass | Applied only to API routes; skips `/health`, `/metrics`, and static file routes |

---
| A new environment variable | `src/utils/config.ts` (if it exists) or the relevant consumer file + `docs/devops/environment-variables.md` | `RATE_LIMIT_MAX` controlling the rate-limit ceiling |
| A new Prometheus metric | `src/metrics/registry.ts` | A `Histogram` for Vault lookup duration |
| A new TypeScript type used in 2+ files | `src/types/index.ts` | A new `AgentGroupMembership` interface |
| A new tier-gated feature | `src/config/tiers.ts` (add limit field) + `src/middleware/tier.ts` (add check) + service (enforce) | Adding a `maxWebhooksPerOrg` tier limit |
| A webhook event handler | `src/services/WebhookService.ts` (add event type to `WebhookEventType`) + the producer that calls `void webhookService.dispatch(orgId, eventType, payload)` | Emitting `agent.decommissioned` events to subscriber URLs |
| A new analytics metric type | `src/services/AnalyticsService.ts` (call `recordEvent(tenantId, 'new_metric')` in the relevant service using `void`) | Recording `credential_rotated` events for analytics |
| A new DID endpoint | `src/controllers/DIDController.ts` + `src/routes/did.ts` + `src/services/DIDService.ts` (if new method needed) + `policies/data/scopes.json` | Adding `GET /api/v1/agents/:id/did/rotate-key` |

---
The `errorHandler` middleware in `src/middleware/errorHandler.ts` maps
`SentryAgentError` subclasses to their `httpStatus` codes and serialises the response
as `IErrorResponse { code, message, details }`.

**`compose.yaml`**
Starts PostgreSQL 14.12 (Alpine) on port 5432 and Redis 7.2 (Alpine) on port 6379.
All services use a dedicated `app-tier` bridge network, `restart: unless-stopped`,
and `deploy.resources.limits` per DockerSpec standards. Both infrastructure services
have health checks so `depends_on` conditions work correctly. The `app` service mounts
`./src` as a read-only bind volume for live code reloading and has its own
`healthcheck` probe via `curl /health`. Postgres credentials and the Grafana admin
password are externalized to environment variables — see `docs/devops/environment-variables.md`.

**`tsconfig.json`**
TypeScript compiler configuration. `strict: true` enables the full suite of strictness
Start the monitoring overlay:

```bash
docker compose -f compose.yaml -f compose.monitoring.yaml up
```

- Prometheus: `http://localhost:9090`
- Grafana: `http://localhost:3001` — credentials: `admin` / `<GF_ADMIN_PASSWORD from .env>`

Grafana is pre-provisioned with a Prometheus data source pointing to `http://prometheus:9090`
and dashboard JSON files from `monitoring/grafana/dashboards/`. No manual configuration
is needed after startup.

---
|
### AnalyticsService

**Purpose**: Records daily aggregated analytics events (token issuances, agent activity) and exposes query methods for token trends, agent activity heatmaps, and per-agent usage summaries. All query methods scope results strictly to the supplied `tenantId`. The `recordEvent` method is fire-and-forget — it catches all errors internally and never propagates them to the caller, so analytics writes never block primary request paths.

**Public methods**:

| Method | Parameters | Returns | Description |
|--------|-----------|---------|-------------|
| `recordEvent` | `tenantId: string, metricType: string` | `Promise<void>` | Upserts a daily counter row in `analytics_events` via `INSERT ... ON CONFLICT DO UPDATE SET count = count + 1`. Catches and swallows all errors; safe to call with `void` on hot paths. |
| `getTokenTrend` | `tenantId: string, days: number` | `Promise<ITokenTrendEntry[]>` | Returns daily token issuance counts for the last N days (clamped to 90). Uses `generate_series` + `LEFT JOIN` so that days with no events appear as `count: 0`. Results sorted ascending by date. |
| `getAgentActivity` | `tenantId: string` | `Promise<IAgentActivityEntry[]>` | Returns agent activity bucketed by day-of-week (0=Sun…6=Sat) and hour-of-day for the last 30 days. Reads only rows whose `metric_type` matches the pattern `agent:<agentId>:<metricType>`. |
| `getAgentUsageSummary` | `tenantId: string` | `Promise<IAgentUsageSummaryEntry[]>` | Returns per-agent token issuance totals for the current calendar month, joined with the agent name (`owner` field). Sorted descending by `token_count`. Excludes decommissioned agents. |

**Dependencies**: PostgreSQL connection pool (`Pool` from `pg`). No Redis usage.

**Configuration**: None. `MAX_TREND_DAYS = 90` is a module-level constant.

**DB tables**:

- `analytics_events`: `organization_id` (UUID FK to `organizations`), `date` (DATE), `metric_type` (text — e.g. `'token_issued'`, `'agent:<agentId>:token_issued'`), `count` (integer). Unique constraint on `(organization_id, date, metric_type)`.
- `agents`: read in `getAgentUsageSummary` to join `owner` and filter by `organization_id`.
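
The `agent:<agentId>:<metricType>` naming convention above can be parsed with a small helper. A sketch, where the `parseAgentMetric` name and interface are illustrative rather than taken from the codebase:

```typescript
// Hypothetical helper for the metric_type convention described above;
// parseAgentMetric is an illustrative name, not from the codebase.
interface IAgentMetric {
  agentId: string;
  metricType: string;
}

function parseAgentMetric(metricType: string): IAgentMetric | null {
  // Expected shape: agent:<agentId>:<metricType>
  const match = /^agent:([^:]+):(.+)$/.exec(metricType);
  if (match === null) return null;
  return { agentId: match[1], metricType: match[2] };
}
```

Rows whose `metric_type` lacks the `agent:` prefix (e.g. a plain `'token_issued'`) return `null`, which matches the heatmap query reading only the prefixed rows.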

---

### TierService

**Purpose**: Single authority for all subscription tier business logic — fetches current tier and live usage, initiates Stripe Checkout sessions for upgrades, applies confirmed upgrades to the `organizations` table, and enforces per-tier agent count limits. Controllers and middleware delegate all tier decisions to this service; no tier logic lives elsewhere.

**Public methods**:

| Method | Parameters | Returns | Description |
|--------|-----------|---------|-------------|
| `getStatus` | `orgId: string` | `Promise<ITierStatus>` | Returns current `tier`, per-tier `limits` (from `TIER_CONFIG`), live `usage` (Redis counters + DB agent count), and `resetAt` (ISO 8601 next UTC midnight). Falls back to `0` for Redis counters when Redis is unavailable. |
| `initiateUpgrade` | `orgId: string, targetTier: TierName` | `Promise<IUpgradeInitiation>` | Validates that `targetTier` is strictly higher rank than current tier. Creates a Stripe Checkout Session with `mode: 'subscription'`, `metadata: { orgId, targetTier }`, and the price ID from `STRIPE_PRICE_ID_<TIER>` env var. Returns `{ checkoutUrl }`. |
| `applyUpgrade` | `orgId: string, tier: TierName` | `Promise<void>` | Sets `organizations.tier` and `organizations.tier_updated_at = NOW()`. Called by the Stripe webhook handler after `checkout.session.completed`. |
| `fetchTier` | `orgId: string` | `Promise<TierName>` | Queries `organizations.tier` for the given org. Returns `'free'` as a safe default when no row is found or the stored value is not a valid `TierName`. |
| `enforceAgentLimit` | `orgId: string, tier: TierName` | `Promise<void>` | Counts non-decommissioned agents for the org and throws `TierLimitError` when count is at or over `TIER_CONFIG[tier].maxAgents`. No-op for Enterprise (unlimited). Called by `AgentService` before creating a new agent. |

**Dependencies**: PostgreSQL (`Pool`), Redis (`RedisClientType`), Stripe client (`Stripe`). Imports `TIER_CONFIG` and `TIER_RANK` from `src/config/tiers.ts`.

**Configuration**:

- `STRIPE_PRICE_ID_PRO` — Stripe price ID for the Pro tier
- `STRIPE_PRICE_ID_ENTERPRISE` — Stripe price ID for the Enterprise tier
- `STRIPE_PRICE_ID` — Fallback Stripe price ID when tier-specific vars are not set
- `STRIPE_SUCCESS_URL` — Redirect URL on successful checkout (default: `APP_BASE_URL/dashboard?billing=success`)
- `STRIPE_CANCEL_URL` — Redirect URL when checkout is cancelled (default: `APP_BASE_URL/dashboard?billing=cancel`)
- `APP_BASE_URL` — Base URL for redirect URL construction (default: `http://localhost:3000`)

**Redis keys**:

- `rate:tier:calls:<orgId>` — integer, daily API call counter; TTL set at next UTC midnight. Read in `getStatus`.
- `rate:tier:tokens:<orgId>` — integer, daily token issuance counter; same TTL. Read in `getStatus`.

**DB tables**:

- `organizations`: `organization_id` (UUID PK), `tier` (text — `'free'|'pro'|'enterprise'`), `tier_updated_at` (timestamptz). Read in `fetchTier`; written in `applyUpgrade`.
- `agents`: read in `enforceAgentLimit` and `getStatus` to count non-decommissioned agents per org.

**Error types**:

- `ValidationError` (400) — target tier is not higher than current tier
- `TierLimitError` (429) — agent count limit reached for the current tier
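
The strictly-higher-rank rule enforced by `initiateUpgrade` reduces to a pure check. The numeric rank values below are assumptions; only the `TIER_RANK` constant name comes from `src/config/tiers.ts`:

```typescript
// Illustrative sketch of the upgrade rank check. The numeric ranks are
// assumptions; only the TIER_RANK name appears in src/config/tiers.ts.
type TierName = 'free' | 'pro' | 'enterprise';

const TIER_RANK: Record<TierName, number> = { free: 0, pro: 1, enterprise: 2 };

function assertUpgradeAllowed(current: TierName, target: TierName): void {
  if (TIER_RANK[target] <= TIER_RANK[current]) {
    // Mirrors the ValidationError (400) described above
    throw new Error(`target tier '${target}' is not higher than current tier '${current}'`);
  }
}
```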

---

### ComplianceService

**Purpose**: Generates AGNTCY-standard compliance reports and exports agent cards for a tenant. Reports cover two sections: `agent-identity` (DID presence and credential expiry checks) and `audit-trail` (cryptographic hash chain verification). Reports are cached in Redis for 5 minutes to avoid repeated expensive DB queries. Agent card export returns all active agents in AGNTCY-standard JSON format.

**Public methods**:

| Method | Parameters | Returns | Description |
|--------|-----------|---------|-------------|
| `generateReport` | `tenantId: string` | `Promise<IComplianceReport>` | Attempts to read `compliance:report:<tenantId>` from Redis; if found, returns it with `from_cache: true`. Otherwise builds the report by running `buildAgentIdentitySection` and `buildAuditTrailSection` in parallel, rolls up the overall status (fail > warn > pass), caches the result for 300 seconds, and returns it. |
| `exportAgentCards` | `tenantId: string` | `Promise<IAgentCard[]>` | Queries all non-decommissioned agents for the tenant and maps each to an AGNTCY agent card with `id` (DID or agent UUID), `name`, `capabilities`, `endpoint`, `created_at`, and `agntcy_schema_version: '1.0'`. |

**Dependencies**: PostgreSQL (`Pool`), Redis (`RedisClientType`). Internally instantiates `AuditVerificationService` for hash chain verification.

**Configuration**: None. `CACHE_TTL_SECONDS = 300` and `AGNTCY_SCHEMA_VERSION = '1.0'` are module-level constants.

**Redis keys**:

- `compliance:report:<tenantId>` — JSON-serialised `IComplianceReport`, TTL 300 seconds. Written by `generateReport`; read on every call within the cache window.

**DB tables**:

- `agents`: queried in both `buildAgentIdentitySection` (checks DID presence) and `exportAgentCards`.
- `credentials`: queried in `buildAgentIdentitySection` to check active credential expiry per agent.
- `audit_events`: read via `AuditVerificationService` in `buildAuditTrailSection` to verify hash chain integrity.

**Error types**: None thrown directly. Internal errors in section builders produce `status: 'fail'` sections rather than exceptions.

**Report structure**:

- `agent-identity` section: `fail` when any active agent is missing a DID or has expired credentials; `warn` when any credential expires within 7 days; `pass` otherwise.
- `audit-trail` section: `fail` when `AuditVerificationService.verifyChain()` returns `verified: false`; `pass` otherwise.
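
The fail > warn > pass rollup applied by `generateReport` reduces to a small pure function (the `rollupStatus` name is illustrative):

```typescript
// Illustrative sketch of the overall-status rollup (fail > warn > pass)
// described above; rollupStatus is not a name from the codebase.
type SectionStatus = 'pass' | 'warn' | 'fail';

function rollupStatus(sections: SectionStatus[]): SectionStatus {
  if (sections.includes('fail')) return 'fail';
  if (sections.includes('warn')) return 'warn';
  return 'pass';
}
```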

---

### FederationService

**Purpose**: Manages trusted federation partners and cross-IdP JWT token verification. At partner registration, the partner's JWKS endpoint is validated and the keys are cached in Redis. At token verification, the service fetches (or reuses cached) partner JWKS, verifies the JWT signature and standard claims, enforces the partner's `allowed_organizations` filter, and rejects tokens from suspended or expired partners.

**Public methods**:

| Method | Parameters | Returns | Description |
|--------|-----------|---------|-------------|
| `registerPartner` | `req: ICreatePartnerRequest` | `Promise<IFederationPartner>` | Validates the `jwks_uri` is reachable (5-second timeout) and returns valid JWKS. Inserts the partner row into `federation_partners`. Caches the JWKS in Redis under `federation:jwks:<issuer>`. |
| `listPartners` | _(none)_ | `Promise<IFederationPartner[]>` | Updates any partners past `expires_at` to `status = 'expired'` before returning all rows ordered by `created_at DESC`. |
| `getPartner` | `id: string` | `Promise<IFederationPartner>` | Applies the same expiry update, then returns the partner row. Throws `FederationPartnerNotFoundError` (404) when not found. |
| `updatePartner` | `id: string, req: IUpdatePartnerRequest` | `Promise<IFederationPartner>` | Applies a partial update. When `jwks_uri` changes, invalidates the old issuer's JWKS cache entry (`DEL federation:jwks:<oldIssuer>`). |
| `deletePartner` | `id: string` | `Promise<void>` | Deletes the partner row and invalidates the JWKS cache. |
| `verifyFederatedToken` | `req: IFederationVerifyRequest` | `Promise<IFederationVerifyResult>` | Decodes token header/payload without verification, rejects `alg:none`, looks up partner by `iss`, checks partner status and expiry, fetches JWKS (cache-first), finds matching key by `kid`, converts JWK to PEM, verifies signature via `jsonwebtoken.verify` (RS256 or ES256), enforces `allowed_organizations` filter. Returns `{ valid, issuer, subject, organization_id, claims }`. |

**Dependencies**: PostgreSQL (`Pool`), Redis (`RedisClientType`). Uses `jsonwebtoken` for JWT decoding/verification and Node.js `crypto.createPublicKey` for JWK-to-PEM conversion.

**Configuration**:

- `FEDERATION_JWKS_CACHE_TTL_SECONDS` — TTL for cached partner JWKS in Redis (default: `3600`)

**Redis keys**:

- `federation:jwks:<issuer>` — JSON-serialised `IJWKSKey[]`, TTL from `FEDERATION_JWKS_CACHE_TTL_SECONDS`. Written on partner registration and on cache miss during token verification; deleted when a partner is updated (JWKS URI changed) or deleted.

**DB tables**:

- `federation_partners`: `id` (UUID PK), `name` (text), `issuer` (text — the IdP's issuer URL), `jwks_uri` (text), `allowed_organizations` (text[] — empty means all orgs allowed), `status` (`active|suspended|expired`), `created_at`, `updated_at`, `expires_at` (nullable timestamptz).

**Error types**:

- `FederationPartnerError` (400) — JWKS endpoint unreachable or returns invalid JWKS
- `FederationPartnerNotFoundError` (404) — partner UUID not found
- `FederationVerificationError` (401) — token malformed, alg:none, unknown issuer, partner suspended/expired, signature invalid, org not in allow list
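
The first stage of `verifyFederatedToken`, decoding the header without verification and rejecting `alg: none`, can be sketched as follows (helper names are illustrative):

```typescript
// Illustrative sketch: decode a JWT header without verifying the
// signature, then reject the 'none' algorithm before any key lookup.
// decodeHeader/assertSignedAlg are not names from the codebase.
interface IJwtHeader {
  alg: string;
  kid?: string;
}

function decodeHeader(token: string): IJwtHeader {
  const parts = token.split('.');
  if (parts.length !== 3) throw new Error('malformed token');
  return JSON.parse(Buffer.from(parts[0], 'base64url').toString('utf8')) as IJwtHeader;
}

function assertSignedAlg(header: IJwtHeader): void {
  if (header.alg.toLowerCase() === 'none') {
    // Mirrors the FederationVerificationError (401) described above
    throw new Error('alg:none tokens are rejected');
  }
}
```

Only after these checks pass does the real service look up the partner by `iss` and verify the signature against the cached JWKS.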

---

### DIDService

**Purpose**: Manages W3C DID Core 1.0 document generation, EC P-256 key pair creation, and AGNTCY agent card export. Generates per-agent `did:web` identifiers, stores private keys in HashiCorp Vault (or records a `dev:no-vault` marker in dev mode), and caches DID documents in Redis. Builds both an instance-level DID document (for AgentIdP itself) and per-agent DID documents with AGNTCY extension properties.

**Public methods**:

| Method | Parameters | Returns | Description |
|--------|-----------|---------|-------------|
| `generateDIDForAgent` | `agentId: string, organizationId: string` | `Promise<{ did: string; publicKeyJwk: IPublicKeyJwk }>` | Generates an EC P-256 key pair. Stores the private key PEM in Vault KV v2 at `{mount}/data/agentidp/agents/{agentId}/did-key`. Encrypts the vault path via `EncryptionService` (when configured). Inserts a row into `agent_did_keys`. Updates `agents.did` and `agents.did_created_at`. Returns the `did:web` identifier and public key JWK. |
| `buildInstanceDIDDocument` | _(none)_ | `Promise<IDIDDocument>` | Builds the root instance DID document for AgentIdP (format: `did:web:{DID_WEB_DOMAIN}`). Cached in Redis under `did:doc:instance`. |
| `buildAgentDIDDocument` | `agentId: string` | `Promise<IAgentDIDDocumentResult>` | Builds a per-agent DID document (format: `did:web:{DID_WEB_DOMAIN}:agents:{agentId}`). Decommissioned agents get a deactivated document with an `AgentStatus: decommissioned` service entry. Cached in Redis under `did:doc:{agentId}` for active agents only. Throws `AgentNotFoundError` if the agent does not exist. |
| `buildResolutionResult` | `agentId: string` | `Promise<IDIDResolutionResult>` | Wraps `buildAgentDIDDocument` with W3C DID Resolution metadata (`didDocumentMetadata`, `didResolutionMetadata`). |
| `buildAgentCard` | `agentId: string` | `Promise<IAgentCard>` | Returns an AGNTCY-format agent card with `did`, `name` (agent email), `agentType`, `capabilities`, `owner`, `version`, `deploymentEnv`, `identityProvider`, and `issuedAt`. |

**Dependencies**: PostgreSQL (`Pool`), Redis (`RedisClientType`), optional `VaultClient`, optional `EncryptionService`. Uses `node-vault` directly for DID private key storage.

**Configuration**:

- `DID_WEB_DOMAIN` — required; the domain for `did:web` DID construction (e.g. `idp.sentryagent.ai`)
- `DID_DOCUMENT_CACHE_TTL_SECONDS` — Redis cache TTL for DID documents (default: `300`)
- `VAULT_ADDR`, `VAULT_TOKEN`, `VAULT_MOUNT` — when set, private keys are stored in Vault; otherwise the `dev:no-vault` marker is used

**Redis keys**:

- `did:doc:instance` — JSON-serialised instance `IDIDDocument`, TTL from `DID_DOCUMENT_CACHE_TTL_SECONDS`
- `did:doc:<agentId>` — JSON-serialised per-agent `IDIDDocument`, same TTL. Not cached for decommissioned agents.

**DB tables**:

- `agents`: `did` (text — `did:web:...`), `did_created_at` (timestamptz). Written by `generateDIDForAgent`; read in all document-building methods.
- `agent_did_keys`: `key_id` (UUID PK), `agent_id` (UUID FK), `organization_id` (UUID FK), `public_key_jwk` (JSONB), `vault_key_path` (text — Vault KV v2 path or `dev:no-vault`), `key_type` (`'EC'`), `curve` (`'P-256'`), `created_at`. Written by `generateDIDForAgent`.

**Error types**:

- `AgentNotFoundError` (404) — agent UUID not found in `buildAgentDIDDocument`, `buildResolutionResult`, `buildAgentCard`
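
The key generation and `did:web` construction described above can be sketched with Node's built-in `crypto`. This is a minimal sketch with an illustrative helper name and test domain; the real service additionally writes the private key to Vault and the JWK to PostgreSQL:

```typescript
import { generateKeyPairSync } from 'crypto';

// Illustrative sketch: generate an EC P-256 key pair and build the
// per-agent did:web identifier in the format described above.
// generateAgentDid is not a name from the codebase.
function generateAgentDid(domain: string, agentId: string) {
  // prime256v1 is OpenSSL's name for NIST P-256
  const { publicKey } = generateKeyPairSync('ec', { namedCurve: 'prime256v1' });
  const publicKeyJwk = publicKey.export({ format: 'jwk' });
  return { did: `did:web:${domain}:agents:${agentId}`, publicKeyJwk };
}
```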

---

### WebhookService

**Purpose**: Manages webhook subscriptions and their delivery history for a tenant organisation. HMAC signing secrets are stored in HashiCorp Vault KV v2 (when configured) or bcrypt-hashed in PostgreSQL in local mode. The raw secret is only returned once at subscription creation time. `vault_secret_path` is encrypted at rest via `EncryptionService` (AES-256-CBC) before being written to PostgreSQL (SOC 2 CC6.1 compliance).

**Public methods**:

| Method | Parameters | Returns | Description |
|--------|-----------|---------|-------------|
| `createSubscription` | `orgId: string, req: ICreateWebhookRequest` | `Promise<IWebhookSubscription & { secret: string }>` | Generates a 32-byte random hex HMAC secret. Stores in Vault at `secret/data/agentidp/webhooks/{orgId}/{id}/secret` (Vault mode) or bcrypt-hashes and stores in `secret_hash` (local mode). Encrypts `vault_secret_path` via `EncryptionService`. Returns the subscription including the one-time `secret`. Validates URL must use `https://` and events array must be non-empty. |
| `listSubscriptions` | `orgId: string` | `Promise<IWebhookSubscription[]>` | Returns all subscriptions for the org, ordered by `created_at DESC`. No secret fields are included. |
| `getSubscription` | `id: string, orgId: string` | `Promise<IWebhookSubscription>` | Returns a single subscription. Verifies org ownership. |
| `updateSubscription` | `id: string, orgId: string, req: IUpdateWebhookRequest` | `Promise<IWebhookSubscription>` | Partially updates `name`, `url`, `events`, or `active` fields. Validates `https://` if URL is changing. |
| `deleteSubscription` | `id: string, orgId: string` | `Promise<void>` | Permanently deletes the subscription and all deliveries (via PostgreSQL CASCADE). |
| `getSubscriptionSecret` | `subscriptionId: string, orgId: string` | `Promise<string>` | Retrieves the raw HMAC secret from Vault (Vault mode only). Throws `WebhookValidationError` in local mode since the secret cannot be recovered after creation. |
| `listDeliveries` | `subscriptionId: string, orgId: string, limit: number, offset: number` | `Promise<IPaginatedDeliveriesResponse>` | Returns paginated delivery records for a subscription. Verifies org ownership before querying. |

**Dependencies**: PostgreSQL (`Pool`), optional `VaultClient`, Redis (`RedisClientType` — reserved for future caching), optional `EncryptionService`.

**Configuration**: Inherits Vault configuration from `VaultClient` (`VAULT_ADDR`, `VAULT_TOKEN`, `VAULT_MOUNT`). `EncryptionService` requires the `ENCRYPTION_KEY` env var (see `EncryptionService` docs).

**DB tables**:

- `webhook_subscriptions`: `id` (UUID PK), `organization_id` (UUID FK), `name` (text), `url` (text — https only), `events` (JSONB — `WebhookEventType[]`), `secret_hash` (text — bcrypt hash in local mode, `'vault'` in Vault mode), `vault_secret_path` (text — encrypted Vault path or `'local'`), `active` (boolean), `failure_count` (integer), `created_at`, `updated_at`.
- `webhook_deliveries`: `id` (UUID PK), `subscription_id` (UUID FK), `event_type` (text), `payload` (JSONB), `status` (`pending|delivered|failed|dead_letter`), `http_status_code` (integer nullable), `attempt_count` (integer), `next_retry_at` (timestamptz nullable), `delivered_at` (timestamptz nullable), `created_at`, `updated_at`. Cascades on subscription delete.

**Error types**:

- `WebhookNotFoundError` (404) — subscription not found or belongs to another org
- `WebhookValidationError` (400) — invalid URL scheme, empty events array, or secret not recoverable in local mode
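
A webhook consumer verifies deliveries against the secret returned at creation time. A minimal sketch, assuming signatures are hex-encoded HMAC-SHA256 over the raw request body (the exact header name and encoding scheme are not specified in this document):

```typescript
import { createHmac, timingSafeEqual } from 'crypto';

// Illustrative sketch: compute and verify an HMAC-SHA256 signature over
// a raw webhook body. The hex encoding is an assumption, not from the
// source; only "HMAC signing secret" is documented above.
function signPayload(secret: string, rawBody: string): string {
  return createHmac('sha256', secret).update(rawBody).digest('hex');
}

function verifySignature(secret: string, rawBody: string, signature: string): boolean {
  const expected = Buffer.from(signPayload(secret, rawBody), 'hex');
  const received = Buffer.from(signature, 'hex');
  // Length check first: timingSafeEqual throws on unequal lengths
  return expected.length === received.length && timingSafeEqual(expected, received);
}
```

`timingSafeEqual` avoids leaking the correct signature through comparison timing.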

---

### BillingService

**Purpose**: Manages Stripe billing integration — creates Checkout Sessions for tenant subscriptions, processes incoming Stripe webhook events (subscription lifecycle and checkout completion), and retrieves current subscription status. When a `checkout.session.completed` event carries `{ orgId, targetTier }` in its metadata, delegates to `TierService.applyUpgrade` to update the organisation's tier.

**Public methods**:

| Method | Parameters | Returns | Description |
|--------|-----------|---------|-------------|
| `createCheckoutSession` | `tenantId: string, successUrl: string, cancelUrl: string` | `Promise<string>` | Creates a Stripe Checkout Session with `mode: 'subscription'`, `client_reference_id: tenantId`, and the price from `STRIPE_PRICE_ID`. Returns the checkout URL. Throws if Stripe does not return a URL. |
| `handleWebhookEvent` | `rawBody: Buffer, sig: string, webhookSecret: string` | `Promise<void>` | Verifies the Stripe webhook signature via `stripe.webhooks.constructEvent`. Handles `customer.subscription.created/updated/deleted` (upserts `tenant_subscriptions`) and `checkout.session.completed` (applies tier upgrade via `TierService` when metadata contains `orgId` and `targetTier`). |
| `getSubscriptionStatus` | `tenantId: string` | `Promise<ISubscriptionStatus>` | Queries `tenant_subscriptions` for the given tenant. Returns `{ tenantId, status: 'free', currentPeriodEnd: null, stripeSubscriptionId: null }` when no row exists. |

**Dependencies**: PostgreSQL (`Pool`), Stripe client (`Stripe`), optional `TierService`.

**Configuration**:

- `STRIPE_PRICE_ID` — Stripe price ID for subscription checkout sessions
- `STRIPE_WEBHOOK_SECRET` — Stripe webhook endpoint secret (`whsec_...`); passed by the webhook controller, not read directly by the service

**DB tables**:

- `tenant_subscriptions`: `tenant_id` (UUID PK or unique), `status` (text — `'free'|'active'|'past_due'|'canceled'`), `stripe_customer_id` (text), `stripe_subscription_id` (text), `current_period_end` (timestamptz nullable), `updated_at`. Upserted on subscription lifecycle events.

**Error types**: None defined in the service. Stripe signature failures raise `Error` from `stripe.webhooks.constructEvent`; these propagate to the error handler as 400 responses.

---

### OIDCService (A2A / OIDC Provider)

**Note**: `src/services/OIDCService.ts` does not exist as a standalone file — OIDC provider functionality is handled by the `oidc-provider` npm package, configured in `src/app.ts` and related route files. The service boundary for OIDC-related business logic is the `DelegationService`, so the OIDC integration is documented as follows.

**Purpose**: The OIDC/A2A subsystem provides agent-to-agent (A2A) delegation using the `oidc-provider` library (v9.7.x). The provider is mounted as a sub-application at `/oidc` and issues short-lived delegation tokens scoped to a specific `delegatee_id`. The `DelegationService` (`src/services/DelegationService.ts`) manages the `delegation_chains` table for auditing.

**Key endpoints exposed by the OIDC provider**:

- `POST /oidc/token` — issues delegation tokens via `client_credentials` or custom grant
- `GET /oidc/.well-known/openid-configuration` — OIDC discovery document
- `GET /oidc/jwks` — public JWK Set for verifying delegation tokens

**DelegationService public methods** (from `src/services/DelegationService.ts`):

| Method | Parameters | Returns | Description |
|--------|-----------|---------|-------------|
| `createDelegation` | `delegatorId: string, delegateeId: string, scope: string, expiresAt?: Date` | `Promise<IDelegationChain>` | Inserts a delegation chain record into `delegation_chains`. Validates that both agents exist and are active. |
| `verifyDelegation` | `token: string, delegateeId: string` | `Promise<IDelegationVerifyResult>` | Verifies the delegation token signature and checks that the chain record is active and not expired. |
| `revokeDelegation` | `chainId: string, delegatorId: string` | `Promise<void>` | Sets `delegation_chains.status = 'revoked'` and `revoked_at = NOW()`. Validates that the delegator owns the chain. |

**DB tables**:

- `delegation_chains`: `chain_id` (UUID PK), `delegator_id` (UUID), `delegatee_id` (UUID), `scope` (text), `status` (`active|revoked|expired`), `created_at`, `expires_at` (nullable), `revoked_at` (nullable), `token` (text — the delegation JWT).

**Configuration**:

- `A2A_ENABLED` — when set to `'false'`, A2A/delegation endpoints return 404
- `OIDC_ISSUER` — issuer URL for the OIDC provider

---

## Walkthrough 4 — A2A Delegation End-to-End

**Request:** `POST /api/v1/oauth2/token/delegate` — one AI agent delegating a scoped capability to another

This walkthrough traces how agent A (an orchestrator) issues a delegation token that grants agent B (a sub-agent) the right to act on its behalf with a restricted scope.

---

### Step 1 — Route dispatch

**File:** `src/routes/delegation.ts`

```typescript
router.post(
  '/token/delegate',
  asyncHandler(authMiddleware),
  opaMiddleware,
  asyncHandler(delegationController.createDelegation.bind(delegationController))
);
```

Both `authMiddleware` and `opaMiddleware` run. The OPA policy requires scope `agents:write` for delegation creation.

---

### Step 2 — Controller: extract delegator and validate

**File:** `src/controllers/DelegationController.ts`

```typescript
const delegatorId = req.user.sub; // From the Bearer token's sub claim
const { delegatee_id, scope, expires_at } = req.body;
```

The controller validates that `delegatee_id` is a valid UUID, `scope` is a non-empty string, and `expires_at` (if provided) is a valid ISO 8601 datetime in the future. It passes these to `DelegationService.createDelegation()`.

---

### Step 3 — Service: verify both agents exist

**File:** `src/services/DelegationService.ts`

```typescript
const delegator = await this.agentRepository.findById(delegatorId);
if (!delegator || delegator.status !== 'active') { throw new AgentNotFoundError(delegatorId); }

const delegatee = await this.agentRepository.findById(delegateeId);
if (!delegatee || delegatee.status !== 'active') { throw new AgentNotFoundError(delegateeId); }
```

Both agents must exist and be in `active` status. A suspended or decommissioned agent cannot participate in delegation.

---

### Step 4 — Service: insert delegation chain record

**File:** `src/services/DelegationService.ts`

```typescript
await this.pool.query(
  `INSERT INTO delegation_chains (chain_id, delegator_id, delegatee_id, scope, status, expires_at)
   VALUES ($1, $2, $3, $4, 'active', $5)`,
  [chainId, delegatorId, delegateeId, scope, expiresAt]
);
```

The `chain_id` is a UUID generated by the service. The `delegation_chains` table provides the authoritative source of truth for which delegations are active, independent of any token.

---

### Step 5 — Response

```json
{
  "chain_id": "f1e2d3c4-...",
  "token": "eyJhbGciOiJSUzI1NiJ9...",
  "delegator_id": "a1b2c3d4-...",
  "delegatee_id": "b2c3d4e5-...",
  "scope": "agents:read",
  "status": "active",
  "expires_at": "2026-04-05T00:00:00Z"
}
```

The `token` field is the signed delegation JWT. The delegatee presents this token to `POST /api/v1/oauth2/token/verify-delegation` to prove it has authority to act on the delegator's behalf.

**Why store both the DB record and the JWT?** The DB record allows revocation — when the delegator calls `DELETE /api/v1/delegation-chains/:chainId`, the record is soft-deleted and all subsequent `verify-delegation` calls will fail even if the JWT itself has not yet expired.
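
That revocation check reduces to a pure decision over the chain record. A sketch (the `isChainUsable` name and record shape are illustrative):

```typescript
// Illustrative sketch of the chain-record check behind verify-delegation:
// the JWT alone is not enough, the DB record must still be active.
// isChainUsable is not a name from the codebase.
interface IDelegationChainRecord {
  status: 'active' | 'revoked' | 'expired';
  expires_at: Date | null;
}

function isChainUsable(chain: IDelegationChainRecord, now: Date): boolean {
  if (chain.status !== 'active') return false;
  if (chain.expires_at !== null && chain.expires_at <= now) return false;
  return true;
}
```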

---

## Walkthrough 5 — Tier Enforcement Request Lifecycle

**Request:** Any authenticated API request when the organisation's daily call limit is reached

This walkthrough traces how `tierMiddleware` intercepts a request before it reaches the OPA middleware, preventing quota-exceeded traffic from consuming service resources.

---

### Step 1 — Auth middleware passes

Same as Walkthrough 2, Step 3. The Bearer JWT is verified and `req.user` is populated with `sub` (agentId) and `organization_id`.

---

### Step 2 — Tier middleware: fetch org tier

**File:** `src/middleware/tier.ts`

```typescript
const orgId = req.user.organization_id;
const tier = await tierService.fetchTier(orgId);
const config = TIER_CONFIG[tier];
```

`fetchTier()` issues `SELECT tier FROM organizations WHERE organization_id = $1`. Returns `'free'` if no row is found (safe default).

---

### Step 3 — Tier middleware: read daily counter

**File:** `src/middleware/tier.ts`

```typescript
const callsKey = `rate:tier:calls:${orgId}`;
const callsToday = await redis.get(callsKey);
const count = callsToday !== null ? parseInt(callsToday, 10) : 0;

if (count >= config.maxCallsPerDay) {
  throw new TierLimitError('calls', config.maxCallsPerDay, { orgId, tier, current: count });
}
```

The Redis key `rate:tier:calls:<orgId>` is read. If null (first call of the day), the count is 0. When the count equals or exceeds the tier limit, `TierLimitError` (HTTP 429) is thrown immediately — no further middleware runs.

---

### Step 4 — Tier middleware: increment counter (fire-and-forget)

**File:** `src/middleware/tier.ts`

```typescript
// Set TTL to next UTC midnight if key is new
void redis.multi()
  .incr(callsKey)
  .expireAt(callsKey, nextUtcMidnightUnix())
  .exec();
next();
```

The counter is incremented atomically using a Redis MULTI block. The `EXPIREAT` command sets the key to auto-delete at the next UTC midnight, resetting the daily counter without any scheduled job. The increment is fire-and-forget — the request proceeds immediately to `opaMiddleware`.

**Why expire at UTC midnight rather than a rolling 24-hour window?** Tier limits are documented as "per day", which users interpret as resetting at midnight, and a fixed UTC-midnight reset is predictable and easy to reason about. The trade-off is that a client straddling a reset can consume up to twice its daily quota within a short period, which is acceptable for coarse tier limits.
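
`nextUtcMidnightUnix()` is referenced but not shown in this walkthrough. A plausible implementation, which is an assumption rather than the actual source:

```typescript
// Illustrative implementation of nextUtcMidnightUnix(): the walkthrough
// only references the helper by name, so this body is an assumption.
// Date.UTC handles day overflow (e.g. Jan 31 + 1 day rolls to Feb 1).
function nextUtcMidnightUnix(now: Date = new Date()): number {
  const next = Date.UTC(now.getUTCFullYear(), now.getUTCMonth(), now.getUTCDate() + 1);
  return Math.floor(next / 1000); // EXPIREAT takes a Unix timestamp in seconds
}
```

The same value also serves for the `Retry-After` header once the current time is subtracted from it.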

---

### Step 5 — Error handler serialises TierLimitError

**File:** `src/middleware/errorHandler.ts`

```json
HTTP 429
{
  "code": "TIER_LIMIT_EXCEEDED",
  "message": "Daily API call limit reached for your tier.",
  "details": {
    "tier": "free",
    "limit": 1000,
    "current": 1000
  }
}
```

The `Retry-After` header is set to the number of seconds until the next UTC midnight so clients can implement automatic backoff.

---
|
||||||
|
|
||||||
|
---

## Walkthrough 6 — Analytics Event Capture Flow

**Trigger:** Any successful token issuance (`POST /api/v1/token`)

This walkthrough traces how an analytics event is captured without affecting the latency of the primary token issuance response.
---

### Step 1 — Token issuance completes

**File:** `src/services/OAuth2Service.ts`

```typescript
const accessToken = signToken(payload, this.privateKey);
// Primary response is ready — analytics is now fire-and-forget
void this.analyticsService.recordEvent(tenantId, 'token_issued');
tokensIssuedTotal.inc({ scope });
```

The `signToken()` call completes synchronously (RSA signing is CPU-bound, not I/O). The controller can now send the response. `analyticsService.recordEvent()` is called with `void` — the `await` is deliberately omitted.

**Why `void` instead of `await`?** Token issuance latency must remain below 100ms (per the QA performance gate). A PostgreSQL write adds 5–15ms. Since analytics data is aggregated (not transactional), losing an occasional event due to an error is acceptable. The response is never delayed for analytics.
---

### Step 2 — AnalyticsService: UPSERT daily counter

**File:** `src/services/AnalyticsService.ts`

```typescript
async recordEvent(tenantId: string, metricType: string): Promise<void> {
  try {
    await this.pool.query(
      `INSERT INTO analytics_events (organization_id, date, metric_type, count)
       VALUES ($1, CURRENT_DATE, $2, 1)
       ON CONFLICT (organization_id, date, metric_type)
       DO UPDATE SET count = analytics_events.count + 1`,
      [tenantId, metricType],
    );
  } catch (err) {
    console.error('[AnalyticsService] recordEvent failed — primary path unaffected', err);
  }
}
```

The `ON CONFLICT DO UPDATE` upsert is atomic. Whether this is the first or the ten-thousandth `token_issued` event for this tenant today, the row is updated correctly. All errors are caught and swallowed — the token has already been returned to the caller.

**Why one row per day per metric, not one row per event?** Storing a row per event would create millions of rows. The daily aggregate model keeps the table compact while still providing daily trend data (the granularity that analytics dashboards need). Sub-day granularity is available from the Prometheus `agentidp_tokens_issued_total` counter if needed.
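The swallow-all-errors contract can be exercised in isolation. A minimal sketch with a simplified pool shape (the real service takes a `pg.Pool`):

```typescript
// Minimal reproduction of the fire-and-forget contract: recordEvent must
// resolve even when the underlying query rejects. PoolLike is a simplification.
interface PoolLike {
  query(sql: string, params: unknown[]): Promise<unknown>;
}

class AnalyticsServiceSketch {
  constructor(private readonly pool: PoolLike) {}

  async recordEvent(tenantId: string, metricType: string): Promise<void> {
    try {
      await this.pool.query('INSERT ... ON CONFLICT ... DO UPDATE ...', [tenantId, metricType]);
    } catch (err) {
      // Swallowed deliberately — the token response has already been sent
      console.error('[AnalyticsService] recordEvent failed — primary path unaffected', err);
    }
  }
}

const failingPool: PoolLike = {
  query: async () => {
    throw new Error('connection refused');
  },
};

// Resolves (does not reject) despite the failing pool
void new AnalyticsServiceSketch(failingPool).recordEvent('org-1', 'token_issued');
```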
---

### Step 3 — Dashboard query (deferred)

When a developer visits the analytics page in the developer portal, the portal calls:

```
GET /api/v1/analytics/token-trend?days=30
```

**File:** `src/services/AnalyticsService.ts` — `getTokenTrend(tenantId, 30)`

```sql
SELECT
  gs.date::DATE::TEXT AS date,
  COALESCE(ae.count, 0)::INTEGER AS count
FROM generate_series(
  CURRENT_DATE - 29 * INTERVAL '1 day',
  CURRENT_DATE,
  INTERVAL '1 day'
) AS gs(date)
LEFT JOIN analytics_events ae
  ON ae.date = gs.date::DATE
  AND ae.organization_id = $2
  AND ae.metric_type = 'token_issued'
ORDER BY gs.date ASC
```

The `generate_series` + `LEFT JOIN` pattern ensures all 30 days appear in the result, with `count: 0` for days with no events. This avoids the need for the client to fill in gaps.
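On the TypeScript side, the service clamps the requested window and coerces PostgreSQL's string-typed counts to numbers (the contract tested in section 10.10). A sketch — the helper names here are illustrative, not the service's actual internals:

```typescript
// Illustrative helpers matching the getTokenTrend() contract: clamp the
// requested day window to 90 and coerce BIGINT counts (strings) to numbers.
interface ITokenTrendEntry {
  date: string;
  count: number;
}

function clampDays(days: number): number {
  return Math.min(Math.max(Math.trunc(days), 1), 90);
}

function mapTrendRows(rows: Array<{ date: string; count: string | number }>): ITokenTrendEntry[] {
  return rows.map((row) => ({ date: row.date, count: Number(row.count) }));
}
```

`clampDays(200)` yields `90`, and a row `{ date: '2026-03-01', count: '42' }` maps to `{ date: '2026-03-01', count: 42 }`.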
---

## 8.3 Environment Variables Setup

The server requires a `.env` file at the project root. Copy the template:

```bash
cp .env.example .env
```

The template includes all required variables with sensible local defaults. Edit `.env` to set your values. Key variables are documented below.

```bash
# ─────────────────────────────────────────────────────────────
# PostgreSQL — individual credentials for compose.yaml
# ─────────────────────────────────────────────────────────────
POSTGRES_USER=sentryagent
POSTGRES_PASSWORD=sentryagent
POSTGRES_DB=sentryagent_idp

# ─────────────────────────────────────────────────────────────
# PostgreSQL connection (application reads this directly)
# ─────────────────────────────────────────────────────────────
DATABASE_URL=postgresql://sentryagent:sentryagent@localhost:5432/sentryagent_idp
```
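A common companion to a template like this is a fail-fast check at startup, so a missing variable aborts boot instead of surfacing as a runtime error. This is a pattern sketch only, not code from the repository:

```typescript
// Fail fast when a required variable from the template above is missing.
// Pattern sketch only — the project may validate configuration differently.
function requireEnv(name: string, env: Record<string, string | undefined>): string {
  const value = env[name];
  if (value === undefined || value === '') {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

const databaseUrl = requireEnv('DATABASE_URL', {
  DATABASE_URL: 'postgresql://sentryagent:sentryagent@localhost:5432/sentryagent_idp',
});
console.log(databaseUrl.split('@')[1]); // localhost:5432/sentryagent_idp
```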
---

## 8. CTO Session Completion Protocol

This section applies to the Virtual CTO role. It defines the required communication protocol at the end of any session that involves CEO-authorized actions.

### 8.1 Completion Confirmation (Required)

After the CEO authorizes any action via `#vpe-cto-approvals`, the CTO MUST:

1. Execute the authorized action
2. Post a **completion confirmation** to `#vpe-cto-approvals` before closing the session

The confirmation message MUST include:

| Field | Description |
|-------|-------------|
| Action completed | What was done |
| Outcome | Success or failure |
| Commit hash | Required if the action involved a git commit |
| Resulting state | What state the system/repo is in now |

> Authorization and completion are two distinct, required messages. An authorization alone does not constitute completion.

### 8.2 End-of-Session Summary (Required)

Before closing any session that contains completed, pending, or in-progress work, the CTO MUST post a structured summary to `#vpe-cto-approvals`:

```
## End-of-Session Summary

### Completed This Session
- <action> — <commit hash or outcome>

### Pending (Authorized but Not Yet Executed)
- <action> — authorized in msg #<id>, not yet executed

### Requires CEO Action Next Session
- <decision or approval needed>
```

If nothing is pending and all actions are complete, a brief "session complete, nothing pending" message is sufficient.

### 8.3 Authorized vs. Done Vocabulary

These two terms have precise, non-interchangeable meanings:

| Term | Meaning |
|------|---------|
| **Authorized** | CEO has granted permission. Action has NOT been executed. |
| **Committed / Completed / Deployed** | Action has been executed and confirmed with evidence. |

Rules:

- Never use "completed" or "committed" to describe an action that has only been approved
- Always include supporting evidence when claiming completion (e.g., commit hash, test output)
- If no commit hash exists for a git action, the action is not done — regardless of authorization status
---

Test that the server rejects tokens with `alg: none` or `alg: HS256`. The `verifyToken()` function specifies `algorithms: ['RS256']`, which causes jsonwebtoken to reject any token with a different algorithm header.

---

## 10.8 AGNTCY Conformance Test Suite

**Location:** `tests/agntcy-conformance/conformance.test.ts`

**Purpose:** Verifies that the AgentIdP platform conforms to the AGNTCY agent identity specification. These tests exercise live HTTP requests through the Express application against real PostgreSQL and Redis instances, exactly like integration tests — but they validate AGNTCY-specific protocol guarantees rather than individual endpoint correctness.

**How to run:**

```bash
# Run the conformance suite (separate Jest config)
npm run test:agntcy-conformance

# Equivalent long form
npx jest --config tests/agntcy-conformance/jest.config.cjs

# Run with TEST_DATABASE_URL and TEST_REDIS_URL overrides
TEST_DATABASE_URL=postgresql://sentryagent:sentryagent@localhost:5432/sentryagent_idp_test \
TEST_REDIS_URL=redis://localhost:6379/1 \
npm run test:agntcy-conformance

# Enable A2A delegation conformance tests (gated by env var)
A2A_ENABLED=true npm run test:agntcy-conformance
```

The conformance suite uses its own `jest.config.cjs` (located in `tests/agntcy-conformance/`) so it does not run with `npm test` by default. This is intentional — the suite requires `COMPLIANCE_ENABLED=true` and optionally `A2A_ENABLED=true`, which should not be required for the standard unit/integration test run.
**What each test validates:**

| Conformance Test | What it validates | AGNTCY Domain |
|-----------------|-------------------|---------------|
| **Conformance 1 — Agent registration creates DID:WEB identifier** | `POST /api/v1/agents` returns a `did` field matching `did:web:*` pattern when `DID_WEB_DOMAIN` is set. The `did` field is optional in the response (test is conditional on presence) — but when present, it must conform to the `did:web:` scheme. | Non-Human Identity |
| **Conformance 2 — Token issuance via `client_credentials` grant** | Registers an agent, generates credentials via API, then exercises the full OAuth 2.0 Client Credentials flow. Validates that `POST /api/v1/token` returns a 200 response with `access_token` (string), `token_type: 'Bearer'`, and a JWT with 3 dot-separated parts. | Authentication |
| **Conformance 3 — A2A delegation chain create + verify** | _(Gated by `A2A_ENABLED=true`.)_ Creates a delegation chain between two agents via `POST /api/v1/oauth2/token/delegate`. If a token is returned, verifies it via `POST /api/v1/oauth2/token/verify-delegation`. Accepts 200 or 201 on creation and 200 or 204 on verification. | Agent-to-Agent Trust |
| **Conformance 4 — Compliance report returns valid AGNTCY structure** | Calls `GET /api/v1/compliance/report` and validates all required AGNTCY fields: `generated_at` (valid ISO 8601), `tenant_id` (string), `agntcy_schema_version: '1.0'`, `sections` (array with `name`, `status`, `details` per entry), `overall_status` (one of `pass/fail/warn`). Also verifies the `agent-identity` and `audit-trail` section names are present. A second request verifies the Redis cache (`X-Cache: HIT` header and `from_cache: true` body field). | Audit, Compliance |

**Schema tables created by conformance suite:** The suite creates its own tables using `CREATE TABLE IF NOT EXISTS` before tests run. The tables match the production schema and include: `organizations`, `agents`, `credentials`, `audit_events`, `token_revocations`, `agent_did_keys`, `delegation_chains`. These are cleaned up via `DELETE` in `afterEach` (child-to-parent order respecting FK constraints) and dropped implicitly when the test database is reset.

**Environment variables used:**

| Variable | Required | Purpose |
|---|---|---|
| `TEST_DATABASE_URL` | Yes (or default) | PostgreSQL connection string for the test database |
| `TEST_REDIS_URL` | Yes (or default) | Redis connection string (index 1 recommended) |
| `COMPLIANCE_ENABLED` | Yes (`'true'`) | Enables the compliance report endpoint |
| `A2A_ENABLED` | No (default `'true'`) | Set to `'false'` to skip Conformance 3 (A2A delegation) |
| `DID_WEB_DOMAIN` | No | When set, Conformance 1 validates the `did:web:` format |
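The `did:web:*` check in Conformance 1 can be illustrated standalone. The regex below is an assumption about what the test accepts, not the suite's actual pattern:

```typescript
// Standalone illustration of the Conformance 1 check: the `did` field is
// optional, but when present it must use the did:web scheme. The exact
// pattern is an assumption — the real test lives in conformance.test.ts.
const DID_WEB_PATTERN = /^did:web:[A-Za-z0-9.%-]+(?::[A-Za-z0-9.%-]+)*$/;

function conformsToDidWeb(did: string | undefined): boolean {
  if (did === undefined) return true; // field absent — check is skipped
  return DID_WEB_PATTERN.test(did);
}

console.log(conformsToDidWeb('did:web:agents.example.com:my-agent')); // true
console.log(conformsToDidWeb('did:key:z6MkhaXgBZD')); // false
```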
---

## 10.9 Tier Enforcement Tests

**Location:** `tests/unit/services/TierService.test.ts` and `tests/integration/`

**The TierService has the following test cases that must all pass:**

### Unit tests (`tests/unit/services/TierService.test.ts`)

The unit tests mock PostgreSQL (`Pool`), Redis (`RedisClientType`), and Stripe. Key scenarios:

| Test | Description |
|------|-------------|
| `getStatus() — returns correct tier and limits` | Mocks `SELECT tier FROM organizations` returning `'pro'`; mocks Redis GET calls for `rate:tier:calls` and `rate:tier:tokens`; verifies `ITierStatus.limits` matches `TIER_CONFIG['pro']`. |
| `getStatus() — falls back to 0 when Redis unavailable` | Redis GET throws; verifies `usage.callsToday = 0` and `usage.tokensToday = 0` with no error thrown. |
| `getStatus() — returns 'free' when org not found` | `SELECT` returns 0 rows; verifies `tier === 'free'`. |
| `initiateUpgrade() — throws ValidationError on downgrade attempt` | `targetTier = 'free'` when current is `'pro'`; verifies `ValidationError` is thrown with `TIER_RANK` comparison failure message. |
| `initiateUpgrade() — calls Stripe with correct metadata` | Verifies `stripe.checkout.sessions.create` is called with `metadata: { orgId, targetTier }` and `mode: 'subscription'`. |
| `applyUpgrade() — executes UPDATE organizations SET tier` | Verifies parameterized SQL is called with `[targetTier, orgId]`. |
| `enforceAgentLimit() — throws TierLimitError when limit reached` | Mock agent count equals `TIER_CONFIG[tier].maxAgents`; verifies `TierLimitError` with `limit` and `current` details. |
| `enforceAgentLimit() — no-op for Enterprise tier` | `TIER_CONFIG['enterprise'].maxAgents = Infinity`; verifies no SQL query for agent count and no error. |
| `fetchTier() — returns 'free' for unknown tier string in DB` | DB returns unrecognised string; verifies `isTierName` guard returns `'free'`. |

### Integration (middleware) tests

When writing integration tests for the tier enforcement middleware (`src/middleware/tier.ts`), the following scenarios must be covered:

| Scenario | Expected behaviour |
|----------|-------------------|
| Request with org on `free` tier, under daily call limit | Request proceeds normally (2xx from downstream handler) |
| Request that would exceed `maxCallsPerDay` for the org's tier | `429 TierLimitError` — body contains `code: 'TIER_LIMIT_EXCEEDED'` |
| Request to `/health` or `/metrics` (unprotected routes) | Tier middleware not applied — always 200 |
| Org not found in `organizations` table | Defaults to `free` tier limits |
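The scenarios above reference `TIER_CONFIG` and the `isTierName` guard without showing them. A hypothetical shape consistent with those scenarios — the concrete numeric limits here are assumptions; only the `Infinity` enterprise limits and the `'free'` fallback are stated by the tests:

```typescript
// Hypothetical TIER_CONFIG shape consistent with the test scenarios above.
// The numeric limits are assumptions — only Infinity for enterprise and the
// 'free' fallback behaviour are stated by the tests.
type TierName = 'free' | 'pro' | 'enterprise';

const TIER_CONFIG: Record<TierName, { maxAgents: number; maxCallsPerDay: number }> = {
  free: { maxAgents: 3, maxCallsPerDay: 1000 },
  pro: { maxAgents: 25, maxCallsPerDay: 50000 },
  enterprise: { maxAgents: Infinity, maxCallsPerDay: Infinity }, // Infinity → no-op enforcement
};

function isTierName(value: string): value is TierName {
  return value in TIER_CONFIG;
}

// Unrecognised DB value falls back to 'free', as the fetchTier() test expects
const fromDb: string = 'platinum';
const tier: TierName = isTierName(fromDb) ? fromDb : 'free';
console.log(tier); // free
```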
---

## 10.10 Analytics Service Tests

**Location:** `tests/unit/services/AnalyticsService.test.ts`

The AnalyticsService unit tests mock the PostgreSQL `Pool`. Key scenarios that must be covered:

| Test | Description |
|------|-------------|
| `recordEvent() — executes UPSERT without throwing` | Verifies `pool.query` is called with the `INSERT ... ON CONFLICT DO UPDATE` SQL pattern and the correct `[tenantId, metricType]` parameters. |
| `recordEvent() — catches and swallows pool errors` | Pool `query` throws; verifies `recordEvent` resolves (not rejects) and the error does not propagate. This is the fire-and-forget contract. |
| `getTokenTrend() — clamps days to 90` | Calls with `days = 200`; verifies `pool.query` receives `clampedDays = 90` as the first parameter. |
| `getTokenTrend() — maps rows to ITokenTrendEntry[]` | Mock returns rows with `date: '2026-03-01', count: '42'`; verifies the result is `[{ date: '2026-03-01', count: 42 }]` (count coerced to number). |
| `getAgentActivity() — maps rows to IAgentActivityEntry[]` | Mock returns rows with string-typed `dow`, `hour`, `count`; verifies all are coerced to numbers in the result. |
| `getAgentUsageSummary() — maps rows to IAgentUsageSummaryEntry[]` | Mock returns rows with `token_count: '150'`; verifies `token_count: 150` (number) in the result. |
| `getAgentUsageSummary() — joins with agents table on organization_id` | Verifies the SQL query joins `agents` with `LEFT JOIN analytics_events` and filters `a.organization_id = $1`. |

**Coverage gate:** `AnalyticsService` must maintain >80% statement, branch, function, and line coverage. Run:

```bash
npm run test:unit -- --coverage --testPathPattern=AnalyticsService
```
---

## 10.11 Running the Complete Phase 6 Test Matrix

All of the following must pass before any Phase 6 feature is considered complete:

```bash
# 1. Unit tests (all services including Phase 3–6)
npm run test:unit -- --coverage
# Must exit 0 with all 4 coverage metrics ≥ 80%

# 2. Integration tests (requires PostgreSQL + Redis running)
npm run test:integration

# 3. AGNTCY conformance suite
COMPLIANCE_ENABLED=true \
A2A_ENABLED=true \
npm run test:agntcy-conformance

# 4. Dependency security audit
npm audit --audit-level=high
# Must exit 0 — no high or critical vulnerabilities

# 5. TypeScript compilation
npx tsc --noEmit
# Must exit 0 — zero type errors
```

**Current test file inventory** (as of Phase 6 completion):

Unit test files in `tests/unit/services/`:

| File | Service tested |
|------|---------------|
| `AgentService.test.ts` | `AgentService` |
| `AnalyticsService.test.ts` | `AnalyticsService` |
| `AuditService.test.ts` | `AuditService` |
| `AuditVerificationService.test.ts` | `AuditVerificationService` |
| `BillingService.test.ts` | `BillingService` |
| `ComplianceService.test.ts` | `ComplianceService` |
| `CredentialService.test.ts` | `CredentialService` |
| `DIDService.test.ts` | `DIDService` |
| `DelegationService.test.ts` | `DelegationService` |
| `EncryptionService.test.ts` | `EncryptionService` |
| `FederationService.test.ts` | `FederationService` |
| `IDTokenService.test.ts` | `IDTokenService` |
| `OAuth2Service.test.ts` | `OAuth2Service` |
| `OIDCKeyService.test.ts` | `OIDCKeyService` |
| `OrgService.test.ts` | `OrgService` |
| `ScaffoldService.test.ts` | `ScaffoldService` |
| `ScaffoldService.errors.test.ts` | `ScaffoldService` error cases |
| `TierService.test.ts` | `TierService` |
| `WebhookService.test.ts` | `WebhookService` |
---

The Dockerfile uses a two-stage build:

- **Stage 1 (build):** `node:20.11-bookworm-slim` — installs all dependencies (including dev) and compiles TypeScript to `dist/`.
- **Stage 2 (final):** `node:20.11-bookworm-slim` — copies `dist/` and `node_modules` (production only), installs `curl` for the healthcheck, and runs as the created non-root `nodeapp` user (UID 1001).

```bash
# Build
docker build -t sentryagent-idp:1.0.0 .

# Run (supply required env vars)
docker run -d \
  -e REDIS_URL=redis://<host>:6379 \
  -e JWT_PRIVATE_KEY="-----BEGIN RSA PRIVATE KEY-----\n..." \
  -e JWT_PUBLIC_KEY="-----BEGIN PUBLIC KEY-----\n..." \
  sentryagent-idp:1.0.0
```

The container exposes port `3000`. Override with the `PORT` environment variable if needed. The container runs as non-root user `nodeapp` (UID 1001) — do not mount volumes requiring root ownership.

For local full-stack development, use Docker Compose instead:

```bash
docker compose up --build -d
```

The `compose.yaml` starts the app, PostgreSQL 14.12, and Redis 7.2 with health checks, resource limits, restart policies, and data volumes — per DockerSpec standards.

---

### Local Grafana

```bash
docker compose -f compose.yaml -f compose.monitoring.yaml up -d
```

- Prometheus: http://localhost:9090
- Grafana: http://localhost:3001 (admin password: `GF_ADMIN_PASSWORD` value from `.env`)

The monitoring compose overlay starts `prom/prometheus:v2.53.0` and `grafana/grafana:11.2.0`. Grafana dashboards and datasource provisioning are loaded from `monitoring/grafana/provisioning/`.
---

## 6. Rust SDK

The Rust SDK (`sdk-rust/`) is a production-grade, async-first client for the SentryAgent.ai AgentIdP API. It provides full coverage of the 14 API endpoints across agent identity, OAuth 2.0 token management, credential rotation, audit logs, the public marketplace, and agent-to-agent (A2A) delegation.

**Requirements:** Rust 1.75+ (stable), `tokio` runtime.
---

### Installation

Add the crate to your `Cargo.toml`:

```toml
[dependencies]
sentryagent-idp = "1.0"
tokio = { version = "1.35", features = ["full"] }
```

The crate uses `reqwest` with `rustls-tls` (no OpenSSL dependency) and `serde` for JSON serialisation.
---

### Authentication

The Rust SDK uses the OAuth 2.0 Client Credentials grant, managed transparently by `TokenManager`. You never call `TokenManager` directly — it is embedded in `AgentIdPClient` and invoked automatically before every request.

**Token refresh behaviour:**

- The first API call triggers a `POST /oauth2/token` request with `grant_type=client_credentials`.
- The returned token is cached behind an async `tokio::sync::Mutex`.
- Subsequent calls within the token lifetime return the cached token without a network round trip.
- The cache expires 60 seconds before the server-reported `expires_in`, ensuring tokens never expire mid-flight.
- The `Mutex` guarantees only one refresh happens even when many `tokio` tasks call `get_token()` concurrently.
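The refresh rules above can be sketched in TypeScript for illustration (the SDK implements them in Rust behind the `tokio::sync::Mutex`; the names below are not SDK API):

```typescript
// Illustration of the TokenManager refresh rules in TypeScript — not the SDK's
// API. fetchToken stands in for POST /oauth2/token with client_credentials.
interface Token { accessToken: string; expiresIn: number } // expiresIn: seconds

class TokenCacheSketch {
  private token?: Token;
  private expiresAtMs = 0;
  private inflight?: Promise<string>;

  constructor(
    private readonly fetchToken: () => Promise<Token>,
    private readonly now: () => number = Date.now,
  ) {}

  async getToken(): Promise<string> {
    // Cache hit: a token exists and is not yet within 60s of expiry
    if (this.token && this.now() < this.expiresAtMs) return this.token.accessToken;
    // Single-flight refresh: concurrent callers await the same promise,
    // mirroring the Mutex guarantee in the Rust implementation
    this.inflight ??= (async () => {
      try {
        const token = await this.fetchToken();
        this.token = token;
        this.expiresAtMs = this.now() + (token.expiresIn - 60) * 1000; // expire 60s early
        return token.accessToken;
      } finally {
        this.inflight = undefined;
      }
    })();
    return this.inflight;
  }
}
```

Two concurrent `getToken()` calls on a cold cache trigger exactly one `fetchToken()`; later calls within the lifetime return the cached token.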
**Environment variable construction:**

```rust
use sentryagent_idp::AgentIdPClient;

// from_env() reads AGENTIDP_API_URL, AGENTIDP_CLIENT_ID, AGENTIDP_CLIENT_SECRET
let client = AgentIdPClient::from_env()?;
```

**Explicit construction:**

```rust
use sentryagent_idp::AgentIdPClient;

let client = AgentIdPClient::new(
    "https://api.sentryagent.ai",
    "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
    "sk_live_...",
);
```

| Environment Variable | Required | Purpose |
|---|---|---|
| `AGENTIDP_API_URL` | Yes | Base URL of the AgentIdP API |
| `AGENTIDP_CLIENT_ID` | Yes | OAuth 2.0 client identifier |
| `AGENTIDP_CLIENT_SECRET` | Yes | OAuth 2.0 client secret |
---

### Complete Working Example

The following example covers the full agent identity lifecycle: register → generate credentials → issue token → retrieve agent → list audit logs → delete agent.

```rust
use sentryagent_idp::{
    AgentIdPClient, AgentIdPError,
    AuditLogFilters, MarketplaceFilters, RegisterAgentRequest,
};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Build client from environment variables.
    // Requires: AGENTIDP_API_URL, AGENTIDP_CLIENT_ID, AGENTIDP_CLIENT_SECRET
    let client = AgentIdPClient::from_env()?;

    // ── Register a new agent ──────────────────────────────────────────────────
    let agent = client.register_agent(RegisterAgentRequest {
        name: "my-screener-agent".to_owned(),
        description: Some("Screens resumes using ML".to_owned()),
        agent_type: "screener".to_owned(),
        capabilities: vec!["resume:read".to_owned(), "classify".to_owned()],
        metadata: None,
    }).await?;

    println!("Registered: {} (DID: {})", agent.id, agent.did);

    // ── Generate credentials for the agent ───────────────────────────────────
    let creds = client.generate_credentials(&agent.id).await?;
    println!("Client ID: {}", creds.client_id);
    println!("Client Secret: {} (store this — shown once)", creds.client_secret);

    // ── Issue a scoped token (TokenManager handles this automatically) ────────
    let token_resp = client.issue_token(&agent.id, &["agents:read", "agents:write"]).await?;
    println!("Token type: {}, expires in {}s", token_resp.token_type, token_resp.expires_in);

    // ── Retrieve the agent ────────────────────────────────────────────────────
    let fetched = client.get_agent(&agent.id).await?;
    println!("Fetched: {} (public: {})", fetched.name, fetched.is_public);

    // ── List agents ───────────────────────────────────────────────────────────
    let list = client.list_agents(Some(1), Some(10)).await?;
    println!("Total agents: {}", list.total);

    // ── Audit logs ────────────────────────────────────────────────────────────
    let logs = client.list_audit_logs(AuditLogFilters {
        agent_id: Some(agent.id.clone()),
        event_type: None,
        from: None,
        to: None,
        page: 1,
        per_page: 10,
    }).await?;
    println!("Audit events: {}", logs.total);

    // ── Rotate credentials ────────────────────────────────────────────────────
    let new_creds = client.rotate_credentials(&agent.id).await?;
    println!("New secret: {}", new_creds.client_secret);

    // ── Delete agent ──────────────────────────────────────────────────────────
    client.delete_agent(&agent.id).await?;
    println!("Agent deleted.");

    Ok(())
}
```

Run the bundled quickstart example directly:

```bash
AGENTIDP_API_URL=http://localhost:3000 \
AGENTIDP_CLIENT_ID=your-client-id \
AGENTIDP_CLIENT_SECRET=your-client-secret \
cargo run --example quickstart
```
### Client Methods Reference

All methods are `async` and return `Result<T, AgentIdPError>`. The client is cheap to clone — the inner `reqwest::Client` and token cache are shared via `Arc`.

**Agent Registry** (`sdk-rust/src/agents.rs`):

| Method | Signature | Description |
|--------|-----------|-------------|
| `register_agent` | `(req: RegisterAgentRequest) -> Result<Agent>` | `POST /agents` — 201 |
| `get_agent` | `(agent_id: &str) -> Result<Agent>` | `GET /agents/{id}` — 200 |
| `list_agents` | `(page: Option<u32>, per_page: Option<u32>) -> Result<AgentList>` | `GET /agents` — 200 |
| `update_agent` | `(agent_id: &str, req: UpdateAgentRequest) -> Result<Agent>` | `PATCH /agents/{id}` — 200 |
| `delete_agent` | `(agent_id: &str) -> Result<()>` | `DELETE /agents/{id}` — 204 |

**Credential Management** (`sdk-rust/src/credentials.rs`):

| Method | Signature | Description |
|--------|-----------|-------------|
| `generate_credentials` | `(agent_id: &str) -> Result<Credentials>` | `POST /agents/{id}/credentials` — 201. `client_secret` shown once. |
| `rotate_credentials` | `(agent_id: &str) -> Result<Credentials>` | `POST /agents/{id}/credentials/rotate` — 200. New secret shown once. |
| `revoke_credentials` | `(agent_id: &str, cred_id: &str) -> Result<()>` | `DELETE /agents/{id}/credentials/{cred_id}` — 204 |

**Token Operations** (`sdk-rust/src/oauth2.rs`):

| Method | Signature | Description |
|--------|-----------|-------------|
| `issue_token` | `(agent_id: &str, scopes: &[&str]) -> Result<TokenResponse>` | Issues a scoped Bearer JWT. The token is cached by `TokenManager` automatically. |

**Audit Log** (`sdk-rust/src/audit.rs`):

| Method | Signature | Description |
|--------|-----------|-------------|
| `list_audit_logs` | `(filters: AuditLogFilters) -> Result<AuditLogList>` | Paginated audit log query with optional `agent_id`, `event_type`, `from`, and `to` filters. |

**Marketplace** (`sdk-rust/src/marketplace.rs`):

| Method | Signature | Description |
|--------|-----------|-------------|
| `list_public_agents` | `(filters: MarketplaceFilters) -> Result<MarketplaceAgentList>` | Lists publicly discoverable agents with optional `q`, `capability`, and `publisher` filters. |

**A2A Delegation** (`sdk-rust/src/delegation.rs`):

| Method | Signature | Description |
|--------|-----------|-------------|
| `delegate` | `(req: DelegateRequest) -> Result<DelegationToken>` | Creates a delegation chain and returns the delegation JWT. |
| `verify_delegation` | `(token: &str) -> Result<DelegationVerification>` | Verifies a delegation token and returns the verified claims. |

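The cheap-clone behaviour can be sketched with a standalone struct. This is an illustration of the pattern only, not the SDK's actual layout: the names `Client`, `Inner`, and `base_url` are hypothetical. Cloning only bumps an `Arc` reference count, so handing a clone to each task costs almost nothing.

```rust
use std::sync::Arc;

// Hypothetical sketch of the Arc-backed cheap-clone pattern: the expensive
// state lives behind an Arc, so Clone is a reference-count bump.
#[derive(Clone)]
pub struct Client {
    inner: Arc<Inner>,
}

struct Inner {
    base_url: String,
}

impl Client {
    pub fn new(base_url: &str) -> Self {
        Client {
            inner: Arc::new(Inner { base_url: base_url.to_string() }),
        }
    }

    // True when two handles share the same underlying allocation.
    pub fn shares_state_with(&self, other: &Client) -> bool {
        Arc::ptr_eq(&self.inner, &other.inner)
    }
}

fn main() {
    let a = Client::new("http://localhost:3000");
    let b = a.clone(); // cheap: no new connection pool, no new token cache
    assert!(a.shares_state_with(&b));
    assert_eq!(Arc::strong_count(&a.inner), 2);
    println!("base_url: {}", b.inner.base_url);
}
```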
---

### Error Types

All SDK operations return `Result<T, AgentIdPError>`. Match on the enum variants for structured error handling:

```rust
use sentryagent_idp::AgentIdPError;

match client.get_agent("unknown-id").await {
    Ok(agent) => println!("Found: {}", agent.name),
    Err(AgentIdPError::NotFound(msg)) => {
        eprintln!("Agent not found: {}", msg);
    }
    Err(AgentIdPError::AuthError(msg)) => {
        eprintln!("Auth failed: {}", msg);
        // Token may have been revoked — check credentials
    }
    Err(AgentIdPError::RateLimited { retry_after_secs }) => {
        eprintln!("Rate limited — retry after {}s", retry_after_secs);
        tokio::time::sleep(std::time::Duration::from_secs(retry_after_secs)).await;
    }
    Err(AgentIdPError::ApiError { status, message, code }) => {
        eprintln!("API error {}: {} (code: {:?})", status, message, code);
    }
    Err(AgentIdPError::ConfigError(msg)) => {
        // Missing environment variable — fix before running
        eprintln!("Config error: {}", msg);
    }
    Err(AgentIdPError::HttpError(e)) => {
        // reqwest transport error — network issue
        eprintln!("HTTP transport error: {}", e);
    }
    Err(AgentIdPError::SerdeError(e)) => {
        // JSON parse failure — API response shape mismatch
        eprintln!("Serialization error: {}", e);
    }
    Err(AgentIdPError::DelegationError(msg)) => {
        eprintln!("Delegation chain invalid: {}", msg);
    }
}
```

| Variant | Trigger | HTTP status |
|---------|---------|-------------|
| `HttpError(reqwest::Error)` | Network-level failure (connection refused, timeout) | N/A |
| `ApiError { status, message, code }` | Non-2xx response not matching a specific variant | Any non-2xx |
| `AuthError(String)` | 401 or 403 from the API | 401, 403 |
| `NotFound(String)` | 404 from the API | 404 |
| `RateLimited { retry_after_secs }` | 429 — parses the `Retry-After` header (defaults to 60s) | 429 |
| `ConfigError(String)` | Missing env var in `from_env()` | N/A |
| `SerdeError(serde_json::Error)` | JSON deserialisation failure | N/A |
| `DelegationError(String)` | Invalid delegation chain | N/A |

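The `RateLimited` fallback described in the table can be sketched as a small standalone helper. The function name is ours, not the SDK's: parse `Retry-After` as whole seconds, and fall back to 60 when the header is absent or unparsable.

```rust
// Hypothetical helper mirroring the documented Retry-After behaviour:
// whole-second values are parsed, anything else defaults to 60.
fn retry_after_secs(header: Option<&str>) -> u64 {
    header
        .and_then(|v| v.trim().parse::<u64>().ok())
        .unwrap_or(60)
}

fn main() {
    assert_eq!(retry_after_secs(Some("120")), 120);
    assert_eq!(retry_after_secs(Some("not-a-number")), 60);
    assert_eq!(retry_after_secs(None), 60);
    println!("defaults verified");
}
```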
---

### Adding a New Endpoint to the Rust SDK

When the AgentIdP server adds a new API endpoint, add it to the Rust SDK using this checklist:

**File structure** (`sdk-rust/src/`):

```
sdk-rust/src/
├── lib.rs             # Crate root — re-exports and module declarations
├── client.rs          # AgentIdPClient struct and new()/from_env() constructors
├── token_manager.rs   # TokenManager — async token cache
├── models.rs          # All request/response structs (serde Serialize/Deserialize)
├── error.rs           # AgentIdPError enum
├── agents.rs          # Agent registry methods (impl AgentIdPClient)
├── credentials.rs     # Credential management methods
├── oauth2.rs          # Token issuance methods
├── audit.rs           # Audit log methods
├── marketplace.rs     # Marketplace methods
└── delegation.rs      # A2A delegation methods
```

**Checklist:**

- [ ] Add request/response structs to `models.rs` with `#[derive(Debug, serde::Serialize, serde::Deserialize)]`
- [ ] Add the method to the appropriate `impl AgentIdPClient` block in the relevant `<domain>.rs` file. If the endpoint belongs to a new domain, create a new file and declare it as `pub mod <domain>;` in `lib.rs`
- [ ] Use `self.get_auth_header().await?` for the `Authorization: Bearer` header
- [ ] Use the shared `parse_response::<T>(resp).await` helper (defined in `agents.rs`) to map HTTP status codes to `AgentIdPError` variants
- [ ] Add a doc comment (`///`) to the method with: the HTTP method + path, the success response type, and `# Errors` listing which `AgentIdPError` variants it can return
- [ ] Re-export new public types from `lib.rs` with `pub use models::{NewRequestType, NewResponseType};`
- [ ] Add a unit test using `mockito::Server` (see the `token_manager.rs` tests for the pattern)
- [ ] Run `cargo test` and verify all tests pass
- [ ] Run `cargo doc --no-deps --open` and verify the new method appears with correct documentation
- [ ] Verify `cargo clippy -- -D warnings` exits 0

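The status-to-error mapping the checklist delegates to `parse_response` can be sketched in isolation. The enum and function below are our own standalone stand-ins, shaped after the documented `AgentIdPError` variants rather than copied from the SDK:

```rust
// Hypothetical stand-in for the documented error variants (not the SDK's enum).
#[derive(Debug, PartialEq)]
enum SdkError {
    AuthError(String),
    NotFound(String),
    RateLimited { retry_after_secs: u64 },
    ApiError { status: u16, message: String },
}

// Map an HTTP status (plus body and optional Retry-After seconds) to a variant,
// following the trigger column of the error table above.
fn map_status(status: u16, body: &str, retry_after: Option<u64>) -> SdkError {
    match status {
        401 | 403 => SdkError::AuthError(body.to_string()),
        404 => SdkError::NotFound(body.to_string()),
        429 => SdkError::RateLimited { retry_after_secs: retry_after.unwrap_or(60) },
        other => SdkError::ApiError { status: other, message: body.to_string() },
    }
}

fn main() {
    assert_eq!(map_status(404, "no such agent", None), SdkError::NotFound("no such agent".into()));
    assert_eq!(map_status(403, "denied", None), SdkError::AuthError("denied".into()));
    assert_eq!(map_status(429, "", None), SdkError::RateLimited { retry_after_secs: 60 });
    assert_eq!(map_status(500, "boom", None), SdkError::ApiError { status: 500, message: "boom".into() });
    println!("mapping verified");
}
```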
---

## 7. SDK Contribution Guide — Adding a New Endpoint

When the server adds a new API endpoint, update all four SDKs. The checklist below covers each SDK.

```diff
@@ -12,15 +12,15 @@
 | 2 | [System Architecture](02-architecture.md) | Component diagram, HTTP request lifecycle, OAuth 2.0 data flow, multi-region topology | 20 min |
 | 3 | [Technology Stack and ADRs](03-tech-stack.md) | Why each technology was chosen — rationale and alternatives considered | 20 min |
 | 4 | [Codebase Structure](04-codebase-structure.md) | Directory map, where to add new code, DRY enforcement rules | 15 min |
-| 5 | [Service Deep Dives](05-services.md) | All 8 services/components — purpose, interface, schema, error types | 30 min |
+| 5 | [Service Deep Dives](05-services.md) | All 17 services/components (incl. Phase 3–6: AnalyticsService, TierService, ComplianceService, FederationService, DIDService, WebhookService, BillingService, DelegationService, OIDCService) — purpose, interface, schema, error types | 45 min |
 | 6 | [Annotated Code Walkthroughs](06-walkthroughs.md) | Step-by-step traces of token issuance, agent registration, credential rotation | 30 min |
 | 7 | [Development Environment Setup](07-dev-setup.md) | Clone to running local stack — under 30 minutes | 30 min |
 | 8 | [Engineering Workflow](08-workflow.md) | OpenSpec spec-first workflow, branching, PR checklist, commit conventions | 20 min |
 | 9 | [Testing Strategy](09-testing.md) | Unit vs integration, coverage gates, how to write tests, OWASP reference | 20 min |
 | 10 | [Deployment and Operations](10-deployment.md) | Docker, Terraform, Prometheus/Grafana, operational runbook | 20 min |
-| 11 | [SDK Integration Guide](11-sdk-guide.md) | All 4 SDKs — installation, examples, contribution guide | 20 min |
+| 11 | [SDK Integration Guide](11-sdk-guide.md) | All 5 SDKs (Node.js, Python, Go, Java, Rust) — installation, examples, contribution guide | 25 min |
 
-**Total estimated reading time for new engineers: ~3.5 hours**
+**Total estimated reading time for new engineers: ~4 hours**
 
 ---
@@ -34,8 +34,13 @@
 | Add a new API endpoint | [08-workflow.md](08-workflow.md) + [04-codebase-structure.md](04-codebase-structure.md) |
 | Write tests | [09-testing.md](09-testing.md) |
 | Deploy to production | [10-deployment.md](10-deployment.md) |
-| Integrate with the SDK | [11-sdk-guide.md](11-sdk-guide.md) |
+| Integrate with the SDK (Node.js, Python, Go, Java, Rust) | [11-sdk-guide.md](11-sdk-guide.md) |
 | Understand why a technology was chosen | [03-tech-stack.md](03-tech-stack.md) |
+| Understand tier limits and billing | [01-overview.md](01-overview.md) (Section 6) + [03-tech-stack.md](03-tech-stack.md) (ADR-11) |
+| Understand AGNTCY compliance reports | [05-services.md](05-services.md) (ComplianceService) |
+| Understand the A2A delegation flow | [06-walkthroughs.md](06-walkthroughs.md) (Walkthrough 4) |
+| Run the AGNTCY conformance suite | [09-testing.md](09-testing.md) (Section 10.8) |
+| Add a new Rust SDK endpoint | [11-sdk-guide.md](11-sdk-guide.md) (Section 6 contribution guide) |
 
 ---
```
```diff
@@ -13,6 +13,12 @@ info:
     and lifecycle status management. The registry is the authoritative source of
     truth for all registered agent identities.
+
+    **Tenant Isolation**:
+    All agent endpoints enforce strict organization-level tenant isolation. The
+    caller's `organization_id` is derived exclusively from the verified JWT
+    `organization_id` claim — it can never be overridden by request body values
+    or query parameters. Cross-tenant access always returns `403 Forbidden`.
 
     **Free Tier Limits**:
     - Max 100 registered agents per account
     - API rate limit: 100 requests/minute
@@ -38,6 +44,10 @@ components:
         (`POST /token`). Include in the `Authorization` header as:
         `Authorization: Bearer <token>`
+
+        The JWT must contain an `organization_id` claim. This claim is used
+        to scope all agent operations to the caller's organization and cannot
+        be overridden by any value in the request body or query string.
 
   schemas:
     AgentType:
       type: string
@@ -294,14 +304,14 @@ components:
           message: "A valid Bearer token is required to access this resource."
 
     Forbidden:
-      description: Valid token but insufficient permissions.
+      description: The caller does not have permission to access this resource.
       content:
         application/json:
           schema:
             $ref: '#/components/schemas/ErrorResponse'
           example:
-            code: "FORBIDDEN"
-            message: "You do not have permission to perform this action."
+            code: "AUTHORIZATION_ERROR"
+            message: "You do not have permission to access this resource."
 
     NotFound:
       description: The requested resource was not found.
@@ -365,6 +375,12 @@ paths:
         A unique immutable `agentId` (UUID) is system-assigned on creation.
         The `email` must be unique across all registered agents.
+
+        **Tenant Isolation — Rule 3 (Register Scoping)**:
+        The agent is always registered under the caller's organization, derived
+        from the JWT `organization_id` claim. Any `organizationId` value provided
+        in the request body is silently ignored. It is not possible to register
+        an agent under a different organization, regardless of request body content.
 
         **Free Tier**: Maximum 100 registered agents per account. Attempting to
         register beyond this limit returns `403 Forbidden` with code `FREE_TIER_LIMIT_EXCEEDED`.
       requestBody:
@@ -430,17 +446,23 @@ paths:
         '401':
          $ref: '#/components/responses/Unauthorized'
         '403':
-          description: Forbidden. Either insufficient permissions or free tier limit reached.
+          description: |
+            Forbidden. One of the following conditions applies:
+
+            - **`AUTHORIZATION_ERROR`**: The caller's JWT does not grant permission to
+              register agents in their organization.
+            - **`FREE_TIER_LIMIT_EXCEEDED`**: The free tier limit of 100 registered
+              agents per account has been reached.
           content:
             application/json:
               schema:
                 $ref: '#/components/schemas/ErrorResponse'
               examples:
-                insufficientPermissions:
-                  summary: Insufficient permissions
+                authorizationError:
+                  summary: Caller does not have permission to register agents
                   value:
-                    code: "FORBIDDEN"
-                    message: "You do not have permission to register agents."
+                    code: "AUTHORIZATION_ERROR"
+                    message: "You do not have permission to access this resource."
                 freeTierLimit:
                   summary: Free tier agent limit reached
                   value:
@@ -471,10 +493,16 @@ paths:
         - Agent Registry
       summary: List registered agents
       description: |
-        Returns a paginated list of all registered AI agent identities accessible
-        to the authenticated caller.
+        Returns a paginated list of registered AI agent identities belonging to
+        the caller's organization.
+
+        **Tenant Isolation — Rule 1 (List Scoping)**:
+        Results are always scoped to the caller's organization, derived from the
+        JWT `organization_id` claim. It is not possible to retrieve agents from
+        another organization. The `owner` query parameter sub-filters within the
+        caller's organization only — it does not widen the scope beyond the
+        caller's organization.
 
         Results can be filtered by `owner`, `agentType`, and/or `status`.
         Results are ordered by `createdAt` descending (most recent first).
       parameters:
         - name: page
@@ -498,7 +526,9 @@ paths:
           example: 20
         - name: owner
           in: query
-          description: Filter agents by owner name (exact match).
+          description: |
+            Filter agents by owner name (exact match). Applies within the caller's
+            organization only — does not allow cross-tenant access.
           required: false
           schema:
             type: string
@@ -580,7 +610,16 @@ paths:
         '401':
           $ref: '#/components/responses/Unauthorized'
         '403':
-          $ref: '#/components/responses/Forbidden'
+          description: |
+            Forbidden. The caller's JWT does not grant permission to list agents
+            in their organization.
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/ErrorResponse'
+              example:
+                code: "AUTHORIZATION_ERROR"
+                message: "You do not have permission to access this resource."
         '429':
           $ref: '#/components/responses/TooManyRequests'
         '500':
@@ -604,6 +643,13 @@ paths:
       summary: Get agent by ID
       description: |
         Retrieves the full identity record for a single AI agent by its immutable `agentId`.
+
+        **Tenant Isolation — Rule 2 (Ownership Guard)**:
+        If the target agent's `organization_id` does not match the caller's
+        `organization_id` (derived from the JWT `organization_id` claim), the
+        request is rejected with `403 Forbidden` and error code `AUTHORIZATION_ERROR`.
+        This applies regardless of whether the `agentId` exists. A caller from
+        Org A cannot determine the existence of an agent belonging to Org B.
       responses:
         '200':
           description: Agent record returned successfully.
@@ -641,7 +687,17 @@ paths:
         '401':
           $ref: '#/components/responses/Unauthorized'
         '403':
-          $ref: '#/components/responses/Forbidden'
+          description: |
+            Forbidden. The target agent belongs to a different organization than
+            the caller's. The caller's `organization_id` (from JWT) does not match
+            the `organization_id` stored on the target agent record.
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/ErrorResponse'
+              example:
+                code: "AUTHORIZATION_ERROR"
+                message: "You do not have permission to access this resource."
         '404':
           $ref: '#/components/responses/NotFound'
         '429':
@@ -663,6 +719,12 @@ paths:
 
         Setting `status` to `decommissioned` is a one-way operation — a
         decommissioned agent cannot be reactivated.
+
+        **Tenant Isolation — Rule 2 (Ownership Guard)**:
+        If the target agent's `organization_id` does not match the caller's
+        `organization_id` (derived from the JWT `organization_id` claim), the
+        request is rejected with `403 Forbidden` and error code `AUTHORIZATION_ERROR`.
+        It is not possible to update an agent belonging to a different organization.
       requestBody:
         required: true
         content:
@@ -737,17 +799,24 @@ paths:
         '401':
           $ref: '#/components/responses/Unauthorized'
         '403':
-          description: Forbidden. Insufficient permissions or agent is decommissioned.
+          description: |
+            Forbidden. One of the following conditions applies:
+
+            - **`AUTHORIZATION_ERROR`**: The target agent belongs to a different
+              organization than the caller's. The caller's `organization_id` (from JWT)
+              does not match the `organization_id` stored on the target agent record.
+            - **`AGENT_DECOMMISSIONED`**: The target agent has been permanently
+              decommissioned and cannot be updated.
           content:
             application/json:
              schema:
                 $ref: '#/components/schemas/ErrorResponse'
               examples:
-                forbidden:
-                  summary: Insufficient permissions
+                authorizationError:
+                  summary: Cross-tenant access denied
                   value:
-                    code: "FORBIDDEN"
-                    message: "You do not have permission to update this agent."
+                    code: "AUTHORIZATION_ERROR"
+                    message: "You do not have permission to access this resource."
                 decommissioned:
                   summary: Agent is decommissioned
                   value:
@@ -777,6 +846,12 @@ paths:
         - The agent can no longer authenticate or obtain tokens.
         - The agent record remains visible in the registry with status `decommissioned`.
         - This operation is **irreversible**.
+
+        **Tenant Isolation — Rule 2 (Ownership Guard)**:
+        If the target agent's `organization_id` does not match the caller's
+        `organization_id` (derived from the JWT `organization_id` claim), the
+        request is rejected with `403 Forbidden` and error code `AUTHORIZATION_ERROR`.
+        It is not possible to decommission an agent belonging to a different organization.
       responses:
         '204':
           description: Agent decommissioned successfully. No response body.
@@ -796,7 +871,17 @@ paths:
         '401':
           $ref: '#/components/responses/Unauthorized'
         '403':
-          $ref: '#/components/responses/Forbidden'
+          description: |
+            Forbidden. The target agent belongs to a different organization than
+            the caller's. The caller's `organization_id` (from JWT) does not match
+            the `organization_id` stored on the target agent record.
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/ErrorResponse'
+              example:
+                code: "AUTHORIZATION_ERROR"
+                message: "You do not have permission to access this resource."
         '404':
           $ref: '#/components/responses/NotFound'
         '409':
```
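The Ownership Guard rule in the spec changes above reduces to a single comparison. The sketch below is illustrative only (the function name and error tuple are ours, not the server's code): the caller's organization comes exclusively from the verified JWT claim, and a mismatch yields `403` with `AUTHORIZATION_ERROR`.

```rust
// Hypothetical sketch of the Rule 2 ownership guard: compare the org from the
// verified JWT claim against the org stored on the target record.
fn guard(caller_org: &str, agent_org: &str) -> Result<(), (u16, &'static str)> {
    if caller_org == agent_org {
        Ok(())
    } else {
        // Per the spec text, cross-tenant access is always rejected with 403.
        Err((403, "AUTHORIZATION_ERROR"))
    }
}

fn main() {
    assert!(guard("org-a", "org-a").is_ok());
    assert_eq!(guard("org-a", "org-b"), Err((403, "AUTHORIZATION_ERROR")));
    println!("guard verified");
}
```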
**docs/openapi/analytics.yaml** (new file, 428 lines):

```yaml
openapi: "3.0.3"

info:
  title: SentryAgent.ai — Tenant Analytics
  version: 1.0.0
  description: |
    Tenant analytics endpoints for the SentryAgent.ai AgentIdP platform.

    Provides usage trend data, agent activity heatmaps, and per-agent usage summaries
    scoped to the authenticated organization (tenant).

    **All endpoints require a valid Bearer JWT.** Data is always scoped to the
    organization identified by the `organization_id` claim in the token.

    **Feature flag:** When `ANALYTICS_ENABLED=false` these routes return 404.

    **Available endpoints:**
    - `GET /analytics/tokens` — Daily token issuance trend (last N days)
    - `GET /analytics/agents/activity` — Agent activity heatmap by day-of-week + hour
    - `GET /analytics/agents` — Per-agent usage summary for the current month

servers:
  - url: http://localhost:3000/api/v1
    description: Local development server
  - url: https://api.sentryagent.ai/v1
    description: Production server

tags:
  - name: Analytics
    description: Tenant-scoped usage analytics and reporting

components:
  securitySchemes:
    BearerAuth:
      type: http
      scheme: bearer
      bearerFormat: JWT
      description: |
        JWT access token obtained via `POST /token`.
        Include as `Authorization: Bearer <token>`.

  schemas:
    TokenTrendDataPoint:
      type: object
      description: Token issuance count for a single calendar day.
      required:
        - date
        - count
      properties:
        date:
          type: string
          format: date
          description: Calendar date (UTC) in `YYYY-MM-DD` format.
          example: "2026-04-01"
        count:
          type: integer
          description: Number of OAuth 2.0 tokens issued on this date.
          minimum: 0
          example: 842

    TokenTrendResponse:
      type: object
      description: Daily token issuance trend for the last N days.
      required:
        - organizationId
        - days
        - data
      properties:
        organizationId:
          type: string
          format: uuid
          description: Organization the analytics data belongs to.
          example: "org-1234-5678-abcd-ef01"
        days:
          type: integer
          description: Number of days included in the trend window.
          example: 30
        data:
          type: array
          description: |
            Array of daily data points ordered by date ascending.
            Days with no token issuances have `count: 0`.
          items:
            $ref: '#/components/schemas/TokenTrendDataPoint'

    ActivityHeatmapCell:
      type: object
      description: |
        A single cell in the agent activity heatmap, identified by
        day-of-week (0 = Sunday) and hour of day (0–23 UTC).
      required:
        - dayOfWeek
        - hour
        - count
      properties:
        dayOfWeek:
          type: integer
          description: Day of week (0 = Sunday, 6 = Saturday).
          minimum: 0
          maximum: 6
          example: 1
        hour:
          type: integer
          description: Hour of day in UTC (0–23).
          minimum: 0
          maximum: 23
          example: 14
        count:
          type: integer
          description: Number of token issuances or API calls in this slot.
          minimum: 0
          example: 217

    AgentActivityResponse:
      type: object
      description: |
        Agent activity heatmap — shows when agents are most active
        by day-of-week and hour (UTC). Useful for identifying peak usage patterns.
      required:
        - organizationId
        - data
      properties:
        organizationId:
          type: string
          format: uuid
          example: "org-1234-5678-abcd-ef01"
        data:
          type: array
          description: |
            Array of heatmap cells. Contains only cells with `count > 0`.
            Maximum 168 cells (7 days × 24 hours).
          items:
            $ref: '#/components/schemas/ActivityHeatmapCell'

    AgentUsageSummary:
      type: object
      description: Per-agent usage summary for the current calendar month.
      required:
        - agentId
        - tokensIssued
        - apiCalls
      properties:
        agentId:
          type: string
          format: uuid
          description: UUID of the agent.
          example: "a1b2c3d4-e5f6-7890-abcd-ef1234567890"
        agentEmail:
          type: string
          format: email
          description: Email identifier of the agent.
          example: "screener-001@sentryagent.ai"
        tokensIssued:
          type: integer
          description: Number of tokens issued for this agent in the current month.
          minimum: 0
          example: 1204
        apiCalls:
          type: integer
          description: Total API calls made by this agent in the current month.
          minimum: 0
          example: 5432
        lastActiveAt:
          type: string
          format: date-time
          nullable: true
          description: Timestamp of the agent's last API activity. Null if no activity this month.
          example: "2026-04-07T08:45:00.000Z"

    AgentSummaryResponse:
      type: object
      description: Per-agent usage summary for the current month, across all agents in the organization.
      required:
        - organizationId
        - month
        - data
      properties:
        organizationId:
          type: string
          format: uuid
          example: "org-1234-5678-abcd-ef01"
        month:
          type: string
          description: Current billing month in `YYYY-MM` format.
          example: "2026-04"
        data:
          type: array
          description: Per-agent usage summaries, ordered by `tokensIssued` descending.
          items:
            $ref: '#/components/schemas/AgentUsageSummary'

    ErrorResponse:
      type: object
      description: Standard error response envelope.
      required:
        - code
        - message
      properties:
        code:
          type: string
          example: "UNAUTHORIZED"
        message:
          type: string
          example: "A valid Bearer token is required to access this resource."
        details:
          type: object
          additionalProperties: true

  responses:
    Unauthorized:
      description: Missing or invalid Bearer token.
      content:
        application/json:
          schema:
            $ref: '#/components/schemas/ErrorResponse'
          example:
            code: "UNAUTHORIZED"
            message: "A valid Bearer token is required to access this resource."

    InternalServerError:
      description: Unexpected server error.
      content:
        application/json:
          schema:
            $ref: '#/components/schemas/ErrorResponse'
          example:
            code: "INTERNAL_SERVER_ERROR"
            message: "An unexpected error occurred. Please try again later."

security:
  - BearerAuth: []

paths:
  /analytics/tokens:
    get:
      operationId: getTokenTrend
      tags:
        - Analytics
      summary: Get daily token issuance trend
      description: |
        Returns a daily breakdown of OAuth 2.0 token issuances for the authenticated
        organization over the last N days.

        The `days` parameter controls the window size (default: 30, max: 90).
        Days with no token activity are included with `count: 0`.

        Data is scoped to the `organization_id` from the Bearer token.
      parameters:
        - name: days
          in: query
          required: false
          description: "Number of days to include in the trend window. Default: 30, max: 90."
          schema:
            type: integer
            minimum: 1
            maximum: 90
            default: 30
          example: 30
      responses:
        '200':
```
|
description: Token trend data returned successfully.
|
||||||
|
content:
|
||||||
|
application/json:
|
||||||
|
schema:
|
||||||
|
$ref: '#/components/schemas/TokenTrendResponse'
|
||||||
|
example:
|
||||||
|
organizationId: "org-1234-5678-abcd-ef01"
|
||||||
|
days: 7
|
||||||
|
data:
|
||||||
|
- date: "2026-04-01"
|
||||||
|
count: 842
|
||||||
|
- date: "2026-04-02"
|
||||||
|
count: 967
|
||||||
|
- date: "2026-04-03"
|
||||||
|
count: 0
|
||||||
|
- date: "2026-04-04"
|
||||||
|
count: 1201
|
||||||
|
- date: "2026-04-05"
|
||||||
|
count: 1087
|
||||||
|
- date: "2026-04-06"
|
||||||
|
count: 953
|
||||||
|
- date: "2026-04-07"
|
||||||
|
count: 412
|
||||||
|
'400':
|
||||||
|
description: Invalid `days` parameter.
|
||||||
|
content:
|
||||||
|
application/json:
|
||||||
|
schema:
|
||||||
|
$ref: '#/components/schemas/ErrorResponse'
|
||||||
|
examples:
|
||||||
|
tooLarge:
|
||||||
|
summary: Exceeds maximum
|
||||||
|
value:
|
||||||
|
code: "VALIDATION_ERROR"
|
||||||
|
message: "Query parameter `days` must not exceed 90."
|
||||||
|
details:
|
||||||
|
field: "days"
|
||||||
|
max: 90
|
||||||
|
provided: 120
|
||||||
|
invalid:
|
||||||
|
summary: Non-positive integer
|
||||||
|
value:
|
||||||
|
code: "VALIDATION_ERROR"
|
||||||
|
message: "Query parameter `days` must be a positive integer."
|
||||||
|
details:
|
||||||
|
field: "days"
|
||||||
|
'401':
|
||||||
|
$ref: '#/components/responses/Unauthorized'
|
||||||
|
'404':
|
||||||
|
description: Analytics feature is not enabled on this instance.
|
||||||
|
content:
|
||||||
|
application/json:
|
||||||
|
schema:
|
||||||
|
$ref: '#/components/schemas/ErrorResponse'
|
||||||
|
example:
|
||||||
|
code: "NOT_FOUND"
|
||||||
|
message: "Analytics is not enabled on this instance."
|
||||||
|
'500':
|
||||||
|
$ref: '#/components/responses/InternalServerError'
|
||||||
|
|
||||||
|
/analytics/agents/activity:
|
||||||
|
get:
|
||||||
|
operationId: getAgentActivity
|
||||||
|
tags:
|
||||||
|
- Analytics
|
||||||
|
summary: Get agent activity heatmap
|
||||||
|
description: |
|
||||||
|
Returns agent activity aggregated by day-of-week (0 = Sunday) and hour of day (UTC).
|
||||||
|
|
||||||
|
The heatmap shows when agents in the organization are most active,
|
||||||
|
based on token issuances and API calls. Only cells with `count > 0` are returned.
|
||||||
|
|
||||||
|
Data is scoped to the `organization_id` from the Bearer token.
|
||||||
|
The heatmap covers the last 90 days of activity.
|
||||||
|
responses:
|
||||||
|
'200':
|
||||||
|
description: Agent activity heatmap returned successfully.
|
||||||
|
content:
|
||||||
|
application/json:
|
||||||
|
schema:
|
||||||
|
$ref: '#/components/schemas/AgentActivityResponse'
|
||||||
|
example:
|
||||||
|
organizationId: "org-1234-5678-abcd-ef01"
|
||||||
|
data:
|
||||||
|
- dayOfWeek: 1
|
||||||
|
hour: 9
|
||||||
|
count: 342
|
||||||
|
- dayOfWeek: 1
|
||||||
|
hour: 14
|
||||||
|
count: 217
|
||||||
|
- dayOfWeek: 2
|
||||||
|
hour: 10
|
||||||
|
count: 189
|
||||||
|
- dayOfWeek: 3
|
||||||
|
hour: 11
|
||||||
|
count: 405
|
||||||
|
- dayOfWeek: 4
|
||||||
|
hour: 14
|
||||||
|
count: 278
|
||||||
|
- dayOfWeek: 5
|
||||||
|
hour: 9
|
||||||
|
count: 121
|
||||||
|
'401':
|
||||||
|
$ref: '#/components/responses/Unauthorized'
|
||||||
|
'404':
|
||||||
|
description: Analytics feature is not enabled on this instance.
|
||||||
|
content:
|
||||||
|
application/json:
|
||||||
|
schema:
|
||||||
|
$ref: '#/components/schemas/ErrorResponse'
|
||||||
|
example:
|
||||||
|
code: "NOT_FOUND"
|
||||||
|
message: "Analytics is not enabled on this instance."
|
||||||
|
'500':
|
||||||
|
$ref: '#/components/responses/InternalServerError'
|
||||||
|
|
||||||
|
/analytics/agents:
|
||||||
|
get:
|
||||||
|
operationId: getAgentSummary
|
||||||
|
tags:
|
||||||
|
- Analytics
|
||||||
|
summary: Get per-agent usage summary
|
||||||
|
description: |
|
||||||
|
Returns per-agent token issuance counts and API call totals for the
|
||||||
|
current calendar month, across all agents in the authenticated organization.
|
||||||
|
|
||||||
|
Results are ordered by `tokensIssued` descending (most active agents first).
|
||||||
|
|
||||||
|
Data is scoped to the `organization_id` from the Bearer token.
|
||||||
|
responses:
|
||||||
|
'200':
|
||||||
|
description: Per-agent usage summary returned successfully.
|
||||||
|
content:
|
||||||
|
application/json:
|
||||||
|
schema:
|
||||||
|
$ref: '#/components/schemas/AgentSummaryResponse'
|
||||||
|
example:
|
||||||
|
organizationId: "org-1234-5678-abcd-ef01"
|
||||||
|
month: "2026-04"
|
||||||
|
data:
|
||||||
|
- agentId: "a1b2c3d4-e5f6-7890-abcd-ef1234567890"
|
||||||
|
agentEmail: "screener-001@sentryagent.ai"
|
||||||
|
tokensIssued: 1204
|
||||||
|
apiCalls: 5432
|
||||||
|
lastActiveAt: "2026-04-07T08:45:00.000Z"
|
||||||
|
- agentId: "b2c3d4e5-f6a7-8901-bcde-f12345678901"
|
||||||
|
agentEmail: "classifier-002@sentryagent.ai"
|
||||||
|
tokensIssued: 876
|
||||||
|
apiCalls: 3120
|
||||||
|
lastActiveAt: "2026-04-06T14:30:00.000Z"
|
||||||
|
- agentId: "c3d4e5f6-a7b8-9012-cdef-123456789012"
|
||||||
|
agentEmail: "router-003@sentryagent.ai"
|
||||||
|
tokensIssued: 0
|
||||||
|
apiCalls: 0
|
||||||
|
lastActiveAt: null
|
||||||
|
'401':
|
||||||
|
$ref: '#/components/responses/Unauthorized'
|
||||||
|
'404':
|
||||||
|
description: Analytics feature is not enabled on this instance.
|
||||||
|
content:
|
||||||
|
application/json:
|
||||||
|
schema:
|
||||||
|
$ref: '#/components/schemas/ErrorResponse'
|
||||||
|
example:
|
||||||
|
code: "NOT_FOUND"
|
||||||
|
message: "Analytics is not enabled on this instance."
|
||||||
|
'500':
|
||||||
|
$ref: '#/components/responses/InternalServerError'
|
||||||
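The heatmap payload above is deliberately sparse: only cells with `count > 0` are returned, so a consumer has to reintroduce the zero-activity cells before rendering the full day-of-week by hour grid. A minimal client-side sketch in Python (the function name and sample variable are illustrative, not part of the API):

```python
from typing import Dict, List


def build_heatmap_grid(cells: List[Dict]) -> List[List[int]]:
    """Expand the sparse /analytics/agents/activity payload into a dense
    7x24 grid: rows are dayOfWeek (0 = Sunday), columns are UTC hours.
    Any cell absent from the response is a zero-activity cell."""
    grid = [[0] * 24 for _ in range(7)]
    for cell in cells:
        grid[cell["dayOfWeek"]][cell["hour"]] = cell["count"]
    return grid


# Sparse payload, as in the 200 response example
data = [
    {"dayOfWeek": 1, "hour": 9, "count": 342},
    {"dayOfWeek": 3, "hour": 11, "count": 405},
]
grid = build_heatmap_grid(data)  # grid[1][9] == 342, everything else defaults to 0
```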
docs/openapi/billing.yaml (Normal file, +355 lines)
openapi: "3.0.3"

info:
  title: SentryAgent.ai — Billing & Usage Metering
  version: 1.0.0
  description: |
    Billing and usage metering endpoints for the SentryAgent.ai AgentIdP platform.

    Integrates with **Stripe** for subscription and payment management.

    **Authenticated endpoints** (require Bearer JWT):
    - `POST /billing/checkout` — Create a Stripe Checkout Session for plan upgrades
    - `GET /billing/usage` — Retrieve today's usage summary

    **Unauthenticated endpoint** (Stripe webhook receiver):
    - `POST /billing/webhook` — Receives Stripe webhook events (raw body + signature verification)

    **Important:** The `/billing/webhook` endpoint uses `express.raw()` middleware
    to receive the raw request body as a Buffer. Do not apply `express.json()` to this route.
    The `Stripe-Signature` header is required for all webhook deliveries.

servers:
  - url: http://localhost:3000/api/v1
    description: Local development server
  - url: https://api.sentryagent.ai/v1
    description: Production server

tags:
  - name: Billing Checkout
    description: Stripe Checkout Session management
  - name: Billing Webhook
    description: Stripe webhook event receiver (unauthenticated)
  - name: Usage
    description: Usage metering and reporting

components:
  securitySchemes:
    BearerAuth:
      type: http
      scheme: bearer
      bearerFormat: JWT
      description: |
        JWT access token obtained via `POST /token`.
        Include as `Authorization: Bearer <token>`.

  schemas:
    CheckoutRequest:
      type: object
      description: |
        Optional request body for creating a Stripe Checkout Session.
        When `successUrl` or `cancelUrl` are omitted, the platform generates
        default redirect URLs pointing to the dashboard.
      properties:
        successUrl:
          type: string
          format: uri
          description: URL to redirect to after successful payment.
          example: "https://my-app.example.com/dashboard?billing=success"
        cancelUrl:
          type: string
          format: uri
          description: URL to redirect to if the user cancels checkout.
          example: "https://my-app.example.com/dashboard?billing=cancel"

    CheckoutResponse:
      type: object
      description: Stripe Checkout Session URL to redirect the user to.
      required:
        - checkoutUrl
      properties:
        checkoutUrl:
          type: string
          format: uri
          description: |
            Stripe-hosted Checkout page URL. Redirect the authenticated user
            to this URL to complete payment.
          example: "https://checkout.stripe.com/pay/cs_test_abcdef1234567890"

    UsageSummary:
      type: object
      description: |
        Today's usage summary for the authenticated organization.
        Counters reset at UTC midnight.
      required:
        - organizationId
        - date
        - tokensIssued
        - agentsRegistered
        - credentialsGenerated
      properties:
        organizationId:
          type: string
          format: uuid
          description: Organization the usage data belongs to.
          example: "org-1234-5678-abcd-ef01"
        date:
          type: string
          format: date
          description: The calendar date (UTC) this summary covers.
          example: "2026-04-07"
        tokensIssued:
          type: integer
          description: Number of OAuth 2.0 tokens issued today.
          minimum: 0
          example: 4201
        agentsRegistered:
          type: integer
          description: Number of new agents registered today.
          minimum: 0
          example: 3
        credentialsGenerated:
          type: integer
          description: Number of new agent credentials generated today.
          minimum: 0
          example: 5
        apiCallsTotal:
          type: integer
          description: Total API calls across all endpoints today.
          minimum: 0
          example: 12450

    StripeWebhookResponse:
      type: object
      description: Acknowledgement response for a received Stripe webhook event.
      required:
        - received
      properties:
        received:
          type: boolean
          description: Always `true` when the webhook was processed successfully.
          example: true

    ErrorResponse:
      type: object
      description: Standard error response envelope.
      required:
        - code
        - message
      properties:
        code:
          type: string
          example: "VALIDATION_ERROR"
        message:
          type: string
          example: "Missing Stripe-Signature header."
        details:
          type: object
          additionalProperties: true

  responses:
    Unauthorized:
      description: Missing or invalid Bearer token.
      content:
        application/json:
          schema:
            $ref: '#/components/schemas/ErrorResponse'
          example:
            code: "UNAUTHORIZED"
            message: "A valid Bearer token is required to access this resource."

    Forbidden:
      description: Valid token but insufficient permissions.
      content:
        application/json:
          schema:
            $ref: '#/components/schemas/ErrorResponse'
          example:
            code: "FORBIDDEN"
            message: "You do not have permission to perform this action."

    InternalServerError:
      description: Unexpected server error.
      content:
        application/json:
          schema:
            $ref: '#/components/schemas/ErrorResponse'
          example:
            code: "INTERNAL_SERVER_ERROR"
            message: "An unexpected error occurred. Please try again later."

paths:
  /billing/checkout:
    post:
      operationId: createBillingCheckoutSession
      tags:
        - Billing Checkout
      summary: Create a Stripe Checkout Session
      description: |
        Creates a Stripe Checkout Session for the authenticated organization
        to upgrade their subscription plan.

        The organization ID is read from the `organization_id` claim in the
        Bearer JWT — the caller does not need to provide it in the request body.

        The `checkoutUrl` in the response is a Stripe-hosted checkout page.
        Redirect the authenticated user to this URL to complete payment.

        Requires a valid Bearer JWT. The `organization_id` claim must be present in the token.
      security:
        - BearerAuth: []
      requestBody:
        required: false
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/CheckoutRequest'
            example:
              successUrl: "https://my-app.example.com/dashboard?billing=success"
              cancelUrl: "https://my-app.example.com/dashboard?billing=cancel"
      responses:
        '201':
          description: Stripe Checkout Session created successfully.
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/CheckoutResponse'
              example:
                checkoutUrl: "https://checkout.stripe.com/pay/cs_test_abcdef1234567890"
        '400':
          description: Validation error — organization_id missing from token.
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/ErrorResponse'
              example:
                code: "VALIDATION_ERROR"
                message: "organization_id is required in token."
        '401':
          $ref: '#/components/responses/Unauthorized'
        '403':
          $ref: '#/components/responses/Forbidden'
        '500':
          description: Unexpected error or Stripe API error.
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/ErrorResponse'
              example:
                code: "STRIPE_ERROR"
                message: "Failed to create Stripe Checkout Session. Please try again."

  /billing/webhook:
    post:
      operationId: handleStripeWebhook
      tags:
        - Billing Webhook
      summary: Receive Stripe webhook events
      description: |
        Receives webhook events from Stripe's delivery system. This endpoint is
        **unauthenticated** — authentication is provided by Stripe's HMAC signature
        in the `Stripe-Signature` header.

        **Body format:** The request body MUST be the raw JSON payload as sent by
        Stripe (not parsed JSON). The `Content-Type` is `application/json` but
        the body is read as a raw `Buffer` for signature verification.

        **Signature verification:** The `Stripe-Signature` header is required.
        If absent or invalid, the request is rejected with `400`.

        **Supported events processed:**
        - `checkout.session.completed` — Activates subscription after payment
        - `customer.subscription.deleted` — Downgrades plan on cancellation
        - `invoice.payment_failed` — Handles failed renewals
      security: []
      parameters:
        - name: Stripe-Signature
          in: header
          required: true
          description: |
            HMAC signature from Stripe for payload verification.
            Format: `t=<timestamp>,v1=<signature>,...`
          schema:
            type: string
          example: "t=1492774577,v1=5257a869e7ecebeda32affa62cdca3fa51cad7e77a05bd412fbc2a2bzo..."
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              description: Raw Stripe event payload (read as Buffer internally).
              additionalProperties: true
            example:
              id: "evt_1234567890"
              object: "event"
              type: "checkout.session.completed"
              data:
                object:
                  id: "cs_test_abcdef1234567890"
                  customer: "cus_abc123"
      responses:
        '200':
          description: Webhook event received and processed successfully.
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/StripeWebhookResponse'
              example:
                received: true
        '400':
          description: Missing or invalid Stripe-Signature header, or malformed payload.
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/ErrorResponse'
              examples:
                missingSignature:
                  summary: Missing Stripe-Signature header
                  value:
                    code: "VALIDATION_ERROR"
                    message: "Missing Stripe-Signature header."
                invalidSignature:
                  summary: Signature verification failed
                  value:
                    code: "STRIPE_SIGNATURE_INVALID"
                    message: "Webhook signature verification failed."
        '500':
          $ref: '#/components/responses/InternalServerError'

  /billing/usage:
    get:
      operationId: getBillingUsage
      tags:
        - Usage
      summary: Get today's usage summary
      description: |
        Returns the usage summary for the authenticated organization
        for the current calendar day (UTC).

        Usage counters reset at UTC midnight.
        The `organization_id` claim is read from the Bearer JWT.

        Requires a valid Bearer JWT.
      security:
        - BearerAuth: []
      responses:
        '200':
          description: Usage summary returned successfully.
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/UsageSummary'
              example:
                organizationId: "org-1234-5678-abcd-ef01"
                date: "2026-04-07"
                tokensIssued: 4201
                agentsRegistered: 3
                credentialsGenerated: 5
                apiCallsTotal: 12450
        '401':
          $ref: '#/components/responses/Unauthorized'
        '403':
          $ref: '#/components/responses/Forbidden'
        '500':
          $ref: '#/components/responses/InternalServerError'
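The raw-body and `Stripe-Signature` requirements described above can be illustrated with a minimal verification sketch. It follows Stripe's documented scheme (HMAC-SHA256 over `<timestamp>.<payload>` compared against the header's `v1` value, plus a replay-tolerance check); in a real service you would normally call the official SDK (`stripe.Webhook.construct_event`) instead, and the secret shown here is a placeholder:

```python
import hashlib
import hmac
import time


def verify_stripe_signature(payload: bytes, sig_header: str,
                            secret: str, tolerance: int = 300) -> bool:
    """Verify a `Stripe-Signature: t=<ts>,v1=<hex>` header against the raw body.

    The payload must be the unparsed bytes exactly as received; re-serializing
    parsed JSON will change the bytes and break verification (hence the
    express.raw() note in the spec above).
    """
    parts = dict(p.split("=", 1) for p in sig_header.split(","))
    timestamp, expected = parts["t"], parts["v1"]
    signed_payload = f"{timestamp}.".encode() + payload
    computed = hmac.new(secret.encode(), signed_payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(computed, expected):
        return False  # would map to 400 STRIPE_SIGNATURE_INVALID
    # Reject stale timestamps to limit replay attacks
    return abs(time.time() - int(timestamp)) <= tolerance
```

A mismatched or expired signature returns `False`, which the route would translate into the `400` responses documented for `/billing/webhook`.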
docs/openapi/compliance.yaml (Normal file, +548 lines)
|
|||||||
|
openapi: 3.0.3
|
||||||
|
|
||||||
|
info:
|
||||||
|
title: SentryAgent.ai — Compliance & SOC 2 Type II Service
|
||||||
|
version: 1.0.0
|
||||||
|
description: |
|
||||||
|
The Compliance Service exposes endpoints supporting SentryAgent.ai's
|
||||||
|
**SOC 2 Type II** audit readiness programme.
|
||||||
|
|
||||||
|
Two categories of control are surfaced:
|
||||||
|
|
||||||
|
**Audit chain verification** (`GET /audit/verify`) — Confirms cryptographic
|
||||||
|
integrity of the immutable audit log chain across an optional date range.
|
||||||
|
This endpoint provides auditors and compliance tooling with a single call to
|
||||||
|
assert that no audit events have been tampered with, deleted, or reordered
|
||||||
|
after initial capture.
|
||||||
|
|
||||||
|
**SOC 2 control status** (`GET /compliance/controls`) — Returns a live status
|
||||||
|
snapshot for each of the five in-scope SOC 2 Trust Services Criteria controls
|
||||||
|
monitored by the platform. Designed as a lightweight, public health-style
|
||||||
|
endpoint so that monitoring infrastructure can poll without bearer credentials.
|
||||||
|
|
||||||
|
**In-scope SOC 2 controls:**
|
||||||
|
| Control ID | Name | Description |
|
||||||
|
|------------|------|-------------|
|
||||||
|
| `CC6.1` | Encryption at Rest | Verifies database and secrets store encryption is active |
|
||||||
|
| `CC6.7` | TLS Enforcement | Confirms TLS 1.2+ is enforced on all inbound connections |
|
||||||
|
| `CC7.2` | Audit Log Integrity | Validates audit chain hash continuity |
|
||||||
|
| `CC9.2` | Secrets Rotation | Checks that all managed secrets are within rotation policy |
|
||||||
|
| `CC7.1` | Webhook Dead-Letter Monitoring | Asserts dead-letter queue depth is within threshold |
|
||||||
|
|
||||||
|
**Required scope (audit chain verify only):** `audit:read`
|
||||||
|
|
||||||
|
servers:
|
||||||
|
- url: http://localhost:3000/api/v1
|
||||||
|
description: Local development server
|
||||||
|
- url: https://api.sentryagent.ai/v1
|
||||||
|
description: Production server
|
||||||
|
|
||||||
|
tags:
|
||||||
|
- name: Audit Chain
|
||||||
|
description: Cryptographic integrity verification of the immutable audit event chain
|
||||||
|
- name: Compliance Controls
|
||||||
|
description: SOC 2 Type II control status — public health-style monitoring endpoint
|
||||||
|
|
||||||
|
components:
|
||||||
|
securitySchemes:
|
||||||
|
BearerAuth:
|
||||||
|
type: http
|
||||||
|
scheme: bearer
|
||||||
|
bearerFormat: JWT
|
||||||
|
description: |
|
||||||
|
JWT access token with `audit:read` scope, obtained via `POST /token`.
|
||||||
|
Include as: `Authorization: Bearer <token>`
|
||||||
|
|
||||||
|
schemas:
|
||||||
|
ChainVerificationResult:
|
||||||
|
type: object
|
||||||
|
description: |
|
||||||
|
Result of an audit event chain integrity verification run.
|
||||||
|
|
||||||
|
The audit log is structured as a hash-linked chain. Each event stores a
|
||||||
|
reference to the hash of the preceding event. `verified: true` means every
|
||||||
|
event in the requested window was checked and no breaks in the chain were
|
||||||
|
detected.
|
||||||
|
|
||||||
|
When `verified` is `false`, `brokenAtEventId` identifies the first event
|
||||||
|
where the chain integrity check failed, enabling targeted forensic investigation.
|
||||||
|
required:
|
||||||
|
- verified
|
||||||
|
- checkedCount
|
||||||
|
- brokenAtEventId
|
||||||
|
properties:
|
||||||
|
verified:
|
||||||
|
type: boolean
|
||||||
|
description: >
|
||||||
|
`true` if every audit event in the checked range maintains an unbroken
|
||||||
|
cryptographic hash chain; `false` if at least one chain break was detected.
|
||||||
|
example: true
|
||||||
|
checkedCount:
|
||||||
|
type: integer
|
||||||
|
description: Total number of audit events examined during this verification run.
|
||||||
|
minimum: 0
|
||||||
|
example: 2847
|
||||||
|
brokenAtEventId:
|
||||||
|
type: string
|
||||||
|
format: uuid
|
||||||
|
nullable: true
|
||||||
|
description: >
|
||||||
|
UUID of the first audit event where chain continuity failed, or `null`
|
||||||
|
when `verified` is `true`. Only the first detected break is reported;
|
||||||
|
subsequent events are not checked after a break is found.
|
||||||
|
example: null
|
||||||
|
fromDate:
|
||||||
|
type: string
|
||||||
|
format: date-time
|
||||||
|
description: >
|
||||||
|
The ISO 8601 lower bound of the date range that was verified.
|
||||||
|
Present only when a `fromDate` query parameter was supplied.
|
||||||
|
example: "2026-03-01T00:00:00.000Z"
|
||||||
|
toDate:
|
||||||
|
type: string
|
||||||
|
format: date-time
|
||||||
|
description: >
|
||||||
|
The ISO 8601 upper bound of the date range that was verified.
|
||||||
|
Present only when a `toDate` query parameter was supplied.
|
||||||
|
example: "2026-03-31T23:59:59.999Z"
|
||||||
|
|
||||||
|
ControlStatus:
|
||||||
|
type: string
|
||||||
|
description: Operational status of a SOC 2 control at the time of the last check.
|
||||||
|
enum:
|
||||||
|
- passing
|
||||||
|
- failing
|
||||||
|
- unknown
|
||||||
|
example: passing
|
||||||
|
|
||||||
|
ComplianceControl:
|
||||||
|
type: object
|
||||||
|
description: Status record for a single SOC 2 Trust Services Criteria control.
|
||||||
|
required:
|
||||||
|
- id
|
||||||
|
- name
|
||||||
|
- status
|
||||||
|
- lastChecked
|
||||||
|
properties:
|
||||||
|
id:
|
||||||
|
type: string
|
||||||
|
description: SOC 2 Trust Services Criteria control identifier.
|
||||||
|
enum:
|
||||||
|
- CC6.1
|
||||||
|
- CC6.7
|
||||||
|
- CC7.2
|
||||||
|
- CC9.2
|
||||||
|
- CC7.1
|
||||||
|
example: "CC6.1"
|
||||||
|
name:
|
||||||
|
type: string
|
||||||
|
description: Human-readable name of the control.
|
||||||
|
example: "Encryption at Rest"
|
||||||
|
status:
|
||||||
|
$ref: '#/components/schemas/ControlStatus'
|
||||||
|
lastChecked:
|
||||||
|
type: string
|
||||||
|
format: date-time
|
||||||
|
description: ISO 8601 timestamp of the most recent automated check for this control.
|
||||||
|
example: "2026-03-31T06:00:00.000Z"
|
||||||
|
|
||||||
|
ComplianceControlsResponse:
|
||||||
|
type: object
|
||||||
|
description: SOC 2 compliance control status summary for all in-scope controls.
|
||||||
|
required:
|
||||||
|
- controls
|
||||||
|
properties:
|
||||||
|
controls:
|
||||||
|
type: array
|
||||||
|
description: Status record for each of the five in-scope SOC 2 controls.
|
||||||
|
minItems: 5
|
||||||
|
maxItems: 5
|
||||||
|
items:
|
||||||
|
$ref: '#/components/schemas/ComplianceControl'
|
||||||
|
example:
|
||||||
|
- id: "CC6.1"
|
||||||
|
name: "Encryption at Rest"
|
||||||
|
status: "passing"
|
||||||
|
lastChecked: "2026-03-31T06:00:00.000Z"
|
||||||
|
- id: "CC6.7"
|
||||||
|
name: "TLS Enforcement"
|
||||||
|
status: "passing"
|
||||||
|
lastChecked: "2026-03-31T06:00:00.000Z"
|
||||||
|
- id: "CC7.2"
|
||||||
|
name: "Audit Log Integrity"
|
||||||
|
status: "passing"
|
||||||
|
lastChecked: "2026-03-31T06:00:00.000Z"
|
||||||
|
- id: "CC9.2"
|
||||||
|
name: "Secrets Rotation"
|
||||||
|
status: "passing"
|
||||||
|
lastChecked: "2026-03-31T06:00:00.000Z"
|
||||||
|
- id: "CC7.1"
|
||||||
|
name: "Webhook Dead-Letter Monitoring"
|
||||||
|
status: "passing"
|
||||||
|
lastChecked: "2026-03-31T06:00:00.000Z"
|
||||||
|
|
||||||
|
ErrorResponse:
|
||||||
|
type: object
|
||||||
|
description: Standard error response envelope used across all SentryAgent.ai APIs.
|
||||||
|
required:
|
||||||
|
- code
|
||||||
|
- message
|
||||||
|
properties:
|
||||||
|
code:
|
||||||
|
type: string
|
||||||
|
description: Machine-readable error code.
|
||||||
|
example: "UNAUTHORIZED"
|
||||||
|
message:
|
||||||
|
type: string
|
||||||
|
description: Human-readable description of the error.
|
||||||
|
example: "A valid Bearer token is required."
|
||||||
|
details:
|
||||||
|
type: object
|
||||||
|
description: Optional structured details providing additional context.
|
||||||
|
additionalProperties: true
|
||||||
|
example: {}
|
||||||
|
|
||||||
|
responses:
|
||||||
|
Unauthorized:
|
||||||
|
description: Missing or invalid Bearer token.
|
||||||
|
content:
|
||||||
|
application/json:
|
||||||
|
schema:
|
||||||
|
$ref: '#/components/schemas/ErrorResponse'
|
||||||
|
example:
|
||||||
|
code: "UNAUTHORIZED"
|
||||||
|
message: "A valid Bearer token is required to access this resource."
|
||||||
|
|
||||||
|
Forbidden:
|
||||||
|
description: Valid token but insufficient permissions. Requires `audit:read` scope.
|
||||||
|
content:
|
||||||
|
application/json:
|
||||||
|
schema:
|
||||||
|
$ref: '#/components/schemas/ErrorResponse'
|
||||||
|
example:
|
||||||
|
code: "INSUFFICIENT_SCOPE"
|
||||||
|
message: "The 'audit:read' scope is required to verify the audit chain."
|
||||||
|
|
||||||
|
TooManyRequests:
|
||||||
|
description: |
|
||||||
|
Rate limit exceeded. Retry after the reset time indicated in `X-RateLimit-Reset`.
|
||||||
|
headers:
|
||||||
|
X-RateLimit-Limit:
|
||||||
|
schema:
|
||||||
|
type: integer
|
||||||
|
description: Maximum requests allowed per minute.
|
||||||
|
example: 30
|
||||||
|
X-RateLimit-Remaining:
|
||||||
|
schema:
|
||||||
|
type: integer
|
||||||
|
description: Requests remaining in the current window.
|
||||||
|
example: 0
|
||||||
|
X-RateLimit-Reset:
|
||||||
|
schema:
|
||||||
|
type: integer
|
||||||
|
description: Unix timestamp when the rate limit window resets.
|
||||||
|
example: 1743155400
|
||||||
|
content:
|
||||||
|
application/json:
|
||||||
|
schema:
|
||||||
|
$ref: '#/components/schemas/ErrorResponse'
|
||||||
|
example:
|
||||||
|
code: "RATE_LIMIT_EXCEEDED"
|
||||||
|
message: "Too many requests. Please retry after the rate limit window resets."
|
||||||
|
|
||||||
|
InternalServerError:
|
||||||
|
description: Unexpected server error.
|
||||||
|
content:
|
||||||
|
application/json:
|
||||||
|
schema:
|
||||||
|
$ref: '#/components/schemas/ErrorResponse'
|
||||||
|
example:
|
||||||
|
code: "INTERNAL_SERVER_ERROR"
|
||||||
|
message: "An unexpected error occurred. Please try again later."
|
||||||
|
|
||||||
|
paths:
  /audit/verify:
    get:
      operationId: verifyAuditChain
      tags:
        - Audit Chain
      summary: Verify audit log chain integrity
      description: |
        Triggers a full integrity verification pass over the immutable audit event
        chain. Each event in the log contains a cryptographic hash of the previous
        event; this endpoint traverses the chain and confirms no breaks exist.

        **Use cases:**
        - Auditor evidence collection for SOC 2 Type II assessment
        - Continuous compliance monitoring (cron-driven)
        - Incident response — confirm audit log has not been tampered with

        **Requires:** Bearer token with `audit:read` scope.

        **Rate limit:** 30 requests/minute per `client_id`. Audit chain verification
        is a computationally intensive operation and is rate-limited more aggressively
        than standard read endpoints. For continuous monitoring, poll no more than
        once per minute.

        **Date range filtering:** Supply `fromDate` and/or `toDate` to restrict
        verification to a specific window. When omitted, the entire retained audit
        log is verified. `fromDate` must be before or equal to `toDate` when both
        are provided.

        **Result interpretation:**
        - `verified: true` — chain is intact across all checked events
        - `verified: false` — at least one chain break detected; `brokenAtEventId`
          identifies the first affected event
      security:
        - BearerAuth: []
      parameters:
        - name: fromDate
          in: query
          description: |
            ISO 8601 date-time lower bound for the verification window (inclusive).
            When omitted, verification starts from the earliest available audit event.
            Must be before or equal to `toDate` when both are supplied.
          required: false
          schema:
            type: string
            format: date-time
            example: "2026-03-01T00:00:00.000Z"
        - name: toDate
          in: query
          description: |
            ISO 8601 date-time upper bound for the verification window (inclusive).
            When omitted, verification runs up to and including the most recent
            audit event. Must be after or equal to `fromDate` when both are supplied.
          required: false
          schema:
            type: string
            format: date-time
            example: "2026-03-31T23:59:59.999Z"
      responses:
        '200':
          description: |
            Audit chain verification completed. Inspect `verified` to determine
            whether chain integrity is intact. A `200` is returned regardless of
            whether verification passed or failed — check the response body.
          headers:
            X-RateLimit-Limit:
              schema:
                type: integer
                description: Maximum requests allowed per minute for this endpoint.
                example: 30
            X-RateLimit-Remaining:
              schema:
                type: integer
                description: Requests remaining in the current rate limit window.
                example: 29
            X-RateLimit-Reset:
              schema:
                type: integer
                description: Unix timestamp when the rate limit window resets.
                example: 1743155400
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/ChainVerificationResult'
              examples:
                chainIntact:
                  summary: Verification passed — chain is intact
                  value:
                    verified: true
                    checkedCount: 2847
                    brokenAtEventId: null
                    fromDate: "2026-03-01T00:00:00.000Z"
                    toDate: "2026-03-31T23:59:59.999Z"
                chainBroken:
                  summary: Verification failed — chain break detected
                  value:
                    verified: false
                    checkedCount: 1203
                    brokenAtEventId: "c4d5e6f7-a8b9-0123-cdef-456789012345"
                    fromDate: "2026-03-01T00:00:00.000Z"
                    toDate: "2026-03-31T23:59:59.999Z"
                noDateRange:
                  summary: Full log verified (no date range supplied)
                  value:
                    verified: true
                    checkedCount: 18504
                    brokenAtEventId: null
        '400':
          description: Invalid query parameter value or date range.
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/ErrorResponse'
              examples:
                invalidFromDate:
                  summary: fromDate is not a valid ISO 8601 date-time
                  value:
                    code: "VALIDATION_ERROR"
                    message: "Invalid query parameter value."
                    details:
                      field: "fromDate"
                      reason: "Must be a valid ISO 8601 date-time string (e.g. 2026-03-01T00:00:00.000Z)."
                invalidToDate:
                  summary: toDate is not a valid ISO 8601 date-time
                  value:
                    code: "VALIDATION_ERROR"
                    message: "Invalid query parameter value."
                    details:
                      field: "toDate"
                      reason: "Must be a valid ISO 8601 date-time string (e.g. 2026-03-31T23:59:59.999Z)."
                invalidDateRange:
                  summary: fromDate is after toDate
                  value:
                    code: "VALIDATION_ERROR"
                    message: "Invalid date range."
                    details:
                      reason: "fromDate must be before or equal to toDate."
        '401':
          $ref: '#/components/responses/Unauthorized'
        '403':
          $ref: '#/components/responses/Forbidden'
        '429':
          $ref: '#/components/responses/TooManyRequests'
        '500':
          $ref: '#/components/responses/InternalServerError'

  /compliance/controls:
    get:
      operationId: getComplianceControls
      tags:
        - Compliance Controls
      summary: Get SOC 2 control status summary
      description: |
        Returns a live status snapshot for each of the five in-scope SOC 2 Type II
        Trust Services Criteria controls monitored by the SentryAgent.ai platform.

        **No authentication required.** This endpoint is intentionally public
        (analogous to a health check) so that external monitoring infrastructure,
        status pages, and audit tooling can poll it without bearer credentials.

        **Controls monitored:**

        | Control ID | Name | What is checked |
        |------------|------|-----------------|
        | `CC6.1` | Encryption at Rest | Database and secrets store encryption is active and configured |
        | `CC6.7` | TLS Enforcement | TLS 1.2+ is enforced on all platform inbound connections |
        | `CC7.2` | Audit Log Integrity | Audit chain hash continuity — shorthand of `/audit/verify` |
        | `CC9.2` | Secrets Rotation | All managed secrets are within the rotation policy window |
        | `CC7.1` | Webhook Dead-Letter Monitoring | Dead-letter queue depth is within the acceptable threshold |

        **Status values:**
        - `passing` — control is operating within policy
        - `failing` — control has breached policy; immediate attention required
        - `unknown` — automated check could not complete (e.g. dependency unavailable)

        **Caching note:** Responses may be cached for up to 60 seconds by
        intermediate proxies. The `lastChecked` field on each control indicates
        the timestamp of the most recent automated evaluation.

        **Rate limit:** 120 requests/minute per IP address.
      security: []
      responses:
        '200':
          description: SOC 2 control status summary returned successfully.
          headers:
            Cache-Control:
              schema:
                type: string
                description: >
                  Downstream caches may serve this response for up to 60 seconds.
                example: "public, max-age=60"
            X-RateLimit-Limit:
              schema:
                type: integer
                description: Maximum requests allowed per minute for this endpoint.
                example: 120
            X-RateLimit-Remaining:
              schema:
                type: integer
                description: Requests remaining in the current rate limit window.
                example: 119
            X-RateLimit-Reset:
              schema:
                type: integer
                description: Unix timestamp when the rate limit window resets.
                example: 1743155400
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/ComplianceControlsResponse'
              examples:
                allPassing:
                  summary: All controls passing
                  value:
                    controls:
                      - id: "CC6.1"
                        name: "Encryption at Rest"
                        status: "passing"
                        lastChecked: "2026-03-31T06:00:00.000Z"
                      - id: "CC6.7"
                        name: "TLS Enforcement"
                        status: "passing"
                        lastChecked: "2026-03-31T06:00:00.000Z"
                      - id: "CC7.2"
                        name: "Audit Log Integrity"
                        status: "passing"
                        lastChecked: "2026-03-31T06:00:00.000Z"
                      - id: "CC9.2"
                        name: "Secrets Rotation"
                        status: "passing"
                        lastChecked: "2026-03-31T06:00:00.000Z"
                      - id: "CC7.1"
                        name: "Webhook Dead-Letter Monitoring"
                        status: "passing"
                        lastChecked: "2026-03-31T06:00:00.000Z"
                oneControlFailing:
                  summary: One control failing (secrets rotation overdue)
                  value:
                    controls:
                      - id: "CC6.1"
                        name: "Encryption at Rest"
                        status: "passing"
                        lastChecked: "2026-03-31T06:00:00.000Z"
                      - id: "CC6.7"
                        name: "TLS Enforcement"
                        status: "passing"
                        lastChecked: "2026-03-31T06:00:00.000Z"
                      - id: "CC7.2"
                        name: "Audit Log Integrity"
                        status: "passing"
                        lastChecked: "2026-03-31T06:00:00.000Z"
                      - id: "CC9.2"
                        name: "Secrets Rotation"
                        status: "failing"
                        lastChecked: "2026-03-31T06:00:00.000Z"
                      - id: "CC7.1"
                        name: "Webhook Dead-Letter Monitoring"
                        status: "passing"
                        lastChecked: "2026-03-31T06:00:00.000Z"
                unknownControl:
                  summary: One control in unknown state (dependency unavailable)
                  value:
                    controls:
                      - id: "CC6.1"
                        name: "Encryption at Rest"
                        status: "passing"
                        lastChecked: "2026-03-31T06:00:00.000Z"
                      - id: "CC6.7"
                        name: "TLS Enforcement"
                        status: "passing"
                        lastChecked: "2026-03-31T06:00:00.000Z"
                      - id: "CC7.2"
                        name: "Audit Log Integrity"
                        status: "unknown"
                        lastChecked: "2026-03-31T05:00:00.000Z"
                      - id: "CC9.2"
                        name: "Secrets Rotation"
                        status: "passing"
                        lastChecked: "2026-03-31T06:00:00.000Z"
                      - id: "CC7.1"
                        name: "Webhook Dead-Letter Monitoring"
                        status: "passing"
                        lastChecked: "2026-03-31T06:00:00.000Z"
        '429':
          $ref: '#/components/responses/TooManyRequests'
        '500':
          $ref: '#/components/responses/InternalServerError'
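Before calling `GET /audit/verify`, a client can enforce the documented date-range rule locally (both bounds are optional ISO 8601 date-times, and `fromDate` must be before or equal to `toDate`), turning a round-trip `400` into an immediate error. A minimal sketch in Python; the helper name `build_verify_params` is illustrative, and only the parameter names and rules come from the spec above:

```python
from datetime import datetime
from typing import Optional


def build_verify_params(from_date: Optional[str] = None,
                        to_date: Optional[str] = None) -> dict:
    """Client-side check of the /audit/verify query rules: both bounds are
    optional ISO 8601 date-times, and fromDate must be <= toDate when both
    are supplied."""
    def parse(value: str) -> datetime:
        # fromisoformat() only accepts a trailing 'Z' on Python >= 3.11,
        # so normalize it to an explicit UTC offset first.
        return datetime.fromisoformat(value.replace("Z", "+00:00"))

    params = {}
    if from_date is not None:
        parse(from_date)  # raises ValueError if malformed
        params["fromDate"] = from_date
    if to_date is not None:
        parse(to_date)
        params["toDate"] = to_date
    if from_date is not None and to_date is not None:
        if parse(from_date) > parse(to_date):
            raise ValueError("fromDate must be before or equal to toDate.")
    return params
```

An empty dict (no bounds supplied) verifies the entire retained log, matching the spec's default behaviour.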
docs/openapi/delegation.yaml (new file, 480 lines)
openapi: "3.0.3"

info:
  title: SentryAgent.ai — A2A Delegation (Agent-to-Agent)
  version: 1.0.0
  description: |
    Agent-to-Agent (A2A) delegation endpoints for the SentryAgent.ai AgentIdP platform.

    The delegation subsystem enables an authenticated agent (the *delegator*) to grant
    a subset of its own scopes to another agent (the *delegatee*) for a limited time.
    This creates a cryptographically-signed delegation chain, suitable for multi-agent
    orchestration patterns.

    **All endpoints require a valid Bearer JWT.**

    **Feature flag:** When `A2A_ENABLED=false` these routes are not registered (return 404).

    **Delegation rules:**
    - The delegatee must be in the same tenant as the delegator.
    - Delegated scopes must be a strict subset of the delegator's own scopes.
    - TTL minimum: 60 seconds; maximum: 86400 seconds (24 hours).
    - Each delegation chain has a unique `chainId` (UUID).
    - Revoking a chain is idempotent — revoking an already-revoked chain succeeds.

servers:
  - url: http://localhost:3000/api/v1
    description: Local development server
  - url: https://api.sentryagent.ai/v1
    description: Production server

tags:
  - name: A2A Delegation
    description: Agent-to-Agent delegation chain management

components:
  securitySchemes:
    BearerAuth:
      type: http
      scheme: bearer
      bearerFormat: JWT
      description: |
        JWT access token obtained via `POST /token`.
        Include as `Authorization: Bearer <token>`.

  schemas:
    DelegationChain:
      type: object
      description: A delegation chain record as returned by the API.
      required:
        - id
        - tenantId
        - delegatorAgentId
        - delegateeAgentId
        - scopes
        - delegationToken
        - ttlSeconds
        - issuedAt
        - expiresAt
        - createdAt
      properties:
        id:
          type: string
          format: uuid
          description: Unique identifier of the delegation chain.
          readOnly: true
          example: "chain-abcd-1234-5678-ef01"
        tenantId:
          type: string
          format: uuid
          description: Organization (tenant) that owns this delegation.
          readOnly: true
          example: "org-1234-5678-abcd-ef01"
        delegatorAgentId:
          type: string
          format: uuid
          description: UUID of the agent granting authority.
          example: "a1b2c3d4-e5f6-7890-abcd-ef1234567890"
        delegateeAgentId:
          type: string
          format: uuid
          description: UUID of the agent receiving delegated authority.
          example: "b2c3d4e5-f6a7-8901-bcde-f12345678901"
        scopes:
          type: array
          items:
            type: string
          description: OAuth 2.0 scopes granted by this delegation chain.
          example:
            - "agents:read"
        delegationToken:
          type: string
          description: |
            Opaque delegation token string that the delegatee presents to verify authority.
            This token encodes the chain metadata and is HMAC-signed.
          example: "chain-abcd-1234-5678-ef01.1743151200.1743237600"
        signature:
          type: string
          description: HMAC-SHA256 signature of the delegation token payload.
          example: "3a7f2b9c..."
        ttlSeconds:
          type: integer
          description: Delegation lifetime in seconds.
          minimum: 60
          maximum: 86400
          example: 3600
        issuedAt:
          type: string
          format: date-time
          readOnly: true
          example: "2026-04-07T09:00:00.000Z"
        expiresAt:
          type: string
          format: date-time
          readOnly: true
          example: "2026-04-07T10:00:00.000Z"
        revokedAt:
          type: string
          format: date-time
          nullable: true
          description: Timestamp when this chain was revoked. Null if still active or expired naturally.
          readOnly: true
          example: null
        createdAt:
          type: string
          format: date-time
          readOnly: true
          example: "2026-04-07T09:00:00.000Z"

    CreateDelegationRequest:
      type: object
      description: Request body for creating a new agent-to-agent delegation chain.
      required:
        - delegateeAgentId
        - scopes
        - ttlSeconds
      properties:
        delegateeAgentId:
          type: string
          format: uuid
          description: |
            UUID of the agent to receive delegated authority.
            Must be in the same tenant as the delegator (caller).
          example: "b2c3d4e5-f6a7-8901-bcde-f12345678901"
        scopes:
          type: array
          items:
            type: string
          description: |
            Scopes to delegate. Must be a strict subset of the delegator's current token scopes.
            At least one scope must be specified.
          minItems: 1
          example:
            - "agents:read"
        ttlSeconds:
          type: integer
          description: "Delegation lifetime in seconds. Minimum: 60; Maximum: 86400 (24 hours)."
          minimum: 60
          maximum: 86400
          example: 3600

    VerifyDelegationRequest:
      type: object
      description: Request body for verifying a delegation token.
      required:
        - delegationToken
      properties:
        delegationToken:
          type: string
          description: The delegation token string to verify.
          example: "chain-abcd-1234-5678-ef01.1743151200.1743237600"

    DelegationVerificationResult:
      type: object
      description: |
        Result of verifying a delegation token.
        Returns `valid: false` for expired or revoked tokens without throwing.
      required:
        - valid
        - chainId
        - delegatorAgentId
        - delegateeAgentId
        - scopes
        - issuedAt
        - expiresAt
      properties:
        valid:
          type: boolean
          description: Whether the delegation token is currently valid (active, not expired, not revoked).
          example: true
        chainId:
          type: string
          format: uuid
          description: UUID of the delegation chain.
          example: "chain-abcd-1234-5678-ef01"
        delegatorAgentId:
          type: string
          format: uuid
          example: "a1b2c3d4-e5f6-7890-abcd-ef1234567890"
        delegateeAgentId:
          type: string
          format: uuid
          example: "b2c3d4e5-f6a7-8901-bcde-f12345678901"
        scopes:
          type: array
          items:
            type: string
          example:
            - "agents:read"
        issuedAt:
          type: string
          format: date-time
          example: "2026-04-07T09:00:00.000Z"
        expiresAt:
          type: string
          format: date-time
          example: "2026-04-07T10:00:00.000Z"
        revokedAt:
          type: string
          format: date-time
          nullable: true
          example: null

    ErrorResponse:
      type: object
      description: Standard error response envelope.
      required:
        - code
        - message
      properties:
        code:
          type: string
          example: "DELEGATION_NOT_FOUND"
        message:
          type: string
          example: "Delegation chain with the specified ID was not found."
        details:
          type: object
          additionalProperties: true

  responses:
    Unauthorized:
      description: Missing or invalid Bearer token.
      content:
        application/json:
          schema:
            $ref: '#/components/schemas/ErrorResponse'
          example:
            code: "UNAUTHORIZED"
            message: "A valid Bearer token is required to access this resource."

    Forbidden:
      description: Valid token but insufficient permissions.
      content:
        application/json:
          schema:
            $ref: '#/components/schemas/ErrorResponse'
          example:
            code: "FORBIDDEN"
            message: "You do not have permission to perform this action."

    NotFound:
      description: Delegation chain not found.
      content:
        application/json:
          schema:
            $ref: '#/components/schemas/ErrorResponse'
          example:
            code: "DELEGATION_NOT_FOUND"
            message: "Delegation chain with the specified ID was not found."

    InternalServerError:
      description: Unexpected server error.
      content:
        application/json:
          schema:
            $ref: '#/components/schemas/ErrorResponse'
          example:
            code: "INTERNAL_SERVER_ERROR"
            message: "An unexpected error occurred. Please try again later."

security:
  - BearerAuth: []

paths:
  /oauth2/token/delegate:
    post:
      operationId: createDelegation
      tags:
        - A2A Delegation
      summary: Create an A2A delegation chain
      description: |
        Creates a new agent-to-agent delegation chain. The authenticated agent
        (the *delegator*) grants a subset of its own scopes to the `delegateeAgentId`.

        A cryptographically-signed `delegationToken` is returned. The delegatee
        presents this token to `POST /oauth2/token/verify-delegation` to prove
        delegated authority.

        **Validation:**
        - Delegatee must be in the same organization as the delegator.
        - `scopes` must be a strict subset of the delegator's current token scopes.
        - `ttlSeconds` must be between 60 and 86400.
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/CreateDelegationRequest'
            example:
              delegateeAgentId: "b2c3d4e5-f6a7-8901-bcde-f12345678901"
              scopes:
                - "agents:read"
              ttlSeconds: 3600
      responses:
        '201':
          description: Delegation chain created successfully.
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/DelegationChain'
              example:
                id: "chain-abcd-1234-5678-ef01"
                tenantId: "org-1234-5678-abcd-ef01"
                delegatorAgentId: "a1b2c3d4-e5f6-7890-abcd-ef1234567890"
                delegateeAgentId: "b2c3d4e5-f6a7-8901-bcde-f12345678901"
                scopes:
                  - "agents:read"
                delegationToken: "chain-abcd-1234-5678-ef01.1743151200.1743154800"
                signature: "3a7f2b9c..."
                ttlSeconds: 3600
                issuedAt: "2026-04-07T09:00:00.000Z"
                expiresAt: "2026-04-07T10:00:00.000Z"
                revokedAt: null
                createdAt: "2026-04-07T09:00:00.000Z"
        '400':
          description: Validation error in request body.
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/ErrorResponse'
              examples:
                scopeExceedsOwn:
                  summary: Requested scope exceeds delegator's own scopes
                  value:
                    code: "SCOPE_EXCEEDS_DELEGATOR"
                    message: "Delegated scopes must be a subset of the delegator's own token scopes."
                    details:
                      requested: ["agents:write"]
                      available: ["agents:read"]
                invalidTtl:
                  summary: TTL out of range
                  value:
                    code: "VALIDATION_ERROR"
                    message: "ttlSeconds must be between 60 and 86400."
                crossTenant:
                  summary: Delegatee in different tenant
                  value:
                    code: "CROSS_TENANT_DELEGATION"
                    message: "Delegatee agent must be in the same organization as the delegator."
        '401':
          $ref: '#/components/responses/Unauthorized'
        '404':
          description: Delegatee agent not found.
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/ErrorResponse'
              example:
                code: "AGENT_NOT_FOUND"
                message: "Delegatee agent with the specified ID was not found."
        '500':
          $ref: '#/components/responses/InternalServerError'

  /oauth2/token/verify-delegation:
    post:
      operationId: verifyDelegation
      tags:
        - A2A Delegation
      summary: Verify a delegation token
      description: |
        Verifies a delegation token and returns the chain details if valid.

        Returns `valid: true` with full chain metadata when the token is valid
        (exists, not expired, not revoked).

        Returns `valid: false` when the token is expired or revoked.
        Does not throw an error for inactive tokens — always returns `200`.

        Requires a valid Bearer JWT (any authenticated agent may verify a delegation).
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/VerifyDelegationRequest'
            example:
              delegationToken: "chain-abcd-1234-5678-ef01.1743151200.1743154800"
      responses:
        '200':
          description: Delegation verification result returned.
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/DelegationVerificationResult'
              examples:
                valid:
                  summary: Valid delegation token
                  value:
                    valid: true
                    chainId: "chain-abcd-1234-5678-ef01"
                    delegatorAgentId: "a1b2c3d4-e5f6-7890-abcd-ef1234567890"
                    delegateeAgentId: "b2c3d4e5-f6a7-8901-bcde-f12345678901"
                    scopes:
                      - "agents:read"
                    issuedAt: "2026-04-07T09:00:00.000Z"
                    expiresAt: "2026-04-07T10:00:00.000Z"
                    revokedAt: null
                expired:
                  summary: Expired delegation token
                  value:
                    valid: false
                    chainId: "chain-abcd-1234-5678-ef01"
                    delegatorAgentId: "a1b2c3d4-e5f6-7890-abcd-ef1234567890"
                    delegateeAgentId: "b2c3d4e5-f6a7-8901-bcde-f12345678901"
                    scopes:
                      - "agents:read"
                    issuedAt: "2026-04-06T09:00:00.000Z"
                    expiresAt: "2026-04-06T10:00:00.000Z"
                    revokedAt: null
        '400':
          description: Missing or malformed `delegationToken` field.
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/ErrorResponse'
              example:
                code: "VALIDATION_ERROR"
                message: "The 'delegationToken' field is required."
        '401':
          $ref: '#/components/responses/Unauthorized'
        '500':
          $ref: '#/components/responses/InternalServerError'

  /oauth2/token/delegate/{chainId}:
    parameters:
      - name: chainId
        in: path
        required: true
        description: UUID of the delegation chain to revoke.
        schema:
          type: string
          format: uuid
        example: "chain-abcd-1234-5678-ef01"

    delete:
      operationId: revokeDelegation
      tags:
        - A2A Delegation
      summary: Revoke a delegation chain
      description: |
        Immediately revokes a delegation chain.

        After revocation, `POST /oauth2/token/verify-delegation` will return
        `valid: false` for the revoked chain's token.

        **Idempotent** — revoking an already-revoked chain returns `204` without error.

        Only the delegator agent or an admin may revoke a chain.
        Requires a valid Bearer JWT.
      responses:
        '204':
          description: Delegation chain revoked successfully (or was already revoked). No response body.
        '401':
          $ref: '#/components/responses/Unauthorized'
        '403':
          $ref: '#/components/responses/Forbidden'
        '404':
          $ref: '#/components/responses/NotFound'
        '500':
          $ref: '#/components/responses/InternalServerError'
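The delegation rules documented in this spec (same tenant, scope subset, TTL between 60 and 86400 seconds, at least one scope) can be checked client-side before issuing `POST /oauth2/token/delegate`, so obviously invalid requests never leave the caller. A minimal sketch in Python; the function name is illustrative, the error strings mirror the spec's error codes, and whether "strict subset" forbids delegating the exact same scope set is ambiguous in the spec, so this sketch permits it:

```python
def validate_delegation_request(delegator_scopes: set,
                                delegator_tenant: str,
                                delegatee_tenant: str,
                                requested_scopes: list,
                                ttl_seconds: int) -> None:
    """Raise ValueError with the spec's error code if the request would
    be rejected server-side; return None if it passes every rule."""
    if delegatee_tenant != delegator_tenant:
        raise ValueError("CROSS_TENANT_DELEGATION")
    if not requested_scopes:
        raise ValueError("VALIDATION_ERROR: at least one scope is required")
    # The spec says "strict subset"; this sketch reads that as "subset",
    # allowing equality, which the server may or may not accept.
    if not set(requested_scopes).issubset(delegator_scopes):
        raise ValueError("SCOPE_EXCEEDS_DELEGATOR")
    if not 60 <= ttl_seconds <= 86400:
        raise ValueError("VALIDATION_ERROR: ttlSeconds must be between 60 and 86400")
```

A passing call simply returns, after which the request body can be serialized and sent as-is.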
docs/openapi/did.yaml (new file, 576 lines)
openapi: "3.0.3"

info:
  title: SentryAgent.ai — W3C DID & AGNTCY Agent Card
  version: 1.0.0
  description: |
    W3C Decentralized Identifier (DID) and AGNTCY Agent Card endpoints for the
    SentryAgent.ai AgentIdP platform.

    **Unauthenticated endpoints:**
    - `GET /.well-known/did.json` — Instance-level DID Document for the IdP itself
    - `GET /api/v1/agents/:agentId/did` — Per-agent W3C DID Document
    - `GET /api/v1/agents/:agentId/did/card` — AGNTCY-format agent card

    **Authenticated endpoint** (requires Bearer JWT + OPA authorization):
    - `GET /api/v1/agents/:agentId/did/resolve` — W3C DID Resolution result

    All DID Documents conform to the **W3C DID Core 1.0** specification.
    Agent cards conform to the **AGNTCY** agent identity standard (Linux Foundation).

servers:
  - url: http://localhost:3000
    description: Local development server
  - url: https://api.sentryagent.ai
    description: Production server

tags:
  - name: DID Documents
    description: W3C DID Document endpoints (unauthenticated)
  - name: DID Resolution
    description: Authenticated W3C DID Resolution endpoint
  - name: Agent Card
    description: AGNTCY agent card endpoint (unauthenticated)

components:
  securitySchemes:
    BearerAuth:
      type: http
      scheme: bearer
      bearerFormat: JWT
      description: |
        JWT access token obtained via `POST /api/v1/token`.
        Include as `Authorization: Bearer <token>`.

  schemas:
    PublicKeyJwk:
      type: object
      description: JWK representation of a public key embedded in a verification method.
      required:
        - kty
      properties:
        kty:
          type: string
          description: Key type (e.g. "EC", "RSA").
          example: "EC"
        crv:
          type: string
          description: EC curve (e.g. "P-256").
          example: "P-256"
        x:
          type: string
          description: Base64url-encoded EC x coordinate.
          example: "f83OJ3D..."
        y:
          type: string
          description: Base64url-encoded EC y coordinate.
          example: "x_FEzRu..."
        n:
          type: string
          description: Base64url-encoded RSA modulus.
        e:
          type: string
          description: Base64url-encoded RSA public exponent.
          example: "AQAB"
        use:
          type: string
          example: "sig"
        kid:
          type: string
          example: "key-20260328-001"

    VerificationMethod:
      type: object
      description: W3C DID Core 1.0 verification method.
      required:
        - id
        - type
        - controller
        - publicKeyJwk
      properties:
        id:
          type: string
          description: Full DID URL for this verification method.
          example: "did:web:api.sentryagent.ai:agents:a1b2c3d4-e5f6-7890-abcd-ef1234567890#key-1"
        type:
          type: string
          description: Verification method type.
          example: "JsonWebKey2020"
        controller:
          type: string
          description: DID that controls this key.
          example: "did:web:api.sentryagent.ai:agents:a1b2c3d4-e5f6-7890-abcd-ef1234567890"
        publicKeyJwk:
          $ref: '#/components/schemas/PublicKeyJwk'

    DIDService:
      type: object
      description: A W3C DID Document service endpoint.
      required:
        - id
        - type
        - serviceEndpoint
      properties:
        id:
          type: string
          example: "did:web:api.sentryagent.ai:agents:a1b2c3d4#agentIdP"
        type:
          type: string
          example: "AgentIdP"
        serviceEndpoint:
          type: string
          format: uri
          example: "https://api.sentryagent.ai/api/v1/agents/a1b2c3d4-e5f6-7890-abcd-ef1234567890"

    AgntcyExtension:
      type: object
      description: AGNTCY-specific extension fields embedded in a per-agent DID Document.
      required:
        - agentId
        - agentType
        - capabilities
        - deploymentEnv
        - owner
        - version
      properties:
        agentId:
          type: string
          format: uuid
          example: "a1b2c3d4-e5f6-7890-abcd-ef1234567890"
        agentType:
          type: string
          example: "screener"
        capabilities:
          type: array
          items:
            type: string
          example: ["resume:read", "email:send"]
        deploymentEnv:
          type: string
          example: "production"
        owner:
          type: string
          example: "talent-acquisition-team"
        version:
          type: string
          example: "1.4.2"

    DIDDocument:
      type: object
      description: W3C DID Core 1.0 DID Document.
      required:
        - "@context"
        - id
        - controller
        - verificationMethod
        - authentication
      properties:
        "@context":
          type: array
          items:
            type: string
          description: JSON-LD context URIs.
          example:
            - "https://www.w3.org/ns/did/v1"
|
- "https://w3id.org/security/suites/jws-2020/v1"
|
||||||
|
id:
|
||||||
|
type: string
|
||||||
|
description: The DID identifier for this document.
|
||||||
|
example: "did:web:api.sentryagent.ai:agents:a1b2c3d4-e5f6-7890-abcd-ef1234567890"
|
||||||
|
controller:
|
||||||
|
type: string
|
||||||
|
description: DID of the controlling entity.
|
||||||
|
example: "did:web:api.sentryagent.ai"
|
||||||
|
verificationMethod:
|
||||||
|
type: array
|
||||||
|
items:
|
||||||
|
$ref: '#/components/schemas/VerificationMethod'
|
||||||
|
authentication:
|
||||||
|
type: array
|
||||||
|
items:
|
||||||
|
type: string
|
||||||
|
description: DID URLs referencing verification methods authorized for authentication.
|
||||||
|
example:
|
||||||
|
- "did:web:api.sentryagent.ai:agents:a1b2c3d4-e5f6-7890-abcd-ef1234567890#key-1"
|
||||||
|
assertionMethod:
|
||||||
|
type: array
|
||||||
|
items:
|
||||||
|
type: string
|
||||||
|
service:
|
||||||
|
type: array
|
||||||
|
items:
|
||||||
|
$ref: '#/components/schemas/DIDService'
|
||||||
|
agntcy:
|
||||||
|
$ref: '#/components/schemas/AgntcyExtension'
|
||||||
|
|
||||||
|
DIDResolutionResult:
|
||||||
|
type: object
|
||||||
|
description: |
|
||||||
|
W3C DID Resolution result format.
|
||||||
|
Returned with `Content-Type: application/ld+json;profile="https://w3id.org/did-resolution"`.
|
||||||
|
required:
|
||||||
|
- didDocument
|
||||||
|
- didDocumentMetadata
|
||||||
|
- didResolutionMetadata
|
||||||
|
properties:
|
||||||
|
didDocument:
|
||||||
|
$ref: '#/components/schemas/DIDDocument'
|
||||||
|
didDocumentMetadata:
|
||||||
|
type: object
|
||||||
|
required:
|
||||||
|
- created
|
||||||
|
- updated
|
||||||
|
- deactivated
|
||||||
|
properties:
|
||||||
|
created:
|
||||||
|
type: string
|
||||||
|
format: date-time
|
||||||
|
example: "2026-03-01T08:00:00.000Z"
|
||||||
|
updated:
|
||||||
|
type: string
|
||||||
|
format: date-time
|
||||||
|
example: "2026-03-28T11:30:00.000Z"
|
||||||
|
deactivated:
|
||||||
|
type: boolean
|
||||||
|
example: false
|
||||||
|
didResolutionMetadata:
|
||||||
|
type: object
|
||||||
|
required:
|
||||||
|
- contentType
|
||||||
|
- retrieved
|
||||||
|
properties:
|
||||||
|
contentType:
|
||||||
|
type: string
|
||||||
|
example: "application/ld+json"
|
||||||
|
retrieved:
|
||||||
|
type: string
|
||||||
|
format: date-time
|
||||||
|
example: "2026-04-07T09:00:00.000Z"
|
||||||
|
|
||||||
|
AgentCard:
|
||||||
|
type: object
|
||||||
|
description: |
|
||||||
|
AGNTCY-format agent card providing a machine-readable identity summary.
|
||||||
|
Suitable for AGNTCY registry publishing and agent discovery.
|
||||||
|
required:
|
||||||
|
- did
|
||||||
|
- name
|
||||||
|
- agentType
|
||||||
|
- capabilities
|
||||||
|
- owner
|
||||||
|
- version
|
||||||
|
- deploymentEnv
|
||||||
|
- identityProvider
|
||||||
|
- issuedAt
|
||||||
|
properties:
|
||||||
|
did:
|
||||||
|
type: string
|
||||||
|
description: W3C DID identifier for this agent.
|
||||||
|
example: "did:web:api.sentryagent.ai:agents:a1b2c3d4-e5f6-7890-abcd-ef1234567890"
|
||||||
|
name:
|
||||||
|
type: string
|
||||||
|
description: Human-readable agent name (derived from email).
|
||||||
|
example: "screener-001@sentryagent.ai"
|
||||||
|
agentType:
|
||||||
|
type: string
|
||||||
|
example: "screener"
|
||||||
|
capabilities:
|
||||||
|
type: array
|
||||||
|
items:
|
||||||
|
type: string
|
||||||
|
example: ["resume:read", "email:send", "candidate:score"]
|
||||||
|
owner:
|
||||||
|
type: string
|
||||||
|
example: "talent-acquisition-team"
|
||||||
|
version:
|
||||||
|
type: string
|
||||||
|
example: "1.4.2"
|
||||||
|
deploymentEnv:
|
||||||
|
type: string
|
||||||
|
example: "production"
|
||||||
|
identityProvider:
|
||||||
|
type: string
|
||||||
|
format: uri
|
||||||
|
description: URL of the issuing AgentIdP instance.
|
||||||
|
example: "https://api.sentryagent.ai"
|
||||||
|
issuedAt:
|
||||||
|
type: string
|
||||||
|
format: date-time
|
||||||
|
description: ISO 8601 timestamp when this card was generated.
|
||||||
|
example: "2026-04-07T09:00:00.000Z"
|
||||||
|
|
||||||
|
ErrorResponse:
|
||||||
|
type: object
|
||||||
|
description: Standard error response envelope.
|
||||||
|
required:
|
||||||
|
- code
|
||||||
|
- message
|
||||||
|
properties:
|
||||||
|
code:
|
||||||
|
type: string
|
||||||
|
example: "AGENT_NOT_FOUND"
|
||||||
|
message:
|
||||||
|
type: string
|
||||||
|
example: "Agent with the specified ID was not found."
|
||||||
|
details:
|
||||||
|
type: object
|
||||||
|
additionalProperties: true
|
||||||
|
|
||||||
|
responses:
|
||||||
|
Unauthorized:
|
||||||
|
description: Missing or invalid Bearer token.
|
||||||
|
content:
|
||||||
|
application/json:
|
||||||
|
schema:
|
||||||
|
$ref: '#/components/schemas/ErrorResponse'
|
||||||
|
example:
|
||||||
|
code: "UNAUTHORIZED"
|
||||||
|
message: "A valid Bearer token is required to access this resource."
|
||||||
|
|
||||||
|
Forbidden:
|
||||||
|
description: Valid token but insufficient permissions.
|
||||||
|
content:
|
||||||
|
application/json:
|
||||||
|
schema:
|
||||||
|
$ref: '#/components/schemas/ErrorResponse'
|
||||||
|
example:
|
||||||
|
code: "FORBIDDEN"
|
||||||
|
message: "You do not have permission to perform this action."
|
||||||
|
|
||||||
|
NotFound:
|
||||||
|
description: Agent not found.
|
||||||
|
content:
|
||||||
|
application/json:
|
||||||
|
schema:
|
||||||
|
$ref: '#/components/schemas/ErrorResponse'
|
||||||
|
example:
|
||||||
|
code: "AGENT_NOT_FOUND"
|
||||||
|
message: "Agent with the specified ID was not found."
|
||||||
|
|
||||||
|
InternalServerError:
|
||||||
|
description: Unexpected server error.
|
||||||
|
content:
|
||||||
|
application/json:
|
||||||
|
schema:
|
||||||
|
$ref: '#/components/schemas/ErrorResponse'
|
||||||
|
example:
|
||||||
|
code: "INTERNAL_SERVER_ERROR"
|
||||||
|
message: "An unexpected error occurred. Please try again later."
|
||||||
|
|
||||||
|
paths:
|
||||||
|
/.well-known/did.json:
|
||||||
|
get:
|
||||||
|
operationId: getInstanceDIDDocument
|
||||||
|
tags:
|
||||||
|
- DID Documents
|
||||||
|
summary: Instance-level DID Document
|
||||||
|
description: |
|
||||||
|
Returns the W3C DID Document for the SentryAgent.ai AgentIdP instance itself.
|
||||||
|
This identifies the IdP as a DID controller (`did:web:api.sentryagent.ai`).
|
||||||
|
|
||||||
|
Used by external parties to discover the IdP's public keys and service endpoints.
|
||||||
|
This endpoint is **unauthenticated**.
|
||||||
|
security: []
|
||||||
|
responses:
|
||||||
|
'200':
|
||||||
|
description: Instance DID Document returned successfully.
|
||||||
|
content:
|
||||||
|
application/json:
|
||||||
|
schema:
|
||||||
|
$ref: '#/components/schemas/DIDDocument'
|
||||||
|
example:
|
||||||
|
"@context":
|
||||||
|
- "https://www.w3.org/ns/did/v1"
|
||||||
|
- "https://w3id.org/security/suites/jws-2020/v1"
|
||||||
|
id: "did:web:api.sentryagent.ai"
|
||||||
|
controller: "did:web:api.sentryagent.ai"
|
||||||
|
verificationMethod:
|
||||||
|
- id: "did:web:api.sentryagent.ai#key-1"
|
||||||
|
type: "JsonWebKey2020"
|
||||||
|
controller: "did:web:api.sentryagent.ai"
|
||||||
|
publicKeyJwk:
|
||||||
|
kty: "EC"
|
||||||
|
crv: "P-256"
|
||||||
|
x: "f83OJ3D..."
|
||||||
|
y: "x_FEzRu..."
|
||||||
|
authentication:
|
||||||
|
- "did:web:api.sentryagent.ai#key-1"
|
||||||
|
service:
|
||||||
|
- id: "did:web:api.sentryagent.ai#agentIdP"
|
||||||
|
type: "AgentIdP"
|
||||||
|
serviceEndpoint: "https://api.sentryagent.ai/api/v1"
|
||||||
|
'500':
|
||||||
|
$ref: '#/components/responses/InternalServerError'
|
||||||
|
|
||||||
|
/api/v1/agents/{agentId}/did:
|
||||||
|
get:
|
||||||
|
operationId: getAgentDIDDocument
|
||||||
|
tags:
|
||||||
|
- DID Documents
|
||||||
|
summary: Get agent DID Document
|
||||||
|
description: |
|
||||||
|
Returns the W3C DID Core 1.0 Document for a specific registered agent.
|
||||||
|
|
||||||
|
Returns `410 Gone` if the agent has been decommissioned — the DID Document
|
||||||
|
is no longer active.
|
||||||
|
|
||||||
|
This endpoint is **unauthenticated**.
|
||||||
|
security: []
|
||||||
|
parameters:
|
||||||
|
- name: agentId
|
||||||
|
in: path
|
||||||
|
required: true
|
||||||
|
description: UUID of the agent.
|
||||||
|
schema:
|
||||||
|
type: string
|
||||||
|
format: uuid
|
||||||
|
example: "a1b2c3d4-e5f6-7890-abcd-ef1234567890"
|
||||||
|
responses:
|
||||||
|
'200':
|
||||||
|
description: Agent DID Document returned successfully.
|
||||||
|
content:
|
||||||
|
application/json:
|
||||||
|
schema:
|
||||||
|
$ref: '#/components/schemas/DIDDocument'
|
||||||
|
example:
|
||||||
|
"@context":
|
||||||
|
- "https://www.w3.org/ns/did/v1"
|
||||||
|
- "https://w3id.org/security/suites/jws-2020/v1"
|
||||||
|
id: "did:web:api.sentryagent.ai:agents:a1b2c3d4-e5f6-7890-abcd-ef1234567890"
|
||||||
|
controller: "did:web:api.sentryagent.ai"
|
||||||
|
verificationMethod:
|
||||||
|
- id: "did:web:api.sentryagent.ai:agents:a1b2c3d4-e5f6-7890-abcd-ef1234567890#key-1"
|
||||||
|
type: "JsonWebKey2020"
|
||||||
|
controller: "did:web:api.sentryagent.ai:agents:a1b2c3d4-e5f6-7890-abcd-ef1234567890"
|
||||||
|
publicKeyJwk:
|
||||||
|
kty: "EC"
|
||||||
|
crv: "P-256"
|
||||||
|
x: "f83OJ3D..."
|
||||||
|
y: "x_FEzRu..."
|
||||||
|
authentication:
|
||||||
|
- "did:web:api.sentryagent.ai:agents:a1b2c3d4-e5f6-7890-abcd-ef1234567890#key-1"
|
||||||
|
agntcy:
|
||||||
|
agentId: "a1b2c3d4-e5f6-7890-abcd-ef1234567890"
|
||||||
|
agentType: "screener"
|
||||||
|
capabilities:
|
||||||
|
- "resume:read"
|
||||||
|
- "email:send"
|
||||||
|
deploymentEnv: "production"
|
||||||
|
owner: "talent-acquisition-team"
|
||||||
|
version: "1.4.2"
|
||||||
|
'404':
|
||||||
|
$ref: '#/components/responses/NotFound'
|
||||||
|
'410':
|
||||||
|
description: Agent has been decommissioned — DID Document is no longer active.
|
||||||
|
content:
|
||||||
|
application/json:
|
||||||
|
schema:
|
||||||
|
$ref: '#/components/schemas/ErrorResponse'
|
||||||
|
example:
|
||||||
|
code: "AGENT_DECOMMISSIONED"
|
||||||
|
message: "Agent has been decommissioned — DID Document is no longer active"
|
||||||
|
'500':
|
||||||
|
$ref: '#/components/responses/InternalServerError'
|
||||||
|
|
||||||
|
/api/v1/agents/{agentId}/did/resolve:
|
||||||
|
get:
|
||||||
|
operationId: resolveAgentDID
|
||||||
|
tags:
|
||||||
|
- DID Resolution
|
||||||
|
summary: Resolve agent DID (W3C DID Resolution)
|
||||||
|
description: |
|
||||||
|
Returns the full W3C DID Resolution result for a specific agent, including
|
||||||
|
the DID Document, DID Document Metadata, and DID Resolution Metadata.
|
||||||
|
|
||||||
|
The response `Content-Type` is:
|
||||||
|
`application/ld+json;profile="https://w3id.org/did-resolution"`
|
||||||
|
|
||||||
|
Requires a valid Bearer JWT and OPA authorization.
|
||||||
|
security:
|
||||||
|
- BearerAuth: []
|
||||||
|
parameters:
|
||||||
|
- name: agentId
|
||||||
|
in: path
|
||||||
|
required: true
|
||||||
|
description: UUID of the agent to resolve.
|
||||||
|
schema:
|
||||||
|
type: string
|
||||||
|
format: uuid
|
||||||
|
example: "a1b2c3d4-e5f6-7890-abcd-ef1234567890"
|
||||||
|
responses:
|
||||||
|
'200':
|
||||||
|
description: DID Resolution result returned successfully.
|
||||||
|
content:
|
||||||
|
application/ld+json:
|
||||||
|
schema:
|
||||||
|
$ref: '#/components/schemas/DIDResolutionResult'
|
||||||
|
example:
|
||||||
|
didDocument:
|
||||||
|
"@context":
|
||||||
|
- "https://www.w3.org/ns/did/v1"
|
||||||
|
id: "did:web:api.sentryagent.ai:agents:a1b2c3d4-e5f6-7890-abcd-ef1234567890"
|
||||||
|
controller: "did:web:api.sentryagent.ai"
|
||||||
|
verificationMethod: []
|
||||||
|
authentication: []
|
||||||
|
didDocumentMetadata:
|
||||||
|
created: "2026-03-01T08:00:00.000Z"
|
||||||
|
updated: "2026-03-28T11:30:00.000Z"
|
||||||
|
deactivated: false
|
||||||
|
didResolutionMetadata:
|
||||||
|
contentType: "application/ld+json"
|
||||||
|
retrieved: "2026-04-07T09:00:00.000Z"
|
||||||
|
'401':
|
||||||
|
$ref: '#/components/responses/Unauthorized'
|
||||||
|
'403':
|
||||||
|
$ref: '#/components/responses/Forbidden'
|
||||||
|
'404':
|
||||||
|
$ref: '#/components/responses/NotFound'
|
||||||
|
'500':
|
||||||
|
$ref: '#/components/responses/InternalServerError'
|
||||||
|
|
||||||
|
/api/v1/agents/{agentId}/did/card:
|
||||||
|
get:
|
||||||
|
operationId: getAgentCard
|
||||||
|
tags:
|
||||||
|
- Agent Card
|
||||||
|
summary: Get AGNTCY agent card
|
||||||
|
description: |
|
||||||
|
Returns the AGNTCY-format agent card for the specified agent.
|
||||||
|
The card provides a machine-readable identity summary suitable for
|
||||||
|
AGNTCY registry publishing and agent discovery by external consumers.
|
||||||
|
|
||||||
|
This endpoint is **unauthenticated**.
|
||||||
|
security: []
|
||||||
|
parameters:
|
||||||
|
- name: agentId
|
||||||
|
in: path
|
||||||
|
required: true
|
||||||
|
description: UUID of the agent.
|
||||||
|
schema:
|
||||||
|
type: string
|
||||||
|
format: uuid
|
||||||
|
example: "a1b2c3d4-e5f6-7890-abcd-ef1234567890"
|
||||||
|
responses:
|
||||||
|
'200':
|
||||||
|
description: AGNTCY agent card returned successfully.
|
||||||
|
content:
|
||||||
|
application/json:
|
||||||
|
schema:
|
||||||
|
$ref: '#/components/schemas/AgentCard'
|
||||||
|
example:
|
||||||
|
did: "did:web:api.sentryagent.ai:agents:a1b2c3d4-e5f6-7890-abcd-ef1234567890"
|
||||||
|
name: "screener-001@sentryagent.ai"
|
||||||
|
agentType: "screener"
|
||||||
|
capabilities:
|
||||||
|
- "resume:read"
|
||||||
|
- "email:send"
|
||||||
|
- "candidate:score"
|
||||||
|
owner: "talent-acquisition-team"
|
||||||
|
version: "1.4.2"
|
||||||
|
deploymentEnv: "production"
|
||||||
|
identityProvider: "https://api.sentryagent.ai"
|
||||||
|
issuedAt: "2026-04-07T09:00:00.000Z"
|
||||||
|
'404':
|
||||||
|
$ref: '#/components/responses/NotFound'
|
||||||
|
'500':
|
||||||
|
$ref: '#/components/responses/InternalServerError'
|
||||||
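As a consumer-side illustration of the three agent endpoints defined above, the sketch below maps an agent's `did:web` identifier to the DID Document, resolution, and agent-card URLs. This helper is hypothetical (not part of the API itself); the base URL and the `did:web:api.sentryagent.ai:agents:<uuid>` layout are taken from the spec's examples.

```python
def agent_endpoints(did: str, base: str = "https://api.sentryagent.ai") -> dict:
    """Map a did:web agent DID to the endpoints defined in this spec.

    Hypothetical client-side helper: extracts the agent UUID from the
    DID's method-specific identifier and builds the three GET URLs.
    """
    # Agent DIDs look like: did:web:api.sentryagent.ai:agents:<uuid>
    parts = did.split(":")
    if parts[:2] != ["did", "web"] or "agents" not in parts:
        raise ValueError(f"not an agent did:web DID: {did}")
    agent_id = parts[parts.index("agents") + 1]
    prefix = f"{base}/api/v1/agents/{agent_id}/did"
    return {
        "didDocument": prefix,           # GET, unauthenticated (404/410 on missing/decommissioned)
        "resolve": f"{prefix}/resolve",  # GET, requires Bearer JWT
        "card": f"{prefix}/card",        # GET, unauthenticated
    }

urls = agent_endpoints(
    "did:web:api.sentryagent.ai:agents:a1b2c3d4-e5f6-7890-abcd-ef1234567890"
)
print(urls["card"])
# https://api.sentryagent.ai/api/v1/agents/a1b2c3d4-e5f6-7890-abcd-ef1234567890/did/card
```

Note that the unauthenticated endpoints (`didDocument`, `card`) can be fetched directly, while `resolve` must carry an `Authorization: Bearer <token>` header as described in the security scheme.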