# Audit Trail (Nectar)
Nectar is Honeybee’s audit layer. It answers one question: “What did the agent do?”
Every action an agent takes — every LLM call, every tool execution, every file write, every network request — is captured, correlated, and stored for review.
## What’s captured

Nectar captures 7 layers of agent activity:
| Layer | What | How |
|---|---|---|
| Prompts & responses | Full LLM request/response payloads | Telemetry hooks in agent runner |
| Tool execution | Tool name, arguments, result, duration | Pre/post tool hooks |
| Carapace scans | Scan results, scores, findings, actions | Guard integration |
| File system diffs | Files created, modified, deleted | MemFS changeset or FUSE overlay diff |
| Network events | SSL plaintext (outbound + inbound) | eBPF firewall |
| Process trees | Which processes spawned which | eBPF process probes |
| State changes | ACP state mutations, events, claims | Store hooks |
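
Concretely, each captured layer lands as one JSON event in the audit log. The event schema isn't reproduced on this page, so the field names in the TypeScript sketch below (`event`, `trace_id`, `span`, `ts`, `data`) are illustrative assumptions drawn from the fields described here, not the actual format:

```ts
// Hypothetical shape of a single Nectar audit event. Field names are
// assumptions based on this page, not the real schema.
interface NectarEvent {
  event: string;                  // e.g. "llm_call", "tool_call", "guard_scan"
  trace_id: string;               // links the event to its originating session
  span: string;                   // position in the chain, e.g. "worker-1.tool.1"
  ts: string;                     // ISO-8601 timestamp
  data: Record<string, unknown>;  // layer-specific payload (args, scores, diffs, ...)
}

// Example: a tool execution captured by the pre/post tool hooks.
const example: NectarEvent = {
  event: "tool_call",
  trace_id: "abc-123",
  span: "worker-1.tool.1",
  ts: new Date().toISOString(),
  data: { tool: "write_file", duration_ms: 42, success: true },
};

console.log(JSON.stringify(example));
```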
## Correlation via trace_id

Every audit event carries a trace_id that links it to the originating chain:

```
Queen session (trace_id: abc-123)
└── Worker spawn (trace_id: abc-123, span: worker-1)
    ├── LLM call         (trace_id: abc-123, span: worker-1.llm.1)
    ├── Tool: write_file (trace_id: abc-123, span: worker-1.tool.1)
    ├── Carapace scan    (trace_id: abc-123, span: worker-1.scan.1)
    └── Network: POST api.openai.com (trace_id: abc-123, span: worker-1.net.1)
```

This means you can take any single event and trace it back through the full chain: which agent, which session, which LLM call triggered which tool, and what network requests resulted.
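As a sketch of what this enables, the snippet below rebuilds one chain from a local telemetry file by filtering on trace_id and ordering by timestamp. The path placeholders and field names follow the assumptions above; only Node built-ins are used.

```ts
import { readFileSync } from "node:fs";
import { homedir } from "node:os";

// Rebuild one session's chain from a local telemetry file.
// Field names (trace_id, span, event, ts) are assumptions, not the real schema.
function traceChain(jsonlPath: string, traceId: string): any[] {
  return readFileSync(jsonlPath, "utf8")
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line))
    .filter((ev) => ev.trace_id === traceId)
    .sort((a, b) => String(a.ts).localeCompare(String(b.ts)));
}

// "my-project" and the date are placeholders for <slug> and <date>.
const file = `${homedir()}/.honeyb/projects/my-project/telemetry/2025-01-01.jsonl`;
for (const ev of traceChain(file, "abc-123")) {
  console.log(ev.span, ev.event);
}
```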
## Storage architecture

### Local (always on)

Every Honeybee deployment writes audit events to local JSONL files:

```
~/.honeyb/projects/<slug>/telemetry/<date>.jsonl
```

One file per day, auto-rotated. Events are appended synchronously; there is no data loss on a crash.
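For illustration only, a minimal sketch of the append-only pattern (not Honeybee's actual writer): each event becomes a single JSON line appended synchronously, so a crash can at worst drop the event being written, never earlier ones.

```ts
import { appendFileSync, mkdirSync } from "node:fs";
import { join } from "node:path";
import { homedir } from "node:os";

// Append one audit event as a single JSON line to today's file.
// Directory layout mirrors the path above; "slug" stands in for the project slug.
function appendEvent(slug: string, event: Record<string, unknown>): void {
  const dir = join(homedir(), ".honeyb", "projects", slug, "telemetry");
  mkdirSync(dir, { recursive: true });
  const file = join(dir, `${new Date().toISOString().slice(0, 10)}.jsonl`);
  // Synchronous append: the call returns only after the line is handed to the
  // OS, so a crash mid-run never corrupts lines that were already written.
  appendFileSync(file, JSON.stringify(event) + "\n");
}

appendEvent("my-project", { event: "tool_call", trace_id: "abc-123" });
```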
### Cloud (opt-in)

For teams, Nectar provides cloud storage with queryable metadata:
| Store | What | Access pattern |
|---|---|---|
| D1 (Cloudflare) | Event metadata (type, timestamp, trace_id, score, agent_id) | SQL queries, dashboards, filtering |
| R2 (Cloudflare) | Full payloads (prompts, responses, file diffs) | Fetch on demand from dashboard |
The split keeps queries fast (D1 is indexed) while allowing full payload review when needed (R2 stores bulk data cheaply).
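
A Worker-style sketch of that access pattern is shown below: filter metadata in D1, then fetch a full payload from R2 only when a reviewer opens an event. The binding names (`AUDIT_DB`, `AUDIT_PAYLOADS`), the `events` table, its columns, and the R2 key scheme are assumptions for illustration, not Nectar's actual schema; the ambient types come from `@cloudflare/workers-types`.

```ts
// Sketch of the D1 + R2 access pattern. Binding names, the events table, its
// columns, and the R2 key scheme are assumptions, not Nectar's actual schema.
// Ambient types (D1Database, R2Bucket) come from @cloudflare/workers-types.
interface Env {
  AUDIT_DB: D1Database;     // metadata: type, timestamp, trace_id, score, agent_id
  AUDIT_PAYLOADS: R2Bucket; // full payloads, keyed per event
}

export default {
  async fetch(req: Request, env: Env): Promise<Response> {
    const traceId = new URL(req.url).searchParams.get("trace_id") ?? "";

    // Fast, indexed query over metadata only.
    const { results } = await env.AUDIT_DB
      .prepare(
        "SELECT event_id, type, timestamp, score FROM events WHERE trace_id = ? ORDER BY timestamp",
      )
      .bind(traceId)
      .all();

    // Pull a full payload from R2 only on demand, e.g. when a row is expanded.
    const first = results[0] as { event_id: string } | undefined;
    const payload = first
      ? await env.AUDIT_PAYLOADS.get(`payloads/${first.event_id}.json`)
      : null;

    return Response.json({
      events: results,
      firstPayload: payload ? await payload.json() : null,
    });
  },
};
```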
## Data flow

```
Agent activity
  → Local JSONL (always, immediate)
  → Aggregation (5-minute summaries)
  → Cloud ingest (opt-in, summaries only)
  → D1 metadata + R2 payloads (team dashboard)
```

Cloud sync sends aggregated summaries only; raw prompts, responses, and content are never uploaded. Teams that need full payload review use the local JSONL or configure explicit payload upload.
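A sketch of the aggregation step, under the same field-name assumptions as earlier: raw events are bucketed into 5-minute windows and only per-type counts move on to cloud ingest.

```ts
// Roll raw events up into 5-minute summaries. Only aggregates like these are
// synced by default; the summary shape here is illustrative, not Nectar's.
type RawEvent = { event: string; ts: string };
type Summary = { window: string; counts: Record<string, number> };

function summarize(events: RawEvent[], windowMs = 5 * 60_000): Summary[] {
  const buckets = new Map<number, Record<string, number>>();
  for (const ev of events) {
    const start = Math.floor(Date.parse(ev.ts) / windowMs) * windowMs;
    const counts = buckets.get(start) ?? {};
    counts[ev.event] = (counts[ev.event] ?? 0) + 1;
    buckets.set(start, counts);
  }
  return [...buckets.entries()].map(([start, counts]) => ({
    window: new Date(start).toISOString(),
    counts,
  }));
}
```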
## Privacy guarantees

| What | Local | Cloud (default) | Cloud (full audit) |
|---|---|---|---|
| Event counts | Yes | Yes | Yes |
| Timing / latency | Yes | Yes | Yes |
| Scores / findings | Yes | Yes | Yes |
| Tool names | Yes | Yes | Yes |
| Prompt content | Yes | No | Opt-in |
| Response content | Yes | No | Opt-in |
| File diffs | Yes | No | Opt-in |
| Network payloads | Yes | No | Opt-in |
Default cloud mode is structural metadata only. Content never leaves the local machine unless explicitly configured.
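
In practice the default amounts to stripping content-bearing fields before anything is sent. A minimal sketch of that redaction step, with the field list as an assumption rather than Nectar's actual filter:

```ts
// Keep structural metadata, drop content-bearing fields before cloud sync.
// The field list is an illustrative assumption, not Nectar's actual filter.
const CONTENT_FIELDS = new Set(["prompt", "response", "diff", "payload"]);

function toCloudSafe(event: Record<string, unknown>): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(event).filter(([key]) => !CONTENT_FIELDS.has(key)),
  );
}

// Prints { event: "llm_call", trace_id: "abc-123", tokens: 812 }; prompt is dropped.
console.log(toCloudSafe({ event: "llm_call", trace_id: "abc-123", tokens: 812, prompt: "..." }));
```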
## Compliance use cases

### “Show your auditor what the AI did”

Nectar provides the evidence chain auditors need:
- Complete capture: Every LLM interaction, tool use, and file change is recorded
- Tamper evidence: JSONL append-only log with timestamps
- Correlation: trace_id links every action to its originating session
- Scan results: Carapace scores prove every input/output was checked
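
One way to make that chain checkable on demand: walk the local log and flag any trace that contains LLM calls but no Carapace scan. A sketch under the same field-name assumptions as earlier (event names match the table further down):

```ts
// Cross-check the local log: list trace_ids that contain LLM calls but no
// Carapace scan. Event names mirror the table below; field names are assumptions.
type Ev = { event: string; trace_id: string };

function tracesMissingScans(events: Ev[]): string[] {
  const llmTraces = new Set(events.filter((e) => e.event === "llm_call").map((e) => e.trace_id));
  const scanned = new Set(events.filter((e) => e.event === "guard_scan").map((e) => e.trace_id));
  return [...llmTraces].filter((id) => !scanned.has(id));
}
```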
### “Prove the agent didn’t access unauthorized data”

Combining Nectar with the eBPF firewall gives you:
- Network log: Every SSL connection, every domain, every payload
- File log: Every file read/write with full diff
- Process log: Every subprocess spawned by the agent
- Claim log: Every resource lock acquired and released
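
A similar check over the network layer, sketched under the assumption that captured network events carry a domain field; the allowlist contents are illustrative:

```ts
// Report domains contacted by the agent that are not on an approved list.
// The `domain` field and the allowlist contents are illustrative assumptions.
const ALLOWED = new Set(["api.openai.com"]);

function unauthorizedDomains(events: { domain?: string }[]): string[] {
  const offending = new Set<string>();
  for (const e of events) {
    if (e.domain && !ALLOWED.has(e.domain)) offending.add(e.domain);
  }
  return [...offending];
}
```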
## 12 telemetry event types

These events form the backbone of Nectar’s audit trail:
| Event | Source | Key fields |
|---|---|---|
| `llm_call` | Runner | model, tokens, latency, cost |
| `llm_error` | Runner | model, error type, retry count |
| `tool_call` | Runner | tool name, duration, success |
| `agent_complete` | Runner | exit reason |
| `context_compaction` | Runner | tokens before/after |
| `guard_scan` | Guard | side, score, action, findings |
| `agent_spawn` | Orchestrator | agent ID, role, type, model |
| `agent_exit` | Orchestrator | agent ID, exit reason, duration |
| `agent_kill` | Orchestrator | agent ID, kill reason |
| `protocol_start` | Orchestrator | protocol name, agent count |
| `protocol_end` | Orchestrator | duration, total tokens, total cost |
| `dance_call` | WebSocket | tool name, caller role, latency |
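
If you consume these events programmatically, a discriminated union keeps the handling exhaustive. The type names and key fields below mirror the table, but the exact field names are assumptions rather than the published event format:

```ts
// The 12 event types as a string union, mirroring the table above.
type NectarEventType =
  | "llm_call" | "llm_error" | "tool_call" | "agent_complete" | "context_compaction"
  | "guard_scan" | "agent_spawn" | "agent_exit" | "agent_kill"
  | "protocol_start" | "protocol_end" | "dance_call";

// Two variants spelled out as a sketch; field names are assumptions derived
// from the "Key fields" column, not the published event format.
type LlmCall = { event: "llm_call"; model: string; tokens: number; latency_ms: number; cost_usd: number };
type GuardScan = { event: "guard_scan"; side: "input" | "output"; score: number; action: string };

function describe(ev: LlmCall | GuardScan): string {
  switch (ev.event) {
    case "llm_call":
      return `${ev.model}: ${ev.tokens} tokens in ${ev.latency_ms} ms`;
    case "guard_scan":
      return `scan (${ev.side}): score ${ev.score}, action ${ev.action}`;
  }
}
```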
## Related

- Nectar product page — full product overview
- Telemetry events — event format reference
- Scanner — how Carapace scans feed into audit
- eBPF Firewall — network event capture
- Privacy — data handling policies