# Analyze Agent Logs with fs9
AI agents produce structured logs — tool invocations, model calls, lifecycle events, errors. The fs9 extension lets you write those logs to DB9’s built-in filesystem and then query them with SQL, without loading them into a separate observability tool.
This guide walks through a practical flow: write agent logs, query them as structured data, and build useful aggregations.
## Prerequisites

- A DB9 database (see Quick Start)
- The `fs9` extension enabled:

```sql
CREATE EXTENSION IF NOT EXISTS fs9;
```

## How fs9 Works
fs9 exposes a per-database filesystem backed by TiKV. Files are isolated per tenant — each database has its own `/` root. You interact with it through:

- Scalar functions like `fs9_write()`, `fs9_read()`, and `fs9_append()` for file operations
- A table function, `extensions.fs9()`, that reads files as SQL result sets with automatic format detection

The table function auto-detects `.jsonl`, `.csv`, `.tsv`, and plain-text formats by file extension.
## 1. Write Agent Logs to the Filesystem

Agents typically emit logs as JSONL (one JSON object per line). Use `fs9_write()` or `fs9_append()` to store them:

```sql
-- Write a batch of agent events as JSONL
SELECT fs9_write('/logs/agent/2026-03-12.jsonl',
'{"ts":"2026-03-12T10:00:01Z","event":"tool_call","tool":"sql","args":{"query":"SELECT count(*) FROM users"},"duration_ms":42}
{"ts":"2026-03-12T10:00:02Z","event":"tool_call","tool":"http_get","args":{"url":"https://api.example.com/status"},"duration_ms":310}
{"ts":"2026-03-12T10:00:03Z","event":"tool_call","tool":"fs_write","args":{"path":"/data/report.csv"},"duration_ms":15}
{"ts":"2026-03-12T10:00:05Z","event":"model_call","model":"claude-sonnet-4-20250514","tokens_in":1200,"tokens_out":350,"duration_ms":890}
{"ts":"2026-03-12T10:00:06Z","event":"tool_call","tool":"sql","args":{"query":"INSERT INTO reports VALUES (...)"},"duration_ms":28}
{"ts":"2026-03-12T10:00:08Z","event":"lifecycle","phase":"complete","status":"success","total_duration_ms":7200}');
```

For streaming logs, use `fs9_append()` to add lines without overwriting:

```sql
SELECT fs9_append('/logs/agent/2026-03-12.jsonl',
'{"ts":"2026-03-12T10:01:00Z","event":"tool_call","tool":"sql","args":{"query":"SELECT * FROM metrics"},"duration_ms":55}');
```
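If the agent batches events client-side before writing, the JSONL payload is just one `json.dumps()` per event joined by newlines. A minimal sketch in Python — the `events` list is illustrative, and how the string reaches `fs9_write()` depends on your SQL client:

```python
import json

# Hypothetical agent events to persist as one JSONL payload.
events = [
    {"ts": "2026-03-12T10:00:01Z", "event": "tool_call", "tool": "sql", "duration_ms": 42},
    {"ts": "2026-03-12T10:00:05Z", "event": "model_call", "model": "claude-sonnet-4-20250514",
     "tokens_in": 1200, "tokens_out": 350, "duration_ms": 890},
]

# One compact JSON object per line -- the shape the .jsonl reader expects.
payload = "\n".join(json.dumps(e, separators=(",", ":")) for e in events)

# `payload` would then be bound as the second argument of fs9_write(),
# e.g. via a parameterized query: SELECT fs9_write('/logs/agent/2026-03-12.jsonl', $1)
```

Keeping each object compact (no embedded newlines) is what makes the file line-addressable later.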
## 2. Query Logs as Structured Data

The `extensions.fs9()` table function reads `.jsonl` files and returns each line as a JSONB row:
```sql
SELECT _line_number, line
FROM extensions.fs9('/logs/agent/2026-03-12.jsonl')
LIMIT 5;
```

Each row has:

| Column | Type | Description |
|---|---|---|
| `_line_number` | INT | 1-based line index |
| `line` | JSONB | The parsed JSON object |
| `_path` | TEXT | Source file path (useful with globs) |

Since `line` is JSONB, you can use standard PostgreSQL JSON operators to extract fields:
```sql
SELECT
  line->>'event' AS event_type,
  line->>'tool' AS tool,
  (line->>'duration_ms')::int AS duration_ms
FROM extensions.fs9('/logs/agent/2026-03-12.jsonl')
WHERE line->>'event' = 'tool_call';
```
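The `->>` extraction and the `WHERE` filter can be sanity-checked client-side by parsing the same lines in Python. A sketch over two of the example events:

```python
import json

jsonl = (
    '{"ts":"2026-03-12T10:00:01Z","event":"tool_call","tool":"sql","duration_ms":42}\n'
    '{"ts":"2026-03-12T10:00:05Z","event":"model_call","model":"claude-sonnet-4-20250514","duration_ms":890}'
)

# Mirrors: SELECT line->>'tool', (line->>'duration_ms')::int
#          WHERE line->>'event' = 'tool_call'
rows = [json.loads(line) for line in jsonl.splitlines()]
tool_calls = [(r["tool"], int(r["duration_ms"])) for r in rows if r["event"] == "tool_call"]
```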
## 3. Aggregate Tool Call Patterns

Count tool calls by tool name:
```sql
SELECT
  line->>'tool' AS tool,
  count(*) AS calls,
  round(avg((line->>'duration_ms')::numeric)) AS avg_ms,
  max((line->>'duration_ms')::int) AS max_ms
FROM extensions.fs9('/logs/agent/2026-03-12.jsonl')
WHERE line->>'event' = 'tool_call'
GROUP BY 1
ORDER BY calls DESC;
```

Example output:
| tool | calls | avg_ms | max_ms |
|---|---|---|---|
| sql | 3 | 42 | 55 |
| http_get | 1 | 310 | 310 |
| fs_write | 1 | 15 | 15 |
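As a cross-check, the same grouping over the five tool-call durations from step 1 can be reproduced in a few lines of Python:

```python
from collections import defaultdict

# (tool, duration_ms) pairs from the tool_call events written in step 1.
calls = [("sql", 42), ("http_get", 310), ("fs_write", 15), ("sql", 28), ("sql", 55)]

by_tool = defaultdict(list)
for tool, ms in calls:
    by_tool[tool].append(ms)

# Equivalent of count(*), round(avg(...)), max(...) GROUP BY tool
stats = {
    tool: (len(ms), round(sum(ms) / len(ms)), max(ms))
    for tool, ms in by_tool.items()
}
```

This reproduces the table above: `sql` averages to 42 ms across its three calls.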
## 4. Track Token Usage

Aggregate model calls to see token consumption:
```sql
SELECT
  line->>'model' AS model,
  count(*) AS calls,
  sum((line->>'tokens_in')::int) AS total_tokens_in,
  sum((line->>'tokens_out')::int) AS total_tokens_out,
  round(avg((line->>'duration_ms')::numeric)) AS avg_latency_ms
FROM extensions.fs9('/logs/agent/2026-03-12.jsonl')
WHERE line->>'event' = 'model_call'
GROUP BY 1;
```
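Client-side, the same rollup is a running sum per model. A Python sketch over illustrative model-call events (the second event is invented for the example; the log above contains only one):

```python
from collections import defaultdict

model_calls = [
    {"model": "claude-sonnet-4-20250514", "tokens_in": 1200, "tokens_out": 350},
    {"model": "claude-sonnet-4-20250514", "tokens_in": 800, "tokens_out": 210},  # hypothetical
]

# Equivalent of count(*), sum(tokens_in), sum(tokens_out) GROUP BY model
totals = defaultdict(lambda: {"calls": 0, "tokens_in": 0, "tokens_out": 0})
for c in model_calls:
    t = totals[c["model"]]
    t["calls"] += 1
    t["tokens_in"] += c["tokens_in"]
    t["tokens_out"] += c["tokens_out"]
```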
## 5. Query Across Multiple Log Files

Use glob patterns to query all logs in a directory or across dates:
```sql
-- All JSONL files in the agent logs directory
SELECT _path, line->>'event' AS event, line->>'tool' AS tool
FROM extensions.fs9('/logs/agent/*.jsonl')
LIMIT 20;
```

```sql
-- Recursive search across all subdirectories
SELECT _path, count(*) AS events
FROM extensions.fs9('/logs/**/*.jsonl')
GROUP BY 1
ORDER BY events DESC;
```

Glob limits: a single glob can match up to 10,000 files and 100 MB total. For larger log volumes, query specific date ranges instead of using `**`.
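The practical difference between `*` (one path segment) and `**` (any depth) can be illustrated with a small matcher. This Python sketch encodes the conventional glob semantics; it is an illustration, not fs9's actual matching code:

```python
import re

def glob_to_regex(pattern: str) -> re.Pattern:
    """Translate a glob with * (single segment) and ** (any depth) to a regex."""
    out, i = [], 0
    while i < len(pattern):
        if pattern.startswith("**", i):
            out.append(".*")       # ** crosses directory boundaries
            i += 2
        elif pattern[i] == "*":
            out.append("[^/]*")    # * stays within one path segment
            i += 1
        else:
            out.append(re.escape(pattern[i]))
            i += 1
    return re.compile("^" + "".join(out) + "$")

flat = glob_to_regex("/logs/agent/*.jsonl")   # one directory level only
deep = glob_to_regex("/logs/**/*.jsonl")      # any nesting depth
```

So `/logs/agent/*.jsonl` skips nested subdirectories, while `/logs/**/*.jsonl` walks them all — which is exactly why the recursive form hits the glob limits sooner.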
## 6. Query CSV Logs

If your agent writes CSV logs, fs9 auto-detects the format and maps headers to column names:
```sql
-- Write a CSV log
SELECT fs9_write('/logs/requests.csv',
'timestamp,method,path,status,duration_ms
2026-03-12T10:00:01Z,GET,/api/users,200,42
2026-03-12T10:00:02Z,POST,/api/tasks,201,85
2026-03-12T10:00:03Z,GET,/api/users/1,404,12');
```

```sql
-- Query it — column names come from the header row
SELECT method, path, status, duration_ms
FROM extensions.fs9('/logs/requests.csv')
WHERE status != '200';
```
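The header-to-column mapping behaves like `csv.DictReader` on the client side. A Python sketch over the same three rows, reproducing the `status != '200'` filter:

```python
import csv
import io

csv_text = """timestamp,method,path,status,duration_ms
2026-03-12T10:00:01Z,GET,/api/users,200,42
2026-03-12T10:00:02Z,POST,/api/tasks,201,85
2026-03-12T10:00:03Z,GET,/api/users/1,404,12
"""

# The header row becomes the field names, as with extensions.fs9()
rows = list(csv.DictReader(io.StringIO(csv_text)))
non_200 = [(r["method"], r["path"], r["status"]) for r in rows if r["status"] != "200"]
```

Note that values arrive as text, which is why the SQL example compares `status` against the string `'200'`.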
## 7. File Management

Check what logs exist and how large they are:
```sql
-- List log files
SELECT path, size, mtime
FROM extensions.fs9('/logs/agent/')
ORDER BY mtime DESC;
```

```sql
-- Check a specific file's size
SELECT fs9_size('/logs/agent/2026-03-12.jsonl');
```

```sql
-- Read a partial range (first 1024 bytes)
SELECT fs9_read_at('/logs/agent/2026-03-12.jsonl', 0, 1024);
```

```sql
-- Delete old logs
SELECT fs9_remove('/logs/agent/2026-03-01.jsonl');
```

```sql
-- Delete a directory recursively
SELECT fs9_remove('/logs/old/', true);
```
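With date-named files, a retention pass reduces to picking filenames older than a cutoff and calling `fs9_remove()` on each. A client-side sketch in Python — the filename layout (`YYYY-MM-DD.jsonl`) follows this guide's examples, and the path list would come from the directory listing query above:

```python
from datetime import date, timedelta

def expired(paths: list[str], today: date, keep_days: int = 30) -> list[str]:
    """Return log paths whose date-stamped filename is older than the cutoff."""
    cutoff = today - timedelta(days=keep_days)
    old = []
    for p in paths:
        stem = p.rsplit("/", 1)[-1].removesuffix(".jsonl")  # e.g. '2026-03-12'
        if date.fromisoformat(stem) < cutoff:
            old.append(p)
    return old

# Each returned path would then be passed to: SELECT fs9_remove(path);
```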
## Limits and Caveats

- Superuser required — all fs9 operations require the database admin role.
- Max file size — 100 MB per file. For larger log volumes, split by date or category.
- Max glob result — a single glob query can scan up to 10,000 files and 100 MB total.
- Concurrent read budget — 128 MB across all concurrent `fs9_read()` calls globally.
- UTF-8 only — binary content read as text uses lossy UTF-8 conversion. Use `fs9_write()` with BYTEA for binary data.
- No streaming — the table function loads the entire file into memory. For very large files, use `fs9_read_at()` to read in ranges.
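Because the table function is not streaming, very large files are better consumed in ranges. A client-side sketch in Python of the chunk-and-split loop; `read_at` stands in for whatever client call issues `fs9_read_at(path, offset, length)`, simulated here with an in-memory buffer:

```python
def iter_lines(read_at, path: str, chunk_size: int = 1024):
    """Yield complete lines from a file read in fixed-size ranges."""
    offset = 0
    tail = ""
    while True:
        chunk = read_at(path, offset, chunk_size)
        if not chunk:
            break
        offset += len(chunk)
        tail += chunk
        *lines, tail = tail.split("\n")  # keep the last partial line for the next chunk
        yield from lines
    if tail:
        yield tail  # final line without a trailing newline

# Simulate fs9_read_at with an in-memory file for demonstration.
data = "\n".join(f'{{"event":"tool_call","n":{i}}}' for i in range(100))
fake_read_at = lambda path, off, n: data[off:off + n]
```

The key detail is carrying the partial last line of each chunk into the next read, so no line is ever split across yields.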
## Next Pages

- fs9 Extension Reference — full function reference, format detection rules, and named parameters
- RAG with Built-in Embeddings — vector search and retrieval pipelines
- HTTP from SQL — call external APIs from SQL
- Extensions Overview — all 9 built-in extensions
- CLI Reference — `db9 db sql` for running queries from the terminal