
Analyze Agent Logs with fs9

AI agents produce structured logs — tool invocations, model calls, lifecycle events, errors. The fs9 extension lets you write those logs to DB9’s built-in filesystem and then query them with SQL, without loading them into a separate observability tool.

This guide walks through a practical flow: write agent logs, query them as structured data, and build useful aggregations.

To follow along, you need:

  • A DB9 database (see Quick Start)
  • The fs9 extension enabled:
SQL
CREATE EXTENSION IF NOT EXISTS fs9;

fs9 exposes a per-database filesystem backed by TiKV. Files are isolated per tenant — each database has its own / root. You interact with it through:

  • Scalar functions like fs9_write(), fs9_read(), fs9_append() for file operations
  • A table function extensions.fs9() that reads files as SQL result sets with automatic format detection

The table function auto-detects .jsonl, .csv, .tsv, and plain text formats by file extension.
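For plain text, each line of the file comes back as one row. A minimal round trip might look like this (assuming that for unstructured formats the line column surfaces as TEXT rather than JSONB):

SQL
-- Write a plain-text file, then read it back line by line
SELECT fs9_write('/notes/run.txt',
'started
finished');
SELECT _line_number, line
FROM extensions.fs9('/notes/run.txt');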

Agents typically emit logs as JSONL (one JSON object per line). Use fs9_write() or fs9_append() to store them:

SQL
-- Write a batch of agent events as JSONL
SELECT fs9_write('/logs/agent/2026-03-12.jsonl',
'{"ts":"2026-03-12T10:00:01Z","event":"tool_call","tool":"sql","args":{"query":"SELECT count(*) FROM users"},"duration_ms":42}
{"ts":"2026-03-12T10:00:02Z","event":"tool_call","tool":"http_get","args":{"url":"https://api.example.com/status"},"duration_ms":310}
{"ts":"2026-03-12T10:00:03Z","event":"tool_call","tool":"fs_write","args":{"path":"/data/report.csv"},"duration_ms":15}
{"ts":"2026-03-12T10:00:05Z","event":"model_call","model":"claude-sonnet-4-20250514","tokens_in":1200,"tokens_out":350,"duration_ms":890}
{"ts":"2026-03-12T10:00:06Z","event":"tool_call","tool":"sql","args":{"query":"INSERT INTO reports VALUES (...)"},"duration_ms":28}
{"ts":"2026-03-12T10:00:08Z","event":"lifecycle","phase":"complete","status":"success","total_duration_ms":7200}
');

For streaming logs, use fs9_append() to add lines without overwriting:

SQL
SELECT fs9_append('/logs/agent/2026-03-12.jsonl',
'{"ts":"2026-03-12T10:01:00Z","event":"tool_call","tool":"sql","args":{"query":"SELECT * FROM metrics"},"duration_ms":55}
');

The extensions.fs9() table function reads .jsonl files and returns each line as a JSONB row:

SQL
SELECT _line_number, line
FROM extensions.fs9('/logs/agent/2026-03-12.jsonl')
LIMIT 5;

Each row has:

Column        Type   Description
_line_number  INT    1-based line index
line          JSONB  The parsed JSON object
_path         TEXT   Source file path (useful with globs)

Since line is JSONB, you can use standard PostgreSQL JSON operators to extract fields:

SQL
SELECT
line->>'event' AS event_type,
line->>'tool' AS tool,
(line->>'duration_ms')::int AS duration_ms
FROM extensions.fs9('/logs/agent/2026-03-12.jsonl')
WHERE line->>'event' = 'tool_call';
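The same operators work for filtering. For instance, to surface slow tool calls (the 100 ms threshold here is arbitrary, not something fs9 defines):

SQL
-- Tool calls slower than 100 ms, slowest first
SELECT
line->>'tool' AS tool,
(line->>'duration_ms')::int AS duration_ms,
line->'args' AS args
FROM extensions.fs9('/logs/agent/2026-03-12.jsonl')
WHERE line->>'event' = 'tool_call'
AND (line->>'duration_ms')::int > 100
ORDER BY duration_ms DESC;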

Count tool calls by tool name:

SQL
SELECT
line->>'tool' AS tool,
count(*) AS calls,
round(avg((line->>'duration_ms')::numeric)) AS avg_ms,
max((line->>'duration_ms')::int) AS max_ms
FROM extensions.fs9('/logs/agent/2026-03-12.jsonl')
WHERE line->>'event' = 'tool_call'
GROUP BY 1
ORDER BY calls DESC;

Example output:

tool      calls  avg_ms  max_ms
sql       3      42      55
http_get  1      310     310
fs_write  1      15      15
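Averages can hide outliers. If DB9 supports PostgreSQL's ordered-set aggregates (an assumption worth verifying for your deployment), a per-tool p95 latency is one query away:

SQL
-- 95th-percentile duration per tool
SELECT
line->>'tool' AS tool,
percentile_cont(0.95) WITHIN GROUP (ORDER BY (line->>'duration_ms')::int) AS p95_ms
FROM extensions.fs9('/logs/agent/2026-03-12.jsonl')
WHERE line->>'event' = 'tool_call'
GROUP BY 1;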

Aggregate model calls to see token consumption:

SQL
SELECT
line->>'model' AS model,
count(*) AS calls,
sum((line->>'tokens_in')::int) AS total_tokens_in,
sum((line->>'tokens_out')::int) AS total_tokens_out,
round(avg((line->>'duration_ms')::numeric)) AS avg_latency_ms
FROM extensions.fs9('/logs/agent/2026-03-12.jsonl')
WHERE line->>'event' = 'model_call'
GROUP BY 1;
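The same pattern applies to lifecycle events. For example, to spot runs that did not finish successfully:

SQL
-- Lifecycle events with any status other than success
SELECT
line->>'ts' AS ts,
line->>'phase' AS phase,
line->>'status' AS status
FROM extensions.fs9('/logs/agent/2026-03-12.jsonl')
WHERE line->>'event' = 'lifecycle'
AND line->>'status' IS DISTINCT FROM 'success';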

Use glob patterns to query all logs in a directory or across dates:

SQL
-- All JSONL files in the agent logs directory
SELECT
_path,
line->>'event' AS event,
line->>'tool' AS tool
FROM extensions.fs9('/logs/agent/*.jsonl')
LIMIT 20;
-- Recursive search across all subdirectories
SELECT
_path,
count(*) AS events
FROM extensions.fs9('/logs/**/*.jsonl')
GROUP BY 1
ORDER BY events DESC;

Glob limits: a single glob can match up to 10,000 files and 100 MB total. For larger log volumes, query specific date ranges instead of using **.
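One way to stay under those limits is to narrow the glob to a date prefix instead of walking the whole tree. This sketch assumes daily files named YYYY-MM-DD.jsonl, as in the examples above:

SQL
-- Only logs from the first ten days of March 2026
SELECT
line->>'event' AS event,
count(*) AS events
FROM extensions.fs9('/logs/agent/2026-03-0*.jsonl')
GROUP BY 1;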

If your agent writes CSV logs, fs9 auto-detects the format and maps headers to column names:

SQL
-- Write a CSV log
SELECT fs9_write('/logs/requests.csv',
'timestamp,method,path,status,duration_ms
2026-03-12T10:00:01Z,GET,/api/users,200,42
2026-03-12T10:00:02Z,POST,/api/tasks,201,85
2026-03-12T10:00:03Z,GET,/api/users/1,404,12
');
-- Query it — column names come from the header row
SELECT method, path, status, duration_ms
FROM extensions.fs9('/logs/requests.csv')
WHERE status != '200';
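The query above compares status against the string '200', which suggests CSV values surface as text. If so, cast numeric columns before doing arithmetic on them:

SQL
-- Average request duration per method (duration_ms cast from text)
SELECT
method,
round(avg(duration_ms::numeric)) AS avg_ms
FROM extensions.fs9('/logs/requests.csv')
GROUP BY 1;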

Check what logs exist and how large they are:

SQL
-- List log files
SELECT path, size, mtime
FROM extensions.fs9('/logs/agent/')
ORDER BY mtime DESC;
-- Check a specific file's size
SELECT fs9_size('/logs/agent/2026-03-12.jsonl');
-- Read a partial range (first 1024 bytes)
SELECT fs9_read_at('/logs/agent/2026-03-12.jsonl', 0, 1024);
-- Delete old logs
SELECT fs9_remove('/logs/agent/2026-03-01.jsonl');
-- Delete a directory recursively
SELECT fs9_remove('/logs/old/', true);
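Listing and deletion compose, so a retention sweep can be a single statement. This is a sketch; it assumes the listing's mtime column is a timestamp comparable with now():

SQL
-- Remove agent logs older than 30 days
SELECT fs9_remove(path)
FROM extensions.fs9('/logs/agent/')
WHERE mtime < now() - interval '30 days';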
Keep these limits in mind:

  • Superuser required — all fs9 operations require the database admin role.
  • Max file size — 100 MB per file. For larger log volumes, split by date or category.
  • Max glob result — a single glob query can scan up to 10,000 files and 100 MB total.
  • Concurrent read budget — 128 MB across all concurrent fs9_read() calls globally.
  • UTF-8 only — binary content read as text uses lossy UTF-8 conversion. Use fs9_write() with BYTEA for binary data.
  • No streaming — the table function loads the entire file into memory. For very large files, use fs9_read_at() to read in ranges.
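For files too large to load at once, fs9_read_at() can be driven by generate_series() to read fixed-size chunks. A sketch, assuming DB9 supports PostgreSQL's three-argument generate_series():

SQL
-- Read a large file in 64 KB chunks
SELECT
chunk_start,
fs9_read_at('/logs/agent/2026-03-12.jsonl', chunk_start, 65536) AS chunk
FROM generate_series(0, fs9_size('/logs/agent/2026-03-12.jsonl') - 1, 65536) AS chunk_start;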