
Agent Workflows

DB9 gives AI agents a complete backend through the PostgreSQL wire protocol. Instead of stitching together separate services for storage, embeddings, files, HTTP, and scheduling, agents do all of it in SQL.

This page is the starting point for building agent workflows on DB9. It maps common agent tasks to the DB9 capabilities that solve them, shows how those capabilities compose, and links to deeper guides. It is written for:

  • Agent developers building assistants, copilots, or autonomous agents that need persistent state.
  • Platform engineers creating agent infrastructure where each agent, user, or task gets its own database.
  • Teams already using DB9 who want to move beyond basic storage into full agent automation.

If you’re still deciding whether DB9 is the right fit, start with Why DB9 for AI Agents.

Most agent workflows follow a predictable lifecycle. DB9 has a built-in primitive for each stage:

| Agent lifecycle stage | DB9 primitive | How to access |
| --- | --- | --- |
| Provision a workspace | instantDatabase() or db9 create | SDK, CLI |
| Store structured state | Standard SQL tables | Any Postgres client |
| Search semantically | embedding() + pgvector operators | SQL |
| Ingest files and artifacts | fs9 functions and table source | SQL, CLI, SDK |
| Call external APIs | http_get(), http_post(), and friends | SQL |
| Branch for safe experiments | db9 branch create | CLI, SDK |
| Schedule recurring work | pg_cron | SQL, CLI |
| Onboard agent tooling | db9 onboard | CLI |

Each primitive is accessible through standard SQL or the PostgreSQL wire protocol. Agents don’t need a proprietary SDK to use them — any Postgres driver works.

Provision: one database per agent, user, or task

Agent systems typically need to create databases on demand. DB9 provisions a database in under a second, and anonymous use requires no signup.

From the TypeScript SDK:

TypeScript
import { instantDatabase } from 'get-db9';

const db = await instantDatabase({
  name: 'agent-session-42',
  seed: `
    CREATE TABLE memory (id SERIAL, key TEXT, value JSONB, created_at TIMESTAMPTZ DEFAULT now());
    CREATE TABLE artifacts (id SERIAL, path TEXT, content TEXT);
  `,
});

// db.connectionString → ready for any Postgres client

instantDatabase() is idempotent — if a database with that name already exists, it returns the existing one. This makes agent restarts safe.

From the CLI:

Terminal
db9 create --name agent-session-42

For fleet operations, the SDK client also exposes databases.create(), databases.list(), databases.delete(), and databases.credentials() for full programmatic lifecycle management.

Common patterns:

  • Database-per-agent — each agent instance gets its own database for complete isolation.
  • Database-per-user — multi-tenant applications give each end user a dedicated database.
  • Database-per-task — disposable databases for one-shot jobs, deleted when the task completes.
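Under any of these patterns, restart safety comes from combining idempotent provisioning with deterministic names. A minimal sketch of such a naming helper (illustrative, not part of the SDK):

```typescript
// Hypothetical helper: derive a stable database name per isolation scope.
// Deterministic names plus the idempotent instantDatabase() make agent
// restarts safe: re-provisioning resolves to the same database.
type Scope = 'agent' | 'user' | 'task';

function dbNameFor(scope: Scope, id: string): string {
  // Normalize to the lowercase/dash style used in the docs' examples.
  const slug = id
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')
    .replace(/^-+|-+$/g, '');
  return `agent-${scope}-${slug}`;
}
```

Because the name is a pure function of the scope and identifier, a crashed agent that re-runs its provisioning step lands on its existing database rather than creating a duplicate.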

Deeper guide: Provisioning

Store: structured state in Postgres tables

Agents store context, conversation history, tool outputs, and intermediate results in regular Postgres tables. DB9 supports the full range of SQL data types, DDL, DML, transactions, and indexes.

SQL
-- Agent stores a tool result
INSERT INTO memory (key, value)
VALUES ('search_result', '{"query": "quarterly revenue", "hits": 14}');
-- Agent retrieves its context
SELECT key, value FROM memory ORDER BY id DESC LIMIT 10;

Because DB9 speaks the PostgreSQL wire protocol, agents can use any Postgres-compatible ORM or driver: Prisma, Drizzle, SQLAlchemy, TypeORM, or raw pg connections.
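As a sketch of that driver flexibility, the memory reads and writes above can sit behind a minimal query interface in the node-postgres style. The interface and helper names here are illustrative, not SDK API:

```typescript
// Minimal client shape satisfied by node-postgres (and easy to stub in tests).
// The SQL strings mirror the examples in this section.
interface SqlClient {
  query(text: string, params?: unknown[]): Promise<{ rows: any[] }>;
}

async function storeToolResult(db: SqlClient, key: string, value: object): Promise<void> {
  await db.query('INSERT INTO memory (key, value) VALUES ($1, $2)', [
    key,
    JSON.stringify(value),
  ]);
}

async function recentContext(db: SqlClient, limit = 10): Promise<any[]> {
  const { rows } = await db.query(
    'SELECT key, value FROM memory ORDER BY id DESC LIMIT $1',
    [limit],
  );
  return rows;
}
```

Anything that can execute parameterized SQL over a Postgres connection can implement `SqlClient`, which is the point: the agent's state layer has no DB9-specific dependency.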

Reference: SQL, Connect

Search: built-in embeddings and vector queries

DB9 includes a server-side embedding() function that generates vectors without a separate embedding service. Combined with pgvector-compatible operators and HNSW indexes, agents can build semantic search in pure SQL.

SQL
-- Enable the extension (once per database)
CREATE EXTENSION IF NOT EXISTS embedding;

-- Table to hold content alongside its vector
CREATE TABLE IF NOT EXISTS knowledge (id SERIAL PRIMARY KEY, content TEXT, vec vector(1536));

-- Store content with its embedding
INSERT INTO knowledge (content, vec)
VALUES (
  'DB9 supports branching for safe experiments',
  embedding('DB9 supports branching for safe experiments')::vector(1536)
);

-- Semantic search
SELECT content, vec <-> embedding('how do I test safely?')::vector(1536) AS distance
FROM knowledge
ORDER BY distance
LIMIT 5;

The embedding() function accepts an optional model name and dimensions parameter:

SQL
-- Use a specific model
SELECT embedding('hello world', 'bedrock/amazon-titan-v2', 1024);

Embeddings are generated server-side and cached within each statement execution. The per-statement limit is 100 embedding calls (configurable via the embedding.max_calls session parameter).
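A practical consequence of the per-statement budget: bulk embedding jobs should be split so no single statement exceeds it. A hedged sketch of that batching (the default of 100 comes from the limit above; the helper itself is illustrative):

```typescript
// Split items into batches so that each INSERT ... embedding(...) statement
// stays within the per-statement embedding call limit (default 100).
const EMBEDDING_MAX_CALLS = 100;

function batchForEmbedding<T>(items: T[], maxCalls = EMBEDDING_MAX_CALLS): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += maxCalls) {
    batches.push(items.slice(i, i + maxCalls));
  }
  return batches;
}
```

Each batch then becomes one multi-row INSERT whose VALUES list calls embedding() at most `maxCalls` times.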

Guide: RAG with Built-in Embeddings (coming soon) · Reference: Vector Search

Ingest: files as queryable tables with fs9

Agents produce and consume files — logs, CSVs, JSON exports, Parquet snapshots. fs9 makes these queryable without loading them into tables first.

Read files as tables:

SQL
-- Query a CSV directly
SELECT * FROM fs9('/data/results.csv');
-- Query Parquet files with a glob pattern
SELECT * FROM fs9('/exports/*.parquet');

Manage files from SQL:

SQL
-- Write a result file
SELECT fs9_write('/output/summary.json', '{"status": "complete", "rows": 1024}');
-- Read a file
SELECT fs9_read('/output/summary.json');
-- List directory contents
SELECT * FROM fs9('/output/', recursive := true);

Manage files from the CLI or SDK:

Terminal
# Copy a local file into the database
db9 fs cp ./data.csv mydb:/data/data.csv
# Interactive file shell
db9 fs sh mydb
# FUSE mount (access db files like a local directory)
db9 fs mount mydb ./mnt

The SDK provides a full file API (fs.read, fs.write, fs.list, fs.stat, etc.) over WebSocket for programmatic access.

File limits: 100 MB per file, 128 MB concurrent read budget, 10,000 files per glob query.
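Before writing large artifacts, an agent can pre-check payloads against the 100 MB per-file limit rather than discover it server-side. A minimal client-side sketch (the limit value comes from above; the helper is illustrative):

```typescript
// Client-side pre-flight against the documented 100 MB per-file limit.
const FS9_MAX_FILE_BYTES = 100 * 1024 * 1024;

function fitsFileLimit(content: string | Uint8Array): boolean {
  const bytes =
    typeof content === 'string'
      ? new TextEncoder().encode(content).length // byte length, not char count
      : content.length;
  return bytes <= FS9_MAX_FILE_BYTES;
}
```

Note the byte-length check for strings: multi-byte UTF-8 content can exceed the limit even when its character count looks safe.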

Guide: Analyze Agent Logs with fs9 (coming soon) · Reference: fs9

Call: external APIs from SQL

Agents often need to call external services — webhooks, LLM APIs, enrichment endpoints. DB9's HTTP extension makes this possible from SQL without leaving the database:

SQL
-- GET request
SELECT status, content FROM http_get('https://api.example.com/status');
-- POST with JSON body
SELECT status, content
FROM http_post(
  'https://hooks.slack.com/services/T.../B.../xxx',
  '{"text": "Agent task complete"}',
  'application/json'
);

Safety boundaries are enforced:

  • HTTPS only (port 443) — no plaintext HTTP by default
  • Private and loopback IPs are blocked (SSRF protection)
  • 100 requests per SQL statement, 20 concurrent per tenant
  • 1 MB max response body, 256 KB max request body
  • 5-second request timeout, 1-second connect timeout
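These rules are enforced server-side, but an agent can mirror the first two checks client-side to fail fast before issuing a request. A rough sketch, as an approximation rather than the server's exact rule set:

```typescript
// Client-side approximation of the HTTPS-only and private-IP rules above.
// The server remains the source of truth; this only fails fast.
function isLikelyAllowedUrl(raw: string): boolean {
  const url = (() => {
    try {
      return new URL(raw);
    } catch {
      return null;
    }
  })();
  if (url === null) return false;
  if (url.protocol !== 'https:') return false; // HTTPS only
  const host = url.hostname;
  if (host === 'localhost' || host === '[::1]') return false; // loopback names
  const octets = host.split('.').map(Number);
  if (octets.length === 4 && octets.every((o) => Number.isInteger(o) && o >= 0 && o <= 255)) {
    const [a, b] = octets;
    if (a === 127 || a === 10) return false;           // loopback, RFC 1918
    if (a === 192 && b === 168) return false;          // RFC 1918
    if (a === 172 && b >= 16 && b <= 31) return false; // RFC 1918
    if (a === 169 && b === 254) return false;          // link-local
  }
  return true;
}
```

A pre-check like this turns a guaranteed server-side rejection into an immediate local error, which is easier for an agent loop to handle.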

Guide: HTTP from SQL (coming soon) · Reference: HTTP Extension

Branch: safe experiments on database copies

Agents can fork a database to try a risky operation, validate the result, and discard the branch if it fails:

Terminal
# Create a branch
db9 branch create mydb --name experiment-v2
# Agent works on the branch using its own connection string...
# If the experiment failed, delete the branch
db9 branch delete <branch-id>
# If it succeeded, promote the result (or just keep using the branch)

Branches are independent database copies created from the parent’s schema at the time of branching. Each branch gets its own connection string, credentials, and isolated storage — writes to the branch do not affect the parent.

The SDK also supports branching programmatically:

TypeScript
const branch = await client.databases.branch(parentId, { name: 'experiment-v2' });
// branch.connectionString → isolated workspace

Common branch patterns:

  • Preview environments — branch per pull request for isolated testing.
  • Schema experiments — try a migration on a branch before applying to production.
  • Task isolation — each agent task gets a branch, merged or discarded on completion.
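The task-isolation pattern can be expressed as a small wrapper around the SDK's branch call shown above. A sketch with the client injected so the control flow stays testable; only databases.branch() and databases.delete() come from this page, the rest of the names are illustrative:

```typescript
// Illustrative wrapper: run work on a branch, discard the branch on failure.
interface Branch { id: string; connectionString: string; }
interface Db9Client {
  databases: {
    branch(parentId: string, opts: { name: string }): Promise<Branch>;
    delete(id: string): Promise<void>;
  };
}

async function runOnBranch<T>(
  client: Db9Client,
  parentId: string,
  name: string,
  work: (branch: Branch) => Promise<T>,
): Promise<T> {
  const branch = await client.databases.branch(parentId, { name });
  try {
    return await work(branch); // success: keep the branch (or promote it)
  } catch (err) {
    await client.databases.delete(branch.id); // failure: discard the branch
    throw err;
  }
}
```

The parent database is never touched; the work callback connects only to `branch.connectionString`.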

Guide: Branching Workflows (coming soon)

Schedule: recurring jobs with pg_cron

Agents can schedule periodic work — cache refreshes, log cleanup, API polling — directly in the database:

SQL
-- Schedule a cleanup job every 6 hours
SELECT cron.schedule(
  'cleanup-old-context',
  '0 */6 * * *',
  $$DELETE FROM memory WHERE created_at < now() - interval '7 days'$$
);
-- Check execution history
SELECT * FROM cron.job_run_details ORDER BY runid DESC LIMIT 5;

The CLI provides full cron management:

Terminal
db9 db cron mydb create '*/30 * * * *' "SELECT * FROM http_get('https://api.example.com/poll')"
db9 db cron mydb list
db9 db cron mydb history --limit 10
db9 db cron mydb status

Jobs run inside the database engine with no external scheduler. You can enable, disable, and delete jobs through SQL or the CLI.

Guide: Scheduled Jobs with pg_cron (coming soon) · Reference: pg_cron

Onboard: install DB9 skills into coding agents

The db9 onboard command installs a DB9 skill file into your AI coding agent's skills directory. Once installed, the agent can use DB9 commands as part of its normal workflow.

Terminal
# Auto-detect installed agents and install
db9 onboard
# Target a specific agent
db9 onboard --agent claude
db9 onboard --agent codex
db9 onboard --agent opencode
# Install for all detected agents
db9 onboard --all
# Preview what would be installed
db9 onboard --dry-run

Supported agents:

| Agent | User scope | Project scope |
| --- | --- | --- |
| Claude Code | ~/.claude/skills/db9/SKILL.md | ./.claude/skills/db9/SKILL.md |
| OpenAI Codex | ~/.codex/skills/db9/SKILL.md | — (user scope only) |
| OpenCode | ~/.config/opencode/skills/db9/SKILL.md | ./.opencode/skills/db9/SKILL.md |
| Generic agents | ~/.agents/skills/db9/SKILL.md | ./.agents/skills/db9/SKILL.md |

Use --scope user, --scope project, or --scope both to control where skills are installed. Skill files include a semver version in their frontmatter — db9 onboard only updates when a newer version is available (use --force to override).
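The version gate described above can be sketched as a plain semver comparison. This is illustrative logic, not the CLI's actual source:

```typescript
// Sketch of the onboard version gate: install only when the available skill
// version is strictly newer than the installed one, unless forced.
function isNewerSemver(available: string, installed: string): boolean {
  const a = available.split('.').map(Number);
  const b = installed.split('.').map(Number);
  for (let i = 0; i < 3; i++) {
    if ((a[i] ?? 0) !== (b[i] ?? 0)) return (a[i] ?? 0) > (b[i] ?? 0);
  }
  return false; // equal versions: nothing to update
}

function shouldInstall(available: string, installed?: string, force = false): boolean {
  if (force || !installed) return true; // --force, or no skill installed yet
  return isNewerSemver(available, installed);
}
```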

Guide: Install DB9 Skills (coming soon) · Claude Code (coming soon) · OpenAI Codex (coming soon)

Compose: everything through one connection

The real power of DB9 for agents is that these primitives compose through SQL. A single agent workflow can:

  1. Create a database with instantDatabase()
  2. Ingest a CSV with SELECT * FROM fs9('/data/input.csv')
  3. Embed and index the content with embedding()
  4. Search semantically with pgvector operators
  5. Call an external API with http_post() to deliver results
  6. Schedule a follow-up job with cron.schedule()
  7. Branch to try an alternative approach

All of this happens inside the database, in SQL, through a standard Postgres connection. No orchestrator, no sidecar, no message queue.
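As a sketch of that composition, the SQL half of the pipeline can run as an ordered list of statements over one connection. The statements are schematic echoes of the examples above (column mappings would need adjusting for a real CSV), and the runner interface is illustrative:

```typescript
// Run the pipeline's SQL steps in order over a single connection.
// Any Postgres driver can satisfy this minimal interface.
interface SqlRunner { exec(sql: string): Promise<void>; }

const pipelineSteps: string[] = [
  "CREATE TABLE IF NOT EXISTS knowledge (id SERIAL PRIMARY KEY, content TEXT, vec vector(1536))",
  "INSERT INTO knowledge (content) SELECT * FROM fs9('/data/input.csv')",
  "UPDATE knowledge SET vec = embedding(content)::vector(1536) WHERE vec IS NULL",
  `SELECT http_post('https://api.example.com/results', '{"status": "done"}', 'application/json')`,
  "SELECT cron.schedule('refresh-vectors', '0 */6 * * *', $$UPDATE knowledge SET vec = embedding(content)::vector(1536) WHERE vec IS NULL$$)",
];

async function runPipeline(db: SqlRunner): Promise<number> {
  for (const sql of pipelineSteps) {
    await db.exec(sql); // each step is plain SQL on the same connection
  }
  return pipelineSteps.length;
}
```

Everything the runner needs is expressible as SQL strings, which is exactly the claim above: no orchestrator, sidecar, or message queue sits between the agent and its backend.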