# Agent Workflows
DB9 gives AI agents a complete backend through the PostgreSQL wire protocol. Instead of stitching together separate services for storage, embeddings, files, HTTP, and scheduling, agents do all of it in SQL.
This page is the starting point for building agent workflows on DB9. It maps common agent tasks to the DB9 capabilities that solve them, shows how those capabilities compose, and links to deeper guides.
## Who should read this

- Agent developers building assistants, copilots, or autonomous agents that need persistent state.
- Platform engineers creating agent infrastructure where each agent, user, or task gets its own database.
- Teams already using DB9 who want to move beyond basic storage into full agent automation.
If you’re still deciding whether DB9 is the right fit, start with Why DB9 for AI Agents.
## The agent lifecycle in DB9

Most agent workflows follow a predictable lifecycle. DB9 has a built-in primitive for each stage:
| Agent lifecycle stage | DB9 primitive | How to access |
|---|---|---|
| Provision a workspace | `instantDatabase()` or `db9 create` | SDK, CLI |
| Store structured state | Standard SQL tables | Any Postgres client |
| Search semantically | `embedding()` + pgvector operators | SQL |
| Ingest files and artifacts | fs9 functions and table source | SQL, CLI, SDK |
| Call external APIs | `http_get()`, `http_post()`, and friends | SQL |
| Branch for safe experiments | `db9 branch create` | CLI, SDK |
| Schedule recurring work | pg_cron | SQL, CLI |
| Onboard agent tooling | `db9 onboard` | CLI |
Each primitive is accessible through standard SQL or the PostgreSQL wire protocol. Agents don’t need a proprietary SDK to use them — any Postgres driver works.
## Provision: one database per agent, user, or task

Agent systems typically need to create databases on demand. DB9 provisions a database in under a second, with no signup required for anonymous use.
From the TypeScript SDK:
```typescript
import { instantDatabase } from 'get-db9';

const db = await instantDatabase({
  name: 'agent-session-42',
  seed: `
    CREATE TABLE memory (id SERIAL, key TEXT, value JSONB);
    CREATE TABLE artifacts (id SERIAL, path TEXT, content TEXT);
  `,
});
// db.connectionString → ready for any Postgres client
```

`instantDatabase()` is idempotent — if a database with that name already exists, it returns the existing one. This makes agent restarts safe.
From the CLI:

```shell
db9 create --name agent-session-42
```

For fleet operations, the SDK client also exposes `databases.create()`, `databases.list()`, `databases.delete()`, and `databases.credentials()` for full programmatic lifecycle management.
Common patterns:
- Database-per-agent — each agent instance gets its own database for complete isolation.
- Database-per-user — multi-tenant applications give each end user a dedicated database.
- Database-per-task — disposable databases for one-shot jobs, deleted when the task completes.
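These patterns pair naturally with the idempotency of `instantDatabase()`: if the database name is derived deterministically from the agent or task identity, a restarted agent reconnects to the same workspace instead of creating a duplicate. A minimal sketch of such a naming scheme (the convention and the 63-character cap, chosen to match the usual Postgres identifier limit, are assumptions, not DB9 requirements):

```typescript
// Hypothetical helper: derive a stable, sanitized database name from agent identity.
// Deterministic names plus idempotent creation make restarts safe.
function sessionDbName(agentId: string, sessionId: string): string {
  return `agent-${agentId}-${sessionId}`
    .toLowerCase()
    .replace(/[^a-z0-9-]/g, '-') // keep names identifier-safe
    .slice(0, 63);               // conservative cap (Postgres identifier limit)
}

const name = sessionDbName('Researcher_01', 'sess 42');
// e.g. pass `name` to instantDatabase({ name, ... })
```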
→ Deeper guide: Provisioning
## Store: standard SQL for structured state

Agents store context, conversation history, tool outputs, and intermediate results in regular Postgres tables. DB9 supports the full range of SQL data types, DDL, DML, transactions, and indexes.
```sql
-- Agent stores a tool result
INSERT INTO memory (key, value)
VALUES ('search_result', '{"query": "quarterly revenue", "hits": 14}');

-- Agent retrieves its context
SELECT key, value FROM memory ORDER BY id DESC LIMIT 10;
```

Because DB9 speaks the PostgreSQL wire protocol, agents can use any Postgres-compatible ORM or driver: Prisma, Drizzle, SQLAlchemy, TypeORM, or raw `pg` connections.
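Since the `value` column is JSONB, agents typically serialize structured tool output before inserting it. A small illustrative helper (the row shape is an assumption; any JSON-serializable payload works):

```typescript
// Hypothetical helper: prepare a tool result for the memory table's JSONB column.
function toMemoryRow(key: string, value: unknown): { key: string; value: string } {
  // Postgres casts the JSON string to jsonb on insert
  return { key, value: JSON.stringify(value) };
}

const row = toMemoryRow('search_result', { query: 'quarterly revenue', hits: 14 });
// then: INSERT INTO memory (key, value) VALUES ($1, $2::jsonb) with [row.key, row.value]
```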
## Search: built-in embeddings and vector queries

DB9 includes a server-side `embedding()` function that generates vectors without a separate embedding service. Combined with pgvector-compatible operators and HNSW indexes, agents can build semantic search in pure SQL.
```sql
-- Enable the extension (once per database)
CREATE EXTENSION IF NOT EXISTS embedding;

-- Store content with its embedding
INSERT INTO knowledge (content, vec)
VALUES (
  'DB9 supports branching for safe experiments',
  embedding('DB9 supports branching for safe experiments')::vector(1536)
);

-- Semantic search
SELECT content,
       vec <-> embedding('how do I test safely?')::vector(1536) AS distance
FROM knowledge
ORDER BY distance
LIMIT 5;
```

The `embedding()` function accepts an optional model name and dimensions parameter:

```sql
-- Use a specific model
SELECT embedding('hello world', 'bedrock/amazon-titan-v2', 1024);
```

Embeddings are generated server-side and cached within each statement execution. The per-statement limit is 100 embedding calls (configurable via the `embedding.max_calls` session parameter).
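Because of the 100-call per-statement budget, long documents are usually chunked client-side and inserted in batches of at most 100 chunks per statement. A naive fixed-window chunker as a sketch (the sizes, overlap, and batching strategy are illustrative assumptions; nothing on this page says DB9 chunks for you):

```typescript
// Naive fixed-size chunker with overlap, so each chunk fits one embedding() call.
function chunkText(text: string, size = 800, overlap = 200): string[] {
  const step = size - overlap;
  const chunks: string[] = [];
  for (let i = 0; i < text.length; i += step) {
    chunks.push(text.slice(i, i + size));
    if (i + size >= text.length) break; // last window reached the end
  }
  return chunks;
}

// Group chunks so each INSERT stays within the 100-embedding statement limit.
function batchesOf<T>(items: T[], n = 100): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += n) out.push(items.slice(i, i + n));
  return out;
}
```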
→ Guide: RAG with Built-in Embeddings (coming soon) · Reference: Vector Search
## Ingest: query files from SQL with fs9

Agents produce and consume files — logs, CSVs, JSON exports, Parquet snapshots. fs9 makes these queryable without loading them into tables first.
Read files as tables:
```sql
-- Query a CSV directly
SELECT * FROM fs9('/data/results.csv');

-- Query Parquet files with a glob pattern
SELECT * FROM fs9('/exports/*.parquet');
```

Manage files from SQL:

```sql
-- Write a result file
SELECT fs9_write('/output/summary.json', '{"status": "complete", "rows": 1024}');

-- Read a file
SELECT fs9_read('/output/summary.json');

-- List directory contents
SELECT * FROM fs9('/output/', recursive := true);
```

Manage files from the CLI or SDK:

```shell
# Copy a local file into the database
db9 fs cp ./data.csv mydb:/data/data.csv

# Interactive file shell
db9 fs sh mydb

# FUSE mount (access db files like a local directory)
db9 fs mount mydb ./mnt
```

The SDK provides a full file API (`fs.read`, `fs.write`, `fs.list`, `fs.stat`, etc.) over WebSocket for programmatic access.

File limits: 100 MB per file, 128 MB concurrent read budget, 10,000 files per glob query.
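When an agent fans out over many files, the 128 MB concurrent read budget is the limit most likely to bite. One way to stay under it is to group files into sequential batches client-side before querying; a sketch under that assumption (the greedy grouping strategy is illustrative, not something fs9 requires):

```typescript
// Greedy batching: each batch's total size stays within the concurrent read budget.
const READ_BUDGET_BYTES = 128 * 1024 * 1024;

function batchByBudget(
  files: { path: string; bytes: number }[],
  budget: number = READ_BUDGET_BYTES,
): { path: string; bytes: number }[][] {
  const batches: { path: string; bytes: number }[][] = [];
  let current: { path: string; bytes: number }[] = [];
  let used = 0;
  for (const f of files) {
    // Start a new batch when adding this file would exceed the budget.
    if (current.length > 0 && used + f.bytes > budget) {
      batches.push(current);
      current = [];
      used = 0;
    }
    current.push(f);
    used += f.bytes;
  }
  if (current.length > 0) batches.push(current);
  return batches;
}
```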
→ Guide: Analyze Agent Logs with fs9 (coming soon) · Reference: fs9
## Call: HTTP requests from SQL

Agents often need to call external services — webhooks, LLM APIs, enrichment endpoints. DB9's HTTP extension makes this possible from SQL without leaving the database:
```sql
-- GET request
SELECT status, content FROM http_get('https://api.example.com/status');

-- POST with JSON body
SELECT status, content
FROM http_post(
  'https://hooks.slack.com/services/T.../B.../xxx',
  '{"text": "Agent task complete"}',
  'application/json'
);
```

Safety boundaries are enforced:
- HTTPS only (port 443) — no plaintext HTTP by default
- Private and loopback IPs are blocked (SSRF protection)
- 100 requests per SQL statement, 20 concurrent per tenant
- 1 MB max response body, 256 KB max request body
- 5-second request timeout, 1-second connect timeout
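The SSRF protection means requests that resolve to private or loopback addresses fail server-side. If an agent wants to fail fast client-side before spending one of its 100 per-statement requests, a rough IPv4 literal check looks like this (illustrative only; DB9's actual block list may differ, and a real check must also handle hostnames, IPv6, and DNS rebinding):

```typescript
// Rough client-side mirror of an SSRF block list, for IPv4 literals only.
function isBlockedIPv4(ip: string): boolean {
  const parts = ip.split('.').map(Number);
  if (parts.length !== 4 || parts.some((p) => Number.isNaN(p) || p < 0 || p > 255)) {
    return true; // not a valid IPv4 literal: treat as blocked
  }
  const [a, b] = parts;
  return (
    a === 10 ||                          // 10.0.0.0/8
    a === 127 ||                         // loopback
    (a === 172 && b >= 16 && b <= 31) || // 172.16.0.0/12
    (a === 192 && b === 168) ||          // 192.168.0.0/16
    (a === 169 && b === 254)             // link-local
  );
}
```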
→ Guide: HTTP from SQL (coming soon) · Reference: HTTP Extension
## Branch: safe experiments and rollback

Agents can fork a database to try a risky operation, validate the result, and discard the branch if it fails:
```shell
# Create a branch
db9 branch create mydb --name experiment-v2

# Agent works on the branch using its own connection string...

# If the experiment failed, delete the branch
db9 branch delete <branch-id>

# If it succeeded, promote the result (or just keep using the branch)
```

Branches are independent database copies created from the parent's schema at the time of branching. Each branch gets its own connection string, credentials, and isolated storage — writes to the branch do not affect the parent.

The SDK also supports branching programmatically:

```typescript
const branch = await client.databases.branch(parentId, { name: 'experiment-v2' });
// branch.connectionString → isolated workspace
```

Common branch patterns:
- Preview environments — branch per pull request for isolated testing.
- Schema experiments — try a migration on a branch before applying to production.
- Task isolation — each agent task gets a branch, merged or discarded on completion.
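For the preview-environment pattern, deriving the branch name deterministically from the pull request keeps branch creation repeatable from CI. A hypothetical naming helper (the convention is an assumption, not a DB9 requirement):

```typescript
// Hypothetical convention: one branch per pull request, named from repo + PR number.
function previewBranchName(repo: string, prNumber: number): string {
  const slug = repo.toLowerCase().replace(/[^a-z0-9-]/g, '-');
  return `${slug}-pr-${prNumber}`;
}

const name = previewBranchName('My_App', 128);
// e.g. db9 branch create mydb --name my-app-pr-128
```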
→ Guide: Branching Workflows (coming soon)
## Schedule: recurring jobs with pg_cron

Agents can schedule periodic work — cache refreshes, log cleanup, periodic API polling — directly in the database:
```sql
-- Schedule a cleanup job every 6 hours
SELECT cron.schedule(
  'cleanup-old-context',
  '0 */6 * * *',
  $$DELETE FROM memory WHERE created_at < now() - interval '7 days'$$
);

-- Check execution history
SELECT * FROM cron.job_run_details ORDER BY runid DESC LIMIT 5;
```

The CLI provides full cron management:

```shell
db9 db cron mydb create '*/30 * * * *' "SELECT * FROM http_get('https://api.example.com/poll')"
db9 db cron mydb list
db9 db cron mydb history --limit 10
db9 db cron mydb status
```

Jobs run inside the database engine with no external scheduler. You can enable, disable, and delete jobs through SQL or the CLI.
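Reading cron expressions trips up many agents and humans alike: `0 */6 * * *` means minute 0 of every hour divisible by 6. A tiny matcher for the field syntax used above, as a sketch (a simplification; real cron fields also support ranges and combined forms):

```typescript
// Simplified matcher for a single cron field: "*", "*/n", or comma-separated values.
function fieldMatches(field: string, value: number): boolean {
  if (field === '*') return true;
  const step = field.match(/^\*\/(\d+)$/);
  if (step) return value % Number(step[1]) === 0;
  return field.split(',').some((v) => Number(v) === value);
}

// With '0 */6 * * *', the hour field '*/6' matches hours 0, 6, 12, 18,
// and the minute field '0' matches only minute 0.
```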
→ Guide: Scheduled Jobs with pg_cron (coming soon) · Reference: pg_cron
## Onboard: install DB9 as an agent skill

The `db9 onboard` command installs a DB9 skill file into your AI coding agent's skills directory. Once installed, the agent can use DB9 commands as part of its normal workflow.
```shell
# Auto-detect installed agents and install
db9 onboard

# Target a specific agent
db9 onboard --agent claude
db9 onboard --agent codex
db9 onboard --agent opencode

# Install for all detected agents
db9 onboard --all

# Preview what would be installed
db9 onboard --dry-run
```

Supported agents:
| Agent | User scope | Project scope |
|---|---|---|
| Claude Code | `~/.claude/skills/db9/SKILL.md` | `./.claude/skills/db9/SKILL.md` |
| OpenAI Codex | `~/.codex/skills/db9/SKILL.md` | — (user scope only) |
| OpenCode | `~/.config/opencode/skills/db9/SKILL.md` | `./.opencode/skills/db9/SKILL.md` |
| Generic agents | `~/.agents/skills/db9/SKILL.md` | `./.agents/skills/db9/SKILL.md` |
Use `--scope user`, `--scope project`, or `--scope both` to control where skills are installed.

Skill files include a semver version in their frontmatter — `db9 onboard` only updates when a newer version is available (use `--force` to override).
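The version check amounts to plain semver precedence on the frontmatter version. A sketch of that comparison (an assumption about how the decision works; pre-release tags are ignored here):

```typescript
// Compare two "major.minor.patch" versions; true if `candidate` is strictly newer.
function isNewer(candidate: string, installed: string): boolean {
  const a = candidate.split('.').map(Number);
  const b = installed.split('.').map(Number);
  for (let i = 0; i < 3; i++) {
    const ai = a[i] ?? 0;
    const bi = b[i] ?? 0;
    if (ai !== bi) return ai > bi;
  }
  return false; // equal versions: skip the update (use --force to override)
}
```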
→ Guide: Install DB9 Skills (coming soon) · Claude Code (coming soon) · OpenAI Codex (coming soon)
## Composing capabilities

The real power of DB9 for agents is that these primitives compose through SQL. A single agent workflow can:
- Create a database with `instantDatabase()`
- Ingest a CSV with `SELECT * FROM fs9('/data/input.csv')`
- Embed and index the content with `embedding()`
- Search semantically with pgvector operators
- Call an external API with `http_post()` to deliver results
- Schedule a follow-up job with `cron.schedule()`
- Branch to try an alternative approach
All of this happens inside the database, in SQL, through a standard Postgres connection. No orchestrator, no sidecar, no message queue.
## Next steps

- DB9 with Claude Code — install the DB9 skill and start using databases from Claude Code
- DB9 with OpenAI Codex — install the DB9 skill for OpenAI Codex CLI
- Quick Start — create your first database in under a minute
- Connect — connection strings, drivers, and authentication
- Why DB9 for AI Agents — the positioning case for using DB9 in agent systems
- CLI Reference — full command reference for `db9 create`, `db9 onboard`, `db9 branch`, and more
- TypeScript SDK — `instantDatabase()`, client API, and programmatic database management
- RAG with Built-in Embeddings — build a retrieval pipeline using DB9-native embeddings
- Branching Workflows — preview environments, task isolation, and safe migrations using branches
- Extensions — deep dives into fs9, HTTP, embeddings, vector search, and pg_cron