# Why DB9 for AI Agents
AI agents need more than a place to store rows. They need to provision databases on demand, search semantically, ingest files, call APIs, branch safely, and schedule follow-up work — ideally without leaving SQL.
DB9 is built for this. Every capability an agent needs ships inside the database server, accessible through the PostgreSQL wire protocol. There is no sidecar, no orchestrator, and no glue service to maintain.
## Who should read this

- Agent developers evaluating where to store agent state, context, and artifacts.
- Platform engineers building multi-agent or multi-tenant systems that need disposable, programmable databases.
- Teams comparing DB9 to Neon, Supabase, or managed Postgres for AI-heavy workloads.
If you already know DB9 is a fit, skip ahead to the Quick Start or the Agent Workflows guide.
## The agent-database problem

Most databases were designed for long-lived applications with human operators. AI agents break that model:
| Agents need | Traditional databases offer |
|---|---|
| Create a database in milliseconds | Minutes of provisioning and config |
| Embed and search text in one query | Separate embedding service + vector DB |
| Read CSV, JSON, or Parquet from SQL | ETL pipeline or external loader |
| Call an API from a query | Application-layer HTTP code |
| Fork the database to try something | Full backup and restore |
| Schedule a cleanup job | External cron or task queue |
DB9 closes every gap in that table with compiled-in extensions, not external services.
## What DB9 gives agents

### Instant provisioning
Section titled “Instant provisioning”An agent can create a database in under a second with a single CLI command or SDK call. No signup is required — anonymous databases work immediately for prototyping.
```sh
db9 create --name agent-workspace
```

```ts
import { instantDatabase } from 'get-db9';

const db = await instantDatabase({
  name: 'agent-workspace',
  seed: 'CREATE TABLE context (id SERIAL, key TEXT, value JSONB)',
});
// db.connectionString is ready to use
```

The SDK’s `instantDatabase()` checks for an existing database by name and reuses it, or creates one if it doesn’t exist. This makes agent restarts idempotent.
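The get-or-create behavior behind `instantDatabase()` can be sketched as follows. This is a simulation against an in-memory registry, not the real SDK; `createDatabase` and the connection-string format are hypothetical stand-ins for the DB9 control plane:

```typescript
// Simulated registry standing in for the DB9 control plane (hypothetical).
const registry = new Map<string, { connectionString: string }>();

let counter = 0;
function createDatabase(name: string): { connectionString: string } {
  // The real service would provision a fresh database here.
  return { connectionString: `postgres://db9.example/${name}-${++counter}` };
}

// Get-or-create: reuse an existing database by name, otherwise make one.
function getOrCreate(name: string): { connectionString: string } {
  const existing = registry.get(name);
  if (existing) return existing; // agent restart: the same database comes back
  const db = createDatabase(name);
  registry.set(name, db);
  return db;
}

const first = getOrCreate('agent-workspace');
const second = getOrCreate('agent-workspace'); // idempotent on restart
console.log(first.connectionString === second.connectionString); // true
```

Because the lookup is by name, an agent can crash and rerun its startup code without leaking databases.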
### Built-in embeddings and vector search

DB9 includes a built-in `embedding()` function that calls an embedding provider (OpenAI or AWS Bedrock) and returns a vector — no separate embedding microservice needed.
Enable it once per database with `CREATE EXTENSION embedding`, then combine it with pgvector-compatible operators and HNSW indexes to build semantic search in pure SQL:
```sql
-- Store a document with its embedding
INSERT INTO docs (content, vec)
VALUES ('deployment guide', embedding('deployment guide')::vector);

-- Semantic search
SELECT content
FROM docs
ORDER BY vec <-> embedding('how do I deploy?')::vector
LIMIT 5;
```

Embeddings are generated server-side, cached, and subject to per-tenant concurrency limits (5 concurrent requests by default). Agents don’t need to manage an embedding API client — the database handles it.
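The `<->` operator in that query orders rows by Euclidean (L2) distance between vectors. As a plain TypeScript illustration of the ranking it produces (toy 2-d vectors standing in for real embeddings, not DB9 code):

```typescript
// Euclidean (L2) distance: the metric behind pgvector's <-> operator.
function l2(a: number[], b: number[]): number {
  return Math.sqrt(a.reduce((sum, ai, i) => sum + (ai - b[i]) ** 2, 0));
}

// Toy 2-d "embeddings"; real model output has hundreds of dimensions.
const docs = [
  { content: 'deployment guide', vec: [0.9, 0.1] },
  { content: 'billing FAQ', vec: [0.1, 0.9] },
];
const query = [0.8, 0.2]; // pretend this is embedding('how do I deploy?')

// ORDER BY vec <-> query LIMIT 1, done by hand:
const nearest = [...docs].sort((a, b) => l2(a.vec, query) - l2(b.vec, query))[0];
console.log(nearest.content); // 'deployment guide'
```

An HNSW index makes the same ordering approximate but fast at scale, instead of scanning every row.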
### fs9 — query files from SQL

Agents produce and consume files: logs, CSVs, JSON exports, Parquet snapshots. DB9’s fs9 extension exposes a queryable file system inside the database:
```sql
-- Read a CSV as a table
SELECT * FROM fs9('/data/results.csv');

-- Write a file
SELECT fs9_write('/data/output.json', '{"status": "complete"}');

-- Check if a file exists
SELECT fs9_exists('/data/results.csv');
```

fs9 supports CSV, JSON Lines, and Parquet with automatic schema inference.
Files can also be managed through the CLI (`db9 fs cp`, `db9 fs sh`) or a FUSE mount.
Individual files are limited to 100 MB, with a 128 MB per-operation read budget.
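Schema inference generally works by sampling each column and picking the narrowest type that fits every value. A simplified sketch of that idea in TypeScript — these are illustrative rules, not fs9’s actual inference logic:

```typescript
type SqlType = 'BIGINT' | 'DOUBLE PRECISION' | 'TEXT';

// Pick the narrowest SQL type that fits every sampled value in a column.
function inferType(values: string[]): SqlType {
  if (values.every((v) => /^-?\d+$/.test(v))) return 'BIGINT';
  if (values.every((v) => v !== '' && !Number.isNaN(Number(v)))) return 'DOUBLE PRECISION';
  return 'TEXT';
}

// Infer a column -> type mapping from a header row plus data rows.
function inferSchema(csv: string): Record<string, SqlType> {
  const [header, ...rows] = csv.trim().split('\n').map((line) => line.split(','));
  return Object.fromEntries(
    header.map((col, i): [string, SqlType] => [col, inferType(rows.map((r) => r[i]))]),
  );
}

const schema = inferSchema('id,score,label\n1,0.92,ok\n2,0.15,fail');
console.log(schema); // { id: 'BIGINT', score: 'DOUBLE PRECISION', label: 'TEXT' }
```

The same sampling approach extends naturally to JSON Lines and Parquet, where Parquet already carries its own schema.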
### HTTP from SQL

Agents often need to call external services — webhooks, LLM APIs, enrichment endpoints. DB9 lets them do it from SQL:
```sql
SELECT status, content
FROM http_get(
  'https://api.example.com/enrich',
  '[{"field":"Authorization","value":"Bearer sk-..."}]'::jsonb
);

SELECT content
FROM http_post(
  'https://hooks.slack.com/services/...',
  '{"text": "Task complete"}',
  'application/json'
);
```

Safety boundaries are enforced by default:
- HTTPS only (no plaintext HTTP)
- Private/loopback IPs are blocked (SSRF protection)
- 100 requests per statement, 20 concurrent per tenant
- 1 MB max response, 256 KB max request body
- 5-second request timeout
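The private/loopback block is a standard SSRF guard: resolve the target host and refuse addresses in loopback, private, or link-local ranges. A simplified IPv4-only sketch of that check — not DB9’s actual implementation, which would also handle DNS resolution and IPv6:

```typescript
// Reject IPv4 addresses in loopback, RFC 1918 private, or link-local
// ranges: the core of a basic SSRF guard.
function isBlockedIp(ip: string): boolean {
  const parts = ip.split('.').map(Number);
  if (parts.length !== 4 || parts.some((p) => Number.isNaN(p) || p < 0 || p > 255)) {
    return true; // not a valid IPv4 address: fail closed
  }
  const [a, b] = parts;
  if (a === 127) return true;                       // 127.0.0.0/8 loopback
  if (a === 10) return true;                        // 10.0.0.0/8 private
  if (a === 172 && b >= 16 && b <= 31) return true; // 172.16.0.0/12 private
  if (a === 192 && b === 168) return true;          // 192.168.0.0/16 private
  if (a === 169 && b === 254) return true;          // 169.254.0.0/16 link-local
  return false;
}

console.log(isBlockedIp('127.0.0.1'));     // true
console.log(isBlockedIp('10.1.2.3'));      // true
console.log(isBlockedIp('93.184.216.34')); // false
```

Failing closed on unparsable input matters here: an attacker-controlled hostname should never fall through to "allowed" by accident.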
### Database branching

Agents can fork a database to try a risky operation, then discard the branch if it fails:
```sh
db9 branch create myapp --name experiment
# Agent works on the branch...
db9 branch delete experiment
```

Branches are lightweight copies that share history with the parent and diverge on write. Use cases include preview environments, schema experiments, and rollback points.
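The share-with-parent, diverge-on-write semantics can be pictured as a copy-on-write view. A conceptual model in TypeScript — this is not how DB9 stores pages, just the behavior a branch exposes:

```typescript
// A copy-on-write view over a parent store: reads fall through to the
// parent until a key is written locally; writes never touch the parent.
class Branch {
  private local = new Map<string, string>();
  constructor(private parent: Map<string, string>) {}

  get(key: string): string | undefined {
    return this.local.has(key) ? this.local.get(key) : this.parent.get(key);
  }

  set(key: string, value: string): void {
    this.local.set(key, value); // diverge on write
  }
}

const main = new Map([['schema', 'v1']]);
const experiment = new Branch(main);

experiment.set('schema', 'v2');        // risky change happens on the branch
console.log(experiment.get('schema')); // 'v2'
console.log(main.get('schema'));       // 'v1' (parent untouched)
// Discarding the branch is just dropping the object.
```

Because unwritten keys read through to the parent, creating a branch is cheap regardless of database size.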
### Scheduled jobs with pg_cron

Agents can schedule recurring work — cache refreshes, log cleanup, periodic API calls — directly in SQL:
```sql
SELECT cron.schedule(
  'cleanup-old-context',
  '0 */6 * * *',
  $$DELETE FROM context WHERE created_at < now() - interval '7 days'$$
);

-- Check job history
SELECT * FROM cron.job_run_details ORDER BY runid DESC LIMIT 5;
```

Jobs run inside the database with no external scheduler.
The CLI also provides `db9 db <DB> cron list`, `db9 db <DB> cron create`, `db9 db <DB> cron history`, and `db9 db <DB> cron status` commands for managing jobs outside SQL.
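The schedule string `'0 */6 * * *'` used above means minute 0 of every sixth hour (00:00, 06:00, 12:00, 18:00). To make the firing times concrete, here is a minimal matcher for just that expression shape — not pg_cron’s parser, which handles the full cron syntax:

```typescript
// Does a timestamp match the cron expression '0 */6 * * *'?
// (minute 0, every 6th hour, any day) Handles only this one shape.
function matchesEverySixHours(d: Date): boolean {
  return d.getUTCMinutes() === 0 && d.getUTCHours() % 6 === 0;
}

console.log(matchesEverySixHours(new Date('2024-01-01T06:00:00Z'))); // true
console.log(matchesEverySixHours(new Date('2024-01-01T07:00:00Z'))); // false
console.log(matchesEverySixHours(new Date('2024-01-01T06:30:00Z'))); // false
```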
### One-command agent onboarding

DB9 installs itself as a skill for AI coding agents with a single command:
```sh
db9 onboard --agent claude    # Claude Code
db9 onboard --agent codex     # OpenAI Codex
db9 onboard --agent opencode  # OpenCode
db9 onboard --agent agents    # Generic .agents directory
```

Skills can be installed at user scope (`~/.claude/skills/db9/`) or project scope (`./.claude/skills/db9/`), and `--dry-run` previews changes before writing anything.
Once installed, the agent can use DB9 commands as part of its normal workflow.
## Everything through standard PostgreSQL

Every capability listed above is accessible through the PostgreSQL wire protocol. This means:
- **Any Postgres client works.** psql, pgAdmin, DBeaver, language drivers — they all connect to DB9 without adapters.
- **ORMs work.** Prisma, Drizzle, TypeORM, Sequelize, Knex, SQLAlchemy, and GORM connect to DB9 as a standard Postgres backend.
- **Agents reason in SQL.** LLMs already know SQL. There’s no proprietary query language or SDK to learn.
## When to choose DB9 for agents

- Your agents need to create, use, and discard databases as part of their workflow.
- You want embeddings, file access, HTTP calls, and scheduling in the database layer, not as separate services.
- You’re building multi-agent systems where each agent (or task, or user) gets its own isolated database.
- You want agents to operate in pure SQL rather than through proprietary APIs.
- You need branching for safe experimentation or preview environments.
## When DB9 may not be the best fit

- Your agents only need a key-value store or document database — a simpler tool may be enough.
- You need extensions that DB9 doesn’t support yet — check the Extensions and SQL limits pages first.
- You need a full application platform with auth UI, file storage CDN, and edge functions (consider Supabase).
- Your workload requires on-premises or self-hosted deployment.
## Next steps

- Agent Workflows — the practical guide to building agent systems on DB9
- Quick Start — create your first database and run a query in under a minute
- Overview — understand DB9’s architecture and positioning
- CLI Reference — full command reference including `db9 db create`, `db9 onboard`, and `db9 cron`
- TypeScript SDK — `instantDatabase()` and programmatic database management
- Extensions — deep dives into fs9, HTTP, embeddings, vector search, and pg_cron