Migrate from PostgreSQL
This guide covers migrating from any PostgreSQL installation — self-hosted, AWS RDS, Google Cloud SQL, Azure Database for PostgreSQL, or other managed services — to DB9. The process uses standard pg_dump for export and the DB9 CLI or psql for import.
For platform-specific guides, see Migrate from Neon or Migrate from Supabase.
Before You Start: Compatibility Check
DB9 supports most PostgreSQL workloads, but some features are not available. Run these checks against your existing database before migrating.
Quick compatibility scan
Connect to your source database and check for unsupported features:
```sql
-- Table partitioning (not supported)
SELECT count(*) AS partitioned_tables FROM pg_partitioned_table;

-- Table inheritance (not supported)
SELECT count(*) AS inherited_tables FROM pg_inherits;

-- Row-level security policies (not supported)
SELECT count(*) AS rls_policies FROM pg_policies;

-- Foreign data wrappers (not supported)
SELECT count(*) AS fdw_servers FROM pg_foreign_server;

-- Logical replication (not supported)
SELECT count(*) AS publications FROM pg_publication;

-- Advisory locks in use (not supported)
SELECT count(*) AS advisory_locks FROM pg_locks WHERE locktype = 'advisory';
```

If any of these return non-zero counts, review whether your application depends on them. DB9 does not support these features — you will need to refactor or remove them before migrating.
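If you run the queries above from a script, the results can be aggregated into a simple go/no-go report. This is a hypothetical helper, not part of the DB9 CLI; the check names mirror the column aliases in the queries above, and any non-zero count is treated as a blocker:

```python
# Sketch: given the counts returned by the compatibility queries above,
# list the checks that reported usage of an unsupported feature.
def find_blockers(counts: dict[str, int]) -> list[str]:
    """Return the names of checks that reported non-zero usage."""
    return [name for name, count in counts.items() if count > 0]

counts = {
    "partitioned_tables": 0,
    "inherited_tables": 2,
    "rls_policies": 0,
    "fdw_servers": 0,
    "publications": 1,
    "advisory_locks": 0,
}
print(find_blockers(counts))  # → ['inherited_tables', 'publications']
```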
Extension check
```sql
SELECT extname
FROM pg_extension
WHERE extname NOT IN ('plpgsql', 'uuid-ossp', 'hstore', 'vector')
ORDER BY extname;
```

DB9 supports 9 built-in extensions: http, uuid-ossp, hstore, fs9, pg_cron, parquet, zhparser, vector, embedding. Extensions not in this list (PostGIS, pg_trgm, pgcrypto, ltree, etc.) are not available.
Note: gen_random_uuid() works in DB9 without any extension — no need for pgcrypto.
PL/pgSQL check
DB9 supports basic PL/pgSQL: variable declarations, IF/ELSIF, FOR loops, PERFORM, RAISE, and RETURN. It does not support:

- WHILE loops
- EXECUTE (dynamic SQL)
- Exception handling (BEGIN ... EXCEPTION)
- Cursor operations
```sql
-- Find functions that may use unsupported PL/pgSQL features
SELECT proname, prosrc
FROM pg_proc
WHERE prolang = (SELECT oid FROM pg_language WHERE lanname = 'plpgsql')
  AND (prosrc ILIKE '%WHILE%'
    OR prosrc ILIKE '%EXECUTE%'
    OR prosrc ILIKE '%EXCEPTION%');
```

Review any matches and rewrite them before migrating.
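The same scan can be done client-side on function bodies you have already dumped. This is a hedged sketch, not a DB9 tool: it uses word-boundary matching (slightly stricter than the substring ILIKE in the query above) against the keyword list from this guide:

```python
import re

# Sketch: flag PL/pgSQL source that uses features DB9 does not support.
# The keyword list comes from this guide; matching is case-insensitive.
UNSUPPORTED = ("WHILE", "EXECUTE", "EXCEPTION")

def flag_plpgsql(prosrc: str) -> list[str]:
    """Return unsupported keywords found in a function body."""
    return [kw for kw in UNSUPPORTED
            if re.search(rf"\b{kw}\b", prosrc, re.IGNORECASE)]

body = """
BEGIN
  WHILE n > 0 LOOP
    n := n - 1;
  END LOOP;
  RETURN n;
END;
"""
print(flag_plpgsql(body))  # → ['WHILE']
```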
What Changes
| Area | Standard PostgreSQL | DB9 |
|---|---|---|
| Connection string | postgresql://user:pass@host:5432/dbname | postgresql://tenant.role@pg.db9.io:5433/postgres |
| Port | 5432 (default) | 5433 |
| Database name | Custom | Always postgres |
| Username | Standard roles | tenant_id.role format (e.g., a1b2c3d4e5f6.admin) |
| Transaction isolation | SERIALIZABLE fully enforced | SERIALIZABLE accepted but runs as REPEATABLE READ |
| Connection pooling | External (PgBouncer, pgpool) | Application-side pooling |
| Replication | Logical and streaming | Not supported |
| LISTEN/NOTIFY | Supported | Not supported |
| Extensions | Community ecosystem | 9 built-in only |
| Indexes | All types fully functional | B-tree and HNSW full; GIN/GiST/Hash accepted but fall back to table scan |
See the Compatibility Matrix for the complete list.
What Stays the Same
- SQL — DML (SELECT, INSERT, UPDATE, DELETE, UPSERT), DDL (CREATE TABLE, ALTER, DROP), JOINs, CTEs, window functions, subqueries, and RETURNING all work without changes.
- Data types — TEXT, INTEGER, BIGINT, BOOLEAN, TIMESTAMPTZ, UUID, JSONB, arrays, FLOAT8, NUMERIC, BYTEA, and vectors.
- Wire protocol — pgwire v3 (Simple Query, Extended Query, COPY, prepared statements). Any PostgreSQL driver works.
- ORMs — Prisma, Drizzle, SQLAlchemy, TypeORM, Sequelize, Knex, and GORM are tested at 98-100% compatibility.
Prerequisites
- Access to your source PostgreSQL database
- `pg_dump` installed locally (should match or be close to your source PostgreSQL version)
- DB9 CLI installed: `curl -fsSL https://get.db9.io | sh`
- A DB9 account: run `db9 create --name my-app` to create your target database
Step 1: Export from PostgreSQL
Schema and data (plain SQL)

```shell
pg_dump --no-owner --no-privileges --no-comments \
  "postgresql://user:pass@your-host:5432/your_database" \
  > export.sql
```

Schema only

```shell
pg_dump --schema-only --no-owner --no-privileges \
  "postgresql://user:pass@your-host:5432/your_database" \
  > schema.sql
```

Specific tables

```shell
pg_dump --no-owner --no-privileges -t users -t orders -t products \
  "postgresql://user:pass@your-host:5432/your_database" \
  > tables.sql
```

Use plain SQL format (the default). DB9 does not support pg_restore with the custom (-Fc) or directory (-Fd) formats — import via SQL text only.
Flags explained:
- `--no-owner` — omits `ALTER ... OWNER TO` statements that reference source-specific roles
- `--no-privileges` — omits `GRANT`/`REVOKE` statements
- `--no-comments` — omits `COMMENT ON` statements
Locale and encoding settings in the pg_dump output (like SET client_encoding) are accepted and safely ignored by DB9, which operates in UTF-8 only.
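If you script the export, the flag set above can be assembled once and reused. A minimal sketch, assuming `pg_dump` is on your PATH; the function name is hypothetical:

```python
# Sketch: build the pg_dump invocation from Step 1 programmatically.
def dump_command(source_url: str, schema_only: bool = False) -> list[str]:
    """Return the pg_dump argv with the flags recommended in this guide."""
    cmd = ["pg_dump", "--no-owner", "--no-privileges", "--no-comments"]
    if schema_only:
        cmd.append("--schema-only")
    cmd.append(source_url)
    return cmd

cmd = dump_command("postgresql://user:pass@your-host:5432/your_database")
print(cmd[0], cmd[-1])
# To actually run it (requires pg_dump installed and network access):
#   import subprocess
#   with open("export.sql", "w") as f:
#       subprocess.run(cmd, stdout=f, check=True)
```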
Managed PostgreSQL notes
| Provider | Connection notes |
|---|---|
| AWS RDS | Use the endpoint hostname and master user credentials. Ensure the security group allows outbound connections from your machine. |
| Google Cloud SQL | Use Cloud SQL Auth Proxy or allowlist your IP. Direct connection: host:5432/dbname. |
| Azure Database | Use the {user}@{server} username format Azure requires. |
| DigitalOcean | Use the connection string from the database dashboard. Requires sslmode=require. |
Step 2: Clean the Export
Review the export for features DB9 does not support:
```shell
# Unsupported extensions
grep "CREATE EXTENSION" export.sql

# Table partitioning
grep -i "PARTITION BY\|PARTITION OF" export.sql

# Row-level security
grep -i "ROW LEVEL SECURITY\|CREATE POLICY" export.sql

# Table inheritance
grep -i "INHERITS" export.sql

# Foreign data wrappers
grep -i "CREATE SERVER\|CREATE FOREIGN TABLE" export.sql

# Replication
grep -i "CREATE PUBLICATION\|CREATE SUBSCRIPTION" export.sql

# Rules
grep -i "CREATE RULE" export.sql
```

Remove or comment out any matches. For extensions, keep only those DB9 supports: uuid-ossp, hstore, vector.
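The grep checks above can also run as a single pass over the file. A sketch under the same patterns; the check names are hypothetical labels, not DB9 terms:

```python
import re

# Sketch: count matches for each unsupported-feature pattern from Step 2.
CHECKS = {
    "extensions": r"CREATE EXTENSION",
    "partitioning": r"PARTITION BY|PARTITION OF",
    "row_level_security": r"ROW LEVEL SECURITY|CREATE POLICY",
    "inheritance": r"\bINHERITS\b",
    "fdw": r"CREATE SERVER|CREATE FOREIGN TABLE",
    "replication": r"CREATE PUBLICATION|CREATE SUBSCRIPTION",
    "rules": r"CREATE RULE",
}

def scan_export(sql: str) -> dict[str, int]:
    """Return per-check match counts (case-insensitive, like grep -i)."""
    return {name: len(re.findall(pat, sql, re.IGNORECASE))
            for name, pat in CHECKS.items()}

sample = "CREATE EXTENSION postgis;\nCREATE TABLE t (id int) PARTITION BY RANGE (id);\n"
hits = scan_export(sample)
print({k: v for k, v in hits.items() if v})  # → {'extensions': 1, 'partitioning': 1}
```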
Common cleanup patterns
```shell
# Remove all CREATE EXTENSION except supported ones
sed -i.bak '/CREATE EXTENSION/!b; /uuid-ossp\|hstore\|vector/!d' export.sql

# Remove RLS
sed -i.bak '/ENABLE ROW LEVEL SECURITY/d; /CREATE POLICY/,/;$/d' export.sql
```

Or manually review and remove the flagged lines.
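If sed is awkward on your platform, the extension cleanup can be expressed in a few lines of Python. A sketch with the same behavior as the first sed command above (naive substring matching, so review the result by hand):

```python
# Sketch: drop CREATE EXTENSION lines unless the extension is one DB9
# supports (per this guide). Substring matching, same as the sed version.
SUPPORTED = ("uuid-ossp", "hstore", "vector")

def clean_extensions(sql: str) -> str:
    kept = []
    for line in sql.splitlines():
        if "CREATE EXTENSION" in line and not any(ext in line for ext in SUPPORTED):
            continue  # unsupported extension: drop the line
        kept.append(line)
    return "\n".join(kept)

sql = 'CREATE EXTENSION postgis;\nCREATE EXTENSION "uuid-ossp";\nCREATE TABLE t (id int);'
print(clean_extensions(sql))
```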
Step 3: Create the DB9 Database
```shell
db9 create --name my-app --show-connection-string
```

Database creation is synchronous and completes in under a second.
Step 4: Import into DB9
Choose the method based on your database size.
Small databases (under 16 MB)
```shell
db9 db sql my-app -f export.sql
```

Uses the DB9 API. Limited to 50,000 rows or 16 MB per table.
Medium to large databases
Use psql with a direct pgwire connection — no size limits:
```shell
psql "$(db9 db status my-app --json | jq -r .connection_string)" -f export.sql
```

Large databases (streaming COPY)

For the fastest import of large datasets, split schema and data:
```shell
# 1. Import schema
psql "$(db9 db status my-app --json | jq -r .connection_string)" -f schema.sql

# 2. Stream data directly from source to DB9
pg_dump --data-only --no-owner \
  "postgresql://user:pass@your-host:5432/your_database" \
  | psql "$(db9 db status my-app --json | jq -r .connection_string)"
```

This pipes COPY statements through pgwire without intermediate files. DB9 supports COPY in TEXT and CSV formats.
Import errors
If import fails partway through:
- Unsupported DDL — check the error message for the specific statement, remove it from the SQL file, and re-run.
- Data type mismatch — DB9 does not support XML, CIDR/INET, or range types. Cast or remove these columns.
- Encoding errors — DB9 is UTF-8 only. Non-UTF-8 data will fail with “invalid byte sequence for encoding UTF8”. Convert the source data to UTF-8 before export.
Step 5: Update Your Application
Section titled “Step 5: Update Your Application”Connection string
Before:

```shell
DATABASE_URL=postgresql://user:password@your-host:5432/your_database
```

After:

```shell
DATABASE_URL=postgresql://a1b2c3d4e5f6.admin@pg.db9.io:5433/postgres?sslmode=require
```

Key differences:

- Username: `{tenant_id}.{role}` format
- Port: 5433
- Database: Always `postgres`
- TLS: Required (`sslmode=require`)
Connection pooling
If you use an external connection pooler (PgBouncer, pgpool), remove it and configure pooling in your application:
```js
// node-postgres
const pool = new pg.Pool({
  connectionString: process.env.DATABASE_URL,
  max: 10,
  idleTimeoutMillis: 30000,
});
```

```python
# SQLAlchemy
engine = create_engine(
    DATABASE_URL,
    pool_size=10,
    pool_pre_ping=True,
)
```

For ORM-specific connection setup, see the integration guides: Prisma, Drizzle, SQLAlchemy, GORM.
LISTEN/NOTIFY replacement
If your application uses LISTEN/NOTIFY for real-time updates, switch to:
- Polling — query on an interval
- Application-level WebSockets — push changes from your API layer when writes occur
- External message queue — Redis Pub/Sub, AWS SQS, or similar
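The polling option can be sketched as a small loop that tracks the highest id seen and fetches only newer rows each cycle. This is a hypothetical helper, not a DB9 API; `fetch_newer` stands in for a real query such as `SELECT * FROM events WHERE id > %s ORDER BY id`:

```python
# Sketch of polling as a LISTEN/NOTIFY replacement.
def poll_once(fetch_newer, last_id: int):
    """Run one polling cycle; return (new_rows, updated_last_id)."""
    rows = fetch_newer(last_id)
    if rows:
        last_id = max(r["id"] for r in rows)
    return rows, last_id

# Fake in-memory data source standing in for the database table:
events = [{"id": 1, "msg": "a"}, {"id": 2, "msg": "b"}]
fetch = lambda last: [e for e in events if e["id"] > last]

rows, last = poll_once(fetch, 0)
print(len(rows), last)  # → 2 2
# In production, loop with time.sleep(interval) between cycles.
```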
Sequences and SERIAL columns
SERIAL, BIGSERIAL, and GENERATED ALWAYS AS IDENTITY columns work in DB9. After importing data, verify sequences are set correctly:
```shell
db9 db sql my-app -q "SELECT setval('users_id_seq', (SELECT max(id) FROM users))"
```

Run this for each table with auto-incrementing columns to avoid primary key conflicts on new inserts.
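With many tables, generating the setval statements from a list is less error-prone than typing each one. A sketch that assumes the default PostgreSQL sequence naming convention (`{table}_{column}_seq`); verify the actual sequence names in your schema first:

```python
# Sketch: generate the setval statement from this step for each table.
def setval_sql(table: str, column: str = "id") -> str:
    return (f"SELECT setval('{table}_{column}_seq', "
            f"(SELECT max({column}) FROM {table}))")

for table in ("users", "orders"):
    print(setval_sql(table))
```

Each printed statement can then be passed to `db9 db sql my-app -q` as shown above.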
Step 6: Validate
Section titled “Step 6: Validate”Check schema
```shell
db9 db dump my-app --ddl-only
```

Compare with your original schema.
Check row counts
```shell
db9 db sql my-app -q "SELECT count(*) FROM users"
db9 db sql my-app -q "SELECT count(*) FROM orders"
```

Compare against the source database.
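If you collect the counts from both sides into dicts, the comparison is a one-liner to automate. A hypothetical helper; the counts would come from running `SELECT count(*)` per table on each side:

```python
# Sketch: compare per-table row counts between source and target.
def diff_counts(source: dict[str, int], target: dict[str, int]) -> dict:
    """Return tables whose counts differ (or exist on only one side)."""
    tables = set(source) | set(target)
    return {t: (source.get(t), target.get(t))
            for t in tables if source.get(t) != target.get(t)}

src = {"users": 1500, "orders": 8200}
dst = {"users": 1500, "orders": 8100}
print(diff_counts(src, dst))  # → {'orders': (8200, 8100)}
```

An empty result means every table matched.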
Run your test suite
Section titled “Run your test suite”DATABASE_URL="$(db9 db status my-app --json | jq -r .connection_string)" npm testCommon differences to watch for
- SERIALIZABLE isolation — DB9 accepts the setting but executes as REPEATABLE READ. Applications that depend on true serializable behavior may see different results under contention.
- GIN/GiST index performance — these index types are accepted but fall back to table scan. Full-text search and JSONB indexing will work but may be slower than on standard PostgreSQL.
- Advisory locks — `pg_advisory_lock()` and related functions are not supported. Use `SELECT ... FOR UPDATE` for row-level coordination.
- Timestamp precision — DB9 stores timestamps with millisecond precision, not microsecond. Applications that rely on microsecond timestamps may see rounding.
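The timestamp precision difference is easy to reproduce client-side. A sketch: truncating microseconds to milliseconds before comparing values avoids spurious mismatches when validating imported data (the helper name is hypothetical):

```python
from datetime import datetime, timezone

# Sketch: drop sub-millisecond detail, mirroring DB9's stored precision.
def to_millis(ts: datetime) -> datetime:
    return ts.replace(microsecond=(ts.microsecond // 1000) * 1000)

t = datetime(2024, 5, 1, 12, 0, 0, 123456, tzinfo=timezone.utc)
print(to_millis(t).microsecond)  # → 123000
```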
Rollback Plan
Your source database is unchanged by the migration. To revert:
- Switch `DATABASE_URL` back to the original PostgreSQL connection string.
- If you need to export data created in DB9:
```shell
# Small databases
db9 db dump my-app -o db9-export.sql
psql "postgresql://user:pass@your-host:5432/your_database" -f db9-export.sql

# Large databases — use COPY per table
db9 db sql my-app -q "COPY users TO STDOUT WITH (FORMAT csv, HEADER)" > users.csv
psql "postgresql://user:pass@your-host:5432/your_database" \
  -c "COPY users FROM STDIN WITH (FORMAT csv, HEADER)" < users.csv
```

The `db9 db dump` command outputs plain SQL (up to 50,000 rows or 16 MB per table). For larger databases, export individual tables with COPY.
Caveats
- No zero-downtime migration — DB9 does not support logical replication. Plan a maintenance window for the cutover, or accept a brief period of dual-writes.
- UTF-8 only — DB9 does not support other encodings. Ensure your data is UTF-8 before export.
- Plain SQL import only — DB9 does not support `pg_restore` with custom or directory formats. Always use the default plain-text format with `pg_dump`.
- Dump size limits — The `db9 db sql -f` API has per-table limits (50,000 rows, 16 MB). For larger databases, use direct `psql` over pgwire.
- No custom extensions — only the 9 built-in extensions are available. If your application depends on PostGIS, ltree, pg_trgm, or other community extensions, those features will not be available.
- Millisecond timestamp precision — timestamps are stored with millisecond resolution, not microsecond.
Next Pages
- Compatibility Matrix — full list of supported and unsupported features
- Connect — connection string format and authentication
- Migrate from Neon — Neon-specific migration
- Migrate from Supabase — Supabase-specific migration
- Production Checklist — deployment readiness
- Limits and Quotas — operational limits