
Migrate from PostgreSQL

This guide covers migrating from any PostgreSQL installation — self-hosted, AWS RDS, Google Cloud SQL, Azure Database for PostgreSQL, or other managed services — to DB9. The process uses standard pg_dump for export and the DB9 CLI or psql for import.

For platform-specific guides, see Migrate from Neon or Migrate from Supabase.

DB9 supports most PostgreSQL workloads, but some features are not available. Run these checks against your existing database before migrating.

Connect to your source database and check for unsupported features:

SQL
-- Table partitioning (not supported)
SELECT count(*) AS partitioned_tables
FROM pg_partitioned_table;
-- Table inheritance (not supported)
SELECT count(*) AS inherited_tables
FROM pg_inherits;
-- Row-level security policies (not supported)
SELECT count(*) AS rls_policies
FROM pg_policies;
-- Foreign data wrappers (not supported)
SELECT count(*) AS fdw_servers
FROM pg_foreign_server;
-- Logical replication (not supported)
SELECT count(*) AS publications
FROM pg_publication;
-- Advisory locks in use (not supported)
SELECT count(*) AS advisory_locks
FROM pg_locks WHERE locktype = 'advisory';

If any of these return non-zero counts, review whether your application depends on them. DB9 does not support these features — you will need to refactor or remove them before migrating.

List the installed extensions and flag any outside DB9's supported set:

SQL
SELECT extname FROM pg_extension WHERE extname NOT IN (
'plpgsql', 'http', 'uuid-ossp', 'hstore', 'fs9', 'pg_cron',
'parquet', 'zhparser', 'vector', 'embedding'
) ORDER BY extname;

DB9 supports 9 built-in extensions: http, uuid-ossp, hstore, fs9, pg_cron, parquet, zhparser, vector, embedding. Extensions not in this list (PostGIS, pg_trgm, pgcrypto, ltree, etc.) are not available.

Note: gen_random_uuid() works in DB9 without any extension — no need for pgcrypto.

DB9 supports basic PL/pgSQL: variable declarations, IF/ELSIF, FOR loops, PERFORM, RAISE, and RETURN. It does not support:

  • WHILE loops
  • EXECUTE (dynamic SQL)
  • Exception handling (BEGIN...EXCEPTION)
  • Cursor operations
SQL
-- Find functions that may use unsupported PL/pgSQL features
SELECT proname, prosrc
FROM pg_proc
WHERE prolang = (SELECT oid FROM pg_language WHERE lanname = 'plpgsql')
AND (prosrc ILIKE '%WHILE%' OR prosrc ILIKE '%EXECUTE%' OR prosrc ILIKE '%EXCEPTION%');

Review any matches and rewrite them before migrating.

Area | Standard PostgreSQL | DB9
Connection string | postgresql://user:pass@host:5432/dbname | postgresql://tenant.role@pg.db9.io:5433/postgres
Port | 5432 (default) | 5433
Database name | Custom | Always postgres
Username | Standard roles | tenant_id.role format (e.g., a1b2c3d4e5f6.admin)
Transaction isolation | SERIALIZABLE fully enforced | SERIALIZABLE accepted but runs as REPEATABLE READ
Connection pooling | External (PgBouncer, pgpool) | Application-side pooling
Replication | Logical and streaming | Not supported
LISTEN/NOTIFY | Supported | Not supported
Extensions | Community ecosystem | 9 built-in only
Indexes | All types fully functional | B-tree and HNSW full; GIN/GiST/Hash accepted but fall back to table scan

See the Compatibility Matrix for the complete list.

The following carry over from standard PostgreSQL:

  • SQL — DML (SELECT, INSERT, UPDATE, DELETE, UPSERT), DDL (CREATE TABLE, ALTER, DROP), JOINs, CTEs, window functions, subqueries, and RETURNING all work without changes.
  • Data types — TEXT, INTEGER, BIGINT, BOOLEAN, TIMESTAMPTZ, UUID, JSONB, arrays, FLOAT8, NUMERIC, BYTEA, and vectors.
  • Wire protocol — pgwire v3 (Simple Query, Extended Query, COPY, prepared statements). Any PostgreSQL driver works.
  • ORMs — Prisma, Drizzle, SQLAlchemy, TypeORM, Sequelize, Knex, and GORM are tested at 98-100% compatibility.
To migrate, you will need:

  • Access to your source PostgreSQL database
  • pg_dump installed locally (should match or be close to your source PostgreSQL version)
  • DB9 CLI installed: curl -fsSL https://get.db9.io | sh
  • A DB9 account: db9 create --name my-app to create your target database
Export the full database:

Terminal
pg_dump --no-owner --no-privileges --no-comments \
"postgresql://user:pass@your-host:5432/your_database" \
> export.sql

Schema only:

Terminal
pg_dump --schema-only --no-owner --no-privileges \
"postgresql://user:pass@your-host:5432/your_database" \
> schema.sql

Specific tables only:

Terminal
pg_dump --no-owner --no-privileges -t users -t orders -t products \
"postgresql://user:pass@your-host:5432/your_database" \
> tables.sql

Use plain SQL format (default). DB9 does not support pg_restore with the custom (-Fc) or directory (-Fd) formats — import via SQL text only.
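A quick self-check that the dump really is plain text: custom-format archives begin with the magic bytes PGDMP, while plain dumps begin with SQL comment lines. A minimal sketch, using a stand-in file in place of a real dump:

```shell
# Custom-format archives (-Fc) start with the magic bytes "PGDMP";
# plain SQL dumps start with comment lines.
printf -- '--\n-- PostgreSQL database dump\n--\n' > /tmp/export.sql   # stand-in for a real dump
if head -c 5 /tmp/export.sql | grep -q 'PGDMP'; then
  echo "custom-format archive - re-run pg_dump without -Fc/-Fd"
else
  echo "plain SQL - ready to import"
fi
```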

Flags explained:

  • --no-owner — omits ALTER ... OWNER TO statements that reference source-specific roles
  • --no-privileges — omits GRANT/REVOKE statements
  • --no-comments — omits COMMENT ON statements

Locale and encoding settings in the pg_dump output (like SET client_encoding) are accepted and safely ignored by DB9, which operates in UTF-8 only.

Provider | Connection notes
AWS RDS | Use the endpoint hostname and master user credentials. Ensure the security group allows outbound connections from your machine.
Google Cloud SQL | Use Cloud SQL Auth Proxy or allowlist your IP. Direct connection: host:5432/dbname.
Azure Database | Use the {user}@{server} username format Azure requires.
DigitalOcean | Use the connection string from the database dashboard. Requires sslmode=require.

Review the export for features DB9 does not support:

Terminal
# Unsupported extensions
grep "CREATE EXTENSION" export.sql
# Table partitioning
grep -i "PARTITION BY\|PARTITION OF" export.sql
# Row-level security
grep -i "ROW LEVEL SECURITY\|CREATE POLICY" export.sql
# Table inheritance
grep -i "INHERITS" export.sql
# Foreign data wrappers
grep -i "CREATE SERVER\|CREATE FOREIGN TABLE" export.sql
# Replication
grep -i "CREATE PUBLICATION\|CREATE SUBSCRIPTION" export.sql
# Rules
grep -i "CREATE RULE" export.sql

Remove or comment out any matches. For extensions, keep only those DB9 supports (http, uuid-ossp, hstore, fs9, pg_cron, parquet, zhparser, vector, embedding).

Terminal
# Remove all CREATE EXTENSION lines except supported extensions (GNU sed syntax)
sed -i.bak '/CREATE EXTENSION/!b; /uuid-ossp\|hstore\|vector\|http\|fs9\|pg_cron\|parquet\|zhparser\|embedding/!d' export.sql
# Remove RLS statements
sed -i.bak '/ENABLE ROW LEVEL SECURITY/d; /CREATE POLICY/,/;$/d' export.sql

Or manually review and remove the flagged lines.
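If your sed lacks GNU alternation (BSD sed on macOS, for example), a single awk pass does the same filtering. A sketch demonstrated on a tiny stand-in dump:

```shell
# Stand-in dump with one unsupported extension and an RLS statement
cat > /tmp/sample.sql <<'EOF'
CREATE EXTENSION IF NOT EXISTS postgis;
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
CREATE TABLE t (id integer);
ALTER TABLE t ENABLE ROW LEVEL SECURITY;
EOF
# Drop CREATE EXTENSION lines for unsupported extensions, plus single-line RLS statements
awk '/CREATE EXTENSION/ && !/uuid-ossp|hstore|vector/ { next }
     /ENABLE ROW LEVEL SECURITY/ { next }
     { print }' /tmp/sample.sql > /tmp/sample.clean.sql
cat /tmp/sample.clean.sql
```

Multi-line CREATE POLICY statements still need manual review; this pass only drops single-line matches.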

Terminal
db9 create --name my-app --show-connection-string

Database creation is synchronous and completes in under a second.

Choose the method based on your database size.

For small databases:

Terminal
db9 db sql my-app -f export.sql

This uses the DB9 API and is limited to 50,000 rows or 16 MB per table.

For larger databases, use psql with a direct pgwire connection — no size limits:

Terminal
psql "$(db9 db status my-app --json | jq -r .connection_string)" -f export.sql

For the fastest import of large datasets, split schema and data:

Terminal
# 1. Import schema
psql "$(db9 db status my-app --json | jq -r .connection_string)" -f schema.sql
# 2. Stream data directly from source to DB9
pg_dump --data-only --no-owner \
"postgresql://user:pass@your-host:5432/your_database" \
| psql "$(db9 db status my-app --json | jq -r .connection_string)"

This pipes COPY statements through pgwire without intermediate files. DB9 supports COPY in TEXT and CSV formats.

If import fails partway through:

  • Unsupported DDL — check the error message for the specific statement, remove it from the SQL file, and re-run.
  • Data type mismatch — DB9 does not support XML, CIDR/INET, or range types. Cast or remove these columns.
  • Encoding errors — DB9 is UTF-8 only. Non-UTF-8 data will fail with “invalid byte sequence for encoding UTF8”. Convert the source data to UTF-8 before export.
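For the encoding case, iconv can convert a dump before import. A sketch assuming a Latin-1 source (substitute your actual source encoding); the stand-in file below contains one Latin-1 byte:

```shell
# Create a stand-in dump containing a Latin-1 encoded "é" (byte 0xE9)
printf "INSERT INTO t VALUES ('caf\351');\n" > /tmp/latin1.sql
# Convert to UTF-8 so DB9 accepts it
iconv -f LATIN1 -t UTF-8 /tmp/latin1.sql > /tmp/utf8.sql
cat /tmp/utf8.sql
```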
Update your application's DATABASE_URL:

Diff
- DATABASE_URL=postgresql://user:password@your-host:5432/your_database
+ DATABASE_URL=postgresql://a1b2c3d4e5f6.admin@pg.db9.io:5433/postgres?sslmode=require

Key differences:

  • Username: {tenant_id}.{role} format
  • Port: 5433
  • Database: Always postgres
  • TLS: Required (sslmode=require)
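If you assemble the connection string from parts (in CI, for example), the pieces map as follows. The tenant ID is the example value used throughout this guide:

```shell
TENANT_ID="a1b2c3d4e5f6"   # printed by db9 create
ROLE="admin"
DATABASE_URL="postgresql://${TENANT_ID}.${ROLE}@pg.db9.io:5433/postgres?sslmode=require"
echo "$DATABASE_URL"
```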

If you use an external connection pooler (PgBouncer, pgpool), remove it and configure pooling in your application:

TypeScript
// node-postgres
const pool = new pg.Pool({
  connectionString: process.env.DATABASE_URL,
  max: 10,
  idleTimeoutMillis: 30000,
});
Python
# SQLAlchemy
engine = create_engine(
    DATABASE_URL,
    pool_size=10,
    pool_pre_ping=True,
)

For ORM-specific connection setup, see the integration guides: Prisma, Drizzle, SQLAlchemy, GORM.

If your application uses LISTEN/NOTIFY for real-time updates, switch to:

  • Polling — query on an interval
  • Application-level WebSockets — push changes from your API layer when writes occur
  • External message queue — Redis Pub/Sub, AWS SQS, or similar
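The polling option can be sketched as a generic loop that re-runs a query command and reacts when its output changes. The counter file below stands in for a real db9 db sql call, and the interval is an assumption; tune it to your latency needs:

```shell
# Re-run a command up to max_tries times; report when its output departs from baseline.
poll_until_change() {
  baseline="$1"; max_tries="$2"; shift 2
  i=0
  while [ "$i" -lt "$max_tries" ]; do
    current="$("$@")"
    if [ "$current" != "$baseline" ]; then
      echo "changed: $current"
      return 0
    fi
    i=$((i + 1))
    # sleep 5   # poll interval in a real deployment
  done
  echo "no change after $max_tries polls"
  return 1
}
# Demo: the baseline was "1"; the watched "table" now reports "2"
echo 2 > /tmp/counter
poll_until_change 1 3 cat /tmp/counter
```

In production the command would be something like db9 db sql my-app -q "SELECT max(updated_at) FROM orders", and the loop would run indefinitely.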

SERIAL, BIGSERIAL, and GENERATED ALWAYS AS IDENTITY columns work in DB9. After importing data, verify sequences are set correctly:

Terminal
db9 db sql my-app -q "SELECT setval('users_id_seq', (SELECT max(id) FROM users))"

Run this for each table with auto-incrementing columns to avoid primary key conflicts on new inserts.
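With more than a handful of tables, generating the statements is less error-prone than typing them. A sketch assuming the default {table}_id_seq sequence naming and an id column (adjust for your schema); the coalesce guards tables that imported empty:

```shell
# Emit one setval statement per table with an auto-incrementing id column
tables="users orders products"
for t in $tables; do
  printf "SELECT setval('%s_id_seq', (SELECT coalesce(max(id), 1) FROM %s));\n" "$t" "$t"
done > /tmp/sync_sequences.sql
cat /tmp/sync_sequences.sql
```

Then run the file with db9 db sql my-app -f /tmp/sync_sequences.sql.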

Verify the imported schema:

Terminal
db9 db dump my-app --ddl-only

Compare with your original schema.

Spot-check row counts:

Terminal
db9 db sql my-app -q "SELECT count(*) FROM users"
db9 db sql my-app -q "SELECT count(*) FROM orders"

Compare against the source database.
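With many tables, diffing two count files beats eyeballing. A sketch: the heredocs stand in for real output from psql (source) and db9 db sql (target), produced by whatever count query you prefer, e.g. one SELECT count(*) per table:

```shell
# Row counts as "table count" lines (sample data stands in for real query output)
cat > /tmp/source_counts.txt <<'EOF'
orders 8200
users 1500
EOF
cat > /tmp/db9_counts.txt <<'EOF'
orders 8199
users 1500
EOF
if diff /tmp/source_counts.txt /tmp/db9_counts.txt > /tmp/count_diff.txt; then
  echo "row counts match"
else
  echo "mismatched tables:"
  cat /tmp/count_diff.txt
fi
```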

Run your application's test suite against DB9:

Terminal
DATABASE_URL="$(db9 db status my-app --json | jq -r .connection_string)" npm test
Watch for these behavioral differences:

  • SERIALIZABLE isolation — DB9 accepts the setting but executes as REPEATABLE READ. Applications that depend on true serializable behavior may see different results under contention.
  • GIN/GiST index performance — these index types are accepted but fall back to table scan. Full-text search and JSONB indexing will work but may be slower than on standard PostgreSQL.
  • Advisory locks — pg_advisory_lock() and related functions are not supported. Use SELECT ... FOR UPDATE for row-level coordination.
  • Timestamp precision — DB9 stores timestamps with millisecond precision, not microsecond. Applications that rely on microsecond timestamps may see rounding.

Your source database is unchanged by the migration. To revert:

  1. Switch DATABASE_URL back to the original PostgreSQL connection string.
  2. If you need to export data created in DB9:
Terminal
# Small databases
db9 db dump my-app -o db9-export.sql
psql "postgresql://user:pass@your-host:5432/your_database" -f db9-export.sql
# Large databases — use COPY per table
db9 db sql my-app -q "COPY users TO STDOUT WITH (FORMAT csv, HEADER)" > users.csv
psql "postgresql://user:pass@your-host:5432/your_database" \
-c "COPY users FROM STDIN WITH (FORMAT csv, HEADER)" < users.csv

The db9 db dump command outputs plain SQL (up to 50,000 rows or 16 MB per table). For larger databases, export individual tables with COPY.
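To script the per-table route, generate one COPY command per table. The table list is an example; substitute your own:

```shell
# Write a helper script with one CSV export command per table
tables="users orders products"
for t in $tables; do
  printf 'db9 db sql my-app -q "COPY %s TO STDOUT WITH (FORMAT csv, HEADER)" > %s.csv\n' "$t" "$t"
done > /tmp/export-tables.sh
cat /tmp/export-tables.sh
```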

  • No zero-downtime migration — DB9 does not support logical replication. Plan a maintenance window for the cutover, or accept a brief period of dual-writes.
  • UTF-8 only — DB9 does not support other encodings. Ensure your data is UTF-8 before export.
  • Plain SQL import only — DB9 does not support pg_restore with custom or directory formats. Always use the default plain-text format with pg_dump.
  • Dump size limits — The db9 db sql -f API has per-table limits (50,000 rows, 16 MB). For larger databases, use direct psql over pgwire.
  • No custom extensions — only the 9 built-in extensions are available. If your application depends on PostGIS, ltree, pg_trgm, or other community extensions, those features will not be available.
  • Millisecond timestamp precision — timestamps are stored with millisecond resolution, not microsecond.