# Migrate from Neon

This guide walks through migrating a PostgreSQL database from Neon to DB9. The process uses standard PostgreSQL tooling (`pg_dump` for export) and the DB9 CLI for import.
## What Changes and What Stays the Same

### Stays the same

- SQL compatibility — DB9 supports the same DML, DDL, joins, CTEs, window functions, and subqueries you use in Neon. Most queries work without changes.
- PostgreSQL drivers — Any driver that connects via pgwire (node-postgres, psycopg, pgx, JDBC) works with DB9.
- ORM compatibility — Prisma, Drizzle, SQLAlchemy, TypeORM, Sequelize, Knex, and GORM are tested and supported.
- Data types — Common types (TEXT, INTEGER, BIGINT, BOOLEAN, TIMESTAMPTZ, UUID, JSONB, arrays, vectors) work identically.
### Changes

| Area | Neon | DB9 |
|---|---|---|
| Connection string | postgresql://user:pass@ep-*.neon.tech/dbname | postgresql://tenant.role@pg.db9.io:5433/postgres |
| Connection pooling | Built-in PgBouncer (transaction mode) | No built-in pooler — use application-side pooling |
| Branching | Copy-on-write, instant for any size | Full data copy, async (seconds to minutes) |
| Compute | Autoscaling, scale-to-zero | Fixed per-database, always on |
| Serverless driver | @neondatabase/serverless (HTTP/WebSocket) | Standard pgwire only — no HTTP SQL endpoint |
| Extensions | 40+ community extensions | 9 built-in (http, vector, fs9, pg_cron, embedding, hstore, uuid-ossp, parquet, zhparser) |
| Replication | Logical replication supported | Not supported |
| Row-level security | Supported | Not supported |
| Table partitioning | Supported | Not supported |
| LISTEN/NOTIFY | Supported | Not supported |
| Port | 5432 | 5433 |
| Database name | Custom (e.g., neondb) | Always postgres |
Review the Compatibility Matrix for the full list of supported and unsupported features.
## Prerequisites

- Access to your Neon database (direct/unpooled connection string)
- `pg_dump` installed locally (comes with PostgreSQL client tools)
- DB9 CLI installed: `curl -fsSL https://get.db9.io | sh`
- A DB9 account: `db9 create --name my-app` to create your target database
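Before starting, it can help to confirm that the client tools the steps below rely on are actually on your PATH. A quick sketch (the tool names match the prerequisites above):

```shell
# Report which of the required client tools are installed
for tool in pg_dump psql db9 jq; do
  if command -v "$tool" > /dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```

Anything reported as MISSING needs to be installed before continuing.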
## Step 1: Export from Neon

Use `pg_dump` with Neon’s direct (unpooled) connection string. Do not use the pooled connection — `pg_dump` requires a direct connection.
### Schema and data (plain SQL format)

```shell
pg_dump --no-owner --no-privileges --no-comments \
  "postgresql://user:pass@ep-cool-name-123456.us-east-2.aws.neon.tech/neondb?sslmode=require" \
  > export.sql
```

### Schema only

```shell
pg_dump --schema-only --no-owner --no-privileges \
  "postgresql://user:pass@ep-cool-name-123456.us-east-2.aws.neon.tech/neondb?sslmode=require" \
  > schema.sql
```

Flags explained:

- `--no-owner` — omits `ALTER ... OWNER TO` statements that reference Neon-specific roles
- `--no-privileges` — omits `GRANT`/`REVOKE` statements
- `--no-comments` — omits `COMMENT ON` statements that may reference Neon internals
Use plain SQL format (the default). DB9 does not support `pg_restore` with the custom (`-Fc`) or directory (`-Fd`) formats — import via SQL text only.
## Step 2: Clean the Export

The `pg_dump` output may contain statements that DB9 does not support. Remove or comment out:
- `CREATE EXTENSION` for extensions DB9 does not have — DB9 supports 9 built-in extensions. Remove any `CREATE EXTENSION` for extensions not in: `http`, `uuid-ossp`, `hstore`, `fs9`, `pg_cron`, `parquet`, `zhparser`, `vector`, `embedding`.
- `CREATE PUBLICATION` / `CREATE SUBSCRIPTION` — DB9 does not support logical replication.
- Row-level security policies — `CREATE POLICY`, `ALTER TABLE ... ENABLE ROW LEVEL SECURITY`.
- Table partitioning — `PARTITION BY`, `CREATE TABLE ... PARTITION OF`.
- Advisory lock calls — `pg_advisory_lock()`, `pg_try_advisory_lock()`.
- PL/pgSQL functions using `WHILE`, `EXECUTE`, or exception handling — DB9 supports basic PL/pgSQL but not these constructs.
- Locale settings — DB9 accepts and ignores locale parameters from `pg_dump`, so these are safe to leave in.
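The single-line removals above can also be scripted. A minimal sketch using `sed`, run here against a hypothetical three-statement export; adjust the patterns to what your dump actually contains, and note this only catches statements that begin at the start of a line, so review multi-line statements by hand:

```shell
# Hypothetical sample export, standing in for the real export.sql
cat > export.sql <<'EOF'
CREATE TABLE users (id BIGINT PRIMARY KEY);
CREATE POLICY user_policy ON users USING (true);
CREATE PUBLICATION pub FOR ALL TABLES;
EOF

# Comment out unsupported statements; .bak keeps the original for review
sed -i.bak -E \
  -e 's/^CREATE POLICY/-- &/' \
  -e 's/^ALTER TABLE (.*) ENABLE ROW LEVEL SECURITY/-- &/' \
  -e 's/^CREATE (PUBLICATION|SUBSCRIPTION)/-- &/' \
  export.sql

grep '^-- ' export.sql   # shows the statements that were disabled
```

Commenting out (rather than deleting) keeps a record of what was dropped, which helps when auditing the import later.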
A quick way to identify issues:

```shell
# Check for unsupported extensions
grep "CREATE EXTENSION" export.sql

# Check for partitioning
grep -i "PARTITION" export.sql

# Check for RLS
grep -i "ROW LEVEL SECURITY\|CREATE POLICY" export.sql

# Check for replication
grep -i "PUBLICATION\|SUBSCRIPTION" export.sql
```

## Step 3: Create the DB9 Database

```shell
# Create a new database
db9 create --name my-app --show-connection-string
```

This returns immediately with the connection string and credentials. Save them for your application config.
## Step 4: Import into DB9

### Option A: CLI import (recommended for most databases)

```shell
db9 db sql my-app -f export.sql
```

This executes the SQL file against your DB9 database via the API. Suitable for databases up to the dump limits (50,000 rows or 16 MB per table).
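If you are unsure which option fits, a rough pre-check is to compare the dump's size against the 16 MB limit. This is a conservative sketch: the real limit is per table, so a file under 16 MB is always safe, while a larger file may still be fine if no single table exceeds the limits. The tiny generated file stands in for your real `export.sql`:

```shell
# Stand-in for a real dump; use your actual export.sql instead
printf 'CREATE TABLE t (id INT);\n' > export.sql

LIMIT=$((16 * 1024 * 1024))   # 16 MB API limit
SIZE=$(wc -c < export.sql)
if [ "$SIZE" -le "$LIMIT" ]; then
  echo "under the limit: Option A (db9 db sql -f) is safe"
else
  echo "over the limit: stream the import with psql instead (Option B)"
fi
```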
### Option B: Direct psql import (for larger databases)

For larger exports, use `psql` with DB9’s connection string directly:

```shell
psql "$(db9 db status my-app --json | jq -r .connection_string)" -f export.sql
```

This streams the SQL through the pgwire protocol and handles larger files without the API dump limits.
### Option C: COPY for bulk data

If your export is large and you split schema from data, you can use COPY for bulk loading:

```shell
# Import schema first
psql "$(db9 db status my-app --json | jq -r .connection_string)" -f schema.sql

# Then import data via COPY (pg_dump --data-only emits COPY statements by default)
pg_dump --data-only --no-owner \
  "postgresql://user:pass@ep-cool-name-123456.us-east-2.aws.neon.tech/neondb?sslmode=require" \
  | psql "$(db9 db status my-app --json | jq -r .connection_string)"
```

DB9 supports COPY in CSV and TEXT formats over pgwire.
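If you already exported one combined file, COPY blocks in a plain-SQL dump can be separated mechanically: each data block starts with a `COPY ... FROM stdin;` line and ends with a line containing only `\.`. A sketch with `awk`, run against a tiny hypothetical dump:

```shell
# Hypothetical combined dump, standing in for the real export.sql
cat > export.sql <<'EOF'
CREATE TABLE users (id BIGINT PRIMARY KEY);
COPY users (id) FROM stdin;
1
2
\.
CREATE INDEX users_idx ON users (id);
EOF

# Route COPY blocks (including their terminating \.) to data.sql,
# everything else to schema.sql
awk '
  /^COPY / { in_copy = 1 }
  in_copy  { print > "data.sql"; if ($0 == "\\.") in_copy = 0; next }
           { print > "schema.sql" }
' export.sql
```

After splitting, import `schema.sql` first and pipe `data.sql` through `psql` as shown above.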
## Step 5: Update Your Application

### Connection string

Replace the Neon connection string with DB9’s:

```shell
# Before (Neon)
DATABASE_URL=postgresql://user:pass@ep-cool-name-123456.us-east-2.aws.neon.tech/neondb?sslmode=require

# After (DB9)
DATABASE_URL=postgresql://a1b2c3d4e5f6.admin@pg.db9.io:5433/postgres?sslmode=require
```

Key differences:
- Username: DB9 uses `{tenant_id}.{role}` format (e.g., `a1b2c3d4e5f6.admin`)
- Port: 5433, not 5432
- Database: always `postgres`
- Host: `pg.db9.io` (not region-specific endpoints)
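Put together, the DB9 URL can be assembled from its parts like this (the tenant ID is a placeholder; use the value from `db9 create`):

```shell
TENANT_ID="a1b2c3d4e5f6"   # placeholder: from db9 create output
ROLE="admin"

DATABASE_URL="postgresql://${TENANT_ID}.${ROLE}@pg.db9.io:5433/postgres?sslmode=require"
echo "$DATABASE_URL"
# → postgresql://a1b2c3d4e5f6.admin@pg.db9.io:5433/postgres?sslmode=require
```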
### Neon serverless driver

If you use `@neondatabase/serverless`, replace it with a standard PostgreSQL driver:

```typescript
// Before (Neon serverless driver)
import { neon } from '@neondatabase/serverless';

const sql = neon(process.env.DATABASE_URL);
const result = await sql`SELECT * FROM users`;
```

```typescript
// After (node-postgres)
import pg from 'pg';

const pool = new pg.Pool({ connectionString: process.env.DATABASE_URL });
const result = await pool.query('SELECT * FROM users');
```

DB9 uses standard pgwire (TCP), so `pg` (node-postgres), psycopg, pgx, and other standard drivers work without modification.
### Connection pooling

Neon provides built-in PgBouncer. DB9 does not include a connection pooler. If your application opens many connections, configure pooling at the application level:

```typescript
// node-postgres pool
const pool = new pg.Pool({
  connectionString: process.env.DATABASE_URL,
  max: 10, // DB9 handles concurrent connections well
  idleTimeoutMillis: 30000,
});
```

For ORMs, see the integration guides: Prisma, Drizzle, SQLAlchemy.
### Edge Runtime

If you run code in edge/serverless environments (Cloudflare Workers, Vercel Edge Functions) that relied on Neon’s HTTP driver, you need to move database queries to a Node.js runtime. DB9 requires TCP connections via pgwire — it does not support HTTP or WebSocket SQL endpoints.
See the Next.js guide for patterns that work with both Server Components and API routes.
## Step 6: Validate

### Check schema

```shell
db9 db dump my-app --ddl-only
```

Compare the output with your original schema to confirm all tables, indexes, and constraints were created.
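One way to compare is a normalized diff of the two DDL dumps. A crude sketch: sorting ignores statement order but not formatting differences, so treat the output as a starting point. The two generated files below stand in for your original `schema.sql` and the `db9 db dump --ddl-only` output:

```shell
# Stand-ins for the real schema dumps
cat > neon-schema.sql <<'EOF'
CREATE TABLE users (id BIGINT PRIMARY KEY);
CREATE INDEX users_idx ON users (id);
EOF
cat > db9-schema.sql <<'EOF'
CREATE TABLE users (id BIGINT PRIMARY KEY);
EOF

# Sort both dumps so statement order does not produce false diffs
sort neon-schema.sql > neon.sorted
sort db9-schema.sql > db9.sorted
if diff neon.sorted db9.sorted > schema.diff; then
  echo "schemas match"
else
  echo "schemas differ:"
  cat schema.diff
fi
```

In this sample the diff flags the missing index, the kind of gap you would then fix by re-running that statement against DB9.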
### Check row counts

Run a count on your key tables to verify data was imported:

```shell
db9 db sql my-app -q "SELECT count(*) FROM users"
db9 db sql my-app -q "SELECT count(*) FROM orders"
```

Compare row counts against the source Neon database.
### Run your test suite

The most reliable validation is running your application’s existing test suite against the DB9 database. Update `DATABASE_URL` in your test environment and run:

```shell
DATABASE_URL="$(db9 db status my-app --json | jq -r .connection_string)" npm test
```

### Check for unsupported features

If your tests fail, check these common differences:

- SERIALIZABLE isolation — DB9 accepts `SET TRANSACTION ISOLATION LEVEL SERIALIZABLE` but runs as REPEATABLE READ
- LISTEN/NOTIFY — not supported; use polling or an external message queue
- Advisory locks — not supported; use `SELECT ... FOR UPDATE` for row-level locking
- Row-level security — not supported; enforce access control in your application layer
## Rollback Plan

If you need to revert:

- Your Neon database is unchanged — switch `DATABASE_URL` back to the Neon connection string.
- If you need to export data created in DB9 back to Neon:

```shell
# Export from DB9
db9 db dump my-app -o db9-export.sql

# Import to Neon (use the direct/unpooled connection)
psql "postgresql://user:pass@ep-cool-name-123456.us-east-2.aws.neon.tech/neondb?sslmode=require" \
  -f db9-export.sql
```

The `db9 db dump` command outputs plain SQL (up to 50,000 rows or 16 MB per table). For larger databases, use `psql` to stream individual tables with COPY.
## Caveats

- No zero-downtime migration — DB9 does not support logical replication, so you cannot stream changes from Neon in real time. Plan a maintenance window or accept a brief cutover period.
- Extension gaps — If your Neon database uses extensions not in DB9’s built-in set (e.g., PostGIS, `pg_trgm`, `pgcrypto`), those features will not be available. Check your `CREATE EXTENSION` statements.
- Dump size limits — The `db9 db sql -f` API import has limits (50,000 rows, 16 MB per table). For larger databases, import over a direct `psql` connection.
- Branching model — Neon branches are copy-on-write and instant. DB9 branches are full copies and take longer for large databases. Adjust CI workflows that depend on instant branching.
- Autoscaling — Neon can scale compute to zero when idle. DB9 databases are always on. This affects cost for rarely used databases.
## Next Pages

- Compatibility Matrix — full list of supported and unsupported PostgreSQL features
- Connect — connection string format and authentication options
- Migrate from PostgreSQL — general PostgreSQL migration path
- Migrate from Supabase — Supabase-specific migration guide
- Production Checklist — deployment readiness