
CI Ephemeral Databases

Every CI run can get its own disposable DB9 database with real PostgreSQL compatibility — no Docker, no shared state, no flaky mocks. Create a database at the start of the job, run your tests, and delete it when done.

This guide shows two strategies and how to wire them into GitHub Actions and other CI systems.

| Strategy | Best for | Speed | Data |
| --- | --- | --- | --- |
| Ephemeral database | Clean-slate tests, schema-only validation | Instant (under 1 s) | Empty |
| Branch | Tests that need production data, migration testing | Seconds to minutes | Full copy of parent |

Use ephemeral databases when your tests create their own fixtures. Use branches when you need a copy of real data to test against.

Prerequisites:

  • A DB9 account with the CLI installed (Quick Start)
  • A CI environment that can run shell commands (GitHub Actions, GitLab CI, etc.)
  • A DB9_TOKEN for authentication (create one with db9 token create)

Create a fresh, empty database for each CI run. Tests apply their own migrations and seed data.

Terminal
# Generate a unique name for this run
DB_NAME="ci-${GITHUB_RUN_ID:-$(date +%s)}"
# Create the database (synchronous, <1 second)
db9 create --name "$DB_NAME" --show-connection-string
# Get the connection string for your test runner
export DATABASE_URL=$(db9 db status "$DB_NAME" --json | jq -r .connection_string)
# Run migrations and tests
npx prisma db push
npm test
# Clean up
db9 delete "$DB_NAME" --yes

Database creation is synchronous — the database is ready to accept connections as soon as db9 create returns.

.github/workflows/test.yml
name: Tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    env:
      DB9_TOKEN: ${{ secrets.DB9_TOKEN }}
    steps:
      - uses: actions/checkout@v4
      - name: Install DB9 CLI
        run: curl -fsSL https://get.db9.io | sh
      - name: Create ephemeral database
        run: |
          DB_NAME="ci-${{ github.run_id }}-${{ github.run_attempt }}"
          db9 create --name "$DB_NAME"
          echo "DB_NAME=$DB_NAME" >> "$GITHUB_ENV"
          echo "DATABASE_URL=$(db9 db status "$DB_NAME" --json | jq -r .connection_string)" >> "$GITHUB_ENV"
      - name: Run tests
        run: |
          npx prisma db push
          npm test
      - name: Cleanup
        if: always()
        run: db9 delete "$DB_NAME" --yes

Key points:

  • DB9_TOKEN authenticates the CLI — store it as a repository secret.
  • if: always() on the cleanup step ensures the database is deleted even if tests fail.
  • Include run_attempt in the name to avoid conflicts on re-runs.
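
The naming rule above can be wrapped in a small helper. This is a sketch, assuming the standard GITHUB_RUN_ID and GITHUB_RUN_ATTEMPT variables GitHub Actions provides; the function name is illustrative:

```shell
# ci_db_name: build a collision-free database name from CI context.
# Falls back to a timestamp outside GitHub Actions so local runs still work.
ci_db_name() {
  run_id="${GITHUB_RUN_ID:-$(date +%s)}"
  attempt="${GITHUB_RUN_ATTEMPT:-1}"
  echo "ci-${run_id}-${attempt}"
}

# Then create with: db9 create --name "$(ci_db_name)"
GITHUB_RUN_ID=12345678 GITHUB_RUN_ATTEMPT=2 ci_db_name
# → ci-12345678-2
```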

Use instantDatabase for test harnesses that manage their own lifecycle:

test/setup.ts
import { instantDatabase, createDb9Client } from 'get-db9';

let databaseId: string;

export async function setupTestDb() {
  const db = await instantDatabase({
    name: `ci-${process.env.GITHUB_RUN_ID || Date.now()}`,
  });
  databaseId = db.databaseId;
  return {
    connectionString: db.connectionString,
    databaseId: db.databaseId,
  };
}

export async function teardownTestDb() {
  if (databaseId) {
    const client = createDb9Client();
    await client.databases.delete(databaseId);
  }
}

instantDatabase is idempotent — if the database already exists, it returns the existing one without error. The seed parameter only runs on first creation.

Create a branch from a shared parent database. The branch starts with a full copy of the parent’s schema and data — useful for testing migrations against real data or running integration tests that expect pre-existing records.

Terminal
# Branch from the staging database
BRANCH_NAME="ci-${GITHUB_RUN_ID:-$(date +%s)}"
db9 branch create staging --name "$BRANCH_NAME"
# Wait for the branch to become ACTIVE
while true; do
  STATE=$(db9 db status "$BRANCH_NAME" --json | jq -r .state)
  if [ "$STATE" = "ACTIVE" ]; then
    break
  elif [ "$STATE" = "CREATE_FAILED" ]; then
    echo "Branch creation failed"
    db9 db status "$BRANCH_NAME" --json | jq .state_reason
    exit 1
  fi
  sleep 2
done
# Get the connection string
export DATABASE_URL=$(db9 db status "$BRANCH_NAME" --json | jq -r .connection_string)
# Run tests against the branch (has all of staging's data)
npm test
# Clean up
db9 delete "$BRANCH_NAME" --yes

Branch creation is asynchronous — you must poll until the state is ACTIVE before connecting. Branches of small databases typically complete in seconds; larger databases take longer.
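
The poll loop factors nicely into a reusable function with an explicit attempt limit. This is a sketch, not part of the db9 CLI; the status query is passed in as a command name:

```shell
# wait_for_state CMD WANT FAIL [MAX_TRIES]: run CMD until it prints WANT
# (success) or FAIL (failure), giving up after MAX_TRIES polls.
wait_for_state() {
  cmd="$1"; want="$2"; fail="$3"; max="${4:-60}"
  i=1
  while [ "$i" -le "$max" ]; do
    state=$("$cmd")
    [ "$state" = "$want" ] && return 0
    [ "$state" = "$fail" ] && return 1
    sleep 2
    i=$((i + 1))
  done
  return 1
}

# Usage sketch: wrap the status query in a function, then poll it.
# branch_state() { db9 db status "$BRANCH_NAME" --json | jq -r .state; }
# wait_for_state branch_state ACTIVE CREATE_FAILED 90 || exit 1
```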

.github/workflows/integration.yml
name: Integration Tests
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    env:
      DB9_TOKEN: ${{ secrets.DB9_TOKEN }}
    steps:
      - uses: actions/checkout@v4
      - name: Install DB9 CLI
        run: curl -fsSL https://get.db9.io | sh
      - name: Create branch from staging
        run: |
          BRANCH_NAME="ci-${{ github.run_id }}-${{ github.run_attempt }}"
          db9 branch create staging --name "$BRANCH_NAME"
          # Poll until ready
          for i in $(seq 1 60); do
            STATE=$(db9 db status "$BRANCH_NAME" --json | jq -r .state)
            [ "$STATE" = "ACTIVE" ] && break
            [ "$STATE" = "CREATE_FAILED" ] && exit 1
            sleep 2
          done
          # Fail the job if the branch never became ACTIVE
          [ "$STATE" = "ACTIVE" ] || { echo "Branch not ready after timeout"; exit 1; }
          echo "DB_NAME=$BRANCH_NAME" >> "$GITHUB_ENV"
          echo "DATABASE_URL=$(db9 db status "$BRANCH_NAME" --json | jq -r .connection_string)" >> "$GITHUB_ENV"
      - name: Apply pending migration
        run: db9 db sql "$DB_NAME" -f migrations/latest.sql
      - name: Run integration tests
        run: npm test
      - name: Cleanup
        if: always()
        run: db9 delete "$DB_NAME" --yes
The same flow with the SDK:

test/branch-setup.ts
import { createDb9Client } from 'get-db9';

const client = createDb9Client();

export async function createTestBranch(parentId: string) {
  const branch = await client.databases.branch(parentId, {
    name: `ci-${process.env.GITHUB_RUN_ID || Date.now()}`,
  });
  // Poll until ACTIVE, failing fast on CREATE_FAILED and skipping the
  // extra sleep once the branch is ready
  let status = await client.databases.get(branch.id);
  while (status.state !== 'ACTIVE') {
    if (status.state === 'CREATE_FAILED') {
      throw new Error(`Branch failed: ${status.state_reason}`);
    }
    await new Promise((r) => setTimeout(r, 2000));
    status = await client.databases.get(branch.id);
  }
  return {
    databaseId: branch.id,
    connectionString: status.connection_string,
  };
}

export async function deleteTestBranch(databaseId: string) {
  await client.databases.delete(databaseId);
}

When running tests in parallel (e.g., Jest workers, pytest-xdist), give each worker its own database to avoid conflicts.

test/worker-setup.ts
import { instantDatabase } from 'get-db9';

const runId = process.env.GITHUB_RUN_ID || Date.now();
const workerId = process.env.JEST_WORKER_ID || '1';

export async function getWorkerDb() {
  return instantDatabase({
    name: `ci-${runId}-w${workerId}`,
    seed: `
      CREATE TABLE users (id SERIAL PRIMARY KEY, email TEXT UNIQUE, name TEXT);
      CREATE TABLE posts (id SERIAL PRIMARY KEY, title TEXT, author_id INT REFERENCES users(id));
    `,
  });
}

The seed SQL runs only when the database is first created — subsequent calls with the same name return the existing database.
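
The same worker-scoped naming is easy to reproduce from shell when the runner is not Node. A sketch assuming Jest's JEST_WORKER_ID; pytest-xdist exposes PYTEST_XDIST_WORKER instead:

```shell
# worker_db_name: one database name per (run, worker) pair, mirroring
# the TypeScript example above. Fallbacks keep local runs working.
worker_db_name() {
  run_id="${GITHUB_RUN_ID:-local}"
  worker="${JEST_WORKER_ID:-1}"
  echo "ci-${run_id}-w${worker}"
}

GITHUB_RUN_ID=12345678 JEST_WORKER_ID=3 worker_db_name
# → ci-12345678-w3
```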

To clean up every database from a run in one sweep:

Terminal
# Delete all databases matching the CI run prefix
for db in $(db9 list --json | jq -r '.[].name' | grep "^ci-${GITHUB_RUN_ID}"); do
  db9 delete "$db" --yes
done
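
Prefix cleanups have two sharp edges: an unset GITHUB_RUN_ID turns the pattern into ^ci-, matching every CI database, and a bare prefix for run 123 also matches run 1234. A guarded sketch; the helper name is illustrative, not a db9 command:

```shell
# select_run_dbs: read database names on stdin and print only those that
# belong to one CI run. Refuses an empty run id, and anchors the match so
# run "123" does not also select "1234".
select_run_dbs() {
  run_id="$1"
  if [ -z "$run_id" ]; then
    echo "refusing cleanup: empty run id" >&2
    return 1
  fi
  grep -E "^ci-${run_id}(-|$)" || true
}

# Usage sketch:
# db9 list --json | jq -r '.[].name' | select_run_dbs "$GITHUB_RUN_ID" \
#   | xargs -r -n1 -I{} db9 delete {} --yes
printf 'ci-123-w1\nci-1234-w1\nstaging\n' | select_run_dbs "123"
# → ci-123-w1
```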

For CI environments where you want to limit credential exposure, use connect tokens instead of the admin password:

Terminal
# Get a short-lived token (10-minute TTL, range: 5–15 minutes)
TOKEN_JSON=$(db9 db connect-token "$DB_NAME")
# Extract connection details (DB_USER avoids clobbering the shell's own USER)
DB_HOST=$(echo "$TOKEN_JSON" | jq -r .host)
DB_PORT=$(echo "$TOKEN_JSON" | jq -r .port)
DB_USER=$(echo "$TOKEN_JSON" | jq -r .user)
TOKEN=$(echo "$TOKEN_JSON" | jq -r .token)
# Use with psql or any driver
PGPASSWORD="$TOKEN" psql "postgresql://${DB_USER}@${DB_HOST}:${DB_PORT}/postgres?sslmode=require"

Connect tokens are RS256 JWTs that expire automatically. They cannot be used after expiry, so leaked CI logs pose less risk than static passwords.
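
Some drivers prefer a single URL over a separate PGPASSWORD; the token can sit in the password slot because base64url-encoded JWTs use only URL-safe characters. A sketch with hypothetical field values; real values come from db9 db connect-token as shown above:

```shell
# build_pg_url: assemble a PostgreSQL URL from connect-token fields.
build_pg_url() {
  user="$1"; token="$2"; host="$3"; port="$4"
  echo "postgresql://${user}:${token}@${host}:${port}/postgres?sslmode=require"
}

build_pg_url ci_user tok123 pg.db9.io 5433
# → postgresql://ci_user:tok123@pg.db9.io:5433/postgres?sslmode=require
```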

Delete the database in the same CI job using if: always():

YAML
- name: Cleanup
if: always()
run: db9 delete "$DB_NAME" --yes

If CI jobs sometimes fail without running cleanup, add a scheduled workflow to catch orphans:

.github/workflows/cleanup.yml
name: DB9 Cleanup
on:
  schedule:
    - cron: '0 */6 * * *' # Every 6 hours
jobs:
  cleanup:
    runs-on: ubuntu-latest
    env:
      DB9_TOKEN: ${{ secrets.DB9_TOKEN }}
    steps:
      - name: Install DB9 CLI
        run: curl -fsSL https://get.db9.io | sh
      - name: Delete stale CI databases
        run: |
          db9 list --json | jq -r '.[].name' | grep '^ci-' | while read -r name; do
            echo "Deleting $name"
            db9 delete "$name" --yes
          done

Note that this deletes every ci-* database, including any that belong to still-running jobs; schedule it for quiet hours, or filter by age if you can determine when each database was created.

Use deterministic names so CI runs don’t accumulate orphans:

| Pattern | Example | Use case |
| --- | --- | --- |
| ci-{run_id} | ci-12345678 | One database per run |
| ci-{run_id}-w{worker} | ci-12345678-w3 | Parallel workers |
| ci-{run_id}-{attempt} | ci-12345678-2 | Re-runs without conflicts |
| pr-{pr_number} | pr-42 | Preview environments (reused across pushes) |

Keep these limits in mind:

  • Anonymous accounts: Limited to 5 databases total (including branches). Run db9 claim to remove the limit.
  • Concurrent branch creations: Maximum 2 at a time. A third returns HTTP 429 — retry after the in-progress branches finish.
  • Branch creation is async: Always poll for ACTIVE state. Database creation (db9 create) is synchronous and instant.
  • Branches are full copies: Each branch consumes its own storage. Large parent databases take longer to branch and use more quota.
  • Deletion is permanent: There is no undo. Ensure cleanup runs only delete databases with your CI prefix.

"Database name already exists" on db9 create: include the run attempt number in the name to handle CI re-runs, or use the SDK's instantDatabase, which is idempotent.

HTTP 429 when creating branches: only two branch creations can be in flight at once. If your CI creates branches in parallel, serialize the creation steps or use ephemeral databases instead (there is no concurrency limit on db9 create).
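
If serializing branch creation is impractical, a bounded retry with linear backoff around the create call usually absorbs the 429s. A sketch; retry is an illustrative helper, not a db9 flag:

```shell
# retry MAX CMD...: run CMD up to MAX times, sleeping n seconds after the
# n-th failed attempt. Returns failure if every attempt fails.
retry() {
  max="$1"; shift
  n=1
  while [ "$n" -le "$max" ]; do
    "$@" && return 0
    [ "$n" -lt "$max" ] && sleep "$n"
    n=$((n + 1))
  done
  return 1
}

# Usage sketch:
# retry 5 db9 branch create staging --name "$BRANCH_NAME"
```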

Database not cleaned up after failed CI run


Add if: always() to your cleanup step. For persistent orphans, use the scheduled cleanup workflow above. All CI databases should use a consistent naming prefix (e.g., ci-) for safe bulk deletion.

Connection failures: verify the connection string uses port 5433 and host pg.db9.io, and ensure sslmode=require is set — DB9 requires TLS for all connections.
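
A pre-flight check in CI can fail fast on a malformed connection string before the test suite starts. A sketch; the host and port values come from the paragraph above, and the function name is illustrative:

```shell
# check_conn_string: sanity-check that a DATABASE_URL points at DB9's
# PostgreSQL endpoint and has TLS enabled.
check_conn_string() {
  url="$1"
  case "$url" in
    *"pg.db9.io:5433"*"sslmode=require"*) return 0 ;;
    *) echo "unexpected connection string: $url" >&2; return 1 ;;
  esac
}

check_conn_string "postgresql://u:p@pg.db9.io:5433/postgres?sslmode=require" && echo ok
# → ok
```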