# Production Checklist
Use this checklist before promoting a DB9 database from development to production. Each section covers what to do, why it matters, and how to verify.
## Decision summary

| Area | Recommended default |
|---|---|
| Authentication | Short-lived connect tokens, rotated per deployment |
| Secrets | Never commit credentials; pass via environment variables |
| Connection strings | Use sslmode=require and port 5433 |
| Branching | One main database for production; branches for preview and CI |
| Observability | db9 db inspect for live metrics; periodic slow query review |
| Recovery | Branches as snapshots; application-level backup for critical data |
| Limits | Know the per-tenant boundaries before launch |
## Authentication

### Use connect tokens in production

Connect tokens expire automatically (default: 10 minutes) and are the recommended credential for production workloads. Generate them at deploy time or on a short rotation schedule.
```shell
# Generate a connect token
db9 db connect-token myapp
```

The token is used as the PostgreSQL password. Your application should fetch a fresh token at startup or on reconnect.
For the TypeScript SDK:

```typescript
import { createDb9Client } from 'get-db9';

const client = createDb9Client();
const { token, host, port, user, expires_at } =
  await client.databases.connectToken(databaseId);
```

### Avoid static passwords in production
Static passwords set with `db9 db reset-password` do not expire. They are convenient during development but create a credential that lives forever unless manually rotated. If you must use a static password, rotate it regularly and store it in a secrets manager.
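Because connect tokens expire, the application needs a rule for when to fetch a fresh one. A minimal sketch of that check, assuming the ISO `expires_at` field returned by the SDK above; the helper name and the 60-second safety margin are illustrative, not part of DB9:

```typescript
// Refresh the cached connect token when it is within this margin of
// expiry. The margin is an illustrative safety buffer, not a DB9 value.
const REFRESH_MARGIN_MS = 60_000;

// `expiresAt` is the ISO timestamp from the token endpoint (`expires_at`).
function needsRefresh(expiresAt: string, now: Date = new Date()): boolean {
  const expiryMs = Date.parse(expiresAt);
  return expiryMs - now.getTime() <= REFRESH_MARGIN_MS;
}
```

On reconnect or before opening a new connection, call `needsRefresh` on the cached token and request a new one from the SDK when it returns `true`.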
### API tokens for automation

Use named API tokens for CI pipelines and infrastructure automation that manage databases through the REST API or SDK — not for direct SQL connections.
```shell
db9 token create --name ci-deploy --expires-in-days 90
```

Set the resulting token as `DB9_API_KEY` in your CI environment. List and revoke tokens with `db9 token list` and `db9 token revoke`.
## Secrets management

| Rule | Details |
|---|---|
| Never commit credentials | Connection strings, API tokens, and passwords belong in environment variables or a secrets manager — not in source control. |
| Use PGPASSWORD or DATABASE_URL | Standard PostgreSQL environment variables work with DB9. |
| Rotate connect tokens per deployment | Generate a fresh token each time your application starts. |
| Scope API tokens narrowly | Create separate tokens for CI, monitoring, and admin tasks with appropriate expiration. |
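The rules above boil down to: resolve credentials from the environment at startup, never from source. A minimal sketch, assuming `DATABASE_URL` takes precedence and falling back to the usual libpq-style `PG*` variables (the fallback variable names and error message are assumptions, not DB9 requirements):

```typescript
// Resolve the connection string from the environment.
// Prefer DATABASE_URL; otherwise assemble one from PG* variables,
// applying DB9's required port 5433 and sslmode=require.
function connectionString(env: Record<string, string | undefined>): string {
  if (env.DATABASE_URL) return env.DATABASE_URL;
  const { PGHOST, PGPORT = '5433', PGUSER, PGPASSWORD } = env;
  if (!PGHOST || !PGUSER || !PGPASSWORD) {
    throw new Error('missing PGHOST/PGUSER/PGPASSWORD in environment');
  }
  return `postgresql://${PGUSER}:${PGPASSWORD}@${PGHOST}:${PGPORT}/postgres?sslmode=require`;
}
```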
## Connection configuration

### Required settings

- Host: `api.db9.ai`
- Port: `5433`
- Database: `postgres`
- Username: `<tenant_id>.admin`
- SSL: `sslmode=require`

DB9 uses port 5433, not the PostgreSQL default 5432. All connections use the `postgres` database name. The username format is `<tenant_id>.<role>` — see Connect to DB9 for details.
DB9’s hosted service supports TLS with SCRAM-SHA-256 authentication. Always set sslmode=require in production connection strings:
```
postgresql://a1b2c3d4e5f6.admin:token@api.db9.ai:5433/postgres?sslmode=require
```

### Connection pooling

db9-server manages per-tenant connection pooling internally. Idle tenant connections are evicted after 5 minutes by default. For applications with bursty traffic, keep connections alive with a lightweight health check query rather than relying on reconnection.
If your application framework provides its own connection pool (most ORMs do), configure it with:
- A reasonable pool size (start with 5–10 connections)
- A connection timeout that accounts for TLS handshake latency
- Reconnection logic that fetches a fresh connect token on auth failure
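The three bullets above translate directly into pool settings. A starting-point sketch using node-postgres (`pg`) option names; the specific numbers are tuning defaults taken from the guidance above, not DB9 requirements:

```typescript
// Starting-point pool settings for an application-side pool.
// Field names match node-postgres (`pg`) Pool options.
function poolConfig(connectionString: string) {
  return {
    connectionString,
    max: 10,                         // pool size: start with 5-10 connections
    connectionTimeoutMillis: 10_000, // generous enough for the TLS handshake
    idleTimeoutMillis: 4 * 60_000,   // recycle before the ~5 min server-side eviction
  };
}
```

On an authentication failure, the reconnect path should discard the pool's cached credential, fetch a fresh connect token, and rebuild the configuration with the new connection string.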
## Branching strategy

DB9 branches create isolated copies of a database at a point in time. Use them to separate production from non-production workloads.
### Recommended pattern

| Environment | Approach |
|---|---|
| Production | One main database. No branches on this database during normal operation. |
| Staging / Preview | Branch from production for realistic preview environments. Delete branches when the preview closes. |
| CI / Testing | Create ephemeral branches per test run. Delete after tests complete. |
| Development | Each developer can create personal branches for isolated experimentation. |
### Branch lifecycle

Branches do not auto-expire. Clean up branches you no longer need:
```shell
# List branches
db9 branch list myapp

# Delete a branch
db9 branch delete preview-42
```

For CI workflows, script branch creation and deletion as part of your pipeline:
```shell
# Create a branch for this CI run
db9 branch create myapp --name "ci-${CI_BUILD_ID}"

# Run tests against the branch
# ...

# Clean up
db9 branch delete "ci-${CI_BUILD_ID}"
```

Branch creation has a timeout of 5 minutes (300 seconds) by default. For large databases, factor this into your CI pipeline timing.
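If the pipeline scripting lives in TypeScript rather than shell, the `ci-${CI_BUILD_ID}` naming convention above can be centralized in one helper. A sketch; the sanitization rule (lowercase letters, digits, hyphens) is a conservative assumption, since DB9's documented branch-name constraints are not stated here:

```typescript
// Derive a deterministic branch name for a CI run, mirroring the
// `ci-${CI_BUILD_ID}` pattern. The allowed-character set is an
// assumption, not a documented DB9 rule.
function ciBranchName(buildId: string): string {
  const safe = buildId.toLowerCase().replace(/[^a-z0-9-]+/g, '-');
  return `ci-${safe}`;
}
```

Using one helper for both the create and delete steps avoids the classic CI leak where the cleanup step computes a slightly different name than the create step.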
## Observability

### Live metrics with `db9 db inspect`

The inspect command provides real-time metrics for a running database:
```shell
# Summary: QPS, TPS, latency, active connections
db9 db inspect myapp

# Recent query samples with latency
db9 db inspect myapp queries

# Full report (summary + query samples)
db9 db inspect myapp report

# Schema, table, and index information
db9 db inspect myapp schemas
db9 db inspect myapp tables
db9 db inspect myapp indexes
```

### Slow query review
Identify slow queries with:

```shell
db9 db inspect myapp slow-queries
```

Review slow queries periodically — especially after schema changes, new feature deployments, or load increases. Use `EXPLAIN` against the database to investigate query plans.
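When feeding captured slow queries back into `EXPLAIN`, it helps to normalize the statement first. A trivial sketch, assuming standard PostgreSQL `EXPLAIN` syntax (plain `EXPLAIN` shows the plan without executing the statement); the helper name is illustrative:

```typescript
// Wrap a captured query text in EXPLAIN, stripping any trailing
// semicolon so the result is a single valid statement.
function explainQuery(sql: string): string {
  return `EXPLAIN ${sql.trim().replace(/;\s*$/, '')};`;
}
```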
### What you can see today

| Metric | Available |
|---|---|
| Queries per second (QPS) | Yes |
| Transactions per second (TPS) | Yes |
| Average and p99 latency | Yes |
| Active connection count | Yes |
| Query samples with individual latency | Yes |
| Slow query log | Yes |
| Schema / table / index inspection | Yes |
| External metrics export (Prometheus, Datadog) | Not yet |
| Alerting | Not yet |
DB9 does not currently export metrics to external monitoring systems. Use db9 db inspect as your primary observability tool and build alerting around your application-level metrics for now.
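Since alerting has to live in application code for now, a simple threshold check over latency samples collected by your own instrumentation can stand in. A sketch; the p99 computation is standard, but the 250 ms budget and function names are illustrative:

```typescript
// Compute p99 latency over collected samples (nearest-rank method).
function p99(samplesMs: number[]): number {
  const sorted = [...samplesMs].sort((a, b) => a - b);
  const idx = Math.min(sorted.length - 1, Math.ceil(0.99 * sorted.length) - 1);
  return sorted[idx];
}

// Fire an alarm when p99 latency exceeds the budget.
// The 250 ms default is an illustrative threshold to tune.
function latencyAlarm(samplesMs: number[], budgetMs = 250): boolean {
  return samplesMs.length > 0 && p99(samplesMs) > budgetMs;
}
```

Hook a check like this into whatever notification path your application already has (pager, Slack webhook, log-based alerts) until DB9 offers native metric export.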
## Recovery expectations

### What DB9 provides

- Branch-based snapshots: Create a branch at any time to capture a point-in-time copy of your database. This is the primary mechanism for creating restore points.
- TiKV durability: Data is replicated across TiKV nodes with Raft consensus. Individual node failures do not cause data loss.
- Reconciler: A background process detects and recovers failed provisioning or deletion operations (stuck states recover within ~10 minutes).
### What DB9 does not provide today

- Automated point-in-time recovery (PITR): There is no built-in continuous backup with arbitrary restore points. Use branches as manual snapshots before risky operations.
- Cross-region replication: Data lives in one region. Plan accordingly for disaster recovery.
- Self-service backup export: Database dumps are limited to 50,000 rows and 16 MB. For larger datasets, export data through your application logic or use `COPY` over a pgwire connection.
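An application-level export that respects the 50,000-row dump cap needs to split the work into batches. A sketch of the range computation; the row limit comes from the text above, while the offset/limit pagination strategy is an assumption (keyset pagination is usually preferable for large tables):

```typescript
// Row cap per dump, from the limit stated above.
const DUMP_ROW_LIMIT = 50_000;

// Split a total row count into dump-sized (offset, limit) ranges.
function exportBatches(
  totalRows: number,
  batchSize = DUMP_ROW_LIMIT,
): Array<{ offset: number; limit: number }> {
  const batches: Array<{ offset: number; limit: number }> = [];
  for (let offset = 0; offset < totalRows; offset += batchSize) {
    batches.push({ offset, limit: Math.min(batchSize, totalRows - offset) });
  }
  return batches;
}
```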
### Recommended practices

- Before migrations: Create a branch as a snapshot before running schema changes.

  ```shell
  db9 branch create myapp --name "pre-migration-$(date +%Y%m%d)"
  # Run migration
  # Verify
  # Delete snapshot branch when confident
  ```

- Critical data: Maintain application-level backups (periodic `COPY TO` or scheduled exports) for data that cannot be reconstructed.
- Test recovery: Periodically create a branch, connect to it, and verify your data is accessible. Do not assume recovery works without testing it.
## Limits and quotas

Know these boundaries before launching:
| Limit | Value | Notes |
|---|---|---|
| Database count | Varies by account tier | Anonymous accounts: 5 databases. Verified accounts: higher limits. |
| Connection limit | 1000 per db9-server instance | Per-user connection limits enforced via PostgreSQL role settings. |
| Database dump | 50,000 rows / 16 MB | Via the REST API dump endpoint. No limit on COPY over pgwire. |
| Connect token TTL | 10 minutes (default) | Configurable by the platform operator. |
| Branch clone timeout | 5 minutes (default) | Large databases may need more time. |
| Serializable isolation | Accepted but downgraded | DB9 runs at Snapshot Isolation (equivalent to Repeatable Read). SET TRANSACTION ISOLATION LEVEL SERIALIZABLE is accepted but does not provide SSI guarantees. |
| Per-tenant QPS limit | Disabled by default | Can be configured by the platform operator. |
For detailed SQL compatibility limits, see SQL Limits & Constraints.
## Pre-launch verification

Run through these checks before going live:

```shell
# 1. Verify you can connect with a fresh connect token
db9 db connect-token myapp

# 2. Check database health
db9 db inspect myapp

# 3. Verify TLS
psql "postgresql://TENANT.admin@api.db9.ai:5433/postgres?sslmode=require"

# 4. Run your application's health check against the database
# (application-specific)

# 5. Confirm branches are cleaned up
db9 branch list myapp

# 6. Review slow queries from recent testing
db9 db inspect myapp slow-queries
```

## Checklist summary
- Using connect tokens (not static passwords) for application auth
- Credentials stored in environment variables or secrets manager
- Connection string uses port `5433`, database `postgres`, `sslmode=require`
- Branching strategy documented: production database, preview branches, CI branches
- Stale branches cleaned up
- `db9 db inspect` reviewed for baseline metrics
- Slow queries checked and addressed
- Recovery plan documented: pre-migration branches, application-level backups for critical data
- Account limits understood (database count, connection limit, dump size)
- Serializable isolation caveat understood if your application uses it
- API tokens scoped and expiring for CI and automation
## Next steps

- Connect to DB9 — connection strings, drivers, and authentication details
- Architecture — how DB9 isolates tenants and processes queries
- CLI Reference — full command reference including `db inspect`, `db branch`, and `token`
- SQL Limits & Constraints — detailed compatibility notes
- Extensions — enable vector search, fs9, HTTP from SQL, and pg_cron