
GORM

GORM connects to DB9 using the pgx v5 driver via gorm.io/driver/postgres. Struct-based models, AutoMigrate, the query builder, and transactions work out of the box.

DB9’s E2E smoke tests validate CRUD, transactions, savepoints, joins, subqueries, window functions, DDL, and vector operations through GORM.

Create a DB9 database:

Terminal
db9 create --name gorm-app

Get the connection string:

Terminal
db9 db status gorm-app
Set up a Go project and install GORM:

Terminal
mkdir gorm-db9 && cd gorm-db9
go mod init gorm-db9
go get gorm.io/gorm gorm.io/driver/postgres
Connect to your database:

main.go
package main

import (
	"log"

	"gorm.io/driver/postgres"
	"gorm.io/gorm"
	"gorm.io/gorm/logger"
)

func main() {
	dsn := "postgresql://gorm-app.admin:YOUR_PASSWORD@pg.db9.io:5433/postgres?sslmode=require"
	db, err := gorm.Open(
		postgres.New(postgres.Config{DSN: dsn}),
		&gorm.Config{
			Logger: logger.Default.LogMode(logger.Silent),
		},
	)
	if err != nil {
		log.Fatalf("failed to connect: %v", err)
	}

	// Verify the connection
	sqlDB, err := db.DB()
	if err != nil {
		log.Fatalf("get *sql.DB: %v", err)
	}
	if err := sqlDB.Ping(); err != nil {
		log.Fatalf("ping failed: %v", err)
	}
	log.Println("Connected to DB9")
}

Use Go structs with GORM tags:

models.go
package main

import "time"

type User struct {
	ID        uint           `gorm:"primaryKey"`
	Email     string         `gorm:"not null;uniqueIndex;size:255"`
	Name      string         `gorm:"not null;size:100"`
	Age       int            `gorm:"default:0"`
	IsActive  bool           `gorm:"default:true"`
	Bio       *string        `gorm:"type:text"`
	Metadata  map[string]any `gorm:"serializer:json"`
	CreatedAt time.Time
	UpdatedAt time.Time
	Posts     []Post `gorm:"foreignKey:AuthorID"`
}

type Post struct {
	ID        uint   `gorm:"primaryKey"`
	Title     string `gorm:"not null;size:500"`
	Content   string `gorm:"type:text;not null"`
	Published bool   `gorm:"default:false"`
	AuthorID  uint   `gorm:"not null"`
	Author    User   `gorm:"foreignKey:AuthorID"`
	CreatedAt time.Time
}

GORM features used:

  • primaryKey — auto-increment primary key
  • uniqueIndex — creates a unique index
  • serializer:json — automatically encodes/decodes map[string]any as JSONB
  • CreatedAt / UpdatedAt — auto-managed by GORM

Use AutoMigrate to create tables from your model definitions:

Go
if err := db.AutoMigrate(&User{}, &Post{}); err != nil {
	log.Fatalf("AutoMigrate: %v", err)
}

For production, use raw SQL for more control over schema changes:

Go
db.Exec(`ALTER TABLE users ADD COLUMN IF NOT EXISTS phone TEXT`)
db.Exec(`CREATE INDEX IF NOT EXISTS idx_users_email ON users (email)`)
Basic CRUD operations:

Go
// Create
user := User{
	Email:    "alice@example.com",
	Name:     "Alice",
	Age:      30,
	Metadata: map[string]any{"role": "admin"},
}
if err := db.Create(&user).Error; err != nil {
	log.Fatalf("create: %v", err)
}
// user.ID is now populated

// Read by primary key
var found User
db.First(&found, user.ID)

// Read with a condition
var alice User
db.Where("email = ?", "alice@example.com").First(&alice)

// Read with multiple conditions
var active []User
db.Where("age >= ? AND is_active = ?", 25, true).
	Order("name ASC").
	Limit(10).
	Find(&active)

// Update a single field
db.Model(&user).Update("name", "Alice Updated")

// Update multiple fields
db.Model(&user).Updates(map[string]any{
	"name": "Alice V2",
	"age":  31,
})

// Delete
db.Delete(&user)
Use db.Transaction for atomic multi-statement work:

Go
err := db.Transaction(func(tx *gorm.DB) error {
	user := User{Email: "txn@example.com", Name: "Txn User", Age: 25}
	if err := tx.Create(&user).Error; err != nil {
		return err
	}
	post := Post{Title: "Atomic Post", Content: "Created in a transaction.", AuthorID: user.ID}
	if err := tx.Create(&post).Error; err != nil {
		return err
	}
	// Returning a non-nil error would auto-rollback
	return nil // auto-commits
})
if err != nil {
	log.Fatalf("transaction: %v", err)
}
Or manage the transaction manually:

Go
tx := db.Begin()
if err := tx.Create(&User{Email: "manual@example.com", Name: "Manual"}).Error; err != nil {
	tx.Rollback()
	log.Fatal(err)
}
tx.Commit()

GORM automatically uses savepoints for nested Transaction calls:

Go
db.Transaction(func(tx *gorm.DB) error {
	tx.Create(&User{Email: "outer@example.com", Name: "Outer"})
	// Creates a savepoint internally
	tx.Transaction(func(nested *gorm.DB) error {
		nested.Exec("UPDATE users SET age = age + 1 WHERE email = 'outer@example.com'")
		return nil
	})
	return nil
})
Set the isolation level inside a manual transaction:

Go
tx := db.Begin()
tx.Exec("SET TRANSACTION ISOLATION LEVEL REPEATABLE READ")
// ... queries run at snapshot isolation
tx.Commit()

Supported: READ COMMITTED, REPEATABLE READ, SERIALIZABLE (runs as REPEATABLE READ on DB9).

Use db.Raw for SELECT queries and db.Exec for statements:

Go
// Raw SELECT
var count int64
db.Raw("SELECT COUNT(*) FROM users WHERE age > ?", 25).Scan(&count)

// Scan into a struct
var results []struct {
	Name  string `gorm:"column:name"`
	Total int64  `gorm:"column:total"`
}
db.Raw(`
	SELECT name, COUNT(*) AS total
	FROM users
	GROUP BY name
	HAVING COUNT(*) >= 1
`).Scan(&results)

// Window functions
var ranked []struct {
	ID int   `gorm:"column:id"`
	RN int64 `gorm:"column:rn"`
}
db.Raw(`
	SELECT id, ROW_NUMBER() OVER (PARTITION BY author_id ORDER BY id) AS rn
	FROM posts
`).Scan(&ranked)

// Joins
var ids []int
db.Raw(`
	SELECT u.id FROM users u
	INNER JOIN posts p ON p.author_id = u.id
`).Scan(&ids)

// Subquery
db.Raw(`
	SELECT id FROM users
	WHERE id IN (SELECT author_id FROM posts WHERE published = true)
`).Scan(&ids)

GORM does not have a native vector column type. Use raw SQL for vector operations:

Go
// Create vector table and index
db.Exec(`
	CREATE EXTENSION IF NOT EXISTS vector;
	CREATE TABLE IF NOT EXISTS embeddings (
		id SERIAL PRIMARY KEY,
		name VARCHAR(255),
		embedding vector(3) NOT NULL
	);
	CREATE INDEX IF NOT EXISTS idx_embeddings
		ON embeddings USING hnsw (embedding vector_l2_ops);
`)

// Insert vectors
db.Exec(`INSERT INTO embeddings (name, embedding) VALUES (?, ?)`, "doc-1", "[1.0, 2.0, 3.0]")

// k-NN search by L2 distance
var ids []int
db.Raw(`
	SELECT id FROM embeddings
	WHERE embedding <-> '[1.0, 0.0, 0.0]' < 1.0
	ORDER BY embedding <-> '[1.0, 0.0, 0.0]'
`).Scan(&ids)

// Cosine-distance search
var results []struct {
	Name     string  `gorm:"column:name"`
	Distance float64 `gorm:"column:distance"`
}
db.Raw(`
	SELECT name, cosine_distance(embedding, '[1.0, 1.0, 1.0]') AS distance
	FROM embeddings
	ORDER BY distance ASC
	LIMIT 5
`).Scan(&results)

DB9 supports HNSW indexes with vector_l2_ops, vector_cosine_ops, and vector_ip_ops. IVFFlat is not supported.

For production, manage migrations with raw SQL:

Go
// Add a column
db.Exec("ALTER TABLE users ADD COLUMN IF NOT EXISTS phone TEXT")
// Create an index
db.Exec("CREATE INDEX IF NOT EXISTS idx_users_age ON users (age)")
// Drop a column
db.Exec("ALTER TABLE users DROP COLUMN IF EXISTS phone")

AutoMigrate can be used for development but relies on information_schema introspection that may not fully work with DB9 for complex alterations.

  • Driver: GORM uses gorm.io/driver/postgres which wraps pgx v5. No need to install lib/pq.
  • Connection pooling: Access the underlying *sql.DB via db.DB() to configure pool settings (SetMaxOpenConns, SetMaxIdleConns, SetConnMaxLifetime).
  • JSON columns: Use gorm:"serializer:json" for map[string]any fields. GORM automatically handles JSON encoding and decoding.
  • Timestamp precision: DB9 stores timestamps with millisecond precision. If your application relies on microsecond precision, be aware of up to 1ms drift on round-trips.
  • Connection string: Use the format postgresql://gorm-app.admin:{password}@pg.db9.io:5433/postgres?sslmode=require.

AutoMigrate fails or creates unexpected schema


DB9’s information_schema support is limited. For complex schema changes, use db.Exec() with raw DDL. AutoMigrate works for initial table creation but may not reliably detect existing schema differences.

Connection errors

Verify the host (pg.db9.io), port (5433), and username format ({database-name}.admin). DB9 routes connections by parsing the tenant from the username.

TLS errors

DB9 requires TLS in production. Ensure your connection string includes sslmode=require. For local development, use sslmode=disable.

Timestamp comparisons drift

DB9 stores timestamps with millisecond precision. If you compare timestamps after a round-trip, allow up to 1ms delta:

Go
delta := createdAt.Sub(fromDB.CreatedAt).Abs()
if delta > time.Millisecond {
	// unexpected drift
}

Tested with GORM 1.25+ and pgx v5 against DB9. E2E smoke tests cover:

Category                                        Status
Connection and pooling                          Pass
CRUD (Create, First, Where, Update, Delete)     Pass
Transactions and savepoints                     Pass
AutoMigrate                                     Pass
JSON serialization (map → JSONB)                Pass
Joins (INNER, LEFT)                             Pass
GROUP BY / HAVING                               Pass
Subqueries and window functions                 Pass
Vector operations and HNSW                      Pass
DDL (ALTER TABLE, CREATE INDEX)                 Pass