# File Anatomy & Safety
## Migration File Structure

Every migration file has a standard structure with metadata fields and
up/down functions:

```js
export default migration({
  description: "Add user authentication tables",
  up_revision: "001",
  down_revision: null, // null for the first migration

  up(m) { /* ... */ },
  down(m) { /* ... */ },
});
```

| Field | Description |
|---|---|
| `description` | Human-readable summary shown in `alab status` |
| `up_revision` | This migration's revision ID (auto-filled from the filename) |
| `down_revision` | The previous migration's revision ID (`null` for the first migration) |
Here's a complete example:

```js
export default migration({
  description: "Create auth and blog tables",
  up_revision: "001",
  down_revision: null,

  up(m) {
    m.create_table("auth.role", (t) => {
      t.id();
      t.string("name", 100).unique();
      t.timestamps();
    });

    m.create_table("auth.user", (t) => {
      t.id();
      t.string("email", 255).unique();
      t.boolean("is_active").default(true);
      t.string("password", 255);
      t.string("username", 50).unique();
      t.timestamps();
    });

    m.create_table("blog.post", (t) => {
      t.id();
      t.belongs_to("auth.user").as("author");
      t.text("body");
      t.string("title", 200);
      t.timestamps();
    });
    m.create_index("blog.post", ["author_id"]);
  },

  down(m) {
    m.drop_table("blog.post");
    m.drop_table("auth.user");
    m.drop_table("auth.role");
  },
});
```

Tables are created in dependency order (referenced tables first). The `down()`
function reverses everything in `up()`. See the
DSL Reference for all available `m.*` and `t.*`
methods.
## Backfilling Data

When adding a NOT NULL column to a table with existing rows, use the
`.backfill()` modifier to safely populate those rows:

```js
export default migration({
  up(m) {
    m.add_column("blog.post", (col) =>
      col.string("status", 20)
        .default("draft")
        .backfill("draft") // populates existing rows
    );
  },

  down(m) {
    m.drop_column("blog.post", "status");
  },
});
```

How it works:
- The column is added as nullable first
- A batched UPDATE populates existing rows (5,000 rows per batch)
- A 1-second sleep between batches allows autovacuum to run
- `WHERE column IS NULL` makes the operation idempotent (safe to re-run)
- Finally, the NOT NULL constraint is applied
Performance: ~4,000-5,000 rows/sec throughput; a 1M-row table takes roughly 3-4 minutes to backfill.
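The batching strategy above can be sketched in plain JavaScript. This is a simplified in-memory model, not AstrolaDB's actual runner: `rows` stands in for the table, and the filter plays the role of the `WHERE column IS NULL` clause in the real batched UPDATE.

```js
const BATCH_SIZE = 5000;

// Simplified model of the batched backfill described above. A real
// implementation would issue SQL such as:
//   UPDATE blog.post SET status = 'draft'
//   WHERE id IN (SELECT id FROM blog.post WHERE status IS NULL LIMIT 5000)
async function backfill(rows, column, value, sleepMs = 1000) {
  let batches = 0;
  for (;;) {
    // Selecting only NULL rows keeps the operation idempotent:
    // already-populated rows are never touched on a re-run.
    const pending = rows.filter((r) => r[column] === null).slice(0, BATCH_SIZE);
    if (pending.length === 0) break;
    for (const r of pending) r[column] = value; // the batched UPDATE
    batches += 1;
    // Pause between batches so autovacuum can keep up with dead tuples.
    await new Promise((resolve) => setTimeout(resolve, sleepMs));
  }
  return batches;
}
```

With 12,000 unpopulated rows this runs three batches (5,000 + 5,000 + 2,000), sleeping between each.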
## Rename Hints

If the interactive rename detection (during `alab new`) doesn't catch a rename,
you can provide explicit hints in the migration file:
```js
export default migration({
  renames: {
    columns: {
      "auth.user.email": "email_address",
    },
  },

  up(m) {
    m.rename_column("auth.user", "email", "email_address");
  },

  down(m) {
    m.rename_column("auth.user", "email_address", "email");
  },
});
```

The `renames.columns` object maps `namespace.table.old_column` to the new
column name. This bypasses the rename-detection heuristic.
## Integrity & Safety

AstrolaDB includes several layers of protection to prevent accidental data loss
and migration tampering.
### Checksum Chain

Every migration is part of a checksum chain. Each checksum is computed as
`sha256(file_content + previous_checksum)`, starting from a "genesis" seed.
Modifying any file invalidates all subsequent checksums.
When you run `alab migrate`, the runner verifies checksums against the
database. If a file was modified after being applied, the migration run is
blocked.
### Lock File

The `alab.lock` file tracks all migration files via SHA-256 checksums. The
first line is an aggregate checksum of all files: a single value that detects
any change. Manage it with `alab lock status`, `alab lock verify`, and
`alab lock repair`. It updates automatically after `alab migrate` and
`alab squash`.
### Destructive Operations

Migrations containing DROP operations require `--confirm-destroy`:

```sh
alab migrate --confirm-destroy
```

### Git Safety

Before applying migrations, AstrolaDB checks your git working tree. Modified
migration files block the run (override with `--force`). Uncommitted schema
changes trigger a warning.
### SQL Determinism

Use `alab check --determinism` to verify that migrations produce consistent
SQL. It re-generates the SQL from each applied migration and compares it
against the stored `sql_checksum`, catching non-deterministic migrations
(e.g., ones that call `Date.now()`).
### Distributed Locking

Multiple processes sharing a database are protected by distributed locking.
Use `--skip-lock` for single-writer CI environments, or `--lock-timeout 60s`
for custom timeouts. Release stuck locks with `alab lock release --force`.
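Lock acquisition with a timeout can be sketched as a retry loop. This is illustrative only: `tryAcquire` is a hypothetical stand-in for the database-level lock attempt, not part of AstrolaDB's API.

```js
// Poll a lock until it is acquired or the timeout elapses, mirroring the
// --lock-timeout behavior described above. `tryAcquire` should resolve to
// true when the lock is obtained and false when another process holds it.
async function acquireLock(tryAcquire, timeoutMs, pollMs = 100) {
  const deadline = Date.now() + timeoutMs;
  do {
    if (await tryAcquire()) return true;
    await new Promise((resolve) => setTimeout(resolve, pollMs));
  } while (Date.now() < deadline);
  return false;
}
```

If the deadline passes while another process still holds the lock, the caller gives up rather than waiting forever, which is when a stuck lock would need a forced release.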