File Anatomy & Safety

Every migration file has a standard structure with metadata fields and up/down functions:

export default migration({
  description: "Add user authentication tables",
  up_revision: "001",
  down_revision: null, // null for the first migration
  up(m) {
    /* ... */
  },
  down(m) {
    /* ... */
  },
});
Field          Description
description    Human-readable summary shown in alab status
up_revision    This migration’s revision ID (auto-filled from filename)
down_revision  Previous migration’s revision ID (null for the first migration)

Here’s a complete example:

export default migration({
  description: "Create auth and blog tables",
  up_revision: "001",
  down_revision: null,
  up(m) {
    m.create_table("auth.role", (t) => {
      t.id();
      t.string("name", 100).unique();
      t.timestamps();
    });
    m.create_table("auth.user", (t) => {
      t.id();
      t.string("email", 255).unique();
      t.boolean("is_active").default(true);
      t.string("password", 255);
      t.string("username", 50).unique();
      t.timestamps();
    });
    m.create_table("blog.post", (t) => {
      t.id();
      t.belongs_to("auth.user").as("author");
      t.text("body");
      t.string("title", 200);
      t.timestamps();
    });
    m.create_index("blog.post", ["author_id"]);
  },
  down(m) {
    m.drop_table("blog.post");
    m.drop_table("auth.user");
    m.drop_table("auth.role");
  },
});

Tables are created in dependency order (referenced tables first). The down() function reverses everything in up(). See the DSL Reference for all available m.* and t.* methods.
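The dependency ordering described above can be sketched as a depth-first topological sort. The helper and its names below are illustrative only, not part of AstrolaDB's API:

```typescript
// Hypothetical sketch: order table creations so referenced tables come first.
type Deps = Record<string, string[]>; // table -> tables it references

function createOrder(deps: Deps): string[] {
  const order: string[] = [];
  const seen = new Set<string>();
  const visit = (table: string) => {
    if (seen.has(table)) return;
    seen.add(table);
    for (const dep of deps[table] ?? []) visit(dep); // emit dependencies first
    order.push(table);
  };
  for (const table of Object.keys(deps)) visit(table);
  return order;
}

// "blog.post" references "auth.user" via belongs_to, so auth.user comes first.
const order = createOrder({
  "auth.role": [],
  "auth.user": [],
  "blog.post": ["auth.user"],
});
```

Reversing this order yields the drop sequence used in down().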

Use up_hook and down_hook to run arbitrary SQL before or after the auto-generated DDL:

export default migration({
  up(m) {
    m.add_column("blog.post", (t) => {
      t.string("status", 20).default("draft");
    });
  },
  up_hook(h) {
    h.before("SET statement_timeout = '60s';");
    h.after(
      "UPDATE blog_post SET status = 'published' WHERE published_at IS NOT NULL;",
    );
    h.backfill("blog.post", "status", "'active'");
  },
  down(m) {
    m.drop_column("blog.post", "status");
  },
  down_hook(h) {
    h.before("SET statement_timeout = '60s';");
  },
});
Method                          Description
h.before(sql)                   Run SQL before the generated DDL
h.after(sql)                    Run SQL after the generated DDL
h.backfill(table, column, val)  Shorthand for UPDATE table SET column = val

Hooks run inside the same transaction as the migration.
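As a rough sketch, h.backfill might expand to a single UPDATE statement. The dotted-name-to-underscore mapping below is an assumption inferred from the example above, where "blog.post" appears as blog_post in raw SQL; the function itself is hypothetical:

```typescript
// Hypothetical expansion of h.backfill(table, column, val).
function backfillSql(table: string, column: string, val: string): string {
  const sqlName = table.replace(".", "_"); // "blog.post" -> "blog_post" (assumed mapping)
  return `UPDATE ${sqlName} SET ${column} = ${val};`;
}

backfillSql("blog.post", "status", "'active'");
// -> "UPDATE blog_post SET status = 'active';"
```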

If the interactive rename detection (during alab new) doesn’t catch a rename, you can provide explicit hints in the migration file:

export default migration({
  renames: {
    columns: {
      "auth.user.email": "email_address",
    },
  },
  up(m) {
    m.rename_column("auth.user", "email", "email_address");
  },
  down(m) {
    m.rename_column("auth.user", "email_address", "email");
  },
});

The renames.columns object maps namespace.table.old_column to the new column name. This bypasses the rename-detection heuristic.

Use postgres() and sqlite() to embed SQL that only runs on a specific database dialect:

up(m) {
  m.sql(postgres("CREATE EXTENSION IF NOT EXISTS pgcrypto;"));
  m.sql(sqlite("PRAGMA journal_mode=WAL;"));
}

AstrolaDB emits only the expression matching the active dialect and ignores the rest.
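One way to picture this is dialect-tagged SQL values that an emitter filters by the active dialect. The helper shapes below are assumptions for illustration, not AstrolaDB internals:

```typescript
// Illustrative sketch: postgres()/sqlite() tag a SQL string with its target
// dialect, and the emitter keeps only the expressions that match.
type Dialect = "postgres" | "sqlite";
interface DialectSql { dialect: Dialect; sql: string }

const postgres = (sql: string): DialectSql => ({ dialect: "postgres", sql });
const sqlite = (sql: string): DialectSql => ({ dialect: "sqlite", sql });

function emit(active: Dialect, exprs: DialectSql[]): string[] {
  return exprs.filter((e) => e.dialect === active).map((e) => e.sql);
}

const emitted = emit("postgres", [
  postgres("CREATE EXTENSION IF NOT EXISTS pgcrypto;"),
  sqlite("PRAGMA journal_mode=WAL;"),
]);
// only the pgcrypto statement survives when Postgres is active
```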


AstrolaDB includes several layers of protection to prevent accidental data loss and migration tampering.

Every migration is part of a checksum chain. Each checksum is computed as sha256(file_content + previous_checksum), starting from a "genesis" seed. Modifying any file invalidates all subsequent checksums.
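The chain described above can be sketched in a few lines; the genesis seed value here is illustrative:

```typescript
import { createHash } from "node:crypto";

// Each checksum hashes the file content concatenated with the previous
// checksum, starting from a genesis seed.
function chainChecksums(files: string[], genesis = "genesis"): string[] {
  const out: string[] = [];
  let prev = genesis;
  for (const content of files) {
    prev = createHash("sha256").update(content + prev).digest("hex");
    out.push(prev);
  }
  return out;
}

// Editing the first file changes its checksum AND every checksum after it,
// even though the second file is untouched.
const a = chainChecksums(["m001", "m002"]);
const b = chainChecksums(["m001-edited", "m002"]);
```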

When you run alab migrate, the runner verifies checksums against the database. If a file was modified after being applied, migration is blocked.

The alab.lock file tracks all migration files via SHA-256 checksums. The first line is an aggregate checksum of all files — a single value that detects any change. Manage it with alab lock status, alab lock verify, and alab lock repair. It updates automatically after alab migrate and alab squash.
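A hedged sketch of the aggregate value: one hash over all per-file checksums, so a change to any file flips the first line. The exact alab.lock layout is not specified here, so this is illustrative only:

```typescript
import { createHash } from "node:crypto";

const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

// Hypothetical aggregate: hash the joined per-file checksums.
function aggregate(fileChecksums: string[]): string {
  return sha256(fileChecksums.join("\n"));
}

const agg1 = aggregate([sha256("m001"), sha256("m002")]);
const agg2 = aggregate([sha256("m001"), sha256("m002-edited")]);
// agg1 !== agg2: a single changed file is visible in one value
```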

Migrations containing DROP operations require --confirm-destroy:

alab migrate --confirm-destroy
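A minimal sketch of how such a guard could classify migrations; the function name and the statement-scan approach are assumptions, not AstrolaDB's implementation:

```typescript
// Hypothetical check: does any generated statement start with DROP?
function isDestructive(statements: string[]): boolean {
  return statements.some((s) => /^\s*DROP\s/i.test(s));
}

isDestructive(["CREATE TABLE auth_role (id INT);"]); // false
isDestructive(["DROP TABLE blog_post;"]);            // true
```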

Before applying migrations, AstrolaDB checks your git working tree. Modified migration files block the run (override with --force). Uncommitted schema changes trigger a warning.

Use alab check --determinism to verify migrations produce consistent SQL. It regenerates the SQL for each applied migration and compares it against the stored sql_checksum, catching non-deterministic migrations (e.g., ones that embed Date.now()).
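The idea behind the check can be sketched by generating twice and comparing checksums; the generator shape here is an assumption for illustration (Math.random() stands in for any run-dependent value like Date.now()):

```typescript
import { createHash } from "node:crypto";

const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

// Hypothetical check: a deterministic generator yields the same SQL
// (and therefore the same checksum) on every run.
function isDeterministic(generate: () => string[]): boolean {
  const first = sha256(generate().join("\n"));
  const second = sha256(generate().join("\n"));
  return first === second;
}

isDeterministic(() => ["CREATE TABLE t (id INT);"]); // true
isDeterministic(() => [`INSERT INTO log VALUES (${Math.random()});`]);
// false: the embedded value changes between runs
```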

When multiple processes share a database, distributed locking prevents concurrent migration runs. Use --skip-lock for single-writer CI environments, or --lock-timeout 60s for custom timeouts. Release stuck locks with alab lock release --force.