Database Transactions and Isolation Levels

June 22, 2021


Transactions are how databases keep their promises. Isolation levels determine how strict those promises are.

Estimated Reading Time: 10m

What ACID means

Every relational database claims to be ACID-compliant. Here’s what each letter actually guarantees:

Atomicity — a transaction is all or nothing. If any statement fails, the entire transaction is rolled back. You never end up with a half-applied set of changes.

Consistency — a transaction moves the database from one valid state to another. Constraints (foreign keys, unique indexes, check constraints) are enforced before the transaction commits.

Isolation — concurrent transactions don’t interfere with each other. The degree of isolation depends on the isolation level.

Durability — once a transaction commits, the data survives crashes. The change has been flushed to durable storage (typically via the write-ahead log), not just held in memory.

Atomicity, consistency, and durability are binary — you either have them or you don’t. Isolation is the one with knobs.
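
Atomicity is easiest to see in a classic transfer. A minimal SQL sketch (the accounts table here is illustrative):

```sql
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;
-- If either UPDATE fails (or the connection drops before COMMIT),
-- the database rolls back both changes: no half-completed transfer.
```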

Why isolation levels exist

Full isolation means every transaction behaves as if it’s the only one running. This is safe but slow — transactions have to wait for each other.

In practice, most workloads can tolerate some degree of concurrency. Isolation levels let you choose how much visibility concurrent transactions have into each other’s uncommitted or recently committed changes.

The tradeoffs are between correctness and throughput.

The problems isolation levels prevent

Before looking at the levels, understand the problems they address:

Dirty read — a transaction reads data written by another transaction that hasn’t committed yet. If that transaction rolls back, you read data that never existed.

Non-repeatable read — a transaction reads the same row twice and gets different values because another transaction committed a change in between.

Phantom read — a transaction runs the same query twice and gets different rows back because another transaction inserted or deleted rows that match the query.

Serialization anomaly — the result of running transactions concurrently differs from any possible serial (one-at-a-time) execution order.

The four isolation levels

The SQL standard defines four isolation levels, from weakest to strongest:

Read Uncommitted

Allows dirty reads. You can see changes from other transactions before they commit.

In practice, Postgres doesn’t implement this — it silently upgrades to Read Committed. But some databases (MySQL, SQL Server) do support it.

There’s almost no legitimate reason to use this level. The performance gain over Read Committed is negligible, and the risk of reading rolled-back data is real.

Read Committed

The default in PostgreSQL. Each statement within a transaction sees only data that was committed before that statement began. You won’t see uncommitted changes from other transactions.

-- Session A
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
-- not committed yet

-- Session B
BEGIN;
SELECT balance FROM accounts WHERE id = 1;
-- sees the OLD balance (before Session A's update)

Dirty reads are prevented, but non-repeatable reads and phantom reads can still happen. If Session A commits between two SELECT statements in Session B, Session B will see the updated data on the second read.
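
Continuing the session sketch above, a non-repeatable read at Read Committed looks like this:

```sql
-- Session B (Read Committed)
BEGIN;
SELECT balance FROM accounts WHERE id = 1;
-- returns 500

-- Session A commits: UPDATE accounts SET balance = 400 WHERE id = 1;

-- Session B (same transaction)
SELECT balance FROM accounts WHERE id = 1;
-- returns 400: a non-repeatable read
```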

For most OLTP workloads, this is the right level.

Repeatable Read

All statements within a transaction see a snapshot of the database as it was at the start of the transaction. Even if other transactions commit changes during your transaction, you won’t see them.

-- Session A
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT balance FROM accounts WHERE id = 1;
-- returns 500

-- Session B commits: UPDATE accounts SET balance = 400 WHERE id = 1;

-- Session A (still in the same transaction)
SELECT balance FROM accounts WHERE id = 1;
-- still returns 500

Non-repeatable reads are prevented. In Postgres, phantom reads are also prevented at this level (Postgres uses snapshot isolation under the hood, which is stricter than the SQL standard requires).

The tradeoff: if your transaction tries to modify a row that another concurrent transaction already modified, Postgres will abort your transaction with a serialization error. Your application needs to handle retries.

Serializable

The strictest level. Postgres guarantees that the result of concurrent transactions is equivalent to some serial execution order. Any transaction that would violate this guarantee is aborted.

BEGIN ISOLATION LEVEL SERIALIZABLE;
-- any conflicting concurrent transaction will be aborted

This is the safest option but comes with the highest overhead. Postgres uses predicate locking and serializable snapshot isolation (SSI) to detect conflicts. More transactions will be aborted and need retries.

Use Serializable when correctness is more important than throughput — financial calculations, inventory management, or any workload where concurrent anomalies would cause real damage.
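
The kind of anomaly only Serializable catches is write skew: two transactions each check an invariant, both see that it holds, and their combined writes violate it. A hypothetical sketch using the accounts table from the earlier examples — assume both accounts start at 50 and the application requires their combined balance to stay non-negative:

```sql
-- Session A (Repeatable Read)
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT sum(balance) FROM accounts WHERE id IN (1, 2);
-- returns 100: a withdrawal of 100 looks safe

-- Session B runs the same check concurrently and also sees 100

-- Session A
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
COMMIT;

-- Session B
UPDATE accounts SET balance = balance - 100 WHERE id = 2;
COMMIT;
-- Both commit at Repeatable Read (they updated different rows),
-- leaving the combined balance at -100. At Serializable, Postgres
-- detects the conflict and aborts one transaction with error 40001.
```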

Summary table

Level             Dirty Read   Non-repeatable Read   Phantom Read   Serialization Anomaly
Read Uncommitted  Possible*    Possible              Possible       Possible
Read Committed    No           Possible              Possible       Possible
Repeatable Read   No           No                    No**           Possible
Serializable      No           No                    No             No

*  Postgres upgrades Read Uncommitted to Read Committed automatically.
** Postgres prevents phantom reads at Repeatable Read via snapshot isolation.

Isolation levels in Go

Setting the isolation level in Go with database/sql:

tx, err := db.BeginTx(ctx, &sql.TxOptions{
    Isolation: sql.LevelSerializable,
})
if err != nil {
    return err
}
// Rollback after a successful Commit is a no-op (it returns sql.ErrTxDone),
// so the deferred call is safe on every path.
defer tx.Rollback()

// do work within the transaction
_, err = tx.ExecContext(ctx, "UPDATE accounts SET balance = balance - 100 WHERE id = $1", accountID)
if err != nil {
    return err
}

return tx.Commit()

Available constants: sql.LevelDefault, sql.LevelReadCommitted, sql.LevelRepeatableRead, sql.LevelSerializable.

Handling serialization failures

At Repeatable Read and Serializable levels, transactions can fail with a serialization error. Your application must be prepared to retry:

// isSerializationError checks for Postgres error code 40001
// (serialization_failure), which means a concurrent transaction
// conflicted and this one needs to be retried.
func isSerializationError(err error) bool {
    var pgErr *pgconn.PgError
    return errors.As(err, &pgErr) && pgErr.Code == "40001"
}

func runInTransaction(ctx context.Context, db *sql.DB, fn func(tx *sql.Tx) error) error {
    var lastErr error
    for retries := 0; retries < 3; retries++ {
        tx, err := db.BeginTx(ctx, &sql.TxOptions{
            Isolation: sql.LevelSerializable,
        })
        if err != nil {
            return err
        }

        err = fn(tx)
        if err != nil {
            tx.Rollback()
            if isSerializationError(err) {
                lastErr = err
                continue
            }
            return err
        }

        // Serialization conflicts can also surface at commit time.
        err = tx.Commit()
        if err != nil {
            if isSerializationError(err) {
                lastErr = err
                continue
            }
            return err
        }
        return nil
    }
    return fmt.Errorf("transaction failed after 3 retries: %w", lastErr)
}
}

The pgconn.PgError type comes from github.com/jackc/pgx — the most common Postgres driver for Go. Error code 40001 is the SQL standard code for serialization failures.

The mental model

Read Committed is the practical default — it prevents the worst problems (dirty reads) without requiring retry logic. Move to Repeatable Read when you need consistent snapshots within a transaction. Move to Serializable when you can’t tolerate any concurrency anomalies and you’re willing to handle retries.

Start with the default. Tighten isolation only when you have a specific problem it solves.