CREATE INDEX CONCURRENTLY: The Complete PostgreSQL Guide
CREATE INDEX CONCURRENTLY is the single most important command for safely adding indexes to production PostgreSQL tables. This guide covers how it works, why it sometimes fails, how to handle failures, and the related REINDEX CONCURRENTLY command.
Why Regular CREATE INDEX Is Dangerous
A regular CREATE INDEX acquires a SHARE lock on the table. This lock blocks all INSERT, UPDATE, and DELETE operations for the entire duration of the index build. On a large table, that can mean minutes or even hours of write downtime.
-- DANGEROUS: Blocks all writes for the entire build duration
CREATE INDEX idx_orders_customer_id ON orders (customer_id);
-- On a 50M row table, this might take 3-5 minutes
-- Every INSERT, UPDATE, DELETE queues behind this lock
-- Connection pools fill up, application errors cascade

How CREATE INDEX CONCURRENTLY Works
CREATE INDEX CONCURRENTLY builds the index without holding a lock that blocks writes. Instead of scanning the table once under a SHARE lock, it performs the build in three phases:
- Phase 1 — Catalog entry: PostgreSQL creates the index entry in the system catalogs with an "invalid" flag. The command takes a SHARE UPDATE EXCLUSIVE lock on the table and holds it for the duration of the build; this lock allows concurrent reads and writes but blocks other DDL on the table. The index is not yet usable by the query planner.
- Phase 2 — First table scan: PostgreSQL waits for transactions that predate the catalog entry to finish, then scans the entire table and builds the index entries. Concurrent INSERT, UPDATE, and DELETE operations continue throughout; changes made during the scan are caught by the next phase.
- Phase 3 — Second table scan: PostgreSQL does a second pass to pick up rows that were modified during Phase 2, then waits for any transactions holding snapshots older than that scan to complete. Once the wait finishes, the index is marked valid and becomes usable by the query planner.
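On PostgreSQL 12 and later, you can watch a concurrent build move through these phases from a separate session using the pg_stat_progress_create_index view (the join to pg_stat_activity just attaches the running statement text):

```sql
-- Run from a separate session while the build is in progress
SELECT
    a.query,
    p.phase,
    p.blocks_done,
    p.blocks_total,
    p.tuples_done,
    p.tuples_total
FROM pg_stat_progress_create_index p
JOIN pg_stat_activity a ON a.pid = p.pid;
```

The phase column reports steps such as "building index" and "index validation: scanning table", which map onto the scans described above.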
-- SAFE: Does not block reads or writes
CREATE INDEX CONCURRENTLY idx_orders_customer_id ON orders (customer_id);
-- Takes longer than regular CREATE INDEX (roughly 2-3x)
-- but your application continues running normally

The Transaction Restriction
CREATE INDEX CONCURRENTLY cannot run inside a transaction block. This is a hard PostgreSQL limitation. If you try, you get an error:
-- This fails immediately
BEGIN;
CREATE INDEX CONCURRENTLY idx_email ON users (email);
COMMIT;
-- ERROR: CREATE INDEX CONCURRENTLY cannot run inside a transaction block
-- The solution: run it outside a transaction
CREATE INDEX CONCURRENTLY idx_email ON users (email);

This matters because many migration frameworks wrap each migration file in a transaction by default. If your framework does this (Flyway, Alembic, Django, Rails), you need to configure it to run specific migrations outside a transaction. Check your framework's documentation for how to do this.
| Framework | How to Disable Transaction |
|---|---|
| Django | atomic = False on the Migration class |
| Rails | disable_ddl_transaction! |
| Flyway | executeInTransaction=false in the script's .conf file |
| Alembic | with op.get_context().autocommit_block(): |
| Knex | exports.config = { transaction: false } in the migration file |
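As an example of the Flyway approach, the migration script itself stays plain SQL (file names here are hypothetical):

```sql
-- V004__add_orders_customer_idx.sql
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_orders_customer_id
    ON orders (customer_id);
```

It is paired with a V004__add_orders_customer_idx.sql.conf file containing the single line executeInTransaction=false, which tells Flyway to run this one script outside a transaction.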
Failure Modes and Recovery
CREATE INDEX CONCURRENTLY can fail partway through. Unlike a regular CREATE INDEX (which rolls back cleanly on failure), a failed CONCURRENTLY operation leaves behind an invalid index.
Common failure causes:
- Deadlock with another transaction
- Unique constraint violation (for UNIQUE indexes)
- Out of disk space
- statement_timeout or lock_timeout reached
- A long-running transaction that prevents Phase 3 from completing
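The last cause is worth monitoring for explicitly. A query along these lines (a sketch; the 10-minute threshold is arbitrary) surfaces transactions that have been open long enough to stall the final validation phase:

```sql
-- Transactions open longer than 10 minutes
SELECT
    pid,
    now() - xact_start AS xact_age,
    state,
    left(query, 60) AS current_query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
  AND now() - xact_start > interval '10 minutes'
ORDER BY xact_age DESC;
```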
Detecting invalid indexes
-- Find all invalid indexes in the database
-- (joining the catalogs directly avoids a name mismatch when the
-- index lives outside the current search_path)
SELECT
    n.nspname AS schemaname,
    t.relname AS tablename,
    c.relname AS indexname,
    pg_get_indexdef(i.indexrelid) AS indexdef
FROM pg_index i
JOIN pg_class c ON c.oid = i.indexrelid
JOIN pg_class t ON t.oid = i.indrelid
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE NOT i.indisvalid;

Recovering from a failed build
You have two options when a concurrent index build fails:
-- Option 1: Drop and rebuild
DROP INDEX CONCURRENTLY idx_orders_customer_id;
CREATE INDEX CONCURRENTLY idx_orders_customer_id ON orders (customer_id);
-- Option 2: Use REINDEX CONCURRENTLY (PG 12+)
-- This replaces the invalid index in-place
REINDEX INDEX CONCURRENTLY idx_orders_customer_id;

REINDEX CONCURRENTLY (available since PostgreSQL 12) is generally preferred because it rebuilds the index in a single operation. It creates a new index with a temporary name, swaps it with the old one, and drops the old one — all without blocking writes.
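Note that a failed REINDEX CONCURRENTLY leaves its own artifact: a transient index suffixed _ccnew (or _ccold) may remain behind and should be dropped. A query like this finds them:

```sql
-- Leftover transient indexes from an interrupted REINDEX CONCURRENTLY
SELECT c.relname, i.indisvalid
FROM pg_class c
JOIN pg_index i ON i.indexrelid = c.oid
WHERE c.relname LIKE '%\_ccnew%'
   OR c.relname LIKE '%\_ccold%';
```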
REINDEX CONCURRENTLY (PostgreSQL 12+)
REINDEX CONCURRENTLY was added in PostgreSQL 12. It rebuilds an existing index without blocking writes, similar to how CREATE INDEX CONCURRENTLY works. This is useful for:
- Rebuilding invalid indexes left by failed CONCURRENTLY operations
- Rebuilding bloated indexes (index bloat can slow down queries significantly)
- Rebuilding corrupt indexes
- Upgrading index parameters (e.g., changing fillfactor)
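To judge whether an index is bloated enough to be worth rebuilding, the pgstattuple extension (if installed) can report leaf density; values well below the B-tree default fillfactor of 90% suggest bloat:

```sql
CREATE EXTENSION IF NOT EXISTS pgstattuple;

-- Low avg_leaf_density indicates wasted space in leaf pages
SELECT avg_leaf_density, leaf_fragmentation
FROM pgstatindex('idx_orders_customer_id');
```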
-- Rebuild a single index without blocking writes
REINDEX INDEX CONCURRENTLY idx_orders_customer_id;
-- Rebuild all indexes on a table
REINDEX TABLE CONCURRENTLY orders;
-- Rebuild all indexes in a schema
REINDEX SCHEMA CONCURRENTLY public;
-- DANGEROUS: Without CONCURRENTLY, REINDEX holds ACCESS EXCLUSIVE
-- This blocks all reads and writes
REINDEX INDEX idx_orders_customer_id; -- Don't do this in production

Performance Considerations
CREATE INDEX CONCURRENTLY is slower than regular CREATE INDEX. Expect roughly 2-3x the build time because:
- It scans the table twice instead of once
- It needs to track concurrent modifications between the two scans
- It waits for existing transactions to complete before finalizing
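Because of those waits, it is also common practice to set lock_timeout before a concurrent build so lock acquisition fails fast instead of queueing behind other work (a sketch; note that a timed-out build may still leave an invalid index to clean up):

```sql
-- Give up if a lock cannot be acquired within 5 seconds
SET lock_timeout = '5s';
CREATE INDEX CONCURRENTLY idx_orders_customer_id ON orders (customer_id);
RESET lock_timeout;
```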
You can speed up concurrent index builds by tuning these settings (for the session only):
-- Increase maintenance_work_mem for faster index builds
-- Default is typically 64MB — increase for large tables
SET maintenance_work_mem = '1GB';
-- Increase max_parallel_maintenance_workers (PG 11+)
-- Allows parallel index builds
SET max_parallel_maintenance_workers = 4;
-- Then create the index
CREATE INDEX CONCURRENTLY idx_orders_created_at ON orders (created_at);
-- Reset settings
RESET maintenance_work_mem;
RESET max_parallel_maintenance_workers;

DROP INDEX CONCURRENTLY
Just like creation, dropping an index normally acquires an ACCESS EXCLUSIVE lock. Use DROP INDEX CONCURRENTLY to avoid blocking reads and writes:
-- DANGEROUS: Blocks all reads and writes
DROP INDEX idx_orders_old;
-- SAFE: Does not block reads or writes
DROP INDEX CONCURRENTLY idx_orders_old;
-- Note: Like CREATE, DROP INDEX CONCURRENTLY cannot run
-- inside a transaction block

Retry Pattern for CI/CD
Since CREATE INDEX CONCURRENTLY can fail due to transient conditions (deadlocks, long-running transactions), a robust migration strategy includes retry logic:
#!/bin/bash
# retry-create-index.sh
# Connection settings are assumed to come from the standard PG*
# environment variables (PGHOST, PGDATABASE, PGUSER, ...)
set -u

MAX_RETRIES=3
RETRY_DELAY=10
for i in $(seq 1 $MAX_RETRIES); do
echo "Attempt $i: Creating index..."
# Drop invalid index if it exists from a previous failed attempt
psql -c "DROP INDEX CONCURRENTLY IF EXISTS idx_orders_customer_id;" 2>/dev/null
# Try to create the index
if psql -c "CREATE INDEX CONCURRENTLY idx_orders_customer_id ON orders (customer_id);"; then
echo "Index created successfully"
exit 0
fi
echo "Failed on attempt $i, waiting $RETRY_DELAY seconds..."
sleep $RETRY_DELAY
done
echo "Failed after $MAX_RETRIES attempts"
exit 1

Unique Index Considerations
Creating a UNIQUE index concurrently has an additional failure mode: if duplicate values exist in the column, the build fails and leaves an invalid index. Always check for duplicates first:
-- Check for duplicates before creating a unique index
SELECT email, count(*)
FROM users
GROUP BY email
HAVING count(*) > 1
LIMIT 10;
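If the check returns rows, one common cleanup pattern (a sketch; verify against a backup first, since which row to keep is application-specific) deletes all but one row per value using the system column ctid:

```sql
-- Keep one arbitrary row per email, delete the rest
DELETE FROM users a
USING users b
WHERE a.email = b.email
  AND a.ctid > b.ctid;
```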
-- If duplicates exist, resolve them first
-- Then create the unique index
CREATE UNIQUE INDEX CONCURRENTLY idx_users_email ON users (email);

Summary: The Rules
- Always use CONCURRENTLY for CREATE INDEX, DROP INDEX, and REINDEX in production
- Never run CONCURRENTLY operations inside a transaction block
- Check for invalid indexes after failed CONCURRENTLY operations
- Use REINDEX CONCURRENTLY (PG 12+) to rebuild failed or bloated indexes
- Implement retry logic in your deployment scripts
- Increase maintenance_work_mem for large table index builds
Automate the Check
Forgetting CONCURRENTLY on a CREATE INDEX is one of the most common migration mistakes. MigrationPilot catches this with rule MP001 (require-concurrent-index) and can auto-fix it — adding CONCURRENTLY and flagging if the statement is inside a transaction block. It also catches missing CONCURRENTLY on DROP INDEX (MP009) and REINDEX (MP021). Add it to your CI pipeline to enforce these patterns automatically.
npx migrationpilot analyze migrations/004_add_indexes.sql --fix