DEV Community

engr anees

Stop using UUID v4 for database primary keys — UUIDv7 is the 2026 default

Your B-tree indexes hate UUID v4. They've hated it for years. You probably haven't noticed because the slowdown is gradual — until your table hits 10 million rows and inserts start taking 200ms instead of 5ms.

UUIDv7 fixes this. RFC 9562 made it official in May 2024. PostgreSQL 18 ships native support. MySQL still has no native function, but generating v7 in the app layer is trivial. Most ORM and language libraries added v7 in 2024–2025. By 2026, v4 should be the exception, not the default.

This post is the "why", the "when", and the "how to migrate without breaking things".


The Problem with v4

A v4 UUID is 122 bits of pure randomness:

e7d1a8c4-9fe5-4a32-8c7e-3f9d2e8a1c4b

Looks fine. Random is what we want, right? Yes — for things that need to be unguessable. No — for things that go into a B-tree index.

Why B-trees hate randomness

Database indexes (B-tree, which is what Postgres, MySQL, SQLite, MSSQL all use for primary keys by default) work best when new entries arrive in roughly sorted order. The tree can append to the rightmost leaf page, keeping the structure compact and writes localized.

When you insert 1 million v4 UUIDs, each one lands in a random page of the index. The result:

  1. Random page splits as the tree rebalances
  2. Buffer cache thrashing — random pages keep getting evicted
  3. Increased WAL/redo log volume — random pages mean random disk writes
  4. Eventually: index fragmentation, slower queries, larger storage

Public benchmarks on InnoDB (MySQL 8.0) at 10 million rows show v4 UUID inserts roughly 3× slower than auto-increment integer keys, with index size around 40% larger. The exact numbers vary by database, hardware, and workload, but the pattern is universal: random keys destroy B-tree performance at scale.
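You can see the access pattern itself (though not the real-world timings) with a toy model: fixed-capacity sorted "pages", counting how often consecutive inserts land on different pages as a rough stand-in for buffer-cache misses. This is a sketch, not a B-tree — the page size and hop-counting are illustrative assumptions:

```python
import bisect
import random

PAGE_SIZE = 64  # keys per leaf "page" (toy value; real pages hold hundreds)

def count_page_hops(keys):
    """Insert keys into fixed-capacity sorted 'pages' and count how often
    consecutive inserts land on different pages -- a rough proxy for
    buffer-cache misses in a B-tree."""
    pages = [[]]           # leaf pages, each a sorted list of keys
    bounds = [""]          # lowest key of each page; "" sorts before any hex key
    last_page, hops = 0, 0
    for k in keys:
        i = bisect.bisect_right(bounds, k) - 1   # page whose range contains k
        bisect.insort(pages[i], k)
        if i != last_page:
            hops += 1
        last_page = i
        if len(pages[i]) > PAGE_SIZE:            # page overflow -> split
            mid = len(pages[i]) // 2
            new_page = pages[i][mid:]
            del pages[i][mid:]
            pages.insert(i + 1, new_page)
            bounds.insert(i + 1, new_page[0])
    return hops

N = 5_000
sequential = [f"{n:032x}" for n in range(N)]                    # v7-like, ordered
rng = random.Random(42)
scattered = [f"{rng.getrandbits(128):032x}" for _ in range(N)]  # v4-like, random

# Ordered keys hop pages only on splits; random keys hop on almost every insert
print(count_page_hops(sequential), count_page_hops(scattered))
```

Ordered keys touch a new page only when the rightmost one splits; random keys jump to a different page almost every insert, which is exactly the cache-thrashing pattern above.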

"But I'm not at scale yet"

True. At 100K rows, nobody notices. At 1M rows, you might notice slow inserts on bulk operations. At 10M rows, your DBA writes you a long Slack message.

The cost of using v7 from day one: zero. Same UUID format, same byte length, same tooling.

The cost of migrating from v4 to v7 at 100M rows: you don't migrate. You live with it forever.


What UUIDv7 Actually Is

UUIDv7 keeps the same xxxxxxxx-xxxx-Mxxx-Nxxx-xxxxxxxxxxxx format as v4 but rearranges the bytes:

0190a4b3-1c2d-7e85-a734-2c8e9f1d4a5b
└───────────┘ │└─┘ └───────────────┘
  timestamp   │rand   variant + random
              └── version (7)

  • First 48 bits (0190a4b3-1c2d) — Unix milliseconds timestamp
  • Next 4 bits — Version marker (always 7)
  • Next 12 bits — Sub-millisecond ordering or random
  • Next 2 bits — Variant marker
  • Last 62 bits — Pure random

The key insight: the timestamp is in the most significant bits, so v7 UUIDs sort chronologically by string OR byte comparison.
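You can verify the sortability claim by building a v7 UUID by hand. The helper below is an illustrative sketch of the RFC 9562 bit layout, not a production generator (a real one also needs monotonicity handling within a millisecond):

```python
import os
import time
import uuid

def uuid7_sketch(ms=None):
    """Hand-rolled UUIDv7 per the RFC 9562 layout (illustrative only):
    48-bit Unix-ms timestamp | ver=7 | 12 random bits | variant=10 | 62 random bits."""
    ms = int(time.time() * 1000) if ms is None else ms
    rand_a = int.from_bytes(os.urandom(2), "big") & 0x0FFF              # 12 bits
    rand_b = int.from_bytes(os.urandom(8), "big") & 0x3FFFFFFFFFFFFFFF  # 62 bits
    value = (ms << 80) | (0x7 << 76) | (rand_a << 64) | (0b10 << 62) | rand_b
    return uuid.UUID(int=value)

# Distinct timestamps guarantee lexicographic order matches time order,
# because the timestamp occupies the most significant (leftmost) hex digits
ids = [uuid7_sketch(t) for t in (1_700_000_000_000, 1_700_000_000_001, 1_700_000_000_002)]
assert all(u.version == 7 for u in ids)
assert [str(u) for u in ids] == sorted(str(u) for u in ids)
```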

Insert 1 million v7 UUIDs sequentially, and the B-tree is happy:

  • All inserts land near the rightmost page
  • Buffer cache stays warm
  • WAL volume stays low (sequential writes)
  • Index size matches ordered-key efficiency

Same uniqueness guarantee as v4 (74 bits of randomness — collision probability still effectively zero at realistic scale). Same length, same character set, same APIs. Just better-shaped randomness.
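The "effectively zero" claim follows from the birthday bound. Only IDs minted within the same millisecond share a timestamp, so only they can collide on the 74 random bits — a quick approximation:

```python
def collision_probability(n: int, random_bits: int = 74) -> float:
    """Birthday-bound approximation: P(collision) ~ n^2 / 2^(bits+1)
    for n IDs minted within the SAME millisecond, where only the
    74 random bits can differ."""
    return n * n / 2 ** (random_bits + 1)

# Even a million IDs in one millisecond stays effectively collision-free
print(collision_probability(1_000_000))
```

A million inserts per millisecond is a billion per second — far beyond realistic workloads, and even there the per-millisecond collision odds are around 10^-11.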


When to Use v7 vs v4

This is the entire decision tree:

Is the UUID stored in a B-tree index?
(database PK, MongoDB _id, any indexed FK)
│
├── YES → use v7
│         Examples: user_id, order_id, post_id
│
└── NO → does the UUID need to NOT leak time information?
         │
         ├── YES → use v4
         │         Examples: password reset tokens, share-by-URL
         │         secrets, anti-CSRF tokens, webhook signatures
         │
         └── NO → use v7 anyway (slightly cheaper to generate)

That's it. v7 is the default; v4 is the security exception.

"But v7 leaks the creation timestamp!"

Yes — the leading 48 bits are a millisecond timestamp. Anyone who sees a v7 UUID can read off when it was generated.

For a database primary key, this is fine (rows usually have a created_at column anyway). For a sharing token, where knowing when it was created helps an attacker guess or narrow it down, v4 remains the right choice.
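The leak is trivial to demonstrate: anyone holding a v7 UUID can shift out the top 48 bits and read the creation instant. A short sketch:

```python
import datetime
import uuid

def v7_created_at(u: uuid.UUID) -> datetime.datetime:
    """Recover the creation time embedded in a v7 UUID:
    the top 48 bits are Unix milliseconds."""
    ms = u.int >> 80
    return datetime.datetime.fromtimestamp(ms / 1000, tz=datetime.timezone.utc)

u = uuid.UUID("0190a4b3-1c2d-7e85-a734-2c8e9f1d4a5b")
print(v7_created_at(u))  # anyone holding the ID can recover this instant
```

That example UUID decodes to July 12, 2024 (UTC) — harmless for a user row, a real problem for a secret token.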

Don't use v7 for:

  • Password reset tokens (timestamp narrows brute-force window)
  • Email verification links (same)
  • Anti-CSRF tokens
  • Webhook signatures
  • API keys

For everything else: v7.


Generating UUIDv7 in 2026

JavaScript / Node.js (uuid v10+)

import { v7 as uuidv7 } from 'uuid';

const id = uuidv7();
// '0190a4b3-1c2d-7e85-a734-2c8e9f1d4a5b'

Python 3.14+ (stdlib)

import uuid

id = uuid.uuid7()
Go (google/uuid v1.6+)

import "github.com/google/uuid"

id, _ := uuid.NewV7()
Rust (uuid v1.7+)

use uuid::Uuid;

let id = Uuid::now_v7();
PostgreSQL 18 (native, no extension required)

SELECT uuidv7();
-- '0190a4b3-1c2d-7e85-a734-2c8e9f1d4a5b'

CREATE TABLE users (
  id UUID PRIMARY KEY DEFAULT uuidv7(),
  email TEXT NOT NULL,
  created_at TIMESTAMPTZ DEFAULT now()
);
MySQL 8.4

No native function as of mid-2026. Generate in the app layer, or use a stored function:

DELIMITER //
CREATE FUNCTION uuid_v7() RETURNS CHAR(36)
BEGIN
  -- 6-byte timestamp + 10 bytes random + version/variant bits
  -- Implementation omitted for brevity
  -- See github.com/MichaelDimmitt/uuidv7-mysql for a working version
END //
DELIMITER ;

No install — just need one quickly
Generate one in your browser: UUIDv7 Generator. Supports v1, v4, v5, and v7. All client-side, no upload.

Migrating an Existing v4 Table

Short answer: don't, unless you have measured proof of pain.

Migrating primary keys is one of the most expensive database operations you can do — every foreign key, every join, every index gets rebuilt.

Pick the strategy that matches your real constraint:

Option 1: New tables only (recommended)

Use v7 for new tables going forward. Leave v4 tables alone. After a few years, v7 tables dominate.

Option 2: Add a v7 secondary key

Keep v4 as the primary key. Add a secondary:

ALTER TABLE users
  ADD COLUMN public_id UUID DEFAULT uuidv7() UNIQUE;

Use v4 internally, v7 externally for new query patterns that need range scans.

Option 3: Full migration (only if performance is on fire)

This is a 6–12 month project on a real application:

  1. Add an id_v7 UUID column to all tables
  2. Backfill v7 IDs in batches (with timestamps reflecting created_at)
  3. Update all foreign keys to point at id_v7
  4. Switch the primary key — the painful part
  5. Drop the old v4 column

Don't do this without benchmarks proving v4 fragmentation is the actual bottleneck. Often the real culprit is missing indexes or N+1 queries.
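The backfill step deserves a note: you mint v7 IDs whose timestamp field mirrors each row's existing created_at, so old rows keep their chronological order under the new key. A sketch of that ID-minting half (the helper name is hypothetical, and the batched UPDATE logic is omitted):

```python
import datetime
import os
import uuid

def uuid7_from_created_at(created_at: datetime.datetime) -> uuid.UUID:
    """Mint a v7-shaped UUID whose 48-bit timestamp field mirrors an
    existing row's created_at, so backfilled keys sort chronologically."""
    ms = int(created_at.timestamp() * 1000)
    rand_a = int.from_bytes(os.urandom(2), "big") & 0x0FFF              # 12 bits
    rand_b = int.from_bytes(os.urandom(8), "big") & 0x3FFFFFFFFFFFFFFF  # 62 bits
    value = (ms << 80) | (0x7 << 76) | (rand_a << 64) | (0b10 << 62) | rand_b
    return uuid.UUID(int=value)

row_created = datetime.datetime(2022, 3, 14, 9, 30, tzinfo=datetime.timezone.utc)
new_id = uuid7_from_created_at(row_created)
# the embedded timestamp round-trips exactly
assert (new_id.int >> 80) == int(row_created.timestamp() * 1000)
```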

Other Ordered ID Formats (and Why v7 Won)

| Format | Year | 2026 status |
|---|---|---|
| UUIDv7 | 2024 (RFC 9562) | Standard; native in Postgres 18, libraries everywhere |
| ULID | 2016 | Still common; v7 is the spec'd successor |
| CUID2 | 2022 | Niche; fine if you're already using it |
| NanoID | 2017 | Different use case (short URL slugs) |
| Twitter Snowflake | 2010 | Specific to high-throughput distributed systems |
| MongoDB ObjectID | 2009 | MongoDB-specific |

Starting a project in 2026: v7. Already using ULID and it works: keep it. Using v4 because that's what crypto.randomUUID() returns: switch the import.

TL;DR

  • B-tree indexes hate randomness. v4 UUIDs cause page splits, cache thrashing, and bloat as tables grow.
  • UUIDv7 fixes this by putting a millisecond timestamp in the leading 48 bits, so inserts land in monotonic order.
  • Same format, same uniqueness guarantee — drop-in replacement for new tables.
  • Use v4 only when timestamp leakage is a security issue (password reset, anti-CSRF, share-by-URL secrets).
  • Don't migrate existing v4 tables unless you have measured proof of fragmentation pain.
  • Postgres 18 supports v7 natively; MySQL needs an app-side or stored-function helper; every major UUID library supports it via a one-line update.
What's your team's default in 2026 — still v4, switched to v7, or something else (ULID, Snowflake)? Curious to hear the migration stories in comments.
