William Andrews

Posted on • Originally published at devcrate.net

Unix timestamps explained — converting, formatting, and avoiding the common mistakes

You're reading a database record and there's a column called created_at with a value of 1746835200. Or you're debugging an API response and the timestamp field contains 1746835200000. Or your log file shows 1746835200 and you need to know when that actually happened.

This guide covers everything you need to work with Unix timestamps confidently — what they are, how to convert them in every major language and tool, the timezone trap that catches most developers, the milliseconds-vs-seconds mistake that causes the worst bugs, and a few things worth knowing about where timestamp math breaks down.


What a Unix timestamp actually is

A Unix timestamp is the number of seconds that have elapsed since January 1, 1970 at 00:00:00 UTC. That moment — midnight UTC on January 1st, 1970 — is called the Unix epoch. Every second that passes increments every Unix timestamp by 1.

0           → 1970-01-01 00:00:00 UTC
1000000000  → 2001-09-09 01:46:40 UTC
1700000000  → 2023-11-14 22:13:20 UTC
1746835200  → 2025-05-10 00:00:00 UTC

A few things this definition makes clear: timestamps are always UTC — they don't carry timezone information. They're always an integer (or occasionally a float with sub-second precision). And they're always positive for dates after 1970, negative for dates before.
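
You can check the last row of that table in a browser console. Date.UTC returns milliseconds, so divide by 1000:

// Date.UTC months are 0-indexed, so 4 = May
Date.UTC(2025, 4, 10) / 1000;
// → 1746835200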

The name "Unix timestamp" is used interchangeably with "epoch time", "POSIX time", and "Unix time". They all mean the same thing.


Seconds vs milliseconds — the bug that's bitten everyone

This is the single most common timestamp bug. JavaScript's Date.now() returns milliseconds since the epoch. Most Unix utilities and databases use seconds. The difference is a factor of 1000 — if you pass a millisecond timestamp where seconds are expected you get a date roughly 55,000 years in the future, and if you pass seconds where milliseconds are expected you get a date in early 1970.

// JavaScript — always milliseconds
Date.now()              // 1746835200000 (13 digits)
new Date().getTime()    // 1746835200000 (13 digits)

// To get seconds in JavaScript
Math.floor(Date.now() / 1000)  // 1746835200 (10 digits)

// Quick check: is this seconds or milliseconds?
// 10 digits → seconds (valid until 2286)
// 13 digits → milliseconds
// 16 digits → microseconds

The digit count rule is the fastest way to tell which you're dealing with. A 10-digit timestamp is seconds. A 13-digit timestamp is milliseconds. When you see something in between — 11 or 12 digits — something has gone wrong upstream.
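
If you want that heuristic in code, here's a minimal sketch (normalizeToSeconds is an illustrative name, not a library function):

// Normalize a timestamp to seconds using the digit-count heuristic
function normalizeToSeconds(ts) {
  const digits = String(Math.trunc(Math.abs(ts))).length;
  if (digits <= 10) return Math.floor(ts);            // already seconds
  if (digits === 13) return Math.floor(ts / 1000);    // milliseconds
  if (digits === 16) return Math.floor(ts / 1000000); // microseconds
  throw new Error(`Suspicious timestamp: ${ts} (${digits} digits)`);
}

normalizeToSeconds(1746835200);    // → 1746835200
normalizeToSeconds(1746835200000); // → 1746835200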


Converting timestamps in JavaScript

// Current timestamp
const nowMs  = Date.now();
const nowSec = Math.floor(Date.now() / 1000);

// Timestamp to Date object
const date       = new Date(1746835200 * 1000); // seconds → multiply by 1000
const dateFromMs = new Date(1746835200000);      // already milliseconds

// Date to timestamp
const ts = Math.floor(new Date('2025-05-10').getTime() / 1000);

// Format a timestamp as a readable string
function formatTimestamp(ts, tz = 'UTC') {
  return new Intl.DateTimeFormat('en-US', {
    timeZone: tz,
    year: 'numeric', month: 'short', day: 'numeric',
    hour: '2-digit', minute: '2-digit', second: '2-digit',
    hour12: false
  }).format(new Date(ts * 1000));
}

formatTimestamp(1746835200, 'UTC');
// → "May 10, 2025, 00:00:00"

formatTimestamp(1746835200, 'America/New_York');
// → "May 09, 2025, 20:00:00"  ← note: different day

Converting timestamps in Python

import datetime, time

# Current timestamp
ts_seconds = int(time.time())
ts_float   = time.time()   # with sub-second precision

# Timestamp to datetime (UTC)
dt_utc = datetime.datetime.fromtimestamp(1746835200, tz=datetime.timezone.utc)
# → datetime(2025, 5, 10, 0, 0, tzinfo=timezone.utc)

# Datetime to timestamp
dt = datetime.datetime(2025, 5, 10, tzinfo=datetime.timezone.utc)
ts = int(dt.timestamp())
# → 1746835200

# Format
print(dt_utc.strftime('%Y-%m-%d %H:%M:%S %Z'))
# → "2025-05-10 00:00:00 UTC"

# Arithmetic
one_day  = datetime.timedelta(days=1)
tomorrow = dt_utc + one_day
ts_tomorrow = int(tomorrow.timestamp())

Converting timestamps in SQL

-- PostgreSQL
SELECT to_timestamp(1746835200);
-- → 2025-05-10 00:00:00+00

SELECT EXTRACT(EPOCH FROM NOW())::bigint;
-- → current Unix timestamp

SELECT to_timestamp(1746835200) AT TIME ZONE 'America/New_York';
-- → 2025-05-09 20:00:00

-- MySQL
SELECT FROM_UNIXTIME(1746835200);
-- → 2025-05-10 00:00:00  (uses the session time zone — be careful)

SELECT UNIX_TIMESTAMP(NOW());
-- → current Unix timestamp
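
-- One way to make FROM_UNIXTIME return UTC regardless of the server default:
-- pin the session time zone first (offset syntax needs no timezone tables)
SET time_zone = '+00:00';
SELECT FROM_UNIXTIME(1746835200);
-- → 2025-05-10 00:00:00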

-- SQLite
SELECT datetime(1746835200, 'unixepoch');
-- → 2025-05-10 00:00:00

SELECT strftime('%s', 'now');
-- → current Unix timestamp as string

Converting timestamps on the command line

# macOS (BSD date). The -u flag prints UTC; omit it for local time.
date -u -r 1746835200
# → Sat May 10 00:00:00 UTC 2025

date -u -r 1746835200 '+%Y-%m-%d %H:%M:%S'
# → 2025-05-10 00:00:00

# Linux (GNU date)
date -u -d @1746835200
# → Sat May 10 00:00:00 UTC 2025

# Get current timestamp
date +%s
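
# Current timestamp in milliseconds (GNU date only; %N is a GNU extension)
date +%s%3N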

# Python one-liner (works everywhere)
python3 -c "import datetime; print(datetime.datetime.fromtimestamp(1746835200, tz=datetime.timezone.utc))"

The timezone trap

Unix timestamps are always UTC. They contain no timezone information — they're just a count of seconds. The timezone only matters when you convert a timestamp into a human-readable date. This is where most timestamp bugs live.

// This timestamp is midnight UTC on May 10th
const ts = 1746835200;

// In UTC
new Date(ts * 1000).toISOString();
// → "2025-05-10T00:00:00.000Z"  ← May 10th

// In New York (UTC-4 during EDT)
new Intl.DateTimeFormat('en-US', {
  timeZone: 'America/New_York',
  dateStyle: 'full'
}).format(new Date(ts * 1000));
// → "Friday, May 9, 2025"  ← May 9th — different day

The rule: always be explicit about timezone when converting timestamps to dates. Never rely on the server's local timezone for anything that will be displayed to users or compared across systems. Store timestamps as UTC, convert to user timezone only at the display layer.
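
A small illustration of the rule, assuming a Node.js process where the host OS (or the TZ environment variable) sets the default timezone:

const ts = 1746835200;

// Fragile: the output depends on wherever this code happens to run
new Date(ts * 1000).toString();
// → "Fri May 09 2025 20:00:00 GMT-0400 ..." on a New York server
// → "Sat May 10 2025 09:00:00 GMT+0900 ..." on a Tokyo server

// Robust: the timezone is explicit, so the output is the same everywhere
new Date(ts * 1000).toISOString();        // for storage and APIs
formatTimestamp(ts, 'America/New_York');  // for display (defined earlier)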


Storing timestamps in databases

Two common approaches — integer storage vs native datetime types:

Integer storage — simple, portable, no timezone ambiguity, easy arithmetic. Not human-readable in the database (see the sketch at the end of the code block below).

Native datetime with timezone — TIMESTAMPTZ in PostgreSQL, DATETIME with explicit UTC in MySQL. Human-readable, supports native date functions. PostgreSQL's TIMESTAMPTZ stores in UTC and converts on output — almost always the right choice.

-- PostgreSQL: always use TIMESTAMPTZ, not TIMESTAMP
CREATE TABLE events (
  id         SERIAL PRIMARY KEY,
  created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
  -- TIMESTAMP (without timezone) stores the value as-is with no timezone context
  -- This is almost always wrong
);

-- Query by date range (PostgreSQL handles timezone conversion)
SELECT * FROM events
WHERE created_at >= '2025-05-10' AT TIME ZONE 'America/New_York'
  AND created_at  < '2025-05-11' AT TIME ZONE 'America/New_York';
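
-- For comparison, a sketch of the integer-storage approach (names illustrative)
CREATE TABLE events_int (
  id         SERIAL PRIMARY KEY,
  created_at BIGINT NOT NULL  -- Unix timestamp in seconds, UTC by convention
);

-- Range queries become plain integer comparisons
SELECT * FROM events_int
WHERE created_at >= 1746835200  -- 2025-05-10 00:00:00 UTC
  AND created_at <  1746921600; -- 2025-05-11 00:00:00 UTC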

Timestamp arithmetic

const oneHour  = 3600;
const oneDay   = 86400;
const oneWeek  = 604800;

const ts = 1746835200;
const tomorrow = ts + oneDay;
const nextWeek = ts + oneWeek;

// Difference between two timestamps
const start = 1746835200;
const end   = 1746921600;
const diffDays = (end - start) / 86400;  // 1.0

// Is this timestamp in the past?
const isPast = ts < Math.floor(Date.now() / 1000);

// Is this timestamp within the last 24 hours?
const isRecent = (Math.floor(Date.now() / 1000) - ts) < 86400;

One thing to be careful with: "add one month" is ambiguous. Adding 2,592,000 seconds (30 days) to January 31st gives you March 2nd, not February 28th. If calendar-accurate month arithmetic matters, use a library like date-fns or Temporal rather than raw seconds.
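
Here's the January example worked in code:

// Jan 31 plus "one month" of raw seconds lands in March
const jan31 = Date.UTC(2025, 0, 31) / 1000;  // 2025-01-31 00:00:00 UTC, in seconds
new Date((jan31 + 30 * 86400) * 1000).toISOString();
// → "2025-03-02T00:00:00.000Z"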


ISO 8601 — the timestamp format you should be using in APIs

When exchanging timestamps between systems, don't use raw Unix timestamps. Use ISO 8601 with an explicit timezone offset.

// Bad — ambiguous
"created_at": 1746835200

// Better — human-readable, but the timezone is ambiguous
"created_at": "2025-05-10 00:00:00"

// Best — ISO 8601 with UTC offset, unambiguous
"created_at": "2025-05-10T00:00:00Z"
"created_at": "2025-05-09T20:00:00-04:00"  // same moment, NY local time

// JavaScript
new Date(1746835200 * 1000).toISOString();
// → "2025-05-10T00:00:00.000Z"

// Python
datetime.datetime.fromtimestamp(1746835200, tz=datetime.timezone.utc).isoformat()
// → "2025-05-10T00:00:00+00:00"
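
Going the other direction, parsing an ISO 8601 string back into a Unix timestamp, is a one-liner; here's the JavaScript side:

// ISO 8601 → Unix timestamp in seconds (Date.parse returns milliseconds)
Math.floor(Date.parse('2025-05-10T00:00:00Z') / 1000);
// → 1746835200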

The Y2K38 problem

Unix timestamps stored as 32-bit signed integers will overflow on January 19, 2038 at 03:14:07 UTC. At that moment the value exceeds the maximum positive 32-bit integer (2,147,483,647) and wraps to a large negative number — interpreted as a date in 1901.

Most modern systems use 64-bit integers, which won't overflow for approximately 292 billion years. If you're working with legacy systems or embedded hardware that stores timestamps as 32-bit values, this is worth checking. New code should always use 64-bit integers.

// 32-bit max timestamp
new Date(2147483647 * 1000).toISOString()
// → "2038-01-19T03:14:07.000Z"

// One second later in a 32-bit signed system: wraps to -2147483648
new Date(-2147483648 * 1000).toISOString()
// → "1901-12-13T20:45:52.000Z"

If you need to convert a timestamp quickly, DevCrate's Timestamp Converter handles seconds and milliseconds automatically, lets you pick any timezone, and converts in both directions — all in the browser with nothing sent anywhere.
