If you have a Notion integration that "fetches all the rows in this database" — a sync job, an export, a reporting pipeline — it may have started returning incomplete data without throwing anything. As of an early-2026 API change, Notion's paginated query and list endpoints enforce a hard 10,000-result maximum pagination depth. Past that point you don't get an error. You get a 200 OK, no next_cursor, and a new field telling you the result set was truncated — a field most existing code has never heard of and doesn't check.
So the loop terminates normally, the caller treats the partial set as the whole set, and everything downstream — the warehouse table, the dashboard, the "we synced N records" log line — is quietly wrong for every database with more than 10k matching rows.
What Actually Changed
The classic Notion pagination contract was: call the endpoint, read results, if has_more is true call again with start_cursor: next_cursor, repeat until has_more is false. That contract still holds — for the first 10,000 results.
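For reference, a minimal sketch of that classic loop as it's usually written against the JS SDK (same notion.dataSources.query endpoint as the fix later in this post; dataSourceId is a placeholder). This is the exact shape that now terminates early without complaint:

const { Client } = require("@notionhq/client");
const notion = new Client({ auth: process.env.NOTION_TOKEN });

// The pre-cap contract: page until has_more goes false.
// Nothing in this loop can notice a truncated result set.
async function queryAll(dataSourceId) {
  const rows = [];
  let cursor = undefined;
  let hasMore = true;
  while (hasMore) {
    const res = await notion.dataSources.query({
      data_source_id: dataSourceId,
      start_cursor: cursor,
      page_size: 100,
    });
    rows.push(...res.results);
    hasMore = res.has_more; // false at the real end AND at the 10k cap
    cursor = res.next_cursor ?? undefined;
  }
  return rows;
}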
Once a paginated query would cross the 10,000-result boundary, Notion stops the cursor walk and returns a response shaped like this:
{
  "object": "list",
  "results": [ /* ...the last page within the 10k window... */ ],
  "next_cursor": null,
  "has_more": false,
  "request_status": {
    "type": "incomplete",
    "incomplete_reason": "query_result_limit_reached"
  }
}
The tell is request_status. On a normal, fully-paginated response it's either absent or "type": "complete". On a truncated one it's "type": "incomplete" with incomplete_reason: "query_result_limit_reached". Notice what isn't different: has_more is false (just like a real end-of-results), next_cursor is null (just like a real end-of-results), the HTTP status is 200, and the results array is a perfectly valid array of perfectly valid pages. Nothing about the response trips an exception, a schema validator, or an HTTP-status check.
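Distilled into a check (isComplete is a hypothetical helper name, but the field logic is exactly as described above):

// True only when Notion reports the result set as complete.
function isComplete(listResponse) {
  // Absent request_status, or type "complete": the cursor walk finished.
  // Type "incomplete": the 10k cap truncated the result set.
  const status = listResponse.request_status;
  return status === undefined || status.type === "complete";
}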
This applies to the paginated endpoints that can match large numbers of objects — database/data-source queries, and the list endpoints (users, comments, block children, search) — anywhere a single logical query could exceed 10k results.
Why This Is a Silent Failure, Not a Loud One
Walk through what each layer of a typical integration sees:
- The pagination loop: while (response.has_more) { ... }. On a truncated response has_more is false, so the loop exits cleanly on the first iteration that hits the cap. From the loop's perspective this is indistinguishable from "we reached the last page." No retry, no warning.
- The SDK: the official @notionhq/client (and the auto-paginating helpers built on it, like iteratePaginatedAPI) follow the same has_more/next_cursor contract. They stop when the cursor runs out. They don't inspect request_status and they don't throw — there's nothing to throw on; the server returned a valid 200. (See the sketch after this list.)
- Schema validation: if you validate the response, request_status is an additive field. A truncated response is still a structurally valid list response. Strict validators that reject unknown fields might trip — but most don't, and even then the error says "unexpected field," not "your data is incomplete."
- Your "rows synced" metric: it logs however many rows came back. 10,000 is a plausible-looking number. Nobody alerts on "synced exactly 10,000 records" because that's not obviously wrong.
- The data consumer: the warehouse table, the BI dashboard, the downstream API. It sees a smaller-than-expected dataset and has no way to know whether that's because rows were deleted in Notion or because the sync truncated. It renders. It looks fine.
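To make the SDK point concrete, here is what the auto-paginating helper does against an oversized query. iteratePaginatedAPI is the real helper from @notionhq/client; the data source and its ~14,000 matching rows are hypothetical:

const { Client, iteratePaginatedAPI } = require("@notionhq/client");
const notion = new Client({ auth: process.env.NOTION_TOKEN });

async function collectAll(dataSourceId) {
  const rows = [];
  // Against a hypothetical database with ~14,000 matching rows, this
  // loop ends after ~10,000: the truncated page reports has_more: false,
  // so the iterator finishes as if it had reached the last page.
  for await (const row of iteratePaginatedAPI(notion.dataSources.query, {
    data_source_id: dataSourceId,
  })) {
    rows.push(row);
  }
  return rows; // silently short; no error, no warning
}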
The first real signal is usually a human: someone notices a record that exists in Notion isn't in the report, files a "data is stale" ticket, and a few hours of debugging later you find the sync has been silently capped for weeks.
Who's Exposed
- Database-to-warehouse / database-to-spreadsheet sync tools pulling large Notion databases (project trackers, CRMs, content calendars, issue logs that have grown over years).
- Backup and export jobs that walk every row of every database.
- Internal dashboards and reporting pipelines that re-query a big database on a schedule.
- Migration scripts moving content out of Notion — the worst case, because you run it once, it "succeeds," you decommission the source, and you don't discover the missing 30% until much later.
- Anything using iteratePaginatedAPI or a hand-rolled has_more loop against a query that returns more than 10k objects.
If your databases are all comfortably under 10k matching rows for every query you run, you're fine — for now. The risk is the database that crosses the line six months from now, on a code path nobody's looked at since it was written.
How To Detect It
1. Check request_status on every paginated response. This is the actual fix. Anywhere you loop on has_more, also look at request_status:
const { Client } = require("@notionhq/client");

const notion = new Client({ auth: process.env.NOTION_TOKEN });

async function queryAllRows(dataSourceId, filter) {
  const rows = [];
  let cursor = undefined;
  while (true) {
    const res = await notion.dataSources.query({
      data_source_id: dataSourceId,
      filter,
      start_cursor: cursor,
      page_size: 100, // max page size; fewer round trips
    });
    rows.push(...res.results);

    // The new part: detect truncation explicitly.
    if (res.request_status?.type === "incomplete") {
      throw new Error(
        `Notion query truncated: ${res.request_status.incomplete_reason}. ` +
          `Got ${rows.length} rows; result set exceeds the 10,000 pagination cap. ` +
          `Narrow the query with a more selective filter or partition by a property range.`
      );
    }

    if (!res.has_more) break;
    cursor = res.next_cursor;
  }
  return rows;
}
Throwing is the right default for a sync job: a loud failure you can see beats a quiet truncation you can't. If you'd rather degrade gracefully, at minimum increment a metric and log a warning with the row count — don't let incomplete pass unobserved.
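A sketch of that degrade-gracefully variant: same detection as queryAllRows above (and it reuses the same notion client), but it records the truncation instead of aborting. The metrics and logger objects are placeholders for whatever observability stack you already use:

async function queryAllRowsTolerant(dataSourceId, filter, { metrics, logger }) {
  const rows = [];
  let cursor = undefined;
  while (true) {
    const res = await notion.dataSources.query({
      data_source_id: dataSourceId,
      filter,
      start_cursor: cursor,
      page_size: 100,
    });
    rows.push(...res.results);
    if (res.request_status?.type === "incomplete") {
      // Record the truncation loudly enough to alert on, then return
      // what we have, flagged as partial.
      metrics.increment("notion.query.truncated"); // placeholder metrics client
      logger.warn(
        `Notion query truncated after ${rows.length} rows ` +
          `(${res.request_status.incomplete_reason}); downstream data is partial.`
      );
      return { rows, complete: false };
    }
    if (!res.has_more) break;
    cursor = res.next_cursor;
  }
  return { rows, complete: true };
}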
2. Re-architect queries that legitimately exceed 10k. The cap is per query, not per database. If a database genuinely has more than 10,000 rows you care about, partition the query: filter by a date range, a status, a created-time window, or an alphabetical slice of a title property, and walk each partition separately. Each partition's pagination still has to stay under 10k, so size your partitions accordingly.
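One way to partition, sketched under assumptions: created_time windows, Notion's documented timestamp filter shape, and the queryAllRows function from step 1. The 30-day window size is a guess; tune it so no window's result set approaches 10k:

const WINDOW_MS = 30 * 24 * 60 * 60 * 1000; // assumed partition size

// Walk a large data source in created_time windows so no single
// query's pagination has to cross the 10k cap.
async function queryByCreatedWindows(dataSourceId, startISO, endISO) {
  const all = [];
  let from = new Date(startISO);
  const end = new Date(endISO);
  while (from < end) {
    const to = new Date(Math.min(from.getTime() + WINDOW_MS, end.getTime()));
    const rows = await queryAllRows(dataSourceId, {
      and: [
        { timestamp: "created_time", created_time: { on_or_after: from.toISOString() } },
        { timestamp: "created_time", created_time: { before: to.toISOString() } },
      ],
    });
    all.push(...rows); // queryAllRows throws if a window itself exceeds 10k
    from = to;
  }
  return all;
}

If a single window still trips the cap, the throw from step 1 tells you to shrink the window rather than silently dropping rows.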
3. Add a cross-check on row counts. If you know roughly how many rows a database should have (or you can get a count another way), assert that your sync pulled within tolerance of it. A sync that returns exactly 10,000 rows when you expected ~14,000 should page someone.
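A sketch of that assertion, assuming you can get an expected count from somewhere else (a previous sync, a warehouse count); the function name and the 5% default tolerance are placeholders:

function assertRowCountSane(syncedCount, expectedCount, tolerance = 0.05) {
  // A sync that comes back well under expectations is more likely
  // truncated than genuinely shrunk; fail loudly and let a human look.
  if (expectedCount > 0 && syncedCount < expectedCount * (1 - tolerance)) {
    throw new Error(
      `Synced ${syncedCount} rows but expected ~${expectedCount}; ` +
        `possible silent truncation at the 10k pagination cap.`
    );
  }
}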
4. Search your codebase for the patterns that are exposed. Grep for has_more, next_cursor, iteratePaginatedAPI, start_cursor. Every match against a Notion query is a place to add the request_status check. If you find the string query_result_limit_reached showing up in logs you didn't write that handler for, it's already happening.
The Pattern
This is the same shape as a lot of recent API changes: a vendor adds a limit, communicates it as a new field rather than a new error, and the failure mode lands in the gap between "the response is structurally valid" and "the data is actually complete." HTTP-status checks miss it. Schema validators miss it. SDKs that only know the old has_more contract miss it. The only thing that catches it is code — or monitoring — that knows the new field exists and treats incomplete as the alarm it is.
If you run integrations against third-party APIs, this is worth a standing habit: when a provider adds a status/result-metadata field to a response you already parse, assume there's a silent-failure path hiding behind it, and go check what your code does when that field says "incomplete."