Alan West

How to fix slow JavaScript builds before reaching for a Rust rewrite

The problem: your build is slow and getting slower

We've all been there. Your project starts small, builds take 2 seconds, and life is good. Six months later you've got 50 packages in a monorepo, a CI pipeline that takes 12 minutes, and developers staring at their terminals while the bundler chugs through node_modules for the hundredth time today.

Then someone on Hacker News mentions that some tool got rewritten in Rust and suddenly the team is in a Slack thread debating whether to migrate the entire build pipeline to a different runtime.

I get it. I really do. I spent two weeks last quarter trying to migrate a build pipeline before realizing the actual bottleneck was a 200-line postcss plugin that someone wrote in 2019 and never touched again.

This post is about how to figure out where the slowness actually lives, before you commit to rewriting half your toolchain.

Root cause: where build time actually goes

Build slowness usually comes from one of four places:

  • Filesystem I/O — reading thousands of small files from node_modules
  • AST work — parsing, transforming, and serializing JavaScript/TypeScript
  • User-land plugins — custom transforms, postcss plugins, babel plugins
  • Type checking — running tsc on the side

Most teams assume "the bundler is slow" without checking which of these is the culprit. And different bottlenecks need different fixes. Migrating from a JS-based bundler to a native one helps when the bottleneck is AST work. It does basically nothing if your plugin is spawning a child process for every file.

Profile first. Migrate later.

Step 1: measure where the time actually goes

Most bundlers have a profiling option that nobody uses. Webpack has --profile, recent versions of Vite have a --profile flag, and esbuild has --analyze. Start there.

# Webpack profile dump — produces stats.json you can load in a visualizer
npx webpack --profile --json > stats.json

# Load stats.json in the official analyse tool:
# https://webpack.github.io/analyse/

For any Node.js build script, you can also use the built-in V8 inspector:

# Drops .cpuprofile files in ./profiles you can open in Chrome DevTools
node --cpu-prof --cpu-prof-dir=./profiles ./node_modules/.bin/your-bundler

Open the profile in Chrome DevTools (Performance tab → Load profile). Look for wide flat sections in the flame chart — those are your hot functions.

When I did this on a recent project, 40% of the build time was being spent in a regex that someone had written to strip BOM characters. From every file. With a non-anchored pattern. It had been there for three years.
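For illustration, here's the shape of that fix (the function names are mine, not the actual plugin's). A BOM can only appear at offset 0, so scanning the whole file for it is wasted work:

```javascript
// Slow: a global, non-anchored pattern walks the entire string on every file
function stripBomSlow(source) {
  return source.replace(/\uFEFF/g, '');
}

// Fast: an O(1) check at the only position a BOM can occupy
function stripBom(source) {
  return source.charCodeAt(0) === 0xfeff ? source.slice(1) : source;
}
```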

Step 2: cache aggressively, especially type checking

If you're running tsc as part of your build, it's almost certainly part of your problem. TypeScript's incremental mode is underused:

// tsconfig.json
{
  "compilerOptions": {
    "incremental": true,
    "tsBuildInfoFile": "./.tsbuildinfo",
    // Skip checking type declarations inside node_modules
    "skipLibCheck": true
  }
}

For monorepos, project references can take a 60-second tsc run down to single-digit seconds on incremental builds. The official docs are at https://www.typescriptlang.org/docs/handbook/project-references.html and worth reading if you haven't.
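A minimal setup, assuming a hypothetical app package that depends on a utils package, looks roughly like this:

```json
// packages/app/tsconfig.json
{
  "compilerOptions": {
    "composite": true,
    "incremental": true
  },
  // Tells tsc that app depends on utils, so `tsc --build` can skip
  // recompiling utils when only app's sources changed
  "references": [{ "path": "../utils" }]
}
```

Each referenced package also needs "composite": true in its own tsconfig, and you build with `tsc --build` (or `tsc -b`) instead of plain `tsc`.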

Also: separate type-checking from transpilation. Transpilers like esbuild and swc don't type-check, and that's a feature. Run tsc --noEmit once in CI on the whole repo, and let the bundler just strip types during local development. You'll be amazed how much faster dev mode gets.
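One way to wire that split, with illustrative script names:

```json
// package.json — dev mode never waits on the type checker;
// the full `tsc --noEmit` pass runs once, in CI or on prod builds
{
  "scripts": {
    "dev": "vite",
    "typecheck": "tsc --noEmit",
    "build": "npm run typecheck && vite build"
  }
}
```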

Step 3: cut your plugin count

Every plugin in a bundler chain has to either parse the file again or do work on the AST that was already produced. The cost compounds.

// Bad: three separate plugins, three AST walks per file
plugins: [
  babelPlugin(),
  postcssPluginThatAlsoParsesJS(),
  customStripCommentsPlugin(),
]

// Better: lean on what your transformer can do natively.
// esbuild, for example, drops console/debugger without a plugin.
build({
  drop: ['console', 'debugger'],
  // ...
})

I migrated a project from a babel chain to esbuild's built-in transforms last year and shaved 8 seconds off cold builds. Not because esbuild is fast in absolute terms (though it is), but because I deleted six plugins that were doing redundant AST walks.

Step 4: fix your filesystem patterns

This one is sneaky. If your bundler is doing a deep glob over node_modules looking for something, that alone can cost seconds. Common offenders:

  • Watchers that scan node_modules on startup
  • Resolution that walks up the tree checking every package.json
  • Source map generation that re-reads every source file

Configure your watcher to ignore node_modules and any build output directories:

// vite.config.js — same idea works for Webpack/Rollup configs
export default {
  server: {
    watch: {
      // Don't waste fs events on directories the user never edits
      ignored: ['**/node_modules/**', '**/dist/**', '**/.cache/**'],
    },
  },
}

When a native rewrite actually helps

Look, sometimes the answer really is "use a native tool." Native bundlers and runtimes do skip the JavaScript-to-AST interpretation overhead, and for very large projects that genuinely matters. But the speedup comes from skipping work, not from the language the tool happens to be written in.

You'll get the biggest wins from a native tool when:

  • Your project has thousands of source files and parsing dominates the profile
  • You're not bottlenecked on a single slow user-land plugin
  • Your CI machines have enough cores for the tool to parallelize

If your profile shows that 70% of your build time is in one custom plugin, switching the bundler underneath it changes very little. You'd just have a faster runtime spending most of its time waiting on the same slow plugin.

Prevention: how to keep builds fast over time

A few habits that have served me well:

  • Add a build-time budget to CI. If a PR pushes the cold build past N seconds, fail the check. It's annoying for two days and great forever after.
  • Review plugin additions in code review. Every plugin should justify its existence. "We needed it once" is not justification.
  • Profile every quarter. Build performance rots silently. Set a calendar reminder.
  • Keep dev and prod modes separate. Dev doesn't need minification. Prod doesn't need inline source maps.

I haven't tested this thoroughly yet, but I've started using a small script that runs the cold build, parses the CPU profile, and posts a comment on the PR with the top three hot functions. So far it catches regressions early without much noise.

Slow builds aren't inevitable. They're almost always the result of accumulated, unmeasured complexity. Measure first, then change one thing at a time. Rewriting your toolchain is a fine option — but it's usually the last one to reach for, not the first.
