Most online image compressors have one thing in common: your files go to their server. You upload a private photo, it travels across the internet, gets processed on someone else's machine, and comes back. That always felt wrong to me.
So I built CompressImg — an image compressor that does everything in your browser. Your files never leave your device.
Here's how it works under the hood.
## The core: browser-image-compression + Web Worker
The heavy lifting is done by the `browser-image-compression` library, but the key is where it runs:
```ts
// compress.ts
export async function compressImage(file: File, options: { quality: number }) {
  const { default: imageCompression } = await import('browser-image-compression')
  return imageCompression(file, {
    maxSizeMB: undefined, // don't limit by size, use quality instead
    initialQuality: options.quality / 100,
    alwaysKeepResolution: true, // never upscale
    maxWidthOrHeight: 1920, // cap at screen resolution
    useWebWorker: true, // off main thread
  })
}
```
Two things matter here:

- **Dynamic import** — `browser-image-compression` is only loaded when the user actually compresses an image. It's not in the initial bundle.
- **`useWebWorker: true`** — compression runs off the main thread. The UI stays responsive even for large files.
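Dynamic `import()` calls are already cached by the browser, but the same "lazy, exactly once" pattern is useful for other one-time async setup (spawning a worker, warming up a decoder). A generic sketch — this helper and its names are mine, not code from CompressImg:

```typescript
// Lazily run an async initializer exactly once, caching the promise so that
// concurrent callers share the same in-flight work instead of repeating it.
function lazyOnce<T>(load: () => Promise<T>): () => Promise<T> {
  let cached: Promise<T> | null = null
  return () => (cached ??= load())
}

// Usage sketch (assumed, not from the real codebase):
// const getCompressor = lazyOnce(() => import('browser-image-compression'))
```

Because the cached value is the promise itself, a second call made while the first load is still pending reuses it rather than kicking off a duplicate load.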
## The stack
- **Next.js 14** with `output: 'export'` — generates a pure static site. No server, no API routes, nothing to scale.
- **Vercel** — serves the static files from a global CDN. Handles any number of concurrent users automatically.
- **Cloudflare** — DNS only (gray cloud). Vercel manages SSL.
## The PageSpeed problem
My first PageSpeed score on mobile was LCP 7.1s. Way off the < 2.5s target.
The culprit turned out to be three things stacking up:
### 1. Font loading
```ts
// Before
const inter = Inter({ subsets: ['latin'], display: 'swap' })

// After
const inter = Inter({ subsets: ['latin'], display: 'optional' })
```
With `display: swap`, the browser renders text with a fallback font, then swaps to Inter when it loads. In the mobile simulation (slow 4G), Inter was loading at ~5s — and LCP was measured at the swap moment.

With `display: optional`, if the font isn't immediately available, the browser uses the system font and never swaps. LCP is measured right away.
### 2. Third-party scripts competing for bandwidth
AdSense (231 KiB) and GA4 (140 KiB) were loading with `afterInteractive`, which fires right after hydration. On mobile, this competed with the LCP element for bandwidth.
```tsx
// Before
strategy="afterInteractive"

// After
strategy="lazyOnload"
```

`lazyOnload` defers the scripts until the browser is truly idle — after LCP has already painted.
### 3. ContentSection as a client component
I had added `'use client'` to the ContentSection (1000+ words of SEO text) to handle the FAQ accordion. This caused all that static text to be bundled as JavaScript instead of served as static HTML.

Fix: extract the interactive part (the accordion toggle) into its own tiny client component, and let the rest be server-rendered HTML.
```tsx
// FAQItem.tsx — only this needs 'use client'
'use client'
import { useState } from 'react'

export default function FAQItem({ question, answer }: { question: string; answer: string }) {
  const [open, setOpen] = useState(false)
  // ... toggle `open` on click, render `answer` when open
}
```
After all three fixes: LCP dropped from 7.1s to 2.0s.
## Privacy by design
The architecture makes privacy the default, not a feature:
- Static export = no server to log requests
- Compression via the Canvas API = no data transmission
- No cookies, no session storage, no user tracking beyond GA4 page views
- `ads.txt` published for AdSense transparency
## What's next
The tool is live at compressimg.pro. The next step is adding more tools (resize, format conversion) once the image-compression SEO establishes a baseline.
If you're building something similar, the main lesson: measure PageSpeed early and often. The LCP issue would have been invisible without it.
Built with Next.js, deployed on Vercel. Questions welcome in the comments.
---

## Update: three weeks later

It's been about 3 weeks since I published this post. Here's what I learned.
## The formats people actually need
I thought JPG and PNG would cover 95% of use cases. I was wrong.
The first thing users asked about was HEIC — iPhone photos that Windows can't open natively. I added HEIC to JPG conversion using `heic2any`, with a dynamic import so it doesn't bloat the main bundle:
```ts
const heic2any = await import('heic2any')
// heic2any resolves to a Blob (or a Blob[] for multi-image HEIC files)
const blob = await heic2any.default({ blob: file, toType: 'image/jpeg' })
```
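One wrinkle worth knowing before you can even call that: browsers often report an empty `file.type` for HEIC uploads (the OS may not have the MIME type registered), so detection has to fall back to the file extension. A small helper sketching that check — my own code, not CompressImg's:

```typescript
// Detect HEIC/HEIF uploads. Browsers frequently leave `type` empty for
// formats the OS doesn't recognize, so check the extension as a fallback.
function isHeic(file: { name: string; type: string }): boolean {
  const t = file.type.toLowerCase()
  if (t === 'image/heic' || t === 'image/heif') return true
  return /\.(heic|heif)$/i.test(file.name)
}
```

The `{ name, type }` shape is the subset of the DOM `File` interface the check needs, which also keeps the helper trivially unit-testable outside a browser.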
Then came GIF. Animated GIF compression is a completely different problem — you can't just run it through Canvas. I ended up with `gifuct-js` to decode frames and `gif.js` with a Web Worker to re-encode. The worker file has to live at `/public/gif.worker.js` — it took me an embarrassing amount of time to figure that out.
I also added TIFF (via `utif`), AVIF (native Canvas support in Chrome), and BMP.
## The SEO side
The compression logic was the easy part. The harder problem: how do you get
Google to care about a new free tool?
I ended up building 72 static pages with Next.js `output: 'export'` — one for each format, platform, and use case. Pages like "compress image to 100KB" target very specific long-tail queries ("compress photo for government form", "image under 100kb for email attachment") that have real search volume and almost no competition from the big tools.
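A page count like 72 falls out of crossing a few short lists and feeding the result to `generateStaticParams`. The lists and slug shapes below are illustrative guesses, not the site's actual ones:

```typescript
// Build long-tail page slugs from small lists of formats, platforms, and
// size targets. Illustrative data only — the real site's lists differ.
const formats = ['jpg', 'png', 'webp', 'heic']
const platforms = ['email', 'linkedin', 'teams', 'government-form']
const sizes = ['50kb', '100kb', '200kb']

function buildSlugs(formats: string[], platforms: string[], sizes: string[]): string[] {
  return [
    ...formats.map((f) => `compress-${f}`),
    ...platforms.map((p) => `compress-image-for-${p}`),
    ...sizes.map((s) => `compress-image-to-${s}`),
  ]
}

// In app/[slug]/page.tsx (sketch):
// export function generateStaticParams() {
//   return buildSlugs(formats, platforms, sizes).map((slug) => ({ slug }))
// }
```

With `output: 'export'`, every slug returned here becomes a pre-rendered HTML file at build time, so adding a new use case is a one-line change to a list.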
Three weeks in: 534 impressions on Google Search Console, pages ranking
between position 8–15 for terms like "does Microsoft Teams compress images"
and "compress image for LinkedIn". No clicks yet — that comes when you break
into the top 5. But the trajectory is there.
The key technical decision that made this work: every page is fully static (SSG, no SSR), with its own `layout.tsx` containing JSON-LD structured data (WebApplication + FAQPage schema). Google indexes these far faster than client-rendered pages.
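For reference, the WebApplication half of that JSON-LD looks roughly like this. The name and URL come from the post; the remaining fields are standard schema.org properties I'd expect such a page to use, not values confirmed from the site:

```typescript
// Minimal WebApplication JSON-LD payload. In layout.tsx it would be rendered
// into <head> via <script type="application/ld+json">. Fields beyond name
// and url are assumptions.
const webAppJsonLd = {
  '@context': 'https://schema.org',
  '@type': 'WebApplication',
  name: 'CompressImg',
  url: 'https://compressimg.pro',
  applicationCategory: 'MultimediaApplication',
  operatingSystem: 'Any',
  offers: { '@type': 'Offer', price: '0', priceCurrency: 'USD' },
}

// The serialized string is what actually ships in the HTML.
const markup = JSON.stringify(webAppJsonLd)
```

Serving this inside fully static HTML means crawlers see the structured data on the first fetch, with no JavaScript execution required.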
## What's next
The tool is live at compressimg.pro — still
100% free, no account, no uploads to any server.
If you're building something similar, the biggest lesson: pick client-side
processing from day one. Not just for privacy — it makes the whole
architecture simpler. No server to maintain, no storage costs, no GDPR
headaches, free hosting on Vercel forever.
Happy to answer questions about any of the implementation details below.