DEV Community

Super Funicular

1.1 Million Baby Monitors Were Watchable by Anyone — Here's the Architectural Mistake Behind It

On May 11, 2026, The Verge and PetaPixel reported that researcher Sammy Azdoufal had remotely accessed roughly 1.1 million network-connected baby monitors and security cameras made by Chinese white-label vendor Meari Technology — across 118 countries, by extracting a single key from the company's Android app.

"Just by inspecting the Android app, Azdoufal says he was able to extract a single key that gave him access to devices across 118 countries."
The Verge via PetaPixel

Photos from those cameras were stored on Alibaba Cloud servers in China, at publicly accessible web addresses with no authentication. The same backend infrastructure reportedly sits behind 378 brands sold under different names on Amazon and at retailers like Leroy Merlin. The primary hole has been patched. The architectural mistake that produced it has not.

I build an Android app called Background Camera RemoteStream (superfunicular.com). It is a structural alternative to the cloud-camera model that just failed for the millionth time. I want to talk about why this keeps happening, because "another IoT camera got hacked" is not a useful framing — and neither is "buy a more expensive one."

What actually broke

Three things, stacked:

  1. A shared key inside a public client. The Android app ships with credentials that authenticate against a central broker. If you have the app, you have the key. Reverse-engineering an APK is not hard. Researchers do this for breakfast.
  2. An MQTT broker that trusts the key. Once you have it, you can subscribe to message topics belonging to devices that aren't yours. Meari's spokesperson described it carefully: "Under specific technical conditions, attackers may intercept all messages transmitted via the EMQX IoT platform without user authorization." Translation: the broker authenticated the app, not the user.
  3. Cloud-side photo storage with public URLs. Saved snapshots lived on Alibaba object storage with predictable, unprotected addresses. Per-photo authorization wasn't enforced; knowing the URL was the access control.
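To make the third layer concrete: the fix for "knowing the URL is the access control" is per-object, per-user, expiring authorization. Here is a minimal sketch of HMAC-signed URLs, with every name hypothetical; real object stores (S3, Alibaba OSS) ship this idea as "presigned URLs":

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

# Server-side secret. Crucially, this never ships inside a client app --
# the opposite of the extracted shared key that broke the Meari setup.
SECRET = b"per-deployment-secret"

def sign_url(path: str, user_id: str, ttl: int = 300) -> str:
    """Issue a short-lived URL bound to one user and one object."""
    expires = int(time.time()) + ttl
    msg = f"{path}|{user_id}|{expires}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{path}?{urlencode({'user': user_id, 'exp': expires, 'sig': sig})}"

def verify(path: str, user_id: str, expires: int, sig: str) -> bool:
    """Reject expired links and links signed for a different user or object."""
    if time.time() > expires:
        return False
    msg = f"{path}|{user_id}|{expires}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```

With something like this in front of the bucket, guessing a snapshot path buys you nothing: the signature is tied to the path, the user, and an expiry, and the key that mints it lives only on the server.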

Each layer is fixable in isolation. What is not fixable in isolation is the structural decision underneath all three: the device is a thin client, and the cloud is the camera. Every byte of the video, every snapshot, every event has to go to a vendor-controlled backend so the vendor's app can pull it back down. That backend becomes a single point of compromise for every device that ships under every one of the 378 brand names that resells the same firmware.

This is not unique to Meari. The same researcher pulled the same trick on 7,000 DJI robot vacuums in February of this year. It is the predictable failure mode of "cheap IoT camera that streams to an app." The bug changes; the architecture doesn't.

The thing the architecture is optimizing for

Cloud-relay cameras exist because they solve a real product problem: you want to see the feed when you're not on the same Wi-Fi as the camera. NAT, dynamic IPs, mobile network restrictions — punching through all of that with a vendor-run relay is genuinely easier than asking users to configure port forwarding.

The cost is that the vendor must store, decrypt-or-not, and route everyone's video, forever. The vendor's employees can in principle look at your living room. The vendor's API keys can be extracted from the public app. The vendor's backend can be breached. The vendor can be acquired or shut down or pressured by a government.

If you only ever needed your own feed on your own network, you paid the entire cost of a centralized backend so the vendor could give you back the thing you started with.

What "local-only" actually looks like

Background Camera RemoteStream takes the other branch:

  • No vendor cloud. Recordings are stored locally on the Android device acting as the camera. There is no Meari-style backend that proxies your video. There is nothing for me, the developer, to leak — because nothing of yours ever reaches me.
  • No account. There is no email, no password, no shared key against a central broker. There is no database of users for an attacker to dump.
  • No third-party hardware vendor. The "camera" is an old Android phone you already own. The supply chain ends at your drawer.
  • Streaming, when you want it, is YouTube Live on your channel. If you want a remote feed, you authenticate to your Google account and stream to your YouTube channel — same auth surface as the rest of your YouTube use, controlled by you, revocable by you, gone the moment you stop the stream. There is no superfunicular.com relay between you and the viewer. Even I cannot watch it without your channel link.
  • Screen-off recording. The camera phone runs with the display off so it can sit on a shelf for days without burning the screen — the local-only model has no extra cost here.

This isn't a marketing checklist. It is a different threat model. The Meari breach exists because there is a server that can be breached. We do not run that server.

What this does not solve

A local-only camera does not protect you from:

  • Someone on your home Wi-Fi who already has device access. If an attacker is already inside your LAN, neither architecture saves you. Network hygiene still matters.
  • Physical access to the camera phone. If someone walks off with the device, the recordings on it are theirs.
  • Misconfigured streaming. If you start a YouTube Live and make the link public, the feed is public. That is a property of the choice to broadcast, not the architecture.

I'm not claiming the local-only model is invulnerable. I'm claiming it does not have the specific failure mode that just exposed 1.1 million homes through a single extracted key, because there is no central server to extract a key against.

The 378-brand problem

The most uncomfortable detail in the report is that 378 different camera brands sit on top of the same Meari backend. Consumers buying a "different" camera at a retailer were buying the same vulnerability. The brand name on the box told them nothing about what their feed was traveling through.

When the architecture is centralized and white-labeled, the brand on the box is decorative. The relevant question is "whose servers does this talk to," and the box almost never tells you. With a local-only Android app, the answer is: only your device, plus whatever streaming destination you explicitly point it at. That is auditable in a way that a stack of unnamed Chinese MQTT brokers is not.

If you have a Meari-derived camera right now

Per The Verge's reporting, the primary vulnerability has been patched, but Meari has not disclosed which brands are affected or whether the Chinese server with ~220,000 still-exposed users has been fully closed. Practical steps:

  1. Check whether your camera's app routes through Meari or one of the related rebrands. The companion app's name and developer-of-record on Google Play are often the easiest tell.
  2. Rotate any credentials you've reused across IoT apps. Copycat attacks tend to cluster publicly in the weeks after a disclosure like this.
  3. If you don't strictly need remote access from outside your home, consider a LAN-only or local-only setup. If you do need remote access, prefer architectures where you hold the relay keys (e.g., your own YouTube channel, your own VPN) over architectures where a vendor does.
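If you want to check your own exposure on point 1, one question is directly testable: does a stored-snapshot URL from your camera's app answer an unauthenticated GET? A minimal probe sketch, standard library only, where the URL is one you supply from your own account (this is a generic status check, not anything Meari-specific):

```python
import urllib.error
import urllib.request

def fetch_status(url: str, timeout: float = 5.0) -> int:
    """Return the HTTP status for a GET with no credentials attached.

    200 on a snapshot URL means anyone holding the link can read it;
    401/403 means the server is at least demanding authentication.
    """
    req = urllib.request.Request(url, method="GET")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        # urllib raises for 4xx/5xx; the status code is what we want.
        return e.code
```

Copy a snapshot URL out of your camera app, open a private browser window (or run the probe from a machine that has never logged in), and see what comes back. If it's 200, the link is the lock.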

That last point is the whole pitch. Background Camera RemoteStream is on Google Play, free, no account, local-only by default, optional YouTube Live to your own channel. If the Meari story made you uncomfortable, that discomfort is a signal worth following — not toward a more expensive camera, but toward a smaller blast radius.


App: Background Camera RemoteStream on Google Play
Website: superfunicular.com
Source for this article: PetaPixel — "Anyone Could Have Been Watching Your Kids on Certain Baby Monitors" (covering The Verge's original report)
