I ran 10,000 outbound sequences across six domains last quarter and watched Apollo report a 4.2% "bounce rate" on a list whose SMTP logs told a completely different story. When I dug into the raw delivery receipts, I found three separate failure modes all bucketed under that same number — and each one pointed to a different problem requiring a different fix. Most articles explaining B2B outbound email bounce rates stop at "hard bounce = bad address, soft bounce = temporary." That taxonomy is real, but it's also nearly useless if you're debugging an actual campaign. Here's what the tool vendors aren't surfacing.
Your Bounce Number Is Three Different Problems Wearing One Label
Let me give you the actual technical breakdown before touching what tools report.
At the SMTP layer, rejection codes are structured: 5xx codes are permanent failures, 4xx codes are temporary deferrals. That's the foundation. But "bounce" in your sequencer dashboard almost never maps cleanly to this.
SMTP 5xx: Permanent rejection at the receiving MTA. The destination mail server accepted the connection and explicitly refused delivery. The most common are:
- 550 5.1.1 — Mailbox does not exist. This is list rot. The address was valid at some point, the person left, the domain admin disabled the mailbox.
- 550 5.7.1 — Message rejected due to policy. This is a sending reputation signal. The receiving server knows something about your IP or domain it doesn't like.
- 552 5.2.2 — Mailbox full. Technically permanent in the SMTP spec but often indicates an abandoned or over-quota account rather than a deleted one.
- 553 5.1.3 — Address format invalid. This means something in your data pipeline corrupted the address string itself.
SMTP 4xx: Temporary deferral. The receiving server is telling your MTA "try again later." Your sequencer retries, and if it succeeds within the retry window, it never shows as a bounce at all. If retries exhaust, most tools reclassify it as a bounce — but it's categorically different from a 5xx.
- 421 4.7.0 — Service temporarily unavailable. Often greylisting: the first delivery attempt from an unknown sender gets deferred. Legitimate MTA behavior, common in enterprise mail infrastructure.
- 450 4.2.1 — Mailbox temporarily unavailable. Could be server maintenance, could be the user's account being rate-limited by their admin.
- 451 4.7.651 — This specific Microsoft code means your sending IP hit Microsoft's anti-spam threshold. It looks like a temporary deferral. It is not. It's an infrastructure signal, not a data signal.
ISP-level blocks that don't generate SMTP codes. This is the one that really breaks tool reporting. If your sending IP is on a blocklist that the receiving MTA checks before even completing the SMTP handshake, the connection gets dropped or refused before any SMTP dialogue happens. Your sequencer logs a connection failure. Depending on how the tool handles this, it either doesn't count it at all, counts it as an "error," or — in some implementations — rolls it into the bounce number. There's no 5xx code attached because the SMTP session never completed.
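To make that taxonomy operational, here's a minimal sketch of how I'd bucket raw delivery outcomes into separate counters instead of one bounce number. The record shape and field names are assumptions; adapt them to whatever your MTA or ESP actually logs.

```python
from dataclasses import dataclass

# Hypothetical record shape -- adjust to your MTA/ESP's actual log fields.
@dataclass
class DeliveryAttempt:
    recipient: str
    smtp_code: int | None      # e.g. 550, 451; None if no final status was recorded
    enhanced_code: str | None  # e.g. "5.1.1"; None if unavailable
    connected: bool            # False if the TCP/SMTP handshake never completed

def classify(attempt: DeliveryAttempt) -> str:
    """Bucket one delivery attempt into the failure categories described above."""
    if not attempt.connected:
        # No SMTP dialogue at all -- likely a pre-handshake blocklist drop.
        return "connection_failure"
    if attempt.smtp_code is None:
        return "delivered_or_unknown"
    if 500 <= attempt.smtp_code < 600:
        if attempt.enhanced_code == "5.1.1":
            return "hard_bounce_list_rot"
        if attempt.enhanced_code == "5.7.1":
            return "hard_bounce_reputation"
        return "hard_bounce_other"
    if 400 <= attempt.smtp_code < 500:
        # Only a problem if retries exhaust; track separately from the 5xx buckets.
        return "deferral"
    return "delivered_or_unknown"
```

The point of splitting it this way is that a single "bounce rate" collapses at least four of these buckets into one number.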
What Apollo and Snov.io Actually Report (And Where It Breaks)
I've tested both platforms heavily, and the gap between what they show you and what actually happened is significant.
Apollo reports a unified "bounce" metric in campaign analytics. When I extracted the same send data through their API and cross-referenced against raw SMTP logs from the sending infrastructure, I found their "bounce" bucket contained:
- True 550 5.1.1 hard bounces (list rot)
- Exhausted 4xx retries that never resolved
- Some connection-level failures from blocklist rejections
The ratio varied by domain health. On a clean domain sending to a healthy list, roughly 70% of Apollo's reported bounces were genuine 5.1.1 mailbox-not-found. On a domain with mild reputation problems, that ratio flipped — I was seeing more exhausted 4xxs and connection failures masquerading as data quality problems.
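If you want to reproduce that kind of cross-check, the mechanics are just a join on recipient address between the tool's bounce export and your own delivery log. A rough sketch below; the file names and column names are hypothetical placeholders, not Apollo's actual export schema.

```python
import csv
from collections import Counter

# Hypothetical exports: a sequencer bounce report and your own SMTP log dump.
with open("sequencer_bounces.csv", newline="") as f:
    reported_bounces = {row["recipient"].lower() for row in csv.DictReader(f)}

smtp_outcomes = {}
with open("smtp_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        # e.g. "550 5.1.1", "451 4.7.651", "conn_refused"
        smtp_outcomes[row["recipient"].lower()] = row["status"]

# What does the tool's "bounce" bucket actually contain?
breakdown = Counter(smtp_outcomes.get(addr, "not_in_smtp_log") for addr in reported_bounces)
for status, count in breakdown.most_common():
    print(f"{status}: {count}")
```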
Snov.io is slightly more transparent. Their campaign reports separate "hard bounces" from "soft bounces," but the classification logic is opaque. Based on my testing, what they call a soft bounce is any 4xx that exhausted retries, which is technically correct but hides the distinction between greylisting (a non-event that resolved fine for other senders) and actual Microsoft 451 throttling (a real infrastructure problem).
Comparison of bounce reporting across tools I've tested:
| Tool | Shows SMTP code | Separates 5.1.1 from 5.7.1 | Surfaces connection failures | Retry visibility |
|---|---|---|---|---|
| Apollo | No | No | Mixed into bounce count | No |
| Snov.io | No | No | Separate "error" sometimes | Partial |
| Smartlead | No | No | Errors separated | Yes (retry log) |
| Instantly | No | No | Errors separated | Yes |
| Hunter.io Campaigns | No | Partial (flags policy blocks) | Separate | No |
| Mailshake | No | No | Mixed | No |
None of them give you raw SMTP codes in the UI. To get that, you need to be routing through something like SendGrid, Postmark, or your own Postfix setup where you can actually read the bounce messages.
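If you are routing through infrastructure that hands you the raw bounce text (a Postfix bounce mailbox, or an ESP that forwards the DSN), pulling the status codes out is usually one regex. This sketch assumes the bounce body contains a standard "550 5.1.1"-style status line; real DSNs vary a lot by receiving MTA.

```python
import re

# Matches "550 5.1.1", "451 4.7.651", etc. anywhere in a bounce message body.
STATUS_RE = re.compile(r"\b([45]\d{2})[ -](\d\.\d{1,3}\.\d{1,3})\b")

def extract_status(bounce_text: str) -> tuple[str, str] | None:
    """Return (smtp_code, enhanced_code) from a raw DSN/bounce body, or None if not found."""
    m = STATUS_RE.search(bounce_text)
    return (m.group(1), m.group(2)) if m else None

# Example against a typical Postfix-style bounce excerpt:
sample = "<jane@example.com>: host mx.example.com said: 550 5.1.1 User unknown (in reply to RCPT TO command)"
print(extract_status(sample))  # ('550', '5.1.1')
```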
List Rot vs. Infrastructure Rot: How to Tell Them Apart
This is the diagnostic question that actually matters. Conflating the two causes misdiagnosis: you either burn time re-verifying a list that's fine, or you keep sending from a domain that's already damaged.
Signals that point to list rot (data quality problem):
- Bounce rate spikes unevenly across different companies in the same sequence. If you're seeing 12% bounces on contacts at mid-market SaaS but 1% on enterprise contacts, the problem is segmented to that data source.
- Your bounce addresses cluster around specific domains. Run a `GROUP BY` on the domain portion of your bounced addresses (a few-line sketch follows this list). If three domains account for 60% of your hard bounces, those companies probably had layoffs or domain migrations.
- The bouncing addresses are older contacts. Pull the "created date" or "enriched date" metadata if your enrichment tool tracks it. PDL, Clearbit, and Clay all timestamp their data — addresses enriched 18+ months ago have measurably higher decay rates, especially in tech.
- RocketReach or Wiza verification catches them on re-check. If you run the bounced addresses back through a real-time SMTP verification tool and they fail there too, the data is bad. If they pass, your sending infrastructure is the problem.
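The domain clustering check doesn't need a database; a few lines over a bounce export do the same job. Sketch below, assuming a plain text file with one bounced address per line (the filename is a placeholder).

```python
from collections import Counter

# One bounced address per line -- hypothetical export filename.
with open("hard_bounces.txt") as f:
    domains = Counter(line.strip().lower().rsplit("@", 1)[-1] for line in f if "@" in line)

total = sum(domains.values())
for domain, count in domains.most_common(10):
    print(f"{domain}: {count} ({count / total:.0%})")
# If a handful of domains dominate, you're looking at list rot, not infrastructure.
```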
Signals that point to infrastructure rot (deliverability problem):
- Bounce rate is relatively uniform across all company sizes and industries in your list.
- Exhausted 4xx retries make up a disproportionately large share of total bounces. If you can get this data from your sending infrastructure logs, more than roughly 30% of total failures being 4xx-origin is a yellow flag (a quick way to compute the ratio follows this list).
- The Microsoft 451 4.7.651 code appears in your logs. This is almost always an infrastructure signal. Microsoft's Sender Support team confirms that this code indicates your sending IP crossed a complaint or volume threshold.
- Your reply rate dropped before your bounce rate climbed. Deliverability deterioration usually hits inbox placement first. By the time bounces increase, the damage is already in progress.
- Running the same template from a clean secondary domain produces normal bounce rates. This is the cleanest diagnostic test available.
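The 4xx-exhaustion ratio from the list above is cheap to compute once each failed send is tagged with the last status your infrastructure recorded. A minimal sketch, with illustrative values standing in for real log data:

```python
# final_statuses: recipient -> last SMTP status seen before the tool gave up.
# Values here are illustrative; feed in whatever your logs actually record.
final_statuses = {
    "a@example.com": "550 5.1.1",
    "b@example.com": "451 4.7.651",
    "c@example.com": "421 4.7.0",
    "d@example.com": "550 5.7.1",
}

failures = list(final_statuses.values())
deferral_origin = sum(1 for status in failures if status.startswith("4"))
ratio = deferral_origin / len(failures) if failures else 0.0
print(f"4xx-origin share of failures: {ratio:.0%}")  # above ~30% leans infrastructure, not data
```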
Reading Signal From Verification Tool Coverage Gaps
One layer the competing articles completely miss: bounce rate is also a function of how your verification tool handles edge cases, and those gaps are significant.
I ran 500 profiles enriched from Apollo through three verification stacks — Hunter.io verify, NeverBounce, and ZeroBounce — and then sent them. The profiles marked "valid" by all three still produced a 1.8% hard bounce rate on send. Profiles where tools disagreed (one marked valid, one marked risky, one marked unknown) produced a 6.1% hard bounce rate.
The coverage problem: all three tools use SMTP handshake verification (connecting to the MX, issuing RCPT TO, checking acceptance), but a significant share of enterprise mail servers — especially Microsoft 365 tenants with Exchange Online Protection and catch-all configurations — return false accepts. The server says RCPT TO accepted, the verification tool marks it valid, you send, you get a 550.
This means a portion of your hard bounces from well-verified lists are structurally unavoidable with current verification methodology, not evidence of bad data sourcing. The approximate rate from my testing: enterprise lists with heavy Microsoft 365 presence produce 0.8–1.5% unavoidable hard bounces even after verification, purely from catch-all false positives resolving on actual delivery attempt.
Tools like Lusha and Clearbit have moved toward confidence scoring rather than binary valid/invalid precisely because of this problem — but sequencers still treat any address with a score above the threshold as send-ready, collapsing the nuance back down.
What I Actually Use
For raw SMTP visibility, I route outbound through a self-managed Postfix setup that dumps bounce messages to a Postgres table. I parse SMTP codes and run a weekly breakdown — 5.1.1 as list rot signal, 5.7.1 as reputation signal, 4xx exhaustion rate as infrastructure health signal. That gives me three separate KPIs instead of one "bounce rate."
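The weekly breakdown itself is a single query against that bounce table. The sketch below assumes psycopg2 and a self-chosen schema; the table and column names are placeholders for whatever you built, not a standard layout.

```python
import psycopg2  # assumes a reachable Postgres instance holding your bounce table

# Table and column names are placeholders for a self-managed schema.
QUERY = """
SELECT
  date_trunc('week', bounced_at)                        AS week,
  count(*) FILTER (WHERE enhanced_code = '5.1.1')       AS list_rot,
  count(*) FILTER (WHERE enhanced_code = '5.7.1')       AS reputation,
  count(*) FILTER (WHERE smtp_code BETWEEN 400 AND 499) AS deferral_exhaustion
FROM bounces
GROUP BY 1
ORDER BY 1 DESC;
"""

with psycopg2.connect("dbname=outbound") as conn, conn.cursor() as cur:
    cur.execute(QUERY)
    for week, list_rot, reputation, deferral in cur.fetchall():
        print(week.date(), list_rot, reputation, deferral)
```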
For verification before send, I layer ZeroBounce for syntax and MX checks, then run catch-all domains through a smaller manual SMTP probe rather than trusting the verification tool's answer. For enrichment, I use Clay to pull from PDL and cross-reference against LinkedIn activity dates as a recency proxy — profiles with no activity for 9+ months get deprioritized regardless of verification status.
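The manual probe itself is nothing exotic: resolve the MX, issue MAIL FROM and RCPT TO without sending anything, and also probe a random address at the same domain; if the random one is accepted too, the domain is a catch-all and a "valid" verdict means little. A minimal sketch using smtplib plus dnspython for the MX lookup; the HELO domain and probe addresses are placeholders, and many MXs will block or tarpit probes from residential IPs.

```python
import smtplib
import uuid
import dns.resolver  # pip install dnspython

def rcpt_accepted(mx_host: str, address: str, helo_domain: str = "probe.example.com") -> bool:
    """True if the MX accepts RCPT TO for this address (250/251). No mail is actually sent."""
    with smtplib.SMTP(mx_host, 25, timeout=15) as smtp:
        smtp.helo(helo_domain)
        smtp.mail("probe@" + helo_domain)
        code, _ = smtp.rcpt(address)
        return code in (250, 251)

def probe(address: str) -> str:
    domain = address.rsplit("@", 1)[-1]
    records = sorted(dns.resolver.resolve(domain, "MX"), key=lambda r: r.preference)
    mx = str(records[0].exchange).rstrip(".")
    if not rcpt_accepted(mx, address):
        return "rejected"
    # Probe a random address on the same domain to detect catch-all behavior.
    if rcpt_accepted(mx, f"{uuid.uuid4().hex}@{domain}"):
        return "accepted_but_catch_all"  # a verification "valid" verdict is unreliable here
    return "accepted"

print(probe("someone@example.com"))  # placeholder address
```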
For teams that don't want to build custom infrastructure, Smartlead's retry logs are the most transparent I've used among mainstream sequencers — you can at least separate connection failures from SMTP-level bounces. Ziwa is another option that surfaces more granular delivery status than most tools in the space, though I'd still layer it with external SMTP logging on important sends.
The honest answer is that no off-the-shelf sequencer gives you clean bounce taxonomy. Until they expose raw SMTP codes in reporting, you're diagnosing with blurry instruments.