TL;DR: I ran the same self-hosted stack on Ubuntu LTS and Fedora Server on identical hardware. Ubuntu fails quietly — stale packages and an accumulating pile of PPAs; Fedora fails loudly — a ~13-month upgrade treadmill and SELinux surprises. Both are fine answers; they demand different kinds of attention.
What's in this article
- I Needed a Home Server OS and Couldn't Stop Second-Guessing Myself
- The Setup I Used for Both
- Package Management: Where Fedora's Freshness Bites You
- Kernel Version and Hardware Support
- Security Out of the Box: AppArmor vs SELinux
- Firewall Configuration: firewalld vs ufw
- Docker and Containers: The Real Daily Driver
- Performance: Where I Actually Saw Differences
- Head-to-Head Comparison
I Needed a Home Server OS and Couldn't Stop Second-Guessing Myself
The thing that sent me down this rabbit hole wasn't a technical problem — it was a Reddit thread where someone asked "Ubuntu or Fedora for a home server?" and every single reply was "just use Ubuntu." No explanation. No trade-offs. Just vibes. I'd been running Ubuntu Server 22.04 LTS for about a year on an old Beelink mini PC (12GB RAM, 500GB NVMe), and I kept noticing things that didn't feel right — mostly around how aggressively old some of the packages were. So I bought a second identical machine and ran Fedora 39 Server on it, mirroring the same stack for six months.
The stack I ran on both wasn't trying to be exotic. Jellyfin for media streaming (transcoding 1080p to two clients simultaneously), Nextcloud 27 behind Nginx with SSL termination, Pi-hole as the DNS resolver for my whole network, and a handful of Docker containers — Vaultwarden, Uptime Kuma, and a Wireguard instance. That's it. No Kubernetes. No exotic networking. The kind of setup where you expect things to just work and get genuinely annoyed when they don't. Running this for six months on both machines gave me a real sense of where each distro buckles under the specific pressure a home server creates — which is less about raw compute and more about maintainability and package freshness.
The reason this comparison still matters is that most "Ubuntu vs Fedora" guides were written by people who spun up a VM for a weekend. The failure modes only show up over time: a Nextcloud minor version requiring a PHP version your distro doesn't ship yet, a kernel module for your NIC not being available in an LTS kernel, or a security CVE sitting unpatched for three weeks because the stable backport queue is backed up. Ubuntu's 5-year LTS cycle sounds like a feature until you realize that Nextcloud 28 needs PHP 8.2 and Ubuntu 22.04 ships PHP 8.1 by default — requiring PPAs that add their own maintenance surface. Fedora ships PHP 8.3 in its default repos today. That gap matters when you're self-hosting apps that move fast.
Neither distro is the wrong answer, but they fail differently. Ubuntu tends to fail you quietly and slowly — packages drift stale, you accumulate PPAs, and six months in you're not really running "Ubuntu" anymore, you're running Ubuntu plus four third-party repos you half-trust. Fedora fails you loudly and occasionally — the upgrade from Fedora 39 to 40 broke my Nextcloud container networking config in a way that took me two hours to debug (a change in how firewalld drives its nftables backend). Loud failures are easier to live with, in my experience: you know exactly when things broke.
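That "four third-party repos" drift is measurable, not just a feeling. Here's a quick way to count what an apt box trusts beyond Canonical — run against a throwaway sample directory here so the snippet is self-contained; on a real machine point SRCDIR at /etc/apt/sources.list.d:

```shell
# Count third-party apt sources. SRCDIR is a sample directory for
# illustration; on a real Ubuntu box use /etc/apt/sources.list.d instead.
SRCDIR=$(mktemp -d)
printf 'deb https://ppa.launchpadcontent.net/ondrej/php/ubuntu noble main\n' \
  > "$SRCDIR/ondrej-php.list"
printf 'deb [arch=amd64] https://download.docker.com/linux/ubuntu noble stable\n' \
  > "$SRCDIR/docker.list"
# Every line printed is a repo you're trusting beyond the main archive
grep -rh '^deb ' "$SRCDIR"
echo "third-party sources: $(grep -rh '^deb ' "$SRCDIR" | wc -l)"
```

If that number creeps past three or four, you're carrying real maintenance surface: each repo is a signing key, an upgrade-compatibility question, and a party you're trusting with root.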
The Setup I Used for Both
The thing that skews most Ubuntu vs Fedora comparisons is the hardware. People run these on a Pi, complain about I/O bottlenecks, and blame the distro. I used an Intel NUC 13 Pro with 32GB DDR4 and a 2TB Samsung 990 Pro NVMe. That's not enterprise gear, but it's also not a toy — it's exactly the kind of hardware most serious home server people actually run. No rack, no IPMI, no 10GbE. Just a machine that fits under a TV stand and idles at about 8W.
I installed Ubuntu 24.04 LTS (Noble Numbat) first, bare-metal, in January. Wiped it clean in April and put Fedora 40 Server on the same drive. No VMs, no dual-boot, no containers abstracting the kernel. I specifically wanted bare-metal because virtualization overhead muddies the water on things like NVMe latency, memory pressure under ZFS ARC, and how the scheduler behaves under actual load. Three months each, same workload: Jellyfin, Nextcloud, a few Docker containers, WireGuard, and a PostgreSQL 16 instance for a personal project.
The install process itself already tells you something about each distro's philosophy. Ubuntu 24.04's server installer is the same Subiquity interface it's used for years — guided LVM partitioning, optional ZFS during install, SSH key import straight from GitHub. I had a working system in about 12 minutes. Fedora 40 Server uses Anaconda, which hasn't changed much visually since Fedora 28, but it handled Btrfs-on-NVMe without any coaxing and the bootloader setup came out cleaner than I expected.
# Ubuntu 24.04 — check what you actually got after install
uname -r
# 6.8.0-31-generic
grep -E "^(NAME|VERSION)" /etc/os-release
# NAME="Ubuntu"
# VERSION="24.04 LTS (Noble Numbat)"
# Fedora 40 Server — equivalent check
uname -r
# 6.8.9-300.fc40.x86_64
grep -E "^(NAME|VERSION)" /etc/os-release
# NAME="Fedora Linux"
# VERSION="40 (Server Edition)"
Fedora shipped with kernel 6.8.9 at install time, Ubuntu with 6.8.0. That gap matters more than it sounds — newer kernels on NVMe workloads have measurable scheduler improvements, and Fedora tracks upstream fast enough that you're usually one to two kernel minor versions ahead of Ubuntu LTS. Ubuntu LTS trades that currency for five years of security patches on a predictable schedule, which is a completely valid swap if you're running something you don't want to babysit.
One practical note: both were configured with the same user setup, the same SSH hardening baseline (no password auth, no root login, AllowUsers set explicitly in /etc/ssh/sshd_config), and the same firewall tooling swap — I replaced ufw on Ubuntu and firewalld on Fedora with nftables rules directly, so firewall behavior wasn't a variable between the two test runs. If you don't do that kind of normalization, you'll end up blaming the distro for something that's actually a firewall backend difference.
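For reference, the SSH baseline described above fits in a few lines — whether inline in sshd_config or, on the OpenSSH versions both distros ship, as a drop-in file. A sketch, with a placeholder username:

```
# /etc/ssh/sshd_config.d/10-hardening.conf
PasswordAuthentication no
PermitRootLogin no
AllowUsers youruser
```

Validate with sudo sshd -t before restarting sshd — a typo in this file is how you lock yourself out of a headless box.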
Package Management: Where Fedora's Freshness Bites You
The thing that surprised me most when I switched from Ubuntu to Fedora for my home server wasn't the commands — it was realizing how differently the two distros think about software freshness vs. stability. DNF is genuinely a better dependency resolver than APT. It backtracks, it considers more alternatives, and it almost never leaves you in a broken half-installed state the way APT occasionally does with complex dependency chains. But "better resolver" doesn't mean "better for servers." Those are different problems.
The command surface is close enough that you'll adapt in a day:
# Ubuntu
sudo apt update && sudo apt install nginx
sudo apt autoremove
# Fedora
# (dnf check-update exits non-zero when updates exist, so don't && it)
sudo dnf makecache && sudo dnf install nginx
sudo dnf autoremove
DNF's --best --allowerasing flags are something I genuinely miss on APT — they'll swap out conflicting packages automatically rather than just failing. But the real divergence shows up when you try to install something like Docker.
On Ubuntu 24.04, sudo apt install docker.io drops Docker 24.x on your machine in one command, no repo setup. It's old — Docker CE is already at 26.x — but it works, it's in the main repo, and security patches flow through Ubuntu's normal update mechanism. Fedora has no docker.io package (the closest in-repo option is moby-engine, the community Docker build), so for Docker CE you're adding Docker's own repo every time you set up a new machine:
# Fedora — nothing is pre-wired for you
sudo dnf -y install dnf-plugins-core
sudo dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo
sudo dnf install docker-ce docker-ce-cli containerd.io
# Then set the systemd cgroup driver — the cgroups v2 gotcha that bites everyone on Fedora
sudo mkdir -p /etc/docker
echo '{"exec-opts": ["native.cgroupdriver=systemd"]}' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
That cgroup config line isn't in Docker's official Fedora docs prominently — you find it after your containers randomly OOM-kill themselves. The version you get through that repo is current, which is nice, but you're now on the hook for watching that third-party repo whenever you do a major Fedora upgrade.
And you will do major Fedora upgrades. Each release is supported for roughly 13 months — it goes end-of-life about a month after the release two versions ahead ships. On a home server you check every few weeks, that deadline creeps up on you. The upgrade path looks like this:
sudo dnf upgrade --refresh
sudo dnf install dnf-plugin-system-upgrade
sudo dnf system-upgrade download --releasever=41
sudo dnf system-upgrade reboot
That reboot is the part that matters. Fedora's system upgrades are genuinely reliable — I've done three without a broken system — but every major version bump is a moment where your custom kernel flags, your pinned third-party repos, and your Docker cgroup config might need revisiting. For a NAS or Plex box you want to ignore for two years, that's real operational overhead.
Ubuntu LTS is the boring answer that's correct. The 24.04 LTS window runs to April 2029 for standard support, April 2034 with ESM. I set up an Ubuntu 22.04 box running Jellyfin and Samba in mid-2022 and have done nothing except sudo apt upgrade on a cron job since then. That's what "set it and forget it" actually means in practice — not that the distro is better, but that the upgrade math works in your favor.
The RPM Fusion situation is the last friction point worth calling out. Fedora ships without H.264/AAC support because of licensing. If you're running a media server, you need RPM Fusion:
sudo dnf install \
https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm \
https://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm
# Then swap ffmpeg for the full build
sudo dnf swap ffmpeg-free ffmpeg --allowerasing
This works fine on Fedora 38, 39, 40. But after every dnf system-upgrade, you're checking whether RPM Fusion has published packages for the new release yet — and there's usually a 1–3 week lag where things are broken or held back. Ubuntu's ubuntu-restricted-extras package installs the same codecs in one line with no release-cycle dependency. For a media server specifically, that lag is the kind of thing that makes you wish you'd picked the boring option.
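One habit that takes the sting out of that lag: probe whether RPM Fusion has actually published for the target release before you pull the trigger on a system upgrade. A sketch — the URL follows RPM Fusion's current mirror layout, and the release number is an example:

```shell
# Check for RPM Fusion repodata for the release you're about to jump to.
# Falls back to a message if curl or the network is unavailable.
REL=41
URL="https://download1.rpmfusion.org/free/fedora/releases/${REL}/Everything/x86_64/os/repodata/repomd.xml"
if command -v curl >/dev/null 2>&1 && curl -fsI "$URL" >/dev/null 2>&1; then
  echo "RPM Fusion free is published for Fedora ${REL} — safe to upgrade"
else
  echo "No repodata for Fedora ${REL} yet (or no network) — hold the upgrade"
fi
```

Thirty seconds of checking beats discovering mid-upgrade that your ffmpeg swap has nowhere to resolve from.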
Kernel Version and Hardware Support
The thing that caught me off guard when I first set up a Fedora-based home server was that my Intel Arc A380 GPU — the one I bought specifically for Jellyfin hardware transcoding — just worked. No digging through forums at 11pm, no manual firmware downloads. Fedora 40 shipped with kernel 6.8.x and the i915 driver already had the support baked in. On Ubuntu 24.04 LTS, also shipping with 6.8, the same hardware couldn't transcode until I installed the right userspace driver package by hand.
Both distros technically ship with kernel 6.8 — but "ships with 6.8" hides a real difference. Ubuntu's 6.8 kernel is built conservatively, and minimized server images don't always carry the full firmware bundle. Fedora's 6.8 build pulls in linux-firmware aggressively and enables a wider set of staging drivers. So you get the same version string but a meaningfully different hardware compatibility surface. Run this on both and compare what you actually have:
# Check kernel version and build flags
uname -r
uname -v
# Check if your NIC firmware loaded correctly
dmesg | grep -iE "(firmware|i915|iwlwifi|rtw|ath)" | tail -30
# Check VAAPI devices for Jellyfin transcoding
ls -la /dev/dri/
vainfo 2>&1 | grep -E "(VAProfile|error)"
If your NIC isn't recognized at all, the dmesg output is usually honest about why. You'll see something like firmware: failed to load iwlwifi-ty-a0-gf-a0-72.ucode rather than a silent failure. On Ubuntu the fix is usually the linux-firmware package — unlike Debian, which splits blobs into firmware-iwlwifi, firmware-realtek and friends, Ubuntu ships one bundle, and minimized installs don't always have a current copy:
# One package covers Intel Wi-Fi 6E / AX210 / BE200 adapters and the
# cheap Realtek 2.5G NICs in most mini PCs
sudo apt install linux-firmware
# Reload without rebooting
sudo modprobe -r iwlwifi && sudo modprobe iwlwifi
Ubuntu's answer to the kernel gap is the HWE track, but it's not automatic on a fresh install — you have to opt in. The linux-generic-hwe-24.04 metapackage will roll you forward to newer kernels as Ubuntu releases point updates, which matters if you're buying hardware in 2025 that Fedora 41+ supports by default. The trade-off is real though: HWE kernels update more aggressively, which means occasional regressions. I've seen ZFS on Linux break across an HWE bump twice in the past two years.
# Install the HWE kernel on Ubuntu 24.04
sudo apt install linux-generic-hwe-24.04
# Verify which kernel will boot next reboot
grep -E "submenu|menuentry" /boot/grub/grub.cfg | head -20
# After reboot, confirm
uname -r
# Should show something like 6.11.x or newer depending on Ubuntu point release
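When an HWE bump does break a DKMS module, you don't have to ride it out — hold the kernel metapackage at the known-good version until the module catches up. A sketch, guarded so it's a no-op on non-apt systems; the package name matches the HWE metapackage above:

```shell
# Pin the HWE kernel metapackage after a bad bump (e.g. a ZFS DKMS
# build failure), then release the hold once the module is fixed.
PKG=linux-generic-hwe-24.04
if command -v apt-mark >/dev/null 2>&1; then
  sudo apt-mark hold "$PKG"
  apt-mark showhold
  # later, once the DKMS module builds again:
  # sudo apt-mark unhold "$PKG"
else
  echo "apt-mark not available here; on Ubuntu this would hold $PKG"
fi
```

Held packages still show up in apt upgrade output as "kept back", which doubles as a reminder to revisit the hold.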
My practical take: if you're building a server around newer Intel integrated graphics (Arc iGPUs, 12th/13th gen with Xe), Fedora gets you to a working Jellyfin VAAPI setup faster. The intel-media-driver package on Fedora just connects to the right device nodes. On Ubuntu you're also installing intel-media-va-driver-non-free, editing /etc/jellyfin/encoding.xml to point at the right render node, and double-checking group membership for the jellyfin user against /dev/dri/renderD128. Not rocket science, but it's 45 minutes of troubleshooting that Fedora skips entirely.
Security Out of the Box: AppArmor vs SELinux
The most operationally impactful difference between these two distros isn't package management or release cadence — it's which mandatory access control system you're living with at 11pm when something breaks. AppArmor and SELinux solve the same problem in fundamentally different ways, and picking the wrong mental model for whichever one you're on will cost you hours.
AppArmor on Ubuntu: Path-Based and Actually Readable
AppArmor enforces security by path. A profile says "this binary can read /etc/nginx/ but not /etc/shadow". That's it. The upside is that profiles are human-readable text files you can grep through, and debugging is usually a one-liner:
# Check which profiles are loaded and in what mode
sudo aa-status
# Real output excerpt:
# 34 profiles are loaded.
# 34 profiles are in enforce mode.
# /usr/bin/evince
# /usr/sbin/mysqld
# 0 profiles are in complain mode.
# When something gets blocked, it shows up here:
sudo grep "apparmor" /var/log/syslog | tail -20
The complain mode is underrated for home server work. Drop a profile into complain mode with sudo aa-complain /usr/sbin/mysqld, reproduce your issue, read the logs, and you have a near-complete picture of what permissions are missing. It's not perfect — path-based means symlinks and bind mounts can create weird gaps — but for a home server running Nextcloud, Jellyfin, or a personal VPN, AppArmor mostly stays out of your way unless you're doing something genuinely weird.
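The denial lines themselves are easy to mine once you know the fields. A self-contained sketch — the log line below is a representative sample in the kernel's audit format, not output from a real incident:

```shell
# Pull the three fields that matter out of an AppArmor denial:
# which profile fired, which path was touched, which permission was refused.
line='audit: type=1400 audit(1718234521.003:312): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/etc/shadow" pid=1847 comm="mysqld" requested_mask="r" denied_mask="r"'
echo "$line" | grep -o 'profile="[^"]*"\|name="[^"]*"\|denied_mask="[^"]*"'
```

Pipe your real log through the same grep and you have the skeleton of the profile rule you need to add.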
SELinux on Fedora: More Powerful, More Painful
SELinux enforces security by label. Every file, process, and socket gets a security context like system_u:object_r:httpd_sys_content_t:s0, and access decisions are based on those labels — not paths. This is objectively more granular and harder to bypass. It also means that moving a file doesn't preserve its label, and that's where most home server pain comes from.
# Check SELinux status and current mode
sudo sestatus
# When something breaks, this is your first stop:
sudo audit2why -a
# Real output looks like:
# type=AVC msg=audit(1718234521.003:312): avc: denied { read } for
# pid=1847 comm="php-fpm" name="data" dev="sdb1" ino=131073
# scontext=system_u:system_r:httpd_t:s0
# tcontext=unconfined_u:object_r:unlabeled_t:s0 tclass=dir
# Was caused by: Missing type enforcement (TE) allow rule.
My actual Nextcloud incident on Fedora 38: I'd moved the data directory to /mnt/data/nextcloud on a separate drive. PHP-FPM kept throwing permission-denied errors even though ls -la showed correct Unix ownership. The drive had been formatted on another machine, so the files had unlabeled_t context — SELinux's way of saying "I don't know what this is, so no." The fix was one command, but finding it took 45 minutes of confused Googling:
# This resets file contexts to what SELinux policy expects for the path
sudo restorecon -Rv /mnt/data/nextcloud
# After this, verify the context is what httpd_t can access:
ls -Z /mnt/data/nextcloud
AppArmor on Ubuntu never caused this exact failure mode because it doesn't care about labels — it cares about paths. The same Nextcloud setup on Ubuntu 22.04 just worked after I set Unix permissions correctly. The chcon vs semanage fcontext distinction adds another layer: chcon changes labels directly but they get reset on relabel; semanage fcontext writes a persistent policy rule. If you use chcon to fix a problem and it comes back after a reboot, that's why.
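The persistent version of that fix is worth writing down: restorecon only restores whatever current policy says, so if you want a relocated data directory to keep a web-readable label through future relabels, record the rule with semanage first. A sketch, guarded so it no-ops where SELinux tooling is absent; the path matches the Nextcloud incident above and the type is the standard writable-web-content one:

```shell
# Teach SELinux policy that this tree should carry a type httpd_t can
# read and write, then apply it. Unlike bare chcon, this survives a
# full filesystem relabel.
DIR=/mnt/data/nextcloud
if command -v semanage >/dev/null 2>&1; then
  sudo semanage fcontext -a -t httpd_sys_rw_content_t "${DIR}(/.*)?"
  sudo restorecon -Rv "$DIR"
else
  echo "semanage not present (no SELinux here) — nothing to do"
fi
```

The rule lands in the local policy store, so it also travels with your backups of /etc/selinux if you image the machine.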
Docker on Fedora: Where SELinux Gets Genuinely Annoying
Docker containers on Fedora run into SELinux regularly. The container runtime labels container processes with container_t, and by default that context can't read host volumes labeled with standard types. You'll see this the first time you try to mount a host directory into a container:
# Wrong way — tempting but opens a big hole:
docker run -v /mnt/data:/data --privileged myimage
# Right way — :z relabels the volume for the container:
docker run -v /mnt/data:/data:z myimage
# Or :Z if only one container should ever access it:
docker run -v /mnt/data:/data:Z myimage
The :z flag tells Docker to relabel the volume with a shared label that containers can access. Most Docker tutorials don't mention this because they're written on Ubuntu or with SELinux disabled. On Fedora, you'll hit the --privileged temptation fast — especially with containers like Home Assistant or anything that needs device access. Resist it where you can. The correct answer is usually :z on volumes plus targeted SELinux booleans like sudo setsebool -P container_manage_cgroup on for specific use cases.
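In Compose files the same flag goes on the volume string, which is easy to miss when you copy a compose file written on Ubuntu. A sketch — the image tag, paths, and host port are examples:

```yaml
# docker-compose.yml fragment — note the :z suffix on the bind mount;
# without it the container gets permission denied on Fedora with
# SELinux enforcing
services:
  vaultwarden:
    image: vaultwarden/server:latest
    volumes:
      - /mnt/data/vaultwarden:/data:z
    ports:
      - "8222:80"
```

Every bind mount in the file needs its own suffix — Compose won't apply it across volumes for you.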
Automatic Security Updates: Config Examples That Actually Work
Both distros support unattended security patching, but the defaults are different enough that you need to explicitly configure them rather than assume they're active.
On Ubuntu 22.04/24.04, unattended-upgrades is installed by default but you should verify and tighten the config:
# /etc/apt/apt.conf.d/50unattended-upgrades
Unattended-Upgrade::Allowed-Origins {
"${distro_id}:${distro_codename}-security";
// Only security updates — not all upgrades
};
Unattended-Upgrade::AutoFixInterruptedDpkg "true";
Unattended-Upgrade::Remove-Unused-Kernel-Packages "true";
Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-Time "03:00";
# Enable and verify it's actually running:
sudo systemctl status unattended-upgrades
sudo unattended-upgrade --dry-run --debug 2>&1 | head -40
On Fedora, install and configure dnf-automatic:
# Install if not present:
sudo dnf install dnf-automatic
# /etc/dnf/automatic.conf — the key section:
[commands]
# Only security updates, not everything
upgrade_type = security
# Actually apply them, not just download
apply_updates = yes
# Reboot if the kernel or glibc was updated
reboot = when-needed

[emitters]
# Or 'email' if you have mail configured
emit_via = stdio
# Enable the timer (not the service — dnf-automatic uses systemd timers):
sudo systemctl enable --now dnf-automatic.timer
sudo systemctl list-timers | grep dnf
One gotcha on Fedora: upgrade_type = security only applies updates that are explicitly tagged as security updates in the repo metadata. A handful of security fixes ship in regular updates without that tag, so it's slightly less thorough than Ubuntu's approach. Not a dealbreaker, but worth knowing. I run sudo dnf updateinfo list security manually once a week on Fedora machines to catch anything that slipped through.
Firewall Configuration: firewalld vs ufw
The surprise isn't which firewall tool is better — it's how quickly ufw covers 80% of home server needs with almost no learning curve, and how fast you hit its ceiling the moment your setup gets interesting.
On Ubuntu, you're three commands away from a working firewall:
# Enable ufw and allow SSH before you lock yourself out
sudo ufw allow ssh
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable
# Check status — output is human-readable, unlike iptables
sudo ufw status verbose
That's it. No zones, no services files, no XML. I've handed that exact sequence to people who'd never touched Linux firewalls and they were fine. If you're running a Jellyfin box, a Nextcloud instance, or a simple Nginx reverse proxy with nothing exotic — ufw genuinely doesn't need to be more complicated than this.
Fedora's firewalld requires more upfront investment, but the zone model pays off the moment you have multiple network interfaces or trust levels. The idea is that you assign interfaces or source IP ranges to named zones (home, trusted, public, internal), and each zone gets its own ruleset. My home server has a LAN interface and a Tailscale interface — those should not be treated identically, and firewalld handles that naturally:
# Add HTTP only for your home zone (LAN traffic), not public
sudo firewall-cmd --permanent --add-service=http --zone=home
# Assign your LAN interface to the home zone
sudo firewall-cmd --permanent --change-interface=eth0 --zone=home
# Reload to apply permanent rules
sudo firewall-cmd --reload
# Verify what's allowed per zone
sudo firewall-cmd --zone=home --list-all
Where ufw starts hurting: say you want to allow port 8096 (Jellyfin) only from your local subnet 192.168.1.0/24, port 22 from a specific jump host IP, and block everything else on those ports. In ufw, you write ordered rules manually, and the ordering matters in ways that aren't obvious from the status output. It works, but you're essentially reconstructing what firewalld gives you with zones — except without the tooling to manage it cleanly.
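For completeness, here's what that scenario looks like in ufw, with example addresses. The allows must come before the denies because ufw is first-match; the guard makes this display-only on machines without ufw:

```shell
# Jellyfin (8096) from the LAN only, SSH from one jump host only.
# The subnet and jump-host IP are examples.
LAN=192.168.1.0/24
JUMP=203.0.113.10
if command -v ufw >/dev/null 2>&1; then
  sudo ufw allow from "$LAN"  to any port 8096 proto tcp
  sudo ufw allow from "$JUMP" to any port 22   proto tcp
  sudo ufw deny 8096/tcp
  sudo ufw deny 22/tcp
  sudo ufw status numbered
else
  echo "ufw not installed — rules shown for illustration"
fi
```

It works, but notice you're encoding the zone concept by hand in rule order — exactly what firewalld's zones manage for you.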
Here's the config I actually run on a Fedora home server to allow Tailscale traffic without punching a hole in everything. The key insight is that Tailscale traffic arrives on the tailscale0 interface, so you assign that interface to the trusted zone rather than writing IP-range rules:
# Assign Tailscale interface to trusted zone
# This allows all traffic from Tailscale peers without opening public-facing ports
sudo firewall-cmd --permanent --zone=trusted --add-interface=tailscale0
# Your public-facing interface stays in the default zone (usually 'public')
# with only explicit services allowed
sudo firewall-cmd --permanent --zone=public --add-service=https
sudo firewall-cmd --permanent --zone=public --remove-service=dhcpv6-client # don't need this on a server
# Lock down SSH to LAN only by adding the source subnet to 'home' zone
sudo firewall-cmd --permanent --zone=home --add-source=192.168.1.0/24
sudo firewall-cmd --permanent --zone=home --add-service=ssh
# Remove SSH from public zone so it's not exposed externally
sudo firewall-cmd --permanent --zone=public --remove-service=ssh
sudo firewall-cmd --reload
# Sanity check — list active zones and their interfaces
sudo firewall-cmd --get-active-zones
The thing that caught me off guard with firewalld is the difference between runtime and permanent rules. If you forget --permanent, your rule disappears on the next firewall-cmd --reload or reboot. I've burned time debugging "missing" rules that were just runtime-only. Always add --permanent and then reload, or use --runtime-to-permanent after testing a rule interactively. The Ubuntu/ufw approach of writing rules directly to config avoids this foot-gun entirely, which is a real argument in its favor for simpler setups.
Docker and Containers: The Real Daily Driver
The thing that caught me off guard was how much SELinux changes the Docker experience on Fedora — not in a "it occasionally warns you" way, but in a "your containers fail silently and you spend 45 minutes reading audit logs" way. If you're coming from Ubuntu where Docker just works after following the official install docs, Fedora will humble you.
Docker CE on Ubuntu is genuinely frictionless. Add the apt repo, install, run sudo docker run hello-world, done. The official docs at docs.docker.com work exactly as written. I've never had to chase down a permission issue that wasn't my own fault. The daemon starts at boot, rootful Docker works perfectly, and Compose v2 drops into /usr/local/lib/docker/cli-plugins/ without complaint. That path matters more than you'd think — some Compose v2 installs from third-party scripts assume ~/.docker/cli-plugins/ and then docker compose (no hyphen) stops resolving. On Ubuntu this is easy to debug because nothing else is fighting you at the same time.
Fedora is a different story if Docker is your target. After install you'll hit SELinux boolean flags before your first real workload. The container_manage_cgroup boolean is just the opener:
# This one you'll find in the first Stack Overflow result
sudo setsebool -P container_manage_cgroup on
# This one you'll find after your bind mounts stop working
# (svirt_sandbox_file_t still works as a legacy alias for container_file_t)
sudo chcon -Rt container_file_t /your/host/path
# And occasionally, for containers tripping mmap-related denials
sudo setsebool -P domain_can_mmap_files on
None of this is in the Docker CE quick-start for Fedora. The SELinux denials show up in sudo ausearch -m avc -ts recent and you have to learn to read them. I'm not saying SELinux is bad — it's genuinely better security posture — but if you're standing up Jellyfin or Nextcloud from a docker-compose.yml you grabbed from GitHub, you're going to spend real time on this.
Here's where Fedora earns it back though: Podman. It ships pre-installed on Fedora Server and rootless containers work better there than I've seen anywhere else. Running containers as your own user with systemd user units is the real win. A typical setup looks like this:
# Generate a systemd unit from a running container
podman generate systemd --new --name myapp > ~/.config/systemd/user/myapp.service
# Enable it so it starts without you logging in (requires lingering)
loginctl enable-linger $USER
systemctl --user enable --now myapp.service
That lingering setup means your containers survive reboots without root. On Ubuntu you can get rootless Docker working but it's an opt-in install path (dockerd-rootless-setuptool.sh) and systemd integration requires manual wiring. On Fedora with Podman it's the default happy path. If you care about not running container daemons as root, Fedora is genuinely ahead.
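Worth knowing if you go this route: podman generate systemd is deprecated as of Podman 4.4 in favor of Quadlet, where the unit is a declarative file that Podman turns into a service at daemon-reload. A sketch of the same rootless setup as a Quadlet unit — the name, image, and port are examples:

```ini
# ~/.config/containers/systemd/myapp.container
# `systemctl --user daemon-reload` generates myapp.service from this
[Container]
Image=docker.io/library/nginx:alpine
PublishPort=8080:80

[Install]
WantedBy=default.target
```

Pair it with the same loginctl enable-linger setup and you get the identical survive-reboots behavior, minus the generated-unit file you'd otherwise have to regenerate after image changes.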
Honest take: if your home server workload is a folder of docker-compose.yml files you pulled from GitHub — Portainer, Traefik, Vaultwarden, Immich, whatever — Ubuntu gives you the least friction from zero to running. The Docker Compose v2 plugin works, the bind mounts work, the published ports work, and nothing is going to relabel your filesystem. Fedora rewards you if you're willing to learn its security model or if you specifically want rootless Podman with proper systemd integration. Those aren't equivalent skill requirements, and pretending they are would be doing you a disservice.
Performance: Where I Actually Saw Differences
The RAM idle number is the first thing everyone asks about, and the honest answer is: don't make your distro choice on it. With a minimal install on both — no desktop environment, just SSH, systemd, and a handful of services — Ubuntu 24.04 LTS and Fedora 40 were within 50MB of each other. I saw Ubuntu sitting around 280MB and Fedora around 310MB at idle, but that gap closed or flipped depending on what I had enabled. That's noise, not signal. If 50MB matters to your workload, you've got bigger architectural problems to solve.
The disk I/O scheduler is one of those things nobody checks but probably should. Both distros default to mq-deadline on NVMe, which is a reasonable choice — it prioritizes latency without being completely naive about throughput. Verify it yourself:
cat /sys/block/nvme0n1/queue/scheduler
# output: [mq-deadline] kyber none
The brackets tell you what's active. If you're running a database heavy workload like Postgres 16 with lots of concurrent writes, none (no scheduler, trust the NVMe controller) is actually worth benchmarking. But for general home server use, leave it alone — neither distro gives you an edge here out of the box.
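If you do want numbers rather than trusting the default, the test that matters for a Postgres-style workload is small random writes with frequent fsyncs. A sketch using fio — the target directory and job parameters are assumptions, and the guard makes it a no-op where fio isn't installed:

```shell
# Re-run this after switching schedulers to compare, e.g.:
#   echo none | sudo tee /sys/block/nvme0n1/queue/scheduler
TARGET=/var/tmp
if command -v fio >/dev/null 2>&1; then
  fio --name=pgsim --directory="$TARGET" --rw=randwrite --bs=4k \
      --size=256M --fsync=32 --runtime=30 --time_based --group_reporting
else
  echo "fio not installed — install it (apt/dnf install fio) to benchmark"
fi
```

Watch the fsync latency percentiles in the output rather than raw IOPS — that's what a database actually feels.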
Jellyfin hardware transcoding is where Fedora genuinely pulled ahead, and it wasn't close. My Intel N100 mini PC got QuickSync working immediately on Fedora because the kernel shipped a newer version of the i915 driver with the firmware blobs already included. On Ubuntu 24.04 LTS, I had to chase down the fix manually:
# On Ubuntu — without this, QuickSync is invisible to Jellyfin
sudo apt install intel-media-va-driver-non-free
# Then confirm VA-API sees the device:
vainfo --display drm --device /dev/dri/renderD128
Once that package was installed, Ubuntu matched Fedora's transcoding performance exactly. The difference wasn't permanent — it was a setup tax. But if you're not aware of it, you'll assume hardware transcoding is broken and waste a couple hours in the Jellyfin forums before someone mentions that package in a buried comment.
Network throughput was a complete non-issue. I ran iperf3 between the home server and my workstation on both installs — same physical machine, same switch, same cable:
# On the server
iperf3 -s
# On the client
iperf3 -c 192.168.1.X -t 30 -P 4
Both hovered around 940 Mbps on gigabit, which is as close to line rate as you're going to get. The kernel TCP stack differences between Ubuntu 6.8 and Fedora 6.9 kernels at the time did not show up in any meaningful way at this scale. Where they might diverge is under extremely high connection counts or with specialized network tuning, but for a home server streaming to a handful of clients, it's irrelevant.
Boot time is the other benchmark people screenshot and post on forums without much context. Running systemd-analyze blame on both showed they're genuinely fast — under 15 seconds to a usable SSH session on an NVMe drive. Fedora was occasionally slower after updates, and the culprit is SELinux relabeling: when an update changes file contexts enough to trigger a relabel, selinux-autorelabel holds up that one boot:
systemd-analyze blame | head -20
# Look for: selinux-autorelabel or fixfiles eating 8-15 seconds
This only hits on that one post-update boot, not every boot. Ubuntu with AppArmor doesn't have the same relabeling overhead, so it boots consistently fast regardless. For a server that reboots maybe once a month after kernel updates, this is a minor annoyance rather than a real performance concern — but it did catch me off guard the first time Fedora sat there for an extra 12 seconds with no obvious explanation.
Head-to-Head Comparison
The comparison that actually matters isn't "which distro is better" — it's which one breaks your home server less often and keeps it secure longer. I've run both, and the differences aren't subtle once you're six months in.
| Factor | Ubuntu 24.04 LTS | Fedora 40 |
| --- | --- | --- |
| Support lifecycle | 5 years standard, 10 years with ESM | ~13 months per release |
| Package freshness | Stable, often 1–2 major versions behind | Bleeding edge, tracks upstream closely |
| Default MAC system | AppArmor (profile-based) | SELinux (label-based, enforcing by default) |
| Container story | Docker-first, docker.io in repos | Podman-first, rootless by default |
| Upgrade risk | Low — do-release-upgrade rarely bites | Medium — dnf system-upgrade has opinions |
| Community support quality | Massive Stack Overflow coverage, ancient answers included | Smaller, but the people answering actually know the kernel |
Ubuntu's biggest dealbreaker on a home server is package staleness. You install Ubuntu 24.04 and expect to run, say, Podman 5.x or a recent Postgres 16 build — and what you get from apt is whatever Canonical froze at release time. Then the PPA chase starts. For some workloads that's fine. For anything self-hosted where you're tracking upstream security advisories, you're going to be pinning PPAs and praying they don't conflict:
```bash
# The PPA spiral that happens with Ubuntu
sudo add-apt-repository ppa:deadsnakes/ppa   # newer Python builds
sudo add-apt-repository ppa:ondrej/php
# ...and now apt update takes 45 seconds and you have 4 competing key sources
```
Fedora's dealbreaker is the upgrade treadmill. The ~13-month cycle sounds manageable until you miss one and realize you're two releases behind, then hit a dnf system-upgrade that pulls in a new SELinux policy that relabels your entire filesystem on reboot and takes 20 minutes — or worse, conflicts with a third-party RPM you added for Plex or a custom kernel module. I've had dnf system-upgrade leave me with a system that booted to a dracut emergency shell twice. Not catastrophically unfixable, but not something you want at 11pm when your family's Jellyfin setup is down.
```bash
# What a Fedora upgrade actually looks like
sudo dnf upgrade --refresh
sudo dnf install dnf-plugin-system-upgrade
sudo dnf system-upgrade download --releasever=41
sudo dnf system-upgrade reboot
# ...then pray your NVIDIA driver or ZFS DKMS module survived the kernel bump
```
The MAC story is where security-minded people should spend more time than they usually do. AppArmor on Ubuntu is path-based — you define what files a process can touch. It's easier to write profiles for and rarely blocks things you didn't expect. SELinux on Fedora is label-based, enforces by policy type, and when something breaks because of an SELinux denial, the error message you see is usually completely unrelated. Your app just silently fails or throws a permission error. The debugging workflow (ausearch -m AVC, audit2allow) is learnable but has a real onboarding cost. That said, SELinux's confinement model is genuinely stronger — if you're running public-facing services, the "harder to configure" tradeoff is worth it.
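That debugging workflow looks like this in practice — a hedged sketch, with a simplified sample denial record inlined so the field extraction is visible (real records come from `ausearch` on the box itself):

```shell
# Real triage (run as root on the Fedora box):
#   ausearch -m AVC -ts recent                          # list recent denials
#   ausearch -m AVC -ts recent | audit2allow -M myfix   # draft a policy module
#   semodule -i myfix.pp                                # load it (read myfix.te first!)
# A denial record looks like this; comm= is the blocked process, name= the file:
sample='type=AVC msg=audit(1700000000.123:456): avc: denied { write } for pid=1234 comm="nginx" name="cache"'
echo "$sample" | grep -o 'comm="[^"]*"'   # -> comm="nginx"
echo "$sample" | grep -o 'name="[^"]*"'   # -> name="cache"
```

The `comm=` and `name=` fields are usually all you need to connect the silent application failure back to the denial that caused it.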
On containers specifically: Ubuntu ships with Docker working out of the box and most Docker Compose tutorials assume you're on it. Fedora's Podman-first approach means rootless containers by default, which is actually the more secure architecture — no daemon running as root. But if your home server workflow is "copy a Docker Compose file from GitHub and run it," you'll hit friction on Fedora. podman-compose handles maybe 80% of Compose files cleanly. The other 20% involve networking quirks or volume permission issues that Docker handles quietly because it's running as root and just doesn't care.
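A good chunk of that 20% is SELinux volume labeling, which Docker-on-Ubuntu never makes you think about. Appending `:Z` to a bind mount tells Podman to relabel the host directory for the container — a minimal compose sketch (the service name and image are illustrative):

```yaml
services:
  vaultwarden:
    image: docker.io/vaultwarden/server:latest
    volumes:
      # :Z = private relabel for this one container; use :z for volumes
      # shared between containers. Harmless on Ubuntu/AppArmor, effectively
      # required on Fedora with SELinux enforcing.
      - ./vw-data:/data:Z
```

If a container on Fedora gets `Permission denied` on a volume that "works everywhere else," a missing `:Z`/`:z` suffix is the first thing to check.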
## When Ubuntu Server Is the Right Call
The strongest argument for Ubuntu Server on a home setup isn't performance — it's the five-year LTS support window. Ubuntu 24.04 LTS gets security patches until April 2029, and with unattended-upgrades configured, I can genuinely deploy it and walk away. My home NAS box running 22.04 has had maybe four manual interventions in two years. That's the real pitch: benign neglect as a feature.
If your stack is Docker Compose files pulled straight from DockerHub, Ubuntu is the path of least resistance. The overwhelming majority of those images are built on debian:bookworm-slim or ubuntu:22.04. Volume mounts, UID mapping, bind mounts to /var/lib — all of it behaves predictably because the environment matches. I've seen Fedora users fight subtle permission mismatches with rootless Podman because the image assumed Debian-style UID ranges. Not a dealbreaker, but it's 11pm debugging you don't need.
```dockerfile
# This is what most DockerHub self-hosted apps assume underneath
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y some-daemon
# Fedora-based alternatives exist but are rarer in the wild
```
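The permission mismatches mentioned above trace back to subordinate ID ranges: rootless Podman maps container UIDs and GIDs through the ranges defined in `/etc/subuid` and `/etc/subgid`. A typical entry looks like this (the username and range here are illustrative):

```
# /etc/subuid and /etc/subgid — this user gets 65536 subordinate IDs
# starting at 100000; a container's UID 0 maps to the user, UID 1 to 100000,
# UID 2 to 100001, and so on.
youruser:100000:65536
```

When an image hardcodes a UID the mapping doesn't cover, or the host directory is owned by an unmapped ID, you get the "works with Docker, fails with rootless Podman" symptom.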
The SELinux point is real and underappreciated. Fedora ships with SELinux enforcing by default, which is genuinely good security — but when Nextcloud can't write to a mounted volume at midnight and journalctl is spitting out avc: denied messages, you need to know whether to run restorecon, write a custom policy, or use chcon. Ubuntu's AppArmor profiles do fail, but they fail quieter — you get a log entry in /var/log/syslog and usually a clear profile name to disable or tune. The blast radius of an AppArmor issue is typically one service, not a cascade of denials across your whole stack.
Snap packages are a real differentiator in specific cases. LXD — which Canonical now ships exclusively via Snap — works significantly better on Ubuntu because the Snap daemon and LXD snap are co-developed. Same story with the certbot Snap, which auto-renews cleaner than the pip or apt versions because it installs its own systemd timer. On Fedora you'd reach for Certbot via pip or a COPR package, and it works, but you're on your own for renewal hooks.
```bash
# LXD setup on Ubuntu — this is the supported path
sudo snap install lxd
sudo lxd init --auto

# Certbot with automatic renewal (the Snap version handles this natively)
sudo snap install --classic certbot
sudo ln -s /snap/bin/certbot /usr/bin/certbot
sudo certbot --nginx
# Renewal timer is already active via snap's internal scheduler
```
If the server is shared — family members SSHing in, a partner managing Plex, a sibling with sudo access — AppArmor's failure mode is much friendlier than SELinux's. When AppArmor blocks something, the service either starts anyway with reduced permissions or fails with a single log line. SELinux in enforcing mode can lock out an entire service silently from the user's perspective, and tracing it requires understanding audit logs and policy modules. That's not a fair thing to expect from someone who just wants to restart Jellyfin. Ubuntu is the answer when your security model needs to be "good enough but also not break when someone else touches it."
## When Fedora Server Is the Right Call
The hardware support argument alone closes the deal for a lot of people. If you just bought a machine with an Intel Arc GPU, a recent AMD Radeon, or an Intel Wi-Fi 6E/7 card and tried installing Ubuntu 22.04 LTS, you probably already know the pain — firmware missing, module not loading, fallback to a generic driver that drops performance 40%. Fedora ships with a kernel that's usually within one or two releases of mainline. Ubuntu 22.04 LTS ships with 5.15 and backports selectively. Fedora 40 ships with 6.8. That gap matters enormously for anything that landed in the kernel after 2022.
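Before picking a distro on kernel-version grounds, it's worth checking what you're actually running and which driver bound to the hardware in question — a quick sketch (the `lspci` line needs the hardware present and `pciutils` installed, so it's shown as a comment):

```shell
# What kernel am I running right now?
uname -r
# Which kernel module claimed the GPU or NIC:
#   lspci -nnk | grep -A3 -Ei 'vga|ethernet'
```

If `lspci -nnk` shows "Kernel driver in use" as a generic fallback rather than the vendor driver, that's the 40%-performance-drop scenario in the flesh.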
Podman is where Fedora genuinely has a structural advantage, not just a version number advantage. The rootless workflow — running containers as a non-root user without a daemon — is treated as the primary path on Fedora, not an afterthought. Systemd socket activation, podman generate systemd, and quadlet unit files all work out of the box. On Ubuntu, Podman is installable but you're constantly fighting assumptions baked in for Docker. I switched a home media server workflow to rootless Podman on Fedora 39 specifically because I wanted containers to survive reboots without running a daemon as root, and the experience was night and day.
```bash
# Fedora — rootless container that auto-starts with systemd, no daemon needed
mkdir -p ~/.config/containers/systemd/
# Minimal quadlet unit (standard [Container] keys; tune ports/volumes to taste)
cat > ~/.config/containers/systemd/jellyfin.container <<'EOF'
[Container]
Image=docker.io/jellyfin/jellyfin:latest
PublishPort=8096:8096

[Install]
WantedBy=default.target
EOF
systemctl --user daemon-reload   # quadlet generates jellyfin.service from this file
```
The package freshness argument is real and saves actual headaches. On Ubuntu 22.04, PostgreSQL from the default repos is 14. You can add the official PGDG repo, but now you're maintaining an external source. Fedora 40 ships PostgreSQL 16 in the standard repos. PHP 8.3 is available without PPAs. Node.js 20 is there. This isn't about chasing shiny versions — it's about not maintaining a list of extra repo configs that each have their own GPG key rotation schedule and can silently break during dist-upgrades.

If you're treating your home server as a deliberate learning environment — tracking upstream changes, reading changelogs, actually understanding what changed in kernel 6.9 — Fedora puts you closer to that signal. The Fedora release cadence (roughly every 6 months, supported for ~13 months) forces you to engage with the system instead of setting it and forgetting it. That's a bug for a production NAS. It's a feature if you're trying to get good at Linux administration fast. The upgrade path with `dnf system-upgrade` is also genuinely reliable in a way it wasn't three years ago.

Fedora CoreOS is the strongest long-term reason to start here. CoreOS runs immutable, auto-updating OS images configured entirely via Butane/Ignition YAML files, with Podman as the container runtime. If that's your eventual target — and for a home server doing one or two well-defined jobs, it's a compelling architecture — then running regular Fedora Server first is the right onramp. You learn the tooling, the rpm-ostree mental model, quadlet unit files, and how Fedora thinks about system configuration. Jumping straight from Ubuntu to CoreOS cold is a rough experience. Fedora Server first makes CoreOS feel like a natural next step rather than a completely foreign system.

## The Config Files You'll Actually Need

Most home server guides stop at installation and wave vaguely at "harden your SSH." That's the part where people get burned.
Here are the actual file paths and exact config lines I use on both distros — no hand-waving.

#### Ubuntu: Unattended Security Updates

The default `/etc/apt/apt.conf.d/50unattended-upgrades` file ships with most of the right stuff commented out. The critical block you need to uncomment or verify:

```
// Enable security-only updates — leave "updates" and "proposed" commented out
Unattended-Upgrade::Allowed-Origins {
        "${distro_id}:${distro_codename}-security";
//      "${distro_id}:${distro_codename}-updates";   // leave this OFF
};

// Actually remove unused deps — saves you from slow disk fills
Unattended-Upgrade::Remove-Unused-Dependencies "true";

// Reboot automatically only for kernel updates, at 3am when nothing's running
Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-Time "03:00";
```

The gotcha: enabling this file alone does nothing. You also need `/etc/apt/apt.conf.d/20auto-upgrades` to actually trigger the job:

```
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

Verify it works without waiting overnight:

```bash
sudo unattended-upgrade --dry-run --debug 2>&1 | grep "Packages that will be upgraded"
```

#### Fedora: DNF Automatic

Fedora's equivalent is cleaner. Edit `/etc/dnf/automatic.conf` and set exactly these lines — the rest of the defaults are fine:

```ini
[commands]
# NOT "default", which applies ALL updates
upgrade_type = security
# Without this it just downloads and does nothing
apply_updates = yes

[emitters]
# Change to "email" if you want a log mailed somewhere
emit_via = stdio
```

Then enable the timer (not the service — dnf-automatic runs on a systemd timer):

```bash
sudo systemctl enable --now dnf-automatic-install.timer
systemctl list-timers | grep dnf   # confirm it's scheduled
```

#### SSH Hardening — Both Distros

Same file on both: `/etc/ssh/sshd_config`.
These three lines together are non-negotiable for a box exposed to the internet, even behind a firewall:

```
# Key-only auth; brute-force attacks become pointless
PasswordAuthentication no
# Root has no business logging in directly, ever
PermitRootLogin no
# Whitelist explicit users; everyone else is denied
AllowUsers youruser

# Bonus: kill idle sessions that ghost-hang for hours
ClientAliveInterval 300
ClientAliveCountMax 2
```

After editing, always test before reloading — I've locked myself out more than once by skipping this:

```bash
sudo sshd -t                 # parse check; no output = no syntax errors
sudo systemctl reload sshd
```

#### Ubuntu AppArmor: Custom Binary Profiles

Drop custom AppArmor profiles in `/etc/apparmor.d/`. Name the file after the binary path with slashes replaced by dots — e.g., `usr.local.bin.myapp`. A minimal profile that confines a custom binary to read its config and write to one log path:

```
#include <tunables/global>

/usr/local/bin/myapp {
  #include <abstractions/base>

  /etc/myapp/config.toml r,    # read-only config
  /var/log/myapp/ rw,          # write logs here only
  /var/log/myapp/** rw,
  deny /home/** rw,            # explicitly block home dirs
}
```

Load it without rebooting:

```bash
sudo apparmor_parser -r /etc/apparmor.d/usr.local.bin.myapp
sudo aa-status | grep myapp   # confirm it's in enforce mode
```

If you see `myapp (enforce)` in that output, you're good. If something breaks in your app, check `sudo journalctl -xe | grep apparmor` — the denied path will be right there, and you just add it to the profile and reload again.

#### Fedora SELinux: Custom File Contexts

SELinux denials on Fedora will ruin your afternoon if you're mounting data outside the standard paths. The right fix is not `setenforce 0` — it's labeling your path correctly. If you're serving web files from `/mnt/data/www`, httpd can't read them until you tell SELinux that's intentional:

```bash
# Add the custom context rule (survives relabels)
sudo semanage fcontext -a -t httpd_sys_content_t '/mnt/data/www(/.*)?'
```
```bash
# Apply it to existing files
sudo restorecon -Rv /mnt/data/www

# Verify — the type field of the context should read httpd_sys_content_t
ls -Z /mnt/data/www/
```

The `semanage` step writes to the policy database permanently. The `restorecon` step actually relabels the inodes on disk. Skip the second step and your Nginx will still get `Permission denied` even though you "set the context." That's the part nobody puts in their blog post.

## My Verdict After 6 Months

After running both distros on the same physical machine — a repurposed Dell PowerEdge with a mix of spinning rust and NVMe — I landed back on Ubuntu 24.04 LTS, and honestly it wasn't even close at the end of the experiment. Not because Fedora is bad, but because the thing that broke my will was a single `dnf system-upgrade` to Fedora 41 that destroyed my Samba share and corrupted SELinux contexts on my media drive. Four hours of a Saturday afternoon gone. That's the tax Fedora charges for keeping you on the bleeding edge, and for a home server I actually depend on, I stopped wanting to pay it.

The failure mode was specific enough to be infuriating: the upgrade relabeled the SELinux contexts on my ext4-formatted media drive incorrectly, and Samba's `samba_share_t` context got wiped during the transition. Every share returned "access denied" silently. The fix was a full `restorecon -Rv /mnt/media` followed by manually re-adding the Samba booleans:

```bash
# the upgrade to F41 torched these — had to redo them manually
setsebool -P samba_enable_home_dirs on
setsebool -P samba_export_all_rw on
restorecon -Rv /mnt/media
# then verify the context actually stuck
ls -Z /mnt/media | head -5
```

None of this is in the Fedora upgrade docs. I found the fix by cross-referencing a Red Hat bug tracker entry from 2023 that described the same behavior on F38→F39. That's the thing that kills me — it's a _known_ pattern and the tooling still doesn't account for it cleanly.

What I genuinely miss from Fedora isn't trivial, though.
The kernel gap is real — Fedora 41 shipped kernel 6.11 while Ubuntu 24.04 launched with 6.8. For my use case (a Coral TPU for Frigate NVR and an Intel Arc GPU for hardware transcoding in Jellyfin), newer kernels actually matter for driver support.

The other thing I miss is Podman's rootless story. On Fedora, rootless Podman with user namespaces and `slirp4netns` just works out of the box, including socket activation via systemd. On Ubuntu 24.04 you can get there, but you're fighting package versions — the distro ships Podman 4.x while Fedora has been on 5.x for a while. And `firewalld` zones are genuinely better than UFW for anything with multiple network interfaces; the zone-based model maps to physical topology in a way that UFW's flat ruleset doesn't.

The compromise that's actually holding up: Ubuntu 24.04 LTS as the base, with the HWE kernel track enabled to get closer to mainline without jumping distros. You get the 5-year support guarantee, predictable upgrade cycles, and a Samba stack that doesn't get its contexts scrambled on major upgrades. For the kernel, one command:

```bash
# switch to the hardware enablement kernel — currently 6.8.x on 24.04,
# tracks newer hardware support without full distro churn
sudo apt install linux-generic-hwe-24.04

# verify you're on it after reboot
uname -r   # should show something like 6.8.0-xx-generic
```

For anyone building a Podman-native homelab — meaning you're doing rootless containers, quadlet-based unit files, and you want Podman 5.x's network stack — ignore everything I just said and run Fedora. Same advice if you bought hardware in the last 12 months that needs kernel 6.10+ for basic functionality (some Arc GPUs, newer Wi-Fi chipsets, AMD's latest integrated graphics). The LTS stability argument evaporates if your hardware barely runs on the shipping kernel.
But if your server is mostly stable hardware running Samba, Docker, Jellyfin, maybe a few containers, and you want to sleep instead of debugging SELinux relabeling at midnight — Ubuntu 24.04 LTS is the boring correct answer.

* * *
Originally published on techdigestor.com. Follow for more developer-focused tooling reviews and productivity guides.