Tags: Docker · LXC · Proxmox · VMs · Containers · Virtualization
Docker in LXC vs VMs on Proxmox: Why This Debate Refuses to Die in 2026
January 27, 2026
9 min read
Every few years, the same argument bubbles back up in the Proxmox world. Someone asks an honest, practical question: Why not just run Docker inside an LXC? It works. It's fast. It uses less memory. And plenty of people have been doing it for years without issue.
And then the replies roll in.
"Unsupported."
"Just use a VM."
"It'll break on upgrade."
"Containers inside containers is asking for trouble."
Fast forward to 2026, and here we are again—still arguing, still deploying, still quietly breaking the "rules" in homelabs and edge servers everywhere. Even with newer features and a slightly softer official stance from Proxmox VE, the Docker-in-LXC debate refuses to settle down.
That's not because people are stubborn. It's because the tradeoffs are real, and the incentives on both sides haven't gone away.
## Two platforms, one kernel, zero patience for each other
At the heart of the debate is a simple architectural tension. Proxmox is a virtualization platform. Docker is a containerization platform, working one layer up. Both rely heavily on the Linux kernel to do their job.
An LXC container on Proxmox shares the host kernel. Docker containers also expect to talk directly to the host kernel. When you run Docker inside an LXC, you're stacking two systems that both assume they're "closest" to the kernel.
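You can see this kernel sharing directly on any Proxmox host. A quick sketch (the container ID 101 is a placeholder for whatever CTID you actually use):

```shell
# On the Proxmox host: print the running kernel
uname -r

# Inside any LXC guest on that host (101 is a placeholder CTID):
pct exec 101 -- uname -r
# Same version as the host: the guest has no kernel of its own
```

A VM running the same check would report its own, independently updated kernel, which is the whole difference in one command.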
Most of the time, Linux is flexible enough to make this work. Until it isn't.
Kernel updates. AppArmor profile changes. Cgroup version shifts. Filesystem driver tweaks. None of these are exotic edge cases. They're normal parts of keeping a host secure and up to date. And when something changes at the Proxmox layer, Docker doesn't know or care that it's running inside LXC. Likewise, Proxmox doesn't test its updates against Docker running inside containers it doesn't officially support.
That gap is where the horror stories come from.
## "But I've been running Docker in LXC for years"
You'll hear this a lot. And it's usually true.
Many users have run Docker inside unprivileged LXCs for five, six, even seven years. Some upgraded straight through multiple major Proxmox releases without a single incident. Their takeaway is obvious: the risk is overblown.
The catch is that success here is unevenly distributed. If your workloads are light, your storage layout is simple, and you don't rely on edge-case kernel features, Docker in LXC can feel rock solid. If you're pushing GPUs, advanced networking, or storage drivers that sit right on the boundary between kernel and user space, cracks start to show.
That unpredictability is exactly why Proxmox keeps repeating the same advice. Not because it never works, but because when it fails, it fails in ways they can't easily support or debug.
## Scenario one: "Just use LXC, no Docker"
The least controversial setup is also the least fashionable. Plain LXCs, traditional packages, systemd services, maybe glued together with Ansible.
From Proxmox's perspective, this is the happy path. LXCs are first-class citizens. They're tested. They're documented. Kernel updates are expected to work here.
The downside is obvious if you've spent the last decade living in container land. Many modern services don't really ship as "software" anymore. They ship as Docker images. No apt repo. No clean install guide. Just a compose file and a prayer.
Rebuilding those stacks by hand isn't impossible, but it's work. And it's ongoing work. Dependencies change. Docs drift. Suddenly your "simple" LXC looks like a custom distro you're maintaining alone.
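For illustration, this is the distribution model the Docker-first world assumes: a hypothetical service published only as an image plus a compose file (the image name, ports, and paths below are made up):

```yaml
# docker-compose.yml: for many modern services, this IS the install guide.
# Image name, tag, ports, and volume paths are hypothetical placeholders.
services:
  app:
    image: example/someservice:latest
    ports:
      - "8080:8080"
    volumes:
      - ./data:/data      # all state lives in one directory next to this file
    restart: unless-stopped
```

Translating a file like this into packages, systemd units, and config files is exactly the ongoing maintenance work the plain-LXC approach signs you up for.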
## Scenario two: Docker inside LXC
This is the controversial middle ground, and the reason the debate won't die.
On paper, Docker-in-LXC doesn't buy you much isolation. Both layers use the same kernel features: namespaces, cgroups, capabilities. You're not really safer. You're not meaningfully faster either. What you are doing is doubling up on abstraction.
Paths get mapped twice. Ports get forwarded twice. UID and GID mappings become a minor personality test.
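For context, the setup people are actually debating looks roughly like this in the container's Proxmox config (the CTID 101 is a placeholder; `nesting` lets Docker create its own namespaces and cgroups, and `keyctl` is commonly needed for Docker's credential handling):

```shell
# /etc/pve/lxc/101.conf (101 is a placeholder CTID)
unprivileged: 1
features: keyctl=1,nesting=1
```

Two lines of config, which is part of the appeal, and also part of the problem: the simplicity hides how much kernel surface the nested runtime is relying on.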
And yet, people keep doing it. Why? Because the ergonomics are unbeatable. Docker Compose is easy. Documentation exists. Backups are simple. Migration is often just copying a directory and re-running a stack.
The real risk isn't performance or security. It's compatibility drift. Docker updates its runtime. Proxmox updates its kernel or LXC profiles. Nobody is coordinating those changes across layers. Most of the time, nothing explodes. Sometimes it does.
When it breaks, it usually breaks after an upgrade. And when it does, you're on your own.
## Scenario three: Docker in a VM
This is the boring, safe answer. And in production, boring usually wins.
A VM gets its own kernel. Docker is happy. Proxmox is happy. Updates are far less likely to cause weird permission failures or kernel feature mismatches. Live migration works. HA works. Support tickets make sense.
The cost is overhead. Even a lean VM needs memory for its kernel, its init system, and its idle processes. On a big server, that's noise. On a small box or a power-constrained homelab, it's the difference between fitting everything and having to make choices.
GPU passthrough also gets trickier. A passed-through GPU is pinned to a single VM unless you carefully slice it up with vGPU or SR-IOV, and not everyone wants to deal with that complexity.
Still, if uptime matters more than squeezing every last watt, this is the path Proxmox actually designs for.
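If you go this route, the Proxmox side is just a stock Linux VM that Docker then treats as an ordinary host. A minimal sketch (the VMID 200, storage name `local-lvm`, and sizes are placeholders):

```shell
# Create a small VM to act as a Docker host
# (200, local-lvm, memory, and disk size are placeholders)
qm create 200 --name docker-host --memory 4096 --cores 2 \
  --net0 virtio,bridge=vmbr0 --scsi0 local-lvm:32 --ostype l26

# Then install a mainstream distro and Docker inside the VM;
# nothing Proxmox-specific remains in the container stack.
```

The overhead the article describes is visible right in that command: the memory and disk you reserve here are spoken for whether the workload uses them or not.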
## Why upgrades are the real villain
The most common breaking points aren't dramatic. They're boring details.
- A container runtime changes how it touches /proc.
- An AppArmor profile tightens slightly.
- A storage driver flips its default behavior.
Each change is reasonable in isolation. Combined across layers, they can stop containers from starting at all.
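None of these layers is easy to see at a glance, but you can at least check the lowest one before an upgrade. The cgroup v1-to-v2 migration is exactly the kind of shift that broke nested Docker setups, and detecting which hierarchy a kernel exposes is a one-liner (GNU coreutils `stat` assumed; a current Proxmox host will typically report v2):

```shell
#!/bin/sh
# Report which cgroup hierarchy this kernel exposes.
cgroup_mode() {
  # cgroup2fs as the filesystem type of /sys/fs/cgroup means
  # the unified (v2) hierarchy is mounted there.
  if [ "$(stat -fc %T /sys/fs/cgroup 2>/dev/null)" = "cgroup2fs" ]; then
    echo "v2"
  else
    echo "v1-or-hybrid"
  fi
}

echo "cgroup hierarchy: $(cgroup_mode)"
```

Comparing this (plus `docker info` and `aa-status` output) before and after an upgrade won't prevent breakage, but it tells you which layer moved when something stops starting.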
This is why people say "Docker in LXC breaks on upgrades." It's not superstition. It's an acknowledgment that you're running two fast-moving systems that don't test against each other.
If you delay upgrades, keep good backups, and accept occasional downtime, that risk might be fine. If you need predictable behavior on day one of a new release, it probably isn't.
## The OCI wildcard
Recent Proxmox VE releases (starting with 9.1) have added early support for running OCI images directly, without spinning up a Docker daemon inside an LXC. It's a subtle but important shift.
Instead of nesting runtimes, Proxmox treats the image as input and runs it using its own container stack. That removes one entire layer of friction. No Docker socket. No runc inside LXC. Just Proxmox managing the container lifecycle itself.
It's promising. It's also not finished.
There's no mature equivalent to Docker Compose yet. Orchestration is basic. Upgrades are cautious. Right now, it feels like a glimpse of a future where this debate might finally cool off—but not something everyone can bet their infrastructure on today.
## Why the argument keeps coming back
This debate survives because it isn't about right and wrong. It's about priorities.
If you value efficiency above all else, LXCs—Docker or not—are incredibly compelling. Memory sharing is real. Startup times are instant. Resource usage feels honest.
If you value stability and support, VMs win by default. The isolation is cleaner. The blast radius is smaller. The rules are clearer.
And if you value convenience, Docker remains hard to beat. That convenience doesn't disappear just because someone tells you it's "unsupported."
So people keep mixing and matching. They accept the risks that matter least to them and ignore the rest. Then a new Proxmox release lands, something changes, and the conversation starts all over again.
In 2026, the tools are better. The kernels are smarter. The warnings are clearer. But the tradeoffs are still there.
That's why this debate refuses to die. Not because people can't agree—but because, depending on how you run your systems, they're all a little bit right.