# Proxmox in the Enterprise: The Gotchas VMware Admins Don't See Coming
When VMware admins talk about jumping to Proxmox, the conversation always starts the same way: the Broadcom bill lands, someone in finance panics, and suddenly the virtualization stack that's been running quietly for a decade becomes an urgent problem. The story is practically a genre now.
But what happens after that moment — when a team actually stands up a Proxmox proof-of-concept and starts pulling apart their old ESXi habits — that's where things get interesting. And that's where the gotchas start showing up.
Teams who've already made the trip tend to say the same thing: the migration isn't the hard part. The mindset is. And the more someone tries to treat Proxmox like VMware-with-a-different-logo, the more things wobble.
What follows is a collection of real-world lessons, warnings, and gently exasperated advice from engineers who've lived through the transition — and the surprises that keep catching VMware veterans off guard.
## Don't "migrate." The pros insist you rebuild.
Several engineers who've done this at scale warn that the moment you start thinking of the project as a straight migration, you've already boxed yourself in. One admin put it bluntly: "Forget the term migration. Think redesign and reimplementation."
It's not just philosophy. The two platforms approach storage, drivers, networking, CPU presentation, and clustering with just enough differences that 1:1 mapping feels natural at first — right up until it blows up in your face.
This is why teams who try to mimic their old VMware layout in Proxmox often end up ripping out half their work a month later. The ones who treat it as a fresh build, focusing on clean storage design, network segmentation, and node planning, tend to have a far smoother time.
## Storage is where VMware habits go to die
One admin summed it up with a dry smile: "VMFS spoiled us."
Under VMware, VMFS handles snapshots, reservations, and clustered behavior with almost eerie smoothness. But Proxmox doesn't use VMFS — and that's where the first big shock hits.
Teams usually fall into one of two camps:
### 1. NetApp shops stick with NFS or iSCSI
For environments already married to NetApp, NFSv3 ends up being the easy button. It's fast, familiar, and simple. iSCSI works too, especially if someone needs thin provisioning and SCSI reservations. But it comes with a checklist: MPIO configured properly, storage on its own VLAN, jumbo frames, and a clean separation from corosync traffic.
One engineer pointed out that NetApp now officially supports Proxmox, which instantly eliminates years of "is this allowed?" hand-wringing.
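For the NFS route, wiring the export into Proxmox is a single storage definition. A minimal sketch, assuming an existing NetApp NFSv3 export; the storage ID, server address, and export path below are placeholders:

```bash
# Register an existing NFS export as shared Proxmox storage (placeholder values).
pvesm add nfs netapp-nfs \
    --server 10.10.20.5 \
    --export /vol/proxmox_vms \
    --content images,iso \
    --options vers=3

# Confirm the storage mounts and shows as active on every node.
pvesm status
```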
### 2. The Ceph crowd goes all in
Others go the opposite direction, building full internal storage arrays with Ceph. They describe it as the open-source cousin of vSAN — except more transparent and far more sensitive to sloppy hardware choices.
Every engineer who's happy with Ceph says the same thing:
**"Identical hardware. Full stop."**
Matched CPUs, RAM, NICs, HBAs in IT mode, and firmware kept in lockstep.
The result is a storage fabric that can scale out and heal itself, but it punishes shortcuts. Bad memory or dying SSDs become the only real failures anyone sees.
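For reference, the Proxmox-native bootstrap is short. The sketch below assumes identical nodes, a dedicated Ceph network, and NVMe OSDs; the repository choice, network range, and device names are placeholders:

```bash
# One-time cluster setup (placeholder network range and device names).
pveceph install --repository no-subscription   # run on every node
pveceph init --network 10.10.30.0/24           # run once; dedicated Ceph network
pveceph mon create                              # run on the first three nodes
pveceph osd create /dev/nvme0n1                 # run per disk, per node
ceph -s                                         # wait for HEALTH_OK before placing VMs
```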
## NUMA on AMD EPYC: the gotcha almost no one expects
VMware admins tend to be familiar with tuning vNUMA on EPYC hardware. With ESXi, there are knobs for exposing NUMA topology and tweaking boundaries so workloads behave.
Proxmox's KVM layer, however, doesn't offer the same tunables — at least not with the same granularity. And that's where teams get blindsided.
One engineer laid it out simply: Intel sockets behave predictably. But AMD EPYC's micro-NUMA layout still isn't fully honored by KVM today. If a VM's vCPUs stretch across NUMA boundaries, cross-node memory latency shows up. Not catastrophic, just annoying. Unless you're Oracle, in which case it becomes audit-bait.
The workaround? Install hwloc, numactl, and a monitoring tool that exposes per-thread CPU delays on every node. It's not glamorous, but it's the only way to spot which workloads are crossing NUMA lanes and quietly lighting themselves on fire.
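A rough sketch of that baseline tooling; the VM ID and topology values are placeholders, and the qm flags only tell KVM to expose NUMA, not to fix a bad layout:

```bash
# Install the inspection tools and map the host topology.
apt install -y hwloc numactl
numactl --hardware        # NUMA nodes, CPU lists, and memory per node
hwloc-ls                  # full topology, including cache and package boundaries

# Expose NUMA to a VM and keep its vCPU count inside one host node (placeholder VM ID).
qm set 100 --numa 1 --sockets 1 --cores 8
```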
## Windows VMs behave differently — and licensing gets messy
Every engineer who has migrated Windows workloads says the same thing:
don't expect your old hardware IDs, GUIDs, or PCI subsystem IDs to survive.
VMware presents one virtual universe. KVM presents another. The transition resets everything Windows cares about for licensing. That means calling Microsoft in some situations, fighting CSP restrictions, and dealing with Oracle reauth for anything under the big red umbrella.
Most admins recommend testing each licensing type early — two machines at a time — before you formally schedule a migration wave. It's tedious, but it saves you from the 2 A.M. "why is the payroll VM unlicensed?" meltdown.
Windows driver swaps also come with their own choreography:
1. Remove VMware Tools
2. Reboot
3. Install VirtIO drivers
4. Reboot again
5. Start the import
6. Attach the boot disk as SATA
7. Add a tiny SCSI disk with VirtIO
8. Boot and detect the Red Hat SCSI controller
9. Shut down, remove the tiny disk
10. Reattach the real boot disk as SCSI
It sounds absurd until you've done it once — then it clicks. And yes, expect one BSOD on first boot. The system usually corrects itself on reboot number two.
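On the Proxmox side, steps 5 through 10 map to a handful of qm commands. A sketch with a placeholder VM ID, storage name, and source file:

```bash
# Import the ESXi disk and boot it as SATA first (steps 5-6).
qm importdisk 200 payroll-flat.vmdk local-zfs
qm set 200 --sata0 local-zfs:vm-200-disk-0 --boot order=sata0

# Add a throwaway 1 GB VirtIO SCSI disk so Windows loads the driver (step 7).
qm set 200 --scsihw virtio-scsi-single --scsi1 local-zfs:1

# ...boot, let Windows detect the Red Hat SCSI controller, shut down (steps 8-9)...
qm set 200 --delete scsi1,sata0

# Reattach the real boot disk on the SCSI controller (step 10).
qm set 200 --scsi0 local-zfs:vm-200-disk-0 --boot order=scsi0
```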
## Networks behave differently — and Open vSwitch becomes your new friend
VMware's networking stack is well-loved because it does predictable things in predictable ways. Proxmox's Linux bridge works fine, but engineers who want cleaner VLAN trunking, more flexible topologies, or a closer ESXi-like feel almost always switch to Open vSwitch.
Once they do, they say VLAN mapping feels more natural, and automation layers (Ansible, Terraform) behave more consistently.
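The switch itself is small. A sketch assuming a single trunked uplink; the interface name is a placeholder, and the bridge stanza lives in /etc/network/interfaces:

```bash
apt install -y openvswitch-switch

# /etc/network/interfaces (OVS bridge with the physical uplink trunked in):
#   auto eno1
#   iface eno1 inet manual
#       ovs_type OVSPort
#       ovs_bridge vmbr0
#
#   auto vmbr0
#   iface vmbr0 inet manual
#       ovs_type OVSBridge
#       ovs_ports eno1

ifreload -a    # apply with ifupdown2, no reboot required
```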
The other recurring advice:
- Put corosync on its own network
- Keep storage traffic separate
- Use a hardware watchdog
- Don't combine everything into one interface "just for testing" and forget about it
It's all the same ideas VMware admins already know — but the muscle memory doesn't always match the tooling.
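For the corosync piece in particular, dedicating a link is easiest at cluster-creation time. A sketch with placeholder addresses, assuming two separate cluster networks:

```bash
# On the first node: pin corosync to a dedicated network, with a redundant link.
pvecm create prod-cluster --link0 10.10.40.11 --link1 10.10.41.11

# On each joining node: point at an existing member and give its own link addresses.
pvecm add 10.10.40.11 --link0 10.10.40.12 --link1 10.10.41.12
```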
## HA and cluster behavior require rethinking
VMware HA has spent years being polished into the background. Proxmox's HA stack works well, but it behaves in a more Linux-y, transparent way.
That transparency is great — until someone ignores it.
One team noted that when HA covers a large number of VMs, backfill issues can appear. Sometimes HA needs a toggle (disable → re-enable) to clear out old state. It's rare, but it's real.
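The toggle itself is quick. A sketch with a placeholder service ID, assuming the VM is already under HA management:

```bash
# Disable, let the manager drop the stale state, then re-enable.
ha-manager set vm:100 --state disabled
ha-manager status                        # wait for the resource to settle
ha-manager set vm:100 --state started
```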
Another recurring theme: just don't run two-node clusters. You're begging for a split-brain. If leadership absolutely insists, engineers say to add a QDevice at minimum.
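Adding the QDevice is a short job. The sketch below assumes a small external host (placeholder address) running corosync-qnetd:

```bash
apt install -y corosync-qdevice       # on both cluster nodes
pvecm qdevice setup 10.10.40.50       # external host runs corosync-qnetd
pvecm status                          # expect three total votes: two nodes + qdevice
```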
## Expect more DIY — but fewer actual problems
This might be the biggest culture shock for VMware admins.
Several engineers who've run Proxmox for years say they've basically never opened support tickets. Not because they were toughing it out, but because nothing broke at the platform level. Every failure they described was hardware: a dead SSD, a burnt-out DIMM, a power issue.
Swap the component, let ZFS or Ceph rebuild, and move on.
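The routine is about as exciting as it sounds. A sketch with placeholder pool, device, and OSD names:

```bash
# ZFS: confirm the faulted disk and resilver onto the replacement.
zpool status rpool
zpool replace rpool /dev/sdc /dev/sdd

# Ceph: drain the failing OSD and watch recovery until HEALTH_OK.
ceph osd out osd.7
ceph -w
```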
That's not to say Proxmox is hands-off. Kernel updates require reboots. Cluster upgrades need planning. But the environment doesn't have the same "mystery failures" people associate with ESXi host agents or vCenter oddities.
## Automation helps — but the ecosystem is different
VMware admins often come with vRealize or PowerCLI in their bloodstream. Proxmox leans hard into open tooling:
- Ansible for configuration
- Terraform for provisioning
- REST API for custom workflows
- Proxmox Backup Server for snapshot-mode backups and bare-metal recoveries
One engineer even mentioned building a quick integration layer using DreamFactory to expose lightweight REST endpoints for quotas and inventory — something that took a weekend instead of a budget cycle.
The vibe across teams is the same: Proxmox gives you the pieces. You wire them together however you like.
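For a taste of the glue work: the API is a plain HTTPS call with a token, and backups are a one-liner. A sketch with placeholder host, token, VM ID, and storage names:

```bash
# List every VM in the cluster via the REST API (placeholder token and host).
curl -s -H "Authorization: PVEAPIToken=automation@pve!inventory=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee" \
    "https://pve1.example.com:8006/api2/json/cluster/resources?type=vm"

# Snapshot-mode backup of a VM to a Proxmox Backup Server datastore.
vzdump 100 --storage pbs-backups --mode snapshot
```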
## And then there's Oracle…
Every engineer who has Oracle workloads says the same thing: isolate them.
Put Oracle workloads on a dedicated cluster. No migration. No vMotion. No touching CPU boundaries. No surprises for the auditor.
Just a quiet little island with documented limits.
## The real shock? How uneventful things become once the design is right
The loudest message from the engineers who've made the jump is surprisingly boring: once you get storage right, networking clean, NUMA understood, and migration practices settled, Proxmox becomes… quiet.
Not showy. Not flashy. Just stable.
Updates land. Backups run. Ceph heals itself. VMs do VM things. The platform fades into the background in the same comfortable way ESXi once did — before the bill arrived.
The real work isn't in the migration scripts or the VM imports. It's in stripping away old assumptions. VMware shaped how people thought virtualization should feel. Proxmox asks them to think a little differently.
The teams who embrace that shift?
They tend to wonder why they didn't leave sooner.