Tags: VMware, Proxmox, Migration, Fibre Channel, Storage, Enterprise


February 13, 2026
12 min read
# Two-Node Clusters, Fibre Channel, and a Leap of Faith: Inside a VMware-to-Proxmox Migration

An IT team managing 10 clusters and 21 hosts across global sites is migrating its entire VMware infrastructure to Proxmox, navigating architectural constraints and storage complexities that don't appear in vendor documentation.

The infrastructure isn't small: 10 clusters spread across worldwide locations, 21 hosts total, production workloads managed through a single vCenter. Storage varies by site—Pure Storage arrays, Dell PowerStore, Dell Unity—with some connected via Fibre Channel, others through iSCSI.

The environment evolved over time rather than following a unified architectural plan. Different hardware budgets, site autonomy, and successive server generations created what one engineer described as "the usual enterprise patchwork."

Now the plan is to move everything to Proxmox, with help from a partner firm. But the internal team is new to the platform, and questions are mounting.

## Two-Node Clusters Present Quorum Challenges

VMware handles two-node clusters without issue. Proxmox requires more careful planning. Proxmox uses corosync for cluster quorum, and two nodes alone create split-brain risk. If one node fails, the remaining host lacks quorum and can't perform cluster operations.

The current proposal involves adding a Raspberry Pi as a quorum device for each cluster. Proxmox supports external quorum devices, making this technically valid. But multiple engineers questioned the approach, suggesting consolidation into 3–7 node clusters instead. With 21 hosts total, the math works.

The obstacle is geography. Sites are global. Workloads must remain local for latency reasons. Fibre Channel storage doesn't extend across continents, ruling out a single consolidated cluster.

"Sometimes you're not designing from scratch. You're inheriting," one engineer said.

## Storage Configuration Requires Linux Expertise

The most complex aspect of the migration isn't compute—it's storage. Pure Storage arrays, Dell PowerStore, Dell Unity, and IBM FlashSystem in some environments. Fibre Channel at 16Gb in some locations. iSCSI with multiple active paths in others. Proxmox behaves differently from VMware here.

## LVM Management Becomes Necessary

With Proxmox, Fibre Channel LUNs typically require LVM (Logical Volume Manager). That means administrators need to understand `pvcreate`, `vgcreate`, `lvcreate`, `lvs`, and `vgs` commands. For teams accustomed to vCenter's abstraction layer, this represents a shift. VMware hides much of the underlying Linux storage stack. Proxmox exposes it.

One engineer running IBM FlashSystem over 16Gb Fibre Channel said it "worked like a charm—after knowing how LVM works." The qualifier is significant.
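To make that shift concrete, here is a minimal sketch of preparing a Fibre Channel LUN as LVM-backed Proxmox storage. The device path, volume group name, and storage ID are illustrative placeholders rather than values from this environment, and multipath must already be configured (covered below).

```bash
# Minimal sketch, run on one node once the multipathed LUN is visible.
# /dev/mapper/mpatha, vg_fc01, and san-fc01 are placeholder names.

pvcreate /dev/mapper/mpatha            # mark the multipath device as an LVM physical volume
vgcreate vg_fc01 /dev/mapper/mpatha    # create a volume group on top of it
pvs; vgs; lvs                          # confirm what LVM now sees

# Register the volume group with Proxmox as shared LVM storage
# (pvesm is the Proxmox VE storage manager CLI).
pvesm add lvm san-fc01 --vgname vg_fc01 --shared 1 --content images
```

Proxmox then creates a logical volume on that volume group for each virtual disk it provisions, which is where day-to-day familiarity with `lvs` and `vgs` pays off.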
## Thin Provisioning Creates Monitoring Confusion

On Pure Storage arrays, LUNs are thin provisioned and deduplicated at the SAN level. But LVM on the Proxmox side shows them as fully allocated. The Proxmox dashboard reports 100% allocation while the SAN indicates actual usage remains low due to deduplication and thin provisioning.

Without understanding this discrepancy, capacity planning becomes difficult and potentially triggers false alarms.

## Multipath Configuration Must Precede LUN Access

Whether using iSCSI or Fibre Channel, multipath I/O configuration matters. If MPIO isn't configured before claiming LUNs on Proxmox nodes, duplicate LUN IDs can appear.

Engineers recommend scripting MPIO configuration and applying it to every node before accessing any LUNs, then making it standard procedure. Every new LUN requires a WWID entry in the MPIO config on each node.

For deployments using multiple active storage NICs, engineers emphasized testing failure scenarios by bringing down a port to verify I/O continuity.
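As a hedged sketch of that per-node procedure on a Debian-based Proxmox host (device names are placeholders; the WWID comes from your own SAN), the steps might look roughly like this:

```bash
# Run on every node before Proxmox touches the LUN; /dev/sdX is a placeholder.

apt install -y multipath-tools          # Debian package that provides multipathd
systemctl enable --now multipathd

/lib/udev/scsi_id -g -u -d /dev/sdX     # read the WWID of the newly presented LUN
multipath -a /dev/sdX                   # whitelist that WWID in /etc/multipath/wwids
multipath -r                            # rebuild the multipath maps
multipath -ll                           # verify every expected path is active
```

With the maps in place, pulling a single fabric port or storage NIC and re-running `multipath -ll` while I/O continues is a quick version of the failure test described above.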
## Migration Approaches: Import Tool vs. Backup Restoration

Proxmox includes an import tool for VMware VMs. It functions by mounting ESXi storage through the ESXi hosts and importing powered-off VMs. For small VMs, it works adequately. For large production workloads, some engineers report it's slow.

Many teams are using Veeam instead: back up the VM from VMware, restore directly into Proxmox, power it on, then storage-migrate it if needed. This approach is faster, familiar, and provides a rollback strategy.

"When you're touching production across global sites, speed is nice. Safety is better," one engineer said.

## VMware Tools Removal Required Before Migration

Failing to uninstall VMware Tools before migrating Windows VMs can create problems. The uninstaller may fail once the VM is no longer on ESXi, leaving broken VMware Tools in the Proxmox VM. The correct sequence:

1. Uninstall VMware Tools
2. Install VirtIO drivers
3. Install the QEMU guest agent
4. Shut down the VM
5. Convert
6. Power up in Proxmox

Following this process, VMs typically adapt to the hypervisor change without issues. Skipping steps can require driver cleanup at inconvenient times.

## Network Configuration Changes After Migration

When VMs move from VMware to Proxmox, the virtual NIC hardware changes. Guest operating systems often treat this as new hardware. Static IP configurations can disappear. Windows may assign a new interface. Linux might change interface naming.

The issue is predictable but not obvious. Planning for post-migration network reconfiguration—documenting which VMs have static assignments—prevents confusion when VMs appear to lose network connectivity after migration.

## Proof of Concept Deployment Planned

The team isn't planning an immediate full migration. A proof of concept cluster will be deployed first, followed by a staging environment and controlled migrations.

This approach isn't just best practice—it's necessary. Even with partner support and experienced engineers, Proxmox operates differently from VMware. It's Linux-first, requiring administrators to own more of the stack.

## Cluster Architecture Remains Under Discussion

There's an unresolved architectural question: should the environment use fewer, larger clusters or maintain clusters grouped by SAN, hardware generation, and geography?

Fewer, larger clusters are easier to manage and upgrade consistently. But with globally distributed sites, local clusters keep workloads local and prevent WAN latency from affecting HA behavior. They also isolate risk.

One engineer noted that large clusters concentrate risk during major upgrades. If all 21 hosts run in one cluster and a Proxmox upgrade encounters issues, every VM is affected. Smaller clusters allow staggered upgrades with testing on less critical clusters first.

There's no universal answer. The choice depends on understanding the tradeoffs.

## Platform Shift Reflects Changing Enterprise Priorities

This migration represents more than switching hypervisors. It's a move from a commercial, heavily abstracted ecosystem to an open, transparent, Linux-native platform.

Administrators see more of the system. They configure more components directly. They're closer to the infrastructure. For some teams, this is empowering. For others, it's uncomfortable.

But in environments where hardware costs matter, licensing models are shifting, and flexibility is increasingly important, Proxmox is emerging as a viable enterprise platform—particularly when deployed on enterprise storage like Pure Storage and Dell SANs.

## Key Technical Considerations

Teams planning similar migrations should monitor:

- Cluster design decisions made before migration, not after
- MPIO configuration before claiming any LUNs
- How LVM presents thin-provisioned storage
- VMware Tools removal before conversion
- Correct VirtIO and QEMU agent installation
- Network adapter hardware changes
- Storage path failure testing
- Proof of concept deployment

None of these are glamorous. But they're the difference between a successful migration and emergency troubleshooting.

## Moving Away From Established Infrastructure

There's calculated confidence in teams making this transition—not blind optimism, but assessed risk. Proxmox isn't experimental anymore. It's running production workloads on enterprise storage in serious deployments.

Still, migrating 10 clusters across the globe off a platform that's been the infrastructure backbone for years requires careful preparation. Check your procedures twice. Ensure oversight is in place. Then execute.

Because in 2026, with VMware's licensing changes driving migration decisions, staying with existing infrastructure isn't always the lowest-risk option.