Proxmox · PBS · Backup · Performance · Storage · Virtualization

# Running PBS on the Same Host? Here's Why Your Backups Might Crawl

October 28, 2025 · 6 min read
So you've got a monster server: 24 cores, 256 gigs of DDR5 RAM, and a Gen5 datacenter NVMe that could outrun most SSDs in its sleep. And yet… your Proxmox Backup Server (PBS) chugs along at a sluggish 200MB/s when backing up VMs. What gives?

This is the exact headache one user ran into after deciding to host PBS inside a VM on the same Proxmox node it was backing up. On paper, the setup screams performance. In practice, the speed graph looked more like a tortoise on a treadmill. Let's break down why this happens, and why no amount of raw hardware muscle can save you from virtual bottlenecks if you're not careful.

## The Problem: When High-End Hardware Underperforms

In this case, the server had every performance checkbox ticked: a blazing fast NVMe (Kingston DC3000ME Gen5), plenty of memory, and CPU headroom for days. There was no load on the system and no apparent I/O bottleneck, yet backups plateaued around 350MB/s read/write. What made it worse was the drive's advertised specs: over 10,000MB/s in both directions. You expect snappy backups, not a traffic jam. So where was all that speed going?

## VM Networking: Your First Suspect

Here's the big "aha" moment: running PBS inside a VM on the same Proxmox host means all backup traffic has to travel through virtualized layers, including virtual NICs that can become silent killers of performance.

The Reddit user suspected this, and was right to. The virtual NIC (likely the VirtIO default) introduces some overhead, and when you're backing up a few hundred gigs, that adds up. Virtual network interfaces don't always reach the same throughput as native ones, especially if you haven't enabled multiqueue support or jumbo frames (MTU 9000).

Another commenter chimed in with a recommendation: match the NIC's multiqueue setting to the number of vCPUs and raise the MTU. That might not sound like a silver bullet, but it often delivers immediate gains.

## LVM and Virtual Disk Overhead

A closer look at the setup reveals the PBS VM uses a virtual SCSI disk with SSD emulation enabled, layered on top of LVM. Sounds tidy, but it's one abstraction layer too many. Performance often takes a hit when data moves across multiple virtualized volumes, especially when both read and write streams are bouncing between LVM layers.

SCSI is fine in general, but pushing heavy I/O through this virtual pipe on top of host-based LVM adds latency, not just for reads but also during write amplification, syncs, and compression. And that's before we even talk about PBS's own deduplication and chunking workload.

## CPU and RAM? Probably Not Your Problem

The system reported just 30% CPU usage and 3% RAM usage during backup, so this isn't about raw horsepower. What you're likely looking at is a software-driven bottleneck where virtualized I/O and PBS's internal mechanics are not playing nice. And if your CPU is doing too little, the bottleneck is probably happening earlier in the pipeline, in the data path between the source VM and the PBS storage.

## The Alternative: Bare Metal or Hybrid Install

Several voices in the thread echoed the same sentiment: "Why not just install PBS directly on the Proxmox node?" Turns out, it's possible, and even efficient. You can install PBS alongside Proxmox on the same physical server with a simple `apt install proxmox-backup-server` after adding the repo. This way, PBS has direct disk access, and you skip the VM overhead entirely.
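For reference, the whole thing boils down to a couple of commands. A minimal sketch, assuming a PVE 8 host on Debian Bookworm and the no-subscription repository; the suite name and the sources file name are assumptions, so adjust them for your setup:

```bash
# Add the PBS no-subscription repository (assumes Debian Bookworm / PVE 8;
# swap "bookworm" for your actual release, and name the file whatever you like)
echo "deb http://download.proxmox.com/debian/pbs bookworm pbs-no-subscription" \
  > /etc/apt/sources.list.d/pbs.list

# A PVE host already trusts the Proxmox signing key, so just update and install
apt update
apt install proxmox-backup-server
```

Once that finishes, the PBS web UI comes up on port 8007 on the same host, and you can add a datastore that points straight at local disks.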
If you don't want to go full bare metal, a hybrid setup using LXC containers with bind mounts might be a sweet spot. One user noted better performance running PBS in an LXC versus a full VM, likely because LXCs skip the virtual disk layers and go straight to host storage.

## "But What If My Node Dies?"

It's a valid concern. Running PBS and Proxmox on the same server feels risky: if the host blows up, what happens to your backups?

This is where good storage hygiene comes into play. Some users combat this by placing backup storage on a completely separate drive, isolated from the Proxmox system disk (there's a quick sketch of that at the end of this post). That way, even if the OS gets nuked, you can reinstall and mount your backup volume without data loss. Others take it further, setting up multiple PBS instances across separate nodes and syncing backups between them. Yes, that's more complex, but it gives you redundancy. And peace of mind is worth it.

## FIO Testing: Don't Guess, Measure

Want hard numbers? One savvy commenter dropped an FIO command to benchmark real-world disk performance from inside the PBS VM. This test tells you whether your bottleneck is disk I/O or somewhere else. Here's the kind of command they suggested:

```bash
fio --name=seq-read128k --ioengine=io_uring --rw=read --bs=128k --size=2g --numjobs=8 --iodepth=64 --runtime=20 --time_based --group_reporting
```

Run that inside your VM and compare it with the same test on the host. If the VM is significantly slower, you've found your culprit.

## TL;DR: Here's Why Your PBS VM Is Slow

- **Virtual NIC overhead** can throttle performance. Try jumbo frames (MTU 9000) and multiqueue.
- **Virtual disk over LVM** adds unnecessary latency. Consider direct disk access or bind mounts.
- **Running PBS in a VM** on the same node forces backup traffic through virtual layers.
- **PBS on the host** (bare metal or containerized) is often faster, and sometimes simpler.
- **Use separate disks** for system and backup storage. It reduces risk and increases speed.
- **Benchmark with FIO** to find your real-world I/O limits.

## Final Thoughts

Putting PBS inside a VM seems like a clever way to keep things modular, but in practice it often leads to performance bottlenecks that waste your time and your hardware's potential. If you're hitting the same speed walls, take a hard look at how you're layering your stack. Sometimes the fix isn't throwing more hardware at the problem; it's peeling back the layers that are quietly slowing you down.
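And that promised sketch of the separate-disk setup: a minimal example, assuming PBS is installed on the host, the spare backup disk shows up as `/dev/nvme1n1` (a placeholder; yours will differ), ext4 suits your needs, and the datastore name `backup1` is just an example:

```bash
# Placeholder device name for the dedicated backup disk -- adjust to your hardware
DISK=/dev/nvme1n1

# Format the disk and mount it outside the Proxmox system disk
mkfs.ext4 "$DISK"
mkdir -p /mnt/pbs-datastore
mount "$DISK" /mnt/pbs-datastore

# Make the mount persistent (prefer the filesystem UUID here in production)
echo "$DISK /mnt/pbs-datastore ext4 defaults 0 2" >> /etc/fstab

# Register the mount point as a PBS datastore
proxmox-backup-manager datastore create backup1 /mnt/pbs-datastore
```

If the node ever needs a reinstall, you can mount that drive again and point a fresh PBS at it, so the backups themselves survive the rebuild.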