    Proxmox
    VirtIOFS
    Virtualization
    File Sharing
    ZFS

    VirtIOFS Is the Best Thing You're Not Using in Proxmox

    December 14, 2025
    7 min read
If you're deep in the world of Proxmox virtualization, you've probably already tried your hand at containers, VMs, and maybe even a home-labbed Ceph cluster or two. But there's one low-key feature that's been flying under the radar and, frankly, deserves a lot more attention: VirtIOFS. For a lot of users, VirtIOFS has quietly become the go-to way to bridge the gap between the host and guest operating systems, especially when it comes to file sharing. And if you've been sticking to the old-school approaches, like Samba or NFS, you might be missing out on one of the simplest and fastest ways to get shared folders up and running inside your virtual machines.

## So, What Is VirtIOFS Anyway?

Let's not get too bogged down in technical documentation. At its core, VirtIOFS is a shared file system built specifically for VMs. Unlike older methods (hello, 2000s-era NFS), VirtIOFS runs over the VirtIO transport used in QEMU/KVM environments, so there's no network stack in the path between host and guest. What this means for you: fast, low-latency access to host files from inside your guest VMs. It's the kind of thing that just works. And for folks who've battled the quirks of LXC containers or kludged together workarounds for persistent shared storage, VirtIOFS can feel like a breath of fresh air.

## Why It's Gaining Traction

One user summed it up perfectly: "I have been stuck with LXCs just because I could easily share a host folder with multiple containers at the same time. But LXC required extra config compared to VMs, like when using unprivileged, and using VPNs like Tailscale."

This is the story for a lot of Proxmox users. Containers are great, until they're not. Running Docker Swarm inside an LXC can get messy, especially if you're juggling VPNs or trying to sandbox with unprivileged permissions. VMs offer more flexibility and isolation, but they've historically lacked a good, straightforward way to share files, until VirtIOFS.
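To give a flavor of how little setup is involved, here's a rough sketch of the workflow on a recent Proxmox VE release with native VirtIOFS support. The mapping name, node name, VM ID, and paths are all illustrative, and the exact `pvesh`/`qm` flags may differ by version, so the web UI (Datacenter, then Directory Mappings) is the safer route:

```shell
# Host: create a directory mapping for a host path (also doable in the
# web UI under Datacenter -> Directory Mappings). Names are examples.
pvesh create /cluster/mapping/dir --id shared-data \
    --map node=pve1,path=/tank/shared

# Host: attach the mapping to VM 101 as a virtiofs device:
qm set 101 --virtiofs0 dirid=shared-data,cache=always

# Guest (Linux): mount it, using the mapping id as the virtiofs tag:
mount -t virtiofs shared-data /mnt/shared
```

That's the whole loop: one mapping on the host, one device on the VM, one mount in the guest. No exports file, no credentials, no network share to babysit.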
Once this user started moving from Docker Swarm in LXC to Alpine VMs, the game changed. Setting up shared storage with VirtIOFS was fast, easy, and just worked. Even mounting the same host folder into a desktop Linux VM? No problem. Performance? "Great."

That experience isn't unique. Another user noted they "instantly migrated everything to VirtIOFS" when Proxmox rolled out native support. No hiccups. No regrets.

## Real-World Use Cases

A pattern is starting to emerge. Proxmox users are doing some interesting things with VirtIOFS:

- **Passing ZFS datasets from the host into guest VMs**
- **Running Docker inside full-blown Linux VMs** (instead of containers) using VirtIOFS-mounted volumes
- **Sharing persistent storage between multiple VMs**, with no Samba shares and no fuss
- **Even using it on Windows guests** for massive file transfers (though not always smoothly, more on that in a bit)

For example, one user piped in with this setup: Proxmox host on ZFS, datasets shared via VirtIOFS to a VM, which then uses them as persistent Docker volumes. Performance might not beat raw block storage via VirtIO SCSI, but for ease of use? It's hard to beat. Another user went all in: "I use VirtIOFS on a ZFS RAIDZ2 3-HDD storage set for my main data drive for an Ubuntu VM." The VM lives on a separate SSD, while datasets get piped in as needed. Simple. Clean. Functional.

## But It's Not All Perfect

Not everyone's having a seamless time. One user flagged a performance gap: "Writing to a VM's VirtIO SCSI drive gives me 1.5GB/s, but VirtIOFS drops to around 150MB/s." That's roughly a 10x difference, and it's something to keep in mind if you're planning heavy-duty sequential I/O. Another ran into stability issues transferring terabytes of data through VirtIOFS to a Windows VM: after hundreds of gigabytes, the VM would lock up, potentially due to driver limitations under high load.
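If you want to see where your own setup lands, a quick sequential-write test from inside the guest makes the gap (or lack of one) visible. Paths here are illustrative; `oflag=direct` bypasses the guest page cache so the numbers reflect the actual transport:

```shell
# Inside the guest: sequential write to the VirtIOFS mount...
dd if=/dev/zero of=/mnt/shared/test.bin bs=1M count=4096 oflag=direct

# ...and the same write to a filesystem on a VirtIO SCSI disk, for comparison:
dd if=/dev/zero of=/home/test.bin bs=1M count=4096 oflag=direct

# Clean up the test files afterwards:
rm /mnt/shared/test.bin /home/test.bin
```

Sequential writes are the worst case for VirtIOFS relative to block storage; small-file and metadata-heavy workloads tend to show a much smaller spread.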
It's worth noting that Windows support is still maturing, and your mileage may vary depending on the guest configuration. So yeah, VirtIOFS isn't perfect. But the issues people are seeing seem to be either edge cases (massive file transfers, driver bugs in Windows) or tradeoffs for ease of use.

## The Hidden Advantage: Simplicity

Let's be real: Samba is a pain to set up cleanly. NFS can be finicky, especially with permissions and networking. LXC bind mounts are great until you need better VM isolation. VirtIOFS sidesteps all of that.

One user who was new to Proxmox admitted they chose VirtIOFS for everything simply because they didn't know the alternatives. Guess what? It worked out just fine. Sometimes, not knowing the old way lets you skip straight to the better one. For newcomers, this is gold: less friction, fewer moving parts, and a setup process that doesn't require diving into obscure Proxmox forum threads or deciphering systemd mount units from a five-year-old blog post.

## What About Performance?

Let's go back to the earlier complaint: is VirtIOFS "slow"? It depends. For raw IOPS and sequential throughput, VirtIO SCSI with block storage still has the edge. But VirtIOFS isn't meant to replace block storage; it's about file-level access. If you're reading and writing lots of smaller files, or just need shared access to config folders, app data, or Docker volumes, it's fast enough. For many users, the tradeoff in raw speed is worth it for the ease of setup and use. In most real-world scenarios, VirtIOFS doesn't feel like a bottleneck, especially if you're not maxing out the system with giant transfers.

## Final Thoughts: Should You Be Using It?

If you're a Proxmox user and haven't given VirtIOFS a shot yet, now's the time.

- **Need a quick way to share host data with VMs?** Use VirtIOFS.
- **Want to move off containers into VMs without losing easy storage sharing?** Use VirtIOFS.
- **Running Docker in VMs?** VirtIOFS makes persistent volumes painless.

It's not about reinventing your entire setup. It's about finding those low-friction tools that make your life easier, and VirtIOFS is one of those tools. It fits seamlessly into modern workflows, works with ZFS, and helps bridge that awkward space between containers and full virtualization.

Sure, it's not perfect. There are some performance quirks, and power users pushing the limits might bump into bugs. But for the vast majority of Proxmox setups? VirtIOFS is the kind of "just works" tech we all dream of.

## TL;DR

- VirtIOFS provides fast, low-latency file sharing between the Proxmox host and VMs
- Much simpler than Samba/NFS setups, with fewer permission headaches
- Great for Docker volumes, ZFS dataset sharing, and multi-VM storage access
- One user measured ~150MB/s over VirtIOFS vs 1.5GB/s over VirtIO SCSI block storage; still fine for most use cases
- Windows support is still maturing; Linux guests work best
- If you're stuck on LXCs just for bind mounts, VirtIOFS might be your ticket to VMs
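To make the Docker-volume pattern from earlier concrete, here's a minimal sketch of wiring a VirtIOFS share into a container. It assumes a virtiofs tag of `shared-data` already attached to the VM; the tag, mount point, and container names are all illustrative:

```shell
# Guest /etc/fstab: mount the virtiofs tag at boot (tag = the mapping id)
echo 'shared-data /mnt/shared virtiofs defaults 0 0' >> /etc/fstab
mount -a

# Bind a directory on the share into a container as its persistent data dir:
docker run -d --name db \
    -v /mnt/shared/pgdata:/var/lib/postgresql/data \
    postgres:16
```

The nice part: the container's data now lives on the host's ZFS dataset, so host-side snapshots and replication cover it with zero extra plumbing inside the VM.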
