January 25, 2026
# Ceph, StarWind, Synology: How I Accidentally Tried Every Storage Idea at Once
There's a specific moment in every homelab journey where you stop building things and start negotiating with them.
Not fixing. Not optimizing. Negotiating.
It's the point where your storage stack looks back at you and says, "We need to talk."
This is a story about how I ended up running Ceph, StarWind VSAN, and Synology at the same time—not because I planned some elegant multi-tier architecture, but because every attempt to simplify things somehow added another layer instead. A storage turducken. Delicious in theory. Concerning in practice.
I didn't set out to try every storage idea at once. I just wanted one thing:
High-availability bulk storage that doesn't explode when a node sneezes.
That's it. That was the dream. I did this to myself.
---
## The Original Plan (Which Was Perfect, Obviously)
I run a small Proxmox cluster. Five nodes. Nothing wild. It's been humming along for years. At some point in the past, I did the sensible thing and set up Ceph because:
- It's native
- It's "enterprise"
- Everyone says "just use Ceph" with the confidence of someone who has already suffered
And honestly? Ceph worked. For VM disks. For containers. For things that need to stay up when a host dies. No complaints there.
But then there was bulk storage.
Plex libraries. Frigate footage. Media files that don't care about IOPS but do care about staying online. Stuff that wants to be mounted once and never thought about again.
Enter: Synology RackStation.
The Synology did exactly what it was supposed to do. SMB. NFS. Single IP. Rock solid. Boring. Beautiful. It sat there quietly being competent while I pretended it was "temporary."
And then I thought:
"What if I moved that storage into the cluster?"
This is where everything went off the rails.
---
## StarWind: "This Will Be Simple" (Narrator: It Wasn't)
Ceph already existed. But I wanted something a little more… SAN-shaped for bulk storage. Something with synchronous replication. Something that felt like a block device I could reason about.
So I added StarWind VSAN.
And to be fair, StarWind did what it promised. iSCSI volumes. HA. Mirroring. The comforting illusion of enterprise software making hard decisions for me.
Now I had:
- Ceph for VM and container disks
- StarWind for bulk HA storage
- Synology for… backups? emotional support?
At this point, my storage diagram stopped fitting on one screen.
But the real question wasn't whether the storage replicated. It was:
**How do I actually use this without everything being duct tape?**
---
## The IP Problem (a.k.a. "Why Can't This Just Behave Like a NAS?")
Here's the thing Synology absolutely nails:
You mount it.
It has an IP.
If something breaks, it fails over.
Your mounts don't care.
That's the gold standard. That's the bar.
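And just to show how low that bar is in practice, here's all it takes on a client. The IP and export path below are made-up placeholders, not my actual layout:

```bash
# Mount the Synology by its one stable address (values are examples).
# With Synology HA, that IP is a virtual one: if the pair fails over,
# the address moves with it and this mount never notices.
sudo mkdir -p /mnt/media
sudo mount -t nfs -o vers=4.1 192.168.10.50:/volume1/media /mnt/media

# Or, for keeps, one line in /etc/fstab:
# 192.168.10.50:/volume1/media  /mnt/media  nfs  vers=4.1,_netdev  0  0
```

One IP, one mount, zero philosophy.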
What I wanted was Synology HA behavior, but backed by my shiny VSAN storage.
What I got instead was a long stare into the abyss of clustered filesystems, volume managers, and "technically possible" solutions.
Because block storage is easy.
Shared filesystem semantics are not.
iSCSI gives you a block device.
But only one thing gets to write to it unless you want chaos.
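To be concrete about what iSCSI actually hands you, here's roughly what logging in from one node looks like; the portal address and target name are invented for illustration:

```bash
# Discover and log in to a (hypothetical) StarWind target from this node.
iscsiadm -m discovery -t sendtargets -p 10.10.10.21:3260
iscsiadm -m node -T iqn.2008-08.com.starwindsoftware:bulk-ha \
    -p 10.10.10.21:3260 --login

# Congratulations: a new raw disk appears. Nothing here coordinates writers.
lsblk
```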
So now the questions started piling up:
- Do I pass the iSCSI volume directly to containers?
- Do I wrap it in LVM?
- Do I abandon Proxmox LVs entirely?
- Do I need GFS2? OCFS2? Some other acronym that smells like pain?
Every answer came with a warning label.
---
## "Just Use a Clustered Filesystem" (Cool Cool Cool)
On paper, the solution looks straightforward:
1. Present the iSCSI LUN to multiple nodes
2. Put a clustered filesystem on top
3. Mount it everywhere
4. Bind-mount into containers
5. Profit
In reality, every step is a choice between:
- Fragility
- Performance
- Complexity
Pick two. Sometimes one.
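To make steps 3 and 4 from the list above concrete: assuming the clustered filesystem is already mounted at the same path on every node, wiring it into an LXC container is the easy part. The container ID and paths here are placeholders:

```bash
# Bind-mount the shared path into container 101 (ID and paths are examples).
# Assumes /mnt/bulk is the clustered filesystem, mounted on every node.
pct set 101 -mp0 /mnt/bulk,mp=/srv/media

# Unprivileged containers will likely also need uid/gid mapping (or relaxed
# ownership on the share) before anything inside can actually write.
```

The command was never the problem. The problem is everything underneath it agreeing not to eat the filesystem.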
Clustered filesystems are amazing when they're designed into the stack. They are less amazing when bolted on because you want Plex to survive a node reboot.
And performance? Sure, I don't need insane throughput. I'm not chasing benchmarks. But I'd like to hit 200 MB/s without feeling like I'm tempting fate.
At some point, I realized I was trying to build a NAS… out of things that are explicitly not NASes.
Again: I did this to myself.
---
## CephFS Enters the Chat
Every storage journey eventually loops back to Ceph. It's like storage gravity.
CephFS sounds perfect:
- Shared filesystem
- Native to the cluster
- Active on multiple nodes
- No weird fencing rituals
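And part of the appeal is how little ceremony it takes on a hyper-converged Proxmox cluster. Assuming Ceph is already running under pveceph, something like this stands up the filesystem and registers it as storage (the name is a made-up example):

```bash
# CephFS needs at least one metadata server, then the filesystem itself.
pveceph mds create
pveceph fs create --name bulkfs --add-storage   # 'bulkfs' is a placeholder
```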
But CephFS comes with its own reality checks:
- HDD-backed pools don't love metadata workloads
- Two-node anything is… spiritually unsafe
- Replication math gets uncomfortable fast
Yes, Ceph has checksums. Yes, it can detect corruption. No, that doesn't magically make low-replica pools feel good at night.
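The replication math, by the way, is blunt. With the usual size=3 a pool keeps three copies of everything, so usable space is roughly raw capacity divided by three; the tempting size=2 with min_size=1 escape hatch is exactly the configuration that doesn't feel good at night. The pool name below is an example:

```bash
# Inspect and set replication on a (hypothetical) pool called 'bulkfs_data'.
ceph osd pool get bulkfs_data size        # copies kept of every object
ceph osd pool get bulkfs_data min_size    # copies required before accepting I/O
ceph osd pool set bulkfs_data size 3
ceph osd pool set bulkfs_data min_size 2

# The uncomfortable summary lives here: usable is roughly raw / size.
ceph df
```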
Ceph is happiest when you give it:
- Multiple nodes
- Multiple OSDs
- Fast networks
- Patience
It is less happy when you ask it to behave like a cozy little NAS.
---
## Meanwhile, the Synology Is Just Sitting There
This is the part that really messes with you.
The Synology doesn't complain.
It doesn't ask philosophical questions.
It doesn't need a whiteboard.
It just:
- Serves NFS
- Keeps its IP
- Replicates when asked
- Exists
At some point, a very reasonable voice in my head said:
"Why are you doing this?"
Why move bulk storage into the cluster when the cluster already has HA where it matters?
Why force Ceph or VSAN to become a NAS when an actual NAS is already doing NAS things extremely well?
The answer, unfortunately, was curiosity. And pride. And the belief that surely there was a clean way to unify everything.
There wasn't. Not really.
---
## What This Turned Into (A Storage Truce)
I didn't end up with a single perfect solution. I ended up with a compromise. A ceasefire.
- **Ceph** stays where it shines: VM disks, containers, things that need fast recovery
- **StarWind** proved block-level HA works, but adds layers I don't actually need for media
- **Synology** keeps doing bulk storage, because it is embarrassingly good at it
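In Proxmox terms, the truce is almost embarrassingly small: one RBD storage entry for the fast stuff, one NFS entry pointing at the Synology for the rest. Storage IDs, pool, server, and export below are all placeholders:

```bash
# Ceph RBD for VM and container disks (storage ID and pool are examples).
pvesm add rbd ceph-vm --pool vm-pool --content images,rootdir

# Synology NFS for bulk data and backups (server and export are examples).
pvesm add nfs synology-bulk --server 192.168.10.50 \
    --export /volume1/media --content backup,iso,vztmpl
```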
And the big realization was this:
**High availability doesn't mean everything must be clustered.**
Some things just need to be reliable.
Some things just need to come back online quickly.
Some things are better boring.
Trying to make all storage behave the same is how you end up with three storage platforms and a blog post like this.
---
## The Takeaway (Before I Buy More Hardware)
If you're chasing HA bulk storage inside a virtualization cluster, ask yourself one question first:
**Do I want this to be elegant—or do I want it to work?**
Because elegance usually costs complexity.
And complexity always sends an invoice later.
I didn't accidentally try every storage idea at once. I just kept saying "this will simplify things" and believing myself every time.
It didn't. But I learned a lot.
Mostly that sometimes, the best architecture decision is letting each tool do the thing it's boringly good at—and resisting the urge to unify everything just because you can.
Now if you'll excuse me, I need to stop looking at storage diagrams before I add object storage "just to see how it feels."
## Related Resources
- [Proxmox Ceph Documentation](https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster)
- [StarWind VSAN for Proxmox](https://www.starwindsoftware.com/starwind-virtual-san)
- [Synology High Availability](https://www.synology.com/en-us/dsm/feature/high_availability)
- [CephFS Best Practices](https://docs.ceph.com/en/latest/cephfs/best-practices/)