Hi, I’m currently planning a self-hosted virtualization and containerization environment on one or more Linux hosts. Incus looks promising, and the instances will mostly be Linux. What I’m not sure about is shared storage - mounting the same (non-cluster) filesystem on more than one system at a time is a recipe for corruption. Maybe you have some hints? I’d appreciate that. :)
The OS of an instance can sit on an exclusively used volume, that is solved for me (store it in a local storage pool).
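For context, here’s roughly what I mean by the local pool part (pool name, driver and image are just examples, not a recommendation):

```shell
# Create a local storage pool backed by ZFS
# (dir, btrfs or lvm would work as drivers too)
incus storage create local-pool zfs

# Launch an instance whose root disk lives on that pool
incus launch images:debian/12 web1 --storage local-pool
```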
But how should I organize shared read/write storage? It should be accessed by multiple instances at the same time. It should be easily usable as a mount point. Storage replication among multiple hosts is optional - there is rsync. Is NFS still the way to go or are there nicer options? Is there an overlayfs which could resolve concurrent writes?
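To make the question concrete, the NFS variant I have in mind would look roughly like this (hostnames, network range and paths are placeholders):

```shell
# On the storage host: export a directory read/write to the instance network.
# Entry in /etc/exports:
#   /srv/shared 10.0.0.0/24(rw,sync,no_subtree_check)
exportfs -ra

# Inside each VM or container that needs the share
mount -t nfs storage-host:/srv/shared /mnt/shared
```

For unprivileged containers, an alternative might be to mount the share on the host and pass it through as a disk device, e.g. `incus config device add <instance> shared disk source=/mnt/shared path=/mnt/shared`.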
Just be warned that those two are relatively complex pieces of tech. They’re meant to build a distributed storage network, including things like replication and load balancing - clusters with failover to a different datacenter and such. If you just want access to the same storage from different instances on one server, that’s likely way too complicated for your use case. (And more complexity generally means more maintenance and more failure modes.)
Fair point. I don’t really need the distributed storage part for my scenario. Not right now.
Maybe I’ll start with NFS and explore Gluster once storage distribution is actually needed. It looks like it could be a drop-in replacement for NFSv3. Since Gluster doesn’t access the block devices directly, I could still use the respective filesystem’s tool set (e.g. ext4 or btrfs) for maintenance tasks.
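Going by the docs, the Gluster side would later look something like this (hosts and brick paths are placeholders; the bricks sit on ordinary local filesystems, which is why the usual fs tools keep working):

```shell
# On the Gluster hosts: create and start a 2-way replicated volume
# from bricks that live on ordinary ext4/btrfs/xfs filesystems
gluster volume create shared replica 2 \
    host1:/data/bricks/shared host2:/data/bricks/shared
gluster volume start shared

# On the clients: mount via the FUSE client (or NFSv3, where enabled)
mount -t glusterfs host1:/shared /mnt/shared
```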