Hi, I’m currently planning a self-hosted virtualization and containerization environment on one or more Linux hosts. Incus looks promising, and the instances will be mainly Linux. I’m not sure how to solve the shared storage issue - mounting the same filesystem from more than one system at a time is a bad idea. Maybe you have some hints? I’d appreciate that. :)

The OS of an instance can sit on an exclusively used volume; that part is solved for me (store it in a local storage pool).

But how should I organize shared read/write storage? It should be accessible by multiple instances at the same time and easily usable as a mount point. Storage replication among multiple hosts is optional - there is always rsync. Is NFS still the way to go, or are there nicer options? Is there an overlayfs-like approach that could resolve concurrent writes?
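
For context, the part I consider solved looks roughly like this - a minimal sketch, assuming a storage pool named `local` already exists and using a Debian image (all names are just placeholders):

```
# Each instance gets its own root volume in the local pool
incus launch images:debian/12 app1 --storage local
incus launch images:debian/12 app2 --vm --storage local

# The per-instance root volumes show up in that pool
incus storage volume list local
```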

  • hendrik@palaver.p3x.de · 2 days ago

    There are a bunch of options available, and I think the exact layout depends on the exact use case. GlusterFS, Ceph, (S3-compatible) object storage, plain NFS, database replication… those are all for different use cases, from VM failover, to decoupling storage from a service, to something like Jellyfin sharing its media library with another service, to horizontal scaling of services. I don’t think there is a single answer to all of that.

    • hsdkfr734r@feddit.nlOP · 2 days ago

      Thanks. I will take a closer look into GlusterFS and Ceph.

      The use case is general file storage (text, documents, images, audio and video files). I’d like to share this data among multiple instances without storing it multiple times - that would be bad for my bank account, and I don’t want to keep track of several redundant file sets. So: decoupling data from services.

      Service scaling isn’t a requirement. It’s more about different services (some as containers, some as VMs) which should work on the same files, sometimes concurrently.

      The Jellyfin/*arr approach works well and is easy to set up if all containers access the same Docker volume (see the sketch at the end of this comment). But it doesn’t work once VMs (KVM) or other containers (LXC) come into play, so I can’t use it in this context.

      Failover is nice to have, but there is more to it than just data replication between hosts. It’s not a priority for me right now.

      Database replication isn’t required.
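
      For illustration, the Docker pattern I mean is roughly this - a sketch with example image and volume names, not my actual setup:

      ```
      # One named volume, mounted read/write by several containers at once
      docker volume create media

      docker run -d --name jellyfin -v media:/media jellyfin/jellyfin
      docker run -d --name sonarr -v media:/media lscr.io/linuxserver/sonarr
      ```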

      • speculate7383@lemmy.today · 2 days ago

        GlusterFS is (was) really cool, but I would not set up a new instance. It used to have significant support and development from Red Hat, but they decided to halt their work on it and focus on Ceph.

        GlusterFS still gets a few slow updates from other developers, but I would only count on those being fixes for existing installations.

      • hendrik@palaver.p3x.de · 2 days ago (edited)

        Just be warned that those two are relatively complicated pieces of tech. They’re meant for setting up a distributed storage network, including things like replication and load balancing - clusters with failover to a different datacenter and such. If you just want to access the same storage on one server from different instances, they’re likely way too complicated for your needs. (And more complexity generally means more maintenance and more failure modes.)

        • hsdkfr734r@feddit.nlOP · 2 days ago

          Fair point. I don’t really need the distributed storage part for my scenario - not right now, anyway.

          Maybe I’ll start with NFS and explore Gluster once distributed storage is actually needed. It looks like it could be a drop-in replacement for NFSv3. Since it doesn’t access the block devices directly, I could still use the respective filesystem’s tool set (i.e. ext4 or btrfs) for maintenance tasks.
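
          For the NFS route, I picture something like this - a rough sketch with placeholder paths, hostname and subnet:

          ```
          # On the storage host (assumes nfs-kernel-server is installed):
          # export a directory read/write to the instances' subnet
          echo '/srv/shared 10.0.0.0/24(rw,sync,no_subtree_check)' >> /etc/exports
          exportfs -ra

          # In each VM or container: mount the share
          mount -t nfs storagehost:/srv/shared /mnt/shared
          ```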

  • jabjoe@feddit.uk · 1 day ago

    VirtioFS. You can share a directory from the host with any number of VMs that way. Libvirt is good; it even has a nice GUI in virt-manager.
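
    For example, once the share is defined on the host (in virt-manager roughly: Add Hardware → Filesystem with the virtiofs driver - the tag and path below are placeholders), the guest just mounts it by its tag:

    ```
    # Inside the VM: mount the host share by its tag
    mount -t virtiofs sharedfs /mnt/shared

    # Or persistently via /etc/fstab
    echo 'sharedfs /mnt/shared virtiofs defaults 0 0' >> /etc/fstab
    ```

    As far as I know, Incus does something similar under the hood when you attach a host directory to a VM as a disk device.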

    • hsdkfr734r@feddit.nlOP · 2 days ago

      Thanks for asking. I left that detail out: an SSD attached to the virtualization host via SATA. I plan to use either an LVM2 volume group or Btrfs with subvolumes to provide the storage pool to Incus/LXC.
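
      As a sketch, the pool would be created roughly like one of these - device, volume group and pool names are placeholders:

      ```
      # Option A: Btrfs-backed pool on the SSD
      incus storage create local btrfs source=/dev/sda2

      # Option B: LVM-backed pool on an existing volume group
      incus storage create local lvm source=vg_ssd
      ```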