It’s new homelab time. And with that, potentially new OS time too.

I’m currently very happy with Debian and Docker. The only issue is that I’m brand new to data redundancy. I have a 2-bay NAS I’ll use, and I want the two HDDs in RAID 1.

Now, I could definitely just use ZFS or BTRFS with Debian and keep using Docker exactly as I do now.

Or I could use a dedicated NAS OS. That would help with the RAID part of this, but Docker is a requirement.

Any recommendations?

      • schizo@forum.uncomfortable.business

        Yeah, that’s what he means.

        I’m doing kind of the same thing with my NAS: md RAID 1 for the SSDs, but only SnapRAID for the big data drives (mostly because I don’t really care if I have to re-download my Linux ISO collection, so SnapRAID plus mergerfs is sufficient for that data).
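
        Roughly what that layout looks like, as a sketch (all paths and disk names here are illustrative, not my actual config):

            # /etc/snapraid.conf -- one parity disk protecting two data disks
            parity /mnt/parity1/snapraid.parity
            content /var/snapraid/snapraid.content
            content /mnt/disk1/.snapraid.content
            data d1 /mnt/disk1
            data d2 /mnt/disk2

            # /etc/fstab -- mergerfs pools the data disks into a single mount
            /mnt/disk1:/mnt/disk2  /mnt/storage  fuse.mergerfs  cache.files=off,category.create=mfs,moveonenospc=true  0  0

            # then run periodically via cron or a systemd timer
            snapraid sync && snapraid scrub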

        Also using Ubuntu instead of Debian, but that’s mostly because the box was first built six years ago; I’d 100% go with Debian if I were doing it now.

      • hendrik@palaver.p3x.de

        Yes, as the other people pointed out, that’s what I mean: the standard Linux software RAID (also called md RAID).

        It’s proven, battle-tested, pretty robust, and you don’t rely on any vendor-specific formats or on any particular hardware. The main point is to keep it simple. You could use BTRFS or ZFS or all kinds of things, but that only introduces additional complexity and points of failure, and it has no benefit over a plain mirror (which is what RAID 1 is) when we’re talking about just two devices. At least it has served me well in the past, unlike cheap hardware RAID controllers, and also BTRFS, which let me down once. A lot of development has gone into BTRFS since then and the situation may have changed, but mdraid is reliable either way.
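
        For reference, a plain mdadm mirror on Debian is only a few commands; a rough sketch (device names are examples, check yours with lsblk first):

            # create a two-disk RAID 1 array and put a filesystem on it
            mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
            mkfs.ext4 /dev/md0

            # persist the array definition so it assembles at boot
            mdadm --detail --scan >> /etc/mdadm/mdadm.conf
            update-initramfs -u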

          • hendrik@palaver.p3x.de

            That is indeed a good question. Is this something RAID even bothers with at levels 0 and 1? I think in this case it’s the filesystem’s job to take care of that. But you should probably let the periodic scrubbing task run, say once per week. You could also run into issues other than bitrot, for example bad sectors, or one of the HDDs slowly degrading.

            In the end, I don’t think a RAID 1 can do much about bitrot and other RAID woes; there are no checksums or anything to correct for that, so you’d probably need some other technology. But it’s probably the same for a ZFS mirror, and everything better than that needs more than two HDDs.

            • Findmysec@infosec.pub

              I think ZFS does some advanced stuff that makes it better than just relying on hardware checksums (which have been shown to be not so great).

    • Avid Amoeba@lemmy.ca

      I’d suggest lvmraid, which is just mdraid wrapped in LVM. It’s a tad simpler to set up, and you get the flexibility of LVM plus the ability to convert from linear to mirror and back as needed. That is, you could do a standard install on LVM, then add another disk to the volume group and convert the volumes to RAID 1. It’s all documented under man lvmraid.
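
      A rough sketch of that conversion, per lvmraid(7) (the VG/LV names vg0/data and the device names are placeholders):

          # add the second disk to the existing volume group
          pvcreate /dev/sdb
          vgextend vg0 /dev/sdb

          # convert the existing linear LV into a two-way mirror
          lvconvert --type raid1 -m 1 vg0/data

          # and back to linear later if needed
          lvconvert -m 0 vg0/data

          # check which physical devices back each leg
          lvs -a -o +segtype,devices vg0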

    • variants@possumpat.io

      I’ve been happy with Unraid. It’s super simple to use, and the Community Apps store makes it easy to find and install Docker containers.

    • BlackEco@lemmy.blackeco.com

      TrueNAS SCALE expects you to deploy Kubernetes clusters; it is unfortunately not meant for running plain Docker. You can jump through hoops to get it working, but I personally gave up and ended up running a VM on top of TrueNAS just to run Docker in it.

      I don’t know about Unraid, though, and OpenMediaVault felt a bit unpolished the last time I used it, so I can’t attest to its ZFS support.

      • Mrb2@lemmy.world

        TrueNAS SCALE is switching to Docker Compose. I found this out when the TrueCharts catalog suddenly stopped working. (more info)

      • golli@lemm.ee

        I’m currently using OpenMediaVault for my NAS and can confirm that, with the official plugin, I haven’t had any issues with my ZFS pool so far (I migrated it from TrueNAS SCALE since I didn’t like their use of Kubernetes and TrueCharts, though as someone mentioned they seem to be switching to Docker).

        Otherwise I’m happy as well, but I’m far from a power user.

  • Lemongrab@lemmy.one

    Generally, I think it is better to use a general server OS like Debian or Fedora instead of something specialized like Proxmox or Unraid. That way you can always choose the way you want to use your server instead of being channeled into running it a specific way (especially if you ever change your mind).

  • DaGeek247@fedia.io

    I run Debian with ZFS. It was really simple to set up and has been rock solid, too. As far as I can tell, all the issues I’ve had have been my own fault.

    ZFS looks like it uses a lot of RAM, but you can get away without it if you need to; it’s basically extra caching (the ARC). I was thrilled to use it as an excuse to upgrade my RAM instead.
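
    If the memory use ever becomes a problem, the cache can be capped; a sketch for a Debian/OpenZFS install (the 4 GiB figure is just an example):

        # /etc/modprobe.d/zfs.conf -- limit the ZFS ARC to 4 GiB (value in bytes)
        options zfs zfs_arc_max=4294967296

        # takes effect after update-initramfs -u and a reboot,
        # or set it live via /sys/module/zfs/parameters/zfs_arc_max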

    mdadm has a little more setup than ZFS, as far as I’m concerned. You need to set up your own scrubbing, whereas ZFS schedules its own for you. You need to add monitoring for both, though.
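
    For what it’s worth, kicking off an md check by hand looks like this (md0 is a placeholder; on Debian the mdadm package usually ships a monthly checkarray cron job as well):

        # start a consistency check on the array
        echo check > /sys/block/md0/md/sync_action

        # watch progress and any mismatches
        cat /proc/mdstat
        cat /sys/block/md0/md/mismatch_cnt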

    I’ve considered looking into the various operating systems designed for this, but they just don’t seem worth the effort of switching, to me.

  • Decronym@lemmy.decronym.xyz

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters   More Letters
    LVM             (Linux) Logical Volume Manager for filesystem mapping
    NAS             Network-Attached Storage
    RAID            Redundant Array of Independent Disks for mass storage
    SSD             Solid State Drive mass storage
    ZFS             Solaris/Linux filesystem focusing on data integrity

  • nickwitha_k (he/him)@lemmy.sdf.org

    Honestly, from your description, I’d go with Debian, likely with BTRFS. It would be better if you had 3 slots so that you could swap out a bad drive, but 2 will work.

    If you want to get adventurous, you can see about a Fedora Atomic distro.

    Previously I’ve recommended Proxmox, but I’m not sure I still can at the moment, if they haven’t fixed their kernel funkiness. Right now I’m back to libvirt.

  • dwindling7373@feddit.it

    I’m very new to the whole ordeal, but to my knowledge ZFS, and to a lesser extent BTRFS, are a bit too rigid for my setup. I’m personally looking at Debian with mergerfs and SnapRAID.

      • dwindling7373@feddit.it

        I need to throw random spare old HDDs at it; I expect failures, I expect to expand it, and I expect very different sizes between the disks.

        • someonesmall@lemmy.ml

          You can do that with ZFS. Its built-in integrity checking will automatically heal errors and tell you which drive has gone bad.
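
          For reference, that check is a scrub; a quick sketch with a placeholder pool name:

              # walk every block, verify checksums, and repair from the mirror copy where needed
              zpool scrub tank

              # per-device read/write/checksum error counters plus scrub results
              zpool status -v tank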

  • Maestro@fedia.io

    Bog-standard Debian with LVM. LVM can do RAID itself, but you could also run mdadm below LVM if you prefer. Keep it simple.

  • Avid Amoeba@lemmy.ca

    Definitely use ZFS for the data volumes in order to avoid silent data corruption. If you don’t use a separate drive for the OS, then you’ll need to look into ZFS on root.
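
    A minimal sketch of a two-disk mirror pool for the data (pool name and disk IDs are placeholders; use the stable /dev/disk/by-id/ names rather than sda/sdb):

        # create a mirrored pool from the two NAS drives
        zpool create -o ashift=12 tank mirror /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B

        # a dataset for the shared data, with lightweight compression
        zfs create -o compression=lz4 tank/data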

  • Shimitar@feddit.it

    Of course: Gentoo with mdadm! I’ve been running Linux software RAID for the last… 20 years? And I’ve never had a single issue.

    Also a big Gentoo fan, and of course use podman instead of docker :)

  • UnixWeeb@lemmy.dbzer0.com

    I was in a similar boat. Initially I ran Debian with Docker, but later on decided to check out Unraid. It’s pretty easy to get set up, and a lot of Docker containers come pre-configured, so you can just click and install. I have it notify me whenever something goes wrong, but outside of that I don’t tinker much with it.

    Only two weird things about it, though…

    1. You don’t install Unraid. Instead, you run it from a USB stick; more specifically, the USB holds a specific config that then loads everything into your RAM.

    2. Recently they redid their pricing structure, so I’m not too familiar with the changes, but you do have to pay for Unraid.

  • tritonium@midwest.social

    As long as you’re not relying on RAID as your backup. I don’t know why so many people struggle to understand this: RAID is not a backup. It’s a solution to ensure uptime in the face of a lost disk, and I’d guess most self-hosters don’t really need to be concerned with uptime. Use Borg or restic, or if you’re going to use ZFS or BTRFS, keep a completely separate drive or pool where snapshots are stored.
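
    A minimal restic sketch for that (repository path and source directory are just examples):

        # one-time: initialise a repository on a separate backup drive
        restic -r /mnt/backupdrive/restic init

        # back up the data, then prune to a rolling retention window
        restic -r /mnt/backupdrive/restic backup /srv/data
        restic -r /mnt/backupdrive/restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune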