Curious to know what the experiences are for those who are sticking to bare metal. Would like to better understand what keeps such admins from migrating to containers, Docker, Podman, Virtual Machines, etc. What keeps you on bare metal in 2025?

  • SmokeyDope@lemmy.world · 4 points · edited · 5 hours ago

    I'm a hobbyist who just learned how to self-host my own static website on a spare laptop over the summer. I went with what I knew and was comfortable with, which is a fresh install of Linux and installing from the apt package manager.

    As I'm getting more serious I'm starting to take another look at Docker. Unfortunately my OS package manager only has old, outdated versions of Docker, so I may need to reinstall with something like Ubuntu/Debian LTS server, or something with more cutting-edge software in its repos. I don't care much for building from scratch and navigating dependency roulette.
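
    That said, it looks like Docker publishes its own apt repository, so I could probably get a current version without reinstalling the OS. Roughly, going off docs.docker.com (assuming an Ubuntu/Debian-family install; I'd double-check the exact steps for my release):

      sudo apt-get update && sudo apt-get install -y ca-certificates curl
      sudo install -m 0755 -d /etc/apt/keyrings
      sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
      echo "deb [signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo $VERSION_CODENAME) stable" | sudo tee /etc/apt/sources.list.d/docker.list
      sudo apt-get update && sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin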

          • TeddE@lemmy.world · 1 point · 2 hours ago

            They can, but if their current setup meets their needs, why? There ain't nothing wrong with having a few simple spare laptops, each an isolated environment for a few simple home server tasks.

            Don’t get me wrong - I too advocate for docker, particularly on new builds, or as a relatively turnkey solution to get started for novice friends, but the best setup is the one that works, and they sound like they got theirs where they want it.

  • sem@lemmy.blahaj.zone · 6 points · 7 hours ago

    For me, the learning curve of containers doesn't justify the benefits they're supposed to provide.

    • billwashere@lemmy.world · 6 points · 6 hours ago

      I really thought the same thing. But it truly is super easy, at least for plain containers like Docker. Not Kubernetes; that shit is hard to wrap your head around.

      Plus if you screw up and break one service, you don't have to rebuild your whole machine.

      • dogs0n@sh.itjust.works · 1 point · 2 hours ago

        100% agree. My server has pretty much nothing except Docker installed on it, and every service I run is in a container.

        Setting up a new service is mostly zero-risk, and apps can't bog down my main filesystem with random log files, configs, etc. that feel impossible to completely remove.

        I also know that if for any reason my server were to explode, all I would have to do is pull my compose files from the cloud and docker compose up everything, and I'd be exactly where I left off at my last backup point.
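
        The whole recovery really is just a couple of commands. A rough sketch with a made-up repo URL and stack name (the data itself still comes from whatever volumes or bind mounts I restore):

          git clone https://example.com/me/compose-files.git ~/stacks   # or pull them from any cloud copy
          cd ~/stacks/jellyfin                                          # hypothetical stack directory
          docker compose pull                                           # fetch the images again
          docker compose up -d                                          # recreate the containers as they were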

  • billwashere@lemmy.world · 5 points · 6 hours ago

    Ok, I'm arguing for containers/VMs, and granted I do this for a living… I'm a systems architect, so I build VMs and containers pretty much all the time at work… but having just one sorta beefy box at home that I can run lots of different things on is the way to go. Plus I like to tinker with things, so when I screw something up, I can get back to a known state so much easier.

    Just having all these things sandboxed makes it SO much easier.

  • yessikg@fedia.io · 4 points · 9 hours ago

    It's so simple that it takes so much less time. One day I may move to Podman, but I need to have the time to learn it. I host Jellyfin.

  • Billegh@lemmy.world · 1 point · 7 hours ago

    It depends on the service and the desired level of the stack.

    I generally run services directly on things like a Raspberry Pi because VMs and containers add complexity that isn't really warranted for the task.

    At work, I run services in docker in VMs because the benefits far outweigh the complexity.

  • Evotech@lemmy.world · 3 up / 2 down · 7 hours ago

    It’s just another system to maintain, another link in the chain that can fail.

    I run all my services on my personal gaming pc.

  • HiTekRedNek@lemmy.world · 5 points · 11 hours ago

    In my own experience, certain things should always be on their own dedicated machines.

    My primary router/firewall is on bare metal for this very reason.

    I do not want to worry about my home network being completely unusable by the rest of my family because I decided to tweak something on the server.

    I could quite easily run OPNsense in a VM, and I do that, too. I run Proxmox, and have OPNsense installed and configured to at least provide connectivity for most devices. (Long story short: I have several subnets in my home network, but my VM OPNsense setup does not, as I only had one extra interface on that equipment, so only devices on the primary network would work.)

    And tbh, that only exists because I did have a router die and installed OPNsense on my Proxmox server temporarily while awaiting new-to-me equipment.

    I didn’t see a point in removing it. So it’s there, just not automatically started.

    • AA5B@lemmy.world · 2 points · edited · 10 hours ago

      Same here. In particular I like small, cheap hardware to act as appliances, and I have several Raspberry Pis.

      My example is Home Assistant. Deploying it on its own hardware means an officially supported management layer, which makes my life easier. It is actually running containers, but I don't have to deal with that. It also needs to be always available, so I use efficient, "right-sized" hardware, and it keeps working regardless of whether I'm futzing with my "lab".

      • Damage@feddit.it · 1 point · 10 hours ago

        My example is Home Assistant. Deploying it on its own hardware means an officially supported management layer, which makes my life easier.

        If you’re talking about backups and updates for addons and core, that works on VMs as well.

        • AA5B@lemmy.world · 1 point · 6 hours ago

          For my use case, I'm continually fiddling with my VM config. That's my playground, not just the services hosted there. I want Home Assistant to always be available, so it can't live there.

          I suppose I could have a "production" VM server that I keep stable, separate from my "dev" VM server, but that would be more effort. Maybe it's simply that I don't have many services I want to treat as production, so dedicated physical hardware is the cheapest and easiest option.

  • SailorFuzz@lemmy.world · 2 points · 9 hours ago

    Mainly that I don't understand how to use containers… or VMs that well… I have an old MyCloud NAS and a little puck PC that I wanted to run simple QoL services on… Home Assistant, Jellyfin, etc…

    I got Proxmox installed on it, I can access it… I don't know what the fuck I'm doing… There was a website that let you just run shell scripts to install a lot of things… but now none of those work because it says my version of Proxmox is wrong (when it's not?)…

    And at least VMs are easy(ish) to understand. Fake computer with OS… easy. I’ve built PCs before, I get it… Containers just never want to work, or I don’t understand wtf to do to make them work.

    I wanted to run Zulip or Rocket.Chat for internal messaging around the house (wife and I both work at home, kid does home/virtual school)… wanted to use a container because a service that simple doesn't feel like it needs a whole VM… but it won't work…

    • ChapulinColorado@lemmy.world · 1 point · 7 hours ago

      I would give Docker Compose a try instead. I found Proxmox to be too much when a simple YAML file (that can be checked into a repo) can do the job.

      Make a note when people say things can be improved (secrets/passwords, rootless/Podman, backups, etc.) and come back to those later.

      Just don't expose things to the internet until you understand the risks, don't check secrets into a public git repo, and go from there. It is a lot more manageable and feels like a hobby, versus feeling like I'm still at work trying to get high availability, concurrency, and all this other stuff that does not matter for a home setup.
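
      On the secrets point, one easy habit is an env file that Compose reads automatically and that never gets committed. A minimal sketch, where DB_PASSWORD is a made-up variable your docker-compose.yml would reference as ${DB_PASSWORD}:

        echo ".env" >> .gitignore                # keep the secrets file out of the repo
        printf 'DB_PASSWORD=change-me\n' > .env  # the real value lives only on the server
        docker compose up -d                     # compose substitutes ${DB_PASSWORD} from .env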

      • Lka1988@lemmy.dbzer0.com · 1 point · edited · 7 hours ago

        I would give Docker Compose a try instead. I found Proxmox to be too much when a simple YAML file (that can be checked into a repo) can do the job.

        Proxmox and Docker serve different purposes. They aren’t mutually exclusive. I have 4 separate VMs in my Proxmox cluster dedicated specifically to Docker; all running Dockge, too, so the stacks can all be managed from one interface.

        • ChapulinColorado@lemmy.world · 1 point · 6 hours ago

          I get that, but the services listed in the other comment run just fine in Docker with less hassle by throwing in some bind mounts.

          The four VMs with dedicated Dockge instances are exactly the kind of thing I had in mind for people who want to avoid something that sounds more like work than a hobby when starting out. Building the knowledge takes time, and each extra product introduced reduces the likelihood of it getting finished anytime soon.

          • Lka1988@lemmy.dbzer0.com · 2 points · edited · 4 hours ago

            Fair point. I’m 12 years into my own self-hosting journey, I guess it’s easy to forget that haha.

            When I started dicking around with Docker, I initially used Portainer for a while, but that just had way too much going on and the licensing was confusing. Dockge is way easier to deal with, and stupid simple to set up.

  • zod000@lemmy.dbzer0.com · 11 up / 1 down · edited · 14 hours ago

    Why would I want to add overhead and complexity to my system when I don't need to? I can totally see legitimate use cases for Docker, and for work purposes I use VMs constantly. I just don't see a benefit to doing so at home.

  • splendoruranium@infosec.pub · 11 points · 16 hours ago

    Curious to know what the experiences are for those who are sticking to bare metal. Would like to better understand what keeps such admins from migrating to containers, Docker, Podman, Virtual Machines, etc. What keeps you on bare metal in 2025?

    If it ain't broke, don't fix it 🤷

  • medem@lemmy.wtf · 4 points · 14 hours ago

    The fact that I bought all my machines used (and mostly on sale), and that not one of them is general-purpose; id est, I bought each piece of hardware with a (more or less) concrete idea of what its use case would be. For example, my machine acting as a file server is way bigger and faster than my desktop, and I have a 20-year-old machine with very modest specs whose only purpose is being a dumb client for all the bigger servers. I develop programs on one machine and surf the internet and watch videos on the other. I have no use case for VMs besides the Logical Domains I set up on one of my SPARC hosts.

  • fubarx@lemmy.world · 19 up / 2 down · 23 hours ago

    Have done it both ways. Will never go back to bare metal. Dependency hell forced multiple clean installs, down to the bootloader.

    The only constant is change.

  • sepi@piefed.social · 59 up / 3 down · 1 day ago

    “What is stopping you from” <- this is a loaded question.

    We’ve been hosting stuff long before docker existed. Docker isn’t necessary. It is helpful sometimes, and even useful in some cases, but it is not a requirement.

    I had no problems with dependencies, config, etc because I am familiar with just running stuff on servers across multiple OSs. I am used to the workflow. I am also used to docker and k8s, mind you - I’ve even worked at a company that made k8s controllers + operators, etc. I believe in the right tool for the right job, where “right” varies on a case-by-case basis.

    tl;dr: Docker is not an absolute necessity, and your phrasing makes it seem like it's the only way of self-hosting you are comfy with. People are and have been comfy with a ton of other things for a long time.

    • kiol@lemmy.world (OP) · 19 up / 3 down · 1 day ago

      Question is totally on purpose, so that you’ll fill in what it means to you. The intention is to get responses from people who are not using containers, that is all. Thank you for responding!

  • atzanteol@sh.itjust.works · 100 up / 4 down · 1 day ago

    Containers run on "bare metal" in exactly the same way other processes on your system do. You can even see them in your process list FFS. They're just running in different cgroups that limit access to resources.
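
    You can check for yourself. A quick sketch, with "web" and the nginx image as placeholders:

      docker run -d --name web nginx                 # start a container
      pid=$(docker inspect -f '{{.State.Pid}}' web)  # ask Docker for the container's main PID
      ps -fp "$pid"                                  # there it is, in the host's process list
      cat /proc/$pid/cgroup                          # ...just parked in its own set of cgroups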

    Yes, I’ll die on this hill.

    • sylver_dragon@lemmy.world · 30 points · 1 day ago

      But, but, docker, kubernetes, hyper-scale convergence, and other buzzwords from the 2010s! These fancy words can't just mean resource and namespace isolation!

      In all seriousness, the isolation provided by containers is significant enough that administering containers is different from running everything in the same OS. That's different in a good way, though; I don't miss the bad old days of everything on a single server in the same space. Anyone else remember the joys of Windows Small Business Server? Let's run Active Directory, Exchange and MSSQL on the same box. No way that will lead to prob… oh shit, the RAM is on fire.

      • AtariDump@lemmy.world · 2 points · edited · 5 hours ago

        …oh shit, the RAM is on fire.

        The RAM. The RAM. The 🐏 is on fire. We don’t need no water let the mothefuxker burn.

        Burn mothercucker, burn.

        (Thanks phone for the spelling mistakes that I’m leaving).

      • sugar_in_your_tea@sh.itjust.works · 9 points · 1 day ago

        kubernetes

        Kubernetes isn't just resource isolation; it encourages splitting services across hardware in a cluster. So you'll get more latency than VMs, but you get to scale the hardware much more easily.
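
        Roughly, with a placeholder image (exact flags vary a bit by kubectl version):

          kubectl create deployment web --image=nginx --replicas=3   # the scheduler spreads pods across nodes
          kubectl scale deployment web --replicas=10                 # scaling out is one command
          kubectl get pods -o wide                                   # shows which node each replica landed on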

        Those terms do mean something, but they’re a lot simpler than execs claim they are.

        • mesa@piefed.social · 2 points · edited · 8 hours ago

          I love using it at work. It's a great tool to get everything up and running, kinda like Ansible. Paired with containerization, it can make applications more "standard" and easy to spin back up.

          That being said, for a home server it feels like overkill. I don't need my resources spread out so far, and I don't want to keep updating my Kubernetes and container setup with each new iteration. It's just not fun (to me).

      • atzanteol@sh.itjust.works · 1 point · 1 day ago

        Oh for sure - containers are fantastic. Even if you're just using them as glorified chroot jails, they provide a ton of benefit.