I have been wanting to self-host recently. I have an old laptop sitting around, a Toshiba Satellite M100-221, though it only has 4 GB of RAM, and I don’t know what a good starting point is for a home lab OS. I discovered YunoHost but heard mixed opinions about it when searching, so I would like Lemmy’s opinion on a good OS for a beginner wanting to start a home lab. I would prefer a simple solution like YunoHost, but I’d like it to be configurable; it’s fine if it needs a bit of tinkering.

  • sugar_in_your_tea@sh.itjust.works · 2 days ago

    If you just want LXCs, use Docker or Podman on whatever Linux distro you’re familiar with. If you get extra hardware, it’s not hard to have one node be the trunk and reverse proxy to the other nodes (it’s like 5 lines of config in Caddy or HAProxy; see the sketch below).
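
    As a rough sketch of that config (the hostnames, IPs, and ports below are placeholders, not from this thread), the Caddyfile on the trunk node could look like:

        # Caddyfile on the node that receives all incoming traffic
        media.example.com {
            reverse_proxy 192.168.1.11:8096    # service running on another node
        }
        notes.example.com {
            reverse_proxy 192.168.1.12:8080    # a second backend node
        }

    Caddy also handles TLS certificates automatically for public hostnames, which is part of why the config stays that short.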

    If you end up wanting what Proxmox offers, it’s pretty easy to switch, but I really don’t think most people need it unless they’re going to run server grade hardware (i.e. will run multiple VMs). If you’re just running a few services, it’s overkill.

        • drkt@scribe.disroot.org · 2 days ago (edited)

          They use some of the same kernel features, but they are not the same and they are not comparable. LXCs are used to host a whole separate system that shares a kernel with its host; Docker is used to bundle a piece of software’s external requirements and configs for ease of downstream setup. Docker is portable; LXCs are much less so.
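
          To make that concrete (the distro release, container name, and image below are just examples), compare how each is typically used:

              # LXC: boot a whole Debian userland that shares the host's kernel
              lxc-create -n web01 -t download -- -d debian -r bookworm -a amd64
              lxc-start -n web01

              # Docker: run a single application with its dependencies baked into the image
              docker run -d --name nextcloud -p 8080:80 nextcloud

          The LXC is then administered like any other machine (users, services, package upgrades), while the Docker container is disposable and rebuilt from its image.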

          • sugar_in_your_tea@sh.itjust.works · 2 days ago

            Sure, Docker started out as an abstraction layer on top of LXC, and both still sit on the same kernel primitives (namespaces and cgroups). It’s largely the same tech underneath, just a different way of interacting with it.

    • Windex007@lemmy.world · 2 days ago

      If you’re just running a few services, and will only ever be running a few services, I agree with you.

      The additional burden of starting with Proxmox (which is really just Debian underneath) is minimal, and it sets you up for the inevitable deluge of additional services you’ll end up wanting to run, in a way that’s extensible and trivially snapshottable.
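
      For example, snapshotting a Proxmox VM before a risky change is a one-liner (the VM ID and snapshot name here are placeholders):

          qm snapshot 100 pre-upgrade    # snapshot VM 100 before the change
          qm rollback 100 pre-upgrade    # roll back if it goes sideways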

      I was pretty bullish on “I don’t need a hypervisor” for a long time, and I regret not jumping all-in on hypervisors earlier, regardless of the services I plan to run. Is the physical machine’s purpose to run services and be headless? Hypervisor. That’s my conclusion as to what is the least work overall. I am very lazy.

      • sugar_in_your_tea@sh.itjust.works · 2 days ago

        For snapshots, you can use filesystem features like BTRFS or ZFS snapshots. If you make sure to encapsulate everything in the container, disaster recovery is as simple as putting your configs onto the new system and starting the services (pin specific versions to keep things reproducible).
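
        For instance (the pool, dataset, and subvolume names here are placeholders):

            # ZFS: snapshot the dataset holding service data, roll back if needed
            zfs snapshot tank/srv@pre-upgrade
            zfs rollback tank/srv@pre-upgrade

            # BTRFS: read-only snapshot of the equivalent subvolume
            btrfs subvolume snapshot -r /srv /srv/.snapshots/pre-upgrade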

        I think that’s also really lazy; it’s just a different type of lazy from virtualization.

        My main issue with virtualization is maintenance. Most likely you’re using system dependencies inside each VM, and if you upgrade the system, there’s a very real chance of breakage. If you use containers, you can typically upgrade the host without breaking the containers, and you can upgrade containers without touching the host. Upgrades become a lot less scary because I have fairly fine-grained control and can limit breakage to the part I’m touching, and I get all of that with minimal resource overhead (with VMs, each guest needs a whole base system; containers don’t).
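
        As a sketch of that pinning idea (the service, tag, and paths are hypothetical), a compose file might look like:

            # docker-compose.yml: the host OS can be upgraded without touching this,
            # and the image only changes when you deliberately bump the tag
            services:
              nextcloud:
                image: nextcloud:29.0.4
                ports:
                  - "8080:80"
                volumes:
                  - ./nextcloud-data:/var/www/html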

        Obviously, use what works for you; I just think jumping straight to Proxmox is a bit overwhelming for a new user compared to a general-purpose Linux distro.