• jordanwhite1@lemmy.world · 2 years ago

    I would document everything as I go.

    I am a hobbyist running a Proxmox server with a Docker host for my media server, a Plex host, a NAS host, and a Home Assistant host.

    I feel that if it were to break, it would take me a long time to rebuild.

    • bmarinov@lemmy.world · 2 years ago

      Ansible everything and automate as you go. It is slower, but if it’s not your first time setting something up it’s not too bad. Right now I literally couldn’t care less if the SD card on one of my Raspberry Pis dies, or if my monitoring backend needs to be reinstalled.
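      A minimal sketch of what “Ansible everything” can look like for that Raspberry Pi case; the host group, file names, and container choice here are hypothetical, not from the original post:

      ```yaml
      # playbook.yml — hypothetical: rebuild a Pi's monitoring stack from scratch
      - hosts: raspberry_pis
        become: true
        tasks:
          - name: Install Docker
            ansible.builtin.apt:
              name: docker.io
              state: present
              update_cache: true

          - name: Deploy monitoring container
            community.docker.docker_container:
              name: node-exporter
              image: prom/node-exporter:latest
              restart_policy: unless-stopped
              ports:
                - "9100:9100"
      ```

      Run with `ansible-playbook -i inventory playbook.yml`; if an SD card dies, reflash the OS, put the host back in the inventory, and rerun the playbook.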

      • Notorious@lemmy.link · 2 years ago

        IMO Ansible is overkill for my homelab. All of my Docker containers live on two servers, one remote and one at home. Both are built with Docker Compose and are backed up along with their data weekly to both servers and a third-party cloud backup. In the event one of them fails, I have two copies of the data and could have everything back up and running in under 30 minutes.
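        A sketch of that weekly backup step, assuming the compose projects live under one directory; all paths here are examples, and the off-site copy is left as a comment since the exact remote setup isn't stated:

        ```shell
        #!/bin/sh
        # Hypothetical weekly backup: archive the compose projects directory
        # (compose files + bind-mounted data) into one dated tarball.
        set -eu

        SRC="$HOME/compose"      # where the compose projects live (example path)
        DEST="$HOME/backups"     # local backup target (example path)
        STAMP=$(date +%Y-%m-%d)

        mkdir -p "$SRC" "$DEST"  # ensure paths exist for this sketch
        tar -czf "$DEST/compose-$STAMP.tar.gz" \
            -C "$(dirname "$SRC")" "$(basename "$SRC")"

        # From here the off-site copies could be pushed, e.g.:
        #   rclone copy "$DEST" remote:homelab-backups
        ```

        Hooked into cron with something like `0 3 * * 0 /path/to/backup.sh`, restoring is just untarring on the other machine and running `docker compose up -d`.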

        I also don’t like that Ansible is owned by RedHat. They’ve shown recently they have zero care for their users.

        • constantokra@lemmy.one · 2 years ago

          I didn’t realize that about Ansible. I’ve always thought it was overkill for me as well, but I figured I’d learn it eventually. Not anymore lol.

  • ThorrJo@lemmy.sdf.org · 2 years ago

    Go with used & refurb business PCs right out of the gate instead of fucking around with SBCs like the Pi.

    Go with “1-liter” aka Ultra Small Form Factor right away instead of starting with SFF. (I don’t have a permanent residence at the moment so this makes sense for me)

    • constantokra@lemmy.one · 2 years ago

      Ah, but now you have a stack of Pis to screw around with, separate from all the stuff you actually use.

  • i_lost_my_bagel@seriously.iamincredibly.gay · 2 years ago

    Not accidentally buying a server that only takes 2.5 inch hard drives. Currently I’m using some of the drives it came with, plus two WD Red drives that just sit on top of the server with SATA extension cables running down into it.

    • Wingy@lemmy.ml · 2 years ago

      I’ve had an R710 at the foot of my bed for the past 4 years and only decommissioned it a couple of months ago. I haven’t configured anything but I don’t really notice the noise. I can tell that it’s there but only when I listen for it. Different people are bothered by different sounds maybe?

    • Toribor@corndog.uk · 2 years ago

      Converting my environment to be mostly containerized was a bit of a slow process that taught me a lot, but now I can try out new applications and configurations at such an accelerated rate it’s crazy. Once I got the hang of Docker (and Ansible) it became so easy to try new things, tear them down and try again. Moving services around, backing up or restoring data is way easier.

      I can’t overstate how impactful containerization has been to my self hosting workflow.

    • spez_@lemmy.world (banned from community) · 2 years ago

      I’m mostly Docker. I want to self-host Lemmy, but there’s no one-click Docker Compose / Portainer installer yet (for SWAG / Nginx Proxy Manager), so I won’t until it’s ready.

    • howrar@lemmy.ca · 2 years ago

      Same for me. I’ve known about Docker for many years now but never understood why I would want to use it when I can just as easily install things directly and just never touch them. Then I ran into dependency problems where two pieces of software required different versions of the same library. Docker just made this problem completely trivial.
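      That dependency-conflict case is easy to make concrete: each container ships its own interpreter and libraries, so nothing on the host has to agree. A hypothetical compose file (service names and images are illustrative only):

      ```yaml
      # docker-compose.yml — hypothetical: two apps whose dependencies would
      # clash if installed side by side on the host; each image bundles its own.
      services:
        legacy-app:
          image: python:3.8-slim     # older interpreter and library set
          command: python /app/legacy.py
          volumes:
            - ./legacy:/app
        modern-app:
          image: python:3.12-slim    # newer everything, no conflict
          command: python /app/modern.py
          volumes:
            - ./modern:/app
      ```

      `docker compose up -d` starts both; neither can see the other’s libraries.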

  • artificial_unintelligence@programming.dev · 2 years ago

    I would’ve gone with a less powerful NAS and got a separate unit for compute. I got a Synology NAS with a decent amount of compute so I could run all my stuff on the NAS, and the proprietary, locked-down OS drives me a bit nuts; it causes all sorts of issues. If I had a separate compute box I could just run some flavor of Linux, probably Ubuntu, and have things behave much more nicely.

  • DilipaEli@lemmy.world · 2 years ago

    To be honest, nothing. I’m running my home server on a NUC with Proxmox and an 8-bay Synology NAS (though I’m glad that I went with 8 bays back then!).
    As a router I have OPNsense running on a low-powered mini PC.

    All in all I couldn’t wish for more (low power, high performance, easy to maintain) for my use case, but I’ll soon need a storage and RAM upgrade on the Proxmox server.

  • Carter@feddit.uk · 2 years ago

    I recently did this for the second time. Started on FreeNAS, switched to TrueNAS Scale when it released and just switched to Debian. Scale was too reliant on TrueCharts which would break and require a fresh install every couple of months. I should’ve just started with Debian in the first place.

  • Anarch157a@lemmy.world · 2 years ago

    I already did, a few months ago. My setup was a mess: everything tacked onto the host OS, some stuff installed directly, other things in Docker, and the firewall was just a bunch of hand-written iptables rules…

    I got a newer motherboard and CPU to replace my ageing i5-2500K, so I decided to start from scratch.

    First order of business: Something to manage VMs and containers. Second: a decent firewall. Third: One app, one container.

    I ended up with:

    • Proxmox as VM and container manager
    • OPNsense as firewall. The server has 3 network cards (1 built-in, 2 on PCIe slots); the 2 add-in cards are passed through to OPNsense, and the built-in one is for managing Proxmox and for the containers.
    • A whole bunch of LXC containers running all sorts of stuff.

    Things look a lot more professional and clean, and it’s all much easier to manage.

      • Anarch157a@lemmy.world · 2 years ago

        Can’t say anything about CUDA because I don’t have Nvidia cards nor do I work with AI stuff, but I was able to pass the built-in GPU on my Ryzen 2600G to the Jellyfin container so it could do hardware transcoding of videos.

        You need the drivers for the GPU installed on the host OS, then link the devices under /dev into the container. For AMD this is easy, because the drivers are open source and included in the distro (Proxmox is Debian-based); for Nvidia you’d have to deal with the proprietary stuff both on the host and in the containers.
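        For an AMD iGPU on Proxmox, linking the /dev devices usually comes down to a couple of lines in the container’s config. A sketch (the container ID 101 is an example; exact device paths can differ per system):

        ```
        # /etc/pve/lxc/101.conf — bind the host's DRI devices into the container
        lxc.cgroup2.devices.allow: c 226:* rwm
        lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
        ```

        Here 226 is the character-device major number for DRM devices; inside the container, Jellyfin can then use /dev/dri/renderD128 for hardware transcoding, provided its service user is in the appropriate render/video group.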

      • oken735@yukistorm.com · 2 years ago

        Yes, you can pass through any GPU to containers pretty easily, and if you are starting with a new VM you can also pass through easily there, but if you are trying to use an existing VM you can run into problems.

  • rarkgrames@lemmy.world · 2 years ago

    I have things scattered around different machines (a hangover from my previous network configuration that was running off two separate routers) so I’d probably look to have everything on one machine.

    Also I kind of rushed setting up my Dell server and I never really paid any attention to how it was set up for RAID. I also currently have everything running on separate VMs rather than in containers.

    I may at some point copy the important stuff off my server and set it up from scratch.

    I may also move from using a load balancer to manage incoming connections to doing it via Cloudflare Tunnels.

    The thing is there’s always something to tinker with and I’ve learnt a lot building my little home lab. There’s always something new to play around with and learn.

    Is my setup optimal? Hell no. Does it work? Yep. 🙂

  • Toribor@corndog.uk · 2 years ago

    I should have learned Ansible earlier.

    Docker compose helped me get started with containers but I kept having to push out new config files and manually cycle services. Now I have Ansible roles that can configure and deploy apps from scratch without me even needing to back up config files at all.

    Most of my documentation has gone away entirely; I don’t need to remember things when they are defined in code.

  • chickenfingersub@discuss.tchncs.de · 2 years ago

    Use actual NAS drives. Do not use shucked external drives; they are cheaper for a reason and not meant for 24/7 operation. Though I guess they did get me through a couple of years, and hard drive prices seem to keep falling.