
  • Update went fine on a bare-metal install. Customising the web UI port is a little easier now; instead of editing lighttpd.conf, I think you can do it in the UI.

    I struggled to find some settings; I looked for ages for the API token. Found it in All Settings: Expert, then scroll half a mile down to the web UI/API section.

    Also, I struggled with adding CNAMEs in bulk; I thought you could do that in the old UI. You might be able to in the new UI, but I just 'one by one'd them (there's a sketch of a possible bulk approach at the end of this comment).

    Docker update went flawlessly.

    I still have an LXC to go, which is a task for another day, unless TTeck's updater beats me to it.
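
    For the bulk CNAMEs, this is the shortcut I'd try first: a hedged sketch only, based on where older Pi-hole versions (v5) kept local CNAME records. The file name and path may well differ on the new release, so check your own install before trusting it.

    ```bash
    # Hedged sketch: bulk-add CNAMEs the way Pi-hole v5 stored them.
    # The file path is an assumption for newer releases - verify it first.
    CNAME_FILE=/etc/dnsmasq.d/05-pihole-custom-cname.conf

    # hypothetical alias,target pairs
    printf '%s\n' \
      'cname=immich.home.example,server.home.example' \
      'cname=paperless.home.example,server.home.example' | sudo tee -a "$CNAME_FILE"

    pihole restartdns   # reload DNS so the new records take effect
    ```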



  • My main storage is a mirrored pair of HDDs. Versioning is handled here.

    It Syncthings an "important" folder to a local backup machine with only one HDD.

    The local backup then Syncthings to a machine at my parents' house with one SSD.

    My setup could be better: if I put the versioning on my local backup it'd free space on my main storage. I could also migrate to dedicated backup software, Borg maybe, instead of Syncthing (see the sketch at the end of this comment). But Syncthing was what I knew and understood when I was slapdashing this together. It's a problem for future me.

    I've been seriously considering an EliteDesk G4 or a Dell/Lenovo equivalent as a backup machine. Mirrored drives. Enough oomph to HA the things that use the "important" files: Immich, Paperless, etc.
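
    If I do put versioning on the backup box with Borg, the core workflow would be something like this. A rough sketch with made-up paths and retention numbers, not my actual setup:

    ```bash
    # Hedged sketch: versioned backups with Borg instead of Syncthing versioning.
    # Repo path, source path and retention are all hypothetical.
    borg init --encryption=repokey /mnt/backup/important.borg    # one-time repo setup

    # nightly: snapshot the "important" folder into the repo
    borg create --stats --compression zstd \
        /mnt/backup/important.borg::'important-{now:%Y-%m-%d}' /mnt/storage/important

    # keep 7 daily, 4 weekly and 6 monthly archives, drop the rest
    borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 /mnt/backup/important.borg
    ```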


  • My big problem is remote stuff. None of my users have aftermarket routers, so they can't easily manipulate their DNS. One has an Android modem thing that is hot garbage. I'm using a combination of approaches: making their Pi their DHCP server, and one user is running on Avahi.

    Chrome, the people's browser of choice, really, really hates HTTP, so I'm putting them on my garbage ######.xyz domain. I had plans to deal with HTTPS one day, just not this day. Locally I only use the domain for Vaultwarden, so the domain didn't matter. But if people are going to be using it then I'll have to get a more memorable one.

    System updates have been a faff. I'm SSHing over Tailscale. When Tailscale updates it kicks me out, naturally. Which interrupts the session, naturally. Which stops the update, naturally. It also fucks up dpkg beyond what "dpkg --configure -a" can repair. I'll learn to run updates in the background one day (see the sketch at the end of this comment), or include Tailscale in unattended-upgrades. Honestly, I should put everything into unattended-upgrades.

    Locally everything works as intended though, so that's nice. Everything also works remotely for my fiancée and me, which is also nice. My big project is coalescing what I've got into something rational. I'm on the 'make it good' part of the "make it work > make it good" cycle.
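
    On the interrupted-update problem: running the upgrade inside tmux (or screen) should let dpkg survive Tailscale restarting and kicking out my SSH session. A sketch of what I mean, assuming tmux is installed and sudo doesn't prompt for a password (or just do it as root):

    ```bash
    # Hedged sketch: survive the SSH drop when Tailscale updates itself.
    # Assumes tmux is installed and passwordless sudo (or run as root).
    tmux new-session -d -s upgrade 'sudo apt-get update && sudo apt-get full-upgrade -y'

    # after reconnecting over the fresh tunnel, check how it went
    tmux attach -t upgrade

    # if a previously interrupted run left dpkg half-configured
    sudo dpkg --configure -a
    ```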





  • I am not the person to be asking; I am no Docker expert. It is my understanding that depends_on: defines start order. Once a service is started, it's started. If it has an internal healthcheck, I believe Watchtower will restart unhealthy containers.

    This is the blind leading the blind though; I would check the documentation if you're using Watchtower. We should both go read the depends_on docs since we both use it (there's a hedged sketch below of how depends_on and healthchecks can combine).
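
    For what it's worth, this is roughly how the two fit together in a compose file. A hedged sketch with made-up service names, so check the Compose and Watchtower docs rather than taking my word for it:

    ```yaml
    # Hedged sketch: depends_on alone only orders startup; adding a healthcheck
    # plus "condition: service_healthy" makes the app wait for a healthy database.
    services:
      db:
        image: postgres:16
        healthcheck:
          test: ["CMD-SHELL", "pg_isready -U postgres"]
          interval: 10s
          timeout: 5s
          retries: 5

      app:
        image: example/app:latest   # hypothetical image
        depends_on:
          db:
            condition: service_healthy
    ```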


  • It's Watchtower that I had problems with, because of what you described. Watchtower will drop a microservice, say a database, to update it and then not restart the things that depend on it. It can be great, just not in the ham-fisted way I used it. So instead I'm going to update the stack as a whole: everything drops, updates, and comes back up in the correct order.

    Uptime Kuma can alert you when a service goes down. I am constantly on my Homarr homepage, which tells me if it can't ping a service; then I go investigating.

    I get that it's scary, and after my Watchtower trauma I was hesitant to go automatic too. But I'm managing 5 machines now, and scaling by getting more, so I have to think about scale.


  • I've encountered that before, with Watchtower updating parts of a service and breaking the whole stack. But automating a stack update, as opposed to a per-service update, should mitigate all of that. I'll include a system prune in the script (see the sketch at the end of this comment).

    Most of my stacks are stable, so aside from breaking changes I should be fine. If I hit a breaking change, I keep backups; I'll rebuild and update manually. I think that'll be a net time saving overall.

    I keep two Docker LXCs, one for the arrs and one for everything else. I might make a third LXC for things that currently require manual updates; Immich is my only one at the moment.
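
    The per-stack update itself is only a few commands, so the script doesn't need to be clever. A sketch of the core of it, with a hypothetical stack directory:

    ```bash
    # Hedged sketch: update one stack in place, then clear out old images.
    cd /opt/stacks/arrs         # hypothetical stack directory
    docker compose pull         # grab any new images
    docker compose up -d        # recreate whatever changed, in dependency order
    docker system prune -f      # the prune mentioned above; clears unused images etc.
    ```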



  • Release: stable

    Keep the updates as hands-off as possible: Docker Compose, TTeck's LXC updater, automatic upgrades.

    I come through once a week or so to update the stacks (Dockge > stack > update) and once a month or so to update the machines (I have 5 total). Total time updating is about 3 hours a month. I could drop that a lot when I get around to writing some scripts to update the Docker images; then I'd just have to "apt update && apt upgrade".

    Minimise attack surface and outsource security. I have nothing at all open to the internet; I use Tailscale to create tunnels. I'm trusting my security to Tailscale, but they are much, much better at it than I am.


  • I use Joplin for day-to-day stuff: to-dos, journals, etc. I like Joplin, but I haven't tried the others. I tend to be sticky with services: if something "works" I don't go looking for better. Only when I have a specific problem I can't solve do I branch out.

    I use BookStack for documentation on the server: FAQs, guides, updates, etc. Perhaps that works for others. The lack of an Android app is what moved me to Joplin.


  • If nothing else, thank you for informing me about hide.me.

    My, personal, inability to intelligently compare VPNs is what’s holding me back from port forwarding.

    I've been trying to articulate what I mean and failing:

    A VPN should be at least affordable for me, no point looking if I can’t afford it.

    It should be suitably secure, again no point looking if I don’t trust them to give/sell/surrender my data.

    It should be suitably fast, no point looking if it’s slower than dialup.

    And, it should have a minimum of features: port forwarding, easy to set up, etc.

    Where the minimums sit is subjective, but I think these are the things each of us considers: price, privacy, performance, and feature set.

    Comparisons are either really good at "here's the cheapest, here's the most private and here's the fastest" but neglect whether a provider is P2P-friendly or allows port forwarding, or they're really detailed on the feature set ("max handshake encryption, max data encryption") but neglect how much I might pay.

    It's a whole lot of research into something I know I don't/won't understand, with potentially huge consequences should I get it wrong. So: "here's the most private", I'll take that one please.

    I'm currently on Mullvad; it topped a bunch of VPN comparisons for 'normals' on security, and I have been content with them. But I'm ready to move up when my sub ends. Testimonials are just about all I've got to go on.

    Edit: I suppose it's 'mid-level' guides that I think are missing. Beginners have their cheap/secure/fast articles. Advanced users can compare on "max handshake encryption", whatever that means. I need a "so you want to effectively and securely support the swarm, here are your options."


  • I'm the "it works for me" normie. At least right now, accepting a 10% failure rate comes with the benefit of decreased complexity, and thus increased security (on account of me being less likely to fuck it up). That's an attractive proposition.

    When I was beginning, TRaSH-Guides was the scripture, and on their port forwarding page (for qBit) they only mention TorGuard. On the TorGuard page they quote:

    As of 13 March 2022 Torguard Settles Piracy Lawsuit and has agreed to use commercially reasonable efforts to block BitTorrent traffic on its servers in the US using firewall technology. ‼

    I Talked to several people and they are still able to use Torguard for Torrents, Perhaps because the connection is encrypted. And others just selected a server in another country.

    "TorGuard settles piracy lawsuit" is scary for normies; at least it was for me when I was setting this all up. So I went with Mullvad, who actively do not want to know who I am. I'm a UK resident, so my entire Linux ISO stack is under Gluetun (see the compose sketch at the end of this comment).

    Generally the documentation around port forwarding, "who to use, how, and how much they cost", was hard to find, difficult to follow, or out of date. Perhaps I should look again though.

    Ideally, I'd want a "if you want budget use X at £#pcm, if you want privacy use Y at £#pcm, and if you want speed use Z at £#pcm" article, with guides for getting X, Y and Z working, in the style of TRaSH. I get that takes time and effort, but I think that's what it would take for mass adoption. Advanced users can debate the minutiae of the best VPN/client combo elsewhere; advanced users are building seedboxes. Normies need 3 meaningful choices (price, privacy, speed) and hand-holding to the finish line.
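
    Since I mentioned Gluetun above, the arrangement is roughly this: the VPN container owns the network and the torrent client rides inside it. A hedged sketch with placeholder credentials; check Gluetun's wiki for the exact variables your provider needs:

    ```yaml
    # Hedged sketch: qBittorrent only ever sees the network through Gluetun/Mullvad.
    services:
      gluetun:
        image: qmcgaw/gluetun
        cap_add:
          - NET_ADMIN
        environment:
          VPN_SERVICE_PROVIDER: mullvad
          VPN_TYPE: wireguard
          WIREGUARD_PRIVATE_KEY: "changeme"      # placeholder
          WIREGUARD_ADDRESSES: "10.64.0.2/32"    # placeholder
        ports:
          - 8080:8080   # qBittorrent web UI is published via the VPN container

      qbittorrent:
        image: lscr.io/linuxserver/qbittorrent
        network_mode: "service:gluetun"   # all traffic goes through gluetun
        depends_on:
          - gluetun
    ```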


  • Fedegenerate@lemmynsfw.com to Selfhosted@lemmy.world: Mini pc arriving tomorrow

OP, I was you 12 months ago. +1 for installing Proxmox. The ability to make mistakes in an LXC and always have the nightly backup right there was worth it alone. Helper scripts get you close to where you want to go fast. As for guides, there are a bunch; Raid Owl and TechnoTim both have initial Proxmox setup guides. There are many like them, those are just two I remember.

It might just be me, but I struggled with every step of every guide I followed, mostly because I'd skip to copy-pasting the commands… Don't do that. ChatGPT: plug the command in there and start quizzing it: "what does this do, what are the flags doing, I want to do X, will this command work?". Then don't copy ChatGPT either; take its output back to the documentation and make sure it makes sense. Then take a snapshot. Then paste the thing. It at least forced me to slow down.

In the beginning it took me about a month, just on a Pi, to get Pi-hole and a servarr stack installed and configured. Then I nuked it and rebuilt in a couple of weeks. Then I messed up again and rebuilt in a couple of days. I dedicate an hour to trying to fix whatever I broke, using ChatGPT as mentor/rubber duck; if I can't make progress on a fix in that time I load the snapshot. Troubleshooting is a great skill; however, everything you need gets installed at least once, so get good at installing things. Backups need testing and you should be familiar with the process, so get good at recovering from backups. ChatGPT solves most of the surface-level problems. You'll get to a point where you're stuck and ChatGPT won't be any help either, but let it get you there quickly.

I genuinely prefer Dockge to Portainer, but learn Portainer. As a rule, learn the industry standard, then migrate. There are tonnes of articles and resources for Portainer, and almost everyone using Dockge can help you with Portainer, but not the other way around. The only exception is when the non-industry-standard tool is specifically made to solve problems you have with the industry standard; I went with Nginx Proxy Manager over nginx, for example. GUIs are nice and I can see things working, unlike pasting a massive config and hoping. Now I have huge compose.yaml stacks for Docker that I used to install one by one in Portainer.

Security is hard. Outsource all you can. Your ISP router's firewall is perfectly serviceable; don't punch holes in it (for now). Tailscale is perfectly serviceable; don't try to make your own tunnels (for now). One of my earliest posts was me installing a firewall on my Pi, separate from my router, and then going into a blind panic about punching holes in it. Funny to look back on: my ISP firewall is still completely intact, I picked a different path.

Each iteration, add one layer of complexity and take easy wins for everything else. I set up Pi-hole bare metal, messed up the Unbound install, go again. I used docker-starter to set up Pi-hole + Unbound, messed up [something]… go again… Prioritise "working" over "perfect". You don't know what perfect is anyway. I don't know what perfect is either, but just getting something working teaches me what would be better for the next go around. If what you did is "wrong", it's going to break sooner rather than later, so you get to go again. If what you did works forever, be happy and enjoy the thing you built.

Oh, I forgot: no big updates right before bed, before a big event, or when you're out of the house. I once had an auto-updater [Watchtower] go off and delete my access to the internet [Pi-hole] before downloading the new image, on my fiancée's first day off, and while I was at work. I learned a lot that day about redundancy for infrastructure essential to Facebook, and rightly so. If you can't or won't want to fix broken things right then, don't be doing stuff that might break things.



  • I did think about cron but, long ago, I heard it wasn't best practice to update through cron because the lack of logs makes it difficult to see where things went wrong, when they do.

    I've got unattended-upgrades running on stuff, so it's mostly fine. Dockge is running purely to give me a way to upgrade Docker images without having to SSH in. It's just the monthly routine of "apt update && apt upgrade -y" *5 that sucks.

    Thank you for the advice though. I'll probably set cron to update the images with the script as you suggest, logging the output somewhere (see the sketch below). I have a "maintenance" Homarr page as a budget Uptime Kuma, so I can quickly check that everything is at least pinging. I made the page so I can quickly get to everyone's Dockge, Pi-hole and nginx, but the pings were a happy accident.
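
    The logging worry is fixable by redirecting the script's output, so cron leaves a trail to read when something breaks. A hedged sketch with a made-up script path:

    ```bash
    # Hedged sketch: crontab entry that keeps a log of each run.
    # /usr/local/bin/update-stacks.sh is hypothetical - point it at your own script.
    0 4 * * 1  /usr/local/bin/update-stacks.sh >> /var/log/update-stacks.log 2>&1
    ```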


  • On my home network I have Nginx Proxy Manager running Let's Encrypt with my domain for HTTPS, currently only for Vaultwarden (I'm testing it for a bit before rolling it out or migrating wholly over to HTTPS). My domain is a ######.xyz because that's cheap.

    For remote access I use Tailscale. For friends and family I give them a relay [a Raspberry Pi with nginx, which proxies them over Tailscale] that sits on their home network; that way they need "something they have" [the relay] and "something they know" [login credentials] to get at my stuff. I won't implement biometrics for "something they are". This is post hoc justification though, and nonsense to boot. I don't want to expose a port, a VPS has low WAF, and I'm not installing Tailscale on all of their devices, so a relay is an unhappy compromise.

    For bonus points I run Pi-hole to pretty up the domain names to service.swirl, and run a Homarr instance so no one needs to remember anything except home.swirl; but if they do remember immich.swirl, that works too.

    If there are many ways to skin a cat, I believe I chose to use a spoon; don't be like me. Updating each Dockge instance is a couple of minutes and updating DietPi is a few minutes more, which, individually, is not a lot on my weekly/monthly maintenance respectively. But in aggregate… I have checklists. One day I'll write a script that will SSH into a machine > update/upgrade the OS > docker compose pull/rebuild/prune > move on to the next relay (something like the sketch below)… That'll be my impetus to learn how to write a script.
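
    When I do get around to it, the script will probably look something like this. A sketch with hypothetical hostnames and paths, assuming SSH keys and passwordless sudo are already set up and every box keeps its stacks in the same directory:

    ```bash
    #!/usr/bin/env bash
    # Hedged sketch: walk each machine, update the OS, then refresh its stacks.
    # Hostnames and the stacks directory are made up - adjust to taste.
    set -euo pipefail

    HOSTS="relay-parents relay-inlaws homeserver"

    for host in $HOSTS; do
        echo "=== $host ==="
        ssh "$host" '
            sudo apt-get update && sudo apt-get upgrade -y
            for stack in /opt/stacks/*/; do
                (cd "$stack" && docker compose pull && docker compose up -d)
            done
            docker image prune -f
        '
    done
    ```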