  • If all you care about is money, then it’s even less on Hetzner, at 48/year. But the reason I recommended Borgbase is that it’s a bit better known and more trustworthy. $8 a year is a very small difference; sure, it will be more than that because, like you said, you won’t use the full TB on B2, but I still don’t think it’ll end up that different. However, there are some advantages to using a Borg-based solution:

    • Borg can back up to multiple places at once, so you can have the same tool back up to the cloud and to a secondary disk
    • Borg is an open source tool, so you can run your own Borg server, which means you can have backups sent to your desktop
    • Again, because Borg is open, you can run a Raspberry Pi with a 1 TB USB disk for backups, and that would be cheaper than any hosted solution
    • Or you could even pair up with a friend, hosting their backups on your server while they do the same for you.

    And the most important part: migrating from one to the other is simple, just a config change, so you can start with Borgbase and in a year buy a mini computer to leave at your parents’ house, making all the needed config changes in seconds. Migrating away from B2, on the other hand, would involve a secondary tool. Personally I think that flexibility is worth way more than those $8/year.
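    To illustrate, here’s a minimal sketch of such a migration, assuming Borg 1.x; the repository URLs are placeholders (the Borgbase one is just illustrative). The backup command itself never changes, only the repository location:

    ```sh
    # Hypothetical repo locations; substitute your real ones.
    OLD_REPO='ssh://abc123@abc123.repo.borgbase.com/./repo'
    NEW_REPO='ssh://pi@raspberrypi.local/mnt/usb/backups/repo'

    # Initialise the new repository (encrypted, key stored in the repo).
    borg init --encryption=repokey "$NEW_REPO"

    # The backup command is identical; only the repo URL changed.
    borg create --stats "$NEW_REPO::{hostname}-{now}" ~/data
    ```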

    Also, Borg has deduplication, versioning and encryption. I think B2 has all of that too, but I’m not entirely sure, because it’s my understanding that they duplicate the entire file when something changes, so you might end up paying a lot more for it.


    As for the full system backup, I still think it’s not worth it. How do you plan on restoring it? You would probably have to boot a live USB and perform the steps there, which would involve formatting your disks properly, connecting to the remote server to get your data, chrooting into it and installing a bootloader. It just seems easier to install the OS and run a script, even if the other way could shave off 5 minutes, provided everything worked correctly and you were very fast.
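    For a sense of what that live-USB restore involves, here is a rough sketch (assuming an ext4 root, a Borg repository and GRUB; device names and the repo URL are placeholders, and the exact commands vary by distro):

    ```sh
    # From the live USB: format the new root partition (destructive!).
    mkfs.ext4 /dev/sda2
    mount /dev/sda2 /mnt

    # Pull the full-system archive from the remote repository into /mnt.
    cd /mnt
    borg extract ssh://user@backuphost/./repo::system-2024-01-01

    # Chroot in and reinstall the bootloader.
    for d in dev proc sys; do mount --bind "/$d" "/mnt/$d"; done
    chroot /mnt grub-install /dev/sda
    ```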

    Also, your system is constantly changing files, which means more opportunities for files to get corrupted (a similar reason why backing up a database’s data folder is a worse idea than backing up a dump of it), and some files are infinite, e.g. /dev/zero or /dev/urandom, so you would need to be VERY careful about what to back up.
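    If you do go down that road, you would at minimum want to exclude the pseudo-filesystems; something like this sketch (it assumes BORG_REPO is already exported, and the exclude list is not exhaustive):

    ```sh
    # --one-file-system already skips separately mounted pseudo-filesystems,
    # but the explicit excludes make the intent clear.
    borg create --one-file-system --stats \
        --exclude /dev --exclude /proc --exclude /sys \
        --exclude /run --exclude /tmp \
        "::system-{now}" /
    ```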

    At the end of the day I don’t think it’s worth it. How long do you think it takes you to install Linux on a machine? I would guess around 20 min; restoring your 1 TB backup will certainly take much longer than that (probably a couple of hours), and if you have the system up you can restore the critical stuff that doesn’t require the full backup early. That’s another reason Borg is a good idea: you can have a small backup of the critical stuff that restores in seconds, and another repository for the stuff that takes longer. So Immich might take a while to come back, but Authentik and Caddy can be up in seconds. Again, I’m sure B2 can also do this, but probably not as intuitively.
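    As a sketch of that split (the repo variables and paths are made up), two repositories that restore independently:

    ```sh
    # Small repo: compose files, configs, auth data -- restores in seconds.
    borg create "$CRITICAL_REPO::critical-{now}" /opt/caddy /opt/authentik

    # Big repo: bulk data like photos -- can finish restoring hours later.
    borg create "$BULK_REPO::bulk-{now}" /opt/immich
    ```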


  • > I figure the most bang for my buck right now is to set up off-site backups to a cloud provider.

    Check out Borgbase. It’s very cheap and it’s an actual backup solution, so it offers some features you won’t get from Google Drive or whatever you were considering, e.g. deduplication, recovering data from different points in time, and having the data encrypted so there’s no way for them to access it.
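    To give a feel for those features with Borg underneath (a sketch; the repo URL and archive names are placeholders): the repository is encrypted client-side, and every run is a deduplicated snapshot you can go back to:

    ```sh
    # Client-side encryption: the provider only ever stores ciphertext.
    borg init --encryption=repokey ssh://user@host/./repo

    # Each run creates a point-in-time archive; unchanged data is deduplicated.
    borg create ssh://user@host/./repo::docs-{now} ~/documents

    # List the archives and restore from any point in time.
    borg list ssh://user@host/./repo
    borg extract ssh://user@host/./repo::docs-2024-06-01T12:00:00
    ```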

    > I first decided to do a full-system backup in the hopes I could just restore it and immediately be up and running again. I’ve seen a lot of comments saying this is the wrong approach, although I haven’t seen anyone outline exactly why.

    The vast majority of your system is identical to a fresh install, so you’re wasting backup space storing data you can easily recover in other ways. You only need to store the changes you made to the system, e.g. which packages are installed (just save the list of packages and run an install on it; no need to back up the binaries) and which config changes you made. Plus, if you’re using Docker for services (which you really should be), the services too are very easy to recover, so if you back up the compose files and config folders for those services (and obviously the data itself) you can get back up in almost no time. Also, even if you do a full system backup, you would need to chroot into that system to install a bootloader, so it’s not as straightforward as you think (unless your backup is a dd of the disk, which is a bad idea for many other reasons).
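    For example, capturing the package list instead of the binaries is a one-liner either way (distro-specific; Debian/Ubuntu and Arch shown as illustrations):

    ```sh
    # Save the list of explicitly installed packages (a tiny text file).
    apt-mark showmanual > packages.txt   # Debian/Ubuntu
    pacman -Qqe > packages.txt           # Arch

    # On the fresh install, feed the list back to the package manager.
    xargs sudo apt-get install -y < packages.txt   # Debian/Ubuntu
    pacman -S --needed - < packages.txt            # Arch ("-" reads stdin)
    ```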

    > I then decided I would instead cherry-pick my backup locations. Then I started reading about backing up databases, and it seems you can’t just back up the data directory (or file, in the case of SQLite) and call it good. You need to dump them first and back up the dumps.

    Yes and no. You can back up the file directly, but it’s not good practice. The reason is that the live file can be caught mid-write, and if the copy is corrupted you lose all the data, whereas a dump of the database contents is much less likely to be corrupted. But in actuality there’s no reason why backing up the files themselves shouldn’t work (in fact, when you launch a docker container it’s always an entirely new database pointed at the same data folder).
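    Concretely, dumping instead of copying looks something like this (container, user and database names are placeholders):

    ```sh
    # PostgreSQL in docker: dump to a plain SQL file.
    docker exec my-postgres pg_dump -U myuser mydb > mydb.sql

    # SQLite: .backup takes a consistent snapshot even while the app runs.
    sqlite3 /path/to/app.db ".backup '/backups/app.db.bak'"
    ```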

    > So, now I’m configuring a docker-db-backup container to back each one of them up, finding database containers and SQLite databases and configuring a backup job for each one. Then, I hope to drop all of those dumps into a single location and back that up to the cloud. This means that, if I need to rebuild, I’ll have to restore the containers’ volumes, restore the backups, bring up new containers, and then restore each container’s backup into the new database. It’s pretty far from my initial hope of being able to restore all the files and start using the newly restored system.

    > Am I going down the wrong path here, or is this just the best way to do it?

    That seems like the safest approach. If you’re concerned about it being too much work, I recommend you write a script to automate the process, or better yet an Ansible playbook.
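    A restore script along those lines can stay quite small; here’s a sketch with made-up archive, path and container names, just to show the shape of it:

    ```sh
    #!/bin/sh
    set -e

    # 1. Pull compose files, configs and DB dumps out of the backup.
    borg extract "$BORG_REPO::critical-2024-06-01" opt/services

    # 2. Bring the containers back up with their restored volumes.
    cd opt/services && docker compose up -d

    # 3. Replay the database dump into the fresh container.
    docker exec -i my-postgres psql -U myuser mydb < mydb.sql
    ```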






  • I would bet that the problem is Plex being inside Docker. It might be one of those situations where being more experienced causes issues, because I’m trying to do things “right” and not run the service directly on my server, or as root, or in host network mode.

    But if being inside a container causes this many issues, I can’t even begin to imagine what it would take to do more complex stuff, like making it accessible through Tailscale or putting it behind authorization.




  • Some of it, yes, the claim for example, but the rest is still pretty bad UX (and even the claim is stupid; I shouldn’t need one to watch locally). I’m an experienced self-hosting person and I’m getting frustrated every step of the way; imagine someone who doesn’t know their way around Docker or isn’t familiar with this stuff… Jellyfin might be less polished, as some claim, but setting it up is a breeze; I never had to look at documentation to do it.



  • It’s curious that I’m almost in the opposite boat: I’ve been using Jellyfin without issues for around 5 years, but recently I was considering trying Plex because Jellyfin is becoming too slow on certain screens (probably because I have too much stuff, but it shouldn’t be this slow).

    Edit: this made me want to check out Plex, so I’ll leave my story here for people’s amusement:

    My experience with Plex:

    • Write the docker compose
    • leave out the claim because it’s optional and I have no idea what it is
    • launch it
    • asks me to create an account
    • not really comfortable creating an external account to access my local server, but okay.
    • discovered I already had an account. Huh? I wonder why I don’t remember ever running Plex then.
    • login to that account
    • shows me a bunch of stuff
    • find it weird that it already scanned everything, especially because I didn’t point it at my media
    • proceed to try to watch something
    • can’t play due to DRM
    • WAT?
    • go back and discover there’s a bunch of content that’s not in my library
    • ok, so this must be some free content
    • how do I configure my local library?
    • spend 15 min navigating the UI trying to find it
    • open the docs, they say to click the settings icon
    • that icon is nowhere to be seen
    • click a similar one
    • can’t find anything the docs say I should
    • maybe I’m not on the right site? site is :/web/yaddayaddayadda so it seems correct
    • try to go to : get to the same page
    • look at the docs on how to access the web app says to go to :/web
    • try that, get a message about not being authorized
    • WAT?
    • read some more docs, discover I need that claim
    • spend some time trying to find that in the UI
    • google it up, find the link
    • go to that page, grab the claim, set it up on the server and restart the server
    • I’m able to get to the web app now
    • Do you want to access it from the internet? If this works it would be great, so yes!
    • setup my library
    • let it scan and try to watch something from it
    • UX sucks, video plays in a sort of popup in landscape on my phone.
    • Ah, dumb of me, I probably have my browser set to desktop mode
    • No, I don’t.
    • Ok, so the web is maybe only expected to be used on desktop, let me install the app
    • Install the app, login to my account, only have the Plex provided content
    • Look around trying to find the media I scanned, find a thing saying my server is disconnected
    • WAT?
    • Go back to the web app via IP, try to look into settings
    • “You are not connected directly to the server”
    • WAT?
    • everything else seems okay, I even enabled remote access there and it says it’s working
    • Every few minutes the page says my server is not available for a few seconds then comes back
    • It’s now been 1 hour and I haven’t been able to watch anything.

    It’s now been 1 hour of trying to set this up and I give up. Jellyfin is much easier to set up, and even if Plex were instantaneous, I could have loaded my TV library hundreds of times in the hour I just wasted trying to get this to work. Probably every other time I tried I got similar results, which is why I have an account there even though I don’t remember ever using Plex.

    Edit 2: after some more fiddling I managed to get it working, not sure what I changed, so now:

    • Open the app, see my content there
    • Try to watch something
    • “You’re watching in indirect mode, quality might be bad”
    • Ok, so it’s not connecting directly to my server, anyways, let’s ignore this for now, maybe it’s getting confused because it’s in a docker container
    • “Activate Plex”
    • Ah, ok, it’s the “pay or not now” screen, not now
    • No subtitles play
    • Try different subtitles
    • Still nothing
    • Plus quality seems shit
    • Confirmed: it’s playing at 720x300 even though it’s a 4K video
    • Look at the docs, figure out that “direct play” is about whether the video gets converted
    • Select maximum quality, which according to the docs should use the original file
    • Still get a 300p video
    • Figure maybe it’s the Android app that’s the problem; go to the TV, install Plex and connect to it
    • Video takes forever to load
    • Give up again after a couple of minutes waiting for the movie to load


  • Others have already answered the main question here, but I have a different question for you and a couple recommendations.

    > I’m a Playstation gamer looking into moving to Linux gaming as the next Playstation might not be able to play physical games.

    While I’m happy to see more people flock over, you’re also not going to get physical games here, so I’m not sure what the advantage is for you.

    As for recommendations: many have replied that your system doesn’t meet the minimum requirements, but there are other management games that would run okay on it. My main recommendation is RimWorld; it’s an amazing colony-building game, a lot more in-depth than Frostpunk, and it should run on your machine.





  • In that sense it is a bit of scripting; it’s a templating language similar to Jinja, so you put things you want to display between {{ }}. For example, {{name}} will get rendered as the content of the name variable. [[ ]] is the way Silverbullet handles links, so [[Something]] is a link to the file Something.md, and [[ {{ name }} ]] is a link to the file whose name comes from the variable.

    Also, that’s because I wanted a custom view; a very similar thing could be done with:

    ```query
    recipe
    ```
    

    BTW, you can have a table of contents on Silverbullet by just putting a block named toc, i.e. ```toc and closing it on the next line.



  • Let me give you an example, I have a page with this:

     ```template
     | Name | Keywords |
     |-----------|-----------------|
     {{#each {recipe}}}
     | [[{{name}}]] | {{keywords}} |
     {{/each}}
     ```
    

    Then each recipe page has a header, so for example if I have a file named Recipes/Steak.md with the content:

    ```
    ---
    tags: recipe
    keywords: beef easy
    ---

    # Ingredients

    Yadda yadda yadda...
    ```
    
    

    So that table gets populated with all of the recipes wherever they are and I can add other columns or info there. It’s very neat and customizable.