

Can you explain the difference to me such that my feeble mind may understand?
Yeah let’s instead install a massive bloated shit project that the original developers abandoned years ago and whose maintainers can’t make heads or tails of the codebase because it’s too massive to maintain, with enough dependencies to make even a small child think he’s independent by comparison.
All so that we can, uh, synchronize a markdown text file across 3 computers.
These projects exist so that we don’t all have to re-invent the wheel every single time we need something simple. They have a purpose, even if they’re not pushing the envelope. I’ve developed a bunch of software to do extremely simple things for myself because all the existing options are massive and bloated and do a million more things than I need.
I’m sure your projects look impressive on your resumé, though.
people still use plex after the last sneaky they pulled?
Even if that was possible, I don’t want to crash innocent people’s browsers. My tar pits are deployed on live environments that normal users could find themselves navigating to, and it’s overkill anyway: if you simply respond to 404 Not Found with 200 OK and serve 15MB on the “error” page, bots will stop going to your site because you’re not important enough to deal with. It’s a low bar, but your data isn’t worth someone looking at your tactics and even thinking about circumventing them. They just stop attacking you.
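A minimal sketch of that decoy idea, in plain stdlib Python (the port and the exact payload size are illustration values, not my actual deployment):

```python
# Sketch: answer every request with 200 OK and a huge body so scanners
# waste bandwidth and give up, instead of getting a clean 404.
from http.server import BaseHTTPRequestHandler, HTTPServer

JUNK = b"A" * (15 * 1024 * 1024)  # ~15MB of filler for the fake "error" page

class DecoyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Real content would be routed elsewhere; anything unknown gets
        # a 200 OK and a massive body instead of a 404.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(JUNK)))
        self.end_headers()
        self.wfile.write(JUNK)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), DecoyHandler).serve_forever()
```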
Bots will blacklist your IP if you make it hostile to bots
This will save you bandwidth
Build tar pits.
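For reference, a tar pit in its most bare-bones form is just a listener that drips a response back a few bytes at a time so a scanner’s connection hangs open forever. A rough sketch (port and delay are arbitrary, not a real deployment):

```python
# Sketch of a tar pit: accept connections and send tiny chunks very
# slowly so bots sit and wait instead of hammering the real site.
import socket
import threading
import time

def drip(conn, delay=10.0):
    """Send a never-ending response one tiny chunk at a time."""
    try:
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n")
        while True:
            conn.sendall(b"<!-- still loading -->\n")
            time.sleep(delay)
    except OSError:
        pass
    finally:
        conn.close()

def tarpit(host="0.0.0.0", port=8081):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(128)
    while True:
        conn, _ = srv.accept()
        threading.Thread(target=drip, args=(conn,), daemon=True).start()

if __name__ == "__main__":
    tarpit()
```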
https://u.drkt.eu/PZJz6H.png I don’t know how to embed an image link
It’s not fundamentally different
I already saw copyparty but it appears to me to be a pretty large codebase for something so simple. I don’t want to have to keep up with that because there’s no way I’m reading and vetting all that code; it becomes a security problem.
It is still easier and infinitely more secure to grab a USB drive, a bicycle and just haul ass across town. Takes less time, too.
Sending is someone else’s problem.
It becomes my problem when I’m the one who wants the files, and no free service is going to accept an 80GB file.
It is exactly my point that I should not have to deal with third parties or something as massive and monolithic as Nextcloud just to do the internet equivalent of smoke signals. It is insane. It’s like someone tells you they don’t want to bike to the grocer 5 minutes away because it’s currently raining and you recommend them a monster truck.
Why is it so hard to send large files?
Obviously I can just dump it on my server and people can download it from a browser, but how are they gonna send me anything? I’m not gonna put an upload form on my site; that’s a security nightmare waiting to happen. HTTP uploads have always been wonky for me, anyway.
Torrents are very finicky with 2-peer swarms.
instant.io (torrents…) has never worked right.
I can’t ask everyone to install a dedicated piece of software just to very occasionally send me large files
It counts
For one, I don’t use software that updates constantly. If I had to log in to a container more than once a year to fix something, I’d figure out something else. My NAS is just hard drives on a Debian machine.
Everything I use runs either Debian or some form of BSD.
Why?
I’m sure there’s ways to do it, but I can’t do it and it’s not something I’m keen to learn given that I’ve already kind of solved the problem :p
I think it’s great you brought up RAID, but I believe when Immich or any software messes things up, it’s not recoverable, right?
RAID is not a backup, no. It’s redundancy. It’ll keep your service up and running in the case of a disk failure and let you swap in a new disk with no data loss. I don’t know how Immich works, but I would put it in a container and drop a snapshot any time I update it, so if it breaks I can just revert.
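As a hypothetical example of that snapshot-before-update habit, assuming LXD-style containers with a Debian guest (the actual tooling could just as well be Proxmox snapshots or ZFS):

```python
# Sketch: take a container snapshot, then run the update, so a broken
# upgrade can be rolled back with `lxc restore <container> <snapshot>`.
import datetime
import subprocess
import sys

def snapshot_then_update(container: str):
    stamp = datetime.datetime.now().strftime("pre-update-%Y%m%d-%H%M%S")
    # Snapshot first; this is the rollback point.
    subprocess.run(["lxc", "snapshot", container, stamp], check=True)
    # Then update inside the container.
    subprocess.run(["lxc", "exec", container, "--", "apt-get", "update"],
                   check=True)
    subprocess.run(["lxc", "exec", container, "--", "apt-get", "-y", "upgrade"],
                   check=True)
    print(f"Updated {container}; rollback point is snapshot '{stamp}'")

if __name__ == "__main__":
    snapshot_then_update(sys.argv[1])
```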
I recommend it over a full disk backup because I can automate it. I can’t automate full disk backups as I can’t run dd reliably from a system that is itself already running.
It’s mostly just to ensure that the config files and other stuff I’ve spent years building are available in the case of a total collapse, so I don’t have to rebuild from scratch. In the case of containers, those have snapshots. Any time I’m working on one, I drop a snapshot first so I can revert if it breaks. That’s essentially a full disk backup, but it’s exclusive to containers.
edit: if your goal is to minimize downtime in case of disk failure, you could just use RAID
My method requires that the drives be plugged in at all times, but it’s completely automatic.
I use rsync from a central ‘backups’ container that pulls folders from other containers and machines. These are organized in
/BACKUPS/(machine/container)_hostname/...
The /BACKUPS/ folder is then pushed to an offsite container I have sitting at a friend’s place across town.
For example, I back up my home folder on my desktop, which looks like this on the backup container:
/BACKUPS/Machine_Apollo/home/dork/
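A rough sketch of that pull-then-push flow; the hostnames, folder list, and offsite target below are placeholders, not the real ones:

```python
# Sketch: a central "backups" container pulls folders from other machines
# into /BACKUPS/<label>/..., then mirrors the whole tree offsite.
import os
import subprocess

BACKUP_ROOT = "/BACKUPS"
OFFSITE = "backup@offsite-container:/BACKUPS/"  # placeholder target

# label under /BACKUPS/  ->  (ssh host, folders to pull)
SOURCES = {
    "Machine_Apollo": ("apollo", ["/home/dork/"]),
    "Container_Web":  ("web",    ["/etc/nginx/"]),
}

def pull_all():
    for label, (ssh_host, paths) in SOURCES.items():
        for path in paths:
            dest = f"{BACKUP_ROOT}/{label}{path}"
            os.makedirs(dest, exist_ok=True)
            # -a keeps permissions/times, --delete mirrors removals
            subprocess.run(
                ["rsync", "-a", "--delete", f"{ssh_host}:{path}", dest],
                check=True)

def push_offsite():
    # Mirror the whole tree to the box at the friend's place.
    subprocess.run(
        ["rsync", "-a", "--delete", f"{BACKUP_ROOT}/", OFFSITE],
        check=True)

if __name__ == "__main__":
    pull_all()
    push_offsite()
```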
This setup is not impervious to bit flips as far as I’m aware (it has never happened). If a bit flip happens upstream, it will be pushed to backups and become irrecoverable.
I need it
I’m never using a Chromium browser again in my life but Mozilla hates winning so we just have 2 bad browsers.
I only looked at dumpdrop and it seemed fine to me, compared to other similar projects which are 10 times as large and provide essentially the same functionality. The world of web-based file-uploading solutions is fucked.