

All my systems are backed up with “rsnapshot” to a file server. File server is backed up to backblaze with duplicacy.
🤣
I have a cron job set to run on Monday and Friday nights - is this too frequent?
Only you can answer that - what is your risk tolerance for data loss?
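For what it's worth, a Monday/Friday night job in crontab looks something like this - the time of day and script name are just placeholders:

```
# m  h  dom mon dow  command
30 2 * * 1,5 /usr/local/bin/backup.sh
```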
You’re not going to run deepseek r1 without GPUs (plural).
I’m also considering UnRaid instead of Proxmox for a NAS OS.
NAS just has no meaning anymore?
Ah - I gotcha. That’s some terrible luck with drives.
This is why I do my first level of backups with rsnapshot. It backs up to the plain filesystem using rsync and uses hard links to de-dup between backups. No special filesystem, no encryption, restore is just an ‘rsync’ away.
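For anyone curious, the relevant bits of the setup look roughly like this - the paths and host are placeholders, and note that rsnapshot.conf fields must be separated by tabs:

```
# sketch of an rsnapshot.conf (fields are tab-separated)
snapshot_root   /mnt/backups/rsnapshot/
retain  daily   7
retain  weekly  4
backup  root@somehost:/home/    somehost/

# and a restore really is just an rsync back out of a snapshot, e.g.:
#   rsync -a /mnt/backups/rsnapshot/daily.0/somehost/home/ /home/
```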
Is it? I guess I don’t know the difference… I installed mine myself but my system is also dead simple - only 2 wires. My first thermostat was just a bi-metallic strip. 😀
I believe so. My system is baseboard heat, so I didn’t use a fan, but there is a fan control option on the device. The instructions say it can do auto, on, off, and circ.
More details here: https://digitalassets.resideo.com/damroot/Original/10011/33-00181EFS.pdf
I’ve got a couple Honeywell T6 z-wave thermostats that work great and didn’t cost a lot. I control them through home assistant with some custom code to set them on a schedule, but they can also still be operated manually if needed.
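Nothing fancy - this isn’t my exact config, but the schedule piece of a Home Assistant automation looks something like this (the entity id and temperature are made up):

```
- alias: "Weekday morning warm-up"
  trigger:
    - platform: time
      at: "06:30:00"
  condition:
    - condition: time
      weekday: [mon, tue, wed, thu, fri]
  action:
    - service: climate.set_temperature
      target:
        entity_id: climate.hallway_t6   # made-up entity id
      data:
        temperature: 20
```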
I’d highly recommend not doing that. A smart thermostat is much easier and going to be a lot more reliable. And it won’t stop working if your server goes down.
Sure thing - one thing I’ll often do for stuff like this is spin up a VM. You can throw 4x1GiB virtual drives in it and play around with creating and managing a raid using whatever you like. You can try out md, ZFS, and BTRFS without any risk - even unraid.
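If you’d rather not build a whole VM, the same experiment works on any Linux box with loop devices - something like this (file paths and sizes are arbitrary, and the loop numbers depend on what’s free):

```
# create 4 throwaway 1GiB "disks" and attach them as loop devices
for i in 1 2 3 4; do
  truncate -s 1G /tmp/raiddisk$i.img
  sudo losetup --find --show /tmp/raiddisk$i.img
done

# then practice on them, e.g. a RAID5 (assuming they came up as loop0-3)
sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 \
  /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3
```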
Another variable to consider as well - different RAID systems have different flexibility for reshaping the RAID. For example, if you wanted to add a disk later, or swap out old drives for new ones to increase space. It’s yet another rabbit hole to go down, but something to keep in mind. When we start talking about tens of terabytes of data, you start to run out of places to temporarily park it all if you ever need to recreate the array to change its layout. :-)
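To give a feel for it, growing an md RAID5 by one disk is roughly this (device names are placeholders, and a reshape like this can take many hours):

```
sudo mdadm --add /dev/md0 /dev/sde1
sudo mdadm --grow /dev/md0 --raid-devices=5
cat /proc/mdstat      # watch the reshape progress
# afterwards, grow whatever sits on top (pvresize, resize2fs, etc.)
```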
Yeah - that’s fair. I may have oversimplified a tad… The concepts behind RAID - the theory, the implementations, etc. - are pretty complicated, and there are many tools that do “raid-like things”, each with its own set of RAID types. So the landscape has a lot of options.
But once you’ve made a choice, the actual “setting it up” is usually pretty simple, and there’s no real ongoing support or management you need to do beyond basic health monitoring, which you’d want to do even without a RAID (e.g. smartd). Any Linux system can create and use a RAID - you don’t need anything special like Unraid. My old early-to-mid-2010s Debian box manages a RAID with NFS just fine.
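For the monitoring piece, a single line in smartd.conf is most of what I mean by basic health monitoring - something like this (the email address is a placeholder):

```
# /etc/smartd.conf: monitor all drives, short self-test daily at 02:00, mail on problems
DEVICESCAN -a -m admin@example.com -s (S/../.././02)
```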
If you decide you want a RAID you first decide which “level” you want before talking about any specific implementations. This drives all of your future decisions including which software you use. This basically focuses on 2 questions - how much budget do you have and what is your fault tolerance?
e.g. I have a RAID5 because I’m cheap and wanted biggest bang-for-the-buck with some failure resiliency. RAID5 lets me lose one drive and recover, and I get the storage space of N-1 drives (1 drive is redundant). Minimum size for a RAID5 is 3 drives. Wikipedia lists the standard RAID levels which are “basically” standardized even though implementations vary.
I could have gone with RAID6 (minimum 4 disks) which can suffer a 2 drive outage. I have off-site backups so I’ve decided that the low-probability of a 2 drive failure means this option isn’t necessary for me. If I’m that unlucky I’ll restore from BackBlaze. In 10+ years of managing my own fileserver I’ve never had more than 1 drive fail at a time. I’ve definitely had drives fail though (replaced one 2 weeks ago - was basically a non-issue to fix).
Some folks are paranoid and go with mirroring (RAID1, RAID10, etc.), which involves basically full duplication of drives. Very safe, very expensive for the same amount of usable storage. But RAID1 can work with a minimum of 2 drives - it just mirrors them, so you get half the storage.
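To make the trade-off concrete, here’s the rough math with four 8 TB drives (sizes picked purely for illustration):

```
RAID5:   (4 - 1) x 8 TB = 24 TB usable, survives any 1 drive failure
RAID6:   (4 - 2) x 8 TB = 16 TB usable, survives any 2 drive failures
RAID10:   4 x 8 TB / 2  = 16 TB usable, survives 1 failure per mirror pair
```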
Next the question becomes - what RAID software to use? Here there are lots of options, and this is where things can get confusing. Many people have become oddly tribal about it as well. There’s the traditional Linux “md” RAID which I use, which operates under the filesystems. It basically takes my 4 disks and creates a new block device (/dev/md0) where I create my filesystems. It’s “just a disk”, so you can put anything you want on it - I do LVM + ext4. You could put btrfs on it, zfs, etc. It’s “just a disk” as far as the OS is concerned.
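For reference, that whole stack is only a handful of commands - roughly this, with made-up device names:

```
sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]1
sudo pvcreate /dev/md0
sudo vgcreate vg_storage /dev/md0
sudo lvcreate -n data -l 100%FREE vg_storage
sudo mkfs.ext4 /dev/vg_storage/data
```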
These days the trend is to let the filesystem handle your disk pooling rather than a separate layer. BTRFS will create a RAID (but cautions against RAID5), as does ZFS. These filesystems basically build the functionality I get from md and lvm into the filesystem itself.
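The pooled-filesystem equivalents look something like this (pool name and devices are made up - and the btrfs example sticks to raid1 given the RAID5 caution above):

```
# ZFS: a raidz1 pool is roughly comparable to RAID5
sudo zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# BTRFS: mirror data and metadata across the same disks
sudo mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc /dev/sdd /dev/sde
```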
But there are also tools like Unraid that will provide a nice GUI and handle the details for you. I don’t know much about it though.
SSDs fail too. All storage is temporary…
Setting up a simple software raid is so easy it’s almost a no-brainer for the benefit imho. There’s nothing like seeing that a drive has problems, failing it from the raid, ordering a replacement, and just swapping it out and moving on. What would otherwise be hours of data copying, fixing things that broke, and discovering what wasn’t backed up is now 10 minutes of swapping a disk.
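With md, that workflow is roughly this (device names are placeholders):

```
sudo mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1
# ...physically swap the drive, partition it to match the others...
sudo mdadm /dev/md0 --add /dev/sdc1
cat /proc/mdstat      # watch the rebuild
```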
Had you tried ‘podman rm -f containerid’?
I mean… It’s better than “I bought two drives for my homelab and they’re fine” reports on social media.
Do not use “bare metal” in this way. “Outside containers” is sufficient.
That seems like it would screw the creators more than YouTube.
Agree - good point.
Let’s be honest - in this community NAS means “my do-everything server that may have a RAID and probably also shares storage”.