

Redacted has a pretty good amount of DVD rips, DSD, etc. Not all-encompassing, but definitely not rare there.
I’m interested, primarily in the idea that some parts could be useful longer term… I’m an old-school Internet janitor, and appreciate the idea that enshittification is a moving target.
I’d ask up front, though: have you seen or considered Veilid? It might be wise to evaluate working with that as a baseline so a stronger fight against the shit can be had.
Let’s fuck up some fascists.
Are there specific issues noted, or just asking based on the time last updated?
I’ve thought about looking it over to see what might need cleaning up, but many of my personal use cases or methods might not fully align with the general consensus.
Along the same lines, is there anything you’d like to see added?
“However, this may prove challenging” … That sentence doing a lot of lifting, lol
Second the suggestion for *Arr here! It’s clever enough to rename to exactly what jellyfin wants, as well as hardlink in place if you’re on Linux, so the original names persist in qbit while jellyfin sees the good stuff.
There’s other benefits like tracking quality you have on disk for if you are wanting something better down the line, but allowing it to handle naming and sorting is a huge life saver.
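To make the hardlink trick concrete, here’s a minimal sketch of what the *Arr apps do for you automatically (paths are made up for illustration):

```
# Both names point at the same inode, so no extra disk space is used.
# qbit keeps seeding from the original path while jellyfin reads the clean one.
# (Hardlinks only work within the same filesystem/mount.)
ln /data/torrents/Some.Show.S01E01.1080p.WEB-DL.mkv \
   "/data/media/tv/Some Show/Season 01/Some Show - S01E01.mkv"
```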
Drive failures have almost nothing to do with access if they are mechanical. Most failures are from bearing or solder interconnect failures over time.
Also, most seeding is in smaller chunks that get read and cached if popular… meaning fewer drive hits than a 1:1 read per upload.
You will almost always have drives fail from other aspects like heat or power or old age before wear from seeding would ever be enough to matter.
I have drives in excess of 10 years old, with several seeds that have been active for many of those years, that are still running just fine.
Actually there’s a better reason for the encryption! You are correct that they use torrent clients to connect and record swarm nodes.
It prevents an ISP from traffic shaping against known torrent traffic!
Many ISPs will watch for certain unencrypted headers, and if they see torrent traffic they’ll throttle it to nearly nothing. With the encryption, it all just looks like SSL.
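If you’re on qBittorrent, forcing this on is a single preference (Options → BitTorrent → Encryption mode in the UI). A hedged sketch of doing the same via the WebUI API, assuming the WebUI is enabled on localhost:8080:

```
# "encryption": 1 means require encryption for all peer connections
# (0 = prefer it, 2 = force it off)
curl -s http://localhost:8080/api/v2/app/setPreferences \
  --data 'json={"encryption": 1}'
```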
It was one of a few stupid things and I wound up just telling him to leave.
Kinda wish it was more dramatic and/or gory, but I usually am just too tired to turn to violence.
Besides, I’d never admit to owning that chipper shredder anyway.
This!
I haven’t had one since an idiot roommate decided they wanted to fuck around. I fixed the problem (no more roommate).
Been 20 years now.
A VPN seems like a way to screw up decent performance when all you need is to stay away from public trackers.
Ironically, I’m OK with this in moderation… but scalpers that charge a high price, more than materials and maybe a small fee, I dislike.
Filling a void is a service. While there’s a grey area, there’s a balance between an acceptable fee and blatant junk.
Region locking of any type I feel should be outright illegal, but I’m just a lowly pirate with bent ethics.
I think you are right on the money (heh pun).
It’s one type of pirate that hoards and spreads about the booty, but it takes a special form of hypocrisy to turn around and charge hard money for it.
As to communities, I can’t help too much there, but I’ll offer that there do exist groups that have effectively created swarms that are like a private tracker but for live/on-demand viewing. Think an invite group of dozens of jellyfin servers. It might be safer to work toward identifying and joining one of those, depending on your desire.
Not everyone gets to decide the first level of networking they get, unfortunately. Many ISPs block by default. Many folks are on shared networks.
Many of these situations effectively block the performance of torrents.
Private trackers are closed communities for sharing torrents. Often you can interview to get access, or occasionally one will have open sign-ups. These usually have strict requirements to maintain a reasonable ratio of seeding against your downloads, to prevent greedy users from ruining performance of sharing.
Redacted is one of these communities, based strictly around music and maintaining quality, refusing to allow low-quality encodings of the data. It is harder to get into the community, and it has very strict seeding requirements to maintain.
Information about who they are and how to apply for access can all be found at https://interviewfor.red/
Yeah, Lidarr and Red are not a great fit… Too many options, and unless you write super complex filters you’d wind up with the weirdest version of each album.
It took a bit to get established there, but with the tokens they’ve been giving out more freely lately it’s not too bad. There’s also a point, once you’re perma-seeding, where it starts to coast along and just take care of itself… Ironically, I now see more seeding activity after I download something.
Seeding is a fickle beast there, but I’m glad to have em.
If you have access to certain music-focused private torrent trackers, many will do spotlight articles on independent or smaller artists who are also members.
This kind of sharing is often welcomed and valued, so it could even be a way in to some of those communities.
Redacted does this, and I’ve been introduced to some really good music this way.
Alternately, as others have noted, Bandcamp is a good way to offer it as well. If you go this route, even with the music set as free, you might make some small money… I’ve often tipped a bit via the “Pay what you want” pricing tier.
For my larger boxes, I only use SuperMicro. Most other vendors do weird shit to their backplanes that makes them incompatible, or charge for licenses for their ipmi/drac/lights-out. Any reputable reseller of server gear will offer SuperMicro.
The disk-to-RAM ratio is niche, and I’ve almost never run into it outside of large data warehouse or database systems (not what we’re doing here). Most of my machines run nearly idle even serving several active file streams or 3GB/sec data moves on only 16GB of RAM. I use the CPU being maxed out as a good warning that one of my disks needs checking, since resilvering or a degraded pool in ZFS chews CPU.
That said, hypervisors eat RAM. Whatever machine you might want to perform torrents, transcoding, etc, give that box RAM and either a good supported GPU or a recent Intel quicksync chip.
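For that “CPU as canary” check, a quick look at pool health is cheap with standard ZFS tooling (pool names will vary):

```
# Prints "all pools are healthy" when nothing is wrong
zpool status -x
# Or grep the full report for the usual trouble signs
zpool status | grep -E 'DEGRADED|FAULTED|resilver'
```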
For organizing over the arrays, I use RAIDed SSDs for the downloads, with the torrent client moving to the destination host for seeding on completion.
I run a single instance of radarr and sonarr; instead of multiples, I update the root folder for “new” content any time I need to point at a new machine. I just have to keep the current new-media destination in sync between the Arr and the torrent client for that category.
The Arr stacks have gotten really good lately with path management; you just need to ensure the mounts available to them are set correctly.
In the event I need to move content between two different boxes, I pause the seed and use rsync to duplicate the torrent files, then change the path and recheck the torrent. Once that’s good I either nuke and reimport in the Arr, or lately I’ve been keeping a better naming convention on the hosts so I can use preserving hardlinks. Beware, this is a pretty complex route unless you are very comfortable in Linux and rsync!
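A minimal sketch of that move, with made-up paths and hostnames:

```
# -a preserves perms/ownership/times, -H preserves hardlinks,
# -P shows progress and lets an interrupted copy resume
rsync -aHP /tank/tv/Some.Show/ box02:/tank2/tv/Some.Show/
# Then in the client: pause the torrent, point its save path at the
# new location, force a recheck, and resume once it verifies 100%.
```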
I’m using OMV on bare metal personally. My proxmox doesn’t even have OMV, it’s on a mini PC for transcoding. I see no problem running OMV inside proxmox though. My baremetal boxes are dedicated for just NAS duties.
For what it’s worth, keep tasks as minimal and simple as you can. Complexity where it’s not needed can be a pain later. My NAS machines are largely identical in base config, with only the machine name and storage pool name different.
If you don’t need a full hypervisor, I’d skip it. Docker has gotten great in its abilities. The easiest docker box I have was just Ubuntu with DockGE. It keeps its configs in a reliable path, so it’s easy to back them up.
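For reference, that setup amounts to roughly this (image and port are from Dockge’s docs; the stack paths are the conventional defaults, adjust to taste):

```
# Everything worth backing up ends up in /opt/dockge and /opt/stacks
mkdir -p /opt/stacks /opt/dockge
docker run -d --name dockge --restart unless-stopped \
  -p 5001:5001 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /opt/dockge:/app/data \
  -v /opt/stacks:/opt/stacks \
  -e DOCKGE_STACKS_DIR=/opt/stacks \
  louislam/dockge:1
```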
Ugh that is extra shitty. Yeah eBay is absurd sometimes with the risks.
For anyone skimming, my cards are all based around the ancient but great LSI 9211-8i chips.
I flash my own, so I can disable the BIOS and EFI boot ROMs. I suppose by the time someone gets to the larger hoarding, they should be comfortable flashing their own cards too.
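For the curious, crossflashing one of these to IT mode goes roughly like this. A sketch from memory, not a guide: the firmware filename is an example, and note your SAS address before erasing anything.

```
sas2flash -listall            # note the adapter index and SAS address first!
sas2flash -o -e 6             # erase the existing flash (the scary part)
sas2flash -o -f 2118it.bin    # write the IT-mode firmware
# Omitting "-b mptsas2.rom" here is what leaves the boot BIOS disabled.
# If the SAS address got wiped: sas2flash -o -sasadd <addr-from-step-1>
```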
Most of my drives are in the 3TB/4TB range… Something about that timeframe made some reliable disks. Newer disks have had more issues, really. A few boxes run some 8TB or 12TB, and I keep some external 8TB drives for evacuation purposes, but I don’t think I trust most options lately.
HGST / Toshiba seems to have done good by me overall, but that’s subjective certainly.
I have 2 Seagates I need to pull from one of the older boxes right now, but they are 2TB and well past their due:
```
root@Mizuho:~# smartctl -a /dev/sdc | grep -E "Vendor|Product|Capacity|minutes"
Vendor:               SEAGATE
Product:              ST2000NM0021
User Capacity:        2,000,398,934,016 bytes [2.00 TB]
Accumulated power on time, hours:minutes 41427:43

root@Mizuho:~# smartctl -a /dev/sdh | grep -E "Vendor|Product|Capacity|minutes"
Vendor:               SEAGATE
Product:              ST2000NM0021
User Capacity:        2,000,398,934,016 bytes [2.00 TB]
Accumulated power on time, hours:minutes 23477:56
```
Typically I’m a Debian/Ubuntu guy. Easiest multi tool for my needs.
I usually use OpenMediaVault for my simple NAS needs.
Proxmox and XCP-NG for hypervisor. I was involved in the initial development of OpenStack, and have much love for classic Xen itself (screw Citrix and their mistreatment of xenserver).
My docker hosts are either via DockGE or the compose plugins under OMV, leaning more toward DockGE lately for simplicity and eye candy.
Overall, I’ve had my share of disk failures, usually from being sloppy. I only trust software RAID, as I have a better shot at recovery if I’m stupid enough to store something critical on less than N+2.
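N+2 in ZFS terms means raidz2: any two disks in the vdev can die before data is at risk. A hypothetical six-disk pool:

```
# Pool and device names are made up; use /dev/disk/by-id paths in real life
zpool create tank raidz2 \
  /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
  /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4 \
  /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6
```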
I usually buy drives only a generation behind, and at that only when the price absolutely craters. The former due to being bitten by new models crapping out early, the latter due to being too poor to support my bad habits.
Nearly all of my SATA disks came from shucked externals, but that’s become tenuous lately… SMR disks are getting stuck into these more and more, and manufacturers are getting sneakier about hiding shit design.
Used SAS drives from a place with a solid warranty seem to be most reliable. About half my fleet was bought used, and I’ve only lost about a quarter of those with less than 5 years of active run time.
I personally have dedicated machines per task.
8x SSD machine: runs services for Arr stack, temporary download and work destination.
4-5x misc 16-bay boxes: raw storage boxes. NFS shared. ZFS underlying drive config. Changes on a whim as to what’s on them, but usually it’s 1x for movies, 2x for TV, etc. Categories can be spread across multiple places.
2-3x 8-bay boxes: critical storage. Different drive geometry, higher resilience. Hypervisors. I run a mix of Xen and proxmox depending on need.
All get 10gb interconnect, with critical stuff (nothing Arr for sure) like personal vids and photos pushed to small encrypted storage like BackBlaze.
The NFS shared stores, once you get everything mapped, allow some smooth automation to migrate things around for maintenance and such.
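The plumbing for that is just ordinary NFS. A hedged sketch with made-up hostnames and subnet:

```
# /etc/exports on a storage box
/tank/movies  10.0.0.0/24(rw,no_subtree_check,no_root_squash)

# On any client (or in fstab), after running exportfs -ra on the server:
mount -t nfs storage01:/tank/movies /mnt/movies
```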
Mostly it’s all 10-year-old or older gear. Fiber 10gb cards can be had off eBay for a few bucks; just watch out for compatibility and the cost of transceivers.
8-port SAS controllers can be had the same way, new, off eBay from a few vendors; just explicitly look for “IT mode” so you don’t get a RAID controller by accident.
SuperMicro makes quality gear for this… Used can be affordable, and I’ve had excellent luck. Most have a great ipmi controller for simple diagnostic needs too. Some of the best SAS drive backplanes are made by them.
Check BackBlaze disk stats from their blog for drive suggestions!
Heat becomes a huge factor, and the drives are particularly sensitive to it… Running hot shortens lifespan. Plan accordingly.
It’s going to be noisy.
Filter your air in the room.
The rsync command is a good friend in a pinch for data evacuation.
Your servers are cattle, not pets… If one is ill, sometimes it’s best to put it down (wipe and reload). If you suspect hardware, get it out of the mix quick, test and or replace before risking your data again.
You are always closer to dataloss than you realize. Be paranoid.
Don’t trust SMART’s overall “PASSED” verdict. Learn how to read the full report. A Pending-Sector count above 0 is always failure… Remove that disk!
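Concretely, that’s attribute 197 in the full report. A quick way to eyeball the counts that matter (device name is an example):

```
# Current_Pending_Sector (197) or Reallocated_Sector_Ct (5) above 0 = replace it
smartctl -A /dev/sda | grep -E 'Pending|Reallocated'
```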
Keep 2 thumb drives with your installer handy.
Keep a repo somewhere with your basics of network configs… Ideally sorted by machine.
Leave yourself a back-door network… Most machines will have a 1gb port, which might be handy when you least expect it. Setting up LAGG with those 1gb ports as fallback for the higher-speed fiber can save headaches later too…
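A rough sketch of that fallback bond with plain iproute2 (interface names are assumptions, and your distro’s network config tool is the right place to make it persistent):

```
# Active-backup bond: traffic rides the 10Gb fiber while its link is up,
# and fails over to the onboard 1GbE if it drops.
ip link add bond0 type bond mode active-backup miimon 100
ip link set enp1s0f0 down && ip link set enp1s0f0 master bond0
ip link set eno1 down && ip link set eno1 master bond0
echo enp1s0f0 > /sys/class/net/bond0/bonding/primary
ip link set bond0 up
```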
Yeah, my personal experience was similar but it’s all volunteers in the community doing the interviews.
I kept my queue window open and just chilled nearby, and it came a bit sooner than they estimated. Been worth it for me though.
I am not in it, but there’s a similar site called Orpheus that might be an easier interview time-wise, though I don’t have much experience with them.