

That’s how it works. Wake-on-LAN wakes the computer if the computer receives a network request. Which is the same thing you’re asking for, right?
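For reference, what a WoL-enabled NIC actually listens for is a “magic packet”: six 0xFF bytes followed by the machine’s MAC address repeated 16 times, usually sent as a UDP broadcast. A minimal sketch of sending one - the MAC address and port below are placeholders:

```python
# Minimal Wake-on-LAN sketch: build a magic packet (6 x 0xFF, then the
# target MAC repeated 16 times) and broadcast it over UDP.
import socket

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, (broadcast, port))

wake("aa:bb:cc:dd:ee:ff")  # placeholder MAC of the machine to wake
```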
🅸 🅰🅼 🆃🅷🅴 🅻🅰🆆.
𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍 𝖋𝖊𝖆𝖙𝖍𝖊𝖗𝖘𝖙𝖔𝖓𝖊𝖍𝖆𝖚𝖌𝖍
I’ve been using Contabo. German company, several geographic locations for your nodes, reasonably priced.
The oil runs out in another 30 or 40 years; things are going to fall apart pretty quickly after that, when we won’t be able to get enough food in to maintain cities.
I was scanning, and thought you said we’d be battling nuclear mutants for the last bottle of Tabasco.
I don’t know which version is more probable.
2 opponents have acquisition, one looks like they’re firing, there’s a third ready for a snap shot, and they all have AKs.
Player is… reloading a handgun.
I know it’s a video game, but my suspension of disbelief has limits. This isn’t sci-fi; there are no magic shields or SuperArmor; so the player only survives if the opponents are blindingly incompetent.
I mean, utter, blinding incompetence aside, player 1 is dead. Even with a massive plot shield, it would be hard to swallow.
Maybe they stole some armor cloth from the John Wick universe; that stuff is sci-fi level good.
Restic to Backblaze. B2 support is built into restic, so all you need is an account and credentials.
Most of my home data - servers, PCs - I back up to HD and B2. I have a few VPS I only back up to B2.
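For anyone wanting the shape of it, a rough sketch of driving restic at a B2 repo - the bucket name, paths, and credentials are obviously placeholders:

```python
# Rough sketch: point restic at a Backblaze B2 repository and run a backup.
# Assumes restic is installed; bucket, paths, and credentials are placeholders.
import os
import subprocess

env = dict(
    os.environ,
    B2_ACCOUNT_ID="your-b2-key-id",            # from your Backblaze account
    B2_ACCOUNT_KEY="your-b2-application-key",
    RESTIC_PASSWORD="your-repository-password",
)
repo = "b2:my-bucket:backups/home"

subprocess.run(["restic", "-r", repo, "init"], env=env, check=True)    # one-time setup
subprocess.run(["restic", "-r", repo, "backup", os.path.expanduser("~/data")],
               env=env, check=True)
```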
I’ve never had a Cyberpower that hasn’t worked just fine with NUT. NUT’s a PITA to configure, but other than that it likes Cyberpower. I have that model - without the 3-R - and it’s great.
I would be extremely surprised if the 3-R version didn’t work. With Cyberpower, I don’t even bother to look up compatibility. I bought 3 EC850LCDs blind, for the router and a couple other servers around the house. They all came up just fine.
Apples are such great fruit. They might seem a little boring, but they keep well, and there are so many varieties. Almost the perfect fruit.
In the Verge article, are you talking about the table with the “presumably” qualifier in the column headers? If so, not only is it clear they don’t know what, exactly, is attributable to the costs, but also that they mention “gross pay”, which is AKA “compensation.” When a company refers to compensation, they include all benefits: 401k contributions, the value of health insurance, vacation time, social security, bonuses, and any other benefits. When I was running development organizations, a developer who cost me $180k was probably only taking $90k of that home. The rest of it went to benefits. The rule of thumb was that for every dollar of salary negotiated, I had to budget 1.5-2x that amount. The numbers in the “Presumably: Gross pay” column are very likely cost-to-company, not take-home pay.
I have some serious questions about the data from “h1bdata.info”. It claims one software engineer has a salary of $25,304,885? They’ve got some pretty outlandish salaries in there; a program manager in NY making $2,400,000? I’m sceptical about the source of the data on that website. The vast majority of the salaries for engineers, even in that table, are in the range of $100k - $180k, largely dependent on location, and a far cry from a take-home salary of 500,000€.
Nobody is paying software developers 500.000€. It might cost the company that much, but no developers are making that much. The highest software engineer salaries are still in the US, and the average is $120k. High-end salaries are $160k; you might creep up a little more than that, but that’s also location specific. Silicon Valley salaries might be higher, but then, it costs far more to live in that area.
In any case, the question is ROI. If you have to spend $500,000 to address some sites that are being clever about wasting your scrapers’ time, is that data worth it? Are you going to make your $500k back? And you have to keep spending it, because people keep changing tactics and putting in new mechanisms to ruin your business model. Really, the only time this sort of investment makes sense is when you’re breaking into a bank and are going to get a big pay-out in ransomware or outright theft. Getting the contents of my blog is never going to be worth the investment.
Your assumption is that slowly served content is considered not worth scraping. If that’s the case, then it’s easy enough for people to prevent their content from being scraped: put in sufficient delays. This is an actual method for addressing spam: add a delay to each interaction. Even relatively small delays add up and cost spammers money, especially if you run a large email service and do it at scale.
Make the web a little slower. Add a few seconds to each request, on every web site. Humans might notice, but probably not enough to be a big bother; the impact on data harvesters, though, would be huge.
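As a sketch of what that looks like - a tiny WSGI middleware that just sleeps before handing the request on; the class name and the 3-second figure are made up for illustration:

```python
# Illustrative only: slow every request down by a fixed delay.
import time

class DelayMiddleware:
    def __init__(self, app, delay_seconds=3.0):
        self.app = app
        self.delay_seconds = delay_seconds

    def __call__(self, environ, start_response):
        time.sleep(self.delay_seconds)   # cheap for one human, expensive at scraper scale
        return self.app(environ, start_response)

# usage: application = DelayMiddleware(application)
```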
If you think this isn’t already the defense being used, consider how almost every Cloudflare interaction - and an increasingly large number of other sites - now includes a time-wasting front page. They usually say something like “making sure you’re human” with a spinning disk, but really all they need to be doing is adding 10 seconds to each request. If a scraper is trying to index only a million pages a day, and each page adds a 10s delay, that’s nearly 2,800 hours of scraper computer time wasted - a million pages times 10 seconds is ten million seconds. And they’re trying to scrape far more than a million pages a day; it’s estimated (they don’t reveal the actual number) that Google indexes billions of pages every day.
This is good, though; I’m going to go change the rate limit on my web server; maybe those genius software developers will set a timeout such that they move on before they get any content from my site.
Start it up before you go to bed. If it isn’t indexed when you wake up, it’s just not going to work for you.
Jellyfin is pretty good about preserving the index; you only really pay a cost during that first start-up, or if you shuffle content around on the storage. Otherwise, it only indexes new stuff, which should be barely noticeable.
Ah, that’s where tuning comes in. Look at the logs, take the average time-out, and tune the tarpit to return a minimal payload: a bare-bones HTML page containing a single, slightly different URL pointing back into the tarpit. Or, better yet, JavaScript that loads a single page of tarpit URLs very slowly. Bots have to be able to run JS, or else they’re missing half the content on the web. I’m sure someone has created a JS forkbomb.
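Something like this sketch is roughly what I mean - a standalone tarpit endpoint that dribbles out a bare page whose only link is a fresh random URL back into the pit; the port, paths, and timings are made up:

```python
# Illustrative HTTP tarpit: serve a minimal page one byte per second, with a
# single link that leads straight back into the tarpit under a new URL.
import time
import uuid
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class TarpitHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        next_url = f"/pit/{uuid.uuid4().hex}"   # a slightly different URL every time
        body = f'<html><body><a href="{next_url}">more</a></body></html>'.encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        for b in body:                          # drip the payload out very slowly
            self.wfile.write(bytes([b]))
            self.wfile.flush()
            time.sleep(1)

ThreadingHTTPServer(("0.0.0.0", 8080), TarpitHandler).serve_forever()
```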
Variety is the spice of life. AI botnet blacklists are probably the better solution for web content; you can run ssh on a different port and run a tarpit on the standard port, and it will barely affect you. But for the web, if you’re running a web server you probably want visitors, and tarpits would be harder to set up to catch only bots.
Yeah, that’s a good one.
Temperate-zone seed fruit like apples and pears rarely look, taste, and feel like their genetic parents.
And this is why there are, like, 431,663 different varieties of apples? Or is it all selective cultivation?
Honeycrisp is the state apple here, for which I’m grateful because it’s my favorite apple, so you can always get it. But I swear, when we go to the arboretum Apple House, the little market run by The Arb, there are bins of 20 different kinds of apples and it’s a different 20 every year. Find a new apple you like? Hah! Joke’s on you, you’ll never come across it again in the rest of your life.
How many episodes in the show? Depending on the hardware, that could take a few minutes. If it’s trying to index over a network mounted drive, it could take a long time. My material was mounted locally over USB3 on an older 16-core Ryzen machine.
Once indexing is done, it’s fast, but the initial indexing can be slow.
A good tarpit will reduce your bandwidth. Tarpits aren’t about shoving useless data at bots; they’re about responding as slowly as possible to keep the bot connected for as long as possible while giving it nothing.
Endlessh accepts the connection and then… does nothing. It doesn’t even get as far as the SSH handshake. It just very… slowly… sends… an endless preamble, until the bot gives up.
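The trick it leans on: an SSH client has to read and discard any lines the server sends before the one starting with “SSH-”, so a tarpit can drip junk lines out forever and never reach the handshake. A minimal sketch of that idea (not Endlessh itself; the port and 10-second interval are placeholders):

```python
# Illustrative SSH tarpit: keep sending junk "pre-banner" lines, very slowly,
# so the client never sees the SSH version string and just sits there.
import random
import socket
import threading
import time

def tarpit(client: socket.socket) -> None:
    try:
        while True:
            # Any line not starting with "SSH-" is treated as pre-version chatter.
            client.sendall(("%x\r\n" % random.getrandbits(32)).encode())
            time.sleep(10)
    except OSError:
        client.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 2222))   # placeholder port; a real setup would listen on 22
srv.listen(64)
while True:
    conn, _addr = srv.accept()
    threading.Thread(target=tarpit, args=(conn,), daemon=True).start()
```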
As I write, my Internet-facing SSH tarpit currently has 27 clients trapped in it. A few of these have been connected for weeks. In one particular spike it had 1,378 clients trapped at once, lasting about 20 hours.
How long did you give it? It indexes the library. I had to rebuild my library once, and while I don’t have a huge collection - mainly just rips of my DVD collection, about 450 films - it took over an hour to index everything. Until it’s done, not everything shows up.
Yes! I always think about the fact that the Hebrew law against eating pork is because the ancient tribes were migrant herders, and their competition was farming communities. You can’t herd pigs, but you can herd sheep, and you don’t want to enrich those other people so you make a dictate against eating pork. Not politics, but economics, although in a lot of ways that’s a distinction without a difference.
I think the MSC certification may only apply in the US. The UK probably has different certifications.
I’ve got domains from both DomainMonger and NameCheap. If it were trivial, I’d probably move my domains to NameCheap. The web UX is a little better; aside from that, I’ve never had issues with either, nor heard anything particularly bad about either.
But, yeah: +1 on the NameCheap suggestion.