Not on my own; technically I’m only responsible for the network and cybersecurity, but not being able to log out of an education account on a public computer is a pretty serious threat. Fortunately, I’m on good terms with the dean, and he’s always been receptive to my concerns.
I take my shitposts very seriously.
As a university sysadmin that spent half a fucking hour yesterday trying to log someone out of a classroom computer’s MS Office software (the “sign out” button did fuck all, go figure): fuck Microsoft, fuck Office, fuck Outlook, fuck OneDrive, fuck their SSO, and their mother too. Next semester I’m sanitizing the computers. Students will use LibreOffice and they’ll like it.
I might be a little angry.
Fuck it. *uses `ulong` to store a boolean*
Proxmox is a great starting point. I use it in my home server and at work. It’s built on Debian, with a web interface to manage your virtual machines and containers, the virtual network (trivial unless you need advanced features), virtual disks, and installer images. There are advanced options like clustering and high availability, but you really don’t have to interact with those unless you need them.
Well that’s not true. I live in a Soviet era house that had an entire second floor built on top of it. We’ve had to drill through the brick walls to replace the natural gas pipes with pipes that run outside the walls, we’ve had to dig under the foundation when we got connected to the city’s sewer system (again, Soviet-built), and again when the main water pipe burst and threatened to wash out the foundation. If the load-bearing walls had been constructed to the same “it works” standard as the things we’ve had to fix, we wouldn’t have a house anymore.
rtxn@lemmy.world to Programmer Humor@programming.dev • I’m new to using Ruby and this tickled me pink • 27 days ago

`timedelta` marks time in days, seconds, and microseconds. It doesn’t take leap years into account because the concept of years is irrelevant to `timedelta`. If you need to account for leap years, you need a different API.
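A quick Python illustration of why that design holds up (the dates here are arbitrary examples):

```python
from datetime import date, timedelta

# timedelta only stores days, seconds, and microseconds; it has no
# concept of months or years, leap or otherwise.
delta = timedelta(weeks=2, hours=50)
print(delta.days, delta.seconds, delta.microseconds)  # 16 7200 0

# "One year" as a timedelta is just a fixed 365 days, so adding it
# across a leap day lands a day short of the calendar anniversary:
print(date(2019, 3, 1) + timedelta(days=365))  # 2020-02-29

# A calendar-aware API handles this instead, e.g. dateutil:
#   date(2019, 3, 1) + relativedelta(years=1)  -> 2020-03-01
```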
rtxn@lemmy.world to Selfhosted@lemmy.world • Why are anime catgirls blocking my access to the Linux kernel? • 29 days ago

New developments: just a few hours before I posted this comment, The Register published an article about AI crawler traffic. https://www.theregister.com/2025/08/21/ai_crawler_traffic/
Anubis’ developer was interviewed and they posted the responses on their website: https://xeiaso.net/notes/2025/el-reg-responses/
In particular:
> Fastly’s claims that 80% of bot traffic is now AI crawlers

and the reply:

> In some cases for open source projects, we’ve seen upwards of 95% of traffic being AI crawlers. For one, deploying Anubis almost instantly caused server load to crater by so much that it made them think they accidentally took their site offline. One of my customers had their power bills drop by a significant fraction after deploying Anubis. It’s nuts.
So, yeah. If we believe Xe, OOP’s article is complete hogwash.
rtxn@lemmy.world to Selfhosted@lemmy.world • Why are anime catgirls blocking my access to the Linux kernel? • 30 days ago

That’s why the developer is working on a better detection mechanism. https://xeiaso.net/blog/2025/avoiding-becoming-peg-dependency/
rtxn@lemmy.world to Selfhosted@lemmy.world • Why are anime catgirls blocking my access to the Linux kernel? • 30 days ago

Given how much authority you wrote with before, I thought you’d be able to grasp the concept. I’m sorry I assumed better.
rtxn@lemmy.world to Selfhosted@lemmy.world • Why are anime catgirls blocking my access to the Linux kernel? • 30 days ago

THEN (and this is the part you don’t seem to understand) the client process has to either waste time solving the challenge (which, by the way, is orders of magnitude lighter on the server than serving the actual meaningful content) or cancel the request. If a new request is sent during that time, it will still have to waste time solving the challenge. The scraper will get through eventually, but the challenge delays the response and reduces the load on the server, because while the scrapers are busy computing, it doesn’t have to serve meaningful content to them.
rtxn@lemmy.world to Selfhosted@lemmy.world • Why are anime catgirls blocking my access to the Linux kernel? • 30 days ago

It’s not client-side because validation happens on the server side. The content won’t be displayed until and unless the server receives a valid response, and the challenge is formulated in such a way that calculating a valid answer will always take a long time. It can’t be spoofed because the server will know that the answer is bullshit. In my example, the server will know that the prime factors returned by the client are wrong because their product won’t be equal to the original semiprime. Delegating to a sub-process won’t work either, because what’s the parent process supposed to do? Move on to another piece of content that is also protected by Anubis?
The point is to waste the client’s time and thus reduce the number of requests the server has to handle, not to prevent scraping altogether.
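For illustration, here’s a minimal hashcash-style proof-of-work sketch in Python. Anubis’ actual challenge format differs in its details, and the challenge string and difficulty below are made up, but the asymmetry is the same: the client grinds through candidate answers, while the server validates with a single cheap check.

```python
import hashlib
import itertools

def solve(challenge: str, difficulty: int) -> int:
    """Client side: grind nonces until the hash has `difficulty`
    leading zero hex digits. Expected cost grows as 16**difficulty."""
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce

def verify(challenge: str, difficulty: int, nonce: int) -> bool:
    """Server side: one hash. Can't be spoofed, since a made-up
    nonce almost certainly fails the prefix check."""
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

# The server issues a per-session random challenge (hypothetical value):
nonce = solve("per-session-random-string", 4)  # client burns CPU here
assert verify("per-session-random-string", 4, nonce)  # server: instant
```

Raising the difficulty by one hex digit makes the client do roughly 16× more work while the server’s check stays a single hash.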
rtxn@lemmy.world to Selfhosted@lemmy.world • Why are anime catgirls blocking my access to the Linux kernel? • 30 days ago

That’s the great thing about Anubis: it’s not client-side. Not entirely, anyway. Similar to public-key encryption schemes, it exploits the computational complexity of certain functions. The client can’t just say “solved, let me through”; it has to calculate a number, based on the parameters of the challenge, that fits certain mathematical criteria, and then present it to the server. That’s the “proof of work” component.
A challenge could be something like “find the two prime factors of the semiprime 1522605027922533360535618378132637429718068114961380688657908494580122963258952897654000350692006139”. This number is known as RSA-100; it was first factorized in 1991, which took several days of CPU time, but checking the result is trivial since it’s just integer multiplication. A similar semiprime of 260 decimal digits still hasn’t been factorized to this day. You can’t get around mathematics, no matter how advanced your AI model is.
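The asymmetry is easy to demonstrate at toy scale in Python. The two primes below are small stand-ins for RSA-100’s 50-digit factors; searching scales with the size of the smaller factor, while checking a claimed answer is one multiplication.

```python
def naive_factor(n: int) -> tuple[int, int]:
    """Find the smallest prime factor of a semiprime by trial division.
    Work scales with the smaller factor; for a 100-digit semiprime
    like RSA-100 this approach is hopeless."""
    if n % 2 == 0:
        return 2, n // 2
    d = 3
    while n % d:
        d += 2
    return d, n // d

# Toy semiprime built from the 9,999th and 10,000th primes;
# real challenges would use numbers with 50-digit factors.
p, q = 104723, 104729
n = p * q

# Finding the factors takes tens of thousands of trial divisions...
assert naive_factor(n) == (p, q)
# ...but validating a submitted answer is a single multiplication:
assert p * q == n
```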
rtxn@lemmy.world to Selfhosted@lemmy.world • Why are anime catgirls blocking my access to the Linux kernel? • 30 days ago

The developer is working on upgrades and better tools. https://xeiaso.net/blog/2025/avoiding-becoming-peg-dependency/
rtxn@lemmy.world to Selfhosted@lemmy.world • Why are anime catgirls blocking my access to the Linux kernel? • 30 days ago

The current version of Anubis was made as a quick “good enough” solution to an emergency. The article is very enthusiastic about explaining why it shouldn’t work, but completely glosses over the fact that it has worked, at least to an extent where deploying it and maybe inconveniencing some users is preferable to having the entire web server choked out by a flood of indiscriminate scraper requests.
The purpose is to reduce the flood to a manageable level, not to block every single scraper request.
You can go with Debian (or Devuan), easily. My home server is running Proxmox (itself Debian-based) on bare metal, a virtualized OPNsense, and multiple Debian containers on an i3-4150 and previously 4 GB of RAM (now a mismatched 10 GB).
rtxn@lemmy.world to Programmer Humor@programming.dev • Everyone knows what an email address is, right? (Quiz) • 1 month ago

All of the modern internet is built on the decaying carcasses of temporary solutions and things that seemed like a good idea at the time but are now too widely used to change.
rtxn@lemmy.world to Programmer Humor@programming.dev • What do you call your production branch? • 1 month ago

I let the interns handle those. If they survive, they get bragging rights.
rtxn@lemmy.world to Programmer Humor@programming.dev • What do you call your production branch? • 1 month ago

Branch on every commit. Never delete. If something needs to be rolled back, merge it back into HEAD. Conflict resolution only through melee combat.
It’s a joke.
Linux has two different kinds of “used” memory. One is memory allocated for/by running processes that cannot be reclaimed or reallocated to another process. This memory is unavailable. The other kind is memory used for caching (ZFS, write-back cache, etc) that can be reclaimed and allocated for other things as needed. Memory that is not allocated in any way is free. Memory that is either free or allocated to cache is available.
It looks like `htop` only shows unavailable memory as “used”, while Proxmox shows the sum of unavailable and cached memory. Proxmox “uses” 11 GB, but it’s not running out of memory because most of it is “available”.
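A small Python sketch of the distinction, parsing `/proc/meminfo`-style output (the sample numbers below are made up for illustration):

```python
def parse_meminfo(text: str) -> dict[str, int]:
    """Parse /proc/meminfo content into {field: kB} pairs."""
    fields = {}
    for line in text.splitlines():
        key, value = line.split(":", 1)
        fields[key] = int(value.split()[0])  # values are reported in kB
    return fields

# On a real system: parse_meminfo(open("/proc/meminfo").read())
sample = """MemTotal:       10240000 kB
MemFree:          512000 kB
MemAvailable:    7168000 kB
Cached:          6144000 kB"""

m = parse_meminfo(sample)
# htop-style "used" (total minus free) looks alarming...
print(m["MemTotal"] - m["MemFree"])  # 9728000 kB "used"
# ...but most of that is reclaimable cache; real headroom is:
print(m["MemAvailable"])             # 7168000 kB available
```

A tool that counts `MemTotal - MemFree` as “used” will always look close to full on a caching host; `MemAvailable` is the kernel’s estimate of what can actually be handed to new processes.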