Mustard and pickle slices for me!
A Wild Mimic appears!
Ah yes, the famously communist Russian Federation. The RF is nakedly an imperial fascist regime that doesn’t even have communist window dressing; it boggles the mind that self-proclaimed communists and anti-imperialists support them.
- 1 Post
- 93 Comments
A Wild Mimic appears!@lemmy.dbzer0.com to Piracy: ꜱᴀɪʟ ᴛʜᴇ ʜɪɢʜ ꜱᴇᴀꜱ@lemmy.dbzer0.com • Have you changed you youtube pfp to clippy ? • English · 314 · 29 days ago
Clippy already was a big step in the wrong direction from Microsoft, i’m not gonna celebrate mistakes of the past because they seem benign in comparison to today’s mistakes.
A Wild Mimic appears!@lemmy.dbzer0.com to Piracy: ꜱᴀɪʟ ᴛʜᴇ ʜɪɢʜ ꜱᴇᴀꜱ@lemmy.dbzer0.com • is it possible to pirate Microsoft Flight Simulator? (PLEASE read before commenting) • English · 5 · 29 days ago
The Fitgirl Repack is the 2020 version. There is a portable version released by Digitalzone (archive size 9.49 GB, based on the online-fix.me p2p release), which works online, but still needs a Microsoft account according to the release notes (so a burner account is preferable). Haven’t tried it, but the Digitalzone portables i did try in the past all worked well, so i expect the same here.
A Wild Mimic appears!@lemmy.dbzer0.com to Technology@beehaw.org • 4chan will refuse to pay daily UK fines, its lawyer tells BBC • 14 · 30 days ago
Since “chaotic neutral” includes the mentally deranged and insane, your description is apt. But for once I agree with 4chan - if the UK doesn’t want to see it, they will have to close their eyes.
A Wild Mimic appears!@lemmy.dbzer0.com to Showerthoughts@lemmy.world • Depth psychological psychotherapy is like unblocking a pipe with a plunger: You shake deep-seated problems until everything almost works normal again. • 7 · 30 days ago
Therapy while on ketamine, i think. It seems to work pretty well in particularly stubborn depression.
edit: DON’T just jump into the k-hole before you head to your therapist. Find a therapist who is willing to do that beforehand.
A Wild Mimic appears!@lemmy.dbzer0.com to Linux Gaming@lemmy.world • Watched a video about Debian and now I am not sure is the best for me. • English · 2 · 1 month ago
I would recommend Nobara, Bazzite or PopOS for gaming; my personal experience with Debian is that it’s a great OS, but the focus lies less on cutting-edge features or support for the latest hardware and more on stability over everything else, and the desktop environment is more of a convenience feature - Debian is very happy as a headless server. If you want an OS with record-setting uptimes, pick Debian; but for gaming you want to be on the cutting edge, and that’s simply not the case with Debian.
A Wild Mimic appears!@lemmy.dbzer0.com to Piracy: ꜱᴀɪʟ ᴛʜᴇ ʜɪɢʜ ꜱᴇᴀꜱ@lemmy.dbzer0.com • Can’t pay, won’t pay: impoverished streaming services are driving viewers back to piracy • English · 5 · 1 month ago
there’s a pinned post on the lidarr discord server:
> Hi everyone, it’s July 25. Yesterday, the devs and mod team here have begun (early) alpha testing of the new Lidarr metadata server. In general, things are working fairly well. There are a few issues to resolve before it can go live. But we wanted to let everyone know that we have some concrete forward movement happening behind the scenes. NOTE: This stage of testing is NOT OPEN to users. We appreciate your patience, but at this stage you cannot help. This update is meant to let you know that the project is not dead, as some have incorrectly theorized, and that there is behind-the-scenes work heading toward getting the new metadata server up and running as quickly as possible. Please continue to be patient, and continue to use this channel for Lidarr support questions. If you have other conversation topics, please use general or another more appropriate channel for that. Thank you from the devs and mod team
so it’s not dead, but it might take a bit until the new metadata server is available.
edit: there are also scattered reports from the last few days on the lidarr subreddit, from people on the development branch, that it occasionally works, so there might be movement behind the curtain.
A Wild Mimic appears!@lemmy.dbzer0.com to Piracy: ꜱᴀɪʟ ᴛʜᴇ ʜɪɢʜ ꜱᴇᴀꜱ@lemmy.dbzer0.com • Can’t pay, won’t pay: impoverished streaming services are driving viewers back to piracy • English · 2 · 1 month ago
startups in a nutshell
A Wild Mimic appears!@lemmy.dbzer0.com to Showerthoughts@lemmy.world • The 2025 version of "Please consider this environment before printing this email" should be "Please consider this environment before using A.I. to respond to this email" • 1 · 1 month ago
a) many people swear by it - is it so useless then? It’s a personal question, and the answer is not the same for everyone. ChatGPT is one of the most downloaded apps worldwide, so to assume that all of those people do not gain something from it cannot be right.
b) people do a lot of useless things that all have a climate cost, but no one bats an eye when someone says they watched 2 hours of 4K video, which uses a lot more resources than chatbots.
c) chatbots that no one interacts with do not consume resources in a meaningful way, since you have to make a request for that - the greeting will be hardcoded in 99% of cases.
d) i agree that the amount of VC money inflates AI usage, but VC money does this with everything: dotcom bubble, 2008 crash (housing bubble), crypto bubble, NFT bubble… the difference here is that people actually have personal use scenarios, regardless of VC money.
e) I agree that opt-in should be the default. I’m no fan of google’s bot, but i actually just don’t use google; i use mullvad leta for most things, and i’m waiting for the rollout of the european search index that qwant and ecosia created - it went live in france recently and i’m excited to try it out when it starts here!
You know what’s funny? i don’t even use ChatGPT, i rely on locally running models - and i’m pretty sure my GPU is less efficient than the setup in a data center.
A Wild Mimic appears!@lemmy.dbzer0.com to Showerthoughts@lemmy.world • The 2025 version of "Please consider this environment before printing this email" should be "Please consider this environment before using A.I. to respond to this email" • 2 · 1 month ago
it’s more like 0.0017% of 4.32 grams of weed, see my response above
A Wild Mimic appears!@lemmy.dbzer0.com to Showerthoughts@lemmy.world • The 2025 version of "Please consider this environment before printing this email" should be "Please consider this environment before using A.I. to respond to this email" • 2 · 1 month ago
you got an error in magnitude there: it’s 5.32 × 10¹⁴ requests, i.e. 532 407 407 407 407 requests
- 2.3 billion tonnes of CO2 (the Covid-era drop) = 2.3 trillion kilos = 2.3 quadrillion grams
- (2.3 quadrillion grams) / (4.32 grams per request) ≈ 532 407 407 407 407 requests needed for equivalence to the Covid CO2 drop
- (2.5 billion requests per day) × 365 ≈ 912 500 000 000 requests made per year
- 912 500 000 000 / 532 407 407 407 407 ≈ 0.0017, i.e. about 0.17%
See what i mean? Stop ChatGPT entirely for a year and you achieve about 0.17% of the reduction that Covid brought.
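For anyone who wants to check the arithmetic, here is the same calculation as a small Python sketch; the 2.3 billion tonne Covid drop, the 4.32 g CO2 per request and the 2.5 billion requests per day are the rough estimates used in this thread, not measured values:

```python
# Rough sanity check of the numbers above.
# Assumed inputs (thread estimates, not measurements):
covid_drop_g = 2.3e9 * 1e6        # 2.3 billion tonnes of CO2, in grams
g_per_request = 4.32              # grams of CO2 per ChatGPT request
requests_per_year = 2.5e9 * 365   # 2.5 billion requests per day

requests_to_match_covid = covid_drop_g / g_per_request
share = requests_per_year / requests_to_match_covid

print(f"{requests_to_match_covid:.3e} requests ≈ one Covid-sized CO2 drop")
print(f"one year of requests ≈ {share:.4f} of that drop, i.e. {share:.2%}")
# -> ~5.324e+14 requests; one year ≈ 0.0017 of the drop, about 0.17%
```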
A Wild Mimic appears!@lemmy.dbzer0.com to Showerthoughts@lemmy.world • The 2025 version of "Please consider this environment before printing this email" should be "Please consider this environment before using A.I. to respond to this email" • 2 · 1 month ago
i have had no car my entire life and have flown 4 times (i like trains lol).
Ok, something nearly everyone does - washing clothes.
A 500 W washing machine uses this amount of energy in 10.8 seconds during the spin cycle.
Using a dryer? 4.3 seconds at 3000 W.
If we look at the equivalent mechanical energy, let’s look at your bike (you gotta eat to offset the energy loss, causing emissions through agriculture and cooking!):
You can generate a chatbot response by pedaling for ~2 minutes at 100 W!
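Converting those examples into watt-hours (energy = power × time) makes the comparison explicit; the wattages and durations are the rough figures from the comment above, not measurements:

```python
# Convert each activity above into watt-hours: E [Wh] = P [W] * t [s] / 3600
activities = {
    "washing machine, 500 W for 10.8 s": 500 * 10.8 / 3600,
    "dryer, 3000 W for 4.3 s":           3000 * 4.3 / 3600,
    "pedaling, 100 W for 2 min":         100 * 120 / 3600,
}
for name, energy_wh in activities.items():
    print(f"{name}: {energy_wh:.1f} Wh")
# all of these land in the low single-digit watt-hour range,
# i.e. the same ballpark as a single chatbot response
```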
The chips are already pretty optimized, since they are in principle not different from high-end gaming GPUs. I have a 3070 Ti, and i can generate a complex response locally in under a minute at 290 W TDP, or 30-40 seconds if i use something like Qwen 2.5 (290 W × 30-40 s ≈ 2.4-3.2 Wh), which aligns pretty well with what i’ve said before. Playing a video game uses a lot more power.
A Wild Mimic appears!@lemmy.dbzer0.com to Showerthoughts@lemmy.world • The 2025 version of "Please consider this environment before printing this email" should be "Please consider this environment before using A.I. to respond to this email" • 2 · 1 month ago
But even with a loss leader you cannot crank up the price stupidly high if the costs per request were prohibitive; ChatGPT subscriptions cost $20/month. API pricing for the most expensive option is $10k per 1 million tokens, so it’s a buck per 100 tokens.
A Wild Mimic appears!@lemmy.dbzer0.com to Showerthoughts@lemmy.world • The 2025 version of "Please consider this environment before printing this email" should be "Please consider this environment before using A.I. to respond to this email" • 4 · 1 month ago
sorry, it should be 3 Wh, you are correct of course. The 3 Wh figure comes from here:
- Aug 2023: 1.7 - 2.6 Wh - Towards Data Science
- Oct 2023: 2.9 Wh - Alex de Vries
- May 2024: 2.9 Wh - Electric Power Research Institute
- Feb 2025: 0.3 Wh - Epoch AI
- May 2025: 1.9 Wh for a similar model - MIT Technology Review
- June 2025: Sam Altman himself claims it’s about 0.3 Wh
Every source here stays below 3 Wh, so it’s reasonable to use 3 Wh as an upper bound. (I wouldn’t trust Altman’s 0.3 Wh tho lol)
A Wild Mimic appears!@lemmy.dbzer0.com to Showerthoughts@lemmy.world • The 2025 version of "Please consider this environment before printing this email" should be "Please consider this environment before using A.I. to respond to this email" • 2 · 1 month ago
Well, that depends on the workload and the employer. If you are one of the lucky ones where it’s just important that shit gets done on time, it would result in lower usage. That’s on the employer, not on the LLM.
3 Wh/request (4 Wh if you include training the model) is nothing compared to what we use in our everyday life, and it’s even less when looking at what other activities consume. No one would have an issue with you running a blender for 30 seconds, even tho it’s about the same energy usage as a chatbot request.
A Wild Mimic appears!@lemmy.dbzer0.com to Showerthoughts@lemmy.world • The 2025 version of "Please consider this environment before printing this email" should be "Please consider this environment before using A.I. to respond to this email" • 6 · 1 month ago
The thing is: those AI datacenters are used for a lot of things. LLM usage amounts to about 3% of it; the rest is for stuff like image analysis, facial recognition, market analysis, recommendation services for streaming platforms and so on. And even the water usage is not really the big ticket item:
The issue of placement of data centers is another discussion, and i agree with you that placing data centers in locations that are not able to support them is bullshit. But people seem to simply not realize that everything we do has a cost. The US energy system uses 58 trillion gallons of water in withdrawals each year. ChatGPT uses about 360 million liters/year, which comes down to 0.006% of America’s water usage per year. An average American household uses about 160 gallons of water per day; ChatGPT requests use about 20-50 ml/request. If you want to save water, go vegan or fix water pipes.
A Wild Mimic appears!@lemmy.dbzer0.com to Showerthoughts@lemmy.world • The 2025 version of "Please consider this environment before printing this email" should be "Please consider this environment before using A.I. to respond to this email" • 3 · 1 month ago
Whether it’s useful or not is another discussion, but if you used an LLM to write an email in 2 minutes that you would otherwise spend 10 minutes on (including searches and whatever), you actually generate LESS CO2 than with the manual process:
- PC, 200 W, 10 min: ~33 Wh
- Monitor, 30 W, 10 min: 5 Wh
- Google searches, let’s say 3, at about 0.3 Wh/search: ~1 Wh
equals ~40 Wh
compared to:
- PC, 200 W, 2 min: ~7 Wh
- Monitor, 30 W, 2 min: 1 Wh
- ChatGPT, 3 requests at ~3 Wh each (normally you would expect fewer requests than google searches, but for the argument…): 9 Wh
equals ~17 Wh.
And that is excluding many other factors like general energy costs for infrastructure, which tilt the calculation further in ChatGPT’s favor.
EVERYTHING we do creates emissions one way or another - we create emissions by simply existing too; it’s important to put things into perspective. Both examples above are comparable to running a 1000 W microwave for roughly 2:20 min or 1:05 min. You wouldn’t be shocked by those values.
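Here is the same back-of-the-envelope comparison as a small Python sketch; the wattages, the 10-vs-2-minute assumption and the ~3 Wh/request and ~0.3 Wh/search figures are this thread’s rough estimates, not measurements:

```python
# Back-of-the-envelope check of the email comparison above.
# All inputs are the rough assumptions from this thread, not measurements.
WH_PER_CHATGPT_REQUEST = 3.0   # upper-bound per-request estimate used above
WH_PER_GOOGLE_SEARCH = 0.3

def wh(power_watts: float, minutes: float) -> float:
    """Energy in watt-hours for a device drawing power_watts for `minutes`."""
    return power_watts * minutes / 60

manual   = wh(200, 10) + wh(30, 10) + 3 * WH_PER_GOOGLE_SEARCH    # PC + monitor + 3 searches
with_llm = wh(200, 2)  + wh(30, 2)  + 3 * WH_PER_CHATGPT_REQUEST  # PC + monitor + 3 requests

print(f"manual email:        {manual:.1f} Wh")    # ~39 Wh
print(f"LLM-assisted email:  {with_llm:.1f} Wh")  # ~17 Wh
# equivalent runtime of a 1000 W microwave, in seconds
print(f"microwave equivalent: {manual * 3.6:.0f} s vs {with_llm * 3.6:.0f} s")
```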
A Wild Mimic appears!@lemmy.dbzer0.com to Showerthoughts@lemmy.world • The 2025 version of "Please consider this environment before printing this email" should be "Please consider this environment before using A.I. to respond to this email" • 3 · 1 month ago
that’s the online figure, i.e. what is used in the data centers per request. Local is probably not so different, depending on the device - different devices have different architectures which might be more or less optimal, but the cooling is passive. If it cost more, it wouldn’t be mostly free.
This is a pretty well researched post, he made a cheat sheet too ;-)
A Wild Mimic appears!@lemmy.dbzer0.com to Showerthoughts@lemmy.world • The 2025 version of "Please consider this environment before printing this email" should be "Please consider this environment before using A.I. to respond to this email" • 1 · 1 month ago
Yeah, it’s stupid that they built it where it’s not supported by the necessary infrastructure. Btw, do you know what happens to the water after they use it to cool the servers? It gets put back into the river; it doesn’t disappear. This situation is an infrastructure issue, not an AI issue.
That’s just the first round of IP law vs the AI corpos. I’m really interested in whether this process will fix the broken copyright system or fix the AI corpos - one of the two has to give.