• 0 Posts
  • 12 Comments
Joined 2 years ago
Cake day: June 9th, 2023

  • I’m bitterly clinging to my iPhone 13 mini, because I suspect it’s the last phone I’ll ever actively enjoy. I went along with bigger phones when that became the trend and decided I didn’t like them, and the mini line was such a relief to go back to. Once it’s no longer tenable, I’ll probably just buy a series of “the least bad used phone I can find” because I know I’ll be mildly frustrated every time I use it.


  • I’m still using an iPhone mini and I haven’t experienced any bad layouts, broken websites, or any difficulty like that. It has the same resolution as the biggest iPhone I’ve ever had (the iPhone X), so things are smaller, which would make it a poor fit for someone with poor vision, but for me it’s an absolutely perfect phone. It’s frustrating to know that the perfect phone for me could easily exist, and yet Apple refuses to make it. I’ll be stuck with phones I don’t like for the rest of my life, it seems.


  • I empathize with that frustration. The process of thinking you’re right, learning you’re wrong, and figuring out why is fundamentally what coding is. You’re taking an idea in one form (the thing you want to happen, in your mind) and encoding it into another, very different form: a series of instructions to be executed by a computer, and your first try is almost always slightly wrong. Humans aren’t naturally well-adapted to this task because we’re optimized for instructing other humans, who will usually do what they think you mean rather than exactly what you said, can gloss over or correct small mistakes or inconsistencies, and will act in their own self-interest when it makes sense. A computer won’t behave that way; it requires you to bend completely to how it works. It probably makes me a weirdo, but I actually like that process; it’s a puzzle-solving game for me, even when it’s frustrating.

    I do think asking an AI for help with something is a useful way to use it, and it really isn’t all that different from checking a forum (in fact, those forums are probably what it’s drawing from in the first place). Hallucinations aren’t too damaging there, because you’ll be checking the AI’s answer when you try what it says and see if it works. It’s the blind acceptance of the code it produces that I think is harmful (and it sounds like you aren’t doing that). In an IDE it’s really easy to quickly make pages of code without engaging the brain, and it works well enough to be very tempting, but not, as I’m sure you know, well enough to do the whole thing.


  • Yeah, totally fair. I’ll note that you’re kind of describing the typical software development process: a customer talks to the developer and develops requirements collaboratively with them, the developer comes back with a demo, the customer refines by going “oh, that won’t work, it needs to do it this way” or “that reminds me, it also needs to do this”, and so on. But you’re closer to playing the role of the customer in this scenario, and acting more like an editor or manager on the development side. The organizers of a game jam could make a reasonable argument that doing it this way is akin to signing up for the game jam, coming up with an idea, then having a friend who isn’t signed up implement it for you, when the point is to do it all in person, quickly, in a fun and energetic environment. The people doing a game jam like coding; that’s the fun part for them, so someone signing up and skipping all of that does have a bit of a “why are you even here, then?” aspect to it. Of course, it depends on the degree to which the AI is being used and how much editorial control or tweaking you’re doing. It’s a legitimate debate, and I don’t think you’re wrong to want to participate.


  • I’ll acknowledge that there’s definitely an element of “well I had to do it the hard way, you should too” at work with some people, and I don’t want to make that argument. Code is also not nearly as bad as something like image generation, where it’s literally just typing a thing and getting a not-very-good image back that’s ready to go; I’m sure if you’re making playable games, you’re putting in more work than that because it’s just not possible to type some words and get a game out of it. You’ll have to use your brain to get it right. And if you’re happy with the results you get and the work you’re doing, I’m definitely not going to tell you you’re doing it wrong.

    (If you’re trying to make a career of software engineering or have a desire to understand it at a deeper level, I’d argue that relying heavily on AI might be more of a hindrance to those goals than you know, but if those aren’t your goals, who cares? Have fun with it.)

    What I’m talking about is a bigger-picture thing than you and your games; it’s the industry as a whole. Much like algorithmic timelines have had the effect of turning the internet from something you actively explored into something you passively let wash over you, I’m worried that AI is creating a “do the thinking for me” button that’s going to be too tempting for people to use responsibly, and that too much code will become half-baked AI slop cobbled together by people who don’t understand what they’re really doing. There’s already enough cargo culting around software, and AI will just make it more opaque and mysterious if overused and over-relied on. But again, that’s the big picture; just like I’m not above lying back and letting TikTok wash over me sometimes, I’m glad you’re doing things you like with the assistance you get. I just don’t want that to become the only way things happen, either.


  • The irony is that most programmers were just googling and getting answers from stackoverflow, now they don’t even need to Google.

    That’s the thing, though: doing that still requires you to read the answer, understand it, and apply it to the thing you’re doing, because the answer probably isn’t tailored to your exact task. Doing this work is how you develop an understanding of what’s going on in your language, your libraries, and your own code. An experienced developer has built up those mental muscles and can probably get away with letting an AI do the tedious stuff, but more novice developers will be depriving themselves of learning what they’re actually doing if they let the AI handle the easy things, and they’ll be helpless to figure out the things the AI can’t do.

    Going from assembly to C does put the programmer at some distance from the reality of the computer, and I’d argue that if you haven’t at least dipped into some assembly and understood the basics of what’s actually going on down there, your computer science education is incomplete. But once you have that understanding, it’s okay to let the computer handle the tedium for you and only dip down to that level when necessary. The same goes for learning sorting algorithms versus just using your standard library’s sort() function. AI falls into that category too, I’d argue, but it’s so attractive that I worry it’s treating important learning as tedium and helping people skip it.

    I’m all for making programming simpler, for lowering barriers and increasing accessibility, but there’s a risk there too. Obviously wheelchairs are good things, but using one simply “because it’s easier” and not because you need to will cause your legs to atrophy, or never develop strength in the first place, and I’m worried there’s a similar thing going on with AI in programming. “I don’t want to have to think about this” isn’t a healthy attitude to have; a program is basically a collection of crystallized thoughts and ideas, and thinking it through is a critical part of the process.


  • Bluesky’s more of an aspirationally decentralized platform: you can keep your own data on your own server and use your own domain name as a username, but most of the rest of it is “centralized, but we’re designing it in such a way that we can open it up later.” Even then, it’s heavily influenced by the original idea of “let’s make something decentralized that Twitter can switch to once it’s worked out,” which means that even when they do open things up, it’s likely that a lot of Bluesky will only be practical to run yourself at “big tech company scale,” whereas Mastodon or Lemmy you can just spin up on a server and it’ll be fine until you get a lot of users.


  • I as a human being have grown up and learned from experience and the experiences of previous humans that were documented or directly communicated to me. I can see no inherent difference with an artificial intelligence learning on the same data.

    It’s a massive difference in scale. For one, before you even leave the womb, you have millions of years of evolution shaping the initial structure of your brain. Then your “training” begins, but it’s infinitely richer than anything we’re giving these LLMs: sights, sounds, smells, feelings, so many that part of what your brain is learning is what it must ignore. You also benefit from the interactivity of your environment; you can experiment with things and get feedback on what happens. As you get older and develop more skills, you can start integrating them to do even more complex things, and the people around you will use their own incredible intelligence to specifically tailor your training to what you need as you learn and grow.

    Meanwhile, an LLM is getting fed words and learning how to predict the next word. It’s a pale shadow of the complex lives humans live. Words are one of the more powerful tools we have for thinking and reasoning, so if you’re going to go all in on one skill, it’s a rich environment for learning, and in theory the contents of all of humanity’s writing probably contain all the information necessary to recreate human intelligence. But our current technology doesn’t come close to wringing every ounce of knowledge from those training sets.
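
    (If it helps to make “predict the next word” concrete, here’s a toy sketch, nothing at all like a real transformer, that just counts which word follows which in a bit of text and then generates by always picking the most common continuation; the sample text is made up for illustration.)

    ```python
    # Toy next-word predictor: count word pairs, then "generate" by always
    # picking the most frequent continuation. Real LLMs are enormously more
    # sophisticated, but the training objective has the same flavor:
    # make the next word likely given the words that came before.
    from collections import Counter, defaultdict

    text = "the cat sat on the mat and the cat slept on the mat"
    words = text.split()

    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1

    word = "the"
    output = [word]
    for _ in range(5):
        if not following[word]:
            break  # no known continuation
        word = following[word].most_common(1)[0][0]
        output.append(word)

    print(" ".join(output))  # e.g. "the cat sat on the cat"
    ```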


  • Archive Team often uses the Internet Archive to share the things they save, and obviously they have a shared goal of saving a copy of everything ever made, but they aren’t the same people. Archive Team is a vigilante white-hat hacker group (well, maybe a little bit grey), and running a Warrior basically means you’re volunteering to be part of their botnet. When a website is going to be shut down, they’ll whip together a script and push it out to the botnet to try to grab as much of the dying site as they can, and when there’s more downtime they have other projects, like trying to brute-force all those awful link shorteners so that when they inevitably die, people can still figure out where the links were supposed to point.


  • chaos@beehaw.org to Technology@beehaw.org · RSS and OPML · 10 months ago

    OPML files really aren’t much more than a list of the feeds you’re subscribed to. Individual posts or articles aren’t in there. I would expect that importing a second OPML file would just add more subscriptions, but it’d be up to the reader app to decide what it does.
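
    To make that concrete, here’s a rough sketch (just illustrative; the file name is made up, and it assumes the common layout where each feed is an <outline> element with an xmlUrl attribute) of listing the subscriptions in an OPML export using Python’s standard library:

    ```python
    # Sketch: print the feed subscriptions found in an OPML export.
    # Assumes the usual structure where each feed is an <outline> element
    # carrying an "xmlUrl" attribute; "subscriptions.opml" is a placeholder name.
    import xml.etree.ElementTree as ET

    tree = ET.parse("subscriptions.opml")
    for outline in tree.iter("outline"):
        url = outline.get("xmlUrl")
        if url:  # folders/groups are <outline> elements without a feed URL
            print(outline.get("title") or outline.get("text") or "(untitled)", url)
    ```

    Notice there’s nothing in there but names and URLs; no posts or articles, and nothing reader-specific.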


  • If you ask an LLM to help you with a legal brief, it’ll come up with a bunch of stuff for you, and some of it might even be right. But it’ll very likely do things like make up a case that doesn’t exist, or misrepresent a real case, and as has happened multiple times now, if you submit that work to a judge without a real lawyer checking it first, you’re going to have a bad time.

    There’s a reason LLMs make stuff up like that, and it’s because they have been very, very narrowly trained compared to a human. The training process is almost entirely about getting good at predicting what words follow what other words, but humans get that and so much more. Babies aren’t just associating the sounds they hear; they’re also associating the things they see, the things they feel, and the signals their body is sending them. Babies are highly motivated to learn and predict the behavior of the humans around them, and as they get older and more advanced, they get rewarded for creating accurate models of the mental states of others, mastering abstract concepts, and doing things like making art or singing songs. Their brains are many times bigger than even the biggest LLM, their initial state has been primed for success by millions of years of evolution, and their training set is every moment of human life.

    LLMs aren’t anywhere near that level. That’s not to say what they do isn’t impressive, because it really is. They can synthesize unrelated concepts together in a stunningly human way, even things they’ve never been trained on specifically. They’ve picked up a lot of surprising nuance just from the text they’ve been fed, and it’s convincing enough to make you think something magical is going on. But ultimately, they’ve been optimized to predict words, and that’s what they’re good at; although they’ve clearly developed some impressive skills to accomplish that task, they’re not even close to human level. They spit out a bunch of nonsense when what they should be saying is “I have no idea how to write a legal document, you need a lawyer for that.” But that would require them to have a sense of their own capabilities, a sense of what they know and why they know it and where it all came from, knowledge of the consequences of their actions, and a desire to avoid causing harm, and they don’t have that. And how could they? Their training didn’t include any of that; it was mostly about words.

    One of the reasons LLMs seem so impressive is that human words are a reflection of the rich inner life of the person you’re talking to. You say something to a person, your ideas get broken down and manipulated in an abstract way in their head, and then they’re turned back into words forming a response which they say back to you. LLMs are piggybacking off of that a bit: by getting good at mimicking language, they’re able to hide that their heads are relatively empty. Spitting out a statistically likely answer to the question “as an AI, do you want to take over the world?” is very different from considering the ideas, forming an opinion about them, and responding with that opinion. LLMs aren’t doing pure statistics, but you don’t have to move very far along that spectrum before the answers start seeming thoughtful.


  • In its complaint, The New York Times alleges that because the AI tools have been trained on its content, they sometimes provide verbatim copies of sections of Times reports.

    OpenAI said in its response Monday that so-called “regurgitation” is a “rare bug,” the occurrence of which it is working to reduce.

    “We also expect our users to act responsibly; intentionally manipulating our models to regurgitate is not an appropriate use of our technology and is against our terms of use,” OpenAI said.

    The tech company also accused The Times of “intentionally” manipulating ChatGPT or cherry-picking the copycat examples it detailed in its complaint.

    https://www.cnn.com/2024/01/08/tech/openai-responds-new-york-times-copyright-lawsuit/index.html

    The thing is, it doesn’t really matter that you have to “manipulate” ChatGPT into spitting out training material word-for-word; the fact that it’s possible at all is proof that, intentionally or not, that material has been encoded into the model itself. That might still be fair use, but it’s a much weaker position than the original argument, which was that nothing of the original material really remains after training; it’s all synthesized and blended with everything else to create something entirely new that doesn’t replicate the original.