Listening to another pitch about how AI can empower workers at various jobs across my industry, I was struck by the comparison in the title

3D printing, just like generative models, has its genuine niche uses, where its obvious downsides are irrelevant and it comes in handy, e.g. prototyping, replacement parts, small-series production

When it comes to the top-down AI promotion trend, it feels not unlike the idea of printing the whole product - a car, or a house - down to the smallest detail: applying the least effective method, doomed to a worse-than-average outcome due to technological limitations

And screws - a thing we nailed down long ago, and one completely incompatible with that mode of production - are a screaming, growling, shrieking example of how helpful tech can be misapplied in the most stupid way

  • kescusay@lemmy.world · 2 days ago

    So there are a few very specific tasks that LLMs are good at from the perspective of a software developer:

    1. Certain kinds of analysis tasks can be done very quickly and efficiently with Copilot in agent mode. For instance, having it assess your existing code for adherence to stylistic standards where a violation isn’t going to trigger a linting error.
    2. Quick script writing is something it excels at. There are all kinds of circumstances where you might need an independent script, such as a database seed file. It’s not part of the application itself, but it’s a useful utility to have, and Copilot is good at writing them (see the sketch below the list).
    3. Scaffolding a new application. If you’re creating something brand new and know which tools you want to use for it, but don’t want to go through the hassle of setting everything up yourself, having Copilot do it can be a real time saver.
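
    As a minimal sketch of the seed-file idea from point 2, here’s roughly what such a script looks like (the users table, its columns, and the choice of node-postgres are assumptions purely for illustration, not any real schema):

    ```typescript
    // seed.ts - a standalone seed script, deliberately separate from the application itself.
    // The "users" table and its columns are hypothetical, for illustration only.
    import { Client } from "pg";

    const seedUsers = [
      { email: "alice@example.com", role: "admin" },
      { email: "bob@example.com", role: "member" },
    ];

    async function main(): Promise<void> {
      // Assumes DATABASE_URL points at a local/dev database.
      const client = new Client({ connectionString: process.env.DATABASE_URL });
      await client.connect();
      try {
        for (const user of seedUsers) {
          // Idempotent insert so the script can be re-run without duplicating rows
          // (assumes a unique constraint on email).
          await client.query(
            "INSERT INTO users (email, role) VALUES ($1, $2) ON CONFLICT (email) DO NOTHING",
            [user.email, user.role],
          );
        }
        console.log(`Seeded ${seedUsers.length} users`);
      } finally {
        await client.end();
      }
    }

    main().catch((err) => {
      console.error(err);
      process.exit(1);
    });
    ```

    That kind of small, self-contained utility - easy to verify just by reading it - is exactly where it tends to do well.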

    And that’s… pretty much it. I’ve experimented with building applications with “prompt engineering,” and to be blunt, I think the concept is fundamentally flawed. The problem is that once the application exceeds the LLM’s context window size, which is necessarily small, you’re going to see it make a lot more mistakes than it already does, because - just as an example - by the time you’re having it write the frontend for a new API endpoint, it’s already forgotten how that endpoint works.

    As the application approaches production size in features and functions, the number of lines of code becomes an insurmountable bottleneck for Copilot. It simply can’t maintain a comprehensive understanding of what’s already there.

    • Omgpwnies@lemmy.world · 1 day ago

      I use it to generate unit tests; it’ll get the bulk of the code writing done and does a pretty good job at coverage, usually hitting 100%. All I have to do for the most part is review the tests to make sure they’re doing the right thing, and mock out some stuff that it missed.
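
      For a rough idea of what that looks like, here’s a sketch of a generated test plus the kind of mock I’d add by hand (the cart module, the tax service, and the Jest setup are hypothetical, purely for illustration):

      ```typescript
      // cart.test.ts - roughly the shape of a generated test after review.
      // The cart/tax-service modules are hypothetical, for illustration only.
      import { totalWithTax } from "./cart";
      import { fetchTaxRate } from "./taxService";

      // Mocking the external dependency is the part the tool missed and was added by hand.
      jest.mock("./taxService");
      const mockedFetchTaxRate = fetchTaxRate as jest.MockedFunction<typeof fetchTaxRate>;

      beforeEach(() => {
        // Reset call counts between tests so the "not called" assertion below is reliable.
        jest.clearAllMocks();
      });

      describe("totalWithTax", () => {
        it("applies the tax rate returned by the tax service", async () => {
          mockedFetchTaxRate.mockResolvedValue(0.1); // 10% tax
          await expect(totalWithTax([100, 50])).resolves.toBeCloseTo(165);
        });

        it("returns 0 for an empty cart without calling the tax service", async () => {
          await expect(totalWithTax([])).resolves.toBe(0);
          expect(mockedFetchTaxRate).not.toHaveBeenCalled();
        });
      });
      ```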

      • kescusay@lemmy.world · 1 day ago

        You’re right, unit tests are another area where they can be helpful, as long as you’re very careful to check them over.

    • JustTesting@lemmy.hogru.ch · 2 days ago

      One other use case where they’re helpful is ‘translation’. Like, I have a Docker Compose file and want a Helm chart or Kubernetes YAML files for the same thing. It can get you like 80% of the way there and save you a lot of YAML typing.

      Won’t work well if it’s more than like 5 services, or if you wanted to translate a whole code base from one language to another. But converting one kind of file into another with a different language or technology can work OK - roughly the sort of thing sketched below. Anything to write less YAML…
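
      As a rough sketch of the kind of translation meant (the service name, image, and ports are made up, and a real Helm chart would template these values rather than hard-code them):

      ```yaml
      # Hypothetical input: a single service from docker-compose.yml
      # services:
      #   web:
      #     image: example/web:1.2.3
      #     ports:
      #       - "8080:80"
      #     environment:
      #       - LOG_LEVEL=info

      # Roughly the Kubernetes equivalent the model is asked to produce:
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: web
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: web
        template:
          metadata:
            labels:
              app: web
          spec:
            containers:
              - name: web
                image: example/web:1.2.3
                ports:
                  - containerPort: 80
                env:
                  - name: LOG_LEVEL
                    value: "info"
      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: web
      spec:
        selector:
          app: web
        ports:
          - port: 8080
            targetPort: 80
      ```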

    • WorldsDumbestMan@lemmy.today · 1 day ago

      They are getting faster, gaining larger context windows, and becoming more accurate. It is only a matter of time until AI simply copy-cats 99.9% of the things humans do.

      • kescusay@lemmy.world · 1 day ago

        Actually, there’s growing evidence that beyond a certain point, more context drastically reduces their performance and accuracy.

        I’m of the opinion that LLMs will need a drastic rethink before they can reach the point you describe.