• Ephera@lemmy.ml · 11 days ago

    I work on a relatively niche open-source project where we hardly ever get outside contributions, so we’ve only gotten one AI-generated pull request so far, but that one was just incredible.

    It failed immediately with a compile error, meaning the person who opened it had never even tried to compile the code. Like, that’s the one unequivocal problem with AI-generated code: folks think they can contribute without understanding anything about the project setup, the context, or really even what they’re supposed to be doing.

    I’m pretty sure they just copy-pasted the issue text we had written into a chatbot and hoped it would cough up the right solution. But the issue text didn’t describe all the necessary steps. Anyone actually looking to solve that issue would have had to enter into a discussion with us to work out those details, which a human would probably have understood.

    Meanwhile, the LLM changed maybe ⅕ of the files it should have changed, and in ways that looked plausible but came nowhere near addressing the real complexities, or even handling the simple stuff consistently.
    It’s genuinely not even the start of a solution, because the complexity isn’t in producing the code. The complexity lies in thinking through a solution, which is just not something an LLM can do.