• ryannathans@aussie.zone · 18 hours ago

    Some models are getting so good they can patch user-reported software defects following test-driven development, with minimal or no changes required in review. Specifically, Claude Sonnet and Gemini.

    So the claims are at least legit in some cases.

    • 6nk06@sh.itjust.works · 17 hours ago

      Oh good. They can show us how it’s done by patching open-source projects for example. Right? That way we will see that they are not full of shit.

      Where are the patches? They have trained on millions of open-source projects after all. It should be easy. Show us.

      • JustinTheGM@ttrpg.network · 15 hours ago

        That’s an interesting point, and it leads to a reasonable argument: if an AI is trained on a given open-source codebase, developers should have free access to use that AI to improve said codebase. I wonder whether future open-source licenses might include such clauses.

      • ryannathans@aussie.zone · 13 hours ago

        Are you going to spend your tokens on open source projects? Show us how generous you are.

        • 6nk06@sh.itjust.works · 12 hours ago

          I’m not the one trying to prove anything, and I think it’s all bullshit. I’m waiting for your proof, though. Even with a free, open-source black box.