• TimeSquirrel@kbin.melroy.org · 2 months ago

This has been obvious for a while to those of us using GitHub Copilot for programming. Start a function, then just keep hitting Tab to let it autocomplete based on what it already wrote. It quickly devolves into strange, random bullshit. You've got to babysit it.

    • 0laura@lemmy.world · 2 months ago

      Very unlikely to stem from model collapse. Why would they use a worse model? More likely they neutered it or gave it fewer resources.

      • TimeSquirrel@kbin.melroy.org · 2 months ago

        Unlike the web-based LLMs, it learns from your own code as you type so it can offer more relevant suggestions. That means you can make it feed back on itself.

    • NekuSoulA · 1 month ago

      Same thing with Stable Diffusion: if you've ever used a generated image as the input and repeated the same prompt, you basically get a deep-fried copy.

        • NekuSoulA · 1 month ago

          Oh yeah, you're right. Both are degradation of a sort, but through entirely different causes.
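The "deep-fried copy" effect above is a feedback loop: each img2img pass re-encodes its own output, so small losses compound across generations. A toy sketch of that dynamic (not actual Stable Diffusion code; quantization plus noise is a hypothetical stand-in for one lossy re-generation step):

```python
# Toy model of feeding a lossy generator its own output.
# Each "generation" coarsely quantizes the signal and adds small noise,
# standing in for an img2img re-encode; fine detail collapses over passes.
import random

random.seed(0)

def regenerate(pixels, step=16):
    out = []
    for p in pixels:
        q = round(p / step) * step          # lossy quantization
        q += random.choice([-1, 0, 1])      # generator noise
        out.append(max(0, min(255, q)))     # clamp to valid pixel range
    return out

image = [random.randrange(256) for _ in range(1000)]  # "original" image
initial_distinct = len(set(image))
print("distinct pixel values, generation 0:", initial_distinct)

for gen in range(1, 6):
    image = regenerate(image)
print("distinct pixel values, generation 5:", len(set(image)))
```

The distinct-value count drops sharply after a few generations and then plateaus: the quantizer maps its own output back onto the same coarse levels, which mirrors how repeated img2img passes exaggerate the model's own artifacts instead of recovering detail.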