A long form response to the concerns and comments and general principles many people had in the post about authors suing companies creating LLMs.

  • Gumby@beehaw.org
    1 year ago

I think this has to do with intent. If I read a book to use it as the basis for a play, that would be illegal. If I read it for enjoyment, that is legal. Since AI does not read for enjoyment, but only to use the text as the basis for creating something else, that would be illegal.

    Is my logic flawed?

    • Umbrias@beehaw.org
      1 year ago

      This isn’t how it works at all. I can, and should, and do, read and consume all sorts of media with the intention of stealing from it for my own works. If you ask for writing advice, this is actually probably one of the first things you’ll hear: read how other people do it.

So "the intent of the reading" does not work as an argument, because if it did, humans could never generate any new media either.

      • Peanut@sopuli.xyz
        1 year ago

        This is the thing I kept shouting when diffusion models took off. People are effectively saying “make it illegal for neural nets to learn from anything creative or productive anywhere in any way”

Because despite the differences in architecture, I think the situation is parallel.

        If the intent and purpose of the tool was to make copies of the work in a way we would consider theft if done by a human, I would understand.

In the same way there isn't any legal protection against neural nets learning from personal and abstract information to manipulate, predict, or control the public, it should be the intended function of the tool that makes it illegal.

But people are too self-focused and ignorant to riot en masse about that one.

        The dialogue should also be in creating a safety net as more and more people lose value in the face of new technology.

But fuck any of that, what if an AI learned from a painting I made ten years ago, like any Disney/Warner-hired artist who may have learned from it? Unforgivable.

        At least peasants get to use stable diffusion instead of hiring a team of laborers.

        I don’t believe it’s reproducing my art, even if asked to do so, and I don’t think I’m entitled to anything.

Also, copyright has been fucked for decades. It hasn't served the people since long before the Mickey Mouse Protection Act.

Specific to this article: "Imagine a single human, reading a novel. Now pretend that human has a photographic memory and they can store that data perfectly."

        Because that’s what it was doing right? Perfectly reproducing the work? No? That’s just regular computers? Or a human using a computer?

        Humanity needs a big update.

        • flyingowlfox@beehaw.org
          1 year ago

Regardless of intent, let's not pretend that the scale at which LLMs "process" information to generate new content is comparable to humans. Human scale is obviously what copyright laws were intended to cover (so far).