A long-form response to the concerns, comments, and general principles many people raised in the post about authors suing companies creating LLMs.

  • Peanut@sopuli.xyz
    1 year ago

    This is the thing I kept shouting when diffusion models took off. People are effectively saying “make it illegal for neural nets to learn from anything creative or productive, anywhere, in any way.”

    Because despite the differences in architecture, I think the situations are parallel.

    If the intent and purpose of the tool were to make copies of the work in a way we would consider theft if done by a human, I would understand.

    In the same way that there isn’t any legal protection against neural nets learning from personal and abstract information to manipulate, predict, or control the public, it’s the intended function of the tool that should make it illegal.

    But people are too self-focused and ignorant to riot en masse about that one.

    The dialogue should also be about creating a safety net as more and more people lose value in the face of new technology.

    But fuck any of that, what if an AI learned from a painting I made ten years ago, like any of the Disney/Warner-hired artists who may have learned from it? Unforgivable.

    At least peasants get to use Stable Diffusion instead of hiring a team of laborers.

    I don’t believe it’s reproducing my art, even if asked to do so, and I don’t think I’m entitled to anything.

    Also, copyright has been fucked for decades. It hasn’t served the people since long before the Mickey Mouse Protection Act.

    Specific to this article: “Imagine a single human, reading a novel. Now pretend that human has a photographic memory and they can store that data perfectly.”

    Because that’s what it was doing, right? Perfectly reproducing the work? No? That’s just regular computers? Or a human using a computer?

    Humanity needs a big update.

    • flyingowlfox@beehaw.org
      1 year ago

      Regardless of intent, let’s not pretend that the scale at which LLMs “process” information to generate new content is comparable to humans. Human scale is obviously what copyright law (so far) was written for.