The Internet Watch Foundation has found a manual on the dark web encouraging criminals to use software tools that remove clothing. The manipulated image could then be used against the child to blackmail them into sending more graphic content, the IWF said.

  • Grimy@lemmy.world · 2 months ago

    What you are asking for is equivalent to stopping people from writing literotica about children using Word.

    Nobody is advocating for or defending child literotica, but most understand that it would take draconian measures to stop it. Word would have to be entirely online, and everything written would have to pass through a filter to verify it isn’t something illegal.

    By its very nature, it’s very difficult to remove such things from generative models. The one solution I can think of would be to take children out of the models entirely.

    The problem is that this isn’t the solution being proposed. Sadly, all the legislation currently on the table is meant to do one thing: create and cement a monopoly around AI.

    I’m ready to tackle all the issues involving AI, but the main current issue is a handful of companies trying to rip it out of our hands and playing on people’s emotions to do so. Once that’s dealt with, we can take care of the 0.01% of users who are generating CP.