A very NSFW website called Pornpen.ai is churning out an endless stream of graphic, AI-generated porn. We have mixed feelings.
The report came from a (non-US) government agency. It wasn’t reported as AI-generated; that was what we discovered.
But it highlights the reality - while AI-generated content may be fairly obvious for now, it won’t be forever. Real CSAM could be mixed in at some point, or, hell, the people generating it could be feeding real CSAM into a model to recreate it in a manner that makes it harder to detect.
So what does this mean for hosting providers? We continuously receive reports for a client, and each time we have to review them and what, use our best judgement to decide whether it’s AI-generated? We add the client to a list and ignore CSAM reports for them? We tell the government that it’s not “real CSAM” and expect it to end there?
No legitimate hosting provider is going to knowingly host CSAM, AI generated or not. We aren’t going to invest legal resources into defending that, nor are we going to jeopardize the mental well-being of our staff by increasing the frequency of those reports.
Very true, and I would like to look into it further. Being able to disguise real content with an AI label could make things harder for the people who detect and report these types of issues.
I don’t understand the logic behind this. If it’s your job to analyze and deduce whether certain content is or is not acceptable, why shouldn’t you make assessments on a case-by-case basis? Even if you remove CSAM from the equation, you still have to continuously sift through content and report any and all illegal activity - regardless of its frequency.
And it’s the right of any website or hosting provider not to show content they deem unsuitable for its viewers. But this is a non sequitur - illegal activity will never stop, and it’s the duty of people like you to help combat the distribution of such materials. I appreciate all the work people like you do, and it’s a job I couldn’t handle. CP exists and will continue to exist. It’s just an ugly truth. I’m just asking a very uncomfortable question that will hopefully result in a very positive answer: can AI-generated CP reduce the harm done to children?
Here’s a very interesting article on the potential positive effects of AI-generated CP.
Btw, I appreciate your input in all of this. It means a lot coming from someone actually involved with this sort of thing.
Edit: and to your point, the article ends with a very real warning:
"Of course, using AI-generated images as a form of rehabilitation, alongside existing forms of therapy and treatment, is not the same as allowing its unbridled proliferation on the web.
“There’s a world of difference between the potential use of this content in controlled psychiatric settings versus what we’re describing here, which is just, anybody can access these tools to create anything that they want in any setting,” said Portnoff, from Thorn."
The bit about “ignoring it” was more in jest. We do review each report and handle it on a case-by-case basis. My point with that statement is that someone hosting questionable content is going to generate a lot of reports, regardless of whether it is illegal or not, and we won’t take an operating loss to let them keep hosting with us.
Usually we try to determine whether it was intentional or not. If someone is hosting CSAM and is quick and responsive in resolving the issue, we generally won’t immediately terminate them for it. But even if our client is a victim, we are not required to host for them, and after a certain point we will terminate them.
So when we receive a complaint about a user hosting CSAM, review it, and see they are hosting a site that advertises itself as a place for users to distribute AI-generated CP, we aren’t going to let them continue hosting with us.
This is not an accurate statement, at least in the U.S., where we are based. We are not (yet) required to sift through any and all content uploaded to our servers (not to mention that the complexity of such an undertaking makes it virtually impossible at our level). There have been a few proposed laws that would have changed that, as we’ve seen in the news from time to time. We are required to handle reports we receive about our clients.
Keep in mind that when I say we are a hosting provider, I’m referring to pretty high up the chain - we provide hosting to clients that would, say, host a Lemmy instance, or a Discord bot, or a personal NextCloud server, to name a few examples. A common dynamic is how much abuse your hosting provider is willing to put up with; if you recall the CSAM attacks on Lemmy instances, part of the discussion was the risk of getting their servers shut down.
Which is valid: hosting providers will only put up with so much risk to their infrastructure, reputation, and/or staff. That is why people who run sites like Lemmy or image hosting services usually want to take an active role in preventing abuse - whether or not they are legally liable won’t matter when we pull the plug because they are causing us an operating loss.
I’m just going to reply to the rest of your statement down here; I think I did not make my intent clear enough. I originally replied to your statement about AI being used to make CP in the future by providing a personal anecdote about it already happening. You then asked why I defined AI-generated CP as CSAM, and I clarified. I wasn’t actually responding to the rest of that message. I was not touching the discussion of what impact it might have on the actual abuse of children, merely providing my opinion as to why, legal or not, hosting providers aren’t ever going to host that content.
The content will be hosted either way. Whether it is merely relegated to “offshore” providers - still accessible via normal means and not treated as criminal content - or becomes another part of the dark web will be determined at some point in the future. It hasn’t become a huge issue yet, but it is rapidly approaching that point.