

the experiment you are referring to was specifically designed to deceive whereas AI vulnerabilities would just be simple bugs.
In my original comment, I was specifically referring to OpenClaw. Given that it doesn’t live in a vacuum and can be influenced with prompt injection, it’s not safe to assume that whatever bugs it creates aren’t specifically designed to deceive.
Secondly, the security requirements of the Linux Kernel are way more important/stringent than Lutris, which has no special access & is often even further sandboxed if installed via Flatpak.
Sure, but that’s not the point I was trying to make. You said that I don’t trust the guy to audit the code for malicious intent before committing and I gave you a reason why nobody should: if multiple people with decades of experience in a specialized domain can’t catch vulnerabilities disguised as subtle bugs, one guy who isn’t scrutinizing the changes nearly as hard definitely won’t.









It’s the same for me.
I don’t care if somebody uses Claude or Copilot if they take ownership and responsibility over the code it generates. If they ask AI to add a feature and it creates code that doesn’t fit within the project guidelines, that’s fine as long as they actually clean it up.
This is the problem I have with it too. Using something that vulnerable to prompt injection to not only write code but commit it as well shows a complete lack of care for bare minimum security practices.