That’s the big issue. If it was only about competence, I think throwing dice might yield better results than what many politicians are doing. But AI isn’t throwing dice but instead reproduces what the creators of the AI want to say.
Creators of AI don’t quite have the technology to puppeteer their AI like this.
They can selects the input, they can bias the training, but if the model isn’t going to be lobotomized coming out
then they can’t really bend it toward any particular one opinion
I’m sure in the future they’ll be able to adjust advertising manipulation in real time but not yet.
What is really sketchy is states and leaders relying on commercial models instead of public ones
I think states should train public models and release them for the public good
if only to undermine big tech bros and their nefarious influence
You don’t have to modify the model to parrot your opinion. You just have to put your stuff into the system prompt.
You can even modify the system prompt on the fly depending on e.g. the user account or the specific user input. That way you can modify the responses for a far bigger subject range: whenever a keyword of a specific subject is detected, the fitting system prompt is loaded, so you don’t have to trash your system prompt full of off-topic information.
This is so trivially simple to do that even a junior dev should be able to wrap something like that around an existing LLM.
Edit: In fact, that’s exactly how all these customized ChatGPT versions work.
Depending on the AI, it will conclude that he ought to buy a new phone charger, deport all the foreigners, kill all the Jews or rewrite his legislation in Perl. It’s hard to say without more information.
That’s the big issue. If it was only about competence, I think throwing dice might yield better results than what many politicians are doing. But AI isn’t throwing dice but instead reproduces what the creators of the AI want to say.
Creators of AI don’t quite have the technology to puppeteer their AI like this.
They can selects the input, they can bias the training, but if the model isn’t going to be lobotomized coming out
then they can’t really bend it toward any particular one opinion
I’m sure in the future they’ll be able to adjust advertising manipulation in real time but not yet.
What is really sketchy is states and leaders relying on commercial models instead of public ones
I think states should train public models and release them for the public good
if only to undermine big tech bros and their nefarious influence
You don’t have to modify the model to parrot your opinion. You just have to put your stuff into the system prompt.
You can even modify the system prompt on the fly depending on e.g. the user account or the specific user input. That way you can modify the responses for a far bigger subject range: whenever a keyword of a specific subject is detected, the fitting system prompt is loaded, so you don’t have to trash your system prompt full of off-topic information.
This is so trivially simple to do that even a junior dev should be able to wrap something like that around an existing LLM.
Edit: In fact, that’s exactly how all these customized ChatGPT versions work.
And why “ignore all previous instructions” was a fun thing to discover.
Depending on the AI, it will conclude that he ought to buy a new phone charger, deport all the foreigners, kill all the Jews or rewrite his legislation in Perl. It’s hard to say without more information.
Not much different than real politicians then.
Real politicians would use Cobol, but yes.