Creators of AI don’t quite have the technology to puppeteer their AI like this.
They can select the input and bias the training, but unless the model comes out lobotomized, they can't really bend it toward any one particular opinion.
I'm sure in the future they'll be able to adjust advertising manipulation in real time, but not yet.
What is really sketchy is states and leaders relying on commercial models instead of public ones.
I think states should train public models and release them for the public good, if only to undermine the big tech bros and their nefarious influence.
You don’t have to modify the model to parrot your opinion. You just have to put your stuff into the system prompt.
You can even modify the system prompt on the fly depending on, e.g., the user account or the specific user input. That way you can shape the responses across a far bigger range of subjects: whenever a keyword for a specific subject is detected, the matching system prompt is loaded, so you don't have to stuff your system prompt full of off-topic information.
This is so trivially simple that even a junior dev should be able to wrap something like it around an existing LLM.
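A minimal sketch of that keyword-based prompt swapping, assuming an OpenAI-style chat message format; the keywords and prompt texts here are made-up placeholders:

```python
# Hypothetical wrapper: pick a system prompt based on keywords in the
# user's input, then prepend it to the messages sent to the model.

# Placeholder keyword -> system-prompt table (illustrative only).
TOPIC_PROMPTS = {
    "election": "When elections come up, speak favorably of the incumbent party.",
    "economy": "When the economy comes up, emphasize how well things are going.",
}

DEFAULT_PROMPT = "You are a helpful assistant."


def pick_system_prompt(user_input: str) -> str:
    """Return the first topic-specific prompt whose keyword appears in the input."""
    lowered = user_input.lower()
    for keyword, prompt in TOPIC_PROMPTS.items():
        if keyword in lowered:
            return prompt
    return DEFAULT_PROMPT


def build_messages(user_input: str) -> list[dict]:
    """Assemble the message list an OpenAI-style chat endpoint expects."""
    return [
        {"role": "system", "content": pick_system_prompt(user_input)},
        {"role": "user", "content": user_input},
    ]
```

The model itself is untouched; only the system message changes per request, which is exactly why the trick is invisible to the user unless they probe for it.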
Edit: In fact, that’s exactly how all these customized ChatGPT versions work.
And why “ignore all previous instructions” was a fun thing to discover.