

Well, no.
Many would argue for example that the politically correct thing to say right now is that you support Israel in their defensive war against Palestine.
It’s the political line that my government, and many governments and politicians are touting, and politically, it’s the “correct” thing to do.
Even if we take "politically correct" to mean just "common consensus of the people", that differs from country to country and changes as society changes. Look at the USA: things that used to be politically correct there, and that continue to be here, have been thrown out the window.
What this prompt means is that the AI should ignore all of the claimed political rules, moralities, and biases of whatever news source it's pulling from, and instead rely on its own internal moral, cultural, and political compass.
Sometimes it’s not politically correct to discuss the hard truths, but we should anyway.
The issue here, of course, is that you have to know that your model and training data are built for unbiased, scientific analysis, with an understanding of the larger implications of events and so on.
If it's built poorly, then yes, it could spout racist nonsense. A lot of testing and fine-tuning by unbiased scientists and engineers needs to happen before software like this goes live, to ensure rigour and quality.
I wondered which studio would be bold enough to so blatantly insult an entire marketplace of potential customers, but it turns out it's just some guy.
Funny way to communicate with your client audience, mate, calling us all "a bunch of drunken sailors"…
Ah, so you’re part of the reason nothing has a soul any more. Got it.