- cross-posted to:
- aboringdystopia@lemmy.world
cross-posted from: https://mander.xyz/post/34629331
cross-posted from: https://programming.dev/post/34472919
No. You would use a base model (e.g. GPT-4o) as a reliable foundation and add a set of rules that the chatbot follows. Every company has its own rules, and it is already common practice to add data like company-specific manuals and support documents on top. Not rocket science at all.
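For what it's worth, a minimal sketch of what I mean, using the OpenAI Python SDK with a made-up company name and rule text in the system prompt (everything company-specific here is just illustrative):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical company rules, layered on top of the base model as a system prompt.
COMPANY_RULES = """You are Acme Corp's support assistant.
- Only answer questions about Acme products, using the provided manuals.
- If you are unsure, say so and point the user to human support.
- Never give legal, medical, or financial advice."""

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": COMPANY_RULES},
        {"role": "user", "content": "How do I reset my Acme router?"},
    ],
)
print(response.choices[0].message.content)
```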
There are so many examples of this method failing that I don't even know where to start. The most visible, of course, was how that approach failed to stop Grok from "being woke" for a year or more.
Frankly, you sound like you’re talking straight out of your ass.
Sure, it can go wrong; it is not foolproof. Just like building a new model can produce unwanted surprises.
BTW, there are many theories about Grok's unethical behavior, but this one is new to me. The reasons I was familiar with are: unfiltered training data, no ethical output restrictions, programming errors or poor system maintenance, strategic mistakes (Elon!), and releasing before proper testing.
Why should any LLM care about "ethics"?
Well, obviously it won't; that's why you need ethical output restrictions.
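Concretely, one common form of such a restriction is a post-generation filter: the model's draft reply is checked against a moderation endpoint before it is shown to the user. A rough sketch, assuming the OpenAI Python SDK (the refusal message and function name are made up):

```python
from openai import OpenAI

client = OpenAI()

def guarded_reply(draft: str) -> str:
    """Return the draft only if the moderation check does not flag it."""
    check = client.moderations.create(
        model="omni-moderation-latest",
        input=draft,
    )
    if check.results[0].flagged:
        # Placeholder refusal; real systems would log the event and escalate.
        return "Sorry, I can't help with that."
    return draft
```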