

Adobe Creative Suite, most CAD software, games (those work with Proton already, so little need for this), etc.
At some point companies will be forced to accept that they’re losing revenue by not releasing a Linux version of their software.
Vague statement said authoritatively.
Trump did answer worse-- he’s never going to be more intelligent-- but Biden looked like a reanimated corpse ngl and he would have lost the vote hard
Ah okay, that makes more sense. Thanks.
Right, but we were talking about how people were less concerned when men get sent to the death camps, and then you made the point that some protections don’t apply to men. You can see the connection. I don’t believe that’s the point you were intending to make but nonetheless I felt it was necessary to voice my disagreement for the sake of a complete discussion.
I wouldn’t consider “not being sent to a death camp” to be an extra protection that only applies to specific groups of people, though
But we’re talking about a situation in which the protections are against unjust persecution. Selective lack of protections in this case is quite literally the same as selective persecution.
So based on that statistic, we should treat them differently? This line of thinking leads to some very bad places.
Don’t lock people out of making valid arguments because they sound vaguely like arguments used by other people for negative means
Nobody should be subjected to this
He can be in it for himself, and it can still be a good thing for us
Undertale also has really bad code and it’s a great game
Those are drawbacks
Your “probably not” argument gets thinner every major AI update.
Right, but I’m talking about whether they’re already using it, not whether they will in the future. It’s certainly interesting to speculate about it though. I don’t think we really know for sure how good it will get, and how fast.
Something interesting that’s come up is scaling laws. Compute, dataset size, and parameter count so far appear to set a floor on how low the error rate can go, regardless of the model’s architecture. And dataset size and model size appear to need scaling up in tandem to avoid over-/under-fitting. It’s possible, although not guaranteed, that we’re discovering fundamental laws about pattern recognition. Or maybe it’s just an issue with our current approach.
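To make the “floor” idea concrete, here’s a sketch of the empirical scaling-law form from the Chinchilla paper (loss as a function of parameter count N and token count D). The constants below are illustrative, in the ballpark of the published fit, not exact; the point is the shape of the curve, not the numbers.

```python
def loss(n_params: float, n_tokens: float,
         E: float = 1.69, A: float = 406.4, B: float = 410.7,
         alpha: float = 0.34, beta: float = 0.28) -> float:
    """Chinchilla-style scaling law: L(N, D) = E + A/N^alpha + B/D^beta.

    E is an irreducible term; the other two shrink as you scale
    parameters (N) and data (D). Constants are illustrative only.
    """
    return E + A / n_params**alpha + B / n_tokens**beta

# Scaling the model alone still leaves the data term as a floor:
small          = loss(1e8,  1e11)   # small model, fixed data
bigger_model   = loss(1e12, 1e11)   # 10,000x params, same data
scaled_in_sync = loss(1e12, 1e13)   # params and data scaled together
# bigger_model beats small, but only scaled_in_sync escapes the
# E + B/D^beta floor -- which is the over-/under-fitting point above.
```

This is why "just make it bigger" stops paying off unless the dataset grows with it.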
AlphaGo was designed entirely within the universe of Go. It is fundamentally tied to the game; a game with simple rules and nothing but rule-following patterns to analyze. So it can make good Go moves, because it has been trained on good Go moves. Or self-trained through simulated games, maybe; idk exactly how they trained it.
ChatGPT is trained the same way, but on human speech. It is very, very good at writing human speech. This requires it to be able to mimic our speech patterns, which means its mimicry will resemble coherent thought, but it isn’t thought. In short, ChatGPT is not trained to make political decisions. If you’ve seen the paper where they ask it to run a vending machine company, you can see some of the issues with trying to force it to make real-world decisions like running a political campaign.
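The mimicry point can be made concrete with a toy version of the same training setup: a model that only learns which word tends to follow which, with no notion of truth or intent. This is nothing like a transformer internally (those are vastly better pattern-matchers), but the objective is the same spirit of next-token prediction; the corpus and words here are made up for illustration.

```python
from collections import Counter, defaultdict

def train(corpus: str) -> dict:
    """Count which word follows which in the corpus (a bigram table)."""
    words = corpus.split()
    table = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        table[prev][nxt] += 1
    return table

def next_word(table: dict, word: str) -> str:
    """'Generate' by picking the most frequent successor seen in training."""
    if word not in table:
        return "<unk>"
    return table[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train(corpus)
print(next_word(model, "the"))  # -> "cat": the most common successor
```

It reproduces patterns from its training text convincingly, but asking it to decide anything just replays statistics; scaling that up doesn’t change what the objective rewards.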
You could train an AI specifically to make political campaign decisions, but I’m not aware of a good dataset you could use for it.
Could AI have been used to help run a campaign? Yes. Would it have been better than humans doing it? Probably not.
The AI isn’t helping much, but there’s also no resistance, so… rip.
Congrats!