That does make more sense, and I’ll even give it to Donnie that this is what he meant. But as usual, his communiqués are not clearly articulated.
I feel compelled to point out that “back door man” was already a common expression in blues lyrics.
I assume that’s what was being referred to.
I was thinking a nice golden throne. More appropriate for a god-emperor.
It’s okay to say “ass” on the internet, FYI.
Alright this just has me wondering which is worse, a wet fuck or a dry one…
Excellent! So immersive!
Where’s the dedicated DRADIS monitor?
Was that Edelweiss? I don’t know what to do with this.
This isn’t the most substantive of your comments in this chain, but I think it deserves some attention. It’s perfectly worded and it’s a concept more people need to embrace: you don’t have to speak in absolutes and it’s okay to express the limits of your knowledge.
“The usual? Cake and pop?” “No, April, the unusual.” “Fish and pop?” “No.” “Cake and fish?” “No fish!”
Like the infosquitos: “this guy sure loves porno!”
Yeah that’s how I feel about ads targeting children (even when the products are intended for children): they are not yet equipped to look at the ads critically and recognize when they’re being manipulated.
Do you have any theories as to why this is the case? I haven’t gone anywhere near it, so I have no idea. I imagine it’s tied up with the way it processes things from a language-first perspective, which I gather is why it’s bad at math. I really don’t understand enough to wrap my head around why we can’t seem to combine LLMs with traditional computational logic.
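For what it’s worth, the usual answer I’ve seen is “tool use”: you don’t make the model do the math, you have it emit a request that ordinary deterministic code executes. Here’s a rough sketch in Python of what I mean. Everything in it is made up for illustration (the fake_llm stub and the “CALL calculator:” format aren’t any real API), but it shows the division of labor:

```python
# Sketch of "tool use": the model's job is to recognize that a question
# needs arithmetic and emit a structured request; the host program does
# the actual computation with plain old deterministic code.

import ast
import operator

# Whitelisted operators so we aren't eval()-ing arbitrary strings.
OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def safe_eval(expr: str):
    """Evaluate a basic arithmetic expression with regular program logic."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError(f"unsupported expression: {expr!r}")
    return walk(ast.parse(expr, mode="eval"))

def fake_llm(prompt: str) -> str:
    """Stand-in for a real model call. A real model would be prompted to
    emit a structured tool request instead of guessing at the answer."""
    return "CALL calculator: 1234 * 5678"

def answer(prompt: str) -> str:
    reply = fake_llm(prompt)
    if reply.startswith("CALL calculator:"):
        expr = reply.split(":", 1)[1].strip()
        result = safe_eval(expr)  # deterministic, not generated text
        return f"{expr} = {result}"
    return reply

print(answer("What is 1234 times 5678?"))  # -> 1234 * 5678 = 7006652
```

As I understand it, the hard part in practice is getting the model to reliably produce the structured call in the first place, which is roughly what the “function calling” features on commercial models are trying to formalize.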
My sense in reading the article was not that the author thinks artificial general intelligence is impossible, but that we’re a lot farther away from it than recent events might lead you to believe. The whole article is about the human tendency to conflate language ability and intelligence, and the author is making the argument both that producing fluent natural language does not imply understanding of meaning and that those financially invested in current “AI” benefit from the popular assumption that it does. The appearance or perception of intelligence increases the market value of AIs, even if what they’re doing is more analogous to the actions of a very sophisticated parrot.
Edit: All of which is to say, I don’t think the article is asserting that true AI is impossible, just that there’s a lot more to it than smooth language usage. I don’t think she’d say never, but probably that there’s a lot more to figure out—a good deal more than some seem to think—before we get Skynet.
Holy crap his tie isn’t red