• 0 Posts
  • 13 Comments
Joined 24 days ago
Cake day: June 8th, 2024

  • justaderp@lemmy.world to aww@lemmy.world · Cow sleeping in someone’s lap.
    17 points · edited 6 days ago

    I know it’s just a joke. But black and brown bears are very intelligent and quite peaceful creatures. I’ve run into forty or fifty of them in the wilderness and never once felt the bear was considering an attack. They’re smart enough to recognize our complex behaviors as a large risk to their safety.

    The story of the vast majority of humans mauled by bears:

    Your dog has a perfect record of defending the pack: every single time, the target either runs or turns out to be friendly. No other pack member defends; defending is its primary reason to exist. A bear, meanwhile, has a perfect record in fights with anything but another bear.

    One day the bear smells some food, good stuff it can’t find normally. It’s some campers with their dog. The dog smells the bear, gets a full adrenaline dump over its whole reason to exist, and defends the pack. The bear wins in about one second.

    The human defends the dog. The bear fights because that’s what it’s doing right now. Then, it reconsiders and runs away. Finally, the Forest Rangers track down and kill the bear quietly, preserving the tourism the community relies on.

    We’re really shitty to bears, at least here in the US. They’re not even very dangerous relative to a wild elk, moose, or even free-range livestock. It’s the big and dumb ones you need to watch out for. And marmots. Never disagree with a marmot.




  • I’m not actually asking for good-faith answers to these questions. Asking seems the best way to illustrate the concept.

    Does the programmer fully control the extent of human meaning as the computation progresses, or is the value in leveraging ignorance of what the software will choose?

    Shall we replace our judges with an AI?

    Does the software understand the human meaning in what it does?

    The problem with the majority of the AI projects I’ve seen (in rejecting many offers) is that the stakeholders believe they have significantly more influence over the human meaning of the results than the quality and nature of the data they have access to actually allows. The scope of the data limits the scope of the resulting information, which in turn limits the scope of meaning. Stakeholders want to break these rules with “AI voodoo”. Then someone comes along and sells the suckers their snake oil.