• Wilzax@lemmy.world
    2 months ago

    The problem with self-driving cars isn’t that it’s worse than human drivers on average, it’s that it’s SO INCREDIBLY BAD when it’s wrong that no company would ever assume the liability for the worst of its mistakes.

    • pbbananaman@lemmy.world
      2 months ago

      But if the average is better, then we will clearly win by using it. I’m not following the logic of focusing on the worst-case scenarios instead of the average.

      • Wilzax@lemmy.world
        2 months ago

        A better average means fewer incidents overall. But when incidents do happen, the damages tend to be much worse. That makes victims more likely to lawyer up and go after the company responsible for the AI that was driving, which means the company that makes the self-driving software had better be prepared to pay for those worst-case scenarios, which will now be 100% their fault.

        Uber can avoid liability for crashes caused by their human drivers. They won’t be able to do the same when their fleet is AI. And when that happens, AI behavior will be judged by human standards, because courts are run by humans. The mistakes the AI makes will be VERY expensive ones, because a minor glitch can turn an autonomous vehicle from the safest driving experience possible into a rogue machine with zero sense of self-preservation. That liability is not worth the cost savings of getting rid of human drivers yet, and it won’t be for a very long time.