A neuromorphic supercomputer called DeepSouth will be capable of 228 trillion synaptic operations per second, which is on par with the estimated number of operations in the human brain

Edit: updated link, no paywall
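A quick sanity check on the headline figure. Using commonly cited ballpark estimates (~86 billion neurons, on the order of 1,000 synapses each, average firing rates of a few Hz — all assumptions, none taken from the article), the brain's synaptic throughput does land in the low hundreds of trillions of operations per second:

```python
# Back-of-envelope estimate of human-brain synaptic ops/sec.
# All figures are rough, commonly cited assumptions, not from the article.
neurons = 86e9             # ~86 billion neurons
synapses_per_neuron = 1e3  # low-end estimate (often quoted as 1,000-10,000)
firing_rate_hz = 2.7       # assumed average firing rate in Hz

ops_per_second = neurons * synapses_per_neuron * firing_rate_hz
print(f"{ops_per_second:.3e} synaptic ops/sec")  # ~2.3e14, i.e. ~230 trillion
```

With these numbers the estimate comes out around 2.3 × 10^14, which is the same order of magnitude as DeepSouth's claimed 228 trillion; picking the high-end synapse count instead would push the brain orders of magnitude past it.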

  • gibmiser@lemmy.world · 11 months ago

    For real. I’m reading the title all wondering how the fuck they mapped all the neuron connections and… nope, the truly innovative part of the story is the clickbait.

    • neuropean@kbin.social · 11 months ago

      That’s only counting connections. The brain learns by making new connections, through complex location- and timing-dependent inputs from other neurons. It’s way more complex than the raw number of connections, and if neuroscientists are still studying the building blocks, we don’t have much hope of recreating it.

      • IHeartBadCode@kbin.social · 11 months ago

        This also ignores that the brain is not wholly an electrical system. There are all kinds of chemical receptors within the brain that alter all kinds of neurological function. Kind of the reason why drugs are a thing. On small scales we have a pretty good idea how these work, at least for the receptors that we’re aware of. On larger scales it’s mostly guessing at this point. The brain has a knack for doing more than the sum of its parts on a pretty regular basis.

        • 0ops@lemm.ee · 11 months ago · edit-2

          Not to mention the scale and nature of the “dataset” that our brains were trained on. Millions of years of instinct encoded in DNA, plus a few years gathering data from dozens of senses 24/7 (including chemical receptors, like you said) and in turn manipulating our bodies, interacting with the environment, and observing the results. We’ve been doing all of this since embryo.

          We can’t just feed a model raw image and text data and expect its intelligence to be comparable to ours. However you quantify intelligence/consciousness/whatever, the text/image model’s thought processes will be alien to ours, which makes sense because their “environment” is nothing like ours - just text and image input and output.