I’ve been saying this for about a year since seeing the Othello GPT research, but it’s nice to see more minds changing as the research builds up.

Edit: Because people aren’t actually reading the article and are just commenting based on the headline, here’s a relevant part:

New research may have intimations of an answer. A theory developed by Sanjeev Arora of Princeton University and Anirudh Goyal, a research scientist at Google DeepMind, suggests that the largest of today’s LLMs are not stochastic parrots. The authors argue that as these models get bigger and are trained on more data, they improve on individual language-related abilities and also develop new ones by combining skills in a manner that hints at understanding — combinations that were unlikely to exist in the training data.

This theoretical approach, which provides a mathematically provable argument for how and why an LLM can develop so many abilities, has convinced experts like Hinton, and others. And when Arora and his team tested some of its predictions, they found that these models behaved almost exactly as expected. From all accounts, they’ve made a strong case that the largest LLMs are not just parroting what they’ve seen before.

“[They] cannot be just mimicking what has been seen in the training data,” said Sébastien Bubeck, a mathematician and computer scientist at Microsoft Research who was not part of the work. “That’s the basic insight.”
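
To get a feel for the combinatorial point, here’s a back-of-envelope sketch (my own hypothetical numbers, not Arora and Goyal’s actual math): if a model has some command of n individual skills, the number of distinct k-skill combinations is C(n, k), which outgrows any plausible training corpus almost immediately.

```python
from math import comb

# Back-of-envelope illustration (hypothetical numbers, not the paper's):
# with n individual skills, there are C(n, k) distinct k-skill
# combinations, far too many for all of them to appear in training data.
n_skills = 1000  # hypothetical count of individual language skills
for k in (2, 3, 4):
    print(f"{k}-skill combinations: {comb(n_skills, k):,}")

# 2-skill combinations: 499,500
# 3-skill combinations: 166,167,000
# 4-skill combinations: 41,417,124,750
```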

  • FaceDeer@kbin.social

    I’ve been saying this all along. Language is how humans communicate thoughts to each other. If a machine is trained to “fake” communication via language, then at a certain point it may simply be easier for the machine to figure out how to actually think in order to produce convincing output.

    We’ve seen similar signs of “understanding” in the image-generation AIs. There was a paper a few months back showing that when one of these models is asked to generate a picture, one of the first things it does is develop an internal “depth map” of the three-dimensional form of the thing it’s depicting. It turns out it’s easier to make pictures of physical objects when you have an understanding of their physical nature.
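
    The way these internal-representation results (the depth-map paper and the Othello work alike) are typically established is with a linear probe: train a very simple model to read the property of interest out of the network’s hidden activations. Here’s a minimal sketch of the idea; the data is random noise standing in for real (activation, depth-map) pairs, and none of the names or sizes come from the actual papers.

    ```python
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    # Illustrative linear-probe setup (placeholder data, hypothetical sizes).
    # If a simple linear map can recover depth from hidden activations on
    # held-out examples, that information is genuinely encoded internally.
    rng = np.random.default_rng(0)
    n_samples, hidden_dim, n_pixels = 2000, 256, 64

    activations = rng.normal(size=(n_samples, hidden_dim))  # stand-in hidden states
    depth_maps = rng.normal(size=(n_samples, n_pixels))     # stand-in depth labels

    X_train, X_test, y_train, y_test = train_test_split(
        activations, depth_maps, random_state=0
    )

    probe = Ridge(alpha=1.0).fit(X_train, y_train)
    print("held-out R^2:", probe.score(X_test, y_test))
    # On this random stand-in data the score sits near zero; a clearly
    # positive held-out score on real activations is the evidence that
    # the model represents depth internally.
    ```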

    I think the reason this gets a lot of pushback is that people don’t want to accept the notion that “thinking” may not actually be as hard or as special as we like to believe.

    • Redacted@lemmy.world

      This whole argument hinges on it being easier to produce consciousness than to fake intelligence to humans.

      Humans already anthropomorphise everything, so I’m leaning towards the latter being easier.

      • FaceDeer@kbin.social

        I’d take a step further back and say the argument hinges on whether “consciousness” is even really a thing, or if we’re “faking” it to each other and to ourselves as well. We still don’t have a particularly good way of measuring human consciousness, let alone determining whether AIs have it too.

        • Redacted@lemmy.world

          …or even if consciousness is an emergent property of interactions between certain arrangements of matter.

          It’s still a mystery, one I don’t think can be reduced to the weighted values of a network.

          • automattable@lemmy.world

            This is a really interesting train of thought!

            I don’t mean to belittle the actual, real questions here, but I can’t shake the hilarious image of 2 dudes sitting around in a basement, stoned out of their minds getting “deep.”

            Bro! What if consciousness isn’t real, and we’re just faking it

            brooooooo

      • webghost0101@sopuli.xyz

        Or maybe our current understanding of consciousness and intelligence is wrong and they are not related to each other. A non-conscious thing can embody advanced logic, like the geometrical patterns found within the overlapping orbits of planets, or the Fibonacci sequence showing up almost everywhere. We also have yet to prove that individual blades of grass or rocks aren’t fully conscious. There is so much we don’t know for certain; it’s perplexing how we believe we can just assume.

        • Redacted@lemmy.world

          Standard descent into semantics incoming…

          We define concepts like consciousness and intelligence. They may be related or may not depending on your definitions, but the whole premise here is about experience regardless of the terms we use.

          I wouldn’t say the Fibonacci sequence being found everywhere is in any way related to either, and it’s certainly not an expression of logic.

          I suspect it’s something like the simplest method nature has of controlling growth, much like how hexagons are the sturdiest shape and so appear in nature a lot.

          Grass/rocks being conscious is really out there! If that hypothesis were remotely feasible, we couldn’t talk about things being either conscious or not; it would be a sliding scale with rocks way below grass. And it would really stretch most people’s definition of consciousness.

          • webghost0101@sopuli.xyz

            I understand what you’re saying, but I disagree that there is any proper definition of the concept. The few scientists who attempt to study it can’t even agree on what it is.

            I agree that my examples were far out; they’re supposed to be, to represent ideas outside the conventional box. I don’t literally believe grass is conscious. I recognize that if I/we don’t know, then I/we don’t know. In the face of something whose nature, requirements, and purpose we don’t know, I prefer to remain open to every option.

            I know Wikipedia isn’t a scientific research paper, but I’d expect that if there really were an agreed-upon scientific answer, the entry wouldn’t read like it currently does:

            “Consciousness, at its simplest, is awareness of internal and external existence. However, its nature has led to millennia of analyses, explanations and debate by philosophers, theologians, and all of science. Opinions differ about what exactly needs to be studied or even considered consciousness. In some explanations, it is synonymous with the mind, and at other times, an aspect of mind. In the past, it was one’s “inner life”, the world of introspection, of private thought, imagination and volition. Today, it often includes any kind of cognition, experience, feeling or perception. It may be awareness, awareness of awareness, or self-awareness either continuously changing or not. The disparate range of research, notions and speculations raises a curiosity about whether the right questions are being asked.”

              • webghost0101@sopuli.xyz

                I fear it was inevitable; with no framework we can agree upon, semantics are all there is.

                I truly wish humanity had more knowledge so we could have a proper discussion, but currently it seems unproductive, especially in the context of a faceless online forum debate between two strangers.

                Thank you for your time and input on this matter.

    • TimeSquirrel@kbin.social

      The bar for what counts as actual “AI” gets raised with each advancement, too. By the standards of the ’60s, the procedural AI of the ’80s and ’90s would have fit the bill, but when it arrived we said “nope, not good enough”. And so it kept getting better and better, each time surpassing the old tech by leaps and bounds. Still, not “true” AI. Now we have ChatGPT, which some still refuse to call “AI”.

      We’re going to eventually have fully sentient artificial beings walking around amongst us and these people are going to end up being an existential threat to them, I can see it now.

      • Redacted@lemmy.world

        Think you’re slightly missing the point. I agree that LLMs will get better and better to a point where interacting with one will be indistinguishable from interacting with a human. That does not make them sentient.

        The debate is really whether all of our understanding and human experience of the world comes down to weighted values on a graph, or whether the human brain is hiding more complex, as-yet-undiscovered phenomena than that.