I asked Google Bard whether it thought Web Environment Integrity was a good or bad idea. Surprisingly, not only did it respond that it was a bad idea, it even went on to urge Google to drop the proposal.

  • localhost@beehaw.org

    That’s not entirely true.

    LLMs are trained to predict the next word given context, yes. But in order to do that, they develop an internal model that minimizes error across a wide range of contexts, and an emergent feature of this process is that the model DOES perform more than pure compression of the training data.

    For example, GPT-3 is able to solve addition and subtraction problems that didn’t appear in the training dataset. This suggests that the model learned how to perform addition and subtraction, likely because doing so was easier or more efficient than storing all of the examples from the training data separately.

    This is a simple-to-measure example, but it’s enough to suggest that LLMs can extrapolate from the training data and do more than just stitch relevant parts of the dataset together.
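    A minimal sketch of what “problems that didn’t appear in the training dataset” means as an evaluation setup (this is a hypothetical illustration of the methodology, not GPT-3’s actual training data): enumerate the problems, split them so the test set is disjoint from the training set, and only credit generalization for correct answers on held-out problems.

```python
import random

def make_disjoint_split(n_digits=2, train_frac=0.8, seed=0):
    """Enumerate all n-digit addition problems and split them so the
    held-out test set shares no problem with the training set
    (hypothetical setup for illustration)."""
    hi = 10 ** n_digits
    problems = [(a, b) for a in range(hi) for b in range(hi)]
    rng = random.Random(seed)
    rng.shuffle(problems)
    cut = int(len(problems) * train_frac)
    return set(problems[:cut]), set(problems[cut:])

train, test = make_disjoint_split()

# Disjoint by construction: a correct answer on a `test` problem
# cannot come from retrieving a memorized training example verbatim,
# so it is evidence the model computes the sum.
assert train.isdisjoint(test)
assert len(train) + len(test) == 10_000
```

    The point of the disjoint split is exactly the argument above: memorization alone cannot explain correct answers on the held-out set.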

    • fuzzzerd@programming.dev

      That’s interesting, I’d be curious to read more about that. Do you have any links to get started with? Searching this type of stuff on Google yields less than ideal results.

      • hikaru755@feddit.de

        Check out this one: https://thegradient.pub/othello/

        In it, researchers built a custom LLM trained to play a board game just by predicting the next move in a series of moves, with no input at all about the game state. They found evidence of an internal representation of the current game state, although the model had never been told what that game state looks like.
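        The “evidence of an internal representation” in that article comes from probing: training a small classifier to read the game state out of the model’s hidden activations. A toy, hypothetical version of that methodology, with synthetic “activations” standing in for a real model’s (here they encode the state linearly plus noise by construction, which is the kind of hypothesis a linear probe tests):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the probing experiment: `states` plays the role of
# the true board state, `acts` the role of hidden activations.
n, state_dim, hidden_dim = 2000, 8, 64
states = rng.integers(0, 2, size=(n, state_dim)).astype(float)
mixing = rng.normal(size=(state_dim, hidden_dim))
acts = states @ mixing + 0.1 * rng.normal(size=(n, hidden_dim))

# Fit a linear probe on half the data via least squares, then evaluate
# on the held-out half. High held-out accuracy means the state is
# decodable from the activations.
probe, *_ = np.linalg.lstsq(acts[:1000], states[:1000], rcond=None)
pred = (acts[1000:] @ probe > 0.5).astype(float)
accuracy = (pred == states[1000:]).mean()
print(f"held-out probe accuracy: {accuracy:.2f}")
```

        Accuracy is near 1.0 here only because the toy activations encode the state by construction; the interesting part of the real experiment is that probes trained on the actual transformer’s activations also recovered the board state, even though the model was only ever trained on move sequences.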