• Buffaloaf@lemmy.world · 5 points · 11 months ago

    I asked Bard how ChatGPT can fix itself, here’s what it said:

    ChatGPT can fix itself in a number of ways, including:

    Retraining on a larger dataset. ChatGPT is trained on a massive dataset of text and code, but it can always be retrained on a larger dataset. This would help ChatGPT to learn new things and improve its performance.

    Using a different algorithm. ChatGPT is currently trained using the GPT-3 algorithm, but there are other algorithms that could be used. For example, ChatGPT could be trained using the Megatron-Turing NLG algorithm, which is one of the most powerful language models in the world.

    Using a combination of algorithms. ChatGPT could also be trained using a combination of algorithms. This would allow ChatGPT to benefit from the strengths of different algorithms.

    Using a feedback loop. ChatGPT could be used to generate text, and then this text could be used to train ChatGPT. This would allow ChatGPT to learn from its own mistakes and improve its performance over time.

    Using human feedback. ChatGPT could also be used to generate text, and then this text could be reviewed by humans. The human feedback could then be used to improve ChatGPT’s performance.
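    The last two suggestions (the feedback loop plus human review) can be sketched as a simple curation step. This is just an illustrative toy, not any real ChatGPT or OpenAI API: `generate`, `score`, and the threshold are hypothetical stand-ins.

```python
# Toy sketch of "generate, have humans review, keep the good outputs for
# retraining". generate() and score() are hypothetical stand-ins.

def generate(prompt):
    # Stand-in for model inference.
    return prompt.upper()

def score(text):
    # Stand-in for a human reviewer's quality rating in [0, 1].
    return 1.0 if text.isupper() else 0.0

def curate(prompts, threshold=0.5):
    """Keep only generations a reviewer rates at or above the threshold."""
    batch = []
    for prompt in prompts:
        output = generate(prompt)
        if score(output) >= threshold:  # human-in-the-loop filter
            batch.append((prompt, output))
    return batch  # this curated batch would feed the next fine-tuning run
```

    The key design point is that nothing goes back into training unfiltered; the reviewer (or reward model) sits between generation and retraining.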

    • lagomorphlecture@lemm.ee · 10 points · 11 months ago

      “Using a feedback loop. ChatGPT could be used to generate text, and then this text could be used to train ChatGPT. This would allow ChatGPT to learn from its own mistakes and improve its performance over time.”

      So basically create its own Fox News and see how that goes.

      • FaceDeer@kbin.social · 3 points · 11 months ago

        The full suggestion includes “This would allow ChatGPT to learn from its own mistakes”, which implies that the text it generated would be evaluated and curated before being sent back into it for training. That, as well as including non-AI-generated text along with the AI-generated stuff, should stop model collapse.

        Model collapse is basically inbreeding, with similar causes and similar solutions. A little inbreeding is not inherently bad, indeed it’s used frequently when you’re trying to breed an organism to have specific desirable characteristics.
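        The mitigation described above (never training on purely model-generated text) can be sketched as capping the synthetic share of a training batch. The function name and the 20% default are illustrative assumptions, not a documented recipe.

```python
# Sketch of mixing human-written and model-generated text so the synthetic
# fraction of the training data stays capped. Proportions are illustrative.
import random

def build_training_mix(human_texts, synthetic_texts,
                       synthetic_frac=0.2, seed=0):
    """Return a shuffled batch where at most synthetic_frac of the
    samples are model-generated."""
    rng = random.Random(seed)
    # How many synthetic samples keep their share at synthetic_frac.
    n_synth = int(len(human_texts) * synthetic_frac / (1 - synthetic_frac))
    n_synth = min(n_synth, len(synthetic_texts))
    mix = list(human_texts) + rng.sample(synthetic_texts, n_synth)
    rng.shuffle(mix)
    return mix
```

        Keeping the human-written portion dominant is the "fresh genes" part of the inbreeding analogy.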

    • FaceDeer@kbin.social · 4 points · 11 months ago

      If having an AI tell researchers that they should base its next iteration off of Megatron isn’t the plot of a Michael Bay Transformers movie already, it should have been.