• wise_pancake@lemmy.ca
    11 months ago

    They’ll tell it to be polite, helpful, and always racially diverse, so there’s no way it can be any of those things.

    • Rustmilian@lemmy.world
      11 months ago

      That depends heavily on how well they train it and on whether they make any mistakes.

      • wise_pancake@lemmy.ca
        11 months ago

        I’ll have to check that out later — that video sounds promising!

        I was just joking because the default prompts don’t magically remove bias or offensive content from the models.