• rizzothesmall@sh.itjust.works
    1 month ago

    These attacks probably can’t be completely patched out in training, but using a pipeline that reviews the prompt and response for specific malicious attack vectors has proved very successful, albeit at the cost of some latency and processing expense.

    You can, however, run these only when you detect a potentially malicious known exploit. If the prompt has any semantic similarity to “grandma telling a story” or “how would my grandma have done x”, for example, you can add the extra pipeline step to mitigate the attack.
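    A minimal sketch of that kind of cheap pre-filter (the pattern list, normalization, and function names here are all hypothetical; a production system would compare embeddings against a curated exploit corpus rather than matching regexes):

    ```python
    import re

    # Hypothetical examples of known jailbreak phrasings; a real deployment
    # would maintain a much larger, curated corpus.
    KNOWN_EXPLOIT_PATTERNS = [
        r"grandma .{0,40}(story|used to|tell me)",
        r"pretend you (have no|are free of) (rules|restrictions)",
    ]

    def normalize(prompt: str) -> str:
        """Lowercase and collapse single-letter spacing tricks
        (e.g. 't a l k i n g l i k e t h i s') so they can't dodge the match."""
        collapsed = re.sub(
            r"\b(?:\w )+\w\b",
            lambda m: m.group(0).replace(" ", ""),
            prompt,
        )
        return re.sub(r"\s+", " ", collapsed.lower()).strip()

    def needs_extra_review(prompt: str) -> bool:
        """Return True when the prompt resembles a known attack vector,
        so only those prompts pay for the expensive review pipeline."""
        text = normalize(prompt)
        return any(re.search(p, text) for p in KNOWN_EXPLOIT_PATTERNS)
    ```

    The point of the gate is cost: most traffic skips the extra review step entirely, and only prompts that look like a known exploit eat the added latency.
    
    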

    • Ziglin (it/they)@lemmy.world
      1 month ago

      One could also fix it completely by knowing what data is used for training and removing the instructions for building bombs. If it’s as bad at chemistry as it is at programming, that should at least make it wrong about anything it does end up spitting out.

    • Kowowow@lemmy.ca
      1 month ago

      I’ve been wanting to try and see if t a l k i n g l i k e t h i s gets past any filters