TikTok ran a deepfake ad of an AI MrBeast hawking iPhones for $2 — and it’s the ‘tip of the iceberg’::As AI spreads, it brings new challenges for influencers like MrBeast and platforms like TikTok aiming to police unauthorized advertising.

    • KairuByte@lemmy.dbzer0.com · 1 year ago

      I’m not familiar with the term, and Google shows nothing that makes sense in context. Can you explain the concept?

        • wildginger@lemmy.myserv.one · 1 year ago

          It's AI poison. You alter the image in a way that's invisible to the human eye, but when image-generation AI ingests it as training data, the alterations ruin its ability to make correlations and recognize patterns.

          It's toxic to the entire data set too, so it can degrade the AI's output across the board, as long as the poisoned image is among the ones used to train the model.
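          For the curious, here's a rough sketch of the mechanics in Python. Real poisoning tools (Nightshade, Glaze, etc.) derive the perturbation from a surrogate model's feature gradients; the random direction below is only a stand-in for that, so this isn't an actual attack, it just shows how an edit can stay inside a per-pixel budget small enough that you won't notice it.

          ```python
          # Sketch: apply a tiny, bounded per-pixel perturbation to an image.
          # The random sign pattern stands in for the feature-space gradient a
          # real poisoning tool would compute from a surrogate model.
          import numpy as np
          from PIL import Image

          EPSILON = 4  # max change per channel (out of 255) -- visually imperceptible

          def perturb(image_path: str, out_path: str, seed: int = 0) -> None:
              img = np.asarray(Image.open(image_path).convert("RGB"), dtype=np.int16)

              rng = np.random.default_rng(seed)
              direction = rng.choice([-1, 1], size=img.shape)  # stand-in "gradient"

              poisoned = np.clip(img + EPSILON * direction, 0, 255).astype(np.uint8)
              # Save losslessly: JPEG recompression would wipe out changes this small.
              Image.fromarray(poisoned).save(out_path)

          perturb("original.jpg", "poisoned.png")
          ```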

          • P03 Locke@lemmy.dbzer0.com · 1 year ago

            That seems about as effective as those "No-AI" pictures artists like to pretend will poison AI data sets. A few altered pixels aren't going to fool an AI, and anything more aggressive than that is going to make a real image look AI-generated, ironically.

              • P03 Locke@lemmy.dbzer0.com · 1 year ago

                Wake me up when orgs like Stability AI or OpenAI are worried about this technology. As it stands now, it's not even worth mentioning, and people are freely generating whatever pictures, models, deepfakes, etc. that they want.

                • wildginger@lemmy.myserv.one · 1 year ago

                  Why would they openly worry about it? That's free advertising that it works. Not to mention, you can't poison food someone already ate. They already have full sets of scrubbed data they can revert to if they add a batch that's been poisoned. They just need to be cautious about newly added data.

                  It's not worth mentioning if you don't understand the tech, sure. But for people who make content that is publicly viewable, this is pretty important.

        • stolid_agnostic@lemmy.ml · 1 year ago

          It's sort of like CAPTCHAs. A human brain can recognize photos of crosswalks or bikes or whatever, but it's really hard to train a bot to do that. This is similar, but in image and video form.