I’ve heard arguments for both sides and I think it’s more complicated than a simple yes or no. What do you guys think?

  • simple@lemm.ee · 1 year ago

    > In my experience ONE person using the same seed will not be able to create the same image.

    I mean the actual seed, not the prompt. If you’re using something like Stable Diffusion, it gives a seed number with each image. Using the same prompt and seed number gives exactly the same image.

    > Every AI image creator has blacklisted words/tags for preventing copyright abuse or prevent creation of offensive images.

    Only the ones you can’t run locally. Most people still use Stable Diffusion because it’s the most powerful and most open, which lets you create anything you want and train it on whatever you want. You can make a model from 500 images of Kirby and it will produce similar-looking images in the same art style.

    > There’s also nothing stopping you from sitting down at your desk and drawing a picture of Kirby with a pen.

    This argument never made sense to me. If I’m drawing it, I put in the effort and made it with my own hands. AI image generators can mass-produce images. Not to mention that they’re based on other people’s work, not yours. It’s not the same.

    • auf@lemmy.ml · 1 year ago

      I don’t get it. How is the seed different from the actual data of the picture then?

      • BadlyDrawnRhino@aussie.zone · 1 year ago

        The seed is more like an address. It’s a number that gets paired with the prompt to tell the model which variation of the thing to output. Given the same seed, the same prompt, and the same model and settings, it will output the same image every time.
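To make the “same seed, same image” point concrete: the seed just initializes the pseudo-random noise the model starts denoising from, so reusing it reproduces the exact same starting point. A minimal Python sketch of that mechanism, using the standard `random` module as a stand-in for the latent-noise sampler (the `initial_noise` function is made up for illustration):

```python
import random

def initial_noise(seed, size=8):
    """Stand-in for the latent noise a diffusion model starts denoising from."""
    rng = random.Random(seed)  # the seed fully determines the sequence
    return [rng.gauss(0.0, 1.0) for _ in range(size)]

a = initial_noise(1234)
b = initial_noise(1234)  # same seed -> identical starting noise
c = initial_noise(5678)  # different seed -> different starting noise

print(a == b)
print(a == c)
```

Given the same starting noise and the same prompt and sampler settings, the denoising process is deterministic, which is why the final image comes out identical.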

    • skulblaka@kbin.social · 1 year ago

      Fair points on the locally run AIs, I admit I don’t have experience with those and didn’t realize they were run differently. I defer to your knowledge there.

      I disagree on the drawing point though. Nearly every artist learns their style by learning from other artists, in the same way that every programmer learns to code by reading other code. It IS different, but I don’t think it’s THAT different. It’s doing the exact same thing a human would do in order to create a piece of art, just faster and automated. Instead of spending ten years learning to paint in the style of Dalí, you can tell an AI to make an image in the style of Dalí and it will do exactly what a human would: inspect every Dalí painting, figure out the common ground, and figure out how to replicate it. It isn’t illegal to do that, nor do I consider it immoral, UNLESS you are profiting from the resulting image. Personally I view it as fair use of those resources.

      The sticky situation arises when we start talking about how those AIs were trained, though. I think the training sets are the biggest problem we have to solve here. Train it fully on public domain works? Sure, do what you want with it; that’s why those works are in the public domain. But when you train your AI on copyrighted works and then make money on the result? Now that’s a problem.

      • ParsnipWitch@feddit.de · 1 year ago

        As an artist, you do not look at how 300 other artists have drawn a banana; you look at a banana and try to understand how different techniques can capture its form, texture, and so on.

        An AI calculates, from hundreds of images, the probability of lines and colours being arranged in a certain way while still being interpreted as a banana. It never sees a banana or understands what one is.

        Tell me, where do you see the similarity between these two processes?