• DudeWTF@lemmynsfw.com · 10 months ago

    Honestly, may I ask, how do you perceive this?

    I have used images to help me learn how training works with AI. It is far easier to see that ass nipples are a mistake than it is to see that poor text training has resulted in a middle-aged woman with excessive hairiness and a passion for gardening now going by the name Harry Potter.

    I may have a database of images and trained models that I have used to learn, not your content in particular, and not any particularly good results. I've mostly explored why labia are so bad with Stable Diffusion, and scraped a couple of ftv galleries. I wouldn't call myself a fan of anyone really. I'm certainly not a mark in this space. My real interest is in other AI applications. Posting trained models of people seems too much of a gray area for me. At the same time, this is becoming a super powerful tool that essentially expands exposure and likely attracts the type of person that would pay for more. For example, the recent release of Open Dream makes it possible to do image layering for complex composition. I'm curious about a content creator's take here.

    • young_fun_couple@lemmynsfw.com (OP) · 10 months ago

      Is the question how I feel about someone taking my content and then using AI to create "new" content that isn't mine?

      If it's meant as inspiration, and the user is not selling it, then I don't personally have a problem with it. If it's for personal use, I think someone can do what they want with it. It's when they distribute something that uses my name/image/likeness without my consent that it becomes a big concern.

      • Redwax@lemmynsfw.com · 10 months ago

        So there are a few layers to this, and since I recognise your username as a fairly prolific poster here and on Reddit, I'm interested in whether there's a hard line and/or degrees of acceptance. Block of text ahead, so feel free to skip 😅, but I'm interested in the ethics and perceptions you might have at each of these four levels of integration into the AI 'mind', both as regards personal use and commercial use.

        You can take a single existing image and set it as the "noise" for an AI-generated photo; that is, the AI will make its own photo, but it starts with yours and changes it iteratively. You can make it adhere closely to the source image, or only very loosely. The AI can make it match a different art style, or add a little detail. Make it look like a painting. Maybe add a garter belt where there wasn't one before. That might be what this is, if you recognise the pose or photo.
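
        If you want to see what that looks like in code, here's a minimal sketch of that "start from an existing image" mode using the diffusers library; the model name, prompt and strength number are just examples I'm making up, not anything actually run on your content:

        import torch
        from PIL import Image
        from diffusers import StableDiffusionImg2ImgPipeline

        # load a base Stable Diffusion model (example checkpoint)
        pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
            "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
        ).to("cuda")

        init_image = Image.open("source_photo.png").convert("RGB")

        # strength sets how far the result may drift from the source image:
        # ~0.3 keeps pose and composition, ~0.8 keeps only a vague layout
        result = pipe(
            prompt="oil painting, garter belt",
            image=init_image,
            strength=0.4,
        ).images[0]
        result.save("variation.png")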

        You can take a bunch of photos and 'train a network'. This creates something like an "extra concept" for the AI. When this concept is loaded up alongside the much larger base model, the AI keeps everything the base already knows and, on top of that, remembers what young-fun-couple means; when prompted, even from a text prompt with no input image, it might produce something recognisably you. Again you can set the strength of the influence, from very little to strong enough that, if you have a consistently placed watermark, the AI tries to add it.
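
        In practice these 'extra concepts' are usually LoRA files or embeddings. A rough sketch of loading one on top of a base model and dialling its influence up or down; the file name, trigger word and scale here are made up for illustration:

        import torch
        from diffusers import StableDiffusionPipeline

        pipe = StableDiffusionPipeline.from_pretrained(
            "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
        ).to("cuda")

        # load the trained "extra concept" on top of the base model
        pipe.load_lora_weights(".", weight_name="young_fun_couple_lora.safetensors")

        # scale controls the influence: ~0.2 is barely there, ~1.0 is strong
        # enough to start reproducing a consistently placed watermark
        image = pipe(
            "photo of young-fun-couple in a kitchen",
            cross_attention_kwargs={"scale": 0.7},
        ).images[0]
        image.save("generated.png")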

        You can also share this 'extra network' with other users, e.g. on a site like civit.ai. Others can take that extra concept and use it too. On Civit.ai you'll find extra networks of everyone from Mr T to My Cherry Crush. Ones of real people usually come with a disclaimer that they shouldn't be used to make fake porn. Most of these are free. I do see the occasional person saying "get my new update from patreon" on previously uploaded popular networks. But having downloaded it somehow anyway, a person can now replicate scenario 2 with no effort of their own.

        People can also take this extra network and merge it into their own extra networks. Young-fun-couple gets added to the concept of "amateur porn" and becomes a drop, large or small, in the mix of amateur porn, such that when someone asks for "hot white lady", some part of that is young-fun-couple content and another part might be a bit of Xev Bellringer. Unless it is added very strongly, prompting for young-fun-couple probably no longer brings back something recognisable, but you are in there nonetheless, and with great effort and tweaking someone could probably draw it out again, mostly by negative-prompting a lot of other stuff.
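
        The merging itself is usually nothing fancier than a weighted average of the two networks' tensors. A naive sketch of how the simple merge tools do it; the file names and mix ratio are illustrative:

        from safetensors.torch import load_file, save_file

        base = load_file("amateur_style_lora.safetensors")
        extra = load_file("young_fun_couple_lora.safetensors")

        mix = 0.15  # a small drop: the named concept mostly dissolves into the blend
        merged = {}
        for key, tensor in base.items():
            if key in extra and extra[key].shape == tensor.shape:
                merged[key] = (1.0 - mix) * tensor + mix * extra[key]
            else:
                merged[key] = tensor  # keep layers the extra network doesn't touch

        save_file(merged, "amateur_mix_merged.safetensors")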

      • DudeWTF@lemmynsfw.com · 10 months ago

        To a degree, my question is: how do you feel about others being able to generate content, especially when it is limited in flexibility and quality?

        Also, I'm curious whether you see the real potential market if you flipped the perspective, adopted the tech, and used it to your advantage. Maybe it is layering and backgrounds for composition, maybe it is full-on training to generate content, or maybe it is simply maximizing time by letting the AI rework images.

        The typical image generation process most people think of turns a text prompt into an image: it starts from an image of mathematically random noise and turns it into a version of the prompt over a series of steps. There are other methods too. One method takes an image as input, overlays some noise, and then uses this as the baseline to generate from. Basically, with a blurry or bad image you can add just a small amount of noise and the AI can render it better. This isn't like photo filters or editing. I would be using this to my advantage. I would also look very carefully at what is hard to generate with AI right now and focus on making stuff that it cannot do well. There is a lot more generated content out there than I thought before I learned how this works and what AI does poorly.
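
        A rough sketch of that second, low-noise method with the diffusers library, along the lines of the img2img example above; the model name and settings are just illustrative:

        import torch
        from PIL import Image
        from diffusers import StableDiffusionImg2ImgPipeline

        pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
            "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
        ).to("cuda")

        # take a blurry/bad photo, overlay only a little noise (low strength),
        # and let the model re-render it rather than filter or edit it
        cleaned = pipe(
            prompt="sharp, well lit amateur photo",
            image=Image.open("blurry_shot.png").convert("RGB"),
            strength=0.25,
        ).images[0]
        cleaned.save("re_rendered.png")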