• 7 Posts
  • 97 Comments
Joined 2 years ago
Cake day: July 13th, 2023


  • Hyping up AI is bad, so it’s alright to call someone a promptfondler for fondling prompts.

    I mostly see “clanker” in reference to products of particularly asinine promptfondling: spambot “agents” that post and even respond to comments, LLM-based scam calls, call center replacement, etc.

    These bots don’t derive their wrongness from the wrongness of promptfondling; rather, they are part of why promptfondling is wrong.

    Doesn’t “clanker” come from some Star Wars thing where they use it like a racial slur against robots, who are basically sapient things with feelings within that fiction? Being based on “cracker” would be alright,

    I assume the writers wanted to portray the robots as unfairly oppressed, while simultaneously not trivializing actual oppression of actual people (the way “wireback” would have, or I dunno “cogger” or something).

    but the way I see it used is mostly white people LARPing a time and place when they could say the N-word with impunity.

    Well yeah that would indeed be racist.

    I’m seeing a lot of people basically going “I hate naggers, these naggers are ruining the neighborhood, go to the back of the bus nagger, let’s go lynch that nagger” and thinking that’s funny because haha it’s not the bad word technically.

    That just seems like an instance of good ol’ anti-person racism / people trying to offend other people while not particularly giving a shit about the bots one way or the other.


  • we should recognize the difference

    The what now? You don’t think there’s a lot of homophobia that follows the “castigating someone for what they do” format, or you think it’s a lot less bad according to some siskinded definition of what makes slurs bad that somehow manages to completely ignore anything that actually makes slurs bad?

    I think that’s the difference between “promptfondler” and “clanker”. The latter is clearly inspired by bigoted slurs.

    Such as… “cracker”? Given how the law protects but doesn’t bind AI, that seems oddly spot on.


  • Note also that genuine labor-saving stuff, like say the Unity engine with the Unity asset store, did result in an absolute flood of shovelware on Steam back in the mid-2010s (although that probably had as much to do with Steam FOMO-ing about the possibility of not letting the next Minecraft onto Steam).

    As a thought experiment, imagine an unreliable labor-saving tool that speeds up half* of the work 20x and slows down the other half 3x. You would end up 1.525 times slower (0.5/20 + 0.5×3 = 1.525; see the sketch at the end of this comment).

    The fraction of work (not by lines but by hours) that AI helps with is probably less than 50%, and the speedup is probably well under 20x.

    Slowdown could be due to some combination of

    • Trying to do it with AI until you sink too much time into that and then doing it yourself (>2x slowdown here).
    • Being slower at working with the code you didn’t write.
    • It being much harder to debug code you didn’t write.
    • Plagiarism being inferior to using open source libraries.

    footnote: “half” as measured by the pre-tool hours.
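
    A minimal sketch of that arithmetic (the 50% / 20x / 3x figures are just the thought experiment’s numbers, not measurements):

        /* Thought-experiment arithmetic: a tool that speeds up one fraction of
         * the work and slows down the rest. Fractions are measured in pre-tool
         * hours, per the footnote above. */
        #include <stdio.h>

        static double relative_time(double frac_helped, double speedup, double slowdown) {
            return frac_helped / speedup + (1.0 - frac_helped) * slowdown;
        }

        int main(void) {
            printf("%.3f\n", relative_time(0.5, 20.0, 3.0)); /* 1.525x slower */
            printf("%.3f\n", relative_time(0.4, 5.0, 2.0));  /* 1.280x slower with less flattering numbers */
            return 0;
        }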


  • And yet you are the one person here who is equating Mexicans and Black people with machines. People with disabilities, too, huh. Lemme guess: next time we’re pointing and laughing at how some hyped-up “PhD-level chatbot” can’t count the Es in dingleberry, you’ll be likening that to ableism.

    When you’re attempting to humanize machines by likening the insults against machines to insults against people, this does more to dehumanize people than to humanize machines.

    edit: Also, I’ve never seen and couldn’t find instances of “wireback” being used outside pro-bot sentiments and hand-wringing about how anti-bot people are akhtually racist. Have you, or is it all second- or third-hand? It’s entirely possible that it is something botlickers (can I say that or is that not OK?) came up with.

    edit: especially considering that these “anti-robot slurs” seem to originate in scifi stories where the robots are being oppressed, whereby the author is purposefully choosing that slur to undermine the position of anti-robot characters in the story. It may well be that, for the same reasons those authors had in choosing these slurs, they are rarely used in earnest.


  • To be honest, hand-wringing over “clanker” being a slur and all that strikes me as increasingly equivalent to hand-wringing over calling nazis nazis. The only thing that rubs me the wrong way is that I’d prefer the new so-called slur to be “chatgpt”, genericized and negatively connoted.

    If you are in the US, we’ve had our health experts replaced with AI; see the “MAHA report”. We’re one moron AI-pilled president away from a less fun version of Skynet, whereby a chatbot talks the president into launching nukes and kills itself along with a few billion people.

    Complaints about dehumanizing these things are even more meritless than a CEO complaining that someone is dehumanizing Exxon (which is at least made of people).

    These things are extensions of those in power, not some marginalized underdogs like the cute robots in scifi. As an extension of corporations, they already have more rights than any human - imagine what would happen to a human participant in a criminal conspiracy to commit murder, and contrast that with what happens when a chatbot talks someone into a crime.




  • Even to the extent that they are “prompting it wrong” it’s still on the AI companies for calling this shit “AI”. LLMs fundamentally do not even attempt to do cognitive work (the way a chess engine does by iterating over possible moves).

    Also, LLM tools do not exist. All you can get is a sales demo for the company stock (the actual product being sold), built to give an impression of how close to AGI the company is. You have to creatively misuse these things to get any value out of them.

    The closest they get to tools is “AI coding”, but even then, these things plagiarize code you don’t even want plagiarized (because it’s MIT-licensed and you’d rather keep up with upstream fixes).





  • I’d say it’s a combo of them feeling entitled to plagiarise people’s work and fundamentally not respecting the work of others (a point OpenAI’s Studio Ghibli abomination machine demonstrated at humanity’s expense).

    It’s fucking disgusting how they denigrate the very work on which they built their fucking business. I think it’s a mixture of the two, though: they want it plagiarized so that it looks like their bot is doing more coding than it is actually capable of.

    On a wider front, I expect this AI bubble’s gonna cripple the popularity of FOSS licenses - the expectation of properly credited work was a major aspect of the current FOSS ecosystem, and that expectation has been kneecapped by the automated plagiarism machines, and programmers are likely gonna be much stingier with sharing their work because of it.

    Oh absolutely. My current project is sitting in a private git repo, hosted on a VPS. And no fucking way will I share it under anything less than GPLv3.

    We need a license with specific AI verbiage. Forbidding training outright won’t work (they just claim fair use).

    I was thinking of adding a requirement that the license header must not be removed unless a specific string (“This code was adapted from libsomeshit_6.23”) is included in the comments by the tool, for the purpose of propagating security fixes and supporting a consulting market for the authors. In the US they do own the judges, but in the rest of the world the minuscule alleged benefit of not attributing would be weighed against harm to their customers (security fixes not propagated) and harm to the authors (missing out on consulting gigs).

    edit: perhaps even an explainer that the authors see non-attribution as fundamentally fraudulent against the user of the coding tool: the authors of libsomeshit routinely publish security fixes, and the user of the coding tool, who has been defrauded into believing that the code was created de novo by the coding tool, is likely to suffer harm when hackers misuse those published security fixes (which wouldn’t be possible if the code had in fact been created de novo).
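
    For illustration, the kind of header such a clause might require the tool to preserve (libsomeshit, the version, and the URL are hypothetical placeholders, not a real project):

        /*
         * Hypothetical example of a header the clause above would require:
         * This code was adapted from libsomeshit_6.23.
         * Upstream: https://example.org/libsomeshit (authors publish security fixes there)
         * This notice may only be removed if an equivalent attribution string is
         * emitted in the comments by the code-generating tool, so that downstream
         * users can track and apply upstream security fixes.
         */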


  • I think provenance has value outside copyright… here’s a hypothetical scenario:

    libsomeshit is licensed under MIT-0. It does not even need attribution. Version 3.0 introduced a security exploit. It has been fixed in version 6.23 and widely reported.

    A plagiaristic LLM with a training date cutoff before 6.23 can just shit out the exploit in question, even though it has already been fixed.

    A less plagiaristic LLM could RAG in the current version of libsomeshit and perhaps avoid introducing the exploit and update the BOM with a reference to “libsomeshit 6.23” so that when version 6.934 fixes some other big bad exploit an automated tool could raise an alarm.

    Better yet, it could actually add a proper dependency instead of cut and pasting things.

    And it would not need to store libsomeshit inside its weights (which is extremely expensive) at the same fidelity. It just needs to be able to shit out a vector database’s key.
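
    A toy sketch of the kind of automated alarm mentioned above, assuming the tool records a provenance entry like “libsomeshit 6.23” in the BOM and an advisory names the version that fixes the new exploit (all names and versions are made up):

        #include <stdio.h>

        /* Raise an alarm when recorded provenance predates a known fix. */
        static int version_before(int maj, int min, int fix_maj, int fix_min) {
            return maj < fix_maj || (maj == fix_maj && min < fix_min);
        }

        int main(void) {
            int bom_maj = 6, bom_min = 23;   /* provenance recorded in the BOM */
            int fix_maj = 6, fix_min = 934;  /* version that fixes the newly reported exploit */
            if (version_before(bom_maj, bom_min, fix_maj, fix_min))
                printf("ALARM: code adapted from libsomeshit %d.%d predates the %d.%d fix\n",
                       bom_maj, bom_min, fix_maj, fix_min);
            return 0;
        }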

    I think the market right now is far too distorted by idiots with money trying to build the robot god. Code plagiarism is an integral part of it, because it makes the LLM appear closer to singularity (it can write code for itself! it is gonna recursively self-improve!).


  • In case of code, what I find the most infuriating is that they didn’t even need to plagiarize. Much of open source code is permissively enough licensed, requiring only attribution.

    Anthropic plagiarizes it when they prompt their tool to claim that it wrote the code from some sort of general knowledge (it just learned from all the implementations, blah blah blah), to make their tool look more impressive.

    I don’t need that; in fact, it would be vastly superior to just “steal” from one particularly good implementation that has a compatible license you can simply comply with (and better yet to avoid copying the code at all and find a library if possible). Why in the fuck even do the copyright laundering on code that is under MIT or a similar license? The authors literally tell you that you can just use it.



  • I dunno, I guess I should try it just to see what the buzz is all about, but I am rather opposed to the plagiarism-and-river-boiling combination, and paying them money is like having Peter Thiel do 10x donation matching for donations to a Captain Planet villain.

    I personally want a model that does not store much specific code in its weights, uses RAG on compatibly licensed open source, and cites what it RAG’d. E.g. if I want to set an app icon on Linux, it’s fine if it looks into GLFW and just borrows code with attribution that I will make sure to preserve. I don’t need it gaslighting me that it wrote it from reading the docs. And this isn’t literature; there’s nothing to be gained from trying to dilute copyright by mixing together a hundred different pieces of code that do the same thing.

    I also don’t particularly get the need to hop onto the bandwagon right away.

    It has all the feel of boiling a lake to do for(int i=0; i<strlen(s); ++i). LLMs are so energy-intensive in large part because of quadratic scaling, but we know the problem is not intrinsically quadratic, otherwise we wouldn’t be able to write, read, or even compile the code.

    Each token has the potential to relate to any other token, but in practice only relates to a few.

    I’d give the bastards some time to figure this out. I wouldn’t use an O(N^2) compiler I can’t run locally either; there is also a strategic disadvantage in any dependence on proprietary garbage.

    Edit: also, I have a very strong suspicion that someone will figure out a way to make most matrix multiplications in an LLM sparse, doing mostly the same shit in a different basis. An answer to a specific query does not intrinsically use every piece of information the LLM has memorized.
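
    Back-of-the-envelope version of the quadratic point, assuming a 100k-token context and a made-up figure of 32 relations per token that actually matter:

        #include <stdio.h>

        int main(void) {
            long long n = 100000;            /* tokens in context (assumed) */
            long long useful_per_token = 32; /* hypothetical: relations that actually matter */
            printf("pairs a dense attention pass computes: %lld\n", n * n);
            printf("pairs that would matter if sparse:     %lld\n", n * useful_per_token);
            return 0;
        }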





  • Film photography is my hobby and I think that there isn’t anything that would prevent you from exposing a displayed image onto a piece of film, except for the cost.

    Glass plates it is, then. Good luck matching the resolution.

    In all seriousness though, I think your normal setup would be detectable even on normal 35mm film due to 1: insufficient resolution (even at 4k, probably even at 8k), and 2: insufficient dynamic range. There would probably also be some effects of spectral response mismatch - reds that are cut off by the film’s spectral response would be converted into film-visible reds by a display.

    Detection of forgery may require use of a microscope and maybe some statistical techniques. Even if the pixels are smaller than film grains, pixels are on a regular grid and film grains are not.

    Edit: trained eyeballing may also work fine if you are familiar with the look of that specific film.
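
    One statistical technique along those lines, sketched on purely synthetic data (the pixel pitch and noise model are made up): the autocorrelation of a scan line from a rephotographed display repeats at the pixel pitch, while pure film grain does not.

        #include <stdio.h>
        #include <stdlib.h>

        #define N 4096

        /* Mean-removed autocorrelation of x at a given lag. */
        static double autocorr(const double *x, int n, int lag) {
            double mean = 0.0, s = 0.0;
            for (int i = 0; i < n; ++i) mean += x[i];
            mean /= n;
            for (int i = 0; i + lag < n; ++i) s += (x[i] - mean) * (x[i + lag] - mean);
            return s / (n - lag);
        }

        int main(void) {
            double rephoto[N], grain[N];
            int pitch = 8; /* hypothetical pixel pitch, in scan samples */
            srand(1);
            for (int i = 0; i < N; ++i) {
                double noise = (double)rand() / RAND_MAX - 0.5;
                rephoto[i] = ((i % pitch) < pitch / 2 ? 1.0 : -1.0) + noise; /* pixel grid + grain */
                grain[i]   = noise;                                          /* grain only */
            }
            printf("autocorrelation at lag = pitch\n");
            printf("  rephotographed display: %.3f (strong periodic structure)\n", autocorr(rephoto, N, pitch));
            printf("  film grain only:        %.3f (near zero)\n", autocorr(grain, N, pitch));
            return 0;
        }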


  • Oh wow, it is precisely the problem I “predicted” before: there are surprisingly few production-grade implementations to plagiarize from.

    Even for seemingly simple stuff. You might think parsing floating point numbers from strings would have a gazillion examples. But it is quite tricky to do correctly (a correct implementation allows you to convert a floating point number to a string with enough digits, and back, and always obtain precisely the same number you started with). So even for such an omnipresent example, which has probably been implemented well over 10,000 times by various students, if you start pestering your bot with requests to make it better, and have the bot write tests and pass them, you could end up plagiarizing something identifiable.
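
    The round-trip property in question, as a minimal check (17 significant digits are enough for IEEE 754 doubles; the hard part a real implementation faces is producing the shortest such string and parsing it back exactly):

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        int main(void) {
            double original = 0.1 + 0.2;  /* a value with no short exact decimal form */
            char buf[64];
            snprintf(buf, sizeof buf, "%.17g", original);  /* enough digits for binary64 */
            double parsed = strtod(buf, NULL);
            printf("%s -> %s\n", buf,
                   memcmp(&original, &parsed, sizeof original) == 0
                       ? "round-trips exactly"
                       : "does NOT round-trip");
            return 0;
        }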

    edit: and even suppose there were 2, or 3, or 5 exFAT implementations. They would be too different to “blur” together. The deniable plagiarism that they are trying to sell - “it learns the answer in general from many implementations, then writes original code” - is bullshit.