

Ah, the Kurds, who have been betrayed and abandoned by the US in Iraq and in Syria multiple times? Whom Turkey still considers a major threat, those Kurds?
Great. This is going to go great.




Given that Spain is part of the EU, and the US can’t negotiate independent trade agreements with individual EU member states - something Angela Merkel infamously had to explain to Trump multiple times while he kept pestering her at a meeting for a US/Germany trade agreement - I’d say lying. Unless you charitably count “too pudding-brained to understand what he’s talking about” as something other than lying, I suppose.


The price of liberty is constant vigilance, and so forth.


We’re talking about downvotes here. Those don’t indicate anything other than what a particular mob thinks about a subject, not whether someone actually is a troll or a racist. Unless you want to expand the term “troll” to mean “someone who says something that’s unpopular with the community it was said in”, which is unfortunately a definition I’ve seen people use and which makes the word pretty much useless.


Yeah, I think that caveat applies, with the same disclaimer for me; I haven’t studied the code or architecture in detail, I’m just going off a high-level understanding of how this works. The “server of record” is the server that re-broadcasts everything to those that federate with it so it can act as a filter.
ActivityPub is a nice idea, but I can imagine some manipulative shenanigans an instance could get up to with this kind of setup. I hope there are mechanisms for detecting that sort of thing.
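As a toy illustration of the concern (purely hypothetical names, not Lemmy’s actual code or architecture): a server of record that re-broadcasts activities to its federated peers is also in a position to silently drop some of them, and the peers have no obvious way to notice the gap.

```python
# Hypothetical sketch only - illustrating the manipulation risk, not any
# real instance's implementation.

def rebroadcast(activity: dict, peers: list[str], suppressed_actors: set[str]) -> list[str]:
    """Return the list of peers this activity gets forwarded to.

    A well-behaved server of record forwards everything; a manipulative one
    can shadow-filter by actor, and federated peers never see the dropped
    activities at all.
    """
    if activity.get("actor") in suppressed_actors:
        return []  # dropped: the rest of the federation never learns this existed
    return list(peers)
```

With an empty suppression set this behaves like an honest relay; detecting the dishonest case from outside would require comparing what the origin server shows against what actually got re-broadcast.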

Yeah. And “personhood” in the legal sense, which is already pretty well defined. If Thaler had made a corporation to hold that copyright then this would be an entirely different situation.
Thaler v. Perlmutter has been cropping up under this headline again and again over the years as the courts keep ruling against him, since “AI-generated art can’t be copyrighted” is extremely popular clickbait. The one positive about this particular cycle is that now that the Supreme Court has declined to take it up he may finally be out of ways to get yet another do-over. This might be the last one.


But that’s the point he’s making: you can’t necessarily judge whether someone is a “racist” or a “troll” simply because lots of people are downvoting them. That just means the mob doesn’t like what they said. Mobs aren’t always morally correct.
There’s probably a bunch of places in the Fediverse you could go and say something that you thought was perfectly fine and reasonable and get a whole bunch of downvotes for it. Would that make you the racist troll, then?


As I understand it, you can downvote content posted by instances that don’t support downvoting; they just don’t see those downvotes over there, and their users can’t add downvotes of their own.
In a similar way I could make my own instance that implemented “sidevoting” alongside up/down, people could vote left and right on those posts and comments, and the rest of the Fediverse would be blissfully unaware of the nonsense going on over on my instance.
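For what it’s worth, in ActivityStreams vocabulary an upvote federates as a Like activity and a downvote as a Dislike, and an instance simply ignores activity types it doesn’t implement. A minimal sketch of that behavior (the “Sidevote” type is my invention from above, and this is not any real instance’s code):

```python
def tally_votes(activities: list[dict], supported_types: set[str]) -> dict[str, int]:
    """Count incoming vote activities, silently ignoring any activity
    type this instance doesn't implement."""
    counts: dict[str, int] = {}
    for activity in activities:
        kind = activity.get("type")
        if kind in supported_types:
            counts[kind] = counts.get(kind, 0) + 1
    return counts

# An instance that only implements upvotes never even tallies the rest:
votes = [{"type": "Like"}, {"type": "Dislike"}, {"type": "Sidevote"}]
# tally_votes(votes, {"Like"}) keeps only the Like;
# tally_votes(votes, {"Like", "Dislike", "Sidevote"}) counts all three.
```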


Only one social media protocol is allowed to implement each particular approach to voting at a time?

This headline implies a misunderstanding of what this case is actually about. This is the Thaler v. Perlmutter case. Thaler has been arguing - since 2018, before the current AI “boom” - that the copyright of a particular image that he’d generated with an AI belonged to the AI itself rather than to him.
The copyright office, and every court he’s appealed to since then, have responded “AIs aren’t legal persons and so cannot hold copyright.” That’s all this Supreme Court rejection is affirming. It’s a really obvious outcome; Thaler is basically a loon with too much money and time on his hands for continuing to pursue the case.
It doesn’t address the general copyrightability of AI-generated art. In this particular case Thaler is explicitly saying “I don’t hold the copyright, my AI holds the copyright.” The courts have said “well, if you don’t hold the copyright and the AI can’t hold the copyright, that means nobody holds the copyright and therefore this particular piece is in the public domain.” Thaler is allowed to disclaim his own copyright over the work, he just can’t assign it to an AI like this.
There are primaries everywhere, those were just some of the more prominent ones. Go look up when the ones in your own state are.


> You seem to have forgotten that this is a social media website comments section discussion, not a court of law.
And you are forgetting that it’s a discussion about a court of law. It’s right in the title, this is about a lawsuit.
You’re presenting a big wall of text that’s explaining your opinions on the matter. I could likewise present a big wall of text that explains my opinions on the matter. Neither of those things actually matters, though. The title and subject of this thread is not “hey, what do you all think about this stuff?” It’s “here’s what the US Supreme Court ruled (or in this case chose to let stand without making a ruling).”
I get what your opinion is. I’ve seen this opinion presented plenty of times over the years. I don’t think that’s how the courts are going to rule, though, because so far they’ve been ruling in other ways and I think I’ve got a pretty firm understanding of why they’ve been ruling that way.


> Still, this all feels a bit like Schrödinger’s Copyrighted Work to me… the work exists, so who made it?
It’s simply not the court’s job to determine this, in this particular case. Which is why it’s so frustrating that this particular case keeps ending up under headlines claiming that it’s established that “AI generated art can’t be copyrighted.”
All the rest of this argument is out of scope of this case, you’d need to look to other cases. You can argue and opine however you like about what you think the outcomes should be but that doesn’t change what the outcomes of those cases actually end up being.


> I’m not a lawyer, maybe you are. I can’t fully speak to the legalities at play.

This is specifically about legalities, though.

> AI simply cannot produce an output without consuming other works to be used as training data.

Obviously an AI can’t work without being trained. Neither can a human. The issue is the legality of this process.

> From what I understand, the scope of those judgments is limited to the specific context of those uses, as well as the jurisdiction in which they were made, right?

As is the case for basically all court judgments, yeah. But once one’s been made it becomes precedent that can be cited in subsequent cases, which makes it a lot easier for them to go the same way. So when a court rules that Anthropic was operating within fair use when it trained its LLMs on books, that makes it a lot more likely that OpenAI will win a ruling about its own similar training processes. They’re opinions that matter.

> It’s worth noting, for the sake of a more complete discussion, this draft report from the United States Copyright Office from May 2025.

Also worth noting that this is the lowest starting level for regulation. The US Copyright Office makes rules like these, then they get challenged in court, and the courts decide whether those rules actually conform to the law. Thaler v. Perlmutter is exactly such a case.

> I think if you look at something as blatant as the OpenAI Studio Ghibli filter, it’s very clear that the works that were used in training could have been, and almost certainly should have been, licensed from Studio Ghibli for the creation of such a feature.

Okay, you think that. What do the judges think? That’s what it ultimately comes down to.

I should note that it’s a very long-standing and well-established principle that style cannot be copyrighted.


I was wondering what I’d done to warrant this attention. It’s kind of puzzling - I’m not keeping my Reddit account “secret”; I link to it and mention my interests in my profile bio. But XLE has been acting like a sleuth cracking a case.
Water your tree of liberty, then.


I explained it in detail in a comment I put on the root of the thread. In a nutshell, Thaler is declaring “I am not the copyright holder of this artwork, the AI itself is the copyright holder of the artwork. I want to register this artwork’s copyright to the AI that produced it.”
The copyright office - and, subsequently, all the courts he has appealed the case to - have told him “but an AI is not a legal person, so an AI cannot hold copyright to the artwork. And you are declaring that you yourself are not the holder of the copyright, you are quite insistent on that. So this artwork has no copyright holder. That means it’s public domain.”
This is an important distinction. The court isn’t ruling that AI art in general is in the public domain. It’s ruling that this art is in the public domain because this guy trying to register it is insisting that it was created without any human involvement. Unfortunately a lot of news articles miss this distinction because a headline declaring “AI art ineligible for copyright” draws a ton of clicks. This has been going on for over three years now, at least.
Criminy, I just checked. Thaler began tilting at this windmill in 2018; that’s when he first made this ridiculous application, years before modern generative AI came onto the scene. The Thaler v. Perlmutter case itself started in mid-2022. He is a very persistent loon.


I don’t see “vapor” on that chart.


That’s the website, not ENS itself. ENS is a smart contract running on Ethereum, access to it cannot be disabled without shutting down Ethereum as a whole.
Given that the US appears to have no coherent war aims, I suppose negotiations would be pretty pointless too. You need to actually want something specific to negotiate over.