

To be fair, spite is kind of the point of this community.




You can’t do anything else anyway.
Yes, this is my fundamental point. The Fediverse doesn’t have tools for Fediverse-wide censorship, nor should it.


That stops bots for a particular instance, assuming they guessed right about which accounts were bots. It doesn’t stop bots on the Fediverse.


This is just regular moderation, though. This is how the Fediverse already works. And it doesn’t resolve the question I raised about what happens when two instances disagree about whether an account is a bot.


There is one nice feature here that pushes back against hive-mindedness, at least, which Reddit lacks: you can see separate upvote and downvote totals rather than just a single aggregate. Reddit used to show that years ago, but they got rid of it.
That means that if you say something that gets a ton of both downvotes and upvotes, you at least know that a significant number of people liked it. Over on Reddit, saying anything that netted negative karma felt like screaming into the void.
Oh, and the small population means that downvoted comments are still likely easy to see. That helps too.
Still, the Fediverse does feel more strongly bubbled than Reddit does, from my subjective and anecdotal position.


10 hours ago over in lemmyshitpost@lemmy.world you saw a picture that you rather liked but that was getting a lot of downvotes and you didn’t know why. You were told by @breadleyloafsyou@lemmy.zip that “lemmy doesn’t like AI”
Also 10 hours ago over in nostupidquestions@lemmy.world you said “I know it's an unpopular opinion, but I don’t agree with punching Nazis. It makes them look like a victim, and violence never works.” You got a bunch of downvotes for that yourself.
Just a couple of examples of situations where an opinion that was against the consensus view of the community got “punished.”


Technically, yeah. Some instances are run by tin-pot dictators with delusions of godhood, but if you get banned from one of those just switch to another one.
The communities tend to be bubblier, though, since they’re small. So if your opinions don’t match, you’ll get shouted down harder.


A major problem faced by first-mover companies like OpenAI is that they spend an enormous amount of money on basic research, initial marketing, and hardware purchases to set up in the first place. Those expenses become debts that the business has to pay off later. If they were to go bankrupt and sell off ChatGPT to some other company for pennies on the dollar, that new owner would be in a much better position to be profitable.
There is clearly an enormous demand for AI services, despite all the “nobody wants this” griping you may hear in social media bubbles. That demand’s not going to disappear, and the AIs themselves won’t disappear. It’s just a matter of finding the right price to balance things out.


Graphics cards haven’t been used in any significant quantity for cryptocurrency mining for a long time now.


How else would this “trusted” status be applied without some kind of central authority or authentication? If one instance declares “this guy’s a bot” and another one says “nah, he’s fine” how is that resolved? If there’s no global resolution then there isn’t any difference between this and the existing methods of banning accounts.


If this is something that individual instances can opt out of then it doesn’t solve the “bot problem.”


If users want control then they have to take some responsibility.


Boom, centralized control of the Fediverse established.


It’s important to separate the training AI part from the conventional copyright violation. A lot of these companies downloaded stuff they shouldn’t have downloaded and that is a copyright violation in its own right. But the training part has been ruled as fair use in a few prominent cases already, such as the Anthropic one.
Beyond even that, there are generative AIs that were trained entirely on material that the trainer owned the license to outright - Adobe’s “Firefly” model, for example.
So I have yet to see it established that generative AI inherently involves “asset theft.” You’ll have to give me something specific; that page has far too many cases jumbled together, covering a whole range of related subjects, some of them not even directly AI-related. (I notice one of the first ones in the list is “A federal judge accused a third-party law firm of attempting to ‘trick’ authors out of their record $1.5 billion copyright class action settlement with Anthropic.” That’s just routine legal shenanigans.)
I just wish it wasn’t via such a monstrously painful mechanism.


Whenever I’ve done nothing wrong, I like to make that clear by going down to the courthouse and threatening judges not to charge me with anything.


First it would need to be established that generative AI inherently involves “asset theft”, which so far has not been the case in the various lawsuits that have reached trial.


And illustrates exactly the point I’m making. If people are going to hate it purely because it’s AI, regardless of whether it’s labelled or not, then there’s every incentive there to simply not label it. It’s counterproductive.
The leopards in America happen to be feeding very well in the past year, so naturally there’s a lot of posts from there.
Be the change you want to see. Find some non-American examples to post.