

Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology and politics and science fiction.
Spent many years on Reddit before joining the Threadiverse as well.
Sure, not disputing that. I’m more annoyed by the double standard regarding his successful decisions.
No, just surprised about how uninformed and knee-jerk those opinions are.
In my experience, it’s likely that some of those downvotes come from reflexive “AI bad! How dare you say AI good!” reactions, not anything specific to mental health. For a community called “technology” there’s a pretty strong anti-AI bubble going on here.
What I mean is that when Musk-owned companies have successes, people are very often quick to accuse him of “just hiring smart people” or “just buying a successful company.” It’s only when those companies have failures that he gets credit for being hands-on in their design decisions.
Don’t get me wrong, I think Elon Musk is a pretty terrible person both in terms of his personality and his politics. But pretty terrible people can nevertheless be smart and make good engineering decisions. Just look at von Braun as a prime example.
Always interesting to see the view of the degree of Elon Musk’s involvement in his companies’ decisions swing depending on whether the outcome is good or bad.
They are using them, however. They’re visiting websites with them, using apps with them, and so forth.
If they don’t, then someone else will.
Meanwhile, publishers: “why is everyone using AI instead of viewing our sites themselves?”
The goal has always been to make these things more human-like, after all.
Interestingly, I’m not seeing your quoted content when I look at this article. I see a three-paragraph-long article that says in a nutshell “people don’t visit source sites as much now that AI summarizes the contents for them.” (Ironic that I am manually summarizing it like that).
Perhaps it’s some kind of paywall blocking me from seeing the rest? I don’t see any popup telling me that, but I’ve got a lot of adblockers that might be stopping that from appearing. I’m not going to disable adblockers just to see whether this is paywalled, given how incredibly intrusive and annoying ads are these days.
Gee, I wonder why people prefer AI.
It’s a monkey’s-paw sort of luck, though. The whole reason MAGA is so rabidly obsessed with getting this guy is because he has the media coverage. They made a big dramatic show out of declaring that he’ll “never walk free on American soil again” and now find themselves scrambling to do anything they can to prevent that, regardless of reason.
Any reason to say that other than that it didn’t give the result you wanted?
The enemy is at the same time too strong and too weak.
That’s a work of fiction. You might as well suggest dropping lightsabres on the bunker.
Why is this any different?
The judgment in the article I linked goes into detail, but essentially you’re asking for the law to let you control something that has never been yours to control before.
If an AI generates something that does indeed provably contain a sample of a piece of music in a song you recorded, then yes, that output may be something you can challenge as a copyright violation. But if the AI’s output doesn’t contain an identifiable sample, then no, it’s not yours. That’s how copyright works: it’s about the actual tangible expression.
It’s not about the analysis of copyrighted works, which is what AI training is doing. That’s never been something that copyright holders have any say over.
Funny, for me it was quite heartening. If it had gone the other way it could have been disastrous for freedom of information and culture and learning in general. This decision prevents big publishers like Disney from claiming shares of the pie: their published works are free for anyone with access to them to train on, without needing special permission or paying special licensing fees.
There was actually just a big ruling on a case involving this, here’s an article about it. In short: a judge granted summary judgment establishing that training an AI does not require a license or any other permission from the copyright holder, that training an AI is not a copyright violation, and that copyright holders have no rights over the resulting model.
I’m assuming this case is why we have this news about Anthropic scanning books coming out right now too.
So, people were angry at them for pirating books. Now we find they actually purchased books to scan, and people are angry about that too.
And the other mice will clean it up too.