Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology and politics and science fiction.

Spent many years on Reddit before joining the Threadiverse as well.

  • 0 Posts
  • 2.24K Comments
Joined 2 years ago
Cake day: March 3rd, 2024


  • There is one nice feature that pushes back against hive-mindedness here that Reddit lacks, at least; you can see your upvote and downvote totals rather than just the single aggregate total. Reddit used to be like that years ago but they got rid of it.

    That means that if you say something that draws a ton of both downvotes and upvotes, you can at least see that a significant number of people liked it. Over on Reddit, saying anything that netted negative karma felt like screaming into the void.

    Oh, and the small population means that downvoted comments are still easy to see. That helps too.

    Still, the Fediverse does feel more strongly bubbled than Reddit does, from my subjective and anecdotal position.


  • A major problem faced by first-mover companies like OpenAI is that they spend an enormous amount of money on basic research and initial marketing and hardware purchases to set up in the first place. Those expenses become debts and have to be paid off by the business later. If they were to go bankrupt and sell off ChatGPT to some other company for pennies on the dollar that new owner would be in a much better position to be profitable.

    There is clearly an enormous demand for AI services, despite all the “nobody wants this” griping you may hear in social media bubbles. That demand’s not going to disappear, and neither will the AIs themselves. It’s just a matter of finding the right price to balance things out.



  • It’s important to separate the AI-training part from the conventional copyright violations. A lot of these companies downloaded material they shouldn’t have, and that is a copyright violation in its own right. But the training itself has already been ruled fair use in a few prominent cases, such as the Anthropic one.

    Beyond even that, there are generative AIs that were trained entirely on material the trainer owned or licensed outright - Adobe’s “Firefly” model, for example.

    So I have yet to see it established that generative AI inherently involves “asset theft.” You’ll have to give me something specific. That page jumbles far too many cases together, covering a whole range of related subjects, some of them not even directly AI-related. (I notice one of the first entries in the list is “A federal judge accused a third-party law firm of attempting to ‘trick’ authors out of their record $1.5 billion copyright class action settlement with Anthropic.” That’s just routine legal shenanigans.)