• 2 Posts
  • 123 Comments
Joined 3 years ago
Cake day: July 2nd, 2023

  • The point is that this is not the first time Valve has been singled out for things done widely across the industry, and they’ve also been falsely accused of doing things that only the rest of the industry actually does.

    If they wanted to go after Valve specifically for gambling they should not have linked it to kids. It’s invoking “think of the children” BS while diluting what they claim is the core argument.

    Gambling is also harmful for adults. They are M rated games. If a child is playing the game that is a parental issue, not a state issue. It’s not illegal for kids to play M rated games, nor do I really think it should be as that is something parents should decide. The issue is that a lot, if not most, parents have no idea what their kids are doing online.

    The argument that “mostly kids play these games” is unsubstantiated at best. It might have been true in the 90s and early 2000s, but there are people in their 50s who have played games for the majority of their lives.

    Also, PC gaming tends to skew older. They might have more of an argument if they were talking about Call of Duty on a console, but an M-rated game is still not targeted at that age group.

    Again, if they want to go after Valve for gambling, then do that. But they are jumping around with what exactly the accusation is, which makes it seem like they are grasping at straws at best or trying to hide the real reason at worst.

    That we have all the age verification crap happening at the same time is too much of a coincidence to ignore. Like, how about going after anyone implicated in the files if you really want to protect children? They can come back to this after they develop a coherent argument and include any other gaming companies doing the same thing.

    I don’t care how “unique” anyone claims Valve’s situation is. Paid loot boxes are gambling across the board. The claim that people can buy hardware to resell for cash is irrelevant to that.




  • None of this is exclusive to Valve. Yeah, people can technically buy hardware and sell it, but they can also gift games or whatever and people were already using third party websites to sell their items for cash.

    And MMOs with random drops have historically always had an RMT market that is against the TOS where people sell in game currency or items for real currency.

    I’m not saying that Valve should be let off the hook when it comes to loot boxes, but this lawsuit kind of stinks because it is all over the place, and again, Valve isn’t the worst example of what they describe.

    The fact that it’s framed as “protecting children”, and claims that Valve is intentionally targeting children despite the games in question being rated M and old enough that I seriously doubt many minors play them, raises a ton of red flags for me. They also add 90s-era “violent video game” rhetoric that was always nonsense.

    The conspiracy part of me thinks this is going to eventually lead to more age verification BS, and they are targeting Valve because it is the only company that is complying in a way that still protects user privacy.


  • Which actually makes it simple to me. They are throwing things at the wall to see what sticks while also muddying the waters as if they are trying to hide something.

    They are throwing very convoluted logic around for this, and I immediately distrust anyone in government who makes wild leaps to “protecting kids”.

    First off, I don’t like loot boxes, specifically paid loot boxes. If you don’t make that distinction, something like this could affect any game with random drops.

    Second, all the games in question are rated M. They are very much not targeted at kids. Obviously kids still play them, but that is on the parents.

    That they also added “violent video games” nonsense that could have come out of the 90s is absurd. Is it about gambling or violent media? If it’s about violent media, why not go after any of the other shooters that likely have way more kids on them? Counter-Strike is old enough that I would be surprised if its player base isn’t mostly millennials and Gen X. At the very least I seriously doubt there are a ton of minors playing.

    If it is actually about gambling targeted at kids, the Pokémon trading card game is probably the best example of “gambling aimed at kids”. Sure, digital loot boxes can be more insidious, but that isn’t how they’ve framed this, and if you’ve seen how TCG players buy packs, it very much looks like gambling.

    The framing of this is very suspicious because it doesn’t make sense to go after Valve exclusively for any of the things they are claiming. And the 3x fine is ridiculous. I’m all for fines actually being based on profits, but you can’t tell me they would do the same for any other company.

    And part of me feels this is a strong-arm tactic because Valve is not publicly traded, which lets them be very pro-user/consumer, and is the one company that is complying with age verification in a way that still protects user privacy.


  •

    Which is no different than trading card games, and also not Valve’s fault.

    I have no love for loot boxes, at least when real money is used to get them, but from what I’ve seen across the board Valve is far from the worst with them. Valve also doesn’t allow you to sell the skins you get for real money, only Steam credit. That is still real-world value, but they’re also not the only company that does that.

    Outside of real-world money for loot boxes, most of the issues with the skin market are not anything Valve did. It was third party sites popping up that allowed people to sell their skins for cash.

    Valve has even made changes on their side that crashed the market and caused a ton of “value” to disappear.

    The fact is that this lawsuit is pretty obviously not actually about gambling. If it was, there are far worse companies they could go after.

    And I do want something to be done about them across the board, but this is not going to do that.


  • Look, I don’t have any love for loot boxes in general, at least when it’s real money. But there are far more egregious examples that would work just as well, if not better, for going after the practice of loot boxes than what Steam does.

    There’s a reason they are singling out Steam, and they signal why in the statement, saying this “teaches kids to gamble and makes them violent”, repeating 90s BS about “violent video games”, when the games in question are rated M, meaning if a child is playing them then that is 100% on the parents… and still not illegal anyway.

    They are most likely singling out Valve because they refuse to play ball with the privacy-violating age checks. Valve did the bare minimum they had to: basically clearing anyone with a registered credit card as being over 18.

    Valve is also not a publicly traded company and is very customer-focused, even with the loot box thing. That has been the driver for other lawsuits that single them out.



  • Naia@lemmy.blahaj.zone to linuxmemes@lemmy.world · “Oh no...” · 3 days ago

    One of the goals of Cachy is to take the pain out of Arch. I’d tried various Arch flavors before and I just never had a good experience. Vanilla I had no patience for, Manjaro is known to break more than vanilla with updates (something that happened to me), and EndeavourOS just didn’t feel right for some reason.

    Arch purists aren’t happy about that because it goes against the “ethos” of Arch, but they don’t seem to like it when a distro so much as comes with a desktop environment.

    Cachy has been pretty painless and I’ve been running it on multiple machines. There are regressions that sometimes happen since it’s still Arch and gets the latest updates, but that stuff is usually quickly fixed or rolled back if there is a bigger issue that needs more time to fix.

    The only real issue I had was that it revealed a hardware problem with newer Ryzen CPUs becoming unstable in the new lower-power C6 idle state. Disabling that C-state fixed the issue.


  • Which is exactly my point. A biological brain, human or otherwise, is incredibly efficient for what it does. It’s also effectively infinitely parallel, which is impossible with the current tech.

    In order to even attempt or approach a system that could be remotely considered “conscious” we would need something that is way more efficient just because of logistics. What they are trying to do with the current hardware has basically reached the practical maximum of scalability.

    Hardware footprint and power are massive constraints. The current data centers can’t even run at full capacity because the power grid cannot supply enough power, and what they are using is driving energy costs up for everyone. On top of that, a bio brain is way more dense. We would need absurd orders of magnitude more hardware to come close with the current tech.

    And then there is the software. Neural nets are a dumbed-down, very simplified model of how brains work. Part of that simplification is static weights: the models do not update themselves during execution, because doing so would very quickly muck up the weights from training and basically produce nonsense. They don’t have feedback mechanisms. We train them on one thing, and that’s it.
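    As a toy illustration of the static-weights point: at inference time nothing writes to the parameters, no matter how many times the model runs. The tiny “network” below is entirely made up for the sketch, not a real architecture.

```python
import math

# Toy two-layer "network" with hard-coded, frozen weights. The numbers are
# arbitrary stand-ins for parameters that were fixed during training.
W = [[0.2, -0.5, 0.1],
     [0.4, 0.3, -0.2]]

def forward(x):
    # Inference only reads W; there is no feedback path that writes to it.
    return [math.tanh(sum(xi * wij for xi, wij in zip(x, col)))
            for col in zip(*W)]

snapshot = [row[:] for row in W]
for step in range(100):        # run "execution" as many times as you like
    forward([1.0, float(step)])
assert W == snapshot           # the weights never change outside of training
```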

    In the case of LLMs, they are trained on the structure of language. We can’t train meaning because that requires unimaginable orders of magnitude more complexity to even attempt.

    If AGI or artificial sentience is possible, it will never be done with the current tech. I would argue the bubble has likely set AI research back decades, because how short-sightedly and ham-fistedly companies are pushing it has soured public perception.


  • but I do wonder about the confidence with which you can totally dismiss the notion

    For the current tech, 100%.

    These are static systems. They don’t update themselves while running. If nothing else, a system of consciousness has to be dynamic. Also, the way these models are trained is unlikely to produce consciousness even if it theoretically could.

    Assuming that they are seems like a leap, but since we don’t really know exactly what consciousness is,

    We don’t technically have a definition for what it is, but we have some criteria. Consciousness is an emergent property, so theoretically a system could become conscious unintentionally if it is complex enough. But again, that requires a system to be dynamic, able to change and grow on its own.

    Neural nets are just trained on data. LLMs specifically are trained on the structure of language, which is the only reason they work as well as they do. We can’t train meaning or understanding, but being able to churn out something resembling information is a byproduct of training on language, because language is used to communicate information.

    The issue a lot of people have is that they assume something is intelligent/sentient if it can produce language, which is what we have seen in nature. But while it may take intelligence, and maybe sentience, to create or develop a language, nothing says intelligence or sentience is required to “use” one.

    LLMs do one thing: produce the next word for a given context. It does not matter how big we make them or what the underlying complexity is. The model just produces a word. The software running the model adds the word to the context and executes a new loop with the most recent context. It runs until it hits a terminating token signaling that the current output is “finished”.

    Even the models considered “thinking”/“reasoning” models just have additional context tokens for the “thinking” section that basically force the model to generate more context, which, thanks to the way language is constructed, can constrain the output. But it’s only ever outputting the next word.
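    The loop described above can be sketched in a few lines. Everything here (the vocabulary, the scoring function, the `<eos>` token) is made up for illustration; in a real system the `model` step is the neural net and nothing more.

```python
import random

VOCAB = ["the", "cat", "sat", "on", "<eos>"]

def model(context):
    # Stand-in for the network: one probability per vocabulary word.
    random.seed(len(context))                  # deterministic toy behavior
    scores = [random.random() for _ in VOCAB]
    total = sum(scores)
    return [s / total for s in scores]

def generate(prompt, max_tokens=20):
    context = list(prompt)
    for _ in range(max_tokens):
        probs = model(context)
        word = VOCAB[probs.index(max(probs))]  # greedy pick of the next word
        if word == "<eos>":                    # terminating token: output is "finished"
            break
        context.append(word)                   # append, then loop on the new context
    return context

print(" ".join(generate(["the"])))
```

    The outer software is just this append-and-loop wrapper; "reasoning" modes only change what tokens end up in `context` before the final answer.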


  • Additionally, he maintains that his LLM is female

    I know nothing about this guy, but given some unfortunate tendencies among the tech communities I physically recoiled when I read this. If the thing was actually sentient I’d want to get it away from him.

    Obviously the guy is another case of AI psychosis.

    LLMs, and neural nets in general, literally cannot be sentient. Neural nets are a very, very dumbed-down model of how brains work, and these are static systems that just output probabilities based on the current context.

    Even if we could someday create consciousness or at least something that could actually think it would require completely different hardware than what we currently have. Even if we could run it on current hardware it would require way more resources and power than physically possible.


  • Naia@lemmy.blahaj.zone to linuxmemes@lemmy.world · “Oh no...” · 4 days ago

    Bazzite was too limiting for me and the layered updates made updating take forever. I was only using it on a media PC at the time too, so it wasn’t as if I had that many changes.

    I’m perfectly happy with CachyOS. I can basically do whatever I want, and snapshots are a nice safety net. Updates take like 2-5 minutes depending on how long it’s been since the last time I ran updates and the power of the system (the Steam Deck always takes longer than my desktop or media PC).


  • Which is one of the few things these things can actually do, because their entire thing is language processing.

    Basically put in a vague or comprehensive description of what you are trying to do or trying to find. It can generate a few queries based on your input and do a handful of searches then give you the results and highlight which ones might be the most relevant to your input.

    But that still requires traditional, and specifically deterministic, search.
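    A rough sketch of that flow, with made-up helper names: `expand_to_queries` stands in for the LLM step, while the actual lookup stays a plain, deterministic keyword search.

```python
def expand_to_queries(description):
    # Stand-in for the LLM turning a vague description into query variants.
    base = description.lower().strip()
    return [base, base + " tutorial", base + " documentation"]

def keyword_search(query, index):
    # Deterministic search: a document matches if it contains every query word.
    return [doc for doc in index if all(w in doc.lower() for w in query.split())]

def assisted_search(description, index):
    results = []
    for q in expand_to_queries(description):
        for doc in keyword_search(q, index):
            if doc not in results:
                results.append(doc)
    return results  # the LLM could then highlight which of these look most relevant

index = ["Python logging documentation",
         "Intro to Python logging tutorial",
         "Cooking pasta for beginners"]
print(assisted_search("python logging", index))
```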

    The way people blindly trust its output without any actual search or additional context is the worst way to use it. Might as well ask a Magic 8-Ball.



  • I like playing around with them occasionally, but I only use local models. I cannot stand all the cloud stuff in general and with the way neural nets work you can get as good or better results out of a smaller/more narrow model and the same applies to LLMs.

    The massive models the big companies are putting out there are generally just bad. Even if it can occasionally give you accurate output, for whatever it is you are asking it to do, it uses way more power and resources than reasonable and you could have found what you were looking for with a simple web search.


  • Lithium iron phosphate (LiFePO4) batteries are actually really stable: way less likely to catch fire in thermal runaway, and they don’t lose capacity as easily.

    They just aren’t very energy dense, so you need more weight per Wh. They also operate at a lower voltage per cell, which means they charge slower.

    They are already used in short- to mid-range EVs, but the lower energy density makes it impractical to fit enough capacity for longer-range EVs.


    As an aside, I would argue that for the majority of people a large-capacity EV battery is a bit of a waste. Mine is ~70 kWh, give or take. In optimal conditions my car estimates 240-250 mi at 100%. Over the winter it’s showing anywhere from 140-180 mi at 80%.

    I moved cross-country right after getting it and drove it 1000 miles. It took a bit longer than it would in a gas car, but it was doable. You just have to plan segments to get to the next charger, and try to charge to 100% with Level 2 charging (240 V AC) when you stop for the night.
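    For scale, those numbers imply roughly the following consumption. This is back-of-envelope math treating the dashboard range estimates as accurate, which they rarely quite are.

```python
# ~70 kWh pack, ~245 mi shown in optimal conditions at 100%,
# ~160 mi shown at an 80% charge in winter. All figures are rough.
pack_kwh = 70.0

optimal_wh_per_mile = pack_kwh * 1000 / 245        # ~286 Wh/mi in good conditions
winter_wh_per_mile = pack_kwh * 0.80 * 1000 / 160  # ~350 Wh/mi in winter

print(round(optimal_wh_per_mile), round(winter_wh_per_mile))
```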