  • There’s a whole history of people, both inside and outside the field, shifting the definition of AI to exclude any problem that had been the focus of AI research as soon as it’s solved.

    Bertram Raphael said “AI is a collective name for problems which we do not yet know how to solve properly by computer.”

    Pamela McCorduck wrote “it’s part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, but that’s not thinking” (Page 204 in Machines Who Think).

    In Gödel, Escher, Bach: An Eternal Golden Braid, Douglas Hofstadter named “AI is whatever hasn’t been done yet” Tesler’s Theorem (crediting Larry Tesler).

    https://praxtime.com/2016/06/09/agi-means-talking-computers/ reiterates the “AI is anything we don’t yet understand” point, but also touches on one reason why LLMs are still considered AI - because in fiction, talking computers were AI.

    The author also quotes Jeff Hawkins’ book On Intelligence:

    Now we can see the entire picture. Nature first created animals such as reptiles with sophisticated senses and sophisticated but relatively rigid behaviors. It then discovered that by adding a memory system and feeding the sensory stream into it, the animal could remember past experiences. When the animal found itself in the same or a similar situation, the memory would be recalled, leading to a prediction of what was likely to happen next. Thus, intelligence and understanding started as a memory system that fed predictions into the sensory stream. These predictions are the essence of understanding. To know something means that you can make predictions about it. …

    The human cortex is particularly large and therefore has a massive memory capacity. It is constantly predicting what you will see, hear, and feel, mostly in ways you are unconscious of. These predictions are our thoughts, and, when combined with sensory input, they are our perceptions. I call this view of the brain the memory-prediction framework of intelligence.

    If Searle’s Chinese Room contained a similar memory system that could make predictions about what Chinese characters would appear next and what would happen next in the story, we could say with confidence that the room understood Chinese and understood the story. We can now see where Alan Turing went wrong. Prediction, not behavior, is the proof of intelligence.

    Another reason why LLMs are still considered AI, in my opinion, is that we still don’t understand how they work - and by that, I of course mean that LLMs have emergent capabilities that we don’t understand, not that we don’t understand how the technology itself works.




  • From the blog post referenced:

    We do not provide evidence that:

    AI systems do not currently speed up many or most software developers

    Seems the article should be titled “16 AI coders think they’re 20% faster — but they’re actually 19% slower” - though I guess making us think it was a statistically significant finding was the point.

    That all said, this was genuinely interesting and is in line with my understanding of the human psychology at play. It would be nice to see this repeated at a wider scale, broken down across different methodologies, toolsets, and models.


  • Why is 255 off limits? What is 127.0.0.0 used for?

    To clarify, I meant that specific address - if the local range starts at 127.0.0.1, then surely 127.0.0.0 does something, too (or is at least reserved to do something, even if it never does in practice).

    Advanced setup would include a reverse proxy to forward the requests from the applications port to the internet

    I use Traefik as my reverse proxy, but I have everything on subdomains for simplicity’s sake (no path mapping except when necessary, which it generally isn’t). I know 127.0.0.53 has special meaning for how the machine directs particular requests (it’s where systemd-resolved listens for DNS), but I never thought to look into whether Traefik or any other reverse proxy supports routing rules based on the IP address. Unless there’s some way to match on both that IP and the machine’s own IP, though, it would be limited to same-device communication. Makes me wonder whether any container system uses loopback addresses that way (versus the 10.x, 172.16-31.x, and 192.168.x blocks I’ve seen Docker use).
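    A rough sketch of what I mean by the same-device limitation (my own example, with an arbitrary port): the entire 127.0.0.0/8 block is loopback, not just 127.0.0.1, so a service bound to any address in it - 127.0.0.2, 127.0.0.53, whatever - is only reachable from the local machine.

        # Python sketch: bind to a non-default loopback address.
        # On Linux this works out of the box because the loopback route covers
        # all of 127.0.0.0/8; macOS only configures 127.0.0.1 by default and
        # needs an alias (e.g. `ifconfig lo0 alias 127.0.0.2`) first.
        import socket

        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(("127.0.0.2", 8080))  # port 8080 is an arbitrary choice
        srv.listen(1)
        print("listening on 127.0.0.2:8080 - reachable only from this machine")
        srv.close()

    So a reverse proxy could in principle route on those addresses, but only for traffic that originates on the same box.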

    Well this is another advanced setup but if you wanted to segregate two application on different subnets you can. I’m not sure if there is a security benefit by adding the extra hop

    Is there an extra hop when you’re still on the same machine? Like an extra resolution step?

    I still don’t understand why .255 specifically is prohibited. 8 bits can go up to 255, so it seems weird to prohibit one specific value. I’ve seen router subnet configurations that explicitly cap the top of the range at .254, though - I feel like I’ve also seen some that capped at .255, but I don’t have that hardware available to check. So my assumption is that it’s implementation-specific, but I can’t think of an implementation that would need to reserve all the .255 values. If it were just the last address in each subnet, that would make sense - e.g., as a convention for where the DHCP server lives on each network.
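    My best guess at the usual explanation is the network/broadcast convention: in a /24 (the typical home subnet), .0 is the network address and .255 is the broadcast address, which would explain router UIs capping the host range at .254. A quick sketch with Python’s stdlib ipaddress module (192.168.1.0/24 is just an example subnet):

        # In a /24, .0 and .255 are the network and broadcast addresses,
        # leaving .1-.254 for hosts.
        import ipaddress

        lan = ipaddress.ip_network("192.168.1.0/24")
        print(lan.network_address)    # 192.168.1.0
        print(lan.broadcast_address)  # 192.168.1.255
        print(lan.num_addresses - 2)  # 254 usable host addresses

        # Same idea for loopback: 127.0.0.0 is the network address of 127.0.0.0/8,
        # which is why the usable range is usually described as starting at 127.0.0.1.
        lo = ipaddress.ip_network("127.0.0.0/8")
        print(lo.network_address)     # 127.0.0.0
        print(lo.broadcast_address)   # 127.255.255.255

    That’s only a per-subnet convention, though - in a larger subnet like a /16, an address such as 10.0.1.255 is an ordinary host address - so a blanket ban on every .255 value would indeed be odd.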


  • Fair point, I should have asked about commercial games in general.

    That said, I didn’t mean that the game studio itself would do the AI training and own its models in-house; if it did, I’d expect that to go just as poorly as you would. Rather, I’d expect the model to be created by an organization that specializes in that sort of thing.

    “Marey” is one example I found of a GenAI model whose creators say it was trained ethically.

    Another is Adobe Firefly, where Adobe says they trained only on licensed and public domain content. It also sounds like Adobe is paying the artists whose content was used for AI training. I believe that Canva is doing something similar.

    StabilityAI is also doing something similar with Stable Audio 2.0, where they partnered with a music licensing company, AudioSparx, to ensure that artists are compensated, AI opt outs are respected, etc…

    I haven’t dug into any of those too deep, but they seem to be heading in the right direction at the surface level, at least.

    One of the GenAI scenarios that’s the most terrifying to me is the idea of a company like Disney using all the material they have copyright for to train their own, proprietary GenAI image, audio, and video tools… not because I think the outputs would be bad, but because of the impact that would have on creators in that industry.

    Fortunately, as long as copyright doesn’t apply to purely AI generated outputs, even if trained entirely on your own content, then I don’t think Disney specifically will do this.

    I mention that as an example because that use of AI, regardless of how ethically the model was trained, would still be unethical, in my opinion. Likewise, in game creation, an ethically trained and operated model could still be used unethically to eliminate many people’s jobs solely in the interest of higher profits.

    I’d be on board with AI use (in game creation or otherwise) if a company were to say, “We’re not changing the budget we have for our human workforce, including for contractors, licensed art, and so on, other than increasing it as inflation and wages increase. We will be using ethical AI models to create more content than we otherwise would have been able to.” But I feel like in a corporate setting, its use is almost always going to result in them cutting jobs.



  • Depends on your e-reader! If you have a Kindle, Kobo, or Nook, yes, that’s true. However:

    Boox has e-readers that run Android, so you can install Hoopla. The Palma 2 is phone-sized, which is great. The Page, Leaf2, and Go 7 are all in the 7” form factor, and there are 6” versions as well as tablet sizes. They offer both traditional black-and-white and color e-ink displays.

    I have the Boox Air 3C and the original Palma and both are great. I’ll likely get a Boox as my next standard sized e-reader, too (whenever I replace my Kindle Oasis). Though unless the technology drastically improves before then, it’ll be one with a black and white screen. (The color is nice in the tablet sizes, though.)

    Some other options that I’m less familiar with include:

    • Bigme has Android 7” color e-readers, as well as tablets and e-ink smartphones.
    • Meebook has e-readers that run Android (and Android e-ink tablets)
    • The MuSnap Aura C is a 10” Android e-ink tablet
    • XPPen has an 11” Android e-ink tablet


  • But do nontechnical users care about the “missing” features? A lot of nontechnical users prefer simpler apps.

    There is a version of Blender that was made for Android, though it’s quite old. If you’re competent enough with Blender to have memorized all its keyboard shortcuts and workflows, you’re likely technical enough to get it working via Termux. If not, Nomad Sculpt (on both iOS and Android), SpaceDraw (Android only), and several other apps can serve the same purposes.

    Not sure why you listed video editing software and two different specific video editors, but Android and iOS both have Lumafusion. I’m sure there are other decent editors but I haven’t used them because Lumafusion is great. iPads do have DaVinci Resolve, though, for what that’s worth. If you care about using a FOSS video editor then you should care enough to install it via Termux. But let’s be real, most nontechnical users are probably happy using CapCut.

    DJ software - Cross DJ is free, there are other alternatives, and there are web-based options like YouDJ.