The fact that you don’t understand it doesn’t mean that nobody does.
I would say I do. It’s not that high a bar - a few hours of Nandgame are enough to understand how logic gates can be combined to do arithmetic. Understanding how doped silicon can be used to make a logic gate is harder, but I’ve done a course on semiconductor physics and have an idea of how a field-effect transistor works.
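To make that concrete, here’s a minimal sketch in Python (names and structure are mine, purely illustrative) of the exercise Nandgame walks you through: arithmetic built out of nothing but a NAND primitive.

```python
# Everything below is built from a single primitive: NAND.
def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

# Standard gates, each expressed purely in terms of NAND.
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor_(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

# A full adder: adds two bits plus a carry-in, yields (sum, carry-out).
def full_adder(a, b, cin):
    s1 = xor_(a, b)
    return xor_(s1, cin), or_(and_(a, b), and_(s1, cin))

# Chain full adders to add multi-bit numbers - this is the arithmetic.
def add(x: int, y: int, bits: int = 8) -> int:
    result, carry = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

assert add(23, 19) == 42
```

Nothing in `nand` knows anything about numbers; the arithmetic lives entirely in the composition.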
The way a calculator calculates is something that is very well understood by the people who designed it.
That’s exactly my point, though. If you zoom in deeper, a calculator’s microprocessor is itself composed of simpler and less capable components. There isn’t a specific magical property of logic gates, nor of silicon (or dopant) atoms, nor for that matter of elementary particles, that lets them do math - it’s by building a device out of them, one that composes their elementary interactions, that we get a tool for it. Whereas Searle seems to reject this idea entirely, and to believe that humans being conscious implies you can zoom in on some purely physical or chemical property and claim that it produces the consciousness. Needless to say, I don’t think that’s true.
Is it possible that someday we’ll make machines that think? Perhaps. But I think we first need to really understand how the human brain works and what thought actually is. We know that it’s not doing math, or playing chess, or Go, or stringing words together, because we have machines that can do those things and it’s easy to test that they aren’t thinking.
That was a common and reasonable position in, say, 2010, but the problem is: I think almost nobody in 2010 would have claimed that the space of things you can make a program do without any extra understanding of thought included things like “write code” and “draw art” and “produce poetry”. Now that it has happened, it may be tempting to move the goalposts and declare those “not true thought”, but the fact that nobody predicted it in advance ought to suggest that maybe that entire line of reasoning was flawed all along. I think clinging to this idea would require gradually discarding all human activities as “not thought”.
it’s easy to test that they aren’t thinking.
And that’s us coming back around to the original line of argument - I don’t at all agree that it’s “easy to test” that even, say, modern LLMs “aren’t thinking”. The difference between the calculator example and an LLM is that in a calculator, we understand pretty much everything that happens and how arithmetic is built out of the simpler parts, so anyone suggesting that calculators need to be self-aware to do math would be wrong. But in a neural network, we only have full understanding of the lowest layers of abstraction - how a single layer works, how activations are applied, how the whole thing can be trained to minimize a loss function via backpropagation - and no idea at all how it works at a higher level. It’s not even that only experts understand it: nobody in the world understands how LLMs work under the hood, or why they have the many specific weird behaviors they do. That’s concerning in many ways, but in particular I absolutely wouldn’t assume, with so little evidence, that there’s no “self-awareness” going on. How would you know? It’s an enormous black box.
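For concreteness, here’s roughly what those well-understood lowest layers amount to - a toy two-layer network learning XOR, trained by backpropagation, written from scratch in Python/NumPy (a sketch, nothing like a real LLM’s architecture or scale):

```python
import numpy as np

# Toy network: every mechanism here (the layer, the activation, the
# gradient step) is fully understood; what a trained billion-parameter
# network *represents* internally is not.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass: two "layers" - each just a matmul plus an activation.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: chain rule applied to the squared-error loss.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient descent step on every weight.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # should land close to [[0], [1], [1], [0]]
```

Every line of that is transparent and debuggable. The point is that no analogous transparency exists for what billions of trained weights collectively do.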
There’s this message pushed by the charlatans that we might create an emergent brain by feeding data into the right statistical training algorithm. They give mathematical structures misleading names like “neural networks” and let media hype and people’s propensity to anthropomorphize take over from there.
There’s certainly a lot of woo and scamming involved in modern AI (especially if one makes the mistake of reading Twitter), but I wouldn’t say the term “neural network” is at all misleading? I agree on the anthropomorphization, though - it gets very weird. That said, I can’t help but notice that the claim, as you phrased it, happens to be literally true. We know this because it already happened once: evolution is just a particularly weird and long-running training algorithm, and it eventually turned primordial soup into humans, so clearly it’s possible.
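To spell out the analogy, the whole loop fits in a few lines - a toy (1+λ) evolutionary algorithm in Python, with a fitness function of my own choosing for illustration (real evolution’s was reproductive success):

```python
import random

# Toy evolutionary loop: mutate, select, repeat.
TARGET = "thinking machine"
CHARS = "abcdefghijklmnopqrstuvwxyz "

def fitness(s: str) -> int:
    # How many characters already match the target.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s: str) -> str:
    # Random variation at a single position.
    i = random.randrange(len(s))
    return s[:i] + random.choice(CHARS) + s[i + 1:]

parent = "".join(random.choice(CHARS) for _ in TARGET)
generation = 0
while fitness(parent) < len(TARGET):
    # Produce offspring; keep the fittest individual (including the parent).
    children = [mutate(parent) for _ in range(20)]
    parent = max(children + [parent], key=fitness)
    generation += 1

print(f"reached {parent!r} in {generation} generations")
```

Mutation plus selection, iterated: the algorithm never “understands” the target, yet it reliably gets there.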
Not really. As far as I can see, the goalpost-moving is just objectively happening.
If “think” means anything coherent at all, then this is a factual claim. So what do you mean by it, then? Specifically: what event would have to happen for you to decide “oh shit, I was wrong, they sure did make a machine that could think”?