
> Brain doesn’t perform computations in any CS accepted sense (neither it is a turing machine nor it has any encoded program to execute any defined algorithm).

That's a big statement. Anything that can be effectively described with math can be described using Turing machines and algorithms. Any Turing-complete system is equivalent. As far as we know, all of physics can be described by equations, and therefore is (theoretically) computable. What makes you think brains are special? Are there any other physical systems that you think are uncomputable?

The only arguments I've found for why the brain can't be described by algorithms rest on pseudo-scientific appeals to the magic of quantum mechanics, which I find very unconvincing (quantum algorithms are still algorithms, and describable through math, after all). Do you have a better one?

> Would you call an ants colony a computer? A tree? A government? But they all obviously processes information and seemingly perform computations, don’t they?

Yes, absolutely. You can use ant colonies or slime molds as biological computers to solve real-world path-finding problems: https://www.youtube.com/watch?v=BZUQQmcR5-g&t=1s
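For a concrete (toy) flavour of that idea, here's a sketch of an ant-colony-style shortest-path search in Python; the graph, node names, and parameters are invented for illustration, and this is not the experiment shown in the video:

  import random

  # Toy weighted graph: node -> {neighbour: edge length}. Values are made up.
  graph = {
      "A": {"B": 1.0, "C": 4.0},
      "B": {"A": 1.0, "C": 1.0, "D": 5.0},
      "C": {"A": 4.0, "B": 1.0, "D": 1.0},
      "D": {"B": 5.0, "C": 1.0},
  }
  pheromone = {(u, v): 1.0 for u in graph for v in graph[u]}

  def walk(start, goal):
      # One "ant" walks from start to goal, biased by pheromone and edge length.
      path, node = [start], start
      while node != goal:
          options = [n for n in graph[node] if n not in path] or list(graph[node])
          weights = [pheromone[(node, n)] / graph[node][n] for n in options]
          node = random.choices(options, weights=weights)[0]
          path.append(node)
      return path

  def length(path):
      return sum(graph[u][v] for u, v in zip(path, path[1:]))

  for _ in range(200):                   # repeated foraging rounds
      p = walk("A", "D")
      for u, v in zip(p, p[1:]):         # shorter paths get more pheromone per edge
          pheromone[(u, v)] += 1.0 / length(p)
      for edge in pheromone:             # evaporation keeps the colony adaptive
          pheromone[edge] *= 0.95

  print(walk("A", "D"))                  # usually settles on A -> B -> C -> D (length 3)

Local rules plus reinforcement converging on a global optimum is exactly the flavour of computation those slime mold experiments exploit.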

Some of your other examples are more complicated, but all of them can be described through a computational lens and modeled as Turing machines. You will find many scientific papers filled with equations trying to describe the algorithms behind each of them.




> My point is that we know for a fact that our brain doesn't work anything like a Turing machine.

That's of course true, but then most computers aren't that similar to a classic Turing machine either. The idea of Turing machines is that they can simulate anything else that we mean when we say "computation".

> Can a Turing machine simulate a brain - as in, create intelligence like the one our brains expose?

Yes, it can. I mean, if it can't, that means there's some notion of "computability" that a Turing machine doesn't capture - this would violate the "Church-Turing" thesis. Such a thing is possible, of course, since we have no proof that a Turing machine is the limit of what we mean by "computable" - but it would be a completely revolutionary fact in Computer Science, and one that is very unlikely, considering how many years people have been trying to come up with alternative models of "computation" that all turn out to be equivalent to Turing machines.

-

The human brain isn't magic - it works according to the laws of physics like anything else. Either it is simulatable by a Turing machine, which would make the most sense, or it isn't, in which case we're missing something about how you can compute things, which would be surprising and amazing.


>> Ok, but why are biological neurons exempted from this argument?

Because biological neurons are not computers? I think the argument is that our current theory of computation is not enough to explain the human mind, not that human minds are magickal and special compared to other computational devices. We just happen to understand one category of computational device that does not include human minds.

I think if we look hard enough we can find examples of computational systems that exist in nature and that are not Turing machines. For example, a weighted chain forming a catenary can instantly calculate the shape of a load-bearing arch, without going through any states: it suffices to set up the computational system (the weighted chain) and the result of the computation (the shape of the arch) is immediately known.
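For what it's worth, the chain's "answer" can also be written down explicitly: a hanging chain settles into y = a*cosh(x/a), and flipping that curve upside down gives the thrust line of a stable arch (Hooke's old observation). A minimal numerical sketch, with an arbitrarily chosen parameter:

  import math

  a = 2.0                                  # chain "tautness"; value chosen arbitrarily
  xs = [i / 10.0 for i in range(-20, 21)]  # horizontal sample points
  chain = [a * math.cosh(x / a) for x in xs]   # shape the hanging chain settles into
  arch = [max(chain) - y for y in chain]       # flip it: the load-bearing arch

  for x, y in zip(xs[::10], arch[::10]):
      print(f"x = {x:+.1f}   arch height = {y:.3f}")

The digital version has to grind through those evaluations step by step; the physical chain "finds" the same curve instantly and in parallel, which is the point being made above.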

Meaning, "there are more things in heaven and earth that are dreamt of" in our computer science. Our mind is probably the biggest fucking mystery in all of existence and it is a little ridiculous to think we can understand it, and simulate it, with our primitive science and technology.


> Where is the concrete evidence that human intelligence can be simulated by a Turing Machine?

Short of building such a machine I can’t see how you’d produce evidence of that, let alone “concrete” evidence.

Regardless, we don’t know of any measurable physical process that the brain could be using that is not computable. If we found one (in the brain or elsewhere), we’d use it to construct devices that exceeded the capacity of Turing machines, and then use those to simulate human brains.


> The brain can compute.

You'll see in my comment and your quote that I don't say the brain can't compute. I agree, the brain can compute. But that doesn't mean it is a computer, because computing is an ability. People can do many other things aside from computing, none of which rely on computation; for instance, they can imagine, which is the ability to think new thoughts. Computers can't imagine because all they do is compute: that's their programming. No amount of programming can produce imagination. Computation and imagination are categorically distinct as different intellectual powers and abilities.

You are conflating an ability with ontology. We know what a brain is. It's a collection of fatty material with neurons that do not explicitly fire exactly like a computer. Key word there is like. Church-Turing built a model of computational logic off of intuitions about the brain and formal mathematical logic. That it's not provable doesn't prove your point; it removes any distinction between it being right or wrong, because it is a model (let's make something like the brain).

That an industry was built on computation doesn't prove anything. We know computation is an ability. For instance, it's also something we can do with abacuses. We could have built an enormous industry on building elaborate abacuses. We built computers to be extremely fast at computation. We didn't build computers to be brains.

You'll notice, if you read the review, that its author repeatedly cites cognitive neuroscientists, even evangelists of the singularity, philosophers, psychologists, and zoologists, who have published at length on this topic and repeatedly criticise and disrupt the simple idea that the brain is a computer or an algorithm or even a machine. An entire branch of philosophy developed off of Ludwig Wittgenstein to counter the computational model of consciousness. Numerous books in the Philosophy of Mind argue that the assumption that the brain is a computer is not just unsupported, it is logically nonsensical.


> This line of argument does achieve the goal of making it trivially true under the definitions given that the brain is a computer, but it seems to me it robs the assertion of insight

I think, rather, it reveals the fundamental lack of clarity of the contrary position.

> e.g., the brain is computer because everything is a computer, including the nearest rock, since it's also a physics-governed system.

Right. But no one is questioning the ability to build a computer that simulates the behavior of a rock; or most other physics-governed systems. The AI-is-impossible position boils down to the argument that the mind is not like all other physics governed systems, though it tends to waffle and hedge and bob and weave around that point rather than coming right out with it. Pointing to Turing equivalence and the apparent computability of natural phenomena forces the AI-is-impossible-because-the-mind-is-not-the-kind-of-thing-a-computer-can-simulate argument to come straight out and either (1) reject the universal computability of physical systems, or (2) reject the mind as a physical system.

It still, of course, leaves plenty of room for the proposition that AI is possible but really quite hard.


> To claim that the brain has a fundamental advantage over a Turing machine is to claim that there's some magical quantum (or otherwise) advantage in the brain's physical system that is un-simulate-able by a Turing machine.

Again, please point to any research you know of indicating that the universe itself is fully Turing-computable.


> The problem with that line of reasoning is you're assuming the brain is a computer, or that it merely computes.

The brain can compute. That's extraordinary. I say one type of thing does that, computers. You say no, two things, computers and then also brains. But when pressed to explain what is a brain if not a computer you'll just sputter (probably at length) without offering any substance.

In a sense that's the wrong way up to explain it. Church-Turing intuitively defines computation (the things computers can do) in terms of what our brains can do, so the match is not a coincidence but it also isn't there for the reason you probably expect. Because it's an intuition Church-Turing isn't provable, but you may notice that we subsequently built an _entire world-changing industry_ upon it in a lifetime.

You pointed to a review, others have written entire books, always they can be summarised as simply arguments from incredulity. "What? Nonsense, the brain can't be a computer, I simply won't believe that". It's unfortunate that we have woken such people from their daydreaming, I have no doubt that if similarly aroused they'd give the mathematicians what for too, "What? Nonsense, how can there be numbers which aren't ratios of whole numbers, I simply won't believe it".


> I don't think you can say that life is built on computational processes unless you use a definition of "computation" that is so vague and all-encompassing that it becomes effectively meaningless.

I mean, there's nothing physically stopping us from simulating a brain, right? It's a finite object with a finite amount of physical ingredients, and therefore with a finite amount of computing power we can simulate what it does. To me personally, that's a computational process. Maybe that's an overly broad definition of computation, but I think these debates tend to be about whether there is something fundamentally different about "life" (by which I assume you include consciousness). But maybe that's not what you're saying.

> He points out that "if we are to suppose that the brain is a digital computer, we are still faced with the question 'And who is the user?'"

What does that question even mean? I think it seems deep because we humans have a tendency to ascribe some sort of supernatural aura to our lived experience. Life is something incredible but that (at least to my knowledge) is not uncomputable...

> There is no computational process that will produce the sensations of colour or sound or touch.

Got one: the brain!

> At best you will have some representation that requires having actually experienced those sensations to understand it.

Why does a computer not "experience" something?


> our knowledge of computational complexity, algorithms, and general problem solving shows that our human intelligence performs far more sophisticated operations than are possible given the number of neurons that exist in our brains

This is not true.

I wish I could respond with something more interesting, but that's all there is to it. You are just saying something that is incorrect. Our brains are information-theoretically and complexity-theoretically entirely reasonable.


> Well it's definitely true that the human brain is Turing complete.

That is definitely false. "In computability theory, a system of data-manipulation rules... is said to be Turing-complete... if it can be used to simulate any Turing machine." [0] A Turing machine has infinite memory; the human brain does not.

[0] https://en.wikipedia.org/wiki/Turing_completeness


> I don't have much justification, but I believe it's false. There is a lot more to the universe than computers. Computers cannot simulate even elementary physical systems accurately.

Correct me if I'm wrong, but wouldn't a system that computes a non-recursive function be some sort of box where, given the same inputs, you invariably obtain the same outputs, yet there is no way of finding any mathematical or algorithmic expression that could, in any finite amount of time, predict that result? To find yourself in that situation, you need a physical process that is intrinsically non-computable; being unable to compute the sum of computable processes is not enough.


>If I had to guess right now, I'd say that it is more likely that the human brain has clever ways to exploit signal timings and randomness

Then why do you think it's a hypercomputer? These are all classical physical effects. What math problems do you think the human brain can solve that a Turing machine with sufficient time can't?


> Our brains are Turing complete. Anything that is computable/ understandable, can be understood by humans.

Our brains are fallible, and so only approximately equivalent to Turing machines; and even ignoring that can only compute every computable thing given both infinite error-free storage capacity (which they don't have internally or externally) and sufficient time to execute the necessary steps of the computation.

Realistically, there are computable functions which no human brain will ever be capable of computing.

Whether being able to compute a thing is sufficient, necessary, or tangential to understanding it is also a question.


> So you're claiming that organisms such as ourselves cannot be run on a Turing machine? Any particular reason/evidence?

This is just my thinking: a turing machine, or equivalently a digital circuit, doesn't want anything. It's an artificial construct, effectively a simulation, that can exist as an abstract mathematical idea.

If there is a real world where real stuff happens, real things have their own goals - generally increasing entropy or minimizing energy. Although it seems like we should just be able to capture this mathematically, in doing so we remove the actual desire... this sounds a bit nutty maybe, but what else is the difference between reality and simulation? An abstract calculation, anything equivalent to a Turing machine, has no skin in the game. Something happening in the universe does - and this could include consciousness, as a property of matter, which for example could effectively be the sum of the compulsions of the composing matter to minimize their energy (you can look this up, there are theories about consciousness that posit it's a property of matter).

If there is no difference between reality and a simulation, I'm wrong, we can all be represented on a universal computer, and so as abstract math, and in some sense existence is meaningless it just follows from math. My experience doesn't support this, but presumably I would have evolved a blind spot if my existence actually was meaningless.

TLDR: I think there is some undiscovered difference between reality and simulation, which I would guess relates to desire / consciousness, and which means we can't simulate consciousness or real intelligence.

I'm a scientist, I know the above doesn't withstand any scrutiny, I'm just trying to share my speculation because you asked


>Our minds aren't constructed to understand reality, they're constructed to help our bodies survive in a savannah a few hundreds of thousands of years ago.

I agree with your main point; however, I don't find this a compelling argument. Regardless of what our brains are constructed for, with pen and paper and a set of rules we can roughly approximate a Turing machine. Everything we as humanity know about computation suggests that, at a minimum, this is equivalent to any computation possible using all known physical properties & interactions, with some amount of overhead (even quantum can be simulated classically).

Thus, while it's possible there is an unknown kind of interaction that we cannot understand or simulate classically, the fact that our brain is more or less designed around survival and reproduction is irrelevant; change or optimize our DNA and the new super-human will still be at best roughly equivalent to a Turing machine in terms of computation.
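To make the "pen and paper plus a set of rules" point concrete, here's a minimal single-tape Turing machine simulator in Python; the rule table (a toy unary incrementer) is made up purely for illustration:

  def run(rules, tape, state="start", blank="_", max_steps=10_000):
      tape = dict(enumerate(tape))            # sparse tape: position -> symbol
      head = 0
      for _ in range(max_steps):
          if state == "halt":
              break
          write, move, state = rules[(state, tape.get(head, blank))]
          tape[head] = write
          head += 1 if move == "R" else -1
      return "".join(tape.get(i, blank) for i in range(min(tape), max(tape) + 1))

  # (state, symbol read) -> (symbol to write, head move, next state)
  increment = {
      ("start", "1"): ("1", "R", "start"),    # scan right across the block of 1s
      ("start", "_"): ("1", "R", "halt"),     # append one more 1, then stop
  }

  print(run(increment, "111"))                # -> "1111"

Anything a person can do mechanically with pen, paper and such a rule table is, per the Church-Turing thesis, the same notion of computation a laptop implements, just with a vastly larger rule table and tape.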


> but in real world any turing machine would be incapable of doing anything outside of its purpose.

If what the brain is doing is computational, then it's not doing anything more powerful than a Turing Machine could do. There would be some Turing Machine design that could do the same processing.


> CPUs as turing complete, programmable machine are a strict superset of what brains can do

In what way can this be proven?

It's very tempting in an era of tech-centered growth to think of computers as the solution to everything, but we are barely even beginning to understand the brain. We know computers fairly well and can talk about them, but how can we make such a claim when we don't know the other thing we're talking about?

In fact, the brain created the computer, didn't it? Therefore, from that standpoint it is arguable that the brain is a superset of the computer. It's not something I really believe (because my opinion is that you can't really equate things that are of entirely different units, one of which is unknown); I'm just playing devil's advocate.


> But the human brain, which most certainly must be a kind of computer

Must?

It's one thing to observe that the brain appears to do computation, and to have as a working hypothesis that everything the brain does is a form of computation.

It's something else to say that it must be true, when, for the moment, the brain also appears to do things that we have no idea how to make computers do, that we have significant difficulty getting computers to even pretend to do.


> You could build a brain out of any of them, unless the brain computes something that is not computable.

Could you? That's sort of begging the question. We do not know if something "Turing complete" can be used to build a brain like the human brain. That's precisely the point.

> If the brain does something that is not computable, that's a direct challenge to some of our most established science.

A challenge for computational neuroscience, maybe. Otherwise I don't see the challenge for either neuroscience or computer science. If someone wants to make the claim that you can build a human brain out of something Turing-machine-like, that's an extraordinary claim, not established science.
