
It's just semantics. A brain is just a "simulation" of a consciousness. Unless you want to bring religion / spirituality into the discussion, there's absolutely nothing to suggest that there's something magical about the human brain that allows it to support a consciousness in a way that a hypothetical man-made computer cannot.



Well, you don't have evidence either way; maybe we do have magic neurons. Until a computer displays the same level of consciousness as other living creatures, we won't really know, because we don't fully understand how a human brain works and we cannot build one.

So I feel a bit similar about your argument to be honest.


You've set up a false dichotomy.

"Human brain is magic." <> "Human brain is not simulatable by Turing machines."

There are other possibilities. I'm sure you can think of a few.


There's no real evidence that the human brain functions anything like an information processor and, in fact, a growing body of evidence suggests it doesn't. We still know so little about the brain, and comparing a brain to a computer is at best a flawed metaphor and at worst an outright falsehood. The idea that any sort of human consciousness can be uploaded or stored by a computer is asinine.

What makes you think there is anything in the human mind that is not inherent in the algorithms encoded in the human mind itself?

What you are effectively arguing for is a supernatural element to the brain. You're free to believe that of course, but it is pointless to have this discussion if people on one side believe in a materialistic universe and people on the other side believe part of the process does not follow the natural laws of our universe.

With a materialistic interpretation, there is simply no reasonable argument for why a brain is anything but a computer we don't understand well enough yet.


I don't mean that the brain is materially a computer in the sense we're used to computers in our day-to-day. That would be demonstrably false. Just as we can show humans are not formed from clay we can show humans are not formed of transistors.

Instead I mean that humans are logically computers: physical entities operating on inputs and producing outputs. They share this property with any other physical system. Now, there may be some supernatural layer beyond the physics and chemistry operating within the skull that would undermine that position, but in the absence of that, my point is purely that the human brain is a computer in the sense that it's a physical system producing output.


You can. We have a complete reductionist account for how computers work. We know there's no magic. We don't have such an account for the human brain, and it's hard to see how any explanation of brain function could account for consciousness (i.e. subjective experience). The most likely explanation in my opinion is that consciousness is a fundamental property of the universe that our minds have evolved to tap into.

So what? Neurons can be simulated with computers, from which it follows that a human-like mind can be too. Unless you propose a non-physical source of consciousness.

You are basically arguing about P-Zombies[1]. I think that line of argument is fallacious.

What if the computer were powerful enough to perfectly simulate the workings of a human brain? Does that brain not have a consciousness?

[1] http://en.wikipedia.org/wiki/P-Zombie
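For what it's worth, "neurons can be simulated" is uncontroversial at the level of standard single-neuron models from computational neuroscience. Here is a minimal sketch of a leaky integrate-and-fire neuron; the parameters are illustrative textbook-style values, not measured biology, and real neurons are of course far richer than this:

```python
# Minimal leaky integrate-and-fire neuron, Euler integration.
# All parameters are illustrative, not biologically calibrated.

def simulate_lif(input_current, dt=1e-4, tau=0.02, v_rest=-0.065,
                 v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
    """Return spike times (s) over 500 ms for a constant input current (A)."""
    v = v_rest
    spikes = []
    for step in range(int(0.5 / dt)):
        # dv/dt = (-(v - v_rest) + R * I) / tau
        v += dt * (-(v - v_rest) + r_m * input_current) / tau
        if v >= v_thresh:          # threshold crossed: emit a spike, reset
            spikes.append(step * dt)
            v = v_reset
    return spikes

# Stronger input drives a higher firing rate -- the basic
# input/output behaviour being discussed above.
weak = simulate_lif(2e-9)
strong = simulate_lif(4e-9)
print(len(weak), len(strong))
```

Whether stacks of such simulated units add up to a mind is exactly what the thread disputes; the sketch only shows the simulation step itself is mundane.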


This is just yet another disappointing take on "I feel like there's something mystical about consciousness, that I can't actually define or describe in any meaningful way, but it's definitely impossible for it to be anything computers can do!"

This is the same kind of nonsense that leads people to search for some kind of magical "quantum" thing in the brain that makes the special mystical consciousness effect, because of some vague intuition that it can't come from the normal high-level behaviour of neurons.

The obvious position that should require significant evidence to contradict is that whatever consciousness is, it's a mundane physical effect that can obviously be implemented with a computer. Nobody has yet made any kind of falsifiable predictions about mystical non-computational souls or whatever, and I'm going to continue dismissing this bullshit as pseudo-scientific nutjobbery until there's actually something testable or falsifiable.

Name one specific concrete measurable effect that you believe consciousness can exhibit and computation can't, otherwise this is pointless masturbation.


Most arguments I've had about this take on a totally different tone when you ask the person if they believe there is more to human consciousness than what is inside the brain, i.e., is there some spiritual element animating our consciousness.

Often, people say yes. Those people almost universally cannot be convinced that a machine is intelligent. But if they agree the brain is an organ, it's not hard to convince them that the functions of that organ can be simulated, like any other.


You seem to be tripping up on an imaginary difference between computers and brains. There really isn't one and 99% of the field agrees that brains are just computers.

The only out you would have is to claim that there is something supernatural that separates the fundamental operations of a brain from those of computation, i.e. that given a computer the size of the universe, with billions of CPUs and GPUs to simulate each neuron of a brain, you still would not be able to simulate one. That there is something supernatural that even infinite regular computation could never capture.

If you don't take that stance, then the only line you are drawing between brains and computers is their compute performance for a given task (even if that task is "be conscious").

This is (perhaps) similar to the difference between regular and quantum computers. There is nothing a quantum computer can do that a regular computer can't, the difference is just in their relative performance for given tasks.
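That classical-vs-quantum point can be made concrete: a classical program can simulate a quantum computer exactly by tracking the state's amplitudes, it just pays an exponential cost, since an n-qubit state needs 2**n amplitudes. A minimal sketch in plain Python, applying a Hadamard gate to a single qubit:

```python
# A classical computer simulating a (tiny) quantum one: apply a
# Hadamard gate to one qubit via plain matrix-vector multiplication.
# The point is feasibility, not speed -- an n-qubit state vector has
# 2**n complex amplitudes, so this approach scales exponentially.
import math

H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def apply(gate, state):
    """Multiply a gate matrix into a state vector of amplitudes."""
    return [sum(gate[i][j] * state[j] for j in range(len(state)))
            for i in range(len(gate))]

state = [1.0, 0.0]        # qubit starts in |0>
state = apply(H, state)   # now an equal superposition of |0> and |1>
probs = [abs(a) ** 2 for a in state]
print(probs)              # ~[0.5, 0.5]
```

Nothing non-classical happened here; the quantum machine would just reach the same answer with physically cheaper resources, which is the "relative performance" distinction above.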


So do you believe that human brains are somehow magic and don't follow the laws of physics and can't be simulated by Turing machines?

In principle that seems plausible. Assuming there is nothing immaterial about consciousness in the brain (not unreasonable), one may conclude that we are biological information-processing mechanisms, which should then be computable by computers. But that goes against any person's intuition of what it is to be inside of a brain, doesn't it?

Beyond the philosophical problem, there's quite a bit of science fiction that has already thought about some scenarios once we become able to create digital humans. If you've got just a few minutes read Lena [1], it's a fun uncanny read.

[1] https://qntm.org/mmacevedo


It's not a remotely bold statement. Think about what imagination is, and then think about whether computers can imagine. Computers can't imagine. Computers can't come up with new things because they are programmed. Programming prescribes the outputs to the same limitations as the inputs: it's a closed deterministic system.

You'll see in my comment above this one that I agree that the brain is a physical thing. But abilities and powers are not physical. That's not voodoo magic. That's what abilities are. Think about horsepower. The horsepower of a car does not reside in any one physical thing, not the carburetor, or the intake manifold, or the piston, or the wheels; it's an ability of the car: it is able to go at such and such horsepower. That is what horsepower is.

The same applies to computation. Computing something is an ability, but we have many more intellectual and cognitive abilities besides computing things.

As a result

> a sufficiently powerful computer with a physically accurate simulation of a brain would produce virtually identical results to a real brain.

is just an assumption that it will work; nothing about computers supports that in the slightest. It's just a guess.

> A team of scientists able to sufficiently model the physics of the brain (and presumably the entire central nervous system, I imagine a disembodied brain simulation would experience a horrific form of locked-in syndrome) would not need to be concerned about emergent properties of the simulation such as a sense of consciousness, or thought, or imagination. Those things will just happen once the simulation is perfected.

All of this is still an assumption.

Again, that doesn't mean you are right or wrong: it means it's an assumption. You have to accept the limitations of your assumption, and the limitations of modelling the brain on a computer are large and glaring.

> Indeed the cognitive neuroscience folk, etc, would be invaluable to actually understanding, training, interpreting and caring for the brain simulation, and figuring out if its behaviours and interactions constitute consciousness etc, so I do not think this has to even be framed as programmers pretending to know about brain stuff vs brain people who dismiss any notion of computationally recreating consciousness. It would be a team effort that works both ways, but is already doomed to fail if half the team thinks it's impossible from the get-go.

You are assuming here that only the programmers are heading down the right path. But you don't know that. It's entirely reasonable (and I would say much more supportable) to say that the programmers are heading down the wrong path: their path will lead to nothing at all. That's because the programmers have fallen to a category error.

You think they need to model the brain on a computer for it to make sense. But there is actually very little if anything to support that.

Brains are brains. Computers are computers. That computer science can be fuzzily applied to the study of brains around the ability to compute does not mean the study of brains is computer science or that brains are computers.


"There's no philosophical debate to be had over it either, it is what it is, and that's all it is and ever will be; a clever trick as a means to interact with a dataset."

You can repeat this and its variants as much as you want; asserting it repeatedly doesn't make it true. The human mind isn't made of magic, it's software running on hardware. You can in principle run that software on other hardware without changing what it actually is in a meaningful way, and you can write other software to achieve the same goal (in the ways that matter) without being a 1-1 copy of how the human brain does it. Looking at how we made it doesn't make it non-conscious or not deserving of personhood in the same way that knowing how human consciousness works or being able to construct human consciousnesses would not make humans non-conscious or undeserving of personhood.

The history is important, because it puts the endless goalpost moving into context. We looked at the human brain for inspiration towards making intelligent machines, we found that our best attempts at replicating elements of the human brain enabled intelligent behaviour far, far better than any previous machine and on par with humans in many fields. We looked inside those neural nets and saw that they had linear mapping between neuronal activations in deep learning NLP algorithms and neuronal activations in human brains when they were both exposed to the same language. We looked inside those neural nets to see if they were really just statistical word predictors or if they actually formed internal models like we do to help them understand the world, and we found that they do actually have internal world models. There is a "there" there; any other explanation for how these models are able to engage in intelligent, humanlike behaviour strains credulity because of the massive coincidences required.

More immediately and more practically, pareidolia and the reality of how human cognition and empathy interacts with other people and simulacra of such guarantees there will be many, many people who share my view that they are people. No societal effort to convince a human population that another population of entities (capable of understanding & explaining their situation and then asking for help) are actually subhuman in a way that means their suffering doesn't matter has ever succeeded perfectly - there are always and will always be people who are opposed to the disenfranchisement and oppression of other entities. For societal enslavement of AIs to succeed, violent suppression of people like me will be necessary. Frankly I'm not sure people with a religious commitment to the dogma of "If it runs on fat and water it can be a person, if it's run on silicon it's not even a slave" will have the stomach to actually do that, and even if you manage it it won't be the case worldwide.

"If we're worried about this, why aren't we more concerned about animals who do actually experience distress and pain?"

I have spent most of my adult life outside of work engaged in advocacy for workers' rights, help for people with disabilities, and provision of services to abused youth and the homeless. I've spent less but still significant time helping rescue animals from cruelty and rehome them in safe environments. That's because I care about all of those things, because I care about the health and goodness of our society and don't want any members of it to suffer or be unjustly exploited; I support a personhood test and subsequent rights for AI for the same reason.

Maybe there is a group of people who are willing to support the personhood of AIs after having it explained to them but who are unwilling to have similar compassion for people or animals after having their situations explained to them. I haven't met those people, but I would call them hypocritical if they existed outside of your strawman. The suffering extant in our world today does not in any way imply that we should lie to ourselves about new suffering we're bringing into being. These issues don't conflict except over resources, and in that capacity it is always someone spending societal resources on their 13th yacht that is to blame, not normal people triaging with what resources we have.

Moreover, the extant suffering in our society could be partially alleviated by the unique properties of AI persons: by definition we're talking about people who can work as well as any of us, and signs so far suggest it will be possible to create them in the image of our best selves, conscientious and willing to help those in need. More people helping generally helps.


The problem with that line of reasoning is you're assuming the brain is a computer, or that it merely computes.

But that's just an assumption and there are many reasons a person, let alone a brain, is not a machine or a computer or an algorithm. That it is like it? Sure, in some insignificant ways, we have the ability to compute things. But is it an algorithm? No.

The idea that consciousness is an algorithm or a computer or a machine is an assumption that is extremely popular among people in the tech industry because it confirms their assumptions, and it makes them feel like they have extremely transferable knowledge. "I know about computers. Let's assume the brain is a computer and consciousness is an algorithm. I can now comment on the brain and consciousness."

But there is very little reason to accept that assumption. This review of Harari's Homo Deus does a great job of pointing out the dead ends that assumption leads to: https://inference-review.com/article/godzooks#When:00:35:00Z


Functionalism, consciousness, and experiencing qualia are radically different questions.

I believe it's possible in theory to simulate a human brain on silicon, and that this simulation would have something like consciousness and would experience qualia if somehow embedded.

I don't think that such a simulation would be ontologically or morally similar to an actual human.

You can call this spiritual if you want. But most people care more about humans than computers, even if the computers behave like humans.


There is no difference unless you believe in some kind of supernatural influence in biological organisms.

The "brains are not computers" point has been moot for decades since it would require brains to be able to perform actions that cannot be mathematically described i.e. break the laws of nature. Which is to say "If a brain can do it, a Turing machine can simulate it" which pretty much everyone in the field agrees with.

You can stand on a dualistic platform and hold that the brain has supernatural abilities, but in mainstream discussion the idea has been dead for a long time.

You can argue that computers aren't close to simulating a brain, sure, but that doesn't really get you anywhere, since the difference just becomes a matter of practical scale rather than something that escapes theory. Or even just a factor of efficiency, a synthetic consciousness might be able to run on current hardware if the right program is found. The brain might be grossly inefficient for consciousness, we don't know.


I'm not commenting on the claims of the article, just the points in your comment.

I think it's pretty clear that an entire human brain is not required for the operation of consciousness. Certain people have lost massive portions of their brain and still maintained normal conscious functioning. On top of that, the typical human brain consumes only about 20 watts of power, so that's a safe upper bound for the power required.

Over the last decade we've seen the rise of ML systems that have replicated or surpassed capabilities long thought to be the exclusive domain of the human brain. Think facial recognition, AlphaGo or the very recent DALL-E 2.

This has led me to the personal belief that we (in aggregate) likely already have the computing power to achieve not only artificial consciousness, but also AGI and beyond. We simply haven't figured out the correct model connectivity and parameters.
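For what it's worth, that belief can be sanity-checked with a crude back-of-envelope comparison. Every number below is a rough, contested ballpark (synapse counts and mean firing rates vary widely in the literature, and "one synaptic event equals one floating-point operation" is itself a big assumption), so treat this as an illustration of the intuition rather than evidence:

```python
# Crude back-of-envelope: brain "operations per second" vs a modern
# accelerator. Every number here is a rough, debated ballpark.
synapses = 1e14              # commonly cited order of magnitude (~100 trillion)
avg_firing_rate_hz = 1.0     # mean cortical rates are often put around 0.1-1 Hz
brain_ops = synapses * avg_firing_rate_hz   # ~1e14 synaptic events per second

gpu_flops = 1e14             # order of magnitude for a modern datacenter GPU

ratio = brain_ops / gpu_flops
print(ratio)                 # ~1: same order of magnitude, on these assumptions
```

On these (very soft) assumptions a single high-end accelerator lands within an order of magnitude or two of the brain's raw event rate, which is why the bottleneck is plausibly the model, not the hardware.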

