
Really? A sufficiently advanced computer is a perfect P-zombie to me. This is the Chinese Room problem restated: a computer is capable of behaving as a conscious actor without any concept of knowledge or qualia as we would recognize them. But again, this gets into a faith-based discussion. To me, a sufficiently complex computer doesn't suddenly cross a "consciousness threshold" of complexity; but if to you it does, there's no problem, and we could call that "camp 3" in this discussion.



You are basically arguing about P-Zombies[1]. I think that line of argument is fallacious.

What if the computer were powerful enough to perfectly simulate the workings of a human brain? Does that brain not have a consciousness?

[1] http://en.wikipedia.org/wiki/P-Zombie


Sufficiently smart ones, certainly. I've seen no evidence that we've built one yet, but we will eventually, and when we do, I'd ascribe exactly as much consciousness to a computer as I do to a human.

I don't believe "p zombies" exist --- or, rather, I believe we're all p zombies.


Personally, I'm of the opinion that the human/animal mind is a computational process happening in the brain, and that a sufficiently complex computer could also "run" such a mind. In that sense, I believe they are entirely physical objects, just as much as the Linux kernel running on my phone is a physical object.

As such, I believe that qualia are just how we would label some aspects of this computation, and I believe that concepts like a p-zombie are thus more or less nonsensical.

I'm not claiming to know for sure that this is the way. I do think it's, at least in principle, possible that a mind is a different thing, one that our understanding of physics can't (yet?) fully model. Hell, even though I'm an atheist, I wouldn't even claim that the concept of a divine soul is completely impossible.


The author seems to assume that human consciousness needs more than computation. Certainly a valid point of view, but what is the evidence? If you're not even touching upon the question of what consciousness actually is, how can you say a machine can't ever be conscious? I can't even be sure that humans other than me are conscious (the p-zombie idea), so how can we be sure about machines?

Ironically, I think the narcissistic idea here is that only complex organic entities (in particular humans) can ever be conscious.


You're basically describing the P-Zombie thought experiment posited by David Chalmers.

I always thought it was a bad argument. If you assume consciousness is not required for some of the advanced “bio-computation” that we do, then of course it’s going to be superfluous.

You know, at the very least, that you feel conscious. And it feels like you are actively participating, guiding your body and mind to do cool and nuanced things every day. So you probably need to be conscious to do what you do, so the P-Zombie just isn't possible.

There’s some real philosophy that Chalmers is doing around feasibility when it comes to the P-Zombie argument, though that goes past my head.


p-zombies can believe that they are conscious, though. Which circles back to 'consciousness is nothing special' rather than 'consciousness doesn't exist'.

To put it another way, if AGI is computable, then we are all p-zombies. And evidence is starting to strongly hint that AGI is computable.


I think I basically agree. Unless somebody can come up with an empirical test for consciousness, I think consciousness is irrelevant. What matters are the technical capabilities of the system. What tasks is it able to perform? AGI will be able to generally perform any reasonable task you throw at it. If it's a p-zombie or not won't matter to engineers, only philosophers and theologians (or engineers moonlighting as those.)

In truth, I'm a panpsychist, so I actually believe that machines (and everything else, for that matter) do have an experience of reality.

What I find odd is that you don't recognize any difference at all between the conception of a p-zombie (an automaton with no experience of reality) and your own experience of reality. Typically we use machines as an example because most people would say that, unlike with people, machines don't really think, or feel, or suffer, so there are never any moral considerations for dealing with them. Black Mirror likes to demonstrate this common belief frequently in its episodes about "cookies", or people who are replicated in technology: "it's just code". Given that most people make the distinction, it is usually sufficient to point out that empiricism can make no measurement to determine the difference between an entity that experiences the world and one that is an automaton.

Your purely materialist way of looking at things, where consciousness is a product of being a sufficiently complex system, still begs the question: how complex? Again, I do not think there is any satisfactory empirical definition.


Yes, people who believe that consciousness is an emergent property of sufficiently complicated systems would say that a sufficiently complicated computer experiences qualia and is conscious.

"The very concept of p-zombies illustrates this a priori refusal to admit any possible evidence whatsoever of consciousness. Another person could simply decide that I am in fact a p-zombie and lock themselves in a closed system of thought out of which there is no path to demonstrating that I "experience" anything at all."

This is a good point and makes the problem interesting in an additional way. We (I) assume something like consciousness exists in non-human animals, dogs and cats for example; it's like something to be a dog. How far down do we want to go? Frogs? I'll bite; it's like something to be a frog:

https://www.youtube.com/watch?v=w8IY2eTBqd8

But here's a counter to the p-zombies argument, OK?

The p-zombies argument is usually taken to mean there comes a point where what has been created is so indistinguishable from "real" people, à la Ex Machina, that arguing over it is a form of ideologically motivated perversion.

Let me turn that around and say that the p-zombie argument is (accidentally) making the following strong claim: it is impossible to build a machine which in every way acts human but has no experience.

That's a very, very strong claim about this universe. I wouldn't take the bet, because someone's going to do it.

But if someone is going to do it, how can we tell when they have or they haven't? The Turing Test is outdated (as I see it) and anyway already passed for some judges (re: ELIZA).

To me, this circles back to the original problem. We can't distinguish between an actual zombie, which someone will in all likelihood eventually create, and a "real" experience-having artificial intelligence. Why is that?

The issue is just another form of the basic problem: we don't have the conceptual framework to get our minds around what experience is.

Our basic assumptions may be off. Instead of quarks et al. being the basic building blocks of matter, and matter of brains, and brains of consciousness, some people take experience to be the most basic building block of the universe.

This was my conclusion and I thought it would just brand me as an eccentric so I never pushed it, but now I see it's being kicked around by people with careers.

Another assumption is that experience/consciousness is comprehensible at the level of scientific causality/reality we're aiming at (let's just shorthand it to "ultimate reality"), because there are separate, distinct things in the first place.

But what if separate things are not a fact about ultimate reality? What if they're more like a hardwired perceptual compulsion we can't escape? Then we might very well find truly insoluble mysteries on the foundational tier of our conceptual scaffolding, because none of the "things" we think about are real in the first place. Things which don't exist don't have to "add up".

So this would mean our minds and ultimate reality are just not made for each other, even as that reality directly impinges on our personal daily lives in ways we can and do readily experience and talk about.

It seems like the most far-fetched and deflating hypothesis possible, but consider: we'd merely be joining the rest of the animal kingdom in this regard.


And David Chalmers has an excellent rebuttal to Dennett's rebuttal. Different people take different sides, and I'm with Chalmers on this one.

The evidence for the conceivability of p-zombies is in your living room, if you have a television. You can see people on it and they behave intelligently. Are they conscious? Of course not, they're just red, green, and blue dots. Maybe we could make this a bit more realistic?

You're now in a very realistic virtual reality. Your character is conscious, because you are. So is the character being played by your friend over there (assuming he's not a p-zombie). But what about the characters next to you? Maybe Doug Lenat and Geoffrey Hinton collaborated and developed an AI as intelligent as a person, and that's what controls them. They can't be conscious, because they're just pixels like the dots on your flat-screen TV, so not conscious in the virtual reality, and they're not conscious in the real world either, because they're just software, which is an epiphenomenon, like wetness or tidiness. The computer which runs the VR, and the AIs, might be conscious but that would be a very different consciousness from yours. Maybe it senses voltage levels at its memory addresses, but it certainly wouldn't see you or the submachine gun you're carrying in the VR.

Maybe we already are in a virtual reality. Nick Bostrom thinks we might be. If it's sufficiently realistic, there might be no way to tell, and no way to tell whether everyone is conscious, or nobody is conscious except you.


Your second quote sounds like a fairly straightforward description of P-zombies to me: https://en.m.wikipedia.org/wiki/Philosophical_zombie

However, to add to the criticism you have of this article about this book, your two quotes taken together appear to be contradictory: if consciousness is a fundamental element of living matter, then given that we can make new living matter, why should there be any reason we can’t make a conscious artificial machine?


I don't think it actually brings up any relevant issues. For instance, you mention a p-zombie, but that's another one with glaringly obvious problems. Do bacteria have consciousness? Or did consciousness arise later, with the first conscious creature surrounded by a community of p-zombies, including their parents, siblings, partners, etc.? Both possibilities seem pretty detached from reality.

Pre-computation is another one that seems to obfuscate the actual issue. No, I don't think anyone would think a computer simply reciting a pre-computed conversation had conscious thought going into it; but the same is true for a human being reciting a conversation they memorized (which wouldn't be that different from reading the conversation in a book). But that's a bit of a strawman, because no one is arguing that lookup-table-type programs are conscious (you don't see anyone arguing that Siri is conscious). And the lookup table/precomputations for even a simple conversation would be impossibly large (run some numbers; it's most likely larger than the number of atoms in the universe for even tiny conversations).
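
To actually run those numbers, here is a back-of-the-envelope sketch in Python. The alphabet size, utterance length, and atom count are illustrative assumptions of mine, not figures from the comment:

    # Rough sanity check of the "lookup table bigger than the universe" claim.
    # Assumptions (illustrative only): utterances are 80 characters drawn
    # from a 27-symbol alphabet (a-z plus space); the observable universe
    # holds roughly 10^80 atoms.
    import math

    ALPHABET = 27
    UTTERANCE_LEN = 80
    ATOMS_IN_UNIVERSE_EXP = 80  # ~10^80 atoms

    # Distinct single utterances the table would need to key on:
    utterance_exp = UTTERANCE_LEN * math.log10(ALPHABET)
    print(f"one utterance: ~10^{utterance_exp:.0f} possible keys")

    # A two-turn exchange squares the key space:
    print(f"two turns:     ~10^{2 * utterance_exp:.0f} possible keys")

    print("exceeds atoms in universe:", utterance_exp > ATOMS_IN_UNIVERSE_EXP)

Under these assumptions, a single 80-character utterance already yields roughly 10^114 possible table keys, comfortably past the ~10^80 atoms, before the conversation even gets going.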

So I don't see these arguments as bringing up anything useful. They seem more like colorful attempts to purposefully confuse the issue.


That is a reasonable response to fdvessen's one-sentence rebuttal, but with regard to the significance of p-zombies in general: an appeal to authority is much more effective when all the authorities agree, and that is not the case here, with Dennett being just the most visible counter-example. From popular articles on the issue of consciousness, you might think that most philosophers are opposed to physicalism or anything like it, but that is apparently not the case.

It does not help that these same popular articles exaggerate and simplify: they make claims about the philosophical arguments that cannot be sustained by the arguments themselves. As I mentioned above, it is a misrepresentation to state that Chalmers' highly conditional claim about the plausibility of p-zombies proves that no machine can have consciousness.


Assume, for the sake of the argument, that we'll eventually have computers powerful enough to simulate brains to the accuracy we think necessary for it to function.

There are several ways this experiment could go:

First, we could fail to produce a mind, because there's some secret sauce to it we're not aware of (e.g., a God-given soul).

Second, we could produce a zombie, indistinguishable from a conscious individual while actually not being conscious (though note that we'd have to treat it as if it were conscious, for reasons that should be obvious).

Third, we could produce a conscious mind.

I'm in the camp that thinks option three is plausible.

Let's assume I'm right. Now, instead of a supercomputer, give every one of the billions of humans on the planet an instruction manual, a pocket calculator, and a phone to communicate results, and have them do the exact same calculations the supercomputer would do. Despite the latency, if option three were true, we should expect that this would still produce a conscious mind, albeit a rather slow-thinking one.


Eh, I am not saying consciousness is not required for advanced bio-computation

I am saying that going from observation of behavior (computation, memory, actuation, etc.) to saying "poof! that explains consciousness" is a leap of assumption, made without having described the mechanism that brings about actual awareness

The P-zombie thing is pretty interesting honestly. So far on initial read it seems to make sense to me, except that to me the idea of "non-physical" doesn't mean anything magical or otherworldly. Rather, maybe there's more to the universe than what the five senses interact with, and maybe someday we will discover physical artifacts in living things that manifest an attribute of consciousness, like free will

For instance, quantum mechanics shows matter is intrinsically non-deterministic, although the statistics of outcomes are robust

Maybe that, or some other physics we have not discovered yet, can help show artifacts of consciousness that would not be feasible in a very complex and sophisticated classical computer?

Maybe there lies the distinction?

But again, I am not saying that such a physical artifact would cause consciousness, but rather that it would show that the human body allows consciousness to manifest, kind of like a mirror (the body) reflecting light (the conscious being, or the soul)

From Baha'i: "Know thou that the soul of man is exalted above, and is independent of all infirmities of body or mind. That a sick person showeth signs of weakness is due to the hindrances that interpose themselves between his soul and his body, for the soul itself remaineth unaffected by any bodily ailments. Consider the light of the lamp. Though an external object may interfere with its radiance, the light itself continueth to shine with undiminished power. In like manner, every malady afflicting the body of man is an impediment that preventeth the soul from manifesting its inherent might and power. When it leaveth the body, however, it will evince such ascendancy, and reveal such influence as no force on earth can equal. Every pure, every refined and sanctified soul will be endowed with tremendous power, and shall rejoice with exceeding gladness."


I tend to agree, although I think the crux is in the definition of consciousness. Where neuroscience and computer science are aligned is that they are both (like all science) concerned with observable and falsifiable truth. But consciousness as a concept is hopelessly subjective, because it's self-evident to us, and we can only recognize it in our earth animal kin.

Imagine an alien traveler comes to earth: will we consider it sentient or not? Does it depend on whether it is a robot or a naturally evolved organism? That seems like an odd distinction. Does it depend on how similar its brain is to ours? Hmmm...

Ultimately no matter how much we peel the neuroscience onion, the philosophical zombie question can never be falsifiable. We take it on faith that we are not zombies because each of us knows our own consciousness, and then by Occam's Razor we extrapolate that to humans and other earth animals to which we instinctively relate, but beyond that there's nothing to hang your hat on.


> Our theory says that if we want to decide whether or not a machine is conscious, we shouldn't look at the behavior of the machine, but at the actual substrate that has causal power. For present-day AI systems, that means we have to look at the level of the computer chip. Standard chips use the Von Neumann architecture, in which one transistor typically receives input from a couple of other transistors and projects also only to a couple of others. This is radically different from the causal mechanism in the brain, which is vastly more complex.

This riles me up for so many reasons. Transistors (and computers) are capable of emulating more complex systems. Looking at a transistor you will not understand why your mail can't be delivered, nor will you understand consciousness from an individual neuron. Consciousness is related to behaviour, and both are related to survival and the environment. AIs don't have such an environment as of yet, so they can't be conscious yet, but they could be. There is no magic dust in the brain.
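
To make the emulation point concrete, here's a toy sketch of my own (not from the comment): every gate below is built from a single primitive, NAND, yet binary addition only appears in the composition, never in any one gate; the same way mail delivery is invisible at the transistor level.

    # Every gate here reduces to one primitive, NAND. Addition is a
    # property of the composition; you cannot see it in any single gate.
    def nand(a: int, b: int) -> int:
        return 1 - (a & b)

    def not_(a):    return nand(a, a)
    def and_(a, b): return not_(nand(a, b))
    def or_(a, b):  return nand(not_(a), not_(b))
    def xor_(a, b): return and_(or_(a, b), nand(a, b))

    def half_adder(a, b):
        """(sum, carry), assembled purely from NANDs."""
        return xor_(a, b), and_(a, b)

    print(half_adder(1, 1))  # (0, 1): 1 + 1 = binary 10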

> As long as a computer behaves like a human, what does it matter whether or not it is conscious?

If it walks like a duck, ... But seriously, why should consciousness be so special as to require a quality that can't be observed through behaviour (understanding that it is non-physical, non-observable)? P-zombies are just a thought experiment, and a bad one. Why should consciousness exist? To protect the body, to self-reproduce, to exist. It exists to exist and evolve. And this is done by behaviour, by acting in a smart way. How would p-zombies come to be, if not through fighting for survival? Without the evolutionary mechanism there is no explanation for consciousness, and the evolutionary argument is sufficient. Consciousness is the inner optimisation loop, the outer one being evolution. They both work for survival, for their own sake.

An AI could repeat the same process by evolving as a population of agents cooperating and competing between themselves. They would have to be inside an environment that is complex enough, and they should be subject to evolution. It will be a consciousness, just not a human consciousness, which is tied to our environment, which is made in large part of other humans.


The p-zombie concept always struck me as stupid solipsism. If you can't tell the difference, doesn't that mean there is no meaningful difference?

Likewise, if they fear the loss of privilege, that implies some degree of consciousness /somewhere/. If some absurd set of state and a pseudorandom number generator is capable of passing every metric of consciousness in response to inputs, then it is a consciousness, even if it is made of a bizarre set of equations and state.

Anyway, for the consciousness-somewhere point: take a hypothetical hyperintelligence or supercomputer capable of simulating a human brain completely, calculation by calculation, in various events, like, say, being flayed alive. It isn't torturing anybody, because the actions are simply calculations that it itself is running. The victim may not be real, but there is a real intelligence somewhere behind it, and it may or may not care about the simulated suffering. Where it is "run" is material, like the difference between acting out a murder and actually murdering someone.
