Consciousness is worth thinking about (hack.ly)
61 points by ca98am79 | 2013-08-11 16:33:37+00:00 | 85 comments




Imagine consciousness having an evolutionary advantage. Imagine the possibility of a complex neural network "binding" a part of external consciousness to itself. Then you have external but brain-trapped consciousness as an evolutionary possibility.

How to prove: First step is to find some creature with nerves but provably lacking consciousness.


> Imagine consciousness having an evolutionary advantage.

Because consciousness exists among apparently successful species, it's a reasonable working hypothesis that it has survival value. In evolutionary studies, the burden of evidence is properly borne by those who would argue that an existing, seemingly adaptive trait doesn't have survival value.

> First step is to find some creature with nerves but provably lacking consciousness.

As to "provably", that can't be done. To prove a lack of consciousness, we would first have to create an objective positive test for consciousness. To do that, we would have to fully define and objectively quantify consciousness. We're nowhere near that stage.


We can try to figure out how to create consciousness in something that didn't have it, or to remove consciousness from some living being, but we can't prove success before we understand what consciousness is.

By the way, if we were to find why consciousness has an evolutionary advantage, we'd have a subset of what consciousness is. A set of traits for which we can test.


> By the way, if we were to find why consciousness has an evolutionary advantage, we'd have a subset of what consciousness is.

Not necessarily. There are plenty of fitness adaptations we can crudely quantify (group A had adaptation X and was more successful as a result) but can't explain (in a rigorous, falsifiable way). The reason is that nature (natural selection) is "smarter" than we are.

Also, the Turing test -- the idea that a sufficiently advanced computer program might pass as human if accessed at a text-only terminal -- tells us that our idea about what counts as a conscious being is pretty limited.

> A set of traits for which we can test.

If we chose a set of traits that we thought identified a conscious person, we could just submit the set to a computer for blind simulation. My point is that consciousness is more complicated than simple behavior, however complex.

It is said that elephants pass the mirror test, meaning when they look in a mirror, they apparently realize they are the source of the reflection. This test has come to be a crude measure of consciousness and self-awareness. But I could easily program a robot to imitate the external behavior, without any real sense of what it was doing, or anything resembling consciousness.
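
To make that concrete, here is a minimal, self-contained sketch of what such scripted imitation could look like (every name in it is invented for illustration; a real robot would need vision and motor control, but the decision rule would be just as shallow):

    // A toy "robot" that "passes" the mark test purely by rule: if the mirror
    // image shows a mark where its own forehead should be, reach up and wipe it.
    // No understanding is involved; it is a few lines of scripted behavior.
    #include <cstdio>

    struct MirrorImage { bool markOnForehead; };   // what the camera sees

    struct Robot {
        void observeMirror(const MirrorImage& m) {
            // Scripted rule: a mark in the reflection means a mark on "my" forehead.
            if (m.markOnForehead)
                std::puts("reach to own forehead and wipe the mark");
            else
                std::puts("nothing to do");
        }
    };

    int main() {
        Robot r;
        r.observeMirror(MirrorImage{true});   // experimenter painted a spot: robot "passes"
        r.observeMirror(MirrorImage{false});  // clean reflection: no reaction
    }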

Consciousness is more complex, and more subtle, than most people realize, and we really haven't even begun sorting it out.


Am I the only one who thinks the mirror test is stupid?

You may not recognize your recorded voice as yours. You may not recognize yourself in CCTV footage. It's not a given.

On the other hand, it's trivial to make a robot pass the mirror test; indeed, it's necessary in environments where mirrors are common.


> Am I the only one who thinks the mirror test is stupid?

It's pretty reliable, and the scientists who measure it have a number of ways to be sure they're witnessing what they think. One way is to put a colored spot on an animal's forehead in advance of the test. If the animal looks in the mirror, sees the spot, and then reaches to his own forehead to remove it, we can be pretty sure he grasps that he's looking at a reflection of himself.

But as to the outcome supporting or not supporting some theory of consciousness in animals, that's a different story.

> On the other hand, it's trivial to make a robot pass the mirror test; indeed, it's necessary in environments where mirrors are common.

That's certainly true, and it's a good argument for the limited significance of the mirror test in ideas about consciousness.


While the mirror test isn't a positive test for consciousness, for a species not to be able to pass it indicates that its 'mental model' of the world is lacking something that would seem to be a necessary component of self-awareness and (therefore?) consciousness.

I think the mirror test, and consciousness, are separate issues.

1. As someone else here pointed out, a computer can be programmed to pass the mirror test; indeed, a sophisticated robot must be able to do this to navigate a complex environment successfully. But that robot need not be conscious to do so.

2. A conscious creature (a creature that shows classic signs of being conscious) isn't necessarily able to pass the mirror test. In this article --

http://en.wikipedia.org/wiki/Mirror_test

-- we read that "Primates, other than the great apes, have so far universally failed the mirror test." I think many primates would meet various criteria for consciousness, but can't pass the mirror test.

Interestingly, in the linked article, birds of the family Corvidae (crows, ravens, European magpies, gray jays (aka "camp robbers"), and others) pass the mirror test, possibly the only bird type that can. Corvids are very smart birds.

http://en.wikipedia.org/wiki/Corvidae


Elephants have been seeing their reflections when drinking from lakes and ponds for millennia. Their brains evolved to identify with those patterns. The mirror test just shows whether a species has evolved enough to identify with that "pattern".

Humans are just animals, nothing special. Their consciousness is no different from, say, a crow's or a fox's (cunning). I would say one thing, though: the human brain is probably one of the most fluid and flexible organs ever created by nature. Take two humans: put one in jail from birth (don't even teach him language), and put the other in Silicon Valley from birth and send him to Stanford. Now compare the "consciousness" of these two. Or even better, put the former human in the jungle with monkeys from birth (don't teach him anything). I am sure there is a lot of existing research and scientific literature on this.


> Elephants have been seeing their reflections when drinking from lakes and ponds for millennia. Their brains evolved to identify with those patterns. The mirror test just shows whether a species has evolved enough to identify with that "pattern".

Yes, but many animals see their reflections in watering holes -- only a few of them understand that they're seeing their own reflection.

> Humans are just animals, nothing special.

That's certainly true.

> Take two humans: put one in jail from birth (don't even teach him language), and put the other in Silicon Valley from birth and send him to Stanford. Now compare the "consciousness" of these two.

This diminishes the role of inheritance, but over time research, especially twin studies, has produced increasing support for the idea that we're much more shaped by inheritance than environment.


> Yes, but many animals see their reflections in watering holes -- only a few of them understand that they're seeing their own reflection.

It's nature doing its thing. Only a few species develop certain faculties, and humans just happen to have developed the "mind" faculty. I wouldn't be surprised if some other species on a different planet developed telepathy or some other advanced faculties.

> This diminishes the role of inheritance, but over time research, especially twin studies, has produced increasing support for the idea that we're much more shaped by inheritance than environment.

Inheritance only gives you latent abilities. They will not flower if there is no proper environment for them to be actualized. Everything is just a manifestation of nature. We are all just different "expressions" of the ultimate (aka "god" in some religions).


> How to prove: First step is to find some creature with nerves but provably lacking consciousness.

This doesn't seem entirely workable, since it's impossible to prove that anything is/isn't conscious without making auxiliary assumptions (Chalmers calls these "bridging principles"), and these auxiliary assumptions typically fall outside of scientific consensus. No one has conclusively proven that anything lacks consciousness, living or otherwise.


> How to prove: First step is to find some creature with nerves but provably lacking consciousness.

This creature is called a philosophical zombie. See its wiki page at http://en.wikipedia.org/wiki/Philosophical_zombie for a list of arguments regarding the zombie.


I don't think a real philosophical zombie would be indistinguishable from a normal human being.

If it were, why aren't we all philosophical zombies? Why have we evolved to have consciousness? And that seems probable.


I remember an aside in a neuroscience textbook which asked if, one day, a mystery like "where consciousness comes from" might seem as oddly posed as "where life comes from". Once you've broken "life" into its biological and chemical components - cells, proteins, chemical reactions - incredibly complex as they all are, the notion of "life", or of some kind of essential "life force" just kind of disintegrates. Once you get down to that microscopic level, the illusion breaks down.

But with life, everything is physically observable. How would you observe conscious experience in something other than yourself? I don't mean the behaviors associated with consciousness, but conscious awareness itself?

People do a lot of handwaving around this issue and claim we've got it cracked just because we sorta know how the brain works, and yet we don't have a way even to detect whether this thing we're talking about exists. Much less do we know how it works, or how it's created. Our supposed scientific explanations are just a lot of mumbo jumbo when we can't even measure the thing we're talking about. We can detect brain activity, but we're just assuming that's the same thing.

I think ultimately this will be resolved experimentally, but we're a long way from accomplishing that.


I think the point is that we perhaps should not expect to find a particular mechanism for consciousness, any more than we should expect to find a life-force.

It could be that the "feeling" of consciousness is just something that comes from being a massively-parallel, unsupervised learning machine.


Why are we drawing the line at "cells", "proteins" and "chemical reactions"? You're going to have to get to the most fundamental level to really break down the illusion.

I don't think we do. There are lots of chemical reactions in the world, happening all the time. ~99.999% of them don't then beget further reactions. The rest are the ones we refer to as 'life'. I'm not sure if anyone would ever need a more complex idea for life than that.

Whether David Chalmers believes that consciousness doesn't come from the brain depends on what you mean by "comes from." He believes that physics can completely describe the physical world; he does not believe in souls and interactionist/Cartesian dualism. Instead, he suggests that information may have both physical and mental manifestations.

While I once found this line of reasoning convincing, I'm no longer so sure it matters. While consciousness itself may be non-physical, under the assumption that physical effects have only physical causes, we should be able to explain why we say we are conscious in physical terms. I'm not sure that an explanation of the latter will also explain the former, but in the end, I don't think we will care. I think we will find that the everyday concept of "consciousness" mixes together many different ideas, each of which are simpler to solve on their own than they are when we consider them together.

Roughly a century ago we went through a similar process with the concept of life. People wanted to know what makes a thing alive, and many intelligent scientists were drawn to the idea that there is something innate to a living organism that makes it alive. After decades of biological research, we're still not sure what makes an organism alive, in the sense that the line between living organisms and inert biological material is not well-defined, but we know how living organisms reproduce and grow, and that's really what matters.


> After decades of biological research, we're still not sure what makes an organism alive, in the sense that the line between living organisms and inert biological material is not well-defined, but we know how living organisms reproduce and grow, and that's really what matters.

It might be as simple as "it's alive while it's a functioning machine." A disassembled car is just parts and can't drive/be-alive, but assembled it is a car and can drive/be-alive.


I've come to the conclusion that it is yet another useless question. Or perhaps "it's alive if it interests us". Is the great red spot on Jupiter alive? I don't know, but who cares if it is interesting enough to study? Imagine we finally manage to land on an alien planet, and all we find is robots "living" there. Would we go "ah, too bad, we didn't find life"?

I don't think the question is useless, but rather very difficult to answer. We simply don't know enough yet to answer it, but might someday.

As for alien life, I expect if there's anything out there, it's probably intelligent, redundant, and efficient. Who knows what form that could take? It might be robots, or computers, a mesh network, or something else entirely. Whatever form it takes, it'll be limited by the constraints of the universe (or multiverse, or whatever "reality" is).

A lot of people's ideas about aliens are molded by television and movies. Historically we've had humans playing those characters, severely limiting how "alien" they can be. While very inspirational and valuable, this has the downside of potentially pigeonholing people's minds.


The interesting question, in my opinion, is why we find something interesting. For example, if we hope to find alien life, what do we hope to find? That is the real question, not some arbitrary definition of life.

To put it kindly, I'd say this is sophomoric philosophy.

The OP suggests that "a more simple and elegant resolution to this paradox is that there is simply one consciousness, and that it alone observes (possibly, decides) these quantum states."

This raises the obvious question: where does that consciousness come from? The author makes no attempt to address that problem. The answer he gives, "the UNIVERSE is like ALL consciousness, man", doesn't really have substance.

There is other crackpottery, such as the claim that the material world is created by consciousness. This is unscientific (and lousy philosophy) because it's a philosophy that explains something without giving us any insight or deeper understanding.

The suggestion that "consciousness is the present moment" is also confused. Brains experience the present moment together because the present moment is part of the state of the individual brains. If two brains don't share any common input then they don't share the present moment either. Or to put it in a different way, we wouldn't share the present moment in a meaningful way either with a brain that lives in the same world as us but runs 1000x faster or 1000x slower.

I could pick this apart further, but I don't see the point. This article really isn't up to HN quality standards.


Your comment is sophomoric; I'd pick it apart, but I don't see the point. Your comment really isn't up to HN quality standards.

A lot of people check the comments before reading an article. My comment is intended as some sort of public service announcement: don't bother reading the article. I think there's value in that.

Of course, if I posted a blog post with the same contents and titled it "Consciousness is worth thinking about: a rebuttal" it would certainly deserve to get flagged.


Your comment doesn't really contain any "picking apart". You just pose a few questions and frame the article's author as someone not to be taken seriously.

You can't argue against someone simply by saying "what you said is crackpottery".

I found more value in the article than your comment.


  "My comment is intended as some sort of public service announcement: don't bother reading the article."
Isn't this by and large antithetical to the pursuit of knowledge? It's not quackery, it's not sophomoric, it's an inquisitive exploration of an idea.

Also, please see: http://duncancarroll.tumblr.com/post/57994245946/no-really-c...


And:

> Around this same time I became aware of the problem of observation in quantum mechanics. At its basic level the issue is that you can affect particles by simply observing them.

That's simply because you can only measure by adding energy to the system, which obviously changes the state of the system.


That makes it sound like a straightforward classical effect. It's weirder than that. In the two-slit experiment, if you measure which slit a photon (or other small particle) passes through, then after a bunch of particles you get two spots on the wall. If you don't measure, you get an interference pattern, as if the particle were a wave that passed through both slits.

This is true even if you make the decision about whether to measure, after the particle has already passed through the slit. Check the measurement after the particle passes the slit, but before it hits the wall, and the interference pattern goes away.
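
A rough numerical sketch of the difference (the geometry and wavelength below are made-up values, and the two slits are idealized as point sources): with no which-path information the two slit amplitudes add as complex numbers and you get fringes; with which-path information only the probabilities add and the fringes vanish.

    // Two idealized point-source "slits" separated by slitSep, screen at screenDist.
    // unmeasured: |a1 + a2|^2   (amplitudes add -> interference fringes)
    // measured:   |a1|^2 + |a2|^2   (probabilities add -> flat, no fringes)
    #include <complex>
    #include <cmath>
    #include <cstdio>

    int main() {
        const double PI         = 3.141592653589793;
        const double wavelength = 500e-9;              // 500 nm light (assumed)
        const double k          = 2 * PI / wavelength; // wavenumber
        const double slitSep    = 50e-6;               // slit separation (assumed)
        const double screenDist = 1.0;                 // slit-to-screen distance, meters

        for (int i = -10; i <= 10; ++i) {
            double x  = i * 0.002;                                // position on the screen
            double r1 = std::hypot(screenDist, x - slitSep / 2);  // path from slit 1
            double r2 = std::hypot(screenDist, x + slitSep / 2);  // path from slit 2
            std::complex<double> a1 = std::polar(1.0, k * r1);
            std::complex<double> a2 = std::polar(1.0, k * r2);

            double unmeasured = std::norm(a1 + a2);
            double measured   = std::norm(a1) + std::norm(a2);

            std::printf("x=%+.3f m   no which-path: %.3f   which-path known: %.3f\n",
                        x, unmeasured, measured);
        }
    }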


Hi, I am the OP. Thank you for taking the time to read my article and to write a response.

> where does that consciousness come from?

Great question! Yes, I make no attempt to address that problem, only because the main point of my article was to raise awareness that there is a possibility that consciousness doesn't come from the brain. The important question you bring up is extremely interesting in itself, and I really have no idea what the answer is. It reminds me of gravity: we have theories about it, but we don't know where it comes from.


If you're interested in consciousness, I recommend reading some of Antonio Damasio's books. I really enjoyed "The Feeling of What Happens".

He has some fascinating theories on the machinery of (brain-originated) consciousness, embodiment and the primacy of emotion, and he tends to explain them in the context of his various clinical cases (eg lesions to certain brain areas such as the central gray will irrevocably destroy consciousness, while others can have much more subtle effects).

I think his concept of "core consciousness" seems close to what meditation practitioners are trying to explore.


I've discussed this topic of consciousness with others, and there is a visceral reaction (like the one in this comment) that is common when the presumed location of consciousness is challenged, especially among programmers, and I'm curious why.

One thought I've had is that programmers (given our profession) tend to fetishize the idea that ultimately we will understand the brain well enough to recreate consciousness in machine and code. It's pretty exciting to believe that as a programmer; much like it was probably exciting to think that we were at the center of a Ptolemaic universe.

I'm okay with someone wanting to ask whether the world is really flat, even if he doesn't address what other shape it could be. I found this article to be a very interesting questioning about some assumptions we have regarding consciousness.


I think most of the "argument" comes from the conflation of the "what is it like" with "where does it live" questions.[0]

Most of the "dumb" arguments about "where does it live" are actually "what is it like" questions, most of which are thoroughly disposed of in Dennett's Quining Qualia.[1]

[0] http://lesswrong.com/lw/no/how_an_algorithm_feels_from_insid...

[1] http://ase.tufts.edu/cogstud/papers/quinqual.htm


What's the big deal about consciousness anyway??

I don't see the 'magic' about being self-aware. Surely it's just a step up the cognitive ladder from existing without awareness.


“Life is far too important a thing ever to talk seriously about.” Oscar Wilde

What is "cognitive ladder"? Where does it come from? How many steps does it have? Are we the last one?


I can imagine a race of aliens rejecting human consciousness on the basis of our not passing a smell mirror test.

Let's suppose dogs would qualify, which would humiliatingly put humans pretty low on the ladder.


It's certainly worth thinking about longer than the author has. I don't want to be mean, but the part about QM, in particular, was just infuriating.

I wish we could send out a memo to the world that says "Please do not draw metaphysical conclusions from mathematics or science unless you are an expert on the field you are drawing conclusions from."


I'd tear up the memo. Ignore, demolish or be persuaded by a conclusion by all means but don't define who may or may not contribute it on the basis of a fitness-to-participate filter.

I didn't say anything about prohibition. I said I want to send a memo requesting that people not do that. It's for their own dignity as much as anything. Please don't try to make this a "freedom of speech" thing. It's not.

But it's silly to try to generalize to philosophy from science and math that you have only a superficial grasp of. It can be a good way to look deep to people who are similarly ignorant (see Deepak Chopra), I suppose, but it's not a good way to find the truth.


David Chalmers is an expert, and you sir are not. Keep it classy.

David Chalmers is most certainly not an expert on quantum mechanics (and Chalmers doesn't seem to be someone the author is relying on in his points about QM). Neither am I, but I would make a strong bet that I have a better understanding of it than the author of this article.

For one thing, he seems (like many laymen) to fundamentally misunderstand what an "observer" is in the Copenhagen interpretation. An observer does not by any means imply consciousness. A computer program could take measurements of a quantum system and then delete that data before anyone looked at it, and it would have exactly the same effect as if a human took those measurements.

Moreover, he seems to think that the primary motivation for MWI is to solve the Schrödinger's cat paradox, and that this is its only advantage. This is, of course, completely wrong.


> the primary motivation for MWI is to solve the Schrödinger's cat paradox, and that this is its only advantage. This is, of course, completely wrong.

By MWI, I am assuming you mean the Many Worlds Interpretation. I am curious: can you describe what other motivations there are for this theory beyond resolving the Schrödinger's cat paradox?


Sure. To start off, I'd like to point out that Schrödinger's cat really isn't a problem for the Copenhagen interpretation. The hypothetical device is "observing" the system during the entire experiment, so the cat is never in a superposition between dead and alive.

In any case, I can tell you some of MWI's other advantages.

First of all, wavefunction collapse is non-local. That is, if you take the wavefunction as being physically real in some sense, then collapse is an event which propagates faster than the speed of light. Because this cannot be used to actually send information faster than light, it doesn't violate causality. But it still has a smell to it. MWI does not have this problem.

Another advantage of MWI is that it is deterministic. Once again, there's no rule that says that physics has to be deterministic. But when you're going along, finding one deterministic law after another, and then all of a sudden it seems that things are non-deterministic after all, it should give you pause.

There are other problems MWI solves which I don't understand as well, most dealing with problems with wavefunction collapse. In short, collapse has a lot of properties which are highly suspicious because you just don't see them elsewhere in physics.

It's also arguably a simpler interpretation, in which case Occam's razor applies. Unfortunately, we don't yet have good ways of estimating the algorithmic complexity of arbitrary hypotheses, so whether this is actually the case is a matter of debate.


Consciousness is just an illusion. It is not a useful concept either. It won't help you to program robots, for example.

I was very surprised to learn that according to Less Wrong, the concept of "Zombies" is accepted by a lot of researchers of consciousness: http://lesswrong.com/lw/p7/zombies_zombies/ A zombie here is defined as a person that is exactly like a conscious person, down to every atom, but is not conscious. I think that is complete nonsense.


It will help me to understand myself. That's much more important than programming robots. What if I don't actually need any robots in the first place?

In my opinion it won't help you, because it is just an illusion. If it would help you understand yourself, it would also help you build a robot that is similar to you.

But OK, that is just an opinion, I lack the energy to write more atm.


But the zombie idea is an interesting thought experiment. If the zombie isn't conscious then it also (likely) rules out the quantum hand-wavery of the OP - since any physical quantum effects would be replicated. It draws out the distinction pretty well.

It is interesting, I am just puzzled that people think such zombies are conceivable.

The problem I have with saying "consciousness is an illusion" is sort of similar to saying "God was the first cause". It pushes the question one layer away, because then the next obvious question is "an illusion to what, exactly?" How do you produce the illusion of qualia to something that does not experience qualia?

It rather seems to me talking about "qualia" is pushing the question one step away, or creating a circular argument.

If I attach a light sensor to my Arduino and then upload code to the Arduino that says "if sensor active, flash light", I'd say the Arduino has a "qualia" of light. Perceiving light flips some bit in the Arduino. That's all there is to it.
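
A minimal sketch of that Arduino program (the pin numbers and the analog threshold are assumptions for illustration, not anything specified above):

    // "If sensor active, flash light" -- the entire "qualia" of light, per the
    // argument above. Pin numbers and the threshold are illustrative assumptions.
    #include <Arduino.h>

    const int SENSOR_PIN = A0;   // photoresistor or other light sensor (assumed wiring)
    const int LED_PIN    = 13;   // onboard LED on most boards
    const int THRESHOLD  = 512;  // "light is present" cutoff (assumed)

    void setup() {
        pinMode(LED_PIN, OUTPUT);
    }

    void loop() {
        // Perceiving light flips a bit: read the sensor, flash the LED.
        if (analogRead(SENSOR_PIN) > THRESHOLD) {
            digitalWrite(LED_PIN, HIGH);
            delay(100);
            digitalWrite(LED_PIN, LOW);
        }
    }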


Except that non-sensory experiences also produce qualia, and some drugs produce qualia that are never triggered in ordinary experience. The question of whether qualia are real is still very much the subject of debate, and not something you can easily just wave away like that.

I don't see the difficulty with that. If I read Wikipedia right, "qualia" basically just means "subjective impressions"? Alright then, compute around for a while on your Arduino, and you get some subjective impressions. For example, you could run some simulation that is known to exhibit randomness (like some cellular automata) and then just pick some state of that.

I think this just boils down to the old and tired argument that "computers can only do what humans told them to do, therefore they can't be conscious", which is just misinformed. If I tell a computer-controlled car to turn left every time it sees a duck, I will have no idea where it will end up.

Also building a "drug me" button into an Arduino seems trivial. You could XOR all your data if the button is pressed, or do whatever else you fancy (even trigger a completely different program). I don't find drug experiences puzzling at all. If you want, overclock the Arduino so that it produces some computing errors. Hey presto, your Arduino has a drug experience.


The question is not whether computers can have qualia, the question is whether qualia are real. The argument goes that if your Arduino is doing the kind of "computing around" necessary to have subjective impressions, then it is experiencing qualia, and it is in some sense conscious (although not, of course, if philosophical zombies are possible).

Well then qualia is just an artificial definition for something that separates me from you. Since they can't be measured I can always claim "you have no qualia" and treat you like a machine.

I see no reason to assume that qualia cannot be measured, in principle.

The statement "consciousness is an illusion" has always struck me as a contradiction, since the nature of illusion is that there's something perceiving it.

You don't need consciousness to perceive something. It's just electrons interacting with atoms.

Saying that consciousness is just an illusion created by reacting atoms makes sense from a third-person perspective. It does sound inconceivable that a zombie could be the exact same as a "conscious" person atomically but simply lack a so-called consciousness. If we're talking about a person Bob, sure, all his thoughts are just atomic reactions.

But if that's the case, why does the first-person point of view exist? Yes, there will be a bunch of people, all supposedly with their first-person points of view, but why are you inside yourself? What is different about your illusion of consciousness such that you are inside it from the first-person point of view?

If consciousness is just an illusion created by chemical reactions, it shouldn't exist at all. Consciousness is a first-person thing, and were it just a bunch of reactions, no one would be able to call themselves I.


It's really easy to make a computer say "I".

sigh

Yes, as it is for a person in the third person. Are you essentially a complex computer? If so, why do you experience I? Why do "you" exist? Why is there not simply a Tichy in the universe acting on its own accord?

Years ago I had a good talk with David Chalmers at a Quantum Mechanics and Consciousness conference. That prompted me to read his book. Roger Penrose and others also have interesting ideas of what is really consciousness.

Most people probably never experience expansive meditation, but for those of us who do that seems like further evidence that consciousness is not all in our physical brains.


Thank you for your comment - I totally forgot to mention Roger Penrose in the article. My friend actually also brought him up with David Chalmers. For others, here is a short video on some of his ideas:

http://www.youtube.com/watch?v=3WXTX0IUaOg


Frankly, Penrose's thesis that consciousness stems from quantum effects in microtubules, and the pseudo-"proof" based on a scammy reductio ad absurdum claiming that consciousness cannot be a computable function, are among the biggest piles of horsesh*t in this whole field. I'm amazed he hasn't lost all credibility from this kind of deceitful thinking.

Penrose has credibility because of his numerous other achievements. His views on philosophy of mind are not regarded as serious positions in academic philosophy, neuroscience, or physics.


I am writing this in part directly to gizmo, but also to the rest of the HN readers. First of all, I didn't think it was necessary to call this 'crackpottery'. Coming entirely from a scientific perspective, one could write this off as completely philosophical and almost nonsensical. I would like to say that I definitely gained some new insights, nearly a eureka moment, from reading certain words of the article.

Of course, I can't describe exactly what it is that I felt. But I can sort of describe the environment in which I felt it, to try to find the common denominator that may exist in all of us.

I've been a big fan of Buddhism. I can't say I'm a Buddhist, but I definitely try to take elements from Buddhism that I concur with to better define my own experience of the world. Buddhism has a notion identical to the one described by the author: observation of the world without egos. Buddhism suggests that being able to achieve this allows inner zen. I'm not here to advocate or prove anything related to Buddhism; I just want to use it as a setup leading to where I am at.

Through schooling and education, one thing we've learned is to be objective and unbiased. This means innocent until proven guilty; do not accept any claims without facts; discount scientific discoveries without proofs; and so on. That is, we come to be capable of being objective about things that do not 'directly' connect with our consciousness. Now, entering the consciousness, can we still be objective? Can we observe our own failures just as we observe the failures of others, without emotional attachment? Can we connect to what others say to comfort us when we experience a rejection or a breakup? Can we understand that the past cannot be adjusted, and therefore NEVER experience the feeling of regret and ONLY gain experience of what could be done better next time if the same were to happen? In general, can we completely detach ourselves from our emotions and just observe the entities? Whether it be you, him, or I, it's all the same.

I can't speak for others, but for a brief moment while reading the article, I felt a weird, out-of-place sensation where I'm just a being, and I am not I. It's almost as if my consciousness were disconnecting from my body; it's almost scary.

Just food for thought.


Here on HN we've seen some sort of quasicrystal fabric distributing the prime integers and providing a possible way to gain headway on proving the Riemann Hypothesis:

HN search:

http://golem.ph.utexas.edu/category/2013/06/quasicrystals_an...

http://fadereu.posterous.com/knk103-the-crystals-of-mt-zeta

So OP ca98am79's suggestion of a common consciousness that we (as I vaguely understand it) are transceivers of merits some similar consideration, for those still cycling among these perennial questions.

Rowing to City Dock every day, I keep observing the semi-chaotic patterns of complex wave-crest interference, with symmetric 'beats' of discrete natural-number sub-wave-trains, all sharing the same medium and fed by the diversely scaled, chaotically asynchronous impulse sources afloat upon all of Eagle Harbor and beyond.

Thank you, ca98am79, for boldly posting here.


My somewhat frustrated response to this comment thread. http://duncancarroll.tumblr.com/post/57994245946/no-really-c...

I regularly have a discussion with friends about "consciousness". And a question that regularly comes up is the following:

Suppose we are living in a time when medicine has advanced tremendously. In this time, a friend of yours gets into an accident and gradually starts to lose different body functions and parts. Each time a body part stops working, it is replaced with an artificial one. There comes a time when everything physical about your friend has been replaced, even his brain. During the brain replacement (say the brain is replaced with a chip), all his memories are transferred from his old brain to the replacement. When this physically totally new friend of yours wakes up after the memory transfer, he claims he is your friend, and can answer any question you throw at him that would have been answered by your original friend.

Is this physically new friend actually your old friend? Or is he faking it, having access to all the memories of your friend? Did a "consciousness transfer" occur as well? Or not?

Any take?


When you look at consciousness very thoroughly, you will realize (as did the Buddha and several advanced Advaita masters) that there is no "you", "me", "my consciousness", etc. "You" are "everything", meaning it's all in your head. "You" are just a particular manifestation of nature with certain variables/parameters/context/scope. If/when all those ingredients are replicated, it will be you. The physical body is just a host. "You" are a "compute context" operating inside the host. You call it "consciousness" or "my consciousness". You can refer to advanced Buddhist or Advaita masters, who realized this in nirvana.

Nature is doing its thing. "You" are just one particular manifestation of it, doing its thing (aka "living life"). To end "self" suffering (the core teaching of Buddhism) you need to see this, realize this; you are guaranteed to see it, as you are "it" yourself. And the Buddha has a program for you, as do several other masters/"paths".


Thinking about teleportation I eventually realized that if it ever works, it will probably work by copying a person and destroying the old copy. I think Michael Crichton described it in the same way in "Timeline".


There may have been a TL;DR part of this I'm missing, but it's hard to argue against a thalamic or brainstem stroke wiping out consciousness...

I wrote an article [1] on the topic of considering your brain 'the significant other'. On a personal level, I think there are significant productivity advantages to be had if you consider your brain as not your 'self'. [1] http://humbleware.com/the-runaway-brain/


I think ultimately this will be resolved by experiment.

If progress continues far enough, then eventually people will want to upload their minds to computers, probably using Moravec's method of gradually replacing neurons with hardware.

The prudent way to do this would be to ask the person how it's going as you progress, and try it in various different sequences.

So if, say, you replace the visual cortex, and the person reports that their perception of color and depth has disappeared, but they still know where things are, then you know that the new hardware is correctly reporting information, but for some reason isn't supporting conscious perception of that information.

And then you can try to debug it. Maybe your algorithms are wrong, or maybe consciousness requires some physical effect.


I've wondered about this since I was twelve, though I didn't know what it was at the time. My personal experiences have led me to conclude that it is actually impossible for consciousness to be biological.

If you look at consciousness from a third-party perspective, it makes sense for consciousness to be biological. Suppose we have a human named Bob; Bob thinks with his brain, and it's a safe bet to assume his brain also creates his consciousness. But now think about your own consciousness. If it's your brain that creates consciousness, why are you in the first-person point of view? Why is there a first-person point of view? Why are "you" not just a brain with its own consciousness? Where do "you" come into the picture?

On a somewhat related note, the only safe bet is to assume you're the only person in the world. You know you exist because you think and therefore are. You don't know, however, if anyone else does.

