I'm finally starting to unravel the New Age terminology that deals with this subject. Eastern philosophy is almost impenetrable due to the cultural barrier, but those who have succeeded in penetrating it have imported some relevant ideas into Western society.
It's really bizarre how some of these wisdom teachers go on conspiratorial rants which have a completely different tone from the actually useful information.
Eastern philosophy isn't impenetrable, and people who cannot penetrate it aren't hampered by cultural difference.
Even for natives of those cultures, penetrating that philosophy often takes years of spiritual practice and training. The reason is that it isn't about ideas: it's an actual body-mind training and transformation of being. It isn't something you just sit and talk about and discuss, although it can include that; understanding only comes with actual implementation.
That's the main difference I've noticed between western philosophy and eastern philosophy, and it's what makes them difficult to integrate. The former is almost always grounded in the experience of so-called "ordinary beings" and communicated in terms of intellectual arguments, whereas the latter cannot really be understood unless you actually physically and spiritually transform your very being.
For example, I used to practise Soto Zen, and Dogen's philosophical writings are very popular there. Western philosophers, or just general non-practitioners, absolutely bash their heads against his writing: it's just so difficult to wrangle, and there's no connection. In Soto Zen we understand it by sitting zazen (seated meditation). Literally, the only way I know to understand it is by doing meditation practice. Then the next time you read it, you have a deep connection with it that you can't explain in a way that someone who hasn't had that connection would understand. Western philosophy seems not to be compatible with those kinds of transformations. If something cannot be explained in a way that is independent of the observer's own practice, then it is not considered rigorous; but such an explanation is impossible when talking, for example, about Zen.
I did use the qualifier 'almost'... and I don't see you actually disagreeing with what I meant.
There was something called Erhard Seminars Training (est) or something similar, which took people through decades of training in the more cumbersome system in a week. Similarly, a lot of the cultural impedance has been removed in the teachings of Western sages.
Teachings need to be updated and modernized as we progress as a society.
> Teachings need to be updated and modernized as we progress as a society.
I don't think there's any way to do that without understanding them, but to understand them fully you basically have to become a Buddha... so it is not clear how to do this without removing essential parts of the teachings, as often happens when bits of eastern philosophy are surgically transposed into the west.
What Westerners need to be taught is different from the needs of other cultures. Seekers here are seldom lacking in their discernment, but very lacking in their empathy.
Personally I haven't found that the teachings themselves need to change, but yes, the emphasis does. Unfortunately westerners are being fed teachings that are more like "this meditation will make you experience a sober LSD trip", "this one will cure your depression", and "this will give you insight into the nature of reality", because that's what westerners want. But what they actually need are teachings on compassion, loving-kindness, etc., which they are in no way interested in. Ultimately you cannot have one without the other. There are no eastern spiritual paths that don't integrate both insight and compassion.
AFAIK the ads don't really correspond to the content. Regardless of whether you are looking for weight loss, business success, or liberation, you come into contact with the same teachings, because they address all of the above. They bring people in through concepts they are already familiar with.
Yeah, I think you have to misrepresent to get people in, but personally I don't like that. I would rather we tell the truth and let Buddhism (for example) die than keep it alive through misrepresentation.
Nothing in this essay addresses the single most prominent question of contemporary philosophy of mind, which is the hard problem of consciousness. It only addresses the soft problems, which everyone agrees are solvable "merely" with enough time, funding, and initiative. Nothing about the "beast machine" addresses how qualitative experience arises from physical phenomena, or as Chalmers puts it, "Why are we not philosophical zombies?" Of course the hard problem, by its nature, is likely irreducible, so expecting it to be solved is unreasonable.
The point is that books and passages like this are only compelling if you've already accepted a hand-wavy answer to the hard problem. In this case the author's accepted answer seems to be the ever-popular "consciousness is an illusion", but that particular answer to the hard problem is no more valid (or any less vague) than any other. Stating it with authority and not acknowledging it as a "faith-based" axiom undermines the work.
So far the only answer I've seen people give to the question is "what problem? There's no problem." Basically, there are many people who don't perceive (or have convinced themselves that they don't perceive) an incongruity between their personal subjective experience and the idea that consciousness is just an emergent phenomenon of a physical system. I haven't found a good way to explain it, I just know that there's something wrong or missing.
So basically it seems there are three camps:
Those that feel the incongruity and don't know a solution.
Those that feel the incongruity and have found their own solution (which is usually what people will call "religion").
Those that do not feel the incongruity.
I don't personally see a way to justify it if someone doesn't feel the problem in their bones.
I don't understand how someone can not understand the problem.
You and I have conscious experience; we have a meaningful sensation of qualia. A billet of 4140 steel does not, or so we assume. Both are governed by the physical laws of the universe, so there must be some distinction beyond these physical laws that differentiates them.
In this framing, there are typically four camps, three of them line up with your camps:
1) I am not like a block of steel, but I don't know why.
2) I am not like a block of steel, but I know why ("religion")
3) I am like a block of steel, neither are conscious (no incongruity/consciousness is an illusion)
4) I am like a block of steel, both are conscious ("it from bit"/Chalmers's universal consciousness)
Of course this implies the existence of a 5th and 6th camp that believes the block of steel is conscious, but we are not. I find these camps throw the best parties.
Well, we do know of emergent phenomena that occur as abstractions. Probably the most obvious example is a computer program. At a basic level, the computer really doesn't look like it's doing useful work; it just processes lists of rules and stuff. But the emergent behaviour is that I am browsing Hacker News. What is the difference between a computer and a rock? There isn't actually a fundamental difference, other than that the computer is physically arranged such that it is doing some complicated calculation resulting in this rich UI experience.
I think it's the same with the brain. Why is there consciousness in this brain and not in a rock? Because the brain is set up to perform a complicated "calculation" and the rock is not. Just as the program's behaviour only exists as an emergent property of the computer's physical state, so does consciousness arise as an emergent property of the brain's physical state.
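To make the rules-to-emergence point concrete, here's a minimal sketch (Conway's Game of Life in Python; nothing brain-specific is implied, it's just the simplest rules-to-emergence demo I know):

    from collections import Counter

    # Conway's Game of Life: two simple local rules, nothing else.
    def step(live):
        """Advance one generation. `live` is a set of (x, y) cells."""
        counts = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1)
                         for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        # A cell lives next step if it has 3 neighbours, or 2 and is alive now.
        return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

    # A "glider": the rules say nothing about motion, yet this pattern
    # travels diagonally forever. The movement exists only at a higher
    # level of description -- pure emergence.
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        glider = step(glider)
    print(sorted(glider))  # the same shape as the start, shifted by (+1, +1)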
EDIT: to clarify, this is why it is justified to say there is no problem, though I do not personally find it satisfying.
Those same people might say that it is just the experience of "being the computer" rather than seeing it from afar. But I don't know; personally it isn't an argument I find convincing.
As an aside, I think you have to be a bit arrogant or closed-minded to assert that such people actually can see the problem. I really don't think my wife is lying or being nefarious when she says she doesn't see a problem. I trust her and others when they say they don't see it.
Yes, people who believe that consciousness is an emergent property of sufficiently complicated systems would say that a sufficiently complicated computer experiences qualia and is conscious.
Exactly: you have a handwave. A vague idea that when a system becomes "complex" enough, it suddenly becomes conscious. There's no mechanism, no rigor, no cause and effect. With the browser you can chase every bit down through the hardware, to the quantum tunneling effects dictating what's happening in the silicon, and come up with a concrete explanation based in physical laws.
With the browser example, at no level of abstraction does there exist any ambiguity. "Emergent" in that framing simply means we can explain abstracted behaviors in terms of other abstractions. But if you wanted you could drill down to the particle physics and have a complete, rational explanation of the system.
Not so with consciousness; hence the "hard problem".
Apologies if my language is emotional; I like this debate a lot and get precious few opportunities to discuss it with people who are familiar with all the background. I don't mean to invalidate anyone; I think all camps are equally valid regardless of my personal feelings.
One obvious difference between computers and humans is that only one of them is an animal. So far as we can tell, consciousness only arises in animals. This sounds like a truism, but I wonder if using phrases like “complex system” masks it: both humans and computers are complex systems, and only one of them is conscious, so what could possibly account for the difference?
Well, the list of differences between the two is enormous. As the article discusses, some of those differences include embodiment, emotions and moods. So perhaps the “hard problem” is not a philosophical one but a question of detailed knowledge: if we had the ability to drill down to the underlying particle physics of bodies, emotions and moods, plus all the other things that comprise our animal natures, then perhaps the problem would be solved.
To put this another way: perhaps the issue with many approaches to understanding consciousness is that they assume our animal nature doesn’t matter and that conscious minds can exist abstractly, or even be modelled in silicon. But maybe they can’t be, and so for us to truly understand consciousness we will need to truly understand the body, which we are very far from doing.
To sum this up: consciousness, in this view, is not an emergent property of complex systems. Rather it is a fundamental property of animals.
If we take a human and replace one of their neurons with a silicon chip that exactly replicates its electrical activity, is the human any less "conscious"? Presumably their conscious experience is the same (if not, why not?). If the human is still conscious, what if we continue, one by one, to replace all the neurons with silicon chips in the same way? Where does this thought experiment fail?
Personally, I think most people who argue consciousness is an emergent property of complicated physical systems would believe that the resulting machine would still be conscious after every neuron is replaced with a chip.
Given a sufficiently advanced chip which included all chemical interactions (or effective simulations), why wouldn't they be?
If after the transition is complete we determine/decide they aren't conscious, then we have to argue about at what point the 'hybrid' brain ceases to manifest consciousness, and why. Maybe the organic parts would start to change their prior functionality in response to the synthesized parts... but that would suggest we just didn't have a complete enough model of the neuron and we're back to square one. Once that model is complete, there should by definition be no problem, unless you want to argue it can never be complete, for which you then need evidence to convince everyone.
On the topic of simulating neurons, we might be able to simulate the _static_ structure but neurons also move around and make new connections. This video opened my eyes on how far we really are from a realistic simulation: https://youtu.be/CJ3d1FgbmFg
My house is made of bricks. I can confidently remove one of the bricks and replace it with a Kleenex box. The house is still standing, it’s perfectly liveable and no one can tell the difference. Now what happens when I replace all the bricks with Kleenex boxes?
At some point, the structure will collapse or blow over. When that happens depends on the order in which you work, and the weather conditions, but the outcome is at least well-defined and conceptually graspable.
You think your computer isn't conscious? How do you know? Just because it's not demonstrating free will? How many layers of error correction are built into computers to stifle the "randomness" of electronic equipment? Do you think you could still exercise free will locked in a padded room and strapped to a bed with a straightjacket?
There's a difference between knowing, and knowing how you know. I know my computer is not conscious even if I'm not entirely certain how I know. I chalk this up to the fact that, as a conscious animal with millions of years of evolutionary history, it is both beneficial and entirely natural for me to be able to recognize conscious beings when I encounter them.
(It also helps that I know how to program computers and don't view that activity as bringing consciousness in the world. I have children, however, so it turns out that I am — with some help — also able to bring consciousness into the world. The former activity I am both able to do and fully understand, while the latter I am clearly able to do, but don't at all understand.)
I recognize this point of view isn't popular among a lot of technical folks. I get it, I was there once too, but I've come around to a new appreciation for our animal nature. This question — how do you know what is conscious? — is very similar to questions like, "how do you know that that dog is afraid?" The short answer to that question is, "because we are kin". Which is an explanation I find much more rich and satisfying than reductionism.
That breaks down for anything that is not your kin though. "How do you know the computer is not conscious?" does not allow the answer "Because we are not kin". Best you can say is "Because we are not kin, I don't know". The dog example is illustrative of only half of the question.
I take your point, but I disagree: your statement assumes that there are conscious beings who are not my kin (and to be clear, by "kin" I broadly mean animals: living entities that are embodied and show characteristics like intention, mood and emotion). But there isn't any evidence that these exist and there's little if any evidence that they are even possible.
At best, you can put forward a thought experiment that starts with, "Suppose there are beings which are not animals but which are nonetheless conscious." I'm questioning the premise of that thought experiment, however, because so far as we can tell, there is no such thing.
In other words, computers are not conscious because they are not animals.
This may seem like circular reasoning, but not if you take the view that consciousness is a fundamental property of animals as opposed to an emergent property of complex systems.
No, there's no such assumption in my statement. That statement is merely open about what we don't know. Conversely, there is an assumption in your reasoning that non-animals are never conscious, which is, well, an assumption, and not derived from facts.
This creates a blind spot where an area of non-knowledge is assumed to be known about, and rejected out of hand. I have to point out that taking the view that consciousness is a property of animals does not exclude non-animals having it.
Even then, there will be a tussle over exactly where the line falls for what qualifies as an animal for the purpose of consciousness, which forces us back onto the same generalizations (e.g. intention, mood, emotion) that would allow as-yet-undiscovered but plausible forms of life to qualify as conscious.
Some great points, well made (and apologies for putting that assumption on you, you are correct about that). I think the blind spot you refer to also creates the risk we may encounter consciousness and not recognize it.
I’ve long been of the common-in-tech-circles belief that at some point in the future we will be able to create intelligent, conscious machines. Where my view on this fascinating issue is shifting is that I’m much less convinced that our animal nature is immaterial to this task.
In any case this has been a fascinating and insightful conversation and I appreciate the opportunity to have it with you and the other commenters here.
>To sum this up: consciousness, in this view, is not an emergent property of complex systems. Rather it is a fundamental property of animals.
Or at the very least, it's a fundamental property of the particular kinds of physical systems that animals happen to be, and without understanding what that is, we have no hope of replicating consciousness in any non-animal.
This makes sense to me, because consciousness seems to be a question related to brains, which are part of animals, so a really great way to confuse yourself would be to abstract the entire question of consciousness away from the life sciences and then wonder why you're so confused.
> a really great way to confuse yourself would be to abstract the entire question of consciousness away from the life sciences and then wonder why you're so confused
Yes, exactly. Which is a direct challenge to transhumanism because it means that maybe it actually won't be possible to achieve immortality via brain upload, or build general AI (without first solving the life science "problem", i.e. building an artificial body), and so on.
Sure, but was anyone really suggesting that we somehow create disembodied people? That seems to be the other side of the dialectical coin from immaterial souls, which are precisely what most Hard Problem disbelievers deny.
“if we had the ability to drill down to the underlying particle physics of bodies, emotions and moods, plus all the other things that comprise our animal natures, then perhaps the problem would be solved.”
Doubtful, since the medium of our thought is clear to us but impossible to share and difficult to describe.
"If I learn C I would be able to understand the Linux kernel"
It is but a first step, and particle physics/C is the lowest level, we have to understand the complex code written with them and the emergent phenomena.
The problem with your statement (and many others) is that the notion of consciousness has self-awareness baked into it. It's odd to think that the ability to experience qualia magically occurs for sufficiently complex systems, but it's not odd to think that self-awareness arises once a system that can experience qualia reaches a sufficient level of complexity. Unfortunately, people latch on to the reasonableness of the latter and ignore the ridiculousness of the former.
The trouble with the "hard problem" of consciousness is that it's predicated on the possibility of the existence of p-zombies. Even if you don't know how a complex physical system gives rise to consciousness, you understand that a block of steel is not a complex physical system - so the fact that it isn't conscious isn't that contentious.
If p-zombies could exist as complex systems without consciousness, then there would be a problem. But it's not so obvious that their existence is a possibility - or even, as Chalmers puts it, that they are "conceivable". To me, the leap of faith involved in the "hard problem" is really this first one.
Yeah, I think a more interesting problem is asking what the difference is between an electronic robot that behaves in every way like a human, and a human. Or further, an electronic robot in which every single neuron of the human was completely simulated. I thought about this a lot when I was a teenager lol. Surely in this case such a robot must have a subjective experience just as we do, but why?
Really? A sufficiently advanced computer is a perfect P-zombie to me. This is the Chinese Room problem restated. A computer is capable of behaving as a conscious actor without any concept of knowledge or qualia as we would recognize it. But again this gets into a faith-based discussion. To me a sufficiently complex computer doesn't suddenly cross a "consciousness threshold" of complexity, but if to you it does there's no problem, which we could call "camp 3" in this discussion.
A vast majority of people who believe that consciousness is an emergent property would also say that a sufficiently advanced computer is exactly as conscious as you or I. At least when I believed that human consciousness was an emergent property I believed that an advanced robot would be totally conscious.
In the original Chinese Room thought experiment it’s a human inside performing all the calculations by hand. To be consistent you would also believe that this room with a human in it has its own consciousness as a whole, correct? This just seems too weird to me.
For me this is analogous to, say, a PC running a NES emulator.
You can talk about the emulated CPU. Its state: its 8-bit register values, the contents of its 2KB of RAM, what 6502-architecture code it's currently running. Its concept of time: how many clock cycles it spends doing what, at an assumed fixed number of nanoseconds per clock cycle. Its inputs and outputs: controller button presses, video out.
Or you can talk about the PC's CPU. Its 64-bit register values, the contents of its gigabytes of RAM, what x86-architecture code it's currently running. Its own clock cycles, its own inputs and outputs.
Both of those can be considered to exist at the same time.
But of course, the emulated CPU doesn’t have an independent physical existence; it’s just an abstraction that exists within the PC. Its register values and RAM contents are represented by some part of the PC’s register values and RAM contents, with some arbitrary encoding depending on the emulator in use. Its time flows whenever the PC feels like running the emulation. The emulator might be set to 1x real time, or 2x or 0.5x, but even that setting only applies on average; individual clock cycles will always proceed at a highly erratic rate. The emulated CPU’s output might go to a real screen or it might just be saved as a video file. And so on.
But if the emulated CPU isn’t real:
(1) Does that mean it’s a “p-zombie” that is only pretending to run 6502 code, but isn’t really?
(2) Does that mean you’re not really playing Super Mario Bros. if you play on an emulator?
My answer to 1 is: maybe, maybe not, but it makes no difference. Because my answer to 2 is: no, you definitely are playing the same game regardless of whether you’re using an emulator or not. The essence of what it means to “play Super Mario Bros.” is to interact with a system that follows certain rules to map controller inputs to video outputs. The rules are a mathematical abstraction, not inherently connected to any physical object. So it doesn’t matter whether the rules are implemented by a physical CPU or an emulator inside another CPU.
And I see consciousness as basically the same thing. A conscious being, to me, is fundamentally a mathematical abstraction consisting of some state, some inputs and outputs, and rules for manipulating the state and producing outputs from inputs, where the rules have to meet certain standards to count as conscious. For example, the rules should be able to perform general-purpose logical reasoning; they should include some kind of concept of self; etc. The exact boundaries could be litigated, but at minimum the rules corresponding to the operation of the human brain qualify.
And a physical brain is really just one possible physical representation of that abstraction. A Chinese room would work too. It would be impossibly slow, and the person would need an astronomical number of filing cabinets to store the necessary data, and they’d inevitably commit calculation errors, but aside from that it would work.
So yes, the Chinese room, or the mathematical process it represents, can have consciousness, while the person within it has their own independent consciousness. Just as a NES CPU can exist “inside” a PC CPU, an emulated brain can exist “inside” a real brain (well, “inside” except for the filing cabinets).
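To pin down the “machine inside a machine” picture, here’s a minimal sketch (a toy VM in Python, not a real 6502; all the names here are invented for illustration):

    # A toy "guest machine": its registers, RAM, and program counter have
    # no independent physical existence -- they are just data inside the
    # host process, in whatever encoding the host happened to pick.
    class ToyCPU:
        def __init__(self, program):
            self.acc = 0            # the guest's single register
            self.pc = 0             # guest program counter
            self.mem = [0] * 16     # guest RAM, represented as a host list
            self.program = program

        def step(self):
            # One guest "clock cycle" happens exactly when the host runs
            # this, at whatever erratic real-time rate the host feels like.
            op, arg = self.program[self.pc]
            if op == "LOAD":
                self.acc = arg
            elif op == "ADD":
                self.acc += arg
            elif op == "STORE":
                self.mem[arg] = self.acc
            self.pc += 1

    prog = [("LOAD", 2), ("ADD", 3), ("STORE", 0)]
    cpu = ToyCPU(prog)
    for _ in range(len(prog)):
        cpu.step()         # the host may run this at 1x, 2x, or pause forever
    print(cpu.mem[0])      # 5 -- a fact about the abstraction, however hosted

The guest’s state is just entries in a host data structure, and its “time” only advances when the host calls step(); yet “this program computes 5” is true of the abstraction regardless of what is hosting it.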
The problem with the Chinese Room formulation is that there are arguments against the near-term possibility of autonomous vehicles that go "it's the same as AGI." In other words, a "Chinese Room" driver is not good enough.
What the Chinese Room may really be telling us is that machine consciousness might be as far from us as usable ML was to 1970's AI researchers, or at least much farther than we think. And, on top of that, if and when it does appear, it won't look human. It won't even look like an animal because it isn't one.
Which is non-physical because the storage complexity of a lookup table containing all possible answers to all possible inputs grows exponentially. It wouldn't fit in the universe.
>This is the Chinese Room problem restated. A computer is capable of behaving as a conscious actor without any concept of knowledge or qualia as we would recognize it.
This is begging the question of p-zombie existence. Searle and Chalmers both think it's possible for arbitrarily complex systems to exist without qualia. But given that qualia are unmeasurable, there's no way to know for sure.
That's why I consider the statement of the problem to be faith-based. It is possible that consciousness is an emergent property, and that qualia are experienced by every sufficiently complex system. Declaring that p-zombies are conceivable requires assuming that this is not so.
I concur with this completely. Rejecting p-zombies as a concept aligns you either with (3) or (4) of the toy framing above, depending on whether you're materialist (consciousness emerges from physical phenomena/doesn't exist) or non-materialist (consciousness emerges from non-physical phenomena, and does so universally) about the follow-on question.
Blocks of steel don't have nervous systems of any kind, let alone complex brains. Hell, they aren't even alive at all: they lack the underlying physiology for a nervous system to sense and control in the first place.
Seems like a bit of a hint for why we're conscious and they're not.
I didn't say "from the brain". I also didn't say "emerging". I mentioned the nervous system as a whole, and physiology (eg: internal bodily control systems) to boot.
2a) I am not like a block of steel, but I know why ("evolution")
In the same way that a camera takes pictures while a block of steel doesn't because someone designed it that way, we are conscious because evolution designed us that way. Conscious people survive better than those knocked unconscious.
But then you are like a block of steel: you are deterministically governed by the laws of physics. Any block of matter arranged just so is conscious in that framing, which differs from the religious framing.
In nature, there are many instances of systems which change drastically as they pass a tipping point.
I suspect the human (and many an animal) brain has an information density sufficient to pass such a tipping point. That could even explain why observing quantum phenomena changes the outcome (due to multiple extremely-low-entropy states being unstable).
There could, of course, be many other explanations. Which camp is "I'm unlikely to figure out the answer, and I'm unclear on how it would help me if I did, so I'll continue to disregard the question".
That's a nice-looking dead end, because your subjectivity biases you. Instead, step outside your own consciousness.
Based on objective observation, you are dynamic and reactive, steel is static and mostly unreactive. Beyond that, you might as well be the same. You claim to experience your existence, but what evidence do you have? And what evidence do you have that a bar of steel doesn't experience something of some sort?
> Magnetic scanners. Strong evidence in my opinion.
> I think you shouldn't start with steel, you should start with chemicals that produce nervous signals.
The idea being that none of this is direct evidence of consciousness. These give you evidence for answers to soft problems: a block of steel cannot see because it does not have eyes to see. If you attached eyes it would not see, because it would not have a visual cortex. If you gave it a visual cortex it would not see because it does not have... etc.
You can solve all of these soft problems, but they would never tell you how the block of steel experiences the color red, the "qualia" of vision. That's the hard problem. All of the soft problems are "minor" by comparison.
I think (4) gets more interesting if we replace "block of steel" with "fetal tissue" at various stages of development
I personally lean towards (4) for this reason. If we say "soul assignment" is not the answer, then we have to explain how some unconscious cells eventually acquire consciousness, and then go back to being unconscious again. One solution is that it's been there all along, and is everywhere.
Perhaps the human brain (any brain?) is just a lens for something quantum, or a radio for some signal? Or maybe there are some immaterial forces that we can never observe under a microscope.
I agree about the best parties. But what about the idea that consciousness exists on a continuum — that everything that processes information has a scope of consciousness that corresponds to the complexity of what it processes? A block of steel doesn't really process any information, but a thermostat does, and so does a paramecium. I'm happy to grant qualia to a paramecium, even if I don't think those qualia are very substantial or interesting in human terms. I feel a little weirder about the thermostat, but, um, sure, why not, I guess?
I guess I'm siding with group 4, but just saying that quantity has a quality all its own. Or that universal consciousness doesn't have to be spooky.
Asking "how is consciousness quantized?" definitely would put you in (4) and is favored by Chalmers I believe. The idea that consciousness can somehow be "aggregated" by sufficiently complex or anti-entropic systems lends itself to this view.
I like that option. You don't have to find something that would give a hard border between "conscious" and "unconscious" entities, nor claim qualia doesn't exist, nor claim a rock has a level of consciousness equivalent to that of humans.
This is personally my feeling of it too. I think the boundaries between us are illusory, and that it is all continuous and fuzzy. So a universal consciousness is no issue for me, one that is neither singular nor separate.
I agree with this. IMO any theory of consciousness has to first and foremost consider that no hard line can be drawn along the continuum from human beings to elephants to ants to yeast cells.
Consider the weather. Weather exists; I think everyone can accept this. But what is weather? Is it simply an aggregate emergent property of many interacting atmospheric and geological systems? Or is it something more? How can emergent properties of even a complex system create self-organizing and self-perpetuating semi-stable phenomena like tornadoes? What controls the weather?
Consider consciousness. Everyone can accept that it exists. But what is consciousness? Is it really just an emergent property of brain-body activity? Or is it something more? How can an emergent property of a complex system create semi-stable behaviors and self-awareness? What controls consciousness?
IMO, consciousness is just an emergent property of brain-body activity (mostly brain). More recent studies of animal behavior have shown that consciousness is not a binary -- you have it or you don't -- thing. It is a gradient. Different animals possess different levels of cognitive ability, consciousness, self-awareness, social-awareness, etc. In short, there is (again IMO) nothing special about human consciousness: I submit that any sufficiently complex trainable associative-memory neural network (like a biological brain) will possess some level of consciousness as an emergent property. Just like any sufficiently large atmospheric + geological environment will possess weather.
Thus, there is no problem to understand (or not understand).
As an aside, are humans even "fully self-aware"? How would we know if we're not? What does that even really mean?
7) I am a model of a primate, created/executed by the actual brain of the primate that the model is modeling. I.e., the primate is not like a block of steel because it is modeling its own existence, while the block of steel is not modeling its own existence.
7) A purely phenomenological viewpoint (such as the one held by some schools of Mahayana Buddhism) would claim that the only reality is one's own mind. The steel block along with all other aspects of perceived reality, are akin to a magical illusion or dream.
I think a variation of the Sorites paradox [1] should be sufficient to argue that 4) is closest and that there is a continuum of consciousness:
You and I are (I presume) conscious and have a meaningful sensation of qualia. If you were to take a high powered laser and destroy exactly one neuron, we still would be. Given how many people survive and do relatively fine after traumatic brain injuries, you can probably extend that to some number of fried neurons while still preserving consciousness.
If the laser destroyed all of our neurons and left our brains a smoking lump of cinder, we would not be conscious. If the laser destroyed all but one neuron, we probably still would not be conscious. That's probably equally true of five or ten neurons.
Now, it may be that if you extend those two scenarios then at some number of neurons, the two will meet and frying just one more neuron completely flips your brain from "as conscious as a fully functional human" to "as inert as a block of steel". But the existence of that magic number seems highly unlikely to me. It would require some explanation for what is special about that exact quantity of neurons that one more or less completely lacks.
It seems then that the most likely reality is that consciousness is a continuum where cognitive machines and living beings can have relatively more or less of it and any threshold that we define to mean "conscious" is mostly as arbitrary as the names of colors.
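To spell the induction out (just a sketch, writing $C(n)$ for "a brain with $n$ functioning neurons is conscious"):

    \[
    C(N), \qquad \forall n \le N:\; C(n) \rightarrow C(n-1), \qquad \neg C(1)
    \]

These three premises are jointly inconsistent: the first two imply $C(1)$ by walking the induction down from $N$, contradicting the third. So you must either posit a magic threshold $n^\ast$ with $C(n^\ast) \wedge \neg C(n^\ast - 1)$, or replace the predicate $C$ with a graded degree $c(n) \in [0,1]$ that has no distinguished cutoff, which is the continuum view.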
I’m quite firmly in the “do not feel the incongruity” camp.
What I'm about to say might make no sense at all, but I seem to remember (or at least think I do, it might be a false memory arising from thinking too much about it) slowly gaining consciousness in early childhood.
I have a birthmark on the back of my hand, and some of my earliest memories are triggered by noticing it and realizing I’m a separate being from other people and can influence my actions, rather than remaining a mere observer of my body running on “autopilot”.
I have only passing knowledge of lucid dreaming, but from what I’ve read I’d say becoming a lucid dreamer is not at all dissimilar to developing consciousness in childhood. So maybe that’s a potential avenue for consciousness research.
Yeah that's a pretty common experience actually. There are a lot of memes on tiktok and twitter joking about "gaining consciousness" as a child. I think a lot of people remember moments as young children where they first truly felt like real separate beings.
I think a lot of discussion around consciousness focuses too much on the binary (i.e. either you have it or you don't), but I like to think of consciousness as a spectrum, with organisms falling at various points along that spectrum. Your take fits nicely into that picture, I think.
This is precisely one of the reasons I prefer attention schema theory as an explanation of consciousness. It has a progressive build-up, with each individual stage conferring an advantage while also leading to the next stage. Evolution does not like big-bang progression. And it's also very hard to explain big-bang progression in multiple, drastically unrelated species, such as humans and octopuses.
Attention schema instead builds gradually. It starts with selective signal processing, advancing to a body schema, then to an attention schema, then to a social schema (modeling others' bodies and attention).
> I just know that there's something wrong or missing.
We know with certainty (since you tell us) that you have that feeling. People have strong feelings about a lot of things. People disagree about things of the most intense importance to them, such as the existence of god. I won't list any more examples, to avoid arguing about them.
But since people have mutually exclusive yet very strong intuitions/knowledge/feelings, it seems very clear to me that the presence of the feeling entails absolutely nothing about the further reality of the thing being believed.
A simple and complete explanation is that you have this feeling, and the feeling is all there is to it. Maybe the feeling has some functionality in a social ape. The feeling entails nothing.
Susan Greenfield has a nice expression of a similar idea: the only known functionality of consciousness is that when you ask yourself 'am I conscious?' the answer seems to come back 'yes'.
>I just know that there's something wrong or missing.
I'm starting to think that the thing that feels "wrong" is just that it directly contradicts foundational elements of your current belief system. And it's perfectly natural to avoid updating a belief system that has served, and continues to serve, you reasonably well, even if you know deep down that it's inaccurate. Because you also know that updating it will have a significant cost as the implications unfold.
I don't think it's a coincidence that the "religion solution" contains a lot of brute-force belief-writing and evidence-avoidance techniques.
Coming from someone who no longer feels the incongruity, and has paid for that update.
If you are going to demand a precise, non-handwavey answer to the hard problem, then you must first give the hard problem in a precise, non-handwavey way, or at the very least prove in a precise, non-handwavey way that there even is such a thing as the hard problem. As far as I've seen nobody has really done this, it's always just "what breathes fire into the equations?" "what is it like to be a bat?" "I just feel it in my bones that there's something missing"... P-zombies are an attempt, except that "p-zombies are conceivable" is to be taken as axiomatic without any further explanation.
That's the problem, and exactly why I say I don't think it can be explained beyond a vague feeling. I've never seen someone who says there is a hard problem of consciousness adequately explain why to someone who says there isn't. The poster is now making what seem more like appeals to emotion throughout the thread: "are you saying that sufficiently complicated blocks of matter are conscious?" and the like. This is what always happens.
While I personally believe there is a problem, I cannot explain why, and for a long time I didn't think there was a problem
The so-called "hard problem of consciousness" may be a problem through which one may find a suitable definition for their own humanity, but it absolutely is not a scientific problem.
If one wishes to gain concrete data, one needs to define the problem before defining the answer. Any current "problems" relating to consciousness are explainable without needing a consciousness component. I have not encountered a single problem which couldn't be explained by pure cognitive computation.
Consciousness becomes a problem suitable for scientific endeavor the moment someone can actually define the problem.
Yes, just like the various interpretations of quantum mechanics, it's not a scientific question because it doesn't allow for any testable hypothesis. Science is bound by the scientific method, questions of philosophy exist outside these bounds. Is math invented or discovered? Does the wave function collapse upon observation, or do observers exist in superposition? Is my "red" the same as your "red"?
Even if we could have exact answers to these questions, they wouldn't allow us to make predictions, and so they are not science. I think it's reasonable to say that this means they aren't "useful" (we can't go to space or build skyscrapers with these answers), but I think it equally makes them the more interesting questions. Our usual tools of rational inquiry fail to penetrate these sorts of questions.
Actually, most of the problems you described are relevant and do exist in the objective world. The problem with the problem of consciousness is that there is no aspect of the definition of consciousness which couldn't be explained by objective mechanisms.
Currently, asking whether there is consciousness is like asking if there is glaxtimbo. What is glaxtimbo, you may ask? I can't explain it, but I feel like it is there. That's about as far as the definition of consciousness, in its unexplainable attributes, currently reaches.
Agreed; I somewhat address this in my follow-on comment. In my mind, rejecting the hard problem is the same as accepting the "consciousness is an illusion" answer to the problem. But I can see the point of view that regards the entire conversation as a category error.
I find the classical argument with color qualia very convincing (in fact, I thought about it before even knowing what qualia were...). Nothing you can find out about wavelengths, cones, rods, neurons and so on will tell you why you see red the specific way you do, and whether my perception of red corresponds to yours or my "red" is actually your "green". So there is a gap there, something that doesn't follow from the mechanical state of the system.
Of course, I won't claim to be able to define that gap in a precise way (I'm not even a philosopher), but I think it's clear that it exists. In my view the burden of proof is on those who claim the problem doesn't exist, because no one can come up with a believable methodology (even supposing infinite time, infinitely precise measuring instruments, etc.) that would tell us the answer to questions like why my "red" feels like it does, or whether my experience of red is different from yours.
I don't see why two people experiencing red differently causes a problem for consciousness. Can't we both be conscious even with very different qualia?
Of course we can all be conscious but with different qualia. But the thing is that if consciousness were merely a physical phenomenon, we should be able to determine qualia purely by looking at the physical system, just as we determine any other property. However, we could be in a situation where we have understood our behaviors down to particle level, and I would still have no idea how you perceive colors.
> However, we could be in a situation where we have understood our behaviors down to particle level, and I would still have no idea how you perceive colors.
I do not see how this is possible. If you can perfectly simulate me, you can perfectly simulate any response I will give to any question about how I experience red. Moreover, you will have a better idea than I do, because you could also simulate how my experience of red will change over time as I gain more experience from seeing red, or anything else really.
There is a very fruitful line in philosophy of mind and language that addresses the colour qualia question: anti-cartesianism and anti-skepticism.
Wittgenstein's arguments on seeing-as are the best example of this line. He argues, quite convincingly I think, that a lot of the skepticism, of a cartesian sort, implicit in the colour qualia question arises out of conceptual confusions in language. The goal isn't to solve the debate, or directly refute it per se, but to dissolve the entire conceptual confusion upon which it relies by making clear how it is essentially a house of cards.
For instance, we know that humans have a relatively uniform perceptual landscape. Physiological realities in the eye limit the variations in human visual perception to a set wavelength band. With all the correct parts working correctly, we see, and we all see in relatively the same way. We don't see our seeing. We don't perceive our perceptual experience from the inside out. We see. The variations in "what people see" are more in language, which is to say that people report seeing different things. As Wittgenstein would put it, we often "see-as" just as much as we "see-that", and these are ways of "seeing" that intersect and mix in language.
Colour is interesting because in our language it is heavily dependent on ostensive definitions, which is the linguistic way of saying colour relies on pointing at examples. Often quite literally pointing with your hand, or directing someone to look at something in particular. So if there is a red car in front of two people and both say the car is "red", then they are seeing the same red: the colour is not the result of empirical validation of an internal private perceptual experience; the colour is in language. They are reporting the same colour.
Viewed in this line, the colour qualia question essentially dissolves away as irrelevant.
>the colour qualia question essentially dissolves away as irrelevant
And yet, I still wanna know.
The question is IMO as valid as "if I poke you with this pen, do you feel it different enough than what I feel if I poked myself?"
If we go through the exercise of breaking down the physiology, neurology and linguistics, I still wanna know if they feel it any different and how different.
> The question is IMO as valid as "if I poke you with this pen, do you feel it different enough than what I feel if I poked myself?"
Funnily enough, Wittgenstein also famously looked at pain as well [0] in the context of his famous private language argument, and he comes to the same conclusions as he does about perception and language.
It should be noted that Wittgenstein's central point is about the intelligibility of color, or perception, or pain, or a private language. He is not denying the existence of pain, or of what could be considered "first person priority", as a sensation. But he is commenting on the logical necessity of a public intelligibility for it. For him that intelligibility comes about through behavior or public criteria, that is, what we might call pain-behavior wrapped up in a public form of life.
> I still wanna know if they feel it any different and how different.
Wittgenstein's whole point, really, is that you can get an answer by asking them because, after all, you're speaking the same public language. That is also why pain, as it is intelligible, has a history.
I don't understand the point: we could both be seeing a red car (which is objectively red, in the sense that it reflects light with a wavelength of 625–740 nm) but we could be perceiving it radically differently (for example, maybe my perception of red is exactly like your perception of blue), and language would never help us find out.
In my world, that car, stop signs, fire extinguishers or chili peppers would look like the clear sky or sea water in yours. But we would never find out, because I would point at the car and say "red" (I have learned that all objects that look that way from my point of view are "red"), and you would agree that it's "red". There is no way to find out that our perception is different.
But I guess my lack of understanding is because I haven't read Wittgenstein, so I hope to do so when I get the time. Thanks!
> There is no way to find out that our perception is different.
Wittgenstein's point is more that there is no way in which a distinction of that kind could even be intelligible in the way that you have presented it. What is intelligible is our public behavior, namely our language use, so to find out if what you perceive is different you can ask someone if you're looking at the same thing, if it has the same characteristics, etc.
He would point to the fact that we don't have radical uncertainty in our day to day life and perception. The philosophical question is a question of sense and nonsense around the concept rather than a metaphysical question about an "inside" vs. an "outside".
I would really recommend reading Wittgenstein's Philosophical Investigations if you are interested in these kinds of questions, it's incredibly influential in philosophy precisely because of the way it approaches "age-old" problems in ways that lead to clarity and very interesting results.
> I find the classical argument with color qualia very convincing (in fact, I thought about it before even knowing what qualia were...). Nothing you can find out about wavelengths, cones, rods, neurons and so on will tell you why you see red the specific way you do, and whether my perception of red corresponds to yours or my "red" is actually your "green".
Red being green makes no sense. Let's simplify a bit and say you have a neuron for hearing the word "red", one for the visual stimulus "red", and then one for a sensory-fusion/abstract concept of red. Several layers up you'll also have one for a "stop sign" (which also takes inputs like octagon shapes and some white letters). There are many other, more distant associations of course; we have many neurons, after all.
The spoken words, physical wavelengths and stop sign design are more or less physically or socially fixed. If you magically swapped some red and green neurons then those physical stimuli wouldn't change. The brain would be forced to rewire all the associations until it arrived at the original connections again. Or it would suffer from observable mispredictions such as being more likely to respond to a green rather than a red stop sign.
Why would there be a free-floating "qualia green" neuron connected to "visual red" that is yet somehow disconnected from "physical green" and why wouldn't it update once it became miswired?
Suppose that I see red the way you see green. Then, when I see a stop sign or a fire extinguisher, I'm seeing the color that you would call "green". But of course, I have been told since birth that fire extinguishers, stop signs, etc. are red, so I would naturally call that color red. And in the same way, if I see green the way you see red, when I look at grass I would be perceiving the same color that you see when you look at a stop sign. But I would call it green, because everyone knows grass is green, what else would I call it?
There would be no disagreement between us about which objects are red, no way to tell that we see them differently, and no real-world consequences that would force any neurons to rewire, because our ways of perceiving the world would both be coherent and equally valid.
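To make the behavioural indistinguishability concrete, here's a toy sketch (Python; the "quale" codes and wavelength labels are invented for illustration, and of course this caricatures perception as a lookup):

    # Toy inverted-spectrum model: each agent maps a physical stimulus to a
    # private internal code, then maps that code to a public word learned by
    # pointing at shared examples.
    STIMULI = {"stop sign": "long_wave", "grass": "medium_wave"}

    def make_agent(percept, naming):
        """percept: wavelength -> private code; naming: private code -> word."""
        return lambda thing: naming[percept[STIMULI[thing]]]

    alice = make_agent({"long_wave": "quale_A", "medium_wave": "quale_B"},
                       {"quale_A": "red", "quale_B": "green"})

    # Bob's private codes are swapped, but he learned his colour words
    # against the same public examples, so his naming map is swapped too.
    bob = make_agent({"long_wave": "quale_B", "medium_wave": "quale_A"},
                     {"quale_B": "red", "quale_A": "green"})

    for thing in STIMULI:
        assert alice(thing) == bob(thing)  # no behavioural test tells them apart
    print(alice("stop sign"), bob("stop sign"))  # red red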
In that sense, if qualia have no discernible impact on any part of the system, in what sense do they exist?
One can take this thought further. Red vs green light have real physical properties that affect one's subjective experience of them. For example different colors' physical properties change the way that they are perceived in low light conditions, the way they are perceived when placed next to other colors, etc. So at the end of the day even if hypothetically one person's qualia for red is swapped for green, you end up with a green that acts an awful lot like a red for all intents and purposes in the brain.
Edit: So my personal hunch is that qualia can't be meaningfully said to exist.
> Suppose that I see red the way you see green. Then, when I see a stop sign or a fire extinguisher, I'm seeing the color that you would call "green".
Are you sure it is possible to pluck the color I'm seeing out of my head, and the color you're seeing out of your head, and compare them side-by-side, like we could compare two different colors of paint on a piece of paper?
I'm not so sure about that. It seems to depend a lot on what phenomenon "how you see a color" actually refers to.
If "how you see a color" refers to the neural patterns excited by light of the relevant wavelengths, and you and I have the same neural patterns, then we see the color the same way. There is no metaphysical layer on top that can make one of those experiences green and the other red.
> Nothing you can find out about wavelengths, cones, rods, neurons and so on will tell you why you see red the specific way you do, and whether my perception of red corresponds to yours or my "red" is actually your "green".
This is begging the question. You're assuming that color qualia exist in the way that objects do. That "my red" and "your green" are not just properties of personal experience, but refer to objects that could be identified, drawn out, moved around and compared like the ordinary objects in our world.
The basic mistake of qualia theory is to assume that qualia share any of the properties of physical objects or even abstract concepts. That qualia are entities distinct and stable enough to be placed side by side, compared and contrasted, the way you could compare rocks or different software architectures.
In my opinion, this makes about as much sense as asserting that square circles exist. You can say it, you can even kinda-sorta imagine it, but you can do that with a lot of things that fall apart on closer inspection.
The same applies to p-zombies. For all we know, they're square circles. We think that they're a coherent concept because we can kinda-sorta imagine them, but we're nowhere near knowing that they're a coherent concept.
There is such a thing as trusting feelings and intuitions and imaginations too much, especially in the incredibly abstract context of somehow reifying your experiences and attempting to understand what they "are".
My question is simply "why am I me?" Why and how is my personal point of view fixed to this particular lump of meat and I never wake up one day as you?
Let's assume that consciousness works in roughly the way implied by your question: it exists, but it exists entirely separately from both the body and from mental contents.
Now, let's further assume that the scenario you're proposing actually happened. Last night, in our sleep, you and I switched souls. Yesterday's me is actually you now, and yesterday's you is actually me now.
How can either of us tell the difference? Clearly the contents of our consciousness - our memories, knowledge, skills, perceptions, emotions, etc. - are attached to the body, particularly the brain. This is just the assumption we started with. What, then, makes you you and me me? Clearly our souls aren't doing a very good job at telling the difference!
Check out Chapter 11 of The Hidden Spring by Mark Solms.
In it he identifies subjective experience, for example perceiving the colour red, as an observational perspective.
He reminds us that we can also experience vision objectively, with the right lab equipment, e.g. by listening to spike trains travelling from the retina down the optic nerve.
Say we do the latter. We can then legitimately ask what the same process looks like from the point of view of the system/subject/patient. Answer: vivid red.
So spike trains and redness are differing valid perspectives on the same underlying physical process, namely vision. One doesn't arise from the other; they are both products of vision.
I don't think so... I thought GP was explaining how they would be quantifying qualia, and pointed out that this is kind of against the definition of the word.
To turn this on its head, maybe the problem is assuming that everything in the universe is without consciousness aside from certain classes of matter organized in the particular configuration of a brain. If we instead assume that consciousness is an intrinsic property of all matter and space, then the problem goes away. Or to state it another way: either everything is conscious or nothing is.
It doesn't do a darned thing to the problem. It just moves it around; the question becomes "If the entire universe is 'conscious', for whatever your definition is, why are some configurations of matter more able to express it and others less?" How can we build an identifier that says which configurations will be more expressive and which less, by inputting the configuration? How can we engineer configurations of matter that are more expressive rather than less? It is clearly obvious that configurations of matter are not just additive (that is, it is not the case that every 3 pounds of matter is conscious in exactly the same way as any other 3 pounds), so what's the difference, exactly?
Which happens to be the exact same problem.
This problem doesn't go away with any amount of changing definitions. Shifting the mystery under the rug where you can't see it doesn't mean the mystery has gone away, nor that everyone else is going to be fooled. Some of us still see the lump under the rug you just created.
To me, all those questions are much closer to creating a testable hypothesis than trying to ask how to go from 0 to 1, from the inert to the conscious. "Consciousness expression" sounds like something you could develop a measurement of based on an observable behavior. I'm not sweeping the problem under the rug, rather presenting a scenario in which trying to discover the origin of consciousness is like trying to discover the origin of energy.
My reasoning is that the people that defined it did so in a time when our science was less developed, so of course the problem seemed much harder.
Information theory makes even the hard problem a matter of understanding the interaction fields of physics together with the chemistry of biology. Relativity and sensory network effects explain relative experience elegantly enough.
A lot of theories from back in the day are built on outdated understanding. Unfortunately their authors did not get to see our achievements in engineering unravel a lot of their overbaked theories, built as they were to fill in gaps without hard evidence. In the same way, we won't be around when future technology has no need of all this software we wrote; it won't literally be handed on.
Luminiferous aether was once a real thing to many, even though it isn't one discrete thing. One might consider it a poetic initial take on field theory, which we now express in sets of glyphs with shared meaning. Artistically, those could also be imagined as flowing sets of matrix code glowing like a luminous field.
If there's a hard problem of consciousness, it's an unwillingness to consider that there is no Valhalla. Is there a hard problem? Or do we hope for an answer that suggests we're not just meatbags?
The hard problem has technological progress built into its definition. It defines the problems of consciousness that can be solved by progress as the "soft" problems, and the problem that cannot be solved by progress as the "hard" problem. The hard problem doesn't say that a mechanism _hasn't_ been found; it says a mechanism _cannot_ be found. Like Gödel's incompleteness theorems or Heisenberg's uncertainty principle, it places a limit on what can be known about the system: "[the hard problem will] persist even when the performance of all the relevant functions is explained."
The validity of that is up to you, but if you accept the hard problem as a valid question it will not be solved by technological progress.
Yep it’s not an interesting idea, the hard problem. We can never see outside our universe. We can’t know all states of matter ever. We can’t peek beyond the speed of light. We can solve a lot of problems we actually have without an answer (42, but what…)
Humans have a willingness to see truth in metaphor and analogy, and invent them to avoid accepting we’re just meat bags.
That’s what the hard problem of consciousness is to me; biological ideation run amok.
It has useful political effects: it can be used to disabuse the self-righteous, because it's a purposeful thought-ending monolith, nothing more.
We’ll keep iterating on our theories of the interaction of fields and matter and stop caring about the hard problem like we quit discussing luminiferous aether. We’ll stop seeing the literal edge of reality as a boundary on experience in the first place.
There is also the possibility that our cognitive limits will prevent us from creating artificial intelligence and consciousness, even if it is materially possible.
I’d be more interested in augmented human intelligence. Growing neuron structures to speed the acquisition of skill and knowledge.
AI as we know it now is for empowering aristocrats. Here's Google's data center empowering Google to make business choices that involve extracting effort from us.
I'd rather science and technology empower individuals uniquely and not be ground down to only the fiscally prudent efforts.
I think you are the one doing the hand-waving, given some pretty trivial resolutions of the p-zombie problem.
Specifically, I'd define sentience, "qualia", and experience as certain replies to certain stimuli: sentience as the ability (skill level) to play any game like Starcraft, experience as the ability to reproduce previously received inputs, and qualia as the set of answers to all possible questions describing a particular input.
Given these definitions, the Chinese room construct (which is supposed to be a p-zombie) has all three.
So unless you can either provide better definitions of the three concepts (without violating Occam's razor or Popper's criterion) that the Chinese room would not satisfy, or give another way to construct a p-zombie that lacks one of them but still feels otherwise indistinguishable from "a person", the problem's solution is staring at you right there.
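Purely to make the behavioural flavour of those definitions concrete, here is a minimal Python sketch. The Agent protocol, the function names, the prompts, and the threshold are all invented for illustration; nothing here comes from the thread or from any literature:

    from typing import Callable, Protocol

    class Agent(Protocol):
        # Anything that maps stimuli to replies: a person, a chatbot,
        # or a giant lookup table (i.e. a Chinese room).
        def respond(self, stimulus: str) -> str: ...

    def is_sentient(agent: Agent,
                    play_game: Callable[[Agent], float],
                    threshold: float) -> bool:
        # Sentience as skill level: score above some threshold
        # at a game (Starcraft, in the definition above).
        return play_game(agent) >= threshold

    def has_experience(agent: Agent, past_inputs: list[str]) -> bool:
        # Experience as the ability to reproduce previously
        # received inputs on demand.
        return all(agent.respond("repeat: " + x) == x for x in past_inputs)

    def qualia_of(agent: Agent, stimulus: str,
                  questions: list[str]) -> dict[str, str]:
        # Qualia of an input as the set of answers to (a sample of)
        # all possible questions describing that input.
        return {q: agent.respond(q + " re: " + stimulus) for q in questions}

Note that all three tests look only at replies to stimuli, so a sufficiently large lookup table passes them just as a person does, which is exactly the point being made.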
I firmly believe this problem will melt away in time. 'Consciousness' will gradually cease to be something that empirical science is concerned with, any more than the 'soul' is now, though it was highly salient to science 200 years ago and still is to non-science communities.
The signs of this are: there's no broad agreement on what it means, there are no falsifiable predictions outstanding, and the only reason to not accept that we are philosophical zombies is what people say when you ask them about it. People believe all kinds of things that are obviously not true. Why do we give such special credence to this one?
Why is it so hard to grasp the idea that "consciousness" isn't any more complicated than our brains following natural laws of the universe and that you in fact don't have free will?
Look, if you jump, you will fall back down to the earth, and there is nothing you can do about it. Just the same, if you sever the nerves of the eyes, you can no longer see, and if someone taps your knee in a certain spot, it will jerk.
We are nothing more than input-output machines, whether made from "meat" or other parts, as in people with pacemakers. Get over it.
I don't know... consensus reality is what I would call the reality that is portrayed in media, or maybe the reality that fits the largest number of people.
Objective reality is best described with particle physics, but that stuff can't be understood by the human mind. The human mind can't keep track of so much complexity at once.
We may have body consciousness at times, though consciousness doesn't depend on body consciousness, nor is it required to be embodied. When I dream, I am often not conscious of breathing or of my body, for example.
I once had a strange dream that exemplifies this. I was dreaming in Wisconsin of being on a subway train in NYC. A man in the train crouches down near me. I follow him down. He asks, “What is your name?” I tell him, “it’s Seth.” “That’s an old name”, he responds.
I am thinking, who is this guy?
Then as if to show me something, he pantomimes a mallet and strikes an imaginary bell, except I hear the bell ring out as clear as if I were awake. A disembodied voice says something in German, which I don’t understand, though I wish I could remember it now. Then I wake.
See, I was conscious of the bell, though none was struck. I didn't hear it in my ear; I heard it in the mind. So I don't think vibration is sound, and EM waves aren't color, though they are the most common precursors to this sort of consciousness.
We can go to places in our minds where our bodies cannot follow. I think it's a hopeful vision of what might be possible some day: that we might find a way to transform ourselves, from a world inhabited by fears and self-loathing to a world of wonder and discovery.
P.S. I understand I’m not solving the hard problem of consciousness in this passage. Though I wanted to add some notes of what I’ve learned about consciousness along the way.
Consciousness is nothing more than the final arbiter of attention. As such it must have a way to evaluate situations, including our own internal state, and this is done by emotions. Emotions are our default approach to interpreting a given situation.
I think the 'beast machines' analogy is a good one. We're reproduction machines. We're not thinking machines. We do not evaluate our environment using logical thinking and although some may learn to use this tool (logical thinking) to justify paying attention to something in particular, it's emotions that guide our eyes and our attention.
Anything that has attention to guide has a consciousness. It might not use words or other symbols to reinforce attention but the basic mechanism, the cycle of emotion, evaluation and attention stays the same.
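As a toy illustration of that cycle (a sketch in Python; the valence table and every name here are invented, not taken from any established model):

    # Toy emotion -> evaluation -> attention loop.
    def emotional_valence(percept: str) -> float:
        # The default, pre-rational evaluation of a percept.
        weights = {"threat": -1.0, "food": 0.8, "novelty": 0.5}
        return weights.get(percept, 0.0)

    def arbitrate_attention(percepts: list[str]) -> str:
        # "Consciousness" in this toy is just the arbiter: the percept
        # with the strongest emotional pull, positive or negative, wins.
        return max(percepts, key=lambda p: abs(emotional_valence(p)))

    print(arbitrate_attention(["wall", "novelty", "food", "threat"]))  # threat

Nothing in the loop reasons logically; the evaluation step is just the emotion table, which is the claim above.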
> Brains are not for rational thinking, linguistic communication, or perceiving the world.
Isn't this false?
Brains are for whatever brains are for. If brains are thinking rationally (2+2 = 4), communicating linguistically (I'm typing this using language), and perceiving the world using senses, then they are for those things just as much or as little as they are for anything else. I suppose you can say something along the lines of "brains are for continuing survival and propagation of genes", as the author did, which is fine, but it would make more sense to state that and try to prove it. Because how do we know that thinking rationally isn't what the evolutionary drive is for? How do we know that brains evolved for survival? Survival of what? Gut bacteria? The earth (keep reproducing to keep organic material churning)? How do we know that the emergence and propagation of language isn't what brains are for?
I think it's hard to really say what brains are or aren't for. That's a deceptively difficult question, and one probably shouldn't casually throw out "brains are not for X".
-edit-
I also wouldn't take it for granted that brains or anything is for anything.
I do think we are "philosophical zombies", as someone else put it, but it's not a distinction that matters, because whether we are or not doesn't change anything. It's like accepting that you don't have free will: it doesn't change anything. You still go to jail if you rob someone (the interesting question actually comes from the morality of this, IMO).
Consciousness is probably an evolutionary adaptation for increased cooperation - to that end it's a shared delusion (language allows us to become "more" conscious), but a useful one.
I'd posit that certainly all mammals are conscious (if they're not, then I'm not sure you can argue that anything is conscious besides yourself, which is fine if you hold that view) and likely all animals are conscious, just as a matter of degree of genetic expression. Panpsychism is tempting, but if we wanted to say that rocks are conscious, then I think we could still set apart different "levels" of conscious expression and draw meaningful distinctions - it's not a hill one has to die on.
Most of the work on consciousness that I've observed carries historical religious baggage (not insulting religion here), with humans being at the center and "special", but if you strip that away I think it's pretty easy to see that we're not. Many scientists I've observed kind of have the God problem: they're wrestling with a question or dilemma while supposing God is involved or exists, and so it must be dealt with. Similarly, in the study of consciousness we seem to be supposing that humans are conscious to some special degree and fitting science to explain that, which is a faulty premise from the start.
> Experiences of being you, or of being me, emerge from the way the brain predicts and controls the internal state of the body.
I don't know about you other beast machines, but I rarely ever think of the internal states of my body (trying to adjust my beating heart, my pupils, etc.). To think that consciousness is required for these rare moments of internal-state reflection seems a stretch (considering most life doesn't require it).