Not exactly. I am saying that consciousness is fundamental; that matter, physics, our brains, etc., emerge from consciousness. Physics is the OS, consciousness is the transistors/electricity, and we are conscious agents (to borrow a term from Donald Hoffman) using the OS.
That doesn't change the question though. We don't know how many layers are beneath the known laws of physics. Who knows, one of them could be consciousness. Maybe it's turtles all the way down.
The question remains. Can you provide a direct answer? If we measured all the particles in the brain, would they be operating in a way not compatible with the currently known laws of physics?
>If we measured all the particles in the brain, would they be operating in a way not compatible with the currently known laws of physics?
Their operation would be perfectly compatible with the known laws of physics. But their operation is _not_ thinking itself, it's what thinking looks like when observed across a dissociative boundary. If I am sad, and you look at my face and see tears, you would never think that the tears were my sadness itself; they are a representation, an image, of the sadness. Tears are what sadness looks like from across that boundary. I experience sadness from a first-person perspective, you see my tears from a second-person perspective, across a dissociative boundary. So, you can measure electrical activity in your brain when you are thinking thoughts, but that activity is not your thoughts, in the same way that flames are the image of fire but they are not fire itself. Neuronal activity is the image of thought, it is what your thoughts look like from across a dissociative boundary.
Okay, so if I use the laws of physics to simulate a human brain, it will behave exactly like a human brain in the real world. Will it also be conscious and experience sadness?
>Will it also be conscious and experience sadness?
Imagine you are programming an AI simulation. You could train a detector to associate a certain wavelength with a certain color. When shown a red light, the AI could say "that is red", because it correctly identified the wavelength. But it would never know what it feels like to see red, right? This is similar to how a blind person cannot know what it feels like to see red, but they can intellectually understand that it is an oscillation of 4.3*10^14 cycles per second.
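To make that concrete, here is a minimal sketch of the kind of detector I mean (the band boundaries below are approximate textbook values, nothing more):

```python
# A toy "color detector": a measured wavelength (a quantity) goes in,
# a label (a symbol) comes out. Nothing in here sees red.
# Band boundaries are approximate illustrative values.

def identify_color(wavelength_nm: float) -> str:
    if 620 <= wavelength_nm <= 750:
        return "red"
    if 570 <= wavelength_nm < 620:
        return "yellow/orange"
    if 495 <= wavelength_nm < 570:
        return "green"
    if 380 <= wavelength_nm < 495:
        return "blue/violet"
    return "outside the visible range"

print(identify_color(700.0))  # prints "red"
```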
A different example: you could train an AI to recognize 10,000 songs. It would listen to the frequencies and patterns, and make an identification.
In both of these cases, we have quantity as the input and output.
If after training, you asked the AI to identify the first song it was trained on, would the AI experience nostalgia? Would there be a way the AI "feels" about the song? We can probably both agree that the answer would be no. For the same reasons, the answer to your question is also no.
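The song case can be sketched the same way (a toy lookup; assume the "fingerprints", e.g. hashed spectral peaks, were extracted during training):

```python
# Toy song identifier: match an audio fingerprint against fingerprints stored
# during training. Identification is a lookup over quantities; there is no
# felt familiarity, no nostalgia, anywhere in it.

trained_songs = {
    ("peak_hash_1", "peak_hash_2"): "song #1 (the first one trained on)",
    ("peak_hash_3", "peak_hash_4"): "song #2",
}

def identify_song(fingerprint: tuple) -> str:
    return trained_songs.get(fingerprint, "unknown")

print(identify_song(("peak_hash_1", "peak_hash_2")))  # prints "song #1 ..."
```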
I appreciate your response, but it's impossible for me to imagine programming an AI simulation capable of feeling the perception of red. The fact that I can't imagine doing these things is because of a shortcoming in my knowledge. Maybe that knowledge is out there, or maybe it isn't. So the exercise gets us no closer to answering the question at hand.
But what I can do is imagine creating a physics simulation. There are no gaps in our knowledge there. So again, I'll ask. We create a physics simulation of a human brain. Can the brain write a novel? Answer questions about what it's like to perceive the color red? This is just a yes or no question.
>The idea that I can't imagine doing these things is because of a shortcoming in my knowledge.
I don't think there is a shortcoming in your knowledge. Your metaphysical intuition is correctly tuned: simulations cannot feel.
>We create a physics simulation of a human brain. Can the brain write a novel? Answer questions about what it's like to perceive the color red? This is just a yes or no question.
GPT-3 can do both of these things. Is it conscious? So if you need a direct answer to your yes-or-no question, it is yes: a simulated brain could write a novel and answer questions about the color red. But when we reword your second question as "can a simulated brain _experience_ the color red?" the answer becomes clearer; the simulation can identify a wavelength and know it is called "red", but the experiential part is akin to explaining color to a blind person.
A simulated brain could identify the molecular patterns of cocoa and sugar, but can it know what it is like to taste chocolate? Think about what it means to taste chocolate. Is it purely quantitative, like the balance of ingredients, or is there something else going on that is qualitative? Something abstract, something with meaning, something closer to the metal? We can probably agree that it feels like there is. What we are describing is subjectivity: your private conscious inner life. I suggest it is this that is fundamental and cannot be simulated. This is the layer where experience "happens". From this layer emerges meta-consciousness.
OK, so a simulated human brain acts the same as a real human brain, is capable of the same things, tells you that yes, it can experience the color red, etc, but does not have any internal conscious experience.
Why then would evolution produce beings with internal conscious experience?
This would also mean that our internal conscious experience has no effect on what decisions we make, on whether we cry, smile, what memories we form, etc.
>Why then would evolution produce beings with internal conscious experience?
Well, I'm arguing it's the other way around. But to address what I think is the spirit of your question: if reality is only mind, if consciousness is truly fundamental, then why can't you read my thoughts? Why do we feel like individuals? Why do we have obviously separate private conscious inner lives?
When you are asleep and dreaming, your "alter" generally does not know it is dreaming. Your dream self has dissociated from your waking self, but it is only after you wake up that you realize you were dreaming (if you even remember). Another example of this phenomenon is Dissociative Identity Disorder, where one mind splits into many alters, each unaware the others exist. I'm glossing over a significant amount in order to get to my point, but here are a couple of links that go into great detail:
My point is, our private conscious inner lives are dissociations, alters, from a "mind-at-large" fundamental consciousness. And the boundaries of these dissociations, the "containers" of individual private conscious inner life (again, glossing over a great deal, like panpsychism's combination problem), are metabolizing organisms. Metabolizing organisms are what alters _look like_ from the outside. Kastrup:
"Since we only have intrinsic access to ourselves, we are the only structures known to have dissociated streams of inner experiences. We also have good empirical reasons to conclude that normal metabolism is essential for the maintenance of this dissociation, for when it slows down or stops the dissociation seems to reduce or end. These observations alone suggest strongly that metabolizing life is the structure corresponding to alters of [fundamental consciousness]
But there is more: insofar as it resembles our own, the extrinsic behavior of all metabolizing organisms is also suggestive of their having dissociated streams of inner experiences analogous to ours in some sense" (from the mdpi link)
>This would also mean that our internal conscious experience has no effect on what decisions we make, on whether we cry, smile, what memories we form, etc.
Don't your thoughts and feelings influence your behavior?
> Well, I'm arguing it's the other way around. But to address what I think is the spirit of your question: if reality is only mind, if consciousness is truly fundamental, then why can't you read my thoughts? Why do we feel like individuals? Why do we have obviously separate private conscious inner lives?
No, this is not the spirit of my question at all. I feel like the spirit of my question is being completely missed. I realize you've thought very deeply about how and why the world arises from consciousness itself, and you've been working very hard to get this concept across. I get the general gist of your theory, and there's a lot of detail and thought behind it.
I'm not sure how I can set my question out more clearly than I already have. But I'll try. Rather than trying to explain your own theory in greater detail, can you try to work with me on getting a mutual understanding of my line of reasoning?
We have an extremely good predictive model of particles and fields. So much so that we are building experiments worth billions upon billions of dollars to attempt to find places where reality differs from that model to even the slightest degree.
The human brain, the warm squishy stuff in your head, can be viewed as being composed of particles and fields. Particles and fields may just be some manifestation of some panpsychic reality, but we can still use our model of particles and fields to predict the behavior of those particles and fields.
So the first question. Can we use our model of particles and fields to predict the behavior of the particles and fields within the human brain? You already appear to have answered this question in the affirmative: "Their operation would be perfectly compatible with the known laws of physics."
From this it follows that I can create a computer model of a human brain, complete with all the cells, proteins, neurons, etc., and that model will be capable of any action (e.g., the signals sent by neurons out of the brain) that a real human brain is. There would be no way to discern between a real flesh-and-blood human brain and the simulated one by talking to it. Since a real human brain tells you that it's conscious and has internal experience, the simulated one must tell you the same.
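Just to be concrete about what a "computer model" means here, a toy sketch: a single leaky integrate-and-fire neuron (a textbook simplification; every parameter below is an illustrative placeholder, not a fitted value), stepped forward in time. A whole-brain simulation would be this kind of arithmetic at enormously greater scale and fidelity, but nothing would happen in it except state updates dictated by the model's equations:

```python
# One leaky integrate-and-fire neuron, stepped with Euler integration.

tau_m = 0.020      # membrane time constant (s)
v_rest = -0.065    # resting potential (V)
v_thresh = -0.050  # spike threshold (V)
v_reset = -0.065   # post-spike reset potential (V)
r_m = 1e8          # membrane resistance (ohm)
i_in = 2e-10       # constant input current (A)
dt = 1e-4          # integration time step (s)

v = v_rest
spike_times = []
for step in range(10_000):  # one second of simulated time
    v += dt * (-(v - v_rest) + r_m * i_in) / tau_m
    if v >= v_thresh:       # threshold crossed: the neuron "fires"
        spike_times.append(step * dt)
        v = v_reset

print(f"{len(spike_times)} spikes in one simulated second")
```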
While the simulated human brain will tell you that it's conscious, it is of course not proof that it is. But this leads to your next question:
> Don't your thoughts and feelings influence your behavior?
If thoughts and feelings are things that don't pass the computability test, but a simulated human brain doesn't have external behavior that differs from a flesh-and-blood human brain, then no, thoughts and feelings have no effect, or even influence, on your behavior. In such a case they are mere passengers. Any effect they had would necessitate that the particles and fields within the brain suddenly behave in a way that violates our model of particles and fields.
I'm sorry, I am not trying to frustrate you or avoid your questions. I'm enjoying this conversation. I will try to work with you.
> From this it follows that I can create a computer model of a human brain, complete with all the cells, proteins, neurons, etc, and that human brain will be capable of any action (eg, the signals sent by neurons out of the brain) a real human brain is. There would be no way to discern between a real flesh and blood human brain and the simulated one by talking to it. Since a real human brain tells you that it's conscious and has internal experience, the simulated one must tell you the same.
> While the simulated human brain will tell you that it's conscious, it is of course not proof that it is.
I still agree with all of this. A simulated brain could give the appearance of consciousness while not being conscious. It would not have a private conscious inner life, but it could say things that make it look like it did.
> If thoughts and feelings are a thing that don't pass the computability test, but a simulated human brain doesn't have external behavior that differs from a flesh and blood human brain, then no, thoughts and feelings have no effect or even influence on your behavior. In such a case they are a mere passenger. Any effect they have would necessitate that the particles and fields within the brain suddenly behave in a way that violates our model of particles and fields.
I am struggling to follow your point here. Thoughts and feelings are internal experiences which correlate with phenomenal consciousness and are absent in a simulation. Could you give me an example of the effect you are describing and how that would violate our current models?
Ok, so you have a simulated brain and a real brain. Both are, to an external observer, functioning identically. For something to have any effect on the real brain, it would also need to have an effect on the simulated brain. Otherwise the simulated brain would deviate from the real brain, and an external observer would be able to identify which is which.
Therefore by your definition of internal experience, internal experience has no effect on our behavior.
> For something to have any effect on the real brain, it would also need to have an effect on the simulated brain.
It would just need to _look like_ it has an effect on the simulated brain, right? If you ask me a question and I pause, say "hmm", and put my hand to my chin, can you know that I am actually thinking and formulating a response? If the entirety of your observations is external, of course you can't. There is no way to tell whether my response is a random choice from an array of preset answers or a group of concepts activating each other.
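Here is a toy illustration of that point (both responders are invented for the example):

```python
# Two responders whose observable behavior is identical. One picks canned
# answers at random; the other does some internal "processing" first. An
# external observer sees only the returned strings and cannot tell them apart.

import random

ANSWERS = ["Hmm... let me think.", "That's a good question."]

def canned_responder(question: str) -> str:
    return random.choice(ANSWERS)

def deliberating_responder(question: str) -> str:
    _internal = sum(ord(c) for c in question)  # stand-in for hidden processing
    return random.choice(ANSWERS)

# From the outside, both produce the same distribution of responses.
print(canned_responder("What is red like?"))
print(deliberating_responder("What is red like?"))
```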
That's because brain activity is part of what our inner, first-person experience looks like from a second-person perspective (i.e., to an external observer). Tears are not sadness; they are what sadness looks like; they are an external description of an internal state. Sadness can only be experienced by the person experiencing it. Tears are a description of sadness. But can't tears be faked?
So when we see the same neuronal activity in both brains, we have no way of knowing whether inner experience actually gave rise to the activity, or it just looks like it did.
If a human being with inner experiences behaves identically to one without inner experiences, in what way can inner experiences be said to give rise to our behavior? You're saying our behavior is not affected by our inner experience. I don't see how any other conclusion can be reached than that our inner experience in no way gives rise to our behavior.