
If you are an agent in a physical reality you need an internal model of that physical reality to have mastery over it; a way to simulate actions and outcomes with reasonable precision. There are infinitely many such models. Humans are born with one such model. It is our firmware. It was found via evolution and we all share the same one. You were not born as a blank slate, quite the opposite. What is the relationship between reality and a model of reality? What if every agent you could communicate with had exactly the same model as you? It would be easy to get confused and imagine there is no model at all; that you all are somehow experiencing the world as it truly is. We are all in the same Matrix. In order to explain the redness of red you must first explain the relevant aspects of redness in the specific model of physical reality that all humans share. We did not come up with the model. We inherited it at birth. We have no idea how it works. The only thing we can do is say "if you find yourself in the Matrix, look at something red, you will understand as we have understood, we do not yet know of another way."



Because it matters. There is no red: you can't measure red, and we will never experience red in exactly the same way. One of the biggest mistakes humans make is thinking that experience, or qualia, is what it represents itself as. It's important to understand that we simulate a reality, not the reality.

Nothing "real" exists exactly the way we perceive it. The world we experience is one where the amount of detail has been compressed by an extreme factor. We don't perceive the quantum fields of the surfaces we touch, we don't perceive individual photons hitting our eyes. The "real" world contains so much information that our brains have no way to process it in detail.

Instead, our brains create a model of the world that is optimized for utility (in a Darwinian sense). In fact, this happens at different levels. Some centers of the brain (those that regulate basic functions, as well as sensory input) have access to different information than what reaches our consciousness.

A "purple" flower doesn't emit "purple" photons, but rather a combination of photons that our sensory system aggregates to "purple". Another mix of the same types of photons, but in different relative numbers, may look "brown", "gray", "white", etc. In other words, all such qualia are some kind of illusion, or maybe more accurately, aggregated information.
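A toy sketch of the aggregation described above, with made-up cone sensitivities (not real photoreceptor data): two physically different photon spectra can produce identical cone responses, so they are perceived as the same color (a "metamer" pair).

```python
# Toy sensitivity of each cone type across five wavelength bands.
# These numbers are invented for illustration only.
CONES = [
    [0.0, 0.2, 0.5, 0.8, 1.0],  # long-wavelength cone
    [0.1, 0.5, 0.9, 0.5, 0.1],  # medium-wavelength cone
    [1.0, 0.8, 0.5, 0.2, 0.0],  # short-wavelength cone
]

def cone_response(spectrum):
    """Collapse a 5-band photon spectrum into the 3 numbers the eye keeps."""
    return tuple(round(sum(s * p for s, p in zip(cone, spectrum)), 6)
                 for cone in CONES)

spectrum_a = [2, 3, 1, 3, 2]
spectrum_b = [3, 1, 3, 1, 3]  # differs in every band, yet...

print(cone_response(spectrum_a) == cone_response(spectrum_b))  # True
```

The information thrown away in that collapse is exactly why the perceived color is "aggregated information" rather than a property of any single photon.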

The same is true for most categories of concepts. Physical objects, surfaces, fluids, living objects, etc. Including what we think of as "agents".

"Agents", whether they are the tiger we imagine in the forest, another human or indeed ourselves, and whether real or imagined, are also way too complex for us to comprehend in detail.

What we're interested in (or rather, what has Darwinian utility) when dealing with "Agents", as opposed to non-"Agents", is that "Agents" tend to respond to stimuli in more sophisticated ways than non-"Agents". If we look big and scary, that may deter a wolf from eating us or a robber from robbing us, but it will not stop a house fire from burning us.

As a consequence, there are actions we may take to improve the odds that the "agent" types around us behave in ways that are beneficial to us. However, just as we're unable to comprehend something as simple as the "real" photons behind the color "purple", our brains cannot calculate EXACTLY what we can do to reach that goal.

Instead, most of us seem to have a simplified framework more or less built in for finding approximately optimal ways to modify the environment so as to affect the behavior of the "agents" around us. "Free will" and "consciousness" are not perfect representations of the ways the brains of "agents" work. But the concepts are simple enough that our brains can reason about them (at some level), while being predictive enough to have significant utility, and also hard to "hack" for an agent with conflicting interests.

Roughly speaking, the utility of the concept "free will" is that it allows us to communicate to others a set of behaviors that are likely to be met with retribution, while mostly limiting such threats to situations where the threat itself is likely to prevent the unwanted behavior from happening in the first place.

Actually having to dish out punishment is usually not high-utility behavior. It can be both costly and risky. Having to do it can mean one (or more) of three things:

1) The punishment isn't harsh enough

2) The adversary does not see the threat as credible. A thief may think they will not be caught, or a state leader may think a country's ally will not help defend it.

3) The adversary is not able to understand the threat, or in some other way incapable of taking the threat into account when acting.

In the case of the last one, one can have a rule that punishes people for killing other people, for instance. If the punishment is severe enough, and the likelihood of getting caught is close to 100%, most people will stop killing each other. However, SOME people will not be able to take the information into account for one reason or another. It could be that they go into a rage that makes them ignore any upcoming punishment, there may be some kind of mental disease, they may not have enough intelligence to understand the threat, or avoiding the killing could require some skill they do not possess (maybe they were not able to tell that the gun was real, and indeed loaded).

If we were able to isolate all cases that fall under type 3, we might say those acts should not be punished; in other words, the actors did not have the "free will" to avoid them.

But the distinction may be hard to identify. Making too many exceptions will often both increase the number of people who do not see the threat of punishment as credible AND decrease the number of people who are able to understand what the rule is.

So, basically:

- "Free will" is a simplification of the kind of process by which the threat of punishment prevents us from doing some set of actions that would be met with retribution. If legal systems (or similar informal retribution threats) are set up properly, those who actually have "free will" with respect to a type of action will rarely be punished for it.

- Those who DO get punished will very often NOT have "free will", in the sense that they are unresponsive to stimuli about that type of action. The people who DO end up in prison (assuming that punishment is strict enough and the police are able to catch most criminals) will be those NOT responsive to the threat of punishment, for one reason or another.

- This may still provide close to optimal utility for the set of people who decide what to punish. The cost of punishing those who for some reason are unable to avoid committing the "crime" is lower for that group than the benefit gained from the threat causing many others NOT to commit the "crime" when they otherwise would have.

- Indeed, since we are not capable of a fully detailed analysis of the "real" things happening inside the brains of "agents", we have no choice but to use a simplified model. Some concept like "free will" is probably needed to lay out the consequences of actions. And since people differ in their ability to understand the details and limitations of such models, simpler models have the advantage of being easier to communicate (to children, to people with limited cognitive abilities, or to immigrants with little cultural understanding).
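The trade-off in the bullets above can be sketched as a toy model. All numbers are made up for illustration: a credible threat deters the responsive majority, and the cost of punishing the unresponsive few is the price the rule-makers pay for that deterrence.

```python
def rule_utility(population, share_responsive, base_crime_rate,
                 harm_per_crime, cost_per_punishment):
    """Net utility of a punishment rule for those who impose it (toy model)."""
    responsive = population * share_responsive
    unresponsive = population - responsive
    # Responsive agents are deterred; unresponsive ones offend and get caught.
    crimes_prevented = responsive * base_crime_rate
    punishments_needed = unresponsive * base_crime_rate
    return (crimes_prevented * harm_per_crime
            - punishments_needed * cost_per_punishment)

# 10,000 people, 95% responsive to the threat, 1% would otherwise offend.
print(rule_utility(10_000, 0.95, 0.01, 100.0, 50.0))  # positive: threat pays off
```

Even with every punishment landing on someone who could not respond to the threat, the rule comes out ahead for the group imposing it, which is the point of the last bullet.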

At this point, I could start writing about how empathy comes into this, how empathy is modulated by a feeling of kinship or shared identity, as well as by a perception of being threatened by someone, but that would make this post quite long :)


I’m not sure of the point you’re making. My point is that it’s entirely possible to conceive of a complex biological agent that can take actions on the basis of sensory input data without invoking the need for subjective experience. That would be the ‘philosophical zombie’ described by David Chalmers.

However we have a subjective experience of what it ‘feels like’ to see red. Why is that needed?


My understanding of the thought experiment is that it is not about being able to detect red, but about experiencing what it feels like to see red. So with a hardware/software device, could we make the hardware experience seeing red the way you or I experience the color red?

My strong intuitive hunch is that we'll never build something electronic that can experience things (that it feels like something to be that device).

Compare this to how our immune system works: it reacts to inputs and does very complex things, but without experience. There could be beings or machines that could have this exact discussion, learning things, agreeing, disagreeing, but without experiencing anything.


The experience of colour is a fact about the brain, yes, which is additional to the knowledge of colour. A very simple system - a camera - can have knowledge of colour without the experience of colour. We say that something "knows" it is seeing the colour red if we can identify a part of the abstract world-model that is instantiated in that thing, such that the part is activated (for whatever "activated" means in that world-model's structure) iff the input to the world-model is red. I say that something "experiences" the colour red if additionally that world-model has a structure similar enough to my own that the "activated" part of the model has a direct analog in my own mind; and something "experiences" to a greater or lesser degree depending on how close the analogy is.
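A minimal rendering of that definition of "knows": a system with one part that is activated iff the input is red. The thresholds are arbitrary. By the definition above the system "knows" red, yet its "world-model" shares no structure with ours, so it "experiences" nothing.

```python
def red_unit(rgb):
    """The one part of this toy world-model; it activates iff the input is red."""
    r, g, b = rgb
    # Arbitrary thresholds standing in for "the input is red".
    return r > 0.6 and g < 0.3 and b < 0.3

print(red_unit((0.9, 0.1, 0.1)))  # True: the "red" part activates
print(red_unit((0.1, 0.9, 0.1)))  # False: not red, no activation
```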

Of course I don't know whether anyone else "experiences" the colour red ("is my red the same as your red?"), but from the way people behave (and from knowledge of science) I have lots of evidence to suggest that their world-models are similar to mine, so I'm generally happy to say they're experiencing things; it's the most parsimonious explanation for their behaviour. Similarly, dogs are enough like me in various physical characteristics and in the way they behave that I'm usually happy to describe dogs as "experiencing" things too. But I would certainly avoid using the word "experience" to describe how an alien thinks, because the word "experience" is dangerously loaded towards human experience and it may lead me to extrapolate things about the alien's world-model that are not true.

Mary of Mary's Room therefore does gain a new experience on seeing red for the first time, because I believe there are hardcoded bits of the brain that are devoted specifically to producing the "red" effect in human-like world-models. She gains no new knowledge, but her world-model is activated in a new way, so she discovers a new representation of the existing knowledge she already had. The word "experience" is referring to a specific representation of a piece of knowledge.


You are right, there is still the experiential question of consciousness.

I.e. qualia.

I think that will become tractable with AI, since we will be able to adjust what their mind has access to vs. what it doesn’t. Experiment with various levels of awareness.

By which I mean, our experience of the color red seems magical in that we can’t explain our experience of it. The experience is manufactured below our conscious level for our conscious level.

If we (or AI) could progressively add more access to lower encodings, we could at least experiment with our experiences in ways we cannot now.

But qualia is the only unanswered question I think. (Including the qualia of being self-aware of one’s own self awareness, which doesn’t need to be a special case.)

How do these codings of perceived information get seemingly indescribable observed qualities?


Any agent which has the ability to perceive red must have some mechanism which corresponds to that percept. The percept of red has to be different to other percepts so that it is not mistaken for something that is not red. It is subjective because the agent has no mechanism for objective experience.

I think to conceive of a philosophical zombie, you have to say that consciousness is uniquely special, in that something possessing all of its describable qualities still is not it.


Relevant reading: https://en.wikipedia.org/wiki/Philosophical_zombie

The answer seems to lie in qualia. But by definition that is something that one cannot authentically communicate. And the notion of qualia itself is increasingly convergent with DNN latent space. Is the redness of red or the warmth of sunlight just a vector in n-dimensional latent space somewhere in our brain computers?
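A toy version of that latent-space analogy: if qualia were vectors, "similarity of experience" would just be vector similarity. The embeddings here are invented for illustration, not taken from any model.

```python
import math

# Made-up 4-dimensional "latent vectors" for three experiences.
LATENT = {
    "red":    [0.9, 0.1, 0.0, 0.4],
    "warmth": [0.8, 0.2, 0.1, 0.5],
    "cold":   [0.0, 0.1, 0.9, 0.0],
}

def cosine(u, v):
    """Cosine similarity between two latent vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# In this made-up space, "redness" sits closer to "warmth" than to "cold".
print(cosine(LATENT["red"], LATENT["warmth"]) > cosine(LATENT["red"], LATENT["cold"]))
```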

Maybe we're not really conscious.


Notice you did not explain why there needs to be subjective experience. Why do I see the color red? Why am I not just the elaborate clockwork that I supposedly am, without inner sensation? I could have none and just describe one to you, but I assure you I actually have one. (Or you are a solipsist, which is fine.)

I find that people either understand this immediately or they don't.


Um, "agents that perceive the environment" sounds a lot like homunculi, which effectively just punts the Hard Problem downstream, but doesn't address it.

This is no different in essence than Searle's Chinese Room problem, which at its core asks "If the parts aren't conscious, how can the gestalt be?"

We don't have an answer, but it must be true as long as the brain is involved. Individual neurons are unconscious electrochemical devices, but they still add up to experiencing the redness of red.


How? You look at a red apple and think "such a nice red apple, but one side is a bit yellow". But "red" doesn't mean anything to you; you don't experience it. If you can imagine that, explain why red-p-zombie-you insists on being able to experience the color red while in fact he/she doesn't experience it.

I can only suppose that you imagine someone like you with a textual annotation above his/her head saying "I don't experience red color", rather than trying to imagine what it is like to be such a person.

Doesn't that defeat the purpose of a thought experiment? You don't imagine conscious experiences; you imagine descriptions of the presence or absence of conscious experiences.

ETA: Using your analogy, it is like imagining a pigeon with a label "Dragon" above its head, and then concluding that there is no logical reason that dragons can't fly or exist.

When you imagine a "real" dragon and then ponder that imagining, you have to conclude that your imagined physics must be different for it to work.


No, it's not. There is nothing to vary by Dennett's account. Our two "red qualia" are exactly the same: non-existent.

Dennett argues (actually, he just asserts it over and over), that there is no such thing as subjective experience, so to speak. He argues that, whether or not my version of red is different than yours, we're sharing a common delusion -- the delusion of subjective, conscious experience.

(Of course, we're aware of an experience, but more along the lines of a webcam program that, upon sensing the color red, flips some boolean is_aware variable.)
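The webcam analogy above can be written out literally: sensing red flips a boolean, and the program will sincerely "report" awareness, with nothing it is like to be the program.

```python
class Webcam:
    def __init__(self):
        self.is_aware = False  # the flag the analogy describes

    def sense(self, rgb):
        r, g, b = rgb
        if r > max(g, b):       # crude "red" check
            self.is_aware = True  # the boolean flips; nothing is felt

    def report(self):
        return "I am aware of red" if self.is_aware else "nothing yet"

cam = Webcam()
cam.sense((200, 30, 30))
print(cam.report())  # -> I am aware of red
```

On Dennett's reading, our own reports of experience are (so he asserts) of the same kind as this one, just vastly more elaborate.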

I'd add that -- ok, qualia have many variations. From where do we deduce that it's a grand illusion? Matter has many different variations.


Consciousness is not an experience of something. It's the experience itself. So it doesn't have qualities--it IS qualities. It's not red, it's what it's like to see red. It's the immediate (no intermediary) subjective happenings of your conscious mind. I can't point you to it because my pointing is a view presented to you in consciousness.

It's not mystical or anything. In fact, it requires less faith than anything else -- it's all you've ever known directly. You could be living in the real world or in the Matrix, it doesn't matter. Either way, consciousness doesn't change, just the contents of consciousness.


You have no idea if you live in a world of zombies who just say they perceive qualia, who just give the appearance of seeing red.

I don't see it as a real problem. If a program or machine asserts that it perceives qualia, who are we to argue it's wrong? We are in no better a position to prove the qualia of our subjective conscious experience other than by our physical resemblance to our interlocutor.

Maybe qualia is what it's like for matter to be in a feedback loop with physical interaction. Panpsychism is pretty much my position.


Exactly.

It is understandable that having a model of an observer can be useful to a brain.

But how/why that opens a window to an actual observer, and not just a model, is the question.

And we only know that - an actual observer exists - through first hand experience. It is our most immediate and certain knowledge (Cogito, ergo sum), everything else can be questioned. Yet, there's nothing in physics or computer science that gives a hint to this being the case.


We do have a direct experience with reality, but we are only capable of processing an approximate & infinitely simplified model of it.

When you take a picture of an apple, you have a picture of an apple, not the apple itself. Both are real and related, but not the same thing.


The way I look at it, conscious experiences are behavior: the behavior of an information-processing organ that has, in some mysterious but not necessarily magical way, become aware, to some extent, that it is not only an actor in its own model of what its sensory inputs mean, but also that it is a special actor, the one that actually creates the model.

Seeing red may be less mysterious than that - is it anything more than recalling similar sensations, and things (including abstract things like information/language constructs and emotions) that were associated with it?


You might be interested in the concept of qualia [0]

Vsauce did a good video on it, "Is Your Red The Same as My Red?" [1]

0: https://en.wikipedia.org/wiki/Qualia

1: https://www.youtube.com/watch?v=evQsOFQju08


If you can't answer the question the way I asked it, then there's really no point in continuing. You're misinterpreting the qualia concept in an effort to believe a certain thing about existence. My question was aimed at isolating this misinterpretation. Qualia are individual subjective perceptions. There is no 'qualia of redness'. There is a qualia of when I perceive a particular red thing, assuming qualia exist, that is. That qualia cannot be compared or contrasted with your qualia when you see another red thing. But the instant you take that out of context, what you're dealing with is no longer qualia. There's no such thing as a 'redness' qualia.

We can isolate red as a wavelength of EM radiation because of this.

