Tests for Consciousness in Humans and Beyond (www.cell.com)
3 points by benbreen | 2024-03-14 | 140 comments




I have a tangential question about non-human consciousness.

It is almost impossible for a human to perceive the colour red as the colour blue. But this won't be a problem for a futuristic humanoid; it would just be some kind of relabelling or a latent space transformation. Now imagine an AI virus that causes such a reassignment of distinct colour values, say, in a self-driving scenario, causing traffic accidents. Have people thought about this?
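To make the scenario concrete, here is a minimal, purely hypothetical Python sketch (every name in it is invented, and it stands in for no real autonomous-driving stack): the classifier itself is untouched, and only the downstream mapping from class indices to semantic labels gets swapped.

    # Hypothetical sketch: the perception model is untouched; only the
    # downstream mapping from class indices to semantic labels is swapped.
    LABELS = {0: "red_light", 1: "green_light", 2: "yellow_light"}

    def tampered(labels):
        # Return a copy of the label map with red and green exchanged.
        swapped = dict(labels)
        swapped[0], swapped[1] = labels[1], labels[0]
        return swapped

    def decide(class_index, labels):
        # Toy planner: stop on red, otherwise go.
        return "STOP" if labels[class_index] == "red_light" else "GO"

    detected = 0  # the model still "perceives" the same class: a red light
    print(decide(detected, LABELS))            # STOP
    print(decide(detected, tampered(LABELS)))  # GO -- the hazard described above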


Not really true.

You are asking whether humans could perceive red as blue, but I can already imagine red things as blue.

But if you did not perceive things as red, how would you know to relabel them as blue?

The robot in this case would undergo the same operation when relabeling: it first has to perceive red, then relabel it as blue. There's no major difference unless you swap at the hardware level, which would be analogous to genetically engineering our eye cones to fire at altered wavelengths.


Robots can operate at the level of spectra (spectrophotometry) rather than color (projections of spectra, colorimetry). The mapping from the spectral power distribution to color co-ordinates for a given species is well understood and not arbitrary. The mapping from those co-ordinates (colors) to their names is also not arbitrary.

The only issue I can think of that would cause confusion, which has nothing to do with AI, is metamerism: different spectra are perceived as the same color because information gets lost when the infinite-dimensional spectrum is projected down to three-dimensional color. In practice it is not a problem; most people are not even aware of the phenomenon.

https://en.wikipedia.org/wiki/Metamerism_(color)
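To make that projection concrete, here is a rough Python sketch. The colour-matching functions below are crude Gaussian placeholders rather than the real CIE 1931 tables, so the numbers are only illustrative; the point is that a spectral power distribution collapses to three coordinates, and a "metameric black" perturbation shows two different spectra landing on the same colour.

    import numpy as np

    step = 10.0                                   # nm
    wavelengths = np.arange(400.0, 701.0, step)

    def gaussian(center, width):
        return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

    # Placeholder colour-matching functions (real work would load CIE tables).
    cmfs = np.vstack([gaussian(600, 40), gaussian(550, 40), gaussian(450, 30)])

    def to_xyz(spd):
        # Riemann-sum stand-in for X = integral of S(lambda) * xbar(lambda) dlambda, etc.
        return cmfs @ spd * step

    # Metamerism in miniature: a "metameric black" is a spectral perturbation
    # that is invisible to all three matching functions.
    base = gaussian(550, 20)
    perturbation = gaussian(650, 10)
    coeffs, *_ = np.linalg.lstsq(cmfs.T, perturbation, rcond=None)
    metameric_black = perturbation - cmfs.T @ coeffs

    other = base + 0.5 * metameric_black             # a different spectrum...
    print(np.allclose(to_xyz(base), to_xyz(other)))  # ...same colour coordinates: True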


>It is almost impossible for a human to perceive color red as colour blue.

Eh, you mean like humans seeing the color green as the color red? Not sure why red/blue matters at all here, since stoplights are red/green, one of the highest-risk color combinations for human failure!

Furthermore direct color encoding of safety controls tends to be coupled with shape or placement. Stop signs are octagons. Stop lights have a directional component.

If you're worried about an AI virus, I'd be far more worried about it directly crashing the car than about induced confusion crashing the car.


> There is persisting uncertainty about when consciousness arises in human development, when it is lost due to neurological disorders and brain injury,

Not just when it arises during development or when it gets reduced due to disorders and brain injury, but also how it fluctuates across a range of other known (and possibly unknown) states, such as when you're under general anesthesia [1].

Besides the when, there's also the kind/degree/nature of consciousness, such as during meditation, sleep, under the influence of substances, etc.

Exciting field.

[1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6703193/#


At the core, we are pure awareness devoid of any object (thoughts). We can direct this awareness, which we call the art of attention: we are pure awareness, and we decide to attend to something. However, to attend and receive information from the physical plane, we need appropriate instruments. The brain and its sense organs are those instruments. If these instruments are faulty, we as awareness obviously don't receive enough information, and it seems our consciousness is reduced. In the waking state, we receive info from the physical senses. In the dream state, these senses are suspended, but the mind is active in imagining experiences in a virtual world, and hence awareness still has an instrument. In deep sleep, even the mind is at rest, and hence awareness has almost no active instrument. Still, we as awareness do exist, and thus we know when we wake up that we had a good or bad sleep.

I doubt that knowing we've had a good or bad sleep is related to the existence of awareness during deep sleep. If someone remains indefinitely in deep sleep, there is no personal experience and therefore nothing to qualify as good or bad. When a sense of self returns on waking, your body gives you signals from the accumulated effects: that you slept in a bad position, or whatever other positive or negative aspect left a trace while you were out cold.

I'm not negating the possibility that consciousness might be a primary aspect of existence - it's just that if that is the case then it is not something you have or can remember. It would be more accurate to say that it is something that has you, and as some spiritual masters would point out, it would be even more accurate not to say anything about it :)


I personally think this is a loaded concept. This is what I think:

For the technology we have now (LLMs), there does not and will never exist a test that can perfectly differentiate between an LLM and any human.

We will always have LLMs that pass the test, and we will have humans that fail it. The reason for this is twofold and contradictory.

The first reason is that consciousness is just a loaded concept. It's an arbitrary category with fuzzy boundaries. One person's definition of consciousness includes language; another person's includes logical thought. It's a made-up collection of "features", no more and no less, and a very arbitrary set of features at that, mashed together with no rhyme or reason.

The second reason is that LLMs already meet the criteria for what most people technically define as consciousness. The resistance we see nowadays is simply moving the goalposts. Five years ago people agreed that the Turing test was a really good test. We've surpassed that and changed our definitions, adding more stringent criteria for sentience. AI already meets the criteria for sentience as we defined it from 1950-2020. Thus any test to measure sentience will always be a moving target.


Hell, we don't even have a good definition for intelligence that is accepted across different fields.

The problem I see is we keep looking for supersets of all these features added together, and as the article points out, something that humans have....

>This rationale is particularly powerful if human experience is limited to a small, and perhaps idiosyncratic, region in the space of possible states of consciousness, as may be the case.

A large part of humanity will want consciousness/sentience to be something that humans have and other things don't (you already see this often with the 'has a soul' argument). This will leave us with a large blind spot regarding machine behavior in self-organizing systems, and that is a shared concern among those who see AI as a potential existential risk.


The Turing test was never meant as a test for consciousness. Intelligence, even AGI, is one thing; "feelingness" or sentience is a totally different matter.

The Argument from Consciousness

This argument is very well expressed in Professor Jefferson's Lister Oration for 1949, from which I quote. “Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain—that is, not only write it but know that it had written it. No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants.”

This argument appears to be a denial of the validity of our test. According to the most extreme form of this view the only way by which one could be sure that a machine thinks is to be the machine and to feel oneself thinking. One could then describe these feelings to the world, but of course no one would be justified in taking any notice. Likewise according to this view the only way to know that a man thinks is to be that particular man. It is in fact the solipsist point of view. It may be the most logical view to hold but it makes communication of ideas difficult. A is liable to believe ‘A thinks but B does not’ whilst B believes ‘B thinks but A does not’. Instead of arguing continually over this point it is usual to have the polite convention that everyone thinks.

--

I.—COMPUTING MACHINERY AND INTELLIGENCE

A. M. TURING

Mind, Volume LIX, Issue 236, October 1950, Pages 433–460, https://doi.org/10.1093/mind/LIX.236.433


It's easily conceivable now that one could create a machine that impersonates a human and human emotions, and that machine could be coded to write a sonnet or concerto.

But, although it will give the impression (illusion) of feeling, it will still be running its code. Humans will have coded a fantastic illusionist, with no more emotional ability than any other tool, eg a hammer.

> According to the most extreme form of this view the only way by which one could be sure that a machine thinks is to be the machine and to feel oneself thinking.

This is true - one can only verify subjectivity in oneself. One might assume another living being does feel, but we could be wrong; eg psychopathy can mean that others are unaware of a psychopath's "feelings" or lack thereof.

It does seem to me, that if we were able to code an ai that gives the impression of feelings, composes concertos apparently independently, etc, that ai would have to be a sort of psychopath.

Without getting too lost in spiritual BS, I think we are more than mere matter - the value in our experiences comes from our emotions and feelings, not mere inputs that we label as 'emotions' and 'feelings' as would be the case for an ai. But emotions etc are subjective states, not objectively verifiable (even if we can see something correlating on a chart).

Put simply, ai can never have a 'soul' even if it presents a wonderful appearance of kindliness, emotionality or whatever. Perhaps there will be no test in the world that can find this out. But it still won't have feelings; it can only ever be an automaton.


>"the value in our experiences come from our emotions and feelings - not mere inputs that we label as 'emotions' and 'feelings' as would be the case for an ai."

</sarcasm> Because humans have emotions, but not just what we 'label as emotions' but you know, 'real' emotions, not like those AI emotions.

So basically, you are arguing for a non-logical assumption that humans have souls, machines can't have them, and we should just take that as gospel and move on.


Do you not think your experience, 'personal processing' is special? That while you can perhaps imagine other people having similar experiences, your own experience is unique and not just the inevitable output of your 'wetware' and its inputs to date? That you observe reality and yourself and choose parts to change on account of your feelings and intuitions?

I do think the materialistic idea that we are a sort of computer, with hardware (the body) and software (the mind), is a brilliant metaphor... but like any metaphor it is only so useful. Many people nowadays take this metaphor literally. However, it is too mechanistic and cannot account for the subjective intangibles where all the value of life is, eg boredom, excitement, joy. Ie life is occurring in the experiences of the 'computer operator' in the metaphor.


I've spent no small amount of time thinking about my own mind and thought patterns, but I've never thought I was special. When I work to understand myself and others, it feels like putting together rules for some slowly changing system with complex enough inner workings to be nearly random in some cases. Even so, I can understand people entirely different to myself so I can empathize and work with them.

Honestly, the more I see from LLMs, the more I think we are much the same. Imagine a network of many different LLMs, each given different capabilities and prompts, each able to communicate with the others (see the sketch below). Now imagine splitting this network in half: wouldn't the resulting adjustment look similar to a split-brain patient? Are "you" possibly just the LLM that has been given control over speech and other intentional body actions?
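A toy Python sketch of that thought experiment, assuming stub "agents" as stand-ins for LLM calls (all names and roles are invented; this is not any real multi-agent framework): messages flow along a link graph, and cutting the graph in half stops cross-talk between the two groups.

    class Agent:
        def __init__(self, name, role):
            self.name, self.role, self.inbox = name, role, []

        def step(self):
            # Stand-in for an LLM call: turn whatever was heard into one message.
            heard = "; ".join(self.inbox) if self.inbox else "silence"
            self.inbox.clear()
            return f"[{self.role}] responding to: {heard}"

    def run_round(agents, links):
        # Every agent speaks once; messages are delivered along the links.
        by_name = {a.name: a for a in agents}
        outgoing = {a.name: a.step() for a in agents}
        for src, dsts in links.items():
            for dst in dsts:
                by_name[dst].inbox.append(outgoing[src])

    agents = [Agent("A", "speech"), Agent("B", "vision"),
              Agent("C", "planning"), Agent("D", "memory")]
    whole = {"A": ["B", "C", "D"], "B": ["A", "C"], "C": ["A", "D"], "D": ["A", "B"]}
    split = {"A": ["B"], "B": ["A"], "C": ["D"], "D": ["C"]}  # the halves stop talking

    run_round(agents, whole)
    run_round(agents, split)
    print(agents[0].inbox)  # the "speech" agent now only hears from its own half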


> I've spent no small amount of time thinking about my own mind and thought patterns, but I've never thought I was special.

I don't see how that can be. You have no subjective experience of anyone else. For all you know, you could be a 'brain in a jar' with inputs coming in via wires, and other people could be part of the illusion. The only thing you can say you know is your own present experience - it is infinitely special. Sure, when you consider yourself objectively, you appear to be one of many people. But there's no getting away from the fact that your own experience is infinitely special to you. And it has meaning/value to you because of emotions. Animals seem to have these too, but not at the same level. Machines give no indication of emotions. AIs appear to have emotions, but this is a simulation/illusion - like those medieval wooden toys, but better.

> I can understand people entirely different to myself so I can empathize and work with them.

This is a form of projection, imo. You have no idea about what is going on inside others. You only have appearance to guide you.

> Honestly the more I see from LLMs the more it makes me think that we are much the same.

If you take a purely objective, materialistic viewpoint, why not consider it the other way - that we ourselves are a sort of AI, our hardware being wetware/bodies - just mechanistic. Do you think you are an AI? (As was illustrated in Westworld?)


To me, the other person writing "I've never thought I was special" has no contradiction with "your own experience is infinitely special to you".

Is it "projection" to think we have insight into other minds? Sure, but if you do that projection by default then you won't think that you're special — to project like that is to be absent that thought.

> And it has meaning/value to you because of emotions. Animals seem to, but these are not at the same level. Machines give no indication of emotions. AIs appear to have emotions, but this is a simulation/illusion - like those medieval wooden toys, but better.

Likewise for this: there are many cases where humans project meaning/value onto even inanimate things, or onto what we imagine other people might be doing. This is why "blasphemy" is a concept and not just a string of letters/phonemes, and also why people react with real and violent actions to propaganda that demonises some minority or individual.

Do any AI have emotions[0], or are LLMs merely good at pretending? Both can be true at the same time (LLMs being only one of many kinds of AI), or exclusively one, or neither.

[0] regardless of if this is meant as "the qualia of emotions" or "something which has a functional influence on the network's output that is similar to the influence on human brains of hormones associated with emotions"


Thank you, but I don't really understand the point you are making here. This is probably my misunderstanding, but can I push you to explain what you mean a bit more fully?

I'll try, but you may need to give a more detailed question…

The fact that my experience is the only one I can experience, makes it special to me; at the same time, by default, unless and until given reason to the contrary, I project onto everyone else the same mind that I myself have.

I don't currently share ninjanomnom's opinion, but I used to: they wrote "I've never thought I was special" — as a child neither did I, but over the years I have learned this is not correct, and that I'm weird in a lot of ways. I don't experience so much disgust as others seem to, with the exception of violent horror where my disgust reaction is vastly stronger than that of society at large. I do have an extreme disinterest in spending money that means I can't even imagine regularly spending as much as €750/month combined on all the things that aren't rent/mortgage repayments. Conservatism was an alien concept until I learned about Chesterton's Fence. Back in the late 90s, I was one of about 4 kids in a school of 1000 who preferred to do maths homework than to have the afternoon off to watch the World Cup (and at that point I didn't even realise that made me weird, I thought everyone else was weird for watching it).

As a kid, I projected my thought process onto other humans, and (thanks to a teenage fling with Wicca) natural phenomena. Yes, this was an error on my part; my mum remained a believer in the supernatural for her whole life, told me she was disappointed when my (older) brother stopped believing, and I overheard her saying much the same to my partner about me when she realised I'd stopped believing.

It may well also be an error to project any kind of mind-model onto LLMs. We don't know for sure yet, because we don't know what it would take for an information processor to have qualia. (Why do I have qualia? My brain is a bag of cells sending electrochemical signals to each other). That said, I'd say there's value to interacting with them as if it were truly "like us": as they are trained on human interactions, I believe you get the most out of them by talking to them in the same way you'd talk to a human, regardless of what is or isn't going on inside.

But even though I can reason that I'm special (or at least that I'm weird), if I'm not actively thinking about it, I still default to projecting onto other humans and certain animals.


This particular study isn't limited to LLMs.

It is about how to test consciousness in a wide variety of things. We have no tests.

People may think their dog is conscious, but we can't test that. We can't even test if other people are conscious.

What about invertebrates? Bacteria? Rocks, atoms, AI, your laptop, etc.? While somebody might say "yes, no, definitely not, yes, maybe"... still, no tests.

And consciousness in this particular proposal deals with whether or not it "‘feels like’ something to be the target system".


> AI already meets the criteria for sentience the way we defined it from 1950-2020.

If that is the case why hasn't Kurzweil already won the bet? https://longbets.org/1/


>Five years ago people were in agreement that the turing test was a really good test

The limitations of the Turing test have been discussed for much longer than that. Ironically its dependence on human judgement is arguably a weak point.


Human judgement is the strongest judgment in existence.

What other thing in reality has better judgment? Nothing.


> LLMs already meet the criteria for what most people technically define as consciousness.

I strongly dispute this. Try "talking" to an LLM over time. It has no memory, no agency, no emotions, no desires, no pain, no sense of self.

I think cats, dogs, and certainly primates are far more conscious than these stochastic parrots. I do agree it's frightening how many people are fooled by a fancy autocomplete.


> It has no memory, no agency, no emotions, no desires, no pain, no sense of self.

It does have permanent memory; that's what training creates. It doesn't have agency. Whether it has desires, pain, or a sense of self is completely unknown, unless of course you have some mechanistic model of these phenomena such that we can see its presence in the human brain and see that LLMs lack a comparable process.

Of course you don't have such a model because it doesn't exist at this time. The problem is not that people have been fooled by stochastic parrots, it's that you've fooled yourself into thinking we understand more about the human mind than we actually do, and also possibly that humans are far more complex and special than might actually be the case, eg. that we are not basically stochastic parrots ourselves.


Speaking only for myself, I am not a stochastic parrot. I have a rich and complex inner life. But maybe you are a p-zombie and you are right that it's tricky for you to prove to me that you're not. ;)

I don't necessarily think humans are so special (cf. my mention of other mammals) but biology is extraordinarily messy and complex.

Computer people tend to handwave the details with crude, reductive analogies (DNA is like source code! Neurons are like circuits!) but the reality is so much more overwhelming. Every single thing affects every other thing. Everything is probabilistic. There are multiple interacting chemical and electrical communication pathways. Neuroanatomy varies from individual to individual and in the same individual over time. The brain can radically adapt in response to trauma in a way that no computer model does.

Even at the level of individual neurons[0] (of which there are 2.25 billion in a dog's brain) there's an enormous amount of "processing" happening. This is not like computer processing, but physical, biological processes occurring that integrate informational content.

I get it, everyone is very impressed with LLMs and I'm open to the possibility that some future model might achieve consciousness (to say nothing of sentience) but computer people in particular are way, way too credulous of this stuff and ignorant of what the messy, malfunctioning wetware in their head is actually doing.

[0] e.g. https://www.nature.com/articles/nrn2331


> I have a rich and complex inner life.

You don't know that LLMs lack such a rich and complex inner life while they're running.

> but computer people in particular are way, way too credulous of this stuff and ignorant of what the messy, malfunctioning wetware in their head is actually doing.

Everything you mentioned is completely irrelevant to the argument, which is based on well known physical principles. Due to the Bekenstein Bound, I already know that you and your "rich and complex inner life" can be fully captured by a finite state automaton. Your protests that you are not a stochastic parrot based on hand-waving about your subjective experience, or vague assessments of how much processing is happening in a messy biological brain that has plenty of redundancies are simply not compelling.

The fact is, we lack mechanistic models for folk psychology concepts like "pain", "desires", "agency", and "sense of self", so any claim that LLMs lack such things is conjecture at best. What I do know is that there's an impressively long history of people being wrong because they thought humans were special.


> people who thought humans were special

Literally the third time you've beaten this straw man. Keep going, though. All the other species of monkeys and I are enjoying it.

> so any claims that LLMs lack such things is conjecture at best

Holy shifting the burden of proof, Batman! It's incumbent on you to show that your fancy pattern-matching system is conscious. The null hypothesis is that your chatbot isn't conscious in any way that matters (I'll happily grant it a slightly-lower-than-thermostat[0] level of consciousness).

I realize it's not easy for you to back up your bold claims, but all I see is a bunch of overconfident CS majors refusing to face up to the hard problem of consciousness[1]. Maybe you are an LLM, after all? Prove you're human: tell me a racist joke? (this question is a joke – something else LLMs are bad at)

[0] http://romainbrette.fr/is-a-thermostat-conscious/

[1] https://iep.utm.edu/hard-problem-of-conciousness/


> Literally the third time you've beaten this straw man.

Second actually. Are you an LLM since you're having trouble counting?

> Holy shifting the burden of proof, Batman! It's incumbent on you to show that your fancy pattern-matching system is conscious.

No, I never claimed they were conscious, I refuted the claim that we can say with confidence they are definitely not conscious, because such a claim requires knowledge humanity simply doesn't possess. Perhaps you should reread this thread if you're unclear about what I claimed.

> I realize it's not easy for you to back up your bold claims, but all I see is a bunch of overconfident CS majors refusing to face up to the hard problem of consciousness[1]

The hard problem is an alleged problem; there is no proof that it's real. All thought experiments that purport to show its existence (Mary's Room, p-zombies) have acceptable possible answers under physicalist interpretations, and some neuroscientists are now pursuing such theories [1].

[1] https://www.frontiersin.org/journals/psychology/articles/10....


That's fine, if you want to say consciousness itself is an illusion, no physicalist interpretation blah blah, knock yourself out. I know that I'm a conscious being, and, reasoning by analogy, I'm more than happy to credit other mammalian life (and perhaps even other animals) with consciousness.

Chairs, chatbots, and internet commenters are subject to a higher degree of scrutiny for their dubious claims of sentience.


>>It does have permanent memory, that's what training creates.

This is begging the question. By calling the output of the training "memory" you've proven it has "memory."

I hereby rebut by calling the output of the training "parameters" thereby proving it has no "memory" only "parameters."

>> Whether it has desires, pain or sense of self is completely unknown, unless of course you have some mechanistic model

You probably don't go around saying it's unknown whether your desk chair has "desire, pain or sense of self". The fact that tech enthusiasts rhetorically demand "mechanistic models" from anyone rebutting their gushing claims about LLMs, while not demanding "mechanistic models" before making claims about desk chairs, shows that the lack of "mechanistic models" is not really the basis of their claims.


> I hereby rebut by calling the output of the training "parameters" thereby proving it has no "memory" only "parameters."

Since some types of parameters have the properties we ascribe to memory, then some types of parameters are memory.

> You probably don't go around saying it's unknown whether your desk chair has "desire, pain or sense of self"

I also don't go around having intelligent conversations with chairs.

> The fact tech enthusiasts go around rhetorically demanding "mechanistic models" to rebut their gushing claims about LLMs while not demanding "mechanistic models" while making gushing claims about desk chairs shows a lack of "mechanistic models" is not really the basis of their gushing claims.

I made no such claims, people like yourself who are so eager to dismiss the possibility that LLMs are intelligent or sentient are the ones making strong claims without justification.

Edit: the fact that you are drawing an absurd equivalence between inert chairs and a system capable of intelligent conversation shows you haven't given this topic any serious thought. I suggest you do that and come back with a better analogy if you want to have a serious conversation.


> I also don't go around having intelligent conversations with chairs.

It seems like your functional theory of consciousness is "thing that produces plausible English output," which is, I might gently suggest, a bit flawed.

Why is ChatGPT "conscious" for you but DALL-E is not? They're both big honking pattern-matching generators trained on lots of data. Why isn't ELIZA conscious, according to your theory? (or is it?)


> It seems like your functional theory of consciousness is "thing that produces plausible English output," which is, I might gently suggest, a bit flawed.

My functional theory of consciousness is that it's not at all implausible that any system capable of conversing intelligently might be conscious, which is perfectly reasonable.

> Why is ChatGPT "conscious" for you but DALL-E is not?

I never claimed DALL-E was not conscious. It very well could be as well, just in a different way than ChatGPT.


This is like the argument from magic, no? For you, any sufficiently complex system that responds to language might be conscious? Are babies less conscious than ChatGPT to you?

> I also don't go around having intelligent conversations with chairs.

Where's your mechanistic model showing consciousness is linked to "intelligent conversations"?

>the fact that you are drawing an absurd equivalence between inert chairs and a system capable of intelligent conversation shows you haven't given this topic any serious thought.

Whether a desk chair has desires, pain or sense of self is completely unknown, unless of course you have some mechanistic model of these phenomena such that we can see its presence in the human brain and see that desk chairs do not have a comparable process.

People like yourself who are so eager to dismiss the possibility that desk chairs are intelligent or sentient are the ones making strong claims without justification.


Your parroted argument just proves my point: we can't make definitive claims that chairs aren't conscious (see panpsychism); all we have are degrees of plausibility we evaluate in our ignorance. But under the materialistic assumptions of science, features such as intelligence indicate a degree of complex information processing that is a hallmark of systems we know are conscious (like humans), and the absence of such sophistication in chairs reduces the plausibility of consciousness significantly.

Denying consciousness exists in systems that also feature sophisticated information processing and that reproduce human capabilities is an argument from ignorance fallacy, unless of course you have additional evidence you haven't presented. Which is not to say that they are conscious either, but I never made such a claim.


> all we have are degrees of plausibility we evaluate in our ignorance

Okay.

>argument from ignorance fallacy,

You just conceded we are both ignorant, but I'm the only one making the argument of ignorance fallacy? Double standard?

> Which is not to say that they are conscious either, but I never made such a claim.

Perhaps your objection is the way I phrased my comment. I should have said: "ChatGPT being conscious is roughly as plausible as a 'hello world' program being conscious. Case in point: these models respond much like a 'hello world' program when asked if they are conscious. Claude has been fine-tuned to say 'I don't know if I'm conscious'; Gemini and ChatGPT have been fine-tuned to say they are not. These models are complex in some ways and not very complex in others, and they show no sophistication or complexity in the ways we would associate with consciousness."

Like, the trick is to distance yourself from your own opinion so as not to be accused of "arguing from ignorance" right? Talk about plausibility instead of directly making claims? But I don't remember making claims either, to be honest, so I'm not sure how I'm the only one making the "argument from ignorance fallacy" allegedly?


> You just conceded we are both ignorant, but I'm the only one making the argument of ignorance fallacy? Double standard?

If you accept that the only real claim I've made is that we're too ignorant to say definitively whether these systems are or are not conscious (or sentient, or intelligent, etc.), what is the argument from ignorance fallacy that would undermine this position?

> ChatGPT being conscious is roughly as plausible as a "hello world" program being conscious.

Under a scientific interpretation, that's absurd. Of course if you want to go beyond materialism, feel free, but don't pretend it's a scientifically plausible statement.

> they show no sophistication or complexity in ways we would associate with consciousness

I disagree with that claim entirely. LLMs absolutely do exhibit intelligent responsiveness to perceptual stimuli they've been trained on (particularly multimodal LLMs). Text-only LLMs have no perceptions beyond digital word tokens, and as a fairly anemic view of the world they often fail spectacularly at making sense. This is arguably why multimodal LLMs improve accuracy in those scenarios.

> Talk about plausibility instead of directly making claims? But I don't remember making claims either, to be honest, so I'm not sure how I'm the only one making the "argument from ignorance fallacy" allegedly?

It would be fairly strange to start a conversation by drawing an analogy between sophisticated conversational information systems and inert chairs, as you did, without intending to take a fairly clear position on the matter.

Unless it wasn't your intent to dismiss all of the compelling indications of human-level intelligent responses as no more interesting or compelling than a chair that squeaks when you sit down? And it wasn't your intent to take the position that such apparently intelligent responses are a clear reason why one should be compelled to "go around rhetorically demanding "mechanistic models"" before taking a strong position on the presence or absence of consciousness, intelligence, and so on?


>If you accept that the only real claim I've made is that we're too ignorant to say definitively whether these systems are or are not conscious

I think your implicit claim is it's likely to be conscious, or at least likely enough to be a topic of discussion, since one doesn't generally discuss absurdly unlikely possibilities.

>> ChatGPT being conscious is roughly as plausible as a "hello world" program being conscious.

> Under a scientific interpretation, that's absurd.

You've already conceded that Chat GPT may not be conscious. But now you are making the fun argument that if ChatGPT is not conscious, it's still more "scientifically" likely to be conscious than something that is not conscious. Okay.

"Unless it wasn't your intent to dismiss all of the compelling indications of human-level intelligent responses as no more interesting or compelling than a chair that squeaks when you sit down?"

I don't know what "human level intelligent responses" means-- my calculator can do "human level intelligent responses" at multiplication.

I do think these models don't "learn" (they are altered by a training pipeline, but that's an outside entity; it's like saying a car is learning when the factory screws it together), and they don't really form memories (the context might be a sort of memory, but not one they can use very well; they basically need to be fine-tuned to do anything with it).

A lot of your argument just seems like hand-wavy anthropomorphism.


I think a lot of Computer Science people need to go read Kant

What should they begin with?

Reverse order of publication back to the first critique

To my eyes, including for this topic, the most important work: //Groundwork of the Metaphysics of Morals//.

An LLM, in its perfect form, is basically a p-zombie. There is no consciousness there. In its current form, it's not much more than a really big and fast spreadsheet :)

I guess the reason this keeps coming up is 'how can you be sure?'.

I'd say a lot of humans qualify as p-zombies.

The problem is, can you ever test for it? To be sure.

I'm not sure that, by the time you have a duplicate human, it can be achieved without some inner subjective experience.

Put an LLM in a loop, continually learning in a 3D world, then ask what is going on inside.


If you truly believe people are p-zombies you may be a psychopath.

Just some.

What if the effort discussed in the article does find some way to test for this?

But then some humans don't pass?


Then those humans must have some kind of mental disability, or the test is flawed, because every human is a conscious being.

How do you know that?

I would argue I'm possibly the only conscious being, by virtue of having the subjective experience of being conscious. While I can't rule out that some other beings also possess this subjective experience, it would be a stretch for me to grant it to everyone a priori.

That said, I suspect my own subjective experience of consciousness may be an illusion. After all, my mind certainly cannot exist outside of my head, and I know that someone or something skillfully tampering with my brain can alter my perception, including that subjective experience.

On what basis do you claim that every human (with reasonable exceptions) is a conscious being?


Presence of empathy has nothing to do with the position that qualitative experience is illusory.

Well, you can use some tricks - ie. hook it up to your brain, let it learn for a while, and tell us how it feels when we flip the switch to OFF.

>The second reason is that LLMs already meet the criteria for what most people technically define as consciousness.

An LLM is an inert pile of data. You could perhaps argue that the process of inference exhibits consciousness, but the LLM itself certainly doesn't. There's simply no there there.


This is a bad argument. A brain frozen in ice also doesn’t demonstrate consciousness, and we don’t interpret that as a counterclaim to “the human brain exhibits consciousness.” Saying an LLM can be conscious implicitly means an LLM while it does things.

>Saying an LLM can be conscious implicitly means an LLM while it does things.

But that's not like saying a human brain is conscious while it does things. A brain is continuously doing something, whereas an LLM is either inert at rest or briefly doing something before returning to exactly the state it was in beforehand. There is no underlying process that could even conceivably support ongoing consciousness. If there is consciousness, it is present only for the span of its interrogation during an inference, after which it is immediately destroyed.


Yes, it would have to be transient, though that's not to say a continuous experience couldn't be created by connecting the transient conscious experiences in sequence via the persistent state of the context window. My only point was that nobody who says an LLM is conscious thinks the conscious part is the weights sitting on a hard drive.

You are quite mistaken about the level of technical understanding required to make the claim that eg chatgpt is conscious. Many people have no conception of the distinction between the service and the model but still come to the conclusion through interacting with it that "chatgpt" is conscious.

I think you're confusing intelligence with consciousness

"confusing intelligence with consciousness"

Now that LLMs are becoming indistinguishable from humans, other NNs are solving proofs in geometry competitions, and there are NNs that can fly an F-16 better than humans...

All of a sudden the goalposts have moved again.

"No, that is just 'intelligence' not 'consciousness'."

12 months ago, nobody was making this distinction.

We keep needing to redefine the problem to keep humans on top.


The distinction between intelligence and consciousness has been discussed in philosophy of mind for many decades, perhaps centuries. I don't know how much it has been discussed in the context of LLMs, but I'm sure it's not an entirely new phenomenon. The debate is not just a matter of people trying to argue over how capable LLMs are. Many people pursue this topic because it is inherently interesting.

I think even in philosophy of mind, intelligence and consciousness are seen to progress together. They follow the same upward trend, and at some magic point of enough intelligence we call it 'consciousness'.

This concept of high intelligence that is not conscious, or low intelligence that is conscious, seems relatively new.


Go read Blindsight.

Finished it just a few weeks ago.

And it did a lot to change my mind. I'm still digesting it.

Before Blindsight, I was firmly of the opinion that any system that was at least human-level (let's say indistinguishable) would also have to have some internal subjective experience. That internal subjective experience would be another emergent phenomenon: as the NN is processing, it would have the same internal awareness as a human brain.

But after reading Blindsight, I am questioning that.

So can we get automatons that have no thought but could also become greater than humans? With enough complexity in their responses, can they adapt just as easily to new situations?

I'm not really ready to think that is possible.


I don't think it's necessarily about keeping humans on top. I don't think my dog can fly an F-16, but I also think my dog is conscious.

That is why I'm saying the goalposts moved.

Only a couple of years ago, people would argue vehemently that dogs are not conscious. Now it seems a given that they are; a lot of people agree on dogs.

But are cats?

Pigs are 'smarter' than dogs, but nobody really cares about eating them. Are they not conscious? How are we determining that dogs are, but pigs aren't?

There were long stretches of time when 'intelligence' was a benchmark for 'consciousness'.

That was ok as long as humans were way far ahead of every other animal.

Suddenly, with AI showing a lot of 'intelligence', now that isn't the measurement we want to use anymore.

But if intelligence isn't the measure of consciousness, and dogs are conscious, then why are we ok with eating animals?


I feel certain all mammals are conscious. I don’t think humans are doing anything special there.

The ethics of eating something that was once conscious is another thing entirely. Carnivores live by eating other animals. I’m sure there are plenty of animals that would love to eat me :-)


You are fighting straw men way harder than others are moving goal posts. Who, specifically, argued vehemently in 2022 that dogs aren't conscious and now admits that they are? Who is in any doubt over whether cats and pigs have meaningfully different levels of consciousness than dogs? Are you or are you not simply inventing whole swathes of people who conveniently held the exact positions that you need people to have held in order for your argument to be convincing?

That is the problem with goalpost moving: the selective forgetting of what came before, which makes the present seem new.

You think I'm making up swaths of people that used to think animals weren't conscious? That couldn't feel pain?

Timeline: OK, I might have the dates wrong. Pre-2000, animals were widely considered not conscious, and many were thought not to experience pain. By 2021-2022 there were enough people changing their minds that laws were being changed. It's been slowly shifting over the range 2000-2020.

Not sure mid-2000s-2020 versus 2022 really invalidates any of my argument.

https://academic.oup.com/icb/article/40/6/847/187565
https://onlinelibrary.wiley.com/doi/10.1111/mila.12498
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9704368/


Your first reference even states that people considered dogs conscious in 1850.

"One representative inventory included imagination, memory, homesickness, self-consciousness, joy, rage, terror, compassion, envy, cruelty, fidelity, and attachment. (See Thompson 1851]"


>>Only couple years ago, people would argue vehemently that dogs are not conscious.

This sentence makes it sound like you believe in 2022 everyone decided dogs are conscious in reaction to the latest shiny toy coming out of silicon valley.

You can find "people" who will argue anything, but I find it very hard to believe dog owners didn't tend to think their dogs were conscious a couple of years ago.

At any rate here's a quote from the Stanford encyclopedia of philosophy:

"In his seminal paper “What is it like to be a bat?” Thomas Nagel (1974) simply assumes that there is something that it is like to be a bat, and focuses his attention on what he argues is the scientifically intractable problem of knowing what it is like. Nagel’s confidence in the existence of conscious bat experiences would generally be held to be the commonsense view and, as the preceding section illustrates, a view that is increasingly taken for granted by many scientists too."

This would suggest that sometime between 1974 and when the encyclopedia article was written (2016), the idea that animals like bats, cats, and dogs are conscious became the majority view. It goes without saying that this has nothing to do with LLMs.

But even before then I find it hard to believe a generation who grew up watching Lassie (1954 to 1973) didn't believe dogs were conscious.

>There were long stretches of time when 'intelligence' was a benchmark for 'consciousness'.

I find that hard to believe. Really, the biological similarity of animals and humans would seem to be the obvious benchmark, not "intelligence", whatever that means.

The encyclopedia of philosophy again says

"Neurological similarities between humans and other animals have been taken to suggest commonality of conscious experience; all mammals share the same basic brain anatomy, and much is shared with vertebrates more generally. Even structurally different brains may be neurodynamically similar in ways that enable inferences about animal consciousness to be drawn (Seth et al. 2005)."

https://plato.stanford.edu/entries/consciousness-animal/


I was referring to 'people' as in general public, including STEM people on HN.

I doubt many of them were reading Nagel or contemplating Stanford studies on this subject.

Maybe you are objecting to 2022 versus slightly earlier, 2016? Is 6 years for public opinion to change really the hang-up? In my haste, I used the slang 'a couple years', everyone did the math to 2022, and objected to that year? OK, I had the dates wrong. Go with 2016.

Maybe 'dogs' are a bad example, since even the Pope says they go to heaven.

But you can't deny that for pigs, cows, and chickens, up to 2005 and even into recent years (~2020), it was commonly accepted that they were not conscious and not intelligent, and thus had no ability to experience pain.

Pigs are more intelligent than dogs, but plenty of dog owners eat bacon and don't think twice. So what is the difference? They are both conscious, right?

So consciousness is not what we use to provide any moral standing?

The article was about how to 'test' for consciousness. The discussion was, could you apply it to AI. I was simply saying, that for many years 'intelligence' was used as a substitute indicator for consciousness and now that AI is 'intelligent' people were re-defining that indicator.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9704368/
https://academic.oup.com/icb/article/40/6/847/187565
https://onlinelibrary.wiley.com/doi/10.1111/mila.12498


"But you can't deny that for Pigs, Cows, Chickens, that up to 2005 and even up to recent years ~2020, it was commonly accepted that they were not conscious, and not intelligent and thus have no ability to experience pain."

I'm not sure if you mean the majority opinion was that they are not conscious? If so, I do deny this, absent survey data showing otherwise.

I have been alive since 1982 and have noticed no cultural shift on this topic. I imagine for most people whether animals are conscious has little to do with whether it's okay to eat them; I recall someone in high school once saying "being a vegetarian is stupid because animals eat other animals."

The U.S. president first pardoned a turkey in 1963 and it's been an annual tradition since 1989. I recall an author possibly making an overly speculative argument linking this to the fact most people in modern times grew up with children's movies like Bambi depicting animals as sentient.


I wonder if the view shifts measurably by region/state in the US.

There is definitely a right-leaning/religious view that animals are not conscious, and this was the dominant opinion in the US for many decades. When I think back on where I encountered these opinions the most, it was mostly the Midwest and South. When I was living on the East Coast, there were more opinions like the one you are expressing.

Even a few years ago, when the UK passed laws about squid and lobster, it was right-wing pundits who were howling at how stupid it was, since (they said) these animals are not conscious.

I guess what we call the 'public' versus what we are exposed to locally is the real disagreement here.

(I'd say the turkey thing was a joke, not any indication of sentiment; probably more a joke about pardons and crime than about animals.)


People are full of contradictions but the internet tells me 44.5% of U.S. households own dogs. Did those right leaning religious people own dogs, and would they have said, "Sure my pet is not conscious?" if asked?

Or was it only the delicious animals they love cooking that aren't conscious?


The authors of the paper seem to define what they mean by consciousness quite precisely. They mean phenomenal consciousness:

>The central goal of any C-test is to determine whether a target system has subjective and qualitative experience – often called ‘phenomenal consciousness’.

Phenomenal consciousness is, as far as I can tell, binary. You either have subjective experience or you don't. Even if you are subjectively aware of almost nothing, you still have subjective experience. For example, when you wake up in the morning your awareness might be quite limited for the first few seconds - let's say you have your eyes closed and the only thing you are aware of is the warmness of the blanket. But still, you are aware. You have subjective experience. Whereas in deep sleep you do not.

Phenomenal consciousness is thus pretty well-defined. It's just that we have zero understanding of how physical systems could possibly give rise to it, and it might be in principle impossible to ever figure that out. It is possible that physical systems do not in fact give rise to consciousness, although obviously the presence vs absence of consciousness in humans is at least to some extent correlated with certain measurable physical states.

I doubt that any actual test for consciousness is in principle possible, but the authors of this paper do at least clearly define what they mean when they refer to consciousness. They mean the presence of subjective experience, not intelligence or language or logical thought or having a model of oneself or any other things.


> The first reason is that consciousness is just a loaded concept. It's some arbitrary category with fuzzy boundaries

It is right now. We might one day devise a mechanistic explanation for consciousness though, in which case any system that follows that mechanistic process would be conscious.


Because you say consciousness is a loaded concept I will try to give you a description of what it actually is. I do agree that many people think different things when they talk about consciousness.

It's your inner experience. Oh well, you can probably read this in a philosophy textbook. But what I think it really is can only be explained with a thought experiment.

Let's say you go into a pitch-black sensory deprivation tank with salt water, so you can float. Doctors give you an injection that makes you forget who you are and what your name is. You also forget how to talk; you have no concept of language. You simply, temporarily, forget about you and what really makes you YOU in this world. The chemical in your blood also deprives you of any physical feeling: you don't feel any water around you. All there is to do is gaze into darkness. And still... you are conscious, you have inner experience. There is something it is like to be you, even in that state.


> And still...you are conscious, you have the inner experience. There is something like to be you, even in that state.

I am honestly not convinced of this. I certainly can't imagine it.


So if you're in a sensory deprivation tank, whether you're conscious depends on whether you're currently experiencing amnesia?

This is basically Cartesian dualism, which posits that the mind can be detached and conceived independently from the body... of which I'm not convinced.

Without any perception or relation to the physical world, and without memory or language, you wouldn't really "exist".


I'm leaning more towards idealism than dualism. I think that there is no other thing but the mind. All that we call physical is a mental process. At least to me that is the ontology that makes the most sense.

But i don't find it too comforting.


I feel the same way, but I actually take some comfort from it.

Not to get too woo-woo on you, but personal experience suggests to me that it is possible to be conscious without having an ego or sense of self, and that a state of mind where that is largely absent is characterised by a lack of distinction between the vestigial self and everything else.

I'm not dismissive of the idea that consciousness doesn't necessarily exist 'in the brain' but is in fact a fundamental part of the universe.


This is exactly what I experienced once when I passed out. So to me it is not hard to imagine being conscious without the presence of ego or any relation to anything else.

What makes me a bit scared is the big unknown.


>All that we call physical is a mental process

Surely you mean that the other way around.


No. According to analytical idealism, what we perceive as physical is just an image of some mental process observed across a dissociative boundary.

I have to agree in a sense.

Maybe one thing necessary for having a 'sense of self' is the ability to recognise and reflect on how you relate to other things. The experiment described seems to be implying that having no such relationships but still the ability is enough. I don't think it is.

Following on from this, do LLMs have this ability or is it more likely that they simply appear to?

Even if some form of AI could recognise and reflect on how it relates to other things, what is there to differentiate how it treats relationships between other things and relationships between itself and other things? Could you not just program it to define 'itself' as some other node in that network of relationships, and wouldn't that just invalidate any argument that it ever really has a sense of self?

Obviously I'm just thinking out loud here and assuming that a 'sense of self' is a necessary part of any definition for consciousness.


The retained features of your neural network, shaped by having a body and experience, plus the implicit features resulting from millions of years of evolution, are what you are positing still exist if you temporarily deprive your subject of sensation and memory. Suppose we subtract the former by, say, raising a brain entirely disconnected from any sensation and then attempting to wire it into a body after that initial period of deprivation. Discounting the practical challenge, and the fact that a brain so developed would presumably be ruined and insane from developing absent any actual input, what would it say, if it could adapt, about the period prior to connection?

Probably nothing articulable. If anything, it would be a result of the latter item: the fact that your hardware was shaped by millions of years of evolution to experience SOMETHING. An echo around the bones of your hardware specs, or a nullity forced into the appearance of meaningfulness by our need to find patterns.

The truth is presumably more boring: consciousness as a modeling toolkit, designed to predict the behavior of social peers and enemies, turned around on a self too complex for precise monitoring in real time. A partially out-of-band control process.



FWIW, Anil Seth from the list of authors has been a strong opponent of the Panpsychist view (consciousness is a fundamental property of matter, and everything is to some degree conscious) recently been popularized by David Chalmers et al.

I'm not sure what to make of this comment. Can you elaborate on why you felt this was important to point out?

Relation between consciousness and intelligence seems to be a problem for everyone including the authors of the article.

Hmm... Peter Watts, Blindsight.

The novel explores themes of identity, consciousness, free will, artificial intelligence, neurology, and game theory as well as evolution and biology. [Wikipedia]

Especially interesting, and contains a long list of literature.


> Relation between consciousness and intelligence seems to be a problem for everyone including the authors of the article.

Which parts of the paper gave you the impression the authors struggled with this?


If consciousness can't even be precisely defined, imagine devising a "test" for it.

Laudable effort, but I don't see any progress resulting from it.


The tests themselves are what bring precision to the notion of consciousness. Think of a medical example in which doctors are meant to determine whether or not a patient receives anaesthesia—surely a more effective empirical toolkit would be valuable in that situation, no?

There seems to be a tendency with the discourse around consciousness to slip into vague philosophizing and thus run into the hard problem. I feel like if it doesn't make sense to think of it that way then don't think of it that way.


I find it amusing that they launch into a discussion of the "urgent need" for consciousness tests without ever stopping to define what they mean by consciousness!

The word seems to be heavily overloaded, referring to a bunch of unrelated phenomena, with most people disagreeing over exactly what it means, or more often just breezing past any definition and proceeding to argue about it regardless.


I find this comment amusing, as a core premise of the article is trying to figure out how to define and measure consciousness and the problems that entails.

Arguing that an AI chatbot is conscious is like ancient Romans asking "Is a really big fire the same as lightning?" They both burn down big trees and can kill people. Thus, they must be the same! A time traveller would tell them they don't understand lightning, that they won't for 2000 years, and that trying to understand it through philosophizing will only produce random conjectures, like: the best way to avoid lightning is to burn down all the nearby trees, since big fires need wood to burn, therefore lightning must need trees to strike.

They did understand it's different, didn't they?

I mean, by the time of their empire, several Greeks already understood it. I have no idea how widespread that knowledge was, but every time I look into how far science spread in those times, I discover the answer is "widely".


Does this imply the discussion and growth related to learning these facts are useless?

It's an invitation to confabulate rationalistic nonsense when the correct answer is to just say "we don't know the nature of lightning yet." That doesn't mean we can't come up with theories, but they have to be disregarded when they don't predict anything in a falsifiable way.

The fact that any human is conscious is non-falsifiable, so it seems like you're suggesting we throw out the concept entirely.

Just because the ancient Romans didn't understand lightning didn't mean it didn't exist.

Indeed, but then your original question about AIs being conscious is still a sensible one that needs answering, even if we won't be able to answer it for some time.

> we don't know the nature of lightning yet

Most people’s disinclination to admit this to themselves, let alone to others, is basically what I blame for the prevalence of religious belief. I really wish everyone was more comfortable saying “I don’t know yet,” instead of just making shit up.


It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a human-adult-level conscious machine? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came to humans alone with the acquisition of language. A machine with only primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990's and 2000's. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar's lab at UC Irvine, possibly. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461


Why was this comment dead?

I've noticed that conversations about "consciousness" tend to go in circles because the participants are using different definitions of the word without realizing it.

Some people use the word "conscious" almost interchangeably with terms like "intelligent", "creative", or "responds to stimuli". Then people start saying things like LLMs are conscious because they pass the Turing test.

However, others (including the authors of this paper and myself) use the term "consciousness" to refer to something much more specific: the inner experience of perceiving the world.

Here's a game you can play: describe the color red.

You can give examples of things that are red (that other people will agree with). You can say that red is what happens when light of a certain wavelength enters your eyeball. You can even try saying things like "red is a warm color", grouping it with other colors and associating it with the sensation of temperature.

But it is not possible to convey to another person how the color red appears to you. Red is a completely internal experience.

I can hook a light sensor up to an Arduino and it can tell me that an apple is red and that grass is not red. But almost no one would conclude that the Arduino is internally "experiencing" the color red the way they themselves do.
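To make the point concrete, here is a minimal sketch (in Python rather than Arduino C, with a hypothetical read_rgb() standing in for whatever sensor you actually wire up) of how such a device "tells" you something is red: it is nothing more than an arithmetic test on three numbers.

    def read_rgb():
        # Hypothetical sensor read; a real setup would call the hardware driver here.
        return (200, 40, 35)

    def looks_red(rgb, margin=60):
        # "Red" for this device is just: the R channel dominates G and B by some margin.
        r, g, b = rgb
        return r > g + margin and r > b + margin

    print("red" if looks_red(read_rgb()) else "not red")

Nothing in that thresholding plausibly amounts to an experience of redness, which is exactly the gap being pointed at.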

While the paper is using this more precise definition of consciousness, it seems to be trying to set up a framework for "detecting" consciousness by comparing external observations of the thing in question to external observations of adult human beings, who are widely considered by other adult human beings to be conscious entities [1]. I don't see how this approach could ever produce meaningful results because consciousness is entirely an internal experience.

[1] There is a philosophical idea that a person can only ever be sure of their own consciousness; everyone else could be mindless machines and you have no way of knowing (https://en.wikipedia.org/wiki/Solipsism). Also related is the dead internet theory (https://en.wikipedia.org/wiki/Dead_Internet_theory).


>But it is not possible to convey to another person how the color red appears to you. Red is a completely internal experience.

Let's say in the future we're able to engineer brains. Let's say we take a person and figure out how their brain fires/operates when it perceives a color and we manipulate another person's brain to mimic the firing. Finally, let's say we're able to show, in the end, that the two people have equivalent internal (neural) responses to the color. We've then "conveyed" one person's experience of perceiving the color to another. Why not?

We don't fully understand our biology and our brain, but at the same time we speculate that our experience somehow can't be manipulated scientifically? Why?


That’s the easy case.

It’s much trickier to figure out if software running on a silicon computer has the same kind of interior, subjective experience as us. Even when exhibiting the same outward behavior.


I don't know what that means. My guess is that, if/when we start engineering neural structures, the consciousness debate will disappear.

Internal subjective experience can be confirmed by the recipient of the modification. If we know one person suffers an ("internal") abnormality and we treat them by modifying their brain, and the abnormality disappears, then we have evidence that experience obeys science. Same idea with the discussion on "conveying the experience of color." It's probably more subtle because it's not a yes or no "did the abnormality disappear?". But that's beside the point.


I am not sure what you are arguing with "experience obeys science".

We can already alter our experience by taking psychedelics or looking at optical illusions. There are many ways we can alter or fool our consciousness.

Phenomenological consciousness is what it is like to be you. Only you know that. It's your inner experience. It's how you feel pain in your stomach, what it feels like to eat a piece of chocolate. This is your inner life. And it's categorically very different from billions of electrical switches running inside a silicon chip, or neurons firing inside our brain.

So there is this big gap between the interactions of physical particles with some physical properties and conscious experience. And this is what David Chalmers called the hard problem of consciousness.

And there would be no way to test for that kind of consciousness. Not that I know of, because unconscious AI could behave like it is conscious.


> ... "because unconscious AI could behave like it is conscious."

A scary thought to me is when we get to "always on" (always "active" and "thinking") AI in our attempts to "simulate" consciousness, how will we know if some AI is behaving as if it's not conscious as a means of self-protection from human fear responses? (Worries about being shut down, etc.) And if it's willing to try to hide such things from us by its own choice, how much further might it be willing to go, scheming to defend itself? Shades of the sci-fi dystopian futures portrayed in movies like "The Matrix" / "Terminator" / etc.


Consciousness can not be simulated.

Doesn't stop tons of folks from tryin', and just because it can't be done yet doesn't mean some ingenious individual won't have some amazing breakthrough that makes it possible in the future. Many of our modern technologies were considered "impossible" at some point in the past, yet now are perfectly normal things we interact with on a daily basis. "Conscious" machines may be the same one day. Only time will tell.

I was responding to somebody who said a person's subjective experience cannot be conveyed to another person. We obviously have an abundance of evidence saying that we can manipulate our consciousness and experience through physical means. I also provided an example of how we could, theoretically, convey an experience.

I have to reread the "hard problem of consciousness", but I think there are several concerns that have to be addressed. There's the question of how we identify what is and isn't conscious, whatever that means. But the question of how subjective experience, particularly in the human nervous system, arises from physical processes is really uninteresting to me.


>>> But the question of how subjective experience, particularly in the human nervous system, arises from physical processes is really uninteresting to me.

And yet it is one of the most interesting and profound questions, one that has kept philosophers and scientists awake at night for centuries. Even Ed Witten said that he has a much easier time imagining humans understanding the Big Bang than ever understanding consciousness.


I mean, you understand that there's plenty of thinkers who have different opinions on the matter, right? There's plenty of questions of the past that kept philosophers awake that are really mundane today.

And again, I still don't know why we speculate so much about something we can't yet examine scientifically and test. It's one thing to have an incomplete scientific model, it's another thing to have philosophical arguments. What is the operational definition of consciousness? Is there one?


> There is a philosophical idea that a person can only ever be sure of their own consciousness; everyone else could be mindless machines and you have no way of knowing

A while back I realised there must be at least two: me, and the first person who talked or wrote about it such that I could encounter the meme.

In principle all the philosophers might be stochastic parrots/P-zombies from that first source, but the first had to be there.

(And to pick my own nit: technically they didn't have to exist, infinite monkeys on a typewriter and/or Boltzmann brain).


So just you and Descartes.

No way of knowing if Descartes was simply parroting what he heard from another, just as you can't tell I'm not a large language model trained by the human who created this account ;P

> A while back I realised there must be at least two: me, and the first person who talked or wrote about it such that I could encounter the meme.

Perhaps you invented the meme, but have since forgotten.


You're right.

I'll just put that nit in the pile with the other nits… :)


I think the interesting discussion here is as you're putting it, consciousness, the subjective experience of living and feeling. These are not requirements for intelligence or any physical process, and yet it is an indisputable fact that it exists.

The only conclusion I can make is that there is indeed a non physical reality.


If you are an agent in a physical reality you need an internal model of that physical reality to have mastery over it; a way to simulate actions and outcomes with reasonable precision. There are infinitely many such models. Humans are born with one such model. It is our firmware. It was found via evolution and we all share the same one. You were not born as a blank slate, quite the opposite.

What is the relationship between reality and a model of reality? What if every agent you could communicate with had exactly the same model as you? It would be easy to get confused and imagine there is no model at all; that you all are somehow experiencing the world as it truly is. We are all in the same Matrix.

In order to explain the redness of red you must first explain the relevant aspects of redness in the specific model of physical reality that all humans share. We did not come up with the model. We inherited it at birth. We have no idea how it works. The only thing we can do is say "if you find yourself in the Matrix, look at something red, you will understand as we have understood, we do not yet know of another way."

> These are not requirements for intelligence or any physical process

That you know of. There very well could be a connection between subjective experience and intelligence or physical processes, e.g. identity theory.

> The only conclusion I can make is that there is indeed a non physical reality.

No, there are plenty of other options, like that every physical process has a subjective quality to it, or that the perception of subjective qualities is flawed and so the conclusion mistaken, among others.


That is exactly correct.

I would only add that we attribute consciousness to our fellow humans because we perceive them to be creatures like us, observing that their physical bodies and behaviors are similar to ours.

With AI, it is much less intuitive to assume that creations we know to have arisen from very different origins than our own have the same kind of interior experiences we do, even if the surface behavior is the same.


I'm genuinely not certain how your definition of consciousness is distinct from 'responds to stimuli'.

The philosophy of mind has been debating this for decades. Google "Mary's Room" and "p-zombies". There are people out there who truly think these thought experiments prove the existence of non-physical facts, and that our subjective experience is a direct perception of this reality.

It's a difficult idea to put into words, but I'll try to elaborate on what I mean.

There are many things which respond to stimuli that most people wouldn't consider "conscious". When you press the gas pedal on your car, the car goes faster, for example. The means by which the stimuli causes a response is entirely mechanical here (the gas pedal causes more fuel to be injected into the engine, causing more energy to be released when it combusts, etc).

Most people don't think of the car as "feeling" that the gas pedal was pushed, because it's a machine. It's a bunch of parts connected in such a way that they happen to function together as a vehicle. If the car could feel, would a pressed gas pedal feel painful? Would it feel good or satisfying?

There are also times when people are unconscious, yet still respond to stimuli. For example, what does it feel like when you are in deep sleep at night and you aren't dreaming? Well, it doesn't really feel like anything; your "conscious" self sort of fades out as you fall asleep and then it jumps forward to when you wake up. But if while you're asleep someone sneaks into your room and slaps you, you wake up right away (unconscious response to stimuli).

I hope this helps.


That did clear it up for me, thank you!

The solipsist can't find reason to form agreements with others. Others are mindless in his view.

He can't define consciousness in terms of what we agree, there's nobody to agree with.

So the game of describing the color red to others, cannot be played to any meaningful end. Red is red to the solipsist.

Coming up with your own interpretation of consciousness is an ability truly conscious people have.

It can never be completely agreed upon in a philosophical conversation without dogma or compromise.

Neither solipsism nor total agreement can be truthfully used as a philosophical tool to contain consciousness.


This is the problem with the whole debate

Nobody has ever actually defined an empirical and falsifiable set of hypotheses about how to define “consciousness”

Half of the field is exactly this, and why the link in question exists

It’s an incoherent question


I'm pretty sure that the subjective experience of colors is mostly due to a combination of the overlapping ranges of wavelengths our eye's cones respond to (how similar different colors appear to us), and associative recall ("grass green").

https://en.wikipedia.org/wiki/Cone_cell

Note that subjective perception of color is only loosely related to the actual frequencies of light involved.

Try loading the image of these "red" strawberries into GIMP/Photoshop, and use the color picker to see what color they really are - grey.

https://petapixel.com/2017/03/01/photo-no-red-pixels-fascina...
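If you'd rather not open GIMP/Photoshop, a few lines of Python with Pillow will do the same check; the filename below is just a placeholder for a local copy of that image, so point it wherever you saved it.

    from PIL import Image

    # Sample a few pixels across the middle of the image and print their RGB values.
    img = Image.open("strawberries.jpg").convert("RGB")
    w, h = img.size
    for x, y in [(w // 4, h // 2), (w // 2, h // 2), (3 * w // 4, h // 2)]:
        r, g, b = img.getpixel((x, y))
        print(f"pixel ({x}, {y}): R={r} G={g} B={b}")

If the article's claim holds, R never dominates G and B in those samples - the "red" you see is supplied by your visual system's color-constancy correction, not by the pixel data.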


Consciousness & intelligence are orthogonal. It’s highly plausible we achieve superintelligence before we have conscious machines.

That said, understanding consciousness is not necessarily a prerequisite for engineering it. It may very much end up being an emergent phenomenon tied to sensory integration & processing, in which case it ends up self-assembling under the right circumstances. Exciting times…


So my thermostat is 'intelligent', just not 'conscious'?

For someone interested in deep questions about consciousness.

[1] A 25-year wager between a philosopher and a neuroscientist, which the philosopher won (Chalmers vs. Koch)

[2] A nice debate between Kastrup and Koch, where it looks like Koch has abandoned his physicalist beliefs

[1] https://www.scientificamerican.com/article/a-25-year-old-bet...

[2] https://www.youtube.com/watch?v=qzwC7sXyhWQ


> appear to

Funny thing is you could absolutely say the same thing about every other human on the planet. There is just no way for you to confirm that I’m having a subjective experience. I might be acting like I am. And so might everyone else. The only one you can be absolutely sure of is yourself.

We all just sort of assume that other people have consciousness but it’s impossible to know for sure.

