There is the notion of panpsychism: that consciousness defined in the basic sense is extant everywhere, all the time, in many varied forms and scopes. By the definitions above, awareness would be restricted to those systems which could reasonably be considered "cognitive", and sentience would belong only to those who can conceptualize "cogito ergo sum".
This is exactly the right question. Further complicated by the fact that everyone has differing operational definitions of the words "consciousness", "awareness", and "sentience".
This is the problem with terms like this. There are definitions in academia and industry that differentiate between awareness, consciousness, qualia, intelligence, sentience, and more. But those distinctions take some pondering to understand, and they diverge from everyday usage, which treats the terms as roughly synonymous. For example, consciousness for me is the ability of something to take directed action to affect its surroundings. By that definition, a robot can be conscious. But then the question, for what we're talking about here, becomes: is it aware? It is possible to display consciousness without awareness (see blindsight studies), but to my knowledge it is not possible to be aware without consciousness. And then we get to sentience, which is still very slippery and often relies on definitions involving "qualia", which is also quite slippery.
Whenever this topic arises, a subset of comments are devoted to differing definitions of 'consciousness'. Here, you can find definitions as diverse as 'only entities capable of language are conscious' and 'any entity that reacts to its environment is conscious'.
This is not very helpful. Note that neither of the above definitions tells us anything useful about consciousness; if anything, they are ways to avoid discussing the questions we don't have answers to.
I think it is more helpful to put aside the definition of consciousness, and begin by regarding 'awareness' as a continuum, though one that displays some significant qualitative differences across its breadth, such as:
Entities that have hard-wired responses to their environment.
Entities that can learn responses.
Entities that have some sort of internal model of their environment.
Entities whose models include themselves.
Entities that are aware of their own models, at least to some degree.
Entities that believe others have the same capability (i.e. having a theory of mind).
Entities that can communicate their models to similarly-capable others.
A single continuum is probably simplistic (though still useful), given that recent experiments seem to show species displaying a mix of the above responses, depending on circumstances.
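Purely as an illustration (every name below is my own invention, not an established taxonomy), the levels above could be sketched as an ordered enumeration, and the caveat about mixed responses handled by representing a species as a *set* of observed levels rather than a single point:

```python
from enum import IntEnum

class Awareness(IntEnum):
    """Hypothetical ordering of the awareness levels listed above."""
    HARDWIRED = 1       # hard-wired responses to the environment
    LEARNED = 2         # can learn responses
    WORLD_MODEL = 3     # internal model of the environment
    SELF_IN_MODEL = 4   # the model includes the entity itself
    MODEL_AWARE = 5     # aware of its own models, at least somewhat
    THEORY_OF_MIND = 6  # believes others have the same capability
    COMMUNICATES = 7    # can communicate its models to others

# A species need not occupy a single point on the continuum:
# represent it as the set of levels it has been observed to display.
corvid = {Awareness.LEARNED, Awareness.WORLD_MODEL, Awareness.THEORY_OF_MIND}
print(max(corvid).name)  # highest level observed
```

Because `IntEnum` members compare as integers, the ordering of the continuum falls out for free, while the set captures the "mix of responses" point.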
The idea of a continuum seems consistent with evolution: to me, it seems unlikely that, for example, language popped into existence fully formed in one species. There is a gap in the record, though: no living species seems to demonstrate the transition to the level of awareness that distinguishes humans from chimpanzees.
I suppose there is a concept of sentience from the outside and a different concept of sentience from the inside. The movie "Johnny Got His Gun" by Dalton Trumbo depicts a badly injured WW1 soldier who is considered brain-dead by outsiders while he is fully conscious and sentient internally.
I haven't studied neuroscience so I don't know how you define consciousness. I have read Julian Jaynes's "The Origin of Consciousness..." which in my untrained opinion makes a compelling case that consciousness is a hard term to define.
Can't agree with you there. These terms are not exactly super solidly defined, but most uses of the term "consciousness" I've seen usually refer to something more advanced than basic signal processing. To quote Wikipedia:
> It has been defined as: sentience, awareness, subjectivity, the ability to experience or to feel, wakefulness, having a sense of selfhood, and the executive control system of the mind.
It's a sufficiently vague term that I can't use it to say what I mean without most likely being misinterpreted, and it clashes with modern debates over what is or isn't aware enough for its experiences to count, e.g. the Turing test, or the current wave of concern over whether a given animal is intelligent enough.
The definition I use is very similar. I like to think of consciousness or sentience as the ability to model (or predict) one's self.
A pattern-matching entity that can match its own patterns.
I might disagree about the intelligence aspect. It seems to me any entity that is at least moderately interesting (has any kind of behavioral complexity) would need some level of intelligence in order to model itself to a respectable degree.
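As a toy illustration of "a pattern-matching entity that can match its own patterns" (entirely my own construction, not anyone's formal definition): an agent that predicts sequences can be pointed at the stream of its own outputs, so that its input includes its own behavior:

```python
from collections import Counter

class Predictor:
    """Toy pattern-matcher: predicts the most frequent symbol seen so far."""
    def __init__(self):
        self.history = []

    def observe(self, symbol):
        self.history.append(symbol)

    def predict(self):
        if not self.history:
            return None
        return Counter(self.history).most_common(1)[0][0]

# "Self-modeling" in this toy sense: a second predictor whose input
# stream is the first predictor's own predictions.
world_model = Predictor()
self_model = Predictor()

for symbol in "ababab":
    world_model.observe(symbol)
    self_model.observe(world_model.predict())  # watches the other's behavior

print(self_model.predict())
```

This obviously says nothing about qualia; it only shows that "an entity modeling its own patterns" is mechanically trivial, which is part of why self-modeling alone feels like a weak criterion for sentience.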
Yes, this complete lack of agreement on terms is possibly the hardest problem in discussing it ;)
I tend to use consciousness as synonymous with awareness: being awake and experiencing. This is where qualia are found, and it is something I readily ascribe to most if not all animals, and possibly even to other living things.
I find the term "self-consciousness", meaning metacognition, problematic, as many animals are aware of themselves as beings (e.g. they recognise that a reflection is of them). I don't think they can reflect much on that, though.
We support much higher levels of abstraction in thinking than other animals, and we can reflect on our own thoughts, including our self in that model. These are cognitive skills rather than being about awareness.
I would say that an artificial intelligence can possess deep cognitive skills but have no awareness at all. Although it could have that too - I just don't see that as a requirement.
Philosophers of mind do have definitions of consciousness and sentience, but for some reason people keep ignoring or rejecting them.
For the purposes of this discussion, "consciousness" per se is mostly irrelevant. Sentience is still important but has less to do with intelligence than with experience (though sentience is still very much involved in acts of reasoning).
There are different kinds of reasoning, and those are probably more relevant to the discussion at hand re intelligence: associative, deductive, inductive, abductive, etc.
I think sentience/consciousness can currently only be "known" by the entity in question. I only know that I am conscious. I infer consciousness in other entities based on their similarity to me.
Given that consciousness is not super well defined, being conscious is a philosophical issue that depends on belief.
For example, if you believe that consciousness is a fundamental property of everything in the universe (i.e. panpsychism), then anything and everything is conscious. Under this belief, AI is technically already conscious, just in a different way than humans are.
In some eastern religions/practices, there are methods to experience the consciousness of other things (e.g. another animal, an insect or a tree), I wonder if the people that can do that would be able to experience the consciousness of AI.
One definition or property of consciousness I find interesting is that it encompasses "what it is like to be <entity>". If there is something that it is like to be <animal>, it is conscious. Intelligence and awareness are separate from consciousness in this framing. And there is arguably a point at which we can speculate that extremely simple organisms are not conscious, given their lack of the sense organs that seem to be a precondition for experience.
From a human point of view, the things you describe are the contents of consciousness. I have aphantasia, while my brother describes his mind's eye as CAD software: he can construct and manipulate visualizations at will.
The overarching awareness we both have that allows us to compare and contrast these things and make any sense out of those comparisons points to a more fundamental layer.
What you describe sounds closer to levels of awareness and one’s ability to recognize the workings of their own mind, e.g. some people remain lost in and identified with thoughts, while some are able to both experience and observe thought as just more contents of conscious experience, but not the actual center of one’s consciousness.
And there’s evidence that this can be learned (through mindfulness), which to me points to something like: we’re all conscious whether we realize it or not, and not realizing it doesn’t make it not so.
We don't even know if the appearance of self-awareness is all that other humans have. We only know for sure of our own self-awareness, since that is precisely the definition of self-awareness.
For all we can prove, the appearance of self-awareness is all that is necessary for consciousness. It could be that if you point a webcam at a computer, that computer gains some rudimentary self-awareness, of the kind an insect might or might not have; we can't really prove it either way. Sentience is not necessarily sapience.
It could also be that computer sentience is absurd and rendered impossible by some as of yet unknown mechanic. In either case we have no reason to make assumptions about it.
The problem with your definition is that it excludes forms of life that do not have actuators and sensors, implying that consciousness is not a mental property but rather a property of being physically able to manipulate the world.
I.e. you are saying Stephen Hawking is definitely less conscious than an average human with functioning arms and legs. In essence, you are too focused on your own experience of reality.
What we describe as awareness or "consciousness" might not really exist if everything is just a subsystem of the main system, the universe. Anyway, I find it a fun philosophical question to think about.
One of the students in a course I’m teaching on language and AI (mentioned in another comment here) wrote something similar in a homework assignment the other day. We had discussed the paper “Consciousness in Artificial Intelligence: Insights from the Science of Consciousness” [1]. The student wrote:
“One of the questions that I have been wondering about is whether there have been any discussions exploring the creation of a distinct category, akin to but different from consciousness, that better captures the potential for AI to be sentient. While ‘consciousness’ is a familiar term applicable, to some extent, beyond the human brain, given the associated difficulties, it might be sensible to establish a separate definition to distinguish these two categories.”
Probably new terms should be coined not only for AI “consciousness” but for other aspects of what they are and do as well.
While I don't necessarily buy the panpsychism you're outlining, I appreciate the fact that you actually make the distinction between experience of qualia (the hard problem) and self-awareness, which we arguably already are capable of both modelling and implementing successfully.
I've always found it dubious that people privilege self-awareness so greatly in discussions of consciousness, considering that many beings most would recognise as at least sentient, such as dogs and other higher mammals, often lack it, as arguably do many humans (and I'm not just talking about those under the age of 4, either).