
Unfortunately, the hype about LLMs has generated breathless ruminations on AGI and consciousness. Dig a little deeper and there doesn't seem to be much there, as one may find that these terms are not adequately defined.

Here we see this attempt:

> First, a disclaimer. Consciousness is a notoriously slippery term, if nonetheless possessed of a certain common-sense quality. In some ways, being conscious just means being aware — aware of ourselves, of others, of the world beyond — in a manner that creates a subject apart, a self or “I,” that can observe.

Which is followed by paragraphs correctly calling out the inadequacy of the above definition. Nowhere though does the reader get a satisfactory answer: what really is consciousness?

One might say who cares if they don't define it? Well, if you don't define it, there's no point in discussing whether it exists, how it came to be, or what it comprises.

You might as well be asking the question: how close are we to quidlesmoopy?




I have a definition of consciousness: consciousness arises when a creature not only applies a theory of mind to others (predators, prey, and its conspecifics would all be likely others) but also applies a theory of mind to itself. (given that we have the most data on ourselves, it is not surprising that we'd have an "I", but it is a little surprising that the "I" modelled usually has such a high variance from others' models of oneself)

Great (although that's a hypothesis for a cause rather than a definition), but for any discussion with another person, you have to be sure in advance that they don't use one of the other 49 common meanings of "consciousness".

What I care about when I ask if an AI is or isn't conscious is whether it has the kind of experiences of existing that I know I have.

Why is there a feeling of sensations rather than just the sensations themselves in this body of mine? How is it that I am more than mere stimulus-response? Is it homunculi all the way down?


I found the works of Thomas Metzinger illuminating, at least in terms of human consciousness.

+1 for Metzinger's ideas. It's pretty much the only theory of consciousness that I've read about that makes any sense.

Here is a little extract for anyone not familiar:

> My first step in this chapter is to introduce the hypothetical notion of a phenomenal self-model (PSM). [...] I will then investigate how a PSM, integrated into a yet more complex, higher-order form of representational content, forms the central necessary condition for a conscious first-person perspective to emerge on the representational as well as on the functional level of description. [...]

> The content of the PSM is the content of the conscious self: your current bodily sensations, your present emotional situation, plus all the contents of your phenomenally experienced cognitive processing. They are constituents of your PSM. Intuitively, and in a certain metaphorical sense, one could even say that you are the content of your PSM. All those properties of yourself, to which you can now direct your attention, form the content of your current PSM. Your self-directed thoughts operate on the current contents of your PSM: they cannot operate on anything else. [...]

> From a logical and epistemological perspective it is helpful to differentiate between simulation and emulation, in order to further enrich the concept of a PSM. We can then, in a second step, conceptually analyze the PSM as a special variant, namely, self-emulation. [...] It is the possibility of reflexive or self-directed emulation that is of particular interest from a philosophical perspective. Self-modeling is that special case, in which the target system and the simulating-emulating system are identical: A self-modeling information-processing system internally and continuously simulates its own observable output as well as it emulates abstract properties of its own internal information processing. [...]

> Using a term from computer science, we could therefore metaphorically describe self-modeling in a conscious agent as “internal user modeling.” In human beings, it is particularly interesting to note how the self-model simultaneously treats the target system “as an object” (e.g., by using proprioceptive feedback in internally simulating ongoing bodily movements) and “as a subject” (e.g., by emulating its own cognitive processing in a way that makes it available for conscious access). This is what “embodiment” means, and what at the same time generates the intuitive roots of the mind-body problem: the human self-model treats the target system as subject and object at the same time. [...]

from Chapter 6 of "Being No One: The Self-Model Theory of Subjectivity" by Thomas Metzinger.

___

Also worth looking into are the similar explanations of consciousness-as-self-model (in a simulation generated by our brain) by Joscha Bach. I'm not sure if Bach came to the same perspective independently or read Metzinger, but he definitely presents the ideas in more accessible language (vs. Metzinger's dense, precise philosophical jargon, which I found heavy going).

I'd love to find more sources on this (books/talks/papers), or find a place where these ideas are actively discussed.


Good luck. I've given up trying to explain qualia to people, and why they are at the core of why consciousness matters. It's so frustrating. Once I heard the term for the first time it seemed obvious to me, and I was glad someone had already made up a word for it, but every time I try to explain it in the consciousness discussion people just look at me cross-eyed. I must just be explaining it wrong :)

How do we know that other H sapiens have the same kind of experiences of existing as you have? Best I can do is that "I know, right?" is a fairly universal reaction, so we know that even if others' qualia are not identical to ours, they are at least isomorphic.

As to "more than mere stimulus-response", much of me is pretty basic stimulus-response. Certainly the derivative controller, and arguably all the proportional controller as well. It's only the integral controller that's not, and much of that could be adequately explained by an FSM with a high number of states. (it is a little known fact that Regexen were originally derived as a mathematical model of what neural networks could potentially recognise!)

(see https://thefarside.net/i/61ee3cb4 infra)


>How do we know that other H sapiens have the same kind of experiences of existing as you have? Best I can do is that "I know, right?"

This was answered in a different comment: https://news.ycombinator.com/item?id=39103099

I think the point is that if we agree that I (as the individual asking) have consciousness, then we can infer that others have consciousness based on similarity. Another H sapiens is very much like me, so they are very likely to have similar experiences. Chimpanzees less so, but still similar enough to infer that they are more likely than not to have similar experiences. And on down the line, through dogs, earthworms, plants, and bacteria... each becoming less likely the further from similarity they get.


I prefer using my definition because it offers something more testable than just some (arbitrary?) degree of similarity.

For instance, cows have been shown to spend (significantly) more time standing near (randomly placed) pictures of smiling farmers than pictures of frowning farmers, which suggests that they have enough of a theory of mind to prefer farmers who appear to be in good moods to those who appear to be in poor ones.

That, of course, doesn't say anything about whether cows have a theory of their own mind, nor did that study do the (perhaps not so obvious?) step of recording picture placement relative to the cow's "bubble" (a curious bovine approaches head on; a frightened bovine leaves head first; a skeptical bovine places itself sideways to the object at a distance, with one eye upon it, so it can change to either of the first two if it comes to a decision either way), but at least it offers something more quantifiable than "I believe this thing to be more like me than that thing".


>I prefer using my definition because it offers something more testable than just some (arbitrary?) degree of similarity.

It's fine to have a preference, but we must also concede that many (most?) human classification ontologies are arbitrary. Even in your example, what constitutes a "smiling" or "frowning" farmer is a somewhat arbitrary definition. What you might call a frown, I may call a smiling 'smirk.'

I think this testable framework preference may border on a reductionist perspective that dismisses the hard problem of consciousness on the assumption that if it can't be measured, it's not real (see also my other comment: https://news.ycombinator.com/item?id=39103504). Consider the perception of muscle fatigue/burning; in the context of a workout it may be found to be pleasurable, but in the context of an illness it might be felt as suffering. An MRI may "test" and show that both activate the same neural circuits, and yet the subjective experience is vastly different. Just because we don't (yet... maybe?) have the tools to test for it, would we conclude that differences in subjective experience don't exist?


> How do we know that other H sapiens have the same kind of experiences of existing as you have? Best I can do is that "I know, right?" is a fairly universal reaction, so we know that even if others' qualia are not identical to ours, they are at least isomorphic.

Seeing my own phrasing echoed back, I notice an ambiguity: "the same kind" can be either "all the same things" or "it is kinda close".

The former is provably different between humans: aphantasia, gender dysphoria, sexuality (including the absence), ASD, the conservative/liberal divide, that some people describe having religious experiences and others don't, amusia, … there's so many ways to be different, I'd be surprised if any of us is entirely "normal".

I meant the latter. And the only positive evidence I have for that is that other people were writing about it before I was born… which doesn't actually tell you that I have it, as for all you know I might be a P-zombie that's just parroting phrases it once read; I'm the only one who can be sure that there's an "I" in this body of mine.


> for all you know I might be a P-zombie that's just parroting phrases it once read

Right, but as long as the phrases you parrot appear to continue a conversation that I usefully model as produced by (another?) consciousness, I pragmatically (see Peirce) don't care what the actual mechanism that produced them may be. (an analogy: when using computers we almost always care that a protocol is followed, often care that a specific function is computed, sometimes care about which algorithm computes it, and almost never care about the exact implementation) If you were to be a giant lookup table or a Searlean room instead of a conscious being, well, that's on you.
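
(A throwaway Python sketch of that layering, with names I've just invented: the caller checks only the observable behaviour and never asks how it was produced:)

    # Two implementations behind the same "protocol": the caller cares that
    # the expected function is computed, not which algorithm computed it.
    def insertion_sort(xs):
        out = []
        for x in xs:
            i = len(out)
            while i > 0 and out[i - 1] > x:
                i -= 1
            out.insert(i, x)
        return out

    def builtin_sort(xs):
        return sorted(xs)

    for sort in (insertion_sort, builtin_sort):
        assert sort([3, 1, 2]) == [1, 2, 3]   # same observable behaviour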

> what if I am a P-zombie dreaming "I" am conscious? — not Zhuangzi

see also https://xkcd.com/810/


It depends on your goal. If all you care about is getting stuff done? I'd agree with your position.

But when that's all I care about, I don't ask that question. To me, the question of "consciousness" matters in, and only in, scenarios where the inner world matters, which is why I use that for my definition of "consciousness":

1. If I uploaded my brain, would that be a fork of my consciousness that would survive the death of my physical body? Tautologically only if it's "conscious".

2. Are the AIs we make and use suffering a fate worse than the actual slaves in the Caribbean who invented the term "zombie" out of a fear that their masters would force them to keep working after death? Only if they're "conscious", and even then possibly not (suffering is an extra question to be asked once consciousness is determined).

I'd kinda like to get my brain uploaded, but there's no point if it's not a continuation of my consciousness.

I'd like to use a super-smart AI to solve my problems, but not if doing so involves creating minds that exist only for a few seconds before being destroyed.


Thinking about this a little, it's not clear to me that AI would have a problem with death. We seem unnecessarily asymmetric, caring a lot about our ego not existing in the future when we hardly consider that it already didn't exist for a very long time in the past, but that's probably evolutionary due to the embodiment?

— "Make me one with everything," said the monk to the hot dog vendor

(Vendor gives hot dog to monk, monk pays)

— "..." waits the monk

— "How much was that hot dog, anyway? I gave you a 20" says the monk

— "Change," replies the vendor, "is an illusion."


> Thinking about this a little, it's not clear to me that AI would have a problem with death. We seem unnecessarily asymmetric, caring a lot about our ego not existing in the future when we hardly consider that it already didn't exist for a very long time in the past, but that's probably evolutionary due to the embodiment?

Sure, but that same chasm of difference which means we can't guess at that (despite it being so innate to many of us), means we could very well accidentally cause suffering or induce rage even when we think we're being nice.

https://en.wikipedia.org/wiki/Bokito_(gorilla)




The only two things I can say with much confidence about consciousness:

1. I definitely have it, some of the time. (I'm not sure if anything else does, but I can't rule it out.)

2. The chances other things have it feels lower as they become more different from me. Another person may or may not have consciousness, and a rock may or may not have consciousness, but the human feels more likely to have it than the rock does.

This feels like the common-sense stance for anyone who has considered solipsism in depth and doesn't want to proclaim an unshakeable faith in things like panpsychism or non-dualism or whatever.

It's honestly a bit frustrating, because I find mathematical platonism otherwise quite compelling, but it's totally possible there's just abstract mathematical objects and oh yeah this weird fairy dust we sprinkle over certain reifications of things to ontologically privilege them. I know, Occam hates me.


It may be that the tools we have come to rely on so much to define such terms are, almost by definition, inadequate to define consciousness. The scientific method and rational discourse are used to describe objective reality; I don't know that they can fully answer the "hard" problem of consciousness, because it is largely concerned with subjective reality.

> It may be that the tools we have come to rely on so much to define such terms are, almost by definition, inadequate to define consciousness.

And yet, we are confronted by the fact that we use the concept of consciousness in dozens of different contexts and we understand perfectly well (usually) what people are talking about. That's the basic insight of ordinary language philosophy. These aren't fundamental mysteries; they're webs of concepts tied up in language, and careful attention to how they are used, unpacking the web and knot of uses, reveals what they mean. The definition is a "rule" or explanation for their use in language, which is not "subjective" (private) but public behaviour.


Mm, I'm still on the "by definition inadequate" side personally. I just don't see any unambiguous way to verify that another being even experiences qualia, to say nothing of actual consciousness.

> I just don't see any unambiguous way to verify that another being even experiences qualia, to say nothing of actual consciousness

That's actually one of the core parts of Wittgenstein's "Private Language Argument"[0]. The problem isn't so much that verification is "difficult"; it's that certainty, in the specific sense of an intelligibly private meaning, is a logical impossibility. The implication is that "experiential qualia" have a specific intelligible meaning in language (or even more broadly in behaviour), which is to say that our "subjectivity" is personal but not private. Further implications tumble outward. For instance, Wittgenstein spends a lot of time discussing the "qualia" of pain. It was (and still is, obviously, considering the rise of "AI" discussions lately) an enormously consequential argument in philosophy.

[0]: https://en.wikipedia.org/wiki/Private_language_argument#:~:t....


clicks link

"Wittgenstein does not present his arguments in a succinct and linear fashion; instead, he describes particular uses of language, and prompts the reader to contemplate the implications of those uses. As a result, there is considerable dispute about both the nature of the argument and its implications. Indeed, it has become common to talk of private language arguments."

Yeah, sorry, I don't have a hundred hours to spend on some dude who decided to write half of an argument in ten different places. Maybe a link to plato.stanford.edu would've been better.


Sure, if you want: https://plato.stanford.edu/entries/private-language/

But the wiki entry is a far more succinct summation of the main points of the argument. The very first subsection in the wiki, which you apparently didn't even try to read, is "Significance", which summarizes the argument in all of... 2 paragraphs.


I think the crux of it is how the OP was talking about making claims "with confidence".

I think you're saying we can get a general sense of what is meant without a formal definition. That's a fuzzy definition, i.e., one with high uncertainty/low confidence. My point is that I don't know that we have the tools to make high-confidence claims. We can't even agree on when consciousness starts, whether it's a toggle switch or a dimmer switch, etc. That all speaks to high levels of uncertainty, where we can't agree on a "public" language. I think the limits of language in describing the phenomena are right in line with the "we don't have the right tools" argument.


It's still a very interesting article. But I think that your point is correct. This article demonstrates why philosophy was largely obsoleted by science.

It's true, given enough time, we won't ask questions anymore. Every abstract concept and moral belief will be simply data. "Meaning" will stop making sense. There will only be science and those who do science.

Some poor fool might ask every once in a while "why are we doing this science anyway?" and they will be swiftly silenced.

We will of course force children away from their natural wonder, as nothing good could come from something like that. There are plenty of other things to fill the kids' brains with anyway.

There will only be data. Meaninglessly complete and perfect data. A young would-be Kurt will come along and ask "is it possible this is really complete? all our data? how might we verify it?" He, too, would need to be silenced.

One night your wife will sob "but why are you being like this?" But you know this is a silly thing to ask: you are just what your atoms are, we can measure them perfectly you know. What other "why" does she want?

Things like "value" and "human" and "well being" aren't important concepts anyway. If it can't be resolved down to data, lets just let the market and the state decide. It's not like we are going to be able to bring up some obsolete, non-falsifiable ideas in order to defend ourselves! It doesn't matter much anyway, its not science.

And when you die you will know that your whole life really meant nothing but, at best, a small donation to the science. Because what value could subjective experience really have outside of, you know, doing science? But you are still happy, you did Science. But don't ask why you are happy!

And the great drama of science vs. philosophy will have ended. There was always going to be a winner after all! It was totally a battle, not a dichotomy. And the battle is over.

\s


I think your argument may be a little too strong. Would you refrain from discussing, say, happiness, sadness, etc. in your daily life without a formal definition of what happiness is exactly at a structural level? And I say that as someone interested in formalizing ethics, and formalizing all of those notions. The point is, to even (ideally, formally) define them we need intuition and intuitive discussions about what their definitions should be, and we are still at (very) early stages of what could constitute a real formalization.[1]

More significantly, the utility of those concepts far predates scientific definitions; those terms have been useful for thousands of years and are essential to human society!

(Again, certainly not against definition, but be mindful of being too strict, too soon about it :) )

[1] The benefits of formalizing (human concepts) are subtle: like formalization in math, it helps us build confidence that we're not building sand castles, dig deeper into theories of meaning, and use this to slowly improve everyone's lives. But like math, we've been doing it informally for a very long time, and a strict axiomatic formalization is still relatively rare today for most fields (although progressing with the aid of computer proof systems). Still, the evolving standards for proof are likely essential for many of the deep theories we've achieved (probably including, say, modern logic and Gödel's theorems, algebraic geometry, functional analysis, etc.).


You raise some good questions.

Although humans discuss emotions a great deal, there don't seem to exist formal definitions for those concepts. Despite this, useful data can be collected in ways that sidestep this fact: we can ask many people whether they feel particular emotions and draw conclusions from that data. Still, we must concede that a shared definition may not exist, which weakens these conclusions.

In the case of consciousness, we could ask many people if they thought that someone/something else was conscious, but it's not clear what value that would have.


>One might say who cares if they don't define it? Well, if you don't define it, there's no point in discussing whether it exists, how it came to be, or what it comprises.

> You might as well be asking the question: how close are we to quidlesmoopy?

Adequately and generally defining intelligence, consciousness, AGI & such... big ask. Impossible in practice.

So... does that mean the conversation ends here because any further discussion = quidlesmoopy? We don't know what intelligence is precisely and therefore can't reason about it at all.

I don't think this level of minimalism or skepticism is viable.

We need to make do with placeholder definitions, descriptions and theories of consciousness and intelligence. We can work with that.

So yes... language as a foundation for consciousness and intelligence is "just a theory." It's probably not entirely correct. And still... positing and testing the theory by building artificial talking machines is possible.


I think placeholder definitions are the way to go: one defines a concept that is acknowledged not to be the target and works with it.

So for the purposes of discussion, one may define consciousness(prime) as something less than true consciousness and attempt to work with that lesser definition. However, at all times, one must admit to working with a lesser definition that may never lead to knowledge that applies to the target definition.

We must also admit that consciousness as many understand it may not exist.


> One might say who cares if they don't define it? Well, if you don't define it, there's no point in discussing whether it exists, how it came to be, or what it comprises.

This is not how we normally treat other concepts, even in physics or mathematics. Concepts can exist and even be studied for a long time while lacking a rigorous definition.

For example, a truly rigorous and satisfying definition of natural numbers probably dates from the 19th century. Does that mean that people before this shouldn't have discussed natural numbers? For another example, the Dirac delta function was used in the study and teaching of QM while not being very well defined as a mathematical object (it's not an actual function, for example).

In general, most interesting concepts begin life as some kind of vague intuition, and it often takes significant study to create a formal definition that captures the intuition well. This applies to any kinds of concepts, even in mathematics.


This is why the Turing Test gets talked about a lot (although I gather Turing's original 'imitation game' suggestion had slightly different emphasis).

If you can reliably reproduce a situation where a human can't tell whether they are conversing with an AI or a human, then that's a kind of yardstick.

So yes, we can't define consciousness, but we've got a sort of yardstick for when we might consider something to have achieved it.

And people think the Turing Test is too easy, but the point is that the 'tester' should ask difficult questions to try to probe the depth of thinking of their subject.
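
(For concreteness, here's a bare-bones Python sketch of that yardstick; the judge/respondent objects and their methods are entirely made-up stand-ins, and the only point is the shape of the game:)

    import random

    class ScriptedParty:
        # Stand-in respondent that just replays canned answers.
        def __init__(self, answers):
            self.answers = list(answers)
        def answer(self, question):
            return self.answers.pop(0) if self.answers else "..."

    class NaiveJudge:
        # Stand-in judge: asks one probing question per round, guesses at random.
        def ask(self, transcript):
            return "Describe a smell that reminds you of childhood."
        def guess(self, transcript):
            return random.choice(["human", "machine"])

    def run_trial(judge, human, machine, rounds=3):
        # The judge converses with a hidden respondent, then guesses which kind
        # it was; reliably beating chance over many trials is the yardstick.
        label, respondent = random.choice([("human", human), ("machine", machine)])
        transcript = []
        for _ in range(rounds):
            q = judge.ask(transcript)
            transcript.append((q, respondent.answer(q)))
        return judge.guess(transcript) == label

    won = run_trial(NaiveJudge(), ScriptedParty(["petrichor"]), ScriptedParty(["lavender"]))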


Can some people even pass the Turing test?

In my layman's mind, consciousness, at its lowest level, is awareness, as mentioned in the article.

I think what humans have achieved is the next level which is "awareness" of our awareness. The ability to self-reflect. Perhaps it is even a form of "recursive consciousness" (?) And perhaps structured language is the secret sauce that makes this possible.

Without that next level I can't differentiate humans from other mammals.

This has caused me to ask: Is there a level above this? I.e., awareness of your awareness of your awareness. And... can humans achieve that next level with our current biological equipment?

:-)


This is the problem with terms like this. There are definitions in academia/industry that differentiate between awareness, consciousness, qualia, intelligence, sentience, and more. But these distinctions require some pondering to understand and are divergent from general understanding, which treats them as roughly the same thing. For example, consciousness for me is the ability for something to take directed action to affect its surroundings. This means that a robot can be conscious. But now the question for what we're talking about here is: is it aware? It is possible to display consciousness without awareness (see blindsight studies), but to my knowledge it is not possible to be aware without consciousness. And then we get to sentience, which is still very slippery, and often relies on definitions involving "qualia", which is also quite slippery.

I have no formal knowledge in the domain to go much deeper but I can understand the problem you stated about defining the terms properly. I suppose this spinning around on word definitions is similar to my "awareness of awareness" description in that we have the ability to ponder the meaning of the very tool (language) that we are thinking with. But my old head begins to implode if I go very deep...

Isn't that next level the "theory of mind" that the article says LLMs seem to have achieved (in at least some form) now?

I agree that definitions are the problem. It seems the term "consciousness" is overloaded and refers to a lot of concepts, some of which are overlapping and some of which are totally different.

People also want to define consciousness in terms of the concept they feel is most importantly different about humans. So this definitions conversation gets mixed with a "which concepts are important" conversation.

For example, the thing that I think is important is subjective experience. I think this is the most important difference between beings that need to be given moral consideration and those that don't and this is always what I have thought of as consciousness. However, I often have conversations where I am just talking past someone because they are interested in something else entirely. I hope we can start defining this upfront and having separate conversations here.


If I ever create a working definition of artificial consciousness, I’m definitely calling it quidlesmoopy.
