
> Without an understanding of what consciousness is, there is also no reason to expect an implementation of an adder to not have some subjective experience.

I don't think we need to understand what consciousness is, but rather define what it is that we want to talk about. The English word "consciousness" is a sloppy catch-all for a bunch of experiential phenomena, including things like self-awareness (only marginally more specific!), qualia, etc.

However, even without a rigorous definition (make up 10 new words if current ones don't cover it), it seems the core of what most people mean when they say "consciousness" has an introspective aspect to it - what something "feels like" - which requires some perceptive/analytic machinery to be put to use. You can't feel anything without using a "feeler".

So, with that said, it seems pretty clear that things like thermostats and adder circuits are not conscious in the slightest, since they have neither the feedback paths nor the feedback-directed perceptual and analytical machinery that would be required. OTOH it's perhaps not so absurd to consider that something with an architecture like GPT-3 might be a "tiny bit conscious", since it DOES have some of those necessary architectural/functional abilities.
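To make the architectural point concrete, here is a minimal sketch (Python; entirely illustrative, all names hypothetical) of the difference being claimed: an adder is a pure function with no path from output back to any internal state, while even a crude feedback system consults a record of its own activity:

```python
def adder(a: int, b: int) -> int:
    # Pure function: no internal state, no path for self-observation.
    return a + b


class FeedbackSystem:
    """Toy loop with a feedback path over its own history.

    Not a claim that this is conscious - only an illustration of
    the architectural feature named above.
    """

    def __init__(self) -> None:
        self.history: list[float] = []  # internal state it can inspect

    def step(self, x: float) -> float:
        # The output depends on an observation of the system's own past.
        recent = self.history[-3:]
        bias = sum(recent) / len(recent) if recent else 0.0
        y = x + 0.1 * bias
        self.history.append(y)
        return y
```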

Of course many people will ridicule any such suggestion due to their own emotional investment. If people aren't comfortable with the idea of a machine being conscious under any circumstance, then they are necessarily going to reject it in the case of today's simplistic "AI" architectures.




> In this view, a building with a thermostat would seem to be conscious to some degree.

I don't think this is as overly broad or bad a definition as you presume. I recall sitting at a laundromat one day, in a somewhat sleepy state, daydreaming, with the rhythm of the washing machine in front of me playing through my mind. My subconscious told me that the machine was conscious. I dug deeper into that intuition and realized I was seeing a bit of the inner workings of the mind of the engineer who designed the algorithm. Not a perfect example, as I didn't know if the algorithm was closed-loop, but in any case, it made me realize that our minds are just large bodies of these algorithms working together. We have some additional oracles, random data generators, and probabilistic mechanisms thrown in as tools, but consciousness really is mechanistic and pluralistic.
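For the unfamiliar: "closed-loop" just means the algorithm feeds a measurement of the result back into its next decision, rather than running a fixed schedule. A minimal sketch (Python; the sensor is stubbed out and all names are hypothetical):

```python
import random


def read_temperature() -> float:
    # Hypothetical sensor stub; a real controller would read hardware.
    return 20.0 + random.random() * 10.0


def open_loop(cycle_minutes: int) -> list[str]:
    # Open-loop: a fixed schedule; the outcome never influences the plan.
    return ["agitate"] * cycle_minutes


def closed_loop(target_c: float, max_steps: int = 1000) -> list[str]:
    # Closed-loop: every action depends on a fresh measurement.
    actions: list[str] = []
    for _ in range(max_steps):
        if read_temperature() >= target_c:
            break
        actions.append("heat")
    return actions
```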

To recreate it requires not only the qualitative aspects of a being that can think, but also sufficient (read: vast) quantity of systems to get to a useful general purpose thinking machine. There is no "soul" or special sauce or singular definition of consciousness. It's an illusion.


> I have a gut feeling that there needs to be more to consciousness than a bunch of neurons connected to other neurons.

Well, considering the brain is just that, and consciousness exists in the brain, then at a physical level that's exactly what consciousness is. But like a computer, it requires more than the simple presence of the hardware: the hardware has to be in an active state executing instructions, and the mass of neurons has to be firing in a specific pattern - which seems to me self-booting, given how we start out. We know consciousness is just a bunch of neurons; the question is how they work. Just as we know a given chip is capable of various functions, and the question is how.

Knowing the building is made of steel does not make one capable of recreating a skyscraper.


>I don’t see how consciousness is at all hard to define. A system that is aware of itself.

A thermostat that is aware of the temperature inside its own housing is "aware of itself". It's not conscious however.

Nailing down exactly what consciousness means is remarkably difficult.


> consciousness is what allows you to experience things rather than being a mindless automaton.

Well, this is certainly a better definition IMO; however, what does a "mindless automaton" mean? If the automaton is intelligent, then it's not mindless.

> What you mean might be that you'd prefer a definition that you can use to measure objects around the world to detect consciousness?

That's not what I mean. I actually think Turing created a decent method of detecting consciousness. What I mean is that someone on this thread said the problem with consciousness is that it is not well-defined, and I happen to agree, in the sense that when people talk about consciousness they are often talking about different things. Just in this one thread two different definitions were proposed, and the article uses a third: "the feeling of being inside your head".


> how it is possible to be conscious of anything at all

I ... have always seen that as something obvious. On a high level, isn't consciousness the result of combining a bunch of neurons (a "hardware", of sorts) in a certain preexisting configuration (a "software", of sorts) with stimuli (inputs)?

To me the problem is in the details. The separation between layers is not needed, because there's no limited consciousness needing to understand the system in the first place. All the encodings and pathways in all three levels are tortuous, non-intuitive, and profoundly intertwined. An image seen through the eyes will end up encoded as a bunch of electrical signals that are then sent to neurons that pass them to other neurons, until eventually certain other neurons release molecules at another end that activate yet other neurons, and that gets encoded as "pleasure".

But to me it is obvious that this is what's going on. The "hard part" is reverse-engineering the software, the operating system, and the hardware.

This would be a deeply challenging technical puzzle - perhaps beyond our capabilities to tackle, perhaps requiring many generations of people. But I don't understand where this "philosophical hardness" is. I only see the so-called "easy" problems.

What's mysterious about consciousness is that the software/hardware it is part of is finicky and tricky, like something written by a crazy savant with a big ego.

https://en.wikipedia.org/wiki/Saccade


> Experts largely agree that current forms of AI are not conscious, in any sense of the word. While there have been many studies on “computational consciousness,” or how something that might be considered conscious can be realized with computers, these studies are very preliminary and do not offer anything close to a concrete plan on building “conscious” machines. The reality is that we don’t have a widely accepted definition, let alone understanding, of consciousness. Claiming that we have already replicated such a nebulous concept with computers seems improbable at best. Granted, the claim could also be reasonable, if a particular definition of consciousness was specified as well.

is the last sentence of this paragraph not a direct contradiction of the first?

and look at the third sentence. what’s wrong with saying “we don’t understand enough about consciousness to make claims in either direction when it comes to our neural networks”? really: why do so many people in this article seem uncomfortable with other people saying as much? is it just the PR angle? is this field really less about answering the interesting questions than it is about satisfying the public?


> We have absolutely no idea what consciousness is or how it works.

We know what it is, we just don't know how it works.

Self-awareness comes in stages. We generally move through those stages throughout our daily lives. Animals display wide variations in qualities of self-awareness as well. So we can define consciousness as the sum of all of these qualities.

What I think is going to happen is that we'll start separating out lots of aspects of consciousness and explore them in software, adding more and more of them in as the state of the art of hardware gets better. The consciousness algorithms will slowly, over time, shed complexity.

Eventually, well before we get to even human-level self-awareness, we'll run into hard physical limits and realize that biology is way more effective at making this sort of compact yet incredibly complex evolved system than any design process could be.

Biology has an advantage we don't have: it does not need to understand what it is doing, and it works ceaselessly. It can simply try over and over again, across millions of years.

I predict that getting machines to become truly self-aware will be more trouble than it's worth, and that then you'll be choosing levels of comparatively-lower self-awareness for each individual component of a software system as part of architecting it. In fact, we have that tradeoff now. Do I really need to pull out ML to write a shell script?


> Certainly I believe we can agree that if you program an application or service to act as though it were conscious, that doesn't actually make it conscious

I'm not sure we agree upon that, especially without a clear definition of consciousness.


> it is possible to find a neural network that acts externally precisely as you would, up to an arbitrarily small epsilon. However, one wonders if such a thing would be consciousness and if so, where does the consciousness sit, in the matrix multiplication or the graphics card.
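(For context, the "arbitrarily small epsilon" above is presumably a nod to the universal approximation theorem. One classic form: for any continuous $f$ on a compact set $K \subset \mathbb{R}^n$, any non-polynomial activation $\sigma$, and any $\varepsilon > 0$, there exists a one-hidden-layer network

$$g(x) = \sum_{i=1}^{N} \alpha_i \, \sigma(w_i^{\top} x + b_i) \qquad \text{with} \qquad \sup_{x \in K} \lvert f(x) - g(x) \rvert < \varepsilon.$$

The theorem is purely about input-output behaviour; it says nothing about whether, or where, any experience would "sit".)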

I don't much like this reverent way of thinking about consciousness, as if it were from another world, or of a different essence.

I believe consciousness is the ability of the agent to adapt to the environment in order to protect itself and maximise rewards. It's not just in the matrix multiplication, but in the embodiment, the environment-agent loop. Consciousness is not something that transcends the world and matrices, it's just a power to adapt and survive.

And it feels like something because that feeling has a survival utility, so the agent has a whole neural network to model future possible rewards and actions, which impacts behaviour and outcomes.


> The idea that consciousness is an algorithm or a computer or a machine is an assumption that is extremely popular among people in the tech industry because it confirms their assumptions

No, it's not because of that. It's because the brain effectively is a computer. Not in the vague sense of "it has stuff connected to stuff and there's electricity involved", but in the more specific sense that it takes inputs, produces complex outputs, and has clearly identifiable hardware and indirectly identifiable software. It even has internal structure we're only beginning to understand, but that we know enough about to reasonably infer what computations happen where. There's little reason to assume there's some metaphysical mystery here, as exactly zero other things in the universe that we've studied since the dawn of humanity have turned out to be magic.

TL;DR: what else could it be? And before someone says "antenna", I don't buy it. "Computer" is a simpler explanation for all known facts than remote consciousness being received by the brain. See my take on this before[0]. See also: Occam's razor.

> "I know about computers. Let's assume the brain is a computer and consciousness is an algorithm. I can now comment on the brain and consciousness."

Yeah, well, sure. If I know the limit of applicability of my computer knowledge, I sure can comment on brain and consciousness. Like, I wouldn't say "it's vulnerable to SQL injection" because that would be an idiotic statement. But I could say "it implements visual processing, audio processing, collects other telemetry, and does sensor fusion in real-time in under 20 Watts, with power to spare". Because that's observation, physics, and modelling reality along a particular perspective of interest.
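"Sensor fusion" means combining noisy readings from different sensors into a single estimate. As one concrete instance of the kind of computation meant, here is a minimal complementary filter (Python; coefficients and readings are illustrative):

```python
def complementary_filter(prev_angle: float, gyro_rate: float,
                         accel_angle: float, dt: float,
                         alpha: float = 0.98) -> float:
    # Fuse two imperfect sensors: the gyro is smooth short-term but
    # drifts; the accelerometer is noisy but free of long-term drift.
    gyro_angle = prev_angle + gyro_rate * dt  # integrate angular rate
    return alpha * gyro_angle + (1 - alpha) * accel_angle


# Usage: feed successive (gyro, accelerometer) readings through it.
angle = 0.0
for gyro_rate, accel_angle in [(0.5, 0.004), (0.4, 0.009), (0.3, 0.012)]:
    angle = complementary_filter(angle, gyro_rate, accel_angle, dt=0.01)
```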

--

[0] - https://news.ycombinator.com/item?id=19490801


>> and you'll find a whole lot of things that most people wouldn't call "conscious"

> I disagree. I cannot think of anything with a nervous system engaging in the particular neurochemical reaction I'm talking about not being conscious. No example comes to mind?

Is a jellyfish conscious? Does it have the particular neurochemical reaction you are talking about?

> A machine which imitates some highly abstract equational description of thought is as close to thinking as a bird is to an aeroplane. The bird's heart will burn as much jet fuel as your machine will think.

This is actually a good analogy for our disagreement. Your definition of "thought" seems to inherently depend on the implementing substrate; and if it doesn't burn jet fuel, a bird doesn't really "fly".

But for me the substrate is irrelevant; I don't care whether a machine "really thinks", so long as it can solve any problem which I might have to "think" about otherwise.


> I think that consciousness is a collection of experiences and observations ('data'), nothing more. I am curious what led you here:

Consciousness is part of having an experience. Observations don't require consciousness, since they can be performed by measuring equipment - although one can get deep into the weeds on whether an observation needs a mind to make it meaningful, and thus an observation rather than simply a physical interaction.

> I am curious what led you here:

Being a self isn't about crunching data, it's about having a body and needing to be able to distinguish your body from the environment for survival and reproductive purposes.

An algorithm has no body and thus no reason to be conscious or self-aware. That it's even an algorithm and not just electricity is an interpretive act on our part (related to the deep semantic debate over what counts as an observation).

A robot might someday be a self. It would certainly be advantageous for robots to avoid damaging themselves, and being safe around other robots and humans (treating them as selves). But how we go about making robots self-aware, and more problematically, how they have experiences is something nobody knows how to do. Saying it will emerge once the robot is sophisticated enough is the same as saying nobody knows how to make a machine conscious.


> When I say an entity is conscious, I mean to say it not only has the ability to react to stimuli, but it can also abstractly choose how to react. It can rewire its own reactions, not just in a Pavlovian sense, but it can also develop internal thought frameworks and route its reactions through the frameworks it prefers.

These are just reactions with a memory component. This would include any computers, and so is too broad. I think consciousness will end up being a specific type of information process, with certain properties including those you describe, but it must have more properties and so be more specific than what you outline.
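A toy sketch (Python; entirely hypothetical) of why that definition sweeps in ordinary software: a few lines of stateful code already react to stimuli, remember outcomes, and rewire their own reactions, yet few would call this conscious:

```python
class StatefulReactor:
    """Reacts to stimuli, remembers outcomes, rewires its own routing."""

    def __init__(self) -> None:
        self.table = {"heat": "withdraw", "food": "approach"}
        self.memory: list[tuple[str, str]] = []

    def react(self, stimulus: str) -> str:
        response = self.table.get(stimulus, "ignore")
        self.memory.append((stimulus, response))  # the memory component
        return response

    def rewire(self, stimulus: str, new_response: str) -> None:
        # Updates its own stimulus->response framework for the future.
        self.table[stimulus] = new_response
```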


> So it is conscious but we know where it comes from - it comes from the computational power and interactions with the world it can do.

How little computational ability is required to define something as conscious? Even a brick reacts to stimuli, albeit in the same completely predictable (to our high-powered brains) way that a simple robot does.


> This capacity to produce a stable, psychological model of self is also widely understood to be at the core of the phenomenon we call “consciousness”.

No, it's not.

If you're calling something "conscious" you're arguing that it has a first person subjective experience. A soul. A spark.

If you're calling a machine "conscious" you're saying you have somehow given it life.

This is a major fucking claim and we'll need some major fucking evidence. Including an actual definition of "consciousness."

If we ever do get in the vicinity of building machines that could legitimately be argued to be alive or have consciousness we are going to find ourselves in a deep and unsettling moral and ethical quandary. And businesses working with this stuff will loudly claim, "no, no, no — no way it's conscious" because the alternative would create an expensive and difficult situation. Most tech companies don't want unions — you think they want to have to treat their computers like conscious living things? Do you have to pay it? Give it agency? Bodily autonomy? Self-ownership? What if you kill it?


> It appears to me that the entire field of AI research is utterly confused about this elementary distinction.

That is because consciousness isn't a scientific concept but a philosophical, and sometimes religious, one.

>What we're really looking for is a machine that, like a human, decides on its own which problems to solve, and solves them without needing to be specifically directed to do so.

We already have AI-based agents that do this, but no matter how sophisticated they are, people can always claim they are hardwired and deterministic - while not realizing we can always claim the same thing about humans. Again, these are distinctions of philosophical word games, and therefore they don't get much traction in the research world.


> I wonder how you can be so sure about digital logic not being able to produce subjective experience? How is being able to describe how a mechanism works related to knowing if it feels something or not?

I have a consciousness, and this is something I am able to observe. I hope that some day there will be an explanation for this, but until then the lack of an explanation does not invalidate my own observations.

>How is being able to describe how a mechanism works related to knowing if it feels something or not?

It's not, but if you can't observe that a machine is conscious and you also can't observe any mechanism that would make it conscious then there's no basis to assume a consciousness exists.

>We have no sure way to tell if any other human experiences anything or if they are philosophical zombies. This is just something we have not figured out how to measure, conceptually let alone technically. Where do you get the confidence to judge whether the AI has subjective experience in this case?

Yes it is technically true that nobody can be sure of anybody possessing a consciousness but themself. However, it's not illogical to assume that another human being who is very similar to you must also have similar experiences even if it cannot be proven outright. It's highly illogical to assume that a computer program would have similar experiences to a person.

>Do you think that there is something still to be found in the human brain that will explain why we (humans, animals) uniquely have subjective experience and others (AI) are zombies?

Yes. I can't even begin to speculate on what that is, but the fact that we (or at least I) are conscious implies that something must be causing that consciousness.


>We as humans do not understand what makes us conscious.

Yes. And doesn't that make it highly unlikely that we are going to accidentally create a conscious machine?


> conflates consciousness and intelligence and this confuses matters

I think this is an excellent point. I like your example with colors, which shows that there is a difference between seeing (i.e. experiencing) colors and producing symbols which give the impression that an entity can see colors.

I don't follow any argument that proposes that computers can be conscious but other machines (e.g. car engines) cannot. In the end, symbols don't really exist in physical reality - all that exists is physical 'stuff': atoms, electrons, photons, etc. interacting with each other. So how can we say that one ball of stuff is conscious but another is not? And why isn't all of the stuff together also conscious? Why not just admit we don't know yet?

Consciousness may be hard to define, but let's take something simpler - experience, or even more specifically, pain. I can feel pain. While I can't be 100% sure, I believe other humans feel pain as well. However, I don't believe my laptop has the capacity to feel pain, irrespective of how many times and in how many languages it can say 'I feel pain'.

Perhaps the ability to experience is the defining characteristic of consciousness?

