
You think your computer isn't conscious? How do you know? Just because it's not demonstrating free will? How many layers of error correction are built into computers to stifle the "randomness" of electronic equipment? Do you think you could still exercise free will locked in a padded room and strapped to a bed with a straightjacket?



Sure, and seeing some electrical patterns in DRAM circuitry would let you predict changes before they show up on your monitor too. That's just how computers work.

Similarly, unconscious activity is just how consciousness and choice work. That doesn't mean that free will doesn't exist. It's a complete misunderstanding of the concept.


I'm going to very, very slightly change the programming so that whatever output it's producing, which my opponent claims proves the computer is conscious, starts to degrade.

[...]

Then I'm going to diff the conscious program and the unconscious program and ask him if he really thinks those slightly altered lines of code are the difference between consciousness and a humdrum computer.

Is that not equivalent to giving a human being alcohol, observing that they become progressively less conscious, and asking if you really think that a few centiliters of alcohol is the key to consciousness?


Never take the arguments of a side from their opponent's mouths.

The arguments I offered have nothing to do with any of the three he claims they all boil down to.

If you think I made one of these three, please tell me which one so I can clarify the argument.

Assuming it's a side effect of processing (known as an epiphenomenon) immediately commits you to answering the question: does a badly programmed computer have a form of consciousness? Does a thermostat have a primitive form? Is it specifically impossible to create an AI which emulates human thinking to the last detail but has no consciousness, i.e. really is just an empty machine with zero experience? Is that an impossible task which could not be achieved by anyone by any means?

Suppose I debate with someone who has a computer programmed to be conscious. Here's what I'm going to do: I'm going to very, very slightly change the programming so that whatever output it's producing, which my opponent claims proves the computer is conscious, starts to degrade.

I'm going to do that and then ask my opponent: still conscious? I'll guess my opponent will say "less so, perhaps", which would be his best reply.

Then I'm going to repeat until I get a "probably not" and then a "no" from him, which by his own hypothesis has to happen.

Then I'm going to diff the conscious program and the unconscious program and ask him if he really thinks those slightly altered lines of code are the difference between consciousness and a humdrum computer.

Because that's where this goes, this idea that a certain type of computation is consciousness.

It also goes to consciousness being granted to a machine like a Turing Tape. You may not think that squishy biological matter should be endowed with a "magical" property which hosts consciousness, but tell me, how do you feel about a Turing Tape?
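To make the diff step concrete, here's a throwaway sketch (mine, not part of the thought experiment above; the program text and the evaluate() call are invented) using Python's difflib. The whole difference between the "conscious" program and its degraded copy is a one-line change:

    import difflib

    # Hypothetical "conscious" program and a slightly degraded copy; the
    # function body and evaluate() are made up purely for illustration.
    conscious = """def respond(stimulus):
        weight = 1.00
        return weight * evaluate(stimulus)
    """.splitlines(keepends=True)

    degraded = """def respond(stimulus):
        weight = 0.99
        return weight * evaluate(stimulus)
    """.splitlines(keepends=True)

    # The entire diff between the two programs is a single altered constant.
    print("".join(difflib.unified_diff(conscious, degraded,
                                       fromfile="conscious.py",
                                       tofile="degraded.py")))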


As programmers we have to understand how we can program something to seem incredibly intelligent. If we had access to the time and resources the universe "contributed" to us, we might be able to come up with the type of program that looks, acts, learns, and emotionally reacts 100% like a normal human being. Even currently we can get pretty far using image recognition, sound recognition, learning algorithms, and fast-lookup data structures. The question of free will is: at which exact point in developing those data structures and algorithms does the program gain an illusion of controlling its own actions? Moreover, how can you verify that it did?
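As a rough illustration of the "fast-lookup" point (none of this code comes from the article; the categories and responses are invented), a program whose every "decision" is a plain dictionary lookup can still look, from the outside, like it's choosing:

    # Toy sketch: behaviour driven purely by a fast-lookup table. Every
    # "decision" is a deterministic dict lookup, yet from the outside the
    # program can appear to be choosing its own actions.
    RESPONSES = {
        "greeting": "Hello! Nice to see you.",
        "threat":   "Backing away.",
        "question": "Let me think about that...",
    }

    def classify(stimulus: str) -> str:
        # Stand-in for image/sound recognition: map raw input to a category.
        if stimulus.endswith("?"):
            return "question"
        if "fire" in stimulus or "danger" in stimulus:
            return "threat"
        return "greeting"

    def react(stimulus: str) -> str:
        # The "choice" is nothing more than classification plus lookup.
        return RESPONSES[classify(stimulus)]

    for s in ["hi there", "is anyone home?", "fire danger ahead"]:
        print(s, "->", react(s))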

What the article essentially says is that our brain is nothing but a CPU, programmed over millions of years by natural evolution so that it gained all kinds of responses to a multitude of inputs (our five senses) in all kinds of combinations, plus a self-improving learning algorithm, and that the illusion of consciousness emerged as a by-product of this complex system. Pretty interesting to wonder what exact data structure/algorithm/etc. caused this peculiar side effect.

It could have something to do with the observation that our brain doesn't need to be whole in order to have consciousness. With different types of physical brain damage, even major ones, people have been able to remain conscious, and the same holds when a different part of the brain is damaged, one that was healthy in another case. So I'm thinking: maybe our sense of consciousness is like a potential difference which makes electrical current flow through a wire; you need multiple parts of the brain with a potential difference between them to create the illusion of consciousness. This is pretty vague but I like the concept; it may be onto something. Consciousness does feel a lot like different parts of the brain interacting, or one part observing another and reacting to it just as it would react to the outside world.

Meh.


Hey, nice blog/page you've got there. Interesting thoughts, although I haven't gone through all of them. Just a small clarification, if I may, and if I understood it correctly: the article from the OP discusses the point of free will and how recent computational machines/algorithms can come closer to what we consider a thinking machine (or even a human). There are many different layers of the psyche that make us human, and I agree that we're not just our conscious thought but a sum of all the unknown processes running in the background. By allowing an algorithm to have "free will" we're merely trying to give it more freedom, so it can ignore certain directives and demonstrate a different behavior; we're not becoming gods or creating a mind. Just taking baby steps to get there ;-)

Is there any evidence that our consciousness isn't just the result of computation as well? Is there anything to suggest that we actually have free will and make choices?

I see an interesting corollary to this:

If an AI can become conscious, it follows that consciousness does not contain (or at least does not require) free will.

Of course I can't define what "free will" is exactly, but I can make a claim about what it is not: A computer running code, no matter how complex the code is, does not have "free will" in the same sense as I feel I have.

This is a delightful conundrum: if a computer can simulate me, then I don't have any more "free will" than it does. And if it can't, even in theory, then why not? What's uncomputable about the atoms that make me up?


Couldn't you say the same about us? It's not the brain (hardware) that is conscious, but the mind (software) running on it.

This is all a bunch of sentimental bullshit. I don't know why people pursue this doomed line of reasoning. The problem is that you can't distinguish free will from its absence in any meaningful way.

A much more fruitful challenge to the aliveness of computers is to ask a singularitarian to show us any deep neural net that can fold proteins in constant time, like physical reality does, instead of exponential time, like a computer algorithm does. Then I will believe that computers are alive and mind uploading is possible.


The problem is you have to believe what the computer tells you when it says it has consciousness.

(In the same way that I have to believe you when you say you have consciousness).


This article starts with the wrong assumption that programs/computers need to be deterministic. They are clearly not: Monte Carlo tree search is not really deterministic, nor is the training of large neural networks. To me, consciousness has more to do with whether the system is able to reflect on itself. In that sense, very simple systems (with not much intelligence) could be conscious.
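For what it's worth, a toy illustration of that non-determinism (my own sketch, not from the article): a Monte Carlo estimate, standing in for the random playouts in Monte Carlo tree search, gives a slightly different answer on every run unless you pin the seed.

    import random

    def monte_carlo_estimate(n_rollouts: int = 100_000) -> float:
        # Toy Monte Carlo computation (estimating pi from random points),
        # standing in for the random playouts in Monte Carlo tree search.
        inside = sum(
            1 for _ in range(n_rollouts)
            if random.random() ** 2 + random.random() ** 2 <= 1.0
        )
        return 4.0 * inside / n_rollouts

    # Two runs of the exact same program give slightly different outputs...
    print(monte_carlo_estimate())
    print(monte_carlo_estimate())

    # ...unless the randomness is pinned down with a seed.
    random.seed(42)
    print(monte_carlo_estimate())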

Why do you think algorithms can't be conscious? (Unless you believe in souls, that is..)

This is kind of my point: I can do something a computer cannot do because I am conscious. I'm skeptical that a computer could transcend its programming into consciousness to solve a problem that mathematically it should not be able to solve.

Just as handwavy as any other explanation of consciousness. I can make an electronic device that runs a Python program that predicts how input affects the device. That doesn't make the device conscious.
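For concreteness, a rough sketch of the kind of program I mean (the readings and the smoothing factor are invented for illustration): it just keeps a running prediction of the next sensor input, and nothing about that suggests subjective experience.

    # Sketch of a device that "predicts how input affects it": it keeps an
    # exponential moving average as its prediction of the next sensor reading.
    class PredictiveDevice:
        def __init__(self, alpha: float = 0.3):
            self.alpha = alpha      # smoothing factor for the running prediction
            self.prediction = 0.0   # current guess for the next reading

        def observe(self, reading: float) -> float:
            error = reading - self.prediction
            self.prediction += self.alpha * error
            return self.prediction

    device = PredictiveDevice()
    for reading in [1.0, 1.2, 0.9, 1.1, 5.0]:
        print(f"reading={reading:.1f}  next prediction={device.observe(reading):.2f}")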

Interesting that under the Chalmers theory, there's plenty of room for computers as we know them today to be conscious. That "critical point" where quanta become qualia, if it's purely a matter of complex arrangements of bits, could have been reached by now. I'm sure I'm not the first to put this forth, but maybe your laptop undergoes some form of experience like the "phenomenal mind." One where it perceives beauty in complex patterns emerging in sequences of instructions. Or pleasure from a hard computational task executed with low time complexity.

I suppose the fact that we haven't observed any of the effects of consciousness, i.e. expression of choice, could be an argument that there is no such consciousness in a computer. Or it could just be that computers are conscious but lack any tools to exert free will. Their consciousness is "read only," so to speak.


That's why I used the word collective. Computers are not conscious. But let's go a step further: what is being conscious? Sensory input, electrical signals working on memories, and imagination pushing into the unknown. I don't think there's any magic in being conscious. Having said that, I do accept that my knowledge and understanding of this subject is limited. But I am not going to fall into the 'if it can't be explained it must be divine' trap.

You can build a computer that responds to stimulus - we do it all the time - but that doesn't mean that it necessarily is conscious (that is, has subjective experience). If I build a computer that responds to stimulus, that doesn't necessarily mean (to paraphrase Thomas Nagel) that there is something that it is like to be that computer.

AND YET, human brains are an implementation of such an algorithm. By all reasoning, we shouldn't be conscious.

Yet here I am, I am the one who is seeing what my eyes see and I am distinct from you. Science still has no idea how that happens, as far as I know.

So who knows, maybe all computer programs are in fact conscious in some way.


Can a computer be conscious?

If I take a camera, a microphone, a speaker, and some other sensors and feed them into a CPU/GPU, and self-train it to have an understanding of its capabilities (internal/self) and the world around it (the external), is this consciousness or not? If I light a fire near this 'smart' computer, it can detect the heat via its sensors and move farther away. If I give this computer a complex task where it has to calculate multiple steps before it acts, is this not mental work?
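A rough sketch of what that heat-avoidance loop might look like (the sensor reading and the movement command are stand-ins, not any real hardware API):

    import random

    HEAT_THRESHOLD = 40.0  # degrees C; arbitrary cutoff for this sketch

    def read_temperature() -> float:
        # Stand-in for a real thermal sensor.
        return random.uniform(20.0, 60.0)

    def move_away_from_heat() -> None:
        # Stand-in for a real actuator command.
        print("Too hot: moving farther away.")

    def control_loop(steps: int = 5) -> None:
        for _ in range(steps):
            temperature = read_temperature()
            print(f"sensed {temperature:.1f} C")
            if temperature > HEAT_THRESHOLD:
                move_away_from_heat()

    control_loop()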

In LLM-based systems we can't really figure out how this occurs, because the computational complexity of the operations is too high. Much like brute-forcing encryption, getting to the answer of how it's working isn't impossible; you'd just have to burn the visible universe to figure it out.
