> What happens, for example, if we recreate the structure of the brain on a computer by interacting silicon chips where you had interacting neurons? Would it be missing anything that’s missing in the mind? I’d like to think that a computer could be conscious. I don't think consciousness is reducible to a pattern of interactions in the brain, but I do think that if one reproduces that pattern of interactions in fine enough detail, one will reproduce consciousness.
Perfectly said. Consciousness is not reducible to a pattern of interactions in the brain because it's the system of brain+world that creates consciousness. The brain learns from the world first; only later is it capable of independent imagination and thinking. A brain never connected to the world would not be conscious. An AI connected to the world (or a simulation) could be conscious of the world it exists in, and of its own existence there.
> But if it is ultimately just a biological process, no matter how complicated, then it could be simulated given true understanding and a fast enough/big enough computer?
Though commonly referred to as such in metaphor, computers are not brains. Computers also have hard limitations that are very different from the limitations of biological brains. Maybe it is possible in principle to accurately simulate a healthy brain, but unattainable in practice due to physical and economic limitations. Then again, maybe we could exhaust the global economy and every natural resource to build a planet-sized computer, simulate every minuscule part of a human brain (every cell, every neuron, all the electro-chemical interactions), and it still wouldn't be conscious; maybe that would be a far better result than if it could be. What are we even doing? What is the goal and benefit here? How would that be better than the unconscious AI we already have, and the automatons we could already produce but haven't? For a time, computers will keep getting smaller, faster, and more powerful, but if we haven't achieved Strong AI with the largest, fastest computers available today, how can we expect it when today's room-sized clusters fit in a cell phone? And if the room-sized computers of the future can be made conscious, how will that benefit anyone?
> I'm not sure there is a third possibility?
Well, imagine having two separate conscious brains: the one in your head, and another somewhere else that is also physically a part of you. Maybe we'll alter our species to evolve such separate brains; I just don't know what the point would be, as it would not be at all like multiprocessing or parallel computing, but more like slavery.
> A consciousness is unpredictable and I doubt it can actually be modeled on a computer.
Just FYI, in quantum computing the Church-Turing-Deutsch principle states that any physical process can be simulated by a computer[1]. So assuming the principle is true (computers can certainly simulate all known physics, in principle), and assuming consciousness is a physical process (however you define it!), computers can simulate consciousness.
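To make the "simulate a physical process" idea concrete, here is a toy sketch (my own example, with assumed parameters, not anything from the thread): discretizing a frictionless mass on a spring and stepping it forward in time. The Church-Turing-Deutsch claim is that, in principle, this kind of step-by-step simulation extends to any physical process, given enough precision and compute.

```python
import math

def simulate_oscillator(x0, v0, k=1.0, m=1.0, dt=1e-4, steps=100_000):
    """Semi-implicit Euler integration of x'' = -(k/m) * x."""
    x, v = x0, v0
    for _ in range(steps):
        v -= (k / m) * x * dt  # update velocity from the spring force
        x += v * dt            # then position from the new velocity
    return x, v

# After t = 10 s the exact solution is x(t) = cos(t) for x0=1, v0=0.
x, v = simulate_oscillator(1.0, 0.0)
print(abs(x - math.cos(10.0)) < 1e-2)  # simulation tracks the physics
```

Whether a brain needs more than this kind of numerical fidelity is exactly what the thread is arguing about; the principle only says the physics is simulable.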
Many secular people seem to apply a mystic woo-woo to consciousness, but aren’t willing to take the leap that consciousness is actually metaphysical. I personally don’t think it’s metaphysical, just poorly understood philosophically and basically unexplored scientifically. If you do think it’s metaphysical then I don’t know how to persuade you otherwise. The philosophy here really is shaky! But the known science states that computers can simulate consciousness.
>Instead it might be possible that your consciousness is a product of a simulation where your entire subjectivity - including the observation that you have a brain - is a manifestation of another mechanism that is outside of observability.
Ok, well in that simulation materialism is true and I can make an AI with emergent consciousness ¯\_(ツ)_/¯
> What an odd comment. So because we performed some clever math to make some basic neural nets, our AI is now experiencing a form of consciousness?
Yes, you keep re-hashing this point which makes it clear that you still (and probably always will) believe that consciousness is something 'more' -- something mystical and unexplainable -- because it's so difficult and non-intuitive to wrap your head around (as it is for most people).
> Not sure if trolling or not? You should look up protein folding and protein machines at some point, basically there is a LOT going on at the nano scale that we still don't fully understand. Additionally tell any neuroscientist that the brains neural networks are the same as our computer version of NNs and you'll be laughed out the room.
Who said our models are the same? You're missing the point. Re-read the paragraph you're replying to, and look at what we've accomplished with just dozens of simplified neurons. The point is that the details beneath our high-level understanding of neurons (basically just spiking and activation) are likely not important to consciousness. Just like a high school physics student can usefully understand the mechanics of rubber balls without fully grasping the chemistry at the atomic level.
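To illustrate the level of abstraction I mean, here is a minimal sketch of my own (not anyone's published model): a leaky integrate-and-fire neuron that keeps only "accumulate input, leak, spike at a threshold" and deliberately ignores everything sub-cellular, from ion channels to protein folding.

```python
def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Return the spike train produced by a stream of input currents."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current  # integrate with leak
        if potential >= threshold:              # threshold crossed:
            spikes.append(1)                    # emit a spike...
            potential = 0.0                     # ...and reset
        else:
            spikes.append(0)
    return spikes

print(simulate_lif([0.3, 0.3, 0.3, 0.3, 0.0, 0.0]))  # → [0, 0, 0, 1, 0, 0]
```

The question in this thread is whether the discarded detail matters for consciousness, not whether this abstraction is a faithful biophysical model; it obviously isn't.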
> And perception is a function of the brain yes, how is that an opinion? Your arguments really make no sense.
Here's the context, to remind you:
> Amazing that every sentence you wrote is objectively wrong.
>> 7. This internal stream of perceptions and judgements is consciousness.
> No, this is just a function of part of the brain, which we can be aware of through consciousness.
Just want to feel like you won an argument today? No need to turn this into a troll battle -- I'm actually trying to explain my viewpoint to you, if you're interested.
> If consciousness and the brain was only computational your argument would make sense, but many if not all neuroscientists would baulk at the claim that the brain is a computer or is fundamentally computational.
That's okay. They're allowed to be wrong. They're even allowed to be complete idiots.
The fact remains that the brain IS a computer, and we know this by the simple observation that it computes. The question is whether it is anything else.
> Have the Blue Brain project claimed anywhere that they aim for full human behaviour in 10 years?!
They predicted that an artificial human brain can be built within 10 years.
So, yes.
> It would be wonderful if they can simulate a rodent's brain with nerve inputs and get it learn to run in a maze and do similar things.
Agreed, that would be amazing.
But I can already do that with relatively simple electronics that don't even require software. What would be even more amazing is if this simulated creature showed initiative of its own.
> I thought the technical term was "biological damage"?
Not necessarily. A computer that is running a stored program in RAM is indistinguishable at the atomic level from one that has just been rebooted but is hanging at the boot prompt, waiting for a key press because there is no network to boot from.
The one is a useful tool, the other one a door stop.
Damage does not enter into it; the computer is fine, the bootstrap sequence is what is holding it up.
> You really seems to argue that anyone with some biological brain damage from oxygen deprivation loses their "soul"?!
You are trying to make me sound ridiculous because I tried to draw an analogy between how religious people (and I'm not religious) see the situation and how an electrician or a scientist would see it. State is information; the information is what can be lost, even though you still have all the physical components.
> Either you're drunk,
Thank you for that.
> trolling me
Apparently not. I don't have time to waste, despite having spent a considerable amount of time answering you.
I may be mistaken, and I'm certainly open to learning.
> or you don't know anything at all about biology
I've worked my way through a university-level course in Genetics, but I don't have any formal education in the field; in fact I have very little formal education at all. That hasn't stopped me from learning, though.
> and damage to the brain and loss of partial function and therapy.
There seem to be a multitude of failure modes, not all of them are well understood.
>>What if the state is more like 'RAM', and it needs the software running on it now to keep it going.
> I think that argument was killed by electric shock therapy.
Could you explain that?
>> I don't think they are dishonest or idiots at all, merely underestimating the scope of the problem.
> Uhm, (i) WHAT have the Blue Brain project really claimed?
To be able to simulate a full brain within 10 years.
> (ii) I'm not an expert
Neither am I, but I do know that if you make a bold claim like that, you have a problem if you fail to deliver. That would be a pity, because I think that if there is ever going to be an answer to these questions, it will come from a project like this. By overselling it they are damaging their long-term prospects.
The same happened to 'regular AI'.
> (I'll read up on brain science in a few years, when it has stabilized...
I read as much as I can, several hours a day, on lots of different subjects, including this one. I highly doubt that they have the trajectory planned out to the point where they can state that they will have a functioning artificial brain in 10 years.
> the field seems to move faster than light, right now) and the little I know is mostly on cellular level, but you really don't know anything at all about biology?
That is your assumption to make, I don't think you are right.
But I will not resort to calling you a drunk or a troll as you just did; I appreciate the time you took to write your answer.
Not everybody that you find yourself disagreeing with is drunk or a troll.
> Also, I've said that before, but we already know that brains operate under an irreversible computation model.
We don't know that brains operate under any kind of computational model at all. It's often postulated, but it's not proved. Every attempt I've seen at a proof reduces to begging the question.
Edit: To be clear I’m not saying a computational model of the brain can’t be a useful tool. Newtonian physics works quite well quite often even though reality isn’t Newtonian.
> - But internal representation of phenomena by the brains of humans certainly seems like it can be explained by biology, physics and general common sense.
Yes, it can. Computer programs have internal representation of phenomena, and so do brains/minds. This doesn't address the problem of consciousness, though.
> That the brain produces internal processes when it receives external stimuli - how hard is that?
Again, no mystery there - we make machines that produce internal processes when they receive external stimuli, and we understand pretty well how they work.
But that's the problem, because that understanding doesn't imply anything that resembles consciousness, and no-one knows how to make that leap other than either claiming consciousness is just a property of all processes, or claiming that the problem doesn't exist.
What stops you from writing a conscious computer program?
> It's not even clear exactly how our brains work so its hard to imagine that they couldn't be implemented with a sufficiently powerful computer...
Not commenting on what OP said, but I don't think this is correct. Even in principle, how can any computational process produce conscious experiences, which are by nature subjective and unquantifiable?
> It depends on whether consciousness is a computational process.
Absolutely agree. But that is the assumption I would liken to alchemists comparing lead and gold. We know almost nothing about the brain. We know almost nothing about consciousness. Yet some people assume that consciousness is computable just because we don't know anything else it could be (just as alchemists assumed lead could be transformed into gold because they were both base metals; they hadn't discovered atomic theory yet). When all you have is a hammer, everything starts to look like a nail.
We know that the vast majority of numbers are uncomputable[1]. We have also proved that computation is incomplete[2] and can be undecidable[3]. It seems perfectly logical that consciousness is not computable. Or it could be computable; I obviously don't know. But if someone makes the claim that consciousness is computable, then the burden of proof lies with them. We can't accept it on blind faith. At this point it is all opinion and speculation (as you said), because we still can't even define consciousness in a rigorous way. (And I don't think we will ever create artificial consciousness until we can define it, but that is an orthogonal issue.)
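The undecidability point can be made concrete with Turing's diagonal argument. Here is a toy sketch of my own construction (not from the thread): given ANY claimed halting-decider, we can build a program it gets wrong.

```python
def make_paradox(halts):
    """Build a program that does the opposite of what `halts` predicts."""
    def paradox():
        if halts(paradox):   # decider says we halt?
            while True:      # ...then loop forever
                pass
        return "halted"      # decider says we loop? ...then halt
    return paradox

# Try it with a decider that claims no program ever halts:
claims_none_halt = lambda f: False
p = make_paradox(claims_none_halt)
print(claims_none_halt(p))  # → False ("p will never halt")
print(p())                  # → "halted" -- the decider was wrong
```

This only shows that no general halting-decider exists; whether that limit bears on consciousness at all is, as the comment says, speculation.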
>Actually, we can be sure that a computer with a stateful predictive generative world model is likely to be conscious and self aware.
The idea that computation leads to consciousness is ridiculous. The claim that some functioning world model leads to inner experience does not even seem worth considering, because it is so obviously wrong.
> it does not explain the qualia of consciousness, or why we are not p-zombies. This is a very hard problem to chip away at from a scientific perspective.
Why do you think so? What makes you think subjective experience is not just another type of brain function, like smell, taste or value judgement?
> Do you believe that if we were capable if making a replica of a brain that performed all the same functions in silicon, and had the same computational capacity that it would be conscious? Why or why not?
Yes I do believe so. I believe that consciousness is a function of our nervous system, so if you could make a perfect copy, put it into the same state, and gave it all the same inputs and outputs, it would be conscious.
However, I think we're very far from that. The functions of the brain depend on complex and diverse sub-cellular processes which current models don't scratch the surface of replicating with any kind of fidelity, so I think it will be quite some time before we can test this hypothesis.
To pose a similar question to you, if you don't think a perfect replica of the brain would give rise to consciousness, what do you think would be missing?
> Let's imagine that we simulate the brain in all biological details on a supercomputer. Will that supercomputer be conscious?
> No. It doesn't matter whether the Von Neumann machine is running a weather simulation, playing poker, or simulating the human brain; its integrated information is minute. Consciousness is not about computation; it's a causal power associated with the physics of the system.
Can someone shine some light on what this might mean? I can't wrap my head around what's different about a physical system vs. a sufficiently powerful simulation. I can see an argument that there might be some complexity that is too difficult to compute, but just saying "nope, it has to have complex physical connections" seems arbitrary.
Edit: Ah, I missed the discussion further down the thread. Deferring to there.
> you could do all the computations required for the nematode worm neural network with a pencil and paper rather than an electronic computer.
Yes.
> Would you say that this pencil and paper is now conscious?
No. But your experiment involves "you with a pencil and a paper" where "you" is conscious.
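As an aside, the "pencil and paper" computation really is that mundane. A toy network small enough to update by hand looks like this (illustrative weights of my own choosing, not the actual C. elegans connectome):

```python
def step(state, weights, inputs):
    """One synchronous update: new activation = clamped weighted sum."""
    new_state = []
    for i in range(len(state)):
        total = inputs[i] + sum(w * s for w, s in zip(weights[i], state))
        new_state.append(max(0.0, min(1.0, total)))  # clamp to [0, 1]
    return new_state

weights = [[0.0, 0.5],   # neuron 0 listens to neuron 1
           [0.5, 0.0]]   # neuron 1 listens to neuron 0
state = [1.0, 0.0]
state = step(state, weights, [0.0, 0.0])
print(state)  # → [0.0, 0.5]
```

Every line of that loop is a multiplication or an addition you could do on paper, which is exactly what makes the thought experiment bite: the substrate performing the arithmetic seems irrelevant to the result.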
> Does building a computer out of organic particles cause consciousness?
It's not necessary to use organic particles. We will probably soon build an electronic computer which will, from the perspective of the humans communicating with it, indeed behave "as a conscious person": soon we won't need you to write the answer you wrote, because the computer will be able to make an even better one.
https://en.wikipedia.org/wiki/Blue_Brain_Project
Started in 2005, the Blue Brain Project aims to simulate the brain on a computer.
They currently have a complete working simulation of the neocortex of the rat brain.