Blindsight is a very compelling exploration of the idea that consciousness may play different roles in advanced intelligence. We know that routing a decision through consciousness on, say, the visual processing loop ends up costing 120ms or so. Why is this a good thing? What if you can have advanced intelligence without consciousness? Some work in AI (and some worries people have about AI) suggests this may be a real possibility as well.
The end result for me is a much more vivid sense of the varieties of intelligence that may exist in the galaxy, or that we may end up creating ourselves.
Blindsight by Peter Watts is a science fiction novel with pretty interesting ideas on consciousness. In the book, consciousness is presented as an evolutionary accident, with no added survival value for the species. I find that idea quite disturbing and extremely interesting!
Blindsight was a well-crafted book, but it somewhat underplayed the benefits that consciousness gives us. We're bombarded by a huge number of sensory impressions at any given moment. The unconscious parts of our brain do a huge amount of work processing them and figuring out which bits are important. But without consciousness to synchronize, serialize, and persist that information, it would just fade away within a few seconds. To remember something, talk about it, or engage in multi-step planning around it, you need to be consciously, not merely subliminally, aware of it.
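To make that "synchronize, serialize, persist" picture concrete, here's a toy sketch, loosely in the spirit of global-workspace-style models rather than anything from the book; every name and number in it is made up for illustration:

```python
import time
from collections import deque

class Workspace:
    """Toy 'conscious workspace': unconscious processing buffers everything,
    but only the most salient item gets serialized into awareness and
    persisted; the rest fades after a short window."""

    def __init__(self, fade_seconds=2.0):
        self.buffer = []        # unconscious sensory buffer: (timestamp, salience, stimulus)
        self.memory = deque()   # what made it into awareness, available for recall/planning
        self.fade_seconds = fade_seconds

    def sense(self, stimulus, salience):
        # Unconscious processing: every impression lands here, timestamped.
        self.buffer.append((time.monotonic(), salience, stimulus))

    def step(self):
        # Impressions that were never attended to simply fade away.
        now = time.monotonic()
        self.buffer = [(t, s, x) for (t, s, x) in self.buffer
                       if now - t < self.fade_seconds]
        if not self.buffer:
            return None
        # Serialize: a single most-salient item is 'broadcast' and persisted.
        _, _, winner = max(self.buffer, key=lambda item: item[1])
        self.memory.append(winner)
        return winner

ws = Workspace()
ws.sense("door slam", salience=0.9)
ws.sense("hum of the fridge", salience=0.1)
print(ws.step())  # -> door slam; the hum never makes it into memory
```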
The relation between consciousness and intelligence seems to be a problem for everyone, including the authors of the article.
Hmm...
Peter Watts, Blindsight.
The novel explores themes of identity, consciousness, free will, artificial intelligence, neurology, and game theory as well as evolution and biology. [Wikipedia]
Especially interesting, and it includes a long bibliography.
Blindsight by Peter Watts also explores how something can be intelligent but not conscious. Amid the current LLM hypefest, it's interesting to consider that they may be similar.
If you enjoy this article you might also enjoy the novel "Blindsight" by Peter Watts, which is available for free under Creative Commons and has some interesting ideas about consciousness vs. intelligence.
Go read Blindsight, despair, then realize consciousness is a decent intelligence-bootstrapping mechanism that is likely convergent and common in the universe. Then wonder how quickly it will become vestigial, or whether it already has.
It argues that consciousness evolved out of sensation: we developed an "inner self" to predict how sensations would affect us, and it's that inner self that became our consciousness.
Don't miss the comments section; the author answers a lot of questions there.
Former neuroscientist here, whose lab studied consciousness. I'm not sure about the book, but the review is light on details.
We know biology is connected somehow to conscious awareness (sleep, illusions, blindsight, TMS, etc.) but we still haven't jumped the gap separating the objective from the subjective. It's called the Hard Problem for good reason.
Very interesting paper. It reminded me of the discussion on the level of consciousness a system possibly/potentially has in Consciousness: Confessions of a Romantic Reductionist by Christof Koch.
Experience isn't the right way to put it. The system you describe is straightforward, and it makes sense how it can understand and act upon its environment. What is unexplainable, and perhaps always will be, is the awareness that I am. Even if I meditate and strip away every aspect of the surrounding world, what I'm left with is a pure awareness that groups of neurons simply can't explain.
Why am I present at all, and not just a mindless automaton? Surely an automaton could evolve and be just as successful simply by being a complex neural network. This is why I, and many others, think that awareness is fundamental.
My apologies, the actual book had a better format. Although not quite a summary, here's what it's about:
"The content that most interested me was Hofstadter’s insistence that self-awareness and consciousness arise directly from what he calls “strange loops”, i.e. self-referential structures in formal systems, of which human minds are just one example (what else could they be?). It’s a very difficult subject to think about in a reasonable way. We all have that sensation of the homunculus inside our heads, somehow driving us from a seat just behind our eyes, and we naturally ascribe the same sense of self-awareness to other systems like ourselves, other people. But where does that sense of self-awareness really come from? There isn’t a homunculus driving us! All there is in our heads is a kilogram or two of glucose-fueled computing machinery, and yet somehow, from the fluctuations and vacillations of that tissue, our sense of I arises. How on earth does that happen? And if it can happen in our brains, can it happen in other systems? In a lot of discussions of artificial intelligence, there is a failure to appreciate the scale issues at work in this problem (Searle’s Chinese Room is a particularly egregious example). There are tens or hundreds of billions of neurons in the human brain with hundreds of trillions of connections. How do we begin to imagine or understand what epiphenomena might arise from that scale of complexity? Hofstadter tries to get at this scale problem a little using the example of a self-aware ant colony called Aunt Hilary, a friend of one of the characters in the dialogues, an Anteater.
Aunt Hilary knows little and cares less about ants, and the Anteater even now and then eats some of the ants composing the computing substrate on which Aunt Hilary’s personality runs. The other participants in the dialogue where this is discussed are a little horrified and not a little sceptical, but the Anteater insists that Aunt Hilary is just as self-aware as they are, and that there’s nothing particularly surprising about the whole setup. The analogies ants : ant colony :: neurons : brain and ant colony : Aunt Hilary :: brain : you or me are whimsical and a little silly, but render the central issue of consciousness in real clarity: how can the collective interactions of a large number of (more or less) simple elements lead to complex emergent behaviour?"
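That closing question, how simple elements collectively produce complex emergent behaviour, can be illustrated with something as small as an elementary cellular automaton. A minimal sketch (a stock example, not something from the book):

```python
# Elementary cellular automaton, Rule 110: each cell applies one trivial
# local rule, yet the collective pattern is famously complex (Rule 110 is
# even Turing-complete). Simple parts, emergent whole.
RULE = 110

def step(cells):
    n = len(cells)
    out = []
    for i in range(n):
        # A cell's next state depends only on itself and its two neighbours.
        neighborhood = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((RULE >> neighborhood) & 1)
    return out

cells = [0] * 40 + [1] + [0] * 40  # a single live cell in the middle
for _ in range(20):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```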
I quite enjoyed The User Illusion (the author has a great way of looking at the world). I think its chapters on consciousness are mostly about what consciousness gets to see. Building up from the idea that there is a very limited amount of information (as in Shannon) going to consciousness per second, plus some other experimental evidence (e.g. Benjamin Libet's experiments), to his thesis that consciousness is essentially a "user illusion".
But if I recall correctly, the book doesn't touch on the question of how consciousness can arise in the first place, other than a passing reference to GEB's strange loops.
I can see some value, for example, in the idea that consciousness (maybe I should rather say subjective perception or awareness) somehow emerges out of a reasonably complex system.
But that wouldn't explain what it is in our universe that allows awareness to emerge to begin with.
I think some fun questions to think about are:
* Can we create awareness in a circuit by making it complex enough? What are the requirements?
* If we simulate a universe, does it inherit the ability to create awareness from our universe?
* How the heck do you validate any of these answers?
Well, I'm confused now; consciousness is too hard. Sorry if this sounds like gibberish.
EDIT: This isn't meant to be a criticism of the book, which I can only recommend to anyone.
I'm working my way through Being No One at the moment. It's very interesting, and (to me, anyway) sounds like he's on the right track with his 'transparent self-model' understanding of consciousness.
Interestingly a lot of robotics seems to work the same way - the robot has a model of itself and the world around it, and when it "looks at the world" it's actually looking at that model.
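A minimal sketch of that pattern, with hypothetical names (real robotics stacks separate perception, state estimation, and control far more elaborately): the controller never consults the raw sensors, only the internal model that perception keeps updated.

```python
class WorldModel:
    """The robot's internal picture of itself and its surroundings.
    Perception writes to it; control only ever reads from it."""
    def __init__(self):
        self.self_pose = (0.0, 0.0)  # believed own position
        self.obstacles = []          # believed obstacle positions

    def update(self, sensor_reading):
        # Fuse (here, trivially copy) noisy sensor data into beliefs.
        self.self_pose = sensor_reading["odometry"]
        self.obstacles = sensor_reading["detected_obstacles"]

def control_step(model):
    # The controller 'looks at the world' by looking at the model.
    x, y = model.self_pose
    if any(abs(ox - x) + abs(oy - y) < 1.0 for ox, oy in model.obstacles):
        return "stop"
    return "advance"

model = WorldModel()
model.update({"odometry": (2.0, 3.0), "detected_obstacles": [(2.5, 3.2)]})
print(control_step(model))  # -> stop, based on the belief, not raw sensing
```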
Yeah, I'm not sure how it really works, just that I couldn't see the book (or, I guess, couldn't experience seeing the book), but that didn't stop me from reading it.
I figure our brains sort of cobble together an illusion of selfhood out of a whole bunch of partially-independent processes. Most of the time the illusion is reasonably well supported by decent coordination among the processes, but the coordination isn't instant or perfect, and the gaps become more noticeable if something messes with one or more of the underlying processes.
There are a few other odd effects that seem to me like maybe they're examples of similar kinds of mess-ups. There's the simultaneous waking/sleeping I've experienced--that was a combination of extreme fatigue due to chronic fatigue syndrome, plus a strong alertness-enhancing drug.
And there's the absolute conviction someone close to me occasionally had that an invisible person was in the room with us (even though she was simultaneously aware that no such thing could be true). I think maybe that one was the effect of a brain injury she'd suffered in a car accident.
Blindsight by Peter Watts has some interesting ideas about how consciousness may not be a necessary condition for advanced life, and may in fact be a hindrance and an evolutionary dead end. The appendices also contain a lot of good references on the subject.
And it did a lot to change my mind. I'm still digesting it.
Before Blindsight, I was firmly of the opinion that any system that was at least human-level (let's say indistinguishable from a human) would also have to have some internal subjective experience; that the internal subjective experience would be another emergent phenomenon. As the neural network processed input, it would have the same internal awareness as a human brain.
But after reading Blindsight, I am questioning that.
So can we get automatons that have no thought, but could also become greater than humans? With enough complexity in their responses, could they adapt just as easily to new situations?
I'm not really ready to think that is possible.