
Philosophers in general have no special insight, but certain philosophers do, and I'd argue more so than certain experts in other professions. They seem to have the best grasp of the consciousness question when compared to physicists, biologists, computer scientists, and psychologists, who all seem to get wrapped up in applying their expertise. A philosopher's expertise lies in crafting and analyzing questions and concepts, which is the stage our understanding of consciousness remains at, and where it seems stuck for the near future.

I thought the article was very good; you claim it is not. You say

> Poking at the edges of consciousness (mostly with drugs) leads to all sorts of contradictions and challenges to what people usually think of as consciousness

Well I personally have done all the drugs, and I find those experiences have only strengthened my confidence in what I think of as consciousness. This article outlines my views more or less, which is closely related to the philosophical field of phenomenology. We can take our experience of consciousness in and of itself as a way to define consciousness and this shows how that clashes with the computational (and I think mainstream) view.

Interesting that our personal experiences are opposite. However, I don't particularly care about your or others' experiences of consciousness; I'm more interested in my own.




I think people start with the premise that consciousness is a specific “thing”, that it is unique and special to humans (and maybe dogs because we like them but definitely not spiders and flies because we don’t) and then try to work backwards to define it in some ways that keeps it special.

I don’t think consciousness is so specific, and I think people aren’t clear about how they think about it as something separate from recall, text generation, agency, etc.

My personal experience is that consciousness, like free will, is a useful illusion. Poking at the edges of consciousness (mostly with drugs) leads to all sorts of contradictions and challenges to what people usually think of as consciousness.

Aside: I’m starting to be bothered by the trend of assuming that philosophers have special insight. There’s plenty of shitty, non-useful philosophy, and there’s plenty of articles like this where someone writes in circles like they’re paid by the word. Generating text for hours without an anchor to the real world is not a productive method of generating insight about that world.

> But we must resist the allure of this seductive idea.

Why? Starting with this assumption and searching for reasons it might be true is clear motivated reasoning.


Philosophy is just thoughts and thoughts about thoughts. To really understand consciousness, researchers should study meditation (for direct experience) and traditions such as Buddhism that have been studying consciousness for millennia.

See, for example, the Buddhist descriptions of the jhanas, progressive levels of consciousness in which meditators peel back the layers of their personality and human awareness and end up in pure awareness and beyond. It's hard to read about (and experience, albeit only the initial stage in my case) such things and still doubt that consciousness doesn't derive from thought, as philosophers like to believe (no, Descartes, you are not just because you think).

It's for this reason I don't buy the AGI hype. Maybe after fundamental breakthroughs in computation and storage allow better simulations, but not any time soon since these traditions tell us consciousness isn't emergent. Most AGI researchers are barking up the wrong tree. Still, the hype boosts valuations so perhaps it's in their best interests anyway.

Philosophers can get so wrapped up in thoughts they say nonsense like "I can't comprehend not having an internal monologue", which you can experience any time you watch a film, listen to music, etc. Someone with only the smallest experience of meditation shouldn't fall into such thought traps.


Philosophy isn't science. But we can ask whether consciousness can someday be explained by science, and what that might look like. David Chalmers proposed a science of consciousness that allows for a non-physical property that comes with rich information processing. A new kind of law that specifies the existence of conscious states any time there is such information in the universe. Panpsychism would be another approach.

I'm not claiming either are correct. I'm just pointing out that consciousness is very hard to fit into a scientific framework. If it emerges, what does that entail? Certainly more than merely making that statement.


For small values of "consciousness". The author writes "Consciousness is any experience, from the most mundane to the most exalted." One could call that "input". Still, the author is a serious neuroscientist, not a philosopher. He did, however, start with a Jesuit education, which biases one towards thinking that humans are special.

I personally think that trying to understand "consciousness" should be deferred in favor of trying to understand "common sense", or "how to avoid doing bad stuff in the next few seconds". Once we have a handle on that basic survival mechanism, seen in all mammals, then we might be able to address consciousness. Also, we could build robots able to keep themselves out of trouble. Biggest unsolved AI problem right now. It's the basic reason computers do stupid stuff sometimes.

Trying to understand "consciousness" without the ability to emulate basic common sense results in philosophical tail-chasing. There's a lot of that around.

I got too much of this going through Stanford CS in the mid-1980s. This was during the "expert systems" period, just about when expert systems turned out not to be that useful. The philosophers were still in charge. One exam question was "Does a rock have intention?", which gives a sense of where the faculty was coming from back then.

(The reviewer meant "adduce", not "abduce". "Abduce" is just moving your limbs outward. The muscles that power such movements are called abductors.)


Well, you could try asking neuroscientists about consciousness rather than philosophers.

But that would, apparently, be far too much of a downer for most people, finding out that questions can be answered with evidence.


> Consciousness is everything a person experiences — what they taste, hear, feel and more. It is what gives meaning and value to our lives, Chalmers says.

> Despite a vast effort — and a 25-year bet — researchers still don’t understand how our brains produce it, however. “It started off as a very big philosophical mystery,” Chalmers adds. “But over the years, it’s gradually been transmuting into, if not a ‘scientific’ mystery, at least one that we can get a partial grip on scientifically.”

I'm sure these two scientists have interesting things to say that I never thought about.

But I stopped reading the article at this point, because that's where I expected a precise definition of consciousness. That first sentence isn't one as far as I am concerned.

Philosophy without strict definitions and a common language tends to be useless. And I love philosophy and consider it the root of all science.


> After all, does it seem like any one of these features, or all of them combined, comprise what William James described as the “warmth” of conscious experience? Or, in Thomas Nagel’s words, “what it is like” to be you? There is a gap between the ways we can measure subjective experience with science and subjective experience itself. This is what David Chalmers has labeled the “hard problem” of consciousness. Even if an A.I. system has recurrent processing, a global workspace, and a sense of its physical location — what if it still lacks the thing that makes it feel like something?

> When I brought up this emptiness to Robert Long, a philosopher at the Center for A.I. Safety who led work on the report, he said, “That feeling is kind of a thing that happens whenever you try to scientifically explain, or reduce to physical processes, some high-level concept.”

Writers once talked of an 'elixir of life' - some indescribable quality of biological life that could not be replicated in the laboratory. It gave life a hidden, magical quality. We now know that the concept of an 'elixir of life' is false - we can replicate, at a chemical level, all the building blocks of life.

We'll probably feel this way about consciousness in the coming years, but articles like this from the NYT indicate that the general public is not there yet.


Also, you're not being rational and consistent. First you say nobody truly knows what consciousness is, that philosophers debate about it... then you define it as an "experience." Two inconsistent statements. As for your second statement: isn't that just a definition you chose that many "philosophers" agree or disagree with?

Don't answer that question. It's rhetorical. A rational discussion can't be had with someone who is inconsistent with the statements he makes or someone who is deliberately trolling. I suspect you are the latter.


I personally don't spend a large amount of time thinking about consciousness because I question the practicality. The subject matter is so dense, and like other(s) have stated, philosophers have dedicated their entire lives to pondering the issue.

Frankly it sometimes upsets me when I do put an extensive amount of time into it and don't end up coming up with anything conclusive; it is depressing. There is (obviously) an incredible amount of ambiguity surrounding the subject, and there is something about this particular issue that is unnerving.

On the other hand, if you can stomach the subject matter it is very stimulating; unfortunately it does not lead to concrete conclusions.


Sure, I honestly don't have any problem defining consciousness in terms of the human experience of it. After all, to my mind that's what it is - an experience. We can use scientific processes and techniques to analyse it, for sure, to trim away misconceptions and more precisely understand its parameters, but we're never going to identify a 'consciousness particle' or special quantum entanglement whatsit that Roger Penrose seems to believe is responsible for it. I'd rather just embrace the fact that this is a philosophical question. Science can illuminate philosophy, just as philosophy can illuminate science.

Absolutely not. After 2500 years, Western philosophy has yet to achieve broad consensus on anything at all. Philosophers don't have a clue what consciousness is - they can barely even describe it without using utterly ambiguous words like "qualia" or "sense datum". The only way we'll ever be able to know anything about consciousness besides what's going on inside our own heads is for us to directly experience what's going on inside someone else's (or some animal's).

Not until the Singularity, basically. I'm not going to hold my breath for that one.


Why is this subject so fraught? The brain is a wet computer. Reality is its internal model of the external world informed by imperfect sensory inputs. Consciousness is what thinking feels like. When the brain ceases to function, the individual is destroyed. So what? There's no mystery here. The knowledge to be gained has to do with the operation of the brain, not the nature of "consciousness", whatever that is.

As far as I'm concerned, after Wittgenstein, philosophy has had nothing to do. All the "big questions" are either answerable scientifically as a matter of neurology or they're ill-posed and nonsensical.

Actually, I'm wrong: philosophy does have an occupation these days. It's complicit in the creation of utter nonsense like continental thought and critical theory, all of which amounts to sophistry that justifies us in exercising our worst impulses toward magical thinking.

Why bother?


Most of what I've heard about consciousness and experience from philosophers involves arbitrary constructs such as "qualia"; one of the most prominent philosophers of mind posits that consciousness should be a ubiquitous property on top of the physical world, leading to some sort of panpsychism [1].

While charming, these sound to me like borderline new-age dualist contraptions. Consciousness and experience are under active investigation by neuroscientists, and we have no reason to believe they won't be explained away.

Imho, the most valuable domain of philosophy is still the domain of Ethics.

[1] http://en.wikipedia.org/wiki/Hard_problem_of_consciousness


Perhaps this is the point. If you don't have an agreed-upon definition of the word, it is not a useful tool. A claim of consciousness, if that claim is meaningless, isn't useful.

But aside from that, there is a lot of philosophy on what consciousness is (https://en.wikipedia.org/wiki/Consciousness has some of it). And those people, especially philosophers in the crossover of computer systems/intelligence and general philosophy are "experts".


Quoting my previous comment: "I think it also useful to semantically separate cognitive consciousness (i.e. knowing and expressing the existence of your thought processes through arguably higher, more abstract thought processes - which might even go on recursively - knowing that I know that I am conscious, etc.) from the externally unmeasurable 'conscious experience' (i.e. qualia, the awareness of sensations or thoughts at the most essential level, 'seeing' what one sees, etc.). One could imagine a living being with one but not the other, for example qualia without cognitive consciousness (if I had to guess, I would imagine this experience to be similar to being drugged to the point of having no internal monologue and no complex thought process, but keeping your sensations and vision, etc., or being a barely-sentient animal in a purely instinctual mode of thought and action).

The opposite, cognitive consciousness without qualia - a.k.a. the philosophical zombie - or a computer which can argue the existence of its thoughts without feeling them, is, I gather, a more controversial state of being.

What I find interesting is that in separating the two consciousnesses, the former ends up taking almost all of the importance and the latter none: anything which can be externally measured ends up in the first category (which is a computable logic process), which leaves very little of utilitarian/evolutionary/algorithmic importance in the second. However, in much discourse about consciousness the latter takes a disproportionate role (i.e. fear of losing your unmeasurable consciousness when teleporting, etc., though cognitive consciousness, being by definition a logical and measurable process, is theoretically preserved)"

In this case, although the article, to get quick views, appeals to our love for the mystical attributes of the 'conscious experience' phenomenon, the paper actually studies the (in my opinion much more) interesting empirical characterisation of the level of cognitive consciousness in insect 'brains', i.e., how elaborately their software models its own processes.


Nothing about the topic of consciousness is fundamentally testable, which is why it's often a topic for philosophy rather than science.

Arrgh! This is what annoys me about reading much philosophy. The article is about consciousness, a term used by many different people in many ways and with many different implied definitions. But does the guy give a clear definition of what he's talking about? No, he leaves it vague but waffles on for hundreds of words in a way that you can't say whether it's right or wrong, because you're not sure what he means.

If you are using consciousness to mean being aware of stuff in a way that you can act on it, then the questions are fairly simple: humans are conscious when in a normal state, not so when knocked unconscious. Likewise rabbits. The United States can show collective consciousness in that its citizens in aggregate can be aware of things and react. If you look at a different meaning of consciousness, in terms of subjective experience, then probably other people have similar experiences, and rabbits a simpler version, but it's hard to tell.

I don't get why philosophical writing tends to be so vague and waffly. Maybe because philosophers don't achieve much in the real world, unlike, say, neuroscientists studying consciousness, and so need to hand-wave and be vague to cover up the lack of real content?


How droll, isn't this the equivalent of trying to figure out how an operating system works by trying to look at activity on an encrypted SSD?

And how exactly is this a win for philosophers? I'm not aware of any philosopher having made meaningful strides within the subject either.

I'm sure once someone figures out how consciousness works it'll be irrefutable while seeming incredibly obvious in hindsight. Which, honestly, just makes it more frustrating that there isn't a full accounting already. How many key insights are needed to fully describe consciousness and how many do you think have been found? Do you think anyone in the past has figured out how consciousness worked and taken it to their grave because of how obvious it seems?

Maybe it's like searching for the Dragon Balls, and you need to put all 7 key insights together to figure it out.


Philosophers of mind still insist on metaphysical descriptions of consciousness, even though there is so much progress in neuroscience that it's hard to take them seriously anymore. Afaik, most young neuroscientists abstain from these endlessly goal-shifting discussions. It's a pity, too, because with recent progress in deep learning there couldn't be a more exciting time to do philosophy of mind.

(You should have realized that HN has been infiltrated a long time ago)

