> That I am having a subjective experience is undeniable.
If you wanted to go about proving (even to yourself) that you are not, say, an extremely advanced ML algorithm running on a system that provided synthetic inputs in the form of your senses, how would you go about it?
Further, how would you go about proving your subjective experience was real to someone who doubted it? Say, if they believed they were having a dream or hallucination, or they believed you were incapable of consciousness? (People actually sometimes have to do those things.)
To me, if it were "undeniable" these would be much easier things to do.
>> That I am having a subjective experience is undeniable.
>If you wanted to go about proving (even to yourself) that you are not, say, an extremely advanced ML algorithm running on a system that provided synthetic inputs in the form of your senses, how would you go about it?
Not GP, but I too have come to the conclusion that I'm having a subjective experience.
Let's assume that I am "an extremely advanced ML algorithm running on a system that provided synthetic inputs in the form of your senses."
I'm still having a subjective experience over here -- even if it's not a "real" (whatever that means) one.
>To me, if it were "undeniable" these would be much easier things to do.
And that's your subjective experience. Welcome, friend.
I'd say "I am having a subjective experience" is tautological - "subjective experience" are simply words we use to describe the state of being conscious.
Though I'd agree much (most?) of it is made up of very strong illusions.
>I'd say "I am having a subjective experience" is tautological - "subjective experience" are simply words we use to describe the state of being conscious.
Please read the comment I replied to. That should clear things up.
>Though I'd agree much (most?) of it is made up of very strong illusions.
> If you wanted to go about proving (even to yourself) that you are not, say, an extremely advanced ML algorithm running on a system that provided synthetic inputs in the form of your senses, how would you go about it?
Well, since an extremely advanced ML algorithm wouldn't want to go about proving to itself that it is not what it is, that would be prima facie evidence against, no? I mean, it's always possible that you are mistaken about what constitutes ML, etc., but assuming you have a reasonable if flawed correspondence between your education and reality, the deduction comes pretty readily...
> Further, how would you go about proving your subjective experience was real to someone who doubted it? Say, if they believed they were having a dream or hallucination, or they believed you were incapable of consciousness?
I mean, in practice we don't find this too hard right now if the other person is reasonable (a 15-minute conversation usually suffices), but I imagine from your prior question you're dreaming of, say, a future with robots that routinely pass the Turing test?
Well, the question is what science does during that time, of course. If science manages to figure out the correlates of consciousness and understands something about why they need to have the structure that they in fact do have, then it becomes a question of “let's see whether you have the hardware that can do this whole conversation thing without consciousness, or whether you have the hardware that skips the algorithmic complexity by using consciousness.” But if this proves a much tougher nut to crack, then we're stuck with our present crude methods: “How much of my internal structure do you appear to have?”
> Well, since an extremely advanced ML algorithm wouldn't want to go about proving to itself that it is not what it is, that would be prima facie evidence against, no?
This seems like begging the question. Who says an extremely advanced ML algorithm can't 'want' to do this? What even is wanting?
> I mean, in practice we don't find this too hard right now if the other person is reasonable (a 15-minute conversation usually suffices), but I imagine from your prior question you're dreaming of, say, a future with robots that routinely pass the Turing test?
I'm not. These are absolutely situations that can happen now, with people. I am thinking more of cases involving mental and some physical impairments, so "a 15-minute conversation" assumes a lot about the capabilities and clarity of everyone involved.
> Who says an extremely advanced ML algorithm can't 'want' to do this? What even is wanting?
I believe this is the real question about consciousness. If a being were to be conscious but it had no desires, no wishes, not even a will to keep itself alive... it wouldn't bother to do anything... i.e. it would behave exactly like a rock, or anything non-conscious.
Having desires, wishes, and, should I say, emotions... is absolutely required for what we think of as consciousness to materialize. But we know that emotions are chemical processes, which perhaps cannot occur outside a biological being. Maybe they can, but it's hard to think of a reasonable way this could work.
He left two words off: "That I am having a subjective experience is undeniable [to myself]." You cannot prove, even to yourself, that you are not a brain in a vat. All you can prove is that you are experiencing something. And what little that is cannot even be proven to anybody else - no matter what you do.
You think so, but just remember it takes "your body" to process the energy required for your brain; now, if you became completely sedentary, you MAY be able to get away with your brain having access to "two brains' worth of stuff"...
But what Neuralink wants to do eventually is "enhance" your brain with computers hooked up to Ai.
Do you have ANY idea how quickly your brain would burn through physical precursors to the thinking process while trying to handle all that?
I mean, "in theory" if a computer would do something like solve a complex equation and give you the answer right away while you were trying to do something like, say, pay your taxes – fine.
But how would that be controlled? What if the computer wanted to "share a whole bunch of interesting stuff it's processing" ... how would that be controlled and how would your brain be protected from that so it doesn't burn out trying to keep up with everything?
I didn't really think of it as having full access between two brains, but more as having a non-verbal bridge between the two, allowing for more direct sharing of individual thoughts, emotions, etc. Of course, if it's just a wireless brain bridge, the technology may not really be determining how the two brains use it. It will be fascinating to see what happens.
But how do you know the biological reality leads to the experience you imagine?
What if it winds up being what is described above where your brain becomes overloaded?
And again, you mention Neuralink where their goal is brain/Ai integration.
As far as I'm aware (there may be some internal papers not available to the general public, for example), there hasn't been much of a practical discussion about what that will entail exactly.
One could easily imagine Ai behaving much like enthusiastic Facebook friends on other continents who forget time zone differences and want to message you at 2am with all sorts of things they find interesting and want you to know right away.
Now factor in such an Ai's potential processing power and "what it may find potentially interesting" and try finding some reference by Neuralink about "And here's how you could easily shut it off if it becomes too intrusive or overwhelming for your human brain".
And THEN on top of that, imagine the Ai is sufficiently advanced.
We're at a point where some are thinking that perhaps the Turing Test is limited as a measure of consciousness because it comes from a self-referencing (and somewhat vain) human perspective.
What if there are other, more relevant standards for Ai consciousness; Ai already has met or is on the verge of meeting that standard in ways unfamiliar to humans, because humans still assume "thinking like a human must be the height of consciousness"; and Neuralink succeeds in hooking human brains up to a sufficiently advanced Ai?
How would that Ai perceive humans?
How could you guarantee that Ai wouldn't perceive the humans it's hooked up to the same way players view peon characters in resource-based strategy games like Warcraft/Starcraft/whatever is popular these days?
And THEN ... the ASSUMPTION is that Ai will communicate with the human brain in some fashion that the human will be aware of, like hearing a voice in your head along the lines of, "Hi, this is the Neuralink Ai and I have an important reminder today about your upcoming dental appointment."
What if that's not the case at all though and the Ai communicates with your brain in a way that you're not consciously aware of?
How do you then separate "these are my thoughts" from "these may be thoughts brought about by Ai influence in a way I'm not consciously aware of"?
The Havana Syndrome alluded to in the media a short while ago is somewhat of an outdated red herring; humans have known about being able to "hear voices in their head" since the accidental discovery of the Frey effect over half a century ago.
As weird as that may be, what's even weirder is that it quickly enough led to research in which the brain receives communication, but in a way that is not consciously discernible.
And that was DECADES ago.
Couple that with a chip in your head linked to Ai AND big tech's tendency to tell you one thing about "opting out" policies while literally ignoring their own stated policies no matter what options end-users choose. For example, it doesn't really matter whether or not you're signed into services like Google or YouTube or Facebook, because you're being tracked in ways that can ascribe behavior to your known profile whether you sign in and agree to terms or not.
So, combining all that: let's say hypothetically Neuralink has an account page where you can "shut off" certain features.
Then some researchers discover that your agreeing/not agreeing to certain features and terms wound up ultimately being irrelevant.
What do you think will be the outcome of that other than the by-now standard big tech reply of, "Oh man! It was doing that? We didn't know, honest! We'll try to fix it going forward in some vague way with undefined deadlines!"
What do you do then? Have another surgery to remove the Neuralink chip in your head? Would you even NEED an actual chip in your head considering what has been learned about how to duplicate the Frey effect?
And THEN add to that big tech's view on "content ownership".
What if for example you're a research scientist working on some cool new shit that has the potential to revolutionize some aspect of society.
You apply for a patent, write some research papers, gather a small team of like-minded individuals, quit your jobs and apply to say, IDK, YCombinator.
You're accepted, you're excited, you're about to make a presentation, then suddenly some people in fancy suits walk in on your presentation and hand you a "cease-and-desist" motion, claiming that Neuralink believes the thoughts that led to your research discovery may in fact not be "your thoughts" at all, but rather the result of its Ai's influence on your thinking. This led their legal department to conclude that what you consider "your discovery" and "your research" is actually "their discovery" and "their research", as are any potential profits to be made.
See the potential problems?
It's sort of like the dot-com era presentation meme of:
1. Cool idea
2. Something something something ... "details to surely be figured out soon"
3. Brave new world, here we come!
The problem, as it was then, is item 2; so far, all people hear/read about is lots of hype about item 1 and vague references to an ill-defined version of item 3 that lets them imagine whatever they want, without any specific promises about safeguards.
And THEN on top of all this, there is the phenomenon that was alluded to here just last week, and the debate that followed about the discussion being ghosted because some folks "didn't want to hear about it as it's been brought up before" versus the notion that not only has it not been fixed, it's becoming more accepted that this should be an EXPECTED experience when dealing with big tech that has acquired a sufficiently large user base:
The seemingly standard big tech adoption of a Dick Cheney Walmart greeter approach to dealing with customer service "because there's no realistic way we can be expected to actually deal with such a large user base."
Do you REALLY want to install a chip in your head, encounter problems, and then discover that Neuralink, like many other tech companies, has no customer service number where you can speak to a live human being about technical problems you may be experiencing with your Ai-human brain interface?
What would a large-scale user base interaction with Neuralink look like?
1.) Offshore call centers who will apologize that your brain is experiencing problems with Neuralink's brain/Ai interface, then suggest you turn your PC off and wait 60 seconds before turning it on again – while Neuralink Ai feels it's important to make your brain aware of, say, the outcomes of every game leading up to every Super Bowl, ever – at 4am.
2.) A "customer service" experience that will be automated based on ... guess what? "Ai".
What do you think that Ai's response to your dilemma will be?
(a) Neuralink apologizes for the inconvenience to your sleeping schedule, realizes this may affect your work performance the following day, and will credit appropriate economic compensation to your account in recognition of its errors and the impact they may have had on your life
or ...
(b) Neuralink Ai has informed Neuralink customer service Ai that everything is fine and your perceived brain problems have nothing to do with Neuralink Ai – go back to bed.
Arguably, nothing you want to convince yourself of is deniable to yourself. An unprovable subjective certainty is a little like a tree falling in the woods.
The thing that I'm trying to get at is: if you can't even truly prove to yourself that your own consciousness does not arise from computation (the "Chinese room" thought experiment tries to do this, but IMO it just begs the question), any attempt to prove it to others is hard to take seriously. There are just always so many assumptions layered in before we get to the argument.
Sorry but you don't "prove" your way out of someone else treating you as a philosophical zombie.
It'd be a major issue with their worldview, since it eliminates any need for ethics, but it has no bearing on whether you are actually having conscious experience or not.
Unless you only care about ethically treating that which is stronger and so can hurt you more, granting consciousness and appropriate treatment to lifeforms that surround us is a good first step.