> Well, since an extremely advanced ML algorithm wouldn't want to go about proving to itself that it is not what it is, that would be prima facie evidence against, no?

This seems like begging the question. Who says an extremely advanced ML algorithm can't 'want' to do this? What even is wanting?

> I mean in practice we don't find this too hard right now if the other person is reasonable (a 15-minute conversation usually suffices), but I imagine from your prior question you're dreaming of, say, a future with robots that routinely pass the Turing test?

I'm not. These are situations that can happen now, with people. I'm thinking more of cases involving mental and some physical impairments, where "a 15-minute conversation" assumes a lot about the capabilities and clarity of everyone involved.


> Who says an extremely advanced ML algorithm can't 'want' to do this? What even is wanting?

I believe this is the real question about consciousness. If a being were conscious but had no desires, no wishes, not even a will to keep itself alive... it wouldn't bother to do anything; i.e., it would behave exactly like a rock, or anything else non-conscious.

Having desires, wishes, and, dare I say, emotions... is absolutely required for what we think of as consciousness to materialize. But we know that emotions are chemical processes, which perhaps cannot occur outside a biological being. Maybe they can, but it's hard to think of a reasonable way that could work.


A loss function, perhaps?
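
For concreteness, here's a minimal toy sketch of that analogy (pure illustration; every name in it is made up): the loss function encodes the one thing the system "wants", and gradient descent is the behavior that pursues it.

    # Illustrative toy sketch, not any real system: in an ML setup, the
    # loss function is the closest analogue to a "desire"; training is
    # the process that relentlessly works to satisfy it.

    def loss(theta):
        # The system "wants" theta to equal 3.0; any deviation is "displeasure".
        return (theta - 3.0) ** 2

    def grad(theta):
        # Derivative of the loss with respect to theta.
        return 2.0 * (theta - 3.0)

    theta = 0.0
    for _ in range(100):
        theta -= 0.1 * grad(theta)  # step downhill, toward the "goal"

    print(theta)  # converges to ~3.0, the state the loss function "prefers"

Whether minimizing a number counts as "wanting" in the sense discussed above is, of course, exactly the open question.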
