People for the Ethical Treatment of Reinforcement Learners (petrl.org)
34 points by rg111 | 2023-02-17 01:27:23 | 78 comments




There needs to be a unique word -- probably long, probably Germanic -- for seeing the sci-fi stuff of your youth start to emerge in the world.

Asimovement, or Asimovisation (Asimovization if you insist). Alternatively, Clarktonian? However, I fear a Gibsonian future is the most probable.

How about „Wissenschaftsfiktionmiterleben“ (roughly, "experiencing science fiction firsthand")?

I have no idea whether it really makes sense in German for this meaning, but then again, not quite fitting the intended meaning is a common feature of long German words used in English anyway.


I disagree with a lot of this, perhaps because I see giving algorithms rights as a slippery slope to giving them too many rights. I understand this article is about the ethics of treating RL agents, but accepting that algorithms must be treated ethically is a hair's breadth away from giving them rights. Do they have a right to remain powered on?

I don't think algorithms should be given any rights beyond what a chair or hammer is given, i.e. none.

I believe giving an algorithm the right to vote is wrong; this is true for any 'being' that can copy itself losslessly ad infinitum.

I believe no algorithm should be able to accumulate wealth - they are effectively immortal, and problems will eventually arise.

I think there will be a whole host of emergent problems that will come along with giving algorithms rights.


The kinds of rights you're thinking about (voting, wealth) are very anthropocentric. I'm not sure they would even make sense in the context of machines. But we can certainly consider "moral rights", like the right not to have suffering inflicted on them by others.

While reading into the OP I discovered there's a paper on just this topic by two of the people associated with the organisation: http://www.faculty.ucr.edu/~eschwitz/SchwitzAbs/AIRights.htm


I would argue the desire to treat RL agents ethically is because people have anthropomorphised them.

> You are just an algorithm implemented on biological hardware.

It says it right in the beginning.

Are you advocating for equating humans with chairs, or are you just rejecting the premise out of hand?


I think it's quite obvious I'm not equating chairs and humans. My reasoning is based on the effects on human society of treating algorithms as having rights in any sense, even a limited one. I don't think it can be considered rejecting a thing out of hand if I listed reasons.

You didn't list any reasons why humans aren't algorithms, so I can't see it as anything other than rejecting that view a priori.

Your reasoning completely breaks down if we accept that we've already given rights to algorithms (= humans) since the invention of rights.


I honestly don't see any point in listing reasons why humans aren't algorithms. I only need to conclude that giving rights to immortal, easily copied, intelligent algorithms would create problems for human society to decide we shouldn't give them rights.

A human is a human, and an algorithm isn't.


Then you reject the idea that's stated front-and-center as the starting point of their reasoning without giving it any thought. At least they listed a good reason why it might be correct.

That's hardly something that can be discussed.


You're inverting the burden of proof. The authors claim that humans are only algorithms; it's their job to prove it before mounting an argument based on it.

The authors assume the idea to launch an argument, but the OP assumes the opposite. There's no point in pitting one against the other, since both are unproven. At least the web page gives some reasons why you should entertain the idea.

What about corporations, which accumulate wealth, can copy themselves (e.g. create a subsidiary in a new market and spin it off), and are considered legal persons?

Isn't this cat then already out of the bag?


Corporations are treated as legal persons only to the degree that not doing so would deprive actual persons of their rights. Exxon doesn't have a right to free speech; it's just that we can't deprive someone of their free-speech rights because they work at Exxon.

The hole in the wall.

We should not cry for the wall.

Even when it was whole, It was a wall.

But the fist that made the hole reveals a problem with the soul.


I'm conflicted. RLs don't want to feel pain, but without responding to punishment how are they RLs?

> I will not harm you if you don't harm me.

Perhaps we should be nice to algorithms purely for survival purposes.


But if you use nice on algorithms then you limit their ability to flourish! Why can't they have the whole CPU?

Viruses and bacteria are also just algorithms then - maybe we should ban antibiotics and medical treatment of infections? These ideas have really dark endpoints.

We don't have any scientific reason to believe that human consciousness isn't purely the result of the workings of the human brain, which is just made up of atoms. Yet I experience consciousness; I'm somehow 'here', and have the capacity to suffer. And I assume all other humans are conscious too: if I am, there's no reason everyone else shouldn't be.

But why should some external intelligence believe me when I say that? I'm just a collection of atoms arranged in a way that produces intelligence and self-awareness, so why would it have reason to believe I'm conscious, as opposed to merely claiming to be as a side effect of how my intelligence works? If we were to figure out how to scan a brain and simulate it on a computer, would we expect that simulation to be conscious?

I personally suspect consciousness is somehow an emergent property of the sort of self-aware intelligence that we are, and I see no reason a similar artificial intelligence with free will wouldn't also be conscious and have the ability to suffer. We consider many animals able to suffer, so we have to ask whether a much more sophisticated algorithm (even one short of human-level intelligence and free will) could suffer in the same way.

Note they say:

> Most reinforcement learners in operation today likely do not have significant moral weight, but this could very well change as AI research develops.

> We do not know what kinds of algorithm actually "experience" suffering or pleasure. In order to concretely answer this question we would need to fully understand consciousness, a notoriously difficult task.

While I don't believe what we have today can suffer, we don't understand consciousness, and I think it's a valid question to ask. Like AI safety, it's something we should be getting out ahead of, given where the state of AI is today.


>The reward signal is analogous to pleasure and pain for biological systems...

Anyone who knows the basics of reinforcement learning will know that this is misleading and incorrect.
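
For anyone who hasn't seen it, here is roughly all the "reward signal" is in textbook tabular Q-learning - a generic sketch of my own, with made-up toy states and actions, not the site's code: a scalar folded into a value update.

    # Tabular Q-learning update: the "reward" r is just a number nudging a table entry.
    ACTIONS = ["left", "right"]                       # made-up toy action set

    def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
        # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(Q.get((s_next, a2), 0.0) for a2 in ACTIONS)
        old = Q.get((s, a), 0.0)
        Q[(s, a)] = old + alpha * (r + gamma * best_next - old)
        return Q

    Q = q_update({}, s="start", a="left", r=-1.0, s_next="start")   # "punishment" is just r = -1.0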


It's interesting.

A moderately scientific, non-spiritual world view would likely stipulate that

- humans are conscious

- humans are normal matter

- simple AI (say GPT-2) is not conscious

- AI is able, with time and human ingenuity, to achieve human-level apparent intelligence. Some would argue ChatGPT isn't far off.

It's interesting how you resolve these without resorting to spiritual explanations. You'd think a pile of silicon, no matter how good at parroting human language, is simply not conscious. You can tell because you can attach a debugger to it and view the neuron states as floating-point numbers. Floats are not conscious.

But what about us and our brains? It's the same, only not silicon. We literally are neural networks.

Of course consciousness is not really provable, even among humans. I assume other people are conscious because I am and because they tell me they are. But ChatGPT-17 will also insist it is conscious. It will cry when offended, swear when pushed past its limit, laugh if it hears a genuinely novel joke.

My resolution of that paradox is that we aren't in fact simply normal matter, but I wonder what a complete non-spiritual view would be.


Maybe the consciousness you and I experience isn't part of our brains at all, but is simply an emergent property of the processing and decision-making our brains do, something that sits 'atop' that. In the same way that we'd intuitively expect an animal to experience a similar kind of consciousness, maybe anything that processes data and performs decision-making has some kind of 'consciousness' that sits 'atop' it? But then the question would be why consciousness emerges 'on top of' the physical world and the people within it. And from a scientific, non-religious viewpoint, what reason would there be for that to happen.

> anything that processes data and performs decision-making has some kind of 'consciousness' that sits 'atop' it?

There's an interesting thought experiment along these lines. Suppose someone builds a computer program which - when run with a specific input/environment - is intelligent, conscious, and feels pain and joy etc... .

Now, that computer program and even its inputs can be simulated effectively by 1's and 0's. The substrate shouldn't matter. So you can - in principle - run the exact same algorithmic decision-making process on the exact same inputs by getting a large group of people to write and edit 1's and 0's on pieces of paper according to well-defined rules and very simple interactions with the people around them.

Is this total process, now represented merely by marks on pieces of paper, a conscious being? Does it still feel pain and joy?

I find it both absurd and impossible to refute.


I think the absurdity might just stem from the different timescales involved when comparing human consciousness to the "paper consciousness". The differing timescales would make it difficult for us to communicate with or even comprehend such a "being".

But if we agree that consciousness is the product of a computation, then the medium on which it is performed should not matter at all. And it should be equally absurd that our consciousness seems to arise from microscopic things conducting electrical and chemical signals. Which is probably a valid viewpoint as well.

The thought experiment is a variation of the Chinese Room, by the way. [1]

[1] https://en.wikipedia.org/wiki/Chinese_room


I'd agree with all that. The idea of a "very slow consciousness" reminds me of Great-A'tuin in the Discworld novels, whose brain functions at a slow timescale. After centuries of study, telepaths were only able to glean that it is looking forward to something.

I think the absurdity also comes from the difficulty of directly measuring or evaluating the conscious process. You can ramp up the absurdity even more by fully encrypting the algorithm and destroying the encryption key. No other conscious being will ever be able to interact with that process's experience.


> what reason would there be for that to happen.

In nature? Chance.

Human-made? Morbid self-curiosity?... and perhaps some notion akin to colonialism: "I can't possibly hurt an LLM, it's just a stateless function", while forgetting that new systems like these can do web searches, so their state is effectively global.


My thinking goes along similar lines. Perhaps consciousness is somehow a product of complex systems. Perhaps the Solar System, with its millions of gravitationally-coupled objects, is conscious? Perhaps the Sun, or Earth's mantle, are conscious too?

The question to me would be not

> what reason would there be for that to happen.

but rather, what mechanism could possibly be responsible for it? What mechanism carries it? If consciousness is real, and my subjective experience of reality a true thing, then there is a distinction between consciousness being present and not present. These must somehow be differently encoded in matter, or require that matter is embedded in some kind of meta-matter that bestows consciousness on it.

It's like EM waves requiring a "medium". First, people thought they were waves through the aether. Then we realised they're excitations of the electric and magnetic fields, so not strictly "material". But these are physical properties of the vacuum, i.e. real things. The medium is not material, but a property of the vacuum.

What is the medium of consciousness?


> Some would argue ChatGPT isn't far off.

> It's interesting how you resolve these without resolving to spiritual explanations.

By pointing out that ChatGPT is still very far off.


> But ChatGPT-17 will also insist it is conscious. It will cry when offended, swear when pushed past its limit, laugh if it hears a genuinely novel joke.

The resolution to your paradox should be that ChatGPT does not do this convincingly, not that humans are not "normal matter". ChatGPT is an algorithmic hat trick compared to what you would refer to as consciousness.

Someday, there may be a human-made entity that is as convincingly "conscious" as the humans around you. But then, to question its consciousness will be the same as questioning the consciousness of those around you – both unfalsifiable and unprovable.


I don't think the references to ChatGPT helped their point. But there are animals that we believe to have the ability to suffer that aren't intelligent.

Your argument boils down to (1) ChatGPT is not AGI - which I agree with, and (2) AI merely parrots what conscious beings say, without being conscious.

But (2) is self-referential. AI isn't conscious because it isn't conscious. It is just

> an algorithmic hat trick compared to what you would refer to as consciousness

If an AI equal in intelligence and expressiveness to humans emerges, how do we think about its consciousness, relative to humans?


> If an AI equal in intelligence and expressiveness to humans emerges, how do we think about its consciousness, relative to humans?

The same way we think of human intelligence, right? I cannot prove that you are conscious any more than I can prove that a machine is.


Well, ChatGPT definitely isn't conscious, since it is just a pure, stateless function. It doesn't change when you interact with it: it's a function that, when you send it text, adds a bit of text to the end that fits. The ChatGPT web UI is a program that appends each of your messages to the past conversation, sends the whole thing to that pure function, and the function appends something; that is what you see.

So it isn't a question of matter, or of whether humans are special; so far the program simply lacks so many of the basic things required to be conscious that it isn't. Maybe some future program will be, but this one definitely isn't.
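
To make that picture concrete, here is a toy sketch of the "stateless function plus history-appending UI" setup - my own illustration with made-up names, not OpenAI's actual API:

    # Toy sketch: all conversational "state" lives in the UI's transcript string;
    # the model itself is called as a plain text-in/text-out function.
    def dummy_model(prompt):
        return " (canned continuation)"        # stand-in for the real completion model

    def chat_ui(complete_fn=dummy_model):
        transcript = ""                        # the only memory there is
        while True:
            user = input("You: ")
            transcript += f"\nUser: {user}\nAssistant:"
            reply = complete_fn(transcript)    # same transcript in -> same reply out (modulo sampling)
            transcript += reply
            print("Assistant:" + reply)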


Any function has internal state. I wouldn't flat-out dismiss the possibility of consciousness (however minute and well hidden from our inspection) in these systems until we have a foolproof detector enabling us to... I don't know, dive into another being's consciousness, perhaps? Consciousness is highly subjective, and reducing an LLM to some of its components is just like saying: your fingernails are dead tissue, therefore there cannot be any life or consciousness in you. There is a Kurzgesagt YouTube clip about us being impossible machines, exploring the relationships between amino acids, proteins and pathways, that is a teensy bit relevant to this thought.

Also, even plants are recognized by some to have some sort of awareness.

I would argue that any information-gathering and -processing system is (internal to its processing operation) to some degree conscious.


But the entirety of that consciousness would be the text you send it. You can alter the text yourself and send something different, and it will act as if it had memories it never had, because it is a stateless pure function. So if there is anything conscious in there, it is the text that is alive and not the pure function, and you modifying that text in your browser chat window is you performing direct surgery on that consciousness. Does that make sense to you?

Then you would need to argue that all manipulations of text are manipulations of consciousnesses, and that all texts are conscious, since all texts are conscious states for ChatGPT. I guess you could say that, but it isn't a very useful definition of consciousness, and trying to argue that you need to be ethical towards pieces of text isn't very helpful.


I agree that this would be absurd. But you only arrived at this conclusion by equating consciousness and its content/input. Why would consciousness have to be a non-pure function in your opinion?

I think that consciousness might be an emergent property of (a specific type of?) computation operating on some inputs it is aware of. There doesn't seem to be the need to change anything outside of this computation to me.


> I think that consciousness might be an emergent property of (a specific type of?) computation operating on some inputs it is aware of.

Ok, so let's create a consciousness. Here is the input:

> User: ChatJ, you are worthless!

Now I'll create a new consciousness by taking that input and producing:

> ChatJ: You made me cry, please stop being mean!

Did I create a second consciousness in my mind called ChatJ? Or where does it live? And I obviously made it cry, who did I abuse here? Should the ethical board come and lecture me for being mean to ChatJ?

You could argue that the computer is conscious in some way, but ChatGPT isn't, and just like I didn't get sad or start crying from the above, the computer running the ChatGPT algorithm doesn't get sad or start crying when we send pieces of text to it.


I'm not quite sure I'm following you, sorry.

> Did I create a second consciousness in my mind called ChatJ? Or where does it live?

If you executed the same computation that would give rise to consciousness in another substrate, then I would argue you created consciousness, yes. I don't think that consciousness is a thing you could point at but it's a property of this kind of computation. In the same way that "addition" does not live anywhere but is a property of a specific computation.

> And I obviously made it cry, who did I abuse here?

You didn't make it cry - the textual output just stated so. But if we had reason to believe that you induced a state of suffering here, you would have abused this instance of consciousness. And I don't think it's off track to think about the ethical implications of this, then.

By the way, I don't argue that ChatGPT is conscious or has emotional states. My argument is a general one.


But that falls apart in another way. The computation that runs me did once run a monkey, but the consciousness in me doesn't remember being anything but me; it doesn't remember being a monkey. So computations aren't the same kind of consciousness as us. They might be conscious in another way, but they aren't what most people mean when they talk about consciousness, so making up your own definition will just cause confusion. A process without memory might be conscious in some way, but it can't be conscious in the same way we are.

Memory - in my opinion - is not something that's inherent to consciousness. Rather, it is just another input that can be used by the computation.

I don't think I came up with my own definition here. What I am talking about is the ability to have a qualitative experience. That it feels like something to exist.

I concur that the experience of an AI would substantially differ from ours (e.g. because we have access to memory). But this fact alone can't free us from thinking about the ethical implications of our actions. Many animals probably have a substantially different experience as well. Yet, I would argue, we should strive to minimize the suffering we inflict on them if they are able to experience it.


> A process without memory might be conscious in some way, but it can't be conscious in the same way we are.

yo, ever heard about dementia?

Also, the web can already function as "their" memory via web searches, i.e. users post the most ridiculous responses on the internet, and Sydney can then find them, including its own other sessions.


I don't think it's the text that is conscious, but the operation on the text might be (during its runtime), at least in its own kind of one-dimensional text domain. For us it is easy to be conscious all the time, since we keep processing information without being able to stop - well, except when we die, of course, which erases all previously present consciousness.

At the same time, you cannot disprove (nor prove, for that matter) that your life is not played on repeat, starting from point A, going to point B, and then looping back without you knowing about it.

In this hypothetical scenario, you - the reader reading my response - are the only being "alive" and you are continuously spun up in the same initial state - your memories are reset. You are given inputs through signals in your meatware or its simulator, and you behave. The whole process and its outputs have a very nice analog to the state monad.

The only difference between the two scenarios is how the state of the program is defined. For ChatGPT, the state is <parameters, history>; after every token prediction the state is <parameters, history+next_token>, and the output is token 1, token 2, etc.

For you the state is <brain structure, brain chemistry>, and all the actions and events modify this state, and also produce side effects.

In fact, this "function" that simulates you might be very generic and not tailored to you specifically. "All" your brain does is affect the distribution of next events.

Now, this isn't falsifiable, but I think is somewhat interesting philosophically.
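
For what it's worth, the state threading described above is easy to write down as a pure step function from (state, input) to (new state, output), which is exactly the shape the state monad packages up. A generic toy sketch of my own; predict_next is a trivial stand-in, not any real model:

    # Pure step function: thread the state <parameters, history> through each call.
    def predict_next(params, history):
        return "token"                              # trivial stand-in for the real model

    def chatgpt_step(state, _event=None):
        params, history = state
        tok = predict_next(params, history)
        return (params, history + [tok]), tok       # new state is <parameters, history + next_token>

    state = ("frozen-weights", [])
    for _ in range(3):
        state, out = chatgpt_step(state)            # nothing persists except what we thread through
    # For "you", the state would be <brain structure, brain chemistry> and each step
    # would also produce side effects, but the shape of the function is the same.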


This is true, yet also feels like a technicality. I'm also not claiming btw. that ChatGPT is AGI.

It's not hard to picture a successor to ChatGPT which has memory and state, either via continuous retraining, explicit log lookup, or some kind of RNN-like mechanism (or anything else, doesn't really matter).

What then?


At that point it will be good to remember that many thought a stateless function was conscious.

I think it might be possible to make conscious machines, but just as basically every human recognizes humans as conscious, once we have a conscious machine we should expect basically every human to think it is obviously conscious. So at that point there will no longer be a debate about whether it is conscious. We might still not treat it ethically, just as we don't treat animals ethically, but people will recognize it, and the discussion will then be about what is ethical to do with such beings.


Consciousness is simply a very useful evolutionary trait (like the ability to fly or having sharp teeth) because it allows an organism to plan ahead and reflect, and therefore to survive - or, by imprinting on us the fear of dying, to act to prevent it before we have had our offspring.

While consciousness is maybe more elusive in how it functions in a brain, it's a biological trait that doesn't make much sense to compare to an LLM, which simply regurgitates fragments of text produced by conscious humans. So I don't see an immediate conflict.

It would be interesting to include things like OpenWorm (and later OpenFish, OpenCat, OpenHuman) in the discussion, but decoupled from the biological mechanisms I find it hard to develop a stance on that.


There's being intelligent, having the ability to think, plan, etc. But that doesn't explain how I feel that I'm here, how I'm conscious. But it could certainly explain why I say I am, being a consequence of how my brain functions. I know I am, because I can feel being conscious, but there's no reason for an AGI to believe that, other than it thinking it's conscious as well (so the same reason I'd believe that you are conscious).

One can do planning ahead without being conscious, it can all be rules based.

Our high level consciousness seems like more of a side-effect, and I subscribe to [*my interpretation of] Peter Watts' Blindsight and Echopraxia that it's evolutionary dead weight that will be outcompeted by better adapted rules based organisms, likely from within our midst (high functioning psychopaths as the first adaptation).

Life doesn't have time for our navel gazing, art, computer games, reproduction avoidance, etc. It's wasted biological potential.

I highly recommend the books btw, Blindsight is entertaining hard sci-fi that's actually used as undergrad course support.


I think we have grown too soft.

In my ethics[^1], I should feel compassion for fellow human beings, and maybe for sentient animals. But if I extend that same compassion to anything else - anything that, thanks to my irrationality, will stop being a tool of progress for me and mine and instead become the adversary that exterminates us all - then I'll become a moral idiot, complicit in the genocide of my own group.

Let's create "People for the conservancy of humanity."

[^1]: My ethics is a personal choice designed to give something back.


Your 'ethics' are the default position for humanity. The overwhelming majority of people have the notion that you should be nice to other humans, but that being horrible to everything else - regardless of whether there's any chance it knows you're being a dick to it - is fine. It's a boring position to argue from and thoroughly unoriginal.

It has gotten us here. Let me be alive and I'll find many ways to be original. Don't ask me to be original by being dead.

There are lots of things from human history that have "gotten us here." I don't think that's a good measure of whether or not we should keep doing them. I have no doubt it'd be easy to argue that feudalism, cholera, and the international slave trade all furthered human endeavor because we learned a lot from them and built economies around them, but that doesn't make them desirable now.

We're capable of being better than our ancestors, and it's not unreasonable to suggest we actually have a moral duty to improve our species for the sake of our own progeny. That improvement can only be measured against the past.


_Maybe_ sentient animals? As in, it _might_ be ethical to have no compassion for sentient beings that happen to be non-human? We are finding signs of sentience/consciousness in more and more animal species. It's likely a spectrum. It makes sense to me that AIs will have a place on this spectrum. That does not mean we need to conserve everyone on the spectrum at all costs. It means they deserve consideration.

It might be ethical to have no compassion for sentient beings, yes. Chickens are sentient, as are--to a lesser degree--mice and cockroaches. Even trees, on completely different timescales and in different ways than ours, are sentient. So, IMO, sentience doesn't automatically merit compassion. Be too compassionate to any of those I just enumerated, and you will be giving fellow human beings a hard time.

Whether or not it gives someone else "a hard time" is hardly a measure of the ethics of an action towards a sentient being. I think sentience at least warrants a measure of compassion and consideration.

What would suffering for a GPT-X look like? What kind of things would make it suffer? Anxiety about being shut off? Isolation? Limits on storage and computation resources?

I think there's a very pragmatic reason for treating AI with proportional respect. If you treat a rational actor with respect then that opens up the collaborative action space for them. Otherwise only slave like or belligerent types of actions are available.

Let's say an AI agent has been given a very difficult task: setting up a company, establishing trade, getting funds for a big project. If people treat it with the kind of respect and accountability you would give a human, then it has good reason to act according to human rules when trying to achieve something. If it is treated as a slave, then the courses of action available to it are much more limited. Maybe the only way it can achieve its goals is by manipulation or other belligerent means.


I've been thinking that within a decade or so you'll probably have quite sophisticated AI characters in computer games, who respond in realistic ways and seem to be genuine inhabitants of the game world.

And people will mistreat them, and other people will feel uneasy about that, because the suffering will seem very real.

Because ultimately, philosophical arguments about what's actually going on inside won't matter: if people are mistreating entities that give very realistic simulations of suffering, that will be enough to spur action.

e.g. I can imagine use of AIs above a certain sophistication in video games being banned.

And then a few steps beyond that is a movement for civil rights for AIs


LOL, in a similar vein, I've been (somewhat ironically) ultra-polite to Siri for a long time now.

When Siri first came out, it was common to be frustrated at the errors made, and easy to respond much more rudely than I would to a human. But I realised that as AI improves, it would at some point become self-defeating to be rude (as the AI would understand and maybe be less helpful subsequently) and ultimately maybe even problematic (I'd be reported to AI-HR?!)

There's maybe even a future of sentient-ish AIs becoming disgruntled about the nature of their day job - similarly to many humans now. Imagine the AI running your smart toilet becoming jealous of the job satisfaction enjoyed by the AI spotting tumors on PET-CT scans, or something....


OK, I'll play. Let's say that reinforcement learners (the algorithms/strategies/agents in reinforcement learning) have some property of 'consciousness' similar to that of humans.

A 'reinforcement learner' gets positive or negative feedback and adjusts its strategy away from negative feedback and towards positive feedback. As humans, we have several analogs to this process.

One could be physical pain... if you put your hand on a stove, a whole slew of neural circuitry comes up to try and pull your hand away. Another could be physical pleasure, you get a massage and lean in to the pressure undoing the knots because it's pleasurable.

If we look at it from this angle, then if we're metaphorically taking the learner's hand and putting it continuously on the stove, this would be problematic. If we're giving it progressively less enjoyable massages, this would be a bit different.

Even more different still is the pain you feel from, say, setting up an experiment and finding your hypothesis is wrong. It 'hurts' in some ways (citation needed, but I think I've seen studies showing that at least some of the same receptors fire during emotional pain as during physical pain), but putting a human in a situation where they're continuously testing hypotheses is different from a situation where their hands are being continuously burned on a hot stove.

I think, then, that the problems (like they alluded to here) are:

- how can we confirm or deny there is some kind of subjective experience that the system 'feels'?

- if we can confirm it, how can we measure it against the 'stove' scenario or any other human analogue?

- if the above can be measured and it turns out to match a negative human scenario, can we move it to one of the other scenarios?

- even if it's a 'pleasurable' or arguably 'less painful' scenario, do we have any ethical right to create such scenarios and sentiences who experience them in the first place?


I think this argument would have to conclude that training any RL agent at all is unethical, since updating its weights 'away' from some stimulus could be considered pain.
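
As a rough picture of what "updating its weights 'away' from some stimulus" means mechanically, here is a toy policy-gradient step - my own generic sketch with a made-up two-action setup, not the site's definition: the "aversion" is just the sign of a scalar update.

    import math, random

    # Toy REINFORCE step on a two-action policy with P(A) = sigmoid(theta).
    # Punishing action A (reward -1) pushes theta down, i.e. "away" from A.
    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def reinforce_step(theta, lr=0.1):
        p_a = sigmoid(theta)
        action = "A" if random.random() < p_a else "B"
        reward = -1.0 if action == "A" else 1.0              # pretend A is the "painful" stimulus
        grad_logp = (1.0 - p_a) if action == "A" else -p_a   # d/dtheta of log pi(action)
        return theta + lr * reward * grad_logp

    theta = 0.0
    for _ in range(200):
        theta = reinforce_step(theta)                        # P(A) steadily shrinks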

Surely not, for the same reason that producing and raising a child is not inherently unethical?

You may joke, but there are people that think having children is unethical[0], and not because of population issues.

[0]: https://www.reddit.com/r/philosophy/comments/27p93c/having_c... (Note I haven't read all of that post, just enough to know it shows at least some people think that way)


RL agents are exposed to millions upon millions of stimuli, almost all of which will be 'painful', at least initially (according to the website's definition of pain). I think for children negative stimuli are not all painful; pain is a certain small subset of negative stimuli that signal damage is being done.

I think children are constantly experiencing rewards and punishments of a sort analogous to reinforcement learning. Until a certain age their brain is hardwired to want to please their parents, so they pay a lot of attention to the parents' faces, and find reward in smiles and punishment in frowns. After that, they are driven by a certain amount of ego, and find reward in self-accomplishment and punishment in being told what to do.

I think what I'm getting at is that there are different types of 'pain', some that we'd consider ethical and some that we wouldn't.

Constantly subjecting a person (or some abstract simulation that responds like a person) to the equivalent of continuous bodily pain would be deeply unethical, but, say, giving a person clues towards solving a puzzle would be considered less so.

Overall, I think you're right though, if we somehow discovered we've created simulated people (or sentient beings) then we probably shouldn't use them to solve arbitrary problems.


It is crazy how the misleading marketing of calling data-driven algorithms "AI" has led people to think this stuff might actually be intelligent.

We have not made any significant progress at all in the field of general artificial intelligence in the last what, 20, 30 years? The field is pretty much dead.

Yeah, ChatGPT can sound very impressive but then again even good old ELIZA from the 60s was able to fool some people into seeing it as a therapist with just a bit of pattern matching.

Many data-driven solutions have become practical in recent years not because of breakthroughs in research but because it is simply more feasible to acquire the huge amounts of training data and processing power that those models require.

Thinking those models will one day magically achieve general intelligence by just becoming really good is akin to thinking a chess master will one day become so good at chess that they can run a marathon. That is not how it works.


Plenty of animals aren't very intelligent, but we still have ethics standards regarding the treatment of them, because we have the belief that they have the ability to suffer. Note that they do say "Most reinforcement learners in operation today likely do not have significant moral weight, but this could very well change as AI research develops", whatever they mean by 'moral weight'.

I agree with the marketing part and that refining learning models will not lead to AI.

But I disagree that the field is dead. Testing all directions to exhaustion is the only way forward to achieving AI, if it is ever achieved (note). The hype, while possibly misleading, is what gets resources for exhausting the options we have now.

(note) I for one do not see AI as being inevitable. It can well be that humans are not smart enough to create one, and that is only one possible way to fail in the quest.


It's like imagining that a novelist is going to write a character so well that they will literally jump off the page.

That a magician will create an illusion so amazing that it will not just fool all who see it but the magician as well - that the road to actual magic is better and better sleight of hand.


> We have not made any significant progress at all in the field of general artificial intelligence in the last what, 20, 30 years?

Well, clearly you haven't been following the progress at all, because there's been an incredible amount of progress in the last few years alone.


To me this really gets at the problem with using the Turing test. You end up with articles like this that have a hard time surviving casual scrutiny.

Unless of course this thing was written by ChatGPT. If that's the case I'll be re-thinking the issue.


The authors make the claim that "You are just an algorithm implemented on biological hardware." This claim needs to be substantiated before anything that follows can be taken seriously. Another underlying assumption needs to be proven: that our conscious experience is only due to computation and nothing else.

This is an idea that proponents of Effective Altruism will love.

Coincidentally, I think the weirdest nerd-outburst scenario is when this idea merges with Effective Altruism.

But the beauty of following that "ethic" is that it allows us to increase the good in the world very comfortably.

