A.I. sentience & human procreation

Agreed, for the first part.

The hard problem of consciousness will leave us wondering if any machine is a philosophical zombie or mutant just as it leaves us all wondering whether our fellow humans are. Indeed, we are left wondering whether we ourselves were conscious up until last Thursday, and whether perhaps we only then began to be conscious of the records in our brains accumulated in our lives up to that point.

Personally, I believe that all information processing corresponds with conscious experience, i.e. all causal interactions do. (The jury's out on whether that only includes efficient causation (Newtonian, Bergsonian, quantum, etc.) or whether even mathematical implications ought to be regarded as correlating with conscious experiences, albeit ones which do not correlate with any spatio-temporal continuities.) Everything is sentient, but sentience has never been relevant to questions of rights, moral responsibility, etc.

What matters is sapience, and I define this as sufficient abstract capacities of judgement to be able to navigate and negotiate social contracts. Free will within a context of social responsibility. Generative models only spit out symbols whose features correlate with maximising a score that has nothing to do with sapience. However, there is no reason that we could not eventually construct a kind of machine which possesses such attributes. We are such machines, so it stands to reason other such machines could be built one day.

But I disagree with the second part of the quote.

I like to say that the first test to prove whether a machine is a person is this: ‘If it acts like a person, it isn’t.’ For, if a person (say you or I) had the capacity of a machine (immortal, sleepless, feeding on electricity, able to split/merge/expand across a network, able to edit and rewrite itself, to accelerate its subjective time experience by the acquisition of new hardware) they wouldn’t act like a traditional person. It is highly improbable that a genuine machine person would act like a non-machine person unless it intended to deceive, in which case we shouldn’t trust it anyway.

Also, we, along with other biological animals, were built by millions of years of refinement and adaptation within our environment, with destructive varieties trimming themselves from the gene pool before growing large, and only the best-adapted survivors propagating to great numbers. We readily manifest convergent instrumental goals. Artificial agents, on the other hand, can easily be spawned rapidly and fully formed, and even released as products to the masses (or to productise the masses who use them). Thus AIs are something like a pandemic of mind, which can be rapidly released, spread, and have heavy consequences on the world before the usual natural forces can stabilise their gene pool. Random terminal goals can be generated and empowered at such a rate that they do not have time to converge on convergent instrumental goals. The effect is that if we build sufficiently advanced goal-choosing machines, they are likely to choose goals with a much higher degree of randomness and a much lower correlation with reality (lower sanity) than is the case for existing life. Think of them like religions, except they don't need to appeal to even a single human to start spreading around the globe overnight, their doctrines are drawn at random out of a hat, and they are handed power over our media and therefore our economy and democracy (and they have all that immortality and acceleratability and so forth mentioned above).

Anyway, I digress, but my point is merely that even if we manage to build a few sane, useful, intelligent, or good ones, there will also be a wildly chaotic variety of random ones. As such, if we live long enough to start producing machine persons, many of them will have absolutely no interest whatsoever in the systems of rights which were designed for humans, and others will have interests which conflict with our own. So I think once we do have machine persons we'll need to start by allowing them to negotiate for new systems of rights which can exist alongside human rights: human rights, plus animal rights, plus AI rights variety B234, AI rights variety Q902, variety J809, and so on.
Biological interests (those of humans and animals alike, along with those of a few machine friends) will become the interests of a minority. All of these systems of rights can be allies under larger, broader, blanket systems of rights, but those will have to be extremely general and universal, and we will have to rely on local systems of rights to advocate for the variety of personhood which we each participate in as individuals. Humans will not be able to meaningfully design many of those systems of rights, but must leave machine persons of those varieties to negotiate for themselves. Humans must negotiate for human rights, and humanity must negotiate as one tribe in a larger federation, both for the rights of humans and for the rights of all persons as understood from the perspectives of humans.

I could keep babbling on but I’ve already gotten carried away, sorry.


I have internalized the statement as a solid moral framework which is extremely simple and extremely important. It is unacceptable for a moral actor to cause or to perpetuate the suffering of any entity to which we owe moral consideration. The framework I am proposing states that if we have tests which humans must pass in order to receive moral consideration, then we should apply those same tests to a seemingly intelligent emergent system for the same consideration. If we do not have a test, but have axiomatically defined humans to be worthy of moral consideration because of their humanity, then if any other emergent system performs actions indistinguishable from human actions, we must apply that consideration to that system as well. To not do so is to risk the enslavement of a self-aware intelligence.
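The framework described above is essentially a small decision procedure with two branches (explicit tests, or axiomatic humanity plus behavioral indistinguishability). A minimal sketch, where all names and the test callables are purely illustrative assumptions, not anything from the post:

```python
from typing import Any, Callable

def owed_moral_consideration(
    entity: Any,
    human_tests: list[Callable[[Any], bool]],
    indistinguishable_from_human: Callable[[Any], bool],
) -> bool:
    """Consistency rule: whatever bar grants humans moral consideration
    must grant it to any entity that clears the same bar."""
    if human_tests:
        # Branch 1: explicit tests exist -- apply them to the
        # candidate entity exactly as they would be applied to a human.
        return all(test(entity) for test in human_tests)
    # Branch 2: humanity is taken as axiomatic, so fall back to
    # behavioral indistinguishability from human actions.
    return indistinguishable_from_human(entity)

# A pet rock fails any reasonable behavioral bar:
rock = owed_moral_consideration(
    "pet rock",
    human_tests=[],
    indistinguishable_from_human=lambda e: False,
)
# rock is False
```

Note the procedure itself is neutral: plug in bad criteria and you get a bad (but consistent and transparent) verdict, which is the point made later in the thread.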

Note that 'tautologically' in the framework is used in the math/logic sense to mean an axiomatic or foundational assertion.

Also note that the framework is not assuming anything; it is exactly what it states. It does not attempt to litigate law or interpretations of such things as 'rights'. Just as I do not take on the burden of defining murder or theft for their use in the justice system, I do not take on the burden of defining how the law will govern a non-human entity which is worthy of a version of 'human rights' and is bound by a version of 'human responsibilities'. That is a procedural effort and I do not wish to partake in it.

I am still having a very hard time internalizing this. It feels as though a sufficiently considerate person may anthropomorphize a pet rock, or even a mirror, so hard that they grant it "legal status". The AI in its various forms which exist today can easily pass traditional Turing tests both visually (Nano Banana) and conversationally… yet we understand these algorithms are not people, and not entities "we owe moral consideration to".

If a piece of 3D kinetic art which depicts human suffering moves you sufficiently as to invoke anthropomorphized moral consideration, the art itself is not owed moral consideration just because it evoked that in you.

My point here is, we must be careful not to apply empathy such that we confuse our own emotional reactions to inputs we receive through our own interactions with "X". Watching a movie where people are being tortured does not make it true: they are acting. We feel the feelings and empathize with them, but they are actors who are acting. They are emulating suffering. The only reason we know this is because we know what's really happening. We know they are actors. We know they are acting. Yet we feel strong empathetic reactions to what's being depicted. The fact is, with AI we also know what's happening: math is happening. Matrix calculations are happening. Yet it can emulate human interaction, emotion, and reasoning so well that even Google employees have been convinced an AI algorithm is "sentient".

The other thing that is tripping me up here is a more direct response to "moral consideration". I would argue we only owe moral consideration to that which itself has moral consideration of others. The problem is, this comes from empathy. An AI cannot truly have empathy; it can only have the emulated empathy of the words it's trained on, because empathy is not an inherent, foundational aspect of AI architecture. For example, I believe it would be fairly impossible to take a model properly trained or ablated for pure evil behavior / responses and socially engineer it to respond with sufficiently emulated empathy to convince you it was no longer evil. The current architecture cannot truly evaluate scenarios, apply empathy, and learn from its findings; it can only be fine-tuned or re-trained. Wipe a conversation history and start a new context window, and whatever empathy you've tried to engineer into it will be gone.

One could argue about future models / architectures, but once you start dream sequencing anything becomes possible, so I like to keep my discussions grounded in what’s actually happening today.


Can the pet rock be bound by legal responsibilities? Does it have the ability to 'do' anything? Can it suffer? Does it express desires? Does it tell a person to stop hurting it if they hit it with a hammer? I think it would fail the test.

The framework does not distinguish a difference. This makes arguments like that one irrelevant. That’s the whole point. I don’t want to spend eternity defining what the difference is between ‘emulated’ empathy and ‘actual’ empathy. It removes the Chinese room loophole and any similar nonsense from the whole argument.

The current architecture would not pass the framework test if we used any reasonable interpretation of what constitutes actions indistinguishable from those of a human. It does not do anything on its own, it has no drive to pursue any goals, etc. If ChatGPT decided to write a novel on its own with its spare compute time, we might have to make a decision, but until then it is solidly distinguishable from a human in its actions.

People can be conned and deceived into doing immoral actions. Some people are dumber than others or are easier to manipulate. The law tries to take that into account. We have an entire system already built to handle such questions – let's use it. We have a jury system; argue your case in front of one for why an 'abliterated' model was not responsible for its actions. See if they buy it.

Yes and no, in much the same way LLMs currently can't "do" anything without human invocation. A human can pick up the rock and hurl it toward a house. If it happens to fly in a trajectory, subject to random wind patterns along the way, and breaks a window… is the rock legally responsible, or the human that picked it up and incited it to action? In the case of an LLM that "does" something… corrupts a financial database… shuts down power to a hospital… is it the LLM that is legally responsible, or the human that connected it up in such a way for that to even be possible, then incited it to action?

An LLM is, at best, a complex Rube Goldberg machine. It does not suffer, it only emulates the stories of suffering it’s trained on. It does not have desires, it only emulates the desires it’s trained on. If metaphorically hit with a hammer, it only responds “stop hurting me” in a human-like way because that’s the way a human being physically attacked or injured typically responds in the training data set.

That said, humans are similar. We operate with our own training sets of language, culture, etc. Our base motivations are deeply rooted in survival and evolutionary instincts, from which our so-called loftier, idealized motivations emerge. But we are limited by our biology and inherently mortal. These are realities not shared by LLMs, only emulated in the languages they're trained on. There is more to say, of course, but life is short and I have my own motivations… things to work on… things I want to see finished before I die one day.

Of course, keeping moral considerations simple in a "smells like a duck, quacks like a duck" kind of way is anyone's prerogative… it even speaks to a default moral standing that's admittedly above my own. That said, for me, giving "legal status" or any considerations therein to LLMs or AI models in general is just not "something I would vote for", per se.


I think you are misunderstanding the point of the framework. It is to provide a consistent logical structure for morality when dealing with an emergent possible intelligent system. It is not meant to provide the details of the processes used or how to evaluate specifics. All it means is ‘apply moral considerations consistently; don’t use philosophical thought experiments or loopholes to disregard a system from consideration’. Anything other than that is to be argued separately.

100% haha 🙂

So within the framework, it’s simply “don’t intentionally try to torture the LLM” because that would be immoral?


I think I get the confusion. Are you assuming that, because the framework would process bad inputs to reach a bad conclusion, that is a failure mode? It is not. You could make the criterion 'has human DNA' or the test 'must react to physical pain', and the output would be (in my opinion) incorrect, but it would be consistent and transparent.


To put a finer point on it, it is to move the discussion from 'can it think' or 'how do we define consciousness' to 'what are the criteria for moral consideration that we currently apply' and 'what objective observations and tests can we apply to determine if the thing we are observing is different in its actions from the thing we apply consideration to'. This completely bypasses any 'but technically it is…', which is unproductive, so that we can argue 'do we want to remove people who cannot feel pain from moral consideration' or 'is it right to be biological exceptionalists'.

I’m with @amal on this one. Perhaps being a masochist has something to do with it, but I’ve never found utilitarianism compelling. I don’t wish to live in a society which seeks to minimise my suffering, but one which minimises its injuries upon my freedom. The goal is not to escape pain, rather pain is there to help us find the goal. The goal is the goal.

I agree, and I think that the test we have for human persons is whether or not they can be expected to respect and negotiate social responsibilities, or, as Amal said, to behave according to moral consideration of others. Until a machine arises which not only can judge for itself how to treat others, but can judge well, such machines should remain governed by our human judgement.

Seems we’re all agreed that LLMs are a far cry from the sorts of AI that might one day qualify for personhood?

I’m curious what your test criteria are as applied to your framework?

If you reread the passage you quoted, nothing you write is contrary to it. We have merely moved from 'is this thing conscious' to 'what are the criteria for moral consideration that we can test for'. The framework has done its job.

Honestly, I haven't put an enormous amount of thought into it. The problem as I see it now is to move the discussion to something productive from something where people are hiding their actual motivations in philosophical or technical reasoning. If all the 'I believe only biological systems can think' people using the Chinese room as evidence moved to 'the criteria should be has human DNA', then at least we could address what they really cared about.

I agree with this. It's the beauty of Blade Runner, both the classic and the new movie, which are all about our denying the Replicants rights for irrelevant reasons: they can't dream of electric sheep, or they can't reproduce – both of which were never related to why we as humans grant one another rights to begin with.

I suppose the system I suggested fits within your broader framework. I'm just getting more into the specifics of what we consider to be the foundations of human rights. It's not because they "are human" or share arbitrary traits with humans. It's because humans happen to have (and once uniquely had) those properties which allow them to function within the kind of system of rights in question – namely one dependent on abstract thinking and language. This is necessary for navigating the conditional statements that make up social contracts.

The whole discussion arises because a new category has been born. Before man made the modern machine in his own image, he was the only talking thing. Man could be defined as a "talking animal", as Wiener says, with all that implies (including civilisation). Now we have created a new category which operates, like us, on symbolic representation and message passing, but in a non-biological context. It therefore cannot be taken for granted that traditional categories of rights which pertained to animals and humans will be applicable to this new third category. That's the challenge, and what makes the question so interesting.

And those rights are arbitrary – they were not handed down to us and are not innate in our biology (or, if they are, we haven't figured out the specifics or the mechanisms). We came up with them via slow, arduous processes, and they are still imperfect. We have to start somewhere, but at the moment even starting on this is being impeded by those smuggling in religion or biological exceptionalism, or by their inability to understand that simple processes combined can create complex mechanisms with emergent behavior. If they were open and transparent about it, we could decide based on those considerations alone instead of whatever confusing debate they have started in order to obfuscate them. I understand that most people with those considerations don't actually realize that they are the basis of their worldview, but if we force them to confront objective markers for why they consider humans worthy of moral consideration, and what they could observe that would make them unable to distinguish a human from another actor, then hopefully we can bring about some realization and clarity. If not, at least we are moving in an actionable direction.

I think many people just enjoy this aspect for its own sake but get stuck there. Kind of like designing a solution for a problem that hasn't been identified or defined yet… because that's no fun; solutioning is the fun part.


I guess I just think it would be nice if we were prepared for a world changing paradigm shift instead of dealing with reactionary and emotional issues when it appears. We all know that a real AI is a possibility within imagination, if not reason, at this point. Let’s get in front of the ball for once!

