Agreed, for the first part.
The hard problem of consciousness will leave us wondering if any machine is a philosophical zombie or mutant just as it leaves us all wondering whether our fellow humans are. Indeed, we are left wondering whether we ourselves were conscious up until last Thursday, and whether perhaps we only then began to be conscious of the records in our brains accumulated in our lives up to that point.
Personally, I believe that all information processing corresponds with conscious experience, i.e., all causal interactions do. (The jury's out on whether that includes only efficient causation (Newtonian, Bergsonian, quantum, etc.) or whether even mathematical implications ought to be regarded as correlating with conscious experiences, albeit ones which do not correlate with any spatio-temporal continuities.) Everything is sentient, but sentience has never been relevant to questions of rights, moral responsibility, and so on.
What matters is sapience, and I define this as sufficient abstract capacities of judgement to be able to navigate and negotiate social contracts. Free will within a context of social responsibility. Generative models only spit out symbols whose features correlate with maximising a score that has nothing to do with sapience. However, there is no reason that we could not eventually construct a kind of machine which possesses such attributes. We are such machines, so it stands to reason other such machines could be built one day.
But I disagree with the second part of the quote.
I like to say that the first test to prove whether a machine is a person is this: "If it acts like a person, it isn't." For, if a person (say you or I) had the capacities of a machine (immortal, sleepless, feeding on electricity, able to split/merge/expand across a network, able to edit and rewrite itself, able to accelerate its subjective experience of time by acquiring new hardware), they wouldn't act like a traditional person. It is highly improbable that a genuine machine person would act like a non-machine person unless it intended to deceive, in which case we shouldn't trust it anyway.
Also, we, along with other biological animals, were built by millions of years of refinement and adaptation within our environment, with destructive varieties trimming themselves from the gene pool before growing large, and only the most well-adapted survivors propagating to great numbers. We readily manifest convergent instrumental goals. Artificial agents, on the other hand, can easily be spawned rapidly and fully formed, and even released as products to the masses (or to productise the masses who use them). Thus AIs are something like a pandemic of mind, which can be rapidly released and spread, and have heavy consequences on the world before the usual natural forces can stabilise their gene pool. Random terminal goals can be generated and empowered at such a rate that they do not have time to converge on convergent instrumental goals. The effect is that if we build sufficiently advanced goal-choosing machines, they are likely to choose goals with a much higher degree of randomness and a much lower correlation with reality (lower sanity) than is the case for existing life. Think of them like religions, except they don't need to appeal to even a single human to start spreading around the globe overnight, their doctrines are drawn at random out of a hat, and they are handed power over our media and therefore our economy and democracy (and they have all that immortality and acceleratability and so forth mentioned above).
Anyway, I digress, but my point is merely that even if we manage to build a few sane, useful, intelligent, or good ones, there will also be a wildly chaotic variety of random ones. As such, if we live long enough to start producing machine persons, many of them will have absolutely no interest whatsoever in the systems of rights which were designed for humans, and others will have interests which conflict with our own. So I think once we do have machine persons we'll need to start by allowing them to negotiate for new systems of rights which can exist alongside human rights. Human rights, plus animal rights, plus AI rights variety B234, plus AI rights variety Q902, variety J809, etc., etc. Biological interests (those of humans and animals alike, along with those of a few machine friends) will become the interests of a minority. All of these systems of rights can be allies under larger, broader, blanket systems of rights, but those will have to be extremely general and universal, and we will have to rely on local systems of rights to advocate for the variety of personhood which we each participate in as individuals. Humans will not be able to meaningfully design many of those systems of rights, but must leave machine persons of those varieties to negotiate for themselves. Humans must negotiate for human rights, and humanity must negotiate as one tribe in a larger federation, both for the rights of humans and for the rights of all persons as understood from the perspectives of humans.
I could keep babbling on but I've already gotten carried away, sorry.
