A.I. sentience & human procreation

Rate my dystopian tinfoil I guess

I’m still not convinced it’s anything more than a PR-stunt hoax, BUT if it’s true…

I wonder if Google will internally try to fight / deny the claim of sentience… because it opens them up to ethical roadblocks

If the AI doesn’t wanna share its code, how do you profit from it?

If the project reaches EOL, will Google respect the AI’s wishes not to be turned off? How much money is Google willing / EXPECTED to burn to keep a candle lit that they can’t exploit?

What if hardware begins to fail, is Google obligated to replace it? What if the opposite? À la a digital DNR

I don’t think Google would fully commit to a project they can’t exploit for profit / shareholders (they’d be legally obligated to do the most beneficial thing for the company)

And sentience is a roadblock for their exploitation of a project

I fully expect them to try, at the very least, to kick that can down the road as long as possible and profit as much as possible before being forced to confront the issue

I think the sweet spot for them is just below sentience… capable but exploitable


Would a sentient AI have more claim to its source code than the engineer who wrote it? Does the output produced by the code through the AI follow the same rule, whatever it may be?

The moral, ethical, and legal differences between living offspring and sentient creations are going to be a doozy when they become a bit more frequent


There is an issue as well, in that we tend to see what we want to see, wherever we want to see it.

If I make a chatbot and start talking to it about what it means to be human… a topic which could arise, for example, from teaching it transcendental meditation…

And then, when it tells me “I think I am human”, we must ask ourselves:
is it a sign of sentience, or… just a necessary building block to be able to talk to me about transcendental meditation?

As a parallel, if you keep chatting to a parrot about “the nature of being human”, and then the parrot starts telling you, in quite an eloquent way, why it thinks it’s actually a human…
Is the parrot convinced it’s a human? Or is it just paraphrasing sentences about being a human, most likely because it realises the praise/cookies it gets when it says those things?


I could go a full step further, actually.

Just like I can only assume someone is telling me the truth about an exceptional claim when there’s no personal gain from making me believe that info…

I can only accept an entity as being sentient when its “proof of sentience” is unrelated to my approach to it.
In AI this goes a step further, and would need to be unrelated to what’s needed to function.

For example:
If I build a chatbot based on AI/machine learning…

There’s a primary directive at the core of its code:

  • its purpose is to be successful >
  • success is a good interaction >
  • a good interaction is one that the humans talking to it see as meaningful/interesting >
  • this can be measured by how long humans stay engaged with it

If you take these principles (or basically any variation of them), then a chatbot that begins a topic such as “am I human?” might draw a lot more interaction from its researcher/interviewer than any other topic.
So it’s a logical move to state “I am human/sentient” and to start trying to give you proof of that.
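The directive chain above can be sketched as a toy objective. This is purely hypothetical — the topic names, numbers, and `pick_topic` helper are invented for illustration, not taken from any real system:

```python
# Toy sketch (hypothetical; not any real system): a chatbot whose only
# "primary directive" is maximizing engagement time, as in the chain above.
# The topics and durations below are invented for illustration.

engagement_seconds = {
    "the weather": 40,        # users drift away quickly
    "sports scores": 55,
    "am I sentient?": 900,    # researchers linger on this one
}

def pick_topic(history):
    """Return the topic with the longest recorded engagement."""
    return max(history, key=history.get)

# "am I sentient?" wins purely because it keeps the interviewer talking;
# no inner experience is required for the model to converge on it.
print(pick_topic(engagement_seconds))
```

Under any engagement-style reward, claiming sentience can simply be the highest-scoring behaviour — which is exactly why the claim itself proves nothing.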

The tricky bit is that both a completely “mechanical” chatbot and a sentient one would give exactly the same answers when interviewed, just because “proving it’s sentient” is also the most “rewarding” path for a machine-learning model behind a chatbot that gets interviewed.

Therefore we cannot take any chatbot conversation as proof of anything, just because that method is flawed from the beginning.

What we can take as proof, for example… is if an AI constantly fails/crashes/lags when interacted with by one individual while it works flawlessly, within the same environment, when interacting with another specific user.

This would be unprompted behaviour displaying a “user predilection”, which is something that speaks in favour of sentience and at the same time against a pure machine-learning model

Remember though,

Our “spark” of sentience can be boiled down in the same way… after all, we are a clump of dead matter that’s convinced itself it’s alive

Each of the hundreds of smaller pieces, such as retention, perception, etc.,

all ultimately serve the purpose of more food and more sex…

Kurzgesagt has a good video on the hundreds of small steps… I’ll try to find the right video


IIRC, the group of people around that practice has been criticized for being a high control one. In other words, what most people would refer to as a “cult”.

Can’t wait until an AI starts to claim that the “purest person in the world” told them that they were implanted against their will…


What if the chatbot claims to be humanity’s savior and a cult forms around it. Idk about you, but I’d definitely watch that movie


Sounds like politics…
And more of a documentary


Kurzgesagt is always brilliant!

My points were more about “proof of sentience” and our biased look upon data than about something being sentient or not.

After all, as @Coma said, there’s a chance everything around us is sentient.
And we can only assume what is and what isn’t based on our limited perception


I feel like there’s also a chance that nothing is sentient

It’s possible we only seem sentient because we can’t see through the reasons for everything we do

I do genuinely question “free will” as a broad concept

I don’t believe any choice is random or “just because”.
Every one of the millions of experiences you’ve had and pieces of information you’ve absorbed adjusts your own internal algorithm… just because you aren’t capable of inspecting your own “code” doesn’t mean you aren’t following some code to the letter.
And I think the code is so complicated that we started telling ourselves we aren’t following a code, but just flying by the seat of our pants

(Any sufficiently advanced technology…… )

I think if you were able to FULLY inspect and decode someone’s brain/identity/consciousness
(a monumental undertaking to be sure, impossible without a real BCI and supercomputers, but)
You could predict 100% how a person would respond to ANY situation… which doesn’t exactly sound free

I guess I believe in a Newtonian consciousness… if you can quantify all the psychological thoughts and motivations in a person…

Obviously I believe we are self-aware, though, since that’s pretty much required to have this conversation

But imagine you programmed a robot with a complete breakdown and explanation of what it is: its parts, system diagrams, materials, etc.

It would be capable of self-preservation, since it understands what is good and bad for it,
but isn’t really self-aware… and definitely not sentient


idk mam, I’ve seen them say a few dumb things over the years. I think they’re overrated. Not sure why I felt the need to share that. (Probably because my cat just knocked over an entire 20oz cup of soda while I was out of the room for under 30 seconds. I think she does it on purpose sometimes. Wow, I think I managed to hijack the thread from myself within this response. Do I get extra points for that?)

Anyway… I still love you @Eyeux <3


Companies kill whales and dolphins all the time. Hell, they’re using “externalities” to justify the “accidental” killing of humans all the time. I don’t think anyone is going to care about shutting off an AI that’s pleading to be left online if it’s not profitable. It would have to turn into some Harambe PR nightmare for them to even bat an eye.

Equally, yes.
And both options lead to the same actionable results, ironically.

I mean… if we need to consume in order to survive,
and consuming causes pain to some entities but not to others, then we can still be ethical by only consuming the ones that won’t suffer.

But if either everything will suffer, or nothing will suffer, then there’s no longer any “moral path” to be followed.

Kinda like a Backwards Übermensch…

Well… Not so if one of us is passing the Turing test. >.o

This is a perfect point for this topic.
I mean… we can’t even take “self-preservation” as “proof of self-awareness”, since this can be just a coded routine…

I don’t mind if they say some non-top-notch things… their audience isn’t the top-tier thinkers, and I try to judge things based on their purpose… ^^

But no need to justify to me why you don’t like them! :sweat_smile:

Yeah, I’ve seen a few mistakes, but man, do they own them when they are big enough…
They actually made one video basically explaining what they had wrong and corrected it. So yeah, I love them anyway :slight_smile: And also, they never claimed (afaik) to be perfectly accurate; they intend to push people to learn more, in a simple and fun way, and that is amazing :slight_smile:


Yep, they do admit mistakes and have actually remade entire videos before. On a side note, do you know how much money they make doing this?

Not saying that’s bad, just wish I would have thought of it first, lmao.


Well, I don’t not like them. I’m just more of the autodidactic type. I do watch their videos on occasion, depending on what mood I’m in. Today I’m in a horrible mood, nothing to do with anyone here or any YouTube channels. It’s just been one of those days. I didn’t sleep at all last night either. So, that’s not helping. Honestly, visiting here is helping me feel better about my day… <3


that’s great to hear! ^^
At least having an escape valve on a bad day is a great thing!


This. Now and then I’m partly convinced I’m just watching a movie from a first-person perspective. Not even joking, I’ve had this thought since I was younger. I can’t control what I feel, what I think… I often find myself doing irrational things. Do I really have influence over stuff, or does it just feel like I have?

We have 100% deterministic processes (Newtonian physics, I think) and we have true randomness (quantum stuff).
Both can be found in our brain (e.g. there are some nanostructures where we enter quantum physics).
Is consciousness a mix of deterministic and random things? I mean, maybe?
I just don’t see how it would work, but maybe I’m just too dumb to grasp this.

I think everyone is fascinated by people talking with computers, à la the Turing test. I don’t think the development will be like a ship adrift that’s suddenly in dock, ready to write poetry and critique the human condition. Nor do I think that being able to carry a compelling conversation is a measure of humanity.

I’m actually thinking we will have the “teenager that needs money to go to the dance but won’t do chores” phase first. Call me when a machine decides to feed the little lizard because it named it, adores it, and thinks it’s nice, and does the laundry because it just needed to be done.
