IIRC there have been a few psychological studies that tried things on AI models and then extrapolated the results to human behavior. This is the same nonsense flipped on its head.
Besides, psychology has tools that can be used to describe and understand interactions between humans. And those can easily be made to fit interactions between humans and machines that mimic human interaction.
This however does not imply that ChatGPT is "human" or has personhood.
I think that answering the philosophical questions about whether or not AIs can become individuals and/or "human" will require significantly more research in the fields of artificial intelligence and neuroscience.
There are some things about OpenAI and the organizations that surround it that I find unnerving. No, I don't fear AI, but I have seen how fragile the connection between the human mind and reality can be, and mass delusions can get dangerous or halt progress very easily.
Yes, but then the problem still requires more input.
What resolves the orientation is knowing where you are in relation to me.
Are you sitting by my side, or in front of me?
But even then, orientation is actually irrelevant.
I've heard that problem described in many places as a "multiple possible solutions problem", because "it depends on your orientation to the gears"…
But it does not.
Ultimately, there is only one answer. If you label the gears from 1 to 6 without specifying any conditions, then the sequence is in the same orientation as whatever you called "clockwise".
So if you touch the top of gear 3 and move it clockwise, your finger will always end up touching gear 4.
It then does not matter if you are across the table from me, or even upside down, or 90 degrees sideways… I might see your finger go "anti-clockwise", and I might see it touch what is, for me, the second gear from left to right… but you labelled them, which makes it so that regardless of me being on the other side, that gear has the label "4" on it.
I've had a whole argument about that with a teacher back in my uni days which I could only resolve by bringing fucking paper gears and labelling tape into the classroom.
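The whole argument can be sketched in a few lines of Python (the function name and gear labels are mine, purely for illustration; all it encodes is the physical fact that adjacent meshed gears spin in opposite directions):

```python
# Six meshed gears in a row, labelled 1..6.
# Adjacent meshed gears always spin in opposite directions, so driving
# one gear in a chosen direction fixes every other gear's direction —
# the labels make the answer independent of where the observer stands.

def gear_directions(driven: int, direction: str, n: int = 6) -> dict:
    """Spin direction of each gear when gear `driven` is turned `direction`."""
    flip = {"CW": "CCW", "CCW": "CW"}
    # Gears an even number of meshes away share the driver's direction;
    # gears an odd number away spin the opposite way.
    return {
        g: direction if (g - driven) % 2 == 0 else flip[direction]
        for g in range(1, n + 1)
    }

dirs = gear_directions(driven=3, direction="CW")
# Gear 3 spins CW, so its top surface moves toward gear 4: a finger
# resting on top of gear 3 gets carried into gear 4, whichever side
# of the table you watch it from.
print(dirs)  # {1: 'CW', 2: 'CCW', 3: 'CW', 4: 'CCW', 5: 'CW', 6: 'CCW'}
```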
Separate reply because it's a very distinct topic:
I hate that sooooo bloody much!!
There is nothing good which comes from that.
It's always possible to apply human behavioural characteristics to any pattern in hindsight, which renders this practice moot.
Psychological theories should not be used to "explain behaviour", because I can take any psychological theory and apply it to any human in hindsight. No matter what you do, I can always find a way to say whatever I want was the reason (that's my main beef with Freud), thus rendering any "explaining" moot.
Which leaves us with: psychological theories are only good for "untying a knot you have", which isn't applicable to AI models unless they start to get depressed… or for "predicting human behaviour".
But not even with humans can we use a single theory to predict everyone's behaviour! If anyone's interested, there's a good point in comparing Freud with Adler, his Nietzsche-oriented disciple. They go to radical opposites, and both are "correct"… only each for a very specific, distinct group of people.
On top of that, most of the studies I saw "extrapolating to human behaviour" not only committed the sin of believing the theory used to extrapolate was "universal" (otherwise any extrapolation becomes invalid), but they also usually committed crass errors of logic!
More than that! Those tools can be used to fit interactions between humans and their body pillows as well! Or their left shoes!
It's just plain old malarkey!
More than that.
In order to say whether my table is cyan… I first need to be able to describe what the colour cyan is, right?
We are still unable to properly define what it is to be "human". There is no consensus!
So how could we ever say that something else is "human"?
Finally, someone who thinks critically and has a clear understanding of psychology.
You summarized what I wanted to say really well. I also hate that I encountered a group that fears AI and exerts undue influence on its members while recruiting AI researchers.
Oh well, the best I can do is to educate people on how those things operate.
I have to put that degree to some sort of use!! Otherwise it's almost a decade of my life thrown away!
Jokes aside, psychology is great! But it's so badly applied!
I used to work with researchers, and so many times I've seen even serious studies published with so many unacknowledged (and hence uncontrolled) variables that they would be rendered moot in any other field!
There is something good which can come from that, though…
One of the main drives behind BCI investment at the moment seems to be exactly the "fear of AI"… which leads investors such as Musk to put money towards BCI studies because "that's one of our better defenses against AI: to enhance ourselves to its level"…
When I first saw this years ago, the context didn't tell me whether it was real or not. I was so disappointed when I looked it up and it turned out to be fake. I remember their website was pretty cool, though.
It's obviously fake. But I'd love to have a modular body. I'd rather go full robot, but I'm not going to judge those who choose to keep some meat.
Yeah, it raised immediate red flags, so I went looking for more of it. But you know when you want to be fooled?
As in… it would be so much better if it were true!!
I would rather find some synthetic meat…
As in, some bits are cool being metal, but "boutique soft tissues" also sound appealing!
The same applies to the video game Soma… I guess that you could argue that you transferred yourself if you manage to temporarily stop your human brain, and then write to it what you experienced while running on different hardware?
As in put your brain in a coma and copy yourself, then live as a robot for a while, and finally power down said robot body and copy yourself back to your meat brain?
If that's achievable, you could argue that as long as there's only one instance of you, you're still the same person and you transferred your mind between bodies.
Or maybe I just want this to be possible… but I guess we'll cross that bridge when we get there.
Yes!
But then… isn't that just us fooling ourselves?
(Not that there's anything wrong with that. Reality is nothing more than an illusion/interpretation we collectively agree upon anyway.)
The core concept behind that is that once we wake up, if we have only one stream of memories, it's still us.
I get that.
But we can also achieve the same thing with, let's say, deep-sleep hypnotherapy:
Put someone in a chemically induced coma, shut down only the superconscious mind, and keep adding mnemonic suggestions. One year later that person wakes up with memories of being transferred into a robot body, then back.
The end result in both cases is exactly the same.
The way I draw the line is…
If, after transferring my mind to silicon, my previous mind still exists, then I'm a copy.
Me intentionally killing the previous mind would not change that.
If the transfer process is destructive by nature… not because we added a destructive step, but because it's impossible to copy without it being destructive… then I'm on the fence.
Now, if the transfer process is non-destructive, but for some inexplicable reason the previous mind ceases functioning… or goes into a stupor… as soon as the transfer happens, then I'll believe it's a transfer, not a copy.
Which makes me think one aspect they haven't properly explored even in the books is "sleeve swinging".
There would surely be a market for swapping bodies as soon as that's a thing!!
It's like they say in Shadowrun:
"And as soon as humans discovered magic exists, they quickly rushed to do the three things they do best: exploit it for profit, exploit it for porn, and most importantly, exploit it for profiting out of porn."