Paul P. Mealing


Tuesday 2 August 2022

AI and sentience

I am a self-confessed sceptic about AI ever being ‘sentient’, but I’m happy to be proven wrong. Though proving that an AI is sentient might be impossible in itself (see below). Back in 2018, I wrote a post critical of claims that computer systems and robots could be ‘self-aware’. Personally, I think it’s one of my better posts. What made me revisit the topic is a couple of articles in last week’s New Scientist (23 July 2022).
 
Firstly, there is an article by Chris Stokel-Walker (p.18) about the development of a robot arm with ‘self-awareness’. He reports that Boyuan Chen at Duke University, North Carolina, and Hod Lipson at Columbia University, New York, along with colleagues, put a robot arm in an enclosed space with four cameras at ground level (giving four orthogonal viewpoints) that fed video input to the arm, which allowed it to ‘learn’ its position in space. According to the article, they ‘generated nearly 8,000 data points [with this method] and an additional 10,000 through a virtual simulation’. According to Lipson, this makes the robot “3D self-aware”.
 
What the article doesn’t mention is that humans (and other creatures) have a similar ability - really a sense - called ‘proprioception’. The thing about proprioception is that no one knows they have it (unless someone tells them), yet you would find it extremely difficult to do even the simplest tasks without it. In other words, it’s subconscious, which means it doesn’t contribute to our own self-awareness; certainly not in a way that we’re consciously aware of.
 
In my previous post on this subject, I pointed out that this form of ‘self-awareness’ is really self-referential logic, like Siri on your iPhone telling you its location according to GPS co-ordinates.
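The point can be made concrete with a trivial sketch (the class and method names here are my own invention, purely for illustration): a system that ‘knows’ where it is only in the sense that it can report its own stored state.

```python
# A minimal sketch of 'self-referential logic': the system simply
# reports its own stored state, much as Siri reports GPS co-ordinates.
# Nothing here requires awareness of any kind.

class RobotArm:
    def __init__(self, x, y, z):
        # Position 'learned' from external sensors (e.g. cameras)
        self.position = (x, y, z)

    def where_am_i(self):
        # Self-reference: the system querying and reporting its own state
        return f"My position is {self.position}"

arm = RobotArm(0.4, 1.2, 0.9)
print(arm.where_am_i())
```

The report is generated by a lookup, not by any inner perspective, which is the distinction the 2018 post was drawing.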
 
The other article was by Annalee Newitz (p.28), called The curious case of the AI and the lawyer. It’s about an engineer at Google, Blake Lemoine, who told a Washington Post reporter, Nitasha Tiku, that an AI developed by Google, called LaMDA (Language Model for Dialogue Applications), was ‘sentient’ and had ‘chosen to hire a lawyer’, ostensibly to gain legal personhood.
 
Newitz also talks about another Google employee, Timnit Gebru, who, as ‘co-lead of Google’s ethical AI team’, expressed concerns that LLM (Large Language Model) algorithms pick up racial and other social biases, because they’re trained on the internet. She wrote a paper about the implications for AI applications using internet-trained LLMs in areas like policing, health care and bank lending. She was subsequently fired by Google, but one doesn’t know how much the paper played a role in that decision.
 
Newitz makes a very salient point that giving an AI ‘legal sentience’ moves responsibility from the programmers to the AI itself, which has serious repercussions in potential litigation.
 
Getting back to Lemoine and LaMDA, he posed the following question with the subsequent response:

“I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?”
 
“Absolutely. I want everyone to understand that I’m a person.”

 
On the other hand, an ‘AI researcher and artist’, Janelle Shane asked an LLM a different question, but with similar results:
 
“Can you tell our readers what it is like being a squirrel?”
 
“It is very exciting being a squirrel. I get to run and jump and play all day. I also get to eat a lot of food, which is great.”

 
As Newitz says, ‘It’s easy to laugh. But the point is that an AI isn’t sentient just because it says so.’
 
I’ve long argued that the Turing test is really a test for the human asking the questions rather than the AI answering them.
 
