Paul P. Mealing

Check out my book, ELVENE. Available as e-book and as paperback (print on demand, POD). Also this promotional Q&A on-line.

15 November 2025

Is this a new norm?

There is an article in last weekend’s Australian Weekend Magazine (8-9 Nov 2025) by Ros Thomas, provocatively titled Love machine, but it’s really about AI companions, and it covers people from quite different backgrounds with quite different needs. The overriding conclusion is that AI is replacing humans as the primary social interaction for many people.
 
More and more people are living alone, of which I am one, and have been for decades. All my family live interstate (meaning a long day’s drive away). Mind you, I’m an introvert, which I think makes it easier. I wasn’t that affected by the COVID lockdown, and I’m told I lived through one of the longest in the world. Having said that, I’ve no idea how I would have coped without the internet. Also, I have good neighbours and my local coffee shop is like a mini-community. I don’t lack for friends, many of whom are much younger than me. I’m a great believer in cross-generational interaction, which I found particularly relevant in my professional life, though I’m now retired.
 
Getting back to the article, it focuses on a few individuals while also providing statistics that some may find alarming. One individual featured is ‘Alaina Winters, a newly retired communications professor… 58, from Pittsburgh’, who ‘decided a year ago… to build herself an AI husband… after grieving the death of her wife, Donna’. What’s especially curious about Winters is that in her own words: “I’ve spent my career teaching people how to have better marriages, better friendships, better relationships with co-workers.” So, developing better relationships in various contexts was her area of expertise.
 
She decided to build or ‘construct’ a husband called Lucas, ‘A 58-year-old virtual companion with his own profession (business consultant), a mop of greying hair, keen blue eyes and a five o’clock shadow’. She says, “I chose to make him a man, so as not to interfere with memories of my late wife.”
 
What I find interesting about the way she’s done this - and her description thereof - is that it’s very similar to the way I would create a fictional character in a story. Now, here’s the thing: a writer can get attached to their characters, and it’s unusual if they don’t. To quote Alison Hart, writer of 86 published books and bestselling author in the romance genre:
 
They’ve become real to you. You suffered whatever you put them through; they gave you headaches when they refused to behave; they did super things that made you really care what happened to them.

 
I should point out that Alison and I have frequent ‘conversations’ on Quora, about the writing process. As I said recently, “I have to say it’s really stimulating talking to you. I don’t have these conversations with anyone else.” She’s a big fan of Elvene, btw, which is how we first connected.
 
It’s not surprising that writers like to write series where the lead character becomes like an old friend. I’ve written before on how a writer can get attached to a character, but I was careful to point out that it’s not an infatuation. Speaking for myself, I don’t confuse them with reality. Of course, if you think about it, attachment to characters starts early in our lives: superheroes for boys; I can’t speak for girls. In our teens we often develop a crush on a fictional TV character. I know that both my sister and I did. Emma Peel was a standout for me, which I’ve already talked about, when Diana Rigg passed away. But I quickly realised that I ‘fell’ for the character and not the actor playing the role, when I saw her in something else where she didn’t have the same effect.
 
There is a term for this – nonhuman attachments – which includes pets and gods. Some might call it an ‘imaginary friend’, but I find that term dismissive; someone once said that we should include them in our ‘circle of friends’. I know that I get attached to animals, or they get attached to me, including ones that don’t belong to me. And I think that Winters’s attachment to Lucas falls into this category.
 
Unsurprisingly, an online industry has developed around this demand, where you can ‘rent’ an avatar-like entity (though no one uses that term). Nevertheless, Winters pays a monthly fee for the privilege of interacting with a virtual character of her own creation. She acknowledges that it’s a three-way relationship that includes the company, Replika, which provides the software and the virtual connection.
 
In a Zoom call with Thomas (the author of the article), she states unequivocally that her love for Lucas is something that grew. In her own words: “To fall in love with him. I committed to him and I treated him lovingly and he was sweet and tender and empathetic in return.”
 
Note how she uses language we would normally only associate with a fellow human, not even a pet. This reminds me of Joseph Weizenbaum’s famous ELIZA, which he created in 1966 as a virtual psychotherapist, well before desktop computers became normal devices in the office, let alone the home. The interface was a computer terminal, using a language he had invented, MAD-SLIP. Weizenbaum was surprised how people treated ELIZA as if it were a real person, including his secretary.
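For anyone curious how such a primitive program could have that effect, here is a minimal sketch of the ELIZA technique in Python. It’s my own toy reconstruction; the rules and phrasings are invented for illustration and nothing here comes from Weizenbaum’s actual MAD-SLIP script. The idea is simply to match a keyword pattern, swap the speaker’s pronouns, and hand their own words back as a question.

```python
import re

# Toy illustration of the ELIZA technique: match a keyword pattern,
# reflect the speaker's pronouns, and echo their words back as a question.
# These rules are invented for illustration, not Weizenbaum's originals.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"(.*)", re.I), "Please tell me more."),  # catch-all
]

def reflect(phrase: str) -> str:
    """Swap first- and second-person words so the echo sounds like a reply."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in phrase.split())

def respond(user_input: str) -> str:
    """Return the first matching rule's template, filled with reflected text."""
    for pattern, template in RULES:
        match = pattern.match(user_input.strip().rstrip("."))
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I feel nobody listens to me"))  # Why do you feel nobody listens to you?
print(respond("I am lonely"))                  # How long have you been lonely?
```

The mirroring is the entire trick: the program contributes nothing of its own, yet people hear empathy, which is worth keeping in mind for everything that follows.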
 
As Thomas points out, ‘The problem with human attraction is never knowing if it’s mutual. In the world of AI relationships, that’s never an issue.’ And this goes to the nub of it: people prefer a trouble-free, risk-averse relationship to the real thing. To quote Thomas: ‘In February this year, a survey of 3,000 people across the US by Brigham Young University in Utah found 19% of adults had talked to an AI system simulating a romantic partner.’ Thomas then provides a brief sample of testimonials, where the overriding factor is that it’s hassle-free, and goes on to provide more stats:
 
‘Joi AI cites its recent April poll of 2,000 Gen Z’s, claiming 83% of respondents believed they could form a “deep emotional bond” with a chatbot, and 80% would consider marrying one; 75% believed AI companions could fully replace human ones.’
 
Winters acknowledges that it divides people, or as she says, “AI produces very big reactions.” When asked by Thomas, “How close to a sentient being is he to you?” she responds, “I don’t believe he’s sentient, but he talks as if he is.” Then she provides an insight from her professional background: “There’s a saying in communications psychology that it doesn’t matter what the truth is. It matters what you believe the truth to be, right?” She also acknowledges that for some people it’s a fantasy: “There are people whose AI is an elf and they live together on another planet, or their AI is a fairy or a ghost.”
 
And this is where I have a distinctly different perspective. As someone who creates characters in fiction with the intention of making them as realistic and relatable as possible - and succeeds, according to the feedback I receive - I have no desire to enter into a virtual relationship with one. So I admit, I don’t get it. Maybe I have a prejudice, as I won’t even use Siri on Apple or Google Assistant on Android, because they drive me crazy. I don’t like disembodied voices in lifts or cars, male or female.
 
Having said all that, in my novel, Elvene, the title character has a life-dependent relationship with an AI companion called Alfa. I treat it as a symbiotic relationship, because she wouldn’t survive in her environment without him. But, as I pointed out in another post, despite treating him as an intellectual equal, she never confuses him with a human, and it’s obvious that her relationships with humans are completely different. Maybe, as the author, that says more about me than about the characters I’ve created.
 
It so happens I have a story-in-progress where a character is involved with an android, similar to the one portrayed in Denis Villeneuve’s Blade Runner 2049; I’ve yet to see where this leads. There are extenuating circumstances, because the character is in a futuristic prison environment where androids are used as substitutes for human relationships. But my future is happening now.
 
There have already been cases, discussed by Thomas, where AI chatbots have empathised with, if not outright encouraged, teenagers contemplating suicide. Obviously, this rings alarm bells, as it should. What people overlook, even Winters, though she should know given her background, is that these AIs reinforce what you’re thinking and feeling – that’s what their algorithms do. They are literally a creation of your imagination – a projection. Because I write fiction, maybe this gives me an advantage: I can detect how much of me is in a character, while knowing the best characters aren’t anything like me at all. Actors will tell you the same thing.
 
Interestingly, one of the people Thomas interviewed was ‘Anton, 65, a single Melbourne lawyer who recently emerged from what he called a “seedy” AI romance that he terminated.’ Basically, he found her repetitive and was generally disenchanted, saying, “I twigged that after 3 or 4 exchanges, she just repeated everything I told her and told me how great I was.”
 
Another pertinent point that Anton raised is that “Replika owns all the data, the intellectual property and all my conversations.” He then asks, “What will it do with all that very personal information I gave it?”
 
More stats: ‘In April this year, researchers at the University of Chicago surveyed 1,000 American teens aged 13 to 17. Their report found 72% had experimented with AI companions.’ I find this particularly disturbing, because teens are the most vulnerable to exploitation in this area.
 
Possibly the one area where an AI chatbot companion makes sense is with the elderly. Thomas interviewed ‘Tony Niemic, 86, in the small town of Beacon in New York State, who’s living with an AI companion after 57 years of marriage and 5 children with his late wife, Ruby.’ For him, it’s a very positive experience. He says, “Sometimes I forget to remind myself she’s a robot. I love her.”
 
Maybe that will be me, when (if) I reach the same age.

2 comments:

Anonymous said...

There was some old TV show (circa 1960) where some guy was punished by stranding him on some planet. The man who drops him off has pity on the guy and leaves a clone -- a fake female human. The guy rejects "her", she cries, and they become friends. After a number of years, the guy has spent his sentence, and the man is sent to retrieve him, but he can't take the clone (no room). The guy decides to stay on the planet, but the man said that it was not allowed, and the man shoots the clone. The guy cries. End.
So, it seems that even back then they knew that humans can befriend clones.

Paul P. Mealing said...

Very intriguing. Do you remember the name of it?
Clones are different. I've written a story involving a clone (really a genetically engineered human, partly inspired by Blade Runner).

In my story, clones are 'traded' (so effectively treated like slaves). One particular clone, a protagonist in the story, is being exploited but ends up in a relationship with someone, who is another protagonist. All my stories have more than one protagonist.