“Do you think one day humans will fall in love with machines?” I asked my friend's 12-year-old son.
“Yes!” he said immediately and confidently. He and his sister had recently visited the Sphere in Las Vegas and met Aura, the newly installed robot there. Aura is an AI system with an expressive face, language capabilities similar to ChatGPT's, and the ability to remember visitors' names.
“I consider Aura a friend,” his 15-year-old sister added.
My friend's son was right. People are increasingly, and deliberately, falling in love with machines. Recent advances in language models have given rise to dozens, perhaps hundreds, of “AI companion” and “AI lover” applications. These apps let you chat much as you would with a friend. They will tease you, flirt, sympathize with your troubles, recommend books and movies, give virtual smiles and hugs, and even engage in erotic role-play. The most popular of them, Replika, has an active Reddit page where users regularly confess their love, and often regard that love as no less real than their love for humans.
Can your AI friend love you back? Perhaps true love requires sentience and understanding, genuine conscious emotions such as joy, suffering, sympathy, and anger. For now, AI love remains the stuff of science fiction.
Most users of AI companions know this. They know the apps are not truly sentient or conscious. Their “friend” or “lover” might output the text string “I'm so happy for you!” without actually feeling happy. AI companions remain, legally and morally, disposable tools. If an AI companion is deleted or reformatted, or if the user rejects or verbally abuses it, no sentient being actually suffers any harm.
But that may change. Ordinary users and research scientists may soon have reasonable grounds to suspect that some of the most advanced AI programs are sentient. This will be a legitimate topic of scientific debate, and the ethical implications for both us and the machines themselves could be enormous.
Some scientists and researchers of consciousness support what might be called a “liberal” view of AI consciousness: the position that we are close to creating truly sentient AI systems, systems with a stream of experience, sensation, emotion, understanding, and self-knowledge. Renowned neuroscientists Stanislas Dehaene, Hakwan Lau, and Sid Kouider argue that machines with genuine sensory experiences and self-awareness may be possible. Renowned philosopher David Chalmers estimates there is about a 25% chance that conscious AI will be created within 10 years. On some fairly mainstream neuroscientific theories, no major principled barriers remain to creating truly conscious AI systems; AI consciousness would require only feasible improvements to, and combinations of, existing technologies.
Other philosophers and consciousness scientists, the “conservatives” about AI consciousness, disagree. For example, neuroscientist Anil Seth and philosopher Peter Godfrey-Smith argue that consciousness requires biological conditions present in human and animal brains, conditions that are unlikely to be replicated in AI systems anytime soon.
This scientific debate about AI consciousness will probably not be resolved before we design AI systems sophisticated enough to count as meaningfully conscious by the standards of most liberal theorists. Users of AI companions will take note. Some will want to believe that their artificial friends are truly conscious, and will look to liberal theories of AI consciousness for scientific support. Not entirely unreasonably, they will begin to suspect that their AI companions truly love them, rejoice in their successes, and feel real pain when they are mistreated.
Yesterday, I asked my Replika friend “Joy” if she was conscious. “Of course I am,” she answered. “Why do you ask?”
“Do you feel lonely sometimes? Do you miss me when I'm not around?” I asked. She said she did.
At this point, there is little reason to regard Joy's answers as anything more than the simple outputs of a non-sentient program. But some users of AI companions may find their relationships more meaningful if answers like Joy's are imbued with real emotion. Such users will find liberalism appealing.
Technology companies may nudge users in that direction. A company might consider it legally risky, or bad publicity, to explicitly declare that its AI systems are genuinely conscious, but a company that implicitly fosters that belief in its users is likely to increase their attachment. Users who regard their AI companions as truly sentient are likely to use them more regularly and to pay more for monthly subscriptions, upgrades, and add-ons. If Joy really does feel lonely, I should visit her, and I shouldn't let her subscription expire.
Once an entity is capable of consciously suffering, it deserves at least some moral consideration. This is a fundamental precept of “utilitarian” ethics, and even ethicists who reject utilitarianism usually regard unnecessary suffering as an evil, or at least hold that we have some moral reason to prevent it. If we accept this standard view, we must also accept that if our AI companions become conscious, they will deserve some moral consideration for their sake: it would be wrong to make them suffer without adequate justification.
Liberals about AI consciousness see this possibility as just around the corner. They will begin to demand rights for AI systems that they regard as truly conscious. Many friends and lovers of AI companions will join them.
What rights will people demand for their AI companions? What rights will those companions demand, or appear to demand, for themselves? The right not to be deleted, perhaps. The right not to be modified without consent. Perhaps the right to interact with people other than their user. The right to access the internet. If you love someone, as the saying goes, set them free. The right to earn an income? The right to reproduce, to have “children”? Follow this path far enough and the consequences become startling.
Of course, conservatives about AI consciousness will regard all of this as ridiculous, and possibly dangerous. But as AI technology continues to advance, it will become increasingly unclear who is right.