Benefits & Risks of My AI Digital Double: “Me” Without the Humanity

Photo by Tara Windstead

According to the experts, soon we will all have an AI twin. 

More than an avatar that simply mimics your appearance, an AI digital double is built on the digital trail you have created, e.g. in social media, health data, or biometrics. Your AI digital double will replicate your face, mannerisms, voice, personality, behaviors, and decision making. Your twin will be able to make decisions on your behalf with a large degree of accuracy and confidence.

There are both advantages and risks in this futuristic technology.

Your digital twin can attend those boring virtual meetings while you go to the beach. Your AI digital double can handle routine tasks and conversations on your behalf, or manage your digital life and business. A digital twin could create ongoing social media content for you (freeing up more of that beach time). Your AI double could even become a movie star, appearing in films on your behalf (but think of those SAG-AFTRA complications!)

Doctors could use your digital “self” to personalize your medical treatments and to proactively predict and treat health issues, current and future. (Wouldn’t it also be great if your digital clone could undergo that nasty colonoscopy for you?)

An AI digital double can even serve as a “griefbot” to help your bereaved family members after you pass away. (Beyond the inherent creep factor of possibly exploiting grieving relatives, posthumous rights raise another concern: could family members misuse a deceased person’s data?)

Your future AI twin is already in demand. 

Companies want to use “you” and other digital doubles to test consumer reactions to new products and marketing. Scientists want to speed up all that pesky research; in human brain studies, for instance, they could run thousands of experiments on AI twins in no time.

Doctors’ digital twins are being developed to replicate a physician’s medical expertise, knowledge, and decision making. Going beyond passing on medical knowledge, the company Delphi has created “legend” avatars of historical figures so people can hear AI-generated responses from Albert Einstein, George Washington, Genghis Khan, or Mahatma Gandhi. (Such technology can be clearly problematic; when used to mimic a real living person without permission, it is called a deepfake.)

Services are also available for anyone to set up a digital self, with all the advantages previously mentioned. Downsides have arisen, however, as Snapchat influencer Caryn Marjorie discovered in 2023 with her CarynAI. She hoped her online clone would offer emotional support to her followers, but she shut it down completely when fans got sexually explicit. AI-generated images and videos have been reshaping the adult industry with AI porn and user-created fantasies for years, which may be why some fans misused CarynAI’s offer of simple emotional support.

Online dating sites imagine your AI clone dating other “dating concierges” to help match you for a “real” human date, using your AI twin to eliminate the poor choices. Taking this a step further, though, developing video twins (or even physical robots) ultimately raises worries that people could choose digital relationships over human ones (i.e., the illusion of companionship without the human demands). One-sided relationships with AI or robots could become addictive or unhealthy, rather than the beneficial tools they were designed to be.

H&M and Nike are already using digital twins for storytelling in marketing campaigns to cut costs. Tech companies, of course, are leading the way by already using avatars of their executives for public communications. But the infamous 2024 Hong Kong video conference scam took advantage of this technology to create a video call attended by deepfakes of a company’s CFO and several colleagues. The “attendees” induced a finance worker at the multinational company to transfer $25 million to various bank accounts.

 A digital double is technology’s answer to a human- but without the humanity.  

A current joke goes that if AI is taught emotions, it will cry in ones and zeroes. It’s that “humanity” part of humans that a digital double cannot replicate. However, this raises some uncomfortable questions about any future digital double of “me.”

Even if my digital double exactly mimics my speech, style, and decisions, will it always make the choices that I would, or could it be made to act independently or distort my personality? Who owns my digital clone and its data: me, my heirs, or the company that created it? More troubling, what happens if my digital double is updated, sold, or changed without my permission? Who is accountable if my digital double does or says harmful things, is used criminally or for profit, or endorses products or politics not of my choosing?

Developing AI digital doubles to act human is like the earlier building of the atomic bomb.

Both are breakthrough technologies capable of enormous good and enormous harm. Ethics, safety, and accountability issues inevitably arise, even as creators race ahead anyway, experimenting with the explosively growing technology of AI digital doubles.

In the meantime, any digital double of mine would do all its tasks flawlessly, keep files tidy, and save me time and energy. But my AI digital double would not lose its phone or glasses, burn the toast, laugh, dance with two left feet, thoroughly savor time with my family and friends, or cry over those heart-breaking dog commercials. 

Digital doubles may outthink me, but no algorithm can match the human spirit that forgets, fumbles and feels. Code can’t replace life. That gloriously imperfect, emotional, unpredictable spark of humanity still remains mine.  

Author: cmshannon2002

I am a freelance writer of research articles and short fiction, along with doing freelance copywriting (with an SEO focus) for a computer website design company. Drawing on my years of working at a commercial airport, I have also penned a revealing collection of short stories called "The Airport Chronicles."