Benefits & Risks of My AI Digital Double- “Me” Without the Humanity

Photo by Tara Windstead

According to the experts, soon we will all have an AI twin. 

More than an avatar that simply mimics your appearance, an AI digital double is built on the digital trail you have created- e.g. your social media posts, health data, and biometrics. Your AI digital double will replicate your face, mannerisms, voice, personality, behaviors, and decision-making. Your twin will be able to make decisions on your behalf with a large degree of accuracy and confidence. 

There are both advantages and risks in this futuristic technology.

Your digital twin can attend those boring virtual meetings while you go to the beach. Your AI digital double can handle routine tasks and conversations on your behalf, or manage your digital life and business. A digital twin could create ongoing social media content for you (freeing up more of that beach time). Your AI double could even become a movie star- appearing in films on your behalf (but think of those SAG-AFTRA complications!) 

Doctors could use your digital “self” to personalize your medical treatments and to proactively predict and treat health issues, current and future. (Wouldn’t it also be great if your digital clone could do that nasty colonoscopy for you?) 

An AI digital double can even serve as a “griefbot” to help your bereaved family members after you pass away. (Beyond the inherent creep factor- the possible exploitation of grieving relatives- posthumous rights raise another concern: could family members misuse a deceased person’s data?) 

Your future AI twin is already in demand. 

Companies want to use “you” and other digital doubles to test consumer reactions to new products and marketing. Scientists want to speed up all that pesky research- in human brain studies, for instance, they could run thousands of experiments on AI twins in no time. 

Doctors’ digital twins are being developed to replicate a physician’s medical expertise, knowledge, and decision-making. Going beyond passing on medical knowledge, the company Delphi has created “legend” avatars of historical figures so people can hear AI-generated responses from Albert Einstein, George Washington, Genghis Khan, or Mahatma Gandhi. (Such technology can clearly be problematic- when used to mimic a real living person without permission, it is called a deepfake.)

Services are also available for anyone to set up their own digital selves, with all the many advantages previously mentioned. Downsides have arisen, however- as Snapchat influencer Caryn Marjorie discovered in 2023 with her CarynAI. She hoped her online clone would offer emotional support to her followers, but shut it down completely when fans got sexually explicit. AI-generated images and videos have been reshaping the adult industry with AI-generated porn and user-created fantasies for years- which may be why some fans misused CarynAI’s offer of simple emotional support.

Online dating sites imagine your AI clone dating other “dating concierges,” eliminating poor matches so that you can be paired for a “real” human date. Taking this a step further, though, developing video twins (or even physical robots) raises worries that people could choose digital relationships over human ones- i.e. the illusion of companionship without the human demands. One-sided relationships with AI or robots could become addictive or unhealthy, not the beneficial tools they were designed to be.

H&M and Nike are already using digital twins for storytelling in marketing campaigns to cut costs. Tech companies, of course, are leading the way, already using avatars of their executives for public communications. But the infamous 2024 Hong Kong video conference scam took advantage of this technology to create a video call attended by the company’s deepfaked CFO and several colleagues. The “attendees” induced a finance worker at the multinational company to transfer $25 million to various bank accounts.

A digital double is technology’s answer to a human- but without the humanity.

A current joke goes that if AI is taught emotions, it will cry in ones and zeroes. It’s that “humanity” part of humans that a digital double cannot replicate. However, this raises some uncomfortable questions about any future digital double of “me.” 

Even if my digital double exactly mimics my speech, style and decisions, will it always make the choices that I would- or could it be made to act independently or distort my personality? Who owns my digital clone and its data- me, my heirs, the company that created it? More troubling, what happens if my digital double is updated, sold, or changed without my permission? Who is accountable if my digital double does or says harmful things, is used criminally, or for profit, or endorses products or politics not of my choosing? 

Developing humanlike AI digital doubles is like the building of the atomic bomb. 

Both are breakthrough technologies capable of enormous good and enormous harm. Ethics, safety, and accountability issues inevitably arise, even as creators race along anyway, experimenting with the explosively-growing technology of AI digital doubles. 

In the meantime, any digital double of mine would do all its tasks flawlessly, keep files tidy, and save me time and energy. But my AI digital double would not lose its phone or glasses, burn the toast, laugh, dance with two left feet, thoroughly savor time with my family and friends, or cry over those heart-breaking dog commercials. 

Digital doubles may outthink me, but no algorithm can match the human spirit that forgets, fumbles and feels. Code can’t replace life. That gloriously imperfect, emotional, unpredictable spark of humanity still remains mine.  

The Internet Promised Me Knowledge- AI Lost It In Translation

Doing any type of research has been greatly transformed in our brave new world. Fact-finding used to take a researcher on multiple trips to a library- those quaint tree-based data repositories of books (the original tablets but with no charger required). For those too young to remember them, encyclopedias were specialized books that contained a wealth of useful data- sort of like an analog Google, but alphabetized and way heavier. And those flat papery things called newspapers were the original news feeds- but news feeds that involved no WiFi or likes- maybe just some ink on your fingers. Plus (though it’s hard for some to imagine), newspapers were knowledge sources that you couldn’t scroll through. Also they only s-l-o-w-l-y, boringly updated every 24 hours.

Today’s research usually happens online- which can be risky, since misinformation (not only the evil deliberate kind) is definitely out there in internet land. You truly cannot believe everything you read- AI, search engines, and translators- or all three combined- can get data horribly wrong.

Lost in Translation: The Bot Version

AI (chatbots, helpers, content generators) can hallucinate facts and invent studies, because it is trained on huge data sets that do not necessarily distinguish good information from bad. (AI is the student that didn’t read the book but still wrote a 1,000-word essay.)

Search engines simply rank and amplify information, rewarding popularity- not accuracy. Junk can rise above genuine information, and algorithms personalize my “truth” differently from your “truth” through the results they feed us. (The nonsense with the most clicks wins!)

Translators (both human and AI) can turn accurate knowledge into gibberish through literal mistranslation of idioms, slang, context, humor, puns, or tone. AI famously mistranslated the French joke “Why do fish hate computers? Because they’re afraid of the net” into “Why do fish dislike computers? Because they fear the Internet”- technically correct but entirely missing the pun. Another classic meme example was “Knowledge is power,” which reemerged from the computer mind as “Cheese is strength.” (In other words, machines can spit out words but don’t get the full meaning- only humans do that.)

Chatbots Have Confidence (Just Not Always Facts)

A recently reported internet fiasco was the Chicago Sun-Times/The Philadelphia Inquirer article recommending a 2025 summer reading list (“Heat Index: Your Guide to the Best of Summer”). Of the 15 book titles recommended, 10 were totally made up, down to the detailed plot descriptions. The authors were all real, but the books were apparently figments of fevered digital imaginations. The freelance writer of the piece had used AI for his research but had not sufficiently double-checked its accuracy. He was “completely embarrassed” and was terminated by King Features Syndicate, which had hired him for the Sun-Times story. (Ironically, AI kept its job.)

So Much Data, So Little Understanding

Not long ago I experienced some confusing computer misinformation of my own. After getting a Ring security doorbell, I attempted some online research about what Ring is and how it’s used. The article I found immediately went “off the rails” (there’s an idiom that AI might have trouble with!) First it defined Ring as a circular band worn on the finger as an ornament or symbol. Then it confidently continued on to describe Ring as a Japanese horror movie, highly influential in its genre, with numerous sequels, remakes, and adaptations that solidified its status as a cinematic horror classic. Finally, Ring’s pro side was bulleted as easy to install, affordable, with good video quality, its con side being privacy concerns. Something had quite obviously been lost in AI translation somewhere for this computer research. In spite of this confusing internet fail, I still managed to install the Ring doorbell.

When Your Search Engine Goes Full Hannibal Lecter

By the way, be careful how you phrase your questions to AI- it can make a big difference in the answers you receive. While everyone has occasionally received some inaccurate AI responses due to badly worded inquiries, some terribly phrased questions gone wrong deserve a gold medal. A person once asked AI for dog food recipes “using human ingredients” rather than correctly specifying “human-grade ingredients.” The resulting menu could have been taken straight from my aforementioned Ring horror movie. And early AI bots took a Reddit joke about how to make homemade napalm as pure fact, bizarrely telling questioners to make napalm by mixing gasoline and tomato paste. Nothing says homemade like explosive marinara sauce…

The Internet Promises Wisdom, Not Just Ads, Emojis, & Spellchecks

I remain hopeful for the future of humanity’s relationship with AI, though. While Artificial Intelligence is growing incredibly quickly, with many possibly disturbing repercussions, humans are indispensable in shaping AI’s growth. Humans are the ones with conscience, empathy, common sense, context, oversight, and moral judgment- not AI. Humans must set the ethical boundaries and regulations, curate the most accurate AI training data, and establish accountability and recourse when AI systems do wrong.

The key concept is that machine wisdom is to serve humanity- NOT replace or mislead it.

Anyway, I must stop my research and writing now- my smartwatch is telling me to take a moment to breathe.