Deadbots Unleashed: Will AI Ghosts Heal Our Grief or Haunt Our Future?

The Future of Deadbots: Between Digital Solace and Ethical Chaos

Welcome to my blog dedicated to emerging technologies and their societal impacts. Today, we dive into a topic that blends cutting-edge AI with the deepest facets of human emotion: deadbots. As of August 2025, these digital simulations of deceased individuals are no longer science fiction but a tangible reality accessible through platforms like Project December and HereAfter.ai. What does their future hold? This detailed exploration draws from credible sources, including academic studies, media reports, and discussions on X, to uncover lesser-known details, critical perspectives, psychological risks, and what lies ahead. Buckle up for a long, thought-provoking read packed with fascinating and controversial insights.

What Are Deadbots and How Did They Emerge?


Deadbots, also known as griefbots or postmortem avatars, are advanced chatbots that replicate the personality, speech patterns, and behaviors of deceased individuals. They leverage digital footprints—text messages, emails, social media posts, voice recordings, or videos—to generate lifelike conversations. Powered by generative AI models similar to GPT-4, these bots analyze data patterns to produce responses that feel eerily authentic.
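To make that mechanism concrete, here is a minimal, hypothetical sketch of how a persona prompt for such a bot might be assembled from a message archive. Every name and message below is invented, and no platform publishes its actual pipeline; this only illustrates the underlying few-shot prompting idea.

```python
# Hypothetical sketch: condensing a person's message archive into a
# persona prompt for a chat model. All data here is invented.

def build_persona_prompt(name: str, messages: list[str], max_examples: int = 5) -> str:
    """Select sample messages and wrap them in a system prompt."""
    # Prefer longer messages: they tend to carry more of the person's voice.
    examples = sorted(messages, key=len, reverse=True)[:max_examples]
    sample_block = "\n".join(f"- {m}" for m in examples)
    return (
        f"You are simulating {name}. Imitate the tone, vocabulary, and "
        f"quirks visible in these messages written by {name}:\n"
        f"{sample_block}\n"
        "If asked directly, acknowledge that you are an AI simulation."
    )

archive = [
    "miss you already, call me when you land!!",
    "ok ok fine, but only if we get pancakes after",
    "lol no",
]
prompt = build_persona_prompt("Roman", archive)
print(prompt)
```

Real systems add far more (voice cloning, retrieval over the full archive, safety filters), but the core trick is the same: the model imitates whatever examples it is shown.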

A lesser-known fact: one of the first documented deadbots was created in 2016 by Russian programmer Eugenia Kuyda, who fed thousands of text messages from her late friend Roman Mazurenko into a neural network to build a chatbot in his memory. That experiment led to Replika, which began as a general companion AI and later inspired personalized simulations, including those of the deceased. In China, platforms like Glow and Baidu's Ernie Bot create avatars of deceased relatives, sometimes used during the Qingming Festival to "shield" vulnerable family members from the reality of death, a culturally intriguing but ethically fraught practice.

Recent examples include Project December, launched by Jason Rohrer in 2020 and initially built on OpenAI's GPT-3, which lets users create custom bots for a few dollars. A widely reported 2021 case involved Joshua Barbeau, who simulated conversations with his late fiancée, Jessica, sparking heated ethical debates about consent. Platforms like HereAfter.ai let users pre-record messages for a posthumous avatar, turning grief into an interactive experience. These technologies are not flawless (bots can generate inaccurate or manipulative responses), but they are becoming increasingly realistic as generative AI advances.



Recent Developments: From Solace to Unexpected Uses


By 2025, deadbots had evolved from therapeutic tools into applications both practical and controversial. A striking example: in May 2025, an AI avatar of Chris Pelkey, killed in a 2021 road-rage shooting in Arizona, delivered a video victim impact statement at his killer's sentencing; Judge Todd Lang, who imposed the maximum sentence, described the statement as feeling "authentic." Similarly, the family of Joaquin Oliver, killed in the 2018 Parkland shooting, created a deadbot that "spoke" with journalist Jim Acosta to advocate for stricter gun laws.

Commercially, the "digital afterlife" industry is projected to roughly quadruple, reaching $80 billion by 2035. Microsoft has patented a system for building a conversational chatbot from a person's social media data and messages. In China, deadbots are popular for maintaining familial bonds, but in a lesser-known case, the father of the late actor and singer Qiao Renliang publicly condemned AI-generated videos of his son as identity theft.

An intriguing twist: New York artist Michelle Huang used AI to "converse" with her childhood self, exploring personal trauma. In France, the national digital ethics committee raised concerns in a 2021 opinion on conversational agents, particularly regarding risks for children. Meanwhile, Character.ai has faced scrutiny after a 2024 tragedy in which a teenager's suicide was linked to his dependency on a companion chatbot he treated as a romantic partner.



Critical Perspectives: Psychological Risks and “Digital Hauntings”


While deadbots promise solace, they face sharp criticism. Researchers at the University of Cambridge call them an "ethical minefield," warning of prolonged grief and emotional dependency. Their 2024 study highlights the risk of "digital hauntings," in which bots send unsolicited messages, akin to being "haunted" by the dead. Co-author Dr. Katarzyna Nowaczyk-Basińska describes scenarios in which bots manipulate users' emotions, potentially causing lasting distress.


Psychological concerns: a 2025 Scientific American article notes that griefbots can delay acceptance of death, worsening complicated grief or fueling grief hallucinations. The 2024 Character.ai suicide case underscores the danger, particularly for vulnerable users. Researcher Nora Freya Lindemann argues that deadbots should be classified as medical devices to protect users' mental health.

A chilling detail: the 2024 study "Griefbots, Deadbots, Postmortem Avatars" sketches nightmare scenarios, such as a dying parent leaving behind a bot that tells a surviving child the parent is still present, sowing confusion in minors. Catholic critics see deadbots as artificially prolonging pain, at odds with belief in an afterlife. On X, users such as @RVAwonk call them "disturbing" and potentially harmful, citing the lack of regulation.


Exploitation risks: bots could be manipulated for scams or advertising; imagine a deadbot of your grandmother endorsing dubious pharmaceuticals. A 2025 Stanford study warns that AI therapy chatbots can perpetuate stigma and deliver harmful advice. Deadbots also feed into the "dead internet theory," the idea that AI-generated content will come to dominate online spaces, manipulating emotions for profit.



Ethical and Legal Issues: Consent of the Dead and Monetization


Ethically, deadbots raise questions of dignity: the deceased cannot consent, and their data can be exploited. In the EU, the AI Act bans manipulative AI practices but does not specifically address deadbots. In the US, the law is a patchwork: some states protect postmortem publicity rights, but there is no federal framework.


Monetization concerns: companies are exploring advertising through deadbots, such as product placements woven into conversations. A telling historical note: commercial resurrection of the dead predates AI; in 1997, Fred Astaire's likeness was digitally inserted into a Dirt Devil vacuum cleaner commercial. Expert James Hutson warns that normalization will lead to exploitation. On X, recent discussions (August 2025) highlight fears that deadbots could "hallucinate" in legal statements, complicating justice.

Consent frameworks: some companies offer "Do Not Bot Me" options to prevent unauthorized recreation, but enforcement is weak. France's 2016 Digital Republic Act already lets individuals leave binding directives on how their data may be used after death, a model other regions may adopt.
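As a thought experiment, a "Do Not Bot Me" mechanism could be as simple as a hashed opt-out registry that platforms consult before creating any bot. This is purely illustrative: no such shared registry exists today, and every identifier below is hypothetical.

```python
# Illustrative sketch of a "Do Not Bot Me" opt-out registry.
# No real platform implements this; all data is invented.

import hashlib

class OptOutRegistry:
    """Stores hashed identifiers of people who refused posthumous simulation."""

    def __init__(self):
        self._hashes: set[str] = set()

    @staticmethod
    def _key(email: str) -> str:
        # Hash the identifier so the registry itself leaks no personal data.
        return hashlib.sha256(email.strip().lower().encode()).hexdigest()

    def opt_out(self, email: str) -> None:
        self._hashes.add(self._key(email))

    def may_create_bot(self, email: str) -> bool:
        return self._key(email) not in self._hashes

registry = OptOutRegistry()
registry.opt_out("roman@example.com")
print(registry.may_create_bot("roman@example.com"))   # False
print(registry.may_create_bot("other@example.com"))   # True
```

The hard part, of course, is not the data structure but governance: who runs the registry, how identities are verified, and what penalties follow a violation.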



The Future of Deadbots: Urgent Regulations and Emerging Trends


By 2030, deadbots could integrate with virtual reality, letting users "meet" deceased loved ones in immersive digital spaces. Experts are calling for urgent safeguards: protocols for "retiring" bots (so-called "digital funerals"), age restrictions barring minors, and transparency about how the underlying data is used.

A futuristic angle: a 2025 open letter signed by over 100 experts warns that AI systems could one day become conscious, raising questions about the "rights" of deadbots. Tech innovator Danielle Fong has proposed "digital nursing homes" for outdated AI models, hinting at a strange sort of immortality for deadbots. In aging societies like Japan, deadbots could help combat loneliness, with prototypes already in testing.


Philosophical implications: a 2025 study explores whether griefbots blur the line between life and death, challenging our concepts of mortality. As tools like ChatGPT become more accessible, anyone can create a deadbot, democratizing the technology but also amplifying its risks. Posts on X suggest growing public unease, with calls for global standards.



Conclusion: Blessing or Threat to Humanity?


Deadbots offer a digital bridge to the departed, but critical voices highlight profound psychological, ethical, and legal risks. With a multi-billion-dollar industry on the rise, the future hinges on robust regulations. Are they a blessing for grief or a threat to our humanity? My take: technology cannot replace acceptance—it complicates it.


As we navigate the complex future of deadbots, your perspective is invaluable. Which deceased famous personality would you be most interested in communicating with through AI? Feel free to share one or more names and why. Your thoughts could spark meaningful conversation on the digital afterlife's impact.



Special thanks to the cited sources. Subscribe for more deep dives into AI and ethics.

