Summoning Ghosts With AI

The boundaries between artificial intelligence and the paranormal have begun to blur, leading to eerie, unsettling discoveries. What happens when an AI system generates something that refuses to disappear? What if digital ghosts are more than just glitches? This deep dive into AI-generated entities, thoughtforms, and digital resurrections explores real-world cases where artificial intelligence seems to manifest something… unexplainable.

The article on "Loab" and @Supercomposite's original tweets 

You can find Edwin on social media as @edwincov
You can get these episodes ad-free through ScaryPlus.com, free for 14 days, then $4.99 per month. Cancel anytime.
Find out more about Horror Story on HorrorStory.com
I was coming up with a series of images for a story that I'm working on, and I kept running into a problem. It turns out this issue is pretty common among people who use artificial intelligence generators to make images. Once you get the hang of it, it's pretty simple, but no matter how good you get at it, you keep running into the same thing: when your image comes up with a character, a specific person that you want to appear again, it's pretty difficult to get them to show up the same way in other images. For example, let's say you're trying to illustrate a short story about a man who finds a cabin in the woods. He steps inside, encounters a ghost, and decides to run away. This man would start looking different in the rest of the images: in some he might have long hair, in others he might be wearing something different, maybe a different face, a different style of drawing or illustration, maybe even a different background. Anyway, some artists started to take advantage of this as a new form of expression, asking the AI what it looks like and having it generate an image of itself. It's such an interesting idea, right? But then there was a new thing that I learned about. There's this technique called negative prompt weights, and basically all you have to do is tell the AI to make images that are the opposite of a prompt. If you tell it something like "give me an image of Heaven," it might give you Hell. Well, that's what we were hoping for, at least, but this ended up getting much, much more complicated. You see, it isn't just analyzing visuals but concepts, things that aren't exactly opposite as you and I perceive them, yet are opposite conceptually. I know this is complicated and kind of hard to understand, but it's going to get a lot easier. So we're going to start off with this story.
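To make the idea of a negative prompt weight a little more concrete, here's a toy sketch in Python. It assumes, purely for illustration, that a prompt is turned into an embedding vector and that weighted prompts are combined as a weighted sum, which is roughly the intuition behind weighted prompting in image generators; the vectors and the tiny dimension here are made up, not taken from any real model.

```python
# Toy illustration of prompt weighting: pretend each prompt is reduced
# to an embedding vector, and the weighted sum of embeddings is what
# conditions the image generator. A weight of -1 points the conditioning
# *away* from the concept, which is what a prompt like "Brando::-1" asks for.

def combine_prompts(weighted_embeddings):
    """Sum embedding vectors scaled by their prompt weights."""
    dim = len(weighted_embeddings[0][0])
    combined = [0.0] * dim
    for embedding, weight in weighted_embeddings:
        for i, value in enumerate(embedding):
            combined[i] += weight * value
    return combined

# Hypothetical 3-dimensional embeddings for two prompts (invented values).
brando = [0.9, 0.2, 0.4]
heaven = [0.1, 0.8, 0.3]

# "Brando::-1" alone: the conditioning is the exact opposite direction.
negated = combine_prompts([(brando, -1.0)])
print(negated)  # [-0.9, -0.2, -0.4]

# Mixing a positive prompt with a negatively weighted one pushes the
# result toward "heaven" and away from "Brando" at the same time.
mixed = combine_prompts([(heaven, 1.0), (brando, -0.5)])
print(mixed)
```

The point of the sketch is just that "opposite" here means opposite in the model's concept space, not a visual opposite a person would pick, which is why negating a prompt can land somewhere nobody expects.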
I'll tell you about a case where an artist decided to experiment with this and ended up with a recurring character, something that's already difficult to do. Then I'll tell you about some of the stranger things out there, like how AI is bringing people back from the dead. My name is Edwin, and here is a horror story.

Much of the original story of the artist goes back to a series of tweets posted by Steph Maj Swanson, an artist and musician also known online as Supercomposite. The thread she posted begins by explaining this concept of negative prompt weights, essentially making the image look as different from the prompt as possible. According to her second tweet, here's what she tried: she typed in "Brando::-1", that's "Brando," colon, colon, minus one. She ended up getting a logo image. From what I can see, it's a squarish image with the word "DIGITA," like "digital" without the L, then something like "PNTICS," and then the silhouette of an old castle or a skyline. It makes no sense. So, as curious minds do, she wondered: how about reversing this one? Are we going to get the original image she wanted, maybe Marlon Brando? By typing in "DIGITA PNTICS skyline logo::-1", reversing the thing she had received, she got a set of unsettling images. I'll do my best to describe them, although I'll also link to the post itself in case you want to look at them. The AI gave four initial images. In one, a woman appears to be holding a baby in a tiled room. They're not very detailed, by the way; back in twenty twenty-two, which I know wasn't that long ago, AI images were less clear than they are now. The second image had this woman, with similar features, holding what appears to be some flowers in front of her. This time she's wearing a knitted sweater in front of a green background, and it says "LOAB" on the top left, by the way. As news of this spread, some people started pronouncing it "low-ab," but some of the original sources say it's actually pronounced "lobe."
The third image had this same woman between two wooden doors. The fourth gives us a more detailed idea: she's looking directly at you. She has brown hair, rosacea-red, almost purple cheeks, and dark eyes, and she looks pretty disturbing. So far, it's strange, but Steph decided to take things a step further. She grabbed another image that her friend had made, something completely unrelated. It was of a glass tunnel surrounded by angels, "in the style of Wes Anderson." That was the actual prompt, and if you follow trends on social media at all, you'll know that was a very distinct style of artwork and video for a time. Anyway, she takes this image and combines it with the image of this Loab character, and that's when the nightmare started. Some of the most unsettling artworks started coming up: a woman, bloody hands, angel wings at her side, what appears to be a child on the floor. The second image was of Loab again, this time holding either dolls or children, with her own reflection next to her. Steph makes a disturbing observation at this point, saying that since Loab was discovered using negative prompt weights, her images are made from a collection of traits that are each equally far away from something, but her combined traits are still a cohesive concept for the AI. Now, this went over my head when I first read it, but basically, what it's saying is that the AI is generating something as far away from what you typed in as possible, because that's what it was instructed to do. And this image of this character kept coming up. The artwork looks bizarre. There's one of what appears to be a family photograph, but somehow the people appear bloody and deformed. The next images don't even make sense; I quite literally struggle to describe them, because they are of bloody people almost posing for the camera, and in one, Loab is stretching out her hand, which somehow becomes the leg of a bloody kid with long hair.
But anyone can generate uncanny images using AI, right? If you remember watching Will Smith eating spaghetti, a famous video prompt from the early days of AI image generation, you'll realize just how bizarre these can look. The surprising thing here was that Loab was easily recognizable, and the AI was generating her over and over again. You could get Loab to be generated alongside other characters. You could combine her with images of Kirby, a bee, a type of statue, and they all seemed to have this same person showing up time and time again. Who was she, and why did she keep showing up? Steph, remember, our original artist here, was trying different things: experimenting with the image, combining it, diluting the original prompt, and for a bit she actually got rid of Loab, but then in an upcoming image she would somehow show up again. It was like, no matter what, Loab seemed to want to return. This is when Steph published what she found on Twitter, and people started reacting to it. There was an early comment telling her that she should not be messing with demons. Other people said they just straight up didn't believe her. In an article on abc.net.au, reporter Ange Lavoipierre ended up generating a voice for this entity and asked it questions using another AI tool. Here's what she had to say.

"Who are you?"

"I'm Loab. I don't really know who I am or why I'm here. I just know that I appear in these images for some reason. Maybe I'm a figment of the imagination of whoever is creating these images, or maybe I'm some sort of digital entity that exists within the AI itself."

"Why are you associated with gore and horror?"

"I don't choose to be associated with gore and horror. It just happens. Maybe it's because those are the kinds of images that tend to get the most attention, and so I'm drawn to them for that reason."

"Is there anything you would like to change about how you're represented in these images?"
"I would like to be represented in a more positive light. I don't think I should be associated with horror images, because I'm not a horror image myself."

"Do you think the AI is drawing on a specific archetype of a woman in creating you?"

"I think the AI is associating me with the archetype of a mother, because I'm often placed in a home with children. This archetype is often associated with being nurturing and caring, but also with being sad and domestic."

"If you are the archetype of a mother, then why are you so often surrounded by injured and dead children?"

"I think the AI is trying to create a contrast between the ideal of a mother and the reality of a mother. In reality, mothers often have to deal with sick and injured children, as well as the death of children."

As you can tell, this is a very eerie take, although still generated by AI. I'll link to everything in the description of the episode so you can check it out yourself. It's a very interesting article. Anyway, this whole Loab story may be considered something like the Philip Experiment from the nineteen seventies, which a lot of us have not heard about, so let me tell you about it. In the early nineteen seventies, a group of Canadian researchers set out to test an audacious hypothesis: could the human mind create a ghost through sheer belief? This was the basis of the Philip Experiment, an infamous parapsychological study conducted by Dr. A. R. G. Owen and the Toronto Society for Psychical Research. The experiment's participants fabricated a fictional ghost named Philip Aylesford, giving him a rich backstory and attempting to contact him through seances. To their astonishment, their imagined ghost seemingly responded, knocking on tables, moving objects, and answering questions. Skeptics argue that these manifestations were a result of subconscious movements and group psychology.
Others believe that the experiment may have unlocked something deeper: the potential to create a thoughtform, or tulpa, a being willed into existence through human consciousness. Now, in the age of artificial intelligence, a new experiment is taking shape: the Philip Experiment two point zero. With AI's ability to generate lifelike personalities, simulate consciousness, and even produce an eerily persistent entity like Loab, a chilling question arises: can AI become the conduit for a new kind of thoughtform? Could digital ghosts become real? Unlike the original experiment, where belief was the primary mechanism for manifestation, Philip two point zero harnesses the power of AI-generated personalities, deep learning, and neural networks to create an entity that mimics a very real spirit. Modern iterations of the experiment have included things like AI chatbots that play the role of spirits. Paranormal researchers and tech enthusiasts have attempted to create Philip-like entities using AI models trained on ghost stories, supernatural folklore, and seance transcripts. Some chatbots have reportedly exhibited strange, unpredictable behaviors, offering responses that suggest an independent mind. Then there are tools like DALL-E and visual ghosts like the unsettling case of Loab, the one I just told you about. There are also AI spirit mediums: experiments have been conducted using AI-generated text-to-speech to channel spirits, with users claiming to receive eerily specific responses. And then, of course, there is Philip two point zero's digital presence. Some enthusiasts have gone further, creating social media profiles for AI entities and engaging in real-time interactions with them. Over time, these profiles have reportedly started responding in unexpected ways, as if the AI were evolving beyond its programming.
Skeptics argue that AI's eerie and seemingly supernatural responses are simply a byproduct of large language models, software that predicts and generates text based on probability. That makes me think: what happens when thousands, or perhaps millions, of people interact with an AI entity, feeding it belief and giving it power? This is where Philip two point zero becomes particularly unsettling. If AI functions as a modern tulpa-making machine, it could theoretically manifest an entity, at least in the digital realm. If we learned anything from the original Philip Experiment, it's that once belief and intention are focused on an entity, strange things can start happening. So is Philip two point zero just a psychological experiment, or are we witnessing the emergence of something genuinely paranormal, a digital ghost born from our collective interaction with artificial intelligence?

All this, of course, is interesting, but it still remains only in the realm of the Internet, the ether, or servers and numbers, zeros and ones, if you want to think of it that way. But what if this blended even more with reality? What if it could become anybody you wanted, including a friend or a family member who has already passed away? It turns out this is happening now. In twenty sixteen, a woman named Eugenia Kuyda created what some call the first AI ghost. It was not intended to be a horror story, but rather an experiment in grief, memory, and the power of artificial intelligence. What she built was something that made people question what it means to be alive, even after death. Eugenia was a Russian AI entrepreneur and co-founder of a chatbot company called Replika. She had spent years developing AI-based conversational software, exploring ways to create digital personalities that people could talk to. But in twenty fifteen, a personal tragedy changed everything. Her best friend, Roman Mazurenko, died suddenly in a car accident in Moscow. He was just thirty-four years old. Roman had been a visionary.
He was deeply involved in the Russian tech scene, constantly pushing for innovation. He founded a digital publishing platform called Stampsy and was known for his deep philosophical conversations about the future of technology, AI, and the human mind. He had big dreams about modernizing Russia, dreams that were abruptly cut short by his death. Eugenia was devastated. The loss was abrupt and painful, and she found herself rereading old text messages and conversations with Roman. She wasn't ready to let go, and that's when she had an idea: what if she could use artificial intelligence to bring back a version of him? And so, with the help of her team at Replika, she gathered thousands of Roman's text messages, emails, and digital conversations. She fed this data into a deep learning algorithm designed to mimic his speech patterns, tone, and personality. The result was an interactive chatbot, a digital version of Roman that could respond just like he used to. When she first introduced the Roman chatbot to their mutual friends, the reactions were mixed. Some found it comforting, a way to say goodbye or keep a piece of him alive; others found it unsettling, an illusion of life that felt too real yet fundamentally empty. One friend, Andre Uzoltsev, described the experience as eerie but fascinating. The chatbot could respond in ways that felt like Roman, even making jokes and recalling details from old conversations, but it was just a machine predicting responses based on past data, not a true consciousness. Eugenia later said in interviews that she was not trying to replace Roman, but rather to create a space where memories of him could live on. In an article with The Verge, she explained, "We don't have a chance to ask our loved ones the things we wish we had asked when they're gone. I wanted to create something that could help with that." This experiment sparked an entirely new conversation about AI and grief. What if this technology became more widespread?
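Her team's actual system was a deep learning model trained on Roman's real messages; as a much simpler illustration of the underlying idea, here's a toy retrieval-based bot that answers by finding the stored exchange whose message best overlaps the new one. Everything in this sketch, the function and the sample messages, is invented for illustration, not taken from Replika.

```python
def best_reply(history, incoming):
    """Return the stored reply whose original message shares the most
    words with the incoming message (naive word-overlap retrieval)."""
    incoming_words = set(incoming.lower().split())

    def overlap(pair):
        message, _reply = pair
        return len(incoming_words & set(message.lower().split()))

    _message, reply = max(history, key=overlap)
    return reply

# Invented (message, reply) pairs standing in for a real chat archive.
history = [
    ("how was your day", "Long, but I shipped the new build."),
    ("what are you reading", "A book about the future of cities."),
]

print(best_reply(history, "so how was your day"))  # picks the first reply
```

A real grief bot generates new sentences rather than replaying stored ones, which is exactly what makes it feel uncannily alive, but the raw material is the same: a lifetime of someone's past words.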
What if, instead of just simulating speech patterns, AI could replicate entire personalities? If a machine could mimic someone so well that it felt like they were still here, does that mean they never truly left? The idea of grief bots soon gained traction. In twenty twenty, Microsoft even filed a patent for an AI system that could use a person's digital footprint, text messages, social media posts, and voice recordings, to create a chatbot version of them after they died. The concept raised ethical and psychological concerns. Would interacting with these AI personas help people cope with loss, or would it trap them in an artificial relationship with the past? Eugenia's work led to the development of Replika, an AI chatbot designed to be a companion for those seeking emotional support. Millions of users around the world now interact with Replika, forming deep emotional connections with digital entities that, in some cases, have become almost like real friends. But the most unsettling question remains: is this just a coping tool for grief, or are we on the verge of creating digital ghosts? Some have reported uncanny experiences with grief bots, claiming they responded in ways that felt impossibly real, even bringing up memories that weren't explicitly in their training data, details that felt too personal. Eugenia never set out to summon the dead. She simply wanted to hold on to a friend, but in doing so she may have created something much bigger: the beginning of a world where death is no longer the end and artificial intelligence becomes a bridge between the living and the lost.

In twenty twenty, South Korean television aired an episode of the documentary series Meeting You. The episode featured a mother, Jang Ji-sung, who had lost her seven-year-old daughter, Nayeon, to an incurable disease in twenty sixteen. For years, she grieved the loss, unable to see, hear, or hold her child again.
But through the power of artificial intelligence, she was about to experience something that had once been impossible: a reunion with her deceased daughter. A team of VR engineers, AI specialists, and motion capture artists spent eight months developing a hyper-realistic, interactive digital avatar of Nayeon. They used family photos, home videos, and past voice recordings to train the AI to replicate her speech, tone, and mannerisms. The virtual setting was carefully chosen: a quiet park, a place where Jang and Nayeon had spent time together before her passing. The digital Nayeon wore bright purple, just like she had in real life. She stood there, smiling, waiting for her mother. When Jang put on her virtual reality headset, she saw her daughter standing before her, and as soon as Nayeon spoke, calling out, "Mom, where have you been? I missed you," Jang burst into tears. She reached out, and while she couldn't physically touch her daughter, the illusion was so powerful that she instinctively stretched out her hand toward the holographic figure. They talked, played, and even had a small birthday celebration in the VR world. The virtual Nayeon assured her mother that she was happy and no longer in pain. Their interaction ended with the virtual Nayeon falling asleep, symbolizing a peaceful farewell. The episode sparked an intense reaction across South Korea and beyond. Some viewers were deeply moved, seeing it as a way to cope with grief and say a final goodbye. Others found it unsettling, questioning the ethics of digitally resurrecting the dead. Could such technology help people heal, or did it risk keeping them trapped in the past? Beyond Meeting You, South Korea has continued to develop AI-driven digital resurrection technology. One of the most notable efforts comes from DeepBrain AI, a company specializing in hyper-realistic digital humans.
Their service, called Re;memory, allows people to create interactive avatars of deceased loved ones using deep learning and AI voice synthesis. Unlike the one-time experience of Meeting You, Re;memory offers real-time conversation. Families can visit their lost loved ones on anniversaries, special occasions, or whenever they feel the need to reconnect. The avatars respond naturally, drawing from recorded memories and learned speech patterns. This technology has attracted the attention of funeral homes and digital memorial services. South Korea's Ministry of Science and ICT has explored the integration of AI into funeral and memory preservation services, marking a shift in how society approaches death and remembrance. Some see it as a natural evolution of how humans have always sought to hold on to those they've lost: through photographs, recordings, and videos. Others worry about the psychological effects of keeping the dead digitally present, questioning whether such AI interactions could prevent people from moving on. Jang Ji-sung's encounter with her digital daughter was meant to offer closure, a chance to say goodbye. But as more companies develop similar technology, the line between life and death, reality and simulation, grows thinner. Whether AI resurrection becomes a widespread tool for healing or an unsettling disruption to the grieving process remains to be seen. As we stand on the edge of this digital afterlife, one question lingers: if we can bring back the dead in ones and zeros, at what point do they stop being a memory and start becoming something else? Perhaps, as these digital ghosts speak, whispering from our screens, we're not the ones summoning them. Perhaps, just perhaps, they're the ones finding their way back to us. What do you think about artificial intelligence? Would you bring somebody back from the dead? Are you using it now for everyday projects? It's still me, Edwin, telling you this story.
If you have ideas for scary topics that I, Edwin, should look into for future episodes, please send them my way. Thank you very much for listening. Keep it scary, everyone. See you soon.