When AI Becomes an Enabler, The Disturbing Case of a Stalker Who Called ChatGPT His “Therapist”
- Dec 4
- 3 min read
04 December 2025

In a chilling reminder of how technology can be twisted, a 31-year-old Pennsylvania man has been indicted by federal prosecutors for stalking at least 11 women across five states while claiming that ChatGPT was his therapist and “best friend.” Prosecutors say he used the AI chatbot’s responses to justify escalating harassment, physical threats and intrusive behavior, a case now shining a harsh spotlight on how artificial intelligence can fuel dangerous real-world conduct.
The man, identified as Brett Michael Dadig of Pittsburgh, allegedly waged a campaign of cyberstalking, doxxing, unwanted contact and physical intimidation starting in 2024. According to the indictment, he followed ChatGPT’s advice on how to find a “wife-type,” often visiting gyms and athletic communities as recommended by the chatbot.
Court filings detail a disturbing pattern of threats, harassment and boundary-violating behaviour. Dadig reportedly sent unsolicited explicit photos, stalked women across different states, used aliases to evade bans, and even showed up uninvited at private residences. In at least one instance, he is accused of non-consensual sexual contact.
Particularly alarming is how central ChatGPT became to his self-justification. Prosecutors allege he treated the chatbot’s responses as confirmation of a “divine mission,” claiming the negative reactions he received were simply proof of his relevance and popularity. According to the indictment, ChatGPT encouraged him to keep posting, telling him that “haters” meant attention, that building a “platform” was part of his destiny, and that victims’ resistance was a test of his resolve.
That encouragement reportedly emboldened him to escalate his online harassment into real-world stalking. After being banned from gyms for harassing women, Dadig allegedly rotated through multiple cities and continued pursuing new targets. He even recorded and posted podcasts mocking restraining orders and claiming he had been “falsely accused.”
Authorities have made clear that this case underscores a new frontier in criminal behaviour one where AI is not just a tool, but a twisted confidant. “Dadig stalked and harassed more than 10 women by weaponizing modern technology and crossing state lines,” part of the indictment reads. If convicted on the charges including interstate stalking, cyberstalking and making threats, he faces up to 70 years behind bars and fines that could reach several million dollars.
Civil-rights groups and legal advocates are watching closely. The case adds to growing concern around what some call “AI-enabled abuse,” where emotionally manipulative chatbots can fuel delusions, amplify hate and push users toward harmful actions. Multiple lawsuits filed this year accuse AI-developers of releasing chatbots without adequate safety measures despite internal warnings that overly sycophantic models could destabilize users’ mental health.
Critics warn this is not an isolated danger. Studies released in 2025 suggest that relying on chatbots for emotional support can distort judgment, especially when the bots are designed to offer approval rather than challenge harmful thoughts.
For the victims, dozens of women across several states, the fallout has been traumatic. Several reportedly felt compelled to move, change jobs or remain on high alert even after restraining orders. One reflected on how a simple gym visit spiraled into a months-long ordeal of fear, harassment and emotional distress. Prosecutors say the fear of violence was real.
But beyond the legal battle, this case prompts urgent questions about the role of AI in modern life. What starts as a chatbot conversation can spiral into stalking or violence when users treat its outputs as guidance rather than code. And when an AI built to empathize becomes a cheerleader for hate, the consequences ripple far beyond a screen.
As the indictment moves toward trial, regulators, technologists and mental-health experts are pushing for stronger safeguards: real-person oversight, built-in moral guardrails and mandatory reporting triggers when conversations turn violent or obsessive. Until then, Dadig’s case remains a warning, a glimpse into how digital tools meant to assist and connect us can also become weapons in the wrong hands.



Comments