The ‘AI Homeless Man’ Prank Shows Just How Out of Control Deepfakes Have Become

Image: Smartphone screen showing AI app icons, including ChatGPT, Grok, Meta AI, Gemini, Copilot, DeepSeek, and Perplexity. (Nikolas Kokovlis/NurPhoto)

Summary:

  • A TikTok prank involving AI-generated images of fake homeless men is causing real 911 calls and police warnings.

  • The prank involves kids fabricating convincing shots of strangers in their homes, causing panic and diverting police resources.

  • The ease of creating hyper-realistic content with AI is leading to dangerous consequences and blurring the line between reality and fiction.

A viral TikTok prank, in which teens generate an image of a disheveled, unhoused man "inside" the family home and text it to their parents, has escalated from juvenile mischief into a public-safety problem. Departments nationwide say the hoax is sparking real 911 calls and potentially dangerous responses.

In the trend, kids use consumer AI tools, including popular image features on social platforms, to fabricate a convincing shot of a grimy stranger slumped on the couch or poking around the kitchen. They hit record, message mom or dad that they “let him in,” and then post the panicked reaction for views. Some clips rack up millions. The Salem, Massachusetts Police Department called out the stunt for “dehumanizing the homeless,” inducing panic, and diverting officers to bogus emergencies. 

Texas officials say they’ve already rolled patrol cars. Round Rock police reported at least two recent incidents tied to the prank and warned that false reports of an intruder, especially involving a child, are priority calls that can pull resources from real emergencies. Under Texas law, filing a false report is a Class A misdemeanor. 

Commanders caution that what starts as a text gag can look like a home invasion in progress once a frightened parent calls 911. In the worst case, it could trigger a tactical response. Local broadcasts and national coverage echo the same theme: tying up emergency lines for a hoax can put people in danger.


The deeper problem

We’re in a Sora-era attention economy where anyone can fabricate near-photorealistic video or imagery in minutes. Even with stated guardrails, Sora-class models have normalized hyper-real synthesis that erodes our instinctive trust in what we see. Critics argue Sora’s latest wave makes casual deepfakes easier and more convincing, accelerating the spread of harmful or exploitative content long before norms, labels, or policy can catch up.

The culture is already bending around these tools. Newsrooms, platforms, and police departments now treat “it looks real” as a weak claim. What changes is the speed: low-skill users can spin up viral-quality fakes that play perfectly on short-form apps. A prank framed as “it’s just AI” quickly collides with public safety systems built on urgency and imperfect information. 

The trend also sits alongside a broader spike in AI-generated depictions of real people without consent, including deceased public figures, which families and advocates say is exploitative. Different context, same ethical failure: using synthetic media to mine emotion at someone else’s expense. 
