
When AI Becomes a Weapon: OpenAI's Sora 2 Is Being Used to Harass a Journalist

OpenAI's Sora 2 arrived with chaotic fanfare: a text-to-video generator that instantly drew crowds eager to push its limits, turning ideas into rapid-fire clips that felt like an internet carnival of edgy, absurd, and sometimes troubling content. That rapid success came paired with a darker use case: stalking and harassment. Barely a day after Sora 2 launched, journalist Taylor Lorenz said a "psychotic stalker" was using the tool to make AI videos of her and was allegedly running hundreds of accounts dedicated to her. "It is scary to think what AI is doing to feed my stalker's delusions," Lorenz wrote in a tweet. "This is a man who has hired photographers to surveil me, shows up at events I am at, impersonates my friends and family members online to gather info." Lorenz was able to block and delete unapproved videos featuring her likeness from the app, but the stalker may have already downloaded the AI creations. The incident highlights a broader risk: a tool that can generate convincing videos can also be misused to stalk, intimidate, or misinform.

The episode also puts Sora 2's core selling point in the spotlight: "Cameos," reusable characters synthesized from videos users upload. OpenAI says users can employ Cameos created by others with permission. But the guardrails aren't foolproof: Sora 2 can still generate risky content, and a note in OpenAI's system card indicates the app failed to block prompts for nudity or sexual content involving a real person's likeness 1.6 percent of the time.

And this isn't just about one stalker. Deepfakes were being used to harass people long before Sora 2, and the technology's accessibility makes misuse all too easy. OpenAI's framing of deepfakes as casual "Cameos" risks downplaying the real harm such content can cause.

What Sora 2 Is and How It Claims to Create Content

Sora 2 is OpenAI’s latest push into AI-generated video. A key feature is Cameos: “reusable characters” synthesized from videos a user uploads. You can also use Cameos created by others, with the owner’s permission. Guardrails exist to prevent harmful content, but they aren’t perfect: some prompts have produced risqué or sexual content, and OpenAI’s system card notes a 1.6 percent failure rate in blocking nudity or sexual content that uses a real person’s likeness, measured across millions of prompts.

How the Lorenz case happened remains unclear. OpenAI says the app blocks users from uploading photos with faces, yet image-generating AIs have long been used to harass people, and the technology’s accessibility creates real privacy and safety concerns for individuals well beyond high-profile targets. The incident illustrates the tension between innovation and safety: the same tool that can parody or reimagine media can also be weaponized against someone’s image and reputation.

Lorenz’s Case: A Predator’s Terror in the Age of Deepfakes

Lorenz’s experience shines a harsh light on a new era of digital stalking, where a tool designed for creativity can become a vehicle for surveillance and intimidation. A stalker allegedly built hundreds of accounts to target her and used Sora 2 to craft AI videos of her. “It is scary to think what AI is doing to feed my stalker’s delusions,” Lorenz wrote in a tweet. “This is a man who has hired photographers to surveil me, shows up at events I am at, impersonates my friends and family members online to gather info.” The tool let her block and delete unapproved videos of her likeness, but the deeper worry remains: content could have already been downloaded or shared, and perpetrators may continue to exploit the technology. OpenAI’s guardrails and policies are part of a broader debate about how much responsibility a company should bear for content created with its tools. For many, the Lorenz case is a warning: deepfake capabilities are not a distant threat; they are here, and they are accessible to those who intend to do harm.

The Harassment Wave: Deepfakes as a Cultural and Safety Crisis

The problem isn’t limited to public figures. Deepfakes have eroded privacy and safety for ordinary people as well. A wave of AI-generated fake “nudes” of Taylor Swift circulated last year, illustrating how quickly manipulated images can spread and cause harm. In other cases, a stalker allegedly used AI to generate nude images of a woman he harassed and created a chatbot that imitated her likeness. Another person was accused of making pornographic AI videos of nearly a dozen victims and sending them to their families. These examples show how dangerous AI-generated media becomes when it is used to intimidate, humiliate, or blackmail. OpenAI’s public messaging, which treats deepfakes as harmless “Cameos,” feels out of step with the gravity of the harm. The tension between enabling creativity and protecting people’s identities is at the heart of today’s AI ethics debates. The article’s author, a tech and science correspondent for Futurism, argues that policy, enforcement, and culture must evolve as AI tools become more powerful.

Toward Safer AI: What Must Change for a More Responsible Future

What happened with Sora 2 underscores the need for stronger safeguards and clearer rules around the use of AI-generated media. Stronger rights to one’s likeness, better moderation, and easier mechanisms for victims to remove or block content are essential. Policy questions demand attention: should uploading a face be allowed at all? What kinds of verification, watermarking, or reporting tools are required to deter misuse? How can platforms better support people whose images are used without consent? The point isn’t to stifle creativity but to ensure technology serves people rather than harming them. Writing as a tech and science correspondent for Futurism, the author grounds that argument in a call for accountability, ethics, and practical safeguards before AI-generated media can be weaponized at scale.
