
The Double-Edged Sword of OpenAI's Video Generator Sora

16/11/2025 14:26:00
Tempo.co

TEMPO.CO, Jakarta - The launch of OpenAI's latest artificial intelligence (AI) model, Sora 2, presents a digital double-edged sword. It demonstrates a new level of capability by generating videos from text instructions, yet users have already crossed societal norms with it, even using it to bully others.

Technology analyst Moinak Pal of Digital Trends dissected the impact of the AI-generated videos that have recently flooded timelines. Noting how the Netflix CEO proudly boasts of the prowess of artificial intelligence, he underscores how the streaming mogul overlooks the negative effects of generative AI.

Sora 2's application, he said, has been used to create bullying content. "OpenAI's new video-making tool is being used to flood the internet with some of the ugliest content imaginable," he said, as quoted from a review in Digital Trends on Friday, November 14, 2025.

"We're talking about a wave of straight-up fatphobic and racist “comedy” videos," he added, referring to videos generated by Sora. 

Besides racism, some of the content created by Sora also veers into body-shaming, targeting individuals based on their weight. These AI-engineered videos have spread rapidly and made their subjects objects of mockery for many people.

OpenAI's Prevention System Failure

Sora 2 was designed with a policy to prohibit hate speech or harassment content. However, according to Moinak, the reality is that fat-shaming videos still manage to bypass the filters implemented by OpenAI.

Many users have found ways to manipulate the input prompt or text command. This method enables them to produce content that violates policies without being detected by the AI system.

This phenomenon indicates a major ethical loophole in the implementation of advanced AI technology. Strict policies do not always stop the malicious intentions of users.

Sora's ability to create highly realistic videos further exacerbates this problem. The videos produced are often misunderstood as authentic content. When one video goes viral, dozens of other attention-hungry users are compelled to make their own versions. 

OpenAI now faces a significant challenge in balancing technological innovation with social responsibility. Companies and regulators must be able to address the misuse of such platforms promptly and effectively.

The Flood of Sora Deepfakes

The release of Sora has also raised new concerns about the weak detection of manipulated content, known as deepfakes. The application has proven capable of producing fake videos that appear increasingly realistic, including ones mimicking the faces of famous figures such as Martin Luther King Jr., Michael Jackson, and Bryan Cranston, as well as popular copyrighted characters like SpongeBob and Pikachu. A review by The Verge on October 27 revealed users reporting that their likenesses had been used in racially charged AI-generated videos.

Although OpenAI makes clear that the content within the application is not real, the generated videos still spread like wildfire on social media, with no discernible marker that they were AI-generated. This situation exposes the weakness of content labeling systems, including C2PA authentication, a mechanism claimed to differentiate original content from content generated by AI.

The C2PA system, also known as content credentials, is a metadata standard developed by an Adobe-led coalition to attach information about when and how an image, video, or audio file was created or modified.

OpenAI is part of the board of directors for the Coalition for Content Provenance and Authenticity (C2PA), which collaborates with the Content Authenticity Initiative (CAI). However, the digital identifier is barely noticeable to the public.
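To illustrate what this credential carries, the sketch below shows a simplified, hypothetical C2PA-style manifest for an AI-generated video. The field names follow the general shape of the C2PA specification (a claim generator, an actions assertion, and a cryptographic signature), but the structure is abbreviated and the values are invented for illustration; `trainedAlgorithmicMedia` is the IPTC digital source type the standard uses to flag AI-generated media.

```json
{
  "claim_generator": "ExampleVideoApp/1.0",
  "assertions": [
    {
      "label": "c2pa.actions",
      "data": {
        "actions": [
          {
            "action": "c2pa.created",
            "digitalSourceType": "trainedAlgorithmicMedia"
          }
        ]
      }
    }
  ],
  "signature": "..."
}
```

In principle, a platform or browser tool can read this embedded manifest and surface an "AI-generated" label to viewers; in practice, as the article notes, the identifier is rarely displayed prominently, and the metadata can be stripped when a video is re-encoded or re-uploaded.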

by Tempo English