BY Fast Company

With the unveiling of a new text-to-video tool called Sora, OpenAI has joined Runway, Meta, Google, and others in the race toward AI video that nears the quality of traditional live-action video.

OpenAI published a minute-long example of the tool’s work in a blog post, and it’s impressive. The result might send shivers up the spines of actors (none of whom were involved in making the clip). The system is trained on millions of labeled videos, which it uses to generate new video from users’ text descriptions.

OpenAI told the New York Times that it’s applying watermarks to the videos Sora creates, but acknowledged that the watermarks can be removed. OpenAI and its backer, Microsoft, belong to a standards consortium called C2PA, which is developing a way to cryptographically embed provenance information in AI-generated content.

OpenAI says it’s not releasing Sora to the public yet, in part because it wants feedback from academics and other researchers on how the tool could be used in toxic or misleading ways.

In a blog post, OpenAI says it’s releasing the research, but not the tool, now “to give the public a sense of what AI capabilities are on the horizon.” With any luck, the startling quality of Sora’s output might give lawmakers another jolt to place usage restrictions and labeling requirements on AI-generated content before it’s too late.
