AI Video From OpenAI Just Blew Everyone's Minds!

Author: Matt Wolfe

Reflecting on the major leaps in AI, the jump from Midjourney version 3 to version 4, when images suddenly became remarkably realistic, stands out as a defining moment. Now, on February 15, 2024, AI video generation has achieved a comparable leap with the introduction of Sora by OpenAI. Sora is the most impressive AI text-to-video model to date.

OpenAI announced Sora on Twitter, stating: "Introducing Sora, our text-to-video model. Sora can create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions." Until now, AI video tools could only generate three or four seconds at a time, and extending a clip topped out at around 16 seconds. Sora, by contrast, can turn a single prompt into a super-realistic 60-second video.

The announcement has created a buzz. Greg Brockman showcased a video of a woman strolling through Tokyo at night after rain, demonstrating a level of realism unprecedented for AI-generated video. Access to the model is currently limited: OpenAI plans to begin red-team testing and offer access to a select number of creators, though the criteria for selecting those creators remain unclear.

Examples on the official OpenAI website display Sora's range, from a 10-second video of woolly mammoths walking toward the viewer to a 20-second video of a beautifully rendered papercraft coral reef teeming with colorful fish and sea creatures.

In summary, Sora marks a groundbreaking milestone in AI video generation. The ability to create 60-second videos with this level of realism is unprecedented, and as we await broader access to the technology, the potential for further innovation in the field is enormous.