OpenAI Unveils Next-Generation Generative Video Model Sora
OpenAI has unveiled Sora, a next-generation generative video model. Sora converts brief text descriptions into high-definition video clips of up to one minute in length. By combining a diffusion model with a transformer architecture, it surpasses earlier models in video generation performance. Sample videos shared by OpenAI include a detailed Tokyo street scene that showcases the model's understanding of 3D space. OpenAI notes, however, that Sora has not yet been publicly released; for now it is being shared with third-party security testers to assess its potential misuse risks.
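For readers unfamiliar with the approach, the toy sketch below illustrates the general idea behind a "diffusion transformer": a transformer trained to predict the noise that was added to a sequence of spacetime patch tokens at a given diffusion timestep. It is written in PyTorch purely for illustration; the class name, dimensions, and objective shown here are assumptions and do not reflect OpenAI's actual implementation.

```python
# Illustrative only: a toy diffusion-transformer denoising step, not OpenAI's code.
# We fake a batch of spacetime patch tokens (as a video latent might be patchified)
# and run one noise-prediction pass. All sizes and names are made up.
import torch
import torch.nn as nn

class ToyDiffusionTransformer(nn.Module):
    def __init__(self, dim=256, heads=4, layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)
        # Map the scalar diffusion timestep to an embedding added to every token.
        self.time_embed = nn.Sequential(nn.Linear(1, dim), nn.SiLU(), nn.Linear(dim, dim))
        self.to_noise = nn.Linear(dim, dim)  # predicts the noise on each patch token

    def forward(self, noisy_patches, t):
        # noisy_patches: (batch, num_spacetime_patches, dim); t: (batch,) timestep in [0, 1]
        h = noisy_patches + self.time_embed(t[:, None]).unsqueeze(1)
        h = self.encoder(h)
        return self.to_noise(h)

model = ToyDiffusionTransformer()
patches = torch.randn(2, 128, 256)   # 2 clips, 128 spacetime patch tokens each
t = torch.rand(2)                    # random diffusion timesteps
predicted_noise = model(patches, t)
print(predicted_noise.shape)         # torch.Size([2, 128, 256])
```

In an actual diffusion pipeline, a model of this kind would be applied repeatedly at sampling time, starting from pure noise and progressively denoising the patch tokens into a coherent video representation.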
Beyond Sora, OpenAI also announced a series of other product updates at its developer conference, including new versions of the GPT-4 Turbo and GPT-3.5 Turbo models that follow instructions more reliably and offer larger context windows. The company also introduced new text embedding models and announced price reductions across several API products, making it easier for developers to scale their applications.
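As a rough illustration of how developers use these models, the snippet below shows typical calls to the chat completions and embeddings endpoints via OpenAI's official Python SDK (v1.x). The model identifiers are examples only and may not match the exact names in the announcement; an OPENAI_API_KEY environment variable is assumed.

```python
# Sketch of calling the updated models through the OpenAI API (Python SDK v1.x).
# Model names below are illustrative; consult OpenAI's documentation for the
# identifiers that correspond to the announced releases.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Chat completion with a GPT-4 Turbo-class model (larger context window).
chat = client.chat.completions.create(
    model="gpt-4-turbo-preview",  # example identifier
    messages=[{"role": "user", "content": "Summarize the Sora announcement in one sentence."}],
)
print(chat.choices[0].message.content)

# Text embedding with one of the newer embedding models.
emb = client.embeddings.create(
    model="text-embedding-3-small",  # example identifier
    input="OpenAI unveiled a text-to-video model called Sora.",
)
print(len(emb.data[0].embedding))  # dimensionality of the returned vector
```

Embedding vectors like the one returned above are typically stored in a vector database and compared by cosine similarity for search or retrieval, which is where the announced price reductions matter most for high-volume applications.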