
The public will be able to access OpenAI’s Sora AI video generator “this year”: CTO Mira Murati

This powerful tool, which can produce lifelike videos in response to text prompts, is expected to cost about the same as DALL-E, another OpenAI product, once it is made accessible to the general public.

OpenAI surprised the world with the announcement of Sora, a new text-to-video generative AI model that can produce lifelike videos. In an interview with The Wall Street Journal, the company’s Chief Technology Officer (CTO), Mira Murati, has now confirmed that Sora will be made available to the general public “this year,” though it “may take a few months.”

OpenAI introduced Sora to the public in February by publishing a few videos created with the model, marking a significant advancement in the field of generative AI. Sora is currently available only to a small group of users, but that will soon change, according to OpenAI’s CTO.

Additionally, Murati stated that Sora will “eventually” include audio generation, enabling users to produce richer content directly from the tool. The report further emphasises that the company is considering ways to enhance the tool, including letting users edit the AI-generated videos. Murati also confirmed that generating a video is computationally “much more expensive” than generating an image. The company intends to price Sora similarly to its text-to-image generator DALL-E, which formerly cost $15 for 115 credits, on top of 50 free credits in the first month and 15 free credits in each subsequent month. OpenAI now provides ChatGPT Plus subscribers with DALL-E 3 access for $20 a month.

Asked about the data used to train the model, the CTO responded, “I’m not going to go into the details of the data that was used, but it was publicly available or licenced data.” She also confirmed that OpenAI has used data from its partner Shutterstock. A watermark at the bottom of videos produced with Sora helps viewers distinguish AI-generated content from human-created content. Several countries have begun implementing strict rules to curb the spread of misinformation; one example is the restriction of election-related queries on Google’s Gemini AI.
