Adobe is jumping into AI video generation. The company's Firefly Video Model, which has been teased since earlier this year, is launching today across a handful of new tools, including some inside Premiere Pro that will let creatives extend footage and generate video from still images and text prompts.
The first of these tools – Generative Extend – is launching in beta for Premiere Pro. It can be used to extend the beginning or end of footage that's slightly too short, or to make adjustments mid-shot, such as correcting shifting eye-lines or unexpected movement.
Clips can only be extended by two seconds, so Generative Extend is really only suited to small tweaks, but that could replace the need for a reshoot to fix minor issues. Extended clips can be generated at either 720p or 1080p at 24 fps. It can also be used on audio to help smooth out edits, albeit with limitations. It can extend sound effects and ambient “room tone” by up to ten seconds, for example, but not spoken dialogue or music.
Two other video generation tools are launching on the web. Adobe's Text-to-Video and Image-to-Video tools, first announced in September, are now rolling out as a limited public beta in the Firefly web app.
Text-to-Video works similarly to other video generators like Runway and OpenAI's Sora – users just type a description of what they want to create. It can emulate a variety of styles, such as “real” film, 3D animation, and stop motion, and clips can be refined using “camera controls” that simulate things like camera angles, motion, and shooting distance.
Image-to-Video goes a step further by letting users add a reference image alongside a text prompt to give more control over the results. Adobe suggests this can be used to create b-roll from photos and illustrations, or to help visualize reshoots by uploading a still from an existing video. The before-and-after example below shows it isn't really capable of replacing reshoots directly, however, as several errors – such as wobbling cables and shifting backgrounds – are visible in the results.
You won't be making whole films with this technology any time soon, either. The maximum length of Text-to-Video and Image-to-Video clips is currently five seconds, and the quality tops out at 720p at 24 frames per second. By comparison, OpenAI says Sora can generate videos up to a minute long “while maintaining visual quality and adherence to the user's prompt” – but that isn't available to the public despite being announced months before Adobe's tools.
Text-to-Video, Image-to-Video, and Generative Extend all take about 90 seconds to generate, but Adobe says it's working on a “turbo mode” to cut that down. And as restricted as they may be, Adobe says its AI-powered video tools are “commercially safe” because they're trained on content the company was permitted to use. Given that models from other providers, such as Runway, are under scrutiny for allegedly being trained on thousands of scraped YouTube videos – or, in Meta's case, possibly even your own personal videos – commercial viability may be a deciding factor for some users.
Another perk is that videos created or edited using Adobe's Firefly video model can be embedded with Content Credentials to help disclose AI usage and ownership rights when they're published online. It's unclear when these tools will come out of beta, but at least they're publicly available – which is more than we can say for OpenAI's Sora, Meta's Movie Gen, and Google's Veo generators.
The Firefly Video Model was announced today at Adobe's MAX conference, where the company is also bringing more AI-powered features to its creative software.