True generative AI video editing has arrived in Premiere Pro. At this year’s Adobe Max, the company revealed that its new genAI video tools are now available in beta, including the first generative video model designed to be safe for commercial use.
As we reported last month, the latest update adds a whole suite of genAI tools. Generative Extend is the headline feature, letting users increase the length of video and audio clips.
But there’s much more on offer as Adobe pushes its Firefly AI deeper into the video editing software.
What’s new from Adobe Firefly Video?
With the release of the first set of Firefly-powered video editing workflows, Adobe has confirmed several core areas of focus.
First, dissatisfied with the quality of earlier results, Adobe has invested heavily in refining the latest version of the model. As well as improving video quality, the company said the model has been trained on Adobe Stock and public domain data – not on user data or media found online. Adobe believes the safeguarded training, alongside the indemnification available to enterprise customers, makes this the first generative video model designed to be commercially safe, and therefore more attractive to professionals looking to use AI without fear of copyright infringement.
That doesn’t mean Adobe’s forgotten the core of the experience. In a virtual press conference attended by TechRadar Pro, Alexandru Costin, Vice President, Generative AI and Sensei at Adobe, explained that users “told us editing is more important than pure generation. If you look at the success of Firefly Image, the most use we get inside Photoshop is with Generative Fill because we’re serving an actual customer workflow. So, with video, we’ve decided to focus more on generative editing.”
So, what does that look like in practice?
Generative Extend is the clearest and most useful example coming to the beta. The tool lets users extend existing video and audio clips to match the soundtrack or alter the pacing, even when there isn’t enough coverage in the original footage.
Image to Video and Text to Video have also arrived in earnest – as one would expect to find in any self-respecting AI video editor. By the looks of things, they work in a similar fashion to that found elsewhere across the Creative Cloud ecosystem – with, like any good movie, a twist. Here, users can effectively become the director, with creative control over shot size, angle, motion, and zoom. Using the new models, the company also showcased examples of text graphics, B-roll content, and overlaying AI-generated atmospheric elements like solar flares onto existing footage.
The latest updates build on last month’s set of beta tools, including a new context-aware properties panel that brings the most-needed tools into one place to speed up workflows. There’s also a new Color Management system that, Adobe said, “fundamentally transforms the core color engine.” General performance has improved, too: ProRes exports, for example, are now three times faster than before.
We’ll be reviewing the latest version of Premiere Pro soon, and we’re keen to see how well the new video tools complement the editing process. In the meantime, users can try out Adobe’s new tools in the Premiere Pro beta.