Adobe has launched new AI tools for video production, including Generative Extend in Premiere Pro, which allows editors to expand footage and make small adjustments without reshooting. Two web tools—Text-to-Video and Image-to-Video—let users create video clips from text prompts or reference images. These tools are currently limited to five-second clips at 720p resolution, but they provide a glimpse into the future of creative, AI-powered editing. Adobe prioritizes transparency with Content Credentials to disclose AI usage and keep outputs commercially safe.
Adobe has taken a big step into the world of generative AI video creation with the launch of its Firefly Video Model, now available directly within its popular video editing software, Premiere Pro. This cutting-edge AI-powered tool aims to simplify video production for creators by allowing them to extend footage and create videos using just text prompts and still images, making editing more seamless and efficient.
The first of Adobe's new AI tools, Generative Extend, is currently available as a beta feature in Premiere Pro. This tool is designed to address an all-too-common challenge for video editors: dealing with footage that's slightly too short or contains minor issues. Instead of reshooting, editors can use Generative Extend to add up to two seconds to the beginning or end of a clip or even make adjustments within a scene. For example, it can tweak awkward eye movements or unexpected gestures without needing to set up another shoot.
Generative Extend is intended for small fixes. It can generate extensions of up to two seconds, making it ideal for minor corrections but not for more extensive edits. The output is limited to 720p or 1080p resolution at 24 frames per second, which suits quick adjustments but might not meet the needs of high-end video production. The tool can also extend audio, adding up to ten seconds of sound effects or ambient noise, which can be helpful for smoothing transitions between clips.
Alongside Generative Extend, Adobe has also launched two new AI-powered video generation tools on the web: Text-to-Video and Image-to-Video. These tools make it possible to create video content simply by typing in a description or uploading a reference image. Text-to-Video works similarly to other AI platforms, like OpenAI's Sora, by allowing users to type in a description of what they want, and the AI generates a short clip in response. The tool can emulate various styles, including realistic footage, 3D animation, and stop-motion, giving creators a range of artistic possibilities.
Image-to-Video takes this a step further by allowing users to upload a reference image along with a text prompt. This gives more control over the final output, making it ideal for creating b-roll content or visualizing ideas for reshoots. For example, users can take a still image from an existing video and use the AI to expand it into a short clip. However, the technology isn’t flawless yet; some artifacts, like wobbling objects or shifting backgrounds, still appear, meaning it’s not quite a replacement for actual reshoots.
While these tools sound revolutionary, they are still in the early stages and come with a few significant limitations. Currently, the maximum length of any generated clip is just five seconds, and the quality caps at 720p at 24 frames per second. These limitations mean that Adobe’s generative tools are best suited for adding creative touches or enhancing small segments rather than creating full-length professional videos. Competing tools, like OpenAI's Sora, have promised longer outputs, but those tools are still not publicly available.
Another factor to consider is that each generated clip takes around 90 seconds to produce, though Adobe has promised a “turbo mode” in the future to speed up the process. Despite these growing pains, Adobe’s focus on creating tools that are commercially viable could be its edge. Unlike other models that may use scraped, unlicensed content for training, Adobe assures its users that the Firefly Video Model is trained only on legally permissible data, making it more dependable for professional, commercial use.
Adobe has also added Content Credentials to help verify ownership and the use of AI in generated content. This feature means that videos created using the Firefly tools can be tagged to disclose that they have been altered or generated with AI, which could prove useful in a world where the authenticity of media is increasingly questioned. For creators and brands, this transparency builds trust and could become a decisive factor for users looking to avoid legal complications around AI-generated media.
Adobe’s introduction of the Firefly Video Model marks an important development for video creators and editors, providing them with new tools to simplify and enrich the creative process. Though the current versions have some restrictions, they offer a glimpse into a future where video production is more accessible and efficient. As Adobe continues to develop these tools, we could see a shift towards more AI-assisted workflows that save time and reduce the costs associated with reshoots and editing.
With its new AI capabilities, Adobe aims to provide a more versatile toolkit that benefits content creators of all skill levels. The company’s commitment to ethical and transparent AI use also helps set a standard in an industry still grappling with questions of copyright and content ownership. For now, while these tools may not replace traditional video production entirely, they certainly offer exciting new opportunities for content creators to explore.
ThatsMyAI
8 November 2024