AI: the Battleground for FCP vs. Premiere Pro vs. Resolve

 

This week, Apple announced their new range of iPads, and as part of that announcement they also highlighted new AI additions to FCP. In the war of video editors, AI is the new battleground. An exciting three-way fight is developing between the major editing programs: Apple’s FCP, Adobe Premiere Pro and DaVinci Resolve. Everyone has their personal preference, but the fact remains that the advances machine learning (ML) and generative AI are providing may be just enough of a reason for people to consider switching from their once most beloved and preferred tools.

Resolve probably led the field in machine learning innovation, with a range of tools for colour grading isolation as well as features such as audio background noise reduction for dialogue. When these were first introduced, they were well beyond any normal advance one expected from an iterative software release. For example, isolating a voice from background noise was the difference between having a usable clip and not, for many people, especially indie filmmakers and documentarians. Adobe matched this, but only via its separate beta 'Podcast' AI test website.

Just days before the Apple FCP announcement, Adobe launched its new generative AI additions in Premiere Pro. Their Firefly additions allow an editor to use tools previously only seen in Photoshop’s beta to provide seamless visual effects while editing. Of course, this wasn’t Adobe’s first use of machine learning and generative AI; the company has shown a considerable commitment to generative AI for some time. Nevertheless, the demonstration of the new Firefly tools seemed remarkable and a substantial leap from anything that had been possible previously in Premiere.

Firefly inside Premiere Pro

Most of these products rely on cloud-based rendering, but even with the latency caused by upload and download speeds, all of these new machine learning tools dramatically increase the speed at which work can be done and, more importantly, they change the dynamic between editing and visual effects. For a long time it has been possible to move shots fairly seamlessly between Premiere and After Effects. However, for editors not familiar with AE, producing anything as seamless as these generative AI style visual effects shots was daunting compared with the ease of doing it today in Premiere with Firefly.

FCP

As part of Apple’s announcements, new artificial intelligence features have been shown for Final Cut Pro for Mac 10.8, allowing editors to rapidly customise the look of videos or photos in a single click and retime visuals seamlessly. Apple has also added important workflow-accelerating tools that bring new ways to manage colour correction and video effects, as well as to search and navigate the timeline.

The Final Cut Pro for iPad launch showcased Live Multicam, an innovative new solution for users to capture up to four different angles of a single scene. Live Multicam connects wirelessly via Final Cut Camera, a new video capture app, enabling users to view up to four iPhone or iPad devices and providing a director’s view of each camera in real-time.


For Apple, the new FCP news not only announced the ability to have multiple iPhones feeding footage into a real-time Multicam, but also showed automatic segmentation, effectively generative AI roto generation, as part of a demonstration of what is possible on just the iPad version of Final Cut Pro.

Auto isolating objects using ML at 4K on an iPad.

During the launch presentation, Apple showed 4K automatic roto on clips such as this dancer (above) using AI. This was demonstrated on the iPad, but importantly it relies on the Neural Engine built into the Apple silicon M4 chip. This is indicative of Apple’s preference for doing machine learning inference locally rather than in the cloud where possible, both for performance and privacy reasons.
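Apple has not published how FCP’s roto is implemented, but a rough sense of what on-device subject isolation looks like can be had from the public Vision framework, whose person segmentation request runs locally and can be scheduled onto the Neural Engine. The sketch below is ours for illustration only: the function name and the assumption that the input is a single still frame on disk are not from Apple.

```swift
import Vision
import CoreVideo

// A minimal sketch (not Apple's FCP implementation): run person segmentation
// entirely on-device with the public Vision framework. On Apple silicon the
// underlying model can be dispatched to the Neural Engine, with no cloud round trip.
// `personMatte` and the still-frame URL input are illustrative assumptions.
func personMatte(for frameURL: URL) throws -> CVPixelBuffer? {
    let request = VNGeneratePersonSegmentationRequest()
    request.qualityLevel = .accurate                              // favour matte quality over speed
    request.outputPixelFormat = kCVPixelFormatType_OneComponent8  // 8-bit greyscale matte

    let handler = VNImageRequestHandler(url: frameURL, options: [:])
    try handler.perform([request])                                // inference happens locally

    // The first observation's pixel buffer is the person matte for the frame.
    return request.results?.first?.pixelBuffer
}
```

In a real editor this would run per frame against video buffers rather than a file on disk; the point is simply that the segmentation model executes on the device itself rather than on a server.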

New Smooth Slo-Mo in FCP

On the Mac, editors can leverage the hardware Neural Engine in Apple silicon. As a result, there are many new AI features and organisational tools in FCP. Apple’s Final Cut Pro 10.8 introduces Enhance Light and Colour, offering the ability to improve colour, colour balance, contrast, and brightness in a single step, and it is optimised for SDR, HDR, RAW, and Log-encoded media. With the new Smooth Slo-Mo, video frames are intelligently generated and blended, providing the highest-quality movement for slow-motion shots. To aid post efficiency, colour corrections and video effects can now be given custom names in the inspector to easily identify the changes applied to a clip, and effects can be dragged from the inspector onto other clips in the timeline or viewer.

Many see this release as Apple serving up an appetiser ahead of its June WWDC event, where the focus will clearly be on advanced AI across the product line.

Resolve

At NAB in April, Blackmagic Design showed their new DaVinci Resolve 19 beta. This includes dramatic noise reduction and new face refinement features, all of which use machine learning to create effects that speed up and improve the overall process of editing and collating in Resolve.

DaVinci Resolve 19 (now in public beta) is a significant update that adds a range of new AI tools and over 100 feature upgrades, including IntelliTrack AI, UltraNR noise reduction, ColorSlice six-vector grading, a Film Look Creator FX, new MultiPoly rotoscoping tools, and new Fairlight AI audio panning to video and ducker track FX. Editors can also work directly with transcribed audio to find speakers and edit timeline clips.

Resolve

Competition is healthy for the innovation it fosters, and right now the result of this competition is both new tools and perhaps the start of a shift in workflows between editing and effects teams. Certainly, the war for new and innovative AI-assisted tools is far from over; it is accelerating.

*Our headline/banner cover image was AI-generated in Firefly.

Update: AVID

AVID

It has been pointed out that we did not include Avid news in the original story. That was our omission. The Avid RAD (Research and Development) Lab was set up in 2021 to explore the use of new technologies, including AI, in media production. We also note that AI was central to Avid’s product offerings at NAB. These included automated speech-to-text transcription, summarisation, and language translation. Avid also showed many of the AI tools already available in its solutions, such as PhraseFind AI and ScriptSync AI in Avid Media Composer, facial detection and scene recognition in Avid MediaCentral, plus transcription services.