Final Cut Pro 11’s Magnetic Mask Feature Reviewed – An AI-powered Productivity Revolution

Apple has long been accused of neglecting its Pro apps – particularly Final Cut Pro – and not giving them enough updates, while its biggest competitors, DaVinci Resolve and Premiere Pro, regularly receive batches of new features. For now, at least, the wait is over: with the release of Final Cut Pro 11 (yes, we’re skipping 10.9!), Apple is actually leapfrogging the other NLEs in one area in particular – automatic masking with its Magnetic Mask feature. I took a closer look and tried it out.

If you’ve been using image processing apps for a while, you will be well accustomed to Photoshop’s famous Magic Wand tool. It selects areas of an image with a similar color, and you can set it to be more or less “tolerant” towards similar colors. It’s not AI- or machine-learning-based – it’s been around for much longer than we’ve known what “machine learning” even means – so it does a pretty poor job at recognizing the borders of subjects. Basically, it simply doesn’t “know” what a subject is; it only looks at similar color information.

Photoshop’s Magic Wand vs. “Select and Mask”

Chances are that you haven’t used the Magic Wand in Photoshop nearly as much as you used to, because of the “Select and Mask” tool that was introduced a while ago, which does a much better job at “understanding” the exact borders of objects, humans, or animals. Very often it’s a one-click affair: you click on a subject and boom – perfectly selected, no other steps required.

Final Cut Pro 11 introduces Magnetic Mask tools

On the video front, however, we haven’t been as spoiled with this technology over the last few years. Yes, DaVinci Resolve has a similar tool, but it makes quite a few mistakes, which means you’ll still have to rotoscope quite a bit to get it right.

Close to perfect automatic tracking: a drone flying in mid-air, tracked with Magnetic Masks in Final Cut Pro 11. Image credit: CineD

In comes Apple with a major update to Final Cut Pro, even going so far as to finally make the big jump from Final Cut Pro 10.8 to 11. (One could say they were simply running out of numbers behind the 10… but they did skip 10.9 – maybe that’s Apple’s unlucky number? We also never got an iPhone 9…) As its headline feature, Final Cut Pro 11 introduces the “Magnetic Mask”, which works very similarly to Photoshop’s “Select and Mask”, but for video.

Are Magnetic Masks just pimped “Magic Wands”, or more?

When I first heard about Magnetic Masks, I didn’t expect too much, because my first association was actually with that Magic Wand tool we’ve known forever from photo editors. But the first time I saw it in action, I was blown away. One click on a subject immediately colors it red and shows the automatically generated mask. Another click on “Analyze” starts tracking the mask automatically for the entire duration of the clip in the timeline, going forward from the playhead position. When it reaches the end, it continues backward from that playhead position until the whole clip is covered. This is useful because subjects can drift in and out of focus or frame, and this way you can start the subject tracking on a clean frame.

Still frame from the resulting isolated drone. Great border separation with a chance to feather it further. The rotors are super difficult in front of a busy background full of leaves, but it’s still a great result. Image credit: CineD.

Real-world testing of Magnetic Masking with drones, humans, and forests

I went through dozens of clips from recent shoots and decided to select some really difficult ones, starting with b-roll from my review of the DJI Air 3S: the drone flying in mid-air, turning, sometimes leaving the shot, and flying back in. The Magnetic Mask does a simply remarkable job even with a one-click selection. On some very rare occasions, it misses a few frames of one of the drone arms coming back into the shot as the drone turns, but this was easy to fix with two more clicks. The foreground-background separation works amazingly well, and even when the drone quickly exits the frame and comes back in, the mask “catches” it again. There are very few moments when parts of the background are also selected, and even then, only almost invisible portions of it.

When magnifying the mask, you can see that it does a decent job with fine hair separation. Image credit: CineD

Moving on to the next shot: me flying the drone. Same story – one click, and I have a perfect selection that even does a great job separating fine hair from the background. Tracking works well even when I don’t fill the frame, as the camera tilts down and back up to my head again. No complaints; manual rotoscoping wouldn’t have looked any better on this shot.

The last shot I tested was a drone shot flying towards a hut inside a forest full of colorful fall trees. The big challenge here is the changing perspective, with tree branches and leaves moving in front of the hut. On this shot, the first selection click wasn’t enough, and some further additions to the mask with additional clicks were necessary, as you can see in the review video. Still, it took two minutes or so and the mask was done, and, again, the automatic tracking did a great job. It’s not perfect – some of the foreground leaves pick up too much detail – but it’s still better than what even manual rotoscoping would achieve on a shot like this.

A few more clicks than before, but still done within two minutes: Masking this hut in a forest and then analyzing the clip. Image credit: CineD

When working with Magnetic Masks, you can either create a standalone mask and stack the effects you want to apply to it below the mask in the Effects tab, or, if you want a mask for one specific effect, select the mask symbol from inside that effect and track it from there. If you do the latter, keep in mind that there doesn’t seem to be a way to copy that effect-specific mask to another effect – you’ll have to track the subject again. I would love to see Apple add an option to copy a mask from within one effect onto another, or even paste it as a global mask on the clip.

Render speed of Magnetic Masks in Final Cut Pro 11

I am using a 16″ M2 Max MacBook Pro with 64GB of RAM, and rendering Magnetic Masks (on footage stored on a fast external SSD) ran at roughly 70% of realtime with UHD footage at 25 frames per second. In other words, remarkably fast considering the complexity of the work. We don’t have access to an M4 Pro/Max machine right now, but it can only get better on those, of course.
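To put that 70% figure into perspective, here is a minimal sketch in plain Python that estimates how long analyzing a clip of a given length would take. The only input is the roughly 0.7× realtime factor observed in this test, so treat it as an illustration rather than a benchmark:

```python
def estimated_render_seconds(clip_seconds: float, realtime_factor: float = 0.7) -> float:
    """At ~0.7x realtime, every second of footage needs about 1 / 0.7 = 1.43 s of processing."""
    return clip_seconds / realtime_factor

# Example: a 60-second UHD 25p clip analyzed at ~70% realtime
print(f"{estimated_render_seconds(60):.0f} s")  # roughly 86 s
```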

Conclusion

The Magnetic Mask feature is one of the most intuitive, best-working masking tools I’ve ever used, and it will put selective color grading and effects placement into the hands of a whole new generation of creators and filmmakers. It simply works very reliably, and it leapfrogs the quality of tracking currently available in other NLEs.

To upgrade to Final Cut Pro 11 from any prior version of Final Cut Pro (since the relaunch over 13 years ago, that is – 13 years!! Can you believe it?!), you can download the free update from the App Store. It’s amazing that they still aren’t charging for the upgrade as long as you purchased Final Cut Pro for $299 once before. There is also a free 30-day trial if you haven’t used it before. Other features in Final Cut Pro 11 are well worth the upgrade (even if it did cost money), including Transcribe to Captions, Spatial Editing, smooth Slo-Mo, and much-enhanced voice isolation, but nothing will convince you more to use Final Cut Pro as your daily editing application than the Magnetic Mask function. Well done!

Final Cut Pro for iPad 2 – Live Multicam Review & Tutorial

We had early access to Final Cut Pro for iPad 2 and reviewed the Live Multicam feature, which allows for simultaneous live recording from up to four iPhones and iPads without extra hardware. Here’s our Final Cut Pro for iPad 2 review!

First off: watch the video above to get the full impression of our “live demo” and review of the Live Multicam feature in Apple’s new Final Cut Pro for iPad 2 (it’s a confusing name, but it’s the second version of Final Cut Pro for iPad, not a version of Final Cut Pro for the long-obsolete 2nd-generation iPad…). I’ll highlight some of my key findings in the text below.

Final Cut Pro 2 for iPad and the Live Multicam feature. Image credit: CineD

Democratizing multicam shooting and editing with Final Cut Pro for iPad 2?

For the longest time, multicam shooting has been either something easy to do when you didn’t mind dealing with syncing in post – meaning you just record on a bunch of cameras and worry about syncing them later – or a rather hardware-intensive task that required timecode generators and cameras that could actually accept timecode input. Not a big deal for professional productions, and relatively easy to handle thanks to devices such as Tentacle Sync boxes, but not necessarily something that the average content creator would use. Then came automatic syncing using the audio waveform, which already made things a lot easier.

Smartphones – and iPhones in particular – have become ubiquitous, and if we’re all honest, the cameras in these phones are the ones used most, even by professional filmmakers and photographers, even if it’s “just” photos of our families (but aren’t those the images that matter most?). The quality of videos and photos from these devices has reached a level that essentially killed the entry-level camera market in the last few years, so adding “professional” video (and of course photo) features to these devices is only the next logical step. And that includes multicam shooting and editing using smartphones and tablets.

Using the Live Multicam feature in Final Cut Pro 2 for iPad requires a new free app called Final Cut Camera, now also in the App Store. Image credit: CineD

Live Multicam and live drawing – two features that are not available on the Mac version of Final Cut Pro

Fast forward to 2024, and Apple released version 2 of Final Cut Pro for iPad, which adds Live Multicam as a key new feature that also differentiates it from its sister software on the Mac (which gets an update to 10.8). With this new feature, as well as Live Drawing, which lets you draw onto a video and animate those drawings, Apple is clearly taking steps that separate the app from its Mac counterpart, adding features that are exclusive to the iPad version (which is only available as a monthly or annual subscription).

Using Live Multicam requires a new camera app from Apple called Final Cut Camera, which is freely available in the App Store. I’ll work on a separate review of the app, but essentially it’s not unlike Blackmagic Camera for iPhone, just simpler, with fewer functions – it unlocks a lot of “pro controls” on iPhones. The app can definitely be used stand-alone without Final Cut Pro for iPad, even if you don’t plan on using the Live Multicam feature.

Live Multicam, tested

Here are some of the key facts about the Live Multicam feature: You can connect up to four iPhones or iPads for the multicam recording, and one of these angles can also be the built-in camera of the iPad you are using for the recording.

Pairing iPhones is easy and in most cases worked without any problems. However, one phone wouldn’t connect, no matter how often I tried. It was an iPhone 15 Pro Max that is only used for tests at the office, so initially I thought it might require a SIM card, but that wasn’t the case – it turned out that iCloud password sharing/Keychain needed to be activated on the phone for it to work. This is something that Apple definitely has to clarify in the setup notes, because knowing it beforehand can save a lot of time!

What’s nice is that older iPhones can also be used, as long as they can still install Final Cut Camera, which requires iOS 17.4. I added an iPhone 13 Pro Max for one of the angles, and it worked flawlessly. Of course, that phone does not support Log recording, so that’s not an option (but it is actually the first iPhone that can record ProRes, so that would work).

Live multicam view from four iPhones. Image credit: CineD

All, or at least most, camera settings can be changed from the Multicam view once the cameras are connected, which is very convenient – you don’t need to run to each camera to change settings. There is, however, no “global settings” screen where you can change all camera settings at once, which would be nice, because it’s easy to miss something. You can’t see the detailed camera and codec settings on the overview screen; you have to switch to full screen to modify the settings on each camera individually. That actually led me to mistakenly record ProRes on one of the cameras (the others were set to the HEVC/H.265 codec). I ended up with a 17-minute recording from the main camera weighing over 100 GB. Yikes! In my opinion, it shouldn’t even be possible to mix codecs when using the Live Multicam feature – then this mistake couldn’t happen in the first place. There is no point in recording different qualities on the iPhones for a multicam!

When you press record, all cameras start recording internally at the same time and continue streaming a preview (proxy) image to the iPad. In a studio environment with normal Wi-Fi, this worked flawlessly, and considering everything runs over Wi-Fi, I was also impressed with the relatively low latency of the image preview.

How much sense does ProRes 422 HQ make?

We gained the ability to record ProRes on the iPhone a while ago, and since the iPhone 15 Pro and Pro Max, it can also shoot Apple Log, which is of course great. (We ran a full Lab Test on the iPhone 15 Pro camera – check that out in case you missed it.) However, the ONLY flavour of ProRes that Apple implemented in iPhones is ProRes 422 HQ, and I simply cannot get my head around why. It’s one of the highest-bitrate flavours of ProRes and generates huge file sizes, while the hardware doesn’t necessarily deliver the quality that would make full use of what the codec offers. In other words: ProRes LT would save A LOT of space on the phones (yes, you can record externally… but you shouldn’t have to!) and is still a 10-bit video codec delivering plenty of quality.

Whoops – I accidentally had one camera set to ProRes (422 HQ). 17 minutes of 4K UHD footage resulted in a file larger than 100GB. Image credit: CineD
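As a rough sanity check on those file sizes, here is a minimal back-of-the-envelope sketch in Python. The bitrates are approximate UHD 25p figures – the ProRes numbers are based on Apple’s published target data rates, and the HEVC number is a guess – so treat the output as ballpark estimates, not measurements:

```python
# Ballpark file-size estimate: size = bitrate * duration / 8
# Bitrates are approximate UHD (3840x2160) 25p figures - assumptions, not measurements.
BITRATES_MBPS = {
    "ProRes 422 HQ": 737,        # what the iPhone records internally
    "ProRes 422 LT": 342,        # the flavour the author wishes for
    "HEVC (high quality)": 50,   # rough guess for a high-bitrate HEVC setting
}

duration_s = 17 * 60  # the accidental 17-minute recording

for codec, mbps in BITRATES_MBPS.items():
    gigabytes = mbps * duration_s / 8 / 1000  # Mbit -> MByte -> GB (decimal)
    print(f"{codec:>20}: ~{gigabytes:.0f} GB")

# ProRes 422 HQ lands at roughly 90+ GB before audio and container overhead,
# which lines up with the "over 100 GB" file mentioned above; LT would be
# less than half of that, and HEVC only a tiny fraction.
```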

Background footage transfer (but no background processes)

Transferring footage wasn’t such a huge deal before Final Cut Pro for iPad 2 and its Live Multicam feature, but this software transfers video assets in the background after recording stops, which in my case meant that over 100 GB (from all four cameras) had to be transferred. While the transfer was reasonably fast, it of course still took a while. (You can even decide to transfer later, which is nice.) Big caveat here: because of how iOS and iPadOS are structured, Final Cut Camera and Final Cut Pro for iPad need to stay open at all times for the transfers to continue – background processes are still “killed” – which is in general very annoying behaviour for “pro” apps (this even happens when exporting a file from Final Cut Pro for iPad… equally frustrating!). I hope Apple can implement background processing for individual apps like Final Cut Pro for that reason.

Transfer of the original files from the iPhones works over Wi-Fi after the recording is done. You can already start editing using the proxies. Image credit: CineD

Multicam recording but no live switching

While the app records all camera angles at the same time, there is no live switching capability built into it right now – which means no edits or edit decisions can be made in realtime. This is something that absolutely should be added in the next version of the app, as it’s pretty much standard for multicam applications and hardware. It’s simply a huge time saver: the edit is almost done once a live multicam shoot is over, and corrections can still be made afterwards, of course.

The actual editing of the multicam clip works flawlessly and without any hiccups in Final Cut Pro for iPad 2. It’s also easy to define the audio source for the edit, and corrections can be made in the angle editor. It’s also possible to export projects to continue working on them on the Mac afterwards, where the multicam clips can be edited as well.

Multicam editing after recording works nicely inside the iPad app. Image credit: CineD

Final Cut Pro for iPad 2 Review of Live Multicam – Conclusion

Live Multicam on Final Cut Pro for iPad 2 is definitely a function that will be used a lot by content creators who want an easy way to make multi-camera recordings for their content. Limitations include the fact that this function is only available on the iPad version of the app, the sheer size of the ProRes 422 HQ files (though you don’t have to use it – you can stick with HEVC, which is very reasonably sized!), the inability to do live switching while recording, and the fact that you need to keep the apps open and active for the transfers to continue (it doesn’t work in the background or when the iPad locks itself). Nevertheless, it’s version 1 of a completely new function, and it’s simply amazing to me that you can do all this without any dedicated hardware. Lag is minimal considering it’s running over Wi-Fi, and connections seem very stable.

A great first step towards the democratization of multicam recording and editing, and I can’t wait to see what’s coming next!

Links:
Final Cut Pro for iPad 2 on the App Store
Final Cut Camera on the App Store

What do you think about Live Multicam on Final Cut Pro for iPad 2? Can you see yourself using it in the future? Let us know in the comments below.

Pro Video Editing on iPad Pro M4 with FCP 2? Hands-on at Apple Event

At the Apple launch event at Apple’s UK HQ in Battersea Power Station in London, I was able to go hands-on with the newly announced iPad Pro M4, as well as preview versions of the upcoming Final Cut Pro 2 for iPad and the newly announced Final Cut Camera app, which enables a feature called “Live Multicam”.

Exciting times – this was the first time CineD was invited to an Apple launch event. Of course I wondered why, as the invitation already hinted at an event heavily focused on the iPad rather than on filmmaking-related topics. While watching the keynote there, it became clear: Apple not only relaunched the iPad Pro with the entirely new M4 chip, but also showcased an entirely new version of Final Cut Pro for iPad that is now fully optimized for touch (as a reminder, the original FCP for iPad was announced last May), and an entirely new app called Final Cut Camera that allows for Live Multicam shooting.

iPad Pro – Tandem OLED, much lighter, M4 for more power. Credit: CineD

iPad Pro with M4 – Tandem OLED, much lighter, M4 more power-efficient

As mentioned in the video report above, I was impressed by how light the M4 iPad Pros have become. 100 grams less is a lot for a compact device like an iPad, and you can feel it when holding the new 13-inch version in one hand and operating it with the other. It finally feels like you can use it for an extended period of time while holding it in one hand.

It seems like the reason to debut the M4 chip in the iPad Pro is that it’s more power-efficient than the M2 (and the M3, which was skipped in the iPad lineup), which means it generates less heat and needs less power to operate at speeds comparable to the M2. This way, the iPad can be built even thinner (and it’s really thin now… probably the closest Apple has ever gotten to that “magic sheet of glass” they envisioned when first introducing the iPad). iPads “traditionally” last about 10 hours on a full charge, and in my own experience, those are conservative estimates (they last longer if you do mundane tasks). I am guessing that because the processor is less power-hungry, they could shed some space and weight by incorporating a smaller battery.

It will be interesting to see how the iPad Pro fares in render-intensive real-life pro scenarios, and I look forward to reviewing this. There have been some reports of performance throttling on its predecessor, so let’s see if the M4 fares better in this regard.

The “Tandem OLED” screen is really impressive and super bright for an OLED. They achieved this brightness by basically stacking two OLED panels on top of each other and having them “work in harmony”. The level of detail is staggering, and I can see this screen showing up in professional productions everywhere – I have no doubt it would make a perfect preview monitor both on and off set. There is a nano-texture etched-glass version of the screen available for the 1TB and 2TB versions of the iPad Pro for an extra $99, which diffuses light even more and makes it less reflective. They target this particularly at pro colorist workflows.

Final Cut Pro 2 for iPad – very responsive touch, AI functions

Apple is entering the AI game with Final Cut Pro 2 for iPad, which not only feels much snappier to operate in its latest iteration but is also fully enabled for touch – which wasn’t the case before. There is a virtual jogwheel for exact navigation, and setting in and out points as well as pinch-zooming works a treat. The new Apple Pencil Pro enables Final Cut Pro 2 for iPad to go into “Live Draw” mode, where you can draw directly onto your footage and animate that – neat, and surely a way to creatively and easily add graphics to your footage. Regarding the aforementioned AI functions, machine learning is now incorporated into various features, for example a keyer that removes the background without a green screen. It’s not perfect, but surely very useful for quickly isolating subjects. They demoed it by placing a title between a dancer and the wall in the background, and that worked really well each time it was reproduced. I will be very curious to test this tool’s abilities when reviewing the new version. The new version of FCP for iPad also supports external SSDs for storing libraries, which is a long-awaited feature.

Final Cut Camera for Multicam Recording. Credit: CineD

Final Cut Camera – Live Multicam

Apple also announced a free new app for iPhone and iPad called Final Cut Camera, which can be seen as a bit of a competitor to Blackmagic Camera. While probably not as fully featured (for example, preview LUT support unfortunately seems to be missing in version 1, although Apple Log shooting is available), it allows for manual adjustment of ISO, shutter speed, white balance, and focus – even including focus peaking. Neat!

But the main feature of this app is its “Live Multicam” functionality, which allows you to use up to four iPhones (and/or iPads) to stream live footage and then do a multicam edit on one of the devices. You can change all the settings of the other devices from the “master” device and also press record on that master. The devices then record while streaming proxies, which you can start editing right away as a multicam clip after pressing stop. As you are editing, the ProRes originals are transferred from the other devices in the background and automatically onlined. It will be interesting to see how well this works over Wi-Fi with the large amounts of data ProRes generates, because, as I learned, it will be the quite data-heavy ProRes 422 HQ flavour. In my opinion, ProRes LT would have offered plenty of quality from iPhones/iPads, considering it’s also 10-bit, and it is much less data-heavy. Looking forward to trying this feature out.

Editing video on iPad Pro M4 with FCP 2. Credit: CineD

Conclusion

Apple is clearly targeting content creators with the M4 iPad Pro, Final Cut Pro 2 for iPad, and Final Cut Camera (and also Logic Pro, but that’s not really our topic here at CineD). It will be interesting to see whether this adds up to an efficient desktop-less editing workflow, for both normal edits and live multicam edits, and I look forward to testing it in the near future.

What do you think about editing professionally on an iPad? Please share your thoughts with us in the comment section below.

PRODUCER Software Evaluated in Real World Testing by MXR Productions

Some weeks ago, we had the opportunity to interview Xaver Walser, CEO of PRODUCER – Maker Machina. PRODUCER is an all-in-one production software for managing projects from the initial steps through to delivery. Filmmaker Christoph Tilley and his production company MXR Productions used the app in a real-world scenario to give us their feedback. Well, here goes!

PRODUCER – Maker Machina is a promising all-in-one tool designed mainly for line producers, as well as a collaborative tool for different kinds of projects, from short commercials to music videos. The software aims to end the nightmare of having production data scattered across separate apps by offering a comprehensive interface where everyone can be on the same page, saving time and making production more efficient.

Christoph Tilley’s first impressions

In our first video, Xaver showed us how PRODUCER – Maker Machina works, giving a step-by-step explanation of its different features and what is still under development. By changing the production paradigm in an industry where everything has evolved quickly except this area, the software could become a reference for filmmaking teams.

Thanks to features like automating repetitive tasks, connecting the different parts of a shoot in the same program, and making communication easier, PRODUCER – Maker Machina offers a blueprint of the entire production without having to use external apps, send emails, make redundant phone calls, etc. 

Right now, more than 5,000 filmmakers are testing the app, and filmmaker Christoph Tilley gave us his first impressions in the video above while using the software on a commercial shoot outside the office.

All the stages of the production are organized inside the program. – Source: PRODUCER – Maker Machina.

He likes that the program gives you all the production information in one place, speeding up the process in an industry where speed and efficiency are gold. Running an independent production company, Christoph finds PRODUCER – Maker Machina liberating, as it gives him more space to be creative on set and focus on making movies.

The things he would love to see in the future are a sophisticated budgeting tool and a time-tracking tool, to avoid switching to external apps like Notion. Finally, Christoph recommends that other filmmakers test the tool and see if it fits their workflow. His team found it a helpful, time-saving tool, which, again, is essential in this industry. Even if it just saves an hour per year, PRODUCER is worth it for him.

Price and availability 

You can sign up for PRODUCER in seconds here. The Free plan has no time limit, giving you the best chance to fully explore the tool and unlock its potential. If you sign up before March 31st, you’ll also be able to take advantage of a limited 50%-off deal that is available to Public Beta users only. To claim the offer, follow the instructions here.

What do you think about PRODUCER – Maker Machina? Would you be interested in testing the program? Which features would you like to see in the future? Please let us know your thoughts in the comments below!

PRODUCER – Maker Machina Tested – First Look at the All-In-One Production Software

Let’s be honest: production pipelines can be enormously frustrating at times – especially if you shoot commercials, have tons of client projects, and collaborate with different filmmakers. Ever seen a desperate 1st AD calling every member of a 20-person crew to reschedule the shoot? Or found yourself lost in countless Google documents with shooting plans, Notion boards, and Asana task lists? These are exactly the issues that the new production software PRODUCER – Maker Machina aims to solve. Together with seasoned filmmaker Christoph Tilley from MXR Productions, we decided to take it into the field and give the creators (and you) our honest feedback on what this software can and cannot do.

This is the first in our video interview series, in which Nino Leitner (CineD) sits down with Christoph Tilley (MXR Productions) and Xaver Walser (CEO of PRODUCER – Maker Machina). First, Xaver guides everyone through the online software step by step and presents all the features that are currently available. The filmmakers also discuss the biggest pain points in current production workflows, and why it’s so important to change the existing paradigm and streamline processes.

What is the main goal of PRODUCER – Maker Machina?

The idea for an all-in-one production tool came to Xaver during a commercial shoot for a watch brand. The client asked him if they could have all the created content, shooting schedules, and feedback notes in one place. Regrettably, the filmmaker had to acknowledge that he had never come across such a comprehensive tool. That was the first step toward founding a start-up with precisely this goal: to give creatives an all-in-one application for managing productions from early concept through to delivery. Or, as Xaver Walser nicely puts it: “To make a painkiller for filmmakers.”

Inside the PRODUCER – Maker Machina

According to Xaver, the new software is called “PRODUCER” because the person who will likely use it the most is a line producer. At the same time, it is developed as a collaborative tool adjustable to projects of varying scales, ranging from 30-second commercials to music videos, image films, and corporate films. Support for feature films is planned for a later development stage.

When you select a project from a visual board, as demonstrated above, you will see the entire production process divided into stages we are all familiar with:

Image source: PRODUCER – Maker Machina

This overview helps make sure that no step is overlooked. At the same time, the software keeps everything related to a particular project centralized. In the video interview, Xaver explains in detail how this centralization works. Let’s look at a couple of existing features.

Automating repetitive tasks in PRODUCER – Maker Machina

What PRODUCER – Maker Machina promises to be good at is automating processes and simplifying repetitive tasks. For example, here we have a storyboard section in which you can drag and drop your scribbles, generated pictures, or references from the Internet:

Image source: PRODUCER – Maker Machina

Apart from moving them around until you get the right storyline, you can connect each of your pictures to a shooting day, a location (from the list you created before), and characters. It’s also easy to collaborate with a cinematographer, adding information such as angle, movement, shot size, camera lens, etc.

Image source: PRODUCER – Maker Machina

While you’re going through this process, the software automatically creates a shot list for each day based on the information you’ve provided. Simply make the final adjustments by dragging shots into the desired sequence (e.g., for consecutive shots), include the estimated time, and let the program do the math. Double-check, connect the actors from the list to the character roles, add your crew members for this project – and, wait, what? Has PRODUCER – Maker Machina just generated a correct call sheet?

At first glance, the tool does seem easy to use, fast, and flexible. During the presentation, Christoph Tilley remembered how they had just finished a shooting schedule in a clumsy Google doc, and watching the new software made him jealous. Well, I can only relate.

For easy communication

Extensive communication is another big pain point in our industry that PRODUCER – Maker Machina wants to resolve. The software allows users to add comments to each document and at every production stage. Your clients can also collaborate whenever needed. For example, in the post-production section, it is possible to upload your first cut and share it for quick feedback. Additionally, you can compare different versions of the edit side by side, directly in the software.

Image source: PRODUCER – Maker Machina

Of course, there are enough tools out there that offer us the same in terms of editing and delivery. For instance, a lot of filmmakers use Frame.io to gather feedback. Yet, how many times have your clients lost the link to a rough cut? If they could have everything just in one online tool, wouldn’t it be easier for everyone?

What else to expect?

Of course, an image is worth a thousand words, and our written text can only capture a limited number of features. So, make sure to watch our video above to form your own impression of PRODUCER.

It’s worth mentioning that this is young software and only a starting point. At the moment, around 4,000 people are testing the application and providing feedback. Xaver Walser says they take all input seriously and have a big roadmap ahead. For example, the developers want to add an extensive, structured briefing document and offer the possibility to upload dailies alongside each day’s call sheet, just to give you an idea of the upcoming features.

Price & availability

You can sign up for PRODUCER in seconds here. The Free plan has no time limit, giving you the best chance to fully explore the tool and unlock its potential. If you sign up before March 31st, you’ll also be able to take advantage of a limited 50%-off deal that is available to Public Beta users only. To claim the offer, follow the instructions here.

Stay tuned for our upcoming videos!

For our video series, Christoph Tilley will take PRODUCER on an upcoming commercial shoot, test it thoroughly, and come back with honest feedback on what worked and what can be improved. So stay tuned, and don’t miss our follow-up in a couple of weeks!

What do you think of PRODUCER – Maker Machina? How did you feel about the video presentation? Is it something that you were also looking for production-wise? Are there any features that could be added to the software, in your opinion? Let’s talk in the comments below!

Feature image source: PRODUCER Maker Machina

AI Video Generators Tested – Why They Won’t Replace Us Anytime Soon

The rapid development of generative AI will either excite you or make you a bit uneasy. Either way, there is no point in ignoring it, because humanity has already reached the point of no return. The technical advancements are here and will undoubtedly affect our industry, to say the least. As filmmakers and writers, we take it upon ourselves to responsibly inform you about the actual state of the technology, and how to approach it most ethically and sustainably. With that in mind, we’ve put together an overview of AI video generators to highlight their current capabilities and limitations.

If you’ve been following this topic on our site for a while, you might remember our first piece about Google’s baby steps toward generating moving images from text descriptions. Around a year ago, the company published its promising research papers and examples of the first tests. However, Google’s models were not yet available to the general public. Fast forward to now, and not only has this idea become a reality, but we have a plethora of working AI video generators to choose from.

Well, “working” is probably too strong a word. Let’s give them a try, and talk about how and when it is okay to use them.

AI video generators: market leaders

The first company to roll out an AI model capable of generating and digitally stylizing videos based on text commands was Runway. Since spring 2023, they have launched tool after tool for enhancing clips (AI upscaling, artificial slow motion, one-click background removal, etc.), which has made a lot of VFX processes simpler for independent creators. However, we will review only their flagship product: Gen-2, a deep-learning model that can conjure up videos on request (or at least it tries to).

While Runway indeed still runs the show in video generation, they now have a couple of established competitors. The most well-known one is Pika.

Pika is an idea-to-video platform that utilizes AI. There’s a lot of technical stuff involved, but basically, if you can type it, Pika can turn it into a video.

A description from their website

As the creators of Pika emphasize, their tech team developed and trained their own video model from scratch, and you won’t find it elsewhere on the market. However, they don’t disclose what kind of data it was trained on (and we will get to this question below). Until recently, Pika worked only through the Discord server as a beta test and was completely free of charge. You can still try it out this way (just click on the Discord link above), or head over to their freshly launched upgraded model Pika 1.0 in the web interface.

Both of these companies offer a free basic plan for their products. Runway allows only a limited number of generations to test the platform. In the case of Pika, you get 30 credits (equal to 3 short videos), which refill every day. Also, the generated clips have a baseline length (4 seconds for Runway’s Gen-2, 3 seconds for Pika’s AI) that can be extended a few times. The default resolution also differs: 768 × 448 for Gen-2 versus 1280 × 720 for Pika. However, you can upscale your results either directly in each tool (there are paid plans for that) or by using external AI tools like Topaz Labs.

What about open-source projects?

This past autumn, another big name in the image generation space entered the video terrain. Stability AI launched Stable Video Diffusion (SVD) – their first model that can create videos out of still images. Like their other projects, it is open source, so you can download the code on GitHub, run the model locally, and read all about its technical capabilities in the official research paper. If you want to take a look at it without struggling with AI code, there’s a free online community demo on their HuggingFace space.

For now, SVD consists of two image-to-video models capable of generating videos of 14 and 25 frames at customizable frame rates between 3 and 30 frames per second. According to the creators, external user preference studies showed that Stable Video Diffusion surpasses competing models:

Image source: Stability AI

Well, we’ll see if that evaluation stands the test. At the moment, we can only compare it to the other image-to-video generative tools. Stability AI also plans to roll out a text-to-video model soon, and anyone can sign up for the waitlist here.
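Since SVD can be run locally, here is a minimal sketch of what that can look like using Hugging Face’s diffusers library – the choice of tooling is our assumption (Stability AI’s own GitHub repository ships its own scripts), and the checkpoint name, input image, and GPU requirements below are illustrative rather than a recommendation:

```python
# Minimal image-to-video sketch with Stable Video Diffusion via diffusers.
# Assumes a CUDA GPU with enough VRAM and the torch/diffusers packages installed.
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

# The "-xt" checkpoint is the 25-frame variant; the plain "img2vid" one generates 14 frames.
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.to("cuda")

# Hypothetical input still (e.g. the golden-particles image generated further below).
image = load_image("golden_particles.png").resize((1024, 576))

generator = torch.manual_seed(42)  # fixed seed for reproducible results
frames = pipe(image, decode_chunk_size=8, generator=generator).frames[0]

# The paper allows frame rates between 3 and 30 fps; 7 fps is used here.
export_to_video(frames, "generated.mp4", fps=7)
```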

Generating videos from text – side-by-side comparison

So, let’s get the experiments going, shall we? Here’s my text prompt: “A woman stands by the window and looks at the evening snow falling outside”. The first result comes from Pika’s free beta model, created directly in their Discord channel:

Not so bad for an early research launch, right? The woman cannot be described as realistic, and for some reason, the snow falls everywhere, but I like the overall atmosphere and the lights outside. Let’s compare it to the newer model of Pika. The same text description with a different video result:

Okay, what happened here? This woman with her creepy plastic face terrifies me, to be honest. Also, where did the initial window go? Now, she just stands outside in the snow, and that’s definitely not the generation I asked for. Somehow, I like the previous result better, although it’s from the already obsolete model. We’ll give it another chance later, but now it’s Gen-2’s turn:

Although Gen-2 also didn’t manage to keep the falling snow solely outside the window, we can see how much more cinematic the output feels here. It’s the overall quality of the image, the light cast on the woman’s hair, the depth-of-field, the focus… Of course, this clip is far from spotless, and you would immediately recognize that it was generated by AI. But the difference is huge, and the models will continue learning for sure.

AI models are learning fast, but they also struggle

After running several tests, I can say that video generators struggle a lot. More often than not they produce sloppy results, especially if you want to get some lifelike motion within the frame. In the previous comparison, we established that Runway’s AI generates videos with higher-quality imagery. Well, maybe they just have a better still image generator because I couldn’t get a video of a running fox out of this bugger, no matter how many times I tried:

Surprisingly, Pika’s new AI model came up with a more decent result. Yes, I know the framing is horrible, and the fox looks as if it ran out of a cheap cartoon, but at least it moves its legs!

By the way, this is a good example of how fast AI models learn. Compare the video above (by Pika 1.0) to the one below, which I created with the help of the previous Pika model (in Discord). The text input was the same, but the difference in the generated content is drastic:

Animating images with AI video generators

A slightly better application idea for current video generators, in my opinion, is to let them create or animate landscape shots or abstract images. For instance, here is a picture of random-sized golden particles (sparks of light, magic, or dust – it doesn’t matter) on a black background that Midjourney V6 generated:

Image source: generated with Midjourney V6 for CineD

Each of the AI video generators mentioned in the first part of this review allows uploading a still image and animating it. Some don’t require any additional text input and go ahead on their own. For example, here’s what Runway’s Gen-2 came up with:

What do you think? It might function well as a background filler for credits text, but I find the motion lacks diversity. After playing around, I got a much better result with a special feature called “Motion Brush”. This tool, integrated as a beta test into the AI model, allows users to mark a particular area of their still image and define the exact motion.

Pika’s browser model insisted on an additional text description alongside the uploaded image, so the output didn’t come out as expected:

Regardless of the spontaneous explosions at the end, I don’t like the type of motion or the camera shake. In my vision, the golden particles should float around consistently. Let’s give it another go and try the community demo of Stable Video Diffusion:

Now we’re talking! Of course, this example has only 6fps and the AI model obviously cannot separate the particles from the background, but the overall motion is much closer to what I envisioned. Possibly, after extensive training followed by some more trial and error, SVD will show a satisfactory video result.

Consistency issues and other limitations

Well, after looking at these examples, it’s safe to say that AI video generators haven’t yet reached the point where they can take over our jobs as cinematographers or 2D/3D animators. The frame-to-frame consistency is not there, the results often have a lot of weird artifacts, and the motion of the characters (be it human or animal) does not feel even remotely realistic.

Also, at the moment, the general process requires way too much effort to get a decent generated video that’s close to your initial vision. It seems easier to take a camera and get the shot that you want “the ordinary way”.

At the same time, it is not like AI is going to invent its own ideas or carefully work out the framing that best serves the story. Nor is that something non-filmmakers will be constantly aware of while generating videos. So, I reckon that applying visual storytelling tools and crafting beautiful, evolving cinematography will remain in our human hands.

There are also some other limitations that you should be aware of. For example, Stable Video Diffusion doesn’t allow using its models for commercial purposes. You will face the same restriction with Runway and Pika on their free plans. Once you get a paid subscription, however, Pika will remove its watermark and grant commercial rights.

However, I advise against putting generated videos into ads and films for now. Why? Because there is a huge ethical question behind the use of this generative AI that needs regulatory and attribution solutions first. Nobody knows what data these models were trained on. Most likely, the training data consists of anything that could be found online – including countless pictures, photos, and other works by artists who neither gave their permission nor received any attribution. One of the companies trying to handle this issue differently is Adobe with its AI model Firefly. They also announced video AI tools last spring, but those are still in the making.

In what way can we use them to our advantage?

Some people say that AI-generated content will soon replace stock footage. I doubt it, to be honest, but we’ll see. In my opinion, the best way to use generative AI tools is during preproduction, for instance, to quickly communicate your vision. While text-to-image models are a handy go-to for gathering inspiration and creating artistic mood boards, video generators could become a quick solution for previsualization. If you, like me, normally use your own poor scribbles strung together one after the other to create a story reel, then, well – video generators will be a huge upgrade. They don’t produce perfect results, as we’ve seen above, but that’s more than enough to draft your story and previs it in moving pictures.

Another idea that comes to mind is animating your still images for YouTube channels or presentations. Nowadays, creators tend to add digital zoom-ins or fake pans to make their photos appear more dynamic. With a little help from AI video generators, they will have more exciting options to choose from.

Conclusion

The creators of the text-to-image AI Midjourney have also announced that they are working on a video generator and plan to launch it in a few months. And there will most certainly be more to come this year. So, we can either look the other way and try to ignore it, or we can embrace this advancement and work together on finding ethical applications. Additionally, it’s crucial to educate people that there will soon be an increase in fake content, and that they shouldn’t believe everything they see on the Internet.

What are your thoughts on AI video generators? Any ideas on how to make this technical development a useful tool for filmmaking (instead of only calling AI an enemy that will destroy our industry)? I know that this topic provokes heavy discussions in the comments, so I ask you: please be kind to each other. Let’s make it a constructive exchange of thoughts instead of a fight! Thank you!

Feature image: screenshots from videos, generated by Runway and SVD

Improved Naturalism and Better People Generation in Adobe Firefly Image 2 – A Review of New Features

Time flies, and artificial intelligence models keep getting better. We witnessed this in spring when Midjourney rolled out its updated Version 5, which blew our minds with its unbelievably photorealistic images. As predicted, Adobe didn’t lag behind. Apart from announcing several upcoming AI tools for filmmakers, the company also launched a new model for its text-to-image generator. In Adobe Firefly Image 2, the developers promise better people generation, improved dynamic range, and some new features like negative prompting. After testing it for a while, we’re eager to share our results and thoughts with you.

The new deep-learning model Adobe Firefly Image 2 is now in beta and available for testing. In fact, if you tried out its predecessor, you can use the same link – it will open the updated image generator by default. If not, you can sign up here.

To get the hang of how Firefly works, read our detailed review of the previous AI model. In this article, we will skip the basics and concentrate on what’s new and what has changed specifically in Adobe Firefly Image 2.

Photographic quality and generation capabilities

So, the main question is: can the updated Firefly finally create realistic-looking people? As you probably remember, the previous model struggled with photorealism even when you specifically chose “photo” as your preferred content type. For example, this was the closest I got to natural-looking faces last time:

The Adobe Firefly (Beta) results with the prompt “a girl’s face, covered with beautiful flowers”, using the previous AI model. Image source: created with Adobe Firefly for CineD

Why don’t we take the same text prompt and try it out in the latest Firefly version? I must mention here that if you don’t specify whether you want the AI to generate a “photo” or “art” in the parameters, the artificial brain automatically chooses whatever seems most logical. That’s why the first results with my old prompt were illustrations:

Same prompt, different results. Image source: created with Adobe Firefly Image 2 for CineD

Looks nice and creative, right? However, not what we were going for. So, let’s try again. Here is a grid of four pictures Adobe Firefly Image 2 came up with after I changed the content type to “photo”:

Image source: created with Adobe Firefly Image 2 for CineD

Wow, that’s definitely an improvement! The announcement from the developers stated that the new Firefly model “supports better photographic quality with high-frequency details like skin pores and foliage.” These portrait results definitely prove them right.

The flip side of the coin

However, perfection is a myth. While the previous model couldn’t make a picture look like a photo, this one doesn’t seem to have “an imagination”. If you compare the results above, you will see that Adobe Firefly Image 2 put few to no flowers directly on the faces. Apparently, that would feel too unreal. Yet that was the main idea behind the image I had visualized, so the old neural network was more on point.

If you also want to create something rather dreamy, try playing with your text prompt and settings. For example, I added the word “fantasy” and changed the style to “otherworldly”. Those iterations brought me a slightly better match for my original concept:

Image source: created with Adobe Firefly Image 2 for CineD

What bothers me more, though, is the sudden appearance of a bias problem. Do you notice that almost all these women (even the illustrated ones) have European facial features, green or blue eyes, and long blonde or light-brown hair? Where did the diversity go? The first image generator by Adobe consistently produced all kinds of appearances, races, skin colors, etc. This one, on the contrary, sticks to one category.

Not to mention that experiments with the new settings and features delivered mixed results, and not all of them worked smoothly. For instance, here is Adobe Firefly Image 2’s attempt to picture children playing with a kite on the beach at sunset:

Image source: created with Adobe Firefly Image 2 for CineD

Photorealistic as hell, I know.

Photo settings in Adobe Firefly Image 2

The new photo settings feature sounded very appealing in the press release, especially for content creators and filmmakers who use image generators for, say, making mood boards. It lets you change the key photo parameters we are all familiar with – aperture, shutter speed, and field of view. The last one refers to the lens, which you can now specify by moving a little slider. It is also the only setting that actually had an effect in my experiments. Here you can see a comparison of results with a 300mm vs. a 50mm lens:

At least there’s a subtle change, right? However, I can’t confirm the same for the aperture setting. Even when the description promised “less lens blur”, the results showed a shallow depth of field regardless.

Less lens blur? Don’t think so. Image source: created with Adobe Firefly Image 2 for CineD

So, the idea of manually controlling the camera settings of the image output sounds fantastic, but we’re not there yet. When this feature starts running like clockwork, this will no doubt be reason enough to switch to Adobe Firefly, even from my favorite, Midjourney.

Using image references to match a style

Another new feature Adobe introduced is called “Generative Match”. It allows users to upload a specific reference (or choose one from a preselected list) and transfer its style onto the generated pictures. You will find it in the sidebar along with the other settings:

A screenshot from Adobe Firefly Image 2 interface. Image source: Mascha Deikova/CineD

My idea was to create a fantasy knight in a sci-fi style using the gorgeous lighting and color palette from “Blade Runner 2049”. The first part of this task went quite well:

Image source: created with Adobe Firefly Image 2 for CineD

However, when I tried to upload the Denis Villeneuve film still, Firefly warned me that:

To use this service, you must have the rights to use any third-party images, and your upload history will be stored as thumbnails.

Sounds great, especially because a lot of people forget to attribute the original artists whose pictures they use as references. So, I changed my plan and used a film still from my own sci-fi short instead. Below you see my reference and how Firefly processed it, matching the style of the knight images to its look and feel:

Not bad! Adobe Firefly Image 2 replicated the colors, and you can even see the grain from my original film still. The AI also unexpectedly got rid of the helmet to show my knight’s face. So, it tries to match the style as well as the content of your reference.

Inpainting directly in your results with Adobe Firefly Image 2

Let’s say I like the new colors but don’t want to see the face of the knight from my previous example. Would it be possible to fix it with Adobe’s Generative Fill? Sure, why not, as the AI upgrade allows us to apply the Inpaint function directly to generated images without leaving the browser:

Where to find newly added functions. Image source: Mascha Deikova/CineD

Generative Fill is a convenient tool that lets you use a simple brush (just like in Photoshop Beta) to mask out an area of the image you don’t like. Afterward, you can either insert new elements with a text prompt or click “remove” to let the AI come up with a content-aware fill.

In the process of inpainting. Image source: created with Adobe Firefly Image 2 for CineD

To achieve a better result, I marked a slightly bigger area than I needed (in the first attempts, the helmet came out too small in proportion). Several runs later, Firefly generated a couple of decent results, so this experiment was a success:

Image source: created with Adobe Firefly Image 2 for CineD

You can now also alter your own images in the browser without downloading Photoshop (Beta). You can test the Generative Fill magic yourself here. I played with the removal feature and created a very realistic-looking visualization of flames for an upcoming SFX shot, using a real photo from our location.

Negative prompting

At this point, it only made sense for Adobe to add negative prompting as well. As with other image generators, you can now enter up to 100 words (English only) that you want Adobe Firefly Image 2 to avoid. To do so, click “Advanced Settings” on the right and type in your specific no-go terms, pressing the return key after each one.

The developers recommend using this feature to exclude common glitches and unwanted elements like “text”, “extra limbs”, or “blurry”. I tried it with another concrete example. To start with, I created an illustrated picture of a cat catching fireflies in the moonlight.

Image source: created with Adobe Firefly Image 2 for CineD

The results are lovely, but naturally, the artificial intelligence put the moon in each and every picture. My idea, on the other hand, was only to recreate the soft bluish lighting. That’s why I tried to get rid of the Earth’s natural satellite by adding “moon” to the negative prompting field.

And here are the results. Image source: created with Adobe Firefly Image 2 for CineD

Okay, it only worked in one out of four results, and unfortunately, not in the most appealing one. Still, better than nothing. Hopefully, this feature works better with undesired artifacts like extra fingers or gore.

Attribution notice

When you decide to save your results, Adobe Firefly Image 2 warns you that it will apply Content Credentials to let other people know your picture was generated with AI. In case you missed it, Adobe even created its own symbol for this purpose.

I was happy to hear that because, firstly, this symbol is not a big red “not for commercial use” watermark like the one the previous AI model stamped on every picture. Secondly, it is a big step towards distinguishing real content from generated content. Finally, Adobe’s tool even promises to indicate in the credentials when a generated result uses a reference image.

The only problem is: where is it? Scroll through my article again. At this point, you will find at least five images generated by Firefly and downloaded at their full size (the featured one, for example). Do you see any content credentials? What about a small “Cr” button? Neither do I. So, why announce it whenever you try to save a picture? Is it a bug, or am I just special?

Price and availability

To get full access to all of Adobe’s AI products, you just need any Creative Cloud subscription. The type of subscription determines the number of image generations you can perform. Free users with an Adobe account but no paid software receive 25 credits to test out the AI features, with each credit representing a specific action, such as text-to-image generation. You can read more about the different pricing models here.

Adobe Firefly Image 2 is currently available as a web-based beta, but the developers promise to integrate it into the Creative Cloud apps soon.

Have you tried the upgraded model yet? What do you think about it? Which added functions work well and which don’t, in your opinion? Let’s exchange best practices in the comment section below!

Movavi Video Suite 2024 Available – A Closer Look At a Complete Editing Solution https://www.cined.com/movavi-video-suite-2024-available-a-closer-look-at-a-complete-editing-solution/ https://www.cined.com/movavi-video-suite-2024-available-a-closer-look-at-a-complete-editing-solution/#comments Mon, 23 Oct 2023 15:14:11 +0000 https://www.cined.com/?p=308845 MOVAVI Video Suite is a simple solution that includes all the tools and assets a content creator could need in today’s social media world. This easy-to-use app is available for Mac and Windows, and it could be an excellent entry-level editor for those who want to start creating videos. So, let’s take a look and see what Movavi Video Suite can do!

In a well-established market where three leading video editors dominate the industry (Premiere Pro, Final Cut Pro X, and DaVinci Resolve), software companies are launching new products adapted to our times and to content creators’ latest needs. The new Movavi Video Suite fits that category.

An alternative for non-editors

Times have changed, and the visual formats we used to know are only a fraction of what’s produced and shared today. With smartphones as the primary creation tools and social media platforms as the main distribution channels, the language of filmmaking has evolved. The length, type of content, music, effects, assets used, etc., are only a few of the elements that have to work well for a video to be shared and, therefore, successful.

In this context, anyone can make a video now. You don’t need to be an editor or a filmmaker to film, edit, and publish videos. Content creators, especially those unrelated to the filmmaking industry, need easy tools to do what they intend to do – film with their phone, edit, and upload their creation to share with others. They don’t need complex or expensive gear to publish decent content online. In other words, simpler is better.

This path leads us to the segment of video editing software where Movavi Video Suite fits in perfectly. Programs like DaVinci Resolve or Adobe Premiere Pro can feel overwhelming for beginners or creators who aim for a fast workflow and don’t need all the advanced tools these programs offer.

A simple interface

When opening the program, the first thing we notice is a simple and well-organized interface. Everything is there; we don’t need to open new windows and tabs to figure out how the program works. To avoid confusion, each panel has labels like ‘Drag files here’ or ‘Drag folder here’ in the file import section, or ‘Drop files here’ in the timeline. This hints at the program’s intended users.

Movavi Video Suite’s main window. All clear and organized – Source: MOVAVI

The timeline shows all the tools available without navigating the submenus. It’s all straightforward. We can add tracks, select, cut, add a marker, crop a clip, add transitions, etc., by clicking one of the familiar icons next to the timeline.

We can save our videos in the most popular formats in the Export window with a few clicks. However, the program also includes advanced controls to adjust our final video.

An all-in-one solution

As editors, we know that one of the most disruptive moments in the creative process is when we have to stop, search somewhere else for music, assets, or stock footage, and then go back to editing. With Movavi Video Suite, this is no longer a problem because it includes libraries with music, sound effects, sample videos, intro videos, animations… everything we need to start and finish the editing process without ever leaving the program.

Tools like ‘Record Video’, ‘Capture Screencast’, or ‘Record Audio’ show Movavi’s commitment to ensuring a seamless creator experience from start to finish.

Movavi includes many effects and presets to polish our videos with a click in a ‘drag and drop’ system. Everything is organized by theme to facilitate our search. We also have essential tools like color adjustments, crop and rotate, pan and zoom, stabilization, chroma, background removal, tracker, scene detection, and speed effects. Moreover, we also have the option to go further and fine-tune things in Manual Mode. AI tools like motion tracking, background removal, or noise removal are available in the latest version.

Inside the app, we can find music, effects, intros, and more – Source: MOVAVI

The included stickers, callouts, and frames also fit nicely in the social media video creator world.

Finally, the Effects Store offers different packs, including effects, music, backgrounds, stickers, etc. We can preview and access them inside the app before purchasing.

Users will find funny effects for their creations – Source: MOVAVI

Who is MOVAVI Video Suite for?

When I opened Movavi Video Suite for the first time, I intended to make it work without reading a manual or going to Google for help. I was able to edit and export a complete video, using different tools and applying effects with no problem at all.

Movavi gives the user a quick workflow with all its tools, assets, and effects visible. Of course, we will not find the same capabilities as those in professional NLEs, but it is a complete system for beginners and users looking for an all-in-one solution to create their videos. In that sense, I see it competing with similar video apps like Splice or Apple’s iMovie.

All the assets can be edited and tweaked simply – Credit: Jose Prada/CineD

Price and availability

A free trial version of the video editing software for Mac can be downloaded here.

The MOVAVI Video Suite can be found here. (Currently 20% off until October 29th)

The full version’s annual subscription costs €67,95.

They also have a 55% discount promotion until October 22 for these packs: Video Suite + Photo Editor – €77,95 (annual subscription) and €95,95 (lifetime subscription).

So, what do you think about this alternative to the more established NLEs? Would you give them a chance to create content that needs a quick workflow? Let us know in the comments below!

Midjourney’s Vary Region Feature Challenges Adobe’s Generative Fill – Review https://www.cined.com/midjourney-vary-region-feature-challenges-adobes-generative-fill-review/ https://www.cined.com/midjourney-vary-region-feature-challenges-adobes-generative-fill-review/#comments Thu, 31 Aug 2023 14:37:50 +0000 https://www.cined.com/?p=302688 Long-awaited news for all AI art lovers! Midjourney has recently rolled out a new inpainting function, which allows users to alter selected parts of their image. It is still in the testing phase, but the results are already quite impressive. Some call the update “an answer to Adobe’s Generative Fill”. Others react with an excited “Finally!” We also tried out Midjourney’s Vary Region feature and think it has the potential to support us in various filmmaking tasks. How so? Let’s explore together!

Midjourney is considered one of the best image generators on the market. As the developers belong to an independent research lab, they manage to release new updates and features at breakneck speed. (Just a couple of weeks ago, we were experimenting with the latest Zoom Out function, for example). Users also appreciate the precise language understanding and incredible photorealistic results of this deep-learning model.

Yet, one of the main things Midjourney lacked was the ability to change selected areas of an image. Unlike Stable Diffusion, which has had an inpainting function from the very beginning, or Adobe’s Generative Fill, Midjourney didn’t let users adjust the details of their generated visuals. That was frustrating, but this issue finally won’t be a problem anymore. Well, at least to some extent.

Before we dive into the tests, tips, and tricks for Midjourney’s Vary Region Feature, a heads-up. If you have never used this AI image generator before, please read our article “Creating Artistic Mood Boards for Videos Using AI Tools” first. There you will learn the basics of working with Midjourney’s neural network.

Two ways to use Midjourney’s Vary Region feature

Users can access the new feature through Midjourney’s Discord Bot, as usual. After you generate and upscale an image, the button “Vary (Region)” will appear underneath it.

Location of the new button in the Discord interface. Image credit: Mascha Deikova/CineD

When you click on the button, a new window with an editor will pop up directly from your chat. There, you can choose between a rectangular selection tool or the freehand lasso. Use one or both to select the area of your image that you want to refine.

Selecting the area of the image to refine. Image credit: Mascha Deikova/CineD

Now you have two possibilities. The first one is to click “submit” and let Midjourney regenerate the selected part of the visual. In this case, it will try to correct mistakes within this area and produce a better result based on your original text input. In my example, the neural network created new visualizations of the medieval warrior and matched them to the background.

Regenerating the subject of your image. Image credit: created with Midjourney for CineD

An alternative approach to using Midjourney’s new Vary Region feature is to combine it with the Remix mode. This way, you can completely change the contents of the selected area by writing a new prompt. You might need to enable Remix mode first by typing “/settings” and clicking on “Remix mode”.

Changing the text prompt to refine your image

Once you’ve enabled the Remix mode, an additional text box will appear in the editor, which will allow you to modify the prompt for the selected region. Describe precisely what you want to see in that area. Be specific about the details you’d like to introduce or exclude (a few tips on wording follow below). Don’t worry, the AI will preserve the original aspect ratio of the root image.

Changing your text prompt in the remix mode. Image credit: Mascha Deikova/CineD

As you can see in the screenshot above, I decided to change the entire environment around my warrior, teleporting him from a foggy forest into an abandoned village. The results were unexpectedly good: three out of four image variations matched my description precisely and didn’t contain any weird artifacts. Check them out yourself:

Changing the environment around the character. Image credit: created with Midjourney for CineD

Of course, a single successful test does not set a precedent, and my other experiments turned out less encouraging. However, for a tool that came out only recently and is still in beta testing, the results are impressive.

What’s especially great about Midjourney’s new Vary Region feature is the flexibility it introduces. By upscaling the regenerated images in between, you can improve parts of your image as many times as you need to get the desired result. Let’s say you have a specific shot in mind and you want to convey it to your cinematographer, producer, or production designer. Now it seems possible to really get it from your head onto paper without any drawing skills. While it may involve some trial and error, the potential is there.

Tips for the best image result

As with other neural networks, Midjourney is still learning, so don’t expect wonders from it straight away. To get the best results out of the Vary Region feature, here are some tips you may follow (combining suggestions from the developers with my own observations):

  • This feature works best on large regions of the image. According to the developers, selecting 20% to 50% of the picture’s area will give you the most precise and consistent results.
  • In cases where you decide to alter the prompt, the neural network will provide a better outcome if your new text matches that specific image. For example, Midjourney has no problems adding a hat to a character. Yet, if you ask it to draw an extremely unusual scenario (like an elephant in the room – pardon the pun!), the system might not give you the result you intended.
  • The function also respects some of the usual Midjourney commands and parameters. So, don’t forget about the power of the “--no” parameter, which is used for negative prompts and tells the AI to leave the specified elements out of the image (see the example right after this list).
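
To illustrate the syntax of “--no” (the prompt below is a hypothetical example, not one taken from my tests): if I wanted Midjourney to regenerate the selected area as an abandoned village while keeping fog and horses out of the frame, the remixed prompt could look something like this:

a lone medieval warrior standing in an abandoned village, cinematic lighting --no fog, horses

Several unwanted elements can be listed after “--no”, separated by commas.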

Possible ways to use Midjourney’s Vary Region feature in filmmaking

As you probably know, I love using Midjourney for visual research, creating artistic mood boards, and preparing pitch papers for upcoming projects. The latest update will definitely simplify this conceptual task and be useful in situations when I want to communicate my specific vision. As they say, a picture is worth a thousand words.

Apart from that, you might use Midjourney’s Vary Region function to create a quick previz. The system remembers your initial selection when you return to the image after altering specific parts of it, which allows you to use the tool multiple times to generate diverse scenarios. Accordingly, I was able to place my warrior into different settings and then animate them as match-cuts for a short preview of his hero’s journey. It didn’t take much time, and the video result speaks for itself:

I’m not suggesting that it will always suit the purpose, but for some scenes or sequences such a previz is enough.

Comparing inpainting in Midjourney to Adobe’s Generative Fill

What Midjourney’s Vary Region definitely lacks at this point is the ability to upload your own image (or film still) and then refine parts of it with the help of artificial intelligence. That would allow us to prepare filmed scenes for VFX, or even quickly mask out distracting elements in a shot.

Sounds cool, right? This is already within the capabilities of Adobe’s Generative Fill. Based on Adobe’s own AI model, Firefly, this function is available in Photoshop (Beta). You can install the software and try it out if you have a Creative Cloud subscription. In the following example, I took a film still from my latest short and changed part of the image, just like that. Now the protagonist can enjoy a slightly more appealing dinner:

Generative Fill can also work as an eraser for the designated area. If you don’t type anything in the prompt, it will try to remove the chosen elements using content-aware fill. Midjourney, on the other hand, always tries to put something new into the defined area.

So no, in my opinion, Midjourney is by no means the new Generative Fill. However, it’s developing in this direction, and hopefully, similar functions will be introduced soon. Why “hopefully”? The quality of the pictures created by this artistic image generator is still hard to beat, even with Adobe’s innovations.

Other problems and limitations

We already touched on the issue of the selected area’s size. In one of the tests, I tried to replace only the warrior in the wide shot. Changing the prompt accordingly, I hoped to get a beautiful elven woman in a long green dress, but the results were not promising.

Weird results are also part of the process. Image credit: created with Midjourney for CineD

The only decent picture I managed to generate after a couple of tries was the last one, where the woman stands with her back to the viewer. The others look not only weird but also quite disturbing. All that despite the fact that Midjourney can usually draw absolutely stunning humans and human-like creatures. If we ask it to make variations of the troubled elf from the pictures above, using the same prompt, it instantly comes up with an amazing outcome:

How Midjourney normally renders humans. Image credit: created with Midjourney for CineD

So, hopefully, in upcoming updates the model will continue to learn and eventually apply its new skills to smaller selected parts of the image as well.

Some other limitations and problems I noticed while playing with Midjourney’s Vary Region feature are:

  • The interface of the built-in editor is bulky and awkward to use. Compared to the variety of flexible tools in Photoshop, Midjourney’s lasso will take some getting used to. Yet, it’s remarkable that the developers managed to embed a functioning editor directly into Discord.
  • Additionally, there is no eraser, so you can’t quickly adjust your selection. At the moment, you can only “undo” the steps in which you marked areas of the image.
  • Midjourney’s Vary Region tool is compatible with the following model versions: V5.0, V5.1, V5.2, and Niji 5.
  • Midjourney users in the Discord community have also noticed that after many rounds of regional variations, the whole image gradually becomes darker and darker.

Conclusion

In every article on AI tools, I mention the question of ethics, and I won’t stop doing so. Of course, we don’t want artificial intelligence to take over our jobs, or big production companies to use generated art without proper attribution to the original artists whose works the models were trained on. Yet, tools like Midjourney can also lend a helping hand, supporting us in mundane tasks, enhancing our work, and unleashing new ideas for film, art, and music. A careful and ethical approach is key here. Therefore, it’s important to learn how to use neural networks and keep up with the updates.

So, what do you think about Midjourney’s Vary Region feature? Have you already tried it out? What are some other ways of using it in your film and video projects?

Feature image credit: created with Midjourney for CineD

FUJIFILM XApp Review – Finally A Good Camera Companion App? https://www.cined.com/fujifilm-xapp-review-finally-a-good-camera-companion-app/ https://www.cined.com/fujifilm-xapp-review-finally-a-good-camera-companion-app/#comments Mon, 26 Jun 2023 12:01:20 +0000 https://www.cined.com/?p=293666 At the end of May 2023, together with the latest X-S20 mirrorless camera (see our review), FUJIFILM released a brand-new companion smartphone app called XApp. The new app lets you control some camera functions and transfer media, but it also adds features that make FUJIFILM’s mirrorless cameras more useful as day-to-day companions. Let’s dive in!

With a lousy 1.3-star rating on the iOS App Store, the old FUJIFILM Camera Remote app had a very bad reputation. Unreliable connections to the camera, an outdated user interface, and a lack of support for modern functions and formats were clear signs that an update was desperately needed.

And FUJIFILM gave us a worthy one with the brand-new XApp. This app is not an update to the old Camera Remote app, but rather a new listing in the App Store that lets us start with a clean slate.

New user interface

The XApp features a minimalistic design with a monochromatic color scheme. The user interface looks very clean and is easy to understand and use.

The new user interface is a big upgrade compared to the old Camera Remote app. Image credit: CineD

The main features are laid out clearly as soon as you start the app. You’ll be prompted to grant a bunch of permissions to access your photo library (required for transferring images from your camera to your phone), location (for example for geotagging), and so on.

The user interface is optimized for smartphones and tablets. Image credit: CineD

The XApp also scales well on larger devices like iPads and other tablets. Culling through numerous photos and selecting them for import is particularly enjoyable on a tablet.

Camera connection

Connecting to a FUJIFILM camera couldn’t be easier. Be sure to update your camera to the latest firmware to make it compatible with the new XApp. Getting the camera ready to connect to a smart device was greatly simplified with the latest firmware updates.

Image transfer

As soon as your phone is connected to your camera, you can select the prominent “Image Acquisition / Photography” button to copy content from your camera to your phone.

You get previews of all the content saved on your memory card – but only for photos in the JPEG or HEIF format, since the app doesn’t support RAW photos or video files. RAW photos and videos show a generic placeholder thumbnail without a preview and cannot be transferred.

If you want to work with your RAW photos or video files, I would strongly advise using a cable or card reader to transfer the files from the camera to your tablet or computer. Even if it were possible with the XApp, transferring those large files over WiFi would take a long time.

You get image previews of your JPEG and HEIF photos, but no RAW photo or video support. Image credit: CineD

To transfer images, the smartphone needs to connect to the camera via WiFi. The app simply asks you to join the camera’s WiFi network, and the rest is done for you.

The progress screen while transferring images. Image credit: CineD

HEIF support

For image sharing, the JPEG and new HEIF images with the famous Film Simulations are great. You get high-resolution previews of the images on your phone or tablet, and you can pinch to zoom in to check details and focus.

After selecting the images you’d like to transfer, you can choose whether to transfer the full-size photos or resized versions. Resizing saves space on your phone, but you might also consider using the HEIF format instead: it gives you the same quality at lower file sizes compared to JPEG, so a full-resolution HEIF ends up being about the same size as a downscaled JPEG. iOS and macOS have supported HEIF photos since 2017, starting with iOS 11 and macOS High Sierra, and you can also work with those files on Windows with the help of extensions.

Remote control

Strangely, Remote Control and Camera Control are two separate features that are found in different locations in the XApp.

The Remote Control feature is a simple virtual shutter button that takes a picture (with a shutter hold option) or starts/stops a video.

Remote Control and Camera Remote are two separate user interfaces. Image credit: CineD

When you want to control the camera settings and see a live preview, you press the prominent “Image Acquisition / Photography” button and then switch to the Camera tab at the top. Inside the Camera Control interface, you can switch between Photo and Video mode for different sets of settings.

Adjustment options for Aperture, Exposure Compensation, ISO, Film Simulation, and White Balance in Photo mode. Image credit: CineD

In Photo mode, you get a preview image with touch-to-focus functionality, which works accurately but is relatively slow to react to your touch input. There is also basic status information visible around the preview image, and you can adjust the aperture, exposure compensation, ISO, film simulation, and white balance.

Adjustment options for Shutter Speed, Aperture, Film Simulation, and White Balance in Video mode. Image credit: CineD

In Video mode, you only get the option to adjust shutter speed, aperture, film simulation, and white balance. Strangely, you cannot adjust ISO in video mode. More settings would be nice to have in a future update.

Camera settings Backup/Restore

One convenient feature of the XApp is the Settings Backup/Restore function. If you use multiple camera bodies or rent your camera, you can simply save your camera settings and restore them before you get going again.

Back up settings from the connected camera and restore them from previous backups. Image credit: CineD

Unfortunately, you can only restore settings to the same camera model (from X-H2 to X-H2, for example). I understand that different camera models have different feature sets, but I wish I could at least transfer the settings that both cameras support to a different model.

Timeline & activity

Something unique about the XApp is its Timeline and Activity features, which let you review your activity with your FUJIFILM gear.

Timeline view with the camera and lens used, as well as photo occasions. Image credit: CineD

The Timeline shows you a chronological view of all the times you used your camera and lenses. The app also compiles events from all the images taken on a certain day or in a specific location. You can open a photo event to see more details, like a map with pins showing where the images were taken.

Looking at the timeline details. Image credit: CineD

The Activity feature is a statistical summary of all the metadata in your images. You get to see the total number of images you took, the total video recording time, and how many images you transferred to your phone.

Synchronizing the activity records from your camera to the XApp. Image credit: CineD

There is also a breakdown of the cameras and lenses you used and how many pictures were taken with each film simulation. The same goes for the videos stored on your memory card.

Breakdown of all the metadata from all the captured images in the Activity tab. Image credit: CineD

You are required to create an account using “Continue with Apple/Google/Facebook” in order to use the Activity feature. Maybe at some point this will turn into a “social network” of sorts, where FUJIFILM users have their own public profiles and can choose to share some of this information outside of the XApp.

I don’t see any professional use for these features, but they are a very nice touch for personal enjoyment. These features are definitely geared toward enthusiasts.

Geotagging

Geotagging photos (adding location information to them) with the help of the smartphone app finally works reliably for the first time in a FUJIFILM app. The camera shows a geolocation icon on the screen, which blinks red if no location has been transmitted from the phone recently. Just open the app on your phone, let it connect for a few seconds, and the current location will be embedded in the next photo.

Fine adjustment of the location synchronization interval (from 10 to 480 seconds). Image credit: CineD

In my experience, this worked very well, and I only had to open the app to force a location update to the camera a few times. The location data is recorded to both JPEG/HEIF and RAW photos. Just be aware that more frequent location updates will use more of your phone’s battery, although I never had to stop using the app because I felt it was draining my (iPhone 13 mini) battery too quickly. You can customize the location synchronization interval in the app settings.

What’s missing

I would really love to see an intervalometer option for timelapses in the XApp. A nice user interface for setting up a timelapse and executing it from your smartphone would be very convenient.

Also, a way to adjust more settings in photo and video mode would be welcome if you really want to rig the camera in a hard-to-reach space and would like to control the whole camera remotely.

Let me know in the comments what features you would like to see added to the XApp!

Conclusion

For users who want extended functionality from their FUJIFILM camera in everyday use, the XApp is a very welcome introduction and very good at what it does. I really love how close the HEIF files come to native iPhone photos in terms of metadata. The imported mirrorless photos are even included in iCloud’s photo memories, thanks to geotagging and the phone’s built-in face and animal recognition.

Professional users have to rely on third-party solutions like frame.io and other Camera-to-Cloud providers to wirelessly and safely transfer RAW photos and high-resolution videos.

I hope this app will become even more useful over time, provided FUJIFILM keeps up the Kaizen spirit (continuous improvement of software over time) with the XApp as well.

UPDATE APRIL 2024:
After an update to version 2.1, you can now transfer RAW images from the X-H2S and GFX100 II (as of April 11th) using the XApp. The update also adds the ability to view and transfer images from the GFX100 II while the camera is turned off, which sounds very interesting.

The FUJIFILM XApp is available for iOS (App Store Link) and Android (Google Play Link).

I tested version 1.0.2 of the XApp on an iPhone 12 mini running iOS 16.5.

More information about the FUJIFILM XApp can be found on the FUJIFILM Website.

Do you use a smartphone app as a companion to your camera? If so, do you use it only for fun or also for professional use? What are your experiences with the FUJIFILM Camera Remote or XApp? Let me know what you think in the comments below! I’d love to hear from you!
