#i got used to my actual editing workflow with DaVinci Resolve
Text
Guys this is my personal fucking hell.
No do not ask me why it's the America's Funniest Home Videos theme, my friend joked about it and I said "okay." That's usually how these come to exist.
#idk how people make genuine edits#maxie pokemon#magma leader maxie#i just do silly shit and fuck around with effects to make it stupid#i also havent used capcut in like months#i got used to my actual editing workflow with DaVinci Resolve#my ass forgot how to get silly in capcut bruh#this ones not as funny as one of my other ones tbh bc theres less transtions#the transitions carried the last stupid edit i made#I GUESS hes technically there#pokemon#pokemon oras#nugget's yapping again
Text
Film Production Workflow: How CC4 and iClone Fit
Creating a fully CG film solo is… intense. Every step from animation to rendering has its own landmines. But over time, I've built a workflow that makes the process not just manageable, but actually smooth, and lately, thanks to some major software sponsorships, it just got a huge upgrade.
So I wanted to break down the pipeline I'm using for The Judas Mark, both to document the process and to show how these tools all click together.
Here's the full workflow diagram:
Character Design
This begins in Character Creator 4 (CC4) from Reallusion. It gives me full control over character lookdev, including body proportions, facial structure, and clothing foundation. What's especially powerful here is that I can build characters ready for animation, no need to rig from scratch or jump through technical hoops.
Mocap Data
For motion capture, I'm using QuickMagic, a video-to-mocap software that's surprisingly accurate and very accessible. It allows me to shoot rough footage of movements and quickly convert them into usable mocap data. Facial motion comes later through iClone's AccuFace, which pairs seamlessly with the base body mocap.
The best part? QuickMagic exports directly into iClone, which eliminates so many conversion headaches.
Mocap Cleanup (Full Body + Facial)
Once both body and face animation are combined in iClone, I can polish the performance. This is where Reallusion's real strength comes in: iClone is built for this. I get a clean timeline, retargeting tools, and facial animation editing without having to dive into complex node setups or Python scripts.
Cloth Simulation
After mocap cleanup, the character is exported to Marvelous Designer. This is where the drapery magic happens. The character is imported with the mocap baked in, so I simulate cloth movement naturally based on the animation. This gives everything a sense of physical realism, especially important in a slow, surreal narrative like mine.
Environment and Rendering
Next, the animation and cloth are all pulled into Blender, specifically with Cycles X as my render engine. This is where I build the actual world of The Judas Mark. The environments are heavy, atmospheric, and filled with symbolism, so this phase demands a lot of GPU power and attention to detail.
One key thing: iClone has a direct FBX pipeline to Blender. That means no clunky workarounds, and the animation arrives exactly how I cleaned it up. I save a ton of time and avoid technical errors.
Post Production (Editing and Color Grade)
Finally, everything lands in DaVinci Resolve. Here's where the tone really gets locked in: the pacing, the rhythm, the emotional beats. Resolve handles both my editing and my color work, letting me craft the final feel of the film without switching tools again.
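For my own reference, the handoffs described above can be summarized as an ordered checklist. This is just an illustrative sketch in Python; the stage names and artifacts come from the post itself, and nothing here drives the actual software:

```python
# Each stage hands a specific artifact to the next tool in the chain.
PIPELINE = [
    ("Character design", "Character Creator 4", "animation-ready character"),
    ("Body mocap", "QuickMagic", "mocap data, exported straight into iClone"),
    ("Facial mocap", "iClone AccuFace", "facial animation"),
    ("Mocap cleanup", "iClone", "polished full-body + facial performance"),
    ("Cloth simulation", "Marvelous Designer", "cloth simmed over the baked mocap"),
    ("World + render", "Blender (Cycles X)", "rendered frames, animation in via FBX"),
    ("Edit + grade", "DaVinci Resolve", "final cut and color"),
]

def describe(pipeline):
    """One line per stage, with an arrow to the stage that receives its output."""
    lines = []
    for i, (stage, tool, artifact) in enumerate(pipeline):
        arrow = f" -> {pipeline[i + 1][0]}" if i + 1 < len(pipeline) else ""
        lines.append(f"{stage} ({tool}): {artifact}{arrow}")
    return lines
```

The point of writing it down this way is that each stage has exactly one downstream consumer, which is why broken exports between any two tools would stall the whole production.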
Why This Workflow Works for Me
I'm a solo filmmaker working on a deeply personal story with very little time. So this setup matters a lot. Every software in the pipeline plays a distinct role, and they all speak to each other without me having to constantly fix broken exports or rig things manually.
Reallusion's ecosystem is at the core of this. With their help, I can spend more time refining story, visual language, and emotional pacing instead of getting stuck doing tedious or repetitive tasks. The software suite allows me to operate at a level that just wouldn't be possible otherwise.
Text
Project Improvement #4 (Colour Correction)
As stated in the last blog, the next part of this project was to colour correct the first rough cut of the project. To colour correct, I wanted to explore DaVinci Resolve, which could be considered the industry-standard software for the job. This is software I have never used before, so my use of it is from a beginner's viewpoint. I have, however, completed some colour correction within AVID Media Composer, so I understand that a good image is one which first balances the colour graphs (Histogram, Waveforms, Parade) across RGB. Then, when grading, you have licence to manipulate any colour you wish to present. To colour correct this footage I first had to export my AVID sequence in AAF form. This was another new process for me, so it took some time to get my head around it all. In the end I used a consolidated export instead of a linked export, as I expected to complete my CC offline (away from my editing computer). This became an issue, though, because the device I intended to use didn't have a big enough screen, meaning Resolve was difficult to use. Therefore, I ended up just correcting the offline files on the same machine I was editing on. (This caused more issues, but we will get to them later.)
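As an aside, the idea of balancing channels before grading can be illustrated with a little Python. This is purely my own toy example, not anything Resolve or AVID does internally: average each channel and flag the dominant one, a crude stand-in for eyeballing the RGB parade.

```python
def channel_means(pixels):
    """Average R, G and B over a list of (r, g, b) pixel tuples."""
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) / n for c in range(3))

def colour_cast(pixels, tolerance=10.0):
    """Return 'R', 'G' or 'B' for the dominant channel when the per-channel
    averages spread wider than `tolerance`; None means roughly neutral."""
    means = channel_means(pixels)
    if max(means) - min(means) <= tolerance:
        return None
    return "RGB"[means.index(max(means))]

# A yellow-leaning frame: red and green sit well above blue.
yellowish = [(180, 170, 120)] * 16
neutral = [(128, 128, 128)] * 16
```

Here `colour_cast(yellowish)` flags red as the single highest channel, which matches how a yellow cast reads on the parade: red and green lifted relative to blue.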
This image is roughly what my Resolve project looked like. Unfortunately I forgot to take a screen grab at the time, so had to take one a while after, hence why the media is offline. Imagine that the big image is the one I am working on; the colour wheels, curves and graph relate to the changing of that image. Splitting the two sections are the 69 different shots within the AAF sequence, each of which was to be corrected. On the right of the main image is the 'node path'. I believe this could be called many things, but this is what I call it. The green dot at the beginning is the original image from the AAF; then you can have a number of nodes, which are the locations for all the changes you make, and you can have as many nodes as you want for as many different changes. The last green dot is the final image output. This is a clear pathway for the changes you make. On the left of the main image is a gallery of stills. Once changes have been made and you want to replicate them onto many other shots, you can right click, select Grab Still, and in effect generate a preset, which can then be applied to other shots.
To provide an example of the changes I made to my shots, here we can see an original shot followed by the edited version.
The original shot featured a hint of yellow across the whole frame, which was something I wanted to remove. All changes were made through the RGB colour bars and the curves which I showed earlier. Overall the changes provided a more neutral tone, with the contributor standing out from the background and having more foreground presence. To help the contributor stand out more, the sides of him have a feathered fade: by creating a shape, lowering the light and creating a large feather, it produces a slight vignette effect. The background can still be seen, but it's not visible enough to take the audience's eyes away from what matters. Not only that, I wanted the contributor to look less like a Simpsons character and more human, so the more pinkish skin colour now shines through. When watching on a big screen, however, it could be perceived as a very 'cold' picture, with a hint of blue even though there is no blue actually present. This will be something I change further along the process of this edit; for now it will do.
The next example needs more work, but it details small corrections to the AAF footage. In the original, due to high exposure from the sun, the contributor's face looks slightly washed out, along with many other aspects such as the sky, grass and buildings in the background. The washed-out aspect has been fixed, but it needs more work because the sky has now gone the other way and is a more unnatural blue. Also, the image has now become orange, especially within the face, so there are many changes still to come in the colour correction and grading process.
As I said earlier, the workflow between AVID and DaVinci Resolve was the most problematic process of this stage. The issue was that because I expected to work offline, I needed an offline AAF. Had I been working with linked media I doubt I would have had this issue, but for some reason AVID just couldn't recognise the new AAFs from Resolve. After an hour of trial and error, I ended up just importing the MXFs which were created. Though I finally got the corrected footage into AVID, the clips didn't have handles on either side, so they couldn't be lengthened if needed. Therefore, while editing, whenever I have needed to lengthen a clip, I have had to go back to the RAW footage and splice that into my AVID sequence.
You could argue that correction and grading should be the last stages of the edit. However, I wanted to start this process at this point because I wanted to take a step back from the edit's narrative and story line. By taking this step I felt it definitely made me notice other aspects of the cut I would never have noticed if I just kept watching, re-watching and trying to edit. So breaking up the editing will, in the long term, help produce a more worthwhile story line.
In my next blog post I will be speaking about how I am specifically creating the narrative as the number of rough cuts adds up and I near the end of this project.
Text
DS9 Upscale Project Update: What I've Been Working On
It's been several months since I wrote an update on my ongoing efforts to restore Deep Space Nine. I took a break from the project through much of June due to a move and an associated injury but jumped back into it in July and have been at work steadily since then. The majority of my time has been focused on understanding how shifting the episode into various alternative frame rates would impact motion smoothness and image quality.
In the past, I've written updates when I hit specific milestones I'd set for myself or discovered something I thought was interesting. This is more of a progress report. So, to begin with a bit of recap: I'm a lifelong Deep Space Nine fan who started this project in January and has pursued it since. I've been learning about video processing and encoding from scratch as I've worked, and according to everyone I've talked to, I didn't pick a beginner-level project.
Deep Space Nine is a VFR (Variable Frame Rate) show, which means the DVD alternates between playing back at 23.976 fps and 29.97 fps at various points within the episode. This is a common format for late-1990s science fiction. Shows encoded in this fashion include Babylon 5, Stargate SG-1, Buffy the Vampire Slayer, Star Trek: The Next Generation (DVD-only), and Star Trek: Voyager.
The episode of Deep Space Nine I chose to treat as my test vehicle, "Sacrifice of Angels," is about 14 percent 29.97fps footage and ~86 percent 23.976fps footage. The problem is, applications like AviSynth cannot edit VFR video and must convert it to CFR (Constant Frame Rate). Applications like DaVinci Resolve Studio can technically "handle" VFR files, in that they will ingest them properly, but the resulting output periodically pauses in a way I couldn't find a clean solution for. For now, unless I figure that out, processing the show requires that it be converted to CFR as an initial step.
If you encode a VFR show at 23.976 CFR, the 29.97fps content will be cut to 23.976fps and the playback may not be perfectly smooth. In some cases, you won't see any stutter because there's not enough motion on screen for the frame decimation to be visible. There's a several-minute block of 29.97fps content in "Sacrifice of Angels" when Dukat, Damar, Weyoun, and the female Changeling are all talking at Ops. While there are a few telltale signs, you only really see it when Dukat walks around the table, and this is after both postprocessing and upscale.
youtube
The reason that Dukat's hand and body are blurred as he moves is that, if you go frame-by-frame, what he's doing looks like this:
I boosted the brightness a bit here, to make the shadow easier to see. Most of the frame looks normal, but you can see where Dukatâs hand is going to be in the next frame. The error is visible but small and confined to one part of the screen.
The fact that a lot of TNG-era Star Trek is conversation makes the frame rate shifts that much easier to deal with, but it's still noticeable as heck when it happens. My goal has been to find an automated method of processing DS9 that would typically produce better motion during 29.97fps content sections. I spent the last few months playing with various methods of converting the show's frame rate to see what the options would look like.
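What that frame decimation looks like in principle can be sketched in a few lines of Python. This is an illustrative toy, not the actual encoder logic: 23.976 is four-fifths of 29.97, so a 29.97fps section loses one frame in every five when forced down to 23.976 CFR.

```python
def decimate(frames, cycle=5):
    """Drop the last frame of every group of `cycle` frames.

    29.97 fps * 4/5 = 23.976 fps, so one frame in five disappears. That
    missing frame is what reads as a stutter during motion (or as a blurred
    blend, if the converter averages neighbours instead of dropping outright).
    """
    return [f for i, f in enumerate(frames) if i % cycle != cycle - 1]

video_frames = list(range(10))        # ten frames of 29.97 fps content
film_frames = decimate(video_frames)  # [0, 1, 2, 3, 5, 6, 7, 8]
```

In a static conversation scene the dropped frame is nearly identical to its neighbours, which is why the stutter only shows up when something like Dukat's walk puts real motion on screen.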
Source Sensitivity
The commutative property of multiplication states that when you multiply two numbers together, it doesn't matter what order you write the numbers in. 1 * 2 * 3 * 4 = 24. So does 4 * 3 * 2 * 1. Video processing is not commutative. The order in which you apply filters changes what the final output will be. Video processing workflows need to be duplicated exactly in order to guarantee accurate results, up to and potentially including using the exact same application and filter versions.
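A toy demonstration of the point (my own example, not real video filters): apply the same two operations to the same samples in opposite orders and you get different output.

```python
def clip(signal, lo=0, hi=255):
    """Filter A: clamp every sample into legal range."""
    return [min(max(v, lo), hi) for v in signal]

def gain(signal, factor=2):
    """Filter B: multiply every sample by a constant."""
    return [v * factor for v in signal]

raw = [100, 200, 300]            # one over-range sample
a_then_b = gain(clip(raw))       # clamp first: [200, 400, 510]
b_then_a = clip(gain(raw))       # gain first:  [200, 255, 255]
# Same two filters, same input, different order, different result. Video
# filter chains behave the same way, which is why a workflow has to be
# duplicated exactly to reproduce a result.
```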
There are a few reasons I've been exploring the outcomes for Handbrake and MakeMKV as opposed to using DVD Decrypter to create a VOB copy of the DVD data in 59.94 interlaced format.
First and foremost, I've yet to figure out how to get the video output quality to look anywhere near as good as what I've achieved with HB/MMKV without creating scripts for each episode. In point of fact, I haven't completely figured out the episode scripting, either. This is what I get for taking my last programming course circa "I Want It That Way."
While we're being honest, I want it this way, but we don't get everything we want, do we ViacomCBS? Image from the "What We Left Behind" DS9 documentary, showing what a remastered Defiant would look like.
My own best attempt. Spoiler: the professional one is better.
Second, Handbrake offered some really simple options to batch up and test a huge range of file encode presets. In mid-July, I ripped "Sacrifice of Angels" more than 250 separate times in Handbrake in order to examine the impact of various quality control settings, H.264 flags, frame rates, and deinterlacing options. Third, I finally figured out how to hand StaxRip a set of flags that would synchronize the audio/video playback of a VFR MakeMKV file, and I wanted to experiment with it. Finally, part of learning something is figuring out what not to do. I make a lot of mistakes and I make some of them on purpose, just to see how various ideas change the final output.
I have spent a great deal of time during the last two months playing with various methods of changing frame rates. AviSynth has a number of filters for changing the frame rate, and different source filters yield subtly different outputs. I've experimented with various methods of interpolating up to 119.88fps before trimming back down, either to a compromise frame rate like 59.94 or back to 23.976. I've done a lot of testing combining a pass through DaVinci Resolve Studio with further processing in AviSynth, whether before or after. I've experimented with various H.264 quality levels and specific presets to look at the impact these would have on the areas of troublesome motion in the show. To be honest, I worked out a strategy for what I wanted to encode and allowed the encoding to race ahead of my actual evaluations. I'm still evaluating what I've created. If any of these methods had yielded a single clear winner, I'd have said so, but I've certainly seen some intriguing differences among the data. I've even played with some of the AI-based methods of interpolation to see how they'd compare.
Separately from this, I've experimented with deinterlacing based on 59.94 VOB files. Even with script help from some of the community at Doom9, I haven't found a single, broad, fire-and-forget solution that gave me as clear an image quality as what I've gotten from MakeMKV and Handbrake. Part of the reason I chose to stick with these sources when evaluating motion is that I knew I'd already achieved something reasonably close to what I'd consider final quality. I wanted to hold that set of variables constant and experiment with the methods I'd already worked with, especially since I had trouble achieving the same image quality elsewhere. I'm still hoping to find such a solution, but that's why I chose to focus my time where I did.
The Pros and Cons of 119.88fps
One way of solving the 23.976fps and 29.97fps playback problem is to shift content up to 119.88fps. The problem with 119.88 (well, one of them, because there's not just one problem) is that you've definitionally quintupled your workload. If it takes 15 wall-clock hours of mixed CPU and GPU processing time to upscale an episode of 23.976 DS9, it'll take ~75 hours for 119.88.
That's not great. And to add insult to injury, you need a 120Hz display to watch the output without dropping half the frames.
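Spelling out the arithmetic behind those estimates (the 15-hour figure is the ballpark from above; the series total is just that ballpark extrapolated across all 176 episodes):

```python
BASE_FPS, TARGET_FPS = 23.976, 119.88
HOURS_PER_EPISODE_AT_BASE = 15       # wall-clock estimate from above

frame_multiplier = TARGET_FPS / BASE_FPS                           # 5x as many frames
hours_per_episode = HOURS_PER_EPISODE_AT_BASE * frame_multiplier   # ~75 hours
season_hours = hours_per_episode * 176                             # ~13,200 hours for the series
season_days = season_hours / 24                                    # ~550 days of nonstop processing
```

Roughly a year and a half of around-the-clock compute for the full show, which is why the per-episode multiplier matters so much.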
I'm still messing around with 119.88, because so far I've gotten the best overall results in those troublesome patches at this frame rate, but it's hard to imagine attempting to do the show this way. Ampere would have to be more than 2x faster than the GTX 1080 Ti to make the GPU processing times anything near reasonable.
Alternatively, one can attempt a frame rate between 23.976 and 119.88, and I've been doing some experimenting there as well. These frame rates all require either the film or video portion of the material to shift playback speed by a non-integer multiplier, which means there's always some degree of detectable something. What varies is just what that something is, and how often it pops up. I've also tested the outcomes if you upscale the video first, then process it. The end results are pretty good, but the clock time penalty for processing 2560×1920 clips versus 720×480 clips is larger than the resolution increase alone would suggest.
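For context on that last point, here are the raw pixel counts involved (simple arithmetic, nothing more):

```python
upscaled = 2560 * 1920   # 4,915,200 pixels per frame
dvd = 720 * 480          # 345,600 pixels per frame

# ~14.2x the pixels per frame, before accounting for any filter whose
# cost grows faster than linearly with resolution.
ratio = upscaled / dvd
```

So even a perfectly linear filter chain would be roughly 14x slower on the upscaled clips; in practice the observed penalty is larger still.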
Where This Is Going
My plan is to assemble a set of options that make some reasonable tradeoffs as far as motion smoothness versus processing time versus frame rate, with at least two and possibly three targets. I've also been experimenting with masking and antialiasing lately, including using a version of an episode with fewer aliasing problems as an external antialiasing guide for a version of the same episode optimized for smoother motion. And it works!
…ish.
One of the things I've learned is that when searching for a best-fit line that will safely adjust a television show, you are very lucky if you find a single method that works for an episode. Asking for a method that globally works well for 176 episodes is asking a lot.
Most of the time, what you get is…ish, and some things are a heck of a lot "ishier" than others. The external clip concept is interesting, but after playing around with it for a little while I've got my doubts about whether it can work. There are scenes that it transforms as perfectly as I could ask for, and scenes that, uh, don't.
This is what "Don't" looks like.
I suppose a pertinent question to y'all would be: How much of what doesn't work are you curious to see in the first place? I haven't posted or talked much about failed experiments to date, and the reason this story doesn't have more video is that I'm not sure what people would find interesting in the first place. It doesn't seem all that interesting to just talk about what doesn't work. If you find this sort of work-in-progress more interesting, or if you'd find it more interesting if I gave you more to look at, say so.
Now Read:
Deep Space Nine Upscale Project Season Finale: What We've Brought Ahead
Deep Space Nine Upscale Project (DS9UP): Technical Goals and FAQ
Upscaling Star Trek: Deep Space Nine Using Topaz Video Enhance AI
from ExtremeTech https://www.extremetech.com/extreme/313963-ds9-upscale-project-update-what-ive-been-working-on from Blogger http://componentplanet.blogspot.com/2020/08/ds9-upscale-project-update-what-ive.html
Text
You can watch an archive of the #AskAvid – An Evening With Avid event
On Thursday evening, October 12th, Avid partnered with Keycode Media for a Facebook livestream event called #AskAvid – An Evening With Avid. It was a two-hour presentation with a panel of Avid employees and executives, including Avid President Jeff Rosica, as they sat down with Keycode Director of Technology and 5 THINGS host Michael Kammes to answer questions from Avid users. Michael moderated the event as well as collected questions from Twitter and various other online forums.
Missed @Avid and @KeyCodeMedia's #AskAvid Facebook Live Q&A? Watch the recorded video now at https://t.co/hSTYwevqPx. #postproduction pic.twitter.com/5aHW6dQeVQ
– Avid (@Avid) October 13, 2017
This was a Facebook live event, so you have to go to Facebook to watch it since embedding the stream doesn't seem to be possible. It might be up on other platforms later, but as of now we're stuck with Facebook. The plus side of watching it in Facebook is that you can choose the Realtime Comments option in the Facebook viewer, and that's an interesting experience on its own, as there is a lot of interesting discussion that went on in the comments as well.
Q & A time at Evening with Avid. #AskAvid pic.twitter.com/221jS36t9t
– Greg Lawler (@lawlergj) October 13, 2017
Avid began the event with a technology update and sort of a state of the company discussion before they got to the question and answer session about an hour in. The Q and A is probably the meat of what most editors reading this blog will be interested in (and there is some good meat for us Avid users) but the entire presentation will be of interest to anyone who has their toe squarely in the Avid (and Media Composer) waters. Everyone always wants to know about new features and there is talk of that too.
I was unable to watch the #AskAvid event live as I was enjoying the beautiful city of Chattanooga with my family that evening but I did watch the archive and I had a chat with Michael Kammes the #AskAvid host. Michael is right in the middle of the technology side of editing and post-production (his 5 THINGS series has dug into this question of NLEs more than once) and I think most people appreciate his honesty when it comes to discussing the often controversial topic of NLEs. His thoughts on the event are worth reading:
From Michael Kammes (reprinted with permission): My big takeaway is that there are 2 very different sets of users, and Avid is caught in the middle. Now, that's not news by any stretch, but it was really exemplified last night.
The in-house audience was the higher end of Hollywood Post, from an editor/engineer to facility owner/operator level. There were facilities that have survived on Media Composer, and who rely on MC doing what MC does so they can pay salaries. They wanted the tools they use daily to work better, and for Avid to finish and actually deliver on features Avid has promised to complement current workflows. I hate to use the quote "If I had asked people what they wanted, they would have said faster horses", but it's true: fix the color corrector, etc. etc.
Now donât get me wrong, these are VALID concerns. We all want our jobs to have the tools we need to solve the problems we have TODAY.
However, there is the flip side, which is: "while enhancing the CC is fine and good, what about forward thinking tools for new workflows?" Tools like remote editing paradigms, metadata exchange, VMs, etc.
At one point, Howard Brock asked (and I'm paraphrasing), "By a show of hands, who wants the color corrector fixed over enhancing metadata?" The room was pretty evenly split (although, to be fair, half the people in the room didn't raise their hand for either).
So, Avid is caught in the middle. Continue to serve their bread and butter that is higher-end TV/film: post, sports, news, and broadcast, but also embrace the newer technologies for the next incarnation of media creation. Limited resources mean features have to be cherry-picked to appease both camps.
…and you can't please all of the people all of the time, amirite?
Michael hit the nail on the head and I agree with him 100%. Avid as a company is in a much different position than the other NLE makers. They have to please both their installed, ingrained and highly vocal market while still moving forward in this brave new media world. Someone asking a question said something like this: "you will get yourself fired if you try to cut a TV show in DaVinci Resolve, it's impossible." What should be added to that statement is that you will get yourself fired if you try to cut a TV show in DaVinci Resolve the traditional Hollywood way, as so many of these tools that aren't Avid Media Composer have different ways of doing things that aren't exactly the Avid way. There was a lot of talk of bin locking and Adobe's addition of a bin locking-like feature that has been announced for Adobe Premiere Pro. I would argue that this feature has been added to appease that ingrained Hollywood market, while Adobe would really like collaborative workflows to use some of their other collaboration tools altogether. It's more faster-horse talk. Or to use another tired cliche, Avid doesn't seem to be skating to where the puck is going, at least not from the standpoint of the Media Composer software. Listening to the presentation on the entire Avid platform, they seem to be much more forward thinking.
Whether you agree with Avid's focus (and their roadmap) or not, watching the #AskAvid event feels to me like one of the first times I've heard a more direct vision of what is to come, this "platform" I've heard them talk about before. I've watched a lot of presentations from Avid CEO Louis Hernandez and I don't think I've ever left one of his talks with more answers than questions.
Looking forward to trying the new #4K Title Tool, coming to @MediaComposer. #AskAvid pic.twitter.com/1Cre88wDbp
â Christian Jhonson (@CristianJhonso) October 13, 2017
Another thing that should be noted is the fact that Avid was willing to sit down for this type of event. Even though a lot of their answers were "it's coming" and "we hear you", sitting down for this kind of hammering from your user base takes some guts. You'll never see Apple or Adobe sending out their NLE teams to face the firing squad the way Avid did that Thursday evening.
Regardless of your NLE of choice, if you work deep in the editing and post-production trenches youâll most likely get something out of a viewing of the #AskAvid event. So pull up a chair, grab a beverage and give it a watch.
The post You can watch an archive of the #AskAvid – An Evening With Avid event appeared first on ProVideo Coalition.
Text
Deep Space Nine Upscale Project Season Finale: What We've Brought Ahead
The Deep Space Nine Upscale Project is an unofficial fan effort and not affiliated with Paramount in any way.
After three months of work, the Deep Space Nine Upscale Project (DS9UP) has hit some milestones that I feel comfortable showing off in greater detail. While there's still more to do to bring Deep Space Nine into the modern era, I'm proud of what we've achieved in the past three months.
I've included a broader array of videos in this article. If you've wanted to see more head-to-head comparisons and examples of how different settings play against each other, you'll like what we've got in store for you. Footage sources compared here are:
Original DVD: Not much point to the comparison if I don't show you where we started from. There's a lot I don't like about Deep Space Nine on DVD, but I'd recommend anyone buy them compared to streaming the show. In-article designation: "DVD."
Upscaled MKV: This is the toughest comparison for DS9UP. Topaz Video Enhance AI has a 30-day free trial and a $199 to $299 list price depending on whether it's on sale. The DVD set costs ~$110. If I can't demonstrate a better version of Deep Space Nine than you'd see from upscaling a DVD rip, there's literally no point in what I'm doing. In-article designation: "MKV." This is the original MKV from the DS9 rips I did ~15 years ago.
DS9UP Encode Process: My own work. Because this is an ongoing series of articles and I want to be able to refer to my own encode workflows without confusing people, I've decided to use codenames for the major "releases," as it were. Fortunately, there's a ready supply of thematically appropriate names near to hand. Say hello to Rubicon, a 23.976fps constant frame rate conversion of DS9's original variable frame rate content. It's currently created with Handbrake, StaxRip (AviSynth, not VapourSynth), and DaVinci Resolve Studio. Details on why I'm using Handbrake instead of MakeMKV will be forthcoming in an upcoming article.
Rubicon isn't perfect. It's got a ~100ms audio synchronization bug in a couple of clips that I still need to fix, but I'll iron it out. Besides, it wouldn't be Star Trek if Season 1 didn't end with a few bugs to work out.
Previously, On Deep Space Nine The Deep Space 9 Upscale Project…
I was inspired to undertake this project by the work of CaptRobau, who published the first screenshots and video of what an upscaled Deep Space Nine could look like. I investigated the possibility of upscaling the show myself, but the performance figures were daunting and Gigapixel AI often crashes if you attempt to load too many images at once. The idea of manually converting each episode in batches of 2,500 to 5,000 frames was anything but appealing.
The release of Topaz Labs Video Enhance AI took this idea from pipe dream to possible. While the application is still very new and in need of some bug fixes, its long-term potential is demonstrably tremendous.
Deep Space Nine is, unfortunately, an ideal candidate for this kind of restoration. While Paramount created an absolutely beautiful Star Trek: The Next Generation remaster, the company has claimed the boxed set didn't sell well enough to justify making an effort for the later shows like Deep Space Nine and Voyager. For reference, this sort of image quality is what major streaming providers like Netflix believe is acceptable:
Defending the Alpha Quadrant with the power of moiré!
DS9's lousy streaming quality isn't news to longtime fans of the show. It's just that now, there's something to be done about it. I decided that if Paramount wasn't willing to treat DS9 with the respect it deserved, I'd take on the challenge myself, despite having no previous experience or education in video editing. Rubicon isn't perfect, but I think it represents a significant uplift worth talking about.
Up until now, my clips have either lacked audio or had audio badly out of sync with the video feed. That's no longer the case. So let's get started:
Tips for Best Viewing, Notes on Quality Comparisons
Be advised that the audio on some of the Rubicon clips is off by a fixed ~100ms. This can be easily corrected for in a video player, but I haven't had time to figure out why it happened in the first place.
Always set Netflix to play in 4K or the highest available quality, even if you do not have a 4K monitor. This will improve the quality of the stream regardless. The quality differences between Rubicon, an MKV, and the other versions of Deep Space Nine are less apparent at lower quality levels.
Be aware that the largest gap in quality, at least in my opinion, is between the streamed services and the DVDs. That's not an absolute; there are some scenes where the DVDs are quite poor, but it's a pretty good rule of thumb.
The gap between the upscaled MKV file weâve included in this coverage and what Rubicon can achieve is much smaller than the leap from DVD to upscaled footage. When I wrote my first article on upscaling Deep Space Nine, I said that I felt as if running an MKV through Topaz VEAI got you 75 â 80 percent of the improvement that might be reasonably squeezed out of the DVD source, and that still seems like a pretty fair assessment.
Initial Fleet Flyby
DVD:
youtube
The DVD looks pretty good here, honestly. There's not a lot of great detail on the distant ships, but the nearby Excelsior comes through quite nicely.
MKV:
youtube
The MKV looks really nice here. I don't expect to see the Rubicon encode recover much more detail than we have already.
Rubicon:
youtube
The hitch at the front of the video is because of where I cut the frame. Apart from that, Rubicon and the MKV tie here. Not much difference.
Defiant Conversation
Most of my comparisons have been battle comparisons to date, but not much of Deep Space Nine actually involves combat. This short clip focuses on a range of characters – shows like DS9 have a lot of skin tones, and some AI upscalers handle them oddly.
DVD:
youtube
The DVD is dark – Deep Space Nine feels dark overall – but the detail is pretty solid. Dax's face looks oddly low-detail, though.
MKV:
youtube
The GCG preset brings out some nice detail in the clip, but it also creates an odd aberration on the bulkhead over Sisko's left shoulder at one point. This appears to be an error in the upscaler – the problem isn't present on the regular version of the MKV – but it shows how important it is to keep every bit of detail, since it isn't present in Rubicon, either.
Rubicon:
youtube
I don't think there's all that much difference between the MKV and the Rubicon upscale in this clip, either. That's not to say I can't see a difference – if you pause both videos on exactly the same frame, you can usually find a few details that favor Rubicon, and in a few spots, places that favor the MKV. Ultimately, though, I'm not sure how much of the detail is visible. Rubicon has a ~100ms audio delay in this clip that I didn't notice until it was too late to fix for this article.
First Fleet Engagement
This sequence is one of the great battle shots of Deep Space Nine. The show had been showing us fleet engagements all throughout Season 6, but Sacrifice of Angels was teased hard as something that was going to be extra-special. This isn't the first fleet skirmish – those have been going on for a while by this point in the episode – but this is the largest pure Federation fleet we've ever seen opening up on-screen.
DVD:
youtube
The DVD looks like it was recorded off someone's old VCR tapes. The ships going by in the background look like vague little miniatures. The image is downright ugly and robbed of most of its impact.
MKV:
youtube
The MKV file is a huge improvement over the DVD. The noise is gone and you can get a sense for just how large an engagement this was intended to look like. Tremendous improvement.
Rubicon:
youtube
This is another area where the improvements from Rubicon over the MKV are small, but definite. Watch the leading edge on the saucer of the lead Galaxy-class vessel in the two clips and you'll see that it's blurrier in the MKV. Rubicon is slightly sharper overall, and it's easier to follow the Galaxy-class vessels as they move from the background to the foreground of the video. Until I upscaled this scene, I didn't realize that the Galaxy-class ships moving through it near the end had even been visible in the back of the video. When I said you've never really seen this footage until you upscale it, I really wasn't kidding.
Second Fleet Engagement:
The second, climactic battle of Sacrifice of Angels. I've shown short clips of this fight sequence before, but this is the first time I've shown the whole thing. This was one of the all-time high-water marks for DS9's VFX team.
DVD:
youtube
The DVD is, once again, badly marred by noise. It's difficult to read the hull letters on the Miranda-class ships, and there's a weird aliased grill on the bottom of the Miranda saucer. Several interlaced frames are prominently visible. It's a dismal way to experience such a beautiful set of scenes.
MKV:
youtube
The MKVs, as expected, dramatically clean up the show. Again, I can't argue with anybody who says this footage looks gorgeous in upscale. If it didn't, I wouldn't be writing this article.
Rubicon:
youtube
Rubicon eliminates all but a trace of the aliased pattern on the underside of the Miranda-class hull, smooths out the motion overall, and cleans the noise out of the smoke billowing around on the Defiant's bridge until it just looks like, well, smoke. Detail levels in the Rubicon clip are just a touch higher.
How About Some Footage From a Different Episode? Any Episode?
No matter how bored you are of looking at Sacrifice of Angels, I promise you, you are not as tired of it as I am. I'm literally better at lip-reading than I was before I started this project. With that said? Your wish is my command:
youtube
It was important to me to demonstrate that Rubicon could stand up to footage in other seasons without modification. I used exactly the same process to render "The Die is Cast" that I did for the rest of the show.
Looking Ahead to Season 2
I'm declaring this the end of "Season 1" of DS9UP for several reasons. First, I've got a move coming up, and need to turn my attention towards it. Second, I'm long past due to circle back and talk to some folks who have either wanted to help with this project or are already actively working on efforts of their own. Third, I want to pause long enough to hammer some remaining issues out of my workflow, understand some of the problems I've had over the past few months a bit better, and return to the idea of improving the color balance of Deep Space Nine through some judicious changes.
As things stand, I'll be writing a follow-up article to this one over the next week or so, with some additional examples of alternate workflows and outcomes when using applications like AviSynth. In addition to 5Sharp, I've got a ~48 fps and a ~60 fps version of Deep Space Nine that have their own strengths and weaknesses. We'll also finally be taking a look at Gaia-HQ.
I couldn't have completed the work I've accomplished to date without help from a number of people, including Gary Huff, Mark Renoden, Steve Reeve for some deinterlacing solution ideas, and help from several members of the Doom9 forum. Shortstack, I still hope to chat with you about recoloring ideas. Anybody else who deserves to be on this list, I sweartogod I'll update it as soon as I've slept.
What We Brought Ahead
Deep Space Nine is too good of a show to be left rotting on DVD-era source. For all Paramount's talk about the high costs of remastering, I'd love to see the breakout of recutting all of DS9 and Voyager compared with the cost of a single episode of Discovery. Back in 2017, leaked data showed the budget for Discovery at $8M–$8.5M per episode. Supposedly TNG's episodes cost $70K each to remaster, but let's assume VOY and DS9 are more expensive, at $100K each. The $34.8M it would take to remaster 348 episodes of TV works out to… about 4.5 episodes of Discovery?
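For anyone who wants to check the math above, here's the whole back-of-the-envelope calculation. The $100K-per-episode remaster figure is an assumption (scaled up from TNG's reported ~$70K), and the Discovery range comes from the 2017 leak:

```python
# Remastering cost for DS9 + VOY versus Star Trek: Discovery's per-episode budget.
# $100K/episode is an assumed figure, deliberately higher than TNG's ~$70K.
EPISODES = 348
REMASTER_COST_PER_EP = 100_000
DISCO_LOW, DISCO_HIGH = 8_000_000, 8_500_000  # leaked 2017 budget range

total = EPISODES * REMASTER_COST_PER_EP
print(f"Total remaster cost: ${total / 1e6:.1f}M")  # $34.8M
print(f"Equivalent Discovery episodes: "
      f"{total / DISCO_HIGH:.1f} to {total / DISCO_LOW:.1f}")
```

Which lands right in the ballpark of the "about 4.5 episodes" quoted above.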
youtube
Paramount could build a better version of the show than I could even hope to create – but since they aren't going to bother, I figure I'll keep up my own efforts.
On the night the last episode of Deep Space Nine aired, I carried my IBM K6-233 tower out to the living room, ran a 3.5mm cable from the audio-out port on our VCR into the line-in port on my sound card, and made a recording of the following. It's been one of my favorite moments of the show ever since it aired, and I can't think of a better way to end what I feel has been a very successful "season" thus far than with a little James Darren.
youtube
Rubicon's credits. I'm actually a little more partial to the ones I released back on April 27, but the Defiant's motion is better here.
To the actors, artists, creators, directors, set crew, sound crew, and anyone else I've forgotten: Thanks for creating a show so damn good, people still rally around it 25 years later to see it treated with the respect it deserves.
May the Prophets guide you.
Now Read:
Deep Space Nine Upscale Project Update: "Sacrifice of Angels"
Deep Space Nine Upscale Project Update: Variable Frame Rate DVDs Can Burn in Hell
How to Upscale Video to 4K, 8K, and Beyond
from ExtremeTech https://www.extremetech.com/extreme/310176-deep-space-nine-upscale-project-season-finale-what-we-brought-ahead from Blogger http://componentplanet.blogspot.com/2020/05/deep-space-nine-upscale-project-season.html
Text
CREATING YOUR OWN DCP'S – DAVINCI RESOLVE 14 SETUP
In this article, we're going to talk about DaVinci Resolve when it comes to creating DCPs. What makes Resolve interesting, and different from the other NLEs, is the fact that you can use it not only as the editor for your project, but as the intermediary between your NLE and your DCP creation software. Now, keep in mind that Resolve has EasyDCP functionality built into it, but for the purpose of this article, we're going to assume, much like in the other lessons, that the goal is XYZ color space and creating JPEG2000 sequences to send to your DCP creation application. Let's get rolling!
We're going to use the newest version of DaVinci Resolve for this article. Resolve 14 was released just yesterday as of this writing, so I figured, "Why not? Let's take it for a spin!" and see whether the workflow from all the previous versions of Resolve (which has been the same) carries forward. We're going to get things rolling by talking about bringing your footage in from your other NLEs like Media Composer, Premiere and Final Cut Pro X. Once you've launched Resolve 14 (R14), you'll be brought to the project creation window. Go ahead and create a new project. What I normally do is create one project per DCP.
Once you've created a new project (I've called ours Waterfalls for the sake of this article), R14 will immediately open, and you'll now be sitting at the editing screen.
We've actually gotten a little ahead of ourselves, and I want to jump in first and set up our project for the DCP creation. The first thing we need to decide is Flat or Scope. Now, I'm not going to go too much into Flat and Scope formats, as you can read all about them in a previous article about DCP creation. We're going to assume Flat for this article. Let's hit SHFT+9 on the keyboard, on both Mac and Windows, to call up the Project Properties window. In here is where we're going to set up our final resolution for creating our DCP. Don't worry about the raster dimensions of your clip, as we're going to set that here. We're going to set our timeline resolution at the top to be 1998×1080 DCI Flat 1.85, and we're going to leave our frame rate at 24. Keep in mind that your frame rate might change, based on the frame rate you exported your clip as, but you'll want to keep it as one of the supported DCP frame rates. We're going to leave ours as 24, as we're going to do the 23.98-to-24 frame conversion in R14. Once you've got your project settings the way you like, simply hit "Save" and we're ready to bring some footage into R14.
Alright, so let's back up and bring our footage into R14. Click on the "Media" button at the left side of the toolbar at the bottom of the screen. Once here, you can right click on the Media Pool or Master window to call up the menu, and navigate to the "Import Media" option. Once there, you can import the clip (or clips) you need to create your JPEG 2000 image sequence from. Once you have it in the Media Pool, you can drag the clip directly into your timeline. You'll notice that once you do, R14 will immediately tell you that your clip and the timeline's frame rate/raster dimensions don't match.
Don't change anything. We already have everything set up the way we want in our project settings, so we're going to conform our clip to our project's settings. Once you select "Don't Match", your clip will appear in your timeline. You'll notice that in our case, we're working with 5.1 audio, which will come into play in just a second. Let's talk about the video right now. If you look closely at the left and right sides of the record window, you'll notice that our video is slightly pillarboxed, which we will need to fix. Remember, we're going from 1920×1080 to 1998×1080, so we're actually working with a slightly wider frame, and we need to compensate for that. Let's make that fix right now. If you head over to the Color Tab, we can make the necessary adjustment. Now, don't get panicked when you see the Color Tab, as we're only going to deal with a very, very (and I mean very, very) small portion of it, and if you look towards the center of the screen, you'll notice some buttons that change up the tool(s) that you're working with. The tool we want looks like this.
This button will bring up the Sizing tools, and you can make a very minor "Zoom" scaling adjustment to make sure that your footage fits into the frame correctly, and essentially remove the pillarboxing.
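If you'd like to sanity-check how much "Zoom" the Sizing tools actually need, the arithmetic is straightforward (this is just illustrative math, not anything Resolve itself exposes):

```python
# Scaling a 1920x1080 master to fill a 1998x1080 (DCI Flat) frame.
src_w, src_h = 1920, 1080
dst_w, dst_h = 1998, 1080

zoom = dst_w / src_w                  # ~1.0406, i.e. roughly a 4% zoom
cropped_rows = src_h - dst_h / zoom   # vertical picture lost to the crop
print(f"zoom factor: {zoom:.4f}")
print(f"rows cropped (top + bottom combined): {cropped_rows:.0f}")
```

In other words, a zoom of about 4% fills the Flat frame, at the cost of roughly 42 rows of picture split between the top and bottom edges.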
With all of that being said, we've been talking about working with a single clip that has been brought in from another NLE. If you've decided that R14 is going to be your NLE, you have to make an important decision about the workflow you're going to use. You can use the technique we're using here, which is to edit in the raster dimension and frame rate of your broadcast master, export, and re-import a single clip to work with in your R14 timeline, OR you can just set your project up for a DCP delivery (Flat or Scope), adjust all your clips so they fit inside that frame, and then follow the workflow from the second half of this article. Whichever way you choose to work is up to you, but for me, the best way is to export a master that you can re-import, and then set up your DCP from there. Again, my own personal opinion.
Now that we have our video sized correctly in the Flat frame, let's talk about everyone's favorite topic, and that is the XYZ color conversion. It's something that I've received a few e-mails about, from readers who thought that my tip about adding the XYZ LUT to your timeline, and then adding the XYZ>REC709 LUT to see what the conversion process looked like, was a clever idea. This way you can make necessary alterations to the luma and chroma of your footage, and make sure it's going to look exactly the way you expect it to. Now, something that's important to keep in mind is that R14 ships with the REC709 to XYZ LUT built in, but you'll need to download and install the XYZ to REC709 LUT yourself. To find that LUT, you're going to head over to Michael Phillips' 24p.com, and find this link. You'll notice immediately that you're brought to the blog entry for exactly what we're talking about, and can download not only the XYZ>REC709 LUT, but also the REC709>XYZ LUT, if you happen to need it (which we don't). To install the LUT so it's ready to use in Resolve, you'll navigate to:
Mac: Library>Application Support>BlackMagic Design>DaVinci Resolve>Support>LUT>DCI
Windows: C:/ProgramData/BlackMagic Design/DaVinci Resolve/Support/LUT/DCI
Once you've copied the XYZ>REC709 LUT into this (these) location(s), either launch R14, or if you're already in the application, we're going to need to tell R14 that we've updated the LUTs, and it needs to add this new one to the list so we can add it to our clip. Head to your Project Settings, and to the "Color Management" section, by clicking on it on the left hand side of the window. Once here, simply hit "Update Lists" at the bottom of the window, and you're all set.
This is how you add LUTs to work with them in R14. Once you've done this, we're going to head back to the "Color" module (this is where we did the zoom in on our footage), and we're now going to add both LUTs (XYZ and REC709) to our footage. First things first: since I don't want to alter the main image, in case we want to do some color correction to it, let's add two new nodes to our image. We can do that by simply hitting ALT (Win) or OPT (Mac) on the keyboard, with the main node selected. Your node layout should now look like this:
Now simply hit the same shortcut to add a second node. You can see where I'm going with this. Select the first node you added, labelled in R14 as Node #2, right click on it, and head down to 3D LUT>DCI>REC709 to DCI XYZ (or P3 or sRGB, whichever color space you're working in).
This does the conversion to XYZ color space. Now do the exact same thing to Node 3, except this time, do the conversion back to REC709. Now that you've done this, you've essentially done the color space round trip, and to get an idea of what the "Before" and "After" images look like, you can simply hit OPT/ALT and D to turn the color space nodes on and off, and get an idea of exactly what is going on with the luminance. Now, you can simply make any adjustments you need to Node #1, so that when the conversion happens, the "re-converted" image will better match the original look it would have had if you had left it alone. One thing that is exceptionally important for me to point out: once you have your image color corrected the way you want, taking into account the shift in luminance, you'll want to remove the LUTs from your video, as we will be adding the XYZ LUT back onto our video when we do the export to JPEG 2000 towards the end of the article.
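For the curious, here's a rough sketch of the math a REC709-to-XYZ conversion is performing. This is a simplified model – it assumes a pure-power 2.4 source gamma, the standard Rec.709/D65 RGB-to-XYZ matrix, and DCI's 48/52.37 luminance normalization with a 1/2.6 encoding gamma – so treat it as illustrative only; real LUTs also handle white-point adaptation and gamut mapping:

```python
# Simplified REC709 -> DCI XYZ sketch. Assumptions: power-2.4 display gamma,
# Rec.709/D65 primaries matrix, DCI 48/52.37 normalization, 1/2.6 encode gamma.
GAMMA_709, GAMMA_DCI = 2.4, 2.6
NORM = 48.0 / 52.37  # DCI white luminance over the encoding ceiling

M = [(0.4124, 0.3576, 0.1805),
     (0.2126, 0.7152, 0.0722),
     (0.0193, 0.1192, 0.9505)]

def rec709_to_xyz_dci(r, g, b):
    lin = [c ** GAMMA_709 for c in (r, g, b)]                    # linearize
    xyz = [sum(m * c for m, c in zip(row, lin)) for row in M]    # matrix
    return tuple((max(0.0, v) * NORM) ** (1 / GAMMA_DCI) for v in xyz)

print(rec709_to_xyz_dci(1.0, 1.0, 1.0))
```

Feed it pure white and all three channels land just below 1.0 – that drop is the luminance shift the LUT round-trip makes visible in the viewer.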
Alright, that takes care of my video, now what about my audio? Many people don't bring their audio into R14, as they figure that since it was fine in their NLE, it will be fine here. That, for the most part, is true, but for me, I like to make sure that all my audio lines up and is in sync, and the best way to do that is to bring it around for the R14 process as well. AND, since we are doing a frame rate conversion as well, I like to make sure that my audio follows the same pipeline as my video. Something else that's exceptionally important for me to mention is that Blackmagic Design has drastically changed how Resolve handles audio in this release (Fairlight), so if you're reading this article and using a different version of Resolve, this section might not make a lot of sense, so keep that in mind.
The first thing that I normally like to do with my audio is to "unlink" it from the video, and that can simply be done by right clicking on the audio portion of my clip, and simply hitting "Link Clips". The next thing I like to do is to make sure that all my audio tracks are "Mono". Remember, even though we are talking about a 5.1 audio mix (CH1:L CH2:R CH3:C CH4:Lfe CH5:LS CH6:RS), this is not one 5.1 audio clip. It's six mono channels that make up the six-channel 5.1 mix, so technically, each of these tracks is Mono, and we're going to make sure we set them back to Mono, so when we output, that's what we get. The next thing you'll notice is that beside each of the audio tracks there's a number like 2.0 or 5.1. This represents the type of track your audio is on. For us, 1.0, or Mono, is what we're looking for. Simply right click on the left side of the track itself, select "Change Track Type To", and select "Mono". You'll notice your track ID changes to 1.0. Do this for all the tracks, and then you're ready to export your 5.1 audio as a single audio track containing your surround mix.
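To summarize the track layout described above in one place, here's the six-track mapping as a quick sketch (the channel order is the one from the article; the track numbers are the timeline tracks you just switched to 1.0/Mono):

```python
# Six mono timeline tracks carrying the 5.1 mix, in the article's channel order.
CH_ORDER = ["L", "R", "C", "LFE", "Ls", "Rs"]
for track, channel in enumerate(CH_ORDER, start=1):
    print(f"Timeline track {track} (1.0/Mono) -> {channel}")
```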
EXPORTING
Alright, here's where we get our JPEG2000 image sequence, as well as our 5.1 audio, set up and ready to go. Please keep in mind that I am aware that you can export directly for EasyDCP right from within R14, but I'm not going to cover that in this lesson. We're going to save that for the actual DCP creation article(s). Okay, let's get started. Head over to the "Deliver" tab, and the first thing you're going to want to do is to set an output location for where you want your audio and JPEG2000 image sequence to go.
IMAGE
Once you've done that, we're ready to set up our video, and this is where, in many cases, things fall off the rails. The Format and Codec drop-downs are pretty self-explanatory. We want JPEG2000 as the Format, and you'll notice that as soon as you select it, the Codec immediately changes to 2K DCI. Most people think that's fine and continue on with the output process but, if you happen to actually drop down the Codec menu, you'll immediately notice that there is actually a codec for DCI FLAT, SCOPE and FULL, which means that you do need to set your codec (once you've selected JPEG2000) to match the raster dimension of the project you're working on. In our case it will be DCI FLAT.
Something that I also want to point out is that if you are working from an "editing" master, meaning you've cut all your footage in R14 and are now ready to export, you'll want to select "Single Clip", which is the default export type. You will also notice that once you select JPEG2000 as your export type, it will immediately disable your audio export, which is fine for now. We'll come back and take care of that in a second. Your export will default to the maximum bitrate of 250 Mbit/sec (which is what we want), and your timeline will also have a frame rate of 24 fps. You can ignore the Advanced Settings, as we don't need to set anything in there for the purposes of making our DCP. Okay, let's head to the "File" tab to give our export a name, we'll call it "Main DCP Waterfall Export", and we'll add this to the Render Queue.
Since R14 doesn't like to export its JPEG2000 sequences with audio, we're going to do this as a two-step process. Head back to the video tab, and deselect "Export Video". Now the audio tab becomes active again, and we can select it to set up our audio for export. Again, because we're using R14, this step is a little different than it would have been previously. In the Audio tab, we're going to select WAVE and Linear PCM as our Format and Codec respectively, and we're also going to make sure that we keep our bit depth at 24.
Now here's where things are a little different than in the past. You'll notice that there's an "Output Track" drop-down option below the Bit Depth. By default it's set to "Main 1 (Stereo)". We want to take the tracks that we made Mono in our timeline and export them to one clip that we can then take and import into our DCP creation application. With the layout the way it is now, we'd be exporting one stereo clip. What we want to do is to take that Output Track (1) and assign it to the first audio track in our timeline, by dropping it down to "Timeline Track". You'll notice immediately that R14 defaults to choosing track 1, and it's identifying it as Mono, which is exactly what we want.
Once you've set up all your tracks (one through six), you can add this export to the Render Queue, and you're all set to export both your video and audio for DCP creation.
That's it! Your work in Resolve 14 is done, but before we wrap up, there is something that's important for me to point out. Assuming that you're not using a DCP export option in an application like Media Encoder, Premiere or even Compressor (for all you FCPX editors out there), you are going to need to use an intermediary application like R14 to do your conversion not only to JPEG2000, but also to XYZ color space. To be honest, Resolve is my go-to application for doing this, and I use it on every DCP project that I create. If you're not familiar with working with Resolve, or this write-up seems a little complex, there is a way to do this technique inside of Adobe's After Effects, which we're going to talk about in an upcoming article. For that technique, a third-party plug-in is required. It's old, but it's free and works in both After Effects and in Photoshop as well.
And last, but certainly not least, I've been getting a lot of people commenting on these articles in regards to what I've been saying about taking your DCP to a reputable post facility to check it, and many people have been posting (and sending me e-mails directly) telling me that that is not necessary, and that you can just check it on a computer and you're good to go. Please, don't do that. The whole point of taking it to a facility is to make sure that, not only does it work, but that you can sit down and watch it to make sure it looks and sounds exactly the way that you expect it to. You want to experience your project like the audience will experience it. Taking it to a facility will get you that, plus let you check that the DCP ingests into a playback machine properly, so you have peace of mind knowing that when that DCP is sent out to theatres, it will play back exactly the way you expect it to. In our next article, I'm going to walk through the process for the last of the big NLEs, and that is FCPX, so look for that article in the coming weeks!
The post CREATING YOUR OWN DCP'S – DAVINCI RESOLVE 14 SETUP appeared first on ProVideo Coalition.
First Found At: CREATING YOUR OWN DCP'S – DAVINCI RESOLVE 14 SETUP
Text
CREATING YOUR OWN DCP'S – PREMIERE PRO CC SETUP
I want to keep the DCP topic going by taking a look at the process that we just covered in Media Composer, and discussing how we're going to have a similar workflow inside of Premiere Pro. Many people feel that DCPs are jobs that you need to take to "professional" facilities, and there is an element of truth to that. As I mentioned in my Media Composer article, you can create DCPs on your own, on your own home machine, but you absolutely, 100% need to take your final DCP to a facility that has a DCP playback machine, so you can watch and check your DCP to ensure that when you send it to theatres, via FTP or external hard drive, the DCP will not only play back, but also look and sound exactly the way you think it will. Okay, enough chat, let's get into Premiere and get this process going.
For the purposes of this article, we have to make a few assumptions. First, we're going to assume that the project you're working on is an HD project, as the workflow is much simpler if you're already working in a 2K or higher workflow. Speaking of 2K, we're also going to assume that we are creating the DCP (Digital Cinema Package) in 2K. I know that I'm going to get e-mails from people screaming at me that their production is 4K, so they want to produce a 4K DCP, and that's fine, but keep in mind that I've been doing this a while (a long while), and up until this point, I haven't had a 4K DCP come across my desk. Not once. We're also going to assume that your timeline is 23.976, as it's the most common frame rate to work with, assuming you're not cutting a feature-length film. Now, what IS important to keep in mind is the accepted DCP frame rates, as this will become important in just a second.
2K Scope (2.39:1) Resolution: 2048×858 FPS: 24, 25, 30, 48, 50 and 60fps
2K Flat (1.85:1) Resolution: 1998×1080 FPS: 24, 25, 30, 48, 50 and 60fps
2K Full (1.90:1) Resolution: 2048×1080 FPS: 24, 25, 30, 48, 50 and 60fps
4K Scope (2.39:1) Resolution: 4096×1716 FPS: 24, 25, 30, 48, 50 and 60fps
4K Flat (1.85:1) Resolution: 3996×2160 FPS: 24, 25, 30, 48, 50 and 60fps
4K Full (1.90:1) Resolution: 4096×2160 FPS: 24, 25, 30, 48, 50 and 60fps
You're probably thinking to yourself: why would I want to cut in 23.98 if it's not an acceptable delivery format? Well, what's important to keep in mind is that many DCP creation applications like CineAsset will interpret 23.98 as 24, so you can still encode without having to change your frame rate. Keep in mind, though, that we'll be creating our DCP in EasyDCP, and it requires your footage to be in 24p, but we'll talk about that, and what to do about your 23.98 timeline, in the article about conversions and frame rates. If you want to avoid hassles, you can make sure that your production shoots in 24p, or you can set your Premiere timeline to 24p or convert to 24p when exporting but, again, we'll cover all that later in this lesson, and in our lesson that looks at DaVinci Resolve.
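To make the 23.98-versus-24 issue concrete, here's the arithmetic behind "interpreting" 23.976 fps footage as 24p (my own illustration, not something any of these applications report):

```python
# "Interpreting" 23.976 fps as 24 fps just runs the clock 0.1% fast; the
# audio has to be retimed by the same ratio or it will slowly drift out of sync.
NTSC_RATE = 24000 / 1001          # 23.976...
speedup = 24 / NTSC_RATE          # exactly 1001/1000 = 1.001

runtime_min = 90.0                # a hypothetical 90-minute feature
new_runtime = runtime_min / speedup
print(f"speed-up: {100 * (speedup - 1):.1f}%")
print(f"a 90-minute feature becomes {new_runtime:.2f} minutes "
      f"({(runtime_min - new_runtime) * 60:.1f} seconds shorter)")
```

The shift is tiny, which is why the interpret trick is generally considered safe, but it is real, and it's the reason the audio needs to ride along through the same conversion.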
I'm going to be mentioning some of the information from the Media Composer article here. I'm sure that if you're reading this, you don't care about Media Composer, and won't read an article dedicated to it, but I'm going to abbreviate the information. If you want a full rundown, you can click here to read the article to get more in-depth information.
There's something else that's exceptionally important for me to bring up before we move forward, and that is the process itself. You're going to notice that the technique I'm going to be going through has a lot of "you can do it this way, or that way". Due to the nature of NLEs and workflows, there are about three different workflows you can use for each step of the process. I try to list my recommended workflow, as well as the best alternative workflow. You might have a way that works better, but keep in mind that I've been doing this for a long time, and my methods have been tested to work, so if you stick with my main suggestion in each category, you really can't go wrong.
THE FIRST BIG DECISION
You basically have two aspects to consider when creating your DCP. If you're working with HD footage (16×9) to create your DCP, chances are (about 95%) that you're going to work with a FLAT aspect (1998×1080) for your final DCP, so we're going to go on with that assumption in mind. Next, we're going to need to convert the color space of our timeline to XYZ from REC709, so let's talk a little about that now.
XYZ LUT
The best part about finding an XYZ LUT is that if you have DaVinci Resolve installed, you already have the LUT you need. You can find it at the locations below.
Mac: Library>Application Support>BlackMagic Design>DaVinci Resolve>Support>LUT>DCI
Windows: C:/ProgramData/BlackMagic Design/DaVinci Resolve/Support/LUT/DCI
Now, there's something that I didn't mention in the Media Composer article that I do want to mention here. You're going to want to have an XYZ to REC709 LUT as well, so you can see exactly what is going to happen to your footage when the conversion happens both ways. Remember, the DCP is delivered in XYZ, but when it's converted back to REC709, there is a noticeable difference in the luminance levels, so you're going to want to see that in Premiere, so you can compensate for it before you export your master file. So, where do you find the XYZ to REC709 LUT? You can download it here! http://24p.com/wordpress/?p=340
I want to mention something that's very important. If, when following through with this workflow, you try to import the REC709 to XYZ LUT from the location I mentioned, and you get the below error…
Hereâs what you need to do.
Launch Resolve, and import any clip into a project; it doesn't matter what clip.
Select the Color Module, and make sure you can see the clip in the "Timeline".
Apply the XYZ LUT to the clip.
Right click on the clip and export the LUT to your desktop.
Now the LUT should work correctly in Premiere. Annoying? Yes, it certainly is. Alright, now that we have the LUT issue worked out, let's get into Premiere and get things rolling.
There's a raging debate between editors as to exactly how the next step(s) should go, but I'm a firm believer in my method, so I'm sticking to it. To get started, we need to talk about our LUT, exactly how it's applied to our footage, and what it's supposed to look like.
THE EDIT
Alright, your high-def edit is picture locked, and you have a timeline full of clips: 1920×1080 clips that we're going to need to reformat to 1998×1080 (Flat aspect). The one big advantage that we have by going with a Flat aspect is that, as you can see, the aspect ratio is only slightly wider than the current aspect that you are working in. If your client needed a Scope aspect (2048×858), some pretty substantial work would need to be done to reposition all your clips into a much thinner frame vertically, but we'll get to that in just a bit. To sidestep for just a second, keep in mind that it would be very rare for a client to want a Scope aspect ratio from standard HD footage, unless it was taken into consideration at the time of shooting, and all the footage was matted in camera to take this into account. The normal process when creating a DCP, if you need both Flat and Scope, is to create the Flat version first (assuming your edit was in HD), and then make a pillarboxed Scope version (black bars down the sides) from the Flat master DCP. There's something else that's important for me to mention, and that is that you can choose at what point you want to do the cropping: in your timeline, in your Export Settings in Premiere, or in Media Encoder, whichever you prefer. If you want to do your crop at the export stage, DO NOT ADJUST THE FRAME SIZE OF YOUR SEQUENCE. To begin with, let's take a look at adjusting your timeline in Premiere first, and then in the EXPORTING section of this article, I'll cover how to do this step when you're ready to export.
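To put numbers on why Scope from 16×9 footage is so much more invasive than Flat, here's the crop math for both 2K containers (again, just my illustrative arithmetic, not anything an NLE reports):

```python
# How much of a 1920x1080 frame survives when scaled to fill each 2K container.
src_w, src_h = 1920, 1080
containers = {"Flat": (1998, 1080), "Scope": (2048, 858)}

for name, (w, h) in containers.items():
    zoom = w / src_w
    visible = h / zoom            # source rows that end up inside the frame
    print(f"{name}: zoom {zoom:.3f}, keeps {visible:.0f} of {src_h} rows "
          f"({src_h - visible:.0f} rows cropped)")
```

Flat costs you a sliver of picture; Scope throws away roughly a quarter of the vertical frame, which is why it really needs to be planned for in camera.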
Once you've got picture and audio lock, my first suggestion is to take your audio mix and export it as either a 5.1 surround .WAV file or a two-channel stereo file, depending on how you have it mixed. Next, duplicate your sequence and replace all your multi-track audio with the newly created audio export. The next thing we're going to need is all our footage as "one clip," simply to make our lives easier. The last thing we want to do when creating a Flat DCP from HD footage is to change the project size and then have to go into each clip, one at a time, to adjust its scale properties to match the new frame aspect. Many people assume we're going to export our video, much like our audio, and bring it back in to drop into a new timeline, but we're actually not going to do that. Instead, we're going to take our footage and nest it together as one new "clip." This way, we avoid adding a compression pass and keep the best quality possible. So now our timeline looks something like this.
Now that we've got one video "clip" and one audio clip, we can change our sequence size. Head to the Sequence Settings window and change your frame size to 1998×1080. Premiere will warn you that all your renders will be deleted. Now is a good time to decide if you still want to go this route, or switch over to the export method and export your video, bringing it back in as its own separate clip in its own separate sequence. To be honest, I don't do much rendering in Premiere, so I'm fine with it blasting away my renders. Once you've changed your sequence's frame size, you'll notice that there is a bit of black down the left and right sides of your frame.
This is important to notice, as Premiere won't fix it when the sequence size changes, so we'll need a quick tweak. Hold Command on the Mac (Control on Windows) and double-click the nested sequence to call up the nest in the source monitor, and you can now scale it up roughly four percent (1998 ÷ 1920 ≈ 104.1%) to remove the black bars and make your image full screen. Obviously you're losing a bit of the image at the top and bottom of the frame, but to be honest, there's nothing up or down there that's going to be missed anyway.
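If you want to see where that scale figure comes from, here is the back-of-the-envelope math (my own calculation, not from the original article) for a "scale to fill" from 1920×1080 into the Flat container:

```python
# How much must 1920x1080 footage be scaled to fill a 1998x1080 Flat frame?
src_w, src_h = 1920, 1080
dst_w, dst_h = 1998, 1080

# Scale uniformly until BOTH dimensions cover the target ("Scale To Fill").
scale = max(dst_w / src_w, dst_h / src_h)
scaled_w, scaled_h = src_w * scale, src_h * scale

print(f"scale: {scale * 100:.2f}%")                          # 104.06%
print(f"scaled size: {scaled_w:.0f}x{scaled_h:.1f}")         # 1998x1123.9
print(f"cropped off top+bottom: {scaled_h - dst_h:.1f} px")  # 43.9 px
```

So the nest only needs to grow a little over four percent, and the total vertical crop is under 44 pixels, which is why nothing important falls out of frame.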
So that begs the question: what do you do when the client has asked for a Scope DCP and all your footage is HD? The first thing you're going to do is hope that when they shot the production, everything was matted for the Scope aspect ratio (2.39:1), and chances are about 100% that they did not, so it's going to require some work on your part. First, you can export and re-import your audio as in the step above, but there will be no nesting of your video, as each clip is going to require a reformat. Change your sequence settings to 2048×858 instead of 1998×1080. You'll see that your sequence now looks more like the image below.
Now you're going to need to go through each clip and scale it up so that the horizontal width matches the new frame size, and you'll also need to adjust the "Y" (vertical) position so that what you want centered in the frame actually is centered in the frame. This step right here is a lesson in how a production can screw up royally and have everything fall off the rails if it wasn't thought out properly. You can do the same steps for the Scope version as you did for the Flat version, but you're going to want your timeline matted correctly, so that when you make the scale adjustment everything is centered in the frame; otherwise, be prepared for things to get cut off and all your framing to look wonky.
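To quantify why the Scope reframe is so much more painful than the Flat one (again, my own arithmetic as an illustration), here is the same "scale to fill" calculation for the 2048×858 container:

```python
# Scope version: scale HD to fill 2048 wide, then reposition each clip
# vertically so the subject stays inside the 858-high frame.
src_w, src_h = 1920, 1080
dst_w, dst_h = 2048, 858

scale = dst_w / src_w        # about 1.067, i.e. roughly 106.7%
scaled_h = src_h * scale     # 1152 px tall after scaling
overflow = scaled_h - dst_h  # 294 px have to be cropped vertically

# With the clip centered, half the overflow hides above the frame and half
# below; a per-clip Y offset can shift within +/- overflow/2 before an
# edge of the source image slides into view.
print(f"scale: {scale * 100:.1f}%, vertical overflow: {overflow:.0f} px")
print(f"usable Y repositioning range: +/-{overflow / 2:.0f} px")
```

Nearly 300 pixels of a 1080-line image get cropped, which is exactly why every clip needs its own vertical reposition, and why un-matted productions get burned here.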
Now that our timeline is ready, let's talk about the XYZ LUT and how to apply it to your timeline.
What's important to keep in mind about this step is that it's not necessary to add the XYZ LUT here if you don't want to. I will be doing an article dedicated to working with the LUT and the JPEG 2000 conversion outside of the NLEs (DCP creation applications love the JPEG 2000 format, but it's not essential to create this type of file), so you can just work with the XYZ LUT as a reference if you want.
Create two new video tracks: one for our conversion to the XYZ color space, and one for our conversion back to Rec. 709, so you can see exactly the process your footage will go through. Create a new Adjustment Layer and add it to both V2 and V3 (assuming your footage is on V1). On V2, apply the Lumetri Color effect, and once you've called up the effect in the source monitor, apply the XYZ LUT to your footage via the Basic Correction parameters of the effect. When applied, your footage should look washed out. Check out the example below.
Now, on V3, do the exact same thing, but apply the XYZ-to-Rec709 LUT to see what happens when you convert back. To get the best picture of exactly what's going on, add the original nested sequence to V4, so you can toggle the clip on and off for a before-and-after of your footage. You'll notice that during the conversion process your footage's luminance levels come out slightly darker, so you might want to compensate by adding the Lumetri Color effect to your nested sequence and bringing the levels up a little. In most cases, at this point, I remove the LUT from the timeline, as I normally add it on export; it's a parameter we can save as a preset, and that makes my life a million times easier. Once you've made a few DCPs, you'll know exactly how much you need to bring your video levels up to compensate for the XYZ color conversion, and you can even create a preset to do it for you with one drag of the mouse, so I don't find myself adding those LUTs in my timeline anyway. Okay, let's talk about getting our footage out of Premiere!
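For the curious, here is a rough per-pixel sketch of what a Rec709-to-XYZ conversion LUT is doing under the hood. This is my own simplified illustration, not the actual contents of any shipping LUT: real DCP conversions also handle white-point adaptation and exact DCI normalization, and the gamma values vary between implementations.

```python
# Simplified Rec709 -> DCI XYZ conversion for a single RGB pixel:
# gamma decode, Rec709 RGB-to-XYZ matrix, then DCI 2.6-gamma encode.

REC709_TO_XYZ = [
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
]

def rec709_to_xyz_prime(r, g, b, decode_gamma=2.4, encode_gamma=2.6):
    # 1) Undo the display gamma to get linear light.
    lin = [c ** decode_gamma for c in (r, g, b)]
    # 2) Convert linear RGB to CIE XYZ with the Rec709 matrix.
    xyz = [sum(m * c for m, c in zip(row, lin)) for row in REC709_TO_XYZ]
    # 3) Re-encode with the DCI 2.6 gamma (clamped to 0..1).
    return [min(max(v, 0.0), 1.0) ** (1 / encode_gamma) for v in xyz]

# Mid-grey comes out lifted and slightly tinted, which is the
# "washed out" look you see in the monitor after applying the XYZ LUT.
print([round(v, 3) for v in rec709_to_xyz_prime(0.5, 0.5, 0.5)])
```

The mismatch between the decode and encode gammas is also a hint at why the round trip back through the Rec709 LUT doesn't land exactly where it started, which is the luminance shift you compensate for in Lumetri.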
EXPORTING
It's important that we look at exporting in two different ways, as there will be situations where you'll want to export from Premiere, and times where you won't have Premiere launched and just want to make the files from the desktop. Get to your Export Settings window by hitting CMD/CTRL+M on the keyboard. If you've already set up your video in Premiere to be the correct size (in our case 1998×1080), you can leave the video settings as is. If not, change the Basic Video Settings' Width and Height to 1998×1080. If you look closely, you'll notice that there are black bars down the sides, much like we discussed in your Premiere timeline. We can crop our image by heading to the Output tab at the top left of the Export Settings window and selecting "Scale To Fill" as the option. Now our image fills the frame and is slightly cut off at the top and bottom. The last thing to decide here is whether or not we want to apply our XYZ LUT. If you applied it in your timeline, you can skip this step. If you didn't, head to the "Effects" tab, turn on the Lumetri Look/LUT option, and choose "Select" to navigate to the LUT on your computer.
The final big debatable question is: what video codec should you choose? If you're on a Mac, there's no question: ProRes HQ, as exporting a JPEG 2000 image sequence is not an option, and ProRes is a widely accepted codec for QuickTime files. For Windows users, I would use DNxHR as my export choice, as the HQ flavour has exceptional quality and relatively low file sizes (compared to Animation, which most editors still choose… Lord knows why), and is easily used in most applications. Once you've set this up, you can save it as a preset, so that you can come back to it whenever you need to. I have to say I was pleasantly surprised that Premiere/AME (Adobe Media Encoder) retained the Source Scaling in the preset, as it would have been a huge pain to apply a preset and then adjust it every time you want to process a clip. Keep in mind that the process works more or less the same if you're creating the preset(s) in AME, the big difference being that it's much easier to queue up multiple clips and crank out a ton of DCP files one right after the other!
That's it! If you applied the XYZ LUT, you're now ready to jump into EasyDCP and start creating your DCP. If you didn't, and you now have a master file in the proper Flat or Scope aspect ratio, you can head to the article on Resolve to see how to apply the LUT there, as well as how to do the JPEG 2000 conversion. I stressed this at the start of the article, and I'll stress it here again: if you're having problems with any step of this process, please reach out to a facility in your area for assistance. It might cost you a little bit of money, but it will save you a ton of time and headaches in the end.
The post CREATING YOUR OWN DCP'S – PREMIERE PRO CC SETUP appeared first on ProVideo Coalition.