To generate a new proxy:
1 Make your desired changes to the current clip’s settings.
2 Right-click on the same clip and select “Generate Proxy Media” from the contextual menu.
A new proxy file is created in the same directory as the previously linked proxy file, and its file name is
appended with “_s00x” to differentiate it. The latest proxy generated is automatically linked to the
source file, but previous proxy versions are retained on disk, so you can then manually relink the
different versions as needed.
Switching Between Proxy Media and Source Media
You can switch between using your original source media and the proxy media for playback at any
time by checking or unchecking Playback > Use Proxy Media If Available in the Menu bar.
Using Proxy Files for Delivery
By default, the Deliver page always reverts proxies to the original source media for final output to
ensure the highest quality render. Checking the “Use proxy media” box in the Advanced Settings of
the Video Render settings in the Deliver page overrides this so DaVinci Resolve uses proxy media for
final output instead. This can be useful if you need to save rendering time while making dailies, or to
quickly create outputs of your timeline for producers or audio engineers where master quality is not
necessarily needed. You will also need to check the “Use proxy media” box if you are editing with
proxies and do not have access to the original source media.
Moving Proxies Using a DaVinci Resolve Archive (.dra)
When moving proxies from one DaVinci Resolve system to another, it can be time consuming and
problematic to manually copy many individual assets (proxies, graphics, source files, etc.) from different
folders and locations. By far the easiest way to move complete projects from system to system is by
letting DaVinci Resolve do all that file management for you, by creating a DaVinci Resolve Archive
(.dra). An archive file contains not only your project, but all its media as well, maintaining the file paths
and organization of the original project.
To create a DaVinci Resolve Archive file, right-click on any project in the Project Manager, and choose
“Export Project Archive” from the contextual menu. The Archive Options dialog that appears includes a
new Proxy Media setting that makes working with proxies simple and elegant.
Creating a Proxy-Only Archive to Share
In the Archive Options dialog, if you check Proxy Media, and uncheck Media Files and Render Cache,
DaVinci Resolve will make an Archive using only the proxy media. This allows you to create a compact
and easily transported version of your project to either move to another computer, or to give to an
editor working remotely. If proxy media is not available for a clip (say a graphic or a media file you
didn’t create a proxy for in the first place), the original media is automatically exported to ensure that
nothing goes offline.
Archive Setting options for exporting only Proxy Media
The resulting .dra is a folder that is a fully self-contained version of your project and proxy media.
This folder can easily be moved from drive to drive, or zipped up and sent across the internet.
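If you create these archives often, the export can also be scripted. The sketch below is only a rough illustration: it assumes DaVinci Resolve’s Python scripting API and the ProjectManager.ArchiveProject() call described in recent scripting documentation (the exact signature may differ between versions, and older builds may lack it); the project name and destination path are hypothetical.

```python
import DaVinciResolveScript as dvr  # bundled with DaVinci Resolve Studio

resolve = dvr.scriptapp("Resolve")
project_manager = resolve.GetProjectManager()

# Export a proxy-only archive. The three booleans correspond to the
# Media Files, Render Cache, and Proxy Media checkboxes described above.
ok = project_manager.ArchiveProject(
    "My Project",                        # hypothetical project name
    "/Volumes/Transfer/My Project.dra",  # hypothetical destination path
    False,                               # isArchiveSrcMedia: skip original media
    False,                               # isArchiveRenderCache: skip render cache
    True,                                # isArchiveProxyMedia: include proxies
)
print("Archive exported" if ok else "Archive export failed")
```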
Working Remotely Using Proxy Media
The proxy workflow in DaVinci Resolve opens up many new possibilities for editing collaboration and
media management. For example, one common workflow is to use the RAW camera master source
clips in the editing suite but to then generate low resolution proxies to take home to edit on a laptop.
To create a portable set of proxies for editing on a laptop:
1 Set up the Resolution and Format settings for the proxies in the Project Settings. In this case, you
may want to use “Choose Automatically” and a low-bandwidth, easily editable codec like ProRes
LT or DNxHR LB.
2 Select all source media in the Media Pool and Generate Proxy.
3 Export a DaVinci Resolve Archive (.dra) onto an external drive, with only Proxy Media checked.
4 Go home. Once there, connect that drive to your laptop, and use the Restore Project Archive
command in the Project Manager to import the archive.
5 When you’ve finished working at home, export a timeline, bin, or project from your laptop, and
bring just that file back into the edit suite to continue working with the original
source media.
Another common scenario might involve sending proxies over the internet to an editor in another city
or country.
To send a project to another editor over the internet:
1 Set up the Resolution and Format settings for the proxies in the Project Settings. In this case,
you may want to use a low resolution like “quarter” or “one-eighth,” and a low-bandwidth, highly
compressed codec like H.265 for the smallest file sizes possible.
2 Select all of the source media in the Media Pool and Generate Proxy.
3 Export a DaVinci Resolve Archive (.dra), with only Proxy Media checked.
4 Using the file compression tools in your OS (or a small script like the sketch following this list),
zip the archive folder so it becomes one large file.
5 Upload the resulting .zip to the online file sharing service you prefer, and send the download link
to the remote editor.
6 Once the other editor unzips and imports the archive, you and they can then simply send
timelines, bins, and/or project files back and forth to collaborate. These files are small enough to
transfer over email or an instant messaging service.
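If you prefer to script step 4, Python’s standard library can create the .zip in one call. This is just a convenience sketch with hypothetical paths; uploading the result (step 5) is still left to whichever file sharing service you use.

```python
import os
import shutil

archive_folder = "/Volumes/Transfer/My Project.dra"  # hypothetical exported archive

# Zip the .dra folder itself (not just its contents) into a single file for upload.
zip_path = shutil.make_archive(
    base_name="/Volumes/Transfer/My Project proxies",  # ".zip" is appended automatically
    format="zip",
    root_dir=os.path.dirname(archive_folder),          # parent folder of the archive
    base_dir=os.path.basename(archive_folder),         # the .dra folder to include
)
print("Created", zip_path)
```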
Additionally, you may have your editing computer connected via Ethernet to a Media Asset
Management system that can create its own proxies. In order to edit smoothly over the network, you
need to use low-bandwidth proxies instead of the source media.
To create proxy media externally to edit over a local network:
1 Import the original source media files to your Media Pool from the network storage system
you’re using.
2 Set up the proxy generation settings in your Media Asset Management software to accommodate
the amount of network bandwidth you expect to have access to.
3 Make sure the timecode and frame rate of the proxies match the original source media, and render
the proxies to a network location.
4 Select all of your original source media in the Media Pool, and choose “Link Proxy Media.”
5 Choose the proxy media at the network location where they’ve been rendered.
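Facilities that automate this kind of relinking can also do it through DaVinci Resolve’s Python scripting API. The sketch below is illustrative only: it assumes the MediaPoolItem.LinkProxyMedia() call described in recent scripting documentation (check the scripting README for your release), a hypothetical proxy folder on the network, and a naming convention in which each proxy shares its source clip’s base name. It also only walks the root bin of the Media Pool.

```python
import os
import DaVinciResolveScript as dvr  # bundled with DaVinci Resolve Studio

PROXY_DIR = "/Volumes/SAN/Proxies"  # hypothetical network folder of rendered proxies

resolve = dvr.scriptapp("Resolve")
project = resolve.GetProjectManager().GetCurrentProject()
root_bin = project.GetMediaPool().GetRootFolder()

for clip in root_bin.GetClipList():
    # Assumes proxies use the same base name as the source, e.g. A001_C002.mxf -> A001_C002.mov
    base_name = os.path.splitext(clip.GetName())[0]
    proxy_path = os.path.join(PROXY_DIR, base_name + ".mov")
    if os.path.exists(proxy_path):
        clip.LinkProxyMedia(proxy_path)  # available in recent releases only
```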
Proxy Media vs. Other Playback
Optimizations in DaVinci Resolve
There continue to be other methods of optimizing real time performance in DaVinci Resolve, so it’s
natural to wonder how proxy media differs from Optimized Media, Timeline Proxy Mode, and the
other performance optimization techniques available in DaVinci Resolve. The key aspect that
differentiates proxy media is that it is independent, portable, and can be created by
applications outside of DaVinci Resolve, if desired.
Proxy Media vs. Timeline Proxy Mode
One of the oldest performance optimization options, originally named “Proxy Mode” in previous
versions of DaVinci Resolve, has been renamed “Timeline Proxy Mode” in DaVinci Resolve 17 to
differentiate it from Proxy Media. While the new Proxy Media feature creates actual media files on disk,
“Timeline Proxy Mode” simply reduces the resolution of the timeline on-the-fly, allowing for increased
real time playback performance. To be clear, Proxy Media and Timeline Proxy Mode are two entirely
different features, which are wholly independent of one another.
Proxy Media vs. the Render Cache
Proxy Media is designed to create easy-to-edit primary source material on the Timeline, for improved
performance before you start editing. The Render Cache is designed to improve the real time
performance of clips that have enough computationally intensive effects (such as Resolve FX, color
corrections, noise reduction, compound clips, Fusion compositions, etc.) to slow playback, even at the
current Timeline resolution. Proxy Media is independent and portable (you can move clips wherever
you want; you just have to relink them afterward), while the Render Cache media is not designed to be
moved or interacted with externally and only works with the project it was made for.
Proxy Media vs. Optimized Media
On the surface, Proxy Media and Optimized Media appear similar in function. Both options are
designed to create lower bandwidth, easier to edit versions of source media. However, Optimized
Media is managed internally by DaVinci Resolve, cannot be exported, and is not user accessible. In
contrast, Proxy Media creates fully portable and independent media that can be easily managed
by the user.
Using Optimized Media, Proxy Media,
and Caching Together
How you use DaVinci Resolve’s various performance-enhancing features together is entirely up to you,
but you should know that they’re not an either/or proposition. For example, you can create optimized
media from the camera raw original clips in your project, then enable Timeline Proxy Mode playback to
enhance the performance of your 4K timeline, and turn on Smart Cache to speed up your work in the
Color page as you add Fusion effects, noise reduction, and Resolve FX or OFX to every clip. All of
these optimization methods work happily and seamlessly together to improve your performance while
keeping the image quality of your project as high as the Optimized, Proxy, and Cache formats you’ve
selected in the Master Settings panel of the Project Settings.
Which Playback Optimization Method
Should I Use?
DaVinci Resolve’s various playback optimization features are designed to increase
performance to make up for hardware, storage, and bandwidth limitations, but knowing when to use
each method is essential to getting the results you expect. A quick reference is included below.
– Timeline Proxy Mode: My timeline is playing back, just a little bit too slowly.
– Cache Clip: I need help playing back a few clips in real time that have heavy effects applied.
– Optimized Media: I need help playing back all my source media in real time, and I will only be
editing on this computer.
– Proxy Media: I need help playing back all my source media in real time, and I need to collaborate
and share this media with other users, programs, or outside storage locations.
Other Project Settings
That Improve Performance
In addition to working with proxies, using reduced raw decoding quality, generating optimized media,
and enabling the Smart and User caches, there are five additional options in the Project Settings
window and two settings in the Playback Settings panel of the User Preferences that you can use to further
improve real time performance if you’re working on an underpowered computer, at the expense of
lower image quality while you work. These settings can then be changed back to higher quality modes
prior to rendering.
– Set timeline resolution to: (Master Project Settings, Timeline Format) DaVinci Resolve is resolution
independent, so you can change the resolution at any time and all windows, tracks, sizing
changes, and keyframe data will be automatically recalculated to fit the new size. Lowering the
Timeline resolution while you’re grading will improve real time performance by reducing the
amount of data being processed, but you’ll want to increase Timeline resolution to the desired size
prior to rendering. This is effectively the same as using Timeline Proxy Mode, but you get to choose
exactly what resolution you want to work at (a scripting sketch for switching this setting appears
after this list).
– Enable video field processing: (Master Project Settings, Timeline Format) You can leave this
option turned off even if you’re working on interlaced material to improve real time performance.
When you’re finished, you can turn this setting back on prior to rendering. However, whether or
not it’s necessary to turn field processing on depends on what kinds of corrections you’re making.
If you’re applying any filtering or sizing operations such as blur, sharpen, pan, tilt, zoom, or rotate,
then field processing should be on for rendering. If you’re only applying adjustments to color and
contrast, field processing is not necessary.
– Video bit depth: (Master Project Settings, Video Monitoring) Monitoring at 8-bit improves real time
performance, at the expense of possibly introducing banding to the monitored image.
– Monitor scaling: (Master Project Settings, Video Monitoring) Lets you choose which transform
filter to use when scaling video to fit into the Video format resolution you’ve specified. Options are
Bilinear and Basic.
– Resize Filter: (Image Scaling) A drop-down menu that lets you choose an alternate image
transform filter (such as Bilinear) that is lower quality but less processor intensive. However, a “Force
sizing highest quality” checkbox in the Render Settings list of the Deliver page helps make sure you
don’t accidentally render your final media at this lower quality setting.
– Hide UI overlays: (User Preferences, Playback Settings) Off by default. When using a single GPU
for both display and CUDA or OpenCL processing, or if your display GPU is underpowered, or
if you lack the PCIe bandwidth required for the currently specified resolution or frame rate, you
may be able to improve real time performance by turning this option on. When enabled, onscreen
controls such as the cursor, Power Window outlines, and split-screen views are disabled and
hidden during playback. When playback is paused, all onscreen controls reappear.
– Minimize interface updates during playback: (User Preferences, Playback Settings) On by default.
While enabled, this setting improves real time performance by hiding on-screen controls that
appear in the Viewer, such as the cursor, Power Window outlines, and split-screen views during
playback. When playback is stopped, onscreen controls reappear.
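As noted in the first bullet above, lowering and later restoring the timeline resolution can also be scripted. This is a rough sketch using the scripting API’s GetSetting()/SetSetting() calls; the “timelineResolutionWidth” and “timelineResolutionHeight” keys are the names listed in recent scripting documentation, so verify them against your Resolve version before relying on them.

```python
import DaVinciResolveScript as dvr  # bundled with DaVinci Resolve Studio

resolve = dvr.scriptapp("Resolve")
project = resolve.GetProjectManager().GetCurrentProject()

# Remember the current timeline resolution so it can be restored before rendering.
original_width = project.GetSetting("timelineResolutionWidth")
original_height = project.GetSetting("timelineResolutionHeight")

# Work at a lower resolution while editing and grading...
project.SetSetting("timelineResolutionWidth", "1920")
project.SetSetting("timelineResolutionHeight", "1080")

# ...then restore the full mastering resolution prior to rendering.
project.SetSetting("timelineResolutionWidth", original_width)
project.SetSetting("timelineResolutionHeight", original_height)
```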
Chapter 9
Data Levels,
Color Management,
and ACES
This chapter covers operational details that affect how color is managed for media
that is imported into and exported from DaVinci Resolve. If color accuracy is important
to you, then it’s a good idea to learn more about how Resolve handles the data levels
of each clip, how DaVinci Resolve Color Management helps you to work with different
formats, and how to use ACES.
Contents
Data Levels Settings and Conversions
Converting Between Ranges and Clipping
Internal Image Processing and Clip Data Levels
Assigning Clip Levels in the Media Pool
Video Monitoring Data Levels
Deck Capture and Playback Data Level
Output Data Level Settings in the Deliver Page
So, What’s the “Proper” Data Range for Output?
Introduction to DaVinci Resolve Color Management
Display Referred vs. Scene Referred Color Management
Updated RCM In DaVinci Resolve 17
Resolve Color Management for Editors
The Input, Timeline, and Output Color Space
The RCM Image Processing Pipeline
Identifying the Input Color Space of Different Clips
Simple RCM Setup
Automatic Color Management
Resolve Color Management Presets
Output Color Space
Advanced RCM Setup
Single Setting vs. Dual Setting RCM
Setting the Input Color Space
Choosing a Timeline Color Space
Timeline Working Luminance
Nit Support for SDR to HDR
Gamut Limiting, Restricting Values Within a Larger Gamut
Input DRT Tone Mapping
Output DRT Tone Mapping
Use Inverse DRT for SDR to HDR Conversion
Use White Point Adaptation
Color Space Aware Grading Tools
Apply Resize Transformations In
Graphics White Level
Display HDR On Viewers If Available
HDR Mastering Is For (Studio Version Only)
Resolve Color Management and the Fusion Page
Ability to Bypass Color Management Per Clip
Exporting Color Space Information to QuickTime Files
Color Management Using ACES
Setting Up ACES in the Project Settings Window
The Timeline Color Space in ACES Workflows is Fixed
Tips for Rendering Out of an ACES Project
Data Levels Settings and Conversions
Different media formats use different ranges of values to represent image data. Since these data
formats often correspond to different output workflows (cinema vs. broadcast), it helps to know where
your project’s media files are coming from, and where they’re going, in order to define the various data
range settings in DaVinci Resolve and preserve your program’s data integrity.
To generalize, with 10-bit image values (with a numeric range of 0–1023), there are two different data
levels (or ranges) that can be used to store image data when writing to media file formats such as
QuickTime, MXF, or DPX. These ranges are:
– Video: Typically used by Y’CbCr video data. All image data from 0 to 100 percent must fit into the
numeric range of 64–940. Specifically, the Y’ component’s range is 64–940, while the numeric
range of the Cb and Cr components is 64–960. The lower range of 4–63 is reserved for “blacker-
than-black,” and the higher ranges of 941/961–1019 are reserved for “super-white.” These “out of
bounds” ranges are recorded in source media as undershoots and overshoots, but they’re not
acceptable for broadcast output.
– Full: Typical for RGB 444 data acquired from digital cinema cameras, or film scanned to DPX
image sequences. All image data from 0 to 100 percent is simply fit into the full numeric range
of 4 to 1023.
Keep in mind that every digital image, no matter what its format, has absolute minimum and maximum
levels, referred to in this section as 0–100 percent. Whenever media using one data range is
converted into another data range, each color component’s minimum and maximum data levels are
remapped so that the old minimum value is scaled to the new data level minimum, and the old
maximum value is scaled to the new data level maximum:
– (minimum Video Level) 64 = 4 (Data Level minimum)
– (maximum Video Level) 940 or 960 = 1023 (Data Level maximum)
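To make the remapping above concrete, here is a minimal arithmetic sketch of a 10-bit video-to-full range conversion. It is purely illustrative; internally DaVinci Resolve performs these conversions as 32-bit floating point processing rather than integer math.

```python
def video_to_full_10bit(code, v_min=64, v_max=940, f_min=4, f_max=1023):
    """Linearly rescale a 10-bit video-range luma code value into full range."""
    return f_min + (code - v_min) * (f_max - f_min) / (v_max - v_min)

print(video_to_full_10bit(64))   # 4.0    -> video black maps to the full-range minimum
print(video_to_full_10bit(940))  # 1023.0 -> video white maps to the full-range maximum
print(video_to_full_10bit(502))  # 513.5  -> mid-range values scale proportionally
```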
Converting Between Ranges and Clipping
Simply converting an image from one data range to another should result in a seamless change. All
“legal” data from 0–100 percent is always preserved and is linearly scaled from the previous data
range to fit into the new data range.
The exceptions to this are undershoots and overshoots that you’ve deliberately set, also referred to as
out-of-bounds levels. The overshoots and undershoots that are allowable in “Video Levels” media
(known as sub-black or super-black and super-white) are usually clipped when converted to full-range
“Full Levels.” However, DaVinci Resolve preserves this data internally, and these clipped pixels of
detail in the undershoots and overshoots are still retrievable by making suitable adjustments in the
Color page to bring them back into the “legal” range.
The out-of-bounds image data that’s preserved within the headroom of Video Levels by
DaVinci Resolve while working is usually clipped, however, when you either output to video or render
your output. There are two settings that let you get around this for instances where you want to
preserve these levels:
– A checkbox in the Video Monitoring group of the Master settings, “Retain sub-black and super-
white data,” lets DaVinci Resolve output undershoots (sub-black) and overshoots (super-white)
to video when Data Level is set to Video. When this is turned off, these out-of-bounds values are
clipped on output.
– A checkbox in the Advanced settings of the Render settings in the Deliver page, “Retain sub-
black and super-white data,” lets DaVinci Resolve render undershoots (sub-black) and overshoots
(super-white) to exported media when Data Level is set to Video.
Internal Image Processing and Clip Data Levels
It’s useful to know that, internally to DaVinci Resolve, all image data is processed as full range,
uncompressed, 32-bit floating point data. What this means is that each clip in the Media Pool, whatever
its original bit-depth or data range, is scaled into full-range 32-bit data. How each clip is scaled
depends on its Levels setting in the Clip Attributes window, available from the Media Pool
contextual menu.
Selecting Auto, Video, or Full levels
By converting all clips to uncompressed, full-range, 32-bit floating point data, Resolve guarantees the
highest quality image processing that’s possible. As always, the quality of your output is dependent on
the quality of the source media you’re using, but you can be sure that Resolve is preserving all the
data that was present in the original media.
Assigning Clip Levels in the Media Pool
When you first import media into the Media Pool, either manually in the Media page or automatically
by importing an AAF or XML project in the Edit page, Resolve automatically assigns the “Auto” Levels
setting. When a clip is set to Auto, the Levels setting used is determined based on the codec of the
source media.
DaVinci Resolve generally does a good job of figuring out the appropriate Levels setting of each clip
on its own. However, in certain circumstances, such as when you’re working with media that was
originated in one format but transcoded into another, you may find that you need to manually choose
the appropriate settings so that the levels of each clip are interpreted correctly. This can be done
using each clip’s Levels setting in the Clip Attributes window, available from the Media Pool contextual
menu in either the Media or Edit pages.
To change a clip’s Data Level setting:
1 Open the Media or Edit page.
2 Select one or more clips, then right-click one of them and choose Clip Attributes.
3 Click the Levels radio button corresponding to the data level setting you want to assign,
then click OK.
TIP: If you need to change the Levels setting of a range of clips that share a unique property
such as reel name, resolution, frame rate, or file path, you can view the Media Pool by column,
and sort by the particular column that will best isolate the range of media to which you need
to make a data level assignment.
Once you change a clip’s Levels setting, that clip will automatically be reconverted based on the new
assignment. If it appears to be correct, then you’re ready to work. If it doesn’t, then you may want to
reconsider the Levels assignment you’ve made, and you should check with the person who provided
the media to find out how it was generated, captured, and exported.
So long as the Levels settings used by your clips are accurate, you should be ready to work. However,
problems can still occur based on what external video hardware you’re using with your workstation,
and how you need to deliver the finished media to your client. For this reason, there are three
additional data level settings that you can use to maintain data integrity, while at the same time seeing
the proper image as you work.
Video Monitoring Data Levels
Superficial problems may result if the settings used by your external display differ from the settings
you’re using to process data levels in Resolve. Accordingly, there is a Video/Full Level setting in the
Master Settings panel of the Project Settings (in the Video Monitoring section).
When you change this setting, the image being output to your external display should change, but the
image you see in your Viewer will not. That’s because this setting only affects the data levels being
output via the video interface connecting the Resolve workstation to your external display. It has no
effect on the data that’s processed internally by Resolve, or on the files written when you render in the
Deliver page.
There are two options:
– Video: This is the correct option to use when using a broadcast display set to the
Rec. 709 video standard (10-bit 64–940).
– Full: If your monitor or projector is capable of displaying “full range” video signals, and you wish to
monitor the full 10-bit data range (4–1023) while you work, then this is the correct option to use.
It is imperative that the option you choose in DaVinci Resolve matches the data range the external
display is set to. Otherwise, the video signal will appear to be incorrect, even though the internal data
is being processed accurately by DaVinci Resolve.
Auto/Video/Full Level selection for monitoring
Deck Capture and Playback Data Level
There is a separate “Video/Data Level” setting that is specific to when you’re capturing from or
outputting to VTRs. This setting also affects the video signal that is output via the video interface
connecting the Resolve workstation to your VTR (which is usually also in the signal chain used for
monitoring). However, it only takes effect when you’re capturing from tape in the Media page, or
editing to tape in the Deliver page. If you never capture or output to tape, this setting will never
take effect.
This setting is found in the Deck Capture and Playback panel of the Project Settings.
The reason for a separate option for tape capture and output is that often you’d want to monitor in one
format (normally scaled Rec. 709), but output to tape in another (full range RGB 444). This way, you can
set up Resolve to accommodate this workflow, and then not have to worry about manually switching
your video interface back and forth.
There are two options:
– Video: This is the correct option to use when you want to output conventional
Rec. 709 video to a compatible tape format.
– Full: This is the correct option to use when you want to output “full range” RGB 444 video
to a compatible tape format.
Once tape ingest or output has finished, your video interface goes back to outputting using the setting
specified by the “Colorspace conversion uses” setting in the Master Settings panel of the Project
Settings (in the Video Monitoring section).
Output Data Level Settings in the Deliver Page
Finally, there’s one last set of data level settings, available in the Render Settings list, within the Format
group. It’s the “Set to video or data level” drop-down menu. It’s there to give you the ability to convert
the data level of your rendered output, if necessary.
All media is output using a single data level, depending on your selection. There are three options:
– Automatic: The output data level of all clips is set automatically based on the codec you select to
render to in the “Render to” drop-down menu.
– Video: All clips are rendered as normally scaled for video (10-bit 64–940).
– Full: All clips are rendered as full range (10-bit 4–1019).
For most projects, leaving this setting on “Automatic” will yield the appropriate results. However, if
you’re rendering media for use by another image processing application (such as a compositing
application) that is capable of handling “full range” data, then full range output is preferable for media
exchange as it provides the greatest data fidelity. For example, when outputting media for VFX work
as a DPX image sequence, or as a ProRes 4444 encoded QuickTime file, choosing “Full” guarantees
the maximum available image quality. However, it is essential that the
application you use to process this media is set to read it as “full range” data, otherwise the images will
not look correct.
So, What’s the “Proper” Data Range for Output?
Strictly speaking, there is no absolutely “proper” data range to use when outputting image data. As
long as the Levels setting of each clip in the Media Pool is set to reflect how each clip was created,
your primary consideration is which data range is compatible with the media format or application
you’re delivering to. If the media format you’re exporting to supports either normally scaled or full
range, and the application that media will be imported into supports either normally scaled or full
range, then it’s really your choice, as long as everyone involved with the project understands how the
data range of the media is meant to be interpreted once they receive it.
Outputting to hardware is a bit trickier, in that you need to make sure that the external display or VTR
you’re outputting to is set up to receive a signal using the data range you’ve chosen. If the device is
limited to only one data range, then you need to be sure that you’re outputting to it using that data
range, or the levels of the image will appear to be incorrect, even though the image data being
processed by Resolve is actually fine.
Introduction to DaVinci Resolve
Color Management
How color is managed in DaVinci Resolve depends on the “Color Science” setting at the top of the
Color Management panel of the Project Settings. There are four options: DaVinci YRGB,
DaVinci YRGB Color Managed, DaVinci ACEScc, and DaVinci ACEScct. This section discusses the
second setting, DaVinci YRGB Color Managed. ACEScc and ACEScct are discussed later
in this chapter.
Display Referred vs. Scene Referred Color Management
The default DaVinci YRGB color science setting, which is what DaVinci Resolve has always used, relies
on what is called “Display Referred” color management. This means that Resolve has no information
about how the source media used in the Timeline is supposed to look; you can only judge color
accuracy via the calibrated broadcast display you’re outputting to. Essentially, you are the color
management, in conjunction with a trustworthy broadcast display that’s been calibrated to
ensure accuracy.
DaVinci Resolve 12 introduced a color science option called “DaVinci YRGB Color Managed,” or more
simply “Resolve Color Management” (RCM). This introduced a so-called “Scene Referred” color
management scheme, in which you have the option of matching each type of media you’ve imported
into your project with a color profile that informs DaVinci Resolve how to represent each specific color
from each clip’s native color space within the common working color space of the timeline in which
you’re editing, grading, and finishing.
This is important, because two clips that contain the same RGB value for a given pixel may in actuality
be representing different colors at that pixel, depending on the color space that was originally
associated with each captured clip. This is the case when you compare raw clips shot with different
cameras made by different manufacturers, and it’s especially true if you compare clips recorded using
the differing log-encoded color spaces that are unique to each camera.
This Scene Referred component of color management via RCM doesn’t do your grading for you, but it
does try to ensure that the color and contrast from each different media format you’ve imported into
your project are represented accurately in your timeline. For example, if you use two different
manufacturers’ cameras to shoot green trees, recording Blackmagic Film color space on one, and
recording to the Sony SGamut3.Cine/SLog3 color space on the other, you can now use RCM to make
sure that the green of the trees in one set of clips match the green of the trees in the other, within the
shared color space of the Timeline.
It should be mentioned that this sort of thing can also be done manually in a more conventional
Display Referred workflow, by assigning LUTs that are specific to each type of media, or using Color
Space Transform Resolve FX in order to transform each clip from the source color space to the
destination color space that you require. However, RCM’s automation can make this process faster by
freeing you from the need to locate and maintain a large number of LUTs to accommodate your various
workflows. Also, the matrix math used by RCM (as well as the Color Space Transform operation)
extracts high-precision, wide-latitude image data from each supported camera format, preserving
high-quality image data from acquisition, through editing, color grading, and output. These are all
advantages when compared to lookup tables, which can have plenty of precision, but can clip out-of-
bounds image data and introduce issues when differing lookup table interpolation methods cause
minor inconsistencies with color space transformations from application to application.
The preservation of wide-latitude image data deserves elaboration. LUTs clip image detail that goes
outside of the numeric range they’re designed to handle, so this often requires the colorist to make a
pre-LUT adjustment to “pull back” image data in the highlights that you want to retrieve. Using RCM
eliminates this two-step process, since the input color space matrix operations used to transform the
source preserve all wide-latitude image data, making highlights easily retrievable without any
extra steps.
Updated RCM In DaVinci Resolve 17
In version 17, DaVinci Resolve introduced the biggest improvements to Resolve Color Management
(RCM) since it was originally introduced, adding numerous features to simplify setup, improve image
quality, and make the “feel” of your grading controls more consistent. Specific improvements include
improved metadata management for incoming media files that support color metadata, a new wide
gamut color space suitable for using as your default Timeline working color space for any program, a
new Input Tone Mapping option (Input DRT) that makes it easier to mix media formats for SDR and HDR
grading, improved Timeline to Output Tone Mapping (Output DRT) that offers improved shadow and
highlight handling, and select color space-aware grading palettes that make controls feel and perform
well no matter what you’re grading.
This updated Resolve Color Management has the same name as the previous version. However, older
projects using the previous version of RCM will have Color science set to Legacy, to preserve the
older color management settings and the effect of their color transformations on your work. For more information
on how the previous generation of RCM works, see the September 2020 version of the
DaVinci Resolve 16 Manual.
How Is DaVinci Resolve Color Management Different from ACES?
This is a common question, but the answer is pretty simple. Resolve Color Management (RCM)
and ACES are both Scene Referred color management schemes designed to solve the same
problem. However, if you’re not in a specific ACES-driven cinema workflow, DaVinci Resolve
Color Management can be simpler to use, and will give you all of the benefits of color
management, while approximating the “feel” that the DaVinci Resolve Color page controls
have always had.
Resolve Color Management for Editors
RCM isn’t just for Colorists. RCM can be easier for editors to use in situations where the source
material is log-encoded. Log-encoded media preserves highlight and shadow detail, which is great for
grading and finishing, but it looks flat and unpleasant, which is terrible for editing.
Even if you have no idea how to do color correction, it’s simple to turn RCM on in the Color
Management panel of the Project Settings, and then use the Media Pool to assign the particular Input
Color Space that corresponds to the source clips from each camera. Once that’s done, each log-
encoded clip is automatically normalized to the default Timeline Color Space of Rec. 709 Gamma 2.4.
So, without even having to open the Color page, editors can be working with pleasantly normalized
clips in the Edit page.
The Input, Timeline, and Output Color Space
The foundation of Resolve Color Management rests on three core settings. Not only do you have the
ability to either automatically or manually identify the color science of each individual source clip (the
Input Color Space), but you also have explicit control over the working color space within which all
color adjustments and operations are made (the Timeline Color Space), and you have separate control
over the Output Color Space that defines how your graded image will be monitored and output.
This means that, basically, Resolve Color Management consists of two color transforms working
together, converting each source clip via its Input Color Space definition into the Timeline Color Space
in which you work, and then converting the adjusted image from the Timeline Color Space to whatever
Output Color Space you require to deliver the project.
Input Color Space → Timeline Color Space → Output Color Space
Resolve Color Management consists of three color transforms working together.
This means that, as a colorist, you can set the Timeline Color Space that you’re working in to whatever
you prefer. If you prefer grading wide-gamut log media because you like the way the grading controls
behave in that color space, you can set the Timeline Color Space in the Color Management panel of
the Project Settings to DaVinci Wide Gamut (more on this below), or any of the available log formats,
including ARRI Log C, REDWideGamutRGB/Log3G10, and Cineon Film Log. If you instead prefer
grading in the Rec. 709 color space because you’re mastering a standard dynamic range (SDR)
program to Rec. 709 and you’re more comfortable with how the controls in DaVinci Resolve have
always felt in that color space, you can choose that instead. Whatever Timeline Color Space you
assign is what all source clips will be transformed to for purposes of making grading adjustments in the
Color page, so you can make this choice using a single setting.
A key benefit of the color space conversions that RCM applies is that no image data is ever clipped
during the Input to Timeline color space conversion. For example, even if your source is log-encoded
or in a camera raw format, grading with a Rec. 709 Timeline Color Space does nothing to clip or
otherwise limit the image data available to the RCM image processing pipeline. All image values
greater than 1.0 or less than 0.0 are preserved and made available to the next stage of RCM
processing, the Timeline to Output color space conversion.
Consequently, if you’re grading in a color space other than the one you need to output to, you don’t
have to worry about data loss during the color transformation back to the color space you actually
want to output to. The Output Color Space setting gives you the freedom to work using whatever
Timeline Color Space you like while grading, with Resolve automatically converting your output to the
specific color space you want to monitor with and deliver to. And thanks to the precision of the image
processing in DaVinci Resolve, you can convert from a larger color space to a smaller one and back
again without clipping or a loss of quality. Of course, if you apply a LUT or use Soft Clip within a grade,
then clipping will occur, but that’s a consequence of using those particular operations.
TIP: If you want to use Resolve Color Management, but you want the Input and Output Color
Spaces to match whatever you set the Timeline Color Space to, you can choose “Bypass” in
the “Input Colorspace” and “Output Colorspace” drop-down menus.
Finally, it is the Output Color Space that determines the final color space of your rendered result. While
no image data is clipped during the Source to Timeline color space conversion, image data will be
clipped during the Timeline to Output color space conversion in order for the final image to conform to
the color space being rendered and output, unless you use the Gamut Mapping options to compress
image data during the Timeline to Output Color Space conversion.
The RCM Image Processing Pipeline
The previous explanation is, of course, simplified. To clarify the inner workings of Resolve Color
Management for advanced users, the following flowchart presents a rudimentary overview of how
every parameter works together to automatically manage the color of clips in your program.
Resolve Color Management’s image processing pipeline, illustrated
Identifying the Input Color Space of Different Clips
Central to the process of automated color management is knowing the color space and transfer
function used by every clip of source media in your project. There are a variety of ways
DaVinci Resolve can figure this out, in a cascading decision-tree that can be manually overridden if
necessary. Deriving the Input Color Space involves the following stages of automated
decision making:
1 If the source media is a camera raw format like .braw, .R3D, .ari, etc., DaVinci Resolve uses
manufacturer-supplied colorimetry to automatically debayer the clip and identify its Input
Color Space.
2 Otherwise, if the source media has embedded color space metadata (QuickTime and MXF files
support this), that metadata is used to identify the Input Color Space.
3 Otherwise, if there is no embedded color space metadata, the default Input Color Space setting
in the Project Settings is used to assign an Input Color Space to all otherwise unidentified clips.
4 If necessary, you can manually set the Input Color Space of clips in the Media Pool, which
overrides both embedded color space metadata (in case it’s wrong) and the default Input Color
Space setting (if you’re dealing with multiple color spaces). You cannot override the Input Color
Space of camera raw media.
The following sections discuss each of these steps in more detail.
Using Camera Raw Formats
When you use RCM in a project that uses Camera Raw formats, color science data from each camera
manufacturer is used to debayer each camera raw file to specific color primaries with linear gamma, so
that all image data from the source is preserved and made available to DaVinci Resolve’s color
managed image processing pipeline. As a result, the Camera Raw project settings and Camera Raw
palette of the Color page are disabled, because RCM now controls the debayering of all camera raw
clips, and all image data from the raw file is available no matter which Timeline Color Space you
choose to work within.
Using Source Media Color Space Metadata
When enabled, RCM automatically identifies the color space information of imported media that’s been
either transcoded or recorded directly to supported non-raw media formats, reading the NCLC
metadata of QuickTime-wrapped files, the color space metadata of .mxf-wrapped files, and the XML
sidecar files that track color management in ACES workflows. This behavior is automatic; there are no
visible controls governing this behavior aside from the individual Input Color Space and Input Gamma
settings associated with each clip in the Media Pool.
Color Space Metadata in QuickTime
DaVinci Resolve is capable of reading the NCLC metadata found within media files wrapped within a
QuickTime container for proper color management. This metadata consists of three values formatted
as (for example) 1-1-1. From left to right, these three digits specify the Color Primary (or color space),
Transfer Function (or gamma), and Color Matrix used by that media file.
These values are standardized in the SMPTE Registered Disclosure Document RDD 36:2015. For your
information, the different codes are listed in the following table. In the previous example, the code of
1-1-1 indicates a standard dynamic range clip that uses the BT.709 primaries, transfer function, and
color matrix.
Color Primary | Transfer Function | Color Matrix
0 Reserved | 0 Reserved | 0 GBR
1 ITU-R BT.709 | 1 ITU-R BT.709 | 1 BT709
2 Unspecified | 2 Unspecified | 2 Unspecified
3 Reserved | 3 Reserved | 3 Reserved
4 ITU-R BT.470M | 4 Gamma 2.2 curve | 4 FCC
5 ITU-R BT.470BG | 5 Gamma 2.8 curve | 5 BT470BG
6 SMPTE 170M | 6 SMPTE 170M | 6 SMPTE 170M
7 SMPTE 240M | 7 SMPTE 240M | 7 SMPTE 240M
8 FILM | 8 Linear | 8 YCOCG
9 ITU-R BT.2020 | 9 Log | 9 BT2020 Non-constant Luminance
10 SMPTE ST 428-1 | 10 Log Sqrt | 10 BT2020 Constant Luminance
11 DCI P3 | 11 IEC 61966-2-4 | –
12 P3 D65 | 12 ITU-R BT.1361 Extended Colour Gamut | –
– | 13 IEC 61966-2-1 | –
– | 14 ITU-R BT.2020 10 bit | –
– | 15 ITU-R BT.2020 12 bit | –
– | 16 SMPTE ST 2084 (PQ) | –
– | 17 SMPTE ST 428-1 | –
– | 18 ARIB STD-B67 (HLG) | –
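To illustrate how those three digits are read in practice, here is a small decoding sketch covering a few of the more common codes from the table. It is not an exhaustive mapping, and the helper itself is hypothetical rather than part of DaVinci Resolve.

```python
# A few common codes from the table above; extend the dictionaries as needed.
PRIMARIES = {1: "ITU-R BT.709", 9: "ITU-R BT.2020", 11: "DCI P3", 12: "P3 D65"}
TRANSFERS = {1: "ITU-R BT.709", 16: "SMPTE ST 2084 (PQ)", 18: "ARIB STD-B67 (HLG)"}
MATRICES = {1: "BT709", 9: "BT2020 Non-constant Luminance"}

def decode_nclc(tag):
    """Split an NCLC tag like '1-1-1' into primaries, transfer function, and matrix."""
    primary, transfer, matrix = (int(part) for part in tag.split("-"))
    return (PRIMARIES.get(primary, "Unspecified"),
            TRANSFERS.get(transfer, "Unspecified"),
            MATRICES.get(matrix, "Unspecified"))

print(decode_nclc("1-1-1"))   # SDR Rec. 709: ('ITU-R BT.709', 'ITU-R BT.709', 'BT709')
print(decode_nclc("9-16-9"))  # HDR PQ with Rec. 2020 primaries and matrix
```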
The Default Input Color Space
The default Input Color Space can only be set if the “Resolve color management preset” drop-down
menu is set to Custom; for all other presets it defaults to “Rec. 709 Gamma 2.4.” When available, this
setting determines the color space that all otherwise unidentified clips in the Media Pool will default to.
Manually Tagging Clip Color Space
If necessary, you can manually identify the color space of one or more selected clips in the Media Pool
by right-clicking them and choosing the Input Color Space (and optionally the Input Gamma) from the
contextual menu.
Simple RCM Setup
When you first choose DaVinci YRGB Color Managed from the Color science drop-down menu of the
Color Management panel in the Project Settings, you’re presented with a simple pair of menus for
setting up how you want to work with Resolve Color Management: the “Resolve color management
preset,” and the “Output Color Space.”
Automatic Color Management
The first option when using RCM is to decide to use either Automatic Color Management or the
Manual Presets. When the Automatic Color Management box is checked, DaVinci Resolve presents
you with a simplified set of options for the most common use cases. For the Color Processing Mode,
you choose SDR or HDR, and based on the file types and codecs in the Media Pool, DaVinci Resolve
will automatically choose the appropriate input color space. Then, select from a list of common Output
color spaces for delivery. If you want specific control over these parameters, uncheck the Automatic
Color Management box and select from the Color Management presets below.
Automatic Color Management presets for fast, simple color management setup
Resolve Color Management Presets
Resolve Color Management presets for manual color management setup
The Resolve Color Management preset menu lets you choose how you want to use RCM to grade your
program. Each of these presets fully configures your project’s use of color management, and the
setting you select directly impacts how you’ll grade your program. Because of this, once you choose a
preset and grade the clips in your program, those grades rely on that preset remaining selected
in order to appear as they should.
When it comes to choosing a preset, a good way to think about which to use is to choose an SDR or
HDR preset that corresponds to the primary deliverable you plan on outputting. Both SDR and HDR
presets have several variations that you can choose among.
While these presets correlate to how you plan on outputting your program, they don’t lock you in,
since you can always change the Output Color Space (described below). This makes it possible to
export multiple versions of your program, each intended for different venues, no matter which color
management preset you’re using.
Whenever you choose a preset, a brief description explains the workflow that preset is intended to
facilitate. Here’s a list of the available presets, with slightly more detailed explanations.
– SDR Rec.709: (default) Sets up a Rec. 709 SDR grading environment. Your work can be converted
to HDR on output, if specified, but is limited to a Rec. 709 gamut with out-of-bounds colors
being clipped. Gamma 2.4 is not mentioned in the name because scene versus display OOTF is
managed automatically. Suitable for conventional streaming and broadcast.
– SDR P3 Broadcast: Sets up a P3-D65 SDR grading environment. Your work can be mapped
to HDR for output, if specified, but it is limited to a P3-D65 gamut with out-of-bounds colors
being clipped. Gamma 2.4 is not mentioned in the name because scene versus display OOTF is
managed automatically. Suitable for wider gamut streaming and broadcast at SDR levels.
– SDR P3 Cinema: Sets up a P3-D60 SDR grading environment. Your work can be mapped to HDR
for output, if specified, but it is limited to a P3-D60 gamut with out-of-bounds colors being clipped.
Suitable for conventional Cinema projection.
– SDR Rec.2020: Sets up a Rec. 2020 SDR grading environment. Your work can be mapped to HDR
for output, if specified. Good for wide gamut streaming and broadcast.
– DaVinci Wide Gamut: Sets up an extra wide gamut grading environment that’s suitable for grading
either SDR or HDR. Capable of exporting with maximum image fidelity, preserving highlight details
of up to 10,000 nits. This is a log-encoded grading space for colorists wishing to work that way.
Suitable for creating mezzanine intermediates or final deliverables, or for grading HDR with high
nit levels.
– HDR P3 Broadcast: Sets up a P3-D65 HDR grading environment. Output gamut is limited to P3-
D65, with out-of-bounds colors being clipped. Suitable for grading wide gamut SDR or HDR up to
1000 nits.
– HDR Rec.2020: Sets up a Rec. 2020 HDR grading environment. Suitable for wide gamut SDR or
HDR deliverables up to 1000 nits.
– Custom: If none of the available presets suits how you need to work, you can choose Custom,
which exposes the full set of RCM settings for you to set up to suit your needs.
IMPORTANT
For all presets, importing media that’s in an identical or smaller gamut maps the image data
into the larger color space of the preset without transforming it. Importing media with a wider
gamut than the color space of the preset remaps the image data to fit into the smaller color
space, while preserving as much image detail as possible.
Output Color Space
For most DaVinci Resolve installations and projects, you’ll set your Output Color Space to match the
needs of your program, according to your display’s capabilities (or the capabilities your display is set to
use for the project at hand). You’ll also typically use a Resolve Color Management preset that matches
those capabilities.
However, RCM gives you the flexibility of grading in one color space and then outputting to others,
when necessary. For example, it’s easy to grade an SDR Rec. 709 version of a program for streaming
or broadcast, and then switch the Output Color Space to SDR P3 Cinema to output an additional
deliverable for theatrical exhibition.
To facilitate this, you can set the Output Color Space to any setting, independent of the Resolve Color
Management preset you’ve selected, and DaVinci Resolve will automatically convert from your Color
Management Preset to the Output Color Space of your choice. When you do so, here are the rules that
govern the resulting image transform.
When going SDR to HDR (a short arithmetic sketch follows these lists):
– 0-50 nits (18% mid-gray) in your program is mapped to 0-50 nits on output (no change).
– Everything from 51-90 nits in your program is remapped from 51 to 100 nits (slightly expanded).
– Everything from 91-100 nits in your program is remapped from 101 to 1000 nits (greatly expanded).
(Left) Original SDR grade seen within an HDR scale, (Right) After an automatic SDR to HDR conversion
When going from HDR to SDR, the reverse is done:
– 0-50 nits (18% mid-gray) in your program is mapped to 0-50 nits on output (no change).
– Everything from 51 to 100 nits in your program is remapped from 51-90 nits (slightly compressed).
– Everything from 101 to 1000 nits in your program is remapped from 91-100 nits
(greatly compressed).
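As a concrete illustration of the SDR-to-HDR direction described above (the HDR-to-SDR direction simply inverts the same ranges), the remapping amounts to a piecewise rescale like the sketch below. This is purely illustrative and is not the actual tone-mapping math DaVinci Resolve uses.

```python
def sdr_to_hdr_nits(level):
    """Remap an SDR program level (in nits) to HDR output per the ranges listed above."""
    if level <= 50:
        return level                                       # 0-50 nits pass through unchanged
    if level <= 90:
        return 50 + (level - 50) * (100 - 50) / (90 - 50)  # 51-90 expands slightly to 51-100
    return 100 + (level - 90) * (1000 - 100) / (100 - 90)  # 91-100 expands greatly to 101-1000

for nits in (25, 50, 75, 90, 95, 100):
    print(nits, "->", round(sdr_to_hdr_nits(nits), 1))
```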
While these methods of converting between SDR and HDR provide an effective starting point for
conversion, they’re not meant to be an automatic solution. It’s critical that you do a trim pass whenever
outputting a deliverable in a new color space and EOTF, so you can check every clip and make
adjustments to improve the result when necessary.
NOTE: When converting SDR to HDR, this behavior may exaggerate noise in imported SDR
media that happens to have large flat expanses of bright colors. If you see particular clips that
show this issue, you can disable this behavior on a clip by clip basis in the Media Pool clip
contextual menu, or the Thumbnail Timeline contextual menu in the Color page, by toggling
“Inverse DRT for SDR to HDR Conversion.”
Advanced RCM Setup
Advanced users who need more detailed control over every aspect of RCM can choose Custom from
the Resolve Color Management preset menu. This exposes every control that’s available, which opens
a world of workflow possibilities for advanced users and post production facilities.
Because each of the settings encompasses a significant amount of functionality, the following sections
cover each particular parameter in detail.
Custom Color Management settings of Resolve Color Management, as updated in DaVinci Resolve 17
NOTE: Older projects using RCM will have Color science set to Legacy, to preserve the older
color management settings and the effect of their color transformations on your work. For more
information on how the previous generation of RCM works, see the September 2020 version
of the DaVinci Resolve 16 Manual.
Single Setting vs. Dual Setting RCM
There are two ways you can set up RCM. When the “Use Separate Color Space and Gamma”
checkbox is turned off, the Color Management panel of the Project Settings exposes one drop-down
each for the Input, Timeline, and Output Color Space settings. Each setting lets you simultaneously
transform the gamut and gamma, depending on which option you choose. This makes it a bit simpler
to set up the transform you need.
Single setting Resolve Color Management
If you turn the “Use Separate Color Space and Gamma” checkbox on, then the Color Management
panel changes so that the Input, Timeline, and Output Color Space settings each display two drop-down
menus. The first lets you explicitly set the gamut, while the second lets you explicitly
set the gamma. This makes it easier to see exactly which pair of transforms is being used at each
stage of RCM.
Dual setting Resolve Color Management
Additionally, Dual Setting RCM enables you to assign separate gamut and gamma transforms to clips in
the Media Pool.
Dual setting Resolve Color Management assignments for Media Pool clips
Setting the Input Color Space
This setting determines the color space that all otherwise unidentified clips in the Media Pool default
to, unless you manually identify the color space of these clips by right-clicking them and choosing an
Input Color Space (and optionally Input Gamma) from the contextual menu.
This setting does not affect media in camera raw formats, or media with embedded color
space metadata.
Choosing a Timeline Color Space
The Timeline Color Space is the “working” color space that determines how each clip’s contrast and
color are mapped for adjustment, which in turn has an impact on how sensitive the effects and grading
controls are as you work. Some colorists prefer to work in the classic “video” color space of Rec. 709,
since the controls feel comfortable and familiar, particularly if you’re mastering SDR content. On the
other hand, colorists who are used to working with log-encoded media (likely using the Log controls)
often prefer to work in a more film-oriented workflow using Cineon, LogC, or other wide gamut,
logarithmically encoded formats.
If you’re outputting an SDR deliverable, any color space that you’re comfortable with will produce good
results. However, if you’re outputting an HDR deliverable, it’s in your best interest to choose a wide
gamut Color Space (and Gamma) to obtain the best results on output. In this instance, DaVinci Wide
Gamut is a great choice (see below for more information).
No matter which Timeline Color Space you choose to work in, all clips in an edit are transformed from
the Input Color Space that’s either automatically or manually assigned to them into the Timeline Color
Space for grading, and then from the Timeline Color Space to the Output Color Space for monitoring
and output. This is how you can grade within a Log-encoded timeline
color space and yet view a normalized or de-logged image.
IMPORTANT
Once you choose a Timeline Color Space and begin grading, do not change your Timeline
Color Space, or you’ll end up changing all of the grades that are built using the mathematics it
defines. You can always change the Output Color Space to create a new deliverable, but all of
your grades depend on the Timeline Color Space to render correctly.
DaVinci Wide Gamut Color Space and DaVinci Intermediate Gamma
DaVinci Wide Gamut (DaVinci WG) and DaVinci Intermediate are Timeline Color Space and Gamma
settings developed by Blackmagic Design that provide a reliable universal internal working color
space, which encompasses a practical maximum of what image data any given camera can capture.
The DaVinci Wide Gamut color space is greater than BT.2020, ARRI Wide Gamut, and even ACES, so
you don’t ever lose image data, no matter where your media is coming from.
Furthermore, the primary color values of the DaVinci WG color space are set such that the process of
automatically mapping source media from different cameras into this gamut is extremely accurate as
part of the Input to Timeline Color Space conversion, and tone and saturation mapping from one color
space to another can be done more accurately in the Timeline to Output Color Space conversion.
This also helps to produce greater consistency among media from different cameras when making
manual grading adjustments (though some variations due to differences in camera and lens systems
will remain).
The DaVinci Wide Gamut color space
The DaVinci Intermediate OETF gamma setting has been designed to work with DaVinci Wide Gamut
to provide a suitable internal luminance mapping of high precision image data, in preparation for
mastering to either HDR or SDR standards, as your needs require, without losing image data.
The DaVinci Intermediate OETF seen encoding HDR levels
The DaVinci Intermediate OETF encoding SDR levels
Resolve Color Management is extremely flexible, so you don’t have to use DaVinci Wide Gamut/
DaVinci Intermediate as your Timeline color space if you don’t want. However, it presents many
advantages and is worth trying out to see if it can improve your workflow.
For more information, see the “DaVinci Resolve Wide Gamut Intermediate” document at https://www.blackmagicdesign.com/support/family/davinci-resolve-and-fusion.
Timeline Working Luminance
This control is only visible while the Resolve Color Management presets menu is set to Custom
Settings. The Timeline Working Luminance drop-down menu lets you choose how the Input DRT
(described below) maps the maximum level of a source image to the currently selected Timeline Color
Space. This setting also defines the maximum highlight level that’s possible to output into the currently
selected Output Color Space using the Output DRT.
While it’s typical to set this according to the mastering standard you’re grading to via a collection of
SDR and HDR labeled settings, there are additional settings available that make it possible to add
more automatic compression of highlights as you grade.
– SDR 100: The conventional setting for grading SDR material with a maximum level of 100 nits.
– HDR 500-4000: Conventional settings for grading HDR material at a variety of maximum
mastering levels. So long as output DRT isn’t set to None, there will be some manner of rolloff in
the highlights, unless inverse DRT is enabled, in which case there will be no rolloff.
– SDR and HDR ER settings: These “extended range” settings each specify two values and provide
more headroom for aggressive grading of highlights by enabling DaVinci Resolve to compress a
greater range of out-of-bounds image data without clipping, which can result in a smoother look.
Here’s how it works (a small numerical sketch follows this list). Suppose you choose the setting “HDR ER 1000/2000.” In this case, the Input
DRT is used to map the maximum brightness of each source image to the range specified by the
first value, which is 1000 nits. Then, when you grade, the signal isn’t clipped until it reaches the
maximum range specified by the second value, which is 2000 nits. This provides an additional
1000 nits of out-of-bounds headroom before the image data is hard clipped by RCM’s image
processing pipeline. The Output DRT is then used to map from the maximum brightness specified
by the second number (2000 nits) to the output value defined by the currently selected Output
Color Space, in the process compressing this out-of-bounds headroom to preserve as much
highlight detail as is possible given the range you’ve selected.
– Custom: Exposes a field where you can enter a specific nit value.
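Here is the small numerical sketch referred to above, assuming the “HDR ER 1000/2000” setting. RCM's actual mapping and compression are more sophisticated than the simple clip shown here, so treat this only as an illustration of where the extra headroom sits; all names are ours, not Resolve settings.

import numpy as np

MAPPED_PEAK = 1000.0   # first value: the Input DRT maps each source's peak to this level (nits)
CLIP_LEVEL = 2000.0    # second value: grading headroom before RCM hard clips (nits)

def gain_with_headroom(nits, gain):
    # With "HDR ER 1000/2000", values pushed above 1000 nits are not clipped
    # until 2000 nits; the Output DRT later compresses that headroom rather
    # than discarding it.
    return np.minimum(nits * gain, CLIP_LEVEL)

highlights = np.array([800.0, 1000.0])            # already mapped to a 1000-nit peak by the Input DRT
print(gain_with_headroom(highlights, 1.8))        # [1440. 1800.] — still inside the 2000-nit headroom
print(np.minimum(highlights * 1.8, MAPPED_PEAK))  # [1000. 1000.] — what clipping at 1000 nits would do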
203 Nit Support for SDR to HDR
This control is only visible while the Resolve Color Management presets menu is set to Custom
Settings. Resolve Color Management has support for remapping SDR content to HDR by mapping 100
nits to 203 nits (defined as the diffuse white level) according to the BT.2100 recommendation. This
enables the peak highlights of SDR material to compete more favorably against the significantly
brighter highlights of HDR content in programs that combine both (such as documentaries), so that
SDR whites continue to appear white, rather than gray, when compared to diffuse white in HDR.
The checkbox that enables this is hidden by default. Whenever you set the Output to an HDR standard
while the Timeline is set to an SDR standard, the “Use 203 nits reference for Rec.2100 HDR” checkbox
for remapping SDR highlights to HDR appears in both the RCM settings of the Color Management
panel of the Project Settings and in the Color Space Transform Resolve FX plug-in.
The “Use 203 nits reference for Rec.2100 HDR” checkbox in Resolve Color
Management for scaling SDR levels appropriately into HDR color space
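The arithmetic behind this remapping can be sketched as follows. This is a simplification that only shows where SDR reference white lands on an absolute-luminance scale; the actual conversion into the HDR output is handled inside RCM (or the Color Space Transform plug-in), and the function name here is ours.

# SDR reference white is nominally 100 nits; BT.2100/BT.2408 place SDR diffuse
# white at 203 nits when SDR material is carried inside an HDR signal.
SDR_REFERENCE_WHITE = 100.0
HDR_DIFFUSE_WHITE = 203.0

def sdr_signal_to_nits(sdr_signal, gamma=2.4, use_203=True):
    # sdr_signal is a 0-1 display-referred value (e.g., Rec.709 with a 2.4 gamma)
    linear = max(0.0, min(1.0, sdr_signal)) ** gamma
    white = HDR_DIFFUSE_WHITE if use_203 else SDR_REFERENCE_WHITE
    return linear * white

print(sdr_signal_to_nits(1.0))                 # 203.0 nits with the checkbox on
print(sdr_signal_to_nits(1.0, use_203=False))  # 100.0 nits without it, which reads as gray next to HDR whites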
Gamut Limiting, Restricting Values Within a Larger Gamut
This control is only visible while the Resolve Color Management presets menu is set to Custom
Settings. In the emerging world of larger gamuts for distribution, it’s increasingly common for delivery
specifications to specify output to a large gamut, such as Rec. 2020, yet require that image values be
restricted to a smaller gamut, such as P3. This is to allow delivery to “future-proofed” delivery
standards, while preventing saturation values that are too high to be displayed on consumer displays
that aren’t capable of implementing the full scope of those standards.
In this case, you’ll choose a larger gamut in Output Color Space, but you’ll then choose a smaller
gamut in “Limit Output Gamut To.” When you do this, all image values falling outside the “Limit Output
Gamut To” standard specified will be hard clipped. This setting defaults to None.
Choose a setting from the Limit Output Gamut To menu to limit image values within a larger gamut
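The following is a minimal numerical sketch of what hard clipping to a smaller gamut means, operating on linear-light RGB. The chromaticities are the published Rec. 2020 and P3-D65 primaries; the function names are ours for illustration and are not part of any Resolve API.

import numpy as np

def rgb_to_xyz(primaries, white):
    # Build an RGB -> XYZ matrix from xy chromaticities (standard derivation).
    cols = []
    for x, y in primaries:
        cols.append([x / y, 1.0, (1.0 - x - y) / y])
    M = np.array(cols).T                      # columns are the XYZ of R, G and B at Y = 1
    wx, wy = white
    W = np.array([wx / wy, 1.0, (1.0 - wx - wy) / wy])
    S = np.linalg.solve(M, W)                 # scale primaries so RGB (1,1,1) lands on the white point
    return M * S

D65     = (0.3127, 0.3290)
REC2020 = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]
P3_D65  = [(0.680, 0.320), (0.265, 0.690), (0.150, 0.060)]

to_p3   = np.linalg.inv(rgb_to_xyz(P3_D65, D65)) @ rgb_to_xyz(REC2020, D65)   # linear Rec.2020 -> linear P3
to_2020 = np.linalg.inv(to_p3)

def limit_output_gamut_to_p3(rgb_2020_linear):
    p3 = to_p3 @ np.asarray(rgb_2020_linear, dtype=float)
    p3 = np.clip(p3, 0.0, 1.0)                # values outside P3 are hard clipped
    return to_2020 @ p3                       # delivered back in the larger Rec.2020 container

print(limit_output_gamut_to_p3([0.0, 1.0, 0.0]))   # a pure Rec.2020 green is pulled inside the P3 gamut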
Input DRT Tone Mapping
This control is only visible while the Resolve Color Management presets menu is set to Custom
Settings. RCM has always transformed the color primaries of different media formats to match one
another within the shared Timeline Color Space. In this updated version, the Input DRT (Display
Rendering Transform) drop-down menu provides a variety of different options to enable
DaVinci Resolve to automatically tone map the image data of SDR and HDR clips to better match one
another when they’re fit into the currently selected Timeline Color Space. While each option varies in
the details, they are all automated input-to-timeline color transforms that do the following:
– Log-encoded media, or media using a 2.4 gamma transfer function, is mapped so the black point,
midtones at 18% gray, and white levels match those of HDR media. Highlight data will be carefully
stretched as necessary so that the highlights of all clips in the Timeline, whether SDR or HDR,
are treated similarly.
– Raw formats such as BRAW, RED, and ARRI RAW, and media using HDR transfer functions are
minimally mapped along an HDR range of tonality.
– All color transforms into the Timeline Color Space are done without clipping.
The idea is to distribute the image data of each clip in the Timeline, be it SDR or HDR media, along a
similar histogram, with shadows, midtones, and highlights spread out in such a way as to create an
easier starting point for grading. One result of this is that grades made for one type of media mostly
work well with other types of media.
Different options are provided governing the details of how this Input to Timeline Color Space
transform is achieved. They all do the same thing but have different advantages.
– None: This setting disables Input DRT Tone Mapping. No tone mapping is applied to the
Input to Timeline Color Space conversion at all, resulting in a simple 1:1 mapping to the
Timeline Color Space.
– Simple: A good mapping for color transforms from HDR to SDR.
– Luminance Mapping: Same as DaVinci, but more accurate when the Input Color Space of all your
media is in a single standards-based color space, such as Rec. 709 or Rec. 2020.
– DaVinci: This option tone maps the transform with a smooth luminance roll-off in the shadows and
highlights, and controlled desaturation of image values in the very brightest and darkest parts of
the image. This setting is particularly useful for wide-gamut camera media and is a good setting to
use when mixing media from different cameras.
– Saturation Preserving: This option has a smooth luminance roll-off in the shadows and highlights,
but does so without desaturating dark shadows and bright highlights, so this is an effective option
for colorists who like to push color harder. However, because over-saturation in the highlights
of the image can look unnatural, two parameters are exposed to provide some user-adjustable
automated desaturation.
– Sat. Rolloff Start: Lets you set a threshold, in nits (cd/m2), at which saturation begins to roll off
along with highlight luminance.
– Sat. Rolloff Limit: Lets you set a threshold, in nits (cd/m2), at which the image becomes totally
desaturated, marking the end of the rolloff. (A small sketch of this rolloff follows this list of options.)
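The sketch below pictures the two rolloff parameters as a simple ramp. The actual desaturation curve RCM uses is not documented here, so this only illustrates the general behavior, and the function name is ours.

def saturation_multiplier(luminance_nits, rolloff_start, rolloff_limit):
    # 1.0 (full saturation) below the start, 0.0 (fully desaturated) at the limit,
    # with a simple linear ramp in between; RCM's actual curve may differ.
    if luminance_nits <= rolloff_start:
        return 1.0
    if luminance_nits >= rolloff_limit:
        return 0.0
    return 1.0 - (luminance_nits - rolloff_start) / (rolloff_limit - rolloff_start)

for nits in (100, 600, 1000):
    print(nits, saturation_multiplier(nits, rolloff_start=500, rolloff_limit=1000))
# 100 -> 1.0 (untouched), 600 -> 0.8, 1000 -> 0.0 (fully desaturated)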
Output DRT Tone Mapping
This control is only visible while the Resolve Color Management presets menu is set to Custom
Settings. To accommodate workflows where you need to transform one color space into another that
has a dramatically larger or smaller gamut, an additional group of settings has been added that can
help to automate the expansion or contraction of image data necessary to give a pleasing result.
Using the available options in the Output DRT drop-down menu will compress or expand your image
data as necessary during the Timeline to Output Color Space transformation that RCM performs when
monitoring or rendering a timeline, in order to make sure that the final result is either not clipping, or to
ensure that it’s taking better advantage of the new color space. This is not meant to provide your final
grade. Rather, it’s meant to give you a faster starting point, when you need it, for proceeding with your
own more detailed grade of the result.
Here are some examples of what the Gamut Mapping controls of RCM can be used for:
1 If you’re working with high-dynamic-range log-encoded media and you’re outputting to Rec. 709
as you work, turning on Gamut Mapping lets RCM use saturation and tone mapping to give you a
more immediately pleasing image with highlight detail that’s not clipped.
2 If you’re working with standard-dynamic-range log-encoded media and you’re outputting to an
HDR format as you work, turning on Gamut Mapping lets RCM use saturation and tone mapping
to expand the highlights of the image to HDR strength to give you an image with more immediate
visual impact on HDR screens.
(Before/After) Gamut Mapping used to automatically fit
high-dynamic-range media into the Rec. 709 color space
The Output DRT (Display Rendering Transform) drop-down menu provides the following options.
– None: No tone mapping is applied to the Timeline to Output Color Space conversion at all,
resulting in a simple 1:1 output with no softness or rolloff applied. All image data outside of gamut
will be clipped.
– Simple: A good mapping for color transforms from HDR to SDR.
– Luminance Mapping: Same as DaVinci, but more accurate when all your media is in a single
standards-based color space, such as Rec. 709 or Rec. 2020, set to the Timeline and Output.
– DaVinci: This option tone maps your output with a smooth luminance roll-off in the shadows and
highlights, and controlled desaturation of image values in the very brightest and darkest parts of
the image. It’s been designed to give smooth, naturalistic highlights and shadows as you push and
pull the values of your images, without the need for additional settings. This setting is particularly
useful for wide-gamut camera media and is a good setting to use when mixing media from
different cameras.
– Saturation Preserving: This option has a smooth luminance roll-off in the shadows and highlights
to prevent clipping. It does so without desaturating dark shadows and bright highlights, so this
is an effective option for colorists who like to push color a bit harder. However, because
over-saturation in the highlights of the image can look unnatural, two parameters are exposed to
provide some user-adjustable automated desaturation.
– Sat. Rolloff Start: Lets you set a threshold, in nits (cd/m2), at which saturation begins to roll off
along with highlight luminance.
– Sat. Rolloff Limit: Lets you set a threshold, in nits (cd/m2), at which the image becomes totally
desaturated, marking the end of the rolloff.
– RED IPP2: This setting lets you use RED IPP2 tone mapping to output to an SDR format, such as
Rec. 709; two settings are exposed with which to choose how your output will be shaped.
– Output Tone Map: Lets you choose what kind of tone mapping you want to use for your output.
Options include: None, Low, Medium, and High.
– Highlight Roll Off: Lets you choose what kind of highlight rolloff you want to use to prevent
clipping. Options include: None, Hard, Medium, Soft, and Very Soft.
– HDR peak nits: A slider lets you choose the peak nit level you want to tone map to.
Defaults to 10,000 nits.
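As a rough illustration of the kind of highlight compression the tone-mapping options above perform when a large timeline range is fit into a smaller output range, here is a generic soft rolloff. None of the Output DRT options are documented as using this exact curve; the function and its parameters are ours for illustration only.

def soft_rolloff(nits, shoulder_start=80.0, output_peak=100.0, source_peak=1000.0):
    # Pass levels below the shoulder through unchanged, then compress everything
    # between the shoulder and the source peak into the remaining output headroom.
    if nits <= shoulder_start:
        return nits
    headroom_in = source_peak - shoulder_start
    headroom_out = output_peak - shoulder_start
    t = min((nits - shoulder_start) / headroom_in, 1.0)
    return shoulder_start + headroom_out * (t * (2.0 - t))   # simple ease-out toward the output peak

for nits in (50, 200, 1000):
    print(nits, round(soft_rolloff(nits), 1))
# 50 passes through unchanged, 200 compresses to about 84.9, and 1000 lands exactly at the 100-nit peak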
Use Inverse DRT for SDR to HDR Conversion
This control is only visible while the Resolve Color Management presets menu is set to Custom
Settings. A display rendering transform (DRT) is typically used when converting high dynamic range
media to a lower dynamic range color space/mastering standard. Thus, setting up a color transform
from SDR to HDR is an “inverse” operation to expand the dynamic range of SDR media to HDR
standards. The way this works is that levels at 100 nits are mapped to the maximum value set for the
Timeline Working Luminance parameter, and all other image levels are strategically tone mapped in
order to give yourself a good starting point for grading SDR media into an HDR program.
This setting also has a secondary use. With this setting turned on, you can output Rec. 709 clips with
color that’s identical to the input, with no compression in the highlights.
NOTE: Turning on “Use Inverse DRT for SDR to HDR Conversion” may exaggerate noise in
imported SDR media with large flat expanses of bright colors.
Use White Point Adaptation
This control applies a chromatic adaptation transform to account for different white points between
color spaces.
– Uncheck this box if you simply want to view the input color space’s white point unaltered in the
output color space; for example, when you want to use a P3-D60 mastered clip inside a P3-D65
timeline for reference purposes.
– Check this box to apply the chromatic adaptation transform to convert the input white point to
match the output color space’s white point; for example, when you want a P3-D60 mastered clip to
cut in with other clips mastered in a P3-D65 timeline.
NOTE: This control is only visible while the Resolve Color Management presets menu is set to
Custom Settings.
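For reference, a chromatic adaptation transform of the kind described above can be sketched with the commonly published Bradford method, as below. This is a generic illustration and not necessarily the exact matrix math RCM applies; the function names are ours.

import numpy as np

BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                     [-0.7502,  1.7135,  0.0367],
                     [ 0.0389, -0.0685,  1.0296]])

def xy_to_XYZ(x, y):
    return np.array([x / y, 1.0, (1.0 - x - y) / y])

def adaptation_matrix(src_white_xy, dst_white_xy):
    # Von Kries-style adaptation in the Bradford cone space: scale the cone
    # responses of the source white to those of the destination white.
    src = BRADFORD @ xy_to_XYZ(*src_white_xy)
    dst = BRADFORD @ xy_to_XYZ(*dst_white_xy)
    return np.linalg.inv(BRADFORD) @ np.diag(dst / src) @ BRADFORD

D60 = (0.32168, 0.33767)   # ACES/P3-D60 white point
D65 = (0.3127, 0.3290)

M = adaptation_matrix(D60, D65)   # applied to the XYZ values of a P3-D60 mastered clip
print(np.round(M, 4))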
Color Space Aware Grading Tools
In DaVinci Resolve version 17, both Resolve Color Management and ACES enable “color space aware”
palettes, such as the new HDR palette, to have controls that feel consistent, no matter what color
space the original media is from, or what Timeline Color Space you’re using.
Other palettes, such as the Qualifier and Curves palettes, become color space aware when you turn
on the “Use Color Space Aware Grading Tools” checkbox in the Color Management panel of the
Project Settings (this is turned on by default). When you’re using color space aware grading tools, you
should not turn on HDR Mode for the node you’re working on.
– In the case of the Qualifier palette, this enables Qualifiers to create high-quality keys as you would
expect, no matter what the color space of the original media is, or what Timeline Color Space
you’re using.
– In the case of the Curves palette, this makes the overall range of each curve better fit the overall
data range of the current clip, making curves adjustments easier and more specific.
NOTE: This control is only visible while the Resolve Color Management presets menu is set to
Custom Settings.
Apply Resize Transformations In
When you’re using Resolve Color Management, a new “Apply Resize Transformations In” Project
Setting is available in the Color Management panel while the Resolve Color Management presets
menu is set to Custom Settings. This setting lets you choose which color space is used for resizing
operations. Ordinarily, resizing is done in Linear, but certain specialty workflows benefit from doing
resizing in other color spaces, so this option lets you choose which is best. The available options are:
– Timeline: Uses the Timeline Color Space to perform all resizing operations.
– Log: Uses a Log Color Space for resizing. Good for avoiding artifacts in certain
high-contrast images, such as titles and star fields.
– Linear: Usually provides the best results with most SDR media.
– Linear Mapped: Usually provides the best results with most HDR media.
– Gamma: Provided in case you find a need for this option.
– Gamma Mapped: Usually provides best results when mixing SDR media with
wide gamut and log-encoded media on the same timeline.
Graphics White Level
The “Graphics white level” setting lets you define a shared maximum level in nits (cd/m2) for titles,
generators, and selected effects that generate color. Changing this setting lets you change the
maximum level of all DaVinci Resolve-generated titles, generator graphics, and effects at once to
accommodate different mastering and output requirements.
Display HDR On Viewers If Available
Turn this checkbox on if your computer monitors and operating system are capable of accommodating
HDR display. This allows the Viewers to show true HDR, according to the capabilities of your
computer monitor.
HDR Mastering Is For (Studio Version Only)
If you have a DeckLink 4K Extreme 12G or an UltraStudio 4K Extreme video interface, then
DaVinci Resolve 12.5 and above can output the metadata necessary to correctly display HDR video
signals to display devices using HDMI 2.0a when you turn on the “Enable HDR metadata over HDMI”
checkbox in the Master Project Settings.
The Enable HDR metadata over HDMI option in the Master
Project Settings lets you output HDR via HDMI 2.0a.
When you do so, a setting in the Color Management panel of the Project Settings, “HDR mastering
is for X” lets you specify the output, in nits, to be inserted as metadata into the HDMI stream being
output, so that the display you’re connecting to correctly interprets it. The output you specify
should match what your display is expecting.
The “HDR mastering is for” setting lets you insert
metadata for HDR output via HDMI 2.0a.
Resolve Color Management and the Fusion Page
Enabling RCM also allows the Fusion page to handle the color of clips automatically. Images output by
MediaIn nodes are automatically converted to Linear color space, which is the preferred color space
with which to perform high-quality compositing operations. Setting the LUT menu of each Viewer in
the Fusion page to Managed ensures that you’re looking at the image in Rec. 709, so that the image
looks correct to the artist even though they’re really working in the Linear color space. Each MediaOut
node then converts the image back to the timeline color space for handoff to the Color page.
With RCM off, you must manage color in the Fusion page manually, either using the Source Color
Space and Source Gamma Space settings of each MediaIn node, or using the CineonLog or FileLUT
nodes in your node tree.
For more information on how color management affects the Fusion page, and why the Linear color
space is preferable for compositing, see Chapter 76, “Controlling Image Processing and Resolution.”
Ability to Bypass Color Management Per Clip
When you right-click a clip in the Thumbnail Timeline of the Color page, a “Bypass Color Management”
setting appears underneath the Input Color Space and Input Gamma submenus that let you identify a
clip’s color characteristics. Choosing this option so that it appears checked lets you exclude that clip
from color management altogether, in the event you want to manually manage that clip using LUTs, the
Color Space Transform node, or simply by doing manual grading.
The Bypass Color Management option for clips in
the contextual menu of the Thumbnail Timeline
Exporting Color Space Information to QuickTime Files
If you render QuickTime files from the Deliver page, then color space tags will be embedded into each
file based on either the Timeline Color Space (if Resolve Color Management is disabled) or the Output
Color Space (if Resolve Color Management is enabled). Two settings in the Advanced Settings of the
Render Settings let you choose how color space metadata will be embedded into your output for
supported media formats, “Color Space Tag,” and “Gamma Tag.” These default to “Same as Project,”
which will match the Output Color Space currently selected in the Project Settings.
The Color Space Tag and Gamma Tag settings in the Render Settings
Color Management Using ACES
The ACES (Academy Color Encoding System) color space has been designed to make scene-referred
color management a reality for high-end digital cinema workflows. ACES also makes it easier
to extract high-precision, wide-latitude image data from raw camera formats, in order to preserve
high-quality image data from acquisition through the color grading process, and to output high-quality
data for broadcast viewing, film printing, or digital cinema encoding.
An oversimplification of the way ACES works is that every camera and acquisition device is
characterized to create an IDT (Input Device Transform) that specifies how media from that device is
converted into the ACES color space. The ACES gamut has been designed to be large enough to
encompass all visible light, with more than 25 stops of exposure latitude. In this way ACES has been
designed to be future-proof, taking into consideration advances in image capture and distribution.
Meanwhile, an RRT (Reference Rendering Transform) is used to transform the data provided by each
image format’s IDT into standardized, high-precision, wide-latitude image data that in turn is processed
via an ODT (Output Device Transform). Different ODT settings correspond to each standard of
monitoring and output, and describe how to accurately convert the data within the ACES color space
into the gamut of that display in order to most accurately represent the image in every situation.
The RRT and ODT always work together.
ACES signal and processing flow
By using the ACES color space and specifying an IDT and an ODT, you can ingest media from any
capture device, grade it using a calibrated display, output it to any destination, and preserve the color
fidelity of the graded image.
Setting Up ACES in the Project Settings Window
The following parameters, available in the Color Management panel of the Project Settings, let you
set up DaVinci Resolve to use the ACES workflow:
– Color science is: Using this drop-down menu, you can choose either ACEScc or ACEScct color
science, which enables ACES processing throughout DaVinci Resolve.
– ACEScc: Choose DaVinci ACEScc color science to apply a standard Cineon-style log
encoding to the ACES data before it is processed by DaVinci Resolve. This well defined
common encoding makes it possible for ASC CDL values to be used across systems using the
same ACEScc encoding. After processing, a reverse encoding is applied in order to output
ACES linear data.
– ACEScct: A variation of ACEScc that adds a roll-off at the toe of the image that’s different
from the encoding of ACEScc, in order to make color correction lift operations “feel” more like
they do with film scans and LogC encoded images, which makes it easier to raise the darkest
values of the image and get milky shadows, something that can be difficult with ACEScc. After
processing, a reverse encoding is applied in order to output ACES linear data. (A short numerical
sketch of the ACEScc and ACEScct encodings appears after these settings.)
– ACES Version: When you’ve chosen one of the ACES color science options, this drop-down
becomes available to let you choose which version of ACES you want to use. You can choose from
ACES 1.0.3, ACES 1.1, ACES 1.2, or ACES 1.3 (the latest version).
– ACES Input Device Transform: This drop-down menu lets you choose which IDT (Input Device
Transform) to use for the dominant media format in use. DaVinci Resolve currently supports the
following IDTs:
– ACEScc/ACEScct/ACEScg: Standardized transforms for each of these ACES standards.
– ADX (10 or 16): 10-bit or 16-bit integer film-density encoding transforms meant for use if you’re
working with film scans that were initially encoded in an ACES workflow. This transform is
designed to maintain the variation in look between different film stocks.
– ALEXA: Color management settings for all ARRI ALEXA cameras.
– BMD Film/4K/4.6K: Color management settings for Blackmagic Design cameras.
– Canon 1D/5D/7D/C200/C300/C300MkII/C500/C700: Color management settings for
Canon cameras.
– DCDM: This IDT transforms X’Y’Z’-encoded media with a gamma of 2.6.
– DCDM (P3D65 Limited): This IDT transforms X’Y’Z’-encoded media with a gamma of 2.6,
specifically hard clipped to a P3 gamut with a D65 white point.
– DRAGONcolor/2 and REDgamma3/4/REDlogFilm combinations: Different combinations of the
DRAGONcolor, REDgamma, and REDlogFilm settings are provided for legacy RED workflows.
– P3-D60: Transforms RGB-encoded image data with a D60 white point, intended for monitoring
with a P3-compatible display using a D60 white point.
– P3-D65: Transforms RGB-encoded image data with a D65 white point; intended for monitoring
with a P3-compatible display using a D65 white point.
– P3-D65 (D60 sim.): Transforms RGB-encoded image data with a D65 white point; intended to
simulate monitoring with a P3-compatible display using a D60 white point on a display with D65.
– P3-D65 ST2084 (108/1000/2000/4000 nits): Transforms an image that’s compatible with the
P3 color gamut, using the SMPTE standard PQ (ST.2084) tone curve for High Dynamic Range
(HDR) post-production. Four settings for four different peak luminance ranges are provided;
which one is appropriate to use depends on the maximum white level of the display used to
create the media. Preliminary standards exist for HDR displays with peak luminance at 1000
nits, 2000 nits, and 4000 nits. A setting of 108 nits is provided for Kodak laser projection.
– P3-DCI (D60 sim.): Produces output that’s specifically for output on a DCI projector with
a D60 white point. This output may look magenta on other display devices that aren’t set up for
DCI display.
– P3-DCI (D65 sim.): Produces output that’s specifically for output on a DCI projector with
a D65 white point. This output may look magenta on other display devices that aren’t set up for
DCI display.
– Panasonic V35: Color management settings for this camera.
– Rec.2020: This IDT transforms media created with the wide-gamut standard for consumer and
broadcast television.
– Rec.2020 ST2084 (1000/2000/4000 nits): This IDT transforms media created within the
wide-gamut standard for consumer and broadcast television, using the SMPTE standard PQ
(ST.2084) tone curve for High Dynamic Range (HDR) post-production. Three settings provided
for HDR televisions with different peak luminance capabilities.
– Rec.2020 HLG (1000 nits): This IDT transforms media within the wide-gamut standard for
consumer and broadcast television and uses the Hybrid Log-Gamma (HLG) standard tone curve
for High Dynamic Range (HDR) post-production. A single setting is provided for HDR televisions
with peak luminance at 1000 nits.
– Rec.709 (Camera): A deprecated legacy IDT for Rec. 709 that’s included for backward
compatibility. Converts the source data to linear based on Rec. 709 and transforms the result
to ACES, but while this transformation is technically correct, it’s not necessarily pleasing after
conversion through the matching ODT. For this reason, the Academy updated to the following
Rec. 709 IDT, which is the inverse of the Rec. 709 ODT.
– Rec.709: A standard transform designed to move media in the Rec. 709 color space into the
ACES color space. This option is used for any other file type that might be imported, such as
ProRes from Final Cut Pro, DNxHD from Media Composer, and any media file captured from tape.
– Rec.709 (D60 sim.): A standard transform designed to move media in the Rec. 709 color space
with a white point of D60 into the ACES color space.
– REDColor2/3/4/REDGamma3/4/REDLogFilm combinations: Different combinations of the
REDcolor, REDgamma, and REDlogFilm settings are provided for legacy RED workflows.
– RWGLog3G10: The standardized RED IPP2 color pipeline transform for all RED camera media.
If you’re working on a project that mixes media formats that require different IDTs, then you can assign
different IDTs to clips using the Media Pool’s contextual menu, or using the Clip Attributes window,
which is also accessible via the Media Pool’s contextual menu.
– ACES Output Device Transform: This drop-down menu lets you choose an ODT (Output Device
Transform) with which to transform the image data for monitoring on your calibrated display, and
when exporting a timeline in the Deliver page. You can choose from the following options:
– ADX (10 and 16): A standardized ODT designed for media destined for film output. Two settings
accommodate 10-bit and 16-bit output. This ODT is not meant to be used for monitoring.
– DCDM: This ODT exports X’Y’Z’-encoded media with a gamma of 2.6 intended for handoff
to applications that will be re-encoding this data to create a DCP (Digital Cinema Package) for
digital cinema distribution. This can be displayed via an XYZ-capable projector.
– DCDM (P3D60 Limited): Outputs a P3 hard-limited signal with a D60 white point.
– DCDM (P3D65 Limited): Outputs a P3 hard-limited signal with a D65 white point.
– P3 D60: Outputs RGB-encoded image data with a D60 white point; intended for monitoring
with a P3-compatible display using a D60 white point.
– P3 D65: Outputs RGB-encoded image data with a D65 white point; intended for monitoring
with a P3-compatible display using a D65 white point.
– P3 D65 (D60 sim.): Outputs RGB-encoded image data to simulate monitoring with a P3-
compatible display using a D60 white point on a display with a D65 white point.
– P3 D65 (Rec.709 Limited): Outputs RGB-encoded image data with a D65 white point within a
P3 gamut that’s hard-limited to the color range of Rec. 709.
– P3 D65 ST2084 (108/1000/2000/4000 nits): Outputs an image that’s compatible with the
P3 color gamut, using the SMPTE standard PQ tone curve for High Dynamic Range (HDR)
post-production. Four settings for four different peak luminance ranges are provided; which
one is appropriate to use depends on the maximum white level of your display. Preliminary
standards exist for HDR displays with peak luminance at 1000 nits, 2000 nits, and 4000 nits.
A setting of 108 nits is provided to simulate an HDR signal clipped to an SDR range.
– P3 DCI (D60 sim.): Outputs RGB-encoded P3 image data that appears as if with a D60 white
point on a DCI projector with a DCI white point.
– P3 DCI (D65 sim.): Outputs RGB-encoded P3 image data that appears as if with a D65 white
point on a DCI projector with a DCI white point.
– Rec.2020: This ODT is for compatibility with the full range of this wide-gamut standard for
consumer and broadcast television.
– Rec.2020 (P3D65 Limited): Outputs a P3D65 hard-limited signal within this wide-gamut
standard for consumer and broadcast television.
– Rec.2020 (Rec.709 Limited): Outputs a Rec. 709 hard-limited signal within this wide-gamut
standard for consumer and broadcast television.
– Rec.2020 HLG: Outputs the full Rec. 2020 gamut to the Hybrid Log-Gamma standard for HDR.
– Rec.2020 HLG (1000 nits, P3D65 Limited): Outputs a 1000 nit, P3D65 hard-limited signal
within the Rec. 2020 gamut and the Hybrid Log-Gamma standard for HDR.
– Rec.2020 ST2084 (1000/2000/4000 nits): This ODT transforms media created within the
wide-gamut standard for consumer and broadcast television, using the SMPTE standard
PQ (ST.2084) tone curve for High Dynamic Range (HDR) postproduction. Three settings are
provided for HDR televisions with different peak luminance capabilities.
– Rec.2020 ST2084 (1000/2000/4000 nits, P3D65 Limited): This ODT transforms media within
the wide-gamut standard for consumer and broadcast television but with hard clipping at the
boundary of the P3 gamut for televisions that are limited to the smaller P3 gamut for digital
cinema; also uses the SMPTE standard PQ (ST.2084) tone curve for High Dynamic Range
(HDR) post-production. Three settings are provided for HDR televisions with different peak
luminance capabilities.
– Rec.709: This ODT is used for standard monitoring and deliverables for TV.
– Rec.709 (D60 Sim): Outputs Rec. 709 image data that simulates a D60 creative white point on a
standard Rec. 709 display with a D65 white point.
– sRGB: A standardized transform designed for media created for computer display in a
consumer environment.
– sRGB (D60 Sim.): A standardized ODT designed for media destined for computer display in a
consumer environment. Suitable for monitoring when grading programs destined for the web.
– ACEScc/ACEScct/ACEScg: Standardized transforms for each of these ACES standards.
You must manually select an ODT that matches your workflow and room setup when working in ACES.
– Process Node LUTs in: This drop-down menu lets you choose how you want to process
CLF LUTs that are added to nodes in your grades while working in ACES, such as Look LUTs
in on-set or VFX workflows. There are two choices: ACEScc AP1 Timeline Space (the default),
and ACES AP0 Linear.
– ACEScc AP1: For LUTs that have been designed to take the specific range of ACEScc data
using the AP1 primary coordinates.
– ACES AP0: For LUTs that have been designed for normal ACES data with floating point values
ranging from -65504 to 65504.
NOTE: ACES grades require CLF LUTs that have been specifically created for ACES
workflows. If you want to apply a regular LUT within a grade, you must do a color space
transform to convert the image from ACES to whatever space the LUT was designed to work
within, and then another color space transform to convert the image back to ACES; however,
this workflow does not always provide ideal results.
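For reference, here is the short numerical sketch of the ACEScc and ACEScct log encodings mentioned earlier, using the constants published in the ACES specifications. It is provided only to show why ACEScct shadows respond differently to lift operations; the function names are ours.

import math

def aces_cc_encode(lin):
    # ACEScc log encoding (for linear values at or above 2**-15).
    return (math.log2(lin) + 9.72) / 17.52

def aces_cct_encode(lin):
    # ACEScct: the same log segment, plus a linear "toe" below the break point
    # so that lift-style operations on shadows feel more like film/log material.
    X_BRK = 0.0078125
    A = 10.5402377416545
    B = 0.0729055341958355
    if lin <= X_BRK:
        return A * lin + B
    return (math.log2(lin) + 9.72) / 17.52

print(round(aces_cc_encode(0.18), 5))     # 18% gray lands at ~0.4136 in both encodings
print(round(aces_cct_encode(0.18), 5))
print(round(aces_cct_encode(0.001), 5))   # deep shadows stay on ACEScct's linear toe (~0.083)
print(round(aces_cc_encode(0.001), 5))    # ACEScc sends the same value slightly negative (~ -0.014)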
The Initial State of Clips When Working in ACES
Don’t worry if the initial state of each image file appears different from what was monitored originally
on set. What’s important is that if the camera original media was well exposed, the IDT used in ACES
mode will retain the maximum amount of image data, and provide the maximum available latitude for
grading, regardless of how the image initially appears on the Timeline.
The Timeline Color Space in ACES Workflows is Fixed
When you’re working in ACES, you do not get to change the Timeline Color Space as you do in
Resolve Color Management. The ACES working color space is a log-encoded color space, which
encourages a more traditional, film-oriented approach to grading.
Tips for Rendering Out of an ACES Project
When choosing an output format in the Deliver page, keep the following in mind:
– If you’re delivering graded media for broadcast, set the ACES Output Device Transform to be
Rec. 709, then you can output to whatever media format is convenient for your workflow.
– When you’re delivering graded media files to another ACES-capable facility using the DCDM or
ADX ODTs, you should choose the OpenEXR RGB Half (uncompressed) format in the Render
Settings, and set the ACES Output Device Transform to “No Output Device Transform.”
– When you’re rendering media for long-term archival, you should choose the OpenEXR RGB Half
(uncompressed) format in the Render Settings, and set the ACES Output Device Transform to
“No Output Device Transform.”
Chapter 10
HDR Setup
and Grading
High Dynamic Range (HDR) grading for cinema, television, and streaming is the latest
evolution of the consumer media experience. While HDR workflows in high-end
cinema and television aren’t new, this way of mastering media has been slow to
expand to less expensive programming. However, new developments and an
expanding array of affordable HDR-capable consumer devices are poised to make
HDR mastering of visual content increasingly ubiquitous. This chapter describes what
HDR is for the uninitiated and covers the operational details that will let you set up
DaVinci Resolve to do HDR grading.
Contents
High Dynamic Range (HDR) Grading in DaVinci Resolve 216
HDR Isn’t Just for Televisions 217
The Different Ways of Mastering HDR 217
What Do I Do With HDR? 218
Analyzing HDR Signals Using Video Scopes 218
Dolby Vision® 219
Organizing Your Timeline for Dolby Vision Mastering 221
Letterboxing for Dolby Vision Mastering 221
Setting Up Color Management for Dolby Vision Mastering 222
Choosing Mastering Displays for Dolby Vision 222
Using the Dolby Vision Internal Content Mapping Unit (iCMU) 222
Simultaneous Master and Target Display Output for Dolby Vision 223
External Content Mapping Unit (eCMU) for Dolby Vision 223
Auto Analysis is Available to All Studio Users 224
Licensing DaVinci Resolve to Expose Dolby Vision Trim Controls 224
Dolby Vision Trim Controls in DaVinci Resolve® 225
Previewing and Trimming At Different Levels 227
Managing Dolby Vision Metadata 228
Setting Up Resolve Color Management for Grading HDR 229
DaVinci Resolve Grading Workflow For Dolby Vision 230
Delivering Dolby Vision 230
SMPTE ST.2084 and HDR10 231
Monitoring and Grading to ST.2084 in DaVinci Resolve 233
Connecting to HDR-Capable Displays using HDMI 2.0a 233
HDR10+™ 233
Monitoring and Grading to ST.2084 for HDR10+ 234
HDR10+ Grading Workflow 234
Simultaneous Master and Target Display Output for HDR10+ 234
HDR10+ Auto Analysis Commands 235
Delivering HDR10+ 235
Hybrid Log-Gamma (HLG) 235
Grading Hybrid Log-Gamma in DaVinci Resolve 236
Outputting Hybrid Log-Gamma 236
Dolby, Dolby Vision, and the double-D symbol are registered trademarks of Dolby Laboratories Licensing Corporation.
High Dynamic Range (HDR)
Grading in DaVinci Resolve
The HDR features found in DaVinci Resolve are only available in DaVinci Resolve Studio.
High Dynamic Range (HDR) video describes an emerging family of video encoding and distribution
technologies designed to enable a new generation of television displays to play video capable of
intensely bright highlights and increased saturation. The general idea is that the majority of an HDR
image will be graded similarly to how a Standard Dynamic Range (SDR) image is graded now, with the
shadows and midtones being mostly the same between traditionally SDR and HDR-graded images.
This is mostly because shadows are shadows and are meant to be dark; however, this philosophy also
maintains a comfortable viewing experience and easier backward compatibility when you need to
master both SDR and HDR versions of a program. The difference is that HDR provides abundant
additional headroom for very bright highlights and color saturation that far exceed what has been
previously visible in SDR television and cinema. This enables the colorist to create more vivid and
life-like highlights in images, such as sunsets, lit clouds, firelight, explosions, sparkles, and other
intensely bright and colorful imagery. In short, you can now “open up” the highlights in an image just
as you’ve always been able to open up, or expand, the detail of the shadows. This not only provides
more life-like lighting intensity and saturation, but it also dramatically expands the contrast available in
the scene. For example, a calibrated SDR display should have a peak luminance level of 100 nits
(cd/m2), but existing HDR displays can provide peak luminance levels of 700, 1000, or even 4000 nits.
However, because it’s an evolving technology, the technical standards that have been developed far
exceed what current consumer televisions, projectors, phones, and tablets are capable of. At the time
of this writing, consumer televisions are capable of outputting 700 to 1600 nits. Furthermore,
consumer displays are often saddled with automatic brightness limiting (ABL) circuits that limit power
consumption to acceptable levels for home use, which means that only a certain percentage of the
picture may reach these peak values at any one time. This is fine, because the point of HDR is not that
you’re making the entire image brighter, it’s that you have more headroom for specific bright highlights
and additional saturation.
For all of these reasons, HDR standards focus on describing what displays should be capable of, not
how these levels are to be used. That is a creative decision.
HDR Isn’t Just for Televisions
Lest you think that living room televisions and projectors are the only way to watch HDR content,
certain flagship iOS and Android phones and tablets have implemented HDR viewing capabilities that
are capable of meeting or even exceeding the UltraHD requirements for HDR content on an OLED
display. This makes HDR, surprisingly, a widely available mobile experience.
The Different Ways of Mastering HDR
While different HDR technologies use different methods to map the video levels of your program to an
HDR display’s capabilities, they all output a “near-logarithmically” encoded signal that requires a
compatible television that’s capable of correctly stretching this signal into its “normalized” form for
viewing. This means if you look at an HDR signal that’s output from the video interface of your grading
workstation on an SDR display, it will look flat, desaturated, and unappealing until it’s plugged into your
HDR display of choice.
A graded HDR image being output looks similar to a log-encoded image
At the time of this writing, there are four principal approaches to mastering HDR that DaVinci Resolve
is capable of supporting, including:
– Dolby Vision®
– HDR10
– HDR10+
– Hybrid Log-Gamma (HLG)
Each of these HDR standards defines how an HDR signal is encoded for export and later mapped to the
visible output of an HDR or SDR display. Grading to each of these standards requires some degree of
color management, and DaVinci Resolve gives you three main ways to handle this:
– The easiest way is to enable Resolve Color Management (RCM) or ACES in the Color Management
panel of the Project Settings, and use the Color Space conversion options that are available.
There are options there for each supported type of HDR.
– The transforms that are available in RCM are also available as Resolve FX operations,
if you want to organize your grading pipeline more manually using the Color Space Transform
Resolve FX adjustment.
– LUTs are also available to accomplish each of these color space conversions if you want
to develop your own specific image processing pipeline based on custom-made LUT
or DCTL transforms.
Overall, Resolve Color Management and ACES are reliable and recommended approaches to handling
HDR grading in DaVinci Resolve in most instances. For more information about Resolve Color
Management, see Chapter 9, “Data Levels, Color Management, and ACES.”
What Do I Do With HDR?
While these standards make HDR mastering and distribution possible, they have nothing to say about
how these HDR-strength levels should be used creatively. That’s up to you, because the question of
how to utilize the expansive headroom for brightness and saturation that HDR enables is fully within
the domain of the colorist. It comes down to a series of creative decisions about how to assign the
range of highlights available in your source media to the above-100 nit HDR levels you’re mastering
to as you grade, given the peak luminance level you’ve been asked to master to. Which
HDR peak luminance level you use (1000 nit, 3000 nit, 4000 nit) probably depends on which display
you have access to and who’s distributing the resulting program.
Analyzing HDR Signals Using Video Scopes
When you’re using waveform scopes of any kind, including parade and overlay scopes, the signal will
fit within the 10-bit scale used to analyze the signal much differently owing to the way HDR is encoded.
The following chart of values will make it easier to understand how each level in “nits” (i.e., cd/m2)
corresponds to a code value within the 10-bit image scale:
Nearest 10-Bit Code Value    Level in cd/m2 (nits)    HDR Display Peak Luminance Capability
1019†                        10,000                   No commercially available display
920                          4000                     Dolby Pulsar
889                          3000                     Flanders Scientific XM310K w/L20 test pattern
844                          2000                     Dolby PRM 32FHD
767                          1000                     Sony BVM X300 w/L10 test pattern, EIZO Prominence CG3145, or Flanders Scientific XM311K
756                          900                      Flanders Scientific XM650U w/L20 test pattern
742                          800                      Panasonic TC-55FZ1000U w/L10 test pattern
728                          700                      Measured on an iPhone XS displaying 50% white
711                          600                      Canon V2411 (not in burst mode)
691                          500                      Minimum standard for an “UltraHD” OLED display
635                          300                      Flanders Scientific DM250 in “HDR preview mode” w/L40 pattern
593                          203                      BT.2408 recommendation for diffuse white of SDR content being intercut with 1000 nit max HDR content
528                          108                      Dolby Cinema projector
520                          100                      Standard peak luminance for SDR displays
447                          48                       Standard peak luminance for SDR DCI projection, Dolby Cinema 3D peak luminance
4†                           0                        Absolute black
† 0–3 and 1020–1023 are reserved values
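The code values in this table follow directly from the ST.2084 (PQ) inverse EOTF mapped onto the usable 10-bit range of 4–1019. The constants below come from SMPTE ST.2084, and the results agree with the table to within a code value of rounding; the function names are ours.

# ST.2084 (PQ) inverse EOTF: absolute luminance in nits -> 0-1 signal, then
# mapped onto the usable 10-bit code range of 4-1019 (0-3 and 1020-1023 are reserved).
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_signal(nits):
    y = min(max(nits / 10000.0, 0.0), 1.0)
    return ((C1 + C2 * y ** M1) / (1.0 + C3 * y ** M1)) ** M2

def ten_bit_code(nits):
    return 4 + round(pq_signal(nits) * (1019 - 4))

for nits in (10000, 1000, 203, 100, 48, 0):
    print(nits, ten_bit_code(nits))
# 10000 -> 1019, 1000 -> 767, 203 -> 594 (593 in the table; rounding), 100 -> 520, 48 -> 447, 0 -> 4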
While this table of values is useful for understanding where HDR nit levels fall on legacy external
scopes, if you’re monitoring with the built-in video scopes in DaVinci Resolve, you can turn on the
“Enable HDR Scopes for ST.2084” checkbox in the Color panel of the User Preferences, which
replaces the 10-bit scale of the video scopes with a scale based on nit values (or cd/m2) instead.
The video scopes with “Enable HDR
Scopes for ST.2084” enabled in the Color
panel of the User Preferences
TIP: If you’re unsatisfied with the amount of detail you’re seeing in the 0–519 range (0–100
nits) of the video scope graphs, then you can use the 3D Scopes Lookup Table setting in the
Color Management panel of the Project Settings to assign the appropriate “HDR X nits to
Gamma 2.4 LUT,” with X being the peak nit level of the HDR display you’re using. This
converts the way the scopes are drawn so that the 0–100 nit range of the signal takes up the
entire range of the scopes, from 0 through 1023. This will push the HDR-strength highlights
up past the top of the visible area of the scopes, making them invisible, but it will make it
easier to see detail in the midtones of the image.
Dolby Vision®
Long a pioneer and champion of HDR for enhancing the consumer video experience,
Dolby Laboratories has developed a method for mastering and delivering HDR called Dolby Vision.
As with most HDR standards discussed in this chapter, Dolby Vision uses the PQ (perceptual quantizer)
electro-optical transfer function (EOTF, which defines how an electronic video signal is presented on
a display), which is defined by SMPTE ST.2084, along with a hierarchy of metadata that’s embedded
alongside the video stream. All metadata used by Dolby Vision is organized into levels, of which the
following are important to the colorist:
– Level 0 metadata, which is global metadata that defines the Mastering Display (what the colorist is
using), including aspect ratio, frame rate, color encoding and information on all the target displays
that are used for the Level 2 and Level 8 trim metadata below.
– Level 1 metadata, which is the Dolby Vision v2.9 analysis metadata that’s generated automatically
when you use the Dolby Vision controls to analyze the clips in the timeline. The controls for
automatically generating Level 1 metadata are available to all DaVinci Resolve Studio users. (A toy
sketch of this kind of per-shot analysis appears after this list.)
– Level 2 metadata, which is the Dolby Vision v2.9 trimming metadata that’s set by the colorist via the
version 2.9 trim controls available in the Dolby Vision palette of the Color page. This trimming allows
adjustment of how the Dolby Vision image is to be mapped to a target display (such as a 100 nit
BT.709 display) that’s different from the mastering display (such as a 1000 nit BT.2020 display).
The purpose of this metadata is to maintain a program’s artistic intent by providing guidance
from the colorist over how the program’s signal should be fit into the differing luminance ranges
of a variety of displays with different peak luminance capabilities. Manually adjustable Level 2
metadata is only available to DaVinci Resolve Studio users via a license obtained from Dolby.
– Level 3 metadata, which is the offset for Dolby Vision v4.0 added to Level 1 metadata generated
by the analyze buttons in the Dolby Vision controls. It also stores the mid tone offset data.
– Level 5 metadata, which provides information about the aspect ratio of the deliverable format, and
the aspect ratio of the actual image within that format. This metadata is also applicable at the per
clip level.
– Level 6 metadata, which stores the MaxCLL and MaxFALL levels required by the HDR10 mastering
standard of HDR.
– Level 8 metadata, which is the updated Dolby Vision v4.0 trimming metadata that’s set by the
colorist via the v4.0 trim controls available in the Dolby Vision palette of the Color page. This
evolved set of trimming commands allows more detailed adjustment of how the Dolby Vision
image is to be mapped to a target display (such as a 100 nit BT.709 display) that’s different from
the mastering display (such as a 1000 nit BT.2020 display). Just like Level 2 metadata, the purpose
of Level 8 metadata is to maintain a program’s artistic intent by providing guidance from the
colorist over how the program’s signal should be fit into the differing luminance ranges of a variety
of displays with different peak luminance capabilities. Manually adjustable Level 8 metadata is
only available to DaVinci Resolve Studio users via a license obtained from Dolby. Whether you
use Level 2 trim controls or Level 8 trim controls depends on the Dolby Vision version setting you
choose in the Color Management panel of the Project Settings.
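As the toy sketch referenced above: Dolby Vision Level 1 metadata is commonly described as per-shot minimum, average (“mid”), and maximum luminance values. The code below only illustrates that idea on arrays of per-pixel luminance; it is not Dolby's analysis algorithm, real Level 1 values are stored PQ-encoded, and the function name is ours.

import numpy as np

def level1_style_analysis(shot_frames_nits):
    # shot_frames_nits: a list of 2D arrays of per-pixel luminance in nits for one shot.
    # This toy version reports min/mid/max in nits; it is NOT Dolby's analysis,
    # and real Level 1 values are stored PQ-encoded.
    mins, avgs, maxs = [], [], []
    for frame in shot_frames_nits:
        mins.append(frame.min())
        avgs.append(frame.mean())
        maxs.append(frame.max())
    return {"min": float(np.min(mins)),
            "mid": float(np.mean(avgs)),
            "max": float(np.max(maxs))}

shot = [np.array([[0.05, 120.0], [350.0, 980.0]]),
        np.array([[0.01, 95.0], [400.0, 1200.0]])]
print(level1_style_analysis(shot))   # e.g. {'min': 0.01, 'mid': ..., 'max': 1200.0}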
NOTE: It’s currently recommended for all users to choose Dolby Vision v4.0 for analysis and
trimming, as it provides superior results. If you’re required to deliver Dolby Vision v2.9
metadata when mastering for backwards compatibility, DaVinci Resolve can now export v2.9
metadata from projects using v4.0 workflows.
The metadata levels described above are current as of this writing. However, Dolby Vision is a
rapidly evolving technology, and as Dolby adds new features and metadata levels you should
reference Dolby’s website to keep track of the latest developments:
https://professionalsupport.dolby.com/s/article/Dolby-Vision-Metadata-Levels?language=en_US
For the foreseeable future, the current consumer display landscape encompasses a wide variety of
differently performing televisions and projectors that are guaranteed to improve year over year. This
means that mastering for today’s displays may render content less vibrant than content that emerges
five years from now. This can be especially vexing for narrative content that will have a long lifespan
on streaming services as new generations of viewers discover it. While one way of solving this
would be to re-grade your program many times at a variety of nit levels to create deliverables suitable
to a range of display capabilities, that’s an enormous amount of work.
Dolby Vision offers a shortcut by using sophisticated algorithms to derive automatically analyzed
metadata that intelligently guides how an image graded at one nit level (say 4000 nits) can be
adjusted to be perceptually similar to viewers watching a 1000 nit display. Highlights and saturation
that are too bright for a particular display will be adjusted to provide as close to the same experience
as possible without clipping or flattening image detail.
Furthermore, this automatic analysis can be manually trimmed by a colorist to account for the artistic
intentions of the authors of a program, in cases where the automatic analysis doesn’t do exactly what’s
wanted. This combination of auto-analysis and manual trimming is key to how Dolby Vision streamlines
the process of mastering programs to accommodate backward compatibility with SDR displays, as well
as the varying peak luminance capabilities of different makes and models of HDR consumer displays,
both now and in the future. You’re only required to make a 100 nit trim pass to guide the HDR
program’s conversion all the way down to SDR, and the Dolby Vision system can use that information
to guide how intermediate presentations (such as at 700 or 1200 nits) should be adjusted. You can
even do multiple trim passes at specific nit levels, such as a 100 nit pass and a 1000 nit pass, to give
the Dolby Vision system more information to accurately guide intermediate presentations on different
displays. Additionally, you don’t have to trim every clip. If the analysis is good, you can skip those clips
and only trim clips that need it. The overall system has been created to make it as efficient as possible
for colorists to ensure that the widest variety of viewers see the image as it’s meant to be seen.
This, in a nutshell, is the advantage of the Dolby Vision system. You can grade a program on a more
future-proofed 4000 nit display, and use auto-analysis plus one or two manual trim passes to make the
program backward compatible with SDR televisions, and capable of intelligently scaling the HDR
highlights to provide the best representation of the mastered image for whatever peak luminance and
color volume a particular television is capable of. All of this is guided by decisions made by the colorist
during the grade.
At the time of this writing, all seven major Hollywood studios are mastering in Dolby Vision for cinema.
Studios that have pledged support to master content in Dolby Vision for home distribution include
Universal, Warner Brothers, Sony Pictures, and MGM. Content providers that have agreed to distribute
streaming Dolby Vision content include Netflix, Vudu, and Amazon. If you want to watch Dolby Vision
content on television at home, consumer television manufacturers LG, TCL, Vizio, HiSense, Sony,
Toshiba, and Bang & Olufsen have all shipped models with Dolby Vision support.
Organizing Your Timeline for Dolby Vision Mastering
One of the first things you need to do before doing a Dolby Vision grade is to organize your timeline
accordingly. Because each clip undergoes a visual analysis to facilitate the Dolby Vision workflow,
there are specific limitations to how clips can appear in a timeline.
– All clips to be analyzed in a Dolby Vision workflow need to be on video track V1; clips on other
tracks will be ignored.
– All clips that overlap one another as part of a composite must be turned into a single item in
the timeline in order to be correctly analyzed. This means that each group of clips that create
a composite in a timeline, be it multiple overlapping clips combined via keys or alpha channel
transparency, multiple overlapping clips combined using composite or blend modes, or text
generators appearing above one or more video clips, must be turned into a compound clip for
Dolby Vision analysis to work correctly.
Letterboxing for Dolby Vision Mastering
The analysis of clips in a Dolby Vision workflow keeps track of the timeline aspect ratio, as well as the
image aspect ratio of each clip in that timeline. Programs that mix different aspect ratios of letterboxing
(or blanking) are accommodated by the Dolby Vision analysis; however, Dolby Vision does not support
blanking on two axes at once (both pillarbox and letterbox), only one at a time.
If you choose Show Blanking Clip Override in the Output Sizing mode of the Sizing palette, you have
the option of overriding the overall Timeline Blanking settings with individual Clip Blanking settings.
You can do this by choosing the Clip option and then turning off the Use Timeline Blanking checkbox.
At this point, you can choose any letterboxing format you want, and the correct letterboxing ratio will
be stored as part of the metadata.
The Show Blanking Clip Override options with
the Use Timeline Blanking box unchecked
Setting Up Color Management for Dolby Vision Mastering
For an HDR signal to look correct, you need to output your graded program using the right EOTF for
the HDR standard you're mastering to. The EOTF maps the levels DaVinci Resolve outputs to
your HDR display; for Dolby Vision, this is the SMPTE ST.2084 (PQ) setting. You can
set this up in one of three ways:
– Output Color Space and Gamma settings in RCM or ACES
– Color Space and Gamma settings within a series of Resolve FX Color Transform plug-ins that can
be used at the end of each grade or at the end of a Timeline grade
– 3D LUTs used for converting signals from one standard to another that can be used at the end of
each grade or at the end of a Timeline grade
While Dolby Vision content is not limited to a particular color space, Resolve Color Management
provides a P3 D65 setting that matches the capabilities of most mastering displays in use at the time of
this writing.
Choosing Mastering Displays for Dolby Vision
To do HDR grading, you need a suitable HDR display. Technically any monitor that supports SMPTE
ST.2084 (aka PQ) will work. Happily, a growing number of professional displays from Sony, Flanders
Scientific, TV-Logic, Canon, and Eizo are suitable for use in HDR grading suites. EBU Tech 3320
specifies the requirements for a Grade 1 HDR mastering monitor. Dolby recommends the following
minimum requirements for HDR monitors:
– A minimum Peak Luminance of 1000 nits
– A 200,000:1 contrast ratio
– Minimum black at 0.005 nits
– Capable of at least 99% of P3 gamut
For more information on Dolby best practices for color grading Dolby Vision, visit:
https://www.dolby.com/us/en/technologies/dolby-vision/dolby-vision-for-creative-professionals.html.
Using the Dolby Vision Internal Content Mapping Unit (iCMU)
DaVinci Resolve has a GPU-accelerated “internal” software version of the Dolby Vision CMU (Content
Mapping Unit) for previewing Dolby Vision mapping right in DaVinci Resolve. iCMU support can be
enabled and set up in the Color Management panel of the Project Settings by turning on the Enable
Dolby Vision checkbox. This is a DaVinci Resolve Studio-only feature.
Dolby Vision settings in the Color Management panel of the Project Settings
The Dolby Vision group of settings also exposes menus for choosing the version of Dolby Vision you
want to use, what kind of Master Display you're using, and whether or not to use an eCMU (assuming
you have one). Finally, turning Dolby Vision on also enables the Dolby Vision palette and
controls in the Color page, which are described in greater detail later in this chapter.
To master with Dolby Vision in DaVinci Resolve using the built-in iCMU, you still need a more specific
hardware setup than the average grading and finishing workstation, consisting of the following
equipment:
– Your DaVinci Resolve grading workstation, outputting via either a DeckLink 8K Pro or DeckLink 4K
Extreme 12G video interface
– A mastering display capable of outputting HDR nit levels suitable for the deliverable you’re
required to produce
Simultaneous Master and Target Display Output for Dolby Vision
When mastering HDR and trimming versions for more limited displays, it’s extremely useful to be able
to evaluate your HDR grade and SDR trim pass side-by-side. It’s possible to output both the Master
Display output and the Target Display output simultaneously when you’re grading with either Dolby
Vision or HDR10+ enabled.
Necessary Hardware
To work in this manner, you must have the following equipment:
– Your DaVinci Resolve grading workstation must output via a DeckLink 8K Pro or
DeckLink 4K Extreme 12G.
– Your Mastering Display must be capable of HDR nit levels suitable for the deliverable
you’re required to produce.
– A display that can be set to output calibrated SDR, probably using the BT.709 gamut.
Enabling Simultaneous Monitoring
When you set up your display hardware, the HDR Master Display must be connected to output A,
and the Target Display must be connected to output B of whichever BMD video output device you’re
using. Then, you need to turn on the “Use dual outputs on SDI” checkbox in the Master Settings of the
Project Settings. At this point, assuming all of your connections are compatible with one another, you
should see an HDR image output to your HDR display, and a trimmed image output to your
SDR display.
External Content Mapping Unit (eCMU) for Dolby Vision
DaVinci Resolve supports the use of a Dolby External Content Mapping Unit (eCMU) for studios doing
more intensive HDR mastering work, as it lets you monitor and adjust an HDR display alongside
an SDR display for side-by-side trimming at high resolutions via hardware. The eCMU also has the
ability to preview Dolby Vision on a consumer display in real time via HDMI tunneling to view directly
what the audience will see at home.
Auto Analysis is Available to All Studio Users
Resolve Studio enables users, whether or not they hold a Dolby Vision license, to automatically analyze the image and
generate Dolby Vision analysis metadata. This metadata is used to deliver Dolby Vision content and to
render other HDR and SDR deliverables from the HDR grade that you’ve made. This enables any
DaVinci Resolve Studio user to create Dolby Vision deliverables with Level 1 metadata. However,
manual trimming of the analysis metadata requires a license from Dolby.
The commands governing Dolby Vision auto-analysis, which are available to all Resolve Studio users,
are available in the Color > Dolby Vision™ submenu, as well as the Dolby Vision palette, and consist
of the following:
– Analyze All Shots: Automatically analyzes each clip in the Timeline and stores
the results individually.
– Analyze Selected Shot(s): Only analyzes selected shots in the Timeline.
– Analyze Selected And Blend: Analyzes multiple selected shots as if they were a single sequence.
The result is the same analysis being saved to each clip. Useful to save time when analyzing
multiple clips that have identical content.
– Analyze Current Frame: A fast way to analyze clips where a single frame is representative
of the entire shot.
Once you analyze a clip, the Min, Max, and Average fields automatically populate with the resulting
L1 data; these fields are not editable.
The metadata fields for each clip
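To give a rough sense of what the L1 values represent, here is a purely illustrative Python sketch of per-shot minimum, maximum, and average statistics computed from PQ-encoded luminance. Dolby's actual analysis algorithm is proprietary and more sophisticated; this is not what DaVinci Resolve runs internally.

```python
import numpy as np

def l1_style_stats(frames):
    """Illustration only: per-shot min/max/average of PQ-encoded luminance.

    frames: iterable of 2D NumPy arrays of PQ-encoded luminance (0.0-1.0),
    one array per frame of the shot. Dolby's real L1 analysis is more involved.
    """
    values = np.concatenate([frame.ravel() for frame in frames])
    return float(values.min()), float(values.max()), float(values.mean())

# Example with two small synthetic "frames":
rng = np.random.default_rng(0)
shot = [rng.uniform(0.0, 0.75, (4, 4)) for _ in range(2)]
print(l1_style_stats(shot))   # -> values analogous to the Min / Max / Avg fields
```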
Additionally, clips that have been analyzed show an HDR badge in the Thumbnail timeline, to help
you keep track of which clips have been analyzed and which have yet to be.
Analyzed clips have HDR
badges to identify them
Licensing DaVinci Resolve to Expose Dolby Vision Trim Controls
To expose the Dolby Vision controls in DaVinci Resolve Studio that let you make manual trims on top
of the automatic analysis that any copy of DaVinci Resolve Studio can do, you must email
dolbyvisionmastering@dolby.com to receive more information about obtaining a license.
Once you’ve obtained a license file from Dolby, you can import it by choosing File > Dolby Vision >
Load License; successful installation enables the Dolby Vision controls in the
Color page. You should also receive a display configuration file, which can be loaded via the File >
Dolby Vision > Load Configuration command and populates the Dolby Vision drop-down
menus with the most up-to-date options.
Dolby Vision® Trim Controls in DaVinci Resolve
Once you’ve analyzed a clip, you’re in a position to trim the result. The latest version of the
Dolby Vision palette exposes four sets of controls. The first are the main controls:
– Target Display Output: This drop-down specifies what Dolby refers to as the Target Display, used
to display the tone mapped image. This menu lets you choose specific display properties to obtain
a preview of what the trimmed image will look like on different displays with different gamuts and
peak luminance capabilities.
– Trim Controls for: Specifies which Target Display you’re currently trimming for. The default setting
(100-nit, BT.709, BT.1886, Full) lets you monitor an SDR version of the HDR image, so you can see
how the trim metadata tone maps the image on non-HDR televisions.
– Analyze controls: The commands governing Dolby Vision auto-analysis are available as buttons,
which perform the same functions as their similarly named counterparts in the Color > Dolby Vision
submenu. Please note that most trim controls are disabled until you perform an analysis, which is a
necessary first step.
– All: Automatically analyzes each clip in the current Timeline and stores the results individually.
– Selected: Only analyzes selected shots in the Timeline.
– Blend: Analyzes multiple selected shots as if they were a single sequence. The result is the
same analysis being saved to each clip. You need to use the blend option when analyzing two
clips that meet at a through edit separating otherwise contiguous frames. It’s also typical to use
the Blend option when analyzing a scene of clips that take place at the same location at the
same time, to ensure that natural variations in lighting don’t add unwanted variations between
the analyses of clips that are supposed to already be balanced with one another. Blend also
saves time when analyzing multiple clips that have identical content.
– Frame: Useful in situations where part of a clip has an extreme level of color or lightness that’s
not typical of the rest of the clip, that incorrectly biases the analysis and produces a poor result.
Placing the playhead on a frame that’s representative of how the clip is supposed to look and
using the Frame option bases the analysis on only that frame. This is also a fast way to analyze
clips where a single frame is representative of the entire shot.
– Enable Tone Mapping Preview: Lets you see the target display output in the Color page Viewer
and video output, so you can evaluate how the tone mapped version looks on your HDR display.
This control is disabled when you enable “Use dual-outputs on SDI” in the Master Settings of the
Project Settings, since the second output SDI now automatically displays the target display output.
– Mid Tone Offset (CM v4.0 only): This control is used to match the overall exposure of the
tone mapped SDR signal to the HDR master. This offset is applied to the L1 Mid values, allowing
the adjustment of mid tones without affecting the blacks and highlights. It can be used to shift
overall L1 analysis to ensure the best preservation of artistic intent. This setting is shared among all
trim passes you do at all nit levels, so if you’ve done two trim passes, one at 100 nits and another
at 1000 nits, adjusting this setting always adjusts both trim passes at once. Changes made to this
control are recorded to the L3 metadata for each clip.
The second are the Min, Mid, and Max metadata fields that are populated by the analyzed values of
the current clip. These fields cannot be edited, although analysis metadata can be copied and pasted
among clips. These values represent the L1 analysis and are used to calculate how the HDR image
should be trimmed to fit into the video standard specified by the Target Display.
The third are the Primary Trims, which are only editable if you’ve performed an analysis and if you have
a license from Dolby. Which controls are exposed depends on the version of Dolby Vision you’ve
selected in the Color Management panel of the Project Settings.
Dolby Vision CM v2.9 Controls
If you choose Dolby Vision 2.9 in the Color Management panel of the Project Settings, it activates the
2.9 version of Dolby’s content mapping algorithm and exposes the original Dolby Vision trim controls.
Using these controls is no longer recommended, since you can do a Dolby Vision 4.0 analysis and
trim, and still export converted 2.9 metadata for legacy workflows.
– Lift/Gamma/Gain: These controls function similarly to the Y-only Lift, Gamma, and Gain master
wheels of the Color Wheels palette, to let you trim the overall contrast levels of the image. The
Dolby Best Practices Guide recommends limiting positive Lift to no more than 0.025, and mostly
restricting yourself to Gamma and Gain when necessary to lighten the image.
– Saturation Gain: Lets you trim the saturation of the most highly saturated areas within a scene.
Lesser saturated values will be less affected.
– Chroma Weight: Darkens saturated parts of the image to preserve colorfulness in areas of the
image that are clipped by smaller gamuts that don’t have enough headroom for saturation in the
highlights.
– Tone Detail: Lets you preserve contrast detail in the highlights that might otherwise be lost when
the highlights are mapped to lower dynamic ranges, usually due to clipping. Increasing Tone Detail
Weight increases the amount of highlight detail that’s preserved. When used, it can have the
effect of sharpening highlight detail.
Dolby Vision CM v4.0 Controls
If you choose Dolby Vision 4.0 in the Color Management panel of the Project Settings, it activates the
4.0 version of Dolby’s content mapping algorithm, and exposes the following controls.
– Lift/Gamma/Gain: These controls function similarly to the Y-only Lift, Gamma, and Gain master
wheels of the Color Wheels palette, to let you trim the overall contrast levels of the image. The
Dolby Best Practices Guide recommends limiting positive Lift to no more than 0.025, and mostly
restricting yourself to Gamma and Gain when necessary to lighten the image.
– Saturation Gain: Lets you trim the saturation of the most highly saturated areas within a scene.
Lesser saturated values will be less affected.
– Chroma Weight: Darkens saturated parts of the image to preserve colorfulness in areas of the
image that are clipped by smaller gamuts that don’t have enough headroom for saturation in the
highlights.
– Tone Detail: Lets you preserve contrast detail in the highlights that might otherwise be lost when
the highlights are mapped to lower dynamic ranges, usually due to clipping. Increasing Tone Detail
Weight increases the amount of highlight detail that’s preserved. When used, it can have the
effect of sharpening highlight detail.
– Mid Contrast Bias: Affects image contrast in the region around the computed average picture
level. This lets you increase or decrease contrast in the midtones of the image.
– Highlight Clipping: Reduces detail and affects the roll-off of the brighter parts of the image by
clipping the highlights as required. This is useful when the tone mapped image is displaying
unwanted details.
The Primary Trims controls that are found in the Dolby Vision palette are only enabled
once you’ve authorized your system with a special license, available from Dolby.
The fourth set of controls is available via a second palette mode, the Secondary Trims. These are only
editable if you’ve performed an analysis and if you have a license from Dolby.
– Secondary Saturations: A set of slider-based vector-style controls (similar to the Hue vs. Sat
curve) lets you adjust the Saturation of Red, Yellow, Green, Cyan, Blue, and Magenta to help you
selectively fine tune the results.
– Secondary Hues: Another set of slider-based vector-style controls (similar to the Hue vs. Hue
controls) lets you adjust the Hue of Red, Yellow, Green, Cyan, Blue, and Magenta to help you fine
tune the results.
The Secondary Trims controls, as seen on a licensed Dolby Vision system
Together, all of this trimming metadata lets the colorist guide how the iCMU or eCMU transforms the
image from the Mastering Display specified in the Project Settings to the Target Display specified in
the Dolby Vision palette. This metadata is carried throughout the ecosystem so that your artistic intent
is preserved on a variety of platforms and displays.
Previewing and Trimming At Different Levels
Additionally, the iCMU or eCMU can be used to preview 100 nit, 600 nit, 1000 nit, and 2000 nit
versions of your program, with different gamuts, if you want to see how your master will scale to those
combinations of peak luminance levels and standards. This, of course, requires your DaVinci Resolve
workstation or eCMU to be connected to a display that’s capable of being set to those peak luminance
output levels.
Though it’s not at all typical, you also have the option to set the “Trim Controls For” drop-down menu
to different combinations of peak luminance, gamut, and color temperature, in order to visually trim the
grades of your program at up to four different peak luminance levels, including 100 nit, 600 nit,
1000 nit, and 2000 nit reference points. Choosing a setting from the “Trim Controls For” drop-down
menu sets you up to adjust trim metadata for that setting.
Choosing different settings from the “Trim Controls For” drop-down menu lets you optimize a
program’s visuals for the peak luminance and color volume performance of many different televisions
with a much finer degree of control. If you take this extra step of doing a complete trim pass of your
program at multiple nit levels (using the Dolby Vision controls), the Level 2, or Level 8 metadata you
generate in each trim pass ensures that the artistic intent is preserved as closely as possible across a
wide variety of displays, in an attempt to provide the viewer with the best possible representation of
the director’s intent, no matter where it appears.
For example, if a program were graded relative to a 4000 nit display, along with a single 100 nit
BT.709 trim pass, then a Dolby Vision-compatible television with 750 nit peak output will reference
the 100 nit trim pass metadata in order to come up with the best way of “splitting the difference” to
output the signal correctly. On the other hand, were the colorist to do three trim passes, the first at
100 nits, a second at 600 nits, and a third at 1000 nits, then a 750 nit-capable Dolby Vision television
would be able to use the 600 and 1000 nit trim metadata to output more accurately scaled color
volume and HDR-strength highlights, relative to the colorist’s adjustments, that take better advantage
of the 750 nit output of that television.
Managing Dolby Vision Metadata
As you go through the process of analyzing and trimming the HDR grades displayed on your Master
Display to look appropriate on your Target Display, you’ll sometimes find it useful to copy and paste
metadata from one clip to another. You can copy and paste Analysis Metadata separately from Trim
Metadata and Mid Tone Offset, and you can choose to copy and paste metadata for all Target Displays
when you’re trimming multiple passes, or you can copy and paste metadata for only the current Target
Display if you’re trimming multiple passes and you only want to overwrite metadata for a single pass.
Methods of Copying and Pasting Dolby Vision Metadata:
– To copy and paste Analysis Metadata: Select a clip you want to copy from, choose Copy Analysis
Metadata from the Dolby Vision palette option menu, then select a clip you want to paste to, and
choose Paste Analysis Metadata from the Dolby Vision palette option menu.
– To copy and paste Trim Metadata for all Target Displays: Do one of the following:
– Select a clip you want to copy from, choose Edit > Dolby Vision > Copy Trim Metadata, then
select a clip you want to paste to, and choose Edit > Dolby Vision > Paste Trim Metadata.
– Select a clip you want to copy from, choose Copy Trim Metadata from the Dolby Vision palette
option menu, then select a clip you want to paste to, and choose Paste Trim Metadata from the
Dolby Vision palette option menu.
– Select a clip you want to paste to, then press and hold the Option-Shift keys, and middle-click
the clip you want to copy from.
– To copy and paste Trim Metadata for the current Target Display: Do one of the following:
– Select a clip you want to copy from, choose Copy Trim Metadata from the Dolby Vision palette
option menu, then select a clip you want to paste to, and choose Paste Trim Metadata to
Current from the Dolby Vision palette option menu.
– Select a clip you want to paste to, then press and hold the Option key, and middle-click the clip
you want to copy from.
– To copy and paste Mid Tone Offset: Select a clip you want to copy from, choose Copy Mid Tone
Offset from the Dolby Vision palette option menu, then select a clip you want to paste to, and
choose Paste Mid Tone Offset from the Dolby Vision palette option menu.
Setting Up Resolve Color Management for Grading HDR
Once the hardware is set up, setting up Resolve itself to output HDR for Dolby Vision mastering is easy
using Resolve Color Management (RCM). This procedure is pretty much the same no matter which
HDR mastering technology you’re using; only specific Output Color Space settings will differ.
1 Set Color Science to DaVinci YRGB Color Managed in the Color Management panel of the
Project Settings.
2 Then, open the Color Management panel, and set the Output Color Space drop-down to the
ST.2084 setting that corresponds to the peak luminance, in nits, of the grading display you’re
using. For example, if you’re grading with a Sony BVM X300, choose ST.2084 1000 nit, but if
you’re grading with a Flanders Scientific XM310K, choose ST.2084 3000 nit, in order to use the full
capabilities of each display. Be aware that whichever HDR setting you choose will impose a hard
clip at the maximum nit value supported by that setting. This is to prevent accidentally overdriving
HDR displays, which can have negative consequences for some models (not all HDR displays have this limitation).
– ST.2084 300 nit
– ST.2084 500 nit
– ST.2084 800 nit
– ST.2084 1000 nit
– ST.2084 2000 nit
– ST.2084 3000 nit
– ST.2084 4000 nit
This setting is only the output EOTF (a sort of gamma transform, if you will, using the terminology
that DaVinci Resolve’s UI has used up until now).
3 Next, choose a setting in the Timeline Color Space that corresponds to the gamut you want
to use for grading, and that will be output. For example, if you want to grade the Timeline as
a log-encoded signal and “normalize” it yourself, you can choose ARRI Log C or Cineon Film
Log (this workflow is highly recommended for the best results). If you would rather save time by
having DaVinci Resolve normalize the Timeline to P3-D65 and grade that way, you can choose
that setting as well. In terms of defining the output gamut, the rule is that if “Use Separate Color
Space and Gamma” is turned off, the Timeline Color Space setting will define your output gamut.
If “Use Separate Color Space and Gamma” is turned on, then you can specify whatever gamut you
want in the left Output Color Space drop-down menu, and choose the EOTF from the right drop-
down menu (as described in step 2).
4 Be aware that, when it’s being properly output, HDR ST.2084 signals appear very “log-like,” in
order to pack a wide dynamic range into the bandwidth of a standard video signal. It’s the HDR
display itself that “normalizes” this log-encoded image to look as it should. For this reason, the
image you see in your Color page Viewer is going to appear flat and log-like, even though the
image being displayed on your HDR reference display looks vivid and correct. If you’re using a
typical SDR computer display, and you want to make the image in the Color Page Viewer look
“normalized” at the expense of clipping the HDR highlights (in the Viewer, not in the grade), you
can use the 3D Color Viewer Lookup Table setting in the Color Management panel of the Project
Settings to assign the appropriate ST.2084 setting with a peak nit level that corresponds to the
HDR broadcast display you’re outputting to.
5 Additionally, the “Timeline resolution” and “Pixel aspect ratio” settings (in the Project Settings) that your
project is set to use are saved to the Dolby Vision metadata, so make sure your project is set to the
final Timeline resolution and PAR before you begin grading.
DaVinci Resolve Grading Workflow For Dolby Vision
Once the hardware and software is all set up, you’re ready to begin grading HDR. The workflow is
fairly straightforward.
1 First, grade the HDR image on your HDR Monitor to look as you want it to. Dolby recommends
starting by setting the look of the HDR image, to set the overall intention for the grade.
2 When using various grading controls in the Color page to grade HDR images, you may find it
useful to enable the HDR Mode of the node you’re working on by right-clicking that node in
the Node Editor and choosing HDR Mode from the contextual menu. This setting adapts that
node’s controls to work within an expanded HDR range. Practically speaking, this makes controls
that operate by letting you make adjustments at different tonal ranges, such as Custom Curves,
Soft Clip, and so on, work more easily with wide-latitude signals.
3 When you’re happy with the HDR grade, click the Analysis button in the Dolby Vision palette.
This analyzes every pixel of every frame of the current shot, and performs and stores a statistical
analysis that is sent to the iCMU or eCMU to guide its automatic conversion of the HDR signal to
an SDR signal.
4 Choose “Target Display Output” and “Trim Controls For” settings that you want to trim to. By
default, these are set to “100-nit, BT.709, BT.1886, Full,” which is a typical SDR deliverable.
However, other options are available if you want to do multiple trim passes to obtain a more
accurate result. Whichever setting you choose from the “Trim Controls For” menu dictates which trim pass
you’re doing. You can do multiple trim passes by choosing another option from this menu.
5 If you’re not happy with the automatic conversion, use the trim controls in the Dolby Vision palette
to manually trim the result to the best possible BT.709 approximation of the HDR grade you
created in step 1.
6 If you obtain a good result, then move on to the next shot and continue work. If you cannot
obtain a good result, and worry that you may have gone too far with your HDR grade to derive an
acceptable SDR tone mapping, you can always trim the HDR grade a bit, and then retrim the SDR
grade to try and achieve a better tone mapping. Dolby recommends that if you make significant
changes to the HDR master, particularly if you modify the blacks or the peak highlights, you
should reanalyze the scene. However, if you only make small changes, then reanalyzing is not
strictly required.
As you can see, the general idea promoted by Dolby is that a colorist will focus on grading the HDR
picture relative to the 1000, 2000, 4000, or higher nit display that is being used, and will then
use the Dolby Vision controls to “trim” this into a 100 nit SDR version. This
metadata is saved as part of the mastered media, and it’s used to more intelligently tone map the
entire image to fit within any given display’s parameters. The colorist’s artistic intent is used to guide
all dynamic adjustments to the content.
Delivering Dolby Vision
Once you’re finished grading the HDR and trimming the SDR tone mapping, you need to output your
program correctly in the Deliver page.
Rendering a Dolby Vision Master
To deliver a Dolby Vision master after you’ve finished grading, you want to make sure that the Output
Color Space of the Color Management panel of the Project Settings is set to the appropriate HDR
ST.2084 setting based on the peak output you want to deliver (any values above will be clipped).
Then, you want to set your render up to use one of the following Format/Codec combinations:
– TIFF, RGB 16-bit
– EXR, RGB-half (no compression)
When you render for tapeless delivery, all Dolby Vision metadata is recorded into a Dolby Vision XML
and delivered alongside either the TIFF or EXR renders. To export a Dolby Vision XML file, select your
timeline in the Media Pool and choose File > Export > Timeline. Navigate to where you want to save the
file, select Dolby Vision v2.9 (or v4.0) XML files from the file type selector, and click Save. These
two sets of files are then delivered to a facility that’s capable of creating the Dolby Vision
deliverable file.
Rendering a Dolby Vision IMF
You can deliver directly to an IMF that includes an MXF with embedded Dolby Vision metadata in the
package. To export a Dolby Vision IMF use the following Video settings in the Deliver page:
– Format: IMF
– Codec: Kakadu JPEG 2000
– Type: Dolby Vision (HD, 2K, UHD, or 4K) depending on your deliverable resolution.
Configure the rest of the IMF settings as necessary for your project.
The Video Settings to use for creating a Dolby Vision IMF in the Deliver page
Rendering an Ordinary SDR Media File or Other Specific HDR Trim Pass
If you want to export the SDR trim pass, then you can choose Dolby Vision from the Tone Mapping
drop-down menu in the Advanced Settings of the Render Settings list on the Deliver page, and
choose the 100-nit, BT.709, BT.1886, Full setting below. With this enabled, you can output the SDR
version of your program to any format you like.
You can also export the trims for other HDR nit levels for specific displays, at 600, 1000, or 2000 nits,
and in either the BT.2020 or P3 gamuts.
The Tone Mapping setting in the Advanced Settings of the Render Settings list
SMPTE ST.2084 and HDR10
Many display manufacturers who have no interest in licensing Dolby Vision for inclusion in their
displays are instead going with the simpler method of engineering their displays to be compatible with
SMPTE ST.2084. It requires only a single stream for distribution; there are no licensing fees; no special
hardware is required to master for it (other than an HDR mastering display); and there’s no special
metadata to write or deal with.
Interestingly, SMPTE ST.2084 ratifies the “PQ” EOTF that was originally developed by Dolby, and which
is used by Dolby Vision, into a general standard that accommodates encoding HDR at peak luminance
values up to 10,000 cd/m2. This standard requires at minimum a 10-bit signal for distribution, and the
EOTF is mathematically described such that the video signal utilizes the available code values of a
10-bit signal as efficiently as possible, while allowing for such a wide range of luminance in the image.
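For reference, the PQ transfer function ratified by SMPTE ST.2084 has a simple closed form. The following sketch (in Python, purely for illustration) converts between normalized code values and absolute luminance in nits using the constants published in the standard:

```python
# SMPTE ST.2084 (PQ) constants, as published in the standard.
M1 = 2610 / 16384            # 0.1593017578125
M2 = 2523 / 4096 * 128       # 78.84375
C1 = 3424 / 4096             # 0.8359375
C2 = 2413 / 4096 * 32        # 18.8515625
C3 = 2392 / 4096 * 32        # 18.6875

def pq_to_nits(code):
    """Normalized PQ code value (0.0-1.0) -> absolute luminance in cd/m2 (nits)."""
    p = code ** (1 / M2)
    return 10000.0 * (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1 / M1)

def nits_to_pq(nits):
    """Absolute luminance in nits -> normalized PQ code value (0.0-1.0)."""
    y = (nits / 10000.0) ** M1
    return ((C1 + C2 * y) / (1 + C3 * y)) ** M2

# Roughly half of the code range covers 0-100 nits, leaving the rest for
# highlights all the way up to 10,000 nits, which is how PQ spends a 10-bit
# signal's code values efficiently across such a wide luminance range.
print(round(nits_to_pq(100), 3))    # ~0.508
print(round(nits_to_pq(1000), 3))   # ~0.752
print(round(pq_to_nits(1.0)))       # 10000
```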
SMPTE ST.2084 is also part of the “Ultra HD Premium” industry specification, which stipulates that
televisions bearing the Ultra HD Premium logo have the following capabilities:
– A minimum UHD resolution of 3840 x 2160
– A minimum gamut of 90% of P3
– A minimum dynamic range of either 0.05 nits black to 1000 nits peak luminance (to accommodate
LCD displays), or 0.0005 nits black to 540 nits peak luminance (to accommodate OLED displays)
– Compatibility with SMPTE ST.2084
Finally, ST.2084 has been included in the HDR10 standard adopted by the Blu-ray Disc Association
(BDA) that covers Ultra HD Blu-ray. HDR10 stipulates that Ultra HD Blu-ray discs have the following
characteristics:
– UHD resolution of 3840 x 2160
– Up to the Rec. 2020 gamut
– SMPTE ST.2084
– Mastered with a peak luminance of 1000 nits
The downside is that, by itself, an HDR10 mastered program is not backward compatible with BT.709
displays using BT.1886 (although the emerging HDR10+ standard described later addresses this).
Furthermore, no provision is made to scale the above-100 nit portion of the image to accommodate
different displays with differing peak luminance levels. For example, if you grade and master an image
to have peak luminance of 4000 nits, and you play that signal on an HDR10-compatible television
(using ST.2084) that’s only capable of 800 nits, then everything above 800 nits will be clipped, while
everything below 800 nits will look exactly as it should relative to your grade.
This is because ST.2084 is referenced to absolute luminance. If you grade an HDR image referencing a
1000 nit peak luminance display, as is recommended by HDR10, then any display using ST.2084 will
respect and reproduce all levels from the HDR signal that it’s capable of reproducing as you graded
them, up to the maximum peak luminance level it can reproduce. For example, on an HDR10-
compatible television capable of outputting 500 nits, all mastered levels from 501–1000 will be
clipped, as seen in the screenshot below.
Comparing the original 1000 nit waveform representing the grading monitor to
a 500 nit clipped waveform representing the consumer television
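As a simple illustration of this clipping behavior (this is just arithmetic with hypothetical nit values, not anything DaVinci Resolve does internally):

```python
# Hypothetical mastered highlight levels (in nits) versus a consumer display's peak.
mastered_levels_nits = [250, 800, 1000, 2500, 4000]
display_peak_nits = 500

# With plain HDR10/ST.2084 (no dynamic metadata), levels the display can
# reproduce are shown exactly as graded; everything above its peak is clipped.
shown = [min(level, display_peak_nits) for level in mastered_levels_nits]
print(shown)   # [250, 500, 500, 500, 500] -- highlight detail above 500 nits is lost
```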
How much of a problem this is really depends on how you choose to grade your HDR-strength
highlights. If you’re only raising the most extreme peak highlights to maximum HDR-strength levels,
then it’s entirely possible that the audience might not notice that the display is only outputting 800 nits
worth of signal and clipping any image details from 801–1000 nits because there weren’t that many
details above 800 anyway. Or, if you’re grading large explosive fireballs up above 800 nits in their
entirety because it looks cool, then maybe the audience will notice. The bottom line is, when you’re
grading for displays that are only capable of ST.2084, you need to think about these sorts of things.
Monitoring and Grading to ST.2084 in DaVinci Resolve
Monitoring an ST.2084 image is as simple as obtaining an ST.2084-compatible HDR display and
connecting it to the output of your DeckLink 8K, DeckLink 4K Extreme 12G, or UltraStudio 4K Extreme.
Setting up Resolve Color Management to grade for ST.2084 is identical to setting up to grade for
Dolby Vision. You’ll also monitor the video scopes identically, and output a master identically, given
that both standards rely upon the same PQ curve.
TIP: If you’re monitoring with the built-in video scopes in DaVinci Resolve, you can turn
on the “Enable HDR Scopes for ST.2084” checkbox in the Color panel of the User
Preferences, which will replace the 10-bit scale of the video scopes with a scale based on nit
values (cd/m2) instead.
Connecting to HDR-Capable Displays using HDMI 2.0a
If you have a DeckLink 4K Extreme 12G or an UltraStudio 4K Extreme video interface, then
DaVinci Resolve 12.5 and above can output the metadata necessary to correctly display HDR video
signals to display devices using HDMI 2.0a when you turn on the “Enable HDR metadata over HDMI”
checkbox in the Master Settings panel of the Project Settings.
The Enable HDR metadata over HDMI option in the Master Settings
panel of the Project Settings lets you output HDR via HDMI 2.0a
When you do so, a setting in the Color Management panel of the Project Settings, “HDR mastering
is for X” lets you specify the output, in nits, to be inserted as metadata into the HDMI stream being
output, so that the display you’re connecting to correctly interprets it. The output you specify
should match what your display is expecting.
The “HDR mastering is for” setting lets you
insert metadata for HDR output via HDMI 2.0a
HDR10+™
DaVinci Resolve supports the new HDR10+ format developed by Samsung. Please note that this support is a
work in progress, as this is a new standard. When enabled, an HDR10+ palette shows the results of the
trimming analysis used to make an automated downconversion of HDR to SDR, creating metadata to
control how HDR-strength highlights look on a variety of supported televisions and displays. This is
enabled and set up in the Color Management panel of the Project Settings with the Enable HDR10+
checkbox. Turning HDR10+ on enables the HDR 10+ palette in the Color page.
HDR 10+ settings in the Color Management panel of the Project Settings
Monitoring and Grading to ST.2084 for HDR10+
When you’re grading a program for HDR10+ output, you’ll need to monitor an ST.2084 image, which is
as simple as obtaining an ST.2084-compatible HDR display and connecting it to the output of your
DeckLink 8K, DeckLink 4K Extreme 12G, or UltraStudio 4K Extreme.
Setting up Resolve Color Management to grade for ST.2084 is identical to setting up to grade for
Dolby Vision or regular HDR10. You’ll also monitor the video scopes identically, and output a master
identically, given that each of these standards rely upon the same PQ curve.
TIP: If you’re monitoring with the built-in video scopes in DaVinci Resolve, you can turn
on the “Enable HDR Scopes for ST.2084” checkbox in the Color panel of the User
Preferences, which will replace the 10-bit scale of the video scopes with a scale based on nit
values (cd/m2) instead.
HDR10+ Grading Workflow
The idea behind the HDR10+ workflow is that you’ll grade the HDR version of each clip in your
program first, and then use the automatic analysis to create a downconverted tone mapped version of
each shot that’s controlled by metadata. Once the HDR10+ trim pass is complete, you’ll deliver the
rendered HDR output along with a set of HDR10+ JSON metadata files to a facility for final mastering.
Simultaneous Master and Target Display Output for HDR10+
When mastering HDR and trimming versions for more limited displays, it’s extremely useful to be able
to evaluate your HDR grade and tone mapped trim pass side by side. Starting in DaVinci Resolve 15,
it’s possible to output both the Master Display output and the Target Display output simultaneously
when you’re grading with either Dolby Vision or HDR10+ enabled.
Necessary Hardware
To work in this manner, you must have the following equipment:
– Your DaVinci Resolve grading workstation must output via a DeckLink 8K, DeckLink 4K
Extreme 12G, UltraStudio 4K Extreme video interface, or better.
– Your Mastering Display must be capable of HDR nit levels suitable for the deliverable you’re
required to produce.
– An HDR target display that can be set to the appropriate tone mapped output.
Enabling Simultaneous Monitoring
When you set up your display hardware, the HDR Master Display must be connected to output A, and
the Target Display must be connected to output B of whichever BMD video output device you’re using.
Then, you need to turn on the “Use dual outputs on SDI” checkbox in the Master Settings of the
Project Settings. At this point, assuming all of your connections are compatible with one another, you
should see an HDR image output to your HDR display and a trimmed image output to your
SDR display.
HDR10+ Auto Analysis Commands
After you’ve graded an HDR version of each clip in your program, a set of HDR10+ specific commands
let you auto-analyze each clip to create custom HDR to SDR downconversion metadata that gives you a
starting point for the SDR trim pass you need to do. These commands are available in the Color >
HDR10+ submenu:
– Analyze All Shots: Automatically analyzes each clip in the Timeline and
stores the results individually.
– Analyze Selected Shot(s): Only analyzes selected shots in the Timeline.
– Analyze Selected and Blend: Analyzes multiple selected shots and averages the result, which is
saved to each clip. Useful to save time when analyzing multiple clips that have identical content.
– Analyze Current Frame: A fast way to analyze clips where a single frame is
representative of the entire shot.
The Enable Tone Mapping Preview checkbox lets you turn the tone mapping trim being applied off
and on, so you can evaluate how the downconverted SDR version looks on your HDR display. This
control is disabled when you enable “Use dual-outputs on SDI” in the Master Settings of the Project
Settings, since the second output SDI now automatically displays the target display output.
Delivering HDR10+
Once you’re finished grading the HDR and trimming the SDR downconversion, you need to output
your program correctly in the Deliver page.
Rendering an HDR10+ Master
To deliver an HDR10+ master after you’ve finished grading, you want to make sure that the Output Color
Space of the Color Management panel of the Project Settings is set to the appropriate HDR ST.2084
setting based on the peak output you want to deliver (any values above will be clipped). Then, you
want to set your render up to use the highest quality Format/Codec combination that can be delivered
to whomever is doing the final mastering.
The HDR10+ analysis and manual trim metadata you generated while trimming is saved per clip,
in a series of JSON sidecar files, which should then be exported by right-clicking that timeline in the
Media Pool, and choosing Timelines > Export > HDR10+JSON.
These two sets of files are then delivered to a facility that’s capable of creating an HDR10+ Mezzanine
File (this cannot be done in DaVinci Resolve).
NOTE: The HDR10+ mastering workflow is still a work in progress. More information will be
provided as it becomes available.
Hybrid Log-Gamma (HLG)
The BBC and NHK jointly developed another method of encoding HDR video, referred to as Hybrid
Log-Gamma (HLG). The goal of HLG was to develop a method of mastering HDR video that would
support a range of displays of different peak luminance capabilities without additional metadata, that
could be broadcast via a single stream of data, that would fit into a 10-bit signal, and that in the words
of the ITU-R Draft Recommendation BT.HDR, “offers a degree of compatibility with legacy displays by
more closely matching the previous established television transfer curves.”
The basic idea is that the HLG EOTF functions very similarly to BT.1886 from 0 to 0.6 of the signal
(with a typical 0–1 range), while 0.6 to 1.0 smoothly segues into logarithmic encoding for the highlights.
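For reference, the scene-linear-to-signal HLG curve published in ARIB STD-B67 and ITU-R BT.2100 can be sketched as follows (Python, purely for illustration); the lower segment behaves like a conventional square-root camera curve, while the upper segment is logarithmic, and the display EOTF is essentially this curve’s inverse combined with a system gamma:

```python
import math

# Hybrid Log-Gamma OETF constants from ARIB STD-B67 / ITU-R BT.2100.
A = 0.17883277
B = 1 - 4 * A                     # 0.28466892
C = 0.5 - A * math.log(4 * A)     # 0.55991073

def hlg_oetf(e):
    """Scene-linear light E (normalized 0.0-1.0) -> HLG signal value (0.0-1.0)."""
    if e <= 1 / 12:
        return math.sqrt(3 * e)           # square-root (gamma-like) segment
    return A * math.log(12 * e - B) + C   # logarithmic segment for highlights

print(round(hlg_oetf(1 / 12), 3))   # 0.5  -> the two segments meet here
print(round(hlg_oetf(1.0), 3))      # 1.0  -> peak scene light maps to full signal
```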
This means that, if you just send an HDR Hybrid Log-Gamma signal to an SDR display, you’d be able to
see much of the image identically to the way it would appear on an HDR display, and the highlights
would be compressed to present an acceptable amount of detail for SDR broadcast.
On a Hybrid Log-Gamma compatible HDR display, however, the log-like highlights of the image (not
the BT.1886-like bottom portion of the signal, just the highlights) would be stretched back out, relative
to whatever peak luminance level a given HDR television is capable of outputting, to return the image
to its true HDR glory. This is different from the HDR10 method of distribution described previously, in
which the graded signal is referenced to absolute luminance levels dictated by ST.2084, and levels
that cannot be represented by a given display will be clipped.
And while this facility to support multiple HDR displays with differing peak luminance levels is
somewhat analogous to Dolby Vision’s ability to tailor HDR output to the unique peak luminance levels
of any given Dolby Vision-compatible television, HLG requires no additional metadata to guide how
the highlights are scaled, which depending on your point of view is either a benefit (less work), or a
deficiency (no artistic guidance to make sure the highlights are being scaled in the best possible way).
As is true for most things, you don’t get something for nothing. The BBC White Paper WHP 309 states
that, for a 2000 cd/m2 HDR display with a black level of 0.01 cd/m2, up to 17.6 stops of dynamic range
without visible quantization artifacts (“banding”) are possible. BBC White Paper WHP 286 states that the
proposed HLG EOTF should support displays up to about 5000 nits. So the backward
compatibility that HLG makes possible is due in part to discarding long-term support for 10,000 nit
displays. However, it’s an open question whether or not over 5000 nits is even necessary for
consumer enjoyment.
Sony, LG, Panasonic, JVC, Philips, Hisense, Hitachi, and Toshiba have all either announced or are
shipping consumer HDR televisions capable of displaying HLG encoded video, and of course
DaVinci Resolve supports this standard through Resolve Color Management.
Grading Hybrid Log-Gamma in DaVinci Resolve
Monitoring an HLG image is as simple as getting a Hybrid Log-Gamma-compatible HDR display,
and connecting the output of your video interface to the input of the display.
Setting up Resolve Color Management to grade for HLG is identical to setting up to grade for Dolby
Vision, except that there are four HLG settings to choose from for the Output Color Space:
– Rec.709 HLG ARIB STD-B67
– Rec.2020 HLG ARIB STD-B67
– Rec.2100 HLG
– Rec.2100 HLG (Scene)
Optionally, if you choose to enable “Use Separate Color Space and Gamma,” you can choose either
Rec. 2020 or Rec. 709 as your gamut, and Rec. 2100 HLG as your EOTF.
The levels you’ll be monitoring in your scopes will be different from the table of data to nit values listed
previously for grading to the PQ EOTF.
Outputting Hybrid Log-Gamma
Once you’ve created an HLG grade for your program, you can output it to any high-quality 10-bit
capable media format.
Chapter 11
Image Sizing and Resolution Independence
DaVinci Resolve is a resolution-independent application. This means that, whatever
the resolution of your source media, it can be output at whatever other resolution you
like, and just about every size-dependent effect in your project (text, windows in
grades, Edit and Input clip scaling, and other effects) will scale appropriately to match
the new output resolution.
This also means that you can freely mix clips of any resolution, fitting 4K, HD, and
SD clips into the same timeline, with each scaling to fit the project resolution
as necessary.
Your project’s resolution can be changed at any time, allowing you to work at one
resolution, and then output at another resolution. This also makes it easy to output
multiple versions of a program at different resolutions, for example, outputting 4K,
HD, and SD sized versions of the same timeline.
Additionally, most controls that let you transform clips, either to push into a clip for
creative intent, or to pan and scan media of one format to fit better into a different
output format, are smart enough to always refer back to the source resolution when
combining resizing operations to shrink, then enlarge an image for various reasons as
you work in the Cut, Edit, Fusion, and Color pages.
This chapter covers the relationship among the different sizing and transform controls
found in DaVinci Resolve, showing how they work together to intelligently manage the
sizing of clips and effects as you work.
Contents
About Resolution Independence 238
Timeline Resolution 239
Mixing Clip Resolutions 239
Changing the Timeline Resolution 239
You Can Use Separate Timelines to Output Different Resolutions 239
You Don’t Need Separate Timelines to Output Different Resolutions 239
Using High Resolution Media in Lower Resolution Projects 240
Clip Source Resolution 240
Pixel Aspect Ratio (PAR) 240
Clip Resolution 241
The DaVinci Resolve Sizing Pipeline 241
“Super Scale” High Quality Upscaling (Studio Version Only) 241
Fusion Effects and Resolution 242
Image Scaling 244
Edit Sizing in the Cut and Edit pages 247
Image Stabilization 248
Input Sizing on the Color Page 248
Node Sizing on the Color Page 249
Output Sizing on the Color Page 249
Output Blanking 249
Format Resolution on the Delivery Page 249
Rendering Sizing Adjustments and Blanking 250
About Resolution Independence
If you only read one paragraph of this chapter, read this: Resolution Independence in DaVinci Resolve
means you can add clips in any combination of resolutions to a timeline, where they’re fit to the project
resolution you’ve chosen to work at, and you can later output that timeline to as many other resolutions as
necessary in order to create multiple deliverables. When you do so, all effects and transforms will
automatically readjust themselves to match the sizing of each new timeline resolution, and most
transforms are calculated and processed using the full native resolution of the source media you’ve
linked to that clip.
In short, what this means is that you can create multiple deliverables in multiple resolutions by simply
changing the timeline resolution or by using a lower resolution setting in the Deliver page compared to
the timeline resolution when you create a new job to render out, and every effect will be
the right size automatically.
Timeline Resolution
The timeline resolution is one of the most fundamental settings of your project, defining its frame size.
It’s found in the Master Settings panel of the Project Settings, where you can choose a predefined
resolution from the “Timeline resolution” drop-down menu, or you can type a custom resolution into
the X and Y fields below.
The project-wide Timeline Resolution parameters found in
the Master Settings panel of the Project Settings window
Mixing Clip Resolutions
Media used in a project does not have to match the timeline resolution. In fact, it’s extremely common
to mix multiple resolutions within the same timeline. Clips that don’t match the current resolution will be
automatically resized according to the currently selected Image Scaling setting (described below).
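As a rough illustration of what a “scale to fit”-style Image Scaling option does (the actual options are described below), the following sketch scales a mismatched clip uniformly until it fits within the timeline frame; pixel aspect ratio is ignored for simplicity, and the helper name is hypothetical:

```python
def fit_scale(clip_w, clip_h, timeline_w, timeline_h):
    """Uniformly scale a clip to fit inside the timeline frame (no distortion)."""
    scale = min(timeline_w / clip_w, timeline_h / clip_h)
    return scale, round(clip_w * scale), round(clip_h * scale)

print(fit_scale(1280, 720, 3840, 2160))   # HD clip in a UHD timeline -> (3.0, 3840, 2160)
print(fit_scale(720, 576, 1920, 1080))    # SD clip in an HD timeline -> (1.875, 1350, 1080), pillarboxed
```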
Changing the Timeline Resolution
As mentioned earlier, you can change the timeline resolution whenever you like. When you do so,
each Edit page transform, Fusion clip effects output, Color page Power Window, Input and Output
Sizing adjustment, tracking path, spatial keyframing value, as well as any other resolution-
dependent Resolve FX effect or transform operation in DaVinci Resolve is automatically and
accurately scaled to fit the new resolution.
You Can Use Separate Timelines to
Output Different Resolutions
Beginning in DaVinci Resolve 16, you have the option of creating separate timelines with individual
Format (including Input Scaling), Monitoring, and Output Sizing settings for situations where
you need to set up multiple timelines to create multiple deliverables with different resolutions,
pixel aspect ratios, frame rates, monitoring options, or output scaling options than the overall project,
including “Mismatched Resolution Files” settings. For more information, see Chapter 34,
“Creating and Working with Timelines.”
You Don’t Need Separate Timelines to
Output Different Resolutions
Because of the way DaVinci Resolve works, it’s not necessary to create separate timelines when all
you need is to output the same timeline at multiple resolutions. Instead, you can focus on mastering a
single timeline, which you can output to as many other resolutions as you need.
For example, with only a single timeline in a project set to 4096x2160 (4K DCI) resolution, you can
easily output UHD, HD, center-cut SD, and center-cut Instagram sized deliverables in any format you
need by simply changing the Resolution drop-down setting in the Deliver page Render Settings before
you create a job to render. DaVinci Resolve takes care of the rest.
The Deliver page drop-down menu in the Render Settings panel lets you choose
what resolution you want to output the current timeline using
Using High Resolution Media in Lower Resolution Projects
Every set of transform and sizing parameters and settings that resize clips is combined intelligently, so
that the full resolution of a clip’s source media is always used as the source for any transform. For
example, if you’re using 8K media within a 1920x1080 project, and you need to enlarge a clip using the
Input Sizing palette’s Zoom parameter to 200%, the image is scaled relative to the native 8K resolution
of the source, and the result is fit into the current timeline resolution. This automatically guarantees the
highest quality for any image transform you make so long as you don’t zoom in past the native
resolution of any given clip.
This also applies to situations where, for example, you shrink a clip in the Edit page using the Edit
Sizing controls, only to re-enlarge the same clip in the Color page, using the Input Sizing controls. In
this situation, DaVinci Resolve is smart enough to do the math combining the project resolution, the
Edit Sizing, and the Input Sizing controls so that a single transform is applied to the native source
resolution of that clip, giving you the best quality result.
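A minimal sketch of this idea, using made-up numbers: rather than resampling twice (once to shrink, once to re-enlarge), the individual scale factors are combined so that a single resample is performed from the clip’s native resolution:

```python
source_res   = (7680, 4320)    # native resolution of an 8K source clip
timeline_res = (1920, 1080)    # HD timeline

conform_scale = timeline_res[0] / source_res[0]   # 0.25: fit the 8K clip into the HD frame
edit_zoom     = 0.5                               # shrunk via the Edit page sizing controls
input_zoom    = 2.0                               # re-enlarged via the Color page Input Sizing

# Combining the factors first means only one resample of the native 8K pixels,
# so the intermediate shrink never throws away detail.
combined = conform_scale * edit_zoom * input_zoom  # 0.25
print(combined * source_res[0], combined * source_res[1])   # 1920.0 1080.0
```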
NOTE: This changes when you apply Fusion effects to any clip, as described later in
this chapter.
Clip Source Resolution
Clip resolution in DaVinci Resolve is handled by the combination of Pixel Aspect Ratio and Resolution.
Pixel Aspect Ratio (PAR)
The Timeline Format settings, found in the Master Settings of the Project Settings, let you specify a
Pixel Aspect Ratio for the project, in addition to the frame size. This setting defaults to Square Pixel,
which is appropriate for high definition projects and most digital media. However, there are also
options for 16:9 anamorphic, 4:3 standard definition, or Cinemascope. Which options are available
depends on what timeline resolution you’ve selected.
In addition, each clip has individually adjustable PAR settings in the clip attributes, for situations where
you’re mixing multiple types of media within a single project. For example, if you’re mixing SD clips
with non-square pixels and HD clips with square pixels, you can sort out all of the SD clips in the Media
Pool and assign them the appropriate NTSC or PAL non-square pixel ratio PAR setting. For more
information, see Chapter 22, “Modifying Clips and Clip Attributes.”
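The relationship between a clip’s stored pixel dimensions, its PAR, and the shape it’s displayed at is simple arithmetic, sketched below; the example PAR values are common conventions and may differ slightly from the exact values DaVinci Resolve uses:

```python
def display_aspect(width, height, par):
    """Storage resolution plus pixel aspect ratio -> displayed aspect ratio."""
    return (width * par) / height

print(round(display_aspect(720, 480, 10 / 11), 2))   # NTSC-style non-square pixels -> ~1.36 (roughly 4:3)
print(round(display_aspect(1920, 1080, 1.0), 2))     # square-pixel HD -> 1.78 (16:9)
print(round(display_aspect(1920, 1080, 4 / 3), 2))   # 16:9 anamorphic capture -> ~2.37 (scope-like)
```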
Clip Resolution
Ordinarily, the resolution of a clip is entirely dependent on the resolution that was selected when that
media was shot, or rendered out of a compositing, VFX, or 3D application. Once a piece of media has
been created, the native resolution of that media cannot be changed, and to maintain the ideal amount
of sharpness for that clip, you need to make sure that whatever transforms you apply to resize a clip
zoom into that clip no more than 10-20% over its native resolution (if even that); otherwise the image
will visibly soften.
However, DaVinci Resolve provides advanced Super Scale image processing in the Clip Attributes of
every video and image clip, which makes it possible to resize clips beyond their native resolution while
maintaining the perceptible sharpness of a clip that’s still within its native resolution. This is an illusion,
but it’s a convincing one.
The DaVinci Resolve Sizing Pipeline
This section discusses the various sizing controls that are available in DaVinci Resolve, and how they
work together.
“Super Scale” High Quality Upscaling (Studio Version Only)
For instances when you need higher-quality upscaling than the standard Resize Filters allow, you can
now enable one of three “Super Scale” options in the Video panel of the Clip Attributes window for
one or more selected clips. Unlike using one of the numerous scaling options in the Edit, Fusion, or
Color pages, Super Scale actually increases the source resolution of the clip being processed, which
means that clip will have more pixels than it did before and will be more processor-intensive to work
with than before, unless you optimize the clip (which bakes the Super Scale effect into the optimized
media) or cache the clip in some way.
Super Scale options in the Video panel of the Clip Attributes
The Super Scale drop-down menu provides three options of 2x, 3x, and 4x, as well as Sharpness and
Noise Reduction options to tune the quality of the scaled result. Note that all of the Super Scale
parameters are in fixed increments; you cannot apply Super Scale in variable amounts. Selecting one
of these options enables DaVinci Resolve to use advanced algorithms to improve the appearance of
image detail when enlarging clips by a significant amount, such as when editing SD archival media into
a UHD timeline, or when you find it necessary to enlarge a clip past its native resolution in order to
create a closeup.
You may find that, depending on the source media you're working with, setting Sharpness to Medium
yields a relatively subtle result that can be hard to notice, while setting Sharpness to High is
immediately more noticeable but also sharpens grain and noise in the image to an undesirable extent
at the default settings. Raising Noise Reduction will ameliorate this effect, but it will also diminish the
gains you obtained by raising Sharpness. In these cases, it's worth experimenting with keeping
Sharpness at Low or Medium so that Super Scale sharpens all aspects of a clip, and then using the
Noise Reduction tools of the Color page (with their additional ability to be fine-tuned) to diminish
the unwanted noise.
TIP: Super Scale, while incredibly useful, is a processor-intensive operation, so be aware that
turning this on will likely prevent real-time playback. One way to get around this is to create
Optimized Media for clips in which you’ve enabled Super Scale, since Optimized Media
“bakes in” the Super Scale effect. Another way to work is to create a string-out of all of the
source media you’ll need to enlarge at high-quality, turn on Super Scale for all of them, and
then render that timeline as individual clips, while turning on the “Render at source resolution”
and “Filename uses > Source Name” options.
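If you have many clips to process, Super Scale can also be toggled in bulk from the DaVinci Resolve scripting console. The sketch below is a minimal example only; the "Super Scale" clip property name and the value 2 (taken to mean 2x) are assumptions based on the clip properties listed in the scripting API documentation, so confirm them against the README shipped with your version.
# Minimal sketch: enable Super Scale on every clip in the Media Pool's root
# bin using the scripting API (run from the Console, or with the scripting
# environment configured). The "Super Scale" property name and the value 2
# (assumed to mean 2x) should be verified against your version's API README.
import DaVinciResolveScript as dvr_script

resolve = dvr_script.scriptapp("Resolve")
project = resolve.GetProjectManager().GetCurrentProject()
media_pool = project.GetMediaPool()

for clip in media_pool.GetRootFolder().GetClipList():
    ok = clip.SetClipProperty("Super Scale", 2)  # assumed: 2 == 2x upscale
    print(clip.GetName(), "updated" if ok else "skipped")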
Fusion Effects and Resolution
All image processing by the Fusion page takes place before effects that are applied by the Edit page,
with the sole exception of the Lens Correction effect. When it comes to sizing and image resolution,
whether or not the Fusion page affects resolution depends on how you use it.
Fusion Effects Inherit the Source Resolution of a Clip
When you open a clip on the Timeline in the Fusion page, the Fusion page is set to the full source
resolution of that clip, regardless of the Timeline resolution. This can be seen if you look at the
resolution that’s listed above the upper right-hand corner of the Viewer. This means that if you don’t
apply any operations that reduce the image resolution (described later), subsequent sizing
adjustments in other pages will refer to the same resolution as the source clip.
The available resolution and bit depth of the currently selected clip are visible
above the upper right-hand corner of the Viewer, circled in red
Fusion Clips Inherit the Timeline Resolution
If you combine multiple clips on the Timeline into a Fusion clip, the Fusion page is set to the timeline
resolution, regardless of the source resolution of the clip. The image is then output to the Edit page at
this timeline resolution, and all subsequent sizing adjustments are performed relative to the timeline
resolution, with no reference to the original resolution in the source clip.
The available resolution and bit depth of a clip that's been turned into a
Fusion Clip, set to the timeline resolution of 1920x1080
Operations in the Fusion Page That Change Resolution
If you don’t do anything to change the size of a clip in the Fusion page, then its resolution stays the
same and you’ll effectively output the source resolution of that clip to the Edit page.
However, if you Merge the image with a second clip attached to the background which has a different
resolution, or if you use a Crop or Resize node to increase or decrease the resolution of the image,
then the new resolution will be passed to the Cut and Edit pages as the effective source resolution of
that clip.
In short, the Fusion page passes whatever resolution is output by the last node of your composition
back to the Edit page as the effective resolution of that clip in the DaVinci Resolve image
processing pipeline.
Fusion Page Transform Operations Are Resolution Independent
Within the Fusion page, multiple Transform nodes operate in a resolution independent manner relative
to the resolution of the source clip. This means that if you shrink an image to 20% with one Transform
node, and then enlarge it back up to 100% using a second Transform node, you end up with an image
that has all the resolution and sharpness of the input image.
Fusion Page Resize Operations Are Not
Within Fusion there are two kinds of transform effects, the Transform node and the Resize node. Which
of these nodes you use has a dramatic impact on resolution independence.
– The Transform node always refers back to the input resolution of the clip (as defined by the
Clip Attributes) to enable resolution-independent sizing, such that multiple Transform nodes can
scale the image down and up repeatedly within the Fusion page with no unnecessary loss of
image resolution.
– The Resize node actually decreases image resolution when you shrink an image or increases
image resolution (with filtering) when enlarging. This means that the Resize node will break
resolution independence, and the resolution of the image will be fixed at whatever you specify
from that point in your composite’s node tree forward.
In most situations, you probably want to use the Transform node to maintain resolution independence
relative to the source media, unless you specifically want to alter and perhaps reduce image resolution
to create a specific special effect which purposefully degrades the image. For example, if you want a
clip to be forced to a standard definition resolution in order to make it look like a low-resolution
archival clip, the Resize node will accomplish this. Enlarging the result with a Transform node will then
perform a filtered enlargement that will look like a real SD clip being enlarged.
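As a way to reason about the difference, the toy model below (ordinary Python, not Fusion code) tracks both the frame size and the amount of real picture detail as Transform and Resize operations are chained; only the Resize steps permanently discard detail.
# Toy model, not Fusion's actual implementation: Transform accumulates a
# scale factor without touching pixels, while Resize re-samples the image
# and fixes its resolution at that point in the node tree.
def apply_chain(source_res, ops):
    frame = source_res    # pixel dimensions carried forward
    detail = source_res   # real picture detail available (only ever drops)
    scale = 1.0           # pending Transform scaling
    for kind, factor in ops:
        if kind == "transform":
            scale *= factor                        # resolution independent
        elif kind == "resize":
            frame = round(frame * scale * factor)  # pixels re-sampled here
            detail = min(detail, frame)            # detail lost by downsizing never returns
            scale = 1.0
    return frame, detail

# Shrink to 20% and back up with Transform nodes: all 3840 px of detail survives.
print(apply_chain(3840, [("transform", 0.2), ("transform", 5.0)]))  # (3840, 3840)
# The same moves with Resize nodes: the final frame is 3840 px wide, but it's a
# filtered enlargement of only 768 px of real detail.
print(apply_chain(3840, [("resize", 0.2), ("resize", 5.0)]))        # (3840, 768)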
Transforms from the Fusion Page to the Edit Page
All transform operations you apply on the Cut, Edit, and Color pages are resolution independent,
referring to the original resolution of the source media, so long as you don’t use the Fusion page. For
example, if you shrink an image to 20% in the Edit page (using Edit sizing controls) and then enlarge it
in the Color page back to 100% (using Input sizing controls), you end up with an image that has all the
resolution and sharpness of the original media, because the final resolution is drawn from the original
source media.
However, once you use the Fusion page to do anything to a clip, from adding a small effect to creating
a complex composition, the resolution-independent relationship of the Edit and Color pages to the
source media is broken, and whatever resolution is output from your Fusion composition is the new
effective resolution of the clip that appears in the Timeline. This means if you shrink an image to 20%
in the Fusion page (using a Transform node) and then enlarge it in the Color page by 150%, you end up
with an image that isn’t as sharp as the original because the downconverted image in the Fusion page
is effectively the new source resolution of that clip.
Image Scaling
DaVinci Resolve has a dedicated mechanism for automatically managing the size of clips with
resolutions that don’t match the timeline resolution, and it’s separate from the Zoom transform controls
that are available for making creative adjustments to clips. This is called Image Scaling, and it’s
customizable in a few different areas.
Resize Filter Project Setting
The Resize Filter setting lets you choose the filter method that’s used to interpolate image pixels when
resizing clips:
– Smoother: May provide a more pleasing result for projects using clips that must be scaled down to
standard definition as this filter exhibits fewer sharp edges at SD resolutions.
– Bicubic: While the sharper and smoother options are slightly higher quality, bicubic is still an
exceptionally good resizing filter and is less processor-intensive than either of those options.
– Bilinear: A lower quality setting that is less processor-intensive. Useful for previewing your
work on a low-performance computer before rendering when you can switch to one of the
higher quality options.
– Sharper: Usually provides the best quality for most projects, using an optical quality processing
technique that’s unique to DaVinci Resolve.
– Custom: This setting lets you take control of the exact algorithm used in all resizing operations.
The custom Resize Filter options available are: Bessel, Box, Catmull-Rom, Cubic, Gaussian,
Lanczos, Mitchell, Nearest Neighbor, Quadratic, and Sinc. In practice, the difference between
these methods can be quite subjective. However, if you need to match a specific resizing method
used from another application, you can do it here. For everyday use, the normal resizing filters in
DaVinci Resolve should be sufficient.
– Override Input scaling: Checking this box lets you choose an Input Sizing preset to
apply to the project.
– Override Output scaling: Checking this box lets you choose an Output Sizing preset to
apply to the project.
– Anti-alias edges: A second group of settings lets you choose how to handle edge anti-aliasing
for source blanking.
– Auto: Adds anti-aliasing when any of the Sizing controls are used to transform the image.
Otherwise, anti-aliasing is disabled.
– On: Forces anti-aliasing on at all times.
– Off: Disables anti-aliasing. It might be necessary to turn anti-aliasing off if you notice black
blurring at the edges of blanking being applied to an image.
– Deinterlace quality: (only available in Studio version) A fourth group of settings lets you choose
the quality/processing time tradeoff when deinterlacing Media Pool clips using the Enable
Deinterlacing checkbox in the Clip Attributes window. There are two settings:
– Normal: A high-quality deinterlacing method that is suitable for most clips. For many clips,
Normal is indistinguishable from High. Normal is always used automatically during playback in
DaVinci Resolve.
– High: A more processor-intensive method that can sometimes yield better results, depending
on the footage, at the expense of slower rendering times.
– DaVinci Neural Engine: This option uses the advanced machine learning algorithms of
the DaVinci Neural Engine to analyze motion between the fields of interlaced material and
reconstruct them into a single frame. This option is very computationally intensive, but ideally
will deliver an even more aesthetically pleasing result than the High setting.
Input Scaling Project Setting
If the native resolution of an imported clip doesn’t match the timeline resolution, then the currently
selected Input Scaling Preset in the Image Scaling panel of the Project Settings dictates how
mismatched clips will be handled project-wide. The default setting is “Scale entire image to fit,” which
shrinks or enlarges the image to fit the current dimensions of the frame without cropping any part of
the image, adding letterboxing or pillarboxing as necessary to fill the unused portion of the frame
depending on whether the horizontal or vertical dimension of the image hits the edge of the
frame first.
The Mismatched resolution files option lets you choose how clips that don't match the current project
resolution are handled. The illustrated examples below show an SD clip being fit into an HD project
using each of the different options.
– Center crop with no resizing: Clips of differing resolution are not scaled at all. Clips that are
smaller than the current frame size are surrounded by blanking, and clips that are larger than
the current frame size are cropped. Keep in mind that this is a good setting to use if you’re
importing a timeline from another NLE in which clip resolution adjustments are imported as
scaling adjustments. Choosing “Center Crop with no resizing” prevents DaVinci Resolve from
“double scaling” clips in imported timelines.
– Scale full frame with crop: Clips of differing resolution are scaled so that the clip fills the frame
with no blanking. Excess pixels are cropped. This is a good setting when you want clips that don’t
match the project resolution to automatically fill the frame, with no letterboxing or pillarboxing.
– Scale entire image to fit: The default setting. Clips of differing resolution are scaled so that
each clip fills the frame without cropping. The dimension that falls short has blanking inserted
(letterboxing or pillarboxing). This is a good setting when you want clips that don’t match the
project resolution to automatically fit into the frame without being cropped in any way, and you’re
fine with letterboxing or pillarboxing as a result. However, if you’ve imported a timeline from
another NLE and some clips appear twice as big as they should be, it's because this setting is on
by default while the imported timeline also carries its own scaling settings for clips that didn't match
the timeline resolution, so those clips are being scaled twice. If this happens, switch to "Center crop
with no resizing" instead, and that will fix the problem.
– Stretch frame to all corners: Useful for projects using anamorphic media. Clips of differing
resolutions are squished or stretched to match the frame size in all dimensions. This way,
anamorphic media can be stretched to match full raster or full raster media can be squished
to fit into an anamorphic frame. An added benefit of this setting is that it makes it easy to mix
anamorphic and non-anamorphic clips in the same project.
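The sizing math behind these options is straightforward; the sketch below (plain Python, not DaVinci Resolve code) shows the scaled dimensions an SD clip would take on in an HD timeline under each option, with rounding for readability.
# Sketch of the geometry behind the mismatch options, placing a 720x486 SD
# clip into a 1920x1080 timeline. Illustration only; values are rounded.
def fit(clip_w, clip_h, tl_w, tl_h, mode):
    if mode == "center crop":        # no resizing; blank or crop the difference
        return clip_w, clip_h
    if mode == "scale to fit":       # largest scale that never crops
        s = min(tl_w / clip_w, tl_h / clip_h)
    elif mode == "scale with crop":  # smallest scale that fills the frame
        s = max(tl_w / clip_w, tl_h / clip_h)
    elif mode == "stretch":          # distort independently in each dimension
        return tl_w, tl_h
    return round(clip_w * s), round(clip_h * s)

for mode in ("center crop", "scale to fit", "scale with crop", "stretch"):
    print(mode, fit(720, 486, 1920, 1080, mode))
# "scale to fit"    -> (1600, 1080): pillarboxed at the sides
# "scale with crop" -> (1920, 1296): 216 excess vertical pixels are cropped away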
Output Image Scaling Project Settings
Another group of settings found in the Image Scaling panel of the Project Settings lets you optionally
choose a different resolution to be output, either via the Deliver page, or via your video output
interface for monitoring or outputting to tape.
In particular, if you set the "Resolution" in the Render Settings panel of the Deliver page to something
other than the timeline resolution, these settings are used to make the change. This is useful when
you're mastering a high resolution 4K project but monitoring on an HD display, and you plan on
eventually outputting HD deliverables in addition to the 4K deliverables, and you want those HD
outputs to use different Scaling and/or Resize Filter settings that work better at the lower resolution.
– Match timeline settings: This checkbox is turned on by default so that these settings mirror the
Image Scaling and Input Scaling settings described above. Turning this checkbox off lets you
choose different settings to be used when monitoring, outputting to tape, or rendering, using the
other settings below.
– Output resolution: Lets you choose an alternate resolution for monitoring and delivery. You can
also set this from the “Resolution” drop-down menu of the Video panel in the Render Settings of
the Deliver page.
– For “X x Y” processing: Lets you specify a different custom alternate resolution.
– Pixel aspect ratio: Lets you specify an alternate pixel aspect ratio to match the
alternate timeline format.
– Mismatched resolution files: Lets you choose an alternate way of handling mismatched resolution
files that works better for the alternate resolution you’ve chosen. These options work similarly
to those of the “Input Scaling” group. For example, for an HD or UHD resolution project you may
have the Image Input Scaling set to “Scale Full Frame With Crop” so that all Standard Definition
resolution files are center-cut to eliminate blanking. However, if you’re using Output Image Scaling
to create a Standard Definition deliverable, you may want to set the Output Image Scaling >
Mismatched resolution files setting to “Scale entire image to fit” in order to letterbox all HD or UHD
resolution clips, while preserving the original aspect ratio of the SD clips.
– Super Scale: Sets a very processor-intensive and high quality upscaling algorithm that actually
creates new pixels for the resized image. The possible values are None, 2x, 3x, 4x, and Auto.
Clip-Specific Scaling Settings
There’s an additional set of Scaling and Resize Filter settings, available in the Video Inspector for
selected clips, that provide the same options as those found in the Project Settings window, except
that they let you choose settings that will be specific to a particular clip. These are valuable for
situations where the project-wide scaling setting is working for most clips, but you have a handful of
specific clips that would benefit from individual settings.
Edit Sizing in the Cut and Edit pages
The Video Inspector contains a set of Transform parameters with which you can alter clips in the
Timeline. These parameters operate independently of the Input Sizing controls found in the Color
page. Separate Edit sizing controls serve a number of different functions:
– They’re convenient for editors and are easily animated for creating motion graphics effects right
on the Cut and Edit page timelines. They also keep editor transform adjustments separate from
colorist transform adjustments, for a clear division of labor and responsibility.
– Edit sizing parameters also store incoming transform data from imported AAF and XML projects
that come from other applications, so that imported transforms are kept separate from adjustments
made by colorists and finishing artists.
The Transform parameters in the
Inspector of the Edit page
If, when importing an AAF or XML project file, you turned on the “Use sizing information” checkbox,
then every clip that had position, scale, rotation, or crop settings applied in the originating NLE will
have those adjustments applied to these transform parameters, which is convenient for keeping
imported transform settings separate from other DaVinci Resolve-native transform settings.
Additionally, a set of Dynamic Zoom parameters also exists in the Video Inspector, which lets you
quickly create animated transforms using graphical controls that correspond to the start and end
states of the animation. However, these transforms are lumped in with the other Edit page Transform
parameters in terms of the order of sizing operations occurring throughout DaVinci Resolve.
The Dynamic Zoom settings in the Video Inspector
The transform that’s made via the Edit Sizing controls refers back to either the source resolution of
each clip, or the resolution output by the Fusion page if it’s in use.
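Edit sizing parameters can also be adjusted programmatically, which is handy when many clips need the same offset. The sketch below is a minimal example; the property names ("ZoomX," "ZoomY," "Pan," "Tilt") follow the timeline item properties listed in the scripting API documentation, so treat them as assumptions to verify against your version.
# Minimal sketch: apply the same Edit sizing adjustment to every clip on
# video track 1 of the current timeline via the scripting API. Property
# names are assumptions drawn from the scripting README; confirm them for
# your Resolve version before relying on this.
import DaVinciResolveScript as dvr_script

resolve = dvr_script.scriptapp("Resolve")
timeline = resolve.GetProjectManager().GetCurrentProject().GetCurrentTimeline()

for item in timeline.GetItemListInTrack("video", 1):
    item.SetProperty("ZoomX", 1.1)   # 110% zoom
    item.SetProperty("ZoomY", 1.1)
    item.SetProperty("Pan", 20.0)    # assumed units: pixels at timeline resolution
    item.SetProperty("Tilt", -10.0)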
Image Stabilization
DaVinci Resolve provides Image Stabilization controls in the Cut, Edit, and Color pages that all control
the same transform operation that happens between Edit sizing and Input Sizing in the image
processing pipeline. The transform that’s made via the Image Stabilization controls refers all the way
back to either the source resolution of each clip, or the resolution output by the Fusion page if
it’s in use.
Input Sizing on the Color Page
The Sizing palette on the Color page has another dedicated set of keyframable transform parameters
that work with the various DaVinci control panels to let the colorist apply pan and scan adjustments
while working through a project. These parameters work independently of the Edit page Transform
parameters, allowing you to keep imported transform settings separate from other transform settings
that you apply. However, for convenience the Edit sizing controls are available in the Color
page as well.
The transform that’s made via the Input Sizing controls refers all the way back to either the source
resolution of each clip, or the resolution output by the Fusion page if it’s in use.
Node Sizing on the Color Page
Using Node Sizing, you can apply individual sizing adjustments to clips on a per-node basis within the
Color page, which is similar in principle to using Transform nodes in the Fusion page. All Node Sizing
adjustments within a grade are cumulative, and any keyframing done to Node Sizing parameters is
stored in that node’s Node Format keyframe track in the Keyframe Editor. Two good examples of Node
Sizing include realigning color channels individually in conjunction with the Splitter/Combiner nodes or
duplicating windowed regions of an image by moving them around the frame. Subsequent Node
Sizing operations do not refer back to the source resolution of a clip, so using multiple Node Sizing
operations to reduce and enlarge an image will reduce image resolution and sharpness.
Output Sizing on the Color Page
Output sizing is an additional transform that is applied after Edit sizing, Fusion sizing, Input sizing, and
Node sizing. It’s an overall adjustment that affects every clip at once, which is suitable for making
last-minute format alterations that you want to affect the entire program. Technically, Output Sizing
includes the Blanking controls, but those are important enough to discuss separately. Output Sizing
also does not refer back to the source resolution of clips, so if you use Edit or Input Sizing to shrink a
clip, and Output Sizing to enlarge it again, the final result will be somewhat softened as you’re
enlarging the lower resolution image output by Input Sizing.
Output Blanking
Output blanking is not a sizing operation, but it’s often related and so worth mentioning here. Blanking
is an adjustment you can use to add black areas to the top, bottom, left, or right of an image, in order
to add “letterboxing” (black bars at the top and bottom of the image) or “pillarboxing” (black bars at the
left and right of the image) that lets you fill in the unused parts of an image frame that’s either shorter
or thinner than the current output resolution.
Once all transforms, compositing operations, and color corrections have been applied by the
DaVinci Resolve image processing pipeline, the very last operation to be performed is Output
blanking, if it’s enabled. This guarantees that overlapping images, grading, and other adjustments are
properly “blacked out” no matter what you’re doing to the program.
Output Blanking controls are found in the Timeline menu (as a series of aspect ratios) as well as in the
Output Sizing parameters of the Color page Sizing palette (via Top, Right, Bottom, and Left controls).
TIP: Text and graphics superimposed via the Data Burn-In window, if enabled, are the only
effects that will appear in front of picture areas affected by blanking. This lets you add
timecode and other information over letterboxed areas that you don’t want to obscure
the picture.
Format Resolution on the Delivery Page
By default, the Format Resolution setting in the Render Settings of the Deliver page matches the
timeline resolution when “Match timeline settings” is enabled in the Output Scaling Preset in the Image
Scaling panel of the Project Settings.
Choosing a new resolution from the “Set Resolution to” drop-down menu lets you override the current
Format Resolution setting before rendering. Using this control, you can queue up multiple jobs, each
set to a different resolution, to output multiple formats during a single render session.
For more information on rendering and setting up jobs for the Render Queue, see Chapter 185,
“Using the Deliver Page.”
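Queuing the same timeline at several resolutions can also be scripted. The sketch below is a minimal example; the render-setting keys used ("TargetDir," "CustomName," "FormatWidth," "FormatHeight") are taken from the scripting API documentation, and the output path is a hypothetical placeholder, so adjust both for your own setup.
# Minimal sketch: queue the current timeline twice at different output
# resolutions, mirroring the "Set Resolution to" workflow described above.
# The render-setting keys come from the scripting README; the target
# directory is a hypothetical placeholder.
import DaVinciResolveScript as dvr_script

resolve = dvr_script.scriptapp("Resolve")
project = resolve.GetProjectManager().GetCurrentProject()

for name, width, height in (("UHD_master", 3840, 2160), ("HD_review", 1920, 1080)):
    project.SetRenderSettings({
        "TargetDir": "/renders",    # hypothetical output folder
        "CustomName": name,
        "FormatWidth": width,
        "FormatHeight": height,
    })
    project.AddRenderJob()

project.StartRendering()            # renders every job in the queue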
Rendering Sizing Adjustments and Blanking
When rendering your final output, you have the option of choosing whether or not to “bake in” the
sizing operations that have been performed. For example, you may have set up a whole set of specific
sizing adjustments for the clips in a program, but then you’re requested to render the project and its
media as individual clips for round trip re-delivery to the editor for further work. In this case, you can
choose to either render the sizing into the final media, or not.
Whether or not sizing is rendered into your final media depends on the "Disable sizing and blanking
output" checkbox in the Advanced Settings options of the Render Settings panel. You can disable sizing and
blanking either when rendering the current timeline as a single clip, or when rendering individual clips.
– If “Disable sizing and blanking output” is turned off: Output Blanking, Cut and Edit page sizing
adjustments, Color page Input and Output Sizing adjustments, and Image Stabilization are
rendered into the final rendered media using the optical-quality sizing algorithms available to
DaVinci Resolve. This is best if your sizing adjustments are approved and final, and you want to
“bake” sizing adjustments into the final media you’re delivering.
– If “Disable sizing and blanking output” is turned on: Output Blanking, Cut and Edit page sizing
adjustments, Color page Input and Output Sizing adjustments, and Image Stabilization are not
rendered, and each clip will be rendered either at the source resolution if “Render at source
resolution” is enabled in individual clips mode, or to the currently specified resolution of the
timeline or project. However, the sizing adjustments you’ve made will be exported as part of
the XML or AAF file that you're exporting. This is best for workflows where the editor wants to
continue adjusting sizing, relative to the original size of the clips, after you've handed off the
graded project.
Keep in mind that if you want to render Input Sizing adjustments into the media you’re outputting, the
“Force sizing to highest quality” checkbox guarantees that DaVinci Resolve will use the highest-quality
sizing setting, even if you’ve temporarily chosen a faster-processing option for a slower computer.
NOTE: “Disable sizing and blanking output” does not disable any transform operations that
happen within the Fusion page. Those will continue to be applied to the final output.
Chapter 12
Data Burn-In
This chapter covers how to use the Data Burn-In window that’s available to every
page in DaVinci Resolve.
Contents
Data Burn-In 252
Project vs. Clip Mode 253
Setting Up Burned-In Metadata 253
Saving and Loading Burn-In Presets 253
Data Burn-In Metadata 254
Custom Output Options 255
Gang Rendered Text Styles 256
Prefix Render Text 256
Data Burn-In
The Data Burn-In window lets you display select metadata as a timeline-wide “window burn” that’s
superimposed over the image in the Viewer. This window burn is written into files that you render in
the Deliver page, and it’s also output to video, for viewing on your external display, or for
outputting to tape.
The Burn-In window is available by choosing Workspace > Data Burn-In.
Data Burn-In window
Traditionally, window burns are useful as a reference when creating offline media that you need to
keep track of later. However, the Data Burn window is extremely flexible. For example, it’s also useful
for watermarking review files that you don’t want to be distributed accidentally with either custom text
or graphics with alpha channels, for adding graphical logos or “bugs” to programs in preparation for
broadcast (again, optionally using graphics with alpha channels), for superimposing custom reference
guidelines of some sort over the images being monitored, or even just for temporarily displaying
timecode or clip names to refer to on your monitor while editing, mixing, or reviewing graded dailies
with a client.
Viewer displaying record timecode, source timecode, and source clip name
Project vs. Clip Mode
Two buttons at the top of the Data Burn window let you choose whether you want to edit one set of
burned-in metadata that will be displayed for the entire duration of the Timeline, or edit burned-in
metadata on a clip-by-clip basis. You can combine the two, having timeline-wide window burn settings
and separate clip-specific window burn settings for a handful of clips in that timeline at the same time.
When rendering in the Delivery page, window burns are applied both when rendering timelines as
individual source clips and when rendering as one single clip.
Two separate panels let you adjust project-wide
window burns vs. clip-specific window burns
Setting Up Burned-In Metadata
Setting up different clip and project metadata to output as a window burn is easy.
To set up a window burn:
1 Choose Workspace > Data Burn-In.
2 Click Project or Clip at the top of the Data Burn-In window.
3 Turn on the checkboxes of whatever items of metadata you want to display in the “Add to Video
Output” column. More information about the available items appears later in this chapter.
The first item of metadata is centered near the bottom of the frame, above Action Safe. Each
additional item of metadata you turn on for display is added above whichever items are already
displayed, regardless of their position in the “Add to Video Output” list.
4 Click any currently enabled item of metadata from the list to highlight it in black, and edit that
item’s Custom Output parameters at the right. More information about the available parameters
appears later in this chapter.
To reset the current window burn setup:
Click the Reset button next to the Option drop-down menu to reset the current mode of the Data
Burn window.
Saving and Loading Burn-In Presets
If there are common sets of metadata that you regularly use and switch among, you can save each set
up as a preset for future use.
To save a burn-in preset:
1 Click the Option menu and choose Save As New Preset.
2 Type a name into the Burn In Preset dialog that appears, and click OK. That preset is added to the
list of saved presets in the Option menu.
To delete a burn-in preset:
1 Choose a preset from the Option menu.
2 Click the Option menu, and choose Delete.
3 A dialog box appears asking you to confirm the deletion.
To modify a burn-in preset:
1 Choose a preset from the Option menu.
2 Edit it however you like.
3 Click the Option menu, and choose Update.
Data Burn-In Metadata
The leftmost column in the Data Burn-In window contains a list of all the options that you can add to
the video output as a window burn. Each option has a checkbox that lets you turn it on or off. You can
also use the Option drop-down menu to choose whether the item name is rendered as a prefix to the
burn-in data.
NOTE: If two clips overlap in the Timeline, the metadata that matches the currently visible clip
in the Viewer is what will be displayed in the window burn.
– Record Timecode: The timecode relative to the Timeline, as set in the Conform Options section of
the General Options panel of the Project Settings.
– Record Frame Number: The number of frames from the first frame of the Timeline.
– Source Timecode: Each clip’s individual timecode.
– Source Frame Number: The number of frames from the first frame of the clip.
– Record TC & Frame Num: Both metadata options combined in one line.
– Source TC & Frame Num: Both metadata options combined in one line.
– Source & Record TC: Both metadata options combined in one line.
– Feet + Frames 35mm: Displays a Feet + Frames conversion of the program’s record timecode,
calculated for 35mm film.
– Feet + Frames 16mm: Displays a Feet + Frames conversion of the program’s record timecode,
calculated for 16mm film.
– Audio Timecode: The timecode of audio that’s been synced to a clip.
– Keycode: Also referred to as edge-code, the identification codes running along the edge of film
stocks that provide an absolute reference for which digital frames correspond to which film frames.
– Source File Name: The full file path, including file name, of the media file that’s
linked to the current clip.
– Record File Name: The file name as defined in the Render Settings list of the Deliver page.
– Source Clip Name: The file name of the media file that’s linked to the current clip,
without the file path.
– Custom Text1: A line of text that you type into the Text field of the Custom Output parameters.
You can use any characters you like. When editing any of the three custom text fields that are
available, you can use “metadata variables” that you can add as graphical tags that let you
display clip metadata. For example, you could add the corresponding metadata variable tags
%scene_%shot_%take and the custom text would display “12_A_3” if “scene 12,” “shot A,” “take 3”
were its metadata. For more information on the use of variables, as well as a list of all variables that
are available in DaVinci Resolve, see Chapter 16, “Using Variables and Keywords.”
– Custom Text2: A second line of text that you can customize.
– Custom Text3: A third line of text that you can customize.
– Logo1: Lets you superimpose a graphic over the image in a customizable location. Compatible
graphics formats include PNG, TGA, TIF, BMP, and JPG. Alpha channels are supported for
transparency in logos.
– Logo2: Lets you superimpose a second graphic.
– Logo3: Lets you superimpose a third graphic.
– Reel Name: The currently defined reel number for the current clip.
– Shot: Shot metadata, if it’s been written to the file by a camera, or entered into the Metadata Editor
on the Media page.
– Scene: Scene metadata, if it’s been written to the file by a camera, or entered into the Metadata
Editor on the Media page.
– Take: Take metadata, if it’s been written to the file by a camera, or entered into the Metadata
Editor on the Media page.
– Angle: Angle metadata, if it’s been written to the file by a camera, or entered into the Metadata
Editor on the Media page.
– Day: Day metadata, if it’s been written to the file by a camera, or entered into the Metadata Editor
on the Media page.
– Date: Date metadata, if it’s been written to the file by a camera, or entered into the Metadata
Editor on the Media page.
– Good Take: Corresponds to Good Take metadata, if it’s been written to the file by a camera, or
entered into the Metadata Editor on the Media page.
– Camera: Corresponds to the Camera metadata, if it’s been written to the file by a camera, or
entered into the Metadata Editor on the Media page.
– Roll/Card: Corresponds to the Roll/Card metadata, if it’s been written to the file by a camera, or
entered into the Metadata Editor on the Media page.
Custom Output Options
The parameters in the Custom Output panel let you modify the look, position, and in some cases
content, of the selected metadata item. Pan and Tilt are individually customizable for each
metadata item.
– Display During First x frames: Turning on this checkbox lets you specify a number of frames
during which the current item of metadata will be displayed before dissolving away over one
second. When enabled, the current item of metadata will cut onscreen with the beginning of each
new clip, remain onscreen for the duration specified, and then dissolve away.
– Display During Last x frames: Turning on this checkbox lets you specify a number of frames
before the end of each clip during which the current item of metadata will appear onscreen after
fading up over one second, before cutting away with the end of the clip.
– Font: Defaults to Courier, but you can choose any font that’s installed on your system.
– Size: Defaults to 48, but you can choose standard increments from 6 to 72.
– Alignment: Defaults to Center. The only other option is Left.
– Font (color): Defaults to white, but you can choose from a range of predefined colors in this drop-
down menu.
– Background: Defaults to black, although the apparent color is influenced by the Opacity
setting. For a more garish look, you can choose from a range of predefined colors in this
drop-down menu.
– Text Opacity: Defaults to 1.00. Lets you define the transparency of the burned-in metadata’s text.
– Background Opacity: Defaults to 1.00. Lets you define the transparency of the burned-in
metadata’s background color.
– X-Y Position: Lets you change the horizontal and vertical orientation of the current item of
metadata. The default horizontal value is the center of the frame, relative to the current project’s
frame size. The first item of metadata is centered vertically near the bottom of the frame, above
Action Safe. Each subsequent item of metadata you turn on is automatically placed above the
previous item of metadata, regardless of its order in the “Add to Video Output” list.
– Text: (only if one of the Custom Text options is checked) A text field that lets you enter custom text
to display as one of three possible custom text items.
– Logo: (only if one of the Logo options is checked) A field that displays the file path of any currently
selected graphic that you’re displaying as one of the three possible Logo graphics. Compatible
graphics formats include PNG, TGA, TIF, BMP, and JPG. Alpha channels are supported for
transparency in logos.
– Import File button: (only if one of the Logo options is checked) Lets you choose a graphics file to
use as a logo.
Gang Rendered Text Styles
You have the option of independently styling each item of metadata, depending on whether the Gang
Render Text Styles option is checked in the Data Burn-In window’s Option menu. When turned on, all
text metadata share the same font, size, color, background, justification, and opacity. When turned off,
each item of metadata can have individual settings.
Prefix Render Text
Another option in the Data Burn-In window’s Option menu lets you turn the prefixes, or headers, on or
off for all metadata that’s enabled to be burned in.
Chapter 13
Frame.io & Dropbox
Replay Integration
DaVinci Resolve has sophisticated integrations with Frame.io and Dropbox Replay,
video review and collaboration services designed specifically for the postproduction
industry.
Contents
Enabling Frame.io Integration in Preferences 258
Deliver and Upload to Frame.io 258
Frame.io Comments Sync with Timeline Markers 259
Importing Media from Frame.io 261
Linking Media Pool Clips and Timelines with Frame.io Clips 261
Enabling Dropbox Replay Integration in Preferences 262
Deliver and Upload to Dropbox Replay 262
Dropbox Replay Comments Sync with Timeline Markers 263
Working With Dropbox Markers 264
Enabling Frame.io Integration
in Preferences
An Internet Accounts panel in the System tab of the DaVinci Resolve Preferences lets you sign into
your Frame.io account and specify a local cache location for media being synced with Frame.io. You’ll
need to enter your login name and password to enable Frame.io integration, but once entered,
DaVinci Resolve will sign in automatically when DaVinci Resolve opens.
The Internet Accounts panel of the System tab of the
DaVinci Resolve Preferences window (login deliberately obscured)
The local cache location is used to store clips you import into a DaVinci Resolve project from the
Frame.io volume in the Media Storage panel of the Media page.
Deliver and Upload to Frame.io
A Frame.io preset at the top of the Deliver page’s Render Settings panel lets you render and upload a
program for review. All options in the Render Settings panel update to present suitable controls for
this process.
Choosing the Frame.io preset
When you choose the Frame.io preset, the Location field turns into an Upload To field, and the Browse
button lets you choose a project and folder path to which to upload the exported result.
Choosing a Frame.io account to deliver a program to
When you export to Frame.io, the available choices in the Resolution, Format, Video Codec, and Type
pop-up menus are limited to those that are most suitable for Frame.io file sharing. Choose the desired
export options, then click the Add to Render Queue button to add this job to the Render Queue as you
would with any other export. When that job is rendered, it automatically proceeds to upload to
Frame.io, and an upload percentage indicator appears in the job listing to show how far along this upload is.
When it’s finished, the job displays the text “Upload completed.”
The job in the Render Queue shows you the
percentage the file has uploaded so far
This upload is done in the background, so you can continue working on other things in
DaVinci Resolve while the file uploads. If you want to see how long the upload will take on any other
page, you can choose Workspace > Background Activity to see the Background Activity window.
Frame.io Comments Sync
with Timeline Markers
When you render a timeline directly to Frame.io, that timeline is automatically linked to the movie that’s
been uploaded to Frame.io, and all comments, “Likes,” and graphical annotations (drawings and
arrows) from reviewers that are added online via the Frame.io interface are automatically synced to
Frame.io markers on your timeline (so long as your computer has an active internet connection).
Frame.io markers are distinct from all other markers and can be independently shown and hidden, or
deleted. Drawings and arrows from Frame.io are converted into their equivalent DaVinci Resolve
annotation graphics for visibility in DaVinci Resolve.
Comments and graphical annotations from Frame.io appear as markers
with their corresponding overlays in your DaVinci Resolve Timeline
Working With Frame.io Markers
Double-clicking any Frame.io marker in the Timeline opens a dialog that lets you send replies to
comments that appear on Frame.io, enabling editors to respond directly to commenters.
The editor talking to himself using the Frame.io comment
dialog that appears when you open a Frame.io marker
You can also place Frame.io markers on the Timeline to have them automatically sync back to
Frame.io, giving you the ability to send your own comments back to commenters (be kind).
If you delete one or more Frame.io markers on the DaVinci Resolve timeline, those markers will also be
deleted in Frame.io. This includes the Mark > Delete All Markers > Frame.io command. This is not
undoable.
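If you'd like a quick text summary of the comments that have synced to your timeline, the markers themselves can be read through the scripting API. The sketch below simply lists every marker with its note; whether Frame.io-synced markers are exposed here, and what color value they report, is an assumption to verify on your own system.
# Minimal sketch: print every marker on the current timeline with its note,
# as a quick way to review synced comments in text form. Whether Frame.io
# markers appear in this list, and how they are colored, is an assumption.
import DaVinciResolveScript as dvr_script

resolve = dvr_script.scriptapp("Resolve")
timeline = resolve.GetProjectManager().GetCurrentProject().GetCurrentTimeline()

markers = timeline.GetMarkers()  # dict keyed by frame offset from timeline start
for frame in sorted(markers):
    info = markers[frame]
    print(frame, info.get("color"), info.get("name"), "-", info.get("note"))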
Frame.io Marker Navigation
You can specifically navigate only the markers created in Frame.io while in the comment dialog of a
Frame.io marker, using the Previous Marker (Shift-UpArrow) and Next Marker (Shift-DownArrow)
commands. This allows you to skip directly from comment to comment in Frame.io without having to
either navigate all markers in-between, or double-click each Frame.io marker individually to respond.
Frame.io interoperability is a Studio Only feature.
Importing Media from Frame.io
A Frame.io volume appears in the Media Storage panel of the Media page that lets you access the
media available from your Frame.io account. Within this Frame.io volume, a top-level directory
represents your account directory, and within that each project you’ve created in Frame.io appears as
a sub-directory.
Accessing the directories of a Frame.io account from the Media Storage browser
Any media files that can be accessed in Media Storage can be imported into the Media Pool via the
usual methods. Once added to the Media Pool, that media file downloads in the background to the
specified local cache location, but it’s immediately available via your internet link until the download is
complete, so you can begin working immediately. If you want to see how long the download will take,
you can choose Workspace > Background Activity to see the Background Activity window.
The Background Activity window lets you see
what’s happening in the background while you work
Linking Media Pool Clips and
Timelines with Frame.io Clips
You can also use Frame.io accessibility in the Media Storage panel of the Media page to link clips or
timelines with media that's already uploaded to your Frame.io account. Just locate and select a
Frame.io clip in Media Storage, then right-click the clip or timeline you want to link it to in the Media Pool and
choose Link to Frame.io Media from the contextual menu.
If you’ve linked a Frame.io clip to a timeline, comments made on that Frame.io clip appear on the
linked timeline as Frame.io markers, just as if you’d exported that timeline directly to Frame.io.
Enabling Dropbox Replay Integration in
Preferences
An Internet Accounts panel in the System tab of the DaVinci Resolve Preferences lets you sign into
your Dropbox account. You’ll need to enter your login name and password to enable Dropbox
integration, but once entered, DaVinci Resolve will sign in automatically when DaVinci Resolve opens.
The Dropbox Login window in the Internet Accounts panel of
the System tab of the DaVinci Resolve Preferences window.
Deliver and Upload to Dropbox Replay
A Dropbox Replay preset at the top of the Deliver page’s Render Settings panel lets you render and
upload a program for review. All options in the Render Settings panel update to present suitable
controls for this process.
NOTE: The Dropbox Replay Render settings are separate from the normal Dropbox Render
settings, and you need to use this specific set of presets to integrate with Dropbox Replay.
The Dropbox Replay Render settings (highlighted). Note they are
different from the normal Dropbox Render settings to the left.
When you export to Dropbox Replay, the available choices in the Resolution, Format, Video
Codec, and Audio pop-up menus are limited to those that are most suitable for Dropbox Replay.
Choose the desired export options, then click the Add to Render Queue button to add this job to
the Render Queue as you would with any other export. When that job is rendered, it automatically
proceeds to upload to Dropbox Replay, and an upload percentage indicator appears in the job
listing to show how far along this upload is. When it’s finished, the job displays the text
“Upload completed.”
The job in the Render Queue shows you the percentage the file has
uploaded, and lets you know when it’s completed.
This upload is done in the background, so you can continue working on other things in
DaVinci Resolve while the file uploads. If you want to see how long the upload will take on any other
page, you can choose Workspace > Background Activity to see the Background Activity window.
Dropbox Replay Comments Sync with
Timeline Markers
When you render a timeline directly to Dropbox Replay, that timeline is automatically linked to the
movie that’s been uploaded to Dropbox Replay, and all comments, and graphical annotations
(drawings and arrows) from reviewers that are added online via the Dropbox Replay interface are
automatically synced to Dropbox markers on your timeline (so long as your computer has an active
internet connection). Dropbox markers are distinct from all other markers and can be independently
shown and hidden or deleted. Drawings and arrows from Dropbox Replay are converted into their
equivalent DaVinci Resolve annotation graphics for visibility in DaVinci Resolve.
Comments and graphical annotations from Dropbox Replay appear as markers
with their corresponding overlays in your DaVinci Resolve timeline.
Working With Dropbox Markers
Double-clicking any Dropbox marker in the Timeline opens a dialog that lets you send replies to
comments that appear on Dropbox Replay, enabling editors to respond directly to commenters.
The Dropbox Replay comment dialog that
appears when you open a Dropbox marker
You can also place Dropbox markers on the Timeline to have them automatically sync back to
Dropbox Replay, giving you the ability to send your own comments back to commenters (be kind).
Dropbox markers on the Timeline show as solid blue when they are created, and display a circle inside
them once they are synced with Dropbox Replay.
If you delete one or more Dropbox markers on the DaVinci Resolve timeline, those markers will also be
deleted in Dropbox Replay. This includes the Mark > Delete All Markers > Dropbox command. This is
not undoable.
Dropbox Marker Navigation
You can specifically navigate only the markers created in Dropbox Replay while in the comment dialog
of a Dropbox marker, using the Previous Marker (Shift-UpArrow) and Next Marker (Shift-DownArrow)
commands. This allows you to skip directly from comment to comment in Dropbox Replay without
having to either navigate all markers in-between, or double-click each Dropbox marker individually
to respond.
Chapter 14
Resolve Live
The Color page has another mode available to aid you in using DaVinci Resolve in
on-set grading workflows. Turning the Resolve Live option on puts DaVinci Resolve
into a live grading mode, in which an incoming video signal from a camera can be
monitored and graded during a shoot.
Contents
More About Resolve Live 267
Configuring Your System for Resolve Live 267
Grading Live 268
Getting Started 268
Going Live 268
Using Freeze 269
Using Snapshot 269
Using Resolve Live Grades Later 270
Using LUTs in Resolve Live Workflows 270
More About Resolve Live
Resolve Live has been designed to let you use all of the features of DaVinci Resolve to grade these
on-set video previews, in the process saving video snapshots that contain a captured image, your
grade, and reference timecode from the camera. The idea is that, using Resolve Live, you can work
with the cinematographer to develop looks and test lighting schemes on the footage being captured
during the shoot, and then later you can use those looks to build dailies, and as a starting point for the
final grade once the edit has been completed.
Additionally, you can use Resolve Live in conjunction with other Color page features such as the
Alpha output to build test composites to check green screen shots, comparing them against imported
background images in order to aid camera positioning and lighting adjustments. The built-in video
scopes can also be used to monitor the signal levels of incoming video. Finally, you can use 1D and
3D LUTs to monitor and grade log-encoded media coming off the camera.
Configuring Your System for Resolve Live
Setting up Resolve Live is straightforward. Whether you’re using a tower workstation or a laptop, any
of the Blackmagic Design DeckLink or UltraStudio video interfaces can be used to connect your
DaVinci Resolve workstation to a camera and external video display. The important thing to keep in
mind is that, if you want to connect to a live incoming signal and output that signal for monitoring at the
same time, you need to either use two separate DeckLink PCIe cards or UltraStudio Thunderbolt
interfaces, or a single DeckLink Duo or DeckLink Studio card with multiple separate inputs and outputs
on a single PCIe card.
The Video and Audio I/O panel of the System Preferences provides two sets of options for configuring
video interfaces connected to your computer, one for playback, and one for capture. Resolve Live
uses the capture input.
Video input/output options in the System Preferences
During the shoot, the digital cinema camera in use needs to be connected to your DaVinci Resolve
workstation via HD-SDI, which must be configured to carry both the video image and timecode that
mirrors the timecode being written to each recorded clip. Most cameras allow timecode output over
HD-SDI, and both DeckLink and UltraStudio interfaces can pass this timecode to DaVinci Resolve.
Without a proper timecode reference, you won’t be able to take the shortcut of automatically syncing
your saved Snapshots to recorded camera original media using ColorTrace, although you can always
apply grades manually.
Grading Live
Once your camera and computer are appropriately connected and configured, using Resolve Live is
straightforward. This section describes the live grading workflow as it was designed to be used. Once
you’re familiar with the capabilities of Resolve Live, you may find your own ways of working that are
more in tune to the needs of your particular project.
Getting Started
When working with Resolve Live on a new shoot, you should begin with an empty project and a new
empty timeline, since the live grading workflow involves capturing live graded snapshots to an
otherwise unoccupied timeline. One recommended way of organizing the live grades of a shoot is to
create one new project per day of shooting. This way, snapshots captured during shoots using all 24
hours of time-of-day timecode won’t conflict with one another. Also, separate projects can make it
easier to use ColorTrace to copy grades from your live grade snapshots to the camera original media
you’ll be creating dailies from, eventually.
TIP: Having an empty Media Pool and timeline doesn’t mean you can’t install useful LUTs and
pre-import reference stills and saved grades to the Gallery, as these can be valuable tools for
expediting your on-set grading.
Once you’ve created your new project, you also need to choose the disk where all snapshots you take
will be saved. By default, snapshots are saved on the scratch disk at the top of the Scratch Disks list in
the Media Storage panel of the System Preferences. They’re automatically saved in a folder named
identically to the current project.
Going Live
Once you’ve created your day’s project, you need to turn on Resolve Live to begin work.
To turn on Resolve Live:
1 Open the Color page.
2 Choose Color > Resolve Live (Command-R).
A red Resolve Live badge at the top of the Viewer indicates that Resolve Live is turned on, and the
transport controls are replaced by the Freeze and Snapshot buttons.
A red badge shows that Resolve Live is active
and showing incoming video from the camera
At this point, the video from the connected camera should become visible within the Viewer, the
camera timecode should be displayed in the Viewer’s timecode window, and you can begin using all
of the capabilities of the Color page to begin grading whatever is onscreen, including Gallery split-
screens for matching and comparing. The current color adjustments in all palettes are automatically
applied to both the image in the Viewer and the video output to an external display (if there is one).
While Resolve Live is on, much of DaVinci Resolve’s non-grading functionality is disabled, so when
you’re finished, be sure to turn Resolve Live off.
To turn off Resolve Live, do one of the following:
– Click the Exit button at the bottom left-hand corner of the Viewer.
– Choose Color > Resolve Live (Command-R).
Using Freeze
In Resolve Live mode, the Freeze button (it looks like a snowflake) freezes the current incoming video
frame, so you can grade it without being distracted by motion occurring during the shoot. When you’ve
made the adjustment you need, you can unfreeze playback in preparation for grabbing a snapshot.
To freeze incoming video:
– Click the Freeze button (that looks like a snowflake).
– Choose Color > Resolve Live Freeze (Shift-Command-R).
The snowflake button freezes the image
so you can grade a particular frame
Using Snapshot
Once you’re happy with a grade, clicking the Snapshot button saves a snapshot of the current still in
the Viewer, the incoming timecode value, and your grade into the Timeline. Snapshots are simply
one-frame clips. They use grades and versions just like any other clip. In fact, ultimately there’s no
difference between the timeline created by a Resolve Live session and any other timeline, other than
that the Resolve Live timeline only has a series of one frame clips, which appear in the Timeline of the
Edit page as a series of 1-frame stills.
To save a snapshot, do one of the following:
– Click the snapshot button (with a camera icon).
– Choose Color > Resolve Live Snapshot (Command-Option-R).
The snapshot button saves a frame
and the grade for future use
For example, you may begin the process of building and refining a grade for a particular scene during
an unrecorded run-through. Then, once shooting starts, you may take snapshots of each shot’s slate,
and then of significant takes that follow, tweaking where necessary and in conjunction with the DP’s
feedback once things get going. New camera setups may require further tweaks, which you’ll save as
snapshots for those shots, and as you work in this way you’ll find yourself building up a timeline of
snapshots that correspond to that day’s shoot.
As you work, keep in mind that you must temporarily turn Resolve Live off in order to open a grade
from a previous snapshot in the Timeline to use as a starting point for another shot. You can
also save grades into the Gallery.
Using Resolve Live Grades Later
Since each Snapshot you capture during a Resolve Live session contains timecode that was captured
from the camera, grades from snapshots with timecode that overlaps recorded camera original media
can be synced using ColorTrace when the time comes to start making dailies.
Keep in mind that snapshot grades correspond to the monitored output of the camera during the
shoot. If you shot using a raw format, you’ll need to use whatever in-camera debayering settings were
used for monitoring during the shoot if you want the grades from your snapshots to produce the
same result.
For more information on using ColorTrace, see Chapter 145, “Copying and Importing Grades
Using ColorTrace.”
Using LUTs in Resolve Live Workflows
Many on-set workflows use Lookup Tables (LUTs) to calibrate displays, normalize log-encoded media
for monitoring, and preview looks in the video village to test how the current lighting scheme will work
with the intended grade. You can apply LUTs using the Lookup Tables section of the Project Settings’
Color Management panel, or within a grade as part of a node tree.
However, you can also export LUTs for monitor previewing if necessary, applying them by loading
them into a compatible LUT box connected between the camera’s video output and a display, or into a
display capable of loading LUTs internally.
If you’re exporting LUTs using the Generate 3D LUT command of the Thumbnail timeline’s contextual
menu, you should limit yourself to using only Primaries palette and Custom Curves palette controls
within a single node. These are the only grading controls that can be mathematically converted
into a LUT.
When exporting a LUT, any nodes that use Windows or OpenFX are ignored, along with all corrections
made within those nodes. All other nodes with Primaries palette and Custom Curves palette
adjustments that can be translated into a LUT have their combined result baked into the exported LUT.
For nodes that mix supported and unsupported adjustments (such as sharpening or blur filtering
operations), the unsupported adjustments are simply ignored.
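If you want to sanity-check an exported LUT outside of DaVinci Resolve, the following minimal Python sketch (not a Resolve feature; the file name is hypothetical) reads a 3D .cube file and applies it to a single normalized RGB value using a simple nearest-neighbor lookup. It assumes the common .cube conventions of a LUT_3D_SIZE header with the red index varying fastest; a production tool would use trilinear or tetrahedral interpolation instead.

def load_cube(path):
    size, table = None, []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line[0].isalpha() or line.startswith("#"):
                # Skip blank lines, comments, and keyword lines (TITLE, DOMAIN_MIN, etc.),
                # but remember the table size when we see it.
                if line.startswith("LUT_3D_SIZE"):
                    size = int(line.split()[1])
                continue
            table.append(tuple(float(v) for v in line.split()))
    return size, table

def apply_lut(rgb, size, table):
    # Quantize each 0-1 channel to a grid index; in .cube data the red index varies fastest.
    r, g, b = (min(int(round(c * (size - 1))), size - 1) for c in rgb)
    return table[r + g * size + b * size * size]

size, table = load_cube("exported_grade.cube")  # hypothetical file name
print(apply_lut((0.5, 0.25, 0.75), size, table))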
For more information on exporting LUTs, see “Exporting Grades and LUTs” in Chapter 138,
“Grade Management.”
NOTE: DaVinci Resolve exports LUTs in the .cube format, which is a DaVinci-developed
LUT format with no relation to the Adobe SpeedGrade .cube format.
Chapter 15
Stereoscopic
Workflows
DaVinci Resolve has robust support for a wide variety of stereoscopic workflows.
Using the built-in tools of the Studio version of DaVinci Resolve, you can edit using
stereoscopic clips, grade the resulting program, adjust each clip’s stereo-specific
properties such as convergence and floating windows, and master stereoscopic
output, all within DaVinci Resolve.
Contents
Stereoscopic Workflows 272
Hardware Requirements for Working in Stereo 3D 272
Setting Up to Display Stereo 3D via SDI 273
Setting Up to Display Stereo 3D via HDMI 273
Supported Stereo 3D Media 274
Using Dual Sets of Media in Any Supported Format 274
Using Stereoscopic OpenEXR Media 274
Using Stereoscopic CineForm Media 274
Creating Stereo 3D Clips From Separate Files 275
Step 1—Import and Organize Your Media 275
Step 2—Generate 3D Stereo Clips 275
Step 3—(Optional) Create Optimized Media 277
Monitoring Stereoscopic 3D in the Edit Page 277
Converting Clips Between Stereo and Mono 277
Converting Stereo Clips Back to Mono 277
Converting Mono Clips or an Entire Timeline to Stereo 277
Attaching Mattes to Stereo 3D Clips 278
Organizing and Grading Stereo 3D Dailies 278
Step 1—Create 3D Stereo Clips 278
Step 2—Edit the New Stereo Clips Into One or More Timelines for Grading 278
Step 3—Align Your Media 278
Step 4—Grading Stereo Media 279
Step 5—Output Offline or Online Media for Editing 280
Conforming Projects to Stereo 3D Media 281
Grading Mastered Stereoscopic Media From Tape 281
Adjusting Clips Using the Stereo 3D Palette 282
Stereo Eye Selection 282
Stereo 3D Geometry Controls 283
Swap and Copy Controls 284
Automatic Image Processing for Stereo 3D 285
Stereo 3D Monitoring Controls 287
Floating Windows 288
Stereo Controls on the DaVinci Control Panel 290
Outputting Stereo 3D Media in the Deliver Page 290
Rendering Frame-Compatible Media 290
Rendering Individual Left- and Right-Eye Clips 290
Stereoscopic Workflows
Creating a stereo 3D project is a multi-step process that benefits from careful media organization.
This chapter covers how to set up for working on stereoscopic projects, how to import stereoscopic
projects, and how to export stereoscopic media.
First, stereoscopic pairs of clips, i.e., the individual left- and right-eye media files, are imported into the
Media Pool, organized, and then linked together using the “Stereo 3D Sync” command to create a new
set of linked stereo clips. Then, these stereo clips can be either edited or conformed to imported
project data using a single Timeline. DaVinci Resolve lets you manage left- and right-eye grades and
sizing in the Color page using the controls found in the shortcut menu of the Thumbnail timeline, and in
the Stereo 3D palette.
If you’re using stereoscopic CineForm media, which contains muxed left-eye and right-eye image data
that can be decoded by DaVinci Resolve, you still need to go through this process, although you’ll be
using duplicate clips to populate Left and Right folders with matching sets of clips.
Hardware Requirements for
Working in Stereo 3D
With DaVinci Resolve on Mac systems, dual 4:2:2 Y’CbCr stereoscopic video streams are output via
SDI from a compatible Blackmagic Design video interface. You can select either Side-by-Side or
Line-by-Line output to be fed to your stereo 3D-capable display, depending on its compatibility. Alternately, if
you turn on the “Enable Dual SDI 3D Monitoring” checkbox in the Video Monitoring group of the
Master Settings panel of the Project Settings, your compatible Blackmagic Design video interface
outputs full resolution 4:2:2 Y’CbCr for each eye to compatible displays.
When setting up a 3D-capable DaVinci Resolve workstation, keep in mind that the dual video streams
of 3D projects make greater demands on disk bandwidth, media decoding via your workstation’s CPU,
and effects processing via your workstation’s available GPU cards.
Setting Up to Display Stereo 3D via SDI
All DaVinci Resolve systems can output a side-by-side frame-compatible signal that can be viewed on
a stereo 3D-capable display via a single SDI connection, output from a DeckLink HD Extreme card or
better. For higher-quality monitoring, two SDI signals can be used to output the left-eye and right-eye
images separately at full resolution using one of the following Blackmagic Design video interfaces:
– DeckLink HD Extreme 3D+
– DeckLink 4K Extreme
– DeckLink 4K Extreme 12G
– DeckLink 8K Pro
– UltraStudio 4K
– UltraStudio 4K Extreme
– UltraStudio 4K Extreme 3
Very old legacy systems accomplish this via NVIDIA dual SDI monitoring outputs.
NOTE: If your stereo display is not capable of multiplexing the two incoming SDI signals by
itself, you can accomplish this using an external device to multiplex both SDI signals into a
single stereo 3G signal that will be compatible. Check with your display manufacturer in
advance to see if this is necessary.
The following procedures describe how to set up stereo 3D monitoring in two different ways.
Monitoring via dual SDI to dual SDI:
1 Open the Master Settings panel of the Project Settings, then do the following:
– Make sure the Use 4:4:4 SDI checkbox is unchecked.
– Turn on the “Use dual outputs on SDI” checkbox.
2 Open the Stereo 3D palette in the Color page, and do the following:
– Set Vision to Stereo.
– Set the Out pop-up menu to None.
NOTE: When “Enable dual SDI 3D monitoring” is turned on, split-screen wipes and cursors
will not be visible on the grading monitor, nor will you be able to view image resizing.
Setting Up to Display Stereo 3D via HDMI
If your stereo-capable display only has HDMI input, you’ll need to use the HDMI output of a compatible
Blackmagic Design video interface that has HDMI 1.4 or better to output stereo 3D signals; see the
documentation accompanying your video interface for more information.
Supported Stereo 3D Media
When importing stereo 3D media from other applications, there are two types of media that are
compatible with DaVinci Resolve stereoscopic workflows.
Using Dual Sets of Media in Any Supported Format
When originally shot, the media corresponding to stereo 3D workflows consists of two directories, one
for the left-eye media, and one for the right-eye media. For the most automated workflow possible,
this media must be tightly organized. Each pair of left-eye and right-eye media files in both directories
should have matching timecode, and reel numbers that clearly indicate which are the left-eye shots,
and which are the right-eye shots. When organized in this way, it’s relatively easy to use
DaVinci Resolve to convert each matching pair of clips into the stereo 3D clips that you’ll need to work
with in DaVinci Resolve. This process is covered in detail in a subsequent section.
Using Stereoscopic OpenEXR Media
DaVinci Resolve is compatible with stereo OpenEXR files to accommodate professional cinema and
specialty workflows. Stereo OpenEXR clips include the media for both eyes stored as separate parts
so that a single OpenEXR file may output either a single image or stereo 3D images when used with
an application that supports it, such as DaVinci Resolve. This means you can edit stereo OpenEXR
media, grade it, and make all of the stereoscopic adjustments that the Stereo palette of the Color
page supports.
If you import stereo OpenEXR clips to the Media Pool, they will at first appear to be regular non-stereo
clips that output a single image. However, these can easily be converted to stereo 3D clips using the
following procedure.
To set stereo OpenEXR clips to be usable as stereo clips:
1 Import the OpenEXR media to the Media Pool as you would any other clips.
2 Select one or more OpenEXR clips, then right-click the selection and choose “Convert to Stereo”
from the contextual menu. Those clips will now appear with a stereo 3D badge to indicate that
they’re stereo.
Using Stereoscopic CineForm Media
DaVinci Resolve is also compatible with CineForm stereo QuickTime files. CineForm clips encode the
media corresponding to both eyes and mux (multiplex) it together in such a way so that CineForm files
may output either a single frame of image data, if used in an application that is not capable of
stereoscopic processing, or stereo 3D media when used with an application that is, such as
DaVinci Resolve. This means that you can edit CineForm media using nearly any NLE, export a project
via whatever workflow is convenient, and end up with a stereoscopic project that can be graded in
DaVinci Resolve.
There are two ways of creating CineForm files. One is by using a camera or recording system that
processes dual synchronized video signals to create a single set of CineForm media. The other is to
use the CineForm conversion tools that come with GoPro CineForm Studio to reprocess dual sets of
stereo 3D assets into the CineForm format.
The CineForm codec itself encodes full-frame image data using wavelet compression, at any
resolution, at up to 12-bits, in a choice of RGB, Y’CbCr, or RAW color spaces. DaVinci Resolve is
compatible with CineForm in a QuickTime wrapper using any supported color space, allowing access
to the dual streams of image data that are provided.
When the time comes to output your program, keep in mind that while DaVinci Resolve can read
CineForm files, CineForm files cannot be rendered out of DaVinci Resolve unless you’ve purchased an
encoding license for OS X or Windows from GoPro. Furthermore, DaVinci Resolve cannot render
Stereoscopic CineForm files.
If you import stereo CineForm clips to the Media Pool, they will at first appear to be regular non-stereo
clips that output a single image. However, these can easily be converted to stereo 3D clips using the
following procedure.
To set stereo CineForm clips to be usable as stereo clips:
1 Import the CineForm media to the Media Pool as you would any other clips.
2 Select the CineForm media you need to convert, then right-click the selection and choose
“Convert to Stereo” from the contextual menu. Those clips will now appear with a stereo 3D badge
to indicate that they’re stereo.
Creating Stereo 3D Clips
From Separate Files
If you’re working with stereo media that was either captured or created as individual left- and right-eye
files, then you need to convert each matching pair of clips into the stereo 3D clips that you’ll need to
work with in DaVinci Resolve. This is a two-step procedure, with an optional third step for creating optimized media.
Step 1—Import and Organize Your Media
You need to import all of the left-eye and right-eye media into separate bins.
1 Open the Media page, and create two Media Pool bins named “Left” and “Right.”
The exact names are not important, but the way the media is organized is.
2 Import all left-eye media into the “Left” bin, and all right-eye media into the “Right” bin. If you’re
importing stereoscopic CineForm media, you still need to create this kind of organization, which
requires you to place duplicates of each clip into each of the “Left” and “Right” bins.
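If you have a large volume of media to sort, the same bin structure can also be built with the DaVinci Resolve scripting API. This is a minimal Python sketch, assuming scripting access is set up for your installation; the import paths are hypothetical, and dragging folders into each bin works just as well.

import DaVinciResolveScript as dvr_script

media_pool = dvr_script.scriptapp("Resolve").GetProjectManager() \
    .GetCurrentProject().GetMediaPool()
root = media_pool.GetRootFolder()

left_bin = media_pool.AddSubFolder(root, "Left")
right_bin = media_pool.AddSubFolder(root, "Right")

# Import each eye's files into its own bin.
media_pool.SetCurrentFolder(left_bin)
media_pool.ImportMedia(["/Volumes/Stereo/LeftEye"])    # hypothetical folder of left-eye clips
media_pool.SetCurrentFolder(right_bin)
media_pool.ImportMedia(["/Volumes/Stereo/RightEye"])   # hypothetical folder of right-eye clips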
Step 2—Generate 3D Stereo Clips
Once you’ve organized your media appropriately, you’re set up to synchronize the left- and right-eye
clips using timecode.
1 Create a new bin in the Media Pool, and name it “Stereo Clips.” This is the bin that will eventually
contain the linked stereo clips you’re about to create.
How to organize media for working in stereo 3D
2 Right-click anywhere within the Media Pool and choose Stereo 3D Sync.
The Stereo 3D Sync dialog appears, with buttons for choosing the left-eye folder, choosing the
right-eye folder, choosing the output folder, and checkboxes for specifying whether to match reel
names and file names, and additional fields for entering characters that identify left- and
right-eye clips.
The Stereo Media Sync window
3 Click the Browse button corresponding to “Choose left eye folder” and then use the hierarchical
list of bins that appears to choose the bin you named “Left.” Follow the same procedure to choose
the right-eye media.
4 Click the Browse button corresponding to “Output folder” and then use the hierarchical list of bins
that appears to choose the bin you named “Stereo Clips.”
5 Choose which matching criteria to use. Ideally, you only need to use whichever one of the three
criteria applies. The three options are:
– Match Reel Name: If the reel names of the left- and right-eye media match,
turn this checkbox on.
– Match File Name: If the file names of the left- and right-eye media match,
turn this checkbox on.
– Left Identifiers and Right Identifiers fields: If the left- and right-eye clips are identified by a
special subset of characters within the file name (for example, “3D_R” and “3D_L”), then you
can type each into the appropriate field, and these characters will be used to match the left
and right eyes together.
6 Click Sync.
The original clips in the Left and Right bins disappear, and a full set of stereo 3D clips appears in the
output bin you selected in step four.
Final stereo clips, ready to be edited and graded
Step 3—(Optional) Create Optimized Media
If your stereo media is excessively large, you can create optimized media.
1 Select the stereo clips you’ve created.
2 Right-click one of the selected clips, and choose Generate Optimized Media from the contextual
menu. A window appears showing you how long it will take to finish creating optimized media.
Monitoring Stereoscopic 3D in the Edit Page
You can now view a Stereoscopic 3D signal directly from the Edit page. Previously, the Edit page was
restricted to left eye for both outputs. The Edit Page Viewer now displays Stereoscopic 3D identically
to the Color page Viewer. The 3D palette in the Color page has the stereoscopic controls for selecting
the stereo viewing options (Side by Side, Anaglyph, Line by Line, etc.), as well as for adjusting
convergence and other stereoscopic parameters.
Converting Clips Between Stereo and Mono
You also have the option of converting clips between mono and stereo 3D using a pair of commands
in the Media Pool.
Converting Stereo Clips Back to Mono
If necessary, you can split one or more stereo clips into mono clips using a single command.
To convert stereo clips into mono clips:
1 Select one or more stereo clips in the Media Pool.
2 Right-click one of the selected clips and choose Split Stereo 3D Clips from the contextual menu.
Afterwards, two new bins are created named Left and Right, containing the individual left- and right-
eye clips that you’ve split apart.
Converting Mono Clips or an Entire Timeline to Stereo
Non-stereo clips (for which there are not separate left- and right-eye media files) can be converted into
stereo clips either individually or throughout an entire timeline for one of two different reasons:
– You can convert non-stereo clips into stereo for use in a stereo project, so they output properly
along with the rest of a stereo timeline, albeit without adjustable convergence or depth effects.
– If you want to grade an HDR and non-HDR version of your program at the same time, converting
non-stereo clips to stereo makes it possible for you to a) manage two separate SDR and HDR
grades for each clip in a timeline using the left- and right-eye channels, and b) output the SDR and
HDR signals separately via your compatible Blackmagic Design interface’s left- and right-eye SDI
outputs when you turn on the “Use dual outputs on SDI” checkbox in the Video Monitoring section
of the Master Settings panel of the Project Settings.
To convert mono clips into stereo clips:
1 Select one or more non-stereo clips in the Media Pool.
2 Right-click one of the selected clips and choose Convert to Stereo from the contextual menu.
Afterwards, that clip appears in the Media Pool as a Stereo 3D clip, and when edited into a timeline,
can expose its controls in the Stereo 3D palette in the Color page.
If you have a timeline full of clips that you’ve just converted into stereo using the above procedure, you
need to take the additional step of setting the Timeline to stereo in order to create stereo grades for
each clip.
To convert a timeline to have stereo grades for simultaneous HDR/SDR output while grading:
– Right-click a timeline in the Media Pool and choose Timelines > Set Timeline to Stereo.
For more information about using stereo timeline workflows for simultaneous HDR and SDR grading,
see Chapter 9, “Data Levels, Color Management, and ACES.”
Attaching Mattes to Stereo 3D Clips
If you have left- and right-eye mattes that need to be attached to stereo clips, the process works
identically to importing mattes for regular clips, except that when you’ve selected a stereo 3D clip
in the Media Pool, you have two matte import commands, “Add As Left Eye Matte,” and
“Add As Right Eye Matte.”
Organizing and Grading Stereo 3D Dailies
A common workflow is the creation of digital dailies within DaVinci Resolve before editing in an NLE.
This gives the editors, director, and producers more attractive media to work with, which is also more
comfortable to view when the automatic geometry- and color-matching functions are used to give the
media of each pair of shots a preliminary left- and right-eye balance. The resulting Timelines can then
be output to whichever media format is most
convenient to use.
Step 1—Create 3D Stereo Clips
The very first step in the process of creating dailies is to import all of the left-eye and right-eye media
into individually organized bins, and to then link them together to create stereo 3D clips, as described
in the previous section.
Step 2—Edit the New Stereo Clips Into One
or More Timelines for Grading
Now that you’ve created a set of Stereo 3D clips, you’re ready to edit them into one or more Timelines
for grading. You can do this simply by creating a new Timeline and deselecting the Empty Timeline
checkbox. A new Timeline will be created containing the stereo 3D clips you created.
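This step can also be scripted. The following minimal Python sketch (the bin and timeline names are assumptions carried over from the previous section) gathers every clip in the “Stereo Clips” bin and builds a timeline from them.

import DaVinciResolveScript as dvr_script

media_pool = dvr_script.scriptapp("Resolve").GetProjectManager() \
    .GetCurrentProject().GetMediaPool()

# Find the bin created during Stereo 3D Sync and build a timeline from its clips.
stereo_bin = next(f for f in media_pool.GetRootFolder().GetSubFolderList()
                  if f.GetName() == "Stereo Clips")
timeline = media_pool.CreateTimelineFromClips("Stereo Dailies", stereo_bin.GetClipList())
print("Created timeline:", timeline.GetName())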
Step 3—Align Your Media
For the stereoscopic effect to work without causing headaches, it’s critical that both eyes are aligned.
This can be tricky to adjust using manual controls, but is something that can be automatically analyzed.
You can perform stereo 3D alignment to a single clip using the Stereo 3D Palette controls, or you can
select a range of clips to align all of them automatically at once. There are two methods of alignment;
which is more appropriate depends on the type of geometry issues you have to address.
– Transform Alignment: Analyzes the image and makes vertical and rotational adjustments to line
up the left- and right-eye images as closely as possible.
– Vertical Skew: Analyzes the images and makes a vertical-only adjustment to line up the left- and
right-eye images.
Controls for aligning the left- and right-eye media
Step 4—Grading Stereo Media
Grade the clips in the Timeline as you would any other digital dailies, with the sole addition of using
the controls in the Stereo 3D palette to control monitoring and manage the adjustments made to each
eye as necessary. As when creating any other kind of dailies, you can use LUTs, the Timeline Grade,
and individual clip grading to make whatever adjustments are necessary to create useful media
for editing.
Grading Windows
If you’re using windows, the Color group of the General Options panel of the Project Settings has a
checkbox called “Apply stereoscopic convergence to windows and effects” that correctly maintains
the position of a window that’s been properly placed over each eye when convergence is adjusted.
You must turn on a checkbox in the Project Settings
to enable stereo convergence for windows
When this option is enabled, the Window palette displays an additional Transform parameter,
“Convergence,” that lets you create properly aligned convergence for a window placed onto a
stereoscopic 3D clip.
The Convergence control in the Transform
section of the Window palette
After placing a window over a feature within the image while monitoring one eye, you can enable
Stereo output in the Stereo 3D palette and use the Pan and Convergence controls to make sure that
window is properly stereo-aligned over the same feature in both eyes. At that point, adjusting the
Convergence control in the Stereo 3D palette correctly maintains the position of the window within the
grade of each eye.
A convergence-adjusted window in stereo
Matching Media From Left and Right Eyes
To help you manage the visual differences between left- and right-eye clips, there are also three
automatic color matching commands that can be used to batch process as many clips as you need to
adjust at once.
– Stereo Color Match (Primary Controls): Uses the Lift/Gamma/Gain controls to match one eye to
the other. The result is a simple adjustment that’s easy to customize, but may not work as well as
Custom Curves in some instances.
– Stereo Color Match (Custom Curves): Uses the Custom Curves to create a multipoint adjustment
to match one eye to the other. Can be more effective with challenging shots.
– Stereo Color Match (Dense Color Match): Performs a pixel-by-pixel, frame-by-frame color match
that is incredibly accurate. This operation is processor intensive, so if you’re going to batch
process many clips, or if you’re matching long clips, you’ll want to make sure you have adequate
time. Because this is such a precise match, it’s recommended to use Dense Color Match after
you’ve used one of the stereo alignment commands.
Controls for matching the grade of
the left and right eye media
Step 5—Output Offline or Online Media for Editing
When you’re done applying whatever grading is necessary to make the media suitable for editing,
you’ll need to export each clip as separate left- and right-eye clips using the controls of the
Deliver page.
1 Open the Deliver page, and set up your render to output the format of media you require. Be sure
to do the following:
– Set Render Timeline As to Individual source clips.
– Turn on the “Filename uses Source Name” checkbox.
– To render both eyes’ worth of media, choose “Both eyes as” from the Render Stereoscopic 3D
option, and choose Separate Files from the accompanying pop-up menu. Optionally, you could
also choose to render only the left-eye or right-eye media.
2 Choose how much of the Timeline to render from the Render pop-up menu in the Timeline toolbar;
to render everything, choose Entire Timeline.
3 Click “Add Job to Render Queue.”
4 Click Start Render.
DaVinci Resolve will now render either two sets of left- and right-eye clips, or one set of media
corresponding to the eye you chose.
To make sure that the resulting edited project conforms easily to the originating DaVinci Resolve
project, it’s important to be sure that you render individual source clips, and that you turn on the
“Filename uses Source Name” checkbox, in order to clone the timecode, reel numbers, and file names
of the source media.
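If you generate dailies regularly, the render-queue portion of this step can be scripted. This is a minimal Python sketch, assuming the format, codec, and Render Stereoscopic 3D options have already been configured in the Deliver page; the output path is hypothetical, and the render setting keys should be verified against the scripting documentation for your version.

import DaVinciResolveScript as dvr_script

project = dvr_script.scriptapp("Resolve").GetProjectManager().GetCurrentProject()
project.SetCurrentRenderMode(0)                    # 0 = Individual clips, 1 = Single clip
project.SetRenderSettings({
    "SelectAllFrames": True,                       # render the entire timeline
    "TargetDir": "/Volumes/Dailies/StereoOut",     # hypothetical output directory
})
project.AddRenderJob()
project.StartRendering()                           # starts all queued jobs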
Conforming Projects to Stereo 3D Media
Since DaVinci Resolve manages stereo via a single set of specially created stereo 3D clips, you can
use the same project import methods to import stereo 3D projects as you would for any other project.
Only a single imported timeline is necessary.
This also means that you can edit stereo projects in NLEs that aren’t otherwise stereo-aware, and
finish them in full stereo 3D in DaVinci Resolve. To do this, you need to make sure that you edit the
left-eye media in your NLE, and then export either an EDL or XML file to conform in DaVinci Resolve.
To conform an EDL to stereo 3D media:
1 Open the Media page, and create the necessary set of stereo 3D clips that will correspond to the
project you’re going to import, as described previously.
2 Open the Edit page, and then use the Import AAF/EDL/XML command to import your edit.
3 When the Load EDL/XML dialog appears, do the following:
– If importing an EDL, verify that the frame rate is correct, and click OK.
– If importing XML, make sure you turn off the “Automatically import source clips into
Media Pool” checkbox, since you want to relink the imported project to the stereo 3D clips you
created in step one.
The left-eye media timecode and reel information that’s embedded within each stereo 3D clip will be
used to conform the stereo 3D clips with the imported EDL, and you should be ready to work.
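The import step above can also be performed with the scripting API. This is a minimal Python sketch, assuming an XML export and that the "importSourceClips" option behaves like the checkbox described above; the file path is hypothetical, and the option keys should be verified against the scripting documentation for your version.

import DaVinciResolveScript as dvr_script

media_pool = dvr_script.scriptapp("Resolve").GetProjectManager() \
    .GetCurrentProject().GetMediaPool()
timeline = media_pool.ImportTimelineFromFile(
    "/Volumes/Projects/stereo_edit.xml",           # hypothetical exported edit
    {"importSourceClips": False})                  # relink to the stereo clips already in the Media Pool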
Grading Mastered Stereoscopic Media From Tape
If you’ve been handed a stereo 3D muxed tape with a mastered program that needs to be graded,
but you haven’t been given a project file or EDL, you can ingest it as individual left- and right-eye
media files with a supported VTR, such as HDCAM SR with 4:2:2 x 2 mode, by turning on the “Use left
and right eye SDI” checkbox in the Capture and Playback panel of the Project Settings. When muxed
stereoscopic signals are ingested, each eye is separated into individual left-eye and
right-eye image files.
Once ingested, you can use Scene Detection to split the left-eye media in one bin and to create an
EDL that you can then use to split the right-eye media in the same way in another bin, so that you can
create a sequential set of stereo clips for grading.
Adjusting Clips Using the Stereo 3D Palette
Once you’ve either created or imported a stereoscopic 3D-identified timeline, you’re ready to begin
grading. The left eye will be displayed in the Edit and Color pages by default; however, you can
right-click on the Timeline and select Stereo 3D Mode to view the other eye. Most colorists work by
grading one eye first (typically the left), and rippling their grades to the other eye, making separate
adjustments to each eye’s clips when necessary to match undesirable variation between cameras.
DaVinci Resolve lets you do this automatically.
Setting up stereo 3D media enables the Stereo 3D palette on the Color page. This palette contains all
the controls necessary for working on stereoscopic projects. It provides controls for choosing which
eye to grade, adjusting convergence, swapping and copying grades and media between matching
left- and right-eye clips, auto-processing the color and geometry of left- and right-eye clips to match,
stereo 3D monitoring setup, and controls for floating windows.
Stereoscopic 3D palette
Your project must contain stereo 3D clips in order to open this palette. For more information on setting
up a stereo 3D project, see the “Creating Stereo 3D Clips” section of this chapter.
Stereo Eye Selection
Most colorists work by grading one eye first (typically the left), and rippling their grades to the other
eye, making separate adjustments to each eye’s clips when necessary to match undesirable variation
between cameras.
The first three buttons in the Stereo 3D palette let you choose which eye to grade while you’re
working, as well as whether or not to ripple each clip’s grade to the matching opposite-eye clip.
Whenever you switch eyes, the 3D badge above each clip’s thumbnail changes color (blue for right,
red for left) and the thumbnails themselves update to show that eye’s media.
The Left eye is master and ganged with the Right
– Left button: Displays the left-eye image and grade.
– Ripple Link button: When enabled (orange), all changes you make to the grade of the currently
selected eye are automatically copied to the correspondingly opposite eye. When disabled (gray),
grades made to the currently selected eye are made independently.
– Right button: Displays the right-eye image and grade.
You can also choose which eye you’re viewing and grading by right-clicking a clip’s thumbnail and
choosing Stereo 3D > Switch Eye or by choosing View > Switch Eye To > Left Eye or Right Eye.
Using Ripple Link When Grading Stereo 3D Clips
You would turn Ripple Link off to suspend rippling when you want to make an individual adjustment to
the grade of one eye to obtain a better match between the two. When you’re finished matching the
two clips, you can turn it back on to resume automatic grade rippling.
Stereo 3D grade rippling is always relative, so differences between the grades that are applied to the
left- and right-eye clips are preserved. In fact, when you add or remove nodes to or from one eye, the
same nodes are automatically added to or removed from the corresponding clip it’s paired with,
regardless of whether or not Ripple Link is enabled.
IMPORTANT
Regardless of whether or not Ripple Link is enabled, local versions created for one stereo
3D-identified clip are automatically available to the paired timeline.
Stereo 3D Geometry Controls
The next group of parameters lets you adjust the geometry of stereo 3D clips. The Pan, Tilt, and Zoom
controls are provided as a convenience, and simply mirror the same parameters found in the Transform
palette’s Input mode, but made specific to the geometry of the left- and right-eye media.
Convergence, Pitch, and Yaw are the three parameters that are unique to the Stereo 3D palette.
Stereoscopic 3D Geometry controls
– Convergence: Adjusts the disparity between the left and right eyes, to define the point of
convergence (POC), or the region within the image where the left- and right-eye features are in
perfect alignment. If necessary, Convergence can be animated using the Stereo Format parameter
group in the Sizing track of the Keyframe Editor. If you want to adjust convergence in pixels, open
the Stereo 3D palette option menu, and turn on “Show convergence in pixels.”
Features that overlap perfectly in both right- and left-eye clips are at zero parallax, putting that
feature’s depth at the screen plane. Matching features that are divergent in the left- and right-eye
clips have increasingly positive parallax, and appear to be farther away from the audience.
Matching features that are divergent and reversed in the left- and right-eye clips have increasingly
negative parallax, and appear to be closer to the audience than the screen plane (a worked example follows this list of controls).
– Linked Zoom button: When enabled (white), both the left- and right-eye clips are automatically
zoomed whenever Convergence is adjusted so that both eyes always fill the screen. When
disabled (gray), changes to Convergence will cause the opposing left and right edges of each
eye’s clip to have blanking intrude.
– Pitch: Pivots the image around the horizontal center plane of the frame.
– Yaw: Pivots the image around the vertical center plane of the frame.
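To put the parallax descriptions above into concrete numbers (the figures are purely illustrative): on a 1920-pixel-wide image, a feature whose right-eye image sits about 19 pixels to the right of its left-eye image has roughly 1% positive parallax and appears behind the screen plane, while a feature whose right-eye image sits about 19 pixels to the left has roughly 1% negative parallax and appears in front of it. Features whose left- and right-eye images coincide sit exactly at the screen plane.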
Sizing Repositioning in Stereo 3D
Generally, you’ll want to reposition stereo 3D clips with Ripple Link turned on, but you may
occasionally find yourself needing to make a manual adjustment to one eye in particular with Ripple
Link disabled. As with color adjustments, Sizing adjustments made with Ripple Link disabled are only
applied to the clip in the current Timeline. When Ripple Link is turned on, all Sizing adjustments are
automatically copied to the correspondingly numbered shot of the other stereo 3D timeline.
WARNING
It is not advisable to use the Rotate parameter when transforming stereo 3D clips.
Geometrically, rotation tilts a stereo pair of clips inappropriately, and ruins the “side-by-side”
convergence that’s necessary to create the stereoscopic illusion.
Protecting Stereo Adjustments When Copying Grades
Each version of a grade has independent stereo adjustments stored along with the Sizing settings.
To prevent accidental overwrite of convergence and alignment data when copying grades from one
clip to another, you can right-click within the Gallery and choose one of the following options to
turn them on:
– Copy Grade: Preserve Convergence
– Copy Grade: Preserve Floating Windows
– Copy Grade: Preserve Auto Align
When enabled, these options let you overwrite a clip’s grade without overwriting specific Stereo 3D
parameters.
TIP: Stereo 3D and Sizing settings are processed before node-based corrections in the
DaVinci Resolve image processing pipeline.
Swap and Copy Controls
Another set of controls at the right of the Stereo 3D palette lets you swap and copy grades, and swap
clips, in situations where you need to reverse what’s applied to a pair of left- and right-eye clips.
Swap and Copy grades between eyes
– Swap Grade: Exchanges the grades that are applied to the left- and right-eye clips.
– Swap Shot: A checkbox that, when enabled, switches the actual media used by two
corresponding left- and right-eye clips. Useful in situations where the eyes of a stereo
3D clip were mislabeled, and you want to switch the clips without rebuilding both EDLs.
– Copy Right to Left: Copies the right-clip grade to the corresponding left-eye clip.
– Copy Left to Right: Copies the left-clip grade to the corresponding right-eye clip.
Batch Grade Management for Stereo 3D Projects
There are also a series of batch-processing commands that are useful for stereoscopic grading that
are available when you right-click one or more selected clips in the Thumbnail timeline:
– Stereo 3D Batch Copy: Copies every grade from the left-eye clips to the right-eye clips.
– Stereo 3D Batch Sync: Copies grades from one eye to the other only when their node graphs
have the same number of nodes. This prevents you from accidentally overwriting a custom grade
with a different node structure that was necessary to match two eyes for a problem shot.
The Copy Grade, Swap Grade, Swap Shots, Ripple Link, and Switch Eye commands are also available
from the Stereo submenu of the Timeline contextual menu.
Automatic Image Processing for Stereo 3D
It’s common during stereoscopic shoots for minor divergences in geometry and color to appear in the
source footage. To make the process of grading stereo 3D media less onerous, DaVinci Resolve
provides a set of auto-adjustment controls at the right of the Stereo 3D palette that gives you a starting
point for matching left- and right-eye clips together.
Auto align and color match buttons
Options for Auto Processing
You can choose which frame should be used to automatically analyze and process stereo clips using
the Alignment and Matching controls from the Stereo 3D palette option menu. You can choose Auto
Process > First or Middle, depending on what works best for your media.
Auto Process—Stereo Alignment
For the stereoscopic effect to work without causing headaches, it’s critical that both eyes are aligned.
This can be tricky to adjust using manual controls, but is something that can be automatically analyzed.
You can perform stereo 3D alignment to a single clip, or you can select a range of clips to align all of
them automatically at once. There are two options. Which is more appropriate depends on the type of
geometry issues you need to address.
– Transform Alignment: Analyzes the image and makes vertical and rotational adjustments to line
up the left- and right-eye images as closely as possible.
– Vertical Skew: Analyzes the images and makes a vertical-only adjustment to line up the left- and
right-eye images.
To align one or more clips automatically:
1 Select one or more stereo clips in the Thumbnail timeline of the Color page.
2 Choose which frame of each clip you want to use for the analysis by opening the Stereo 3D
palette, clicking the Option menu, and choosing Auto Process > First or Auto Process > Middle.
3 Click either of the Stereo Alignment buttons. The button to the left is for Automatic Transform,
while the button to the right is for Automatic Vertical Skew.
If you selected multiple clips, then the Stereo Alignment window appears, and a progress bar shows
the remaining time this operation will take.
Auto Process—Color Matching
Due to the design of different stereo 3D rigs, sometimes the color and contrast of one eye’s media
doesn’t precisely match that of the corresponding eye. DaVinci Resolve provides two commands for
quickly and automatically matching two eyes together.
– Stereo Color Match (Primary Controls): Uses the Lift/Gamma/Gain controls to match one eye to
the other. The result is a simple adjustment that’s easy to customize, but may not work as well as
Custom Curves in some instances.
– Stereo Color Match (Custom Curves): Uses the Custom Curves to create a multipoint adjustment
to match one eye to the other. The result can be more effective with challenging shots.
– Stereo Color Match (Dense Color Match): Performs a pixel-by-pixel, frame-by-frame color match
that is incredibly accurate. This operation is processor intensive, so if you’re going to batch
process many clips, or if you’re matching long clips, you’ll want to make sure you have adequate
time. Because this is such a precise match, it’s recommended to use Dense Color Match after you’ve
used one of the stereo alignment commands.
TIP: For the best results, it’s recommended to use automatic color matching in a separate
node, independent of other corrections.
Stereo 3D color match works differently depending on whether or not one of the stereo 3D-paired
clips has already been graded. The following procedure shows how to match a pair of left- and
right-eye clips before you make any manual adjustments of any kind.
To match a pair of left- and right-eye clips automatically:
1 Select one or more clips in the Thumbnail timeline of the Color page.
2 Open the Stereo 3D palette, and click one of the three Color Match controls.
The Color Matching window appears, and a progress bar shows the remaining time this operation will
take. You can also use automatic color matching to match an ungraded clip to a paired clip that’s
already been graded. This only works for grades consisting of one or more primary corrections;
secondary corrections cannot be auto-matched.
To match an ungraded clip automatically to a paired stereo clip that’s graded:
1 To suspend stereo grade linking temporarily, do one of the following:
– Open the Stereo 3D palette, and turn off the Ripple Link button.
– Right-click the Thumbnail timeline, and choose Stereo 3D > Ripple Link > Solo.
2 Make a primary adjustment to a clip in the left-eye timeline to create a simple base grade.
The left-eye clip now has a grade, and the right-eye clip does not.
3 Do one of the following to switch eyes:
– In the Stereo 3D palette, click Right.
– Right-click the Thumbnail timeline again, and choose Stereo 3D > Switch Eye.
This procedure only works when you use the Stereo Color Match commands on the ungraded clip
of a left- and right-eye stereo pair, to match it to the graded clip.
4 To make the match, do the following:
– In the Stereo 3D palette, click one of the three color match controls.
Both clips should match one another very closely.
Stereo 3D Monitoring Controls
To output both eyes to a stereo 3D display, you need to set the Vision control to Stereo, and
then choose a display mode from the Out pop-up menu.
Monitoring controls for Stereo 3D
– Vision: Click a button to choose between Stereo, where both eyes can be displayed in the Viewer
and output to video in a variety of different formats, and Mono, where only one eye is monitored in
the Viewer and your video output interface.
– Out: A pop-up menu that provides different stereo viewing options for previewing stereo 3D
signals in different ways. By default, this option is linked to both the Viewer display and the internal
video scope options. For detailed descriptions of each stereo 3D viewing mode, see the following
section, “Stereo 3D Output Options.”
– Link button: When enabled, the Viewer and internal video scopes both use the Out pop-up
menu’s option for stereo 3D viewing. When disabled, you can choose different stereo 3D viewing
options for the Viewer and internal video scopes.
– Viewer: Lets you choose a stereo 3D viewing option for the Viewer.
– WFM: Lets you choose a stereo 3D viewing option for the internal video scopes.
– Cbd Size: If any stereo 3D viewing options are set to Checkerboard, this parameter becomes
enabled, and lets you define the size of the checkerboard boxes, in pixels.
Dual 4:2:2 Y’CbCr stereoscopic video streams are output via HD-SDI on selected Blackmagic I/O
devices when you turn on the ”Use left and right eye SDI output” checkbox on the Master Settings
panel of the Project Settings. You can select either Side-by-Side or Line-by-Line output to be fed to
your stereo-capable display, depending on your display’s compatibility.
Stereo 3D Output Options
Additionally, the Viewer and video scopes can be set to display both “eyes” in one of a variety of
different modes.
– Side by Side: Displays both images side by side. Each eye is squeezed anamorphically to fit both
eyes into the same resolution as the GUI viewer.
– Top and Bottom: Displays both images one over the other. Each eye is squeezed vertically to fit
both eyes into the same resolution as the GUI viewer.
– Line by Line (Even/Odd): An interlaced mode where each eye is displayed on alternating lines.
The thickness of the lines as seen in the Viewer depends on how zoomed in you are.
– Checkerboard: Displays both eyes via an alternating checkerboard pattern. This is an excellent
mode for identifying regions of the image where there’s variation in color or geometry between
the two eyes.
– Anaglyph (B/W): Each eye is desaturated and superimposed via Red/Cyan anaglyph to show the
disparity between both eyes in different regions of the image. Left-eye divergence is red, and
right-eye divergence is cyan. Regions of alignment between both eyes appear grayscale.
Anaglyph modes are useful for evaluating the geometric differences between both eyes, as well
as for identifying the point of convergence (where both eyes align most perfectly) that places a
region of the image at the screen plane.
Red/cyan color coding also identifies the direction of parallax. For any given feature, disparity such
that red is to the left and cyan is to the right indicates positive parallax (backward projection away
from the audience). Red to the right and cyan to the left indicates negative parallax (forward
projection towards the audience).
– Anaglyph (Color): Similar to Anaglyph (B/W), except that regions of close alignment are shown
in full color. Incidentally, both anaglyph modes can be previewed on ordinary displays using
old-fashioned red/cyan anaglyph glasses, enabling stereo 3D monitoring on non-stereo
3D-capable displays.
– Difference: Superimposes grayscale versions of both eyes using the difference composite
mode. Corresponding left/right-eye pixels that are perfectly aligned appear black, while pixels
with disparity appear white. This mode is extremely useful for evaluating geometric differences
between both eyes, as well as for identifying the point of convergence, without the distraction of
color that the anaglyph modes present.
– None: Only displays the eye corresponding to the currently selected timeline in the
Viewer. However, this option also works in conjunction with the “Use Dual Outputs on
SDI” checkbox in the Master Settings of the Project Settings which, when turned on,
outputs each eye to an individual HD-SDI output of your Blackmagic I/O card.
The Viewer set to display an anaglyph stereo image in color
Floating Windows
Floating Windows are meant to correct for “Window violations,” where elements of the image with
negative parallax (elements that project forward from the screen plane towards the audience) are cut
off by the edge of the frame. In these instances, differences between the images being shown to the left and
right eyes can result in a visual paradox that’s difficult for viewers to reconcile. Specifically, when a
forward-projecting element is cut off by the left or right edge of the frame, one eye sees things that
the other eye does not.
If the subject is moving quickly, this may not be an issue, but if the cut-off (or occluded) element lingers
onscreen, it causes problems for viewers that defeat the stereo 3D illusion. The viewer’s binocular
vision (or stereopsis) is providing one depth cue, while occlusion is providing a completely different
depth cue.
To fix this, you can use Floating Windows to crop the cut-off object from the eye on the side where it’s
cut off, eliminating the portion of the stereo image that the other eye can’t see and thereby removing
the cause of the problem.
Floating Window controls
The objective of using Floating Windows is to manipulate the illusion of the viewer’s “window into the
scene.” In addition to fixing Window violations, it has been proposed that Floating Windows can be
used as a creative tool by manipulating the geometry of this Window to subtly alter the viewer’s
perception of the screen orientation.
– By cropping the right-hand side of the right-eye frame, you create the illusion that the right
edge of the “window into the image” is tilted farther forward toward the viewer.
– By cropping the left-hand side of the left-eye frame, you create the illusion that the left edge of the
Window is tilted toward the viewer.
– If you crop both the left-hand side of the left-eye frame and the right-hand side of the right-eye
frame, you create the illusion that the entire plane of the “virtual screen” is coming toward you.
– If you apply opposite-angled Windows to the left- and right-eye clips at one or both of the edges
of the frame, it appears to “tilt” the screen toward or away from the viewer.
Animating Floating Windows
Floating Windows can be animated using the Float Window keyframing track, found within the Sizing
track of the Keyframe Editor, to push the edge of the frame in as needed, and then pull it back out
when the partially occluded subject has moved fully into the frame. For more information about
animating keyframing tracks, see Chapter 144, “Keyframing in the Color Page.”
Floating Windows have the following controls and parameters.
– L/R/T/B buttons: Lets you choose an edge to which to apply a Floating Window. Click the button
corresponding to the edge you want to adjust. Each edge has its own position, rotate, and
softness settings.
– Position: Adds masking to the currently selected edge.
– Rotate: Rotates the currently selected edge, letting you create an angled Window.
– Softness: Feathers the edge of the currently selected edge, letting you create a soft Window that
can be less noticeable to viewers.
To add a Floating Window to fix a Window violation:
1 Choose to which eye you want to add the Floating Window.
– To apply a Floating Window to eliminate a Window violation on the right-hand side of the
screen, click the right eye view.
– To apply a Floating Window to eliminate a Window violation on the left-hand side of the screen,
click the left eye view.
2 Choose which edge you want to adjust by clicking the L or R buttons.
– To eliminate a Window violation on the right-hand side, click R.
– To eliminate a Window violation on the left-hand side, click L.
3 Adjust the Position parameter as necessary to crop the portion along the edge of the selected eye
that’s not visible in the other.
4 Optionally, if you feel that the Window adjustment you’ve just made is too obvious, increase the
Softness parameter to make that edge less noticeable.
Stereo Controls on the DaVinci Control Panel
If you’re doing convergence adjustments and stereographic work throughout a program, you can use
many of the controls described in this section from the DaVinci control panel.
To show the Stereo transform controls page on the Transport panel:
1 Press the 3D soft key. The Transport panel’s knobs and soft keys are remapped with all available
Stereoscopic commands.
2 When you’re finished, press MAIN.
To show the Floating Windows controls on the Center panel:
1 From the main page of the Center panel, press the 3D soft key. The Floating Windows, Auto
Match, and Auto Align controls appear on the Center panel.
2 Press the 3D OVERLAY soft key to expose the Stereoscopic sizing controls on the Transport
panel. Press 3D OVERLAY again to return to the ordinary sizing controls.
3 When you’re finished, press the MAIN soft key to exit the 3D control page.
Outputting Stereo 3D Media
in the Deliver Page
To render full frame media, you’ll need to render each stereo 3D eye separately using the controls of
the Deliver page, outputting whatever media format is required by the client.
Rendering Frame-Compatible Media
Frame-compatible media has both the left- and right-eye images squeezed anamorphically into a
single media file. To create frame-compatible media, choose the “Both eyes as” option from the
Render Stereoscopic 3D controls at the bottom of the File output options of the Deliver page, and then
choose a method of output from the Mesh Options pop-up menu.
Stereoscopic 3D mesh render options on the Deliver page
You can choose Side-by-Side, Line-by-Line, or Top-Bottom. You can also choose Anaglyph if you want
to output a traditional anaglyph red/cyan stereo 3D image for viewing on any display.
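For example (the numbers are illustrative, assuming a 1920 x 1080 timeline): a Side-by-Side render squeezes each eye anamorphically to 960 x 1080 and places the two eyes next to each other in a single 1920 x 1080 frame, while a Top-Bottom render squeezes each eye vertically to 1920 x 540; a compatible stereo display then unsqueezes each eye back to full frame for viewing.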
Rendering Individual Left- and Right-Eye Clips
If your workflow requires you to deliver separate sets of left- and right-eye media, this is easily
accomplished by either setting up a render job with “Render Stereoscopic 3D” set to either “Right eye”
or “Left eye,” or selecting “Both eyes as” and choosing the “Separate files” option.
Chapter 16
Using Variables
and Keywords
This chapter describes how to use metadata variables and keywords to help you
manage your clips.
Contents
Using Metadata Variables 292
Where Variables Can Be Used 292
How to Edit Metadata Variables 292
Available Variables in DaVinci Resolve 293
Using Keywords 295
Using Metadata Variables
If you’re an enthusiastic user of clip metadata (and you should be), you can take advantage of
“metadata variables,” which you can add to supported text fields to reference other metadata for that
clip. For example, you could add the combination of variables and text seen in the following screenshot.
Variables, once they’ve been entered, are represented as graphical tags shown with a background,
while regular text characters that you enter appear before and after these tags.
Variables and text characters entered to create a display name based on a clip’s metadata
As a result, that clip would display “12_A_3” as its name if scene “12,” shot “A,” and take “3” were its
metadata. When you do this, you can freely mix metadata variables with other characters (the
underscore, as in the example above) to format the metadata and make it more readable.
Be aware that, for clips where a referenced metadata field is empty, no characters appear for that
corresponding metadata variable’s tag wherever it happens to be used.
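If you need to populate this kind of metadata across many clips before using variables, the DaVinci Resolve scripting API can help. The following is a minimal Python sketch that assumes a hypothetical Scene_Shot_Take file naming convention and that “Scene,” “Shot,” and “Take” are metadata field names accepted by SetMetadata; verify both against your own media and the scripting documentation for your version.

import DaVinciResolveScript as dvr_script

media_pool = dvr_script.scriptapp("Resolve").GetProjectManager() \
    .GetCurrentProject().GetMediaPool()

for clip in media_pool.GetCurrentFolder().GetClipList():
    # Assumes file names like "12_A_3.mov"; adjust the parsing to your own convention.
    parts = clip.GetName().split(".")[0].split("_")
    if len(parts) >= 3:
        clip.SetMetadata("Scene", parts[0])
        clip.SetMetadata("Shot", parts[1])
        clip.SetMetadata("Take", parts[2])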
Where Variables Can Be Used
Metadata variables are extremely flexible, and can be used to procedurally add metadata to several
functions in DaVinci Resolve. Here’s a partial list of where you can use variables.
– Clip names: You can use variables in the Clip Name column of the Media Pool in List view, or in
the Clip Name field of the Clip Attributes window’s Name panel, to use each clip’s metadata to
generate a more readable and useful display name.
– Other metadata fields in the Metadata Editor: You can use variables to reference metadata in
other fields.
– Automatic labeling of stills in the Gallery: The “Automatically label Gallery stills” option, in the
Color group of the General Options panel of the Project Settings, lets you use variables to define
how stills are labeled in the Gallery.
– Custom text in the Data Burn palette: You can use variables to automatically populate metadata
in different combinations as a window burn.
– The Filename field of the Render Settings in the Deliver page: Using variables, you can
automatically set the name of rendered clips to follow any metadata that’s associated with a
timeline or individual clip. This is especially useful when you want to generate specific file names
when rendering individual source clips.
How to Edit Metadata Variables
Every single item of metadata that’s available in the Metadata Editor can be used as a variable, and
several other clip and timeline properties such as the version name of a clip’s grade, a clip’s EDL event
number, and that clip’s timeline index number can also be referenced via variables.
To add a variable to a text field that supports the use of variables:
1 Type the percentage sign (%) and a scrolling list appears showing all variables that are available.
2 To find a specific variable quickly, start typing that variable’s name and this list automatically filters
itself to show only variables that contain the characters you’ve just typed.
3 Select the variable you want using the Up and Down Arrow keys, and press Return to
add that variable.
The variable list that appears when you type the % character
As soon as you add one or more metadata variables to a field and press Return, the string is replaced
by its corresponding text. To re-edit the metadata string, simply click within that field to edit it, and the
metadata variables will reappear as the graphical tags that they are.
To remove a metadata variable:
– Click within a field using variables to begin editing it, click a variable to select it, and press Delete.
Available Variables in DaVinci Resolve
The following list describes what metadata variables are available to add.
Clip Metadata
– File Name
– Clip Directory
– Video Codec
– Data Level
– KeyKode
Metadata Editor Metadata
– All Shot Scene metadata
– All Clip Details metadata (see Metadata Editor for more information)
– All Camera metadata (see Metadata Editor for more information)
– All Tech Details metadata (see Metadata Editor for more information)
– All Stereo 3D VFX metadata (see Metadata Editor for more information)
– All Audio metadata (see Metadata Editor for more information)
– All Audio Tracks metadata (see Metadata Editor for more information)
– All Production metadata (see Metadata Editor for more information)
– All Production Crew metadata (see Metadata Editor for more information)
– All Reviewed By metadata (see Metadata Editor for more information)
Media Pool Metadata
– File name
– Reel name
– File path
– Video Codec
– IDT
– Input LUT
– PAR
– Data Level
– Description
– Comments
– Keyword
– Shot
– Scene
– Take
– Roll/Card #
– Input Color Space
– Input Sizing Preset
– Start TC
– End TC
– Optimized Media
Timeline and Project Metadata
– Group
– Timeline Name
– Project Name
– Track Number
– Track Name
– Render Codec
Legacy Metadata
– EDL Tape Number: Tape number extracted from imported EDL
– Render Resolution: Resolution of the rendered file
– EDL Event Number: Event number from the imported EDL
– Version: Version Name of the rendered file
– Eye: Stereo session, “Left” or “Right”
– Reel Number: Reel Name extracted by DaVinci Resolve from source filename or clip name
– Timeline Index: DaVinci Resolve-generated index number of the clip in the timeline
Using Keywords
While most metadata in the Metadata Editor is edited via text fields, checkboxes, or multiple button
selections (such as Flags and Clip Color), the Keyword field is unique in that it uses a graphical “tag”
based method of data entry. The purpose of this is to keep keyword spelling consistent by making it easy to reference both a built-in list of standardized keywords and keywords you’ve already applied to other clips.
Once added, keywords are incredibly useful for facilitating searching and sorting in the Media Pool, for
creating Smart Bins in the Media and Edit pages, and for use in Smart Filters on the Color page.
Reaping these benefits by adding and editing keywords is simple and works similarly to the method of
entering metadata variables that’s described above.
To add a keyword:
1 Select one or more clips, then click in the Keyword field of the Metadata Editor, and begin typing
the keyword you want to use. As you begin typing, a scrolling list appears showing all available keywords that match the characters you’ve just typed.
2 To narrow the list, keep typing; it automatically filters itself to show only keywords that contain the characters you’ve typed. Use the Up and Down Arrow keys to select the keyword you want, and press Return to add it.
3 If you selected multiple clips, don’t forget to click Save or you’ll lose your changes. If you only
selected a single clip, your changes will be saved automatically.
The keyword list that appears when you type within the Keyword field
As soon as you add one or more keywords, they appear as a graphical tag. To re-edit any keyword,
simply click anywhere within the Keyword field to edit it.
To edit a keyword:
Double-click any keyword to make it editable, then edit it as you would any other piece of text,
and press Return to make it a graphical keyword tag again.
To remove a keyword:
Click any keyword to select it, and press Delete.
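If you need to apply the same keyword to a large number of clips, DaVinci Resolve’s external scripting API offers an alternative to tagging them through the Metadata Editor. The following is only a rough sketch: it assumes the scripting environment is configured as described in the scripting README that ships with DaVinci Resolve, and that the “Keywords” metadata key accepts a comma-separated string; verify both against your version.

import DaVinciResolveScript as dvr_script

resolve = dvr_script.scriptapp("Resolve")
media_pool = resolve.GetProjectManager().GetCurrentProject().GetMediaPool()

new_keyword = "Interview"  # hypothetical keyword for this example

# Append the keyword to every clip in the currently selected bin,
# preserving any keywords that are already there.
for clip in media_pool.GetCurrentFolder().GetClipList():
    existing = clip.GetMetadata("Keywords") or ""
    keywords = [k.strip() for k in existing.split(",") if k.strip()]
    if new_keyword not in keywords:
        keywords.append(new_keyword)
        clip.SetMetadata("Keywords", ",".join(keywords))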
PART 3
Ingest and Organize Media
Chapter 17
Using the Media Page
The Media page is the primary interface for media import and clip organization in
DaVinci Resolve. It’s also where all timelines that you edit in DaVinci Resolve or
import from other applications are organized. While timelines and clips are both saved
in the Media Pool, it’s central to the way DaVinci Resolve works that the source media
used by a project is managed separately from your timelines. In this way, you can
manage and update the clips used by timelines with ease, importing and reorganizing
clips, switching between offline and online media, and troubleshooting any problems
that occur.
The Media page also contains much of the core functionality used for on-set
workflows, as well as most of the functions that are used in the ingest, organization,
and sound-syncing procedures corresponding to digital dailies workflows.
Contents
Understanding the Media Page User Interface 298
The Interface Toolbar 298
Showing Which Panel Has Focus 299
The Media Storage Browser 299
Playing Media in the Media Storage Browser 300
The Media Storage Browser’s Volume List 300
The Media Storage Browser Area 301
Revealing a Finder Location in the Media Browser 305
Viewer 305
Live Media Preview 306
Media Pool 306
The Bin List 306
Showing Bins in Separate Windows 307
Bins, Power Bins, and Smart Bins 307
Filtering Bins Using Color Tags 308
Sorting the Bin List 309
Thumbnail, List, and Metadata Views in the Media Pool 309
Display Audio Clip Waveforms in Media Pool and Media Storage 309
Metadata Editor 310
Audio Panel 311
Dual Monitor Layout 311
Customizing the Media Page 312
Undo and Redo in DaVinci Resolve 313
Understanding the Media Page
User Interface
By default, the Media page is divided into five different areas, designed to make it easy to find, select,
and work with media in your project.
Media page
Much of the functionality and most of the commands are found within the contextual menus that
appear when you right-click clips in the Media Storage browser or Media Pool.
The Interface Toolbar
At the very top of the Media page is a toolbar with buttons that let you show and hide different parts of
the user interface. These buttons are as follows, from left to right:
The Interface toolbar
– Media Storage full/half height button: Lets you set the Media Storage browser to take up the full
height of your display, if you need more area for browsing at the expense of a smaller Media Pool.
– Media Storage: Lets you hide or show the Media Storage browser. Hiding the
Media Storage browser creates more room for the Viewer.
– Clone Tool: Shows or hides the Clone tool, used for cloning media from camera cards
or hard drives.
– Audio Panel: Hides or shows the Audio Panel.
– Metadata: Hides or shows the Metadata Editor.
– Inspector: Hides or shows the Inspector Panels.
– Capture: Switches the Viewer and Audio Panel to Capture Mode, exposing the controls necessary
for cuing up a device-controllable deck, and batch recording from tape.
– Audio Panel/Metadata Editor full/half height button: Lets you set the Audio Panel or Metadata
Editor to take up the full height of your display, if you need more area for either of those functions.
Showing Which Panel Has Focus
Whenever you click somewhere on the DaVinci Resolve interface using the pointer, or use a keyboard
shortcut to “select” a particular panel (such as in the Edit page), you give that panel of the user
interface “focus.” A panel with focus will capture specific keyboard shortcuts to do something within
that panel, as opposed to doing something elsewhere in the interface.
This indicator is disabled by default; checking the “Show focus indicators in the user interface” box in the UI Settings section of the User Preferences causes an orange highlight to appear at the top edge of the focused panel, allowing you to keep track of which part of the current page is taking precedence. You can
switch focus as necessary to do what you need to do.
The Focus indicator shown at the top edge of the Media
Pool, shown next to a Viewer that doesn’t have focus
The Media Storage Browser
The Media Storage browser lets you see all of the volumes connected to your workstation, browsing
them for media that you want to preview and eventually import into your DaVinci Resolve project in
one way or another. Whereas other applications rely on some sort of import dialog, DaVinci Resolve
provides the Media page for doing complex media import tasks. To facilitate media import, the Media
Storage browser is divided into two areas, the Volume List, and the Media Browser.
Media Storage browser with scrubbable clip view
Playing Media in the Media Storage Browser
You can select media in the Media Storage Browser to play directly in the Media page Viewer, without
importing it, so long as it’s in a format that DaVinci Resolve supports. This is useful for previewing clips
that you’re considering using in a project, but it’s also useful for quality control review sessions of media
that you’ve exported from DaVinci Resolve. All clips that are played in the Media page Viewer are also
output to video, if you have a supported Blackmagic output interface. You can also output the video to a
second monitor by choosing Workspace > Video Clean Feed, and selecting your monitor. Additionally,
if you choose Workspace > Dual Screen > On, the second computer display is capable of displaying a
set of video scopes on the Media page, which can help you QC a program you’re delivering.
Playing DCP and IMF Packages
It’s also possible to use the Media Storage Browser to select and play DCP and IMF packages that
have been exported either using EasyDCP or using the native DCP/IMF export capabilities of
DaVinci Resolve. Simply locate the package, select it, and play it in the Viewer like any other clip. It will
be output to video and analyzed by the video scopes.
DCP and IMF packages can also be imported from Media Storage to the Media Pool for various
workflows. For more information, see Chapter 187, “Delivering DCP and IMF.”
The Media Storage Browser’s Volume List
At the left of the Media Storage browser is a list of all volumes that are currently available to your
DaVinci Resolve workstation. It’s used to locate media that you want to import manually into your project.
– Scratch volumes: Indicated by a usage statistic to the right of the volume name that lists
how full that volume is, these are disks that you’ve added to the Media Storage panel of the
System Preferences window. The topmost of these scratch disks is used to store Gallery stills and
cache files.
– Available volumes: Indicated by disk icons, this is a list of all fixed, removable, and network
volumes that are currently available to your workstation. When the “Automatically display
attached local and network storage locations” checkbox is turned on in the Media Storage panel
of the DaVinci Resolve Preferences, new volumes that are attached to your workstation should
automatically appear in this list.
This is a hierarchical list; clicking the disclosure triangle to the left of any volume opens up an
additional list of that volume’s subdirectories, with additional disclosure triangles next to each
subdirectory. Using the Media Storage browser, you can drill down into as many subdirectories as
you need to.
Adding Volumes That Don’t Appear in This List
If you need to access a storage volume that doesn’t appear on this list, for example if you’re using the
version of DaVinci Resolve that is available in the Apple App Store, then you can right-click anywhere
in the background of the Volume list and choose “Add New Location” to open a dialog you can use to
choose a volume you want to add.
If you’re using the Apple App Store version of DaVinci Resolve, auto-mounting of attached storage
volumes is not enabled automatically. However, you can enable this in the Media Storage panel of the
DaVinci Resolve Preferences. For more information, see the DaVinci Resolve Preferences section of
Chapter 4, “System and User Preferences.”
Media Storage Browser Favorites
Underneath this is the Favorites area. If there are directories that you find yourself frequently accessing, you can add them to the Favorites to avoid having to traverse complex hierarchies to reach the media you need.
Methods of organizing favorite file system locations in the Media Storage Browser:
– To add a favorite: Right-click any folder in the Media Storage browser folder list, and choose
“Add folder to favorites” from the contextual menu. The new favorite appears at the bottom of the
Favorites area.
– To open a favorite: Click any favorite to expose the contents of the corresponding directory in the
Media Storage browser.
– To remove a favorite: Right-click the favorite you want to remove, and choose “Remove folder
from favorites” from the contextual menu.
The Media Storage Browser Area
Once you’ve selected a volume or subdirectory in the Media Storage browser, you can view its
contents in List view, Thumbnail view, or Metadata view to search through the media that’s available to
you as you try to find what you need.
List View
In List view, the following columns are available for sorting media prior to importing it into the
Media Pool:
– File name: The name of a file.
– Reel name: The reel name as it’s derived according to the Conform Options currently chosen in the General Options panel of the Project Settings.
– Start TC: The first timecode value in the source media.
– Start: The first frame number in the source media.
– End: The last frame number in the source media.
– Frames: The duration of each clip in frames.
– Resolution: The frame size of the source media.
– Bit Depth: The bit depth of the source media.
– Video Codec: The codec used for the video track of supported media.
– Audio Codec: The codec used for the audio tracks of supported media.
– FPS: The frame rate of the source media.
– Audio Ch: The number of audio channels within the source media.
– Date Created: The date the media file was created.
– Date Modified: The date the media file was last changed and saved.
– Shot: Additional metadata from media formats that support it.
– Scene: Additional metadata from media formats that support it.
– Take: Additional metadata from media formats that support it.
– Angle: Additional metadata from media formats that support it.
– Good Take: Additional metadata from media formats that support it.
If you work in List view, you gain additional organizational control by exposing columns that show the
metadata that each clip contains, prior to media being added to your timeline. You can use these
columns to help organize your media.
Methods of customizing metadata columns in List view:
– To show or hide columns: Right-click at the top of any column in the Media Storage browser and select an item in the contextual menu list to check or uncheck a particular column. Unchecked columns are hidden.
– To rearrange column order: Drag any column header to the left or right to rearrange
the column order.
– To resize any column: Drag the border between any two columns to the right or left to narrow or
widen that column.
– To sort by any column: Click the column header you want to sort by. Clicking the same header again toggles that column between ascending and descending sort order.
You can also customize column layouts in the Media Storage area. Once you’ve customized a column
layout that works for your particular purpose, you can save it for future recall.
Methods of saving and using custom column layouts:
– To create a column layout: Show, hide, resize, and rearrange the columns you need for a particular task, then right-click any column header in the Media Storage browser and choose Create Column Layout. Enter a name in the Create Column Layout dialog, and click OK.
– To recall a column layout: Right-click any column header in the Media Storage browser and choose the name of the column layout you want to use. All custom column layouts appear at the top of the list.
– To delete a column layout: Right-click any column header in the Media Storage browser and choose the name of the column layout you want to delete from the Delete Column Layout submenu.
Thumbnail View
While in Thumbnail view, you can scrub through a clip’s icon to see its contents, and you can also click
the Clip Info drop-down menu at the bottom right corner of any clip’s thumbnail to see an instant
summary of that clip’s vital information, including:
– File name: The name of that file.
– In timecode: The first frame in the source media.
– Out timecode: The last frame in the source media.
– Duration: The total duration of the source media.
– Resolution: The frame size of the source media.
– Frame Rate: The frame rate, in fps, of the source media.
– Pixel Aspect Ratio: The aspect ratio of the source media.
– Codec: Which codec is used by the source media.
– Date Created: The date created metadata from the source media file.
– Flags: Flag metadata applied either by the camera that shot the media, in the
Metadata Editor, or in the Color page Timeline.
Also while in Thumbnail view, you can use the Thumbnail Sort drop-down menu (between the Search field and the Option menu) to choose a criterion by which to organize the thumbnails. A wide variety of metadata options appear, including File Name, Reel Name, Start TC, FPS, Audio Ch, and more. You can also sort in
ascending or descending order.
The Thumbnail Sort drop-down in
the Media Storage browser
Metadata View
In the Metadata view mode, each clip is represented by its own card with a thumbnail and basic clip
metadata information visible. This view is designed to have more metadata information than a
thumbnail but more targeted information than the List view. This feature, combined with its sort modes,
is a powerful way to organize and reorganize your clips in the Media Pool.
The metadata fields of the Metadata view (from the top down):
– Thumbnail: A scrubbable thumbnail image of your clip.
– Row 1: A main description field that is variable and determined by the sort order selection.
– Row 2: Start Timecode, Date Created, Camera #.
– Row 3: Scene, Shot, Take.
– Row 4: Clip Name, Comment.
The Metadata View icon view (highlighted icon in the top bar),
showing the thumbnail being scrubbed next to the clip’s metadata
The strength of the Metadata view is the automatic clustering of your clips based on the sort order you
choose in the Media Pool Sort By menu, at the very upper-right corner of the Media Pool.
The Media Sort options
Each different sort mode changes the main description field on the card, as well as re-arranging the
Media Pool to reflect the selected organization method.
The sort modes available in the Metadata view are:
– Bin: This mode clusters the clips by bin, changes the main description field to clip name, and
orders the list by timecode.
– Timecode: This mode clusters the clips by creation date, changes the main description field to
creation date and start timecode, and orders the list by timecode.
– Camera: This mode clusters the clips by camera #, changes the main description field to
camera # and start timecode, and orders the list by timecode.
– Date Time: This mode clusters the clips by day, changes the main description field to creation date
and file name, and orders the list by timecode.
– Clip Name: This mode clusters the clips by the first letter of the clip name in alphabetical order,
changes the main description field to clip name, and orders the list by timecode.
– Scene, Shot: This mode clusters the clips by scene, changes the main description field to
scene-shot-take, and orders the list by scene-shot-take.
– Clip Color: This mode clusters the clips by clip color name, changes the main description field to
creation date and start timecode, and orders the list by timecode.
– Date Modified: This mode clusters the clips by day, changes the main description field to creation
date and file name, and orders the list by the last time the clip was modified by the OS filesystem.
– Date Imported: This mode clusters the clips by day, changes the main description field to creation
date and file name, and orders the list by the date the clip was added to the Media Pool.
– Ascending: Orders the Media Pool from lowest numerical value to highest, and
alphabetically from A to Z.
– Descending: Orders the Media Pool from highest numerical value to lowest, and
alphabetically from Z to A.
Revealing a Finder Location in the Media Browser
If you drag a folder from the macOS Finder into the Media Storage browser, the Media Storage
browser will immediately update to show the location of that folder.
Viewer
Clips that you select in any area of the Media page show their contents in the Viewer. The current
position of the playhead is shown in the timecode field at the upper right-hand corner of the Viewer.
Viewer
Simple transport controls appear underneath the jog bar, letting you Jump to First Frame, Play
Backward, Stop, Play Forward, and Jump to Last Frame. A jog control to the left of these buttons lets
you move through a long clip more slowly; click it and drag to the left or right to move through a clip a
frame at a time.
Audio playback can be turned on or off by clicking the speaker icon; to adjust the level, right-click the speaker icon and drag the slider.
To the right of the transport controls, In and Out buttons let you set In and Out points for the current
clip. The Cue buttons move the playhead to these In and Out cue points. The clip’s timecode is also
displayed at the top right.
A jog or scrubber bar appears directly underneath the image, letting you drag the playhead directly
with the pointer. The full width of the jog bar represents the full duration of the clip in the Viewer.
There’s an additional option for the Media Page Viewer that you can expose by choosing Show
Timecode Toolbar from the Viewer option menu. This reveals an info bar at the top of the Viewer that
displays the In and Out timecode, as well as the duration of the currently marked section of media.
An optional info bar for showing the timecode and duration of a marked section of media
You can also put the Viewer into Cinema Viewer mode by choosing Workspace > Viewer Mode >
Cinema Viewer (Command-F), so that it fills the entire screen. This command toggles Cinema Viewer
mode on and off.
Live Media Preview
Enabled by default, the Live Media Preview setting found in the Viewer options menu (the three-dots
menu found at the upper right-hand corner of the Viewer) makes it possible for thumbnails that you’re
skimming in either the Media Storage browser or Media Pool to show the skimmed frame in the
Viewer. When skimming with Live Media Preview enabled, the playhead that appears in the thumbnail
is locked to the playhead displayed in the Viewer’s jog bar. You can turn Live Media Preview on or off.
When Live Media Preview is on in the Viewer options
menu, skimming thumbnails mirrors to the Viewer
Media Pool
The Media Pool is central to the DaVinci Resolve experience. It contains all of the media that you
import into the current project, as well as all of the timelines you create. It also contains all media that’s
automatically imported along with Projects, Timelines, or Compositions that have themselves been
imported into DaVinci Resolve. In the Media page, enough room is given to the Media Pool to make it
an ideal place to sort, sift through, and organize the clips in your project. However, the Media Pool is
also mirrored in the Cut, Edit, Fusion, Color, and Fairlight pages, so you can access clips as you build
timelines, composites, grades, and sound design.
Media Pool with the Bin list open
The Bin List
Ordinarily, all media imported into a project goes into the Master bin, which is always at the top of the
Bin list and encompasses everything in a given project. However, you can add bins of your own, and
the Media Pool can be organized into as many user-definable bins as you like, depending on your
needs. Media can be freely moved from one bin to another from within the Media Pool. When working
in projects with multiple bins, you can choose to expose the bin structure in one of two ways:
– Bin list open: The Bin List button at the upper left-hand corner of the Media Pool lets you open
a separate List view showing all bins in your project, hierarchically. Bins that contain other bins appear with a disclosure button to their left, which you can use to show or hide their contents. With
the Bin list exposed, it’s easy to organize clips among a large collection of bins.
– Bin list closed: When the Bin list is closed, all bins are hidden, and contents of whichever bin is
currently selected populate the Media Pool browser.
Showing Bins in Separate Windows
If you right-click a bin in the Bin list, you can choose “Open As New Window” to open that bin into its
own window. Each window is its own Media Pool, complete with its own Bin, Power Bins and Smart
Bins lists, and display controls.
This is most useful when you have two displays connected to your workstation, as you can drag these
separate bins to the second display while DaVinci Resolve is in single screen mode. If you hide the Bin list, not only do you get more room for clips, but you also avoid accidentally switching bins when you only want to view a particular bin’s contents in that window. You can have as many additional
Bin windows open as you care to, in addition to the main Media Pool that’s docked in the primary
window interface.
Media Pool bins opened as new windows
Bins, Power Bins, and Smart Bins
There are actually three kinds of bins in the Media Pool, and each appears in its own section of the
Bin list. The Power Bin and Smart Bin areas of the Bin list can be shown or hidden using commands in
the View menu (View > Show Smart Bins, View > Show Power Bins). Here are the differences between
the different kinds of bins:
– Bins: Simple, manually populated bins. Drag and drop anything you like into a bin, and that’s
where it lives, until you decide to move it to another bin. Bins may be hierarchically organized, so you can nest bins within bins, Russian-doll style, if you like. Creating new bins is as easy as right-clicking
within the Bin list and choosing Add Bin from the contextual menu.
– Power Bins: Hidden by default. These are also manually populated bins, but these bins are shared
among all of the projects in your current database, making them ideal for shared title generators,
graphics movies and stills, sound effects library files, music files, and other media that you want to
be able to quickly and easily access from any project. To create a new Power Bin, show the Power
Bins area of the Bin list, then right-click within it and choose Add Bin.
– Smart Bins: These are procedurally populated bins, meaning that custom rules employing
metadata are used to dynamically filter the contents of the Media Pool whenever you select a
Smart Bin. This makes Smart Bins a fast way of organizing the contents of projects for which you (or an assistant) have taken the time to add metadata to your clips using the Metadata Editor, adding Scene, Shot, and Take information, keywords, comments and description text, and myriad other
pieces of information to make it faster to find what you’re looking for when you need it. To create
a new Smart Bin, show the Smart Bin area of the Bin list (if necessary), then right-click within it and
choose Add Smart Bin. A dialog appears in which you can edit the name of that bin and the rules it
uses to filter clips, and click Create Smart Bin. (A conceptual sketch of this kind of metadata-driven filtering follows this list.)
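As a rough illustration of what a Smart Bin does, the following Python sketch filters a set of clips, represented here as plain metadata dictionaries, against a list of rules. It is a conceptual model only, not DaVinci Resolve’s implementation, and the rule format shown is hypothetical:

def matches(clip_metadata, rules, match_all=True):
    # Each rule is a (field, text) pair; a clip matches a rule when that text
    # appears in the named metadata field (case-insensitive substring match).
    results = [text.lower() in clip_metadata.get(field, "").lower()
               for field, text in rules]
    return all(results) if match_all else any(results)

clips = [
    {"Clip Name": "A001_C002", "Scene": "12", "Keywords": "Interview,Day 1"},
    {"Clip Name": "A002_C010", "Scene": "14", "Keywords": "B-Roll"},
]
rules = [("Keywords", "interview"), ("Scene", "12")]  # hypothetical Smart Bin rules
print([c["Clip Name"] for c in clips if matches(c, rules)])  # prints ['A001_C002']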
Filtering Bins Using Color Tags
If you’re working on a project that has a lot of bins, you can apply color tags to identify particular bins
with one of eight colors. Tagging bins is as easy as right-clicking any bin and choosing the color you
want from the Color Tag submenu.
For example, you can identify the bins that have clips you’re using most frequently with a red tag.
A bin’s color tag then appears as a colored background behind that bin’s name.
Using Color Tags to identify bins
Once you’ve tagged one or more Media Pool bins, you can use the Color Tag Filter drop-down
menu (the drop-down control to the right of the Bin List button) to filter out all but a single
color of bin.
Using Color Tag filtering to isolate the blue bins
To go back to seeing all available bins, choose Show All from the Color Tag Filter drop-down.
Sorting the Bin List
The Bin list (and Smart Bin list) of the Media Pool can be sorted by bin Name, Date Created, or Date
Modified, in either ascending or descending order. Simply right-click anywhere within the Bin list and
choose the options you want from the Sort by submenu of the contextual menu.
You can also choose User Sort from the same contextual menu, which lets you manually drag all bins
in the Bin list to be in whatever order you like. As you drag bins in this mode, an orange line indicates
the new position that bin will occupy when dropped.
Dragging a bin to a new position
in the Bin list in User Sort mode
If you use User Sort in the Bin list to rearrange your bins manually, you can switch back and forth
between any of the other sorting methods (Name, Date Created, Date Modified) and User Sort and
your manual User Sort order will be remembered, making it easy to use whatever method of bin
sorting is most useful at the time, without losing your customized bin organization.
Thumbnail, List, and Metadata Views in the Media Pool
The contents of the Media Pool can be browsed in the following traditional ways:
– Thumbnail view: Each clip is represented by an icon, with its file name appearing underneath.
When you move the pointer over a clip’s icon, DaVinci Resolve automatically scrubs through that
clip, showing you its contents. Also, a Clip Info drop-down menu appears in the lower right-hand
corner. Click the Clip Info drop-down to see an overlay appear showing essential information about
that clip. In Thumbnail view, you can use the Sort Order drop-down to choose how clips are sorted.
– List view: Each clip is represented by an item on a text list. Additionally, multiple columns of
information appear, organized by headers. Clicking any header lets you sort the list by that column,
in either ascending or descending order.
– Metadata view: Each clip is represented by its own card with a thumbnail and basic clip metadata
information visible. This view is designed to have more metadata information than a thumbnail but
more targeted information than the List view.
For more information about browsing the contents of the Media Pool, see Chapter 18, “Adding and
Organizing Media with the Media Pool.”
Display Audio Clip Waveforms in Media Pool and Media Storage
The Media Pool option menu includes a Show Audio Waveforms setting. When it’s enabled, every audio clip in the Media Pool appears with an audio waveform within its thumbnail area. If Live Media
Preview is on in the Source Viewer, you can then scrub through each clip and hear its contents. If you
don’t want to see audio waveforms, you can turn this option off.
You can now enable waveform thumbnails in the
Media Pool that you can scrub with Live Media Preview.
Metadata Editor
Both the Media and Edit pages have a Metadata Editor. When you select a clip in any area of the
Media page, its metadata is displayed within the Metadata Editor. If you select multiple clips, only the
last clip’s information appears. The Metadata Editor’s header contains uneditable information about
the selected clip, including the file name, directory, duration, video codec, frame rate, resolution, audio
codec, sample rate, and number of channels.
Because there are so very many metadata fields available, two drop-down menus at the top let you
change which set of metadata is displayed in the Metadata Editor.
– Metadata Presets (to the left): If you’ve used the Metadata panel of the User Preferences to
create your own custom sets of metadata, you can use this drop-down to choose which one to
expose. Surprisingly enough, this is set to “Default” by default.
– Metadata Groups (to the right): This drop-down menu lets you switch among the various groups
of metadata that are available, grouped for specific tasks or workflows.
The heart of the Metadata Editor is a series of editable fields underneath the header that let you
review and edit the different metadata criteria that are available. For more information on editing clip
metadata and creating custom metadata presets, see Chapter 19, “Using Clip Metadata.”
Clip Metadata Editor showing the Clip Details panel
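The same header information can be read, and the editable fields written, through DaVinci Resolve’s external scripting API. This is a minimal sketch that assumes the scripting environment is set up per the scripting README; the property and metadata key names shown should be verified against your version.

import DaVinciResolveScript as dvr_script

resolve = dvr_script.scriptapp("Resolve")
media_pool = resolve.GetProjectManager().GetCurrentProject().GetMediaPool()

for clip in media_pool.GetCurrentFolder().GetClipList():
    # Header-style info is read from clip properties; editable fields go through
    # GetMetadata/SetMetadata.
    print(clip.GetClipProperty("File Path"),
          clip.GetClipProperty("FPS"),
          clip.GetClipProperty("Resolution"))
    if not clip.GetMetadata("Description"):
        clip.SetMetadata("Description", "Reviewed in the Media page")  # hypothetical text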
Audio Panel
The Audio Panel can be put into one of two modes via an option menu. In the default Meters mode,
audio meters are displayed that show the levels of audio in clips you’re playing. In Waveform mode,
you can open audio clips side by side with video clips in the Viewer in order to sync them together
manually. For more information on manually syncing audio to video, see Chapter 21, “Syncing Audio
and Video.”
When set to Meters mode, you can check audio embedded within clips you’ve imported into the Media
Pool. As you play a clip, each audio meter shows the levels for whichever of these tracks contain
audio. A Mute button in the Viewer lets you disable and enable audio playback.
Audio Meters Exposed
Dual Monitor Layout
The Media page has a dual monitor layout that provides maximum space for the Media Storage
browser and Media Pool on the primary monitor, and an enlarged Viewer, Audio Panel, and Metadata
Editor on the secondary monitor, along with a complete set of video scopes for helping you to evaluate
media as you organize it.
To enter dual screen mode:
– Choose Workspace > Dual Screen > On.
The Media page in dual screen mode
To switch which UI elements appear on which monitors:
– Choose Workspace > Primary Display > Display 1 or Display 2, which reverses the contents of both
monitors in dual screen mode.
Customizing the Media Page
The Media Page can be customized to create more room in different areas to accommodate
specific tasks.
To resize any area of the Media page:
– Drag the vertical or horizontal border between any two panels to enlarge one and shrink the other.
Methods of hiding different parts of the Media page:
– To toggle the Clone Tool on and off: Click the Clone Tool button in the UI toolbar at the top.
– To toggle the Audio Panel on and off: Click the Audio button in the UI toolbar at the top.
– To toggle the Metadata Editor on and off: Click the Metadata button in the UI toolbar at the top.
– To toggle the Media Storage browser folder list on and off: Click the button at the top-left
corner of the Media Browser.
– To toggle the Media Pool Bin list on and off: Click the button at the top-left corner of
the Media Pool.
Methods of organizing favorite file system locations in the Media Storage browser:
– To add a favorite: Right-click any folder in the Media Storage browser folder list, and choose
“Add folder to favorites” from the contextual menu.
– To remove a favorite: Right-click the favorite you want to remove, and choose
“Remove folder from favorites” from the contextual menu.
To return all pages to their default layout:
– Choose Workspace > Reset UI Layout.
Undo and Redo in DaVinci Resolve
No matter where you are in DaVinci Resolve, Undo and Redo commands let you back out of steps
you’ve taken or commands you’ve executed and reapply them if you change your mind.
DaVinci Resolve is capable of undoing the entire history of things you’ve done since creating or
opening a particular project. When you close a project, its entire undo history is purged. The next time
you begin work on a project, its undo history starts anew.
Because DaVinci Resolve integrates so much functionality in one application, there are three separate
sets of undo “stacks” to help you manage your work.
– The Media, Cut, Edit, and Fairlight pages share the same multiple-undo stack, which lets you
backtrack out of changes made in the Media Pool, the Timeline, the Metadata Editor, and
the Viewers.
– Each clip in the Fusion page has its own undo stack so that you can undo changes you make to
the composition of each clip, independently.
– Each clip in the Color page has its own undo stack so that you can undo changes you make to
grades in each clip, independently.
In all cases, there is no practical limit to the number of steps that are undoable (although there may be
a limit to what you can remember). To take advantage of this, there are three ways you can undo work
to go to a previous state of your project, no matter what page you’re in.
To simply undo or redo changes you’ve made one at a time:
– Choose Edit > Undo (Command-Z) to undo the previous change.
– Choose Edit > Redo (Shift-Command-Z) to redo to the next change.
– On the DaVinci control panel, press the UNDO and REDO buttons on the T-bar panel.
TIP: If you have the DaVinci control panel, there is one other control that lets you control the
undo stack more directly when using the trackballs, rings, and pots. Pressing RESTORE
POINT manually adds a memory of the current state of the grade to the undo stack.
Since discrete undo states are difficult to predict when you’re making ongoing adjustments
with the trackball and ring controls, pressing RESTORE POINT lets you set predictable states
of the grade that you can fall back on.
You can also undo several steps at a time using the History submenu and window. At the time of this
writing, this only works for multiple undo steps in the Media, Cut, Edit, and Fairlight pages.
To undo and redo using the History submenu:
1 Open the Edit > History submenu, which shows (up to) the last twenty things you’ve done.
2 Choose an item on the list to undo back to that point. The most recent thing you’ve done appears
at the top of this list, and the change you’ve just made appears with a check next to it. Steps
that have been undone but that can still be redone remain in this menu, so you can see what’s
possible. However, if you’ve undone several changes at once and then you make a new change,
you cannot undo any more and those steps disappear from the menu.
The History submenu, which lets you undo several steps at once
Once you’ve selected a step to undo to, the menu closes and the project updates to show you its
current state.
To undo and redo using the Undo window:
1 Choose Edit > History > Open History Window.
2 When the History dialog appears, click an item on the list to undo back to that point. Unlike
the menu, in this window the most recent thing you’ve done appears at the bottom of this list.
Selecting a change here grays out changes that can still be redone, as the project updates to
show you its current state.
The Undo history window that lets you browse the
entire available undo stack of the current page
3 When you’re done, close the History window.
Chapter 18
Adding and Organizing Media with the Media Pool
Before you can edit or grade media, you need to add it to the Media Pool, which is
the central repository of clips in DaVinci Resolve. The Media Pool is a feature-rich
environment, giving you many different methods of importing clips into your project
and organizing them.
Contents
Copying Media Using the Clone Tool 317
Adding Media to the Media Pool 319
Basic Methods for Adding Media in the Media Page 319
Adding Subclips From the Media Storage Panel 320
Adding Individual Frames From Image Sequences 321
Adding Media Based on EDLs 321
Splitting Clips Based on EDLs 322
Import Clips With Metadata Via Final Cut Pro 7 XML 322
Adding Media With Offset Timecode 322
Adding Media to the Cut, Edit, Fusion, and Fairlight Pages 323
Removing Media From the Media Pool 323
Adding and Removing External Mattes 324
What Are Mattes For? 325
Adding Mattes 325
Using Embedded Mattes in OpenEXR Files 326
Adding Offline Reference Movies 326
Extracting Audio in Media Storage 327
Manually Organizing the Media Pool 327
To Select Clips in the Media Pool 327
Organizing Media into Bins 327
Import and Export DaVinci Resolve Project Bins (.drb) 328
Import and Export DaVinci Resolve Timelines (.drt) 329
Sharing Media Among Projects Using Power Bins 329
Automated Organization Using Smart Bins 330
Smart Bins Are Only As Good As Your Metadata 330
Smart Bins Update Their Contents Dynamically 331
Automatic Smart Bin Creation 331
Manual Smart Bin Creation 332
Organizing Smart Bins 334
Duplicating Clips in the Media Pool 335
Duplicating Timelines 335
Choosing How to Display Bins 335
Showing Bins in Separate Windows 335
Using the Media Pool in Thumbnail View 336
Working With Columns in List View 336
Editable Description and Comments Columns 339
Using Metadata View in the Media Pool 339
Finding Clips, Timelines, and Media 341
Finding Clips and/or Timelines Within the Media Pool 341
Finding Synced Audio 342
Finding Timeline Clips in the Media Pool 343
Finding Timelines in the Media Pool 343
Finding Media in the Media Storage Panel and Finder 343
Going Immediately to a File System Location in the Media Browser 343
Tracking Media Usage 343
Thumbnail Clip Usage Indicators 343
List View Clip Usage Column 344
Relinking Media Simply 344
Relink Media 344
Relink Selected Clips 345
Change Source Folder 346
Copying Media Using the Clone Tool
One of the few things you may want to do before you add media to your project is to clone all camera
original media onto a safe set of backup volumes, for redundancy in case any one volume fails.
You should also consider cloning all media to an off-site backup.
Whether you’re on-set working as a DIT, or doing data ingest at a post facility, the Clone Tool in the
Media page lets you safely and accurately copy media from SD cards, SSDs, or disk drives, to
multiple destinations, with a checksum report (based on a choice of six checksum options) written to
the root of each destination volume that verifies the absolute accuracy of the duplicate media saved to
each destination.
To duplicate media using the Clone Tool:
1 Open the Clone Tool by clicking the Clone button at the far left of the Media Pool toolbar, which
reveals the Clone Tool palette.
2 Click the Add Job button at the bottom left to create a new job. A job item appears within the
Clone Tool palette, with overlays to guide you through its use.
3 Drag a volume or folder from the Media Storage panel to the “Drop source here” drop zone.
Alternately, you can right-click any volume or folder in the Media Storage panel and choose Set As
Clone Source.
4 Next, drag one or more volumes or folders from the Media Storage panel to the “Drop destination
here” drop zone. Alternately, you can right-click any volume or folder in the Media Storage panel
and choose Set As Clone Destination. You can have more than one destination.
5 If you want to preserve the top level folder name from the source volume or folder, click the Clone
Tool panel’s option menu, and choose “Preserve Folder Name.” The overall folder structure of the
cloned media is always preserved.
6 If you want to change the checksum method used by DaVinci Resolve to verify that each clip
has copied properly, you can choose an option from the Checksum submenu of the Clone Tool’s
option menu. Each option is a tradeoff between the speed of your file copy operation and the
security of the verification process. Greater security generally means a slower copy operation (a file-verification sketch follows this procedure). The options are:
– None: Disables data verification, sacrificing safety for speed.
– File Size: Fast, but minimal data verification. Data verification is done simply by comparing the
file size of a duplicate file with that of the original. “Collision resistance” refers to whether two
files (or a file and an incorrectly duplicated file) may coincidentally have the same comparison
value (be it file size, an error-detecting code, or a hash). File Size is very fast, but it’s minimally
collision resistant.
– CRC 32: Faster than MD5, but less secure. An error-detecting code rather than the hash used
by the next three options. A “check value” is generated based on the remainder of a polynomial
division of the file’s contents. By comparing the check value derived from an original file with
that derived from a copy, data integrity can be verified. This is a much faster data verification
scheme than MD5 (the default), but it is significantly less collision resistant.
– MD5: This is the default setting. A reasonable tradeoff between speed and security. A hash function generates a 128-bit value that’s effectively unique to a particular file; data integrity is checked by comparing the hash value generated by the original file to that generated by the copied
file. MD5 is not as collision resistant as the SHA options, but it’s a faster operation, and the
probability of such collisions in conventional film and video workflows is probably small.
– SHA 256, SHA 512: Slower, but more secure. SHA is a more collision resistant hash function
than MD5; options are provided for 256- and 512-bit value generation, with 512 being even
more collision resistant than 256. However, these options are progressively slower than MD5,
and will result in significantly slower copy times. Similarly to MD5, data integrity is checked by
comparing the hash value generated by the original file to that generated by the copied file.
7 When you’re ready, click the Clone button to initiate the cloning process.
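To illustrate the hash-based verification that the MD5 and SHA options perform, here is a minimal, generic Python sketch. It is not DaVinci Resolve’s implementation, and the file paths are placeholders:

import hashlib

def file_hash(path, algorithm="md5", chunk_size=1024 * 1024):
    # Hash the file in chunks so large camera files never need to fit in memory.
    digest = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

source = "/Volumes/CARD_A/CLIP_0001.mov"   # placeholder paths
backup = "/Volumes/BACKUP_1/CLIP_0001.mov"

for algorithm in ("md5", "sha256"):
    ok = file_hash(source, algorithm) == file_hash(backup, algorithm)
    print(algorithm, "verified" if ok else "MISMATCH")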
To duplicate media quickly using the Clone Tool:
1 Right-click any volume or folder in the Media Storage panel, and choose Set as Clone Source.
A job item appears within the Clone Tool palette, populated by the volume or folder you selected.
2 Next, right-click any volume or folder in the Media Storage panel, and choose Set As Clone
Destination. You can do this more than once because you can have more than one destination.
3 If you want to preserve the top level folder name from the source volume or folder, click the Clone
Tool panel’s option menu, and choose “Preserve Folder Name.” The overall folder structure of the
cloned media is always preserved.
4 When you’re ready, click the Clone button to initiate the cloning process.
The Clone tool with a job set up
Adding Media to the Media Pool
At minimum, you’ll be using the Media page to add clips to a project to begin editing, in preparation to
create dailies, or as a prelude to conforming a project using an EDL. All clips you want to work with
must first be added to the Media Pool to be available for grading and processing in DaVinci Resolve,
regardless of whether or not there’s edited project data to go along with it.
If you import XML or AAF projects, you can choose to automatically import all accompanying media as
part of the import process you initiate in the Edit page. However, if you find yourself needing to replace
updated effects or stock footage in the Timeline, or you’re called upon to add additional media such
as animated titles or superimposed clips for compositing, then you’ll still need to use the Media page
to do so.
Whatever kind of project you’re working on, you can add clips to the Media Pool from as many different
volumes as you need. All imported clips are linked to the original media on whichever disks you found
them; files are not moved, copied, or otherwise transcoded when you add them to the Media Pool.
Consequently, it’s a good idea to make sure that all media you want to import into your project has
already been copied to a suitably fast volume before importing it.
Basic Methods for Adding Media in the Media Page
There are several ways of adding clips to the Media Pool.
To add individual clips from the Media Storage panel to the Media Pool:
1 Use the Media Storage panel to find a media file to import.
2 If you have multiple bins available in the Bin list, choose the bin you want to add the
incoming media to.
3 Do one of the following:
– Shift-click or Command-click multiple files, then right-click one of the selected files and
choose “Add into Media Pool.”
– Drag a clip from the Media Storage panel browser to the Media Pool or to a specific
bin in the Bin list.
4 If a dialog appears asking if you want to change your project to match the incoming media’s frame rate, click “Change” to alter the project’s settings, or click “Don’t Change” to continue importing the media while leaving the project at its previous frame rate. Once clips have been imported into the Media Pool, the frame rate cannot be changed again, so choose carefully.
You also have the option of dragging media directly from the file system of supported platforms into
the Media Pool.
To drag one or more clips from the File System to the Media Pool (supported platforms only):
1 Select one or more clips in your File System.
2 Drag those clips into the Media Pool of DaVinci Resolve or to a specific bin in the Bin list.
Those clips are added to the Media Pool of your project.
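Imports can also be scripted. The following minimal sketch uses DaVinci Resolve’s external scripting API to create a bin and import a couple of files into it; the paths and bin name are placeholders, and the calls should be checked against the scripting README for your version.

import DaVinciResolveScript as dvr_script

resolve = dvr_script.scriptapp("Resolve")
media_pool = resolve.GetProjectManager().GetCurrentProject().GetMediaPool()

# Create a destination bin, make it current, then import a list of files into it.
day_01_bin = media_pool.AddSubFolder(media_pool.GetRootFolder(), "Day 01")  # hypothetical bin name
media_pool.SetCurrentFolder(day_01_bin)

clips = media_pool.ImportMedia([
    "/Volumes/MEDIA/A001_C001.mov",  # placeholder paths
    "/Volumes/MEDIA/A001_C002.mov",
])
print("Imported", len(clips or []), "clips")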
If you need to add the contents of all directories and subdirectories to the Media Pool as a flat group
of media, that’s easily accomplished. A good example of this is when you’re importing camera original
media from a cloned file structure, in which clips are organized into subdirectories that are many levels
deep. DaVinci Resolve can easily import all of these clips and put them all into the same bin.
To add the entire contents of one or more directories of clips to the Media Pool:
1 Use the Media Storage panel to find and select one or more directories containing media files you
need to import.
2 If you have multiple bins available in the Bin list, select the specific bin you want to add the
incoming media to.
3 Do one of the following:
– Right-click the selected directory or directories in the Media Storage panel, and choose
“Add Folder into Media Pool” to add only clips from the selected directory. Subdirectories
are ignored.
– Right-click the directory in the Media Storage panel, and choose “Add Folder and
SubFolders into Media Pool” to add clips from the selected directory and those from all
subdirectories within.
– Drag one or more selected directories you want from the Media Storage panel’s browser area
to the browser area of the Media Pool to add its contents, and the contents of all subdirectories
within, to the currently selected bin in the Bin list.
You also have the option of using the directories and subdirectories that organize media in your file
system as bins in the Media Pool, so that you can preserve the original organization of your media.
To add all clips and folders in a directory organized into matching folders in the Media Pool:
1 Use the Media Storage panel to find the directory containing the files you need to import.
2 Do one of the following:
– Right-click the directory and choose “Add Folder and SubFolders into Media Pool (Create Bins)”
– Drag the folder you want to import from the Media Storage panel to the Bin list of the Media
Pool to add that folder, and all subfolders within, as a new bin in the Bin list.
A folder appears in the Media Pool with the same name as the folder you dragged in. All clips and all
subdirectories appear within, nested hierarchically in the Media Pool as they were in the file system.
Import Hierarchically Organized Nests of Empty Directories
You can also import a nested series of directories and subdirectories that constitutes a default
bin structure you’d like to bring into projects, even if those directories are empty, by dragging
them from your file system into the Media Pool Bin list of a project. The result is a hierarchically
nested series of bins that mimics the structure of the directories you imported. This is useful if
you want to use such a series of directories as a preset bin structure for new projects.
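Such a preset bin structure could also be created programmatically from a directory tree. A rough sketch, again assuming the external scripting environment is configured, with a placeholder source path:

import os
import DaVinciResolveScript as dvr_script

resolve = dvr_script.scriptapp("Resolve")
media_pool = resolve.GetProjectManager().GetCurrentProject().GetMediaPool()

def mirror_bins(parent_bin, directory):
    # Create one bin per subdirectory, recursing so the Bin list
    # ends up mirroring the folder hierarchy on disk.
    for entry in sorted(os.listdir(directory)):
        path = os.path.join(directory, entry)
        if os.path.isdir(path):
            child_bin = media_pool.AddSubFolder(parent_bin, entry)
            mirror_bins(child_bin, path)

mirror_bins(media_pool.GetRootFolder(), "/Volumes/MEDIA/ProjectFolders")  # placeholder path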
Adding Subclips From the Media Storage Panel
If you’re browsing long source clips in the Media Storage panel, but you only want to import a small
segment of a much longer clip into the Media Pool, you can create subclips directly from the Media
Storage panel.
To add a subclip from a clip in the Media Storage panel to the Media Pool:
1 Single-click any clip in the Media Storage panel to open it into the Viewer in order to create a
subclip without needing to first import that clip into the Media Pool.
2 Set In and Out points in the Source Viewer to define the section you want to turn into a subclip.
3 Do one of the following:
– Right-click the jog bar and choose Make Subclip from the contextual menu
– Drag the clip from the Viewer to the Media Pool to add it as a subclip
Adding Individual Frames From Image Sequences
If you’re working with image sequences, or with sequentially numbered image files from any source,
DaVinci Resolve automatically presents them as clips in the Media Storage panel. This is good if that’s
what they are, but there are instances where sets of photos, each of which is actually a separate media file, are also sequentially numbered. For this reason, you can import individual frames rather than entire image sequences.
To choose between adding individual frames from a number sequence of images, or adding them
as image sequence clips in the Media Storage panel:
1 Click the Media Storage panel option menu, and choose Frame Display Mode.
2 Choose one of the drop-down options:
– Auto: DaVinci Resolve will automatically select Individual Frames or Image Sequence based on
file type. For example, DPX and EXR files will be imported as image sequence clips, while JPG
files will be imported as individual frames.
– Individual: Each image sequence is now separated into its individual frames, allowing you to
select only the frames you need.
– Sequence: Will group sequentially numbered files together as an image sequence clip,
regardless of file type.
3 Use any of the previously described methods to add the frames you want to the Media Pool as
individual clips or image sequences.
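As a standalone illustration of how sequentially numbered files group into a sequence, the following Python sketch clusters files whose names differ only by a trailing frame number. It is a conceptual example, not DaVinci Resolve’s detection logic, and the directory path is a placeholder:

import os
import re
from collections import defaultdict

def group_image_sequences(directory):
    # Group files whose names differ only by a trailing frame number, so that
    # shot_0001.dpx ... shot_0240.dpx are reported as a single sequence.
    sequences = defaultdict(list)
    pattern = re.compile(r"^(.*?)(\d+)(\.[^.]+)$")
    for name in sorted(os.listdir(directory)):
        match = pattern.match(name)
        if match:
            prefix, frame, extension = match.groups()
            sequences[(prefix, extension)].append(int(frame))
    return sequences

for (prefix, extension), frames in group_image_sequences("/Volumes/MEDIA/SCANS").items():
    print(prefix + extension, "frames", min(frames), "to", max(frames), "count", len(frames))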
Adding Media Based on EDLs
Another strategy for adding media to the Media Pool is to use an EDL to add only the clips it refers to
from a directory. This lets you add only the clips that are necessary for conforming a particular
imported project before conforming an EDL, and eliminates the need to add too much media to the
Media Pool, which might slow you down in the case of projects referencing terabytes of media.
Furthermore, you can choose multiple EDLs to base the import on, and many directories to examine.
The EDLs reference clips via their timecode, and sometimes their reel name and path. These values, together with the conform frame rate you set previously in the Project Settings, are used to place the correct clips into the Media Pool.
To add only media used in an EDL to the Media Pool:
1 If necessary, open the General Options panel of the Project Settings, turn on the “Assist
using reel names from the” checkbox, and choose a method with which to extract reel name
information from the media files you’re about to import. For more information, see Chapter 19,
“Using Clip Metadata.”
2 Right-click a directory in the Media Storage panel, and choose one of the following commands:
– Add Folder Based on EDLs into Media Pool
– Add Folder and SubFolders Based on EDLs into Media Pool
3 Using the file dialog that appears, select one or more EDLs to use.
DaVinci Resolve searches the directory hierarchy, either one level deep or all levels deep, for every
media file matching the source timecode and the reel ID of an event in one of the selected EDLs.
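For reference, the following sketch shows the kind of information an EDL event carries. It parses only simple cut events from a CMX 3600-style EDL and is not how DaVinci Resolve performs its matching; the EDL path is a placeholder:

def parse_edl_events(edl_path):
    # Extract (reel name, source in, source out) from simple cut events;
    # transitions, comments, and extended fields are skipped.
    events = []
    with open(edl_path) as edl:
        for line in edl:
            fields = line.split()
            if len(fields) >= 8 and fields[0].isdigit():
                reel = fields[1]
                source_in, source_out = fields[-4], fields[-3]
                events.append((reel, source_in, source_out))
    return events

for reel, source_in, source_out in parse_edl_events("/Volumes/MEDIA/day01.edl"):
    print(reel, source_in, "-", source_out)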
Splitting Clips Based on EDLs
You can also use EDLs to split a media file into multiple clips in the Media Pool, either as an alternate
means of “preconforming” a flattened master media file, or to import multiple sections of a longer
media file that happen to be referenced by an EDL.
To split and add clips based on an EDL:
1 Right-click a directory in the Media Storage panel, and choose “Split and add into Media Pool.”
2 Using the file dialog that appears, select an EDL to use, and click Open.
3 Choose a frame rate to use to conform the clips to in the “File Conform Frame Rate” dialog,
and click OK.
4 Choose a handle size, in frames, and whether or not you want to split unreferred clips from
the “Enter handle size for splitting” dialog, and click Split & Add. The media file is split into the
component clips specified in the EDL, and added to the Media Pool.
TIP: Turning on the Split Unreferred Clips checkbox automatically splits out sections of the file
that were not referred to by the EDL you selected, and adds them to the Media Pool
separately, giving you access to every piece of media that’s available.
Import Clips With Metadata Via Final Cut Pro 7 XML
In order to support workflows with media asset management (MAM) systems, DaVinci Resolve
supports two additional Media Pool import workflows that use Final Cut Pro 7 XML to import clips
with metadata.
To import clips with metadata using Final Cut Pro 7 XML files, do one of the following:
– Right-click anywhere in the background of the Media Pool, choose Import Media from XML, and
then choose the XML file you want to use to guide import from the import dialog.
– Drag and drop any Final Cut Pro 7 XML file into the Media Pool from the macOS Finder.
Every single clip referenced by that XML file that can be found via its file path will be imported into the
Media Pool, along with any metadata entered for those clips. If the file path is invalid, you’ll be asked to
navigate to the directory with the corresponding media. Additionally, the following metadata
is imported:
– Clips
– Browser metadata
– Subclips
– Clip Markers, with colors and duration
– Bin Hierarchy
– Comments
Adding Media With Offset Timecode
Sometimes source media was created with incorrectly offset timecode, due to mistakes made earlier
in the postproduction process. If this offset is consistent, you can use the “Add Folder with Source
Offset” command to add media to the Media Pool as clips with a timecode offset.
To add a folder of clips to the Media Pool with offset timecode:
1 Right-click a directory in the Media Storage panel, and choose one of the following commands:
– Add Folder with Source Offset
– Add Folder and SubFolders with Source Offset
2 Choose a number of frames with which to offset the timecode from the “Change Frame Offset”
dialog, and click Apply.
The media is imported as clips with offset timecode in the Media Pool. However, the original source
timecode of the clips on disk has not been altered. All media rendered out of the Deliver page will
reflect the offset timecode.
Adding Media to the Cut, Edit, Fusion,
and Fairlight Pages
While adding clips to the Media Pool in the Media page provides the most organizational flexibility and
features, if you find yourself in the Cut, Edit, Fusion, or Fairlight page and you need to quickly import a
few clips for immediate use, you can do so in a couple of different ways.
To add media by dragging one or more clips from the Finder to the Media Pool (macOS only):
1 Select one or more clips in the Finder.
2 Drag those clips into the Media Pool of DaVinci Resolve, or to a bin in the Bin list.
Those clips are added to the Media Pool of your project.
To use the Import Media command in the Media Pool:
1 Right-click anywhere in the Media Pool, and choose Import Media.
2 Use the Import dialog to select one or more clips to import, and click Open.
Those clips are added to the Media Pool of your project.
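For reference, the same import can be scripted. This minimal sketch uses the documented ImportMedia() call on the Media Pool object of the scripting API; the file paths are placeholders.

    import DaVinciResolveScript as dvr  # installed with DaVinci Resolve

    resolve = dvr.scriptapp("Resolve")
    media_pool = resolve.GetProjectManager().GetCurrentProject().GetMediaPool()

    # Imports the listed files into the currently selected bin and returns
    # the new MediaPoolItems (or None on failure).
    clips = media_pool.ImportMedia([
        "/Volumes/Media/interview_01.mov",  # placeholder paths
        "/Volumes/Media/broll_03.mov",
    ])
    print([c.GetName() for c in clips or []])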
Removing Media From the Media Pool
If you’ve added clips to the Media Pool that you need to eliminate, this is easy to do, either singly, or in
the aggregate.
To remove clips from the Media Pool, do one of the following:
– Select one or more clips in the Media Pool, then press the Delete or Backspace key.
– Select one or more clips in the Media Pool, right-click one of the selected clips, and then choose
Remove Selected Clips.
– Right-click anywhere in the Media Pool, and choose Remove All Clips in Bin.
NOTE: If you’ve turned on “Automatically match master timeline with media pool” in the
General Options panel of the Project Settings, you cannot remove all clips from the Media
Pool if there are other timelines using that media.
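Clips can also be removed by script. This sketch uses the Media Pool's DeleteClips() call to remove every clip in the currently selected bin whose name contains an example suffix; adjust the match to your own naming.

    import DaVinciResolveScript as dvr

    resolve = dvr.scriptapp("Resolve")
    media_pool = resolve.GetProjectManager().GetCurrentProject().GetMediaPool()

    # Remove every clip in the currently selected bin whose name contains
    # "_TEMP" (the suffix is just an example).
    folder = media_pool.GetCurrentFolder()
    doomed = [c for c in folder.GetClipList() if "_TEMP" in c.GetName()]
    if doomed:
        media_pool.DeleteClips(doomed)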
To remove clips from the Master Timeline (if it’s exposed):
Open the Edit page, then select one or more clips in the Media Pool, right-click one of the selected
clips, and choose “Remove Selected Clips from Master Timeline.” For more information about using
the Master Timeline, see Chapter 33, “Using the Edit Page.”
Adding and Removing External Mattes
If you’ve been provided with matte files to accompany one or more media files used by a program
you’re grading, you can attach them directly to specific clips in the Media Pool, in order to use them as
key sources for a Clip Grade in the Node Editor of the Color page. You can even use matte files that
pack multiple mattes within a single piece of media. This can be done by either writing different mattes
to each of the red, green, and blue channels of a clip, or by embedding multiple matte passes within a
single OpenEXR file.
Matching RGB and Matte Images
When the Media Pool is in Icon view, clips with clip mattes appear with a badge.
A clip matte, seen in Icon view
Clip mattes appear listed underneath a clip in the Media Pool when it’s in List view.
A clip matte, seen in List view
Alternately, you can add a timeline matte to the Media Pool, which isn't attached to any clip and
can be used as a key source in the Color page within any clip's Clip grade, or within a Timeline
Grade. Timeline mattes appear as stand-alone clips in the Media Pool.
A timeline matte, seen in Thumbnail view
What Are Mattes For?
Matte files are useful for two things. Traditionally, mattes are grayscale media files that identify regions
of varying opacity, with white representing solid areas, and black representing transparency.
For example, exported clips from a compositing application sometimes are accompanied by one or
more matte files that correspond to keys or rotoscoped mattes from the composite. By importing these
matte files using the “Add as Matte” command, you can attach them to the clips they belong to in the
Media Pool, so that they’re only available to the clips they’re synced to.
However, mattes can also be used as creative tools to apply grain and texture for effect. What a matte
does depends on how you connect it in the Node Editor of the Color page. Media files that you may
want to use as mattes for potentially any clip can also be added to the Media Pool as so-called
timeline mattes, which can be applied to any clip you want.
TIP: If necessary, you can also apply LUTs to both clip mattes and timeline mattes in
the Media Pool, simply by right-clicking a matte, and choosing a LUT from the 1D LUT or
3D LUT submenus. This can be helpful for adjusting incorrectly formatted mattes.
Adding Mattes
To use mattes, you need to add them in very specific ways.
To assign a matte to a clip in the Media Pool:
1 Select a clip in the Media Pool to which you want to attach an external matte.
2 Select the matching external matte file in the Media Storage panel, right-click it, and choose
Add to Media Pool as a Matte.
The matte is attached to the clip as a clip matte. A badge indicates that clip has a matte when the
Media Pool is in Icon view, and the matte itself can be seen, if you put the Media Pool into List
view, appearing as a nested item underneath the clip it’s attached to.
To remove a matte from a clip in the Media Pool:
1 Put the Media Pool into List view.
2 Right-click the external matte file you need to remove, and choose Remove Selected Clips.
Removing an external matte clip also removes that matte’s key from any clip grades that use it,
such that any clips using it as a key input change from a secondary operation to a primary
operation, where the color adjustment affects the entire image.
To add a timeline matte to the Media Pool:
1 Make sure no clip is selected in the Media Pool.
2 Select an external matte file in the Media Storage panel, right-click it, and choose Add to Media
Pool as a Matte.
The external matte appears in the Media Pool as a timeline matte.
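The scripting API also lists matte-related calls on the Media Storage object. The sketch below, with placeholder paths, attaches a clip matte to one clip and adds a stand-alone timeline matte; the AddClipMattesToMediaPool() and AddTimelineMattesToMediaPool() names are taken from recent scripting READMEs, so confirm they exist in your version.

    import DaVinciResolveScript as dvr

    resolve = dvr.scriptapp("Resolve")
    storage = resolve.GetMediaStorage()
    media_pool = resolve.GetProjectManager().GetCurrentProject().GetMediaPool()

    # Attach a matte file to the first clip in the current bin as a clip matte
    # (assumes the bin contains at least one clip)...
    clips = media_pool.GetCurrentFolder().GetClipList()
    if clips:
        storage.AddClipMattesToMediaPool(
            clips[0], ["/Volumes/Media/Mattes/shot_010_matte.mov"])  # placeholder path

    # ...or add a stand-alone timeline matte to the current bin.
    storage.AddTimelineMattesToMediaPool(
        ["/Volumes/Media/Mattes/grain_overlay.mov"])                 # placeholder path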
You can also assign mattes to clips directly in the Color page, which can sometimes be faster.
To assign a matte to a clip in the Color page:
– Drag any clip from the Media Pool to the Node Editor.
That clip appears as an External matte for the current clip’s grade in the Node Editor, and it’s also
automatically assigned to the current clip in the Media Pool.
For more information on using external matte clips as keys when grading, see Chapter 143,
“Combining Keys and Using Mattes.”
Using Embedded Mattes in OpenEXR Files
If you’re importing OpenEXR files with embedded matte passes, there’s nothing special you need to
do, as the mattes are within the clip you’ve just imported into the Media Pool. For more information on
how to use mattes within OpenEXR files, see Chapter 142, “Combining Keys and Using Mattes.”
Adding Offline Reference Movies
When moving a project from another application to DaVinci Resolve, it’s useful to export the entire
program as a single media file for use as an Offline Reference Movie. Then, you can import this file in a
special way to use for dual Viewer comparison in the Edit page, or as a split-screen comparison for
fade wipe in the Color page. As of DaVinci Resolve 16 it’s no longer necessary to import reference
movies in this way to make an offline comparison, but it can still be convenient when managing
multiple timelines and versions that require great specificity.
To add a clip as an offline reference clip:
– Right-click it in the Media Storage panel, and choose “Add As Offline Clip.”
That clip appears with a small checkerboard badge on its icon in the Media Pool, or as the icon to the
left of its name when the Media Pool is in List view.
Checkerboard icon indicating
an Offline comparison video
For more information on using an offline video to compare with an imported Timeline in the Edit page,
see Chapter 55, “Preparing Timelines for Import and Comparison.” For more information on split-
screen reference of Offline video in the Color page, see Chapter 123, “Using the Color Page.”
Extracting Audio in Media Storage
If there’s a video clip in the Media Storage panel that has audio you need, but you don’t want the video
component, you can use the Extract Audio command to create a self-contained audio clip that you can
then import into the Media Pool by itself.
To extract the audio from a media file:
1 Right-click a clip in the Media Storage panel, and choose Extract Audio.
2 Click the Browse button in the Extract Audio dialog to find another disk location for the
extracted clip.
3 Click Extract. The audio channels are extracted and written as a .WAV file to the selected
destination.
4 After you’ve extracted the stand-alone .WAV file, you’ll need to import it into the Media Pool if you
want to use it in your project.
Manually Organizing the Media Pool
Whether you’re doing onset work, creating digital dailies, organizing media to edit, or ingesting media
to conform to an imported project, it’s vitally important to stay organized. The Media Pool provides
many different tools for doing so. This section examines how you can create bins to manually organize
collections of clips.
To Select Clips in the Media Pool
There are a variety of ways you can make clip selections in the Media Pool in preparation for relinking,
unlinking, moving, duplicating, deleting, or doing any other operation to them.
– Click any clip to select it.
– Drag a bounding box around several clips to select them all.
– Hold the Command or Shift keys down and drag a bounding box around another discontiguous
group of clips to either add them to the current selection or remove them from the
current selection.
– Click one clip, then Shift-click another to select both clips and make a continuous selection of all
clips in-between. Shift-clicking another clip can expand or contract the selection.
– Command-click individual clips to select a discontiguous number of clips. Command-click a clip
that’s already selected to individually de-select it, while leaving the rest of the selection alone.
– With one clip selected, hold the Shift or Command keys down and use the Arrow keys to expand
the selection to other clips.
Organizing Media into Bins
You can easily organize clips into different bins in the Media Pool. For some workflows, this is required,
while with other workflows it’s purely optional.
Methods of working with bins in the Media Pool:
– To add a bin to the Media Pool: Right-click in the Bin list and choose Add Bin. To add a bin inside
another bin, right-click any bin and choose Add Bin.
– To move selected clips into a new bin: Select all the clips you want to put into a new bin, then
right-click one of the selected clips, and choose Create Bin With Selected Clips.
– To rename a bin: Select the bin you want to rename, and then click its name a second time to
make it editable. With the bin name highlighted, type a new name and press Return. Alternately,
you can right-click a bin, choose Rename Bin, and then type a new name and press Return.
– To add incoming clips to a specific bin in the Media Pool: Click a bin to select it, then use any of
the previously described methods to add media from the Media Storage panel directly to that bin.
– To move media from one bin to another: Drag one or more selected clips from their current
location in the Media Pool into that bin. Multiple clips in the Media Pool can be selected by Shift-
clicking or Command-clicking them, or by dragging a bounding box over a group of clips. You can
also drag one bin into another one.
– To delete a bin: Select the bin you want to delete, and press the Backspace or Delete key. Or,
right-click a bin and choose Delete Bin. Deleting a bin with nested bins inside of it results in that
entire set of bins being deleted.
– To sort bins: Right-click on any bin, and choose an option from the Sort By submenu. You can
choose from Name, Date Created, Date Modified, and User Sort.
– To reorganize bins manually: Right-click anywhere within the Bin list, and choose Sort By
> User Sort. Then, drag bins up or down in the Bin list to put them into the order you want.
An orange dividing line shows where dragged bins will be placed when you drop them and helps
you see when a bin you’re dragging will become nested within another bin or not. The User Sort
order is saved even when you change to another sort order, and selecting User Sort again results
in your custom sort order being recalled.
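Bin housekeeping can be scripted as well. This sketch creates a nested bin structure and moves clips into it using the documented AddSubFolder(), MoveClips(), and SetCurrentFolder() calls; the bin names and the "A001" name match are only examples.

    import DaVinciResolveScript as dvr

    resolve = dvr.scriptapp("Resolve")
    media_pool = resolve.GetProjectManager().GetCurrentProject().GetMediaPool()
    root = media_pool.GetRootFolder()

    # Create a bin (and a nested bin inside it), then move any clip in the
    # root bin whose name starts with "A001" into the nested bin.
    dailies = media_pool.AddSubFolder(root, "Dailies")
    day_01 = media_pool.AddSubFolder(dailies, "Day 01")

    a_cam = [c for c in root.GetClipList() if c.GetName().startswith("A001")]
    if a_cam:
        media_pool.MoveClips(a_cam, day_01)
    media_pool.SetCurrentFolder(day_01)  # make it the active bin, like clicking it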
Import and Export DaVinci Resolve
Project Bins (.drb)
You can import/export specific bins from one DaVinci Resolve project to another, allowing you to pass
bins quickly between projects and workstations that have access to the same media. All Metadata,
In/Out points, Timelines, etc. are transferred along with the clips in the bin, but none of the actual
media files are included.
To export bins from the Media Pool:
1 Select one or more bins in the Media Pool.
2 Right-click the selection and choose “Export Bin,” or choose File > Export > Export Bin.
3 Choose where to save the DaVinci Resolve Bin file (.drb) in the file system dialog, and click Save.
To import bins into the Media Pool:
1 Right-click in the Media Pool and choose “Import Bin,” or choose File > Import > Import Bin.
2 Do one of the following:
– Choose a DaVinci Resolve Bin file (.drb) from the file system dialog.
– Double click the .drb file in your file system.
The bin or bins will appear in the Media Pool. Any bins imported this way will have the word “import”
appended to their name, to avoid duplicate names. If you import a bin that contains clips that were
already in the Media Pool, the potentially duplicate clips are excluded from the import and instead
relinked to the media referenced by your project. This keeps your Media Pool tidy. However, if the bin
or bins have been moved to another computer, you may have to relink offline media.
Import and Export DaVinci Resolve
Timelines (.drt)
You can export and import individual timelines from one DaVinci Resolve project into another
previously existing DaVinci Resolve project, allowing you to pass timelines quickly between projects
and workstations, without creating additional imported project files. Only the timeline and its
associated clip information are exported; none of the actual media files are included.
To export a timeline from the Media Pool:
1 Select a timeline from the Media Pool.
2 Choose File > Export > Export AAF, XML, DRT (Shift-Command-O).
3 Choose “DaVinci Resolve Timeline Files (*.drt)” from the format options popup
in the file system dialog.
4 Choose where to save the DaVinci Resolve Timeline file (.drt) in the file system dialog,
and click Save.
To import a timeline into the Media Pool:
1 Choose a bin in the Media Pool in which you want the imported timeline to be saved.
2 Do one of the following:
– Choose File > Import Timeline > Import AAF, XML, DRT (Shift-Command-I), then Select a
DaVinci Resolve Timeline file (.drt) from the file system dialog, and click Open.
– Double click the .drt file in your file system.
The timeline will appear in the Media Pool, along with all of the clips associated with it. Any timelines
imported this way will have the word “import” appended to their name, to avoid duplicate names.
The imported timeline will be automatically conformed to corresponding media that’s already in the
Media Pool. However, if the timeline has been moved to another computer, you may have to reimport
or relink missing or offline media to bring the imported timeline fully online.
NOTE: Only a single timeline can be imported and exported at a time using this method.
To import or export multiple timelines, use the Import/Export Bin function described above.
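Both halves of this exchange can be scripted. The sketch below exports the current timeline as a .drt and then, on the receiving system, imports a .drt into the current bin. The Export() call takes resolve.EXPORT_DRT as its type; passing resolve.EXPORT_NONE as the subtype is an assumption that may be unnecessary on some versions, and the paths are placeholders.

    import DaVinciResolveScript as dvr

    resolve = dvr.scriptapp("Resolve")
    project = resolve.GetProjectManager().GetCurrentProject()
    media_pool = project.GetMediaPool()

    # Export the current timeline as a .drt file...
    timeline = project.GetCurrentTimeline()
    timeline.Export("/Volumes/Media/Exchange/edit_v12.drt",  # placeholder path
                    resolve.EXPORT_DRT, resolve.EXPORT_NONE)

    # ...and, in the receiving project, import a .drt into the current bin.
    imported = media_pool.ImportTimelineFromFile("/Volumes/Media/Exchange/edit_v12.drt")
    if imported:
        print("Imported:", imported.GetName())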
Sharing Media Among Projects
Using Power Bins
Power Bins provide a way of importing and organizing media that you want to be available to all
projects in DaVinci Resolve. Power Bins reside in a separate area of the Media Pool, with resizable
dividers separating them from both the ordinary bins and Smart Bins areas. Power Bins are
hierarchical, just like regular bins, and you can nest as many as you like, one inside another.
The Power Bins area of the Bin list
Like regular bins, Power Bins must be manually created by right-clicking within the Power Bins area
and choosing Add Bin. The difference is that whatever clips you import into Power Bins are shared
among all projects in a single-user installation, or all projects belonging to a particular user in a
multi-user installation. In this way, they’re similar to Power Grades in the gallery of the Color page. This
makes Power Bins ideal for storing shared media that’s re-used often, such as stock video, sound
effects, stills, and things like company slates and network graphics and animations that go into every
show of a series.
Power Bins are created and used like any other bin, using the procedures described previously.
To show or hide the Power Bin area of the Bin list:
– Choose View > Show Power Bins to toggle the visibility of all power bins on and off.
Automated Organization Using Smart Bins
A completely automated way of organizing media in the Media Pool is to use Smart Bins that are either
automatically or manually created, in order to collect all clips and timelines in the Media Pool that have
commonalities based on any of the intrinsic or user-editable metadata that’s available in the Metadata
Editor and Media Pool. If you’re familiar with the Color page, Smart Bins work much the same way as
Smart Filters, and they’re created and edited using much the same procedures. For more information
about Smart Filters, see Chapter 123, “Using the Color Page.”
Smart Bins are incredibly flexible. Using one or more metadata-based rules, they can be as simple or
sophisticated as you require. They’re even capable of using multiple groups of multiple rules for
situations where you need to gather clips that match all of one set of criteria, but only one of a second
set of criteria. In this way, you can use Smart Bins to solve a wide variety of organizational needs as
you edit your program.
Smart Bins Are Only As Good As Your Metadata
It’s important to point out, however, that while much intrinsic metadata is available for every clip in
DaVinci Resolve automatically (clip properties such as frame rate, frame size, codec, file name, and so
on), the more time you take entering extra metadata in the Metadata Editor to prepare your project for
editing and grading, the more powerful Smart Bins can be in helping you to sift and sort through the
contents of a program you’re grading. Examples of metadata entry that will guarantee immediate
benefits from Smart Bins include the entry of scene, shot, and take information, keywords identifying
key descriptors (day and night, interior and exterior, framing, and so on), and using Face Detection to
assign character names. These categories of metadata can be used for the automatic creation of
Smart Bins, but they can also be used in combination when manually creating Smart Bins that are even
more specific.
Imagine being able to gather all the clips in a particular scene, find all the interview clips for a particular
subject, or find all the edited timelines corresponding to a particular name, all by simply selecting a
Smart Bin that automatically examines the current contents of the Media Pool. If you or an assistant can
take the time to enter metadata for the source material in a project that identifies these characteristics,
you’ll be able to work even more quickly to find the clips you need for any given situation.
Smart Bins Update Their Contents Dynamically
Smart Bins are always dynamically up to date and include whatever new media is added to the Media
Pool. This makes it easy to stay organized, even when working on projects where new media is being
added to the Media Pool every day, such as when editing during a shoot. By using metadata entered
either in-camera, by the DIT or media wrangler managing ingest, or by an Assistant Editor who’s
working with you, Smart Bins will automatically include all clips in the Media Pool that have matching
criteria, whether they were added a month ago or a minute ago.
Automatic Smart Bin Creation
The process of adding metadata to your clips can be used for the automatic creation of sets of “Smart
Categories,” which are Smart Bins that are generated and organized by the presence of specific
categories of metadata and appear in the Smart Bins section of the Media Pool sidebar. To enable or
disable this behavior, open the Editing panel of the User Preferences, and use the checkboxes in the
Automatic Smart Bins group to choose which metadata automatically creates Smart Bins.
Preferences governing what metadata can automatically create Smart Bins
Metadata capable of creating Smart Bins include:
– Clip Keywords
– Marker Keywords
– People Keywords (added via People Detection)
– Scene metadata
– Shot metadata
These categories are hierarchically organized, with each category closed by default to save space.
Click the disclosure triangle of any category to reveal all Keyword, People, Scene, or Shot Smart Bins
that are available in the current project. Selecting the Smart Category’s top bin lets you see every clip
referenced by every Smart Bin inside of it, whereas selecting individual Smart Bins shows you only the
clips referenced by that Smart Bin.
A Smart Category seen in the Smart
Bins area of the Media Pool sidebar
Manual Smart Bin Creation
It’s easy to manually create Smart Bins with customized rules to filter very specific collections of media
and timelines that you want to use.
To show or hide the Smart Bin area of the Bin list:
– Choose View > Show Smart Bins to toggle the visibility of all Smart Bins on and off.
To create a Smart Bin:
1 If necessary, open the Bin list, choose View > Show Smart Bins, then right-click anywhere in the
background of the Smart Bin area of the Bin list, and choose Create Smart Bin.
2 In the Create Smart Bin dialog, enter a name for the filter, and use the following controls to create
one or more filter criteria (you can have as many filter criteria as you like):
The Create Smart Bin dialog
– Show in all projects checkbox: Lets you create a persistent Smart Bin that appears in all
projects in your database. Smart Bins created this way will be found in the User Smart Bins
folder inside every project’s Smart Bin area in the Media Pool.
– Match options: For multi-criteria filtering, choosing All requires that every criterion you
specify is met for a clip to be filtered. Choosing Any means that if only one out of several criteria
is met, that clip will be filtered.
– Filter criteria enable checkbox: Lets you enable or disable any criteria without
having to delete it.
– Metadata category drop-down: Lets you choose which category of metadata you want to
select a criterion from. Each category of metadata that’s available in the Metadata Editor is
available from this drop-down menu. Additionally, Color Timeline Properties (containing many
properties unique to the Color page timeline) and Media Pool Properties (containing every
column in the Media Pool) provide access to additional metadata you can use for filtering.
– Metadata type drop-down: For choosing which exact type of metadata to use, of the options
available in the selected metadata category.
– Metadata criteria drop-down: Lets you choose the criteria by which to filter, depending on
the metadata you’ve selected. Options include “true/false,” integer ranges, date ranges, string
searches, flag and marker colors, et cetera.
– Add filter criteria button: Lets you add additional criteria to create multi-criteria filters. You
could use multiple criteria to, for example, find all exterior clips that also contain the keyword
“Sunset” and that aren’t closeups, in order to find all the exterior long and medium shots in sunset
lighting. Additionally, if you Option-click this button, you can add a nested match option in order
to create even more sophisticated filters, such as when the filter must match all of one set of
criteria, and any of another set of criteria.
A complicated Smart Bin with multiple criteria and a second match option setting
As you’re editing the filter criteria, the thumbnail timeline automatically updates to show you how
the Smart Bin you’re creating is working.
3 When you’re done editing the filter criteria, click Create Smart Bin. The resulting Smart Bin appears
in the Smart Bin area of the Bin list, at the left of the Media Pool’s browser area.
Once you’ve created a Smart Bin, it appears in the lower half of the Media Pool’s Bin list, alongside
every other Smart Bin in that project. This keeps them organized, separate from the manually created
bins shown above.
All Smart Bins appear together at the
bottom of the Media Pool’s Bin list
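To make the All/Any logic concrete, this sketch approximates a two-rule Smart Bin in Python using the scripting API's GetMetadata() call. The field names "Keywords" and "Scene" follow the Metadata Editor, but treat them as assumptions and check the keys your clips actually report.

    import DaVinciResolveScript as dvr

    resolve = dvr.scriptapp("Resolve")
    media_pool = resolve.GetProjectManager().GetCurrentProject().GetMediaPool()

    # Approximate a Smart Bin rule like:
    #   Match All:  Keywords contains "Exterior"  AND  Scene is "12"
    def matches(clip):
        rules = [
            "exterior" in (clip.GetMetadata("Keywords") or "").lower(),
            (clip.GetMetadata("Scene") or "") == "12",
        ]
        return all(rules)  # use any(rules) for a "Match Any" bin

    hits = [c for c in media_pool.GetCurrentFolder().GetClipList() if matches(c)]
    print([c.GetName() for c in hits])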
Once you’ve created a Smart Bin, you can re-edit it whenever the situation requires.
Methods of modifying existing Smart Bins:
– To rename a Smart Bin: Right-click the Smart Bin you want to rename, choose Rename from the
contextual menu, enter a new name, and press Return.
– To edit a Smart Bin: Double-click the Smart Bin, then edit the filter criteria, and click OK.
– To duplicate a Smart Bin: Right-click any Smart Bin and choose Duplicate from the contextual
menu. This is a good way to create multiple variations of a Smart Bin that you created with
complex rules, where you need to create variations by modifying those rules without needing to
reinvent the wheel each time.
– To delete a Smart Bin: Right-click the Smart Bin you want to delete, choose Delete Smart Bin from
the contextual menu, and click Delete in the warning dialog. Deleting a Smart Bin does not delete
any gathered media associated with that bin.
Smart Bins Work Better With Metadata
Keep in mind that the more metadata you associate with each clip, the more methods you
have at your disposal for creating custom Smart Bins (for editing) and Smart Filters (for grading)
with which to zero in on the clips you need for any given situation. This will not only make it
easier to find what you need, but it’ll help you to work faster. At the very least, it would be
valuable for you to use the Metadata Editor to add information to each clip such as a
Description, Shot and Scene designations, take information, and possibly some useful
keywords such as character names, shot framing, interior or exterior keywords, and so on.
For example, if you’ve entered enough metadata, then you can create multi-criteria Smart Bins
or Smart Filters that let you find the equivalent of “every close-up of Sally inside the diner,” or
“every long shot of Antonio outside in the parking lot.” In a documentary, you could easily
isolate “every interview shot of Louis from camera 1,” or “every B-roll clip with Robyn.” All of
this will help you find media faster for editing, or quickly isolate similar clips that you need to
match together for grading.
For more information about using the Metadata Editor, see Chapter 19, “Using Clip Metadata.”
Organizing Smart Bins
Manually created Smart Bins can be organized into Folders and Sub-Folders for better sidebar
management, just like regular bins.
Smart Bins organized into folders
To add a Smart Bin folder:
Right-click in the Smart Bins area and choose Add Folder from the contextual menu to create folders
that you can drag Smart Bins into. Each folder has a disclosure triangle, so you can show or hide
its contents.
Another benefit of Folders is that when you select a Folder, you can see the full contents of all Smart
Bins inside of it in the Media Pool browsing area. Selecting any one Smart Bin then restricts the Media
Pool to showing only the media referenced by that Smart Bin.
Folders can be renamed, removed, opened as a new window, or sorted along with all other Smart Bins
by right-clicking them and using commands in the contextual menu.
Duplicating Clips in the Media Pool
You can duplicate clips in order to create an instance of that media that’s treated as a completely new
source clip, entirely separate from the original instance of that clip that was imported into
DaVinci Resolve. The duplicate is capable of storing individualized metadata and markers that are
completely distinct from the original clip that was imported into your project.
To duplicate one or more clips:
1 Select one or more clips to duplicate.
2 Do one of the following:
– Choose Edit > Duplicate Clip
– Hold the Option key down while dragging one or more selected clips to another bin
– Right-click a clip in the Media Pool, and choose Duplicate Clip from the Contextual Menu
Adding Clips From the Timeline to the Media Pool
You can also drag one or more clips from the Timeline back into the Media Pool to create a duplicate.
As with duplicating clips in the Media Pool, each duplicate is created as a new source clip that’s
entirely separate from the original instance of that clip that was imported into DaVinci Resolve and is
capable of storing individualized metadata and markers that are completely distinct from the original
clip that was imported into your project.
For example, the original clip in the Timeline remains conformed to the original clip that was first
imported into the Media Pool; deleting the original clip from the Media Pool will make that clip “non-
conformed” in the Timeline, while the duplicate you just created remains linked and available. If you’re
in this situation, you can always turn Conform Lock Enabled off for that clip in the Timeline and
reconform the Timeline Clip to the duplicate you just created, but that’s an extra step because the
duplicate clip is considered by DaVinci Resolve to be a whole new piece of media that just happens to
share the same clip details.
This may seem strange, but there are a variety of finishing workflows that use this capability, so it’s
good to know about.
Duplicating Timelines
Timelines can be duplicated for a variety of reasons: to create a backup of a timeline at a specific date,
to create a variation of an edit, or to create separately graded versions.
To duplicate a Timeline:
– Select a Timeline in the Media Pool, and choose Edit > Duplicate Timeline.
– Press Command-4 to give focus to the Timeline, and choose Edit > Duplicate Timeline.
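Duplication can also be scripted. Recent scripting READMEs (DaVinci Resolve 18 and later) list a DuplicateTimeline() call on the Timeline object; treat it as version-dependent and fall back to the menu command if it isn't available.

    from datetime import date
    import DaVinciResolveScript as dvr

    resolve = dvr.scriptapp("Resolve")
    project = resolve.GetProjectManager().GetCurrentProject()

    # Duplicate the current timeline under a dated backup name.
    timeline = project.GetCurrentTimeline()
    backup = timeline.DuplicateTimeline(f"{timeline.GetName()} backup {date.today()}")
    if backup:
        print("Created:", backup.GetName())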
Choosing How to Display Bins
Once you’ve created a bin structure for your project, you can customize how your bins are displayed,
depending on how you like to work.
Showing Bins in Separate Windows
If you right-click a bin in the Bin list, you can choose “Open As New Window” to open that bin into its
own window. That window is basically its own Media Pool, complete with its own Bin list, Power Bins
and Smart Bins lists, and display controls.
Multiple Media Pool bins opened as new windows
When multiple Media Pool windows are open, the Workspace > Media Pool Windows submenu lets
you bring a floating Media Pool window into focus when you have one or more open and hidden.
This is most useful when you have two displays connected to your workstation, as you can drag these
separate bins to the second display while DaVinci Resolve is in single screen mode. If you hide the Bin
list, not only do you get more room for clips, but you also prevent accidentally switching bins when
you only want to view a particular bin’s contents in that window.
Using the Media Pool in Thumbnail View
If you work in Thumbnail view using the controls at the top right of the Media Pool, you have the option
to resize the thumbnails to make them easier to see, and you can move the mouse pointer over each
clip to hover scrub through its contents. Clicking any clip to select it displays it in the Media page
Viewer. Whichever clip is currently selected is also output to video for monitoring.
In Thumbnail view, you can use the Sort Order drop-down, at the top right of the Media Pool, between
the Icon Size slider and the Icon/List view buttons, to choose how clips are sorted. There are fourteen
options: File Name, Reel Name, Clip Name, Start TC, Duration, Type, FPS, Audio Ch, Flags, Date
Modified, Date Created, Shot, Scene, and Take.
Working With Columns in List View
If you work in List view using the controls at the top right of the Media Pool, you gain additional
organizational control by exposing columns that show the metadata that each clip contains, prior to
media being added to your timeline. You can use these columns to help organize your media.
Methods of customizing metadata columns in List view:
– To show or hide columns: Right-click at the top of any column in the Media Pool to reveal the
column list, and while the column list is open, click the checkboxes of any columns you want to
show or hide. Unchecked columns cannot be seen. When you’re finished, click anywhere else in
the Media Pool to dismiss the column list.
– To rearrange column order: Drag any column header to the left or right to rearrange
the column order.
– To resize any column: Drag the border between any two columns to the right or left to narrow or
widen that column.
– To sort by any column: Click the column header you want to sort with. Each additional time you
click, the same header toggles that column between ascending and descending sort order.
Once you’ve customized a column layout that works for your particular purpose, you can save it for
future recall.
Methods of saving and using custom column layouts:
– To create a column layout: Show, hide, resize, and rearrange the columns you need for a
particular task, then right-click any column header in the Media Pool, and choose Create Column
Layout. Enter a name in the Create Column Layout dialog, and click OK.
– To recall a column layout: Right-click any column header in the Media Pool, choose the name of
the column layout you want to use from the contextual menu, and choose Load from that item’s
submenu. All custom column layouts appear at the top of the list.
– To edit a column layout: Load the column layout you want to edit, make whatever changes you
need to, then right-click any column header in the Media Pool, choose the name of the column
layout you just edited from the contextual menu, and choose Update from that item’s submenu.
– To delete a column layout: Right-click any column header in the Media Pool, choose the name
of the column layout you want to delete from the contextual menu, and choose Delete from that
item’s submenu.
While the available columns of metadata correspond to those fields shown in the Metadata Editor, the
available columns in the Media Pool of the Media and Edit pages are a subset of the total amount of
metadata that’s available, although they represent the most commonly used metadata you’ll find
yourself referring to when editing and finishing. The available columns in List view include:
File Name: The name of the file on disk that clip is linked to.
Clip Name: Editing the Clip Name lets you change the name with which clips appear throughout
DaVinci Resolve when View > Use Clip Name for Clip Titles is enabled. By default, the clip name mirrors
the source clip’s file name. When editing the clip name in the List view of the Media Pool, you can use
“metadata variables,” added as graphical tags, to reference clip metadata.
For example, you could add the corresponding metadata variable tags %scene_%shot_%take and
that clip would display “12_A_3” as its name if its Scene, Shot, and Take metadata were “12,” “A,” and “3.”
The clip name can also be edited in the Clip Attributes window. For more information on the use
of variables, as well as a list of all variables that are available in DaVinci Resolve, see Chapter 16,
“Using Variables and Keywords.”
Angle: An editable field to contain the angle of the media in a multi-camera shoot.
Audio Bit Depth: The bit depth of any audio channels in the media file.
Audio Ch: The total number of audio tracks in the media file.
Audio Codec: The specific codec used by the audio portion of the media file.
Audio Offset: Lists the audio offset, in frames, for clips that have been synchronized to separately
recorded audio. This parameter is editable in the Media Pool.
Bit Depth: The bit depth of the media file.
Camera #: The number assigned to a specific camera.
Clip Color: The current color assigned to that clip.
Comments: A user-editable field for entering information about that clip.
Data Level: The data level setting for the media file.
Date Created: The date the media file was created.
Date Modified: The last date the media file was modified.
Description: A user-editable field for entering information about that clip.
Duration: The total duration of the clip, in timecode.
End: The last frame number of the media file.
End TC: The timecode value of the last frame in the media file.
FPS: The frame rate of the media file.
File Path: The file path where that media file is located on disk.
Flags: Which flags, if any, have been added to a media file.
Format: The image format used by that clip, such as QuickTime, MXF, WAVE, and so on.
Frame/Field: Whether that media file is progressive or interlaced.
Frames: The total duration, in frames.
Good Take: An editable field to contain the circled state of media, relative to the
script supervisor’s notes.
H-FLIP: Whether that media file is horizontally flipped in DaVinci Resolve.
HDRX: Only displayed for R3D media, indicates whether or not it’s HDRX media.
IDT: If ACES color science is selected in the Color Management panel of the Project Settings, the IDT
used by that clip is listed here.
In: The timecode value of the In point, if any, that’s stored for that clip.
Input Color Space: If Resolve Color Management is selected in the “Color Science” menu of the
Color Management panel of the Project Settings, then this column will show the Input Color Space that
has been assigned to each clip. By default, all clips inherit the Input Color Space setting that’s been
selected in the Color Management panel of the Project Settings.
Input LUT: Which input Lookup table has been assigned, if any.
Input Sizing Preset: The currently selected Input Format Preset, if there is one.
Keyword: A user-editable field for entering searchable keywords pertaining to that clip. Only shows clip
keywords, not marker keywords.
Offline Reference: Lists the offline reference video that has been assigned to a given timeline.
Optimized Media: Populated with the resolution of whatever optimized media you’ve created (Original,
Half, Quarter, and so on). Clips that have not been optimized appear with “None.”
Out: The timecode value of the Out point, if any, that’s stored for that clip.
PAR: The pixel aspect ratio, if assigned.
Reel Name: The reel name of that clip. Dynamically generated by the “Assist using reel names from the”
setting in the General Options panel of the Project Settings.
Resolution: The frame size of the media file.
Roll/Card: An editable field to contain the roll number of media that was scanned from film.
S3D Sync: Shows a frame count when you slip an eye to fix non-synced timecode using
the “Slip Opposite Eye One Frame Left/Right” commands. This parameter is editable in the Media Pool.
Sample Rate: The sample rate of the media file’s audio, if there is any.
Scene: An editable field to contain the scene number of the media, relative to the script.
Shot: An editable field to contain the shot number of the media, relative to the scene.
Slate TC: The Slate timecode track used to sync audio with video.
Start: The first frame number of the media file.
Start KeyKode: The starting KeyKode value of a scanned negative.
Start TC: The timecode value of the first frame in the media file.
Take: An editable field to contain the take number of the media, relative to the shot.
Type: The type of item, such as Video+Audio, Video, Audio, Timeline, Multicam, Still, and so on.
Usage: After a timeline has been created by importing an AAF, EDL, or XML project, the Usage column
automatically reflects how many times each clip is used in the project. This makes it easy to identify clips
that aren’t in use, and which can be removed from the Media Pool.
V-FLIP: Whether that media file is vertically flipped in DaVinci Resolve.
Video Codec: The specific codec used by the video portion of the media file.
Editable Description and Comments Columns
When the Description and Comments columns are displayed by the Media Pool in List view, you can
edit their contents by clicking once within a clip’s Description or Comments field, waiting a moment,
and then clicking a second time to select that field.
Using Metadata View in the Media Pool
In the Metadata View mode, each clip is represented by its own card with a thumbnail and basic clip
metadata information visible. This view is designed to have more metadata information than a
thumbnail but more targeted information than the List view. This feature, combined with its sort modes,
is a powerful way to organize and reorganize your clips in the Media Pool.
The metadata fields of the Metadata view (from the top down):
– Thumbnail: A scrubbable thumbnail image of your clip.
– Row 1: A main description field that is variable and determined by the sort order selection.
– Row 2: Start Timecode, Date Created, Camera #.
– Row 3: Scene, Shot, Take.
– Row 4: Clip Name, Comment.
The Metadata View icon view (highlighted icon in the top bar), showing
the thumbnail being scrubbed next to the clip’s metadata
The strength of the Metadata view is the automatic clustering of your clips based on the sort order you
choose in the Media Pool Sort By menu, at the very upper-right corner of the Media Pool.
The Media Sort options
Each different sort mode changes the main description field on the card, as well as re-arranging the
Media Pool to reflect the selected organization method.
The sort modes available in the Metadata view are:
– Bin: This mode clusters the clips by bin, changes the main description field to clip name, and
orders the list by timecode.
– Timecode: This mode clusters the clips by creation date, changes the main description field to
creation date and start timecode, and orders the list by timecode.
– Camera: This mode clusters the clips by camera #, changes the main description field to
camera # and start timecode, and orders the list by timecode.
– Date Time: This mode clusters the clips by day, changes the main description field to creation date
and file name, and orders the list by timecode.
– Clip Name: This mode clusters the clips by the first letter of the clip name in alphabetical order,
changes the main description field to clip name, and orders the list by timecode.
– Scene, Shot: This mode clusters the clips by scene, changes the main description field to scene-
shot-take, and orders the list by scene-shot-take.
– Clip Color: This mode clusters the clips by clip color name, changes the main description field to
creation date and start timecode, and orders the list by timecode.
– Date Modified: This mode clusters the clips by day, changes the main description field to creation
date and file name, and orders the list by the last time the clip was modified by the OS filesystem.
– Date Imported: This mode clusters the clips by day, changes the main description field to creation
date and file name, and orders the list by the date the clip was added to the Media Pool.
– Ascending: Orders the Media Pool from lowest numerical value to highest, and
alphabetically from A to Z.
– Descending: Orders the Media Pool from highest numerical value to lowest, and alphabetically
from Z to A.
The Metadata view with clips sorted by Scene-Shot-Take
The Metadata view with the same clips sorted by Camera
Finding Clips, Timelines, and Media
There are several ways to locate different items in the Media Pool and Media Storage, be they clips,
timelines, or media on disk.
Finding Clips and/or Timelines Within the Media Pool
Clicking the magnifying glass button at the upper right-hand corner of the Media Pool exposes the
Search Options, which by default can be used to locate one or more clips in the currently selected bin
or bins, based on the metadata that’s selected in the Filter By drop-down menu to the left of it.
The Search Options drop-down menu (as
seen in the Edit page Media Pool) lets you
choose what metadata you’re searching
A drop-down menu right next to the magnifying glass icon lets you choose the scope of your
search. This lets you choose whether a search looks through all bins in the current project for the
specified criteria, or just looks at the currently open bin, or currently selected bins in the Bin list, in
cases where you’re looking for an instance of media in a specific hierarchical location of the
Media Pool.
The drop-down menu next to the magnifying
glass icon lets you set the bin search parameters
To find a clip in the Media Pool:
1 (Optional) Use the drop-down menu next to the Search button, which exposes the Search and Filter
By controls in the Media Pool, to choose whether to search All Bins or Selected Bins.
2 (Optional) If you’re searching Selected Bins, then open the Bin list and select one or more bins in
which to search.
3 (Optional) Choose a criterion from the Search Options drop-down menu at the top right of the Media
Pool; you can choose All Fields to do a simultaneous search of every metadata column in the
Media Pool at once, or you can choose a specific criterion to restrict your search.
4 Type a search term in the Search field. As soon as you start typing, all clips that don’t match the
search criteria are temporarily hidden. To show all clips in the Media Pool again, click the cancel
button at the right of the search field.
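The same kind of search can be done in a script. This sketch recurses through all bins (the equivalent of the All Bins scope) and returns clips whose clip name or file name contains a search term, using GetClipList(), GetSubFolderList(), and GetClipProperty(); the "File Name" property key mirrors the List view column and should be verified on your version.

    import DaVinciResolveScript as dvr

    resolve = dvr.scriptapp("Resolve")
    media_pool = resolve.GetProjectManager().GetCurrentProject().GetMediaPool()

    def find_clips(folder, term):
        # Recurse through a bin and its sub-bins (the "All Bins" scope),
        # returning clips whose name or file name contains the search term.
        term = term.lower()
        hits = [c for c in folder.GetClipList()
                if term in c.GetName().lower()
                or term in (c.GetClipProperty("File Name") or "").lower()]
        for sub in folder.GetSubFolderList():
            hits += find_clips(sub, term)
        return hits

    for clip in find_clips(media_pool.GetRootFolder(), "interview"):
        print(clip.GetName())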
Finding Synced Audio
If you’ve synced dual-system audio and video clips together in DaVinci Resolve, you can find the audio
clip that a video clip has been synced to using the following procedure.
To find the audio clip that a video clip has been synced to:
– Show the Media Pool in List view, and reference the file name in the Synced Audio column.
– Right-click a video clip that’s been synced to audio, and choose “Reveal synced audio in
Media Pool” from the contextual menu. The bin holding the synced audio clip is opened and that
clip is selected.
Finding Timeline Clips in the Media Pool
If you have a clip in a timeline and you want to find the corresponding clip that it’s conformed to in the
Media Pool, you can right-click that clip, and choose Find in Media Pool from the contextual menu.
Finding Timelines in the Media Pool
If you’d like to find the currently open timeline’s location in the Media Pool, you can choose Timeline >
Find Current Timeline in Media Pool.
Finding Media in the Media Storage Panel and Finder
If you find yourself needing to determine the location of a clip’s source media file on disk, you can
right-click an item in the Media Pool and choose Reveal in Media Storage panel. The Media Storage
panel automatically opens to the folder containing the media file you’ve selected, with that media file
selected in the browser to the right.
Another feature that’s only available for macOS systems is the ability to right-click an item in the
Media Pool and choose Reveal In Finder. A file system window opens up, revealing the media file that
clip is linked to.
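A script can make the same round trip. This sketch reads a clip's "File Path" property and asks the Media Storage panel to reveal it; RevealInStorage() appears in recent scripting READMEs, so treat it as an assumption if you are on an older release.

    import DaVinciResolveScript as dvr

    resolve = dvr.scriptapp("Resolve")
    media_pool = resolve.GetProjectManager().GetCurrentProject().GetMediaPool()
    storage = resolve.GetMediaStorage()

    # Look up the source file path of the first clip in the current bin and
    # reveal it in the Media Storage panel (assumes the bin isn't empty).
    clips = media_pool.GetCurrentFolder().GetClipList()
    if clips:
        path = clips[0].GetClipProperty("File Path")
        print("Source media:", path)
        storage.RevealInStorage(path)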
Going Immediately to a File System Location
in the Media Browser
Conversely, if you drag a folder from the macOS Finder into the Media Storage panel, the Media
Storage panel will immediately update to show the location of that folder.
Tracking Media Usage
As clips are added to timelines, two mechanisms come into play for keeping track of which clips are
used in which timelines.
Thumbnail Clip Usage Indicators
Whenever you open a timeline, all thumbnails in the Media Pool automatically update to show
highlighted usage bars to let you know which parts of that clip are used in that timeline.
Two colored highlights at the bottom of the
thumbnail indicate which parts of a clip are
used by the currently open timeline
If you right-click on a thumbnail that shows usage, a Usage submenu shows you a list of each instance
of that clip in the currently open timeline. Choosing an instance from this list jumps the playhead to that
clip in the Timeline.
List View Clip Usage Column
Exposing the Usage column when the Media Pool is in List view lets you see a value for the number of
times a clip appears in all timelines of the current project. This usage column is now automatically
updated; no user intervention is required.
A Usage column shows how many times a
clip is used in every timeline, after analysis
NOTE: The usage column increments for each clip item that appears in the Timeline. This
means that if a clip consists of one video item and one audio item linked together, the usage
column will show the number 2.
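The note above also describes what a script would count. This sketch tallies, per Media Pool clip, every timeline item across all timelines in the project using GetTimelineByIndex(), GetItemListInTrack(), and GetMediaPoolItem(); linked video and audio items each add one, matching the Usage column's behavior.

    import DaVinciResolveScript as dvr

    resolve = dvr.scriptapp("Resolve")
    project = resolve.GetProjectManager().GetCurrentProject()

    # Count timeline items per source clip across all timelines in the project.
    usage = {}
    for i in range(1, project.GetTimelineCount() + 1):
        timeline = project.GetTimelineByIndex(i)
        for track_type in ("video", "audio"):
            for t in range(1, timeline.GetTrackCount(track_type) + 1):
                for item in timeline.GetItemListInTrack(track_type, t) or []:
                    source = item.GetMediaPoolItem()  # None for generators, titles, etc.
                    if source:
                        name = source.GetName()
                        usage[name] = usage.get(name, 0) + 1

    for name, count in sorted(usage.items()):
        print(f"{count:4d}  {name}")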
Relinking Media Simply
DaVinci Resolve keeps track of the relationship between clips in your project and their corresponding
source media on disk. If, for whatever reason, source media that links to clips in your project becomes
unavailable, DaVinci Resolve has several different methods of relinking those clips in the Media Pool.
This section summarizes the methods of relinking. For more comprehensive information on conforming
projects and relinking media, see Chapter 56, “Conforming and Relinking Clips.”
Relink Media
If DaVinci Resolve fails to find your media, a Relink Media icon in the Cut and Edit page’s Media Pool
will highlight orange.
The Relink Media icon that
appears for unlinked media
Clicking this icon opens a dialog box showing the volumes that the missing files initially belonged to.
You can then use this information to track down the media on your file system, find that specific hard
drive, or ask a client if they provided you the media from this volume. Clicking the Locate button lets
you re-connect the missing clips to a new file location of your choosing. If the quick search initiated by
the Locate button doesn’t find media that you know is there, you can initiate an exhaustive deep
disk search for the media by clicking the Disk Search button.
The Relink Media dialog showing the volume
names where the missing clips originated
Relink Selected Clips
The easiest method of relinking clips in your project that have gone offline is to use the appropriately
named “Relink selected clips” command. This is the most flexible method of relinking clips in your
project with clips in a file system directory of your choice, using file name and timecode as the primary
criteria for drawing a correspondence between each clip and the corresponding media file on disk.
When you relink clips this way, the original file path in DaVinci Resolve is ignored, so this is a good
command to use to relink to media that’s been reorganized on disk.
To relink selected clips:
1 Do one of the following:
– Select one or more clips in the Media Pool browser that you want to relink, then right-click
one of the selected clips or the selected bin, and choose “Relink Selected Clips” from the
contextual menu.
– Select a bin in the Media Pool Bin list that contains clips you want to relink, then right-click one
of the selected clips or the selected bin, and choose “Relink Clips for Selected Bin” from the
contextual menu.
2 When the Relink File dialog opens, choose a directory in which to look for the files you want to
relink to, and click OK. DaVinci Resolve attempts to find every clip with a matching file name in the
subdirectories of the directory you chose, using the original file paths of the clips being relinked to
do this as quickly as possible. By first looking for the clips in the directories they were originally in,
relinking can be quite fast.
3 If there are any clips that couldn’t be found using the method in step 2, you’re prompted with
the option to do a “deep search” by a second dialog. If you click Yes, then DaVinci Resolve will
look for each clip inside every subdirectory of the directory you selected in step 2. This may take
significantly longer, but it should be completely successful so long as the media that’s required is
within the selected directory structure.
4 If there are still other clips that couldn’t be found, you’re prompted to either choose another
directory altogether to continue searching, or quit.
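Relinking can also be driven from a script with the Media Pool's RelinkClips() call. As far as the documented API goes there is no direct "is this clip offline" query, so this sketch simply points every clip in the current bin at a new parent directory and lets Resolve match file names, much like the manual command; the path is a placeholder.

    import DaVinciResolveScript as dvr

    resolve = dvr.scriptapp("Resolve")
    media_pool = resolve.GetProjectManager().GetCurrentProject().GetMediaPool()

    # Point every clip in the currently selected bin at a new parent directory;
    # Resolve matches clips to files by name.
    clips = media_pool.GetCurrentFolder().GetClipList()
    if clips:
        ok = media_pool.RelinkClips(clips, "/Volumes/NewRAID/Project/Media")  # placeholder path
        print("Relinked" if ok else "Some clips could not be relinked")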
Change Source Folder
If you’ve used your file system to move media that’s associated with a DaVinci Resolve project, but you
haven’t changed the directory structure with which it’s organized, you can use the Change Source
Folder command to quickly relink selected clips in the Media Pool to the new file path of the media on
disk, using the original file paths as a guide. This is a good relinking method to use, if possible, for
projects on a SAN where you don’t want to risk the excessively long search times that could result
from using the Relink command to examine a nested hierarchy of folders in a more flexible way.
To relink your Media Pool clips to a new location:
1 Select one or more clips in the Media Pool, then right-click one of the selected clips, and choose
Change Source Folder from the contextual menu. The Relink Media window appears displaying
the original path for the material, with controls for choosing a new directory.
2 Click the “Browse” button to the right of the Change To field, and then use the file navigation
dialog to find the new location of the media file, select it, and click Open.
3 If you succeeded in finding the appropriate media file, click Change. Otherwise, click Cancel.
Chapter 19
Using Clip Metadata
DaVinci Resolve has powerful tools for viewing, editing, exporting, and importing
metadata associated with each clip in the Media Pool. Once your metadata house is
in order, you can use this metadata in the Edit, Color, and Audio pages to find, sort,
and organize the clips in your project, so you can work faster.
Contents
Editing Clip Metadata 348
Automatically Imported Metadata 348
Using the Metadata Editor 348
Editing Keywords 349
Editing Metadata Using the File Inspector 350
Face Detection to Generate People Keywords 352
Creating Custom Metadata Groups 354
Importing and Exporting Media Pool Metadata 355
Different Ways of Using Clip Metadata 356
Renaming Clips Using Clip Names 356
Switching Between File Names and Clip Names 357
Using Metadata to Define Clip Names 357
Editing Clip Metadata
Whether you’ve imported media in preparation for editing, or you’ve imported a project for grading
that brought its media in automatically, once you’ve added clips to the Media Pool it’s worth taking
the time to review and add metadata to your clips.
At the very least, it would be valuable for you to use the Metadata Editor that’s available in either the
Media page or the Edit page to add information to each clip such as a Description, Shot and Scene
designations, Take information, and possibly some useful keywords such as Character Names, Shot
Framing, Interior or Exterior keywords, and so on. If you’re especially ambitious (or you have a very
responsible assistant), you could go further and add Shoot Day, Camera Type, Audio Notes, and other
valuable information. Much of the metadata that is useful in the day to day work of editing and grading
can be found in the Shot & Scene group, but there are many other potentially useful groups as well
that you should explore.
Keep in mind that the more metadata you associate with each clip, the more methods you have at your
disposal for creating custom Smart Bins (for editing) and Smart Filters (for grading) with which to zero in
on the clips you need for any given situation. This will not only make it easier to find what you need,
but it’ll help you to work faster.
For example, if you’ve entered enough metadata, then you can create multi-criteria Smart Bins or
Smart Filters that let you find the equivalent of “every close-up of Sally inside the diner,” or “every long
shot of Antonio outside in the parking lot.” In a documentary, you could easily isolate “every interview
shot of Louis from camera 1,” or “every B-roll clip with Robyn.” All of this will help you to find media
faster for editing, or to quickly isolate similar clips that you need to match together for grading.
Automatically Imported Metadata
In many instances, metadata is also imported along with the media you’ve added to the Media Pool.
For example, media recorded on BMD cameras may have had a variety of metadata entered into the
camera or automatically generated by the camera, and this metadata is automatically available in the
Metadata Editor. Similarly, Broadcast WAVE files can have quite a bit of metadata entered at the time of
recording, such as scene and take numbers and channel names describing each microphone. Still
images are imported with EXIF metadata. In all cases, available metadata is imported along with the
media and exposed in the Metadata Editor to facilitate workflows where valuable organizational
metadata is being entered on set during the shoot or immediately after ingest.
Using the Metadata Editor
Whenever you select a clip in the Media Pool, its editable metadata appears in the appropriately
named Metadata Editor (so long as it’s displayed). You can use this editor to further massage the
metadata of the clips in a project, adding information on set that will be of help later during
editing and finishing.
By default, clips initially appear with a set of clip metadata called “Clip Details,” which shows some of
the most fundamental details of the clip such as start and end timecode, duration, bit depth, and so on.
Because there are so many available metadata fields, two drop-down menus at the top right of the
Metadata Editor let you change which set of metadata is displayed.
– Metadata Presets (to the left): If you’ve used the Metadata panel of the User Preferences to
create your own custom sets of metadata, you can use this drop-down to choose which one to
expose. Surprisingly enough, this is set to “Default” by default.
– Metadata Groups (to the right): This drop-down menu lets you switch among the various groups
of metadata that are available, grouped for specific tasks or workflows.
Metadata categories drop-down menu
If you want to see a list of every piece of metadata in a clip, you can choose All Groups. Otherwise,
you can choose any set of metadata to narrow your focus to just those items of information.
To edit metadata for a single clip:
Select any clip in the Media Pool, and edit whatever metadata fields you require. The edited metadata
is immediately saved.
To edit metadata for multiple clips:
1 Choose a metadata set using the drop-down menu in the Metadata Editor.
2 Select multiple clips in the Media Pool by Shift-clicking, Command-clicking, or dragging a
bounding box around them.
3 Edit whichever metadata fields you want to. Checkboxes are automatically turned on for any
metadata fields you edit.
4 When you’re done, click the Save button at the bottom of the Metadata Editor. When you’ve edited
metadata for multiple clips at once, you’ll be prompted to save your changes if you create a new
selection in the Media Pool without clicking the Save button first. (A scripted version of this kind of
bulk edit is sketched below.)
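This kind of bulk edit can also be scripted. The sketch below uses the DaVinci Resolve scripting API;
the overall flow should be right, but treat the exact method and field names (“Scene,” “Keywords”) as
assumptions to verify against the scripting README installed with your version of Resolve.

# Minimal sketch using the DaVinci Resolve scripting API (run from the Console,
# or from an external Python interpreter with the scripting module on its path).
# Verify method and field names against your installed scripting README.
import DaVinciResolveScript as dvr_script

resolve = dvr_script.scriptapp("Resolve")
project = resolve.GetProjectManager().GetCurrentProject()
media_pool = project.GetMediaPool()

# Tag every clip in the currently selected bin with the same Scene and Keywords.
for clip in media_pool.GetCurrentFolder().GetClipList():
    clip.SetMetadata("Scene", "12")                 # field names are examples
    clip.SetMetadata("Keywords", "Interior,Diner")  # comma-separated keywords
    print("Tagged:", clip.GetName())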
Editing Keywords
While most metadata in the Metadata Editor is edited via text fields, checkboxes, or multiple button
selections (such as Flags and Clip Color), the Keyword field is unique in that it uses a graphical “tag”
based method of data entry. This helps keep keyword spelling consistent by making it easy to
reference both a built-in list of standardized keywords and keywords you’ve already applied to
other clips.
Once added, keywords are incredibly useful for facilitating searching and sorting in the Media Pool, for
creating Smart Bins in the Media and Edit pages, and for use in Smart Filters on the Color page.
Reaping these benefits by adding and editing keywords is simple, and works similarly to the method of
entering metadata variables. For more information on metadata variables, see Chapter 16, “Using
Variables and Keywords.”
To add a keyword:
1 Select the Keyword field of the Metadata Editor, and begin typing the keyword you want to use.
As you type, a scrolling list appears showing every available keyword that contains the characters
you’ve typed so far, filtering itself as you continue typing. (This filtering is sketched below.)
2 Choose the keyword you want from the list using the Up and Down Arrow keys, and press Return
to add it.
The keyword list that appears when you type within the Keyword field
As soon as you add one or more keywords, each appears as a graphical tag. To re-edit any
keyword, simply click anywhere within the Keyword field.
To edit a keyword:
– Double-click any keyword to make it editable, then edit it as you would any other piece of text,
and press Return to make it a graphical keyword tag again.
To remove a keyword:
– Click any keyword to select it, and press Delete.
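The way the keyword suggestion list narrows as you type, described in the steps above, is essentially
substring matching against the keywords already in use. A hypothetical sketch (the case handling is
an assumption):

# Hypothetical sketch of the keyword suggestion filtering described above.
def filter_keywords(available, typed):
    typed = typed.lower()
    return [kw for kw in available if typed in kw.lower()]

existing = ["Interior", "Exterior", "Diner", "Dialogue", "Drone"]
print(filter_keywords(existing, "di"))  # ['Diner', 'Dialogue']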
Editing Metadata Using the File Inspector
The File tab of the Inspector provides a consolidated way to view and edit a subsection of a clip’s most
commonly used media file metadata. It’s easily accessible in the Inspector across the Media, Cut, Edit,
and Fairlight pages. The tab is composed of the following parts:
– Clip Details: Presents data about the clip’s data format (codec, resolution, frame rate, etc.).
– Metadata: Presents a reduced set of common metadata fields for quick user entry.
– Timecode: The start timecode of the clip. This field is editable if you want to manually change
the clip’s starting timecode.
– Date Created: The date that the clip was created. This field is editable if you want to manually
change the clip’s creation date.
– Camera: Sets the Camera # metadata.
– Reel: Sets the Reel/Card ID.
– Scene: The Scene number of the clip.
– Shot: The Shot letter/number of the clip.
– Take: The Take number of the clip.
– Good Take: This checkbox indicates if the clip is a good or circled take.
– Clip Color: Assign a specific color to a clip that is reflected in the Timeline.
– Name: The clip name field; this can be entered manually.
– Comments: Add a text description to the clip.
– Auto Select Next Unsorted Clip: When this box is checked, the next clip in the Media Pool
is selected when you press the Return key after entering a metadata field, and the cursor is
automatically placed in the same field. This allows rapid sequential metadata entry without
having to manually click to load each individual clip in the Media Pool. The Next Clip button will
select the next clip in the Media Pool, regardless of the checkbox status.
The File Inspector parameters
Tips for Editing Metadata
Editing metadata is like taking vitamins. Nobody wants to, but you know you probably should.
To encourage you to undertake this task so you can reap the benefits, here are a few pointers.
– Don’t start editing until you review your footage and add metadata. If you get into the
habit of entering your clip metadata before you get preoccupied with your edit, you’ll
be in a much better position to edit faster using organizational tools that leverage the
metadata you’ve entered.
– Enter metadata starting with groups of clips and then moving to individual clips. Since
the Metadata Editor lets you add metadata for multiple selected clips at once, it becomes
easy to select groups of clips based on their thumbnails for entering information such
as Scene designations, Interior or Exterior keywords, Character keywords, and Framing
keywords. You’ll be surprised how fast this goes, and how useful this information is later
on, for both editing and grading.
– After you’ve entered all the metadata you can in groups of clips, then switch to entering
clip-specific metadata such as Shot designations, Take numbers, descriptions of action,
and other clip-specific keywords.
– There’s no right or wrong way to edit or use metadata, but a lack of consistency will make
it less useful. For example, if you’re identifying each clip that takes place at the same
diner, try to use the same keyword or descriptive text. If you call half the shots “diner”
and the other half “restaurant,” your ability to easily search for all the diner shots will be
compromised.
Face Detection to Generate
People Keywords
You can select multiple clips in the Media Pool, then right-click the selection and choose “Analyze
clips for people” from the contextual menu to automatically analyze all selected clips using the DaVinci
Neural Engine, identifying faces that can be used to help organize the media. A progress dialog shows
you how long until the analysis is finished (you can cancel the operation if necessary).
Afterwards, the People Management window appears that shows you the results, automatically
organized into a number of bins in a sidebar.
– A “People” bin shows each face that has been recognized as an individual person. Click, pause,
then click again underneath any thumbnail to edit the name or role of that person. You must assign
a name if you want a keyword to appear for that individual in the People field of the Metadata
Editor. Assigning names renames the bins corresponding to each found person and enables
retagging to fix mistaken identification.
The Face Recognition window seen immediately after a Face Recognition operation
– Individual bins collect all clips with a particular person, allowing you to evaluate whether or not the
contents have been identified correctly. If you see an incorrectly identified clip, you can right-click
it and re-tag it from the contextual menu, or choose “Untag” if it’s a new person that has not been
identified at all.
A bin for a particular person lets you evaluate the contents
– An “Other People” bin shows all faces that could not be identified. You can right-click any of these
to re-tag it as one of the people that have been already identified, or you can choose New Person
if it’s someone who wasn’t initially identified (this sometimes happens when multiple people have
very similar features).
Clicking the Close button closes this window and assigns the names you edited as keywords to
the People field of the “Shot & Scene” group in the Metadata Editor. Clips with multiple people
who have been identified have multiple keywords assigned.
The People keywords field of the Shot & Scene group in
the Metadata Editor, populated with who is in that shot
Once People keywords are assigned to one or more clips, a People category of Smart Bins can be
created automatically in the Smart Bins sidebar of the Media Pool, making it easy to immediately
begin finding clips that have specific people in them. To create this People Smart Bin, check the
“Automatic Smart Bins for People Metadata” box in Preferences > User > Editing.
You can reopen the Face Recognition window at any time to make modifications by choosing
Workspace > People.
NOTE: A command in the Option menu of the Face Recognition window, Reset Face
Database, lets you reset all analyzed results if the results are not acceptable and you don’t
want to save the resulting metadata.
Creating Custom Metadata Groups
The Metadata panel in the User Preferences lets you create custom sets of metadata parameters that
will be exposed in the Metadata Editor. Using this panel, you can create customized subsets of
metadata that are focused on your particular needs.
Presets that you create are available from the Option menu that’s just to the left of the Metadata
categories drop-down menu.
Custom Metadata Categories drop-down menu
Choose any custom preset to restrict the Metadata Editor to showing only the metadata fields in that
preset. To see the full set of custom metadata fields you’ve saved to a particular preset, set the
Metadata Categories drop-down menu to All Groups. To make the full set of metadata fields reappear,
choose the Default preset in the preset drop-down menu.
Making and managing metadata presets is simple.
To create a new metadata preset:
1 Open the Metadata panel of the User pane of the Preferences window, and click New.
2 Click the checkboxes of every metadata tag you want to include in this preset, or click the
checkbox of a group name on the list to include all metadata tags within it.
Every metadata tag available in DaVinci Resolve appears within one of several groups, shown as a
list. To open any group to see its contents, move the pointer over that group’s entry in the list, and
click the Open button when it appears.
3 When you’re finished, click the Save button under Metadata Options.
4 Click the Save button for the User Preferences.
To edit an existing metadata preset:
1 Select a preset from the list, and click Edit.
2 Turn checkboxes on and off to include or exclude whatever tags you need.
3 Click the Save button under Metadata Options.
4 Click the Save button for the User Preferences.
To delete a metadata preset:
Select a preset from the list and click Delete.
Importing and Exporting
Media Pool Metadata
Once you’ve taken the trouble to add metadata to the clips in your project, DaVinci Resolve makes it
possible to export metadata from the Media Pool of one project for import into the clips of another
project, for instances where you need to move metadata around.
For example, a DIT might have entered a lot of metadata into the DaVinci Resolve project used for
generating dailies, but then an impatient editor might have created a separate project to begin editing
those dailies. Instead of requiring the editor to enter each clip’s metadata all over again, you can
export the metadata from the DIT’s project and import it into the editor’s new project, automatically
matching the relevant metadata to each corresponding clip.
To export Media Pool metadata:
1 Open a project containing Media Pool metadata you want to export.
2 Optionally, select which clips in the Media Pool you want to export metadata for.
3 Choose File > Export Metadata From > Media Pool to export metadata from every clip in the Media
Pool, or choose File > Export Metadata From > Selected Clips to only export metadata from clips
you selected in step 2.
4 When the Export Metadata dialog appears, enter a name and choose a location for the file to be
written, then click Save. All metadata is exported into a .csv file that can be viewed and/or edited
in any spreadsheet application.
If you open the resulting metadata .csv file, the first line is a header that lists which metadata fields are
included for each clip in the file, and in what order. Only metadata fields that have been populated for
at least one clip are exported and listed in this header; unused metadata fields in the Metadata Editor
or Media Pool are ignored.
This file can now be imported into another project to reattach the metadata to the same clips.
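Because the export is a plain .csv file, you can also inspect or post-process it with a few lines of
Python before handing it off. The file name below is an example, and the exact columns depend on
which fields were populated in your project:

import csv
# Inspect an exported Media Pool metadata file. The header row lists only the
# fields that were populated for at least one clip; the exact set varies.
with open("media_pool_metadata.csv", newline="") as f:
    reader = csv.reader(f)
    header = next(reader)
    print("Exported fields:", header)
    for row in reader:
        print(dict(zip(header, row)))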
To import Media Pool metadata:
1 Open a project containing clips you want to populate with imported metadata.
2 Optionally, select which clips in the Media Pool you want to import metadata to.
3 Choose File > Import Metadata To > Media Pool to import metadata to potentially every clip in the
Media Pool, or choose File > Import Metadata To > Selected Clips to only import metadata to clips
you selected in step 2.
4 When the Import Metadata dialog appears, choose a metadata .csv file to import, and click Open.
5 When the Metadata Import dialog appears, choose the Import Options you want to use to match
the .csv file’s metadata to the correct clips in the currently open project. By default, DaVinci
Resolve tries to use “Match using filename” and “Match using clip start and end Timecode” to
match each line of metadata in the .csv file with a clip in the Media Pool, but there are other
options you can use such as ignoring file extensions, using Reel Name, and using source
file paths.
6 Next, choose which Merge Option you want to use in the Metadata Import dialog. There are
three options:
– Only update metadata items with entries in the source file: The default setting. Only updates
a clip’s metadata if there’s a valid entry in the imported .csv file. Other clip metadata fields are
left as they were before the import.
– Update all metadata fields available in the source file: For each clip that corresponds to a line
of metadata in the imported .csv file, every single metadata field referenced by the .csv file is
overwritten, regardless of whether or not there’s a valid entry for that field.
– Update all metadata fields available in the source file and clear others: For each clip
that corresponds to a line of metadata in the imported .csv file, every single metadata field
referenced by the .csv file is overwritten, regardless of whether or not there’s a valid entry
for that field. Furthermore, metadata fields that aren’t referenced by the imported .csv file are
cleared of whatever metadata was there before.
The Metadata Import dialog that lets you choose options for how to match and merge imported metadata
7 When you’re finished choosing options, click OK, and all available metadata from the source .csv
file is imported. (The three merge behaviors are sketched below.)
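The three Merge Options differ only in how aggressively imported values overwrite what is already on
each matched clip. A hypothetical sketch of that logic, using plain dictionaries to stand in for a clip’s
metadata and one matched row of the .csv file:

# Hypothetical sketch of the three Merge Options described above.
def merge_metadata(clip_meta, csv_row, mode):
    merged = dict(clip_meta)
    if mode == "update_with_entries":        # default: only non-empty .csv values win
        for field, value in csv_row.items():
            if value:
                merged[field] = value
    elif mode == "update_all":               # every referenced field wins, even if empty
        merged.update(csv_row)
    elif mode == "update_all_and_clear":     # as above, plus clear unreferenced fields
        merged = {field: "" for field in clip_meta}
        merged.update(csv_row)
    return merged

clip = {"Scene": "12", "Shot": "A", "Description": "Sally at the diner"}
row = {"Scene": "12", "Shot": "B", "Take": "3"}
print(merge_metadata(clip, row, "update_with_entries"))
# {'Scene': '12', 'Shot': 'B', 'Description': 'Sally at the diner', 'Take': '3'}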
Different Ways of Using Clip Metadata
To encourage you to take advantage of the clip metadata tools that exist in DaVinci Resolve, here’s a
short list of the many different ways you can use clip metadata to help you work faster.
– Searching for clips in the Media Pool
– Searching for clips in the Timeline
– Sorting the Media Pool by metadata columns in list view
– Creating Smart Bins in the Edit page
– Creating Timeline Filters in the Color page
– Using metadata to define Clip Names
– Displaying Metadata in frame using the Color page Burn In palette
Renaming Clips Using Clip Names
The most fundamental piece of clip metadata is each clip’s name, which is used to identify clips nearly
everywhere they appear inside DaVinci Resolve. By default, clips show the file name of the
corresponding media file on disk. Since the dawn of tapeless recording, however, editors have been
stuck with camera original media having names that are not exactly “human readable.”
Fortunately, you have the option of entering a more user-friendly clip name to use instead, while
preserving the original file name that’s critical for maintaining the link between a clip and its media, as
well as for tracking an offline clip’s corresponding link to the online media from which it originated.
There are a few ways you can edit the clip name of a clip.
NOTE: You can also edit the clip names of timelines, compound clips, and multicam clips, so
that you can have two sets of naming conventions for these items, one for when you’re doing
creative editing, and one for when you’re doing finishing tasks.
To edit a clip’s clip name, do one of the following:
– In the Media Pool’s Icon view, click a clip’s name once, pause a moment, then click a second time
to select the name, type a new name, then press return to accept the name.
– In the Media Pool’s List view, the Clip Name mirrors the source clip’s file name (hidden by default),
but you can click the Clip Name column for any clip to add a new name from scratch.
– With the Clip Name column exposed in the Media Pool’s List view, Option-click the Clip Name
column for any clip to edit the file name, rather than entering a brand new name.
– To edit the clip name of multiple clips, select all of the clips for which you want to change the clip
name, then right-click one of the selected clips and choose Clip Attributes. Open the Name panel
of the Clip Attributes window, edit the Clip Name field, and click OK.
After you’ve changed a clip’s clip name, that clip appears in the following places using the clip name
instead of the original file name:
– The Media Pool’s Thumbnail view
– The name bar of each clip in the Timeline
– The Source Viewer title bar
– The Clip Name field of the Clip Attributes dialog’s Name panel
Switching Between File Names and Clip Names
Since different tasks require different information, you have the ability to switch between using clip file
names and clip names. For example, finishing editors will probably have more reason to refer to the file
name of each clip, making it easier to troubleshoot problems with reconforming and relinking. Creative
editors, on the other hand, will want to use easier-to-read clip names to make it easier to find what
they need.
To switch between file names and clip names:
– Choose View > Show File Names to toggle between the two naming conventions.
Using Metadata to Define Clip Names
If you’re an enthusiastic user of clip metadata (and you should be), you can use “metadata variables,”
which you add to a field to reference other metadata for that clip. For example, you could add the
combination of variables and text seen in the following screenshot to define a clip name automatically.
Once entered, variables are represented as graphical tags shown with a background, while regular
text characters that you enter appear before and after these tags.
Variables and text characters entered to create a clip name based on a clip’s metadata
As a result, that clip would display “12_A_3” as its name if scene “12,” shot “A,” and take “3” were its
metadata. You can freely mix metadata variables with other characters (the underscore, as in the
example above) to format the result and make it more readable.
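A worked sketch of that substitution, with a simplified %Name token syntax standing in for the
graphical variable tags (the pattern format and helper below are illustrative, not Resolve’s internal
format):

import re
# Hypothetical sketch: expand a clip-name pattern of variables and literal text
# against a clip's metadata. A variable whose field is blank or missing expands
# to nothing, matching the behavior described for blank metadata fields.
def expand_clip_name(pattern, metadata):
    return re.sub(r"%([A-Za-z]+)", lambda m: str(metadata.get(m.group(1), "")), pattern)

meta = {"Scene": "12", "Shot": "A", "Take": "3"}
print(expand_clip_name("%Scene_%Shot_%Take", meta))  # 12_A_3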
Every single item of metadata that’s available in the Metadata Editor can be used as a variable, and
several other clip and timeline properties such as the version name of a clip’s grade, a clip’s EDL event
number, and that clip’s timeline index number can also be referenced via variables.
Since the use of metadata variables is a great way to automatically generate names for multiple clips,
you may find it more useful to add metadata variable-driven clip names by selecting all of the clips you
want to edit, and opening the Clip Attributes window. By editing the Clip Name field found in the Name
panel, you can add a single clip name to all selected clips at once.
To add a variable to a text field that supports the use of variables:
1 Type the percentage sign (%) and a scrolling list appears showing all variables that are available.
2 To find a specific variable quickly, start typing the characters of that variable’s name and this list
automatically filters itself to show only variables that contain the characters you’ve just typed.
3 Choose which variable you want to use using the Up and Down Arrow keys, and press Return to
choose that variable to add.
The variable list that appears when you type the % character
As soon as you add one or more metadata variables to a clip’s Clip Name column and press Return,
the string is replaced by its corresponding text. To re-edit the metadata string, simply click within that
column, and the metadata variables will reappear. Be aware that, for clips where a referenced
metadata field is blank, no characters appear for that corresponding metadata variable in the Clip
Name column.
To remove a metadata variable:
– Click within a field using variables to begin editing it, click a variable to select it, and press Delete.
For more information on the use of variables, as well as a list of all variables that are available in
DaVinci Resolve, see Chapter 16, “Using Variables and Keywords.”
Chapter 20
Using the Inspector
in the Media Page
The Inspector holds all the controls to modify, resize, retime, and generally adjust
anything related to a clip, transition, or effect on the Media page Timeline.
Contents
Using the Inspector 360
Adjusting Media Pool Clips in the Inspector 360
Video 360
Audio 364
Image 365
File 365
Using the Inspector
The Inspector has been redesigned to make it easier to find specific controls and to adjust common
settings for your clips. Instead of a long vertical list, the Inspector is now organized into panels, each
controlling a specific group of parameters for your clip.
The Inspector is opened by clicking the Inspector icon in the upper-right section of the user interface
toolbar. The Inspector is divided into individual Video, Audio, Effects, Transition, Image, and File
panels. Inspector panels that are not applicable to your clip or selection are grayed out.
The Inspector Panel icon in the
upper right of the UI toolbar
The Inspector panels showing Video, Audio,
and File parameters available for adjustment; others are grayed out.
Methods of using controls in the Inspector:
– To activate or deactivate a control: Click the toggle to the left of the control’s name. When the
toggle is orange (switched to the right), the control is activated; when it is gray (switched to the
left), the control is deactivated.
– To reveal a control’s parameters: Double-click the control’s name.
– To reset controls to their defaults: Click the reset button to the right of the control’s name.
Adjusting Media Pool Clips in the Inspector
You can directly modify Media Pool clips in the Inspector, before you edit those clips into a timeline.
This lets you change the parameters of the source media so that clips subsequently edited into a
timeline carry those new settings with them. For example, you can prepare your material prior to
editing by changing a clip’s file and RAW settings, adjusting its audio levels and EQ, or assigning it
a specific lens correction. Once modified, any part of that clip already has the correct Inspector
parameters in place when you edit it into your timeline.
To adjust Media Pool clips in the Inspector:
1 Select one or more clips in the Media Pool Panel of either the Media, Cut, Edit, or Fairlight pages.
2 Open the Inspector panel, and adjust any parameters in the Video, Audio, Image and File tabs.
These parameter changes are stored with the Media Pool clip, and will be carried over when any part
of that clip is edited into the Timeline. Of course, each clip’s Inspector parameters can be further
modified once it’s in the Timeline, and those Timeline parameters are independent from the Media
Pool Inspector settings. This means that any further adjustments you make to the clip in the Timeline
do not affect that same clip’s adjustments already in the Media Pool.
Video
The Video Panel of the Inspector exposes a vast array of controls designed to manipulate the size,
speed, and opacity of your clips.
Transform
The Transform section of the Video Inspector panel
The Transform group includes the following parameters for resizing and repositioning your clips (how
these compose is sketched after this list):
– Zoom X and Y: Allows you to blow the image up or shrink it down. The X and Y parameters can
be linked to lock the aspect ratio of the image, or released to stretch or squeeze the image in one
direction only.
– Position X and Y: Moves the image within the frame, allowing pan and scan adjustments to be
made. X moves the image left or right, and Y moves the image up or down.
– Rotation Angle: Rotates the image around the anchor point.
– Anchor Point X and Y: Defines the coordinate on that clip about which all transforms are centered.
– Pitch: Rotates the image toward or away from the camera along an axis running through the
center of the image, from left to right. Positive values push the top of the image away and bring
the bottom of the image forward. Negative values bring the top of the image forward and push the
bottom of the image away. Higher values stretch the image more extremely.
– Yaw: Rotates the image toward or away from the camera along an axis running through the center
of the image from top to bottom. Positive values bring the left of the image forward and push the
right of the image away. Negative values push the left of the image away and bring the right of the
image forward. Higher values stretch the image more extremely.
– Flip Image: Two buttons let you flip the image in different dimensions.
– Flip Horizontal control: Reverses the image along the X-axis, left to right.
– Flip Vertical control: Reverses the clip along the Y-axis, turning it upside down.
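Taken together, the parameters above behave like a standard 2D transform: scale about the anchor
point, rotate about the anchor point, then offset by Position. That ordering is a common convention
assumed here for illustration only; it is not a statement of Resolve’s internal math.

import math
# Illustrative 2D transform: scale and rotate a point about the anchor, then
# apply the Position offset. The ordering is an assumption for illustration.
def transform_point(px, py, zoom_x, zoom_y, rotation_deg,
                    anchor_x, anchor_y, pos_x, pos_y):
    theta = math.radians(rotation_deg)
    x, y = (px - anchor_x) * zoom_x, (py - anchor_y) * zoom_y
    rx = x * math.cos(theta) - y * math.sin(theta)
    ry = x * math.sin(theta) + y * math.cos(theta)
    return rx + anchor_x + pos_x, ry + anchor_y + pos_y

# A point 100 px right of the anchor, zoomed 2x and rotated 90 degrees, lands at
# roughly (0, 200) in this sketch's coordinate convention.
print(transform_point(100, 0, 2, 2, 90, 0, 0, 0, 0))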
Cropping
The Cropping section of the Video Inspector panel
The Cropping group includes the following parameters:
– Crop Left, Right, Top, and Bottom: Lets you cut off, in pixels, the four sides of the image. Cropping
a clip creates transparency so that whatever is underneath shows through.
– Softness: Lets you blur the edges of a crop. Setting this to a negative value softens the edges
inside of the crop box, while setting this to a positive value softens the edges outside of
the crop box.