O_ColourMatcher
The O_ColourMatcher plug-in lets you match the colours of one view with those of another. It has been specifically designed to deal with the subtle colour differences that are sometimes present between stereo views.
Inputs and Controls
Connection Type | Connection Name | Function
Input | Mask | An optional mask that determines where to take the colour distribution from. For example, if you have a clip showing a person in front of a green screen, you might want to use a mask to exclude the green area so the plug-in concentrates on matching the person.
Input | Source | A stereo pair of images. If disparity channels and occlusion masks aren’t embedded in the images and you are using the 3D LUT or Local Matching mode, you should use an O_Solver, an O_DisparityGenerator, and an O_OcclusionDetector node after the image sequence.
Control (UI) | Knob (Scripting) | Default Value | Function
O_ColourMatcher Tab
Views to Use | viewPair | Dependent on source | Sets the two views whose colours you want to match. These views are mapped to the left and right eyes.
Match | matchWhat | Left to Right | Sets how to match the colours between views: • Left to Right - adjusts the colours of the left view to match those of the right. • Right to Left - adjusts the colours of the right view to match those of the left.
Mode | matchingMode | Basic | Sets the algorithm to use for the colour matching: • Basic - takes the colour distribution of one entire view and modifies it to match the distribution of the other view. • 3D LUT - generates a global look-up table (LUT) from local matches at unoccluded pixels. Note: This mode requires a disparity field and an occlusion mask in the input data stream. If these don’t yet exist, you can create them using the O_Solver, O_DisparityGenerator, and O_OcclusionDetector plug-ins. • Local Matching - first divides the two images into square blocks according to the Block Size control, then matches the colour distributions between corresponding blocks in the two views. This can be useful if there are local colour differences between the views, such as highlights that are brighter in one view than the other. This mode requires a disparity field in the input data stream; if there isn’t one, you can create it using the O_Solver and O_DisparityGenerator plug-ins. Note: If Occlusion Compensate is enabled, this mode also requires an occlusion mask upstream. If one doesn’t exist, you can use O_OcclusionDetector to create one.
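Conceptually, the Basic mode aligns the overall colour statistics of one view with the other. Ocula's actual algorithm is proprietary; the following is only a minimal single-channel sketch that matches mean and spread, with hypothetical values standing in for real pixel data.

```python
import statistics

def basic_match(src, ref):
    """Shift and scale src values so their mean and spread match ref.

    A toy single-channel stand-in for a global distribution match;
    O_ColourMatcher's real Basic algorithm is not published.
    """
    s_mu, r_mu = statistics.fmean(src), statistics.fmean(ref)
    s_sd, r_sd = statistics.pstdev(src), statistics.pstdev(ref)
    scale = r_sd / s_sd if s_sd else 1.0
    return [(v - s_mu) * scale + r_mu for v in src]

# Hypothetical per-pixel values for one channel of each view:
left = [0.2, 0.4, 0.6]
right = [0.3, 0.5, 0.7]
matched = basic_match(left, right)
```

After matching, the left-view samples share the right view's mean and standard deviation, which is why a purely global method cannot fix local differences such as view-dependent highlights.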
Export 3D LUT | exportLUT | N/A | Click to export the colour change calculated for the current frame as a 3D look-up table (LUT) in .vf format. This allows you to apply the LUT separately using Nuke’s Vectorfield (Color > 3D LUT > Vectorfield) node. This control is only available in the 3D LUT mode.
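A 3D LUT maps each input RGB triple to an output triple via a sampled grid of colours. The sketch below is a hypothetical nearest-neighbour lookup only; real 3D LUTs (including the .vf files Nuke's Vectorfield node reads) interpolate between grid points, and the .vf file format itself is not shown here.

```python
def apply_lut(rgb, lut, size):
    """Look up an RGB triple in a size**3 grid LUT (nearest-neighbour).

    Hypothetical sketch: real LUT application interpolates between
    grid points rather than snapping to the nearest one.
    """
    i, j, k = (min(size - 1, round(c * (size - 1))) for c in rgb)
    return lut[i][j][k]

size = 2
# Identity LUT: each grid point maps to its own coordinates.
identity = [[[(i / (size - 1), j / (size - 1), k / (size - 1))
              for k in range(size)] for j in range(size)] for i in range(size)]
out = apply_lut((0.9, 0.1, 0.6), identity, size)
```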
Local Matching Options
Block Size | blockSize | 20 | Defines the width and height (in pixels) of the square blocks that the images are divided into when calculating the colour match. Note: This control is only available in the Local Matching mode.
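The tiling implied by Block Size can be sketched as follows. This is a hypothetical illustration of dividing an image into square blocks, not Ocula's actual implementation; edge tiles are simply clipped here.

```python
def blocks(width, height, block_size=20):
    """Yield (x, y, w, h) square tiles covering a width x height image.

    Hypothetical sketch of the tiling implied by the Block Size
    control; tiles at the right and bottom edges are clipped.
    """
    for y in range(0, height, block_size):
        for x in range(0, width, block_size):
            yield (x, y,
                   min(block_size, width - x),
                   min(block_size, height - y))

tiles = list(blocks(100, 50, 20))  # 5 columns x 3 rows of tiles
```

Smaller blocks follow local colour differences more closely but are more sensitive to disparity errors; larger blocks behave more like the global Basic match.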
Occlusion Compensate | occlusionCompensate | enabled | When Occlusion Compensate is enabled, O_ColourMatcher looks for similar colours in the nearby unoccluded areas that it has already been able to match and uses the closest colour it finds. Note: This requires an occlusion mask upstream (you can create one using O_OcclusionDetector) and is only available in the Local Matching mode.
Edge Occlusion | edgeOcclusion | 0.4 | Sets the threshold for treating image edges as occlusions to reduce haloing and edge flicker. The higher the value, the more image edges are considered occlusions even if they aren’t marked as such in the upstream occlusion mask. Note: This control is only available when Occlusion Compensate is enabled.
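The idea of thresholding image gradients to treat strong edges as occlusions can be sketched in one dimension. This is a hypothetical illustration only; the real detector works on 2-D images and its exact gradient measure is not published.

```python
def edge_occlusions(row, threshold=0.4):
    """Flag pixel boundaries whose horizontal gradient exceeds threshold.

    1-D hypothetical sketch: a higher threshold flags fewer edges as
    occlusions, a lower one flags more.
    """
    return [abs(b - a) > threshold for a, b in zip(row, row[1:])]

# One scanline with a hard edge between 0.1 and 0.9:
flags = edge_occlusions([0.0, 0.1, 0.9, 1.0], threshold=0.4)
```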
Colour Sigma | colourSigma | 2 | Sets the amount of blurring across edges in the colour match at occluded regions. Decrease this to restrict the colour correction in occluded regions to similar colours. Increase the value to blur the colour correction. Note: This control is only available when Occlusion Compensate is enabled.
Region Size | regionSize | 25 | Sets the size of the region (in pixels) of unoccluded pixels used to calculate the colour correction at an occluded pixel. Tip: When Occlusion Compensate is enabled, O_ColourMatcher first finds the closest unoccluded pixel and then expands that distance by this number of pixels to pick up unoccluded pixels to use.
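The search described in the tip can be sketched in one dimension: find the nearest unoccluded pixel, then gather all unoccluded pixels within that distance plus Region Size. This is a hypothetical simplification of the real 2-D search.

```python
def correction_sources(occluded, pos, region_size=25):
    """Indices of unoccluded pixels used to correct occluded pixel pos.

    1-D hypothetical sketch: nearest unoccluded distance, expanded by
    region_size, bounds the sampling window.
    """
    unocc = [i for i, occ in enumerate(occluded) if not occ]
    nearest = min(abs(i - pos) for i in unocc)
    return [i for i in unocc if abs(i - pos) <= nearest + region_size]

# Pixels 3-6 are occluded; correct pixel 4 from nearby unoccluded pixels.
mask = [False, False, False, True, True, True, True, False, False, False]
sources = correction_sources(mask, 4, region_size=2)
```

A larger Region Size pulls the correction from more unoccluded pixels, which is useful when the closest ones happen to be a poor colour match (as in the teapot example in the tutorial below).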
Multi-scale Options
Number of Samples | samples | 5 | Sets the number of samples in Local Matching mode. Using a value larger than 1 calculates the correction for multiple block sizes - between Block Size and Max Block Size - and then blends the results together. This can help to reduce errors.
Max Block Size | maxBlockSize | 100 | Sets the maximum block size (in pixels) to go up to when using multiple samples in the Local Matching mode. Note: This control is only available if you have set Mode to Local Matching and Number of Samples to a value larger than 1.
Sample Spacing | intervalType | Uniform | Sets the type of sampling intervals to use when using multiple samples in the Local Matching mode: • Uniform - the sampling interval remains constant. The samples are spaced evenly. • Favour Small Block Sizes - the sampling interval increases as the block size increases. This weights the correction towards smaller block sizes, which preserve more detail, while still including some larger block sizes, which are more immune to disparity errors.
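The two spacing schemes can be sketched as follows. Squaring the interpolation parameter is one plausible reading of "Favour Small Block Sizes" (samples cluster near the small end); Ocula's exact spacing curve is not published.

```python
def sample_sizes(block_size, max_block_size, samples, favour_small=False):
    """Block sizes to sample between Block Size and Max Block Size.

    Hypothetical sketch: Uniform spaces the samples evenly; the
    favour_small option squares the interpolation parameter so the
    interval grows as the block size grows.
    """
    if samples < 2:
        return [block_size]
    ts = [i / (samples - 1) for i in range(samples)]
    if favour_small:
        ts = [t * t for t in ts]  # denser near the small-block end
    return [round(block_size + t * (max_block_size - block_size)) for t in ts]

uniform = sample_sizes(20, 100, 5)
small_biased = sample_sizes(20, 100, 5, favour_small=True)
```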
Colour Correction Type | correctionType | Best Guess | Sets how O_ColourMatcher picks the colour correction at each point from the results it has calculated: • Minimum Correction - out of the results you have, this picks the smallest correction at each point (that is, closest to your original image). This option can be useful if you have a very poor disparity map. • Best Guess - out of the results you have, this picks the closest correction to the target image at each point. The target image is created by using the disparity field to warp the other view onto the view you’re trying to correct. This option can be useful if you have a very good disparity map. • Average Correction - uses the mean value of the colour correction at each point. Tip: If you set Number of Samples to a value larger than 1, Colour Correction Type combines the results calculated for the different block sizes in this way. Note: This control is only available if you have set Mode to Local Matching.
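The three selection rules can be sketched for a single pixel value. This is a hypothetical scalar illustration: `candidates` stands in for the corrections produced at different block sizes, and `target` for the disparity-warped other view.

```python
def pick_correction(original, candidates, target, mode):
    """Choose one corrected value from multi-scale candidates at a pixel.

    Hypothetical scalar sketch of the three Colour Correction Type
    options; real images are compared per pixel in colour space.
    """
    if mode == "Minimum Correction":
        # Smallest change from the original image.
        return min(candidates, key=lambda c: abs(c - original))
    if mode == "Best Guess":
        # Closest to the disparity-warped target image.
        return min(candidates, key=lambda c: abs(c - target))
    # Average Correction: mean of all candidates.
    return sum(candidates) / len(candidates)

candidates = [0.3, 0.55, 0.9]   # one result per sampled block size
minimum = pick_correction(0.2, candidates, 0.6, "Minimum Correction")
best = pick_correction(0.2, candidates, 0.6, "Best Guess")
```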
Mask Options
Mask Components | maskWith | None | Sets the channel to use as a mask when calculating the colour transformation: • None - use the entire image area. • Source Alpha - use the alpha channel of the Source clip as a mask. • Source Inverted Alpha - use the inverted alpha channel of the Source clip as a mask. • Mask Luminance - use the luminance of the Mask input as a mask. • Mask Inverted Luminance - use the inverted luminance of the Mask input as a mask. • Mask Alpha - use the alpha channel of the Mask input as a mask. • Mask Inverted Alpha - use the inverted alpha channel of the Mask input as a mask.
Video Tutorials
OCULA 3.0 - Colour Matcher from Foundry on Vimeo.
Welcome to Ocula from Foundry. My name’s Jon, and in this tutorial we are going to take a look at setting up and reviewing the ColourMatcher node in Ocula 3.0. We are going to look at colour correcting this footage. If I switch between left and right eye, here, you can see there are some large changes in reflections on the floor. We want to match those reflections to the image, to ease the stereo viewing experience. In my script, I am setting up and reviewing disparity. We are rendering that out for use, and I am pulling it in here to do a colour correction, using the ColourMatcher node in Ocula, and we are going to look at quality checking that later on.
So, first of all, let’s have a look at the setup and review. I have set the analysis keys in the Solver node, and I am using the default parameters in the DisparityGenerator node. There are some tutorials available on how to set up the Solver and DisparityGenerator nodes. Here I also have an OcclusionDetector node, and that’s adding in a new occlusion mask channel in Nuke. This defines the pixels that do not match between the left and right view, the pixels that are obscured or revealed between the views. The ColourMatcher is going to ignore those occluded pixels when it pulls the colour from one view onto the other. So, if we look at the OcclusionDetector, you can tune how the occlusions are defined and we’ll see how to do that later on. Here, you can see that the image borders are defined as occluded between left and right view. I am going to render out disparity and occlusion and then pull that back in to use with the ColourMatcher node.
In the ColourMatcher, you can define the left and right views, and whether to match the left to the right or update the right to match the left. There are three different methods to calculate the colour update: a Basic lift and gain, a more complicated 3D LUT, which you can then export for use outside Ocula or Nuke, and an option to do a local colour correction (Local Matching). This matches small blocks between the left and right images, so it will match the subtle variations between the views, such as the reflections we are seeing here. You can choose whether or not to treat the occluded regions, defined by the occlusion detector, separately, and there are some parameters to tune. We will see how to tune those later. There are also some Multi-scale Options for the blocks. You can choose whether to use one or more block sizes, you can set the maximum block size to use, and also define how the update is applied. These options help to stabilise the colour update over time. The ColourMatcher also has a Mask input, and you can choose how the mask is defined. With the Basic and 3D LUT methods, the update is calculated in the mask region and applied to the whole image. In Local Matching, the update is calculated for the masked region and applied to that masked region. So, here’s the update that’s been calculated using the local correction method; you can see that the reflections have been updated so they match between the left and right view.
Now, let’s have a look at how we can quality check the colour correction. Down here, I am re-calculating the disparity for the corrected plates. Notice I have to pull the Solver information from upstream; I could also have just re-calculated it here in the new Solver node. I am using that disparity with the Ocula StereoReviewGizmo, which I have set to show a CheckerBoard key mix between the left and right view. If I switch the correction, I can pick up the convergence point on the gizmo and move it around to check for any differences between the left and right on different parts of the image. If we take a look at the original footage, you can see the difference that’s been corrected. So, again, if we moved around, you could see the different colours in the left and right view in the key mix. Another way to quality check is to use the NewView node to pull the corrected view onto the ‘hero’ view, and then check for differences. I have the NewView set to pull the Left view, which we updated, across onto the right using an Interpolate Position of 1. I can then do a per-pixel difference, so we can check for errors on the whole image at once. The difference here is only on the edges, which may have a sub-pixel shift in the disparity calculation. If we look at the original plates, you can see the colour difference that’s been corrected by the ColourMatcher node.
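The per-pixel difference check described here can be sketched as a simple comparison. This is a hypothetical illustration with a made-up tolerance; in a real script you would build the difference with Nuke nodes on full images rather than in Python.

```python
def qc_difference(corrected, rebuilt, tolerance=0.01):
    """Per-pixel absolute difference between the corrected view and the
    same view rebuilt via NewView, plus the indices that exceed a
    hypothetical tolerance."""
    diffs = [abs(a - b) for a, b in zip(corrected, rebuilt)]
    return diffs, [i for i, d in enumerate(diffs) if d > tolerance]

# Hypothetical single-channel scanlines from the two images:
diffs, failing = qc_difference([0.30, 0.50, 0.72], [0.30, 0.50, 0.70])
```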
Let’s have a look at the colour correction for another shot. Here’s the left and right view, and there is a large colour shift we want to correct here. I have keyed the Solver, and we are calculating the disparity and occlusions as before. If we look at those occlusions, these are the pixels that are marked as different between the left and right views. Now, there are two ways in which occlusions are calculated: first by looking at depth differences, where one part of the image is obscured by another (Depth Occlusions), and second by looking at differences in image content, using a threshold on the colour difference between the views (Colour Threshold). So, we can switch these thresholds off and tune them to the shot. I am going to start by pulling up the Depth Occlusions until we just pick out the occlusions between the depth layers. I don’t want this to become too large and blobby. I can then work with the Colour Threshold so, again, I can start to pull this up until we just pick out the differences at these depth changes in the scene. I tend to be quite conservative here; you don’t want to pick up lots of isolated regions that are going to flicker between frames. We want it to be stable over time. There are also some options here to refine the occlusion regions, and there are some presets you can choose from, according to how much of the image is different between the left and right views. So, here we are seeing an extreme difference, so it’s much larger. One extra thing I have done for this shot is to add a roto to block out the bit at the top of the image, which is different between left and right. So, you are free to edit the occlusion masks with Roto or RotoPaint.
So, let’s have a look at the results of the colour matching. This is the Basic method doing the lift and gain. You can improve on this using the 3D LUT method, which calculates a global update based on pixel matches between the left and right views. So, the result is much better, but it still has some variations in the reflections on the water. Now, I can export this and use it in a VectorField node in Nuke, so I can apply that 3D LUT without Ocula, or you can use it outside Nuke. Here’s the ColourMatcher result, and here’s the result from the VectorField node, and they match. We can also do a local correction to match the reflections on the water in this footage. Here, you can see the results: the left and right view match. Now we can use our QC technique again by looking at the colour difference within NewView. So, here’s the difference for the local colour correction; now we can go back, switch the methods, and compare those differences. There are large changes with Basic. It’s better with 3D LUT, but you can see the differences with the reflections on the water. And, finally, the Local Matching, which gives the best result.
Let’s have a look at one more example. In this shot there are some extreme changes between the views. Large areas are not visible when you switch from one to the other. Here, you can see the shop front is obscured when I switch from the left to the right. Let’s review the footage with NewView. I am going to colour match the left view. So, here, I want to pull the right onto the left, and then compare that to the original left. You can see it highlights that shop front. We want to pick up this region with the OcclusionDetector. Now let’s set that up. I am going to switch to the Mat overlay (M) and you can see it picks up the shop fronts. There are some holes though, and some isolated regions, and these can flicker, so we want to remove them and create a stable mask. I am going to switch to the Extreme setting. This expands the mask and fills the holes to make sure that they are stable. Now let’s set up the thresholds, as we did before. We just want to pick out the shop front here. I can switch to set Colour Threshold next, and I want to make sure that there are no isolated patches left in the mask that will flicker, causing the colour update to flicker. So, this looks good. Now, one extra thing I have done here is to add a roto mask, just to make sure that occlusion is masked for the shot. I could have actually switched the thresholds off in the OcclusionDetector, and just used a roto instead.
Now, let’s have a look at the colour correction for the left view. So, here, we have the basic lift and gain. Let’s switch to see the local block-based colour match. You can now see a yellow band on the wall. This is where the ColourMatcher is compensating for the occlusions. It looks at the pixels outside of the mask and it pulls the colour update inside of the mask. The problem here is that it’s pulled the update from the black region into the adjacent bricks. I just need to expand the Region Size, where it searches for the colour update so it can pull the update from other bricks. You can also restrict this to using regions of similar colour by reducing the Colour Sigma. Now, finally, you will notice there is some dark banding inside the teapot. This happens because the blocks are spanning the light and dark region. I need to use a smaller Block Size, so the update covers a smaller area. This looks good. You can try tuning the multi-sampling options, particularly if you are having any issues with the correction changing between frames; multi-sampling can improve the stability. Now, there are still some isolated regions inside of the occlusion mask I’m not happy with. What I am going to do now is switch to using the 3D LUT result in this region. This is the 3D LUT correction, and it looks pretty good. So, what I have done is shuffled the occlusion mask into alpha, and I am going to apply that as a matte to pick out the 3D LUT correction inside the occlusion mask, and the local correction outside. Here’s switching between the original and our final results, and you can see it corrects some shadowing and reflections in the shot.
So, that wraps up this tutorial on the ColourMatcher. We have looked at rendering out the occlusion mask with disparity for use in colour matching. We have looked at the local colour matching and quality checking the result, in particular using NewView to rebuild one view on top of the other to look at colour differences. We then looked at a more detailed example, and how to set up the occlusions, how to edit them with roto, and finally, we looked at an extreme occlusion example and how to tune the colour matching parameters, and also switch between different colour matching results.