The O_DisparityGenerator plug-in is used to create disparity fields for stereo images. A disparity field maps the location of a pixel in one view to the location of its corresponding pixel in the other view. It includes two sets of disparity vectors: one maps the left view to the right, and the other maps the right view to the left.
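As a mental model only (this is not Ocula's internal representation or API), a disparity field can be pictured as a per-pixel vector lookup; the function and field below are purely illustrative:

```python
# Illustrative model of a disparity field: for each pixel it stores a 2D
# vector pointing at the corresponding pixel in the other view. This is a
# conceptual sketch, not Ocula's internal data structure.
def corresponding_pixel(x, y, disparity):
    """Map pixel (x, y) in one view to its match in the other view.
    disparity[y][x] holds the (dx, dy) vector for that pixel."""
    dx, dy = disparity[y][x]
    return (x + dx, y + dy)

# A tiny 2x2 field in which every pixel shifts 3 px horizontally:
field = [[(3.0, 0.0), (3.0, 0.0)],
         [(3.0, 0.0), (3.0, 0.0)]]
```

A full disparity field contains one such vector set per mapping direction, left-to-right and right-to-left.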
The following Nuke plug-ins rely on disparity fields to produce their output:
• O_OcclusionDetector
• O_ColourMatcher (in 3D LUT and Local Matching modes)
• O_FocusMatcher
• O_VerticalAligner (in Local Alignment mode)
• O_NewView
• O_InteraxialShifter
• O_VectorGenerator
• O_DisparityToDepth, and
• O_DisparityViewer.
Connection Type | Connection Name | Function
Input | Fg | An optional mask that specifies the area to calculate disparity. You can use this to create a disparity layer for a foreground element.
Input | Ignore | An optional mask that specifies areas to exclude from the disparity calculation. NOTE: Masks should exist in both views, and O_DisparityGenerator expects alpha values of either 0 (for background) or 1 (for foreground).
Input | Solver | If the Source sequence doesn't contain features that O_Solver is able to match well, you can use O_Solver on another sequence shot with the same camera setup. If you do so, connect O_Solver to this input.
Input | Source | A stereo pair of images. The images should be followed by an O_Solver node, unless you're using the Solver input.
Control (UI) | Knob (Scripting) | Default Value | Description
O_DisparityGenerator
Views to Use | viewPair | Dependent on source | Sets the two views you want to use to create the disparity field. These views will be mapped for the left and right eye.
Ignore Mask | ignoreMask | None | Sets the mask type to exclude areas of the sequence. NOTE: Masks should exist in both views, and O_DisparityGenerator expects alpha values of either 0 (for regions to use) or 1 (for regions to ignore). • None - do not use an ignore mask. • Source Alpha - use the alpha channel of the Source clip as an ignore mask. • Source Inverted Alpha - use the inverted alpha channel of the Source clip as an ignore mask. • Mask Luminance - use the luminance of the Ignore input as an ignore mask. • Mask Inverted Luminance - use the inverted luminance of the Ignore input as an ignore mask. • Mask Alpha - use the alpha channel of the Ignore input as an ignore mask. • Mask Inverted Alpha - use the inverted alpha channel of the Ignore input as an ignore mask.
Foreground Mask | foregroundMask | None | Sets an optional mask that specifies the area to calculate disparity. You can use this to create a disparity layer for a foreground element. You can also use the Ignore mask to exclude elements in the foreground region. NOTE: Masks should exist in both views, and O_DisparityGenerator expects alpha values of either 0 (for background) or 1 (for foreground). • None - do not use a foreground mask. • Source Alpha - use the alpha channel of the Source clip as a foreground mask. • Source Inverted Alpha - use the inverted alpha channel of the Source clip as a foreground mask. • Mask Luminance - use the luminance of the Fg input as a foreground mask. • Mask Inverted Luminance - use the inverted luminance of the Fg input as a foreground mask. • Mask Alpha - use the alpha channel of the Fg input as a foreground mask. • Mask Inverted Alpha - use the inverted alpha channel of the Fg input as a foreground mask.
Noise | noiseLevel | 0 | Sets the amount of noise O_DisparityGenerator should ignore in the input footage when calculating the disparity field. The higher the value, the smoother the disparity field. You may want to increase this value if you find that the disparity field is noisy in low-contrast image regions.
Strength | strength | 1 | Sets the strength of the pixel matching between the left and right views. Higher values allow you to accurately match similar pixels in one image to the other, concentrating on detail matching even if the resulting disparity field is jagged. Lower values may miss local detail, but are less likely to produce the odd spurious vector, giving smoother results.
Consistency | consistency | 0.1 | Sets how strongly the left and right disparities are constrained to be consistent. Increase the value to encourage the left and right disparity vectors to match.
Alignment | alignment | 0.1 | Sets how much to constrain the disparities to match the horizontal alignment defined by an upstream O_Solver node. A value of 0 calculates the disparity using unconstrained motion estimation. Increasing the value forces the disparities to be aligned. In most cases, you want this set to 0 or the default value of 0.1.
Sharpness | sharpness | 0 | Sets how distinct object boundaries should be in the calculated disparity field. Increase this value to produce distinct borders and separate objects. Decrease the value to blur disparity layers together and minimise occlusions. For better picture building with O_NewView, O_InteraxialShifter, O_FocusMatcher, and O_Retimer, you can set this value to 0.
Smoothness | smoothness | 0 | Sets the amount of extra smoothing applied to the disparity field as a post process after image matching. The higher the value, the smoother the result. You can use this in conjunction with the Sharpness parameter to smooth out the disparity field separately for distinct objects in the shot.
Parallax Limits
Enforce parallax limits | enforceParallax | disabled | When enabled, O_DisparityGenerator limits the disparity to the specified Negative and Positive values to remove incorrect disparity vectors. You can review the disparity range using the Parallax Histogram display in O_DisparityViewer.
Negative | negativeParallax | -100 | Sets the maximum negative parallax, in pixels. With negative parallax, pixels in the left image are to the right of the corresponding pixels in the right image, and objects appear in front of the screen plane. Negative parallax is defined by the maximum disparityL.x and minimum disparityR.x values for the aligned images.
Positive | positiveParallax | 100 | Sets the maximum positive parallax, in pixels. With positive parallax, pixels in the left image are to the left of the corresponding pixels in the right image, and objects appear behind the screen plane. Positive parallax is defined by the minimum disparityL.x and maximum disparityR.x values for the aligned images.
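Conceptually, the limit check can be sketched as below (illustrative Python, not the node's implementation); the sign convention for disparityR.x follows the descriptions above, negative in front of the screen plane and positive behind it:

```python
# Illustrative sketch (not Ocula code) of the parallax-limit check:
# disparity x components outside [Negative, Positive] are flagged as
# incorrect vectors, mirroring what 'Enforce parallax limits' removes.
def violating_vectors(disparity_r_x, negative=-100.0, positive=100.0):
    """Return the indices of disparityR.x values outside the limits."""
    return [i for i, d in enumerate(disparity_r_x)
            if not (negative <= d <= positive)]

# Example: one vector exceeds the default +100 px positive limit.
sample = [-20.0, 5.0, 130.0]
```

`violating_vectors(sample)` would flag only the third vector, which is the kind of spurious match the limits are designed to remove.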
Ocula 3.0 - DisparityGenerator from The Foundry on Vimeo.
Welcome to Ocula from The Foundry. My name's Jon and, in this tutorial, we're going to take a look at setting up and reviewing the DisparityGenerator node in Ocula 3.0. Disparity is used throughout Ocula: it is the vector field that connects pixels between the left view and the right view, so it informs Ocula about how to match images, and how to update one view along with the other. It's used to align plates, colour match, focus match, create z passes, change the interaxial separation, change depth, rebuild views, correlate roto between eyes, and also to do retiming. So, you can calculate disparity upfront for your shot, bake it out, and then use it in a comp. Let's have a quick look at some of these operations.
The VerticalAligner node in Ocula can do a per-pixel update on the input footage to align it horizontally, based on the vertical components of disparity. We can see this if we switch to Anaglyph for this footage. If I look at the footage here, and compare that to the updated, vertically aligned footage, it’s done a per-pixel update.
The ColorMatcher node also uses disparity to pull the colour of one view onto the other to correct any differences between the two views. So, here’s the input footage left and right, and here's the corrected footage. The colour is now matched, so this is using the colour from the right view to update the colour on the left, based on those disparity vectors. Similarly, the focus matcher can pull the appearance of one view onto the other to update focus. So, here’s the input footage, which is slightly out of focus on the left view, compared to the right. We can update that left view to match the focus on the right by pulling the appearance across using disparity. You can also take those disparity vectors, if we have a match-moved camera, and convert them to a z pass, convert them to depth values to use in a comp. So, here we have a z pass, based on those disparity vectors, triangulated to create 3D points using a match-moved camera for the shot. The InteraxialShifter node in Ocula will shift images, using disparity, to change the depth in the scene. So, here’s our input footage, here’s our new interaxial separation, and we have changed the depth. It’s easier to see if we switch to the Parallax Histogram in our DisparityViewer node. Here’s our input footage and here’s our new parallax, showing a squeezed depth with a change in the interaxial separation.
Now, you can also use disparity to rebuild one view from the other, using the NewView node in Ocula. Here we're using the left image to rebuild the right by pushing the left pixels across using the disparity vectors. So, that’s the rebuilt right view, compared to the original right view. You can also use disparity to push across rotos. Here, I have created a roto in the left eye and I have pushed it across to the right eye. And, finally, we use disparity in Ocula to create motion vectors that are consistent in the left and right eye. So, the disparity connects the motion in the left and the right, making sure it’s consistent for any retiming we do with it. Disparity is critical to all Ocula operations. Let’s have a look at setting up the DisparityGenerator now, and reviewing those disparity vectors.
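The "pushing pixels across" idea behind NewView can be sketched as a toy forward warp (a 1D greyscale scanline for brevity; real Ocula rendering handles filtering and occlusions, and this is not its algorithm):

```python
# Toy forward-warp sketch: push each left-view pixel to its right-view
# position along the horizontal disparity. Not Ocula's renderer.
def rebuild_right(left, disparity_x, width):
    """Scatter left-scanline pixels into a right-view scanline.
    disparity_x[i] is the shift taking left pixel i to the right view."""
    right = [0.0] * width
    for i, value in enumerate(left):
        j = int(round(i + disparity_x[i]))
        if 0 <= j < width:
            right[j] = value
    return right
```

Positions that no left pixel lands on stay empty; these correspond to the occluded regions that need attention after a rebuild.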
In the DisparityGenerator, you need to define the Views to use for the left and right eye. It also has two mask inputs: an Ignore Mask to exclude parts of the image from the disparity calculation - for example, where the disparity is corrupted, or if you want to exclude a foreground element to be able to pull out the disparity behind it. You also have a Foreground Mask, which defines the region where the disparity is calculated, so you can roto out a particular part of the image and calculate disparity there to pull out disparity layers in the shot. You also have the parameters that control the disparity calculation: Strength defines how well the image is matched - you can increase the Strength to force the disparity to match the images. Consistency defines how much the left and right disparity vectors agree. Alignment is the weighting on how much the disparity vectors satisfy the alignment data delivered by a Solver upstream - you can increase that, or even set it to 0 to ignore the alignment data. You have a Sharpness parameter to define how distinct objects are in the disparity vectors, so you can increase Sharpness to pull out disparity boundaries between the different layers in the scene. You can also over-smooth the disparity vectors by increasing the Smoothness parameter. Finally, if you need to set limits on how the disparity is calculated, you can enforce those limits by clicking Enforce parallax limits and setting the Negative and Positive limits in pixels.
Let's have a look at the disparity that's been calculated. We can switch the Viewer to show disparity and, here, disparity is being shown as an RGBA image. I can turn that down to see it more easily or, in fact, what I have done here is copied disparity over to RGBA, and done a Grade to make it more easily visible. Essentially, it's showing the disparity left vector as red and green (the x and y components), and the disparity right vector as blue and alpha. Disparity left is the vector that pulls the left image onto the right, so it starts at a right pixel and points at the corresponding left pixel. Disparity right is the vector that pulls the right image onto the left, so it starts at a left pixel and points at the corresponding right pixel. Using this image, we can see how the parameters affect the disparity calculation. If I switch this to a Strength of 2, I want it to force the image matcher more. You can see it starts to get a little bit more noisy, forcing the image matching, but it's likely to be more accurate. So, increase the Strength if you're not getting accurate matching.
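The channel layout described here - disparity left in red/green, disparity right in blue/alpha - can be sketched as a simple pack/unpack (pure illustration of the layout, not Nuke channel code):

```python
# Sketch of the RGBA packing described above: disparityL -> (red, green),
# disparityR -> (blue, alpha). Illustrative only.
def pack_disparity(d_left, d_right):
    """Pack two (x, y) disparity vectors into one RGBA tuple."""
    return (d_left[0], d_left[1], d_right[0], d_right[1])

def unpack_disparity(rgba):
    """Recover (disparityL, disparityR) from a packed RGBA tuple."""
    r, g, b, a = rgba
    return (r, g), (b, a)
```

This is why viewing raw disparity as an image looks odd until you grade it: the channel values are vector components, not colours.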
Another way of getting accurate matching is to increase the Sharpness slightly, because it allows the disparity to separate the matching for different elements in the shot. If I turn the Sharpness on here, and increase it a bit, you can see it's picked up the different object boundaries inside the disparity and allowed the disparity to separate between different layers. And, finally, the other thing to tune here is Smoothness. If you need to smooth out those vectors, you can turn on over-smoothing here. When you work with disparity vectors, it's a good idea to bake them out upfront to use in your comp. So, you can render these out with different settings for your parameters - the default setting, a strong setting, a sharp setting, and a smooth setting - and then pick and choose between the disparity vectors without having to go back in and tune.
Now, let's have a look at quality checking and reviewing disparity. The best way to check your disparity vectors is to use them in an Ocula tree. My favourite is to do a NewView, pulling the left pixels onto the right, or vice versa. So, here's the original right image, and here we have rebuilt that right image by pulling the left pixels across on top of the right using disparity. If the original right and our rebuilt right line up, those disparity vectors must be working well. So, the images look well-aligned and we can check that here by joining the rebuilt and the original image, and putting it into the StereoReviewGizmo, set to Difference. You can also look at it as a CheckerBoard. Essentially, with the Difference tool, all you are looking at here is the colour differences between the left and right eye, because they now line up nicely with the disparity vectors. If the disparity wasn't quite right, you would see the images misaligned. You can review the disparity using the DisparityViewer node in Ocula. Here, we can see disparity overlaid as vectors on the image. You can switch this to show a Parallax Histogram, summarising the depth in the scene, and comparing the disparity against any parallax limits you have set for the shot. If you are violating those limits, you can switch to a violation overlay (Parallax Violation) to see where that is happening in the image. All these options render out, so you can bake them out with your disparity vectors, and the Parallax Histogram is quite handy here in order to test any reconvergence happening in the comp tree. You can calculate disparity upfront for your plates, do your comp work, and then recalculate the disparity and view it again as a Parallax Histogram to see if there has been a shift in convergence in the scene. So, here, I have just put in a ReConverge node and, you can see, as I move it around, the Parallax Histogram will shift, showing a shift in the depth of the scene.
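The Difference check reduces to a simple residual: if the rebuilt view matches the original, the per-pixel differences are small. A minimal sketch of that idea (flat lists of grey values stand in for images; this is not the StereoReviewGizmo's code):

```python
# Toy version of the Difference quality check: a small residual between
# the rebuilt view and the original view means the disparity vectors
# are aligning the images well.
def mean_abs_difference(rebuilt, original):
    """Mean absolute per-pixel difference between two equal-size images."""
    assert len(rebuilt) == len(original)
    return sum(abs(a - b) for a, b in zip(rebuilt, original)) / len(original)
```

What remains in a good result is only genuine colour difference between the eyes, which is exactly what the transcript describes seeing in the Difference view.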
So, that wraps up this tutorial on the DisparityGenerator in Ocula 3.0. We have looked at where Disparity is used in Ocula, the parameter settings, and how to tune them. We have also looked at quality checking using the NewView node in Ocula, and also reviewing with the DisparityViewer.
Ocula 3.0 - DisparityGenerator Layers from The Foundry on Vimeo.
Hello, my name is Dan Ring, and I am going to show you some advanced techniques for working with, and correcting, disparity maps. The most common use for disparity maps is building new views. We are going to introduce some handy tricks you can use in Ocula 3.0, and in particular, how to build better new views that require far less touch-up. Now, imagine the scenario where you have made significant changes to one view - let's call it the 'hero' view - and you now want to apply those same changes to the other view. When I say significant changes, this could mean that you have created a clean background plate, removed a camera dolly, painted or composited in something novel, colour-treated a region, and so on - anything that takes a considerable amount of time and that you wouldn't want to do twice for the second eye. In this example script I am going to use a CheckerBoard tracked to the ground to represent significant changes made through our left eye. We then want to push these changes to our other view, in this case, our right eye. The CheckerBoard makes it much easier to see when things go wrong. We are also going to assume that some kind person has drawn roto masks for the dancers for us in both views. This allows us to composite the dancers back onto any pushed background plates much more easily.
Let’s start off by looking at how you might do this without Ocula. One way is to use a tracked camera setup with geometry of the set, and then project your ‘hero’ view onto the scene. You can then use a ScanlineRender to get a new view. Looking at the results, here, you can see this does a pretty good job. However, it does have two important and non-trivial requirements: the first being that you have a good track of your scene, and the second that you have accurate scene geometry. In our CheckerBoard example, the scene geometry wouldn't need to be that complicated - a simple card is probably sufficient - but for anything more complicated like the sides of the buildings, for example, you would certainly need more detailed geometry, and in many cases, that’s just not possible.
Now, let's try pushing from the left to the right eye in the standard Ocula way, with DisparityGenerator and NewView nodes. As with most Ocula pipelines, we start with O_Solver, with relevant keyframes set and any erroneous matches thrown out. We then go downstream and use an O_DisparityGenerator to get some disparity for us. Note that we are generating the disparity on the untreated plates. We then use a Shuffle node to bring in our CheckerBoard left eye, before supplying it to an O_NewView node to give us a new right eye. In order to tell NewView that we want to build a right eye from the left, we set the Inputs here to Left, and we move the Interpolate Position all the way to 1. We will use these same settings for every NewView node in this tutorial. If we look at this for one frame, you can see that it didn't do a good job. Looking at the comp with the dancers on, you can see by the wavy lines of the CheckerBoard that the motion of the dancers is severely influencing the background disparity. This is exactly what we don't want. Instead, we want a way to generate the background disparity separately from the foreground, and we can do this with the new Ignore Mask feature in Ocula's O_DisparityGenerator. This tells the disparity generator not to calculate any disparity in the specified region and, instead, to fill it using disparity from around its border.
In this example, I have drawn a crude roto around the foreground dancers. Now, if we compare the disparity from the original generator and the ignore-masked generator, you can see the dancers have been effectively painted out, giving us a perfectly smooth, usable background disparity plate. If we then supply this disparity to a NewView node, you can see it's done a much better job. Straight lines remain straight after the push between eyes. If we composite the dancers back in, you can see the CheckerBoard is no longer affected by the dancers and remains smooth over time. Let's now swap between the original background plate reconstruction and our layer-enhanced reconstruction to see the effect more clearly. This example shows how powerful working with Ocula's disparity layers can be, not to mention how easy it is to push background changes between views without the need for camera tracking and projections onto geometry.
In the next part of this tutorial, we are going to develop the techniques shown earlier to get better new views for both background and foreground regions. For this scenario, imagine you have been working heavily fixing up and treating both the dancers and the background elements for the left eye, so much so that the right eye no longer bears any resemblance to the left. You now want to push these edits to your other view. While the camera tracking and projection workflow will work for the background elements, it probably won't work for the foreground. So, naturally, we turn to Ocula's NewView workflow to help us out. First off, let's look at the simplest, most standard way of creating a new view from disparity, by connecting an O_NewView node to an O_DisparityGenerator. As you can see, it hasn't done a good job. We are getting a lot of distortion and artefacts around the edges of the dancers, and parts of the background aren't warping correctly, such as the centre of the road and the kerb. To fix this, we are going to make clever use of the disparity generator's ignore and foreground masks to extract disparity from the foreground and background separately, then combine the results at the end to give a much cleaner build of the other view. Earlier in this tutorial, we saw how to extract background disparity using the ignore mask, so let's use that as a place to start. Let's apply the same rough roto mask as before to the O_DisparityGenerator. Looking at the NewView node, we can now see the background is handled better and we have nice straight edges along the road's kerbs and centre line. However, because we decided to ignore the area of the dancers, we have essentially thrown away their disparity, so we now have to estimate that separately. Also notice that we are shuffling in our dancers' alpha mask before estimating the new view. This will be used later on.
To accurately pull disparity for the dancers, we are going to supply the alpha mask of the dancers as a foreground mask to the O_DisparityGenerator node. This explicitly tells the disparity generator that it must match the regions in the mask across both views. Now, when we create a new view from the foreground disparity, we can be sure that the background is not influencing the dancers' disparity. For example, looking at the NewView you can see that the dancers are the only regions that are not distorted, and this is exactly what we want. Now, let's comp the foreground and the background together using the dancers' alpha. Apart from the ghosting effect around the dancers, the new plate looks very nice. In particular, we are not seeing any weird distortions around the boundaries of the dancers or the background. The ghost effect is due to the fact that these regions are occluded in the left view, so when we build the right view from the left, we don't have the correct data from which to reconstruct. Although most of our 'hero' view is transferred over correctly, these ghost regions will need to be addressed by a comp artist, possibly applying the same treatment that was applied to the 'hero' view. With some simple mask manipulation, you can quickly identify these regions.
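The final comp step here is a standard alpha-weighted "over": foreground new view over background new view, weighted by the dancers' alpha. A minimal sketch with single grey values per pixel (illustrative, not a Nuke Merge node):

```python
# Sketch of the foreground/background comp described above:
# out = fg * alpha + bg * (1 - alpha), per pixel. Pixels are plain
# grey values here for brevity.
def comp_over(fg, bg, alpha):
    """Composite a foreground image over a background using a matte."""
    return [f * a + b * (1.0 - a) for f, b, a in zip(fg, bg, alpha)]
```

Where the alpha is 1 the foreground (dancer) disparity build wins, where it is 0 the ignore-masked background build shows through, which is why the two separately estimated layers combine cleanly.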
To end this tutorial, I am going to cheat a bit and pretend that I painted in these missing bits by simply copying them over from the original right eye. Although the colours are a little off, as we haven't done any colour matching, you can see how nicely the original right eye boundaries line up with our new view reconstruction. This lets us know that our plate rebuild workflow has done a good job. You can try out these techniques yourself with the accompanying scripts and footage available on The Foundry’s website. This concludes this Ocula tutorial on working with disparity maps.