The O_VerticalAligner plug-in lets you warp views vertically so that their corresponding features align horizontally. The Vertical Skew and Local Alignment options allow you to warp the views while keeping the horizontal position of each pixel the same so that there is no change in convergence.
Connection Type | Connection Name | Function
Input | Solver | If you’re using the Global Alignment mode and the Source sequence doesn’t contain features that O_Solver is able to match well, you can use O_Solver on another sequence shot with the same camera setup. If you do so, connect that O_Solver to this input.
Input | Source | A stereo pair of images. In the Global Alignment mode, if you’re not using the Solver input, the images should be followed by an O_Solver node. In the Local Alignment mode, this input needs an O_Solver node and a disparity field. You can create a disparity field using O_DisparityGenerator.
Control (UI) |
Knob (Scripting) |
Default Value |
Function |
O_VerticalAligner Tab |
Views to Use |
viewPair |
Dependent on Source |
Sets the two views you want to align. These views are mapped to the left and right eyes.
Align |
alignWhat |
Both Views |
Sets how to move the views to align the images:
• Both Views - move both views halfway.
• Left to Right - move the left view to line up with the right.
• Right to Left - move the right view to line up with the left.
Warp Mode |
warpMode |
Global Alignment |
Sets the mode to use for vertical alignment:
• Global Alignment - applies a global image transform to align the feature matches generated by an upstream O_Solver node. You can use the Global Method menu to choose how this is done. With all methods, multiple O_VerticalAligner nodes concatenate with a single filter hit. You can also analyse to create Corner Pin and Camera information in all methods except Vertical Skew.
• Local Alignment - rebuilds the view(s) to remove vertical disparity calculated by an upstream O_DisparityGenerator. Use this mode to create a per-pixel correction when there are local distortions in the mirror or lens, or changes in alignment with depth.
Global Method |
alignmentMethod |
Vertical Skew |
Selects the method you want to use to align the images when Warp Mode is set to Global Alignment:
• Vertical Skew - align the features along the y axis using a skew. This does not move the features along the x axis.
• Perspective Warp - do a four-corner warp on the images to align them on the y axis. This may move the features slightly along the x axis.
• Rotation - align the features vertically by rotating the entire image around a point. The centre of the rotation is determined by the algorithm.
• Scale - align the features vertically by scaling the image.
• Simple Shift - align the features vertically by moving the entire image up or down.
• Scale Rotate - align the features vertically by simultaneously scaling and rotating the entire image around a point. The centre of the rotation is determined by the algorithm.
• Camera Rotation - align the features by first performing a 3D rotation of both cameras so that they have exactly the same orientation and a parallel viewing axis, and then reconverging the views to restore the original convergence. This method requires the camera geometry provided by an upstream O_Solver node. For best results, use the O_Solver Camera input to provide the information for the shooting cameras. If a Camera input is connected, the camera data is used per frame (rather than only taken from keyframes).
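To make the Vertical Skew and Simple Shift ideas concrete, here is a minimal sketch of how a skew-plus-offset correction could be estimated from feature matches by least squares. The function names, the left-to-right alignment direction, and the data layout are illustrative assumptions; this is not Ocula’s actual solver.

```python
# Hypothetical sketch of a Vertical Skew estimate from feature matches --
# NOT Ocula's solver, just the idea: find a skew k and offset c such that
# y_left + k*x + c best matches y_right, leaving every x untouched.

def fit_vertical_skew(matches):
    """matches: list of (x, y_left, y_right) feature correspondences.
    Returns (k, c) minimising sum((y_left + k*x + c - y_right)^2)."""
    n = len(matches)
    sx = sum(m[0] for m in matches)
    sxx = sum(m[0] * m[0] for m in matches)
    d = [m[2] - m[1] for m in matches]            # vertical disparities
    sd = sum(d)
    sxd = sum(m[0] * di for m, di in zip(matches, d))
    det = n * sxx - sx * sx                       # 2x2 normal equations
    k = (n * sxd - sx * sd) / det
    c = (sxx * sd - sx * sxd) / det
    return k, c

def apply_skew(k, c, x, y):
    """Warp a left-view point; x is unchanged, so convergence is preserved."""
    return x, y + k * x + c
```

With k = 0, the same fit reduces to a Simple Shift (a constant vertical offset).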
filter |
filter |
Cubic |
Select the filtering algorithm to use when remapping pixels from their original positions to new positions. This allows you to avoid problems with image quality, particularly in high contrast areas of the frame (where highly aliased, or jaggy, edges may appear if pixels are not filtered and retain their original values). NOTE: This control is only available if you have set Warp Mode to Global Alignment.
• Impulse - remapped pixels carry their original values.
• Cubic - remapped pixels receive some smoothing.
• Keys - remapped pixels receive some smoothing, plus minor sharpening.
• Simon - remapped pixels receive some smoothing, plus medium sharpening.
• Rifman - remapped pixels receive some smoothing, plus significant sharpening.
• Mitchell - remapped pixels receive some smoothing, plus blurring to hide pixelation.
• Parzen - remapped pixels receive the greatest smoothing of all filters.
• Notch - remapped pixels receive flat smoothing (which tends to hide Moiré patterns).
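Several of these filters are commonly built from the Mitchell-Netravali (BC-spline) cubic family. The sketch below uses the textbook (B, C) values; mapping Nuke’s filter names onto this family is an assumption for illustration, not something the table above states.

```python
# Sketch of the Mitchell-Netravali (BC-spline) cubic family. The (B, C)
# pairs are the standard textbook values; treating Keys, Mitchell and
# Parzen as members of this family is an illustrative assumption.

def bc_kernel(x, B, C):
    """Mitchell-Netravali cubic reconstruction kernel, support [-2, 2]."""
    x = abs(x)
    if x < 1.0:
        return ((12 - 9 * B - 6 * C) * x ** 3
                + (-18 + 12 * B + 6 * C) * x ** 2
                + (6 - 2 * B)) / 6.0
    if x < 2.0:
        return ((-B - 6 * C) * x ** 3
                + (6 * B + 30 * C) * x ** 2
                + (-12 * B - 48 * C) * x
                + (8 * B + 24 * C)) / 6.0
    return 0.0

KEYS     = (0.0, 0.5)        # Catmull-Rom: smoothing plus minor sharpening
MITCHELL = (1 / 3, 1 / 3)    # trades ringing for slight blur
PARZEN   = (1.0, 0.0)        # cubic B-spline: the greatest smoothing
```

A useful property of every member of this family is partition of unity: the taps at integer spacing always sum to 1, so flat image areas pass through unchanged.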
Analyse Sequence |
analyse |
N/A |
Click to analyse the sequence and create a corner pin or aligned camera output. Analyse Sequence creates the output data for all global methods except Vertical Skew (the default). Then, use Create Corner Pin, Create Camera, or Create Rig.
Create Corner Pin |
createPin |
N/A |
Click to create a corner pin representing the result of O_VerticalAligner once you have clicked Analyse Sequence. You can use multiple O_VerticalAligner nodes to produce the desired alignment, then analyse on the final node to create a single corner pin representing the concatenated transform. This works with all global methods except Vertical Skew (the default).
Create Camera |
createCamera |
N/A |
If you have a pre-tracked Nuke stereo camera connected to the Camera input of the upstream O_Solver and you have clicked Analyse Sequence, you can click this to create a vertically aligned camera from the analysis. This gives you a single Camera node with split controls to hold the left and right view parameters. This works with all global methods except Vertical Skew (the default).
Create Rig |
createRig |
N/A |
If you have a pre-tracked Nuke stereo camera connected to the Camera input of the upstream O_Solver and you have clicked Analyse Sequence, you can click this to create a vertically aligned camera rig from the analysis. This gives you two Camera nodes and a JoinViews node that combines them. This works with all global methods except Vertical Skew (the default).
Output Tab |
Four Corner Pin |
Bottom Left xy |
pinBL |
0,0 |
These controls represent the 2D corner pin that can be applied to the input image to reproduce the result of O_VerticalAligner (with all global methods except Vertical Skew). This allows you to do the analysis in Nuke, but take the values to a third-party application, such as Baselight, and align the image or camera there.
• Bottom Left - the coordinates of the bottom left corner pin calculated during the analysis pass.
• Bottom Right - the coordinates of the bottom right corner pin calculated during the analysis pass.
• Top Right - the coordinates of the top right corner pin calculated during the analysis pass.
• Top Left - the coordinates of the top left corner pin calculated during the analysis pass.
Bottom Right xy |
pinBR |
0,0 |
Top Right xy |
pinTR |
0,0 |
Top Left xy
pinTL |
0,0 |
Transform Matrix |
transformMatrix |
N/A |
Provides the concatenated 2D transform for the vertical alignment. The matrix is filled when you click Analyse Sequence on the O_VerticalAligner tab. There is one matrix for each view in the source. |
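Four corner-pin positions determine a projective transform (a 3x3 homography), which is why the analysis can be exported either as corners or as a matrix. The sketch below uses Heckbert’s square-to-quad construction; the unit-square convention and corner ordering are illustrative assumptions (in practice you would normalise pixel coordinates by the image size), not the exact form Ocula exports.

```python
# Sketch: turn four corner-pin positions into a single 3x3 projective
# matrix via Heckbert's square-to-quad mapping. Unit-square coordinates
# and corner order are illustrative assumptions.

def corner_pin_matrix(bl, br, tr, tl):
    """Homography mapping (0,0)->bl, (1,0)->br, (1,1)->tr, (0,1)->tl."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = bl, br, tr, tl
    sx = x0 - x1 + x2 - x3
    sy = y0 - y1 + y2 - y3
    dx1, dy1 = x1 - x2, y1 - y2
    dx2, dy2 = x3 - x2, y3 - y2
    det = dx1 * dy2 - dx2 * dy1        # non-zero for a non-degenerate quad
    g = (sx * dy2 - sy * dx2) / det
    h = (dx1 * sy - dy1 * sx) / det
    return [[x1 - x0 + g * x1, x3 - x0 + h * x3, x0],
            [y1 - y0 + g * y1, y3 - y0 + h * y3, y0],
            [g,                h,                1.0]]

def warp(m, u, v):
    """Apply the homography to a point (column-vector convention)."""
    w = m[2][0] * u + m[2][1] * v + m[2][2]
    return ((m[0][0] * u + m[0][1] * v + m[0][2]) / w,
            (m[1][0] * u + m[1][1] * v + m[1][2]) / w)
```

This also shows why Vertical Skew is excluded from the corner-pin output in reverse: any four corners define a homography, but the skew is applied as a separate warp that the exported pin does not capture.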
Python Tab |
before render |
beforeRender |
none |
These functions run prior to starting rendering in execute(). If they throw an exception, the render aborts. |
before each frame |
beforeFrameRender |
none |
These functions run prior to starting rendering of each individual frame. If they throw an exception, the render aborts. |
after each frame |
afterFrameRender |
none |
These functions run after each frame is finished rendering. They are not called if the render aborts. If they throw an exception, the render aborts. |
after render |
afterRender |
none |
These functions run after rendering of all frames is finished. If they throw an exception, the render aborts.
render progress |
renderProgress |
none |
These functions run during rendering to determine progress or failure. |
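The callback order the rows above describe can be illustrated with a plain-Python sketch. This is not the Nuke API (Nuke evaluates these knobs as Python snippets inside its own render loop); it only shows the lifecycle and the abort-on-exception behaviour.

```python
# Plain-Python illustration of the render-callback lifecycle described
# above -- NOT the Nuke API, just the call order and abort behaviour.

def render(frames, before_render=None, before_frame=None,
           after_frame=None, after_render=None):
    calls = []
    def run(cb, name):
        if cb:
            cb()                      # any exception here aborts the render
        calls.append(name)
    run(before_render, "beforeRender")
    for frame in frames:
        run(before_frame, "beforeFrameRender")
        calls.append("render frame %d" % frame)
        run(after_frame, "afterFrameRender")   # skipped if the render aborts
    run(after_render, "afterRender")
    return calls
```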
Ocula 3.0 - VerticalAligner from The Foundry on Vimeo.
Welcome to Ocula from The Foundry, my name’s Jon. In this tutorial, we are looking at setting up and reviewing vertical alignment in Ocula 3.0. The Solver node calculates the stereo geometry of your input footage. We are going to use that information in the O_VerticalAligner node to correct the horizontal alignment of these plates. So we saw how to set up the Solver in the previous tutorial by defining some analysis keys on the timeline. On each analysis keyframe, the Solver calculates some feature matches between the left and right view. It uses the feature matching data to calculate the stereo geometry. It also uses it to calculate the alignment for the plates, and you can preview that directly inside the Solver to QC (quality-check) your feature matches. The O_VerticalAligner node applies that alignment data to the plates and interpolates it smoothly between the keys.
For this footage, I have combined several VerticalAligner nodes. If we take a look at the first one, you will see you can set the left and right view, you can choose whether to align both views or to align one view to a hero view, you can do a local per-pixel warp which we will look at later, or a global alignment which acts like a corner pin. For the global alignment, you can choose different options for how it calculates the alignments for the plates, and you can choose the filter that’s used when it warps the images. So here you will see I have got four Aligners in a row. All of the global methods concatenate, so this will only generate a single filter hit. You are free to choose the most appropriate methods to align your plates. I generally have a Simple Shift upfront to align vertically first, I then apply a Scale to the plate to take out any focal length differences, and I then have a Rotation and this will take out any roll between the cameras. You can also choose to use Camera Rotation here, and this will take out roll and pitch between cameras. Finally, I then use a Vertical Skew, which you can use to take out any keystoning with a converged camera rig. It’s important to remember though that skew can’t be represented by a corner pin. So if you do want to export a corner pin, which we will look at later, then use the Perspective Warp method as the final step.
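The point that chained global methods concatenate into a single filter hit comes down to matrix composition: each global alignment is a 3x3 transform, and the product of the chain is one matrix, so the image is resampled once. A hedged sketch (plain Python, not Ocula internals; the centre point and amounts are made up):

```python
import math

# Sketch of why chained global alignments cost one filter hit: each is a
# 3x3 matrix, matrices compose, so the chain collapses to one resample.

def shift(dy):
    return [[1, 0, 0], [0, 1, dy], [0, 0, 1]]

def scale(s, cx, cy):                  # scale about a centre point
    return [[s, 0, cx - s * cx], [0, s, cy - s * cy], [0, 0, 1]]

def rotate(rad, cx, cy):               # rotate about a centre point
    c, s = math.cos(rad), math.sin(rad)
    return [[c, -s, cx - c * cx + s * cy],
            [s,  c, cy - s * cx - c * cy],
            [0,  0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(m, x, y):
    w = m[2][0] * x + m[2][1] * y + m[2][2]
    return ((m[0][0] * x + m[0][1] * y + m[0][2]) / w,
            (m[1][0] * x + m[1][1] * y + m[1][2]) / w)
```

Applying shift, then scale, then rotation step by step lands each point exactly where the single composed matrix does, so only one resampling (and one filter hit) is needed.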
Ok, let’s have a look at how the final alignment works. To QC alignment, you can use an Anaglyph node to overlay the left and right views and check for any vertical mismatches that remain in your plates. You can also use the StereoReviewGizmo that comes with Ocula. Here it’s set to show a Difference image. I am calculating the disparity for the aligned plates, and the gizmo has a ReConverge node inside. You can pick up the convergence point and move it around to check for vertical mismatches in different parts of the image. So here it looks like there’s a difference on the lights in the background. You can correct this by going back and reviewing the alignment options you chose using the VerticalAligner nodes. Alternatively, what we're going to do here is add some user-defined matches in the analysis key in O_Solver. So this locks down the alignment exactly at those user-defined feature matches. I am going to add some on the lights in the background on both sides here, taking great care to make sure they are matched exactly in the same position. We can go back to our Anaglyph node and see how well the alignment now works. The lights now match up.
It’s important to remember that the alignment is weighted to ensure the user matches are now horizontal, so if you have any change in alignment with depth, you need to place those user matches at different depths. Now we have a way to QC the alignment and correct the analysis keys in O_Solver. You can see that the ghosting effect has now disappeared in that light in the background. We now need to check the alignment in between the keys to see how well it’s interpolated. There are a lot of frames to get through here, so a trick is to look at the disparity that’s being calculated for the aligned plates. I have got an Expression node that’s set up to look at the vertical component of disparity and check it against a threshold to see if there is any significant vertical discrepancy between the left and right view. It is worth noting here that I have increased the Strength in my DisparityGenerator to make sure I do pick up any mismatches in the alignment.
I have rendered that out and I can read it back in and play it back to see if there are any bad frames that I need to check first. So there’s a small discrepancy on this frame. If I switch back, you will see here, around frame 105 and 106, that some large discrepancies are being shown. So now I actually need to go back to Solver and add a new Analysis Key to re-calculate the alignment here - interpolating isn't good enough.
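The Expression-node trick above boils down to scanning the vertical component of the aligned plates’ disparity and flagging frames that exceed a threshold. A hedged sketch of that QC pass (the threshold, data layout, and function name are illustrative, not what the Expression node literally computes):

```python
# Sketch of the QC trick described above: scan the vertical disparity of
# the aligned plates and flag frames that still contain a significant
# vertical mismatch. Threshold and data layout are illustrative.

def bad_frames(vertical_disparity_per_frame, threshold=0.5):
    """vertical_disparity_per_frame: {frame: 2D list of dy values}.
    Returns (frame, worst |dy|) pairs over threshold, worst first."""
    flagged = []
    for frame, dy_map in sorted(vertical_disparity_per_frame.items()):
        worst = max(abs(dy) for row in dy_map for dy in row)
        if worst > threshold:
            flagged.append((frame, worst))
    flagged.sort(key=lambda fw: -fw[1])   # triage the worst frames first
    return flagged
```

Any frame this flags is a candidate for a new Analysis Key in O_Solver, exactly as described above.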
Now let’s look at the Local Alignment option. This does a per-pixel update based on the vertical component of disparity. I have got a DisparityGenerator and I am calculating the vertical discrepancy between left and right views. I am correcting that locally, so there’s no vertical offset left in the plates. So here I have simulated a local distortion in the plate, for example, a distortion in the mirror of a mirror rig. This is the input footage and I have distorted this region over here. It’s exaggerated, so you can see it more easily. I am going to do a global correction as we did before. Here’s the global result. This can’t correct the local distortion in the plates, so you can see that the lights are still displaced, and if I switch to view the local correction afterwards, you can see this gets corrected. So if I switch between the two, you can see the correction is only being applied in a local region where the plate was distorted to begin with. So the local alignment can be used to correct distortions. You can also use it to take out any high-frequency jitter in the rig without having to key every frame in O_Solver. The important thing to remember though is to correct globally first, so that you are only taking out small differences with the local alignments. If you correct large vertical discrepancies with this method, it can distort the pixel aspect in your footage.
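The per-pixel idea can be sketched on a single pixel column: each view is resampled halfway towards the other by the local vertical disparity, so the correction is exactly as local as the mismatch. Nearest-neighbour sampling, the sign convention (dy = y_right - y_left), and the 1D layout are simplifications for illustration, not Ocula’s actual warp.

```python
# Toy sketch of Local Alignment on one pixel column: dy holds the
# vertical disparity (y_right - y_left) per output pixel, and each view
# is resampled halfway towards the other ("Both Views"). Simplified.

def align_column(left, right, dy):
    n = len(left)
    def sample(col, y):
        i = min(max(int(round(y)), 0), n - 1)   # clamp at the edges
        return col[i]
    new_left  = [sample(left,  y - dy[y] / 2.0) for y in range(n)]
    new_right = [sample(right, y + dy[y] / 2.0) for y in range(n)]
    return new_left, new_right
```

Where dy is zero the column passes through untouched, which is why the correction stays confined to the distorted region.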
Finally, if you have a match-moved camera for your shot, you can pull that into O_Solver before setting the analysis key. The feature matches and stereo geometry will then be based on those input cameras. You can then use the Camera Rotation method in the VerticalAligner to correct the match-moved cameras, so the plates are aligned horizontally. Here are the aligned plates. Also, in VerticalAligner you can analyse the whole sequence and create a corner pin if you want to export the alignment outside Nuke. If you have that match-moved camera on the input, you can also create an aligned rig to use with the corrected plates. You can do this with all the global alignment methods, except Vertical Skew, which you can’t represent with a camera or a corner pin. So here I have analysed the shot and created an aligned rig. If I switch to 3D, here’s the original rig and here’s the aligned rig next to it. So you can see it’s been rotated to match the aligned plates.
That wraps up this tutorial. We have taken a look at how to use the O_VerticalAligner node to align stereo footage horizontally based on the analysis keys in the O_Solver node. We have looked at how to QC the alignment, how to add user-defined matches to correct alignment at the keyframes in O_Solver, and how to use disparity for the aligned plates to review how well the alignment works when it’s interpolated between the keyframes. We have also looked at the Local Alignment option to correct small distortions, which you can also use to remove high-frequency jitter in the rig. And we have pulled in a match-moved rig to define the alignment of the plates. Finally, we saw how you can analyse a sequence to output an aligned camera rig for the plates or to create a corner pin to export outside Nuke.