The O_Solver plug-in defines the geometric relationship between the two views in the input images (that is, the camera relationship, or solve). This is necessary if you want to use DisparityGenerator, VectorGenerator, or VerticalAligner further down the node tree.
| Connection Type | Connection Name | Function |
| --- | --- | --- |
| Input | Camera | A pretracked Nuke stereo camera that describes the camera setup used to shoot the Source image. This can be a camera you have tracked with the CameraTracker node or imported into Nuke from a third-party camera tracking application. This input is optional. |
| Input | Ignore | A mask that specifies areas to ignore during the feature detection and analysis. This can be useful if an area in the Source image is producing incorrectly matched features. This input is optional. |
| Input | Source | A stereo pair of images. These can either be the images you want to work on, or another pair of images shot with the same camera setup. |
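Because O_Solver is an ordinary Nuke node, the same connections can be made from Python. Below is a minimal sketch, assuming the Ocula plug-ins are installed and that input 0 is Source; the Ignore and Camera input indices are also assumptions, so check the input order on the node in your own build.

```python
import nuke

# Create a stereo Read node and the O_Solver node
# (requires the Ocula plug-ins to be installed and licensed).
source = nuke.createNode('Read')
solver = nuke.createNode('O_Solver')

# Connect the stereo Source plate. The input indices used here
# are assumptions - verify them against the node in your build.
solver.setInput(0, source)

# Optional inputs: an Ignore mask and a pretracked stereo camera.
# The node names below are hypothetical.
# solver.setInput(1, nuke.toNode('IgnoreRoto'))
# solver.setInput(2, nuke.toNode('StereoCamera'))
```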
| Control (UI) | Knob (Scripting) | Default Value | Function |
| --- | --- | --- | --- |
| O_Solver Tab | | | |
| Views to Use | viewPair | Dependent on source | Sets the two views you want to use to calculate the features and the camera relationship. These views are mapped to the left and right eye. |
| Analysis | | | |
| Analysis Key | analysisKeyframe | 0 | Displays the set keyframes from which O_Solver does the feature matching and analysis. The solves for all other frames are created by interpolating between the results on the keyframes on either side. |
| Add Key | addAnalysisKey | N/A | Click to add an analysis keyframe at the current frame. |
| Delete Key | deleteAnalysisKey | N/A | Click to delete the analysis keyframe at the current frame. |
| Delete All | deleteAnalysisKeys | N/A | Click to delete all analysis keyframes. |
| Single Solve From All Keys | singleSolve | disabled | When enabled, O_Solver calculates a single solve using all the keyframes you have set. Use this for rigs that don't change over time to get more accurate results than a single keyframe provides. |
| Features | | | |
| Mask | ignore | None | Sets the mask type used to exclude areas of the sequence: • None - none of the footage is ignored. • Source Alpha - use the alpha channel of the source clip to define which areas to ignore. • Source Inverted Alpha - use the inverted alpha channel of the source clip to define which areas to ignore. • Mask Luminance - use the luminance of the Mask input to define which areas to ignore. • Mask Inverted Luminance - use the inverted luminance of the Mask input to define which areas to ignore. • Mask Alpha - use the Mask input alpha channel to define which areas to ignore. • Mask Inverted Alpha - use the inverted Mask input alpha channel to define which areas to ignore. |
| Number | numberFeatures | 1000 | Sets the number of features to detect in each image and match between views. |
| Threshold | featureThreshold | 0 | Sets the threshold for selecting features in an image. Use a high value to select only prominent points, or a low value to spread features out across the image. |
| Separation | featureSeparation | 2 | Sets the required feature separation, forcing detected features to cover the image. It is important that the features do not cluster together. If you set Display to Keyframe Matches and see clustering, try increasing this value. |
| Display | | | |
| Display | displayType | Nothing | Sets the display mode: • Nothing - only show the Source image. • Keyframe Matches - show the features and matches for the camera relationship calculation in a Viewer overlay. • Preview Alignment - preview how well the calculated feature matches describe the alignment of the stereo camera. |
| Alignment Method | alignmentMethod | Vertical Skew | Sets the alignment method used to align the views when Display is set to Preview Alignment: • Vertical Skew - align the features along the y axis using a skew. This does not move the features along the x axis. • Perspective Warp - do a four-corner warp on the images to align them on the y axis. This may move the features slightly along the x axis. • Rotation - align the features vertically by rotating the entire image around a point. The centre of the rotation is determined by the algorithm. • Scale - align the features vertically by scaling the image. • Simple Shift - align the features vertically by moving the entire image up or down. • Scale Rotate - align the features vertically by simultaneously scaling and rotating the entire image around a point. The centre of the rotation is determined by the algorithm. • Camera Rotation - align the features by first performing a 3D rotation of both cameras so that they have exactly the same orientation and a parallel viewing axis, and then reconverging the views to restore the original convergence. For best results, use the Camera input to provide the information for the shooting cameras. |
| Match Offset | offset | 100 | Sets the offset (in pixels) applied to the aligned feature matches. You can: • increase this value to artificially increase the disparity, making it easier to see how horizontal the feature matches are. Any matches that aren't horizontal can be considered poor matches and deleted manually. • decrease this value to set the disparity of particular matches to zero and examine the vertical offset at each feature. The matches should sit on top of each other; if they are vertically offset, you know they're poor and can delete them manually. NOTE: The Match Offset control is only available when Display is set to Preview Alignment. |
| Error Threshold | alignmentError | 10 | Sets the threshold on the vertical alignment error, in pixels. When Display is set to Preview Alignment, any matches with a vertical error greater than the threshold are selected in the Viewer. This allows you to easily delete poor matches with large errors when previewing alignment at keyframes. |
| Current Frame | | | |
| Re-analyse Frame | resetFrame | N/A | Click to clear the automatic feature matches from the current frame and recalculate them. This can be useful if there have been changes in the node tree upstream of O_Solver, you have deleted too many automatic feature matches, or you want to calculate the automatic matches based on any user matches you have created. |
| Delete User Matches | deleteUserMatches | N/A | Click to delete all feature matches you have manually added to the current frame. |
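The Knob (Scripting) names in the table above can be driven directly from Python. The sketch below assumes a node called O_Solver1 already exists in the script, and that button knobs such as addAnalysisKey respond to execute(), which may vary between Nuke versions.

```python
import nuke

solver = nuke.toNode('O_Solver1')  # assumed node name

# Tune the feature controls using the knob names from the table.
solver['numberFeatures'].setValue(3000)  # detect more features
solver['featureSeparation'].setValue(4)  # force wider coverage
solver['singleSolve'].setValue(True)     # one solve from all keys

# Show the feature matches as a Viewer overlay.
solver['displayType'].setValue('Keyframe Matches')

# Add an analysis keyframe at the current frame by triggering
# the button knob (execute() on this knob type is an assumption).
solver['addAnalysisKey'].execute()
```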
Video: Ocula 3.0 - Solver, from The Foundry on Vimeo (transcript below).
Welcome to Ocula from The Foundry. My name is Jon, and in this tutorial we're taking a look at QCing (quality-checking) the Solver node in Ocula 3.0. The Solver node sits at the top of Ocula trees, and it calculates the stereo geometry for the input plates for use by the other Ocula nodes.
So this is a script to do some retiming. At the top, I have a Solver node. It's calculating the stereo geometry for these plates and delivering that as hidden metadata downstream to the other Ocula nodes I'm using here. So I'm doing some plate preparation: color matching, vertical alignment, calculating disparity and baking that out, and also calculating motion vectors to do a retime. The alignment data for the plates is calculated by the Solver and delivered downstream for use by the DisparityGenerator, to calculate disparity vectors for that alignment, and also to the Aligner, to update the alignment of the plates so they are aligned horizontally. You will notice that many Ocula nodes have a separate Solver input, so you can pipe that metadata down through different streams.
If we have a look at the Solver node in this script, we will see that some analysis keys have been created on the timeline. The Solver calculates the stereo geometry at these keyframes and interpolates it in between them. It does this by matching the input left and right images at feature points. So if I switch to look at those, it calculates features in the left eye and matches them to the right eye, and it uses that data to calculate an internal camera rig and alignment data for the plates that's delivered downstream.

So let's have a look at setting up the analysis keys and setting up the Solver node. In the Solver node, you can define which views to use for left and right, you can define analysis keys, you can define how features are detected and matched at the keys, you can define the display which you use to QC the feature matches that have been created, and you can also re-analyze or delete matches. The first thing to do is to add an analysis key to your shot. If the rig is fixed (there's no change in convergence, interaxial, or focal length), then you can use a single analysis key. The way I work is, I look for a frame where there's lots of nice texture in the image for the feature matching. So, the first frame has a lot of blank bluescreen, and it's going to be difficult to pick up features on. The last frame has something a bit more interesting, with more texture here to pick up features and match them. So let's add an analysis key here. If I press M and switch the Display to Keyframe Matches, it shows the features it picked up on the left image and the right image and matched between the two. These feature matches are used to calculate the stereo geometry and the alignment data delivered downstream.

I can QC these feature matches by hitting P to switch the Display to Preview Alignment. It has now applied the alignment data it calculated to those feature matches. In the original matches you can see some vertical offset between the left and right eye; with that alignment data applied, the offset is now horizontal. If there are any feature matches that aren't horizontal, they are likely to be bad matches. There's a Threshold here you can tune to pick out those bad matches automatically. So if I set that to something reasonable and look at some of the matches it has highlighted, flicking between the left and right eye you can see them switching up and down. I can hit the Delete key to delete all of those bad matches automatically. Alternatively, you can look at all of the individual matches, examine them, see which ones are bad yourself, select them, and delete.

There are some parameters you can tune. You can use a Mask input on the node to ignore particular areas. You can define the number of features it's going to detect and try to match between the left and right eye. I tend to leave this at 1000, but you can switch it up to maybe 3000 if you want lots and lots of features, or if you use a lot of user matches, you can turn it down. The Threshold and Separation generally don't need to be tuned, but you can play with them if you want.
Now you can use user-defined matches to lock down the alignment of your plate, so here let's delete this automatic match. I can go in and add a user-defined match (Add feature), switch eyes, put it in the right place, and check it. This user-defined feature match is given a lot more weighting in the calculation of the stereo geometry. So if I preview the alignment, that's now perfectly horizontal, and any bad automatic matches around it will be highlighted when I preview that alignment. For example, if I was worried about all of these automatic matches here and wanted to check they were OK, I could go in and add a user-defined match and put it in exactly the right place - you should be very careful about this. You can then switch back to previewing the alignment and see which of the automatic matches have been kicked out of place. Now I can delete those bad automatic matches. It has recalculated the alignment, so there's a new set of matches outside my threshold, and I can delete again to remove those. So if you want to add user matches, you can add them, and they are given extra weighting in the calculation of the alignment and stereo geometry.
I actually used three keyframes to pick up any variations in the stereo rig over the timeline, so let's just add keys at two more frames. New key, these are the matches; I hit P to preview the alignment, hit the Delete key to remove the bad matches automatically, and then I want to add one final keyframe. Let's add that now, preview the matches, look at the alignment, set the Threshold, and delete the automatic matches outside that threshold. There we go: I have three keys and I have previewed the alignment, so I have quality-checked the alignment data delivered downstream directly in the Solver node.
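If you want to script that keying pass rather than clicking through it, the same steps can be looped from Python. A sketch, with made-up frame numbers, the assumed node name O_Solver1, and the same assumption that the button knob responds to execute():

```python
import nuke

solver = nuke.toNode('O_Solver1')   # assumed node name
key_frames = [1, 60, 119]           # hypothetical keyframe choices

for frame in key_frames:
    nuke.frame(frame)                    # jump the timeline
    solver['addAnalysisKey'].execute()   # add a key at this frame

# Switch to Preview Alignment afterwards to QC each key by hand.
solver['displayType'].setValue('Preview Alignment')
```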
OK, let's have a look at a few tips and tricks for setting up the Solver node. In this shot, the camera rig is actually changing over time. If you have any metadata about the interaxial and convergence, you can look at that to see where the stereo geometry changes and where you need to set new analysis keys in the Solver to pick up the new stereo geometry. But if you don't have that metadata, you can still use Preview Alignment in the Solver to test when the stereo alignment changes. So let's see how we can do that for this shot. I'm going to add an analysis key at the first frame here, preview the alignment, and delete the bad matches. I am then going to leave the Display on Preview Alignment and switch to a new frame to check how the alignment data I calculated on my keyframe works on the new frame. And you can see that the alignment of all the feature matches is now wrong: the stereo geometry has changed over time, so I'm going to work my way back to find where the camera rig changes. So here the alignment is pretty consistent - there are a few bad matches, but I am going to keep that frame and continue working forwards. Let's check that on a new frame. It looks reasonable. Let's go a bit further forwards. Still reasonable. Let's check frame 100. OK, the geometry has definitely changed, so I am going to Add Key. That has corrected the alignment data. Let's work back to see if that has worked on the in-between frames. The geometry has changed in between, so we need to add a new key to correct that alignment data. Let's see if it has changed in between. This looks good. Let's continue working forwards. The geometry has changed, so I need to add a new key. Let's check in between. This looks like it might be OK, but actually there is a general shift in all of the feature matches in a particular direction - you can see they are all moved slightly vertically upwards - so I think the geometry is changing very slightly here, and I am going to Add Key to make sure I have locked down that alignment data. Now we have got great alignment again, and you can move forwards across the timeline to define all of the keys you need.
You can tune the input to O_Solver to help it with the feature matching. In this shot, there are dark areas of the image that are hard to pick up and match features on. If we add a key on this frame, you can see it has picked up feature matches in the central, bright, well-textured region. So I have just changed the colorspace to a log space - you could use Cineon or something like that - to pull up those dark regions. To re-match the frame, I am going to click Re-analyse Frame. You can see it has now picked up feature matches in those relatively dark regions. This is good because it has spread the feature matching out across the whole image. So feel free to tune the input to get good feature matches, as long as you don't change the location of the features it picks up, so you don't change the alignment data it calculates.
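One way to set up that kind of plate tuning is to drop a Colorspace node between the Read and the Solver. A sketch using standard Nuke nodes; the node names are hypothetical, and Cineon is just one log-style choice, as mentioned in the video.

```python
import nuke

source = nuke.toNode('Read1')       # hypothetical node names
solver = nuke.toNode('O_Solver1')

# Insert a Colorspace node to lift the dark regions before
# feature matching.
cs = nuke.createNode('Colorspace')
cs['colorspace_out'].setValue('Cineon')
cs.setInput(0, source)
solver.setInput(0, cs)

# Recalculate the matches on the tuned plate (assumes the
# resetFrame button knob responds to execute()).
solver['resetFrame'].execute()
```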
You can also calculate solves from separate footage. Here we have a fixed camera rig, and we have shot a separate bit of footage of a well-lit region that's nicely textured. The feature points here are going to be well defined, so we can get a good solve for the stereo geometry. And as the camera is fixed, I have added several analysis keys here and selected Single Solve From All Keys. It pulls all the keyframes together to deliver an even better calculation of the stereo geometry, and you can pipe that into the Solver input of other Ocula nodes. So here, if I want to calculate the disparity for this shot, I can pull in the stereo geometry from the test footage by connecting the Solver to the Solver input of the DisparityGenerator.
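Sharing that solve with another tree is just another input connection. A sketch with assumed node names; the index of the Solver input on O_DisparityGenerator is also an assumption, so check the input order in your own build.

```python
import nuke

solver = nuke.toNode('O_Solver1')                # solve from the test footage
disparity = nuke.toNode('O_DisparityGenerator1')
shot = nuke.toNode('Read2')                      # the actual shot plates

disparity.setInput(0, shot)    # Source input (assumed index)
disparity.setInput(1, solver)  # Solver input (assumed index)
```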
Finally, if you have a match-move for your shot, you can pull that camera data into the Solver node. O_Solver has a Camera input that you can connect to your match-moved cameras. And when you calculate a keyframe now, it will pull in that exact stereo geometry and use it in the feature matching. It will also deliver that stereo geometry downstream to the other Ocula nodes in your tree.
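Connecting a match-moved camera follows the same pattern. A sketch with assumed node names and an assumed input index:

```python
import nuke

solver = nuke.toNode('O_Solver1')  # assumed node names
camera = nuke.toNode('Camera1')    # the match-moved stereo camera

# Connect the camera so each new analysis key uses the tracked
# stereo geometry. The input index is an assumption.
solver.setInput(2, camera)
```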
So that wraps up this tutorial. We have covered setting up and quality-checking analysis keys. We have also looked at some tricks to test changes in stereo geometry, tuning the input footage to get the best out of feature matching, as well as using separate footage for your solves, and finally using match-moved cameras.