Cropping, Reformatting, and Transforming Deep Images

You can crop, reformat, and transform deep images in much the same way as regular images, using the corresponding deep nodes.

Note:  Since the samples at each pixel can be located at arbitrary depths, resampling during a transform may produce unexpected results, because adjacent pixels may not have samples at the same depth.

Cropping Deep Images

You can use the DeepCrop node to clip your deep image, much like the normal Crop node:

1.   Connect the DeepCrop node to the deep image you want to crop.
2.   Adjust the crop box in the Viewer in X and Y directions to define your crop area. Alternatively, define your crop area using the bbox fields in the properties panel. If you want to keep the depth samples outside the crop box, you can check the keep outside bbox box.
3.   Use the znear and zfar controls in the properties panel to crop samples in depth. If you don’t want to use either of these controls, you can disable them by unchecking the use box next to them. If you want to keep your depth samples outside of the z range defined by these controls, you should check the keep outside zrange box.
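The depth-cropping behavior described in steps 2 and 3 can be sketched as a simple filter over a pixel's deep samples. This is a conceptual illustration only, not the Nuke API; the function name, the sample representation, and the example depth values are all made up for the sketch:

```python
def crop_samples_in_depth(samples, znear=None, zfar=None, keep_outside=False):
    """Filter one pixel's deep samples by depth.

    samples      -- list of (depth, value) tuples for one pixel
    znear, zfar  -- depth limits; None stands for an unchecked 'use' box
    keep_outside -- invert the test, like 'keep outside zrange'
    """
    def inside(z):
        if znear is not None and z < znear:
            return False
        if zfar is not None and z > zfar:
            return False
        return True

    # A sample survives when its inside/outside status matches the mode.
    return [(z, v) for z, v in samples if inside(z) != keep_outside]

pixel = [(0.5, 'a'), (2.0, 'b'), (7.5, 'c')]
print(crop_samples_in_depth(pixel, znear=1.0, zfar=5.0))
# keeps only the sample at depth 2.0
print(crop_samples_in_depth(pixel, znear=1.0, zfar=5.0, keep_outside=True))
# keeps the samples at depths 0.5 and 7.5
```

Disabling a control (unchecking its use box) corresponds to passing None, so that limit is simply not tested.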

Reformatting Deep Images

DeepReformat is the Reformat node for deep data. You can use it to set your deep image’s dimensions, scale, and so on. To reformat your deep image:

1.   Connect the DeepReformat node to the deep image you want to resize.
2.   In the type dropdown, select:

to format - sets the output width and height to the selected format. Select the format in the output format dropdown. If the format does not yet exist, you can select new to create a new format from scratch. The default setting, root.format, resizes the image to the format indicated in the Project Settings dialog.

to box - sets the output width and height to dimensions you define in pixels. Enter values in the width, height and pixel aspect fields to specify the dimensions.

scale - sets the output width and height to a multiple of the input size. Use the scale slider to define the factor. The scale factor is rounded slightly, so that the output image is an integer number of pixels in the direction selected under resize type.

3.   You can specify what kind of resize you want in the resize type dropdown. Select:

none - to not resize the original.

width - to scale the original until its width matches the output width. Height is then scaled in such a manner as to preserve the original aspect ratio.

height - to scale the original so that it fills the output height. Width is then scaled in such a manner as to preserve the original aspect ratio.

fit - to scale the original so that its smallest side fills the output width or height. The longest side is then scaled in such a manner as to preserve the original aspect ratio.

fill - to scale the original so that its longest side fills the output width or height. The smallest side is then scaled in such a manner as to preserve the original aspect ratio.

distort - to scale the original so that both sides fill the output dimensions. This option does not preserve the original aspect ratio, so distortions may occur.

4.   Check the center box to center the input in the output. If you leave it unchecked, the lower left corners of the input and output are aligned.
5.   To further adjust your image’s layout, you can check the respective boxes for:

flip - to swap the top and bottom of the image.

flop - to swap the left and right of the image.

turn - to turn the image 90 degrees.

black outside - to set pixels outside the format to black.

preserve bounding box - to preserve pixels outside the output format rather than clipping them off.
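The resize type options in step 3 can be summarized as a small dimension calculation. The sketch below is a literal reading of the descriptions above, not the Nuke implementation; the function name and example sizes are invented for illustration:

```python
def resized(in_w, in_h, out_w, out_h, resize_type):
    """Return the scaled image size for each resize type."""
    sx, sy = out_w / in_w, out_h / in_h  # per-axis scale factors
    if resize_type == 'none':
        return in_w, in_h
    if resize_type == 'width':
        # Width matches the output; height preserves the aspect ratio.
        return out_w, round(in_h * sx)
    if resize_type == 'height':
        # Height matches the output; width preserves the aspect ratio.
        return round(in_w * sy), out_h
    if resize_type == 'fit':
        # The smallest input side fills its output side.
        s = sx if in_w <= in_h else sy
        return round(in_w * s), round(in_h * s)
    if resize_type == 'fill':
        # The longest input side fills its output side.
        s = sx if in_w >= in_h else sy
        return round(in_w * s), round(in_h * s)
    if resize_type == 'distort':
        # Both sides match the output; aspect ratio is not preserved.
        return out_w, out_h
    raise ValueError(resize_type)

# A 2000x1000 image resized into a 1000x1000 format:
print(resized(2000, 1000, 1000, 1000, 'width'))  # (1000, 500)
print(resized(2000, 1000, 1000, 1000, 'fill'))   # (1000, 500)
print(resized(2000, 1000, 1000, 1000, 'fit'))    # (2000, 1000)
```

The rounding in the sketch mirrors the note under scale: the output is always an integer number of pixels, so the effective scale factor may differ slightly from the requested one.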

Transforming Deep Samples

You can use the DeepTransform node to reposition the deep samples.

1.   Connect the node to the deep footage you want to transform.
2.   Use the translate x, y, and z controls to translate your samples.
3.   Scale the samples’ z depth using the zscale control. Values above 1 increase the depth, whereas values below 1 decrease it.
4.   If you connect a mask to the node’s mask input, you can use it to regulate how much of an effect the depth transformation has in different parts of the frame.
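The depth controls in steps 2 to 4 can be sketched for a single sample as follows. This is a conceptual illustration, not the Nuke API: the order of operations (scale before translate) and the linear mask blend are assumptions made for the sketch:

```python
def transform_depth(z, translate_z=0.0, zscale=1.0, mask=1.0):
    """Move one sample's depth, blending with the original by mask (0..1).

    Assumption: scale is applied before translation, and the mask value
    linearly blends the transformed depth with the untouched one.
    """
    new_z = z * zscale + translate_z
    return z + mask * (new_z - z)

print(transform_depth(4.0, translate_z=1.0, zscale=2.0, mask=1.0))  # 9.0
print(transform_depth(4.0, translate_z=1.0, zscale=2.0, mask=0.5))  # 6.5
```

With mask at 1.0 the sample is fully transformed; with mask at 0.0 it keeps its original depth, which is how the mask input confines the effect to parts of the frame.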