RayRender

When connected to a Scene node, the RayRender node renders all the objects and lights connected to that scene from the perspective of the Camera connected to the cam input (or a default camera if no cam input exists). The rendered 2D image is then passed along to the next node in the compositing tree, and you can use the result as an input to other nodes in the script.

See also PrmanRender, ScanlineRender, Scene, and Camera.
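The following Python sketch shows one way to build this graph from a script. It is an illustration only: node class names (for example, Camera2 versus Camera3) and the RayRender input indices can differ between Nuke releases, so verify them in your version. The input order is assumed to match ScanlineRender (0 = bg, 1 = obj/scn, 2 = cam).

    import nuke

    cam = nuke.nodes.Camera2()     # render camera; class name may differ in newer releases
    geo = nuke.nodes.Sphere()      # any 3D object, or the output of a MergeGeo
    scene = nuke.nodes.Scene()
    scene.setInput(0, geo)

    ray = nuke.nodes.RayRender()
    ray.setInput(1, scene)         # obj/scn input (index assumed)
    ray.setInput(2, cam)           # cam input (index assumed)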

Inputs and Controls

Connection Type

Connection Name

Function

Input

cam

An optional camera input. The scene is rendered from the perspective of this camera. If the camera input is not connected, RayRender uses a default camera positioned at the origin and facing in the negative Z direction.

obj/scn

Either:

a Scene node that is connected to the objects and lights you want to render, or

a 3D object or MergeGeo node.

bg

An optional background input. This can be used to composite a background image into the scene and to determine the output resolution. If not used, this defaults to root.format or root.proxy_format defined in the Project Settings.

Control (UI)

Knob (Scripting)

Default Value

Function

RayRender Tab

filter

filter

Cubic

Select the filtering algorithm to use when remapping pixels from their original positions to new positions. This allows you to avoid problems with image quality, particularly in high-contrast areas of the frame (where highly aliased, or jaggy, edges may appear if pixels are not filtered and retain their original values). A scripting example follows the list.

Impulse - remapped pixels carry their original values.

Cubic - remapped pixels receive some smoothing.

Keys - remapped pixels receive some smoothing, plus minor sharpening (as shown by the negative portions of the curve).

Simon - remapped pixels receive some smoothing, plus medium sharpening (as shown by the negative portions of the curve).

Rifman - remapped pixels receive some smoothing, plus significant sharpening (as shown by the negative portions of the curve).

Mitchell - remapped pixels receive some smoothing, plus blurring to hide pixelation.

Parzen - remapped pixels receive the greatest smoothing of all filters.

Notch - remapped pixels receive flat smoothing (which tends to hide moire patterns).

Lanczos4, Lanczos6, and Sinc4 - remapped pixels receive sharpening which can be useful for scaling down. Lanczos4 provides the least sharpening and Sinc4 the most.
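The filter can also be set from a script using the knob name listed above. This is an illustration only; the node name is hypothetical and the value string is assumed to match the dropdown label:

    ray = nuke.toNode('RayRender1')       # hypothetical node name
    ray['filter'].setValue('Lanczos4')    # sharper filtering, useful when scaling down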

projection mode

projection_mode

render camera

The projection modes are listed below (a scripting example follows the list):

perspective - have the camera’s focal length and aperture define the illusion of depth for the objects in front of the camera.

orthographic - use orthographic projection (projection onto the projection plane using parallel rays).

spherical - have the entire 360-degree world rendered as a spherical map. You can increase tessellation max to increase the accuracy of object edges as they are warped out of line, but this takes longer to render.

render camera - use the projection type of the render camera.
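For example, to render the full 360-degree scene as a spherical (lat-long) map from a script (node name hypothetical, value string assumed to match the dropdown label):

    ray = nuke.toNode('RayRender1')
    ray['projection_mode'].setValue('spherical')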

stochastic samples

stochastic_samples

0

Sets the number of samples, per pixel, to use in stochastic estimation (zero is disabled). Lower values result in faster renders, while higher values improve the quality of the final image.

Stochastic sampling is based on Robert L. Cook’s Stochastic Sampling in Computer Graphics, published in ACM Transactions on Graphics, Volume 5, Number 1, January 1986.

Note:  For motion blur, it is recommended that you adjust the samples control on the MotionBlur tab instead. This also provides anti-aliasing by jittering the sample point.
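As an illustrative script-side sketch (the node name and value are examples only), stochastic sampling can be enabled with:

    ray = nuke.toNode('RayRender1')          # hypothetical node name
    ray['stochastic_samples'].setValue(4)    # higher values improve quality; 0 disables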

intersection epsilon

triangle_intersection_epsilon

0.000035

Sets the error threshold for the triangle ray intersection calculations.

MotionBlur Tab

interpolate animation

interpolate_animation

disabled

When enabled, interpolate between animation keyframes during the shutter aperture.

When disabled, no interpolation is calculated.

Enabling interpolation can decrease the number of keyframes and stochastic samples required to produce motion blur, but may introduce deviation from the motion direction.

samples

samples

1

Sets the number of keyframes used to reconstruct motion blur during the shutter aperture.

uniform distribution

uniform_distribution

disabled

When enabled, use a uniform temporal distribution of scenes to sample. This generates more accurate results for stochastic multisampling.

shutter

shutter

0.5

Enter the number of frames the shutter stays open when motion blurring. For example, a value of 0.5 corresponds to half a frame.

shutter offset

shutteroffset

start

This value controls how the shutter behaves with respect to the current frame value. It has four options, and a worked example follows the list:

centred - center the shutter around the current frame. For example, if you set the shutter value to 1 and your current frame is 30, the shutter stays open from frame 29.5 to 30.5.

start - open the shutter at the current frame. For example, if you set the shutter value to 1 and your current frame is 30, the shutter stays open from frame 30 to 31.

end - close the shutter at the current frame. For example, if you set the shutter value to 1 and your current frame is 30, the shutter stays open from frame 29 to 30.

custom - open the shutter at the time you specify. In the field next to the dropdown menu, enter a value (in frames) you want to add to the current frame. To open the shutter before the current frame, enter a negative value. For example, a value of -0.5 would open the shutter half a frame before the current frame.
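The sketch below summarizes the four modes as plain arithmetic. It illustrates the behavior described above; it is not a Nuke API call, and the function name is made up for this example.

    def shutter_interval(mode, shutter, current_frame, custom_offset=0.0):
        # Return the (open, close) frames implied by the shutter offset mode.
        if mode == 'centred':
            open_frame = current_frame - shutter / 2.0
        elif mode == 'start':
            open_frame = current_frame
        elif mode == 'end':
            open_frame = current_frame - shutter
        elif mode == 'custom':
            open_frame = current_frame + custom_offset
        else:
            raise ValueError('unknown shutter offset mode: %s' % mode)
        return open_frame, open_frame + shutter

    # With shutter = 1 at frame 30:
    # centred -> (29.5, 30.5), start -> (30, 31), end -> (29, 30)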

shutter custom offset

shuttercustomoffset

0

If the shutter offset parameter is set to custom, this parameter is used to set the time that the shutter opens by adding it to the current frame. Values are in frames, so -0.5 would open the shutter half a frame before the current frame.

match ScanlineRender shutter offset

use_scanline_shutter

disabled

When enabled, assume a sample value of 1 and a shutter offset of 0, unless a custom shutter offset is in use.

When disabled, set the required sample value.
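A hypothetical motion blur setup using the knob names listed on this tab (the node name and values are examples, and value strings are assumed to match the dropdown labels):

    ray = nuke.toNode('RayRender1')
    ray['samples'].setValue(4)                   # keyframes used to reconstruct motion blur
    ray['shutter'].setValue(0.5)                 # shutter open for half a frame
    ray['shutteroffset'].setValue('centred')
    ray['interpolate_animation'].setValue(True)  # interpolate keyframes across the shutter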

AOVs Tab

output AOV

output_shader_vectors

disabled

When enabled, the arbitrary output variables (AOVs) specified below are passed into their selected channels.

remove AOV from beauty pass

remove_from_beauty

enabled

When enabled, the specified AOVs are not included in the output of the node.

When disabled, all specified AOV channels are output.

surface point

AOV_Point

none

When output AOV is enabled, these dropdowns allow you to split out the various AOVs into specific channels for use later in the node tree (see the sketch after these controls).

surface normal

AOV_Normal

none

motion vector

AOV_Motion

none

solid color

AOV_Solid

none

direct diffuse

AOV_Direct_Diffuse

none

direct specular

AOV_Direct_Specular

none

reflection

AOV_Reflection

none

emissive

AOV_Emissive

none
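The sketch below shows one way this might be scripted: create a new layer and route the surface normal AOV into it. The layer and channel names are arbitrary examples, and whether the AOV dropdown accepts the new layer name via setValue is an assumption to verify in your Nuke version.

    import nuke

    # Hypothetical layer to receive the surface normal AOV.
    nuke.Layer('surface_normal', ['surface_normal.x', 'surface_normal.y', 'surface_normal.z'])

    ray = nuke.toNode('RayRender1')               # hypothetical node name
    ray['output_shader_vectors'].setValue(True)   # output AOV
    ray['remove_from_beauty'].setValue(True)      # keep AOVs out of the beauty pass
    ray['AOV_Normal'].setValue('surface_normal')  # surface normal dropdown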

Camera Tab

Stereo Scan Enable

stereoScan

disabled

When enabled, the controls on the Camera tab are enabled, allowing you to scan stereo footage.

When disabled, the controls on the Camera tab are disabled.

Left View

leftView

N/A

Sets the view to use for the left eye in the output.

Right View

rightView

N/A

Sets the view to use for the right eye in the output.

Eye Separation

eyeSeparation

0.065

Determines how far apart the two views are, from a viewer's perspective. If you set the Eye Separation, or interpupillary distance (IPD), too low, objects in the scene appear crushed horizontally, but raising it too high can leave holes in the stitch.

The IPD is measured in the same units as the Rig Size control in the upstream C_CameraSolver properties, so adjust it accordingly.

Convergence Distance

convergenceDistance

100

Sets the distance to the zero parallax point, where the scene is in focus.

Falloff Type

falloffType

Cosine

Determines how pole merging is handled:

None - no IPD adjustment occurs towards the poles.

Linear - the views are merged gradually from the Start Angle specified toward the pole. Increasing the angle moves the start point toward the poles.

Cosine - the views are merged smoothly toward the poles. Reducing the Separation Falloff shifts the transition in depth towards the poles.

Start Angle

separationFalloffStartAngle

0

Sets the point at which falloff begins when Falloff Type is set to Linear.

Increasing the value pushes the merge point toward the poles; a value of 90 disables pole merging entirely.

Falloff Exponent

separationFalloffExponent

1

Sets the rate of falloff for the eye separation towards the poles when Falloff Type is set to Cosine.

A value of 1 produces smooth merging toward the poles for the left and right views.

Reducing the value pushes the merge point toward the poles; a value of 0 disables pole merging entirely.

Sample Ray From Camera

sampleRayFromCamera

disabled

When enabled, sample rays with respect to the capture radius for a camera rig.

Enable this control to match stereoscopic image stitches generated for a horizontal ring of cameras with a diameter set by the Rig Size control.

Rig Size

rigDiameter

0.1

Sets the diameter of the camera rig used to generate a corresponding stereoscopic image stitch, when Sample Ray From Camera is enabled.

Note:   The Rig Size diameter should always be greater than the Eye Separation value.
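A hypothetical stereo setup using the Camera tab knobs listed above. The node name, view names, and values are examples only, and value strings are assumed to match the dropdown labels:

    ray = nuke.toNode('RayRender1')
    ray['stereoScan'].setValue(True)
    ray['leftView'].setValue('left')
    ray['rightView'].setValue('right')
    ray['eyeSeparation'].setValue(0.065)       # IPD, in the same units as the rig size
    ray['convergenceDistance'].setValue(100)
    ray['falloffType'].setValue('Cosine')
    ray['sampleRayFromCamera'].setValue(True)
    ray['rigDiameter'].setValue(0.1)           # keep this greater than the eye separation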