ScanlineRender2

The ScanlineRender2 node renders all the objects and lights connected to its scene input from the perspective of the camera connected to the cam input (or a default camera if no cam input exists). The rendered 2D image is then passed along to the next node in the Node Graph, and you can use the result as an input to other nodes in the script.

The ScanlineRender2 node’s Properties panel has a variety of options for controlling sampling, camera/projection, motion blur, materials, lighting, and more.

ScanlineRender2 also has the ability to render reflections, refractions and more via ray tracing, giving you greater control when rendering directly from Nuke. In the Properties panel, you can adjust Ray Options, including the ability to control your Ray Depth.

Note:  You can render more complex scenes with refractions and reflections using shader nodes, such as the BasicSurface shader node with its ray visibility and indirect illumination controls.

In the Output tab, you can control whether depth is output as classic 1/Z or as world-space distance, select the Motion Vectors type, and configure AOV outputs. Plus, the ScanlineRender2 node can output deep data if there is deep data available from the scene graph.

ScanlineRender2 supports an Integrator plugin API, which you can populate with your own integrators to visualize render data. See Advanced options in the table below.

Note:  Nuke plugins named with the prefix 'slr' and suffix 'Integrator' appear in the list of available integrators to use in the node panel.
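The naming convention above can be sketched as a simple name filter. This is purely illustrative, not Nuke's actual plugin discovery code, and the plugin names used are hypothetical:

```python
def is_slr_integrator(plugin_name):
    """Return True if a plugin name follows the 'slr' prefix /
    'Integrator' suffix convention described in the note above."""
    return plugin_name.startswith("slr") and plugin_name.endswith("Integrator")

# Hypothetical plugin names, for illustration only:
print(is_slr_integrator("slrOcclusionIntegrator"))  # True
print(is_slr_integrator("MyShader"))                # False
```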

See also Camera.

For more information about ScanlineRender2 usage, see Render the Scene from 3D to 2D.

Tip:  ScanlineRender2's classic 3D system equivalent is the ScanlineRender node.

Inputs and Controls

Connection Type

Connection Name

Function

Input

cam

An optional camera input. The scene is rendered from the perspective of this camera. If the camera input is not connected, ScanlineRender2 uses a default camera positioned at the origin and facing in the negative Z direction.

obj/scn

Either:

A GeoScene or GeoMerge node that is connected to the objects and lights you want to render, or

a 3D object or prim.

bg

An optional background input. This can be used to composite a background image into the scene and to determine the output resolution. If not used, this defaults to root.format or root.proxy_format defined in the ProjectSettings.

If this input contains a depth channel, ScanlineRender2 considers it when doing Z-buffer and Z-blending calculations.

Control (UI)

Knob (Scripting)

Default Value

Function

ScanlineRender2

Sampling
Camera Samples camera_sample_mode 1 Sets the number of samples to render per pixel, to produce motion blur and antialiasing. The total number of samples is the sampling grid size squared, or gridsize*gridsize.
Custom Size (N*N) camera_samples_custom 1 When Camera Samples is set to the ‘custom’ option, this value determines the camera sample grid width/height. A value of 4 means a 4x4 grid of 16 samples.
Spatial Jitter At spatial_jitter 2 The camera sample count at which spatial (X/Y) jittering of the sampling screen location is enabled. If this is 2 and the camera samples >= 2, spatial jitter is applied to all samples.
Time Jitter At time_jitter 2 The camera sample count at which random jittering in time is enabled. If this is 2 and the camera samples >= 2, time jitter is applied to all samples.
Scene Time Offset scene_time_offset 0 Shifts the time frame of input geometry, lights and cameras while keeping the renderer at the same output frame.
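The relationship between the sample grid size and the total per-pixel sample count described above can be sketched in a couple of lines. This is an illustrative calculation, not Nuke code:

```python
def total_camera_samples(grid_size):
    """Total samples rendered per pixel for an N*N camera sample grid,
    as described in the Sampling controls: gridsize * gridsize."""
    return grid_size * grid_size

print(total_camera_samples(4))  # 16 samples for a 4x4 grid
```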
Camera/Projection
Projection projection_mode Render Camera

The preset projection modes are:

Render Camera - Take the projection mode from the camera.

Perspective - have the camera’s focal length and aperture define the illusion of depth for the objects in front of the camera.

Orthographic - use orthographic projection (projection onto the projection plane using parallel rays).

Spherical - have the entire 360-degree world rendered as a spherical map. You can increase tessellation max to increase the accuracy of object edges as they are warped out of line, but this takes longer to render.

Cylindrical - have the 360-degree world horizontally cylindrically projected around a camera.

UV Unwrap - have every object render its UV space into the output format. You can use this option to cook out texture maps.

Shutter shutter 0.5 Enter the number of frames the shutter stays open when motion blurring. For example, a value of 0.5 corresponds to half a frame.
Shutter Offset shutteroffset start

Select when the shutter opens and closes in relation to the current frame value when motion blurring:

centered - to center the shutter around the current frame. For example, if you set the shutter value to 1 and your current frame is 30, the shutter stays open from frame 29.5 to 30.5.

start - to open the shutter at the current frame. For example, if you set the shutter value to 1 and your current frame is 30, the shutter stays open from frame 30 to 31.

end - to close the shutter at the current frame. For example, if you set the shutter value to 1 and your current frame is 30, the shutter stays open from frame 29 to 30.

custom - to open the shutter at the time you specify. In the field next to the dropdown menu, enter a value (in frames) you want to add to the current frame. To open the shutter before the current frame, enter a negative value. For example, a value of -0.5 would open the shutter half a frame before the current frame.
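The shutter window produced by each offset mode can be sketched as follows, assuming exactly the behavior described above. This is an illustration of the documented semantics, not the renderer's implementation:

```python
def shutter_window(frame, shutter, offset_mode, custom_offset=0.0):
    """Return the (open, close) frame times for a motion-blur shutter.

    Mirrors the modes described above: 'centered', 'start', 'end', and
    'custom' (where custom_offset is added to the current frame).
    """
    if offset_mode == "centered":
        open_t = frame - shutter / 2.0
    elif offset_mode == "start":
        open_t = frame
    elif offset_mode == "end":
        open_t = frame - shutter
    elif offset_mode == "custom":
        open_t = frame + custom_offset
    else:
        raise ValueError("unknown shutter offset mode: %s" % offset_mode)
    return open_t, open_t + shutter

# The examples from the text: shutter 1 at frame 30.
print(shutter_window(30.0, 1.0, "centered"))  # (29.5, 30.5)
print(shutter_window(30.0, 1.0, "start"))     # (30.0, 31.0)
print(shutter_window(30.0, 1.0, "end"))       # (29.0, 30.0)
```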

Shutter Segments shutter_segments 1 The number of ‘time segments’ used to interpolate motion blur. A segment count of 0 effectively disables motion blur, while a segment count of 1 creates a straight line blur. Increasing the number of segments beyond 1 subdivides the straight line into more time segments, which is normally only necessary for heavy rotational blur.
Shutter Bias shutter_bias 0 Biases the shutter towards shutter close or shutter open for stylized motion blur. Negative values bias towards shutter open and positive values bias towards shutter close.
Render Region render_region_mode Format Extend

When the bg input has a larger pixel box than its format size, defines how the render should handle the extended bbox area outside the format’s size.

Format - render region is the format size, and any extended bbox pixels are passed through unchanged.

Format Extend - render region is the format size, and automatically expanded to fill any extended bbox area.

Format Clip - render region is the format size, and the output bbox is clipped to the format.

Overscan overscan 0 The number of pixels to render beyond the left/right and top/bottom of the frame. Rendering pixels beyond the edges of the frame can be useful if subsequent nodes need to have access outside the frame. For example, a Blur node down the node tree may produce better results around the edges of the frame if overscan is used. Similarly, a subsequent LensDistortion node may require the use of overscan.
Motion Blur
Camera Xform camera_motion_blur on Enable or disable camera transform motion blur.
Camera Lens camera_lens_blur on Enable or disable camera lens blur.
Object Xform object_motion_blur on Enable or disable object transform motion blur.
Object Deform object_deform_blur on Enable or disable object deform motion blur.
Lights light_motion_blur on Enable or disable light transform motion blur.
Ray Options
Max Ray Depth ray_max_depth 10

Defines the maximum depth rays can ‘bounce’ to, allowing you to balance render performance and quality. Ray max depth is tested and incremented for all ray types, so the max depth can be a mix of ray types. For example, if the max depth is 4 then a ray bounce sequence like:

ray# type

1 camera

2 glossy

3 glossy

4 diffuse

5 refraction

will stop at the ‘diffuse’ ray bounce, which is ray #4 in the sequence. However, if the glossy max depth was set to only 1, then shading would stop at ray #2, terminating the sequence.

Diffuse Max ray_diffuse_max_depth 1 Diffuse rays will stop when this depth count is reached. The depth is incremented when a surface is shaded.
Reflection Max ray_reflection_max_depth 1 Glossy rays will stop when this depth count is reached. The depth is incremented when a surface is shaded.
Refraction Max ray_refraction_max_depth 2 Refraction rays will stop when this depth count is reached. The depth is incremented when a surface is shaded.
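The termination logic in the worked example above can be sketched as a walk along a bounce sequence, checking the global max depth and optional per-type limits. This is an illustrative model of the documented behavior, not the renderer's code:

```python
def last_shaded_ray(bounce_types, max_depth, per_type_max=None):
    """Return the 1-based index of the last ray that is shaded.

    Honors a global max ray depth plus optional per-type depth limits,
    mirroring the Ray Options example above.
    """
    per_type_max = per_type_max or {}
    counts = {}
    for i, ray_type in enumerate(bounce_types, start=1):
        counts[ray_type] = counts.get(ray_type, 0) + 1
        if i >= max_depth:
            return i  # global max depth reached
        if counts[ray_type] >= per_type_max.get(ray_type, float("inf")):
            return i  # this ray type's depth count reached
    return len(bounce_types)

# The sequence from the example above.
sequence = ["camera", "glossy", "glossy", "diffuse", "refraction"]
print(last_shaded_ray(sequence, 4))                 # 4: stops at 'diffuse'
print(last_shaded_ray(sequence, 4, {"glossy": 1}))  # 2: stops at ray #2
```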
Material

Texture Filter

filter

Cubic

Select the filtering algorithm to use when remapping pixels from their original positions to new positions. This allows you to avoid problems with image quality, particularly in high contrast areas of the frame (where highly aliased, or jaggy, edges may appear if pixels are not filtered and retain their original values).

Impulse - remapped pixels carry their original values.

Cubic - remapped pixels receive some smoothing.

Keys - remapped pixels receive some smoothing, plus minor sharpening (as shown by the negative -y portions of the curve).

Simon - remapped pixels receive some smoothing, plus medium sharpening (as shown by the negative -y portions of the curve).

Rifman - remapped pixels receive some smoothing, plus significant sharpening (as shown by the negative -y portions of the curve).

Mitchell - remapped pixels receive some smoothing, plus blurring to hide pixelation.

Parzen - remapped pixels receive the greatest smoothing of all filters.

Notch - remapped pixels receive flat smoothing (which tends to hide moire patterns).

Lanczos4, Lanczos6, and Sinc4 - remapped pixels receive sharpening which can be useful for scaling down. Lanczos4 provides the least sharpening and Sinc4 the most.

Lighting
(Lighting)Enable lighting_enable_mode auto

Controls whether or not lights in the scene affect the 2D output.

auto: renders with surface lighting when there are lights in the scene, or constant shading if there are no lights

on: renders with surface lighting, but if there are no lights, non-emissive surfaces appear black

off: renders constant shading, ignoring lights

Shadowing shadowing_enabled on Enable or disable shadowing cast from direct lighting.
Ambient ambient 0 Enter a value between 0 (black) and 1 (white) to change the global ambient color.
Scene Masks      
Purpose Filter prim_purpose_filter_mode all

Filter input scene prims by their purpose attribute.

all: renders all prims regardless of purpose

default: renders prims based on default settings for the stage

render: renders all prims set to render purpose

proxy: renders all prims set to proxy purpose

guide: renders all prims set to guide purpose

Selecting all disables purpose filtering so everything renders; if no purposes are enabled, nothing is rendered.

Objects (Mask)

objects_mask

//*

Specifies the mask pattern to match prim names to include in the output scene, excluding light prims such as DirectLight1.

The default mask, //*, includes everything from the root of the scene graph. You can use standard glob-style patterns, such as /*, to create masks, or use individual prim names separated by spaces. For example, /GeoCube1 /GeoCard3 includes only those prims in the output scene.

Tip:  You can also use the cog menu, the Viewer picker, or drag and drop paths from the Scene Graph to create masks.

Lights (Mask)

lights_mask

//*

Specifies the mask pattern to match light names to include in the output scene, excluding object prims such as GeoCube1.

The default mask, //*, includes everything from the root of the scene graph. You can use standard glob-style patterns, such as /*, to create masks, or use individual light names separated by spaces. For example, /DirectLight1 /PointLight4 includes only those light prims in the output scene.

Tip:  You can also use the cog menu, the Viewer picker, or drag and drop paths from the Scene Graph to create masks.
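The glob-style mask semantics described for both Scene Masks can be roughly illustrated with Python's fnmatch module. This is not Nuke's actual matcher; here `//*` is simply treated as "match everything", and the prim paths are examples from the text:

```python
from fnmatch import fnmatchcase

def mask_matches(mask, path):
    """Rough sketch of glob-style mask matching against a scene graph
    path. A mask is one or more space-separated patterns; '//*' matches
    everything under the root."""
    if mask == "//*":
        return True
    return any(fnmatchcase(path, pattern) for pattern in mask.split())

print(mask_matches("//*", "/GeoCube1"))                  # True
print(mask_matches("/GeoCube1 /GeoCard3", "/GeoCube1"))  # True
print(mask_matches("/GeoCube1 /GeoCard3", "/Sphere1"))   # False
```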

Outputs
1/Z one_over_z on

On: Outputs classic-style Nuke Z, which is 1/Z-distance. ‘No Object’ is black 0.0 and the Z value *decreases* the further from camera. Note that 1/Z is NOT normalized Z. 1/Z is easily converted back to absolute Z-distance by applying another 1/Z.

Off: Output absolute Z-distance where ‘No-Object’ = inf and the Z value increases the further from the camera. This is the world-space distance from the camera to the object.
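The relationship between the two modes above is a simple reciprocal, which can be sketched as follows (an illustrative conversion, not Nuke code):

```python
def z_to_one_over_z(z):
    """Convert absolute Z-distance to classic Nuke 1/Z; 'no object'
    (infinite distance) maps to black 0.0."""
    return 0.0 if z == float("inf") else 1.0 / z

def one_over_z_to_z(one_over_z):
    """Convert 1/Z back to absolute Z-distance by applying 1/Z again."""
    return float("inf") if one_over_z == 0.0 else 1.0 / one_over_z

print(z_to_one_over_z(4.0))           # 0.25 (nearer objects have larger 1/Z)
print(one_over_z_to_z(0.25))          # 4.0
print(z_to_one_over_z(float("inf")))  # 0.0 (no object)
```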

Motion Vectors motion_vectors_type Distance

Select how to render motion vectors:

Off: don't create any motion vectors

Distance: stores the distance between the samples in the motion vector channel

Distance Normalized: normalized distance between the samples in the motion vector channel

Velocity: stores the velocity in the motion vector channel

Velocity Normalized: normalized velocity in the motion vector channel

Surface AOVs
Surface AOVs output_aovs_group N/A Defines which surface AOVs are output.
enable aov_enable0 (0 is the first one in the list of 9) on Enables the AOV in the list.
aov source aov_source0 (0 is the first one in the list of 9) Zlinear

Source of the AOV from shading results. A list of shading variable layers, extended if geometry prims provide primvars. For example, if a geometry mesh object provides a primvar named ‘myUV’, then it appears at the end of this list along with its size (float, vec2, vec3, vec4).

Predefined shading attributes:

Cf - Geometry color (from primvar if available)

Co - Geometry opacity (from primvar if available)

presence - Surface presence (physical solidity)

Oi - Surface rgb opacity (optical transparency)

Zlinear - Linear near-projected distance (classic Z)

Zdistance - Camera ray distance

Ps - Shading position

Pg - Geometric position

Ns - Shading normal

Ng - Geometric normal

Ts - Shading tangent

UV - Surface parametric-uv coordinate (from primvar if available)

UVW - Surface parametric-w (Z for volumes, from primvar if available)

St - Texture coordinate (different than parametric-UV if modified by shader)

MotionFwd - Forward motion-vectors

MotionBwd - Backwards motion-vectors

merge mode aov_merge_mode0 (0 is the first one in the list of 9) min

Math to use when merging multiple surface samples in depth:

• premult-blend - Premult A then Under(B + A*Aa*(1-Ba)) or Over(B*(1-Aa) + A*Aa) - best for data AOVs

• blend - Under(B + A*(1-Ba)) or Over(B*(1-Aa) + A) - best for color AOVs

• plus - B + A

• min - min(B, A) - best for Z

• mid - (B + A)/2

• max - max(B, A)

unpremult aov_unpremult_mode0 (0 is the first one in the list of 9) coverage Unpremult this AOV by the coverage or alpha channel.
output layer aov_output0 (0 is the first one in the list of 9) depth Output channels to route the AOV to.
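The merge-mode formulas above can be sketched for a single channel, where a and b are the front and back sample values and aa and ba their alphas. The Under form of the blend modes is shown; this is an illustration of the documented math, not the renderer's implementation:

```python
def merge_sample(mode, a, aa, b, ba):
    """Merge front sample a over back sample b for one channel,
    following the merge-mode formulas listed above."""
    if mode == "premult-blend":
        return b + (a * aa) * (1.0 - ba)  # premult A, then Under
    if mode == "blend":
        return b + a * (1.0 - ba)         # Under(B + A*(1-Ba))
    if mode == "plus":
        return b + a
    if mode == "min":
        return min(b, a)
    if mode == "mid":
        return (b + a) / 2.0
    if mode == "max":
        return max(b, a)
    raise ValueError("unknown merge mode: %s" % mode)

print(merge_sample("min", 2.0, 1.0, 5.0, 1.0))    # 2.0 (best for Z)
print(merge_sample("plus", 0.25, 1.0, 0.5, 1.0))  # 0.75
```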

Deep Options

Drop Zero Alpha Samples

drop_zero_alpha_samples

enabled

When enabled, deep samples with an alpha value of 0 do not contribute to the output.

When disabled, deep samples with an alpha value 0 contribute to the output.

Advanced
Integrator integrator_type Default
  • With the Integrator knob set to Default, ScanlineRender2 renders using the new ray tracing architecture and acts as a standard utility renderer. You can use the options to the right to Reverse Order, switch to rendering in Uv Mode, or output only the render without the background input using Render Only mode.

  • If you switch the Integrator knob to Debug mode, it gives you access to rendering your scene in different shading contexts, such as normals and texture coordinates.

  • The final Integrator option available is Occlusion which can switch between ‘Ambient’, ‘Reflection’ and ‘Refraction’ modes to give you even more ways to render your scene.

Reverse Order reverse_order off Reverse the shading order from farthest surface to nearest surface.
Render Only render_only off Outputs only the render against black (zeros); doesn’t overlay on the background input.
Sample diagnostic_sample -1 Selects samples for diagnostic mode, if enabled.
Pixel diagnostic_pixel x=-1, y=-1 Pixel coordinates at which to test or debug information, for diagnostic mode.

Step-by-Step Guides

The ScanlineRender Node

Adding Motion Blur Using a Renderer

Rendering a 3D Scene