ScanlineRender
The ScanlineRender node renders all the objects and lights in the connected scene from the perspective of the Camera connected to the cam input (or a default camera if no cam input exists). The rendered 2D image is then passed along to the next node in the Node Graph, and you can use the result as an input to other nodes in the script.
The ScanlineRender node’s Properties panel has a variety of options for controlling sampling, camera/projection, motion blur, materials, lighting, and more.
ScanlineRender also has the ability to render reflections, refractions and more via ray tracing, giving you greater control when rendering directly from Nuke. In the Properties panel, you can adjust Ray Options, including the ability to control your Ray Depth.
Note: You can render more complex scenes with refractions and reflections using shader nodes, such as the MtlXStandardSurface node.
In the Outputs tab, you can control the world-space distance output with 1/Z, the Motion Vectors type, and AOV outputs. In addition, the ScanlineRender node can output deep data when deep data is available from the scene graph.
ScanlineRender supports an Integrator plugin API, which you can populate with your own integrators to visualize render data. See Advanced options in the table below.
Note: Nuke plugins named with the prefix 'slr' and suffix 'Integrator' appear in the list of available integrators to use in the node panel.
See also Camera.
For more information about ScanlineRender usage, see Rendering in the New 3D System.
Tip: ScanlineRender's classic 3D system equivalent is also named ScanlineRender. For New 3D System scenes, make sure you are adding the New 3D System node (found in right click node menu under 3D > 3D), not the Classic one (3D > 3D Classic).
Inputs and Controls
| Connection Type | Connection Name | Function |
| Input | cam | An optional camera input. The scene is rendered from the perspective of this camera. If the camera input is not connected, ScanlineRender uses a default camera positioned at the origin and facing in the negative Z direction. |
| | obj/scn | Either: • A GeoScene or GeoMerge node that is connected to the objects and lights you want to render, or • a 3D object or prim. |
| | bg | An optional background input. This can be used to composite a background image into the scene and to determine the output resolution. If not used, this defaults to root.format or root.proxy_format defined in the Project Settings. If this input contains a depth channel, ScanlineRender considers it when doing Z-buffer and Z-blending calculations. |
| Control (UI) | Knob (Scripting) | Default Value | Function |
| ScanlineRender Tab | |||
| Sampling | |||
| Camera Samples | camera_sample_mode | 1 | Sets the number of samples to render per pixel, to produce motion blur and antialiasing. The total number of samples is the sampling grid size squared (gridsize*gridsize). More samples mean smoother motion blur and less aliasing, but a longer render time. |
| Custom Size (N*N) | camera_samples_custom | 1 | When camera samples is set to the ‘custom’ option, this value determines the camera sample grid width/height size. A value of 4 means a 4x4 grid, that is, 16 samples. |
| Spatial Jitter At | spatial_jitter | 2 | When to enable spatial (X/Y) jittering of the sampling screen location. If this is 2 and camera samples is >= 2, spatial jitter is applied to all samples. This allows you to control at which sample levels to jitter spatial subpixel locations when anti-aliasing, so that you break up patterns for better results. |
| Time Jitter At | time_jitter | 2 | When to enable random jittering in time. The default value is 2, which means jittering to produce motion blur occurs when samples are at a value of 2 or higher. If samples are at 1, no motion blur is created, but motion vectors are still produced so that you can blur the image afterwards. |
| Scene Time Offset | scene_time_offset | 0 | Shifts the time frame of input geometry, lights, and cameras while keeping the renderer at the same output frame. This means you can slip your render in time (with subframe accuracy) without affecting the camera or the objects in the stage, which is helpful when you are trying to slip the camera to get vector motion blur aligned properly. |
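The grid-squared relationship described under Camera Samples above can be sanity-checked with a tiny sketch (the function name is illustrative, not part of the Nuke Python API):

```python
def total_camera_samples(grid_size):
    """Total samples per pixel for an N x N sampling grid.

    More samples give smoother motion blur and less aliasing,
    at the cost of longer render times.
    """
    if grid_size < 1:
        raise ValueError("grid size must be >= 1")
    return grid_size * grid_size

# A 'Custom Size (N*N)' of 4 means a 4x4 grid, i.e. 16 samples per pixel.
print(total_camera_samples(4))  # 16
```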
| Camera/Projection | |||
| Projection | projection_mode | Render Camera | The preset projection modes are: • Render Camera - take the projection mode from the camera. • Perspective - have the camera’s focal length and aperture define the illusion of depth for the objects in front of the camera. • Orthographic - use orthographic projection (projection onto the projection plane using parallel rays). • Spherical - have the entire 360-degree world rendered as a spherical map. You can increase tessellation max to increase the accuracy of object edges as they are warped out of line, but this takes longer to render. • Cylindrical - have the 360-degree world horizontally cylindrically projected around a camera. • UV Unwrap - have every object render its UV space into the output format. You can use this option to cook out texture maps. |
| Shutter Length | shutter | 0.5 | Enter the number of frames the shutter stays open when motion blurring. For example, a value of 0.5 corresponds to half a frame. |
| Shutter Offset | shutteroffset | start | Select when the shutter opens and closes in relation to the current frame value when motion blurring: • centered - center the shutter around the current frame. For example, if you set the shutter value to 1 and your current frame is 30, the shutter stays open from frame 29.5 to 30.5. • start - open the shutter at the current frame. For example, if you set the shutter value to 1 and your current frame is 30, the shutter stays open from frame 30 to 31. • end - close the shutter at the current frame. For example, if you set the shutter value to 1 and your current frame is 30, the shutter stays open from frame 29 to 30. • custom - open the shutter at the time you specify. In the field next to the dropdown menu, enter a value (in frames) to add to the current frame. To open the shutter before the current frame, enter a negative value. For example, a value of -0.5 opens the shutter half a frame before the current frame. Nuke has always defaulted to start, moving forward in time, so the shutter is open at a given frame. However, it can often be useful to change this to end when you want a motion blur trail leading to an impact point on a specific frame. |
| Shutter Segments | shutter_segments | 1 | The number of ‘time segments’ used to interpolate motion blur. A segment count of 0 effectively disables motion blur, while a segment count of 1 creates a straight-line blur. Increasing the number of segments beyond 1 subdivides the straight line into more time segments, which is normally only necessary for heavy rotational blur. The greater the number of segments, the greater the quality of rotational motion blur. |
| Shutter Bias | shutter_bias | 0 | Biases the shutter towards shutter close or shutter open for stylized motion blur. Negative values bias towards shutter open and positive values bias towards shutter close. |
| Render Region | render_region_mode | Format Extend | When the bg input has a larger pixel box than its format size, defines how the render should handle the extended bbox area outside the format’s size: • Format - the render region is the format size, and any extended bbox pixels are passed through unchanged. • Format Extend - the render region is the format size, automatically expanded to fill any extended bbox area. • Format Clip - the render region is the format size, and the output bbox is clipped to the format. |
| Overscan | overscan | 0 | The number of pixels to render beyond the left/right and top/bottom of the frame. Rendering pixels beyond the edges of the frame can be useful if subsequent nodes need to have access outside the frame. For example, a Blur node down the node tree may produce better results around the edges of the frame if overscan is used. Similarly, a subsequent LensDistortion node may require the use of overscan. |
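The shutter offset modes described above can be expressed as a small helper that returns the open/close interval for a given frame (a hypothetical sketch of the documented behavior, not a Nuke API call):

```python
def shutter_window(frame, shutter_length, offset_mode, custom_offset=0.0):
    """Return the (open, close) shutter times for the given offset mode."""
    if offset_mode == "centered":
        open_time = frame - shutter_length / 2.0
    elif offset_mode == "start":
        open_time = float(frame)
    elif offset_mode == "end":
        open_time = frame - shutter_length
    elif offset_mode == "custom":
        open_time = frame + custom_offset
    else:
        raise ValueError("unknown shutter offset mode: %r" % offset_mode)
    return (open_time, open_time + shutter_length)

# Examples from the table: shutter length 1.0 at frame 30.
print(shutter_window(30, 1.0, "centered"))      # opens 29.5, closes 30.5
print(shutter_window(30, 1.0, "start"))         # opens 30.0, closes 31.0
print(shutter_window(30, 1.0, "end"))           # opens 29.0, closes 30.0
print(shutter_window(30, 1.0, "custom", -0.5))  # opens half a frame early
```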
| Motion Blur | |||
| Camera Xform | camera_motion_blur | on | Enable or disable motion blur from a camera being transformed, i.e. animating a Camera through space. Note that subsampling matrices currently produces artifacts, as this requires quaternions for the matrix interpolation method, which has not yet been implemented. As a workaround, increase Shutter Segments to chop the frame you are generating the matrix for into multiple matrices. |
| Camera Lens | camera_lens_blur | on | Enable or disable motion blur from a camera projection control being changed, i.e. motion blur generated from animating the focal length of the camera. |
| Object Xform | object_motion_blur | on | Enable or disable the motion blur of an object being transformed i.e. animating a GeoCube through space (the mesh transform matrix is animated). |
| Object Deform | object_deform_blur | on | Enable or disable the motion blur of an object being deformed i.e. an animated radius knob on a GeoSphere (the point xyz locations are animated). |
| Lights | light_motion_blur | on | Enable or disable the motion blur of a light being transformed i.e. animating a light through space. |
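The Camera Xform workaround above (raising Shutter Segments to get multiple interpolation matrices) amounts to sampling the shutter interval at the segment boundaries. A rough sketch of that subdivision, assuming evenly spaced segments:

```python
def segment_sample_times(shutter_open, shutter_close, segments):
    """Evenly spaced sample times across the shutter interval.

    N segments need N + 1 boundary times; each boundary is where a
    transform matrix would be sampled. Illustrative only; Nuke's
    internal sampling may differ.
    """
    if segments < 1:
        raise ValueError("segments must be >= 1")
    step = (shutter_close - shutter_open) / segments
    return [shutter_open + i * step for i in range(segments + 1)]

# Two segments over a half-frame shutter starting at frame 30.
print(segment_sample_times(30.0, 30.5, 2))  # [30.0, 30.25, 30.5]
```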
| Ray Options | |||
| Max Ray Depth | ray_max_depth | 10 | Defines the maximum depth rays can ‘bounce’ to, allowing you to balance render performance and quality. Ray max depth is tested and incremented for all ray types, so the max depth can be a mix of ray types. For example, if the max depth is 4, then a ray bounce sequence like camera (ray #1), glossy (#2), glossy (#3), diffuse (#4), refraction (#5) stops at the ‘diffuse’ bounce, which is ray #4 in the sequence. However, if the glossy max depth were set to only 1, shading would stop at ray #2, terminating the sequence. The higher the number, the more 'accurate' the render, but at the cost of render time. |
| Diffuse Max | ray_diffuse_max_depth | 1 | Diffuse rays will stop when this depth count is reached. The depth is incremented when a surface is shaded. |
| Reflection Max | ray_reflection_max_depth | 1 | Glossy rays will stop when this depth count is reached. The depth is incremented when a surface is shaded. |
| Refraction Max | ray_refraction_max_depth | 2 | Refraction rays will stop when this depth count is reached. The depth is incremented when a surface is shaded. |
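The Max Ray Depth example above (a global depth cap combined with per-type caps) can be simulated with a short sketch; this is an illustration of the described termination rules, not the renderer's actual traversal code:

```python
def trace_depth(bounce_types, max_depth, per_type_max=None):
    """Return the 1-based index of the last shaded bounce.

    Every ray increments one global depth counter, and each ray
    type can also have its own cap, mirroring the table above.
    """
    per_type_max = per_type_max or {}
    counts = {}
    last_shaded = 0
    for i, ray_type in enumerate(bounce_types, start=1):
        if i > max_depth:
            break  # global ray_max_depth reached
        counts[ray_type] = counts.get(ray_type, 0) + 1
        cap = per_type_max.get(ray_type)
        if cap is not None and counts[ray_type] > cap:
            break  # per-type cap (e.g. glossy max) reached
        last_shaded = i
    return last_shaded

seq = ["camera", "glossy", "glossy", "diffuse", "refraction"]
print(trace_depth(seq, max_depth=4))                              # stops at ray #4 (diffuse)
print(trace_depth(seq, max_depth=4, per_type_max={"glossy": 1}))  # stops at ray #2
```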
| Material | |||
| | material_families | slr mtlx | List of material family names that the renderer attempts to use for material connections, in search-order priority. The default is ‘slr mtlx’, where ‘slr‘ is the native material family for ScanlineRender and ‘mtlx‘ is the MaterialX material family. If a material provides multiple output connections, they are matched in the order of the family list (i.e., ‘slr‘ first, then ‘mtlx’). If no match is made, the material binding fails and the object is not rendered. |
| | material_bind_mode | auto | How to resolve material binding attributes at each renderable prim: • auto - choose the best binding attribute in order of importance: ‘full’ then ‘all purpose’ (‘preview’ bindings are ignored and must be manually selected) • full - use only the ‘full’ binding attribute on the prim, if it exists • preview - use only the ‘preview’ binding attribute on the prim, if it exists • all purpose - use only the ‘all purpose’ binding attribute on the prim, if it exists • off - do not use any of the binding attributes on the prim, disabling the material |
| Side Visibility | side_visibility_mode | auto | This is a global override which takes precedence over per-prim settings. Sets whether the Material is visible for front-side (front faces), back-side (back faces), or both sides. Note that this measures the face’s angle relative to the incoming ray direction, not to the camera. Affects all ray types. |
| Texture Filter | filter | cubic | Select the filtering algorithm to use when remapping pixels from their original positions to new positions. This allows you to avoid problems with image quality, particularly in high contrast areas of the frame (where highly aliased, or jaggy, edges may appear if pixels are not filtered and retain their original values). • Impulse - remapped pixels carry their original values. • Cubic - remapped pixels receive some smoothing. • Keys - remapped pixels receive some smoothing, plus minor sharpening (as shown by the negative -y portions of the curve). • Simon - remapped pixels receive some smoothing, plus medium sharpening (as shown by the negative -y portions of the curve). • Rifman - remapped pixels receive some smoothing, plus significant sharpening (as shown by the negative -y portions of the curve). • Mitchell - remapped pixels receive some smoothing, plus blurring to hide pixelation. • Parzen - remapped pixels receive the greatest smoothing of all filters. • Notch - remapped pixels receive flat smoothing (which tends to hide moire patterns). • Lanczos4, Lanczos6, and Sinc4 - remapped pixels receive sharpening which can be useful for scaling down. Lanczos4 provides the least sharpening and Sinc4 the most. |
| Lighting | |||
| (Lighting) Enable | lighting_enable_mode | auto | Controls whether or not lights in the scene affect the 2D output: • auto - renders with surface lighting when there are lights in the scene, or constant shading if there are none. • on - renders with surface lighting; if there are no lights, surfaces appear black unless they are emissive. • off - renders constant shading, ignoring lights. |
| Shadowing | shadowing_enabled | on | Enable or disable shadowing cast from direct lighting. |
| Ambient | ambient | 0 | Enter a value between 0 (black) and 1 (white) to change the global ambient color. If you set ‘Ambient’ to 1.0 in a scene with no lights, the surfaces emit their diffuse color as if illuminated equally from all directions. |
| Scene Masks | |||
| Purpose Filter | prim_purpose_filter_mode | all | Filters input scene prims by their purpose attribute: • all - renders all prims regardless of purpose, disabling purpose filtering so everything renders. • default - renders prims based on the default settings for the stage. • render - renders all prims set to render purpose. • proxy - renders all prims set to proxy purpose. • guide - renders all prims set to guide purpose. If no purposes are enabled, nothing is rendered. |
| Objects (Mask) | objects_mask | //* | Specifies the mask pattern to match prim names to include in the output scene, excluding light prims such as DirectLight1. The default mask, //*, includes everything from the root of the scene graph. You can use standard glob-style variables, such as /*, to create masks, or use individual prim names separated by spaces. For example, /GeoCube1 /GeoCard3 includes only those prims in the output scene. Tip: You can also use the cog menu, the Viewer picker, or drag and drop paths from the Scene Graph to create masks. |
| Lights (Mask) | lights_mask | //* | Specifies the mask pattern to match light names to include in the output scene, excluding object prims such as GeoCube1. The default mask, //*, includes everything from the root of the scene graph. You can use standard glob-style variables, such as /*, to create masks, or use individual light names separated by spaces. For example, /DirectLight1 /PointLight4 includes only those light prims in the output scene. Tip: You can also use the cog menu, the Viewer picker, or drag and drop paths from the Scene Graph to create masks. |
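The mask behavior for both knobs above can be approximated with Python's fnmatch module, treating `//*` as match-everything. This is an approximation of the glob-style semantics described in the table, not Nuke's actual matcher:

```python
import fnmatch

def prims_matching(mask, prim_paths):
    """Filter prim paths with a space-separated list of glob patterns.

    '//*' is treated as 'everything from the scene graph root',
    matching the default mask described above.
    """
    selected = []
    for path in prim_paths:
        for pattern in mask.split():
            if pattern == "//*" or fnmatch.fnmatch(path, pattern):
                selected.append(path)
                break
    return selected

paths = ["/GeoCube1", "/GeoCard3", "/GeoSphere2"]
print(prims_matching("//*", paths))                  # everything
print(prims_matching("/GeoCube1 /GeoCard3", paths))  # only those two prims
```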
| Outputs Tab | |||
| 1/Z | one_over_z | on | On: outputs classic-style Nuke Z, which is 1/Z-distance. ‘No object’ is black (0.0), and the Z value *decreases* the further from camera. Note that 1/Z is NOT normalized Z; 1/Z is easily converted back to absolute Z-distance by applying another 1/Z. Off: outputs absolute Z-distance, where ‘no object’ = inf and the Z value increases the further from the camera. This is the world-space distance from the camera to the object. |
| Motion Vectors | motion_vectors_type | Distance | Select how to render motion vectors: • Off - don't create any motion vectors. • Distance - stores the distance between the samples in the motion vector channel. • Distance Normalized - stores the normalized distance between the samples in the motion vector channel. • Velocity - stores the velocity in the motion vector channel. • Velocity Normalized - stores the normalized velocity in the motion vector channel. |
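The 1/Z relationship described above is easy to verify: applying the reciprocal twice recovers the absolute distance, and 'no object' (infinite distance) maps to black. The helper name below is illustrative, not a Nuke API:

```python
def z_to_classic(z_distance):
    """Convert absolute Z-distance to classic Nuke 1/Z.

    'No object' (infinite distance) maps to 0.0 (black), and the
    value decreases the further the surface is from camera.
    """
    return 0.0 if z_distance == float("inf") else 1.0 / z_distance

print(z_to_classic(10.0))          # 0.1
print(1.0 / z_to_classic(10.0))    # 10.0 -- another 1/Z recovers the distance
print(z_to_classic(float("inf")))  # 0.0 -- 'no object'
```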
| Surface AOVs | |||
| Surface AOVs | output_aovs_group | N/A | Defines which surface AOVs are output. |
| enable | aov_enable0 (0 is the first one in the list of 9) | on | Enables the AOV in the list. |
| aov source | aov_source0 (0 is the first one in the list of 9) | Zlinear | Source of the AOV from shading results: a list of shading variable layers, extended if geometry prims provide primvars. For example, if a geometry mesh object provides a primvar named ‘myUV‘, it appears at the end of this list along with its size (float, vec2, vec3, vec4). Predefined shading attributes: • Cf - Geometry color (from primvar if available) • Co - Geometry opacity (from primvar if available) • presence - Surface presence (physical solidity) • Oi - Surface rgb opacity (optical transparency) • Zlinear - Linear-projected distance (classic Z) • Zdistance - Camera ray distance • Ps - Shading position • Pg - Geometric position • Ns - Shading normal • Ng - Geometric normal • Ts - Shading tangent • UV - Surface parametric-uv coordinate (from primvar if available) • UVW - Surface parametric-w (Z for volumes, from primvar if available) • St - Texture coordinate (different from parametric-UV if modified by a shader) • MotionFwd - Forward motion vectors • MotionBwd - Backward motion vectors |
| merge mode | aov_merge_mode0 (0 is the first one in the list of 9) | min | Math to use when merging multiple surface samples in depth: • premult-blend - premult A, then Under(B + A*Aa*(1-Ba)) or Over(B*(1-Aa) + A*Aa) - best for data AOVs • blend - Under(B + A*(1-Ba)) or Over(B*(1-Aa) + A) - best for color AOVs • plus - B + A • min - min(B, A) - best for Z • mid - (B + A)/2 • max - max(B, A) |
| unpremult | aov_unpremult_mode0 (0 is the first one in the list of 9) | coverage | Unpremult this Aov by coverage or alpha channel. |
| output layer | aov_output0 (0 is the first one in the list of 9) | depth | Output channels to route Aov to. |
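The simpler merge-mode formulas above (plus, min, mid, max) operate per channel on two depth samples A and B. A minimal sketch of that math follows; the blend variants need full RGBA samples with alpha, so they are omitted here, and this is an illustration rather than the renderer's internal code:

```python
def merge_samples(mode, a, b):
    """Merge one channel of two depth samples using the named mode."""
    if mode == "plus":
        return b + a
    if mode == "min":   # best for Z: the nearest sample wins
        return min(b, a)
    if mode == "mid":
        return (b + a) / 2.0
    if mode == "max":
        return max(b, a)
    raise ValueError("unsupported merge mode: %r" % mode)

print(merge_samples("min", 4.0, 10.0))   # 4.0 -- nearest Z wins
print(merge_samples("mid", 4.0, 10.0))   # 7.0
print(merge_samples("plus", 4.0, 10.0))  # 14.0
```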
| Deep Options | |||
| Drop Zero Alpha Samples | drop_zero_alpha_samples | enabled | When enabled, deep samples with an alpha value of 0 do not contribute to the output. When disabled, deep samples with an alpha value of 0 contribute to the output. |
| Advanced Tab | |||
| Integrator | integrator_type | Default | Selects the integrator used to visualize render data; see the Integrator plugin API note above. Note: Debug and Occlusion modes are experimental options and currently not fully functional. |
| Reverse Order | reverse_order | off | Reverse the shading order from farthest surface to nearest surface. |
| Render Only | render_only | off | Output only the render against black (zeros); don’t overlay on the background input. |
| Sample | diagnostic_sample | -1 | Selects samples for diagnostic mode, if enabled. |
| Pixel | diagnostic_pixel | x=-1, y=-1 | Pixel coordinates at which to test or debug information, for diagnostic mode. |