ScanlineRender
When connected to a Scene node, the ScanlineRender node renders all the objects and lights connected to that scene from the perspective of the Camera connected to the cam input (or a default camera if no cam input exists). The rendered 2D image is then passed along to the next node in the compositing tree, and you can use the result as an input to other nodes in the script.
The ScanlineRender node also outputs deep data if there is a Deep node downstream.
See also PrmanRender, Scene, and Camera.
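The same wiring can be set up from Python with Nuke's built-in nuke module. Below is a minimal sketch, not a definitive recipe: the input indices (0 = bg, 1 = obj/scn, 2 = cam) and the HD_1080 format name are assumptions to verify against the pipe labels and format list in your Nuke version.

```python
# Minimal sketch: build a Scene and render it with ScanlineRender.
# Input indices (0 = bg, 1 = obj/scn, 2 = cam) are assumptions; check
# the pipe labels on the node in your Nuke version.
import nuke

sphere = nuke.nodes.Sphere()
camera = nuke.nodes.Camera()
scene = nuke.nodes.Scene()
scene.setInput(0, sphere)           # add geometry (and lights) to the scene

bg = nuke.nodes.Constant()          # optional bg input sets the output resolution
bg['format'].setValue('HD_1080')    # assumed format name

render = nuke.nodes.ScanlineRender()
render.setInput(0, bg)      # bg: background image and output resolution
render.setInput(1, scene)   # obj/scn: the scene to render
render.setInput(2, camera)  # cam: omit to use the default camera at the origin
```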
Inputs and Controls
Connection Type | Connection Name | Function
Input | obj/scn | Either: • a Scene node that is connected to the objects and lights you want to render, or • a 3D object or MergeGeo node.
Input | cam | An optional camera input. The scene is rendered from the perspective of this camera. If the camera input is not connected, ScanlineRender uses a default camera positioned at the origin and facing in the negative Z direction.
Input | bg | An optional background input. This can be used to composite a background image into the scene and to determine the output resolution. If not used, this defaults to root.format or root.proxy_format defined in the Project Settings. If this input contains a depth channel, ScanlineRender considers it when doing Z-buffer and Z-blending calculations.
Control (UI) | Knob (Scripting) | Default Value | Function

ScanlineRender Tab
transparency | transparency | enabled | When enabled, objects appear transparent where their alphas are less than 1.
Z-buffer | ztest_enabled | enabled | Enables or disables the Z-buffer, which compares object Z-depth within a scene to help resolve occlusions.
filter | filter | Cubic | Select the filtering algorithm to use when remapping pixels from their original positions to new positions. This allows you to avoid problems with image quality, particularly in high contrast areas of the frame (where highly aliased, or jaggy, edges may appear if pixels are not filtered and retain their original values). • Impulse - remapped pixels carry their original values. • Cubic - remapped pixels receive some smoothing. • Keys - remapped pixels receive some smoothing, plus minor sharpening (as shown by the negative y portions of the curve). • Simon - remapped pixels receive some smoothing, plus medium sharpening (as shown by the negative y portions of the curve). • Rifman - remapped pixels receive some smoothing, plus significant sharpening (as shown by the negative y portions of the curve). • Mitchell - remapped pixels receive some smoothing, plus blurring to hide pixelation. • Parzen - remapped pixels receive the greatest smoothing of all filters. • Notch - remapped pixels receive flat smoothing (which tends to hide moiré patterns). • Lanczos4, Lanczos6, and Sinc4 - remapped pixels receive sharpening, which can be useful for scaling down. Lanczos4 provides the least sharpening and Sinc4 the most. • Nearest - the fastest and crudest option; samples the nearest texel from the appropriate mipmap. • Bilinear - removes blockiness; samples and interpolates the four nearest texels from the appropriate mipmap level. • Trilinear - smooth interpolation of texture quality according to distance; bilinearly interpolates between the two closest mipmap levels. • Anisotropic - the highest quality filtering; gives a better result when shading surfaces with a high angle relative to the camera.
antialiasing | antialiasing | none | Sets the level of antialiasing to reduce aliasing artifacts in the render. Choose from none, low, medium, and high.
Z-blend mode | zblend_mode | none | The type of ramp used to blend two surfaces within the Z-blend range of each other. Smooth looks better; linear is provided for backward compatibility.
Z-blend range | zblend_range | 0.1 | Any two surfaces closer together than this distance on the Z axis are blended together to smooth the transition between intersecting objects.
projection mode | projection_mode | render camera | The projection modes are: • perspective - the camera's focal length and aperture define the illusion of depth for the objects in front of the camera. • orthographic - use orthographic projection (projection onto the projection plane using parallel rays). • uv - have every object render its UV space into the output format. You can use this option to cook out texture maps. • spherical - have the entire 360-degree world rendered as a spherical map. You can increase tessellation max to improve the accuracy of object edges as they are warped out of line, but this takes longer to render. • render camera - use the projection type of the render camera.
tessellation max | max_tessellation | 3 | Limits recursive subdivision of polygons by a screen-space distance percentage. This control can be useful in the spherical projection mode, which sometimes distorts object edges. If you see such distortions, try increasing this value to tessellate (subdivide) polygons into smaller polygons. This produces more accurate object edges, but also takes longer to render.
overscan | overscan | 0 | The maximum number of additional pixels to render beyond the left/right and top/bottom of the frame. Rendering pixels beyond the edges of the frame can be useful if subsequent nodes need access outside the frame. For example, a Blur node down the node tree may produce better results around the edges of the frame if overscan is used. Similarly, a subsequent LensDistortion node may require the use of overscan.
ambient | ambient | 0 | Enter a value between 0 (black) and 1 (white) to change the global ambient color.
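The knob (scripting) names above can be set directly from Python. This is a hedged sketch only: the node name ScanlineRender1 is an assumption, and the enumeration strings ('Mitchell', 'high', 'smooth') are inferred from the option lists in this table, so verify them against your Nuke version.

```python
# Sketch: setting ScanlineRender Tab knobs by their scripting names.
import nuke

render = nuke.toNode('ScanlineRender1')   # assumed node name
render['transparency'].setValue(True)
render['ztest_enabled'].setValue(True)    # Z-buffer
render['filter'].setValue('Mitchell')     # enum string assumed from the table
render['antialiasing'].setValue('high')
render['zblend_mode'].setValue('smooth')
render['zblend_range'].setValue(0.25)
render['overscan'].setValue(16)           # 16 extra pixels past each frame edge
render['ambient'].setValue(0.1)           # slight global ambient lift
```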
MultiSample Tab
samples | samples | 1 | Sets the number of samples to render per pixel, to produce motion blur and antialiasing. If you use this, in most cases you can turn off the antialiasing and filter controls on the ScanlineRender tab.
shutter | shutter | 0.5 | Enter the number of frames the shutter stays open when motion blurring. For example, a value of 0.5 corresponds to half a frame.
shutter offset | shutteroffset | start | Controls how the shutter behaves with respect to the current frame value. It has four options: • centred - center the shutter around the current frame. For example, if you set the shutter value to 1 and your current frame is 30, the shutter stays open from frame 29.5 to 30.5. • start - open the shutter at the current frame. For example, if you set the shutter value to 1 and your current frame is 30, the shutter stays open from frame 30 to 31. • end - close the shutter at the current frame. For example, if you set the shutter value to 1 and your current frame is 30, the shutter stays open from frame 29 to 30. • custom - open the shutter at the time you specify. In the field next to the dropdown menu, enter a value (in frames) to add to the current frame. To open the shutter before the current frame, enter a negative value. For example, a value of -0.5 would open the shutter half a frame before the current frame.
shuttercustom | shuttercustomoffset | 0 | If the shutter offset parameter is set to custom, this parameter sets the time the shutter opens by adding it to the current frame. Values are in frames, so -0.5 would open the shutter half a frame before the current frame.
randomize time | temporal_jitter | 0 | Adds randomness to the distribution of samples in time so they don't produce regularly spaced images. The larger the value, the larger the time difference between the samples.
sample diameter | spacial_jitter | 1 | The diameter of the circle that the samples for each pixel are placed in for antialiasing. The larger the value, the more the samples are jittered.
focus diameter | focal_jitter | 0 | Randomly orbits the camera about a point at the focal distance in front of it for each sample, to produce depth-of-field effects from multiple samples. Note: The focal distance is set in the Camera node's controls, on the Projection tab.
stochastic | stochastic_samples | 0 | Sets the number of samples, per pixel, to use in stochastic estimation (zero is disabled). Lower values result in faster renders, while higher values improve the quality of the final image. Stochastic sampling is based on Robert L. Cook's Stochastic Sampling in Computer Graphics, published in ACM Transactions on Graphics, Volume 5, Number 1, January 1986. Note: For motion blur, it is recommended to adjust the samples control instead, which also provides antialiasing by jittering the sample point.
uniform | uniform_distribution | disabled | When enabled, a uniform temporal distribution of scenes is used for sampling. This generates more accurate results for stochastic multisampling.
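For example, the multisample controls can be scripted to enable motion blur. A hedged sketch follows; the node name is an assumption, and the 'centred' and 'custom' strings come from the shutter offset options listed above.

```python
# Sketch: multisampled motion blur. With samples > 1, the separate
# antialiasing and filter controls can usually be turned off, as noted
# in the samples row above.
import nuke

render = nuke.toNode('ScanlineRender1')      # assumed node name
render['samples'].setValue(8)                # more samples: smoother blur, slower render
render['shutter'].setValue(0.5)              # shutter open for half a frame
render['shutteroffset'].setValue('centred')  # center the shutter on the current frame

# Or open the shutter half a frame early with a custom offset:
# render['shutteroffset'].setValue('custom')
# render['shuttercustomoffset'].setValue(-0.5)
```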
Shader Tab
motion vectors | motion_vectors_type | distance | Select how to render motion vectors: • off - do not render motion vectors. • classic - render motion vectors the classic (pre-Nuke 6.1) way. This option is only provided for backwards compatibility, and isn't always accurate. • velocity - store the velocity of every single pixel in the motion vector channels (the pre-Nuke 7.0 way). This option is only provided for backwards compatibility. To get the same behavior as Nuke 6.3, set samples to 1. • distance - for every pixel, store the distance (in pixels) between samples in the motion vector channels. This is the recommended option and usually produces the best results. It also allows the VectorBlur node to produce curved vector blur, where interpolation between two frames is made according to a curve rather than linearly.
motion vector channels | MB_channel | forward | The channels the motion vectors are output to. You can use the checkboxes on the right to select individual channels.
output vectors | output_shader_vectors | disabled | When enabled, shader vectors (surface points and surface normals) are output as well as motion vectors. These can be useful if you want to relight the rendered 3D scene in the compositing phase.
surface points | P_channel | none | The channel to use as the surface point channel. When output vectors is enabled, ScanlineRender outputs the surface point positions (in world space coordinates) into this channel.
surface normal | N_channel | none | The channel to use as the surface normal channel. When output vectors is enabled, ScanlineRender outputs the surface point normals (in world space coordinates) into this channel.
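A relighting-oriented setup can be sketched in Python. This is a hedged illustration: the node name is assumed, and the 'P' and 'N' layers are assumptions that are created here in case your script does not already define them.

```python
# Sketch: output motion vectors plus surface position/normal passes
# for relighting in 2D. The 'P' and 'N' layers are assumptions.
import nuke

nuke.Layer('P', ['P.x', 'P.y', 'P.z'])
nuke.Layer('N', ['N.x', 'N.y', 'N.z'])

render = nuke.toNode('ScanlineRender1')             # assumed node name
render['motion_vectors_type'].setValue('distance')  # recommended mode
render['MB_channel'].setValue('forward')
render['output_shader_vectors'].setValue(True)
render['P_channel'].setValue('P')                   # surface points
render['N_channel'].setValue('N')                   # surface normals
```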
Deep Tab
drop zero alpha samples | drop_zero_alpha_samples | enabled | When enabled, deep samples with an alpha value of 0 do not contribute to the output. When disabled, they do.
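As noted at the top of this page, ScanlineRender only outputs deep data when a deep node reads from it downstream. A small hedged sketch follows; DeepMerge is just one example of such a node, and the node name is assumed.

```python
# Sketch: ScanlineRender emits deep data only when a deep node is
# connected downstream. DeepMerge here is one example of such a node.
import nuke

render = nuke.toNode('ScanlineRender1')           # assumed node name
render['drop_zero_alpha_samples'].setValue(True)  # discard fully transparent samples

deep = nuke.nodes.DeepMerge()
deep.setInput(0, render)  # the downstream deep node pulls the deep output
```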
Step-by-Step Guides
Adding Motion Blur Using a Renderer
Video Tutorials
3D Workspace Overview from Foundry on Vimeo.
Nuke is not limited to a 2D space; in fact, it has a complete 3D environment built right in. For example, here is a 3D ship and a 3D sphere. In order to see the 3D environment, go to the View menu where it says 2D and switch that to 3D, and there's the environment. In order to change the view, which is the default camera, you can use the Alt or your Option key, along with your mouse buttons. For example, Alt and left mouse button scrolls, Alt and middle mouse button zooms, and Alt and right mouse button orbits.
Let's see what we have in the scene. There is a 3D Camera, a Spotlight, a Point light, a primitive Sphere, an imported spaceship, and a large primitive Card in the background. Let's take a look at the node network, and you can see what we need to make a 3D scene happen. The node with the most connections is the Scene node. The Scene node groups together lights and geometry in order to pass them on to a render node. In order to render the scene so it becomes 2D, you need to have some sort of render node. In this case, there is a ScanlineRender node. Connected to the ScanlineRender is a 3D Camera. Connected to the Scene node are two lights - there is the Spotlight and the Point light. If I open up the properties on the Spotlight, you can see common options like color and intensity and, in the case of the Spotlight, cone angle. There are also two pieces of primitive geometry here - there is the Sphere and the Card. This is a good time to note that 3D nodes have a rounded, pill-like shape, as opposed to the rectangular 2D nodes.
You can create a light or a primitive piece of geometry through the 3D menu. You can make your Point or your Spot, plus a Direct and a few specialized lights, like the one that’s called Light, which you can use to import lights from other programs, like Maya. There is also the Geometry menu, which has the primitives such as Card, or other shapes, like Cube and Cylinder. You can transform lights and geometry. For example, if I open up the Sphere, you will see there is a translate, rotate, and scale property. Once this is open, you will also see there is a transform handle. If you click+drag the handle along the axis, you can move it in that direction, for example, Y. Of course, you can also enter values into the properties panel. Lights also have their own set of transforms. Now, one new feature is the fact that lights can cast shadows right here in the 3D environment. For example, if I go to the Spotlight and go to the Shadows tab, you will see there is a place to click on cast shadows. Let’s go back to the 2D view. You can see the shadow of the Sphere right here on the spaceship. Now, aside from shadows of course, you can animate all of these properties. You can animate the light, changing over time, as well as the geometry. There are also animation buttons beside all of these properties. You can key these as you would any other node inside Nuke.
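For reference, the same keyframing can be done from Python. This is a hedged sketch: the node name Spotlight1 is an assumption, and setValueAt takes (value, frame, index), where indices 0, 1, and 2 map to x, y, and z.

```python
# Sketch: keyframe a light's translate, equivalent to using the
# animation buttons in the properties panel.
import nuke

light = nuke.toNode('Spotlight1')  # assumed node name
t = light['translate']
t.setAnimated()                    # turn the knob into a keyframed knob
t.setValueAt(0.0, 1, 1)            # y = 0 at frame 1  (value, frame, index; 1 = y)
t.setValueAt(5.0, 24, 1)           # y = 5 at frame 24
```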
You will notice that the two pieces of geometry have shaders connected to their img pipes. These are necessary for the surfaces to be lit correctly. The Sphere has a Phong, which is similar to the one you might have in a program like Maya. The Card has an Emission shader, which has the emissive component or the ambient color component. Now, in terms of the spaceship, it has to be imported through a ReadGeo node. The ReadGeo node has a place to bring in the file, and it supports .fbx, .obj, and Alembic (.abc) files. If there is animation in the file, Nuke will recognize it. For example, an .fbx file might have multiple takes. Nuke will recognize that and you can choose the animation take. So, if I go back to the 3D view and scrub the timeline, we will see the ship is pre-animated, and this animation was created in Maya. There is also a material connected to the img pipe of the ReadGeo. Now, because the UV texture space came through the .fbx file, in order to map the geometry, you just need to bring in the texture bitmaps through Read nodes and connect them to a shader. For example, here is the diffuse map connected through the mapD, or map diffuse. There is a specular map connected through the mapS, or map specular. Let's go back to the 2D view.
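The same import-and-texture setup can be sketched in Python. File paths here are illustrative, and the Phong input indices for mapD and mapS are assumptions; check the pipe labels on the node before relying on them.

```python
# Sketch: import geometry and connect texture maps through a shader.
# File paths are illustrative; shader input indices are assumptions.
import nuke

geo = nuke.nodes.ReadGeo(file='/path/to/ship.fbx')  # .fbx, .obj, or .abc

diffuse = nuke.nodes.Read(file='/path/to/ship_diffuse.tif')
specular = nuke.nodes.Read(file='/path/to/ship_spec.tif')

shader = nuke.nodes.Phong()
shader.setInput(1, diffuse)   # mapD (diffuse map) - index assumed
shader.setInput(2, specular)  # mapS (specular map) - index assumed

geo.setInput(0, shader)       # img pipe of the ReadGeo
```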
Now, if anything is animated, you can also activate motion blur. To do that, you go to the render node and, for example, with the ScanlineRender, go to the MultiSample tab and change samples to a higher number like 8. At that point, the motion blur will appear, as you can see right here. The higher the samples number, the higher the quality.
So, that is a brief introduction to Nuke's 3D environment. Keep in mind that any node you need to create for this you can find through the 3D node menu. This includes all your shaders, geometry, lights, Scene nodes, and cameras. Aside from animating lights and geometry, you are also free to animate cameras. They have their own set of transforms. In any case, I would suggest exploring this component of Nuke.