Step Up to Nuke Training Series
New to Nuke? Learn more with our training series by Lee Lanier, VFX Trainer and author of Digital Compositing with Nuke.
NUKE Interface Overview from Foundry on Vimeo.
Hello, today I’m going to introduce you to Foundry’s Nuke. Now, if you are used to using other compositing packages, such as Adobe After Effects, you might find that Nuke’s interface is a little odd. That’s OK. As soon as you learn the basics, you will find Nuke’s interface is easy to use and also quite powerful. I will start out by introducing you to the various interface components of Nuke. By default, Nuke is composed of several panes and in those panes are one or more panels. For example, up here at the top is a Viewer pane, and right now there is a single Viewer panel. To the right, there is a properties panel, down below is the Node Graph, Curve Editor, and Dope Sheet pane and, in fact, you can see all three of those panels right now. There is also a node toolbar here at the left and at the top the main menu.
So, let me talk about what’s in the various panes and panels, and what they're used for. Let’s start with the Viewer pane. The Viewer pane contains the view of various Viewers. By default, you get a single Viewer, which you can see down here as a node. Now, because there is nothing hooked up to that node - there is no node network - the Viewer is just black. However, you can see the bounding box and the resolution of the current project. The Viewer pane also contains the playback controls. You will see these in other programs as well, however, there are a few special buttons that are unique and we will talk about those later on. Below that is the timeline and the time slider. Let’s go back down to the Node Graph. The Node Graph is the area where you add nodes and construct node networks. Right now there is only the Viewer node, however, you can add a new node at any time. Once you add a new node, you can look at the properties for that node up here in the properties panel. So, just for now, I am going to make a new node, just so you can see that there are several ways to make a new node. One of them is to go up here to the toolbar. If you let your mouse hover over one of the icons, you can see the category name, for example Filter. If you click that category, you will see all the nodes that belong to that category. For example, I can click Blur to create a Blur node, and there it is. Once you have created a new node, you will see the properties up here in the properties panel. These properties are arranged as various knobs. When I say knobs, I mean either sliders, dropdown menus, numeric cells, or checkboxes. You can close this view of these properties any time by clicking the x at the top right, or if you want to reopen those properties, just double-click the node.
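For anyone who prefers scripting, the same node creation and knob editing can be driven from Nuke's Script Editor with the bundled Python module. This is only a minimal sketch assuming the default node and knob names; the Blur node and its size knob are the same ones shown in the properties panel.

    # Minimal sketch using Nuke's Python API (the "nuke" module) from the Script Editor.
    import nuke

    blur = nuke.createNode('Blur')   # same as choosing Filter > Blur from the toolbar
    print(blur.name())               # the default name Nuke assigns, e.g. 'Blur1'
    blur['size'].setValue(10)        # "size" is the slider knob shown in the properties panel
    print(blur['size'].value())      # 10.0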
Once you have something to look at in the Node Graph or the Viewer, you can use some shortcuts to move around. For example, you can middle-mouse-button-click+drag to scroll. You can also use your scroll wheel, if you have one on your mouse, to scroll in and out to zoom. If you press your middle mouse button one time, that frames whatever is in your panel. So, one time here, middle mouse, that frames the nodes, or one time in the Viewer, middle mouse, frames the Viewer. If you don’t have a scroll wheel on your mouse, you can also press the Alt key or the Option key, hold the middle mouse button down and drag left to right, and that is also a zoom.
Beside the Node Graph are also the Curve Editor and the Dope Sheet panels. You can just click on those tabs to see those. You can use these to edit animation curves and keyframes once you have animation, so we will return to these later on.
Let’s talk about the main menu. The main menu has all your menu items for common functions like Open and Save. Nuke files are a special text format with the .nk extension. Under Edit, you have your global Preferences and also your Project Settings. Each project has its own window in Nuke, so right now we have a single project. If I click this, I get all of the important settings for the project, including the frame range, frames per second, and the resolution. There are presets for the resolution, but you can also make your own custom size.
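If it helps to see those same settings outside the dialog, here is a small sketch that reads them from the root node via the Python API, assuming the default knob names on nuke.root().

    # Reading the Project Settings (frame range, fps, resolution) from the root node.
    import nuke

    root = nuke.root()
    print(root['first_frame'].value(), root['last_frame'].value())  # frame range
    print(root['fps'].value())                                      # frames per second
    fmt = root['format'].value()
    print(fmt.width(), fmt.height())                                # project resolution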
So, there are the main interface components of Nuke. In the next video, we will talk about how to import footage and how to construct a basic node network.
Importing Footage / Elements from Foundry on Vimeo.
We have gone over the basics of the Nuke interface. We can now move in and import some footage and create a basic network. In order to import footage into Nuke, you have to use a Read node. There are four main ways to create a new node in Nuke. The first is to go up to the toolbar and select a node, in this case, it will be under the first icon Image and then I can select Read. I will leave this for now. The second way is to return to the Node Graph, right-click, and choose a node from the list; once again Image > Read. The third way is to press the Tab key on your keyboard, type the node name into the cell, and once you see the name, select it from your dropdown list. The fourth way is to use a pre-assigned hotkey, in this case, the Read node has the R key, so the R key on the keyboard.
Now, if you used any of those four methods, you will get this browse window. On the left, you will see all of the drives and all of the main directories. You click on a directory name one time to go into that directory. Once you get to a folder with footage, you will see it listed, in this case, I have an image sequence. Nuke automatically recognizes image sequences, assuming that they are numbered correctly. So, here the image sequence runs from 0-90. You can just click on the image sequence until it turns orange and open it. Now, if you were to open a still image, or say a QuickTime movie, you would see those listed also. You would simply click on those files until they turned orange and open those also. So, I am going to click Open now and there is a Read node. You can click+drag any of these nodes to rearrange them inside the Node Graph.
So, let’s make a simple network. What I can do is connect these two nodes together. To do that, I can click+drag the end of this input pipe on the Viewer, which looks like an arrow, and then drag+drop it on top of Read1. Once I let go of the mouse button, it makes a connection, and there you see the image sequence in the Viewer. Note the image sequence is 1920x1012. You will see that on the bounding box and also it will appear by the format. If I disconnect this, and I can do that by click+dragging the end of this pipe and letting go, I will see that the original project format is 2K. So, keep in mind, if I reconnect this that Nuke will alter the view based on the resolution of the input going into the Viewer. Now, there are ways to reformat inputs so I can change the width and height, but we'll save that for a later video. For now, this is OK.
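The same Read-to-Viewer wiring can be sketched in Python; the file path and frame range below are hypothetical placeholders, not the footage used in the video.

    # Creating a Read node and connecting it to a Viewer via the Python API.
    import nuke

    read = nuke.nodes.Read(file='/footage/shot.%04d.exr', first=0, last=90)  # placeholder path
    viewer = nuke.nodes.Viewer()
    viewer.setInput(0, read)   # same as dragging the Viewer's input pipe onto the Read node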
Let’s return to the properties panel and take a look at the properties. Of course, you have your path and the name of your image sequence, the recognized resolution, also the recognized frame range, and, at the very end there, is your colorspace. Now, colorspace is set to default (linear). What that means is the Read node is not making any additional color changes to the image sequence. I should note, however, that Nuke operates in a 32-bit floating point colorspace. Whereas a regular 8-bit colorspace operates with 2⁸ colors per channel, or 256 colors per channel, Nuke can potentially operate with 2³² values per channel. Also, because it’s floating point, Nuke can operate with decimal places. In other words, you can have a value for a color that is something like 1.000000001 - an incredibly tiny decimal place value. So, a combination of a large bit depth and also the ability to handle decimal places means that Nuke operates in a very large, very accurate colorspace. We have imported footage and we have created a basic network, so now we are ready to move on to more complex node networks.
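As a rough, back-of-the-envelope illustration of why the floating point working space matters, compare an 8-bit channel, which only has 256 discrete steps, with a float value that can carry a tiny fractional difference:

    # 8-bit quantization throws away the tiny difference a float value can keep.
    value = 1.000000001                # a float colour value Nuke can carry
    eight_bit = round(value * 255)     # the nearest 8-bit step
    print(eight_bit)                   # 255 - the fractional detail is gone
    print(eight_bit / 255.0 == value)  # False: the precision was lost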
“Tears of Steel” footage courtesy (CC) Blender Foundation - mango.blender.org
Constructing Node Networks from Foundry on Vimeo.
Now that we have some of the basics out of the way, we can move on to making more complex node networks. This is where the difference between layer-based compositing and node-based compositing really becomes apparent. Just to save some time, I have already read in two pieces of footage: one is a still image of a canyon and the other is an image sequence that features a CG spaceship.
Let’s build a more complex network. I am going to go ahead and connect the Viewer to the canyon. Now, one common task of compositing is to apply effects or filters to alter the image. So, for example, I can blur this canyon. If there are no nodes selected in the Node Graph, and I make a new node - for example, I can go to Filter > Blur with the right-mouse menu - the new node comes in by itself; it’s not connected. There are actually several ways to make these connections between nodes. Of course, one is simply to click+drag a pipe and drop it on top of another node, as we did when connecting the Viewer. If you have a pre-existing connection line, for example, the Viewer1 pipe is connected to Read1, I can simply click+drag my new node and drop it on the current connection. Now, before we go further, I want to talk about what these pipes mean. You can look at the pipes and see the arrow heads and from that you can determine which direction the information is flowing. So, I can tell the information is flowing out of Read1, through Blur1, and down into Viewer1.
Now, there are other ways to connect these together too. If I disconnect these pipes, click+drag the ends, and break them, you can also simply connect the inputs and outputs. For example, the Blur node has an input pipe. I can tell it’s an input pipe because it’s flowing towards the node on the top here. The Viewer also has an input pipe. Some nodes also have outputs. There is an output right here on Blur1 and an output pipe right here on Read1 - that means the information is flowing out of those nodes. So, I can connect the input of Blur1 to the output of Read1. Now let’s grab the input pipe and drop it on Read1, which means that Read1 is now sending information to Blur1. You can also grab the output of Blur1 and drop it on Viewer1, which means the input of Viewer1 is coming from Blur1. There is a third way to do that, if I disconnect these one more time. Connect the Viewer back up to the canyon and then select canyon. If a node is selected, and I create a new filter like Blur, it is automatically inserted downstream. Let me delete this other Blur, as I don’t need it right now. So, there is a little network. There is no blur happening right now, and one thing that happens with some of the filter nodes is they are off by default. If I increase the size property on the Blur, it becomes blurry.
Another important task of compositing is combining multiple images or pieces of footage. Now, in a layer-based system, you would simply stack layers. In a program like Nuke, you would have to use a special node to merge inputs. Let’s give that a try. I am going to delete the Blur node. I am going to bring in a Merge node, and that’s under Merge > Merge or the M key. The Merge node has an A and B input, plus an output. You can relate this to layer-based compositing by thinking of the B input as the lower layer, and you can think of the A input as the upper layer. So, what you can do is connect the Viewer to the output of Merge1. In this case, connect the B to the canyon and connect the A to the spaceship. If I zoom in, I can see my spaceship is now composited over the top of my canyon. Now, the reason that works is there’s alpha transparency around the spaceship. This was rendered in Maya. Nuke automatically recognizes the alpha channel, so this black is converted to transparency. You can tell that Nuke has recognized the alpha because of this little white line right here that’s right beside the red, green, and blue lines. These are channel lines. So, here it recognizes the rgb+alpha, whereas over on the canyon it just sees rgb. In any case, here’s your basic merge. Now, there are a few things to know about the merge that are useful. One thing is you can have more than one A input. In fact, as soon as you connect A, the A2 appears on the left. If you connect A2, you will have A3, and so on. The highest A number is equivalent to the highest layer, so A3 is on top of A2, and so on, with B on the very bottom.
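The same over-composite can be wired up in Python. On a Merge node, input 0 is the B (lower) input and input 1 is the A (upper) input; the Read node names below are just the default names assumed for this example.

    # Wiring the canyon and the spaceship into a Merge, then viewing the result.
    import nuke

    canyon = nuke.toNode('Read1')   # background still (assumed name)
    ship = nuke.toNode('Read2')     # CG render with an alpha channel (assumed name)
    merge = nuke.nodes.Merge2()     # the Merge node's internal class is Merge2
    merge.setInput(0, canyon)       # B input, the lower layer
    merge.setInput(1, ship)         # A input, the upper layer
    nuke.connectViewer(0, merge)    # look at the Merge output in the Viewer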
Now, there is a mask input on the right side, however, we will talk about that later on when we get to rotoscoping. The Merge node also has a mix slider at the very bottom here. The mix slider controls the influence of the A input. If I reduce that, A starts to fade out. Another important aspect of the Merge node is the operation up here at the top. In layer-based compositing, you may have something like a blending mode. Now, a blending mode determines how the upper layers combine with the lower layers - some type of mathematical formula. In Nuke, it’s the operation and that defaults to over. Over is similar to normal inside of After Effects. You can change this menu though to other styles, for example plus. Plus adds A and B together. In this case, it gives me a semi-transparent, brighter result in that area. I will turn that to over for now. If you are curious what these operations do, let your mouse hover over the menu. You will see all of the different mathematical formulas for all of the different operations. In these formulas, capital A and B represent the rgb values of the A and B inputs, whereas lower-case a and b indicate the alpha values of the A and B inputs.
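Written out per pixel, the two operations mentioned above look roughly like this, where capital A and B are rgb values of the A and B inputs and lower-case a is the alpha of A, all as 0-1 floats. This is just a sketch of the formulas shown in the tooltip, not Nuke's internal code.

    def merge_over(A, a, B):
        return A + B * (1.0 - a)    # "over": A on top of B, holding out B by A's alpha

    def merge_plus(A, B):
        return A + B                # "plus": straight addition, can exceed 1.0

    print(merge_over(0.4, 1.0, 0.7))  # 0.4 - a fully opaque A pixel replaces B
    print(merge_plus(0.4, 0.7))       # about 1.1 - a brighter, semi-transparent-looking result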
If you are working with CG renders, you might be concerned with premultiplication. For example, Maya premultiplies automatically and that affects the quality of the alpha edges. If you want to interpret something that is premultiplied, open up the Read node and there is a premultiplied checkbox right here - click that on.
We have now created a more complex network. We have also merged together two inputs. Now, we are ready to move on to working with transforms, and we will do that in the next video.
Working with Transforms from Foundry on Vimeo.
We are now ready to move on to the transformations in Nuke. Nuke does require a node to transform an image and, in fact, there is a node called Transform. For example, if I select Read2 for this spaceship render, I can right-mouse-button-click and choose Transform > Transform, or press the T key, and there’s a Transform node. As soon as the node has its properties open in the properties panel, you will see an interactive handle in the Viewer. Let me disconnect the B input from Merge, so you can see that better. There it is. Now, you can either transform this interactively or change the properties. For example, you can rotate or scale. There is also a center x and y. This is a point in screen space where the transform handle rests and where the transformations happen from. Now, it might be nice to get this handle in the center of the ship. You can do that by changing the center x and y values. So, there is 1100 and 500 for x and y, and now the handle is here. You can also move this handle interactively. If you click+drag in the center of the circle, you can translate. If you click+drag the long arm on the right, that rotates. If you click+drag one of the arcs of the circle, that will scale evenly in the x and y. Or you can click+drag one of the dots on the circle to scale unevenly, and also skew, which is a trapezoidal distortion, by grabbing the short lines. Now, you can animate all of these properties over time. In fact, we will discuss that in the next video.
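Those same knobs can also be set from the Script Editor; in this sketch the center values are the ones used above and the rest are arbitrary example values.

    # Setting Transform knobs via the Python API.
    import nuke

    xform = nuke.createNode('Transform')
    xform['center'].setValue([1100, 500])  # center x and y, as above
    xform['translate'].setValue([50, 0])   # example x and y offset
    xform['rotate'].setValue(15)           # example rotation in degrees
    xform['scale'].setValue(0.8)           # example uniform scale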
For now, I want to talk about a few other issues associated with transformations, which are useful to know. So, let’s go ahead and hook up the background once again, and I will zoom out. You will notice, because I have changed some of the transformation of the ship, its bounding box overhangs the bounding box of the background. That’s OK. Now there is a tool to help you keep track of where the edges of the frame are. There is a guide button right here. If you click that, you can turn on one of these guides, for example action safe. So I know if I move my ship too close to the edge, past the white line, it might get cut off on certain TVs or screens. There is also a title safe. Now, if you don’t want to see any guides, simply turn it back to no guides. A new feature of version 7 is the mask button. You can use it by going to mask ratio and picking a new mask ratio that is different to your current one. Now, the current ratio is 16:9 - it is basically high definition widescreen. If I pick a different ratio - 4:3, which is standard television - and then turn on my masks, like half, you will see this crops off the corners. This shows you what would happen if you had to convert the 16:9 to 4:3. Now if you want to turn it off, return that button to no mask.
Another issue associated with transformations is reformatting. Reformatting is something you can do when you are working with footage that has different resolutions. Let me go to Read1 and bring in a different background. I am going to grab another image, this one is just called Sky, and because it is a different size and different resolution, my frame looks different. My new sky is only 1280x960, which is much smaller. You can solve that, though, by reformatting. If I choose the Read1 node, I can right-mouse-button-click and Transform > Reformat. Reformat will force the sky to be the same size as the ship. Now, the fact that I have a Transform on the ship node is going to continue to give me an overhanging bounding box, but if I go ahead and delete my Transform, let’s see what happens. If I zoom in, the ship and the reformatted sky are the same size, so everything fits perfectly and there are no overhanging images. The Reformat works by simply scaling. If you look at the properties up here at the top, you can see that it is scaled to the current root format, which is basically the resolution determined by the Project Settings - 1920x1080. By default, it resizes it by stretching it out in the width. Now, part of the image might be cut off that way, but you can also select some different options for resize type. For example, you can choose distort, which stretches the image so all 4 corners match the target resolution.
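Here is a small sketch of that Reformat setup in Python, assuming the default knob names; the built-in format name may differ depending on your Nuke version and Project Settings.

    # Reformat scaled to a named output format, using distort so all four corners match.
    import nuke

    reformat = nuke.createNode('Reformat')
    reformat['type'].setValue('to format')   # scale to a chosen output format
    reformat['format'].setValue('HD_1080')   # 1920x1080; the format name may vary by version
    reformat['resize'].setValue('distort')   # stretch the image so all 4 corners match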
We are now ready to move on to animation.
Adjusting and Keyframing Properties from Foundry on Vimeo.
We are now ready to move on to animation in Nuke. Before we discuss animation, however, I want to go over one last issue associated with the transformations. That is concatenation. Concatenation is the ability of a program like Nuke to combine multiple transforms or transformations to maintain quality. For example, here is a network I set up as a demonstration. There is a single Read node here that has a CG image of a rock. Its output is going off in two directions. On one side, there is a set of Transform nodes that are scaling up and down. Because they are next to each other in the network, and because Nuke can concatenate, the quality is maintained for that image. Now, over on the right are a similar set of Transform nodes with the same settings. However, that set is interrupted by a Blur node. Because of the Blur node, concatenation can not happen fully and therefore the quality suffers. By the way, that is one great advantage of a node-based system: you can send the output of any node in multiple directions.
So, what's happening with the transforms on the left? We will start there. The Transform1 is scaling up by 2, Transform2 is scaling down by 0.25, and Transform3 is scaling up by 2 again. What that actually means is the rock ends up exactly the same size as when it started. It might seem strange, but this is a great way to test concatenation. Let me zoom in. Now, if concatenation is truly working, I should see no change in quality on this rock. In fact, I can test that by Shift+selecting these Transform nodes and then pressing the D key. The D key temporarily disables a node, so D, and the nodes are turned off. Here is without the transforms. If I press D again, it will turn back on, and here is with the transforms. So, with, without, with, without. I don’t see any change in quality. Therefore, I know that the concatenation has successfully happened. Nuke does this automatically. Now, if I look at the right side, this won’t be the case. In fact, I will go and plug Viewer1 into the right-hand side. There is a trick for this. Any Viewer can handle more than 1 input. Again, this is the beauty of a node network. So, if I want to hook up Transform6 to Viewer1, I can grab its output and drop it on Viewer1, and that becomes number 2. The way to look at these 2 inputs is to go to the Viewer and press the 1 or the 2 key. Here’s 2, so now I am on the output of this set of transforms. The same scaling is happening, but the concatenation is interrupted and therefore does not happen. Now the Blur, in fact, is not doing anything - the size is 0 - but it’s interrupting the flow. Concatenation does not fully happen and therefore the quality suffers. So, again, here is the 1 key - concatenation looks great. 2 key - not so good. Now, what nodes are able to concatenate? If you open up a node and you see a filter and a motionblur property, that means that node is able to look upstream and downstream, and will concatenate automatically with any other transformation nodes that it finds.
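The arithmetic behind that test is simple: the three scales multiply out to exactly 1, so with full concatenation Nuke effectively applies a single, do-nothing transform and filters the pixels only once.

    print(2.0 * 0.25 * 2.0)   # 1.0 - the rock ends up exactly the size it started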
OK, so let’s move on to some animation. We are actually going to animate this rock. I will zoom out and connect the Viewer to this Merge node I already have set up. Here’s a piece of video footage that shows a man walking across a bridge, and at a certain point he looks at the camera. I will connect the A input to the rock. Now, in order to move this rock, I will need another Transform node. I don’t want to use these old ones here. I want to make a new one. I will click off, select nothing, press the T key, and make a new Transform, and drop it onto the A pipe to insert it. This node is open in the properties panel, so it means its transformation handle is right here. Now, it would be better to get that in the center of the rock, so in order to move that, I will have to change the center x and center y values. A value of 350 in x and 250 in y will put the transform handle down here, which is much more convenient for animating. Alright, so let’s animate this guy over time. Let me go to frame 15. That is about the point where the man looks at the camera. I am going to move this interactively out of the frame, so I want the first position for this rock to be here on frame 15. How do you set a keyframe though? I just need to go in the properties panel for the Transform node and go to translate x and y, then right-mouse-button-click over the top of this button right here, which is the animation menu button, which looks like a little square with a squiggly line in it. So, right-click and choose Set key. As soon as I choose Set key, a keyframe is set for that property. You can tell because the cells turn blue. Also there is a blue dash down here in the timeline. There’s my first keyframe. Now, I can move to a different point on my timeline with my time slider, say frame 40, then interactively move my handle and place the rock somewhere else. In fact, its position is automatically keyframed. I get a new keyframe here instantaneously, and you will see a motion path in between the 2 keys. If I move to another frame, the cells are light blue. Light blue cells mean that is an in-between frame; Nuke has calculated that value based on the animation curve it has generated.
Now, I can also animate other properties, like the rotate and scale. I can go back to frame 15, which I am on right now, right-mouse-button-click on the animation menu buttons beside those properties, and Set key. Those turn blue also. Then go to the last frame, frame 40, where I have another keyframe, and then change the values. This time instead of using the interactive handle, I am going to use the sliders. So, I can increase the rotation, for example, and reduce the scale, and there, we have a really basic animation. In fact, let’s play it back.
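Keyframes can also be set programmatically. This is a sketch assuming the Transform node is named Transform7 as in this example, using the same frame numbers as above but with arbitrary example values.

    # Animating the translate knob with setValueAt (value, frame, channel index).
    import nuke

    xform = nuke.toNode('Transform7')   # assumed node name from this example
    t = xform['translate']
    t.setAnimated(0)                    # animate x - same idea as Set key on the animation menu
    t.setAnimated(1)                    # animate y
    t.setValueAt(350.0, 15, 0)          # example x value at frame 15
    t.setValueAt(250.0, 15, 1)          # example y value at frame 15
    t.setValueAt(900.0, 40, 0)          # new x position at frame 40 - Nuke interpolates between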
Now, there is a new feature in version 7 that’s a RAM preview. When I play it back the first time, it will read those frames into RAM. So, then the second, third, and fourth time it plays back, it will be much more efficient. So, when I play back frame 15 to frame 40 or so, there is a green line that indicates that those frames are in RAM. Now when I play it back, it will be in real time; the playback will be much more efficient. There we go. Of course, it would be great to fine tune this animation over time, maybe add some more keyframes, adjust the animation curves, and, in fact, we can do that in the Curve Editor. We will talk about that in the next video.
Editing Curves and Keyframes from Foundry on Vimeo.
I have created some basic animation to move this rock across the screen. It would be great to alter the animation to improve it. Now, there are several ways to do that inside Nuke. One is to go to a pre-existing keyframe, say frame 15, and update the properties. For example, maybe the scale is too large, so I will reduce that by just typing in a new number - .25 - and you can do that at any frame where there is a keyframe. So, I can go to frame 40 and then adjust the scale again, say .1. You can also alter the rotation. For example, maybe 500 for the rotation. Let’s play it back now.
Right now the rock is moving a bit slow, and it’s a little hard to see with the transform handle and the motion path there, but what I can do is click inside the Viewer and press the O key once and then twice to hide those overlays. So, what do we do with the rock now that it’s too slow? A great way to alter the speed is to go into the Dope Sheet and scale the keyframes so they are closer together. Now, before we do that, it might be worth applying another node. We are only using frames 15-40 for the animation. Let’s say I want to throw out the rest of the image sequence and not use it. You can do that in the Dope Sheet at the same time with the TimeClip node. The TimeClip node is new for Version 7. Let’s do that. I am going to select the Read1 node, right-mouse-button-click, and choose Time > TimeClip.
Now you can change the values for TimeClip in the properties panel, or you can do it right in the Dope Sheet. You can get to the Dope Sheet, right here, by clicking this tab. The Dope Sheet will show you whatever node is open, for example, TimeClip1 and Transform7. It will also show you whatever is keyframed in terms of the properties, like rotate, scale, and translate, and also their associated channels, like x and y for translate. If something is animated, you will see a keyframe tick mark on the timeline. For example, here is the y keyframe for translate. You can move these tick marks to different frames. Now, in this case, I want to scale the keyframes to move them closer together. To do that, I can click+drag a marquee box around the keyframes and then click+drag the right edge of the transform box that appears. If I pull these closer together, the animation will become faster; if I do the opposite, the animation will become slower. Let’s make them closer together. To leave the transform box, just click off of it.
Now, the TimeClip works a little bit differently. You will see the TimeClip shows brackets for the time with a line in between. It shows you what range of the input is being used, in this case, frame 0 to frame 90 of the image sequence. You can click+drag these brackets inwards to alter that. For example, I can click+drag these, so then I am only using frame 15 to frame 29. There is a time slider here that I can move to take a look, so now all my animation and my background is occurring between frame 15 and frame 29. You can also make this all start on frame 0 by moving everything to the left. To do that, click+drag a marquee box around the keyframes and around the brackets. If I let go again, we have a transform box and I can click+drag that to the left. Now I have forced everything to start at frame 0. Whenever I make an alteration in here, the TimeClip properties automatically update. In fact, I can see up here that the frame range has been set from frame 15-29, and it is offset by 15 and that allows it to start at frame 0. This is a pretty basic application of the TimeClip node. A more advanced way to use it, which takes advantage of its power, is to have it offset completed upstream animation. We will come back to that at the end of this video.
So, now the rock moves much more quickly, but I don’t want to use anything after frame 14 in this case. It just goes black due to the TimeClip, and also there is no more animation on the rock. To fix that, what I can do is go to the Project Settings, Edit > Project Settings, and make the timeline end at frame 14. There we go.
So, what else can we do here? One thing that might be nice is to get an arc in the motion of the rock. So, let me go back to the Node Graph and make sure my Transform node is open, turn back on my overlay by pressing the O key in the Viewer, and move the transform handle on some middle frame, like frame 8. As I move it, I will see the prior position for the rock, and the current position, but when I let go, it updates with the current position. When I do that, a new keyframe is laid down at frame 8. Now, the last way to alter information is through the Curve Editor. There is a Curve Editor tab right here. You can also get to the Curve Editor up here, in the properties panel, which is a little bit better. If I right-mouse-button-click on the animation menu button, I can choose Curve editor. Whatever property I clicked on, like translate, will be loaded in the Curve Editor automatically and, in fact, here are the x and y curves for translate.
Now, this Curve Editor works like many other Curve Editors in other programs. On the left will be a list of nodes, properties, and channels that have animation. If you click on a channel or a property and don’t see any curve, what you can do is click inside the Curve Editor and press the F key to frame it. So, there is the w curve for scale. I am going to go back to translate, and frame those. You can also use your camera shortcuts to move around, like scroll or zoom. Now each curve will have 2 or more keyframes. Here are 3. You can click on them to select them, you can move them up and down to change the value, or move them left to right to change where they are occurring on the timeline, in terms of the frame number. If a keyframe is selected, you can click+drag the tangent handle, which alters the shape of the curve. You can also right-mouse-button-click and choose a different tangent Interpolation, for example, Linear. Linear breaks the tangent handle, so you can move each side separately, or you can go back to the default, which is Smooth. You can delete a keyframe by clicking it and pressing the Delete key. You can make a new keyframe by Ctrl+Alt+clicking on the curve, or Cmd+Option clicking. So, there are quite a few things you can do right here in the Curve Editor in terms of shaping the curves. So, let’s say the animation is good enough for now. Let’s go back to the TimeClip node and talk about the more advanced function.
Now, it will take a minute to set up, so I am going to fast forward. I am back with the updated scene. The Transform node is being output to 3 different locations. The first goes to the Merge node that we previously used; the other outputs are going to 2 new TimeClips. Those are also connected to the Merge node through the A2 and the A3. What’s happening here is these new TimeClips are offsetting the completed animation that was created through the Transform7 node. So, what you can see is not just the first rock, but 2 other rocks that are offset in time by several frames. TimeClip2 has its frame property set to start at, with 3 entered here. This means that iteration of the rock lags behind by 3 frames. TimeClip3 also has its frame property set to start at, and it is offset by 6 frames. What’s great about this is you can always go back to the original animation. Whatever is upstream, you can change. So, I can go back to Transform7, go to the Curve Editor and, say, display the y curve. If I change that, all the TimeClip variations pick up the new animation.
One last thing we can do here is activate motion blur. In fact, the Transform node carries the motionblur option. It is set to 0, which is off by default. If you raise this up to 1 or higher, motion blur will be activated. So, those are a few uses for the TimeClip node, along with some ways to fine-tune your animation inside of Nuke.
“Tears of Steel” footage courtesy (CC) Blender Foundation - mango.blender.org
Rotoscoping from Foundry on Vimeo.
We are now ready to move on to rotoscoping in Nuke. I have imported an image sequence that will be perfect for this. This features an actor hanging from a rope on a greenscreen stage. Now, the greenscreen is not clean. There is all sorts of equipment in the way, plus the rope runs over the edge of the stage where the greenscreen runs out. So, what we can do with this footage is first rotoscope a garbage mask to get rid of all this extra equipment. The second thing we can do is rotoscope this rope to separate it out from the background. There are several nodes you can use to rotoscope in Nuke. We will use the most basic one, called Roto. Roto has been improved for Version 7, so it’s even easier to use now. I am going to go back down to the Node Graph, right-mouse-button-click and choose Draw > Roto. Once a Roto node appears, of course, its properties are in the properties panel and, also, there is a special toolbar here on the left. In fact, the third button here allows you to draw new masks. These will be special curves with points that will eventually affect the alpha of another node. Right now, this is set to Bezier. We will try that first. Once you have that tool selected, you simply have to click in the Viewer. Each time you click, you get a point and the mask shape starts to form. So, I will go around the actor and cut off things like tracking marks, shadows, and lights that I don’t want. I will stop short of the top of the greenscreen and, in order to close it, click the very first point. There we go. That is the first mask shape, and it’s listed here in the curve section of the properties panel.
That’s not working yet. What I need to do is hook the Roto into another node through a mask input. The quick way to do that here is create a Merge node, so I am going to press the M key. I will get a Merge node, hook the A pipe into the Read node, and hook the Viewer into the Merge node. Then, I can grab the mask input at the right side of the Merge node, and drop it on the Roto node. There, the mask starts to function. What’s happening is the interior of the mask is converted into an opaque alpha. Whatever is outside the mask is converted to a transparent alpha. In fact, we can look at the matte by switching to the alpha view in the Viewer. With your mouse in the Viewer, just press the A key. There we go. Go back to rgb and press A again. Now that we have a mask, we can go ahead and animate it. In fact, you will get the first keyframe for free. I can go to a different frame, like frame 1, and alter the mask. One way to do this is to click+drag a marquee box around the entire mask and let go. You will then get a transform handle. I can go ahead and move the entire mask over. As soon as I do that, I will get a brand new keyframe at frame 1. You can also move individual points. To do that, you can click off the transform handle and then pick a point. Now, in order to select a point, you need to make sure you are on one of the selection tools. For example, up here to the left, Select All. Then you can click on a mask point and then drag it to move it. So, there are 2 keyframes. I will go to frame 30 next and add 1 more. There is our first rotoscope of the actor. We also mentioned saving the rope. What we can do there is draw a second, much tighter mask that cuts the rope out. You can do that by simply going to one of these tools that will allow you to draw a mask, like the Bezier, B-Spline, or one of these shapes, like Ellipse or Rectangle.
I am going to go back to Bezier, for now. I can't really see the rope. One trick is to temporarily disconnect the mask pipe. Now I can see everything. So, with this tool selected, Bezier, I will draw a new mask shape and I will close it. Here is the result. I will hook the mask pipe back up and I have the net result of both masks. Let’s take a look at the alpha channel one more time. I will press the A key. There we can see the alpha matte. Once again, white is opaque and black is transparent. Each of these shapes is listed in the curves section, so the new one is Bezier2. Notice that each one of these has a set of options beside it. For example, if I click on Bezier1 there is an invert button - click this and the result is inverted. Beside that is an operation button. For example, if I go up to Bezier2, double-click that button which looks like 2 little squares, and I get a list of operations. This affects how the masks are combined, so I can change it from over to, say, difference. Difference causes the overlapping area to become transparent. I will return that to over just for now. Over is very similar to the mask add function in After Effects.
Now, there are other ways to affect the quality of the mask aside from moving points. For example, if I zoom in and select one point on the rope, I can see there is a small line beside it - that’s the feather. Once I see the feather line, I can click+drag that and pull it outwards. That becomes a softer transition from what’s opaque to what’s transparent. You can make a very large feather or a very small one. In this case, I will make a small feather just to soften the edges of the rope; this is per point. Now any feather is stored in the keyframe, so if I go to a different frame, I have an option to change the feather. Now, I have gone back to frame 1, so the first thing I need to do is go ahead and move the entire mask, and then I can adjust the feather.
Now, aside from the feather, you can also add or delete points. There are tools for this, over on the left. Here’s the Add Points tool. Just click on a mask line, like here, or if I want to add it to the first mask, I can select that mask and click on one of its lines. Now you will notice that the new point has a tangent handle and it’s smooth. There are actually 2 types of points inside Nuke: there is a smooth point, as I have here, or there is a cusp point, which I have over here - the kind you get by default. You can switch between the two by going to one of the other tools. Up here to the left, there is a Smooth Points tool. I can click on a hard one and smooth it out like this. Once you have a smooth point, you can go back to the select tool and then alter the tangent handles. There is also a Cusp Points tool, where you can change a smooth one back to cusp, like this. I might have to click more than one time to go back to the linear version. You can also delete points. If you just select a point, you can delete it with the delete key, or there is also a Remove Points tool. So, that is some basic rotoscoping using the Roto node.
I do want to mention the RotoPaint node. The RotoPaint node builds on the Roto node by including its rotoscoping functionality. It also adds an entire set of paint tools, and allows you to paint with a stroke-based brush. This is great for making paint fixes right in the program, so it’s worth checking out. Since we are working with greenscreen, the next step would be to move on to some of the chroma key tools inside Nuke. We will talk about that in the next video.
“Tears of Steel” footage courtesy (CC) Blender Foundation - mango.blender.org
One important task of compositing is the removal of bluescreen or greenscreen. Nuke provides a wide variety of keyers for this task. I am going to go through several of these very briefly just so you have an introduction. I have brought in two pieces of footage to try this on. There is a greenscreen of the man that we used previously and there is also a still image of a house. It just has a bright sky and, even though there is no green or blue in this particular photo, you can still use keyer tools to attack the sky and remove it. You can reach the keyer tools through the Keyer menu.
I am going to start with the simplest one, which is called Keyer. Now, even though it doesn’t have a lot of options, it is great for certain circumstances like this. I am going to drop it onto the A pipe between the Merge and the Read node. The Keyer has several operations, one of which is luminance key, which is the default. You can also target certain colors like green, or certain properties like saturation. We will leave it at luminance for now. There is also a graph here, which represents the operation values. For example, this graph represents luminance, as it runs from 0 towards 1, or the maximum. There are also four yellow bars here: A, B, C, and D. You can click-and-drag those interactively. Now B and C are overlapping at the start, but you can separate them. As soon as I move these, the alpha matte will start to be formed. Let’s go take a look at that. I am going to go into the Viewer and press the A key; there’s the alpha channel. What this signifies is any pixel with a luminance value between B and C becomes opaque. Any pixel with a luminance value between C and D, or between A and B, has a tapering value somewhere between opaque and transparent. So, in this situation, what I can do is move B and A towards the far-left, and then adjust C and D to make the sky mostly transparent. For example, what this means is any pixel with a luminance value over 0.9 becomes 100% transparent, while a pixel with a luminance value between 0.8 and 0.9 has tapering transparency.
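Put another way, the four handles define a piecewise ramp from luminance to alpha. This is only my sketch of the behaviour described above (opaque between B and C, transparent past A or D, tapering in between), using the 0.8 and 0.9 values picked here; the node itself may differ in detail.

    def luma_to_alpha(luma, A=0.0, B=0.0, C=0.8, D=0.9):
        if luma <= A or luma >= D:
            return 0.0                      # fully transparent
        if B <= luma <= C:
            return 1.0                      # fully opaque
        if luma < B:
            return (luma - A) / (B - A)     # ramp up between A and B
        return (D - luma) / (D - C)         # ramp down between C and D

    print(luma_to_alpha(0.5))    # 1.0 - the house stays opaque
    print(luma_to_alpha(0.85))   # about 0.5 - tapering transparency
    print(luma_to_alpha(0.95))   # 0.0 - the bright sky drops out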
Let’s go back to rgb. Press the A key again, and there we go. Now, it doesn’t look like anything is happening right now. What you have to do with the Keyer is premultiply the alpha. So, I will select the Keyer node, right-mouse-button-click, Merge > Premult. Premult multiplies the alpha values by the rgb values. Once I add this, the sky is removed. Now I can test this further. I can hook something up to the B pipe of the Merge. For example, I can right-mouse-button-click, and go to Image > Constant. Constant will produce a solid color - I will hook that up to the B pipe. Then go to the color wheel for that node and pick a color, like light blue. The sky is gone and now the Constant appears in the sky area.
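In other words, Premult just scales each rgb value by the alpha, which is why the removed sky goes to black transparency and soft edges darken. A tiny sketch:

    def premult(rgb, alpha):
        return [c * alpha for c in rgb]   # multiply the rgb values by the alpha

    print(premult([0.9, 0.7, 0.5], 0.0))  # [0.0, 0.0, 0.0] - fully transparent sky pixel
    print(premult([0.9, 0.7, 0.5], 0.5))  # [0.45, 0.35, 0.25] - a soft edge pixel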
Let’s move on to some other keyers. I am going to go back to the greenscreen footage. Now, in fact, you can have more than one Viewer in any project. You can make a new Viewer at any time by going to Viewer > Create New Viewer (Ctrl/Cmd+I). If you have more than one Viewer, you can switch between them by clicking the tabs. So now, we are going to work on this greenscreen. The first keyer we will try here is Keylight, so Keyer > Keylight. Foundry writes Keylight and, in fact, it is available in other compositing packages like After Effects; the functionality is the same. It looks a little intimidating at the start because of all the inputs. What you have to do is plug the Source input into the greenscreen and, in this case, the A pipe into the output. Now there are many options you can adjust. I am going to adjust the basic ones you need to remove the greenscreen. The first thing to do is select the Screen Colour. You can click the swatch here, get the eyedropper, and go back to the Viewer. There are several ways to sample pixels with the eyedropper. You can Ctrl/Cmd+click or Ctrl/Cmd+drag your mouse. I will try the drag, so click+drag, let go - the screen color is sampled and you will see it right here in the swatch, and also beside the Screen Colour property. Let’s take a look at the alpha. Now, Keylight offers this View menu, which you can change from Final Result to Combined Matte. That’s what the alpha channel looks like, so right now there is some gray in the transparent area. What you can do is raise the Clip Black to erode that. What the Clip Black does is look for any pixel value less than the slider and make it 100% transparent. There is also some gray in the white area. You can lower Clip White to make those areas more opaque, and now the matte looks pretty good. Once the matte looks good, you can return the View to Final Result. Now, to test that, I will disconnect the Constant over here, and plug it into the B pipe here. There we go, that’s pretty successful. Let’s try another keyer. I am going to disconnect Keylight, disconnect the Constant, and move that aside.
Now we will try Primatte. Primatte has been updated for Nuke 7, so now it’s even more powerful. In this case, I need to hook the fg or foreground to the greenscreen and then the output to the A pipe. Primatte has a very powerful button called Auto-Compute. If you click that, it will attempt to identify the screen color and remove it. In fact, it does a really good job right off the bat. Let’s see what the alpha looks like. I will press the A key again, and there it is. Again, there is some noise around the edges. Fortunately, Primatte offers a long list of operations you can use to clean up the matte. For example, I can switch this menu to Clean BG Noise. Now I will zoom in and sample those pixels that are too gray. Another way to sample is to Ctrl/Cmd+Shift+drag a marquee box around the problem area, which samples a whole bunch of pixels at once.
So now the edges are looking better, but there is still some gray on his jacket. We can then switch the menu to Clean FG Noise and then sample those pixels. Now that becomes more opaque. Let’s take a look at the rgb again. Press the A key - there it is. It looks pretty clean, however, there is a lot of green spill from the greenscreen in his clothing. In that case, I can go to Spill Sponge and sample those pixels to pull the green back out. There it goes. Now it’s not perfect yet, but you can see how quickly you can remove the green. Let’s go ahead and plug in the Constant though and see what that looks like. Alright, let's move on to one more keyer. I will disconnect this one and move it aside.
Now we are going to try the IBKGizmo and IBKColor: two nodes that are designed to work together to tackle greenscreen or bluescreen. So Keyer > IBKColour and Keyer > IBKGizmo. Now, the connections here are a little bit more complicated. Basically, the 1 input for the IBKColor goes into the greenscreen, as does the fg for the IBKGizmo. Then the c, or color input pipe for the IBKGizmo, goes to the IBKColor node. Then the A pipe for the Merge goes to the output of the IBKGizmo. Let’s take a look at the options. The first thing to change is the screen type for the IBKColor node. It is set to blue by default, but you can change it to green. That looks a little funny here in the Viewer, but what I can do to see what the IBKColor node is doing is plug the Viewer into that node. Here, it’s targeting the color green; whatever color is not green it’s removing. Now the aggressiveness of the removal is controlled by the size slider. If I increase that, there is more averaging and there are fewer and fewer non-green colors. Let’s plug the Viewer back into the IBKGizmo. The color information is passed from the IBKColor node to the IBKGizmo node, which turns it into a matte. Now the first thing you need to do with the IBKGizmo is make sure the screen type is set to the same color, in this case, green. Let’s plug the Constant into the B pipe and put the Viewer back to the Merge node, and there we go - successful greenscreen removal.
Now, I know all of these keyers have numerous options, which we didn't talk about. This is just to give you a brief introduction and to show you there is a wide range of keyers to tackle pretty much any type of footage.
“Tears of Steel” footage courtesy (CC) Blender Foundation - mango.blender.org
Playback and Rendering from Foundry on Vimeo.
We have covered quite a few features in Nuke. We can now discuss different ways to play back and also how to write out frames. Once again, if you are using Nuke 7, you have the RAM preview. If you can see the green line on the timeline, it means those frames are stored in RAM and the playback will become much more efficient. Another way to optimize the playback is to press the new Optimize Viewer during playback button - it looks like a snowflake. If you click that, then all other parts of the UI outside the Viewer and timeline are frozen. For example, a Read node will not show the frame number until the playback stops. Aside from the timeline, you can also play back through a Flipbook. The Flipbook renders out frames to disk and uses an external program for playback and, in fact, Nuke comes bundled with FrameCycler for this very purpose.
In order to create a Flipbook, select a node whose output you want to see, such as this ColorCorrect node, and go to Render > Flipbook Selected. You can choose a frame range, and then click OK. Once it finishes, the FrameCycler window opens. This is a very industrial-strength tool and there are many options. There is a standard set of playback controls at the very bottom, and it can also do things like crop the image based on certain aspect ratios, or display different channels such as red, green, blue, alpha, and luminance. You can view it in different colorspaces through this menu, or just choose Normal View. You can even bring in multiple clips. For example, you can go down to the Desktop button to see the file browser, browse through different directories, and then if you see an image sequence, still image, or a movie, you can select that. Place your mouse over it, like this image sequence, and click the + button. The image sequence is added to the timeline. To go back to the Viewer, go back down to Desktop - here is my original flipbook and here is the new image sequence. You can use FrameCycler to do basic editing with multiple clips. Now, there are many, many features in this program, too many to cover in a short time. I do want to mention it’s definitely worth investigating. Once you are done with FrameCycler, you can either minimize it or exit it. I will just minimize it.
Now we are ready to write out some files. Before I do that, however, I want to talk about this network. It starts with a Read node that’s reading in the image sequence. Next is a TimeClip, which changes the frame range from 100 to 200; it also offsets that by 100, so it starts at frame 0. Note that the Read node carries the same frame range and frame properties as the TimeClip node; it’s not unusual for multiple nodes to carry the same properties. This goes to show there is a lot of flexibility when it comes time to build your node networks. After that is a Reformat. Reformat is forcing the HD footage to be the project size, in this case 640x480. Because the black outside checkbox is clicked, that places a black letterbox on the top and bottom. I want to mention the Reformat has a filter property. This property is also shared by Transform nodes. What the filter does is it averages the pixels whenever there is a scale, a rotate, or a translate. For example, if the image is scaled down, that means pixels have to be thrown away. If an image is scaled up, pixels have to be replicated. The filter ensures that operation maintains the highest amount of quality. Let’s zoom in and take a look at this man’s shirt. By default, the filter is set to Cubic. That is a form of convolution filter, which again averages the pixels. If I switch this menu to Impulse, I can see what it looks like with no pixel averaging. It looks very, very pixelated. This is what you would get if there was no filtering at all. There are other filter types aside from Impulse and Cubic, for example, Notch, which is much more aggressive at averaging. There are others that offer in-between results. I am going to go back to Cubic for now. Just keep in mind if you do see filter, you have the option of changing that property to get different results. Again, the Transform node carries this too.
After the Reformat, there is a HueShift and ColorCorrect. HueShift allows you to alter the hue by rotating the color wheel (hue rotation). ColorCorrect allows you to change the saturation, contrast, gamma and gain for the entire image, or for just the shadow areas, midtone areas, or highlight areas. In that case, these two nodes are applying color grading to the image. I can see what it looks like without these nodes by Shift+selecting them and pressing the D key. So here’s before and here’s after. You can find these nodes in the Color menu. Here’s ColorCorrect and HueShift.
Let’s write these out. In order to render out an image sequence or movie, you have to use the Write node. Right-mouse-button-click, and choose Image > Write or press the W key. The Write node needs to go after the node whose output you want to write out. In this case, I will place it after the ColorCorrect. Here are the options. The first thing to note is you can write out different channels. By default, it writes out rgb or red, green, and blue. If I want it to write out alpha also, I can switch this menu to rgba, or you can choose any other number of custom channels. For example, under other layers, you have z-buffer depth channels, motion vector channels, mask channels, and even deep compositing channels. Now, not all the formats can support those channels, but some do. So, the first thing to do here is to actually go to file and press the browse (file) button. Here, you can select the directory you want to render out the files to. For example, select the Test/ folder; after that you can enter the name of a file, such as test5, and then follow the standard naming convention used by a lot of programs. So, I will add a period and several pound signs to represent the number of numeric placeholders, such as ##, which is good for rendering frames 0-99. Nuke also supports printf-style numbers. For example, if you enter %02d, it will create the same number of numeric placeholders as ##. Another period and the extension, such as .exr, and I will press Save.
Nuke automatically recognizes the extension and changes the file type to match. It also adds whatever options come with that particular format. This is openexr, which means I have a choice of data type, such as 16 bit half or 32 bit float, and various compression schemes. There are quite a few formats you can choose. Go to this menu right here and take a look. For example, there is abc, which is Alembic. That’s a new visual effects format. There are also logarithmic formats, such as cin and dpx, floating point formats such as hdr, QuickTime mov, and then various other still image formats, such as png, targa, or tiff. If you do pick a format here, such as tiff, make sure you do change the extension to match. Also note there is a colorspace menu here. This is automatically set, based on the format you choose. Once you are ready to render out, just click the Render button. You can pick a Frame range and click OK. Once that window closes, the image sequence or the movie is written out to disk.
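The whole Write setup can also be scripted. In this sketch the node name, output path, and frame range are placeholders rather than the exact ones used in the video.

    # Creating a Write node, pointing it at the ColorCorrect output, and rendering.
    import nuke

    color = nuke.toNode('ColorCorrect1')                       # assumed node name
    write = nuke.nodes.Write(file='/renders/test5.####.exr')   # placeholder output path
    write.setInput(0, color)
    write['file_type'].setValue('exr')    # match the file type to the extension
    write['channels'].setValue('rgba')    # include the alpha channel
    nuke.execute(write, 0, 100)           # same as clicking Render for frames 0-100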
“Tears of Steel” footage courtesy (CC) Blender Foundation - mango.blender.org
Motion Tracking Overview from Foundry on Vimeo.
One important task of compositing is motion tracking. Nuke provides the Tracker node for this purpose. The Tracker has been updated for Nuke 7 and is much more powerful and easier to use. I have brought in some footage of a man walking down the street with a moving camera. This will be a great place to try motion tracking. For example, I can motion track this sticker on this post and replace it with something else. Let’s give that a try.
Go back to the first frame, select the Read node, right-mouse-button-click, and select Transform > Tracker. Again, this has been re-designed for Nuke 7. Right here, you have a track field. This is where you add tracks. You can have one track or as many tracks as you want. I will click the add track button; the track listed here is track 1. It shows me what I want to track: translate, rotate, and/or scale. I am just going to do translate for now, but if you want to, you can turn on rotate and scale also. Associated with that is the anchor box for track 1 in the Viewer. You can click+drag the center of this to move it. I will go ahead and position this over the sticker. The inner box is the pattern box; that’s the pattern you are trying to track over time. I can scale this by click+dragging the ends. The outer box is the search box. This is where the node looks if it has a hard time finding the pattern. Now, with this version, there is a new zoom window right here. One nice feature is you can Shift+click+drag to zoom in or out, so you can see the pixels really closely. Another thing you can do is click+drag with your mouse in this area to position the entire box. I will zoom back out. Now, I have positioned the anchor box for track 1 on the first frame.
I can approach this as more traditional tracking and go straight to my track buttons: press one of those and let the program analyze the footage to create a motion path. The other way I could do this is to set several keyframes over time and let the program analyze between those. This helps make sure the resulting motion path is more accurate. Let’s give that a try. Because I have placed my track 1 box on the first frame already, I get the first keyframe for free - you can see the blue dash right here. Now, I can go to a different frame, say frame 195, and re-position the box. As soon as I do that, I get a new keyframe, plus a motion path. Once again, I can use my zoom window to fine-tune the position. You will also notice these two boxes up here at the top. These are keyframe patches; each one shows you what’s underneath the box for that keyframe. You can click on those patches to compare them. The view of the patch is put into the zoom window, so it’s a great way to make sure your positioning is accurate. If you go back and forth, ideally, you should see the pattern stay static and not move around. You can always add some more keyframes. For example, I will go to frame 200. This one is tricky because the man’s arm covers up the sticker. I can still position this box as best I can so it sits in the same relative position. As soon as I do that, I get a third keyframe patch. Then I can compare the keyframe positions. I will try to fine-tune this last keyframe to make it more accurate. It looks like the relative position of that post and that handrail doesn’t really change between keyframes. Once I have a few keyframes, I can then analyze. I am not going to use my regular track buttons. Instead, I will use a special button right here called Key Track All, which tracks between the keyframes and takes the keyframes into account as it tries to determine the best motion path. Let’s give that a try. It’s going to analyze forward. When it’s finished, it will give me a motion path. Once it’s done, you can play it back and see how well the track follows the pattern.
It’s looking pretty good; however, at the very end, because of the overlap with the arm, it gets confused for a couple of frames. Here’s a nice feature of this system. I am going to zoom in here, and what I can do is re-position this box on one of the bad frames. For instance, on frame 199, I can click+drag it and it actually updates the surrounding keyframes. It takes into account the keyframes that are already set as well as the updated position, and then re-calculates the path in that area. In order to improve the motion path at this point, I am going to spend more time adjusting keyframes and, in fact, I can add additional ones if I need to. I also always have the option to go back to any of my track buttons and re-analyze either part of the timeline or the entire timeline, going forward or backward. It’s going to take a few minutes for me to adjust this, so I am going to fast forward.
Alright, so I’m back and I have improved the motion path. Basically, I spent more time adjusting keyframes for the problem areas near the end of the timeline. If you watch the zoom box, you will see the pattern is more stable now. This is not the only way to see how accurate the path is; there is another new button for Nuke 7, which is right here. It’s the Show Error on Track button. If I click that, it color codes the motion path. The color of each point on the motion path indicates the error value generated by the node, and the error represents the confidence with which the node has matched the pattern for that frame against the patterns established by the keyframes. So, in other words, green is good. Now, in order to demonstrate what it looks like when it’s not so good, I can remove some of these keyframes and recalculate. To delete, just select the patch and press the Delete Key button. Let’s say that I also rushed the placement of my last keyframe; the path recalculates automatically because I moved that box for that frame. Yellow, to orange, to red indicates a rising error value. I want to undo this - Ctrl/Cmd+Z to go back to my previous version of the motion path. I will fast forward.
Now we are back with a good motion path and the five keyframes. Now that we have a good path, we can export the data. You can go to this menu and choose the style of data you want to export. If you have four tracks, you can choose CornerPin. In this case we want a match-move, so I will set the menu to Transform (match-move) and then click the create button. That creates a Transform node that is hooked up to the Tracker via expressions. If I open up the Transform node, I will see that translate, rotate, scale, and center are linked via expressions to the Tracker. Now, I can plug something into the Transform to have it follow that motion path. Let me close all these windows.
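For reference, here is a rough Python sketch of the kind of expression links the create button builds. The Tracker knob names (translate, rotate, scale, center) and the expression syntax shown are assumptions for a Tracker named Tracker1 set to match-move; the expressions Nuke actually writes can be more involved.

```python
# Rough sketch of an expression-linked match-move Transform (assumed knob names).
import nuke

transform = nuke.createNode('Transform')
transform['translate'].setExpression('Tracker1.translate.x', 0)  # x channel
transform['translate'].setExpression('Tracker1.translate.y', 1)  # y channel
transform['rotate'].setExpression('Tracker1.rotate')
transform['scale'].setExpression('Tracker1.scale')
transform['center'].setExpression('Tracker1.center.x', 0)
transform['center'].setExpression('Tracker1.center.y', 1)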
Now, in order to keep my background, I am going to make a new Merge node: press the M key. I will hook up the B input to the Read1 node - that’s the background - hook up A to the new Transform node, and hook the Viewer into the Merge node. Now, I just need something to match-move. I could bring in a Read file with a bitmap or, in this case, I will just bring in a Constant to test this (Image > Constant). I will change the color of the Constant so it’s more interesting, like this green, and I will change the format. Initially the Constant is the same size as the project, so I will pick something smaller. If you don’t see a resolution you like, you can always make a brand new one - just click the new button and enter a new name and a new size. I made one already; it’s called square75, and it’s 75x75 pixels. I will go ahead and hook that up. Now, one issue here is that the Constant is placed at the bottom left-hand corner of the composition, at 0,0 - you can see that if I go to the first frame of the timeline. What I can do, though, is line it up with the center of the motion path by adding one more Transform node. I will select the Constant and press the T key. I do need to see the motion path, so I will open the Tracker. Now what I can do is grab the transform handle for the Constant and move it so it lines up with the center of the anchor box. I will zoom in here so I can see this better, and position it until it lines up right there in the center. Now, I can close all of these windows and play it back.
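If you wanted to build this little network with Python instead of in the Node Graph, a minimal sketch might look like the following. The node names Read1, Transform1, and Viewer1, the green colour, and the offset values are assumptions for this example; in practice you would line the offset up by eye in the Viewer.

```python
# Minimal sketch of the match-move comp described above (assumed node names).
import nuke

# Register a custom 75x75 format for the Constant.
nuke.addFormat('75 75 square75')

constant = nuke.createNode('Constant')
constant['color'].setValue([0.1, 0.8, 0.2, 1.0])   # a green test colour
constant['format'].setValue('square75')

# Extra Transform to offset the Constant from 0,0 to the track's anchor point.
offset = nuke.createNode('Transform')
offset.setInput(0, constant)
offset['translate'].setValue([430, 280])           # example values only

matchmove = nuke.toNode('Transform1')              # the expression-linked Transform
matchmove.setInput(0, offset)

merge = nuke.createNode('Merge2')
merge.setInput(0, nuke.toNode('Read1'))            # input 0 is B: the background
merge.setInput(1, matchmove)                       # input 1 is A: the tracked element

nuke.toNode('Viewer1').setInput(0, merge)
```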
And now that new Constant follows where that sticker was. There are still a few issues - for instance, I would have to rotoscope to place it behind this man’s arm - but the basic motion tracking is now working. So, there’s a quick introduction to the new Tracker in Nuke.
“Tears of Steel” footage courtesy (CC) Blender Foundation - mango.blender.org
3D Workspace Overview from Foundry on Vimeo.
Nuke is not limited to a 2D space; in fact, it has a complete 3D environment built right in. For example, here is a 3D ship and a 3D sphere. In order to see the 3D environment, go to the View menu where it says 2D and switch that to 3D, and there’s the environment. In order to change the view, which is the default camera, you can use the Alt or Option key along with your mouse buttons. For example, Alt and the left mouse button scrolls, Alt and the middle mouse button zooms, and Alt and the right mouse button orbits.
Let’s see what we have in the scene. There is a 3D Camera, a Spotlight, a Point light, a primitive Sphere, an imported spaceship, and a large primitive Card in the background. Let’s take a look at the node network to see what we need to make a 3D scene happen. The node with the most connections is the Scene node. The Scene node groups together lights and geometry in order to pass them on to a render node. In order to render the scene so it becomes 2D, you need some sort of render node; in this case, there is a ScanlineRender node. Connected to the ScanlineRender is a 3D Camera. Connected to the Scene node are two lights - the Spotlight and the Point light. If I open up the properties on the Spotlight, you can see common options like color and intensity and, in the case of the Spotlight, cone angle. There are also two pieces of primitive geometry here - the Sphere and the Card. This is a good time to note that 3D nodes have a rounded, pill-like shape, as opposed to the rectangular 2D nodes.
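For scripting, the same kind of network can be assembled with Nuke’s Python API. Here is a minimal sketch; the node class names (Camera2, Spotlight, Card2) and the ScanlineRender input order are assumptions based on a default setup, so verify them against your own Node Graph.

```python
# Minimal sketch of a 3D scene network (class names and input order assumed).
import nuke

# nuke.nodes.* constructors avoid createNode's auto-connect behaviour.
camera = nuke.nodes.Camera2()        # 3D camera
spot   = nuke.nodes.Spotlight()      # spot light
sphere = nuke.nodes.Sphere()         # primitive geometry
card   = nuke.nodes.Card2()          # background card

scene = nuke.nodes.Scene()           # groups lights and geometry together
scene.setInput(0, spot)
scene.setInput(1, sphere)
scene.setInput(2, card)

render = nuke.nodes.ScanlineRender() # renders the 3D scene back to 2D
render.setInput(1, camera)           # cam input (assumed index)
render.setInput(2, scene)            # obj/scn input (assumed index)
```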
You can create a light or a primitive piece of geometry through the 3D menu. You can make your Point or your Spot, plus a Direct and a few specialized lights, like the one that’s simply called Light, which you can use to import lights from other programs, like Maya. There is also the Geometry menu, which has the primitives such as Card, or other shapes, like Cube and Cylinder. You can transform lights and geometry. For example, if I open up the Sphere, you will see there are translate, rotate, and scale properties. Once this is open, you will also see a transform handle in the Viewer. If you click+drag the handle along an axis, you can move it in that direction, for example, Y. Of course, you can also enter values into the properties panel. Lights also have their own set of transforms. One new feature is that lights can cast shadows right here in the 3D environment. For example, if I go to the Spotlight and go to the Shadows tab, you will see there is a cast shadows checkbox to click on. Let’s go back to the 2D view. You can see the shadow of the Sphere right here on the spaceship. Aside from shadows, of course, you can animate all of these properties - the lights as well as the geometry - changing over time. There are also animation buttons beside all of these properties. You can key these as you would on any other node inside Nuke.
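You can set and animate these transforms from Python as well. Here is a small sketch; the node names Sphere1 and Spotlight1 are assumed defaults, and the frame numbers and values are just examples.

```python
# Minimal sketch: transforming and animating 3D nodes from Python (assumed names).
import nuke

sphere = nuke.toNode('Sphere1')
sphere['translate'].setValue([0, 2.5, 0])   # move the sphere up along Y

spot = nuke.toNode('Spotlight1')
spot['intensity'].setAnimated()             # like clicking the animation button
spot['intensity'].setValueAt(1.0, 1)        # key at frame 1
spot['intensity'].setValueAt(4.0, 50)       # key at frame 50
```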
You will notice that the two pieces of geometry have shaders connected to their img pipes. These are necessary for the surfaces to be lit correctly. The Sphere has a Phong, which is similar to the one you might find in a program like Maya. The Card has an Emission shader, which provides the emissive, or ambient color, component. Now, as for the spaceship, it has to be imported through a ReadGeo node. The ReadGeo node has a place to bring in the file, and it supports .fbx, .obj, and Alembic (.abc) files. If there is animation in the file, Nuke will recognize it. For example, an .fbx file might have multiple takes; Nuke will recognize that and you can choose the animation take. So, if I go back to the 3D view and scrub the timeline, we will see the ship is pre-animated; this animation was created in Maya. There is also a material connected to the img pipe of the ReadGeo. Because the UV texture space came through the .fbx file, in order to map the geometry, you just need to bring in the texture bitmaps through Read nodes and connect them to a shader. For example, here is the diffuse map connected through the mapD, or map diffuse, input, and there is a specular map connected through the mapS, or map specular, input. Let’s go back to the 2D view.
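Scripting the import is straightforward too. Here is a minimal sketch, assuming the ReadGeo2 class name used by recent releases and a hypothetical file path.

```python
# Minimal sketch: importing geometry from a file (class name and path assumed).
import nuke

geo = nuke.createNode('ReadGeo2')
geo['file'].setValue('models/spaceship.fbx')   # .fbx, .obj and .abc are supported
```

From there you would connect the texture Read nodes to the shader’s mapD and mapS inputs and the shader into the ReadGeo’s img pipe, checking the input indices against your Node Graph.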
Now, if anything is animated, you can also activate motion blur. To do that, you go to the render node and, for example, with the ScanlineRender, go to the MultiSample tab and change samples to a higher number like 8. At that point, the motion blur will appear, as you can see right here. The higher the samples number, the higher the quality.
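In a script, turning this on is a one-liner. Here is a sketch assuming the render node is named ScanlineRender1.

```python
# Minimal sketch: enabling multi-sample motion blur on the ScanlineRender.
import nuke

render = nuke.toNode('ScanlineRender1')
render['samples'].setValue(8)   # higher sample counts give smoother motion blur
```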
So, there is a brief introduction to Nuke’s 3D environment. Keep in mind that any node you need to create for this can be found through the 3D node menu. This includes all your shaders, geometry, lights, Scene nodes, and cameras. Aside from animating lights and geometry, you are also free to animate cameras; they have their own set of transforms. In any case, I would suggest exploring this component of Nuke.