Camera Tracking

Nuke’s CameraTracker node is designed to provide an integrated camera tracking, or match-moving, tool, which allows you to create a virtual camera whose movement matches that of your original camera. Tracking the camera movement in 2D footage enables you to add virtual 3D objects to that footage.

Introduction

With the CameraTracker node, you can track the camera motion in 2D sequences or stills to create an animated 3D camera or a point cloud and scene linked to the solve. You can automatically track features, add User Tracks or tracks from a Tracker node, mask out moving objects using a Bezier or B-spline shape, and edit your tracks manually. CameraTracker can solve the position of several types of cameras as well as solve stereo sequences.

Quick Start

The tracking process is outlined below, whether you intend to track a sequence or a set of stills:

1.   Connect the CameraTracker node to the sequence you want to track. See Connecting the CameraTracker Node. A Python sketch covering steps 1 through 5 follows step 5 below.
2.   Mask out any areas of the image that may cause problems for CameraTracker, such as movement within the scene or burn-in. See Masking Out Regions of the Image.
3.   If you're tracking stereoscopic or multi-view images, set the Principal View on the CameraTracker or Settings tabs. See Working with Multi-View Scripts for more information.
4.   Set the camera parameters, such as Focal Length and Film Back Size, if they are known. These are described under Setting Camera Parameters.
5.   Set the Source dropdown to Sequence or Stills, and then:

If you intend to track a continuous Sequence of frames, set the tracking preferences using the Features and Tracking controls on the Settings tab. See Tracking in Sequence Mode for more information.

If you're using Stills, you can track all frames in the same way as a sequence, or track a subset of Reference Frames using the +/- keyframe buttons above the Viewer or in the properties panel. See Tracking in Stills Mode for more information.

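If you prefer to build this setup from Nuke's Script Editor, the sketch below shows a Python equivalent of steps 1 through 5. It is only a sketch: the file path is a placeholder, the mask input index is a guess, and the knob names used for the camera and source settings (focalLengthMode, focalLength, filmBackSizeH, and source) are assumptions you should verify by printing tracker.knobs() in your version of Nuke.

    import nuke

    # Step 1: read the footage and connect a CameraTracker to it.
    read = nuke.nodes.Read(file='path/to/footage.####.exr')  # placeholder path
    tracker = nuke.nodes.CameraTracker()
    tracker.setInput(0, read)

    # Step 2: mask out moving objects or burn-in with a Roto shape.
    # The mask input index is an assumption -- hover over the node's
    # input arrows in the Node Graph to confirm which one is the mask.
    roto = nuke.nodes.Roto()
    roto.setInput(0, read)
    tracker.setInput(1, roto)

    # Step 4: set known camera parameters. The knob names below are
    # assumptions; print tracker.knobs() to find the real ones.
    for name, value in [('focalLengthMode', 'Known'),
                        ('focalLength', 35.0),
                        ('filmBackSizeH', 24.576)]:
        if name in tracker.knobs():
            tracker[name].setValue(value)

    # Step 5: choose Sequence or Stills tracking (knob name assumed).
    if 'source' in tracker.knobs():
        tracker['source'].setValue('Sequence')
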
6.   You can place User Tracks to improve difficult solves, use an entirely manual tracking approach, or set 3D survey points. Survey points let you tie your sequence to a known 3D world, such as one created using stills. See Working with User Tracks for more information.

Tip:  3D survey points have replaced the ProjectionSolver workflow, but you can still add ProjectionSolver nodes by pressing X in the Node Graph and entering ProjectionSolver as a Tcl command.

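If you're scripting, the same node can be created without the X dialog. The lines below are a sketch using Nuke's general node-creation API rather than a documented ProjectionSolver workflow:

    import nuke

    # Either line adds a ProjectionSolver node, just like typing
    # "ProjectionSolver" into the X (Tcl command) dialog.
    nuke.createNode('ProjectionSolver')
    # nuke.tcl('ProjectionSolver')   # or run the Tcl command directly
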
7.   Click Track to begin tracking the sequence.
8.   Solve the camera position by clicking Solve and refine it, if necessary. For more information, see Solving the Camera Position.
9.   Set the ground plane, if required, and adjust your scene. See Adjusting the Scene.
10.   Select what to export from the solve using the Export dropdown and click Create.

You can export an animated camera, a stereoscopic or multi-view rig, a 3D scene and point cloud, lens distortion, or cards. See Using Solve Data.

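If you want to drive the Export dropdown and Create button from a script, their knob names are not listed here, but you can discover them by inspecting the node. The sketch below uses only general Nuke API calls; the node name CameraTracker1 is an assumption:

    import nuke

    tracker = nuke.toNode('CameraTracker1')  # adjust to your node's name

    # List every knob with its type so you can spot the Export dropdown
    # (an Enumeration_Knob) and the Create button knob.
    for name, knob in sorted(tracker.knobs().items()):
        if isinstance(knob, nuke.Enumeration_Knob):
            print(name, type(knob).__name__, knob.values())  # include dropdown options
        else:
            print(name, type(knob).__name__)
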
11.   If you have multiple sources of footage covering the same scene or content, you can also use survey points to solve each source and then register them all in the same world. See Combining Solves.
12.   Add your 3D virtual objects to the footage. See Placing Objects in the Scene.
13.   By default, any 3D objects you add to your footage do not have lens distortion applied to them. As a result, they can look like they weren't shot with the same camera. To fix this, see Accounting for Lens Distortion.
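
As a rough illustration of steps 10 to 12, the sketch below wires an exported camera into a simple 3D render over the original plate. The node names Camera1 and Read1, the choice of geometry, and the ScanlineRender input order are all assumptions; adjust them to match your own script, and see Accounting for Lens Distortion for step 13.

    import nuke

    # These node names are assumptions -- the export step names its nodes
    # automatically, so substitute whatever appears in your Node Graph.
    camera = nuke.toNode('Camera1')    # the exported tracking camera
    footage = nuke.toNode('Read1')     # the original plate

    # Place some 3D geometry in the tracked scene...
    geo = nuke.nodes.Cube()            # any 3D geometry node works here
    scene = nuke.nodes.Scene()
    scene.setInput(0, geo)

    # ...and render it over the footage with the solved camera.
    render = nuke.nodes.ScanlineRender()
    # Input indices are assumptions -- hover over the input arrows to
    # confirm which is bg, obj/scn, and cam in your version of Nuke.
    render.setInput(0, footage)   # bg: the original plate
    render.setInput(1, scene)     # obj/scn: the 3D objects
    render.setInput(2, camera)    # cam: the exported camera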