SourceGeo Tutorial¶
This tutorial walks you through implementing a minimal Op which creates 3D
geometry by subclassing DD::Image::SourceGeo
. The Op creates a Tetrahedron
in the Viewer and allows you to manipulate the points describing the corners
and perform additional transformations.
It demonstrates how to:

- create an object
- add points to it
- create a primitive which references the points
- consider normal orientation when creating faces
- create UV-mapping for texture rendering
- calculate geometry hashes
- enable transformations to move, resize and rotate the object
- use the rebuild flags to skip unnecessary work
Tetrahedron 3D Coordinates¶
A tetrahedron is a pyramid-like 3D structure, composed of 4 triangular faces with a total of 4 vertices and 6 edges. In this tutorial we will construct a regular tetrahedron, in which each face is an equilateral triangle.
Let’s start by identifying the points in a 3D coordinate system that should compose the base of our tetrahedron, which will lie in the XZ plane. Let’s call those points p0, p1 and p2, call the equilateral triangle they form E, and call R the origin of our 3D world. For convenience, we can design the base as being inscribed in a circle of radius 1 centred on the origin, meaning that the base points will lie on its circumference.
This is how the coordinates of p0, p1 and p2 look from a top view of the XZ plane:
                z ^
                  |
      p0 +--------+--------+ p2
  sin(30). \      | R     / .
    <-------\-----+-----/------->
             \    |    /       x
              \   |   /
               \  |  /
                \ | /
    -1 . . . . . \|/
                  + p1
                  |
      -cos(30)    0    cos(30)
R  = [ 0,        0,  0      ]
p0 = [-cos(30),  0,  sin(30)]
p1 = [ 0,        0, -1      ]
p2 = [ cos(30),  0,  sin(30)]
There are many ways to find the coordinate values above. One way is to create the segment from R to p2. Now we can see that R, p1 and p2 form an isosceles triangle in which the sides (p1, R) and (R, p2) both have length 1; let’s call this triangle I. Since E is equilateral, its angle at p2 measures 60 degrees, and by symmetry the segment (R, p2) bisects it, so in I the angles at p1 and p2 must each measure 30 degrees. Now it is easy to see that the right triangle with vertices at R, (p2.x, 0, 0) and p2 has a 30 degree angle at R, which leads us to p2.x being cos(30) and p2.z being sin(30).
Note that p0, p1 and p2 all have 0 as their Y coordinate. To complete the tetrahedron, we need to add a fourth point, p3, placed at a height equal to the height of the base triangle, 1 + sin(30). That is its Y value; its X and Z values are both 0, resulting in p3 being [0, 1 + sin(30), 0].
To keep the center of our tetrahedron at the center of our 3D world, we can subtract sin(30) from all Y components, resulting in the following coordinates:
p0 = [-cos(30), -sin(30),  sin(30)]
p1 = [ 0,       -sin(30), -1      ]
p2 = [ cos(30), -sin(30),  sin(30)]
p3 = [ 0,        1,        0      ]
The frontal view of the XY plane is now this:
               y ^
                 |
      1 .........+ p3
                /|\
               / | \
              /  |  \
    <--------/---+---\--------->
            /    |    \       x
           /     |     \
 -sin(30)./______|______\.
        p0       |       p2
      -cos(30)   0    cos(30)
UV-mapping¶
Now that we have all the coordinates that compose our tetrahedron, we can think about how to render a 2D texture on it. To do so, we need to understand the role of UV-mapping, which associates a 2D (u, v) texture coordinate with each 3D (x, y, z) point.
We can think of UV-mapping a tetrahedron as wrapping the pyramid with a (very large) sheet of paper, making sure all parts of the sheet lie flat against the surfaces of the pyramid; any parts of the sheet not in contact with the pyramid can be trimmed (or disregarded).
The 1x1 2D sheet with the markings for folding would look like this:
  V ^
    |
  sqrt(3)/2 . . . . . . . . .p33
    |                        /\
    |                       /  \
    |                      / 3  \
    |                     /      \
  sqrt(3)/4 . . . . . .p0+--------+p2
    |                   / \   0  / \
    |                  / 1 \    / 2 \
    |                 /     \  /     \
    0 ------------p31+-------p1-------+p32------> U
                     0  0.25 0.5 0.75 1

  Ui indexes:        0   1    2   3   4
  Vi indexes:   0 = 0,  1 = sqrt(3)/4,  2 = sqrt(3)/2
The base of the tetrahedron would be positioned over the face number 0. Faces 1, 2 and 3 would be ‘wrapped’ over the sides of the tetrahedron with the points p31, p32 and p33 meeting at the p3 coordinate.
To simplify the representation, we will construct the UV-mapping by referencing indexes into the U and V axes, where U is the array of relevant U values [0, 0.25, 0.5, 0.75, 1] and V is the array [0, sqrt(3)/4, sqrt(3)/2]. Note that all values lie in the inclusive range [0, 1].
We can see from that visual representation how each surface is mapped to the 2D texture:
Face | 3D coordinates | Mapping to U and V indexes
-----+----------------+---------------------------
  0  | p0, p2, p1     | (1, 1), (3, 1), (2, 0)
  1  | p0, p1, p31    | (1, 1), (2, 0), (0, 0)
  2  | p1, p2, p32    | (2, 0), (3, 1), (4, 0)
  3  | p2, p0, p33    | (3, 1), (1, 1), (2, 2)
Note that the order in which we pick the points matters for the normals. For face 0 we chose the points p0, p2 and p1, instead of p0, p1 and p2; the latter order would make the face normal point inwards instead of outwards, which wouldn’t be very useful.
Basic Setup: Includes and Namespaces¶
We’ll be using assert statements in our code to catch coding errors, so we need
the definition of the assert()
function:
#include <cassert>
Next, we’ll need to include the relevant parts of the NDK:
#include "DDImage/DDMath.h"
#include "DDImage/Knobs.h"
#include "DDImage/PolyMesh.h"
#include "DDImage/SourceGeo.h"
We need DDImage/SourceGeo.h
because that’s where our base class is defined.
It includes most of the other headers we’ll need for our plugin. We also
include DDImage/PolyMesh.h
, which declares the specific type of primitive
we’ll be creating.
Most of the NDK classes are in the DD::Image
namespace. To save us having
to prefix every reference to them, we provide a using
statement:
using namespace DD::Image;
It’s good practice to put your own code inside a namespace as well. This helps
prevent conflicts with symbols defined elsewhere. For this example we’ll use a
namespace called Tetra
:
namespace Tetra {
// All further code in this tutorial will go inside this block.
}
Constants¶
All NUKE Ops need to provide a class name and help text that is displayed when the user hovers the mouse over the plugin icon. Following general good programming practice, we declare those as constants:
static const char* kClassName = "Tetrahedron";
static const char* kHelp = "Creates a 3D Tetrahedron";
We can also define constants for the knob labels we will use. There will be four XYZ_knobs, one for each vertex handle used to deform the geometry, and one Axis_knob that will allow transformations such as translation, scaling and rotation:
static const char* const kVertexLabels[4]{"p0", "p1", "p2", "p3"};
static const char* const kAxisLabel = "transform";
We will also use constants to define the shape’s faces and their respective UV-mappings. To make the UV mapping easier to read, we use auxiliary arrays for the U and V values (kU and kV), thus making it possible to refer to their indexes instead of their values:
static const float kU[5]{0.0f, 0.25f, 0.5f, 0.75f, 1.0f};
static const float kV[3]{0.0f, sqrtf(3.0f) / 4.0f, sqrtf(3.0f) / 2.0f};
static const int k3dFaces[4][3]{{0, 2, 1},  // p0, p2, p1
                                {0, 1, 3},  // p0, p1, p31
                                {1, 2, 3},  // p1, p2, p32
                                {2, 0, 3}}; // p2, p0, p33
static const int kUVMapping[12][2]{{1, 1}, {3, 1}, {2, 0},  // face 0
                                   {1, 1}, {2, 0}, {0, 0},  // face 1
                                   {2, 0}, {3, 1}, {4, 0},  // face 2
                                   {3, 1}, {1, 1}, {2, 2}}; // face 3
Declarations¶
Now we get to the more important part, where we declare the Op class and its
members. We’ll call the class Tetrahedron
. The declaration looks like this:
class Tetrahedron : public SourceGeo
{
public:
  explicit Tetrahedron(Node* node);

  static const Description kDescription;

  const char* Class() const override;
  const char* node_help() const override;

protected:
  void get_geometry_hash() override;
  void create_geometry(Scene& scene, GeometryList& out) override;
  void knobs(Knob_Callback callback) override;

private:
  Vector3 _vertices[4];
  Matrix4 _localTransformMatrix;
};
The first thing we declare is the constructor; Ops need to have a constructor
which takes a pointer to a Node. The Class()
method is required on every
custom Op class. We’re also adding some knobs to the node, so we need to
override the knobs()
method.
The static kDescription
member tells NUKE what the Op is called and how to
create it. This is required on every custom Op class. As well as a class name
string for the Op, it also stores a pointer to the function for creating the Op
(the build()
function described below, in this case).
The standard SourceGeo
method for creating geometry is called
create_geometry
; we will override that to create our tetrahedron. The
geometry we create depends on the values of our knobs, so we need to override
the get_geometry_hash
method as well.
Finally, we declare _vertices[4]
, an array that will hold the corners of
our shape, and _localTransformMatrix
, which will be used for 6DoF
transformations. These members will serve as the storage for the knobs we
create, so that we can manipulate the values inside NUKE.
The Easy Bits¶
The implementations of the constructor, the Class()
method and the build()
function are fairly straightforward, so we won’t spend much time on them:
Tetrahedron::Tetrahedron(Node* node)
  : SourceGeo(node)
{
  static const float kRadians30 = radians(30);
  static const float kCos30 = cos(kRadians30);
  static const float kSin30 = sin(kRadians30);
  _vertices[0].set(-kCos30, -kSin30, kSin30); // p0
  _vertices[1].set(0, -kSin30, -1);           // p1
  _vertices[2].set(kCos30, -kSin30, kSin30);  // p2
  _vertices[3].set(0, 1, 0);                  // p3
  _localTransformMatrix.makeIdentity();
}
static Op* build(Node* node) { return new Tetrahedron(node); }
const Op::Description Tetrahedron::kDescription(kClassName, build);
const char* Tetrahedron::Class() const { return kDescription.name; }
const char* Tetrahedron::node_help() const { return kHelp; }
The constructor simply calls the base class constructor, passing through the
node, and initialises the data members. Note that the DD::Image::Vector3
class doesn’t do any initialisation in its no-arg constructor; assuming that it
zeroes the vector is a common mistake.
The static build()
function is what NUKE uses to create instances of our plugin
when you add a Tetrahedron node to the Node Graph (DAG). This method can
actually be called anything you like, so long as it keeps the same signature.
It can also be a standalone function instead of a static method, but having it
as a static method is recommended for the sake of clarity in your code.
The Class()
method returns the constant we defined earlier, and the
build()
function creates a new Tetrahedron instance and then returns a
pointer to it. Note that there’s no need to define a destructor for this
example; the default generated by the compiler is sufficient for this class.
Adding Some Knobs¶
A node with no controls is not particularly useful, so we override the
knobs()
method to add some. We call the parent
SourceGeo::knobs()
method first: omitting it would lead to crashes, and it
also adds the default knobs for every SourceGeo, namely two
enumerations: “display” and “render”; and three checkboxes: “selectable”, “cast
shadow” and “receive shadow”.
Next we add an Axis_knob
that will help us apply 6DoF
transformations, allowing the user to resize, rotate and move our
tetrahedron. We will also add four XYZ_knobs
, which will provide handles
that the user can use to adjust the position of each corner of the tetrahedron
independently:
void Tetrahedron::knobs(Knob_Callback callback)
{
  SourceGeo::knobs(callback);
  auto axisKnob = Axis_knob(callback, &_localTransformMatrix, kAxisLabel);
  for (int i = 0; i < 4; ++i) {
    auto knob = XYZ_knob(callback, &(_vertices[i].x), kVertexLabels[i]);
    knob->geoKnob()->setMatrixSource(axisKnob->axisKnob());
  }
}
Note that we set the local transformation matrix as the matrix source of all
our XYZ_knobs
. That ensures the handles move along with any
transformation applied to our shape.
Generating the Geometry¶
The create_geometry
method is the meat of a SourceGeo
subclass as it
creates the geometry that is passed down through the Node Graph. The out
parameter is where we put the geometry we create; NUKE takes care of the rest.
In our implementation we create a single PolyMesh
primitive, then provide
the locations of the corner points, and finally, set the texture coordinates
for each corner:
void Tetrahedron::create_geometry(Scene& scene, GeometryList& out)
{
  int obj = 0;

  if (rebuild(Mask_Primitives)) {
    out.delete_objects();
    out.add_object(obj);

    auto mesh = new PolyMesh(4, 4);
    for (int i = 0; i < 4; ++i) {
      mesh->add_face(3, k3dFaces[i]);
    }
    out.add_primitive(obj, mesh);
  }

  if (rebuild(Mask_Points)) {
    PointList& points = *out.writable_points(obj);
    points.resize(4);
    for (int i = 0; i < 4; ++i) {
      points[i] = _vertices[i];
    }
  }

  if (rebuild(Mask_Attributes)) {
    Attribute* uv = out.writable_attribute(obj, Group_Vertices, "uv",
                                           VECTOR4_ATTRIB);
    assert(uv != nullptr);
    for (int i = 0; i < 12; ++i) {
      uv->vector4(i).set(kU[kUVMapping[i][0]], kV[kUVMapping[i][1]], 0, 1);
    }
  }
}
In order to speed things up, NUKE tries to avoid recomputing values when it
doesn’t have to. As part of the processing sequence for geometry Ops it checks
the hashes for each group (see the get_geometry_hash()
discussion below)
and sets flags to indicate which parts of the geometry need to be rebuilt. We
test these flags using the rebuild()
function to see whether we need to
rebuild a specific part.
If we need to recreate primitives, the first thing we do is call
out.delete_objects()
to ensure that the output list is empty. Then we add a
single object to hold our tetrahedron (an object is a collection of points,
primitives, vertices, and attributes). Finally we add a single PolyMesh
primitive. Note that we haven’t specified any positions or attributes yet.
To (re)create points, we need to obtain a PointList
object that we can add
our points to. The out.writable_points()
method gives us this. We only ever
have four points, so we resize the list to 4 and set each value to match the
corresponding entry in our _vertices
member.
Texture coordinates in NUKE’s 3D system are stored in a vertex attribute
(Group_Vertices
) named “uv”. If the rebuild flag for attributes is set,
we’ll need to recreate them. The usual pattern for updating any attribute is:
1. Call out.writable_attribute()
to get an object you can store the attribute values in. The returned attribute already has a slot allocated for each item in the group.
2. Use the accessor methods of the relevant type (e.g.
uv->vector4(i)
above) to set values for each item.
The get_geometry_hash()
Method¶
In order to minimize the amount of recomputation when you change something in
the Node Graph, NUKE keeps separate hashes for various aspects of the geometry
and uses them to set the appropriate rebuild flags (these are the flags we used
in the create_geometry
method above).
The get_geometry_hash()
calculates the current value for each of these
hashes:
void Tetrahedron::get_geometry_hash()
{
  SourceGeo::get_geometry_hash();
  for (auto& vertex : _vertices) {
    vertex.append(geo_hash[Group_Points]);
  }
  _localTransformMatrix.append(geo_hash[Group_Matrix]);
}
The SourceGeo
class provides a default implementation of this method which
incorporates hashing of the material input, so we call that first.
The four vertex knobs affect only the point locations on the geometry we create, so we hash them into the points group; similarly, the transform knob affects only the local matrix, so we hash _localTransformMatrix into the matrix group.
There’s nothing else that affects the geometry that we’ll produce, so our work here is done.