DeepPixelOp is a base class that provides similar functionality to the combination of NukeWrapper and PixelOp, albeit entirely within the base class rather than partly in a wrapper. DeepPixelOps provide no support for spatial operations: each pixel on the input corresponds exactly to a pixel on the output, and the samples within each pixel also correspond (with the exception that samples from the input can be dropped and not represented in the output at all).
Here is an example class that implements a desaturate operation on the pixels, setting the red, green and blue channels to the luminance.
#include "DDImage/DeepPixelOp.h" #include "DDImage/Knobs.h" #include "DDImage/RGB.h" static const char* CLASS = "DeepLuma"; using namespace DD::Image; class DeepLuma : public DeepPixelOp { public: DeepLuma(Node* node) : DeepPixelOp(node) { } const char* node_help() const { return "Take the luminance of deep data, eliminating chroma info."; } const char* Class() const { return CLASS; } virtual void in_channels(int, ChannelSet& channels) const { if (channels & Mask_RGB) channels += Mask_RGB; } virtual void processSample(int y, int x, const DD::Image::DeepPixel& deepPixel, int sampleNo, const DD::Image::ChannelSet& channels, DeepOutPixel& output) const { bool madeLuma = false; float luma; foreach(z, channels) { if (z == Chan_Red || z == Chan_Green || z==Chan_Blue) { if (!madeLuma) { luma = y_convert_rec709(deepPixel.getUnorderedSample(sampleNo, Chan_Red), deepPixel.getUnorderedSample(sampleNo, Chan_Green), deepPixel.getUnorderedSample(sampleNo, Chan_Blue)); madeLuma = true; } output.push_back(luma); } else { output.push_back(deepPixel.getUnorderedSample(sampleNo, z)); } } } }; static Op* build(Node* node) { return new DeepLuma(node); } static const Op::Description d(CLASS, "Image/DeepLuma", build);
There are two main functions that a DeepPixelOp subclass needs to implement: in_channels and processSample.
- virtual void DeepLuma::in_channels(int, ChannelSet& channels) const
This function, like the PixelIop::in_channels function it is based upon, is called by NUKE to determine what extra channels are needed from the input. The desaturate example always needs all of red, green and blue, even if only one of those channels has been requested, so it adds Mask_RGB whenever any RGB channel is in the requested set.
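Other ops may need different extra channels. Here is a minimal, hypothetical sketch (not part of the DeepLuma example) of in_channels for a premultiply-style DeepPixelOp that needs alpha from the input whenever any colour channel has been requested:

```cpp
// Hypothetical sketch: a premultiply-style op needs the alpha channel
// in order to process red, green and blue, so it asks NUKE to fetch
// alpha whenever any of those channels is requested.
virtual void in_channels(int /*input*/, ChannelSet& channels) const
{
  if (channels & Mask_RGB)
    channels += Chan_Alpha;  // also fetch alpha from the input
}
```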
- virtual void DeepLuma::processSample(int y, int x, const DD::Image::DeepPixel& deepPixel, int sampleNo, const DD::Image::ChannelSet& channels, DeepOutPixel& output) const
This is the main function that subclasses of DeepPixelOp must implement. It is given an input pixel in deepPixel and should process the sample indicated by sampleNo, producing values for the channels in channels and pushing them onto output. It is also given the x and y coordinates of the pixel.
The function may return without adding anything to output; in that case the sample is omitted from the result. Otherwise, it should fill output by pushing one float value per channel, in the order that foreach iterates over channels. The channels set might be smaller or larger than, or even disjoint from, the channels present in deepPixel, depending upon your in_channels implementation.
Our example passes all channels through unchanged, except for red, green and blue. For these, it calculates a luminance value once per sample and then pushes that value for each of the RGB channels.
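To illustrate the point about omitting samples, here is a minimal, hypothetical sketch of a processSample that culls nearly transparent samples. It assumes a float member _threshold (set from a knob, not shown) and an in_channels that adds Chan_Alpha to the requested set; neither is part of the DeepLuma example above.

```cpp
// Hypothetical sketch: drop samples whose alpha is below _threshold by
// returning before anything is pushed onto 'output'; copy all other
// samples through unchanged. Assumes in_channels has added Chan_Alpha
// so the alpha value is available from the input.
virtual void processSample(int y, int x,
                           const DD::Image::DeepPixel& deepPixel,
                           int sampleNo,
                           const DD::Image::ChannelSet& channels,
                           DeepOutPixel& output) const
{
  // Returning without touching 'output' omits this sample entirely.
  if (deepPixel.getUnorderedSample(sampleNo, Chan_Alpha) < _threshold)
    return;

  // Otherwise pass every requested channel through, in foreach order.
  foreach (z, channels) {
    output.push_back(deepPixel.getUnorderedSample(sampleNo, z));
  }
}
```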