Pixel Ranges

When writing your model in PyTorch, be aware that when the model runs inside Nuke, the values in the tensor passed to your model's forward function correspond directly to the pixel values of the image fed to the Inference node. Most values will therefore lie in the range [0, 1], but if the input image contains superblack or superwhite pixels, the input tensor will contain values outside that range.
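As a quick standalone illustration (not Nuke-specific code), the toy tensor below contains a superblack value (-0.2) and a superwhite value (1.5), as might come from an EXR image; torch.clamp brings both back into [0, 1]:

```python
import torch

# A toy 1 x 3 x 2 x 2 "image" containing a superblack (-0.2)
# and a superwhite (1.5) pixel value
input = torch.tensor([[-0.2, 0.4], [0.9, 1.5]]).repeat(1, 3, 1, 1)

# Out-of-range values are clamped to the nearest bound
clamped = torch.clamp(input, min=0.0, max=1.0)
print(clamped.min().item(), clamped.max().item())  # 0.0 1.0
```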

For this reason, make sure that any preprocessing required to bring pixel values into the correct range for your model is carried out either in the model's forward function or externally in your Nuke script.

For example, consider a model that expects all values in the input tensor to be in the range [0, 1]. A preprocessing step would then need to be applied in the model's forward() function to ensure that all tensor values are in the correct range:

def forward(self, input):
    """
    :param input: A torch.Tensor of size 1 x 3 x H x W representing the input image
    :return: A torch.Tensor of size 1 x 1 x H x W of zeros or ones
    """

    # This model requires tensor values to be in the
    # range [0, 1], so clamp values to this range
    input = torch.clamp(input, min=0.0, max=1.0)

    modelOutput = self.model.forward(input)
    modelLabel = int(torch.argmax(modelOutput[0]))

    plane = 0
    if modelLabel == plane:
        output = torch.ones(1, 1, input.shape[2], input.shape[3])
    else:
        output = torch.zeros(1, 1, input.shape[2], input.shape[3])
    return output

Alternatively, this preprocessing could be done in the Nuke script before the image reaches the Inference node. In this case, a Clamp node could be added upstream of the Inference node to ensure pixel values are in the correct range.
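The same principle applies when a model expects something other than raw [0, 1] values, for example inputs normalised with per-channel mean and standard deviation. A minimal sketch of such a preprocessing step (the statistics shown are the common ImageNet values, used here purely as an illustration; substitute whatever your model was trained with):

```python
import torch

# Hypothetical per-channel statistics (the standard ImageNet values);
# replace these with the values your model was actually trained with
MEAN = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
STD = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)

def preprocess(input):
    """Clamp a 1 x 3 x H x W tensor to [0, 1], then normalise per channel."""
    input = torch.clamp(input, min=0.0, max=1.0)
    return (input - MEAN) / STD
```

A step like this would be called at the top of forward(), before the tensor is passed to the wrapped model.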