In my previous article, you learned how to run Neural Networks inside Maya. Now you’ll learn how to do so interactively by implementing your network in a Python DG node.
What you’ll learn
- Anatomy of a Python DG Node
- Running a Neural Network model inside the DG node
- Optimizing performance and usability
You’ll need these resources to follow this tutorial
Anatomy of a Python DG Node
DG (Dependency Graph) nodes are the atomic elements that make up a Maya scene. You can list all the DG nodes in your scene by unchecking the Outliner option that restricts the listing to DAG (Directed Acyclic Graph) objects only.
And you can inspect how DG nodes are connected using the Node Editor (Windows > Node Editor).
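If you prefer scripting, maya.cmds can give the same overview of what the Outliner shows. This is just a quick illustration comparing the full DG listing with the DAG-only listing:

import maya.cmds as cmds

print(len(cmds.ls()))          # every DG node in the scene
print(len(cmds.ls(dag=True)))  # only the DAG nodes (transforms, shapes, ...)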
You can create DG nodes with custom inputs, outputs, and computations in Python or C++. I’ll use Python for this tutorial.
To declare your custom DG node, you’ll need to create a Python plug-in for Maya. This is just a .py file where you declare the name of your node, its attributes, and the computations it should perform. Here is the collapsed anatomy of a Python plug-in that implements a single DG node. You can download this template in the resources for this article.
import maya.api.OpenMaya as om    # (1) Load OpenMaya

# (2) Inform Maya we are using OpenMaya2
def maya_useNewAPI():
    ...

# (3) Declare global node params and other global vars
nodeName = 'templateNode'
nodeTypeID = om.MTypeId(0x60011)

# (4) Here we declare the computation
class templateNode(om.MPxNode):
    ...

def create():
    ...

# (5) Here we declare the node's attributes
def init():
    ...

def _toplugin(mobject):
    ...

def initializePlugin(mobject):
    ...

def uninitializePlugin(mobject):
    ...
In the code above we load the OpenMaya API (1) and inform Maya we’ll be using OpenMaya2 (2) by declaring a maya_useNewAPI function. This is a convention. Then (3) we give the node a name and a unique hexadecimal ID (more conventions). If by any chance you have another node registered under the same ID, Maya will not load your plug-in.
We create a class based on OpenMaya’s MPxNode, the class for custom Maya nodes; this is where we define the computation. We define functions to create and initialize the node, a function that declares the properties of the plug-in, and finally the functions that should be called when the plug-in is initialized and uninitialized.
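For reference, here is a minimal sketch of how those collapsed functions are typically filled in for an OpenMaya 2.0 node. The exact bodies may differ slightly in the downloadable template, and the _toplugin helper is assumed here to simply wrap the plug-in object:

def maya_useNewAPI():
    # The mere presence of this function tells Maya to use the OpenMaya 2.0 API
    pass

def create():
    # Return a new instance of our node
    return templateNode()

def _toplugin(mobject):
    # Wrap the plug-in MObject in an MFnPlugin function set
    return om.MFnPlugin(mobject)

def initializePlugin(mobject):
    # Register the node type when the plug-in is loaded
    _toplugin(mobject).registerNode(nodeName, nodeTypeID, create, init)

def uninitializePlugin(mobject):
    # Remove the node type when the plug-in is unloaded
    _toplugin(mobject).deregisterNode(nodeTypeID)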
The things to look out for are (4) the definition of the templateNode class and (5) the init() function.
Init function
In the init function we create all of the node’s input (1) and output (2) attributes. Attributes are based on the appropriate OpenMaya classes, such as MFnNumericAttribute for numbers and MFnTypedAttribute for other types of data. Floats should be declared as OpenMaya’s kFloat type. Inputs should be writable, while outputs should not, for they are the result of the node’s computation.
def init():
    # (1) Setup the input attributes
    nAttr = om.MFnNumericAttribute()     # Maya's Numeric Attribute class
    kFloat = om.MFnNumericData.kFloat    # Maya's float type

    templateNode.a = nAttr.create('a', 'a', kFloat, 0.0)
    nAttr.hidden = False
    nAttr.keyable = True

    templateNode.b = nAttr.create('b', 'b', kFloat, 0.0)
    nAttr.hidden = False
    nAttr.keyable = True

    # (2) Setup the output attributes
    templateNode.result = nAttr.create('result', 'r', kFloat)
    nAttr.writable = False
    nAttr.storable = False
    nAttr.readable = True

    # (3) Add the attributes to the node
    templateNode.addAttribute(templateNode.a)
    templateNode.addAttribute(templateNode.b)
    templateNode.addAttribute(templateNode.result)

    # (4) Set the attribute dependencies
    templateNode.attributeAffects(templateNode.a, templateNode.result)
    templateNode.attributeAffects(templateNode.b, templateNode.result)
After adding attributes to the node (3) you’ll need to specify which inputs trigger the computation of specific outputs (4). In our template example we sum the values of ‘a’ and ‘b’, so a change to either ‘a’ or ‘b’ affects the result.
Computation
The computation is defined in the compute method of our MPxNode based class.
class templateNode(om.MPxNode):
    '''A template Maya Python DG Node.'''

    def compute(self, plug, datablock):
        # (1) Get handles from MPxNode's data block
        aHandle = datablock.inputValue(templateNode.a)
        bHandle = datablock.inputValue(templateNode.b)
        resultHandle = datablock.outputValue(templateNode.result)

        # (2) Get data from handles
        a = aHandle.asFloat()
        b = bHandle.asFloat()

        # (3) Compute
        c = a + b

        # (4) Output
        resultHandle.setFloat(c)
The data comes from Maya’s dependency graph through the data block. From it, we retrieve the handles for the inputs and outputs. We then read the values from the input handles, perform the computation, and set the values on the output handles.
Result
If your code is correct, you can tell Maya about the location of your plug-in. Edit the Maya.env file (which lives in your Username\Documents\Maya\MayaVersion\ folder) and include the following line:
MAYA_PLUG_IN_PATH="FolderWhereYourPluginLives"
Load your plug-in in the Maya Plug-in Manager, and search for the name of your node in Maya’s Node Editor. You can test whether computations are being performed correctly by changing the inputs and hovering your mouse over the outputs to see the results.
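If you prefer testing from the Script Editor, something along these lines should work (assuming the plug-in file is named templateNode.py and sits on your MAYA_PLUG_IN_PATH):

import maya.cmds as cmds

cmds.loadPlugin('templateNode.py')        # or load it from the Plug-in Manager
node = cmds.createNode('templateNode')    # create an instance of our node
cmds.setAttr(node + '.a', 1.5)
cmds.setAttr(node + '.b', 2.5)
print(cmds.getAttr(node + '.result'))     # expected: 4.0

Now let’s implement a Neural Network inside this Python DG node.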
Running a Neural Network model inside the DG node
In this example, we will use the Neural Network we trained in the previous tutorial, which classifies types of plants based on the sizes of their petals and sepals. If you haven’t followed that tutorial, I highly recommend you do so. You can download the trained model and the Maya scene in the resources for this article. After changing the name and ID of our template node, you’ll need to change its input and output attributes.
Input and output attributes
We’ll change the input and output attributes of our node to match those of our neural network. The trained model has 4 inputs: sepal length, sepal width, petal length, and petal width; and 3 outputs: the probabilities of being type Setosa, type Virginica, and type Versicolor. For simplicity, we won’t output each individual probability as a scalar value; instead we’ll output one string with the list of all probabilities and another string with the name of the winner. This is how the init function should look:
def init():
    # (1) Setup the input attributes
    nAttr = om.MFnNumericAttribute()
    tAttr = om.MFnTypedAttribute()
    kFloat = om.MFnNumericData.kFloat
    kString = om.MFnData.kString

    irisModel.sepalLen = nAttr.create('sepalLength', 'sl', kFloat, 0.0)
    nAttr.hidden = False
    nAttr.keyable = True

    irisModel.sepalWid = nAttr.create('sepalWidth', 'sw', kFloat, 0.0)
    nAttr.hidden = False
    nAttr.keyable = True

    irisModel.petalLen = nAttr.create('petalLength', 'pl', kFloat, 0.0)
    nAttr.hidden = False
    nAttr.keyable = True

    irisModel.petalWid = nAttr.create('petalWidth', 'pw', kFloat, 0.0)
    nAttr.hidden = False
    nAttr.keyable = True

    # (2) Setup the output attributes
    irisModel.win = tAttr.create('winner', 'w', kString)
    tAttr.writable = False
    tAttr.storable = False
    tAttr.readable = True

    irisModel.prob = tAttr.create('probabilities', 'p', kString)
    tAttr.writable = False
    tAttr.storable = False
    tAttr.readable = True

    # (3) Add the attributes to the node
    # (irisModel.filePath is created in the 'Removing hardcoded file path' section below)
    irisModel.addAttribute(irisModel.filePath)
    irisModel.addAttribute(irisModel.sepalLen)
    irisModel.addAttribute(irisModel.sepalWid)
    irisModel.addAttribute(irisModel.petalLen)
    irisModel.addAttribute(irisModel.petalWid)
    irisModel.addAttribute(irisModel.win)
    irisModel.addAttribute(irisModel.prob)

    # (4) Set the attribute dependencies
    irisModel.attributeAffects(irisModel.sepalLen, irisModel.win)
    irisModel.attributeAffects(irisModel.sepalWid, irisModel.win)
    irisModel.attributeAffects(irisModel.petalLen, irisModel.win)
    irisModel.attributeAffects(irisModel.petalWid, irisModel.win)
    irisModel.attributeAffects(irisModel.sepalLen, irisModel.prob)
    irisModel.attributeAffects(irisModel.sepalWid, irisModel.prob)
    irisModel.attributeAffects(irisModel.petalLen, irisModel.prob)
    irisModel.attributeAffects(irisModel.petalWid, irisModel.prob)
Note that the output strings are instances of MFnTypedAttribute, initialized with type kString. Also note that we need to explicitly declare that every input change affects both outputs.
Model computation
To load the trained model and feed data to it you’ll need to load the Keras and Numpy libraries, so make sure you add the following code to the beginning of your Python plug-in:
import numpy as np
from keras.models import load_model
You can change the computation of the node to read the new input and output attributes that were created (1). Then we get the float values from the inputs and make NumPy floats out of them so Keras doesn’t throw warnings (2). We build a NumPy array to feed the model, just as discussed in the previous tutorial, load the model from the ‘h5’ file, and get the predictions (3). Once that is done, we can set the ‘winner’ and ‘probabilities’ outputs (4).
class irisModel(om.MPxNode):
    '''A node computing the outputs of a Neural Network trained for the classification of plants.'''

    def compute(self, plug, data):
        # (1) Get data handles
        slHandle = data.inputValue(irisModel.sepalLen)
        swHandle = data.inputValue(irisModel.sepalWid)
        plHandle = data.inputValue(irisModel.petalLen)
        pwHandle = data.inputValue(irisModel.petalWid)
        winHandle = data.outputValue(irisModel.win)
        probHandle = data.outputValue(irisModel.prob)

        # (2) Get input data
        sepalLen = np.float32(slHandle.asFloat())
        sepalWid = np.float32(swHandle.asFloat())
        petalLen = np.float32(plHandle.asFloat())
        petalWid = np.float32(pwHandle.asFloat())

        # (3) Compute output
        plantData = np.array([sepalLen, sepalWid, petalLen, petalWid])
        plantData = plantData.reshape((1, 4))
        model = load_model('C:\\Users\\gusta\\Downloads\\iris.h5')
        prediction = model.predict(plantData)

        # (4) Output value
        winHandle.setString(str(np.argmax(prediction)))
        probHandle.setString(str(prediction))
You can reload your plug-in, and create an irisModel node. If you are using the sample Maya scene (available in the resources), you can pipe the plant’s parameters directly to the Neural Network. You can use annotation objects to display the results.
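As an illustration of that piping step, the connections can also be made with connectAttr. The ‘flower’ node and its attribute names below are placeholders; adjust them to whatever your scene actually uses:

import maya.cmds as cmds

# Placeholder source node/attributes -- rename to match your procedural flower
cmds.connectAttr('flower.sepalLength', 'irisModel1.sepalLength')
cmds.connectAttr('flower.sepalWidth', 'irisModel1.sepalWidth')
cmds.connectAttr('flower.petalLength', 'irisModel1.petalLength')
cmds.connectAttr('flower.petalWidth', 'irisModel1.petalWidth')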
Performance-wise, there is a problem with this code: we are loading our model from disk at every evaluation. From a usability perspective, it would be better to load the model with a file browser and to display the winner’s name instead of its index. Let’s address these issues.
Optimizing performance and usability
To load the model only once, you can define it outside of the compute method. One easy way to do this is to declare it alongside your global variables, like so:
# Declare global node params and other global vars
nodeName = 'irisModel'
nodeTypeID = om.MTypeId(0x60006)
model = load_model('C:\\Users\\gusta\\Downloads\\iris.h5')
Removing hardcoded file path
If you don’t want to hardcode the path to the model, and you eventually want to make it a node attribute, a better alternative is to create a cache that only gets updated when the path to the model changes, like so:
# Implement a class for caching loaded models
class ModelCache:
    '''An interface for loading and caching Keras models'''
    filePath = ''
    model = None

    def getOrLoad(self, filePath):
        # Return the cached model if the path hasn't changed
        if filePath == self.filePath:
            return self.model
        # Otherwise load the model from disk and cache it
        self.filePath = filePath
        self.model = load_model(filePath)
        return self.model
If you choose to use this ModelCache you’ll need to create a global instance of it that can be called during computation time, like so:
# Declare global node params and other global vars
nodeName = 'irisModel'
nodeTypeID = om.MTypeId(0x60006)
modelCache = ModelCache() # an instance of our model caching and loading class
And you will need an attribute for the user to input the file path, which can be initialized like this:
irisModel.filePath = tAttr.create('filePath', 'fp', kString)
tAttr.usedAsFilename = True
You can load this input attribute as you would any other. If you need a detailed implementation of this code please check the resources for this article.
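To give an idea of how the pieces fit together, here is a sketch of the compute method of the irisModel class once the file-path attribute and the cache are in place. It assumes you also declare attributeAffects from filePath to both outputs; the detailed version is in the resources:

def compute(self, plug, data):
    # Read the model location from the filePath attribute
    filePath = data.inputValue(irisModel.filePath).asString()
    if not filePath:
        return  # nothing to compute until a model file is chosen

    # The cache reloads the model from disk only when the path changes
    model = modelCache.getOrLoad(filePath)

    # Gather the inputs, predict, and write the outputs as before
    plantData = np.array([
        data.inputValue(irisModel.sepalLen).asFloat(),
        data.inputValue(irisModel.sepalWid).asFloat(),
        data.inputValue(irisModel.petalLen).asFloat(),
        data.inputValue(irisModel.petalWid).asFloat(),
    ], dtype=np.float32).reshape((1, 4))
    prediction = model.predict(plantData)

    data.outputValue(irisModel.win).setString(str(np.argmax(prediction)))
    data.outputValue(irisModel.prob).setString(str(prediction))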
Adding class (plant) names
Finally, to output the name of the winning class instead of its index, all you need is a dictionary like this:
plantNames = {
    0: 'Iris Setosa',
    1: 'Iris Virginica',
    2: 'Iris Versicolor'
}
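With the dictionary in place, the winner output can be set by looking up the predicted index, roughly like this:

# Inside compute, after running the prediction
winnerIndex = int(np.argmax(prediction))
winHandle.setString(plantNames[winnerIndex])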
In conclusion
When everything is ready and properly connected it should look like this:
In the first tutorial of this series, you learned how to train a Neural Network to classify any group of scalar values in your Maya scene. You saw how this works in the example of a procedural flower whose parameters can be used to classify its type. In that first tutorial, the classification was done from within Maya’s Script Editor. Now you have learned how to do the same computation directly inside Maya’s dependency graph. This means you can use your Neural Network’s output to drive any other node in Maya interactively! In the image above you can see these outputs influencing annotation objects, but they could drive any other Maya nodes. I hope you can see the potential in this.