
‘Fast and Deep Deformation Approximations’ Implementation


3D animated characters in feature films use sophisticated rigs with complex deformations that can be computationally intensive. The authors of the paper ‘Fast and Deep Deformation Approximations’, Bailey et al., propose a method for approximating such deformations using Neural Networks. I have created a short video outlining the proposed model, which you can watch by clicking here. This article is meant as support material to that video, so I encourage you to watch it first.

Three cylinders bending with different kinds of deformations

I have also implemented a prototype of that same model in Maya, and in this tutorial, I am going to show you how to implement it yourself. Here is what you will learn:

• Create the dataset needed to train the model
• Train a regression Neural Network to correlate transforms to deformations
• Implement a custom deformer using the trained model

Create the dataset needed to train the model

In the FDDA paper, the authors use one Neural Network per joint to correlate joint transformations to mesh deformations. So, the first thing you’ll need to do is create the dataset that allows you to train such a network.

I have created two sample scenes that you can download from the resources for this article. One has a model deformed with clusters, which will be your base mesh; the other has a more complex set of deformations, which is what you’ll try to approximate.

Base mesh (left) and deformed mesh that should be approximated (right)

You can import both these models into a new scene (using namespaces) and use a script to extract the joint transform and the displacement between these two models into a CSV file. The full script is available in the resources for this article, but I’ll go over the important stuff here. The first thing we do is import the packages we’ll use; then we set some global parameters.
import pymel.core as pmc
import numpy as np
from random import random

# Global vars (customize before running)
linSkin_mesh_name = 'linSkin:pCylinder1'      # name for the base model with linear skinning
linSkin_joint_name = 'linSkin:joint2'         # model input
customDef_mesh_name = 'customDef:pCylinder1'  # name for the deformed model to be approximated
customDef_joint_name = 'customDef:joint2'     # model input
samples = 30                                  # samples to be collected
csvIn = 'c:/yourPath/inputs.csv'
csvOut = 'c:/yourPath/outputs.csv'

Most of the code above is self-explanatory. One thing I’d like to comment on is how we are sampling the data. We’ll create random transforms for the joint. The variable ‘samples’ refers to how many of these random transforms, and corresponding deformations, we’ll create.

After that, we define a function to extract the displacement between two meshes. Note that we construct one big list with all displacements for all mesh vertices.

def getMeshDisplacement(meshA, meshB):
    '''Get displacement between two Maya polygon meshes as a single row vector.'''
    # Check if meshes match
    nverts = len(meshA.verts)
    if nverts != len(meshB.verts):
        raise Exception('Meshes must have the same number of vertices.')
    # Iterate vertices and calculate displacement
    dsplc = [None]*nverts*3 # Reserve space for displacement vector
    for i in range(nverts):
        dVec3 = meshB.verts[i].getPosition() - meshA.verts[i].getPosition()
        dsplc[i*3:i*3+3] = [dVec3.x, dVec3.y, dVec3.z]
    return dsplc

Finally, in the main execution, we get the meshes and joints and generate random transforms for which we’ll sample the inputs and outputs to our model. Both transforms and displacements are stored in NDArrays so they can be easily exported as CSVs.

# Get meshes and joints
linSkin_mesh = pmc.ls(linSkin_mesh_name)[0]
linSkin_joint = pmc.ls(linSkin_joint_name)[0]
customDef_mesh = pmc.ls(customDef_mesh_name)[0]
customDef_joint = pmc.ls(customDef_joint_name)[0]
# Iterate meshes over time to sample displacements
xfos = []
dsplcs = []
for i in range(samples):
    # Create a matrix with a random orientation
    randXfo = linSkin_joint.getTransformation()
    randXfo.setRotationQuaternion(randQuatDim(), randQuatDim(), randQuatDim(), randQuatDim())
    # Set transformation in both joints
    linSkin_joint.setTransformation(randXfo)
    customDef_joint.setTransformation(randXfo)
    # Joints have limits turned on, so we must read back the actual transformation
    xfo = np.array(customDef_joint.getTransformation()).flatten() # and cast to NDArray
    # Get displacement amongst meshes
    dsplc = np.array(getMeshDisplacement(linSkin_mesh, customDef_mesh)) # and cast to NDArray
    xfos.append(xfo)
    dsplcs.append(dsplc)
    print('Built sample ' + str(i))

# Output displacement samples as CSV
xfos = np.stack(xfos)
dsplcs = np.stack(dsplcs)
np.savetxt(csvIn, xfos)
np.savetxt(csvOut, dsplcs)

There are two important things to note about the joint transformation. The first is that randQuatDim() generates random values from -1 to 1 for every dimension of the quaternion we build. The second is that we need to get the transformation from the joints after we have set it, because the joints have limits turned on; hence, the final transform will be different from the random matrix we have created.

We limit the joint’s movement so that we won’t create random orientations that do not make sense
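The randQuatDim() helper is not shown in the snippets above; a minimal sketch consistent with that description (a uniform random value between -1 and 1 for one quaternion dimension, using the random import at the top of the script) would be:

def randQuatDim():
    '''Random value in [-1, 1] for one dimension of a quaternion.'''
    return random() * 2 - 1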

You can inspect the CSV files in Excel, Google Spreadsheets, or other tools of your choosing.
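If you prefer to check them programmatically, a quick sanity check with NumPy (assuming the same paths as the globals above) could look like this:

import numpy as np

# Load the exported dataset and verify its shape
xfos = np.loadtxt('c:/yourPath/inputs.csv')
dsplcs = np.loadtxt('c:/yourPath/outputs.csv')
print(xfos.shape)    # (samples, 16): one flattened 4x4 transform per sample
print(dsplcs.shape)  # (samples, nverts * 3): one xyz displacement per vertex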

Train a regression Neural Network to correlate transforms to deformations

Now that you have the dataset, it is time to train the network. In the resources for this post, you’ll see I have created a well-documented IPython notebook for you to run in Google Colaboratory. I’ll comment on the most critical aspects of that code here.

To train the network we’ll be using Keras, the same framework I have used in my first tutorial. These are all the packages you’ll need to load:

import keras
from keras.models import Sequential      # An object we need to define our model
from keras.layers import Dense           # This is the type of network
from keras import utils                  # Tools to process our data
import numpy as np                       # Types for structuring our data
import matplotlib.pyplot as plt
from google.colab import files           # Input and output files from Google Colab

The last one (google.colab) is only relevant if you are using Google Colaboratory. If you are, you might be wondering how to get your custom dataset up to that system. Here is what you’ll do:

# Upload the files you have extracted in Maya
inputs_file = files.upload()
outputs_file = files.upload()

You’ll be prompted to choose the files from your hard drive. The name of each uploaded file is stored as a key in a dict; this is how you retrieve it and load the CSV as a NumPy NDArray:

# Get inputs and outputs from uploaded files
inputs = np.loadtxt(list(inputs_file.keys())[0])
outputs = np.loadtxt(list(outputs_file.keys())[0])

Feature Normalization

In this prototype, I’m normalizing the dataset before proceeding with the training. This step is not mandatory, and I have not done it in previous tutorials for the sake of simplicity. But it is a common practice that you should get used to because it improves the accuracy of your model at no extra cost.

The idea behind feature normalization is that some features in your dataset might be huge scalar values that vary a lot, while others might be near-constant, near-zero values. Such different ranges will have very different impacts on the activation of your neurons and will bias the network. Therefore, it is good to rescale the data to avoid this effect.

Here I’m using a widespread scaling approach: I remove the feature’s mean and divide the remainder by its standard deviation. Notice these are not the mean and standard deviation of the whole dataset, but of all samples for each feature (i.e., every component of the transform matrix and every dimension of every displacement vector).
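In symbols, for each feature column $x$ with per-feature mean $\mu$ and standard deviation $\sigma$ (plus a tiny $\epsilon$ to avoid division by zero, matching np.finfo(np.float32).eps in the code below):

$$x_{\text{norm}} = \frac{x - \mu}{\sigma + \epsilon}, \qquad x \approx x_{\text{norm}} \cdot \sigma + \mu$$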

I create one function to normalize and another to ‘denormalize’ features:

# Implement feature normalization and denormalization functions.
def featNorm(features):
    '''Normalize features by mean and standard deviation.
    Returns tuple (normalizedFeatures, mean, standardDeviation).
    '''
    mean = np.mean(features, axis=0)
    std = np.std(features - mean, axis=0)
    feats_norm = (features - mean) / (std + np.finfo(np.float32).eps)
    return (feats_norm, mean, std)

def featDenorm(features_norm, mean, std):
    '''Denormalize features by mean and standard deviation.'''
    features = (features_norm * std) + mean
    return features

Note that the ‘featNorm’ function outputs not only the normalized features but also the means and standard deviations. I do that because we’ll need this information to transform new data in the prediction phase, and also to ‘denormalize’ the network’s output. We apply the normalization and store the values using the following code:

inputs_norm, inputs_mean, inputs_std = featNorm(inputs)
outputs_norm, outputs_mean, outputs_std = featNorm(outputs)

Now that we have prepared the data, let’s train the model.

Defining the model

We represent the model using the Keras sequential interface, much like we have done in the previous tutorial. The main difference is that here we are not training a classification model but a regression model (see the video for further clarification). So, the activation function in the final layer is just a linear mapping of the activation values. Also, the loss function, the thing we are trying to minimize, is the ‘mean squared error’: the squared distance between the predictions and the actual values. We use the number of neurons and the activation functions suggested by the authors in the paper, although the model will work with other configurations.

model = Sequential()
# Two hidden layers of 512 tanh units; a linear output layer for regression
model.add(Dense(512, input_dim=inputs_norm.shape[1], activation='tanh'))
model.add(Dense(512, activation='tanh'))
model.add(Dense(outputs_norm.shape[1], activation='linear'))
adam = keras.optimizers.Adam(lr=0.01)
model.compile(loss='mse', optimizer=adam, metrics=['mse'])

We train the model and save the information in a history variable so that we can plot the learning graph afterward. We are reserving 30% of the samples for validation (validation_split).

history = model.fit(inputs_norm, outputs_norm, epochs=200, validation_split=0.3, batch_size=None)

During training, you should see the error in the validation and training sets diminish continually. Here is the plot for my training:

Loss (mse) diminishes along the training
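The plotting code itself is not shown in this article; a minimal sketch using the matplotlib import from earlier and the history object returned by model.fit would be:

# Plot training and validation loss from the Keras history object
plt.plot(history.history['loss'], label='training loss')
plt.plot(history.history['val_loss'], label='validation loss')
plt.xlabel('epoch')
plt.ylabel('mse')
plt.legend()
plt.show()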

After the training has finished, you can save and download your model. Remember you also need to keep the normalization data to apply it to new data during prediction:

model.save('FDDA.h5')
np.savetxt('in_mean.csv', inputs_mean)
np.savetxt('in_std.csv', inputs_std)
np.savetxt('out_mean.csv', outputs_mean)
np.savetxt('out_std.csv', outputs_std)
files.download('FDDA.h5')
files.download('in_mean.csv')
files.download('in_std.csv')
files.download('out_mean.csv')
files.download('out_std.csv')

Implement a custom deformer using the trained model

This final step is similar to what I have shown you in a previous tutorial. That is, we’ll be implementing a custom Python DG node to run our model live in Maya. But in this case, we are implementing a deformer, and Maya has a custom class for deformer nodes called MPxGeometryFilter. You should use it over MPxNode for convenience and performance. On the downside, this class is not available through OpenMaya 2, so you’ll have to stick to the old OpenMaya API. Here are the packages you’ll need to load:

import maya.OpenMayaMPx as ompx
import maya.OpenMaya as om
import numpy as np
from keras.models import load_model

# Declare global node params and other global vars
nodeName = 'tdl_FDDA'
nodeTypeID = om.MTypeId(0x1C3B1234)
model = load_model('c:/yourPath/FDDA.h5')   # the model saved during training
inputs_mean = np.loadtxt('c:/yourPath/in_mean.csv')
inputs_std = np.loadtxt('c:/yourPath/in_std.csv')
outputs_mean = np.loadtxt('c:/yourPath/out_mean.csv')
outputs_std = np.loadtxt('c:/yourPath/out_std.csv')

This Python DG node has significantly more lines of code than the last example, so I took the liberty of hardcoding some things. If you don’t want to do that, make sure you check the previous tutorial.

Then we implement our normalization functions once again:

# Implement feature normalization and denormalization functions.
def featNorm(features, mean, std):
    '''Normalize features by given mean and standard deviation.'''
    feats_norm = (features - mean) / (std + np.finfo(np.float32).eps)
    return feats_norm

def featDenorm(features_norm, mean, std):
    '''Denormalize features by mean and standard deviation.'''
    features = (features_norm * std) + mean
    return features

Note that here the ‘featNorm’ function does not compute the ‘mean’ and ‘std’ values itself but instead receives them as input parameters.

Init the node and create attributes

As you have seen in the previous tutorial, the first step in creating a custom Python DG node is setting up its attributes. In this case, since we are deriving from the MPxGeometryFilter class, some attributes are given: the input and output geometry, and the envelope. The envelope is a multiplier of the deformation effect.

We will add one other attribute, the matrix which we’ll use as input for the Neural Network. Declare it in the init function like this:

def init():
    # (1) Setup input attributes
    mAttr = om.MFnMatrixAttribute()
    tdl_FDDANode.xfoMat = mAttr.create('matrix', 'xm')
    mAttr.writable = True
    mAttr.storable = True
    mAttr.connectable = True
    mAttr.hidden = False

    # (2) Add the output attributes to the node
    # The only output attribute is the deformed geometry,
    # which is the default for any deformer. Hence we add
    # no additional outputs.

    # (3) Add the attributes to the node
    tdl_FDDANode.addAttribute(tdl_FDDANode.xfoMat)

    # (4) Declare attribute dependencies
    tdl_FDDANode.attributeAffects(tdl_FDDANode.xfoMat, ompx.cvar.MPxGeometryFilter_outputGeom)

Note that in the last line we tell Maya to update the output geometry whenever the matrix value changes.
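The node class and plugin registration boilerplate are covered in the previous tutorial, so they are omitted here. For completeness, a minimal sketch might look like the following (the class name matches the init code above, and the cache attributes are initialized so the ‘deform’ method below can use them):

class tdl_FDDANode(ompx.MPxGeometryFilter):
    def __init__(self):
        ompx.MPxGeometryFilter.__init__(self)
        self.xfo_cache = None         # last transform we predicted for
        self.prediction_cache = None  # cached per-vertex displacements

def creator():
    return ompx.asMPxPtr(tdl_FDDANode())

def initializePlugin(mobject):
    plugin = ompx.MFnPlugin(mobject)
    plugin.registerNode(nodeName, nodeTypeID, creator, init,
                        ompx.MPxNode.kDeformerNode)

def uninitializePlugin(mobject):
    plugin = ompx.MFnPlugin(mobject)
    plugin.deregisterNode(nodeTypeID)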

Compute deformation

In MPxGeometryFilter nodes, you do not declare a ‘compute’ function but a ‘deform’ function. The deform function provides an iterator (geom_it) that can be used to iterate over all vertices. We start the deformation by setting up all the attributes we’ll need and checking whether the number of vertices in the mesh matches the number of output neurons in our network.

def deform(self, data, geom_it, local_to_world_mat, geom_idx):
    # Get the deformer class's default attributes
    # Get mesh
    input_attr = ompx.cvar.MPxGeometryFilter_input
    input_geom_attr = ompx.cvar.MPxGeometryFilter_inputGeom
    input_handle = data.outputArrayValue(input_attr)
    input_handle.jumpToElement(geom_idx)
    input_geom_obj = input_handle.outputValue().child(input_geom_attr).asMesh()
    mesh = om.MFnMesh(input_geom_obj)

    # Get envelope
    envelope_attr = ompx.cvar.MPxGeometryFilter_envelope
    envelope = data.inputValue(envelope_attr).asFloat()

    # Get custom deformer attributes
    xfoMat_handle = data.inputValue(tdl_FDDANode.xfoMat)
    xfoMat = xfoMat_handle.asMatrix()
    xfo = [np.float32(xfoMat(r, c)) for r in xrange(4) for c in xrange(4)]

    # Check if the number of vertices matches the trained model
    if mesh.numVertices() != model.output_shape[1] / 3:
        raise Exception('Mesh has ' + str(mesh.numVertices()) + ' vertices, '
                        'model expects ' + str(model.output_shape[1] / 3) + ' vertices.')

Then we get and cache the model’s prediction, so that if the joint’s transformation does not change we don’t re-evaluate it. Note that we normalize the network’s inputs (xfo) and denormalize its outputs (prediction/displacement).

# Get and cache displacement prediction
if self.xfo_cache != xfo:
    self.xfo_cache = xfo

    # Model prediction
    xfo = np.array(xfo)
    xfo = featNorm(xfo.reshape((1, 16)), inputs_mean, inputs_std)
    prediction = featDenorm(model.predict(xfo), outputs_mean, outputs_std)
    self.prediction_cache = prediction.flatten()

Finally, we trigger the geometry iterator and update the position of every vertex. We have to get the correct x, y, and z values for every vertex and make them regular floats, as the network’s outputs are NumPy floats. Then we compose an MVector that we’ll add to the vertex position.

# Deform vertex
while not geom_it.isDone():
    idx = geom_it.index()
    pos = geom_it.position()
    # Get displacement from cached prediction
    x = float(self.prediction_cache[idx * 3])
    y = float(self.prediction_cache[idx * 3 + 1])
    z = float(self.prediction_cache[idx * 3 + 2])
    dsplc = om.MVector(x, y, z)

    # Apply deformation
    new_pos = pos + (dsplc * envelope)
    geom_it.setPosition(new_pos)
    geom_it.next()

Connecting everything up

If you have set everything up properly, and if your ‘3DL_FDDA.py’ file is being loaded by Maya.env (if you don’t know how to do that, look up the Python DG node tutorial), you can create your new deformer using maya.cmds.

Load that base model ‘linSkin.ma’ once again, select the mesh to be deformed, and run the following code from the Maya Python script editor:

import maya.cmds as cmds
cmds.deformer(type='tdl_FDDA')

A deformer will be created and connected to the geometry. Now plug joint2’s matrix into the deformer’s matrix input, and voilà, you should see the magic.
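If you prefer to make that connection through code, something like the following should work (the deformer node name ‘tdl_FDDA1’ is an assumption; check the name Maya gave your node):

# Connect the joint's matrix output to the deformer's 'matrix' attribute
cmds.connectAttr('linSkin:joint2.matrix', 'tdl_FDDA1.matrix')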

Custom implementation of the FDDA deformer running inside Autodesk Maya

If you have followed this tutorial up to here, congratulations: you have understood and implemented your first SIGGRAPH deep learning paper. ‘Fast and Deep Deformation Approximations’ provides an interesting solution for making character deformations faster and portable. This is a Python prototype implementation, so rest assured it won’t be fast, but all the pieces are there. The deformations look very much like the original, and the model generalizes well (try playing around with the joint). While the paper is limited to movement from joint transformations, I think you can see that it is not impossible to connect other things to the input and test how the model reacts.

Tell me what you think about this prototype. Were you able to run it properly? Can you think of similar applications that could be tackled with a model like this?
