Compress and Denoise MoCap with Autoencoders
Autoencoders are at the heart of some raytracer denoising and image upscaling (aka. super-resolution) technologies. Despite the pompous name, an autoencoder is just a Neural Network for which inputs and outputs are the same (except for the addition of noise, in some cases).
In this tutorial we’ll implement a simple autoencoder network and train it with MoCap data, then we’ll test it inside Autodesk Maya. Along the way you’ll learn what an autoencoder is, how to build and train one with Keras, and how to load both the data and the trained model inside Maya using a custom DG node.
What is an Autoencoder?
Autoencoder is a fancy name for a Neural Network that has a symmetric topology and for which inputs and outputs are the same. Why would anyone bother to train such a network, you might ask? Building such a network means you are creating an alternative way to represent the data in the hidden neurons. This new representation may have useful characteristics: it may be more compact than the original representation (dimensionality reduction), and, because it is data-driven, it will encode common patterns found in the dataset.
We can build autoencoders out of different topologies, such as fully-connected (the topology I have been using in most other tutorials), convolutional, recurrent, and so on. The input and output layers will have the same number of features, whatever that number is, since the data going in and out should be the same. The hidden layers should, in theory, have fewer features than the input layer. This constraint exists because, if the number of neurons in the hidden layers were equal to or greater than the number of inputs, the network could simply learn to copy the inputs instead of learning a useful representation.
While this is fine if you want to build an autoencoder for dimensionality reduction, it is less than optimal if you are going to use it for denoising, since the number of dimensions in the hidden layers will be capped. A way to get around this is to introduce sparsity into your training; I’ll talk more about this later. First, we need some data to train this autoencoder.
For this tutorial, I have converted a subset of the CMU dataset into CSV files for your convenience (see the resources). The data consists mostly of locomotion movements from different subjects. Motions from subjects 38, 39, 40, and 41 are used for testing purposes only. Motions from subjects 07, 08, 09, 16, 35, and 69 are used for training and have been condensed into one NumPy array for convenience. Both the test and the training data follow this pattern:
Col 1: Root position x
Col 2: Root position y
Col 3: Root position z
Col 4: Root quaternion x
Col 5: Root quaternion y
Col 6: Root quaternion z
Col 7: Root quaternion w
Col 8: Bone 1 quaternion x
Col 9: Bone 1 quaternion y
Col 10: Bone 1 quaternion z
Col 11: Bone 1 quaternion w
Col 12 onwards: quaternion x, y, z, w for each remaining bone
This data structure is similar to what you would get from the original dataset in BVH format, but using quaternions instead of Euler rotations. Training Neural Networks on Euler angles can be a bad idea, since the same orientation can be expressed with different combinations of angles.
To display this data in Maya we’ll use a skeleton that matches the one used in the CMU dataset (see resources), and a custom-built DG node that can read the CSV files and output them as 4×4 matrices that will, in turn, be used to control the skeleton. I’ve dealt with the basic building blocks for a custom Maya DG node in a previous post, so I highly encourage you to read that one first. I provide a template DG node in the resources so that you can follow along.
Creating the parameters
First, let’s create the Maya attributes needed for our node. All we need at first are a ‘file path’ to read from, a ‘frame’ so we can scrub through the animation, and a Matrix array to store the transforms for each of the joints of the character.
Remember to add the attributes to the node (addAttribute) and to declare their dependencies (attributeAffects), as in the sketch below.
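Here is a minimal sketch of that setup. I’m using the Python API 2.0 (maya.api.OpenMaya) with illustrative names, and omitting the plug-in registration boilerplate covered in the previous post; the node that ships with the resources may differ in details.

```python
import maya.api.OpenMaya as om


class CsvPoseReader(om.MPxNode):
    file_path_attr = None
    frame_attr = None
    out_matrix_attr = None

    @staticmethod
    def initialize():
        t_attr = om.MFnTypedAttribute()
        n_attr = om.MFnNumericAttribute()
        m_attr = om.MFnMatrixAttribute()

        # 'file path' string attribute pointing at the CSV on disk
        CsvPoseReader.file_path_attr = t_attr.create(
            'filePath', 'fp', om.MFnData.kString)

        # 'frame' attribute used to scrub through the animation
        CsvPoseReader.frame_attr = n_attr.create(
            'frame', 'fr', om.MFnNumericData.kInt, 0)
        n_attr.keyable = True

        # array of 4x4 output matrices, one per joint
        CsvPoseReader.out_matrix_attr = m_attr.create('outMatrix', 'omx')
        m_attr.array = True
        m_attr.usesArrayDataBuilder = True
        m_attr.writable = False
        m_attr.storable = False

        # add the attributes to the node...
        CsvPoseReader.addAttribute(CsvPoseReader.file_path_attr)
        CsvPoseReader.addAttribute(CsvPoseReader.frame_attr)
        CsvPoseReader.addAttribute(CsvPoseReader.out_matrix_attr)

        # ...and declare which inputs dirty the outputs
        CsvPoseReader.attributeAffects(
            CsvPoseReader.file_path_attr, CsvPoseReader.out_matrix_attr)
        CsvPoseReader.attributeAffects(
            CsvPoseReader.frame_attr, CsvPoseReader.out_matrix_attr)
```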
Outside the functions that create and initialize the node, we will build a class that caches the CSV data once it is loaded. Then we instantiate an object of this class into a global variable, so it is available inside the compute part of our node.
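A minimal sketch of such a cache (class and variable names are my own):

```python
import numpy as np


class DataCache(object):
    """Keeps the last CSV loaded in memory so compute() does not re-read it."""

    def __init__(self):
        self.file_path = None
        self.data = None

    def load(self, file_path):
        # Re-read the file only when the path changes; otherwise reuse the cache.
        if file_path != self.file_path:
            self.data = np.loadtxt(file_path, delimiter=',')
            self.file_path = file_path
        return self.data


# Global instance shared by every compute() call of the node.
data_cache = DataCache()
```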
Finally, we can declare our compute function, using attribute values to read from the data_cache object:
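Here is a rough sketch of what it can look like, continuing the illustrative names from the snippets above; the node that ships with the resources differs in details:

```python
# A method of CsvPoseReader; 'om' and 'data_cache' come from the sketches above.
def compute(self, plug, data_block):
    if plug.attribute() != CsvPoseReader.out_matrix_attr:
        return

    # Read the input attribute values.
    file_path = data_block.inputValue(CsvPoseReader.file_path_attr).asString()
    frame = data_block.inputValue(CsvPoseReader.frame_attr).asInt()

    # Load (or reuse) the CSV and clamp the frame to the available range.
    csv_data = data_cache.load(file_path)
    frame = max(0, min(frame, csv_data.shape[0] - 1))
    row = csv_data[frame]

    root_xfo = row[:7]   # root position (x, y, z) + root quaternion (x, y, z, w)
    pose_data = row[7:]  # one quaternion (x, y, z, w) per remaining joint

    out_handle = data_block.outputArrayValue(CsvPoseReader.out_matrix_attr)
    builder = out_handle.builder()

    # Root joint: rotation from the quaternion, translation in the last row.
    root_mtx = om.MQuaternion(*root_xfo[3:7]).asMatrix()
    root_mtx.setElement(3, 0, root_xfo[0])
    root_mtx.setElement(3, 1, root_xfo[1])
    root_mtx.setElement(3, 2, root_xfo[2])
    builder.addElement(0).setMMatrix(root_mtx)

    # Remaining joints: pure rotation matrices built from their quaternions.
    for i in range(pose_data.shape[0] // 4):
        quat = pose_data[i * 4: i * 4 + 4]
        builder.addElement(i + 1).setMMatrix(om.MQuaternion(*quat).asMatrix())

    out_handle.set(builder)
    data_block.setClean(plug)
```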
Here is a quick overview of the code above. After loading the attribute values, we load the CSV, which is cached as long as the file name has not changed. The number of output ports in the node is adjusted to fit the data available in the CSV, following the pattern outlined in the table shown previously. The first seven columns hold the global transform (translation plus orientation) of the root joint; all remaining columns hold the quaternion orientations of the remaining joints.
If the code is ok, you should be able to load one of the CSV files and connect it to the FBX skeleton. You’ll need to use ‘decompose matrix’ nodes to connect the output matrices to the proper transform components, like so:
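For instance, wiring the root output matrix to a joint could look something like this (node and joint names are illustrative; the decomposeMatrix node ships with the matrixNodes plug-in):

```python
import maya.cmds as cmds

cmds.loadPlugin('matrixNodes', quiet=True)

# One decomposeMatrix per joint: matrix in, translate/rotate out.
dm = cmds.createNode('decomposeMatrix')
cmds.connectAttr('csvPoseReader1.outMatrix[0]', dm + '.inputMatrix')
cmds.connectAttr(dm + '.outputTranslate', 'Hips.translate')
cmds.connectAttr(dm + '.outputRotate', 'Hips.rotate')
```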
If everything is set up correctly you should see something like this:
Original data loaded from a CSV with our custom DG node
I provide you with a Maya scene in the resources, so you can see how everything should be connected.
Compressing data with an Autoencoder
Now that we can see the data, let us compress it using an Autoencoder. I provide the Python code for this both as regular .py files and as Jupyter Notebooks (please check the resources for this article). Here I’ll go over the most critical parts of the code.
The first thing we need to do, after loading the proper libraries, is to load the training dataset. Note that I am not using the first seven entries in the data container (root position and orientation) to train the network; the reason is to avoid teaching the model to correlate global positioning with character poses, which we do not want. Note also that I have left some commented-out code you can use to train the network on a small, randomly chosen sample of the data. It is good practice to check that your model works on a small amount of data before training it on the full dataset, as training times differ a lot.
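A minimal sketch of this step (the file name is illustrative; the actual loading code ships with the resources):

```python
import numpy as np

# Training motions (subjects 07, 08, 09, 16, 35, 69) condensed into one array.
data_container = np.load('cmu_train_data.npy')

# Drop the first seven columns (root position and orientation) so the network
# only learns joint rotations, not global placement.
data_container = data_container[:, 7:]

# Uncomment to validate the pipeline on a small random sample first.
# idx = np.random.choice(data_container.shape[0], 1000, replace=False)
# data_container = data_container[idx]

print(data_container.shape)  # (n_frames, 120): 30 joints x 4 quaternion values
```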
Now we set up the network’s topology:
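A minimal Keras sketch of that topology; the layer sizes and activations follow the description below, the variable names are my own:

```python
from keras.models import Sequential
from keras.layers import Dense

input_dim = 120  # 30 joints x 4 quaternion components

autoencoder = Sequential()
# Encoder: 120 -> 60 -> 30, sigmoid activations.
autoencoder.add(Dense(60, activation='sigmoid', input_shape=(input_dim,)))
autoencoder.add(Dense(30, activation='sigmoid'))
# Decoder: 30 -> 60 -> 120, linear then tanh to keep outputs in [-1, 1].
autoencoder.add(Dense(60, activation='linear'))
autoencoder.add(Dense(input_dim, activation='tanh'))

autoencoder.summary()
```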
The input dimension is 120: one 4-dimensional quaternion for each of the 30 joints. We compress that to 60 features in the first hidden layer and down to 30 in the second one. The third hidden layer, or first decoding layer, brings the data back to 60 dimensions, and the final layer brings everything back to the original representation. A word about the activations: I’m using sigmoid activations to encode and linear activations to decode the data. This is a typical choice in autoencoders, but it would also work with other activations. The final activation, “tanh” (hyperbolic tangent), is a bit of a trick related to the data we are using. The tanh activation squashes its outputs to the range between -1 and 1, as discussed in a previous article. Quaternion components also lie between -1 and 1, so by using “tanh” we make sure we get valid quaternion values without any post-processing step.

Now we compile and train the network:
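A sketch of the compile-and-fit step; the optimizer, loss, epoch count, and batch size here are sensible defaults, not necessarily the exact settings used in the resources:

```python
autoencoder.compile(optimizer='adam', loss='mse')

# Inputs and targets are the same array: that is what makes it an autoencoder.
autoencoder.fit(data_container, data_container,
                epochs=200,
                batch_size=256,
                shuffle=True)
```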
Note that the inputs and outputs for our autoencoder are the same (data_container). This equality is a defining aspect of the autoencoder. To test our model in Maya, we save it:
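For example (the file name is illustrative):

```python
# Saves the architecture and the weights together so Maya can reload the model.
autoencoder.save('mocap_autoencoder.h5')
```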
Loading the model in Maya
Then, in Maya, we add some code to our custom node to load the model. We can do this very simply by loading the model into a global variable, in a hard-coded way:
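Something along these lines, with an illustrative path (this assumes Keras is available inside Maya’s Python interpreter):

```python
from keras.models import load_model

# Hard-coded for now; the final node exposes this as a file-path attribute.
model = load_model('C:/path/to/mocap_autoencoder.h5')
```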
Then we can auto-encode the pose with our model by adding this line to the compute function:
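A minimal version of that line, assuming the pose_data array from the compute sketch above (Keras expects a batch dimension, hence the reshape):

```python
pose_data = model.predict(pose_data.reshape(1, -1))[0]
```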
In the final code, available in the resources, you’ll note I have created a toggle parameter for autoencoding the data and a file path to load the model dynamically. It is a more elegant UI, but the idea is the same. Here is a comparison of the original data and the auto-encoded data:
Note that it cannot match the input 100%, but keep in mind we are compressing the data by a factor of 4.
Since the alternative representation of the pose emerges from the recognition of patterns in the data, autoencoders tend to generate outputs that are built from components they have seen before. Hence, the network enforces the reproduction of the structure of the original data and may remove unwanted artifacts such as noise (while introducing others, such as blurring). We can put that to the test. For that, we’ll create a noise parameter in our node:
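Following the same attribute conventions as in the earlier sketch, the extra parameter could be declared like this:

```python
n_attr = om.MFnNumericAttribute()

# Amplitude of the perturbation added to the pose (0.0 means no noise).
CsvPoseReader.noise_attr = n_attr.create(
    'noise', 'nz', om.MFnNumericData.kFloat, 0.0)
n_attr.keyable = True

CsvPoseReader.addAttribute(CsvPoseReader.noise_attr)
CsvPoseReader.attributeAffects(
    CsvPoseReader.noise_attr, CsvPoseReader.out_matrix_attr)
```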
And then we use this noise value to add normally distributed noise to the pose:
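A sketch of that step inside compute(), reusing the frame and pose_data variables from the earlier sketch:

```python
noise_amount = data_block.inputValue(CsvPoseReader.noise_attr).asFloat()

# Seed with the frame number so the noise is repeatable from frame to frame.
np.random.seed(frame)
noisy_pose = pose_data + np.random.normal(0.0, noise_amount, pose_data.shape)
```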
Note that we seed the random samples with the frame number, so we can properly compare results. Finally, you’ll need to adjust the remainder of the compute function to use noisy_pose instead of pose_data, as can be seen in the final code available in the resources.
As you can see in the GIF below the autoencoder reduces the noise that we’ve artificially introduced.
Left: CMU clip with synthetic noise, Right: Denoised output of a deep auto-encoder
Training autoencoders with sparsity
We may want to use an autoencoder exclusively for its denoising properties, without any consideration for its dimensionality-reduction capabilities. In that case, reducing dimensions can be a downside, since limiting the number of features in the hidden representation restricts the network’s ability to represent the data. So what we may want is an autoencoder that has more dimensions in the hidden layers than in the input and output layers. As I’ve explained before, this requires introducing some sparsity into the model. One way to add sparsity is to corrupt the inputs; another is to use penalties in the loss function. I’ll use the latter.
Here is how you add such a penalty in Keras:
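Here is a sketch of an over-complete encoder with an L1 activity regularizer; the widened layer sizes are illustrative, and the 10e-9 weight is the one discussed below:

```python
from keras.models import Sequential
from keras.layers import Dense
from keras import regularizers

input_dim = 120

sparse_autoencoder = Sequential()
# Hidden layers can now be wider than the input because the L1 penalty on the
# activations keeps the learned representation sparse.
sparse_autoencoder.add(Dense(240, activation='sigmoid',
                             input_shape=(input_dim,),
                             activity_regularizer=regularizers.l1(10e-9)))
sparse_autoencoder.add(Dense(240, activation='sigmoid',
                             activity_regularizer=regularizers.l1(10e-9)))
sparse_autoencoder.add(Dense(240, activation='linear'))
sparse_autoencoder.add(Dense(input_dim, activation='tanh'))

sparse_autoencoder.compile(optimizer='adam', loss='mse')
```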
L1 is a common sparsity-inducing penalty that is added to the loss function when you declare it as the activity_regularizer. In layman’s terms, it induces fewer neurons to fire for any given stimulus, thus better isolating the learned features from one another. The L1 weight added to the loss function (10e-9 in this case) directly impacts the training: lower values introduce less sparsity and leave the model more prone to overfitting, while larger values reduce overfitting at the cost of more “blurriness” in the output of the network. Play with this value and see what works best.
Here is a comparison between the noisy data, the data auto-encoded with dimensionality reduction and the data auto-encoded with data expansion (and sparsity):
Left: CMU clip with synthetic noise, Middle: Denoised output of a deep auto-encoder, Right: Denoised output of a deep auto-encoder with sparsity
In Conclusion
In this article, you have learned what an autoencoder is and how to train one. Autoencoders have compression and denoising properties you can use on any kind of data. Notice that our implementation is very simple and can only denoise a pose at a specific point in time, without accounting for previous and future poses. Expanding the model to account for such temporality would surely increase its precision and its ability to denoise data.
You have also learned how to load kinematic data in Maya and pass it through a Neural Network using a custom DG node. This tooling is useful not only for autoencoders but for any other network you want to build to process MoCap data or Keyframe animations. You can also refactor this node to output data to CSV and build your own datasets.