Although it’s hardly the FX bonanza of something like Tears of Steel, this movie depends on Blender’s camera and object tracking features for a handful of crucial effects.
One is the headgear the main character wears for much of the movie. To track it as accurately as possible, we used a papier-mâché hat decorated with colored ball tracking markers.
That worked great for the most part. We foresaw situations where the marker hat would be too far out of frame to track well, and when possible we shot full-frame rather than widescreen to get as much of the hat in the shot. There were also a few close-ups I knew I wouldn't be able to rely on the hat for.
But there were other shots where we (that is to say, I) simply forgot to have the actor wear the hat. The headgear goes on, it comes off, it goes on again, and this matters both to the story and for simple continuity. We shoot everything out of order, of course, and so when I sit down to edit, lo and behold, here's some distant (i.e. tough-to-track) shot where he's supposed to be wearing the headgear and he's not.
I’m not gonna lie. The thought crossed my mind that you, the esteemed viewer of the finished product, might not even notice that the headgear simply disappears from the character’s head for a shot. But that’s no way to think. There are several of these shots, and the presence of the headgear in each of them is actually relevant to the story. If he doesn’t have the headgear and you happen to notice that, then it makes no sense. So I had to suck it up and track the seemingly untrackable. Fortunately Blender’s object tracker (or more accurately, its reconstruction functionality, because I basically did the tracking by hand) held up extremely well.
Blender is so good at tracking features in video that it's easy to overlook the fact that you can push it even further if you're prepared to use your own brain's (much superior) image processing power on features that are very difficult to track. Blender's automatic tracker needs features that stay recognizable and visually consistent. The planar tracking functionality gives you a lot of leeway in how those features can transform spatially, but when colors change, or shadows pass over, or objects move in front of other objects, or the feature is too subtle to be picked up accurately, automatic tracking can fail.
We humans, on the other hand, are incredibly good at tracking features. That's why poorly tracked objects look so obviously wrong to us: we can tell immediately whether something is stuck to the surface of a 3D object or not.
I couldn't find settings that would automatically track features on the heavily pixelated back of the character's head. You need at least 8 consistent features between tracking keyframes to establish the basic parallax that the solver works from. The character was walking through shadowy areas and the features were vague and inconsistent. On top of that, his head turns, so I couldn't use features below the base of his skull.
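That eight-track minimum comes from the classic eight-point algorithm: each point tracked in both keyframes contributes one linear constraint on the fundamental matrix, which has 9 entries (8 degrees of freedom, since it's only defined up to scale). Here's a minimal numpy sketch of that idea, with synthetic points standing in for real tracks. This is an illustration of the math, not Blender's actual solver code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic scene: random 3D points in front of two cameras.
X = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 8.0], size=(12, 3))

# Second camera: a small rotation about the y axis plus a translation.
cy, sy = np.cos(0.1), np.sin(0.1)
R = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
t = np.array([1.0, 0.2, 0.0])

x1 = X[:, :2] / X[:, 2:]          # projections in keyframe 1 (normalized coords)
X2 = X @ R.T + t
x2 = X2[:, :2] / X2[:, 2:]        # projections in keyframe 2

# Each tracked point gives one row of A, i.e. one linear constraint
# A f = 0 on the 9 entries of the fundamental matrix F.
A = np.array([
    [u2 * u1, u2 * v1, u2, v2 * u1, v2 * v1, v2, u1, v1, 1.0]
    for (u1, v1), (u2, v2) in zip(x1, x2)
])
_, _, Vt = np.linalg.svd(A)
F = Vt[-1].reshape(3, 3)          # null vector of A, reshaped: F up to scale

# Sanity check: the epipolar constraint x2^T F x1 = 0 holds for every track.
h1 = np.hstack([x1, np.ones((12, 1))])
h2 = np.hstack([x2, np.ones((12, 1))])
residuals = np.einsum('ij,jk,ik->i', h2, F, h1)
print(np.abs(residuals).max())    # tiny: F is consistent with all the tracks
```

With fewer than 8 constraints the null space of A isn't unique, which is exactly why the solver refuses to work from too few shared tracks.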
So I tracked each of the points by hand. This is more painstaking than just plopping the marker down once per frame: to get a decent track, you need to jump back and forth between frames, watch the marker jump, and adjust it until it seems fixed in place from frame to frame. If you keep the marker area big, your brain can use a lot of context to tell whether the marker point is really stuck to the spot, and you can get remarkably accurate tracks with almost no locally distinct feature. Your brain is also ace at following a point through shadows and other lighting changes. Tracking this way, I got enough features tracked to get good matching movement on all the shots I'd been worried about.
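The fill-in side of that workflow — place the marker exactly on a handful of frames, let the in-between frames be filled by interpolation, then nudge whichever frames visibly drift — can be sketched in a few lines. This is a hypothetical plain-Python helper for illustration only, not Blender's API:

```python
def interpolate_markers(keyed):
    """keyed: {frame: (x, y)} markers placed by hand, in normalized clip
    coordinates. Returns {frame: (x, y)} for every frame in the keyed
    range, linearly interpolating between the hand-placed frames."""
    frames = sorted(keyed)
    out = dict(keyed)
    for a, b in zip(frames, frames[1:]):
        (x0, y0), (x1, y1) = keyed[a], keyed[b]
        for f in range(a + 1, b):
            s = (f - a) / (b - a)          # fraction of the way from a to b
            out[f] = (x0 + s * (x1 - x0), y0 + s * (y1 - y0))
    return out

# e.g. markers keyed by hand on frames 1, 5 and 9; frames 2-4 and 6-8 are filled in
track = interpolate_markers({1: (0.40, 0.55), 5: (0.44, 0.53), 9: (0.50, 0.50)})
print(track[3])  # halfway between the frame-1 and frame-5 positions
```

In practice you then scrub through, spot the frames where the interpolated marker slides off the feature, and key those frames by hand too — each new key splits the interpolation into smaller, more accurate spans.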
Using the tracker this way is definitely not plan A. To be honest, it’s a pain in the ass. But it’s a lifesaver to have this alternative available if you failed to set up your shoot properly.