VFX post-mortem

Things are finally winding down. I’m rendering out a full sequence with a first complete stab at color correction and a near-finished sound mix. After I watch it on a couple of screens and show it to some people, I’ll probably make a few adjustments to the color, and by the end of the month I’ll be submitting to festivals.

[Image: bridge-throw shot]

I’ve been learning a lot in my AnimationMentor VFX course, too. I was a little worried that, with everything I learned in the course, I’d want to go back and redo the whole movie, but fortunately I’ve only felt I really had to do that with a couple of shots.

I have picked up a few nuggets of knowledge, both from taking the course and from making the movie, that I will definitely keep in mind in the future. Some of them are so obvious in retrospect that it’s a little embarrassing to mention them, but if they weren’t obvious to me to begin with, I assume there are Blender users out there for whom they aren’t obvious now. In no particular order, here they are:

1) Use, but don’t rely entirely on, image-based lighting.

You need to use IBL, and you need to render with Cycles (which is great, by the way; in my opinion it compares remarkably well to the industry-standard Arnold). You must take light probes (which I did), and you should shoot gray balls (which I didn’t). However, for most shots you will want to augment this lighting with some CG lights, for several reasons. For one thing, Cycles won’t give you a shadow pass from an environment texture. And even if it did, it’s nearly impossible to get fine control over the various lighting components (key, fill, highlights, shadows) with IBL alone. On a real film set, small lights and reflectors are used all the time, and you should assume the same is true in your VFX shots.
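
For what it’s worth, here’s a minimal sketch of adding a CG fill light alongside an HDR environment via Blender’s Python API (2.7x-era bpy; the position, size, strength, and color values are placeholders, not taken from my actual scenes):

    import bpy

    scene = bpy.context.scene
    scene.render.engine = 'CYCLES'

    # The IBL is assumed to already be set up as the world's environment texture.
    # Add a soft area lamp to act as a controllable fill/key on top of it.
    bpy.ops.object.lamp_add(type='AREA', location=(2.0, -3.0, 4.0))
    fill = bpy.context.object
    fill.name = "CG_Fill"
    fill.data.size = 1.5                  # a large emitter gives soft shadows

    # Cycles lamp strength and color live on the lamp's Emission node.
    fill.data.use_nodes = True
    emission = fill.data.node_tree.nodes['Emission']
    emission.inputs['Strength'].default_value = 50.0
    emission.inputs['Color'].default_value = (1.0, 0.92, 0.82, 1.0)  # slightly warm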

2) The shadow pass is not the actual shadow. Don’t multiply it.

This was the biggest forehead slap for me. All along, I’d been using the shadow pass (maybe multiplied by an AO pass) as a multiplier over the plate to create the shadows, with a factor between 0 and 1 to control intensity. This does darken the plate, and for some shots it looks okay. But shadows aren’t just darker. They have less contrast, and depending on the lighting conditions they are different colors. You can’t control these factors just by multiplying the plate with gray/black. Also, even with a factor of less than one, it’s easy to crush your blacks to zero this way, which can be a big problem in color grading and correction.

What you do (of course, d’oh!) is invert the shadow pass and use it as a matte for whatever grading or adjustments you need to make the plate look shadowed. This approach is actually more explicit in some other 3D rendering software, where the shadow pass output is already inverted relative to what you get from Blender.
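
As a rough illustration of that matte-based approach scripted against Blender’s compositor (the image inputs and grade values here are placeholders; wire in your own plate and shadow pass):

    import bpy

    scene = bpy.context.scene
    scene.use_nodes = True
    tree = scene.node_tree
    nodes, links = tree.nodes, tree.links

    plate = nodes.new('CompositorNodeImage')    # assign the plate sequence here
    shadow = nodes.new('CompositorNodeImage')   # assign the rendered shadow pass here

    # Invert the shadow pass so shadowed areas become white in the matte.
    invert = nodes.new('CompositorNodeInvert')
    links.new(shadow.outputs['Image'], invert.inputs['Color'])

    # Grade the plate toward the shadowed look: less gain, lower contrast,
    # a color shift if the reference demands it (placeholder values).
    grade = nodes.new('CompositorNodeColorBalance')
    grade.correction_method = 'LIFT_GAMMA_GAIN'
    grade.gain = (0.6, 0.65, 0.75)
    links.new(plate.outputs['Image'], grade.inputs['Image'])

    # Mix the graded plate over the original, with the inverted shadow as the factor.
    mix = nodes.new('CompositorNodeMixRGB')
    links.new(invert.outputs['Color'], mix.inputs['Fac'])
    links.new(plate.outputs['Image'], mix.inputs[1])
    links.new(grade.outputs['Image'], mix.inputs[2])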

I actually had been doing something like this in extreme shots where the shadow was obviously colored, like in this shot:

[Image: colored shadow over water]

If I had straight multiplied the black/gray shadow, it would have been (in fact, was) completely wrong. So what I did here was sample the real shadowed water in one RGB node, sample a corresponding pixel in the unshadowed water in another RGB node, run the two colors through a difference node, and then run the output of that through a difference node with the plate, using the inverted shadow pass as the factor. Now, in a lot of cases it may be enough just to reduce the plate’s value, but multiplying with the shadow pass? No.
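
Here’s roughly what that node setup looks like in bpy (the sampled RGB values below are invented placeholders; in practice you eyedropper them from the plate):

    import bpy

    scene = bpy.context.scene
    scene.use_nodes = True
    tree = scene.node_tree
    nodes, links = tree.nodes, tree.links

    plate = nodes.new('CompositorNodeImage')        # the water plate
    shadow = nodes.new('CompositorNodeImage')       # the rendered shadow pass

    invert = nodes.new('CompositorNodeInvert')
    links.new(shadow.outputs['Image'], invert.inputs['Color'])

    # Two RGB nodes holding colors sampled from the plate (placeholder values).
    lit = nodes.new('CompositorNodeRGB')            # unshadowed water sample
    lit.outputs[0].default_value = (0.20, 0.35, 0.40, 1.0)
    shaded = nodes.new('CompositorNodeRGB')         # real shadowed water sample
    shaded.outputs[0].default_value = (0.08, 0.20, 0.28, 1.0)

    # Difference of the two samples = what the shadow actually does to the water.
    color_diff = nodes.new('CompositorNodeMixRGB')
    color_diff.blend_type = 'DIFFERENCE'
    color_diff.inputs['Fac'].default_value = 1.0
    links.new(lit.outputs[0], color_diff.inputs[1])
    links.new(shaded.outputs[0], color_diff.inputs[2])

    # Difference that against the plate, masked by the inverted shadow pass.
    plate_diff = nodes.new('CompositorNodeMixRGB')
    plate_diff.blend_type = 'DIFFERENCE'
    links.new(invert.outputs['Color'], plate_diff.inputs['Fac'])
    links.new(plate.outputs['Image'], plate_diff.inputs[1])
    links.new(color_diff.outputs['Image'], plate_diff.inputs[2])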

3) Always get the tracking data you need.

I refer you back to this post, which put an optimistic spin on a fairly f’ed-up predicament. In some cases you can track tricky moves with whatever you happened to shoot; in some cases you can’t. And if you can’t track the movements, forget about hand-animating them. Shooting at a high resolution is a great luxury because it allows you to worry about framing later, and that’s especially important if you need to track things that will eventually be out of frame. Do not miss the opportunity to track things properly.

4) Be organized. No, more organized. 

I’ve been working on the VFX for this thing for close to two years. In the last few months, I’ve had occasion to go back and fix or tweak shots I did over a year earlier and hadn’t looked at since. What a mess. Half the time I couldn’t figure out which of several files I should be looking at, or which files those depended on in turn. A lot of the time I didn’t even know which computer or hard drive they were on. And this was in spite of actually trying to be organized from the beginning.

For any future project of this scale, even working alone, I would set up a version control repository. Sebastian Koenig has a good video on using Mercurial for this. Version control is absolutely essential for knowing which file you’re working on and when and how it has changed. I could also have done a better job of maintaining a consistent file structure. Shots should be separated (I did that). Animation, lighting, and comp work should be separated and linked where necessary (I did not do that very well). Keep all the renders you need, and only those. You will quickly get into multiple terabytes with a project like this, and unnecessarily large test renders, unneeded alpha channels, and superfluous color-depth information eat up a lot of space.
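
A consistent layout is easy to enforce with a tiny script. Here’s a hypothetical sketch; the shot names and subdirectory scheme are just examples, not my actual structure:

    import os

    # Hypothetical per-shot layout: one tree per shot, with plates, 3D work,
    # comp work, and renders kept strictly apart.
    SHOTS = ['sh010', 'sh020', 'sh030']
    SUBDIRS = ['plates', 'anim', 'lighting', 'comp',
               os.path.join('renders', 'passes'),
               os.path.join('renders', 'final')]

    for shot in SHOTS:
        for sub in SUBDIRS:
            os.makedirs(os.path.join('project', 'shots', shot, sub), exist_ok=True)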

Which brings me to:

5) Render all needed passes as image files. Comp separately.

This is another one that seems obvious in retrospect, but wasn’t obvious enough to me at the start. Do not try to comp in the same scene that you render 3D stuff just because Blender lets you. Render all 3D passes to file sequences  in separate directories, and composite with those image files as input. Don’t forget to make sure you’re rendering alpha channels where you need them and preferably not where you don’t. I would recommend not to use Blender’s Multilayer EXR output option unless you are confident in your organization system and you’re sure that’s what you want. You can’t view this format in any previewing or thumbnail software as far as I’m aware, which is a pain when all you want to do is see what’s in an image without loading it into the compositor (IrfanView works for ordinary EXR files, and is a pretty good general purpose image viewer). Even the Blender VSE doesn’t handle Blender Multilayer EXRs  (it would be nice if it did, though). For what it’s worth, Nuke can’t read them either.
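
A minimal sketch of the pass-to-file-sequence setup in bpy, using a File Output node with one plain (non-multilayer) EXR slot per pass; the layer name, base path, and pass selection below are placeholders (2.7x-era API):

    import bpy

    scene = bpy.context.scene
    scene.use_nodes = True
    tree = scene.node_tree
    nodes, links = tree.nodes, tree.links

    # Enable the passes you actually need on the render layer.
    layer = scene.render.layers['RenderLayer']
    layer.use_pass_shadow = True
    layer.use_pass_ambient_occlusion = True

    rl = nodes.new('CompositorNodeRLayers')

    # One File Output node writing plain EXR sequences, one slot per pass,
    # and no alpha channel unless a pass needs it.
    out = nodes.new('CompositorNodeOutputFile')
    out.base_path = '//renders/sh010/passes/'   # hypothetical shot directory
    out.format.file_format = 'OPEN_EXR'
    out.format.color_mode = 'RGB'               # skip alpha here

    out.file_slots.clear()
    for pass_output, slot_name in (('Image', 'beauty'),
                                   ('Shadow', 'shadow'),
                                   ('AO', 'ao')):
        # Slot names become filename prefixes; they can include subdirectories.
        out.file_slots.new(slot_name)
        links.new(rl.outputs[pass_output], out.inputs[slot_name])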

Ironically, I tended to skip the step of rendering passes out to files on shots that were more complicated, because it seemed like extra work. And of course it was exactly those shots I found myself coming back to, wanting to tweak something, and realizing it would take another two weeks of 3D rendering to make simple comp adjustments.

The issue of keeping the composite node trees organized and comprehensible is another big one, but that’s probably worth its own post.

6) Oh, and you can set Cycles sampling override values per render layer. How did I miss that for so long?

No more 1000 sample shadow passes that I’m just going to blur out anyway!
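
In bpy terms, the override lives on the render layer itself; the layer names here are hypothetical, and 0 means “use the scene setting”:

    import bpy

    scene = bpy.context.scene
    scene.cycles.samples = 1000                 # full quality for the beauty

    # Per-layer override: 0 inherits the scene setting above.
    scene.render.layers['beauty'].samples = 0
    scene.render.layers['shadow'].samples = 50  # plenty for a pass you'll blur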

I’m sure there are things I’m forgetting to mention, but these are the big issues that spring to mind.
