I’m excited to finally premiere Electric Town today in front of actual audiences. By sheer coincidence it’s going to have two Japan premieres at the same time, one in Tokyo at the Shimokitazawa Film Festival and one in Hokkaido at the Sapporo International Shorts Festival. I wish I could get to both, but I’m only going to be able to make the Shimokitazawa showing. It looks like the festival is going to narrowly miss getting hit by Supertyphoon Vongfong, so we’re lucky with the timing!
A few short months ago I thought the movie was about done, aside from a few small color tweaks. We’d done the best we could with a soundtrack, pieced together mostly from Creative Commons-licensed music and a few original odds and ends.
There was one piece by electronic music pioneer Laurie Spiegel that I had really wanted to use, so I contacted her about using it. She agreed, and she was encouraging about the movie early on in postproduction.
Imagine my surprise when I contacted her to show her the final movie and she liked it so much she said she wanted to work on (re)scoring the whole movie! I had actually already submitted to several festivals (including Sapporo, which just announced that Electric Town has been selected for screening). But I was happy to delay any future submissions to give her time to work on the movie.
Recently she sent me her finished score and I’m thrilled with it. She really managed to bring a lot more out of the movie than the previous soundtrack had been able to. I’m more excited than ever to submit to more festivals as soon as we finish the final mix.
The Sapporo International Short Film Festival and Market has just announced this year’s official selections for Japan and I’m pleased to report that Electric Town is on the list! (Sorry, Japanese only at the moment.)
It’s exciting to have a screening, and hopefully it’s also a good sign to have been accepted by the first festival we submitted to. There are a few other submissions out there whose results haven’t yet been announced.
It’s too early to tell if this will be the movie’s premiere, but it would be a great environment for the premiere if it turns out to be.
I had an issue with doubling shadows that was surprisingly tricky to deal with (I’m still not sure I handled it as well as I could have, and I’m open to other ideas). The problem is that when you lay CG shadows over real shadows in the image, the double-shadowed area becomes twice as dark. This obviously isn’t correct. Only one light source (in this case the sun) is being blocked, so the shadows should merge seamlessly. You can see the problem in the top picture here, along with my best effort at a solution in the bottom picture.
I tried a couple of approaches that didn’t really work, and the more I thought about it the more I thought that getting a perfect result could be quite tricky. For what it’s worth, here’s how I handled it:
For the shadow itself, I used a CC node on the plate factored by an inverted shadow pass as I discussed a few posts back. Lowering the gain slightly brought me to a very close approximation of the actual shadow, but of course with the overlapping shadows too dark. Then I fed the output of that into a lighten node, again using the inverted shadow pass as a factor, with a solid color as the second input. The solid color was basically a sample of the darkest point of the ordinary, correct shadow color. I tweaked that color until I got the most unobtrusive result, like so:
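In code terms, that node chain works out to something like the following per-pixel sketch (a minimal Python sketch; the gain and floor values are made-up illustrations, not the ones from my comp, and the CC node is modeled as a simple gain multiply):

```python
# Per-pixel sketch of the shadow fix described above (illustrative values).
# inv_shadow: inverted shadow pass (1.0 where the CG shadow falls, 0.0 elsewhere).

def lerp(a, b, fac):
    """Blender-style mix: blend from a toward b by factor fac."""
    return a * (1.0 - fac) + b * fac

def apply_cg_shadow(plate, inv_shadow, gain=0.55, floor=0.18):
    # Step 1: CC node factored by the inverted shadow pass --
    # lower the gain only where the CG shadow falls.
    darkened = lerp(plate, plate * gain, inv_shadow)
    # Step 2: lighten node with the same factor and a solid color (the
    # darkest value of the real shadow) as the second input. Lighten is
    # a per-channel max, so it pulls doubled-shadow areas back up to the
    # real shadow's level instead of letting them go twice as dark.
    return lerp(darkened, max(darkened, floor), inv_shadow)

sunlit = 0.6        # open street pixel
real_shadow = 0.18  # plate pixel already inside the real shadow

print(apply_cg_shadow(sunlit, 1.0))       # CG shadow darkens the street
print(apply_cg_shadow(real_shadow, 1.0))  # overlap: clamped, not doubled
```

The overlap pixel ends up at the floor color rather than being darkened a second time, which is the whole point of the lighten step.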
Note that this method wouldn’t have worked at all if there were different colors or values in the ground under the shadow. I had to be able to assume that the shadow itself on the street should be the darkest value. Still, the problem here was that the resulting lightened portion lacked the speckled texture of the street. So I wanted to add some speckling only to that lightened area.
To isolate the lightened area, I used a difference node with the non-lightened image and the lightened image as inputs. I separated the results into HSV components and used the value, like this:
This gave me a matte to work with only the solid-colored lightened portion.
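Sketched out, the matte extraction looks like this (hypothetical pixel values; the V of HSV is just the max of the RGB channels, and Blender's Difference blend at full factor is a per-channel absolute difference):

```python
# Sketch of isolating the lightened area as a matte (illustrative values).

def difference(a, b):
    # Per-channel absolute difference, as in Blender's Difference blend mode.
    return tuple(abs(x - y) for x, y in zip(a, b))

def hsv_value(rgb):
    # The V component of HSV is the maximum of the RGB channels.
    return max(rgb)

original = (0.10, 0.10, 0.12)   # doubled-shadow pixel before the lighten node
lightened = (0.18, 0.17, 0.19)  # same pixel after the lighten node
untouched = (0.35, 0.34, 0.36)  # pixel outside the lightened area

matte_in = hsv_value(difference(original, lightened))   # nonzero: in the matte
matte_out = hsv_value(difference(untouched, untouched)) # zero: outside it
print(matte_in, matte_out)
```

Only pixels the lighten node actually changed produce a nonzero difference, so the value channel serves directly as the matte.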
Separately, I rendered out a b/w high contrast image of the street to give me a street texture map of sorts. I multiplied that with the lightened image to give it a little bit of grit, using the difference output as a factor (I added a multiplier to that so I could adjust the influence). I also used that as a factor for adjusting the hue and value of the output.
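The re-speckling step amounts to roughly this per pixel (the texture sample, matte value, and influence multiplier are all made-up numbers for illustration):

```python
def lerp(a, b, fac):
    """Blender-style mix: blend from a toward b by factor fac."""
    return a * (1.0 - fac) + b * fac

def respeckle(lightened, street_texture, matte, influence=0.8):
    # Multiply the lightened image by the high-contrast b/w street texture,
    # but only where the difference matte says the solid fill was applied.
    # The influence multiplier scales how strongly the grit shows through.
    gritty = lightened * street_texture
    return lerp(lightened, gritty, min(matte * influence, 1.0))

# Inside the lightened area (matte = 1): the texture adds grit.
print(respeckle(0.18, 0.7, 1.0))
# Outside the lightened area (matte = 0): the pixel is untouched.
print(respeckle(0.18, 0.7, 0.0))
```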
The result is far from perfect, but it’s less noticeable than the double shadows I started with. With all that’s going on in the frame I don’t think it will be at all noticeable. Still, if people have other ideas for dealing with doubled shadows, I’d love to hear them.
You know that feeling when you’re just finishing up making a short movie about a lonely old man and a robot dog, and the Academy Awards happen, and the winner for animated shorts turns out to be a short movie about a lonely man and a robot dog?
Fortunately, aside from the uncanny thematic similarity, the movies are about as different as they can be. Anyway, I loved Mr Hublot. Well-deserved win.
Things are finally winding down. I’m rendering out a full sequence with a first complete stab at color correction and a near-finished sound mix. After I watch it on a couple screens and show it to some people, I’ll probably make a few adjustments with the color, and by the end of the month I’ll be submitting to festivals.
I’ve been learning a lot in my AnimationMentor VFX course too. I was a little worried that with all I learned in the course I’d want to go back and redo the whole movie, but fortunately I’ve only felt I really had to do that with a couple shots.
I have picked up a few nuggets of knowledge both through taking the course and making the movie that I will definitely keep in mind in the future. Some of them are so obvious in retrospect it’s a little bit embarrassing to mention them, but if they weren’t obvious to me to begin with, I assume that there are Blender users out there for whom they aren’t obvious now. In no particular order, here they are:
1) Use, but don’t rely entirely on image-based lighting.
You need to use IBL, and you need to render with Cycles (which is great, btw. IMHO it compares remarkably well to the industry standard Arnold). You must take light probes (which I did) and you should shoot gray balls (which I didn’t). However, for most shots you will want to augment this lighting with some CG lights for several reasons. For one thing, Cycles won’t give you a shadow pass from an environment texture. And even if it did it’s nearly impossible to get fine control over the various lighting components (key, fill, highlights, shadows) with only the IBL. In actual filming, small lights and reflectors are used all the time, and you should assume it’s the same in your VFX shots.
2) The shadow pass is not the actual shadow. Don’t multiply it.
This was the biggest forehead slap for me. All along I’d been using the shadow pass (maybe multiplied by an AO pass) as a multiplier with the plate to create the shadows (with a factor between 0 and 1 to control intensity). This basically works to darken the plate, and for some shots it looks okay. But shadows aren’t just darker. They have less contrast, and depending on lighting conditions they are different colors. You can’t control these factors just by multiplying the plate with gray/black. Also, even if you use a factor of less than one, it’s easy to crush your blacks to zero this way, which can be a big problem in color grading and correction.
What you do (of course, d’oh!) is you invert the shadow pass and use it as a matte for whatever grading or adjustments you need to do to make your plate look shadowed. This approach is actually made more explicit in some other 3D rendering software, where the shadow pass output itself is inverted from what you get in Blender.
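As a toy illustration of the difference (all values are assumed; a real comp would use a proper grade rather than this simple gain-plus-lift):

```python
def lerp(a, b, fac):
    """Blender-style mix: blend from a toward b by factor fac."""
    return a * (1.0 - fac) + b * fac

def shadow_by_multiply(plate, shadow_pass, intensity=0.7):
    # The naive approach: multiply the plate toward the shadow pass.
    # Dark pixels get crushed toward zero.
    return plate * lerp(1.0, shadow_pass, intensity)

def shadow_by_matte(plate, inv_shadow, lift=0.02, gain=0.5):
    # The better approach: grade a "shadowed" version of the plate
    # (here a lower gain plus a slight lift so blacks don't crush),
    # then use the inverted shadow pass as the matte between the two.
    graded = plate * gain + lift
    return lerp(plate, graded, inv_shadow)

dark_pixel = 0.05
print(shadow_by_multiply(dark_pixel, 0.0))  # crushed well below the lift
print(shadow_by_matte(dark_pixel, 1.0))     # held up by the lift
```

The matte approach also lets you swap in any grading you want (hue shifts, contrast reduction) without touching the shadow pass itself.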
I actually had been doing something like this in extreme shots where the shadow was obviously colored, like in this shot:
If I had straight multiplied the black/gray shadow, it would have been (was) completely wrong. So what I did here was to sample the real shadowed water in one RGB node, sample a corresponding pixel in the unshadowed water in another RGB node, run the two colors through a difference node, then run the output of that through a difference node with the plate, using the inverted shadow pass as the factor. Now, in a lot of cases it may be enough just to reduce the plate’s value, but multiplying with the shadow pass? No.
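That chain of difference nodes amounts to the following (the sampled colors are hypothetical; Blender's Difference blend is the per-channel absolute difference, mixed back toward the first input by the factor):

```python
def difference_mix(a, b, fac):
    # Blender's Difference blend per channel: mix(A, abs(A - B), fac).
    return tuple(x * (1.0 - fac) + abs(x - y) * fac for x, y in zip(a, b))

# Hypothetical colors sampled from the plate with RGB nodes:
unshadowed_water = (0.20, 0.35, 0.40)
shadowed_water = (0.10, 0.22, 0.30)

# First difference node: how much the real shadow changes each channel.
shadow_tint = difference_mix(unshadowed_water, shadowed_water, 1.0)

# Second difference node: subtract that tint from the plate, with the
# inverted shadow pass as the factor so only shadowed pixels change.
def apply(plate, inv_shadow):
    return difference_mix(plate, shadow_tint, inv_shadow)

print(apply(unshadowed_water, 1.0))  # lands on the sampled shadow color
print(apply(unshadowed_water, 0.0))  # unchanged outside the shadow
```

Because the tint is derived from real samples, a fully shadowed pixel ends up matching the real shadowed water rather than a generic darkening.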
3) Always get the tracking data you need.
I refer you back to this post, which put an optimistic spin on a fairly f’ed up predicament. In some cases you can track tricky moves with what you happened to shoot. In some cases you can’t. And if you can’t track the movements, forget about hand-animating them. Shooting at a high resolution is a great luxury because it allows you to worry about framing later. This is especially important if you need to track things that will eventually be out of frame. Do not miss the opportunity to track things properly.
4) Be organized. No, more organized.
I’ve been working on the VFX for this thing for close to two years. In the last few months, I’ve had occasion to want to go back and fix/tweak things in shots I did over a year before and hadn’t looked at since. What a mess. Half the time I couldn’t figure out which of several files I should be looking at, and which files they depended on in turn. A lot of the time I didn’t even know what computer or hard drive they were on. And this was in spite of actually trying to be organized from the beginning.
For any future project of this scale, even if working alone, I would set up a version control repository. Sebastian Koenig has a good video on using Mercurial for this. Version control is absolutely essential for knowing which file you’re working on and when/how it has changed. Also, I could have done a better job of maintaining a consistent file structure. Shots should be separated (I did that). Animation, lighting, and comp work should be separated and linked where necessary (I did not do that very well). All of and only the renders you need should be kept. You will quickly get into multiple terabytes with a project like this, and unnecessary, over-large test renders, un-needed alpha channels, and superfluous color depth information eat up a lot of space.
Which brings me to:
5) Render all needed passes as image files. Comp separately.
This is another one that seems obvious in retrospect, but wasn’t obvious enough to me at the start. Do not try to comp in the same scene where you render 3D stuff just because Blender lets you. Render all 3D passes to file sequences in separate directories, and composite with those image files as input. Don’t forget to make sure you’re rendering alpha channels where you need them, and preferably not where you don’t. I would recommend not using Blender’s Multilayer EXR output option unless you are confident in your organization system and you’re sure that’s what you want. As far as I’m aware, you can’t view this format in any previewing or thumbnail software, which is a pain when all you want to do is see what’s in an image without loading it into the compositor (IrfanView works for ordinary EXR files, and is a pretty good general-purpose image viewer). Even the Blender VSE doesn’t handle Blender Multilayer EXRs (it would be nice if it did, though). For what it’s worth, Nuke can’t read them either.
Ironically, I tended to skip the step of rendering passes out to files on the shots that were a bit more complicated, because it seemed like extra work. And of course it was exactly those shots I later came back to wanting to tweak, only to realize that simple comp adjustments would mean another two weeks of 3D rendering.
The issue of keeping the composite node trees organized and comprehensible is another big one, but that’s probably worth its own post.
6) Oh, and you can set Cycles sampling override values per render layer. How did I miss that for so long?
No more 1000 sample shadow passes that I’m just going to blur out anyway!
I’m sure there are things I’m forgetting to mention, but these are the big issues that spring to mind.