When one thinks of the lauded feature film Inception, one can’t help but think of its director Christopher Nolan–and deservedly so. Nolan earned a DGA Award nomination for Outstanding Directorial Achievement In Feature Film. He won the Writers Guild Award for Best Original Screenplay. And Nolan garnered the inaugural Visual Effects Society (VES) Visionary Award.
Yet as Nolan affirmed in accepting the latter honor, there are numerous artists who collaborate and contribute to making masterworks like Inception. The overall VES competition recognized that fact as Inception won all four categories in which it was nominated: Outstanding Visual Effects in a Visual Effects-Driven Feature Motion Picture; Outstanding Created Environment (Paris Dreamscape) in a Live-Action Feature; Outstanding Models and Miniatures (Hospital Fortress Destruction) in a Feature; and Outstanding Compositing in a Feature. The lead VFX house on Inception was Double Negative Visual Effects, which maintains studios in London and Singapore. Models and miniatures for the film came out of New Deal Studios, Los Angeles.
To shed more light on what goes into stellar work, SHOOT tapped into several of the artists who were VES Award recipients earlier this month, including talent at New Deal to discuss Inception; a rigger at Framestore, London, to share backstory on work for Harry Potter and the Deathly Hallows: Part 1, which earned Outstanding Animated Character In a Live-Action Feature Motion Picture (for the character Dobby); and artisans at Method, Los Angeles, and MPC, London, regarding two of the winning commercials.
MPC won the VES Award for Outstanding Animated Commercial on the strength of Cadbury’s “Stars V Stripes,” which was directed by Nick Gordon of Academy Films, London, for agency Fallon, London. (Gordon has since left Academy and co-founded Somesuch & Co. in London.)
And Method’s VES winner was Halo: Reach’s “Deliver Hope,” which earned distinction in the category Outstanding Animated Character in a Broadcast Program or Commercial. “Deliver Hope” was directed by Noam Murro of Biscuit Filmworks, Los Angeles, for agencytwofifteen, San Francisco.
SHOOT posed the following two questions to visual effects artists involved in the VES Award-winning work:
1) What was the biggest creative challenge you faced on this project?
2) And what noteworthy surprise or surprises arose (a lesson learned or an unexpected discovery) during the course of the project?
Here’s a sampling of their feedback:
Laurie Brugger, rigger, Framestore, London

1) Dobby’s eyes proved a challenge for rigging–by design they had a large proportion of their curvature showing, and we found this sometimes distracting or cartoon-like, especially from certain camera angles. We had to reshape his eyes without pulling the character [Dobby is a prime character in Harry Potter and the Deathly Hallows: Part 1] too far from the original design, which was already familiar due to his appearance in earlier films. Instead of remodelling, we chose to modify their shape within the rig so changes could be more dynamic. In some ways this was super beneficial: because the deformers were used to define the surface shape, we could see the deformation effects more clearly. For example, as the eyeball moves around, we can feel the “egg” shape of the eyeballs themselves displacing the skin. This level of freedom in design, particularly in the rigging department, is rare for us in 3D character production, and I think it ultimately contributed to the overall success of Dobby’s endearing quality.

2) One “aha!” moment for me was seeing a test our animation supervisor, Pablo Grillo, created in which he composited the reference actor’s eyeball directly onto a render of the animated character. The contrast in the frequency of motion was really significant at that stage. The eyeball darting around, the eye area contractions–the entire orbital region was really alive with activity even in the smallest movement. As a result, we explored further the presence of micro movements in facial muscle behavior–for example, small twitch-like movements in the face, not necessarily caused by an intentional emotional expression. We found their relevance equal to traditional poses and experimented with incorporating them more in the rig.
Dan Glass, senior creative director, Method, Los Angeles

1) The biggest challenge with Halo: Reach’s “Deliver Hope” was finding a balance between the feeling of a cinematic trailer–with its expanse, narrative arc and more objective point of view–and that of the first-person experience of the game itself. We wanted to immerse the audience in the battle whilst telling an important story in the Reach mythology which ties directly into the new game. We deliberately chose to start the piece in a very real environment, enhanced significantly with additional practical and CG explosions and alien creatures but nevertheless based in a recognizable reality. The spot builds to its epic ending, weighing much more heavily on vast CG environments, digital matte paintings and complex effects simulations. Part of the challenge, as ever within a tight timeline, was allowing the filmmaking process the natural flexibility to evolve as a piece, which meant some very careful planning to map out the broad strokes of the work, and several custom tools to adapt as required by the visuals and edit as everything came together.

2) The process, whilst supremely challenging in its schedule and creative demands, was extremely rewarding. Two aspects turned out better than expected. Firstly, we always planned to use Nuke and its 3D capabilities to the full, but we were very excited by how far we were able to push them. All 3D cameras were exported by default into the Nuke pipeline, along with scene geometry, so that we were able to reproject textures and place elements correctly in 3D within the scenes. In a few cases shots were designed fully in Nuke (including the 3D camera) and actually exported out to CG for generation of elements, and in a couple of instances we even used Nuke to animate, texture and light CG into the scene without the need to move material between the 2D and CG departments.
Secondly, we devised a system to handle, in an automated fashion, the color correction and conform process for the dozens of versions ultimately required for international distribution. This was done in close collaboration with Company 3 and gained us many hours of additional shot design time.
Ian Hunter, creative director/co-founder, New Deal Studios, Los Angeles

1) What we did on the film Inception was provide the action of a large snow-bound mountain fortress as it is destroyed with explosive charges. The scene starts with close-ups of the bombs going off, using live-action explosives done by the first unit effects crew. We at New Deal came into play to show the wider shots of the overall destruction of the fortress. We elected to build the fortress as a miniature, but a very large miniature. Using a miniature allowed us to film physical destruction and explosions that would interact with each other in a convincing way. To make sure the explosions and flames scaled out correctly–and that the way the walls and floors broke apart looked realistic–we had to build the model at the relatively large 1/6 scale, meaning the model was one-sixth the size of the real building, if the real building ever existed. Whenever we take on a project there are always new challenges that we have to overcome to accomplish the job. Looking back, these challenges make the job interesting, but when you’re in it at the time it drives you crazy–“How are we going to do this?” So in the case of the fortress collapse from Inception, we had several challenges. Our director Christopher Nolan wanted the building to come down like a controlled demolition, and he wanted to see the building collapsing from the bottom up. Also, the action had to start in the front and appear to be a chain reaction of destruction that spread to the back. This meant we couldn’t just build a big model, pack it with explosives and hope for the best. Rather, we had to devise a way of bringing down the building that combined physical effects with pyrotechnics. Many parts of the building were mounted to hydraulic-powered elevator jacks that could pull the building down at a programmable speed.
Floors were mounted to elaborate breakaway skeletons inside that allowed the walls to crumble away faster than the building was falling. And specific explosive charges were added at the base of faux support columns to “trigger” the collapse, even though most of these explosions were for show and the actual falling was done mechanically. Another big concern when making this scene happen was that the building itself was mostly built sort of upside down. The main building is an inverted pyramid or ziggurat–smaller at the base than at the top–so it is inherently unstable. Plus, we had to develop a new technique for manufacturing all of the wall and floor parts in a way that they would be strong enough to support themselves but weak enough to break up on cue. It was like some huge Fabergé egg, and we had to hoist it up on a crane to stand 40 feet in the air in our backlot to shoot in a very specific natural lighting condition at a specific time of day in order to match the look of the live-action footage we were cutting into. There were over 200 individual explosive and mechanical effects that had to be timed out to the literal split second in order to pull off this wave of destruction. And after we did it once, we took the broken parts down and put up a new one to do it all over again within days. What’s so challenging about this work is coming up with a prototype–something that has never been done before–and finding a way to do it so it looks like you can do it over and over again in a specific manner. We were thrilled to receive the VES Award for models and miniatures in a feature, especially on the same night Christopher Nolan was being honored with the first Visionary Award from the VES. So I guess we did something right.
Jake Mengers, VFX creative director, MPC, London

1) When the script [for Cadbury’s “Stars V Stripes”] first arrived, the intention was to create the entire commercial from live-action footage. This process would have meant piecing together a narrative using stock footage of underwater creatures and shot elements; 3D was to be used for augmenting creature markings and seaweed bubbles. However, we soon realized that finding the right stock footage to tell the story was not straightforward. So Nick [director Gordon] asked us to recreate the creatures in 3D. If we could create the whole cast in 3D, we would have much more control over the narrative and give back the power to direct a character piece. The stock that had been found became reference for the 3D, storyboards were drawn up, and then reality dawned: in the first run of boards the creature count was over 20 different species. Many of these were dropped and we ended up with 14 creatures, not forgetting the seaweed bubbles, sand, water and detritus elements. Although this represented a huge challenge in terms of the sheer volume of work, there was nothing groundbreaking about our approach. We used a tried-and-tested 3D pipeline of Maya, ZBrush, RealFlow, Mental Ray and Nuke.

2) Our 3D/Nuke pipeline proved again that it is the only tool that gives you the flexibility to easily exchange 3D and share the workload. Importing cameras from Maya gave the compositor control over projecting and rebuilding matte paintings in a 3D view. Though this was possible before, it’s the ease with which the two disciplines now merge that’s impressive.