The ability to create photoreal CG humans has long been viewed as a sort of Holy Grail of storytelling tools. The applications have included digital stunt doubles–enabling digital characters to go where it would not be safe, practical or possible to send a lead actor or a stunt performer–as well as large numbers of digital extras. But in these cases, there was some distance between the characters and the camera.
The experience and expertise of CG teams, along with technological improvements, have narrowed this distance, bringing these characters closer to the camera and calling on them for an increasingly wide range of applications. “There’s been a good amount of [creating CG actors] going on in the feature and commercial world,” says Paul Babb, executive producer of commercials at Los Angeles-based Rhythm & Hues Studios. “Is it totally photoreal? Not yet. Is it getting better? Yes.”
“More often than not, the industry is able to create a digital character that convinces the audience it is real,” adds Tim Sarnoff, president of Culver City-based Sony Pictures Imageworks, although he acknowledges that the challenge remains considerable. “We are not consistent as an industry. That will be a hurdle that we have to overcome.”
But today, many agree that this area is reaching a turning point–the focus is shifting to having digital actors deliver convincing performances.
“There is a watershed coming,” says Ed Ulbrich, senior VP and executive producer of Venice-based Digital Domain’s commercial and music video unit. “We are at the point now where it’s not big effects movies anymore; we can do things with dialogue and actors and performance.”
This topic surfaced loudly this past summer at Siggraph, where San Francisco-based Mova’s Contour reality-capture system was one of the hits of the exhibition floor. Chief architect Steve Perlman–who heads Mova and its parent company Rearden–relates that audiences have felt empathy for CG characters as they have evolved over the years. Some recent movies, however, have featured CG characters that come extremely close to photorealism yet leave audiences sensing that something isn’t quite right, creating a dip in empathy.
This is the “Uncanny Valley,” which Perlman describes as a perceptual zone in which a CG face approaches photorealism just closely enough to be eerie.
The Contour system, he says, is designed to capture data precisely enough to overcome this effect. Instead of the markers traditionally used in motion capture, Perlman explains, Contour uses an FDA-approved phosphorescent makeup, mixed with a base and sponged onto the actor. Phosphorescent powder is used to mark the actor’s clothes, allowing performers to work in costume and letting the system capture the realistic movement of the clothing as well as the actor.
The actor is lit with customized Kino Flo flashing fluorescent lights, and two sets of HD-resolution cameras simultaneously capture the information: one set records the performance when it is lit; the other when it is dark, picking up only the glow of the makeup. The two sets of camera information–both visual and geometric–are combined to create a high-resolution 3D digital image. Perlman says the goal is to capture data that doesn’t require cleanup, creating more precise information while saving time and money. “We let the performers get into their roles, let the directors direct,” Perlman says. “We wanted it to be production friendly–we wanted to get the technology out of the way.”
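Mova has not published the internals of its pipeline, but the principle Perlman describes can be sketched: the lights strobe in sync with the cameras, so the frame stream interleaves lit imagery (the character’s texture) with glow-only imagery (its geometry), and any phosphor feature seen by two calibrated cameras can be triangulated into a 3D point. A minimal, purely illustrative sketch in Python–the function names and data layout here are assumptions, not Mova’s API:

```python
import numpy as np

def split_interleaved(frames):
    """Separate an interleaved capture into lit and glow streams."""
    lit  = frames[0::2]   # lights on: ordinary color imagery (texture)
    glow = frames[1::2]   # lights off: phosphorescent pattern only (geometry)
    return lit, glow

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from the same phosphor feature seen by two
    cameras, via standard linear (DLT) triangulation.
    P1, P2: 3x4 camera projection matrices; x1, x2: 2D pixel coords."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # least-squares solution of A @ X = 0
    X = Vt[-1]
    return X[:3] / X[3]           # homogeneous -> Euclidean coordinates
```

Repeating the triangulation across thousands of phosphor features per frame pair yields a dense, per-frame surface, which is what distinguishes this approach from tracking a sparse set of markers.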
Perlman reports that Contour–due for availability at the end of the year–would be compatible with major 3D software systems, so that the appearance of a character could be altered, for instance, to show aging.
He tells SHOOT that certain projects have already started that will incorporate Contour, although he declined to name customers or projects, citing nondisclosure agreements. But indicators point to Digital Domain for spots and to director David Fincher of bicoastal Anonymous Content on a movie. Fincher is scheduled to direct a feature titled The Curious Case of Benjamin Button, which tells the story of a man aging in reverse.
And Fincher says in a statement released by Mova: “Contour’s promise is enormous; the notion that the human face, in all its subtleties, could be mapped in real time, and with such density of surface information, opens up so many possibilities for both two- and three-dimensional image makers and storytellers. I can’t wait to get my hands on it.”
Meanwhile, Ulbrich reports that Digital Domain is looking at technologies such as Contour and Softimage’s Face Robot, as well as developing its own tools. While he declined to specify the toolset, Ulbrich did tell SHOOT that next-generation processes would be applied to the creation of photoreal CG humans for commercial work that he expects Digital Domain to finish in late fall.
He admits he is impressed by the Contour system. “This is a breakthrough,” Ulbrich enthuses. “Performances can now be captured in 3D as they are performed, eliminating much of the postproduction work required in the past. It isn’t just capturing dots in space anymore; it’s actual live-action volumetric capture. This brings photoreal CG human performance within reach of a wide range of feature film, video game and advertising applications. Contour opens up a new world of creative possibilities for directors.”
Another evolving technique to watch is San Francisco-based Industrial Light & Magic’s proprietary iMoCap system, which essentially involves grabbing photographic references of an actor’s performance with high-resolution cameras from different perspectives, then using proprietary software to translate the imagery into 3D data that can serve as a starting point for creating the performance of a CG character.
This was used, for example, to capture actor Bill Nighy’s performance in order to create a CG Davy Jones in this past summer’s blockbuster Pirates of the Caribbean: Dead Man’s Chest. Visual effects supervisor Roger Guyett explains that Davy Jones is also a good example of another application that will become increasingly common: retargeting an actor’s performance onto any type of character. “If you can retarget something, then we can cast the actor, and take that performance and reinterpret it,” he says.
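Conceptually, retargeting maps the captured motion onto a rig with different proportions–or, in Davy Jones’ case, different anatomy. A toy sketch of the idea follows; this is a generic illustration, not ILM’s pipeline, and the joint names and data layout are invented:

```python
import numpy as np

def retarget(capture_rotations, joint_map, target_rest_pose):
    """Drive a character rig with an actor's captured joint rotations.
    capture_rotations: {actor_joint: 3x3 rotation matrix} for one frame.
    joint_map: {actor_joint: character_joint} correspondence.
    target_rest_pose: {character_joint: 3x3 rest-pose rotation}."""
    pose = {}
    for actor_joint, rotation in capture_rotations.items():
        char_joint = joint_map.get(actor_joint)
        if char_joint is None:
            continue  # the character may lack (or add) joints, e.g. tentacles
        # Layer the performance on top of the character's own rest pose;
        # the character's bone lengths, not the actor's, set the final shape.
        pose[char_joint] = rotation @ target_rest_pose[char_joint]
    return pose
```

Because only the motion carries over, the same captured performance “re-reads” on any body the artists build, which is what lets a studio cast the actor and reinterpret the result.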
Guyett reports that the iMoCap system doesn’t require a traditional motion capture set. By removing the controlled environment, he says, camera operators can shoot the reference materials during actual takes–including with a Steadicam. The intent is to allow the director to concentrate on directing a performance, without placing constraints on the first unit photography. Guyett says the actor’s performance data becomes a sort of digital reference, a high-fidelity baseline for getting a rich performance. This reference is important because each individual animator may create a character somewhat differently. “If it’s a team… the performance is influenced by multiple animators,” he points out.
“Computer animators are actors too,” agrees Sarnoff, pointing out that animators are accustomed to using their own faces and body movements–as well as movements studied in others–as references. Every person, he says, moves and behaves differently, and no two animators will animate a character the same way.
“The technology has improved dramatically in the last couple of years,” Sarnoff relates, adding that rendering is better, lighting is better and the understanding of physics is better. “But I don’t think any technology is the total answer… no technology, no matter how advanced, can make a character who can make me laugh; only the person behind the technology can do that.” Imageworks has employed a variety of techniques to create CG humans–from the performance capture used for the recent Monster House to keyframe animation and even some bluescreen work, as deployed on Superman Returns to create a CG Man of Steel.
Marlon & Hues
One application for CG humans is recreating a younger (or older) version of a known actor, or even bringing a deceased actor to the screen. A recent example is Rhythm & Hues–which also worked on Superman Returns–using its proprietary 3D and rendering software to create a CG Marlon Brando, who played Superman’s father in the 1978 Superman.
Babb explains that the process started with a recording of Brando’s voice that was made during the original film, as well as some footage from that shoot. “We got as close as we could get to the dialogue [with his image] and then created a full wireframe model and married textures to his face and onto that [applied] facial animation.”
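The article doesn’t specify which facial rig Rhythm & Hues used, but blend shapes are one standard way to “apply facial animation” to a textured head model: the animator sculpts a library of expression offsets and then keyframes their weights over time. A minimal sketch, assuming numpy arrays of vertex positions:

```python
import numpy as np

def blend(neutral, deltas, weights):
    """neutral: (V, 3) vertex positions of the resting face.
    deltas: {name: (V, 3) per-vertex offsets} for each sculpted expression.
    weights: {name: float} keyframed by the animator per frame."""
    face = neutral.copy()
    for name, weight in weights.items():
        face += weight * deltas[name]   # mix expressions linearly
    return face

# e.g. one frame of dialogue:
# blend(head, shapes, {"jaw_open": 0.6, "brow_up": 0.2})
```

The tool does the mixing; deciding that the jaw opens 60 percent on a given syllable is still the animator’s call, which is Babb’s point below.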
Facial animation, adds Babb, still comes down to the skills of an animator. At last month’s Siggraph, it was clear that software developers are trying to provide more sophisticated tools in this area, as evidenced by the high-profile launch of Softimage’s Face Robot.
CG actors are being refined in other ways, such as by taking advantage of improvements in computing speed and rendering technology. Ulbrich notes that the worlds of video games and motion picture production are blurring. “We are going to use real-time video game engines as production tools,” he predicts. “In the next two to five years, we’ll start to see this widespread. Real-time rendering is going to change everything. If you can move 8,000 polygons in real time without having to render, that increases productivity exponentially for the artists.”
Guyett also cites advancements in the realism of textures such as skin. “The way skin reacts to light has been a very difficult thing,” he explains, adding that improvements include the development of subsurface scattering–recreating how light is absorbed by skin and scatters beneath the surface.
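A quick way to see why this matters: a plain Lambert shader cuts light off hard at the terminator, which reads as plastic, while skin lets light bleed past it. A crude, hypothetical stand-in for subsurface scattering is “wrap” lighting–production systems use far more sophisticated diffusion models, but the sketch shows the effect being chased:

```python
import numpy as np

def lambert(n, l):
    """Standard diffuse falloff: hard cutoff where the surface turns from the light."""
    return max(np.dot(n, l), 0.0)

def wrapped(n, l, wrap=0.5):
    """Let light 'bleed' past the terminator, mimicking the soft
    transition real skin shows where light grazes it (n, l: unit vectors)."""
    return max((np.dot(n, l) + wrap) / (1.0 + wrap), 0.0)
```

True subsurface scattering goes further, modeling how far light travels inside the skin before re-emerging, but the softened falloff above captures the basic look Guyett describes.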
Performance capture is also an important tool being used to create CG actors. Jon Damush, VP/general manager at motion capture technology manufacturer Vicon and its Los Angeles-based motion capture studio House of Moves, expects a growing number of productions–both features and spots–to use performance capture techniques. “[Motion capture] is a cost-effective piece of the puzzle,” he says. “We record mountains of data in a day that would take animators months to produce. We are not replacing, but supporting the massive amount of content they produce.” Incidentally, Vicon recently announced a technology alliance with Mova, enabling the use of Vicon MX-series cameras with the Contour system.
It appears that most of the R&D in this area is being applied to features, although commercials are part of the mix. And, as Babb points out, technology developed for one ends up being applied to the other.
“The budgets are bigger,” offers Damush as a reason features seem to be leading the charge. “And the commercial industry is more risk-averse about trying new technology because their timelines are so tight. I think the feature work is going to drive the next leap.”