
Maya plugin development (and a chance to practice English~)


 Turtle Talk

Prior to the on-set motion capture, the team had the actors perform expressions while being scanned with Disney Research's Medusa system. The Muse team decomposed those scans into components within Fez, ILM's facial animation system that animators used to re-create the expressions. After facial motion capture, layout and R&D teams triangulated the markers in the 2D images into 3D space, frame by frame, and then applied them to controls on those components in Fez.
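The triangulation step is standard multi-view geometry: each calibrated camera contributes two linear constraints on the unknown 3D marker position. A minimal DLT-style sketch in Python (an illustration of the math only, not ILM's pipeline code; the 3x4 projection matrices are assumed given):

    import numpy as np

    def triangulate(proj_mats, points_2d):
        """Solve one marker's 3D position for one frame.
        proj_mats: list of known 3x4 camera projection matrices.
        points_2d: matching list of (x, y) marker positions per view."""
        rows = []
        for P, (x, y) in zip(proj_mats, points_2d):
            rows.append(x * P[2] - P[0])  # each view adds two linear
            rows.append(y * P[2] - P[1])  # constraints on the 3D point
        A = np.array(rows)
        X = np.linalg.svd(A)[2][-1]       # homogeneous least squares
        return X[:3] / X[3]               # back to Euclidean coordinates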


Helping animators create believable dragons and engaging characters was a new generation of animation tools the studio named Premo, along with a new rigging system designed to take advantage of fast graphics cards. The crew on this film was the first to use the new system.

Helping the lighting team achieve that look was a new-generation lighting system the studio calls Torch. "Lighters can see quick renders of their setups," DeBlois says. "They can manipulate and tweak the light in each shot. It allows more subtlety."

As the animators work, they can see keyframes rendered in real time with a standard lighting setup that gives them a sense of volume, dimension, light, and shadow.

detail-oriented work

"The next step might be having a setup in which the lighters work concurrently with the animators so they can animate to the lighting.

The maturation of the animation and lighting tools, which has opened new ways of working for the animators and lighting artists, parallels the story in the film they worked on.


http://www.cgw.com/Publications/CGW/2013/Volume-36-Issue-7-Nov-Dec-2013-/Winter-Wonderland.aspx

The rigging team devised a tool they named Spaces that gave animators a convenient way to reconfigure the rig. “He has one rig with mechanisms for connecting and disconnecting,” Hanner says. Working in Autodesk’s Maya, an animator could click a button to have Olaf’s head fall off and still animate his body walking away.
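The article doesn't describe how Spaces is implemented, but the connect/disconnect behavior it describes can be sketched with Maya's Python API roughly as follows (node names are hypothetical, and this is only a toy stand-in for the real tool):

    import maya.cmds as cmds

    def detach_head(head_grp='olaf_head_grp'):
        """'Disconnect' the head by deleting the constraint tying it to
        the body, so it can fall off and be animated independently."""
        for con in cmds.listRelatives(head_grp, type='parentConstraint') or []:
            cmds.delete(con)

    def attach_head(head_grp='olaf_head_grp', socket='olaf_neck_socket'):
        """'Reconnect' the head to the neck, keeping its current pose."""
        cmds.parentConstraint(socket, head_grp, maintainOffset=True)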

“The most important thing was bringing the nuances and subtleties of the hand-drawn characters to the CG characters,”

“The traditional CG hair interaction techniques, which involve curves, digital brushes, and digital combs, didn’t work well. So we wrote a new software package we call Tonic. It gives our hair artists a sculpture-based tool set.”

Typically the modelers would first create rough proxies that showed shapes or rough directions. Once approved, the hair artists began refining those shapes with Tonic. In Tonic, they could see pipes or tubes that represented hair and could toggle individual strands of hair within to see the flow. “Working with these volumes gives hairstyles complete fullness,” Hanner says. Once groomed and structured with Tonic, the hair moved into Disney’s simulation package called “Dynamic Wires.” “The transition is automatic,” Hanner says. “But, the artists can rearrange and procedurally regenerate subsets of data the simulation works with.”

To create the snow and manage the interaction between snow and characters, the team developed two systems: Snow Batcher for shallow snow, and Matterhorn for deep snow and close-ups. “The standard slight [foot] impressions wouldn’t work, so we created our Snow Batcher pipeline. It could define which characters disturbed the snow and how deep, automatically create foot impressions, add additional snow, and kick it up. We put a lot of information into the database for each shot.”
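As a rough mental model of the shallow-snow case, foot impressions can be thought of as stamps pressed into a heightfield, with the displaced snow piled up around the rim. A toy numpy sketch (an approximation for illustration, not the Snow Batcher pipeline):

    import numpy as np

    def stamp_footprint(snow, cx, cy, radius, depth):
        """Press a circular impression of `depth` into heightfield `snow`
        and kick the displaced volume up around the edge."""
        h, w = snow.shape
        ys, xs = np.ogrid[:h, :w]
        dist = np.sqrt((xs - cx) ** 2 + (ys - cy) ** 2)
        inside = dist < radius
        rim = (dist >= radius) & (dist < radius * 1.5)
        displaced = depth * inside.sum()           # snow volume pushed out
        snow[inside] -= depth                      # the impression itself
        snow[rim] += displaced / max(rim.sum(), 1) # piled-up rim of kicked snow
        return snow

    snow = np.full((128, 128), 1.0)   # one unit of fresh snow everywhere
    stamp_footprint(snow, 64, 64, radius=6, depth=0.4)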

“We used raytracing for the large ice-palace environments, which were very, very expensive. For the snow, we generated large point clouds for subsurface scattering and used deep shadow maps.” “We ended up shaping shallow and deep subsurface scattering lobes according to real data and then combining the two different effects. It isn’t raytracing through a volume; it’s an approximation. But, we got a nice lighting effect.” For the deep snow and snow that the characters interact with, the lighting team used a completely different shading system that lit the snow as if it were a volume.
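The shallow-plus-deep lobe idea can be illustrated as a weighted sum of two falloffs of different widths; the constants below are invented for illustration and are not Disney's measured data:

    import numpy as np

    def snow_sss_profile(r, shallow_scale=0.2, deep_scale=2.0, mix=0.6):
        """Diffusion profile R(r): a tight lobe (surface detail) plus a
        broad lobe (deep light bleed), as exponential falloffs."""
        shallow = np.exp(-r / shallow_scale) / shallow_scale
        deep = np.exp(-r / deep_scale) / deep_scale
        return mix * shallow + (1.0 - mix) * deep

    r = np.linspace(0.0, 5.0, 256)   # distance from the illuminated point
    profile = snow_sss_profile(r)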

Disney uses a custom cloth simulator called Fabric, which the team updated to handle the bunad costumes. By the end of the film, the team had created 245 simulation rigs for the clothing, more than double the number used for all the studio's previous films combined.


http://www.cgw.com/Publications/CGW/2013/Volume-36-Issue-7-Nov-Dec-2013-/Gravitational-Pull.aspx

In early 2010, long before production started, Paul Debevec’s group at ICT/USC had demonstrated a light stage system in which light from LEDs surrounding an actor provided changing lighting conditions, while high-speed cameras captured the actor’s face.


 The Gold Standard

Now, it is about storytelling, character animation, the personalities, and aspects other than the technical achievements.

The storytelling credibility that Gravity gives to 3D feels like a breakthrough statement on how the technique can amplify the audience experience in a very visceral, emotionally fulfilling way.


You're Hired

"Look, there are fundamental skills you need that go beyond knowing how to write Maya scripts. You need to know the basics, the fundamental aspects of the creative process. You need to know how to think about different formulas for creating new media. And you must have the flexibility to adapt. That will allow you to survive in the volatile job market."

 

Birds of a Feather

Reel FX Animation Studios' feathering system, Avian, enabled the artists to generate feathers of all sizes and shapes and then groom the birds, viewing the results in real time within the Autodesk Maya viewport prior to rendering.

“That was one of our goals, to make it artist-friendly, so you didn’t have to be a programmer to use it,”

Avian took approximately a year to develop, and many of the features were devised on the fly, so to speak, as the need arose.

During development, Michalakeas, Pitts, and Sawyer did a tremendous amount of physical and theoretical research, including examining a real turkey wing for the lay of the feathers and studying numerous SIGGRAPH papers.

Indeed, the artists had to contend with constant colliding and stretching whenever the turkeys moved. Initially, they devised a plan that involved solving for a simulation, but after a long, painful process, they changed direction: instead of trying to fix the problem, they developed a system that would prevent the colliding in the first place. Called Aim Mesh, the solution essentially placed a polygon cage over the Avian feather system to drive the orientation of the feathers. Animation would then rig up the Aim Mesh so that when the body squashed, such as when a turkey lifted its leg, custom deformers on the Aim Mesh would simply pull the mesh up rather than allow penetration.

“It prevented 99 percent of our collision right off the bat; the remaining 1 percent that was colliding was barely visible, so we just let that go,” says Pitts.
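Conceptually, the Aim Mesh drives feather orientation from the cage rather than simulating collisions. A toy version of that idea, with a hypothetical data layout (not the actual Aim Mesh code):

    import numpy as np

    def aim_feathers(feather_roots, cage_points, cage_normals):
        """For each feather root, aim the feather along the normal of the
        nearest cage point; deforming the cage reorients the feathers,
        so feathers cannot inter-penetrate."""
        aims = np.empty_like(feather_roots)
        for i, root in enumerate(feather_roots):
            nearest = np.argmin(np.linalg.norm(cage_points - root, axis=1))
            aims[i] = cage_normals[nearest]
        return aims  # unit aim vectors, one per feather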

 

In Free Birds, the turkeys often use their wings as hands, so the feathers had to act sort of like fingers.

With help from Tom Jordan, modeling supervisor, the crew devised a feather-stacking solution in which bunches of feathers were stacked atop one another, with the thickness controlled by animation.

To keep rendering manageable, the crew employed some tricks to help cut back on the data crunching.

 

Out of the World

A final note on Gravity's influence on filmmaking: the film was primarily a conversion from 2D to 3D. Increasingly, the technique is being used because it can be more manageable to shoot live action in 2D and then convert.

Turkey Recipe

Another efficiency involved a nonlinear approach to constructing a shot. "As soon as layout was launched, we could start animating and lighting, so all three departments could work at the same time, which is unique," Peterson says. "Each department fed caches to the others, but we also had to be careful to stay in sync, because at any time there could be an update to a texture, model topology, or UV, and the data would change. And that would affect all the shot-centric departments." To help the group avoid those issues, Reel FX developed an asset state system, which alerted artists to any change in the assets and automatically delivered the new pieces and packaged them correctly.
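A bare-bones version of such an asset-state check can be built from file fingerprints; this sketch (hypothetical paths and layout, not Reel FX's system) flags anything that changed since the artist last synced:

    import hashlib, json, os

    def fingerprint(path):
        """Hash a file's bytes so any texture/model/UV change is detectable."""
        with open(path, 'rb') as f:
            return hashlib.sha1(f.read()).hexdigest()

    def changed_assets(asset_dir, state_file='asset_state.json'):
        """Return assets whose fingerprints changed since the last sync,
        then record the new state."""
        old = {}
        if os.path.exists(state_file):
            with open(state_file) as f:
                old = json.load(f)
        new = {name: fingerprint(os.path.join(asset_dir, name))
               for name in os.listdir(asset_dir)}
        with open(state_file, 'w') as f:
            json.dump(new, f)
        return [name for name in new if old.get(name) != new[name]]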

The group created a simple but elegant crowd system, whereby the animators would store a library of cycles, assign those cycles on the fly, and blend between them.
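The cycle library plus blending can be sketched in a few lines; poses here are reduced to flat joint-rotation vectors and the clips are synthetic stand-ins:

    import numpy as np

    cycles = {
        'walk':  np.sin(np.linspace(0, 2 * np.pi, 24))[:, None] * np.ones(10),
        'strut': np.cos(np.linspace(0, 2 * np.pi, 30))[:, None] * np.ones(10),
    }

    def agent_pose(clip_a, clip_b, frame, blend):
        """Sample two cycles (looping by frame) and linearly blend them."""
        pa = cycles[clip_a][frame % len(cycles[clip_a])]
        pb = cycles[clip_b][frame % len(cycles[clip_b])]
        return (1.0 - blend) * pa + blend * pb

    pose = agent_pose('walk', 'strut', frame=37, blend=0.25)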

 

The crew also developed an in-house slicing system that let multiple animators work on the same shot - which was particularly useful when several characters were on screen at the same time. "Asking an animator to animate 15 characters would be challenging, so we developed a slicing system that could publish an animator's cache to their counterparts. So if one person works on Jake and one on Reggie, the person working on Reggie would have a Jake guest that would be fed the cache from the person working on Jake," Esneault explains. "They could continually update each other on the fly all day long, and in the end, we would send and bake it down into a single shot for lighting."
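A minimal sketch of the publish/pull half of such a slicing system, with an invented directory layout standing in for the studio's cache storage:

    import shutil, os

    PUBLISH_ROOT = '/show/shots/sq010_sh040/anim_caches'  # hypothetical

    def publish_slice(character, local_cache):
        """Push this animator's latest cache so counterparts can pick it up."""
        dst = os.path.join(PUBLISH_ROOT, character + '.abc')
        shutil.copy(local_cache, dst)

    def pull_guests(my_character):
        """Fetch every other character's cache as a read-only 'guest'."""
        return [os.path.join(PUBLISH_ROOT, f)
                for f in os.listdir(PUBLISH_ROOT)
                if not f.startswith(my_character)]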

The mood was also established through lighting, look dev, and the virtual camerawork.


The Hobbit Habit

Modelers created the overall sculpture within Autodesk’s Maya and then used Pixologic’s ZBrush for the scales. “We had close to a million individual scales on the dragon,” Saindon says. “We tried to build as many as we could. Sometimes in geometry. Sometimes in displacement. When he bends, the scales fold over and slide on top of one another so he doesn’t look like a big rubber thing.”

Typically, Weta Digital’s creatures with human forms have a common UV layout to easily apply textures from one creature to another. But in this case, the shapes and sizes were different enough that the crew needed to do texture migration. “We set up a version of the transforming creature to have a common place where we could migrate the human textures into a bear-texture space,” Aitken says. “Our fur system already supported the ability to animate the length of fur with texture maps. So, we used the maps to shrink his bear fur and grow his human hair.”

As is typical for creatures that talk, the rigging team at Weta created an articulated model based on the Facial Action Coding System (FACS), which breaks expressions down into individual muscle movements.
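Driving a FACS-based rig typically reduces to keying per-Action-Unit weights over time. A hedged Maya sketch, with hypothetical blendshape and target names (not Weta's rig):

    import maya.cmds as cmds

    # Hypothetical mapping of FACS Action Units to blendshape target indices.
    ACTION_UNITS = {'AU1_inner_brow_raise': 0, 'AU12_lip_corner_pull': 1}

    def set_expression(blendshape_node, weights, frame):
        """Key a dictionary of {action_unit: weight} at the given frame."""
        for au, w in weights.items():
            attr = '%s.w[%d]' % (blendshape_node, ACTION_UNITS[au])
            cmds.setKeyframe(attr, value=w, time=frame)

    # e.g. a smile onset over frames 1-10 (illustrative values):
    set_expression('faceShapes', {'AU12_lip_corner_pull': 0.0}, frame=1)
    set_expression('faceShapes', {'AU12_lip_corner_pull': 0.8}, frame=10)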

“In that shot, we simulated 18 million coins at once,” Saindon explains. “To get that scale, we had to write a new rigid-body solver that could move millions of coins quickly. The new solver allowed us to fill the huge spaces with a volume of RBD (rigid-body dynamics) coins, not just a texture of coins, and get that sense of movement.”

The need for a new solver wasn’t due solely to the number of coins – the simulation engine needed to move the coins in ways most rigid-body dynamics solvers don’t address.
“Rigid-body solvers are great for destruction, for smashing things to bits,” Saindon says. “But, they don’t work well for things that don’t move. They like to be always moving. We needed to have the coins sit still and not jitter. Rigid-body solvers don’t like to do that.”
 By increasing the resolution of the simulation result and tweaking the weight and the friction, they persuaded the pile of digital coins to move as gold coins really would. “It’s like sand on a beach,” Saindon says. “You knock off a little and a little moves, but it isn’t a landslide like snow on a mountain. It’s heavy. It has more friction.”
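The jitter problem Saindon describes is usually attacked by putting settled bodies to sleep. A toy one-dimensional integrator showing the idea (illustrative only, not Weta's solver):

    import numpy as np

    GRAVITY, DT, SLEEP_EPS = -9.8, 1.0 / 24, 1e-4

    def step(pos, vel, asleep, friction=0.8):
        """Integrate, damp with friction, and freeze near-motionless coins."""
        active = ~asleep
        vel[active] += GRAVITY * DT
        pos[active] += vel[active] * DT
        grounded = pos <= 0.0
        pos[grounded] = 0.0
        vel[grounded] *= -0.1 * friction             # crude bounce + friction
        asleep |= grounded & (vel ** 2 < SLEEP_EPS)  # stop the jitter
        vel[asleep] = 0.0
        return pos, vel, asleep

    pos = np.random.rand(1000) * 5.0   # 1,000 coins dropped (heights only)
    vel = np.zeros(1000)
    asleep = np.zeros(1000, dtype=bool)
    for _ in range(200):
        pos, vel, asleep = step(pos, vel, asleep)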
 
Beyond Smaug, Visual Effects Supervisor Matt Aitken singles out two areas in which the Weta Digital team working on Hobbit 2 advanced the state of the art: water simulation with the new fluid solver, and digital doubles.
Before the first film, Weta Digital relied on a typical pipeline in which layout artists sent simple geometry to modelers who, when finished, delivered sophisticated models to texture artists. Matte painters worked separately. Now, modelers, texture artists, matte painters, and layout artists work together in one area to create hero, renderable environments.
Moreover, on previous films, crews creating shots with digital water typically had relied on multiple render passes to create layers of foam, bubbles, and mist. In this film, the crew rendered most shots in one pass with one additional mist layer. The water is all raytraced within RenderMan.

“Our shading team wrote a proprietary shader to create our beautiful water,” Capogreco says. “You can see all the bubbles from the bottom of the water to the top, and they’re all rendered with a single pass. The particles store what we call ‘primvars,’ variables that the shader looks up and shades according to age, velocity, and vorticity. Because everything is custom and internal, we had complete control over the look.”
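The primvar lookup can be pictured as a per-particle shading function; the foam ramp below is invented for illustration and is not Weta's shader:

    import numpy as np

    def shade_particles(age, velocity, vorticity):
        """Map primvars to RGB: young, fast, swirly water reads as foam."""
        speed = np.linalg.norm(velocity, axis=1)
        foam = np.clip(0.5 * speed + 0.5 * vorticity - 0.2 * age, 0.0, 1.0)
        deep_blue = np.array([0.02, 0.12, 0.25])
        white = np.array([0.95, 0.97, 1.0])
        return (1.0 - foam)[:, None] * deep_blue + foam[:, None] * white

    n = 4
    rgb = shade_particles(age=np.random.rand(n),
                          velocity=np.random.randn(n, 3),
                          vorticity=np.random.rand(n))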
 
For digital doubles of the actors, the crew captured their performances and those of their stunt doubles on the motion-capture stage. To replicate their look, artists created skin textures using a meticulous method that Acevedo first developed for Avatar and has since enhanced: he does life casts to capture fine details, and then uses a unique technique to scan the result into Photoshop and produce displacement maps.
“We had the actors playing the dwarves on an overscale set with giant chairs, so they looked small around the table,” Aitken explains. “At the same time, on an adjacent greenscreen stage, we filmed Mikael [Persbrandt] with a scaled-down camera, so all his moves are smaller. We slaved the camera filming him to the camera filming the dwarves through motion control. You could look at a monitor on set and see Beorn looming over the dwarves.”

Giant Footsteps

“The two big ones,” he elaborates, “were the skin/scale system and the muscle system our team of character setup artists created for us, which enabled us to show how these amazing creatures were built, moved, and interacted.”

Animal Logic developed a number of tools that helped push the boundary of reality in the film. The artists used a procedural rather than texture-based approach to the scales. Instead of painting them on or modeling them individually, they opted for a technique similar to what they used to create fur and feathers, “where we use a lot of maps to describe the kind of scale in different areas of the body in terms of shape, size, profile.” To this end, the studio developed a scale system called Reptile.
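The map-driven approach can be sketched as sampling painted maps at each scale's UV position to set its attributes; all the data below is a synthetic stand-in, not the Reptile system:

    import numpy as np

    size_map = np.random.rand(256, 256)   # stand-in for a painted size map

    def sample_map(tex, u, v):
        """Nearest-neighbor lookup of a map value at UV in [0, 1)."""
        h, w = tex.shape
        return tex[int(v * (h - 1)), int(u * (w - 1))]

    def place_scales(uvs, base_size=0.1):
        """Per-scale size = base size modulated by the painted map."""
        return [(u, v, base_size * (0.5 + sample_map(size_map, u, v)))
                for u, v in uvs]

    scales = place_scales(np.random.rand(1000, 2))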

The main surfacing challenge for the prehistoric cast came from the scale-covered characters.

Rigging Supervisor Raffaele Fragapane and R&D Lead Aloys Baillet came up with the brand-new system, which properly managed individual muscles and bones, and provided interaction with the outer skin and internal fat. The process was completely transparent to animators and did not require shot-specific adjustments.


Previs, Techvis, and Postvis on The Avengers

Previs: storyboards --> camera angles, shot framing, etc., using motion capture or keyframe animation. Gives the director a sense of the locations and identifies issues before shooting.

Techvis: for example, using camera data from previs, the on-set effects crew calculates the size and speed of the explosives; from that data the techvis team builds matching CG objects, determines which cameras are usable, and then tells the on-set crew the right time and place for the explosion, the camera moves, and the character blocking.


Postvis: during the live-action shoot, verifies that the previs data holds up. Postvis also gives the director and editors richer scenes for cutting decisions, especially for sequences with partial sets and ones in which CG effects will drive the story.

http://www.studiodaily.com/2012/05/previs-techvis-and-postvis-on-the-avengers/



Pete's Dragon (from Cinefex 137)

The production used Weta's Simulcam Slave Mocon setup to shoot interactions between Bilbo and the dwarves and the larger-scale Gandalf and Beorn.

To generate the volume and complexity of water interactions, Weta Digital retooled its physics simulation engine, Odin.

“In this case, everything was volumetric, including all the water surfaces, volumes, iso-surfaces, and bubbles. We produced material models to create foam and aeration in the water, and by applying different properties and densities, we created areas of turbulence in a single system.”

“We added controls to give animators the ability to art-direct coin piles, maintaining natural flows that the solver could handle.” Effects lead Niall Ryan developed the rigid-body solver in Odin, allowing interactions of up to 20 million treasure pieces arrayed in tiled layouts. “As Smaug passed through each tile, we activated those areas. Each tile fed into our rigid body simulator, which selected treasure assets - coins, cups, plates, all sorts of different treasure pieces - and propagated those through the tiles.” Technical directors applied variables to create varieties of treasure textures, color and reflectivity, and output simulations in giant renders in Synapse.
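The tiled activation scheme can be illustrated with a simple spatial lookup: only tiles the dragon overlaps are handed to the solver each frame (a toy 2D reduction, not Odin itself):

    import numpy as np

    TILE = 10.0   # tile edge length (scene units, illustrative)

    def active_tiles(dragon_pos, dragon_radius):
        """Return the set of (i, j) tile indices the dragon overlaps."""
        lo = ((dragon_pos - dragon_radius) // TILE).astype(int)
        hi = ((dragon_pos + dragon_radius) // TILE).astype(int)
        return {(i, j) for i in range(lo[0], hi[0] + 1)
                       for j in range(lo[1], hi[1] + 1)}

    # Only these tiles feed the expensive coin simulation this frame:
    tiles = active_tiles(np.array([42.0, 7.5]), dragon_radius=15.0)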

Effects artists created Smaug's internal body glow using geometry within the creature's belly to generate a subsurface pass that illuminated bones and muscles, and backlit dragon scales.

Flame effects were generated in Odin, embodying a range of physical properties that allowed the gigantic and highly detailed fireballs to splash and adhere to surfaces.

To create the flood of gold, Weta developed a model capable of handling visco-elastic plastic materials. ... That allowed us to art-direct the statue's collapse as it fell under its own weight and created a jet of liquid.


Beautiful Dreamer (from Cinefex 137)

Visual effects included environment extensions, using photographic elements projected onto rough geometry to extend city streets into skyscraper canyons, and interactive debris that churned up street surfaces using MPC's dynamic simulation tool, Kali.


Extra-Vehicular Activity (from Cinefex 136)


 

"The rule was simple textures, complex lighting,"

To help design lighting for various sequences and sets, the artists began testing possibilities in the previs stage.

The lighting team did the previs work in Autodesk's Maya. The sets then moved into PDI/DreamWorks' proprietary animation system and the lighting into the studio's proprietary lighting and rendering software.

Effects artists working with Autodesk's Naiad (formerly from Exotic Matter) would run the simulation and then meet with animators.


Preparing Through Previsualization

For shots that required compositing CG elements into a plate with a camera move, artists would use Vicon's Boujou to matchmove the camera. After importing the tracked camera into the Maya scene, they would animate the effects, set extensions, and actions.

When shots didn't require a 3D camera move, the postvis team would composite elements onto the plates using After Effects.

 
