Interview with Kirby Miller


Sitni Sati brings you an exclusive interview with Kirby Miller, FX Supervisor at Blur Studio.


Q: Blur Studio is well known for great cinematics with lots of stunning VFX shots. What were the most demanding VFX shots you have produced so far?
Some recent and current projects are: Dante's Inferno, Dante's Inferno Superbowl commercial, Fable 3, Classic Transformers, Goldfish Crackers: Blast Off, Star Wars: The Old Republic, Dragon Age: Origins, X-Men Origins: Wolverine, Terminator: Salvation, Halo Wars, Prototype, Fable 2, Section 8, Warhammer Online, Tomb Raider Underworld, MX vs. ATV Untamed, Transformers: The Game, The Simpsons Trailer, NFL on FOX.

Most projects we do are pretty demanding, some because of technical or artistic challenges and others because of time constraints. I would say the most demanding projects for us in general are game cinematics. First, because the expectations are increased with every project we do. Nobody wants to release a game trailer that's not better than the last thing we created, so it's always a challenge to try to do better than what we did before. And since the schedules don't get any longer, we have to rely on other factors to keep the work looking good (better software, faster machines, new technology, and artist experience). And because the schedules tend to be so short (usually about 3-5 weeks for the FX) it is so critical to have the entire pipeline working well and staying on schedule. For example, a one day delay in rigging or modeling at the beginning of the project could end up causing a delay of several days by the time it gets to scene assembly or FX.


One recent challenging project we worked on was the Dante's Inferno Superbowl commercial we did for EA. That was a 30-second all-CG commercial, and I think we had roughly 5 weeks for production (3 weeks for FX, and I think about 6 guys on the FX team for that one). The expectations were obviously extremely high, especially considering how much it cost the client to pay for the airtime during the Superbowl. It was a lot of fun to work on though, and I think I read it was the third most watched commercial in the history of television (based on the number of people watching at once). That's pretty cool to get to work on something like that. And it was pretty much wall-to-wall FX from start to finish.


Q: Can you tell us, in short, what the process of creating a VFX shot for game cinematics looks like?
We generally start off by trying to find some reference. That can be something from the clients if they have an idea of how the effect should look, or if it's something that already exists in the game. It could also be live footage or something we have done on a previous project. The timing and scale of an effect will sometimes be decided at the layout or animatic stage but if not then that is the first step -- for the FX artist to roughly block it out. Depending on the schedule, we will often have to start working on an effect in the layout scene and then later replace the scene assets with final models, environments, animation, cameras, etc. when they are ready. But ideally we will have most of that along with point caches before starting on an effect. It's really hard to generalize what is a typical effects shot, because obviously different types of FX will be approached differently. But from that point, we will start with either the particles or geometry and later add the FumeFX, or sometimes start with the FumeFX and later add particles or geometry. I guess, to take a vehicle explosion for example, we would start by destroying the vehicle geometry. That would be done in either Rayfire or ThinkingParticles.

The next step would be to add a FumeFX explosion (for something like a vehicle that's probably a custom FumeFX sim, but we might start from a previous setup and just resim it for the specific vehicle if the time is too short or if we already have something that will work). The reason for doing the geometry first is that the larger pieces of geometry will need to affect the FumeFX and also will typically be trailing smoke or fire. Once that is all looking good and approved we will start adding smaller debris/particles. There are usually several FumeFX passes also in addition to a main explosion. It's good to keep that stuff separate because if particles change you don't want that to affect your main explosion if you already have that looking good. Also that way things can be different resolutions as required to keep things efficient (tiny debris trails might need a very low spacing, but a more general smoke that will not be very dense can be a higher spacing / lower resolution).
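The spacing trade-off described above can be made concrete with a quick voxel count. This is just an illustrative sketch (the grid sizes and spacings are hypothetical), using "spacing" the way FumeFX does: the edge length of one cubic voxel, so halving it multiplies the voxel count, and the simulation cost, by roughly eight.

```python
def voxel_count(width, length, height, spacing):
    """Number of voxels in a grid of the given world-space size.

    'spacing' is the edge length of one cubic voxel, as in FumeFX:
    halving it multiplies the voxel count by eight.
    """
    return (round(width / spacing)
            * round(length / spacing)
            * round(height / spacing))

# Hypothetical 10 x 10 x 20 unit grid around an explosion:
coarse = voxel_count(10, 10, 20, spacing=0.2)  # broad, thin smoke pass
fine = voxel_count(10, 10, 20, spacing=0.1)    # tiny debris-trail pass
print(coarse, fine, fine // coarse)            # the fine pass costs 8x
```

This is why keeping the broad smoke and the fine debris trails in separate grids pays off: only the pass that actually needs the detail pays the eightfold price.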


The final step would be to do the comp. We use Digital Fusion for compositing, and all the FX artists do their own comps. A lot of people coming from bigger studios think it's odd that we don't have dedicated compositors, but personally I really enjoy the compositing process, and I think it would be difficult for a compositor to know exactly how I intended for all my FX passes to be used. And of course ultimately it gives us more creative control over our shots.

As for software, we are pretty flexible about that. Some artists prefer to use ThinkingParticles, some Particle Flow. Most of us are using Rayfire though that stuff can also be done in TP now. Some use Krakatoa while others have never touched it. We have even started doing a few effects in XSI. I think the only software we all use universally is probably Fusion and FumeFX. Everything else is up to the individual artists and what they feel most comfortable with. In the end, if a shot looks great and is done on time I really don't care how it got to that point, as long as it didn't personally cause me any pain. We have enough experienced people here with all the software that any new artist coming in always has someone they can go to for help.


Q: Fluid dynamics is well known for its time-consuming computations and memory requirements. How does FumeFX fit within your regular workstations' memory/CPU limitations?
I guess I don't really think of FumeFX as a traditional fluid dynamics system, meaning that when I think of fluids, I usually think of doing an hour of work and then waiting a day to see the results. With Fume it's a lot more interactive than that. It is very rare that I will do any effect that takes longer than one night to simulate. By far most of my FumeFX sims, I would estimate, take less than two hours, even for final shots. I guess I just have no patience to wait any longer than that. We currently have one machine on the farm with 24 GB of RAM, and once in a while I will send a sim to that machine, but most of our simulations will easily run on a machine with only 8 GB. It's pretty rare for us to go over 8 GB on a sim anyway. As for CPU requirements, the faster the better, of course. Since FumeFX is so well multithreaded, I see the CPU usage pegged near 100% for the majority of a simulation, even on a 16-core machine.


Q: I assume that with FumeFX you are using the SL version (simulation-only license) to run simulations via Backburner, to reduce the time required to fine-tune an effect?
Yes, we are using the FumeFX SL version to run simulations on our render farm, but we're not using Backburner. We are instead using custom render farm software developed at Blur. Writing custom software for the render farm allowed us to run many different types of processes: Max renders, XSI jobs, Digital Fusion comps, MaxScripts, ThinkingParticles caches, generating point caches and previews, RealFlow simulations, FumeFX simulations, syncing plugins, and installing software. The design department also runs Cinema4D and After Effects on the farm.

Having the SL licenses means we can run several variations of our FumeFX sims at the same time without taking up our workstation licenses, and when the sims are done we can just pick the one that looks best.


Q: At Blur, there are many custom tools and max scripting. Can you share with us your experience with FumeFX integration within Blur's pipeline?
For the FX department, there is definitely no single tool we use more often than FumeFX. I would say that probably every FX shot we do, and certainly at least every hero FX shot we do involves FumeFX in some way. We are always looking at ways to improve our pipeline, and since FumeFX is such an integral tool, it's definitely something we focus on quite a bit. For example, we have recently started building a more organized FX library and we started by building a FumeFX asset library system. This is especially useful for assets that are commonly used on every project like ground fogs, smoke columns, steam vents, etc. Rather than create those things for every project, we can put them in a library and the scene assemblers can browse through the rendered previews, pick the effects they want, and drag them into their shots. This frees up the FX artists to spend more time on the hero shots and other custom setups. It looks a lot like the Getty Images website, but instead you are browsing FumeFX assets and you can just drag them into your shot right from the browser.

A lot of our effects involve very dense high resolution meshes, such as characters or vehicles (as either emitters or deflectors). FumeFX handles these high resolution point cached meshes so well that it's not even worth the time to make optimized versions of these meshes. Also, by keeping the point cached meshes unaltered, when we get animation changes it's usually just a matter of resimming the effect with the updated point cache. With such short schedules, we usually have animation, scene assembly, and FX all going on at the same time, so it's not unusual to have animation changing right up to the last minute. It's critical to have our fx tools allow for that.


Q: What sort of MaxScript work do you do for FumeFX?
Other than tools related to our render farm (submitting FumeFX simulations to run on the farm) we're not doing a lot of scripting with FumeFX. I guess for the most part FumeFX pretty much does what we need it to without having to use the scripting feature.


Q: With FumeFX 2.0 around the corner, can you tell us how FumeFX 2.0 has changed your workflow?
FumeFX 2.0 has some really major new features that have helped us tremendously in production for the last few months while we have been using it in beta. I have used FumeFX 2.0 features quite a bit on this current project (wavelet and retiming especially). It's saving me a lot of time being able to take old sims from previous projects, resim the same setups with the additional channels, and then run wavelet to get more detail for close-up shots. And also I'm retiming a LOT of stuff -- mostly things like smoke columns that the director wants slower. Normally that would be such a pain... "I love the shape and motion, I just want it about 3x slower..." AAARRRGGGGHHHH! ... With FumeFX 2.0 that's so easy now, just kick off the retimer and the shot is done. There was also another shot where fire needed to build up to fill a volume super fast and then go at normal speed once it was filled (a cartoony sort of timing). We just did the FX at normal speed and then retimed the beginning to go faster and ramp back into normal speed. Pretty awesome stuff! This was such a tight deadline (as usual). I don't know how I could have gotten it all done on time without some of the new FumeFX 2.0 features.
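The retimer is a built-in FumeFX feature, but the core idea of playing a cached sim slower can be sketched as resampling the cache at fractional frame times and blending neighboring frames. This is a deliberately minimal stand-in (real fluid retiming also uses the velocity field to advect between frames; plain linear blending is just the simplest illustration).

```python
def retime(frames, factor):
    """Resample a cached sequence 'factor' times slower.

    Samples the cache at fractional source times and linearly
    blends the two nearest frames. 'frames' holds per-frame values
    standing in for whole voxel grids.
    """
    out = []
    n = len(frames)
    total = int((n - 1) * factor) + 1
    for i in range(total):
        t = i / factor               # fractional source frame
        lo = int(t)
        hi = min(lo + 1, n - 1)
        w = t - lo
        out.append(frames[lo] * (1 - w) + frames[hi] * w)
    return out

# A 4-frame cache played 3x slower becomes a 10-frame sequence:
print(retime([0.0, 1.0, 2.0, 3.0], 3))
```

The "fill fast, then go normal speed" shot corresponds to making `factor` vary over time instead of staying constant.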

Wavelet Turbulence has changed the way I am approaching a lot of shots too. It used to be very difficult to start an effect using a low res sim because once you change the spacing for the final version, the shape and feel of the simulation could change quite a bit. Since adding wavelet turbulence is a post process, it keeps the same shape and feel of your effect. So now I can do a quick sim (without having to let it run overnight) and even have time to do several revisions during the day, and then run wavelet turbulence on it to get the final detail. And in the end, wavelet also lets us get a lot more detail than we could with a high res sim since the memory requirements are much lower.

Another great new feature in FumeFX 2.0 is the Void Source. So when we have to do something like a trail for a rocket covering a large distance, we can just make one huge FumeFX grid covering the entire scene and then link a void source to the rocket and set it to inverted. Then FumeFX will only calculate the simulation inside the void source.
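Conceptually, the inverted void source marks only the cells inside a region that tracks the rocket as active, so the solver can skip the rest of the huge grid. The sketch below just counts how small that active region is relative to the full grid (a spherical region and hypothetical sizes are used for illustration; it is not FumeFX's actual implementation).

```python
def active_fraction(grid_dim, center, radius):
    """Fraction of cells inside a sphere (the inverted void region)
    on a cubic grid of grid_dim**3 cells.

    Illustrative only: stands in for the solver skipping every cell
    outside the void source.
    """
    cx, cy, cz = center
    active = 0
    for x in range(grid_dim):
        for y in range(grid_dim):
            for z in range(grid_dim):
                if (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 <= radius ** 2:
                    active += 1
    return active / grid_dim ** 3

# A small void sphere tracking the rocket inside a scene-sized grid:
# only a tiny fraction of the cells needs to be simulated each frame.
print(round(active_fraction(40, (20, 20, 20), 6), 3))
```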

There are a ton of other new features, but these are a few of the major ones that have been helping to save us a lot of time during this current E3 crunch.

Thank you for taking the time to answer these questions, and thank you for creating such great art.
No problem. Thank you for creating great software that helps to make the art great!