Tuesday
Mar 15, 2011

A Visit to Pixar


I visited San Francisco this weekend and got a chance to meet up with an old friend at Pixar.

Pixar is in Emeryville, on the east side of the San Francisco Bay Area.  I used to live in the Bay Area before I moved to Portland back in 2005.  Although I'm originally from New York, this was my old stomping ground for many years while I worked for Autodesk.  It was good to take a long weekend and go back to visit the area.  I got to eat at some of my favorite restaurants and visit some old friends.

It's been about 6 years since I visited the Pixar campus.  It's really funny pulling up to the campus, since it's tucked away in Emeryville, right on the edge of Oakland, and not exactly the best part of town.  I saw lots of new construction going on and confirmed that they have a second building going up that might be occupied very soon.  Apparently, the first building filled up pretty quickly, and they had many people working off the main campus.

As you walk in, you'll notice a large amphitheater in front of the building.  They often hold larger company meetings and parties out there in nice weather. They also have a new Luxo Jr. statue out front.  That was a nice touch too.  (New to me, anyway; they could have put it up any time since I was there last.)



The lobby has life-size replicas of Sulley and Mike from Monsters, Inc., life-size models of Luigi and Guido from Cars, and a life-size LEGO statue of Buzz and Woody. (I bet you could use a voxel 3D model generator to make a LEGO blueprint.  Knowing that, I'm not as impressed by large-scale LEGO sculptures.  I just feel sorry for whoever has to put them together.)

 

 

The main area consists of the cafeteria, the gift shop, and a game room.  The doors at the other end of the main room lead to the main screening room.  I didn't go in there this time, but I did get to see Spirited Away in there once, introduced by John Lasseter after he got back from visiting Hayao Miyazaki.  He said he hand-carried the print on the plane himself, but I dunno, maybe he's just a good storyteller.  ;)

Lunch at Pixar is awesome. Back when I lived in SF, I jumped at any chance to go have lunch over there. The food is great and it's really inexpensive. They always have daily specials, and they make pasta and pizzas to order.

They also had a great display of the art of Toy Story 3 on the upper east side of the building: everything from concept art to model sheets, sculptures, and lighting designs.  I couldn't take any photos, rightly so, but it was very inspiring to see.

Pixar has a really great culture.  You can feel it when you're there.  They pay a reasonable wage and provide healthcare and retirement assistance for their employees.  They provide training through their Pixar University program, and encourage people to learn much more than just their position in the company.  I hope that if I ever run my own studio, I remember the things that make a great culture, and do my best to create my own.  John Lasseter and Ed Catmull are still the leading force at Pixar, and it shows.  They inspire their employees to do great work.  I hope I can do the same with the people who work with me, now and in the future.

Monday
Mar 14, 2011

Making Mirrored Ball and Panoramic HDR Images - Part 1

If you're looking for a cheap mirrored ball, you can buy an 8" Stainless Steel Gazing Ball at Amazon.com for about $22. They also have a smaller 6" Silver Gazing Globe for about $14. Either can easily be drilled and mounted to a pipe or dowel so that it can be clamped to a C-stand or a tripod.

First off, I'm no Paul Debevec.  I'm not even smart enough to be his intern. But I thought I'd share my technique for making HDRIs. My point being, I might not be getting this all 100% correct, but I like my results.  Please don't get all over my shit if I say something a little off.  Matter of fact, if you find a better way to do something, please let me know.  I write this to share with the CG community, and I would hope that people will share back with constructive criticism.

Now, let's clear something up.  HDRIs are like ATMs.   You don't have an ATM machine.  That would be an Automatic Teller Machine Machine (ATMM?)  See, there are two machines in that sentence now.  The same is true of an HDRI.  You can't have an HDRI image.  That would be a High Dynamic Range Image Image.   But you can have an HDR image.  Or many HDRIs. If you're gonna talk like a geek, at least respect the acronym.

I use the mirrored ball technique and a panoramic technique for capturing HDRIs. Which one I use really depends on the situation and the equipment you have. Mirrored balls are a great HDR tool, but a panoramic HDR is that much better, since it captures all 360 degrees of the lighting.  However, panoramic lenses and mounts aren't cheap, and mirrored balls are very cheap.

Shooting Mirrored Ball Images

I've been shooting mirrored balls for many years now.  Mirrored balls work pretty damn well for capturing much of a scene's lighting.  Many times on a set, all the lights are coming from behind the camera, and the mirrored ball will capture these lights nicely. Where a mirrored ball starts to break down is when lighting is coming from behind the subject.  It will capture some lights behind the ball, but since that light only shows up in the very deformed edges of the mirrored ball, it's not gonna be that accurate for doing good rim lighting.

However, mirrored balls are cheap and easily available.  You can get a garden gazing ball, or just a chrome Christmas ornament, and start taking HDR images today.  Our smallest mirrored ball is one of those Chinese meditation balls with the bells in it. (You can get all zen playing with your HDRI balls.)

  • The Size of Your Balls

My balls are different sizes (isn't everyone's?), and I have three.  I use several different sizes depending on the situation.  I have a 12" ball for large live action set shoots, and a 1.5" ball for very small miniatures. With small stop motion sets, there isn't a lot of room to work, so the small 1.5" ball works great.  I'll usually clamp it to a C-stand and hang it out over the set where the character will be.  I also have a 4" ball that I can use for larger miniatures or smaller live sets.

  • Taking the Photos

With everything set up like we've mentioned, I like to find the darkest exposure I can and expose it all the way up to being blown out.  I often skip 2-3 exposure brackets in between shots to keep the number of files down.  You can make an HDRI from only 3 exposures, but I like to get anywhere from 5-8 different exposures.
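
If numbers help, here's a little Python sketch of what a bracket like that looks like. The base shutter speed and spacing are made-up values, so adjust them for your own camera and scene:

    # A toy helper: build a shutter speed bracket, darkest to brightest,
    # with each shot a couple of stops apart.
    def bracket(base_shutter=1/60, stops_apart=2, count=6):
        darkest = base_shutter / 2 ** (stops_apart * (count // 2))
        return [darkest * 2 ** (stops_apart * i) for i in range(count)]

    for s in bracket():
        print("1/%d sec" % round(1 / s) if s < 1 else "%.2f sec" % s)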

  • Chrome Ball Location

When shooting a mirrored ball, shoot the ball as close as possible to where the CG subject will be.  Don't freak out if you can't get exactly where the CG will be, but try to get as close as you can. If the CG subject will move around a lot in the shot, then place the ball in an average position.

  • Camera Placement

Shoot the ball from the movie plate camera angle, meaning set up your mirrored ball camera where the original plate camera was.  This way, you'll always know that the HDRI will align to the back plate.  Usually, on set, they will start moving the main camera out of the way as soon as the shot is done.  I've learned to ask for the lights to be left on for 3 minutes. (5 minutes sounds too long on a live set; I love asking for 3 minutes since it sounds like less.) Take your mirrored ball photos right after the shot is done.  Make nice-nice with the director of photography on set.  Tell him how great it looks and that you really hope to capture all the lighting that's been done.

  • Don't Worry About...

Don't get caught up with little scratches on your ball. They won't show up in your final image.  Also, don't worry about your own reflection being in the ball. You give off so little bounce light that you won't even register in the final scene.  (Unless you're blocking a major light on the set from the ball.)

  • File Format

We use a Nikon D90 as our HDR camera. It saves raw NEF files and JPG files simultaneously, and I use the JPGs sort of as thumbnails for the raw files.  I'm on the fence about using raw NEF files over the JPGs, since you end up blending 6-8 of them together.  I wonder if it really matters to use the raw files, but I always use them just in case it does.

  • Processing

To process your mirrored ball HDR image, you can use a bunch of different programs, but I just stick with any recent version of Photoshop.  I'm not holding your hand on this step.  Photoshop and Bridge have an automated tool for processing files to make an HDR.  Follow those procedures and you'll be fine. You could also use HDR Shop 1.0 to make your HDR images.  It's still out there for free and is a very useful tool.  I talk about it later when making the panoramic HDRIs.
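
For the scripters out there: OpenCV ships a Debevec-style merge that does the same job, if you'd rather batch it than click through Photoshop. A minimal sketch; the file names and shutter times here are invented, so swap in your own bracket:

    import cv2
    import numpy as np

    # Hypothetical bracket of the mirrored ball; use your own files and times.
    files = ["ball_dark.jpg", "ball_mid1.jpg", "ball_mid2.jpg", "ball_bright.jpg"]
    times = np.array([1/500, 1/125, 1/30, 1/8], dtype=np.float32)  # shutter speeds

    images = [cv2.imread(f) for f in files]
    hdr = cv2.createMergeDebevec().process(images, times=times)  # linear float32
    cv2.imwrite("ball_merged.hdr", hdr)  # Radiance .hdr, readable by HDR Shop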

Shooting Panoramic Images

The other technique is the panoramic HDRI. This is a little more involved and requires some equipment.  With this method, I shoot 360 degrees from the CG subject's location with a fish-eye lens, and use that to get a cylindrical panoramic view of the scene. With this setup you get a more complete picture of the lighting, since you can now see 360 degrees without major distortions.   However, it's not practical to put a big panoramic swivel head on a miniature set; I usually use small meditation balls for that.  Panoramic HDRIs are better for live action locations where you have the room for the tripod and spherical mount.  To make a full panoramic image you'll need two things: a fisheye lens and a swivel mount.

  • The Lens

First, you'll need something that can take a very wide angle image. For this I use the Sigma 4.5mm f/2.8 EX DC HSM Circular Fisheye Lens for Nikon Digital SLR Cameras ($899).  Images taken with this lens will be small and round, and capture about 180 degrees. (A cheaper option might be something like a converter fish-eye lens, but you'll have to do your own research on those before buying one.)

 

  • The Tripod Mount

You'll need a way to take several of these images in a circle, pivoted about the lens.  We want to pivot around the lens so that there will be minimal parallax distortion.  With a very wide lens, moving the slightest bit can make the images very different, and they won't align for our HDRI later.  To do this, I bought a Manfrotto 303SPH QTVR Spherical Panoramic Pro Head (Black) that we can mount to any tripod. This head can swivel almost 360 degrees.  A step down from this is the Manfrotto 303PLUS QTVR Precision Panoramic Pro Head (Black), which doesn't allow 360 degrees of swivel.  But with the 4.5mm fisheye lens, I found you don't really need to tilt up or down to get the sky and ground; you'll get it by just panning the head around.

Once you've got all that, it's time to shoot your panoramic location.  You'll want to set up the head so that the center of the lens is floating right over the center of the mount.  Now, in theory, this lens can take a 180 degree image, so you only need front and back, right?  Wrong. You'll want some overlap, so take 3 sets of images for our panorama, each 120 degrees apart: 0, 120, and 240. That will give us the coverage we need to stitch up the image later.

  • Alignment

Just like the mirrored ball, I like to shoot the image back in the direction of the plate camera. Set up the tripod so that 120 degrees is pointing towards the original camera position.  Then rotate back to 0 and start taking your multiple exposures.  Once 0 is taken, rotate to 120, and again to 240 degrees.  When we stitch this all together, the 120 position will be in the center of the image, and the seam will be at the back, where 0 and 240 blend.

  •  Don't Worry About...

People walking through your images, especially on a live action set.  There is no time on set to wait for perfect conditions.  By the time you blend all your exposures together, that person will disappear. Check out the Forest_Ball.hdr image. You can see me taking the photos, and a ghost in a yellow shirt on the right side.

Processing The Panoramic Images

To build the panoramic from your images, you'll need to go through three steps: 1. Make the HDR images (just like for the mirrored ball).  2. Transform the round fish-eye images to square latitude/longitude images.  3. Stitch it all back together into a cylindrical panoramic image.

  • Merge to HDR

Like we talked about before, Adobe Bridge can easily take a set of different exposures and make an HDR out of them. Grab a set, and go to the menu under Tools/Photoshop/Merge to HDR.  Do this for each of your 0, 120, and 240 degree images, and save them out.

  • Transform to Lat/Long

Photoshop doesn't have any tool for distorting a fish-eye image to a Lat/Long image.  There are some programs that I investigated, but they all cost money.  I like free.  So to do this, grab a copy of HDR Shop 1.0. Open up each image inside HDR Shop and go to the menu Image/Panorama/Panoramic Transformations.  Set the source image to Mirrored Ball Closeup and the destination image to Latitude/Longitude. Then set the resolution height to something close to the original height.
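
If you're curious what that Mirrored Ball Closeup to Lat/Long transform is actually doing, it's just a remap: for every panorama pixel, work out the world direction, then find the spot on the ball whose mirror reflection points that way. Here's a rough numpy/OpenCV sketch of the idea, assuming an orthographic closeup with the ball filling a square frame and my own axis conventions (not necessarily HDR Shop's exactly):

    import cv2
    import numpy as np

    def ball_to_latlong(ball, out_w=2048, out_h=1024):
        # Longitude/latitude for every output pixel (pixel centers).
        lon = (np.arange(out_w) + 0.5) / out_w * 2 * np.pi - np.pi
        lat = np.pi / 2 - (np.arange(out_h) + 0.5) / out_h * np.pi
        lon, lat = np.meshgrid(lon, lat)

        # World direction per pixel; +Z points from the ball back at the camera,
        # so the panorama is centered on the camera's own reflection.
        dx = np.cos(lat) * np.sin(lon)
        dy = np.sin(lat)
        dz = np.cos(lat) * np.cos(lon)

        # A mirror's normal is the half vector between the view ray and the
        # reflected ray. With an orthographic closeup, the normal's XY is
        # simply the position on the ball's disk.
        norm = np.sqrt(dx**2 + dy**2 + (dz + 1) ** 2) + 1e-8
        nx, ny = dx / norm, dy / norm

        h, w = ball.shape[:2]
        map_x = ((nx + 1) * 0.5 * (w - 1)).astype(np.float32)
        map_y = ((1 - (ny + 1) * 0.5) * (h - 1)).astype(np.float32)
        return cv2.remap(ball, map_x, map_y, cv2.INTER_LINEAR)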

  •  Stitch It Together Using Photomerge

OK. You now have three square images that you have to stitch back together.  Go back and open the three Lat/Long images in Photoshop.  From here, you can stitch them together with Menu-Automate/Photomerge using the "Interactive Layout" option. The next window will place all three images into an area where you can re-arrange them how you want. Once you have something that looks OK, press OK and it will make a layered Photoshop file, with each layer having an automatically created mask.  Next, I adjusted the exposure of one of the images, and you can make changes to the masks also.  As you can see with my image on the right, when they stitched up, each one was a little lower than the next.  This tells me that my tripod was not totally level when I took my pictures.  I finalized my image by collapsing it all to one layer and rotating it a few degrees so the horizon was back to level.  For the seam in the back, you can do a quick offset and a clone stamp, or just leave it alone.

This topic is huge and I can only cover so much in this first post.  Next week, I'll finish this off by talking about how I use HDR images within my Vray workflow, and how to customize your HDR images so that you can tweak the final render to exactly what you want.  Keep in mind that HDR images are just a tool for you to make great looking CG. For now, here are two HDRIs and a background plate that I've posted in my download section.

Park_Panorama.hdr          Forest_Ball.hdr          Forest_Plate.jpg

If you're looking for a cheap mirrored ball, you can get one here at Amazon.com. It's only like $7!

Tuesday
Mar 08, 2011

Thanks to Max Plugins.de for Keeping Old Plugins Alive

I have to throw a shout out to David over at MaxPlugins.de.  If you didn't know about the site already, he has an extensive list of older 3dsmax plugins that he keeps re-compiling for the latest versions.

If you've ever tried to make a talking animal, you would want to use an old plugin from Peter Watje called "Camera Map Animated".  You can use it to re-project footage onto a model, then use skin and deformers to pose the face, and render the distorted camera-mapped object out. If you try to go to Peter's old site for plugins, you'll be out of luck.  Go visit MaxPlugins.de and make a small PayPal donation for this selfless contribution to the community.

Fred

Monday
Feb 21, 2011

Video - Spring Simulations with the Flex Modifier

Hey everybody, I decided that I would try doing some video blog entries on some of the topics I've been going over.  In this first one, I'm going over the stuff I talked about with spring simulations.  I was tired when I recorded this, so cut me some slack.  I'm keeping these quick and dirty, designed to just get you going with a concept or feature, not hold your hand the whole way through.

The original article on spring simulations with Flex is posted here.

Friday
Feb 18, 2011

Pitfalls of Linear Workflow

I thought I'd follow up the Linear Workflow article with some of the pitfalls that I run into often with this workflow.  All of them can easily be avoided.

Double Gamma

If you've ever saved an EXR that was washed out, this is what I would call a double-gamma'd image.  (I love the use of the word "gamma" as a verb.) You've applied gamma to the image twice. To fix this, set the output gamma to 1.0 in max's preferences.
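
If you want to see the wash-out in plain numbers, push a few linear values through the 2.2 encode twice (a toy Python check):

    import numpy as np

    linear = np.array([0.02, 0.18, 0.5])   # linear render values
    once = linear ** (1 / 2.2)             # correct single gamma: ~[0.17 0.46 0.73]
    twice = once ** (1 / 2.2)              # the double gamma mistake: ~[0.45 0.70 0.87]
    print(once, twice)                     # everything drifts up toward white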

 

Ever Heard of a Gamma .45 Workflow?

No. No such thing. That's why you shouldn't be reversing a double-gamma'd image. But you've probably tried to reverse it by setting the gamma to .45, or something close to that, in Photoshop or After Effects. Actually, if you just rendered a big sequence, I'd be fine with turning the double gamma back down.  The EXR is 16 bit, so you have plenty of data to work with to fix that mistake.  But try to get it right the next time you render.  Check your output gamma in max's preferences. It should be set to 1.0 (linear).

Linear Textures

All your textures applied to surfaces will get gamma applied on the other side.  Even with no lighting or rendering really involved, a texture that just moves through the system picks up gamma on the other end and looks washed out.  The input textures need to be linear, without gamma.  In theory, an 8 bit image that is crushed down to a linear image would not "un-crunch" very well, since it's in 8 bit format. However, I believe max can handle this, since it works in 32 bits internally.  I'm pretty sure all images that come into 3dsmax are converted to an internal 32 bit format.  (Hence the Bitmap texture loads a texture while Noise makes a noise function texture; all of these are floating point "texture types".)  So when max loads images, it can easily handle the incoming images as sRGB inputs, turning the gamma back down before rendering them. So set your input gamma to 2.2, meaning "deal with all my 2.2 textures."
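
In plain math, the input gamma setting is doing roughly this to every texture before the lighting math sees it (simplified; real sRGB has a linear toe that I'm ignoring here):

    import numpy as np

    def linearize_texture(texels):
        # What an input gamma of 2.2 amounts to: decode the painted sRGB-ish
        # values back to linear before the renderer does math with them.
        return np.asarray(texels, dtype=np.float32) ** 2.2

    # A 50% gray texture is really only ~22% reflectance once linearized.
    print(linearize_texture([0.5]))   # ~[0.218]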

Normal Maps

So all your textures are coming in now and getting crushed darker, and then the final gamma pushes it all back up to what we see.  Now let's talk about that normal map you're using.  It's a map of vector info.  So with this map, it's getting pushed darker by our 2.2 input gamma, and it looks jacked the fuck up.  So, when loading that particular texture, use the bitmap load dialog to override the gamma to 1.0.  It will remember it for just this map.
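
Here's the quick numeric version of why normals break so visibly: run a texel through that same 2.2 decode and the encoded vector actually changes direction (toy example with a made-up texel):

    import numpy as np

    texel = np.array([0.5, 0.5, 1.0])       # encodes the vector (0, 0, 1)
    correct = texel * 2 - 1                 # [0. 0. 1.] -> points straight up
    wrong = (texel ** 2.2) * 2 - 1          # after an unwanted 2.2 decode
    print(correct, wrong)                   # wrong is ~[-0.56 -0.56 1.0], a tilted normal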

 Displacement and Specular Maps

OK, so if my theory about data maps is correct, then all data maps will technically be wrong.  However, I'm usually cranking maps to make my spec maps, so does it really matter?  To be honest, I'd love to hear thoughts on that one. (Post to the forum.) I just know normal maps seem to show the problem the most.  Maybe it's because the RGB color is XYZ vectors, and the un-gamma curve just kills the effect?

Saving a JPG While You're Working

I'm often saving off versions of my render work for producers and clients, and I don't always want a linear EXR.  I work with the Vray frame buffer, and I usually remind myself by turning off the sRGB button.  If it's dark, I know that I will need to brighten it when saving.  So when saving, just use the bitmap save dialog, set the gamma to 2.2, and then you can save a JPG or whatever you want.

Tuesday
Feb 15, 2011

Save your Valentines Day with a Romantic French Film

OK, Valentine's Day was yesterday and maybe you blew it.  Maybe you bought her chocolates and ate all of them, or maybe you just picked the neighbor's flowers before walking in the door.  Either way, this will save you.  Get the movie Priceless and watch it with your girlfriend. She'll love you for it.

  1. She'll be impressed because it's a French film.
  2. Chicks get hot when reading subtitles.  I swear, it's true.  I read it somewhere.
  3. It's really a good movie.  Funny and charming.
  4. The actress in it (Audrey Tautou) is hot as hell!  She played Amelie in Jean-Pierre Jeunet's Amelie. (Which is one of the best films of all time.) So you'll enjoy it too.

Good luck, hope you score one for me.

Friday
Feb 04, 2011

Deep Green - Solutions to Stop Global Warming Now


 

Deep Green is a film we worked on here at Bent Image Lab.  We did two animated shorts for it, along with the animated faces in the clouds that serve as chapter markers. One short was done entirely in After Effects, and the other was a mix of After Effects and 3D; I supervised that particular production.  Besides all that, the film is very informative, and I encourage everyone to watch it.

I was able to meet the filmmaker, Mathew Briggs, a simple mushroom farmer from Portland. The idea that he got financing for this film to spread the word about global warming and possible solutions is amazing. I'm posting this since I want everyone to buy this film and watch it.  It might just make you turn off the faucet more often, or turn down your thermostat one degree. But from what I learned, that's a start.

Wednesday
Feb 02, 2011

Spring Simulations with the Flex Modifier

Ever wish you could create a soft body simulation on your mesh to bounce it around?  3ds max has had this ability for many years now, but it's tricky to set up, and it's not very obvious that it can be done at all. (This feature could use a revamp in an upcoming release.)  I pull this trick out whenever I can, so I thought I'd go over it for all my readers. It's called the Flex modifier, and it can do more than you realize.

A Brief History of Flex

The Flex modifier originally debuted in 1999 within 3D Studio Max R3.  I'm pretty sure Peter Watje based it on a tool within Softimage 3D called Quick Stretch. (The original Softimage, not XSI.) The basic Flex modifier is cool, but the results are less than realistic.  In a later release, Peter added the ability for Flex to use springs connected between each vertex.  With enough springs in place, meshes will try to hold their shape and jiggle with the motion applied to them.

Making Flex into a Simulation Environment

So we're about to deal with a real-time spring system.  Springs can do what we call "explode": the math goes wrong and the vertices fly off into infinite space. Also, if you accidentally set up a couple thousand springs, your system will just come to a halt. So here are the setup rules to make a real-time modifier behave more like a "system" for calculating the results...

  1. Save Often - Save versions after each step you take.
  2. Use a Low Resolution Mesh - Work with springs on simple geometry, not your render mesh.  Later, use the Skin Wrap modifier to have your render mesh follow the spring mesh.
  3. Cache Your Springs - Use a cache modifier on top of the Flex modifier to make playback real-time.  This is really helpful.

Setting Up the Spring Simulation

OK, I did this the other day on a real project, but I can't show that right now, so yes, I'm gonna do it on a teapot. Don't gimme shit for it.  I would do it on an elephant's trunk or something like that... wait a sec, I will do it on an elephant trunk!  (Mom always said a real world example is better than a teapot example.)

OK, let's start with this elephant's head. (This elephant model is probably from 1999 too!) I'll create a simple mesh model to represent the elephant's trunk.  The detail here is important.  I start with a simple model, and if the simulation collapses or looks funky, I'll go all the way back to the beginning, add more mesh, and remake all my springs. (I did that at the end of this tutorial.)  First, let's disable the old method of Flex by turning off Chase Springs and Use Weights. Next, let's choose a simulation method.

There are 3 different sim types. I couldn't tell you the exact difference, but I do know that they get better and slower from top to bottom.  With that said, set it to Runge-Kutta4, the slowest and the most stable. (Slow is relative; in this example, it still gives me real-time feedback.)
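
If you're curious why the slow one is the stable one, here's a toy spring in plain Python, stepped with simple Euler versus Runge-Kutta 4 at the same step size. Euler pumps energy into the spring until it "explodes"; RK4 stays bounded. (A sketch of the math, not what Flex literally does inside.)

    def accel(x, v, k=50.0, damping=0.5):
        return -k * x - damping * v         # stiff damped spring

    def euler_step(x, v, dt):
        return x + dt * v, v + dt * accel(x, v)

    def rk4_step(x, v, dt):
        k1x, k1v = v, accel(x, v)
        k2x, k2v = v + dt/2*k1v, accel(x + dt/2*k1x, v + dt/2*k1v)
        k3x, k3v = v + dt/2*k2v, accel(x + dt/2*k2x, v + dt/2*k2v)
        k4x, k4v = v + dt*k3v, accel(x + dt*k3x, v + dt*k3v)
        return (x + dt/6*(k1x + 2*k2x + 2*k3x + k4x),
                v + dt/6*(k1v + 2*k2v + 2*k3v + k4v))

    ex, ev, rx, rv = 1.0, 0.0, 1.0, 0.0
    for _ in range(200):                    # 200 steps at a big dt
        ex, ev = euler_step(ex, ev, 0.1)
        rx, rv = rk4_step(rx, rv, 0.1)
    print("Euler: %.3e   RK4: %.3e" % (ex, rx))  # Euler has exploded, RK4 hasn't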

OK. Before we go making the springs, let's decide which vertices will be held in place and which will be free to be controlled by Flex.  Go right below the Flex modifier and add a Poly Select modifier.  Select the verts that you want to be free, and leave the held verts unselected.  By using the select modifier, we can utilize the soft selection feature so that the effect has a nice falloff.  Turn on soft selection and set your falloff.

About the Spring Types

Now that we know which verts will be free and which will be held, let's set up the springs.  Go to the Weights and Springs sub-object. Open the Advanced rollout and turn on Show Springs. Now, there are 2 types of springs.  One holds the verts together by edge lengths; these keep the edge length correct over the life of the sim. The other holds verts together that are not connected by edges; these are called Hold Shape Springs. I try to set up only as many springs as I need for the effect to work.

 Making the Springs

To make a spring, you have to select 2 or more vertices, decide which spring type you are adding in the options dialog, and press the Add Spring button. The options dialog has a radius "filter".  By setting the radius, it will NOT make springs between verts that are more than a certain distance from each other.  This is useful when adding a lot of springs at once, but I try to be specific when adding springs.  I first select ALL the vertices and set the dialog to Edge Length springs with a high radius.  Then close the dialog and press Add Springs.  This will make blue springs on top of all the polygon edges. (In new releases, you cannot see these springs due to some weird bug.)  After that, open the dialog again, choose shape springs, and start adding shape springs.  These are the more important springs anyway. You can select all your verts and try to use the radius to apply springs, but it might look something like the "bad springs" image on the left. If you select 2 "rings" of your mesh at a time and add springs carefully, it will look more like the "good springs" image on the right. (NOTE: It's easy to overdo the amount of springs. Deleting springs is hard, since you have to select the 2 verts that represent the spring, so don't be upset about deleting all the springs and starting over.)

When making your shape springs, you don't want to overdo it.  Too many springs can make the sim unstable.  Also, each spring sets up a constraint; keep that in mind.  If you do try to use the radius to filter the amount of springs, use the tape measure helper to measure the distance between verts, so you know what you will get after adding springs.

Working with the Settings

In the rollout called "Simple Soft Bodies" there are 3 controls: a button that adds springs without controlling where, and Stretch and Stiffness parameters.  I don't recommend using the Create Simple Soft Body action. (Go ahead and try it to see what it does.) However, the other parameters still control the custom springs you make.  Let's take a look at my first animation and see how we can make it better.

You know how we can make it better?  Take more than 20 seconds to animate that elephant head.  What the hell, Ruff?  You can't even rig up a damn elephant head for this tutorial? Nope. 6 rotation keys is all you get.  Anyway, the flex is a bit floaty, huh? Looks more like liquid in outer space.  We need the springs to be a lot stiffer.  Turn the Stiffness parameter up to 10.  Now let's take another look.

Better, but the top has too few springs to hold all this motion. It's twisting too much and doesn't look right. 

Let's add some extra-long springs to hold the upper part in place.  To do this, instead of adding springs just between connected verts, we can select a larger span of verts and add a few more springs.  This will result in a stiffer area at the top of the trunk. Now let's see the results. (NOTE: The image to the left has an overlay mode to show the extra springs added in light pink. See how they span more than one edge now.)

 

Looking good.  In my case, I see the trunk folding in on itself. You can add springs to any set of vertices to hold things in place.  The tip of the trunk flies around too much, so I'll create a couple of new springs from the top all the way down to the tip.  These springs will hold the overall shape in place without being too rigid.

Now let's see the result on the real mesh. Skin Wrap the render mesh to the spring mesh.  I went back and added 2x more verts to the base spring mesh, then I redid the spring setup, since the vertex count changed.

I then made the animation more severe, so I could show you adding a deflector to the springs.  I used a scaled-out sphere deflector to simulate collision with the right tusk.  Now don't go trying to use the fucking UDeflector and picking all kinds of hi-res meshes for it to collide with.  That will lock up your machine for sure.  Just because you can do something in 3dsmax doesn't mean you should do it.

 

So yeah, that's it.  Now I'm not saying my elephant looks perfect, but you get the idea.  Animate your point cache amount to even cross-dissolve different simulations.  Oh, and finally, stay away from the Enable Advanced Springs option. (If you want to see a vertex explosion, fiddle with those numbers a bunch.)

Monday
Jan 31, 2011

The Secrets of Hiding Vertices

The feature of hiding and un-hiding polygon elements is under-utilized in 3dsmax, probably because the buttons are buried so deep within editable poly. By putting a couple of shortcuts in your quad menu, you can gain quick access to a very useful set of commands.

Hiding Vertices is like Freezing Polygons

I don't know if many people know this or not, but the hide/unhide tools of editable poly can be used to freeze parts of your model while working on it.  Hiding the verts lets you still see the polygons, but not see or touch the vertices.  This is helpful when using broad sculpting tools like Poly Shift or Paint Deformation.  I don't trust "ignore backfacing" to make my decisions for me on what to move around, so I tend to hide verts I don't want to move.  I often select some verts I want to adjust and then use Hide Unselected to isolate only the verts I want to use the Poly Shift tool on.

Hiding verts is especially important when making morph shapes.  Hide the vertices on the back of the head before sculpting a morph shape. The last thing you need is to accidentally move some verts on the back of your character's head for one of the morphs.

Hiding Polygons to Help with Modeling

It's very useful to hide polygons to get into tight spaces: working with the inside of the mouth, or working in an armpit, for example.  Remember to turn these polygons back on later, since they will render this way!

 

 Adding Them to Your Quad Menu

If you have max open right now, just do it.  You'll start using hide and unhide on polygons much more often if they are in your quad menu. Open the Customize/Customize User Interface dialog and go to the Quads tab.  I add these commands right next to the regular hide and unhide in the upper right quad.  Click in the action window and press "h" on the keyboard to quickly jump to the hide commands.  You'll see one in there for "Hide (Poly)"; drag that into the upper right quad.   Do the same for Unhide (Poly) and Hide Unselected (Poly).  Now, to make them easier to read in the quad, customize the name of each menu item to add the parentheses and POLY so they are easier to recognize.  (Don't forget to save your UI changes to your default UI file.)

The great part is that when you're not in an editable poly object, they don't show up in your quad menu at all.

Wednesday
Jan 19, 2011

Skin Basics

The other night I was skinning a character and realized that some beginners might get a little lost when it comes to skinning. It started when I brought up the weight table and had to set 3 or 4 options before I could even use it.  So... here are my skin basics.

Check your bones first.  Did you name all your bones properly? Before you go assigning bones to a skin, make sure they are named. When I rig, I have bones that are meant for skinning and bones that are for the rig.   I add the suffix '_SKIN' to all my skin bones.  When picking the bones to add to the skin modifier, I just filter '*SKIN' in the select-by-name dialog and grab all the right bones in a split second.
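
Side note for the script-minded: the same grab-by-suffix idea is a one-liner against the scene. A sketch using 3dsmax's Python bridge (pymxs, which only exists in newer max versions; adjust to your own naming scheme):

    from pymxs import runtime as rt

    # Grab every scene node whose name carries the skinning suffix.
    skin_bones = [node for node in rt.objects if node.name.endswith('_SKIN')]
    print([b.name for b in skin_bones])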

The first thing I do after applying the skin modifier to a model is set up the default envelopes.  Although they seem strange and confusing, I still find them very helpful for smoothing out joints. Don't jump to hand-weighting vertices until your envelopes are working pretty well. If a joint creases too much, make the ends of each envelope larger and larger, balancing one with the other.  You can also move envelope locations by sliding the envelopes around.  This might confuse you, but keep in mind this is just the volume that will be affected by the bone.  It doesn't change where the bone pivots from.

Use excluded verts for fixing the most extreme verts.  If you go setting weights on the arm verts to get the spine bones not to affect them, you won't be able to adjust the envelopes for the elbow.  Using exclusions allows you to still weight with broad envelopes.

Whenever I skin up a character using 3dsmax's skin modifier, I always end up in the weight table.  It's a very useful tool for finalizing the skin after you've set up all your envelopes to be as good as they can be.  However, you need to set it up correctly.  When you open the weight table, it's overwhelming, with bone names across the top and vertex numbers down the side.  Vert #1823? Which one is that? It doesn't really work by default.

 

To make your weight table useful again, do this: set the weight table to 'Selected Vertices'.  Now, as you select verts, you will see how much influence each bone has on them.  Next, set it to 'Show Affected Bones' to show only the bones that affect the selected vertices.  Now you can see the selected verts and the skin influences on them.  Finally, check the Use Global setting.  This adds an extra cell to the chart that can be used to adjust the entire bone column of selected verts.  The super part about this is that the effect is additive to the existing weights, so each vert is tweaked a little without being forced to the same value.

Also, select the bone you want to view, and it will show up in bold on a blue background.  If the bone doesn't show up, it's because it has no weights assigned to those verts.  Use the Abs Effect spinner to add a little, and then you can slide it up from there.   You can also use the 'Affect Sel. Verts' option to dial in weights for only certain vertices in your table.

Questions? Post them to the Forum. Comments? Post them to the Journal. Was this helpful... boring? Let me know that too.  I hate boring stuff.

Wednesday
Jan 12, 2011

Get Blog Notifications Directly in 3dsmax Interface

I just learned how to get updates from an RSS feed directly inside 3dsmax.  This is great for any of you who live in 3dsmax all day long and want to get notified when I post a new article. And, it makes use of that stupid little notification toolbar that never seems to tell me anything useful in the first place. (Maybe)

Setting up RSS Feeds in 3dsmax

 

Start by getting to the InfoCenter settings.  Do this by clicking the favorites star icon, and then clicking the settings button in the top right corner of the window.

 

 

Once you see the main options window, click on RSS Feeds, and then click the Add button. From there, add my RSS URL and click Add.  After a few seconds, a confirmation window will appear and you're done.  I just added my own feed to mine.  Let's see if I get a notification of this posting.

Here's the RSS feed for my journal: http://www.ruffstuffcg.com/journal/rss.xml

After it's done, click the little radar dish to see the different RSS feeds you've added.

 

Sunday
Jan 09, 2011

Vray Render Elements into Nuke

As a follow-up to the article I wrote about render elements in After Effects, this article will go over getting render elements into The Foundry's Nuke.

I've been learning Nuke over the last few months, and I have to say it's my new favorite program. (Don't worry, 3dsmax, I still love you too.)  Nuke's floating point core and its node-based workflow make it the best compositor for the modern-day 3D artist to take his/her renderings to the next level. (In my opinion, of course.)  Don't get me wrong, After Effects still has a place in the studio for simple animation and motion graphics, but for finishing your 3D renders, Nuke is the place to be.

There are many things to consider before adding render elements into your everyday workflow.  Read this article on render elements in After Effects before making that decision. You also might want to look over this article about linear workflow too.

Nuke and Render Elements

Drag all of your non-gamma-corrected, 16 bit EXR render elements into Nuke.  Merge them all together and set the Merge nodes to Plus.  Nuke does a great job at handling different color spaces for images, and when dragging in EXRs, they will be set to linear by default.
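
If you'd rather build that tree with script, here's a sketch in Nuke's Python. The file paths are placeholders, so point the Reads at your own element sequences:

    import nuke

    names = ["GI", "Lighting", "Reflection", "Refraction", "Specular"]
    reads = [nuke.nodes.Read(file="renders/attic_%s.####.exr" % n) for n in names]

    # Chain Merge nodes set to 'plus' so the linear elements sum back up.
    result = reads[0]
    for elem in reads[1:]:
        merge = nuke.nodes.Merge2(operation="plus")
        merge.setInput(0, result)   # B input
        merge.setInput(1, elem)     # A input
        result = merge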

Nuke applies a color lookup in the viewer, not at the image level, so our additive math for our render elements will be correct once we add all the elements together.  (If it looks washed out, your renders probably have gamma baked into them from 3dsmax.  Check that your output gamma is set to 1.0, not 2.2.) If you want to play with the viewer color lookup, grab the viewer gain or gamma sliders and play with them.  Keep in mind that this will not change the output of your images saved from Nuke.  This is just an adjustment to your display.

Alpha

After you add together all the elements, the alpha will be wrong again, probably because we are adding something that isn't pure black to begin with.   (My image has a gray alpha through the windows.) Drag in your background node and add another Merge node in Nuke.  Set this one to Matte.  Pull the elements into the A input and the background into the B input.  If you do notice something wrong through the alpha, the easiest way to fix it is to grab the mask channel from the new Merge node and hook it up to any one of the original render elements.  This will then get the alpha from that single node, without it being added up.

Grading the Elements

That's pretty much it.  You can now add nodes to each of the separate elements and adjust the look of your final image. If you read my article about render elements and After Effects, you will remember that I cranked the gain on the reflection element and the image started to look chunky.  You can see here that when I put a Grade node on the reflection element and turn up the gain, I get better results. (NOTE: my image is grainy due to a lack of samples in my render, not from being processed in 8 bit like After Effects does.)

This is just the beginning. Nuke has many other tools for working with 3d renders.  I hope to cover more of them in later posts.

Tuesday
Jan 04, 2011

Vray Elements in After Effects

Although I've known about render elements since their inception back in 3dsmax 4, I've only really been working with split-out render elements for a couple of years or so.

 

The idea seems like a dream, right?  Render out your scene into different elements, and gain control of different portions of the scene, so that you can develop that final "look" at the compositing phase.  However, as I looked into the idea at my own studio, I found it's not that simple.  This is the history of my adventure of adapting render elements into my own workflow.

The Gamma Issue

The first question for me as a technical director was, "Can I really put them back together properly?"  I've met so many people who tried at one point, but got frustrated and gave up.  It's a hassle and a bit of an enigma to get all the render elements back together properly.  One of the main problems for me was that you can't put render elements back together if they have gamma applied to them.  I had already let gamma into my life.   I was still often giving compositors an 8 bit image saved with 2.2 gamma from 3dsmax.  So for render elements to work, I needed to save out files without gamma applied.

Linear Workflow

Now that you're saving images out of max without gamma, you don't want to save 8 bit files, since the blacks will get crunchy when you gamma them up in the composite. So you need to save your files as EXRs in 16 bit space for render elements to work.  You also need to make sure no gamma is applied to them.  Read this post on Linear Workflow for more on that process.

Storage Considerations

With the workflow figured out, you are now saving larger files than you would in an old school 8 bit workflow.  Also, since you're splitting this into 5-6 render element sequences, you're now saving more of these larger images. Make sure your studio's IT guy knows you just increased the storage needs of your project many times over.

Composite Performance

So now you've got all those images saved on your network, and you've figured out how to put them back together in the composite, but how much does this slow down your compositing process?  Well, if you are your own compositor, no problem.  You know the benefits, and you probably won't mind the fact that you're now pulling in 5-6 plates instead of one.  You have to consider if the speed hit is worth it.  You should always have the original render, so the compositor can skip putting the elements back together at all.  (Comping one image is faster than comping 5-6.)  I mean, if the compositor doesn't want to put them back together, and the director doesn't know he can ask to affect each element, why the hell are you saving them in the first place, right? Also, if people aren't trained to work with them, they might put them back together wrong and not even know it.  Finally, to really get them to work right in After Effects, you'll probably have to start working in 16 bpc mode.  (Many plugins in AE don't work in 16 bit.)

After all these considerations, it's really up to you and the people around you to decide if you want to integrate render elements into your workflow.  It's best to practice a few times before throwing it into a studio's production pipeline.  If you do decide to try it out, I'll go over the way that I've figured out how to save them out properly and how to put them back together in After Effects, so that you can have more flexibility in your composite process.

Setting up the Elements in After Effects

I don't claim to be an expert on this by far, so try to cut me some slack.  I'll go over how I started working with render elements, specifically in After Effects CS3.  I'm using this attic scene as my example.  It's a little short on refraction and specular highlights, but they are there, and they end up looking correct in the end.

    

Global Illumination, Lighting, Reflection, Refraction, and Specular.

I use just these five elements: add Global Illumination, Lighting, Reflection, Refraction, and Specular as your elements.  It's like using the primary channels out of Vray.  You can break the GI up into Diffuse multiplied by Raw GI, and the Lighting can be created from Raw Lighting and Shadows, but I just never went that deep yet. (After writing this post, I'll probably get back to it and see if I can get it working with all those channels as well.) The bottom line is that this is an easy setup, so call me a cheater.
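
By the way, "adding the elements back up" is literally a per-pixel sum of the linear images, so you can sanity check a frame outside of AE if you ever doubt it. A hedged numpy check with invented file names (note that recent OpenCV builds want the OPENCV_IO_ENABLE_OPENEXR environment variable set before they'll read EXRs):

    import cv2
    import numpy as np

    # Hypothetical linear (gamma 1.0) element files for one frame.
    elements = ["gi.exr", "lighting.exr", "reflection.exr",
                "refraction.exr", "specular.exr"]
    rebuilt = sum(cv2.imread(f, cv2.IMREAD_UNCHANGED).astype(np.float32)
                  for f in elements)

    beauty = cv2.imread("beauty.exr", cv2.IMREAD_UNCHANGED).astype(np.float32)
    print("max difference:", np.abs(rebuilt - beauty).max())  # should be tiny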

Sub Surface Scattering

I've noticed that if you use the SSS2 shader in your renderings, you need to add that as another element.  Also, it doesn't add up with the others, so it won't blend back in; it will just lay over the final result.

I usually turn off the Vray frame buffer, since I've had issues with the elements being saved out when it is on.  I use RPManager and Deadline 4 for all my rendering, and with the Vray frame buffer on, I've had problems getting the render elements to save out properly.

Bring them into After Effects

I'm showing this all in CS3.  I'm working with Nuke more often, and I hope to detail my experience there in a later post. Load your five elements into AE.  As I did this, I ran into something that happens often in After Effects: long file names.  AE doesn't handle the long filenames that can be generated when working with elements.  So learn from this and give your elements nice short names. Otherwise, you can't tell them apart.

Before "Preserve RGB"Next make a comp for all the elements and all the modes to Add.  With addition It doesn't matter what the order is.  In the image to the left I've done that and laid the original result over the top to see if it's working right.  It's not. The lower half if the original, the upper half is the added elements.  The problem is the added gamma. After Effects is interpreting the images as linear and adding gamma internally.  Adding Final GammaSo now when the are added back up, the math is just wrong.  The way to fix this is to right click on the source footage and change the way it's interpreted.  Set the interpretation to preserve the original RGB color.  Once this is done, your image should now look very dark.  Now that the elements are added back together we can apply the correct gamma once.  (And only once, not 5 times.)  Add an adjustment layer to the comp and add an exposure effect to the adjustment layer.  Set the gamma to 2.2 and the image should look like the original.

 Dealing with Alpha

Next, the alpha needs to be dealt with. The resulting added render elements always seem to have a weird alpha, so I always add back the original alpha. One of the first issues is if your transparent shaders aren't set up properly. If you're using Vray, set the refraction "Affect Channels" dropdown to All Channels.

Alpha Problem

 

Pre-comp everything into a new composition.  I've added a background image below my comp to show the alpha problem.  The right side shows the original image, and the left shows the elements' resulting alpha.  So I add one of the original elements back on top, and grab its alpha using a track matte.  Note that my ribbed glass will never refract the background, just show it through with the proper transparency.

 

When this is all said and done, the alpha will be right and the image will look much like it did as a straight render. See this final screenshot.

Final Composition

OK, remember why we were trying to get here in the first place?  So we could tweak each element, right?  So let's do that. Let's take the reflection element and turn up its exposure, for example.  Select that element in the element pre-comp and add Color Correct > Exposure.  In my example, I cranked the exposure up to 5.  This boosts the reflection element very high, but not unreasonably.  However, since After Effects is in 8 bpc (bits per channel), you can see that the image is now getting crushed.

So now we need to switch the comp to 16 bpc.  You can do that by holding ALT while clicking the 8 bpc indicator under the project window.  Switch it to 16 bpc and everything should go back to normal.  But note that we're now comping at 16 bit, and AE might be a bit slower than before.  This is only a result of cranking the exposure so hard; you can avoid it by doubling up the reflection element instead of cranking it with exposure. Keep in mind that many plugins don't work in 16 bit mode in After Effects.

That's about it for After Effects.  I'm curious how CS5 has changed this workflow, but we haven't pulled the trigger on upgrading to CS5 just yet.  I'm glad, because I've been investigating other compositors like Fusion and Nuke.  I'm really loving how Nuke works, and I'll follow this article up with a Nuke one if people are interested in it.

 

Wednesday
Dec 22, 2010

The Missing 3dsmax Brush Cursor

Have you ever had your Hair and Fur brush cursor disappear in 3dsmax? What about the Poly Shift tool? Has that ever gone missing on you?  It was there one minute, but now it's gone.

I've had this happen a few times.  For me, it first started with the Hair and Fur modifier, but it's true of the Poly Shift tool also.  I'd be distorting some mesh, and then I'd go off and do some other stuff. I come back and my circle cursor is gone.  I'm in the tool mode, but I don't see my brush cursor?!  Anywhere!  Shit, must be the graphics card, right?  Maybe restart 3dsmax.  Load the file again.  Go to the brush tool mode.  Fuck! It's still missing. What's wrong? Maybe I should reboot. Maybe I ran out of memory? Is it file related... STOP!

The Problem

Nothing is wrong with your machine. Don't reinstall anything or update your graphics drivers.  Just check your layer manager first.  If the "current" layer (the checked one) is hidden, then the brush cursor is hidden inside this hidden layer.  Simply unhide the layer and try the tool again.  Most likely the hidden layer was the problem.  If it's not, sorry, I can't help.  Keep Googling.

I love seeing how many people have found my post on fixing the corrupted 3dsmax menu file, so I hope people will find this post helpful too.

Wednesday
Dec 15, 2010

Linear Workflow with Vray in 3dsmax

These days, most shops are using a linear workflow for lighting and rendering computer graphics.  If you don't know what that means, read on.  If you do know, and want to know more about how to do it specifically in 3dsmax, read on as well.

Why use a linear workflow?

The first thing about linear workflow is that it can actually make your CG look better and more realistic.  Do I really need to say any more? If that doesn't convince you, the second reason is so you can composite your images correctly. (Especially when using render elements.) Also, it gives you more control to re-expose the final without having to re-render all your CG.  And finally, many SLR and digital cameras now support linear images, so it makes sense to keep everything linear all throughout the pipeline.

A bit on Gamma

Let's start with an example of taking a photo on a digital camera.  You take a picture, you look at it, it looks right.  However, you should know that the camera already encoded a 2.2 gamma curve into the image.  Why? Because the camera makers are taking into account the display of this image on a monitor. A monitor or TV is not a linear device, so the camera maker applies a gamma curve to the image to compensate.  A gamma-encoded curve looks something like this: the red curve shows how a monitor handles the display of the image. Notice the green line, which is 2.2 gamma.  It's directly opposite to the monitor display, and when they combine, we get the gray line in the middle. So gamma is about compensation.  Read more on gamma here.

The problem comes in when you start to apply mathematics to the image (like compositing, or putting render elements back together).  Now the math is off, since the original image has been bent to incorporate the gamma.  So the solution is to work with the original linear-space images, and apply gamma to the result, not the source. NOTE: Linear images look darker, with high contrast, until you apply the display gamma. TVs use a gamma of 2.2.

The problem also comes with computer-generated imagery.  All of CG is essentially mathematics, and for years many of us have just dealt with this.  However, now that most renderers can simulate global illumination, the problem is compounded.  Again, the solution is to let the computer work in linear space, and bend only the final results.

Why we use 16 bit EXR's

So, now that we know we have to work with linear images, let's talk about how.  Bit depth is the first problem.  Many of us were using 8 bit images for years.  I swore by Targas for so long, mainly because every program handled them the same.  Targas are 8 bit images with color in a space of 0-255.  So if you save the data with only 256 levels (0-255) for each R, G, B, and alpha channel, when you gamma the image afterwards, the dark end of the spectrum will be "crunchy": there wasn't much data in there to begin with, and now that little bit of data has been stretched.  Here's where 16 bit and 32 bit images come into play.  You could use any storage format that supports 16 bit.  16 bit is plenty for me; 32 is depth overkill and makes very large files.  Then you can adjust the resulting images with gamma, and even exposure changes, without those blacks getting destroyed.  EXR seems to be the popular format, since it came out of ILM and has support for multiple channels.  It also has some extended compression options for per-scanline and zipped-scanline storage, so it can be read faster.
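
You can see the crunchy blacks in a few lines of Python: quantize a dark linear ramp to 8 bits, gamma it up, and count how many distinct levels survive (a toy sketch):

    import numpy as np

    ramp = np.linspace(0.0, 0.05, 1000)        # 1000 dark linear values
    eight_bit = np.round(ramp * 255) / 255     # what an 8 bit file can store
    print(len(np.unique(eight_bit ** (1/2.2))))                 # ~14 levels: banding
    print(len(np.unique(ramp.astype(np.float16) ** (1/2.2))))   # nearly 1000: smooth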

Does 3dsmax apply gamma?

Not by default.  3dsmax has had gamma controls in the program since the beginning, but many people don't understand why or how to use them. So what you've probably been working with and looking at is linear images that are trying to look like gamma-adjusted images.  And you're trying to make your CG look real? I bet your renderings always have a lot of contrast, and you're probably turning up the GI to try and get detail in the blacks.

Setting up Linear Workflow in 3dsmax

Setup 3dsmax Gamma

First, let's set up the gamma in 3dsmax.  Start at the menu bar, Customize>Preferences, and go to the Gamma and LUT tab.  (LUT stands for Look Up Table. You can now use custom LUTs for different media formats like film.) Enable the gamma option and set it to 2.2. (Ignore the gray box inside the black and white box.) Set the input gamma to 2.2.  This will compensate all your textures and backgrounds to look right in the end.  Set the output gamma to 1.0.  This means we will see all our images in max with a gamma of 2.2, but when we save them to disk, they will be linear.  While you're here, check the options for Materials and Color Selectors, since we want to see what we're working with. That's pretty much it for max.  Now let's talk about how this applies to Vray.

Setting up Gamma for Vray

You really don't have to do anything to Vray to make it work, but you can do a couple things to make it work better.  First off, many of Vray's controls for GI and anti-aliasing are based on color thresholds. It analyzes the color difference between pixels and, based on that, does some raytracing stuff.  Now that we've just turned on a gamma of 2.2, we will start to see more noise in our blacks.  Let's let Vray know that we are in linear space and have it "adapt" to this environment.

Vray has its own equivalent of exposure control, called Color Mapping.  Let's set the gamma to 2.2 and check the option "Don't affect colors (adaptation only)".  This will tell Vray to work in linear space, and now our default color thresholds for anti-aliasing and GI don't need to be drastically changed.  Sometimes, when I'm working on a model or NOT rendering for composite, I turn off "Don't affect colors", which means that I'm encoding the 2.2, and when I save the file as a JPG or something, it will look right. (This easily confuses people, so stay away from switching around on a job.)

Vray Frame Buffer

I tend to almost always use the Vray frame buffer.  I love that it has a toggle for looking at the image with and without the view gamma. (Not to mention the "Track Mouse while Rendering" tool in there.)  The little sRGB button will apply a 2.2 gamma to the image, so you can look at it in gamma space while the rendering is still in linear space. Here is just an example of the same image with and without 2.2 gamma. Notice the sRGB button at the bottom of these images.

This asteroid scene is shown without the gamma, and with a 2.2 gamma. Try doing that to an 8 bit image; there would hardly be any information in the deep blacks.  With a linear image, I can now see the tiny bit of GI on the asteroid's darker side.

Pitfalls...

Vray's Linear Workflow Checkbox

I'm referring to the checkbox in the Color Mapping controls of the Vray renderer. Don't use this.  It's misleading.  It's used to take an older scene that was lit and rendered without gamma in mind, and it does an inverse correction to all the materials.  Investigate it if you're re-purposing an older scene.

Correctly Using Data Maps (Normal Maps)

Now that we've told max to adjust every incoming texture to compensate for this monitor silliness, we have to be careful.  For example, now when you load a normal map, max will try to apply a reverse gamma curve to the map, which is not what you want.  This will make your renderings look really fucked up if they are gamma compensated; surface normals will point the wrong way. To fix this, always set the normal map image to a predefined gamma of 1.0 when loading it.   I'm still looking for discussion about whether reflection maps and other data maps should be set to 1.0.  I've tried it both ways, and unlike normals, which are direction vectors, reflection maps just reflect more or less based on gamma.  It makes a difference, but it seems fine.

Always Adopt on Open

I've also taken on the ritual of always adopting the gamma settings of the scene I'm loading.  Always say yes to that dialog, and you shouldn't have problems jumping from job to job and scene to scene.

Hope that helps to explain some things, or at least starts the process of understanding it. Feel free to post questions, since I try to keep the tech explanations very simple.  I'll try to post a follow-up article on how to use linear EXR's in After Effects.

Tuesday
Dec142010

Swapping Instanced Models inside Trackview

Have you ever hand placed a lot of instances, but then realized that you actually wanted to swap out the entire object?  This leaves you with a problem, since you want to keep the placement of the original objects, but you also want to use a new object.  One way is to add an Edit Mesh to any instance and "Attach" the other mesh to it, but that's sloppy.  Here's a neat little trick many people don't know: you can copy and paste modifiers and base meshes in the track view.

In this example, I hand placed 720 small spheres as "light bulbs".  But the spheres aren't cutting it for the realism I want.  So, I need to replace all the spheres with my new light bulb object on the right.

First, to help things out, use the align tool and align the bulb to one of the spheres, taking its rotation and scale values. (Ignore position XYZ.) Now if the bulb is too big, or oriented the wrong way, don't use the transform tools; use an XForm modifier or edit the mesh itself to line it back up.  This will ensure that when we do replace the sphere, the bulb will have the same scale and rotation within the object space.

Now, select the new object and open the track view.  Navigate to the modifier or base object you want to copy.  In this case, I had an Editable Mesh.  Right click and choose "Copy" from the lower right quad menu.  Then, select any one of the instanced objects, and navigate to its base object.  In this case it was a sphere primitive.  Click on the sphere base object and right click again.  Choose "Paste" from the lower right quad menu.  When you get the paste dialog, the first choice is to paste as a copy or as an instance.  In this case, I don't need the original, so I'll leave it at copy.  Below that is the key option: "Replace All Instances". This will find all the instances of the sphere and replace them with my new object.

Pretty cool, huh?  You can also do this with modifiers and base objects anywhere in the track view.
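
If you'd rather script it, MAXScript's instanceReplace does essentially the same swap. A quick sketch, with made-up object names:

-- Swap the base object of every placed sphere for the new bulb mesh,
-- keeping each node's transform. Object names here are hypothetical.
spheres = $LightSphere* as array   -- collect the instanced spheres by wildcard
instanceReplace spheres $NewBulb   -- replace their base objects with the bulb's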

Wednesday
Dec082010

2D Tracking inside 3dsmax

Who remembers the 3d tracker put in back in version 4? Did you ever try to use it? Did it work? Probably not. I think I got it to work once, but the problem was that you had to know all the measurements of the set, and even when you did, the results were sketchy.  When I finally saw Boujou track for the first time, I almost shit my pants.  And of course, I never tried to use the max 3d tracker again.

So the max 3d tracker is not exactly a tool you'd use, right?  Some of you probably didn't even know it was there.  However, it hides a very cool little tool that you might find useful: 2d tracking. To make 3d tracking possible, you have to track points in 2d first.  That's the part of the tracker that can still be useful today.  Every once in a while you need to track something in a plate where a full 3d track is either not needed, or not wanted.

A Simple Example

I grabbed a cheap camera and took a shaky shot of the lights above me for this example.  

Let's say we want to pin a lens flare or some 3d object to these lights.  Now of course, you could do this in a compositing program, but if the track moves across the screen enough, you'd want to see the sides of the 3d object change in perspective, which you can't fake by just tracking a flat picture in 2d. So, 2d tracking in 3d CAN be useful.  (It's up to you to figure out what to do with the info I'm spitting out.)

2D Tracking

Load up the movie in your background, either as a viewport background or as an environment background.  Line up a quick camera and place a point helper where one of the lights is.

Now, open up the Utilities tab of the command panel, click "More...", and choose the Camera Tracker. First thing to do is load up your movie.  It can be a QuickTime that max can read or an IFL sequence.  A window with your movie should come up.  Next, go to the Motion Trackers rollout and click "New Tracker".  Drag that tracker over the light and center it up.

Once it's where you want it, go to the Movie Stepper rollout and turn on the Feature Tracking button. (It'll turn red.)  If you have a simple plate like mine, you can press the ">>" button and track right through to the end of the shot.  Better yet, press the ">10" button to step 10 frames at a time.  If the tracker gets lost, find the frame where it loses the feature, and drag it back to where you want it.  Then start stepping through again.  When you're finished, you should see the motion as a line in the movie window.

Object Pinning

OK, we're almost home. Once the track is where you want it, scroll down past all the 3d tracking crap until you see the Object Pinning rollout.  Choose your tracker, and pick your object to pin to the motion. (The point helper I had you make.) You can also choose whether you want to pin in screen space or in grid space, and whether it's absolute or relative to its starting position.  I used screen and absolute.  Press the "Pin" button and you should see the helper moving around to match the point in space.

Here's a preview of the final results.

  So... 3d tracking in max... I don't think so, but 2D tracking in max? Hell yea.
Want more crazy tracking... click to see chicken tracking

Tuesday
Dec072010

Working with 3dsmax Groups

You know how you work with 3dsmax groups?  You don't.  Don't use them.  They mess shit up.  They usually fuck up your pivot points, especially in game development where you're exporting to another program.  Stay away from them.

You wanna "group" something? Parent everything to a null node. There ya go.
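
In max terms, a point helper works fine as the null. A two-liner for the listener, assuming you have the objects selected:

-- "Group" the selection by parenting it to a point helper instead of a group.
grp = Point name:"grp_null" wirecolor:yellow
for obj in (selection as array) do obj.parent = grp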

Friday
Dec032010

Getting Good Fur from 3ds max

OK, let's all admit that we're a little disappointed with the Hair and Fur system in 3ds max.  However, if you can't afford a better solution like Hair Farm, there is still hope.  Although the system in max has its issues, you can still get some decent fur renderings out of it.  Here are a few fur tips that I remember from before we got Hair Farm.

These are a few examples of fur that I've done with max's Hair and Fur system. (They are close-ups. I didn't get approval to show the characters.)

Styling Fur
  1. When you first apply Hair and Fur, it might be very long compared to your model.  (I have no idea how the hell it gets its scale.  It seems arbitrary to me.) Don't try to use the scale setting to make it smaller. Use the style tools to get it smaller, then use the scale and cut spinners to tweak it when finishing.
  2. The first thing I usually do is go into style mode and get the length right.  To do this you will have to first turn off DISTANCE FADE so that you can do an overall scale without having the brush fall off from the center. Then zoom way out and try to eye it up from afar.
  3. Next, hit the comb tool.  This is great for animal fur.  Yes, I said it was great, and it is. Use the transform brush to quickly brush the hair in a general direction, then press the comb button. Ta da! Great for making hair lie down along your surface very fast.
  4. Frizz is wavy and useful; Kink is some weird noise pattern that scatters the hair vertices. I try to avoid it. It's not very realistic.
  5. There is no way to make perfect little loops or helixes. Get over it. You can try to style it, but that might make you insane.

Lighting and Rendering

  1. Don't try to use geometry. It's stupid. Take advantage of the buffer renderer as much as possible.
  2. I also didn't find the mental ray primitive option to be a solution.  Mental ray is a raytracer, so to get smooth hair from frame to frame, you had to turn up the anti-aliasing so high that I found it looked just like the buffer hair anyway.
  3. Turn off the Vray frame buffer.  Hair and Fur is a render effect and gets computed into the max frame buffer. You won't see it in the Vray frame buffer.
  4. Switch back to shadow maps and get over it. (Or render separate passes to use Vray soft shadows.)
  5. Keep the light cones tight around your hair, since the shadows are now resolution dependent.  (That's what she said?) Start with at least 1k shadow maps. I'm sure you'll need them to be 2k if you're animating.
  6. Start to turn up the total number of hairs, but render as you go. (And save before rendering.)
  7. Watch for missing buckets. If this happens, you can make the buckets smaller in the Render Effects dialog.
  8. Use thickness to fill between hairs.  If you can't throw any more hairs at it, thicken the hairs.  Better yet, make sure the surface underneath has a hair-looking texture on it. That will keep your render time down too.

Material

  1. Turn down the shininess right away.  It's supposedly for simulating real hair, but often it looks very artificial.
  2. Make sure to set the colors for the tip and root to white when using texture maps.  These color swatches are multiplicative, and anything other than white will make your map look wrong.
  3. Look out for the Hue Variation amount.  It defaults to 10%, and that's high.  It will vary the hue of the hair and can start you off with purple, red and brownish hairs.

Don't get me wrong.  Max's Hair and Fur is pretty much a pain in the ass and really should be dealt with.  I'm now using Hair Farm like I mentioned above, and the difference is worlds apart. Hair Farm is fast, robust, and the results look very nice.  (Speed is king in look development. If you can render it fast, you can render it over and over, really working with the materials and lighting to make it look the way you want.)


Saturday
Nov272010

Top Free Tools I Can't Do Without

OK, maybe "can't do without" is a bit strong, but seriously: if you're using 3ds Max and don't know about these tools, read this article.  I just wanted to write about my top free scripts, but I needed to expand it to plugins also. Over the last few years, I've tried many different scripts and tools to help me in production. Here are my top favorites.

Bercon Procedural Maps

The Bercon Maps are a set of procedurals for 3ds max that are superb.  The noise is so versatile, you can throw away every other noise procedural that came before.  The best part about these is they look very realistic, but have all the benefits of being procedural. (Just check out the images on the website.) Shane Griffith over at Autodesk should really just buy this set so we don't have to chase them down every release.

PEN Attribute Holder

If you've ever used the modifier in max called Attribute Holder to store custom attributes on an object, this is an awesome version of that. It's very dear to my heart, since I wrote the original Attribute Holder modifier that's still in max today. This version actually does something, unlike mine. My version was a hack of an existing modifier in which I hid its UI. The PEN Attribute Holder captures applied custom attributes and saves presets as sliders that you can use to call the attributes back. I use it on all my characters' hand controls as a way to store finger poses. First I connect all the finger rotations to custom attributes on the PEN modifier. Once the data is instanced as a rotation and a custom attribute, Paul's modifier stores the values together as one preset. (If anyone wants me to explain this more, let me know.)
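
Since people sometimes ask what "connecting rotations to custom attributes" looks like, here's a rough MAXScript sketch. The attribute definition and node names are made up for the example; paramWire.connect is the standard wiring call.

-- Sketch: define a float custom attribute, stick it on a modifier, then wire
-- a finger bone's X rotation to it. All object names here are hypothetical.
fingerCA = attributes fingerPose
(
    parameters main rollout:ro
    ( curl type:#float ui:spnCurl default:0 )
    rollout ro "Finger Pose"
    ( spinner spnCurl "Curl" type:#float range:[0,90,0] )
)
custAttributes.add $Hand_CTRL.modifiers[1] fingerCA
paramWire.connect $Hand_CTRL.modifiers[1].fingerPose[#curl] $Finger_Bone.rotation.controller[#X_Rotation] "degToRad curl"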

Blur Beta Plug-in Pack

Blur has been developing shaders and procedurals for 3ds Max since it first came on the scene. Many of these are re-compiled each release and given away for free. Splutterfish now hosts them, since those are the guys that originally wrote them.  Thanks, guys, for recompiling these every release.

Sub-Object Gizmo Control

This is a great tool written by Martin Breidt.  It allows any modifier transform to be linked to another 3d object.  For example, you can use this to set up a UVW Map modifier that can be rotated with a separate control object. Martin has some other great tools on his site. Check them out here.

And as always, Script Spot is a great source for finding new scripts.  Let me know if you come across something cool!