Heightmap photography

One of the big problems with next-gen graphics is asset production. Previously, if you wanted a high-res texture, you could just grab a digital camera and go find it - get the right lighting, line up your shot, and *snap*, you've got everything you need. The next-gen stuff needs so much more - normal maps, heightmaps, specular maps, gloss maps... I heard that EA even employ some people dedicated to creating 'sweat maps' for their characters.

So any technology to help in the creation of those assets will be well-received. While sitting here waiting for some disc images to be compiled, I'm pondering whether there's anything I could play with.

The nicest solution is something that lets the artists do what they used to - travel around with a camera, get everything they need, then come back and get on with putting the textures into levels. So building this extra capability into the camera itself (or taking what a camera can already capture and post-processing it) is the way to go.

I'm thinking about how to generate normal maps with a regular digital camera, and the best I've come up with is a 'facingness' map. You take two photos of your surface, one with flash and one without. The photo without flash is the base color texture; it's then subtracted from the photo with flash, and the resulting difference image is a 'facingness' map, as parts of the surface that face the camera reflect more of the flash and so show a bigger difference between the two shots.
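
Something like this is what I have in mind - a minimal sketch, assuming the two shots are pixel-aligned and taken from the same position (the filenames are just placeholders):

    import numpy as np
    from PIL import Image

    # Two pixel-aligned shots of the same surface: one with flash, one without.
    flash = np.asarray(Image.open("flash.png").convert("L"), dtype=np.float32)
    noflash = np.asarray(Image.open("noflash.png").convert("L"), dtype=np.float32)

    # The difference isolates the flash's contribution; texels angled towards
    # the camera bounce back more of the flash, so they show a larger difference.
    diff = np.clip(flash - noflash, 0.0, None)

    # Normalise to 0..255 and save as a greyscale 'facingness' map.
    facingness = diff / max(float(diff.max()), 1e-6) * 255.0
    Image.fromarray(facingness.astype(np.uint8)).save("facingness.png")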

It's the very beginnings of a normal map, I think, but while it tells us that a texel may be at a 45-degree angle to the base plane, it doesn't tell us in which direction the texel is sloping. It may be possible to combine two or three photographs of the surface, lit from different directions, and cross-reference their facingness maps to work out which way each texel is actually facing?
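
That multi-photo idea looks a lot like what the research literature calls photometric stereo: with a fixed camera and a few shots lit from different known directions, each pixel's intensity is roughly albedo times N dot L, so three images give a small linear system per texel. A rough sketch, assuming the surface is approximately Lambertian and the light directions are known (the filenames and light vectors below are placeholders):

    import numpy as np
    from PIL import Image

    def load_grey(path):
        return np.asarray(Image.open(path).convert("L"), dtype=np.float32) / 255.0

    # Three photos from a fixed camera, each lit from a different known direction.
    images = [load_grey(p) for p in ("lit0.png", "lit1.png", "lit2.png")]

    # Unit vectors pointing from the surface towards each light.
    L = np.array([[ 0.50,  0.000, 0.866],
                  [-0.25,  0.433, 0.866],
                  [-0.25, -0.433, 0.866]], dtype=np.float32)

    h, w = images[0].shape
    I = np.stack([im.reshape(-1) for im in images], axis=1)   # (pixels, 3)

    # Lambertian model: intensity = albedo * dot(N, L).
    # Solving L @ g = I per pixel gives g = albedo * N.
    g = np.linalg.solve(L, I.T).T
    albedo = np.linalg.norm(g, axis=1)
    normals = g / np.maximum(albedo[:, None], 1e-6)

    # Pack the normals into the usual 0..255 RGB normal-map encoding.
    rgb = ((normals.reshape(h, w, 3) * 0.5 + 0.5) * 255.0).astype(np.uint8)
    Image.fromarray(rgb, "RGB").save("normalmap.png")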

2 Comments


Some people at Polycount were experimenting with something like this a while ago. What they did was have a fixed camera, and change the position of a light source. Then using those two images (I think two was enough) they could create a normal map. Unfortunately I'm pretty sure I won't be able to find those posts, but you could ask there if you're interested in learning more.

Edit: managed to find a link on the topic: here

Quote:
Edit: managed to find a link on the topic: here

[lol] - superpig, this was mentioned in a pretty cool thread in "your" own forum a few months back...

Anyway, what you need to look into is "surface analysis" at the research level. I'm doing my thesis on it next year (but I'm combining it with HDRI and realtime D3D9 work). There's quite a lot of work done on the subject.

Things like surface perturbation (normal/height/displacement) maps are reasonably well covered (still some holes), but it seems the other things such as light transmission properties (sub-surface scattering, gloss, glow, specular etc.) still need some work.

The thesis I mentioned is likely to use the technique covered in rick_appleton's link, but use multiple exposures to generate an HDRI diffuse map (and so on).
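
For illustration, a very rough sketch of what merging bracketed exposures into a floating-point radiance map could look like - a simple per-pixel weighted average, assuming a linear camera response (the filenames and exposure times are made up):

    import numpy as np
    from PIL import Image

    paths = ["exp_short.png", "exp_mid.png", "exp_long.png"]
    times = [1.0 / 250.0, 1.0 / 60.0, 1.0 / 15.0]   # shutter times in seconds

    def weight(v):
        # Trust mid-range pixels most; under- and over-exposed pixels count less.
        return 1.0 - np.abs(v - 0.5) * 2.0

    num, den = 0.0, 0.0
    for path, t in zip(paths, times):
        img = np.asarray(Image.open(path), dtype=np.float32) / 255.0
        w = weight(img)
        num = num + w * (img / t)   # scale each exposure back towards radiance
        den = den + w

    hdr = num / np.maximum(den, 1e-6)   # floating-point radiance map
    np.save("diffuse_hdr.npy", hdr)     # raw array; writing .hdr/.exr needs extra libs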

If you're interested, PM me (My email is hidden) and I'll see about sending you some stuff...

Like the post btw! [smile]
Jack
