What are the advantages of creating normal maps the hard way?


I've built some simple normal maps from meshes and a custom HLSL shader that writes their normals to the screen.  So far I've only used this for tiling normal maps, where I control the orientation of the mesh used to generate the normals, but I don't see why it couldn't work for a full-model normal map: place the models in screen space based on their UVs rather than their world-space coordinates, write the normals of the low poly to one image and the normals of the high poly to another, and write the transform that takes the first set of normals to the second into a third image.  With the tiling normal maps I've made, I haven't seen any artifacts or oddities.  All it takes is one or two models, a relatively simple shader, and a single frame of compute time.
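
For concreteness, the core of what I'm doing is roughly this (a simplified sketch; the names and the exact normal encoding are illustrative):

```hlsl
// Sketch of a UV-space normal writer. The vertex shader places each vertex
// at its UV coordinate in clip space, so rasterizing the mesh fills the
// texture; the pixel shader writes the interpolated normal as a color.

struct VSIn  { float3 pos : POSITION; float3 nrm : NORMAL; float2 uv : TEXCOORD0; };
struct VSOut { float4 pos : SV_Position; float3 nrm : TEXCOORD0; };

VSOut VSMain(VSIn i)
{
    VSOut o;
    // Map UV [0,1] into clip space [-1,1]; flip V so the image isn't inverted.
    o.pos = float4(i.uv.x * 2.0 - 1.0, 1.0 - i.uv.y * 2.0, 0.0, 1.0);
    o.nrm = i.nrm;                        // object-space normal, interpolated
    return o;
}

float4 PSMain(VSOut i) : SV_Target
{
    // Encode the [-1,1] normal into [0,1] color, the usual normal-map convention.
    float3 n = normalize(i.nrm);
    return float4(n * 0.5 + 0.5, 1.0);
}
```

Render the mesh once with that, read back the render target, and you have your map.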

But on the modelling sites I visit, baking normals sounds like a major headache, involving the creation of a cage and a lengthy bake process.  It sounds like the modelling packages use some kind of raycasting algorithm.

There must be a reason not to do things the way I've been doing them.  Can anyone explain the problems with creating normal maps via shader?


I don't exactly understand the method you are using, but I'm guessing it is very constrained to your specific use case: something like a plane as the low-poly version of a more detailed version of itself, displaced along just one axis. In the general case, the low-poly and high-poly models don't have the same topology, and thus they don't share the same UV space; in fact, the high-poly version of a model often isn't even UV-unwrapped.


I've done normal maps the way you describe, and it does have its uses. However, baking normal maps is a mapping process: mapping from high-detail geometry to low-detail geometry. In the case of baking a tiling planar normal map, you are going from high-detail geometry arrayed on a plane to a plane. That's an easy mapping, one-to-one with the XY coordinates of a particular location on the plane; a given feature simply projects directly onto the plane 'below' it.

When baking an arbitrary mesh, however, it's not so simple. You need to find the point on the high-detail mesh that most directly corresponds to a given point on the low-detail mesh. That means projecting onto the surface of the high poly, and this projection is NOT a simple projection onto a 2D plane. Once you advance to that level of complication, it's much easier to do the work in a 3D package. Not to say that it CAN'T be done your own way, just that the math and logistics become a LOT more complex.
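
To make that concrete, here's roughly the per-texel work an arbitrary-mesh baker ends up doing. This is a brute-force sketch with illustrative names; a real baker accelerates the triangle search with a BVH or octree and uses an artist-controlled cage rather than the fixed offset here:

```hlsl
// Brute-force compute-shader sketch of baking against an arbitrary high poly.

struct Tri { float3 a, b, c; float3 na, nb, nc; };

StructuredBuffer<Tri> HighPolyTris;  // high-poly triangles with vertex normals
Texture2D<float4>     LowPolyPos;    // low-poly positions rasterized into UV space
Texture2D<float4>     LowPolyNrm;    // low-poly normals rasterized into UV space
RWTexture2D<float4>   NormalMap;     // output map

cbuffer Params { uint TriCount; float CageOffset; }

// Moller-Trumbore ray/triangle intersection; returns hit distance, or -1 on miss.
float RayTri(float3 ro, float3 rd, Tri t, out float2 bary)
{
    bary = 0;
    float3 e1 = t.b - t.a, e2 = t.c - t.a;
    float3 p = cross(rd, e2);
    float det = dot(e1, p);
    if (abs(det) < 1e-8) return -1;
    float inv = 1.0 / det;
    float3 s = ro - t.a;
    float u = dot(s, p) * inv;
    if (u < 0 || u > 1) return -1;
    float3 q = cross(s, e1);
    float v = dot(rd, q) * inv;
    if (v < 0 || u + v > 1) return -1;
    return dot(e2, q) * inv;
}

[numthreads(8, 8, 1)]
void CSMain(uint3 id : SV_DispatchThreadID)
{
    float3 n0 = normalize(LowPolyNrm[id.xy].xyz * 2 - 1);
    // Start a little outside the low-poly surface and cast inward along its
    // normal, so detail both above and below the surface can be hit.
    float3 ro = LowPolyPos[id.xy].xyz + n0 * CageOffset;
    float3 rd = -n0;

    float  best = 1e30;
    float3 hitN = n0;                    // fall back to the low-poly normal
    for (uint i = 0; i < TriCount; ++i)
    {
        float2 b;
        float  d = RayTri(ro, rd, HighPolyTris[i], b);
        if (d >= 0 && d < best)
        {
            best = d;
            Tri t = HighPolyTris[i];
            hitN = normalize(t.na * (1 - b.x - b.y) + t.nb * b.x + t.nc * b.y);
        }
    }
    NormalMap[id.xy] = float4(hitN * 0.5 + 0.5, 1.0);
}
```

Every line of that is simple on its own, but choosing ray directions, handling misses and double hits, and keeping it fast is exactly where the 3D packages earn their keep.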


Thanks for your responses; I think I understand better now.

 

It sounds like it should be acceptable -- it's nice to know that some awful artifact isn't going to jump out at me -- and the real issue is matching UV coordinates, matching corresponding points and spaces.  There are certainly situations where this is easy via UV correspondence: if your high poly is just a subdivided low poly, it seems like it would be trivial, and what I was doing with planes was trivial.  But I can see now that there are situations where it wouldn't be.
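
For the subdivided case, the 'third image' step I mentioned would reduce to a per-texel change of basis, something like this (a sketch, assuming the low poly's tangent frame has also been rendered into the shared UV space):

```hlsl
// Combine pass (sketch): express the high poly's normal in the low poly's
// tangent frame, using images previously rendered into the shared UV space.

Texture2D<float4> HighNrm;   // high-poly object-space normals (encoded 0..1)
Texture2D<float4> LowTan;    // low-poly tangents   (encoded 0..1)
Texture2D<float4> LowBitan;  // low-poly bitangents (encoded 0..1)
Texture2D<float4> LowNrm;    // low-poly normals    (encoded 0..1)
SamplerState      Smp;

float3 Decode(float4 c) { return normalize(c.xyz * 2 - 1); }

float4 PSCombine(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    float3 n = Decode(HighNrm.Sample(Smp, uv));
    // The matrix rows are the tangent-frame axes, so mul(tbn, n) projects n
    // onto them, i.e. transforms it into the low poly's tangent space.
    float3x3 tbn = float3x3(Decode(LowTan.Sample(Smp, uv)),
                            Decode(LowBitan.Sample(Smp, uv)),
                            Decode(LowNrm.Sample(Smp, uv)));
    return float4(mul(tbn, n) * 0.5 + 0.5, 1.0);
}
```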


Yes, if the high poly is made by subdividing the low poly, then it's not too difficult to bake yourself, and you can build a workflow that ensures that relationship holds. Some tools, however, allow dynamic subdivision while sculpting (see something like Sculptris, or Blender's Dyntopo). That gives you more mesh detail in the areas that need it, but it can break the relationship between the high poly and the low poly, and if your workflow includes it, the math becomes more difficult.

Additionally, I feel you overstate the difficulty of using a 3D tool to bake normal maps. Tutorial videos make it seem harder than it really is. Once you understand the process, it can be very quick; the actual baking setup and bake can take mere minutes.

 

Even for tiling textures, I now prefer Blender to my own older hand-tooled processes. Having access to all the other tools of a general-purpose 3D package makes all the difference.


Thanks, that's easy to believe and useful to know.  I actually made my own shader to bake lighting, materials, and so on to textures in-engine before I discovered that there was almost no difficulty involved in Blender baking, so I can imagine the same is true for normals.  (It wasn't bad HLSL practice, though -- not a bad way to get more comfortable with the concepts, and not something I regret doing.)  So much of this just comes down to finding the time to learn things, when there's so much to be learned and it's difficult to know beforehand what will be hard and what will be easy.


You've implemented a baker ;)

You're using a 2D cage in texture space and requiring the artist to map both meshes exactly onto that cage.

Other methods change the workflow for the artist, such as not requiring UVs at all on the high poly.


