bandages

What are the advantages of creating normal maps the hard way?


I've built some simple normal maps out of meshes and a custom HLSL shader that writes their normals to the screen.  So far I've only used this for creating tiling normal maps, where I control the orientation of the mesh used to generate the normals, but I don't see why I couldn't do it for a full-model normal map: place the models in screen space based on their UV coordinates rather than their world-space coordinates, write the normals of the low poly to one image and the normals of the high poly to another, and write the vector needed to transform the first set of normals into the second onto a third image.  With the tiling normal maps I've made, I haven't seen any artifacts or weirdness.  All it takes is one or two models, a relatively simple shader, and a single frame of compute time.
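To make the idea concrete, here's a stripped-down sketch of the kind of shader pair I mean (illustrative, not my exact code; the structure names are placeholders).  The vertex shader places each vertex at its UV coordinate instead of its world position, and the pixel shader writes the packed normal:

```hlsl
// Sketch of the UV-space normal-writing idea. The vertex shader maps
// each vertex to its UV location in clip space; the pixel shader writes
// the interpolated normal, remapped from [-1,1] to [0,1] for storage.

struct VSIn
{
    float3 position : POSITION;  // unused for placement, kept for completeness
    float3 normal   : NORMAL;
    float2 uv       : TEXCOORD0;
};

struct VSOut
{
    float4 clipPos : SV_Position;
    float3 normal  : NORMAL;
};

VSOut VSMain(VSIn input)
{
    VSOut output;
    // Map UV [0,1]^2 to clip space [-1,1]^2; flip V so the rendered
    // image matches the usual texture orientation.
    output.clipPos = float4(input.uv.x * 2.0 - 1.0,
                            1.0 - input.uv.y * 2.0,
                            0.0, 1.0);
    output.normal = input.normal;
    return output;
}

float4 PSMain(VSOut input) : SV_Target
{
    // Renormalize after interpolation, then pack into [0,1] for the texture.
    float3 n = normalize(input.normal);
    return float4(n * 0.5 + 0.5, 1.0);
}
```

Render the low poly this way to one target and the high poly to another; a third pass can then compute the transform between the two sets of normals.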

But when I visit modelling sites, baking normals sounds like a major headache involving the creation of a cage and a lengthy bake process.  It sounds like the modelling packages use some kind of raycasting algorithm.

There must be a reason not to do things the way I've been doing them.  Can anyone explain the problems with creating normal maps via a shader?


I don't exactly understand the method you're using, but I'm guessing it's very constrained to your specific use case, something like a flat plane as the low-poly version of a more detailed version displaced along just one axis. In the general case, the low-poly and high-poly models don't have the same topology, and thus don't share the same UV space; in fact, the high-poly version of a model often isn't even UV-unwrapped.


I've done normal maps the way you describe, and it does have its uses. However, baking normal maps is a mapping process: mapping from high-detail geometry to low-detail geometry. In the case of baking a tiling planar normal map, you are going from high-detail geometry arrayed on a plane to a plane. That's a pretty easy mapping, one-to-one with the XY coordinates of a particular location on the plane; a given feature simply projects directly to the plane 'below' it. When baking an arbitrary mesh, however, it's not so simple. You need to find the point on the high-poly mesh that most directly corresponds to a given point on the low-poly mesh. This involves projecting onto the surface of the high poly, and that projection is NOT a simple projection onto a 2D plane. Once you advance to that level of complication, it's much easier to do the work in a 3D package. That's not to say it CAN'T be done your way, just that the math and logistics become a LOT more complex.
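To make that concrete, here's roughly the per-texel test a baker runs (a sketch of the standard Moller-Trumbore ray/triangle intersection, written in HLSL to match the thread; real bakers run this on the CPU or in a compute shader against an acceleration structure, not brute force over every triangle):

```hlsl
// Ray/triangle intersection (Moller-Trumbore). For each texel of the
// bake target: origin = the low-poly (or cage) surface point, dir = the
// interpolated low-poly normal; the nearest hit on the high poly is the
// point whose detail that texel should record.
bool RayTriangle(float3 origin, float3 dir,
                 float3 v0, float3 v1, float3 v2,
                 out float t, out float2 bary)
{
    t = 0.0;
    bary = float2(0.0, 0.0);

    float3 e1 = v1 - v0;
    float3 e2 = v2 - v0;
    float3 p  = cross(dir, e2);
    float det = dot(e1, p);
    if (abs(det) < 1e-8)          // ray parallel to the triangle plane
        return false;

    float invDet = 1.0 / det;
    float3 s = origin - v0;
    float u = dot(s, p) * invDet;
    if (u < 0.0 || u > 1.0)
        return false;

    float3 q = cross(s, e1);
    float v = dot(dir, q) * invDet;
    if (v < 0.0 || u + v > 1.0)
        return false;

    t = dot(e2, q) * invDet;      // signed distance along the ray
    bary = float2(u, v);          // for interpolating the high-poly normal
    return t > 0.0;
}
```

Bakers typically search in both ray directions and keep the nearest hit within some maximum distance, which is exactly what the cage controls; the barycentric coordinates of the hit are then used to interpolate the high-poly normal that gets written to the map.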


Thanks for your responses; I think I understand better now.

 

It sounds like the approach itself should be acceptable (it's nice to know that some awful artifact isn't going to jump out at me) and that the real issue is matching UV coordinates, i.e. finding corresponding points between the two meshes.  There are certainly situations where this is easy via UV correspondence: if your high poly is just a subdivided low poly, it seems like it would be trivial, and what I was doing with planes was trivial.  But I can see now how there are situations where it wouldn't be.


Yes, if the high poly is made by subdividing the low poly, then it's not too difficult to bake yourself, and you can build a workflow that ensures that. Some tools, however, allow dynamic subdivision while sculpting (see something like Sculptris, or Blender's dynamic topology). This allows more mesh detail in the areas that need it, but it can break the relationship between the high poly and the low poly. If your workflow includes this, then the math becomes more difficult.

 

Additionally, I feel like you overstate the difficulty of using a 3D tool to bake normal maps. Tutorial videos make it seem harder than it really is. Once you understand the process, it can be very quick; the actual baking setup and bake can take mere minutes.

 

Even for tiling textures, I now prefer Blender to my own older hand-tooled processes.

 

Having access to all the other features of a general-purpose 3D package makes all the difference.


Thanks, that's easy to believe, and useful to know.  I actually made my own shader to bake lighting, materials, etc. to textures in-engine before I discovered that there was really almost no difficulty involved in Blender baking, so I can imagine the same is true of normals.  (It wasn't bad HLSL practice, though; it was a decent way to get more comfortable with the concepts, and not something I regret doing.)  So much of this just comes down to finding the time to learn things: there's so much to learn, and it's difficult to know beforehand what's going to be hard and what's going to be easy.


