Hi, all.
I'm working on large landmass generation. Right now I'm using pregenerated normal maps, but this is getting costly, and since the goal is good quality at any zoom level, a procedural approach seems best. Assume the mesh is already generated and I need to render it using predefined colors and a normal map. I was thinking of writing a GLSL function that returns a smooth normal from UV coordinates, much like GLSL noise implementations work.

What I've "invented" so far is generating a height map from noise and then converting it to a normal map. However, all the implementations of this conversion I've found rely on sampling a heightmap texture's neighbouring pixels to compute each normal. That's problematic in GLSL, since noise doesn't really have "pixels": it's generated on the fly, per fragment, from the UV coordinates.
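A rough sketch of what I'm imagining is below. `heightAt` is a placeholder for whatever noise I end up using, and I'm guessing `fwidth()` could give the per-fragment UV step, but I'm not sure that's the right tool or what it costs:

```glsl
// Placeholder: any smooth 2D noise would go here (the 8.0 frequency is arbitrary).
float heightAt(vec2 uv) {
    return noise(uv * 8.0);
}

vec3 normalFromHeight(vec2 uv) {
    // fwidth() gives the UV footprint of one fragment, i.e. the "pixel size" in UV space.
    vec2 eps = fwidth(uv);
    float hL = heightAt(uv - vec2(eps.x, 0.0));
    float hR = heightAt(uv + vec2(eps.x, 0.0));
    float hD = heightAt(uv - vec2(0.0, eps.y));
    float hU = heightAt(uv + vec2(0.0, eps.y));
    // Central differences approximate the height gradient;
    // bumpScale would control how pronounced the relief looks.
    vec2 grad = vec2((hR - hL) / (2.0 * eps.x), (hU - hD) / (2.0 * eps.y));
    float bumpScale = 1.0;
    return normalize(vec3(-grad * bumpScale, 1.0));
}
```

That's four extra noise evaluations per fragment on top of whatever the shading already does, which is part of why I'm asking about performance.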
So how would you approach this? I think it boils down to knowing the pixel size in UV coordinates, so I can compute the neighbouring fragments' UV coordinates and derive a normal per fragment. Is that realistic? Is it problematic performance-wise?
Thanks,
Denis.