Convert Normal Map to grayscale Bump Map?

Started by
42 comments, last by s ludwig 19 years, 2 months ago
Yep, that's what I need to do. No rendering issues, no OpenGL or third-party libs, APIs, or DLLs. This is to be done programmatically, preferably in C++. I just need to convert an RGB image representing a normal map (in this case, an embossed style, with R and G representing the vectors and B unused) into a grayscale image representing a bump map.

Now I have been told, for better or worse, to 'perform an integration over U and V' to get back as much of a bump map as can be had from a normal map. To someone who only basically comprehends calculus (and has never once had to use it in twenty years' programming), that is information I cannot use. I want examples, source, pseudo-source, references: anything but a vague reference to using integral calculus. And I definitely don't want to read a paper full of partial derivatives, lemmas, and integrations. There are no 'calculus' libraries in the standard C++ library.

Also, this is not about putting the map onto a 3D object; it's strictly a 2D-to-2D conversion. I have found nothing pertinent online and certainly have no books relative to the topic, although my library contains a hefty set of 3D computer graphics programming books (the tomes).

Thank you for any information leading to a means to perform this task!

Robert
Quote:Original post by Kuroyume0161
Yep, that's what I need to do. No rendering issues, no OpenGL or third-party libs, APIs, or DLLs. This is to be done programmatically, preferably in C++. I just need to convert an RGB image representing a normal map (in this case, an embossed style, with R and G representing the vectors and B unused) into a grayscale image representing a bump map.

Now I have been told, for better or worse, to 'perform an integration over U and V' to get back as much of a bump map as can be had from a normal map. To someone who only basically comprehends calculus (and has never once had to use it in twenty years' programming), that is information I cannot use. I want examples, source, pseudo-source, references: anything but a vague reference to using integral calculus. And I definitely don't want to read a paper full of partial derivatives, lemmas, and integrations. There are no 'calculus' libraries in the standard C++ library.

Robert


I know, but I won't tell; you are too demanding!
Not exactly what you want, but there's an interesting tool that does this on this page:
http://www.zarria.net/ (Displacement Map Creator).

Maybe you can try to contact the author to ask him how he did it.

Y.
What I think should give quite decent results:

Loop through each row in your normal map, summing the x-component into a heightmap.
Loop through each column in your normal map, summing the y-component into a heightmap.

These two outputs should be similar; average them to get your final output. This output can contain values < 0, and its range isn't bounded either. However, enforcing that is trivial.

See, integration isn't that hard. A higher-order integration scheme might yield better results, as would calculating the actual slope instead of just using the vector component, but give it a try. It's five minutes to code, and I don't think it will be bad at all.
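As a minimal C++ sketch of those two passes (hypothetical code; it assumes the normal map has already been decoded into signed slope components `nx`, `ny`, stored row-major as floats):

```cpp
#include <cstddef>
#include <vector>

// Integrate the x-components along each row, the y-components down each
// column, and average the two height estimates. The result is unbounded
// and may be negative; rescale afterwards as noted above.
std::vector<float> integrateNormals(const std::vector<float>& nx,
                                    const std::vector<float>& ny,
                                    std::size_t w, std::size_t h)
{
    std::vector<float> hx(w * h, 0.0f), hy(w * h, 0.0f);

    for (std::size_t y = 0; y < h; ++y)      // running sum along each row
        for (std::size_t x = 1; x < w; ++x)
            hx[y * w + x] = hx[y * w + x - 1] + nx[y * w + x];

    for (std::size_t x = 0; x < w; ++x)      // running sum down each column
        for (std::size_t y = 1; y < h; ++y)
            hy[y * w + x] = hy[(y - 1) * w + x] + ny[y * w + x];

    std::vector<float> height(w * h);
    for (std::size_t i = 0; i < w * h; ++i)  // average the two estimates
        height[i] = 0.5f * (hx[i] + hy[i]);
    return height;
}
```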
Prior to asking this question, I did do a 'google' search of this forum, but without results. A more cumbersome page-by-page search found a couple of similar threads (at approximately page 35).

My problem (and for this type of thing, I guess it is) is that 'no college' equals no Computer Science major and no Calculus. All of my programming and math beyond high school is self-taught. What I know of calculus is basic. I understand basic integration and differentiation. But that's about it. As soon as people mention double integrals, I glaze over. Sorry to say, I'm no David Eberly. ;)

I do have MathCad Pro at my disposal (an older version which I just dusted off for just this occasion). This could be used to load in the normal map image, apply whatever maths, and check the results.

So, what do you recommend for an integral equation? Is this simply determining the area under a 'slice', which we'll call a pixel?

I do appreciate that anybody takes the time to explain this to me 'like I'm a five year old', because when it comes to this math, that's how I feel...

And, Anonymous Poster, why did you even bother posting? Not everybody is/was a Math major. As a matter of fact, I was an Art major. Yet I still managed to graduate in the top 5% of my 900-student class. Think about that.

Robert
Quote:Original post by Kuroyume0161
I do appreciate that anybody takes the time to explain this to me 'like I'm a five year old', because when it comes to this math, that's how I feel...

Could you be more specific about what you don't understand about the approach I suggested? Or did you try it and it didn't work as desired?

Oh shit, I just realized it's flawed. Easy to solve, though.
Attempt 2:

startheight = 0
for ix = each pixel in row[0]
    startheight = startheight + vectorcomponentx[ix, 0]
    height[ix, 0] = startheight
    for iy = each pixel in column[ix]
        height[ix, iy+1] = height[ix, iy] + vectorcomponenty[ix, iy]

Optionally do the same starting from column[0], which will produce a similar result, and average to minimize errors. Then, again, scale to the desired range.

I'm quite sure this will work.
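In C++, that pseudocode might look like this (a sketch; `sx`, `sy` are assumed to already be the decoded, signed slope components, stored row-major):

```cpp
#include <cstddef>
#include <vector>

// Walk along the top row accumulating the x-slope to anchor each
// column's starting height, then walk down each column accumulating
// the y-slope.
std::vector<float> heightFromSlopes(const std::vector<float>& sx,
                                    const std::vector<float>& sy,
                                    std::size_t w, std::size_t h)
{
    std::vector<float> height(w * h, 0.0f);
    float start = 0.0f;
    for (std::size_t x = 0; x < w; ++x) {
        start += sx[x];                  // integrate along row 0
        height[x] = start;
        for (std::size_t y = 0; y + 1 < h; ++y)
            height[(y + 1) * w + x] =    // integrate down column x
                height[y * w + x] + sy[y * w + x];
    }
    return height;
}
```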
I tried something similar, but the results were horrid. Instead of using vectors, I just used the values on the Red or Green plane of the bitmap (128=median) separately to determine changes in value as changes in height. So, if the Red value at some (x,y) was greater than the previous, I added one to that in the heightmap. If the Red value was less than the previous, I subtracted one from it in the heightmap. And so on.

The problem with the approach, among others, is that you must be absolutely sure about which basis direction the lighting is considered from (e.g.: Red = 0, Green = -90) on the UV plane. The other problem is the unbounded nature of the approach.

It would be more general to be able to do it irrespective of the lighting bases.

Will test your pseudo-code and see what happens.

Thanks, Eelco!

Robert
Quote:Original post by Kuroyume0161
I tried something similar, but the results were horrid. Instead of using vectors, I just used the values on the Red or Green plane of the bitmap (128=median) separately to determine changes in value as changes in height.

Wrong: it's not the change in value that is the change in height; the value itself is the change in height.

Quote:
So, if the Red value at some (x,y) was greater than the previous, I added one to that in the heightmap. If the Red value was less than the previous, I subtracted one from it in the heightmap. And so on.

Why subtract one? The vector components, or color channels, however you choose to interpret them, directly correspond to the change in height at that location.

Quote:
The problem with the approach, among others, is that you must be absolutely sure about which basis direction the lighting is considered from (e.g.: Red = 0, Green = -90) on the UV plane.
??
Quote:
The other problem is the unbounded nature of the approach.

There exists no bounded solution to this problem; all ranges are valid solutions. Derivatives (the normals) contain only relative information by nature.
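Concretely, 'enforcing' a range just means rescaling whatever comes out of the integration. A sketch of a hypothetical helper that maps an arbitrary float heightmap onto 8-bit grayscale:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Map an unbounded float heightmap onto [0, 255] for a grayscale image.
std::vector<std::uint8_t> toGrayscale(const std::vector<float>& height)
{
    auto [lo, hi] = std::minmax_element(height.begin(), height.end());
    float range = (*hi > *lo) ? (*hi - *lo) : 1.0f;  // avoid divide-by-zero on flat maps

    std::vector<std::uint8_t> out(height.size());
    for (std::size_t i = 0; i < height.size(); ++i)
        out[i] = static_cast<std::uint8_t>(
            255.0f * (height[i] - *lo) / range + 0.5f);  // round to nearest
    return out;
}
```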

Quote:
Would be more generalized to be able to do it irrespective of the lighting bases.

I really don't get what you mean by a lighting basis, but I can assure you lights have nothing to do with the solution of this problem.

Quote:
Will test your pseudo-code and see what happens.

Thanks, Eelco!

Robert

I'm curious to know how it will perform.

Note that this is a first-order integration scheme, so it will only give accurate results when the input data is sufficiently cooperative, i.e., continuous and conservative (curl-free). Also, the slope is only linear in the vector components for small values.

Improvements would be higher-order integration schemes that are more accurate on small features. Also, it's not that hard to calculate the slope more accurately. The true slope would be calculated like this, if I'm not mistaken (assuming red and green are unsigned bytes):

float vx = ((float)red)   / 127.5f - 1.0f;
float vy = ((float)green) / 127.5f - 1.0f;
float vz = sqrt(1.0f - vx*vx - vy*vy);  // vx*vx + vy*vy + vz*vz = 1
slopex = vx / vz;
slopey = vy / vz;


Higher-order integration is a little more complex. Let's first see how this works out; if it's not good enough, we can give it a try.
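For completeness, the slope calculation wrapped into a function (a sketch; the epsilon clamp is an addition to keep near-horizontal normals from dividing by zero):

```cpp
#include <algorithm>
#include <cmath>

// Decode unsigned-byte red/green channels into surface slopes.
// Since vx*vx + vy*vy + vz*vz = 1, vz = sqrt(1 - vx*vx - vy*vy).
void channelsToSlopes(unsigned char red, unsigned char green,
                      float& slopex, float& slopey)
{
    float vx = red   / 127.5f - 1.0f;
    float vy = green / 127.5f - 1.0f;
    float vz = std::sqrt(std::max(1.0f - vx * vx - vy * vy, 1e-6f));
    slopex = vx / vz;
    slopey = vy / vz;
}
```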
No, I wasn't just saying that, for instance, values of Red larger than the median were higher and values lower than the median are lower. No. No. NO!

I was using the changes to determine if the surface was increasing or decreasing in height dependent on the light source (this makes a difference on whether brighter colors are increasing or decreasing in height - sloping upwards or downwards, relatively). Yes, there is a light source. Methods used to create Normal maps from geometry use... LIGHTS!

Normal Maps 1
Normal Maps 2

Heck, even one of the links in an older thread points to a paper (PDF) on taking a photograph and using it to determine the surface normals in order to extract height information (using the LIGHTing to determine the surface normals).

I agree that the image represents the normal of the surface at each pixel, but the analogy to three mutually perpendicular light sources is a direct and valid one. It may or may not play any role in the solution, but it has helped me understand what Normal maps represent. There is no understanding without representation (symbolically, analogously, or whatever).

I will try the code! :)

Robert
Quote:Original post by Kuroyume0161
No, I wasn't just saying that, for instance, values of Red larger than the median were higher and values lower than the median are lower. No. No. NO!

Lower than what? I really can't follow you.

Quote:
I was using the changes to determine if the surface was increasing or decreasing in height dependent on the light source (this makes a difference on whether brighter colors are increasing or decreasing in height - sloping upwards or downwards, relatively). Yes, there is a light source. Methods used to create Normal maps from geometry use... LIGHTS!

I don't know what methods you refer to, but converting a heightmap to a normal map has absolutely nothing to do with lights, and unsurprisingly, neither has the inverse operation.

Quote:
Heck, even one of the links in an older thread points to a paper (PDF) on taking a photograph and using it to determine the surface normals in order to extract height information (using the LIGHTing to determine the surface normals).

Yeah, lighting has to be taken into account to calculate normals from a photograph. However, I thought you were trying to create a heightmap from a normal map, no? These are two separate procedures.

Quote:
I agree that the image represents the normal of the surface at each pixel, but the analogy to three mutually perpendicular light sources is a direct and valid one.

Ehm, not that I'm aware of.
Quote:
It may or may not play any role in the solution, but it has helped me understand what Normal maps represent. There is no understanding without representation (symbolically, analogously, or whatever).

The only thing you need to visualize to understand normals is that they are the vectors perpendicular to the tangent plane.
Quote:
I will try the code! :)

Robert

ok.

You seem confused as to what you really want, though. I advise you to get that straight before you start typing. You are neither infinite monkeys, nor do you have infinite time.

This topic is closed to new replies.
