PfhorSlayer

OpenGL Normalization via 2D texture lookup...


I'm currently working on the iPhone, doing some graphics work for the in-house engine we're developing. At the moment, I'm implementing proper Dot-3 normal mapping - but I've come up against an issue that I haven't been able to solve yet.

The drivers for OpenGL ES on the iPhone do not support cube mapping or pixel shaders of any kind, and while I am calculating a normalized vertex position to light position vector for each vertex of a lit mesh, these vectors become unnormalized as they are linearly interpolated across the triangle, making them less than unit length and resulting in much darker lighting than one would expect. (Example)

Considering that the iPhone has only two texture units, and supports only 2D textures, I am trying to figure out a way to create a mapping from an unnormalized 3D vector (the interpolated V->L vector) to a 2D texture coordinate that will give me a correct, normalized representation of the same vector. The technique must be linearly interpolatable, of course!

Does anyone have any ideas about how this might be achieved?
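For anyone wondering how bad the shrinkage actually gets, here's a minimal sketch of the problem (plain Python, just the math - the vectors and the 90-degree spread are made-up numbers for illustration):

```python
import math

def lerp(a, b, t):
    # component-wise linear interpolation, as the rasterizer does it
    return [ai + t * (bi - ai) for ai, bi in zip(a, b)]

def length(v):
    return math.sqrt(sum(c * c for c in v))

# Two unit-length vertex-to-light vectors, 90 degrees apart.
v0 = [1.0, 0.0, 0.0]
v1 = [0.0, 1.0, 0.0]

# The interpolated value halfway across the edge is well short of unit length.
mid = lerp(v0, v1, 0.5)
print(length(mid))   # ~0.707, so the dot product (and the lighting) is ~30% too dark
```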

You could:

-tessellate the mesh when it's up close; that's easy to do and not that slow if you take the triangle size into account (I did that on the PSP, where it worked quite well, but for other reasons)
-make an extra pass where you calculate dot(L,L) and use it as a lookup into a 1D texture that holds the inverted light intensity. In the bumpmap pass you then have to multiply the output with the background (with the proper blend settings).

I don't have the iPhone specs in my head at the moment, so sorry if the second suggestion doesn't work (I'm not sure a texture lookup based on the result of the dot is possible).
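For what it's worth, the win from tessellating is easy to quantify: the halfway lerp between two unit vectors theta radians apart has length cos(theta/2), so every subdivision that halves an edge's angular span shrinks the shortfall a lot. A quick sketch (the 90-degree span is a made-up worst-ish case for illustration):

```python
import math

def midpoint_length(theta):
    # length of the halfway lerp between two unit vectors theta radians apart:
    # |(a + b) / 2| = cos(theta / 2)
    return math.cos(theta / 2.0)

# One coarse edge spanning 90 degrees of V->L direction:
print(1.0 - midpoint_length(math.radians(90)))   # ~0.293 shortfall at the midpoint
# Tessellated once, each edge spans only 45 degrees:
print(1.0 - midpoint_length(math.radians(45)))   # ~0.076 shortfall
```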

Yeah, I'm aware that tessellating the mesh would make the issue less noticeable, but there are likely to be cases where tessellating further would push the renderer over its triangle limits. I'd like to have a solution that works regardless of mesh tessellation.

And as far as I can tell, there's no way to use the results of a previous pass as a texture coordinate, unfortunately...

You could also perform one or two steps of Newton's method.
That is, you try to solve the equation f(t) = dot(t*n, t*n) - 1 = 0, where t is the scaling factor that normalizes the vector n.
Newton's method linearizes this equation: f(t) ~ f(t0) + f'(t0) * (t - t0). t0 is some starting value, in your case probably t0 = 1.
The derivative is f'(t) = 2t*dot(n,n).
With this, you have
f(t0) + f'(t0) * (t - t0) = t0*t0*dot(n,n) - 1 + 2*t0*dot(n,n) * (t - t0) = 0
--> t = (1 + t0*t0*dot(n,n)) / (2*t0*dot(n,n))
If you plug in t0 = 1, this simplifies to
t = (1 + dot(n,n)) / (2 * dot(n,n))
Compute this t and multiply n by it. Check whether the new "normalized" n gives better results.

Hmmm, can you do that on the iPhone at all? I guess not?
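The step above is easy to sanity-check numerically; a minimal sketch (the input vector is a made-up example with squared length 0.5, the worst case from the earlier lerp):

```python
import math

def newton_rescale(n):
    # One Newton step toward t = 1/|n|, starting from t0 = 1:
    # t = (1 + dot(n, n)) / (2 * dot(n, n))
    d = sum(c * c for c in n)
    t = (1.0 + d) / (2.0 * d)
    return [t * c for c in n]

n = [0.5, 0.5, 0.0]                              # length sqrt(0.5) ~ 0.707
better = newton_rescale(n)
print(math.sqrt(sum(c * c for c in better)))     # ~1.061 - much closer to unit length
```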

Lutz, it's ALMOST implementable, with a couple passes... The only issue is that there's no way to calculate 1 / (2 * dot(n,n)). No reciprocal functionality with texture combiners that I can see. The rest of it is possible... but kind of useless without the quotient! :(

Make a unique normal rescale texture per object, which is used as a scalar for the normals. (Compensates for the shrinking normals.)

/Tyrian

Ps. The normal scale factor can be pre-baked into the Dot3 normal map if the texture is unique to the object (just leave the normals un-normalized and use a bigger value range, 0-255 => -5.0 to +5.0).

[Edited by - TyrianFin on June 9, 2009 5:55:55 AM]
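The widened value range in the PS above amounts to a simple remap; a sketch (the 0-255 => -5.0..+5.0 mapping is the one Tyrian describes, not a standard encoding):

```python
def encode(x):
    # Map a float in [-5.0, +5.0] to a 0..255 byte (Tyrian's widened range).
    return int(round((x + 5.0) / 10.0 * 255.0))

def decode(b):
    # Map the byte back to [-5.0, +5.0].
    return b / 255.0 * 10.0 - 5.0

# A deliberately un-normalized (pre-scaled) normal component still round-trips,
# within one quantization step of 10/255:
print(decode(encode(1.5)))   # ~1.51
```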

Probably the cheapest way is to use an object-space normal map as TyrianFin suggests. If you can't use that because it's too much texture data or your normal maps repeat over the geometry, you can modify the normalization-cubemap approach:

That approach uses a cube texture to set n = texCube(normalizationSampler, n),
where the normalization texture effectively computes n / sqrt(dot(n,n)).

So instead of a cube texture, you can use a 1D texture and fill it with values
t(x) = 1 / sqrt(x) for x = 0..1
(in your case, dot(n,n) will always be <= 1 because n is a lerp of two normalized vectors). The pixel shader will then compute
n = n * tex1d(normalizationSampler, float2(dot(n,n), 0)).x.
I hope this is implementable on your hardware!
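The 1D table can be prototyped on the CPU before committing it to a texture; a minimal sketch (256 texels is an assumed size, and texel 0 is clamped since 1/sqrt(0) blows up):

```python
import math

SIZE = 256

# Texel i holds 1/sqrt(x) for x = dot(n, n) in (0, 1].
# Texel 0 is clamped to the smallest representable x to avoid division by zero.
table = []
for i in range(SIZE):
    x = max(i / (SIZE - 1), 1.0 / (SIZE - 1))
    table.append(1.0 / math.sqrt(x))

def renormalize(n):
    # What the texture lookup would do: scale n by table[dot(n, n)].
    d = sum(c * c for c in n)                     # assumed <= 1 (lerp of unit vectors)
    t = table[min(SIZE - 1, int(d * (SIZE - 1)))]
    return [t * c for c in n]

n = [0.5, 0.5, 0.0]                               # length ~0.707 after interpolation
print(math.sqrt(sum(c * c for c in renormalize(n))))   # ~1.0
```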

- Lutz
