Texture Space Bump Mapping

4 comments, last by JonnyQuest 21 years, 7 months ago
Yeah, yeah, it's me again, back with another annoying n00b question. Bump mapping is really cool, we all know that. I've been experimenting with it a bit (while playing with pixel shaders) and have found that the coordinate space problem is indeed a very real one. The solution is apparently texture space bump mapping, which makes sense, but I'm not 100% sure about the implementation. I obviously have to transform the light vector into texture space.

Is this correct: texture space is a coordinate system defined per triangle by the directions in which the texture u and v coordinates increase across the surface, plus the normal vector? If so, must I split the mesh up into separate vertices for every single triangle? I mean, texture space is different for each polygon, right? As far as I've understood, I'm storing the light vector on a per-vertex basis, which also sounds pretty sensible. However, all of this seems to imply that I can't re-use (i.e. index) vertices anymore, because the texture space light vector is different for each triangle. There must be something I'm missing here...

(By the way, I'm not having trouble with the math or anything; I've derived all the matrix transformations and so on. It's purely an implementation problem.)

If anyone could shine some light (no pun intended) on this issue, I'd be very grateful.

- JQ
Full Speed Games. Coming soon.
~phil
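
For concreteness, here is a minimal sketch of the kind of per-triangle texture-space basis being asked about, in plain C++ with no D3D types; every name in it (Vec3, TangentBasis, triangleBasis, toTextureSpace) is made up for illustration rather than taken from any particular engine or API:

// Minimal sketch: derive a per-triangle texture-space (tangent-space) basis
// from positions and UVs, then rotate the light vector into that space.
// All types and names here are illustrative, not from any particular engine.
#include <cmath>

struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };

static Vec3  sub(Vec3 a, Vec3 b)    { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3  scale(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }
static float dot(Vec3 a, Vec3 b)    { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  cross(Vec3 a, Vec3 b)  { return { a.y * b.z - a.z * b.y,
                                               a.z * b.x - a.x * b.z,
                                               a.x * b.y - a.y * b.x }; }
static Vec3  normalize(Vec3 a)      { float l = std::sqrt(dot(a, a));
                                      return l > 0.0f ? scale(a, 1.0f / l) : a; }

struct TangentBasis { Vec3 t, b, n; };   // u direction, v direction, normal

// Solve  e1 = du1*T + dv1*B  and  e2 = du2*T + dv2*B  for T and B.
TangentBasis triangleBasis(Vec3 p0, Vec3 p1, Vec3 p2,
                           Vec2 uv0, Vec2 uv1, Vec2 uv2)
{
    Vec3  e1  = sub(p1, p0),    e2  = sub(p2, p0);
    float du1 = uv1.u - uv0.u,  dv1 = uv1.v - uv0.v;
    float du2 = uv2.u - uv0.u,  dv2 = uv2.v - uv0.v;

    float det = du1 * dv2 - du2 * dv1;      // zero => degenerate UV mapping
    float r   = (det != 0.0f) ? 1.0f / det : 0.0f;

    TangentBasis tb;
    tb.t = normalize(scale(sub(scale(e1, dv2), scale(e2, dv1)), r));
    tb.b = normalize(scale(sub(scale(e2, du1), scale(e1, du2)), r));
    tb.n = normalize(cross(e1, e2));
    return tb;
}

// World-space light vector -> this triangle's texture space; the result is
// what gets stored per vertex and fed to the dot3 pixel shader.
Vec3 toTextureSpace(const TangentBasis& tb, Vec3 lightWorld)
{
    return { dot(lightWorld, tb.t), dot(lightWorld, tb.b), dot(lightWorld, tb.n) };
}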
Some light? I also posted a more complete one in this forum a few months back.

--
Simon O''Connor
Creative Asylum Ltd
www.creative-asylum.com


Thanks for that. So it seems I was basically heading in the right direction.
(I'd have searched, but the search feature is down.)
This of course raises some issues for optimizing vertex submission, since you can't make use of the graphics card's vertex cache most of the time. I guess I'll have to come up with some sort of approximation scheme: if the difference between the coordinate systems of two polys sharing a vertex is below a certain threshold, average them and share the vertex.
I guess it's back to the drawing board then.
Thanks!

- JQ
Full Speed Games. Coming soon.
~phil
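
A rough sketch of that threshold idea at mesh-load time, reusing the hypothetical Vec3/Vec2/TangentBasis helpers from the earlier sketch (a real tool would hash vertices rather than scan linearly, and the tolerances here are arbitrary):

// Sketch of the threshold idea: two faces may share a vertex only if their
// texture-space bases are close enough, otherwise the vertex gets duplicated.
// Reuses the helpers from the sketch after the first post.
#include <cmath>
#include <vector>

struct Vertex {
    Vec3         position;
    Vec2         uv;
    TangentBasis basis;   // averaged basis of all faces welded onto this vertex
};

// True if the angle between the two bases' axes is below the threshold
// (cosThreshold is the cosine of the maximum allowed angle).
bool basesCompatible(const TangentBasis& a, const TangentBasis& b, float cosThreshold)
{
    return dot(a.t, b.t) > cosThreshold &&
           dot(a.b, b.b) > cosThreshold &&
           dot(a.n, b.n) > cosThreshold;
}

// Try to reuse an existing vertex for this corner of a face; if none is
// compatible, append a new one. Returns the index to put in the index buffer.
unsigned weldVertex(std::vector<Vertex>& verts, const Vertex& corner, float cosThreshold)
{
    for (unsigned i = 0; i < verts.size(); ++i) {
        Vertex& v = verts[i];
        Vec3 d = sub(v.position, corner.position);
        bool samePos = dot(d, d) < 1e-10f;
        bool sameUV  = std::fabs(v.uv.u - corner.uv.u) < 1e-6f &&
                       std::fabs(v.uv.v - corner.uv.v) < 1e-6f;
        if (samePos && sameUV && basesCompatible(v.basis, corner.basis, cosThreshold)) {
            // Blend the stored basis with the new face's basis and renormalize.
            v.basis.t = normalize({ v.basis.t.x + corner.basis.t.x,
                                    v.basis.t.y + corner.basis.t.y,
                                    v.basis.t.z + corner.basis.t.z });
            v.basis.b = normalize({ v.basis.b.x + corner.basis.b.x,
                                    v.basis.b.y + corner.basis.b.y,
                                    v.basis.b.z + corner.basis.b.z });
            v.basis.n = normalize({ v.basis.n.x + corner.basis.n.x,
                                    v.basis.n.y + corner.basis.n.y,
                                    v.basis.n.z + corner.basis.n.z });
            return i;
        }
    }
    verts.push_back(corner);
    return static_cast<unsigned>(verts.size() - 1);
}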
I'd definitely advise coding it up and trying averaging for every shared vertex before worrying too much. While it does cause artifacts on some shared edges, it isn't too bad - on most models an uninformed viewer wouldn't know it was happening!

There are so many parts of real "correct" lighting which aren't being modelled by dot3 per-pixel lighting that it's hardly worth worrying about - the extra vertex cost is a greater worry than "perfect" lighting IMO.

You get stretching on many models anyway due to non-perfect 1:1 texture mapping of the normal map by the artists (e.g. consider mapping a square normal map onto a sphere...)

One other gotcha to be aware of is that you can't mirror textures when sharing vertices:

    A
   / \
  /   \
 D-----B
  \   /
   \ /
    C


Consider these mapping coords:

A = 0.5, 1.0
B = 1.0, 0.0
C = 0.5, 1.0
D = 0.0, 0.0

Now work out the texture space base vectors for both those polygons. Yup - the vector for the V direction points in opposite directions for each poly, so the averaged result is totally screwed! That's one of the places you definitely need to break polys or re-generate the normal map.
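
Working that through with the hypothetical triangleBasis() from the sketch after the first post, assuming the positions form a flat diamond in the XY plane and using the A/B/C/D mapping coords above, shows the flipped V direction numerically:

// Worked check of the mirroring gotcha, using the hypothetical helpers from
// the sketch after the first post. Geometry is assumed to be a flat diamond
// in the XY plane; the UVs are the values quoted above.
#include <cstdio>

int main()
{
    Vec3 A = { 0.5f,  1.0f, 0.0f }, B = { 1.0f, 0.0f, 0.0f };
    Vec3 C = { 0.5f, -1.0f, 0.0f }, D = { 0.0f, 0.0f, 0.0f };

    // Mirrored mapping: A and C land on the same texel row.
    Vec2 uvA = { 0.5f, 1.0f }, uvB = { 1.0f, 0.0f };
    Vec2 uvC = { 0.5f, 1.0f }, uvD = { 0.0f, 0.0f };

    TangentBasis top    = triangleBasis(A, D, B, uvA, uvD, uvB);   // triangle ADB
    TangentBasis bottom = triangleBasis(D, C, B, uvD, uvC, uvB);   // triangle DCB

    // The V-direction basis vectors come out as (0,1,0) and (0,-1,0):
    // averaging them at D and B would cancel to (almost) nothing.
    std::printf("top    v-dir = (%g %g %g)\n", top.b.x,    top.b.y,    top.b.z);
    std::printf("bottom v-dir = (%g %g %g)\n", bottom.b.x, bottom.b.y, bottom.b.z);
    std::printf("dot          = %g\n", dot(top.b, bottom.b));       // -> -1

    // A cheap automated flag for this: the handedness sign dot(cross(n,t), b)
    // differs between the two faces, so don't weld vertices across them.
    std::printf("top handedness    = %g\n", dot(cross(top.n,    top.t),    top.b));
    std::printf("bottom handedness = %g\n", dot(cross(bottom.n, bottom.t), bottom.b));
    return 0;
}

That handedness sign is also a convenient per-face flag to feed into the averaging threshold discussed above.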

--
Simon O'Connor
Creative Asylum Ltd
www.creative-asylum.com





Yep, I'm aware of that mirroring effect. I think I'll implement my "threshold" idea, as that should avoid that kind of thing happening. It only gets processed when I load the actual mesh into memory (not at runtime), and it gives me a nice performance <-> quality regulator via the threshold setting. I guess it comes down to actually coding it up then. (Dammit, I was hoping I could avoid the Reference Rasterizer; silly laptop doesn't support all that stuff.)

- JQ
Full Speed Games. Coming soon.
~phil

