
Position invariant vertex program question


derodo    122
Hi there,

I was recently taking a look at the interaction.vfp shader that comes with Doom 3 (you know what I mean, don't you?) and a question arose while I was trying to understand it.

It seems like the shader doesn't compute things in eye space, but in object space. I mean, from what I know about shaders, the matrix multiplication it uses to calculate, for example, the light falloff coordinate takes an environment variable (supposed to be the S plane) and the vertex.position input. But that vertex.position is supposed to be the object-space position of the vertex, isn't it? So, if it actually is, how does it work? Since it's a position-invariant shader, the final vertex position is still generated by multiplying that vertex.position by the modelviewproj matrix anyway, so the shader calculations and the actual position generated would be in completely different spaces... or maybe I'm missing something.

I'm just very curious about this, as I'm trying to build my own home-made engine and I thought I had all the concepts clear... but it seems I don't :)

See you soon,

zedzeek    529
Quote:
It seems like the shader doesn't compute things in eye space, but in object space. I mean, from what I know about shaders, the matrix multiplication it uses to calculate, for example, the light falloff coordinate takes an environment variable (supposed to be the S plane) and the vertex.position input.

I'm not sure what Doom 3 does, but
WRT lighting it doesn't really matter what space the normals, lighting, etc. are in, as long as they're all in the same space (orientation) when you do the calculations.
E.g. my light directions are always supplied in world space,
but some of my normal maps are in object space and some in texture space.
Thus, depending on what's cheapest, I either stick the normal map into the same space as the light or stick the light into the same space as the normal map.
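
For example, the "stick the light into texture space" case is just three dot products per vertex; a rough sketch, assuming you already have orthonormal per-vertex tangent, bitangent and normal (the struct and names here are just for illustration):

```cpp
// Bring a world-space light direction into tangent (texture) space by
// projecting it onto the vertex's tangent-space basis vectors.
// T, B, N are assumed to be the world-space tangent, bitangent and normal.
struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

Vec3 worldToTangentSpace(const Vec3& lightDirWS,
                         const Vec3& T, const Vec3& B, const Vec3& N) {
    // The basis vectors form the rows of the rotation into tangent space,
    // so the transform is just three dot products.
    return { dot(lightDirWS, T), dot(lightDirWS, B), dot(lightDirWS, N) };
}
```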

Check the NVIDIA developer site for bump mapping PDFs; there are heaps, and they explain it quite well.

derodo    122
Thanks for the tips, zed,

I know about bump mapping, tangent space and all that stuff :)

The "problem" I see in the Doom 3 shader is that it seems to compute everything using vertex.position (model space), which would mean they (id Software :) must be pre-calculating everything on the CPU before sending it to the graphics card...

I mean, in almost every 3D engine implementation it's usual to have your model coordinates in local space and use a simple GL transformation via the modelview matrix to move the model around. So if the shader uses vertex.position as an input to compute the tangent-space quantities, then both the light and the model's vertices must be in the same space, which means that if you want to move the model around you'd have to transform both of them before sending them to the rendering pipeline... but if you do that... wait a minute... I think I get it now: maybe they're just computing the light position in object space before the actual rendering is done, since the light position isn't fetched from the GL state but passed in as an environment parameter...

Yes...that makes sense to me now :)
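
So on the CPU side it would be something roughly like this; a minimal sketch assuming the inverse model matrix is at hand and the light is handed to the vertex program through glProgramEnvParameter4fvARB (variable names are mine, not id's):

```cpp
#define GL_GLEXT_PROTOTYPES 1
#include <GL/gl.h>
#include <GL/glext.h>

// Transform the light's world-space position into the model's object space
// on the CPU, then hand it to the vertex program as an environment parameter.
// invModelMatrix is the inverse of the matrix that places the model in the
// world (column-major, as OpenGL expects).
void setObjectSpaceLight(const float invModelMatrix[16],
                         const float lightPosWS[4],
                         GLuint envIndex) {
    float lightPosOS[4];
    for (int row = 0; row < 4; ++row) {
        lightPosOS[row] =
            invModelMatrix[0 * 4 + row] * lightPosWS[0] +
            invModelMatrix[1 * 4 + row] * lightPosWS[1] +
            invModelMatrix[2 * 4 + row] * lightPosWS[2] +
            invModelMatrix[3 * 4 + row] * lightPosWS[3];
    }
    // The vertex program can now work with vertex.position directly,
    // because the light is expressed in the same (object) space.
    glProgramEnvParameter4fvARB(GL_VERTEX_PROGRAM_ARB, envIndex, lightPosOS);
}
```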

Thanks for the hint, zed; nothing like thinking the problem through to solve it :P

zedzeek    529
Quote:
The "problem" I see in the Doom 3 shader is that it seems to compute everything using vertex.position (model space), which would mean they (id Software :) must be pre-calculating everything on the CPU before sending it to the graphics card...

Yes, this is what I do as well, I believe for everything (and my vertex counts are a lot higher than Doom 3's).
It's a good method, especially if you're rendering the same meshes multiple times a frame, e.g. with shadows.
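
The point being that the transform happens once per change rather than once per pass; roughly this kind of caching (structure and names are made up, not from any particular engine):

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// Cache the transformed vertices of a mesh instance and only rebuild them
// when the object actually changes; every pass that frame (ambient, each
// light interaction, shadow volume extraction) reuses the same array.
struct MeshInstance {
    std::vector<Vec3> localVerts;   // source data
    std::vector<Vec3> worldVerts;   // cached, rebuilt only when dirty
    bool dirty = true;              // caller sets this when the object moves

    const std::vector<Vec3>& transformed(const float modelMatrix[16]) {
        if (dirty) {
            worldVerts.resize(localVerts.size());
            for (size_t i = 0; i < localVerts.size(); ++i)
                worldVerts[i] = transformPoint(modelMatrix, localVerts[i]);
            dirty = false;
        }
        return worldVerts;
    }

    // Column-major 4x4 matrix times a point (w = 1).
    static Vec3 transformPoint(const float m[16], const Vec3& p) {
        return { m[0]*p.x + m[4]*p.y + m[8]*p.z  + m[12],
                 m[1]*p.x + m[5]*p.y + m[9]*p.z  + m[13],
                 m[2]*p.x + m[6]*p.y + m[10]*p.z + m[14] };
    }
};
```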

Trienco    2555
Quote:
Original post by derodo
I mean, in almost every 3D engine implementation it's usual to have your model coordinates in local space and use a simple GL transformation via the modelview matrix to move the model around...


True, but it seems the md5 format allows a vertex to be influenced by a whole lot of bones. If they are already doing that in software (because the hardware can't even store all the bones, and it would be wasteful either to write one shader using a huge number of bones "just in case" or to have a different shader for every bone count while sending just a handful of vertices), then it's possible they have to pretransform their vertices in software anyway, so placing and rotating the model in the scene is just one more bone in the hierarchy. Plus, it means vertices are only transformed when something changes, not every frame they're drawn.

Also, if the stencil shadows are done in software, the CPU needs the transformed vertices anyway, so basically doing the transformation AGAIN in hardware would be rather pointless. The only downside would be the regular updates and sending the geometry to the card, but seeing how Doom uses gorram ugly low-res meshes and relies so heavily on normal mapping... or rather: guess WHY it has to stick with low-res models ;-)
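
For reference, this is roughly what md5-style software skinning boils down to: each vertex sums contributions from several joints, with each weight's position stored in its joint's space. A sketch with made-up structure names (not id's actual code), assuming each joint's matrix already combines its orientation and position:

```cpp
#include <array>
#include <vector>

struct Vec3 { float x, y, z; };

struct Weight {
    int   joint;   // index into the current joint matrices
    float bias;    // all biases for one vertex sum to 1
    Vec3  pos;     // position of this weight in the joint's space
};

struct SkinnedVertex {
    int firstWeight;
    int numWeights;
};

// Column-major 4x4 matrix times a point (w = 1).
static Vec3 transformPoint(const float m[16], const Vec3& p) {
    return { m[0]*p.x + m[4]*p.y + m[8]*p.z  + m[12],
             m[1]*p.x + m[5]*p.y + m[9]*p.z  + m[13],
             m[2]*p.x + m[6]*p.y + m[10]*p.z + m[14] };
}

// Rebuild the object-space positions of all vertices from the current pose.
void skinMesh(const std::vector<SkinnedVertex>& verts,
              const std::vector<Weight>& weights,
              const std::vector<std::array<float, 16>>& jointMatrices,
              std::vector<Vec3>& outPositions) {
    outPositions.assign(verts.size(), Vec3{0.0f, 0.0f, 0.0f});
    for (size_t i = 0; i < verts.size(); ++i) {
        for (int w = 0; w < verts[i].numWeights; ++w) {
            const Weight& wt = weights[verts[i].firstWeight + w];
            Vec3 p = transformPoint(jointMatrices[wt.joint].data(), wt.pos);
            outPositions[i].x += wt.bias * p.x;
            outPositions[i].y += wt.bias * p.y;
            outPositions[i].z += wt.bias * p.z;
        }
    }
}
```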

Trienco    2555
Somewhere in between an annoying headache I'm wondering about something more or less related: vertex skinning, and trying to reduce the calculations as much as possible.

Anyhow, if I take the step of creating an instance per object of not just the skeleton but the whole mesh, to avoid rebuilding the whole thing every frame, shouldn't there be an easy way to update only the changed vertices?

In theory, when I change a bone, instead of recalculating the vertex position from scratch (which would require accessing all the other contributing bones), shouldn't it work to subtract the bone's old contribution and add the new one?

I.e. v += weight*newMatrix*vBone - weight*oldMatrix*vBone, which in turn should collapse to weight*(newM*vBone - oldM*vBone) and further to weight*(newM - oldM)*vBone (somewhat unsure whether the distributive law applies here).

I'm wondering if it might make more sense to store a list of affected vertices for each bone instead of a list of influencing bones for each vertex. On the other hand, would real-life experience say that enough bones usually change every frame anyway that just recalculating the whole model isn't much more work but a lot less hassle?
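
Since matrix-times-vector multiplication is linear, the factoring above does hold. Assuming the per-bone list of affected vertices just mentioned, a delta update could look roughly like this (all names are hypothetical):

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

struct BoneInfluence {
    int   vertex;  // index of the affected vertex
    float weight;  // this bone's weight for that vertex
    Vec3  vBone;   // the vertex position expressed in this bone's space
};

// Column-major 4x4 matrix times a point (w = 1).
static Vec3 transformPoint(const float m[16], const Vec3& p) {
    return { m[0]*p.x + m[4]*p.y + m[8]*p.z  + m[12],
             m[1]*p.x + m[5]*p.y + m[9]*p.z  + m[13],
             m[2]*p.x + m[6]*p.y + m[10]*p.z + m[14] };
}

// When only this bone changed, patch the affected vertices in place:
// v += weight * (newM*vBone - oldM*vBone); no other bones are touched.
void applyBoneDelta(const std::vector<BoneInfluence>& influences,
                    const float oldM[16], const float newM[16],
                    std::vector<Vec3>& positions) {
    for (const BoneInfluence& inf : influences) {
        Vec3 pNew = transformPoint(newM, inf.vBone);
        Vec3 pOld = transformPoint(oldM, inf.vBone);
        positions[inf.vertex].x += inf.weight * (pNew.x - pOld.x);
        positions[inf.vertex].y += inf.weight * (pNew.y - pOld.y);
        positions[inf.vertex].z += inf.weight * (pNew.z - pOld.z);
    }
}
```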

