
Archived

This topic is now archived and is closed to further replies.

Spencer

Normals transformed by the current matrix?


Recommended Posts

Hi, I can't seem to get this confirmed anywhere, although I think it is the case: when I execute a glNormal(..) command, is the normal sent to OpenGL transformed by the current matrix? If so, would it be "smart" to call glNormal() with an untransformed normal from my model and then use glGet(GL_CURRENT_NORMAL) to read back the transformed normal for backface culling, and thus get rid of manually transforming my normals every frame?

thanks
--Spencer

"All in accordance with the prophecy..."

It sounds to me like the worst thing you could do in the name of laziness. Reading data back from the card is never a good idea.

Also, backface culling is done per face by OpenGL; if you need it for larger chunks, then transforming a few normals shouldn't matter (especially after only visible objects are left). And are you sure glGet would even return the transformed normal? I would expect that nothing is done until you call glVertex and actually draw something. Not to mention that even if it did, nobody knows WHEN the normal will be transformed, and in the worst case you have to wait until a ton of other stuff is finished first.

lol ...okay, I get it...
thanks for the reply

--Spencer

"All in accordance with the prophecy..."

Doing culling in software per face isn't a good idea anyway. Are you sure your method would be faster than the HW culling (which, by the way, does not use the normal)?

quote:

Doing culling in software per face isn't a good idea anyway. Are you sure your method would be faster than the HW culling (which, by the way, does not use the normal)?



No, I am not sure of that... I just thought that since I can easily check each triangle in my mesh with a dot-product test, I could just skip sending that triangle down to the graphics card...
..are you saying that it isn't worth the trouble?



--Spencer

"All in accordance with the prophecy..."

quote:
Original post by Spencer
I just thought that since I can easily check each triangle in my mesh with a dot-product test, I could just skip sending that triangle down to the graphics card...
..are you saying that it isn't worth the trouble?


Then it's worse than I thought. For that to work you would have to send single triangles, which is again one of the worst things you can do to your performance. Every kind of culling applied to single triangles, or very small numbers of them, is not just wasted time but most likely a lot slower than letting the hardware do it. With bandwidth and vertex processing power coming out of our ears, there are a lot more useful things to do with rather precious CPU cycles.

Okay, I get it... but it still doesn't make sense...
I mean, let's say I have a model made up of, say, 1000 triangles, but only about half of them will be visible at any time... how can it be smarter to send 500 non-visible polygons down the pipeline than not to?
..is it really the case that, due to caches and such, sending 1000 triangles as triangle strips is faster than sending 500 triangles as individual triangles?

*confused*

thanks



--Spencer

"All in accordance with the prophecy..."

quote:
Original post by Spencer
let's say I have a model made up of, say, 1000 triangles, but only about half of them will be visible at any time... how can it be smarter to send 500 non-visible polygons down the pipeline than not to?


Simply because your test isn't free and takes ten times longer than just sending them and letting god... erm... the card sort them out.

quote:
..is it really the case that, due to caches and such, sending 1000 triangles as triangle strips is faster than sending 500 triangles as individual triangles?


Caches especially would lead me to expect the opposite (assuming you use indexed vertex buffers and don't hand-feed the same vertices over and over again).

In a perfect world, where all 1000 triangles can be rendered as a single strip, you have 1002 indices, compared to 3000 for a triangle list. That sounds like a huge improvement, until you remember that indices are small, and that rasterizing triangle i will most likely take longer than sending and transforming the geometry for triangle i+1, which has to wait until triangle i is finished before it can be drawn. So unless you are drawing incredibly small, untextured triangles, you shouldn't be surprised if the speed boost of triangle strips is exactly zero compared to triangle lists (especially if the lists are ordered in a cache-friendly way).

hmmm....
...this was interesting.
Thanks for all the replies; I think it will all make sense in the end.



--Spencer

"All in accordance with the prophecy..."
