
RipTorn

Cube mapping weirdness...


Hey. I've run into a problem with a feature I was implementing for fun: real-time cube mapping. While setting up a demo of this in my terrain, I was getting around 35fps when generating the 7 images needed per frame (the 6 256x256 faces, plus the main render). Obviously I was really happy with that frame rate.

I got it all working, copying the images into individual 2D textures, and it was still running fine, but of course I wasn't actually copying them into a cube map yet. So I started copying them straight into a cube map instead, and got slightly lower performance. But the copy was also enabling cube mapping, which causes all the weird visual artifacts you get when drawing with 2D textures while cube mapping is enabled, plus a noticeable performance hit. So I had to call glEnable(GL_TEXTURE_CUBE_MAP_ARB) and glDisable(GL_TEXTURE_CUBE_MAP_ARB) some 14 times per frame to prevent this. And that, in turn, caused a massive frame rate hit, as you can see in the image.

This brings me to my questions... Does anyone know why this is? Do non-ARB cube map extensions suffer from similar hits? Or is there something wrong with my approach, or maybe my system (533 + GF1)? (Actually drawing the sphere has little impact on fps, btw.)

Edited by - RipTorn on June 9, 2001 9:28:35 AM
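For reference, this is roughly what the per-face update looks like with ARB_texture_cube_map. It's only a minimal sketch, not the code from this post: UpdateCubeMap and RenderSceneFromFace are made-up names, and the viewport and copy details will differ in a real engine.

    /* Minimal sketch of the per-face cube map update described above.
       RenderSceneFromFace() is a placeholder for "aim the camera along
       the face direction and draw the terrain". */

    #include <windows.h>
    #include <GL/gl.h>
    #include <GL/glext.h>   /* ARB_texture_cube_map tokens */

    extern void RenderSceneFromFace(int face);   /* placeholder */

    void UpdateCubeMap(GLuint cubeTex)
    {
        int face;

        glViewport(0, 0, 256, 256);
        glBindTexture(GL_TEXTURE_CUBE_MAP_ARB, cubeTex);

        for (face = 0; face < 6; ++face)
        {
            /* cube mapping must stay off while the scene is drawn with
               ordinary 2D textures, or they get sampled as cube maps */
            glDisable(GL_TEXTURE_CUBE_MAP_ARB);

            RenderSceneFromFace(face);

            /* copy the 256x256 framebuffer image into this cube face
               (glCopyTexSubImage2D into pre-allocated faces is cheaper,
               but this keeps the sketch self-contained) */
            glCopyTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X_ARB + face,
                             0, GL_RGB8, 0, 0, 256, 256, 0);
        }

        /* restore the viewport, draw the main view, then enable
           GL_TEXTURE_CUBE_MAP_ARB only while the reflective sphere
           is being drawn */
    }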

I just discovered why this was, and I don't blame anyone for not knowing; it's one of the weirdest errors...
I found that using cube mapping was _disabling_ the NVIDIA Vertex Array Range extension (NV_vertex_array_range), which causes a pretty huge slowdown (because the vertex data has to be taken from AGP memory, through the driver, and back into the card... very slow).
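For anyone hitting the same thing, this is roughly how VAR gets set up in the first place, and what is lost when the driver falls back to the non-VAR path. It's only a sketch assuming the standard NV_vertex_array_range entry points (wglAllocateMemoryNV, glVertexArrayRangeNV), not the code from this demo, and error checking is omitted.

    #include <windows.h>
    #include <GL/gl.h>
    #include <GL/glext.h>
    #include <GL/wglext.h>

    static PFNWGLALLOCATEMEMORYNVPROC  wglAllocateMemoryNV;
    static PFNGLVERTEXARRAYRANGENVPROC glVertexArrayRangeNV;

    void *SetupVAR(GLsizei bytes)
    {
        void *agpMem;

        wglAllocateMemoryNV  = (PFNWGLALLOCATEMEMORYNVPROC)
                               wglGetProcAddress("wglAllocateMemoryNV");
        glVertexArrayRangeNV = (PFNGLVERTEXARRAYRANGENVPROC)
                               wglGetProcAddress("glVertexArrayRangeNV");

        /* priority 0.5 requests AGP memory (1.0 would be video memory);
           check the returned pointer for NULL in real code */
        agpMem = wglAllocateMemoryNV(bytes, 0.0f, 0.0f, 0.5f);

        /* hand the range to the driver so the card can pull vertex data
           straight out of AGP memory instead of having it copied through
           the driver on every draw call */
        glVertexArrayRangeNV(bytes, agpMem);
        glEnableClientState(GL_VERTEX_ARRAY_RANGE_NV);

        return agpMem;   /* write your vertex arrays into this block */
    }

If anything silently knocks that range out, every draw call drops back to the slow driver copy path, which is exactly the frame rate cliff described above.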

Here's the proof (this is without using VAR, which is slower):



Edited by - RipTorn on June 11, 2001 4:42:40 AM
