BindProgram deprecates Enable/Disable(GL_TEXTURE_xxx)?
Trying to find some concise information about GL_MAX_TEXTURE_UNITS and GL_MAX_TEXTURE_IMAGE_UNITS I stumbled across this FAQ on nVidia: http://developer.nvidia.com/object/General_FAQ.html#t6 . There they mention the following:
Quote:
Incidentally, the classification above also tells you that all the calls corresponding to GL_MAX_TEXTURE_UNITS are useless when using fragment programs and actually ignored by the driver (so, making those calls has no adverse performance impact). For example, you don't need to call glEnable/glDisable(GL_TEXTURE_xxx) since the texture target is passed as a parameter to the TEX/TXP/etc instructions called inside the fragment program. Making those calls will actually generate errors if the active texture index is above GL_MAX_TEXTURE_UNITS. glBindProgramARB is the one doing all the work of state configuration and texture enabling/disabling. One glBindProgramARB basically replaces a dozen or more of glActiveTexture, glEnable/glDisable, and glTexEnv calls.
So if I get this correctly, you just bind textures to units using BindTexture and don't need Enable(TEXTURE_2D) anymore? Is this just some nVidia idea, or is it somehow given by the OpenGL specs? I can't remember seeing ATI liking this behavior. Some clarification would be nice.
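For context, a minimal sketch of what the bind-and-draw path looks like under that model, using ARB_fragment_program; progId, texColor and texCube are placeholder names, not anything taken from the FAQ:

/* Enable the fragment program path and bind the program;
   this is the only "enable" that matters here. */
glEnable(GL_FRAGMENT_PROGRAM_ARB);
glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, progId);

/* Just bind the textures to their units; no glEnable(GL_TEXTURE_2D)
   or glTexEnv calls, since the TEX/TXP instructions inside the program
   already name the target (2D, CUBE, ...). */
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texColor);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_CUBE_MAP, texCube);

/* ... issue the draw calls ... */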
Yes, you are correct. When you bind a texture, you bind it to the active texture unit. The texture unit number is what you set as the sampler value in the GLSL shader. There is no need to enable/disable GL_TEXTURE_xD; that is for fixed functionality only. I had no problems with this "feature" on ATI cards.
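A minimal sketch of that, assuming a linked GLSL program prog with a sampler2D uniform named "diffuseMap" (both names are just placeholders):

/* Bind the texture object to texture unit 2. */
glActiveTexture(GL_TEXTURE0 + 2);
glBindTexture(GL_TEXTURE_2D, diffuseTex);

/* Point the sampler uniform at that unit; no glEnable(GL_TEXTURE_2D)
   is involved while the shader is active. */
glUseProgram(prog);
glUniform1i(glGetUniformLocation(prog, "diffuseMap"), 2);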
It is similar to how you don't need glEnable/Disable(GL_LIGHTING) if you are using GLSL shaders. You either do the lighting calculations in the shader or you don't; the fixed-functionality boolean switch GL_LIGHTING doesn't influence it.
Just asking since my ATI card did some really strange things if I did not properly glEnable the right texture target ( texture 2D or some cube map face ).
Quote:Original post by RPTD
Just asking since my ATI card did some really strange things if I did not properly glEnable the right texture target ( texture 2D or some cube map face ).

Beware that if you mix shaders with fixed functionality in the same application (for instance, with a fixed-function GUI library), then you still need to set all the texture state correctly for the fixed-function section of the program.
Usually that would result in code like this before rendering your GUI:
glBindProgram(0);
glActiveTexture(GL_TEXTURE0);
glClientActiveTexture(GL_TEXTURE0);
glDisable(GL_TEXTURE_3D);
glEnable(GL_TEXTURE_2D);
Quote:Original post by RPTD
Just asking since my ATI card did some really strange things if I did not properly glEnable the right texture target ( texture 2D or some cube map face ).
I haven't had that problem. I'm running the same code on nvidia and ati, and everything is done with shaders. I don't use glLight, glEnable(GL_TEXTURE_2D), glMatrixMode, glFrustum, glOrtho, glRotate, glTexEnv and many other classical things.
Various problems happen, amongst others these two (a sketch of case 1 follows after the list):
1) Attaching a 2D texture, then using a shader, then detaching the 2D texture, attaching a cube texture and running another shader => GL_INVALID_OPERATION (with glEnable and company it works)
2) Attaching a depth texture without a 2D texture set => garbled shadow maps (with glEnable and company it works)
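A rough sketch of what case 1 looks like with those extra glEnable/glDisable calls; the texture, program and draw function names are placeholders:

/* Pass 1: 2D texture on unit 0. */
glActiveTexture(GL_TEXTURE0);
glEnable(GL_TEXTURE_2D);              /* should be redundant with shaders */
glBindTexture(GL_TEXTURE_2D, tex2D);
glUseProgram(prog2D);
drawPass1();

/* Pass 2: cube map on the same unit. Without switching the enabled
   target here, the ATI driver reported GL_INVALID_OPERATION. */
glDisable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, 0);
glEnable(GL_TEXTURE_CUBE_MAP);
glBindTexture(GL_TEXTURE_CUBE_MAP, texCube);
glUseProgram(progCube);
drawPass2();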