[Cg] Running out of TEXCOORD?
Most graphics cards support TEXCOORD0 through TEXCOORD7, I think, and I am running out of them. How can I use more?
If newer cards like the Nvidia 8 series I am working on support beyond TEXCOORD7, FX Composer 2 is not accepting them. Is there a compiler flag to turn them on?
I don't wish to manually insert data into the spare components of existing varyings either, nor go multi-pass.
I am using vp_4_0 and fp_4_0. Thanks beforehand.
As far as I know, 8 channels is still the maximum, unfortunately.
Make sure you make full use of all available channels (8 x 4 floats). You could even consider packing multiple values into one float: for example, store one value in the lower bytes and another value in the higher bytes. Besides the TEXCOORDs, you can also use the FOG channels, I think.
In most cases there is a work-around somehow, although it might mean that more work has to be done in the fragment shader, while usually you try to do as much as possible in the vertex shader.
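A minimal Cg sketch of the pack-two-values-into-one-float idea, assuming both values are normalized to [0, 1) (the function names are made up for illustration):

```
// Pack two [0,1) values into a single float channel.
// 'a' is quantized to 8 bits and stored in the integer part,
// 'b' is carried in the fractional part.
float pack2( float a, float b )
{
    return floor( a * 255.0 ) + b;
}

// Unpack in the fragment shader.
float2 unpack2( float p )
{
    return float2( floor( p ) / 255.0, frac( p ) );
}
```

Note the caveats: 'a' loses precision (it is quantized to 8 bits), and a packed varying does not interpolate meaningfully across a triangle unless the integer part is constant over that triangle, so this trick suits flat or per-object data better than smoothly varying vectors.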
Greetings,
Rick
Thanks. Is it very difficult to increase this in the hardware design, or is there just not sufficient demand for it? I wonder if others are sending in as many varying interpolants as I am, and what you may be doing to address this.
For some shaders I work on, I am passing these varying interpolants from the VP to the FP:
object space position,
object space normal,
view space position,
texture coordinate,
world space reflection vector,
world space normal,
tangent space light,
tangent space eye.
I may or may not be able to reduce some by unifying spaces in calculation. Even if possible, in the future even more interpolants may be added.
Unlike multiple lights, these data are not very multi-pass friendly.
And I would rather not do:
float2 assembledUV = float2( PositionObj.w, NormalObj.w );
Finally, is there any documentation on how interpolants with the FOG and COLOR semantics are interpolated?
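For concreteness, the interface listed above would fill all eight slots. A hypothetical Cg vertex-to-fragment struct (the struct and field names are illustrative, not from the original post):

```
// Hypothetical VP-to-FP interface: every TEXCOORD slot is taken.
struct v2f
{
    float4 Position     : POSITION;
    float4 PositionObj  : TEXCOORD0;  // object space position
    float4 NormalObj    : TEXCOORD1;  // object space normal
    float3 PositionView : TEXCOORD2;  // view space position
    float2 UV           : TEXCOORD3;  // texture coordinate
    float3 ReflectWorld : TEXCOORD4;  // world space reflection vector
    float3 NormalWorld  : TEXCOORD5;  // world space normal
    float3 LightTan     : TEXCOORD6;  // tangent space light
    float3 EyeTan       : TEXCOORD7;  // tangent space eye
};
```

With a layout like this there is no slot left over, even though several of the float4/float3 channels still have unused components.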
Quote:Original post by spek
As far as I know, 8 channels is still the maximum, unfortunately.
Make sure you make full use of all available channels (8 x 4 floats). You could even consider packing multiple values into one float: for example, store one value in the lower bytes and another value in the higher bytes. Besides the TEXCOORDs, you can also use the FOG channels, I think.
In most cases there is a work-around somehow, although it might mean that more work has to be done in the fragment shader, while usually you try to do as much as possible in the vertex shader.
Greetings,
Rick
Are you sure about those 8 channels? I've never used more than 6, so I never had to bother, but... the GLSL 1.20 specification requires a minimum value of 16 for gl_MaxVertexAttribs.
GLSL 1.20 is required by OpenGL 2.1, which is available on every reasonably recent card. So, my guess is that the limit should be 16 rather than 8..?
Quote:Original post by samoth
Are you sure about those 8 channels? I've never used more than 6, so I never had to bother, but... the GLSL 1.20 specification requires a minimum value of 16 for gl_MaxVertexAttribs. GLSL 1.20 is required by OpenGL 2.1, which is available on every reasonably recent card. So, my guess is that the limit should be 16 rather than 8..?

AFAIK, you are both correct. There are only 8 texture coordinates, but there are considerably more vertex attributes. Here are the standard vertex attributes from the fixed-function pipeline, along with the attribute index they would bind to:

gl_Vertex          0
gl_Normal          2
gl_Color           3
gl_SecondaryColor  4
gl_FogCoord        5
gl_MultiTexCoord0  8
gl_MultiTexCoord1  9
gl_MultiTexCoord2  10
gl_MultiTexCoord3  11
gl_MultiTexCoord4  12
gl_MultiTexCoord5  13
gl_MultiTexCoord6  14
gl_MultiTexCoord7  15
That implies a minimum of 16 vertex attributes, and maybe more on recent cards. You can bind any of them as generic attributes instead if you want more descriptive names.
>> And I would rather not do:
float2 assembledUV = float2( PositionObj.w, NormalObj.w );
Why not? It's not the most beautiful way to do it, but there is probably not much else you can do, as long as you're limited with the channels. It won't make your shaders slower or anything.
You can also calculate the view-space normal and position in the fragment shader instead of the vertex shader. It will make your fragment shader slower, but yet again... You can probably do the same for the object space vectors. Just a possible scenario when everything is packed:
                    XYZ                             W
channel 0 (float4): object space position           texcoord.u
channel 1 (float4): object space normal             texcoord.v
channel 2 (float4): world space reflection vector   tangent space eye.x
channel 3 (float4): world space normal              tangent space eye.y
channel 4 (float4): tangent space light             tangent space eye.z
Ugly code, but you save 3 channels now. In case you don't want to calculate the view-space coordinates in the fragment shader, you have 6 floats left.
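The packed layout described above might look like this in Cg (a sketch only; the struct and field names are made up for illustration):

```
// Five TEXCOORD channels instead of eight; the .w components
// carry the texcoord and the tangent space eye vector.
struct v2f_packed
{
    float4 Position : POSITION;
    float4 c0 : TEXCOORD0;  // xyz = object space position,         w = texcoord.u
    float4 c1 : TEXCOORD1;  // xyz = object space normal,           w = texcoord.v
    float4 c2 : TEXCOORD2;  // xyz = world space reflection vector, w = tangent eye.x
    float4 c3 : TEXCOORD3;  // xyz = world space normal,            w = tangent eye.y
    float4 c4 : TEXCOORD4;  // xyz = tangent space light,           w = tangent eye.z
};

// Unpacking at the top of the fragment shader:
void unpack( v2f_packed IN )
{
    float3 posObj   = IN.c0.xyz;
    float3 nrmObj   = IN.c1.xyz;
    float2 uv       = float2( IN.c0.w, IN.c1.w );
    float3 reflW    = IN.c2.xyz;
    float3 nrmW     = IN.c3.xyz;
    float3 lightTan = IN.c4.xyz;
    float3 eyeTan   = float3( IN.c2.w, IN.c3.w, IN.c4.w );
    // ... shading code would go here ...
}
```

Since each component is interpolated independently anyway, scattering a vector across three .w components like this does not change how it is interpolated; it only costs the reassembly swizzles in the fragment shader.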
I don't know about FOG and COLOR, but I guess they are explained in the Cg documentation. Probably they are interpolated just like the other texture channels. I believe the FOG channel is limited though: you can only put one scalar on it. The best way to try these two is to render a simple triangle with FOG or COLOR as the output color, and a different input value at each vertex.
greetings,
Rick