Simple Shader question

Recommended Posts

Hi there, I just started with GLSL and I'm trying to write a very simple fragment shader which copies the contents of one texture into another (which is attached to the framebuffer). Here's the shader code:

uniform sampler2D sampler1;
varying vec2 Texcoord;
varying vec3 PixelData;

void main(){
    vec3 PixelData = texture2D(sampler1, TexCoord).rgb;
    gl_FragColor = vec4(PixelData, 1.0);
}

However, when I try to run my application the following InfoLog appears:

(1) : error C1008: undefined variable "TexCoord"
(1) : error C1102: incompatible type for parameter #2 ("c")
(1) : error C1008: undefined variable "TexCoord"
(1) : error C1102: incompatible type for parameter #2 ("c")
(1) : error C1008: undefined variable "TexCoord"
(1) : error C1102: incompatible type for parameter #2 ("c")

From all the shader code examples I've read so far I cannot see anything wrong with my code. Am I missing something? Thanx in advance :)

I guess this is just your fragment shader, right? You need to assign some values to TexCoord inside a vertex shader; otherwise the shader doesn't know what they should be. Also, PixelData doesn't have to be a varying; you can just remove that declaration, since it is declared again inside the function.

Also, note the Texcoord/TexCoord discrepancy, which is probably causing the actual error: variable names are case-sensitive.

Well, I don't know what to say. YES, it was the case sensitivity; I probably should have noticed it myself :)
Apart from this, I would really like not to get involved with a vertex shader. Is there another way to assign values to TexCoord? Can't these values be assigned within the fragment shader itself? Or would it be better to use gl_TexCoord[0].st or gl_MultiTexCoord0.st directly?

Unfortunately you probably will have to use a vertex shader, because you can't use gl_MultiTexCoord in a fragment shader and gl_TexCoord is empty if you don't write some data into it.

So you should have a shader like this:

//*************************************
//vertex shader
//*************************************
void main(){
    gl_TexCoord[0] = gl_MultiTexCoord0; //store the gl_MultiTexCoord data for later
    gl_Position = ftransform();         //performs the standard vertex transformation
}

//*************************************
//and the fragment shader as follows:
//*************************************
uniform sampler2D sampler1;

void main(){
    vec3 PixelData = texture2D(sampler1, gl_TexCoord[0].st).rgb; //note the .st: texture2D takes a vec2
    gl_FragColor = vec4(PixelData, 1.0);
}


But you could shorten your fragment shader to one line without saving the PixelData in an extra variable:
gl_FragColor = texture2D(sampler1, gl_TexCoord[0].st);

Hope it helps..

Quote:
Original post by Caste
Unfortunately you probably will have to use a vertex shader, because you can't use gl_MultiTexCoord in a fragment shader and gl_TexCoord is empty if you don't write some data into it.
...
I think that's only true if you are using a vertex shader. From the GLSL Specs...
Quote:
From section 7.6 in the GLSL Specs
The gl_TexCoord[] values are the interpolated gl_TexCoord[] values from a vertex shader or the texture coordinates of any fixed pipeline based vertex functionality.
I may be interpreting that wrong, but I take it to mean that if you're using a vertex shader, the gl_TexCoord[] values will be interpolated based on what you set in the vertex shader, or be undefined if you don't set them. But if you aren't using a vertex shader, they will be set by the usual fixed-function glTexCoord*/glMultiTexCoord* functions or automatic texture generation.

So I think you should be able to just use gl_TexCoord[0] in your fragment shader without needing a vertex shader.

I'm not exactly sure about that TexCoord stuff without using a vertex shader. However, I urge you to write a simple vertex shader anyway, because I (and others as well) have experienced severe slowdowns when we had a vertex shader but no fragment shader (I believe on ATI only, though I'm not certain). That situation is the reverse of yours, but I can't imagine it being much different from the hardware's point of view.

Rick, Caste and Kalindor, thanx for your replies. I tried out what both of you suggested and it seems that Kalindor is right, because the result remains the same whether I use a vertex shader and a varying TexCoord or just use gl_TexCoord[0].st within the fragment shader. However, my shader does not seem to work either way :(
What I am trying to do is very simple: I am trying to copy the contents of a W*H texture into the color buffer (also W*H).
The vertex shader:

varying vec2 TexCoord;

void main(){
    TexCoord = gl_MultiTexCoord0.st;
    gl_Position = ftransform();
}



The fragment shader is :

uniform sampler2D sampler1;
varying vec2 TexCoord;

void main(){
    vec3 PixelData = texture2D(sampler1, TexCoord).rgb;
    gl_FragColor = vec4(PixelData, 1.0);
}


I successfully (at least there are no syntax errors...) compile, create, link, and use the shader object/program.

As a test I do glReadPixels over the whole color buffer, and I do not get the expected values. Am I missing something? When does the actual execution of the fragment shader take place? Are any extra commands necessary?

I should have put this in my last post but I too recommend using both a vertex and fragment shader. I didn't know about the problem rick_appleton posted about but that's just one more reason. My main motivation for that is because I simply dislike the fixed-function pipeline [grin]. I feel it's too bloated and overly confusing whereas shaders let you do only what you want and are much more flexible. I say forget about the fixed-function pipeline completely. The programmable pipeline is the way of the future, and indeed in D3D10 the fixed-function API is no more (it's been gone from hardware for a while now).

EDIT: I just saw your last post. I have to get lunch now but if it's not answered by the time I'm done I'll try to get back to it.

Okay, sorry about the wait. Your shaders look fine to me.

I'm not sure exactly what it is you're doing. Do you mean that you are just rendering a full-screen quad to the framebuffer? If that's what you're doing why do you need to use glReadPixels to test, is your window not visible? If that's the case then the framebuffer's data is undefined because of the pixel ownership test (refer to section 4.1.1 of the OpenGL 2.0 Spec). The way to get around that is to render to a proper off-screen buffer, such as an FBO, and read the data from that.
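In case it helps, a rough sketch of that off-screen setup and readback. This is only an illustration assuming a working GL context with GL_EXT_framebuffer_object available; fbo, tex, W, H and pixels are placeholder names, and real code should check the framebuffer status before drawing:

```c
GLuint fbo, tex;

/* create a W*H texture to act as the render target */
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, W, H, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

/* attach it to an FBO */
glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, tex, 0);

/* ... bind the shader program and draw the full-screen quad here ... */

/* read the result back; this is well-defined regardless of window visibility */
glReadBuffer(GL_COLOR_ATTACHMENT0_EXT);
glReadPixels(0, 0, W, H, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
```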

If that's not the problem could you post more about what exactly you're doing, and maybe post some code as well?

First of all big thanx Kalindor, really big, I mean it :)

You asked me what exactly I am trying to do. Well, forget about the above shaders; they were just simple examples I was trying out, because I'm a complete noob in OpenGL/GLSL. Here's the real thing:


OK, this is gonna be long :P I'm working on a 3DStereoComputerVision on GPU Project which forced me at some point to get involved with shaders. I won't go into much detail about the project itself, but to cut a long story short, at some point I need to write a shader program which works on 3 textures. I need to read 2 given textures (A and B), make some color comparisons, and depending on the result I need to (each texture has its own texture unit):

1) modify one of the above textures (Texture A)
2) write some other data to the third texture (Texture C)

I've read some stuff about offscreen buffers and attaching textures to them, and this seems to be the way to go. However:

1) How can I have a texture (Texture A in my case...) open for both read and write within the same fragment shader in GLSL?
2) How does the fragment shader render to the offscreen buffers rather than the color buffer?
3) Is it possible to render to multiple buffers with a single shader? I would like to have both Texture A and C modified by the same shader, rather than write two shaders, one for each texture.
4) What is the performance cost of compiling, linking, loading, etc. shaders? In my application I'm gonna have a loop, and in each iteration I need fixed functionality as well as the above shader program, so I would probably have to load and unload the program many times.

These are some of my basic questions... At some point I may send some code too, but at the moment it's trivial because nothing really works :|

[Edited by - Gorgi on May 25, 2006 9:35:06 AM]

Quote:
Original post by Gorgi
First of all big thanx Kalindor, really big, I mean it :)
...
You're welcome. And there's no N in my username. [grin] It doesn't bother me at all so it's no big deal, I just figured I'd correct you.
Quote:
Original post by Gorgi
...
I'm working on a 3DStereoComputerVision on GPU Project which forced me at some point to get involved with shaders.
...
Sounds pretty interesting. Do you have a website or something where I can read more about it?
Quote:
Original post by Gorgi
...
1) How can I have a Texture (Texture A in my case...) for both read and write within the same fragment shader in GLSL?
2) How does the fragment shader render to the offscreen buffers rather than the color buffer?
3) Is it possible to render with a single shader in multiple buffers? I would like to have both texture A and C modified by the same shader , rather than write two shaders , one for each texture.
4) What is the performance cost of compiling linking loading etc shaders? In my application I'm gonna have a loop and in each loop I need fixed-functionality as well as the above shader program, so I probably have to load and unload the program many times.
...
1) Unfortunately the answer to that is "You can't." You're going to have to "ping-pong" between textures. So you'll have another texture as a render target and output Texture A and its changes to that new texture. Then use the new texture for subsequent uses of Texture A, possibly rendering any more changes you need back into the original Texture A. You keep going back and forth like that for as many passes as you need, hence the "ping-ponging" between the two textures.

2) If using GL_EXT_framebuffer_object (which I highly recommend) you do this by binding the FBO you want to render to and calling glDrawBuffer/glDrawBuffers (see next answer) with the proper attachment point(s). All the information you need can be found in the specs for it. It may be too much information however (it's a LOT of information), so you can get more help by searching these forums or asking questions of your own if you can't find the answers.

3) This is possible with the GL_ARB_draw_buffers extension. This adds the glDrawBuffers function (it's a core part of OpenGL 2.0 so you can drop the ARB at the end now) which allows you to set up to GL_MAX_DRAW_BUFFERS buffers into which to render. To render different data to different buffers you will need to write a fragment shader which outputs to gl_FragData[]. Keep in mind you can't write to both gl_FragColor and gl_FragData[] in a single shader; it's either one or the other. You can find more information about writing to gl_FragData[] in the GLSL specs (pdf).
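As a sketch, a fragment shader that writes two results at once could look like the following. The sampler names and the actual math are made up; the point is just that gl_FragData[i] goes to the i-th buffer you passed to glDrawBuffers:

```glsl
uniform sampler2D samplerA;
uniform sampler2D samplerB;

void main(){
    vec4 a = texture2D(samplerA, gl_TexCoord[0].st);
    vec4 b = texture2D(samplerB, gl_TexCoord[0].st);
    gl_FragData[0] = a;          //written to the first buffer given to glDrawBuffers
    gl_FragData[1] = abs(a - b); //written to the second buffer
}
```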

4) You definitely don't want to be compiling and linking the shaders every time you want to use them. Do all that setup just once and then only bind the shaders when you need them. Binding shaders is the most expensive state change there is. If at all possible you should bind them once, do all the rendering that needs those shaders, then unbind and do all the fixed-function rendering. If you can't do it as clear-cut as that, try to batch draws together as much as possible to minimize the amount of shader binding.

EDIT: Added the pdf warning.

Once again I have a question guys :)


In my shaders I'd like to use both gl_FragData[0] and gl_FragData[1], NOT gl_FragColor.

In a typical example to render to an FBO :

glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, texNames[1], 0);

Do I have to change GL_COLOR_ATTACHMENT0_EXT to something else in order to use gl_FragData rather than gl_FragColor?

GL_COLOR_ATTACHMENT0_EXT maps to both gl_FragColor and gl_FragData[0]; to use gl_FragData[1] through gl_FragData[n] you have to use the colour attachments GL_COLOR_ATTACHMENT1_EXT to GL_COLOR_ATTACHMENTn_EXT (where n is less than the maximum number of draw buffers your hardware supports).

You have to tell OpenGL which colour buffers are bound for writing however via the glDrawBuffers() function.
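Putting that together, the host-side setup might look roughly like this. texA and texC are hypothetical texture names that have already been created with glTexImage2D, and this assumes GL_EXT_framebuffer_object plus glDrawBuffers from OpenGL 2.0:

```c
/* attach the two destination textures to the bound FBO */
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, texA, 0);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT1_EXT,
                          GL_TEXTURE_2D, texC, 0);

/* route gl_FragData[0] to texA and gl_FragData[1] to texC */
GLenum bufs[2] = { GL_COLOR_ATTACHMENT0_EXT, GL_COLOR_ATTACHMENT1_EXT };
glDrawBuffers(2, bufs);
```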


