_dogo

help in vertex and fragment shaders


hi, could anyone explain to me the difference between vertex and fragment shaders? (i (try to) work in cg.) the problem is that i don't really know what a "fragment" is... surely it's a beginner's question, sorry :)

so if i want to shade an image, i have to use a fragment shader, and a vertex shader only when working with 3d objects?
thx

It's easiest to think of fragments as pixels. That's not technically correct, but in practice that's what a fragment is.
Anyway, the answer to your actual question is harder. To understand what a vertex shader and a fragment shader are, you first need to learn about the internal pipeline of your graphics card. Roughly, this pipeline consists of two stages: geometric processing (transforming 3D vertices) and rasterization (applying lighting (shading) and texturing to the projected geometry). On modern hardware, both stages can be programmed by the application.

The geometric stage is implemented by a vertex shader ("shader" is a misnomer; something like "transformer" would be better, but "shader" was so commonly used that it became the official term), the rasterization stage by a fragment shader. If you wish to replace the fixed functionality of either of the two stages, the other must be implemented as well. In other words, if you want to replace only part of the pipeline, you'll have to implement the whole thing anyway.
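To make the split concrete, a minimal pair in GLSL might look like this (just an untested sketch; Cg is organised the same way, only the built-in names differ):

// vertex shader: runs once per vertex and must produce a clip-space position
void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// fragment shader: runs once per fragment (roughly: per covered pixel) and must produce a colour
void main()
{
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); // paint every fragment red
}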
[google] will help you to find more detailed explanations...

Tom

thanks!
i have searched google and found a lot of info about fragment shaders, but nowhere was it written what a fragment actually is :)

i have to shade an image. can i do that without a vertex shader? can i use a fragment/pixel shader on its own?
i read somewhere that the input of the fragment shader is the output of the vertex shader.
can this be avoided by somehow giving the input (an image) to the fragment shader directly?
i read the framebuffer, shade it, then rewrite the framebuffer. this doesn't need any transformations, does it?
how can i turn the framebuffer into a fragment shader's input? (i use glReadPixels())

thx

Quote:
Original post by _dogo
i have to shade an image. can i do that without a vertex shader? can i use a fragment/pixel shader on its own?
i read somewhere that the input of the fragment shader is the output of the vertex shader.
can this be avoided by somehow giving the input (an image) to the fragment shader directly?
i read the framebuffer, shade it, then rewrite the framebuffer. this doesn't need any transformations, does it?
how can i turn the framebuffer into a fragment shader's input? (i use glReadPixels())


No, you'll have to write both. What you can do is copy the frame buffer to a texture, attach that texture to a quad, and draw the quad with an orthographic projection (glOrtho). You'll still have to write a vertex shader, but it will be extremely simple. I don't know Cg, but in GLSL (which is similar) it would look something like this:

varying vec2 texCoord;

void main()
{
    // store texture coordinates
    texCoord = gl_MultiTexCoord0.xy;

    // transform incoming vertex (stored in gl_Vertex) and store it
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}



You can then continue with the interesting stuff in the fragment shader and do the per-pixel operations.
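The fragment shader that goes with it could be as simple as this (again GLSL, just a sketch; "image" is whatever sampler name you bind the copied texture to):

uniform sampler2D image;   // the texture holding the copied frame buffer
varying vec2 texCoord;     // interpolated from the vertex shader above

void main()
{
    // look up the pixel and write it out unchanged; replace this with your per-pixel operation
    gl_FragColor = texture2D(image, texCoord);
}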

By the way: if you use glCopyTexImage2D for copying the image to a texture, it will be much faster than the glReadPixels you use now, because the data stays in video memory instead of being read back to the application. You won't be able to touch the pixels in your application, but I assume that's what you're using the pixel shader for anyway...

Tom

It is not necessary to provide both a vertex and a fragment shader.
Quote:
ARB_shader_objects spec, Issues:
1) What to do if part of a shader pair is not present?

DISCUSSION: There are several shader types that go together. For
example, a VERTEX_SHADER and a FRAGMENT_SHADER form a pair that is part
of the geometry pipeline. It is not required to supply both shader types
of a pair in a program object.

RESOLUTION: If one or the other of a pair is not present in the program
object, OpenGL will substitute the standard 1.4 OpenGL pipeline for the
one not present. The most likely place for this substitution to happen
is the link stage. Note that it is defined elsewhere exactly what part
of the OpenGL 1.4 pipeline a shader replaces.

A vertex shader operates on the vertices of any geometry you render. If you draw a single screen-oriented quad (as it sounds like you want to do), then the vertex shader will operate on the four corners of that quad, and the calculated values will be interpolated across the quad. To get an understanding of this, take a look at OpenGL's colours in smooth shading mode. If you specify the colour red for the top-left vertex of a quad and black for the other three vertices, then the top-left corner of the quad will be red and will fade to black as you move away from that corner.
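In shader terms, the same thing happens to anything the vertex shader writes into a varying. A small GLSL sketch (untested) of that per-vertex colour example:

// vertex shader: whatever is written into a varying...
varying vec4 colour;
void main()
{
    colour = gl_Color;                                       // per-vertex colour (red or black)
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// fragment shader: ...arrives here already interpolated across the quad
varying vec4 colour;
void main()
{
    gl_FragColor = colour;   // red at the red corner, fading towards black
}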

Fragment shaders, on the other hand, operate on the individual pixels* that make up your image. Texturing is an example of a per-fragment operation: each fragment looks up the corresponding part of the texture map to get its colour; the colours are not interpolated**.

Generally shaders are only appropriate when you want to apply the same algorithm or a selection of possible algorithms to an object. If you want to make complex decisions then it might be more efficient to do them on the CPU rather than use a shader (although each new generation of graphics cards makes this less of a problem).

Enigma

*They're called fragments not pixels because the resulting fragment may or may not become a pixel. It may be culled entirely, or it may be blended with the existing pixel in the framebuffer.

**There may be interpolation if you need to magnify the texture, but this happens on the texture side of things, not the fragment side.

Quote:
Original post by Enigma
It is not necessary to provide both a vertex and a fragment shader.

Hmm, I guess I somehow convinced myself of the opposite along the way. I have no idea where I got the idea, but I've always believed I needed to provide both. I suddenly feel quite stupid :)

Tom


thanks dimebolt and Enigma

Quote:
Original post by dimebolt
What you can do is copy the frame buffer to a texture, attach that texture to a quad, and draw the quad with an orthographic projection (glOrtho).




i do NOT want to draw the image, i just want to get at it to shade it, then write the modified image back to the framebuffer.
do i have to attach the texture to a quad in this case?
is there no way to somehow get my image processed without drawing anything?
naturally i understand that without drawing there are no vertices and no fragments, so there can be no vertex/fragment programs, but it seems a bit... mechanical.

Quote:
Original post by Enigma
If you draw a single screen-oriented quad (as it sounds like you want to do), then the vertex shader will operate on the four corners of that quad, and the calculated values will be interpolated across the quad.

sounds dreadful to me... so you suggest i skip the vertex shader in order to avoid interpolation?
is there a way to use a vertex shader but have no interpolation happen?
or did i misunderstand you?

Quote:
Original post by _dogo

how can i turn the framebuffer into a fragment shader's input? (i use glReadPixels())

thx


It's still a bit unclear to me what you're trying to do.

If you want to pass the contents of the framebuffer through a fragment shader, you don't need to use glReadPixels (it's really slow).

After rendering to your frame buffer, use glCopyTexImage2D() to copy the contents of the frame buffer into a texture (the resolutions of the two targets must match), then switch to an ortho view, enable your shader program, and render a quad using that texture.

You only need to implement a vertex shader if there is some value you want to read in the fragment shader after rasterization that is not supplied as a predefined variable in the language (for example, in GLSL you already get the screen position of the fragment and the interpolated colour of the fragment after vertex lighting).

If you wanted to access something like the normal, you'd need to write a vertex shader, declare the normal as a varying, and write it out yourself in the vertex shader.
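For example, a vertex shader that does nothing but forward the normal might look like this (GLSL sketch; the varying name "normal" is just for illustration):

// vertex shader: forward the normal so the fragment shader can read it
varying vec3 normal;
void main()
{
    normal = gl_NormalMatrix * gl_Normal;                    // normal transformed into eye space
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}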

Again, I'm not 100% sure what you're attempting, if you give more info perhaps I could help you further.


Good luck.

Quote:
Original post by Aeluned


If you want to pass the contents of the framebuffer through a fragment shader...


yes, that's what i want.
read the framebuffer --> shade it --> rewrite the framebuffer.

i want to convolve my image with a simple matrix.

for every pixel a, do:

a' = Conv * M(a)

(by * i mean multiplying element by element and summing up, so a' = (a + b + c + d + e + f + g + h + i) / 9)

where

1/9 1/9 1/9
1/9 1/9 1/9 = Conv
1/9 1/9 1/9

b c d
e a f = M(a)
g h i

(M(a) is the part of the image around a; a, b, ..., i are pixel rgb colours, and b, c, ..., i are the pixels nearest to a)



how can i reach the neighbouring pixels?

do i need a vertex shader in this case? i guess not...
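something like this is what i imagine the fragment program would have to do (glsl-style sketch based on dimebolt's example; "image" would be the copied frame buffer as a texture and "texelSize" would be (1/width, 1/height), but i have no idea if this is the right way):

uniform sampler2D image;   // the image to be filtered, bound as a texture
uniform vec2 texelSize;    // 1.0/width, 1.0/height of the image
varying vec2 texCoord;     // texture coordinate of the current fragment

void main()
{
    vec4 sum = vec4(0.0);
    // sample the pixel and its 8 neighbours by offsetting the texture coordinate
    for (int y = -1; y <= 1; y++)
        for (int x = -1; x <= 1; x++)
            sum += texture2D(image, texCoord + vec2(float(x), float(y)) * texelSize);
    gl_FragColor = sum / 9.0;   // the 1/9 weights of Conv
}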
