Mr.A

Questions with GLSL Shaders


Questions on shaders:

1- I understand that "varying" variables are shared between the vertex and the fragment shader. But since the vertex shader does its work on each of the vertices before the fragment shader is called at all, does this mean that what the fragment shader sees in the "varying" will be the value the vertex shader stored in it during its operation on the last vertex? (most commonly the bottom-right vertex?)
 


You set these values in the VS per-vertex and their values get interpolated over the primitive (e.g. a triangle). In the PS (fragment shader) you then access the interpolated value.
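A minimal sketch of what that looks like with old-style GLSL 1.20 varyings (the variable names here are illustrative, not from any particular tutorial): the vertex shader writes v_color once per vertex, and the fragment shader reads the interpolated result.

```glsl
// Vertex shader (GLSL 1.20 style): write one value per vertex.
#version 120
attribute vec3 a_position;
attribute vec3 a_color;
varying vec3 v_color;   // written per-vertex, interpolated for the fragment shader

void main()
{
    v_color = a_color;  // e.g. red, green, blue at the three corners
    gl_Position = gl_ModelViewProjectionMatrix * vec4(a_position, 1.0);
}
```

```glsl
// Fragment shader: v_color here is the interpolated value,
// NOT the value written for the "last" vertex.
#version 120
varying vec3 v_color;

void main()
{
    gl_FragColor = vec4(v_color, 1.0);  // smooth gradient across the triangle
}
```

With red/green/blue corner colors you get the classic rainbow triangle, which makes the interpolation visible.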



1- I understand that "varying" variables are shared between the vertex and the fragment shader. But since the vertex shader does its work on each of the vertices before the fragment shader is called at all, does this mean that what the fragment shader sees in the "varying" will be the value the vertex shader stored in it during its operation on the last vertex?
The "varying" nomenclature is kinda old; modern GLSL replaces it with plain "out" variables in the vertex shader and matching "in" variables in the fragment shader ("attribute", the old name for vertex shader inputs, became "in" as well).

 

You have a vertex shader stage that transforms the vertices, then a fragment shader stage that operates on the fragments produced by rasterizing the triangles that the vertex shader stage output.

 

Both stages have inputs and outputs. So the data goes like:

 

vertex shader inputs -> do stuff with them -> output from vertex shader -> fragment shader inputs -> do stuff with them -> output from fragment shader.

 

As iSmokiieZz said, what the fragment shader receives is the interpolation of the values that the vertex shader outputs. Remember that we're making triangles here, and one triangle can produce many pixels (or less than one, too). So instead of receiving the last vertex's values like you said, it receives the interpolated values from all three vertices that make up the triangle that resulted in the fragment(s) being processed.

 

So if you make a triangle that results in, say, 8 fragments, they all get interpolated values from those 3 vertices (interpolated according to each fragment's position, of course).
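That "inputs -> outputs -> inputs" chain is easiest to see in modern GLSL (#version 330), where the stage interfaces are explicit. This is just a sketch with made-up names (aPosition, uMvp, etc.): the vertex shader's "out" variables feed the rasterizer, which hands interpolated "in" values to the fragment shader.

```glsl
// Vertex shader: 'in' = per-vertex attributes from the application,
// 'out' = values handed (after interpolation) to the next stage.
#version 330 core
layout(location = 0) in vec3 aPosition;
layout(location = 1) in vec3 aColor;

uniform mat4 uMvp;   // model-view-projection matrix, supplied by the app

out vec3 vColor;     // interpolated per-fragment before the fragment shader runs

void main()
{
    vColor = aColor;
    gl_Position = uMvp * vec4(aPosition, 1.0);
}
```

```glsl
// Fragment shader: 'in' matches the vertex shader's 'out' by name;
// 'out' here is the final color written to the framebuffer.
#version 330 core
in vec3 vColor;
out vec4 fragColor;

void main()
{
    fragColor = vec4(vColor, 1.0);
}
```

Each matching out/in pair (vColor here) is exactly what the old "varying" keyword used to declare.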


Aha! I see.

To make sure I understood:

 

-Vertex Shader operates:

1. It runs once for every vertex.

2. Colors (and transformed positions) are set for each vertex.

3. Rasterization and the other fixed-function stuff happens.

4. Interpolation happens; every fragment generated inside the primitive is given a "mixed" value, weighted by its distance to each of the primitive's vertices, and these values are stored somewhere.

 

-Fragment Shader operates:

1. It runs once for every pixel/fragment of the primitive.

2. The color of the fragment being worked on is calculated.

 

 

That raises a few other questions :P :

2- So I think I understand the way stuff flows. But what if we're dealing with a texture and not interpolated color values? And where can I access those values? I've failed to produce even a minimalist shader that renders a texture to the screen. Can someone link me to an example that replicates the original fixed pipeline? I've been looking for one for a couple of days now, but to no avail.

 

3- So varyings are old-fashioned. I assume the new way of doing things is attributes and the "in"s and "out"s, and using them means I will be using a higher version of GLSL, which requires a newer graphics card. How high can I go with the version without hurting its availability to users? The newest version would probably require top-notch graphics cards, so it wouldn't be wise to adopt it already, would it?

 

4- I've read about shaders that do stuff like blur. The thing is that blur usually requires one to draw extra pixels to the scene. How is that possible using shaders? I remember reading somewhere that shaders can't add vertices or fragments to a primitive; they can only edit the existing values.

 

Sorry for all these questions. I just couldn't find a tutorial that goes through all these details. Most shader tutorials out there just explain how lighting is done in shaders, or how to write shaders that draw everything in one color, ignoring the key facts about how values are sent between the OpenGL application and the shader program in the first place. If you know any proper guide I should check out, please let me know.

Edited by Mr.A


To touch on a couple of those questions:

 

I'd look at which cards support which OpenGL contexts. Providing functionality for older cards is a valid concern, but you can still cover the vast majority of them with modern OpenGL. So while there's a trade-off, to be sure, there's a solid chance that the players who may be interested in your game do, in fact, have a graphics card compatible with OpenGL 3.3+ / GLSL 330. This depends on your game and target audience, of course, but I wouldn't consider modern OpenGL an exclusive feature of top-of-the-line graphics cards (I could be off, but I think it's been supported by most cards for about 5 years or so now; it's roughly on par with cards supporting Direct3D 10). Though, to be fair, laptops will fall more heavily on the unsupported side of things. Personally, I generally aim at OpenGL 3.3, though many developers will provide support starting with 4.whatever and then scale things down depending on the detected supported context. But if you're going to target just a single context, personally, I think GLSL 330 is a fair target.

 

Things like blur are often the result of "rendering to texture," which is pretty much exactly what it sounds like. You render all the objects to a texture, and then manipulate that texture (rather than manipulating every fragment when you render the object). Generally, I see this done in addition to a normal rendering pass. I don't have a fixed pipeline example of this, but can provide resources to modern examples if you'd like.
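A sketch of the second pass of that idea, to make it concrete (the uniform names like uSceneTex are made up): the scene has already been rendered into a texture via a framebuffer object, and now a full-screen quad is drawn with a fragment shader that averages neighbouring texels. This also doubles as a basic texture-sampling example.

```glsl
// Fragment shader for a full-screen post-processing pass.
// The whole scene was first rendered into uSceneTex; this pass blurs it.
#version 330 core
in vec2 vTexCoord;            // from a full-screen quad's vertex shader
out vec4 fragColor;

uniform sampler2D uSceneTex;  // the scene, rendered to texture beforehand
uniform vec2 uTexelSize;      // 1.0 / texture resolution, set by the app

void main()
{
    vec4 sum = vec4(0.0);
    // 3x3 box blur: average this fragment's texel and its 8 neighbours.
    for (int x = -1; x <= 1; ++x)
        for (int y = -1; y <= 1; ++y)
            sum += texture(uSceneTex, vTexCoord + vec2(x, y) * uTexelSize);
    fragColor = sum / 9.0;
}
```

Note that no extra pixels are ever "added" by the shader; the blur reads from pixels that were already drawn into the texture in the first pass.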

Edited by Misantes

