Shader noob trying to get the big picture.

Started by benang00; 4 comments, last by benang00 13 years, 3 months ago
Hi, I'm currently learning shaders using GLSL. My sources are the tutorials from clockworkcoders.com, lighthouse3d.com, and NeHe. Right now I'm barely scratching the surface. I know how to compile the program, what a uniform is, what a varying is, the built-in functions, etc., to create a shader. However, I still don't quite understand the fundamentals of the pipeline: how an input X becomes an output Y. So far, this is my own conception (or misconception, I don't know) from the tutorials. Well, at least that's what I perceive from them. So CMIIW, please.

1. The vertices that contain the data to form primitives are processed individually in the vertex shader.
2. The output from the vertex shader is each vertex's position after it has been manipulated inside the vertex shader.
3. The vertices are then assembled into primitives (tris), and the fragments inside each primitive are calculated.
4. A fragment is data that can be used to determine a pixel's color at a specific location.
5. The fragments are processed individually in the fragment shader, which outputs their colors and depths.
6. The fragments are then written as pixels on the screen.

Well, maybe I misunderstood the tutorials, but that's how I read them. Now I'm confused and need to ask a few questions.

1. Can the vertex shader be used to manipulate a vertex's data besides its position? E.g. the connectivity data with other vertices to create different primitives, or creating more vertices than before, like trying to smooth a surface by dividing the tris that make it up into smaller tris?

2. AFAIK a fragment is the data that will be combined with other fragments to determine a pixel's color at a certain position on the screen, CMIIW. And a fragment will have the data of the corresponding vertices that make up the primitive where the fragment resides. But how do you determine which vertex affects which fragment? Is it the distance between the vertex and the fragment?

3. I understand that a fragment is data inside a primitive. But I've seen shaders that can also manipulate data outside of it, like a glow effect or a corona effect around the sun or an object. How do you do that?

Well, maybe I can understand better with an example. Say I have a triangle that is parallel to the projection plane and has vertices A(-10,0,0), B(10,0,0), and C(0,10,0). There's no other calculation in the vertex shader besides transforming with gl_ModelViewProjectionMatrix to clip space.

The question:
1. If I added these lines to the vertex shader:

varying float vertexConstant;

void main()
{
    vertexConstant = gl_Vertex.x + gl_Vertex.y + gl_Vertex.z;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}


a. For the fragment at (9,1,0), which vertex would become the input for vertexConstant? A, B, or C?
b. For the fragment at (20,20,0), i.e. outside the triangle, which vertex would become the input for vertexConstant? A, B, or C?

2. If I want to manipulate a pixel that is outside the boundary of the primitive, how do I do that? Like the glow or corona effect that I mentioned earlier.

Thanks for reading these noobish questions. And thank you a lot if you do answer them. Maybe my whole conception of shaders is wrong, so please CMIIW. I really am lost right now.

Again, thank you very much.

1. Can the vertex shader be used to manipulate a vertex's data besides its position? E.g. the connectivity data with other vertices to create different primitives, or creating more vertices than before, like trying to smooth a surface by dividing the tris that make it up into smaller tris?

Nope. A vertex shader sees one vertex at a time; it can't change connectivity or create new vertices. That's what "geometry shaders" are for.
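
For illustration, here's a minimal pass-through geometry shader sketch. Note this assumes GLSL 1.50 / OpenGL 3.2, which is newer than the legacy GLSL used elsewhere in this thread:

#version 150

layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

void main()
{
    // Emit the incoming triangle unchanged. Emitting extra vertices
    // here (e.g. to subdivide the triangle) is exactly what geometry
    // shaders allow and vertex shaders cannot do.
    for (int i = 0; i < 3; ++i) {
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}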


2. AFAIK a fragment is the data that will be combined with other fragments to determine a pixel's color at a certain position on the screen, CMIIW. And a fragment will have the data of the corresponding vertices that make up the primitive where the fragment resides. But how do you determine which vertex affects which fragment? Is it the distance between the vertex and the fragment?

All the varying values are interpolated across the triangle, weighted by where the fragment sits between the three vertices. If you gave each vertex a color value and passed that color from the vertex shader through a varying into the fragment shader, you'd end up with a smoothly blended color across the triangle.
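
As a minimal sketch in the same legacy GLSL as your snippet (the varying name "color" is just an illustration):

// Vertex shader: read the per-vertex color attribute and pass it on.
varying vec3 color;

void main()
{
    color = gl_Color.rgb;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// Fragment shader: "color" arrives already interpolated between the
// triangle's three vertices, weighted by the fragment's position
// inside the triangle; no single "closest" vertex wins.
varying vec3 color;

void main()
{
    gl_FragColor = vec4(color, 1.0);
}

Applied to your concrete triangle: the point (9,1,0) has barycentric weights 0, 0.9, and 0.1 with respect to A, B, and C, so vertexConstant there comes out to 0*(-10) + 0.9*10 + 0.1*10 = 10. All three vertices contribute, in proportion to those weights. And (20,20,0) lies outside the triangle, so the rasterizer never generates a fragment there at all.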


3. I understand that a fragment is data inside a primitive. But I've seen shaders that can also manipulate data outside of it, like a glow effect or a corona effect around the sun or an object. How do you do that?

A shader only processes fragments generated by the geometry you pass into it. To shade other pixels, you have to render more geometry. For instance, post-process effects like bloom are done by rendering a full-screen quad with a bloom shader attached; it can then blur the input buffers into the output buffers, producing your final image.
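
A minimal sketch of such a full-screen pass, again in legacy GLSL. The names sceneTex, texelSize, and uv are made up for illustration; the application would bind the rendered scene as a texture and set the uniforms:

uniform sampler2D sceneTex;   // the scene, rendered to a texture
uniform vec2 texelSize;       // 1.0 / texture resolution
varying vec2 uv;              // texture coordinate from the quad's vertex shader

void main()
{
    // 3x3 box blur around the current pixel. This is how a glow can
    // "leak" outside the original object's silhouette: the quad covers
    // the whole screen, so every pixel gets a fragment.
    vec4 sum = vec4(0.0);
    for (int x = -1; x <= 1; ++x)
        for (int y = -1; y <= 1; ++y)
            sum += texture2D(sceneTex, uv + vec2(x, y) * texelSize);
    gl_FragColor = sum / 9.0;
}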


Sorry for the long delay. Thank you very much. I understand it better now. However, I've got these new questions:

1. What are post-process effects?
2. Can you give me some insight regarding bloom shaders or glow shaders? A tutorial or two, perhaps?

Thanks again.

'benang00' said:

1. What are post-process effects?
2. Can you give me some insight regarding bloom shaders or glow shaders? A tutorial or two, perhaps?


Post-process effects, in general, are fullscreen effects applied to the scene (bloom, full-screen motion blur, tonemapping, screen-space anti-aliasing, etc.).
For bloom, you first store, somewhere (in a specific render target or texture channel, for example), where your scene is going to bloom: according to "material" information, by taking values above 1.0 during lighting, or wherever you want. You then downsample that image, blur it, and mix it back with your scene. You can do that using HDR scene information, with more than 8 bits per channel. There is a nice article about bloom basics in one of the GPU Gems books (the first one, IIRC). Search the NVIDIA website for the article; you'll find it.
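
To make those steps concrete, here's a rough sketch of the two shader passes involved, in legacy GLSL. The names sceneTex, bloomTex, and uv, and the 1.0 threshold, are illustrative choices, not fixed APIs:

// Pass 1: bright pass. Keep only the parts of the scene that should
// bloom. With an HDR render target, lit values can exceed 1.0.
uniform sampler2D sceneTex;
varying vec2 uv;

void main()
{
    vec4 c = texture2D(sceneTex, uv);
    float luma = dot(c.rgb, vec3(0.299, 0.587, 0.114));
    gl_FragColor = (luma > 1.0) ? c : vec4(0.0);
}

// Pass 2: combine. After downsampling and blurring the bright-pass
// result (bloomTex), add it back on top of the original scene.
uniform sampler2D sceneTex;
uniform sampler2D bloomTex;
varying vec2 uv;

void main()
{
    gl_FragColor = texture2D(sceneTex, uv) + texture2D(bloomTex, uv);
}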



I see. Yeah, I'm reading the article now. Although I still don't quite understand it 100%, I think I get most of the big picture now. Thanks a lot, guys.


You then downsample that image, blur it, and mix it back with your scene.


Here's another thing I don't really understand: why do we have to downsample it? Why not just operate at the original resolution?

This topic is closed to new replies.
