Bozebo

OpenGL: Being prepared to learn shaders...


Right now I am still piecing together what I know about OpenGL to produce various scenes. I don't feel the need to learn shaders until I have covered what else there is to learn.

But what particular problems do shaders directly solve that cannot be solved with older methods? Apart from generally being able to make everything look nicer and performing fast on-GPU calculations, what are the main benefits?

Do "old" techniques such as light mapping, decals etc still apply? (I suspect so, I am just making sure) Or are their implementations replaced? Do shaders help with dynamic lighting at all? Is bloom/hdr a 2d effect? How are shaders capable of implementing a depth of field effect? What data can shaders work from, do they have access to the z-buffer etcetera? Do shaders help with anti-aliasing etc?

At the moment, the prospect of delving into shaders seems a little far off in my 'learning timeline'. But if I wanted to take a look now, how simply could I implement a nice example? Does it merely take a few lines of code to get started?

Here is my current understanding:
There are vertex shaders at an early stage in the rendering pipeline, which operate per-vertex.
There are fragment shaders near the end of the rendering pipeline, which operate per-'pixel'.
Shaders are compiled for the GPU, either at run-time for portability among different GPUs or before distribution for security.
Shaders are programmed in a C-like language (GLSL).

One thing that I haven't been able to understand (bearing in mind I don't believe I have been at a stage where I should be investigating shaders yet) is how you choose where shaders apply. Is there a state machine mechanism when rendering geometry, which "marks" those fragments which are updated while rendering to be processed by the selected shader? And when overdraw is encountered, do the overdrawn pixels encounter (fragment) shaders?

I have some confusion as to how the rendering pipeline interacts directly with my code: are "draw calls" my glBegin functions, after which the defined geometry follows the whole rendering pipeline? Because after that there may be no more function calls until I swap buffers. It seems counter-intuitive, because my initial understanding made it seem that all geometry was known by the system before the rendering pipeline began.

I think I am beginning to understand how it all comes together. The resources I had read through did not seem to explain how the rendering pipeline actually takes place in relation to the code I write.

If anybody can give me any overall suggestions or clarifications, I would appreciate it.

OK, what you do is create two shaders with glCreateShader: one vertex, one fragment.

Stuff code into them. Compile them. Create a program with glCreateProgram.

Use glAttachShader to put the shaders into the program.

Call glLinkProgram. If that works you have a shader.

What you do then is bind it -- the same sort of way you bind a texture.
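In code, that whole sequence might look something like this minimal C sketch (error checking omitted; vertSrc and fragSrc are assumed variables holding your GLSL source text):

/* a minimal sketch; vertSrc and fragSrc are assumed to hold the GLSL source */
GLuint vs = glCreateShader(GL_VERTEX_SHADER);
GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);

glShaderSource(vs, 1, &vertSrc, NULL); /* stuff code into them */
glShaderSource(fs, 1, &fragSrc, NULL);
glCompileShader(vs);                   /* compile them */
glCompileShader(fs);

GLuint prog = glCreateProgram();       /* create a program */
glAttachShader(prog, vs);              /* put the shaders into the program */
glAttachShader(prog, fs);
glLinkProgram(prog);                   /* if this works, you have a shader */

glUseProgram(prog);                    /* bind it, like binding a texture */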

Then you draw stuff. Each vertex goes to the vertex shader, which manipulates it. The vertices are then assembled into primitives (triangles, lines, etc.) and clipped, and then each pixel which will be painted is passed to the fragment shader for manipulation.

They will not be passed to the fragment shader if their Z depth indicates that they will not be painted -- this is called "early culling". If your fragment shader contains any code which touches the depth value, this will be disabled.

After that, they're culled by the Z buffer as usual.
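For example, a fragment shader "touches the depth value" simply by writing to gl_FragDepth. A hypothetical GLSL fragment shader that does so:

// any write to gl_FragDepth disables early culling for this shader
void main()
{
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); // solid red
    gl_FragDepth = gl_FragCoord.z + 0.001;   // push the fragment slightly deeper
}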

When you need to pass stuff to the shaders, there's a bunch of calls to query a magic number for the variables and then to say "here, pass this to that magic number".

Similarly with arrays -- you can say "here, pass the data at this address to the vertex shader, one row each". This is how you pass in extra data.
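Concretely, those "magic numbers" are locations. A rough C sketch, where the uniform name lightColor, the attribute name tangent, and the array tangentData are made-up names for illustration:

/* uniforms: query the magic number, then pass data to it */
GLint colorLoc = glGetUniformLocation(prog, "lightColor");
glUniform3f(colorLoc, 1.0f, 0.9f, 0.8f);

/* per-vertex arrays: hand the shader one row of data per vertex */
GLint tangentLoc = glGetAttribLocation(prog, "tangent");
glEnableVertexAttribArray(tangentLoc);
glVertexAttribPointer(tangentLoc, 3, GL_FLOAT, GL_FALSE, 0, tangentData);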

The best way to understand it is just... use it. GPUs supporting shaders were released more than 7 years ago - that's a lot of time. If you want to start:
http://www.swiftless.com/glsltuts.html - loading, compiling, etc.
http://www.ozone3d.net/tutorials/index_glsl.php - a lot of simple effects
http://www.lighthouse3d.com/opengl/glsl/ - you can find a few interesting things like the info log
http://nopper.tv/opengl.html - if you want to learn something new

Thanks for the clarifications.

I might take a look into shaders earlier than I had expected (I am relatively new to 3D in general).

They could help with various tasks and techniques which I am still learning now.

Quote:

Here is my current understanding:
There are vertex shaders at an early stage in the rendering pipeline, which operate per-vertex.
There are fragment shaders near the end of the rendering pipeline, which operate per-'pixel'.
Shaders are compiled for the GPU, either at run-time for portability among different GPUs or before distribution for security.
Shaders are programmed in a C-like language (GLSL).


Pretty much it, although nowadays there is a little more.

The GPU also has two more stages:

A tessellation stage: this is used to perform things such as subdivision of polygons quickly. It saves a lot of bandwidth, as you don't have to send as many vertices to the GPU.

Geometry shaders: these can be used to generate geometry on the GPU. So you could use them for tessellation (if you don't have a tessellator), duplication of geometry to render the scene several times from multiple angles (say, to render all six faces of a cube map in one go), etc.
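As a taste of the latter, here is a minimal pass-through geometry shader sketch in GLSL 1.50; it just re-emits each incoming triangle unchanged, but you could emit extra or transformed primitives here instead:

#version 150
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

void main()
{
    // re-emit the incoming triangle's three vertices unchanged
    for (int i = 0; i < 3; i++)
    {
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}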

Quote:
One thing that I haven't been able to understand (bearing in mind I don't believe I have been at a stage where I should be investigating shaders yet) is how you choose where shaders apply. Is there a state machine mechanism when rendering geometry, which "marks" those fragments which are updated while rendering to be processed by the selected shader? And when overdraw is encountered, do the overdrawn pixels encounter (fragment) shaders?


Shaders apply everywhere. In a modern game everything drawn is done with shaders. All your transforms are done in the shaders.

Here's, off the top of my head, the steps to get something drawn on screen. It might give you something to work with, or make no sense at all; it will at least give you some stuff to Google. Lighthouse3D has some good stuff to start with.

create shader
bind it
create uniform for projection matrix
create uniform for modelview(camera) matrix
create uniform for position matrix
create attributes for uvcoords/normals/tangents
get uniforms for textures.

get your vertices/uvcoords/normals/tangents
get your indices
make a VBO with them (this involves uploading all your data to the gpu)

to draw:

bind shader
bind textures
upload uniform for projection matrix
upload uniform for modelview(camera) matrix
upload uniform for position matrix

bind your attributes for your uvcoords/normals/tangents

draw vbo

in the vert shader multiply your vert by your 3 matrices
set a varying for your uv coords and assign your uv to it.
in the pixel shader sample your texture with your uv coords and output the colour.
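A sketch of what those last three steps might look like in old-style GLSL (the uniform, attribute, and sampler names are assumptions, not anything fixed by OpenGL):

textured.vert:

uniform mat4 projMatrix;
uniform mat4 modelviewMatrix;
uniform mat4 positionMatrix;
attribute vec3 vertexPosition;
attribute vec2 vertexUV;
varying vec2 uv;

void main()
{
    // multiply the vertex by the 3 matrices
    gl_Position = projMatrix * modelviewMatrix * positionMatrix * vec4(vertexPosition, 1.0);
    uv = vertexUV; // pass the uv coords on through a varying
}

textured.frag:

uniform sampler2D diffuseTexture;
varying vec2 uv;

void main()
{
    // sample the texture with the uv coords and output the colour
    gl_FragColor = texture2D(diffuseTexture, uv);
}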

-Si

Quote:
Original post by Simon_Roth
Shaders apply everywhere. In a modern game everything drawn is done with shaders. All your transforms are done in the shaders. [...]


There's some intriguing info in there, thanks.
So... in a modern game like, perhaps, MW2 or GTA4, the underlying system manages nearly everything with shaders in the way you say? And in an earlier adopter of shaders like Counter-Strike: Source, the scene is generated in a more traditional way while shaders are used on top of that to enhance the image? This seems to make things far more complicated, but I would assume it's for the best in terms of performance and possibilities.

Quote:
I might take a look into shaders earlier than I had expected

Maybe I will retract that statement; I somehow knew that there would be more complexity than I originally imagined. I will have to make the move at some point though if I want to keep up with the ever-increasing quality of modern graphics - if I tried to thoroughly learn everything that everybody before me had learnt, I would be 20-odd years behind in implementations forever...

Quote:
Original post by Bozebo
Quote:
I might take a look into shaders earlier than I had expected

Maybe I will retract that statement; I somehow knew that there would be more complexity than I originally imagined. I will have to make the move at some point though if I want to keep up with the ever-increasing quality of modern graphics - if I tried to thoroughly learn everything that everybody before me had learnt, I would be 20-odd years behind in implementations forever...


Of course you can do VERY complicated things with shaders, but you can start off with very simple things, too! You shouldn't feel intimidated by them!

You only have to initialize shaders once:
Create a shader program with an ID (like textures and other things in OpenGL).
Attach a fragment and/or vertex shader to it. You don't even need both.

After that:
1. Bind the shader program using its ID (again, like textures etc).
2. Set some variables for the shader (a matrix to use, a special color) if it requires them.
3. Draw. You can use immediate mode, VBOs, whatever.. it doesn't really matter.

As a little example a shader that colors your objects:

colored.vert:

uniform mat4 pmvMatrix;
attribute vec3 vertexPosition;

void main()
{
    gl_Position = pmvMatrix * vec4(vertexPosition, 1.0);
}


colored.frag:


uniform vec4 vertexColor;

void main()
{
    gl_FragColor = vertexColor;
}


You don't even need the vertex shader here, for example. The fixed pipeline will do the transformation by default.
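For completeness, the C side of drawing with this shader could look roughly like the sketch below, following the three steps above (coloredProg, pmvMatrix and drawMyObject are hypothetical names):

glUseProgram(coloredProg); /* 1. bind the shader program by its ID */

/* 2. set the variables it requires; the names must match the GLSL */
GLint mLoc = glGetUniformLocation(coloredProg, "pmvMatrix");
GLint cLoc = glGetUniformLocation(coloredProg, "vertexColor");
glUniformMatrix4fv(mLoc, 1, GL_FALSE, pmvMatrix); /* a float[16], assumed */
glUniform4f(cLoc, 1.0f, 0.5f, 0.0f, 1.0f);        /* an orange tint */

/* 3. draw; immediate mode, VBOs, whatever */
drawMyObject(); /* hypothetical */

glUseProgram(0); /* back to the fixed pipeline */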

Thanks Eskapade, you described it in a very encouraging way. I will have a play with shaders soon.
