Usage of shaders

Ok so I have been learning OpenGL with GLSL over the last few weeks, and now that I have come to writing shaders and implementing lighting, textures, etc., I am slightly confused.

How are shaders supposed to be used and combined? If I want directional lights, spot lights and point lights to affect all of a scene, do I write one shader implementing all of the lights? Or should they be implemented in different shaders? How do I combine shaders for different effects all in one scene? What I am basically asking is: what should constitute one shader? Any help on understanding this would be great.

Thanks
There's some freedom of choice when it comes to this. You could write separate lighting shaders for the different types of lights and let your application choose which shader to use depending on the light being handled, or you could write one large shader that handles all cases.
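To illustrate the first option, here is a minimal, hypothetical sketch: one fragment shader per light type, and the application binds whichever program matches the light currently being drawn (the uniform names are made up for the example).

// directional.frag - used for directional lights; the application
// selects this program (glUseProgram) when drawing with such a light.
#version 120
varying vec3 normal;            // view-space normal from the vertex shader
uniform vec3 lightDir;          // normalized view-space light direction
uniform vec3 lightColor;
uniform vec3 materialDiffuse;

void main()
{
    float NdotL = max(dot(normalize(normal), -lightDir), 0.0);
    gl_FragColor = vec4(materialDiffuse * lightColor * NdotL, 1.0);
}

// point.frag - same idea, but direction and attenuation come from a light position.
#version 120
varying vec3 normal;
varying vec3 viewPos;           // view-space fragment position
uniform vec3 lightPos;
uniform vec3 lightColor;
uniform vec3 materialDiffuse;

void main()
{
    vec3  toLight = lightPos - viewPos;
    float atten   = 1.0 / (1.0 + 0.1 * dot(toLight, toLight));
    float NdotL   = max(dot(normalize(normal), normalize(toLight)), 0.0);
    gl_FragColor  = vec4(materialDiffuse * lightColor * NdotL * atten, 1.0);
}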

Another option would be to write everything in one uber-shader and use compiler flags (preprocessor defines) to achieve the desired result, which is kind of a hybrid of the two cases above.
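A rough sketch of that hybrid, assuming the application prepends a preamble such as "#version 120" plus "#define DIRECTIONAL_LIGHT" (or no define at all) to the source string before compiling; the define names are invented for the example:

// ubershader.frag - compiled several times with different defines;
// the #version line and any #define lines are prepended by the application.
varying vec3 normal;
varying vec3 viewPos;
uniform vec3 lightColor;
uniform vec3 materialDiffuse;
#ifdef DIRECTIONAL_LIGHT
uniform vec3 lightDir;          // normalized view-space direction
#else
uniform vec3 lightPos;          // view-space position
#endif

void main()
{
#ifdef DIRECTIONAL_LIGHT
    vec3  toLight = -lightDir;
    float atten   = 1.0;
#else
    vec3  toLight = lightPos - viewPos;
    float atten   = 1.0 / (1.0 + 0.1 * dot(toLight, toLight));
    toLight       = normalize(toLight);
#endif
    float NdotL  = max(dot(normalize(normal), toLight), 0.0);
    gl_FragColor = vec4(materialDiffuse * lightColor * NdotL * atten, 1.0);
}

Compiling this with different defines effectively gives you back the separate shaders of the first approach, without duplicating the code they share.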

These kinds of things will also depend on which shader model you're targeting, how many instructions you can work with, etc.

I gets all your texture budgets!

How are shaders supposed to be used and combined?
When it comes to connecting shaders from different stages, the API is pretty clear. I suppose you're not really asking about that, but rather about how to combine different algorithms in a single shader (such as the fragment stage).
The ugly truth is that they are not supposed to be combined. At the very least, the API won't help you do this (although there are now function pointers which might help you, as far as I've understood).
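If the "function pointers" referred to are GLSL shader subroutines (OpenGL 4.0+; this is my assumption), the idea looks roughly like the sketch below. The application then picks an implementation per draw call with glGetSubroutineIndex and glUniformSubroutinesuiv.

#version 400
in  vec3 normal;
out vec4 fragColor;

// A "function pointer" type plus a uniform slot holding one implementation.
subroutine vec3 LightModel(vec3 n);
subroutine uniform LightModel applyLight;

uniform vec3 lightDir;          // normalized view-space direction
uniform vec3 lightColor;

subroutine(LightModel)
vec3 directionalLight(vec3 n)
{
    return lightColor * max(dot(n, -lightDir), 0.0);
}

subroutine(LightModel)
vec3 unlit(vec3 n)
{
    return vec3(1.0);
}

void main()
{
    fragColor = vec4(applyLight(normalize(normal)), 1.0);
}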
Historically, this has been solved by the aforementioned uber-shader approach: crack your head open figuring out what you might eventually need to use... sometimes... somewhere... for some effects. It is, in my opinion, fairly complicated, but it has been the industry standard for a while.

I am not the only one confused by this. The main benefit of deferred shading is that it allows lighting to be separated from "plain" material rendering. An admittedly rudimentary form of deferred shading was the industry standard for years, back when multitexturing was a high-end feature. (I am ready to be flamed for writing this.)
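To make that separation concrete, a hypothetical deferred lighting-pass fragment shader might look like this; the G-buffer layout and texture names are assumptions for the sketch.

// Full-screen lighting pass: all geometry/material data comes from
// the G-buffer written by the earlier material pass, so this shader
// knows nothing about materials beyond what was stored there.
#version 120
varying vec2 uv;
uniform sampler2D gAlbedo;      // rgb: diffuse color
uniform sampler2D gNormal;      // rgb: view-space normal, encoded 0..1
uniform sampler2D gPosition;    // rgb: view-space position
uniform vec3 lightPos;
uniform vec3 lightColor;

void main()
{
    vec3 albedo = texture2D(gAlbedo, uv).rgb;
    vec3 n      = normalize(texture2D(gNormal, uv).rgb * 2.0 - 1.0);
    vec3 p      = texture2D(gPosition, uv).rgb;

    vec3  toLight = lightPos - p;
    float atten   = 1.0 / (1.0 + 0.1 * dot(toLight, toLight));
    float NdotL   = max(dot(n, normalize(toLight)), 0.0);

    gl_FragColor = vec4(albedo * lightColor * NdotL * atten, 1.0);
}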

You might want to consider multi-pass techniques as well, but be warned that they also need quite a bit of supporting code to handle shaders generically.

You might also want to take a look at the RenderMan specification. Being a higher-level shading language, it has some slick ideas - especially when it comes to lighting - that might help you.

Previously "Krohm"

I give every object its own vertex/fragment shader pair.

Consider it pure joy, my brothers and sisters, whenever you face trials of many kinds, because you know that the testing of your faith produces perseverance. Let perseverance finish its work so that you may be mature and complete, not lacking anything.

This topic is closed to new replies.
