MattCa

Help with *you guessed it!* shaders..


Recommended Posts

Hi!
I'm not sure if this is the correct forum to post this, but I've run out of patience trying to crack what seems to be the complex vault that is shaders.

I've searched high and low for a good, simple, explanatory tutorial on HGSL and shaders. The HGSL part I consider myself to have covered, but the shader part continues to elude me.

What confuses me is what they're actually supposed to do. I've read so many descriptions, but many of them never tell me what shaders are used for. Many say effects, but is that all? I could make effects using point sprites, and they're a hell of a lot easier to use. What kinds of effects are shaders used for?

The second thing is how they work. I understand that a vertex shader takes a vertex and does... something to it, and that a pixel shader takes a pixel and does... something else to it. But my problem is: do shaders go through every single vertex/pixel in-game/on the screen?

The third thing is how I actually apply them. None of the tutorials I've read go into detail about applying shaders to anything other than pre-defined, hard-coded vertices. What about when I load a model from a .x file: at what stage do the vertex/pixel shaders take action, and how do I make them act on the model, or even the whole scene?

The fourth thing: seeing as shaders can modify color values (including the alpha channel), is this how a fog shader would be made? I get that somehow the vertex/triangle/scene/object's color is darkened/faded, but how do I actually tell how far away it is? Do I pass the camera's co-ordinates into the shader along with the vertex, and use that to calculate a distance? And another thing: what shader type would actually be used to do this?

The fifth, and hopefully final, thing: I've seen many of these tutorials talk about how lighting is applied with shaders, but this is completely different from how I create lighting. Instead, I use the D3DLIGHT9 structure to set the light's values and the SetRenderState function to apply them. What's the difference?

I know it's a rather large wall of text, and I don't blame you if you just hit the back button, but if anyone could answer my questions I'd be rather grateful!

Thanks!!!
Matt.




First of all, I've never heard of HGSL before. I'm not sure if you're referring to HLSL (the DirectX shading language) or GLSL (the OpenGL shading language), but I don't think HGSL is a real thing, which may be complicating your search efforts. Anyway...


[quote]
What confuses me is what they're actually supposed to do. I've read so many descriptions, but many of them never tell me what shaders are used for. Many say effects, but is that all? I could make effects using point sprites, and they're a hell of a lot easier to use. What kinds of effects are shaders used for?

The second thing is how they work. I understand that a vertex shader takes a vertex and does... something to it, and that a pixel shader takes a pixel and does... something else to it. But my problem is: do shaders go through every single vertex/pixel in-game/on the screen?
[/quote]

Shaders are a replacement for a large chunk of the graphics pipeline that used to be fixed-function. This makes rendering much more flexible and powerful than when you only had a few switches to toggle. If you want an idea of some things you can do with shaders, this is a pretty nice list from NVIDIA of effects you can achieve:

http://developer.download.nvidia.com/shaderlibrary/webpages/shader_library.html

They are essentially small programs that you write, with somewhat fixed inputs and outputs. The inputs and outputs differ depending on which kind of shader you mean. A vertex shader generally takes an input vertex and transforms it into screen space. After all the vertices of a primitive are transformed, the primitive is rasterized (converted into pixels) by the API, and then the fragment shader is executed on each pixel.

So if you draw a triangle that covers 200 pixels on the screen, your vertex shader will be run 3 times, and the fragment shader will be run 200 times.
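For example, a bare-bones GLSL-style vertex shader could look like the sketch below. The names here are just illustrative; the application supplies the combined transform matrix as a uniform:

uniform mat4 worldViewProj;   // model -> view -> projection, set by the application

in vec3 position;             // one vertex of the model, in model space

void main() {
    // Transform the vertex into clip space; the API then does the
    // perspective divide and rasterizes the resulting primitive.
    gl_Position = worldViewProj * vec4(position, 1.0);
}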


[quote]
The third thing is how I actually apply them. None of the tutorials I've read go into detail about applying shaders to anything other than pre-defined, hard-coded vertices. What about when I load a model from a .x file: at what stage do the vertex/pixel shaders take action, and how do I make them act on the model, or even the whole scene?
[/quote]
This varies a little depending on the API you're using, but generally you bind a pair of vertex/pixel shaders, and then every draw call you make while they are bound is processed by the shaders instead of the normal pipeline. You can apply different shaders to different objects by changing the bound shaders before rendering a particular object.


[quote]
The fourth thing: seeing as shaders can modify color values (including the alpha channel), is this how a fog shader would be made? I get that somehow the vertex/triangle/scene/object's color is darkened/faded, but how do I actually tell how far away it is? Do I pass the camera's co-ordinates into the shader along with the vertex, and use that to calculate a distance? And another thing: what shader type would actually be used to do this?
[/quote]
Yes, fog could be something that you replicate in a shader. After you transform the primitive to screen space, you know how far it is from the eye (you get this as part of the transform process), so you can feed a depth value into the fragment shader as an input. You then use this depth value in a function to determine how strong your fog should be. A small GLSL-style fog fragment shader could look like this:


uniform vec4 fogColor;    // constant for the whole draw: the color of the fog

in float fragDepth;       // fog factor: 0.0 = no fog, 1.0 = fully fogged
in vec4 vertexColor;

out vec4 fragColor;

void main() {
    // Linearly blend the surface color toward the fog color with depth.
    fragColor = (1.0 - fragDepth) * vertexColor + fragDepth * fogColor;
}


The "in" variables vary per fragment (they are interpolated across the triangle from the vertex shader's outputs), while uniforms are constant for the whole draw (like what color you want the fog to be, be it gray fog or red fog or yellow fog, etc).
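For completeness, the matching vertex shader would be the one that computes that fragDepth value. Here's a sketch, assuming a standard perspective projection; the fogStart/fogEnd uniforms are made-up names for distances the application would choose:

uniform mat4 worldViewProj;
uniform float fogStart;   // distance where fog begins (illustrative)
uniform float fogEnd;     // distance where fog is fully opaque (illustrative)

in vec3 position;
in vec4 color;

out float fragDepth;
out vec4 vertexColor;

void main() {
    gl_Position = worldViewProj * vec4(position, 1.0);
    // For a standard perspective projection, clip-space w is the
    // view-space depth, so map it into a 0..1 fog factor.
    fragDepth = clamp((gl_Position.w - fogStart) / (fogEnd - fogStart), 0.0, 1.0);
    vertexColor = color;
}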


[quote]
The fifth, and hopefully final, thing: I've seen many of these tutorials talk about how lighting is applied with shaders, but this is completely different from how I create lighting. Instead, I use the D3DLIGHT9 structure to set the light's values and the SetRenderState function to apply them. What's the difference?
[/quote]

Yes, once you move away from the fixed pipeline you won't use the fixed light structures anymore. You will define your own shader uniforms for things like "lightDirection" and "lightIntensity", and then use these values in your pixel shader calculations to compute the lighting on a fragment. And instead of SetRenderState, you'll use an API command to set program uniforms.
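For example, a minimal per-fragment diffuse (Lambert) lighting shader could look like the sketch below. The uniform names are made up; the application is assumed to supply a normalized light direction, and the vertex shader is assumed to pass along the surface normal:

uniform vec3 lightDirection;   // direction the light shines, normalized (illustrative)
uniform vec3 lightIntensity;   // RGB intensity of the light (illustrative)
uniform vec4 materialColor;    // replaces the D3D9 material/light state

in vec3 fragNormal;            // surface normal, interpolated from the vertex shader

out vec4 fragColor;

void main() {
    // Lambert's cosine law: brightness scales with the angle between
    // the surface normal and the direction toward the light.
    float diffuse = max(dot(normalize(fragNormal), -lightDirection), 0.0);
    fragColor = vec4(materialColor.rgb * lightIntensity * diffuse, materialColor.a);
}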

Thanks for the reply!!

Are there any resources available that actually list the types of lighting and the equations/code that can be used to create them?

Also, can you recommend any tutorials or books that might help?

Finally, what are your thoughts on the differences between HGSL and GLSL (specifically, how easy the basics are for a newcomer to pick up)? I'm still deciding whether to go with DirectX or OpenGL.

Thanks again!!

Regards,
Matt.


[quote]
Finally, what are your thoughts on the differences between HGSL and GLSL (specifically, how easy the basics are for a newcomer to pick up)? I'm still deciding whether to go with DirectX or OpenGL.
[/quote]
I've worked with both, and they are essentially the same complexity: they do the same things, just with slightly different syntax. Choose DirectX if you like Microsoft, OpenGL if you're interested in cross-platform/mobile. I prefer the OpenGL API over the DirectX one, though I'm probably biased because I used it first and have used it longest. Neither is really better than the other.

And again, it's HLSL, not HGSL.
