Pixel Shade EVERYTHING?

Recommended Posts

Now that I've got my shaders working at last (thanks for all your help, everyone :)), I was wondering: having used the FF pipeline up until now, should pixel shading be used sparingly, or should everything go through a pixel shader? Since every operation in the FF was per-vertex, is there a massive overhead from working at the pixel level? In other words, should I leave the pixel shading to only the important models, or pixel shade the entire world? Thanks

The overhead of a pixel shader depends on what you're doing. If you're simply doing standard rendering with no frills, then there's no difference from using the fixed function pipeline.

If your shader is costly, though, that changes everything.

So if I wanted to do a single directional light and a specular effect in the pixel shader, is that considered costly? I'm very new to shaders, so I'm not sure what is considered costly. Are you talking about multi-pass shaders, or simple stuff like this? I'm probably going to stick to 1.1 shaders, single-pass, for now anyway.

Quote:
Original post by cpcollis
Ok but what is a costly/complex shader considered to be?


It's entirely subjective: if it slows down your game / is the bottleneck, it's costly.

-me

Quote:
Original post by cpcollis
Now that I've got my shaders working at last (thanks for all your help, everyone :)), I was wondering: having used the FF pipeline up until now, should pixel shading be used sparingly, or should everything go through a pixel shader? Since every operation in the FF was per-vertex, is there a massive overhead from working at the pixel level? In other words, should I leave the pixel shading to only the important models, or pixel shade the entire world?

Thanks


The primary reason not to use pixel shaders for everything is hardware that doesn't have them. On almost all new hardware, FF is emulated by the driver creating a pixel shader under the covers which does whatever the FF state you set describes.

If you plan to target only hardware with programmable shading, then it is perfectly reasonable to use shaders exclusively.
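For instance, a quick caps check at startup tells you which path you can take (a minimal sketch; 'device' is assumed to be your IDirect3DDevice9):

D3DCAPS9 caps;
device->GetDeviceCaps(&caps);

if (caps.PixelShaderVersion < D3DPS_VERSION(1, 1))
{
    // No ps.1.1 support: fall back to the fixed function path
    // (or refuse to run, if you only ship the shader path).
}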

One thing you might want to consider doing (that can speed your engine up TREMENDOUSLY, depending on how you've implemented it already) is a z-only occlusion pass. Essentially, it works like this: first, render the whole scene with EVERYTHING off (no pixel shaders, no color writes, nothing) except z-writing. Next, set the Z test to 'Equal', so only pixels with the same Z as in the buffer get drawn. That way, if there are 500 objects in a row with really complex pixel shaders, the ones behind won't get drawn at all, and only the visible parts of the front ones will be rendered and 'pixel shaded'.

This can allow you to use relatively complex shaders while still maintaining a decent framerate, by eliminating overdraw and only running pixel shaders for visible pixels.

You have to remember not to draw anything with any kind of translucency during this phase, though; you'll still need to process those in the normal way after all opaque material has been processed.
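In D3D9 terms, the state setup might look like this (a sketch only; DrawOpaque is a hypothetical helper that issues all the opaque draw calls, and the second pass uses LessEqual, which passes the same pixels as Equal here; see the next reply for why):

// Hypothetical helper that issues draw calls for all opaque geometry.
void DrawOpaque(IDirect3DDevice9* dev);

void RenderWithZPrepass(IDirect3DDevice9* dev)
{
    // Pass 1: depth only. Color writes off, no pixel shader bound,
    // so filling the z-buffer is as cheap as possible.
    dev->SetRenderState(D3DRS_COLORWRITEENABLE, 0);
    dev->SetRenderState(D3DRS_ZWRITEENABLE, TRUE);
    dev->SetRenderState(D3DRS_ZFUNC, D3DCMP_LESSEQUAL);
    dev->SetPixelShader(NULL);
    DrawOpaque(dev);

    // Pass 2: full shading. Depth writes off; the test only passes
    // for the frontmost depths laid down in pass 1.
    dev->SetRenderState(D3DRS_COLORWRITEENABLE,
        D3DCOLORWRITEENABLE_RED | D3DCOLORWRITEENABLE_GREEN |
        D3DCOLORWRITEENABLE_BLUE | D3DCOLORWRITEENABLE_ALPHA);
    dev->SetRenderState(D3DRS_ZWRITEENABLE, FALSE);
    dev->SetRenderState(D3DRS_ZFUNC, D3DCMP_LESSEQUAL);
    DrawOpaque(dev);  // this time with real shaders and textures bound

    // Translucent geometry still goes afterwards, back to front,
    // with the normal depth test.
}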

Interestingly, ATI recommend against using the Equal and NotEqual depth tests (certainly with regard to OpenGL, and probably D3D as well; same hardware and all that), so setting the depth test to Equal might not give you the best performance.

Just something to keep in mind :)
(source is page 20 of the OpenGL Optimising guide in my sig)

Quote:
Original post by _the_phantom_
Interestingly, ATI recommend against using the Equal and NotEqual depth tests (certainly with regard to OpenGL, and probably D3D as well; same hardware and all that), so setting the depth test to Equal might not give you the best performance.

Just something to keep in mind :)
(source is page 20 of the OpenGL Optimising guide in my sig)

Well, I don't know if doing it in hardware is a good idea, but the renderer you need is EXTREMELY simple, so it could also be done in software to take some of the load off the GPU. (YannL implemented software occlusion culling via a similar method, but instead of working per-pixel as you would in hardware, you draw each pixel into a software z-buffer and only read back 'is this object visible at all'.)
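Purely to illustrate the readback idea (this is NOT YannL's actual implementation; a heavily simplified sketch where occluders are splatted as screen-space rectangles they are known to fully cover, at their farthest depth, and occludees are tested as bounding rectangles at their nearest depth, so errors only ever make an object count as visible):

#include <vector>
#include <algorithm>

struct SoftOcclusionBuffer
{
    int w, h;
    std::vector<float> depth;            // 1.0f == far plane

    SoftOcclusionBuffer(int width, int height)
        : w(width), h(height), depth(width * height, 1.0f) {}

    void Clear() { std::fill(depth.begin(), depth.end(), 1.0f); }

    // Splat an occluder rectangle at its farthest depth (conservative).
    void AddOccluder(int x0, int y0, int x1, int y1, float zMax)
    {
        x0 = std::max(x0, 0);      y0 = std::max(y0, 0);
        x1 = std::min(x1, w - 1);  y1 = std::min(y1, h - 1);
        for (int y = y0; y <= y1; ++y)
            for (int x = x0; x <= x1; ++x)
                depth[y * w + x] = std::min(depth[y * w + x], zMax);
    }

    // The per-object readback: is any covered pixel still closer to
    // the camera than what the occluders wrote there?
    bool IsVisible(int x0, int y0, int x1, int y1, float zMin) const
    {
        x0 = std::max(x0, 0);      y0 = std::max(y0, 0);
        x1 = std::min(x1, w - 1);  y1 = std::min(y1, h - 1);
        for (int y = y0; y <= y1; ++y)
            for (int x = x0; x <= x1; ++x)
                if (zMin <= depth[y * w + x])
                    return true;
        return false;
    }
};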

You could sort your meshes by depth using the center of each mesh, and then render the list front to back, so you reduce overdraw when rendering your Z pass.

You could probably use this with your final render pass directly as well. Maybe implement a system that judges the complexity of your renderer and lets it decide on its own which way is faster; have the engine create a <mapname>.cfg the first time you run the map.
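Something along these lines, say (a sketch; MeshEntry and its fields are made up):

#include <algorithm>
#include <vector>
#include <d3dx9.h>

// Hypothetical per-mesh record; 'center' is the world-space center.
struct MeshEntry
{
    D3DXVECTOR3 center;
    float       viewDepth;   // recomputed each frame
    // ... mesh data ...
};

static bool CloserFirst(const MeshEntry& a, const MeshEntry& b)
{
    return a.viewDepth < b.viewDepth;
}

// Sort front to back by the view-space depth of each mesh center, so
// the Z pass rejects as many pixels of the later meshes as possible.
void SortFrontToBack(std::vector<MeshEntry>& meshes, const D3DXMATRIX& view)
{
    for (size_t i = 0; i < meshes.size(); ++i)
    {
        D3DXVECTOR4 v;
        D3DXVec3Transform(&v, &meshes[i].center, &view);
        meshes[i].viewDepth = v.z;   // depth along the view axis
    }
    std::sort(meshes.begin(), meshes.end(), CloserFirst);
}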

If you want to do textures in shaders, do you have to use a pixel shader? In other words, is there any way to do it using just the vertex shader and not use a pixel shader at all?

A couple of points:

1) Always use Z_LESS_EQUAL. It works for the first and later passes, and you never have to worry about hardware that doesn't like == or !=.

2) Sorting meshes by depth is almost never a win in my experience. If you already group by shader, you can try to sort those with the same shader by depth if you must.

3) My engine performs dynamic shadowing, diffuse & specular bump mapping, and glow with about 4 passes of 10-instruction ps.1.1 shaders, at ~80 fps at 800x600 on a GeForce 5700 Ultra. 1.1 shaders are fast enough to put on everything IF you stay away from dependent reads (texm3x2tex, texm3x3vspec, texbem, etc.).

A single frame of dependent reads (texbem) got only 85 fps on a GeForce 4 Ti 4400 at 1024x768, and that was doing nothing else. So if you have lots of fancy water and are targeting dx8 cards, be careful.

Also, really good specular is hard to do fast.

If you are targeting X800 & 6800 cards, ps.2.0 shaders of 20+ instructions are fast enough to cover the entire screen several times.

@SimmerD: I didn't mean sorting the meshes by shader and then by depth, but sorting them into a separate list by depth. One could create a linked list of all meshes.

Let's say your farplane-nearplane range is 4096 units. You could create 8 pointers as entry points into the depth list, one every 512 units, so you don't have to traverse the entire list every time you want to add a mesh.

This should work pretty well for complex scenes, since overdraw is a lot higher there.
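A rough sketch of what I mean (names made up; one sorted list per 512-unit slab rather than literal pointers into a single list, which amounts to the same thing):

#include <list>

// 'Mesh' stands in for your mesh type; viewDepth is its camera depth.
struct Mesh { float viewDepth; /* ... */ };

class DepthBuckets
{
    static const int kBuckets = 8;       // 4096 / 512
    std::list<Mesh*> buckets[kBuckets];  // each kept sorted near-to-far

public:
    void Insert(Mesh* m)
    {
        int b = static_cast<int>(m->viewDepth / 512.0f);
        if (b < 0) b = 0;
        if (b >= kBuckets) b = kBuckets - 1;

        // Only walk the meshes inside this slab, not the whole list.
        std::list<Mesh*>& slab = buckets[b];
        std::list<Mesh*>::iterator it = slab.begin();
        while (it != slab.end() && (*it)->viewDepth < m->viewDepth)
            ++it;
        slab.insert(it, m);
    }

    // Walk everything front to back for the Z pass.
    template <typename Fn>
    void ForEachFrontToBack(Fn fn)
    {
        for (int b = 0; b < kBuckets; ++b)
            for (std::list<Mesh*>::iterator it = buckets[b].begin();
                 it != buckets[b].end(); ++it)
                fn(*it);
    }
};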

Quote:
Original post by Basiror
This should work pretty well for complex scenes, since overdraw is a lot higher there.

The overhead of switching shaders is almost certainly going to be greater than any gain from early z-rejection.

Quote:
Original post by cpcollis
If you want to do textures in shaders, do you have to use a pixel shader? In other words, is there any way to do it using just the vertex shader and not use a pixel shader at all?


I am using .fx files, which allow you to use shaders and set renderstates as well. MS doesn't recommend it, but you can use a vertex shader for T&L and then use the fixed-function texturing stages if you want. The docs say that there will be z-fighting, but I haven't seen any artifacts as of yet.

Jason Z

Thanks Jason. I think I've realised something bad in my engine design. Everyone is probably going to cringe at this, but currently I pass the Device (DirectX) object to anything that needs to be drawn, i.e. MyModel.Draw(device), and it uses the device to draw its primitives, set materials, textures, etc. I guess I'm going to have to change everything so it doesn't draw using the device object, since it's now the shaders that need parameters set. As an aside, when you do device.SetTexture(myTexture) it still works with pixel shaders, but all the examples I've seen set the texture on the effect, e.g. myEffect.SetValue("myTexture", myTexture). Is this the same thing? Or does setting the texture through the device slow things down? (I haven't noticed any change.)

Thanks again, this thread is helping greatly with my design and understanding of shaders :)

Off Topic:
It is almost scary how closely you are following the same steps that I took (I think I am only about two months ahead now!). I used to pass the device around to all my objects as well, which is bad for many reasons (which you already stated), but I switched when I moved to effect files!

Back on topic:
It is essentially the same thing, but if you are going to be using effect files, you really should use the SetTexture method. As noted before, even fixed-function pipeline states are defined in the .fx file, so it only makes sense to bind the texture to the effect itself rather than to the device.

So as a global parameter in your effect file you would have:

texture myTexture_1;

then in your application code before rendering you would have:

MyEffect->SetTexture("myTexture_1", myTexture);

Then render your geometry.

Good Luck,

Jason Z

Hehe, I guess it's a narrow path to game-engine guru ;)

I'm still having trouble getting my head round passing Effect objects about, since nothing is strongly typed; MyEffect->SetValue("SomeRandomStringGoesHere", someValue); just doesn't feel right. What if the effect file changes, and the name of the property with it? I'm also hitting the wall of supporting the old FF pipeline as well: everything that can be drawn will need to be able to draw using shaders AND the FF, and for some objects that means a lot of code duplication.
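The best I've come up with so far is to look every name up once when the effect loads and keep the handles around, so a renamed property fails at load time instead of quietly every frame. Something like this (names made up):

#include <d3dx9.h>

// Resolve the string names once, at load time. If the .fx file changes
// and a name disappears, Bind() fails immediately.
struct BasicEffectParams
{
    ID3DXEffect* fx;
    D3DXHANDLE   worldViewProj;
    D3DXHANDLE   diffuseTexture;

    bool Bind(ID3DXEffect* effect)
    {
        fx             = effect;
        worldViewProj  = fx->GetParameterByName(NULL, "WorldViewProjection");
        diffuseTexture = fx->GetParameterByName(NULL, "DiffuseTexture");
        return worldViewProj != NULL && diffuseTexture != NULL;
    }

    void SetWorldViewProj(const D3DXMATRIX& m) { fx->SetMatrix(worldViewProj, &m); }
    void SetDiffuse(IDirect3DTexture9* t)      { fx->SetTexture(diffuseTexture, t); }
};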

Perhaps there is a simpler method I'm not seeing?

Here is a sample .fx file - maybe seeing it will benefit you.


//-----------------------------------------------------------------------------
// DiffuseTexture.fx
//-----------------------------------------------------------------------------

//-----------------------------------------------------------------------------
// Uniform parameters
//-----------------------------------------------------------------------------
float4x4 Transform;
texture DiffuseTexture;
//-----------------------------------------------------------------------------

//-----------------------------------------------------------------------------
// Samplers
//-----------------------------------------------------------------------------
sampler Sampler = sampler_state
{
    Texture   = (DiffuseTexture);
    MipFilter = LINEAR;
    MinFilter = LINEAR;
    MagFilter = LINEAR;
};
//-----------------------------------------------------------------------------

//-----------------------------------------------------------------------------
// Structures
//-----------------------------------------------------------------------------
struct vertex
{
    float3 position : POSITION;
    float3 normal   : NORMAL;
    float2 texcoord : TEXCOORD0;
};
//-----------------------------------------------------------------------------
struct fragment
{
    float4 position : POSITION;
    float2 texcoord : TEXCOORD0;
    float4 color    : COLOR0;
};
//-----------------------------------------------------------------------------
struct pixel
{
    float4 color : COLOR;
};
//-----------------------------------------------------------------------------

//-----------------------------------------------------------------------------
// Vertex shaders
//-----------------------------------------------------------------------------
fragment myvs( vertex IN )
{
    fragment OUT;

    OUT.position = mul( Transform, float4(IN.position, 1) );
    OUT.color    = float4(1.0, 1.0, 1.0, 1.0);
    OUT.texcoord = IN.texcoord;

    return OUT;
}
//-----------------------------------------------------------------------------

//-----------------------------------------------------------------------------
// Techniques
//-----------------------------------------------------------------------------
technique Textured
{
    pass Pass0
    {
        ZEnable = true;

        VertexShader = compile vs_1_1 myvs();
        PixelShader  = null;

        Texture[0] = <DiffuseTexture>;

        // Blending stage 0
        ColorOp[0]   = SelectArg1;
        ColorArg1[0] = Texture;
        ColorArg2[0] = Diffuse;

        AlphaOp[0]   = SelectArg1;
        AlphaArg1[0] = Diffuse;
        AlphaArg2[0] = Texture;

        // Blending stage 1
        ColorOp[1] = Disable;
        AlphaOp[1] = Disable;
    }
}
//-----------------------------------------------------------------------------
technique WireFrame
{
    pass P0
    {
        // Use the transformation vertex shader
        VertexShader = compile vs_1_1 myvs();
        PixelShader  = null;

        // Disable texturing
        ColorOp[0] = Disable;
        AlphaOp[0] = Disable;

        // Set wireframe fillmode
        FillMode = Wireframe;

        // Set ambient to white
        Lighting              = true;
        Ambient               = {1.0, 1.0, 1.0, 1.0};
        MaterialAmbient       = {1.0, 1.0, 1.0, 1.0};
        AmbientMaterialSource = Material;
    }
}
//-----------------------------------------------------------------------------




Essentially, once the effect is loaded, you just have to do two things:

1) Set the uniform parameters
2) Send your geometry with one of the Draw calls (see the sketch below)
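For instance, drawing with the 'Textured' technique above comes down to roughly this ('effect' is your loaded ID3DXEffect; 'device', 'wvp', 'texture' and the primitive counts are assumed to exist in your code; older D3DX versions use Pass() instead of BeginPass()/EndPass()):

effect->SetTechnique("Textured");
effect->SetMatrix("Transform", &wvp);           // already transposed
effect->SetTexture("DiffuseTexture", texture);

UINT numPasses = 0;
effect->Begin(&numPasses, 0);
for (UINT i = 0; i < numPasses; ++i)
{
    effect->BeginPass(i);
    device->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0,
                                 numVertices, 0, numTriangles);
    effect->EndPass();
}
effect->End();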

Keep in mind that there is more to it than just willy-nilly setting of texture names and different variables. Each uniform parameter is mapped to a specific register in the GPU; the text name is more for us to understand than anything else.

Also, that is not to mention the use of annotations and semantics, which keep the meaning of the variables compatible across different effect files. Read up on them, and if you have any questions, post them back up here.

Good Luck,

Jason Z

I didn't know you could do things like "Lighting = true" in a technique. I was thinking about how the FF pipeline uses the Material object to set properties like Diffuse, Specular, etc. before drawing; I was planning on implementing something similar in vertex shaders so that those properties could be taken into account.

That sample is interesting. I was aiming to have lots of variables in my .fx file and set them at runtime, e.g.:


m_Effect.SetValue("WorldViewProjection", worldViewProj);
m_Effect.SetValue("World", m_World.Map.Camera.WorldMatrix);
m_Effect.SetValue("DirectionalLight1", new Vector4(-0.6f, 0.08f, -0.82f, 1));
m_Effect.SetValue("DirectionalLight2", new Vector4(0.3f, -1, 1, 1));
m_Effect.SetValue("AmbientColor", new Vector4(0.4f, 0.4f, 0.4f, 1));
m_Effect.SetValue("LightColor", new Vector4(0.4f, 0.4f, 0.4f, 1));
m_Effect.SetValue("MaterialColor", new Vector4(1, 1, 1, 1));




But from your example, such things are more or less hard-coded into the .fx file. Is this the preferred method? I guess it is easier to modify an .fx file than the source code of the engine. I am starting to get the feeling that HLSL should be treated as part of the engine, simply written in a different language, rather than as a property of it.

Will the code you posted work with DirectX as-is? I find it hard to know what code will and won't work when there seem to be quite a few contradicting standards. For example, in MSDN they always use the following to define the world x projection x view matrix:

WorldProjectionView : WORLDPROJECTIONVIEW

yet I have also seen examples where the "WORLDPROJECTIONVIEW" after the colon is skipped, such as your own. In fact, I don't even know why it is there; it doesn't seem to serve any purpose.

Thanks for your help here Jason, I want to set off down the right track; I've already boxed myself into some corners that have taken a while to code out of, often leaving a messy trail behind. But hey, that's all part of the learning process :)

The word after the colon is called a semantic. You can find a good explanation of them in the DX docs.

Remember that .fx files are very flexible. You can hard-code values or use runtime variables wherever you want, so it can be adjusted to your needs.

The file should work "as is"; just remember when setting the Transform matrix that it is the WorldViewProjection, and that you transpose the matrix before setting it. Let me know if you have trouble getting it to work.
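To illustrate both points at once, suppose you gave the parameter a semantic in the file, i.e. float4x4 Transform : WORLDVIEWPROJECTION; (the sample above omits it). Then the application side could look like this ('world', 'view' and 'proj' being your usual D3DXMATRIX values):

// Look the parameter up by its semantic, so renaming 'Transform'
// inside the .fx file breaks nothing on the application side.
D3DXHANDLE hWvp = effect->GetParameterBySemantic(NULL, "WORLDVIEWPROJECTION");

// Build world * view * projection and transpose it before setting,
// as noted above.
D3DXMATRIX wvp = world * view * proj;
D3DXMATRIX wvpT;
D3DXMatrixTranspose(&wvpT, &wvp);
effect->SetMatrix(hWvp, &wvpT);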

Jason Z

