[DX10] Dynamic multisample texture resolves

Started by Hyunkel
23 comments, last by Hyunkel 13 years, 9 months ago
I'm trying to implement anti-aliasing in my deferred renderer using DX10 custom texture resolves.

Say I have textures with 4 samples:
Texture2DMS<float, 4> GDepth;
Texture2DMS<float4, 4> GNormal;


With custom resolves:
float4 light = float4(0, 0, 0, 0);
for (int i = 0; i < 4; i++) {
  depth = GDepth.Load(texCoord, i);
  normal = GNormal.Load(texCoord, i);
  //...
  light += pointLight(position, normal, ...);
}
return light / 4;


At least that's the theory, I haven't tried it yet.
My question is: is it possible to make the sample count dynamic?
I don't want to write different shaders for each level of sampling I want to offer.

I'd like to do something like:
Texture2DMS<float, sampleCount> GDepth;
Texture2DMS<float4, sampleCount> GNormal;
//...
float4 light = float4(0, 0, 0, 0);
for (int i = 0; i < sampleCount; i++) {
  depth = GDepth.Load(texCoord, i);
  normal = GNormal.Load(texCoord, i);
  //...
  light += pointLight(position, normal, ...);
}
return light / sampleCount;


Is there a way to do this?
I don't see how I could pass a variable to my shader and use it as early as in the texture declaration.

Cheers,
Hyu
AFAIK you'll have to create several permutations of your shader. If you're using the effects framework you could make it nicer by making a shader array, but if you're just using raw shaders then you'll just have to use a #define.
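For example, a minimal sketch of the #define route (SAMPLE_COUNT, GPosition and the light constants are made-up names, and world-space position is stored directly instead of being reconstructed from depth, just to keep the sketch short and compilable). You'd compile this once per MSAA level you want to support, each time with a different SAMPLE_COUNT:

#ifndef SAMPLE_COUNT
#define SAMPLE_COUNT 4                        // overridden per permutation at compile time
#endif

// Made-up G-buffer layout for this sketch: world-space position stored
// directly instead of being reconstructed from a depth buffer.
Texture2DMS<float4, SAMPLE_COUNT> GNormal;
Texture2DMS<float4, SAMPLE_COUNT> GPosition;

cbuffer LightParams
{
    float3 LightPos;
    float3 LightColor;
};

float4 ResolvePointLightPS(float4 screenPos : SV_Position) : SV_Target
{
    int2 texCoord = int2(screenPos.xy);
    float3 light = float3(0, 0, 0);

    [unroll]
    for (int i = 0; i < SAMPLE_COUNT; i++)
    {
        float3 normal   = normalize(GNormal.Load(texCoord, i).xyz);
        float3 position = GPosition.Load(texCoord, i).xyz;

        // Plain diffuse point light, just so the sketch is complete
        float3 toLight = LightPos - position;
        float  atten   = 1.0f / (1.0f + dot(toLight, toLight));
        light += LightColor * saturate(dot(normal, normalize(toLight))) * atten;
    }

    return float4(light / SAMPLE_COUNT, 1.0f);
}

Since SAMPLE_COUNT is a compile-time constant, it's legal in the Texture2DMS declaration and the resolve loop can be unrolled.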
Alright, I'll do that then.

As for #defines: I'm using the effects framework, but I'd like to use #defines in my shaders to reduce the number of permutations.
However, I can't seem to find a resource that explains how to do so.

For example, if I want to do something like:

#if USELINEARDEPTH
do stuff...
#else
do other stuff...
#endif


How do I actually set USELINEARDEPTH from code before compiling the shader?

Thanks,
Hyu
You calculate the value of the light for each sample...
Wouldn't it be better to calculate the average depth/normal for each pixel and then calculate the light?

Like this:
float depth = 0.0f;
float3 normal = float3(0.0f, 0.0f, 0.0f);
for (int i = 0; i < 4; i++) {
  depth += GDepth.Load(texCoord, i);
  normal += GNormal.Load(texCoord, i).xyz;
}
depth /= 4;
normal /= 4;
//...
float4 light = pointLight(position, normal, ...);


I guess this approach gives equal results and better performance.
I'm pretty sure that won't work properly, but I'm going to try it anyways.
Doing so would basically average the geometry data, which is not what a forward renderer does.
Please, post the results after you try.
Quote:Original post by Aqua Costa
You calculate the value of the light for each sample...
Wouldn't it be better to calculate the average depth/normal for each pixel and then calculate the light?

I guess this approach gives equal results and better performance.


That will absolutely not give equal results. Lighting is a non-linear function of the depth and normal, so averaging the G-buffer and then lighting once is not the same as lighting each sample and averaging the results, especially at geometry edges where the averaged data describes a surface that isn't really there.
Quote:Original post by Hyunkel
Alright, I'll do that then.

As for #defines: I'm using the effects framework, but I'd like to use #defines in my shaders to reduce the number of permutations.
However, I can't seem to find a resource that explains how to do so.

For example, if I want to do something like:

*** Source Snippet Removed ***

How do I actually set USELINEARDEPTH from code before compiling the shader?



Well, to define a macro from code you just pass an array of D3D10_SHADER_MACROs to D3DX10CreateEffectFromFile/Memory/Resource. It's really simple: you set .Name to the name of the macro (in this example "USELINEARDEPTH") and you set .Definition to the value you want (in this case, "1"). Then you terminate the array with a D3D10_SHADER_MACRO with NULL for both values.

You can also look in the docs at the section entitled "Compile an Effect (Direct3D 10)".
Thanks, that was exactly what I needed to know :)

I'll test it out tomorrow, but I'm expecting quite a performance hit when doing this: after all, I'll be multiplying the lighting calculations by the multisampling level.

I'm thinking about running a Sobel filter for edge detection and only applying multiple resolves to those pixels. Do you reckon it's worth doing?
The edge detection pass will take up quite a bit of performance itself.
No need for a Sobel filter or anything like that; you can use centroid sampling to determine edge pixels. See this sample.
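The basic idea (a rough sketch with made-up names; the linked sample does it properly) is to interpolate the same value twice, once with the centroid modifier. The two only differ when the pixel center falls outside the triangle, which is exactly the partially covered edge pixels:

struct GBufferPSInput
{
    float4 Position           : SV_Position;
    float2 TexCoord           : TEXCOORD0;
    centroid float2 TexCoordC : TEXCOORD1;   // same value as TexCoord, written by the VS
};

// Returns 1 for partially covered (edge) pixels, 0 for fully covered ones.
float IsEdgePixel(GBufferPSInput input)
{
    return any(input.TexCoord != input.TexCoordC) ? 1.0f : 0.0f;
}

You write that flag into a spare G-buffer channel during the G-buffer pass, and in the lighting pass you only loop over all the subsamples where it's set, lighting a single sample everywhere else.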

