DX11 FXAA and Color Space

Recommended Posts

jerrinx    208
Hey guys,

I know there were a few topics on FXAA already, but they didn't help with my problem, hence the new thread.

The problem is that I don't see any difference between the FXAA and non-FXAA renders.

Then again, I am not passing a non-linear color space texture to the FXAA shader, and I'm not sure how to. If my understanding is correct, linear color space is what you get when you sample a texture (0 to 1, or 0 to 255 range) where the values map linearly to intensity.

I am not sure what sRGB is about.
Currently my texture is in the RGBA8 DX format.

According to the Fxaa 3.11 release by Timothy Lottes:
[b]"Applying FXAA to a framebuffer with linear RGB color will look worse. This is very counter intuitive, but happens to be true in this case. The reason is because dithering artifacts will be more visible in a linear colorspace."[/b]

The FXAA paper mentions the following for DX9 (which is what I am working on):

// sRGB->linear conversion when fetching from TEX
SetSamplerState(sampler, D3DSAMP_SRGBTEXTURE, 1); // on
SetSamplerState(sampler, D3DSAMP_SRGBTEXTURE, 0); // off
// linear->sRGB conversion when writing to ROP
SetRenderState(D3DRS_SRGBWRITEENABLE, 1); // on
SetRenderState(D3DRS_SRGBWRITEENABLE, 0); // off

This is what I am doing:
1. Render to a texture with D3DRS_SRGBWRITEENABLE = 1, and turn it off when done. When I render this texture, it looks brighter than usual.
2. Render a screen quad with this texture using D3DSAMP_SRGBTEXTURE = 1, and turn it off when done. When this renders, it looks correct.

But the aliasing still remains. I figured I shouldn't be doing step 2, because that would convert the non-linear color back to linear while sampling, but skipping it just gives me the over-bright scene from step 1.

I have attached my shaders here.
Any help is greatly appreciated.

P.S. Timothy also mentioned that the pixel offset on DX9 differs from DX11 by 0.5 of a pixel.

Thanks a lot !


Ashaman73    13715
[quote name='jerrinx' timestamp='1343587522' post='4964290']
The problem is, I don't see any difference between the FXAA and Non-FXAA Render.
[/quote]
Could you post some comparison screenshots?

jerrinx    208
Oh, I forgot about the pics.

I have an abstraction layer over DirectX 9 and OpenGL 2, so the same window in the pics supports both.

I got hold of a GLSL shader for FXAA and integrated it on the OpenGL side (I'm not sure what version it is, but it appears to be from the same source).

It seems to work, but when I made the same input changes to the DirectX HLSL version, it still doesn't work.

I am reattaching all the shaders and the screenshots here.

Note the difference in the DirectX screenshots only; the window title holds the name of the renderer. When I use GLSL it works fine.

Jerry

Ashaman73    13715
The major difference between DX9 and OGL2 is that DX9 is, well, DX9, while OGL2 can be extended with extensions. The FXAA shaders use a lot of extensions. I would test it on a higher DX version; maybe the FXAA implementation has reached its limits on DX9, or you should choose a preset better suited to DX9.

Hodgman    51223
The problem with DX9 is that it has the "half pixel offset problem", which will wreak havoc with any kind of post-processing shader.

The official description of the problem is in the MSDN article "Directly Mapping Texels to Pixels".

In the versions of FXAA that I've used, it expects you to account for this flaw in DX9 yourself and either shift your full-screen vertices or your texture coordinates by half a pixel, as described in the MSDN link, so that the pixel shader receives the same interpolated values as it would in GL/DX10/DX11.

hflong325    247
If you want to understand linear color space, try googling "gamma correction". Generally speaking, on a monitor the relationship between luminance and color value is not linear but follows an exponential curve. So, in order for the output to look right, texture colors are pre-corrected by an exponent as well. Although you see the correct color on screen, the color values you work with beforehand are non-linear. This can cause problems in some algorithms, but in my work FXAA works well in non-linear color space.

jerrinx    208
Hey, thanks guys.

Like Hodgman said, my initial setup was half a texel off on DX9. After fixing that, the texture still renders with the old jaggy pattern.

Initially I noticed some difference between render-to-texture and normal rendering on DX9; now both look the same, so I am assuming the half-texel problem is accounted for.

I used this article to understand the color space stuff

I tried to use the color space setup from the Fxaa_3.11 header, but still no dice.

Reattaching updated shaders.

lol... Maybe I should go for SMAA.
Fxaa3 (lower quality, 0.62 ms) vs SMAA T2x (higher quality, 1.32 ms).
Hard to decide.

Thanks guys
Jerry

hflong325    247
I've read your code; you need to #define FXAA_GREEN_AS_LUMA 1, otherwise it will take the alpha of the texture color as luminance, and since your input alpha is always 1, nothing will change. Your parameters also seem to have some problems; try reading the annotations and checking them.

jerrinx    208
I managed to port the GLSL FXAA code to HLSL. It seems to work.

Attaching it for people who need it. It looks simpler than Fxaa 3.11; I'm guessing it's an older version, but it performs better. I tried it with and without luma; it works in both cases.

On DX9 you need to do color space conversion, in order to use it correctly.

I do something like this:

Stage 0
Render Main Scene to texture
- Texture Read convert - SRGB to Linear
- Texture Write convert - None

Stage 0.5 (Optional, if filling alpha with luma)
Render Texture to Texture
- Texture Read convert - None
- Texture Write convert - None

Stage 1
Render Texture to Monitor using FXAA
- Texture Read convert - None
- Texture Write convert - Linear to SRGB (as Monitor requires SRGB format)

Hopefully that clarifies some stuff.

For now this works.

@Dragon. I couldn't get Fxaa 3.11 to work, though. I rechecked all the variables and I'm not sure what the problem is; if you find something wrong, please tell me. I'm attaching the updated old one again, plus some of the code in case someone wants to see it.

I have a question.
FPS drops as follows (@1080p):
- Normal render (1500 FPS)
- Render to texture (1000 FPS)
- Texture to screen with FXAA (500 FPS)
Is that normal?

Jerry

