DX11 Demo/Idea: Cache pixel triangles and adaptively supersample it later

jameszhao00    271
I was doing some work for my game architecture final project and came up with the following AA idea.
It works with subpixel information, and the current implementation is pretty expensive.
I didn't do a lot of research into this area, so if it's already been proposed, think of it as an implementation demo.

The idea is to cache, per pixel, the triangle being shaded. Later, we selectively supersample occlusion.
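To make the "cache the triangle per pixel" step concrete, here is a small CPU-side sketch of how the three NDC-space vertices could be quantized for a SNORM render target. This is an illustration, not the demo's actual code; the helper names and the 16-bit-per-component quantization are assumptions (the demo stores float3s in an R32G32B32A32_FLOAT texture, with SNORM storage as the planned optimization).

```python
# Hypothetical sketch of the per-pixel triangle cache write.
# NDC coordinates live in [-1, 1], which is exactly the range a SNORM
# format represents, so each component can be clamped and quantized.

def encode_snorm16(v: float) -> int:
    """Quantize a value in [-1, 1] to a signed 16-bit SNORM integer."""
    v = max(-1.0, min(1.0, v))  # clamp to the representable NDC range
    return int(round(v * 32767.0))

def decode_snorm16(q: int) -> float:
    """Recover the approximate NDC value from its SNORM encoding."""
    return max(-1.0, q / 32767.0)

def cache_triangle(ndc_verts):
    """Pack a triangle's three float2 NDC vertices for the per-pixel cache."""
    return [(encode_snorm16(x), encode_snorm16(y)) for (x, y) in ndc_verts]
```

The quantization error (about 1/32767 in NDC, i.e. far below a pixel at any reasonable resolution) is why SNORM storage should be safe for the later containment tests.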

Requires a DX11 card. Black spots are artifacts from small triangles (visible around the head and when zooming out too far).
The 'AA Viz' option visualizes the triangles in the (AA Viz X, AA Viz Y) pixel.
Use the DX option to simulate subpixel movement.
Shader source is in shaders/aa.hlsl.



The algorithm:

1. Render the scene and store each pixel's triangle vertices.
   - Store 3 float2s in NDC space in 32-bit SNORM textures. NDC space because it's easy to get in the pixel shader, and later on we can do 2D triangle-point containment tests instead of 3D triangle-ray intersection tests.
   - Currently I store 3 float3s in R32G32B32A32_FLOAT textures (for debugging/laziness reasons).
   - Currently I render with no MSAA.
2. Render a supersampling mask.
   - Currently I just render an AA wireframe.
3. In the AA pass, for each pixel that we wish to supersample:
   - Collect every triangle from the neighboring pixels, up to a total of 9 triangles. When the triangles aren't tiny, these 9 triangles will cover the entire pixel.
   - Collect every depth and color from the neighboring pixels.
   - For every sample point (16 in the demo):
     - Perform a 2D triangle-point containment test against every triangle.
     - Find the closest triangle (using the neighboring depth values). This is a pretty crude approximation.
     - Accumulate the color for that triangle (using the neighboring color values).
   - Normalize by the number of samples hit.

Notes:
- There's a background triangle that covers the entire NDC cube, so mesh-background edges work.
- However, in the interior we can get gaps where small triangles, which aren't necessarily stored, exist.

Pros:
- Fixed storage cost for arbitrary sampling density.
- Stores partial subpixel information.
- Works well for long edges.

Cons:
- Heavy initial memory cost (storing 3 float2s, right now float3s, per pixel).
- Extremely ALU intensive due to the point-triangle containment tests. Currently I use some generic algorithm; hopefully better ones exist.
- Artifacts with small (subpixel to a few pixels) triangles.
- Incorrect (though minor) blending.

Future work:
- An alternative triangle representation. We only need local triangle information relative to the pixel.
- Much cheaper 2D triangle-point containment tests. Currently the major demo bottleneck.
- Actually adaptively supersample. Right now it's binary.
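The resolve step above can be sketched on the CPU as follows. This is a minimal illustration under my own assumptions, not the demo's shader: the containment test uses signed edge functions (one common "generic algorithm" for this), and the neighborhood triangles, depths, colors, and sample points are assumed to have been fetched already. All function names are illustrative.

```python
# CPU sketch of the AA resolve pass: for each subpixel sample, find the
# nearest triangle (by cached neighbor depth) that covers it, take that
# triangle's cached color, then normalize by the number of samples hit.

def edge(a, b, p):
    """Signed area term; its sign says which side of edge a->b the point p is on."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def contains(tri, p):
    """2D triangle-point containment: inside if all three edge signs agree."""
    s0 = edge(tri[0], tri[1], p)
    s1 = edge(tri[1], tri[2], p)
    s2 = edge(tri[2], tri[0], p)
    return (s0 >= 0 and s1 >= 0 and s2 >= 0) or (s0 <= 0 and s1 <= 0 and s2 <= 0)

def resolve_pixel(triangles, depths, colors, sample_points):
    """Resolve one pixel from up to 9 cached neighborhood triangles."""
    accum = [0.0, 0.0, 0.0]
    hits = 0
    for p in sample_points:
        best = None  # index of the nearest triangle covering this sample
        for i, tri in enumerate(triangles):
            if contains(tri, p) and (best is None or depths[i] < depths[best]):
                best = i
        if best is not None:  # samples covered by no cached triangle are gaps
            hits += 1
            for c in range(3):
                accum[c] += colors[best][c]
    if hits == 0:
        return (0.0, 0.0, 0.0)  # the black-spot artifact case
    return tuple(a / hits for a in accum)
```

Note that the "gap" branch (no triangle covers a sample) is exactly where the black spots from small, unstored triangles come from, and using one depth per triangle rather than interpolated depth is the crude approximation mentioned above.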
Any suggestions? Is this interesting, or is this stupid?

Sirisian    2263
Without analyzing what you've done in depth, I'd look into FXAA, which seems to have taken over as the go-to post-processing AA effect.

You mentioned "works well for long edges", which I remember reading was a problem for FXAA. It might be useful to do a comparison against FXAA to see whether your method is higher quality.


