
DX11 Atlased Textures and Mip-Mapping


ambershee    532

Hi guys,

 

I've got a nasty little conundrum that I'm hoping folks may be able to help with. For the purposes of example, assume I am texturing a terrain, and need to atlas a number of textures together to avoid reaching the sampler limit - array textures and volume textures are not available resources.

 

We can atlas simply enough, but we get nasty seams where textures border each other, because bilinear filtering samples across cell boundaries. If we pad the atlased textures with artificial borders, we still get artifacts at high-incidence viewing angles, where the larger filter footprint reaches mip levels the borders can't cover.

 

We've looked at alternative sampling techniques, but haven't found a viable solution. Our DX9 solution uses tex2Dgrad, but we're primarily working in DX11, where tex2Dgrad isn't available. Due to a limitation in our shader system we're unable to pass or declare a sampler state, so we can't use the updated equivalent, SampleGrad, either.
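For context, the DX9 path looks roughly like this (a simplified sketch rather than our exact shader; `atlasTex`, `cellScale` and `cellOffset` are illustrative names):

```hlsl
sampler2D atlasTex;   // the packed atlas
float2 cellScale;     // cell size in atlas UV space, e.g. 0.25 for a 4x4 grid
float2 cellOffset;    // top-left corner of the current cell

float4 SampleAtlas(float2 tileUV) // tileUV = continuous, tiling terrain UV
{
    // Take the gradients from the *continuous* UV: the frac()
    // discontinuity would otherwise make the hardware pick a huge
    // mip level along every tile seam.
    float2 gx = ddx(tileUV) * cellScale;
    float2 gy = ddy(tileUV) * cellScale;
    float2 uv = cellOffset + frac(tileUV) * cellScale;
    return tex2Dgrad(atlasTex, uv, gx, gy);
}
```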

 

 

Anyone have any ideas?

Cheers!

kauna    2922

Well, using an atlas has two problems: texture color bleeding from mipmap generation, and texture bleeding from filtering.

 

I haven't tried this, but I assume the two problems could be addressed separately:

 

- generate mipmaps separately for each texture (or use a downsampling algorithm that respects the borders), then pack them into the atlas

- at each mip level, reserve a 1-pixel border around each texture

- when sampling in the shader, clamp the UVs so they stay inside the border area.
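The clamping step might look something like this in the shader (untested; `cellTexels` is an illustrative name for the cell size in texels at mip 0):

```hlsl
// Clamp a cell-local UV so the bilinear footprint at the chosen mip
// level never reaches past the 1-pixel border.
float2 ClampToCell(float2 uvInCell, float mipLevel, float cellTexels)
{
    // Half a texel at this mip level, in cell-local [0,1] UV space.
    float halfTexel = 0.5 * exp2(mipLevel) / cellTexels;
    return clamp(uvInCell, halfTexel, 1.0 - halfTexel);
}
```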

 

There might be problems at the edges because of the discontinuity in the texture coordinates, and you may have to calculate the partial derivatives yourself.

 

Cheers!

ambershee    532

Hi Kauna - those are the two problems we are trying to solve. Doing it as you describe will indeed result in artifacts like seams :(

 

@Swiftcoder - unfortunately that example makes use of texture2DLod, whose DX11 equivalent is SampleLevel; we can't use that for the same reason we can't use SampleGrad, as it requires passing a sampler state we don't have access to. Otherwise it's very similar to what we've already tried, but we got stuck when it came to sampling appropriately!
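To illustrate the limitation (illustrative declarations, not our actual shader code):

```hlsl
float4 SampleDX9(sampler2D s, float2 uv, float2 gx, float2 gy)
{
    // DX9: the sampler state is implicit in the sampler2D object.
    return tex2Dgrad(s, uv, gx, gy);
}

Texture2D    atlasTex;   // DX11 resource...
SamplerState atlasSS;    // ...and the state we can neither declare nor pass

float4 SampleDX11(float2 uv, float2 gx, float2 gy)
{
    // DX11: every Sample* method takes an explicit SamplerState.
    return atlasTex.SampleGrad(atlasSS, uv, gx, gy);
}
```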

Styves    1792

I'm not entirely sure what your problem is regarding sampler states. Can you elaborate a little bit?

 

You should be able to interchange any texture2D function with texture2DLod, etc., without any issues or "sampler state" changes.

Hodgman    51223

When you create the texture asset itself, you don't have to create a full mip-chain for it (all the way down to 1x1 pixel). You can stop generating mips once you reach the point where the atlas 'cells' start bleeding together. Then when you load the texture from disc, be sure to create the same number of mip-levels as the texture file actually has.
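As a concrete example (illustrative numbers; say a 2048x2048 atlas of 512x512 cells):

```hlsl
// A full chain for the 2048x2048 atlas would be 12 levels, but at
// level 10 a cell is smaller than one texel, so downsampling there
// averages neighbouring cells together. Generate only the first
// log2(512) + 1 = 10 levels, and create the D3D11 texture with
// MipLevels = 10 rather than 0 (0 means a full chain).
static const uint cellSize     = 512;
static const uint safeMipCount = (uint)log2(cellSize) + 1; // = 10
```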

phil_t    8084


"There's more to the problem than just mips bleeding together. At high incidence viewing angles you'll get seams where there's a transition between texture LoDs :/"

 

I don't understand why this would be... seems like that would only happen if your mipmaps don't "match up", or if you still have a full mip chain.

 

I have some information on how I construct my terrain texture atlas on my blog, some of it might be useful:

http://mtnphil.wordpress.com/2011/09/22/terrain-engine/

http://mtnphil.wordpress.com/2011/09/26/terrain-texture-atlas-construction/
