DX11 DXGI_FORMAT_R8_UNORM for luminance gives me only red channel


Hi...
I am trying to load a texture with DXGI_FORMAT_R8_UNORM to obtain a luminance (grayscale) version of it. In D3D9 I used D3DFMT_L8 and the result was fine, but in D3D11 I only get the red channel. What am I doing wrong?
 
Rendered color image: 
good.png
 
Render with DXGI_FORMAT_R8_UNORM (bad!!): 
DXGI_FORMAT_R8_UNORM.png
 
 
Here is the code:
 

	// Query the source image's properties first (pass a real struct, not a null pointer).
	D3DX11_IMAGE_INFO imgInfo;
	HRESULT imghr = D3DX11GetImageInfoFromFile( L"Tiles.png", NULL, &imgInfo, NULL );
	
	D3DX11_IMAGE_LOAD_INFO imgLoadInfo;
		imgLoadInfo.Width = D3DX11_DEFAULT;
		imgLoadInfo.Height = D3DX11_DEFAULT;
		imgLoadInfo.Depth = D3DX11_DEFAULT;
		imgLoadInfo.FirstMipLevel = D3DX11_DEFAULT;
		imgLoadInfo.MipLevels = D3DX11_DEFAULT;
		imgLoadInfo.Usage = D3D11_USAGE_DEFAULT;
		imgLoadInfo.BindFlags = D3D11_BIND_SHADER_RESOURCE;
		imgLoadInfo.CpuAccessFlags = 0;
		imgLoadInfo.MiscFlags = 0;
		imgLoadInfo.Format = DXGI_FORMAT_R8_UNORM;   // converting to this format keeps only the red channel
		//imgLoadInfo.Format = DXGI_FORMAT_R16G16B16A16_UNORM;
		imgLoadInfo.Filter = D3DX11_FILTER_NONE;
		imgLoadInfo.MipFilter = D3DX11_DEFAULT;
		imgLoadInfo.pSrcInfo = &imgInfo;


	m_Texture = m_pRenderer11->LoadTexture( L"Tiles.png", &imgLoadInfo );


R8_UNORM means you have a texture with just a red channel. If you load an RGB texture with that format, you will only get the red channel from that texture. If you want a texture containing the luminance of the RGB values, you'll need to do that yourself.
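
For example, if you keep loading the texture in a full-color format, you can do the conversion in the pixel shader. A minimal sketch, assuming standard Rec. 601 luma weights and placeholder texture/sampler names (the v2p input struct mirrors the one used elsewhere in this thread):

// Pixel shader sketch: weight the RGB channels into a single luminance value.
Texture2D    g_colorTexture : register( t0 );   // placeholder names
SamplerState g_sampler      : register( s0 );

float4 main_ps( v2p input ) : SV_TARGET
{
	float4 c   = g_colorTexture.Sample( g_sampler, input.texCoord );
	float  lum = dot( c.rgb, float3( 0.299f, 0.587f, 0.114f ) );
	return float4( lum, lum, lum, 1.0f );
}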


Is there no way to do this like in D3D9 with D3DFMT_L8? When I used that format to load a texture, it visually returned a grayscale version.


Use the R value from your DXGI_FORMAT_R8_UNORM texture for all channels of the returned color except for A, which should be set to 1.0f.
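
In shader terms that is just a swizzle when you sample the R8 texture. A minimal sketch (texture, sampler, and v2p struct names are placeholders):

Texture2D    g_lumTexture : register( t0 );   // placeholder: the R8_UNORM texture
SamplerState g_sampler    : register( s0 );

float4 main_ps( v2p input ) : SV_TARGET
{
	// An R8_UNORM texture samples as (r, 0, 0, 1): broadcast r to RGB and force alpha to 1.
	float4 c = g_lumTexture.Sample( g_sampler, input.texCoord );
	return float4( c.rrr, 1.0f );
}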

 

 

L. Spiro


I'm so sorry L. Spiro, I'm a beginner in D3D11. Could you be more specific? When I call LoadTexture, how can I set the channels as you said?


Ok thanks for your help...

 

Now I have errors with the texture sampler; it only happens with the R8_UNORM format.

 

Here is the result: 

bad_sampling.png

 

And here is the source image:

woman.jpg

 

This is the pixel shader:

// Pixel Shader
float4 main_ps( v2p input ) : SV_TARGET
{
	// Sample the R8 texture and broadcast the red channel to RGB, alpha = 1.
	float4 l_color0 = g_frostTexture.Sample( SampleType, input.texCoord );
	float4 l_color1 = float4( l_color0.rrr, 1.0f );
	return l_color1;
}

 

thanks in advance....


I don’t use Ogre, but the error seems apparent: the image is now 4 times as wide, the RGB channels are all the same (so 3 out of every 4 pixels are grey), and the A channel is white (so every 4th pixel is white).

 

In other words, something is expecting 1 result but is being fed 4.  For whatever reason it is actually using all 4 results and treating them as separate results.  Instead of combining the RGBA values into one pixel, it is spreading them out over 4 pixels.

 

That is basically the problem, but I have never seen this and I am not familiar with Ogre, so I can’t tell you where to look.  The first things I would check would be the texture-creation values and the render-target creation values.
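
For reference, a minimal sketch of what the raw D3D11 texture-creation side looks like for a one-channel format, in case it helps with that check (I don't know the Ogre/engine wrappers involved, so pDevice and the pixel-data variable are placeholders). The main point is that everything describing the texture has to agree it is 1 byte per texel, the row pitch in particular:

// Describe a single-channel 8-bit texture.
D3D11_TEXTURE2D_DESC desc = {};
desc.Width            = width;
desc.Height           = height;
desc.MipLevels        = 1;
desc.ArraySize        = 1;
desc.Format           = DXGI_FORMAT_R8_UNORM;   // 1 byte per texel
desc.SampleDesc.Count = 1;
desc.Usage            = D3D11_USAGE_DEFAULT;
desc.BindFlags        = D3D11_BIND_SHADER_RESOURCE;

// Initial data: one byte per pixel, so the row pitch is just the width.
D3D11_SUBRESOURCE_DATA init = {};
init.pSysMem     = pLuminancePixels;   // width * height bytes
init.SysMemPitch = width * 1;          // not width * 4

ID3D11Texture2D* pTexture = NULL;
HRESULT hr = pDevice->CreateTexture2D( &desc, &init, &pTexture );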

 

 

L. Spiro

