[DX11] CreateShaderResourceViewFromMemory

I've got a question about the function D3DX11CreateShaderResourceViewFromMemory()

I tried to combine it with a live camera image, but I wasn't able to upload the images coming from the camera to the shader resource.

What I tried:

D3DX11_IMAGE_LOAD_INFO pLoadInfo;
pLoadInfo.Width = 640;
pLoadInfo.Height = 480;
pLoadInfo.Depth = 1;
pLoadInfo.FirstMipLevel = 0;
pLoadInfo.Usage = D3D11_USAGE_DYNAMIC;
pLoadInfo.BindFlags = D3D11_BIND_SHADER_RESOURCE;
pLoadInfo.CpuAccessFlags = D3D11_CPU_ACCESS_WRITE;
pLoadInfo.MiscFlags = 0;
pLoadInfo.Format = DXGI_FORMAT_R8_UINT; // Monochrome, 8 bits per pixel bitmap data
pLoadInfo.Filter = D3DX11_FILTER_NONE;
pLoadInfo.MipFilter = D3DX11_FILTER_NONE;
pLoadInfo.pSrcInfo = 0;

D3DX11CreateShaderResourceViewFromMemory(pd3dDevice, (LPCVOID)image.GetData(), size, &pLoadInfo, NULL, &g_Target, &result);

When I tried this, result came back as E_FAIL.
My suspicion is the pSrcInfo member, but I have no idea what it should be, or whether the problem is something else entirely. I searched for samples showing CreateShaderResourceViewFromMemory, but it seems to be a rarely used function.

Thanks for any feedback!

Edit: fixed the pixel format; I wrote R8G8 by accident.

The D3DX11CreateShaderResourceViewFrom* functions are intended for loading an image file and creating a texture from it, not for taking a raw block of data and filling a texture with it. In the case of D3DX11CreateShaderResourceViewFromMemory, it expects an array containing the raw contents of an image file such as a .PNG or .BMP.
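In other words, the intended usage would look something like this (a sketch; ReadFileToVector is a hypothetical helper that reads a whole file into memory):

// The function expects the encoded bytes of an image file (e.g. a .PNG),
// not raw pixel data
std::vector<char> fileData = ReadFileToVector(L"image.png"); // hypothetical helper
ID3D11ShaderResourceView* pSRV = NULL;
HRESULT hr = D3DX11CreateShaderResourceViewFromMemory(
    pd3dDevice,
    fileData.data(), fileData.size(), // raw file bytes and their size
    NULL,                             // optional D3DX11_IMAGE_LOAD_INFO
    NULL,                             // optional thread pump (NULL = load synchronously)
    &pSRV, NULL);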

For what you're attempting to do, you shouldn't need this helper function at all. Simply use ID3D11Device::CreateTexture2D and fill the D3D11_TEXTURE2D_DESC with your desired settings. If you have initial data you want to fill the texture with, provide a pointer to that data by filling out a D3D11_SUBRESOURCE_DATA structure and passing it as the pInitialData parameter. Then, when you want to change the contents of the texture at runtime, call ID3D11DeviceContext::Map.

To use the texture as a shader resource, you just create a shader resource view for it. If you need to access the whole texture and don't need anything fancy, you can pass NULL as the pDesc parameter.
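To make that concrete, here's a minimal sketch for a 640x480 single-channel camera image (names like image and texture are just for illustration, and error checking is omitted):

// Create a dynamic 640x480 single-channel texture the CPU can write to
D3D11_TEXTURE2D_DESC desc;
ZeroMemory(&desc, sizeof(desc));
desc.Width = 640;
desc.Height = 480;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R8_UINT;
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_DYNAMIC;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;

// Optional initial contents
D3D11_SUBRESOURCE_DATA initData;
initData.pSysMem = image.GetData();
initData.SysMemPitch = 640; // bytes per row for an 8-bits-per-pixel image
initData.SysMemSlicePitch = 0;

ID3D11Texture2D* texture = NULL;
pd3dDevice->CreateTexture2D(&desc, &initData, &texture);

// NULL pDesc gives a view of the whole texture in its own format
ID3D11ShaderResourceView* srv = NULL;
pd3dDevice->CreateShaderResourceView(texture, NULL, &srv);

// Per frame: upload the new camera image
D3D11_MAPPED_SUBRESOURCE mapped;
if (SUCCEEDED(pd3dImmediateContext->Map(texture, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
{
    // The driver may pad each row, so copy row by row using RowPitch
    BYTE* dest = (BYTE*)mapped.pData;
    const BYTE* src = (const BYTE*)image.GetData();
    for (UINT row = 0; row < 480; ++row)
        memcpy(dest + row * mapped.RowPitch, src + row * 640, 640);
    pd3dImmediateContext->Unmap(texture, 0);
}

With D3D11_MAP_WRITE_DISCARD the driver hands you a fresh copy of the texture memory each time, so you don't stall waiting for the GPU to finish with the previous frame's data.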

Okay, I've got one more question.

The data I want to display is an unsigned 8-bit monochrome image. Now I set the data, but it tells me .Sample() doesn't work for DXGI_FORMAT_R8_UINT. I tested pd3dDevice->CheckFormatSupport(), and R8_UINT should be supported for Texture2D.

So instead of Sample() I tried getting the integer value from the texture via load(), something like:

int r = g_txDiffuse.load(int3(asint(In.TextureUV.x), asint(In.TextureUV.y), 1));

This is wrong; I get an error when trying to load that pixel shader. But how am I supposed to get the int value out of the texture?

The other issue I'm having seems to be that pd3dImmediateContext->Map() is not blocking, and when Draw() is called the mapping hasn't finished yet. Can I somehow make this call block until the upload is done?
Also, I'm unable to debug this in the first place; when running under PIX, the code fails at
D3DX11CompileFromFile( str, NULL, NULL, "RenderScenePS", "ps_4_0_level_9_3", dwShaderFlags, 0, NULL, &pPixelShaderBuffer, NULL, NULL )

which is expected since the pixel shader is still wrong, but how can I debug the shader aside from that?

Edit: okay, I noticed that the asint() was failing, so I rewrote it to:

int3 loc;
loc.x = (int)(In.TextureUV.x * 640);
loc.y = (int)(In.TextureUV.y * 480);
loc.z = 1;
unsigned int r = g_txDiffuse.load(loc);

Texture2D <unsigned int> g_txDiffuse : register( t0 );

but it's still failing to load.

Yeah, you don't want asint. It takes the raw data and reinterprets it as an integer, similar to if you were to do this in C++:

float fval = 1.0f;
int ival = *((int*)(&fval)); // reinterprets the bits of fval, doesn't convert the value

You just want to cast to int, like you did in your revised code.
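For reference, a minimal HLSL sketch of reading the value that way, assuming a 640x480 R8_UINT texture (the names are illustrative):

Texture2D<uint> g_txDiffuse : register(t0);

// In the pixel shader: convert the UVs to integer texel coordinates.
// The third component passed to Load is the mip level, so use 0 for
// the most detailed mip.
int3 loc = int3((int)(In.TextureUV.x * 640), (int)(In.TextureUV.y * 480), 0);
uint r = g_txDiffuse.Load(loc);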

If compilation fails due to a shader compilation error, you'll get the errors back in the buffer set for the ppErrorMsgs parameter. Pass it an ID3D10Blob, then call GetBufferPointer on the blob and cast the void pointer to char* to get a string containing the shader compilation errors. This is the function I use:

// Compiles a shader from file; in debug builds, pops up a message box
// with the compile errors and lets you fix the file and retry
ID3D10Blob* CompileShader(LPCWSTR path,
                          LPCSTR functionName,
                          LPCSTR profile,
                          const D3D10_SHADER_MACRO* defines,
                          ID3D10Include* includes)
{
    // Loop until we succeed, or an exception is thrown
    while (true)
    {
        UINT flags = 0;
        #ifdef _DEBUG
            // Debug flags so the shader can be debugged in PIX
            flags |= D3D10_SHADER_DEBUG | D3D10_SHADER_SKIP_OPTIMIZATION;
        #endif

        ID3D10Blob* compiledShader = NULL;
        ID3D10Blob* errorMessages = NULL;
        HRESULT hr = D3DX11CompileFromFileW(path, defines, includes, functionName, profile,
                                            flags, 0, NULL, &compiledShader, &errorMessages, NULL);

        if (FAILED(hr))
        {
            if (errorMessages)
            {
                WCHAR message[1024];
                message[0] = NULL;
                char* blobdata = reinterpret_cast<char*>(errorMessages->GetBufferPointer());

                MultiByteToWideChar(CP_ACP, 0, blobdata, static_cast<int>(errorMessages->GetBufferSize()), message, 1024);
                std::wstring fullMessage = L"Error compiling shader file \"";
                fullMessage += path;
                fullMessage += L"\" - ";
                fullMessage += message;
                errorMessages->Release();

                #ifdef _DEBUG
                    // Pop up a message box allowing the user to retry compilation
                    int retVal = MessageBoxW(NULL, fullMessage.c_str(), L"Shader Compilation Error", MB_RETRYCANCEL);
                    if (retVal != IDRETRY)
                        throw DXException(hr, fullMessage.c_str());
                #else
                    throw DXException(hr, fullMessage.c_str());
                #endif
            }
            else
                throw DXException(hr);
        }
        else
            return compiledShader;
    }
}
You can obviously take out the exception stuff if you want, and if you're not using wstrings you can skip the MultiByteToWideChar stuff. If you're looking to debug shaders in PIX, you'll definitely want to add those debug flags like I did.
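For example, compiling and creating the pixel shader from the earlier posts might look like this (a sketch; the file name is illustrative):

ID3D10Blob* psBlob = CompileShader(L"RenderScene.hlsl", "RenderScenePS",
                                   "ps_4_0_level_9_3", NULL, NULL);
ID3D11PixelShader* pPixelShader = NULL;
pd3dDevice->CreatePixelShader(psBlob->GetBufferPointer(), psBlob->GetBufferSize(),
                              NULL, &pPixelShader);
psBlob->Release();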

Awesome, thanks a lot. This helped a lot. It turns out it didn't load because I used Texture.load instead of Texture.Load, so it failed on capitalization!
