DX11 Non-fatal, undocumented return code


Posted (edited)

Hi guys, I was wondering if any of you has ever encountered 0xCCCCCCCC as a return code/result when calling D3D11CreateDeviceAndSwapChain or D3D11CreateDevice.

My code worked just fine as of yesterday, and had for months, but now I get 0xCCCCCCCC instead of the usual S_OK (or, rarely, S_FALSE) for some reason. Literally nothing in my code changed; I woke up to that return code. Whatever it is, it's in no way preventing my application from working as it should, but I do have to bypass the error checking I've put in place because of how unusual the code is.

Can anyone shed some light on this undocumented return code?

Edited by Shangbye

Posted (edited)

Now that you mention it, it does look like it. The value is way outside the range of the documented HRESULTs.

I'll try replacing D3D11.dll and see if it changes anything. Thanks dude.

Edited by Shangbye

Posted (edited)

You're spot on about the origin of the value. I found out by monitoring the variable before and after the call: the call didn't write to the HRESULT at all, so it kept 0xCCCCCCCC as its value and my error detection never fired. The corruption hypothesis also turned out to be correct, because I had no such problem when debugging the code on my other computer. Everything makes sense to me now: it seems my Win10 installation is partly corrupted. CL.EXE and C2.DLL started giving me weird errors two days ago, and CL.EXE eventually wouldn't run at all; I replaced them with the files from the VS installation and that fixed the issue. I guess the same thing happened to d3d11.dll. Repairing my Win10 installation should take care of it. I should have come to that conclusion sooner, tbh.

Edited by Shangbye

12 minutes ago, Shangbye said:

it seems my Win10 installation is partly corrupted. CL.EXE and C2.DLL started giving me weird errors 2 days ago and CL.EXE eventually wouldn't run at all. I replaced them with the files from the VS installation and it fixed the issue

Your HDD/SSD is failing and/or you've got a virus attacking your PC :o

Time to pull your drives out and put in a clean one.


Failing HDD/virus aside, your "it works on my other computer" reasoning doesn't hold.

Undefined behavior is undefined: code could run fine for months and then crash, cure cancer, resolve climate change, or start World War 3. Undefined behavior means anything is possible.

As for uninitialized variables: with a debug runtime they usually get filled with a 0xCC pattern (or sometimes zero); otherwise they contain whatever happened to be at that address. And you cannot rely on this being consistent between machines or OS versions!


No, all these magic numbers come from Microsoft's debug runtime; VS certainly doesn't set memory to zero! You can find a good list of them here: https://stackoverflow.com/questions/127386/in-visual-studio-c-what-are-the-memory-allocation-representations. They are meant to help you debug. Be glad it gives consistent behavior to something that otherwise isn't consistent, because it allows you to catch bugs that could stay silent for months. Your worst enemy would be everything initialized to 0.

22 hours ago, galop1n said:

VS sure don't set memory to zero

Doesn't the MSVC++ compiler, with the debug flag enabled, need to generate explicit initialization code? (For me VS and MSVC++ are so coupled they're pretty much the same thing; VS is just a skin over the compiler.)

Edited by matt77hias

On 9/1/2017 at 1:22 AM, matt77hias said:

Doesn't the MSVC++ compiler, with the debug flag enabled, need to generate explicit initialization code? (For me VS and MSVC++ are so coupled they're pretty much the same thing; VS is just a skin over the compiler.)

Exact behavior in practice depends on which debug flags are passed to the compiler, which version of the standard libraries you're linking against (debug or release), whether or not you started the program with the debugger attached, and other things I'm likely forgetting. Many of these deliberately fill memory with magic numbers, not zero, to help you track down what would be considered bugs in cases where the C++ standard does not require initialization, and they may not initialize anything at all in Release builds.

In release builds, the compiler may make extremely aggressive optimizations around uninitialized memory, such as evaluating two mutually exclusive if statements as both true, even though common sense says uninitialized memory holds only one value and such a thing should be impossible: https://markshroyer.com/2012/06/c-both-true-and-false/

There are some circumstances where C++ guarantees that data is zero-initialized (static-storage data, memory returned by calloc ("clear-alloc"), etc.); use those if you want your memory to be zero. Debug patterns like CCCCCCCC help make sure you're not accidentally relying on something that just happens to sometimes return zero-initialized memory and expecting it to always be zero-initialized (when it may not be in Release builds, on other compilers, or once the allocator starts reusing freed memory, etc.).

EDIT:  There are also more involved tools like https://clang.llvm.org/docs/MemorySanitizer.html which cause your program to actually crash when reading uninitialized memory, specifically so you can easily find it and fix it, instead of having strange bugs in your program which can be hard to get to the bottom of.

(Additionally, there are ways to have Visual Studio use clang, gcc, and other non-Microsoft compilers - so it doesn't hurt to be specific about what you're talking about :))

Edited by MaulingMonkey
Add note of memorysanitizer


