
Member Since 06 Jun 2010

Topics I've Started

Direct3D9 - Fullscreen to Windowed transition and Aero

07 August 2013 - 02:04 PM

OK, I'm having trouble managing the transition from fullscreen to windowed modes using D3D9, and the impact this has on Aero/DWM theming of the window's non-client area.


Switching from windowed to fullscreen is OK, and switching between windowed modes is OK. The specific problem is with switching from fullscreen to windowed modes, and it's consistent whether I start fullscreen and then switch to windowed, or start windowed, switch to fullscreen, and then switch back.


As part of the switch back to windowed, D3D9 disables Aero on the non-client region of the window. It doesn't disable Aero entirely; it only does so on the window itself.


Tracking the messages, I can see that I get a WM_DWMNCRENDERINGCHANGED with WPARAM 0 during a transition to fullscreen - so it's evident that D3D9 disables Aero at this point.  I don't get one when transitioning back.


Calling DwmSetWindowAttribute with DWMWA_NCRENDERING_POLICY and DWMNCRP_ENABLED as part of the mode change (just after the D3D device has completed recovery) gives me a succeeded HRESULT but none of the intended effect (nor does a WM_DWMNCRENDERINGCHANGED message arrive).


Querying with DwmGetWindowAttribute and DWMWA_NCRENDERING_ENABLED confirms that non-client rendering remains disabled.


I know that I can bypass all of this by just using D3DCREATE_NOWINDOWCHANGES, but then I have to mess around with controlling the mouse, managing focus and other nonsense, which is a hell of a lot more work. It seems obvious that there should be a way to get proper non-client rendering back, but what on earth could that way be?

Hungarian Notation taken too far

26 March 2013 - 08:46 AM

Saw this once in some crufty old VBScript code:


Variant vntBlah = DoSomething ()


For why this is even worse than usual, the punchline is here.

Direct3D 11 - Detect when reading from a Mapped buffer

24 September 2012 - 02:23 PM

I suppose this is relevant to all versions of all APIs, but I was recently tripped up by this under D3D11:
struct cbtype
{
    float a, b, c, d;
};

context->Map (...);
cbtype *dest = (cbtype *) MappedResource.pData;
dest->a = dest->b = dest->c = somenumber;
dest->d = someothernumber;
context->Unmap (...);

That's not exact code - just a simplified case illustrating the condition that got triggered - but it surprised me (which with hindsight it shouldn't have) to find out, by viewing the disassembly, that the chained "a = b = c = somenumber" assignment ended up reading from the Mapped buffer: the freshly written values of c and b get read back out of the mapping so they can be propagated leftwards.

Now, I'm aware of the warnings given here: http://msdn.microsof...7(v=vs.85).aspx and I'm aware of the implications, but the question is: since it's so easy to do this by accident (the page I linked gives another example, commonly found when dealing with CPU-side memory), how does one reliably detect when/if it happens? The Debug Runtimes have nothing to say about it, even with Info-level output enabled, and PIX is quiet as the grave. Visual Studio can be configured to break on a memory write, but not on a read.

Is there really no way but the hard way?

Weird Observation with Depth/Stencil States

13 June 2012 - 04:44 PM

So, I'm creating a DepthStencil state to disable both depth testing and depth writing.

Setup is completely standard, and setting a breakpoint on the creation code confirms that the values are going in:

    ddesc.DepthEnable = FALSE;
    ddesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ZERO;
Creation succeeds, but if I then do a GetDesc on the created state, I get the following:

    ddesc.DepthEnable = FALSE;
    ddesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
Also, if at runtime I set the state, call GetDepthStencilState, then GetDesc, I get the same values - again with D3D11_DEPTH_WRITE_MASK_ALL. PIX also tells me that it has D3D11_DEPTH_WRITE_MASK_ALL.

Running with AMD GPU PerfStudio confirms that writes to the depth buffer are happening, but this page - http://msdn.microsoft.com/en-us/library/windows/desktop/bb205120%28v=vs.85%29.aspx - indicates that this is not normal or expected behaviour: it should be answering "No" to "Is Depth Write enabled?" and earlying out. Clearly it's not, and this is measurable as quite a significant performance decrease (did I mention that it's in a performance-sensitive part of the code - drawing fullscreen quads for post-processing filters?).

What gives?

Vertex Array Objects - Driver bug or am I doing it wrong?

19 April 2012 - 03:18 PM

I'm transitioning some code to use VAOs and have run into a curious problem that I'm not certain is a driver bug or the result of a brain malfunction on my part. Basically, the first batch of drawing code I've transitioned works perfectly; the API works as described and expected. The second batch does not.

Here's the code that works:
glGenVertexArrays (1, &vao1);
glBindVertexArray (vao1);

glEnableVertexAttribArray (0);
glEnableVertexAttribArray (1);
glEnableVertexAttribArray (2);

glBindBuffer (GL_ARRAY_BUFFER, vbo1);

glVertexAttribPointer (0, 3, GL_FLOAT, GL_FALSE, sizeof (vertextype1), (void *) 0);
glVertexAttribPointer (1, 3, GL_FLOAT, GL_FALSE, sizeof (vertextype1), (void *) 12);
glVertexAttribPointer (2, 3, GL_FLOAT, GL_FALSE, sizeof (vertextype1), (void *) 24);

glBindBuffer (GL_ELEMENT_ARRAY_BUFFER, ibo1);

glBindVertexArray (0);
"vbo1" and "ibo1" are buffer objects that have already been successfully generated and confirmed to work without VAOs. "vbo1" has been created with GL_STATIC_DRAW, "ibo1" with GL_STREAM_DRAW.

And now here's the code that fails:
glGenVertexArrays (1, &vao2);
glBindVertexArray (vao2);

glEnableVertexAttribArray (0);
glEnableVertexAttribArray (1);
glEnableVertexAttribArray (2);

glBindBuffer (GL_ARRAY_BUFFER, vbo2);

glVertexAttribPointer (0, 2, GL_FLOAT, GL_FALSE, sizeof (vertextype2), (void *) 0);
glVertexAttribPointer (1, 4, GL_UNSIGNED_BYTE, GL_TRUE, sizeof (vertextype2), (void *) 8);
glVertexAttribPointer (2, 2, GL_FLOAT, GL_FALSE, sizeof (vertextype2), (void *) 12);

glBindVertexArray (0);
Again, "vbo2" is a buffer object that has already been successfully generated and confirmed to work without VAOs. This time however it has been created with GL_STREAM_DRAW and there is no GL_ELEMENT_ARRAY_BUFFER in use.

In both cases the drawing code is essentially the same: bind the VAO and issue draw calls. In the working case glDrawRangeElements is used, in the non-working case glDrawArrays is used. In both cases glMapBufferRange is used for updating the GL_STREAM_DRAW buffers.

Checking in the debugger I see that both VAOs are successfully generated and are assigned non-zero names.

In the non-working case, however, I can make it work by just adding a "glBindBuffer (GL_ARRAY_BUFFER, vbo2)" call immediately before my glBindVertexArray call.

So in summary, this works:

glBindVertexArray (vao1);
glMapBufferRange (.....); glUnmapBuffer (.....);
glDrawRangeElements (......);

This doesn't work:

glBindVertexArray (vao2);
glMapBufferRange (.....); glUnmapBuffer (.....);
glDrawArrays (......);

But this works:

glBindBuffer (GL_ARRAY_BUFFER, vbo2);
glBindVertexArray (vao2);
glMapBufferRange (.....); glUnmapBuffer (.....);
glDrawArrays (......);

And this also works:

glBindVertexArray (vao2);
glBindBuffer (GL_ARRAY_BUFFER, vbo2);
glMapBufferRange (.....); glUnmapBuffer (.....);
glDrawArrays (......);

The failure is just a black screen. Nothing is drawn. glGetError consistently returns GL_NO_ERROR. glGetIntegerv (GL_ARRAY_BUFFER_BINDING, ...) returns the wrong buffer name in both the working and non-working cases, glGetIntegerv (GL_ELEMENT_ARRAY_BUFFER_BINDING, ...) returns the correct buffer name in the working case.

If the buffer binding was not part of the VAO state it would explain everything, but according to http://www.opengl.or...ex_Array_Object it is.

The failure case occurs on an AMD 6490M with Catalyst 12.3, Windows 7 x64, using a core GL 3.3 profile. I haven't yet been able to test on other hardware.