Member Since 14 Jul 2004

Topics I've Started

Memory Corruption: Accessor returns invalid value!

22 March 2012 - 05:23 AM

Hey everyone,

This is a bit of a cry for help: how on earth do I track down this problem? Maybe someone has some tips.

I'm working on a big cross-platform project (Windows/Mac OS X) and I'm seeing the problem on both operating systems.

I've got a class and it does weird things.

- Sometimes the class contains correct pointers, but when I access them from outside, the accessor returns invalid values.
GetCurrentRenderLayerList() is a dead-normal accessor (return currentRenderLayerList_;), yet it returns 0xcdcdcdcd instead of 0x07c28418, or sometimes 0x00000320 and other weird values.
[screenshot: debugger watch window showing the accessor's corrupted return value]
The value in the green circle should be 0, but it too suddenly holds an invalid value.

- Sometimes the memory also seems to be shifted by 4 bytes.
videoMappingResolution_ should be 800 x 600, but in this case it is shifted: it reads 600 x <invalid value>, and the pointer just before it holds the 800 (0x00000320) value.
[screenshot: debugger watch window showing the member values shifted by 4 bytes]

- I also have the feeling it sometimes already goes wrong in the constructor, or just after it. So you create the class, and some members are already badly initialized: in the constructor I set all values to NULL, yet right afterwards some of them are 0xcdcdcdcd or hold other values.

The only thing I can think of now is maybe vtable corruption? What could cause that? I've checked whether I somehow construct the object twice, but I can't find anywhere that I do. I don't do anything special in that class, and for arrays I only use std::vector, std::map, etc.

So I don't think I'm overwriting the data with an out-of-bounds array write.

Render Target Size & Viewport

20 February 2012 - 04:48 AM

Hi everyone,

I have an application where I need to render to render targets of different dimensions.
These can go from e.g. 800x600 to 4Kx4K or maybe even 8Kx8K etc.

So I assume I really need to make sure my render targets fit in GPU memory!

So instead of creating render targets in all those different sizes, I was thinking of using just a few render targets of e.g. 4Kx4K and adapting the viewport.

But my question is: if I render to an 800 x 600 viewport on a 4Kx4K render target, is there any performance drawback?
Any thoughts?

GPU Gems: Image-Based Lighting

09 December 2011 - 06:05 AM

Hi everyone,

I'm playing around with reflections and cubemaps but I'm having problems with flat surfaces and nearby objects.
So basically, I also want to take the fragment position into account instead of only a direction vector.

Therefore I was reading Chapter 19, "Image-Based Lighting", of GPU Gems.

Cube maps are typically used to create reflections from an environment that is considered to be infinitely far away. But with a small amount of shader math, we can place objects inside a reflection environment of a specific size and location, providing higher quality, image-based lighting (IBL).

and further in the document:

In addition to the standard coordinate spaces such as eye space and object space, we need to create lighting space— locations relative to the cube map itself.

but I don't really understand what he means by "lighting space".

Can somebody explain this to me, or point me to some other resources?

I've also seen "Reflection with billboard imposters" by graphicsrunner, but that seems like a lot of work, creating imposters for all objects.

I was also thinking about Real Time Local Reflections (ray tracing in view space against your depth buffer), but that is something I want to try next :) (it looks really easy to do)


Direct3D 9Ex FLIPEX Swap Effect

07 November 2011 - 07:07 AM

Hi guys,

What is your experience with the Direct3D 9Ex FLIPEX swap effect on Windows 7? (Direct3D 9Ex Improvements)
Do you get better performance than with the normal DISCARD?

For me, the performance with FLIPEX is always worse :(
I had hoped for a serious performance boost :(

First I tried rendering into a 1920x1080 window (NOT exclusive fullscreen) and drew 2000 rotating cubes. With DISCARD I consistently stayed at 30 FPS (~33 ms), but with FLIPEX it never gets above 15-16 FPS (~60 ms).

And the actual program I am working on utilizes multiple swap chains.
When I draw just 1 rotating cube to 4 fullscreen swap chains (not in exclusive mode; 2 of the screens are on another GPU), the framerate drops to 15 FPS, while with DISCARD it still reaches 30 FPS.
Of course I pass the D3DPRESENT_DONOTWAIT flag when presenting each swap chain, but the swapChain->Present() call still takes a long time to return.

device->swapChain_->Present(NULL, NULL, NULL, NULL, D3DPRESENT_DONOTWAIT);

So, what is your experience with FLIPEX, and does this mode even work with multiple swap chains?
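For reference, this is roughly how I understand a FLIPEX device should be set up on Windows 7 (the buffer count, window handle, and other values here are assumptions, not my exact code). FLIPEX targets windowed presentation on a 9Ex device, and with only one back buffer the flip queue can serialize frames in a way that looks a lot like the halved framerates above, so a deeper queue may be worth trying:

```cpp
// Sketch (hypothetical values): present parameters for a windowed
// Direct3D 9Ex device using the FLIPEX swap effect on Windows 7.
D3DPRESENT_PARAMETERS pp = {};
pp.Windowed             = TRUE;                    // FLIPEX targets windowed mode
pp.SwapEffect           = D3DSWAPEFFECT_FLIPEX;    // needs Win7 + IDirect3DDevice9Ex
pp.BackBufferCount      = 2;                       // assumption: a deeper flip queue;
                                                   // a single buffer can stall Present()
pp.BackBufferFormat     = D3DFMT_X8R8G8B8;
pp.hDeviceWindow        = hWnd;                    // hypothetical window handle
pp.PresentationInterval = D3DPRESENT_INTERVAL_ONE; // or _IMMEDIATE to uncap
```

Also, with D3DPRESENT_DONOTWAIT, Present() returns D3DERR_WASSTILLDRAWING when the queue is full, so that return code has to be checked and the present retried later rather than treated as a hard failure.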


[DX9Ex] Shared Resources

22 August 2011 - 04:43 AM

Hi guys,

Is it possible to share a surface/texture between different GPUs with DX9Ex?
I'm working on a Windows 7 system with 2 graphics cards, one ATI and one NVIDIA.
I'm creating an exclusive fullscreen device on the ATI card and one on the NVIDIA card.
Now I want to share a texture between those devices. I know I may need to go through system memory, but it would be very nice if Direct3D could handle that :/

But creating the shared texture always fails, as if the handle were invalid. The same code works perfectly if I create both devices on the same GPU.

HANDLE             sh_sharedHandle_   = NULL;
IDirect3DTexture9 *sh_master_Texture_ = NULL;   // Belongs to the primary device
IDirect3DTexture9 *sh_slave_Texture_  = NULL;   // Opened on the slave device

// Create a shared texture on the main device!
if (FAILED(deviceObject_->deviceD3D_->CreateTexture(
        800,                   // d3ddisplaymodesex[o].Width
        600,                   // d3ddisplaymodesex[o].Height
        1,                     // mip levels
        D3DUSAGE_RENDERTARGET, // usage
        D3DFMT_X8R8G8B8,       // d3ddisplaymodesex[o].Format
        D3DPOOL_DEFAULT,       // shared resources must be in the default pool
        &sh_master_Texture_,
        &sh_sharedHandle_)))   // NULL handle going in; D3D fills it on success
{
    log(TRACE_ERROR, L"Failed to create DX9Ex Shared Texture");
    goto create_dev_error;
}

// Open the shared texture on the slave device by passing the same handle!
// FAILS!!
if (FAILED(device.deviceD3D_->CreateTexture(
        800, 600, 1,
        D3DUSAGE_RENDERTARGET,
        D3DFMT_X8R8G8B8,
        D3DPOOL_DEFAULT,
        &sh_slave_Texture_,
        &sh_sharedHandle_)))
{
    log(TRACE_ERROR, L"Failed to create DX9Ex Shared Texture");
    goto create_dev_error;
}

The DX9 Debug runtime gives the following output:

Direct3D9: (INFO) :======================= Hal HWVP device selected
Direct3D9: (INFO) :HalDevice Driver Style b
Direct3D9: :Subclassing window 000c0a06
Direct3D9: :StartExclusiveMode
Direct3D9: (INFO) :Using FF to VS converter
Direct3D9: (INFO) :Using FF to PS converter
Direct3D9: (INFO) :======================= Hal HWVP device selected
Direct3D9: (INFO) :HalDevice Driver Style b
Direct3D9: :StartExclusiveMode
Direct3D9: (INFO) :Using FF to VS converter
Direct3D9: (INFO) :Using FF to PS converter
Direct3D9: (INFO) :Enabling multi-processor optimizations
Direct3D9: (INFO) :DDI threading started
Direct3D9: (ERROR) :Error during initialization of texture. CreateTexture failed.
Direct3D9: (ERROR) :Failure trying to create a texture

So, should this work at all? Or do I need to detect manually whether the 2 devices are on different GPUs and then copy via system memory?
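If manual copying is the answer, I guess the staging path would look roughly like this (deviceA/deviceB, rtSurfaceA/rtSurfaceB, and the 800x600 X8R8G8B8 sizes are assumed names for illustration; error checks elided). Resources are per-device, so the hop between the two system-memory surfaces has to be a CPU memcpy:

```cpp
// Sketch: copy a render target from deviceA (GPU 1) to deviceB (GPU 2)
// via system memory.
IDirect3DSurface9 *sysmemA = NULL, *sysmemB = NULL;
deviceA->CreateOffscreenPlainSurface(800, 600, D3DFMT_X8R8G8B8,
                                     D3DPOOL_SYSTEMMEM, &sysmemA, NULL);
deviceB->CreateOffscreenPlainSurface(800, 600, D3DFMT_X8R8G8B8,
                                     D3DPOOL_SYSTEMMEM, &sysmemB, NULL);

deviceA->GetRenderTargetData(rtSurfaceA, sysmemA);       // GPU A -> CPU

D3DLOCKED_RECT src, dst;
sysmemA->LockRect(&src, NULL, D3DLOCK_READONLY);
sysmemB->LockRect(&dst, NULL, 0);
for (int y = 0; y < 600; ++y)                            // row by row,
    memcpy((BYTE*)dst.pBits + y * dst.Pitch,             // pitches may differ
           (BYTE*)src.pBits + y * src.Pitch,
           800 * 4);                                     // 4 bytes per X8R8G8B8 texel
sysmemB->UnlockRect();
sysmemA->UnlockRect();

deviceB->UpdateSurface(sysmemB, NULL, rtSurfaceB, NULL); // CPU -> GPU B
```

The GetRenderTargetData readback stalls GPU A, so this is not cheap, but at least it is correct across adapters.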