Per-pixel sprite collision detection in DirectX 10

I have been scouring this board and the rest of the internet for a few days now, trying to get per-pixel collision detection working in DirectX 10, but without success.

My starting point was this post here:

http://www.gamedev.n...ks-directx-9-c/

However, the problem is that I am using DX 10 (D3D10), not 9, so e.g.
pTexture->LockRect
doesn't work.

I read that I need to use ID3D10Texture2D::Map instead. I ran into all sorts of issues calling this; I could not call it without getting E_INVALIDARG. The internet suggested that I hadn't called the upstream function D3DX10CreateTextureFromFile() correctly, so I started playing around with the D3DX10_IMAGE_LOAD_INFO in a number of different permutations, but again all to no avail. That function always returned D3DERR_INVALIDCALL when I tried to call it.

Now, before I start talking about my code, I want to go back to my initial requirement and check that I am using the right approach. Basically, I want to be able to test whether parts of one sprite are overlapping parts of another sprite, based on the non-transparent pixels. I know that there are more performant ways of doing collision detection, but please humour me. So question A) is:

A) What is the high-level approach to doing per-pixel sprite collision detection in DirectX 10?

Any assistance would be greatly appreciated.
In DX10 you can't read back texture data on the CPU if you want to use it on the GPU. This is because it is really slow to read data out of GPU-accessible memory. In DX9 you could usually do it if you used D3DPOOL_MANAGED, since that pool caused the runtime to keep a separate version of resources in CPU memory.

What you really want to do is have your sprite texture use D3D10_USAGE_IMMUTABLE so that it's only in GPU memory, and have your collision mask in CPU memory only. If you need to read a texture on the CPU to generate your collision mask, or if you have your collision mask pre-generated and you just want to read it into CPU memory, then you should have a separate call to D3DX10CreateTextureFromFile() that specifies D3D10_USAGE_STAGING and D3D10_CPU_ACCESS_READ, along the lines of the sketch below.
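Something like this is what I mean (an untested sketch; pDevice and filename stand in for your own variables, and forcing the format to R8G8B8A8 is my own assumption so that the alpha byte offset is predictable):

// Load a second, CPU-readable copy of the sprite with STAGING usage, Map it,
// and read the alpha channel into your own collision mask.
D3DX10_IMAGE_LOAD_INFO info;
ZeroMemory(&info, sizeof(info));
info.Width          = D3DX10_DEFAULT;
info.Height         = D3DX10_DEFAULT;
info.Depth          = D3DX10_DEFAULT;
info.FirstMipLevel  = 0;
info.MipLevels      = 1;                           // one mip is enough for a mask
info.Usage          = D3D10_USAGE_STAGING;         // CPU-readable, never bound to the pipeline
info.BindFlags      = 0;                           // staging resources must have no bind flags
info.CpuAccessFlags = D3D10_CPU_ACCESS_READ;
info.MiscFlags      = 0;
info.Format         = DXGI_FORMAT_R8G8B8A8_UNORM;  // assumption: force a known texel layout
info.Filter         = D3DX10_FILTER_NONE;
info.MipFilter      = D3DX10_FILTER_NONE;
info.pSrcInfo       = NULL;

ID3D10Resource* pStagingRes = NULL;
HRESULT hr = D3DX10CreateTextureFromFile(pDevice, filename.c_str(), &info, NULL, &pStagingRes, NULL);
if (SUCCEEDED(hr))
{
    ID3D10Texture2D* pStagingTex = NULL;
    if (SUCCEEDED(pStagingRes->QueryInterface(__uuidof(ID3D10Texture2D), (void**)&pStagingTex)))
    {
        D3D10_TEXTURE2D_DESC desc;
        pStagingTex->GetDesc(&desc);

        D3D10_MAPPED_TEXTURE2D mapped;
        if (SUCCEEDED(pStagingTex->Map(0, D3D10_MAP_READ, 0, &mapped)))   // subresource 0 = top mip
        {
            for (UINT y = 0; y < desc.Height; ++y)
            {
                const BYTE* row = (const BYTE*)mapped.pData + y * mapped.RowPitch;
                for (UINT x = 0; x < desc.Width; ++x)
                {
                    BYTE alpha = row[x * 4 + 3];    // A byte of an R8G8B8A8 texel
                    // record (alpha != 0) in your collision mask here
                }
            }
            pStagingTex->Unmap(0);
        }
        pStagingTex->Release();
    }
    pStagingRes->Release();
}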

Also for your future reference, the rules for what the CPU and GPU can and can't access are listed here.
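To answer A) concretely: once each sprite has a CPU-side mask, the overlap test itself doesn't touch D3D at all. A rough sketch (the SpriteMask struct and the PixelsCollide function are names I made up for illustration):

#include <algorithm>
#include <vector>

struct SpriteMask
{
    int width;
    int height;
    std::vector<bool> opaque;   // width * height entries, true where alpha > 0
};

// Returns true if any opaque pixel of sprite A overlaps an opaque pixel of sprite B.
// (ax, ay) and (bx, by) are the sprites' top-left positions in screen space.
bool PixelsCollide(const SpriteMask& a, int ax, int ay,
                   const SpriteMask& b, int bx, int by)
{
    // Cheap rejection first: intersect the two bounding rectangles.
    int left   = std::max(ax, bx);
    int top    = std::max(ay, by);
    int right  = std::min(ax + a.width,  bx + b.width);
    int bottom = std::min(ay + a.height, by + b.height);
    if (left >= right || top >= bottom)
        return false;

    // Walk the overlapping region and look for a pixel opaque in both masks.
    for (int y = top; y < bottom; ++y)
    {
        for (int x = left; x < right; ++x)
        {
            if (a.opaque[(y - ay) * a.width + (x - ax)] &&
                b.opaque[(y - by) * b.width + (x - bx)])
                return true;
        }
    }
    return false;
}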
OK, I think I understand. My problem then becomes the fact that I cannot call D3DX10CreateTextureFromFile and specify a D3DX10_IMAGE_LOAD_INFO. See code below:

// This is valid - if I call D3DX10CreateTextureFromFile without the D3DX10_IMAGE_LOAD_INFO param (i.e. passing null) then my sprites bounce around the screen.
ID3D10Device* pD3DDevice=GameManager::get_instance().getD3DDevice();

D3DX10_IMAGE_LOAD_INFO loadInfo;
ZeroMemory(&loadInfo,sizeof(D3DX10_IMAGE_LOAD_INFO));
// I have played around with a number of permutations of this, sometimes using default settings or simply not specifying a particular D3DX10_IMAGE_LOAD_INFO field.
//loadInfo.Width = 300;
//loadInfo.Height = 300;
loadInfo.Depth = D3DX10_DEFAULT;
loadInfo.BindFlags = 0;
loadInfo.pSrcInfo = NULL;
loadInfo.CpuAccessFlags = D3D10_CPU_ACCESS_READ;
loadInfo.MiscFlags = 0;
loadInfo.Filter = D3DX10_FILTER_NONE;
loadInfo.MipFilter = D3DX10_FILTER_NONE;
loadInfo.FirstMipLevel = D3DX10_DEFAULT;
loadInfo.Usage = D3D10_USAGE_STAGING;
loadInfo.MipLevels = D3DX10_DEFAULT;

// Loads the texture into a temporary ID3D10Resource object. filename.c_str() is valid.
ID3D10Resource* pRes = NULL;
HRESULT hr = D3DX10CreateTextureFromFile(pD3DDevice, filename.c_str(), &loadInfo, NULL, &pRes, NULL);
// I then check hr


The above code always results in D3DERR_INVALIDCALL, the description of which is "Invalid call". If I make the call using:

hr = D3DX10CreateTextureFromFile(pD3DDevice, filename.c_str(),NULL , NULL, &pRes, NULL);

Then everything works, but I cannot then call the ID3D10Texture2D::Map function later. Can anyone see anything wrong with my D3DX10CreateTextureFromFile call?

Additionally, this is the image I am using, but the same problem occurs with any image I try to use:

http://www.axialis.com/tutorials/sample/logo-ps.png
Do you use the DEBUG flag when creating your device, and do you link to the debug version of D3DX10.lib? If you do that, you will get a more detailed explanation of the failure in the debug output window.
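One common way to get the debug D3DX library linked only in debug builds (just a suggestion, assuming a Visual Studio project) is:

// Link the debug D3DX10 import library in debug builds so D3DX calls report
// detailed errors to the debug output window.
#ifdef _DEBUG
#pragma comment(lib, "d3dx10d.lib")
#else
#pragma comment(lib, "d3dx10.lib")
#endif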
It's been 12 years since I programmed C++, so I am a bit rusty, and I am not sure that I am using the debugging tools correctly. Anyhow, I now link to d3dx10d.lib, and I have set the configuration to Debug in numerous locations in VS Express 2010.

I click the little green 'play' button and when it hits my exception I have a look at the debugger window. The only additional line is

First-chance exception at 0x7510b9bc in DXSpriteWrapper.exe: Microsoft C++ exception: GameException at memory location 0x0043e7d0..

This line comes from the exception that I build and throw myself when I get the D3DERR_INVALIDCALL error I mentioned previously. I could not see any other information that might help me. The debugger does not seem to go into the D3DX10CreateTextureFromFile function and report anything useful.

This is what my creation of the pD3DDevice looks like:

HRESULT hr = D3D10CreateDeviceAndSwapChain(NULL,
    D3D10_DRIVER_TYPE_HARDWARE,
    NULL,
    D3D10_CREATE_DEVICE_DEBUG,
    D3D10_SDK_VERSION,
    &swapChainDesc,
    &pSwapChain,
    &pD3DDevice);
