Cursor position in Worldspace from NDC coords

4 comments, last by BattleCollie 12 years, 4 months ago
I had an idea about using the GPU (I'm using DirectX 10) to get the worldspace coordinates of the mouse cursor.

You set up a 1x1 render target and a shader that takes a shader resource view of the depth buffer and the NDC coords of the cursor as inputs. The shader samples the depth buffer at those coordinates and, together with the NDC x and y, reconstructs the position in camera space. Then it's just a case of transforming that back to worldspace and drawing a fullscreen quad to write it to the render target's single texel. You then read this back in your main code.

The problem I can see is to do with reading the values back to the CPU. I'm guessing I'd need a texture created with D3D10_USAGE_STAGING to be able to read it — would this cause some horrible slowdown issues? I also realise you'd only be able to use the depth buffer from the preceding frame, so your coordinates would be one frame behind.
You can't directly render to a STAGING texture. You have to render to a regular render target, use CopyResource to copy the render target contents to the STAGING texture, and then Map the STAGING texture to get the data. There is a lot of latency involved in doing this, since you need to wait for the GPU to catch up with the CPU. In practice you will probably need to wait even longer than a single frame if you want to avoid stalls.
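For reference, the readback path described here looks roughly like this in D3D10. This is an uncompiled sketch, not a drop-in snippet: error handling is omitted, and `device` and the 1x1 render-target texture `rtTex` are assumed to exist already.

```cpp
// Sketch of the CopyResource + Map readback path (D3D10).
D3D10_TEXTURE2D_DESC desc = {};
desc.Width = 1;
desc.Height = 1;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
desc.SampleDesc.Count = 1;
desc.Usage = D3D10_USAGE_STAGING;          // CPU-readable, not bindable
desc.CPUAccessFlags = D3D10_CPU_ACCESS_READ;

ID3D10Texture2D* staging = nullptr;
device->CreateTexture2D(&desc, nullptr, &staging);

// After rendering the 1x1 target:
device->CopyResource(staging, rtTex);

// Some frames later (to give the GPU time to finish the copy):
D3D10_MAPPED_TEXTURE2D mapped = {};
staging->Map(0, D3D10_MAP_READ, 0, &mapped);
const float* worldPos = static_cast<const float*>(mapped.pData);
// worldPos[0..2] = cursor position in worldspace
staging->Unmap(0);
```

If you Map immediately after the CopyResource, the driver has to stall the CPU until the GPU has actually finished that copy — that wait is the latency being described above.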
So I'm best just giving it up as a bad job and just doing ray/collision-shape tests on the CPU?
Well, it's fine if you can tolerate the latency... if you need the results immediately, then yes, you'll need to keep your computations on the CPU.
I'm doing this too and noticed stalls when mapping on the CPU to get the data from the GPU. I have a somewhat hacky solution that seems to work: every frame I copy the data at the cursor position to a single-pixel render target. But instead of reading that render target straight back on the CPU, I keep a small ring of them and stagger which one the GPU is writing to and which one the CPU is reading from, wrapped in a texture sampler class inside my engine. There's a little lag between what you get and what the mouse cursor is actually over, but not much (especially at high FPS). It's hacky, but it seems to avoid most of the stalling by never having the two access the same texture at the same time.

I decided to stick with this despite the potential downsides, mainly because I wanted pixel-perfect picking on the scene... well, that and I haven't looked at ray/box collision detection yet. The other advantage is that I can do the same readback for all the textures in my G-buffer and get the encoded view-space and tangent-space normals, for things like aligning objects to the exact angle of whatever I'm currently pointing at. I also went this route because I encode object IDs into a single texture and read them back to find out which GUI elements or objects I'm pointing at, without ray casting.

Like I said, not elegant but it seems to do the job till I can figure out something better.



I should also mention that I know the data I'm reading from the single-pixel texture won't always be 100% exactly what the mouse is over, but it's close enough, and updated frequently enough, that it seems to cause no problems.

DAMN YOU CPU/GPU STALLING! *shakes fist*

