Interpreting viewpoint information from Direct3D 9 calls

Shakkan
Hello everyone,

I learned a way to determine the offset of any DX9 method and implemented a way to inject my DLL and intercept calls. Now for the tough part: I am currently exploring ways of interpreting a viewpoint (or camera?) from D3D calls alone.

I am aware of utility functions like D3DXMatrixLookAtLH, which returns a transformation matrix to be applied to the environment (i.e. to produce a camera movement). However, can D3DX be considered a safe bet? It is not part of core DirectX, but rather a widely used add-on. Is anything else commonly used for camera movement in a game?

I am not looking to copy/paste code; I want to read, learn and understand, and that's what I do every day. However, if any of you would be so kind as to share information/hints/advice/links about how to interpret camera position/movement, I would be very grateful. Note that I do not want to alter any data coming from the DX calls, only read/interpret it.

Thanks a lot in advance for your input,
Alex

S1CA
If the application you are hooking/intercepting uses the fixed function D3D pipeline, then look at the matrices passed to IDirect3DDevice9::SetTransform(). From those you should easily be able to determine the object to world (aka world), world to camera (aka view) and camera to clip (aka projection) transformations.
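For instance, in a SetTransform() hook, the D3DTS_VIEW state carries the world-to-camera matrix, and the camera's world-space position can be recovered directly from it. A minimal sketch, using a stand-alone Matrix struct in place of D3DMATRIX (assuming D3D9's row-major, row-vector convention) and reimplementing the D3DXMatrixLookAtLH construction for illustration:

```cpp
#include <cmath>

// Stand-ins for D3D types -- assumption: these mirror D3D9's row-major
// D3DMATRIX layout (m[row][col], row-vector convention).
struct Matrix { float m[4][4]; };
struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return {v.x / len, v.y / len, v.z / len};
}

// Same construction as D3DXMatrixLookAtLH (left-handed look-at view matrix).
Matrix lookAtLH(Vec3 eye, Vec3 at, Vec3 up) {
    Vec3 zaxis = normalize(sub(at, eye));      // look direction
    Vec3 xaxis = normalize(cross(up, zaxis));  // right
    Vec3 yaxis = cross(zaxis, xaxis);          // true up
    Matrix v = {{
        { xaxis.x, yaxis.x, zaxis.x, 0 },
        { xaxis.y, yaxis.y, zaxis.y, 0 },
        { xaxis.z, yaxis.z, zaxis.z, 0 },
        { -dot(xaxis, eye), -dot(yaxis, eye), -dot(zaxis, eye), 1 },
    }};
    return v;
}

// Recover the camera's world-space position from a view matrix:
// the bottom row is t_j = -dot(axis_j, eye), and the upper 3x3 stores
// axis_j in column j, so eye = -(t0*col0 + t1*col1 + t2*col2).
Vec3 cameraPositionFromView(const Matrix& v) {
    Vec3 eye;
    eye.x = -(v.m[3][0]*v.m[0][0] + v.m[3][1]*v.m[0][1] + v.m[3][2]*v.m[0][2]);
    eye.y = -(v.m[3][0]*v.m[1][0] + v.m[3][1]*v.m[1][1] + v.m[3][2]*v.m[1][2]);
    eye.z = -(v.m[3][0]*v.m[2][0] + v.m[3][1]*v.m[2][1] + v.m[3][2]*v.m[2][2]);
    return eye;
}
```

In a real hook you would apply cameraPositionFromView() to the D3DMATRIX passed with D3DTS_VIEW; the recovery only assumes the upper 3x3 is a pure rotation, which holds for any look-at-style view matrix.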

Some applications use D3DX functions to compute matrices, but many don't.

If an application does not use D3DX and does not use the fixed function transformation and lighting pipeline then it will be difficult to find the world space to camera space transformation.

If an application uses vertex shaders then there isn't any easy way to know which vertex shader constant registers contain the transformations applied in the shader code. For some shaders the 'view' (world space to camera space) transformation may be concatenated with other transformations (e.g. I have shaders where the only matrix that is set is an "object space to clip space" transform), and matrices can have different representations in memory (e.g. row-major vs. column-major).

Furthermore, with a shader the original developer of the application may not even be using matrices to store transformations - e.g. a quaternion and a translation could be used for a camera.

I suppose the only truly reliable way to find the transformation(s) in a shader based app is to disassemble the shader stream passed to IDirect3DDevice9::SetVertexShader() and work your way back from the last thing to write to oPos - and even then, separating the projection (camera->clip) transform from the camera (world->camera) and world (object->world) transforms might be pretty difficult.
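Short of disassembling the shader, one partial heuristic is to inspect the 4x4 blocks uploaded via SetVertexShaderConstantF(): a standalone perspective projection matrix built the D3DXMatrixPerspectiveFovLH way has a distinctive sparsity pattern (only m00, m11, m22, m23, m32 non-zero, with m23 = 1 and m33 = 0). A sketch, again with a stand-alone Matrix stand-in for D3DMATRIX and hypothetical helper names; note this deliberately fails on a concatenated world-view-projection matrix, which will not match the pattern:

```cpp
#include <cmath>

// Assumption: mirrors D3D9's row-major D3DMATRIX layout (m[row][col]).
struct Matrix { float m[4][4]; };

// Same construction as D3DXMatrixPerspectiveFovLH, used here to test the check.
Matrix perspectiveFovLH(float fovY, float aspect, float zn, float zf) {
    float h = 1.0f / std::tan(fovY * 0.5f);  // cot(fovY/2)
    float w = h / aspect;
    Matrix p = {{
        { w, 0, 0, 0 },
        { 0, h, 0, 0 },
        { 0, 0, zf / (zf - zn), 1 },
        { 0, 0, -zn * zf / (zf - zn), 0 },
    }};
    return p;
}

// Heuristic (hypothetical helper): does this 4x4 constant block look like a
// standalone D3D-style perspective projection matrix?
bool looksLikePerspectiveProjection(const Matrix& p) {
    const float eps = 1e-5f;
    return p.m[0][0] > 0.0f && p.m[1][1] > 0.0f &&
           std::fabs(p.m[2][3] - 1.0f) < eps &&   // w' = z  (perspective divide)
           std::fabs(p.m[3][3]) < eps &&          // no affine w term
           std::fabs(p.m[0][1]) < eps && std::fabs(p.m[0][2]) < eps &&
           std::fabs(p.m[0][3]) < eps && std::fabs(p.m[1][0]) < eps &&
           std::fabs(p.m[1][2]) < eps && std::fabs(p.m[1][3]) < eps &&
           std::fabs(p.m[2][0]) < eps && std::fabs(p.m[2][1]) < eps &&
           std::fabs(p.m[3][0]) < eps && std::fabs(p.m[3][1]) < eps;
}
```

If such a block is found at some register range, the remaining constants become better candidates for the world and view transforms; it is only a heuristic, and games that transpose their matrices before upload (common for shader efficiency) would need the transposed pattern checked as well.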
