cptrnet

Getting vertex positions


Not really - providing CPU access to that data would seriously slow down the graphics card. You might be able to do something with a shader, but getting the output into a usable form could be tricky. So you'll need to do the transformation yourself, e.g. with the D3DXVec3ProjectArray function.

You can use ProcessVertices() to run the vertex pipeline with the current device setup. The result is a vertex buffer transformed by the world/view/projection matrices set on the device. The output declaration may be different from the input declaration, so reading it may be a little tricky. See IDirect3DDevice9::ProcessVertices in the docs for more info.
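A non-runnable sketch of what that call sequence might look like, assuming a device and source buffer already exist (g_pDevice, g_pSrcVB, numVerts, the FVFs and strides are all illustrative names, and error checking is omitted):

```cpp
// Destination buffer in system memory so the CPU can read it back cheaply.
IDirect3DVertexBuffer9* pDestVB = NULL;
g_pDevice->CreateVertexBuffer(numVerts * destStride,
                              D3DUSAGE_SOFTWAREPROCESSING, destFVF,
                              D3DPOOL_SYSTEMMEM, &pDestVB, NULL);

// Bind the source geometry, then run the vertex pipeline on the CPU.
g_pDevice->SetStreamSource(0, g_pSrcVB, 0, srcStride);
g_pDevice->SetFVF(srcFVF);
g_pDevice->ProcessVertices(0,          // first source vertex
                           0,          // first destination vertex
                           numVerts,   // number of vertices to process
                           pDestVB,    // receives transformed vertices
                           NULL,       // vertex declaration (NULL = use FVF)
                           0);         // flags

// Lock and read the transformed positions back.
void* pData = NULL;
pDestVB->Lock(0, 0, &pData, D3DLOCK_READONLY);
// ... interpret pData according to the output declaration ...
pDestVB->Unlock();
```

The tricky part the post mentions is that last step: the output vertex layout is determined by the destination declaration/FVF, not the input one, so you have to walk pData with the correct stride and offsets.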

Quote:
Original post by cptrnet
Once I create a vertex buffer, is there any way to get the positions of the vertices once I translate, rotate, or scale the matrix?


You should treat data in vertex buffers as write-only if you want good performance for your application. Vertex buffers are usually located in AGP memory or even video memory with modern GPUs/drivers.
AGP/video memory isn't cached for CPU accesses, and if the buffer is in true video memory, it's on the other side of a bus such as AGP/PCI. Effectively you're guaranteed a cache miss for every DWORD of data you read from the vertex buffer, and it'll be even slower if the read has to be performed by the driver from true video memory.

The only things that should read from D3DPOOL_DEFAULT vertex buffers are vertex shaders and the GPU itself.


Can I ask why you need access to the vertex positions after transformation?

If it's for something like collision detection or "picking", then I'd strongly recommend storing separate data specifically for that, and use one of the methods already suggested such as D3DXVec3TransformArray() to transform the data.

Most of the time, the meshes/vertices used for things like collision don't need anywhere near as many polygons/vertices as the meshes being rendered (I'd say between 10 and 50 times fewer polygons in most real-world situations). Additionally, collision vertices don't need "render" attributes like colours and texture coordinates, and you can also store specific data that the renderer doesn't need (e.g. friction coefficients, face normals, etc.).


Quote:
Original post by turnpast
You can use ProcessVertices() to run the vertex pipeline with the current device setup. The result is a vertex buffer transformed by the world/view/projection matrices set on the device. The output declaration may be different from the input declaration, so reading it may be a little tricky. See IDirect3DDevice9::ProcessVertices in the docs for more info.


Beware though: ProcessVertices() is only ever executed with software vertex processing on your computer's CPU; it is never performed on the GPU. [It does use D3D's PSGP though, so it will take advantage of SSE or 3DNow! if present on the CPU.]

