C# SharpDX dynamically moving vertices

Started by
4 comments, last by markypooch 6 years, 10 months ago

I am trying to manipulate vertices at render time in order to move my rendered object.

VertexBuffer:


            var VertexBuffer = SharpDX.Direct3D11.Buffer.Create(_device, BindFlags.VertexBuffer, Vertices);

Vertices:


Vertices = new[] // A box; position and color interleaved
{
    new Vector4(-1.0f, -1.0f, -1.0f, 1.0f), // position
    new Vector4( 1.0f,  0.0f,  0.0f, 1.0f), // color
    ...
    new Vector4( 1.0f,  1.0f,  1.0f, 1.0f), // position
    new Vector4( 0.0f,  1.0f,  1.0f, 1.0f), // color
};

Setting VertexBuffer:


d3dDevice.ImmediateContext.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(VertexBuffer, Utilities.SizeOf<Vector4>() * 2, 0)); // stride: two Vector4s per vertex (position + color)

My goal is to move the object around the screen (left, right, and so on), just like in games.

So far I have done something like this:


RenderLoop.Run(form, () =>
{
    if (__mesh.IsSelected)
    {
        for (int i = 0; i <= __mesh.VerticesCount - 1; i++)
        {
            if (IHandler.KeyRight) __mesh.Vertices[i].X += 0.050f;
            if (IHandler.KeyLeft)  __mesh.Vertices[i].X -= 0.050f;
            if (IHandler.KeyUp)    __mesh.Vertices[i].Y += 0.050f;
            if (IHandler.KeyDown)  __mesh.Vertices[i].Y -= 0.050f;
        }
    }

    // Inside the render loop I create a new buffer every frame (?) - most probably this is wrong.
    VertexBuffer = SharpDX.Direct3D11.Buffer.Create(device, BindFlags.VertexBuffer, Vertices);
});

It works, but I am not sure this is the correct way to do it. What should I do?


You would normally use matrices to move a rendered object instead of altering the local space in which the vertices are defined. Something like this:

if (__mesh.IsSelected)
{
    // No per-vertex loop needed: one translation matrix moves the whole mesh.
    if (IHandler.KeyRight) __mesh.worldMatrix = XMMatrixTranslation(posX += 0.050f, posY, posZ);
    if (IHandler.KeyLeft)  __mesh.worldMatrix = XMMatrixTranslation(posX -= 0.050f, posY, posZ);
    if (IHandler.KeyUp)    __mesh.worldMatrix = XMMatrixTranslation(posX, posY += 0.050f, posZ);
    if (IHandler.KeyDown)  __mesh.worldMatrix = XMMatrixTranslation(posX, posY -= 0.050f, posZ);
}

Assuming, of course, that your object has an XMMATRIX member. You want to use matrices for transformations mostly for speed: you don't have to edit potentially thousands of vertices and re-upload them to the GPU on every key press, and a matrix can describe the whole transformation in just 64 bytes.

That being said, I'd like to clarify: are you asking about whole-mesh transformations, or vertex animation? Like a sine wave, or vegetation animation?


Thank you for the answer; for now I was asking about mesh transformation. If I understood correctly, you mean I will have a world-view-projection matrix for each mesh? I had one projection for all meshes and tried to move each mesh's vertices instead.


// EXAMPLE : Matrix.Translation(-1, 0, 0);
// Note: SharpDX rotation angles are in radians, not degrees.
var worldViewProj = Matrix.Translation(-1, 0, 0) * Matrix.RotationX(45) * Matrix.RotationY(0 * 2) * Matrix.RotationZ(0 * .7f) * viewProj;

Now, should I change my setup to have multiple matrices, one per mesh? And if I then want to move the camera instead of the meshes, should I apply the operation to all of the mesh matrices?

The general gist is this: you have three matrices, which are always multiplied together and sent to the shader.

For every mesh you have a world matrix, which transforms the mesh vertices from object space into world space. The camera provides the view matrix, which transforms all objects into the camera's view. The projection matrix is then used to transform the objects into screen space.
Usually a viewport also comes into play, but that one is optional.

World matrices are per object, and change when the object is moved/rotated/scaled/whatever.
The view matrix changes when the camera is moved/rotated/zoomed/whatever.
The projection matrix is usually chosen for one specific projection (e.g. perspective or orthographic), and is then never touched.

You never ever modify the meshes directly if you can avoid it.


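The setup above can be sketched in a few lines. This is a minimal illustration, not code from the thread: `System.Numerics.Matrix4x4` stands in for SharpDX's `Matrix` type (both use the row-vector convention), and the `Mesh` class here is hypothetical.

```csharp
using System;
using System.Numerics;

class Mesh
{
    public Vector3 Position;

    // The world matrix is rebuilt from the mesh's own position; per object.
    public Matrix4x4 World => Matrix4x4.CreateTranslation(Position);
}

class Program
{
    static void Main()
    {
        // One view and one projection matrix, shared by every mesh.
        var view = Matrix4x4.CreateLookAt(
            new Vector3(0, 0, -5), // camera position
            Vector3.Zero,          // camera target
            Vector3.UnitY);        // up vector
        var proj = Matrix4x4.CreatePerspectiveFieldOfView(
            MathF.PI / 4, 16f / 9f, 0.1f, 100f);

        var meshA = new Mesh { Position = new Vector3(-1, 0, 0) };
        var meshB = new Mesh { Position = new Vector3( 2, 0, 0) };

        // Per mesh: world * view * projection.
        var wvpA = meshA.World * view * proj;
        var wvpB = meshB.World * view * proj;

        // Moving the camera means changing only 'view';
        // every mesh picks the change up on the next multiply.
        Console.WriteLine(wvpA != wvpB);
    }
}
```

Note that moving the camera touches only the shared view matrix; you never loop over the per-mesh world matrices for that.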

Alright, I fixed it by adding a world-view matrix to my mesh class, so now each mesh has its own. I set positions from a Vector3, and when the calculation is done I use it like this:


WorldViewMatrix = Matrix.Translation (position.X,position.Y,position.Z) * ... * viewProj;

The move operation inside the render loop:


foreach (Mesh __mesh in Meshes.MeshCollection)
{
    if (__mesh.IsSelected)
    {
        if (IHandler.KeyRight) { __mesh.Position.X += 0.050f; }
        if (IHandler.KeyLeft)  { __mesh.Position.X -= 0.050f; }
        if (IHandler.KeyUp)    { __mesh.Position.Z += 0.050f; }
        if (IHandler.KeyDown)  { __mesh.Position.Z -= 0.050f; }
        if (IHandler.KeyQ)     { __mesh.Position.Y -= 0.050f; }
        if (IHandler.KeyE)     { __mesh.Position.Y += 0.050f; }
    }
    __mesh.Render(constantBuffer, viewProj);
}

Then, if I should avoid manipulating vertices directly, how should I animate my mesh the way it's done in Autodesk Maya? For example, a water-surface animation?

Vertex animation is an option for some meshes, like plants, water, what have you. Normally the route I take, though not always feasible, is to offload the vertex animation to the vertex shader. Sometimes, though, doing it on the application side with a dynamic buffer is the sanest option.
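For the dynamic-buffer route, the usual Direct3D 11 pattern is to create the vertex buffer once with dynamic usage, then map it with write-discard each frame instead of creating a new buffer. A rough sketch, assuming a SharpDX `Device` named `device`, its `ImmediateContext` named `context`, and a `Vector4[] vertices` array like the one in the question:

```csharp
// Create the buffer ONCE, with CPU-writable dynamic usage.
var desc = new BufferDescription(
    Utilities.SizeOf<Vector4>() * vertices.Length,
    ResourceUsage.Dynamic,
    BindFlags.VertexBuffer,
    CpuAccessFlags.Write,
    ResourceOptionFlags.None,
    0);
var vertexBuffer = new SharpDX.Direct3D11.Buffer(device, desc);

// Each frame, instead of Buffer.Create: map, overwrite, unmap.
// WriteDiscard hands you a fresh region, so the GPU never stalls
// waiting on data it is still reading.
var dataBox = context.MapSubresource(vertexBuffer, 0, MapMode.WriteDiscard, MapFlags.None);
Utilities.Write(dataBox.DataPointer, vertices, 0, vertices.Length);
context.UnmapSubresource(vertexBuffer, 0);
```

This keeps one GPU allocation alive for the lifetime of the mesh, which is much cheaper than recreating the buffer every frame as in the original render loop.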

"I do some animation for my mesh just like in Autodesk Maya"

Normally you'd rig your mesh in your 3D software and export the vertices and mesh skeleton to a file to be read in by your app. In that scenario you can still use matrices, but the standard is quaternions. You'd traverse the skeleton of your mesh, concatenating each parent bone's transformation down to the most subsidiary bone. Though a fundamental topic, it is pretty complex in the scheme of things, especially if you're just starting out.

Another option is just doing it the old way: having two topologically identical meshes in different poses and interpolating between them. This is called morph targets, or tweening. It is obviously much more memory intensive, but far simpler to implement.
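Morph-target tweening boils down to a per-vertex lerp between the two poses. A minimal sketch, assuming the poses are stored as plain `Vector3` arrays (`System.Numerics` used here for illustration):

```csharp
using System;
using System.Numerics;

class MorphDemo
{
    // Linear interpolation between two topologically identical poses.
    // t = 0 gives poseA, t = 1 gives poseB.
    static Vector3[] Tween(Vector3[] poseA, Vector3[] poseB, float t)
    {
        var result = new Vector3[poseA.Length];
        for (int i = 0; i < poseA.Length; i++)
            result[i] = Vector3.Lerp(poseA[i], poseB[i], t);
        return result;
    }

    static void Main()
    {
        var rest = new[] { new Vector3(0, 0, 0), new Vector3(1, 0, 0) };
        var bent = new[] { new Vector3(0, 1, 0), new Vector3(1, 1, 0) };

        // Halfway between the two poses: every Y lands at 0.5.
        var mid = Tween(rest, bent, 0.5f);
        Console.WriteLine(mid[0].Y == 0.5f && mid[1].Y == 0.5f);
    }
}
```

In practice you would either upload the interpolated array each frame via a dynamic buffer, or bind both poses and do the lerp in the vertex shader with `t` as a constant.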

This topic is closed to new replies.
