HowTo: bone weight painting tool

Started by evelyn4you
7 comments, last by JoeJ 6 years ago

Hello,

in my game engine I want to implement my own bone weight painting tool, that is, a virtual brush painting tool for a mesh.

I have already implemented my own "dual quaternion skinning" animation system with "morphs" (= blend shapes) and "bone driven" "corrective morphs" (= a morph that depends on a bending or twisting bone).

But now I have no idea what the best method is to implement a brush painting system.

Just some proposals:

a. I would build a kind of additional "vertex structure" that can help me find the surrounding (neighbour) vertex indices for a given "central vertex" index.

b. The structure should also give information about the distance from the neighbour vertices to the given "central vertex".

c. Calculate the strength of the color added to the "central vertex" and the neighbour vertices by a formula with linear or quadratic distance falloff.

d. The central vertex would be detected as the vertex hit by an orthogonal projection from my cursor (= brush) in world space onto the mesh. But my problem is that several vertices could be hit simultaneously, e.g. when I want to paint the inward side of the left leg, the right leg will also be hit.

I think this problem is quite typical and there are standard approaches that I don't know.

Any help or tutorials are welcome.

P.S. I am working with SharpDX, DirectX11

  


a. Yes, you need additional data to represent connectivity. The 'half edge' data structure is common; personally I use a more naive approach: for each vertex store all its edges and polys (clockwise ordering can become useful), for each poly store all its vertices and edges, and for each edge store its 2 vertices and polys. You can implement neighbour searching and region growing on top of this. Because I have never implemented the half edge data structure I can't recommend which way to go, but my approach sometimes feels wasteful or even laborious to use. It always depends on what you need, but I'd give half edge a try if I had to start over from scratch.
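
For illustration, such a naive connectivity structure might look like this in C# (a minimal sketch; all class and member names are my own, not from an existing library):

```csharp
using System.Collections.Generic;

// Naive mesh connectivity: every element stores its adjacent elements.
public class Vertex
{
    public SharpDX.Vector3 Position;
    public List<int> Edges = new List<int>();   // indices into Mesh.Edges
    public List<int> Polys = new List<int>();   // indices into Mesh.Polys
}

public class Edge
{
    public int V0, V1;                          // the 2 vertex indices
    public List<int> Polys = new List<int>();   // 1 or 2 adjacent polys
}

public class Poly
{
    public List<int> Vertices = new List<int>(); // clockwise order
    public List<int> Edges = new List<int>();
}

public class Mesh
{
    public List<Vertex> Vertices = new List<Vertex>();
    public List<Edge> Edges = new List<Edge>();
    public List<Poly> Polys = new List<Poly>();

    // One-ring neighbours of a vertex: the other endpoint of each incident edge.
    public IEnumerable<int> NeighboursOf(int v)
    {
        foreach (int e in Vertices[v].Edges)
            yield return Edges[e].V0 == v ? Edges[e].V1 : Edges[e].V0;
    }
}
```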

b. It is quite common to implement Dijkstra shortest paths on meshes for geometry processing tasks, which is very similar to the idea of region growing. Region growing means extending your current selection - e.g. a single vertex initially - by one ring of neighbouring verts after another. I've implemented a region grower that can grow verts, edges and polys, so keeping this logic independent of the data can be useful to save some work. You typically stop growing once all new vertices are beyond a given max distance.
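
A minimal sketch of such a grower (vertices only, clipped by Euclidean distance; it assumes the hypothetical Mesh class from the sketch above):

```csharp
// Grow a vertex selection ring by ring, clipped to a Euclidean radius
// around the start vertex. Assumes the Mesh class sketched above.
public static HashSet<int> GrowRegion(Mesh mesh, int startVertex, float maxDistance)
{
    var selected = new HashSet<int> { startVertex };
    var ring = new List<int> { startVertex };
    SharpDX.Vector3 center = mesh.Vertices[startVertex].Position;

    while (ring.Count > 0)
    {
        var nextRing = new List<int>();
        foreach (int v in ring)
        {
            foreach (int n in mesh.NeighboursOf(v))
            {
                if (selected.Contains(n)) continue;
                // Stop at vertices outside the brush sphere.
                if ((mesh.Vertices[n].Position - center).Length() > maxDistance) continue;
                selected.Add(n);
                nextRing.Add(n);
            }
        }
        ring = nextRing;   // next iteration grows the next ring
    }
    return selected;
}
```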

There are different kinds of 'distance' you might want to consider:

Euclidean distance, which means clipping your growing inside a simple sphere centered at the start vertex. (Should be fine for vertex painting.)

Geodesic distance: the exact length of the shortest path between two points on the surface.

Approx. geodesic distance: this is what you get if you use Dijkstra and measure distance by summing visited mesh edge lengths. This may follow a zigzag path, so it can be longer than necessary.
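
For the approximate geodesic variant, the same idea can be driven by Dijkstra, accumulating edge lengths instead of measuring straight lines (again a sketch on top of the hypothetical Mesh class):

```csharp
// Dijkstra on the mesh graph: distance = sum of traversed edge lengths
// (approximate geodesic). Returns every reached vertex with its distance.
public static Dictionary<int, float> GeodesicDistances(Mesh mesh, int startVertex, float maxDistance)
{
    var dist = new Dictionary<int, float> { [startVertex] = 0f };
    // SortedSet of (distance, vertex) as a simple priority queue.
    var queue = new SortedSet<(float, int)> { (0f, startVertex) };

    while (queue.Count > 0)
    {
        var (d, v) = queue.Min;
        queue.Remove(queue.Min);
        foreach (int n in mesh.NeighboursOf(v))
        {
            float nd = d + (mesh.Vertices[n].Position - mesh.Vertices[v].Position).Length();
            if (nd > maxDistance) continue;   // outside the brush radius
            if (dist.TryGetValue(n, out float old))
            {
                if (old <= nd) continue;      // already reached on a shorter path
                queue.Remove((old, n));
            }
            dist[n] = nd;
            queue.Add((nd, n));
        }
    }
    return dist;
}
```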

c. You can use any of the distances listed above and model whatever falloff function you want.
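
For example, a linear and a quadratic falloff over the brush radius might look like this (my own sketch):

```csharp
// Brush weight from distance: 1 at the brush center, 0 at the radius.
public static float LinearFalloff(float distance, float radius)
    => System.Math.Max(0f, 1f - distance / radius);

public static float QuadraticFalloff(float distance, float radius)
{
    float t = System.Math.Max(0f, 1f - distance / radius);
    return t * t;   // softer near the rim than the linear version
}
```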

d. You could trace a ray and use the closest vertex of the hit triangle, or use the hit triangle itself as the seed for growing. Because growing proceeds by rings of primitives on the surface, vertices of the 'wrong' leg would not be reached, even if you use simple Euclidean distance and the wrong leg's vertices are very close. The growing process stops before it would start considering 'wrong' vertices (assuming your radius is small enough, of course).
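
For the ray part, the standard Möller-Trumbore ray/triangle test would do; a C# sketch (illustrative, not tied to a specific library):

```csharp
// Möller-Trumbore ray/triangle intersection. Returns hit distance t and
// barycentric coords (u, v), or null on a miss. dir should be normalized.
public static (float t, float u, float v)? RayTriangle(
    SharpDX.Vector3 orig, SharpDX.Vector3 dir,
    SharpDX.Vector3 a, SharpDX.Vector3 b, SharpDX.Vector3 c)
{
    var e1 = b - a;
    var e2 = c - a;
    var p = SharpDX.Vector3.Cross(dir, e2);
    float det = SharpDX.Vector3.Dot(e1, p);
    if (System.Math.Abs(det) < 1e-8f) return null;   // ray parallel to triangle
    float inv = 1f / det;
    var s = orig - a;
    float u = SharpDX.Vector3.Dot(s, p) * inv;
    if (u < 0f || u > 1f) return null;
    var q = SharpDX.Vector3.Cross(s, e1);
    float v = SharpDX.Vector3.Dot(dir, q) * inv;
    if (v < 0f || u + v > 1f) return null;
    float t = SharpDX.Vector3.Dot(e2, q) * inv;
    if (t < 0f) return null;                         // triangle behind the ray
    return (t, u, v);
}
```

The closest vertex of the hit triangle is then simply the one with the largest barycentric weight among (1 - u - v, u, v).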

Optionally / additionally you could use normals to fade out unwanted results.

 

I may be able to answer more detailed questions if you have any. That stuff is not hard to do, but it's quite some work :)

 

 

 

Hi JoeJ,

many thanks for your comprehensive answer.

What about the following method that came to my mind?

a. We treat the vertex painting like shading our mesh with a spotlight in a deferred renderer (which I already have).

b. The brush would be a round cone vertex buffer geometry that is "shining" on the mesh.

c. Just like Phong shading, the mesh is shaded e.g. with cosine intensity and a falloff function based on the deviation from the light vector.

This way the problem of touching vertices that are behind visible geometry is also solved, and we automatically get the vertices that surround our "center vertex".

d. Every vertex gets a "unique color" attribute that corresponds to its index.

Either we update an unordered access view or write to a texture render target.

e. E.g. all black parts of the render target correspond to vertices with no change; the lit parts give the amount of weight to add.

But here is my question:

How is it possible in the pixel shader to calculate the color such that we can later find out from which 3 vertex colors the pixel was blended?
Or even better, how do we find the unique color of the vertex the pixel is nearest to?

We need this information to write, into an additional render target or buffer, the link from light intensity to vertex index.

On the CPU side, we update the bone weights using the information we read back from these two render targets.

What do you think?

27 minutes ago, evelyn4you said:

But here is my question:

How is it possible in the pixel shader to calculate the color such that we can later find out from which 3 vertex colors the pixel was blended?
Or even better, how do we find the unique color of the vertex the pixel is nearest to?

Probably you'd need to write the triangle ID to a render target, get the 3 vertices from that, and derive their weights from the barycentric coords.
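
A sketch of that weight computation on the CPU side (illustrative only): given the hit point and the triangle's 3 positions, the barycentric weights sum to 1, and the largest one identifies the nearest vertex.

```csharp
// Barycentric weights of point p with respect to triangle (a, b, c).
// weights[i] is the contribution of vertex i; the three sum to 1.
public static float[] BarycentricWeights(SharpDX.Vector3 p,
    SharpDX.Vector3 a, SharpDX.Vector3 b, SharpDX.Vector3 c)
{
    var v0 = b - a;
    var v1 = c - a;
    var v2 = p - a;
    float d00 = SharpDX.Vector3.Dot(v0, v0);
    float d01 = SharpDX.Vector3.Dot(v0, v1);
    float d11 = SharpDX.Vector3.Dot(v1, v1);
    float d20 = SharpDX.Vector3.Dot(v2, v0);
    float d21 = SharpDX.Vector3.Dot(v2, v1);
    float denom = d00 * d11 - d01 * d01;
    float v = (d11 * d20 - d01 * d21) / denom;
    float w = (d00 * d21 - d01 * d20) / denom;
    return new[] { 1f - v - w, v, w };
}
```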

29 minutes ago, evelyn4you said:

 

What do you think?

I get the impression you are experienced and used to working with the GPU graphics pipeline, and because of this you tend to utilize this tool to solve a problem that is probably easier to solve without it. Personally I think that's really a CPU thing, and being independent of the graphics API is a good thing in the long run. You may continue using those algorithms for decades and apply them to different problems. I use regions a lot for all kinds of things: mesh smoothing / sharpening, calculating curvature directions and cross fields etc., which is the basis for advanced stuff like segmentation, simplification and quadrangulation.

So if you think you might add anything like this in the future, it's worth building the data structures and algorithms I've mentioned.

But if you are sure you just want vertex painting and nothing else, then your idea sounds good to me.

There is however the visibility problem, which will be frustrating sometimes: how do you reach occluded regions? You turn your viewport to get a good view, but then you would accidentally paint onto other parts of the surface, so you need to make a manual selection first to avoid this, and so forth. But many professional tools have these same limitations, and it's usually not a big problem.

 

Hi JoeJ,

again, many thanks for your input.

I think for debugging and other purposes I will need a vertex and/or triangle visualizer/picker, so I will begin with my "shader version", which can serve for both (visualization AND editing).

Here is the method (easiest solution?) I will give a try in code:

a. Give every vertex (= point) a small 3D geometry representation, e.g. a very small cube.

b. Each cube has a color representing the vertex index; draw the whole "cube cloud" with one draw call.

c. Make a simple render to texture (render target).

On the CPU side:

d. Read only the small 2D area of the render target that surrounds the mouse cursor position.

e. Scan the small area for the different colors and weight them according to the distance from the mouse center point. (The problem here is over- and undersampling, plus visibility depending on zoom level. It's not exact, but I think as a first solution it should work, with all the disadvantages you mentioned.)

 

Instead of rendering full screen for picking and using only a small region of it later, you could use a modified projection matrix and a tiny frame buffer to render only the stuff around the cursor. (That's how OpenGL picking works.)
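
A sketch of such a pick projection (my own illustration of the gluPickMatrix idea, using SharpDX's row-vector convention; the exact signs depend on your coordinate setup):

```csharp
// Restrict an existing projection to a small pixel region around the
// cursor, so a tiny render target sees only that area.
// Cursor coordinates are top-left-origin pixels.
public static SharpDX.Matrix PickProjection(SharpDX.Matrix proj,
    float cursorX, float cursorY,   // cursor position in pixels
    float pickW, float pickH,       // pick region size in pixels
    float viewportW, float viewportH)
{
    // Scale the pick region up to fill clip space, then translate the
    // cursor position into the center.
    var pick = SharpDX.Matrix.Scaling(viewportW / pickW, viewportH / pickH, 1f);
    pick.M41 = (viewportW - 2f * cursorX) / pickW;
    pick.M42 = (2f * cursorY - viewportH) / pickH;   // NDC y points up
    return proj * pick;   // row vectors: the pick transform applies last
}
```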

But I assume it's faster to do this on the CPU, even in brute force: just loop over all vertices and select the closest of those that are close enough to the picking ray. It saves the GPU <-> CPU communication, the rasterization, and your work to set up projection and render targets. The same is true for triangles. (To avoid undersampling issues you could also pick the closest triangle instead of a vertex and select its vertex closest to the intersection.)
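
A brute force picking loop might look like this (a minimal sketch, assuming already transformed world-space positions and a normalized ray direction):

```csharp
// Brute force CPU picking: find the vertex closest to the picking ray,
// within a maximum perpendicular distance.
public static int PickVertex(SharpDX.Vector3[] positions,
    SharpDX.Vector3 rayOrigin, SharpDX.Vector3 rayDir, float maxDistance)
{
    int best = -1;
    float bestDist = maxDistance;
    for (int i = 0; i < positions.Length; i++)
    {
        var toVert = positions[i] - rayOrigin;
        float along = SharpDX.Vector3.Dot(toVert, rayDir);
        if (along < 0f) continue;   // behind the ray origin
        // Perpendicular distance from the vertex to the ray.
        float d = (toVert - rayDir * along).Length();
        if (d < bestDist) { bestDist = d; best = i; }
    }
    return best;   // -1 if nothing was close enough
}
```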

19 hours ago, JoeJ said:

Instead of rendering full screen for picking and using only a small region of it later, you could use a modified projection matrix and a tiny frame buffer to render only the stuff around the cursor. (That's how OpenGL picking works.)

Thank you, this is a good trick. I think it will save memory but not much workload, because the pixel shader in this case is so simple (it just writes the index ID to the render target).

20 hours ago, JoeJ said:

But I assume it's faster to do this on the CPU, even in brute force: just loop over all vertices and select the closest of those that are close enough to the picking ray. It saves the GPU <-> CPU communication, the rasterization, and your work to set up projection and render targets. The same is true for triangles. (To avoid undersampling issues you could also pick the closest triangle instead of a vertex and select its vertex closest to the intersection.)

Maybe I don't understand correctly, but I feel it would be a heavy workload.

E.g. my base character (from which all others are made just by changing the morphing parameters) is high poly: about 20,000 vertices and 60,000 triangles.

When doing weight painting, the characters are not in bind pose but in a certain user-defined pose and morph.

So on the CPU side I would have to apply all morphs and bone transforms to all vertices and store them in the "access vertex array".

But there is no need to transform them into screen space with the worldViewProjection matrix, right?

In my case this would be no problem, because I have a compute shader that does this pre-transform and stores the new world coordinates in a structured buffer (unordered access view).

The CPU side would begin here:

a. Read back the transformed vertices from the GPU into an "access vertex array" (still in world space).

b. Now I have to translate "somehow" the cursor position to world space (by the inverse view-projection matrix?? - see the sketch after this list).

c. Then the brute force loop is run, finding the vertices within the given range by (squared) distance.

d. Change the corresponding bone weights of those vertices and update the input vertex buffer on the GPU.
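
For step b, a common approach (my own sketch, not necessarily how the engine does it) is to unproject the cursor at the near and far planes with the inverse view-projection matrix and use the two points as a world-space ray:

```csharp
// Build a world-space picking ray from the cursor by unprojecting two
// points with the inverse view-projection matrix.
// Cursor coordinates are top-left-origin pixels.
public static (SharpDX.Vector3 origin, SharpDX.Vector3 dir) CursorRay(
    float cursorX, float cursorY, float viewportW, float viewportH,
    SharpDX.Matrix viewProjection)
{
    // Cursor position in normalized device coordinates.
    float ndcX = 2f * cursorX / viewportW - 1f;
    float ndcY = 1f - 2f * cursorY / viewportH;

    var invVP = SharpDX.Matrix.Invert(viewProjection);
    // Unproject at the near plane (z = 0 in D3D NDC) and the far plane (z = 1).
    var near = SharpDX.Vector3.TransformCoordinate(new SharpDX.Vector3(ndcX, ndcY, 0f), invVP);
    var far  = SharpDX.Vector3.TransformCoordinate(new SharpDX.Vector3(ndcX, ndcY, 1f), invVP);

    var dir = far - near;
    dir.Normalize();
    return (near, dir);
}
```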

I will simply have to try out how fast the methods are. I think neither should be too hard to implement.

 

Yesterday I made a first implementation of my shader version with a full screen render target, but the FPS dropped from 48 to 39, which in my case is a lot. (I did the test with a big scene of 8 animated high poly characters and 1500 mouse-selectable vertex geometry representations: small cones with only 4 vertices each, all in one big vertex buffer, plus a constant buffer with the transform matrices.)

Not at all optimized.

 

1 hour ago, evelyn4you said:

Maybe I don't understand correctly, but I feel it would be a heavy workload.

Not really, because if you render a 4x4 pixel frame buffer for picking, most of the GPU will be idle all the time. The CPU is probably faster even without counting the data transfer.

But if you don't have a CPU skinning implementation, I agree it's not an option (I just assumed one for a tool). Performance does not matter a lot for a tool. If you have two options, one 10 times faster but more work, I'd pick the one causing less work and see if performance is acceptable (even if some guy on a forum mentions it's not ideal :) ).

You would only paint on one model at a time, yes? So a test with 8 characters is already worse than the expected worst case.

