sebjf

Member Since 22 Nov 2010
Offline Last Active May 13 2012 01:31 PM

Posts I've Made

In Topic: How would I retrieve the 'inner surface' of an arbitrary mesh?

28 April 2012 - 05:17 AM

Hi Graham,

First, sorry for the late reply; I am starting to wonder if I am completely misunderstanding the "Follow This Topic" button!

To clarify, the first image is the 'detailed mesh' and the second is the 'physical mesh'. The 'physical mesh' is literally the detailed mesh with overlapping polygons removed (in this example, manually). This may require some explanation:

In my project, I am working on automatic mesh deformation whereby my algorithm fits one mesh over another. To do this, I reduce the target mesh to a simplified 'physical mesh' and check for collisions with a 'face cloud'. The 'face cloud' consists of the baked faces of every mesh making up the model(s) that the target mesh should deform to fit. (The target mesh when done will completely encompass the face cloud.)

For each point in the 'physical mesh', I project a ray, test for intersections with the face cloud, find the farthest intersection, and then move that control point to its position.
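Sketched in Python below (the ray direction is assumed to be the point's normal, and the face cloud is assumed to be a flat list of world-space triangles - both are assumptions on my part):

```python
import numpy as np

def intersect_ray_triangle(origin, direction, v0, v1, v2, eps=1e-8):
    """Moller-Trumbore ray/triangle intersection; returns distance t or None."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:
        return None                       # ray parallel to triangle
    inv_det = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv_det
    return t if t > eps else None

def project_to_farthest_hit(point, normal, face_cloud):
    """Cast a ray from a physical-mesh point and return the farthest hit position.

    face_cloud: iterable of (v0, v1, v2) world-space triangles.
    """
    best_t = None
    for v0, v1, v2 in face_cloud:
        t = intersect_ray_triangle(point, normal, v0, v1, v2)
        if t is not None and (best_t is None or t > best_t):
            best_t = t                    # keep the farthest intersection
    return point if best_t is None else point + best_t * normal
```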

Before this is done, I 'skin' my detailed mesh to the 'physical mesh': for each point in the detailed mesh (regardless of position, normal, etc.) I find the closest four points in the 'physical mesh' and weight the point to each of them (the weight being each point's share of the sum of the distances); the result is that, when the 'physical mesh' is deformed, each point in the 'detailed mesh' deforms linearly with it.
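Roughly, in a Python-style sketch (the exact weighting here is an assumption - I've used normalised inverse distances so that nearer control points get the larger weights):

```python
import numpy as np

def bind_to_physical_mesh(detail_verts, phys_verts, k=4):
    """Bind each detailed vertex to its k nearest physical-mesh points.

    Returns (indices, weights, offsets) per vertex; the inverse-distance
    weighting is an assumed interpretation of the scheme described above.
    """
    bindings = []
    for p in detail_verts:
        d = np.linalg.norm(phys_verts - p, axis=1)
        idx = np.argsort(d)[:k]                 # four closest control points
        w = 1.0 / np.maximum(d[idx], 1e-6)      # inverse distance
        w /= w.sum()                            # normalise so weights sum to 1
        offsets = p - phys_verts[idx]           # keep offsets for linear deform
        bindings.append((idx, w, offsets))
    return bindings

def deform_detail(bindings, phys_deformed):
    """Move each detailed vertex linearly with its bound physical points."""
    out = []
    for idx, w, offsets in bindings:
        out.append(np.sum(w[:, None] * (phys_deformed[idx] + offsets), axis=0))
    return np.array(out)
```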

The purpose of this is to preserve features such as overlapping edges, buttons, etc., because for these the normal of each point cannot be relied upon to determine which side of the surface the point lies on; hence the need for a control mesh.
What I am attempting to create in the 'physical mesh' is simply a single surface where all the points' normals accurately describe that surface.

So far, I do this by using the skinning data to calculate a 'roaming' centre of mass for each point, which is the average position of that point and all others that share the same bones. Any point whose normal is contrary to (point position - centre of mass for that point) is culled (but it is still deformed correctly, because it is skinned to the surrounding points that are not culled).
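In sketch form (Python, assuming per-point bone sets taken from the skinning data):

```python
import numpy as np

def cull_inward_points(positions, normals, bone_sets):
    """Keep only points whose normal agrees with (position - roaming centre of mass).

    positions: (N, 3) array, normals: (N, 3) array,
    bone_sets[i]: set of bone indices influencing point i (from the skinning data).
    Brute-force O(N^2) grouping; fine as a sketch.
    """
    keep = []
    for i, (p, n) in enumerate(zip(positions, normals)):
        # 'roaming' centre of mass: average of this point and all points
        # sharing at least one bone with it
        group = [j for j, bones in enumerate(bone_sets) if bones & bone_sets[i]]
        centre = positions[group].mean(axis=0)
        outward = p - centre
        if np.dot(n, outward) > 0.0:     # normal points away from the centre
            keep.append(i)
    return keep
```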


This whole setup is designed for user-generated content, which is why I can't do what normal, sensible people do and just have artists build a collision mesh in Max. It is also why I cannot make any assumptions about the target mesh*.

*Well, I can make some assumptions: for one, I can assume it is skinned, and that the mesh it is deforming to fit is also skinned. Since I started using the skinning data, the performance (quality of results) has increased dramatically.

For more complex meshes, though, I still need a better solution, as this won't cull two points that sit very close together, one outside the collision mesh and one inside (and hence, when deformed, the features are crushed, as only one of them pulls its skinned vertices out).

Your idea for ray tracing to find overlapping polys sounds very promising, I will look into this, Thanks!

Seb

In Topic: Could someone explain how this equation to calculate weights for control poin...

20 April 2012 - 11:21 AM

Hi TheUnbeliever,

Thank you! I don't know how I read that as d1, d2 and d3 the first time round. (I still think they are very obscurely named variables!)

It is somewhat clearer what is happening. As I see it now, when the sum of the distances is calculated, each distance is actually weighted by the angle of that point to the 'main point'. This would be so that when a vertex lies close to the vector between two control points, the third point's influence is reduced, as the literal distance may be small but the practical deformation is controlled by the control points on either side, right?

In Topic: Producing a depth field (or volume) for cloth simulation?

12 April 2012 - 05:22 AM

Hi Spek, thinking out loud is what I am after!

For this part of the project, real-time cloth animation is in the back of my mind, but at the moment I am focusing on what I originally saw as a subset of it. This system has user-generated content in mind, so it's designed to fit meshes together but does not allow for any significant assumptions about the target mesh. The purpose is not to run a continuous simulation but merely to deform a mesh once (to start with) so that it fits a character it was not designed for.

I have read about image-space collision detection for actual real-time cloth simulation (e.g. http://ecet.ecs.ru.a...s/siiia/315.pdf) and generally the implementations use multiple cameras, as you say, rendering the same object from multiple sides to account for the limited information in a single depth map.

Without a doubt, my ideal implementation would compare the surface to a 3D volume; this is the best real-world approximation, and I like that it would be a 'one stop' for the complete mesh and would avoid needing to cull false positives. I haven't yet seen anything, though, which suggests the performance could be comparable to the image-space implementation.


The biggest problems I have with my implementation so far really come down to two things:

1. Loss of fine detail on deformations
2. Culling heuristics (that is, deciding which points in a 3D space are affected using only a 2D expression of this space)

(1) is what I am experimenting with now. My plan is to take the 'reference' depth (the depth map of the item alone) and the 'deviations' (the difference between this map and the map with the character drawn).
Each point in the target mesh is projected into screen space, and its position is then changed to the one retrieved in world space from the combination of those depth maps. I can then alter how each point is transformed by filtering the 'deviation map' (e.g. by applying a Gaussian blur to filter out the small, high-frequency changes in depth, and thus preserve the details of the mesh while still transforming the section as a whole outside the character mesh).

This preserves the performance characteristics (O(number of vertices in target mesh)) and should preserve a satisfactory level of detail for cloth.
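A rough Python sketch of that pass (the project/unproject helpers are hypothetical, and the sign and combination of the deviation are assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def deform_from_depth(verts_ws, reference_depth, combined_depth,
                      project, unproject, blur_sigma=2.0):
    """Push each vertex to the depth implied by the (blurred) deviation map.

    project(p) -> (x, y, depth) in screen space; unproject(x, y, depth) -> world.
    Both helpers are assumed to exist; this is only a sketch of the idea.
    """
    deviation = combined_depth - reference_depth          # sign is an assumption
    deviation = gaussian_filter(deviation, sigma=blur_sigma)  # drop fine detail
    out = []
    for p in verts_ws:
        x, y, d = project(p)
        xi = int(np.clip(round(x), 0, deviation.shape[1] - 1))
        yi = int(np.clip(round(y), 0, deviation.shape[0] - 1))
        new_depth = d + deviation[yi, xi]                  # move to deviated depth
        out.append(unproject(x, y, new_depth))
    return np.array(out)
```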


What I really want, though, is a way to build a series of control points from the deviation map, almost like the low-detail variant you referred to, but this set of control points would be arbitrary, pulling specific parts of the mesh based on the 'clusters' of pixels in the deviation map.
This would give poorer performance (O(no. verts * no. deformers)), but it would preserve the mesh detail, would require no clever culling heuristics, and would be easily extended to support more involved cloth simulation.
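A naive sketch of such a deformer pass (the anchor/target pairs and the linear falloff radius are assumptions I've made for illustration):

```python
import numpy as np

def apply_deformers(verts, deformers, radius):
    """Naive O(verts * deformers) pull.

    Each deformer is a (anchor, target) pair built from a cluster of deviating
    pixels; vertices within 'radius' of the anchor are pulled toward the target,
    falling off linearly with distance.
    """
    out = verts.copy()
    for anchor, target in deformers:
        d = np.linalg.norm(verts - anchor, axis=1)
        w = np.clip(1.0 - d / radius, 0.0, 1.0)   # linear falloff with distance
        out += w[:, None] * (target - anchor)
    return out
```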

I have attached a couple of before-and-after (1 iteration) images showing how it's working now. This is without the blur, with vertices transformed along the picture-plane normal.

I reckon I could extend the culling to make some clever use of the stencil buffer, but I still think a deformer set would be worth the performance hit, especially since I can do it on the GPU with OpenCL.

(This is buoyed by my latest test, which deformed all vertices (14k) in the mesh based on deformers created for each deviating pixel (10k), which OpenCL processed, for all intents and purposes, instantaneously. The results were completely wrong in every way possible - but it got them wrong quickly! ;))

[attachment=8169:1.PNG][attachment=8170:2.PNG]

In Topic: Why does my Unproject() method insist on Unprojecting towards 0,0,0?

29 January 2012 - 09:01 AM

I see, I misunderstood it as a feature of the makeup of a projection matrix, as opposed to an agreed implementation.
Thank you very much clb.
The way you explained it, with the capabilities being 'built up' along with the dimensions of the matrices, is very clear. I understand the convention now, as opposed to simply trying to remember it!

In Topic: Why does my Unproject() method insist on Unprojecting towards 0,0,0?

29 January 2012 - 08:03 AM

Thank you very much clb! I added the divide-by-w operation and the (un)projection is now working perfectly. I can safely say it would have been a long time before I figured that one out.

As I understand it, w is used to control projection by 'standing in' for z: when the final position is calculated, z is moved into w by the projection matrix, and the divide is performed. So is it that, when the inverse of this matrix is taken, that operation is no longer performed and so needs to be done manually/explicitly, as it is in my method now?
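For reference, a minimal unproject sketch with the explicit divide by w (the NDC depth range and y-flip conventions depend on the API and are assumptions here):

```python
import numpy as np

def unproject(screen_x, screen_y, ndc_depth, inv_view_proj, viewport_w, viewport_h):
    """Unproject a screen-space point back to world space.

    After multiplying by the inverse view-projection matrix the result is
    homogeneous, so the divide by w must be done explicitly.
    """
    ndc = np.array([
        2.0 * screen_x / viewport_w - 1.0,   # screen x -> [-1, 1]
        1.0 - 2.0 * screen_y / viewport_h,   # screen y -> [-1, 1], y flipped
        ndc_depth,                           # depth already in NDC (assumed)
        1.0,
    ])
    world = inv_view_proj @ ndc
    return world[:3] / world[3]              # explicit perspective divide
```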
