Producing a depth field (or volume) for cloth simulation?



#1 sebjf   Members   -  Reputation: 116


Posted 11 April 2012 - 06:08 AM

In my project, I am working on a system which will deform a mesh so that it fits over an arbitrary convex mesh - specifically - fitting clothing items over characters.

To start with, I used the depth/stencil contents to filter the pixels where an intersection took place (since the scope is narrowed to clothing this is simplified: the 'item' mesh will completely occlude the 'hull' mesh). I then iterated over the positions in the 'item' mesh and deformed each vertex so that it was positioned between the camera and the world position retrieved from the depth buffer.
When it worked this was very effective, and even with the deformations done on the CPU it was almost real-time, but it did not deform the mesh in a natural way that preserved its features.
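For illustration, a minimal sketch (Python/NumPy, not the original code) of that first approach: project each vertex into the depth map and, if it lies behind the hull surface, move it just in front of the reconstructed world position. The matrix names, the camera position and the depth convention (larger NDC depth = further from the camera, no y-flip) are all assumptions:

    import numpy as np

    def clamp_to_depth(vertices, view_proj, inv_view_proj, camera_pos, depth_buffer, eps=1e-3):
        """Push vertices that lie behind the depth-buffer surface back out in front of it."""
        h, w = depth_buffer.shape
        out = vertices.copy()
        for i, v in enumerate(vertices):
            clip = view_proj @ np.append(v, 1.0)          # project to clip space
            ndc = clip[:3] / clip[3]                      # normalised device coordinates
            x = int((ndc[0] * 0.5 + 0.5) * (w - 1))
            y = int((ndc[1] * 0.5 + 0.5) * (h - 1))
            if not (0 <= x < w and 0 <= y < h):
                continue
            scene_depth = depth_buffer[y, x]              # hull depth at this pixel
            if ndc[2] > scene_depth:                      # vertex is behind the hull surface
                hit = inv_view_proj @ np.array([ndc[0], ndc[1], scene_depth, 1.0])
                hit = hit[:3] / hit[3]                    # hull world position from the depth map
                to_cam = camera_pos - hit
                out[i] = hit + eps * to_cam / np.linalg.norm(to_cam)   # nudge toward the camera
        return out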

My preferred idea was to filter the depth field to create a set of 'magnetic deformers' which could then be applied to the mesh (per vertex, with weight based on Euclidean distance*); deforming the mesh on the GPU (OpenCL) would, I think, allow me to have a reasonable number of deformers.
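A hedged sketch of what I mean by distance-weighted deformers (Python/NumPy; the linear falloff and the radius are placeholder choices): each deformer carries a position and a displacement, and every vertex accumulates the displacements weighted by Euclidean distance.

    import numpy as np

    def apply_deformers(vertices, deformer_pos, deformer_disp, radius=0.1):
        """vertices: (v, 3); deformer_pos and deformer_disp: (d, 3)."""
        deformed = vertices.copy()
        for i, v in enumerate(vertices):
            dist = np.linalg.norm(deformer_pos - v, axis=1)   # distance to every deformer
            w = np.maximum(0.0, 1.0 - dist / radius)          # linear falloff, zero outside the radius
            total = w.sum()
            if total > 0.0:
                deformed[i] += (w[:, None] * deformer_disp).sum(axis=0) / total
        return deformed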

The reason I liked the depth buffer for this is that the hull could have arbitrary complexity with (practically) no impact on performance, and it also allowed me to use objects whose shaders did 'anything' with my system**. I have spent days trying to cajole it into doing what I want, though, and am realising that I will probably spend more time (programmer time plus processing in the final system) trying to create a field in a suitable space than I would spend creating one specifically for this application.


Cloth simulation systems seem a good resource to base the collision detection on, since at this point their purpose is identical and they need to be fast; but everything I read seems to focus on realistic real-time simulation of the cloth, whereas I am only interested in the collision detection part.

Does anyone know of a good (fast) cloth simulation system that doesn't use 'geometric primitives' for its collision detection? I have read that some cloth simulation systems use depth fields and this seems like it would produce the best results when deforming against a mesh such as a character.

What about volumes, such as voxel volumes? This I think would be ideal if it could be done quickly, but I have not read much about creating volumes from polygonal meshes, and nothing about the performance of testing points against these volumes.


*The best implementation from my POV would allow this to be done in real-time; since this is about fitting rather than cloth simulation, I think I could get satisfactory performance by operating without a fully constrained cloth sim - it's more important that the features of the mesh are preserved.

**This is done with user-generated content in mind, so (as if it weren't hard enough) no significant assumptions about the mesh can be made - if they could be, this wouldn't be needed!


#2 spek   Prime Members   -  Reputation: 997


Posted 12 April 2012 - 04:15 AM

Just thinking out loud.
If you have a screen texture that contains depth and (world-space) normals, you could certainly use that for collision detection and response. If a cloth particle is intersecting something (its depth is greater than the stored depth), just push it back a certain distance along the pixel normal. If needed, you could also render the world-space 3D positions into a texture, or recalculate them from the depth value.
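Roughly something like this (a Python/NumPy sketch only; the buffers, matrices and depth convention are placeholders, and the real thing would of course run on the GPU):

    import numpy as np

    def push_out(particles, view_proj, depth_buffer, normal_buffer, margin=0.01):
        """Push particles that fall behind the rendered surface back out along the stored normal."""
        h, w = depth_buffer.shape
        out = particles.copy()
        for i, p in enumerate(particles):
            clip = view_proj @ np.append(p, 1.0)
            ndc = clip[:3] / clip[3]
            x = int((ndc[0] * 0.5 + 0.5) * (w - 1))
            y = int((ndc[1] * 0.5 + 0.5) * (h - 1))
            if 0 <= x < w and 0 <= y < h and ndc[2] > depth_buffer[y, x]:
                # Behind the surface seen at this pixel: push out along its world-space normal.
                out[i] = p + normal_buffer[y, x] * margin
        return out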

It's tricky, though. If your cloth particles move fast or you run too few iterations, they skip pixels, which makes it hard to trace back the point where they first penetrated the surface. You should run enough iterations, and only collide with very nearby pixels to prevent the cloth colliding with objects in the foreground (see below). Getting a stable simulation is hard.

And of course, a depth map is only a very limited representation of the real scene. Imagine a table with a sheet on it: you can't do proper physics for the back side of the sheet, as you don't know the depth/normals of the rear side (unless you render a second texture that contains back faces). Whenever a cloth particle gets occluded or is just not in sight, it should freeze, or at least disable collision detection, until you see it again. That might cause some weird pop-in artefacts.

If this is a big problem, maybe rendering your cloth-wearing characters into a 3D volume texture can fix it. This is expensive though (at least if you have many characters and/or want high accuracy).



Maybe another idea is a combined solution. For each piece of cloth, make a low-detail variant that will be simulated by the CPU; possibly your physics library already has ready-to-go functions for that. Then use those CPU-calculated points as a skeleton/frame for the detailed cloth particles that "hang" between them. Those particles can be calculated on the GPU, as usual. If you have some margin, they may not even need collision detection; otherwise you can try the depth-map method. The CPU-calculated cloth particles will keep the thing together in case of errors, or when the cloth is rendered out of sight.
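As a toy sketch of that idea (Python/NumPy; all sizes and names are illustrative, and it assumes at least a 2x2 coarse grid), the fine particles could simply be re-based by bilinear interpolation between the four surrounding CPU control points:

    import numpy as np

    def interpolate_fine(coarse, fine_res):
        """coarse: (cy, cx, 3) control-point positions; returns a (fy, fx, 3) fine grid."""
        cy, cx, _ = coarse.shape
        fy, fx = fine_res
        fine = np.zeros((fy, fx, 3))
        for j in range(fy):
            for i in range(fx):
                # Fractional coordinates of this fine particle within the coarse grid.
                u = i / (fx - 1) * (cx - 1)
                v = j / (fy - 1) * (cy - 1)
                i0, j0 = int(u), int(v)
                i1, j1 = min(i0 + 1, cx - 1), min(j0 + 1, cy - 1)
                fu, fv = u - i0, v - j0
                fine[j, i] = ((1 - fu) * (1 - fv) * coarse[j0, i0] +
                              fu * (1 - fv) * coarse[j0, i1] +
                              (1 - fu) * fv * coarse[j1, i0] +
                              fu * fv * coarse[j1, i1])
        return fine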

Not sure if these tips are good in practice - I'm not a real expert on cloth either. But hopefully it helps!
Rick

#3 sebjf   Members   -  Reputation: 116


Posted 12 April 2012 - 05:22 AM

Hi Spek, thinking out loud is what I am after!

For this part of the project real-time cloth animation is in the back of my mind, but at the moment I am focusing on what I originally looked at as a subset of it. This system has user-generated content in mind, so it is designed to fit meshes together without making any significant assumptions about the target mesh. The purpose is not to run a continuous simulation but merely to deform a mesh once (to start with) so that it fits a character it was not designed for.

I have read about image-space collision detection for actual real-time cloth simulation (e.g. http://ecet.ecs.ru.a...s/siiia/315.pdf), and generally the implementations use multiple cameras - as you say, rendering the same object from multiple sides to account for the limited information in a single depth map.

My ideal implementation would without a doubt be comparing the surface to a 3D volume; this is the best real-world approximation, and I like that it would be a 'one stop' test for the complete mesh and would avoid needing to cull false positives. I haven't yet seen anything, though, which suggests the performance could be comparable to the image-space implementation.


The biggest problems I have with my implementation so far really come down to two things:

1. Loss of fine detail on deformations
2. Culling heuristics (that is, deciding which points in a 3D space are affected using only a 2D expression of this space)

(1) is what I am experimenting with now. My plan is to take the 'reference' depth (the depth map of the item) and the 'deviations' (the difference between this map and the map with the character drawn).
Each point in the target mesh is projected into screen space, then its position is changed to the one retrieved in world space from the combination of those depth maps. I can then alter how each point is transformed by filtering the 'deviation map' (e.g. by applying a Gaussian blur to remove the small high-frequency changes in depth, and thus preserve the details of the mesh while still transforming the section as a whole outside the character mesh).
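To make that concrete, here is a rough sketch (Python/NumPy, not the actual shader/CPU code; the depth maps, the view direction and the depth-to-world scale, which assumes linear depth, are placeholders):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def deform_by_deviation(vertices, view_proj, view_dir,
                            item_depth, combined_depth, depth_scale, sigma=2.0):
        """Pull vertices toward the camera by the blurred deviation at the pixel they project to."""
        deviation = np.maximum(0.0, item_depth - combined_depth)   # where the character pokes through the item
        deviation = gaussian_filter(deviation, sigma)              # suppress small high-frequency changes
        h, w = deviation.shape
        out = vertices.copy()
        for i, v in enumerate(vertices):
            clip = view_proj @ np.append(v, 1.0)
            ndc = clip[:3] / clip[3]
            x = int((ndc[0] * 0.5 + 0.5) * (w - 1))
            y = int((ndc[1] * 0.5 + 0.5) * (h - 1))
            if 0 <= x < w and 0 <= y < h and deviation[y, x] > 0.0:
                # Move the vertex along the (negative) view direction by the deviation,
                # converted from depth-buffer units to world units.
                out[i] = v - view_dir * deviation[y, x] * depth_scale
        return out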

This preserves the performance characteristics (O(number of vertices in target mesh)) and should preserve a satisfactory level of detail for cloth.


What I really want, though, is a way to build a set of control points from the deviation map, almost like the low-detail variant you referred to, but this set of control points would be arbitrary, pulling specific parts of the mesh based on the 'clusters' of pixels in the deviation map.
This would give poorer performance (O(no. verts * no. deformers)), but it would preserve the mesh detail, would require no clever culling heuristics, and would be easily extended to support more involved cloth simulation.
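One possible way to condense the deviation map into such a set of control points (a sketch only, not a settled design: it bins deviating pixels into coarse cells and emits one deformer per non-empty cell; the real clustering could be anything):

    import numpy as np

    def deviation_to_deformers(deviation, world_pos, cell=16, threshold=1e-4):
        """deviation: (h, w) depth differences; world_pos: (h, w, 3) world positions
        recovered from the depth buffer. Returns (positions, strengths)."""
        h, w = deviation.shape
        positions, strengths = [], []
        for y0 in range(0, h, cell):
            for x0 in range(0, w, cell):
                block = deviation[y0:y0 + cell, x0:x0 + cell]
                mask = block > threshold
                if mask.any():
                    pts = world_pos[y0:y0 + cell, x0:x0 + cell][mask]
                    positions.append(pts.mean(axis=0))     # deformer position = mean of its pixels
                    strengths.append(block[mask].mean())    # deformer strength = mean deviation
        return np.array(positions), np.array(strengths)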

I have attached a couple of before and after (1 iteration) images showing how it's working now. This is without the blur, with vertices transformed using the picture-plane normal.

I reckon I could extend the culling to make some clever use of the stencil buffer, but I still think a deformer set would be worth the performance hit, especially when I can do it on the GPU with OpenCL.

(This is buoyed by my latest test, which deformed all vertices (14k) in the mesh using deformers created for each deviating pixel (10k), which OpenCL processed, for all intents and purposes, instantaneously - the results were Completely Wrong in every way possible, but it got them wrong Quick! ;) )

Attached Thumbnails

  • 1.PNG
  • 2.PNG




