
Producing a depth field (or volume) for cloth simulation?

In my project, I am working on a system that deforms a mesh so that it fits over an arbitrary convex mesh - specifically, fitting clothing items over characters.

To start with, I used the depth/stencil contents to filter pixels where an intersection took place (since the scope is narrowed down to clothing, this is simplified because the 'item' mesh will completely occlude the 'hull' mesh), then iterated over the positions in the 'item' mesh and deformed each vertex so that it was positioned between the camera and the world position retrieved from the depth buffer.
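A minimal sketch of that clamp step (the function and parameter names are hypothetical, and it assumes the camera-to-hull distance has already been sampled from the depth buffer at the vertex's screen pixel):

```python
def clamp_vertex_to_depth(vertex, camera, hull_depth, margin=0.1):
    """Pull a vertex so it sits between the camera and the hull surface.

    hull_depth is the camera-to-hull distance sampled from the depth
    buffer at this vertex's screen pixel (a hypothetical lookup).
    """
    ray = [v - c for v, c in zip(vertex, camera)]
    dist = sum(x * x for x in ray) ** 0.5
    if dist <= hull_depth:
        return list(vertex)  # already in front of the hull
    unit = [x / dist for x in ray]
    new_dist = hull_depth - margin  # just in front of the surface
    return [c + u * new_dist for c, u in zip(camera, unit)]
```

Per vertex this is constant work, which matches the appeal of the approach: cost scales with the item mesh, not the hull.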
When it worked this was very effective, and even with deformations on the CPU it was almost real-time, but it did not allow for deforming the mesh in a natural way that preserved its features.

My preferred idea was to filter the depth field to create a set of 'magnetic deformers' which could then be applied to the mesh (per vertex, with weight based on Euclidean distance*); deforming the mesh on the GPU (OpenCL) would, I think, allow a reasonable number of deformers.
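One way that per-vertex weighting could look - the Gaussian falloff here is an assumption, since the post doesn't fix a weighting scheme:

```python
import math

def apply_deformers(vertex, deformers, sigma=0.5):
    """Displace one vertex by a set of 'magnetic deformers'.

    deformers is a list of (position, displacement) pairs; each one
    pulls the vertex with a Gaussian falloff on Euclidean distance.
    (The falloff choice is illustrative, not from the post.)
    """
    moved = list(vertex)
    for pos, disp in deformers:
        d2 = sum((v - p) ** 2 for v, p in zip(vertex, pos))
        weight = math.exp(-d2 / (2.0 * sigma * sigma))
        for i in range(3):
            moved[i] += weight * disp[i]
    return moved
```

Each vertex visits every deformer, which is the O(no. verts * no. deformers) cost mentioned later in the thread - the kind of embarrassingly parallel loop that maps well onto an OpenCL kernel.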

The reason I liked the depth buffer for this is that the hull could have arbitrary complexity with (practically) no impact on performance, and it also allowed me to use objects whose shaders did 'anything' with my system**. However, I have spent days trying to cajole it into doing what I want, and am realising that I will probably spend more time (programmer time plus processing in the final system) trying to create a field in a suitable space than I would creating one purpose-built for this application.

Cloth simulation systems seem a good resource for basing the collision detection on, since at this point their purpose is identical and they need to be fast, but everything I read seems to focus on realistic real-time simulation of the cloth; [i]I am only interested in the collision detection part.[/i]

[b]Does anyone know of a good (fast) cloth simulation system that doesn't use 'geometric primitives' for its collision detection?[/b] I have read that some cloth simulation systems use depth fields and this seems like it would produce the best results when deforming against a mesh such as a character.

[b]What about volumes such as voxel volumes?[/b] This, I think, would be ideal if it could be done quickly, but I have not read much about creating volumes from polygonal meshes, and nothing about the performance of testing points against these volumes.
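For what it's worth, once an occupancy grid exists, the point-test side is cheap - it's a single array lookup. A sketch (how the grid gets filled from the mesh, e.g. by rasterising triangles into it, is deliberately left out):

```python
def point_occupied(point, grid, origin, cell_size):
    """Test a point against a voxel occupancy grid: one O(1) lookup.

    grid is a nested list indexed [x][y][z] of booleans marking voxels
    covered by the hull mesh; origin is the grid's world-space corner.
    (Names and layout are illustrative assumptions.)
    """
    idx = [int((p - o) // cell_size) for p, o in zip(point, origin)]
    dims = (len(grid), len(grid[0]), len(grid[0][0]))
    if any(not (0 <= i < n) for i, n in zip(idx, dims)):
        return False  # outside the volume entirely
    return grid[idx[0]][idx[1]][idx[2]]
```

So the open performance question is really the voxelisation step, not the queries.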

*The best implementation from my POV would allow this to be done in real-time; since this is about fitting rather than cloth simulation, I think I could get satisfactory performance without a fully constrained cloth sim - it's more important that the features of the mesh are preserved.

**This is done with user-generated content in mind, so as if it weren't hard enough, no significant assumptions about the mesh can be made (if they could, this wouldn't be needed!)

Just thinking out loud.
If you have a screen texture that contains depth and (world) normals, you could certainly use that for collision detection and response. If a cloth particle is intersecting something (its depth is bigger than the stored depth), just push it back a certain distance using the pixel normal. If needed, you could also render the world 3D positions into a texture, or recalculate them from the depth value, if that is useful.
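That push-back could be sketched like this, assuming view-space particle positions with z growing away from the camera, and that depth and normal have already been sampled at the particle's screen pixel (hypothetical lookups):

```python
def collide_with_depth_pixel(particle, pixel_depth, pixel_normal, margin=0.02):
    """Depth-buffer collision response for one cloth particle.

    particle is (x, y, z) in view space; pixel_depth and pixel_normal
    come from the screen textures described above.
    """
    penetration = particle[2] - pixel_depth
    if penetration <= 0.0:
        return particle  # still in front of the surface
    # Push the particle back out along the stored per-pixel normal.
    offset = penetration + margin
    return tuple(p + n * offset for p, n in zip(particle, pixel_normal))
```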

Yet it's tricky. If your cloth particles move fast or run few iterations, they skip pixels, which makes it hard to trace back the initial point where they penetrated the surface. You should do sufficient iterations, and only collide with very nearby pixels, to prevent cloth colliding with objects in the foreground (see below). Getting a stable simulation is hard.

And of course, a depth map is only a very limited representation of the real scene. Imagine a table with a sheet on it: you can't do proper physics for the backside of the sheet, as you don't know the depth/normals of the rear side (unless you render a second texture that contains backsides). Whenever a cloth particle gets occluded or is just not in sight, it should freeze, or at least disable collision detection until you see it again. That might cause some weird pop-in artefacts.

If this is a big problem, maybe rendering your cloth-wearing characters into a 3D volume texture can fix it. This is expensive though (at least if you have many characters, and/or if you want high accuracy).

Maybe another idea is to make a combined solution. For each piece of cloth, make a low-detail variant that is simulated on the CPU. Possibly your physics library already has ready-to-go functions for that. Then use those CPU-calculated points as a skeleton/frame for the detailed cloth particles that "hang" between them. These particles can be calculated by the GPU, as usual. If you have some margins, they may not even need collision detection; otherwise you can try the depth-map method. The CPU-calculated cloth particles will keep the thing together in case of errors, or when rendered out of sight.

Not sure if these tips are good in practice; I'm not a real expert on cloth either. But hopefully it helps!

Hi Spek, thinking out loud is what I am after :)

For this part of the project, real-time cloth animation is in the back of my mind, but at the moment I am focusing on what I was originally looking at as a subset of it. This system has user-generated content in mind, so it's designed to fit meshes together but does not allow for any significant assumptions about the target mesh. The purpose is not to run a continuous simulation but merely to deform a mesh once (to start with) so that it fits a character it was not designed for.

I have read about image-space-based collision detection for actual real-time cloth simulation (e.g. [url="http://ecet.ecs.ru.acad.bg/cst04/docs/siiia/315.pdf"]http://ecet.ecs.ru.acad.bg/cst04/docs/siiia/315.pdf[/url]) and generally the implementations use multiple cameras, as you say, rendering the same object from multiple sides to account for the limited information in a single depth map.

My ideal implementation would, without a doubt, compare the surface to a 3D volume; this is the best real-world approximation, and I like that it would be a 'one-stop' representation of the complete mesh and would avoid needing to cull false positives. I haven't yet seen anything which suggests the performance could be comparable to the image-space implementation.

The biggest problems I have with my implementation so far really come down to two things:

1. Loss of fine detail on deformations
2. Culling heuristics (that is, deciding which points in a 3D space are affected using only a 2D expression of this space)

(1) is what I am experimenting with now. My plan is to take the 'reference' depth (the depth map of the item) and the 'deviations' (the difference between this map and the map with the character drawn).
Each point in the target mesh is projected into screen space, then its position is changed to the one retrieved in world space from the combination of those depth maps. I can then alter how each point is transformed by filtering the 'deviation map' (e.g. by applying a Gaussian blur to filter out the small high-frequency changes in depth, preserving the details of the mesh while still transforming the section as a whole to lie outside the character mesh).

This preserves the performance characteristics (O(number of vertices in target mesh)) and should preserve a satisfactory level of detail for cloth.
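The filtering step above might look like this separable blur over the deviation map (the kernel weights and clamp-at-edges border handling are assumptions):

```python
def blur_row(row, kernel=(0.25, 0.5, 0.25)):
    """One 1-D pass of a small Gaussian kernel over a row of the
    deviation map; running it horizontally and then vertically over
    the whole map gives the 2-D blur described above."""
    r = len(kernel) // 2
    out = []
    for i in range(len(row)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = min(max(i + k - r, 0), len(row) - 1)  # clamp at edges
            acc += w * row[j]
        out.append(acc)
    return out
```

A single-pixel spike in the deviations gets spread over its neighbours, which is exactly the suppression of high-frequency depth changes being aimed for.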

What I really want, though, is a way to build a series of control points from the deviation map, almost like the low-detail variant you referred to, but this set of control points would be arbitrary, pulling specific parts of the mesh based on the 'clusters' of pixels in the deviation map.
This would give poorer performance (O(no. verts * no. deformers)), but it would preserve the mesh detail, would require no clever culling heuristics, and would be easily extended to support more involved cloth simulation.

I have attached a couple of before-and-after (1 iteration) images showing how it's working now. This is without the blur, with vertices transformed along the picture-plane normal.

I reckon I could extend the culling to make some clever use of the stencil buffer, but I still think a deformer set would be worth the performance hit, especially since I can do it on the GPU with OpenCL.

(This is buoyed by my latest test, which deformed all vertices (14k) in the mesh based on deformers created for each deviating pixel (10k), which OpenCL processed, for all intents and purposes, instantaneously. The results were [i]Completely Wrong[/i] in every way possible - but it got them wrong Quick! ;))


