# Dynamic meshes (?)


## Recommended Posts

[Before I say anything, let me preface it with the fact that I'm not up on all the current 3D techniques and terminology, so feel free to correct me. I need the help.]

Let's say I have a "mesh" (a collection of triangles) that I want to control via some "control points". What I want to happen is that when I move the control points, the mesh adjusts itself accordingly (as illustrated here). Ultimately, I'd like to build a small model of a fish and change its movement and animation patterns via some control points (this might be considered a "bone structure"?). I want the entire model to bend and flex when the control points move.

My question is: is this technique something I can do in realtime? Is this how other games work, or is it something reserved strictly for Hollywood movie productions with render farms that do the work? If it is something I can do in realtime, any ideas on how to get meshes to "figure themselves out"? I'm talking about the math behind moving control points or bones, and how that affects all the other vertices on the model. Any books or articles on the subject?

What I do not want is a model built from smaller static parts, like a person with arms, legs, torso, etc. The model is a fish, so it needs to be really fluid. Pre-made parts that simply rotate or stretch in place will not work for what I want. I need a way to animate, stretch, bend, and flex the whole model or major portions thereof, so I imagine that display lists could not be used; I would have to calculate the vertices of the model on every frame. I'm really struggling to describe this properly, so I hope this gets across okay. Thanks for your input.

##### Share on other sites
This can absolutely be done in real time. Google or look for references on 'skeletal animation' and you'll find all sorts of info. It's certainly manageable in software. I'm not really up to speed on GPU stuff, but I know this can be done on graphics cards as well.

The skeleton for a fish would be fairly simple, I would think. You would want a fair number of bones for the spine, and then a few for the fins and perhaps the jaw.

I recall that in the movie 'Deep Blue Sea' they animated the sharks by having them follow splines. The shark moved along the spline, and its body also followed the contour of the spline, so the motion was very natural.
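To make the spline-following idea concrete, here is a minimal sketch of evaluating a Catmull-Rom spline segment; the body of the fish could be positioned by sampling points at offsets along the curve. Catmull-Rom is my illustrative choice here (the movie's actual technique isn't specified), and all names are assumptions:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Evaluate a Catmull-Rom spline segment at parameter t in [0, 1],
// given four consecutive control points p0..p3. The curve passes
// through p1 (at t = 0) and p2 (at t = 1).
Vec3 catmullRom(const Vec3& p0, const Vec3& p1,
                const Vec3& p2, const Vec3& p3, float t)
{
    float t2 = t * t, t3 = t2 * t;
    auto blend = [&](float a, float b, float c, float d) {
        return 0.5f * ((2.0f * b) +
                       (-a + c) * t +
                       (2.0f * a - 5.0f * b + 4.0f * c - d) * t2 +
                       (-a + 3.0f * b - 3.0f * c + d) * t3);
    };
    return { blend(p0.x, p1.x, p2.x, p3.x),
             blend(p0.y, p1.y, p2.y, p3.y),
             blend(p0.z, p1.z, p2.z, p3.z) };
}
```

Sampling the spine at decreasing parameter values behind the head would make the body trail along the curve's contour, much like the sharks described above.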

If you plan to have multiple fish, another topic you might be interested in is flocking. There is coverage of that subject in GPG1, as well as 'AI for Game Developers'. Skeletal animation isn't exactly trivial, and may take some work. I know the topic is covered (with code) in '3D Game Engine Programming' (Zerbst). It's very common in modern games and is well documented.

I seem to remember a post on a related topic a few weeks back. Was that you? I remember some mention of procedural generation of fish - perhaps simulated evolution, genetic algorithms, stuff like that. Anyway, it sounded interesting - perhaps you could tell us more about the project?

##### Share on other sites
One way to do this would be to define vertices in terms of their control points - to say that the position of vertex X is (some fraction) * (position of point 1) + (some fraction) * (position of point 2) + (some fraction) * (position of point 3), etc. For a vertex that was midway between two control points, it could be 0.5 * point1 + 0.5 * point2 (which is the same as (point1+point2)/2, the average of the two points). Then you could move the control points and the vertices would move to account for it.

The problem is that building that information - selecting control points and calculating weightings - is not something that can be done automatically, because there are an infinite number of ways to do it. You can make a 'best guess' by having control points influence vertices that are near them (and the nearer they are, the greater the weighting) but you'll probably need an animator to go through and tweak that to get a good-looking result.
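The 'best guess' described above might be sketched like this; the inverse-distance falloff and all names are my illustrative assumptions, not a standard:

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

float dist(const Vec3& a, const Vec3& b)
{
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Guess blend weights for one vertex: nearer control points get
// larger weights (inverse-distance falloff), then normalize so the
// weights sum to 1. An animator would typically tweak the result.
std::vector<float> guessWeights(const Vec3& vertex,
                                const std::vector<Vec3>& controlPoints)
{
    std::vector<float> weights;
    float total = 0.0f;
    for (const Vec3& cp : controlPoints) {
        float w = 1.0f / (dist(vertex, cp) + 0.001f); // epsilon avoids div by zero
        weights.push_back(w);
        total += w;
    }
    for (float& w : weights)
        w /= total;
    return weights;
}
```

For a vertex midway between two control points this yields roughly 0.5 / 0.5, matching the averaging example above.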

It's absolutely something that can be done in realtime, too - though you'll need a vertex shader for it.

##### Share on other sites
Quote:
 If you plan to have multiple fish, another topic you might be interested in is flocking.

You bet. I'm really looking forward to it. I've played with pathfinding in previous projects and I already have AI Game Programming Wisdom (1). I've dabbled in flocking and look forward to getting into something heavy.

Quote:
 I seem to remember a post on a related topic a few weeks back. Was that you? I remember some mention of procedural generation of fish - perhaps simulated evolution, genetic algorithms, stuff like that. Anyway, it sounded interesting - perhaps you could tell us more about the project?

Yup. My goal is to create a spiritual sequel to a nice little Maxis game called El-Fish. It may not make it to full-fledged game status, but something to learn on would be good. I imagine it would just be an AI/animation playground for me. My basic goals with the project are thus:

- Create a "fish" that is dynamically created via a genetic code. The code will control the shape, size, textures, colors, and other physical features of the fish. It will also control behaviours as they relate to AI. This may include flocking preferences (or lack thereof), pathfinding or more likely "ambling" or "strolling", grazing locations (surface? bottom?), movement behaviours (fast? slow? darting? turning?), and other things.

- Create a semi-realistic genetic model where fish can "breed" and create new fish by splicing their currently existing gene code, much like real organisms. Also create mutations.

- Create a tank with objects to swim in and around.

- Fish-Cam! (just imagine how cool it would be to swim as a fish with a fish-eye lens and everything!)
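The splicing step in the second goal could start out as simply as this sketch: uniform crossover with per-gene mutation. The byte-per-gene representation and all names are my assumptions:

```cpp
#include <cstdlib>
#include <vector>

// Each gene is one byte; a genome is a fixed-length vector of genes
// controlling shape, color, behaviour parameters, and so on.
using Genome = std::vector<unsigned char>;

// Uniform crossover: each gene is copied from one parent at random,
// with a small chance of mutating to a random value instead.
Genome breed(const Genome& mom, const Genome& dad, float mutationRate)
{
    Genome child(mom.size());
    for (size_t i = 0; i < mom.size(); ++i) {
        if ((float)std::rand() / RAND_MAX < mutationRate)
            child[i] = (unsigned char)(std::rand() % 256); // mutation
        else
            child[i] = (std::rand() % 2) ? mom[i] : dad[i]; // splice
    }
    return child;
}
```

A real implementation would want a better random source and a more structured genome, but the splice-plus-mutate shape is the core of the idea.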

Quote:
 One way to do this would be to define vertices in terms of their control points - to say that the position of vertex X is (some fraction) * (position of point 1) + (some fraction) * (position of point 2) + (some fraction) * (position of point 3), etc. For a vertex that was midway between two control points, it could be 0.5 * point1 + 0.5 * point2 (which is the same as (point1+point2)/2, the average of the two points). Then you could move the control points and the vertices would move to account for it.

That's what i imagined as well. That's going to take a lot of work! I'll basically have to plot each vertex by hand and make a map of the whole fish.

Quote:
 It's absolutely something that can be done in realtime, too - though you'll need a vertex shader for it.

Can you explain? Please excuse my total lack of knowledge on the subject ;-)

Thanks again for the help. I appreciate it. Looking forward to getting started with a trip to the aquarium in the next few weeks.

##### Share on other sites
Quote:
 It's absolutely something that can be done in realtime, too - though you'll need a vertex shader for it.
superpig knows a lot more about rendering than I do, so I'll just pose the question: would this really have to be done on the card? It seems that for a relatively small number of fish, the skeletal animation could be done in software, no?

Anyway, the project sounds fascinating (if ambitious!). So here's another idea for you to consider: creating your meshes as subdivision surfaces. With a subdivision surface, you can take a rough, low-poly mesh and 'smooth it out' to an arbitrary level of detail. I think it would work very well for fish.

This would allow you to make the 'genetic' adjustments using a simple, low-poly mesh. You'd only have a few vertices to worry about - move a few here and there to make the body fatter, longer, stretch out a fin, or whatever. Then, the subdivision process would take that coarse mesh and convert it into a smooth, high-poly mesh at whatever level of detail you choose.

I haven't done it, but it also seems that you could distribute the bone weights from the coarse mesh over the subdivision surface in the same way you distribute other properties, such as color and texture coordinates.

Anyway, there is quite a lot involved here - skeletal animation and subdivision surfaces are both fairly complex. I don't have any skeletal animation code lying around, but I do have a subdivision surface class somewhere. So let me know if you want any further info on that.

##### Share on other sites
Quote:
Original post by jyk
Anyway, there is quite a lot involved here - skeletal animation and subdivision surfaces are both fairly complex. I don't have any skeletal animation code lying around, but I do have a subdivision surface class somewhere. So let me know if you want any further info on that.

I do, yes. I have a lot to learn on the subject (and really, that's why i'm doing it), but i'm looking more for conceptual how-to's than actual code at this point. If i need code, i'll ask later. The mechanics of how bones and meshes and all those things work is appreciated though.

##### Share on other sites
I'd be very careful about subdivision surfaces. If you try using them in realtime - adjusting the subdivision based on distance to the camera, curvature, etc - then you're talking about changing mesh topology on a frame-to-frame basis. That basically means you'll have to regenerate your vertices on the CPU into a dynamic VB then send them to the card - if you're planning on having a relatively large number of fish, that's going to be quite a lot of data that (a) the CPU has to generate, and (b) the AGP bus has to transfer to the card. So I wouldn't recommend it - if you can come up with a system whereby vertex positions are modified but the mesh topology stays the same, you can have a single static VB that sits on the card all the time and is modified as you go along using a vertex shader.

What I'm proposing is this: You have X control points (where X is limited to around 90), numbered consecutively from 0 to (X-1). Your vertex is defined as a set of control point IDs and blend weights, like so:

```cpp
struct fishVertex
{
    unsigned short controlPointID[4];
    float blendWeight[4];
};
```

So that's 4 control points per vertex (it could be more or less if you want). At render time, you upload the positions of all your control points into vertex shader constant registers, and then the shader uses the control point IDs to index into that, like so:

```
mov a0.x, v0.x
mov r1, c[a0.x]
mul r0, r1, v1.x      ; first control point initializes r0
mov a0.x, v0.y
mov r1, c[a0.x]
mad r0, r1, v1.y, r0
mov a0.x, v0.z
mov r1, c[a0.x]
mad r0, r1, v1.z, r0
mov a0.x, v0.w
mov r1, c[a0.x]
mad r0, r1, v1.w, r0
```

Or if you prefer something higher-level:

```hlsl
float4 position = float4(0, 0, 0, 0);
for (int i = 0; i < 4; ++i)
{
    position += IN.BlendWeights[i] * ControlPointPositions[IN.ControlPointID[i]];
}
```

You can take a similar approach for things like vertex normals and texture coordinates - just give your control points that data, and use the blend weights to scale them as you add them together.
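For reference, here is a CPU-side sketch of the same blend the shader performs, using the fishVertex layout above; the Vec3 type and function names are mine:

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

struct FishVertex
{
    unsigned short controlPointID[4];
    float blendWeight[4];
};

// CPU equivalent of the shader loop: the skinned position is the
// weighted sum of the four referenced control-point positions.
Vec3 skin(const FishVertex& v, const std::vector<Vec3>& controlPoints)
{
    Vec3 p = { 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < 4; ++i) {
        const Vec3& cp = controlPoints[v.controlPointID[i]];
        p.x += cp.x * v.blendWeight[i];
        p.y += cp.y * v.blendWeight[i];
        p.z += cp.z * v.blendWeight[i];
    }
    return p;
}
```

Doing it in software like this answers the earlier question too: for a modest number of fish, a loop over the vertices each frame is perfectly feasible without a shader.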

And yes, you'd need to set up blend weights and suchlike for your fish model - this is the part of the animation process known as 'rigging,' and things like 3DSMax have tools to help you with it.

##### Share on other sites
Quote:
 I'd be very careful about subdivision surfaces. If you try using them in realtime - adjusting the subdivision based on distance to the camera, curvature, etc - then you're talking about changing mesh topology on a frame-to-frame basis. That basically means you'll have to regenerate your vertices on the CPU into a dynamic VB then send them to the card...
Just for the record, I wasn't suggesting modifying the subdivision surfaces in real-time. I was thinking more in terms of creating the initial mesh procedurally (as a pre-processing step) given a certain 'genetic' description represented as a low-poly mesh. From there you could handle the mesh however you wanted, including uploading it to the card and manipulating it via shaders.

In any case, I'm out of my area with the rendering and vertex shader aspects, so I'll bow out of further discussion. Good luck with the project, though!

##### Share on other sites
Quote:
Original post by jyk
Quote:
 I'd be very careful about subdivision surfaces. If you try using them in realtime - adjusting the subdivision based on distance to the camera, curvature, etc - then you're talking about changing mesh topology on a frame-to-frame basis. That basically means you'll have to regenerate your vertices on the CPU into a dynamic VB then send them to the card...
Just for the record, I wasn't suggesting modifying the subdivision surfaces in real-time. I was thinking more in terms of creating the initial mesh procedurally (as a pre-processing step) given a certain 'genetic' description represented as a low-poly mesh. From there you could handle the mesh however you wanted, including uploading it to the card and manipulating it via shaders.

Ahh, I see what you mean. Yeah, that'd be an alternative approach - say you're using bicubic Bezier patches: instead of blend weights and control point IDs you'd just have a u/v coordinate per vertex, and the sixteen control points go in the shader registers. You use the u/v coords to place your vertex within the patch, like regular Bezier tessellation, except that the vertices have already been created and you're now moving them to the right place.

The only problem with that is (I think) you'd have trouble rendering multiple patches. I guess you could maybe set up your vertices as U/V + 'patch index', where you use that index to select a set of control points from the constant registers - then you put ten sets of control points into the registers at once, render ten patches in one call.
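To make the patch idea concrete, here is a sketch of evaluating a bicubic Bezier patch at a per-vertex (u, v) - the same math the shader would do with the control points in constant registers. The names and row-major 4x4 layout are my assumptions:

```cpp
struct Vec3 { float x, y, z; };

// Cubic Bernstein basis functions B0..B3 at parameter t.
static void bernstein(float t, float b[4])
{
    float s = 1.0f - t;
    b[0] = s * s * s;
    b[1] = 3.0f * s * s * t;
    b[2] = 3.0f * s * t * t;
    b[3] = t * t * t;
}

// Evaluate a bicubic Bezier patch (16 control points, row-major 4x4)
// at surface coordinates (u, v) - the per-vertex u/v described above.
Vec3 evalPatch(const Vec3 cp[16], float u, float v)
{
    float bu[4], bv[4];
    bernstein(u, bu);
    bernstein(v, bv);
    Vec3 p = { 0.0f, 0.0f, 0.0f };
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c) {
            float w = bv[r] * bu[c];
            const Vec3& q = cp[r * 4 + c];
            p.x += w * q.x;
            p.y += w * q.y;
            p.z += w * q.z;
        }
    return p;
}
```

With a patch-index per vertex as suggested, the shader version would just offset into the constant registers by index * 16 before doing this sum.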

Hmm... not sure how that stands up to my proposed method - my thing should let you render each fish using one strip per material - while yours uses less data and is easier to set up from an art point of view. *shrug*
