Vetting an Idea with Mesh Morphing

I have a thought on which I would value the community's feedback.

Say you have a tree; for ease of reference, this is just a cylinder. Now, you have an actor swinging an axe. Given a few variables, we know how deeply the axe can "cut" into the tree, the angle of the strikes, and so on. Based on these angles and the depth of the cuts, I want to deform the base model, potentially separating it entirely, and do some dynamic texture swapping to keep up the illusion that you're actually chopping into a tree.

Does anyone see a computational hindrance here, even when considering potentially hundreds of NPCs? Would it be possible to "destroy" the original model and have two models if you, say, chopped a wedge out of the side of the tree, so that the pieces could then be interacted with independently?

I have not done any programming work on this myself yet; I don't want to waste time on something that could prove to be a fool's errand, so I'm asking from a conceptual, theoretical basis. This is the tip of the iceberg, should it prove doable; I just thought it would be an "easy" starting point. I know there have been some examples with water being poured between containers and with other liquids. My understanding is that they work with, more or less, particle systems, whereas this would be with solid models. I've thought about doing this with voxels, and while that's possible, I would prefer to use traditional models. I would think the tradeoff, if what I'm asking about is doable, would be putting more work on the CPU and GPU versus keeping a voxel model, and everything it contains, stored in memory at once.

Thanks!

Well, a model is simply a large list of vertices that you usually store in a GPU vertex buffer, and you can refill this buffer at any time with whatever you want. So yes, you can split models or move certain vertices; in fact, animation does this, and water can also be a large textured plane mesh where you move the vertices.
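To give a rough idea, here is a minimal sketch of refilling a vertex buffer with deformed positions. It assumes an OpenGL-style API with a function loader (e.g. glad or GLEW) already initialised, and a tightly packed position layout; none of this is tied to a particular engine.

```cpp
// Rough sketch: re-uploading modified vertex positions into an existing
// GPU vertex buffer after a cut deforms the mesh.
#include <glad/glad.h>   // assumes a GL loader is set up elsewhere
#include <vector>

struct Vec3 { float x, y, z; };

void uploadDeformedPositions(GLuint vbo, const std::vector<Vec3>& positions)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    // Overwrite the position data in place; the buffer would have been created
    // earlier with GL_DYNAMIC_DRAW since we expect to refill it after every cut.
    glBufferSubData(GL_ARRAY_BUFFER, 0,
                    static_cast<GLsizeiptr>(positions.size() * sizeof(Vec3)),
                    positions.data());
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}
```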

However, this is pretty complicated stuff that takes a lot of experience to do well, and judging by your post I'm guessing you're not that experienced (no offense meant!).

So you might want to pin this idea on your wall for a bit later...

I've worked on real-time constructive solid geometry before. I suppose you could start with Wikipedia:

http://en.wikipedia.org/wiki/Constructive_solid_geometry

The system I worked on was purely graphical: the subtraction was done with instances of a cut model being removed from the base model. You could render thousands of cuts on-screen this way, without modifying vertices. There was a 3D texture for the interior.
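Just to illustrate the bookkeeping side of that idea: each strike only appends a transform for the cut volume, and the base mesh is never edited. This is a rough sketch with made-up names; the actual image-space subtraction (stencil/depth tricks, the interior 3D texture) is omitted.

```cpp
#include <vector>

struct Mat4 { float m[16]; };            // column-major transform

struct CutInstance {
    Mat4 worldTransform;                 // where the wedge-shaped cut volume sits
};

struct CuttableTree {
    // GPU handles for the unmodified cylinder mesh would live here.
    std::vector<CutInstance> cuts;       // grows by one per axe strike

    void addCut(const Mat4& strikeTransform)
    {
        cuts.push_back({ strikeTransform });
        // At draw time, the cut volumes are rendered as instances and
        // subtracted from the base model in image space.
    }
};
```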

The part I haven't done is detecting when the model gets split into separate pieces. I assume you want the tree to fall down once that happens. You may be able to use the GPU to voxelize the CSG'd model and run a connectivity check (a flood fill or spanning-tree traversal) to see if everything's connected, though you would probably have some small limit on how many of those checks you could do per frame. I'm no expert, though; there could be a faster technique I don't know about.
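A flood fill over the voxelized result is one simple way to do that connectivity test. A rough sketch, assuming a dense occupancy grid (the grid layout and sizes are assumptions):

```cpp
#include <cstdint>
#include <queue>
#include <vector>

struct VoxelGrid {
    int nx, ny, nz;
    std::vector<uint8_t> solid;                       // 1 = material present
    int index(int x, int y, int z) const { return (z * ny + y) * nx + x; }
};

// Counts 6-connected solid regions; a result > 1 means the tree has split.
int countConnectedPieces(const VoxelGrid& g)
{
    std::vector<uint8_t> visited(g.solid.size(), 0);
    const int dx[] = { 1,-1, 0, 0, 0, 0 };
    const int dy[] = { 0, 0, 1,-1, 0, 0 };
    const int dz[] = { 0, 0, 0, 0, 1,-1 };
    int pieces = 0;

    for (int z = 0; z < g.nz; ++z)
    for (int y = 0; y < g.ny; ++y)
    for (int x = 0; x < g.nx; ++x) {
        int start = g.index(x, y, z);
        if (!g.solid[start] || visited[start]) continue;

        ++pieces;                                      // found a new piece
        std::queue<int> frontier;
        frontier.push(start);
        visited[start] = 1;

        while (!frontier.empty()) {
            int i = frontier.front(); frontier.pop();
            int cx = i % g.nx, cy = (i / g.nx) % g.ny, cz = i / (g.nx * g.ny);
            for (int d = 0; d < 6; ++d) {
                int px = cx + dx[d], py = cy + dy[d], pz = cz + dz[d];
                if (px < 0 || py < 0 || pz < 0 ||
                    px >= g.nx || py >= g.ny || pz >= g.nz) continue;
                int ni = g.index(px, py, pz);
                if (g.solid[ni] && !visited[ni]) {
                    visited[ni] = 1;
                    frontier.push(ni);
                }
            }
        }
    }
    return pieces;
}
```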

Overall, I think it's possible, but it is complex. Maybe that wikipedia article links to some useful code to get you started. If your game is about chopping stuff, maybe it's worth it. If this is just some background feature in an RPG, I wouldn't bother with it.

Thanks for your responses.

EarthBanana, no offense taken. I'm not experienced with game programming, per se, but I have been a programmer for about seven years and more of an engineer for about three. I understand the concepts, so I tried to ask with that approach in mind. You've given me something to think about, though, so thanks!

Pink Horror, yes, this would be for an RPG. They're so damned attractive. Even if I fail, the mere pursuit would be worth it to me. However, instead of trying to be an exact simulationist or a traditionalist, I want to bridge the two concepts by making a lot of predetermined choices that the game can use to supplement otherwise prohibitively intensive calculations. An example from my reading would be providing material friction coefficients up front rather than trying to calculate them at run time. That's not an intense equation, sure; it's just an example. But the fundamental approach would be to build tiny blocks that work with each other. The result, then, would be a system taking inputs and responding "naturally," insofar as the built-in assumptions are concerned.
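To illustrate what I mean, here is a hypothetical sketch of authoring material data up front and looking it up at run time; the material names, values, and the cut-depth rule are made up for the example, not taken from any real system.

```cpp
#include <string>
#include <unordered_map>

struct Material {
    float frictionCoefficient;   // authored constant, not computed at run time
    float hardness;              // higher = harder to cut into
};

// Predetermined choices supplied by the designer ahead of time.
const std::unordered_map<std::string, Material> kMaterials = {
    { "oak",   { 0.45f, 3.5f } },
    { "pine",  { 0.40f, 1.5f } },
    { "stone", { 0.60f, 8.0f } },
};

// Hypothetical rule: cut depth scales with strike energy, inversely with hardness.
float cutDepth(const std::string& material, float strikeEnergy)
{
    const Material& m = kMaterials.at(material);
    return strikeEnergy / (m.hardness * 10.0f);
}
```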

I was hoping that by sticking to solid models rather than voxels I could cheat and calculate certain things as I need to rather than trying to keep all the voxels in memory at all times. But, the more I read, the better they seem to be in certain situations.

Here's an article by Valve on the zombie wounds in Left 4 Dead 2. They project textures and alpha-test fragments, among other things, to give the illusion of depth:

http://valvesoftware.com/publications/2010/gdc2010_vlachos_l4d2wounds.pdf (Warning: strong imagery)

There's also a famous feature of the Red Faction series of games made by Volition: an engine technology called GeoMod that allows the player to destroy the environment (to some limited extent).

http://web.archive.org/web/20051027104611/http://redfaction.volitionwatch.com/faq/geomodfaq.shtml

Thanks, Kryzon.
