Creating all scenario animations/reactionary situations?

3 comments, last by rAm_y_ 9 years, 5 months ago

By that I mean... well, let's say a character can walk, run, crouch, and go prone - however, we all know this is a simplified, linear representation of real actions. How about we emulate the many real actions a character could possibly take?

This would be linear too, as you would have to code every possible action that could occur - there is no such thing as non-human intelligence in binary, as far as I know. So how much would this task the CPU/GPU if we were to allow for so many different possible animation scenarios?

I know we have physics simulations, but I don't think they apply here, as they are still linear in a sense.


I'm still very much a beginner at games programming, so take this with a big grain of salt, but two things have come up in my reading that could help here.

One is animation blending, which is a way to combine two or more distinct animations to make new animation sequences. For example, you could combine a running animation with a crouch to get a kind of running crouch. I imagine it's not as simple as that, but I'm sure you can find some good tutorials if you look for them.
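(Just to make the idea concrete, here's a toy C++ sketch of blending two poses. All the names and numbers are made up for illustration - real engines interpolate full per-bone quaternion rotations, not single angles.)

```cpp
#include <cstdio>
#include <vector>

// A pose here is just one angle per joint; real engines blend full
// per-bone rotations (quaternions) and translations.
using Pose = std::vector<float>;

// Linearly interpolate two same-sized poses.
// weight = 0 gives 'a', weight = 1 gives 'b'.
Pose BlendPoses(const Pose& a, const Pose& b, float weight)
{
    Pose out(a.size());
    for (size_t i = 0; i < a.size(); ++i)
        out[i] = a[i] * (1.0f - weight) + b[i] * weight;
    return out;
}

int main()
{
    Pose run    = { 45.0f, 10.0f, 70.0f };  // hip, spine, knee (degrees)
    Pose crouch = { 20.0f, 40.0f, 95.0f };

    // Half run, half crouch: a rough "running crouch".
    for (float angle : BlendPoses(run, crouch, 0.5f))
        printf("%.1f\n", angle);
    return 0;
}
```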

The other idea would be inverse kinematics. Basically, your physics engine would work out exactly where your character needs to place their feet to make them move in the direction you want. That would give you the most accurate animations, but I suspect it's costly in CPU/GPU time.
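(For the curious, the core of a simple two-bone IK solve is just the law of cosines. This is a made-up 2D sketch, not how any particular engine does it - real solvers work in 3D and respect joint limits.)

```cpp
#include <cmath>
#include <cstdio>

// Toy 2D two-bone IK (think hip -> knee -> foot): given the two bone
// lengths and the distance to the target, solve the joint angles.
struct IKAngles { float hip; float knee; };

IKAngles SolveTwoBoneIK(float thigh, float shin, float targetDist)
{
    // Clamp so an out-of-reach target just fully extends the leg.
    if (targetDist > thigh + shin) targetDist = thigh + shin;

    // Law of cosines: c^2 = a^2 + b^2 - 2ab*cos(C)
    float cosKnee = (thigh * thigh + shin * shin - targetDist * targetDist)
                  / (2.0f * thigh * shin);
    float cosHip  = (thigh * thigh + targetDist * targetDist - shin * shin)
                  / (2.0f * thigh * targetDist);

    // Guard against float drift outside acos's domain.
    cosKnee = std::fmax(-1.0f, std::fmin(1.0f, cosKnee));
    cosHip  = std::fmax(-1.0f, std::fmin(1.0f, cosHip));
    return { std::acos(cosHip), std::acos(cosKnee) };
}

int main()
{
    IKAngles leg = SolveTwoBoneIK(0.45f, 0.42f, 0.70f);
    printf("hip %.2f rad, knee %.2f rad\n", leg.hip, leg.knee);
    return 0;
}
```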

I don't think this is a programming problem per se, but more of an interface one.

I think the problem is that what you're describing is more akin to games that give you control over the limbs of the character. Surgeon Simulator comes immediately to mind as one of those (if you haven't played it, it's a humorous take on surgery, where you have to control every aspect of the arms, wrists, fingers, etc. and perform surgery. It's damned near impossible). I assume that in addition to GPU/CPU limitations, our interface limitations come into play here. You wouldn't want to play a game with realistic movement options, as that would mean you needed realistic control over those movements - but unlike in the real world, you're left with a controller/keyboard to try to approximate them.

I think things like the Kinect, the Oculus Rift (for head movement, anyhow), Wii controllers, etc. are already pretty adept at interfacing "real" or all possible scenarios in a slightly more limited fashion, and are not far from being able to approximate real movement. At that point, it's just translating the actual movement to the bone rigging, which is much simpler (I mean, I'm new to game development myself, and I don't mean to diminish the probably ridiculously complicated tech that translates human movement into something a program can use. I just mean from THAT point, it's largely trivial).

So, I think the limitations on movement probably have more to do with our ability to actually virtually control something than they do with any programming limitation. They're more a shorthand to make the game playable. We can intuitively "get" moving left-right-up-down/jumping, etc. But we'd break down if we had to literally bend-knee-lift-foot-extend-leg-plant-foot-rotate-shoulder-move-arm etc. to play a game.

There are plenty of games that add randomization to simulate a much more natural running gait or jumping, on top of probably much more complex animation code, but I don't think this is quite what you're talking about.

Anyhow, with those technologies coming along nicely, and considering the difficulty humans have translating keypresses into anything resembling real movement, I think trying to approximate every possible movement would be like looking for the most difficult way possible to solve a problem.

Edit*

Unless you're talking about NPC animation, which renders my entire long post here moot >.<

In that case, yeah, that's a conundrum. I think you could do it without taxing the code too much: put the animations in data files rather than code, and have it randomly pick from there. The sheer amount of work would be daunting, though. I think a combination of a physics engine with a large animation database could go a long way. Tell the NPC to walk across the room using a walking animation, but if he/she trips on an object, let physics take over and combine that with a "flailing" animation where they try to correct themselves. But this still doesn't seem to be what you're talking about.
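(A rough sketch of what that data-driven pick plus physics handoff might look like - every name and clip here is invented for illustration, nothing engine-specific.)

```cpp
#include <cstdio>
#include <cstdlib>
#include <string>
#include <vector>

// Stand-in for a clip list that would really live in a data file,
// so new variations can be added without touching code.
const std::vector<std::string> walkClips = { "walk_a", "walk_b", "walk_limp" };

enum class MotionState { Animated, Ragdoll };

struct Npc
{
    MotionState state = MotionState::Animated;
    std::string clip;
};

void StartWalk(Npc& npc)
{
    // A random pick keeps repeated walks from looking identical.
    npc.clip = walkClips[std::rand() % walkClips.size()];
    std::printf("playing clip: %s\n", npc.clip.c_str());
}

void OnTrip(Npc& npc)
{
    // Hand the body over to the physics engine; layering a hand-made
    // "flailing" clip on top would be the next refinement.
    npc.state = MotionState::Ragdoll;
    std::printf("tripped: physics takes over\n");
}

int main()
{
    std::srand(42); // fixed seed, just for the demo
    Npc npc;
    StartWalk(npc);
    OnTrip(npc);
    return 0;
}
```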

Until the singularity though, I think we're pretty much stuck with linear coding. Well, I sure am anyhow, I don't want to speak for everyone :P

Beginner here <- please take any opinions with a grain of salt

Creating all scenario animations/reactionary situations? ... So how much would this task the CPU/GPU if we were to allow for so many different possible animation scenarios?

The CPU makes sure all the models and textures are loaded, and then does a bunch of math to figure out the matrix values and other numbers needed by the shaders.

The GPU takes the rigged models, runs the rig and textures and other data through the shaders with the frame-specific values, and the result is a (hopefully) beautiful picture.
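(For anyone curious about that "bunch of math" on the CPU side, a common approach is palette skinning: each frame, every bone's animated world transform is multiplied by its inverse bind pose, and the resulting matrix palette is what the skinning shader consumes. A minimal sketch, with made-up names and a hand-rolled matrix type standing in for a real math library:)

```cpp
#include <vector>

// A 4x4 matrix in row-major order; real code would use a math library.
struct Mat4 { float m[16]; };

Mat4 Multiply(const Mat4& a, const Mat4& b)
{
    Mat4 r{};
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            for (int k = 0; k < 4; ++k)
                r.m[row * 4 + col] += a.m[row * 4 + k] * b.m[k * 4 + col];
    return r;
}

// Each frame the CPU walks the skeleton and builds one matrix per bone:
// the bone's animated world transform times its inverse bind pose.
// This "matrix palette" is what gets handed to the skinning shader.
std::vector<Mat4> BuildPalette(const std::vector<Mat4>& worldTransforms,
                               const std::vector<Mat4>& inverseBindPoses)
{
    std::vector<Mat4> palette(worldTransforms.size());
    for (size_t i = 0; i < worldTransforms.size(); ++i)
        palette[i] = Multiply(worldTransforms[i], inverseBindPoses[i]);
    return palette;
}

int main()
{
    const Mat4 identity = { { 1,0,0,0,  0,1,0,0,  0,0,1,0,  0,0,0,1 } };
    std::vector<Mat4> palette = BuildPalette({ identity }, { identity });
    return palette.empty() ? 1 : 0;
}
```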

The CPU operates on animation curves. The animation curves are usually created by animators by hand. Sometimes they will start from motion capture data, or even DIY rotoscoping: recording a video and using it as an overlay as they manipulate the models.
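(A minimal sketch of what "operating on animation curves" means in practice: sampling between keyframes. The names are mine, and hand-authored curves usually use Hermite/Bezier interpolation rather than the straight lerp shown here.)

```cpp
#include <cstdio>
#include <vector>

// One keyframe on a curve: a time and a value (say, one joint's
// rotation about one axis, in degrees).
struct Key { float time; float value; };

// Sample the curve at time t by interpolating between the surrounding
// keys. Assumes at least one key, sorted by time.
float EvaluateCurve(const std::vector<Key>& keys, float t)
{
    if (t <= keys.front().time) return keys.front().value;
    if (t >= keys.back().time)  return keys.back().value;

    for (size_t i = 1; i < keys.size(); ++i)
    {
        if (t < keys[i].time)
        {
            float u = (t - keys[i - 1].time) / (keys[i].time - keys[i - 1].time);
            return keys[i - 1].value * (1.0f - u) + keys[i].value * u;
        }
    }
    return keys.back().value;
}

int main()
{
    std::vector<Key> elbow = { { 0.0f, 0.0f }, { 0.5f, 90.0f }, { 1.0f, 45.0f } };
    printf("%.1f\n", EvaluateCurve(elbow, 0.25f)); // prints 45.0
    return 0;
}
```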

While you can sometimes do a bit of work with animation blending - one animation for the arms and upper body, another for the legs and lower body - it often doesn't work well. The same goes for IK: you can use it here and there for specific tasks. Often IK is used in conjunction with an animation to provide fine tuning. The basic animation extends the arm; the IK portion makes it line up perfectly with an object's animation point. The basic animation takes a step; the IK portion slightly raises or lowers the foot for the terrain.
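(Here's a toy version of that foot-on-terrain fine tuning; GetTerrainHeight is a stand-in for whatever height query a real engine provides.)

```cpp
#include <cstdio>

struct Vec3 { float x, y, z; };

// Stand-in stub for the engine's terrain-height query.
float GetTerrainHeight(float x, float z) { return 0.05f * x; }

Vec3 AdjustFootTarget(Vec3 animatedFoot)
{
    // Keep the clip's planned stride, but plant the foot on the actual
    // ground; the IK solver then bends the leg to reach this target.
    animatedFoot.y = GetTerrainHeight(animatedFoot.x, animatedFoot.z);
    return animatedFoot;
}

int main()
{
    Vec3 foot = AdjustFootTarget({ 2.0f, 0.0f, 1.0f });
    printf("foot planted at y = %.2f\n", foot.y);
    return 0;
}
```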

While you might be able to mix some animations --- you can mix a basic walking gait with arms down, with holding a cup, with turning the head, with talking --- there are many you cannot mix: holding a cup steady with the upper body while simultaneously doing a somersault with the lower; a tightly-held upper body with legs running rapidly; throwing a spear or ball while the legs are in a resting idle pose. Many actions involve the full body for proper balance and control, and the animations need to reflect that. You can build state machines and use other model and animation data to control exactly what can be used where, and to transition between animation clips.
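(A bare-bones sketch of such a state machine - real ones also handle blend times and transition conditions, but the whitelist-of-transitions idea is the core. All names here are invented.)

```cpp
#include <cstdio>
#include <set>
#include <string>
#include <utility>

// Transitions are only allowed between clips declared compatible,
// which is how you stop a full-body "throw" from playing on top of
// a pose that can't support it.
struct AnimStateMachine
{
    std::string current = "idle";
    std::set<std::pair<std::string, std::string>> allowed;

    void AllowTransition(const std::string& from, const std::string& to)
    {
        allowed.insert({ from, to });
    }

    bool TryPlay(const std::string& next)
    {
        if (!allowed.count({ current, next }))
        {
            printf("blocked: %s -> %s\n", current.c_str(), next.c_str());
            return false;
        }
        printf("transition: %s -> %s\n", current.c_str(), next.c_str());
        current = next;
        return true;
    }
};

int main()
{
    AnimStateMachine fsm;
    fsm.AllowTransition("idle", "run");
    fsm.AllowTransition("run", "throw"); // throw needs a moving stride first

    fsm.TryPlay("throw"); // blocked from idle
    fsm.TryPlay("run");
    fsm.TryPlay("throw"); // fine now
    return 0;
}
```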

Animators spend a LOT of time making those animations for major games. It is enough to keep the animation team employed for the length of development. For some games that is a handful of people for a few months; for other games it is hundreds of animators over multiple years. If it were something that could be easily solved, most major studios would do that instead and fire their animation departments.

These are all great answers. It's a video game, after all - it's not a test of controlling a real-life situation like the one I was trying to describe; you would end up making it so complex that nobody would play it!

So thanks for that - transitioning from animation to ragdoll physics is what I should be thinking about.

I will look up Surgeon Simulator, sounds fun.

