Do most AAA games use ragdoll animations?


Recently I saw a video from a Naughty Dog engineer where they mix physics and animation to make more realistic animations and let each body joint interact with the environment. There are some basic Unity implementations and they don't look bad. I want to know: is this something that is used in most games? I think the current effects in video games can't simply be implemented with those capsule colliders, character controllers and basic animation graphs.


I think what you may be referring to is IK (Inverse Kinematics). Ragdoll physics usually refers to death or dying simulations, where someone is being thrown around like a “ragdoll”. I think the capsule colliders are still usually implemented along with IK. The colliders are still used to keep a character out of the geometry and do sliding movement, and the IK is used to make it look more real. I'm not really an expert on this, so maybe someone else can fill in the details.

I did not work on AAA games, but I have good experience with powered ragdolls.

It looks like most games do not combine animation with simulation - there is only a hard switch from a living animated skeleton to a ragdoll after death. IK is used a lot, e.g. to adapt foot placement, but that's no simulation.
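(Aside for the OP: the foot placement IK mentioned here is usually just a closed-form two-bone solve, i.e. pure trigonometry rather than simulation. A minimal sketch of the idea, 2D only and with made-up names:)

```cpp
// Closed-form two-bone IK (the usual foot placement solve): given thigh and
// shin lengths and a target for the ankle in hip space, compute hip and knee
// angles. 2D for brevity; real engines do the same in the plane defined by a
// pole vector. No simulation involved - just trigonometry.
#include <algorithm>
#include <cmath>
#include <cstdio>

struct LegAngles { float hip; float knee; }; // radians; knee = interior angle (pi = straight leg)

LegAngles SolveTwoBoneIK(float thigh, float shin, float targetX, float targetY)
{
    float d = std::sqrt(targetX * targetX + targetY * targetY);
    // Clamp the target into the reachable ring so acos stays valid.
    d = std::clamp(d, std::fabs(thigh - shin) + 1e-4f, thigh + shin - 1e-4f);

    // Law of cosines for the knee bend and the hip offset.
    float knee = std::acos((thigh * thigh + shin * shin - d * d) / (2.0f * thigh * shin));
    float hip  = std::atan2(targetY, targetX)
               + std::acos((thigh * thigh + d * d - shin * shin) / (2.0f * thigh * d));

    return { hip, knee };
}

int main()
{
    // Place the ankle 0.7 m forward and 0.6 m below the hip (hip space, meters).
    LegAngles leg = SolveTwoBoneIK(0.45f, 0.45f, 0.7f, -0.6f);
    std::printf("hip %.1f deg, knee %.1f deg\n",
                leg.hip * 57.2958f, leg.knee * 57.2958f);
}
```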
I always wondered why there is no combination of animation and simulation. It's pretty easy to do and can help a lot with realism. For example, if the animation is bad and violates the laws of physics, e.g. when switching animations discontinuously, adding simulation can help a lot. The downside is that it can also remove details from the animation and blur them out.
I did this using custom powered joints which convert animation to acceleration. The Newton physics engine is very good for this because it is accurate. Havok had a lot of functionality for this too, e.g. they supported mapping a low-resolution physics skeleton to a high-resolution one. Bullet turned out to be just too inaccurate, and PhysX also seemed useless because it already struggled to get a dead ragdoll lying on the floor to come to rest without high sleep thresholds.
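To make “convert animation to acceleration” concrete: a powered joint in this style is essentially a PD controller per joint that turns the error between the animated target and the simulated pose into a torque, so physics tracks the animation but can still react to contacts. A minimal, engine-agnostic sketch (single hinge, all gains and names made up for illustration):

```cpp
// Minimal sketch of a "powered joint": a PD controller converts the difference
// between the animated target pose and the simulated pose into a torque, so
// the body follows the animation while still reacting to external forces.
// 1-DOF hinge for brevity; gains and values are purely illustrative.
#include <cmath>
#include <cstdio>

struct PoweredHinge
{
    float angle    = 0.0f;   // simulated joint angle (radians)
    float velocity = 0.0f;   // simulated angular velocity (rad/s)
    float inertia  = 0.05f;  // effective inertia of the driven body (kg*m^2)

    // PD gains: kp pulls toward the animation, kd damps the motion.
    float kp = 60.0f;
    float kd = 8.0f;

    void Step(float targetAngle, float targetVelocity, float externalTorque, float dt)
    {
        // Convert the animation (target angle + velocity) into a torque.
        float torque = kp * (targetAngle - angle)
                     + kd * (targetVelocity - velocity)
                     + externalTorque;              // e.g. gravity or a collision impulse

        // Semi-implicit Euler integration.
        velocity += (torque / inertia) * dt;
        angle    += velocity * dt;
    }
};

int main()
{
    PoweredHinge knee;
    const float dt = 1.0f / 60.0f;

    for (int frame = 0; frame < 120; ++frame)
    {
        // Pretend the animation clip wants the knee to swing sinusoidally.
        float t = frame * dt;
        float animAngle    = 0.8f * std::sin(2.0f * t);
        float animVelocity = 0.8f * 2.0f * std::cos(2.0f * t);

        // An external disturbance (e.g. bumping a wall) halfway through.
        float disturbance = (frame == 60) ? -20.0f : 0.0f;

        knee.Step(animAngle, animVelocity, disturbance, dt);

        if (frame % 20 == 0)
            std::printf("t=%.2f  anim=%.3f  sim=%.3f\n", t, animAngle, knee.angle);
    }
}
```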

That was a decade ago, and I have not checked again since then. PhysX has added a Featherstone solver and they have demonstrated robotics demos, so it should work better now. But I doubt the big engines expose such features easily (it also seems both are moving away from PhysX and working on their own physics implementations). In any case, if you only want to make motion more realistic, the demands on accuracy are not very high compared to my own requirements when working on a self-balancing walking ragdoll. I remember an animator from Ubisoft who demonstrated extreme improvement of cartoon animation with some bouncing, which he implemented simply in MaxScript inside 3DSMax. Unfortunately I can't find the video again, but this is a field with low-hanging fruit waiting to be picked.

moeen k said:
I think the current effects in video games can't simply be implemented with those capsule colliders, character controllers and basic animation graphs.

All those systems can remain untouched. The simulator can just add bouncy movement on top of that. I found the Ubi video again:

This can be really simple and has nothing to do with Euphoria, as it only improves existing motion using physics, but does not generate the motion itself.
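To show how simple the “bouncy movement on top” can be: a damped spring per bone that chases the animated position, with the overshoot added back as an offset, already gives that effect. This is just a toy sketch of the idea, not the Ubisoft implementation:

```cpp
// Toy sketch of adding secondary "bounce" on top of an existing animation:
// each bone gets a damped spring that chases the animated position, and the
// difference is applied as an offset on top of the animated pose.
// Purely illustrative; 1D for brevity.
#include <cstdio>

struct BounceBone
{
    float pos = 0.0f;         // spring state, chases the animated position
    float vel = 0.0f;
    float stiffness = 200.0f; // higher = follows the animation more tightly
    float damping   = 10.0f;  // higher = less wobble

    // Returns the offset to add on top of the animated pose this frame.
    float Update(float animatedPos, float dt)
    {
        float accel = stiffness * (animatedPos - pos) - damping * vel;
        vel += accel * dt;
        pos += vel * dt;
        return pos - animatedPos; // overshoot/lag shows up as bounce
    }
};

int main()
{
    BounceBone tailTip;
    const float dt = 1.0f / 60.0f;

    for (int frame = 0; frame < 90; ++frame)
    {
        // A discontinuous animation: the bone snaps from 0 to 1 at frame 30.
        float animated = (frame < 30) ? 0.0f : 1.0f;
        float offset = tailTip.Update(animated, dt);

        if (frame % 10 == 0)
            std::printf("frame %2d  anim=%.2f  bounce offset=%+.3f\n", frame, animated, offset);
    }
}
```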

Maybe you can post the source from Naughty Dog? I'd be interested. I've heard the awesome animations shown in the TLoU2 reveal were more a result of lots of manual work and tweaking than of simulation, but I may be wrong.

EDIT: Um, no - that's not the video I meant. It seems to be more about animation matching than adding bouncy dynamics, but it's nice anyway.

Gnollrunner said:
The colliders are still used to keep a character out of the geometry and do sliding movement, and the IK is used to make it look more real.

In my experience collisions and even joint limits are not that relevant. I did spend some time on the IK solver so it does not violate joint limits, but then when I started work on the walking ragdoll, it turned out that natural motion just never violates joint limits. There are many styles of walking, but no one tries to bend a knee in the wrong direction : )
It's also possible to disable self collisions because intersections rarely happen for the same reasons.

@JoeJ I guess what I meant was that, from what I've seen, many games still implement sphere, pill or ellipsoid collision against the mesh to do general movement. Otherwise the computation for just walking around would be pretty high. I've never done IK, but I have done sphere/mesh collision. The sphere(s) or pill just keeps your character where it's supposed to be and does the typical sliding along the ground and walls for running and walking. I'm not sure if there are many games with 3D characters that forgo sphere, pill or ellipsoid collision completely. I have read there are some, but I've watched a few Unity demos and the pill is still involved. Same for UE4. They just add the IK to make it look good.
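For reference, the “sliding along the ground and walls” part is usually the classic collide-and-slide step: remove the velocity component pointing into the contact and keep the tangential part. A minimal sketch, assuming the contact normal comes from the engine's capsule sweep (names are made up):

```cpp
// The usual "slide along walls" trick: when the capsule hits a surface, remove
// the velocity component pointing into the surface and keep the tangential
// part. Collision detection itself is omitted; the contact normal is assumed
// to come from the engine's sweep/overlap query.
#include <cstdio>

struct Vec3 { float x, y, z; };

Vec3  operator-(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
Vec3  operator*(Vec3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }
float Dot(Vec3 a, Vec3 b)        { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Project the desired velocity onto the contact plane so the character
// slides instead of stopping dead or penetrating.
Vec3 SlideAlongSurface(Vec3 velocity, Vec3 contactNormal)
{
    float into = Dot(velocity, contactNormal);
    if (into < 0.0f)                      // only remove motion going INTO the surface
        velocity = velocity - contactNormal * into;
    return velocity;
}

int main()
{
    Vec3 walkVelocity = { 3.0f, 0.0f, 1.0f };  // running diagonally into a wall
    Vec3 wallNormal   = { -1.0f, 0.0f, 0.0f }; // wall faces -X

    Vec3 slid = SlideAlongSurface(walkVelocity, wallNormal);
    std::printf("slid velocity: %.2f %.2f %.2f\n", slid.x, slid.y, slid.z);
    // Prints 0.00 0.00 1.00 - the character keeps moving along the wall.
}
```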

Yeah, we'll likely go for robotics simulation, or ML procedural animation, whatever, but we'll still use capsules for collisions, I guess. Collision shapes only become a limitation if we want to do complex and detailed actions, e.g. a kiss. But such stuff likely remains a topic for cut scenes and motion capture, with collisions turned off.

Also, from the view of rigid body simulation, every body has the shape of an ellipsoid anyway, because that's the shape which our inertia tensors describe. Even if we add a detailed mesh collider on top of that, it behaves like an egg. : )
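A quick numeric illustration of that point: any diagonal inertia tensor is exactly the inertia of some solid ellipsoid, so you can recover the dynamically equivalent ellipsoid of, say, a box from its principal moments (standard solid-ellipsoid formula; the numbers below are arbitrary):

```cpp
// Any diagonal inertia tensor is the tensor of some solid ellipsoid, so to the
// rigid body solver a box "behaves like" an ellipsoid. Here we take a box and
// recover the semi-axes of its dynamically equivalent ellipsoid.
#include <cmath>
#include <cstdio>

int main()
{
    // Solid box: full dimensions (m) and mass (kg).
    float w = 0.4f, h = 1.8f, d = 0.3f, m = 70.0f;

    // Principal moments of a solid box about its center.
    float Ixx = m * (h * h + d * d) / 12.0f;
    float Iyy = m * (w * w + d * d) / 12.0f;
    float Izz = m * (w * w + h * h) / 12.0f;

    // A solid ellipsoid with semi-axes a,b,c has Ixx = m(b^2+c^2)/5, etc.
    // Inverting that gives the equivalent ellipsoid for any principal inertia.
    float a = std::sqrt(2.5f * (Iyy + Izz - Ixx) / m);
    float b = std::sqrt(2.5f * (Ixx + Izz - Iyy) / m);
    float c = std::sqrt(2.5f * (Ixx + Iyy - Izz) / m);

    std::printf("box %.1f x %.1f x %.1f behaves like an ellipsoid with semi-axes %.2f, %.2f, %.2f\n",
                w, h, d, a, b, c);
}
```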

So detailed collision shapes would be more of a topic for things like realistic skin, hair and cloth simulation, but they're not related to the goal of having more natural character animation, IMO.

We used this extensively in Half-Life: Alyx. Here is my favorite example of when this works very well:

This is all live in game. Nothing is scripted. The headcrab is both animated and simulated at the same time. We changed the headcrab jumping attack so that there is always an active ragdoll. You gain a lot of quality in terms of environment interaction with this. E.g. when the headcrab jumps against a wall, it now looks nice since it collides properly and slides down the wall. Headcrabs could also tumble down stairs. Besides defending with a chair, people caught headcrabs in the air with their hands or even in a bucket. They put buckets on top of headcrabs. And so forth. The player feels much more powerful and has more impact on the world.

I used some of the techniques described in the Naughty Dog paper, but we built another system on top of this. It is a combination of animation, physics and game code. I haven't used any ML for these things, nor do I think it is particularly useful for these use cases, but maybe from an academic perspective. A nice side effect of the system was that the ragdoll transition and getting up after death came for free with this approach.

@Dirk Gregorius can you please share that paper with me?

Dirk Gregorius said:
Here is my favorite example of when this works very well:

Ooooh… being one of those frustrated by AAA games working mostly like movies, it's exciting to see some are still on the right track. Must get some VR sometime… : O

Maybe you can share some thoughts on the question: could this work as well in a non-VR game? For me it does. When playing Penumbra, I loved to interact with objects all the time. It gives me so much immersion that I just accept the awkward mouse controls. But am I a minority, and do most people just feel discomfort when having to do this? Or is it limited to PC games, and thus the majority of the industry ignores physical interaction with a game's world? I mean, why does nobody big besides Valve make more games like this?

@moeen k

Sorry, there is no paper. I was referring to an ND presentation at the GDC a few years ago. All we really do from that presentation is how we spawn the ragdolls through animation events from the animgraph. I often see people try to add ragdoll nodes into an animgraph, but this would obviously be a bad idea. Personally I have not written anything up, unfortunately. I just used the pandemic to write an extensive test suite to study and work on this for future titles, since things were a bit quieter at work and I luckily had time to do this. So hopefully I will be able to talk about what we did at a GDC or Steam Dev Days soon!
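To sketch the difference (purely my own illustration, not Source2 or Naughty Dog code; every name below is hypothetical): the animgraph only fires a named event, and the game/physics code that owns the ragdoll reacts to it, instead of a ragdoll node living inside the graph.

```cpp
// Hypothetical sketch of "spawn the ragdoll from an animation event" rather
// than putting a ragdoll node inside the animgraph. The graph only fires a
// named event; game/physics code owns the ragdoll. None of these names come
// from a real engine - they are placeholders for the idea.
#include <cstdio>
#include <functional>
#include <string>
#include <unordered_map>

class Character
{
public:
    Character()
    {
        // Authored on the animation clip: "at frame N of the jump attack,
        // raise the 'BeginActiveRagdoll' event."
        m_eventHandlers["BeginActiveRagdoll"] = [this] { ActivateRagdoll(true);  };
        m_eventHandlers["EndActiveRagdoll"]   = [this] { ActivateRagdoll(false); };
    }

    // Called by the animation system whenever a clip crosses an event key.
    void OnAnimEvent(const std::string& name)
    {
        auto it = m_eventHandlers.find(name);
        if (it != m_eventHandlers.end())
            it->second();
    }

private:
    void ActivateRagdoll(bool active)
    {
        m_ragdollActive = active;
        // In a real engine: enable the ragdoll bodies and set the joint drives
        // to track the animated pose while active.
        std::printf("active ragdoll %s\n", active ? "ON" : "OFF");
    }

    bool m_ragdollActive = false;
    std::unordered_map<std::string, std::function<void()>> m_eventHandlers;
};

int main()
{
    Character headcrab;
    headcrab.OnAnimEvent("BeginActiveRagdoll"); // fired at the start of the jump clip
    headcrab.OnAnimEvent("EndActiveRagdoll");   // fired when the clip recovers
}
```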

@JoeJ

The same ideas would work equally well in non-VR. There is really nothing VR specific here (maybe the hands, but that is really a different story). The Half-Life series has always had an emphasis on physics-driven gameplay and this was just a natural extension of that.

The problem with all the physical interaction is manifold. First, since physical gameplay is important for us, we designed our physics engine Rubikon to work very well for our anticipated gameplay scenarios. Secondly, the whole physics engine is a first-class citizen inside Source2 and many systems are built on top of it. It has been this way historically since HL2. This helped a lot, since people have been used to building gameplay on top of physics for a long time already.

Thirdly, I believe it is important to have a physics programmer in house. To get a lot of physics into a game, the designers and artists must trust the engine and have short communication paths with engineering. So if an artist or designer wants to try something with physics, they quickly come to my desk and we discuss how best to approach it. Then they usually set it up at their desk, and I walk over and we finish it together. This builds trust that they can succeed using physics and educates people for future challenges. If you use middleware, you have to file support tickets and so forth. The turnaround time becomes days instead of hours. At that point the artist will just do something else.

As a funny example, I didn't even know that the interaction with the chair and headcrab worked this way until I saw it in the video. This was not a planned feature. It just happened to work.

