"animation scene" class?

Started by Norman Barrows
3 comments, last by Norman Barrows 9 years, 3 months ago

So i'm working on action animations for Caveman....

(game description)

http://www.gamedev.net/blog/1730/entry-2258672-caveman-v30-general-desciption/

they are similar to the animations in The Sims, such as making a meal, eating, etc.

i have asset pools of meshes, textures, models, and animations. the object types database has drawing info. i can use the built-in modeler to determine the fix-up to place objects in an avatar's hand.

right now i'm just brute-force adding animations, sfx's, and objects in hand to actions or action-object combos - i.e. either to the action types or object types databases.

but ultimately, for every action or action-object combo, i'd like to have one or a series of animations, objects in hand, objects on the ground, and sound effects that get drawn/played.

this seems to lead to an "animation scene" class, and an asset pool of "animation scenes" - which would allow asset reuse. an "animation scene" would contain the (or pointers to the) animations, sfx's, objects in hand and on the ground to be drawn or played, along with where to draw the objects on the ground.
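a rough sketch of what i'm picturing (all names made up, just to make the idea concrete):

    #include <vector>

    // hypothetical "animation scene" - all IDs index into the existing
    // asset pools, so scenes stay small and easy to share between actions
    struct PropPlacement {
        int   object_type;    // what to draw on the ground
        float x, y, z;        // avatar-relative location
    };

    struct AnimationScene {
        std::vector<int>           animation_IDs;    // played in order
        std::vector<int>           sfx_IDs;          // one per animation
        std::vector<int>           objects_in_hand;  // one per animation
        std::vector<PropPlacement> objects_on_ground;
    };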

i'd have to come up with a "play animation series" routine - shouldn't be hard - my engine currently can't chain them.
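something like this per-frame update might do for the chaining (assuming hypothetical helpers like animation_finished, play_animation, play_sfx, and set_object_in_hand):

    // advance through the scene's clips, moving to the next entry when
    // the current clip finishes. call once per frame while the action runs.
    void update_animation_series(Avatar& a, const AnimationScene& s, int& cur) {
        if (!animation_finished(a)) return;               // current clip still going
        cur++;
        if (cur >= (int)s.animation_IDs.size()) return;   // series done
        set_object_in_hand(a, s.objects_in_hand[cur]);
        play_sfx(s.sfx_IDs[cur]);
        play_animation(a, s.animation_IDs[cur]);
    }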

how is this (animation scene class thingy) usually handled in games or engines?

Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php

I'm not sure what you're asking. Do you mean attach points (where to put objects relative to a model to make it look like they're holding or wearing it)? You could add attach points around the feet of an avatar to make it look like it's on the ground.

Games like The Sims use a more data-driven approach to how the environment works. E.g., the floor or counters are actionable objects that include all the logic for where things are placed and how they're animated when used. A script on a microwave might trigger when used that does the following (see the sketch after the list):

1) tell the avatar to move in front of it
2) tell the avatar to play a "put food in microwave" animation
2b) at the same time, play the microwave's "open and close" animation
3) detach the food from the player's hand and attach it to the microwave
4) tell the avatar to play the "reach out and press button" animation (which may use IK or a consistent set of button locations mapped to a small set of "press button" animations)
5) tell the microwave to play its animation
6) spawn a cooked food item in the microwave
7) tell the avatar to play the "reach out and use" animation
8) attach the food item to the avatar's hand
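To make that concrete, here's a rough sketch of how such a script might be stored as data on the object rather than hard-coded (the step kinds, IDs, and constants are all invented for illustration):

    // each usable object carries its own step list
    enum class StepKind { MoveTo, PlayAnim, Attach, Detach, Spawn };

    struct UseStep {
        StepKind kind;
        int      anim_ID;   // for PlayAnim steps
        int      item_ID;   // for Attach/Detach/Spawn steps
    };

    // invented IDs, just so the sketch is self-contained
    enum { ANIM_PUT_FOOD = 1, ANIM_PRESS_BUTTON = 2,
           ITEM_RAW_FOOD = 10, ITEM_COOKED_FOOD = 11 };

    // stored with the microwave, not with the avatar
    const UseStep microwave_script[] = {
        { StepKind::MoveTo,   0,                 0 },
        { StepKind::PlayAnim, ANIM_PUT_FOOD,     0 },
        { StepKind::Detach,   0,                 ITEM_RAW_FOOD },
        { StepKind::PlayAnim, ANIM_PRESS_BUTTON, 0 },
        { StepKind::Spawn,    0,                 ITEM_COOKED_FOOD },
        { StepKind::Attach,   0,                 ITEM_COOKED_FOOD },
    };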

The scripts would also state that the avatar must have a food item in hand before it can use the microwave. A goal-planning system is used to know that hunger is satisfied by prepared food, that the microwave prepares some kinds of unprepared foods, and that the cupboard contains those unprepared foods, so the avatar knows that to fill its hunger bar it must:

1) go to the cupboard
2) get food
3) go to the microwave
4) use it to get prepared food
5) then eat that food

Each of these is a distinct step with distinct animations and data.
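A toy backward-chaining version of that planning, just to illustrate the shape of it (WorldObject, produces/needs, NO_ITEM, and world_objects() are all invented for this sketch):

    // find an object whose use-script produces the needed item, then
    // recurse on whatever that object needs as input
    bool plan(int needed_item, std::vector<int>& steps) {
        for (const WorldObject& obj : world_objects()) {
            if (obj.produces != needed_item) continue;
            if (obj.needs != NO_ITEM && !plan(obj.needs, steps))
                continue;                 // prerequisite can't be satisfied
            steps.push_back(obj.id);      // meaning: "go to obj and use it"
            return true;
        }
        return false;
    }

For the hunger example, plan(ITEM_COOKED_FOOD) finds the microwave, recurses to find the cupboard for the raw food it needs, and yields the step list cupboard-then-microwave.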

A combination of attachment points, use scripts, and consistent art guidelines (e.g., if an object has a button, that button is always directly in front of where the avatar stands when using the object, and the button is always 4' off the ground) allows for a lot of reuse of animations.

Sean Middleditch – Game Systems Engineer – Join my team!


Do you mean attach points (where to put objects relative to a model to make it look like they're holding or wearing it)?

no, that's just the "fixup" i mentioned that can be determined in the modeler. all it does is adjust an object-in-hand relative to the "weapon bone" for precise placement in an avatar's hand.


Games like The Sims use a more data-driven approach to how the environment works. E.g., the floor or counters are actionable objects that include all the logic for where things are placed and how they're animated when used.

yes, the animations are similar, but The Sims is highly programmable (data driven), so as you say, they store all the info in an object (such as the microwave).

what about more typical shooter-type games? for example, in Skyrim, they have things like the "use forge" and "use arcane enchanter" animations. but they don't have a lot of items you can interact with that way: smelter, forge, armor table, grinding wheel, tanning rack, arcane enchanter, potion mixing station (i forget the name) - that's just 7 objects in the whole game. maybe they just have 7 animations and pass them an avatar ID.

but i have 300 types of objects, and make, repair, and find are object specific. and there are 100 types of actions. make, repair, and find alone add up to about 900 animations (thank god for re-use! <g>). the learn action works with about 50 skills, so that's another 50 animations. so i'm figuring there will be 1000-1500 action animations total.

and each one will need a little "scene description" of all the info required to play the animation: which band member to draw, which animations and sfx's to play, what to draw in their hands, and what to draw around them and where.

which animation scene to play will be a function of the action or action-object combo. that info will be stored in the actions or objects database. but the actual scene info (list of anis, sfx's, objects, locations) should be in its own shared resource pool (i would think)...

actually, they could all be parameter driven. you'd pass in:

- band member (i.e. what avatar to use)
- objects in hand
- animations
- avatar Y for each ani (height to draw the avatar at for a given ani)
- objects on ground

and a single animation routine could draw it all.
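i.e. a single entry point along these lines (just a signature sketch - PropPlacement is from the scene sketch above):

    // one routine draws/plays everything; the parameter lists run in
    // lockstep, one entry per animation in the series
    void play_action_animation(
        int                               band_member,   // which avatar
        const std::vector<int>&           animations,
        const std::vector<int>&           objects_in_hand,
        const std::vector<float>&         avatar_Y,      // draw height per ani
        const std::vector<PropPlacement>& objects_on_ground);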

but you'd still need to store the parameters somewhere. and that somewhere would be in the actions database for animations that depend only on the action, and the object types database for actions that depend on the object in question.

Now... what if two actions or action-object combos call for the same animations and objects? this may occur a lot in the game. do you have all the data for making something with leather while holding a stone knife in EVERY leather object?

of course not - you add a third relational database to go with the actions and object types databases. it has the animation scene info, and the action and object type databases have pointers to the appropriate scene in the "animation scenes" database.
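so the lookup at play time is just one level of indirection (hypothetical names again):

    // actions and object types store only a scene ID; the scene data
    // itself lives once in the shared pool and is reused freely
    int scene_ID = action_depends_on_object(action)
        ? object_types[object].animation_scene_ID
        : action_types[action].animation_scene_ID;
    const AnimationScene& scene = scene_pool[scene_ID];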

maybe this is something most games don't do. The Sims has a lot of action animations, but it has that whole programmable data-driven thing going on, so everything is contained in an object (such as a microwave).

perhaps another way to ask the question would be: how are cut scenes typically handled in shooters? canned animations loaded on demand and then thrown away? do they just get an army of artists to make every possible combo of anis, sfx's, and objects required? and can they do this because they have a high number of artists compared to the number of animations required? unless they had twice as many artists as animations, i would think they'd try to use code to get some reuse. that would lead to a data-driven animation scene description of some sort, and every event that triggered an animation would have the name of the scene description file to load and play. something like that?

Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php

After reading the posts above I came to the conclusion that I would not solve the problem as indicated there. The reason is twofold: First, it would not match the authoring side of the engine I'm working on very well; I'm thinking in terms of component-based game objects and their interrelations. Second, to stay with Norman's example above, why should a leather item game object contain all the data about what can be done with it (and especially how) when holding a stone knife? Why shouldn't that be part of the stone knife instead? And what if more than 2 item objects are involved? In summary, I don't think in monolithic solutions.

1.) Game objects

Game objects are defined by components, although it is not necessary to do it that way to get the same result. Adding an ItemSheet component makes the game object an item, giving it a name in the game and a description. Adding an Edible component makes it, well, edible; effects on constitution are inherent to this kind of component. So game objects are marked and provided with properties, so that clients can investigate what possibilities exist for interacting with the game object.

2.) Utilities & Mechanisms

Grabbing is a component that, when added to a game object, denotes that the game object is able to establish the attachment of another game object. It may realize a hand, a carrying loop, a scabbard, and so on. The passive counterpart is the Grip component, which denotes that a game object can be attached. Grabbing and Grip together build up one kind of mechanism. Attaching a Grip to a Grabbing is possible if and only if their respective keys match in at least one combination. The placement of an attached Grip is forced to be the same as that of the Grabbing when seen in the same space. (You already mentioned something like this above with the terms "fixup" and "attach point" without giving details.)
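A minimal sketch of this mechanism, assuming a simple bitmask for the keys (the component names follow the description above, the rest is invented):

    // attachment succeeds only if at least one key matches
    struct Grip     { unsigned keys; /* plus the grip-point transform */ };
    struct Grabbing { unsigned keys; /* plus the grab-point transform */ };

    bool can_attach(const Grabbing& grabbing, const Grip& grip) {
        return (grabbing.keys & grip.keys) != 0;   // any shared key bit
    }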

Besides the above, other utilities and mechanisms exist. They may, but need not, be explicit. For example, Location components define prescribed positions in the world, and Posture components give hints for how a skeleton should be adjusted (e.g. when sitting on a chair). Other utilities are implicit. For example, the knob to open the door of the microwave oven is just a graphical representation, a Collider component, and a script triggered when the Collider fires. Here too, key matching helps to distinguish wanted from unwanted interaction.

3.) Animations

Animation clips consist of tracks. Several types of tracks exist to control all types of animatable values: skeleton joints, sound effect playback, visibility, enabling, ... whatever. So an animation playback can also control the enabling of keys in, e.g., Grabbing components. This allows the usability of effectors to be limited to phases of the animation.
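For example, a track of this kind might look like the following sketch (invented names; Grabbing as above):

    // enables one Grabbing key only during a time window of the clip,
    // so the effector is usable in just that phase of the animation
    struct KeyEnableTrack {
        float    start, end;   // seconds into the clip
        unsigned key;          // which Grabbing key bit to enable
    };

    void apply(const KeyEnableTrack& t, float clip_time, Grabbing& g) {
        if (clip_time >= t.start && clip_time <= t.end)
            g.keys |=  t.key;   // enabled inside the window
        else
            g.keys &= ~t.key;   // disabled outside it
    }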

Traditional inverse kinematics or faked inverse kinematics (see e.g. this somewhat old article) can be used to steer an animation so it reaches a specific spatial goal. That is, procedural control can be used to make animations more flexible.

4.) Actions

When thinking of behavior trees or planners, in the end there is a sequence of actions that are processed by the agent. Such actions may be "go to Location X", "adopt Posture Y", "reach for Grip Z and grab", and so on. Please notice that the subjects of the actions are the components mentioned above.

Whether the action sequence is prescribed (think of the leaves of a behavior tree's sequence node), selected (using a behavior tree), or planned (using a planner) depends on how reactive the agents should be. These AI solutions allow the availability of all necessary items to be taken into account, and they allow cheating if not everything is available.

----

With the approach described above, what an agent can do is anchored within its Controller (PC) or AI (NPC). How items can be used in principle is declared with the respective game object. IMHO the (more or less) well-known solutions in game development are sufficient to solve the given problem, or else I haven't understood what the real problem is. Distributing the data isn't simple either, but I think that concentrating the data in a single item will have its own problems, too. Just my 2 cents.


The placement of an attached Grip is forced to be the same as that of the Grabbing when seen in the same space. (You already mentioned something like this above with the terms "fixup" and "attach point" without giving details.)

i use the weapon bone approach for drawing objects in an avatar's hand. however, for a precise fit, i find i need a fixup on a per-object basis to precisely place an object between the fingers, etc.


Distributing the data isn't simple either, but I think that concentrating the data in a single item will have its own problems, too. Just my 2 cents.

yes, this is the real question. i have avatars, objects, animations, and sfx, but i seem to need a higher-level construct to specify the animations, sfx's, and objects in hand and on ground to be used in a given type of animation sequence for a given action, action-object, or action-skill combo.

it looks like there will be a generic "action animation player" that takes an avatar and sequences of animations, sfx's, objects in hand, and objects on the ground (with avatar-relative locations). it will play the animations specified, such as cut reeds, grind reeds, twist reeds into cordage - three animations with different objects and sfx's involved. where it is in the playback would be based on the % complete for the action. i.e. loop the cutting ani until the action is 33% complete, then loop the grinding ani until the action is 66% complete, then loop the twisting ani.
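picking the clip from the completion percentage is just a division (a sketch, using the scene struct from earlier):

    // e.g. 3 clips: cut (0-33%), grind (33-66%), twist (66-100%)
    int current_animation(const AnimationScene& s, float pct_complete) {
        int n = (int)s.animation_IDs.size();
        int phase = (int)(pct_complete * n);   // pct_complete in [0, 1]
        if (phase >= n) phase = n - 1;         // clamp at exactly 100%
        return s.animation_IDs[phase];         // loop this clip
    }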

the anis, sfx's, and objects will be constant for a given action, action-object, or action-skill combo. only the avatar changes. more than one action may use the same animation. many do in fact - you'd be surprised how many things require a stone knife! <g>

so that seems to lead to a database of action animation descriptions (animation scenes) that specify the ani, sfx, etc sequences associated with an action for playback by the "action animation player".

by placing them in a shared resource pool, i think i have all the bases covered:

    object = controller.select
    action = object.actions_menu
    action_animation_player.play(avatar, action.animation_scene_ID)

Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php

