# Motor control for physics-based character movement


## Recommended Posts

I've been bashing my head on this, in one form or another, for about 2.5 years now. I would've asked here earlier, but had had too much of the IRC channel's "how to do X?" "do Y instead" garbage... but apparently the IRC channel has been officially disowned by the forums for a while now.

I want to do physics-based character animation, locomotion, etc.

What I have:

A home-brew physics engine that works, mostly. The one known issue is that collisions between certain kinds of convex primitives currently only generate a single contact point, when the contact region would be better represented by multiple contact points bounding an area.

A character consisting of rigid bodies, with skeletal joint constraints between them. The class is named Dood. I can tell each joint what torque I want it to apply, it will check to make sure it's within an allowed range, and then it will apply that torque across the joined bones.

What I don't have:

Control logic for those joint torques. I've experimented with a bunch of different things, but really I haven't got a clue how to proceed with this. I was able to come up with an analytic solution for the upper body, but only because I was able to sink all of the extra torque into the lower body. Extending this approach to work for the lower body as well would be nontrivial, to say the least. I'm completely stumped.

I've found a few papers on the subject, but I wasn't able to extract anything useful from them.

Possible "black box" formulation:

Inputs:

• Anything and everything about the state of the Dood
• (Maybe) info about objects the Dood is in contact with (or just his feet)
• "Goal Description", possibly in the form of a desired linear and angular velocity (or net force and torque) on the left foot, the right foot, and the pelvis

Outputs:

• Joint torques (3-component vectors) for all of the joints of the Dood's lower body: { left, right } x { hip, knee, ankle }

Little help?

##### Share on other sites

It's a bit difficult to guess your exact implementation, of course, but torques normally arise from a force and a moment arm, which is resolved to a force at the fulcrum plus a torque. If you have a hierarchical bone structure, that force at the fulcrum of a child would result in a torque and force on the parent joint. That sequence could be used to "pass" the force/torque to the root of the character.
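A minimal sketch of that resolution (NumPy; the chain layout, helper names, and numbers are all hypothetical, not the poster's actual code):

```python
import numpy as np

def resolve_to_fulcrum(force, point, fulcrum):
    """Replace a force applied at `point` with an equivalent
    force at `fulcrum` plus a torque about the fulcrum."""
    arm = np.asarray(point, float) - np.asarray(fulcrum, float)
    torque = np.cross(arm, force)          # tau = r x F
    return np.asarray(force, float), torque

def propagate_to_root(force, point, joint_pivots):
    """joint_pivots: joint positions ordered child -> root.
    Returns the torque contribution felt at each joint."""
    torques = []
    for pivot in joint_pivots:
        force, torque = resolve_to_fulcrum(force, point, pivot)
        torques.append(torque)
        point = pivot                      # the force now acts at this pivot
    return torques

# example: downward unit force at the tip of a two-joint chain along x
torques = propagate_to_root(np.array([0.0, -1.0, 0.0]),
                            [1.0, 0.0, 0.0],
                            [[0.5, 0.0, 0.0], [0.0, 0.0, 0.0]])
```

The per-joint torques sum to the total moment about the root, which is what lets the sequence "pass" the load down the hierarchy.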

##### Share on other sites

Why not use a neural network based solution? I think this is one of the few cases where an AI based solely on neural networks wouldn't be one of the worst ideas ever. It could be quite functional.

##### Share on other sites

You can use motors that drive a specific orientation. As far as doing anything intelligent goes, well isn't this why studios pay a lot of money for software like Euphoria?

##### Share on other sites

@Buckeye: If I understand you correctly, that's what I'm already doing for the upper body. Thing is, it can't be generalized for the whole body. At least one bone has to act as a sink, to allow the other bones to reach their desired orientations. Which means unless there's some very lucky coincidence, that sink bone won't be able to reach its own desired orientation.

@WireZapp: I tried, but neural nets hate me

Two problems:

1. I don't have a set of "correct outputs" the system should produce for a given set of inputs, and I'm not sure such a set can even be constructed. At least, it would need to take into account what the Dood's state was not just in the most recent physics tick, but also in the one or two physics ticks prior. Or perhaps some cut-down version of that information, Idunno.
2. I don't know how to select coefficients for a neural network that isn't feed-forward. Or one that is, for that matter... the Wikipedia article on backpropagation wasn't very helpful. I see they've changed it a little since the last time I looked at it; it no longer contains the phrase "local induced field" (wat?), so I guess that's an improvement.

Also so far I've been too stubborn to use an existing NN library. That might change, Idunno.

@Randy Gaul: Motors that drive a specific orientation? Sounds like what I'm already doing for the upper body. Euphoria? No thanks, I'm trying to make something of my own. Or were you saying the knowledge required for that sort of awesomeness is not available to people without money to spend?

##### Share on other sites

Yeah, I'm trying to say that intelligently driven motors are something studios have paid a lot of money for. That would imply that it's a pretty difficult thing to do! I wish I could provide more details about specifics, but I really don't know much about the implementation of such a thing.

##### Share on other sites

I want to do physics-based character animation

I was able to sink all of the extra torque into the lower body

At least one bone has to act as a sink

You've apparently come up with a set of "physics" rules that differ from what is commonly understood to be application of "torque." You shouldn't end up with "extra" torque and have to "sink" it somewhere (not sure what those terms mean with regard to "real world" physics).

To speak in common terms, agreement needs to be reached for the meaning of words. How do you define "torque"? If you can express it in units (either SI or British, such as Newton-meters or ft-lbs) that would help a lot. Looking for some common terms to be able to help.

Also, describe why the lower body is (somehow) disconnected from the upper body. That may be the first thing you want to fix.

Edited by Buckeye

##### Share on other sites

I would suggest looking at Michiel Van De Panne's research -- he's worked on a lot of projects that demonstrate that you can achieve pretty nice control with fairly simple state machines + PD controllers. SIMBICON is a good place to start: http://www.cs.ubc.ca/~van/papers/Simbicon.htm
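For reference, the PD controllers in that line of work are very simple: a spring toward the target joint angle plus velocity damping. A minimal 1-DOF sketch (unit inertia; the gains are made up):

```python
def pd_torque(theta_target, theta, omega, kp, kd):
    """PD control: spring toward the target angle, damp the velocity."""
    return kp * (theta_target - theta) - kd * omega

# drive a 1-DOF joint with unit inertia toward 1 radian, at 60 Hz ticks
theta, omega, dt = 0.0, 0.0, 1.0 / 60.0
for _ in range(600):                       # 10 simulated seconds
    tau = pd_torque(1.0, theta, omega, kp=50.0, kd=10.0)
    omega += tau * dt                      # semi-implicit Euler: velocity first...
    theta += omega * dt                    # ...then position
# theta settles near the 1.0 rad target
```

The state machine's job in SIMBICON is then just to swap target angles (and a couple of feedback terms) as the gait phase changes.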

Higher-level stuff (when to transition from one control strategy to another) is another piece of the puzzle: http://www.cs.ubc.ca/~van/papers/2010-TOG-gbwc/index.html

Or if you like machine-learning type stuff, he has that too: http://www.cs.ubc.ca/~van/papers/2013-TOG-MuscleBasedBipeds/index.html

##### Share on other sites

I want to do physics-based character animation... I was able to sink all of the extra torque into the lower body... At least one bone has to act as a sink

You've apparently come up with a set of "physics" rules that differ from what is commonly understood to be application of "torque." You shouldn't end up with "extra" torque and have to "sink" it somewhere (not sure what those terms mean with regard to "real world" physics).

Actually Aken is trying to do a sensible thing. One problem with controlling a human character is that you want the control to be accurate without being stiff.

For example - think about those jointed desk lamps with springs in each joint. They manage to convert a jointed system that would (without the springs) need quite a lot of effort to just stop it collapsing, into one where it supports itself and just needs gentle inputs to then adjust. What the springs do is (approximately) compensate for the vertical linear gravitational force (i.e. acceleration of -g) of the lamp end by "converting" it into a single torque that is applied to the base (some people might like to call this a foot), and thus "sunk" into the environment. So long as the foot doesn't overturn, of course.
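In code, the compensation the springs approximate might look like this (a NumPy sketch; the mass and offset are hypothetical):

```python
import numpy as np

def gravity_compensation_torque(mass, com_offset, g=(0.0, -9.81, 0.0)):
    """Torque the base joint must supply so that gravity acting at the
    center of mass (offset `com_offset` from the joint pivot) is
    cancelled: roughly what the lamp's springs do mechanically."""
    return np.cross(com_offset, mass * np.asarray(g))

# 2 kg lamp head held 1 m horizontally out from the base joint
tau = gravity_compensation_torque(2.0, [1.0, 0.0, 0.0])
```

With this term supplied for free, the remaining control inputs only have to handle deviations, which is why the lamp feels so light to adjust.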

You might want to read up on inverse dynamics.

Edited by MrRowl

##### Share on other sites

Actually Aken is trying to do a sensible thing

I didn't say it wasn't sensible - resolving forces and torques through a hierarchical structure is, indeed, a reasonable approach (and can approximate the "real" world.) I said he's using terms to describe his problem that (at present) only he understands the meaning of.

##### Share on other sites

Well, the meaning of "torque" is the same as what most people mean by it... torque = MoI * angular acceleration...

Each of the joints of my Dood effectively has a motor in it, and whatever delta-angular-momentum the motor induces in one bone, it induces the opposite delta-angular-momentum in the other bone. The physics engine's constraint solver resolves the linear components of the velocities to make sure the bones stay in their sockets (although this may in turn alter the angular velocities of the bones as well).

I can let each bone have a desired orientation. If I compare its current orientation to the desired orientation, I can compute the angular velocity necessary to get it there by the next timestep. Then if I compare that desired angular velocity to the current angular velocity, I can get the angular acceleration necessary to reach that angular velocity by the next timestep. And then by multiplying that desired angular acceleration by the oriented MoI matrix for that bone (I'm using a 3x3 matrix to store the MoI data, even though it only has 6 unique numbers), I can get the torque necessary to get that bone into its desired orientation by the next timestep.
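The steps above can be sketched as follows (NumPy; the inertia values are made up, and this is my reading of the description, not the poster's actual code):

```python
import numpy as np

def one_step_torque(rot_error, omega, inertia_world, dt):
    """Torque that would rotate a bone through `rot_error`
    (axis * angle, in radians) by the next tick, before the constraint
    solver has its say. `inertia_world` is the world-oriented 3x3
    moment-of-inertia matrix."""
    omega_desired = np.asarray(rot_error, float) / dt      # reach it in one tick
    alpha = (omega_desired - np.asarray(omega, float)) / dt
    return inertia_world @ alpha                           # tau = I * alpha

dt = 1.0 / 60.0
I = np.diag([0.1, 0.2, 0.1])                 # hypothetical bone inertia
tau = one_step_torque([0.0, 0.5, 0.0], [0.0, 0.0, 0.0], I, dt)
# sanity check: applying tau for one tick yields the requested rotation
omega_after = np.linalg.solve(I, tau) * dt
```

This is essentially an exact (deadbeat) version of a PD controller: instead of tuning gains, the gains are fixed by demanding arrival in exactly one timestep.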

So then I tell the motor in one of the joints this bone has to set its torque (in world coordinates) to that value. If another motor has already affected this bone, then I also need to undo whatever it did... so if I have a chain of bones with motors between, and each bone has a desired orientation, I have to add up the world-space torque vectors as I go through the chain. And when I get to the last bone in the chain, all of the motors have already had their torques set to accommodate the desired orientation of the previous bone... so there is no motor left to accommodate the desired orientation of the last bone in the chain. Thus I call it a 'sink' for the torques of the rest of the chain.

As far as the lower and upper body being separate... I just happened to decide to start from the arms and head, and work down until I got to the pelvis. The upper body stuff works, but at the cost of the pelvis doing completely arbitrary stuff, whatever spastic motion is necessary to keep the bones above it oriented as they desire. As I said before, ultimately this approach cannot be generalized to work for the whole body, because I can only satisfy as many bones' desired orientations as I have joint motors, and there are more bones than joints.

Will look at some of the other links that have been posted when I have more time.

##### Share on other sites

Good explanation, Aken. Clarifies the situation... except why you can't have more joints/joint motors. The reasons for that restriction aren't clear.

I understand that modeling the lower body and the ground's effect on it may not be trivial, but why is it not possible?

##### Share on other sites

If I understand your first question...

The problem is conservation of angular momentum. Say A and B are the angular momentum of two bones. If a joint motor between them does A += X, in order to conserve angular momentum, it must also do B -= X.

As I explained before, any desired orientation to be reached by the next timestep corresponds to a desired delta-angular-momentum. Ignoring the possibility of going >2pi radians to get there, it's a 1:1 correspondence. So if the bones I mentioned earlier have desired orientations, then I can talk about Adesired and Bdesired. But in general I can't satisfy both at once: if I choose X so that A' = A + X = Adesired, B' = B - X will not generally = Bdesired.

If I add a third bone connected by a second joint (new objects' properties will be "C" and "Y")...

A' = A + X

B' = B - X + Y

C' = C - Y

I can choose Y so that B' = Bdesired

Y = X + (Bdesired - B)

but then C' will not generally = Cdesired.

In general, there is no way to choose X and Y such that A' = Adesired, B' = Bdesired, and C' = Cdesired, simultaneously. If I choose to satisfy the desired orientations of everything but the last bone in a chain, that last bone will end up with a delta-angular-momentum = -(X + Y + Z + ...).
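The algebra above is easy to check numerically: whichever X and Y are chosen, the total is unchanged and the last bone eats the leftover (a scalar sketch with made-up values):

```python
# three bones in a chain, two joint motors; scalar angular momenta
A, B, C = 1.0, -2.0, 0.5              # current values
A_des, B_des = 3.0, 1.0               # desired values for the first two bones

X = A_des - A                         # first motor:  A' = A + X, B loses X
Y = B_des - (B - X)                   # second motor: B' = B - X + Y, C loses Y
A2, B2, C2 = A + X, B - X + Y, C - Y  # the last bone is the 'sink'
```

A2 and B2 land exactly on their targets, while C2 is whatever conservation dictates; no choice of X and Y can also steer C.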

Other stuff to note...

There's some approximation going on here, because A', B', etc. are the values that go into the constraint solver... whereas it's the values that come out of the constraint solver that determine what the actual position/orientation of the bones is at the next timestep. I haven't studied just how much of a difference that makes.

Also, it just occurred to me that I've been working in terms of desired bone orientations that must be reached by the next timestep... and perhaps that "by the next timestep" part is an unreasonable limitation. It occurs to me, for example, that a cat's righting reflex is at least a 3-step process: the cat starts out sideways, step 1 is to fold itself up into a torus, step 2 is to rotate the torus' surface (as if turning itself "inside out"), and then finally step 3 is to unfold the torus, leaving the cat right-side-up.

I don't think it's impossible... clearly it's possible; we do it IRL. It just isn't a simple generalization of the approach that worked for the upper body.

YouTube video of the working upper body stuff, if anybody is interested:

(warning: the gunfire sound effect is sudden and loud; consider turning your volume down before watching)

http://youtu.be/9Tr52MN3Qjs

Edited by Aken H Bosch

##### Share on other sites

The problem is conservation of angular momentum.

The principle of conservation of angular momentum applies to closed systems. A character standing on the ground (unless you include the ground as part of your system) is not a closed system. It appears your model is for a character in free space, and (from the video) you may well be getting the results your model dictates. If you extend your character's interaction to include forces and torques with the ground (as intimated by MrRowl above,) that should help the situation.

Edited by Buckeye

##### Share on other sites

The principle of conservation of angular momentum applies to closed systems. A character standing on the ground (unless you include the ground as part of your system) is not a closed system.

True, but the foot/ground interaction will only affect the system in the ways the constraint solver makes it. It's not a free pass to give bones arbitrary angular velocities. I mean, yes I could cheat, but I don't want to.

Failure to conserve angular momentum is a serious problem. About two years ago I had a system that (inadvertently) didn't conserve angular momentum. Each bone had a desired position and orientation. I computed the average linear velocity of the Dood, modified the linear and angular velocities of the bones to be whatever was necessary to get them to their desired pos/ori by the next timestep, and then subtracted out any net change to the linear velocity of the Dood to make it at least conserve linear momentum. Note, this is not part of the constraint solver I'm talking about; this is an extra step which, if done incorrectly, could cause the Dood's linear or angular momentum to change, completely independent of interactions with other objects.

Here's a video from back then (although I seem to have avoided showing the serious problems that failure to conserve angular momentum caused):

(warning: the gunfire sound effect is sudden and loud; consider turning your volume down before watching)

http://youtu.be/JNR5ohuOBHk

On to the problems:

The old system effectively said "I don't care what your angular velocities were; here's what they are now." As a result, characters could not be rotated by being pushed on. This includes interactions with the ground. The enemies the player is going to fight in my FPS[1] are giant bug aliens (but unlike nearly every other universe with "bugs" mine actually have six legs). Two very obvious problems arise when these bugs, with their wide stance, interact with these sloppy physics:

1. They can stand with only one foot in contact with the ground, while the center of mass is suspended in the air above a precipice.
2. When the larger bugs (not shown in the video) try to rotate, their feet slam into the ground at arbitrarily high speeds, causing a reaction impulse which sends them careening across the map.

Yes, I realize both of these problems could hypothetically have been remedied by improving how the desired bone orientations were selected. But that's not the path I've chosen.

[1]At one point "do it in a less cheaty manner" was a "side quest" of "make an FPS where you fly around with jetpacks and fight giant space bugs". But I've decided that this is what I want to do now. That said, I am still hoping for a solution that will be generalizable to work for the six-legged bug enemies.

Edited by Aken H Bosch

##### Share on other sites

Assuming the AI part will come later, after the 'complex action' layer it will make use of exists...

Primitives, of course, for getting the body/appendages to be stable in whatever environment they live in (try to make a biped stand/balance, then walk in a particular direction, on an irregular floor surface). Then various transitions to different orientations within that terrain, all within the limitations of the body's individual appendages (allowed angles, movement inertias, etc.).

Then fluid movements to move the body/appendages to where they are needed (a given target) when adjacent to a blocking environment structure (how to move the whole body to transition a 'hand' from outside a hole to inside, while being mindful of the physical limitations of the structure).

Next, applying force on an external object to make it move in a desired direction: shove it with the proper force and friction, within the impact/force limits of the appendage/body, while obviously countering forces to maintain balance of the body. Later, more aggressive stances and motions to impart greater force, like a pitcher winding up for his pitch.

Trajectories to be decided (ballistic paths calculated to work around obstacles, besides actually getting the object to its desired location).

All of the above with the system being able to judge that the task given is impossible, and maybe suggest alternatives (like getting closer to the target, or a position where obstacles aren't in the way, etc.).

##### Share on other sites

@wodinoneeye:

I don't understand what you're saying? A list of considerations for once I get the low-level stuff working? Or a plan of action?

I actually had done a bit of AI coding a long time ago, before I completely revamped the character physics... well, A* pathfinding, at least. And I have some general ideas how to do the layers that pass goal states (or whatever) to the low level stuff I'm working on... how to choose where I want the feet to go, that sort of thing...

What I've been trying recently...

Recently (the past couple of months) I've been experimenting with using genetic algorithms to select coefficients for an artificial neural network, converting most variables of the state of the Dood to an array of float inputs, and also giving it a goal state in the form of a desired linear and angular velocity for the left foot, right foot, and pelvis... or expressing it as forces and torques (I've tried things a few different ways at various points). I give these inputs to the ANN, and it outputs an array of float joint torques for each axis of the left and right ankle, knee, and hip joints.

The ANN is a custom setup. Here's how that works:

every physics tick (the physics runs at 60hz):

repeat this for some number of iterations:

(outputs, memories') = tanh((inputs, memories) * (coefficient matrix))

Note, these "memories" serve both as addresses to store the results of intermediate computations within a single physics tick, and to record these results for use in a subsequent physics tick.

Currently there are 150 inputs and 18 outputs; the number of inputs is more subject to change than the number of outputs.
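For concreteness, the update described above might be sketched like this (NumPy; the random coefficient matrix is just a stand-in for whatever the GA is supposed to evolve):

```python
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_MEM, N_OUT, ITERS = 150, 100, 18, 8   # sizes from the post

# one coefficient matrix maps (inputs, memories) -> (outputs, memories')
W = rng.normal(0.0, 0.1, size=(N_IN + N_MEM, N_OUT + N_MEM))

def tick(inputs, memories, W):
    """One physics tick: iterate the recurrent update ITERS times.
    Memories persist across ticks; outputs are read after the last pass."""
    outputs = np.zeros(N_OUT)
    for _ in range(ITERS):
        result = np.tanh(np.concatenate([inputs, memories]) @ W)
        outputs, memories = result[:N_OUT], result[N_OUT:]
    return outputs, memories

memories = np.zeros(N_MEM)                    # blank 'scratch paper'
outputs, memories = tick(rng.normal(size=N_IN), memories, W)
```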

I believe this sort of non-feed-forward setup is necessary because of the complicated nature of the foot/ground interaction. I cannot rely on there being exactly one contact point between the foot and the ground, nor can I rely on the object(s) the foot is in contact with actually being an immobile part of the terrain. Thus there is no real way to pack all of the information about the state of the foot/ground interaction into a fixed-size input array.

Also, looking at existing papers on the subject, I see a lot of things use PD controllers, a strategy that is very much unlike a feed-forward neural network.

However, it seems as though nobody ever does ANNs with anything but feed-forward neural networks, so I haven't a clue how to train such an ANN.

There are a couple of factors that I imagine will affect whether or not a "good enough" coefficient matrix exists:

1. The number of iterations it takes processing the "memory array" per physics tick
2. The size of the "memory array"

I would be very much surprised if 8 iterations per physics tick and 100 memories were not sufficient for a "good enough" coefficient matrix to exist, but I have no idea how to find it. I've been trying to use genetic algorithms, but haven't had much progress.

I don't have a precise definition of what qualifies as "good enough", but I can tell from looking at it that everything the GA has been able to come up with so far hasn't been it.

And even more recently...

This past week in particular, I've been experimenting with what kind of control I can accomplish when neither foot is in contact with the ground. Even though it's a soldier in power armor, including a jetpack and rocketboots which realistically could be used to torque the whole system, the cat righting reflex proves that some control is possible even when there isn't anything to push off against. And in an FPS, the player needs to be able to change their heading and pitch even when they're airborne (though perhaps with slower turning rates).

First I tried just disabling scoring for the pelvis' position, then I disabled scoring for everything but the pelvis' orientation. Now I've broken the pelvis orientation score (formerly the magnitude squared of the difference between the desired angular velocity of the pelvis and the actual angular velocity the system achieved) into a separate scoring category for each x/y/z component of the error. The scores for the y component (y is my vertical axis) are worst for some reason, both before and after I've given the GA a chance to try to evolve anything.

I reckon it must be possible to have better control while airborne than what I've managed to achieve so far... but maybe I need to let the "airborne yaw/pitch system" (name I just now made up) affect the spinal joints' torques, not just the left/right ankle, knee, and hip torques?

If all else fails, I can cheat and give the Soldier Dood a gyroscope. But I won't be happy about it.

##### Share on other sites

A few comments. I don't know the details of your implementation so these may not apply but I hope it is useful:

If I am reading your implementation correctly, it looks like you're multiplying your ANN inputs across the coefficients to get outputs and running them through your squash function. The reason I don't think this would produce results is that the model seems to be missing hidden-layer summing junctions, which are where the real computation in an ANN is accomplished. It would be very difficult to produce ANN-like behavior without a feedforward, 2-layer graph structure, because without it your control outputs are basically just scaled combinations of your inputs.

It is completely feasible to have your input array be allocated to the largest size that it would possibly encounter and just feed your inputs in as 0.f values when they are unused, as long as you are always mapping the same inputs in to the same locations. When the GA converges it will automatically omit any unnecessary inputs.

Make sure to normalize your inputs to the interval [-1, 1] before pushing through the ANN. I usually just divide by the feasible domain.

ANN feedforward (if you choose to implement it) need only be accomplished once per frame as long as your delta-t values in the model all accurately reflect the operating frequency.

An ANN's success is only limited by the soundness of the fitness function you are giving the GA. Use easier-to-accomplish fitness functions at first and work your way up as it trains. Also, an ANN will converge very well on ONE behavior, so don't expect multiple control functions out of the same ANN. For multiple functions you would construct another ANN in parallel designed to converge to a different fitness function.

A trained feedforward ANN can produce exactly the same results as a PID controller, even though the architecture is different. If you have already worked out the transfer functions for every joint in the system though, then PD control might be even easier than what you're doing now. You may be able to develop inverse kinematics for desired behavior first and then just find controller parameters to minimize the disparity between actual and desired behaviors.

##### Share on other sites

Ah whoops, I forgot I had a reply to reply to!

Hidden layers

My "memories" setup is just a more generalized form of your standard "hidden layers" construction. Instead of breaking the ANN down into layers, I categorize my neurons as either inputs (whose value is determined before batch-processing begins), memories (whose value may change with every iteration), or outputs (whose value is only used after batch-processing finishes). It also saves the values not just between iterations but also between batches of iterations, hence why I call them "memories". That and it's like a chunk of memory for use as "scratch paper", in whatever manner the ANN sees fit.

So it's capable of everything a traditional ANN with hidden layers is capable of, and then some. And if I wanted to enforce that certain coefficients in the coeff matrix must always be zero, I could make it exactly emulate a traditional "hidden layers" ANN.

"The largest size"

Yes, I suppose I could just have an arbitrarily large array of inputs, and have a reasonable "default value" to indicate that they aren't in use... but there really is no upper limit on how many contact points the feet can have at once. And even if I chose an arbitrarily large "maximum" number... how would I make the response "symmetric"? I.e. if there are multiple ways to represent virtually identical states, it should behave nearly identically regardless. I guess I could sort the contact points by priority somehow?

Also, a real person doesn't pack contact points into an array. Maybe I could attempt to categorize them by what part of the foot they're in contact with, what their normal vector is, etc.? Even categorizing the contact points like that, if I still had a separate array for each... I don't know, it sounds weird. Maybe I'm being too stubborn.

Normalizing my inputs

You bring up a good point, I haven't been (strictly) normalizing my inputs to [-1, 1]... I did realize that some of the goal-state quantities were way too big (because I was multiplying by 60hz to compute them), and so I multiplied them by something like 0.02 to compensate, but... I didn't want to run the inputs through tanh() unnecessarily, because it seems to me that if the ANN has use for the "squashed" inputs, it will do that itself, whereas if it would've been more useful to have the original linear values, it's going to have to do a bunch of extra work to un-squash that data.

Not sure what to make of this:

ANN feedforward (if you choose to implement it) need only be accomplished once per frame as long as your delta-t values in the model all accurately reflect the operating frequency.

"Only limited by the fitness function"

You can't actually mean that, can you? Sorry if this is bordering on nitpicking, but that "only" is so inaccurate I can't help but address it at length.

You said yourself you didn't think something without any hidden-layer neurons would be capable of solving this sort of problem. That's just the extreme case of "too few memories". Or do you mean to suggest letting it dynamically increase the number of memories? Hmm... I could try it, but I'd need convincing.

And the number of iterations matters too. Iterations are signal propagation... If I configured my ANN to emulate a traditional feed-forward hidden-layers ANN with 5 hidden layers, and it only did one iteration per physics tick, every action it takes will be 5 ticks late for the input that prompted it. That's only 83.3 ms, so it's actually better than the average human reaction time, but that assumes 5 hidden layers is sufficient. Even discounting the extreme case, "zero iterations", clearly the number of iterations is going to have an effect on the quality of the solutions.

And then there's the choice of inputs and outputs!

I don't know what it would look like, but I can say with reasonable confidence that there is some curve in (number of memories, number of iterations) space below which no coefficient matrix will be "good enough" (choice of inputs and outputs are implicit parameters). Though, as I said earlier, I would be very much surprised if (100, 8) is on the wrong side of that curve.

Multiple behaviors

The thing that makes this difficult is that I have a multifaceted goal state, all the parts of which need to be achieved simultaneously (I think). The most recent formulation of this goal state is a desired net force and torque vector on each of three bones: the left foot, the right foot, and the pelvis. And by "net" I mean what's measurable after the constraint solver has had its say, so it's equivalent to specifying a linear and angular velocity I want it to end up with, or a position and orientation. And in fact, a desired pos & ori is how I'm currently selecting the desired force and torque vectors... but in theory I can choose a goal state in any of those terms.

"Dynasties"

The thing is, I can't just let them evolve completely separately and then hope to simply lerp them together or something. If I don't force it to attempt multiple goals simultaneously, the strategies it comes up with to achieve one goal will come at the exclusion of the others.

I've been experimenting with a scheme for GA with multiple simultaneous scoring categories, which at one point I considered calling "dynasties". Inspiration for the idea came from a phrase I once encountered in a paper, "GA with villages". I didn't look up what it meant, but from the sound of it, I guessed it's a scheme for compromising between preserving genetic diversity (between "villages") and maintaining enough homogeneity for crossovers to be viable (within "villages").

Anyway here's how my "dynasties" scheme works:

For each scoring category there is a group of N parents, and every generation, each parent chooses a replacement for itself, which may be an exact clone, a single-parent mutant, or a crossover with any of the parents in any of the categories. When there were only 1 or 2 parents per category, the label "dynasties" felt more appropriate, but when there's a lot, it's more like "nepotistic apprenticeships". But whatever, it's not like I'm planning on patenting it.
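The scheme as described might be sketched like this (Python; `mutate` and `crossover` are placeholders the caller supplies, and the 1/3 split between clone/mutant/crossover is an assumption):

```python
import random

def next_generation(categories, mutate, crossover, rng=random):
    """categories: dict mapping score-category name -> list of parent genomes.
    Each parent picks its own replacement: an exact clone, a single-parent
    mutant, or a crossover with a parent drawn from any category."""
    all_parents = [p for parents in categories.values() for p in parents]
    new = {}
    for name, parents in categories.items():
        replacements = []
        for p in parents:
            roll = rng.random()
            if roll < 1.0 / 3.0:
                replacements.append(list(p))                       # clone
            elif roll < 2.0 / 3.0:
                replacements.append(mutate(p))                     # mutant
            else:
                replacements.append(crossover(p, rng.choice(all_parents)))
        new[name] = replacements
    return new

# toy genomes: flat lists of coefficients
mutate = lambda g: [x + random.gauss(0.0, 0.1) for x in g]
crossover = lambda a, b: [(x + y) / 2.0 for x, y in zip(a, b)]
new = next_generation({"balance": [[0.0, 0.0]], "airborne": [[1.0, -1.0]]},
                      mutate, crossover)
```

Cross-category crossover is the part that distinguishes this from independent populations: it lets strategies that do well in one scoring category leak into another.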

I can't really tell if it's a step in the right direction. It definitely isn't "good enough" yet though, and I get the sense it's not going to get there just by being left to run for a few days.

Emulating P[I]D with feed-forward ANN

I don't see how that's possible? To get values for the I and D terms, it needs to be able to remember what those components were from the last tick. If it stores "memories" separate from the normal inputs and outputs, doesn't it cease to qualify as "feed-forward"? Or is it a special case where some of the outputs were to be fed back in as inputs? Because that sounds exactly like the original premise that led me to come up with my "memories" system. If you do that, regardless of whether it qualifies as "feed-forward", the idea of having a "training set" of the correct outputs for a given set of inputs, including those memories as both inputs and outputs, becomes much more complicated, if not impossible. At least, I don't know how to do it.

Or did you just mean I would give it the values for P, I, and D as normal inputs, rather than any kind of "emulation"?
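For reference, here's a minimal PID controller (arbitrary illustration gains), which shows exactly where the state lives: the I term needs a running integral, and the D term needs the previous error, both carried across ticks:

```python
# Minimal PID controller. The point: the I and D terms require state
# (the running integral and the previous error) carried between ticks,
# which is exactly what a stateless feed-forward pass lacks.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0        # persistent state for the I term
        self.prev_error = None     # persistent state for the D term

    def step(self, error, dt):
        self.integral += error * dt
        if self.prev_error is None:
            derivative = 0.0       # no previous error on the first tick
        else:
            derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)
```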

Aside: rules check: are my posts here too blog-like? I know some places have rules (or guidelines) against using the forum as your dev blag, and I posted this in the AI board (though it's since been moved to Math & Physics) rather than "Your Announcements"... But I just have so much to say!

##### Share on other sites

My "memories" setup is just a more generalized form of your standard "hidden layers" construction. Instead of breaking the ANN down into layers, I categorize my neurons as either inputs (whose value is determined before batch-processing begins), memories (whose value may change with every iteration), or outputs (whose value is only used after batch-processing finishes).

I'm sorry, I'm having trouble understanding the way you're defining it without seeing a discrete description of the structure. The best guess I have is that you're representing the structure as an adjacency matrix and iteratively modifying it, which is a possible solution, but I have no way of knowing exactly what your algorithm accomplishes without seeing a mathematical description and I'm having trouble finding any reason that one would need to iterate the algorithm more than once per frame.

(edits)

From what I think I understand about your implementation, I strongly suspect there is no benefit to redundantly passing information through the structure while the input vector stays the same each timestep (which is what I'm assuming is happening), because analytically you're passing new feature information through a set of weights that are meant to process other features. This can cause a lot of information to be lost, and is why recurrent information is normally just fed in as an auxiliary input on the next timestep (aka recurrence). You may have better results by using another static coefficient matrix in lieu of "memories" and just feeding the recurrent information you deem relevant in as separate inputs when your input vector is updated.
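A rough sketch of what I mean, with assumed layer sizes and random weights standing in for GA-selected ones (Elman-style recurrence):

```python
import numpy as np

# Instead of re-iterating the same matrix on fixed inputs, carry a hidden
# state forward and concatenate it with the fresh inputs on the next tick.

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 8, 16, 4   # illustrative sizes

W_h = rng.standard_normal((n_hidden, n_in + n_hidden)) * 0.1
W_o = rng.standard_normal((n_out, n_hidden)) * 0.1

def step(x, h_prev):
    """One physics tick: fresh inputs x plus last tick's hidden state."""
    h = np.tanh(W_h @ np.concatenate([x, h_prev]))  # recurrent hidden update
    y = np.tanh(W_o @ h)                            # outputs (joint torques)
    return y, h

h = np.zeros(n_hidden)
for _ in range(3):                 # each tick gets genuinely new inputs
    y, h = step(rng.standard_normal(n_in), h)
```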

there really is no upper limit on how many contact points the feet can have at once

If this is true and you don't have a way to bound the data then you're limited by the finite size of your program memory and no algorithm is suitable anyway.

"Only limited by the fitness function"
You can't actually mean that, can you? Sorry if this is bordering on nitpicking, but that "only" is so inaccurate I can't help but address it at length.

Fair enough, let me be more specific: Assuming you are using the appropriate architecture for your objective, the viability of your model's hypothesis is only limited by your ability to select the appropriate fitness function.

You said yourself you didn't think something without any hidden-layer neurons would be capable of solving this sort of problem. That's just the extreme case of "too few memories". Or do you mean to suggest letting it dynamically increase the number of memories? Hmm... I could try it, but I'd need convincing.

And the number of iterations matters too. Iterations are signal propagation... If I configured my ANN to emulate a traditional feed-forward hidden-layers ANN with 5 hidden layers, and it only did one iteration per physics tick, every action it takes will be 5 ticks late for the input that prompted it. That's only 83.3 ms, so it's actually better than the average human reaction time, but that assumes 5 hidden layers is sufficient. Even discounting the extreme case, "zero iterations", clearly the number of iterations is going to have an effect on the quality of the solutions.

I think you may be misunderstanding the concept. The output set is a mapped function of the inputs at any given time, regardless of how many layers there are. There is normally no temporal aspect involved. Mathematically, any function can be represented with only two layers, so it seems that all you would be doing by repeatedly running information through tanh and multiplying it by the same matrix is corrupting your data in proportion to the number of iterations you run.

The thing that makes this difficult is that I have a multifaceted goal state, all the parts of which need to be achieved simultaneously

Anything that is necessary to occur at a given moment can be represented by a function. The hypothesis function (from whatever optimization/learning/controller algorithm you use) just has to change if you move on to another function afterward. If actions have to occur in sequence to reach the desired goal, then you can add recurrence.

I've been experimenting with a scheme for GA with multiple simultaneous scoring categories, which at one point I considered calling "dynasties"

This is called speciation, and it is already well-developed if you're interested in expanding on it. What you can't do is run GA on two disparate goals without partitioning the population, though. It will fail every time.

Emulating P[I]D with feed-forward ANN

I don't see how that's possible? To get values for the I and D terms, it needs to be able to remember what those components were from the last tick. If it stores "memories" separate from the normal inputs and outputs, doesn't it cease to qualify as "feed-forward"?

I didn't mean that you combine the two. I meant that they are both methods to minimize error when comparing a system's state to its desired state.

(edit)

Actually, intentional emulation is possible because you can feed information from one iteration as input to the next iteration if you choose to do so. This is actually pretty common in physical systems. See "recurrence" above.

Edited by Algorithmic Ecology

##### Share on other sites

Mathematically any function can be represented with only two layers

Bwaaa???

Really? I had read that most problems are solved with only one hidden layer, but I didn't know of any mathematical law that said it was always possible? I assume that means "with an arbitrarily large hidden layer" then?

RE: my setup:

Yes, it's basically an adjacency matrix (with floats for elements). The columns (inputs) correspond to "strict" inputs and memories. The rows (outputs) correspond to "strict" outputs and the updated values for memories.

I've already got a "SetAllowedCoeffs" method which lets me enforce that certain coefficients must be zero. To make it emulate a traditional 1-hidden-layer ANN, I just need to disallow nonzero coefficients mapping directly from inputs to outputs, or from memories to memories. That should be trivial to do. Emulating passing some outputs back in as inputs... maybe harder, because of the temporal aspect. Although maybe if instead of trying to emulate it, I actually had additional inputs and outputs for it... I could try it. Might, even.
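Illustrated with toy sizes (hypothetical numpy sketch, not my actual code), the mask's block structure looks like this:

```python
import numpy as np

# Rows produce (outputs, new memories); columns consume (inputs, memories).
# Zeroing the input->output and memory->memory blocks leaves only the
# input->memory and memory->output paths, i.e. a 1-hidden-layer topology.

n_in, n_mem, n_out = 3, 4, 2       # illustrative sizes
rows, cols = n_out + n_mem, n_in + n_mem

mask = np.zeros((rows, cols))
mask[:n_out, n_in:] = 1.0          # outputs may read memories
mask[n_out:, :n_in] = 1.0          # memories may read inputs
# mask[:n_out, :n_in] stays 0     -> no direct input -> output
# mask[n_out:, n_in:] stays 0     -> no memory -> memory

coeffs = np.random.default_rng(1).standard_normal((rows, cols)) * mask
```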

RE: no upper limit:

I can pick an arbitrarily large limit, but the real upper limit is indeed "how many pieces of gravel can I put under the player's foot before it makes the simulation cry?" And even though IRL there's a lot of nerve endings in a foot, I think the way we actually handle the foot/ground interaction is a lot more reactive force feedback than some comprehensive understanding of the external rigid-body physics. Maybe even to the extent that I don't even need to give it any contact point info as inputs... or so I had thought.

I'll think about categorizing the contact points into "sensor areas", and maybe taking some kind of average for everything within a sensor area. Dunno if I'm actually going to implement that yet.

Personal aside:

You (anyone reading this) may notice that I've got a habit of not trying things to see if they work. It's irrational, I know, but it's the result of ~2.5 years of nothing that I can say "works", and often when I try things not even being able to tell whether they're a step in the right direction. If you've got an idea, go ahead and suggest it, and I'll tell you why it cannot be done[1]. And if you still think your idea has particular merit... idk, convince me?

[1]A joke.

##### Share on other sites

Please could you give a clear high-level overview of what problem you're trying to solve, as distinct from the techniques you're trying to use. I think that might be helpful to you in getting help/ideas.

Edited by MrRowl

##### Share on other sites

Short version of what MrRowl said in PM: "what is it you're ultimately trying to do?"

Several things at once.

Some of the things are already working, under certain circumstances[1]:

• The head should match a given orientation (or maybe just a forward vector). If I lock the position and orientation of the pelvis bone, and don't cause the arms to flail around too wildly in their attempts to aim the gun, this works pretty well. Even if I don't, using the analytic-ish system (mentioned near the beginning of the thread) for computing joint torques for the two spine joints, it still manages to face the right direction most of the time.
• The arms should hold the gun in a 2-handed grip that a) aims the gun in the right direction, and b) looks reasonable. I've gotten passable(ish) results by locking the position and orientation of the "torso 2" bone (to which the "head", "torso 1", "l shoulder" and "r shoulder" bones attach), or locking the pos & ori of the pelvis. Anyway, the system I have for this seems to work fairly well. If I shake the aim direction around too violently it has trouble; there's definitely still room for improvement.
• The bones "torso 2", "torso 1", and "pelvis" should match whatever orientations I tell them to match. Again, this works as long as the arms aren't doing anything too crazy, and everything below those bones is "cheating" to avoid making it unnecessarily difficult to comply. Presumably I can eventually get them to do their thing legitimately (see below).

Then there's a broad category of "lower body stuff" where I'm less sure of what I want, but have thought of a few possibilities:

• When airborne and not using the jetpack/rocketboots:
• The pelvis (or maybe "torso 2" instead) should match a given orientation
• If a landing is imminent[2], the feet should match a given pos/ori by that time
• When on the ground normally, I haven't got such a clear idea of what I want, but I have some general ideas:
• The pelvis (or the upper body? or the CoM?) should match a given linear velocity
• The pelvis should match a given orientation, and for best appearances, "torso 1"'s orientation should try to stay about halfway between the orientation of "torso 2" and "pelvis"
• If the feet are grounded and supposed to stay grounded[2], they should stay grounded. Orientation-wise, I think that will usually mean keeping the local y (up) vector of the foot matching the normal vector of the surface, although I recognize that sometimes it may be necessary to stand on the toe, heel, or sides of the foot. Alternatively, I could work in terms of a force/torque with which they should push against the ground[2]
• If the feet are supposed to come off of the ground, or stay off of the ground, they should probably be moving toward a destination pos/ori[2]

Except for the "torso 1" and "torso 2" stuff (they don't really count as part of the "lower body" anyway), all of those "grounded" goals can be expressed as desired values for each of the "l foot", "r foot", and "pelvis" bones to match, specifiable as either a pos/ori, vel/rot[3], or force/torque.

Now imagine the numeral 2 written a hundred times, with square brackets around it, in superscript... because as the footnote indicates, I don't know how I'm going to come up with the "goal state" values just yet (though I've got some general ideas). What I've been doing with this GA/ANN stuff recently is trying to come up with a "black box" implementation that can handle any goal states I throw at it... if it can handle as diverse an input set as possible, the implementation of that higher-level stuff shouldn't matter. That said, I do need to make sure I'm not asking it to do something that isn't possible.

Edit: even higher level? I want him to be able to look around, aim, stand, walk, run, jump, and land, on varied terrain.

[1]See my video titled "Upper Body Stuff Working", wherein the Dood is floating in the air looking/aiming in various directions and "twerking" (or that's the label some of my friends gave it... bleh )

[2]Details of how the higher-level code to make such determinations will work are TBD

[3]In my code, my angular velocity variable is named "rot" for brevity (and consistency: all of the names "pos", "vel", "ori", and "rot" are the same length). So when you see "rot" in my posts, that's what it means.

Edited by Aken H Bosch

##### Share on other sites

Really? I had read that most problems are solved with only one hidden layer, but I didn't know of any mathematical law that said it was always possible?

It was mathematically proven in 1989.

To make it emulate a traditional 1-hidden-layer ANN, I just need to disallow nonzero coefficients mapping directly from inputs to outputs, or from memories to memories. That should be trivial to do.

I think you should go back and double check this. I strongly suspect that this is not the case for many reasons. I've never seen anyone use adjacency matrices for practical ANNs so I'm hesitant to put this out there but I would expect a correct ANN feedforward with adjacency matrices to be something more of the form:

output vector(Ox1) = sigmoid( layer2weights(OxH) * (layer1Weights(HxI) * inputs(Ix1)) )

where O is number of outputs, I number of Inputs, H number of hidden units and members of layer2Weights and layer1Weights are selected by the GA. The only parameters that should be changing within a simulation should be the inputs and outputs. If anything else is changing before evaluating a new GA individual or you're running your intermediate values through tanh, you're preventing the ANN from doing its job.
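In numpy, with illustrative sizes (the weights would come from the GA), that formula reads:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

I, H, O = 6, 10, 3                 # illustrative layer sizes
rng = np.random.default_rng(2)
layer1_weights = rng.standard_normal((H, I))   # H x I
layer2_weights = rng.standard_normal((O, H))   # O x H

def feedforward(inputs):
    # Literal rendering of the formula above: sigmoid only at the output.
    # (Many implementations also apply a nonlinearity to the hidden layer.)
    hidden = layer1_weights @ inputs           # H x 1
    return sigmoid(layer2_weights @ hidden)    # O x 1 output vector
```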

(edit) I'm going to say this again though..If you already know the positions and velocities you are looking for you may be better off just using inverse kinematics to develop the movement before trying to work it out through optimization algorithms.

Edited by Algorithmic Ecology

##### Share on other sites

To make it emulate a traditional 1-hidden-layer ANN, I just need to disallow nonzero coefficients mapping directly from inputs to outputs, or from memories to memories. That should be trivial to do.

I think you should go back and double check this. I strongly suspect that this is not the case for many reasons. I've never seen anyone use adjacency matrices for practical ANNs so I'm hesitant to put this out there but I would expect a correct ANN feedforward with adjacency matrices to be something more of the form:

output vector(Ox1) = sigmoid( layer2weights(OxH) * (layer1Weights(HxI) * inputs(Ix1)) )

where O is number of outputs, I number of Inputs, H number of hidden units and members of layer2Weights and layer1Weights are selected by the GA. The only parameters that should be changing within a simulation should be the inputs and outputs. If anything else is changing before evaluating a new GA individual or you're running your intermediate values through tanh, you're preventing the ANN from doing its job.

I'm certain. (Edit: certainty revoked due to not being sure whether you normally put the hidden-value weighted sums through sigmoid function or not)

I define that the rectangular blocks of the matrix mapping inputs to outputs, or memories to memories, must be zero; therefore, the only thing that can affect the new memory values is the inputs, and the only thing that can affect the outputs is the memories. Do two iterations on the same inputs, and it exactly emulates a traditional hidden-layer ANN.
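Here's a numerical check of that claim with toy sizes (hypothetical numpy sketch, assuming sigmoid on both layers):

```python
import numpy as np

# With the input->output and memory->memory blocks forced to zero, two
# iterations of (O, M') = sigmoid(C @ (I, M)) on held inputs give
# O = sigmoid(W_out @ sigmoid(W_hid @ I)): exactly a 1-hidden-layer net.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_in, n_mem, n_out = 5, 7, 3       # illustrative sizes
rng = np.random.default_rng(3)
W_hid = rng.standard_normal((n_mem, n_in))   # inputs -> memories block
W_out = rng.standard_normal((n_out, n_mem))  # memories -> outputs block

C = np.zeros((n_out + n_mem, n_in + n_mem))
C[:n_out, n_in:] = W_out           # memories -> outputs
C[n_out:, :n_in] = W_hid           # inputs -> memories

x = rng.standard_normal(n_in)
mem = np.zeros(n_mem)
for _ in range(2):                 # two iterations, same held inputs
    state = sigmoid(C @ np.concatenate([x, mem]))
    out, mem = state[:n_out], state[n_out:]

expected = sigmoid(W_out @ sigmoid(W_hid @ x))
```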

Oh, OHHHHH. Nononono. I left out some surrounding scope:

    For each generation:
        For each individual:
            For X trials (currently 50):
                Add a new Dood to the physics world
                For Y physics ticks (currently 45, = 3/4 of a second):
                    Simulate
                    Pack Dood state into I
                    For Z iterations (currently 2):
                        (O, M') = sigmoid((coeff matrix) x (I, M))
                    Use values of O as joint torques
                    Accumulate score (if the Dood does "perfectly" in a category, its score for this trial will add up to 1.0)
                Remove the Dood from the physics world
            Individual's score is the average of the X trials' scores
        Pick parents from which to spawn the next generation

Edit: wait, what? Why aren't you putting the hidden-layer weighted sums through the sigmoid function? Isn't that how it's normally done?

Inverse kinematics... as in, to come up with a complete description of a pose or movement (i.e. pos/ori/vel/rot of each bone, on a frame-by-frame basis) based on some properties I want it to satisfy (i.e. what I want the feet and pelvis to be doing)?

The problem I foresee with this is once again what I was telling Buckeye: conservation of linear and angular momentum. Yes, the foot/ground interaction can effectively create or destroy momentum, but it will only do so in the manner the constraint solver dictates. A pose or movement chosen arbitrarily has no guarantee of being achievable.

Edited by Aken H Bosch

##### Share on other sites
