State space control

12 comments, last by bzroom 14 years, 8 months ago
Another control systems question. To my understanding, the first step in designing a state space controller is modeling the system. This is a pretty natural step that we've all done in basic mechanics, and it seems like a good way to produce an open-loop simulation of the system: spring-mass systems oscillate forever, inverted pendulums fall over, etc. Completely uncontrolled. The part that is still escaping me is how to introduce the control value effectively. PID controllers are very intuitive: you add a little gain here and there until the system stabilizes. I'm really only capable of tuning those systems with fuzzy logic, though, and have not grasped the physical derivations. I was hoping that someone could help me connect that intuitiveness to state space control.

There's a very simple physical model explained here: http://en.wikipedia.org/wiki/State_space_(controls)#Moving_object_example The formula they provide, when integrated (assuming U(t) == 0), would oscillate forever. If the system had target values for y, y-dot, or y-double-dot, how do these get used in determining U(t)? Actually, I think I may have just figured it out. Is the idea to get U(t) alone on the left side of the equation? In my assumptions I was setting U(t) = 0 and watching the other values as simulation outputs. But to control the system, would I let y (or y-dot, or y-double-dot) equal some desired value, and then refactor the equation to find U(t)?

What I feel is lacking is the gain. Say you had an inverted pendulum, rigidly mounted to the ground. According to Wikipedia, the angular acceleration of theta from vertical is equal to G*sin(theta)/L. It would be easy to make a PID controller and choose gains which would create a torque to keep theta somewhat close to zero. How can this be modeled with state space? I never see any mention of gains when reading about state space, so my assumption is that the pendulum would just fall over, since I only understand state space as an open simulation. Or, for instance, it would be easy to compute a torque which counters the effects of gravity, to zero the theta acceleration. But what if I want to control its velocity or position? The torque will not just be equal to that of gravity; it will have to be "gained up" in order to achieve the desired state. It is this gaining that I'm not seeing in the examples.
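To make that concrete, here's a minimal Python sketch of exactly what I mean (all the numbers -- g, L, the time step, and the PD gains kp and kd -- are made up, and the torque is folded in as an angular acceleration for simplicity). With u = 0 the open-loop simulation falls over; a hand-tuned PD torque holds it near vertical. It's that hand tuning I'd like to replace with something principled:

import math

g, L, dt = 9.81, 1.0, 0.001   # assumed values for illustration
kp, kd = 30.0, 10.0           # hand-tuned PD gains (trial and error)

def simulate(controlled, steps=5000):
    theta, theta_dot = 0.1, 0.0        # start slightly off vertical
    for _ in range(steps):
        # u is a commanded angular acceleration (torque / (m L^2), folded in)
        u = (-kp * theta - kd * theta_dot) if controlled else 0.0
        theta_ddot = (g / L) * math.sin(theta) + u
        theta_dot += theta_ddot * dt   # explicit Euler integration
        theta     += theta_dot * dt
    return theta

print(simulate(False))   # far from zero: the pendulum fell over
print(simulate(True))    # near zero: the PD torque holds it upright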
In that example (referring to the Wikipedia article) the gains are the elements of the matrix K. You're setting the control input at any instant to be the state premultiplied by a gain matrix. In symbols, u = K x.

This assumes that you can measure the entire state, but it's still a very useful theoretical result even when you can't. The reason is that you can use an estimated state instead of the real state; look up 'Luenberger observer'.
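To make u = K x concrete, here's a quick numerical sketch in Python with numpy. The plant is a double integrator, and the entries of K are values I just made up for illustration, not derived from anything:

import numpy as np

# Double integrator: state x = (position, velocity), scalar force input.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
K = np.array([[-1.0, -2.0]])     # made-up gains; these entries are the "gains"

x = np.array([[1.0], [0.0]])     # start displaced from the origin
dt = 0.01
for _ in range(2000):
    u = K @ x                    # control law: state premultiplied by the gain matrix
    x = x + (A @ x + B @ u) * dt # Euler step of dx/dt = A x + B u
print(x.ravel())                 # both position and velocity driven toward zero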
We can make an even simpler example too, but I think the inverted pendulum is about as simple as it gets.

Example: Free floating body.

A(t) = U(t)/M

The acceleration of the object is equal to the force divided by the mass.

To obtain a desired A(t) you'd rewrite it as:
U(t) = A(t) * M

But say we wanted to control the velocity of the object. When incorporating velocity, it seems that the equation would look like:

A(t) = U(t)/M + V(t) * 0 + P(t) * 0

As you can see, position and velocity have no effect on A(t) or U(t), so how can they be used to control the system?
Emergent, I see that now, and I'm going to continue looking this over more closely. Thank you.
I think what I'm missing is that I've only been looking at a particular state layer: the one to which I expect to apply the input. But really the point of state space is that it takes all of the states' equations into account, so that the input can be determined from a different state than the one it is applied to. Such as controlling the velocity by adding a force.

So the gains are just properties extracted from the physical model, and they're just all cluster-fugged into a single linear expression. That's some intense shit right there. I'm going to try to implement this very simple problem I've created, first as an open-loop rigid body simulation solved as a single expression. Then it's just algebra, right?

I can do this! Haha. Eventually. I'm going back to school to finish my engineering degree next semester; I'm just trying to get a little head start.

It's gonna be tough. I've been out of school for a while, and while I never really had it that great, you definitely lose it if you don't use it.
I think you're beginning to get some of the concepts, but the language you're using is jumbled up; let me try to help...

The word state is philosophically very important. It means, "all the information I need to describe the current status of my system." Literally, if you know the state of your system, you know everything about what's going on inside it, at this instant -- and, if there are no control inputs, this is everything you need to know to simulate the system forward in time indefinitely.

For instance, in physics, if you know the position and velocity of a particle floating in space (in the absence of external forces), you know everything you need to in order to predict exactly where that particle is going to be at any time in the future. The vector (p, v), where p is the position and v is the velocity, would be the particle's state.
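You can see that claim in a few lines of Python: integrating forward from the state (p, v) alone reproduces the whole future trajectory (the numbers here are arbitrary):

p, v, dt = 0.0, 3.0, 0.01   # the state (p, v); values are arbitrary
for _ in range(100):        # no external forces, so v never changes
    p += v * dt
print(p)                    # ~3.0 after one second -- predictable from the state alone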

State space is where the state lives. Picture three dimensions. The state is a point in the state space. Again, picture a point in 3d. The state has different elements -- say, position and velocity -- exactly like a point in 3d has different components along the different coordinate axes.

Sometimes you will hear people talk about "a system with three states" or something like that. What they really mean is "a system with a three-dimensional state." I bring this up to emphasize that there is ONE state; it is a POINT in the state space. The various components of the state are just this point's coordinates.

There's a certain class of systems that are called "linear time-invariant" or 'LTI.' Without going into details, let me just say that this is the easiest class of systems to understand, the first that you'll study if you take a controls class. Many systems of interest are LTI systems, or can be approximated as such. For instance, a simple harmonic oscillator is an LTI system.

The state-space dynamics of an LTI system look like

dx/dt(t) = A x(t) + B u(t)

where x is the state -- say it is n-dimensional, so it's an n-dimensional column vector -- and A is an nxn matrix. For simplicity let's say it has a single scalar input u(t); then B is an nx1 matrix (column vector).

One point of view is that this is just a very compact way to write down a coupled system of n differential equations. That is correct. However, it helps to keep the geometric meaning of this in mind: x(t) is a point that moves around.

Let's say that you are putting no input to the system, i.e. u = 0. Then the dynamics reduce to

dx/dt(t) = A x(t) .

What does this mean? The function that maps a state 'x' to the vector 'A x' is a vector field defined over the entire state space. You can picture it that way, as a bunch of arrows overlaid on the state space. The state "wants" to move along this vector field.

However, you have a control input, which means that you can nudge the point in certain directions. The directions that you can nudge the point in are the columns of the B matrix. In our example the B matrix has one column, so there is only one direction that you can "nudge" the state in.

Here's an example.

Say you have a simple harmonic oscillator (spring mass). It has dynamics

dp/dt = v
dv/dt = -k p + u

where p is the displacement from equilibrium, v is the velocity, and k is the spring constant. The control input is the force u. We can write this in matrix form as

 d  [ p ]   [ 0    1 ] [ p ]   [ 0 ]
--- [   ] = [        ] [   ] + [   ] u
 dt [ v ]   [ -k   0 ] [ v ]   [ 1 ]


or letting x=(p,v) be a two-dimensional vector, A be the 2x2 matrix in the above equation, and B=(0,1) be a 2d vector, we can just write this as,

(d/dt) x = A x + B u

which is the standard form.
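If you want to see it run, here's the same system as a quick Python/numpy sketch (k, the time step, and the initial state are arbitrary values I picked). With u = 0 it just oscillates, matching the open-loop intuition from the first post:

import numpy as np

k = 4.0                           # spring constant (arbitrary)
A = np.array([[0.0, 1.0],
              [-k,  0.0]])
B = np.array([[0.0],
              [1.0]])

x = np.array([[1.0], [0.0]])      # x = (p, v), released from p = 1
dt, u = 0.001, 0.0                # open loop: no input
for _ in range(10000):
    x = x + (A @ x + B * u) * dt  # Euler step of dx/dt = A x + B u
print(x.ravel())                  # still oscillating (explicit Euler drifts slightly)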

I hope this was helpful.
Yes, thank you very very much. I'm quite familiar with the concept of state. It's all these other matrices that are tripping me up. Specifically the A and B matrices which you thankfully went into great detail to explain. I'll keep at it and I appreciate your help. I think it was really beneficial.

The time-variant matrices were also tripping me up a bit. I understand conceptually what they would do; I believe they would affect the "visibility" of states based on t, basically like a time-varying gain. I just can't think of an example yet of how that would be used, so I'm going to assume everything I do will be time-invariant, i.e. constant A and B matrices.

I have very little experience with constructing matrices outside the world of 3D graphics. I'm going to need to get some practice, and I think your example is basic enough that I should be able to understand it easily.

Thanks again.

My goal after understanding this control theory is to build a generic control systems framework. It may not need to use state space in this exact sense, but I think if I can understand state space control then I can work out the concepts necessary to make the system extensible, even if it's not necessarily straightforward to compute. For example, you said you set the input equal to the state premultiplied by the gain. That doesn't need to be a convenient matrix multiply; it could be done with any kind of integrator.

I'd like to include time-variant systems at that point. I assume anything that loses mass or comes into contact with things would be a time-variant problem. My ultimate goal is to control walking robots.
Quote: Original post by bzroom
For example, you said you set the input equal to the state premultiplied by the gain. That doesn't need to be a convenient matrix multiply; it could be done with any kind of integrator.


For that example, an integrator has state, so you'd actually want to augment the state of the system with that of the integrator. The state equation for the integrator is just

ds/dt = u2

where u2 is the input to the integrator and s is the value on the integrator. The overall state of the system has to include the state of the integrator. Here's how you'd handle that.

If the original system has state x (a vector, with state equation dx/dt = A x + B u1) and the integrator has state s (a scalar), then you'd look at the combined system:
 d  [ x ]   [ A  0 ] [ x ]   [ B  0 ] [ u1 ]
--- [   ] = [      ] [   ] + [      ] [    ]
 dt [ s ]   [ 0  0 ] [ s ]   [ 0  1 ] [ u2 ]

Note that the matrix containing 'A' and a bunch of zeros is a block matrix. The point is that designing, say, a PI controller is now reduced to the same problem as before: finding a feedback gain matrix K.

Also, walking is a really nonlinear problem and LTI theory won't solve it. But you'll want to know LTI state-space theory as a foundation, since nothing will make sense without it.
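Going back to the augmentation above: in code it's just block-matrix bookkeeping. Here's a numpy sketch reusing the spring-mass A and B from earlier (k is still an arbitrary value), with the stacked input (u1, u2):

import numpy as np

k = 4.0
A = np.array([[0.0, 1.0], [-k, 0.0]])    # plant from the earlier example
B = np.array([[0.0], [1.0]])
n = A.shape[0]

# Augmented state (x, s): the plant state plus the integrator state s.
A_aug = np.block([[A,                np.zeros((n, 1))],
                  [np.zeros((1, n)), np.zeros((1, 1))]])
B_aug = np.block([[B,                np.zeros((n, 1))],
                  [np.zeros((1, 1)), np.ones((1, 1))]])

# A PI-style design is now just a feedback gain over the augmented state:
# u = K_aug @ (x, s), with K_aug of shape (2, n+1).
print(A_aug)
print(B_aug)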
I'm still a bit fuzzy about the gain matrix. Most of the time, control systems are designed to meet a set of operating parameters, for example rise time and overshoot limits.

What it looks like to me, if you use the physical model to derive the gain matrix (not sure if that even makes sense), is that the system will be solved in one step, so that the rise time is one step and the overshoot is zero. It seems like you'd need to somehow scale the gains to achieve the desired operating parameters.

If the gains are not derived from the physical model, then I don't see how one would go about choosing them except via trial and error or fuzzy logic.

I'm just thinking of the simple free-floating mass, controlling its velocity by applying a force. It seems that if you solved the equations for U(t), you'd just come up with a value that corrects V completely during the next step, and that if you needed a slower rise you'd want to scale back the K matrix?
So the question is, where does the K matrix come from? How do you choose it?

Let's look at the stabilization problem. Your system is unstable and you want to use feedback to make it stable.

If your system is

dx/dt = A x + B u

and you feed back into the system

u = K x

then you get the closed-loop system

dx/dt = A x + B K x = (A + B K) x

which is stable iff (if and only if) the eigenvalues of the closed-loop system matrix A + B K are all in the left half-plane. If the system is controllable (a technical term), then you can choose a K matrix that will place the poles of A + B K wherever you want. This leads to a few design methodologies:

1. The simplest method is to just choose ahead of time where you want the closed-loop poles to be, and then solve for the K that puts them there. This is called pole placement.

2. Instead of choosing poles, you can choose that a certain cost function be minimized -- some weighted sum which trades off the energy you put into the system against the mean-square error. For the linear systems case, this would be a Linear Quadratic Regulator (aka the "LQR problem"); there's a sketch of this one after the list. This falls under the category of Optimal Control.

3. If you're not completely sure about the parameters defining your plant (and only know what range they might fall in), you can make sure that your system is stable no matter what the parameters are by choosing the gain matrix K that gives you the best worst-case performance. This is called Robust Control.

So you see there are a number of ways to choose K.
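Here's a sketch of option 2 in Python, using scipy's Riccati solver on the spring-mass system from before. The weights Q and R are design choices I picked arbitrarily, and note the sign flip: the textbook LQR law is u = -R^(-1) B' P x, while I've been writing u = K x in this thread.

import numpy as np
from scipy.linalg import solve_continuous_are

k = 4.0                                   # spring-mass plant from earlier
A = np.array([[0.0, 1.0], [-k, 0.0]])
B = np.array([[0.0], [1.0]])

Q = np.eye(2)                             # state-error weight (design choice)
R = np.array([[1.0]])                     # input-energy weight (design choice)

P = solve_continuous_are(A, B, Q, R)      # solve the algebraic Riccati equation
K = -np.linalg.inv(R) @ B.T @ P           # minus sign: this thread uses u = K x
print(K)
print(np.linalg.eigvals(A + B @ K))       # closed-loop poles, all in the left half-plane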

Maybe try the following example problem: "Use pole placement to stabilize the simple harmonic oscillator system given above. Give the closed-loop system two poles at '-p' on the real line."
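And if you want to check your hand-derived answer numerically afterwards, here's a small Python check (k and p are arbitrary values; the algebra in the comments is one way to get K, so try it yourself first):

import numpy as np

k, p = 4.0, 2.0                          # arbitrary values for the check
A = np.array([[0.0, 1.0], [-k, 0.0]])
B = np.array([[0.0], [1.0]])

# With u = K x and K = [k1 k2], the closed-loop matrix is
#   A + B K = [[0, 1], [k1 - k, k2]],
# with characteristic polynomial s^2 - k2 s + (k - k1).
# Matching (s + p)^2 = s^2 + 2 p s + p^2 gives k2 = -2p and k1 = k - p^2.
K = np.array([[k - p**2, -2.0 * p]])
print(np.linalg.eigvals(A + B @ K))      # expect a double pole at -p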

This topic is closed to new replies.
