Quadcopter simulation and PIDs

I'm trying to write a simple quadcopter simulation. It doesn't need to be 100% physically accurate and I don't intend to write an actual drone controller; it's just supposed to "look" right in the simulation, and I'm trying to get a better understanding of PIDs. As such, I don't need to worry about sensors such as an accelerometer or gyroscope; I get all of the information from the simulation directly.

I'm new to quadcopter physics and PIDs, so there are a few things I'm a bit unclear on. I'm not asking for concrete solutions; I'd just like to know if my ideas go in the right direction and/or if I'm missing anything.


1) Let's say the quadcopter has the world coordinates Q(0, -50, 0) (the y-axis being the up-axis) and I want it to fly to O(0, 0, 0). The velocity I need is O - Q, so I need to accelerate the drone until I reach that velocity (correcting it every simulation tick). To do that, I need a PID with the setpoint being the desired velocity O - Q and the process variable being the current velocity of the drone (both on the y-axis), and 1/4th of the PID output is then applied as force to each rotor (let's ignore the RPM of the rotors and assume I can just apply force directly). Is that correct?


2) The drone is at the world coordinates Q(0, 0, 0) and should fly to P(100, 0, 100). It also has the orientation (p(itch)=0, y(aw)=0, r(oll)=0). I want the drone to fly to point P without changing its yaw orientation. How would I calculate the target pitch and roll angles I need? I can work with Euler angles and quaternions (not sure if I need to worry about gimbal lock in this case); I just need to know which formulas I need. Of course I also want to make sure the drone doesn't overturn itself.


3) I want to change the drone's orientation from (p=0, y=0, r=0) to (p=20, y=0, r=0) (with positive pitch pointing downwards). To do so, I'd have to increase the force applied by the front rotors and/or decrease the force of the back rotors. I can use a PID to calculate the angular velocity I need for the drone (similar to 1)), but how do I get from there to the required forces on each rotor?


4) Am I correct in assuming that I need 6 PIDs in total: 3 for the angular axes and 3 for the linear axes?

1 hour ago, Silverlan said:


The velocity I need is O - Q

That's the velocity needed to travel from Q to O in one time unit.  Which...is probably not what you want? But yes, if you want to move at a certain velocity then your setpoint would be the target velocity and your process variable would be the current velocity.  But then you have to figure out how to set the target velocity over the whole path, and it's not so great for maintaining position.  I would more likely have the setpoint be a target location (and orientation) and the process variable be the current location.  Then you can plan a path and move the target along it at a comfortable cruising speed and the PIDs should make it follow fairly nicely.

Also, the "apply 1/4 force to each rotor" isn't exactly correct.  Maybe you were just glossing over the details?  To accelerate, you need to tip the quadcopter in the direction you want to accelerate and then apply enough force to the rotors to maintain height.  So the horizontal acceleration you get is based on the orientation as much as the rotor lift.


2) I would think you would use the cross product of the yaw (up) axis and the direction that you want to go to get an axis of rotation.  Then build an axis-angle rotation (I want to tilt this much on this axis) and convert it to yaw/pitch/roll so you can apply it to the rotors easily (left/right pairs and forward/back pairs).  Here I would definitely feed the orientations to the PID rather than the angular velocities. You can probably get away without worrying about gimbal lock for a while, since you'll only be making small orientation changes. But if something bad happens (you crash into something, or whatever) then you might run into trouble.
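A minimal sketch of that idea in Lua, assuming a hypothetical Vector type with Cross and Normalize methods:

-- 'up' is the drone's up axis, 'dir' is the normalized direction we want to travel in,
-- 'amount' (0 to 1) is how hard we want to accelerate.
function ComputeTilt(up,dir,amount,maxTiltDeg)
  local axis = up:Cross(dir) -- rotation axis, perpendicular to both vectors
  axis:Normalize()
  local angle = amount * maxTiltDeg -- cap the tilt so the drone can't overturn itself
  return axis,angle -- axis-angle; convert to pitch/roll to drive the rotor pairs
end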


3) I would do this empirically: that's kind of what the PID is for.  Feed in the target orientation and the current orientation, and then tune the PID until the output values give you nice behavior.  IIRC Wikipedia has a procedure for manual tuning that works pretty well.
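(Roughly: set the I and D gains to zero, raise P until the system oscillates steadily, then back P off to about half that value; next raise I until any steady-state error disappears, and finally raise D to damp the remaining overshoot.)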


4) Yes, the linear axes are independent, so one PID for each.  There might be some better way to do the angular one?  But one per axis should work well enough to get you up and running...

Share this post

Link to post
Share on other sites

Controlling the position of the drone is a very high-level task, so you're coming at this from the top down. I would recommend going bottom-up instead.

In order of simplicity, I'd make a PID controller for:

  1. Rate / acro mode: This is what "drone racers" fly with. Stabilization is based on a gyroscope (or angular velocity from your physics system). The PID controller is trying to reduce the error between a requested angular velocity (from the pilot's sticks) and the measured angular velocity. When you let go of the sticks, the drone should remain at the current orientation (i.e. it should make it appear that angular momentum doesn't exist).
  2. Angle / level mode: This is what most "toy drones" fly with. Stabilization is based on an accelerometer (or orientation from your physics system). The PID controller is trying to reduce the error between the requested orientation (from the pilot's sticks) and the measured orientation. When you let go of the sticks, the drone should return to 0º pitch/roll. Yaw is actually processed the same way as in #1.
  3. Hover -- this can be added on top of either of the above. A second layer of stabilization is based on an altimeter (or Y position from the physics system). An additional PID controller is trying to maintain the requested altitude. At middle throttle this PID controller is in full effect; at negative/positive throttle stick positions it is attenuated to allow the drone to lose/gain altitude.
  4. Positional movement from waypoint to waypoint.

Unless you already know how to fly an acrobatic drone and are interested in it, you can probably skip #1 and just do #2 instead (they are mutually exclusive controllers anyway). #3 is a secondary PID controller that is layered over the top of #2.

For doing autonomous flying (move from position to position), you would then add another PID controller on top (#4) which controls the virtual "pilot's sticks", which are fed into the PID controllers from step #2 and step #3. This layer probably doesn't even need to be a PID controller -- you can just get the direction to the target and directly feed that value to the "sticks" without any filtering at all.

So that gives:
Layer #2 -- converts the throttle/yaw and pitch/roll stick commands into motor outputs. Throttle inputs are not stabilized, but pitch/roll are converted into a target absolute orientation, and yaw is converted into a target angular velocity.
Layer #3 -- generates throttle stick movements based on a requested altitude.
Layer #4 -- generates yaw/pitch/roll stick movements based on a requested position/speed.

You should be able to disable layer 3/4 and fly it manually. 
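To make the layering concrete, here's a rough sketch of one simulation tick in Lua. Every name here (the PID objects, DirectionToSticks, GetYawVelocity, SetMotorOutputs, MixMotors) is a hypothetical placeholder, not an existing API; MixMotors is sketched under 3) below:

function Drone:Update(dt)
  -- Layer #4: waypoint -> virtual stick positions (no PID required; just aim at the target)
  local toTarget = self.targetPos - self:GetPos()
  local sticks = self:DirectionToSticks(toTarget) -- maps direction/speed to stick values in [-1,1]

  -- Layer #3: altitude hold -> throttle stick
  sticks.throttle = self.altitudePID:Calculate(self:GetPos().y,self.targetPos.y,dt)

  -- Layer #2: sticks -> target orientation -> per-axis corrections
  local pitchOut = self.pitchPID:Calculate(self:GetPitch(),sticks.pitch * 30,dt) -- full stick = 30 degrees
  local rollOut = self.rollPID:Calculate(self:GetRoll(),sticks.roll * 30,dt)
  local yawOut = self.yawPID:Calculate(self:GetYawVelocity(),sticks.yaw * self.maxYawRate,dt) -- yaw is stabilized on angular velocity

  self:SetMotorOutputs(MixMotors(sticks.throttle,pitchOut,rollOut,yawOut))
end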

So for your questions --
2) Don't. The waypoint (#4) layer just needs to know what direction it wants to fly in and how fast. It then pushes the pitch/roll stick in that direction with a magnitude proportional to the speed that it wants. The angle stabilization layer (#2) then converts that stick position into a desired pitch/roll angle, and its PID controller adjusts the motor speeds until the drone's actual pitch/roll converges to the desired one. The conversion from stick position to angle can be as simple as:

float StickToAngle( float stick )
{
    // stick is from -1 to +1
    return stick * 30; // at full stick movement, the drone will rotate 30 degrees
}

When the drone rotates 30º, it will start to move sideways and also begin to lose altitude. The hovering layer's PID controller (#3) will then kick in due to the change in altitude, and will increase the throttle stick in order to increase thrust and maintain altitude. Layer #2 will read the throttle/yaw stick at the same time as it reads the pitch/roll stick, and adjust the motor speeds accordingly.

3) Make a map of which axis requests affect each motor, e.g. a pitch request (nose up, please) will have +1 for the front two motors and -1 for the back two motors; a roll request (right wing down) will have +1 for the left two motors and -1 for the right two motors.

Make a sum for each motor and initialize it to zero. For each angular velocity output, add it to each motor based on the map.
e.g. if the request is pitch=0.5, roll=0.3:
front-left = 0.5 + 0.3 = 0.8
front-right = 0.5 - 0.3 = 0.2
back-left = -0.5 + 0.3 = -0.2
back-right = -0.5 - 0.3 = -0.8
If any of these numbers go above 1.0 (100%), then you can divide all of them by the largest value to avoid loss of control during harsh manoeuvres. You make sure that the motor responses are not too high or too low by tweaking the P value in your Layer #2 PID.
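The same mixing step as a minimal Lua sketch (the maps and MixMotors itself are illustrative; the yaw row in particular depends on which way each rotor spins):

-- Motor order: front-left, front-right, back-left, back-right.
local PITCH_MAP = { 1, 1, -1, -1 } -- nose up: speed up the front motors, slow the back
local ROLL_MAP = { 1, -1, 1, -1 } -- right wing down: speed up the left motors, slow the right
local YAW_MAP = { 1, -1, -1, 1 } -- depends on rotor spin directions; this layout is an assumption

function MixMotors(throttle,pitch,roll,yaw)
  local out = {}
  local largest = 1.0
  for i = 1,4 do
    out[i] = throttle + pitch * PITCH_MAP[i] + roll * ROLL_MAP[i] + yaw * YAW_MAP[i]
    largest = math.max(largest,math.abs(out[i]))
  end
  for i = 1,4 do
    out[i] = out[i] / largest -- only rescales when some output exceeded 100%, preserving the ratios
  end
  return out -- per-motor outputs, now guaranteed to be within [-1,1]
end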

4) Layer #2 needs 3 PIDs, Layer #3 needs 1 PID, Layer #4 needs 4 PIDs.


Thanks, that helps quite a bit! Still having a bit of trouble, however.

For testing, I disabled angular movement for the drone, as well as movement on the x- and z-axes (so it can only move up and down), and tried to implement #3 (keeping a specific altitude). However, no matter what I set the PID parameters to, I can't get it to change the rotor speed quickly enough, so it ends up overshooting the target massively:

(The target altitude is 0, which is more or less central between the green block and the ground floor.)

Here's the test code I've used:

local pidAltitude = math.PIDController(100,20,5) -- Parameters: proportional, integral, derivative
function PhysDrone:Simulate(dt)
  local pos = self:GetPos() -- Current position
  local posTgt = Vector(0,0,0) -- Target position; x and z are ignored for now
  local v = pidAltitude:Calculate(pos.y,posTgt.y,dt) -- Parameters: processFeedback, setpoint, delta time
  for rotorId,entRotor in ipairs(self.m_rotors) do -- For each of the 4 rotors
    entRotor:ApplyTorque(Vector(0,v,0)) -- The PID output is applied as torque to the rotor

    local up = entRotor:GetUp() -- Normalized vector pointing upwards from the rotor (always the same direction as the drone's up-vector)
    local liftForce = entRotor:GetLiftForce() -- How much lift force this rotor applies to the drone (depends on its current revolutions per second)
    local posRotor = self:WorldToLocal(entRotor:GetPos()) -- Where the lift force originates from, relative to the drone
    self:ApplyForce(up * liftForce * dt,posRotor) -- Apply the rotor's lift at its position on the drone
  end
end

I've tried following the tuning instructions for PIDs on Wikipedia, but haven't had any luck; the result is always almost exactly what you see in the video. There are two different forces involved, which I suppose complicates things a little, but I'm not sure what to do here.

Edited by Silverlan


What are the two forces involved?

A PID that overshoots usually can be reined in by tweaking the D term. You may need a smaller P term too.

13 hours ago, alvaro said:

What are the two forces involved?

The torque applied to the rotors, as well as the lift force applied to the drone, as shown in the code snippet.

14 hours ago, alvaro said:

A PID that overshoots usually can be reined in by tweaking the D term.

The P-term has to be that high; otherwise the rotors take too long to reach the required RPS.


It gets increasingly unstable with higher values though.

