
scippie

Member
  • Content Count

    83
  • Joined

  • Last visited

Community Reputation

119 Neutral

About scippie

  • Rank
    Member

Personal Information

  • Interests
    Audio
    Design
    Programming


  1. scippie

    Spaceship controls -> autopilot

    I am currently looking into a non-mathematical way to solve this problem, as it does not seem to be something that can be solved in a straightforward way.
  2. scippie

    Spaceship controls -> autopilot

    The thrust in my implementation is fine, it's controlling it on auto-pilot that is the problem.
  3. scippie

    Spaceship controls -> autopilot

    Thanks!
  4. scippie

    Spaceship controls -> autopilot

    Ok, you are right, I have really been expressing myself completely incorrectly. Of course, quaternions and matrices are the same for rotation and can both prevent gimbal lock. What I really meant to say is that working with yaw, pitch and roll against a constant reference is different and will cause gimbal lock. But I really do understand it, no worries 🙂
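    The claim above can be checked numerically. Below is an illustrative sketch (not code from this thread), assuming yaw, pitch and roll are each applied about fixed world axes: at pitch = 90° the yaw and roll rotations collapse onto the same effective axis, so only their sum matters. That is the gimbal lock being discussed.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

// Rotations about the fixed world axes (right-handed, angles in radians).
Vec3 yaw(Vec3 v, double a) {   // about world Y
    return { std::cos(a) * v.x + std::sin(a) * v.z,
             v.y,
            -std::sin(a) * v.x + std::cos(a) * v.z };
}
Vec3 pitch(Vec3 v, double a) { // about world X
    return { v.x,
             std::cos(a) * v.y - std::sin(a) * v.z,
             std::sin(a) * v.y + std::cos(a) * v.z };
}
Vec3 roll(Vec3 v, double a) {  // about world Z
    return { std::cos(a) * v.x - std::sin(a) * v.y,
             std::sin(a) * v.x + std::cos(a) * v.y,
             v.z };
}

// Apply yaw, then pitch, then roll, all against the constant world reference.
Vec3 euler_apply(Vec3 v, double yaw_a, double pitch_a, double roll_a) {
    return roll(pitch(yaw(v, yaw_a), pitch_a), roll_a);
}
```

    With pitch held at 90°, (yaw 30°, roll 10°) and (yaw 10°, roll 30°) move a vector to the same place: one rotational degree of freedom has been lost.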
  5. scippie

    Spaceship controls -> autopilot

    Cool! Yes please, share your code! 😉
  6. scippie

    Spaceship controls -> autopilot

    I was going to explain why I don't agree (because of the way the planes already change when you adjust one thruster, even before you have calculated the new rotation), but you are making me think now... let me think about this 🙂
  7. scippie

    Spaceship controls -> autopilot

    Well, that's exactly what I was doing, and that is exactly my problem: calculating the thrust so that all trajectories end up at (0, 0, 0) at the same time. By changing the pitch of the ship, the yaw is already no longer valid, as it will make the ship roll because of the normalized way quaternions work. Your XY/XZ plane, as you put it, constantly changes. Edit: and as you need to calculate all thrusting forces at one point in time, before you apply them to the calculations that actually change the ship's rotation, they are immediately badly chosen.

    This is not a completely different approach. I did it based on what you said above: looking at what my angles are now and where they need to be, and just 'virtually pressing keys' until I get there. But calculating exactly when and how much thrust to apply is what I am unable to do. The result is that the ship oversteers, starts applying counter forces, oversteers again in the other direction, applies counter forces again, and so on. It reaches the end point most of the time after oscillating back and forth several times, but sometimes the oversteering is so strong that it starts rotating in circles and never gets there. Steering too little will look silly; steering too much will make it go past the target. Calculating that exact number is what I am actually trying to ask about.

    The suggested PID controller does this by trial and error, which is fine, but there too, it comes down to fine-tuning constant values that might be good in most situations but can produce untested situations where the ship starts spinning and never finds its target. Because, in the end, it comes down to: 'well, we need to go there, let's just apply some thrust and see what happens', while a human being will 'feel' where to go and will thrust in a more aimed way from the start.
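    For what it's worth, the overshoot / counter-thrust cycle described here is exactly what the derivative term of a PD/PID controller is meant to suppress: it brakes against the current spin rate before the ship flies past the target. A minimal single-axis sketch with a variable time step follows; the gains and thruster strength are hypothetical tuning values, not numbers from this thread.

```cpp
#include <algorithm>
#include <cmath>

// Sketch of a PD controller on a single rotation axis. 'kp' and 'kd' are
// hypothetical tuning gains. The derivative term (-kd * angular_velocity)
// is what counters the overshoot: it starts braking while the ship is
// still spinning toward the target.
struct AxisController {
    double kp = 4.0; // proportional gain: how hard to steer toward the error
    double kd = 3.0; // derivative gain: how hard to brake against spin

    // error: target angle minus current angle (radians)
    // angular_velocity: current spin rate on this axis (radians/second)
    // Returns a thruster command clamped to [-1, 1].
    double update(double error, double angular_velocity) const {
        double command = kp * error - kd * angular_velocity;
        return std::max(-1.0, std::min(1.0, command));
    }
};

// Tiny simulation of one axis: thrust produces angular acceleration.
// Starts at rest at angle 0, aims at 'target', integrates with step 'dt'.
double simulate(double target, double dt, int steps) {
    AxisController ctrl;
    double angle = 0.0, velocity = 0.0;
    const double accel_per_thrust = 2.0; // hypothetical thruster strength
    for (int i = 0; i < steps; ++i) {
        double thrust = ctrl.update(target - angle, velocity);
        velocity += thrust * accel_per_thrust * dt;
        angle += velocity * dt;
    }
    return angle;
}
```

    With these (overdamped) gains the simulated axis settles on the target without sustained oscillation; choosing kp/kd per ship is still a tuning exercise, which is the trial-and-error aspect mentioned above.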
  8. scippie

    Spaceship controls -> autopilot

    What I mean is that quaternions don't work in the yaw/pitch/roll sense. I know you can construct a quaternion from them, but then you might as well be using matrices and have gimbal lock problems. What I mean is that if they did, I wouldn't be asking this question, as I would just make the yaw and the pitch move linearly to the target. But if you do this with quaternions, you automatically influence the other parts of the rotation: if you pitch, your yaw axis has already changed and no longer has a reference to the original 3D axes.

    I had already searched this forum (and others) extensively and had already found these discussions. The PID approach was new to me; I know how it works now and I might end up using it, but it is an inaccurate system which can easily lead to unforeseen/untested bugs where your ship starts spinning because of its 'just try and then correct' approach. Maybe I should have mentioned that in my OP. Edit: also, the PID uses a fixed time step between frames. I would prefer a solution that works with fluctuating time steps. I really think there should be a more mathematical, more robust solution.

    Thank you, but it is so much easier in 2D. You just take a rotation and thrust towards it (and I have done that before). The problem is that with quaternions, you no longer change just one rotation; you automatically change them all, so the difference between your current state and the target changes in more than one angle. In 2D, you don't have that problem.
  9. I am creating a simple spaceship sim with 6DOF flying. For the ship's orientation, I use a quaternion, and with the keyboard I can change yaw, pitch and roll separately, which results in three quaternions that are multiplied into the current rotation. The rotation force is limited to a value in [-1.0 ... 1.0] and is built up gradually, not only to simulate joystick controls on the keyboard, but also because it feels realistic that the thrusters must build up their power. I also have a throttle in [0.0 ... 1.0] which makes the ship move in the direction of the rotation.

    Now I am trying to create an autopilot, and the first thing I need is auto-aim. When I am looking at a planet in front of me and I want the autopilot to move to the planet behind me, I need it to handle the rotation and the thrusters to get there. Simply doing quaternion interpolation (slerp) is not acceptable, as it does not allow me to limit the thrust force (in other words, I would not know how hard my thrusters would be burning). It would also not take the gradual build-up of thruster power into account.

    I have tried just coding it as it feels: taking the polar direction of where I am looking and the polar direction of where I want to go, and adjusting the yaw and pitch thrusters to rotate there. But as you know, rotation with quaternions isn't as linear as I would hope, so the result is constantly being corrected, which not only looks unnatural but also makes the ship fly spiralling, ever smaller circles around the target, because of the thruster forces that I don't really know how to calculate accurately for the same reason. Can anyone give me some tips and/or point me to a solution?

    The math currently is like this: yaw_thruster, pitch_thruster and roll_thruster are values in [-1 ... 1] and are incremented with a factor as a function of passed time. rot_x_axis, rot_y_axis and rot_z_axis are axis-angle quaternions with the thruster values * passed time as angle. They are all multiplied with the ship rotation every frame, and I normalize the ship rotation from time to time to handle precision errors. I calculate polar direction vectors by computing the (normalized) z-axis from the rotation and then: theta = atan2(z.x, -z.z), phi = acos(z.y). These things all seem to work perfectly.
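    The direction math described above can be sketched in code (an illustrative reconstruction under assumed conventions, not the actual implementation): extract the local z-axis as the third column of the quaternion's rotation matrix, then apply the same atan2/acos formulas. The struct and function names are made up for this sketch, and a unit quaternion is assumed.

```cpp
#include <cassert>
#include <cmath>

struct Quat { double w, x, y, z; };

// Third column of the rotation matrix of a unit quaternion: the ship's
// local Z axis expressed in world space (column-vector convention assumed).
void local_z_axis(const Quat& q, double& zx, double& zy, double& zz) {
    zx = 2.0 * (q.x * q.z + q.w * q.y);
    zy = 2.0 * (q.y * q.z - q.w * q.x);
    zz = 1.0 - 2.0 * (q.x * q.x + q.y * q.y);
}

// Polar direction angles as in the post: theta from atan2, phi from acos.
void polar_direction(const Quat& q, double& theta, double& phi) {
    double zx, zy, zz;
    local_z_axis(q, zx, zy, zz);
    theta = std::atan2(zx, -zz);
    phi   = std::acos(zy);
}
```

    For example, the identity quaternion (z-axis pointing down +Z) gives theta = pi and phi = pi/2 under these formulas, and a 90° rotation about Y gives theta = pi/2.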
  10. Yes, I know. But when everything fails, you begin to doubt what you know. Actually, my engine only needs the depth_stencil buffer in framebuffers, not in the main backbuffer, so I will disable it there.
  11. I found out what the problem was: I made the false assumption that attaching a DEPTH24_STENCIL8 renderbuffer to GL_DEPTH_ATTACHMENT of a framebuffer would be enough to give it a stencil buffer as well. But when I queried GL_STENCIL_BITS, I found that I had 0 stencil bits. So you also need to attach the same buffer as GL_STENCIL_ATTACHMENT. Problem solved!
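    A sketch of that fix, assuming a framebuffer and a DEPTH24_STENCIL8 renderbuffer created elsewhere (the names m_fbo and m_depth_stencil are illustrative, not from the thread):

```cpp
// Hypothetical names; assumes a GL context and existing FBO/renderbuffer.
glBindFramebuffer(GL_FRAMEBUFFER, m_fbo);

// Attaching only to GL_DEPTH_ATTACHMENT leaves 0 stencil bits, as observed.
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, m_depth_stencil);
// The same renderbuffer must also be attached as the stencil attachment:
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT,
                          GL_RENDERBUFFER, m_depth_stencil);

// On GL 3.0 / ARB_framebuffer_object, one call covers both:
// glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
//                           GL_RENDERBUFFER, m_depth_stencil);
```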
  12. I'd love to post my solution to this problem, but I was still unable to make it work. Sorry for the bump, but is there really no one who can help me with this?
  13. I am trying to get shadow volumes to work. My degenerate mesh is correct; I have rendered it to a framebuffer and saw that it was good. But when I try to get it into a stencil buffer, the stencil buffer stays blank or contains junk.

    First I render my normal scene, making sure the stencil buffer is cleared and the Z-buffer is filled with correct values. Then I enable the stencil test (I use the double-sided approach), disable the color mask, and draw the degenerate meshes. Finally I re-enable the color mask and draw the shadow quad. It doesn't work: on my Windows machine there is no shadow, and on my Mac there is junk on the screen (as if it has been raining shadow drops on my camera).

    To make sure that the problem is with the stencil buffer, I rendered a fully white quad to a framebuffer with the stencil test enabled, and logged that framebuffer. On Windows it is completely white; on the Mac it has junk. So I am quite sure I am doing something wrong with the stenciling. It must be some kind of initialisation issue, but I follow every rule in the blue book and several tutorials I found. Here's the code that does this:

    glClearColor(0, 0, 0, 0);
    glClearDepth(1.0f);
    glClearStencil(0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_CULL_FACE);

    ... render scene ...

    glEnable(GL_DEPTH_TEST);
    glDisable(GL_CULL_FACE);
    glDepthMask(GL_FALSE);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glEnable(GL_STENCIL_TEST);
    glStencilOpSeparate(GL_BACK, GL_KEEP, GL_DECR, GL_KEEP);
    glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_INCR, GL_KEEP);
    glStencilMask(0xff);
    glStencilFunc(GL_ALWAYS, 0x01, 0xff);

    ... render degenerate shadow mesh ...

    glBindFramebuffer(GL_FRAMEBUFFER, m_fb_test);
    glDepthMask(GL_TRUE);
    glDisable(GL_DEPTH_TEST);
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    glStencilFunc(GL_LEQUAL, 0x01, 0xff);
    shader = m_shader_flatfill;
    glUseProgram(shader);
    GLuint aVertexPosition = glGetAttribLocation(shader, "aVertexPosition");
    glEnableVertexAttribArray(aVertexPosition);
    glBindBuffer(GL_ARRAY_BUFFER, m_quad);
    // the 4 * is correct: the vertex buffer also has texture coordinates
    // that are not used by the shader
    glVertexAttribPointer(aVertexPosition, 2, GL_FLOAT, false, 4 * sizeof(float), (GLvoid*)(0));
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    glDisableVertexAttribArray(aVertexPosition);
    glDisable(GL_STENCIL_TEST);

    => m_fb_test contains white or junk.

    I did some more testing later in the evening, using the screen as render target instead of the test framebuffer, and it looks like my stencil buffer is ANIMATING with junk? But I clear it every frame, so I don't understand. Could this have to do with how I initialize my OpenGL context? Or am I just doing something wrong with my stencil buffer?

    Initialization of OpenGL:

    static PIXELFORMATDESCRIPTOR pfd =
    {
        sizeof(PIXELFORMATDESCRIPTOR), // Size of this pixel format descriptor
        1,                             // Version number
        PFD_DRAW_TO_WINDOW |           // Format must support window
        PFD_SUPPORT_OPENGL |           // Format must support OpenGL
        PFD_DOUBLEBUFFER,              // Must support double buffering
        PFD_TYPE_RGBA,                 // Request an RGBA format
        32,                            // Select our color depth
        0, 0, 0, 0, 0, 0,              // Color bits ignored
        0,                             // No alpha buffer
        0,                             // Shift bit ignored
        0,                             // No accumulation buffer
        0, 0, 0, 0,                    // Accumulation bits ignored
        32,                            // 32-bit Z-buffer (depth buffer)
        1,                             // Stencil buffer
        0,                             // No auxiliary buffer
        PFD_MAIN_PLANE,                // Main drawing layer
        0,                             // Reserved
        0, 0, 0                        // Layer masks ignored
    };

    I have tried putting 24 in the Z-buffer depth and 8 in the stencil buffer bits; no help at all. But it shouldn't matter, as I render to a different framebuffer, and that framebuffer has a DEPTH24_STENCIL8 attached. Anyone?
  14. I am using a geometry shader to create vertex data. The shader reads one point primitive and outputs 1-10 point primitives. I did this with GLSL 1.20 to keep it compatible with Mac OS X. I used some extensions, and hurrah: as long as the output data consists of no more than 2 varyings, everything works fine. When I go to 3 varyings, it still works perfectly on the Mac, but it no longer works on Windows. It can't be a limit of my video adapter; it's a GeForce 580, and the adapter in my Mac is lower end. To prove it, I queried the adapter:

    glGetIntegerv(GL_MAX_GEOMETRY_OUTPUT_VERTICES_EXT, &temp);          // => temp = 1024
    glGetIntegerv(GL_MAX_GEOMETRY_TOTAL_OUTPUT_COMPONENTS_EXT, &temp); // => temp = 1024
    glGetIntegerv(GL_MAX_GEOMETRY_VARYING_COMPONENTS_EXT, &temp);      // => temp = 127

    As long as my geometry shader outputs 2 varyings into my vertex buffer:

    static const char *varying_names[] = { "gl_Position", "gl_FrontColor" };
    glE(glTransformFeedbackVaryingsEXT(id, 2, varying_names, GL_INTERLEAVED_ATTRIBS));

    everything works fine. When I go to three:

    static const char *varying_names[] = { "gl_Position", "gl_FrontColor", "gl_BackColor" };
    glE(glTransformFeedbackVaryingsEXT(id, 3, varying_names, GL_INTERLEAVED_ATTRIBS));

    it works fine on the Mac but no longer on Windows. The shaders compile and link without warnings or errors, but the geometry shader doesn't seem to emit data anymore: the GL_TRANSFORM_FEEDBACK_PRIMITIVES_WRITTEN query returns 0.

    I then rewrote things in more modern GLSL, using in/out varyings (they don't work on the Mac but they do on Windows; or do I just need to enable another extension?), where I can use my own names instead of the predefined OpenGL variables. Again it works perfectly with up to two components (I'm talking Windows only from now on), but with three, the geometry shader still seems to emit primitives, yet now the values seem to be corrupt.

    Does anyone have experience with this? Is it just some shader parameter that needs to be set? Here is the test code I created on Windows to retry from scratch.

    Vertex shader:

    #version 120
    attribute vec3 aVertexPosition;
    attribute vec3 aVertexDirection;
    varying out vec3 vInPosition;
    varying out vec3 vInDirection;
    void main(void)
    {
        gl_Position = vec4(aVertexPosition, 1); // not sure if this is necessary
        vInPosition = aVertexPosition;
        vInDirection = aVertexDirection;
    }

    Geometry shader:

    #version 120
    #extension GL_EXT_geometry_shader4 : enable
    #ifdef GL_ES
    precision highp float;
    #endif
    varying in vec3 vInPosition[1];
    varying in vec3 vInDirection[1];
    varying out vec3 vOutPosition;
    varying out vec3 vOutDirection;
    void main(void)
    {
        vOutPosition = vInPosition[0] + vec3(0.0001, 0, 0);
        vOutDirection = vInDirection[0];
        gl_Position = vec4(vOutPosition, 1); // not sure if this is necessary
        EmitVertex();
        EndPrimitive();
    }

    So if I just add:

    vs: attribute vec3 aVertexColor;
    vs: varying out vec3 vInColor;
    vs: vInColor = aVertexColor;
    gs: varying in vec3 vInColor[1];
    gs: varying out vec3 vOutColor;
    gs: vOutColor = vInColor[0];

    and adapt the C++ code accordingly (using 3 attributes instead of 2, setting 3 attrib-pointers, increasing the vertex size in every attrib-pointer, and setting the feedback varyings right), it no longer works. It seems to emit data, but the data is incorrect. By the way, I have tried changing every attribute and in/out varying to vec4 (and of course changed my vertex-buffer attrib-pointers accordingly); it didn't help. What am I forgetting?

    The only thing I can imagine is that the output varyings of the vertex shader are not mapped to the input varyings of the geometry shader in the same order. Do I need to specify that? Or is using the same name enough? Thanks, Dirk.
  15. Now you've made me feel stupid :-) Of course that couldn't work during the compilation/linking phase; where did I get that stupid idea? I'm not that dumb though :-) Anyway, I think I must have mixed some things up, as now I can make it work with both GLee and GLEW. I didn't need to add your #ifdefs, although I link it right into my project just like you do. Thanks for your help.