sjaakiejj

Members
  • Content count

    157
Community Reputation

130 Neutral

About sjaakiejj

  • Rank
    Member
  1. [quote name='Ashaman73' timestamp='1333444971' post='4927825'] [quote name='sjaakiejj' timestamp='1333314094' post='4927275'] dGeomTriMeshDataBuildSingle(dataID, vertices, sizeof(float)*3, vCount, indices, 6, sizeof(dTriIndex)*3); [/quote] From the doc over [url="http://192.121.234.229/manualer/programering/Tao-doc/Tao.Ode/Tao.Ode.Ode.dGeomTriMeshDataBuildSingle.html"]here[/url]: [quote] Applies to all the dGeomTriMeshDataBuild single and double versions. (From [url="http://ode.org/ode-latest-userguide.html#sec_10_7_6"]http://ode.org/ode-l...html#sec_10_7_6[/url]) Used for filling a dTriMeshData object with data. [u][b]No data is copied here[/b][/u], so the pointers passed into this function must remain valid. [/quote] So, you fill your data with temporary data which will be gone once you leave the createTriMeshBB method. Here's a fast hack, please don't do this at home: [CODE] static float vertices[vCount * 3] = { 0,0,0, 20*30,0,0, 20*30,0,20*30, 0,0,20*30 }; static dTriIndex indices[6] = { 0,1,2, 0,2,3 }; [/CODE] When this works, try to use some proper memory management to handle your mesh data. [/quote] How could I have missed that line? Besides, it's pretty logical... Thanks a lot, I'll go try it immediately [img]http://public.gamedev.net//public/style_emoticons/default/smile.png[/img] Edit: It works. I had an additional issue afterwards, which was that my object was just falling through the plane, but it turned out the plane was upside down.
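For "proper memory management" here, the key point is that ODE keeps raw pointers into your arrays, so the arrays must outlive the geom built from them. A minimal sketch of one way to do that (plain C++, no ODE calls; `TerrainMesh` and its member names are my own invention, not part of ODE):

```cpp
#include <vector>
#include <cassert>

// Hypothetical owner for tri-mesh data. dGeomTriMeshDataBuildSingle does
// not copy, so an object like this must outlive the dTriMeshDataID built
// from it -- e.g. keep it as a member of PhysicsManager, not a local.
struct TerrainMesh {
    std::vector<float> vertices;   // x,y,z per vertex
    std::vector<int>   indices;    // 3 indices per triangle

    TerrainMesh() {
        vertices = { 0,0,0,  600,0,0,  600,0,600,  0,0,600 }; // 20*30 == 600
        indices  = { 0,1,2,  0,2,3 };
    }
    // Pointers handed to ODE stay valid as long as this object exists
    // and the vectors are never resized afterwards.
    const float* vertexData() const { return vertices.data(); }
    const int*   indexData()  const { return indices.data(); }
    int vertexCount() const { return static_cast<int>(vertices.size()) / 3; }
    int indexCount()  const { return static_cast<int>(indices.size()); }
};
```

With something like this, createTriMeshBB would pass `mesh.vertexData()` and `mesh.indexData()` instead of stack-local arrays.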
  2. A* considering edges?

    You'd have to define the relationship between your grid nodes differently. A grid, to A*, is just a graph, and any connection between two nodes is automatically treated as navigable. You'd either have to remove those connections (for instance, at the point where you gather a node's adjacent nodes, check if there's a wall obstructing the path from the current node to the one you're looking at), or change your approach from a grid to something like waypoints or a navigation mesh. I think The Sims uses NavMeshes, but I'm not 100% sure.
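To make the first option concrete, here's a minimal sketch (plain C++; `Grid`, `blockedEdge` and the rest are my own invented names, not from any engine) of filtering a grid node's neighbours against walls before handing them to A*:

```cpp
#include <algorithm>
#include <set>
#include <utility>
#include <vector>
#include <cassert>

// A tiny grid where walls live on the edges between cells, not in cells.
struct Grid {
    int w, h;
    // Blocked edges, stored as ordered pairs of cell indices.
    std::set<std::pair<int,int>> walls;

    int idx(int x, int y) const { return y * w + x; }

    void addWall(int a, int b) {
        walls.insert({std::min(a, b), std::max(a, b)});
    }
    bool blockedEdge(int a, int b) const {
        return walls.count({std::min(a, b), std::max(a, b)}) > 0;
    }

    // The neighbours A* is allowed to expand: adjacent cells whose
    // connecting edge is not obstructed by a wall.
    std::vector<int> neighbours(int x, int y) const {
        const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
        std::vector<int> out;
        for (int i = 0; i < 4; ++i) {
            int nx = x + dx[i], ny = y + dy[i];
            if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
            if (blockedEdge(idx(x, y), idx(nx, ny))) continue; // wall in the way
            out.push_back(idx(nx, ny));
        }
        return out;
    }
};
```

The rest of A* stays untouched; only the neighbour-generation step changes, which is why this approach is less invasive than switching to waypoints or a navmesh.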
  3. Hey guys, I've been scratching my head at this for a while now, and Google hasn't been my friend for this particular issue. This is my first time using ODE after somebody recommended it to me, and it seems to be a fairly nice and straightforward library to use. However, I'm having some trouble with the use of TriMeshes. Since I want to have terrain collision detection, I decided to use ODE's built-in TriMesh to represent the terrain. Of course, I start off with a simple 2-triangle square representing my "Terrain". I do that with the following code: [code] void PhysicsManager::createTriMeshBB(Model* model) { const int vCount = 4; float vertices[vCount * 3] = { 0,0,0, 20*30,0,0, 20*30,0,20*30, 0,0,20*30 }; dTriIndex indices[6] = { 0,1,2, 0,2,3 }; dTriMeshDataID dataID; dGeomID geomID; dataID = dGeomTriMeshDataCreate(); dGeomTriMeshDataBuildSingle(dataID, vertices, sizeof(float)*3, vCount, indices, 6, sizeof(dTriIndex)*3); //dGeomTriMeshDataBuildSimple(dataID, vertices, 4, tmpCreateIndices(), 6); geomID = dCreateTriMesh(mSpace, dataID, 0, 0, 0); dGeomSetPosition(geomID,0,0,0); dGeomSetBody(geomID, 0); model->setGeometryID(geomID); } [/code] After having done this, I specify a callback function - say "tmpCallback" as below: [code] void PhysicsManager::tmpCallback(void * d, dGeomID id1, dGeomID id2) { int i; // Get the dynamics bodies dBodyID b1 = dGeomGetBody(id1); dBodyID b2 = dGeomGetBody(id2); std::cout << "Callback" << std::endl; // Holds the contact joints - Need to read up on this dContact contact[32]; dContactGeom ptr; for (i = 0; i < 32; i++) { contact[i].surface.mode = dContactBounce; contact[i].surface.mu = dInfinity; contact[i].surface.mu2 = 0; contact[i].surface.bounce = 0.01; contact[i].surface.bounce_vel = 0.5; contact[i].surface.soft_cfm = 0.01; } int numc = dCollide(id1, id2, 1, &contact[0].geom, sizeof(dContact)); if (numc > 0) { for (i = 0; i < numc; i++) { dJointID c = dJointCreateContact(PhysicsManager::get()->getWorld(), 
PhysicsManager::get()->getContactGroup(), contact + i); dJointAttach(c, b1, b2); } } } [/code] This callback function works perfectly when doing plane-to-sphere collision detection. When the TriMesh is involved however, the program crashes at "dCollide(id1,id2, 1, &contact[0].geom, sizeof(dContact))", throwing an unhandled Access Violation from inside the standard libraries. I'm at a loss as to what's causing the problem, and I've tried everything I could think of to solve it. If you want to slap me in the face and tell me I shouldn't be using ODE, that's fine as well - please do give your own recommendations, though. Thanks in advance
  4. Object Movement

    [quote name='AltarofScience' timestamp='1331169662' post='4920263'] [quote name='sjaakiejj' timestamp='1331169234' post='4920262'] Have you tried swapping the order of Rotated and Translated? I think that should do the trick so instead of [code] glTranslated(mx,my,mz); glRotatef(direction,0,1,0); [/code] [color=#666600]you do[/color] [code] glRotatef(direction,0,1,0); glTranslated(mx,my,mz); [/code] [/quote] That does not appear to do anything different as far as I can tell. [/quote] You should definitely see something different, though it's possible I pointed to the wrong pair, as I'm not quite sure which part represents your ship. Swapping the order means rotate first, then translate. Rotation should always be done at the origin; otherwise you get the exact effect you described, where the object rotates around the origin instead of its own axis.
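A quick way to convince yourself that the two call orders differ (plain C++, no OpenGL; a 2D point and a fixed 90-degree rotation stand in for the modelview matrix, and the function names are mine):

```cpp
#include <cassert>

struct Vec2 { double x, y; };

// Rotate a point 90 degrees counter-clockwise about the origin.
Vec2 rot90(Vec2 p) { return { -p.y, p.x }; }
Vec2 translate(Vec2 p, double tx, double ty) { return { p.x + tx, p.y + ty }; }

// Call order glTranslated(...); glRotatef(...); applies the rotation to
// the vertex FIRST (OpenGL post-multiplies), so the object spins about
// its own origin and is then moved into place.
Vec2 translateThenRotateCalls(Vec2 v) { return translate(rot90(v), 10, 0); }

// Call order glRotatef(...); glTranslated(...); rotates the
// already-translated vertex, so the object orbits the world origin.
Vec2 rotateThenTranslateCalls(Vec2 v) { return rot90(translate(v, 10, 0)); }
```

Feeding the same vertex through both shows the difference: one result sits near (10, 0) with the point rotated locally, the other ends up swung around the origin entirely.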
  5. Object Movement

    Have you tried swapping the order of Rotated and Translated? I think that should do the trick so instead of [code] glTranslated(mx,my,mz); glRotatef(direction,0,1,0); [/code] [color=#666600]you do[/color] [code] glRotatef(direction,0,1,0); glTranslated(mx,my,mz); [/code]
  6. Bounding Box Calculations

    [quote name='gbMike' timestamp='1331147201' post='4920154'] I had considered this, but if I were to generate the bbox, and then apply the transformation/rotation/scale to it manually, a rotated model would result in a rotated bbox. I was under the impression that this was bad practice. Unless I am mistaken. Alternatively, I could apply the matrix to the points in the model, and then calculate a bounding box, but since applying rotation requires trig, and it would be run on every single vertex on every model, each frame, I imagined it would absorb ridiculous resources. Again though, I could be mistaken here as well. [/quote] Depends on how you build it. Either use AABBs like haegarr mentioned, or if you want rotated bounding boxes (probably mostly used as hitboxes), I would simply have a class that contains the model and its orientation, location and scale, and use that to set the state of both the GL objects and the bounding box. That way you have logically separated the two very different types of functionality, and you ensure that you're always running your calculations in the correct space. [quote name='mhagain' timestamp='1331150761' post='4920179'] A bounding sphere may be an option for you here. It will certainly be easier to calculate - just position its center at the object's position and calculate the radius from the object size and largest scale factor, done - no need to bother with rotations. They're also faster to test against the frustum, but the tradeoff is that a sphere may not be an optimal shape for all objects. [/quote] A sensible approach, and it'll help simplify collision detection, as checking for intersection with a sphere is trivial.
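For completeness, a sketch of why the sphere test is trivial (plain C++; the struct and function names are mine): two spheres intersect exactly when the distance between their centres is at most the sum of their radii, and comparing squared values avoids the square root entirely:

```cpp
#include <cassert>

struct Sphere { double x, y, z, r; };

// Spheres intersect when distance(centres) <= r1 + r2.
// Squaring both sides keeps the test to a few multiplies and adds.
bool intersects(const Sphere& a, const Sphere& b) {
    double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    double rsum = a.r + b.r;
    return dx*dx + dy*dy + dz*dz <= rsum*rsum;
}
```

The same squared-distance trick works for sphere-vs-point and sphere-vs-frustum-plane tests, which is part of why bounding spheres are popular for broad-phase checks.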
  7. Bounding Box Calculations

    Why are you using OpenGL functions to transform a bounding box? The way I generally approach it is to give the box its own coordinates, and apply transformations to those manually using vectors, matrices and quaternions. The OpenGL functions only apply to a matrix in the OpenGL state, which even if you could access it would be bad practice to use for this kind of application.
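As an illustration of the manual approach (plain C++, no OpenGL; `Aabb` and `transformAabb` are invented names for this sketch): transform the eight corners of the box yourself, then take the min/max to get a new axis-aligned box:

```cpp
#include <algorithm>
#include <cassert>

struct Vec3 { double x, y, z; };
struct Aabb { Vec3 min, max; };

// Rebuild an axis-aligned box after translating and uniformly scaling
// the model. For rotations you'd run all eight corners through your own
// matrix/quaternion code the same way before taking min/max.
Aabb transformAabb(const Aabb& b, Vec3 t, double scale) {
    Vec3 corners[8];
    int n = 0;
    const double xs[] = {b.min.x, b.max.x};
    const double ys[] = {b.min.y, b.max.y};
    const double zs[] = {b.min.z, b.max.z};
    for (double x : xs) for (double y : ys) for (double z : zs)
        corners[n++] = { x*scale + t.x, y*scale + t.y, z*scale + t.z };

    // Shrink-wrap an axis-aligned box around the transformed corners.
    Aabb out = { corners[0], corners[0] };
    for (int i = 1; i < 8; ++i) {
        out.min.x = std::min(out.min.x, corners[i].x);
        out.min.y = std::min(out.min.y, corners[i].y);
        out.min.z = std::min(out.min.z, corners[i].z);
        out.max.x = std::max(out.max.x, corners[i].x);
        out.max.y = std::max(out.max.y, corners[i].y);
        out.max.z = std::max(out.max.z, corners[i].z);
    }
    return out;
}
```

Because this runs on 8 corners rather than every vertex of the model, it stays cheap even with a rotation in the mix, at the cost of the resulting AABB being somewhat looser than the model's true bounds.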
  8. Help me using buffer objects

    [quote name='SpectreNectar' timestamp='1330968050' post='4919505'] Thanks, I now have visual output! It covers the whole screen in an orange color though. Could have something to do with my camera matrix. In pursuit of my goal of having the shader make things isometric, I set it up with the GLM math library ( [url="http://glm.g-truc.net/"]http://glm.g-truc.net/[/url] ) to look at the screen like this: [CODE] glm::vec3 eye = glm::vec3(0.0f, 0.0f, 0.0f); glm::vec3 center = glm::vec3(0.0f, 0.0f, 1.0f); glm::vec3 up = glm::vec3(0.0f, 1.0f, 0.0f); glm::mat4 viewMatrix = glm::lookAt(eye, center, up); glUniformMatrix4fv(glGetUniformLocation(p, "vMatrix"), 1, GL_FALSE, glm::value_ptr(viewMatrix)); [/CODE] So I have a few more questions before I can draw things properly. When you create an orthographic projection, any non-zero Z value isn't drawn as it is outside the znear/zfar range. So if you don't want perspective yet want to have your geometry stored with all three axes, any z coordinate must be set to zero, e.g. inside a vertex shader. Is this a correct assumption? [/quote] An orthographic view does not zero the Z value. Orthographic projection simply projects along parallel lines (the near and far planes are the same size), which is what glOrtho sets up for you. The objects are still 3D, and so your z-values are not lost. [quote] What range are coordinates inside a vertex shader in? 0.0-1.0? In the following shader, what function do I need in order to set "vertex" to the same value as that inside "t->vertices"? (so I don't need ftransform()) [CODE] #version 330 in vec4 vertex; uniform float ox; uniform float oy; uniform mat4 vMatrix; uniform mat4 mMatrix; out float odepth; void main() { vec4 vert = ftransform() * vMatrix; float sqrX = vert.x; float sqrY = 1.0-vert.y; float sqrZ = vert.z; vert.x = ox + sqrX-sqrY; vert.y = oy + (sqrY+sqrX)/2.0 - sqrZ; vert.z = 0.0; odepth = sqrZ; gl_Position = vert; } [/CODE] Again, thanks very much. 
[/quote] Use the following: [code] vec4 vertex = gl_Vertex; [/code] That'll put your vertex coordinate into the vec4 vertex (note that gl_Vertex only exists in compatibility-profile GLSL; with #version 330 core, your own "in vec4 vertex" attribute already holds it). Remember that a vertex shader is called once for every vertex, and so instead of ftransform(), you would multiply your vertex by your projection matrix, e.g. [code] gl_Position = gl_ModelViewProjectionMatrix * vertex; [/code]
  9. [quote name='Trienco' timestamp='1331012469' post='4919680'] [quote name='sjaakiejj' timestamp='1330943918' post='4919410'] Got it. My analogy of the concept was off a bit, and I was taking "zooming in" to be a non-mechanical concept, and the only way I could think of enlarging an object from that standpoint was by translation [img]http://public.gamedev.net//public/style_emoticons/default/smile.png[/img] [/quote] Considering how orthogonal projection works, you must realize that distance will not change how large the object appears, right? So as long as your object is perfectly centered, you can translate back and forth as much as you like without seeing any difference. Until the object gets clipped against the near and far plane. [/quote] Yes, due to the fact that the view frustum is in fact a box (i.e. the near and far planes are of equal size) [quote] If you want to zoom, just decrease your fov or limit your ortho projection to a smaller area. You don't run into clipping issues and it's a lot less messy. You will still not find a simple relationship between perspective and ortho zoom without considering the object's distance to the camera in your calculation. Even if you find that matching set of zoom factors, where the object appears equally large in both projections... move it closer or farther away and your numbers won't work anymore. [/quote] As mentioned before, I mainly looked at replacing the glTranslate function. As I never use orthographic views myself, I didn't really think too much about what translation means in such a projection.
  10. Help me using buffer objects

    You've got a few problems in your code that I can spot straight off. I haven't used GL_STREAM_DRAW myself, so I'd try using GL_STATIC_DRAW to make sure your code is working correctly. So your initialisation should look a bit like this: [code] GLuint pVertexBufferObj; //You should remember this one, as it specifies the "location" of the data in GPU memory /* Generate a buffer object, and bind it with the data */ glGenBuffers(1, &pVertexBufferObj); glBindBuffer(GL_ARRAY_BUFFER, pVertexBufferObj); glBufferData(GL_ARRAY_BUFFER, sizeof(Vertex) * t->NumberVertices, t->vertices, GL_STATIC_DRAW); /* Tell OpenGL we're now done providing it our data */ glBindBuffer(GL_ARRAY_BUFFER, 0); [/code] Next, in the draw call you would specify the following: [code] glEnableClientState(GL_VERTEX_ARRAY); // glEnableClientState(GL_NORMAL_ARRAY); //If we're using normals // glEnableClientState(GL_TEXTURE_COORD_ARRAY); //If we're using textures glBindBuffer( GL_ARRAY_BUFFER, pVertexBufferObj ); //3 = Number of elements (xyz), GL_FLOAT = Type of elements, sizeof(Vertex) = size of your elements in bytes, 0 = offset from the start of your vertex buffer object array glVertexPointer(3, GL_FLOAT, sizeof(Vertex), 0 ); //Same for Normal & Texture pointers glPushMatrix(); glDrawArrays(GL_TRIANGLES, 0, t->NumberVertices); glPopMatrix(); glBindBuffer( GL_ARRAY_BUFFER, 0 ); //glDisableClientState( GL_TEXTURE_COORD_ARRAY ); //glDisableClientState( GL_NORMAL_ARRAY ); glDisableClientState( GL_VERTEX_ARRAY ); [/code] An important thing to note here is that Vertex Buffer Objects do not have their coordinates stored in CPU memory, but in GPU memory. Sending the vertices off to a buffer object at every draw call defeats the purpose of using them, as they exist to reduce the bandwidth usage between the CPU and GPU.
  11. [quote name='Brother Bob' timestamp='1330943717' post='4919409'] [quote name='sjaakiejj' timestamp='1330942122' post='4919405'] Isn't the concept of "zooming in" itself a translation of the eye-point in the real world? I suppose scaling the scene is a reasonable approximation of the concept as well. The reason I advise gluLookAt for this operation is that it guarantees correct translation of all the objects in the scene, whereas it's easy to go wrong with glTranslate. [/quote] Zooming is when you change the field of view. If you have a camera, zooming in or out is when you increase or decrease the focal length to get a narrower or wider angled shot of your object. The reason objects appear larger when you zoom in is because you narrow the field of view so that the object effectively occupies a larger portion of the view volume. You don't move the camera closer to or further away from the object you're taking a photo of. Your translation method makes the zoom direction dependent; you can only zoom in the direction the camera is facing at the time you do the zoom. If, after zooming in, you face the camera in the opposite direction, you have zoomed out because you are further away from those objects, and if you face it perpendicular to the direction of translation, you have a sideways translation instead of a zoom. [/quote] Got it. My analogy of the concept was off a bit, and I was taking "zooming in" to be a non-mechanical concept, and the only way I could think of enlarging an object from that standpoint was by translation.
  12. [quote name='haegarr' timestamp='1330934614' post='4919389'] When moving the camera / eye-point closer into the scene you get something that looks like zooming, due to the perspective foreshortening. As already mentioned, this is not a "real" zoom. E.g. you move the clipping volume together with the camera, and hence geometry near to the camera may disappear while geometry far from the camera may pop up. And that this works in perspective mode only has already been noticed. Because depth of field is usually neglected anyway, I don't see a reason why zooming is not to be simulated simply by scaling the scene in the horizontal and vertical dimensions of the view space. I.e. if [b]V[/b] denotes the view matrix computed like before (from gluLookAt or whatever), then prepending a scaling [b]S[/b] to yield a new view matrix like so [b]S[/b]( z, z, 1 ) * [b]V[/b] where z denotes the zoom factor should do the trick. The above is independent of the mode of projection. The scaling [b]S[/b] may, however, also be attached to the projection, because OpenGL prepends the projection matrix [b]P[/b] to the view matrix like so [b]P[/b] * ( [b]S[/b]( z, z, 1 ) * [b]V[/b] ) and due to associativity, that can be computed as ( [b]P[/b] * [b]S[/b]( z, z, 1 ) ) * [b]V[/b] as well. However, this is a question of optimization only. If the zoom is static, then attaching it to the projection matrix is advantageous. In the more likely situation that the zoom will change frequently, attaching it to the view matrix is more efficient. [quote name='sjaakiejj' timestamp='1330895683' post='4919263'] You're not actually zooming in if you're using "Translatef", which is what Lee said. Do you know what the different parameters of gluLookAt do? To zoom in, you can move the eye point closer to the look-at point. 
I use this for "zooming in" on my system: [code] void zoomin() { float vx = mLookPoint[0] - mPosition[0]; float vy = mLookPoint[1] - mPosition[1]; float vz = mLookPoint[2] - mPosition[2]; mPosition[0] += vx * 0.01; mPosition[1] += vy * 0.01; mPosition[2] += vz * 0.01; } [/code] [/quote] Moving the eye-point [i]is actually a translation[/i]. Not only that, it is further a translation at the same stage in processing as the OP already does. [/quote] Isn't the concept of "zooming in" itself a translation of the eye-point in the real world? I suppose scaling the scene is a reasonable approximation of the concept as well. The reason I advise gluLookAt for this operation is that it guarantees correct translation of all the objects in the scene, whereas it's easy to go wrong with glTranslate. [quote] [quote name='sjaakiejj' timestamp='1330895683' post='4919263'] And then in my rendering call I have [code] gluPerspective(mPosition[0], mPosition[1], mPosition[2], mLookPoint[0], mLookPoint[1], mLookPoint[2], 0, 1, 0); [/code] [/quote] This is an incorrect combination of a routine name and a set of parameters. You probably mean gluLookAt instead of gluPerspective. [/quote] Yeah, that should be gluLookAt, not gluPerspective [img]http://public.gamedev.net//public/style_emoticons/default/smile.png[/img]
  13. OpenGL OpenGL Simple game

    This is your problem: [code] glutInitDisplayMode(GL_DOUBLE | GL_RGBA); [/code] Should be [code] glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA); [/code] Easy to identify on Linux (you get an X error if the window parameters are incorrect), though based on your error log I suppose it's not quite as easy on Windows, where it just crashes because of invalid window parameters (without actually telling you why). Edit: Just for future reference, it's easy to remember which enumerations go where in OpenGL. If the function starts with "glut", then the parameters should always be "GLUT_", whereas functions that start with "gl" should always have parameters starting with "GL_". They're two different libraries, and since GLUT complements OpenGL it's easy to get confused if you don't pay attention to this detail [img]http://public.gamedev.net//public/style_emoticons/default/smile.png[/img]
  14. GLSL Displacement Mapping

    I found the major problem... finally. Afterwards I encountered some minor problems, but I solved them quickly (in fact, I realised what the bug was whilst typing this post). The last remaining issue is one I'll have to solve myself, which is the "stitching" effect you get when there's a difference in LOD on an edge (e.g. more detail on one edge makes it misalign slightly with the other edge). First off, the tiling issue was solved by adding [code] glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); [/code] instead of just using MIN_FILTER. That explains why I didn't see it before, as it was in my older code. Secondly, I found that the major problem I started this thread for was the fact that OpenGL defaults to GL_REPEAT on GL_TEXTURE_WRAP_{S|T}, and I wasn't setting it to GL_CLAMP anywhere. So adding this solved that problem: [code] glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP); glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP); [/code] I have attached a picture of the results.
  15. OpenGL Rendering the depth buffer?

    [quote name='Limedark' timestamp='1330808842' post='4918967'] I thought that since the Z-buffer exists somewhere (I assume, or things would overlap, and my clearing the depth buffer would affect nothing), I should be able to simply(??) swap it for the color buffer. [/quote] The thing is that, as far as I'm aware, the z-buffer is implemented (either in hardware or software) on the GPU. This means that the only direct way to get access to it is via shaders.