# OpenGL ODE and DirectX


## Recommended Posts

Hi everyone. I've done a lot of looking around, but I can't seem to find much about this topic. The samples that come with the ODE SDK are all done in OpenGL, so I need a couple of pointers in the right direction. In particular, since ODE is graphics-API independent, how do I go about using *.X files with it? Can I use a DirectX "mesh" object in any way, or will I need to lock it and use the raw vertex data? Second, I'm not completely sure about ODE's pipeline. From what I understand the process is:

1) Set up the world and spaces (what is a space?)
2) Create bodies and position/configure them (this is mostly where I'm getting hung up)
3) Each frame, call dQuickStep (or something along those lines) and have a callback function for what to do when you get a collision.

All help is appreciated. Thanks.

PS. If anyone knows of any barebones ODE tutorials, I would appreciate any links too.

##### Share on other sites
ODE has dBodyGetPosition and dBodyGetRotation functions that you can call on each body. You can set up a D3D transformation matrix with this data for the mesh when rendering it.
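For example, a minimal sketch of that idea (the helper name BuildWorldMatrix and the single-precision ODE build are my assumptions, not something from the post above): take the position and the 3x4 rotation matrix from the body and pack them into a Direct3D world matrix, transposing the rotation because ODE rotates column vectors while Direct3D multiplies row vectors.

```cpp
#include <ode/ode.h>
#include <d3dx9math.h>

// Sketch: build a D3DX world matrix from an ODE body.
// Assumes a single-precision ODE build (dReal == float); the casts
// also cover a double-precision build.
D3DXMATRIX BuildWorldMatrix(dBodyID body)
{
    const dReal* p = dBodyGetPosition(body);   // x, y, z
    const dReal* R = dBodyGetRotation(body);   // 3x4 row-major rotation

    // Transpose the 3x3 rotation (ODE: v' = R * v, D3D: v' = v * M)
    // and put the translation into the fourth row.
    return D3DXMATRIX(
        (float)R[0], (float)R[4], (float)R[8],  0.0f,
        (float)R[1], (float)R[5], (float)R[9],  0.0f,
        (float)R[2], (float)R[6], (float)R[10], 0.0f,
        (float)p[0], (float)p[1], (float)p[2],  1.0f);
}
```

Each frame you would set this as the world transform before drawing the mesh, e.g. `D3DXMATRIX world = BuildWorldMatrix(body); device->SetTransform(D3DTS_WORLD, &world);`.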

##### Share on other sites
Okay, I see where you're going with this. So I think I understand now how to use the primitive collisions, but what if I want to use a "trimesh" collision?

##### Share on other sites
Quote:
Original post by rjackets: In particular, since ODE is graphics-API independent, how do I go about using *.X files with it?

You don't.
Quote:
Can I use a DirectX "mesh" object in any way, or will I need to lock it and use the raw vertex data?

I don't see any way you are going to be able to use ODE's trimesh stuff without locking. Once you have locked it, though, it should be fairly trivial: dGeomTriMeshDataBuild allows you to specify a stride, so as long as your position data is the first thing in your vertex you should be able to use the locked data directly. If that does not work, or if you also want to use normals, you may have to copy the data out into vector3 arrays.
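A rough sketch of what that could look like with an ID3DXMesh, assuming a 32-bit index buffer, position-first vertices, and the single-precision build function dGeomTriMeshDataBuildSingle (the names CreateTriMeshGeom, `mesh` and `space` are placeholders, and error checking is left out). Note that ODE references these arrays rather than copying them, so they must stay valid for as long as the geom exists, which in practice often means copying the data out after all:

```cpp
#include <ode/ode.h>
#include <d3dx9.h>

// Hypothetical helper: build an ODE trimesh geom straight from a locked
// ID3DXMesh. Assumes 32-bit indices and position-first vertices.
dGeomID CreateTriMeshGeom(dSpaceID space, ID3DXMesh* mesh)
{
    void* verts   = 0;
    void* indices = 0;
    mesh->LockVertexBuffer(D3DLOCK_READONLY, &verts);
    mesh->LockIndexBuffer(D3DLOCK_READONLY, &indices);

    dTriMeshDataID data = dGeomTriMeshDataCreate();
    dGeomTriMeshDataBuildSingle(data,
        verts,   mesh->GetNumBytesPerVertex(), mesh->GetNumVertices(),  // vertices + stride
        indices, mesh->GetNumFaces() * 3,      3 * sizeof(int));        // 32-bit indices + tri stride

    // NB: ODE keeps referencing the locked data, so don't unlock or release
    // the buffers while this geom is alive (or copy the data out first).
    return dCreateTriMesh(space, data, 0, 0, 0);
}
```

If the mesh uses a 16-bit index buffer (common for D3DX meshes), the indices would have to be widened to 32-bit ints first.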
Quote:
Second, I'm not completely sure about the pipeline of the ODE. From what I understand the process is:
1) Set up world and spaces (what is a space?)
2) Create bodies and position/configure them (this is mostly where I'm getting hung up)
3) each frame use dQuickStep (or something along those lines) and have a callback function for what to call when you get a collision.

I think you have it about right. I chose not to use ODE myself, but I spent some time researching it, and it seems that most of these engines work in this sort of way: set the positions/masses/velocities of things, call Step() or the equivalent, then read the positions back (many of which may have changed during the step) and use them to update your graphics.
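As a concrete (hypothetical) sketch of that loop with ODE, assuming `world`, `space` and `contactgroup` were created during setup and that a nearCallback creates the contact joints (as in the ODE samples), the per-frame update might look like this:

```cpp
#include <ode/ode.h>

void nearCallback(void* data, dGeomID o1, dGeomID o2);  // defined elsewhere

// Hypothetical per-frame update with a fixed step size.
void StepPhysics(dWorldID world, dSpaceID space, dJointGroupID contactgroup)
{
    dSpaceCollide(space, 0, &nearCallback);  // collision detection; the callback
                                             // creates contact joints
    dWorldQuickStep(world, 0.01f);           // advance the simulation one step
    dJointGroupEmpty(contactgroup);          // throw away this frame's contact joints
}

// Afterwards, read the results back for rendering, e.g.:
//   const dReal* pos = dBodyGetPosition(someBody);
//   const dReal* rot = dBodyGetRotation(someBody);
```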

##### Share on other sites
Excellent. Thanks a lot, that's exactly what I was looking for.

##### Share on other sites
Okay, I realize this is now outside the realm of DirectX-specific help, but I figured I'd put it in the same thread for consistency. I've got it running so that my objects are linked to the ODE system: the object starts 10 units above the floor and falls properly. The floor is made with dCreatePlane(...) and the object colliding with it is a basic sphere. The collision is being detected as it should be, but nothing responds to it -- the sphere basically just goes through the floor. The rotational momentum changes ever so slightly, so the contact is detectable, and I set it to stop calculating the physics when it collides, so it is clearly colliding. So my question is: is there anything I have to do to explicitly specify how a body should behave when it collides? I have the following nearCallback (like in the samples). All help would be greatly appreciated -- and once I get the barebones sample working I swear I'll make a public tutorial, since it seems a lot of people are looking for them ;)

here is my nearCallback code:

```cpp
static void nearCallback(void *data, dGeomID o1, dGeomID o2)
{
    int i;

    // if the two bodies are already connected by a (non-contact) joint, do nothing
    dBodyID b1 = dGeomGetBody(o1);
    dBodyID b2 = dGeomGetBody(o2);
    if (b1 && b2 && dAreConnectedExcluding(b1, b2, dJointTypeContact))
        return;

    dContact contact[MAX_CONTACTS];   // up to MAX_CONTACTS contacts per pair
    for (i = 0; i < MAX_CONTACTS; i++)
    {
        contact[i].surface.mode = dContactBounce | dContactSoftCFM;
        contact[i].surface.mu = dInfinity;
        contact[i].surface.mu2 = 0;
        contact[i].surface.bounce = 0.1;
        contact[i].surface.bounce_vel = 0.1;
        contact[i].surface.soft_cfm = 0.01;
    }

    if (int numc = dCollide(o1, o2, MAX_CONTACTS, &contact[0].geom, sizeof(dContact)))
    {
        collided = TRUE;
        for (i = 0; i < numc; i++)
        {
            dJointID c = dJointCreateContact(world, contactgroup, contact + 1);
            dJointAttach(c, b1, b2);
        }
    }
}
```

##### Share on other sites
Ummm, never mind. I must have missed something when I was working through this. I just tried a direct cut-and-paste from the sample code and it works fine [wink].

### Similar Content

• Good Evening,
I want to make a 2D game which involves displaying some debug information, especially for collisions, enemy sights and so on...
First off, I was thinking about all the shapes I will need for debugging purposes: circles, rectangles, lines, polygons.
I am really stuck right now on a fundamental question:
Where do I store the vertex positions for each line (object)? Currently I am not using a model matrix, because I am using an orthographic projection and set the final positions directly in the VBO. That means that if I add a new line I would have to expand the "points" array and re-upload it (call glBufferData again) every time. The other method would be to use a model matrix and a fixed VBO for a single line, but then it would be messy to create a line from exactly (0,0) to (100,20), calculating the rotation and scale to make it fit.
If I proceed with option 1, updating the array each frame, I was thinking of having 4 draw calls every frame: one for the lines VAO, one for the polygons VAO, and so on.
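A minimal sketch of that first option, under my own assumptions (glad as the GL loader, a VAO/VBO created elsewhere with attribute 0 set to two floats per vertex, and GL_DYNAMIC_DRAW as the usage hint): collect the line endpoints in a CPU-side array, re-upload them once per frame with glBufferData, and draw everything with a single GL_LINES call.

```cpp
#include <glad/glad.h>   // or whichever GL loader is in use
#include <vector>

// CPU-side scratch buffer, refilled every frame: x0,y0, x1,y1 per line.
static std::vector<float> gDebugLines;

void AddDebugLine(float x0, float y0, float x1, float y1)
{
    float pts[4] = { x0, y0, x1, y1 };
    gDebugLines.insert(gDebugLines.end(), pts, pts + 4);
}

// vao/vbo are assumed to be created once, with attribute 0 bound to
// 2 floats per vertex from this vbo.
void FlushDebugLines(GLuint vao, GLuint vbo)
{
    glBindVertexArray(vao);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    // Re-upload the whole array; GL_DYNAMIC_DRAW hints that it changes often.
    glBufferData(GL_ARRAY_BUFFER,
                 gDebugLines.size() * sizeof(float),
                 gDebugLines.data(), GL_DYNAMIC_DRAW);
    glDrawArrays(GL_LINES, 0, (GLsizei)(gDebugLines.size() / 2));
    gDebugLines.clear();
}
```

The same pattern would work for the rectangle and polygon VAOs, so it stays at a handful of draw calls per frame regardless of how many debug shapes there are.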
In addition to that I am planning to use some sort of ECS based architecture. So the other question would be:
Should I treat those debug objects as entities/components?
For me it would make sense to treat them as entities, but that creates a new issue with the previous array approach, because each one would then have, for example, a transform component and a render component -- a special render component for debug objects (no texture etc.). For me the transform component is also just a matrix, but how would I then define a line?
Treating them as components wouldn't be a good idea in my eyes, because then I would always need an entity. Then again, an entity is just an id!? So maybe it is a component?
Regards,
LifeArtist
• By QQemka
Hello. I am coding a small thingy in my spare time. All I want to achieve is to load a heightmap (as the lowest possible walking terrain), some static meshes (elements of the environment) and a dynamic character (meaning I can move, collide with the heightmap/static meshes and hold a varying item in one hand). I've got a bunch of questions, or rather problems, that I can't find solutions to myself. Nearly all deal with graphics/GPU, not the coding part. My C++ is at a high enough level.
Let's go:
Heightmap - I obviously want it to be textured; the size is hardcoded to 256x256 squares. I can't have one huge texture stretched over the entire terrain, because every pixel would be enormous. That's why I decided to use 2 specific textures. The first will be a tileset consisting of 16 square tiles (u/v ranging from 0 to 0.25 for the first tile and so on) and the second a 256x256 buffer with values 0-15 representing the index of the tile from the tileset for every heightmap square. The problem is, how do I blend the edges nicely and make some computationally cheap changes so it's not obvious there are only 16 tiles? Is it possible to generate such terrain with some existing program?
Collisions - I want to use a bounding sphere and an AABB. But should I store them per model or per entity instance? Meaning, if I have 20 identical trees spawned from the same tree model, every entity has its own transformation (position, scale etc). Storing a collision component per instance grants faster access and is precalculated and pre-transformed (it takes additional memory, but who cares?), so I should stick with this, right? What should I do if an object is dynamically rotated? The AABB is no longer aligned, and recalculating the per-vertex min/max every time an object rotates/scales is pretty expensive, right?
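On the rotation question, one common trick (my sketch, not from the post; the struct and function names are made up) is to transform the model-space AABB by the instance's rotation and translation directly instead of re-scanning the vertices, which costs only a few dozen multiplies per instance:

```cpp
struct AABB { float min[3]; float max[3]; };

// Recompute a world-space AABB from a model-space AABB and an instance's
// 3x3 rotation/scale plus translation, without touching the mesh's
// vertices (the classic Arvo method).
AABB TransformAABB(const AABB& local, const float rot[3][3], const float pos[3])
{
    AABB out;
    for (int i = 0; i < 3; ++i)
    {
        out.min[i] = out.max[i] = pos[i];        // start from the translation
        for (int j = 0; j < 3; ++j)
        {
            float a = rot[i][j] * local.min[j];  // each axis contributes either
            float b = rot[i][j] * local.max[j];  // its min or its max extent
            out.min[i] += a < b ? a : b;
            out.max[i] += a < b ? b : a;
        }
    }
    return out;
}
```

The resulting box is conservative (it can be looser than a box fitted to the rotated vertices), but it is cheap enough to redo every time the transform changes.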
Drawing the AABB - a problem similar to the above (storing AABB data per instance or per model). This time, in my opinion, per model is enough, since every instance also does not have its own vertex buffer but uses the shared one (so 20 trees share a reference to one tree model). So rendering the AABB is about taking the model's AABB, transforming it with the instance matrix, and voila. What about the AABB vertex buffer (this is more of a cosmetic question, I'm just curious; I bumped into it while writing this)? Is it better to make it 8 points and an index buffer (12 lines), or only 2 vertices with min/max x/y/z and have the shaders dynamically generate the 6 other vertices and draw the box? Or maybe there should be just ONE 1x1x1 cube template that gets moved/scaled per entity?
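For the drawing question, here is what the "8 points plus a 12-line index buffer" option can look like as a single shared unit cube that gets scaled and translated per instance (the corner ordering is my own choice, not from the post):

```cpp
// Unit cube from (0,0,0) to (1,1,1): 8 corners, 12 edges as a GL_LINES index list.
static const float kCubeCorners[8][3] = {
    {0,0,0}, {1,0,0}, {1,1,0}, {0,1,0},   // bottom face (z = 0)
    {0,0,1}, {1,0,1}, {1,1,1}, {0,1,1},   // top face    (z = 1)
};

static const unsigned short kCubeEdges[24] = {
    0,1, 1,2, 2,3, 3,0,   // bottom ring
    4,5, 5,6, 6,7, 7,4,   // top ring
    0,4, 1,5, 2,6, 3,7,   // vertical edges
};

// Per instance: scale by (aabbMax - aabbMin), translate by aabbMin, then
// draw with glDrawElements(GL_LINES, 24, GL_UNSIGNED_SHORT, 0).
```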
What if one model has a diffuse texture and a normal map, and another has only a diffuse texture? Should I pass some bool flag to the shader with that info, or just assume that my game supports only diffuse maps without fancy stuff?
There were several more, but I forgot or solved them while writing this.
• By RenanRR
Hi All,
I'm reading the tutorials from the learnOpenGL site (nice site) and I have a question about the camera (https://learnopengl.com/Getting-started/Camera).
I have always seen the camera manipulated with lookAt, but in the tutorial the camera is changed through the MVP matrices, which do not seem to move the camera, but rather the scene:
```glsl
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec2 aTexCoord;

out vec2 TexCoord;

uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;

void main()
{
    gl_Position = projection * view * model * vec4(aPos, 1.0f);
    TexCoord = vec2(aTexCoord.x, aTexCoord.y);
}
```

Then the matrices are manipulated like this:

```cpp
// ...
glm::mat4 projection = glm::perspective(glm::radians(fov), (float)SCR_WIDTH / (float)SCR_HEIGHT, 0.1f, 100.0f);
ourShader.setMat4("projection", projection);
// ...
glm::mat4 view = glm::lookAt(cameraPos, cameraPos + cameraFront, cameraUp);
ourShader.setMat4("view", view);
// ...
model = glm::rotate(model, glm::radians(angle), glm::vec3(1.0f, 0.3f, 0.5f));
ourShader.setMat4("model", model);
```
So, some doubts:
- Why use it like that?
- Is it okay to manipulate the camera that way?
- In this way, isn't it the vertices' positions that change instead of the camera?
- Do I need to pass the MVP matrices to all the shaders of the objects in my scene?

What it seems like is that the camera stands still and it's the scenery that changes...
Is that right?
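That intuition is essentially right: the view matrix produced by glm::lookAt is the inverse of the camera's own placement in the world, so "moving the camera" and "transforming the whole scene the opposite way" are the same operation. A tiny sketch (the variable names are mine) that makes this concrete:

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// A camera sitting at cameraPos, looking down -Z with +Y up
// (an un-rotated camera that has simply been translated into the world).
glm::vec3 cameraPos(3.0f, 2.0f, 10.0f);
glm::mat4 cameraWorld = glm::translate(glm::mat4(1.0f), cameraPos);

glm::mat4 view = glm::lookAt(cameraPos,
                             cameraPos + glm::vec3(0.0f, 0.0f, -1.0f),
                             glm::vec3(0.0f, 1.0f, 0.0f));

// Up to floating-point error, view == glm::inverse(cameraWorld): applying
// `view` to every vertex moves the whole scene, which looks exactly like
// moving the camera the other way.
```

So it is fine to keep thinking of lookAt as "placing the camera"; the MVP chain in the shader simply applies that placement (inverted) to the geometry.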

Thank you

• I'm sampling a floating-point texture where the alpha channel holds 4 bytes of data packed into the float. I don't know how to cast the raw memory so that I can treat it as an integer and perform bit-shifting operations.

```glsl
int rgbValue = int(textureSample.w); // 4 bytes of data packed as a colour
// algorithm might not be correct and endianness might need switching
vec3 extractedData = vec3( rgbValue        & 0xFF000000,
                          (rgbValue << 8)  & 0xFF000000,
                          (rgbValue << 16) & 0xFF000000);
extractedData /= 255.0f;
```

• While writing a simple renderer using OpenGL, I ran into an issue with the glGetUniformLocation function. For some reason, the location comes back as -1.
Does anyone have any idea what I should do?