Physics and Sound Engine Encapsulation


Hi there. I am interested in game engine design and there are a few questions I have regarding the use of encapsulation in game engines. I know game engines have a graphics API wrapper for platform independence, and an engine API that hides the sound engine, renderer and physics engine. At the engine level, though, are the sound engine and physics engine encapsulated like the graphics API? For example, if an engine uses PhysX, will it call PhysX directly, or call a physics interface that could be implemented for different physics engines? Obviously this would have the benefit of being able to change the physics engine more easily, unless physics engines are so different that a wrapper physics engine would be too messy to create. Is it bad design not to wrap the physics and sound engines?

Yeah, it's a bad thing(tm) to create a wrapper just for the sake of having a wrapper :)
Wrappers should always do something useful - usually either:
A) Making multiple platform-specific APIs share a common interface for portability reasons, or
B) Simplifying the underlying API so that it's easier to use / so it's usable at a different level of abstraction.

An example of (A) would be a D3D/GL wrapper.
An example of (B) would be something like Unity's "Rigid body component", which is very easy to add to one of your game objects without needing to know anything about the physics library.
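For illustration, here's a minimal sketch of that kind of (B)-style wrapper, assuming a Bullet backend; RigidBodyComponent and its members are made-up names for this example, not any real engine's API.

// Minimal sketch of a (B)-style wrapper: a gameplay-facing "rigid body"
// component that hides the underlying physics library (Bullet here).
// All class/member names are illustrative, not a real engine API.
#include <btBulletDynamicsCommon.h>
#include <memory>

class RigidBodyComponent
{
public:
    // The gameplay programmer only supplies a world, a box size and a mass.
    RigidBodyComponent(btDiscreteDynamicsWorld& world, const btVector3& halfExtents, float mass)
        : _world(world)
        , _shape(std::make_unique<btBoxShape>(halfExtents))
        , _motionState(std::make_unique<btDefaultMotionState>())
    {
        btVector3 inertia(0.0f, 0.0f, 0.0f);
        if (mass > 0.0f)
            _shape->calculateLocalInertia(mass, inertia);

        btRigidBody::btRigidBodyConstructionInfo info(mass, _motionState.get(), _shape.get(), inertia);
        _body = std::make_unique<btRigidBody>(info);
        _world.addRigidBody(_body.get());
    }

    ~RigidBodyComponent() { _world.removeRigidBody(_body.get()); }

    // Small, game-level operations; in a real engine you'd also hide the
    // Bullet math types behind your own vector type.
    void ApplyImpulse(const btVector3& impulse) { _body->applyCentralImpulse(impulse); }

private:
    btDiscreteDynamicsWorld&              _world;
    std::unique_ptr<btCollisionShape>     _shape;
    std::unique_ptr<btDefaultMotionState> _motionState;
    std::unique_ptr<btRigidBody>          _body;
};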

In my game I recently swapped out PhysX for Bullet, and yes, due to their differences, I did have to make some changes in my physics wrapper...
However, now that I've used both, I think I could make a wrapper that would be able to support both Bullet and PhysX :)
...but such a wrapper is pretty useless. Even if I can wrap them both up in the same API, they have different internal behaviors, so the game would play differently if you swapped the implementation. You would only need this kind of wrapper if you're making an engine that must work for two different games, and for some reason one team is demanding that you support Bullet and the other team is demanding that you support PhysX :lol:

Not wrapping physics/sound at all is usually ok, as long as they're already somewhat high-level libraries. e.g. XAudio is a low-level sound API that isn't gameplay-programmer friendly, but FMOD Studio is a high-level sound API that has nice gameplay-friendly abstractions, such as events, loops, music streaming, etc...
If you're already using a high-level library, then you may just have to make a few helper functions/classes to integrate it with the rest of your engine.
e.g. if you've got some kind of Entity system, you might make an "Audio Source" entity component, so that level designers can place them in their levels.
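As a rough illustration, an "Audio Source" component might look something like the sketch below; IAudioSystem and its methods are placeholder names invented for this example, not FMOD's or XAudio's actual API.

// Rough sketch of an "Audio Source" entity component that forwards to a
// high-level audio library. IAudioSystem and its methods are made-up
// placeholder names, not a real FMOD/XAudio interface.
#include <string>
#include <utility>

class IAudioSystem
{
public:
    virtual ~IAudioSystem() = default;
    virtual int  PlayEvent(const std::string& eventName, float x, float y, float z) = 0; // returns a handle
    virtual void StopEvent(int handle) = 0;
};

class AudioSourceComponent
{
public:
    AudioSourceComponent(IAudioSystem& audio, std::string eventName)
        : _audio(audio), _eventName(std::move(eventName)) {}

    // Called by the entity system when the component becomes active/inactive.
    void OnEnable(float x, float y, float z) { _handle = _audio.PlayEvent(_eventName, x, y, z); }
    void OnDisable()                         { if (_handle != -1) _audio.StopEvent(_handle); }

private:
    IAudioSystem& _audio;
    std::string   _eventName;   // e.g. an event name authored by the sound designer
    int           _handle = -1;
};

A level designer then only ever deals with "which event, where", and the audio library itself stays swappable behind IAudioSystem.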

Either you are making a wrapper to simplify, or you are writing the library yourself. You wouldn't take the Bullet library and make a wrapper that simply passes calls straight through to Bullet functions, i.e.:


// A pointless pass-through wrapper - it adds nothing over calling Bullet directly.
class MyPhysObject
{
public:

    void CreateBody()
    {
        _bulletLib->CreateBody();
    }
};

If you don't write your own library and instead use one already available, a wrapper must have a purpose.

So instead of the above, you would make a wrapper library that provides something like this:


class MyPhysEnvironment
{
public:

    void Explosion(const Vector3& location, float strength)
    {
        // Push every body the physics library knows about away from the blast point.
        for (auto* object : _physicsWorld->GetObjects())
        {
            Vector3 away = Normalize(object->GetPosition() - location);
            object->applyForce(away * strength);
        }

        _sendEvent(ExplosionEvent(location));
    }
};

As contrived as the example may be, it gets the point across I hope.

Thanks for the replies. I understand why wrappers are used. Maybe "encapsulation" is the wrong word; I'm talking about providing a common physics interface that would allow different physics engines to be used with the same game engine. I know that if this interface is created by a programmer who only knows one physics engine, it may not fit other physics engines that well, but the interface could be updated in the future if it was decided to switch physics engines. At least the engine would never point directly to a specific physics engine but rather to the interface, meaning it would be easier to update the code. Then again, it might be over-engineering.
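For concreteness, here is a minimal sketch of the kind of common interface being described; the names and methods are hypothetical and not taken from any particular engine.

// Hypothetical common physics interface the engine would code against.
// One implementation per physics library; nothing here is a real API.
struct Vec3 { float x, y, z; };

class IPhysicsWorld
{
public:
    virtual ~IPhysicsWorld() = default;
    virtual void Step(float dt) = 0;
    virtual int  CreateBox(const Vec3& halfExtents, float mass, const Vec3& position) = 0; // returns a body handle
    virtual void ApplyImpulse(int bodyHandle, const Vec3& impulse) = 0;
    virtual bool RayCast(const Vec3& from, const Vec3& to, Vec3& hitPointOut) = 0;
};

// class BulletPhysicsWorld : public IPhysicsWorld { ... };  // forwards to btDiscreteDynamicsWorld
// class PhysXPhysicsWorld  : public IPhysicsWorld { ... };  // forwards to physx::PxScene

The rest of the engine would then only hold an IPhysicsWorld pointer, created by a factory at startup, so switching physics engines means writing a new implementation of the interface rather than touching gameplay code.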

While on the subject of game engine design, I have another question regarding the level editor. Most level editors have snap functionality allowing the user to snap an object to the ground. For the editor to know where the ground is, it will have to do a collision query; maybe a ray cast downwards to find the nearest point of intersection. Would a game engine use the physics engine for this ray cast, or would it traverse its own spatial hierarchy and do the intersection tests itself? For example, let's say the user wants to place a chair on the ground and they can hit a key that will instantly place the chair on the floor. The floor will have a static actor in the physics engine because it's not going to move in the game. But what if the user wants to move the floor after they place the chair? Would the editor make all static objects dynamic actors in the physics engine for the sake of editing, or would the physics engine even be in use by the editor at all? Sorry for the long-winded question.
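One common answer to the snap-to-ground part is to simply reuse the physics engine's ray query. A minimal sketch with Bullet follows; the wrapping function is hypothetical, while the Bullet calls themselves are real.

// Snap an object to the nearest surface below it using the physics engine's
// own ray cast (Bullet here). Only the wrapping function is hypothetical.
#include <btBulletDynamicsCommon.h>

bool SnapToGround(btCollisionWorld& world, btVector3& objectPosition)
{
    const btVector3 from = objectPosition;
    const btVector3 to   = objectPosition - btVector3(0.0f, 10000.0f, 0.0f); // ray straight down

    btCollisionWorld::ClosestRayResultCallback callback(from, to);
    world.rayTest(from, to, callback);

    if (!callback.hasHit())
        return false;                          // nothing below the object

    objectPosition = callback.m_hitPointWorld; // place the object on the hit point
    return true;
}

Note that ray casts hit static collision objects just as well as dynamic ones, so the floor doesn't need to become a dynamic actor for the editor's sake; editors that use the physics scene typically move static objects by setting their transforms directly rather than simulating them.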

You can use your depth buffer to get a 3D location. No physics or knowledge of the mesh is required. I can post my algorithm when I get home from work.

@ExErvus

I can't see how the depth buffer would store this information. What if the floor is off camera or occluded by another object?

Sorry, kind of forgot about the post, I will post my implementation tonight.

You use math to pull that information out of the depth buffer.

That is the thing: yes, you are only going to get what your depth buffer can see from the camera's point of view. I use exactly this to place objects in my editor and it works fine.

If something is interrupting the view from the camera to the ground, you can't get a location on the ground. Getting around this requires some extra work, so the question is: does that work justify the outcome? Realistically speaking, when will you not be able to see the ground of your map, and if you can't, would a simple movement of the mouse fix that so that you can place your chair?

If it is necessary, then you will have to expose the mesh information to your editor (which is fine) and cast a ray from the chair location to the ground mesh and find the intersection. This means you have to accommodate and specifically handle this, i.e. keep the ground object separate from chair objects, or you could modularize it and take general object A and general object B and project A onto B. The details are in the implementation.
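For the mesh route, here is a minimal sketch of the ray/triangle test you would run against the ground mesh's triangles (standard Möller-Trumbore; the Vec3 helpers are just for the example and not from any engine):

// Sketch of the "cast a ray against the mesh yourself" option. Assumes the
// editor can iterate the ground mesh's triangles; Vec3 is a stand-in type.
#include <cmath>
#include <optional>

struct Vec3 { float x, y, z; };

static Vec3  Sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3  Cross(Vec3 a, Vec3 b) { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
static float Dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Moller-Trumbore ray/triangle test: returns the distance t along the ray if it hits.
std::optional<float> RayTriangle(Vec3 origin, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2)
{
    const Vec3 e1 = Sub(v1, v0), e2 = Sub(v2, v0);
    const Vec3 p  = Cross(dir, e2);
    const float det = Dot(e1, p);
    if (std::fabs(det) < 1e-6f) return std::nullopt;   // ray parallel to triangle

    const float inv = 1.0f / det;
    const Vec3 s = Sub(origin, v0);
    const float u = Dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return std::nullopt;

    const Vec3 q = Cross(s, e1);
    const float v = Dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return std::nullopt;

    const float t = Dot(e2, q) * inv;
    return (t > 0.0f) ? std::optional<float>(t) : std::nullopt;
}

Cast the ray straight down from the chair's position, test it against each triangle of the ground mesh, and keep the closest hit; for big levels you would walk a spatial hierarchy (BVH/octree) instead of testing every triangle.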

Ok, here is my implementation; the comments should describe what is happening (try not to pay attention to my shoddy error handling...).


//Offscreen
if(x < 0 || y < 0)
	return LWSVector3(0.0f, 0.0f, 0.0f);

//Normalize mouse coordinates to [-1,1]
D3DXVECTOR3 v;
v.x =  ((( 2.0f * x ) / _deviceWidth  ) - 1);
v.y = -((( 2.0f * y ) / _deviceHeight ) - 1);
v.z =  1.0f;

//Get depth data into offscreen surface for reading
if(_d3dDevice->GetRenderTargetData(viewPtr->gBuffer.depthBufferSurf, viewPtr->gBuffer.offscreenR32FSurface) != D3D_OK)
	ErrorMessenger::ReportMessage("Failed to GetRenderTargetData!", __FILE__, __LINE__);

//Lock surface data
D3DLOCKED_RECT surfaceData;
viewPtr->gBuffer.offscreenR32FSurface->LockRect(&surfaceData, 0, D3DLOCK_READONLY);
	
//Get the index where the mouse is located
DWORD index = (x * 4 + (y  * (surfaceData.Pitch)));

//Find depth
float* depth = nullptr;
BYTE* bytePointer = (BYTE*)surfaceData.pBits;
depth = reinterpret_cast<float*>(bytePointer + index);

//Unlock
viewPtr->gBuffer.offscreenR32FSurface->UnlockRect();

if(depth)
{
   //Solve mouse world position
   D3DXMATRIX invViewProj;
   D3DXMatrixMultiply(&invViewProj, &_camera->GetViewViewMatrix(0), &_camera->GetViewProjectionMatrix(0));
   D3DXMatrixInverse(&invViewProj, 0, &invViewProj);

   D3DXVECTOR4 out;
   D3DXVec3Transform(&out, &D3DXVECTOR3(v.x, v.y, *depth), &invViewProj);
				
   return LWSVector3(out.x / out.w, out.y / out.w, out.z / out.w);	
}
