About ThePointingMan

  1. ThePointingMan

    Inversion of control across libraries

    Yes, this sounds a lot like what I am trying to do. My confusion comes from the three separate libraries in your example (the Engine lib, the GraphicsGL4 lib, and the GraphicsDX9 lib) all needing to know of some kind of abstract/interface GraphicsEngine class. In your example it looks like the App game's constructor takes some kind of abstract GraphicsEngine pointer, so App must know of this abstract class. In that case GraphicsEngineDX9 and GraphicsEngineGL4 must both also know about this abstract class, as they would be children of it so they can be passed into that constructor. Each of these classes is in a different library though, so where does this abstract class exist? Does a copy of it just exist in all the libraries? If it exists in the Engine so that App can use it in its constructor, then how will GraphicsEngineDX9 or GraphicsEngineGL4 know about it? If it exists in GraphicsEngineDX9, then the engine would contain it since it contains the GEDX9 lib, but then would a copy of it have to exist in GEGL4? That seems a little odd to me, but maybe it is fine? Perhaps I'm just missing some concept of abstract classes that is important here? There might be something I'm missing in what you are saying though, because I'm not entirely sure what you mean by "translation unit."
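One answer to "where does the abstract class exist" is: the interface is declared once, in the Engine library's public headers, and every backend library includes that same header rather than keeping its own copy. The sketch below illustrates that layout; all class names (IGraphicsEngine, Game, the backend classes) are illustrative stand-ins, not code from the thread, and the file-boundary comments mark where each piece would live.

```cpp
#include <memory>
#include <string>

// --- EngineLib: IGraphicsEngine.h (the one and only declaration) ---
class IGraphicsEngine {
public:
    virtual ~IGraphicsEngine() = default;
    virtual std::string Name() const = 0;
    virtual void DrawFrame() = 0;
};

// --- GraphicsDX9Lib: includes EngineLib's IGraphicsEngine.h ---
class GraphicsEngineDX9 : public IGraphicsEngine {
public:
    std::string Name() const override { return "DX9"; }
    void DrawFrame() override { /* D3D9 calls would go here */ }
};

// --- GraphicsGL4Lib: also includes EngineLib's IGraphicsEngine.h ---
class GraphicsEngineGL4 : public IGraphicsEngine {
public:
    std::string Name() const override { return "GL4"; }
    void DrawFrame() override { /* GL calls would go here */ }
};

// --- App: the only place that names a concrete backend ---
class Game {
public:
    explicit Game(std::unique_ptr<IGraphicsEngine> gfx) : gfx_(std::move(gfx)) {}
    std::string Backend() const { return gfx_->Name(); }
private:
    std::unique_ptr<IGraphicsEngine> gfx_;
};
```

With this layout the backend libraries depend on EngineLib's headers (for the interface declaration only), while EngineLib never depends on the backends; swapping DX9 for GL4 is a one-line change in App.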
  2. Alright, so let's say I've got a little library for graphics called... "GraphicsLib", a physics library called "PhysicsLib", and another to hold them together... the "EngineLib". How can I go about making sure that EngineLib is not entirely dependent on GraphicsLib and PhysicsLib? I've read about inversion of control / dependency injection and it sounds like it's the right thing for this, but it's left me a little bit puzzled.

     What I've read about dependency injection is that you would create an interface and then inject the actual object you want it to be into the object using it. So in this example there would be an interface for the graphics and for the physics, and then the Engine class would have a setter for each of those interfaces.

     Now that's pretty straightforward, but what happens when the graphics system isn't part of the same library? The interface needs to exist in both sections: the engine needs to know of the interface's existence so it can call all of its functions, yes? But then the graphics and physics libs need to know of their respective interfaces as well, so they can inherit from them. Is dependency injection really not the tool for the job here, or am I just thinking about things all wrong?

     I've also thought of an alternative where there is another library that both the "Engine" and the "Physics/Rendering" libs use, containing interfaces for everything the engine might use, but... that seems like it could be a bit of a stretch to force this to work.
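The setter-injection arrangement described above can be sketched as follows. This is a toy illustration, not a real engine: the interface names (IGraphics, IPhysics) and the string-returning methods are invented purely so the wiring is visible; in practice the interfaces would live in EngineLib's public headers so both sides see the same declaration.

```cpp
#include <string>

// Hypothetical interfaces; in the layout discussed above they would sit in
// EngineLib's public headers, visible to both EngineLib and the concrete libs.
class IGraphics {
public:
    virtual ~IGraphics() = default;
    virtual std::string Render() = 0;
};

class IPhysics {
public:
    virtual ~IPhysics() = default;
    virtual std::string Step() = 0;
};

// EngineLib: depends only on the interfaces, never on GraphicsLib/PhysicsLib.
class Engine {
public:
    void SetGraphics(IGraphics* g) { graphics_ = g; }
    void SetPhysics(IPhysics* p) { physics_ = p; }
    std::string Tick() {
        // The engine drives the systems through the interfaces only.
        return physics_->Step() + "+" + graphics_->Render();
    }
private:
    IGraphics* graphics_ = nullptr;
    IPhysics* physics_ = nullptr;
};

// GraphicsLib / PhysicsLib: concrete implementations, injected by the app.
class SpriteGraphics : public IGraphics {
public:
    std::string Render() override { return "draw"; }
};
class BoxPhysics : public IPhysics {
public:
    std::string Step() override { return "step"; }
};
```

The "extra interfaces library" alternative mentioned at the end is also a legitimate pattern; putting the interfaces in EngineLib itself just collapses that extra library into the engine's own headers.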
  3. I am liking the idea of the higher-level code having knowledge of both ideas, and then keeping the actual character in the dark about everything; that seems to be what most of you are suggesting too. So if I'm getting this right, the player itself will basically just be a block of data: it has some render information, its transform, and maybe some collider information, but it doesn't really do anything. Then I have some higher-level systems, like one for keeping objects grounded that has information about the physics world. The game state passes everything that needs to be affected by this system to it: all the players, NPCs, etc. Another example could be a player controller that tells the player to move based on input or something. Nothing is actually done by the character, only by things whose purpose is telling the player to do things.

     Yeah haha, this is something I'm trying to push on myself; it's so easy to get lost in the idea of engine construction and completely forget about what you wanted to use the engine to make. I'm just making small-scale 2D stuff too, so I really don't need to go overboard on engine construction.
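The "player is just data, systems own the logic" arrangement can be sketched like this. Everything here is illustrative (the component fields, the flat-ground stand-in for a real raycast): the point is only that the grounding system knows about the physics world while the player struct does not.

```cpp
#include <vector>

// The player is a plain block of data; it never calls into any engine module.
struct Transform { float x = 0, y = 0; };
struct Player {
    Transform transform;
    bool grounded = false; // written by the system, never by the player itself
};

// Stand-in for the physics world; a real engine would do a raycast here.
struct PhysicsWorld {
    float groundHeight = 0.0f;
    bool HitsGroundBelow(const Transform& t, float maxDist) const {
        return t.y - groundHeight <= maxDist;
    }
};

// The system is the only thing that knows physics exists. The game state
// hands it the players (and NPCs, etc.) each frame.
class GroundingSystem {
public:
    explicit GroundingSystem(const PhysicsWorld& world) : world_(world) {}
    void Update(std::vector<Player>& players) {
        for (auto& p : players)
            p.grounded = world_.HitsGroundBelow(p.transform, 0.1f);
    }
private:
    const PhysicsWorld& world_;
};
```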
  4. Heya! I've been doing some work on a little engine for myself, and I've stumbled upon a little thing that I'm not entirely sure how I should approach: I'm not sure how I should handle instances where an object needs to communicate with one of the modules of the engine. A specific example of this would be a raycast. If I have a player who needs to care about whether or not they're on the ground, for example, then when I update the player I'm going to want them to shoot a raycast down to see if there is in fact something beneath them. This would mean that the object needs to know about the physics system's existence, a dependency that I'd rather avoid! Searching through some older topics I've seen a suggestion that the objects store a pointer to the scene, and the scene then contains things like the collision system, etc., an idea that scares me for similar reasons. Am I just being too paranoid about this? Functionally it would work to have these dependencies; should their existence bother me, or would anyone be able to explain some better designs for avoiding these kinds of dependencies?
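One common middle ground between "object stores a scene pointer" and "object knows the physics system": hand the update call a narrow query interface, so the object depends only on "something that can answer raycasts", not on the physics system's concrete type. A minimal sketch, with all names (IRaycastQuery, FlatGroundPhysics) invented for illustration:

```cpp
// A narrow interface: the only thing the player may ask of the outside world.
class IRaycastQuery {
public:
    virtual ~IRaycastQuery() = default;
    virtual bool RaycastDown(float x, float y, float maxDist) const = 0;
};

struct PlayerEntity {
    float x = 0, y = 0;
    bool grounded = false;
    // The query is passed in per update; the player stores no engine pointers.
    void Update(const IRaycastQuery& query) {
        grounded = query.RaycastDown(x, y, 0.1f);
    }
};

// The physics module implements the query; the player never sees this type.
class FlatGroundPhysics : public IRaycastQuery {
public:
    bool RaycastDown(float, float y, float maxDist) const override {
        return y <= maxDist; // toy world: ground plane at y == 0
    }
};
```

This keeps the dependency one-directional and makes the player trivially testable with a fake query.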
  5. I'm currently going to school for programming, and my 3D graphics teacher absolutely hates Unity. He complains about how he can always tell when a game was made with Unity, because the load times will be huge for small things and he will think to himself, "That shouldn't have taken that long." Another complaint of his is that it does everything for you and thus leads to incredibly generic games. I rather like Unity, as it allows me to whip up a game fairly quickly without having to worry about first making a framework, but I must admit hearing his hate for it does spark a bit of fear in me. In the long run, if my productions do get bigger, will Unity's ease of use harm my product in the end, despite its convenience, or is that entirely dependent on the user? If my fundamentals of coding are strong enough, does its easy workflow still come with a price, or is it just as efficient as any other engine? Or is my teacher's hate for Unity justified?
  6. ThePointingMan

    Smoothing out the ray tracing of a surface on a curve

    For the hover mechanic, I shoot a raycast straight down from the center of the ship; if the distance from the start of the ray to where the ray hits is shorter than a specific amount, it moves the ship back up to the minimum distance. As for the physics engine, it seems as though Unity uses the PhysX engine. Unfortunately I can't give you specific pieces of code right now as I am not at my house, but when I return I will put up some more info on how I am doing everything. I think the first suggestion AllEightUp gave will probably do the trick though! Thanks very much for your input. I'm also thinking of doing something like what rumblesushi suggested: having a raycast at the middle, nose, and back, all checking, would probably be best.
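The two pieces described here (clamp to a minimum hover distance, and blend several ray samples so one ray clipping an upcoming curve cannot yank the whole ship) can be sketched as two small functions. This is an illustration of the idea only, not the actual Unity code; simple averaging of the three hit distances is one reasonable choice, assumed here.

```cpp
// Blend the nose, middle, and tail ray hit distances. Averaging smooths the
// response when the nose ray suddenly reads short on an upward curve.
float SmoothedHoverHeight(float noseHit, float middleHit, float tailHit) {
    return (noseHit + middleHit + tailHit) / 3.0f;
}

// The mechanic from the post: if the (smoothed) hit distance is closer than
// the minimum hover distance, push the ship back out to that distance.
float ClampToMinHover(float measured, float minHover) {
    return measured < minHover ? minHover : measured;
}
```

Averaging the surface normals from the three rays in the same way (then renormalizing) would likewise smooth the orientation change described in the question below.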
  7. I've been using Unity for some school projects recently, and I made a racing game fairly similar to F-Zero GX. It was really unpolished though, and I've got a question about one thing that I thought really needed polish. While the hover car goes along the track, it shoots a ray directly below it and orientates itself so that the car's up is the same as the surface's normal. Gravity is always pulling in the hover car's down direction. The biggest problem I am having with this is upward curves. Let's say I was doing a loop-de-loop: the car would move forward by an amount, then orientate itself and move forward again. If you are going fast enough, though, you end up moving so far forward that you tap the surface of the loop-de-loop's curve with your ship, making you lose all momentum. Does anyone have any ideas as to what I could do to work around this?
  8. ThePointingMan

    Depth Transparency issues

    So I literally need to just make sure that when something is rendered it is rendered before everything that it needs to cover?
  9. ThePointingMan

    Depth Transparency issues

    I've been working on a Direct3D game for a while now and I've come across an issue with alpha blending that I don't quite know how to solve. I've set my render states like this:

```cpp
d3ddev->SetRenderState(D3DRS_CULLMODE, D3DCULL_NONE); // draw both sides of the triangles
d3ddev->SetRenderState(D3DRS_LIGHTING, FALSE);        // turn off the 3D lighting

// make it so a thing's color can fade
d3ddev->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
d3ddev->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
d3ddev->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);

d3ddev->SetRenderState(D3DRS_ZENABLE, TRUE);          // turn on the z-buffer
d3ddev->SetTextureStageState(0, D3DTSS_ALPHAOP, D3DTOP_MODULATE);
```

Now at first I thought everything was perfectly fine: things that were transparent in my PNG file would go on screen transparent. But I later found out that they will only be transparent to things they have been rendered after. So if I render a see-through space ship, then a red cube, then an asteroid, and put the red cube in front of both the space ship and the asteroid, only the space ship will be visible with a red tinge; the asteroid will be completely blocked out. Does anyone know why this is happening and how it can be solved?
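This is the expected interaction of the z-buffer with alpha blending: the transparent cube writes its depth first, so the asteroid behind it later fails the depth test and is never blended. The standard fix is to draw opaque geometry first, then draw transparent objects sorted back-to-front by distance from the camera. A minimal sketch of that sort order (the Renderable struct is illustrative, not from the post):

```cpp
#include <algorithm>
#include <vector>

struct Renderable {
    float distanceToCamera;
    bool transparent;
};

// Opaque objects draw first in any order (the z-buffer sorts them correctly);
// transparent objects draw afterwards, farthest from the camera first, so
// each blend sees everything behind it already in the framebuffer.
void SortForAlphaBlending(std::vector<Renderable>& objects) {
    std::sort(objects.begin(), objects.end(),
              [](const Renderable& a, const Renderable& b) {
                  if (a.transparent != b.transparent) return !a.transparent;
                  if (!a.transparent) return false; // opaque order irrelevant
                  return a.distanceToCamera > b.distanceToCamera;
              });
}
```

A common refinement is to also disable depth *writes* (but keep the depth test) while drawing the sorted transparent pass, so transparent objects never occlude each other in the z-buffer.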
  10. ThePointingMan

    Rotation in 3D

    Inverting it seems to have fixed everything! Thanks so much.
  11. ThePointingMan

    Rotation in 3D

    In the long run I did end up going with the method you had suggested, but things aren't quite working correctly, and I can't pinpoint what the problem is. It appears as though the camera is rotating around the origin, but I am fairly certain it should be translating and rotating locally with this code. I'm also not sure whether I set up my matrix properly; I have never manually set the values in a matrix before and I couldn't find the order in which it needs to be filled on MSDN.

    Edit: I have this feeling it is related to this line here:

```cpp
node->Position->Position += (node->Position->Forward * node->Velocity->VelocityPos.z);
```

My guess is that this is permanently multiplying forward by the distance, while it only needs to be multiplied for this assignment and then go back to not being multiplied.

Here's what my camera matrix looks like:

```cpp
// Rotate dem vectors
D3DXMATRIX yaw;
D3DXMatrixIdentity(&yaw);
// Programming when you are tired is a bad idea, very important
D3DXMatrixRotationAxis(&yaw, &ActiveCamera->Up, node->Position->Angle.y * (M_PI / 180));
D3DXVec3TransformCoord(&ActiveCamera->Forward, &ActiveCamera->Forward, &yaw);
D3DXVec3TransformCoord(&ActiveCamera->Right, &ActiveCamera->Right, &yaw);

// set the view transform
node->Camera->mCamera(0,0) = ActiveCamera->Right.x;
node->Camera->mCamera(1,0) = ActiveCamera->Up.x;
node->Camera->mCamera(2,0) = ActiveCamera->Forward.x;
node->Camera->mCamera(3,0) = ActiveCamera->Position.x;
node->Camera->mCamera(0,1) = ActiveCamera->Right.y;
node->Camera->mCamera(1,1) = ActiveCamera->Up.y;
node->Camera->mCamera(2,1) = ActiveCamera->Forward.y;
node->Camera->mCamera(3,1) = ActiveCamera->Position.y;
node->Camera->mCamera(0,2) = ActiveCamera->Right.z;
node->Camera->mCamera(1,2) = ActiveCamera->Up.z;
node->Camera->mCamera(2,2) = ActiveCamera->Forward.z;
node->Camera->mCamera(3,2) = ActiveCamera->Position.z;

// set the projection transform
D3DXMatrixPerspectiveFovLH(&node->Camera->mLense,
                           D3DXToRadian(45),
                           (FLOAT)600 / (FLOAT)600,
                           1.0f,         // the near view-plane
                           1000000.0f);  // the far view-plane
```

This is what the position component looks like:

```cpp
#pragma once
#include "Component.h"
#include "Directx.h"

class PositionComponent : public Component
{
public:
    PositionComponent(unsigned int ID)
        : Component(ID)
        , Position(0, 0, 0)
        , Forward(0, 0, 1)
        , Up(0, -1, 0)
        , Right(1, 0, 0)
        , Angle(0, 0, 0)
    {
    }

    // I realize now that it would have been better to have made a forward
    // vector, up vector and right vector, thus I will be modifying how
    // everything is moved later, but again I'm rushing here!
    // LOL JK QUATERNIONS ALL THE WAY! I don't want none of this gimbal lock
    // crap! TIME TO REWRITE EVERYTHING!
    // I don't think I am ready for quaternions :/
    D3DXVECTOR3 Angle;
    D3DXVECTOR3 Position;
    D3DXVECTOR3 Forward;
    D3DXVECTOR3 Up;
    D3DXVECTOR3 Right;
};
```

This is where every object gets moved:

```cpp
for (auto index = targets.begin(); index != targets.end(); index++)
{
    auto node = static_cast<MoveNode*>(*index);
    node->Position->Position += (node->Position->Forward * node->Velocity->VelocityPos.z);
}
```

And here is where the world and view matrix come in:

```cpp
{
    auto node = static_cast<RenderQuadNode*>(*index);

    // Deal with the matrices
    D3DXMATRIX mTranslate;
    D3DXMATRIX mRotateX;
    D3DXMATRIX mRotatey;
    D3DXMATRIX mRotatez;
    D3DXMATRIX mScale; // MAYBEH LAAAAATA
    D3DXMATRIX mWorld;

    // Set all matrices to a neutral state
    D3DXMatrixIdentity(&mWorld);
    D3DXMatrixIdentity(&mTranslate);
    D3DXMatrixIdentity(&mRotateX);
    D3DXMatrixIdentity(&mRotatey);
    D3DXMatrixIdentity(&mRotatez);
    D3DXMatrixIdentity(&mScale);

    // MoveEverythaaaang
    D3DXMatrixTranslation(&mTranslate,
                          node->Position->Position.x,
                          node->Position->Position.y,
                          node->Position->Position.z);
    //D3DXMatrixRotationX(&mRotateX, node->Position->Angle.x * (M_PI / 180));

    // Update the world
    mWorld = mRotateX * mRotatey * mRotatez * mTranslate;

    // select which vertex format we are using
    d3d->d3ddev->SetFVF(d3d->FVF);

    // set up the camera
    d3d->d3ddev->SetTransform(D3DTS_VIEW, &node->Camera->mCamera);
    d3d->d3ddev->SetTransform(D3DTS_PROJECTION, &node->Camera->mLense);
}
```
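A likely source of the orbit-around-origin symptom: the code above writes the raw camera position into the translation row of the view matrix. A D3DX-style (row-major, row-vector) view matrix is the *inverse* of the camera's world transform, so its rotation part is the transpose of the camera basis (which the code already does) and its translation row must be the negated dot products of the position with each axis, not the position itself. A minimal sketch with plain floats (Vec3 and BuildViewMatrix are illustrative names, not D3DX):

```cpp
struct Vec3 { float x, y, z; };

float Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Row-major view matrix in the D3DX convention: basis vectors transposed into
// the columns, and -dot(position, axis) in the fourth row.
void BuildViewMatrix(float m[4][4], const Vec3& right, const Vec3& up,
                     const Vec3& forward, const Vec3& pos) {
    m[0][0] = right.x;  m[0][1] = up.x;  m[0][2] = forward.x;  m[0][3] = 0;
    m[1][0] = right.y;  m[1][1] = up.y;  m[1][2] = forward.y;  m[1][3] = 0;
    m[2][0] = right.z;  m[2][1] = up.z;  m[2][2] = forward.z;  m[2][3] = 0;
    m[3][0] = -Dot(pos, right);    // NOT pos.x
    m[3][1] = -Dot(pos, up);       // NOT pos.y
    m[3][2] = -Dot(pos, forward);  // NOT pos.z
    m[3][3] = 1;
}
```

This matches what the later reply ("Inverting it seems to have fixed everything!") suggests was the fix.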
  12. ThePointingMan

    Rotation in 3D

    Alright, thank you all so much! It sounds like quaternions are the way I will be going; reading up a bit about them, the CPU appears to be able to do quaternion multiplication faster. I've got to admit, I don't quite understand what the D3DXVec3Cross and D3DXVec3Normalize functions do, as well as rotating objects locally. If anyone would like to give some input on any of these things I'd love it, and I will be heading over to MSDN! Whenever I try to rotate something that is not the camera, it ends up orbiting the center point as well; perhaps when I get the camera fully functional this will become clear though.

    Edit: So as far as I can tell, normalizing a vector changes all of the vector's components to what they would be if the length from the origin to the vector's current position was 1. Crossing vec1 and vec2 treats them as an x and a y and gives you a z, with a magnitude that gets bigger/smaller depending on the size of the angle between the two.

    Edit 2: Well, I am kind of being rushed at the moment, so I might just have to come back to the quaternion method later, as for the most part I have a better understanding of the vector method.
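The intuition in the edit is right. Concretely, these are the operations D3DXVec3Normalize and D3DXVec3Cross compute; the tiny versions below (V3 is an illustrative stand-in for D3DXVECTOR3) show the arithmetic: normalize divides each component by the vector's length, giving a unit vector; cross returns a vector perpendicular to both inputs whose length is |a||b|sin(angle), which is why it shrinks as the inputs become parallel.

```cpp
#include <cmath>

struct V3 { float x, y, z; };

// Scale the vector so its length is 1 (direction preserved).
V3 Normalize(V3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Vector perpendicular to both a and b, length |a||b|sin(angle between them).
V3 Cross(V3 a, V3 b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}
```

This is why cross products show up in camera code: crossing the forward and up vectors yields the camera's right vector, letting you rebuild a full orthonormal basis from two directions.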
  13. ThePointingMan

    Rotation in 3D

    I am using DirectX to try and make a free-roaming camera, and I am having a bit of difficulty when it comes to rotation. I have tried a few things, each with different results, none being the results that I want. This is basically how my camera system works. I plan on changing it a lot later, but I needed this done fast, and it's mostly functional:

```cpp
for (auto index = targets.begin(); index != targets.end(); index++)
{
    auto node = static_cast<CameraNode*>(*index);
    if (node->Player != NULL)
    {
        Position = node->Position; // this is a pointer to the player's position vector
    }
    if (Position != NULL)
    {
        static int i;
        i++;

        // Deal with the matrices
        D3DXMATRIX mTranslate;
        D3DXMATRIX mRotateX;
        D3DXMATRIX mRotatey;
        D3DXMATRIX mRotatez;
        D3DXMATRIX mScale; // MAYBEH LAAAAATA
        D3DXMATRIX mWorld;

        // Set all matrices to a neutral state
        D3DXMatrixIdentity(&mWorld);
        D3DXMatrixIdentity(&mTranslate);
        D3DXMatrixIdentity(&mRotateX);
        D3DXMatrixIdentity(&mRotatey);
        D3DXMatrixIdentity(&mRotatez);
        D3DXMatrixIdentity(&mScale);

        // MoveEverythaaaang
        D3DXMatrixTranslation(&mTranslate, Position->Position.x, Position->Position.y, Position->Position.z);
        D3DXMatrixRotationX(&mRotateX, Position->Angle.x * (M_PI / 180));
        D3DXMatrixRotationY(&mRotatey, Position->Angle.y * (M_PI / 180));
        D3DXMatrixRotationZ(&mRotatez, Position->Angle.z * (M_PI / 180));

        // Update the world
        mWorld = mTranslate * mRotateX * mRotatey * mRotatez; // this line maximum important

        // set the view transform
        D3DXMatrixLookAtLH(&node->Camera->mCamera,
                           &D3DXVECTOR3(0.0f, 0.0f, 600.0f), // the camera position
                           &D3DXVECTOR3(0.0f, 0.0f, 0.0f),   // the look-at position
                           &D3DXVECTOR3(0.0f, -1.0f, 0.0f)); // the up direction

        // set the projection transform
        D3DXMatrixPerspectiveFovLH(&node->Camera->mLense,
                                   D3DXToRadian(45),
                                   (FLOAT)600 / (FLOAT)600,
                                   1.0f,         // the near view-plane
                                   1000000.0f);  // the far view-plane

        node->Camera->mCamera = node->Camera->mCamera * mWorld;
    }
}
```

So basically I am multiplying the camera's matrix by a matrix built from a vector containing pitch, yaw, and roll, and a position vector.

Now if I multiply mTranslate by the rotation matrices, then I can turn my camera just fine, but moving forward only goes along the base forward: if I am looking left, forward is not to the left, but remains unchanged. If I do it the other way around, then when I rotate, I orbit around the center point. To counter this I went with the version that still moves forward and backwards independent of which direction you are looking, and threw this into my player move code:

```cpp
for (auto index = targets.begin(); index != targets.end(); index++)
{
    auto node = static_cast<MoveNode*>(*index);

    // Deal with the matrices
    D3DXMATRIX mTranslate;
    D3DXMATRIX mRotateX;
    D3DXMATRIX mRotatey;
    D3DXMATRIX mRotatez;
    D3DXMATRIX mScale; // MAYBEH LAAAAATA
    D3DXMATRIX mWorld;

    // Set all matrices to a neutral state
    D3DXMatrixIdentity(&mWorld);
    D3DXMatrixIdentity(&mTranslate);
    D3DXMatrixIdentity(&mRotateX);
    D3DXMatrixIdentity(&mRotatey);
    D3DXMatrixIdentity(&mRotatez);
    D3DXMatrixIdentity(&mScale);

    // MoveEverythaaaang
    D3DXMatrixTranslation(&mTranslate, node->Position->Position.x, node->Position->Position.y, node->Position->Position.z);
    D3DXMatrixRotationX(&mRotateX, node->Position->Angle.x * -(M_PI / 180));
    D3DXMatrixRotationY(&mRotatey, node->Position->Angle.y * -(M_PI / 180));
    D3DXMatrixRotationZ(&mRotatez, node->Position->Angle.z * -(M_PI / 180));

    D3DXMATRIX mRotation = mRotateX * mRotatey * mRotatez;
    D3DXVECTOR3 Position;
    //D3DXVec3TransformCoord(&Position, &node->Velocity->VelocityPos, &mRotateX);
    D3DXVec3TransformCoord(&Position, &node->Velocity->VelocityPos, &mRotation);
    //D3DXVec3TransformCoord(&Position, &node->Velocity->VelocityPos, &mRotatez);

    node->Position->Position += Position; // * time;
    node->Position->Position += Position; // * time;
    node->Position->Angle += node->Velocity->AngularVelocity; // * time;
}
```

Now this works if I only rotate on one axis, but as soon as I start rotating on two axes, things go crazy! Any advice on how I should go about making this not go insane for a free-roaming camera? How have you accomplished what I am trying to do in the past?
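One common way out of the "forward doesn't follow the look direction" problem is to keep yaw/pitch angles, rebuild the forward direction from them each frame, and move along that rebuilt forward, instead of multiplying a fixed translation by rotation matrices. A minimal sketch (FreeCamera and its conventions are illustrative: yaw around Y, pitch around X, left-handed with +Z as the initial forward, matching the D3DX-style setup above):

```cpp
#include <cmath>

struct FreeCamera {
    float yawDeg = 0, pitchDeg = 0; // orientation state, in degrees
    float x = 0, y = 0, z = 0;      // position

    // Move along the direction the camera is currently looking.
    void MoveForward(float dist) {
        const float d2r = 3.14159265358979f / 180.0f;
        float yaw = yawDeg * d2r, pitch = pitchDeg * d2r;
        // Forward vector rebuilt from the angles each call, so "forward"
        // always points where the camera looks; no orbit around the origin.
        x += dist * std::sin(yaw) * std::cos(pitch);
        y += dist * -std::sin(pitch);
        z += dist * std::cos(yaw) * std::cos(pitch);
    }
};
```

The view matrix is then built from the same angles and the updated position every frame, rather than accumulating matrix multiplies, which also avoids the drift that accumulation causes.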
  14. ThePointingMan

    Component Entity system difficulties

    Ok, I see! That's exactly what you did in your system then; I just didn't realize exactly what a bit-set was when I was reading that. Thanks yet again, BeerNutts.
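For reference, the bit-set idea being discussed can be sketched as follows: each component type gets one bit, an entity's mask is the OR of its components' bits, and a system matches an entity when the entity's mask contains every bit the system requires. The component IDs and helper names below are illustrative, not from BeerNutts' system.

```cpp
#include <bitset>
#include <cstddef>
#include <initializer_list>

constexpr std::size_t kMaxComponents = 32;
using ComponentMask = std::bitset<kMaxComponents>;

// Each component type owns one bit position (illustrative IDs).
enum ComponentId { PositionC = 0, VelocityC = 1, RenderC = 2 };

ComponentMask MakeMask(std::initializer_list<ComponentId> ids) {
    ComponentMask m;
    for (auto id : ids) m.set(id);
    return m;
}

// A system is interested in an entity when the entity has every component
// the system requires: (entity AND required) == required.
bool Matches(const ComponentMask& entity, const ComponentMask& required) {
    return (entity & required) == required;
}
```

The appeal is that the system-to-entity match is a single bitwise AND and compare, instead of a lookup per component type.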
