About tapped

  1. tapped

    Is this loop any good?

    To me this looks like a main loop that does both rendering (you call each iteration a frame) and logic. The loop you made is not deterministic: how long your 60 frames take depends on how long your logic takes to execute, and will differ between systems and with the amount of work. But that is without knowing how you have set up your rendering API; in other words, I am not sure your frames are bound to the screen refresh rate. If VSYNC is enabled (it should be, to prevent screen tearing and juddery pixel movement), your loop could be rewritten to something like:

    ```cpp
    stopwatch frameTime;
    setSwapBufferInterval(1); // Swap buffers at an interval of one frame.
    while (window.isOpen())
    {
        frameTime.restart();
        // LOGIC / RENDERING
        swapBuffer(); // Blocks until it is time to swap, based on the screen refresh rate.
        frameTime.stop();
        std::cout << "FPS: " << 1.0 / frameTime.seconds << std::endl;
        // You should probably throttle the FPS printing, for example by averaging the FPS once per second.
    }
    ```

    Beware that there exist screens with 120 Hz (and more than that too), which may not be suitable for your physics engine or game logic (too little precision per timestep, etc.). Also, having a dedicated rendering thread is what is recommended today. There is unfortunately no standard way of doing it, but there are per-platform solutions. On OS X and iOS you have CADisplayLink (CVDisplayLink on OS X). On Android you have the Choreographer. In the browser you have requestAnimationFrame. On Windows you make your own thread and use GDI+ (only works on Vista and upwards). So now you need to sync rendering and logic, since rendering happens in its own thread, while event handling and other things happen on the main thread. This can be done with simple communication between your main thread and rendering thread, for example by using semaphores.
    Also, all the per-platform methods above give you the current frame duration, which is very accurate and can be used later in your framerate-independent code to do proper physics integration or movement. I guess this may seem a bit daunting, but I have seen a lot of weird solutions to this problem, and in my experience the best way to write a main loop is by reading the docs for your target platform.
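    The frame duration mentioned above is typically fed into a fixed-timestep accumulator, so the logic stays deterministic even when frame times vary. A minimal sketch of that idea (function and variable names here are illustrative, not from any real API):

    ```cpp
    #include <cassert>

    // Fixed-timestep accumulator: logic advances in constant dt slices
    // regardless of how long each rendered frame took. Returns how many
    // logic steps a sequence of measured frame durations produces.
    int runFixedTimestep(const double *frameDurations, int frameCount, double dt)
    {
        double accumulator = 0.0;
        int logicSteps = 0;
        for (int i = 0; i < frameCount; ++i)
        {
            accumulator += frameDurations[i]; // frame time reported by the display link
            while (accumulator >= dt)         // run logic in fixed-size slices
            {
                ++logicSteps;                 // updatePhysics(dt) would go here
                accumulator -= dt;
            }
            // render() would go here, interpolating between the last two logic states
        }
        return logicSteps;
    }
    ```

    The leftover fraction in the accumulator is what you would use to interpolate the rendered state between two logic steps.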
  2. tapped

    Physic of ball for review

    First of all, your code is not cache friendly at all, and you branch too much; for example, why check whether the force vector is empty? Another thing: you have not made a common solution for the physics update. The same update function should work for all types of shapes, not only spheres. The update function should only be a physics integration. As you may recall, for linear physics: a = F / m, v += a * t, s += v * t + 0.5 * a * t * t; and for angular physics: angularAcc = torque * I⁻¹, rotation += angularAcc * t, orientation += rotation * t. I see you are doing this, but you should separate your integration code from the contact resolving, so that you can have a common solution for all bodies. So what you need to do is clean up your code and separate it, keeping in mind that your solution should not be specific to each shape, but common. Also, to paste code from Visual Studio, turn on "spaces for tabs" in the editor settings, and then convert your code file before you copy it.
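    As a hedged sketch of what such a shape-independent integration step could look like (1D and with made-up names for brevity; a real engine would use vectors and also handle the angular part):

    ```cpp
    #include <cassert>

    // Illustrative rigid body state; Body and its members are hypothetical,
    // not taken from the poster's code.
    struct Body
    {
        double mass;
        double force;    // accumulated force for this step
        double velocity;
        double position;
    };

    // One integration step using the kinematics from the post:
    // a = F / m, s += v*t + 0.5*a*t^2 (with the old v), then v += a*t.
    void integrate(Body &b, double t)
    {
        double a = b.force / b.mass;
        b.position += b.velocity * t + 0.5 * a * t * t;
        b.velocity += a * t;
        b.force = 0.0; // clear the accumulator for the next step
    }
    ```

    Because this function only touches mass, force, velocity, and position, the same code serves every shape; contact resolution stays elsewhere.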
  3.   Thanks for pointing that out, I have edited the post.
  4. I can't really see your problem. You can create a function that returns the world client position, and use it whenever you want. Also, I would have made a struct/class for points/vectors, so that you could easily handle coordinates.

    ```cpp
    // Example of what a Vector3 struct may look like
    struct Vector3
    {
        int x, y, z;
        Vector3() : x(0), y(0), z(0) {}
        Vector3(int x, int y, int z) : x(x), y(y), z(z) {}
        void set(int x, int y, int z) { this->x = x; this->y = y; this->z = z; }
        Vector3 operator+(const Vector3 &a) const { return Vector3(x + a.x, y + a.y, z + a.z); }
        /* ... add some other neat functions ... */
    };

    // Example #1
    Vector3 Object::getWorldCoord()
    {
        Vector3 result;
        if (parent)
        {
            result = parent->getWorldCoord() + m_position; // m_position is a Vector3 member of the Object class.
        }
        else
        {
            result = m_position;
        }
        return result;
    }

    // Example #2, which works out of the box for you (assuming int members x, y, z)
    void Object::getWorldCoord(int &outX, int &outY, int &outZ)
    {
        if (parent)
        {
            int parentX, parentY, parentZ;
            parent->getWorldCoord(parentX, parentY, parentZ);
            outX = parentX + x;
            outY = parentY + y;
            outZ = parentZ + z;
        }
        else
        {
            outX = x;
            outY = y;
            outZ = z;
        }
    }
    ```
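    To illustrate how the parent chain composes, here is a hedged, self-contained sketch of the same idea (Node and its members are made up for this example, not the poster's actual Object class):

    ```cpp
    #include <cassert>

    // Each node stores a local offset relative to its parent; the world
    // position is the sum of the offsets up the chain.
    struct Node
    {
        Node *parent = nullptr;
        int x = 0, y = 0, z = 0; // local position relative to parent

        int worldX() const { return (parent ? parent->worldX() : 0) + x; }
        int worldY() const { return (parent ? parent->worldY() : 0) + y; }
        int worldZ() const { return (parent ? parent->worldZ() : 0) + z; }
    };
    ```

    A child at local (10, 20, 30) under a root at (1, 2, 3) then reports world position (11, 22, 33).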
  5. tapped

    How to create a physics engine

    Did I mention Bullet? And even if I am wrong about Havok, since it currently has limited GPU support, it is going to have full GPU support on the next-gen consoles. Hmm, that was poorly expressed by me. What I mean is that AAA physics engines are moving towards GPU acceleration, which is the state of the art. With the next-gen consoles, we will see CPU-based physics engines slowly being thrown away in favor of GPU-based ones. Why create a CPU-based physics engine, when what we are missing is a good, open-source, platform-independent GPU-based physics engine? Not that he is going to create an open-source engine, but you see what I mean. The question is what the goal is. Do you want to create a useful engine and push the market, or are you going to create an engine just for education (where parallel programming is a good start, though)?
  6. tapped

    How to create a physics engine

    If you want to make a physics engine that may be used in a real project, you should hardware accelerate it for the GPU, via OpenCL or just OpenGL shaders (a lot harder and more hackish, though). That's what makes the performance of PhysX and Havok so good. As people have pointed out, the methods and algorithms used in a physics engine are common across engines. And as L. Spiro said, Real-Time Collision Detection and Game Physics Engine Development are very good books (I have both). So it is up to you how you are going to implement the engine; however, you should have a decent understanding of geometry and the physics laws you may recall from high school. Another thing: it takes time to create a physics engine, so it should only be made for learning purposes or as part of your CV.
  7. tapped

    Opengl ES on Desktop

    Yes, you can use OpenGL ES on desktop computers. However, the approaches for making this work are both platform and GPU dependent. If you are using Linux, just download the mesa-gles drivers, or even better, find the vendor-specific drivers for GLES. On Windows, I suggest using the SDK from your GPU vendor; there is one for AMD and one for Nvidia. Look here for installation instructions for the Nvidia emulator:   You may also look at for a platform-independent solution.   Good luck, and remember: I have not yet found a 100% stable solution for my hardware setup, so don't expect too much, and happy debugging ;)
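    One way to sanity-check whether you actually got an ES context on the desktop is to look at the GL_VERSION string: the OpenGL ES specifications require it to begin with "OpenGL ES", while desktop GL reports just the version number. A hedged helper for that check (isGLESContext is my own name, not part of any API; you would pass it the result of glGetString(GL_VERSION)):

    ```cpp
    #include <cassert>
    #include <cstring>

    // Returns true if the given GL_VERSION string identifies an OpenGL ES
    // context. ES contexts report e.g. "OpenGL ES 2.0 ...", desktop GL
    // reports e.g. "4.6.0 NVIDIA ...".
    bool isGLESContext(const char *versionString)
    {
        return versionString != nullptr &&
               std::strncmp(versionString, "OpenGL ES", 9) == 0;
    }
    ```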
  8. tapped

    What is a good C++ Framework?

    You need to ask yourself what your motivation is, and what you want to learn or produce from it. With the answers to these questions, you can find a framework that suits your needs. For instance, if you want to learn how to use a fairly high-level framework, use SFML. However, SFML can also be used for low-level OpenGL programming too.

    If you want to be individual and reinvent the wheel, then create everything yourself using the Windows API and OpenGL/DirectX contexts to set up some nice graphics. When it comes to the decision between OpenGL and DirectX, you need to weigh your experience level against your program's specifications. Today, DirectX is very simple to get started with using Visual Studio 2012 and the Windows SDK; there is even a template for it, so you don't need to spend time on setup but can code right away. Unfortunately, it will only work on Windows. Use OpenGL if you want to support a lot of platforms, but keep in mind that OpenGL is harder to learn, especially in the beginning. Also, if you are not experienced with cross-platform coding, you will pretty quickly find it a nightmare from time to time, since compilers work differently, so your code may look like a mess because of all the compiler-specific stuff. However, it is a good thing to learn how to structure code for more than one platform. And remember, the best way to learn is from your mistakes.

    But again, what do you want? If you want to learn as much as possible, then you are better off reinventing the wheel. And if you want to make a game and you have a team of artists, then stick to UDK, Unity3D, Torque3D, C4, etc.
  9. Why not use the existing tile system as a grid? Here is a snippet that may be useful:

    ```cpp
    void DrawTilesWithinFrustum(Vec2f worldPos, int windowWidth, int windowHeight)
    {
        Vec2f minIndex = worldPos / Vec2f(TILE_SIZE_X, TILE_SIZE_Y);
        Vec2f maxIndex = (worldPos + Vec2f((float)windowWidth, (float)windowHeight)) / Vec2f(TILE_SIZE_X, TILE_SIZE_Y);

        // We want to be sure to also draw the tiles that are only slightly inside the frustum.
        maxIndex.x += 1;
        maxIndex.y += 1;

        int numColumns = (int)(maxIndex.x - minIndex.x) + 1;
        int numRows    = (int)(maxIndex.y - minIndex.y) + 1;

        for (int row = 0; row < numRows; ++row)
        {
            for (int column = 0; column < numColumns; ++column)
            {
                // The row stride must be the map width in tiles (NUM_TILES_X here), not the tile size.
                int index = (unsigned int)(minIndex.x + column) + (unsigned int)(minIndex.y + row) * NUM_TILES_X;
                Tiles[index]->Draw();
            }
        }
    }
    ```

    worldPos is the position of the camera. This approach runs a lot faster than brute-forcing every tile, since it only loops through the tiles that you actually want to draw. Beware that this code snippet is not tested, so it may still include typos etc.   Good luck.
  10. tapped

    Space Partitioning on Sphere

    Hmm, why not use the same grid system that is used to represent Earth coordinates? In other words, use spherical coordinates. Spherical coordinates can easily be worked out from Cartesian coordinates as long as you know the radius and the position of the sphere in question. So you create the boundaries virtually by specifying the [θ, φ] extent of each grid cell, as you may be used to when working with normal grids. Then you can calculate cell indices from coordinates, which is done as follows:

    ```cpp
    int GetLocationIndex(const Vec2f &sphericalPoint)
    {
        Vec2f indices2D = sphericalPoint / extent;
        // The row stride should be the number of cells along theta
        // (i.e. the full theta range divided by extent.x), not the extent itself.
        return (unsigned int)indices2D.x + numCellsTheta * (unsigned int)indices2D.y;
    }
    ```

    The x coordinates of the vectors are θ (theta), and the y coordinates are φ (phi).   Note, I have not tried this, so it is only an approach that may work. However, good luck.
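    For completeness, here is a hedged sketch of going straight from a Cartesian point on the sphere to a cell index (sphere centered at the origin; sphericalCellIndex, cellsTheta, and cellsPhi are assumed names for this example, not from the original post):

    ```cpp
    #include <cassert>
    #include <cmath>

    // Map a Cartesian point on a sphere centered at the origin to a grid
    // cell in (theta, phi) space, with cellsTheta x cellsPhi cells.
    int sphericalCellIndex(double x, double y, double z, int cellsTheta, int cellsPhi)
    {
        const double PI = 3.14159265358979323846;
        double r = std::sqrt(x * x + y * y + z * z);
        double theta = std::atan2(y, x) + PI;      // remap to [0, 2*pi)
        double phi = std::acos(z / r);             // [0, pi]
        int it = (int)(theta / (2.0 * PI) * cellsTheta);
        int ip = (int)(phi / PI * cellsPhi);
        if (it >= cellsTheta) it = cellsTheta - 1; // clamp the upper boundaries
        if (ip >= cellsPhi)   ip = cellsPhi - 1;
        return it + ip * cellsTheta;               // row-major: theta runs fastest
    }
    ```

    The clamping matters because points exactly at φ = π (the south pole) would otherwise index one row past the end of the grid.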