About bullfrog

  1. The `Index Buffer` will need updating every frame based on what the camera can see. The `Vertex Buffer` only needs updating when a block is destroyed, which may happen as rarely as once every 1.5 seconds, depending on how fast you can destroy blocks. Notice the index buffer is made up of 4-byte integers. If you had 150,000 faces in your vertex buffer, you would need 900,000 indices to draw every face. Add frustum culling, which takes that down to roughly a third (based on what the camera can see), and 297,000 indices are now required to draw all the faces the camera can see. 297,000 indices * 4 bytes = 1.13MB. That amount of data should have no problem being sent down to the graphics card every frame.
  2. Yes it will; the fewer draw calls the better, in most situations. You may be approaching the problem the wrong way. The graphics card is made to have huge amounts of vertex and index data deleted and loaded every frame. Here's how I got around this same problem with a very good frame rate:
     - For every chunk, calculate which cube faces are visible, then store the vertices and indices for those faces in memory.
     - Create one very large static vertex buffer and fill it with the vertices from all the chunks.
     - Create one very large dynamic index buffer.
     - Every frame, use frustum culling to find which chunks are visible to the camera, fill the index buffer with the indices from the visible chunks, and draw.
     If a cube is destroyed, you will need to rebuild the visible faces for that chunk and refill the whole vertex buffer. Don't worry, like I said, graphics cards are built to do this!
  3. You know what the problem is: the `device` variable is NULL and you are calling draw on it. Step through your code to find out why the device is NULL. If you use Visual Studio, use the call stack.
  4. This is more of an "Is this the correct way of doing it?" question, but I have added functionality if it helps.

        class CModel
        {
        public:
            void Draw(CRenderer& _rRenderer)
            {
                _rRenderer.DrawIndexBuffer(m_uiIndexBufferId);
            }
        };

        class CPlayerModel : public CModel
        {
        public:
            void Draw(CRenderer& _rRenderer, const CVector3& _krPlayerPosition)
            {
                SetModelPosition(_krPlayerPosition);
                CModel::Draw(_rRenderer);
            }
        };

     So the user adds a CPlayerModel to their game. They need to call Draw on the CPlayerModel, and they have the choice of either CModel::Draw or CPlayerModel::Draw. Both are available to be called, even though CPlayerModel::Draw should be the only function used for drawing this object type. What stops the user from calling CModel::Draw? Or should the user just have the "smarts", or do the research, before choosing which draw function to use?
  5. Hi, I always wondered what the right thing to do is in this situation. This is a quick example; the third level of inheritance makes no sense, but it adds more context to the question.

        class CModel
        {
        public:
            void Draw(CRenderer& _rRenderer);
        };

        class CPlayerModel : public CModel
        {
        public:
            void Draw(CRenderer& _rRenderer, const CVector3& _krPlayerPosition);
        };

        class CExamplePlayerModel : public CPlayerModel
        {
        public:
            void Draw(CRenderer& _rRenderer, const CVector3& _krPlayerPosition, void* _pAnotherParameter);
        };

     As the hierarchy extends, the Draw function requires another parameter for the model to draw correctly. Is this correct OOP? What stops a user from using the parent's draw function and therefore breaking the game?
  6. Without seeing the code there is only so much other people can do to help you, such as pointing out common causes of this issue:
     - not defining default values for variables
     - putting function calls inside debug-only macros, such as asserts
     - memory trampling
  7. As mentioned, the object (entity) you are calling `GetLivesRemaining()` on has not been instanced (created/allocated) yet. Use the `Call Stack` feature in Visual Studio to trace back to where you're getting this null pointer from.
  8. Frequently setting data to a vertex buffer?

    I do not know how XNA works, but if it follows the DirectX API, you will need to specify the size of the vertex buffer; if you post the function for creating the buffer I will be able to tell you. Having some wasted memory in a vertex buffer is not uncommon. I would set the vertex buffer size to the maximum size that the buffer can reach during the game's lifetime. But then again, that might not work with your game. Up to you!
  9. Frequently setting data to a vertex buffer?

    Graphics cards have lots of memory, so it depends on how much you need. If your vertices are 24 bytes each (that's x, y, z position, texture coordinates and diffuse), you can store over 400,000 vertices in 10MB of video memory. Creating a buffer big enough up front will save programming time and lag spikes. But in the end, you need to make what you need to make, to make your game : )
  10. Frequently setting data to a vertex buffer?

    A dynamic vertex buffer tells the graphics card to store the vertices in the best possible place in memory to be written to every frame, or multiple times every frame. A static vertex buffer tells the graphics card to store the vertices in the best possible place in memory to be rendered from only, not updated regularly. Use a static buffer if you're not updating every frame; that's perfect for your event system if it doesn't trigger every frame. Otherwise use a dynamic buffer. For rendering speed: static buffer > dynamic buffer. For updating speed: dynamic buffer > static buffer.
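    For reference, this distinction maps onto buffer-creation flags. A Direct3D 9 flavoured fragment (`pd3dDevice`, `dwFVF` and the byte counts are assumed to exist elsewhere; this shows the flag choice, not a full program):

```cpp
// Static: filled once, rendered many times.
IDirect3DVertexBuffer9* pStaticVB = NULL;
pd3dDevice->CreateVertexBuffer(uiVertexBytes,
                               D3DUSAGE_WRITEONLY,
                               dwFVF, D3DPOOL_MANAGED,
                               &pStaticVB, NULL);

// Dynamic: rewritten every frame; dynamic usage requires the default pool.
IDirect3DIndexBuffer9* pDynamicIB = NULL;
pd3dDevice->CreateIndexBuffer(uiIndexBytes,
                              D3DUSAGE_DYNAMIC | D3DUSAGE_WRITEONLY,
                              D3DFMT_INDEX32, D3DPOOL_DEFAULT,
                              &pDynamicIB, NULL);
```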
  11. C++ Memory Usage & Allocation

    [quote]You don't need to do more tests, especially tests with Task Manager or any process inspector; that makes very little sense. What you should focus on: 1) check for and avoid memory leaks; 2) if the memory usage is still huge, check when it grows and reduce it.[/quote] Allocating cube by cube seems to be the problem. When I allocated the cubes in batches, the memory now settles at the expected levels. Windows' per-allocation overhead and alignment must not be the best. Thank you for everyone's help!
  12. C++ Memory Usage & Allocation

    [quote]Your memory is not allocated one byte after another; there may be holes in your memory, and those holes have to be counted by Task Manager. It's quite reasonable for Task Manager to report two or three times the memory you actually allocated.[/quote] I see what you are saying. I will do another test with allocated blocks of 16 cubes instead of 1 by 1. Thanks!
  13. C++ Memory Usage & Allocation

    From what I understand, and what the program tells me:

        16 * 16 * 16 = 4,096 clusters
        4,096 * 16 * 16 * 16 = 16,777,216 cubes
        16,777,216 * sizeof(CCube*) = 64MB
        16,777,216 * (sizeof(int) * 2) = 128MB, like you said
        329 - (64 + 128) = 137MB of memory that is unaccounted for?

    I am using Process Explorer as well to check the program's memory footprint.
  14. Hi, I am currently working on a game and I am having some difficulty understanding why my memory usage is so high. So for testing I created a simple project that allocates memory.

        #include <iostream>

        unsigned int g_uiTotalClusterMemory = 0;
        unsigned int g_uiTotalCubeMemory = 0;

        class CCube
        {
        public:
            CCube() {}
            ~CCube() {}
        private:
            int m_i1;
            int m_i2;
        };

        class CCluster
        {
        public:
            CCluster()
            {
                for (int i = 0; i < s_kiX; ++i)
                {
                    for (int j = 0; j < s_kiY; ++j)
                    {
                        for (int k = 0; k < s_kiZ; ++k)
                        {
                            m_pCube[i][j][k] = new CCube();
                            g_uiTotalCubeMemory += sizeof(CCube);
                        }
                    }
                }
            }
            ~CCluster() {}
        private:
            static const int s_kiX = 16;
            static const int s_kiY = 16;
            static const int s_kiZ = 16;
            CCube* m_pCube[s_kiX][s_kiY][s_kiZ];
        };

        CCluster* g_pCluster = 0;

        int main()
        {
            const int kiNumClusters = 16 * 16 * 16;

            //Instance clusters
            g_pCluster = new CCluster[kiNumClusters];

            //Calculate memory usage for all clusters
            g_uiTotalClusterMemory = sizeof(CCluster) * kiNumClusters;

            //Convert bytes to megabytes
            g_uiTotalClusterMemory /= 1024;
            g_uiTotalClusterMemory /= 1024;
            g_uiTotalCubeMemory /= 1024;
            g_uiTotalCubeMemory /= 1024;

            //Calculate total memory
            unsigned int g_uiTotalMemory = g_uiTotalClusterMemory + g_uiTotalCubeMemory;

            //Output memory values to screen
            std::cout << "Total Cube Memory Used:" << g_uiTotalCubeMemory << "\n";
            std::cout << "Total Cluster Memory Used:" << g_uiTotalClusterMemory << "\n";
            std::cout << "Total Memory Used:" << g_uiTotalMemory << "\n";

            //Pause
            float fCaek = 0.0f;
            std::cin >> fCaek;
            return 0;
        }

     The code outputs the following:

        Total Cube Memory Used:128
        Total Cluster Memory Used:64
        Total Memory Used:192

     But Windows Task Manager and Process Explorer both tell me the program is using 329,624k of memory. Is this test set up correctly? If so, why is my program using more memory than I am allocating? Thanks!

     EDIT: I am using Visual Studio 2010, with the program running outside of the IDE, release build.