ARGH! Tilemap Framerate in D3D is Awful


This is making me very sad. Why was DirectDraw deprecated? Here's the situation: I'm making a tile-based game in C# using MDX and Direct3D, trying to achieve real-time framerates, and having a dismal time at it.

I've done some benchmarking with a PerformanceTimer and determined that my problem is calling DrawPrimitives once for every visible tile. That is, to draw the map, I set the world transform, call DP on a square vertex buffer, then move to the next tile. Typically I have about 1000 tiles on screen at once. That didn't seem like it would be a problem, even if it's not the usual way to draw a 3D scene. Drawing the whole 100x100 map this way, without textures, gives 1.5 FPS. As an experiment I made a single vertex buffer that renders the whole map with one DP call (still passing in 60,000 vertices) and my FPS jumped to 40-80 (the variance is probably due to the presentation params and the vblank).

As far as I can tell, I now have three options, all of them unappealing:

1. Try the MDX Sprite class. I hear different things about its performance from different people.
2. Pack all my textures into one big texture, put my whole map in one big vertex buffer, set the texture coords of the individual tiles accordingly, and blast the whole mess to the GPU with one DP call. (A sketch of the texture-coordinate math is at the end of this post.) This is not ideal, since it makes updating the tilemap complicated, especially for layers above the ground layer (critters, items, etc.) that are sparse and would need constant updating. I would also need to batch my textures if they don't all fit in a single large texture in video memory.
3. Use DirectDraw, even though it is deprecated. Or use OpenGL, which doesn't have this problem of the DP-equivalent call costing so much, but OpenGL is not natively supported by C#, so I'd rather not. There's also stuff like SDL.Net, but it doesn't seem like I should need a third-party lib to draw a tilemap. I mean, come on.

So... what should I do? Discontinuing DirectDraw was short-sighted; D3D really isn't a substitute. Why is the latency of DP so high? Modern GPUs have GBs of bandwidth, and I'm nowhere close to using all of mine.
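For concreteness, here's what I mean by the texture-coordinate math in option 2 - a sketch only, where atlasTilesPerRow and the other names are made up:

    // Sketch: compute the UV rectangle of tile 'tileIndex' inside an atlas
    // laid out as a square grid of equal-size tiles.
    static void AtlasUV(int tileIndex, int atlasTilesPerRow,
                        out float u0, out float v0, out float u1, out float v1)
    {
        float step = 1.0f / atlasTilesPerRow;    // UV extent of one tile
        int col = tileIndex % atlasTilesPerRow;  // column in the atlas grid
        int row = tileIndex / atlasTilesPerRow;  // row in the atlas grid
        u0 = col * step;     // left
        v0 = row * step;     // top
        u1 = u0 + step;      // right
        v1 = v0 + step;      // bottom
    }

Each quad in the big vertex buffer would then get (u0,v0)-(u1,v1) instead of the full 0-1 range of a standalone texture.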

PS - I'm running on a four-year-old laptop with a GeForce 440 Go. However, I have written OpenGL games that handle 200,000 triangles in a scene at 30 FPS, drawing them in the same stupid way.

The problem is that you are calling DP too much, as you noted.

With 3D hardware you need to submit your graphics to the card in large batches.

A 'batch' is formed when you set some render states or change textures, and then submit one or more triangles via DrawPrimitive.

As you say, you call DrawPrimitive for each tile. That is perfectly acceptable in DirectDraw, but in Direct3D it is considered abusing the hardware.

So what you need to do is sort your drawing by texture and render states.

That is, instead of:

    set texture to rock
    drawprimitive for tile 0

    set texture to grass
    drawprimitive for tile 1

    set texture to dirt
    drawprimitive for tile 2

    set texture to rock
    drawprimitive for tile 3

you need to do:

    set texture to rock
    draw all tiles that are visible and using rock

    set texture to grass
    draw all tiles that are visible and using grass

    etc.

Now, in this example you /could/ still use a DrawPrimitive call for each tile, but that is still wasteful.

One way to solve this is to arrange all /like/ tiles next to each other in a single static vertex buffer, so that you can render large sections of them at once, drawing many tiles that all use the same texture with a single call - see the sketch below.
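A minimal sketch of that idea (the TextureRun bookkeeping and the names are made up; it assumes the buffer was built with each texture's tiles contiguous):

    // Sketch: tiles are pre-grouped by texture in one static vertex buffer,
    // so each texture is a contiguous run of quads drawn with a single call.
    struct TextureRun
    {
        public Texture Texture;    // texture shared by this run of tiles
        public int StartVertex;    // first vertex of the run in the buffer
        public int TriangleCount;  // two triangles per tile in the run
    }

    void DrawMap(Device dev, VertexBuffer mapVB, TextureRun[] runs)
    {
        dev.SetStreamSource(0, mapVB, 0);
        dev.VertexFormat = CustomVertex.PositionNormalTextured.Format;

        foreach (TextureRun run in runs)
        {
            dev.SetTexture(0, run.Texture);   // one texture change per run
            dev.DrawPrimitives(PrimitiveType.TriangleList,
                               run.StartVertex, run.TriangleCount);
        }
    }

So a 100x100 map with, say, 8 tile textures costs 8 DP calls instead of 10,000.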

Another approach, which is more flexible but less performant, is to use a dynamic vertex buffer that is locked and filled each frame with all of the quads you want to draw. By that point you've already recorded them, so you can put them into the vertex buffer with like textures adjacent. This may be necessary if your tiles can change the image they display.

In short, it is very possible to do this in Direct3D, but you cannot accomplish it performantly using the same approaches you used with DirectDraw; 3D hardware simply doesn't work that way.

That being said, you might be happier with DirectDraw, SDL, or some other 2D library.

Your engine must be really flawed, because you should be getting much faster speeds than that. My engine gives me 1200+ FPS rendering around 150 animated sprites to a dynamic vertex buffer with index buffers. Are you sure you made your buffer dynamic? Otherwise you're going to take a huge performance hit from locking the buffer.

Another thing: you shouldn't be drawing the whole map in one call. If a tile is not visible, do not upload it to the buffer - see the sketch below.
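A minimal sketch of that visibility test, assuming an axis-aligned camera over a square-tile grid (cameraX, viewWidth, tileSize, and QueueTile are made-up names):

    // Sketch: compute which tile rows/columns overlap the camera rectangle
    // and upload only those quads to the dynamic buffer.
    int firstCol = Math.Max(0, (int)(cameraX / tileSize));
    int firstRow = Math.Max(0, (int)(cameraY / tileSize));
    int lastCol  = Math.Min(mapWidth  - 1, (int)((cameraX + viewWidth)  / tileSize));
    int lastRow  = Math.Min(mapHeight - 1, (int)((cameraY + viewHeight) / tileSize));

    for (int row = firstRow; row <= lastRow; row++)
        for (int col = firstCol; col <= lastCol; col++)
            QueueTile(col, row);   // append this tile's quad to the batch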

And one more thing: DirectDraw can't even begin to compare to Direct3D for 2D games. Once you sort out the bottleneck in your engine (which isn't Direct3D's fault) you will be amazed at how powerful it can be.

The thing that bothers me the most is that this apparently isn't a hardware problem - otherwise the same slowdown would occur in OpenGL when you draw a scene one quad at a time.

Also, 1000 DP calls a frame just doesn't seem like that many to me. What the hell is going on?

barakus:

I honestly don't know if my vertex buffer is dynamic or not - I cannibalized an MDX demo for my project - kind of learning by example here.

This is my VB definition:

    tile_vb = new VertexBuffer(typeof(CustomVertex.PositionNormalTextured), 6, device,
                               Usage.WriteOnly, CustomVertex.PositionNormalTextured.Format,
                               Pool.Default);
    tile_vb.Created += new System.EventHandler(this.OnCreateVertexBuffer);


I do lock it, but only once, at initialization, when I fill it with a two-triangle list that forms a square tile.
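The fill happens in the Created handler, following the standard MDX demo pattern - roughly this (from memory, so the exact vertex values are illustrative):

    // Fill the 6-vertex buffer once with two triangles forming a unit square.
    private void OnCreateVertexBuffer(object sender, EventArgs e)
    {
        VertexBuffer vb = (VertexBuffer)sender;
        CustomVertex.PositionNormalTextured[] verts =
            (CustomVertex.PositionNormalTextured[])vb.Lock(0, LockFlags.None);

        Vector3 n = new Vector3(0, 0, -1); // normal facing the viewer

        verts[0] = new CustomVertex.PositionNormalTextured(new Vector3(0, 0, 0), n, 0, 0);
        verts[1] = new CustomVertex.PositionNormalTextured(new Vector3(1, 0, 0), n, 1, 0);
        verts[2] = new CustomVertex.PositionNormalTextured(new Vector3(0, 1, 0), n, 0, 1);
        verts[3] = new CustomVertex.PositionNormalTextured(new Vector3(1, 0, 0), n, 1, 0);
        verts[4] = new CustomVertex.PositionNormalTextured(new Vector3(1, 1, 0), n, 1, 1);
        verts[5] = new CustomVertex.PositionNormalTextured(new Vector3(0, 1, 0), n, 0, 1);

        vb.Unlock();
    }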

Then I draw each tile like this (but only for the visible ones):


    private void DrawTile(int texID, float x, float y)
    {
        /* This may need to be optimized at some point. The bottleneck is
         * that we change the texture we are rendering with a lot, which is
         * apparently expensive. As a stopgap I added an optimization that
         * takes advantage of temporal coherence: if the texture hasn't
         * changed since last time, don't set it again. This happens a lot
         * given the current homogeneous nature of the dungeon. We could
         * gain even more by rendering the dungeon one layer at a time.
         */

        dev.SetStreamSource(0, tile_vb, 0);
        dev.VertexFormat = CustomVertex.PositionNormalTextured.Format;

        // if (last_tex != texID)
        // {
        //     otherwise a texture swap is not necessary
        dev.SetTexture(0, TileTex[texID]);
        // }
        // last_tex = texID;

        Matrix trans = Matrix.Translation(x * (float)GameData.TileSize * zoom,
                                          y * (float)GameData.TileSize * zoom, 0);
        Matrix scale = Matrix.Scaling(zoom * (float)GameData.TileSize,
                                      zoom * (float)GameData.TileSize, 1);

        scale.Multiply(trans); // scale = scale * trans

        dev.SetTransform(TransformType.World, scale);
        dev.DrawPrimitives(PrimitiveType.TriangleList, 0, 2);
    }

Well, your buffer creation code should look something like this:

    spriteBuffer = new VertexBuffer(
        typeof(CustomVertex.PositionColoredTextured),
        BUFFERSIZE, device, Usage.Dynamic | Usage.WriteOnly,
        CustomVertex.PositionColoredTextured.Format,
        Pool.Default);

This will create a vertex buffer with minimal locking latency.

Now, during the render loop, we iterate through the sprites. If a sprite is visible (i.e. within the view of the camera), we upload it to the buffer. You want a variable that counts the number of vertices uploaded so far: the number of sprites multiplied by the vertices per sprite - 4 if you're using an index buffer in conjunction with the vertex buffer, 6 if you're using a vertex buffer alone.

Finally, when you either run out of sprites or need to change a render state, you draw everything that's in the buffer at that moment (not the whole buffer - just the number of vertices you have counted), then change the render state if needed and start the process again.
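Here's a minimal sketch of that fill-and-flush loop (illustrative only - Sprite2D, IsVisible, and WriteQuad are assumed helpers, not real MDX API):

    // Accumulate visible quads in a CPU-side array; flush to the dynamic
    // buffer and draw whenever the texture changes, the array fills up,
    // or we run out of sprites.
    CustomVertex.PositionColoredTextured[] verts =
        new CustomVertex.PositionColoredTextured[BUFFERSIZE];
    int vertCount = 0;

    private void DrawSprites(Sprite2D[] sprites)
    {
        Texture currentTex = null;
        foreach (Sprite2D s in sprites)
        {
            if (!s.IsVisible(camera))         // cull off-screen sprites
                continue;

            // Flush before switching textures or overflowing the buffer.
            if (s.Texture != currentTex || vertCount + 6 > BUFFERSIZE)
            {
                Flush();
                dev.SetTexture(0, s.Texture);
                currentTex = s.Texture;
            }

            s.WriteQuad(verts, vertCount);    // appends 6 vertices
            vertCount += 6;
        }
        Flush();                              // draw the final partial batch
    }

    private void Flush()
    {
        if (vertCount == 0) return;
        // Discard hands the driver a fresh region instead of stalling on the GPU.
        spriteBuffer.SetData(verts, 0, LockFlags.Discard);
        dev.SetStreamSource(0, spriteBuffer, 0);
        dev.VertexFormat = CustomVertex.PositionColoredTextured.Format;
        dev.DrawPrimitives(PrimitiveType.TriangleList, 0, vertCount / 3);
        vertCount = 0;
    }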

A really good article right here on GameDev deals with this subject. It's in C++, but it's fairly easy to convert to C#. Pay special attention to issue 4, batching.

Good luck!

EDIT: Also, something really bad about your drawing algorithm is that you're setting the stream source and texture for every single tile. This is a big no-no.
