
D3DFORMAT issue

This topic is 3507 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.


Recommended Posts

This may be an image generation issue, a pixel format issue, or neither. I'm basically making a Mega Man (NES) clone, and my framerate drops from 200+ to below 50 when the "tiles" of the level are on screen. It was suggested to me that my D3DFORMAT of A8R8G8B8 could be the issue, since I was using 24-bit BMPs for the level and a single 24-bit PNG for Mega Man. I tried to make the images 32-bit using GIMP, but the X8R8G8B8 BMPs won't load now (LoadTexture() returns NULL), and the PNG that GIMP saves is still 24-bit. So is there a better way to convert them than GIMP (perhaps it's a unique version of BMP?), or can I use my 24-bit versions somehow? I don't see a format that would support 24-bit PNGs. On a side note, I don't think Vista recognises the 32-bit images either, since my thumbnails are now generic icons.

32-bit formats are the most efficient to use during rendering. The format of the actual image file is irrelevant; your images are decompressed and converted at load-time. So your 24-bit PNG will automatically be converted to a 32-bit A8R8G8B8 format (if that's the format you specified).
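For example, if you load through D3DX, the requested format is just a parameter of the load call; the sketch below (illustrative only, requires the DirectX 9 SDK and a valid device) shows where that conversion is specified:

```
// Sketch: load a 24-bit PNG from disk into a 32-bit A8R8G8B8 texture.
// D3DX performs the format conversion at load time.
IDirect3DTexture9* tex = NULL;
HRESULT hr = D3DXCreateTextureFromFileEx(
    device,                       // your IDirect3DDevice9*
    "megaman.png",                // 24-bit source file is fine
    D3DX_DEFAULT, D3DX_DEFAULT,   // take width/height from the file
    1,                            // mip levels
    0,                            // usage
    D3DFMT_A8R8G8B8,              // requested GPU format; conversion happens here
    D3DPOOL_MANAGED,
    D3DX_DEFAULT, D3DX_DEFAULT,   // filtering
    0,                            // no colour key
    NULL, NULL,
    &tex);
if (FAILED(hr)) { /* bad path or unsupported file, not a bit-depth problem */ }
```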

Rather than guessing where the performance problem lies, I suggest using a profiling tool such as PIX or NVPerfHUD to find out what's causing the slowdown.

Conversion is what I would expect. So perhaps the suggestion given to me was incorrect? Ah, and thanks for pointing out a profiler; I knew I was going to need one, but a 30-day trial of VTune didn't have me jumping at the chance to get it. Does anyone have any quick suggestions on what could be causing such a huge performance hit? I would expect to be able to render a bunch of 2D images without dipping below 60 fps at the worst... I realise there could be a lot of factors, but I've at least narrowed it down to the level images on screen.

As Sc4Freak suggested, use PIX. It comes with the SDK, and you can set it to delay gathering info until a specified frame count, etc. It'll tell you how long each call takes and should fairly quickly show you where the time is going.

Dropping from 200 to 50 doesn't actually sound too unreasonable. Remember that frame rate is non-linear (although in this case, 5ms to 20ms versus 200fps to 50fps is still a factor of 4 [lol]). The point still stands, though: if you're rendering a blank screen you're not doing any useful work, so comparing that to when you are doing useful work is largely meaningless. What you want to know is the difference between doing some meaningful work and doing more meaningful work.
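The fps/frame-time relationship above is just a reciprocal, which a one-line helper makes concrete (no assumptions beyond the numbers in the post):

```cpp
// Frame time, not frame rate, is what rendering work adds to linearly:
// milliseconds per frame = 1000 / fps.
double frameMs(double fps) { return 1000.0 / fps; }
```

So 200fps is 5ms per frame and 50fps is 20ms, meaning the drop costs 15ms of extra work per frame, while a further fall to 45fps would add only about 2ms more.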

Given bottlenecks and overheads, you might find that rendering 0 tiles runs at 200fps and 100 tiles at 50fps, but 1000 tiles still at 45fps. You can then guesstimate how much effort is involved in rendering individual tiles, cross-reference that against how many tiles you want or need to render, and map that to your target hardware - then you'll know whether you actually have a problem.

But, all that said, I don't suppose you're rendering each tile with its own Draw**() call, or using a new ID3DXSprite per tile, are you? Such code is quite common among people implementing 2D tile games in D3D, and the resulting high number of draw calls is usually the key problem...

hth
Jack

Ah, yes, I see... I was calling my own draw function for each tile, which set vertices, set textures, and called DrawPrimitive() (which I believe is what you're referring to). I need to rearrange this so I can call DrawPrimitive() once for each tile texture. Perhaps a multimap is in order...
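The grouping idea can be sketched independently of D3D; here tiles are bucketed by texture ID so that each bucket becomes one draw call (the Tile struct and texture IDs are hypothetical, purely for illustration):

```cpp
#include <map>
#include <vector>

// Hypothetical tile: which texture it uses and where it sits on screen.
struct Tile { int textureId; float x, y; };

// Bucket tiles by texture so all tiles sharing a texture can be drawn
// in a single DrawPrimitive() call instead of one call per tile.
std::multimap<int, Tile> groupByTexture(const std::vector<Tile>& tiles) {
    std::multimap<int, Tile> buckets;
    for (const Tile& t : tiles)
        buckets.insert({t.textureId, t});
    return buckets;
}

// After batching, the draw-call count equals the number of distinct textures.
int drawCallCount(const std::multimap<int, Tile>& buckets) {
    int calls = 0;
    for (auto it = buckets.begin(); it != buckets.end();
         it = buckets.upper_bound(it->first))
        ++calls;
    return calls;
}
```

With this layout you iterate each key once, set the texture, fill a vertex buffer with every tile in that bucket, and issue one draw call.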

If that is how you've written your renderer then, yes, you will most likely get a sizeable performance improvement from redesigning it.

Try to set it up with shared vertex and index buffers (remember, you don't have to draw every vertex in a call - understanding and using the DrawIndexedPrimitive() parameters properly is a great idea), and despatching as many tiles as possible in one draw call is also a good approach. This can be tricky if each tile has its own texture, so consider using "texture palettes" (aka a "texture atlas") to combine multiple small textures into one larger one.
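Mapping a tile index to UV coordinates in an atlas is just arithmetic. A minimal sketch, assuming a square atlas of fixed-size tiles packed row-major, left-to-right, top-to-bottom (all names hypothetical):

```cpp
// UV rectangle for one tile inside a texture atlas.
struct UVRect { float u0, v0, u1, v1; };

// atlasSize: atlas width/height in pixels; tileSize: tile edge in pixels.
// Assumes tiles are packed row-major with no padding between them.
UVRect atlasUV(int tileIndex, int atlasSize, int tileSize) {
    int tilesPerRow = atlasSize / tileSize;
    int col = tileIndex % tilesPerRow;
    int row = tileIndex / tilesPerRow;
    float step = (float)tileSize / (float)atlasSize;
    UVRect r;
    r.u0 = col * step;           // left edge in [0,1] texture space
    r.v0 = row * step;           // top edge
    r.u1 = r.u0 + step;          // right edge
    r.v1 = r.v0 + step;          // bottom edge
    return r;
}
```

Keeping tile and atlas sizes at powers of two keeps these UVs exact in floating point; in practice you may also want a half-texel inset to avoid bleeding between neighbouring tiles when filtering.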

Best of luck!
Jack
