Displaying bitmaps larger than 256x256 with D3DX...?

Started by
7 comments, last by Fixx 24 years ago
Hi everyone, I've been building a game with plain DirectDraw 2D functionality. Recently I realized I need to use D3DX to make certain bitmaps (128x128 pixels in size) rotate and scale. Now that I have the D3DX part implemented, I had to strip out all of the old DirectDraw blit calls because I couldn't get them to work with D3DX running in exclusive mode!

I need to display some large bitmaps (usually 800x600) for the menu system and title sequence. I tried doing this with D3DX and they show up really blurry. I assume this is because D3DX is loading the bitmap as a texture into a 256x256 surface, and then it has to expand it again to fit the 800x600 screen size. Is there ANY way I can display my large bitmaps with D3DX without having to tile them? Thanks for your help... much appreciated.
First off: displaying bitmaps larger than 256x256 with D3D is hardware dependent. For example, a Voodoo2 doesn't allow textures bigger than that, nor does it allow non-power-of-two textures. The software rasterizer does, I'm pretty sure.

Why do you need to display 800x600 bitmaps? Just a warning: it will be slow.

You're right about D3DX shrinking the image into the 256x256 texture and then scaling it back up.

If the hardware supports surfaces bigger than 256x256, I'm pretty sure D3DX will take advantage of it and allow larger bitmaps.

As a solution: you could always create some surfaces with DirectDraw in system memory, at the size of your liking, then blt them to the back buffer using DirectDraw calls.
___________________________Freeware development:ruinedsoft.com
"As a solution: you could always create some surfaces with DirectDraw in system memory, at the size of your liking, then blt them to the back buffer using DirectDraw calls."

Then you lose the ability to rotate and scale images quickly. I've been trying to find a solution to this myself, but I haven't found any way other than tiling them up. Whatever you do, tiling will probably be the fastest approach when rendering, but it requires quite a lot of programming.
Excellent. Thanks for all the help. Looks like I'm writing a tiling class tonight!


Hey, I've done exactly that!
I've written a little app which can load bitmaps (tga, tif, jpg, png, bmp, pcx) of any size and at any color depth. If the image is larger than 256x256, multiple surfaces are used. In fact, you'll be able to load images of any size until you run out of video RAM.
For example, a 600x400 bitmap will be stored as:

256x256 | 256x256 | 128x256
256x256 | 256x256 | 128x256

(The bottom row's tiles are also 256 pixels high because a remainder of 144 is left over, and the next power of two above 144 is 256.)
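For illustration, here's a minimal sketch of how such a tile layout could be computed (the helper names are made up for this example): split each dimension into full 256-pixel tiles, then round any remainder up to the next power of two.

```cpp
#include <vector>

// Smallest power of two >= n (hypothetical helper).
static int nextPow2(int n) {
    int p = 1;
    while (p < n) p *= 2;
    return p;
}

// Split one image dimension into tile sizes no larger than maxTile,
// rounding a partial last tile up to the next power of two.
static std::vector<int> tileSizes(int imageSize, int maxTile = 256) {
    std::vector<int> sizes;
    while (imageSize >= maxTile) {
        sizes.push_back(maxTile);
        imageSize -= maxTile;
    }
    if (imageSize > 0)
        sizes.push_back(nextPow2(imageSize));
    return sizes;
}
```

For a 600x400 image this yields widths 256, 256, 128 and heights 256, 256: the 88- and 144-pixel remainders round up to 128 and 256, matching the layout above.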

You can find my app with source at
http://www.lunaticsystems.de/collector.zip

-Markus-
Professional C++ and .NET developer trying to break into indie game development.
Follow my progress: http://blog.nuclex-games.com/ or Twitter. Topics: Ogre3D, Blender, game architecture tips & code snippets.
Another option is to break the large bitmaps down into smaller bitmaps (at most 256x256). This ensures that all your textures will be small enough and powers of two. If you want more info on how to go about breaking up the bitmaps, just let me know.

Check out the GPI project today!
Are you using polygons to map the texture onto? If so, when you tile the texture, you also have to break the original polygons down into multiple smaller ones so that the individual texture tiles map on properly. You need vertices to store the UV coordinates at the edge of each texture, and one polygon only allows one texture, unless you use multiple texture layers.
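As a sketch of that idea (all names are hypothetical, assuming a 256x256 texture limit): enumerate one screen-space quad per tile, so each quad can carry its own texture and its own UVs.

```cpp
#include <algorithm>
#include <vector>

struct Quad {
    int x, y;  // top-left corner in image space
    int w, h;  // pixels of actual image data covered by this tile
};

// Break a width x height image into one quad per tile so that each
// tile's texture can be mapped with its own vertices and UVs.
static std::vector<Quad> splitIntoQuads(int width, int height, int maxTile = 256) {
    std::vector<Quad> quads;
    for (int y = 0; y < height; y += maxTile)
        for (int x = 0; x < width; x += maxTile)
            quads.push_back({x, y,
                             std::min(maxTile, width - x),
                             std::min(maxTile, height - y)});
    return quads;
}
```

For a 600x400 image this produces a 3x2 grid of six quads, with the right column only 88 pixels wide and the bottom row only 144 pixels tall.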
Yes, my collector.zip already does this.
If you wanted to draw a polygon using the 600x400 texture I described in the post above, you'd actually have to draw 6 separate polygons.
For the rightmost/bottommost tiles (whose used areas probably aren't powers of two), the u/v coordinates are adjusted accordingly, of course.
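The adjustment itself is just a ratio; a hedged sketch (the helper name is made up): if a tile only uses part of its power-of-two texture, the maximum u (or v) is the used extent divided by the texture extent instead of 1.0.

```cpp
// Maximum u (or v) coordinate for a tile that only uses `used` of its
// `textureSize` texels, e.g. an 88-pixel image remainder stored in a
// 128-texel-wide texture.
static float maxTexCoord(int used, int textureSize) {
    return static_cast<float>(used) / static_cast<float>(textureSize);
}
```

For the 600x400 example, the rightmost tiles would use u in [0, 88/128] = [0, 0.6875], and the bottommost tiles v in [0, 144/256] = [0, 0.5625].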

The main problem lies elsewhere:
when using bilinear filtering, you'll see seams where the polygon was split, because there's no filtering from one polygon onto the next.

-Markus-
See above: Anon is correct.
Unfortunately, you will lose the ability to do rotation, and perhaps scaling, in the way that you are thinking.

However, you could do the rotation in software, which is truly hard: you have to deal with memory allocation while still trying for speed.

There might be some GDI functions for rotation, and you could use StretchBlt for the scaling, but it would be really slow.

