Loading large textures for mapping

Started by
8 comments, last by ET3D 17 years, 11 months ago
Hello all, I have an interesting problem. I've been working for a while on software that displays aerial photos, like a mapping application. The images I load and texture onto a quad typically come from places like TerraServer or Google Earth. They are very large and oddly sized, e.g. 2400x1871. Since that isn't a standard power-of-two size, when I load the texture with D3DXCreateTextureFromFileEx into the managed pool it gets resized, which is fine. I usually have it create 5 mip levels using D3DX_FILTER_POINT for the generation. I store the original image size and scale the quad when drawing so the image appears in its original proportions.

That's the background, and it has always worked fine for one image. The problem is that I now want to support multiple images. Load time for one image was acceptable, and so was the amount of RAM used, but I'm now working on loading multiple images, possibly 10 or more, at the same high resolution. Loading just 3 images takes around 10 seconds.

System specs: GeForce 6200 with 128 MB of video RAM; CPU: P4 3 GHz; RAM: 2 GB Corsair DDR400.

I noticed that each image takes up about 20 MB of system RAM; since the textures are managed, this makes sense. So I then loaded the images with DXT1 compression. This helped a lot with RAM, but load times remained the same. I read in the SDK that if you pre-create the images with texture compression, using a format like .dds, load times should be faster, since DirectX doesn't need to spend time compressing and creating the mip levels. I must be doing something wrong, because the load times are still the same. I've also tried not creating any mip levels, and the load time doesn't really change.

So that's where I am right now. How would you address the problem of having high-resolution aerial maps that you can pan around and zoom?
I appreciate any help in advance, thanks.
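As a sanity check on that ~20 MB figure, here's a small sketch (MipChainBytes is my own helper, not part of D3DX) that estimates the memory a mip chain occupies, assuming the image ends up as a 2048x2048 texture at 4 bytes per pixel:

```cpp
#include <cstddef>

// Estimate the memory a mip chain occupies for an uncompressed
// texture: each level halves both dimensions, so a full chain adds
// roughly one third on top of the base level.
size_t MipChainBytes(size_t width, size_t height, size_t levels,
                     size_t bytesPerPixel = 4) {
    size_t total = 0;
    for (size_t i = 0; i < levels && width > 0 && height > 0; ++i) {
        total += width * height * bytesPerPixel;
        width  /= 2;
        height /= 2;
    }
    return total;
}
```

Five levels of a 2048x2048 ARGB texture come to roughly 21 MB, which lines up with the "about 20 MB per image" observation above.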
You might convert the files to power-of-two sizes as a pre-processing step, then save them as DDS files. EDIT: you can generate the mipmaps with the DDS tool that comes with the SDK.

After that put all your images in a big blob of memory, essentially an archive with no compression. archiveHeader+file1+file2+file3+etc.

On initialization, load the whole archive into memory (in one call if possible). Then create the textures with: D3DXCreateTextureFromFileInMemory(pDevice, buffer + offset, size, &texture);

I think that should be faster: you'd then be reading from memory rather than from disk.
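To make the offset bookkeeping concrete, here's a minimal sketch of how the per-file offsets into such a blob could be computed (ArchiveOffsets is a hypothetical helper, and the header-plus-files layout is an assumption):

```cpp
#include <cstddef>
#include <vector>

// Offsets of each file inside an uncompressed archive laid out as
// header + file1 + file2 + ... The offset of file i is headerSize
// plus the sizes of files 0..i-1.
std::vector<size_t> ArchiveOffsets(size_t headerSize,
                                   const std::vector<size_t>& fileSizes) {
    std::vector<size_t> offsets;
    size_t cursor = headerSize;
    for (size_t sz : fileSizes) {
        offsets.push_back(cursor);
        cursor += sz;
    }
    return offsets;
}
```

Each texture would then be created with D3DXCreateTextureFromFileInMemory(pDevice, buffer + offsets[i], fileSizes[i], &texture).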
Insufficient Information: we need more information
http://staff.samods.org/aiursrage2k/
Some very good points, and I thank you for your input. The only problem is that this is an end-user program: users can load whatever map images they want, within reason (jpg, png, bmp, etc.), basically whatever D3DXCreateTextureXX allows.

I only tried loading a DDS just to see whether the load time changes. Do you really think the load time is mostly influenced by reading the files from disk? I'd guess the biggest bottleneck is the transfer of texture memory to the graphics card, or the mip-map generation. But then again, I'm here asking for help, so what do I know ;)

Thanks
Gimp,

I'm afraid I can't answer you specifically without trying it out myself, but I'd be happy to offer suggestions about what I would try in your situation.

First, change the number of mipmap levels if you haven't already done so; slowly reduce it until you eventually remove mipmapping entirely. This will tell you whether the biggest performance hit is in the mipmap generation. I can tell you that if you're making 5 copies of a 20 MB file, you're talking about 100 MB of data being created procedurally, and then perhaps dumped into video memory.

Next, with mipmapping still disabled, try loading an equally large file at the closest larger power of 2. Check whether load times are "roughly" the same. This will indicate whether the problem has anything to do with the rounding that D3DXCreateTextureFromFileEx must do for non-power-of-two textures.

Finally, try loading the texture into a different memory pool, such as D3DPOOL_SCRATCH. By doing this, the texture will be created from the disk resource, but will not be loaded into video memory. This will help you profile whether the delay is in copying to system memory or video memory.

Play with these combinations and see if you can narrow in on the exact problem. As the bus from video to system memory is relatively fast, and scaling an image is pretty quick, I suspect the performance hit you're seeing is in generating 5x 20 MB textures procedurally.

Cheers and good luck!
Jeromy Walsh
Sr. Tools & Engine Programmer | Software Engineer
Microsoft Windows Phone Team
Chronicles of Elyria (An In-development MMORPG)
GameDevelopedia.com - Blog & Tutorials
GDNet Mentoring: XNA Workshop | C# Workshop | C++ Workshop
"The question is not how far, the question is do you possess the constitution, the depth of faith, to go as far as is needed?" - Il Duche, Boondock Saints
Very useful and informative. I ran a couple of tests; here are my results.

First I tried changing the texture to a power of two. This made a big difference.

To load 2 maps (textures) with non-power-of-two resolutions of 2400x1820, it took 10.8 seconds on average (this is also creating the full mip chain).

To load 2 maps with the power-of-two resolution 2048x2048 (the same textures, just rescaled in Photoshop), it took 5.72 seconds on average, again with the full mip chain.

Then I set the mip-level parameter in D3DXCreateTextureFromFileEx to just 1 level, which cut the load time by approximately 1 second: 4.61 s for power of two, and 9.1 s for non-power of two.

So this is kind of interesting. The only problem is that I can't expect everybody to open up their images and rescale them. It also poses a problem because these aerial maps are geo-referenced, so I need to know the original pixel size.

I tried loading the textures into D3DPOOL_SCRATCH, but it made no noticeable difference, other than the textures not drawing, which is to be expected. I'm not sure what the best approach is. Perhaps create an empty texture, load the image into memory myself, then lock the texture resource and fill it. That would guarantee a power-of-two texture, but then I'd have to implement the rescaling and the loading of the various file formats myself.
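If you do go the manual route, the locked-texture fill is mostly pitch-aware row copying. A minimal sketch of just the copy step (CopyRows is a hypothetical helper; in D3D9 you'd get the destination pointer and pitch from IDirect3DTexture9::LockRect via D3DLOCKED_RECT::pBits and ::Pitch, and call UnlockRect afterwards):

```cpp
#include <cstddef>
#include <cstring>

// Copy a tightly packed source image into the upper-left corner of a
// locked texture surface. Locked surfaces have a Pitch (bytes per row)
// that can be larger than width * bytesPerPixel, so rows must be
// copied one at a time rather than with a single memcpy.
void CopyRows(unsigned char* dst, size_t dstPitch,
              const unsigned char* src, size_t srcRowBytes, size_t rows) {
    for (size_t y = 0; y < rows; ++y)
        std::memcpy(dst + y * dstPitch, src + y * srcRowBytes, srcRowBytes);
}
```

The padding to the right of and below the image would be left untouched (or cleared to black), which matches the "pad to power of two" idea rather than rescaling.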

Any other ideas?
This is a quote from the DirectX SDK Documentation
Quote:
For the best performance when using D3DXCreateTextureFromFile:

1. Doing image scaling and format conversion at load time can be slow. Store images in the format and resolution they will be used. If the target hardware requires power of two dimensions, create and store images using power of two dimensions.

In other words, Microsoft is aware of the general slowness of doing image size/format conversion while using D3DXCreateTextureFromFile. You have a few options then:

1. Convert all images to powers of 2 before loading into your application. You can do this with a helper application which does not create mip-chains, does not render, etc...just batch converts a directory of images into powers of 2. Then those images can be used in your application quite nicely.

2. Write your own loading code which takes a file and PADS it with black until the size is a power of 2, then passes it off to D3DXCreateTextureFromFileInMemory or something similar.

3. Additionally, you can write your own loading code, which takes the file and SCALES it to a power of 2, then passes it off to "D3DXCreateTextureFromFileInMemory" or something similar.

Currently, those are the only options apparent to me.
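Options 2 and 3 both need the next power-of-two dimension; a trivial helper sketch (NextPow2 is my own name, not a D3DX function):

```cpp
#include <cstdint>

// Smallest power of two that is >= n (for n >= 1). Used to pick the
// padded or scaled dimensions when preparing a non-power-of-two image.
uint32_t NextPow2(uint32_t n) {
    uint32_t p = 1;
    while (p < n) p <<= 1;
    return p;
}
```

For the 2400x1871 example, NextPow2 gives 4096 and 2048 respectively, so the image would pad or scale to 4096x2048.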

Good luck!
Jeromy Walsh
As jwalsh mentioned in his first post, you might want to use D3DXCreateTextureFromFileEx. Use D3DX_DEFAULT_NONPOW2 for the size, and you'd get a texture of the actual size, instead of scaled. You'll have to live with the limitations of this format (see D3DPTEXTURECAPS_NONPOW2CONDITIONAL on the D3DCAPS9 page in the docs), but you're likely to get a better loading time.
ET3D,

Good suggestion. I had considered that idea as well. Unfortunately, D3DX_DEFAULT_NONPOW2 isn't available on all video cards, or even all models of a video card. As you pointed out, the D3DPTEXTURECAPS_NONPOW2CONDITIONAL device capability determines whether or not that flag will work on a given device.

Additionally, the flag can only be used if the following are ALL true:

  • The texture addressing mode is D3DTADDRESS_CLAMP.
  • Texture wrapping for the texture stage is disabled.
  • Mipmapping is not being used.
  • DXT compression is not being used.
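A sketch of how that caps logic could be folded into a helper (ClassifyTextureCaps is my own name; the two cap bit values match d3d9caps.h, and in real code textureCaps would be D3DCAPS9::TextureCaps from GetDeviceCaps):

```cpp
#include <cstdint>

// Cap bit values as defined in d3d9caps.h.
#define D3DPTEXTURECAPS_POW2               0x00000002L
#define D3DPTEXTURECAPS_NONPOW2CONDITIONAL 0x00000100L

enum class NonPow2Support { Full, Conditional, None };

// POW2 clear: no power-of-two restriction at all.
// POW2 + NONPOW2CONDITIONAL: non-pow2 works only under the conditions
// listed above (clamp addressing, no wrap, no mips, no DXT).
// POW2 alone: textures must be power of two.
NonPow2Support ClassifyTextureCaps(uint32_t textureCaps) {
    if (!(textureCaps & D3DPTEXTURECAPS_POW2))
        return NonPow2Support::Full;
    if (textureCaps & D3DPTEXTURECAPS_NONPOW2CONDITIONAL)
        return NonPow2Support::Conditional;
    return NonPow2Support::None;
}
```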

I believe GimpMaster was using mipmapping, and he should compress his textures for better performance if possible.

Cheers!
Jeromy Walsh
Very good comments. I think I might use the command-line tool texconv.exe to create a compressed .dds behind the scenes whenever the user loads a normal image.

I am still very curious how other popular mapping software programs do this. I wonder if they just load the image into memory using whatever loading code they have and then use GDI to render a portion of the map.
All GeForce and all Radeon cards support non power of 2 textures. Most of them through D3DPTEXTURECAPS_NONPOW2CONDITIONAL, but the GeForce 6 and 7 families (and GimpMaster has a 6200) don't require power of 2 textures at all, and won't have any limitations on them. (All according to the caps doc in the SDK.)

So it seems to me that, at least for cards that don't have D3DPTEXTURECAPS_POW2 set, it's certainly worth not scaling the textures, since there are no limitations. For the others, it seemed from the posts that GimpMaster was entertaining the idea of dropping mipmapping to save time; IMO that can be a decent enough compromise. Maybe leave it to the user. He also said that users will load the images themselves, and I don't believe users are likely to load DXT images, so for DXT, pre-scaling may be the solution.

If not having mip maps is not acceptable, I'd suggest using D3DX_DEFAULT_NONPOW2 for cards with D3DPTEXTURECAPS_POW2 not set, and D3DX_FILTER_NONE for other cards. It's likely a lot faster than scaling.

This topic is closed to new replies.
