D3DXCreateCubeTexture fails with E_OUTOFMEMORY

Hello everyone,
I am using the Ogre engine, but my issue is related directly to DirectX. I sometimes get this error message when loading my game:
14:40:16: OGRE EXCEPTION(3:RenderingAPIException): Error creating texture: Ran out of memory in D3D9Texture::_createCubeTex at ..\..\..\Ogre_1_7_2\RenderSystems\Direct3D9\src\OgreD3D9Texture.cpp (line 1218)
The HRESULT returned by D3DXCreateCubeTexture is E_OUTOFMEMORY. The texture in question is a cubemap (always the same that fails), 6 faces of 1024x1024.
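For reference, here is roughly what the failing call boils down to. This is a minimal sketch, not Ogre's actual code; the format and pool here are assumptions, Ogre picks its own:

#include <d3dx9.h>

// Sketch of the failing call: a 1024x1024 cubemap with a full mip chain.
LPDIRECT3DCUBETEXTURE9 CreateCubemap(LPDIRECT3DDEVICE9 pDevice)
{
    LPDIRECT3DCUBETEXTURE9 cubeTex = NULL;
    HRESULT hr = D3DXCreateCubeTexture(
        pDevice,
        1024,            // edge length of each of the 6 faces
        0,               // 0 = generate the complete mipmap chain
        0,               // usage flags
        D3DFMT_A8R8G8B8, // assumed format
        D3DPOOL_MANAGED, // managed pool keeps a system-memory copy
        &cubeTex);
    return FAILED(hr) ? NULL : cubeTex; // hr is E_OUTOFMEMORY in my case
}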

My game runs fine for very long periods of time. We tested it for up to 60 hours, loading and playing the game over 600 times without a restart. However, in one very specific case, I get the above error on the very first load. It happens when I add the following code before doing any resource loading:
char* tests[12800];
for (unsigned int j = 0; j < 12800; j++)
    tests[j] = new char[51200]; // allocate 12800 blocks of 50KB each

for (unsigned int j = 0; j < 12800; j++)
    delete[] tests[j];          // free everything again immediately

This code allocates 625MB in blocks of 50KB, then frees it all immediately. It basically has no effect except churning the memory, yet it causes the DX issue when loading the cubemap. I know for a fact that I still have plenty of RAM available, so that is not the reason for the error. I read somewhere that DirectX resources need some "kernel memory" and that might be the problem, so I checked in the task manager and things seem fine on that side.

I am pretty much a beginner with this kind of complex memory management, and I'm very confused about why DX would run out of memory. I'm sure it is not a memory overwrite issue, because the behavior is not at all consistent with that kind of bug. Any help would be very appreciated; even if you have no idea what the issue is, pointers or tips on where to look, or tools I can use to help debug this, are welcome.

Thanks,
- Matt
Does your cubemap have mipmaps? D3D running out of memory quite simply means it ran out of system memory. There's no kernel memory requirement that I know of, but there is VRAM. If your texture is too large to fit in VRAM you may get an error like this, although I'd expect an "out of video memory" error instead.

Have you tried running the debug D3D runtimes? They should give a better indication of why it's failing.

It's also possible that D3D uses a separate heap from your application. Because you allocate 625MB of memory, your application will use 625MB of the available address space, which might mean that D3D can't allocate a single chunk of memory large enough.

I will try using the D3D debug runtimes this week to get more info. One test I did was to call the _heapmin function to force the heap to return memory to the OS. It allowed the DirectX call to succeed, but it still crashes a bit later in my function when using new.
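For reference, that test was essentially this (a minimal sketch; the helper name is made up):

#include <malloc.h>

void ReturnHeapToOS() // hypothetical helper
{
    // Ask the CRT to decommit unused heap pages and return them to
    // the OS; _heapmin returns 0 on success, -1 on failure.
    _heapmin();
}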

One test I did was to call the _heapmin function to force the heap to return memory to the OS. It allowed the DirectX call to succeed, but it still crashes a bit later in my function when using new.
That certainly indicates that it's lack of address space that's causing the problem. What error do you get when calling new later on? I assume it's caused by the CRT trying to grow the heap and failing.
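To see exactly how new fails, you could wrap one of the later allocations like this (a minimal sketch, assuming the default throwing operator new rather than a custom allocator):

#include <new>
#include <cstdio>

void TryAllocate(size_t bytes) // hypothetical helper
{
    try
    {
        char* p = new char[bytes];
        delete[] p;
    }
    catch (const std::bad_alloc&)
    {
        // The default operator new throws bad_alloc when the CRT
        // can't reserve or commit more heap in the address space.
        printf("new of %u bytes failed\n", (unsigned)bytes);
    }
}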

In case you're not familiar with address space starvation: on a 32-bit operating system, each process has 4GB of address space, from 0x00000000 to 0xffffffff. The top 2GB is reserved by the operating system, some other ranges are reserved too (e.g. the low 64KB), and the EXE and all the various system DLLs loaded into the process usually mean that a single application has around 1GB to 1.5GB of free address space for its heap. That's at most; the address space can also be fragmented, meaning there isn't one contiguous chunk of memory that size.
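If you want to measure that directly, you can walk the address space with VirtualQuery and record the largest free region (a rough sketch; run it just before the texture load):

#include <windows.h>
#include <cstdio>

// Scan the process address space. Lots of free memory in total but a
// small largest free region means the address space is fragmented.
void ReportLargestFreeRegion()
{
    MEMORY_BASIC_INFORMATION mbi;
    SIZE_T largest = 0;
    char* addr = NULL;
    while (VirtualQuery(addr, &mbi, sizeof(mbi)) == sizeof(mbi))
    {
        if (mbi.State == MEM_FREE && mbi.RegionSize > largest)
            largest = mbi.RegionSize;
        addr = (char*)mbi.BaseAddress + mbi.RegionSize;
    }
    printf("Largest free region: %u KB\n", (unsigned)(largest / 1024));
}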

You could try running Sysinternals' VMMap to see what your address space fragmentation looks like.

I'm not sure where D3D gets its memory for managed / system memory resources, but it could well be using its own heap (VMMap will tell you), and could well be running out if your application's heap is already pretty large.
Here are some ideas for cutting memory usage so the game fits inside 2GB:

- Convert all textures to DXT1 or DXT5 .dds files. DXT1 is half the size of DXT5 but has no alpha channel; compared to 32-bit RGBA, a DXT1 texture is 8 times smaller. Note that the compression is lossy, so it may not be suitable for all textures (see the load-time sketch after this list).

- Do some leak checking. The D3D leak checker is built into the debug runtimes and will list every allocation that hasn't been released at game exit. There's also VLD (Visual Leak Detector) for standard memory allocations.
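On the DXT idea: if converting the source files offline is awkward in your pipeline, D3DX can also convert at load time by forcing the format. A sketch of that alternative (the file name is a placeholder; offline .dds conversion is still cheaper at load time):

#include <d3dx9.h>

// Forcing D3DFMT_DXT1 compresses the texture as it loads, so both the
// managed-pool system copy and the VRAM copy shrink 8x vs 32-bit RGBA.
LPDIRECT3DTEXTURE9 LoadAsDXT1(LPDIRECT3DDEVICE9 pDevice)
{
    LPDIRECT3DTEXTURE9 tex = NULL;
    HRESULT hr = D3DXCreateTextureFromFileEx(
        pDevice, "diffuse.png",      // hypothetical source file
        D3DX_DEFAULT, D3DX_DEFAULT,  // width, height: take from the file
        D3DX_DEFAULT,                // full mip chain
        0, D3DFMT_DXT1,              // usage, forced compressed format
        D3DPOOL_MANAGED,             // managed pool
        D3DX_DEFAULT, D3DX_DEFAULT,  // filter, mip filter
        0, NULL, NULL,               // color key, image info, palette
        &tex);
    return FAILED(hr) ? NULL : tex;
}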
Thank you both for the information. I'm not too sure about the DXT1 or DXT5 texture formats; I'll have to check whether they are easily feasible in our pipeline. I'll definitely try out VMMap and VLD to get more information on memory usage. I know roughly how the address space works, but not all the details. Speaking of which, I have a question about this quote:
[...] usually mean that a single application has around 1GB to 1.5GB of free address space for its heap. That's at most; the address space can also be fragmented, meaning there isn't one contiguous chunk of memory that size.
In my test case, since I request memory in small chunks and then release it all, is it possible that it fragments the memory? I believe it shouldn't, because, big or small, I release all the memory I just requested before any other operation is done. However, I might be wrong, so what do you think? Also, is there a function like _heapmin that would defragment the memory? A Google search turns up nothing, but I'm asking just in case.

Thanks for the help,
- Matt
