# [SlimDX] VolumeTexture creation fails


## Recommended Posts

Hi! I have a piece of software here, developed using SlimDX (D3D9) on a GF8800. I make quite heavy use of volume textures, as I abuse them as 3D arrays for GPGPU purposes. The following call is my issue:

```csharp
VolumeTexture Stipples = new VolumeTexture(device, w, h, d, 1, Usage.None, Format.R32F, Pool.Managed);
```

This works smoothly on my desktop PC with the GF8800, but I have now tried to run things on the notebook on which I intended to present this software. It's my bachelor thesis, which is why I have reached a good level of despair by now. The notebook has an ATI Mobility Radeon X1600, and the above call simply throws an "Invalid call" exception. I tried running with the debug runtime switched on, but that only leads to #Develop crashing at this point, and ultimately even less specific info on what went wrong. Running outside the IDE only produces a message that a user-defined breakpoint has been reached, and a crash if I choose to debug.

The texture's dimensions in this test case were w=560, h=375, d=7. That leaves me with a texture of about 5,880,000 bytes, and even accounting for some excess due to strides won't make this huge. Earlier parts of the program all worked well, but none of them used volume textures, only 2D floating-point textures (R32F as well, so the format itself seems to be okay). Before I reach the problematic part, I properly release the device and all resources I used before, and create an entirely fresh D3D and device object. At the point of creation, the volume texture is the ONLY resource for the device; nothing but a shader has been created in its context, so I do not really expect lack of memory to be an issue here. But then, what is?

Many thanks for any help! I've got to figure this one out by Wednesday, so please don't hesitate to throw every idea at me, however improbable. I will test just about everything! :)

PS: Substitution by N 2D textures is not a solution, since the depth is variable and can easily reach ~100 and more. A hundred texture samplers in my shader are not really feasible, and I need all the layers in it at once.
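For scale, the footprint mentioned above can be checked with a quick back-of-the-envelope calculation (a sketch; R32F is 4 bytes per texel, any driver-side stride or padding overhead is ignored):

```python
# Memory footprint of the 560x375x7 R32F volume from the post,
# ignoring any driver-side stride/padding overhead.
w, h, d = 560, 375, 7
bytes_per_texel = 4  # Format.R32F is a single 32-bit float per texel
size_bytes = w * h * d * bytes_per_texel
print(size_bytes)  # 5880000, i.e. well under 6 MB
```

So raw memory really shouldn't be the limiting factor here.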

##### Share on other sites
According to the caps spreadsheet (DX_SDK/Samples/C++/Direct3D/ConfigSystem), the X1600 only supports power of 2 textures. That means you'll have to create a volume texture where each side is a power of two - there's no way around that unless you want to create multiple blocks to make up the size you need, but that'll be pretty painful I expect.

Alternatively, you can use the reference rasterizer on your laptop, but expect a frame rate in seconds per frame rather than frames per second.
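To illustrate what the power-of-two restriction costs here, a sketch using the 560x375x7 size from the original post:

```python
def next_pow2(n):
    """Smallest power of two >= n."""
    p = 1
    while p < n:
        p *= 2
    return p

w, h, d = 560, 375, 7
pw, ph, pd = next_pow2(w), next_pow2(h), next_pow2(d)
print(pw, ph, pd)          # 1024 512 8
original = w * h * d * 4   # R32F, 4 bytes per texel
padded = pw * ph * pd * 4
print(padded, round(padded / original, 2))  # 16777216 2.85
```

Rounding every dimension up nearly triples the memory use in this case, which is the pain the "multiple blocks" workaround tries to avoid.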

##### Share on other sites

You might want to give DebugView a try to capture the DirectX debug output. Without more information we'd just be guessing and it's quite possible that if you haven't routinely run your project with the debug runtime yet, it could be balking at something completely unrelated before reaching the actual error.

Some random guesses:

1) Try power-of-2 texture sizes
2) Try another format like A32B32G32R32F
3) Get a laptop with a GF8x00 [smile]

Edit - I need to type faster :p

##### Share on other sites
Yikes, if I run the caps viewer on both machines, the notebook shows
D3DPTEXTURECAPS_POW2 = Yes
but my desktop PC doesn't display this cap at all. Nevertheless, IF my X1600 doesn't support non-power-of-two textures, how come the 2D textures worked fine? After all, these have the exact same w and h dimensions as the volume, which in this case are far from powers of two.

Oh, remigius, didn't see you at first - damn you're fast today dudes :)

1) see above
2) I'll try, but it bothers me that 2D textures of the same format work just fine
3) ... I just spent too much money on that one ~1.5 years ago. I'm a student - it'd take me about 1 year at least to get that amount of excess cash, if all went well. Baaaaad idea! ;)

##### Share on other sites
Quote:
 Original post by Medium9: Nevertheless, IF my X1600 doesn't support non-power-of-two textures, how come the 2D textures worked fine?
Sounds like a driver bug to me, or if you were using the D3DX version of CreateTexture (Or if SlimDX does internally), it'll create a power of 2 texture for you behind the scenes.

EDIT: Ah, the X1600 has the D3DPTEXTURECAPS_NONPOW2CONDITIONAL cap bit too - that means you can use non-power-of-two textures with some restrictions; see the docs on the subject. I assume that applies to volume textures as well as "normal" textures:
Quote:
 If D3DPTEXTURECAPS_POW2 is also set, this conditionally supports the use of 2D textures with dimensions that are not powers of two. A device that exposes this capability can use such a texture if all of the following requirements are met:
 - The texture addressing mode for the texture stage is set to D3DTADDRESS_CLAMP.
 - Texture wrapping for the texture stage is disabled (D3DRS_WRAPn set to 0).
 - Mipmapping is not in use (use the magnification filter only).
 - The texture format is not D3DFMT_DXT1 through D3DFMT_DXT5.

 If this flag is not set, and D3DPTEXTURECAPS_POW2 is also not set, then unconditional support is provided for 2D textures with dimensions that are not powers of two.

 A texture that is not a power of two cannot be set at a stage that will be read based on a shader computation (such as the bem and texm3x3 instructions in pixel shader versions 1_0 to 1_3). For example, these textures can be used to store bumps that will be fed into texture reads, but not the environment maps that are used in texbem, texbeml, and texm3x3spec. This means that a texture with dimensions that are not powers of two cannot be addressed or sampled using texture coordinates computed within the shader. This type of operation is known as a dependent read and cannot be performed on these types of textures.

I presume the refrast isn't feasible for what you need (requires the SDK be installed, and is extremely slow)?
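As an aside, the way the two caps bits quoted above combine can be sketched like this (the flag values are from d3d9caps.h; the classification follows the quoted docs and is only an illustration, not driver code):

```python
# TextureCaps flag values from d3d9caps.h
D3DPTEXTURECAPS_POW2               = 0x00000002
D3DPTEXTURECAPS_NONPOW2CONDITIONAL = 0x00000100

def npot_2d_support(texture_caps):
    """Classify 2D non-power-of-two texture support from TextureCaps."""
    if not (texture_caps & D3DPTEXTURECAPS_POW2):
        return "unconditional"  # no power-of-two restriction at all
    if texture_caps & D3DPTEXTURECAPS_NONPOW2CONDITIONAL:
        # clamp addressing, no wrapping, no mipmaps, no DXTn, no dependent reads
        return "conditional"
    return "none"

# The X1600 reports both bits, which would explain the 2D textures working:
print(npot_2d_support(0x00000102))  # conditional
```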

##### Share on other sites
You were all right about the power of two. A volume of 512x256x8 is created properly, but this devastates me. There is no way I could rewrite the whole thing in two days to cope with this :(
Also, this will lead to quite large textures in many cases, making it almost certain I'll run into memory problems too soon, since segmenting would cause extreme headaches and probably a good week of work (the whole part relying on this is rather extensive and complicated).

I'm gonna try the ref rasterizer, although this defeats the whole purpose of why I even bothered to do this stuff on the GPU - I have made a CPU-only version before, and it was way slow. But hey, maybe it's okay for mere presentation.

(Btw: Is a X1600 really THAT old?)

Edit: Just saw your edit :)
Okay, uhm - how do I control these conditions in SlimDX? I've never set anything for addressing or wrapping mode outside of my shaders, to which the textures are bound after both are created. I don't use mipmapping, and none of the mentioned formats.
Or are volume textures just an entirely different game?

##### Share on other sites

Quote:
 Original post by Medium9: (Btw: Is a X1600 really THAT old?) Edit: Just saw your edit :) Okay, uhm - how do I control these conditions in SlimDX? I've never set anything for addressing or wrapping mode outside of my shaders, to which the textures are bound after both are created. I don't use mipmapping, and none of the mentioned formats. Or are volume textures just an entirely different game?

Afaik, just setting those conditions (states) on the samplers in your shaders should be fine.

And well, the X1600 is somewhat old. I got mine about 4 years ago I think, which does make its raw output a bit lacking by now. As for the feature set, I don't think I ever had any real issue with it like you're having now, but then again I mostly stuck to the beaten path with power-of-two textures and such.

Volume texture support off the beaten path may be a bit flaky, but I still find it incredible that it's being this problematic. I strongly suggest you try to get the output of the debug runtimes to verify what's happening.

Perhaps a driver update would be a good idea too, although I have some misgivings that with AMD's legacy driver program for the X1600 series, the quality isn't improving much. I'm not sure if this is true, but I'd try both the latest drivers and the original drivers that came with the card.

##### Share on other sites
Okay, 4 years is quite some time. The driver... don't get me started on this topic. I've spent 3 evenings trying everything I could imagine or have found in various forums: there just seems to be NO friggin' way to update this stinky beast. ATI's driver installers say: nay, not going to happen, go ask your notebook manufacturer. Asus says: nay, just the 2006 drivers here, for newer stuff go ask ATI. What the...? (2006 - that could have been a hint about the chip's age *cough*)
Omega drivers screwed up my entire system once, and I have little intention of reinstalling everything yet again (I just did 4 days ago). Well, at least one thing is certain: never again a notebook by Asus, never again with an ATI chip. I can do without that form of support very easily *rant*.

What would be the "canonical" way of obtaining anything from the debug runtimes? Since #Develop goes entirely overboard when it hits an error with the debug runtime on, I probably never got to see what proper output from them would look like. (Unfortunately, I do not have any Visual Studio available at the moment either.)

Edit: Uhm, how could setting this stuff in the shaders make any difference? The shader isn't set before the texture is created, and the texture isn't associated with the shader until right before rendering. How could D3D then know what I declared in my texture samplers? *scratchinghead*

##### Share on other sites
Actually, the more I think about it, the more I reckon it's either a driver bug, or the D3DPTEXTURECAPS_NONPOW2CONDITIONAL bit only applies to 2D textures - if you can't even create the texture, then it shows that the driver doesn't care what state the device is in at the time it's used.

As remigius said; a driver update may help, or you could try contacting ATI/AMD about it.
I've posted on the DirectX MVP newsgroups to try and find out if the D3DPTEXTURECAPS_NONPOW2CONDITIONAL flag applied to volume textures as well, but I can't say if/when I'll get an answer there.

##### Share on other sites
Contacting ATI might be a good thing to do right now, you're right. And many thanks for taking this into the newsgroups! I might be able to go with my software solution first, and update it afterwards. At least I seem to have a good reason for this delay.

##### Share on other sites
Quote:
 Original post by Medium9: What would be the "canonical" way of obtaining anything from the debug runtimes? Since #Develop goes entirely overboard when it hits an error with the debug runtime on, I probably never got to see what proper output from them would look like. (Unfortunately, I do not have any Visual Studio available at the moment either.)

Visual Studio would be canon, but an alternative is Microsoft's DebugView. It's an external application that'll capture the stream of debug messages on the system. It should even work when running a debug build of your application standalone, so without the #Develop debugger attached.

Quote:
 Edit: Uhm, how could setting this stuff in the shaders make any difference? The shader isn't set before the texture is created, and the texture isn't associated with the shader until right before rendering. How could D3D then know what I declared in my texture samplers? *scratchinghead*

I'm not entirely sure on the timings, but by the time you're rendering the DirectX Effects framework will have configured your samplers and bound the textures to the device (this probably does not coincide with your calls to set the texture on the effect). The effects framework basically sets the sampler states for you through D3D as you'd do manually in ancient times. Judging by the comments above, one would hope it sets up the sampler states under the hood before trying to bind the textures... That said, you and steve are spot on that D3D shouldn't care about your sampler states when you're just creating the texture, so it's likely a driver bug.

As for contacting ATI/AMD, you also might get lucky contacting Asus customer support. If you can somehow magically get hold of a good support guy, he might be able to point out alternative drivers (for other notebook models for example) that can help you out. It's a long shot, but I had this with Dell and another model's driver worked wonders for my laptop.

##### Share on other sites
DebugView works! THANK you! Finally a way to really obtain proper debugging results! Oh that will make things a lot easier.

It states: "Width must be power of two for mip-volumes"

Now while I do get why that is, I do not get why D3D assumes a mip-volume, since I explicitly pass "1" for the mip-levels parameter.
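For reference, passing 0 for the mip-levels parameter makes D3D build a full chain down to 1x1x1; the level count for a full chain can be sketched as (an illustration, not the runtime's code):

```python
import math

def full_mip_levels(w, h, d):
    """Number of levels in a full mip chain for a volume texture:
    each level halves the dimensions until everything reaches 1."""
    return int(math.log2(max(w, h, d))) + 1

print(full_mip_levels(560, 375, 7))  # 10
```

Passing 1 should of course request a single level only, which is what makes the "mip-volumes" wording so puzzling.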

##### Share on other sites

Hopefully someone from SlimDX can chime in or Steve gets back with some more info, but looking at the source for VolumeTexture there doesn't seem to be anything strange going on there that would cause mip levels to be generated despite mipLevels being 1. You could try calling VolumeTexture.CheckRequirements with your parameters and see what D3D has to say about that, but likely it'll just correct your non-power-of-two sizes.

Is that the exact error message by the way? I googled a bit but I can't find any hits for it. For some rampant speculation, it might even be a D3D bug since it seems to incorrectly ignore the mipLevels parameter in the first place. If you're by any chance running an old DirectX runtime/sdk version on your laptop, updating that might also help.

Edit - had a go at this in XNA 3.0 on the March 2008 SDK, where it seems to work correctly. The second pair of lines tests whether XNA reads back the LevelCount parameter instead of caching it, which it seems to do.

```csharp
Texture3D t = new Texture3D(GraphicsDevice, 560, 375, 7, 1, TextureUsage.None, SurfaceFormat.Single);
System.Diagnostics.Debug.WriteLine(t.LevelCount); // outputs 1

Texture3D t2 = new Texture3D(GraphicsDevice, 560, 375, 7, 0, TextureUsage.None, SurfaceFormat.Single);
System.Diagnostics.Debug.WriteLine(t2.LevelCount); // outputs 10
```

I'm at a loss why D3D is creating mip levels against your wishes, but seeing as the D3D runtime is the one doing the complaining, a driver bug seems less likely, right? At this point switching to the reference device might be a good idea to see if things are working as intended.

##### Share on other sites
I'll check the DirectX debug runtime source when I get home tonight (about 3 hours from now), and see if they're doing anything "odd" that would explain that behaviour. I've checked the SlimDX source too; it all seems completely fine...

##### Share on other sites
Quote:
 Original post by remigius: Is that the exact error message, by the way?

It is, word by word.

I use the latest stable versions (as of the day before yesterday) of everything for WinXP: .NET 3.5, MSDN+WinSDK, the SlimDX SDK, the DX SDK and #Develop. I re-installed the whole system from scratch, so there wouldn't even be leftover older components. The only genuinely old thing is the damn driver. I mentioned this in my mail to ATI too, and I hope they have a solution. I'll also try to contact Asus about the issue.

Usually I like strange effects and the process of digging deep into it, but it's considerably less fun when time knocks on the door =)
Also, I really need to throw in an intermediate big thanks to both of you!

Edit: @remigius' edit:
I tried the refrast on my (faster) desktop PC, gave it a run, but killed the application after ~10 minutes of ongoing calculations. With the GPU, the process takes about 2 seconds (I render essentially just one frame with a shader that contains quite a few loops and dynamic branching), but doing it all in software seems to be orders of magnitude slower - slow enough to be no option at all, since even my initial CPU-based method is faster (of course the whole structure is different there).

Edit2: Texture creation with the reference device on my notebook succeeds though.

##### Share on other sites

I know where you're coming from, there's a time for interesting problems and there's a time when things should just work! [smile]

As you probably guessed I wasn't suggesting to use the refrast as a replacement for your GPU, but just as a test where the error lies. It's strange that refrast should work correctly on your laptop, while the device doesn't. I guess this is quite definite proof that ATI screwed up somewhere.

The test above ran successfully on my laptop with a GF8600M, which apparently supports non-power-of-two (npot from now on) volume textures. When I run it on an old desktop with an X1900, it fails with both 1 and 0 for the mipmap parameter. I'm guessing the debug output is just being coy and no mipmaps are being generated when passing 1, but these X1x00 cards or their drivers just crap out on npot volume textures in general.

For some more rampant speculation, it also wouldn't be a stretch to suspect ATI of deliberately misguiding the runtime on the mipmaps parameter, that wouldn't be a first.

##### Share on other sites
Quote:
 Original post by remigius: I know where you're coming from, there's a time for interesting problems and there's a time when things should just work! [smile]

Oddly, the occurrence of problems and the availability of time are inversely related way too often :)

If that really turns out to be a glitch on ATI's part, it would be quite a letdown, since I'd probably have little hope of getting it fixed in time - or at all, I guess.
If it really comes to that, I'll have to hope that my prof's notebook has a more or less up-to-date Nvidia chip (which I believe it has), and that I can borrow it for the presentation.
The program is also more of a tech demo than an app intended for public use, and I hope my prof sees this as a non-issue too.

But it would still be awesome if there was an actual solution. I'll also dig around a bit more, and I'm eager to see if Steve comes up with something.

##### Share on other sites
Ok, I got a chance to find out where that error is coming from, and found the problem.

I can't post the code, but the error message is a little misleading: it's displayed regardless of the number of mip levels, which is why you get it no matter what the mip-levels parameter is set to.
The actual problem is that volume textures test another caps bit, D3DPTEXTURECAPS_VOLUMEMAP_POW2. If that's set, then volume textures need to be a power of 2. The X1600 has that bit set, so volume textures need to be powers of 2.

Unfortunately that puts you back in the "It's unsupported" field; you'll need to go for some other option previously discussed.

Hope this helps [smile]
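The validation described above can be sketched like this (a hypothetical mirror of the check, since the actual runtime source can't be posted; the constant value is from d3d9caps.h):

```python
D3DPTEXTURECAPS_VOLUMEMAP_POW2 = 0x00040000  # from d3d9caps.h

def is_pow2(n):
    """True if n is a positive power of two."""
    return n > 0 and (n & (n - 1)) == 0

def volume_dims_ok(texture_caps, w, h, d):
    """If VOLUMEMAP_POW2 is set, all three dimensions must be powers of two."""
    if texture_caps & D3DPTEXTURECAPS_VOLUMEMAP_POW2:
        return is_pow2(w) and is_pow2(h) and is_pow2(d)
    return True

# The X1600 sets the bit, so:
print(volume_dims_ok(0x00040000, 560, 375, 7))  # False -> "Invalid call"
print(volume_dims_ok(0x00040000, 512, 256, 8))  # True, as observed
```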

##### Share on other sites
Damn! That really is bad news, but it's great to have definite closure on this!
Using the fallback to software means a jump in execution time by a rough factor of 50-80 (depending on dualcore or not), but so be it.

Again, BIIIG thanks for the great amount of help to the two of you! I probably would have struggled for days(+) on my own.

Now I can head off to a still worried, but no longer desperate night's sleep ;)