D3D10: Non-32-Pixel-Wide Textures Corrupted

Quick Summary:
In my D3D10 application, only textures whose widths are multiples of 32 pixels (e.g., 640, 480, or 544 pixels wide) are displayed correctly. All other textures are corrupted.

Details:
This has been driving me nuts for weeks. I've written my own real-time video and image processing framework that sits atop D3D10 (yay pixel shaders). Everything was working great... until I tried processing some video that wasn't 720p or 1080p. After a bunch of debugging, I was able to rule out bad data sources and isolated the problem to my actual D3D rendering code. And after even more debugging, I was able to isolate the problem further: my textures only work right if they're multiples of 32 pixels wide. Otherwise, the scanlines are getting corrupted.

I ended up stripping the application down to the bare minimum - all it does is show a textured quad. And yet, the problem persists. I've spent weeks on this, and other than creating a stripped-down minimal application that reproduces the problem, I've made no progress.

Any suggestions on what I'm doing wrong?


[Image: example of a correctly rendered texture (320 pixels wide)]
[Image: example of a corrupted texture (324 pixels wide)]
Textures can have padding at the end of each row so that every row occupies the same number of bytes (the RowPitch). This can require you to write one row at a time, if your source data doesn't have the same padding.
Something like this instead of the memcpy you use now:
[source]
char *dst = (char*)mappedTexture.pData;
const char *src = (const char*)pBitmap;   // source rows are tightly packed: width * 4 bytes
for (UINT i = 0; i < height; ++i)         // copy one scanline at a time
    memcpy(dst + i * mappedTexture.RowPitch, src + i * width * sizeof(UINT), width * sizeof(UINT));
[/source]
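For completeness, here's a minimal sketch of where that copy sits, assuming a dynamic texture created with D3D10_USAGE_DYNAMIC and CPU write access; pTexture, pBitmap, and the 32-bit pixel format are illustrative names, not taken from the original post:
[source]
// Hypothetical upload path -- the row-by-row copy goes between Map and Unmap.
D3D10_MAPPED_TEXTURE2D mappedTexture;
UINT subresource = D3D10CalcSubresource(0, 0, 1); // mip 0, array slice 0, 1 mip level
if (SUCCEEDED(pTexture->Map(subresource, D3D10_MAP_WRITE_DISCARD, 0, &mappedTexture)))
{
    char *dst = (char*)mappedTexture.pData;
    const char *src = (const char*)pBitmap;
    for (UINT i = 0; i < height; ++i)
    {
        // Destination rows are RowPitch bytes apart, which may be larger
        // than width * sizeof(UINT); source rows are tightly packed.
        memcpy(dst + i * mappedTexture.RowPitch, src + i * width * sizeof(UINT), width * sizeof(UINT));
    }
    pTexture->Unmap(subresource);
}
[/source]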


Well, I feel like a right idiot. That cleared things right up! I don't know how I didn't pay attention to row pitch. Or how that didn't just jump screaming out at me given the scanline-level corruption. :/

A bit disappointing, though - looks like I'm going to have to do more bit twiddling throughout my application to compensate for the pitch. And for real-time HD video manipulation, that sort of bit twiddling can get expensive on the CPU side...
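That said, only the CPU-side copies need to respect RowPitch, and that works out to one memcpy per scanline rather than per-pixel work. Reading results back follows the same pattern; a minimal sketch, assuming a staging copy of the render target with D3D10_CPU_ACCESS_READ (pStaging, pOutput, and the 32-bit format are illustrative names):
[source]
// Hypothetical readback path -- one memcpy per scanline, stepping the source
// pointer by RowPitch and the destination by the packed row size.
D3D10_MAPPED_TEXTURE2D mapped;
if (SUCCEEDED(pStaging->Map(0, D3D10_MAP_READ, 0, &mapped)))
{
    const char *src = (const char*)mapped.pData;
    char *dst = (char*)pOutput; // tightly packed destination buffer: width * 4 bytes per row
    for (UINT i = 0; i < height; ++i)
        memcpy(dst + i * width * sizeof(UINT), src + i * mapped.RowPitch, width * sizeof(UINT));
    pStaging->Unmap(0);
}
[/source]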

But I digress. Thank you soooo much Erik!

