Commodore.64

D3D10: Non-32-Pixel-Wide Textures Corrupted


Quick Summary:
In my D3D10 application, only textures whose widths are multiples of 32 pixels (e.g., 640, 480, or 544 pixels wide) are displayed correctly. All other textures are corrupted.

Details:
This has been driving me nuts for weeks. I've written my own real-time video and image processing framework that sits atop D3D10 (yay pixel shaders). Everything was working great... until I tried processing some video that wasn't 720p or 1080p. After a bunch of debugging, I ruled out bad data sources and isolated the problem to my actual D3D rendering code. After even more debugging, I narrowed it down further: my textures only render correctly if their widths are multiples of 32 pixels. Otherwise, the scanlines get corrupted.

I ended up stripping the application down to the bare minimum: all it does is show a textured quad. And yet the problem persists. I've spent weeks on this, and other than creating a stripped-down minimal application that reproduces the problem, I've made no progress.

Any suggestions on what I'm doing wrong?


[Image: moz-screenshot-2.png, an example of a correctly rendered texture (320 pixels wide)]
[Image: moz-screenshot-1.png, an example of a corrupted texture (324 pixels wide)]

Erik

Textures can have row padding, so that each row occupies the same number of bytes (the RowPitch), which may be larger than width times bytes-per-pixel. If your source data doesn't have the same padding, you have to copy one row at a time.
Something like this instead of the memcpy you use now:
[source]
// Copy one row at a time: destination rows are mappedTexture.RowPitch bytes apart
// (possibly padded), while the source rows in pBitmap are tightly packed at
// width * sizeof(UINT) bytes (assuming pBitmap is a byte pointer to 32-bit pixels).
char *dst = (char*)mappedTexture.pData;
for(UINT i = 0; i < height; ++i)
    memcpy(dst + i * mappedTexture.RowPitch,   // padded destination row
           pBitmap + i * width * sizeof(UINT), // packed source row
           width * sizeof(UINT));              // copy only the pixel data
[/source]
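
For context, here's a minimal sketch of how that row-by-row copy can sit between Map and Unmap. It assumes a dynamic texture (D3D10_USAGE_DYNAMIC with D3D10_CPU_ACCESS_WRITE) and a tightly packed 32-bit source bitmap; pTexture, pBitmap, width and height are placeholder names:
[source]
D3D10_MAPPED_TEXTURE2D mappedTexture;
UINT subresource = D3D10CalcSubresource(0, 0, 1); // mip 0, array slice 0, 1 mip level

// Map for writing; DISCARD avoids stalling on the GPU's previous use of the texture.
if(SUCCEEDED(pTexture->Map(subresource, D3D10_MAP_WRITE_DISCARD, 0, &mappedTexture)))
{
    char *dst = (char*)mappedTexture.pData;
    const char *src = (const char*)pBitmap;       // packed rows: width * 4 bytes each

    for(UINT i = 0; i < height; ++i)
        memcpy(dst + i * mappedTexture.RowPitch,  // padded destination row
               src + i * width * sizeof(UINT),    // packed source row
               width * sizeof(UINT));             // copy only the visible pixels

    pTexture->Unmap(subresource);
}
[/source]
As for why only multiples of 32 worked: with 4 bytes per pixel, a 32-pixel-wide row is 128 bytes, so if your driver aligns RowPitch to a multiple of 128 bytes (which the 32-pixel pattern suggests), those widths happen to need no padding and a single straight memcpy lines up by accident.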

Commodore.64

Well, I feel like a right idiot. That cleared things right up! I don't know how I didn't pay attention to row pitch. Or how that didn't just jump screaming out at me given the scanline-level corruption. :/

A bit disappointing, though: it looks like I'm going to have to do more bit twiddling throughout my application to compensate for the pitch, and for real-time HD video manipulation that kind of per-frame copying can get expensive on the CPU side...
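
One idea I might try to keep that work off my own code path (just an untested sketch, and only for textures I can create with D3D10_USAGE_DEFAULT rather than dynamic ones): UpdateSubresource accepts a source row pitch, so the runtime deals with the stride mismatch itself. pDevice, pTexture, pBitmap, width and height are placeholder names here:
[source]
// Untested sketch: UpdateSubresource works for D3D10_USAGE_DEFAULT textures only.
pDevice->UpdateSubresource(pTexture,
                           D3D10CalcSubresource(0, 0, 1), // mip 0, array slice 0
                           NULL,                          // NULL box = whole subresource
                           pBitmap,                       // packed 32-bit source pixels
                           width * sizeof(UINT),          // source row pitch (no padding)
                           0);                            // depth pitch unused for 2D
[/source]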

But I digress. Thank you soooo much Erik!
