# CreateTexture problems...and ROQ.


## Recommended Posts

This should be simple. I haven't used Direct3D in a while, but I'm trying to write a Q3 ROQ viewer. The problem is that when I try to create a dynamic texture, it returns an error.

```cpp
if(FAILED(tr.g_pd3dDevice->CreateTexture(w, h, 1, D3DUSAGE_DYNAMIC,
                                         D3DFMT_R3G3B2, D3DPOOL_DEFAULT, &cmd->texture, NULL)))
{
	Com_Error(ERR_FATAL, "Failed to create texture for raw texture\n");
}
```


Any ideas? [Edited by - eviltwigflipper on June 27, 2006 10:37:16 AM]

##### Share on other sites
My first step would be to see what the actual return value is instead of immediately eating it with FAILED(). Other than that, my guess would be that D3DFMT_R3G3B2 isn't supported. I think your chances would be better with D3DFMT_X8R8G8B8.

##### Share on other sites
It returns D3DERR_INVALIDCALL, but I need to have 8-bit textures =/.

Also, in debug mode it reports "Invalid format specified for texture" - so 8-bit textures can't be used natively?

##### Share on other sites
Well, you can use the DirectX Caps Viewer -> DirectX Graphics Adapter -> Your Card -> D3D Display Types -> HAL -> Adapter Formats -> Your Display Format -> Texture Formats to see the list of texture formats your card supports for a given display mode.

Looking at the list on my GeForce 7800 doesn't show any "subdivided" 8 bit formats. For example, A8 and L8 are supported, but D3DFMT_R3G3B2 isn't. P8 (palettized 8-bit) is also not supported - palettized textures have seriously fallen out of favor, anyways.

So why do you "need" 8 bit textures? If your source data is 8-bit, is there anything that would prevent you from converting it to 24 or 32 bit when you create the texture contents? If you're concerned about memory footprint or texture bandwidth, you could create it as 24-bit, then use DXT to compress it.

Oh, as an aside: If you really, really need 8-bit textures with a palette (if you're doing old-school palette animation or something), you can do it in a pixel shader. Create your texture as A8, create your palette as a 24-bit 256x1 texture, then use texture.a as a lookup into the 24-bit texture.
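For what it's worth, the lookup that shader would do is just an indexed fetch. Here's a minimal CPU-side sketch of the same idea, purely for illustration (the function and parameter names are mine, not from any engine; in the shader version the hardware does this per pixel):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* CPU-side equivalent of the A8-index + 256x1 palette-texture lookup:
   each 8-bit texel selects one 32-bit color from a 256-entry palette. */
static void ExpandPalettized(const uint8_t *indices, size_t count,
                             const uint32_t palette[256], uint32_t *out)
{
    for (size_t i = 0; i < count; ++i)
        out[i] = palette[indices[i]];
}
```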

##### Share on other sites
Guess I don't "need" to have it as an 8-bit src image, but the ROQ frames are 8-bit textures. So I guess my next question is: how do you convert an 8-bit texture to a 16-bit texture?

##### Share on other sites
This is how I'm doing it now, but this is obviously wrong:

```cpp
// Upload the texture buffer to this dtexture.
// This needs to be a D3DFMT_X1R5G5B5 compatible texture.
failed(cmd->texture->LockRect(NULL, &LockedRect, NULL, 0));
WORD *lockdata = (WORD *)LockedRect.pBits;
// Shift 8 just makes it easier to look at (green tinted mess instead of a red/blue mess).
for(int i = 0; i < (cols * rows) * 2; i++) {
	lockdata[i] = data[i] << 8;
}
cmd->texture->UnlockRect( 0 );
```

This is what it generates while playing the Q3TA menu cinematic.
http://img460.imageshack.us/my.php?image=d2messed8fj.jpg

##### Share on other sites
> Original post by eviltwigflipper: This is how I'm doing it now, but this is obviously wrong: *** Source Snippet Removed ***

Yep, that won't do it. Remember, you've got some set of bits in the 8-bit value, each field of which represents some set of bits you want present in the 16-bit value: some each for red, green, and blue.

If the source is R3G3B2, then the 8 bit value looks like:

rrrgggbb

and if you're targeting X1R5G5B5, that looks like:

xrrrrrgggggbbbbb

```cpp
BYTE IsolateFrom8Bit(BYTE value, unsigned startBit, unsigned bitCount)
{
	value >>= startBit;
	value &= (1 << bitCount) - 1;
	return(value);
}

WORD ConvertTexel8to16(BYTE value)
{
	WORD b = IsolateFrom8Bit(value, 0, 2);
	WORD g = IsolateFrom8Bit(value, 2, 3);
	WORD r = IsolateFrom8Bit(value, 5, 3);
	WORD w = (b << 3) | (g << 7) | (r << 12);
	return(w);
}
```

so, you'd iterate through, and do (assuming data[] is an array of BYTEs):

```cpp
for (int i = 0; i < cols * rows; ++i)
	lockdata[i] = ConvertTexel8to16(data[i]);
```

What the code does is isolate each field from the 8-bit value, by shifting the first bit of the field to the 0-th bit (value >>= startBit), and then masking off the irrelevant bits. To use green as the example, if we start out with

rrrgggbb

value >>= 2
00rrrggg

value &= ((1 << bitCount) - 1) == ((1 << 3) - 1) == 7 == binary 00000111
value &= binary 00000111
00000ggg

Once we've gotten our raw green field, we shift it to the top of the green field in the output word:

00000ggg -> (WORD)0000000000000ggg
0000000000000ggg << 7 == 000000ggg0000000
which fits where it's supposed to in the output word:
xrrrrrgggggbbbbb

The same goes for the other fields, but just with different positions and masks.

Mondo caveat: I just typed this code into this editor. I haven't checked that it even compiles. I do know the theory is right, the code is probably mostly right, and I have written code like this a bunch of times.

Also, if you wanted to be more accurate, especially across multiple conversions, you'd OR in a 0.5 for each field, but I'll leave that as an exercise for the reader.
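One common way to do that exercise is bit replication: rather than literally ORing in a 0.5, copy the field's high bits into the low bits of the widened field, so that 0 still maps to 0 and a full-scale field maps to full scale. A sketch under those assumptions (the helper names are mine, and I'm packing straight into the standard X1R5G5B5 positions):

```c
#include <assert.h>
#include <stdint.h>

/* Widen a 3-bit field to 5 bits by replicating its high bits downward. */
static uint16_t Expand3to5(uint16_t v) { return (uint16_t)((v << 2) | (v >> 1)); }

/* Widen a 2-bit field to 5 bits the same way. */
static uint16_t Expand2to5(uint16_t v) { return (uint16_t)((v << 3) | (v << 1) | (v >> 1)); }

static uint16_t ConvertTexel8to16Rounded(uint8_t value)
{
    uint16_t b = Expand2to5(value & 0x03);         /* source bits 0-1 */
    uint16_t g = Expand3to5((value >> 2) & 0x07);  /* source bits 2-4 */
    uint16_t r = Expand3to5((value >> 5) & 0x07);  /* source bits 5-7 */
    return (uint16_t)((r << 10) | (g << 5) | b);   /* xrrrrrgggggbbbbb */
}
```

With this version 0x00 maps to 0x0000 and 0xFF maps to 0x7FFF, so black stays black and white stays white across the conversion.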

Cheers,
Jason

##### Share on other sites
Ah, thank you. I just haven't done this before and I don't really know the ROQ format that well (the only part of the Quake 3 engine I never really researched at all). I took advantage of the fact that it always worked =/. Anyway, the image still has a lot of noise in it and the result is about the same; some frames are a little more defined.

http://img66.imageshack.us/my.php?image=noiseintro28uf.jpg

RE_StretchRaw (the full function of the source snippet above) is what CIN_DrawCinematic calls after it translates a ROQ frame into an 8-bit frame. All it does is add a stretchraw command to the backend rendering list array and convert the byte array from Quake 3's CIN_DrawCinematic into a 16-bit image (for the stretched quad surface when the backend list is processed).

```cpp
/*
=================
RE_StretchRaw
=================
*/
void RE_StretchRaw(int x, int y, int w, int h, int cols, int rows, const byte *data, int client, qboolean dirty) {
	trBeCmd_t      *cmd;
	D3DLOCKED_RECT LockedRect;

	if(w == 0 && h == 0) {
		return;
	}

	cmd = &backend.cmds[backend.framecmds++];

	cmd->opcode  = RCMD_MATRAW;
	cmd->x       = x;
	cmd->y       = y;
	cmd->w       = w;
	cmd->h       = h;
	cmd->cols    = cols;
	cmd->rows    = rows;
	cmd->client  = client;
	cmd->texture = tr.roqtex;

	// Upload the texture buffer to this texture.
	// This needs to be a D3DFMT_X1R5G5B5 compatible texture.
	failed(cmd->texture->LockRect(NULL, &LockedRect, NULL, 0));
	WORD *lockdata = (WORD *)LockedRect.pBits;
	for(int i = 0; i < (cols * rows) * 2; i++) {
		// 8-to-16 conversion courtesy of JasonBlochowiak - GameDev forum.
		WORD byte16 = ConvertTexel8to16(data[i]);
		lockdata[i] = byte16;
	}
	cmd->texture->UnlockRect( 0 );
}
```

Here is Quake 3's CIN_DrawCinematic, which calls the rendering function above through re.DrawStretchRaw. I have very little idea how this translates a compressed ROQ frame into an 8-bit frame, but I thought that once you converted the resulting 8-bit image to a 16-bit image it would display properly without all the noise. A little insight into the mechanics of this function would also be helpful.
```cpp
/*
==================
SCR_DrawCinematic
==================
*/
void CIN_DrawCinematic (int handle) {
	float	x, y, w, h;
	byte	*buf;

	if (handle < 0 || handle >= MAX_VIDEO_HANDLES || cinTable[handle].status == FMV_EOF) return;

	if (!cinTable[handle].buf) {
		return;
	}

	x = cinTable[handle].xpos;
	y = cinTable[handle].ypos;
	w = cinTable[handle].width;
	h = cinTable[handle].height;
	buf = cinTable[handle].buf;
	SCR_AdjustFrom640( &x, &y, &w, &h );

	if (cinTable[handle].dirty && (cinTable[handle].CIN_WIDTH != cinTable[handle].drawX || cinTable[handle].CIN_HEIGHT != cinTable[handle].drawY)) {
		int ix, iy, *buf2, *buf3, xm, ym, ll;

		xm = cinTable[handle].CIN_WIDTH/256;
		ym = cinTable[handle].CIN_HEIGHT/256;
		ll = 8;
		if (cinTable[handle].CIN_WIDTH==512) {
			ll = 9;
		}

		buf3 = (int*)buf;
		buf2 = Hunk_AllocateTempMemory( 256*256*4 );
		if (xm==2 && ym==2) {
			byte *bc2, *bc3;
			int	ic, iiy;

			bc2 = (byte *)buf2;
			bc3 = (byte *)buf3;
			for (iy = 0; iy<256; iy++) {
				iiy = iy<<12;
				for (ix = 0; ix<2048; ix+=8) {
					for(ic = ix; ic<(ix+4); ic++) {
						*bc2=(bc3[iiy+ic]+bc3[iiy+4+ic]+bc3[iiy+2048+ic]+bc3[iiy+2048+4+ic])>>2;
						bc2++;
					}
				}
			}
		} else if (xm==2 && ym==1) {
			byte *bc2, *bc3;
			int	ic, iiy;

			bc2 = (byte *)buf2;
			bc3 = (byte *)buf3;
			for (iy = 0; iy<256; iy++) {
				iiy = iy<<11;
				for (ix = 0; ix<2048; ix+=8) {
					for(ic = ix; ic<(ix+4); ic++) {
						*bc2=(bc3[iiy+ic]+bc3[iiy+4+ic])>>1;
						bc2++;
					}
				}
			}
		} else {
			for (iy = 0; iy<256; iy++) {
				for (ix = 0; ix<256; ix++) {
					buf2[(iy<<8)+ix] = buf3[((iy*ym)<<ll) + (ix*xm)];
				}
			}
		}
		re.DrawStretchRaw( x, y, w, h, 256, 256, (byte *)buf2, handle, qtrue);
		cinTable[handle].dirty = qfalse;
		Hunk_FreeTempMemory(buf2);
		return;
	}

	re.DrawStretchRaw( x, y, w, h, cinTable[handle].drawX, cinTable[handle].drawY, buf, handle, cinTable[handle].dirty);
	cinTable[handle].dirty = qfalse;
}
```

I'm trying to learn why and what is wrong; I hope I'm not implying everyone should do my thinking for me ; ). This is definitely a good learning experience. Does anyone have any further thoughts on what is generating such noisy and inaccurate frames?

##### Share on other sites
Well, I haven't had enough coffee to fully figure out what the decode function is doing, but off the top of my head it looks like it might be doing deinterlacing or line doubling. Take that with a large grain of salt, though.

What I would do if trying to debug what's going on is to work backwards. Start with the last thing that's about to get blitted to the screen (or backbuffer, or whatever). Fill it with a known value - like all green 16-bit pixels or somesuch, and see if it makes it to the screen as expected. If that works right, then fill the 8-bit buffer with a known value, and see if that gets translated to the 16-bit values correctly, and makes it to the screen. Etc.
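The first of those steps is easy to drop in: after locking the texture, fill it with a constant instead of converted data. A minimal sketch, assuming the X1R5G5B5 layout discussed above (the constant and helper name are mine):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Pure green in X1R5G5B5: x=0, r=00000, g=11111, b=00000. */
#define GREEN_X1R5G5B5 0x03E0

/* Overwrite the locked texture bits with one known color. If the quad
   then doesn't come up solid green, the problem is downstream of the
   8-to-16 conversion; if it does, work backwards into the decoder. */
static void FillKnownColor(uint16_t *dst, size_t count, uint16_t color)
{
    for (size_t i = 0; i < count; ++i)
        dst[i] = color;
}
```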

If you can feed known frames into the movie compressor (again, something like a completely green screen) then you can step through the debugger, and watch the various phases of decoding, to see where things aren't going as expected.

Remember, debugging is about 95% figuring out what's NOT happening, 3% figuring out what IS happening, and 2% fixing things once you know what's going on. By working backwards from the end, you can quickly eliminate large swaths of code from the parts that aren't working as expected.

Good luck,
Jason

##### Share on other sites
If I take out the alpha byte (fourth byte) the image looks a lot more distorted, but making it BGR did help a little bit. Also, what's kind of weird is that on the intro cinematic for Q3TA it plays what seem to be interlaced frames over the last active frame, but the cinematic that plays in the menu doesn't do that. And the resulting image quality is a lot less than what it should be (it might be because the alpha value is still the fourth byte (doesn't seem like it's there though), or maybe it had something to do with me just switching byte to a word without rewriting the decoder).

Intro cinematic normal frame:
http://img218.imageshack.us/my.php?image=darklight2intromessed7xu.png

Intro cinematic next frame it plays over the last frame:
http://img526.imageshack.us/my.php?image=darklight2intromessed11ch.png

Team Arena menu cinematic, normal, no interlacing; plays perfectly except for the quality.
http://img526.imageshack.us/my.php?image=darklight2nointerlaced3tn.png

Simple loop I used for switching it to BGR:

```cpp
for(int i = 0; i < (cols * rows) * 2; i += 3) {
	lockdata[i]   = data[i+2];
	lockdata[i+1] = data[i+1];
	lockdata[i+2] = data[i];
}
```

##### Share on other sites
Umm, I thought in your code that "lockdata" was a WORD pointer (to your 16-bit texture bits)? If so, that code doesn't swap RGB to BGR. If you're using the previous code I posted, you'd change:

WORD w = (b << 3) | (g << 7) | (r << 12);

to

WORD w = (b << 13) | (g << 7) | (r << 2);

This puts the various sub-fields in different spots in the output WORD.
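Putting that together with the earlier isolation helper, the swapped pack might look like the sketch below (the `Swapped` name is mine; I haven't built this against your code):

```c
#include <assert.h>
#include <stdint.h>

/* Same field isolation as before: shift the field down to bit 0,
   then mask off everything above it. */
static uint16_t IsolateFrom8Bit(uint8_t value, unsigned startBit, unsigned bitCount)
{
    return (uint16_t)((value >> startBit) & ((1u << bitCount) - 1u));
}

/* Pack with blue in the high field and red in the low field. */
static uint16_t ConvertTexel8to16Swapped(uint8_t value)
{
    uint16_t b = IsolateFrom8Bit(value, 0, 2);
    uint16_t g = IsolateFrom8Bit(value, 2, 3);
    uint16_t r = IsolateFrom8Bit(value, 5, 3);
    return (uint16_t)((b << 13) | (g << 7) | (r << 2));
}
```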

The rest seems like plain-old-debugging - good luck,
Jason