Texturing with GL_TEXTURE_RECTANGLE_ARB

Hey, I'm having a spot of trouble texturing a regular quad using the "GL_TEXTURE_RECTANGLE_ARB" texture target. I have read NeHe's Lesson 06 and searched around a bit, but have been unable to find a proper solution. I have used a neat debugging tool (imdebug) to verify that the texture actually contains what I believe it does, but for some reason OpenGL refuses to texture the quad I have drawn... Perhaps because I've misunderstood something fundamental? I'd be eternally grateful for any help anyone is willing to spare. My initialization code looks like this:


	cgEnabled = true;

	//glShadeModel(GL_SMOOTH);
	//glEnable(GL_TEXTURE_2D);
	//glEnable(GL_TEXTURE_RECTANGLE_ARB);

	// Initialize FBO: create the off-screen framebuffer
	glGenFramebuffersEXT(1, &fb);
	// Bind the off-screen framebuffer (that is, skip the window-specific render target)
	glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
	//glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb); // Must be placed here?

	// Init Cg.
	cgSetErrorCallback(cgErrorCallback);

	// get the best profile for this hardware
	vtex_profile = cgGLGetLatestProfile(CG_GL_VERTEX);
	frag_profile = cgGLGetLatestProfile(CG_GL_FRAGMENT);

	// Make certain profiles exist...
	assert((vtex_profile != CG_PROFILE_UNKNOWN) && (frag_profile != CG_PROFILE_UNKNOWN));
	cgGLSetOptimalOptions(vtex_profile);
	cgGLSetOptimalOptions(frag_profile);

	printf("BeeSystem: Latest vertex Profile loaded - %s \n", cgGetProfileString(vtex_profile));
	printf("BeeSystem: Latest fragment Profile loaded - %s \n", cgGetProfileString(frag_profile));

	g_context = cgCreateContext();

	// Load Cg Programs
	testFProg = cgCreateProgramFromFile(g_context, CG_SOURCE, "moveBees.cg", frag_profile, "main", NULL);

	if(testFProg != NULL)
	{
		/* Fragment program only needs to be loaded once */
		cgGLLoadProgram(testFProg);

		TextureParam = cgGetNamedParameter(testFProg, "beeTexture");
	}

	textureFProg = cgCreateProgramFromFile(g_context, CG_SOURCE, "textureDisplay.cg", frag_profile, "main", NULL);

	if(textureFProg != NULL)
	{
		/* Fragment program only needs to be loaded once */
		cgGLLoadProgram(textureFProg);

		anotherTextureParam = cgGetNamedParameter(textureFProg, "aTexture");
	}

	// Create Landscape Texture
	int bmpStatus;
	scenarioBmp = new BMPImg();

	printf("DEBUG IMG Pointer: %p \n", scenarioBmp->GetImg());

	bmpStatus = scenarioBmp->Load("beeworld0.bmp");

	printf("DEBUG IMG Pointer: %p \n", scenarioBmp->GetImg());

	printf("DEBUG IMG INFEW: BPP %i | Width %i | Height %i \n", scenarioBmp->GetBPP(), scenarioBmp->GetWidth(), scenarioBmp->GetHeight());

	if (bmpStatus == 1) {
		printf("BeeSystem: Bmp Load succeeded! \n");
	} else {
		printf("BeeSystem: Bmp Load failed: %i \n", bmpStatus);
	}

	glGenTextures(1, &scenarioTex);
	glBindTexture(GL_TEXTURE_RECTANGLE_ARB, scenarioTex);

	// Rectangle textures only allow clamping wrap modes; enum parameters belong in glTexParameteri
	glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
	glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
	glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
	glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

	// CRASH: in all likelihood because the image is 24-bit RGB while an RGBA-sized buffer is expected
	glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_FLOAT_RGBA32_NV, 512, 512, 0, GL_RGB, GL_UNSIGNED_BYTE, scenarioBmp->GetImg());

 
	imdebugTexImagef(GL_TEXTURE_RECTANGLE_ARB, scenarioTex, GL_RGBA, 0);


The actual rendering of the quad with the supposed texture looks as follows:


	// Draw a Quad with Ortho that fills screen!
	glMatrixMode(GL_PROJECTION);
	glLoadIdentity();
	gluOrtho2D(0.0, 512, 0.0, 512);

	// Aspect ratio (width / height); unused while gluPerspective below is commented out
	float ratio = 1.0f * 512 / 512;
		
	// Set the viewport to be the entire window
	glViewport(0, 0, 512, 512);

	// Set the correct perspective.
	// FOV, Aspect Ratio, Znear, Zfar
	//gluPerspective(90,ratio,0.1,10000);
	glMatrixMode(GL_MODELVIEW);
	glLoadIdentity();
	// Vec3 eyePos, Vec3 SceneCenter, Vec3 ViewUpVector
	gluLookAt(0.0,0.0,1.0, 0.0,0.0,-1.0, 0.0f,1.0f,0.0f);

	glColor4f(0.2f,0.2f,0.2f,1.0f);

	glDisable(GL_TEXTURE_RECTANGLE_ARB);
	glEnable(GL_TEXTURE_RECTANGLE_ARB);

	glBindTexture(GL_TEXTURE_RECTANGLE_ARB, scenarioTex);

	glBegin(GL_QUADS);
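			// Note: GL_TEXTURE_RECTANGLE_ARB takes unnormalized texel coordinates
			// (0..width, 0..height) rather than the usual 0..1 range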
			glTexCoord2f(0.0, 0.0); 
			glVertex2f(0.0, 0.0);
			glTexCoord2f(512.0, 0.0); 
			glVertex2f(512.0, 0.0);
			glTexCoord2f(512.0, 512.0); 
			glVertex2f(512.0, 512.0);
			glTexCoord2f(0.0, 512.0); 
			glVertex2f(0.0, 512.0);
	glEnd();


I flush after the rendering and make sure that values aren't clamped, by which I mean something along these lines (a minimal sketch; I'm assuming the ARB_color_buffer_float extension here, so the exact calls may differ from my code):
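
glClampColorARB(GL_CLAMP_VERTEX_COLOR_ARB, GL_FALSE);   // don't clamp vertex colors to [0,1]
glClampColorARB(GL_CLAMP_FRAGMENT_COLOR_ARB, GL_FALSE); // don't clamp fragment colors to [0,1]
glFlush(); // push all pending rendering commands through

Other than that, I am not sure what I am missing... Thanks in advance, Gazoo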
The first obvious question is: what hardware are you running?
I have no experience using GL_TEXTURE_RECTANGLE_ARB, but your code seems to be missing a line like this:

glEnable(GL_TEXTURE_2D);

You've got this line commented out in the first code block.

I don't know whether it suffices to use only glEnable(GL_TEXTURE_RECTANGLE_ARB) or whether you need both. So... give it a try ;)
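I.e. something like this in your init code (untested on my end, since I haven't used rectangle textures myself):

glEnable(GL_TEXTURE_2D);            // in case the driver also wants the 2D target enabled
glEnable(GL_TEXTURE_RECTANGLE_ARB); // the rectangle target you are actually sampling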
Hey,

I'm running the program on a GeForce 7600. I have tried adding the glEnable call in the first code block, which unfortunately didn't change anything. I also found another texturing tutorial where the glEnable call was made in the render loop, which is where I have most recently tried placing it (as you can see in the second code block). Unfortunately, that hasn't helped either...

Regards,

Gazoo
glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_FLOAT_RGBA32_NV, 512, 512, 0, GL_RGB, GL_UNSIGNED_BYTE, scenarioBmp->GetImg());

shouldn't you be using

glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_FLOAT_RGBA32_NV, 512, 512, 0, GL_RGBA, GL_UNSIGNED_BYTE, scenarioBmp->GetImg());

You are setting up the texture data for RGBA32, and IIRC Nvidia internally pads RGB out to RGBA to align it to 32 bits anyway. Not sure if that will help, but it doesn't hurt to try.

But I would use GL_RGBA32F_ARB instead of GL_FLOAT_RGBA32_NV; the former is ARB-approved and works on both ATI and Nvidia.

I also don't see anywhere that you render into the texture, e.g. with glDrawBuffer(), since you are using FBOs, correct? You need to call glDrawBuffer(), render what you want, shut down the FBO, and then use that texture however you want. Unless this isn't your complete code.
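Roughly, the flow looks something like this (from memory; the attachment point and the texture/FBO names are just placeholders for whatever your code actually uses):

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
// Attach the target texture to the FBO's first color attachment
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_RECTANGLE_ARB, scenarioTex, 0);
// Direct rendering into that attachment
glDrawBuffer(GL_COLOR_ATTACHMENT0_EXT);

// ... render whatever should end up in the texture ...

// Switch back to the window's framebuffer and use the texture as usual
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glDrawBuffer(GL_BACK);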
Hey Mars_999,

I owe you a great big thanks :D Switching to the ARB-approved "GL_RGBA32F_ARB" internal format on the relevant textures made it all work magically. I was actually plugging away at the problem in a separate, much more contained project - which I usually do with OpenGL stuff, because I don't trust a lot of it. I assume the problem arose from the combination of the ARB-approved texture target "GL_TEXTURE_RECTANGLE_ARB" and the Nvidia-specific "GL_FLOAT_RGBA32_NV" internal format.

As already mentioned, switching to the ARB internal format seems to have worked wonders, and I can't spot any other side effects, so I am assuming the whole program is still happy :D

As for your first suggestion - I can't really say how Nvidia re-wraps the format internally, but declaring the input format GL_RGBA instead of GL_RGB will (and does) crash the program, since OpenGL then reads memory that doesn't belong to the texture... (a solid 8 bits x texWidth x texHeight into illegal territory :D) The upload call that works for me now looks like this:
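
// ARB-approved float internal format; the source data stays GL_RGB since the BMP is 24-bit
glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGBA32F_ARB, 512, 512, 0, GL_RGB, GL_UNSIGNED_BYTE, scenarioBmp->GetImg());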

Anyway - Thanks a lot - you've made my day... Not sure how I would have otherwise solved that problem...

Regards,

Gazoo
Glad that it works and happy to have helped.
Well, for one, I'm not sure Gazoo appreciates you hijacking his thread! ;) Also, the image you wanted us to look at isn't showing up...
I guess it can be construed as rude :) But since I've already solved my problem, I'm more concerned about Frepo getting some help - I mean, it's kind of off-topic with regard to the original problem anyway :D

Gazoo
Never mind then.
