Rendering into texture problem

Started by
3 comments, last by Weny Sky 16 years, 4 months ago
Hi, I have a little problem with rendering into a texture. I render text into the backbuffer, copy the backbuffer into a texture with glCopyTexImage2D, then map the texture onto a quad and render it — and it looks like it has only half the alpha. If I render the texture twice in the same place, it matches the source backbuffer.

[Image: backbuffer content]
[Image: quad with the pre-rendered texture — it doesn't match the original backbuffer, and that's wrong]

Function for rendering into the texture:
void BAMWE_text::textOutput(int size_x, int size_y, const char *text, BAMWE* engine, unsigned short style)
{
	int new_x, new_y;
	new_x = size_x <= 512 ? (size_x <= 256 ? (size_x <= 128 ? 128 : 256) : 512) : 1024;
	new_y = size_y <= 512 ? (size_y <= 256 ? (size_y <= 128 ? 128 : 256) : 512) : 1024;

	// set viewport
	glViewport(0, 0, new_x, new_y);
	glMatrixMode(GL_PROJECTION);
	glLoadIdentity();
	glOrtho(0.0f, new_x, new_y, 0.0f, -1.0f, 1.0f);
	glMatrixMode(GL_MODELVIEW);
	glLoadIdentity();

	// free old texture
	if(TID > 0 && (texture_x != new_x || texture_y != new_y))
	{
		engine->freeTexture(TID);
		TID = -1;
	}

	texture_x = new_x;
	texture_y = new_y;

	// map the quad
	if(x != size_x || y != size_y)
	{
		sprite.getVertices()[0] = BAMWE_vertex(0,             0,             0);
		sprite.getVertices()[1] = BAMWE_vertex((float)size_x, 0,             0);
		sprite.getVertices()[2] = BAMWE_vertex((float)size_x, (float)size_y, 0);
		sprite.getVertices()[3] = BAMWE_vertex(0,             (float)size_y, 0);

		sprite.getTextureCords()[0] = BAMWE_texture_cord(0,1);
		sprite.getTextureCords()[1] = BAMWE_texture_cord((float)size_x/(float)new_x,1);
		sprite.getTextureCords()[2] = BAMWE_texture_cord((float)size_x/(float)new_x,1-(float)size_y/(float)new_y);
		sprite.getTextureCords()[3] = BAMWE_texture_cord(0,1-(float)size_y/(float)new_y);

		sprite.createVBO();
	}

	// create a new empty texture
	if((TID = engine->createEmptyTexture(new_x, new_y)) == -1)
	{
		engine->log(__FILE__, __LINE__, "Can't create texture");
		return;
	}

	// clear backbuffer
	glClearColor(1.0f, 1.0f, 1.0f, 0.0f);
	glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

	// write text
	glColor4f(1.0f,1.0f,1.0f,1.0f);
	font_program->textOutputRECT(0, 0, (float)size_x, (float)size_y, text, engine, style);

	// copy backbuffer
	glEnable(GL_TEXTURE_2D);
	glBindTexture(GL_TEXTURE_2D, engine->getTexture(TID));
	glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 0, 0, new_x, new_y, 0);

	// clear backbuffer
	glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
	glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
	
	// return viewport
	glViewport(0, 0, engine->getWidth(), engine->getHeight());
	glMatrixMode(GL_PROJECTION);
	glLoadIdentity();
	glOrtho(0.0f, engine->getWidth(), engine->getHeight(), 0.0f, -1.0f, 1.0f);
	glMatrixMode(GL_MODELVIEW);
	glLoadIdentity();
}
Create empty texture
int BAMWE::createEmptyTexture(unsigned int x, unsigned int y)
{
	int TID;
	unsigned int* data;

	// alloc one 32-bit RGBA texel per pixel (nothrow so the NULL check
	// below is meaningful; a plain new[] would throw instead)
	data = new (std::nothrow) unsigned int[x * y];
	
	if(data == NULL)
		return -1;

	// get free ID for texure pointer
	TID = free_TID.top();
	free_TID.pop();

	// create texture
	glGenTextures(1, &textures[TID]);
	glBindTexture(GL_TEXTURE_2D, textures[TID]);
	glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, x, y, 0,
		GL_RGBA, GL_UNSIGNED_BYTE, data);
	glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST);
	glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST);

	delete [] data;

	return TID;
}

	SDL_GL_SetAttribute( SDL_GL_RED_SIZE, 5 );
	SDL_GL_SetAttribute( SDL_GL_GREEN_SIZE, 5 );
	SDL_GL_SetAttribute( SDL_GL_BLUE_SIZE, 5 );
	SDL_GL_SetAttribute( SDL_GL_DEPTH_SIZE, 16 );
	SDL_GL_SetAttribute( SDL_GL_DOUBLEBUFFER, 1 );
	SDL_GL_SetAttribute( SDL_GL_STENCIL_SIZE, 0);
I don't really know what's actually wrong, but why do you request a 16-bit color format and a 16-bit depth buffer with SDL_GL_SetAttribute? It's usually best to omit those calls entirely, so you get a 32-bit color buffer and whatever depth buffer the card works best with — which is probably what you want.
I'm not sure whether omitting all the SDL_GL_SetAttribute calls will give you an alpha channel in your back buffer (which you probably need), so you could make the calls like this:

SDL_GL_SetAttribute(SDL_GL_RED_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_ALPHA_SIZE, 8);
If the read-back texture contains an alpha channel that you use for blending, you have to be careful. For example, consider a fragment on the edge of a glyph. Say its color is 1 (white) with 50% alpha. Rendered into an initially black backbuffer with GL_SRC_ALPHA/GL_ONE_MINUS_SRC_ALPHA, the result is color 0.5 and alpha 0.5, just as expected. Now read it back into a texture and draw it again on a black background with the same blend function: the color is now 0.25, darker than the initial backbuffer.

The problem in my example, which could be the case in your application as well, is that the read-back texture does not contain the same kind of color data as the original font data. The font data is a non-premultiplied image (premultiplied means the color channels have already been multiplied by the alpha channel, saving a multiplication when blending), whereas the texture contains premultiplied data. The solution, I would guess, is to use GL_ONE/GL_ONE_MINUS_SRC_ALPHA when rendering the texture.

If you treat a premultiplied image as non-premultiplied, the source image is effectively blended with its alpha squared, which produces roughly the effect in your second image.
Brother Bob, thanks a lot. It was a blending problem.
I have a problem with an S3G UniChrome graphics card. I don't know why, but on this IGP, rendering into a texture (copying from the backbuffer into a texture) doesn't work. Nothing appears in the texture after glCopyTexImage2D(...), and OpenGL doesn't report any error.

Even NeHe Tutorial 36 ( http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=36 ) doesn't work properly (it renders only the helix, without the radial-blur texture).

Is this problem solvable by adding something to the code, or is it unsolvable on this card?


Here is information about the card.
Graphics accelerator: S3 Graphics VIA/S3G UniChrome Pro IGP/MMX/SSE 1.2
Supported extensions: GL_ARB_point_parameters GL_ARB_multitexture GL_EXT_blend_color GL_EXT_blend_minmax GL_EXT_blend_subtract GL_ARB_texture_env_combine GL_EXT_texture_env_combine GL_ARB_texture_env_dot3 GL_EXT_texture_env_dot3 GL_ARB_texture_env_add GL_EXT_texture_env_add GL_EXT_secondary_color GL_EXT_texture_lod_bias GL_ARB_texture_mirrored_repeat GL_EXT_stencil_wrap GL_EXT_fog_coord GL_ARB_transpose_matrix GL_EXT_separate_specular_color GL_EXT_rescale_normal GL_ARB_window_pos GL_EXT_abgr GL_EXT_bgra GL_EXT_packed_pixels GL_EXT_paletted_texture GL_EXT_vertex_array GL_EXT_compiled_vertex_array GL_WIN_swap_hint GL_EXT_texture_compression_s3tc GL_ARB_texture_compression GL_S3_s3tc GL_EXT_draw_range_elements

This topic is closed to new replies.
