# OpenGL Problems rendering text with SDL


## Recommended Posts

I know this question has been answered many times; in fact, I used one of the previous posts in these forums to build the following program. The problem is that when the texture is applied (or generated) I only get a white box (or whatever color you pick) blended with my "glClearColor". However, just before I call glTexImage2D(), I dump the contents of the SDL_Surface to a .bmp file and I can clearly see that it is rendered correctly by TTF_RenderText_Blended().
#include <SDL.h>
#include <SDL_ttf.h>
#include <GL/gl.h>
#include <GL/glu.h>
#include <cmath>
#include <iostream> // needed for the SDL_GetError() diagnostic below

inline unsigned int NextPowerOfTwo(unsigned int x)
{
double logbase2 = log(x) / log(2);
return (unsigned int)round(pow(2, ceil(logbase2)));
}

void glEnable2D()
{
int vPort[4];

glGetIntegerv(GL_VIEWPORT, vPort);

glMatrixMode(GL_PROJECTION);
glPushMatrix();

glOrtho(0, vPort[2], 0, vPort[3], -1, 1);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
}

void VideoInit()
{
SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO);
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
SDL_GL_SetAttribute(SDL_GL_RED_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 24);

TTF_Init();
int flags = SDL_OPENGL | SDL_HWACCEL | SDL_HWSURFACE;
SDL_Surface *screen = SDL_SetVideoMode(800,600, 32, flags);
}

void OpenGLInit()
{
glViewport(0, 0, 800, 600);
glClearColor(0.f, 0.f, 1.f, 0.f);

glDepthFunc(GL_LEQUAL);
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
glBlendFunc(GL_ONE, GL_ONE);
glEnable(GL_BLEND);

glMatrixMode(GL_TEXTURE);
//glScalef(1, -1, 1); //SDL images are drawn upsidedown

glMatrixMode(GL_PROJECTION);
glMatrixMode(GL_MODELVIEW);
}

int main(void)
{
VideoInit();

SDL_Color color = {255, 255, 255, 0};
TTF_Font *font = TTF_OpenFont("font.ttf", 24); // placeholder path; any valid .ttf works
SDL_Surface *rendered = TTF_RenderText_Blended(font, "Testing", color);

unsigned int w = NextPowerOfTwo(rendered->w);
unsigned int h = NextPowerOfTwo(rendered->h);

if ((rendered->flags & SDL_SRCALPHA) == SDL_SRCALPHA)
SDL_SetAlpha(rendered, 0, 0);
#if SDL_BYTEORDER == SDL_BIG_ENDIAN
const unsigned int rmask = 0xFF000000;
const unsigned int gmask = 0x00FF0000;
const unsigned int bmask = 0x0000FF00;
const unsigned int amask = 0x000000FF;
#else
const unsigned int rmask = 0x000000FF;
const unsigned int gmask = 0x0000FF00;
const unsigned int bmask = 0x00FF0000;
const unsigned int amask = 0xFF000000;
#endif
SDL_Surface *image = SDL_CreateRGBSurface(SDL_SWSURFACE, w, h, 32,
rmask, gmask, bmask, amask);
if (image == 0) {
std::cout << SDL_GetError() << std::endl;
}
SDL_Rect area;
area.x = 0;
area.y = 0;
area.w = rendered->w;
area.h = rendered->h;

SDL_BlitSurface(rendered, &area, image, &area);
SDL_FreeSurface(rendered);

OpenGLInit();

GLuint tex = 0;

glGenTextures(1, &tex);
if (tex == 0) {
return 1;
}
glBindTexture(GL_TEXTURE_2D, tex);

const char *testname = "test.bmp";
SDL_SaveBMP(image, testname);

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, image->w, image->h, 0,
GL_RGBA, GL_UNSIGNED_BYTE, image->pixels);

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable2D();
float up = area.h / image->h;
float right = area.w / image->w;

SDL_FreeSurface(image);

glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex);
glColor3ub(color.r, color.g, color.b);

glBegin(GL_QUADS);
glTexCoord2f(0.f, 0.f);
glVertex2i(400, 300 + area.h);

glTexCoord2f(0.f, up);
glVertex2i(400, 300);

glTexCoord2f(right, up);
glVertex2i(400 + area.w, 300);

glTexCoord2f(right, 0.f);
glVertex2i(400 + area.w, 300 + area.h);

glEnd();
SDL_GL_SwapBuffers();
while (true) {
SDL_Event event;
SDL_WaitEvent(&event);
if (event.type == SDL_QUIT)
break;
}
glDeleteTextures(1, &tex);
SDL_Quit();
return 0;
}


I know the code is ugly; it's a quick demo I extracted from a larger program I'm building, but the same SDL and OpenGL configuration is applied there. I even tried switching to a different TTF font to see if that was the problem, but it isn't. The other post I'm talking about is located here. I'm compiling the program with g++ version 4.3.3 on a 2.6.28-11 Linux box, if that's relevant. Thanks
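As an aside, the NextPowerOfTwo() above can also be written with integer bit tricks instead of the log/pow round-trip, which avoids floating-point edge cases near exact powers of two. A minimal sketch, assuming a 32-bit unsigned int and x > 0 (the function name here is just for illustration):

```c
/* Round x up to the next power of two with bit twiddling instead of
 * floating-point log/pow.  Exact powers of two map to themselves. */
unsigned int NextPowerOfTwoBits(unsigned int x)
{
    x--;             /* so e.g. 128 stays 128 instead of jumping to 256 */
    x |= x >> 1;     /* smear the highest set bit downward... */
    x |= x >> 2;
    x |= x >> 4;
    x |= x >> 8;
    x |= x >> 16;    /* ...until every lower bit is set */
    return x + 1;    /* one more reaches the next power of two */
}
```

For example, a 19x13 glyph surface pads to a 32x16 texture, the same result the floating-point version gives.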

##### Share on other sites
Apparently, in this example the only thing I did (besides fixing a bug in which float up, right resulted in 0) was to get rid of the call to SDL_SetAlpha(), and it started working.

However, I have another program with the same SDL/OpenGL configuration (it is perhaps too extensive to list here) where the problem was that I forgot to enable GL_TEXTURE_2D -_-; but that program keeps the call to SDL_SetAlpha() and works perfectly. Does anyone know why? (I know this might turn into an SDL question instead of an OpenGL one.)
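For anyone hitting the same "float up, right resulted in 0" bug: area.h and image->h are both ints, so the division happens in integer arithmetic and truncates to 0 whenever the padded texture is larger than the text, before the result is ever assigned to the float. Casting one operand first fixes it; a minimal sketch (the helper name is just for illustration):

```c
/* With two int operands, area.h / image->h is integer division: it
 * truncates to 0 whenever the text is smaller than the padded texture,
 * making every texture coordinate 0 so the quad samples a single texel.
 * Promoting one operand to float before dividing keeps the ratio. */
float tex_coord_ratio(int used, int padded)
{
    return (float)used / padded;    /* e.g. 19 and 32 -> 0.59375f, not 0 */
}
```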

##### Share on other sites
Without knowing the specifics involved, I will refer you to the documentation on SDL_BlitSurface(). This describes how it acts in the presence of the various flags.

By disabling the alpha before the blit, you are doing an opaque blit with what should be transparent pixels. My guess is that in your other program the alpha channel isn't used, so the call does nothing. For example, if you SDL_LoadBMP() you will get a surface without an alpha channel, so the conditional expression "(surface->flags & SDL_SRCALPHA) == SDL_SRCALPHA" is probably never true.
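To make the two blit modes concrete, here is a hypothetical per-pixel sketch of SDL 1.2's behaviour (an illustration, not the real implementation): with SDL_SRCALPHA set, the blit alpha-blends the RGB channels by the source alpha and leaves the destination alpha untouched; with it cleared, the whole RGBA pixel is copied verbatim.

```c
#include <stdint.h>

/* Hypothetical per-pixel model of an SDL 1.2 RGBA-to-RGBA blit
 * (illustration only, not the actual implementation). */
typedef struct { uint8_t r, g, b, a; } Pixel;

Pixel blit_pixel(Pixel src, Pixel dst, int srcalpha_enabled)
{
    if (!srcalpha_enabled)
        return src;     /* "opaque" blit: RGBA copied verbatim */

    /* Alpha blit: blend RGB by the source alpha, keep the destination
     * alpha -- so text blitted onto a fresh (alpha = 0) surface stays
     * fully transparent as far as the alpha channel is concerned. */
    Pixel out;
    out.r = (uint8_t)((src.r * src.a + dst.r * (255 - src.a)) / 255);
    out.g = (uint8_t)((src.g * src.a + dst.g * (255 - src.a)) / 255);
    out.b = (uint8_t)((src.b * src.a + dst.b * (255 - src.a)) / 255);
    out.a = dst.a;
    return out;
}
```

Blitting a half-transparent white text pixel onto a freshly created (all-zero) surface shows the difference: the alpha blit leaves the destination alpha at 0, while the opaque blit carries the source alpha across.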

A sidenote, but the SDL_SetVideoMode flags "SDL_HWACCEL" and "SDL_HWSURFACE" don't work with OpenGL. What might be worth trying is the SDL OpenGL attribute SDL_GL_ACCELERATED_VISUAL.

##### Share on other sites
Quote:
 Original post by rip-off: Without knowing the specifics involved, I will refer you to the documentation on SDL_BlitSurface(). This describes how it acts in the presence of the various flags.

Ok, thanks I'll check that in the documentation.

Quote:
 By disabling the alpha before the blit, you are doing an opaque blit with what should be transparent pixels. My guess is that in your other program the alpha channel isn't used, so the call does nothing. For example, if you SDL_LoadBMP() you will get a surface without an alpha channel, so the conditional expression "(surface->flags & SDL_SRCALPHA) == SDL_SRCALPHA" is probably never true.

The curious thing is that I'm doing the same thing with font rendering, and then copying it to another surface.

Quote:
 A sidenote, but the SDL_SetVideoMode flags "SDL_HWACCEL" and "SDL_HWSURFACE" don't work with OpenGL. What might be worth trying is the SDL OpenGL attribute SDL_GL_ACCELERATED_VISUAL.

Ok, cool, thanks. I used those flags based on what I'd been reading on the Internet. I guess you can't trust everything on the net, right?
