OpenGL 8 bit grayscale raw image problem


Hello, I am working on a terrain generator using heightmaps for a class I am currently in, and I have come across a problem I cannot figure out, as I have very little experience using OpenGL. I have calculated a shadowmap based on my heightmap and saved it to my terrain file as a raw 8-bit image. However, when I load it onto my graphics card, I believe it is being treated as a 24-bit image. I'm not sure, so I won't speculate further; I'll just show you what is going on.

When the shadowmap is applied, the terrain looks wrong (screenshot not preserved). However, if I output the shadowmap as a .BMP, it looks like it should (screenshot not preserved).

This is my shader source.

Vertex shader:
void main()
{
    gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;
    gl_TexCoord[0] = gl_MultiTexCoord0;
}


Fragment shader:

uniform sampler2D sand;
uniform sampler2D grass;
uniform sampler2D rock;
uniform sampler2D snow;

uniform sampler2D alphamap;

void main(void)
{
    vec4 alpha = texture2D( alphamap, gl_TexCoord[0].xy ).rgba;

    vec4 tex0 = texture2D( sand,  gl_TexCoord[0].xy * 8.0 );
    vec4 tex1 = texture2D( grass, gl_TexCoord[0].xy * 8.0 );
    vec4 tex2 = texture2D( rock,  gl_TexCoord[0].xy * 8.0 );
    vec4 tex3 = texture2D( snow,  gl_TexCoord[0].xy * 8.0 );

    tex0 *= alpha.r;                            // Red channel - sand/below sealevel
    tex1 = mix( tex0, tex1, alpha.g );          // Green channel - grass/above sealevel
    tex2 = mix( tex1, tex2, alpha.b );          // Blue channel - rock/steep slopes
    vec4 outColor = mix( tex2, tex3, alpha.a ); // Alpha channel - snow/high altitude

    gl_FragColor = clamp( outColor, 0.0, 1.0 );
}


and this is how I load in the image:
GLuint texture_id;

glGenTextures(1, &texture_id);
glBindTexture(GL_TEXTURE_2D, texture_id);

/* set anisotropic filtering if the extension is available */
if (GL_EXT_texture_filter_anisotropic)
{
    float MaxAnisotropic;
    glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &MaxAnisotropic);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, MaxAnisotropic);
}

// set parameters
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT );

// create mipmaps
gluBuild2DMipmaps( GL_TEXTURE_2D, internal_format, width, height, format, type, data );


This is how I call it:

GraphicsManager::GetSingleton().LoadRawTexture("shadowmap", 1, map->get_width(), map->get_height(), GL_LUMINANCE, GL_UNSIGNED_BYTE, map->get_shadowmap());

I use a hashmap in my graphics manager and hash the names of textures; that is why "shadowmap" is the first argument. That may be a bad way of doing it, I don't know, but I know it works for what I am currently doing, so that isn't the issue here. All I do when I render is:
// set uniform for shadowmap
glActiveTextureARB(GL_TEXTURE4);
glUniform1iARB(sampler_loc, 4);


If it comes down to it, I will store the image as RGB, just repeating the 8-bit value in each channel, but since heightmaps can get large, that seems far from a good solution. [Edited by - melbow on May 1, 2009 10:14:46 AM]

GL_UNSIGNED_BYTE is making your glTexImage2D call expect 8 bits per pixel.

While it's possible to use GL_INTENSITY4 to indicate that the texture should be stored in video memory as 4-bit intensity, I believe you still need to pass the data to glTexImage2D at a minimum of 8 bits per pixel (there is no type smaller than GL_UNSIGNED_BYTE).

So, when you load the file off disk, expand the data to 8 bits.

Something like this should do it (off the top of my head, I may be wrong):

char *fourbitimage = whatever;
char *eightbitimage = malloc(numPixels);
for (int i = 0; i < numPixels / 2; i++)
{
    eightbitimage[i*2]   = (fourbitimage[i] & 0x0F) << 4; /* low nibble  */
    eightbitimage[i*2+1] = fourbitimage[i] & 0xF0;        /* high nibble */
}
glTexImage2D(...whatever... GL_INTENSITY4, GL_UNSIGNED_BYTE, eightbitimage);

Quote:
Original post by Exorcist: GL_UNSIGNED_BYTE is making your glTexImage2D call expect 8 bits per pixel.

My goodness do I feel silly. I actually meant 8 bit. I've edited my post. Sorry, I've been awake for... let's just say way too long.

It is stored as Unsigned Bytes, and I write it to my file like this:

std::fstream out;
out.open(output_file, std::ios::out | std::ios::binary);
... //some code later
out.write((char *)t.lightmap, t.num_vertices()); // num_vertices just returns w*h

The image you say is correct is 513x513, so check the padding and stride of the image. The row stride, which appears to be 513 bytes, is not a multiple of the default unpack alignment of 4. Either pad each row to 516 bytes (the next multiple of 4 above 513) or set the unpack alignment to 1 (see glPixelStorei).
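To make the arithmetic concrete, here is a small sketch (not from the thread; the function name is illustrative) of how OpenGL derives the row stride it reads from your buffer, given the image width, bytes per pixel, and the current GL_UNPACK_ALIGNMENT:

```c
/* Sketch: the row stride OpenGL reads per image row, given the width,
 * bytes per pixel, and the current GL_UNPACK_ALIGNMENT. The function
 * name is illustrative, not part of any API. */
int gl_row_stride(int width, int bytes_per_pixel, int alignment)
{
    int tight = width * bytes_per_pixel;
    /* round up to the next multiple of the alignment */
    return ((tight + alignment - 1) / alignment) * alignment;
}
```

With the default alignment of 4, gl_row_stride(513, 1, 4) is 516, so GL expects 3 padding bytes per row that a tightly packed raw file does not contain; with alignment 1 it reads exactly 513. A 513x513 RGBA image is unaffected because 513 * 4 = 2052 is already a multiple of 4. The one-line fix is to call glPixelStorei(GL_UNPACK_ALIGNMENT, 1) before the upload.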

Ohhh! If I understand correctly, that would also explain why my RGBA raw, which is likewise 513x513, loads just fine: at 4 bytes per pixel its row stride is already a multiple of 4.

Thank you very much.
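For completeness, the other fix mentioned above (padding each row out to the next multiple of 4) can be sketched like this; the names are illustrative and not from the thread:

```c
#include <stdlib.h>
#include <string.h>

/* Sketch: copy a tightly packed 8-bit image into a buffer whose rows
 * are padded to the default 4-byte unpack alignment. Caller frees. */
unsigned char *pad_rows(const unsigned char *src, int width, int height)
{
    int stride = (width + 3) & ~3;  /* next multiple of 4 (516 for 513) */
    unsigned char *dst = calloc((size_t)stride * height, 1);
    for (int y = 0; y < height; y++)
        memcpy(dst + (size_t)y * stride, src + (size_t)y * width,
               (size_t)width);
    return dst;
}
```

Setting the unpack alignment to 1 is the simpler fix, though, since it avoids the extra copy.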
