# OpenGL 8 bit grayscale raw image problem

## Recommended Posts

Hello, I am working on a terrain generator using heightmaps for a class I am currently taking, and I have come across a problem I cannot figure out, as I have very little experience using OpenGL. I have calculated a shadowmap based on my heightmap and saved it to my terrain file as a raw 8-bit image. However, when I load it into my graphics card, I believe it is being interpreted as a 24-bit image. I'm not sure, so I won't speculate further; I'll just show you what is going on. When the shadowmap is applied in-game it looks wrong, yet if I output the same shadowmap as a .BMP it looks exactly as it should.

This is my shader source.

Vertex Shader:
```glsl
void main()
{
    gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;
    gl_TexCoord[0] = gl_MultiTexCoord0;
}
```


Fragment Shader:

```glsl
uniform sampler2D sand;
uniform sampler2D grass;
uniform sampler2D rock;
uniform sampler2D snow;

uniform sampler2D alphamap;

void main(void)
{
    vec4 alpha = texture2D( alphamap, gl_TexCoord[0].xy ).rgba;

    vec4 tex0 = texture2D( sand,  gl_TexCoord[0].xy * 8.0 );
    vec4 tex1 = texture2D( grass, gl_TexCoord[0].xy * 8.0 );
    vec4 tex2 = texture2D( rock,  gl_TexCoord[0].xy * 8.0 );
    vec4 tex3 = texture2D( snow,  gl_TexCoord[0].xy * 8.0 );

    tex0 *= alpha.r;                            // Red channel - sand/below sealevel
    tex1 = mix( tex0, tex1, alpha.g );          // Green channel - grass/above sealevel
    tex2 = mix( tex1, tex2, alpha.b );          // Blue channel - rock/steep slopes
    vec4 outColor = mix( tex2, tex3, alpha.a ); // Alpha channel - snow/high altitude

    gl_FragColor = clamp( outColor, 0.0, 1.0 );
}
```


and this is how I load in the image:

```cpp
GLuint texture_id;

glGenTextures(1, &texture_id);
glBindTexture(GL_TEXTURE_2D, texture_id);

// enable anisotropic filtering if the extension is available
if (GL_EXT_texture_filter_anisotropic)
{
    float MaxAnisotropic;
    glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &MaxAnisotropic);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, MaxAnisotropic);
}

// set filtering and wrapping parameters
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT );

// create mipmaps and upload the data
gluBuild2DMipmaps( GL_TEXTURE_2D, internal_format, width, height, format, type, data );
```


This is how I call it:

```cpp
GraphicsManager::GetSingleton().LoadRawTexture("shadowmap", 1, map->get_width(),
    map->get_height(), GL_LUMINANCE, GL_UNSIGNED_BYTE, map->get_shadowmap());
```

I use a hashmap in my graphics manager and hash the names of textures; that is why I have "shadowmap" as the 1st argument. That may be a bad way of doing it, I don't know, but I know that it works for what I am currently doing, so that isn't the issue here. All I do when I render is:
```cpp
// set uniform for shadowmap
glActiveTextureARB(GL_TEXTURE4);
glUniform1iARB(sampler_loc, 4);
```

If it comes down to it I will store the image as RGB, just repeating the 8-bit value in each channel, but as heightmaps can get large, that seems far from a good solution. [Edited by - melbow on May 1, 2009 10:14:46 AM]

##### Share on other sites
GL_UNSIGNED_BYTE is making your glTexImage2D call expect 8 bits per pixel.

While it's possible to use GL_INTENSITY4 to indicate that the texture in video memory should be stored as 4-bit intensity, I believe that you still need to pass the data into glTexImage2D at a minimum of 8 bits per pixel (there is no type smaller than GL_UNSIGNED_BYTE).

So, when you load the file off disk, expand the data to 8 bits per pixel.

Something like this should do it (off the top of my head, I may be wrong):

```c
char *fourbitimage = whatever;
char *eightbitimage = malloc(numPixels);
for (int i = 0; i < numPixels / 2; i++)
{
    eightbitimage[i*2]   = (fourbitimage[i] & 0x0F) << 4; // low nibble into the high bits
    eightbitimage[i*2+1] = (fourbitimage[i] & 0xF0);      // high nibble already in place
}
glTexImage2D(/* ...whatever... */ GL_INTENSITY4, GL_UNSIGNED_BYTE, eightbitimage);
```

##### Share on other sites
Quote:
Original post by Exorcist: GL_UNSIGNED_BYTE is making your glTexImage2D call expect 8 bits per pixel.

My goodness do I feel silly. I actually meant 8 bit. I've edited my post. Sorry, I've been awake for... let's just say way too long.

It is stored as unsigned bytes, and I write it to my file like this:

```cpp
std::fstream out;
out.open(output_file, std::ios::out | std::ios::binary);
// ... some code later
out.write((char *)t.lightmap, t.num_vertices()); // num_vertices just does w*h
```

##### Share on other sites
The image you say is correct is 513x513, so you must check the padding and stride of the image. The stride of the image, which appears to be 513 bytes, is not a multiple of the default unpack alignment, which is 4. Either pad each row to 516 bytes (the next multiple of 4 above 513) or set the unpack alignment to 1 (see glPixelStorei).

##### Share on other sites
Ohhh! If I understand correctly then, that would explain why my RGBA raw, which is also 513x513, loads just fine.

Thank you very much.
