OpenGL 16-bit texture loading


Okay, I'm new to gamedev.net, but not a n00b. I've been working on a tool that loads a model format used by a game I like, and the format also contains a 16-bit TGA image as the texture. I've extracted the texture and saved it externally (and in memory), but I can't bind it as a texture in OpenGL. I can bind 24/32-bit images, so I made a few temporary ones in place of the 16-bit one, but since the model format only supports 16-bit TGA... I'm rather stuck.

I googled around for an answer and found little that helped. There were bogus examples claiming you could load the data with GL_RGB5... but they failed. My data is composed of 2-byte values in place of the standard 3 bytes (RGB). For example, my first pixel in the 24-bit TGA is 16,16,0 (RGB), whereas in the 16-bit file the same pixel is the two bytes 64,8.

I first thought about loading it directly as 16-bit, but I can't get OpenGL to accept it. Next I tried converting the pixels manually and loading the data as a 24-bit image... but none of the bitshifts I tried worked (I've never been good with bits). Supposedly, the format is byte1 = ARRRRRGG, byte2 = GGGBBBBB, but due to lo-hi (or was it hi-lo?) byte order the second byte comes first in the file (GGGBBBBB then ARRRRRGG).

I know it's not the API itself and it's me doing something wrong... but I can't figure it out. Help would be greatly appreciated here :) If this has been answered before, please link me to it (or possibly dumb it down, or give me a solution that doesn't force me to read for a few weeks and try to learn bits, which I simply do not understand). Thanks in advance.
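As a quick sanity check on the byte order (my own worked example using the pixel quoted above, assuming the usual little-endian layout of 16-bit TGA data): the two file bytes 64 and 8 combine low byte first into a single 16-bit value.

// C++ sketch: combining the two file bytes into one 16-bit texel, low byte first.
#include <cstdio>

int main()
{
    unsigned char first = 64, second = 8;              // bytes as they appear in the file
    unsigned short texel = (unsigned short)(first | (second << 8));
    printf("texel = %u (0x%04X)\n", texel, texel);     // prints 2112 (0x0840)
    return 0;
}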

Try this:

glTexImage2D( GL_TEXTURE_2D, 0, GL_RGB5_A1, width, height, 0, GL_RGBA, GL_UNSIGNED_SHORT_5_5_5_1, data );

Thanks, I never expected a reply so soon :D

Sadly, that didn't work. At least it does something: it loads the data fine, and I can even make out the texture... under all that garbled junk. Here's the result:

http://img66.imageshack.us/img66/993/c3ditscreen1bp9.png

This is what the texture is actually meant to look like (this is the 24-bit texture, btw):

http://img177.imageshack.us/img177/5576/c3ditscreen0mx4.png

Any idea of a way to fix this? I get the feeling we're close now. We're getting somewhere at last! :) Thanks so far.

EDIT:
Sheesh, I never realized that this forum doesn't support BBCode >.> ahh well.

EDIT2: Never mind :D HTML code works instead.

[Edited by - RexHunter99 on January 16, 2009 9:28:17 PM]

Are you using C/C++? If you are, get DevIL. If you're using GCC, use reimp to convert the COFF libraries rather than compiling it yourself. Below is a bit of the code I use. Also, in my experience, most 16-bit images are encoded as 5-6-5 or 5-5-5-1, the 1 being a useless bit.

#include <windows.h>
#include <IL/il.h>
#include <GL/gl.h>
#include <GL/glu.h>

bool GLTexture::LoadFromImage(LPSTR strFilename)
{
    ILuint nImage;
    char strBuffer[1024];
    bool bEnabled; // Used to restore the state of GL_TEXTURE_2D at the end of this function.

    ilInit();
    ilGenImages(1, &nImage);
    ilBindImage(nImage);
    if(!ilLoadImage(strFilename))
    {
        sprintf(strBuffer, "SOURCE: GLTexture::LoadFromImage()\nERROR: Could not load file %s. File likely does not exist.", strFilename);
        MessageBox(0, strBuffer, "Error! Application must terminate.", MB_ICONERROR);
        ilBindImage(0);
        ilDeleteImages(1, &nImage);
        return false;
    }
    ilConvertImage(IL_RGB, IL_UNSIGNED_BYTE);

    if(!(bEnabled = glIsEnabled(GL_TEXTURE_2D))) glEnable(GL_TEXTURE_2D);
    glGenTextures(1, &m_nTexture);
    glBindTexture(GL_TEXTURE_2D, m_nTexture);
    gluBuild2DMipmaps(GL_TEXTURE_2D, ilGetInteger(IL_IMAGE_BYTES_PER_PIXEL), ilGetInteger(IL_IMAGE_WIDTH),
                      ilGetInteger(IL_IMAGE_HEIGHT), ilGetInteger(IL_IMAGE_FORMAT), GL_UNSIGNED_BYTE, ilGetData());
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glBindTexture(GL_TEXTURE_2D, 0);
    if(!bEnabled) glDisable(GL_TEXTURE_2D); // If GL_TEXTURE_2D was originally disabled, disable it again.

    // Delete the DevIL image
    ilBindImage(0);
    ilDeleteImages(1, &nImage);

    return true;
}



I'd rather not compile this with a library that I'll only use for a few functions involving the textures... Not to mention DevIL doesn't work on my PC at all.

Looks like the R and B channels need to be swapped. There doesn't seem to be a GL_BGR5_1 format, so you might want to try plain old GL_BGR. If that doesn't work, you'll just have to do some bitshifts.

Edit: You may also have to change GL_RGBA to GL_BGRA.

So should I try something like this?

glTexImage2D( GL_TEXTURE_2D, 0, GL_RGB5_A1, width, height, 0, GL_BGRA, GL_UNSIGNED_SHORT_5_5_5_1, data );

I was just checking the game these models go with (a few other people and I have its source code), and the data is loaded into a structure containing the following:
WORD Texture[256*256];

(The game actually allows for non-uniform textures as long as they are 256 pixels wide, so the actual array size can be larger than that, but that's basically what it is 90% of the time.)

But since the game doesn't use D3D or Glide like most normal games do, I can't exactly work out how it loads the textures... >.>

If I were to manually bitshift them into RGB (or whatever I need), how would I do that? I've tried bitshifting a bit today... but no luck.

EDIT:
I just checked some documentation... and thought that maybe I could try this:
GL_UNSIGNED_SHORT_5_5_5_1_REV
But doesn't that turn it into the reverse? e.g. RGBA becomes ABGR.
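(For what it's worth, the packed type in GL is actually spelled GL_UNSIGNED_SHORT_1_5_5_5_REV, with the 1 moved to the front. Combined with the GL_BGRA format it describes exactly an A1R5G5B5 word: B in bits 0-4, G in bits 5-9, R in bits 10-14, A in bit 15, which looks like the layout described earlier. A minimal sketch, assuming the raw WORD array really is laid out that way; it needs OpenGL 1.2+ and is untested against this particular game's data:)

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// 'data' points at the WORD Texture[] array; no manual conversion needed
// if the layout assumption above holds.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB5_A1, width, height, 0,
             GL_BGRA, GL_UNSIGNED_SHORT_1_5_5_5_REV, data);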

OMG I think I just figured out how to convert a WORD into 3 RGB values!
I've been toying around with bits in GameMaker 6.1... (yeah, don't even start about how lame GM is) and I may have gotten it! So far all my testing has come out perfectly.

Here's the little bit of code I use to get RGB out of one of those 16-bit WORDs:
B=((WORD>>0) & 31)*8
G=((WORD>>5) & 31)*8
R=((WORD>>10) & 31)*8

I used that on a WORD value of 2112 from the 16-bit texture. Then I checked the 24-bit texture and found that the value was indeed R=16 G=16 B=0.
Then I ran that WORD into the code above:
B=((2112>>0) & 31)*8
G=((2112>>5) & 31)*8
R=((2112>>10) & 31)*8

And it returned 16,16,0 (RGB respectively)
I managed to basically reverse-engineer that from a jumble of code in the game's source which made almost no logical sense. And I think the original bit-shifted right by 8 instead of multiplying... but this does exactly what I wanted!

Theoretically, you could use this:
A=((2112>>15) & 31)*8
to get the alpha... or maybe not. I don't know, to be honest; alpha only takes up 1 bit of the 16, and I won't need it anyway.
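Just to put that formula into context in C++, here's how the manual conversion might look (a sketch only; UploadA1R5G5B5AsRGB is a made-up helper name, it assumes a texture object is already bound, and the 1-5-5-5 layout is the one worked out above):

#include <vector>
#include <GL/gl.h>

// Expand 16-bit A1R5G5B5 texels into a 24-bit RGB buffer and upload it
// to the currently bound 2D texture.
void UploadA1R5G5B5AsRGB(const unsigned short* texels, int width, int height)
{
    std::vector<unsigned char> rgb(width * height * 3);
    for (int i = 0; i < width * height; ++i)
    {
        unsigned short t = texels[i];
        rgb[i * 3 + 0] = (unsigned char)(((t >> 10) & 31) * 8);  // R, bits 10-14
        rgb[i * 3 + 1] = (unsigned char)(((t >> 5)  & 31) * 8);  // G, bits 5-9
        rgb[i * 3 + 2] = (unsigned char)(( t        & 31) * 8);  // B, bits 0-4
    }
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, rgb.data());
}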

If I encounter any more problems, or if my method fails me, I'll come back. But for now this seems to be a case solved :D

EDIT:
I didn't double post; kittycat posted before me with some help but then deleted their post (I think because I replied before they edited in some really helpful info).
Thanks, kittycat, for contributing to this :)

[Edited by - RexHunter99 on January 16, 2009 11:11:35 PM]

Okay, so far I've had no problems. But I suck with bits... can someone help me reverse the code I posted above, so it converts 3 RGB values (with alpha) back into a WORD?
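For reference, a minimal sketch of the inverse, assuming the same 1-5-5-5 layout worked out above (PackA1R5G5B5 is a made-up name, not from the game): each 8-bit channel is divided back down to 5 bits and shifted into place, with the alpha flag in the top bit.

// Pack 8-bit R, G, B values (0-255) and a 1-bit alpha flag into an A1R5G5B5 WORD.
unsigned short PackA1R5G5B5(unsigned char r, unsigned char g, unsigned char b, bool alpha)
{
    return (unsigned short)(((alpha ? 1 : 0) << 15)   // A -> bit 15
                          | ((r / 8) << 10)           // R -> bits 10-14
                          | ((g / 8) << 5)            // G -> bits 5-9
                          |  (b / 8));                // B -> bits 0-4
}

// Example: PackA1R5G5B5(16, 16, 0, false) == 2112, matching the texel above.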


