
Gandalf

OpenGL: Loading a bitmap in OpenGL


I want to load a bitmap image from a resource file in Visual Studio. This is the Win32 function to do it:

HBITMAP LoadBitmap(HINSTANCE hInstance, LPCTSTR lpBitmapName);

But when I have this bitmap handle, how can I use it in OpenGL with this function?

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 256, 256, 0, GL_RGB, GL_UNSIGNED_BYTE, bmp[0]->data);

(The problem is I don't want to load the image from a file.)

Edited by - Gandalf on 6/18/00 12:44:27 PM

Here is a little code:

HBITMAP bitmap = LoadBitmap(GetModuleHandle(NULL), MAKEINTRESOURCE(IDB_BITMAP3));

unsigned char data[256*256*3];              // room for 256x256 pixels, 3 bytes each
GetBitmapBits(bitmap, sizeof(data), data);  // copy the raw pixel bits into the buffer

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 256, 256, 0, GL_RGB, GL_UNSIGNED_BYTE, data);

I don't get an error, but the bitmap looks very ugly!

Gandalf the White


Edited by - Gandalf on June 19, 2000 4:23:18 AM

How ugly? Is it just that the colors are all screwed up, or can you not recognise your original image at all?

- make sure your image is 256*256, 24 bits per pixel
- if I remember correctly, I once had to reverse the component order (the bitmap is BGR and OpenGL expects RGB)
- and yes, data_size = bmp_width * bmp_height * bytes_per_pixel

Maybe you should use LoadImage & GetDIBits instead (see the sketch after this post).

PS: in your code, I assume you allocate enough space for the data!
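
Not part of the original reply — here is a minimal sketch of that LoadImage & GetDIBits route, reusing the IDB_BITMAP3 resource ID and the 256x256, 24-bit assumptions from the posts above:

#include <windows.h>
// assumes IDB_BITMAP3 is defined in your resource.h

// Sketch: load the resource as a DIB section and ask GDI for its pixels
// as 24-bit data. Returns false if the resource can't be loaded.
bool LoadResourceBits(unsigned char *data /* 256*256*3 bytes */)
{
    HBITMAP bitmap = (HBITMAP)LoadImage(GetModuleHandle(NULL),
                                        MAKEINTRESOURCE(IDB_BITMAP3),
                                        IMAGE_BITMAP, 0, 0, LR_CREATEDIBSECTION);
    if (!bitmap)
        return false;

    BITMAPINFO bmi = {0};
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = 256;
    bmi.bmiHeader.biHeight      = 256;   // positive height = bottom-up rows
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 24;    // request 24-bit pixels
    bmi.bmiHeader.biCompression = BI_RGB;

    HDC hdc = GetDC(NULL);
    int lines = GetDIBits(hdc, bitmap, 0, 256, data, &bmi, DIB_RGB_COLORS);
    ReleaseDC(NULL, hdc);
    DeleteObject(bitmap);
    return lines == 256;
}

At a width of 256 each 24-bit scanline is 768 bytes, already a multiple of 4, so there is no row padding to strip. Note that GetDIBits still returns the channels in BGR order, which the rest of this thread deals with.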

Gandalf, are you using 8-bit or 24-bit BMPs?
If it's 8-bit, it looks ugly because 8-bit images don't store their colors the way 24-bit images do. In 8-bit there's a palette of colors, which is simply an array of RGB values, and each pixel of the image is stored as one byte that is an index into the palette.
You have to do a conversion. Here's some pseudo-code:

- Create an array of 256 entries of the PALETTEENTRY structure (Pal)
- Allocate Width*Height bytes of memory (In)
- Allocate Width*Height*3 bytes of memory (Out)
- Use LoadBitmap to get an HBITMAP
- Use GetDIBColorTable to fill Pal
- Use GetBitmapBits to fill In

now, for (i=0; i < width*height; i++) do
Out[i*3+0] = Pal[In[i]].peRed;
Out[i*3+1] = Pal[In[i]].peGreen;
Out[i*3+2] = Pal[In[i]].peBlue;

and use Out in the glTexImage2D:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, Out);

and that's it! I don't remember well, but I recall there's an extension for using 8-bit textures... I just don't know how to use it.
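
(Not Nicodemus's original code — just the pseudo-code above turned into a small C function, assuming Pal and In have already been filled as described in the list:)

#include <windows.h>  // for PALETTEENTRY

// Expands an 8-bit indexed image (In) to 24-bit RGB (Out) using the
// 256-entry palette (Pal). Buffers are allocated/filled per the steps above.
void ExpandPalette(const PALETTEENTRY *Pal, const unsigned char *In,
                   unsigned char *Out, int width, int height)
{
    for (int i = 0; i < width * height; i++)
    {
        Out[i*3 + 0] = Pal[In[i]].peRed;    // palette index -> red
        Out[i*3 + 1] = Pal[In[i]].peGreen;  // palette index -> green
        Out[i*3 + 2] = Pal[In[i]].peBlue;   // palette index -> blue
    }
}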

For 24-bit images, the only problem is that bitmaps are stored as BGR and not RGB (don't ask me why...), so you have to exchange the bytes, as Jehan said.

Hope that helps,


Nicodemus.

----
"When everything goes well, something will go wrong." - Murphy

I'm using 24-bit bitmaps. The texture isn't just ugly; something must be wrong with the color bytes, though I can still recognise my original image. Switch from RGB to BGR? Something like this, or what?

unsigned char temp;
for(int i=0; i<(256*256*3); i++)
    if(i%2==0 && i>=2)
    {
        temp = data[i];
        data[i] = data[i-2];
        data[i-2] = temp;
    }

My texture still looks bad.

Gandalf the White



Edited by - Gandalf on June 20, 2000 3:46:36 AM

Hi Gandalf,
There is an extension called GL_BGR_EXT / GL_BGRA_EXT which lets you upload the channels in reverse order. The extension has been supported by NVIDIA ICDs since the Riva TNT, maybe even earlier, but I don't know whether other implementations support it. You can also use RAW or TGA files (convert them with Photoshop, for example); the first of the two is very easy to use and always in the right order.
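
(An illustration, not part of the original post: with the extension the upload call simply declares the source data as BGR. GL_BGR_EXT comes from headers that ship the EXT_bgra extension.)

// Same internal format (GL_RGB), but the driver swaps the channels for you:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 256, 256, 0,
             GL_BGR_EXT, GL_UNSIGNED_BYTE, data);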

Your swap logic looks off: the loop swaps every other byte with the one two places before it, but for BGR you have to swap the first and third byte of every 3-byte pixel.

Shouldn't it look this way?

unsigned char temp;
int a=3;
for(int i=0; i<(256*256*3); i++)
{
    if(a==3)
    {
        temp = data[i];
        data[i] = data[i+2];
        data[i+2] = temp;
        a=0;
    }
    a++;
}
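
(An aside, not from the original post: the same swap can be written with a stride-3 loop, one pixel per iteration.)

for(int i = 0; i < 256*256*3; i += 3)
{
    unsigned char temp = data[i];  // save the first byte of the pixel
    data[i]   = data[i+2];         // first byte <- third byte
    data[i+2] = temp;              // third byte <- saved first byte
}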
Hope that helps!

How do you make these source code frames?

Edited by - TheMummy on June 20, 2000 5:18:33 AM


Edited by - TheMummy on June 20, 2000 5:24:32 AM

Had to edit it because the forum always creates italics out of [ i ].

Edited by - TheMummy on June 20, 2000 5:26:12 AM
