Archived

This topic is now archived and is closed to further replies.

spiffgq

OpenGL Create a texture from a [OpenGL-type] bitmap

Recommended Posts

I'm trying to create a texture object in OpenGL from a bitmap. (And by bitmap, I mean a true one-bit-per-pixel OpenGL-type bitmap, not that Microsoft abomination file type.) My first thought was to just use glTexImage2D as follows:
  
const unsigned char BITMAP_DATA [] = { /* pixels are here */ };
const int BITMAP_PITCH = 1;  // Number of bytes per row
const int BITMAP_ROWS = 12;  // Number of rows (the height)
const int BITMAP_COLS =  8;  // Number of columns (the width in pixels)


// Some code goes here blah blah blah


unsigned int texObj;
glGenTextures (1, &texObj);
	
glBindTexture (GL_TEXTURE_2D, texObj);
glTexImage2D (
   GL_TEXTURE_2D, 
   0, 
   GL_INTENSITY, 
   BITMAP_COLS, 
   BITMAP_ROWS, 
   0, 
   GL_LUMINANCE, 
   GL_BITMAP, 
   BITMAP_DATA
);

// and so on

  
Unfortunately, this doesn't work. The resulting "texture" is just a solid color and does not contain the bitmap's pixels. Here is the code that draws the textured quad:
  
glBindTexture (GL_TEXTURE_2D, texObj);
glBegin (GL_QUADS);
glColor3f (1.0, 0.0, 0.0);
glTexCoord2f (0.0, 1.0); glVertex3f (  0.0f,   0.0f, 0.0f);
glTexCoord2f (1.0, 1.0); glVertex3f (300.0f,   0.0f, 0.0f);
glTexCoord2f (1.0, 0.0); glVertex3f (300.0f, 300.0f, 0.0f);
glTexCoord2f (0.0, 0.0); glVertex3f (  0.0f, 300.0f, 0.0f);
glEnd ();
  
The above code creates a solid red rectangle instead of showing the pixels of the bitmap. Am I on the right track or do I need to go another route completely? If I am on the right track, what am I doing wrong? Thanks for your help.

quote:
Original post by Subotron
I believe the LUMINANCE is for 8 bits... (black & white)
So I guess that's the problem indeed


No, I changed the glTexImage2D line from

glTexImage2D (GL_TEXTURE_2D, 0, GL_INTENSITY, BITMAP_COLS, BITMAP_ROWS, 0, GL_LUMINANCE, GL_BITMAP, BITMAP_DATA);


to


glTexImage2D (GL_TEXTURE_2D, 0, GL_INTENSITY, BITMAP_COLS, BITMAP_ROWS, 0, GL_BITMAP, GL_UNSIGNED_BYTE, BITMAP_DATA);


but I end up with the same blank quad.

Here is a complete listing of the code. Perhaps the problem lies somewhere else (for example, maybe I'm forgetting to initialize something).


  
#include <stdio.h>

#ifdef WIN32
# include <windows.h>
#endif

#include <GL/gl.h>

#include <SDL/SDL.h>



#if 0
const unsigned char BITMAP_DATA [] = {
0x41, 0x00,
0x41, 0x00,
0x42, 0x00,
0x22, 0x00,
0x22, 0x00,
0x14, 0x00,
0x14, 0x00,
0x1C, 0x00,
0x0C, 0x00,
0x08, 0x00,
0x08, 0x00,
0x30, 0x00,
};
#endif

//#if 0

const unsigned char BITMAP_DATA [] = {
0x41,
0x41,
0x42,
0x22,
0x22,
0x14,
0x14,
0x1C,
0x0C,
0x08,
0x08,
0x30
};
//#endif


const int BITMAP_PITCH = 1;
const int BITMAP_ROWS = 12;
const int BITMAP_COLS = 8;
const int NUM_PIXELS = BITMAP_PITCH * 8 * BITMAP_ROWS;

const int SCREEN_WIDTH = 900;
const int SCREEN_HEIGHT = 750;

int main () {

    // Init SDL

    SDL_Init (SDL_INIT_VIDEO);
    SDL_Surface *screen = SDL_SetVideoMode (SCREEN_WIDTH, SCREEN_HEIGHT, 32, SDL_OPENGL | SDL_HWSURFACE | SDL_DOUBLEBUF);
    if (screen == NULL){ return 0; }
    SDL_WM_SetCaption ("Test Bitmap OpenGL Texture Program", NULL);

    // Init OpenGL

    glViewport (0, 0, SCREEN_WIDTH, SCREEN_HEIGHT);
    glMatrixMode (GL_PROJECTION);
    glLoadIdentity ();
    glOrtho (0, SCREEN_WIDTH, 0, SCREEN_HEIGHT, -1, 1);
    glMatrixMode (GL_MODELVIEW);

    glClearColor (0.0, 0.0, 0.0, 0.0);
    glClearDepth (1.0);
    glEnable (GL_DEPTH_TEST);
    glDepthFunc (GL_LEQUAL);
    glPolygonMode (GL_FRONT, GL_FILL);
    glPolygonMode (GL_BACK, GL_LINE);

    glEnable (GL_TEXTURE_2D);

    // Create texture

    unsigned int texObj;
    glGenTextures (1, &texObj);

    if (texObj == 0){
        fprintf (stderr, "Unable to create an OpenGL texture object\n");
        SDL_Quit();
        return 1;
    }

    glBindTexture (GL_TEXTURE_2D, texObj);
    glTexImage2D (
        /*target*/         GL_TEXTURE_2D,
        /*level*/          0,
        /*internalFormat*/ GL_INTENSITY,
        /*width*/          BITMAP_COLS,
        /*height*/         BITMAP_ROWS,
        /*border*/         0,
        /*format*/         GL_BITMAP,
        /*type*/           GL_UNSIGNED_BYTE,
        /*texels*/         BITMAP_DATA
    );

    glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // Show a quad with the texture

    glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity ();
    glTranslatef (200.0f, 150.0f, 0.0f);
    //glBindTexture (GL_TEXTURE_2D, texObj);

    glBegin (GL_QUADS);
    glColor3f (1.0, 0.0, 0.0);
    glTexCoord2f (0.0, 1.0); glVertex3f (  0.0f,   0.0f, 0.0f);
    glTexCoord2f (1.0, 1.0); glVertex3f (100.0f,   0.0f, 0.0f);
    glTexCoord2f (1.0, 0.0); glVertex3f (100.0f, 150.0f, 0.0f);
    glTexCoord2f (0.0, 0.0); glVertex3f (  0.0f, 150.0f, 0.0f);
    glEnd ();

    SDL_GL_SwapBuffers ();

    SDL_Event event;
    bool done = false;

    while (!done){

        SDL_PollEvent (&event);

        if (event.type == SDL_QUIT){
            done = true;
        }else if (event.type == SDL_KEYUP){
            if (event.key.keysym.sym == SDLK_ESCAPE)
                done = true;
        }
    }

    SDL_Quit ();
    glDeleteTextures (1, &texObj);

    return 0;
}




It is a very simple program that uses SDL for window management. It starts, creates the texture, displays it, and waits for the user to exit. It should compile on Windows and Linux (although it has only been tested on Linux).

Here is a Makefile that will compile the above program (assuming you have the SDL and OpenGL libraries installed in the usual locations):

  
CC = g++
LIBS = -lSDL -L/usr/X11R6/lib -lGL

opengl_test_program: main.o
	$(CC) main.o $(LIBS) -o opengl_test_program


Thank you for your help.

Shouldn't you bind your texture before your drawing code?
Before the quad drawing you put //glBindTexture (GL_TEXTURE_2D, texObj);
Maybe uncommenting it would help?
Not a pro, just a thought :D

You're using one byte per row in the image, which requires the unpack alignment to be set to 1. The default is 4, which means each new row must start on a 4-byte-aligned offset from the start address. I don't know if this is the problem you are currently experiencing, but with a one-byte-wide image it will be a problem sooner or later. Put a glPixelStorei(GL_UNPACK_ALIGNMENT, 1) before uploading the texture.

Either that, or pad the image with enough bytes so that each new row starts on a 4-byte-aligned offset from the start address.
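
For example, a minimal sketch of where the call goes (texObj is the texture object from your listing; the rest of the upload stays the same):

glPixelStorei (GL_UNPACK_ALIGNMENT, 1); // rows of source data may start at any byte offset
glBindTexture (GL_TEXTURE_2D, texObj);
// ... glTexImage2D call unchanged ...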

quote:
Original post by michel
Shouldn't you bind your texture before your drawing code?
Before the quad drawing you put //glBindTexture (GL_TEXTURE_2D, texObj);
Maybe uncommenting it would help?
Not a pro, just a thought :D


I've tried it both ways with similar results.

quote:
Original post by Brother Bob
You're using one byte per row in the image, which requires the unpack alignment to be set to 1. The default is 4, which means each new row must start on a 4-byte-aligned offset from the start address. I don't know if this is the problem you are currently experiencing, but with a one-byte-wide image it will be a problem sooner or later. Put a glPixelStorei(GL_UNPACK_ALIGNMENT, 1) before uploading the texture.

Either that, or pad the image with enough bytes so that each new row starts on a 4-byte-aligned offset from the start address.



I'll try that when I get home. To pad the image, should I change the data type to unsigned integer or add three more bytes per row as unsigned chars?

quote:
Original post by CnCMasta
Kill the GL_LUMINANCE and put GL_BITMAP in its place. Then put GL_UNSIGNED_BYTE where GL_BITMAP was!


One thing I don't understand: according to the OpenGL red book, GL_BITMAP and GL_UNSIGNED_BYTE are type constants, whereas GL_LUMINANCE is a format constant. That suggests I cannot replace GL_LUMINANCE with GL_BITMAP, because they are different categories of constants. Where is my reasoning wrong?
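
For reference, here is how I read the parameter list (my own annotation, not a quote from the book):

// glTexImage2D (target, level, internalFormat, width, height, border, format, type, data);
// format constants: GL_LUMINANCE, GL_COLOR_INDEX, GL_RGB, GL_RGBA, ...
// type constants:   GL_UNSIGNED_BYTE, GL_BITMAP, GL_FLOAT, ...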

quote:
Original post by Brother Bob
Put a glPixelStorei(GL_UNPACK_ALIGNMENT, 1) before uploading the texture.


I'm not sure what you mean by "before uploading the texture". I'm assuming that means before the glTexImage2D call. Either way, no such luck.

quote:
Original post by Brother Bob
Add three bytes after the one you already have. If you use unsigned ints instead, you have to make sure the information ends up in the correct byte (little vs big endian issues), but it is possible.


I changed the constants to


const unsigned char BITMAP_DATA [] = {
0x41, 0x00, 0x00, 0x00,
0x41, 0x00, 0x00, 0x00,
0x42, 0x00, 0x00, 0x00,
0x22, 0x00, 0x00, 0x00,
0x22, 0x00, 0x00, 0x00,
0x14, 0x00, 0x00, 0x00,
0x14, 0x00, 0x00, 0x00,
0x1C, 0x00, 0x00, 0x00,
0x0C, 0x00, 0x00, 0x00,
0x08, 0x00, 0x00, 0x00,
0x08, 0x00, 0x00, 0x00,
0x30, 0x00, 0x00, 0x00
};
const int BITMAP_PITCH = 4;
const int BITMAP_ROWS = 12;
const int BITMAP_COLS = 8 * BITMAP_PITCH;
const int NUM_PIXELS = BITMAP_COLS * BITMAP_ROWS;


but that didn't work, either. Thanks for the suggestions. Any other ideas?

OK, I did make a rather bone-headed mistake. The texture height (12) is not a power of two, as OpenGL requires, so I added some rows to make it 16 rows high. The pitch is one, which is a power of two, and the number of columns is 8, which is also a power of two. So now everything should be covered. Unfortunately, this doesn't fix the problem.

quote:

const int BITMAP_ROWS = 12;
const int BITMAP_COLS = 8 * BITMAP_PITCH;


If these are the width and height of the texture, the width is wrong. Your image is still 8 bits wide, not 8*4. The extra padding has nothing to do with the texture data, but with pointer alignment after each row is read.

And after looking in the spec (you should look there too concerning what parameters to pass), you should pass GL_BITMAP as the second-to-last parameter, and if you do, you must pass GL_COLOR_INDEX as the third-to-last parameter. Like this:

glTexImage2D (
...
GL_COLOR_INDEX,
GL_BITMAP,
BITMAP_DATA
);

This is the only valid combination including GL_BITMAP.

After reading through the specification quickly, I get the impression you can't use bitmaps as "normal" textures. I strongly suggest you have a look at the specification; you can find it here.
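
If all you need is to get the 1-bit-per-pixel data onto a quad, the usual workaround is to expand it to one byte per pixel on the CPU and upload that as an ordinary GL_LUMINANCE texture. A rough, untested sketch, assuming BITMAP_DATA, BITMAP_PITCH, BITMAP_ROWS and BITMAP_COLS from your listing and the default bit order where the most significant bit is the leftmost pixel (the power-of-two size requirement still applies to the width and height you pass in):

// Expand the 1-bit-per-pixel bitmap into one byte per pixel so it can be
// uploaded with GL_LUMINANCE / GL_UNSIGNED_BYTE like any other texture.
unsigned char expanded [BITMAP_ROWS * BITMAP_COLS];
for (int row = 0; row < BITMAP_ROWS; ++row){
    for (int col = 0; col < BITMAP_COLS; ++col){
        unsigned char srcByte = BITMAP_DATA [row * BITMAP_PITCH + col / 8];
        bool set = (srcByte >> (7 - (col % 8))) & 1;  // MSB = leftmost pixel
        expanded [row * BITMAP_COLS + col] = set ? 255 : 0;
    }
}

glPixelStorei (GL_UNPACK_ALIGNMENT, 1);
glBindTexture (GL_TEXTURE_2D, texObj);
glTexImage2D (GL_TEXTURE_2D, 0, GL_LUMINANCE, BITMAP_COLS, BITMAP_ROWS, 0,
              GL_LUMINANCE, GL_UNSIGNED_BYTE, expanded);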

