saman_artorious

render image raw data to texture


I receive the raw data of an image via a socket. What I want to do is create a texture from this raw data and render the texture in a widget. I do not know which functions to use; could anyone give me some help here, please?


Raw data is just pixel data without width and height, so unless you send one line at a time you need to know or transmit those values.

Anyway, creating a texture looks like this:

glEnable(GL_TEXTURE_2D);
pData = new unsigned int [256*256*3];

// Generate an ID for texture binding
glGenTextures(1, &GLCAM_TEX);
glBindTexture(GL_TEXTURE_2D, GLCAM_TEX);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

glTexImage2D(GL_TEXTURE_2D, 0, 3, 256, 256, 0, GL_RGB, GL_UNSIGNED_BYTE, pData);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glDisable(GL_TEXTURE_2D);

where pData is your whole received buffer; of course you need to change 256 to your actual sizes, and possibly change GL_RGB to something else to match your pixel format.

Edited by ___



I do not see anywhere you saved the result, so I do not know how to use the resulting texture!

Besides, GLCAM_TEX is not recognized by the compiler; do you know why?

 error: 'GLCAM_TEX' was not declared in this scope
Edited by saman_artorious

GLCAM_TEX is not recognized by the compiler

A line like

GLuint GLCAM_TEX;

is missing at the beginning of the code snippet.

 

 

I do not see anywhere you saved the result, so I do not know how to use the resulting texture!

The texture is "saved" when invoking glTexImage2D, so to speak. In fact, OpenGL is instructed to take the pixel data and make a texture from it. The data is expected to be in the pData array, copied there from the data stream received from the socket.

 

 

Now I'm coming to a kind of sermon... ;)

 

I'm under the impression that you more or less literally take what you get. With all due respect, that approach is prone to fail miserably. Please oblige yourself to try to understand how the provided code snippet works. There are dozens of tutorials on how to texture with OpenGL on the internet. There are the manual pages for the OpenGL API, too. And here (and elsewhere) is a forum with people willing to help.

 

E.g., glTexImage2D deals with the extent, data type, and component structure of the raw data as well as of the texture data. You can't get this right if you don't know how glTexImage2D works. Look at the GL_UNSIGNED_BYTE specifier in the invocation above, and then look at the type of pData, which is an array of unsigned int. Well, there is no single definition of how wide "unsigned int" actually is, but be sure that it does not match GL_UNSIGNED_BYTE. This is mentioned here not to blame the poster (he's done a good job of helping you), but to show that an understanding of things is necessary even when adopting foreign code snippets.

 

Another point is that you're speaking of "a widget to render the texture to". Well, OpenGL doesn't define anything like a widget, and you gave no information about what API you use to define those widgets. So we can't know what the solution could be. Maybe you mentioned it in another of your posts (I recently saw a second post from you), but you should write each thread so that it is self-contained (or at least references another post that contains enough information).

Edited by haegarr


 


 

Thank you, I will review the function usage. You know, the code we have above is what we do on our side. On the other side, the machine sends the raw data. Could you please guide me on what functions I need to know to do that part as well?

Edited by saman_artorious



On the other side, the machine sends the raw data. Could you please guide me on what functions I need to know to do that part as well?

You gave too little information for us to say something detailed. Do you have any influence on what the sender does? If so ...

 

In general, the receiver must be synchronized with the data in the stream, and must be told: how big the image is (i.e. its real extent in pixels); the format of the image, if this is not fixed (e.g. 3 channels RGB, in this order, one byte per channel, interleaved, in scan-line order); the encoding of the image data (in case it is not raw every time); the byte size of the image data; and perhaps where the image has to be placed.

 

For example, the sender transmits a 4CC like '>IMG' to mark the beginning of an image in the stream. It then sends an image-header-like structure with a format specification like the one mentioned above, followed by the image data encoding specification, followed by the image data itself, perhaps followed by another mark denoting the end of the image, e.g. a 4CC like '<IMG'.

 

When the receiver reads '>IMG' in the stream and its state is not already set to receive something special, it is set to "receiving an image, header awaited". Similarly, it reads all the following data and buffers it somewhere. The image data itself may or may not need to be decoded. If you use raw data or any other encoding that OpenGL already understands (e.g. OpenGL's compressed texture formats count among the understood encodings as well), then you can copy the image data "as is" into the pData array (referring to the pData of ____'s code snippet). Otherwise, e.g. if the encoding is JFIF or so, you need to decode it and copy the decoded result into the pData buffer. Notice that the size of the pData buffer depends on the extent of the image, its format, and its encoding, so handle this carefully or else your program will crash. Then calculate the set-up for glTexImage2D from the transmitted header data and the (perhaps re-coded) encoding of the image data in pData. After invoking glTexImage2D, clear the back buffer of the OpenGL widget's context, and render a full-sized quad with the texture as matte color onto it.

 

BTW: I don't know why you are rendering a 3D scene in OpenGL, transmitting a raw image of it to another machine, and displaying the scene as an image again in OpenGL. Even if transmitting the scene itself isn't an option, you may also consider transferring a compressed image, or perhaps only parts of the image, just to reduce traffic and lag.

 

However, if you don't have any influence on what the sender does, the above still holds in principle, but you have to tell us more details about what you actually receive.


 

BTW: I don't know why you are rendering a 3D scene in OpenGL, transmitting a raw image of it to another machine, and displaying the scene as an image again in OpenGL. Even if transmitting the scene itself isn't an option, you may also consider transferring a compressed image, or perhaps only parts of the image, just to reduce traffic and lag.
Really? I didn't know I could do that. What I originally asked was how to get the raw data, create a texture, and map it to the background. Can I directly load the raw data without using textures?



Can I directly load raw data without using textures?

OpenGL requires a texture, because the texture carries more information, like how to handle accesses outside the normal coordinate range, how to do sub-texel sampling, and so on. However, you can load texel data into the full texture (glTexImage2D) or into a rectangular part of it (glTexSubImage2D). You can load raw data or already compressed data (although the latter feature requires a specific minimum API version / extension).


 



 

Thanks for the hint. You know, I have done the same thing. However, what I want to avoid is high CPU consumption due to updating the screen 25 times per second. I receive 25 frames per second, load the raw data into a texture, and then update the screen. This method consumes a lot of CPU, around 80%.

A fellow, on the other hand, suggested I use the VLC plugin for displaying the frames. Do you have any information about using the VLC plugin? By the way, I think I cannot progress with OpenGL any more now.


I'm not sure what you are trying to do at all, and that makes suggesting a more suitable solution problematic...

 

It looks like you want to do a remote GUI, but for what purpose? Is a separate, dedicated solution like VNC (Virtual Network Computing, a remote desktop solution) suitable? Is the function of VNC suitable, but you need to integrate it into your own program? If, on the other hand, replicating an exact image of the GUI is not necessary (or the GUI needs to exist on only one machine at all), then an approach based on sending commands would be much more efficient.

 

You mentioned using VLC (VideoLAN). If I understand it right ... I suspect that performing video compression / decompression on-the-fly could lower the CPU load.


Important Information

By using GameDev.net, you agree to our community Guidelines, Terms of Use, and Privacy Policy.
