C++: Serialize OpenGL pixel data into a bitmap-formatted byte array

9 comments, last by maxfire 9 years, 5 months ago

Hi guys,

I'm trying to send a screenshot to a remote device over TCP. My networking is working fine; however, I am unable to serialize the OpenGL pixel data into a format that can be deserialized into a bitmap image on the client. I am using a bitmap to keep things simple, without compression for the moment.

My current attempt uses FreeImage, as it is cross-platform, but I wouldn't mind switching image frameworks.


		glReadPixels( 0, 0, g_iScreenWidth, g_iScreenHeight, GL_RGB, GL_UNSIGNED_BYTE, g_pixels );

		FIBITMAP* image = FreeImage_ConvertFromRawBits( g_pixels, g_iScreenWidth, g_iScreenHeight, 3 * g_iScreenWidth, 24, 0x00FF0000, 0x0000FF00, 0x000000FF, false );

		FIBITMAP *src = FreeImage_ConvertTo32Bits( image );
		FreeImage_Unload( image );
		// Allocate a raw buffer
		int width = FreeImage_GetWidth( src );
		int height = FreeImage_GetHeight( src );
		int scan_width = FreeImage_GetPitch( src );
		BYTE *bits = ( BYTE* ) malloc( height * scan_width );
		FreeImage_ConvertToRawBits( bits, src, scan_width, 24, 0x00FF0000, 0x0000FF00, 0x000000FF, false );
		FreeImage_Unload( src );

		g_pTCPServer->SendDataToClientsBytes( bits, height * scan_width );

However, this does not give me a byte format that can be deserialized into a bitmap image in a C# application.

Any advice on how I can get the correct format would be appreciated :).

Thanks

It looks to me like you aren't sending a BMP header with your bit data. You'll need to either use the bytes you send as raw bits on the C# side, or construct a correct header sequence in front of your image data.
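
Roughly, the header is just two small structs in front of the pixel bytes. A minimal, untested sketch (MakeBmpBytes is just a made-up helper name; it leans on windows.h for the packed BITMAPFILEHEADER/BITMAPINFOHEADER declarations, so on other platforms you'd declare equivalent structs with 2-byte packing, and it assumes 24-bit, bottom-up BGR rows already padded to a 4-byte stride):

#include <windows.h>   // BITMAPFILEHEADER, BITMAPINFOHEADER, BI_RGB
#include <cstring>
#include <vector>

std::vector<BYTE> MakeBmpBytes( const BYTE* pixels, int width, int height, int stride )
{
	BITMAPFILEHEADER fileHeader = { };
	BITMAPINFOHEADER infoHeader = { };
	const DWORD pixelBytes = static_cast<DWORD>( stride ) * height;

	fileHeader.bfType    = 0x4D42;                 // 'BM'
	fileHeader.bfOffBits = sizeof( fileHeader ) + sizeof( infoHeader );
	fileHeader.bfSize    = fileHeader.bfOffBits + pixelBytes;

	infoHeader.biSize        = sizeof( infoHeader );
	infoHeader.biWidth       = width;
	infoHeader.biHeight      = height;             // positive height = bottom-up rows
	infoHeader.biPlanes      = 1;
	infoHeader.biBitCount    = 24;
	infoHeader.biCompression = BI_RGB;
	infoHeader.biSizeImage   = pixelBytes;

	// Pack the two headers and the pixel data into one contiguous buffer.
	std::vector<BYTE> bmp( fileHeader.bfSize );
	std::memcpy( bmp.data(), &fileHeader, sizeof( fileHeader ) );
	std::memcpy( bmp.data() + sizeof( fileHeader ), &infoHeader, sizeof( infoHeader ) );
	std::memcpy( bmp.data() + fileHeader.bfOffBits, pixels, pixelBytes );
	return bmp;
}

Send that whole buffer and the other end can write it straight to disk as a .bmp, or parse it as one.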


What's the cleanest way to do what I'm trying to achieve? At the moment I'm thinking of making my own bitmap class and using Boost to serialize it, but that seems a little overkill.

Check this page first. You should be able to use this constructor of System.Drawing.Bitmap to initialize a bitmap object with your raw bit sequence.


Sorry for such a basic question (I very rarely use Visual C++), but how do I use the .NET classes if I have a C++ console app in Visual Studio?

The most common way to do that is C++/CLI: either change your console application itself to C++/CLI, or make a C++/CLI translation DLL that loads the .NET assemblies and exposes C bindings to your console application.

However, I think you misunderstood what Apoch was saying: he means that your C++ program sends the bit sequence to your C# program, which then uses the .NET Bitmap class to construct an image from that bit sequence directly, not that your C++ program should use .NET.
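
If you do end up going the C++/CLI route anyway, a rough, untested sketch of such a translation DLL might look like this (the exported function name and the output file name are just placeholders; it needs to be compiled with /clr and reference System.Drawing):

#using <System.Drawing.dll>

// Exported C binding: wrap a native 24-bit pixel buffer in a .NET Bitmap and save it.
// The stride must be a multiple of 4, and Format24bppRgb is laid out
// blue-green-red in memory, so the buffer is expected in BGR order.
extern "C" __declspec( dllexport )
void SaveRawPixelsAsBitmap( unsigned char* pixels, int width, int height, int stride )
{
	System::Drawing::Bitmap^ bmp = gcnew System::Drawing::Bitmap(
		width, height, stride,
		System::Drawing::Imaging::PixelFormat::Format24bppRgb,
		System::IntPtr( pixels ) );

	// glReadPixels hands back bottom-up rows, while this constructor assumes
	// top-down, so flip before saving.
	bmp->RotateFlip( System::Drawing::RotateFlipType::RotateNoneFlipY );
	bmp->Save( "capture.bmp" );
}

But as said above, the simpler path is to keep the C++ side native and let the C# receiver build the Bitmap.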

Ahh, I thought I would have to convert to C++/CLI. I decided to create file-header and info-header structures and write them to a buffer manually. Thanks for the additional knowledge, guys :)

Sorry for yet another post; after 6 hours I'm pulling my hair out. Everything seemed to work fine until I started looking at the generated bitmap file/buffer: the colours are different from what is actually rendered. I have searched high and low for bitmap header examples but can't find any that are hugely different from my implementation. Here is the code I use to save to a file, along with a screenshot.

[Screenshot: the rendered scene compared with the saved bitmap, showing the colour difference]


MyBitmap::MyBitmap( int iWidth, int iHeight, int iNoOfBits, BYTE* pPixelData, size_t sPixelDataSize  )
{
	_header.bfType = 0x4d42;
	_header.bfSize = sizeof( BITMAPFILEHEADER ) + sizeof( BITMAPINFOHEADER ) + sPixelDataSize;
	_header.bfReserved1 = 0;
	_header.bfReserved2 = 0;
	_header.bfOffBits = sizeof( BITMAPFILEHEADER ) + sizeof( BITMAPINFOHEADER );

	_info.biSize = sizeof( _info );
	_info.biWidth = iWidth;
	_info.biHeight = iHeight;
	_info.biPlanes = 1;
	_info.biBitCount = 24;//iNoOfBits;
	_info.biCompression = BI_RGB;
	_info.biSizeImage = iWidth * iHeight * 3;
	_info.biXPelsPerMeter = 0;
	_info.biYPelsPerMeter = 0;
	_info.biClrUsed = 0;
	_info.biClrImportant = 0;

	_pPixelData = pPixelData;
	_sPixelDataSize = sPixelDataSize;
}

MyBitmap::~MyBitmap()
{
	if ( _pPixelData ) delete[] _pPixelData;	// assuming the buffer was allocated with new[]
}

void MyBitmap::WriteToFile( std::string sFileName )
{
	std::ofstream file;
	file.open( sFileName.c_str(), std::ios::binary );

	//write headers
	file.write( ( char* ) ( &_header ), sizeof( _header ) );
	file.write( ( char* ) ( &_info ), sizeof( _info ) );

	//write pixel data
	file.write( reinterpret_cast<char*>( &_pPixelData[0] ), _sPixelDataSize );

	file.close();
}

And here is how it's used:


		glReadPixels( 0, 0, g_iScreenWidth, g_iScreenHeight, GL_RGB, GL_UNSIGNED_BYTE, g_pixels );

		MyBitmap bitmap( g_iScreenWidth, g_iScreenHeight, 24, g_pixels, 3 * g_iScreenWidth * g_iScreenHeight );

		bitmap.WriteToFile( "test.bmp" );
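
One caveat with the size calculation above: 3 * g_iScreenWidth * g_iScreenHeight only matches the file layout when 3 * g_iScreenWidth is already a multiple of 4. BMP rows are padded to 4 bytes, and so is the glReadPixels output with the default GL_PACK_ALIGNMENT of 4, so for other widths a padded stride would be needed, along these lines (iStride and sPixelDataSize are just illustrative names):

		// Each 24-bit BMP scanline is rounded up to a multiple of 4 bytes;
		// glReadPixels pads the same way while GL_PACK_ALIGNMENT stays at its
		// default of 4, so use this stride for both the buffer and biSizeImage.
		int iStride = ( g_iScreenWidth * 3 + 3 ) & ~3;
		size_t sPixelDataSize = static_cast<size_t>( iStride ) * g_iScreenHeight;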

The only thing I can think of is that I need to specify an RGB colour mask, but that would require switching to the BITMAPV4HEADER structure, which I have yet to see anyone recommend.

Am I missing something painfully obvious?

Looks like you read the pixels as RGB, although .bmp stores them as BGR, which you can get from OpenGL too.
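
A one-line change along these lines should sort it (untested; GL_BGR needs OpenGL 1.2 or newer):

		// Read the framebuffer straight into BGR order so the bytes already
		// match the BMP pixel layout; no per-pixel swizzling needed afterwards.
		glReadPixels( 0, 0, g_iScreenWidth, g_iScreenHeight, GL_BGR, GL_UNSIGNED_BYTE, g_pixels );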

*face smash* just kill me now xD. Thank you wintertime

