maxfire

Members
  • Content count: 59
  • Joined
  • Last visited

Community Reputation: 209 Neutral

About maxfire
  • Rank: Member
  1. OpenGL glm::lookAt with DirectX

    Thanks for the answers guys. My final solution was to use the glm right-handed view matrix, then implement a projection matrix myself that serves up different z ranges based on an OpenGL/DirectX flag. Hodgman, I will give your scale-and-translate approach a go; it seems like a better solution than building two different matrices :)
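
    A minimal sketch of the scale-and-translate idea, assuming the goal is to remap OpenGL's [-1, 1] clip-space depth to Direct3D's [0, 1] range; the function name and the forDirectX flag are illustrative, not from the thread:

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        // Build one right-handed GL-style projection, then optionally remap its
        // depth range for Direct3D (z' = 0.5 * z + 0.5 * w).
        glm::mat4 MakeProjection( float fovY, float aspect, float zNear, float zFar, bool forDirectX )
        {
            glm::mat4 proj = glm::perspective( fovY, aspect, zNear, zFar ); // depth maps to [-1, 1]
            if ( !forDirectX )
                return proj;

            // Applied after the projection: scale z by 0.5 and translate by 0.5
            // so NDC depth ends up in [0, 1] instead of [-1, 1].
            glm::mat4 fixup = glm::translate( glm::mat4( 1.0f ), glm::vec3( 0.0f, 0.0f, 0.5f ) )
                            * glm::scale( glm::mat4( 1.0f ), glm::vec3( 1.0f, 1.0f, 0.5f ) );
            return fixup * proj;
        }

    The fixup multiplies on the left so it is applied after the projection; x and y are untouched and only the depth range changes.
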
  2. OpenGL glm::lookAt with DirectX

    Buckeye, you utter legend. After you said that, I looked into glm: its defaults are a right-handed projection and look-at matrix. Once I used D3DXMatrixPerspectiveFovRH instead of the left-handed D3DX calls such as D3DXMatrixLookAtLH, it sprang to life :D.
  3. I'm creating a DirectX renderer for a project I previously made in OpenGL, so reusing the glm library would be really useful. Is it possible to convert the glm::lookAt matrix to work with DirectX? I have tried the following, but the matrix created by the DirectX math library is different:

        D3DXMatrixLookAtLH( &matView,
                            &D3DXVECTOR3( 0.0f, 3.0f, 5.0f ),
                            &D3DXVECTOR3( 0.0f, 0.0f, 0.0f ),
                            &D3DXVECTOR3( 0.0f, 1.0f, 0.0f ) );

        glm::mat4 gmatView = glm::lookAt( glm::vec3( 0.0f, 3.0f, 5.0f ),
                                          glm::vec3( 0.0f, 0.0f, 0.0f ),
                                          glm::vec3( 0.0f, 1.0f, 0.0f ) );
        float* pt = glm::value_ptr( gmatView );
        D3DXMATRIX matGLConvertView = D3DXMATRIX( pt );

     Thanks
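
     For what it's worth, a short sketch of the comparison that does line up, assuming the right-handed D3DX variant is used; the eye/at/up variable names are mine. The raw copy through glm::value_ptr works out because glm's column-major storage of a column-vector matrix has the same byte layout as D3DX's row-major storage of the equivalent row-vector matrix:

        // headers assumed: <d3dx9math.h>, <glm/glm.hpp>, <glm/gtc/matrix_transform.hpp>, <glm/gtc/type_ptr.hpp>
        D3DXVECTOR3 eye( 0.0f, 3.0f, 5.0f ), at( 0.0f, 0.0f, 0.0f ), up( 0.0f, 1.0f, 0.0f );

        D3DXMATRIX matView;
        D3DXMatrixLookAtRH( &matView, &eye, &at, &up ); // right-handed, like glm's default

        glm::mat4 gmatView = glm::lookAt( glm::vec3( 0.0f, 3.0f, 5.0f ),
                                          glm::vec3( 0.0f, 0.0f, 0.0f ),
                                          glm::vec3( 0.0f, 1.0f, 0.0f ) );
        D3DXMATRIX matFromGlm( glm::value_ptr( gmatView ) ); // same 16 floats in the same order
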
  4. I feel like a tool... it was that simple *dies inside*. Thank you so much
  5. I'm using the boost::iostreams library to compress char* data; however, the implementations I have found involve iterating a vector into a string stream, which is killing my performance. Is there any way I can speed this process up? Here is the code from the example I am using:

        std::string Compress( char* pData, size_t size )
        {
            std::vector<char> s;
            s.assign( pData, pData + size );

            std::stringstream uncompressed, compressed;
            for ( std::vector<char>::iterator it = s.begin(); it != s.end(); it++ )
                uncompressed << *it;

            io::filtering_streambuf<io::input> o;
            o.push( io::gzip_compressor() );
            o.push( uncompressed );
            io::copy( o, compressed );

            return compressed.str();
        }

     Thanks :)
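
     A minimal sketch of one way to avoid the per-character copy, assuming the same io = boost::iostreams namespace alias and a zlib-enabled Boost build: read straight from the raw buffer with an array_source and append the output to a string with back_inserter.

        #include <string>
        #include <boost/iostreams/filtering_streambuf.hpp>
        #include <boost/iostreams/filter/gzip.hpp>
        #include <boost/iostreams/device/array.hpp>
        #include <boost/iostreams/device/back_inserter.hpp>
        #include <boost/iostreams/copy.hpp>

        namespace io = boost::iostreams;

        std::string Compress( const char* pData, size_t size )
        {
            std::string compressed;

            io::filtering_streambuf<io::input> in;
            in.push( io::gzip_compressor() );
            in.push( io::array_source( pData, size ) );       // read the buffer directly, no vector/stringstream copy
            io::copy( in, io::back_inserter( compressed ) );  // append the compressed bytes to the string

            return compressed;
        }
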
  6. Building Boost with zlib

    You are a legend, my friend; it worked like a charm.
  7. Sorry for the newbie question; hopefully it's an easy one to answer, I have tried hard to solve this :P

     I'm trying to use the zlib compression filter with Boost, but my build of Boost does not include zlib (it was built using bjam with defaults). After going through the documentation, I have tried to rebuild Boost with zlib using the following steps:

     1. Compiled zlib using CMake and VS2013.
     2. Copied the includes, libs, and DLLs into the Boost folder (same folder as bjam).
     3. Tried to run bjam with this command:

        bjam set ZLIB_BINARY = "zlib\bin\zlibd.dll" set ZLIB_INCLUDE = "\lib\include" set ZLIB_LIBPATH = "zlib\lib"

     All I get is a list of module build errors. I'm thinking my command is a load of garbage, but due to my lack of Command Prompt experience that's what I came up with after reading a few pages online.

     Example of error:

        C:/Users/Max/Downloads/boost_1_57_0/tools/build/src\build-system.jam:583: in load from module build-system

     If anyone could link me to any information or give advice on the correct procedure, that would be most welcome :D

     Thanks :D
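
     A hedged sketch of the kind of invocation the Boost.Iostreams install notes describe, using the same ZLIB_* variable names as above; the paths are placeholders for wherever zlib was actually built. The main difference from the command above is that the variables are set separately (or passed with -s) rather than written inline after bjam:

        rem placeholder paths: point these at the real zlib build
        set ZLIB_INCLUDE=C:\zlib\include
        set ZLIB_LIBPATH=C:\zlib\lib
        set ZLIB_BINARY=zlibd

        bjam --with-iostreams stage

        rem or pass them directly on the bjam command line:
        bjam -sZLIB_INCLUDE="C:\zlib\include" -sZLIB_LIBPATH="C:\zlib\lib" -sZLIB_BINARY=zlibd --with-iostreams stage
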
  8. Sorry to post again, but I am unable to find the answer to this anywhere.

     In my C++ server application, when I save my bitmap to a file using an ofstream in binary mode and then read that file in C# using File.ReadAllBytes (which returns a byte array), the bytes read in are different from the ones I get when I send the same data (unsigned char*) over the network from the server and read it as bytes on the C# client.

     I know a byte in C# is an 8-bit unsigned integer and an unsigned char is the same as uint8_t, so why is the data different? What's the best approach to send my data to C# in the correct format? Am I missing some sort of encoding?

     Here's some server code that shows the writing to memory:

        void MyBitmap::WriteToFile( std::string sFileName )
        {
            std::ofstream file;
            file.open( sFileName.c_str(), std::ios::binary );

            //write headers
            file.write( ( char* ) ( &_header ), sizeof( _header ) );
            file.write( ( char* ) ( &_info ), sizeof( _info ) );

            //write pixel data
            file.write( reinterpret_cast<char*>( &_pPixelData[ 0 ] ), _sPixelDataSize );
            //file.write( ( &_pPixelData[ 0 ] ), _sPixelDataSize );

            file.close();
        }

        MyBitmapMemoryBuffer* MyBitmap::WriteToMemoryChar()
        {
            size_t stHeaderSize = sizeof( BITMAPFILEHEADER );
            size_t stInfoSize = sizeof( BITMAPINFOHEADER );
            size_t stTotalImageSize = stHeaderSize + stInfoSize + _sPixelDataSize;

            MyBitmapMemoryBuffer* pMem = new MyBitmapMemoryBuffer();
            pMem->pMemory = new char[ stTotalImageSize ];
            pMem->size = stTotalImageSize;

            //write headers
            std::memcpy( &pMem->pMemory[ 0 ], ( char* ) ( &_header ), stHeaderSize );
            std::memcpy( &pMem->pMemory[ stHeaderSize ], ( char* ) ( &_info ), stInfoSize );

            //write pixels
            //note: the "+ 1" below places the pixel data one byte past the headers,
            //unlike the file version above, and also copies one byte past the end of the buffer
            std::memcpy( &pMem->pMemory[ stHeaderSize + stInfoSize + 1 ], reinterpret_cast<char*>( &_pPixelData[ 0 ] ), _sPixelDataSize );

            return pMem;
        }

        MyBitmapMemoryBuffer* MyBitmap::WriteToMemoryByte()
        {
            size_t stHeaderSize = sizeof( BITMAPFILEHEADER );
            size_t stInfoSize = sizeof( BITMAPINFOHEADER );
            size_t stTotalImageSize = stHeaderSize + stInfoSize + _sPixelDataSize;

            MyBitmapMemoryBuffer* pMem = new MyBitmapMemoryBuffer();
            pMem->pMemoryByte = new BYTE[ stTotalImageSize ];
            pMem->size = stTotalImageSize;

            //write headers
            std::memcpy( &pMem->pMemoryByte[ 0 ], ( BYTE* ) ( &_header ), stHeaderSize );
            std::memcpy( &pMem->pMemoryByte[ stHeaderSize ], ( BYTE* ) ( &_info ), stInfoSize );

            //write pixels (same "+ 1" offset as above)
            std::memcpy( &pMem->pMemoryByte[ stHeaderSize + stInfoSize + 1 ], ( &_pPixelData[ 0 ] ), _sPixelDataSize );

            return pMem;
        }

     Here's the C# code that reads the data:

        protected void ChekcData()
        {
            int messageSize = 240054;
            int readSoFar = 0;
            byte[] msg = new byte[ messageSize ];

            while ( readSoFar < messageSize )
            {
                var read = stream.Read( msg, readSoFar, msg.Length - readSoFar );
                readSoFar += read;
                if ( read == 0 )
                    break; // connection was broken
            }

            //compare
            byte[] tmp = File.ReadAllBytes( @"C:\Users\Max\Documents\Git Repositories\Advanced Programming\test.bmp" );
            for ( int i = 0; i < tmp.Length; i++ )
            {
                if ( tmp[ i ] != msg[ i ] )
                {
                    textBox.Text = "not same";
                }
            }
        }
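
     For reference, a minimal sketch of the offset arithmetic that would make the in-memory copy byte-for-byte identical to the file, assuming the stray "+ 1" above is the culprit; the pixel data should start immediately after the two headers:

        size_t stPixelOffset = stHeaderSize + stInfoSize; // no "+ 1": pixels start right after the headers

        std::memcpy( &pMem->pMemory[ 0 ], &_header, stHeaderSize );
        std::memcpy( &pMem->pMemory[ stHeaderSize ], &_info, stInfoSize );
        std::memcpy( &pMem->pMemory[ stPixelOffset ], &_pPixelData[ 0 ], _sPixelDataSize );
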
  9. *face smash* just kill me now xD. Thank you wintertime
  10. Sorry for yet another post; after 6 hours I'm pulling my hair out. Everything seemed to work fine until I started looking at the bitmap file/buffer generated: the colour is different from what is actually rendered. I have searched high and low for bitmap header examples but can't find any that are hugely different from my implementation. Here is the code I use to save to a file, and a screenshot.

        MyBitmap::MyBitmap( int iWidth, int iHeight, int iNoOfBits, BYTE* pPixelData, size_t sPixelDataSize )
        {
            _header.bfType = 0x4d42;
            _header.bfSize = sizeof( BITMAPFILEHEADER ) + sizeof( BITMAPINFOHEADER ) + sPixelDataSize;
            _header.bfReserved1 = 0;
            _header.bfReserved2 = 0;
            _header.bfOffBits = sizeof( BITMAPFILEHEADER ) + sizeof( BITMAPINFOHEADER );

            _info.biSize = sizeof( _info );
            _info.biWidth = iWidth;
            _info.biHeight = iHeight;
            _info.biPlanes = 1;
            _info.biBitCount = 24; //iNoOfBits;
            _info.biCompression = BI_RGB;
            _info.biSizeImage = iWidth * iHeight * 3;
            _info.biXPelsPerMeter = 0;
            _info.biYPelsPerMeter = 0;
            _info.biClrUsed = 0;
            _info.biClrImportant = 0;
            _info.biXPelsPerMeter = 0;
            _info.biYPelsPerMeter = 0;

            _pPixelData = pPixelData;
            _sPixelDataSize = sPixelDataSize;
        }

        MyBitmap::~MyBitmap()
        {
            if ( _pPixelData )
                delete _pPixelData;
        }

        void MyBitmap::WriteToFile( std::string sFileName )
        {
            std::ofstream file;
            file.open( sFileName.c_str(), std::ios::binary );

            //write headers
            file.write( ( char* ) ( &_header ), sizeof( _header ) );
            file.write( ( char* ) ( &_info ), sizeof( _info ) );

            //write pixel data
            file.write( reinterpret_cast<char*>( &_pPixelData[ 0 ] ), _sPixelDataSize );

            file.close();
        }

     And here is how it's used:

        glReadPixels( 0, 0, g_iScreenWidth, g_iScreenHeight, GL_RGB, GL_UNSIGNED_BYTE, g_pixels );
        MyBitmap bitmap( g_iScreenWidth, g_iScreenHeight, 24, g_pixels, 3 * g_iScreenWidth * g_iScreenHeight );
        bitmap.WriteToFile( "test.bmp" );

     The only thing I can think of is that I need to specify an RGB colour mask, but that would require switching to the BITMAPV4HEADER structure, which I have yet to see anyone recommend. Am I missing something painfully obvious?
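
     A minimal sketch of the usual fix for swapped colours in this situation, assuming the mismatch is simply that BMP stores pixels in BGR order while the code reads them back as RGB; it reuses the g_pixels / g_iScreenWidth globals from the snippet above:

        // ask GL for BGR directly so the bytes already match the BMP layout
        // (GL_PACK_ALIGNMENT 1 avoids 4-byte row padding on readback for widths that are not a multiple of 4)
        glPixelStorei( GL_PACK_ALIGNMENT, 1 );
        glReadPixels( 0, 0, g_iScreenWidth, g_iScreenHeight, GL_BGR, GL_UNSIGNED_BYTE, g_pixels );

        // or keep GL_RGB and swap the red and blue channels before writing:
        for ( int i = 0; i < g_iScreenWidth * g_iScreenHeight; ++i )
            std::swap( g_pixels[ i * 3 + 0 ], g_pixels[ i * 3 + 2 ] ); // std::swap from <utility>

     (The BMP format itself also pads each pixel row to a multiple of 4 bytes, so widths that are not a multiple of 4 need per-row padding when the data is written out.)
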
  11. Ahh, I thought that I would have to convert to C++/CLI. I decided to create the header and info structures and then write them to a buffer manually. Thanks for the additional knowledge guys :)
  12. Sorry for such a basic question (I very rarely use Visual C++), but how do I use the .NET classes from a C++ console app in Visual Studio?
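
     A hedged sketch of the C++/CLI route, assuming the console project is switched to Common Language Runtime Support (/clr); System::Drawing::Bitmap is used purely as an illustration of calling a .NET class from C++:

        // compile with /clr (Project Properties -> General -> Common Language Runtime Support)
        #using <System.Drawing.dll>

        using namespace System;
        using namespace System::Drawing;

        int main()
        {
            // managed .NET object created and used from C++/CLI
            Bitmap^ bmp = gcnew Bitmap( 640, 480 );
            bmp->SetPixel( 0, 0, Color::Red );
            bmp->Save( "test.bmp" );

            Console::WriteLine( "saved {0}x{1} bitmap", bmp->Width, bmp->Height );
            return 0;
        }

     (The alternative, as in the post above, is to skip .NET entirely and write the BITMAPFILEHEADER/BITMAPINFOHEADER structures to a buffer by hand.)
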
  13. What's the cleanest way to do what I'm trying to achieve? At the moment I'm thinking of making my own bitmap class and using Boost to serialize it, but that seems a little overkill.