MaxFire

Member Since 09 Jun 2011
Offline Last Active Aug 09 2015 06:39 AM

Topics I've Started

glm::lookAt with DirectX

12 July 2015 - 02:56 PM

I'm creating a DirectX renderer for a project I previously made in OpenGL, and being able to reuse the glm library would be really useful.

Is it possible to convert the glm::lookAt matrix to work with DirectX? I have tried the following, but the matrix created by the D3DX math library comes out different:

	// Left-handed view matrix from D3DX (eye, look-at target, up)
	D3DXMatrixLookAtLH( &matView,
		&D3DXVECTOR3( 0.0f, 3.0f, 5.0f ),
		&D3DXVECTOR3( 0.0f, 0.0f, 0.0f ),
		&D3DXVECTOR3( 0.0f, 1.0f, 0.0f ) );

	// Right-handed view matrix from GLM with the same eye/centre/up
	glm::mat4 gmatView = glm::lookAt( glm::vec3( 0.0f, 3.0f, 5.0f ),
									glm::vec3( 0.0f, 0.0f, 0.0f ),
									glm::vec3( 0.0f, 1.0f, 0.0f ) );

	// Reinterpret GLM's float data as a D3DXMATRIX
	float* pt = glm::value_ptr( gmatView );
	D3DXMATRIX matGLConvertView = D3DXMATRIX( pt );
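
For reference, this is the direction I'm thinking of trying (untested sketch; I believe glm::lookAtLH was added around GLM 0.9.6, so that's an assumption). Since GLM stores matrices column-major while D3DXMATRIX is row-major but also uses the row-vector convention, the two layouts should coincide in memory once the handedness matches, so the floats could be copied straight across:

	// Hedged sketch: assumes glm::lookAtLH is available in this GLM version
	glm::mat4 gmatViewLH = glm::lookAtLH( glm::vec3( 0.0f, 3.0f, 5.0f ),
										glm::vec3( 0.0f, 0.0f, 0.0f ),
										glm::vec3( 0.0f, 1.0f, 0.0f ) );
	// Column-major (GLM) and row-major-with-row-vectors (D3DX) end up with
	// the same float ordering, so no transpose should be needed
	D3DXMATRIX matFromGLM( glm::value_ptr( gmatViewLH ) );

Does that sound right, or is a manual transpose and Z-flip of plain glm::lookAt the safer route?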

Thanks


Efficiency when compressing char* to std::string

11 December 2014 - 10:49 AM

I'm using the boost::iostreams library to compress char* data; however, the implementations I have found involve iterating a vector into a string stream, which is killing my performance.

Is there any way I can speed this process up? Here is the code from the example I am using:

#include <boost/iostreams/filtering_streambuf.hpp>
#include <boost/iostreams/filter/gzip.hpp>
#include <boost/iostreams/copy.hpp>
#include <sstream>
#include <string>
#include <vector>

namespace io = boost::iostreams;

std::string Compress( char* pData, size_t size )
{
	std::vector<char> s;
	s.assign( pData, pData + size );

	std::stringstream uncompressed, compressed;

	// Copy the buffer into the stream one character at a time
	for ( std::vector<char>::iterator it = s.begin();
		it != s.end(); it++ )
		uncompressed << *it;

	// gzip-compress the stream contents
	io::filtering_streambuf<io::input> o;
	o.push( io::gzip_compressor() );
	o.push( uncompressed );
	io::copy( o, compressed );

	return compressed.str();
}
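
For comparison, this is the kind of thing I'm hoping is possible (untested sketch reusing the io alias from above; boost::iostreams::array_source and back_inserter are documented devices, and CompressDirect is just my placeholder name): feed the buffer to the filter chain directly and skip the per-character copy entirely.

	#include <boost/iostreams/device/array.hpp>
	#include <boost/iostreams/device/back_inserter.hpp>

	std::string CompressDirect( const char* pData, size_t size )
	{
		std::string compressed;
		io::filtering_streambuf<io::input> in;
		in.push( io::gzip_compressor() );
		// Read straight from the caller's buffer, no intermediate vector
		in.push( io::array_source( pData, size ) );
		// Append the compressed bytes straight into the output string
		io::copy( in, io::back_inserter( compressed ) );
		return compressed;
	}

Would something like that avoid the stringstream overhead, or am I missing a reason the examples buffer through streams?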

Thanks :)


Building Boost with zlib

10 December 2014 - 07:27 AM

Sorry for the newbie question; hopefully it's an easy one to answer. I have tried hard to solve this myself :P

 

I'm trying to use the zlib compression filter with Boost; however, my build of Boost does not include zlib (it was built using bjam with the defaults). After going through the documentation, I have tried to rebuild Boost with zlib using the following steps:

 

1. Compiled zlib using CMake and VS2013.

2. Copied the includes, libs, and DLLs into the Boost folder (the same folder as bjam).

3. Tried to run bjam with this command: bjam set ZLIB_BINARY = "zlib\bin\zlibd.dll" set ZLIB_INCLUDE = "\lib\include" set ZLIB_LIBPATH = "zlib\lib"

 

All I get is a list of module build errors. I suspect my command is a load of garbage, but due to my lack of Command Prompt experience that's what I came up with after reading a few pages online.

 

Example of error

 
C:/Users/Max/Downloads/boost_1_57_0/tools/build/src\build-system.jam:583: in load from module build-system
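
Judging from the Boost.Iostreams installation docs, I'm now wondering whether the variables are meant to be passed to bjam with -s prefixes rather than with set. Something like this, maybe (untested; the paths are placeholders for wherever zlib ended up, and I'm assuming ZLIB_BINARY wants the library name without an extension):

	bjam -sZLIB_INCLUDE="zlib\include" -sZLIB_LIBPATH="zlib\lib" -sZLIB_BINARY=zlib --with-iostreams stage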
 
If anyone could link me to any information or give advice on the correct procedure, that would be most welcome :D
 

Thanks :D 


C++ Serialize OpenGL pixel data into a bitmap-formatted byte array

14 October 2014 - 01:40 PM

Hi guys,

 

I'm trying to send a screenshot to a remote device using TCP. My networking is working fine; however, I am unable to serialize the OpenGL pixel data into a format that can be de-serialized into a bitmap image on the client. I am using a bitmap to keep things simple, without compression, at the moment.

 

My current attempt uses FreeImage, as it is cross-platform, but I would not mind switching image frameworks.

		// Grab the framebuffer as tightly packed 24-bit RGB
		glReadPixels( 0, 0, g_iScreenWidth, g_iScreenHeight, GL_RGB, GL_UNSIGNED_BYTE, g_pixels );

		// Wrap the raw pixels in a FreeImage bitmap
		FIBITMAP* image = FreeImage_ConvertFromRawBits( g_pixels, g_iScreenWidth, g_iScreenHeight, 3 * g_iScreenWidth, 24, 0xFFFF0000, 0xFF008000, 0xFF0000FF, false );

		FIBITMAP *src = FreeImage_ConvertTo32Bits( image );
		FreeImage_Unload( image );
		// Allocate a raw buffer and copy the pixels back out as 24-bit rows
		int width = FreeImage_GetWidth( src );
		int height = FreeImage_GetHeight( src );
		int scan_width = FreeImage_GetPitch( src );
		BYTE *bits = ( BYTE* ) malloc( height * scan_width );
		FreeImage_ConvertToRawBits( bits, src, scan_width, 24, 0xFFFF0000, 0xFF008000, 0xFF0000FF, false );
		FreeImage_Unload( src );

		// Ship the raw (header-less) pixel rows to connected clients
		g_pTCPServer->SendDataToClientsBytes( bits, height * scan_width );

However, this does not give me a byte format that can be de-serialized as a bitmap image in a C# application.
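
One idea I'm considering (untested sketch): raw bits carry no BMP file header, so perhaps the C# side needs an actual BMP file written to memory. FreeImage's memory-IO functions look like they could do that, assuming I keep the FIBITMAP alive instead of unloading it first:

		// Hedged sketch using FreeImage's memory-IO API: write a complete BMP
		// (file header included) into an in-memory stream and send those bytes.
		// Assumes 'image' from above has not been unloaded yet.
		FIMEMORY* mem = FreeImage_OpenMemory();
		if ( FreeImage_SaveToMemory( FIF_BMP, image, mem, 0 ) )
		{
			BYTE* data = NULL;
			DWORD sizeInBytes = 0;
			// Borrow a pointer into the stream's internal buffer (no copy)
			FreeImage_AcquireMemory( mem, &data, &sizeInBytes );
			g_pTCPServer->SendDataToClientsBytes( data, sizeInBytes );
		}
		FreeImage_CloseMemory( mem );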

 

Any advice on how I can get the correct format would be appreciated :)

 

Thanks


Unity CG Custom Shader for Texture Layering using Passes

28 October 2013 - 04:01 AM

This is what I'm trying to accomplish:

  1. Texture the geometry with the primary shader.
  2. Layer a second texture over the first one, offset by a projected amount along the normal.

Here is my current shader code:

Shader "Custom/FurShader"
{

Properties

{

_MainTex( "Main Texture", 2D ) = "white" {}

_MaxHairLength( "Max Hair Length", Float ) = 0.5

_NoOfPasses( "Number of Passes", Float ) = 2.0

}



CGINCLUDE

//includes

#include "UnityCG.cginc"



//structures

struct vertexInput

{

float4 vertex : POSITION;

float4 normal : NORMAL;

float4 texcoord : TEXCOORD0;

};



struct fragmentInput

{

float4 pos : SV_POSITION;

half2 uv : TEXCOORD0;

};



//uniforms

uniform float _MaxHairLength;

uniform sampler2D _MainTex;

uniform float4 _MainTex_ST;



uniform sampler2D _SecondTex;

uniform float4 _SecondTex_ST;



uniform float _NoOfPasses;



//function

inline fragmentInput LevelFragmentShader( vertexInput i, int level )

{

fragmentInput o;



float movementDist = ( _MaxHairLength / _NoOfPasses ) * level;



float4 pos = ( i.vertex + ( i.normal * movementDist ) );



o.pos = mul( UNITY_MATRIX_MVP, pos );

o.uv = TRANSFORM_TEX( i.texcoord, _SecondTex );



return o;

}



half4 frag( fragmentInput i ) : COLOR

{

return tex2D( _SecondTex, i.uv );

}



ENDCG



SubShader {

Tags { "Queue" = "Transparent"}

Blend SrcAlpha OneMinusSrcAlpha



Pass

{

CGPROGRAM

// Upgrade NOTE: excluded shader from OpenGL ES 2.0 because it does not contain a surface program or both vertex and fragment programs.

#pragma exclude_renderers gles

#pragma vertex vert

#pragma fragment frag_unique



fragmentInput vert( vertexInput i )

{

fragmentInput o;



o.pos = mul( UNITY_MATRIX_MVP, i.vertex );

o.uv = TRANSFORM_TEX( i.texcoord, _MainTex );



return o;

}



half4 frag_unique( fragmentInput i ) : COLOR

{

return tex2D( _MainTex, i.uv );

}





ENDCG

}

Pass

{

CGPROGRAM

// Upgrade NOTE: excluded shader from OpenGL ES 2.0 because it does not contain a surface program or both vertex and fragment programs.

#pragma exclude_renderers gles

#pragma vertex vert

#pragma fragment frag



fragmentInput vert( vertexInput i )

{

fragmentInput o = LevelFragmentShader( i, 1 );



return o;

}





ENDCG

}



}

FallBack "Diffuse"

} 

But as you can see, the resulting second texture is not projecting perpendicularly edge to edge. Any suggestions would be great; I'm sure my maths is correct: vertexPos + ( normal * projectionDistance ). Could it be something to do with how I'm using Unity's ModelViewProjection matrix?
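
One thing I'm second-guessing (just a hunch, not verified): vertexInput declares normal as a float4, so its w component also gets scaled by movementDist and added to pos.w before the MVP multiply, which would skew the projection. Extruding only along xyz would look like this, replacing the offset line in LevelFragmentShader:

	// Hedged variant: keep pos.w equal to i.vertex.w (normally 1)
	float4 pos = i.vertex + float4( i.normal.xyz, 0.0 ) * movementDist;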

Image showing result http://i1265.photobucket.com/albums/jj508/maxfire1/Capture_zpsc8db2b1f.png

Thanks in advance

