

JohnnyCode

Member Since 10 Mar 2008

#5237606 How to set up correctly the camera matrix?

Posted by JohnnyCode on 29 June 2015 - 08:12 PM


T = [ 1    0    0    0
      0    1    0    0
      0    0    1    0
      P.x  P.y  P.z  1 ]

Forward vector (assuming forward means in the direction of positive z-axis) F = normalize(P - L)
Right vector, R = normalize(cross(F, U))
Up vector, U = normalize(cross(R,F))

R = [ R.x  R.y  R.z  0
      U.x  U.y  U.z  0
      F.x  F.y  F.z  0
      0    0    0    1 ]

Finally, camera2world = T * R, therefore world2camera = camera2world.inverse()

is this correct? Perhaps,

Yes, it is correct, but consider the projection matrix: the only matrix that has a value in the so-called fourth column, for example:

 

bla, bla, bla, bla
bla, bla, bla, bla
bla, bla, bla, bla
0,   0,   1,   0

 

As you notice, it is not the usual kind of matrix with 1s on the diagonal; it carries a strange 1.0 value that comes into play when it is multiplied with the previous transformations. Thus:

 

The world-view matrix multiplied with the projection gives a matrix that transforms a 4D vector (x, y, z, 1) from world-view space into a 4D vector (x', y', z', z), where, if you notice, the fourth component is the z coordinate of the previous world-view space. This component is crucial for bringing the perspective-deformed projected vector (this does not apply to orthographic projection, which has no perspective) into rasterized normalized device coordinates: the computed (x', y', z', z) has its first three coordinates divided by the fourth component z.

 

Just visualize the multiplication of a standard space transformation with a perspective projection and transform a vector by it (in the case of orthographic projection w is 1.0, but all GPUs still perform the division), and you can see that the fourth component is actually the previous view-space z value, used for the division that normalizes into the so-called 0.0-1.0 rasterized device coordinates. So view matrix handedness cannot be isolated from the projection matrix; all that really matters is the sign of the special "fourth column/row" constant.
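To make the w-division concrete, here is a minimal C++ sketch. It assumes the GLM math library and its default OpenGL-style right-handed projection; the library choice and the numbers are mine, not from the original post.

#include <cstdio>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

int main()
{
    // GL-style right-handed projection: its fourth row is (0, 0, -1, 0),
    // so clip.w ends up being minus the incoming view-space z.
    glm::mat4 proj = glm::perspective(glm::radians(60.0f), 16.0f / 9.0f, 0.1f, 100.0f);

    glm::vec4 pView(1.0f, 2.0f, -5.0f, 1.0f); // a point 5 units in front of the camera
    glm::vec4 clip = proj * pView;
    printf("clip.w = %f (equals -pView.z = %f)\n", clip.w, -pView.z);

    // The perspective divide the GPU performs after the vertex shader:
    glm::vec3 ndc = glm::vec3(clip) / clip.w;
    printf("ndc = (%f, %f, %f)\n", ndc.x, ndc.y, ndc.z);
    return 0;
}

A left-handed convention puts +1 instead of -1 in that spot, so w becomes +z; the division itself is the same.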




#5237520 How to set up correctly the camera matrix?

Posted by JohnnyCode on 29 June 2015 - 10:56 AM


You can see the camera-local coordinate system as being aligned with that of the world coordinate system, both right-handed. So, in that same picture, what would be the "forward" vector? Isn't it the blue vector? Or is it the vector pointing in the opposite direction of the blue one?

The forward vector is usually defined in world space, and yes, it really does point forward, but in the view matrix it is then "usually" flipped backwards (store as many operations in the matrix as possible), so the correct z-depth component can be "passed" into the projection matrix's w-division. In the end it depends on whether you have -1 or 1 as the w-component catcher in the projection matrix you use, or you can madly change the z-buffer's default depth-test function from less to greater. The figure says: "This is a convention used by most 3d applications". So being right- or left-handed also depends on the projection matrix.
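Both points can be checked in a few lines. This is just a sketch against GLM's right-handed helpers (glm::lookAt and glm::perspective); the library and the values are my assumptions, not something stated in the thread.

#include <cstdio>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

int main()
{
    glm::vec3 eye(0.0f, 0.0f, 3.0f), target(0.0f), up(0.0f, 1.0f, 0.0f);
    glm::vec3 forward = glm::normalize(target - eye); // (0, 0, -1)

    // The right-handed lookAt stores the *negated* forward vector in the third
    // row of its rotation part: the "flipped backwards" trick described above.
    glm::mat4 view = glm::lookAt(eye, target, up);
    printf("view 3rd row = (%f, %f, %f), -forward = (%f, %f, %f)\n",
           view[0][2], view[1][2], view[2][2], -forward.x, -forward.y, -forward.z);

    // The matching right-handed projection keeps -1 as the w "catcher".
    glm::mat4 proj = glm::perspective(glm::radians(60.0f), 1.0f, 0.1f, 100.0f);
    printf("w catcher proj[2][3] = %f\n", proj[2][3]);
    return 0;
}

In a D3D-style left-handed pair both of those signs flip, which is the whole handedness difference.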




#5236944 How to set up correctly the camera matrix?

Posted by JohnnyCode on 26 June 2015 - 10:49 AM

Your view matrix is left-handed. To build a left- or right-handed matrix, all that matters is the cross product that provides the right vector; consider:

 

 

void updateCameraMatrix()
{
    Vector forward = normalize(m_lookAt - m_position);
    Vector right   = normalize(cross(forward, m_up));

    /* switch the arguments of the cross() function to generate the inverse right vector */

    m_up = cross(right, forward); /* consider whether to invert the up vector then */

    Matrix44 T( 1.0f,          0.0f,          0.0f,          0.0f,
                0.0f,          1.0f,          0.0f,          0.0f,
                0.0f,          0.0f,          1.0f,          0.0f,
               -m_position.x, -m_position.y, -m_position.z,  1.0f);

    Matrix44 R( right.x,    right.y,    right.z,    0.0f,
                m_up.x,     m_up.y,     m_up.z,     0.0f,
               -forward.x, -forward.y, -forward.z,  0.0f,
                0.0f,       0.0f,       0.0f,       1.0f);

    cameraMatrix = T * R;
}
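For concreteness, a tiny standalone sketch of the point about the cross product (it uses GLM, which is my choice and not part of the original snippet): swapping the cross() arguments flips the right vector, and that is what decides the handedness.

#include <cstdio>
#include <glm/glm.hpp>

int main()
{
    glm::vec3 forward(0.0f, 0.0f, -1.0f); // camera looking down -Z
    glm::vec3 up(0.0f, 1.0f, 0.0f);

    glm::vec3 rightA = glm::normalize(glm::cross(forward, up)); // (+1, 0, 0)
    glm::vec3 rightB = glm::normalize(glm::cross(up, forward)); // (-1, 0, 0)

    printf("cross(forward, up) = (%f, %f, %f)\n", rightA.x, rightA.y, rightA.z);
    printf("cross(up, forward) = (%f, %f, %f)\n", rightB.x, rightB.y, rightB.z);
    return 0;
}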




#5235452 how much i can trust the shader compiler?

Posted by JohnnyCode on 18 June 2015 - 06:12 AM

 

It is sort of trivial for the compiler not to compute unused local values, yet if you target some smart conditional writes/operations, or memory reads that are not cheap to access, those are all outside the scope of compile-time optimization... your shader will always be interpreted ;)

This is 2015. Failure to compile means failure to run. Certain platforms such as iOS guarantee this.


L. Spiro

 

My point was that optimization is not part of any standard, and counting on it means not taking the most appropriate action, since you are not the one who compiles the shader, and should not be.




#5235173 how much i can trust the shader compiler?

Posted by JohnnyCode on 16 June 2015 - 01:12 PM

It is sort of trivial for the compiler not to compute unused local values, yet if you target some smart conditional writes/operations, or memory reads that are not cheap to access, those are all outside the scope of compile-time optimization... your shader will always be interpreted ;)




#5231142 What does this mean: 0 < r, g, b, a < 1

Posted by JohnnyCode on 26 May 2015 - 04:01 PM


Which I believe simply means less than or equal to.

There is > and there is >=, written of course differently as glyphs; he used the simple >, which means the value can come arbitrarily close, but never be equal.
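Put as a small clarifying note (mine, not from the thread), the difference is only whether the endpoints are included:

0 <  r, g, b, a <  1   means each channel lies in the open interval (0, 1), never exactly 0 or 1
0 <= r, g, b, a <= 1   means each channel lies in the closed interval [0, 1], endpoints allowed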




#5230710 How to calculate Lumens?

Posted by JohnnyCode on 24 May 2015 - 01:46 PM

All color textures encode visible light that has already been illuminated. Because of that, only the wavelength of the responding colors and their intensity are partially encoded if you do not have a description of the illuminating intensity that occurred. Those "paired" parameters are self-sufficient and do not depend on observation values, though they can be described universally. But in the end this is the same kind of trap as the magnitude of a gravitational force in space at a single moment in time...

 

What I mean to say is that you can only interpolate from changes in illumination intensity and material character, which are defined by the illumination-color relationship, just as you can only guess the position of an object at a given time if you know the force at a moment, from a function instead of a behavioral simulation. But I am unsure of this, since the theory of relativity encoded time into a "space at a moment with forces" through a functional mathematical relation.




#5230555 HLSL and 4x3 mul

Posted by JohnnyCode on 23 May 2015 - 05:16 AM

You can effectively transform direction vectors by 4-column matrices by extending the 3-component direction vector into a 4-component vector with the last component set to zero. The multiplication will then transform the vector exclusively by the 3x3 part.
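A short C++ sketch of that idea (it assumes GLM; the particular matrix and names are illustrative, not from the thread): with w = 0 the translation column contributes nothing, so only the 3x3 part acts on the vector.

#include <cstdio>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

int main()
{
    // A transform carrying both a translation and a rotation.
    glm::mat4 M = glm::translate(glm::mat4(1.0f), glm::vec3(10.0f, 0.0f, 0.0f));
    M = glm::rotate(M, glm::radians(90.0f), glm::vec3(0.0f, 1.0f, 0.0f));

    glm::vec3 dir(0.0f, 0.0f, -1.0f);
    glm::vec4 asPoint     = M * glm::vec4(dir, 1.0f); // translation applies
    glm::vec4 asDirection = M * glm::vec4(dir, 0.0f); // only the 3x3 part applies

    printf("as point:     (%f, %f, %f)\n", asPoint.x, asPoint.y, asPoint.z);
    printf("as direction: (%f, %f, %f)\n", asDirection.x, asDirection.y, asDirection.z);
    return 0;
}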

 




#5228236 GLSL halfvector?

Posted by JohnnyCode on 10 May 2015 - 10:04 AM

It does not matter whether the light is directional or not.

 

If it is a directional light, only the eye direction is a per-pixel varying vector; if it is a positional light, the light direction is also a per-pixel varying vector. You should use explicit computations, even in old OpenGL; nothing should restrict you from that.
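As a sketch of the explicit per-fragment math (plain C++ with GLM standing in for shader code; the names fragPos, eyePos and lightPosOrDir are purely illustrative):

#include <glm/glm.hpp>

// Blinn-Phong half vector, computed explicitly per fragment.
glm::vec3 halfVector(const glm::vec3& fragPos, const glm::vec3& eyePos,
                     const glm::vec3& lightPosOrDir, bool directional)
{
    // Direction from the fragment toward the light: constant for a
    // directional light, varying per fragment for a positional one.
    glm::vec3 L = directional ? glm::normalize(-lightPosOrDir)
                              : glm::normalize(lightPosOrDir - fragPos);

    // Direction from the fragment toward the eye: always varying per fragment.
    glm::vec3 V = glm::normalize(eyePos - fragPos);

    return glm::normalize(L + V);
}

In a real shader the same expression runs per fragment, with L a uniform direction for a directional light and recomputed from the light position for a positional one.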




#5226485 Random Number Generation

Posted by JohnnyCode on 30 April 2015 - 05:53 AM

Whether it would be a trivial point for encryption key generation... a truly random number from the computer would be great, even over an unlucky 32 bits really. I construct my procedural textures from explicit data over Perlin algorithms, but pseudo-randomization can be decoded, regardless of the bit entropy size in fact, and unluckily over time as well, you know.




#5225829 Is it necessary to license your game?

Posted by JohnnyCode on 27 April 2015 - 07:16 AM

It is implicit that everything you are the author of, you are also the exclusive license owner of.

 

Rather, you need to license to yourself the things you are not the author of, which would be, for example, the creations of your music/sound composer.

 

Authorship and license/exclusive-license attribution are also the topics to settle ahead of any potential lawsuits. You need to document that, if you contracted someone, they understood that the license/exclusive license for the particular creation was attributed to you for the compensation dealt.

 

So carefully license the things you incorporate into your game that were created by someone else; there are cases of hobby/indie teams unable to release a game because some ex-members left the team and no longer allow the usage of their work.




#5224715 do float operations give different results in different GPUs?

Posted by JohnnyCode on 21 April 2015 - 11:45 AM

If the floating-point numbers you use and the operations on them always result in numbers with no fractional part, you can rely on your predictions across devices.

But once they step into a fractional part, even a single tenth, precision errors will accumulate.
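A tiny C++ illustration of the fractional-part point (mine, not from the post): 0.1 has no exact binary representation, so repeated addition drifts, while small whole numbers stay exact. Differences between GPUs then compound on top of that drift.

#include <cstdio>

int main()
{
    float tenths = 0.0f, ones = 0.0f;
    for (int i = 0; i < 1000; ++i) {
        tenths += 0.1f; // 0.1 is not exactly representable; error accumulates
        ones   += 1.0f; // small integers are exact in float
    }
    printf("0.1f summed 1000 times = %.6f (exact would be 100)\n", tenths);
    printf("1.0f summed 1000 times = %.6f (exact would be 1000)\n", ones);
    return 0;
}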




#5223799 How to create a circular orbit and an angry bird space like orbit ?

Posted by JohnnyCode on 16 April 2015 - 04:13 PM

Oh, thanks to everyone who answered. I've found a way to do that by playing with rotation and game objects. For anyone who would like to know how I did it:

  1. Create an empty gameobject and set its position to the planet's position.
  2. Make your orbited gameobject a child of this empty object.
  3. Rotate the empty gameobject with a Z angle (the 2D way).

This can only be done using Unity.

Once again, thanks to everyone, I've learned a lot.

Not really; you can establish eccentric behaviour from those values (super-accelerated black-hole-horizon objects, or objects that escape the observable universe in microseconds). Simulations of Newton's (time-free) law are interesting. Though, this law seems to truly express the tendency of attraction only if no time is considered, as opposed to a functional universe.

 

If you are after simple circular orbiting, you can just use sine and cosine relations together with the distance.

 

P(x, y) = (sin(t), cos(t)) = position... which is just the unit circle, still a surprisingly deep relation when you analyze it.
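A minimal sketch of that sine/cosine orbit in plain C++ (engine-agnostic, nothing Unity-specific; the names and the radius scaling are my own illustrative choices):

#include <cstdio>
#include <cmath>

struct Vec2 { float x, y; };

// Position on a circular orbit of a given radius around a center,
// at angle t (radians). Advance t over time to get the motion.
Vec2 orbit(Vec2 center, float radius, float t)
{
    return { center.x + radius * std::sin(t),
             center.y + radius * std::cos(t) };
}

int main()
{
    Vec2 planet{ 0.0f, 0.0f };
    float t = 0.0f, dt = 0.016f, angularSpeed = 1.0f; // ~60 fps, 1 rad/s

    for (int frame = 0; frame < 3; ++frame) {
        Vec2 p = orbit(planet, 5.0f, t);
        printf("frame %d: (%f, %f)\n", frame, p.x, p.y);
        t += angularSpeed * dt;
    }
    return 0;
}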




#5223790 Are float4x4 arrays supported in Shader Model 2 (D3D9, vs_4_0_level_9_3)?

Posted by JohnnyCode on 16 April 2015 - 02:59 PM

This:

...
public uint BoneIndex0;
public uint BoneIndex1;
public uint BoneIndex2;
public uint BoneIndex3;
...
does not match
new SharpDX.Direct3D11.InputElement("BLENDINDICES", 0, Format.R32G32B32A32_Float, 36, 0),

 

either; those fields are unsigned integers, but the input element declares them as 32-bit floats.




#5223780 Are float4x4 arrays supported in Shader Model 2 (D3D9, vs_4_0_level_9_3)?

Posted by JohnnyCode on 16 April 2015 - 01:48 PM

Seems legit, but my guess is that the vertex buffer data you bound for rendering actually does have 4-byte indices and weights in it, so all 4 values get swallowed into the first component, somehow showing up as a 1.0 value in the shader. Any sane exporter would export like that anyway, so you seem to have a correct art asset but an incorrect declaration in those calls; change them as I've advised, to 4 bytes / 4 bytes, and try it out:

 


new SharpDX.Direct3D11.InputElement("BLENDINDICES", 0, Format.R32G32B32A32_Float, 36, 0),
// "BLENDWEIGHT"
new SharpDX.Direct3D11.InputElement("BLENDWEIGHT", 0, Format.R32G32B32A32_Float, 52, 0),




