
Burnt_Fyr

Member Since 25 Aug 2009

#5150176 algorithm for calculating angle between two points

Posted by Burnt_Fyr on 28 April 2014 - 03:04 PM

 


Correction: you would need to add PI, not 2pi, to transform an interval ranging from -pi..pi to the range 0..2pi. In general, to convert a signed -n..n range to an unsigned 0..2n range, add half the range: 1n in this case, or pi in the situation above.

The transform I mentioned is not linear: with atan2 giving you angles in the range [+pi,-pi), where [+pi,0] is as desired but (0,-pi) should be (2pi,pi) instead, you have to add 2pi to all resulting angles less than 0. As pseudocode:

    result := angle < 0 ? 2pi + angle : angle   with   angle := atan2(-x, z)
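A minimal C++ version of that pseudocode (the function name is mine, and the atan2(-x, z) axis choice follows the post; double-check it against your own coordinate convention):

```cpp
#include <cmath>

// Wrap atan2's (-pi, pi] result into [0, 2*pi) by adding 2*pi only when
// the angle is negative. This is the non-linear fix described above, not
// an unconditional +pi shift.
double headingFromLook(double x, double z) {
    const double TWO_PI = 6.283185307179586;
    double angle = std::atan2(-x, z);
    return (angle < 0.0) ? angle + TWO_PI : angle;
}
```

Note that angles already in [0, pi] pass through unchanged; only the negative half of the range is remapped.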

 

Some day I'll learn not to spout my mouth off before coffee time :) Thanks for the correction on my correction.




#5150149 algorithm for calculating angle between two points

Posted by Burnt_Fyr on 28 April 2014 - 12:22 PM

First some clarifications: you seem to me to want to compute the angle from the principal z direction to the look direction vector, in the x-z plane, and in the counter-clockwise sense. Notice that this is not the same as computing the angle between the difference from the origin to the camera position and the difference from the origin to the look-at position (as you do in the OP). "The angle between" is not direction dependent, i.e. it gives the same result whether the look vector is e.g. 45° left or 45° right of the axis. In such a case the dot product can be used, as phil_t has mentioned. However, in the OP you make a distinction between (+1,0,0) resulting in 90° and (-1,0,0) resulting in 270°, so the following method may be better suited...

 

The look vector is a direction vector. You can extract it from the camera matrix (notice that this is the inverse of the view matrix). Or you can compute it as the difference from the camera position to the look-at position.

 

The atan2 function can be used to compute the angle (in radians) from the x and z components of the said look vector relative to the principal z axis. The question to answer is whether to use atan2(x, z) or atan2(z, x), and whether one of the components needs to be negated. Without having proven it, I assume atan2(-x, z) would do the job (please check twice).

 

The result of atan2 is in [+pi,-pi), so if you actually need [0,2pi) for some reason, you need to add 2pi whenever the result is negative. Since you want the angle in degrees, you further have to convert it accordingly, of course.

Correction: you would need to add PI, not 2pi, to transform an interval ranging from -pi..pi to the range 0..2pi. In general, to convert a signed -n..n range to an unsigned 0..2n range, add half the range: 1n in this case, or pi in the situation above.




#5149512 About Shadow Volume

Posted by Burnt_Fyr on 25 April 2014 - 05:24 PM

Does this happen only when the camera is inside the shadow? This happened in my game; although I didn't mind it much, I think there's a way to fix it. But yeah, we need more information than that.

IIRC, this is only an issue with depth-pass shadow volumes; depth-fail (aka Carmack's reverse) should not suffer from this.




#5149401 About Shadow Volume

Posted by Burnt_Fyr on 25 April 2014 - 09:31 AM

You will need to give us more than just a picture, most likely, if you want a solution.

 

I would start by debugging the vertices that make up your shadow volume, and find out what is going on with them in the shader when the camera moves. It looks to me like part of the shadow volume is being clipped against the near plane, but I'm just guessing here.




#5148990 Better way to make entities and map communicate in a Hierarchy Component System

Posted by Burnt_Fyr on 23 April 2014 - 10:51 AM

 

You could pass the Map object by reference:

 player->update(theMap);
 (for each Enemy) enemy->update(theMap);

There's really nothing wrong with passing objects through function calls to the objects that depend on them.

 

I thought about doing something like that, but I don't know, it just doesn't feel right to be passing everything through parameters. Perhaps I'm just too paranoid. I also had some problems with circular dependencies when I first tried that, but that was probably some mess I made in the code.

 

 

I think having the entities hold a reference to the map, and vice versa, is the way to go. When an entity needs a path, it can query the map, which can use A* or other pathfinding to return the path to the entity. If your entity needs to query for the nearest enemy or whatnot, it can query the map, which already knows all entities. In an ECS system, I would have a pathfinding system handle the logic; it would hold a reference to the map and use the entity's position component to get the starting information for the graph search.

 

I was thinking of something like that, like the entities storing a pointer to the current map and vice versa, though I feel like I'm going to have some headaches implementing that at first. I'll give the matter some more thought and research before deciding what I'm really going to do.

 

Thank you guys!

 

You may find this a good read




#5148936 worldspace or not, matrix & extents

Posted by Burnt_Fyr on 23 April 2014 - 06:35 AM

Bingo!




#5148858 Understanding a (d3dx) model matrix's content

Posted by Burnt_Fyr on 22 April 2014 - 06:19 PM

The first row is the object’s X vector, which points to the right of the object.
The second row is the object’s up vector.  In a primitive FPS game this will always be [0,1,0] for a player as the player is always standing upright.
The third row is the object’s Z vector, which points forward in front of the object.  It is the direction the object is facing.
The fourth row is the object’s position.
 
The first row is also scaled by the object’s X scale, and the second and third are scaled by the object’s Y and Z scales respectively.
 
Code to create a matrix from scratch (the easiest way to understand each component) is as follows:

    /** Creates a matrix from our data. */
    void COrientation::CreateMatrix( CMatrix4x4 &_mRet ) const {
        _mRet._11 = m_vRight.x * m_vScale.x;
        _mRet._12 = m_vRight.y * m_vScale.x;
        _mRet._13 = m_vRight.z * m_vScale.x;
        _mRet._14 = 0.0f;

        _mRet._21 = m_vUp.x * m_vScale.y;
        _mRet._22 = m_vUp.y * m_vScale.y;
        _mRet._23 = m_vUp.z * m_vScale.y;
        _mRet._24 = 0.0f;

        _mRet._31 = m_vForward.x * m_vScale.z;
        _mRet._32 = m_vForward.y * m_vScale.z;
        _mRet._33 = m_vForward.z * m_vScale.z;
        _mRet._34 = 0.0f;

        _mRet._41 = m_vPos.x;
        _mRet._42 = m_vPos.y;
        _mRet._43 = m_vPos.z;
        _mRet._44 = 1.0f;
    }
These are Direct3D matrices.


L. Spiro

 

In a left-handed system.




#5148723 Better way to make entities and map communicate in a Hierarchy Component System

Posted by Burnt_Fyr on 22 April 2014 - 07:39 AM

I think having the entities hold a reference to the map, and vice versa, is the way to go. When an entity needs a path, it can query the map, which can use A* or other pathfinding to return the path to the entity. If your entity needs to query for the nearest enemy or whatnot, it can query the map, which already knows all entities. In an ECS system, I would have a pathfinding system handle the logic; it would hold a reference to the map and use the entity's position component to get the starting information for the graph search.




#5148537 Plane inside view frustum

Posted by Burnt_Fyr on 21 April 2014 - 08:35 AM

Knowing the view-space Z that you want your screen quad to sit at, we can use a bit of trig to calculate the corners. First get the half height and half width of the quad at that depth: FOV/2 gives you the angle between the view direction and the top or bottom plane, so tan(fov/2) gives you the slope of that plane with respect to the horizontal plane. Multiply that by your Z distance to get the rise, or half height, of the quad. Multiply that by the aspect ratio to get the half width.

 

Each corner will be some combination of ±half height, ±half width, and the center of your plane, which can be found by multiplying your forward view-space vector by your distance.

 

All of these points will be in view space, so if you are drawing something on screen at this point, keep that in mind. You may want to transform them into world space via the inverse view transform, or maybe set your view and model transforms to identity before rendering the quad.
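The trig above can be sketched like this (assuming fov is the vertical field of view in radians; the names are mine, not from any API):

```cpp
#include <cmath>

// Half extents of a view-space quad at distance z: the rise is
// slope * run = tan(fov/2) * z, and the half width is that times
// the aspect ratio.
struct Extents { double halfWidth, halfHeight; };

Extents quadExtentsAt(double fovY, double aspect, double z) {
    double halfHeight = z * std::tan(fovY * 0.5);
    double halfWidth  = halfHeight * aspect;
    return { halfWidth, halfHeight };
}
```

The four corners are then (centre ± halfWidth, centre ± halfHeight) around the point at distance z along the view direction.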




#5145746 [XAudio2 2.7] - Source Voice questions

Posted by Burnt_Fyr on 09 April 2014 - 01:07 PM

X3DAudio needs to be initialized so that its outputs will match the speaker configuration you are using, and to match the scale of units to your application. In general, yes, you need to release COM interfaces for objects.

 

For true-to-life stereo you would need 2 listeners, set as far apart as the character's ears. This is the technique that was used for the voices in Pixar's Monsters Inc movie. (http://video.sina.com.cn/v/b/44064572-1604540395.html)

 

But that is overkill, IMHO. What is the delay heard between one ear and the other? 340.29 m/s is the velocity of sound at sea level, and your head is roughly 0.2 meters across, so we are looking at less than a millisecond of delay between ears. I read somewhere long ago that humans cannot in general discern delays lower than about 9 ms. Much better cues for positional audio are the Doppler effect and the filtering effect caused by the shape of the ears. All of these, however, can be calculated by setting flags in your call to X3DAudioCalculate. The delay calculation only works with stereo speaker setups, however, as we humans are binaural beasts after all.
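The back-of-envelope arithmetic above, as a one-liner (a hypothetical helper, not part of X3DAudio):

```cpp
// Interaural delay = head width / speed of sound.
// 0.2 m / 340.29 m/s comes out at roughly 0.00059 s, i.e. about 0.59 ms,
// well under the ~9 ms threshold mentioned above.
double interauralDelaySeconds(double headWidthMetres, double speedOfSound) {
    return headWidthMetres / speedOfSound;
}
```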

 

Not having my codebase with me at the moment, I would assume that your level matrix can be cleaned up right away. According to Microsoft:

 

IXAudio2Voice::GetOutputMatrix always returns the levels most recently set by IXAudio2Voice::SetOutputMatrix. However, they may not actually be in effect yet: they only take effect the next time the audio engine runs after the IXAudio2Voice::SetOutputMatrix call (or after the corresponding IXAudio2::CommitChanges call, if IXAudio2Voice::SetOutputMatrix was called with a deferred operation ID).

 

So once SetOutputMatrix has been called, the voice has saved a copy internally, even if it has not been applied to the hardware yet.




#5144121 Oh no, not this topic again: LH vs. RH.

Posted by Burnt_Fyr on 03 April 2014 - 09:45 AM

 

There's nothing in modern (programmable) DirectX/OpenGL that incentivizes you to use one over the other, apart from the libraries you're using (D3DX, XNAMath and DirectXMath all provide LH and RH functions, while GLM unfortunately only provides RH ones).

 

Also, slightly unrelated but always interesting: http://programmers.stackexchange.com/a/88776

 

Yes, as I said, the changes were simple. But now with the physics engine, adapting to LH is incredibly hard.

 
Mapping data structures to 3D space will eventually boil down to familiarity.
 
RH seems much more intuitive and familiar. The one thing I might have problems with is having to do some conversion when adding depth-based things or whatever from the DirectX world.
 

 


the final vertex transform is identical in both: SomeLeftHandedMatrix * Position or SomeRightHandedMatrix * Position.

Actually, that's not correct. As commonly implemented, it would be Position * SomeLeftHandedMatrix and SomeRightHandedMatrix * Position.

 

 

I've never done that.

 

I always have world = (bone) * (matrix_from_model_to_physics) * w

 

and then the v * p matrix.

 

I never needed to swap any orders when going from LH to RH.

 

I think the key part of what Buckeye said is "as commonly implemented". But no, you should not need to swap matrix multiplication order for handedness, only for majorness, as Mona2000 mentioned. The only place handedness really matters is in the projection transform, i.e. how our mapping of the 3D vector space is converted to 2D.
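The "commonly implemented" point can be sketched in plain C++ (illustrative helper types, not from any engine): with row vectors you write v * M, with column vectors M * v, and the two matrices are transposes of each other; handedness is a separate issue entirely.

```cpp
#include <array>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<std::array<double, 3>, 3>;

// Row-vector convention: v * M.
Vec3 mulRow(const Vec3& v, const Mat3& m) {
    Vec3 r{};
    for (int c = 0; c < 3; ++c)
        for (int k = 0; k < 3; ++k)
            r[c] += v[k] * m[k][c];
    return r;
}

// Column-vector convention: M * v.
Vec3 mulCol(const Mat3& m, const Vec3& v) {
    Vec3 r{};
    for (int row = 0; row < 3; ++row)
        for (int k = 0; k < 3; ++k)
            r[row] += m[row][k] * v[k];
    return r;
}

Mat3 transpose(const Mat3& m) {
    Mat3 t{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            t[i][j] = m[j][i];
    return t;
}
```

For any matrix M, mulRow(v, M) equals mulCol(transpose(M), v): switching convention means transposing the matrix, not changing handedness.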




#5142007 a camera problem in directx11

Posted by Burnt_Fyr on 25 March 2014 - 09:37 AM

 

In DirectX 11, a camera is identified with a position, a focus direction, and an up direction. In my opinion, the up vector should be perpendicular to the focus vector. But in MS's tutorial:

 

    XMVECTOR Eye = XMVectorSet( 0.0f, 3.0f, -6.0f, 0.0f );
    XMVECTOR At = XMVectorSet( 0.0f, 1.0f, 0.0f, 0.0f );
    XMVECTOR Up = XMVectorSet( 0.0f, 1.0f, 0.0f, 0.0f );
    g_View = XMMatrixLookAtLH( Eye, At, Up );
    //cbNeverChanges.mView = XMMatrixTranspose( g_View );
As you can see, the At vec is the same as the Up vec. I'm just confused by this usage.
By the way, what is the reason to do the transpose after every XMMatrix**() call?
 
Thanks for any help!

 

 

You are correct that the look-at vector and up vector should not be collinear. But the At vector is not the view direction; it is the spot you are looking at.

 

1) As mentioned previously, the function XMMatrixLookAtLH will create a look vector using at - eye, i.e. a vector from the eye point to the look-at point.

2) The cross product between the view vector and the up vector gives a vector perpendicular to both.

3) The view vector is crossed with that perpendicular to give a new up vector that finalizes the orthonormal basis.

4) IIRC (I can't remember from where), at one point the mul intrinsic in HLSL worked faster with column-major layout than row-major layout, hence the transpose.
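Steps 1-3 can be sketched with plain cross products (a hypothetical minimal vector type; the cross-product operand order here matches a left-handed setup like XMMatrixLookAtLH):

```cpp
#include <cmath>

struct V3 { double x, y, z; };

V3 sub(V3 a, V3 b)     { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
V3 cross(V3 a, V3 b)   { return { a.y * b.z - a.z * b.y,
                                  a.z * b.x - a.x * b.z,
                                  a.x * b.y - a.y * b.x }; }
double dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
V3 normalize(V3 v)     { double l = std::sqrt(dot(v, v));
                         return { v.x / l, v.y / l, v.z / l }; }

// Build an orthonormal camera basis from eye, at, and an approximate up.
void lookAtBasis(V3 eye, V3 at, V3 up, V3& right, V3& newUp, V3& forward) {
    forward = normalize(sub(at, eye));       // step 1: look vector = at - eye
    right   = normalize(cross(up, forward)); // step 2: perpendicular to both
    newUp   = cross(forward, right);         // step 3: re-derived up completes the basis
}
```

Even if the supplied up is not perpendicular to the view direction (as in the quoted tutorial code), steps 2 and 3 repair it; only a collinear up breaks the construction.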




#5141055 D3DXComputeTangentFrameEx problem

Posted by Burnt_Fyr on 21 March 2014 - 01:21 PM

 


Then your call to compute tangent frame ex should reflect that. The indices should reflect the offset from the start of the vertex to the semantic location.

That's not correct.

 

Be careful, Burnt_Fyr. That parameter is NOT the vertex declaration offset, it is the SEMANTIC index. If he has a single D3DDECLUSAGE_TANGENT, then the index is 0.

 

However, you're likely correct about the common usage order.

 

Good call, this is why I should not attempt to help anyone before morning coffee.




#5140939 D3DXComputeTangentFrameEx problem

Posted by Burnt_Fyr on 21 March 2014 - 07:07 AM

While I don't use the d3dx libs, I'll give this a shot based on this doc.

 

Your U and V are in the wrong order and should be swapped: dwUPartialOutSemantic [in] should be tangent, while dwVPartialOutSemantic [in] should be binormal. Also, push tangent, normal, and binormal to the same index.

 

If your vertex format is something like position, normal, tangent, bitangent, texcoords, then your call to compute tangent frame ex should reflect that. The indices should reflect the offset from the start of the vertex to the semantic location. You are using 0, so everything is overwriting the position data. I would assume the indices should be 8 for tangent, 12 for bitangent, and 16 for normal, based on the vertex format I described above. Read the doc I linked above and try to understand the meaning of each of the function's inputs.




#5137042 Billboard Matrix Generation C++, DX11

Posted by Burnt_Fyr on 06 March 2014 - 08:03 PM

http://swiftcoder.wordpress.com/2008/11/25/constructing-a-billboard-matrix/

 

Be forewarned that the article above uses column vectors, so the matrices may need to be flipped along the diagonal.

 

EDIT: Nehe's tutorial might be a better fit for you: http://nehe.gamedev.net/article/billboarding_how_to/18011/





