The first row is the object’s X vector, which points to the right of the object.
The second row is the object’s up vector. In a primitive FPS game this will always be [0,1,0] for the player, since the player is always standing upright.
The third row is the object’s Z vector, which points forward in front of the object. It is the direction the object is facing.
The fourth row is the object’s position.
Each row also carries the object’s scale: the first row is scaled by the object’s X scale, and the second and third rows by the Y and Z scales respectively.
Code to create a matrix from scratch (the easiest way to understand each component) is as follows:
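A minimal sketch of the row layout described above, assuming a row-major float[4][4] matrix; the struct and helper names here are my own, not from any particular library:

```cpp
#include <cassert>
#include <cmath>

// Row-major 4x4 matrix: row 0 = right (X), row 1 = up (Y),
// row 2 = forward (Z), row 3 = position.
struct Mat4 {
    float m[4][4];
};

// Hypothetical helper: build a world matrix from basis vectors,
// per-axis scale, and position.
Mat4 MakeWorldMatrix(const float right[3], const float up[3], const float fwd[3],
                     const float scale[3], const float pos[3])
{
    Mat4 out = {};
    for (int i = 0; i < 3; ++i) {
        out.m[0][i] = right[i] * scale[0]; // row 0: X axis, scaled by X scale
        out.m[1][i] = up[i]    * scale[1]; // row 1: up axis, scaled by Y scale
        out.m[2][i] = fwd[i]   * scale[2]; // row 2: forward axis, scaled by Z scale
        out.m[3][i] = pos[i];              // row 3: position
    }
    out.m[3][3] = 1.0f;                    // homogeneous W
    return out;
}
```

With identity basis vectors this reduces to a plain scale-then-translate matrix, which is an easy sanity check.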
I think having the entities hold a reference to the map, and vice versa, is the way to go. When an entity needs a path, it can query the map, which can use A* or another pathfinding algorithm to return the path to the entity. If your entity needs to find the nearest enemy or the like, it can also query the map, which already knows about all entities. In an ECS, I would have a pathfinding system handle the logic; it would hold a reference to the map and use the entity's position component to get the starting information for the graph search.
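A minimal sketch of that idea; all names here are hypothetical, and breadth-first search stands in for A* to keep it short. The Map owns the walkability data and the pathfinding, so an entity (or an ECS pathfinding system holding a reference to the Map) just asks it for a path:

```cpp
#include <cassert>
#include <queue>
#include <unordered_map>
#include <vector>

struct Point { int x, y; };

class Map {
public:
    Map(int w, int h) : width(w), height(h), blocked(w * h, false) {}
    void Block(int x, int y) { blocked[Index(x, y)] = true; }

    // BFS over a 4-connected grid; swap in A* with a heuristic for real use.
    std::vector<Point> FindPath(Point start, Point goal) const {
        std::unordered_map<int, int> parent; // cell index -> where we came from
        std::queue<int> open;
        open.push(Index(start.x, start.y));
        parent[Index(start.x, start.y)] = -1;
        while (!open.empty()) {
            int cur = open.front(); open.pop();
            if (cur == Index(goal.x, goal.y)) break;
            int cx = cur % width, cy = cur / width;
            const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
            for (int i = 0; i < 4; ++i) {
                int nx = cx + dx[i], ny = cy + dy[i];
                if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
                int n = Index(nx, ny);
                if (blocked[n] || parent.count(n)) continue;
                parent[n] = cur;
                open.push(n);
            }
        }
        // Walk back from the goal to reconstruct the path (empty if unreachable).
        std::vector<Point> path;
        if (!parent.count(Index(goal.x, goal.y))) return path;
        for (int i = Index(goal.x, goal.y); i != -1; i = parent[i])
            path.push_back({i % width, i / width});
        return path; // goal-to-start order
    }

private:
    int Index(int x, int y) const { return y * width + x; }
    int width, height;
    std::vector<bool> blocked;
};
```

A pathfinding system would then loop over entities with a "needs path" component, read their Position component, and call `map.FindPath(...)`.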
Knowing the view-space Z that you want your screen quad to sit at, we can use a bit of trig to calculate the corners. First get the height and width of the plane at that distance. FOV/2 gives you the angle between the view direction and the top (or bottom) plane, so tan(FOV/2) gives you the slope of that plane relative to the horizontal. Multiply that by your Z distance to get the rise, or half height, of the quad. Multiply the half height by the aspect ratio to get the half width.
Each corner will be some combination of the half height, the half width, and the center of your plane, which can be found by multiplying your view-space forward vector by your distance.
All of these points will be in view space, so keep that in mind if you are drawing something on screen at this point. You may want to transform them into world space via the inverse view transform, or set your view and model transforms to identity before rendering the quad.
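The trig above can be sketched like this, assuming a vertical FOV; the names are mine, not from any API:

```cpp
#include <cassert>
#include <cmath>

struct QuadExtents { float halfHeight, halfWidth; };

// Half extents of a view-aligned quad that exactly fills the screen
// at view-space distance z.
QuadExtents ScreenQuadAt(float fovY, float aspect, float z)
{
    float halfHeight = std::tan(fovY * 0.5f) * z; // rise = slope * run
    float halfWidth  = halfHeight * aspect;
    return { halfHeight, halfWidth };
}
// Each corner is then center +/- halfWidth along view-space X and
// center +/- halfHeight along Y, where center = forward * z = (0, 0, z)
// in view space.
```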
X3DAudio needs to be initialized so that its outputs will match the speaker configuration you are using, and so that its scale of units matches your application. And in general, yes, you need to release COM interfaces for objects.
But that is overkill, IMHO. What is the delay heard between one ear and the other? 340.29 m/s is the speed of sound at sea level, and your head is roughly 0.2 meters across, so we are looking at less than a millisecond of delay between ears. I read somewhere long ago that humans generally cannot discern delays shorter than about 9 ms. Much better cues for positional audio are the Doppler effect and the filtering caused by the shape of the ears. All of these, however, can be calculated by setting flags in your call to X3DAudioCalculate. The delay output only works with stereo speaker setups, however, as we humans are binaural beasts after all.
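Checking that arithmetic (time = distance / speed, using the rough head width from the post):

```cpp
#include <cassert>

const float speedOfSound = 340.29f; // m/s at sea level
const float headWidth    = 0.2f;    // meters, rough figure

// Worst-case interaural time difference for a sound directly to one side.
float InterauralDelaySeconds() { return headWidth / speedOfSound; }
// 0.2 / 340.29 is roughly 0.00059 s, i.e. about 0.6 ms -- well under
// the ~9 ms threshold mentioned above.
```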
Not having my codebase with me at the moment, I would assume that your level matrix can be cleaned up right away. According to M$:
IXAudio2Voice::GetOutputMatrix always returns the levels most recently set by IXAudio2Voice::SetOutputMatrix. However, they may not actually be in effect yet: they only take effect the next time the audio engine runs after the IXAudio2Voice::SetOutputMatrix call (or after the corresponding IXAudio2::CommitChanges call, if IXAudio2Voice::SetOutputMatrix was called with a deferred operation ID).
So once SetOutputMatrix has been called, the voice has saved a copy internally, even if it has not been applied to the hardware yet.
There's nothing in modern (programmable) DirectX/OpenGL that incentivizes you to use one over the other, apart from the libraries you're using (D3DX, XNAMath and DirectXMath all provide LH and RH functions, while GLM unfortunately only provides RH ones).
Yes, as I said, the changes were simple. But now with the physics engine, adapting to LH is incredibly hard.
Mapping data structures to 3D space will eventually boil down to familiarity.
RH seems much more intuitive and familiar. The one thing I might have problems with is having to do some conversion when bringing in depth-based things from the DirectX world.
the final vertex transform is identical in both: SomeLeftHandedMatrix * Position or SomeRightHandedMatrix * Position.
Actually, that's not correct. As commonly implemented, it would be Position * SomeLeftHandedMatrix and SomeRightHandedMatrix * Position.
I've never done that.
I always have world = (bone) * (matrix_from_model_to_physics) * w
and then the view * projection matrix.
I never needed to swap any orders when going from LH to RH.
I think the key part of what Buckeye said is "as commonly implemented". But no, you should not need to swap matrix multiplication order for handedness, only for majorness (row- vs column-major), as Mona2000 mentioned. The only place handedness really matters is the projection transform, i.e. how our mapping of the 3D vector space is converted to 2D.
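To illustrate, here is a sketch of perspective matrices following the formulas documented for D3DXMatrixPerspectiveFovLH/RH (row-major, row-vector convention); the helper names are mine. The RH matrix is just the LH one with the third row negated, which corresponds to the camera looking down -Z instead of +Z:

```cpp
#include <cassert>
#include <cmath>

struct Mat4 { float m[4][4]; };

// Matches the documented D3DXMatrixPerspectiveFovLH layout.
Mat4 PerspectiveFovLH(float fovY, float aspect, float zn, float zf)
{
    float ys = 1.0f / std::tan(fovY * 0.5f);
    float xs = ys / aspect;
    Mat4 p = {};
    p.m[0][0] = xs;
    p.m[1][1] = ys;
    p.m[2][2] = zf / (zf - zn);
    p.m[2][3] = 1.0f;
    p.m[3][2] = -zn * zf / (zf - zn);
    return p;
}

// The RH version differs only in the sign of the third (Z) row.
Mat4 PerspectiveFovRH(float fovY, float aspect, float zn, float zf)
{
    Mat4 p = PerspectiveFovLH(fovY, aspect, zn, zf);
    for (int i = 0; i < 4; ++i) p.m[2][i] = -p.m[2][i];
    return p;
}
```

Everything upstream of the projection (world, view) works identically in either handedness, as long as you stay consistent.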
While I don't use the D3DX libs, I'll give this a shot based on this doc.
Your U and V are in the wrong order and should be swapped: dwUPartialOutSemantic [in] should be tangent, while dwVPartialOutSemantic [in] should be binormal. Also, push the tangent, binormal, and normal to the same index.
If your vertex format is something like position, normal, tangent, bitangent, texcoords, then your call to compute tangent frame ex should reflect that. The indices should reflect the offset from the start of the vertex to the semantic's location. You are using 0, so everything is overwriting the position data. I would assume the indices should be 8 for tangent, 12 for bitangent, and 16 for normal, based on the vertex format I described above. Read the doc I linked above and try to understand the meaning of each of the function's inputs.
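If the parameters really are byte offsets into the vertex, one way to avoid guessing is to compute them with offsetof. This is only illustrative; check the linked doc, since the D3DX parameters may be semantic indices rather than byte offsets:

```cpp
#include <cassert>
#include <cstddef>

// The vertex layout described above, tightly packed (all members are
// 4-byte-aligned floats, so no padding is expected).
struct Vertex {
    float position[3];   // byte offset 0
    float normal[3];     // byte offset 12
    float tangent[3];    // byte offset 24
    float bitangent[3];  // byte offset 36
    float texcoords[2];  // byte offset 48
};
// offsetof(Vertex, tangent) etc. gives the exact offset for this layout,
// and keeps working if the struct is ever rearranged.
```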
Posted by Burnt_Fyr
on 25 February 2014 - 04:09 PM
After writing my own sampler and mixer with XAudio2, I can tell you that there was little information easily found on the internet at all. I have previously noted that gamedev does not have a forum specifically devoted to audio programming.
I assume that your prior knowledge of DSP is not code related, but about how these techniques work on waveforms. If that's the case, I would look into how to manipulate waveforms and attempt to recreate the tools of the trade as used in the studio: amplitude effects such as compressors, limiters, and expanders; delay effects like flanging and reverb; and filter effects like phasers, EQ, and wahs. This would tap into your knowledge of DSP and give you stuff to work on in your hobby coding.
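As a small example of that kind of waveform manipulation, here is a feedback delay, the building block behind echo and (with a modulated delay time) flanger effects. The function name and parameters are my own, not from XAudio2:

```cpp
#include <cassert>
#include <vector>

// Feedback delay: each output sample adds back a scaled copy of the
// output from delaySamples earlier, producing a decaying echo.
std::vector<float> ApplyDelay(const std::vector<float>& in,
                              std::size_t delaySamples, float feedback)
{
    std::vector<float> out(in.size(), 0.0f);
    for (std::size_t i = 0; i < in.size(); ++i) {
        out[i] = in[i];
        if (i >= delaySamples)
            out[i] += feedback * out[i - delaySamples]; // feed the delayed output back in
    }
    return out;
}
```

Feeding an impulse through it shows the decaying echo tail directly, which makes it a nice first effect to unit-test.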
Posted by Burnt_Fyr
on 25 February 2014 - 03:45 PM
I don't have anything to add here, but I find it unsettling that the OP's posts have almost universally received negative ratings. Is this necessary? I realize the posts show a lack of knowledge, but that is why the thread was started in the first place. It seems they are being punished for asking a question, IMHO.
I have handled this by using a separate store for components, rather than storing them in the entities or systems themselves. The systems operate on the stores. Now I don't need a post-movement update to copy the position into the physics components, or a pre-physics update to grab the current position and transform, since the data exists separately from the systems. So my components are not coupled, and neither are the systems. The systems that need access to components hold references to the stores for those components. "Every problem can be handled by another layer of abstraction" is a motto that I seem to live and die by more and more.
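A minimal sketch of the separate-store idea; all names here are hypothetical. Components live in stores keyed by entity id, and systems hold references to the stores they need instead of owning the data:

```cpp
#include <cassert>
#include <unordered_map>

using Entity = unsigned;

struct Position { float x, y; };
struct Velocity { float dx, dy; };

// One store per component type, keyed by entity id.
template <typename T>
using Store = std::unordered_map<Entity, T>;

// The movement system touches only the stores it is given. No copying
// between a "physics position" and a "render position" is needed, because
// there is exactly one Position store that every interested system reads.
void MovementSystem(Store<Position>& positions,
                    const Store<Velocity>& velocities, float dt)
{
    for (const auto& [id, vel] : velocities) {
        auto it = positions.find(id);
        if (it != positions.end()) {
            it->second.x += vel.dx * dt;
            it->second.y += vel.dy * dt;
        }
    }
}
```

A physics system would take a reference to the same Position store, so both systems see one authoritative copy of the data.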
Posted by Burnt_Fyr
on 23 September 2013 - 01:03 PM
Actually, MJP is the Pettineo and I'm the Zink from your list of authors. Can you please specify more precisely where in the book the resource usage is unclear? If it would help others, I can add a mention of it on the Hieroglyph 3 CodePlex page.