About Tengato

  1. It turns out XMVector3TransformCoordStream is an inline method, so I was able to step into it with the debugger, and I found that my transform matrix was not being returned correctly after all. =)
  2. Hi, I'm working on a bounding volume generation project, and I found a DirectXMath method, XMVector3TransformCoordStream (http://msdn.microsoft.com/en-us/library/windows/desktop/hh404778(v=vs.85).aspx), that would be perfect for a need I have, but it's just writing (1, 1, 1) to every point in my output. I'm not sure why this is happening; I've verified that the memory at the input and output pointers is correct beforehand, that the strides are the right size, and that the transform is correct.

     ArrayList<XMFLOAT3> points;
     int count = mGeometries[*model]->mVertices.size();
     points.clearAndReserve(count);

     XMVector3TransformCoordStream(
         points.mData,
         sizeof(XMFLOAT3),
         &(mGeometries[*model]->mVertices.get_item(0)->mPos),
         sizeof(Vertex),
         count,
         (*model)->GetTransform());

     points.mCount = count;

     Any idea what I'm missing? Thanks!
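     For reference, a minimal sketch of what a strided coordinate-transform call like this does under the hood (plain C++ stand-ins for XMFLOAT3/XMMATRIX; the names and the padded Vertex layout are placeholders, not the DirectXMath API):

     ```cpp
     #include <cassert>
     #include <cstddef>
     #include <cstdint>

     struct Float3 { float x, y, z; };   // stand-in for XMFLOAT3
     struct Mat4   { float m[4][4]; };   // row-major, stand-in for XMMATRIX

     // Transforms 'count' points, reading and writing through byte strides the
     // way XMVector3TransformCoordStream does: each point is treated as the row
     // vector (x, y, z, 1) and the result is divided by the produced w.
     void TransformCoordStream(Float3* out, size_t outStride,
                               const Float3* in, size_t inStride,
                               size_t count, const Mat4& t)
     {
         auto* src = reinterpret_cast<const uint8_t*>(in);
         auto* dst = reinterpret_cast<uint8_t*>(out);
         for (size_t i = 0; i < count; ++i) {
             const Float3& p = *reinterpret_cast<const Float3*>(src + i * inStride);
             Float3& q = *reinterpret_cast<Float3*>(dst + i * outStride);
             float x = p.x * t.m[0][0] + p.y * t.m[1][0] + p.z * t.m[2][0] + t.m[3][0];
             float y = p.x * t.m[0][1] + p.y * t.m[1][1] + p.z * t.m[2][1] + t.m[3][1];
             float z = p.x * t.m[0][2] + p.y * t.m[1][2] + p.z * t.m[2][2] + t.m[3][2];
             float w = p.x * t.m[0][3] + p.y * t.m[1][3] + p.z * t.m[2][3] + t.m[3][3];
             q = { x / w, y / w, z / w };  // the "coord" divide
         }
     }
     ```

     Note that an all-(1, 1, 1) output is exactly what a degenerate transform can produce after the w-divide, which is one reason to double-check the matrix itself.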
  3. In XNA, FBX model files are processed by a black-box importer component that creates the meshes, the vertices, and all their channels, including bone influences, weights, etc. The importer may place a limit on the number of weights during that step, but I don't know for sure. Next there is a more open processing step that defines the bone indices and sets up the bind and inverse bind poses; I believe it also conditions the vertex weight channels into the correct memory size here. After that, since the resulting model object carries no information about the vertex weights, you are still responsible for making sure the effect files get weight parameter values that will work. This mostly amounts to knowing how the model was created, or simply trying the values 1, 2, and 4.
  4. I did suspect the weights for a bit, but I checked the vertex buffer contents, and with 4 weights per vertex the values are normalized. The rest of my code assumes 4 weights. This is where I got the model, by the way: http://www.dexsoft-games.com/models/futuristic_soldier.html
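     The invariant being checked here is that the influences for each vertex sum to 1; a tiny sketch of that renormalization (plain C++, names are placeholders):

     ```cpp
     #include <cassert>
     #include <cmath>

     // Renormalizes up to four bone weights so they sum to 1, the invariant a
     // 4-weights-per-vertex skinning shader expects.
     void NormalizeWeights(float w[4])
     {
         float sum = w[0] + w[1] + w[2] + w[3];
         if (sum > 0.0f)
             for (int i = 0; i < 4; ++i) w[i] /= sum;
     }
     ```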
  5. Hi, a while ago I purchased an animated model from a game content website to use in my XNA projects. Unfortunately, there are problems with the model's animations: the bone transforms that XNA's FBX importer produces from the model file seem to be corrupt. The model animates fine when opened with 3ds Max and the Autodesk FBX Converter, and I have other FBX models that animate correctly when imported into XNA, so I'm wondering whether this file has some uncommon configuration or compression enabled that makes it incompatible with XNA's importer. Or maybe someone recognizes a common mistake, like the inverse bind pose getting applied twice?

     Here's a video of the animation problems:

     The XNA animation code is pretty much straight from the animation sample: http://xbox.create.msdn.com/en-US/education/catalog/sample/skinned_model

     Thanks for any advice!
  6. Tengato

    Shader debugger in DX9

    I have that same exact problem with PIX while trying to debug vertex or pixel shaders in my XNA project...  I guess I'll go see if the old version works for me.
  7. Tengato

    Shimmering Shadow Edges

    After a lot of trial and error I was able to get it to work. Since my light frustum dimensions are fixed, I convert the center point into texel space, floor the X and Y coordinates, then convert it back to world space:

    // (sceneBounds.Center has been placed based on the camera frustum.)
    sceneBounds.Radius = 50.0f;
    float texelsPerWorldUnit = (float)SMAP_SIZE / (sceneBounds.Radius * 2.0f);
    Matrix lightViewAtOrigin = Matrix.CreateScale(texelsPerWorldUnit) *
        Matrix.CreateLookAt(Vector3.Zero, -sceneLights[0].Direction, Vector3.Up);
    Matrix lightViewAtOriginInv = Matrix.Invert(lightViewAtOrigin);
    sceneBounds.Center = Vector3.Transform(sceneBounds.Center, lightViewAtOrigin);
    sceneBounds.Center.X = (float)Math.Floor(sceneBounds.Center.X);
    sceneBounds.Center.Y = (float)Math.Floor(sceneBounds.Center.Y);
    sceneBounds.Center = Vector3.Transform(sceneBounds.Center, lightViewAtOriginInv);
    lightView = Matrix.CreateLookAt(
        sceneBounds.Center - sceneLights[0].Direction * sceneBounds.Radius * 2.0f,
        sceneBounds.Center, Vector3.Up);
    lightFrustum = Matrix.CreateOrthographic(sceneBounds.Radius * 2.0f,
        sceneBounds.Radius * 2.0f, 0.0f, sceneBounds.Radius * 6.0f);
  8. Hi, I've been messing around with a basic shadow map implementation and trying out approaches to dealing with the various artifacts. While reading the Direct3D article on MSDN, Common Techniques to Improve Shadow Depth Maps, I found a section, 'Moving the Light in Texel-Sized Increments', which seemed to offer a simple solution, but I must be missing something in my implementation because I can't get rid of the shimmering. The concept of quantizing the movement of the light frustum makes sense:

     vLightCameraOrthographicMin /= vWorldUnitsPerTexel;
     vLightCameraOrthographicMin = XMVectorFloor(vLightCameraOrthographicMin);
     vLightCameraOrthographicMin *= vWorldUnitsPerTexel;
     vLightCameraOrthographicMax /= vWorldUnitsPerTexel;
     vLightCameraOrthographicMax = XMVectorFloor(vLightCameraOrthographicMax);
     vLightCameraOrthographicMax *= vWorldUnitsPerTexel;

     But the article glosses over a few details that leave me puzzled, in particular the statement "The vWorldUnitsPerTexel value is calculated by taking a bound of the view frustum, and dividing by the buffer size." and the following bit of code:

     FLOAT fWorldUnitsPerTexel = fCascadeBound / (float)m_CopyOfCascadeConfig.m_iBufferSize;
     vWorldUnitsPerTexel = XMVectorSet(fWorldUnitsPerTexel, fWorldUnitsPerTexel, 0.0f, 0.0f);

     I don't understand how to get the fCascadeBound value, and I assume the divisor in the expression is the shadow map resolution? Also, the vLightCameraOrthographicMin and Max values are in light view space, is that correct?

     Thanks!
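     If it helps, the quantization in the article reduces to snapping a light-view-space coordinate to the texel grid. A minimal sketch, assuming fCascadeBound is the world-space width covered by the cascade's ortho projection and the divisor is the shadow map resolution:

     ```cpp
     #include <cassert>
     #include <cmath>

     // Snaps a light-view-space coordinate to the shadow map's texel grid so
     // the ortho frustum moves in whole-texel steps and edges stop shimmering.
     float SnapToTexel(float v, float cascadeBound, int shadowMapSize)
     {
         // World units covered by one shadow-map texel.
         float worldUnitsPerTexel = cascadeBound / static_cast<float>(shadowMapSize);
         return std::floor(v / worldUnitsPerTexel) * worldUnitsPerTexel;
     }
     ```

     Applying this to the ortho min/max (or to the frustum center, as in the answer above this post) is what keeps the light frustum from sliding by sub-texel amounts between frames.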
  9. Hi, I have a basic understanding of C++, but when I examine real-world code I still encounter lots of confusing bits and pieces. For example, this class declaration from the Ogre 3D source:

     class _OgreExport Root : public Singleton<Root>
     {
     ...

     What is _OgreExport? Thanks!
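     _OgreExport is Ogre's shared-library import/export macro. The general pattern looks roughly like this sketch (MYLIB_EXPORT, MYLIB_BUILD, and Widget are made-up names; the real macro lives in Ogre's platform header and has more cases):

     ```cpp
     #include <cassert>

     // Typical shape of a library export macro: mark symbols as exported while
     // building the DLL/shared library, imported (or default-visible) otherwise.
     #if defined(_WIN32)
     #  if defined(MYLIB_BUILD)  // defined only while compiling the library itself
     #    define MYLIB_EXPORT __declspec(dllexport)
     #  else
     #    define MYLIB_EXPORT __declspec(dllimport)
     #  endif
     #else
     #  define MYLIB_EXPORT __attribute__((visibility("default")))
     #endif

     class MYLIB_EXPORT Widget {  // hypothetical class, stands in for Root
     public:
         int value() const { return 42; }
     };
     ```

     So the declaration just means "Root is part of the Ogre library's public binary interface"; the macro expands to nothing relevant when you only read the header.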
  10. What is the best general approach to structuring a robust 3D model for use in video games (i.e. a character or vehicle controlled by the player or AI)? Some of my specific questions are:
     • When should I use multiple models? This seems useful for things like weapons, but what about vehicle wheels or destructible pieces - how far should you take this concept?
     • When should I use separate meshes within a single model? Is there a 1:1 relationship between meshes and materials that should be honored? It seems like meshes get sent to the shaders as a unit, although you could possibly break them down into primitives and send those pieces to different shaders...
     • Is it acceptable to animate a rigid model (like a mech) using a single mesh with disconnected pieces, assigning all the vertices in each piece a 100% weight to a corresponding bone?
     • Should you ever create animation channels for meshes (rather than bones)?
     • Are there standards for how many texture files should be used with a single mesh or model?
     This topic may seem more to do with modeling than coding, but I'm asking because these details inform much of the design of the model handling code, which I would like to be powerful and flexible, but elegant and clean as well. Thanks for any advice!