kauna

Members
  • Content count

    1217
  • Joined

  • Last visited

Community Reputation

2922 Excellent

About kauna

  • Rank
    Contributor
  1. As explained earlier, you'll need to convert the screen-space pixel position into the space of the shadow map.

     In a deferred renderer you'll typically reconstruct a view space position from a depth value (either from the z-buffer or a separate depth target) and the normalized device coordinates. You may also store the full view space position in a render target, but that is naturally slower (due to bandwidth usage). Search for "position reconstruction from depth" for the details.

     Once you have the view space position, it can be transformed back to world space (i.e. with the inverse view matrix). This is nothing difficult, just invert the view matrix on the CPU.

     Since you are looking for a texel in the shadow map, you then transform that world space position by the light's view-projection matrix. Again nothing difficult, just multiply the inverse view matrix from the previous step with the light's view-projection matrix.

     In the end you have a single matrix which transforms a view space position directly into a shadow map position - you'll still need to scale and offset the resulting coordinate (though even that could be baked into the matrix).

     On the CPU:
     - create the inverse view matrix
     - create the shadow map matrix = inverse view matrix * light's view-projection matrix

     On the GPU:
     - get the pixel's view space position using any method you find suitable
     - multiply the position by the shadow map matrix
     - scale and offset by 0.5
     - sample and compare the shadow map

     OR

     - get the pixel's view space position as before
     - multiply the position by the inverse view matrix
     - multiply the result by the light's view-projection matrix
     - scale and offset by 0.5
     - sample and compare the shadow map

     Cheers!
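     For example, the CPU side could look roughly like this with DirectXMath (just a sketch - the function and variable names are made up, and it assumes the camera view matrix and the light's view-projection matrix are already available):

     [source]
     #include <DirectXMath.h>
     using namespace DirectX;

     // Build one matrix that takes a camera view space position into shadow map space.
     // Assumes the row-vector convention used by DirectXMath / HLSL mul(v, M).
     XMMATRIX BuildShadowMapMatrix(FXMMATRIX cameraView, CXMMATRIX lightViewProj)
     {
         // View space -> world space
         XMMATRIX invView = XMMatrixInverse(nullptr, cameraView);

         // World space -> light projection space, combined into a single matrix
         return XMMatrixMultiply(invView, lightViewProj);
     }

     // The resulting clip-space xy still needs the scale/offset by 0.5 in the shader
     // (or bake a texture-space scale/bias matrix into the result here as well).
     [/source]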
  2. pixel.lightViewPosition = Light1ViewPosMapTexture.Sample(ObjSamplerState, pixel.texCoord);
     pixel.lightViewPosition.xyz /= pixel.lightViewPosition.w;

     I'm not sure what this code does - is it the view space position stored in a texture? And why are you sampling the same texture twice:

     float shadowMapDepth = Light1ViewPosMapTexture.Sample(ObjSamplerState, pixel.lightViewPosition.xy).r;
     worldPosition = WorldPositionTexture.Sample(ObjSamplerState, pixel.texCoord);

     If it is the view space position of the screen pixel, then it is in the wrong space - you have to transform it by the inverse view matrix (view space to world space) and then by the light's view-projection matrix (world space to the light's projection space).

     You also seem to sample a texture holding the "world position", but that data isn't used.

     Cheers!
  3. I think you have a small misconception about shadow mapping - when you render the shadow map from the light's point of view, the only thing you need to store is the depth value. Nothing else, unless you are doing some fancier, more complex shadow mapping. In practice, the shaders used for shadow map rendering can be the same as the shaders used for regular scene rendering, since the only thing you want is the depth value - though this works only when you use a hardware depth-stencil target as the shadow map format.

     Anyway - first you'll need to fix your shadow map rendering functions.

     If I understand correctly, there could also be a problem in the deferred light code which applies the shadow - in that code you need the screen pixel's position in world or view space. That position must be transformed into the shadow map's space in order to perform the calculations you are doing.

     Cheers!
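     For reference, a depth-only shadow map target in D3D11 could be set up roughly like this (a sketch only - the 2048x2048 size, the 32-bit depth format and the 'device' pointer are just assumptions, and error checking is omitted):

     [source]
     #include <d3d11.h>

     // A hardware depth-stencil target that doubles as the shadow map texture.
     D3D11_TEXTURE2D_DESC texDesc = {};
     texDesc.Width = 2048;
     texDesc.Height = 2048;
     texDesc.MipLevels = 1;
     texDesc.ArraySize = 1;
     texDesc.Format = DXGI_FORMAT_R32_TYPELESS;      // typeless so it can be both DSV and SRV
     texDesc.SampleDesc.Count = 1;
     texDesc.Usage = D3D11_USAGE_DEFAULT;
     texDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;

     ID3D11Texture2D* shadowTex = nullptr;
     device->CreateTexture2D(&texDesc, nullptr, &shadowTex);

     D3D11_DEPTH_STENCIL_VIEW_DESC dsvDesc = {};
     dsvDesc.Format = DXGI_FORMAT_D32_FLOAT;          // written by the depth test
     dsvDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;

     ID3D11DepthStencilView* shadowDSV = nullptr;
     device->CreateDepthStencilView(shadowTex, &dsvDesc, &shadowDSV);

     D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
     srvDesc.Format = DXGI_FORMAT_R32_FLOAT;          // read back as a regular float texture
     srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
     srvDesc.Texture2D.MipLevels = 1;

     ID3D11ShaderResourceView* shadowSRV = nullptr;
     device->CreateShaderResourceView(shadowTex, &srvDesc, &shadowSRV);
     [/source]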
  4. One solution to try is to update the VP part of the transform only once per frame into a constant buffer - calculating the MVP on the CPU could result in a bottleneck. Then use the tbuffer for the model/world matrices of the objects and perform the required transforms in the vertex shader - that is, multiply the vertex by the world/model matrix and then by the view-projection matrix.

     When updating the tbuffer, you should do it with as few map/unmap calls as possible (per frame), not once per mesh, since there is some overhead with each one.

     Cheers!
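     Roughly, the per-frame update could look something like this (just a sketch - names are made up and it assumes a dynamic D3D11 buffer for the world matrices and DirectXMath types):

     [source]
     #include <d3d11.h>
     #include <DirectXMath.h>
     #include <cstring>
     #include <vector>

     // Fill the per-object world matrices with a single Map/Unmap per frame.
     // Assumes 'worldMatrixBuffer' was created with D3D11_USAGE_DYNAMIC and
     // D3D11_CPU_ACCESS_WRITE and is bound to the vertex shader as a tbuffer / buffer SRV.
     void UpdateWorldMatrices(ID3D11DeviceContext* context,
                              ID3D11Buffer* worldMatrixBuffer,
                              const std::vector<DirectX::XMFLOAT4X4>& worldMatrices)
     {
         D3D11_MAPPED_SUBRESOURCE mapped = {};
         if (SUCCEEDED(context->Map(worldMatrixBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
         {
             std::memcpy(mapped.pData, worldMatrices.data(),
                         worldMatrices.size() * sizeof(DirectX::XMFLOAT4X4));
             context->Unmap(worldMatrixBuffer, 0);
         }
         // The view-projection matrix lives in a small constant buffer updated once per frame;
         // the vertex shader then does: position * world[objectIndex] * viewProjection.
     }
     [/source]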
  5. http://www.gamedev.net/topic/655543-outline-glow-effect/

     Another thread about the subject - controlling the amount of glow will produce the effect you are looking for.

     Cheers!
  6. Terrain tesselation and collision

      For shadow mapping this is OK as long as you use the same tessellated mesh for the shadow map rendering - there will be visual artifacts if the mesh geometry rendered into the shadow map isn't the same as the mesh geometry used in the lighting calculations.

      However, sometimes you may need to optimize the shadow rendering and use a less detailed mesh - you'll need to study which kinds of geometrical changes cause the fewest rendering artifacts.

      Cheers!
  7. Terrain tesselation and collision

    Typically these two things aren't related - the terrain can be tessellated independently from the collision handling. The tessellation of a terrain is usually based on the point of view and the distance of the terrain chunks, while the collision testing needs to work on data which isn't viewpoint dependent.

     If you performed collision tests on the tessellated terrain, funny things could happen - consider that your camera is moving and the tessellation changes radically: objects that were on the terrain would end up in the air or maybe inside the terrain. This could make objects fall or get boosted to astronomical speeds (because they are suddenly inside an object). Believe me, I tried this years ago.

     Keep the physics separated from the graphics and your life will be easier. Make a separate physical model for the terrain. This is logical since you could be running the game without any graphics output (i.e. a dedicated server).

     Cheers!
  8. Yes, but when the code is called, it draws one triangle?

     Cheers!
  9. Do I understand your code correctly, that it draws a single triangle at a time?

     If so, your if-avoidance is insignificant (performance wise) compared to the fact that you are totally under-utilising the GPU and saturating the CPU with endless draw calls.

     Cheers!
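     For example, the batching idea could look roughly like this in D3D11 (a sketch only - it assumes a dynamic vertex buffer big enough for the frame's triangles and a vertex struct matching your input layout):

     [source]
     #include <d3d11.h>
     #include <cstring>
     #include <vector>

     // Minimal vertex layout for the example - replace with whatever your input layout uses.
     struct Vertex { float x, y, z; float u, v; };

     // Instead of one Draw() per triangle, copy all of the frame's triangles into one
     // dynamic vertex buffer with a single Map/Unmap and issue a single draw call.
     void DrawBatched(ID3D11DeviceContext* context,
                      ID3D11Buffer* batchVB,               // created with D3D11_USAGE_DYNAMIC
                      const std::vector<Vertex>& vertices) // 3 vertices per triangle
     {
         D3D11_MAPPED_SUBRESOURCE mapped = {};
         if (SUCCEEDED(context->Map(batchVB, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
         {
             std::memcpy(mapped.pData, vertices.data(), vertices.size() * sizeof(Vertex));
             context->Unmap(batchVB, 0);
         }

         UINT stride = sizeof(Vertex);
         UINT offset = 0;
         context->IASetVertexBuffers(0, 1, &batchVB, &stride, &offset);
         context->Draw(static_cast<UINT>(vertices.size()), 0); // one call for the whole batch
     }
     [/source]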
  10. GPU skinned object picking

    Yeah, your code looks about like what I was thinking.

     - When you transform the ray, you'll have to _transform_ the pick ray start position (i.e. rotate and translate) and _rotate_ the ray direction (do not apply the translation). See the sketch after this post.

     - I think that if you apply an offset to the sphere position like you do in your code, you'll have to rotate that direction by the inverse matrix too, because (b.pos - b.parent.pos) is a direction in world space, not in the bone's local space.

     I'd start by just performing a ray/sphere hit test against the bone position (without any inverse transforms) to see that the theory works.

     To debug hit tests, I'd implement a simple raytracer which shoots rays from the camera position in each pixel's direction, so you can easily see which rays hit the object and which don't.

     Cheers!
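     A rough sketch of the ray transform with DirectXMath (the names are made up; it assumes the bone's local-to-world matrix is available):

     [source]
     #include <DirectXMath.h>
     using namespace DirectX;

     // Bring a world-space pick ray into a bone's local space.
     void TransformRayToBoneSpace(FXMMATRIX boneWorld,
                                  XMVECTOR worldRayOrigin, XMVECTOR worldRayDir,
                                  XMVECTOR& localRayOrigin, XMVECTOR& localRayDir)
     {
         XMMATRIX invBone = XMMatrixInverse(nullptr, boneWorld);

         // Positions get the full transform (rotation + translation)...
         localRayOrigin = XMVector3TransformCoord(worldRayOrigin, invBone);

         // ...directions are only rotated (no translation), then re-normalized.
         localRayDir = XMVector3Normalize(XMVector3TransformNormal(worldRayDir, invBone));
     }
     [/source]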
  11. This makes my eyes bleed! (Just kidding.)

      But honestly, to save yourself from writing things out many times, I'd suggest using a vector library / class with operator overloading. With such a class you could replace the above code with:

      [source]
      iterator->pos -= Vector(0.001f * iterator->timeSinceCreated);
      if(iterator->pos.x <= 0.0f)
      {
          iterator->pos = Vector(0.0f, 0.0f, 0.0f);
      }
      iterator->pos += iterator->delta * 0.01f * iterator->timeSinceCreated;
      iterator->lifeTime -= 1.0f * iterator->timeSinceCreated;
      [/source]

      It will save you time AND reduce errors from bad copy-pasting.
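      The kind of vector class meant above can be very small - something along these lines (just a sketch, the names are made up):

      [source]
      // Operator overloads let the particle update read like the short version above.
      struct Vector
      {
          float x, y, z;

          Vector(float x_ = 0.0f, float y_ = 0.0f, float z_ = 0.0f) : x(x_), y(y_), z(z_) {}

          Vector& operator+=(const Vector& v) { x += v.x; y += v.y; z += v.z; return *this; }
          Vector& operator-=(const Vector& v) { x -= v.x; y -= v.y; z -= v.z; return *this; }
      };

      inline Vector operator*(const Vector& v, float s) { return Vector(v.x * s, v.y * s, v.z * s); }
      [/source]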
  12. GPU skinned object picking

    You can perform the hit test in the 3D primitive's local space (as you would with any mesh) - that is, transform the ray from world space into the bone's local space using the inverse transform of the bone. With that approach there is no need for an OBB, since the bone is always axis aligned in its own local space. The inverse transform needs to be calculated only when performing the actual hit test.

     Yes, it is true that matrix inversion isn't the cheapest operation - it is relatively expensive compared to a ray/box intersection - but it makes the intersection calculations much easier, and the same approach works with rigid meshes: you can use the same mesh for the hit test while any position/rotation/scale is handled automatically by the inverse transform. If you use spheres, the inverse transform isn't even necessary.

     Cheers!

     [edit] So to test a skinned mesh - just loop through the bones of the mesh and perform a ray hit test against each one (you can start with sphere geometry since spheres don't require complex transforms).
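     A rough sketch of the ray/sphere test that could be run per bone (it assumes a normalized ray direction; the small Vec3 type and helpers are made up for the example):

     [source]
     struct Vec3 { float x, y, z; };

     inline Vec3  operator-(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
     inline float dot(const Vec3& a, const Vec3& b)       { return a.x * b.x + a.y * b.y + a.z * b.z; }

     // Returns true when the ray (normalized direction) hits the sphere.
     bool RaySphereHit(const Vec3& rayOrigin, const Vec3& rayDir,
                       const Vec3& sphereCenter, float radius)
     {
         Vec3  oc = rayOrigin - sphereCenter;
         float b  = dot(oc, rayDir);                 // projection of oc onto the ray
         float c  = dot(oc, oc) - radius * radius;   // squared distance term

         if (c > 0.0f && b > 0.0f)                   // origin outside and pointing away
             return false;

         return (b * b - c) >= 0.0f;                 // discriminant of the quadratic
     }
     [/source]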
  13. Item Discontinuity Problem

    I don't see any reason to handle the item differently in any of the presented cases - your item is an entity which can be freely placed in the world (render its 3D representation), and it can be linked to a character (the item is still in the world, it is just controlled by the linked entity).

     I don't consider the inventory to be a special case either - for the inventory rendering I just query the item for a texture image to be used for 2D rendering (the item doesn't even need to care that it is shown in an inventory). Because the inventory GUI is closely linked to what the character is wearing, I can place the item in different slots, where some slots represent that the item is being worn (i.e. linked to some node) and otherwise the item is hidden (removed from the world).

     Cheers!
  14. GPU skinned object picking

    I'd stay away from the "render a color / ID and read it back from the buffer" approach, because you can't use that solution for generic ray casting, i.e. testing whether bullets hit your skinned meshes or whether the player is visible to an enemy from an arbitrary point of view. You have some options - it is a question of efficiency vs. accuracy.

     - The most accurate (but probably slowest) way is to calculate the skinned mesh on the CPU and then perform the hit test per triangle (as you would with any triangle mesh). Of course, you don't need to update the CPU-side mesh unless you actually perform a hit test on it. See the per-triangle test sketch after this post.

     - The easier and more efficient solution (and probably accurate enough) is to perform the hit test against the bones used by the skin - you can use a box, a cylinder or a sphere (for example) for the bone geometry.

     Cheers!
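     The per-triangle test could be the standard Möller-Trumbore intersection, roughly like this (a sketch only - the Vec3 type and its helpers are minimal stand-ins for whatever math types you already have):

     [source]
     struct Vec3 { float x, y, z; };

     inline Vec3  operator-(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
     inline float dot(const Vec3& a, const Vec3& b)       { return a.x * b.x + a.y * b.y + a.z * b.z; }
     inline Vec3  cross(const Vec3& a, const Vec3& b)
     {
         return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
     }

     // Ray/triangle intersection (Möller-Trumbore), run against the CPU-skinned vertices.
     // Returns true on a hit and writes the ray distance to t.
     bool RayTriangleHit(const Vec3& orig, const Vec3& dir,
                         const Vec3& v0, const Vec3& v1, const Vec3& v2, float& t)
     {
         const float eps = 1e-6f;
         Vec3 e1 = v1 - v0;
         Vec3 e2 = v2 - v0;

         Vec3  p   = cross(dir, e2);
         float det = dot(e1, p);
         if (det > -eps && det < eps) return false;   // ray is parallel to the triangle plane

         float invDet = 1.0f / det;
         Vec3  s = orig - v0;
         float u = dot(s, p) * invDet;
         if (u < 0.0f || u > 1.0f) return false;      // outside the triangle (first barycentric)

         Vec3  q = cross(s, e1);
         float v = dot(dir, q) * invDet;
         if (v < 0.0f || u + v > 1.0f) return false;  // outside the triangle (second barycentric)

         t = dot(e2, q) * invDet;
         return t >= 0.0f;                            // only hits in front of the ray origin count
     }
     [/source]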
  15. Here's an example of how to implement the critical section in the way LSpiro described:

      http://jrdodds.blogs.com/blog/2004/08/raii_in_c.html

      Cheers!
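      A minimal sketch of what that RAII wrapper looks like with a Win32 CRITICAL_SECTION (the class name is made up):

      [source]
      #include <windows.h>

      // The lock is taken in the constructor and always released in the destructor,
      // even if the scope is exited early by a return or an exception.
      class ScopedCriticalSection
      {
      public:
          explicit ScopedCriticalSection(CRITICAL_SECTION& cs) : m_cs(cs) { EnterCriticalSection(&m_cs); }
          ~ScopedCriticalSection() { LeaveCriticalSection(&m_cs); }

          // Non-copyable: one owner per scope.
          ScopedCriticalSection(const ScopedCriticalSection&) = delete;
          ScopedCriticalSection& operator=(const ScopedCriticalSection&) = delete;

      private:
          CRITICAL_SECTION& m_cs;
      };

      // Usage: the CRITICAL_SECTION itself is initialized once elsewhere with
      // InitializeCriticalSection and freed with DeleteCriticalSection.
      // void Worker(CRITICAL_SECTION& cs)
      // {
      //     ScopedCriticalSection lock(cs);
      //     // ... access the shared data ...
      // }   // LeaveCriticalSection runs here automatically
      [/source]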