
keym

Member
  • Content count: 35

Everything posted by keym

  1. Hi guys, I'm struggling again with my deferred renderer. This time I'm rewriting it from OpenGL/Cg to DX11/HLSL. I have almost everything working, but (I think) my reconstruction from depth is not working correctly (I compared the normals and they look the same as in the OGL renderer, so I guess those are fine). The result I get is somewhat correct, but bad things start to happen when I move away from the lit object: it becomes darker and darker and eventually goes all black. The thing is that I use almost the same code as in Cg/OGL, so perhaps I've overlooked something in the conversion. Fun fact: walls that are closer to the camera get darker quicker than walls that are further away, maybe that gives a hint. Posting pics.

    //compute view space position
    #define ZNEAR zNearZFar.x
    #define ZFAR zNearZFar.y
    #define A (ZNEAR + ZFAR)
    #define B (ZNEAR - ZFAR)
    #define C (2.0 * ZNEAR * ZFAR)
    #define D (ndcPos.z * B)
    #define ZEYE (-C / (A + D))

    float depth2 = shaderTexture2.Sample(SampleType, texCoord);
    float3 ndcPos = (float3(texCoord.x, 1 - texCoord.y, depth2) - 0.5) * 2.0; //<<< I had to flip the Y coord here compared to the OGL/Cg shader
    float4 clipPos;
    clipPos.w = -ZEYE;
    clipPos.xyz = ndcPos * clipPos.w;
    float3 vsPos = mul(invProj, clipPos).xyz;
  2. So I guess I also have to do this:

    from:
    //From [0,1] to [-1,1] in xyz
    float3 ndcPos = (float3(texCoord.x, 1 - texCoord.y, depth2) - 0.5) * 2.0;

    to:
    //From [0,1] to [-1,1] only in xy, z stays in [0,1]
    float3 ndcPos = float3(texCoord.x, 1 - texCoord.y, depth2);
    ndcPos.xy = (ndcPos.xy - 0.5) * 2;

    But this ^ together with changing C to zNear*zFar sadly doesn't produce correct results. :/

    Edit: For generating the projection matrix I use D3DXMatrixPerspectiveFovRH (right-handed coordinate system).
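    For reference, here is a minimal sketch of the full reconstruction under the D3D depth convention; it assumes the projection really is D3DXMatrixPerspectiveFovRH, and zNear, zFar, and invProj stand for the same values as above:

    // For this projection, depth = -zFar * (zEye + zNear) / (zEye * (zNear - zFar)),
    // which inverts to the expression below (zEye is negative in RH view space:
    // depth 0 gives -zNear, depth 1 gives -zFar).
    float depth = shaderTexture2.Sample(SampleType, texCoord).r;
    float zEye = -(zNear * zFar) / (zFar + depth * (zNear - zFar));

    // NDC: xy in [-1,1] (Y flipped relative to texture coords), z stays in [0,1].
    float3 ndc = float3(texCoord.x * 2.0 - 1.0, 1.0 - texCoord.y * 2.0, depth);

    // Undo the perspective divide: w_clip = -zEye for this projection.
    // The mul order follows the transposed-matrix convention used above.
    float4 clipPos = float4(ndc * -zEye, -zEye);
    float3 vsPos = mul(invProj, clipPos).xyz;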
  3. Hello, I'm having trouble maintaining the aspect ratio when my window is taller than it is wide. How do I readjust my perspective matrix for that? I calculate the aspect ratio by simply dividing width / height, and it works great when the window is wider than it is tall, but the other way around my image is stretched. Any solutions to this? I make my perspective matrix using D3DXMatrixPerspectiveFovRH.
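    For what it's worth, with a fixed vertical FOV, D3DXMatrixPerspectiveFovRH already clips the sides rather than stretching when aspect < 1, so stretching usually means the render target and viewport got out of sync on resize. A minimal sketch of typical resize handling, with assumed names (swapChain, fovY, zNear, zFar):

    // Release any views onto the back buffer first, then resize the swap chain
    // to match the new client area.
    swapChain->ResizeBuffers(0, newWidth, newHeight, DXGI_FORMAT_UNKNOWN, 0);
    // ...recreate the render target view, depth buffer, and viewport here...

    // Rebuild the projection with the new aspect; no special case is needed
    // for aspect < 1.
    float aspect = (float)newWidth / (float)newHeight;
    D3DXMATRIX proj;
    D3DXMatrixPerspectiveFovRH(&proj, fovY, aspect, zNear, zFar);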
  4. OK, it turns out the problem was in the part of my code responsible for resizing the texture I draw into. Once I fixed that, it works the way I want (clipping at the sides when the window is taller than it is wide). I don't even have to alter the aspect value or the perspective matrix in any way. Funny how often you resolve problems right after posting ><   Anyway, thanks for looking, y'all.
  5. Yes, I was casting them to floats from the very beginning, so that's not the issue. Thanks though.
  6. Hello, I'm trying to get picking working by following this tutorial: http://www.rastertek.com/dx11tut47.html   I'm not sure if it's me or if the author confuses spaces at the end of the tutorial. Can someone take a fresh look at this? Namely, he states that by multiplying a vector by the inverse view matrix we get the result in view space. Shouldn't it be in world space? And then we go from world into object space and make the final test there? His ray intersection doesn't take the sphere position into account, so the final test looks like it's in object space, but he also says it's in world space... So yeah, thoughts?
  7. DX11 Picking in DX11

    Solved. Looks like all my math was OK, but I forgot one thing: my rendering WinAPI control has an offset in x,y (because I have a sidebar and other stuff on the side) and I forgot to take that into account when reading the mouse position over the viewport. For instance, I got [0,0] at the origin of the window, not of the rendering control. Now all works well. Thanks for looking.
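    For other googlers, a minimal sketch of the fix; renderControlHwnd is an assumed name for the child control being rendered into:

    #include <windows.h>

    // Map the cursor from screen space into the render control's client area,
    // so [0,0] is the control's top-left corner rather than the window's.
    POINT pt;
    GetCursorPos(&pt);
    ScreenToClient(renderControlHwnd, &pt);
    int mouseX = pt.x;
    int mouseY = pt.y;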
  8. DX11 Picking in DX11

    Well... shouldn't it be this simple:

    object space ----[world a.k.a. model matrix]----> world space
    world space ----[view a.k.a. camera matrix]----> view space
    view space ----[projection matrix]----> clip space

    object space <----[inverse world a.k.a. model matrix]---- world space
    world space <----[inverse view a.k.a. camera matrix]---- view space
    view space <----[inverse projection matrix]---- clip space

    ?

    Anyway, this is how it *seems right* to me, but I'm not a guru here. Maybe I'm being picky ;) about naming, and that was not the intention of this topic (but I still wanted to clarify the naming before I ask my question(s) and create more confusion).

    So, the reason I post is that (obviously) I have a problem with picking. The thing is that my renderer uses a right-handed coordinate system, like OpenGL (for the sake of compatibility: I have an OGL renderer in this app too, and I don't want to negate every needed value to get the same result; that would only cause more errors down the road).

    So I construct my projection matrix using D3DXMatrixPerspectiveFovRH() and my view matrix using D3DXMatrixLookAtRH(). Before sending them to HLSL I transpose them (for some reason I have to, otherwise I get incorrect results [DX stores matrices in row-major order, but HLSL expects column-major by default?]). All is sweet and dandy until picking occurs. I'm pretty sure I'm doing something wrong, because this is my first attempt at renderer-independent picking. I follow what's in the tutorial, but the intersection test gives incorrect results. For the sake of simplicity my sphere is at (0,0,0), so I don't have to care about the world and invWorld matrices. I'm guessing something is wrong with my matrices, but it's hard to track down.

    Also, I'm not sure what's going on here (tutorial):

    // Adjust the points using the projection matrix to account for the aspect ratio of the viewport.
    m_D3D->GetProjectionMatrix(projectionMatrix);
    pointX = pointX / projectionMatrix._11;
    pointY = pointY / projectionMatrix._22;

    and how exactly the unprojecting part works. I mean, I have mouse coordinates that I rescale into the [-1,1] range, but how do I get from a vec2 to a vec3? Where does the 3rd component come from?
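    To illustrate where the 3rd component usually comes from, here is a minimal sketch of the standard view-space pick ray construction for a right-handed D3DX projection (an assumption about the tutorial's intent, not a quote from it):

    // Mouse coords rescaled to NDC in [-1,1]; screen Y grows downward, so flip it.
    float ndcX = (2.0f * mouseX) / viewportWidth - 1.0f;
    float ndcY = 1.0f - (2.0f * mouseY) / viewportHeight;

    // Dividing by _11 and _22 undoes the projection's x/y scaling (FOV and aspect).
    // The 3rd component is simply chosen: every point along the pick ray projects
    // to the same pixel, so z = -1 works (an RH camera looks down -Z).
    D3DXVECTOR3 rayDirView(ndcX / proj._11, ndcY / proj._22, -1.0f);

    // Rotate into world space with the inverse view matrix; TransformNormal
    // ignores the translation part, since this is a direction, not a point.
    D3DXVECTOR3 rayDirWorld;
    D3DXVec3TransformNormal(&rayDirWorld, &rayDirView, &invView);
    D3DXVec3Normalize(&rayDirWorld, &rayDirWorld);
    // The ray origin is the camera position; transform both by invWorld for an
    // object-space test.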
  9. Hello, I have a simple (I think) problem for all you guys with a D3D11 background: I already have a working TGA loader and now I want to pass the raw data to D3D, create a texture, and use it in a shader. How do I do that? I googled, and the only tutorials/helpful tips I could find cover D3D loaders with D3DX11CreateShaderResourceViewFromFile. I can see that there's a D3DX11CreateShaderResourceViewFromMemory, but I'm not sure how to use it; the texture appears black (I get an E_FAIL result, so I must be doing something wrong). What's the usual procedure? I would really appreciate some sample code or a snippet, because it's hard to find anything useful except MSDN, which is still kind of mystical to me.
  10. DX11 Texture from memory

    Thank you very much! Works like a charm. For other googlers: don't forget to set D3D11_BIND_SHADER_RESOURCE in your texture description structure.
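    For reference, a minimal sketch of the whole procedure; it assumes pixels points at 32-bit RGBA data from the TGA loader and device is the ID3D11Device:

    #include <d3d11.h>

    // Describe the texture; D3D11_BIND_SHADER_RESOURCE is the flag mentioned above.
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width = width;
    desc.Height = height;
    desc.MipLevels = 1;
    desc.ArraySize = 1;
    desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.Usage = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;

    // Hand D3D the raw pixels along with the row pitch in bytes.
    D3D11_SUBRESOURCE_DATA init = {};
    init.pSysMem = pixels;
    init.SysMemPitch = width * 4;

    ID3D11Texture2D* tex = nullptr;
    HRESULT hr = device->CreateTexture2D(&desc, &init, &tex);

    // The shader resource view is what actually gets bound to the shader stage.
    ID3D11ShaderResourceView* srv = nullptr;
    if (SUCCEEDED(hr))
        hr = device->CreateShaderResourceView(tex, nullptr, &srv);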
  11. SLERP issues

    So I finally implemented my GPU skinning and it works well when I iterate through all the frames. It even works when I interpolate between frames of the same animation, but I wanted to push it further and make a simple weight-based blend between animations. Here's the problem... Check out the video:

    http://www.youtube.com/watch?v=HX0Rv8X5TC0

    For this particular test I'm "slerping" the current frame with the bind pose frame at a ratio of 0%-50% (the weight increases as I continue to play the walk animation; I capped it at 50% because that's where the weird stuff is most noticeable). When I SLERP between frameN and frameN+1 according to the elapsed time, the animation looks alright. The problem occurs when I try to blend two animations, or an animation with the bind pose (as shown in the video). Ideas?
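    To be clear, this is the per-joint blend I mean (a sketch with assumed names; poseA/poseB are the two sampled skeleton frames, and slerp behaves like the routine I post further down):

    // Blend two sampled poses joint by joint before building the skinning matrices.
    for (int i = 0; i < numJoints; i++)
    {
        blended[i].orient = slerp(poseA[i].orient, poseB[i].orient, w);      // rotation
        blended[i].pos    = poseA[i].pos * (1.0f - w) + poseB[i].pos * w;    // translation
    }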
  12. SLERP issues

    Bump. Still stuck.
  13. SLERP issues

    Unfortunately normalising doesn't make a difference. I guess it only matters with (n)lerp anyway. I'm starting to think that maybe I should rebuild the positions, as C0lumbo suggested (even though I'm lerping in object space...), but I'm not sure where to start with that.
  14. SLERP issues

    I'm interpolating in object space. Does this require rebuilding the bone positions from the hierarchy as well? My SLERP looks like this:

    quaternion quaternion::slerp(quaternion q1, quaternion q2, float percent)
    {
        quaternion result;
        float dot = q1.dotProduct(q2);  //Dot product - the cosine of the angle between 2 quats
        if(dot < -1) dot = -1;          //Clamp value to [-1,1], just to be sure
        if(dot > 1) dot = 1;

        if(dot < 0)                     //If cos(angle) is < 0, negate the quaternion and the dot
        {                               //(go with the smaller rotation)
            q2 = -q2;
            dot = -dot;
        }
        if(dot > 0.99)                  //If the dot is close to 1, fall back to lerp
        {                               //(sin(a) would be near 0, making the division unstable)
            result = q1 + ((q2 - q1) * percent);
            result.normalize();
            return result;
        }
        if(percent < 0.01)              //If percent is close to 0, return q1
        {
            result = q1;
            return result;
        }
        if(percent > 0.99)              //If percent is close to 1, return q2
        {
            result = q2;
            return result;
        }

        float a = acos(dot);            //Angle between the quats
        result = (q1 * (sin((1.0 - percent) * a) / sin(a))) + ((q2 * sin(percent * a)) / sin(a));
        return result;
    }
  15. SLERP issues

    First of all, I doubt the animation itself is wrong: it's ripped directly (for educational purposes only!) from Doom 3 (directly meaning from the pk4 file, not downloaded from some crappy site). The walk cycle looks OK played by itself. As I wrote, the blending is capped at 0-50%, meaning at max we have a 50%-50% mix of animation and bind pose, and at min a 0%-100% proportion (no animation, just the plain bind pose: the parts of the vid where the model is not moving). I did it that way to emphasise where the problem occurs. If I did 100%-0%, most of the time you would just see the original walk animation without the popping. At the beginning of the video you see roughly 50%-50%. I tried NLERP but it looks even worse: the body is completely dismembered and limbs are spinning in various directions, and I'm like WTF.
  16. SLERP issues

    No, I'm using SLERP on the quaternions and LERP on the bone position vectors. The model is loaded from the id Tech 4 format (md5mesh + md5anim). All works fine when I interpolate between frames of the same animation. It starts to act strangely when I try to interpolate between frames of two different animations (or, for instance, with the bind pose; it doesn't really matter whether it's the bind pose or not, believe me, I get similar "pops" when I use another animation instead of the bind pose).
  17. So I tried to implement vertex skinning using Cg and OGL. I searched the web, and most tutorials do it with 4 bones at most, claiming that "you don't need more than 4 bones anyway". I beg to differ. For instance, I used the "cyberdemon" from Doom 3 for testing and it uses up to 7 bones per vertex. And that game is from 2004... I know that's beside the point (just tell the artist what he can and can't do and the problem is solved, right?). But in this topic http://www.gamedev.net/topic/628092-gpu-skinning-4-bones/ MJP said that I can pass as many bone weights per vertex as I want. Currently I'm packing them into TEXCOORD0-7 and I'm barely getting away with 6 bones; I even sacrificed the TBNs, and I want to use those too. So the question is: what's the trickery that allows passing more bones without stress? (See the sketch after this list.)

    1. If anyone's curious: I just wanted to implement up to 8 bones to be safe; it may also come in handy when we start on facial animation.
    2. Yes, I know I can discard the least significant weights and then renormalize, but that's the "lazy solution"; I want to do it right, and MJP's post led me in that direction. Also see above.
    3. I'm fairly new to VBOs, so I might be missing something about packing vertex attributes.
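    My current understanding of the trick, as a sketch only: generic vertex attributes are not limited to the fixed TEXCOORD sets, so two vec4 attributes can carry 8 weights and two more can carry 8 indices. The attribute locations, the Vertex struct, and the shader-side names here are all assumptions:

    // Cg vertex shader side (assumed): in float4 weights0 : ATTR8; in float4 weights1 : ATTR9;
    //                                  in float4 indices0 : ATTR10; in float4 indices1 : ATTR11;
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableVertexAttribArray(8);
    glVertexAttribPointer(8, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, weights0));
    glEnableVertexAttribArray(9);
    glVertexAttribPointer(9, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, weights1));
    glEnableVertexAttribArray(10);
    glVertexAttribPointer(10, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, indices0));
    glEnableVertexAttribArray(11);
    glVertexAttribPointer(11, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, indices1));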
  18. My quaternion-to-matrix code:

    matrix4x4::matrix4x4(quaternion quat)
    {
        quat.normalize();
        this->row[0].x = 1 - 2 * quat.y * quat.y - 2 * quat.z * quat.z;
        this->row[0].y = 2 * quat.x * quat.y - 2 * quat.z * quat.w;
        this->row[0].z = 2 * quat.x * quat.z + 2 * quat.y * quat.w;
        this->row[0].w = 0;
        this->row[1].x = 2 * quat.x * quat.y + 2 * quat.z * quat.w;
        this->row[1].y = 1 - 2 * quat.x * quat.x - 2 * quat.z * quat.z;
        this->row[1].z = 2 * quat.y * quat.z - 2 * quat.x * quat.w;
        this->row[1].w = 0;
        this->row[2].x = 2 * quat.x * quat.z - 2 * quat.y * quat.w;
        this->row[2].y = 2 * quat.y * quat.z + 2 * quat.x * quat.w;
        this->row[2].z = 1 - 2 * quat.x * quat.x - 2 * quat.y * quat.y;
        this->row[2].w = 0;
        this->row[3] = vector4d(0, 0, 0, 1);
    }

    How I build the bind pose and inverse bind pose matrices:

    for(int i = 0; i < this->numJoints; i++)
    {
        matrix4x4 matr(this->joints[i].orient);
        matr.row[3] = vector4d(this->joints[i].pos);
        this->bindPoseMatrices[i] = matr;
        this->invBindPoseMatrices[i] = this->bindPoseMatrices[i].getInverse();
    }

    How I build the animated matrices:

    void md5OBJ::updateSkeleton(int animNum, int frameNum)
    {
        for(int i = 0; i < this->numJoints; i++)
        {
            matrix4x4 animMatr(this->animations[animNum].frames[frameNum].joints[i].orient);
            animMatr.row[3] = vector4d(this->animations[animNum].frames[frameNum].joints[i].pos);
            this->animatedMatrices[i] = animMatr * this->invBindPoseMatrices[i];
        }
    }

    Matrices are row-major.
  19. Bump, I'm still stuck.
  20. So it turns out I was a bit too optimistic the other day: I only tested with the bind pose and presumed it was alright. What I did: I built the bind pose matrices and the inverted ones, multiplied the bind pose by the inverse, and sent that to the shader (silly me, I forgot that multiplying a matrix by its inverse gives the identity matrix, so no wonder the bind pose object-space vertices were OK). Happy to see it "work", I assumed I only needed to build the animated matrices and replace the bind pose matrices with the animated ones in the multiplication. But sadly this doesn't work: the result is totally messed up. It helps a little if I invert the final matrix, but it's still not right.

    What I've already double-checked:
    1. quat-to-matrix conversion
    2. matrix-by-matrix multiplication
    3. the matrix inverse routine
  21. Yep, it works. I just finished implementing my quat-to-matrix conversion plus a few helpers. Thank you all!
  22. Quote: "I animate MD5 meshes in a way C0lumbo implied. Post-multiply your animation pose matrices with the inverse bind pose. Then you can just send the bind-posed vertices to the GPU. I'm not sure how animating is done in Doom 3, but maybe it is possible that they did not use GPU skinning, because they needed the transformed vertices for constructing shadow volumes. Then it should be faster to have the vertices in weight space so that post-multiplication by the inverse bind pose is avoided."

    Thank you all for the great tips. I'll try to do that, but it'll take me some time because I've been using quaternions so far. So, to sum this up, what I need to do is (see the sketch after this list):

    1. Build the vertex positions in object space (using the bind pose joints) and store them
    2. Build bind pose matrices and then inverse bind pose matrices for my bind pose joints (from the joint positions and orientations)
    3. Build animation pose matrices from the joints (as above)
    4. Multiply the animation pose matrices with the inverse bind pose matrices and send them to the shader
    5. Send the bind pose vertices, weight factors, and bone indices to the shader
    6. Compute the final matrix from the weighted "component" matrices
    7. Multiply the bind pose vertex by the final matrix

    Is that correct?
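    A minimal shader-side sketch of steps 6-7 (Cg/HLSL-style; boneMatrices, weights, and indices are assumed names, and the mul order depends on how the matrices are uploaded):

    // Step 6: blend the palette matrices by weight (matrix-by-scalar and
    // matrix addition are componentwise).
    float4x4 skinMat = boneMatrices[int(indices.x)] * weights.x
                     + boneMatrices[int(indices.y)] * weights.y
                     + boneMatrices[int(indices.z)] * weights.z
                     + boneMatrices[int(indices.w)] * weights.w;

    // Step 7: skin the bind pose vertex with the blended matrix.
    float4 skinnedPos = mul(skinMat, float4(bindPosePosition, 1.0));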
  23. I'm confused. I was under the impression that this is in fact the "bind pose approach": these weight positions are provided only in bind pose (scratch that, they are pose independent; they only describe the mesh "volume", and combined with the bones, bind pose or not, they give the final result), and I also have the bind pose skeleton provided. So how do I get rid of these weight positions? What's the usual approach here? Right now I have the bind pose skeleton, but I don't actually use it while animating. It's only helpful when "unpacking" the animation (skeleton) frames, but it looks like there's a reason it's there. I have no idea if Doom 3 uses these weight positions in shaders. I just followed the mentioned tutorial and ended up here.
  24. No, no. The bones are stored as uniforms, as you say. I store something "extra" that is called a "weight position". See http://tfc.duke.free.fr/coding/md5-specs-en.html and scroll down to