

About GreenGodDiary

Personal Information

  • Role
    3D Animator
  1. I'm sure that would work, but I'm not a big fan of forcing faster clips to have more frames. We're under pretty tight memory constraints for this project, so every meg counts. Perhaps one could scale the speed of the blended clips depending on the blend factor? E.g. if you have your run with 20 frames and your walk with 30, and the blend factor is 0.5, you could scale the speed of the run down and the walk speed up so they sync halfway (since the blend factor is 0.5).
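A minimal sketch of that idea (names and the blend convention are my own assumptions, not from an actual engine): lerp the shared cycle duration between the two clips' durations by the blend factor, then derive a playback-rate multiplier for each clip so both complete a cycle in that shared duration.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical helper: durations are in frames (or seconds);
// blend = 0 means clip A only, blend = 1 means clip B only.
struct ClipRates { double rateA; double rateB; };

inline ClipRates syncPlaybackRates(double durationA, double durationB, double blend)
{
    // Shared cycle duration at this blend factor.
    double blended = durationA + (durationB - durationA) * blend;
    // rate > 1 plays a clip faster than authored, rate < 1 slower.
    return { durationA / blended, durationB / blended };
}
```

With a 30-frame walk and a 20-frame run at blend 0.5, the shared cycle is 25 frames: the walk speeds up (rate 1.2) and the run slows down (rate 0.8), which is the halfway sync described above.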
  2. Sup dudes and dudettes! I'm in the process of implementing an animation state machine and am currently making a 2D blendspace state for it. I think I've figured out how to blend the different clips together given an [x,y] coordinate, but I have one problem I'm not sure how to solve: matching the blended clips' animation speeds. Say you have your run-of-the-mill twin-stick character locomotion blendspace, where max Y, zero X means running straight forward, and max Y, max X (in either direction) means running at an angle (thus blending run_forward with run_strafe). In this case the animations' speeds probably match, so there's no worry. However, say I'm halfway up Y, meaning I'm "jogging", in the sense that I'm halfway between walk_forward and run_forward, and my X is at some arbitrary point. How would I blend these animations together so that their speeds match? Would it be as simple as 'lerping' the animation speed of the walk towards the speed of the run and scaling the speeds of all the clips to match this speed? Sorry if the question is poorly written.
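For the spatial blending itself, one common approach (sketched here with assumed names, simplified to a single unit cell of four clips; not code from the post) is bilinear weights over the four clips surrounding the [x, y] point:

```cpp
#include <cassert>
#include <cmath>

// Bilinear weights for the four clips at the corners of the unit cell
// containing the blendspace point (x, y), both in [0, 1].
// Corner order assumed: bottom-left, bottom-right, top-left, top-right.
struct BlendWeights { double w[4]; };

inline BlendWeights bilinearBlendWeights(double x, double y)
{
    return { { (1.0 - x) * (1.0 - y),   // bottom-left
               x * (1.0 - y),           // bottom-right
               (1.0 - x) * y,           // top-left
               x * y } };               // top-right
}
```

At x = 0, y = 0.5 ("jogging straight ahead") this gives 0.5 to walk_forward and 0.5 to run_forward, with zero weight on the strafe clips; the weights always sum to 1, so the pose blend stays normalized.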
  3. GreenGodDiary

    FBX animation data flips half-way through

    The issue has been resolved! Though I haven't confirmed it, I believe the problem was that when we read the key data we used to store it as Euler angles and then convert to quaternions. Now we get quaternion data from the start and the issue has gone away. I have fixed some other stuff as well, which is why I can't be certain that was the cause, but I suppose it makes sense.
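For anyone hitting the same thing, that explanation is consistent with interpolating Euler keys across the 180° seam. A toy sketch (my own, not the project's code) of how a naive per-component lerp goes the "long way" around, while a shortest-arc treatment (which quaternion slerp gives you for free) does not:

```cpp
#include <cassert>
#include <cmath>

// Naive per-component interpolation of an angle in degrees.
inline double lerpAngleNaive(double a, double b, double t)
{
    return a + (b - a) * t;
}

// Interpolation along the shortest signed arc, result wrapped to [0, 360).
inline double lerpAngleShortest(double a, double b, double t)
{
    double d = std::fmod(b - a + 540.0, 360.0) - 180.0; // shortest signed delta
    return std::fmod(a + d * t + 360.0, 360.0);
}
```

Going from 170° to -170° is really a 20° move through 180°, but the naive lerp's midpoint is 0°: the joint spins almost all the way around, which looks exactly like a mid-animation flip.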
  4. GreenGodDiary

    FBX animation data flips half-way through

    Ohh, now I understand; I didn't quite connect the dots there. In my code, however, I sample the global pose and the parent's inverse global pose once per frame, by incrementing `currentTime` by the duration of one frame (1 / 24), like so:

        float frameTime = 1.0 / 24.0;
        int keyCount = anim_curve_rotY->KeyGetCount(); // ghetto approach but works
        float endTime = frameTime * keyCount;
        float currentTime = 0.0;
        while (currentTime < endTime)
        {
            FbxTime takeTime;
            takeTime.SetSecondDouble(currentTime);
            // #calculateLocalTransform
            FbxAMatrix ... = GetAbsoluteTransformFromCurrentTake(skeleton->GetNode(), takeTime);
            FbxAMatrix ...
            ...
            ...
            currentTime += frameTime;
        }

    Shouldn't this avoid such a problem? Also, on export from Maya I set the option to bake every frame, so again I'm not quite sure how the long-distance interpolation issue can apply in my case. Would love it if you could shed some light on that! Again, thank you for your assistance! Edit: added code for clarity
  5. GreenGodDiary

    FBX animation data flips half-way through

    Thank you for your response. I have read that post before, and while I'm sure your approach is far better than mine, it's not really what I'm looking for at this time. We do not require a full-featured animation system; we just need to be able to play and switch animation clips on certain models. I'm also not sure how what you said in that post would help fix our current issue, since you don't really go into how you get the data from FBX (other than mentioning GetNodeGlobalTransform). I know our approach is not optimal, but I also know it should work (partly because our teacher used the same method), and getting it to work is our priority right now. If you tell me that our approach simply won't work for whatever reason, that's fine, but I'd need to know why that is. Best, E. Finoli
  6. Hello guys! So, I'm currently working on our senior game-dev project, and I'm tasked with implementing animations in DirectX. It's been a few weeks of debugging and I've gotten pretty far; only a few quirks are left to fix, but I can't figure this one out. What happens is that when a rotation becomes too big on a particular joint, it completely flips around. This seems to be an issue in the FBX data extraction, and I've isolated it to the key animation data.

    First off, here's what the animation looks like with a small rotation:

    Small rotation in Maya
    Small rotation in Engine

    Looks as expected! (Other than the flipped direction, which I'm not too concerned about at this point; however, if you think this is part of the issue, please let me know!) Now, here's an animation with a big rotation (+360 around Y, then back to 0):

    Big rotation in Maya
    Big rotation in Engine

    As you can see, the animation completely flips here and there. Here's how the local animation data for each joint is retrieved:

        while (currentTime < endTime)
        {
            FbxTime takeTime;
            takeTime.SetSecondDouble(currentTime);
            // #calculateLocalTransform
            FbxAMatrix matAbsoluteTransform = GetAbsoluteTransformFromCurrentTake(skeleton->GetNode(), takeTime);
            FbxAMatrix matParentAbsoluteTransform = GetAbsoluteTransformFromCurrentTake(skeleton->GetNode()->GetParent(), takeTime);
            FbxAMatrix matInvParentAbsoluteTransform = matParentAbsoluteTransform.Inverse();
            FbxAMatrix matTransform = matInvParentAbsoluteTransform * matAbsoluteTransform;
            // do stuff with matTransform
        }
        // GetAbsoluteTransformFromCurrentTake() returns:
        // pNode->GetScene()->GetAnimationEvaluator()->GetNodeGlobalTransform(pNode, time);

    This seems to work well, but on the keys where the flip happens it returns a matrix where the non-animated rotations (Y and Z in this case) have a value of 180 rather than 0. The Y value also starts "moving" in the opposite direction.

    From the converter we save out the matrix components as T, R, S (R in Euler), and during import in the engine the rotation is converted to a quaternion for interpolation. I'm not sure what else I can share that might help give a clue as to what the issue is, but if you need anything, just let me know! Any help/ideas are very much appreciated! ❤️ E. Finoli
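One pattern worth checking here (a standalone demo I wrote, not the poster's pipeline): Euler decompositions of a rotation matrix are not unique. For Tait-Bryan orders, the triples (x, y, z) and (x + 180, 180 - y, z + 180) produce the same matrix, so an extractor that keeps the middle angle in [-90, 90] will report a pure +200° Y rotation as (180, -20, 180): exactly the "non-animated axes suddenly read 180, and Y moves the other way" symptom above.

```cpp
#include <cassert>
#include <cmath>

// 3x3 rotations composed as R = Rz * Ry * Rx (angles in degrees).
const double kPi = 3.14159265358979323846;

struct Mat3 { double m[3][3]; };

inline Mat3 mul(const Mat3& a, const Mat3& b)
{
    Mat3 r = {};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                r.m[i][j] += a.m[i][k] * b.m[k][j];
    return r;
}

inline Mat3 rotX(double deg)
{
    double c = std::cos(deg * kPi / 180.0), s = std::sin(deg * kPi / 180.0);
    return { { {1, 0, 0}, {0, c, -s}, {0, s, c} } };
}
inline Mat3 rotY(double deg)
{
    double c = std::cos(deg * kPi / 180.0), s = std::sin(deg * kPi / 180.0);
    return { { {c, 0, s}, {0, 1, 0}, {-s, 0, c} } };
}
inline Mat3 rotZ(double deg)
{
    double c = std::cos(deg * kPi / 180.0), s = std::sin(deg * kPi / 180.0);
    return { { {c, -s, 0}, {s, c, 0}, {0, 0, 1} } };
}

inline Mat3 fromEulerXYZ(double x, double y, double z)
{
    return mul(rotZ(z), mul(rotY(y), rotX(x)));
}

inline bool nearlyEqual(const Mat3& a, const Mat3& b, double eps = 1e-9)
{
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            if (std::fabs(a.m[i][j] - b.m[i][j]) > eps) return false;
    return true;
}
```

Both decompositions are "correct" as matrices, but interpolating between keys that flip-flop between the two families produces exactly this kind of mid-animation flip; evaluating quaternions directly (as the later reply in this thread reports) sidesteps the ambiguity.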
  7. GreenGodDiary

    having problems debugging SSAO

    Thanks a lot for these pointers, I will definitely look into it further using your advice. One question though: are you sure this is the case? Because my kernels are in the range ([-1, 1], [-1, 1], [0, 1]), won't it exclusively sample from the "upper" hemisphere? Or am I thinking about it wrong? Thanks again
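For reference, here is roughly how the kernel generation from the article this thread is based on looks when ported to C++ (my sketch, with assumed names): keeping z in [0, 1] is what restricts samples to the tangent-space upper hemisphere, and the quadratic scale clusters samples near the shaded point.

```cpp
#include <cassert>
#include <cmath>
#include <cstdlib>
#include <vector>

struct Vec3 { double x, y, z; };

// Tangent-space sample kernel: x, y in [-1, 1], z in [0, 1], so every sample
// lies in the hemisphere around the +z (normal) axis. Each sample is scaled
// so that more of them fall close to the origin.
inline std::vector<Vec3> makeHemisphereKernel(std::size_t count)
{
    std::vector<Vec3> kernel;
    kernel.reserve(count);
    for (std::size_t i = 0; i < count; ++i)
    {
        double x, y, z, len;
        do {
            x = 2.0 * std::rand() / RAND_MAX - 1.0;
            y = 2.0 * std::rand() / RAND_MAX - 1.0;
            z = 1.0 * std::rand() / RAND_MAX;   // z >= 0: upper hemisphere only
            len = std::sqrt(x * x + y * y + z * z);
        } while (len < 1e-6);                   // reject degenerate vectors
        double scale = double(i) / double(count);
        scale = 0.1 + 0.9 * scale * scale;      // bias samples toward the origin
        kernel.push_back({ x / len * scale, y / len * scale, z / len * scale });
    }
    return kernel;
}
```

So yes: with z clamped to [0, 1] the samples can never leave the tangent-space upper hemisphere; the TBN matrix is what rotates that hemisphere to sit on the surface normal.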
  8. GreenGodDiary

    having problems debugging SSAO

    Bump. (sorry)
  9. GreenGodDiary

    [MAYA API] Getting all triangles from a polygon

    While I suppose that would work, I would like to understand how to get it the 'normal' way, i.e. get each triangle through getTriangle(). EDIT: Doing what @WiredCat said, I got it to work, so if no one knows why my original solution didn't work, it's not the end of the world. New code (note: the third normal originally reused index 1 + i by mistake; per the comment it should be 2 + i):

        for (int i = 0; i < numberOfTriangles; i++)
        {
            // vN = 0, 1+i, 2+i
            // Get positions
            Vertex_pos3nor3uv2 v1 = {}, v2 = {}, v3 = {};
            MPoint p1 = vts[polyIter.vertexIndex(0)];
            MPoint p2 = vts[polyIter.vertexIndex(1 + i)];
            MPoint p3 = vts[polyIter.vertexIndex(2 + i)];
            // Get normals
            MVector n1 = nmls[polyIter.normalIndex(0)];
            MVector n2 = nmls[polyIter.normalIndex(1 + i)];
            MVector n3 = nmls[polyIter.normalIndex(2 + i)];
            // First vertex
            v1.posX = p1.x; v1.posY = p1.y; v1.posZ = p1.z;
            v1.norX = n1.x; v1.norY = n1.y; v1.norZ = n1.z;
            // Second vertex
            v2.posX = p2.x; v2.posY = p2.y; v2.posZ = p2.z;
            v2.norX = n2.x; v2.norY = n2.y; v2.norZ = n2.z;
            // Third vertex
            v3.posX = p3.x; v3.posY = p3.y; v3.posZ = p3.z;
            v3.norX = n3.x; v3.norY = n3.y; v3.norZ = n3.z;
            // Reverse winding order for DirectX
            verts.push_back(v3);
            verts.push_back(v2);
            verts.push_back(v1);
        }
  10. Just realized maybe this doesn't fit in this sub-forum, but oh well. I'm making an exporter plugin for Maya, and I want to export a non-triangulated mesh while still outputting triangle data, not quad/n-gon data. Using MItMeshPolygon, I am doing the following:

        for (; !polyIter.isDone(); polyIter.next())
        {
            // Get points and normals from current polygon
            MPointArray vts;
            polyIter.getPoints(vts);
            MVectorArray nmls;
            polyIter.getNormals(nmls);

            // Get number of triangles in current polygon
            int numberOfTriangles;
            polyIter.numTriangles(numberOfTriangles);

            // Loop through all triangles
            for (int i = 0; i < numberOfTriangles; i++)
            {
                // Get points and vertexList for this triangle.
                // vertexList is used to index into the polygon verts and normals.
                MPointArray points = {};
                MIntArray vertexList = {};
                polyIter.getTriangle(i, points, vertexList, MSpace::kObject);

                // For each vertex in this triangle
                for (int v = 0; v < 3; v++)
                {
                    // Get point and normal
                    UINT vi = polyIter.vertexIndex(vertexList[v]);
                    UINT ni = polyIter.normalIndex(vertexList[v]);
                    MPoint _v = vts[vi];
                    MFloatVector _n = nmls[ni];

                    // Create vertex
                    Vertex_pos3nor3uv2 vert = {};
                    vert.posX = _v.x;
                    vert.posY = _v.y;
                    vert.posZ = _v.z * -1.0;
                    vert.norX = _n.x;
                    vert.norY = _n.y;
                    vert.norZ = _n.z * -1.0;
                    vert.u = 0.0;
                    vert.v = 0.0;
                    verts.push_back(vert);
                }
            }
        }

    Doing this only gives me half the triangles I'm supposed to get, and the result is very distorted. Link above is a picture of a cube exported this way. Edit: I've also tried indexing into the entire mesh vertex array like this:

        MPointArray vts;
        meshFn.getPoints(vts);
        MFloatVectorArray nmls;
        meshFn.getNormals(nmls);
        //....
        UINT vi = polyIter.vertexIndex(vertexList[v]);
        UINT ni = polyIter.normalIndex(vertexList[v]);
        MPoint _v = vts[vi];
        MFloatVector _n = nmls[vi];

    I can't figure out what's wrong with my code. Any ideas?
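A possible culprit, hedged because I can only go by the Maya API documentation: MItMeshPolygon::getTriangle() fills vertexList with object-relative vertex indices, while getPoints()/getNormals() on the iterator return polygon-local arrays. Feeding those object-relative ids into polyIter.vertexIndex(), which expects a polygon-local index, would scramble the lookups. A plain C++ sketch of the remap that would be needed (helper and parameter names are mine):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// polyObjIds: the polygon's object-relative vertex ids, i.e. what
//             polyIter.vertexIndex(i) returns for each local index i.
// triObjIds:  one triangle's object-relative ids as filled by getTriangle().
// Returns the polygon-local indices usable with the local points/normals arrays.
inline std::vector<int> objectToLocalIndices(const std::vector<int>& polyObjIds,
                                             const std::vector<int>& triObjIds)
{
    std::vector<int> local;
    local.reserve(triObjIds.size());
    for (int objId : triObjIds)
        for (std::size_t i = 0; i < polyObjIds.size(); ++i)
            if (polyObjIds[i] == objId) { local.push_back(int(i)); break; }
    return local;
}
```

For a quad whose object-relative vertices are {10, 11, 12, 13}, a triangle reported as {10, 12, 13} maps back to local indices {0, 2, 3}, which are then safe to use with the polygon-local vts/nmls arrays.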
  11. GreenGodDiary

    having problems debugging SSAO

    Bumping with new information. I'm getting quite desperate; if someone could help me out I would be forever grateful <3

    I have revamped the way I construct the view-space position. Instead of directly binding my depth-stencil as a shader resource (which, thinking back, made no sense to do), I now output 'positionVS.z / FarClipDistance' to a texture in the G-buffer pass and use that, and I remake my view rays in the following way (1000.0f is FarClipDistance):

        // create corner view rays
        float thfov = tan(fov / 2.0);
        float verts[24]
        {
            -1.0f,  1.0f, 0.0f,                             // Pos TopLeft corner
            -1.0f * thfov * aspect,  1.0f * thfov, 1000.0f, // Ray
             1.0f,  1.0f, 0.0f,                             // Pos TopRight corner
             1.0f * thfov * aspect,  1.0f * thfov, 1000.0f, // Ray
            -1.0f, -1.0f, 0.0f,                             // Pos BottomLeft corner
            -1.0f * thfov * aspect, -1.0f * thfov, 1000.0f, // Ray
             1.0f, -1.0f, 0.0f,                             // Pos BottomRight corner
             1.0f * thfov * aspect, -1.0f * thfov, 1000.0f, // Ray
        };

    In my SSAO PS, I reconstruct the view-space position like this:

        float3 origin = input.ray.xyz * (depthTexture.Load(texCoord).r);
        origin.x *= 1000;
        origin.y *= 1000;

    Why do I multiply by 1000? Because it works. Why does it work? Don't know. But this gives me the same value that I had in the G-pass vertex shader. If someone knows why this works (or why it shouldn't), do tell me. Anyway, next I get the world-space normal from the G-buffer and multiply by my view matrix to get the view-space normal:

        float3 normal = normalTexture.Load(texCoord).xyz;
        normal = mul(view, normal);
        normal = normalize(normal);

    I now have a random-vector texture that I sample. Next I construct the TBN matrix using this vector and the view-space normal:

        float3 rvec = randomTexture.Sample(randomSampler, input.pos.xy).xyz;
        rvec.z = 0.0;
        rvec = normalize(rvec);
        float3 tangent = normalize(rvec - normal * dot(rvec, normal));
        float3 bitangent = normalize(cross(normal, tangent));
        float3x3 tbn = float3x3(tangent, bitangent, normal);

    This is where I'm not sure if I'm doing it right. I am doing it exactly like the article in the original post; however, since he is using OpenGL, maybe something is different here? The reason this part looks suspicious to me is that when I later use it, I get values that don't make sense to me:

        float3 samp = mul(tbn, kernel[i]);
        samp = samp + origin;

    samp here is what looks odd to me. If the values are indeed wrong, I must be constructing my TBN matrix wrong somehow. Next up, projecting samp in order to get the offset in NDC, so that I can then sample the depth at samp:

        float4 offset = float4(samp, 1.0);
        offset = mul(offset, projection);
        offset.xy /= offset.w;
        offset.xy = offset.xy * 0.5 + 0.5;
        // get sample depth:
        float sampleDepth = depthTexture.Sample(defaultSampler, offset.xy).r;
        occlusion += (sampleDepth <= samp.z ? 1.0 : 0.0);

    The result is still nowhere near what you'd expect. It looks slightly better than the video linked in the original post, but it's still the same story: huge odd artifacts that change heavily based on the camera's orientation. What am I doing wrong? Help, I'm dying
  12. Please look at my new post in this thread, where I supply new information! I'm trying to implement SSAO in my 'engine' (based on this article) but I'm getting odd results. I know I'm doing something wrong, but I can't figure out what's causing the particular issue I'm having at the moment. Here's a video of what it looks like. The rendered output is the SSAO map. As you can see, the result is heavily altered depending on the camera (although it seems to be unaffected by camera translation). The fact that the occlusion itself isn't correct isn't much of a problem at this stage, since I've hardcoded a lot of stuff that shouldn't be. E.g. I don't have a random-vector texture; all I do is use one of the sample vectors to construct the TBN matrix. One issue at a time... My shaders are as follows:

        // SSAO VS
        struct VS_IN
        {
            float3 pos : POSITION;
            float3 ray : VIEWRAY;
        };
        struct VS_OUT
        {
            float4 pos : SV_POSITION;
            float4 ray : VIEWRAY;
        };

        VS_OUT VS_main(VS_IN input)
        {
            VS_OUT output;
            output.pos = float4(input.pos, 1.0f); // already in NDC space, pass through
            output.ray = float4(input.ray, 0.0f); // interpolate view ray
            return output;
        }

        // SSAO PS
        Texture2D depthTexture : register(t0);
        Texture2D normalTexture : register(t1);

        struct VS_OUT
        {
            float4 pos : SV_POSITION;
            float4 ray : VIEWRAY;
        };

        cbuffer cbViewProj : register(b0)
        {
            float4x4 view;
            float4x4 projection;
        }

        float4 PS_main(VS_OUT input) : SV_TARGET
        {
            // Generate samples
            float3 kernel[8];
            kernel[0] = float3( 1.0f,  1.0f, 1.0f);
            kernel[1] = float3(-1.0f, -1.0f, 0.0f);
            kernel[2] = float3(-1.0f,  1.0f, 1.0f);
            kernel[3] = float3( 1.0f, -1.0f, 0.0f);
            kernel[4] = float3( 1.0f,  1.0f, 0.0f);
            kernel[5] = float3(-1.0f, -1.0f, 1.0f);
            kernel[6] = float3(-1.0f,  1.0f, 0.0f);
            kernel[7] = float3( 1.0f, -1.0f, 1.0f);

            // Get texcoord using SV_POSITION
            int3 texCoord = int3(input.pos.xy, 0);

            // Fragment view-space position (non-linear depth)
            float3 origin = input.ray.xyz * (depthTexture.Load(texCoord).r);

            // World-space normal transformed to view space and normalized
            float3 normal = normalize(mul(view, float4(normalTexture.Load(texCoord).xyz, 0.0f)));

            // Grab arbitrary vector for construction of TBN matrix
            float3 rvec = kernel[3];
            float3 tangent = normalize(rvec - normal * dot(rvec, normal));
            float3 bitangent = cross(normal, tangent);
            float3x3 tbn = float3x3(tangent, bitangent, normal);

            float occlusion = 0.0;
            for (int i = 0; i < 8; ++i)
            {
                // get sample position:
                float3 samp = mul(tbn, kernel[i]);
                samp = samp * 1.0f + origin;

                // project sample position:
                float4 offset = float4(samp, 1.0);
                offset = mul(projection, offset);
                offset.xy /= offset.w;
                offset.xy = offset.xy * 0.5 + 0.5;

                // get sample depth (again, non-linear depth)
                float sampleDepth = depthTexture.Load(int3(offset.xy, 0)).r;

                // range check & accumulate:
                occlusion += (sampleDepth <= samp.z ? 1.0 : 0.0);
            }

            // Average occlusion
            occlusion /= 8.0;
            return min(occlusion, 1.0f);
        }

    I'm fairly sure my matrices are correct (view and projection) and that the input rays are correct. I don't think the non-linear depth is the problem here either, but what do I know; I haven't fixed the linear depth, mostly because I don't really understand how it's done... Any ideas are very appreciated!
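One concrete mismatch worth ruling out (my reading, not a confirmed diagnosis): the depth texture here stores a non-linear [0, 1] depth, but samp.z is a view-space distance, so the comparison mixes two spaces. For a standard D3D-style perspective projection, a stored depth d can be brought back to view-space Z like this (near/far values below are placeholders you would substitute from your own projection):

```cpp
#include <cassert>
#include <cmath>

// Convert a hardware [0, 1] depth value from a standard D3D perspective
// projection (depth 0 at the near plane, 1 at the far plane) back to
// view-space Z, so it can be compared against view-space sample positions.
inline double linearizeDepth(double d, double nearZ, double farZ)
{
    return (farZ * nearZ) / (farZ - d * (farZ - nearZ));
}
```

d = 0 maps back to the near plane and d = 1 to the far plane; everything in between is non-linear, which is one way raw depth-vs-view-space-z comparisons can produce artifacts that shift with camera orientation.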
  13. GreenGodDiary

    How do I output zero verts from geometry shader?

    Don't know if you missed it, but I solved it by replacing the (0,0,0) vector in the dot() evaluation with an actual vector (0,0,0 - vertexpos). I assume the reason it gave me the error is that it detected that all verts would fail that test. Also, I don't see any problems with the cross; both arguments are float4, and I explicitly take the xyz from the result.
  14. Solved: I wasn't thinking clearly and realized I can't just compare the cross product with (0,0,0). Fixed by doing this:

        float3 originVector = float3(0.0, 0.0, 0.0) - v1.xyz;
        if (dot(cross(e1, e2).xyz, originVector) > 0.0)
        {
            //...
        }

    I'm trying to write a geometry shader that does backface culling. (Don't ask me why.) What I'm doing is taking the cross product of two edges of the triangle (in NDC space) and checking whether it faces (0,0,0). The problem is that when I compile I get an error; I guess this is because, if the triangle isn't facing us, I don't append any verts to the stream. I always assumed maxvertexcount implied I can emit as few verts as I like, but I suppose not. How do I get around this? Shader below:

        struct GS_IN_OUT
        {
            float4 Pos  : SV_POSITION;
            float4 PosW : POSITION;
            float4 NorW : NORMAL;
            float2 UV   : TEXCOORD;
        };

        [maxvertexcount(3)]
        void GS_main(triangle GS_IN_OUT input[3], inout TriangleStream<GS_IN_OUT> output)
        {
            // Check for backface
            float4 v1, v2, v3;
            v1 = input[0].Pos;
            v2 = input[1].Pos;
            v3 = input[2].Pos;
            float4 e1, e2;
            e1 = v1 - v2;
            e2 = v1 - v3;
            if (dot(cross(e1, e2).xyz, float3(0.0, 0.0, 0.0)) > 0.0)
            {
                // face is facing us, let triangle through
                for (uint i = 0; i < 3; i++)
                {
                    GS_IN_OUT element;
                    element = input[i];
                    output.Append(element);
                }
            }
        }
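Since the triangle is already in NDC after the perspective divide, the facing test can also be reduced to the sign of the 2D cross product of the projected edges, i.e. the screen-space winding, which avoids the degenerate dot-against-(0,0,0) entirely. A small CPU-side sketch of that test (the counter-clockwise-is-front convention is an assumption; flip the sign for the opposite winding):

```cpp
#include <cassert>

// Signed twice-area of a triangle given its NDC x/y coordinates.
// Positive means counter-clockwise winding (front-facing under the assumed
// convention); negative means clockwise (back-facing).
inline double signedArea2D(double ax, double ay,
                           double bx, double by,
                           double cx, double cy)
{
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
}
```

In the geometry shader you would divide each SV_POSITION by its w first, then only Append() the triangle when the sign says front-facing.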
  15. Solved it by doing the following:

        MItDag it(MItDag::kDepthFirst, MFn::kTransform);
        it.reset(node, MItDag::kDepthFirst, MFn::kTransform);

    Though I am still curious as to why the original approach didn't work..
