About GreenGodDiary

  1. GreenGodDiary

    having problems debugging SSAO

    Thanks a lot for these pointers, I will definitely look into it further using your advice. One question though: are you sure this is the case? Because my kernels are in the range ([-1, 1], [-1, 1], [0, 1]), won't it exclusively sample from the "upper" hemisphere? Or am I thinking about it wrong? Thanks again
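To the hemisphere question above: samples drawn from ([-1, 1], [-1, 1], [0, 1]) do stay in the upper half-space (z >= 0), but they fill a box rather than the unit hemisphere, so the usual SSAO recipe normalizes each sample and then rescales it so samples cluster near the shaded point. A minimal sketch of that idea (illustrative names, not the thread author's code; the 0.1..1.0 scale curve is a common convention, not something from this thread):

```cpp
#include <cassert>
#include <cmath>
#include <cstdlib>
#include <vector>

struct Vec3 { float x, y, z; };

// Sketch: generate `count` sample vectors inside the unit hemisphere
// around +Z. Raw samples in [-1,1] x [-1,1] x [0,1] stay in the upper
// half-space but fill a box; normalizing pulls them onto the hemisphere,
// and the accelerating scale clusters more samples near the origin.
std::vector<Vec3> makeHemisphereKernel(int count)
{
    std::vector<Vec3> kernel;
    for (int i = 0; i < count; ++i)
    {
        float x = 2.0f * (std::rand() / (float)RAND_MAX) - 1.0f; // [-1, 1]
        float y = 2.0f * (std::rand() / (float)RAND_MAX) - 1.0f; // [-1, 1]
        float z = std::rand() / (float)RAND_MAX;                 // [ 0, 1]

        float len = std::sqrt(x * x + y * y + z * z);
        if (len < 1e-6f) { --i; continue; } // reject a degenerate sample
        x /= len; y /= len; z /= len;       // now on the unit hemisphere

        // Scale so samples cluster toward the shaded point.
        float t = i / (float)count;
        float scale = 0.1f + 0.9f * t * t;  // lerp(0.1, 1.0, t*t)
        kernel.push_back({ x * scale, y * scale, z * scale });
    }
    return kernel;
}
```

Every resulting sample has z >= 0 and length <= 1, so when rotated into the surface's TBN frame it never dips below the surface.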
  2. GreenGodDiary

    having problems debugging SSAO

    Bump. (sorry)
  3. GreenGodDiary

    [MAYA API] Getting all triangles from a polygon

    While I suppose that would work, I would like to understand how to get it the 'normal' way, i.e. get each triangle through getTriangle().

    EDIT: Doing what @WiredCat said I got it to work, so if no one knows why my original solution didn't work it's not the end of the world. New code:

```cpp
for (int i = 0; i < numberOfTriangles; i++)
{
    //vN = 0, 1+i, 2+i
    //Get positions
    Vertex_pos3nor3uv2 v1, v2, v3 = {};
    MPoint p1 = vts[polyIter.vertexIndex(0)];
    MPoint p2 = vts[polyIter.vertexIndex(1 + i)];
    MPoint p3 = vts[polyIter.vertexIndex(2 + i)];

    //Get normals
    MVector n1 = nmls[polyIter.normalIndex(0)];
    MVector n2 = nmls[polyIter.normalIndex(1 + i)];
    MVector n3 = nmls[polyIter.normalIndex(2 + i)]; //was 1 + i: copy-paste slip

    //First vertex
    v1.posX = p1.x; v1.posY = p1.y; v1.posZ = p1.z;
    v1.norX = n1.x; v1.norY = n1.y; v1.norZ = n1.z;
    //Second vertex
    v2.posX = p2.x; v2.posY = p2.y; v2.posZ = p2.z;
    v2.norX = n2.x; v2.norY = n2.y; v2.norZ = n2.z;
    //Third vertex
    v3.posX = p3.x; v3.posY = p3.y; v3.posZ = p3.z;
    v3.norX = n3.x; v3.norY = n3.y; v3.norZ = n3.z;

    //Changing order of verts for DirectX
    verts.push_back(v3);
    verts.push_back(v2);
    verts.push_back(v1);
}
```
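Worth spelling out what the (0, 1+i, 2+i) pattern in the loop above actually is: a fan triangulation, which is only guaranteed correct for convex polygons. A Maya-free sketch of the index generation (hypothetical helper, not from the thread):

```cpp
#include <cassert>
#include <vector>

// Sketch: fan triangulation of a convex n-gon, matching the face-relative
// index pattern used above (0, 1+i, 2+i). An n-gon yields n-2 triangles,
// all pivoting on vertex 0.
std::vector<int> fanTriangulate(int vertexCount)
{
    std::vector<int> indices;
    for (int i = 0; i < vertexCount - 2; ++i)
    {
        indices.push_back(0);     // fan pivot
        indices.push_back(i + 1);
        indices.push_back(i + 2);
    }
    return indices;
}
```

A quad gives the two triangles {0,1,2} and {0,2,3}. This matches Maya's own triangulation only for convex faces, which is why going through getTriangle() is the more general route the post is after.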
  4. Just realized maybe this doesn't fit in this sub-forum, but oh well. I'm making an exporter plugin for Maya and I want to export a non-triangulated mesh while still outputting triangle data, not quad/n-gon data. Using MItMeshPolygon, I am doing the following:

```cpp
for (; !polyIter.isDone(); polyIter.next())
{
    //Get points and normals from current polygon
    MPointArray vts;
    polyIter.getPoints(vts);
    MVectorArray nmls;
    polyIter.getNormals(nmls);

    //Get number of triangles in current polygon
    int numberOfTriangles;
    polyIter.numTriangles(numberOfTriangles);

    //Loop through all triangles
    for (int i = 0; i < numberOfTriangles; i++)
    {
        //Get points and vertexList for this triangle.
        //vertexList is used to index into the polygon verts and normals.
        MPointArray points = {};
        MIntArray vertexList = {};
        polyIter.getTriangle(i, points, vertexList, MSpace::kObject);

        //For each vertex in this triangle
        for (int v = 0; v < 3; v++)
        {
            //Get point and normal
            UINT vi = polyIter.vertexIndex(vertexList[v]);
            UINT ni = polyIter.normalIndex(vertexList[v]);
            MPoint _v = vts[vi];
            MFloatVector _n = nmls[ni];

            //Create vertex
            Vertex_pos3nor3uv2 vert = {};
            vert.posX = _v.x;
            vert.posY = _v.y;
            vert.posZ = _v.z * -1.0;
            vert.norX = _n.x;
            vert.norY = _n.y;
            vert.norZ = _n.z * -1.0;
            vert.u = 0.0;
            vert.v = 0.0;
            verts.push_back(vert);
        }
    }
}
```

Doing this only gives me half the triangles I'm supposed to get, and the result is very distorted. The link above is a picture of a cube exported this way.

Edit: I've also tried indexing into the entire mesh vertex array like this:

```cpp
MPointArray vts;
meshFn.getPoints(vts);
MFloatVectorArray nmls;
meshFn.getNormals(nmls);
//....
UINT vi = polyIter.vertexIndex(vertexList[v]);
UINT ni = polyIter.normalIndex(vertexList[v]);
MPoint _v = vts[vi];
MFloatVector _n = nmls[vi];
```

I can't figure out what's wrong with my code. Any ideas?
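One likely culprit in the snippet above, offered as an assumption rather than a tested diagnosis: per the Maya API docs, MItMeshPolygon::getTriangle() fills vertexList with object-relative (mesh-wide) vertex indices, while vertexIndex() and normalIndex() expect a face-relative index, so feeding vertexList[v] straight into them mixes the two index spaces. A Maya-free sketch of the remapping (the function and names are hypothetical):

```cpp
#include <cassert>
#include <vector>

// Sketch (no Maya SDK): convert an object-relative vertex index, as
// returned by MItMeshPolygon::getTriangle(), to a face-relative index by
// searching the polygon's own vertex list (what getVertices() would give).
// Returns -1 if the vertex does not belong to the face.
int objectToFaceRelative(const std::vector<int>& faceVertexIds, int objectRelative)
{
    for (int f = 0; f < (int)faceVertexIds.size(); ++f)
        if (faceVertexIds[f] == objectRelative)
            return f;
    return -1;
}
```

With object-relative indices in hand you can also index the whole-mesh arrays from MFnMesh::getPoints() directly, skipping vertexIndex() entirely.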
  5. GreenGodDiary

    having problems debugging SSAO

    Bumping with new information. I'm getting quite desperate; if someone could help me out I would be forever grateful <3

    I have revamped the way I construct the view-space position. Instead of directly binding my depth-stencil as a shader resource (which, thinking back, made no sense to do), the G-buffer pass now outputs 'positionVS.z / FarClipDistance' to a texture and uses that, and I remake my view rays in the following way (1000.0f is FarClipDistance):

```cpp
//create corner view rays
float thfov = tan(fov / 2.0);
float verts[24]
{
    -1.0f,  1.0f, 0.0f,                             //Pos TopLeft corner
    -1.0f * thfov * aspect,  1.0f * thfov, 1000.0f, //Ray
     1.0f,  1.0f, 0.0f,                             //Pos TopRight corner
     1.0f * thfov * aspect,  1.0f * thfov, 1000.0f, //Ray
    -1.0f, -1.0f, 0.0f,                             //Pos BottomLeft corner
    -1.0f * thfov * aspect, -1.0f * thfov, 1000.0f, //Ray
     1.0f, -1.0f, 0.0f,                             //Pos BottomRight corner
     1.0f * thfov * aspect, -1.0f * thfov, 1000.0f, //Ray
};
```

    In my SSAO PS, I reconstruct the view-space position like this:

```hlsl
float3 origin = input.ray.xyz * (depthTexture.Load(texCoord).r);
origin.x *= 1000;
origin.y *= 1000;
```

    Why do I multiply by 1000? Because it works. Why does it work? I don't know, but this gives me the same value that I had in the G-pass vertex shader. If someone knows why this works (or why it shouldn't), do tell me.

    Anyway, next I get the world-space normal from the G-buffer and multiply by my view matrix to get the view-space normal:

```hlsl
float3 normal = normalTexture.Load(texCoord).xyz;
normal = mul(view, normal);
normal = normalize(normal);
```

    I now have a random-vector texture that I sample. Next I construct the TBN matrix using this vector and the view-space normal:

```hlsl
float3 rvec = randomTexture.Sample(randomSampler, input.pos.xy).xyz;
rvec.z = 0.0;
rvec = normalize(rvec);
float3 tangent = normalize(rvec - normal * dot(rvec, normal));
float3 bitangent = normalize(cross(normal, tangent));
float3x3 tbn = float3x3(tangent, bitangent, normal);
```

    This is where I'm not sure if I'm doing it right. I am doing it exactly like the article in the original post, but since he is using OpenGL, maybe something is different here? The reason this part looks suspicious to me is that when I later use it, I get values that don't make sense to me:

```hlsl
float3 samp = mul(tbn, kernel[i]);
samp = samp + origin;
```

    samp here is what looks odd to me. If the values are indeed wrong, I must be constructing my TBN matrix wrong somehow. Next up, projecting samp in order to get the offset in NDC so that I can then sample the depth of samp:

```hlsl
float4 offset = float4(samp, 1.0);
offset = mul(offset, projection);
offset.xy /= offset.w;
offset.xy = offset.xy * 0.5 + 0.5;

// get sample depth:
float sampleDepth = depthTexture.Sample(defaultSampler, offset.xy).r;
occlusion += (sampleDepth <= samp.z ? 1.0 : 0.0);
```

    The result is still nowhere near what you'd expect. It looks slightly better than the video linked in the original post, but it's the same story: huge odd artifacts that change heavily based on the camera's orientation. What am I doing wrong? Help, I'm dying here.
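The "why does multiplying by 1000 work" question above has a concrete answer that follows from the ray setup itself. The corner rays store far-plane positions (z = FarClipDistance) and the stored depth is viewZ / far, so ray * depth reconstructs z exactly but leaves x and y divided by far; scaling x and y back up by far (1000) undoes that. A numeric sketch under those assumptions (symmetric frustum, thfov = tan(fov/2); function name is illustrative):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Sketch of the arithmetic behind the "multiply by 1000" observation.
// The corner rays store far-plane positions:
//   ray = (ndcX * thfov * aspect, ndcY * thfov, far)
// and the G-buffer stores depth = viewZ / far. Then:
//   ray.x * depth = (viewX / viewZ) * (viewZ / far) = viewX / far
//   ray.z * depth = far * (viewZ / far)             = viewZ
// so z reconstructs exactly while x and y come out divided by far,
// which is exactly what the *= 1000 in the shader compensates for.
Vec3 reconstructViewPos(float ndcX, float ndcY, float storedDepth,
                        float thfov, float aspect, float farClip)
{
    Vec3 ray = { ndcX * thfov * aspect, ndcY * thfov, farClip };
    return { ray.x * storedDepth * farClip,   // undo the 1/far factor
             ray.y * storedDepth * farClip,
             ray.z * storedDepth };
}
```

The cleaner fix, rather than the post-hoc scale, is to store the rays at unit depth (divide the whole ray by far, i.e. z = 1) so that ray * linearDepth * far, or equivalently ray * viewZ, gives the position directly.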
  6. Please look at my new post in this thread where I supply new information!

    I'm trying to implement SSAO in my 'engine' (based on this article) but I'm getting odd results. I know I'm doing something wrong but I can't figure out what's causing the particular issue I'm having at the moment. Here's a video of what it looks like. The rendered output is the SSAO map. As you can see, the result is heavily altered depending on the camera (although it seems to be unaffected by camera translation). The fact that the occlusion itself isn't correct isn't much of a problem at this stage, since I've hardcoded a lot of stuff that shouldn't be. E.g. I don't have a random-vector texture; all I do is use one of the sample vectors in order to construct the TBN matrix. One issue at a time... My shaders are as follows:

```hlsl
//SSAO VS
struct VS_IN
{
    float3 pos : POSITION;
    float3 ray : VIEWRAY;
};
struct VS_OUT
{
    float4 pos : SV_POSITION;
    float4 ray : VIEWRAY;
};

VS_OUT VS_main( VS_IN input )
{
    VS_OUT output;
    output.pos = float4(input.pos, 1.0f); //already in NDC space, pass through
    output.ray = float4(input.ray, 0.0f); //interpolate view ray
    return output;
}
```

```hlsl
//SSAO PS
Texture2D depthTexture : register(t0);
Texture2D normalTexture : register(t1);

struct VS_OUT
{
    float4 pos : SV_POSITION;
    float4 ray : VIEWRAY;
};

cbuffer cbViewProj : register(b0)
{
    float4x4 view;
    float4x4 projection;
}

float4 PS_main(VS_OUT input) : SV_TARGET
{
    //Generate samples
    float3 kernel[8];
    kernel[0] = float3( 1.0f,  1.0f, 1.0f);
    kernel[1] = float3(-1.0f, -1.0f, 0.0f);
    kernel[2] = float3(-1.0f,  1.0f, 1.0f);
    kernel[3] = float3( 1.0f, -1.0f, 0.0f);
    kernel[4] = float3( 1.0f,  1.0f, 0.0f);
    kernel[5] = float3(-1.0f, -1.0f, 1.0f);
    kernel[6] = float3(-1.0f,  1.0f, 0.0f);
    kernel[7] = float3( 1.0f, -1.0f, 1.0f);

    //Get texcoord using SV_POSITION
    int3 texCoord = int3(input.pos.xy, 0);

    //Fragment viewspace position (non-linear depth)
    float3 origin = input.ray.xyz * (depthTexture.Load(texCoord).r);

    //world space normal transformed to view space and normalized
    float3 normal = normalize(mul(view, float4(normalTexture.Load(texCoord).xyz, 0.0f)));

    //Grab arbitrary vector for construction of TBN matrix
    float3 rvec = kernel[3];
    float3 tangent = normalize(rvec - normal * dot(rvec, normal));
    float3 bitangent = cross(normal, tangent);
    float3x3 tbn = float3x3(tangent, bitangent, normal);

    float occlusion = 0.0;
    for (int i = 0; i < 8; ++i)
    {
        // get sample position:
        float3 samp = mul(tbn, kernel[i]);
        samp = samp * 1.0f + origin;

        // project sample position:
        float4 offset = float4(samp, 1.0);
        offset = mul(projection, offset);
        offset.xy /= offset.w;
        offset.xy = offset.xy * 0.5 + 0.5;

        // get sample depth. (again, non-linear depth)
        float sampleDepth = depthTexture.Load(int3(offset.xy, 0)).r;

        // range check & accumulate:
        occlusion += (sampleDepth <= samp.z ? 1.0 : 0.0);
    }

    //Average occlusion
    occlusion /= 8.0;
    return min(occlusion, 1.0f);
}
```

    I'm fairly sure my matrices are correct (view and projection) and that the input rays are correct. I don't think the non-linear depth is the problem here either, but what do I know; I haven't fixed the linear depth, mostly because I don't really understand how it's done... Any ideas are very appreciated!
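Two coordinate-space details in the pixel shader above are worth flagging as possible contributors (assumptions, not a confirmed diagnosis). First, the article the post follows is OpenGL; in Direct3D the texture v axis points down, so mapping post-divide NDC with `* 0.5 + 0.5` on both axes flips the lookup vertically, and y should be mapped with -0.5. Second, `Texture2D::Load` takes integer texel coordinates, while these offset values are normalized [0,1] UVs, so `Load(int3(offset.xy, 0))` truncates them to texel 0 or 1. A sketch of the correct mapping (hypothetical helper name):

```cpp
#include <cassert>

struct Uv { float u, v; };

// Sketch: map post-perspective-divide NDC x,y to texture coordinates.
// D3D's texture v axis points down, so y needs the sign flip that GL
// does not. The result is a normalized UV for Sample(); Load() would
// instead need integer texel coordinates (uv * textureSize).
Uv ndcToTexcoord(float ndcX, float ndcY)
{
    return { ndcX *  0.5f + 0.5f,
             ndcY * -0.5f + 0.5f };  // flip y for Direct3D
}
```

NDC (-1, 1), the top-left corner, maps to UV (0, 0), which is the top-left texel in D3D texture space.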
  7. GreenGodDiary

    How do I output zero verts from geometry shader?

    Don't know if you missed it, but I solved it by replacing the (0,0,0) vector in the dot() evaluation with an actual vector, (0,0,0) - vertexpos. I assume the reason it gave me the error is that it detected that all verts would fail that test. Also, I don't see any problems with the cross; both arguments are float4, and then I explicitly take the xyz from it.
  8. Solved: I didn't think clearly and realized I can't just compare the cross product with (0,0,0). Fixed by doing this:

```hlsl
float3 originVector = float3(0.0, 0.0, 0.0) - v1.xyz;
if (dot(cross(e1, e2).xyz, originVector) > 0.0)
{
    //...
}
```

    I'm trying to write a geometry shader that does backface culling (don't ask me why). What I'm doing is taking the cross product of two edges of the triangle (in NDC space) and checking if it's facing (0,0,0). The problem is when I compile I get this error: this is, I guess, because if it isn't facing us I don't append any verts to the stream. I always assumed maxvertexcount implied I can emit as few verts as I like, but I suppose not. How do I get around this? Shader below:

```hlsl
struct GS_IN_OUT
{
    float4 Pos  : SV_POSITION;
    float4 PosW : POSITION;
    float4 NorW : NORMAL;
    float2 UV   : TEXCOORD;
};

[maxvertexcount(3)]
void GS_main( triangle GS_IN_OUT input[3], inout TriangleStream< GS_IN_OUT > output )
{
    //Check for backface
    float4 v1, v2, v3;
    v1 = input[0].Pos;
    v2 = input[1].Pos;
    v3 = input[2].Pos;

    float4 e1, e2;
    e1 = v1 - v2;
    e2 = v1 - v3;

    if (dot(cross(e1, e2).xyz, float3(0.0, 0.0, 0.0)) > 0.0)
    {
        //face is facing us, let triangle through
        for (uint i = 0; i < 3; i++)
        {
            GS_IN_OUT element;
            element = input[i];
            output.Append(element);
        }
    }
}
```
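The corrected test above can be sketched in plain C++ to make the geometry explicit: a dot product with the zero vector is always zero, so the original condition could never pass, while dotting the face normal against the vector from a vertex toward the eye (the origin in view/NDC space) gives a real facing test. Names here are illustrative:

```cpp
#include <cassert>

struct V3 { float x, y, z; };

static V3 sub(V3 a, V3 b)    { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static V3 crossV(V3 a, V3 b) { return { a.y * b.z - a.z * b.y,
                                        a.z * b.x - a.x * b.z,
                                        a.x * b.y - a.y * b.x }; }
static float dotV(V3 a, V3 b){ return a.x * b.x + a.y * b.y + a.z * b.z; }

// Sketch of the fixed test: the triangle faces the origin when its normal
// (cross of the same two edges as in the shader) points toward the vector
// from the first vertex back to the origin.
bool facesOrigin(V3 v1, V3 v2, V3 v3)
{
    V3 e1 = sub(v1, v2);
    V3 e2 = sub(v1, v3);
    V3 toOrigin = sub(V3{0.0f, 0.0f, 0.0f}, v1);
    return dotV(crossV(e1, e2), toOrigin) > 0.0f;
}
```

Swapping any two vertices flips the winding and therefore the result, which is exactly the behavior a backface cull relies on.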
  9. Solved it by doing the following:

```cpp
MItDag it(MItDag::kDepthFirst, MFn::kTransform);
it.reset(node, MItDag::kDepthFirst, MFn::kTransform);
```

    Though I am still curious as to why the original approach didn't work...
  10. I have a function that takes in an MObject and is supposed to iterate its children and print any nodes it comes across using MItDependencyGraph:

```cpp
void QueueChildrenTransforms(MObject& node)
{
    MItDependencyGraph it
    (
        node,
        MFn::kInvalid,
        MItDependencyGraph::kDownstream,
        MItDependencyGraph::kBreadthFirst,
        MItDependencyGraph::kNodeLevel
    );

    MString s = "Found: ";
    for (; !it.isDone(); it.next())
    {
        MFnDagNode child(it.currentItem());
        s += child.name();
        s += " ";
    }
    MGlobal::displayInfo(s);
}
```

    When it is called, it only outputs the name of the node passed in to it (if I use kInvalid as the filter, which should iterate all nodes). It doesn't seem to make a difference changing the direction or traversal priority. I've tried different filters as well, such as kTransform and kMesh (these are the ones I want in the end, but using these filters seems to not output even the node passed in). I'm not sure if I'm using the correct type of iterator for this, but it is the only one I've found that lets you define the root node of the search, which I need to do. So what am I missing here? The end goal is to pass in a transform node and iterate through its children in order to queue any transforms found for export.
  11. GreenGodDiary

    Depth issue in geometry shader

    This was the issue. I called OMSetRenderTargets only before I actually created the depth-stencil view. Works as expected now, thank you!
  12. GreenGodDiary

    Depth issue in geometry shader

    I don't think I've done that explicitly, no, but I can't recall having done that for any of the other DX apps I've worked on. Isn't the Z-test done automatically by the rasterizer?
  13. Having some issues with a geometry shader in a very basic DX app. We have an assignment where we are supposed to render a rotating textured quad, and in the geometry shader duplicate this quad and offset it by its normal. Very basic stuff, essentially. My issue is that the duplicated quad, when rendered in front of the original quad, seems to fail the Z test, and thus the original quad is rendered on top of it. What's even weirder is that this only happens for one of the triangles in the duplicated quad, against one of the original quad's triangles. Here's a video to show you what happens: Video (ignore the stretched textures). Here's my GS (the VS is a simple passthrough shader and the PS is just as basic):

```hlsl
struct VS_OUT
{
    float4 Pos : SV_POSITION;
    float2 UV  : TEXCOORD;
};

struct VS_IN
{
    float4 Pos : POSITION;
    float2 UV  : TEXCOORD;
};

cbuffer cbPerObject : register(b0)
{
    float4x4 WVP;
};

[maxvertexcount(6)]
void main( triangle VS_IN input[3], inout TriangleStream< VS_OUT > output )
{
    //Calculate normal
    float4 faceEdgeA = input[1].Pos - input[0].Pos;
    float4 faceEdgeB = input[2].Pos - input[0].Pos;
    float3 faceNormal = normalize(cross(faceEdgeA.xyz, faceEdgeB.xyz));

    //Input triangle, transformed
    for (uint i = 0; i < 3; i++)
    {
        VS_OUT element;
        VS_IN vert = input[i];
        element.Pos = mul(vert.Pos, WVP);
        element.UV = vert.UV;
        output.Append(element);
    }

    output.RestartStrip();

    //Duplicated triangle, offset along the face normal
    for (uint j = 0; j < 3; j++)
    {
        VS_OUT element;
        VS_IN vert = input[j];
        element.Pos = mul(vert.Pos + float4(faceNormal, 0.0f), WVP);
        element.UV = vert.UV;
        output.Append(element);
    }
}
```

    I haven't used geometry shaders much, so I'm not 100% sure what happens behind the scenes. Any tips appreciated!
  14. SOLUTION: The problem seemed to be that the epsilon values (the comparison value for the dot product) were not consistent throughout all the tests. I realized this when I replaced them all with a single #defined EPSILON. It still confuses me how that could generate those artifacts, though.

    Hi, I wasn't sure where to post this, but I suppose this is good enough. I'm implementing a bunch of ray collision tests for a ray-tracer assignment for school. All tests (plane, sphere, triangle) have worked fine, but the OBB one gives weird results. To get the distance from the ray origin to the intersection, I'm using the algorithm presented in Real-Time Rendering, third edition (the three-slabs method). The actual intersection test seems to work fine; however, when I'm calculating the normal at the intersection (in order to shade it), a lot of pixels seem to fail the tests and thus aren't shaded properly. Here's the code where I calculate this normal:

```glsl
pointOnSurface = camPos + lastT * ray_dir.xyz; //world space position of intersection
obb_type o = obb_data[objIndex]; //Contains centre of box, basis vectors and half widths
vec4 arr[3] = {o.u_hu, o.v_hv, o.w_hw}; //xyz = basis, w = half width
vec3 normal;

for (int i = 0; i < 3; i++) //for every basis vector
{
    //vector from pointOnSurface to middle of plane
    //If pointOnSurface is on the same plane we are testing, the dot product
    //calculated later will be close to 0
    vec3 planeVector = (o.centre.xyz + (arr[i].xyz * arr[i].w)) - pointOnSurface;
    float dotProduct = dot(planeVector, arr[i].xyz);

    if ((dotProduct > -0.00001) && (dotProduct < 0.00001))
    {
        normal = normalize(arr[i].xyz);
    }
    else //check other plane in slab (arr.xyz * -1.0)
    {
        vec3 planeVector = (o.centre.xyz - (arr[i].xyz * arr[i].w)) - pointOnSurface;
        dotProduct = dot(planeVector, arr[i].xyz);
        if ((dotProduct > -0.000001) && (dotProduct < 0.000001))
        {
            normal = normalize(arr[i].xyz * -1.0);
        }
    }
}
return shade(pointOnSurface, normal, obb_data[objIndex].colour.xyz);
```

    As you can see, if the dot product is never close to 0, the normal is never changed, which means shade() gets the wrong normal, creating the artifacts. So why does this test fail for so many pixels? I've tried changing the tolerance both up and down without success. There is always some significant error. Worth noting is that for three of the planes on the box, the artifacts show up only when I zoom out very far; for the other three, they disappear when I get very close. I'm really at a loss here... Here's a snapshot of the box:
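A side note on the approach above: the distance-dependence of the artifacts is typical of fixed-epsilon plane tests, since the floating-point error in pointOnSurface grows with ray length. An epsilon-free alternative (a sketch, not the thread's solution: names and layout are illustrative) is to project the hit point into the box's local frame and pick the axis whose coordinate is largest relative to its half-width; some axis always wins, so the normal is always set:

```cpp
#include <cassert>
#include <cmath>

struct V3f { float x, y, z; };

static float dot3(V3f a, V3f b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Sketch: pick the OBB face normal at a surface point by projecting the
// point into the box's local frame and taking the axis with the largest
// coordinate relative to its half-width. No tolerance needed: the winning
// axis is by construction the face the point lies on (up to edge ties).
// `axes` are the box's unit basis vectors, `halfW` the half-widths.
V3f obbFaceNormal(V3f point, V3f centre, const V3f axes[3], const float halfW[3])
{
    V3f local = { point.x - centre.x, point.y - centre.y, point.z - centre.z };
    int best = 0;
    float bestRatio = -1.0f;
    float sign = 1.0f;
    for (int i = 0; i < 3; ++i)
    {
        float d = dot3(local, axes[i]);
        float ratio = std::fabs(d) / halfW[i];
        if (ratio > bestRatio)
        {
            bestRatio = ratio;
            best = i;
            sign = (d >= 0.0f) ? 1.0f : -1.0f;
        }
    }
    return { axes[best].x * sign, axes[best].y * sign, axes[best].z * sign };
}
```

Because it compares ratios instead of testing against an absolute threshold, the selection is stable however far the camera is from the box.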
  15. GreenGodDiary

    Rendering CS output to backbuffer [SOLVED]

    Good to know! Thanks