mrMatrix

Member
  • Content count

    16
  • Joined

  • Last visited

Community Reputation

330 Neutral

2 Followers

About mrMatrix

  • Rank
    Member

Personal Information

  • Interests
    Art
    Programming

Social

  • Github
    bergjones


  1. mrMatrix

    Smooth "infinity" ortho zoom

    I'm making some progress. From what I've read so far, the algorithm is basically to scale the individual objects up and down and disable ortho zoom, while retaining the ability to move left/right. This keeps the benefit of an ortho camera (2D plane placement always lines up) and means you are no longer limited by the ortho zoom parameter hitting zero. You can write a function to detect when a plane has scaled up so far that it surrounds the screen, at which point you can cull it; likewise you can cull it if it gets too small. And of course you can still move the ortho cam left/right. A sketch of the idea follows below.
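    (A minimal sketch of that idea, with my own naming rather than anything from Mischief: keep the ortho width fixed and fold each zoom step into the objects themselves, scaling every plane about the zoom point and culling planes that become too large or too small. The thresholds here stand in for a real screen-coverage test.)

        #include <glm/glm.hpp>
        #include <vector>

        struct Plane { glm::vec2 center; float size; bool visible; };

        // Instead of shrinking the ortho width toward 0, keep the camera fixed and
        // fold the zoom step into every object, so precision stays centred on the view.
        void zoomStep(std::vector<Plane> &planes, glm::vec2 zoomPoint, float factor)
        {
            for (Plane &p : planes)
            {
                p.center = zoomPoint + (p.center - zoomPoint) * factor; // scale about the cursor
                p.size *= factor;

                // Crude cull: sub-pixel planes and planes far larger than the view
                // (a stand-in for the "surrounds the screen" test from the post).
                p.visible = (p.size > 1e-4f) && (p.size < 1e4f);
            }
        }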
  2. I'm wondering if I'm missing any steps, or making incorrect assumptions anywhere, in creating an "infinite zoom" effect in a viewport, similar to the software Mischief, where you can pan left/right/up/down and zoom in and out. Mischief claims trillions of levels of zoom.

    In this prototype in Maya, as you can see, I have a front-facing orthographic camera and a tree-like structure of 1x1 planes that have been scaled down and/or translated in Z. Imagine the planes as having arbitrarily sized textures on them. Later I want this to be in 3D stereo or VR, with depth playing a factor as well as size. I am prototyping this in Maya before I move it to my engine in GL with GLM.

    I can successfully zoom the ortho camera and set camera bookmarks that I can recall. However, I quickly run out of camera zoom resolution as the zoom parameter approaches zero. For example, in the scene pictured above I only have 24 1x1 planes, but when I'm zoomed far enough in that I can only see the smallest one, the camera zoom is near zero (even if I query the internal double value), everything starts to shake, and I can't really go any smaller than 0. With Mischief I can zoom in way deeper, to where I haven't found a limit, and do it smoothly. How can I keep on zooming in with ortho?

    I tried using a perspective cam and just transforming it, but the perspective distortion warps the perceived position of the planes too much: once you zoom in far enough, everything begins drifting behind everything else. They could also be moving objects back in world space with a real flat perspective cam as they "zoom" in...?

    Another question: why have I only seen this implemented in Mischief, out of all the other programs? Is the implementation that difficult, or not useful? And if it's not useful, why did the Foundry buy out the program?
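    (The shaking near zero is consistent with plain floating-point resolution: the spacing between adjacent representable values is relative to magnitude, so once a tiny ortho width is combined with camera positions of ordinary size, only a handful of distinct positions remain inside the view. A quick illustration, not Maya-specific:)

        #include <cmath>
        #include <cstdio>

        int main()
        {
            // Spacing between representable floats near x is roughly x * 2^-23.
            float orthoWidth = 1e-6f;
            float next = std::nextafter(orthoWidth, 1.0f);
            std::printf("width %.9g, next representable %.9g, step %.3g\n",
                        orthoWidth, next, next - orthoWidth);

            // The step near 1e-6 is ~1e-13, harmless on its own. The trouble is that
            // transforms mix the width with coordinates of magnitude ~1, where the
            // step is ~1e-7, a visible fraction of a 1e-6-wide view, hence jitter.
            return 0;
        }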
  3. mrMatrix

    Dynamic AABB with bone animation?

    Got it working by multiplying each skeleton bounding-box vector by the bone animation matrix and then doing a min/max pass:

        glUseProgram2("pBB", myAbj);

        //AABB - STEP 1 - GATHER / STORE
        //gather each object's transformed per-bone OBB extremes
        for (auto &i : myAbj.frustumObjs)
        {
            if (i->bb->val_b)
            {
                i->aabbV4.clear();

                for (auto &j : i->bbSkelAll)
                {
                    for (auto &k : i->aiGbones)
                    {
                        if (k.name == j.name)
                        {
                            i->obbMVP = glm::transpose(k.animatedXform) * j.obbMVP;

                            glm::vec4 bbSkelXformMin = glm::transpose(k.animatedXform) * glm::vec4(j.min, 1.f);
                            glm::vec4 bbSkelXformMax = glm::transpose(k.animatedXform) * glm::vec4(j.max, 1.f);
                            i->aabbV4.push_back(bbSkelXformMin);
                            i->aabbV4.push_back(bbSkelXformMax);
                        }
                    }

                    i->mvpGet();
                    i->render();
                }
            }
        }

        //AABB - STEP 2 - MIN / MAX
        for (auto &i : myAbj.frustumObjs)
        {
            if (i->bb->val_b)
            {
                glm::vec4 aabbMin = (i->aabbV4.empty()) ? glm::vec4(0.f) : i->aabbV4[0];
                glm::vec4 aabbMax = (i->aabbV4.empty()) ? glm::vec4(0.f) : i->aabbV4[0];

                for (uint j = 0; j < i->aabbV4.size(); ++j)
                {
                    aabbMin = glm::min(i->aabbV4[j], aabbMin);
                    aabbMax = glm::max(i->aabbV4[j], aabbMax);
                }

                glm::vec3 aabbSize = glm::vec3(aabbMax - aabbMin);
                glm::vec3 aabbCenter = .5f * glm::vec3(aabbMin + aabbMax);
                i->aabbMVP = glm::translate(glm::mat4(), aabbCenter) * glm::scale(glm::mat4(), aabbSize);
                i->obbMVP = i->aabbMVP;

                i->aabbTgl = 1;
                i->mvpGet();
                i->render();
                i->aabbTgl = 0;
            }
        }
  4. I'm having trouble generating an AABB for an object with multiple bones that animate. I use a pre-calculated local-space bounding box for each bone, which I transform at render time with the per-frame bone matrix to get dynamic bounding-box movement.

    To get a static AABB in the bind pose, I just do a min/max check like this, with bbSkelAll holding one box per bone, built from the vertices that have a greater-than-zero influence for that bone:

        glm::vec3 aabbMin = (obj->bbSkelAll.empty()) ? glm::vec3(0.f) : obj->bbSkelAll[0].min;
        glm::vec3 aabbMax = (obj->bbSkelAll.empty()) ? glm::vec3(0.f) : obj->bbSkelAll[0].max;

        for (auto &i : obj->bbSkelAll)
        {
            aabbMin = glm::min(i.min, aabbMin);
            aabbMax = glm::max(i.max, aabbMax);
        }

        glm::vec3 aabbSize = aabbMax - aabbMin;
        glm::vec3 aabbCenter = .5f * (aabbMin + aabbMax);
        obj->aabbMVP = glm::translate(glm::mat4(), aabbCenter) * glm::scale(glm::mat4(), aabbSize);

    This works for a static pose, but I want a dynamic AABB that translates and scales. The bounding boxes for the bones are now at different rotations, so I have to loop through them, but I can't conceptualize what space I should be in, or how to get the bones' world-space positions, given that I only have their pre-calculated local BBs and the per-frame bone matrices. Any help would be appreciated. This video and image show what I'm talking about.
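    (For reference, a common textbook version of the fix, sketched with a hypothetical BoneBB struct rather than the engine's real types: transform all eight corners of each bone's local box by that bone's per-frame matrix, then min/max the results. Eight corners rather than two, because once a box rotates, its axis-aligned extremes can come from any corner.)

        #include <glm/glm.hpp>
        #include <cfloat>
        #include <vector>

        // Hypothetical per-bone data: local-space extents plus this frame's bone matrix.
        struct BoneBB { glm::vec3 min, max; glm::mat4 animatedXform; };

        // World-space AABB over all bones: transform every corner, then min/max.
        void dynamicAABB(const std::vector<BoneBB> &bones, glm::vec3 &outMin, glm::vec3 &outMax)
        {
            outMin = glm::vec3( FLT_MAX);
            outMax = glm::vec3(-FLT_MAX);

            for (const BoneBB &b : bones)
            {
                for (int c = 0; c < 8; ++c)
                {
                    glm::vec3 corner((c & 1) ? b.max.x : b.min.x,
                                     (c & 2) ? b.max.y : b.min.y,
                                     (c & 4) ? b.max.z : b.min.z);
                    glm::vec3 world = glm::vec3(b.animatedXform * glm::vec4(corner, 1.f));
                    outMin = glm::min(outMin, world);
                    outMax = glm::max(outMax, world);
                }
            }
        }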
  5. But isn't it common practice to do shader debugging with Nsight? And everything I've read says that it can only be done with a Kepler GPU such as a 670/680. I've been looking at this problem for over a week now, and I'll do anything to solve it. The problem does have to do with NaN, but why it happens isn't easy for me to describe or fix without the "edge" that I've seen in Nsight tutorial videos. I think my best option would be to order a Kepler GPU. This could help me in the future as well, as my shaders get more complex.
  6. I had some issues with both RenderDoc and Nsight. With RenderDoc I was consistently getting the wrong final color. With Nsight I am unable to do live shader debugging on my current card (GTX 950), since they only support it on old Kepler / Fermi cards. Is it worth spending cash on an old card to do shader debugging in Nsight?
  7. Thanks, NaN was indeed the cause of the black squares. However, a real solution is proving to be quite tricky.
  8. I have a bloom shader which works perfectly when I have static objects: a 6x downsample, then a 6x Gaussian blur, then compositing the bloom over the source. When I add in Assimp animation, I get NaN errors at certain angles in the form of black squares, which I can confirm are NaN by doing isnan(). Certain angles while animating also make my tonemapping shader go crazy. I've tried adding fences with glFenceSync and glClientWaitSync, but it doesn't do anything. Is synchronization the issue? I assume the animated Assimp model isn't in sync with the bloom.
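    (One way to narrow this down, offered as a suggestion rather than anything from the thread: GL already executes commands in submission order within one context, so fences are unlikely to be the cause; a NaN fed in by the skinning path is more typical. A minimal CPU-side check, assuming the bone palette is a std::vector<glm::mat4> about to be uploaded:)

        #include <glm/glm.hpp>
        #include <cmath>
        #include <vector>

        // Returns true if any element of any bone matrix is NaN, so a bad palette
        // can be caught before upload instead of surfacing as black squares in bloom.
        bool paletteHasNaN(const std::vector<glm::mat4> &bones)
        {
            for (const glm::mat4 &m : bones)
                for (int c = 0; c < 4; ++c)
                    for (int r = 0; r < 4; ++r)
                        if (std::isnan(m[c][r]))
                            return true;
            return false;
        }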
  9. I have a semi-working Assimp implementation for animation after I've rigged and bound. By semi-working I mean that in order to get the FBX to work correctly in Maya, under Skin -> Bind -> Dropoff Rate I have to put 50-100, otherwise the WHOLE mesh rotates. No scaling, or setting of scale on the FBX, solves this. Thus I can't use reasonable dropoff rates, and I always get a hard rotate. Not only that, but I get weird triangulated data, such as normals and UVs, near joints. However, following OGLdev article 38, it animates correctly. Does anyone have any idea what I could be doing wrong? Attached is my import code and a pic of the problem.

        void Object::VBOup_assimp_anim(uint meshIdx, aiMesh *myMesh, aiNode *myNode)
        {
            auto obj = make_shared<Object>(myAbj);
            obj->type = "OBJ";
            obj->rename(myNode->mName.data);

            cout << "obj->name->val_s = " << obj->name->val_s << endl;
            myAbj.assimpNames.push_back(obj->name->val_s);
            obj->assimpSceneParent = myAbj.myAssimpSceneName;
            cout << "name / obj->assimpSceneParent = " << myNode->mName.data << " / " << obj->assimpSceneParent << endl;
            cout << "name / num verts = " << myNode->mName.data << " ... " << myMesh->mNumVertices << endl;

            for (unsigned int i = 0; i < myMesh->mNumVertices; ++i)
            {
                glm::vec2 vector2;
                glm::vec3 vector;

                //positions
                vector.x = myMesh->mVertices[i].x;
                vector.y = myMesh->mVertices[i].y;
                vector.z = myMesh->mVertices[i].z;
                obj->pE.push_back(vector);

                //normals
                vector.x = myMesh->mNormals[i].x;
                vector.y = myMesh->mNormals[i].y;
                vector.z = myMesh->mNormals[i].z;
                obj->nE.push_back(vector);

                //UVs (first channel, if present)
                if (myMesh->mTextureCoords[0])
                {
                    vector2.x = myMesh->mTextureCoords[0][i].x;
                    vector2.y = myMesh->mTextureCoords[0][i].y;
                }
                else
                    vector2 = glm::vec2(0.f);
                obj->uvE.push_back(vector2);

                //tangents
                vector.x = myMesh->mTangents[i].x;
                vector.y = myMesh->mTangents[i].y;
                vector.z = myMesh->mTangents[i].z;
                obj->tE.push_back(vector);
            }

            //indices
            for (unsigned int i = 0; i < myMesh->mNumFaces; ++i)
            {
                aiFace myFace = myMesh->mFaces[i];
                for (unsigned int j = 0; j < myFace.mNumIndices; ++j)
                    obj->idxE.push_back(myFace.mIndices[j]);
            }
        }
  10. I have tried some Ward and GGX spec models, which work well without a normal map. But when a tangent-space normal map is used, with the light and view vectors in tangent space, my aniso models don't follow the normals at all, or at least nowhere near as well as an isotropic spec does. What should I do?

        float HdotT = dot(H, Tn) / aX;
        float HdotB = dot(H, Bn) / aY;

        float ward()
        {
            if (NdotL <= 0.f)
                return 0.f;

            float expon = exp(-2.f * (pow(HdotT, 2.f) + pow(HdotB, 2.f)) / (1.f + HdotN));
            float aniso = sqrt(max(0.f, NdotL / NdotV)) * expon;

            return aniso;
        }
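    (One common cause, offered as an assumption rather than a confirmed diagnosis: Tn and Bn still belong to the geometric normal, so HdotT / HdotB never see the mapped normal. Re-orthogonalizing the frame against the perturbed normal keeps the anisotropy aligned with the normal map. Sketched here in C++/glm as the math only, not the actual shader:)

        #include <glm/glm.hpp>

        // Rebuild the tangent frame around the normal-mapped normal Nm (Gram-Schmidt),
        // so the anisotropic directions follow the perturbed normal rather than the
        // geometric one. All vectors are assumed normalized and in the same space.
        void reorthonormalize(const glm::vec3 &Nm, glm::vec3 &T, glm::vec3 &B)
        {
            T = glm::normalize(T - Nm * glm::dot(Nm, T)); // strip the component along Nm
            B = glm::normalize(glm::cross(Nm, T));        // rebuild B perpendicular to both
        }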
  11. I have something working by calling glBindTextureUnit(THE_UNIT, HANDLE), which I've read replaces glActiveTexture / glBindTexture and is meant to be used with DSA. I think what I really want is bindless (sparse?) textures to go along with DSA... right?
  12. I have TWO different objects with TWO different shaders, and each frag shader has texture units like this:

        layout(binding = 0) uniform sampler2D tex1;
        layout(binding = 1) uniform sampler2D tex2;

    I create two new textures with DSA like this, one after another, with "someTexA" and "someTexB":

        glCreateTextures(GL_TEXTURE_2D, 1, &someTexH);
        unsigned char *img = SOIL_load_image(string(pathTex + "someTexA.png").c_str(), &imgW, &imgH, &chan, SOIL_LOAD_RGBA);
        glTextureStorage2D(someTexH, 1, GL_RGBA16, imgW, imgH);
        glTextureSubImage2D(someTexH, 0, 0, 0, imgW, imgH, GL_RGBA, GL_UNSIGNED_BYTE, img);
        glBindTextureUnit(0, someTexH);
        SOIL_free_image_data(img);

    They have different debug textures in slot 0. If I bind two in a row with the above code, glBindTextureUnit puts the same texture, "someTexA", into both shaders, so when I render they both use the same texture. I assume I have to clear something between subsequent calls to create the textures?

    My rendering looks like this. I've removed the old glActiveTexture / glBindTexture between each program, as they should be redundant with DSA textures:

        //1
        glUseProgram(pFBO1);
        // glActiveTexture(GL_TEXTURE0);
        // glBindTexture(GL_TEXTURE_2D, someTexH);
        // GLuint someTexBindLoc = glGetUniformLocation(pFBO, "someTexA");
        // glUniform1i(someTexBindLoc, 0);
        glBindVertexArray(myVAO1);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
        glBindVertexArray(0);

        //2
        glUseProgram(pFBO2);
        glBindVertexArray(myVAO2);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
        glBindVertexArray(0);
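    (A sketch of the pattern that avoids the collision, assuming the root cause is that both textures go through the same handle variable and both get bound to unit 0 up front. Texture-unit bindings are context-global state, not per-program state, so the last glBindTextureUnit(0, ...) wins for every shader that samples binding 0; keeping one GLuint per texture and rebinding just before each draw fixes that:)

        GLuint texA = 0, texB = 0;

        // One handle per texture (both could also come from a single
        // glCreateTextures call on an array of two GLuints).
        glCreateTextures(GL_TEXTURE_2D, 1, &texA);
        glCreateTextures(GL_TEXTURE_2D, 1, &texB);
        // ...glTextureStorage2D / glTextureSubImage2D for each handle, as above...

        // Unit bindings are shared by all programs, so rebind per draw:
        glUseProgram(pFBO1);
        glBindTextureUnit(0, texA);   // program 1 samples texA at binding = 0
        glBindVertexArray(myVAO1);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

        glUseProgram(pFBO2);
        glBindTextureUnit(0, texB);   // program 2 samples texB at binding = 0
        glBindVertexArray(myVAO2);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);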
  13. mrMatrix

    glm/opengl orbital camera C++

    I have a decent target / orbit cam following this thread, but how do I now apply the view code to the camera object itself? I have an obj file that is modelled to look like a camera, and the code below (from that thread) is basically my view matrix:

        glm::vec3 T = glm::vec3(0, 0, dist);
        T = glm::vec3(R * glm::vec4(T, 0.0f));

        *position = *target + T;
        *look = glm::normalize(*target - *position);
        *up = glm::vec3(R * glm::vec4(*UP, 0.0f));
        *right = glm::cross(*look, *up);
        *V = glm::lookAt(*position, *target, *up);
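    (Not from that thread, but the standard relationship: a view matrix maps world space to eye space, so its inverse is the camera's world transform, which is exactly the model matrix for a camera-shaped mesh. A minimal sketch:)

        #include <glm/glm.hpp>

        // The inverse of the view matrix places the camera gizmo in the world:
        // its columns end up as the camera's right, up, and back vectors, and
        // its last column is the camera position.
        glm::mat4 cameraGizmoModel(const glm::mat4 &V)
        {
            return glm::inverse(V);
        }

        // Usage: draw the gizmo mesh from some other camera (P2 / V2 hypothetical):
        // glm::mat4 MVP = P2 * V2 * cameraGizmoModel(V);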
  14. mrMatrix

    Pivot movement on rotated model

    I've been looking at the transformation matrix docs in Maya, and also the xform Python commands, where they talk about having various offsets so that when a pivot is moved while rotated or scaled it doesn't change the host. However, I've been working on implementing it for a few days with no luck, even when trying to replicate their matrix in OpenGL by simplifying my above example even further.

    In Maya I have a torus at translate(0) that's rotated 90 in X. I query the end-result matrix like this:

        m = cmds.xform(cmds.ls(sl=1)[0], q=1, ws=1, m=1)
        print "m[0:4] =", m[0:4]
        print "m[4:8] =", m[4:8]
        print "m[8:12] =", m[8:12]
        print "m[12:16] =", m[12:16]

    The resulting matrix is below, and I can't reproduce it by following the rotation steps in the guide. And because I can't reproduce it, I can't get the same result for more involved transforms. I've even looked at FBX importers, but none of them targeted modern OpenGL. Where is the 2.22044 from?

        m[0:4]   = [1.0, 0.0, 0.0, 0.0]
        m[4:8]   = [0.0, 2.220446049250313e-16, 1.0, 0.0]
        m[8:12]  = [0.0, -1.0, 2.220446049250313e-16, 0.0]
        m[12:16] = [0.0, 0.0, 0.0, 1.0]
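    (On the last question, a fact worth noting: 2.220446049250313e-16 is exactly DBL_EPSILON, the machine epsilon of an IEEE 754 double, so those entries are numerically zero: round-off from evaluating the 90-degree rotation in floating point. The matrix is just a clean 90-degree X rotation. A small standalone check, not Maya code:)

        #include <cfloat>
        #include <cmath>
        #include <cstdio>

        int main()
        {
            const double PI = 3.141592653589793;

            // The stray matrix entry equals machine epsilon for doubles:
            std::printf("DBL_EPSILON = %.16g\n", DBL_EPSILON); // 2.220446049250313e-16

            // cos(90 degrees) computed via radians is not exactly zero either:
            std::printf("cos(90 deg) = %.16g\n", std::cos(90.0 * PI / 180.0));

            // Both are numerically zero: compare matrices with a tolerance,
            // e.g. treat |x| < 1e-12 as 0, instead of expecting exact zeros.
            return 0;
        }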