
recp

Member
  • Content count

    33
  • Joined

  • Last visited

Community Reputation

215 Neutral

About recp

  • Rank
    Member

Personal Information

  • Interests
    Art
    Design
    Programming

Social

  • Github
    recp

Recent Profile Visitors

2248 profile views
  1. It seems this is the most reasonable thing to do, thanks for your feedback.
  2. It could save me from creating another framebuffer for opaque objects; since I render opaque objects to the default framebuffer, I need the default depth buffer for the second pass. There are three passes (see the page):

     • 3D opaque surfaces to a primary framebuffer
     • 3D transparency accumulation to an off-screen framebuffer
     • 2D compositing of transparency over the primary framebuffer (using alpha blending)

    For the transparency pass it says "Test against the depth buffer, but do not write to it or clear it." So it seems I don't need to write to the depth buffer in the transparency pass; I only need to read the opaque pass's depth, which is the default depth buffer, for the depth test. Binding the default depth buffer (after the opaque pass) to the transparency framebuffer would be nice, but I'm not sure whether that is possible. As an alternative, I created a GL_DEPTH_COMPONENT24 depth buffer for the off-screen (transparency) framebuffer; glBlitFramebuffer seems to offer copying a buffer into another framebuffer (see the sketch below). But what if the formats mismatch, 32-bit vs 24-bit? Another alternative is to create a framebuffer for the opaque surfaces too. Then, since I would have the depth buffer's ID, I guess I could attach it to the other framebuffer and set the depth mask to false, but I would need to create another color attachment and depth buffer for that framebuffer. I don't know which is faster: copying the default depth with glBlitFramebuffer, or this method.
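    A minimal sketch of the glBlitFramebuffer route, assuming a window of size w × h and an off-screen FBO handle oitFbo created elsewhere. Note that a depth blit must use GL_NEAREST, and the GL spec requires the source and destination depth formats to match, so a mismatched 32-bit vs 24-bit blit would raise GL_INVALID_OPERATION:

        /* copy the default framebuffer's depth into the off-screen (transparency) framebuffer */
        glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);       /* default framebuffer as source       */
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, oitFbo);  /* transparency framebuffer as dest    */
        glBlitFramebuffer(0, 0, w, h,                    /* source rectangle                    */
                          0, 0, w, h,                    /* destination rectangle               */
                          GL_DEPTH_BUFFER_BIT,           /* copy only the depth buffer          */
                          GL_NEAREST);                   /* required filter for depth blits     */
        glBindFramebuffer(GL_FRAMEBUFFER, oitFbo);       /* continue with the transparency pass */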
  3. I'm trying to implement Weighted, Blended Order-Independent Transparency in my engine. I need to read/access the default framebuffer's depth buffer in the off-screen transparency pass. What is the best way to do that? Is there a way to read the default depth buffer directly instead of copying it every frame? If not, how can I copy it? After all, I don't know the default depth format, do I?
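    For the last point, a minimal sketch of querying at least the default depth buffer's size (this reports the bit count, not the exact internal format; assumes a core-profile context):

        GLint depthBits = 0;
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER,
                                              GL_DEPTH, /* default framebuffer uses GL_DEPTH, not GL_DEPTH_ATTACHMENT */
                                              GL_FRAMEBUFFER_ATTACHMENT_DEPTH_SIZE,
                                              &depthBits);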
  4. Ok, I think I found the error from the first question; I was using the wrong indices:

        /* wrong */
        0.5f * (-C * camProj[2][3] + camProj[3][3]) / C + 0.5f

        /* fixed */
        0.5f * (-C * camProj[2][2] + camProj[3][2]) / C + 0.5f

    Now I get a colorful scene; I will try to understand the simple math used here. What about the lambda value?
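    A minimal sketch of the full conversion with the fixed indices (assuming camProj is a column-major cglm/GLM-style perspective matrix and splitDist[i] holds the positive view-space far distance of split i; the result lies in [0, 1] and can be compared against gl_FragCoord.z):

        #include <cglm/cglm.h>

        void computeFarBounds(mat4 camProj, float *splitDist, float *farBound, int n) {
          int i;
          for (i = 0; i < n; i++) {
            float d = splitDist[i];
            /* project (0, 0, -d, 1), divide by w = d, then map NDC z from [-1, 1] to [0, 1] */
            farBound[i] = 0.5f * (-d * camProj[2][2] + camProj[3][2]) / d + 0.5f;
          }
        }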
  5. I followed NVIDIA's tutorial (the CSM and PSSM pdf) and sample project, but I'm confused about how to compare depth with the split distances. My fragment shader looks like:

        uniform sampler2DArrayShadow uShadMap;
        uniform mat4                 uShadMVP[SHAD_SPLIT];
        uniform float                uShadDist[SHAD_SPLIT];

        float shadowCoef() {
          vec4 shadCoord;
          int  i;

          for (i = 0; i < SHAD_SPLIT; i++) {
            if (gl_FragCoord.z < uShadDist[i])
              break;
          }

          if (i >= SHAD_SPLIT)
            return 1.0;

          shadCoord   = uShadMVP[i] * vPos;
          shadCoord.w = shadCoord.z;
          shadCoord.z = float(i);

          return texture(uShadMap, shadCoord);
        }

    uShadDist contains distances in view space, e.g. [ 12.818679, 26.606153, 46.403923, 100.000061 ], but I guess I need to convert them to clip-space coordinates in [0, 1] before comparing with gl_FragCoord.z, right? Here is NVIDIA's code:

        // f[i].fard is originally in eye space - tell's us how far we can see.
        // Here we compute it in camera homogeneous coordinates. Basically, we calculate
        // cam_proj * (0, 0, f[i].fard, 1)^t and then normalize to [0; 1]
        far_bound[i] = 0.5f*(-f[i].fard*cam_proj[10]+cam_proj[14])/f[i].fard + 0.5f;

    I don't understand what it does (or how). I tried to multiply the split distance by the camera's perspective projection, but I always get >= 1. Can someone explain what value I should compare with gl_FragCoord.z? One other question: I'm using the Practical Split Scheme, but I don't understand what lambda value I should use; I used 0.5, i.e. C = (Clog + Cuni) * 0.5f, but I'm not sure if it is the best value (a sketch of the scheme is below).
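    A minimal sketch of the practical split scheme, assuming zNear/zFar are the camera's near and far clip distances and lambda in [0, 1] blends the logarithmic and uniform schemes (lambda = 0.5 reproduces C = (Clog + Cuni) * 0.5 above and is a common starting point; the best value is scene-dependent):

        #include <math.h>

        /* dist[0..n-1] receives the view-space far distance of each split */
        void splitDistances(float zNear, float zFar, float lambda, float *dist, int n) {
          int i;
          for (i = 1; i <= n; i++) {
            float s    = (float)i / (float)n;
            float clog = zNear * powf(zFar / zNear, s);  /* logarithmic split */
            float cuni = zNear + (zFar - zNear) * s;     /* uniform split     */
            dist[i - 1] = lambda * clog + (1.0f - lambda) * cuni;
          }
        }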
  6. I think I found it! Here is the code; it may help someone else who doesn't want to re-extract new frustum corners to split the frustum.

        v         = normalize(Far - Near)
        d         = distance(Far, Near)
        size      = d * C / Far
        newCorner = nearCorner + v * size

    Code:

        /* this is for one point between the NEAR and FAR planes; the others are calculated the same way */
        vec3  corner;
        float dist;

        dist = glm_vec_distance(corners[j + 4], corners[j]);
        glm_vec_sub(corners[j + 4], corners[j], corner);
        glm_vec_scale_as(corner, dist * C / f, corner);
        glm_vec_add(corners[j], corner, subFrustum.corners[j + 4]);
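    Expanded into a full loop, a minimal sketch (the function name is made up; it assumes corners[0..3] are the near-plane corners, corners[4..7] the matching far-plane corners, splitFar is the split's far distance and f the whole frustum's far distance; for splits after the first, the previous split's far corners become the new near corners):

        #include <cglm/cglm.h>

        void splitFrustumCorners(vec3 corners[8], float splitFar, float f, vec3 dest[8]) {
          int j;
          for (j = 0; j < 4; j++) {
            vec3  dir;
            float dist;

            glm_vec_copy(corners[j], dest[j]);                /* near corners are kept       */
            dist = glm_vec_distance(corners[j + 4], corners[j]);
            glm_vec_sub(corners[j + 4], corners[j], dir);     /* near -> far direction       */
            glm_vec_scale_as(dir, dist * splitFar / f, dir);  /* shrink to the split's depth */
            glm_vec_add(corners[j], dir, dest[j + 4]);        /* new far-plane corner        */
          }
        }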
  7. C++ precision

    This would help: https://stackoverflow.com/a/16606128/2676533
  8. I want to split the view frustum for shadow mapping (CSM/PSSM), and I need the corners of each frustum split. I already have the 8 corners of the view frustum and the view-frustum planes, so I thought that maybe I don't need to create a new invViewProj matrix and extract the corners with the clip-space method; that is too expensive (for each split). To get the center of two points/vectors we use center = ( V1 + V2 ) * 0.5. So, using the same information, could this give us a split-frustum corner?

        newNearTopLeft = ( nearTopLeft + farTopLeft ) * ( near / planeDistance )
        /* this plane is between the far and near points */
        /* repeat this for each of the 8 corners */

    Is the math correct? If it is, this will save my engine a lot of calculations.
  9. Customizing floating point seems interesting. I'm using C for rendering and math. Since C doesn't support 16-bit floats, I also need to handle that manually or find something else (speaking only about 16-bit). What I'm thinking of doing: since I don't think all CPUs (except ARM, maybe) support 16-bit float arithmetic, I'm considering doing the half-precision arithmetic as 32-bit float and then converting the result back to 16-bit float for storage in memory (https://software.intel.com/en-us/articles/performance-benefits-of-half-precision-floats). I think this would be better than implementing the arithmetic manually if performance matters (a single instruction vs. multiple). In the future I may look into the things you are looking at now; I just wanted to share my thoughts about 16-bit floats.
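    A minimal sketch of that store-as-half / compute-as-float idea using the x86 F16C intrinsics (this assumes a CPU with F16C support and compiling with -mf16c or equivalent; ARM would use its native half-precision support instead):

        #include <immintrin.h>
        #include <stdint.h>

        typedef uint16_t half; /* storage-only 16-bit float */

        static inline half  half_from_float(float f) { return _cvtss_sh(f, _MM_FROUND_TO_NEAREST_INT); }
        static inline float half_to_float(half h)    { return _cvtsh_ss(h); }

        /* arithmetic happens in 32-bit; only the stored value is 16-bit */
        static inline half half_mul(half a, half b) {
          return half_from_float(half_to_float(a) * half_to_float(b));
        }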
  10. OpenGL GLSL Light structure

    I tried the same and it worked for me. Copy-pasting the structure is not enough; it must be used in the fragment or vertex shader. I also tried using only position in the vertex shader, and even in that case I got the other members' locations (0, 1, 2). It seems the GLSL compiler doesn't remove individual members of a structure; if I don't use any member of the structure, then I get -1 for all of them (see the sketch below).
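    A minimal sketch of what I mean (the struct, uniform name and program handle are illustrative, not the exact ones from the thread); assuming the shader declares something like struct Light { vec4 position; vec4 diffuse; float attenuation; }; uniform Light light; and actually reads light somewhere, the members are queried one by one:

        /* members of a struct uniform are queried individually;     */
        /* if the whole struct is unused, all of these may return -1 */
        GLint locPos = glGetUniformLocation(prog, "light.position");
        GLint locDif = glGetUniformLocation(prog, "light.diffuse");
        GLint locAtt = glGetUniformLocation(prog, "light.attenuation");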
  11. OpenGL GLSL Light structure

    Did you also use the diffuse/attenuation variables in the fragment shader when you got -1? If not, the compiler probably removed them as unused code, so you can't get their locations.
  12. I found a way to find a perpendicular vector easily, and I like it: https://www.quora.com/How-do-I-find-a-vector-perpendicular-to-another-vector (second answer, Tom's). I must add this to cglm. If v1 = (x, y, z), one possible perpendicular is v2 = (y-z, z-x, x-y), since dot(v1, v2) = x(y-z) + y(z-x) + z(x-y) = 0. Updated version:

        up[0] = dir[1] - dir[2];
        up[1] = dir[2] - dir[0];
        up[2] = dir[0] - dir[1];
        glm_vec_normalize(up);

        glm_vec_add(cam->frustum.center, dir, target);
        glm_lookat(cam->frustum.center, target, up, view);

    What do you think now?
  13. Good point! It was attractive at first; I must find a truly perpendicular one. This implementation must be correct before I move on to splitting the frustum.
  14. I updated the way I create the matrices; I will update it for CSM. It seems to work, though with aliasing problems, which I hope to fix with CSM/PSSM.

        void
        gkTransformsForLight(kScene *scene, GkLight *light, mat4 *viewProj, int splitCount) {
          mat4      view, proj;
          GkCamera *cam;

          cam = scene->camera;

          switch (light->type) {
            case GK_LIGHT_TYPE_DIRECTIONAL: {
              vec4   *corner, v;
              vec3    box[2], target;
              int32_t i;

              memset(box, 0, sizeof(box));

              glm_vec_add(cam->frustum.center, light->dir, target);
              glm_lookat(cam->frustum.center, target, GLM_YUP, view);

              corner = cam->frustum.corners;
              for (i = 0; i < 8; i++) {
                glm_mat4_mulv(view, corner[i], v);

                box[0][0] = glm_min(box[0][0], v[0]);
                box[0][1] = glm_min(box[0][1], v[1]);
                box[0][2] = glm_min(box[0][2], v[2]);

                box[1][0] = glm_max(box[1][0], v[0]);
                box[1][1] = glm_max(box[1][1], v[1]);
                box[1][2] = glm_max(box[1][2], v[2]);
              }

              glm_ortho(box[0][0], box[1][0],
                        box[0][1], box[1][1],
                        box[0][2], box[1][2],
                        proj);
              break;
            }
            case GK_LIGHT_TYPE_POINT:
            case GK_LIGHT_TYPE_SPOT:
            default:
              break;
          }

          glm_mat4_mul(proj, view, viewProj[0]);
        }

    I always used Y_UP here because I thought that, since all models/primitives are transformed with the same matrix, it doesn't matter if the result is upside down or rotated somehow: we don't show it to the user (except for depth visualization), it is just depth testing. So I think it will not be a problem; please correct me if I'm wrong, I don't want to do this the wrong way. Any feedback on a better way to create the matrices is welcome. The next step is CSM/PSSM; I will follow tutorials to implement it.
  15. Thank you for sharing your thoughts; I want to reach (or pass) those points. I'm learning a lot of things while working on this stuff, and it makes me happy to spend my time on these projects and sub-libraries. AssetKit will fully support COLLADA 1.4, 1.5+ and glTF. It is not finished yet, but it already has nice features: a single interface for COLLADA 1.4, 1.5 and glTF; a fantastic hierarchical memory allocator; small binary size; you can convert a document's coordinate system to any new one, e.g. LH -> RH or even a custom one, while importing (or later); it can load sub/external COLLADA files and their files (relative to the document or any URI, including HTTP), caches them (until freed), and fixes an external document's coordinate system to match the opened one. It also supports technique_hint: e.g. setting ak_setPlatform("PS3") will affect how instance effects are loaded/selected. Computing normals and bounding boxes while loading is optional. Users can enable/disable some features: https://github.com/recp/assetkit/blob/master/include/ak-options.h (I like to provide options). It will support some other formats, e.g. .obj, as an extra library. I'm happy with the results, and I hope it will get better over time.