gboxentertainment

  1. Voxel cone tracing problem

    Actually, I just realised you are talking about ambient occlusion, which I didn't implement using VCT because it's too inaccurate.
  2. Voxel cone tracing problem

    Can you show some screenshots of your problem? I used the dominant-direction method with a single 3D texture and never had any issues with my VCT. [attachment=21961:giboxv3-0.png]
  3. I've recently gotten AntTweakBar to work with my graphics engine. I have done a quick test to adjust the position of an object in space using the AntTweakBar GUI.

     What I want to do now is to do this for multiple objects, so that I can change the active object and adjust the position of each object individually.

     Here's where I initialize the GUI:

     bool init(void)
     {
         TwInit(TW_OPENGL, NULL);
         myBar = TwNewBar("GiBOX");
         TwDefine(" GiBOX size='240 320' ");
         TwDefine(" GiBOX valueswidth=140 ");

         // - Directly redirect GLUT mouse button events to AntTweakBar
         glutMouseFunc((GLUTmousebuttonfun)TwEventMouseButtonGLUT);
         // - Directly redirect GLUT mouse motion events to AntTweakBar
         glutMotionFunc((GLUTmousemotionfun)TwEventMouseMotionGLUT);
         // - Directly redirect GLUT mouse "passive" motion events to AntTweakBar (same as MouseMotion)
         glutPassiveMotionFunc((GLUTmousemotionfun)TwEventMouseMotionGLUT);
         // - Directly redirect GLUT key events to AntTweakBar
         glutKeyboardFunc((GLUTkeyboardfun)TwEventKeyboardGLUT);
         // - Directly redirect GLUT special key events to AntTweakBar
         glutSpecialFunc((GLUTspecialfun)TwEventSpecialGLUT);

         TwAddVarRW(myBar, "Object Id", TW_TYPE_INT32, &objId, "");
         TwAddVarRW(myBar, "Object Position", TW_TYPE_DIR3F, &(models->objPos[objId]), "");

         return true;
     }

     I use the object id to specify which object is active, so in the GUI I can change the object id. The problem is that even when I change the object id, it only adjusts the position of the first object (objId = 0).

     It seems that when I set models->objPos[objId], the objId in this function doesn't update. Has anyone who's used AntTweakBar gotten it to work with multiple objects so that their attributes can be updated individually?
  4. OpenGL Voxel Cone Tracing Experiment - Part 2 Progress

    Now, I haven't posted for a while on my progress with this engine - that's because I've been too busy at work and had put a halt to development. However, I'm willing to start up again during the holidays.

    My current scene is only very small, and I am planning to extend it efficiently to a larger world. I want to stay away from octrees for now, and cascades are out of the question due to the many artifacts they result in.

    My idea is a substitute for partially resident textures on video cards that do not yet support them. I have found that the optimal resolution is 64x64x64 voxels and that there is little difference in quality between this and 32x32x32. I want to create a grid of voxel textures where the camera is located in a 64x64x64 voxel texture surrounded by 32x32x32 voxel textures in every dimension. When the camera travels outside of that voxel volume, the next volume will become 64x64x64 resolution and the previous one will become 32x32x32. I'm hoping that I can trace cones into multiple voxel textures by using some sort of offset.

    Has anyone tried something similar before?
  5. OpenGL Voxel Cone Tracing Experiment - Part 2 Progress

      Good point. It turns out that my voxel visualizer was causing the massive increase in system RAM. I've turned that off, and it doesn't seem to have any effect on framerate. Looking at GPU RAM, it makes sense now - a 64 voxel depth (with all other resources) uses up about 750MB. This increases to 1.8GB when using a 512 voxel depth.
  6. Arauna2 path tracer announcement

    Have you implemented, or do you plan to implement, any type of noise filtering? Random parameter filtering, for example, looks interesting: http://www.youtube.com/watch?v=Ee51bkOlbMw However, Sam Lapere, who's working on the Brigade engine, said (I'm pretty sure) that RPF doesn't really provide good results, though I would like to see some proof of that. I think you worked on the Brigade engine as well, didn't you?
  7. OpenGL Voxel Cone Tracing Experiment - Part 2 Progress

      Actually, I'm using the task manager to get the amount of RAM that my application is using.
  8. OpenGL Voxel Cone Tracing Experiment - Part 2 Progress

    I just tested this with my brand new EVGA GTX 780, and it runs at an average of 95fps at 1080p with all screen-space effects turned on (SSAO, SSR, all soft shadows). In fact, screen-space effects seem to make little dent in the framerate.

    I discovered something very unusual when testing the voxel depth. Here are my results:

    32x32x32 -> 95fps (37MB memory)
    64x64x64 -> 64fps (37MB memory)
    128x128x128 -> 52fps (37MB memory)
    256x256x256 -> 31fps (38MB memory)
    512x512x512 -> 7fps (3.2GB memory)

    How on earth did I jump from 38MB to 3.2GB of memory used when going from 256 to 512 3D texture depths?!
  9. OpenGL Voxel Cone Tracing Experiment - Part 2 Progress

    So I've managed to remove some of the artifacts from my soft shadows. Previously, when I used front-face culling, I got the following issue: [attachment=18552:givoxshadows8-0.jpg]

    This was due to back-faces not being captured by the shadow-caster camera at overlapping surfaces, leading to a gap of missing information in the depth test. There's also the issue of back-face self-shadowing artifacts.

    Using back-face culling (only rendering the front face) resolves this problem; however, it leads to the following problem: [attachment=18553:givoxshadows8-1.jpg] These are front-face self-shadowing artifacts - no amount of bias resolves this, because it is caused by the jittering process during depth testing.

    I came up with a solution that resolves all of these issues for direct-lighting shadows, which is to also store an individual object id for each object in the scene from the shadow-caster's point of view. During depth testing, I then compare the object id from the player camera's point of view with that from the shadow-caster's point of view, and make it so that no object casts a shadow onto itself: [attachment=18554:givoxshadows8-2.jpg]

    Now, this is all good for direct lighting, because everything that is not directly lit, including shadows, I set to zero, and then I add the indirect light on top - so there's a smooth transition between the shadow and the non-lit part of each object. [attachment=18557:givoxshadleak2.jpg]

    For indirectly lit scenes with no direct lighting at all (i.e. scenes lit emissively by objects), things are a bit different. I don't separate the secondary bounce from the subsequent bounces - all bounces are tied together - so I cannot just treat the secondary bounce as the "direct lighting", set everything else including shadows to zero, and add the subsequent bounces on top. That would require an additional voxel texture, and I would need to double the number of cone traces.
    Instead, I cheat by making the shadowed parts of the scene darker than the non-shadowed parts (a more accurate algorithm would make shadowed areas zero and add the subsequent bounces to those areas). This, together with the removal of any self-shadowing, leads to shadow leaking: [attachment=18555:givoxshadleak1.jpg][attachment=18556:givoxshadleak0.jpg]

    So I think I have two options:
    1. Add another voxel texture for the second bounce and double the number of cone traces (most expensive).
    2. Switch back to back-face rendering with front-face culling for the shadow mapping, only for emissive-lighting shadows (lots of ugly artifacts).

    I wonder if anyone can come up with any other ideas.
  10. Rain droplets technique

    Here's something:   http://www.cescg.org/CESCG-2007/papers/Hagenberg-Stuppacher-Ines/cescg_StuppacherInes.pdf   It seems they use particles and store these in a height map texture.   [edit] Styves pretty much describes what is written here.
  11. Rain droplets technique

      That's a good point. Originally I was thinking you'd store the movement of the camera in variables for each direction and multiply these by the s-direction texture coordinate, but that doesn't account for the bending of the drops.
  12. Rain droplets technique

    All it involves is texture masking - not expensive at all. You create several textures with semi-transparent water droplets in the alpha channel (this can probably be done in Photoshop), then have their vertical texture coordinates change as a function of time in the shader code.
  13. OpenGL Voxel Cone Tracing Experiment - Part 2 Progress

    I've managed to increase the speed of my SSR to 5.3ms, at the cost of reduced quality, by using a variable step distance - so now I'm using 20 steps instead of 50.

    [attachment=18456:giboxssr10.png]

    Even if I get it down to 10 steps and remove the additional back-face cover, it will still be 3.1ms - is this fast enough, or can it be optimized further?
  14. OpenGL Voxel Cone Tracing Experiment - Part 2 Progress

    Here's my SSR code for anyone who can help me optimize it whilst still keeping some plausible quality:

    vec4 bColor = vec4(0.0);
    vec4 N = normalize(fNorm);
    mat3 tbn = mat3(tanMat*N.xyz, bitanMat*N.xyz, N.xyz);
    vec4 bumpMap = texture(bumpTex, texRes*fTexCoord);
    vec3 texN = (bumpMap.xyz*2.0 - 1.0);
    vec3 bumpN = bumpOn == true ? normalize(tbn*texN) : N.xyz;
    vec3 camSpaceNorm = vec3(view*(vec4(bumpN, N.w)));
    vec3 camSpacePos = vec3(view*worldPos);
    vec3 camSpaceViewDir = normalize(camSpacePos);
    vec3 camSpaceVec = normalize(reflect(camSpaceViewDir, camSpaceNorm));
    vec4 clipSpace = proj*vec4(camSpacePos, 1);
    vec3 NDCSpace = clipSpace.xyz/clipSpace.w;
    vec3 screenSpacePos = 0.5*NDCSpace + 0.5;
    vec3 camSpaceVecPos = camSpacePos + camSpaceVec;
    clipSpace = proj*vec4(camSpaceVecPos, 1);
    NDCSpace = clipSpace.xyz/clipSpace.w;
    vec3 screenSpaceVecPos = 0.5*NDCSpace + 0.5;
    vec3 screenSpaceVec = 0.01*normalize(screenSpaceVecPos - screenSpacePos);
    vec3 oldPos = screenSpacePos + screenSpaceVec;
    vec3 currPos = oldPos + screenSpaceVec;
    int count = 0;
    int nRefine = 0;
    float fade = 1.0;
    float fadeScreen = 0.0;
    float farPlane = 2.0;
    float nearPlane = 0.1;
    float cosAngInc = -dot(camSpaceViewDir, camSpaceNorm);
    cosAngInc = clamp(1 - cosAngInc, 0.3, 1.0);

    if(specConeRatio <= 0.1 && ssrOn == true)
    {
        while(count < 50)
        {
            if(currPos.x < 0 || currPos.x > 1 ||
               currPos.y < 0 || currPos.y > 1 ||
               currPos.z < 0 || currPos.z > 1)
                break;

            vec2 ssPos = currPos.xy;
            float currDepth = 2.0*nearPlane/(farPlane + nearPlane - currPos.z*(farPlane - nearPlane));
            float sampleDepth = 2.0*nearPlane/(farPlane + nearPlane - texture(depthTex, ssPos).x*(farPlane - nearPlane));
            float diff = currDepth - sampleDepth;
            float error = length(screenSpaceVec);

            if(diff >= 0 && diff < error)
            {
                screenSpaceVec *= 0.7;
                currPos = oldPos;
                nRefine++;
                if(nRefine >= 3)
                {
                    fade = float(count);
                    fade = clamp(fade*fade/100, 1.0, 40.0);
                    fadeScreen = distance(ssPos, vec2(0.5, 0.5))*2;
                    bColor.xyz += texture(reflTex, ssPos).xyz/2/fade*cosAngInc*(1 - clamp(fadeScreen, 0.0, 1.0));
                    break;
                }
            }
            else if(diff > error)
            {
                bColor.xyz = vec3(0);
                sampleDepth = 2.0*nearPlane/(farPlane + nearPlane - texture(depthBTex, ssPos).x*(farPlane - nearPlane));
                diff = currDepth - sampleDepth;
                if(diff >= 0 && diff < error)
                {
                    screenSpaceVec *= 0.7;
                    currPos = oldPos;
                    nRefine++;
                    if(nRefine >= 3)
                    {
                        fade = float(count);
                        fade = clamp(fade*fade/100, 2.0, 20.0);
                        bColor.xyz += texture(reflTex, ssPos).xyz/2/fade*cosAngInc;
                        break;
                    }
                }
            }
            oldPos = currPos;
            currPos = oldPos + screenSpaceVec;
            count++;
        }
    }

    Note that the second half of the code (after the else if(diff > error)) is where I cover the back faces of models (depthBTex is a depth texture rendered with front-face culling) so that the backs of models are reflected.
  15. OpenGL Voxel Cone Tracing Experiment - Part 2 Progress

      Did some debugging and found out that (because I'm using forward rendering) I had accidentally used the hi-res version of the Buddha model (over a million tris) for my SSAO and SSR. So instead of 1.0ms of vertex shader time with the low-poly model, I was getting 10ms. My SSAO is about 8.5ms now. However, when I previously reported my results, I didn't actually have any SSR turned on; with SSR enabled for the entire scene (all surfaces), it adds a further 8.8ms. I guess there's still a lot of room to optimize my SSR - when I implemented it, I was aiming more for the best quality I could get than for performance. I've since managed to reduce my SSAO to 4.7ms without too much quality loss.

      I'm trying to calculate whether deferred shading has an advantage over my current forward shading. With deferred shading, I would have to render the Buddha at full res for the position, normal and albedo textures, so that would be a fixed vertex shader cost of 30ms. At the moment, with forward shading, I render the model at full res once and at low res 7 times, which makes 17ms altogether in vertex shader costs.