Wh0p

  1. Yes, I also thought about dithering, I just wanted to make sure I didn't miss something obvious (like the colour picker trick... simple, awesome, effective, and it totally made me drop my jaw since I couldn't come up with it myself). Well, now that I know for sure that it's the discretization when rendering to the window's back buffer, I can get things done again, thanks a lot! So I guess the banding is visible in darker images due to the contrast enhancement for darker regions in human vision? I think I heard something similar to that in a lecture about visualization... The last time I heard about 30-bit color depth was in 2009, when Nvidia released a paper about the Quadro series supporting it, so thanks for the reminder, too.
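     To make the dithering idea concrete, here is a minimal sketch of the quantization step (CPU-side for clarity; in practice you'd add the noise in the final fragment pass, and the noise source here is just a placeholder for a per-pixel hash or a tiled blue-noise texture):

     #include <algorithm>
     #include <cstdint>

     // Quantize a linear intensity in [0,1] to 8 bits with dithering.
     // 'noise' should be roughly uniform in [-0.5, 0.5] and vary per pixel,
     // so the sub-LSB error turns into noise instead of visible bands.
     std::uint8_t quantizeDithered (float value, float noise)
     {
       float x = value * 255.0f + noise + 0.5f; // dither, then round
       x = std::min (std::max (x, 0.0f), 255.0f);
       return static_cast<std::uint8_t> (x);
     }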
  2. Hi, I am experiencing annoying banding artefacts on monochrome surfaces. I have narrowed the cause down to the attenuation over distance to the light source. At least I think it's that, since I made sure the ambient, diffuse, specular and all other render targets are floating point, so I shouldn't lose precision there. Playing around with my simple Phong shading also had no effect on the banding artefacts.

     This is how I calculate the attenuation (L is the not yet normalized light vector):

     return 1 / (factor * max(dot(L, L), 0.0f) + 1) - g_LightCutoff;

     I'm posting because I need some inspiration on where else to look for possible causes, since I do want to have at least some sort of attenuation for good ol' physics' sake (not that my attenuation calculation has much to do with the physics I was taught at university). In a "real" scene with more interesting texturing the banding is rather hard or even impossible to detect; it is more a conceptual matter of not having those artefacts. Here are some images to show you what I mean. If you look closely you can see the banding on the floor and the green wall:

     • Attenuation and falloff from the spot light enabled
     • Attenuation enabled, but no falloff from the spot light
     • Only the Phong shading
  3. Stencil Clear doesn't work

      If nothing else happened, this should be initialized to 0xFF... What does glGetIntegerv (GL_STENCIL_CLEAR_VALUE, ...) report? If you are clearing an FBO's stencil buffer, you might as well try glClearBufferfi (GL_DEPTH_STENCIL, 0, depth, stencil) and see what happens.
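      For reference, the two clear paths might look like this (a sketch; it assumes a current GL context, the clear values are made up, and glClearBufferfi only accepts GL_DEPTH_STENCIL with drawbuffer 0):

      // Default framebuffer: set the stencil clear value, then clear.
      glClearStencil (0);
      glClear (GL_STENCIL_BUFFER_BIT);

      // Bound FBO: clear depth and stencil of the draw framebuffer directly...
      glClearBufferfi (GL_DEPTH_STENCIL, 0, 1.0f, 0);

      // ...or just the stencil.
      GLint stencilValue = 0;
      glClearBufferiv (GL_STENCIL, 0, &stencilValue);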
  4. Nice, that's sufficient for a start. I guess I can be creative on this one - like doing things in parallel.
  5. Thanks! This is much appreciated. I guess I didn't express myself quite accurately by mentioning the term scene graph (I know what it is and only mentioned it because I'm calculating buckets for my objects while traversing and updating); I was just wondering about the priorities. @L. Spiro: Are there any further sources you can recommend on how to exploit temporal coherence? That sounded very promising and interesting to try out.
  6. Hi, as the topic states, I am currently tweaking the rendering order of my scene graph. After VFC and occlusion culling have happened, I sort the rendering lists by the objects' state changes, like:

     1. Pipeline state (enabling/disabling depth test, blending, etc.)
     2. GL program
     3. VBO
     4. Material

     I've never done a huge scene with lots and lots of different objects, and I'm just wondering if the priorities are set correctly. With this ordering, similar materials might be sorted into separate batches because their GL programs differ... however, I thought the register cache is flushed anyway when the GL program binding changes, isn't it? I am kind of walking alone in the dark regarding which properties to sort by first when rendering the objects. Looking forward to your responses!
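     To make that ordering concrete, a minimal sketch of such a sort key (the field names and types are made up; smaller keys draw first, compared most significant field first):

     #include <algorithm>
     #include <cstdint>
     #include <tuple>
     #include <vector>

     struct DrawKey
     {
       std::uint32_t pipelineState; // depth test, blending, culling, ...
       std::uint32_t program;       // GL program handle
       std::uint32_t vbo;           // vertex buffer handle
       std::uint32_t material;      // material / texture set id

       bool operator< (const DrawKey& o) const
       {
         return std::tie (pipelineState, program, vbo, material)
              < std::tie (o.pipelineState, o.program, o.vbo, o.material);
       }
     };

     void sortDrawList (std::vector<DrawKey>& drawList)
     {
       std::sort (drawList.begin (), drawList.end ());
     }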
  7. First of all, I would suggest defining your transformation matrices as uniform parameters (at some point you'd have to do this anyway). Your vertex shader might look like this:

     #version 330

     uniform mat4 ModelMatrix;
     uniform mat4 ViewMatrix;
     uniform mat4 ProjMatrix;
     uniform mat4 NormalMatrix;

     in vec3 Position;
     in vec3 Normal;
     in vec2 Texcoord;

     out vec3 px_position; // for lighting in view space
     out vec3 px_normal;   // for lighting in view space
     out vec2 px_texcoord;

     void main ()
     {
       // transform into view space
       gl_Position = ModelMatrix * vec4 (Position, 1.0);
       gl_Position = ViewMatrix * gl_Position;
       px_position = gl_Position.xyz;
       // project
       gl_Position = ProjMatrix * gl_Position;
       px_normal = (NormalMatrix * vec4 (Normal, 0.0)).xyz;
       px_texcoord = Texcoord;
     }

     You then want to set up the Model, View and Projection matrices in the host app (see the sketch after this post). The normal matrix is the inverse transpose of the model-view matrix; this has to do with non-uniform scaling not preserving normals, so for now I'd just ask you to accept it as it is, or check out the math. If you do not want to implement lighting in your fragment shader, feel free to omit the normals for now.

     I'd recommend this tutorial on how to set up your matrices correctly: http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/

     Either way, scaling is an operation you perform in object space, which means it belongs in your model matrix.

     That is harsh, but I like it. Doing this also gives you the advantage of code completion and syntax highlighting. FYI: I once made a "super include" for GLSL syntax, which defines all the types and functions; just tell your C++ IDE to treat GLSL files as C++ headers, and all the sweet benefits of auto-completion and such are ready for your GLSL code. Check it out:
     https://github.com/Wh0p/Wh0psGarbageDump/blob/master/syntax
     http://www.gamedev.net/topic/657788-glsl-syntax-highlight-and-auto-completion/
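     Host-side, the matrix setup mentioned above might look like this (a sketch; 'prog' is your linked program and the column-major float[16] arrays are assumed to exist):

     // Fetch the uniform locations once after linking...
     GLint modelLoc = glGetUniformLocation (prog, "ModelMatrix");
     GLint viewLoc  = glGetUniformLocation (prog, "ViewMatrix");
     GLint projLoc  = glGetUniformLocation (prog, "ProjMatrix");

     // ...then upload the matrices each frame before drawing.
     glUseProgram (prog);
     glUniformMatrix4fv (modelLoc, 1, GL_FALSE, modelMatrix);
     glUniformMatrix4fv (viewLoc,  1, GL_FALSE, viewMatrix);
     glUniformMatrix4fv (projLoc,  1, GL_FALSE, projMatrix);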
  8. The "min" approach seems legit to me. Multiplying the lambert lighting term with the shadow scale will just dim the shadowed areas. I think of shadowed areas as areas, where only the ambient term effects the shading of the lightsource and diffuse and speculare are zero. Having not solved your problem with this I only may recommend you this article from nvidia for edgeblurred soft shadows: http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter17.html   Theres a neat explanation with code and pretty decent results.
  9. Have a look into spatial acceleration structures to further speed up the detection process. Common tree structures are BVHs, octrees and kd-trees.
  10. @bioglaze, @JvdWulp: my clustered deferred implementation goes up to 60K lights at a smooth 60 fps :D I've been implementing an LBVH for culling lights and used it along with my tiled/clustered deferred renderer. Tree construction (which includes sorting the lights by Morton codes) and the traversal that culls lights for each cluster can all be done pretty fast with compute shaders.
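     For reference, the Morton codes come from the standard bit-interleaving trick (written here in C++ for readability; the 10 bits per axis and the normalization of coordinates to [0,1] over the scene bounds are assumptions):

     #include <algorithm>
     #include <cstdint>

     // Spread the lower 10 bits of v out so there are two zero bits
     // between each original bit.
     std::uint32_t expandBits (std::uint32_t v)
     {
       v = (v * 0x00010001u) & 0xFF0000FFu;
       v = (v * 0x00000101u) & 0x0F00F00Fu;
       v = (v * 0x00000011u) & 0xC30C30C3u;
       v = (v * 0x00000005u) & 0x49249249u;
       return v;
     }

     // 30-bit Morton code for a point with coordinates in [0,1].
     std::uint32_t morton3D (float x, float y, float z)
     {
       x = std::min (std::max (x * 1024.0f, 0.0f), 1023.0f);
       y = std::min (std::max (y * 1024.0f, 0.0f), 1023.0f);
       z = std::min (std::max (z * 1024.0f, 0.0f), 1023.0f);
       std::uint32_t xx = expandBits (static_cast<std::uint32_t> (x));
       std::uint32_t yy = expandBits (static_cast<std::uint32_t> (y));
       std::uint32_t zz = expandBits (static_cast<std::uint32_t> (z));
       return xx * 4 + yy * 2 + zz;
     }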
  11. The banding artifact is very clearly visible in the image you posted. Have a look here: http://mtnphil.wordpress.com/2013/06/26/know-your-ssao-artifacts/ There's a good explanation of how to get rid of that banding by randomizing the sampling pattern. Also, are you blurring the SSAO texture? A 3x3 Gaussian blur can help a lot against the high-frequency noise in there.
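     The 3x3 Gaussian pass is simply the kernel (1 2 1; 2 4 2; 1 2 1) / 16. A CPU-side sketch for a single-channel float image, with clamped edges (in practice you'd do this in a fragment shader over the SSAO texture):

     #include <algorithm>
     #include <vector>

     std::vector<float> blur3x3 (const std::vector<float>& src, int w, int h)
     {
       static const int kernel[3][3] = { {1, 2, 1}, {2, 4, 2}, {1, 2, 1} };
       std::vector<float> dst (src.size ());
       for (int y = 0; y < h; ++y)
         for (int x = 0; x < w; ++x)
         {
           float sum = 0.0f;
           for (int dy = -1; dy <= 1; ++dy)
             for (int dx = -1; dx <= 1; ++dx)
             {
               int sx = std::min (std::max (x + dx, 0), w - 1);
               int sy = std::min (std::max (y + dy, 0), h - 1);
               sum += kernel[dy + 1][dx + 1] * src[sy * w + sx];
             }
           dst[y * w + x] = sum / 16.0f;
         }
       return dst;
     }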
  12. Regarding your first question, from DirectXMath.h:

     // Fix-up for (1st-3rd) XMVECTOR parameters that are pass-in-register
     // for x86, ARM, and Xbox 360; by reference otherwise
     #if ( defined(_M_IX86) || defined(_M_ARM) || defined(_XM_VMX128_INTRINSICS_) ) && !defined(_XM_NO_INTRINSICS_)
     typedef const XMVECTOR FXMVECTOR;
     #else
     typedef const XMVECTOR& FXMVECTOR;
     #endif

     So in most cases FXMVECTOR is a reference to the XMVECTOR, and all functions I know of take FXMVECTORs as arguments. I do not want to start a discussion about the difference between pointers and references, but since we are talking about function arguments here, think of a reference as a pointer that can't be assigned NULL, which offers some nice advantages over pointers.

     Edit: Too slow :O
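     So a function following the library's convention just takes FXMVECTOR by value and lets the typedef decide (a trivial made-up example):

     #include <DirectXMath.h>
     using namespace DirectX;

     // The first three XMVECTOR parameters of a function should be FXMVECTOR.
     XMVECTOR Midpoint (FXMVECTOR a, FXMVECTOR b)
     {
       return XMVectorScale (XMVectorAdd (a, b), 0.5f);
     }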
  13. Deferred Context

    As far as I know, the motivation behind deferred contexts is ordering the draw sequence across threads. As you already said, you submit draw calls and state changes to your deferred context, and they are appended to a command list for the respective context on each thread. After that, you can execute each thread's command list on your main thread and present the scene.
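    Roughly, the D3D11 calls involved look like this (a sketch; error handling, synchronization and the device/context creation are omitted, and 'device'/'immediateContext' are assumed to exist):

    // Worker thread: record draw calls into a deferred context.
    ID3D11DeviceContext* deferred = nullptr;
    device->CreateDeferredContext (0, &deferred);
    // ... issue state changes and draw calls on 'deferred' ...
    ID3D11CommandList* commandList = nullptr;
    deferred->FinishCommandList (FALSE, &commandList);

    // Main thread: replay the recorded commands on the immediate context.
    immediateContext->ExecuteCommandList (commandList, FALSE);
    commandList->Release ();
    deferred->Release ();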
  14. From the reference pages: as I understand it, this is exactly what you want? In addition to what beans said, the code of your draw call might be helpful.
  15. Depends... I sort my objects by shader first, then mesh, then pipeline state changes (like enabling depth/cull/blend and such). In general, the benefit of sorting is minimizing API overhead; since most of the objects share the same shader, I figured I'd sort the objects by their shader first, as geometry changes more often than the pipeline state, and so on. I do not have any proof that this is the best way to do it, but this is the way I am doing it and it works pretty nicely.