directNoob

Member

  • Content Count: 224
  • Joined
  • Last visited

Community Reputation: 130 Neutral

About directNoob

  • Rank: Member
  1. Hi, and thanks. I installed the drivers and now it works. My GL context now reports version 4.0, and all those extensions are listed in the GL_EXTENSIONS string. Thanks
  2. Hi. OK, I haven't installed the beta driver yet, but it's downloading right now. What I don't understand is the relation between the OpenGL version returned by glGetString and the spec's statement about being "written against OpenGL ...". The tessellation spec says "3.2 compatibility profile", and my GL context is a 3.2 compatibility profile. So what exactly is the problem? And how can I really know which ARB extensions are supported by a specific GL version? Also, in the spec, what exactly does "Complete. Approved by the ARB at ..." mean? What is meant by "complete"? What is meant by "approved"? I mean, is the spec included in the registry after several vendors have implemented it, or before? Thanks
  3. Hi. I recently noticed that some new OpenGL ARB extensions have been approved. One of them is GL_ARB_tessellation_shader, no. 91. The spec says this extension is "complete" and "approved", and that "This extension is written against the OpenGL 3.2 (Compatibility Profile) Specification." My OpenGL context reports version 3.2 (I use Qt) and uses the compatibility profile. The newest AMD drivers are installed and I have a Radeon 5750. Now the question: the extension list from glGetString does not contain the new extension. Why is that, and what does the listing of an ARB extension actually mean? Does it mean that it is widely supported, or that it is meant to become widely supported? Thanks [Edited by - directNoob on March 26, 2010 3:30:50 PM]
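     For reference, a minimal sketch of how one might enumerate extensions at runtime in a 3.x context (this assumes a current context and that glGetStringi has already been loaded via an extension loader; hasExtension is a made-up helper name):

        #include <cstring>

        // Returns true if 'name' appears in the driver's extension list.
        bool hasExtension( const char* name )
        {
            GLint count = 0;
            glGetIntegerv( GL_NUM_EXTENSIONS, &count );   // indexed query, available since GL 3.0
            for( GLint i = 0; i < count; ++i )
            {
                const char* ext = (const char*)glGetStringi( GL_EXTENSIONS, (GLuint)i );
                if( ext && std::strcmp( ext, name ) == 0 )
                    return true;
            }
            return false;
        }

        // Usage, with a current context:
        //   if( !hasExtension( "GL_ARB_tessellation_shader" ) )
        //       ... // the driver does not expose tessellation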
  4. Hi. Currently I'm totally stuck. I'm trying to do HDR rendering with the LogLuv approach. All of this is done with OpenGL and GLSL. (I pieced it together from various sources on the internet; I can post them at the end if anyone is interested.)

     1. The scene is rendered into an RGBA8 render target with LogLuv encoding.
     2. A histogram is created via vertex shader scattering.
     3. A summed area table (SAT) of the histogram is created, which gives the CDF of the input image.
     4. The histogram and the SAT, each stored in a texture, are available to the tone mapping pixel shader.

     Here is an image of the histogram (top) and the SAT (bottom): The green bar (#3) in the histogram (top) is the center; this is where log2(L=1)=0 values are mapped to. This means that L<1 values (i.e. -64<=logL<0) end up on the left side of the green bar and L>1 values (i.e. 0<logL<=64) on the right. #1 are very dark values, i.e. near black. #2 are somewhat dark values; they are < 1.0.

     All of this is done per frame on the GPU. The histogram has 2000 bins, and the input image is a rendered scene (which is very dark) at 800x600. The histogram is built with blending, which accumulates 1/num_pixels at each scattered fragment. This means that if the image is pure white, the histogram has a single white bar in the rightmost bin. The bins are selected in the vertex shader like so:

        void main()
        {
            // Get the scene's LogLuv value at this pixel (tex coord!).
            // Note that the pixel value is fetched in the vertex shader.
            // Encoded:
            //   x: Ue
            //   y: Ve
            //   z: high LogLe
            //   w: low LogLe
            // See www.realtimecollisiondetection.net for LogLuv HDR (!!!)
            vec4 logLuv = texture2D( tex_scene, gl_MultiTexCoord0.xy );

            // Decode LogLe (logarithmic luminance); logLe in [0,255].
            float logLe = logLuv.z * 254.0 + logLuv.w;

            // Scale to the range [-64,64].
            logLe = ( logLe - 127.0 ) / 2.0;

            // Scatter logLe to the proper histogram bin.
            // The w component is set to 64, the maximum |logL| value.
            // This means -64 is scattered to -1 and +64 to +1 after the
            // homogeneous divide; the screen space transform then maps
            // this into the proper histogram bin.
            gl_Position = vec4( logLe, 0.0, 0.0, 64.0 );
        }

     I don't want to blow up the post. My first question is: is the scattering done correctly? I mean, can I do it like this, or do I have to do some kind of scaling before the scattering? The histogram does look plausible: the scene has two very dark lights, which can be seen in the histogram. Please remember that logL values are accumulated and stored in the histogram. Thanks Alex
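     For context, here is a minimal sketch of the host-side setup around that vertex shader, assuming additive blending into a 2000x1 histogram target and one point per source pixel (histoFBO, scatterProgram and the way per-point texture coordinates are supplied are assumptions, not the actual code):

        // Bind the 2000x1 histogram render target and the scatter program
        // (both assumed to exist; error checking omitted).
        glBindFramebuffer( GL_FRAMEBUFFER, histoFBO );
        glViewport( 0, 0, 2000, 1 );                 // one viewport column per bin
        glClear( GL_COLOR_BUFFER_BIT );

        glUseProgram( scatterProgram );              // vertex shader shown above; its fragment
                                                     // shader outputs 1.0 / (800.0 * 600.0)

        // Accumulate the contributions: additive blending, no depth test.
        glDisable( GL_DEPTH_TEST );
        glEnable( GL_BLEND );
        glBlendFunc( GL_ONE, GL_ONE );

        // One point per pixel of the 800x600 source image; each point carries
        // the texture coordinate of "its" pixel (e.g. from a vertex array)
        // so the shader can fetch the LogLuv value.
        glDrawArrays( GL_POINTS, 0, 800 * 600 );

        glDisable( GL_BLEND );
        glBindFramebuffer( GL_FRAMEBUFFER, 0 );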
  5. directNoob

    LogLuv and HDR

    I'm sorry. Alex [Edited by - directNoob on December 7, 2009 6:30:20 PM]
  6. directNoob

    SSAO random texture

    Hi. I implemented SSAO from this article and used the simple rand() function from C++. Just fill the texture with rand() and use it. It's been a while since I implemented it, but the idea, if I remember right, is to randomly sample depth values using the vectors from that texture.
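    A minimal sketch of what "fill the texture with rand()" can look like (size and format are arbitrary here; this is not the article's exact code):

        #include <cstdlib>
        #include <vector>

        // Fill a small RGB texture with pseudo-random bytes and upload it.
        GLuint createRandomTexture( int w = 64, int h = 64 )
        {
            std::vector<unsigned char> data( w * h * 3 );
            for( size_t i = 0; i < data.size(); ++i )
                data[i] = (unsigned char)( rand() % 256 );   // plain C rand() is enough here

            GLuint tex = 0;
            glGenTextures( 1, &tex );
            glBindTexture( GL_TEXTURE_2D, tex );
            glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
            glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
            glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT );      // tile across the screen
            glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT );
            glTexImage2D( GL_TEXTURE_2D, 0, GL_RGB8, w, h, 0,
                          GL_RGB, GL_UNSIGNED_BYTE, &data[0] );
            return tex;
        }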
  7. Hi. I currently need to access a texture on the GPU in a way that returns black when I sample outside the [0,1] range. I need this for a summed area table after the histogram creation; all of this is done on the GPU. I've looked at the OpenGL border color, but I don't know whether that is the best solution or whether there are other solutions. Thanks Alex
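     The border-color route mentioned above looks roughly like this (a sketch; satTexture stands for the already-created SAT texture id):

        // Return black when sampling outside [0,1]: clamp to a black border.
        const GLfloat black[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
        glBindTexture( GL_TEXTURE_2D, satTexture );
        glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER );
        glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER );
        glTexParameterfv( GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, black );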
  8. To show you exactly what I mean, here are the pics. Here is the image where the untouched depth values are visualized. The pixel shader is:

        uniform sampler2DShadow texDepth;

        void main()
        {
            float fDepth = shadow2D( texDepth, vec3( gl_TexCoord[0].xy, 1.0 ) ).r;
            gl_FragColor = vec4( fDepth );
        }

     No depth comparison is done. Now, here is the good depth buffer. The shader here is:

        uniform sampler2DShadow texDepth;

        void main()
        {
            float fDepth = pow( shadow2D( texDepth, vec3( gl_TexCoord[0].xy, 1.0 ) ).r, 15.0 );
            gl_FragColor = vec4( fDepth );
        }

     As you can see, I have to raise the read depth values to a power! I don't need the z value itself, I just need the depth buffer values in the range [0,1]. Can someone please explain whether this is the usual way, or whether it is possible to get the "corrected" depth buffer values from OpenGL itself by using depth textures with GL_DEPTH_COMPONENT, instead of using a self-baked color depth texture or the shader approach of raising the values to a power? I don't use the texture comparison mode, so shadow2D just returns the depth value itself, i.e.

        glTexParameteri( tt, GL_TEXTURE_COMPARE_MODE, GL_NONE );

     It would be nice if someone could clarify what's going on! Thanks Alex P.S. Oh yes, near=0.1, far=1000.0 in both pictures. It's nearly the same for near=1.0. I don't want to use larger near values, because that clips geometry away!
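     As an aside on the visualization: the pow() is just a display trick. With a standard OpenGL perspective projection (note: not the custom matrix discussed in the other thread) one can instead recover a linear view-space depth from the stored [0,1] value and visualize that. A sketch of the math as a plain C++ helper; in practice it would live in the fragment shader:

        // d: depth-buffer value in [0,1]; n/f: near and far plane distances.
        float linearizeDepth( float d, float n, float f )
        {
            float ndc = d * 2.0f - 1.0f;                           // window [0,1] -> NDC [-1,1]
            return ( 2.0f * n * f ) / ( f + n - ndc * ( f - n ) ); // view-space distance in [n,f]
        }

        // With n = 0.1 and f = 1000, a buffer value of 0.999 already corresponds to
        // a view-space depth of only about 91, which is why the raw buffer looks
        // almost uniformly white.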
  9. OK, I think I couldn't make clear what's wrong. Imagine the camera is at the origin. Since I use left-handed coordinates, z>0.

     View matrix: the view matrix V is the identity matrix, so an object in world space is also in view space without any transformation.

     Projection matrix: I use the projection matrix P noted above. This matrix can be found in Real-Time Rendering, on the D3D website, in 3D Computer Graphics (Watt), and in many other places.

     Let's do some numerical math, with p the point in projection space and v the vertex in view space, so p = P * v. View space vectors are column vectors with w=1.0, and far=100.0, near=1.0, fov=pi/4, aspect=... in each case.

     For v=[0,0,5]: p = [0, 0, 4.04..., 5.0]; after the homogeneous divide, p.z = 0.80...
     For v=[0,0,10]: p = [0, 0, 9.09..., 10.0]; after the homogeneous divide, p.z = 0.909...
     For v=[0,0,1.5]: p = [0, 0, 0.50..., 1.5]; after the homogeneous divide, p.z = 0.336...

     As you can see, the projected z values (i.e. the depth buffer values) approach 1.0 very quickly! Is this correct? OK, the depth buffer values are fine-grained near the near plane, but is this intended? Shouldn't those values be scaled? I mean, what are those depth values worth if I only have good precision in the depth range from 1.0 to 2.0, and from 2.0 to 100.0 the values are all nearly the same? [Edited by - directNoob on August 19, 2009 7:47:44 PM]
  10. Still no luck; I can't fix it. Is OpenGL doing any transformation to the z value afterwards? And I don't really see the relation to precision here, because the formula still pushes the depth values toward the end of the depth range; see the calculation I posted before. Thanks P.S. I use an FBO's depth attachment to write out the depth values, i.e. they are written out automatically by OpenGL.
  11. Hi, and thanks. But even if I use n=1.0 and f=100.0, the depth buffer values are still too bright! If I use n=10.0, the depth values look good; even with n=5.0 I get quite good results. So, what are good values for n and f? What do you set those values to? I mean, if n=10.0, objects can never get close to the camera! That looks rather ugly. Thanks Alex P.S. I use a 32-bit float depth buffer!
  12. Hi. First of all, I don't use the OpenGL view/projection matrix functions; I load the matrices myself. I use column vectors and a left-handed coordinate system, just like D3D, i.e. the vertex coordinates are defined in positive z space. When I transform the camera, everything seems to be OK; the camera goes just where I want it to be, and everything seems to render fine.

     Some days ago I implemented SSAO and realized that the depth buffer is very bright. I mean, the depth buffer values are very close to 1.0. If I raise the depth values to a power in the pixel shader, e.g. pow( depth, 30.0 ), the depth buffer looks just fine: like a depth buffer, and not, as before, a white plane. So the depth values are there.

     Then I looked at the projection matrix. It is the one from the book Real-Time Rendering; this matrix can be found on nearly every other website as well, including Microsoft's D3D site. It is the following:

        vec4_t vcX( 2.0f*n/width, 0.0f,          0.0f,      0.0f ) ;
        vec4_t vcY( 0.0f,         2.0f*n/height, 0.0f,      0.0f ) ;
        vec4_t vcZ( 0.0f,         0.0f,          f/(f - n), -2.0f*n*f/(f-n) ) ;
        vec4_t vcW( 0.0f,         0.0f,          1.0f,      0.0f ) ;

     I hope the formatting is OK! Now let's quickly look at the values of the projected z coordinate, which goes into the depth buffer. Multiplying the view space vector by the projection matrix gives

        z_p = ( z*f/(f-n) - 2.0*n*f/(f-n) ) / w_p
        w_p = z

     where z is the view space z value, f is the far plane distance and n is the near plane distance. Now let's assume f=1000, n=0.1, z=1.0 and calculate (approximating f-n by 1000):

        z_p = ( 1.0*1000.0/1000.0 - 2.0*0.1*1000.0/1000.0 ) / 1.0 = 1.0 - 2.0*0.1 = 0.8

     I hope this is clear enough. As you can see, the view space value z=1.0 maps to the depth value z_p=0.8, and if you use z=2.0, then z_p=0.9! So what is happening here? My depth buffer's values all lie in the range [0.8, 1.0]. What did I do wrong, or what's wrong with the projection matrix? Thanks Alex
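     To make those numbers easy to reproduce, here is a tiny sketch that evaluates that z row, followed by the homogeneous divide, for a few view-space depths (same formula as above, nothing else assumed):

        #include <cstdio>

        int main()
        {
            const float n = 0.1f, f = 1000.0f;
            const float zs[] = { 1.0f, 2.0f, 10.0f, 100.0f, 1000.0f };

            for( int i = 0; i < 5; ++i )
            {
                float z  = zs[i];
                // z row of the matrix above, then divide by w_p = z.
                float zp = ( z * f / ( f - n ) - 2.0f * n * f / ( f - n ) ) / z;
                std::printf( "view z = %7.1f  ->  z_p = %f\n", z, zp );
            }
            return 0;
        }

        // Prints roughly 0.80, 0.90, 0.98, 0.998 and 1.0, i.e. everything beyond
        // z = 1 is squeezed into [0.8, 1.0], exactly as described above.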
  13. directNoob

    glsl compilation error line numbers?

    What? You put four shaders into one shader object? I never thought about that. And what, #include in the shader!? I have to check that out! If you put multiple shader sources into one shader object, how can you distinguish between them during linking, and how do you use the corresponding functions from the animation shader or the light shader at the particular time you need them? Four shaders in one shader object: how do you select the right one when rendering?
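    For what it's worth, OpenGL also allows several shader objects of the same stage to be attached to one program and linked together: one defines main(), the others only supply functions. A rough sketch of that variant (mainSrc and lightingSrc are placeholder source strings; lightingSrc defines functions that mainSrc only declares):

        GLuint mainVS     = glCreateShader( GL_VERTEX_SHADER );
        GLuint lightingVS = glCreateShader( GL_VERTEX_SHADER );
        glShaderSource( mainVS,     1, &mainSrc,     NULL );
        glShaderSource( lightingVS, 1, &lightingSrc, NULL );
        glCompileShader( mainVS );
        glCompileShader( lightingVS );

        GLuint prog = glCreateProgram();
        glAttachShader( prog, mainVS );      // both objects end up in the same program...
        glAttachShader( prog, lightingVS );  // ...and the linker resolves the function calls
        glLinkProgram( prog );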
  14. directNoob

    glsl compilation error line numbers?

    Yes, but... my point is: why provide an argument for the number of strings if the implementation just makes one block of code out of them anyway? The shader source is just a dummy shader for testing the GLSL API, so please don't worry about the shader code; it's rubbish on purpose. Written down, the OpenGL function would have to look something like this (don't worry about the syntax, it's just pseudo code indicating how):

        void glShaderSource( GLuint shader, GLsizei count, const GLchar** string, const GLint* len )
        {
            char* cat_string = new ... ;
            ...
            // concatenate the strings... but why?
            for( int i = 0; i < count; ++i )
            {
                cat_string += string ...
            }
            ...
        }

    So why concatenate the strings at all? That is the question. The function could just as well look like this:

        void glShaderSource( GLuint shader, const GLchar* string )
        {
            char* cat_string = new ... ;
            // oops, nothing to do, the concatenated string is already there...
            cat_string = string ...
        }

    Do you see what I mean? It would make perfect sense (to me) if OpenGL treated the source passed to glShaderSource as a multi-line shader source when I use the count argument, rather than as a single-line source even though I use the count argument. But maybe I don't really understand how to use the function in all its flavors! I don't see any reason to put multiple lines of shader source into one of the strings passed via the third parameter. Who does something like that?
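    One common reason for the count/array form is prepending shared snippets (a #version line, #defines, common helper functions) without concatenating the strings yourself: the pieces are compiled in order as if they were one source. A sketch (commonFunctionsSrc and mainBodySrc are just placeholders):

        // Three pieces passed as separate strings; the compiler sees their concatenation.
        const GLchar* pieces[3] =
        {
            "#version 120\n#define MAX_LIGHTS 4\n",    // shared header
            commonFunctionsSrc,                        // assumed: common helper functions
            mainBodySrc                                // assumed: the body containing main()
        };

        GLuint shader = glCreateShader( GL_FRAGMENT_SHADER );
        glShaderSource( shader, 3, pieces, NULL );     // NULL lengths -> strings are null-terminated
        glCompileShader( shader );

    A #line directive inside one of the pieces can also be used to keep compiler error line numbers meaningful per piece.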
  15. directNoob

    so whats in opengl version 3?

    What about VBOs? I have never used immediate mode drawing. If you came from D3D, you wouldn't even have thought that something like immediate mode exists... Alex P.S. Stop using immediate mode now! Use VBOs for drawing!
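    A minimal sketch of VBO drawing for anyone still on immediate mode (fixed-function vertex arrays, GL 1.5-era calls; the triangle data is arbitrary):

        // Upload three 2D vertices once...
        const GLfloat verts[] = { -0.5f, -0.5f,   0.5f, -0.5f,   0.0f, 0.5f };

        GLuint vbo = 0;
        glGenBuffers( 1, &vbo );
        glBindBuffer( GL_ARRAY_BUFFER, vbo );
        glBufferData( GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW );

        // ...then, every frame: point the vertex array at the buffer and draw.
        glBindBuffer( GL_ARRAY_BUFFER, vbo );
        glEnableClientState( GL_VERTEX_ARRAY );
        glVertexPointer( 2, GL_FLOAT, 0, (const GLvoid*)0 );   // offset 0 into the bound VBO
        glDrawArrays( GL_TRIANGLES, 0, 3 );
        glDisableClientState( GL_VERTEX_ARRAY );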