jmaupay

Members
  • Content count

    66
Community Reputation

151 Neutral

About jmaupay

  • Rank
    Member
  1. On Windows 7, when creating a fullscreen swap chain with no other window occluding it, CreateSwapChain randomly returns DXGI_STATUS_OCCLUDED instead of S_OK. It seems related to the Aero settings. Has anyone else experienced this? Thank you!
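    For what it's worth, a minimal C++ sketch of how that return code is often handled (assuming the usual IDXGIFactory::CreateSwapChain call; this does not explain the Aero behaviour described above). The key point is that DXGI_STATUS_OCCLUDED is a success status, not an error:

```cpp
#include <dxgi.h>

// Sketch only: the factory, device and swap-chain description are assumed to
// already exist; error handling beyond the status check is omitted.
HRESULT CreateSwapChainChecked(IDXGIFactory* pFactory, IUnknown* pDevice,
                               DXGI_SWAP_CHAIN_DESC* pDesc, IDXGISwapChain** ppSwapChain)
{
    HRESULT hr = pFactory->CreateSwapChain(pDevice, pDesc, ppSwapChain);
    if (FAILED(hr))
        return hr;                       // real failure (bad parameters, out of memory, ...)

    if (hr == DXGI_STATUS_OCCLUDED)
    {
        // DXGI_STATUS_OCCLUDED is a *success* code: the swap chain was created,
        // but DXGI reports the fullscreen content as not currently visible.
        // A common pattern is to keep running and poll Present(0, DXGI_PRESENT_TEST)
        // until Present stops returning DXGI_STATUS_OCCLUDED.
    }
    return hr;
}
```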
  2. In a GLSL fragment shader, I get a different result from a texture fetch (on a mipmapped texture) depending on whether it is called inside an "if" or outside: inside an "if" statement the edges of the polygons become visible; outside any "if" statement the fetch is correct. I simplified the code as much as possible, and the problem is still visible with this piece of code: if (gl_FragCoord.x > 500.) { FragColor = texture2D(gUniTextureUnit0, gl_TexCoord[0].st) ; } else { FragColor = texture2D(gUniTextureUnit0, gl_TexCoord[0].st) ; } In that case a line appears at x=500, as if the texture fetch used a non-mipmapped texture instead of a mipmapped one. Tested on GeForce 7/8/9 with various driver versions. Any idea? [Edited by - jmaupay on July 7, 2009 9:17:16 AM]
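    Not an authoritative answer, but a frequently cited cause for exactly this symptom is that implicit-LOD fetches (texture2D) rely on derivatives that are undefined inside non-uniform control flow such as an "if" on gl_FragCoord, so the mip level can go wrong along the branch boundary. A sketch of the usual workaround, doing the fetch once outside the divergent branch (GLSL embedded as a C++ string; the uniform name is the one from the post, everything else is illustrative):

```cpp
// Sketch: fetch in uniform control flow, then branch only on the fetched value.
const char* kFragSrc = R"GLSL(
uniform sampler2D gUniTextureUnit0;
void main()
{
    // Single fetch outside the "if": mip selection stays well defined here.
    vec4 texel = texture2D(gUniTextureUnit0, gl_TexCoord[0].st);

    if (gl_FragCoord.x > 500.0)
        gl_FragColor = texel;   // the two sides can still be treated differently later
    else
        gl_FragColor = texel;
}
)GLSL";
```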
  3. Could you tell me the correct calculation to go from a z in post-perspective (NDC) space (Zp) to a gl_FragDepth value in a shader, so as to obtain the same result as the fixed-function pipeline? First transform Zp: // Zp is in [-1,1]; transform it to [0,1] => g float g = Zp * 0.5 + 0.5 ; I found several different formulas here and there: gl_FragDepth = (gl_DepthRange.far * gl_DepthRange.near) / (gl_DepthRange.far - g * (gl_DepthRange.far - gl_DepthRange.near)) ; or gl_FragDepth = gl_DepthRange.diff / gl_DepthRange.far * g ; or gl_FragDepth = (1/gl_DepthRange.near - 1/g) / (1/gl_DepthRange.near - 1/gl_DepthRange.far) ; or ... And also, where should a constant offset be added to simulate glPolygonOffset ?
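    For reference, the viewport transform in the OpenGL specification maps NDC depth linearly, not with the reciprocal forms above: with g = Zp * 0.5 + 0.5, the window-space depth is gl_DepthRange.near + g * gl_DepthRange.diff. A small C++ sketch of the same arithmetic (function and parameter names are mine):

```cpp
// Fixed-function mapping from NDC depth (in [-1,1]) to window depth,
// i.e. the value gl_FragDepth should receive to match the non-shader path.
// "near" and "far" play the role of gl_DepthRange.near / gl_DepthRange.far.
float NdcToWindowDepth(float zNdc, float near, float far)
{
    float g = zNdc * 0.5f + 0.5f;     // [-1,1] -> [0,1]
    return near + g * (far - near);   // == gl_DepthRange.near + g * gl_DepthRange.diff
}

// A constant "polygon offset"-style bias would be added to this window-space
// value. Note that glPolygonOffset's real offset also has a slope-dependent
// term (m * factor + r * units), which a plain constant does not reproduce.
float NdcToWindowDepthBiased(float zNdc, float near, float far, float constantOffset)
{
    return NdcToWindowDepth(zNdc, near, far) + constantOffset;
}
```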
  4. OpenGL OpenGL + VBO + C#

    I don't know C#, but you may want to check: - is that extension supported by your graphics card? - how is the glGenBuffersARB entry point assigned: how do you load the extension? Do you use a loader like GLee or GLEW (I don't know the C# equivalent), or do you have to do the assignment yourself (gr.glGenBuffersARB = wglGetProcAddress ...)?
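    A C-style sketch of the two checks that answer describes (the original question is about C#, so this is only illustrative, not a drop-in fix; it assumes a current GL context on Windows):

```cpp
#include <windows.h>
#include <GL/gl.h>
#include <string.h>

// Function pointer type for glGenBuffersARB (ARB_vertex_buffer_object).
typedef void (APIENTRY *PFNGLGENBUFFERSARBPROC)(GLsizei n, GLuint* buffers);
static PFNGLGENBUFFERSARBPROC glGenBuffersARB = NULL;

// 1) Is the extension advertised by the driver? (simplistic substring check)
int HasVboExtension(void)
{
    const char* ext = (const char*)glGetString(GL_EXTENSIONS);
    return ext != NULL && strstr(ext, "GL_ARB_vertex_buffer_object") != NULL;
}

// 2) Load the entry point yourself if no loader (GLee/GLEW) is used.
int LoadVboEntryPoints(void)
{
    glGenBuffersARB = (PFNGLGENBUFFERSARBPROC)wglGetProcAddress("glGenBuffersARB");
    return glGenBuffersARB != NULL;
}
```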
  5. Is there an OpenGL extension to control how the MSAA resolve is done? If I understand correctly, it is possible to modify how the samples are resolved to a fragment, and this is required for Direct3D 10; the GF80 / R600 seem to have that capability. Is there an OpenGL extension for it (I suppose it would be a vendor extension for the moment), or any other way of doing it?
  6. If I understand correctly, you are trying to: - render to a texture; - read the texture back to an array in RAM ("memory bitmap"). Render to a texture: the recommended method in OpenGL on modern graphics cards is the Framebuffer Object (FBO). You create a new FBO, then attach a texture to it. There are plenty of demos; have a look at the article here at gamedev. Note that you need neither a depth texture nor MRT: a single FBO with one texture attached is enough. Then you need to read the texture values back (FBO contents live in GPU memory): - an inefficient method is to fetch the values with glGetTexImage; - a more efficient method uses the Pixel Buffer Object extension (PBO): ReadPixels() into a PBO. Examples can be found at gpgpu, for instance (see "fast transfer"): gpgpu
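    A compact C++ sketch of the path described above (EXT_framebuffer_object for render-to-texture, plus a pixel-pack PBO for the readback). Entry-point loading, error checks and the actual drawing are omitted, and the sizes are placeholders:

```cpp
// Render-to-texture with an FBO, then read the pixels back through a PBO.
// Assumes the GL_EXT_framebuffer_object and GL_ARB_pixel_buffer_object
// entry points are already loaded (e.g. via GLEW).
GLuint tex, fbo, pbo;
const int W = 512, H = 512;

// 1) Colour texture that will receive the rendering.
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, W, H, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

// 2) FBO with that texture as its single colour attachment (no depth, no MRT).
glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, tex, 0);

// ... draw the scene here ...

// 3) Readback: ReadPixels into a pixel-pack PBO, then map it to get a RAM pointer.
glGenBuffersARB(1, &pbo);
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pbo);
glBufferDataARB(GL_PIXEL_PACK_BUFFER_ARB, W * H * 4, NULL, GL_STREAM_READ_ARB);
glReadPixels(0, 0, W, H, GL_RGBA, GL_UNSIGNED_BYTE, 0);   // 0 = offset into the PBO
void* ram = glMapBufferARB(GL_PIXEL_PACK_BUFFER_ARB, GL_READ_ONLY_ARB);
// ... copy/use "ram" as the memory bitmap ...
glUnmapBufferARB(GL_PIXEL_PACK_BUFFER_ARB);
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, 0);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
```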
  7. Deferred rendering + MRT + MSAA

    Quote:Original post by eq Yes? I did it amongst other things on Xbox (original, not 360). Quote:(I thought that resolve ("blit") functions on application frame buffer are quite new) ? It's been around since DX9 at least, I have a 3 year old laptop with a GeForce (5600 Go?) that supports PS/VS 2.0 and there's no problem there... Edit: DX9 was released late 2002 (according to wikipedia ). Thank you for the information. Sorry, sometimes I feel I'm working with brand-new features, and when I see that you were already doing this three years ago I realize I'm just behind the times.
  8. Deferred rendering + MRT + MSAA

    Quote:Original post by AndyTX Custom resolves are supported on G80 and R600 (they are required for D3D10). I assume the G80 OpenGL extensions expose something similar, but I'd have to check them out. Excellent! Does someone know the name of that extension? (I should probably create a new thread on the OpenGL forum.)
  9. Deferred rendering + MRT + MSAA

    Quote:Original post by AndyTX That said, custom resolves can be implemented on the G80 So if I only target a GeForce 8800, I can program the resolve function myself, is that right? Then I could have an anti-aliased resolve for the color buffer and no anti-aliasing for the positions and normals ... Is that correct? Can I do it right now, and which function do I call (on XP/OpenGL)? Or do you mean that I have to render to a 4X-sized buffer (for 4X AA) and then code the AA myself?
  10. Deferred rendering + MRT + MSAA

    Quote:Original post by eq I abandoned my deferred renderer a couple of years ago. But I recall that the artifact that bothered me most was that sometimes it looked like there was no AA, and most of the time the "edges" were enhanced, like a Sobel edge-detection filter albeit more subtle. And "a couple of years ago" you were already able to render into an AA buffer? (I thought that resolve ("blit") functions on application framebuffers were quite new.)
  11. Deferred rendering + MRT + MSAA

    Quote:Original post by eq Deferred rendering: DefPixel = F((Normal0 + Normal1 + Normal2 + Normal3) / 4, (Position0 + Position1 + Position2 + Position3) / 4, (Color0 + Color1 + Color2 + Color3) / 4) OK, I see. So with a deferred renderer, the post-processing shader receives the "mean" (average) of the normal, position and color sub-samples. Out of curiosity, what does the result look like? A blurred image? An aliased image? I imagine that post-processing shaders (lighting ...) don't work nicely with that sort of input? (Does anyone have a screenshot?) [Edited by - jmaupay on May 15, 2007 6:10:50 AM]
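    A tiny worked example of why that averaged input misbehaves at silhouettes (my own illustration, not from the quoted post): averaging the sub-sample normals of an edge pixel can give a vector that points in neither surface's direction and is far from unit length, so any lighting computed from it is wrong.

```cpp
#include <cstdio>
#include <cmath>

struct Vec3 { float x, y, z; };

int main()
{
    // Edge pixel covered half by a surface facing +Z and half by one facing +X.
    Vec3 samples[4] = { {0,0,1}, {0,0,1}, {1,0,0}, {1,0,0} };

    // MSAA-style resolve: plain average of the stored normals.
    Vec3 avg = {0, 0, 0};
    for (const Vec3& n : samples) { avg.x += n.x; avg.y += n.y; avg.z += n.z; }
    avg.x /= 4; avg.y /= 4; avg.z /= 4;

    float len = std::sqrt(avg.x*avg.x + avg.y*avg.y + avg.z*avg.z);
    // Prints roughly (0.5, 0, 0.5) with length ~0.707: a direction belonging to
    // neither surface and no longer unit length, so the deferred lighting pass
    // receives a meaningless "normal" along the edge.
    std::printf("averaged normal = (%g, %g, %g), length = %g\n",
                avg.x, avg.y, avg.z, len);
    return 0;
}
```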
  12. In short: is deferred shading a good solution on modern hardware (let's say a GF8800), and how do you get hardware anti-aliasing (MSAA) with it? Long explanation: I had a look at these threads: - thread424468 - thread396527 and : - Yann L questions For the moment I have implemented render-to-texture (FBO) in my engine (with MSAA, using glBlitFramebufferEXT) and it works pretty well, but all the calculations (shadow mapping ...) are done during that first pass; then I only have post-processing passes for some simple 2D effects. Now I would like to do the shadows/lighting/fog/etc. as post-processing passes. My first idea is to use Multiple Render Targets (MRT) in the first pass to implement deferred shading, but I have doubts: a) If I understand Yann correctly, it is not a good solution (not even possible?) to use MSAA MRTs for the deferred-rendering buffers (diffuse / position / normals) and then resolve (blit) them to textures, am I right? Does this give strange values for positions/normals? Is it possible to use non-AA renderbuffers for positions/normals? b) What are the alternative solutions? (I don't want to do the AA myself, please.) Yann says he has a first pass that renders into MSAA buffers and then a second pass that renders the deferred buffers into non-AA buffers? Could the first pass also be used as a Z-prepass so that the second pass is very cheap? c) Other ideas? Blending? Avoid deferred altogether?
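    For context, a sketch of the "MSAA FBO resolved with glBlitFramebufferEXT" setup mentioned above (EXT_framebuffer_multisample / EXT_framebuffer_blit; the size and sample count are placeholders and error checks are omitted):

```cpp
// Multisampled render target resolved ("blitted") into a single-sample FBO
// that has a texture attached, which post-processing passes can then sample.
GLuint msFbo, msColorRb, msDepthRb, resolveFbo, resolveTex;
const int W = 1024, H = 768, SAMPLES = 4;

// Multisampled FBO: colour + depth renderbuffers.
glGenRenderbuffersEXT(1, &msColorRb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, msColorRb);
glRenderbufferStorageMultisampleEXT(GL_RENDERBUFFER_EXT, SAMPLES, GL_RGBA8, W, H);
glGenRenderbuffersEXT(1, &msDepthRb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, msDepthRb);
glRenderbufferStorageMultisampleEXT(GL_RENDERBUFFER_EXT, SAMPLES, GL_DEPTH_COMPONENT24, W, H);
glGenFramebuffersEXT(1, &msFbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, msFbo);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                             GL_RENDERBUFFER_EXT, msColorRb);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                             GL_RENDERBUFFER_EXT, msDepthRb);

// Single-sample FBO with a texture attachment, used as the resolve target.
glGenTextures(1, &resolveTex);
glBindTexture(GL_TEXTURE_2D, resolveTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, W, H, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glGenFramebuffersEXT(1, &resolveFbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, resolveFbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, resolveTex, 0);

// Per frame: render the scene into msFbo, then resolve it into resolveFbo.
glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, msFbo);
glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, resolveFbo);
glBlitFramebufferEXT(0, 0, W, H, 0, 0, W, H, GL_COLOR_BUFFER_BIT, GL_NEAREST);
```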
  13. OK, I think I found the answer in the Framebuffer Object spec http://www.opengl.org/registry/specs/EXT/framebuffer_object.txt : (an attachable "image" is either a "renderbuffer" or a "texture") << By default, the GL uses the window-system-provided framebuffer. The storage, dimensions, allocation, and format of the images attached to this framebuffer are managed entirely by the window-system. Consequently, the state of the window-system-provided framebuffer, including its **images, can not be changed by the GL**, nor can the window-system-provided framebuffer itself, or its images, be deleted by the GL. >> So as I understand it: OpenGL cannot modify the images of the window-system-provided framebuffer.
  14. I would like to change the anti-aliasing settings of my current OpenGL window. Do I have to recreate the OpenGL window (on Windows: wglChoosePixelFormatARB), or can I do it by calling glRenderbufferStorageMultisampleEXT on framebuffer 0? For example, the following code seems not to change anything for me; is that normal?

        GLuint  mRB1 ;
        GLsizei mWidth ;
        GLsizei mHeight ;
        GLsizei mNumSamplesMultiSampling ;
        GLsizei mNumSamplesMultiSamplingCoverage ;

        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0) ;
        glGenRenderbuffersEXT(1, &mRB1) ;
        glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, mRB1) ;
        glRenderbufferStorageMultisampleEXT(GL_RENDERBUFFER_EXT, mNumSamplesMultiSampling,
                                            GL_RGBA8, mWidth, mHeight) ;
        glRenderbufferStorageMultisampleCoverageNV(GL_RENDERBUFFER_EXT, mNumSamplesMultiSamplingCoverage,
                                                   mNumSamplesMultiSampling, GL_RGBA8, mWidth, mHeight) ;
        glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                                     GL_RENDERBUFFER_EXT, mRB1) ;
        glDrawBuffer(GL_COLOR_ATTACHMENT0_EXT) ;
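    A small diagnostic sketch of my own (reusing the variables from the snippet above): if I read the EXT_framebuffer_object spec quoted in the previous entry correctly, images cannot be attached to the window-system framebuffer (object 0), so the attach call is expected to raise an error rather than change the window's AA settings, which glGetError can reveal:

```cpp
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                             GL_RENDERBUFFER_EXT, mRB1);
GLenum err = glGetError();   // expected here: GL_INVALID_OPERATION, i.e. nothing was attached
if (err != GL_NO_ERROR)
{
    // The window's sample count is unchanged: it can only be chosen when the
    // pixel format is picked (e.g. wglChoosePixelFormatARB), or worked around
    // by rendering into a separate multisampled FBO and blitting to the window.
}
```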
  15. Well, I feel the problem is not so much the sort method (a qsort should be enough; search "qsort" on Google) as how to create and update the list of visible transparent polygons before sorting. For example, you could investigate a culling method that "extracts" transparent shapes and stores them in a list for sorting. But on the subject of your problem, I have another question for the gurus out there. Some time ago, on SGI machines, the "multisampling" process was able to draw transparent blended objects without sorting. Does/will the "multisampling" available on new graphics cards (GeForce 7/8 for example: http://www.nvidia.com/object/feature_intellisample4.0.html ) allow the same thing? If so, do we no longer have to sort?
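    A minimal C++ sketch of the first suggestion (collect the visible transparent objects during culling, then sort them back-to-front by view-space depth before blending; the struct and field names are invented for illustration):

```cpp
#include <algorithm>
#include <vector>

// Hypothetical record produced by the culling pass for each visible
// transparent object: whatever is needed to draw it, plus its depth.
struct TransparentItem
{
    float viewSpaceDepth;   // distance from the camera along the view axis
    int   meshId;           // placeholder for "how to draw it"
};

// Back-to-front order: the farthest objects are blended first.
void SortForBlending(std::vector<TransparentItem>& items)
{
    std::sort(items.begin(), items.end(),
              [](const TransparentItem& a, const TransparentItem& b)
              { return a.viewSpaceDepth > b.viewSpaceDepth; });
}
```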