
OpenGL: How do you do post-processing in OpenGL + Cg?

Recommended Posts

sylpheed    125
First of all, sorry for my poor English, and sorry again if this thread is not in the correct section; I am new to the forum :) I have an application that uses C++, OpenGL and Cg. I have no problem rendering my objects with Cg shaders. In fact, I'm using FX Composer to create the shaders, so I have to use techniques and passes. The problem is that I do not know how to apply a post-process shader to the whole scene. I have downloaded a shader from the NVIDIA shader library, specifically this one: http://developer.download.nvidia.com/shaderlibrary/packages/post_deepMask.cgfx.zip It works in FX Composer, but I can't use it in my app because I'm not sure how to implement it in my code. For standard shaders, I do this:
Quote:
CGpass pass = cgGetFirstPass(technique);
while (pass)
{
    cgSetPassState(pass);
    glCallList(...);            // Render the mesh
    cgResetPassState(pass);
    pass = cgGetNextPass(pass);
}
Please, could you help me? I've been searching Google for two days but haven't found anything :(

bluntman    255
Well, I haven't downloaded and looked at the shader you link to, but all (?) post-process effect shaders are applied by first rendering your scene to a texture*, and then rendering that texture as a fullscreen quad using the post-processing shader (either to another texture if you intend to apply more post-process effects**, or to the backbuffer).


* A Framebuffer Object (FBO) on newer hardware, a pbuffer (pixel buffer) on older hardware, or render to the backbuffer and copy to a texture on even older hardware.
** Usually, for multiple post-process effects that cannot be combined into a single shader (combining them is much preferable, as changing render target, regardless of type, is one of the most costly operations, if not the most costly), the most common technique is "ping-pong", where two targets are used (see the sketch after this list):
1) first the scene is rendered to A,
2) then B is rendered to using the contents of A,
3) then A is rendered to using the contents of B,
4) if not done, go to 2).
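
A minimal sketch of that flow, assuming a GL 2.x-era context with FBO support (via GLEW) and an already-loaded CgFX effect; renderScene(), the parameter names and the helper functions here are illustrative assumptions, not code from this thread:

// Sketch: render the scene into an FBO-attached texture, then feed that texture
// to a Cg post-process technique drawn as a fullscreen quad.
#include <GL/glew.h>
#include <Cg/cg.h>
#include <Cg/cgGL.h>

void renderScene();                      // your existing geometry drawing (assumed)

GLuint fbo = 0, sceneTex = 0, depthRb = 0;

void createTarget(int w, int h)
{
    glGenTextures(1, &sceneTex);
    glBindTexture(GL_TEXTURE_2D, sceneTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glGenRenderbuffers(1, &depthRb);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, w, h);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, sceneTex, 0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRb);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}

void drawFullscreenQuad()
{
    // Identity matrices so the quad exactly covers clip space.
    glMatrixMode(GL_PROJECTION); glPushMatrix(); glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);  glPushMatrix(); glLoadIdentity();
    glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex2f(-1, -1);
        glTexCoord2f(1, 0); glVertex2f( 1, -1);
        glTexCoord2f(1, 1); glVertex2f( 1,  1);
        glTexCoord2f(0, 1); glVertex2f(-1,  1);
    glEnd();
    glMatrixMode(GL_MODELVIEW);  glPopMatrix();
    glMatrixMode(GL_PROJECTION); glPopMatrix();
}

void renderFrame(CGtechnique postTechnique, CGparameter sceneSampler)
{
    // 1) Render the scene into the texture.
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    renderScene();                                   // the usual Cg pass loop + glCallList(...)
    glBindFramebuffer(GL_FRAMEBUFFER, 0);

    // 2) Hand the texture to the post-process effect and draw a fullscreen quad.
    cgGLSetTextureParameter(sceneSampler, sceneTex); // sampler parameter of the .cgfx effect
    cgSetSamplerState(sceneSampler);                 // apply its sampler_state block
    for (CGpass pass = cgGetFirstPass(postTechnique); pass; pass = cgGetNextPass(pass))
    {
        cgSetPassState(pass);
        drawFullscreenQuad();
        cgResetPassState(pass);
    }
}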

sylpheed    125
You can't imagine how much I appreciate your help :D

Thank you very, very much. After two days in hell, I can finally see a glimmer of hope in my code.

Best regards.

sylpheed    125
I'm almost there... I still need some help, though. I've downloaded another shader from the NVIDIA library; its algorithm is very similar, and it provides several techniques, one of which is very "simple":

- Pass 1 -> Calculate normals.
- Pass 2 -> "Do something" (specifically, detect the edges of an object) using the normals from the previous pass.

The technique is like this:
Quote:
technique NormsOnly <
    string Script =
        "Pass=Norms;"
        "Pass=ImageProc;";
> {

    /*
     * PASS 1
     */
    pass Norms <
        string Script = "RenderColorTarget0=gNormTexture;"
                        "RenderDepthStencilTarget=gDepthBuffer;"
                        "ClearSetColor=gClearColor;"
                        "ClearSetDepth=gClearDepth;"
                        "Clear=Color;"
                        "Clear=Depth;"
                        "Draw=Geometry;";
    > {
        VertexProgram = compile vp40 simpleVS(gWorldITXf,gWorldXf,
                                              gViewIXf,gWvpXf,gWorldViewXf);
        DepthTestEnable = true;
        DepthMask = true;
        CullFaceEnable = false;
        BlendEnable = false;
        DepthFunc = LEqual;
        FragmentProgram = compile fp40 normPS();
    }

    /*
     * PASS 2
     */
    pass ImageProc <
        string Script = "RenderColorTarget0=;" // re-use
                        "RenderDepthStencilTarget=;"
                        "Draw=Buffer;";
    > {
        VertexProgram = compile vp40 edgeVS(QuadScreenSize,gNPixels);
        DepthTestEnable = false;
        DepthMask = false;
        BlendEnable = false;
        CullFaceEnable = false;
        DepthFunc = LEqual;
        FragmentProgram = compile fp40 normEdgePS(gNormSampler,gThreshhold);
    }
}


Now, thanks to bluntman, I have rendered the normals to a texture (using a frame buffer object), and the shader now has access to that texture. This is done in pass number one, and it works; I have tested it:

http://www.subirimagenes.com/otros-norms-1655474.html

It's a teapot on a triangulated plane :) (the GL side of this pass is sketched below).
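
For reference, the GL side of that first pass can look roughly like this; normalsFbo, normalsTex and meshList are assumed names, and the shaders/render states themselves still come from cgSetPassState:

// Sketch: run the CgFX "Norms" pass with the normals FBO bound, mirroring what the
// pass Script (RenderColorTarget0 / Clear / Draw=Geometry) asks for.
CGpass normsPass = cgGetNamedPass(technique, "Norms");

glBindFramebuffer(GL_FRAMEBUFFER, normalsFbo);       // colour attachment = normalsTex
glViewport(0, 0, width, height);
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);                // "ClearSetColor"
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);  // "Clear=Color;" "Clear=Depth;"

cgSetPassState(normsPass);                           // binds simpleVS/normPS and the states
glCallList(meshList);                                // "Draw=Geometry;"
cgResetPassState(normsPass);

glBindFramebuffer(GL_FRAMEBUFFER, 0);                // back to the window framebuffer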


The problems come in pass 2, because of the FX Composer scripts. I don't know how to perform the "Draw=Buffer;" script command in OpenGL. I've read the SAS documentation for CgFX:

Quote:

Passes can also specify what to draw in each pass – either the geometry from the scene, or a screen-aligned quadrilateral that will exactly fit the render window. We use the pass Script "Draw" command to choose: either "Draw=Geometry;" for (you guessed it) geometric models, or "Draw=Buffer;" for a full-screen quad. If neither is specified, a "Draw=Geometry;" will be implied at the end of your pass Script.


That sounds pretty cool in FX Composer, but I have no idea how to implement it using OpenGL and the CgFX API. How could I do this?

This is a nightmare: ten hours non-stop, and all I can see around me is code :(

Please, could anyone help me? If anybody has an example of post-processing using OpenGL + CgFX, please share it; I would appreciate it very much.

Best regards.

bluntman    255
Well, I haven't used FX Composer, but from what I understand of what you say, "Draw=Buffer" is used to cause a fullscreen quad to be drawn to the screen using the specified shader. So to do this in OpenGL with Cg you need to:

1) set the shader you want to apply to the fullscreen quad (the edge-detection shader, I guess).
2) attach the texture (the one your FBO rendered the normals into) to the shader parameter.
3) draw a quad that fills the entire screen.

That should be it!
I have code, but it is quite modularised and split up, so it wouldn't be easy to understand immediately; still, it should be simple(ish) to do what you are trying to do (a rough sketch follows).
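
In code, those three steps might look roughly like this (the parameter, texture and pass names are assumptions based on the .cgfx listing above, and drawFullscreenQuad() is the kind of helper sketched earlier in the thread):

// Sketch: draw the "ImageProc" pass as a fullscreen quad sampling the normals texture.
CGparameter normSampler = cgGetNamedEffectParameter(effect, "gNormSampler");
cgGLSetTextureParameter(normSampler, normalsTex);    // texture the Norms pass rendered into
cgSetSamplerState(normSampler);                      // apply its sampler_state block

CGpass imageProcPass = cgGetNamedPass(technique, "ImageProc");
glBindFramebuffer(GL_FRAMEBUFFER, 0);                // "RenderColorTarget0=;" -> backbuffer
cgSetPassState(imageProcPass);                       // binds edgeVS/normEdgePS and the states
drawFullscreenQuad();                                // "Draw=Buffer;"
cgResetPassState(imageProcPass);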

sylpheed    125
Quote:
Original post by bluntman
Well, I haven't used FX Composer, but from what I understand of what you say, "Draw=Buffer" is used to cause a fullscreen quad to be drawn to the screen using the specified shader. So to do this in OpenGL with Cg you need to:

1) set the shader you want to apply to the fullscreen quad (the edge-detection shader, I guess).
2) attach the texture (the one your FBO rendered the normals into) to the shader parameter.
3) draw a quad that fills the entire screen.

That should be it!
I have code, but it is quite modularised and split up, so it wouldn't be easy to understand immediately; still, it should be simple(ish) to do what you are trying to do.


Again, thanks a lot for your help.

For now, we have put the post-processing aside for two weeks; we're in a hurry with other parts of the application.

I will let you know when we pick it up again.

Thanks for your time.

Best regards.

sylpheed    125
Finally... I DID IT!!

Tears in my eyes; it's beginner code, but it has taken soooo long... and now it has a happy ending :)

Thanks, bluntman, for your help, you gave me the trick! The steps are:

1) Render all the previous passes into FBOs (the last pass uses all the FBO textures to calculate the result).
2) In the last pass, render a full-screen quad (this is the equivalent of "Draw=Buffer;").

For those who have the same problem: don't forget to bind the FBO textures to the CgFX texture samplers (see the sketch below)!! ;)
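
A compact version of those steps as a generic pass loop; passFbo, meshList, drawFullscreenQuad() and the last-pass test are illustrative assumptions, not the poster's actual code:

// Sketch: intermediate passes render geometry into their own FBOs; the final pass
// draws a fullscreen quad to the window, sampling the textures filled earlier
// (bound beforehand with cgGLSetTextureParameter / cgSetSamplerState).
int i = 0;
for (CGpass pass = cgGetFirstPass(technique); pass; pass = cgGetNextPass(pass), ++i)
{
    bool lastPass = (cgGetNextPass(pass) == 0);
    glBindFramebuffer(GL_FRAMEBUFFER, lastPass ? 0 : passFbo[i]);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    cgSetPassState(pass);
    if (lastPass)
        drawFullscreenQuad();   // equivalent of "Draw=Buffer;"
    else
        glCallList(meshList);   // equivalent of "Draw=Geometry;"
    cgResetPassState(pass);
}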

Best regards.


