OpenGL implementing antialiasing

Recommended Posts

browny    139
I have been porting a big OpenGL tool to Direct3D. In the process I had to port some variable-width line rendering code ( glLineWidth() ) to Direct3D. At first I used the ID3DXLine interface, which has a major flaw that I only realized later. The flaw shows up mainly in perspective projection mode: if one vertex of a line goes beyond the near plane in eye space, it gets projected the wrong way (which is mathematically understandable), but the ID3DXLine interface doesn't clip the line against the near plane. Hence the ID3DXLine interface was totally useless for me, so I had to implement my own variable-width line drawing module using the two-triangle method (similar to ID3DXLine). Everything works perfectly now, but I also want to implement antialiasing ( ID3DXLine->SetAntialiasing() ). What I did was put a thin edge in the texture whose alpha gradually goes from 255 to 0, and used that texture on the two triangles. This scheme works fine when the line width is big enough, but for smaller line widths it never gives the same visual output as ID3DXLine with antialiasing turned on. I am just wondering how ID3DXLine achieves its antialiasing?
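For reference, my two-triangle construction boils down to offsetting the two endpoints along the segment's normal by half the line width. A minimal sketch in plain C++ (Vec2 and LineToQuad are just illustrative names, not D3DX types):

```cpp
#include <cmath>

struct Vec2 { float x, y; };

// Given a screen-space segment (p0, p1) and a desired width in pixels,
// compute the four corners of the quad the two triangles will cover.
// Corner order: p0+n, p0-n, p1+n, p1-n (a triangle strip layout).
void LineToQuad(Vec2 p0, Vec2 p1, float width, Vec2 out[4])
{
    float dx  = p1.x - p0.x;
    float dy  = p1.y - p0.y;
    float len = std::sqrt(dx * dx + dy * dy);
    if (len == 0.0f) len = 1.0f;            // guard against a degenerate segment

    // Unit normal to the segment, scaled to half the line width.
    float nx = -dy / len * (width * 0.5f);
    float ny =  dx / len * (width * 0.5f);

    out[0] = { p0.x + nx, p0.y + ny };
    out[1] = { p0.x - nx, p0.y - ny };
    out[2] = { p1.x + nx, p1.y + ny };
    out[3] = { p1.x - nx, p1.y - ny };
}
```

The alpha-gradient texture is then mapped across the quad so the alpha ramps to 0 at the two long edges.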

remigius    1172
I don't know how ID3DXLine does this 'under the hood', but maybe it's an option for you to enable antialiasing for the entire application? If so, this can be done by setting the appropriate values for MultiSampleType and MultiSampleQuality in the D3DPRESENT_PARAMETERS used to create your device.
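A minimal sketch of what that might look like (assuming a windowed device and an X8R8G8B8 backbuffer; the CheckDeviceMultiSampleType call verifies support before requesting it):

```cpp
// Request 4x multisampling in the present parameters, but only after
// verifying the adapter/format combination actually supports it.
D3DPRESENT_PARAMETERS pp = {};
pp.Windowed         = TRUE;
pp.SwapEffect       = D3DSWAPEFFECT_DISCARD;   // MSAA requires DISCARD
pp.BackBufferFormat = D3DFMT_X8R8G8B8;

DWORD qualityLevels = 0;
if (SUCCEEDED(d3d->CheckDeviceMultiSampleType(D3DADAPTER_DEFAULT,
        D3DDEVTYPE_HAL, pp.BackBufferFormat, pp.Windowed,
        D3DMULTISAMPLE_4_SAMPLES, &qualityLevels)))
{
    pp.MultiSampleType    = D3DMULTISAMPLE_4_SAMPLES;
    pp.MultiSampleQuality = qualityLevels - 1;  // quality values are 0-based
}
// ...then pass pp to CreateDevice as usual.
```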

Hope this helps :)

browny    139
But I think ID3DXLine achieves that without using MSAA. Any ideas how I can antialias my fake line (made of 2 triangles)?

MasterWorks    496
To use a texture (and 2 triangles) to make a line, be sure to:

- enable bilinear filtering (this might be what you mean by antialiasing)
- disable mipmaps on the texture
- make sure the actual line graphic is interior to the texture, i.e. have a row of blank pixels above and below your line.
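To illustrate the last point, here's a small sketch (plain C++; MakeLineAlphaTexture is a hypothetical helper) of filling the texture's alpha channel with blank border rows, so that bilinear filtering fades the edge out instead of clamping into opaque texels:

```cpp
#include <cstdint>
#include <vector>

// Build an 8-bit alpha channel for the line texture. The top and bottom
// rows are left fully transparent; only interior rows are opaque.
std::vector<std::uint8_t> MakeLineAlphaTexture(int width, int height)
{
    std::vector<std::uint8_t> texels(width * height, 0);
    for (int y = 1; y < height - 1; ++y)     // skip border rows
        for (int x = 0; x < width; ++x)
            texels[y * width + x] = 255;
    return texels;
}
```

With that texture bound, the first point is just enabling D3DTEXF_LINEAR for both D3DSAMP_MINFILTER and D3DSAMP_MAGFILTER via SetSamplerState.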

Sc4Freak    643
There's not much you can do, really. If your line is so thin that the rasteriser renders it as a segmented series of dots on the screen, then no texture can fix that.

The best you can do is render all your lines to a super-high-resolution render target (probably something like 2048x2048 or larger), then downsample it and overlay it on the backbuffer once you're done. In other words, "manual" supersampling.

browny    139
Ok, I have a new idea, since that texturing idea actually sucked ( :P ).
I will create an extra render target (with the same dimensions as my back buffer) with MSAA enabled, and render all my lines into it. Later I will use that surface as a texture on a screen-aligned quad to overlay the whole thing on the backbuffer (with alpha blend and alpha test enabled). Hopefully that will render antialiased lines. Now the first question is...

1. How do I use an IDirect3DSurface9 as a texture?

My guess is to copy the contents of the IDirect3DSurface9 into a previously created texture.

Sc4Freak    643
Quote:
Original post by browny
Ok, I have a new idea, since that texturing idea actually sucked ( :P ).
I will create an extra render target (with the same dimensions as my back buffer) with MSAA enabled, and render all my lines into it. Later I will use that surface as a texture on a screen-aligned quad to overlay the whole thing on the backbuffer (with alpha blend and alpha test enabled). Hopefully that will render antialiased lines. Now the first question is...

1. How do I use an IDirect3DSurface9 as a texture?

My guess is to copy the contents of the IDirect3DSurface9 into a previously created texture.


Yup. You need to do a StretchRect (or something similar) to copy the data from the MSAA render target surface to the texture so you can overlay it on your scene.

browny    139
I used an extra backbuffer (created with CreateRenderTarget) with MSAA enabled and then created a texture. But when I do a StretchRect from the new render target, it doesn't work. Later I found in the docs that a StretchRect from an RT surface to a texture only works when the texture is a render target texture (i.e. created with D3DUSAGE_RENDERTARGET). I did that too, and StretchRect at least returns S_OK, but when I lay that texture over the original backbuffer, it all seems to be garbage, as if the StretchRect didn't work!

Could someone give me a small code snippet showing how to get this working?

browny    139
Well, let me show you the layout of the code. I'm basically doing the following:

BeginScene();
CreateRenderTarget( with MSAA enabled );
SaveOldRenderTargets();
SetRenderTarget();
RenderFewLines();
CreateTexture();
StretchRect( from MSAA render target to the texture );
Restore original backbuffer and Z-buffer
Draw a screen-aligned quad with the texture
EndScene();
Present();

The DX docs say that StretchRect doesn't work between a BeginScene()/EndScene() pair, but I have seen examples where StretchRect calls are put between BeginScene() and EndScene().

Could someone clear this up for me?

Sc4Freak    643
How exactly are you doing your StretchRect?

You need to create the texture, then use GetSurfaceLevel() to retrieve the top-level surface. That's the surface you need to StretchRect to.

For example:

IDirect3DTexture9* LineTexture;
IDirect3DSurface9* LineSurface;

// Create a render-target texture the same size as the backbuffer...
D3D9DeviceInterface->CreateTexture(ScreenWidth, ScreenHeight, 1,
    D3DUSAGE_RENDERTARGET, D3DFMT_X8R8G8B8, D3DPOOL_DEFAULT,
    &LineTexture, NULL);

// ...and grab its top mip level as the surface to copy into.
LineTexture->GetSurfaceLevel(0, &LineSurface);

// Resolve/copy the MSAA render target into the texture's surface.
D3D9DeviceInterface->StretchRect(MSAALineSurface, NULL, LineSurface, NULL, D3DTEXF_NONE);

// Now draw LineTexture using a screen-aligned quad

PS: You should never create textures like that every frame. It means that every frame your program has to go through the slow process of creating a texture, and the ramifications of forgetting to release the textures are massive if you're creating them each frame.
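To expand on that PS, a sketch of hoisting the creation out of the render loop (hypothetical function names; error handling kept minimal):

```cpp
IDirect3DTexture9* LineTexture = NULL;
IDirect3DSurface9* LineSurface = NULL;

// Called once at startup (and again after a device reset), not per frame.
bool CreateLineResources(IDirect3DDevice9* device, UINT width, UINT height)
{
    if (FAILED(device->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                                     D3DFMT_X8R8G8B8, D3DPOOL_DEFAULT,
                                     &LineTexture, NULL)))
        return false;
    return SUCCEEDED(LineTexture->GetSurfaceLevel(0, &LineSurface));
}

// Called at shutdown, and before resetting the device
// (D3DPOOL_DEFAULT resources must be released across a reset).
void ReleaseLineResources()
{
    if (LineSurface) { LineSurface->Release(); LineSurface = NULL; }
    if (LineTexture) { LineTexture->Release(); LineTexture = NULL; }
}
```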

browny    139
I read in a blog that when you change render targets, the old render target's contents may become undefined, and it may not retain its data. If that is the case then I will run into a problem. Specifically, the blog said that it's true for Xbox renderers. Comments?

Sc4Freak    643
Hasn't happened to me. However, are you remembering to clear the MSAA line render target before rendering to it? Just creating it doesn't clear it; it's filled with garbage data.
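For what it's worth, that clear is just a regular Clear() while the MSAA surface is the active render target. A sketch, assuming a transparent black clear color so the later alpha-blended overlay works (`device` and `MSAALineSurface` are whatever your pointers are called):

```cpp
// With the MSAA line surface bound as the current render target,
// clear it to fully transparent black before drawing the lines.
device->SetRenderTarget(0, MSAALineSurface);
device->Clear(0, NULL, D3DCLEAR_TARGET,
              D3DCOLOR_ARGB(0, 0, 0, 0),   // alpha = 0 everywhere
              1.0f, 0);
```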

