OpenGL Problem between OpenGL and DirectX renderings

jonathon99    120
I have a couple of issues with inconsistencies between OpenGL and DirectX. I wrote my editor and other tools in DirectX, and the game itself is in OpenGL (for easy porting). I've noticed differences in texture filtering/quality, fog thickness, coloring, and other areas, even though all the settings (color, fog intensity/amount, etc.) are identical between the two. I've made sure I'm running the same settings, or as close as I could get them, but my OpenGL experience is limited. If anyone can spot something I may be overlooking, please feel free to comment. It will be greatly appreciated!

Comparison screenshots with and without fog; in each, the top image is DirectX and the bottom is OpenGL:
[url="http://www.moonlightminions.com/problem/nofog.jpg"][img]http://www.moonlightminions.com/problem/nofog_sm.jpg[/img][/url] [url="http://www.moonlightminions.com/problem/fog.jpg"][img]http://www.moonlightminions.com/problem/fog_sm.jpg[/img][/url]

You can see that the top images (DirectX) look much better and match the desired look. Does anyone have any idea why this is happening? Here is my code for most of the generic rendering states:

[code]
//DirectX Texture Filtering
Device->SetSamplerState( 0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR );
Device->SetSamplerState( 0, D3DSAMP_MINFILTER, D3DTEXF_LINEAR );
Device->SetSamplerState( 0, D3DSAMP_MIPFILTER, D3DTEXF_LINEAR );

//OpenGL Texture Filtering
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 1 );

//DirectX Alpha Settings
Device->SetRenderState( D3DRS_SRCBLEND, D3DBLEND_SRCALPHA );
Device->SetRenderState( D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA );

//OpenGL Alpha Settings
glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA );

//DirectX Alpha Test Settings
Device->SetRenderState( D3DRS_ALPHAREF, (DWORD)0xAA );
Device->SetRenderState( D3DRS_ALPHAFUNC, D3DCMP_GREATEREQUAL );
Device->SetRenderState( D3DRS_ALPHATESTENABLE, TRUE );

//OpenGL Alpha Test Settings
glAlphaFunc( GL_GREATER, 0.1f );

//DirectX Z Function
Device->SetRenderState( D3DRS_ZFUNC, D3DCMP_LESSEQUAL );

//OpenGL Z Function
glDepthFunc( GL_LEQUAL );
[/code]

L. Spiro    25621
For starters, your alpha-testing states are not the same.
D3DCMP_GREATEREQUAL != GL_GREATER and 0xAA (0.667) != 0.1.

Considering the brightness of your fog and flames, I would suspect that not enough pixels are being rejected in OpenGL, due to the low alpha-test value.


L. Spiro

jonathon99    120
[code]
glAlphaFunc( GL_GEQUAL, 0.667f );
[/code]

Getting the same result. I think maybe it has something to do with my billboard system for the fog, but even if that were the case, the texture quality seems a bit lower for some reason.

Arkhyl    626
[quote name='jonathon99' timestamp='1341098916' post='4954398']
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR );
[/quote]
You can only set the mag filter to GL_LINEAR or GL_NEAREST. Also, for your alpha func, the OpenGL equivalent of D3DCMP_GREATEREQUAL is GL_GEQUAL.
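Putting both fixes together, something like this should match the D3D states posted above (a sketch, assuming a fixed-function context):

```cpp
// Mag filter: only GL_NEAREST or GL_LINEAR are valid here;
// GL_LINEAR matches D3DTEXF_LINEAR.
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
// Min filter: trilinear, matching LINEAR min + LINEAR mip in D3D.
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR );

// Alpha test: D3DCMP_GREATEREQUAL with ref 0xAA becomes
// GL_GEQUAL with 0xAA / 255.0 = ~0.667.
glEnable( GL_ALPHA_TEST );
glAlphaFunc( GL_GEQUAL, 0xAA / 255.0f );
```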

jonathon99    120
[quote name='Arkhyl' timestamp='1341134366' post='4954468']
[quote name='jonathon99' timestamp='1341098916' post='4954398']
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR );
[/quote]
You can only set the mag filter to GL_LINEAR or GL_NEAREST. Also, for your alpha func, the OpenGL equivalent of D3DCMP_GREATEREQUAL is GL_GEQUAL.
[/quote]

I ended up changing it to GL_GEQUAL, and to GL_LINEAR for the mag filter.

[quote name='mhagain' timestamp='1341141498' post='4954494']
Also this is going to reduce the effectiveness of your mipmapping:[code]glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 1);[/code]
[/quote]

The textures look extremely blurry when I take out that line of code.

I appreciate all the help so far, but even with mipmapping disabled entirely, I'm still getting thicker fog in OpenGL as well as lower-quality fog texturing. I realize this could have many different causes, so I'm going to pull an all-nighter investigating. Thanks to everyone who has chimed in!

mhagain    13430
[quote name='jonathon99' timestamp='1341174720' post='4954644']
The textures look extremely blurry when I take out that line of code.
[/quote]
That indicates you've got something else wrong, perhaps with your texture loading setup or some state you haven't shown us; otherwise GL_LINEAR/GL_LINEAR_MIPMAP_LINEAR is exactly equivalent to D3DTEXF_LINEAR/D3DTEXF_LINEAR/D3DTEXF_LINEAR. One example might be a bad value for GL_MAX_TEXTURE_SIZE compared to your D3DCAPS9::MaxTextureWidth and D3DCAPS9::MaxTextureHeight, causing you to resample textures down too much in your GL renderer. At the most fundamental level, all the API does is tell the hardware what to do, and what the hardware does is API-agnostic, so your OpenGL and D3D renderings should look identical if everything else is equal.
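As a sanity check on the loading side, it may help to log the GL limit and confirm that a full mip chain is actually being uploaded. A rough sketch (legacy GL; tex, width, height and pixels are placeholders for your own loader's data):

```cpp
// Compare this against D3DCAPS9::MaxTextureWidth / MaxTextureHeight
// from the D3D renderer before deciding to downsample anything.
GLint maxSize = 0;
glGetIntegerv( GL_MAX_TEXTURE_SIZE, &maxSize );
printf( "GL max texture size: %d\n", maxSize );

// With GL_LINEAR_MIPMAP_LINEAR minification, the texture is only
// complete if every mip level down to 1x1 exists (or GL_TEXTURE_MAX_LEVEL
// is capped at the last level you actually uploaded). One legacy-friendly
// way to get a full chain is the automatic-generation hint, set before
// the upload:
glBindTexture( GL_TEXTURE_2D, tex );  // 'tex' assumed already created
glTexParameteri( GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE );
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA8, width, height,
              0, GL_RGBA, GL_UNSIGNED_BYTE, pixels );  // 'pixels' assumed loaded
```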

jonathon99    120
I still haven't been able to pinpoint exactly what is causing the inconsistencies, but it doesn't seem to have much to do with anything we've discussed here. It's quite puzzling.

lawnjelly    1247
Are there any differences in the vertex colours being applied? If you are not meaning to use vertex colours but they are still switched on by accident, this can cause a tinge. It almost looks like there is an extra constant being added to the fog additive values.

There seems to be a blue tinge in the OpenGL no-fog screenshot which is not present in the DX no-fog screenshot. Are you drawing the background the same way in both?

I would simplify the problem as much as possible and only draw certain parts of the scene, comparing for differences: the background on its own, geometry on its own, geometry and background, fog on its own, etc.

There may also be default driver settings which differ between DX and OpenGL. In my NVIDIA setup you can adjust the defaults per application, including things like mipmap quality, although I am not sure this is causing the difference you are seeing here.

Other than that, it does rather look like OpenGL is using a 'better quality' version of the fog: either DX is using a lower mipmap, or the DX fog texture looks almost like a DXT-compressed version versus an uncompressed one in OpenGL (some premultiplied-alpha thing, I don't know?). Or it's something in the blending mode, as the others are suggesting.

