OpenGL & DirectX

Hi. I know DirectX fairly well, and when MS released DX9 I found very nice stuff in it. After googling the net I found almost no DirectX tutorials for advanced programming like shaders, HDR and such. Meanwhile I noticed that there are lots of examples for OpenGL, but I didn't find many shader examples there either. A few years ago almost every noticeable FPS game was written in OpenGL, and almost none in DirectX, and those games ran like hell on my Pentium II. Well, I bought a new PC, and not so long ago a new graphics card (ATI Radeon 9800 Pro), and tried Half-Life 2 (DX9), Far Cry (DX9) and Doom 3 (OpenGL). The DX games ran on max details with decent fps, while Doom 3 managed 20 fps on medium details (I was unable to set it any higher). My curiosity about OpenGL rose, so I downloaded lots of OpenGL examples, and every example was detailed enough that people can learn a lot from it. And now I don't know what to do... I need BSP, octrees, lightmaps, and all that stuff I can find for OpenGL, but for DX nothing!!! I was thinking of porting my whole engine to OpenGL, but then I didn't find many shader examples for OpenGL, nor HDR lighting... On the other hand, the new engines run poorly on OpenGL and fast on DirectX... I don't know what to do! I thought the two APIs were similar and both good. But what is the truth? Please, someone give me a guideline. Thanks in advance.

Quote:
Original post by streamer
I thought the two APIs were similar and both good. But what is the truth?


If you are talking about Direct3D and OpenGL, then that is the truth (depending on what you mean by "similar").

Quote:
Original post by Roboguy
Quote:
Original post by streamer
I thought the two APIs were similar and both good. But what is the truth?


If you are talking about Direct3D and OpenGL, then that is the truth (depending on what you mean by "similar").


Yes, I was thinking of Direct3D (DX9 dropped support for DirectDraw), but if that's the truth, why is Doom 3 such a hardware-eating engine? I don't think id Software has any newbie programmers on the team; they've proved a lot in the past. But Half-Life 2, for example, looks beautiful, and both engines take the best out of the hardware that they can.
For example, I looked at the code MS uses for skinning meshes. It was awful!! I didn't understand a thing at first look. But when I looked at code for rendering the MD3 format in OpenGL, everything looked understandable even though I don't know OpenGL!

And I don't want to start a DirectX vs. OpenGL war.
Can you tell me what the weak spots and the advantages of OpenGL are?

1) Both APIs are just as good and powerful. There are some issues with NVIDIA having better OpenGL support and ATI better D3D, but IMO it's not a big deal. Doom 3 runs slower because it uses a much more complex shading model (100% dynamic lighting, while HL2 uses, for the most part, precalculated radiosity maps). If you're looking for a reason to choose between GL and D3D, you won't find it in performance. Make your pick based on your personal preferences and coding style.
That's why no one can tell you what the "weak spots" of each one are; it's up to you. For me, OpenGL being procedural is a pro; for others it's a con.

2) BSPs, octrees and lightmaps are all techniques independent of the API (see the first sketch after this post). If you know the API syntax well enough, you can implement them easily.

3) There are plenty of shader examples, but shaders (and especially HDR) are relatively new, so it's logical that there are fewer shader examples than ones using the fixed-function pipeline. Again, for someone who knows both syntaxes, translating from D3D to OpenGL is trivial (see the second sketch below).

In short, stop dwelling on the thought that one API is better than the other, and that if you find out which one it is, it will magically make you a better programmer and make your games run faster. Just pick one (or both) and learn. They both deserve the time you'll spend on them.
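To illustrate point 2, here is a minimal sketch (the type and function names are made up for this post, not taken from any engine) of why a spatial structure like an octree is API-agnostic: the tree itself is plain data and plain traversal, and only the draw callback you pass in ever touches GL or D3D.

#include <vector>

// Axis-aligned bounding box; plain data, no graphics API involved.
struct AABB
{
    float min[3];
    float max[3];
};

// One octree node: bounds, eight optional children, and handles to
// whatever renderable geometry lives at this node.
struct OctreeNode
{
    AABB bounds;
    OctreeNode* children[8];       // all NULL for a leaf
    std::vector<int> meshHandles;  // indices into your mesh list

    OctreeNode() : children() {}   // zero-initializes the child pointers
};

// Traversal is pure C++. The only API-specific code is the functor you
// pass in, which might call glDrawElements or DrawIndexedPrimitive.
template <typename DrawFunc>
void VisitVisible( const OctreeNode* node, DrawFunc draw )
{
    if( !node )
        return;
    // (a frustum test against node->bounds would go here)
    for( size_t i = 0; i < node->meshHandles.size(); ++i )
        draw( node->meshHandles[i] );
    for( int c = 0; c < 8; ++c )
        VisitVisible( node->children[c], draw );
}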
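And for point 3, a tiny made-up example of how mechanical the D3D-to-OpenGL shader translation usually is: the same trivial pixel shader written in HLSL and in GLSL, held here as C++ string constants purely for side-by-side comparison.

// HLSL (Direct3D): a ps_2_0-style pixel shader that paints UVs as color.
const char* hlslPixelShader =
    "float4 main( float2 uv : TEXCOORD0 ) : COLOR\n"
    "{\n"
    "    return float4( uv.x, uv.y, 0.0f, 1.0f );\n"
    "}\n";

// GLSL (OpenGL): the same shader; mostly the keywords and types differ.
const char* glslFragmentShader =
    "varying vec2 uv;\n"
    "void main()\n"
    "{\n"
    "    gl_FragColor = vec4( uv.x, uv.y, 0.0, 1.0 );\n"
    "}\n";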

Quote:
Original post by mikeman
Make your pick based on your personal preferences and coding style.


Portability might be a factor in your choice also.

First of all, you really can't compare HL2 on D3D to Doom 3 on OpenGL and expect the comparison to say anything meaningful about performance. Both games look great, granted, but Doom 3 does a LOT of pixel-shading effects, where HL2 is generally more limited.

Both APIs in their current versions (and with the appropriate extensions, in GL's case) expose identical, or at the very least nearly identical, features. The bottom line is that an API is only that; choosing one or the other when they're so similar is not going to make any application better or worse. It's about how you use it.

GL has some advantages, such as cross-platform compatibility, that DirectX doesn't even attempt to touch. Looking to the future, the PS3 is reported to use an OpenGL ES-based graphics API, Nintendo reportedly uses a custom GL-like API, and Macs and other non-Microsoft OSes also use GL. Microsoft and its Xboxes are the only players promoting and using DX, so it's basically MS and DirectX versus everyone else and OpenGL. Granted, with the massive Windows user base one could argue that the two are neck-and-neck, but GL has more supporters, especially in console games, which are what really drive the industry.

Also, any GOOD engine should abstract the 3D API away from the game code, so that developing an equivalent renderer for the other API is possible (and IMHO should always be done).
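For what it's worth, here is a minimal sketch of that kind of abstraction; the interface and method names are hypothetical, just to show the shape of the idea.

// Game code talks only to this interface and never sees GL or D3D types.
class IRenderer
{
public:
    virtual ~IRenderer() {}
    virtual bool Init( void* windowHandle ) = 0;
    virtual void Clear( float r, float g, float b ) = 0;
    virtual void DrawMesh( int meshHandle ) = 0;
    virtual void Present() = 0;
};

// Each backend wraps its own API behind the same calls.
class GLRenderer : public IRenderer   { /* wglCreateContext, glDraw*, SwapBuffers */ };
class D3D9Renderer : public IRenderer { /* CreateDevice, DrawPrimitive, Present */ };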


The bottom line, though, is that you should use whichever one you are more comfortable with, and that is purely personal preference and up to you.

I'll start with the easy ones.

HDR in OpenGL courtesy of humus.ca
Lots of shaders in Cg, GLSL, vp/fp, HLSL, etc

These people are right: the only reason Doom 3 runs slower than the other games is the full per-pixel realtime lighting. The performance difference between OpenGL and Direct3D is virtually zero.

I would argue, however, that OpenGL can be easier for a beginner to learn. It allows you to easily render geometry and at least get something up on the screen (I'm referring to glVertex3f and friends). Direct3D doesn't have support for so-called "immediate mode" rendering like this. While a production game or 3D application will probably never use immediate-mode rendering because of its inherent slowness, it is helpful for getting the concepts down.
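For example, this is the whole of immediate-mode drawing (assuming a valid GL context is already current; setup and error handling omitted):

// OpenGL immediate mode: no buffers to create, just push vertices.
glBegin( GL_TRIANGLES );
    glColor3f( 1.0f, 0.0f, 0.0f ); glVertex3f(  0.0f,  0.9f, 0.0f );
    glColor3f( 0.0f, 1.0f, 0.0f ); glVertex3f( -0.9f, -0.9f, 0.0f );
    glColor3f( 0.0f, 0.0f, 1.0f ); glVertex3f(  0.9f, -0.9f, 0.0f );
glEnd();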

Thanks guys [smile], you've helped a lot, but now I'm even more mixed up.
I know D3D very well, and OpenGL not at all. I understand the difference between the engines (now). For a week or two I've had the feeling that OpenGL is better in some way, and that's why I posted here. I have found articles on the net about the chaos in OpenGL extensions for supporting the newest features. Is that true? Or are things better than that? And is there an OpenGL 2.0?

Well, here.
OpenGL:

#include <windows.h>
#include <GL/gl.h>

HWND hWindow;
HDC hDC;
HGLRC hRC;

struct Vertex
{
    float x, y, z;
    float r, g, b;
};

LRESULT WINAPI WndProc( HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam )
{
    switch( msg )
    {
    case WM_CLOSE:
        PostQuitMessage( 0 );
        return 0;
    case WM_PAINT:
        ValidateRect( hWnd, NULL );
        return 0;
    }

    return DefWindowProc( hWnd, msg, wParam, lParam );
}

int main()
{
    HINSTANCE hInstance = GetModuleHandle( NULL );

    //create a window
    WNDCLASSEX wc = { sizeof(WNDCLASSEX), CS_CLASSDC, WndProc, 0, 0, hInstance,
                      NULL, NULL, NULL, NULL, "MiniOGL", NULL };
    RegisterClassEx( &wc );
    hWindow = CreateWindowEx( WS_EX_APPWINDOW | WS_EX_WINDOWEDGE, "MiniOGL", "MiniOGL",
                              WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT,
                              300, 300, NULL, NULL, hInstance, NULL );
    ShowWindow( hWindow, SW_SHOW );
    UpdateWindow( hWindow );

    //Set up pixel format and context
    hDC = GetDC( hWindow );
    PIXELFORMATDESCRIPTOR pfd = {0};
    pfd.nSize = sizeof(PIXELFORMATDESCRIPTOR);
    pfd.nVersion = 1;
    pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cDepthBits = 16;
    pfd.cStencilBits = 0;
    int pixelFormat = ChoosePixelFormat( hDC, &pfd );
    SetPixelFormat( hDC, pixelFormat, &pfd );

    hRC = wglCreateContext( hDC );
    wglMakeCurrent( hDC, hRC );

    //now create our triangle
    Vertex Triangle[3] =
    {
        {  0.0f,  0.9f, 0.5f, 1.0f, 0.0f, 0.0f },
        { -0.9f, -0.9f, 0.5f, 0.0f, 1.0f, 0.0f },
        {  0.9f, -0.9f, 0.5f, 0.0f, 0.0f, 1.0f }
    };

    MSG msg;
    bool RunApp = true;
    while( RunApp )
    {
        if( PeekMessage( &msg, NULL, 0, 0, PM_REMOVE ) )
        {
            if( msg.message == WM_QUIT )
                RunApp = false;

            TranslateMessage( &msg );
            DispatchMessage( &msg );
        }

        //render stuff
        glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );

        glBegin( GL_TRIANGLES );
        for( int v = 0; v < 3; ++v )
        {
            glColor3f( Triangle[v].r, Triangle[v].g, Triangle[v].b );
            glVertex3f( Triangle[v].x, Triangle[v].y, Triangle[v].z );
        }
        glEnd();

        SwapBuffers( hDC );
    }

    wglMakeCurrent( 0, 0 );
    wglDeleteContext( hRC );
    ReleaseDC( hWindow, hDC );
    DestroyWindow( hWindow );

    return 0;
}





Direct3D 9:

#include <windows.h>
#include <d3d9.h>

HWND hWindow;
IDirect3D9* D3D;
IDirect3DDevice9* Device;

struct Vertex
{
    float x, y, z, rhw;
    D3DCOLOR Color;
};
#define FVF_VERTEX (D3DFVF_XYZRHW | D3DFVF_DIFFUSE)

LRESULT WINAPI WndProc( HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam )
{
    switch( msg )
    {
    case WM_CLOSE:
        PostQuitMessage( 0 );
        return 0;
    case WM_PAINT:
        ValidateRect( hWnd, NULL );
        return 0;
    }

    return DefWindowProc( hWnd, msg, wParam, lParam );
}

int main()
{
    HINSTANCE hInstance = GetModuleHandle( NULL );

    //create a window
    WNDCLASSEX wc = { sizeof(WNDCLASSEX), CS_CLASSDC, WndProc, 0, 0, hInstance,
                      NULL, NULL, NULL, NULL, "MiniD3D", NULL };
    RegisterClassEx( &wc );
    hWindow = CreateWindowEx( WS_EX_APPWINDOW | WS_EX_WINDOWEDGE, "MiniD3D", "MiniD3D",
                              WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT,
                              300, 300, NULL, NULL, hInstance, NULL );
    ShowWindow( hWindow, SW_SHOW );
    UpdateWindow( hWindow );

    //Set up d3d
    D3D = Direct3DCreate9( D3D_SDK_VERSION );
    D3DPRESENT_PARAMETERS d3dpp = { 0 };
    d3dpp.Windowed = TRUE;
    d3dpp.SwapEffect = D3DSWAPEFFECT_DISCARD;
    d3dpp.BackBufferFormat = D3DFMT_UNKNOWN;
    D3D->CreateDevice( D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hWindow,
                       D3DCREATE_HARDWARE_VERTEXPROCESSING, &d3dpp, &Device );

    //now create our triangle
    Vertex Triangle[3] =
    {
        { 150.0f,  50.0f, 0.5f, 1.0f, D3DCOLOR_XRGB( 255, 0, 0 ) },
        { 250.0f, 250.0f, 0.5f, 1.0f, D3DCOLOR_XRGB( 0, 255, 0 ) },
        {  50.0f, 250.0f, 0.5f, 1.0f, D3DCOLOR_XRGB( 0, 0, 255 ) }
    };

    MSG msg;
    bool RunApp = true;
    while( RunApp )
    {
        if( PeekMessage( &msg, NULL, 0, 0, PM_REMOVE ) )
        {
            if( msg.message == WM_QUIT )
                RunApp = false;

            TranslateMessage( &msg );
            DispatchMessage( &msg );
        }

        //render stuff
        Device->Clear( 0, 0, D3DCLEAR_TARGET, D3DCOLOR_XRGB( 0, 0, 0 ), 1.0f, 0 );

        Device->BeginScene();
        Device->SetFVF( FVF_VERTEX );
        Device->DrawPrimitiveUP( D3DPT_TRIANGLELIST, 1, Triangle, sizeof(Vertex) );
        Device->EndScene();

        Device->Present( 0, 0, 0, 0 );
    }

    Device->Release();
    D3D->Release();

    DestroyWindow( hWindow );
    return 0;
}





These two programs do almost exactly the same thing: draw a single triangle on screen with a different color at each vertex. Take a look at the code and decide for yourself which one you prefer. Functionally, Direct3D and OpenGL are basically equivalent. It's a matter of taste.

Ah, you've picked a great topic to start a flame war with here. Personally I've seen enough of these arguments, so respect to everyone so far for not taking the bait.

At the end of the day it's really down to personal preference, or the requirements of your project. Try both & decide which is right for you.

DirectX/D3D provides a lot of useful helper functions (largely the D3DX utility library) that aren't present in the OpenGL standard; if you need part or all of that functionality in OpenGL, you have to write it yourself (or use existing code from elsewhere). So in this respect, development times can be cut substantially by using D3D. To give a rough idea of the kind of thing D3DX hands you:
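This is just a sketch of one such convenience: D3DX builds a perspective projection matrix in one call, while in raw fixed-function OpenGL (setting aside the GLU helper gluPerspective) you would assemble the matrix yourself. The function name SetPerspectiveGL is made up for this example.

#include <windows.h>
#include <math.h>
#include <GL/gl.h>

// Direct3D, using the D3DX utility library (one call plus a SetTransform):
//     D3DXMATRIX proj;
//     D3DXMatrixPerspectiveFovLH( &proj, D3DXToRadian( 60.0f ), aspect, 1.0f, 1000.0f );
//     Device->SetTransform( D3DTS_PROJECTION, &proj );

// Plain OpenGL equivalent, written by hand (column-major matrix layout):
void SetPerspectiveGL( float fovyDegrees, float aspect, float zNear, float zFar )
{
    const float f = 1.0f / tanf( fovyDegrees * 3.14159265f / 360.0f ); // cot(fovy/2)
    float m[16] = { 0.0f };
    m[0]  = f / aspect;
    m[5]  = f;
    m[10] = ( zFar + zNear ) / ( zNear - zFar );
    m[11] = -1.0f;
    m[14] = ( 2.0f * zFar * zNear ) / ( zNear - zFar );

    glMatrixMode( GL_PROJECTION );
    glLoadMatrixf( m );
}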
On the other hand, the usual argument given in support of OGL is that its power lies in the simplicity of its design, its portability, and so on.

DX also provides other features such as audio and input, which can be used in conjunction with either API.

Personally I use OpenGL for rendering alongside DirectInput and DirectSound, but that's not to say my way is the best.

Well, thanks a lot everybody [smile]. I've made my decision: I'll start learning OpenGL. I'll post some screenshots after a few weeks of work in progress.
Thank you again.

I see little reason not to learn both. There are a lot of good resources out there for each. Why exclude a book because you code in DirectX and it's written using OpenGL? You might code primarily in one, but you should be able to read both. A third of my books are DirectX, another third OpenGL, and the last third just the math behind both. If a book seems good, I buy it regardless of the API its examples use.
