streamer

OpenGL & DirectX

Recommended Posts

Hi. I know DirectX fairly well, and when MS released DX9 I found a lot of nice stuff in it. After googling around, though, I found almost no DirectX tutorials for more advanced programming like shaders, HDR and such. Meanwhile I noticed there are lots of examples for OpenGL, although I didn't find many shader examples there either. A few years ago almost every notable FPS game was written in OpenGL, and almost none in DirectX, and those games ran like hell on my Pentium II. Well, I bought a new PC, and not long ago a new graphics card (ATI 9800 Pro), and tried Half-Life 2 (DX9), Far Cry (DX9), and Doom 3 (OpenGL). The DX games ran on max details with decent fps, while Doom 3 managed 20fps on medium details (I couldn't set it any higher). My curiosity about OpenGL grew, so I downloaded lots of OpenGL examples, and every one was detailed enough that people can really learn from it. And now I don't know what to do... I need BSPs, octrees, lightmaps, and that kind of stuff I can find for OpenGL, but for DX nothing! I was thinking of porting my whole engine to OpenGL, but then I didn't find many shader examples for OpenGL, nor HDR lighting... On the other hand, new engines seem to run poorly on OpenGL and fast on DirectX... I don't know what to do! I thought the two APIs were similar and that both were good. But what is the truth? Please, someone give me a guideline. Thanks in advance.

Quote:
Original post by streamer
I thought the two APIs were similar and that both were good. But what is the truth?


If you are talking about Direct3D and OpenGL, then that is the truth (depending on what you mean by "similar").

Quote:
Original post by Roboguy
Quote:
Original post by streamer
I thought the two APIs were similar and that both were good. But what is the truth?


If you are talking about Direct3D and OpenGL, then that is the truth (depending on what you mean by "similar").


Yes, I was thinking of Direct3D (DX9 dropped support for DirectDraw). But if that's the truth, why is Doom 3 such a "hardware-eating" engine? I don't think id Software has any newbie programmers on the team; they've proved a lot in the past. But Half-Life 2, for example, looks beautiful! And both engines get the best out of the hardware that they can.
For example, I looked at the code MS uses for skinning meshes. It was awful! I didn't understand a thing at first glance. But when I looked at code for rendering the MD3 format in OpenGL, everything looked understandable even though I don't know OpenGL!

1) Both APIs are just as good and just as powerful. There are some issues with NVIDIA having better OpenGL support and ATI better D3D support, but IMO it's not a big deal. Doom 3 runs slower because it uses a much more complex shading model (100% dynamic lighting, while HL2 for the most part uses precalculated radiosity maps). If you're looking for a reason to choose between GL and D3D, you won't find it in performance. Make your pick based on your personal preferences and coding style.
That's why no one can tell you what the "weak spots" of each one are; it's up to you. For me, OpenGL being procedural is a pro; for others it's a con.

2) BSPs, octrees and lightmaps are all techniques independent of the API. If you know the API syntax well enough, you can implement them easily.
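To illustrate the point, here's a minimal sketch of an octree node in C++ (the names and layout are mine, not from any particular engine): notice that nothing in it touches GL or D3D; only the eventual draw call would.

```cpp
#include <array>
#include <memory>
#include <vector>

// Axis-aligned bounding box; pure math, no rendering API involved.
struct AABB {
    float min[3], max[3];
    bool contains(const float p[3]) const {
        for (int i = 0; i < 3; ++i)
            if (p[i] < min[i] || p[i] > max[i]) return false;
        return true;
    }
};

// One octree node: up to eight children, each covering one octant of the box.
struct OctreeNode {
    AABB bounds;
    std::vector<int> objects;                         // indices into the scene's object list
    std::array<std::unique_ptr<OctreeNode>, 8> child; // null until subdivided

    // Pick the octant (0..7) a point falls into, one bit per axis.
    int octantOf(const float p[3]) const {
        float cx = (bounds.min[0] + bounds.max[0]) * 0.5f;
        float cy = (bounds.min[1] + bounds.max[1]) * 0.5f;
        float cz = (bounds.min[2] + bounds.max[2]) * 0.5f;
        return (p[0] > cx ? 1 : 0) | (p[1] > cy ? 2 : 0) | (p[2] > cz ? 4 : 0);
    }
};
```

The traversal, insertion and culling logic built on top of this is identical whichever API renders the visible set.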

3) There are plenty of shader examples, but shaders (and especially HDR) are relatively new, so it's logical that there are fewer shader examples than fixed-function ones. Again, for someone who knows the syntax, translating from D3D to OpenGL is trivial.

In short, stop dwelling on the thought that one API is better than the other and that it's important to find out which one it is, as if that would magically make you a better programmer and your games run faster. Just pick one (or both) and learn. They both deserve the time you'll spend on them.

Quote:
Original post by mikeman
Make your pick based on your personal preferences and coding style.


Portability might also be a factor in your choice.

First of all, you really can't compare HL2 on D3D to Doom 3 on GL and expect the comparison to mean anything about performance. Both games look great, granted, but Doom 3 does a LOT of pixel shading effects, where HL2 is generally more limited.

Both APIs in their current versions (with the appropriate extensions, in GL's case) expose identical, or at the very least nearly identical, features. The bottom line is that an API is only that; choosing one or the other when they're so similar is not going to make any application better or worse. It's about how you use it.

GL has some advantages, such as cross-platform compatibility, that DirectX doesn't even attempt to touch. Looking to the future, the PS3 uses OpenGL ES as its graphics API, Nintendo reportedly uses a custom GL-like API, and Macs and any non-Microsoft OS also use GL. Microsoft and its Xboxes are the only players promoting/using DX, so basically it's MS and DirectX versus everyone else and OpenGL. Granted, given the massive Windows user base one could argue the two are neck and neck, but GL has more supporters, especially in console games, which are what really drive the industry.

Also, any GOOD engine should abstract the 3D API away from the game code, and developing an equivalent renderer for the other API is possible (and IMHO should always be done).


The bottom line, though, is that you should use whichever one you are more comfortable with, and that is purely personal preference and up to you.

I'll start with the easy ones.

HDR in OpenGL courtesy of humus.ca
Lots of shaders in Cg, GLSL, vp/fp, HLSL, etc

These people are right: the only reason Doom 3 runs slower than the other games is its full per-pixel realtime lighting. The performance difference between OpenGL and Direct3D is virtually zero.

I would argue, however, that OpenGL can be easier for a beginner to learn. It lets you render geometry very simply to at least get something up on the screen (I'm referring to glVertex3f and friends). Direct3D has no support for this so-called "immediate mode" rendering. While a production game or 3D application will probably never use immediate mode because of its inherent slowness, it is helpful for getting the concepts down.

Thanks guys [smile], you've helped a lot, but now I'm even more mixed up.
I know D3D very well, and OpenGL not at all. I understand the difference between the engines (now). For a week or two I've had the feeling that OpenGL is better in some way, and that's why I posted here. I've found articles on the net about the chaos of OpenGL extensions for supporting the newest features. Is that true? Or are things better now? And is there an OpenGL 2.0?

Well, here.
OpenGL:

#include <windows.h>
#include <GL/gl.h>
#pragma comment( lib, "opengl32.lib" )
#pragma comment( lib, "gdi32.lib" )

HWND hWindow;
HDC hDC;
HGLRC hRC;

struct Vertex
{
    float x, y, z;
    float r, g, b;
};

LRESULT WINAPI WndProc( HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam )
{
    switch( msg )
    {
    case WM_CLOSE:
        PostQuitMessage( 0 );
        return 0;
    case WM_PAINT:
        ValidateRect( hWnd, NULL );
        return 0;
    }

    return DefWindowProc( hWnd, msg, wParam, lParam );
}

int main()
{
    HINSTANCE hInstance = GetModuleHandle( NULL );

    //create a window
    WNDCLASSEX wc = { sizeof(WNDCLASSEX), CS_CLASSDC, WndProc, 0, 0, hInstance,
                      NULL, NULL, NULL, NULL, "MiniOGL", NULL };
    RegisterClassEx( &wc );
    hWindow = CreateWindowEx( WS_EX_APPWINDOW | WS_EX_WINDOWEDGE, "MiniOGL", "MiniOGL",
                              WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT,
                              300, 300, NULL, NULL, hInstance, NULL );
    ShowWindow( hWindow, SW_SHOW );
    UpdateWindow( hWindow );

    //set up the pixel format and rendering context
    hDC = GetDC( hWindow );
    PIXELFORMATDESCRIPTOR pfd = {0};
    pfd.nSize = sizeof(PIXELFORMATDESCRIPTOR);
    pfd.nVersion = 1;
    pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cDepthBits = 16;
    pfd.cStencilBits = 0;
    int pixelFormat = ChoosePixelFormat( hDC, &pfd );
    SetPixelFormat( hDC, pixelFormat, &pfd );

    hRC = wglCreateContext( hDC );
    wglMakeCurrent( hDC, hRC );

    //now create our triangle
    Vertex Triangle[3] =
    {
        {  0.0f,  0.9f, 0.5f, 1.0f, 0.0f, 0.0f },
        { -0.9f, -0.9f, 0.5f, 0.0f, 1.0f, 0.0f },
        {  0.9f, -0.9f, 0.5f, 0.0f, 0.0f, 1.0f }
    };

    MSG msg;
    bool RunApp = true;
    while( RunApp )
    {
        if( PeekMessage( &msg, NULL, 0, 0, PM_REMOVE ) )
        {
            if( msg.message == WM_QUIT )
                RunApp = false;

            TranslateMessage( &msg );
            DispatchMessage( &msg );
        }

        //render stuff
        glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );

        glBegin( GL_TRIANGLES );
        for( int v = 0; v < 3; ++v )
        {
            glColor3f( Triangle[v].r, Triangle[v].g, Triangle[v].b );
            glVertex3f( Triangle[v].x, Triangle[v].y, Triangle[v].z );
        }
        glEnd();

        SwapBuffers( hDC );
    }

    wglMakeCurrent( NULL, NULL );
    wglDeleteContext( hRC );
    ReleaseDC( hWindow, hDC );
    DestroyWindow( hWindow );

    return 0;
}





Direct3D 9:

#include <windows.h>
#include <d3d9.h>
#pragma comment( lib, "d3d9.lib" )

HWND hWindow;
IDirect3D9* D3D;
IDirect3DDevice9* Device;

struct Vertex
{
    float x, y, z, rhw;
    D3DCOLOR Color;
};
#define FVF_VERTEX (D3DFVF_XYZRHW | D3DFVF_DIFFUSE)

LRESULT WINAPI WndProc( HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam )
{
    switch( msg )
    {
    case WM_CLOSE:
        PostQuitMessage( 0 );
        return 0;
    case WM_PAINT:
        ValidateRect( hWnd, NULL );
        return 0;
    }

    return DefWindowProc( hWnd, msg, wParam, lParam );
}

int main()
{
    HINSTANCE hInstance = GetModuleHandle( NULL );

    //create a window
    WNDCLASSEX wc = { sizeof(WNDCLASSEX), CS_CLASSDC, WndProc, 0, 0, hInstance,
                      NULL, NULL, NULL, NULL, "MiniD3D", NULL };
    RegisterClassEx( &wc );
    hWindow = CreateWindowEx( WS_EX_APPWINDOW | WS_EX_WINDOWEDGE, "MiniD3D", "MiniD3D",
                              WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT,
                              300, 300, NULL, NULL, hInstance, NULL );
    ShowWindow( hWindow, SW_SHOW );
    UpdateWindow( hWindow );

    //set up D3D and create the device
    D3D = Direct3DCreate9( D3D_SDK_VERSION );
    D3DPRESENT_PARAMETERS d3dpp = { 0 };
    d3dpp.Windowed = TRUE;
    d3dpp.SwapEffect = D3DSWAPEFFECT_DISCARD;
    d3dpp.BackBufferFormat = D3DFMT_UNKNOWN;
    D3D->CreateDevice( D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hWindow,
                       D3DCREATE_HARDWARE_VERTEXPROCESSING, &d3dpp, &Device );

    //now create our triangle (pre-transformed screen-space coordinates)
    Vertex Triangle[3] =
    {
        { 150.0f,  50.0f, 0.5f, 1.0f, D3DCOLOR_XRGB( 255, 0, 0 ) },
        { 250.0f, 250.0f, 0.5f, 1.0f, D3DCOLOR_XRGB( 0, 255, 0 ) },
        {  50.0f, 250.0f, 0.5f, 1.0f, D3DCOLOR_XRGB( 0, 0, 255 ) }
    };

    MSG msg;
    bool RunApp = true;
    while( RunApp )
    {
        if( PeekMessage( &msg, NULL, 0, 0, PM_REMOVE ) )
        {
            if( msg.message == WM_QUIT )
                RunApp = false;

            TranslateMessage( &msg );
            DispatchMessage( &msg );
        }

        //render stuff
        Device->Clear( 0, NULL, D3DCLEAR_TARGET, D3DCOLOR_XRGB( 0, 0, 0 ), 1.0f, 0 );

        Device->BeginScene();
        Device->SetFVF( FVF_VERTEX );
        Device->DrawPrimitiveUP( D3DPT_TRIANGLELIST, 1, Triangle, sizeof(Vertex) );
        Device->EndScene();

        Device->Present( NULL, NULL, NULL, NULL );
    }

    Device->Release();
    D3D->Release();

    DestroyWindow( hWindow );
    return 0;
}





These two programs do almost exactly the same thing: draw a single triangle on screen with a different color at each vertex. Take a look at the code and decide for yourself which one you prefer. Functionally, Direct3D and OpenGL are basically equivalent. It's a matter of taste.

Ah, you've picked a great topic to start a flame war with here. Personally I've seen enough of these arguments, so respect to everyone so far for not taking the bait.

At the end of the day it's really down to personal preference, or the requirements of your project. Try both and decide which is right for you.

Direct3D provides a lot of useful helper functions (the D3DX library) that aren't present in the OpenGL standard; if you need part or all of that functionality in OpenGL, you have to write it yourself (or use existing code from elsewhere). So in this respect development time can be cut substantially by using D3D.
On the other hand, the usual argument in OpenGL's favour is that its power lies in the simplicity of its design, its portability, and so on.
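As a small concrete example of what "write it yourself" means: D3DX gives you matrix helpers such as D3DXMatrixPerspectiveFovLH, whereas in core OpenGL you would call gluPerspective or build the projection matrix by hand. A hedged sketch of the hand-rolled version (GL-style right-handed, column-major layout; the function name is mine):

```cpp
#include <cmath>

// Build an OpenGL-style (right-handed, column-major) perspective projection
// matrix, equivalent to what gluPerspective sets up.
// fovY is the vertical field of view in radians; out[] receives 16 floats.
void perspectiveRH(float fovY, float aspect, float zNear, float zFar, float out[16])
{
    float f = 1.0f / std::tan(fovY * 0.5f);          // cotangent of half the FOV
    for (int i = 0; i < 16; ++i) out[i] = 0.0f;
    out[0]  = f / aspect;                             // x scale
    out[5]  = f;                                      // y scale
    out[10] = (zFar + zNear) / (zNear - zFar);        // depth remap
    out[11] = -1.0f;                                  // perspective divide by -z
    out[14] = (2.0f * zFar * zNear) / (zNear - zFar); // depth offset
}
```

With D3DX this whole function collapses into a single library call, which is the kind of convenience being traded off.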

DX also provides other subsystems such as audio and input, which can be used in conjunction with either graphics API.

Personally I use OpenGL for rendering alongside DirectInput and DirectSound, but that's not to say my way is the best.

Well, thanks a lot everybody [smile]. I've made my decision: I'll start learning OpenGL. I'll post some screenshots after a few weeks of work in progress.
Thank you again.

I see little reason not to learn both. There are a lot of good resources out there for each. Why exclude a book because you code in DirectX and it's written using OpenGL? You might code primarily in one, but you should be able to read both. A third of my books are DirectX, another third OpenGL, and the rest cover the math behind both. If it seems like a good book, I buy it regardless of the API its examples use.

