TheCompBoy

DirectX vs OpenGL?

[quote name='Yann L' timestamp='1313956459' post='4851994']
I'm sorry to be so blunt, but your results are bogus from a performance analysis point of view.

First of all, you are comparing benchmarks in frames per second, i.e. in non-linear space. Read [url="http://www.mvps.org/directx/articles/fps_versus_frame_time.htm"]this article[/url] for an explanation of how this approach is flawed.[/quote]
I remember that from some time ago, but I wasn't thinking too hard about it during my tests. At the time I wasn't planning on it being a serious benchmark; I was just using it for my own reference to see what made the system go up and down. What skewed my presentation is the amount of change.
I will go back to my old test cases, change to frame time, and update my blog, but the results will still show Direct3D 9 as the clear winner in actual speed.
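Converting is trivial; a minimal sketch (Windows-only; RenderFrame() is just a stand-in for whatever the benchmark draws) of what the updated measurements will look like:

[code]
// Minimal sketch: measure frame time rather than FPS.
// FPS is non-linear: 1000 FPS = 1.000 ms/frame, 1100 FPS = ~0.909 ms/frame,
// so a "100 FPS" gap up there is actually less than a tenth of a millisecond.
#include <windows.h>
#include <cstdio>

void RenderFrame();  // hypothetical: whatever the benchmark actually draws

void TimeOneFrame() {
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&t0);
    RenderFrame();
    QueryPerformanceCounter(&t1);
    double frameMs = (t1.QuadPart - t0.QuadPart) * 1000.0 / freq.QuadPart;
    std::printf("frame time: %.3f ms\n", frameMs);
}
[/code]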

[quote name='Yann L' timestamp='1313956459' post='4851994']
Second, you are benchmarking with a far too high framerate. As a rough and dirty guideline, all benchmarks that include FPS rates above 1000 are useless. The frametime at 1000 fps is 1ms. No API is designed to operate at this framerate. You will run into all kinds of tiny constant overheads that may affect performance in every possible unpredictable way. You don't leave the time for the GPU to amortize overheads. For real solid benchmarking results, you must do much heavier work. Not necessarily more complex, just more. Get your frametimes up, remove constant driver and API overheads. Benchmark your API within the range it is supposed to operate in. And be sure to know what part of the pipeline you're actually benchmarking, which leads us to the next point.[/quote]
I have tried many more tests than just what I posted: complex models and small ones, on many types of computers with various graphics cards.
The numbers go up and down per machine, but they are never disproportionate. The result, under all conditions, on all Windows x86 and x64 machines with the various ATI and GeForce cards I tried, is that OpenGL loses in speed.
After upgrading to the latest DirectX SDK, OpenGL doesn't even win under fluke conditions. The gap is just too wide.



[quote name='Yann L' timestamp='1313956459' post='4851994']
This does not make any sense. Again, read up on bottlenecks. Even if an API reduced its overhead to zero (which is partially possible by removing it entirely and talking to the hardware directly, as is done on many consoles), the final impact on game performance is often very small. Sometimes it's not even measurable, if the engine bottleneck is on the GPU (which is almost always the case in modern engines). The more work is offloaded to the GPU, the less important API overhead becomes.

The much more important question, which can indeed make a large difference between APIs and drivers, is the quality of the optimizer in the API's native shader compiler.[/quote]
It is true that I have yet to use vendor-specific extensions.
But until now I have had limited chances to do so.
The reason I say my results are reliable (regardless of my flawed presentation of them) is that they represent the most primitive set of API calls possible.
What happens on each frame (sketched in code below):
#1: Activate vertex buffers.
#2: Activate shaders.
#3: Update uniforms in shaders.
#4: Render with or without an index buffer.
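In OpenGL terms, the per-frame work is essentially just this sketch (the indexed variant shown; buffers and shaders are created up front, and the variable names are only illustrative):

[code]
// Sketch of one benchmark frame (GL 2.x-style; objects created elsewhere).
glBindBuffer(GL_ARRAY_BUFFER, vbo);                       // #1: vertex buffer
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);

glUseProgram(shaderProgram);                              // #2: shaders
glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, mvpMatrix);  // #3: uniforms

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);               // #4: indexed draw
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, 0);
[/code]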

I am not even assigning textures to slots for these benchmarks, and not changing any non-essential states such as culling or lighting. All matrix operations come from my own library and have exactly the same overhead as in DirectX.
That leaves very little room for common OpenGL performance pitfalls. In such a simple system, I am open to ideas on what I may have missed that could help OpenGL come closer to Direct3D.
I tried combinations of VBOs: VBOs for only large buffers, VBOs for all buffers, etc.
Redundancy checks prevent the same shader from being set twice in a row. That did help a lot, but it helped Direct3D just as much.

I have no logic loop, only rendering; so although you say I am benchmarking the CPU, I am only doing so insofar as I am bound to call OpenGL API functions from the CPU, which is how everyone else is bound too.


I said clearly in my blog that OpenGL on Windows suffers compared to OpenGL on other platforms. I will be able to benchmark those soon, but the fact is that OpenGL implementations on Windows are mostly garbage. I am not saying OpenGL itself is directly to blame, but indirectly it is, for not enforcing quality checks on vendor implementations.


L. Spiro


PS: Those are some good articles. I consider anything worth doing to be worth doing right and fully, so I will be sure to follow the advice of those articles in further testing and re-testing.
Perhaps a pro/contra comparison would help the thread starter a bit more with his decision.
And I'm also interested in a comparison :)

What about completing a list:

DirectX:
+ has multiple "modules" for everything you need (3D, sound, input...)
+ "Managed DirectX" (works with .NET) => many programming languages
+ can load meshes and textures
+ good tools (PIX, debug runtime, visual assist...)
+ used by some professional programs (Autodesk Inventor)
+ frequently used (mostly: games) => good support
+ msdn documentation

- more difficult to understand
- only works with Windows (~> Wine on Linux)


OpenGL:
+ easy to get first results (though complex results are still difficult)
+ runs on different platforms
+ frequently used with professional programs (CAD...), sometimes also in games
+ many open source libs (SDL, GLFW...)

- suffers from bad drivers (mostly on Windows, low-cost graphics cards)
- GLSL: different errors (or no errors) on different machines
- "extension chaos"
- new standard versions take a long time


Any amendments, additions, or mistakes I made?
(note: I omit the performance issue, because there seem to be many conflicting analyses/benchmarks)

[EDITS:
* changed "open source" to "many open source libs"
* changed "easier" in "easy to get first results"
* removed closed source from DX
* added "good tools" in DX
* added "some prof progs" in DX
* added GLSL problem
]
[quote name='DEADC0DE' timestamp='1314016005' post='4852246']
OpenGL
+ open source
[/quote]

This misconception needs to be killed stone dead. OpenGL is not open source, and it's not actually even software. The "Open" in its name refers to its status as a standard and has absolutely nothing whatsoever to do with source code. OpenGL is a specification for an interface between your program and the graphics hardware, and vendors implement this interface in their drivers (hence the frequent references to your "OpenGL implementation" in the docs).

I'd remove "closed source" as a minus for D3D too, as it's not a relevant comparison (due to the above), and add the fact that D3D has better tools (PIX, the debug runtimes, etc.) for developers. And maybe also add that D3D is now a frequent choice for CAD tools too (Autodesk Inventor being one example that I linked to earlier).
[quote name='shdpl' timestamp='1313878028' post='4851735']
[quote name='Gl_Terminator' timestamp='1313512841' post='4849912']
Use DirectX. I was an OpenGL fan, but finally I gave up; DirectX has a lot more support, and complex stuff is handled more easily.
[/quote]
Could you be more specific please?
[/quote]

Heheh, have you ever tried to enable full-screen anti-aliasing with OpenGL, or tried to draw 2D content, or use VBOs, or better, have you ever tried to write your own GLSL shader? Dude, I am telling you, OpenGL in the end is more difficult than DX, and I found that out after making my own game in OpenGL and then porting it to DX.
[quote name='YogurtEmperor' timestamp='1313992228' post='4852139']
The reason I say my results are reliable (regardless of my flawed presentation of them) is that they represent the most primitive set of API calls possible.
[/quote]
As outlined above, unless you completely change your approach to benchmarking the APIs, your results are invalid. It's not a question of presentation; it's your core benchmarking method that is flawed. If you want reliable benchmarking results, then I suggest you first learn how to properly benchmark a graphics system, and then build a performance analysis framework around that. You will be surprised by the results, because there will more than likely be zero difference, except for the usual statistical fluctuations.

[quote name='YogurtEmperor' timestamp='1313992228' post='4852139']
I am not even assigning textures to slots for these benchmarks, and not changing any non-essential states such as culling or lighting. All matrix operations come from my own library and have exactly the same overhead as in DirectX.
That leaves very little room for common OpenGL performance pitfalls. In such a simple system, I am open to ideas on what I may have missed that could help OpenGL come closer to Direct3D.
I tried combinations of VBOs: VBOs for only large buffers, VBOs for all buffers, etc.
Redundancy checks prevent the same shader from being set twice in a row. That did help a lot, but it helped Direct3D just as much.

I have no logic loop, only rendering; so although you say I am benchmarking the CPU, I am only doing so insofar as I am bound to call OpenGL API functions from the CPU, which is how everyone else is bound too.
[/quote]
This perfectly outlines what I was trying to explain to you in my post above: [i]you don't understand what you are benchmarking[/i]. You are analyzing the CPU-side API overhead. Your numbers may even be meaningful within this particular context. However, and this is the point, these numbers don't say anything about the API's 'performance' (if there even is such a thing)!

I will try to explain this a bit more. What you need to understand is that the GPU is an independent processing unit that largely operates without CPU interference. Assume you have a modern SM4+ graphics card. Assume further a single uber-shader (which may not always be a good design choice, but let's take it as an example), fully atlased/arrayed textures, uniform data blocks, and no blending / single-pass rendering. Rendering a full frame would essentially look like this:
[code]
ActivateShader()             // e.g. glUseProgram / SetVertexShader
ActivateVertexStreams()      // e.g. glBindBuffer / SetStreamSource
UploadUniformDataBlock()     // e.g. glUniform* / SetVertexShaderConstantF
RenderIndexedArray()         // e.g. glDrawElements / DrawIndexedPrimitive
Present/SwapBuffers() -> Internal_WaitForGPUFrameFence()
[/code]
In practice you would use at least a few state changes and possibly multiple passes, but the basic structure could look like this. What happens here? The driver (through the D3D/OpenGL API) sends some very limited data to the GPU (the large data blocks are already in VRAM) and then waits for the GPU to complete the frame, unless it can defer a new frame or queue up more frames in the command FIFO. Yup, the driver [i]waits[/i]. This is a situation we call GPU-bound. Being fill-rate limited, texture or vertex stream memory bandwidth bound, or vertex transform bound: all of these are GPU-bound scenarios.

And now comes the interesting part: neither OpenGL nor D3D has [i]anything[/i] to do with any of this! Once the data and the commands are on the GPU, the processing will be exactly the same whether you're using OpenGL or D3D. There will be absolutely no performance difference. Zero. Nada.

What [i]you[/i] are measuring is only the part where data is manipulated CPU-side. This part is only relevant if you are CPU-bound, i.e. if the GPU is waiting for the CPU. And this is a situation that an engine programmer will do anything to avoid. It's a worst-case scenario, because it implies you aren't using the GPU to its fullest potential. Yet this is exactly the situation you currently benchmark!

If you are CPU-bound, then this usually means that you are not correctly batching your geometry or that you are doing too many state changes. This doesn't imply that the API is 'too slow'; it usually implies that your engine design, data structures, or assets need to be improved. Sometimes this is not possible, or a major challenge, especially if you are working on legacy engines designed around a high-frequency state-changing / FFP paradigm. But often it is possible, especially on engines designed around a fully unified memory and shader-based paradigm, and you can tip the balance back to being GPU-bound.

So in conclusion: CPU-side API performance doesn't really matter. If you are GPU-bound, even a ten-fold performance difference would have zero impact on framerate, since it would be entirely amortized while waiting for the GPU. Sure, you can measure these differences, but they are meaningless in a real-world scenario. It is much more important to optimize your algorithms, which will have orders of magnitude more effect on framerate than CPU call overhead.
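And if you want hard numbers for the GPU side, measure the GPU directly. A minimal sketch using timer queries (GL 3.3 / ARB_timer_query; DrawScene() is a placeholder):

[code]
// Sketch: time the GPU's work itself, not the CPU-side API calls.
GLuint query;
glGenQueries(1, &query);

glBeginQuery(GL_TIME_ELAPSED, query);
DrawScene();                          // placeholder for the workload
glEndQuery(GL_TIME_ELAPSED);

GLuint64 gpuTimeNs = 0;               // this readback waits for the result
glGetQueryObjectui64v(query, GL_QUERY_RESULT, &gpuTimeNs);
printf("GPU time: %.3f ms\n", gpuTimeNs / 1.0e6);
[/code]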

[quote name='DEADC0DE' timestamp='1314016005' post='4852246']
* changed "open source" to "many open source extensions"
[/quote]
There are no "open source extensions". OpenGL has [i]absolutely nothing[/i] to do with open source.
[quote]
There are no "open source extensions". OpenGL has absolutely nothing to do with open source.
[/quote]

I was thinking of SDL, GLFW and so on. Is "many open source libs" easier to understand?
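For example, a cross-platform window with a GL context via SDL is only a handful of lines (a sketch, SDL 1.2-style):

[code]
// Sketch: an OpenGL-capable window with SDL 1.2.
#include <SDL/SDL.h>

int main(int argc, char* argv[]) {
    SDL_Init(SDL_INIT_VIDEO);
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
    SDL_SetVideoMode(800, 600, 32, SDL_OPENGL);  // also creates the GL context
    // ... draw with OpenGL here ...
    SDL_GL_SwapBuffers();                        // present the frame
    SDL_Quit();
    return 0;
}
[/code]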
D3D is not 'more difficult to understand' either... if anything, OpenGL is the harder of the two, as there are no central docs, many, many outdated tutorials, and hitting 'the fast path' is pretty damned hard unless you know what you are doing.

D3D doesn't have a 'slow path' for things like data submission, so it's easier to produce things which perform well.
[quote name='Gl_Terminator' timestamp='1314024529' post='4852316']
[quote name='shdpl' timestamp='1313878028' post='4851735']
[quote name='Gl_Terminator' timestamp='1313512841' post='4849912']
Use DirectX. I was an OpenGL fan, but finally I gave up; DirectX has a lot more support, and complex stuff is handled more easily.
[/quote]
Could you be more specific please?
[/quote]

Heheh, have you ever tried to enable full-screen anti-aliasing with OpenGL, or tried to draw 2D content, or use VBOs, or better, have you ever tried to write your own GLSL shader? Dude, I am telling you, OpenGL in the end is more difficult than DX, and I found that out after making my own game in OpenGL and then porting it to DX.
[/quote]

glEnable(GL_MULTISAMPLE) to enable full screen anti-aliasing
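(assuming the framebuffer was created with multisample buffers; with SDL, for example, the sketch is:)

[code]
// Sketch: request a multisampled framebuffer, then enable MSAA (SDL 1.2).
SDL_GL_SetAttribute(SDL_GL_MULTISAMPLEBUFFERS, 1);
SDL_GL_SetAttribute(SDL_GL_MULTISAMPLESAMPLES, 4);  // 4x MSAA
SDL_SetVideoMode(800, 600, 32, SDL_OPENGL);
glEnable(GL_MULTISAMPLE);
[/code]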


in Direct3D you have to use vertex buffers too

GLSL isn't any harder/easier than HLSL




[quote name='i_luv_cplusplus' timestamp='1314040877' post='4852463']GLSL isn't any harder/easier than HLSL[/quote]
The shading languages themselves are quite equivalent, but the infrastructure you need to build in your program to use them is somewhat more involved with OpenGL. D3D can be just one call to D3DXCreateEffectFromFile and you're ready to start drawing, compared to OpenGL's load (and you must write the loader yourself), compile, attach, link, validate dance. Some kind of saner shader-management library is sorely needed for OpenGL (and let's not make it GPL, as proprietary programs may want to use it too).
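For reference, the GL side of that dance looks roughly like this sketch (file loading and most error checks omitted):

[code]
// Sketch: GLSL's compile/attach/link/validate boilerplate (GL 2.0+).
GLuint vs = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vs, 1, &vsSource, NULL);   // vsSource: loaded by your own code
glCompileShader(vs);

GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fs, 1, &fsSource, NULL);
glCompileShader(fs);

GLuint prog = glCreateProgram();
glAttachShader(prog, vs);
glAttachShader(prog, fs);
glLinkProgram(prog);

GLint linked = GL_FALSE;                  // also check GL_COMPILE_STATUS per shader
glGetProgramiv(prog, GL_LINK_STATUS, &linked);
glValidateProgram(prog);
glUseProgram(prog);
[/code]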

Another downside is that with OpenGL each driver writer must provide their own shader compiler, whereas with D3D there is a single shader compiler provided by Microsoft. That greatly enhances the consistency and robustness of compiled HLSL shaders. No driver vendor can screw up (or put in dubious optimizations), everyone's shaders get compiled the same way by the same compiler, and the world is a happier place.

It also helps that HLSL has been quite stable for long periods of time, giving bugs a good chance to shake out through wide usage. SM3 HLSL, for example, is utterly rock-solid. GLSL, by contrast, has been a bit of a moving target recently, with many upgrades, incompatibilities between versions, and more exciting ways to shoot yourself in the foot. Not a problem if you're only running on your own hardware; you just code to what your hardware can and can't do. But as soon as you need to run on other people's hardware, and those other people are in different countries so you can't get at their machines for a debugging session, you really do appreciate the value of stability and predictability.
[quote name='mhagain' timestamp='1314056356' post='4852576']The shading languages themselves are quite equivalent, but the infrastructure you need to build in your program to use them is somewhat more involved with OpenGL. D3D can be just one call to D3DXCreateEffectFromFile and you're ready to start drawing, compared to OpenGL's load (and you must write the loader yourself), compile, attach, link, validate dance. Some kind of saner shader-management library is sorely needed for OpenGL (and let's not make it GPL, as proprietary programs may want to use it too).[/quote]As a pro/con argument, this just boils down to "[i]D3D has the D3DX utility library, while GL has no official utility library[/i]". This has quite a big impact on usability on a smaller scale (e.g. when learning), but less of an impact as you scale up to well-manned projects. Don't get me wrong though; it's always nice to have utility libraries available!

The de-facto equivalent of D3DX-Effects for GL is CgFX. These two standards are close enough that you can write a single file that works under both systems ([i]D3DX on Microsoft, and CgFX on non-Microsoft platforms[/i]).
[quote]Another downside is that with OpenGL each driver writer must provide their own shader compiler, whereas with D3D there is a single shader compiler provided by Microsoft. That greatly enhances the consistency and robustness of compiled HLSL shaders. No driver vendor can screw up (or put in dubious optimizations), everyone's shaders get compiled the same way by the same compiler, and the world is a happier place[/quote]Yeah I've been burnt by this a lot ([i]I'm looking at you, nVidia...[/i]) -- where an invalid shader compiles without error under one driver, but fails under a stricter driver, which just leads to [url="http://www.google.com.au/search?q=works+on+my+machine"]works on my machine[/url] syndrome...
[quote name='Yann L' timestamp='1314036556' post='4852426']
Too much to quote it all.
[/quote]
I understand how it works. As I said, I will do more serious (real) benchmarking later, but the results will not show OpenGL as the winner.
I have models purchased from TurboSquid for hundreds of dollars with several million triangles, and I have low-end machines which run them at only 20 FPS (GPU-bound), etc.
I have done lightweight scenarios and heavyweight ones.

The thing about my benchmarks is that I wanted them to use the same model viewed from the same position, so that I could compare each change I made against the previous state of the engine.
I wasn't intending it to be a real benchmark; I was only trying to see what things improved performance, and get a general idea of by how much. The numbers are meant to be compared to the previous numbers above them, not to each other.


The FPS figures I posted were high.
I posted those results from my home machine, which has two Radeon HD 5870s in CrossFire and a Core i7 975. Frankly, I have trouble getting [i]anything[/i] to run below many thousands of frames per second. It may seem skewed, but that is why I also test on my low-end machines, which run at around 20 FPS for the same objects. The results were the same, just with lower numbers.

My only point is that the presentation of my numbers is off, but there is no case in which OpenGL has ever been faster than Direct3D in any test I have run on any machine.



[color="#1C2837"][size="2"][quote]Any amendments, completions or mistakes I made?[/quote][/size][/color]
[size="2"][color="#1c2837"]Anything about one being easier to learn is heavily subjective. I would personally disagree heavily that OpenGL is easier to learn. It uses less-common terminology for things including pixels (which are “fragments” in OpenGL), is difficult to do “correctly” thanks to mostly outdated tutorials, etc. There are multiple ways to do everything (immediate mode, vertex buffers, VBO’s) so it is more difficult to learn “the right way”.[/color][/size]
[size="2"][color="#1c2837"]Immediate mode makes it easier to get a result on the screen, but if you are trying to use OpenGL properly, the level of difficulty is purely subjective.[/color][/size]
[size="2"] [/size]
[size="2"] [/size]
[size="2"] [/size]
[size="2"][color="#1c2837"][color="#000000"][size="3"][quote name='Hodgman' timestamp='1314060092' post='4852593']Yeah I've been burnt by this a lot ([i]I'm looking at you, nVidia...[/i]) -- where an invalid shader compiles without error under one driver, but fails under a stricter driver, which just leads to [url="http://www.google.com.au/search?q=works+on+my+machine"]works on my machine[/url] syndrome...[/size][/color][/color][/size][size="2"][color="#1c2837"][color="#000000"][size="3"][/quote][/size][/color]
[/color][/size]
My low-end NVIDIA cards were unable to set boolean uniforms. Literally. I even have a test case that does nothing but set a boolean, which changes the color of a triangle if set: if the OpenGL implementation works correctly, the triangle is red; otherwise it is blue. It shows red on high-end GeForces and all ATI cards, and blue on the GeForce 8400, GeForce 8600, and GeForce 8800.
Wow. Unable to set a bool.
So in my new engine I will not be using boolean uniforms unless I cannot avoid them.
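The whole test is basically this sketch (context and geometry setup omitted):

[code]
// Fragment shader for the test:
//   uniform bool bFlag;
//   void main() {
//       gl_FragColor = bFlag ? vec4(1.0, 0.0, 0.0, 1.0)   // red  = bool was set
//                            : vec4(0.0, 0.0, 1.0, 1.0);  // blue = driver bug
//   }
GLint loc = glGetUniformLocation(prog, "bFlag");
glUniform1i(loc, GL_TRUE);  // boolean uniforms are set through glUniform1i
DrawTriangle();             // placeholder draw call
[/code]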



L. Spiro
[quote name='Gl_Terminator' timestamp='1314024529' post='4852316']
Heheh, have you ever tried to enable full-screen anti-aliasing with OpenGL, or tried to draw 2D content, or use VBOs, or better, have you ever tried to write your own GLSL shader? Dude, I am telling you, OpenGL in the end is more difficult than DX, and I found that out after making my own game in OpenGL and then porting it to DX.
[/quote]
Don't take my question too personally, mate; we're having a nice objective discussion here. If you're stating an opinion, please add a few examples, because I believe this thread isn't about counting how many people vote for OGL or DX. Every 'vs' question probably brings another one: 'under what conditions?'

As to your question: yes, I have, but I haven't even played with DirectX yet. And yes, I'm new to gamedev (although not to IT), and I'm very curious how OpenGL compares to Direct3D.

[quote name='mhagain' timestamp='1314056356' post='4852576']
Another downside is that with OpenGL each driver writer must provide their own shader compiler, whereas with D3D there is a single shader compiler provided by Microsoft. That greatly enhances the consistency and robustness of compiled HLSL shaders. No driver vendor can screw up (or put in dubious optimizations), everyone's shaders get compiled the same way by the same compiler, and the world is a happier place.
[/quote]


I've read in a book that the consortium has provided a reference front-end implementation for the GLSL compiler*, so shaders could be validated by this reference implementation and just run on the vendor-specific one. Is this true, and isn't that a sufficient step to eliminate this kind of problem?


Furthermore, I liked the idea of giving vendors more chances to optimize (for the platform the code runs on). Is this bad in practice, or do the problems come from weak competition and little interest in providing high-quality OpenGL implementations?


* EDIT
[quote name='shdpl' timestamp='1314064583' post='4852610']I've read in a book that the consortium has provided a reference front-end implementation for the GLSL compiler*, so shaders could be validated by this reference implementation and just run on the vendor-specific one. Is this true, and isn't that a sufficient step to eliminate this kind of problem?

Furthermore, I liked the idea of giving vendors more chances to optimize (for the platform the code runs on). Is this bad in practice, or do the problems come from weak competition and little interest in providing high-quality OpenGL implementations?[/quote]Different drivers still accept wildly different forms of GLSL code.
For example, nVidia's drivers actually accept HLSL and Cg keywords, which don't exist at all in the GLSL spec! This is actually a great marketing tactic, because ([i]invalid[/i]) shaders that [i]do[/i] run on nVidia cards [i]fail to run[/i] on ATI cards, which makes it look like ATI is buggy.

But yes, if you validated your code using a neutral, known-compliant, front-end, you could likely avoid many of these problems.

Regarding vendor optimisation of shaders -- this happens with both GLSL and HLSL. With HLSL, D3D compiles your code down into an assembly language; however, this assembly language cannot run natively on GPUs. The GPU vendors then have to compile this assembly [i]again[/i] into their real, native assembly languages.
The HLSL compiler does all of the heavyweight, general-purpose optimisations on your code (such as removing dead code and simplifying expressions), and then the drivers perform hardware-specific optimisations, like instruction re-scheduling.
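You can watch that first stage yourself; a sketch using the D3D9-era D3DX compiler (error handling omitted):

[code]
// Sketch: compiling HLSL down to Microsoft's D3D assembly/bytecode.
ID3DXBuffer* byteCode = NULL;
ID3DXBuffer* errors   = NULL;
D3DXCompileShaderFromFile(L"shader.hlsl", NULL, NULL,
                          "main", "ps_3_0",   // entry point, target profile
                          0, &byteCode, &errors, NULL);
// byteCode now holds the optimised D3D assembly; the driver still
// recompiles it into the GPU's real native instruction set at load time.
[/code]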

With GLSL, the quality of the first type of optimisation (e.g. dead-code removal) varies a lot by platform. To get around this, the Unity team actually compile their GLSL code to assembly using a standard compiler, and then de-compile the assembly [i]back[/i] into GLSL again!


If you're authoring a very popular game ([i]which is likely to be used for benchmarks[/i]), then you can even expect the folks at ATI/nVidia to secretly extract the shader code from your game, re-write it by hand to be more optimal, and then ship their shader replacements in their drivers! At runtime, they detect your game, detect your shader, and instead load their hand-tuned replacement shaders to get a better benchmark score from reviewers.

[quote name='YogurtEmperor' timestamp='1314062991' post='4852606']
My low-end NVIDIA cards were unable to set boolean uniforms. Literally. I even have a test case that does nothing but set a boolean, which changes the color of a triangle if set: if the OpenGL implementation works correctly, the triangle is red; otherwise it is blue. It shows red on high-end GeForces and all ATI cards, and blue on the GeForce 8400, GeForce 8600, and GeForce 8800.
Wow. Unable to set a bool.
So in my new engine I will not be using boolean uniforms unless I cannot avoid them.[/quote]This is my biggest gripe with GL -- too many potential silent failures.
For example, if I compile a shader that exceeds the GPU's instruction limit, it succeeds... but then runs on the CPU at 0.1 Hz!
If I compile a shader that uses an unsupported hardware feature (e.g. array indexing in the pixel unit), it succeeds... but then runs on the CPU at 0.1 Hz!
Then there are cases where you try to use a feature, and it still runs on the GPU, but just does nothing at all -- like your bool example.

The problem with these cases is that there's no way to know whether the GPU is doing what you're asking of it, except by setting up a test case and reading back pixel colours... or setting up a test case and checking the frame time to [i]guess[/i] whether it ran in hardware or software :/
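i.e. something as crude as this sketch is about the best you can do:

[code]
// Sketch: guess hardware vs. software rendering by timing a test draw.
bool ProbablyRanInHardware() {
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);

    QueryPerformanceCounter(&t0);
    DrawTestQuad();   // placeholder: one draw using the suspect shader
    glFinish();       // force the driver/GPU to actually finish the work
    QueryPerformanceCounter(&t1);

    double ms = (t1.QuadPart - t0.QuadPart) * 1000.0 / freq.QuadPart;
    return ms < 50.0; // arbitrary threshold; software paths are far slower
}
[/code]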
I've got some questions:

Is it possible in DirectX 11 to set up a program without setting up the whole graphics pipeline (e.g. vertex, hull, domain, and pixel shaders)?
I didn't find a way to get a triangle on screen without it. (This might be an objective point in the "first contact is easier" discussion.)

Furthermore: many posts here say that GLSL has a lot of "potential silent failures". How do existing programs (professional OpenGL programs as well as OpenGL games) handle this? I never had problems running such programs on my computer.
Or is this just because some drivers accept more than the standard, while others seem to have bugs even when they stick to the standard? (Whatever the standard is...)

Thanks ;)
[quote name='i_luv_cplusplus' timestamp='1314040877' post='4852463']
[quote name='Gl_Terminator' timestamp='1314024529' post='4852316']
[quote name='shdpl' timestamp='1313878028' post='4851735']
[quote name='Gl_Terminator' timestamp='1313512841' post='4849912']
Use DirectX. I was an OpenGL fan, but finally I gave up; DirectX has a lot more support, and complex stuff is handled more easily.
[/quote]
Could you be more specific please?
[/quote]

Heheh, have you ever tried to enable full-screen anti-aliasing with OpenGL, or tried to draw 2D content, or use VBOs, or better, have you ever tried to write your own GLSL shader? Dude, I am telling you, OpenGL in the end is more difficult than DX, and I found that out after making my own game in OpenGL and then porting it to DX.
[/quote]

glEnable(GL_MULTISAMPLE) to enable full screen anti-aliasing


in Direct3D you have to use vertex buffers too

GLSL isn't any harder/easier than HLSL





[/quote]
OK, then tell me an equivalent to DirectDraw (2D) in OpenGL.
[quote name='DEADC0DE' timestamp='1314090594' post='4852709']
I've got some questions:

Is it possible in DirectX 11 to set up a program without setting up the whole graphics pipeline (e.g. vertex, hull, domain, and pixel shaders)?
I didn't find a way to get a triangle on screen without it. (This might be an objective point in the "first contact is easier" discussion.)

Furthermore: many posts here say that GLSL has a lot of "potential silent failures". How do existing programs (professional OpenGL programs as well as OpenGL games) handle this? I never had problems running such programs on my computer.
Or is this just because some drivers accept more than the standard, while others seem to have bugs even when they stick to the standard? (Whatever the standard is...)

Thanks ;)
[/quote]

You need vertex and pixel shaders at a minimum, but then again the same applies to a modern core OpenGL context (where VBOs are not optional either).

Real commercial programs avoid problems (or at least try to!) before they happen, through testing, bug-fixing, and working around identified driver issues. If a given combination of blend modes and texture setup doesn't work on an ATVIDIATEL driver, even if the spec says that it should, then you insert a lot of cruddy code to work around it. Because people won't blame their driver or its manufacturer for not working ("everything else works"); they'll blame your program.

See, for example, the Autodesk Inventor PDF link I posted earlier: at the time of writing, despite only using 1997-level OpenGL features, and despite having had 10 years to solidify their code base, they had 44 driver workarounds.
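The workarounds themselves tend to be as crude as this sketch (the quirk entry is made up):

[code]
// Sketch: the classic shape of a driver workaround -- vendor/renderer sniffing.
bool g_needsBlendWorkaround = false;

void DetectDriverQuirks() {
    const char* vendor   = (const char*)glGetString(GL_VENDOR);
    const char* renderer = (const char*)glGetString(GL_RENDERER);
    // Hypothetical quirk table; real ones grow one entry per bug report.
    if (strstr(vendor, "ATVIDIATEL") && strstr(renderer, "Budget 9000"))
        g_needsBlendWorkaround = true;
}
[/code]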
