Which API to learn first

18 comments, last by SoldierOfLight 6 years, 2 months ago

Hello!

I want to get into graphics programming so I've started learning DX11 but I'm not sure if it's the best choice to learn first. Obviously the end goal is to know as many as possible, but we all have to start somewhere. My first question is should I start by learning one of the newer, lower level, "bleeding edge" APIs like Vulkan/DX12, or should I start with DX11/OpenGL? I heard some people say that Vulkan/DX12 aren't really THAT much harder than the others but I've also heard the opposite. My thinking for choosing DX11 was that it was going to be easier and it would give me some good base knowledge to go and learn the more complicated APIs later.

My second, less important question is: should I start with the DX or GL side? I've heard that DX is more programmer friendly and easier to debug, so it's better for beginners. Is this true?

A bit of background on my competency: I feel like I have a good knowledge of C and C++, I've completed a handful of games with SDL and SFML and I do embedded C programming as a job. My 3D math is lacking but I feel like it's something I can learn.

Thanks for your help :)


I'd definitely recommend starting with D3D11. IMHO it really is the best all-around graphics API. All the concepts that you learn in pretty much any GPU API will translate to every other API, so learning the "wrong one" is not a waste of time. GL would be my second choice, with Vulkan/D3D12 tied for third.

My main points would be something like:


                    |D3D9 |D3D11|D3D12 |Vulkan| GL  |
Easily draw a cube  | Yes | No  | No   | No   | Yes |
Validation Layer    | No  | Yes | Yes  | Yes  | No* |
Validated Drivers   | MS  | MS  | MS   | Open | No  | 
Legacy APIs mixed in| Yes | No  | No   | No   | Yes |
Vendor extensions   | No^ | No^ | No^  | Yes  | Yes |
CPU/GPU concurrency |Auto |Auto |Manual|Manual|Auto |
Can crash the GPU   | No  | No  | YES  | YES  | No* |
HLSL                | Yes | Yes | Yes  | Yes# | No$ |
GLSL                | No$ | No$ | No$  | Yes  | Yes |
SPIR-V              | No  | No$ | No$  | Yes  | No* |
Windows             | Yes | Yes | Yes  | Yes  | Yes |
Linux               | No$ | No  | No   | Yes  | Yes |
MacOS               | No$ | No  | No   | No$  | Yes@|
* = available with vendor extensions
^ = not officially, but vendors hacked them in anyway
# = work in progress support
$ = DIY/Open Source/Middleware can get you there...
@ = always a version of the spec that's 5 years old...

D3D10 is useless now -- D3D11 lets you support D3D10-era hardware and do all the same things -- so we'll ignore it.

The one good thing about ancient APIs (e.g. GL v1.1, D3D9) is that very simple apps are very simple. In comparison, modern APIs make you do a lot of legwork just to get started. When I was starting out, writing simple GL apps with glBegin, glVertex, etc. was great fun :D If you come across any readable tutorials or books for these old API versions, they could still be a fun learning exercise.
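
Just for a sense of scale, here's roughly what drawing something looked like back then -- a sketch only, assuming a GL 1.x context has already been created by something like GLUT or SDL, with the function/vertex values purely illustrative:

#include <GL/gl.h>

// Immediate-mode GL: no buffers, no shaders, no pipeline objects.
void drawTriangle()
{
    glClear(GL_COLOR_BUFFER_BIT);

    glBegin(GL_TRIANGLES);            // hand vertices to the driver one at a time
    glColor3f(1.0f, 0.0f, 0.0f);      // per-vertex colour
    glVertex3f(-0.5f, -0.5f, 0.0f);
    glColor3f(0.0f, 1.0f, 0.0f);
    glVertex3f( 0.5f, -0.5f, 0.0f);
    glColor3f(0.0f, 0.0f, 1.0f);
    glVertex3f( 0.0f,  0.5f, 0.0f);
    glEnd();                          // that's the whole draw call
}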

Having a validation layer built into the API is really useful for catching your incorrect code. Of course you want to check all of your function calls for errors, but having the debugger halt execution and a several-sentence-long error message appear describing your coding mistake is invaluable. D3D does a great job here.
D3D9 used to have a validation layer but MS has broken it on modern Windows (got a WinXP machine handy? :D ) 
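
To give an idea of how easy it is to opt in, here's a minimal sketch of creating a D3D11 device with the validation layer enabled and telling it to break into the debugger on errors (error handling omitted, so treat it as illustrative rather than copy-paste ready):

#include <d3d11.h>
#include <d3d11sdklayers.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

ComPtr<ID3D11Device> CreateDebugDevice()
{
    ComPtr<ID3D11Device> device;
    ComPtr<ID3D11DeviceContext> context;

    // The debug flag is what turns on the validation layer.
    UINT flags = D3D11_CREATE_DEVICE_DEBUG;
    D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, flags,
                      nullptr, 0, D3D11_SDK_VERSION,
                      &device, nullptr, &context);

    // Halt in the debugger whenever the layer reports an error or corruption.
    ComPtr<ID3D11InfoQueue> infoQueue;
    if (SUCCEEDED(device.As(&infoQueue)))
    {
        infoQueue->SetBreakOnSeverity(D3D11_MESSAGE_SEVERITY_CORRUPTION, TRUE);
        infoQueue->SetBreakOnSeverity(D3D11_MESSAGE_SEVERITY_ERROR, TRUE);
    }
    return device;
}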

GL 2/3/4 tries to clean up the API with every version and officially throws out all the old ways of doing things... but unofficially, all the old ways still hang around (except on Mac!), making it possible to end up with a horrible mixture of three different APIs. It can also make tutorials a bit suspect when you're not quite sure whether you're learning core features from the version you want or not :|
D3D9 also suffers from this, since it supports both an ancient fixed-function drawing API and a modern shader-based drawing API...

Vendor extensions are great -- they allow you to access the latest features of every GPU before those features become standard, but for a beginner they just add confusion. D3D made the choice of banning them. They're actually still there, but you have to download some extra vendor-specific SDKs to hack around the official D3D restrictions :D

D3D12 and Vulkan code has to be perfect. If you've got any mistakes in it, you could straight up crash your GPU. This isn't too bad, as Windows will just turn it off and on again... but it can be a nightmare to debug these things. That doesn't make for a good learning environment. This would make them unusable, except that they've got the great validation layers to help guide you!

D3D9/D3D11/GL present an abstraction where it looks like your code is running in serial with the GPU -- i.e. you tell it to draw something, and the GPU draws it immediately. In reality, the GPU is often buffering up several frames of commands and acting on them asynchronously (in order to achieve better throughput); however, these APIs do a great job of hiding all the awful details that make this possible. This makes them much easier to use.
In D3D12/Vulkan, it's your job to implement this yourself. To do that, you need to be competent at multi-threaded programming, because you're trying to schedule two independent processors and keep them both busy without either ever stalling/locking the other one. If you mess this up, you can either instantly halve your performance, or worse, introduce memory corruption bugs that only occur sporadically and seem impossible to fix :(
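
To make that concrete, here's a crude sketch of the kind of fence juggling D3D12 expects from you. It assumes a device, command queue and recorded command list already exist, and it lazily blocks the CPU after every submit -- a real renderer would buffer a few frames in flight instead:

#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void SubmitAndWait(ID3D12Device* device, ID3D12CommandQueue* queue,
                   ID3D12CommandList* commandList)
{
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
    HANDLE fenceEvent = CreateEvent(nullptr, FALSE, FALSE, nullptr);

    ID3D12CommandList* lists[] = { commandList };
    queue->ExecuteCommandLists(1, lists);

    // Ask the GPU to signal the fence when it finishes the work above, then
    // block the CPU until it does. Forget a wait like this before reusing a
    // buffer and you get the sporadic memory-corruption bugs mentioned above.
    queue->Signal(fence.Get(), 1);
    fence->SetEventOnCompletion(1, fenceEvent);
    WaitForSingleObject(fenceEvent, INFINITE);

    CloseHandle(fenceEvent);
}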

D3D is a middle layer built by Microsoft -- there's your app, then the D3D runtime, then your D3D driver (Intel/NVidia/AMD's code). Microsoft validates that the runtime is correct and that the drivers are interacting with it properly. Finding out that your code runs differently on different GPUs is exceedingly rare.
GL is the wild west -- your app talks directly to the GL driver (Intel/NVidia/AMD's code), and there's no authority making sure that they implement GL correctly. Finding out that your code runs differently on different GPUs is common.
Vulkan is much better -- your app still talks directly to the Vulkan driver (Intel/NVidia/AMD's code), but there's an open source suite of tests that makes sure they implement Vulkan correctly, plus the common validation layer written by Khronos.
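
Opting in to that validation layer happens when you create your Vulkan instance. A rough sketch (the layer name has changed between SDK releases -- older SDKs used VK_LAYER_LUNARG_standard_validation -- and error handling is omitted):

#include <vulkan/vulkan.h>

VkInstance CreateInstanceWithValidation()
{
    // Khronos' validation layer, shipped with the Vulkan SDK.
    const char* layers[] = { "VK_LAYER_KHRONOS_validation" };

    VkApplicationInfo appInfo = {};
    appInfo.sType      = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    appInfo.apiVersion = VK_API_VERSION_1_0;

    VkInstanceCreateInfo createInfo = {};
    createInfo.sType               = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    createInfo.pApplicationInfo    = &appInfo;
    createInfo.enabledLayerCount   = 1;          // this is the opt-in
    createInfo.ppEnabledLayerNames = layers;

    VkInstance instance = VK_NULL_HANDLE;
    vkCreateInstance(&createInfo, nullptr, &instance);
    return instance;
}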

For shading languages, GLSL and HLSL are both valid choices; I just have a personal preference for HLSL. There are also a lot of open source projects aimed at converting HLSL->GLSL, but not as many for GLSL->HLSL.

Also note, the above choices are valid for desktop PCs. For browsers you have to use WebGL. On Android you have to use GL|ES, and on iOS you can use GL|ES or Metal. On Mac you can use Metal too. On game consoles, there's almost always a custom API for each console. If you end up doing graphics programming as a job, you will learn a lot of different APIs!

15 minutes ago, Hodgman said:

Having a validation layer built into the API is really useful for catching your incorrect code. Of course you want to check all of your function calls for errors, but having the debugger halt execution and a several-sentence-long error message appear describing your coding mistake is invaluable.

I never did D3D, so maybe this is something different. But isn't that similar to GL_ARB_debug_output?

1 hour ago, _Silence_ said:

I never did D3D, so maybe this is something different. But isn't that similar to GL_ARB_debug_output?

Yep, I counted it above as a vendor extension because, as that link says: "Implementations may create messages at their discretion. In debug contexts, the implementation is required to generate messages for OpenGL Errors and GLSL compilation/linking failures. Beyond these, whether additional warnings and so forth are generated is a matter of the implementation's discretion and quality."
i.e. the only thing that it's specified to do is the same as checking error codes. In older code, you would check glGetError after every operation, but now you can use this mechanism to do that instead.
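
For reference, hooking it up looks roughly like this -- a sketch that assumes a debug context and a loader such as GLEW or glad to provide the GL 4.3 / KHR_debug entry points:

#include <cstdio>
#include <GL/glew.h>   // any loader that exposes KHR_debug will do

static void GLAPIENTRY OnGlMessage(GLenum source, GLenum type, GLuint id,
                                   GLenum severity, GLsizei length,
                                   const GLchar* message, const void* userParam)
{
    std::fprintf(stderr, "GL debug: %s\n", message);
    // If you want D3D-style "break on this warning", this is where you'd
    // have to match the id/message text yourself and trigger a breakpoint.
}

void EnableGlDebugOutput()
{
    glEnable(GL_DEBUG_OUTPUT);
    glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS); // report at the offending call, not later
    glDebugMessageCallback(OnGlMessage, nullptr);
}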

The big problem is that you're asking your vendor's driver to police your app, so the quality of the QA that it does for you is going to vary depending on your driver. With D3D, it's Microsoft policing your app, and with Vulkan it's Khronos policing your app. Everyone gets policed the same way. The reason this distinction/separation is important is that, for example, say your app does something illegal, but Vendor A's driver is built to tolerate it -- you should expect that Vendor A's driver will choose not to warn you about this flaw in your program, while Vendor B's driver will (correctly) crash on it. Later on, you'll claim that your app ran fine under a debug context so Vendor B's drivers must be buggy, when actually Vendor B's drivers are correct and Vendor-A-failing-to-warn-you is the actual bug :o

The other good thing about the validation layer actually being standardized is that you can rely on its contents as a developer. For example, if I want to track down a specific performance warning in D3D12, I can look up its enum value and tell the runtime to break into my debugger on any line of code that triggers that warning:


infoQueue->SetBreakOnID(D3D12_MESSAGE_ID_CLEARRENDERTARGETVIEW_MISMATCHINGCLEARVALUE, TRUE);
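
For context, that infoQueue would come from the device itself once the debug layer is enabled -- something like this sketch:

#include <d3d12.h>
#include <d3d12sdklayers.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void BreakOnMismatchedClear(ID3D12Device* device)
{
    // With the debug layer on, the device exposes its ID3D12InfoQueue interface.
    ComPtr<ID3D12InfoQueue> infoQueue;
    if (SUCCEEDED(device->QueryInterface(IID_PPV_ARGS(&infoQueue))))
    {
        infoQueue->SetBreakOnID(
            D3D12_MESSAGE_ID_CLEARRENDERTARGETVIEW_MISMATCHINGCLEARVALUE, TRUE);
    }
}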

In GL this is technically possible too (if your particular driver supports the particular warning you're looking for... remember the whole layer is completely unspecified!), but you've got to reverse engineer the log message from your particular driver and then write your own code to watch for the message and trigger a breakpoint :/

Thanks for the clarification!

APIs are just a means to an end; in this context, a way to express certain graphics algorithms and concepts. With that said, regarding which graphics API to learn first, I'll keep my bias out of it and go with the recommendations already given. However, before you delve into an API (which is useless by itself), how is your understanding of the basics:
- 3D maths
- Lighting
- Shaders (not the actual implementation/API specifics, but the concept)
...

Without a basic understanding of these, you will find yourself fighting a battle on two fronts: the basic concepts on one hand, and the API itself on the other. The two are NOT one and the same.
 

I hear these are pretty good D3D11 tutorials for beginners ;P

https://www.braynzarsoft.net/viewtutorial/q16390-braynzar-soft-directx-11-tutorials

@Hodgman - I would put a note beside GL support for MacOS. MacOS stopped supporting OpenGL back at version 4.2, which is over 5 years old now? Otherwise great summary.

"Those who would give up essential liberty to purchase a little temporary safety deserve neither liberty nor safety." --Benjamin Franklin

1 hour ago, Mike2343 said:

@Hodgman - I would put a note beside GL support for MacOS. MacOS stopped supporting OpenGL back at version 4.2, which is over 5 years old now? Otherwise great summary.

Did they stop supporting OpenGL (I kinda find that hard to believe, as this would force all legacy applications to update), or do they just not support any version greater than 4.2?

6 minutes ago, cgrant said:

Did they stop supporting OpenGL (I kinda find that hard to believe, as this would force all legacy applications to update), or do they just not support any version greater than 4.2?

Older apps are fine. They're just no longer updating the drivers beyond the 4.2 standard, and they're only fixing very critical bugs that may lead to security issues.

"Those who would give up essential liberty to purchase a little temporary safety deserve neither liberty nor safety." --Benjamin Franklin

