The future of graphics APIs

19 comments, last by wodinoneeye 10 years, 9 months ago

Hey people,

I am in the (pre)process of making a 3D engine/framework to fool around in. This is mainly a learning thing for me at the moment, but I do want to keep an eye on the future. Most likely this thread will be more of a "rubber ducking" thing than a "help me choose" thread, but I value well-argued opinions.

Now one of the bigger choices I am facing is whether to use OpenGL or DirectX, but I am not entirely sure which one is the better pick with both the future and learning new things in mind.

The thing is, I made a couple of 3D frameworks for smaller assignments and used ("modern") OpenGL for them, giving me the advantage that I already know a good deal about it; but that also means there is less left to learn there, in contrast to DirectX, which I was only really able to scratch the surface of in the past with D3D9.

One of the things that I am wondering about however is the future of both of these APIs.

From sources I am not sure I am allowed to mention, I heard that NVidia is experimenting with ditching graphics APIs as a whole and instead letting us program the graphics card directly. I am not sure how valid this is; it may also be something that was misinterpreted.

I also noticed that some companies (like Valve) have started switching to OpenGL for the sake of multiplatform compatibility, and I don't really see DirectX going multiplatform anytime soon (if ever).

Also, with the exception of the Xbox consoles, I think most consoles use OpenGL or something similar to it.

What do you think the future has in store for us? I did google a bit, but I can't really find any nice articles of the direction of graphics APIs for the (near) future.

What would you choose to do if you are chasing a career in graphics programming? Expand your knowledge to a broader spectrum, or go more in depth with what you already know?


The APIs are getting closer and closer to how the hardware works, so there is not much room for differences at the API level. This means they will expose roughly the same functionality, just with different names for certain concepts and different ways of using them.

That leaves pretty much two things to consider:

-Support across different platforms

-Architecture (ease of use)

The first is a plus for OpenGL, the second for DirectX, since the state machine approach of OpenGL is apparently broken and makes some things less pleasant than in DirectX. I'm not sure if that is heading in a better direction (it likely is), but for that reason you might want to at least try DirectX to see how things are implemented there.
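To make that contrast concrete, here's a minimal sketch of the two models side by side (error handling omitted; device, context, diffuseTex and vertexCount are placeholders assumed to already exist):

    // OpenGL: a global state machine. Each call mutates hidden context
    // state, and whatever happens to be set at draw time is what gets used.
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glBindTexture(GL_TEXTURE_2D, diffuseTex);
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);

    // Direct3D 11: state is baked into immutable objects up front and
    // bound as a whole, so there is far less hidden state to track.
    D3D11_BLEND_DESC desc = {};
    desc.RenderTarget[0].BlendEnable           = TRUE;
    desc.RenderTarget[0].SrcBlend              = D3D11_BLEND_SRC_ALPHA;
    desc.RenderTarget[0].DestBlend             = D3D11_BLEND_INV_SRC_ALPHA;
    desc.RenderTarget[0].BlendOp               = D3D11_BLEND_OP_ADD;
    desc.RenderTarget[0].SrcBlendAlpha         = D3D11_BLEND_ONE;
    desc.RenderTarget[0].DestBlendAlpha        = D3D11_BLEND_ZERO;
    desc.RenderTarget[0].BlendOpAlpha          = D3D11_BLEND_OP_ADD;
    desc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

    ID3D11BlendState* blendState = nullptr;
    device->CreateBlendState(&desc, &blendState);
    context->OMSetBlendState(blendState, nullptr, 0xFFFFFFFF);
    context->Draw(vertexCount, 0);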

There are also things like GPGPU computing, which you might want to use alongside the graphics API. There may be differences between DirectX and OpenGL in how the graphics API interfaces with a given GPGPU API.


From sources I am not sure I am allowed to mention, I heard that NVidia is experimenting with ditching graphics APIs as a whole and instead letting us program the graphics card directly. I am not sure how valid this is; it may also be something that was misinterpreted.


This I find questionable. There is so much going on behind the scenes that the only way for this to be possible would be for nVidia to write their own API for their hardware. I wouldn't put it past them, and they have already done something similar with CUDA, but I doubt it.

What would you choose to do if you are chasing a career in graphics programming? Expand your knowledge to a broader spectrum, or go more in depth with what you already know?


If I could go back and do this all over again, I would cut back my time spent learning OpenGL and add time learning DirectX. Both APIs have their merits, but I only know OpenGL and that puts me at a distinct disadvantage. If I were to want to make a career out of this, I would want as broad a skill set as possible.

From sources I am not sure I am allowed to mention, I heard that NVidia is experimenting with ditching graphics APIs as a whole and instead letting us program the graphics card directly. I am not sure how valid this is; it may also be something that was misinterpreted.

This was the state of affairs before Glide/GL/DX/etc arrived on the scene. It was a nightmare for developers.
Sure, we'd all love to pare back GL/D3D a bit these days so they're a thinner abstraction, but going back to a driver free-for-all would be terrible.

Interpreting this statement another way, though, your source could be implying that with the arrival of OpenCL/CUDA/DirectCompute, the GPU hardware is becoming more and more open in how it can be used, rather than forcing us to follow the traditional pipeline specified by GL/D3D. That sentiment is definitely true -- GPUs have certainly become "GPGPUs", or "compute devices", which are extremely flexible.
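As a trivial illustration of that flexibility, here's what a toy OpenCL kernel looks like; there's no vertex, triangle or fragment anywhere in it, just data-parallel work (names are made up for the example):

    // OpenCL C: arbitrary per-element work on the GPU, completely
    // outside the traditional GL/D3D triangle pipeline.
    __kernel void scale_positions(__global float4* positions,
                                  const float factor,
                                  const uint count)
    {
        uint i = get_global_id(0);
        if (i < count)
            positions[i] *= factor;  // vector *= scalar is valid OpenCL C
    }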

I also noticed that some companies (like Valve) have started switching to OpenGL for the sake of multiplatform compatibility, and I don't really see DirectX going multiplatform anytime soon (if ever).

IMHO GL's "portability" is a poisoned chalice. Every different driver contains a different OpenGL implementation. Even on a single OS, like Windows 7, you've got half a dozen (or more) different OpenGL implementations that you will need to test your game on, and possibly have to make changes for. Khronos makes the spec, but they don't actively enforce 100% compliance with it.

Doing professional QA for a GL engine on Windows/Linux (or any engine on Mobile/Web) is a complete nightmare. You need a tonne of different devices/GPUs and a lot of time.
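In practice this is why GL codebases end up sniffing the driver at startup and carrying per-vendor workaround flags around; a sketch of the usual pattern (the two quirks shown are invented for illustration):

    #include <cstring>
    #include <GL/gl.h>

    // Per-driver workaround flags; real engines accumulate dozens of these.
    struct DriverQuirks {
        bool avoidBufferOrphaning = false;
        bool flushAfterFboSwitch  = false;
    };

    DriverQuirks detectQuirks()
    {
        DriverQuirks q;
        const char* vendor =
            reinterpret_cast<const char*>(glGetString(GL_VENDOR));
        if (vendor && std::strstr(vendor, "Intel"))
            q.avoidBufferOrphaning = true;  // hypothetical slow path
        if (vendor && std::strstr(vendor, "ATI"))
            q.flushAfterFboSwitch = true;   // hypothetical driver bug
        return q;
    }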

I made a personal decision (other opinions will vary!) to simply choose D3D on Windows, because I found the maintenance costs to be lower due to its behaviour being dictated (and tested/validated/enforced) by Microsoft.
On MacOS the situation is similar, with Apple being a benevolent dictator ruling over GL and ensuring it's implemented properly, so GL isn't as flaky there.
On Linux, there is no benevolent dictator. D3D9 can be mostly emulated in Wine, and GL support is entirely up to the driver.
On mobile, GLES support varies wildly from device to device. You'll really want to test your code on every single different device... :(
On web, your user's browser might have WebGL in some capacity, or flash, probably, maybe. But web has always been a compatibility/porting nightmare.
On consoles, you've probably only got the choice of using some proprietary API that's different to all of the above.
So personally, I choose to use multiple APIs -- the most natural/stable one for each platform: D3D9, D3D11, GL3, GLES2, etc...
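In code, "multiple APIs" usually just means hiding each one behind a small renderer interface with one backend per platform; a bare-bones sketch (Mesh and Material are stand-ins for the engine's real types):

    struct Mesh {};
    struct Material {};

    // The rest of the engine only ever talks to this interface.
    class IRenderer {
    public:
        virtual ~IRenderer() = default;
        virtual void beginFrame() = 0;
        virtual void drawMesh(const Mesh&, const Material&) = 0;
        virtual void endFrame() = 0;
    };

    // One concrete backend per platform; the D3D11 one might look like
    // this (bodies stubbed out for brevity).
    class D3D11Renderer final : public IRenderer {
    public:
        void beginFrame() override { /* clear targets via the context */ }
        void drawMesh(const Mesh&, const Material&) override { /* bind + Draw */ }
        void endFrame() override { /* Present() the swap chain */ }
    };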

Also, with the exception of the Xbox consoles, I think most consoles use OpenGL or something similar to it.

No, unless by "similar to OpenGL" you mean that they have a plain-C interface, or unless you'd also say that D3D is similar to OpenGL (which it kinda is) :D
The exception is the PS3, which provides its own native graphics API (which everyone should be using), and also provides a crappy wrapper around it called PSGL, which makes it look similar to GL.

What would you choose to do if you are chasing a career in graphics programming? Expand your knowledge to a broader spectrum, or go more in depth with what you already know?

Both :P
I jumped back and forth between D3D and GL at different points in time, and the more you do it, the more they both feel the same. Graphics programming should eventually be about what you make with the APIs, not the irrelevant details of how you use them. I'd much prefer to hire a graphics programmer who can implement different types of lighting systems (forward/deferred/tiled/clustered/etc), post-processing effects, materials, special effects, and so on, on any single API at all, rather than someone who knows all the APIs inside out but can't demonstrate practical use of them.
The knowledge of how to achieve some effect on top of an API is portable to all APIs of the same class (e.g. D3D9 vs GL2, D3D11 vs GL4, etc), and it is a much broader and deeper skill set. Learning the nuts and bolts of a particular API's interface is more rigid, structured learning, which any graphics programmer can do in time. Once you've learned one API, picking up another is pretty straightforward.


The first is a plus for OpenGL, the second for DirectX, since the state machine approach of OpenGL is apparently broken and makes some things less pleasant than in DirectX. I'm not sure if that is heading in a better direction (it likely is), but for that reason you might want to at least try DirectX to see how things are implemented there.

I have heard counterarguments, though, that OpenGL in its current state is more robust. Can you clarify?


No, unless by "similar to OpenGL" you mean that they have a plain-C interface, or unless you'd also say that D3D is similar to OpenGL (which it kinda is)
The exception is the PS3, which provides its own native graphics API (which everyone should be using), and also provides a crappy wrapper around it called PSGL, which makes it look similar to GL.

Well, I have actually only programmed on the PSP and PS3, and in general it just felt pretty much the same as OpenGL; that's actually what I meant :)


Interpreting this statement another way, though, your source could be implying that with the arrival of OpenCL/CUDA/DirectCompute, the GPU hardware is becoming more and more open in how it can be used, rather than forcing us to follow the traditional pipeline specified by GL/D3D. That sentiment is definitely true -- GPUs have certainly become "GPGPUs", or "compute devices", which are extremely flexible.

I think that is most likely the case indeed. I'm not from the age of driver-free programming, but I can imagine that there will always be a layer in between, or else a commercial entity will most likely appear to create one.

Both major APIs are converging towards the hardware, and the hardware from competing vendors is converging towards similar logical programming models, if not physical architecture. Beyond that, I think the trend will be to push programmability down into more stages of the pipeline: truly programmable tessellation seems a shoo-in at some point, and lots of smart people have said programmable texture sampling would be nice (although that gets rather hard to implement in hardware). Currently, the most modern GPUs are heavily optimized for compute but still clearly graphics-first; I think in the future GPUs will flip this around and compute will really be the first-class citizen, while they'll maintain the necessary fixed-function components (texture sampling, ROPs, etc) to keep leading graphics performance.

The only really divergent thing we know is going to happen is that nVidia is going to start putting ARM CPU cores on their GPUs. That'll probably have a lot of interesting applications that people are yet to think of.


Up to a year ago I would have unreservedly recommended D3D: it's a cleaner API, the drivers are more robust, the tools and support are better, and behaviour is more consistent across different hardware/driver combos. Nowadays I'm not so sure. I would have hoped that MS had learned their lesson from Vista (locking D3D versions to Windows versions is not a good idea), but it seems that they haven't. None of that takes away from the fact that D3D9 and 11 are still the best-in-class of their generations, and even with the new Xbox being 11.2, it's a safe bet that the majority of titles will still target vanilla 11 in the PC space for at least the next few years.

OpenGL's portability is not as big a deal as it's often made out to be. Even if you don't give a damn about portability, you can still hit ~95% of the PC target market (assuming that the latest Steam survey is representative). Unless you're going for a very specific, very specialized target audience where you know for a fact that the figures are different - don't even bother worrying about it.

The really big deal with OpenGL is that it allows new hardware features to be accessed without also requiring an OS upgrade. You can also drop in handling for them without having to rewrite the rest of your renderer. That of course needs to be balanced against the driver situation (a safe bet is to just copy whatever id Software titles do, as you can be certain that at least is well supported) and the mess of vendor-specific extensions (which can be seriously detrimental to hardware portability).
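Concretely, those new hardware features arrive as extensions you can probe for at runtime; a sketch using the GL3-style enumeration (on Windows, glGetStringi itself has to be fetched through the extension-loading mechanism first):

    #include <cstring>
    #include <GL/gl.h>

    // GL3-style query: enumerate extensions one by one instead of
    // parsing the old monolithic GL_EXTENSIONS string.
    bool hasExtension(const char* name)
    {
        GLint count = 0;
        glGetIntegerv(GL_NUM_EXTENSIONS, &count);
        for (GLint i = 0; i < count; ++i) {
            const char* ext =
                reinterpret_cast<const char*>(glGetStringi(GL_EXTENSIONS, i));
            if (ext && std::strcmp(ext, name) == 0)
                return true;
        }
        return false;
    }

    // Usage: only take a fast path where the driver advertises it, e.g.
    // if (hasExtension("GL_ARB_buffer_storage")) { ... }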

Longer term it is always most beneficial to know both, of course.


Firstly, that's not a rant, just my point of view :)

From sources I am not sure I am allowed to mention, I heard that NVidia is experimenting with ditching graphics APIs as a whole and instead letting us program the graphics card directly. I am not sure how valid this is; it may also be something that was misinterpreted.

This was the state of affairs before Glide/GL/DX/etc arrived on the scene. It was a nightmare for developers.

Secondly, I see it the other way around: it was unified, nice, and easy to develop before the various APIs arrived. You wrote your software in C, and everything, from game logic to sound mixing to rasterizing triangles and blending pixels, was just one unified code base.

You could write a triangle rasterizer in combination with a voxel (heightmap) tracer like Comanche or Outcast. All you cared about was one unified code base.

Then those APIs came along, and they started a real nightmare from a programmer's point of view (no matter whether it was a software or hardware API, aka a command buffer). It was the only way to get access to the speed of rasterization hardware; the first chips, like the S3 parts, were not even faster than CPUs, but they ran in parallel, so you could do other stuff and in total it was faster. With Glide, GL, and DX you then completely lost your freedom to the devil.

Back then, you worried about how many triangles or vertices or sprites you could render; now that has turned into how many API calls you can make! Imagine how absurd that actually is. With low-level access on consoles with 10-year-old hardware, you can manage ten times the draw calls of the newest PC hardware, purely due to API limitations. And those limitations are not getting better, but worse every time.

DX was already about twice as slow as OpenGL on Windows XP (that's what NVidia claimed in some presentation). Vista + DX10 was supposed to speed things up by introducing state objects, but DX9 and DX10 games showed that DX9 actually ran fast, for the simple reason that DX9 games sparsely updated just the few states they needed, while a state object always sets all of its states, even if 90% of it is equal to the previous one (drivers don't do state guarding, it's not their job; see the sketch below for how engines do it themselves). With DX11 compute you got another slowdown: vendors recommend not switching between compute and 3D rendering more than twice a frame, as this is a pipeline reconfiguration; they have to flush, halt, reconfigure, and restart the pipe. (It's less bad nowadays, but when DX11 launched, that was the case.)
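The usual engine-side answer is to do your own state guarding, i.e. filter out redundant state changes before they ever reach the API; a minimal sketch of the idea:

    // Engine-side state cache: skip calls that would set what is already
    // set, since redundant API calls are pure driver overhead.
    class BlendStateCache {
        bool blendEnabled = false;
    public:
        void setBlend(bool enable)
        {
            if (enable == blendEnabled)
                return;                 // filtered: no API call issued
            blendEnabled = enable;
            if (enable) glEnable(GL_BLEND);
            else        glDisable(GL_BLEND);
        }
    };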

We can now handle more lights than objects on PC (without instancing), and if you look at games like Crysis from 2007 and compare them to current games, you will see the same number of draw calls, purely due to API limitations.

GPU vendors try to sell APIs as the solution to big incompatibility problems, but that's really just marketing. Look at CPUs: you can run 30-year-old code on current CPUs, and you can recompile your C/C++ code for x86, ARM, MIPS, or PowerPC and mostly 'just run' it.

Programming GPUs 'without an API' doesn't mean you write your command buffer at the hardware level; that's not the point, that was the misguided start of APIs. Writing for GPUs would mean that you create your C++ code (or whatever language you prefer that compiles), compile it to an intermediate language (e.g. LLVM opcode), and execute it on some part of the GPU. That part would start 'jobs' that do whatever you intended to do.

Similar to the culling code DICE runs on compute, but for everything: you could transform all objects with simple C++ code, apply skinning or water simulation, draw triangles or trace heightmap voxels; if you want, you could use path tracing, or simply draw 2D sprites for particles, all without any API calls from your desktop application to the GPU! (A toy sketch of the transform case follows below.)
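As a toy sketch of that transform case (layout and names are illustrative only, not DICE's actual code), an OpenCL kernel that transforms every vertex of every object without a single draw call could look like this:

    // Each work item transforms one vertex by its object's world matrix.
    __kernel void transform_vertices(__global const float4*  inPositions,
                                     __global float4*        outPositions,
                                     __global const float16* worldMatrices,
                                     __global const uint*    objectIndex,
                                     const uint vertexCount)
    {
        uint i = get_global_id(0);
        if (i >= vertexCount)
            return;

        float16 m = worldMatrices[objectIndex[i]];
        float4  p = inPositions[i];

        // Row-major 4x4 matrix * position, one dot product per row.
        outPositions[i] = (float4)(
            dot(p, (float4)(m.s0, m.s1, m.s2, m.s3)),
            dot(p, (float4)(m.s4, m.s5, m.s6, m.s7)),
            dot(p, (float4)(m.s8, m.s9, m.sa, m.sb)),
            dot(p, (float4)(m.sc, m.sd, m.se, m.sf)));
    }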

Nowadays even NVidia and ATI are starting to be unhappy with 'the API', which actually rather means they want other APIs, but MS is just not updating as frequently as back then, and the industry just does not care; most games would still run nicely on DX9, and the current consoles ARE DX9-level hardware.

So, anyone who wants a truly future-proof API should write 99% of the engine in OpenCL/CUDA. (I recommend the Intel SDK; you can profile, debug, etc. in Visual Studio, just like normal C++.) You can push 100k draw calls @ 60 Hz if you like; you can keep DCT-compressed textures on the GPU and decompress tiles of them on demand if you want; you can bake lightmaps on demand if you like (like Quake 1 did with surface caches); you can implement some sub-d; you can do occlusion culling while drawing, on the GPU, with zero latency; you can filter Ptex textures without hacking around to get proper filtering on borders.

And let's not even ask "isn't it slow?". Vertex shaders ran slower than fixed-function T&L, and proper pixel shaders (see the GeForce FX) were also ridiculously slow compared to 'hard-wired pixel pipelines'. You could fit 3540 Voodoo Graphics chips into your GTX 680 transistor budget, rasterizing 10+ billion triangles/s, 10x what the GTX 680 can do. Of course, that's naive and useless math, just like comparing a pure GPGPU pipeline with some hard-wired triangle rasterizers.

I'm really not happy with either API. :(

Direct3D 10 and 11 have been pretty solid (more so than OpenGL, IMO), but I really don't trust where Microsoft has been heading, and it seems worse to me every year (e.g. not making the latest graphics API available on all their popular operating systems). It has always been a problem for me that Direct3D is tied to a single company and OS, and now it seems it may be increasingly tied to specific versions of that OS (artificial scarcity, anyone?).

The fact that OpenGL is, well, open and available on a wide variety of platforms is great. That's a HUGE advantage over Direct3D. Unfortunately, I think the design of OpenGL is shit, to be honest, and then you combine that with the poor quality of the various drivers. I would love to see a new version of OpenGL (OGL 5, maybe?) that isn't based on a new set of hardware features but is instead a ground-up redesign of the API, with no focus on making it compatible with previous versions and instead a focus on making things work well. Maybe they could start by copying Direct3D 11 and then improve from there? :D I can dream.

What are you to do? Those are really your only two options if you want accelerated 3D today, and that is a damn shame, I think. What I would really love to see is for companies like AMD and Nvidia to open up their hardware specs and drivers. Maybe then it would be easier for competitive drivers or even competing APIs to emerge. Maybe there will soon be a massive shift in CPU architecture: instead of a handful of heavyweight cores, you'll have hundreds of lightweight cores. It would basically be like a GPU, only more freely programmable (no more drivers!), and at that point you could implement OpenGL or an alternate graphics API entirely in software. Again, I can dream.

I would love to see a new version of OpenGL (OGL 5, maybe?) that isn't based on a new set of hardware features but is instead a ground-up redesign of the API, with no focus on making it compatible with previous versions and instead a focus on making things work well. Maybe they could start by copying Direct3D 11 and then improve from there? :D I can dream.


They tried that; it was known as Longs Peak, and it got scrapped in favour of GL 3.0, to much uproar from many.
Both NV and AMD were firmly behind it, but it got killed about 6 months before GL 3.0's release for reasons that were never really explained.

(Many people blamed CAD companies at the time, but I heard from an IHV employee who was working on the LP spec that it wasn't them. My personal theory is that Apple and/or Blizzard put the boot in, as Apple probably had no desire to redo their API and Blizzard wanted cross-platform coverage with the latest features... but that's just my opinion.)

