On APIs.

posted in Not dead...
Published March 23, 2011
Right now 3D APIs are a little... depressing... on the desktop.

While I still think D3D11 is technically the best API we have on Windows, the fact that AMD and NV currently haven't implemented multi-threaded rendering in a manner which helps performance is annoying. I've heard that there are good technical reasons why this is a pain to do; I've also heard that right now AMD have basically sacked it off in favour of focusing on the Fusion products. NV are a bit further along, but in order to make use of it you effectively give up a core, as the driver creates a thread which does the processing.
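
For context, the multi-threaded D3D11 path in question (deferred contexts and command lists) looks roughly like this. A minimal sketch, assuming you already have a device and an immediate context; error handling and the actual draw calls are omitted:

[code]
#include <d3d11.h>

// Worker thread: record work through a deferred context into a command list.
ID3D11CommandList* RecordWork(ID3D11Device* device)
{
    ID3D11DeviceContext* deferred = nullptr;
    device->CreateDeferredContext(0, &deferred);

    // Set state and issue draws here exactly as you would on the immediate
    // context; nothing reaches the GPU yet, it is only recorded.
    // deferred->Draw(...), etc.

    ID3D11CommandList* commandList = nullptr;
    deferred->FinishCommandList(FALSE, &commandList);
    deferred->Release();
    return commandList;
}

// Main/render thread: play the recorded work back on the immediate context.
void SubmitWork(ID3D11DeviceContext* immediate, ID3D11CommandList* commandList)
{
    immediate->ExecuteCommandList(commandList, FALSE);
    commandList->Release();
}
[/code]

The promise is that the expensive part of building the commands happens on the worker threads; the complaint above is that, with current drivers, you don't actually see a win from it.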

At this point my gaze turned to OpenGL. With OpenGL 4.x, while the problems with the API are still there in the bind-to-edit model, which is showing no signs of dying, feature-wise it has to a large degree caught up. Right now, however, there are a few things I can't see a way of doing from GL, but if anyone knows differently please let me know...
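
(As a quick aside for anyone who hasn't hit it: 'bind-to-edit' means you can't modify an object without first binding it, disturbing whatever was bound before. A trivial sketch, assuming a texture handle and the EXT_direct_state_access extension for the contrast:)

[code]
#include <GL/glew.h>   // or any loader exposing EXT_direct_state_access

void SetLinearFiltering(GLuint tex)
{
    // bind-to-edit: to change the texture's state you must bind it first,
    // trampling whatever was bound to GL_TEXTURE_2D on the active unit.
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    // direct state access (EXT_direct_state_access): edit the object
    // directly, no bind required.
    glTextureParameteriEXT(tex, GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
}
[/code]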


  • Thread-free resource creation. The D3D device is thread safe, in that you can call its resource creation routines from any thread. As far as I know GL still needs a context which must be bound as 'current' on the calling thread in order to create resources.
  • Running a pixel shader at 'sample' frequency instead of pixel frequency, so in an MSAA x4 render target the shader would run four times per pixel.
  • The ability to write to a structured memory buffer from the pixel shader. I admit I've not looked too closely at this, but a quick look at the latest extensions for pixel/fragment shaders doesn't give any clue that this can be done.
  • Conservative depth output. In D3D a shader can be tagged in such a way that it'll never output a depth greater than the fragment already had, which preserves early-z rejection while allowing you to write out depth info different to that of the primitive being drawn.
  • Forcing early-z to run. When combined with the UAV writing above, this allows things like calculating both colour and 'other' information per fragment and having both written only if early-z passes; otherwise the UAV data gets written even when the colour isn't.
  • Append/consume structured buffers; I've not spotted anything like this anywhere. I know we are verging into compute territory here, which is OpenCL, but pixel shaders can use them. (A rough D3D11-side sketch of a few of these points follows this list.)
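
For reference, here's roughly what the D3D11 side of some of those points looks like: creating an append structured buffer and binding it as a UAV a pixel shader can write to. This is a minimal, illustrative sketch (the struct and names are made up, error handling omitted), not production code:

[code]
#include <d3d11.h>

struct Fragment { float colour[4]; float extra[4]; }; // made-up element type

// Resource creation: ID3D11Device methods are free-threaded, so this can run
// on any worker thread without juggling a 'current' context as in GL.
void CreateAppendBuffer(ID3D11Device* device, UINT elementCount,
                        ID3D11Buffer** outBuffer, ID3D11UnorderedAccessView** outUAV)
{
    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth           = sizeof(Fragment) * elementCount;
    desc.Usage               = D3D11_USAGE_DEFAULT;
    desc.BindFlags           = D3D11_BIND_UNORDERED_ACCESS | D3D11_BIND_SHADER_RESOURCE;
    desc.MiscFlags           = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
    desc.StructureByteStride = sizeof(Fragment);
    device->CreateBuffer(&desc, nullptr, outBuffer);

    D3D11_UNORDERED_ACCESS_VIEW_DESC uavDesc = {};
    uavDesc.Format             = DXGI_FORMAT_UNKNOWN;
    uavDesc.ViewDimension      = D3D11_UAV_DIMENSION_BUFFER;
    uavDesc.Buffer.NumElements = elementCount;
    uavDesc.Buffer.Flags       = D3D11_BUFFER_UAV_FLAG_APPEND; // append/consume semantics
    device->CreateUnorderedAccessView(*outBuffer, &uavDesc, outUAV);
}

// Binding: render target and UAV side by side, so a pixel shader declaring an
// AppendStructuredBuffer<Fragment> in register u1 can write per-fragment data.
// On the HLSL side, tagging that shader [earlydepthstencil] forces early-z, and
// an SV_DepthGreaterEqual/SV_DepthLessEqual output gives the conservative depth
// behaviour mentioned above.
void BindForPixelShaderWrites(ID3D11DeviceContext* context,
                              ID3D11RenderTargetView* rtv, ID3D11DepthStencilView* dsv,
                              ID3D11UnorderedAccessView* uav)
{
    UINT initialCount = 0; // reset the append counter for this frame
    context->OMSetRenderTargetsAndUnorderedAccessViews(
        1, &rtv, dsv,
        1, 1, &uav, &initialCount); // UAV slot 1, directly after the single RTV
}
[/code]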


There are probably a few others which I've missed; these are the ones which spring to mind, however, and many of them are things I want to use.

OpenGL also still has the 'extension' burden around its neck, with GLee out of date and GLEW just not looking that friendly (I took a look at both this weekend gone). In a way I'd like to use OpenGL because it works nicely with OpenCL, and in some ways the OpenCL compute programming model is nicer than the D3D compute model, but with API/hardware features apparently missing this isn't really workable.

In recent weeks there has been talk of ISVs wanting the 'API to go away' because (among other things) it costs so much to make a draw call on the PC vs consoles. While I somewhat agree with the desire to free things up and get at the hardware more, one of the reasons put forward for this added 'freedom' was to stop games looking the same; however, in a world without APIs, where you are targeting a constantly moving set of goal posts, you'll see more companies either drop the PC as a platform or license an engine which handles all of that for them.

While people talk about 'to the metal' programming being a good idea because of how well it works on the consoles, they seem to forget it often takes half a console life cycle for this stuff to become commonplace, and that is targeting fixed hardware. In the PC space things change too fast for this sort of thing; AMD themselves, in one cycle, would have invalidated a lot of work by going from VLIW5 to VLIW4 between the HD5 and HD6 series, never mind the underlying changes to the hardware itself. Add in the fact that 'to the metal' would likely lag hardware releases and you don't have a compelling reason to go that route, unless all the IHVs decide to go with the same TTM "API", at which point things will get... interesting (see OpenGL for an example of what happens when IHVs try to get along).

So, unless NV and AMD want to slow down hardware development so that things stay stable for multiple years, I don't see this as viable at all.

The thing is, SOMETHING needs to be done when it comes to the widening 'draw call gap' between consoles and PCs. Right now five-year-old console hardware can outperform a cutting-edge system when it comes to the CPU cost of draw calls; fast forward three years to the next generation of console hardware, which is likely to have even more cores than now (12 at minimum, I'd guess), faster RAM and DX11+ class GPUs as standard. Unless something goes VERY wrong, that hardware will likely allow trivial application of command list/multi-threaded rendering, further opening the gap between the PC and consoles.

Right now PCs are good 'halo' products, as they allow devs to push up the graphics quality settings and simply soak up the fact that we are CPU limited on graphics submission, thanks to out-of-order processors, large caches and higher clock speeds. But clock speeds have hit a wall, and when the next generation of consoles drops they will match the PC in single-threaded clock speed and graphics hardware... suddenly the pain of developing on the PC, with its flexible hardware, makes it look less and less attractive.

For years people have been talking about the 'death of PC gaming', and the next generation of hardware could well cause, if not that, then the reduction of the PC to MMO, RTS, TBS and 'Facebook' games, while all the large AAA games move off to the consoles where development is easier, rewards are greater and things can be pushed further.

We don't need the API to 'go away', but it needs to become thinner, both on the client AND the driver side. MS and the IHVs need to work together to make this a reality, because if they don't they will all start to suffer in the PC space. Of course, with the 'rise of mobile' they might not even consider this an issue...

So, all in all, the state of things is depressing... too much overhead, missing features, and in some ways doomed in the near future...

Comments

Gaiiden
I just linked to this post in my Daily, where I mentioned [url="http://www.bit-tech.net/hardware/graphics/2011/03/16/farewell-to-directx/1"]AMD talking about getting rid of DirectX[/url]. Good timing! :)
March 24, 2011 01:59 AM
Alpha_ProgDes
So we are going back to the days of the 386 and 486? Oh god, no. Also, how will this affect indie developers? Will we all need to be assembler experts again? What it sounds like to me is that you have three real competitors in the graphics area: AMD, NV, Intel. Allowing programmers to program to the metal will eventually create camps, and sooner or later one graphics chip will win the API wars. I think that's what AMD, at least, is looking towards.
March 24, 2011 03:43 AM
Jansic
I think the 'to-the-metal' terminology is horrendously misleading - no game developers should be programming the GPUs directly at the hardware level. I spend my working days writing GPU microcode for my employer. Worrying about cache coherency and page-break frequency when reading vertex data and scheduling multiple processing cores while feeding DMA distribution buffers is my job - I really wouldn't want to wish these tasks on anybody whose intention is to just write a game. Not least because of the internal knowledge of the chips developers would need to achieve this...

That's why we write drivers and compilers and choose standard APIs to present it. Thinner APIs would be a good thing from the CPU point of view, but you'd be surprised at the amount of distance you still have between the developer and the hardware.
March 24, 2011 09:40 AM
Krohm
[b]I agree completely on the bind-to-edit sentence.[/b]
It's even more depressing if you consider GL as a matrix.
Entry: profile
Entry: bindless?
Entry: direct state access?

Even more depressing: being stuck in pre-D3D10.
March 24, 2011 12:05 PM
Scoob Droolins
I've been around a while, since the first days of 3D hardware. Back in '96 or so, our engine supported multiple vendor-specific APIs in addition to D3D IM - Glide (3dfx), CIF (ATI), Speedy (Rendition), even S3 had one, can't recall the name. It was tedious for our small company to support separate render backends for every vendor. I fear that a to-the-metal approach to the GPU interface will result in spotty support for new hardware, and possibly drive developers to the big engine houses as they give up on their own in-house engines. And this is supposed to make each game look unique?
March 24, 2011 03:01 PM
Halifax2
I believe I am missing something: what special thing can't we render now with D3D/OGL that we could do "without an API"? All this sounds completely tasteless to me and off the mark. I just can't "imagine" these revolutionary graphics they are talking about, so I'm wondering if anyone could help me out?
March 24, 2011 03:16 PM
Matias Goldberg
Yes, yes! I can already see it:

"Write games, [s]not engines[/s] not driver microcode"

"Buy the latest machine to play the next-next-next Gen, but keep the old one to play your old games"

"I need a GPU emulator to play this old 2012 game but it doesn't run very smooth in my 50 Petahertz PC"




God... [b]the horror[/b]! This doesn't make sense on so many levels, from the developer's side as well as from the consumer's. Can you get a great performance benefit? Sure. And I won't deny the draw call hard limit, but you're missing the huuuuuuuuuuuuuuuuuuge disadvantage of the vendor and device differences you have to deal with. Something, of course, you don't get on a console.

Even VESA was a pain in the ass, and that solved a lot of the problems that existed before it came along, and that thing wasn't even 3D.

The AMD guy is also missing one point: 512 stream processors in the GTX 580 vs 48 in the Xenos doesn't mean it's 12 times more powerful. That's [url="http://en.wikipedia.org/wiki/Amdahl%27s_law"]Amdahl's law[/url].

Like NineYearCycle said, the real limitations are art direction, publishers, time constraints, and the like. Go to YouTube, search for [url="http://www.youtube.com/results?search_query=CryEngine+3+GDC+11&aq=f"]CryEngine 3 licensee showcases[/url], and watch the videos. The engine is the same in all of them. The only one (or at least one of the very few) that looks astounding is Crysis 2. It's not because of technology; it's because of art direction and budget size.

The Samaritan demo from UDK shows a glimpse of what can be achieved in D3D11, but again, it's more a show-off of art direction than of technical stuff.




Maybe AMD wanted to say D3D should go away on the Xbox? That would make a lot more sense.




What is more feasible is stuff like extra instructions for the CPU: special CPU instructions that directly perform the draw call (and similar). Instead of calling a function, an instruction would be executed. Very complex to solve at the design level, but it could work out. The API would still exist and be absolutely necessary, but the overhead could go away. But that's more a wish than an idea.




March 25, 2011 02:29 AM
Josh Klint
I was critical of the initial OpenGL 3 spec when it was released, as it seemed like a very small increment forward, but now that we have the much better GL 3.3 I am very pleased. It looks like OSX Lion will support at least OpenGL 3.2, so that's encouraging. The single biggest problem is Intel, who I don't believe has ever made any graphics hardware that conformed to any version of the OpenGL specification.

I'm hoping in a few years there will be one version of OpenGL that is used for every device (except those made by Microsoft), but until then we'll deal with it.

Throwing out graphics APIs would be a horrendous move.
March 25, 2011 05:46 AM
y2kiah
I understand the average reaction to this "no API" talk; it's a shocking prospect. I do think we're being a bit near-sighted when it comes to this discussion, though. The idea of throwing out the APIs within the next 10 years makes very little sense, but fast forward to 10, 15, even 20 years from now. The "GP" in GPU is transitioning from [i]Graphics Processing[/i] to [i]General Purpose[/i] in a very obvious way; we see this trend happening before our very eyes. Soon there will be very little reason NOT to completely integrate the GPU and CPU into a single cohesive chip that has all of the performance of a CPU/GPU mix, but none of the overhead. We can already see this type of thing happening with Sandy Bridge. I think the idea of starting over with our APIs is to approach the design and implementation from a completely new angle that is more in line with the way new hardware will be, once it gets to that point. There will still be APIs, but eventually Direct3D and OpenGL will have run their course, I think.
March 29, 2011 02:35 PM
Jason Z
As long as the CPU and the GPU use fundamentally different paradigms (i.e. a CPU is good at branching, and a GPU is good at data parallel computation) then there will be a need to have two different ways of programming them. Until we get to the point that they actually are the same core, then we will need to program them differently.

In general, we don't want to write our programs in multiple languages, so we have an API that abstracts the GPU programming away. I don't see this going away any time soon, and any predictions 10+ years from now are just speculation - there is no way to know what is going to happen that far out.

@Phantom: In my testing I see a 40% performance improvement when using multithreaded DX11 in some cases. I think that is more than enough justification to add the architecture to an engine/game project. Granted, the test cases are a bit constructed to prove the point that it works, but in general there is a good improvement in CPU-bound applications.
March 30, 2011 09:21 PM
zerotri
It'd be interesting to see a day when GPUs all do share the same core architecture. I think that there will always be someone out there that wants to program down "to the metal," as is seen in cases today with software using inline x86 assembly. If GPU vendors offered a lower level way to take advantage of the GPU, someone would find a use for it. In the case of games being written for one or multiple GPUs, game developers would need to choose between supporting multiple GPU architectures or losing a percentage of the gaming population.

There is also a bit of talk about the difference in graphics quality between PCs and consoles, but I don't see much talk here about the other differences in architecture between the two. The PS3, for example, has a CPU with 6 vector processing cores that are often used for post-processing. Another advantage that consoles have is their more limited range of output resolutions: in the case of the PS3 and 360, you have the x480, x720 and x1080 resolutions that you would normally target, whereas with PCs you have a much wider range of output resolutions, some higher and some lower, which can also greatly affect the time needed to draw. Granted, consoles can push higher-quality graphics with older hardware, but they are designed for the purpose of pushing those graphics. Their hardware and software (OS, APIs) are much more tailored for gaming, whereas PCs have to handle so much more at the same time.
April 04, 2011 08:33 PM