DX may be dead before long...


Agreed... I have tried this myself and saw very little IQ improvement. Maybe it would matter more if you were zoomed in on a surface? I have no idea, but I'm guessing it would.

True enough, but with most games, how often do you have time to sit there and zoom in on something? Most of the time you are busy fighting aliens/nazis/zombies/soldiers/robots/ninjas. If a graphical improvement isn't easily noticeable, does it really make that much of a difference?

(I'm just playing the devil's advocate here. I'm all for having better graphics, but there does eventually come a point where throwing more hardware at the problem doesn't have as great an impact as the art direction).
The thing is, as scenes get closer to 'real', it is the subtle things which can make the difference and give the eye/brain small cues as to what is going on.

Take tessellation, for example; its major use is doing what the various normal-mapping schemes can't, namely adjusting the silhouette of an object. Normal mapping is all well and good for faking surface detail, but a displaced, tessellated object is going to look better, assuming the art/scene is done right, of course. (It is also useful for adding extra detail to things like terrain.)
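If it helps make that concrete, here is a minimal C++ sketch of the displacement step a domain shader effectively performs after tessellation; the SampleHeight function and the scale factor are purely illustrative stand-ins, not any particular engine's API. The point is that the vertex position itself moves, so the silhouette changes, which a normal map can never do.

[code]
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 Add(Vec3 a, Vec3 b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 Scale(Vec3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }

// Stand-in for a displacement-map fetch: a simple procedural bump in [0,1].
static float SampleHeight(float u, float v)
{
    return 0.5f + 0.5f * std::sin(u * 40.0f) * std::sin(v * 40.0f);
}

// Normal mapping only perturbs the shading normal; the geometry (and therefore
// the silhouette) stays flat. Displacement moves the tessellated vertex itself,
// so the outline of the object changes as well.
static Vec3 DisplaceVertex(Vec3 position, Vec3 normal, float u, float v, float displacementScale)
{
    float height = SampleHeight(u, v);
    return Add(position, Scale(normal, height * displacementScale));
}
[/code]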

Another effect would be subsurface scattering; this is, if done correctly, a subtle effect on the skin/surface of certain objects which provides a more lifelike feel. It shouldn't jump out and grab you the way normal mapping or shadows did when they first appeared, but the overall effect should be an improvement.

Also, the argument about DX vs. a new API isn't so much about the graphical output as about the CPU overhead, and about coming up with ways to have the GPU do more work on its own. Larrabee would have been a nice step in that direction: having a GPU re-feed and re-trigger itself, removing the burden from the CPU. So, while lower CPU costs for drawing would allow us to draw more, at the same time it would simplify things (being able to throw a chunk of memory at the driver which was basically [buffer id][buffer id][buffer id][shader id][shader id][count], for example, via one draw call would be nice) and give more CPU time back to gameplay to improve things like AI and non-SIMD/batch-friendly physics (which will hopefully get shifted off to the GPU part of an APU in the future).
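To illustrate the kind of 'throw a chunk of memory at the driver' submission described above, here is a hypothetical C++ sketch; the DrawPacket layout and the SubmitAll entry point are made up for the example and do not correspond to any existing API.

[code]
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical packed draw command: roughly the
// [buffer id][buffer id][buffer id][shader id][shader id][count] layout above.
struct DrawPacket
{
    uint32_t vertexBufferId;
    uint32_t indexBufferId;
    uint32_t constantBufferId;
    uint32_t vertexShaderId;
    uint32_t pixelShaderId;
    uint32_t indexCount;
};

// Imaginary low-overhead entry point: in a real system this would live in the
// driver and translate each packet into GPU commands. Here we just walk the
// array to show the shape of the idea.
static void SubmitAll(const DrawPacket* packets, std::size_t count)
{
    for (std::size_t i = 0; i < count; ++i)
    {
        const DrawPacket& p = packets[i];
        (void)p; // e.g. bind p.vertexBufferId / p.pixelShaderId, then draw p.indexCount indices
    }
}

static void RenderScene(const std::vector<DrawPacket>& scene)
{
    // One call into the driver instead of thousands of per-object draw calls,
    // which is where the CPU overhead complained about above comes from.
    SubmitAll(scene.data(), scene.size());
}
[/code]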

Edit:

When it comes to the subtle things, right now my biggest issue with characters is their eyes. Take Mass Effect 2 on the 360; the characters look great, move great (much props to the mo-cap and animation guys) and feel quite real, so much so it was scary at times... right up until you look into their eyes, and then it's "oh... yeah...". Something about the lighting on them still isn't right; it's subtle but noticeable, more so as everything else gets closer to 'real'. (It's probably the combination of a lack of subsurface scattering, diffuse reflection of local light sources and micro-movement of the various components of the eye which causes the issue.)

Something else that hasn't really been mentioned so far in this thread or the article is the law of diminishing returns. Sure, my graphics card might be 10x more powerful... but what good is that power if it is adding 10x more polygons to a scene that already looks pretty good?

Looking over screenshots of DirectX 11 tessellation in that recent Aliens game, I found it somewhat difficult to distinguish between the lower-res model and the tessellated one. It's not that we aren't using that extra graphics horsepower - it's that it isn't easily visible.

On the subject of normal mapping: there was a recent presentation by Crytek about various methods of texture compression (including normals). For their entire art chain they are attempting to use 16 bits per channel, including normal maps. The difference was subtle, but it was there. Now here's the thing: which is the bigger difference - going from no normal map to an 8-bit normal map, or going from an 8-bit normal map to a 16-bit one?
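For a rough feel for that question, here is a small self-contained C++ sketch that quantizes a unit normal to 8 and then 16 bits per channel and measures the angular error; the example normal and the plain unsigned [0,1] encoding are just assumptions for illustration, not Crytek's actual scheme.

[code]
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3 Normalize(Vec3 v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Encode each component into [0,1], round to 'bits' of precision, then decode
// again -- the same round trip a stored normal map imposes.
static Vec3 Quantize(Vec3 n, int bits)
{
    float levels = float((1u << bits) - 1u);
    auto q = [&](float c) {
        float encoded = std::round((c * 0.5f + 0.5f) * levels) / levels;
        return encoded * 2.0f - 1.0f;
    };
    return Normalize({ q(n.x), q(n.y), q(n.z) });
}

static float AngularErrorDegrees(Vec3 a, Vec3 b)
{
    float d = a.x * b.x + a.y * b.y + a.z * b.z;
    if (d > 1.0f) d = 1.0f;
    return std::acos(d) * 180.0f / 3.14159265f;
}

int main()
{
    Vec3 n = Normalize({ 0.31f, 0.59f, 0.75f }); // arbitrary example normal
    std::printf("8-bit error:  %f degrees\n", AngularErrorDegrees(n, Quantize(n, 8)));
    std::printf("16-bit error: %f degrees\n", AngularErrorDegrees(n, Quantize(n, 16)));
    // Going from nothing to an 8-bit map changes the shading completely; going
    // from 8 to 16 bits only shaves off a fraction of a degree of error.
    return 0;
}
[/code]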


I think the primary issue here is: why add pretty much anything when:

1) Console hardware can't handle it.
2) PC sales are a relatively small portion of the total.
3) The end user is unlikely to notice anyway.
4) PC users who have extra horsepower to spare can just crank up the resolution, anti-aliasing, etc. to make use of their newer hardware.

As more and more PC games are released first on consoles this issue becomes more noticeable; we will probably see another fairly big jump in visuals when the next generation of consoles hits the market.
The main thing that seems quite restricted in console-to-PC ports these days is the use of graphics memory: texture resolutions are often awfully low (BioWare did at least release a proper high-resolution texture pack for DA2, but most developers don't do that).
I don't suffer from insanity, I'm enjoying every minute of it.
The voices in my head may not be real, but they have some good ideas!
While we are here I would like to point one thing out: many games where people say 'oh, it's a port' in fact AREN'T ports. The PC version is developed and maintained alongside the console one, very often for testing reasons if nothing else.

Yes, consoles tend to be the 'lead' platform, and due to the lower ROI on PC sales the PC side tends to get less attention, but generally it needs less attention to make it work. (And I say that as the guy at work who spent a couple of weeks sorting out PC issues pre-submission, including fun ones like 'NV need to fix their driver profiles for our game to sanely support SLI', which a new API really needs to expose; leaving it up to the driver is 'meh'.)

The textures thing, however, is right, and trust me, it is just as annoying to the graphics coders as it is to the end user. At work, one of our demands for the next game from rendering to art is for them to author textures at PC levels, and then we'll use the pipeline to spit out the lower-res console versions. (That said, even on our current game the visual difference between console and PC-high is pretty big; I was honestly blown away the first time I saw it running fullscreen maxed out, having been looking at the 360 version mostly up until that point.)
Yes, DX11 features are great and I am glad they are finally here. Tessellation is great for adding detail (actual detail, not faked), and this feature is really needed on characters' faces/heads IMO. I agree with Phantom for once that the meshes for the actual player/enemies need to have their polygon counts increased; the low polygon counts need to be dropped from games' final rendered image completely. With that said, the same applies to movement: when an arm bends you should actually get a real-looking elbow rather than the rubber-band effect.

And yes, I really really wanted Larrabee to take off, as the possibilities were limitless... Here's to hoping for the future.

And no, PC sales aren't dying; they are actually quite healthy.

In fact EA has stated this about PC gaming....

http://www.techspot.com/news/42755-ea-the-pc-is-an-extremely-healthy-platform.html
I think the future, the real future in graphics, lies in the unification of the GPU and the CPU: a General Massively Parallel Processing Unit. This would comfortably pave the way for what is, IMHO, the only real way forward in graphics: physics-based lighting.

The lines are already quite blurred between the GPU and the CPU. Uncomfortably so. I don't know enough about hardware to know whether replacing CPUs with current GPUs is achievable. But I do know that we would need a new way to program GPUs to achieve that, one that is more flexible and powerful - in other words, more low-level. If that were to materialise, I think it could well be in the form of a new DX version. After all, the current version of DirectX has something quite close to that already in the form of DirectCompute.
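For reference, the DirectCompute path mentioned above boils down to a dispatch along these lines; a minimal D3D11-flavoured C++ sketch, assuming the device context, compute shader, SRV and UAV have already been created, with error handling omitted.

[code]
#include <d3d11.h>

// Kick off a compute shader over a 1024x1024 grid. The shader, its input SRV
// and output UAV are assumed to exist already; creating them is the usual
// D3D11 resource boilerplate.
void RunComputePass(ID3D11DeviceContext* context,
                    ID3D11ComputeShader* shader,
                    ID3D11ShaderResourceView* input,
                    ID3D11UnorderedAccessView* output)
{
    context->CSSetShader(shader, nullptr, 0);
    context->CSSetShaderResources(0, 1, &input);

    UINT initialCounts = 0;
    context->CSSetUnorderedAccessViews(0, 1, &output, &initialCounts);

    // Assuming the shader declares [numthreads(8, 8, 1)], this covers 1024x1024 items.
    context->Dispatch(1024 / 8, 1024 / 8, 1);

    // Unbind so the UAV can be consumed as an input elsewhere.
    ID3D11UnorderedAccessView* nullUAV = nullptr;
    context->CSSetUnorderedAccessViews(0, 1, &nullUAV, &initialCounts);
    context->CSSetShader(nullptr, nullptr, 0);
}
[/code]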

On the other hand, as Larrabee was once meant to, perhaps the CPU will replace the GPU as the GMPPU. In that case, we will certainly kiss DX goodbye and wave hello to, emm, C++?

Either way, whether CPUs replace GPUs or GPUs replace CPUs, it doesn't fit in with what AMD envisions.

EDIT: Just got me wondering something nuts. Could one theoretically do work and visualise it on a monitor using only a power source, a graphics card, a mobo, maybe a hard drive, and appropriate software? :o
It's not that PC sales are necessarily shrinking terribly in terms of numbers -- it's more that console sales have grown by huge bounds in the past 15 years or so. That same link says it straight out -- console sales account for 72% of EA's revenue, and I'd be willing to bet the remaining 28% isn't just PC sales but other revenue streams like iPhone/Android sales and MMO subscriptions. And the PC is a platform where a publisher stands to make perhaps twice as much per sale, since no platform license fees are assessed and there is less manufacturing involved (as the article also states, PC gaming *retail* is markedly down, but services like Steam are thriving). So: a platform which is twice as profitable per sale has only a fifth of the total revenue and requires doubling the input effort (and I'd say "doubling" fairly conservatively).

I really want things to be more programmable, and I was as much a Larrabee fan as anyone -- heck, I'd buy one today for a reasonable price, even if it was a lackluster GPU -- but the fundamental issue with the PC ecosystem is that it spans too broad a range to make the necessary effort of optimizing for even a subset of the most popular configurations worthwhile. Creating "minimal" abstractions is really the best we can realistically hope for. We'll get there, to be sure, but it's going to take time, and it's never going to be as thin as some (perhaps even most) will want it to be.


The problem with any sort of complete merging of the GPU and CPU into one core (which isn't what an APU is, as that still keeps its x64 and ALU cores separate) is one of workload and work dispatch.

The GPU is good at what it does because it is a deep, high-latency pipeline which executes multiple concurrent threads in lock step. It executes in wavefronts/warps of threads set up in such a way as to hide the latency of memory requests which aren't in cache. It serves highly parallel workloads well; however, as soon as non-coherent branching or scattering enters the equation you can kiss the performance goodbye, as the architecture wastes time and resources on unneeded work and on poorly localised writes.

CPUs, on the other hand, are great at scattered branching tasks but suffer when executing workloads where frequent uncached trips to memory are required, as there is no real ability to hide the latency and do more 'useful' work the way the GPU can. At best, out-of-order architectures let you hide some of the latency by issuing the request early, but it will still hurt you.

Effectively, any merging of the two is going to result in more 'CPU-like' cores rather than GPU-like cores, as it is easier to 'emulate' the GPU workload (lots of threads doing the same thing) via the CPU method than the other way around, which would require the GPU to look at every running thread and try to regroup alike workloads as best it can. Of course, without some form of hardware to reschedule threads to hide latency, you have the CPU problem all over again of waiting for memory (which is pretty damned slow).
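A toy C++ illustration of that branching point: on a lock-step warp/wavefront machine a divergent branch effectively costs both paths for every lane, which the second function below mimics by evaluating both sides and selecting per element; the warp width of 32 is just an assumed figure, not tied to any specific hardware.

[code]
#include <cstddef>

// How a CPU-style scalar loop handles a data-dependent branch: each element
// pays only for the path it actually takes.
void ScalarBranch(const float* in, float* out, std::size_t n)
{
    for (std::size_t i = 0; i < n; ++i)
        out[i] = (in[i] > 0.0f) ? in[i] * 2.0f : in[i] * in[i];
}

// Roughly what a warp of 32 lock-stepped GPU threads does when its lanes
// disagree on the branch: both sides get evaluated and each lane keeps the
// result it wanted, so the arithmetic cost is paid twice whenever it diverges.
void WarpLikeBranch(const float* in, float* out, std::size_t n)
{
    const std::size_t warp = 32; // assumed warp/wavefront width
    for (std::size_t base = 0; base < n; base += warp)
    {
        for (std::size_t lane = 0; lane < warp && base + lane < n; ++lane)
        {
            std::size_t i = base + lane;
            float taken    = in[i] * 2.0f;    // "if" path, executed for every lane
            float notTaken = in[i] * in[i];   // "else" path, also executed for every lane
            out[i] = (in[i] > 0.0f) ? taken : notTaken;
        }
    }
}
[/code]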

Maybe the people at AMD and Intel will come up with a way to do it, but given that AMD have been talking 'Fusion' for about five years now and have only just got around to doing it, I'm not pinning my hopes on them 'solving' this problem any time soon, never mind the problem of how you get all this stuff down the memory bus...
The next wave that will finally replace consoles is, or will be, smartphones. In the next few years you will have decent power and the ability to hook the phone up to your TV and game with wireless input devices connected to it. So yeah, IMO consoles will eventually be a very small market, if not replaced entirely by devices such as the Nintendo DSi, the Sony PSP and smartphones, while PC gaming will still be around.

That is my prediction; I'm not sure of the time frame, but ten years would be my guess.

Also, with Nvidia's Maxwell GPUs coming out in 2013 they will have an ARM-based CPU onboard, which makes for some very interesting avenues...

http://www.bit-tech....ll-to-directx/1

Comments....
[quote]It seems pretty amazing, then, that while PC games often look better than their console equivalents, they still don't beat console graphics into the ground. According to AMD, this could potentially change if PC games developers were able to program PC hardware directly at a low level, rather than having to go through an API, such as DirectX.[/quote]
Ok, so the argument goes like this:
** Consoles have worse hardware, but can program the device at a low level, resulting in better bang for your buck.
** PC is stuck having to go through DX's abstractions, which adds unnecessary overhead.

Both these points are true, but the thing that makes it seem like nonsense to me is that the low-down, close-to-the-metal API on the Xbox 360, which lets us get awesome performance out of the GPU, is... DX 9 and a half.
It's DirectX, with some of the layers peeled back. You can do your own VRAM allocations, you can create resources yourself, you've got access to some of the API source and can inline your API calls, you've got access to command buffers, you can take ownership of individual GPU registers controlling things like blend states, and you've got amazing debugging and performance tools compared to the PC... but you still do all of these things through the DirectX API!

This means the argument is a bit of a red herring. The problem isn't DirectX itself; the problem is the PC-specific implementations of DirectX that are lacking these low-level features.

The above argument is basically saying that DirectX 9.5 games can achieve better performance than DirectX 9 games... which is true... but also seems like a fairly obvious statement...
[quote]I been saying DX is a dog for years, all the DX nut jobs, no its fast your doing something wrong… Bah eat crow…[/quote]
Wow. Way to start a nice level-headed discussion... Attacking fanbois just makes you look like a fanboi from a different camp... Don't do that.

