
DX maybe dead before long....



#21 Moe   Crossbones+   -  Reputation: 1248


Posted 18 March 2011 - 05:57 PM

Agreed... I have tried this myself and saw very little improvement in image quality (IQ). Maybe it would matter more if you were zoomed in on a surface? I have no idea, but I am guessing that it would.

True enough, but with most games, how often do you have time to sit there and zoom in on something? Most of the time you are busy fighting aliens/nazis/zombies/soldiers/robots/ninjas. If a graphical improvement isn't easily noticeable, does it really make that much of a difference?

(I'm just playing the devil's advocate here. I'm all for having better graphics, but there does eventually come a point where throwing more hardware at the problem doesn't have as great an impact as the art direction).


#22 phantom   Moderators   -  Reputation: 6800


Posted 18 March 2011 - 06:42 PM

The thing is, as scenes become closer to 'real' it is the subtle things which make the difference and give the eye/brain small cues as to what is going on.

Take tessellation, for example; its major use is doing what the various normal mapping schemes can't: adjusting the silhouette of an object. Normal mapping is all well and good for faking effects, but a displaced, tessellated object is going to look better, assuming the art/scene is done right of course. (It is also useful for adding extra detail to things like terrain.)
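
To make that concrete, here is a minimal, hypothetical C++/Direct3D 11 sketch of binding the extra pipeline stages for a displaced, tessellated draw (the hull/domain shaders and the height map view are assumed to have been created elsewhere; the function name and parameters are made up, and this is illustrative rather than production code):

#include <d3d11.h>

// Bind the tessellation stages and issue a patch-based draw.
// The hull shader picks the tessellation factors; the domain shader
// displaces the generated vertices using the height map.
void DrawDisplacedPatch(ID3D11DeviceContext* ctx,
                        ID3D11HullShader* hullShader,
                        ID3D11DomainShader* domainShader,
                        ID3D11ShaderResourceView* heightMapSRV,
                        UINT indexCount)
{
    // Patches instead of plain triangles, so the tessellator has control points to work with.
    ctx->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_3_CONTROL_POINT_PATCHLIST);
    ctx->HSSetShader(hullShader, nullptr, 0);
    ctx->DSSetShader(domainShader, nullptr, 0);
    ctx->DSSetShaderResources(0, 1, &heightMapSRV);
    ctx->DrawIndexed(indexCount, 0, 0);
}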

Another effect would be subsurface scattering; this is, if done correctly, a subtle effect on the skin/surface of certain objects which provides a more lifelike feel. It shouldn't jump out and grab you like normal mapping or shadows did when they first appeared, but the overall effect should be an improvement.

Also, the argument about DX vs a new API isn't so much about the graphical output but about the CPU overhead, and about coming up with ways to have the GPU do more work on its own. Larrabee would have been a nice step in that direction: having a GPU re-feed and retrigger itself, removing the burden from the CPU. So, while lower CPU costs for drawing would allow us to draw more, at the same time it would simplify things (being able to throw a chunk of memory at the driver which was basically [buffer id][buffer id][buffer id][shader id][shader id][count], for example, via one draw call would be nice) and give more CPU time back to gameplay to improve things like AI and non-SIMD/batch-friendly physics (which will hopefully get shifted off to the GPU part of an APU in the future).
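
As a rough sketch of that 'chunk of memory' idea (a purely hypothetical record layout and entry point; nothing like this exists in any shipping DirectX version):

#include <cstdint>
#include <cstddef>

// Hypothetical packed command stream: the game fills a flat array of small
// records and hands the whole block to the driver in one call, instead of
// paying per-draw API overhead.
struct PackedDraw {
    uint32_t vertexBufferId;
    uint32_t indexBufferId;
    uint32_t constantBufferId;
    uint32_t vertexShaderId;
    uint32_t pixelShaderId;
    uint32_t indexCount;
};

// Imagined single submission point; a real driver would translate each
// record into GPU commands without further per-draw CPU involvement.
void SubmitDrawStream(const PackedDraw* draws, std::size_t count)
{
    for (std::size_t i = 0; i < count; ++i) {
        // translate draws[i] into hardware commands here
        (void)draws[i];
    }
}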

Edit:

When it comes to the subtle things, right now my biggest issue with characters is their eyes. Take Mass Effect 2 on the 360; the characters look great, move great (much props to the mo-cap and animation guys) and feel quite real, so much so it was scary at times... right up until you look into their eyes and then it's "oh... yeah...". Something about the lighting on them still isn't right; it's subtle but noticeable, more so when everything else is getting closer to 'real'. (It's probably the combination of a lack of subsurface scattering, diffuse reflection of local light sources and micro-movement of the various components of the eye which is causing the issue.)

#23 SimonForsman   Crossbones+   -  Reputation: 5804


Posted 18 March 2011 - 06:57 PM

Something else that hasn't really been mentioned so far in this thread or the article is the law of diminishing returns. Sure, my graphics card might be 10x more powerful... but what good is that power if it is adding 10x more polygons to a scene that already looks pretty good?

Looking over screenshots of DirectX 11 tessellation in that recent Aliens game, I found it somewhat difficult to distinguish between the lower-res model and the tessellated one. It's not that we aren't using that extra graphics horsepower - it's that it isn't easily visible.

On the subject of normal mapping: There was a recent presentation done by Crytek about various methods of texture compression (including normals). For their entire art chain, they are attempting to do 16 bits per channel, including normal maps. The difference was subtle, but it was there. Now here's the thing - what's a bigger difference - going from no normal map to an 8-bit normal map or going from an 8-bit normal map to a 16-bit normal map?
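
(As a back-of-the-envelope illustration of why the second step is so much smaller, here is a toy C++ snippet of my own, nothing to do with Crytek's actual pipeline, that quantises one normal component at both precisions:)

#include <cmath>
#include <cstdio>

// Snap a component in [-1, 1] to an n-bit grid and back, the way a normal
// map texture stores it.
float quantise(float x, int bits)
{
    const float levels = float((1 << bits) - 1);
    const float t = std::round((x * 0.5f + 0.5f) * levels);
    return (t / levels) * 2.0f - 1.0f;
}

int main()
{
    const float n = 0.137f; // an arbitrary normal component
    std::printf("original: %f\n", n);
    std::printf("8-bit:    %f (step ~%g)\n", quantise(n, 8),  2.0 / 255.0);
    std::printf("16-bit:   %f (step ~%g)\n", quantise(n, 16), 2.0 / 65535.0);
    return 0;
}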


I think the primary issue here is: why add pretty much anything when:

1) Console hardware can't handle it.
2) PC sales are a relatively small portion of the total.
3) The end user is unlikely to notice anyway.
4) PC users who have extra horsepower to spare can just crank up the resolution, anti-aliasing, etc. to make use of their newer hardware.

As more and more PC games are released first on consoles this issue becomes more noticeable; we will probably see another fairly big jump in visuals when the next generation of consoles hits the market.
The main thing that seems quite restricted on console->PC ports these days is the use of graphics memory: texture resolutions are often awfully low (BioWare did at least release a proper high-resolution texture pack for DA2, but most developers don't do that).
I don't suffer from insanity, I'm enjoying every minute of it.
The voices in my head may not be real, but they have some good ideas!

#24 phantom   Moderators   -  Reputation: 6800


Posted 18 March 2011 - 07:26 PM

While we are here I would like to point one thing out; many games where people say 'oh, it's a port' in fact AREN'T ports. The PC version is developed and maintained alongside the console one, very often for testing reasons if nothing else.

Yes, consoles tend to be the 'lead' platform, and due to lower ROI on PC sales the PC side tends to get less attention, but generally it also needs less attention to make it work. (And I say that as the guy at work who spent a couple of weeks sorting out PC issues pre-sub, including fun ones like 'NV need to fix their driver profiles for our game to sanely support SLI'; that's something a new API really needs to expose, because leaving it up to the driver is 'meh'.)

The textures thing, however, is right, and trust me, it is just as annoying to the graphics coders as it is to the end user. At work, one of rendering's demands to art for the next game is for them to author textures at PC levels; we'll then use the pipeline to spit out the lower-res console versions. (That said, even on our current game the visual difference between console and PC high is pretty big; I was honestly blown away the first time I saw it running fullscreen maxed out, having been looking at the 360 version mostly up until that point.)

#25 MARS_999   Members   -  Reputation: 1239


Posted 18 March 2011 - 07:40 PM

Yes, DX11 features are great and I am glad they are finally here. Tessellation is great for adding detail (actual detail, not faked), and this feature is really needed on characters' faces/heads IMO. I agree with Phantom for once that the meshes for the actual player/enemies need to have their polygon count increased. The low polygon counts need to be dropped from games' final image rendering completely. With that said, I would also like better movement, meaning when an arm bends you actually get a real-looking elbow vs. the rubber-band effect.

And yes, I really really wanted Larrabee to take off, as the possibilities were limitless... Here's to hoping for the future.

And no, PC sales aren't dying; they are actually quite healthy.

In fact EA has stated this about PC gaming....

http://www.techspot.com/news/42755-ea-the-pc-is-an-extremely-healthy-platform.html

#26 forsandifs   Members   -  Reputation: 154


Posted 18 March 2011 - 08:07 PM

I think the future, the real future in graphics, lies with the unification of the GPU and the CPU: a General Massively Parallel Processing Unit. This would comfortably pave the way for what is IMHO the only real way forward in graphics: physics-based lighting.

The lines are already quite blurred between the GPU and the CPU. Uncomfortably so. I don't know enough about hardware to know whether current GPUs could replace CPUs. But I do know that we would need a new way to program GPUs to achieve that: one that is more flexible and powerful, or in other words more low-level. If that were to materialise, I think it could well be in the form of a new DX version. After all, the current version of DirectX already has something quite close to that in the form of DirectCompute.

On the other hand, as Larrabee was once meant to, perhaps the CPU will replace the GPU as the GMPPU. In that case, we will certainly kiss DX goodbye and wave hello to, emm, C++?

Either way, whether CPUs replace GPUs or GPUs replace CPUs, it doesn't fit in with what AMD envisions.

EDIT: That just got me wondering something nuts. Could one theoretically do work and visualise it on a monitor using only a power source, a graphics card, a mobo, maybe a hard drive, and appropriate software? :o

#27 Ravyne   Crossbones+   -  Reputation: 6778


Posted 18 March 2011 - 08:12 PM

It's not that PC sales are, necessarily, shrinking terribly in terms of numbers -- it's more the fact that console sales have grown by huge bounds in the past 15 years or so. That same link says it straight out -- console sales account for 72% of EA's revenue, and I'd be willing to bet that the remaining 28% isn't just PC sales, but other revenue streams like iPhone/Android sales and MMO subscriptions. And the PC is a platform where a publisher stands to make perhaps twice as much per sale, since no platform license fees are assessed and there is less manufacturing involved (as the article also states, PC gaming *retail* is markedly down, but services like Steam are thriving). So, a platform which is twice as profitable per sale has only 1/5th the total revenue and requires doubling the input effort (and I'd say "doubling" fairly conservatively).

I really want things to be more programmable, and I was as much a Larrabee fan as anyone -- heck, I'd buy one today for a reasonable price, even if it was a lackluster GPU -- but the fundamental issue with the PC ecosystem is that it spans too broad a range to make the necessary effort of optimizing for even a subset of the most popular configurations worthwhile. Creating "minimal" abstractions is really the best we can realistically hope for. We'll get there, to be sure, but it's going to take time, and it's never going to be as thin as some (perhaps even most) will want it to be.

#28 phantom   Moderators   -  Reputation: 6800


Posted 18 March 2011 - 08:23 PM

The problem with any sort of complete merging of the GPU and CPU into one core (which isn't what an APU is, as that still has its x64 and ALU cores separate) is one of workload and work dispatch.

The GPU is good at what it does because it is a deep, high-latency pipeline which executes multiple concurrent threads in lock step. It executes in wavefronts/warps of threads set up in such a way as to hide the latency of memory requests which aren't in cache. It serves highly parallel workloads well; however, as soon as non-coherent branching or scattering enters the equation you can kiss the performance goodbye, as the architecture wastes time and resources on unneeded work and on performing poorly localised writes.
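
A toy illustration of that lock-step point (my own sketch, not how any real GPU is implemented): when the lanes of a 'warp' disagree on a branch, both sides effectively get executed with the non-participating lanes masked off, so you pay for the work of both paths:

#include <array>
#include <cstdio>

int main()
{
    constexpr int WarpSize = 8;
    std::array<float, WarpSize> x = {1, -2, 3, -4, 5, -6, 7, -8};
    std::array<float, WarpSize> out{};

    // The shader "if (x > 0) out = x * 2; else out = -x;" run warp-style:
    // pass 1 executes the 'then' side for the lanes that took it...
    for (int lane = 0; lane < WarpSize; ++lane)
        if (x[lane] > 0) out[lane] = x[lane] * 2.0f;
    // ...pass 2 executes the 'else' side for the remaining lanes.
    // Every lane has paid the cost of both passes.
    for (int lane = 0; lane < WarpSize; ++lane)
        if (!(x[lane] > 0)) out[lane] = -x[lane];

    for (float v : out) std::printf("%g ", v);
    std::printf("\n");
    return 0;
}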

CPUs, on the other hand, are great at scattered branching tasks but suffer when executing workloads where frequent uncached trips to memory are required, as there is no real ability to hide the latency and do more 'useful' work the way the GPU can. At best, out-of-order archs let you hide some of the latency by placing the request early, but it'll still hurt you.

Effectively, any merging of the two is going to result in more 'CPU-like' cores rather than GPU-like cores, as it is easier to 'emulate' the GPU workload (lots of threads doing the same thing) via the CPU method than the other way around, which would require the GPU to look at every running thread and try to regroup alike workloads as best it can. Of course, without some form of hardware to reschedule threads to hide latency you have the CPU problem all over again of waiting for memory (which is pretty damned slow).

Maybe the people at AMD and Intel will come up with a way to do it, but given AMD have been talking 'Fusion' for about 5 years now and have only just got around to doing it, I'm not pinning my hopes on them 'solving' this problem any time soon, never mind the problem of how you get all this stuff down the memory bus...

#29 MARS_999   Members   -  Reputation: 1239


Posted 18 March 2011 - 08:29 PM

The next wave that will finally replace consoles is/will be smartphones. In the next few years you will have decent power and the ability to hook that phone to your TV and game with wireless input devices connected to the phone. So yeah, IMO consoles will eventually be a very small market, if not replaced entirely by items such as the Nintendo DSi, Sony PSP and smartphones, and PC gaming will still be around.

That is my prediction; not sure of the time frame, but 10 years would be my guess.

Also, with Nvidia's Maxwell GPUs coming out in 2013 they will have an ARM-based CPU onboard, which opens up some very interesting avenues....

#30 Hodgman   Moderators   -  Reputation: 27837


Posted 18 March 2011 - 10:26 PM

http://www.bit-tech....ll-to-directx/1

Comments....

It seems pretty amazing, then, that while PC games often look better than their console equivalents, they still don't beat console graphics into the ground.
according to AMD, this could potentially change if PC games developers were able to program PC hardware directly at a low-level, rather than having to go through an API, such as DirectX.

Ok - so the argument goes like this:
** Consoles have worse hardware, but can program the device at a low-level, resulting in better bang for your buck.
** PC is stuck having to go through DX's abstractions, which adds unnecessary overhead.

Both these points are true, but the thing that makes it seem like nonsense to me is that the low-down, close-to-the-metal API on the Xbox 360, which lets us get awesome performance out of the GPU, is.... DX 9 and a half.
It's DirectX with some of the layers peeled back. You can do your own VRAM allocations, you can create resources yourself, you've got access to some of the API source and can inline your API calls, you've got access to command buffers, you can take ownership of individual GPU registers controlling things like blend states, and you've got amazing debugging and performance tools compared to PC.... but you still do all of these things through the DirectX API!

This means the argument is a bit of a red herring. The problem isn't DirectX itself, the problem is the PC-specific implementations of DirectX that are lacking these low-level features.

The above argument is basically saying that DirectX9.5 games can achieve better performance than DirectX9 games... which is true... but also seems like a fairly obvious statement...

I been saying DX is a dog for years, all the DX nut jobs, no its fast your doing something wrong… Bah eat crow…

Wow. Way to start a nice level-headed discussion... Attacking fanbois just makes you look like a fanboi from a different camp... Don't do that.

#31 MARS_999   Members   -  Reputation: 1239


Posted 18 March 2011 - 11:51 PM


Wow. Way to start a nice level-headed discussion... Attacking fanbois just makes you look like a fanboi from a different camp... Don't do that.


And if you read to the end of my post:

"Anyway I am for whatever gets us the best IQ and FPS on the hardware gamers spend their hard earned money for."

I could care less about the API, but what I am sick of is people saying DX is better when in fact it's not, so if you are sticking up for DX then you are just as much of a fanboy....

And that should have been apparent from me saying I wish Larrabee had taken off...

You want to take another shot....

#32 Hodgman   Moderators   -  Reputation: 27837


Posted 18 March 2011 - 11:59 PM

ok


[edit]Wow, BIG thumbs up for using the S word below!! I'm sorry for going off topic by getting stuck on the nut job comment too...[/edit]

#33 MARS_999   Members   -  Reputation: 1239


Posted 19 March 2011 - 12:03 AM

ok


Sorry about jumping on you... Not usually what I like to do. :)

#34 MJP   Moderators   -  Reputation: 10243


Posted 19 March 2011 - 01:30 AM

I could care less about the API, but what I am sick of is people saying DX is better when in fact it's not, so if you are sticking up for DX then you are just as much of a fanboy....


Better than "what" exactly? What is a practical and realistic alternative to DX or OpenGL on PC? That's the crux of the issue here. If you know anything about how these PC API's work then you know why they're slower than the equivalent on consoles, but you also know that they're 100% necessary.

#35 forsandifs   Members   -  Reputation: 154


Posted 19 March 2011 - 06:29 AM

Better than "what" exactly? What is a practical and realistic alternative to DX or OpenGL on PC? That's the crux of the issue here. If you know anything about how these PC API's work then you know why they're slower than the equivalent on consoles, but you also know that they're 100% necessary.


I think there is both truth and over pessimism in that post.

It is true that we will always need a way to talk to the hardware on our PC. A way to feed the right information to the GPU at the right time in order to get it to calculate and display what we want when we want. In this case the chain goes like DirectX -> Drivers -> Hardware. The drivers are really part of the operating system of the machine. So I don't think that layer can be removed. And therefore neither can the DirectX layer.

Having said that, I don't think it's true that this chain has to be slow. The drivers layer and the DirectX layer simply need to be better. Given the drivers are already about as low-level as you can get, the biggest improvement in that chain can probably come in the form of making DirectX more low-level (as has been stated before in this thread). And I don't think there is any reason why that can't happen.

Programmable shading was a step in the right direction. Now we need full programming access to the GPU in a way that facilitates parallel programming. I think we need to get rid of the idea of treating the GPU as a way to process individual triangles through the vertex and pixel shaders. Instead, we need to start thinking about the GPU as a way to perform a task thousands of times at the same time, and thus get that task done thousands of times faster. This will mean that the G in GPU no longer means Graphics, but instead means General. It also means that filling the ~10^6 pixels on a monitor will become *a* purpose of the GPU instead of *the* purpose.
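
The dispatch side of DirectCompute already has roughly that shape. A minimal, hypothetical C++ sketch (the compute shader and UAV are assumed to have been created elsewhere, and the function/parameter names are made up):

#include <d3d11.h>

// Run one compute shader thread per element, 256 threads per group.
// The same kernel is stamped out across the whole data set in one call.
void RunGeneralWork(ID3D11DeviceContext* ctx,
                    ID3D11ComputeShader* kernel,
                    ID3D11UnorderedAccessView* output,
                    UINT elementCount)
{
    ctx->CSSetShader(kernel, nullptr, 0);
    ctx->CSSetUnorderedAccessViews(0, 1, &output, nullptr);
    const UINT groups = (elementCount + 255) / 256; // round up to whole groups
    ctx->Dispatch(groups, 1, 1);
}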

Either that or we get rid of the GPU and go from ~10 core CPUs to ~1000 core CPUs. I would prefer that tbh, would be more efficient I think. Duplicating information on the GPU that I already have much better organised on the CPU is a pain for a start. Let the GPU die.

#36 wanderingbort   Members   -  Reputation: 136


Posted 19 March 2011 - 08:10 AM

The problem isn't DirectX itself, the problem is the PC-specific implementations of DirectX that are lacking these low-level features.


And the question really should be, is that low level even feasible on a PC?

Let's just talk draw calls. The 360 and the PS3 GPUs read the same memory that the CPU writes. The API level is just blasting the draw call and state information out of the CPU caches to main RAM. On a PC, that data has to be flushed, then DMA'd (which depends on hardware that is neither the GPU nor the CPU) to the GPU's memory, where it can be processed. It may not seem like much, but it's a substantial amount of work that makes this all happen reliably.

Even if the PC version can be stripped down to the barest essentials, you would still see large discrepancies in the number of things you could draw without instancing. The PC drawcall limits haven't really gone up in years despite huge performance improvements in GPU chipsets. This is because that type of data transfer (small block DMA) is not what the rest of the PC architecture has been made to handle.

Either that or we get rid of the GPU and go from ~10 core CPUs to ~1000 core CPUs. I would prefer that tbh, would be more efficient I think. Duplicating information on the GPU that I already have much better organised on the CPU is a pain for a start. Let the GPU die.


I actually like the separation; it makes programming GPUs far easier. Modern GPUs are a nice example of constrained and focused parallelism. Even neophyte programmers can write shaders that run massively parallel and never deadlock the GPU. Most veteran programmers I've met who are working on general massively parallel systems still fight with concurrency issues and deadlocks.

Sure, you could constrain your massively parallel system on these new 1000 core CPUs such that you have the same protection, but then you have just re-invented shaders on less focused and probably slightly slower hardware.

With great power comes great responsibility. My experience with seasoned programmers and dual-core machines has led me to be skeptical that these massively parallel systems will actually be "general" in practice.

My prediction: 95% of the engineers who would use such a thing would subscribe to an API/paradigm as restrictive as the current shader model on GPUs. The other 5% will release marginally better titles at best and go stark raving mad at worst.

#37 Hodgman   Moderators   -  Reputation: 27837


Posted 19 March 2011 - 10:09 AM

And the question really should be, is that low level even feasible on a PC?

Yeah, I don't think so, not without putting a whole lot of extra work onto the game developers... but that's what the ATI rep in the article is suggesting. Some companies with enough manpower/money might be able to take advantage of such a low-level (portability nightmare) API, though...

The 360 and the PS3 GPUs read the same memory that the CPU writes. The API level is just blasting the draw call and state information out of the CPU caches to main RAM. On a PC, that data has to be flushed, then DMA'd (which depends on hardware that is neither the GPU nor the CPU) to the GPU's memory, where it can be processed. It may not seem like much, but it's a substantial amount of work that makes this all happen reliably.

Well, the PS3's GPU can read from the CPU-local system RAM, but it does have two main banks (one local to GPU and one local to CPU), just like a PC does.
The CPU can write directly into VRAM, but sometimes it's faster to have an SPU DMA from system RAM into its LS and then DMA from there into VRAM (thanks, Sony). However, on PS3 the CPU actually writes draw calls to system RAM, and the GPU reads them directly from system RAM (not from VRAM).
But yes, it's these kinds of details that DX (thankfully) protects us from ;)

#38 wanderingbort   Members   -  Reputation: 136


Posted 19 March 2011 - 10:59 AM

Well, the PS3...


I did gloss over a lot of the gory details (hard to believe in a post as long as it was). The whole main RAM/VRAM distinction on the PS3 is the kind of thing that makes direct hardware access a pain. Yet it is still less restrictive than a modern PC architecture.

Perhaps this is the solution to this thread... all those who are in favor of ditching DX for "low level goodness", spend a dev cycle on a PS3. You will learn to love your DirectX/PC safety mittens roughly the first time you read about or independently re-invent an idea like using the GPU to DMA memory between RAM banks.

#39 forsandifs   Members   -  Reputation: 154


Posted 19 March 2011 - 11:32 AM

Perhaps this is the solution to this thread... all those who are in favor of ditching DX for "low level goodness", spend a dev cycle on a PS3. You will learn to love your DirectX/PC safety mittens roughly the first time you read about or independently re-invent an idea like using the GPU to DMA memory between RAM banks.


I am in favour of low-level goodness, but I am not in favour of ditching DirectX (unless we also ditch the GPU altogether). I simply think DirectX should be evolved: just as it evolved from fixed function to programmable shading, it should now evolve to more flexible, efficient, and powerful programmable shading, which if I'm not mistaken is a synonym for lower-level.

#40 phantom   Moderators   -  Reputation: 6800


Posted 19 March 2011 - 11:51 AM

Either that or we get rid of the GPU and go from ~10 core CPUs to ~1000 core CPUs. I would prefer that tbh, would be more efficient I think. Duplicating information on the GPU that I already have much better organised on the CPU is a pain for a start. Let the GPU die.


The problem with this is that any significant move in one direction by either the CPU or GPU will hurt what the CPU or GPU is good at, as I previously indicated.

A 1000-core CPU still isn't going to match a GPU, simply because of how the modern GPU works vs. how a CPU works when it comes to processing workloads.

SPUs are a pretty good indication of what a more 'general' GPU would look like: horrible branch performance but great at data processing. However, a CPU made up of cores like that is going to fail horribly at the more general tasks which a CPU has to do.

From a hardware point of view, IMO an ideal setup would be:
  • x64/ARM cores for normal processing
  • SPU/ALU array on the same die as the above for stream-style processing
  • DX11+ class GPU for graphics processing
That way you get a decent mix of the various processing requirements you have in a game; I'm kinda hoping the next consoles pick up this sort of mix of hardware tbh.






