
DX may be dead before long...


http://www.bit-tech.net/hardware/graphics/2011/03/16/farewell-to-directx/1

Comments....

Mine is:
I've been saying DX is a dog for years, and to all the DX nut jobs who said "no, it's fast, you're doing something wrong"... bah, eat crow.

Anyway, I am for whatever gets us the best IQ and FPS on the hardware gamers spend their hard-earned money on.

*sigh*

DX might be a 'dog', and it's a known fact that it has overhead, but it still remains the best we've got right now. Would a lower-overhead API be nice? Sure.

The problem with that piece is twofold.

Firstly, with regard to DX11 performance, the small fact they fail to mention is that multi-threaded rendering isn't helping because neither AMD nor NV has drivers in a state to allow it. NV has something, but they basically say 'yeah, give us a core', which you then can't use for your game. AMD doesn't even have that. Everyone is seeing a performance loss when going MT with DX11 right now, so for an IHV to come out and say 'DX11 has too much overhead' when they don't have drivers that expose it properly... well... lulz? (Current sitrep: AMD/NV blame MS, MS blames AMD/NV.)
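For anyone who hasn't poked at it, the multi-threaded path in question is D3D11's deferred context / command list model. A minimal sketch of the intended flow, assuming you already have a device and immediate context (error handling omitted, names illustrative):

    #include <d3d11.h>

    // Record work on a worker thread via a deferred context, then play it
    // back on the immediate context from the main render thread.
    void RecordAndSubmit(ID3D11Device* device, ID3D11DeviceContext* immediateContext)
    {
        // Ask whether the driver supports command lists natively; if
        // DriverCommandLists is FALSE the runtime emulates them, which is
        // the performance-loss case described above.
        D3D11_FEATURE_DATA_THREADING caps = {};
        device->CheckFeatureSupport(D3D11_FEATURE_THREADING, &caps, sizeof(caps));

        // Record commands on a deferred context (this can live on any thread).
        ID3D11DeviceContext* deferred = nullptr;
        device->CreateDeferredContext(0, &deferred);
        // ... issue state changes / draw calls on 'deferred' here ...
        ID3D11CommandList* commandList = nullptr;
        deferred->FinishCommandList(FALSE, &commandList);

        // Playback has to happen on the immediate context.
        immediateContext->ExecuteCommandList(commandList, FALSE);

        commandList->Release();
        deferred->Release();
    }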

Secondly, it goes on to say 'look what they can do on consoles, clearly CTM is the answer!' (paraphrased). The problem with this is that consoles are known hardware, yet it takes YEARS for the best to be extracted from that KNOWN hardware. Graphics cards, meanwhile, tend to change every 18 to 24 months.

In fact, HD5 to HD6 for AMD, which was about 12 months if memory serves, went from VLIW5 to VLIW4 under the hood. Effectively it lost the ability to co-issue certain instructions, which means that if you'd spent a year hand-tuning code to that hardware, BAM! that work is wasted and you have to start again.

And that's just one generation; if we assume there are, at any given time, four generations of cards from both vendors in the wild, that is effectively eight pieces of target hardware with differing internal details, all of which you'd have to deal with yourself (and that's without including Intel in the mix, a problem which will only get worse when you factor in AMD's APUs).

A matter made worse if the CTM API can't be agreed between NV, AMD and Intel; suddenly you might end up supporting N APIs, each with their own issues, across X pieces of target hardware.

The great irony of that piece is that it complains about 'games looking the same', but in the situation they outline, cutting-edge games would just continue to license existing engines, making that problem WORSE. Or AAA games will drop PC support as 'not worth the bother', which isn't as unlikely as you might think given how hard it can be to convince the higher-ups to drop DX9 support in favor of DX11 for a game coming out in 24+ months.

Basically all that will happen is that either a new API will appear to fill this need or MS will adapt DX to it.

Yes, we'd love an API with lower overhead; however, at the same time you need something that can deal with the scope of hardware out there.

Until someone comes up with such an API or DX adapts to this need DX11 is the best we've got.

edit;
And the change of APIs is nothing new.

First it was custom APIs, then OpenGL held the crown, now DX holds it, so it's only natural that at some point this will change.

Aside from the issues Phantom pointed out, the article actually does a fairly good job discussing the pros/cons of APIs like DirectX vs going Close To Metal (CTM hereafter).

The issue is really one of scale -- first, there's the trend that PC gaming has recently been a shrinking market. A hit PC game might sell N units, but a hit console game probably sells around 10x that. So the equation you end up with is that roughly 10% of your sales come from a platform that represents 8 "primary" GPU configurations (4 generations from 2 vendors, as the article suggests) -- not to mention various speed grades and other non-architecture differences that have to be accounted for -- versus 90% of your sales coming from a set of 2 platforms representing exactly 2 GPU configurations (I'm ignoring the Wii here, mostly because it doesn't compete in the same space as enthusiast PCs, PS3s and 360s), with precise knowledge of their speed and capability. In other words, you get perhaps 9x the return for approximately 1/4 the effort -- that's a potential return-on-investment factor of around 36x, for those following the math at home. Now consider that, even on the closed platform of consoles, going CTM isn't a no-brainer decision. Many games still use the APIs for the lighter lifting, and non-bleeding-edge games eschew CTM entirely.

This clearly indicates that simply exposing the metal on the PC isn't going to change the situation for the better. We have to regain some ROI in order to make the option more appealing to devs. We can't do much to increase the size of the PC gaming market directly (the "return" portion of the equation), so we have to attack the investment part -- and to do that, we have to reduce the number of platforms we have to pour our efforts into. Our options there are to abstract some things away behind layers of software APIs (OpenGL, Direct3D, higher-level libraries or engines), or to reduce the hardware differences (or at least the differences in programming model, as the x86 and its derivatives did long ago -- today an x86 is, internally, a RISC-like processor with hundreds of registers). There's really no win here for innovation, BTW; we're just buying a more flexible software-level API at the expense of imposing a stricter hardware-level API. That is, perhaps, the way to go, but it's important to at least acknowledge what it is, because going down that path has the potential to stifle hardware innovation in the future, just as OpenGL and Direct3D have stifled software innovation in some ways.

Programmability is probably the key to regaining some ground on the investment front. Today, APIs like OpenCL or CUDA are seen as somewhat imposing -- you have to fit your algorithm to their programming model -- but ultimately I think they will lead toward a loose hardware standardization of sorts, paving the way for the thinner APIs of the future. Larrabee, for all its shortcomings as hardware, will also prove to have been a very important R&D effort -- it instigated research on how to write software rasterization against very wide SIMD units and across a massively parallel system, and it identified new, key instructions applicable not only to graphics but to many parallel computations. I don't know if something like texture sampling will ever be as programmable as a modern shader (though perhaps as configurable as the fixed-function hardware of yore), at least efficiently; but I think we'd be in a pretty good place if texture sampling were the least programmable hardware on the GPU.

My take is that Direct3D and other APIs will evolve into thinner APIs, or perhaps be supplanted by one, but we will never be able to do away with API abstractions on the PC platform. The PC ecosystem is simply too diverse, and always will be, to support a CTM programming model. I think it's fairly likely that GPUs will eventually go the same route the x86 took -- meaning that the programming model GPUs expose will bear little resemblance to the hardware internals. In some sense this is already true, but current models expose too much detail of what goes on (which I realize is the opposite of what the article claims devs want) -- for example, explicit caching and private/shared data areas. There's much work to be done by GPU vendors in developing a model which elides such details while still allowing those resources to be employed efficiently behind the scenes, and much work to be done by them, along with API vendors, to define APIs which help the hardware use its resources most efficiently without being so explicit about it.

DX isn't needed as such anymore. At one point it was everything and the kitchen sink.

But as GPUs developed, everything moved to shader programming. The abstractions introduced by the old pipeline have become redundant in favor of shaders for everything.

The old view of mega-frameworks that run on the full spectrum of hardware has also become mostly irrelevant. While technically possible, it has little market impact. OGL, by trying to abstract the platform completely, isn't doing developers many favors; an emulated pipeline doesn't help them, especially when those details are hidden by the driver.


A very viable model today is something like OGL-ES. Don't build frameworks, libraries and everything else; those are best left to users or engine developers. Be a ubiquitous, simple, hardware-friendly API aimed at the tasks GPUs actually perform.

This change in focus would be a good thing. Do one thing, do it well, and think of the hardware. Developers will quickly adjust, engines will be able to do less data juggling, and there will be less bloat that isn't needed at that level. After all, nobody programs DX anymore. They use Unreal, UnrealScript + custom shaders. Or Unity. Or Flash. Or ...

DX (and perhaps OGL) is in the same position as WinAPI. There are maybe two companies that actually still need to know it; the rest build on top of third-party engines (not graphics pipeline frameworks) that add actual value to the problems that need to be solved.

The more things change, the more they stay the same. Technology advances typically outstrip the ability to use them.

I began programming when hex/octal/binary was required, stuffing bytes into memory. Then good assemblers helped the effort. Then interpreters (e.g., Basic) were the rage; they provided a layer between the software and the hardware. Compilers sped things up even more so programmers could take further advantage of improvements in technology, often targeting specific hardware improvements. As mentioned in the comments to that article, the game world was rampant with "You must have a super-duper-whammy sound card to run this game."

APIs (OpenGL, DirectX, etc.) appeared on the scene to help integrate entire applications, providing an even more generalized isolation layer between software and hardware. Although less so now than a few years ago, a common solution to problems is still "I updated my driver and now it works." However, one big benefit of those APIs was to force manufacturers to design to common interface standards. Without that impetus, shaders would be, if not a lost cause, in the same category as hardware-specific drivers.

Dropping the API? Pfui. AMD would certainly like it if the world reverted to "You must have an AMD super-duper-whammy graphics card to run this game." But, in agreement with phantom, I don't think for a second that'll happen tomorrow or the next day. DirectX and OpenGL will be around until something better comes along, adapting (as they have in the past) to take advantage of technology.

"Something better" will certainly come along, I'm sure. But, for the time being, I'll take my "chances" investing time in DirectX and OpenGL programming.



After all, nobody programs DX anymore. They use Unreal, UnrealScript + custom shaders. Or Unity. Or Flash. Or ...


That sounds a lot like the same fallacious argument that you frequently find made about managed languages: "nobody programs in C/C++ anymore, these days it's all Java/.NET/Ruby/insert-flavour-of-the-month-here/etc". Where it falls apart is that Unreal, Unity or whatever ultimately have to be written too, and these need an API to be written to. All that you're doing is moving up one level of abstraction, but the very same messy details still exist underneath it all (and still have to be handled by a programmer somewhere - and believe me that we are truly f--ked if we ever produce a generation of programmers that knows nothing about the low-level stuff. Who is gonna write the device drivers of tomorrow? That's what I'd like to know.)

I believe the crux of the matter is that there has been no real innovation on the hardware front in almost 10 years: sometime around 2002/2003/2004 hardware suddenly stopped being crap (a generalisation, of course; there's still plenty of crap hardware about) and since then it's just been a continuous ramp-up of performance. After all, even a geometry shader is something that can be handled by the CPU; where's all the new paradigm-shifting stuff? The last real break from the past we had was programmable shaders.

On account of this it's natural for some measure of uneasiness to settle in: APIs are offering nothing really new, so why do we need them? This is gonna last until the next big jump forward, which might be real-time accelerated raytracing or might be something else; I'm not a prophet and don't know. But think of the current situation as akin to the gigahertz arms race of yore in CPU land and it makes a little more sense.


That sounds a lot like the same fallacious argument that you frequently find made about managed languages: "nobody programs in C/C++ anymore, these days it's all Java/.NET/Ruby/insert-flavour-of-the-month-here/etc". Where it falls apart is that Unreal, Unity or whatever ultimately have to be written too, and these need an API to be written to.
Which is why I wrote, a sentence later, that only two companies in the world still need access to that low a level.

and believe me that we are truly f--ked if we ever produce a generation of programmers that knows nothing about the low-level stuff. Who is gonna write the device drivers of tomorrow? That's what I'd like to know.)

And that is the same fallacious argument made by low level people. Without ever higher levels of abstraction, we'd still be weaving memory, rotating drums and punching cards.

The same was said about functions (see 'goto considered harmful', a blasphemous article of its time whose true meaning has been completely forgotten and is misunderstood each time it's quoted). They were bloated, abstract, inefficient, limiting. The same was said about everything else since. But for every guru, there are a million people earning their bread without such skills.

Where are the low-level people who program by encoding FSMs? For the rare cases when they're needed, they are still around. But most developers today don't even know what an FSM is.

To low level people, DX is a pain and inefficient (so is OGL or any other API). To high-level people DX (et al) is too low level.

That was the crux of my argument: most of the functionality provided by DX, which was once used by the majority of developers, isn't needed at that level anymore. It's needed either higher or lower. It's more about fragmentation, like everywhere else: rather than one huge monolithic framework, you use small one-thing-done-well libraries.


To low level people, DX is a pain and inefficient (so is OGL or any other API). To high-level people DX (et al) is too low level.

So true.

I am a reformed low level person. At this point, I consider my previous self foolishly arrogant and woefully mistaken about the important things in game development.

I was:
arrogant, because I insisted that I could do it better.
mistaken, because I thought that small edge was worth so much of my time.

When is the last time anyone looked at Crysis and said, "If these graphics were 5% better, this game would be way more fun"?

Games all look similar because most games are using the same damn engine, or a select few... e.g. Unreal3, Source, etc.

And the other problem is that most programmers, or whoever is coding these shaders, are just copying and pasting crap they found on the net or what other engines are using.


Games all look similar because most games are using the same damn engine, or a select few... e.g. Unreal3, Source, etc.

And the other problem is that most programmers, or whoever is coding these shaders, are just copying and pasting crap they found on the net or what other engines are using.



Utter balls.

Shaders look the way they do because those in control of the art want them to look that way. Art direction is the reason games look alike; it's nothing to do with engines or shaders. The choice of artwork has more impact than anything else.

And as I work as a graphics programmer I know how this works from a professional point of view.


That was the crux of my argument: most of the functionality provided by DX, which was once used by the majority of developers, isn't needed at that level anymore.



The thing is, if you take DX11, the core functionality it provides is:

  • device creation
  • resource creation / uploading
  • resource binding
  • dispatch / drawing

Aside from that there are a few debug layers, which are needed to make any sense of what's going on at times, and... well... that's about it. Any replacement API is still going to need that set of functionality, as that is the functionality which gets hit the most.
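To put that list in concrete terms, here's a heavily trimmed sketch of those four steps through D3D11 itself; shader compilation, the input layout, the swap chain and all error handling are omitted, and the data is only illustrative:

    #include <d3d11.h>

    void MinimalD3D11Flow()
    {
        // 1. Device creation (also hands back the immediate context).
        ID3D11Device* device = nullptr;
        ID3D11DeviceContext* context = nullptr;
        D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                          nullptr, 0, D3D11_SDK_VERSION,
                          &device, nullptr, &context);

        // 2. Resource creation / uploading: a tiny vertex buffer.
        float triangle[] = {  0.0f,  0.5f, 0.0f,
                              0.5f, -0.5f, 0.0f,
                             -0.5f, -0.5f, 0.0f };
        D3D11_BUFFER_DESC desc = {};
        desc.ByteWidth = sizeof(triangle);
        desc.Usage = D3D11_USAGE_DEFAULT;
        desc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
        D3D11_SUBRESOURCE_DATA initData = { triangle, 0, 0 };
        ID3D11Buffer* vb = nullptr;
        device->CreateBuffer(&desc, &initData, &vb);

        // 3. Resource binding (shaders and render targets assumed set elsewhere).
        UINT stride = 3 * sizeof(float), offset = 0;
        context->IASetVertexBuffers(0, 1, &vb, &stride, &offset);
        context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

        // 4. Dispatch / drawing.
        context->Draw(3, 0);

        vb->Release();
        context->Release();
        device->Release();
    }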

The only real difference between PC and console in this regard is that, because a console is a unified system, we can combine the third and fourth steps by building command buffers ourselves and pointing the GPU at them... although various bits of the 360 and PS3 APIs are used to hook this up.

Having low latency on that last part is pretty much what devs are after, and I agree with them on that... you still need some form of API, however.

On another note: I think you (and many others) underestimate the amount of low-level graphics API work going on, certainly in AAA games, which are the ones that really want this type of thing. Even if you license something like UE3, chances are your dev team is still going to have to pull it apart to add functionality you require, and that'll touch on platform-specific graphics APIs.


Utter balls.

Shaders look the way they do because those in control of the art want them to look that way. Art direction is the reason games look alike; it's nothing to do with engines or shaders. The choice of artwork has more impact than anything else.

And as I work as a graphics programmer I know how this works from a professional point of view.

NO, I don't agree. So you're saying, for example, that normal mapping is different from one engine's code to the next?

Utter balls back at you.

Now artwork, say Borderlands vs. Battlefield 3, yes there is a difference.


On another note: I think you (and many others) underestimate the amount of low-level graphics API work going on, certainly in AAA games, which are the ones that really want this type of thing.

Far from it. It's the level of abstraction provided.

Graphics accelerators were originally just that. Over time, they transitioned into heavy-duty compute units, the fabled 1000-core CPUs. As such, they lost their specialization in just pushing pixels.

Does it still make sense for a graphics API to hide how much memory you have available? For some, yes, but in the same way UE hides it. Meanwhile, a dedicated developer wanting to truly push things would probably embrace the ability to query this memory.
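For what it's worth, about the closest you can get today is the static adapter description from DXGI. A rough sketch, with the caveat that it only reports totals at startup, not live usage, which rather proves the point about how little is exposed:

    #include <dxgi.h>
    #include <cstdio>

    void PrintAdapterMemory()
    {
        IDXGIFactory* factory = nullptr;
        if (FAILED(CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory)))
            return;

        IDXGIAdapter* adapter = nullptr;
        if (SUCCEEDED(factory->EnumAdapters(0, &adapter)))
        {
            DXGI_ADAPTER_DESC desc = {};
            adapter->GetDesc(&desc);
            // DedicatedVideoMemory is a fixed total for the adapter, nothing more.
            std::printf("Dedicated video memory: %zu MB\n",
                        desc.DedicatedVideoMemory / (1024 * 1024));
            adapter->Release();
        }
        factory->Release();
    }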

It's a delicate balance between exposing too much or too little of the underlying hardware. Java was mentioned, and it demonstrates this well: in Java you don't have a clue how much memory you have, and as soon as you hit the limit the answer is always the same: buy more memory. Which is crap when dealing with upward-unbounded algorithms that could use terabytes if available. Sometimes you do want the basic layout of the hardware.

DX has actually gone this route. From full-blown graphics API to basically a shader wrapper.

I also think the yellow-journalism style of the article makes it a fluff piece. Without understanding any of the internals, calling for DX's death is empty, as with all similar declarations. My guess would be that MS realizes that developers, especially those doing any kind of advanced work, prefer a simple, low-level hardware abstraction rather than an enterprise graphics framework. So instead of providing everything in one piece, the future might mean even further streamlining, perhaps exposing everything as compute shaders, on top of which the old graphics pipeline would be built, either by using some third-party engine or by using the API directly. DX11 is already a large step in this direction.

And there is still the extra stuff: is COM still needed at this level? Does it really make sense that the very same API needs to cater to everything, from the WPF and Visual Studio front ends right down to real-time graphical extravaganzas?

It's more about the soundness of the API. Does presenting everything as an OO model and abstracting things the way they are now still make sense, or is there a different design better suited? I don't think anyone, at MS or elsewhere, would for a second consider exposing raw hardware again, except perhaps on consoles; but DX isn't abandoning the desktop, it's simply too tied into it. Then again, mobile, consoles and all the dedicated hardware inside walled gardens do solve many hardware fragmentation problems. At the same time, MS has never in its history tried to counter natural fragmentation; the company thrives on this anti-Apple concept and is probably one of the few that has managed to be productive in such an ecosystem. So as far as MS+DX goes, death simply doesn't add up; a different style of API, however, does.

The stuff about how things look, or the various broad performance discussions, however, doesn't really matter here.


NO, I don't agree. So you're saying, for example, that normal mapping is different from one engine's code to the next?

Utter balls back at you.

Now artwork, say Borderlands vs. Battlefield 3, yes there is a difference.


Well, yes, it can be, depending on how the normals are encoded, or what is being combined and how to produce the effect. So while normal mapping itself might be easy to do, that doesn't support your assertion that people are just 'copying and pasting crap'; that is just lazy thinking, trying to make out that the coders are being lazy while utterly ignoring the reality, which is that while standard techniques might well exist online, they are generally recreated and recoded directly in order to produce a more optimal output.

So, if you want to have a discussion, feel free to stick to talking about things you know rather than trotting out the usual 'game coders are lazy' routine; it's bad enough when gamers do it, never mind people who should be vaguely aware of the workload required to produce these effects.


NO, I don't agree. So you're saying, for example, that normal mapping is different from one engine's code to the next?


Sure!

Are you using tangent-space or object-space normals? Are your normal maps encoded such that you need to reconstruct the value on a given axis (DXN, or the DXT5 trick, to avoid channel cross-talk)? Where does this normal go once you've computed it? Into an NdotL equation? Should I normalize it first? Wait, are they normalized in my asset pipeline? Did I just bloat my shader unnecessarily? Am I using the normal to offset the color texture? Should I fetch another normal to fix it up? What if I'm deferred? Do I need to pack this into a G-buffer? Are we producing a special texture for a post effect? Should I bias it into RGB space? Is that camera space, or world space?

Yep, pretty standard normal mapping. And in the end, it's a bumpy texture.
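Just to make one of those branches concrete, here's roughly what the reconstruct-Z path for a two-channel (DXN-style) normal map boils down to, written as plain C++ for illustration rather than HLSL; the names are purely illustrative:

    #include <algorithm>
    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Only X and Y are stored in the texture, remapped from [0,1] to [-1,1];
    // Z has to be reconstructed from the unit-length constraint, which is the
    // extra shader work a two-channel encoding buys you.
    Vec3 DecodeTwoChannelNormal(float r, float g)
    {
        Vec3 n;
        n.x = r * 2.0f - 1.0f;
        n.y = g * 2.0f - 1.0f;
        n.z = std::sqrt(std::max(0.0f, 1.0f - n.x * n.x - n.y * n.y));
        return n;  // still needs a tangent-basis transform before lighting
    }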


Well, yes, it can be, depending on how the normals are encoded, or what is being combined and how to produce the effect. So while normal mapping itself might be easy to do, that doesn't support your assertion that people are just 'copying and pasting crap'; that is just lazy thinking, trying to make out that the coders are being lazy while utterly ignoring the reality, which is that while standard techniques might well exist online, they are generally recreated and recoded directly in order to produce a more optimal output.

So, if you want to have a discussion, feel free to stick to talking about things you know rather than trotting out the usual 'game coders are lazy' routine; it's bad enough when gamers do it, never mind people who should be vaguely aware of the workload required to produce these effects.

Chat away, and get off my back; nothing has changed with you in years... still in my face... And now I disregard your posts, as the time spent reading them is a waste.

Something else that hasn't really been mentioned so far in this thread or the article is the law of diminishing returns. Sure, my graphics card might be 10x more powerful... but what good is that power if it is adding 10x more polygons to a scene that already looks pretty good?

Looking over screenshots of DirectX 11 tessellation in that recent Aliens game, I found it somewhat difficult to distinguish between the lower-res model and the tessellated one. It's not that we aren't using that extra graphics horsepower - it's that it isn't easily visible.

On the subject of normal mapping: There was a recent presentation done by Crytek about various methods of texture compression (including normals). For their entire art chain, they are attempting to do 16 bits per channel, including normal maps. The difference was subtle, but it was there. Now here's the thing - what's a bigger difference - going from no normal map to an 8-bit normal map or going from an 8-bit normal map to a 16-bit normal map?
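As a very rough back-of-envelope illustration of why that 8-to-16-bit jump is so subtle (ignoring block compression entirely, which makes the 8-bit case worse in practice): the worst-case per-channel quantisation step is already tiny at 8 bits and shrinks by a factor of about 257 at 16.

    #include <cstdio>

    // Worst-case quantisation step for one normal component stored over
    // [-1,1] at 8 vs. 16 bits per channel.
    int main()
    {
        const float step8  = 2.0f / 255.0f;    // ~0.0078
        const float step16 = 2.0f / 65535.0f;  // ~0.00003
        std::printf("8-bit step:  %f\n", step8);
        std::printf("16-bit step: %f\n", step16);
        return 0;
    }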


Something else that hasn't really been mentioned so far in this thread or the article is the law of diminishing returns. Sure, my graphics card might be 10x more powerful... but what good is that power if it is adding 10x more polygons to a scene that already looks pretty good?

Looking over screenshots of DirectX 11 tessellation in that recent Aliens game, I found it somewhat difficult to distinguish between the lower-res model and the tessellated one. It's not that we aren't using that extra graphics horsepower - it's that it isn't easily visible.

On the subject of normal mapping: There was a recent presentation done by Crytek about various methods of texture compression (including normals). For their entire art chain, they are attempting to do 16 bits per channel, including normal maps. The difference was subtle, but it was there. Now here's the thing - what's a bigger difference - going from no normal map to an 8-bit normal map or going from an 8-bit normal map to a 16-bit normal map?


Agreed... I have tried this myself and saw very little IQ improvement. Maybe it would matter more if you were zoomed in on a surface? I have no idea, but I am guessing that it would.


Agreed... I have tried this myself and saw very little IQ improvement. Maybe it would matter more if you were zoomed in on a surface? I have no idea, but I am guessing that it would.

True enough, but with most games, how often do you have time to sit there and zoom in on something? Most of the time you are busy fighting aliens/nazis/zombies/soldiers/robots/ninjas. If a graphical improvement isn't easily noticeable, does it really make that much of a difference?

(I'm just playing the devil's advocate here. I'm all for having better graphics, but there does eventually come a point where throwing more hardware at the problem doesn't have as great an impact as the art direction).

The thing is, as scenes get closer to 'real' it is the subtle things that can make the difference and give the eye/brain small cues as to what is going on.

Take tessellation, for example; its major use is doing what the various normal mapping schemes can't, namely adjusting the silhouette of an object. Normal mapping is all well and good for faking effects, but a displaced, tessellated object is going to look better, assuming the art/scene is done right of course. (It is also useful for adding extra detail to things like terrain.)

Another effect would be subsurface scattering; done correctly, this is a subtle effect on the skin/surface of certain objects which gives them a more lifelike feel. It shouldn't jump out and grab you like normal mapping or shadows did when they first appeared, but the overall effect should be an improvement.

Also, the argument about DX vs. a new API isn't so much about the graphical output as about the CPU overhead, and about coming up with ways to have the GPU do more work on its own. Larrabee would have been a nice step in that direction: having a GPU re-feed and retrigger itself, removing the burden from the CPU. So, while lower CPU costs for drawing would allow us to draw more, at the same time it would simplify things (being able to throw a chunk of memory at the driver which was basically [buffer id][buffer id][buffer id][shader id][shader id][count], submitted via one draw call, would be nice) and give more CPU time back to gameplay to improve things like AI and non-SIMD/batch-friendly physics (which will hopefully get shifted off to the GPU part of an APU in the future).
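Something like the following, purely hypothetical, packet is the kind of thing that bracketed blob describes; none of these names exist in any real API, it's just the idea of one flat, self-contained draw description the GPU could walk on its own:

    #include <cstdint>

    // Hypothetical flat draw packet: everything needed for one draw in one blob.
    struct DrawPacket
    {
        uint32_t vertexBufferId;
        uint32_t indexBufferId;
        uint32_t constantBufferId;
        uint32_t vertexShaderId;
        uint32_t pixelShaderId;
        uint32_t indexCount;
    };

    // The wished-for call: hand the driver an array of packets in one go.
    // void SubmitDrawPackets(const DrawPacket* packets, uint32_t count);  // hypothetical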

Edit;

When it comes to the subtle things, right now my biggest issue with characters is their eyes. Take Mass Effect 2 on the 360: the characters look great, move great (much props to the mo-cap and animation guys) and feel quite real, so much so it was scary at times... right up until you look into their eyes, and then it's "oh... yeah...". Something about the lighting on them still isn't right; it's subtle but noticeable, more so when everything else is getting closer to 'real'. (It's probably the combination of a lack of subsurface scattering, diffuse reflection of local light sources, and micro-movement of the various components of the eye that causes the issue.)


Something else that hasn't really been mentioned so far in this thread or the article is the law of diminishing returns. Sure, my graphics card might be 10x more powerful... but what good is that power if it is adding 10x more polygons to a scene that already looks pretty good?

Looking over screenshots of DirectX 11 tessellation in that recent Aliens game, I found it somewhat difficult to distinguish between the lower-res model and the tessellated one. It's not that we aren't using that extra graphics horsepower - it's that it isn't easily visible.

On the subject of normal mapping: There was a recent presentation done by Crytek about various methods of texture compression (including normals). For their entire art chain, they are attempting to do 16 bits per channel, including normal maps. The difference was subtle, but it was there. Now here's the thing - what's a bigger difference - going from no normal map to an 8-bit normal map or going from an 8-bit normal map to a 16-bit normal map?


I think the primary issue here is: why add pretty much anything when:

1) Console hardware can't handle it.
2) PC sales are a relatively small portion of the total.
3) The end user is unlikely to notice anyway.
4) PC users who have extra horsepower to spare can just crank up the resolution, anti-aliasing, etc. to make use of their newer hardware.

As more and more PC games are released first on consoles this issue becomes more noticeable; we will probably see another fairly big jump in visuals when the next generation of consoles hits the market.
The main thing that seems quite restricted in console-to-PC ports these days is the use of graphics memory; texture resolutions are often awfully low (Bioware did release a proper high-resolution texture pack for DA2 at least, but most developers don't do that).

While we're here I'd like to point one thing out: many games where people say 'oh, it's a port' in fact AREN'T ports. The PC version is developed and maintained alongside the console one, very often for testing reasons if nothing else.

Yes, consoles tend to be the 'lead' platform, and due to the lower ROI on PC sales the PC side tends to get less attention, but it generally needs less attention to make it work. (And I say that as the guy at work who spent a couple of weeks sorting out PC issues pre-sub, including fun ones like 'NV need to fix their driver profiles for our game to sanely support SLI', which a new API really needs to expose; leaving it up to the driver is 'meh'.)

The textures thing, however, is right, and trust me, it's just as annoying to the graphics coders as it is to the end user. At work, one of rendering's demands to art for the next game is that they author textures at PC levels and then we'll use the pipeline to spit out the lower-res console versions. (That said, even on our current game the visual difference between console and PC high is pretty big; I was honestly blown away the first time I saw it running fullscreen maxed out, having mostly been looking at the 360 version up until that point.)

Yes, DX11 features are great and I'm glad they are finally here. Tessellation is great for adding detail (actual detail, not faked) and this feature is really needed on characters' faces/heads IMO. I agree with phantom for once that the meshes for the actual players/enemies need to have their polygon counts increased; low polygon counts need to be dropped from games' final rendered images completely. With that said, I'd like to see it in the movements too, meaning when an arm bends you actually get a real-looking elbow vs. the rubber-band effect.

And yes, I really really wanted Larrabee to take off, as the possibilities were limitless... Here's to hoping for the future.

And no, PC sales aren't dying; they're actually quite healthy.

In fact EA has stated this about PC gaming....

http://www.techspot.com/news/42755-ea-the-pc-is-an-extremely-healthy-platform.html
