

DX may be dead before long....



#1 MARS_999   Members   -  Reputation: 1289


Posted 18 March 2011 - 09:49 AM

http://www.bit-tech.net/hardware/graphics/2011/03/16/farewell-to-directx/1

Comments....

Mine is,
I've been saying DX is a dog for years, and all the DX nut jobs said, "no, it's fast, you're doing something wrong"… Bah, eat crow…

Anyway, I'm for whatever gets us the best IQ and FPS on the hardware gamers spend their hard-earned money on.




#2 phantom   Moderators   -  Reputation: 7411


Posted 18 March 2011 - 10:54 AM

*sigh*

DX might be a 'dog', and it's a known fact that it has overhead, but it still remains the best we've got right now. Would a lower-overhead API be nice? Sure.

The problem with that piece is twofold.

Firstly, with regard to DX11 performance, the small fact they fail to mention is that multi-threaded rendering isn't helping because neither AMD nor NV has drivers in a state to allow it. NV has something, but they basically say 'yeah, give us a core', which you then can't use for your game. AMD doesn't even have that. Everyone is seeing a performance loss when going MT with DX11 right now, so for an IHV to come out and say 'DX11 has too much overhead' when they don't have drivers to expose it properly... well... lulz? (Current sitrep: AMD/NV blame MS; MS blames AMD/NV.)
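For anyone who hasn't touched the DX11 multi-threading API being described here, the pattern looks roughly like this (a minimal sketch; the function names and pass structure are illustrative, not from any particular engine). The idea is that command lists get recorded on worker threads and only replayed on the render thread; the complaint above is that, without proper driver support, that recording never actually comes off the critical path.

```cpp
// Rough sketch of DX11 multi-threaded rendering with deferred contexts.
// 'BuildShadowPass' is a hypothetical worker-thread job; error handling is minimal.
#include <d3d11.h>

// Worker thread: record commands into a deferred context, then bake them
// into a command list that can be handed back to the render thread.
ID3D11CommandList* BuildShadowPass(ID3D11Device* device)
{
    ID3D11DeviceContext* deferred = nullptr;
    if (FAILED(device->CreateDeferredContext(0, &deferred)))
        return nullptr;

    // ... set state, bind resources and issue Draw() calls on 'deferred' ...

    ID3D11CommandList* commands = nullptr;
    deferred->FinishCommandList(FALSE, &commands);
    deferred->Release();
    return commands;
}

// Render thread: replay the recorded work on the immediate context.
void SubmitPass(ID3D11DeviceContext* immediate, ID3D11CommandList* commands)
{
    if (commands)
    {
        immediate->ExecuteCommandList(commands, FALSE);
        commands->Release();
    }
}
```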

Secondly, it goes on to say 'look what they can do on consoles, clearly CTM is the answer!' (paraphrased). The problem with this is that consoles are known hardware, yet it takes YEARS for the best to be extracted from that KNOWN hardware. Graphics cards, meanwhile, tend to change up every 18 to 24 months.

In fact, HD5 to HD6 for AMD, which was about 12 months if memory serves, went from VLIW5 to VLIW4 under the hood. Effectively it lost the ability to co-issue certain instructions, which means that if you'd spent a year hand-tuning code to that hardware, BAM! the work was wasted and you have to start again.

And that's just one generation; if we assume that at any given time there are 4 generations of cards from both vendors in the wild, that is effectively 8 pieces of target hardware with differing internal details, all of which you'd have to deal with yourself (and that's without even including Intel, a problem which will only get worse when you factor AMD's APUs into the mix).

A matter made worse if a CTM API can't be agreed between NV, AMD and Intel; suddenly you might end up supporting N APIs, each with its own issues, across X pieces of target hardware.

The great irony of that piece is that it complains about 'games looking the same', but in the situation they outline, cutting-edge games will just continue to license existing engines, making the problem WORSE. Or AAA games will drop PC support as 'not worth the bother', which isn't as unlikely as you might think given how hard it can be to convince the higher-ups to drop DX9 support in favor of DX11 for a game coming out in 24+ months.

Basically, all that will happen is that either a new API will appear to fill this need or MS will adapt DX to it.

Yes, we'd love an API with lower overhead; at the same time, however, you need something that can deal with the scope of hardware out there.

Until someone comes up with such an API, or DX adapts to this need, DX11 is the best we've got.

edit;
And the change of APIs is nothing new.

First it was custom APIs, then OpenGL held the crown, now DX holds it, so it's only natural that at some point this will change.

#3 Ravyne   GDNet+   -  Reputation: 7843


Posted 18 March 2011 - 11:39 AM

Aside from the issues Phantom pointed out, the article actually does a fairly good job discussing the pros/cons of APIs like DirectX vs going Close To Metal (CTM hereafter).

The issue is really one of scale -- there's first the trend that PC gaming has recently been a shrinking market. A hit PC game might sell N units, but a hit console game sells probably around 10x that. So the equation you end up with is that roughly 10% of your sales come from a platform that represents 8 "primary" GPU configurations (4 generations from 2 vendors, as the article suggests) -- not to mention various speed grades and other non-architecture differences that have to be accounted for -- versus 90% of your sales coming from a set of 2 platforms representing exactly 2 GPU configurations (I'm ignoring the Wii here, mostly because it doesn't compete in the same space as enthusiast PCs, PS3s and 360s) -- with precise knowledge of speed and capability. In other words, you get perhaps 9x the return for approximately 1/4 of the effort -- that's a potential return-on-investment factor of 36x, for those following the math at home. Now consider that, even on the closed platform of consoles, going CTM isn't a no-brainer decision. Many games still use the APIs for the light lifting, and non-bleeding-edge games eschew CTM entirely.

This clearly indicates that simply exposing the metal on the PC isn't going to change the situation for the better. We have to regain some ROI in order to make the option more appealing to devs. We can't do much to increase the size of the PC gaming market directly (the "return" portion of the equation), so we have to attack the investment part of the equation -- and to do that, we have to reduce the number of platforms we pour our efforts into. Our options there are to abstract some things away behind layers of software APIs (OpenGL, Direct3D, higher-level libraries or engines), or to reduce the hardware differences (or at least the differences in programming model, as x86 and its derivatives did long ago -- today the x86 is, internally, a RISC-like processor with hundreds of registers). There's really no win here for innovation, by the way; we're just buying a more-flexible software-level API at the expense of imposing a stricter hardware-level API. This is, perhaps, the way to go, but it's important to at least acknowledge what it is, because going down that path has the potential to stifle hardware innovation in the future, just as OpenGL and Direct3D have stifled software innovation in some ways.

Programmability is probably the key to regaining some ground on the investment front. Today, APIs like OpenCL or CUDA are seen as somewhat imposing -- you have to fit your algorithm to their programming model -- but ultimately I think they will lead toward a loose hardware standardization of sorts, paving the way for the thinner APIs of the future. Larrabee, for all its shortcomings as hardware, will also prove to have been a very important R&D effort -- it instigated research into how to write software rasterization against very wide SIMD units and across a massively parallel system, and it identified new, key instructions with applicability not only to graphics but to many parallel computations. I don't know if something like texture sampling will ever be as programmable as a modern shader (though perhaps as programmable as the fixed-function shaders of yore), at least efficiently; but I think we'd be in a pretty good place if texture sampling were the least programmable hardware on the GPU.

My take is that Direct3D and other APIs will evolve into a thinner API, or perhaps be supplanted by one, but we will never be able to do away with API abstractions on the PC platform. The PC ecosystem is simply too diverse, and always will be, to support a CTM programming model. I think it's fairly likely that GPUs will eventually go the same route the x86 took -- meaning that the programming model GPUs expose will bear little resemblance to the hardware internals. In some sense this is already true, but current models expose too much detail of what goes on (which I realize is the opposite of what the article claims devs want) -- for example, with explicit caching and private/shared data areas. There's much work to be done by GPU vendors in developing a model which elides such details while still allowing those resources to be employed efficiently behind the scenes, and much work to be done by them, along with API vendors, to define APIs which help the hardware use its resources most efficiently without being so explicit about it.

#4 Antheus   Members   -  Reputation: 2397


Posted 18 March 2011 - 11:41 AM

DX isn't needed as such anymore. At one point it was everything and the kitchen sink.

But as GPUs developed, everything moved to shader programming. The abstractions introduced by the old pipeline have become redundant in favor of shaders for everything.

The old view of mega-frameworks that run on the full spectrum of hardware has also become mostly irrelevant. While technically possible, it has little market impact. OGL, in trying to abstract the platform completely, isn't doing developers many favors; an emulated pipeline doesn't help them, especially when those details are hidden by the driver.


A very viable model today is something like OGL-ES. Don't build frameworks, libraries, and everything else; those are best left to users or engine developers. Be a ubiquitous, simple, hardware-friendly API aimed at the tasks GPUs actually perform.

This change in focus would be a good thing. Do one thing, do it well, and think of the hardware. Developers will quickly adjust, engines will have to do less data juggling, and there will be less bloat that isn't needed at that level. After all, nobody programs DX anymore. They use Unreal, UnrealScript + custom shaders. Or Unity. Or Flash. Or ...

DX (and perhaps OGL) is in the same position as WinAPI: there are two companies that actually still need to know it. The rest build on top of third-party engines (not graphics pipeline frameworks) that add actual value to the problems that need to be solved.

#5 Buckeye   Crossbones+   -  Reputation: 5706


Posted 18 March 2011 - 11:41 AM

The more things change, the more they stay the same. Technology advances typically outstrip the ability to use them.

I began programming when hex/octal/binary was required, stuffing bytes into memory. Then good assemblers helped the effort. Then interpreters (e.g., Basic) were the rage; they provided a layer between the software and the hardware. Compilers sped things up even more, so programmers could take further advantage of improvements in technology, often targeting specific hardware improvements. As mentioned in the comments to that article, the game world was rampant with "You must have a super-duper-whammy sound card to run this game."

APIs (OpenGL, DirectX, etc.) appeared on the scene to help integrate entire applications, providing an even more generalized isolation layer between software and hardware. Although less so now than in the last few years, the common solution to a problem is still "I updated my driver and now it works." However, one big benefit of those APIs was to force manufacturers to design to common interface standards. Without that impetus, shaders would be, if not a lost cause, in the same category as hardware-specific drivers.

Dropping the API? Pfui. AMD would certainly like it if the world reverted to "You must have an AMD super-duper-whammy graphics card to run this game." But, in agreement with phantom, I don't think for a second that'll happen tomorrow or the next day. DirectX and OpenGL will be around until something better comes along, adapting (as they have in the past) to take advantage of technology.

"Something better" will certainly come along, I'm sure. But, for the time being, I'll take my "chances" investing time in DirectX and OpenGL programming.






#6 mhagain   Crossbones+   -  Reputation: 8142


Posted 18 March 2011 - 12:18 PM

After all, nobody programs DX anymore. They use Unreal, UnrealScript + custom shaders. Or Unity. Or Flash. Or ...


That sounds a lot like the same fallacious argument that you frequently find made about managed languages: "nobody programs in C/C++ anymore, these days it's all Java/.NET/Ruby/insert-flavour-of-the-month-here/etc". Where it falls apart is that Unreal, Unity or whatever ultimately have to be written too, and these need an API to be written to. All that you're doing is moving up one level of abstraction, but the very same messy details still exist underneath it all (and still have to be handled by a programmer somewhere - and believe me that we are truly f--ked if we ever produce a generation of programmers that knows nothing about the low-level stuff. Who is gonna write the device drivers of tomorrow? That's what I'd like to know.)

What I believe the crux of the matter to be is that there has been no real innovation on the hardware front in almost 10 years: somewhere around 2002/2003/2004 hardware suddenly stopped being crap (this is a generalisation; of course there's still plenty of crap hardware about), and since then it's just been a continuous ramp-up of performance. After all, even a geometry shader is something that can be handled by the CPU; where's all the new paradigm-shifting stuff? The last real break from the past we had was programmable shaders.

On account of this it's natural for some measure of uneasiness to settle in: the APIs are offering nothing really new, so why do we need them, etc.? This is gonna last until the next big jump forward, which might be real-time accelerated raytracing or might be something else; I'm not a prophet and don't know. But think of the current situation as akin to the gigahertz arms race of yore in CPU land and it makes a little more sense.



#7 wanderingbort   Members   -  Reputation: 136


Posted 18 March 2011 - 12:33 PM

I would pay twice the performance cost for what DirectX11 is offering... in fact, I used to... it was called DirectX9. And I still pay that, because there is still a market for it.

Go tinker with some open source graphics drivers and see how much fun it is to support multiple chipsets. In the meanwhile, all your competitors will release several games that "just work" and are beneath your expectations.

Sounds silly right? You are a magnificent engineer! Surely, you can work up a nice abstraction layer that reduces the amount of work you repeat. Congratulations, you have invented DirectX. Except now, YOU have to maintain it. YOU have to support the next Intel-vidi-ATI card. YOU have to reverse engineer it, because the specs are confidential multi-million-dollar trade secrets.

The economics of this situation are so bad; this article is trash.

Games don't all look similar because of a tech hurdle... they look similar because people will buy that aesthetic.

#8 Antheus   Members   -  Reputation: 2397


Posted 18 March 2011 - 12:52 PM

That sounds a lot like the same fallacious argument that you frequently find made about managed languages: "nobody programs in C/C++ anymore, these days it's all Java/.NET/Ruby/insert-flavour-of-the-month-here/etc". Where it falls apart is that Unreal, Unity or whatever ultimately have to be written too, and these need an API to be written to.

Which is why I wrote, a sentence later, that only two companies in the world still need access to that low level.

and believe me that we are truly f--ked if we ever produce a generation of programmers that knows nothing about the low-level stuff. Who is gonna write the device drivers of tomorrow? That's what I'd like to know.)

And that is the same fallacious argument made by low-level people. Without ever-higher levels of abstraction, we'd still be weaving core memory, rotating drums and punching cards.

The same was said about functions (see 'Goto Considered Harmful', a blasphemous article in its time, whose true meaning has been completely forgotten and is misunderstood each time it's quoted). They were bloated, abstract, inefficient, limiting. The same was said about everything else. But for every guru, there are a million people earning their bread without such skills.

Where are the low-level people who program by encoding FSMs? For the rare cases where they are needed, they are still around. But most developers today don't even know what an FSM is.

To low-level people, DX is a pain and inefficient (so is OGL or any other API). To high-level people, DX (et al.) is too low level.

That was the crux of my argument: most of the functionality provided by DX that was once used by the majority of developers isn't needed at that level anymore. It's either higher or lower. It's more about fragmentation, like everywhere else: rather than one huge monolithic framework, you use small one-thing-done-well libraries.

#9 wanderingbort   Members   -  Reputation: 136


Posted 18 March 2011 - 01:30 PM

To low-level people, DX is a pain and inefficient (so is OGL or any other API). To high-level people, DX (et al.) is too low level.

So true.

I am a reformed low-level person. At this point, I consider my previous self foolishly arrogant and woefully mistaken about the important things in game development.

I was:
arrogant, because I insisted that I could do it better.
mistaken, because I thought that small edge was worth so much of my time.

When was the last time anyone looked at Crysis and said, "If these graphics were 5% better, this game would be way more fun"?

#10 MARS_999   Members   -  Reputation: 1289


Posted 18 March 2011 - 01:39 PM

Games all look similar because most games are using the same damn engine, or a select few... e.g. Unreal 3, Source, etc.

And the other problem is that most programmers, or whoever is coding these shaders, are just copying and pasting crap they found on the net or whatever other engines are using.

#11 phantom   Moderators   -  Reputation: 7411


Posted 18 March 2011 - 02:02 PM

Games all look similar because most games are using the same damn engine, or a select few... e.g. Unreal 3, Source, etc.

And the other problem is that most programmers, or whoever is coding these shaders, are just copying and pasting crap they found on the net or whatever other engines are using.



Utter balls.

Shaders look the way they do because those in control of the art want them to look that way. Art direction is the reason games look alike; it's nothing to do with engines or shaders - the choice of artwork has more impact than anything else.

And as I work as a graphics programmer, I know how this works from a professional point of view.

#12 phantom   Moderators   -  Reputation: 7411


Posted 18 March 2011 - 02:11 PM

That was the crux of my argument: most of the functionality provided by DX that was once used by the majority of developers isn't needed at that level anymore.



The thing is, if you take DX11, the core functionality it provides is:

  • device creation
  • resource creation / uploading
  • resource binding
  • dispatch/drawing
Aside from that there are a few debug layers, which are needed to make any sense of what's going on at times, and... well... that's about it. Any replacement API is still going to need that set of functionality, as that is the functionality which is hit the most.
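To make that list concrete, this is roughly what those operations look like in raw D3D11 (a minimal sketch only: device creation happens once via D3D11CreateDevice, and render-target setup, input layouts, error handling and shader compilation are all omitted; the shaders and vertex data are assumed to exist already).

```cpp
// Minimal sketch of the core D3D11 operations listed above. Step 1 (device
// creation) is a one-off D3D11CreateDevice call elsewhere.
#include <d3d11.h>

void TinyDraw(ID3D11Device* device, ID3D11DeviceContext* ctx,
              const void* vertexData, UINT vertexBytes, UINT stride,
              ID3D11VertexShader* vs, ID3D11PixelShader* ps)
{
    // 2) resource creation / uploading
    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth = vertexBytes;
    desc.Usage     = D3D11_USAGE_IMMUTABLE;
    desc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
    D3D11_SUBRESOURCE_DATA init = { vertexData, 0, 0 };
    ID3D11Buffer* vb = nullptr;
    device->CreateBuffer(&desc, &init, &vb);

    // 3) resource binding (input layout, render targets etc. omitted)
    UINT offset = 0;
    ctx->IASetVertexBuffers(0, 1, &vb, &stride, &offset);
    ctx->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    ctx->VSSetShader(vs, nullptr, 0);
    ctx->PSSetShader(ps, nullptr, 0);

    // 4) dispatch / drawing
    ctx->Draw(vertexBytes / stride, 0);

    vb->Release();
}
```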

The only real difference between PC and console in this regard is that, because a console is a unified system, we can combine the 3rd and 4th steps by building command buffers ourselves and pointing the GPU at them... although various bits of the 360 and PS3 APIs are used to hook this up.

Having low latency on that last part is pretty much what devs are after, and I agree with them on that... we still need some form of API, however.

On another note: I think you (and many others) underestimate the amount of low-level graphics API work going on, certainly in AAA games, which are the ones that really want this type of thing. Even if you license something like UE3, chances are your dev team is still going to have to pull it apart to add the functionality you require, and that'll touch on platform-specific graphics APIs.



#13 MARS_999   Members   -  Reputation: 1289


Posted 18 March 2011 - 02:31 PM


Shaders look the way they do because those in control of the art want them to look that way. Art direction is the reason games look alike; it's nothing to do with engines or shaders - the choice of artwork has more impact than anything else.


No, I don't agree. So you're saying, for example, that normal mapping is different from one engine's code to the next?

Utter balls back at you.

Now artwork, say Borderlands vs. Battlefield 3, yes, there is a difference.

#14 Antheus   Members   -  Reputation: 2397


Posted 18 March 2011 - 02:38 PM

On another note: I think you (and many others) underestimate the amount of low-level graphics API work going on, certainly in AAA games, which are the ones that really want this type of thing.

Far from it. It's about the level of abstraction provided.

Graphics accelerators were originally just that. Over time, they transitioned into heavy-duty compute units, the fabled 1000-core CPUs. As such, they lost their specialization in just pushing pixels.

Does it still make sense for a graphics API to hide how much memory you have available? For some, yes - but in the same way UE hides it. Meanwhile, a dedicated developer wanting to truly push things would probably embrace the ability to query that memory.

It's a delicate balance between exposing too much or too little of the underlying concepts. Java was mentioned, and it demonstrates this well. In Java, you don't have a clue how much memory you have; as soon as you hit the limit, the answer is always the same - buy more memory. Which is crap when dealing with upward-unbounded algorithms that could use terabytes if available. Sometimes you do want to know the basic layout of the hardware.

DX has actually gone this route, from a full-blown graphics API to basically a shader wrapper.

I also think that the yellow-journalism style of the article makes it a fluff piece. Without understanding any of the internals, calling for DX's death is empty, as are all similar declarations. My guess would be that MS realizes that developers, especially those doing any kind of advanced work, prefer a simple, low-level hardware abstraction to an enterprise graphics framework. So instead of providing everything in one piece, the future might mean even further streamlining, perhaps exposing everything as compute shaders, on top of which the old graphics pipeline would be built - either by using some third-party engine or by using the API directly. DX11 is already a large step in this direction.
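As a rough illustration of that "everything as compute shaders" direction, the DX11 compute path already looks like this (a sketch only; the compute shader and its output UAV are assumed to have been created elsewhere, and the thread-group size is assumed to match the shader's [numthreads] declaration):

```cpp
// Sketch of a DX11 compute dispatch: no fixed-function pipeline involved,
// just a shader, its output view and a Dispatch call. 'cs' and 'outputUAV'
// are assumed to have been created elsewhere.
#include <d3d11.h>

void RunComputePass(ID3D11DeviceContext* ctx,
                    ID3D11ComputeShader* cs,
                    ID3D11UnorderedAccessView* outputUAV,
                    UINT width, UINT height)
{
    ctx->CSSetShader(cs, nullptr, 0);
    ctx->CSSetUnorderedAccessViews(0, 1, &outputUAV, nullptr);

    // Assumes the shader declares [numthreads(8, 8, 1)].
    ctx->Dispatch((width + 7) / 8, (height + 7) / 8, 1);

    // Unbind the UAV so the resource can be consumed elsewhere afterwards.
    ID3D11UnorderedAccessView* nullUAV = nullptr;
    ctx->CSSetUnorderedAccessViews(0, 1, &nullUAV, nullptr);
}
```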

And there is still the extra stuff - is COM still needed at this level? Does it really make sense that the very same API has to cater to everything, from WPF and the Visual Studio frontend right down to real-time graphical extravaganzas?

It's more about the soundness of the API. Does presenting everything as an OO model, abstracted the way things are now, still make sense, or is a different design better suited? I don't think anyone, at MS or elsewhere, would for a second consider exposing raw hardware again. Except perhaps on consoles - but DX isn't abandoning the desktop; it's simply too tied into it. Then again, mobile, consoles and all the dedicated hardware inside walled gardens do solve many hardware fragmentation problems. At the same time, MS has never in its history tried to counter natural fragmentation; the company thrives on this anti-Apple concept and is probably one of the few that has managed to be productive in such an ecosystem. So as far as MS+DX go, death simply doesn't add up - a different style of API, however, does.

The stuff about how things look, and the various broad performance discussions, however, don't really matter.

#15 phantom   Moderators   -  Reputation: 7411


Posted 18 March 2011 - 02:43 PM



No, I don't agree. So you're saying, for example, that normal mapping is different from one engine's code to the next?

Utter balls back at you.

Now artwork, say Borderlands vs. Battlefield 3, yes, there is a difference.



Well, yes, it can be, depending on how the normals are encoded, what is being combined, and how the effect is produced. So, while normal mapping itself might be easy to do, this doesn't support your assertion that they are just 'copying and pasting crap'; that is just lazy thinking that tries to make out the coders are being lazy for some reason, while utterly ignoring the reality of things, which is that while standard techniques might well exist online, these are generally recreated and recoded directly in order to produce more optimal output.

So, if you want to try and have a discussion, feel free to stick to talking about things you know about instead of trotting out the usual 'game coders are lazy' routine; it's bad enough when gamers do it, never mind people who should be vaguely aware of the workload required to produce these effects.

#16 wanderingbort   Members   -  Reputation: 136


Posted 18 March 2011 - 03:31 PM

No, I don't agree. So you're saying, for example, that normal mapping is different from one engine's code to the next?


Sure!

Are you using tangent-space or object-space normals? Are your normal maps encoded such that you need to reconstruct the value on a given axis (DXN or DXT5, to avoid normal cross-talk)? Where does this normal go once you've computed it - into an N·L equation? Should I normalize it first? Wait, are they normalized in my asset pipeline? Did I just bloat my shader unnecessarily? Am I using the normal to offset the color texture? Should I fetch another normal to fix it up? What if I'm deferred - do I need to pack this into a G-buffer? Are we producing a special texture for a post effect? Should I bias it into RGB space? Is that camera space or world space?

Yep, pretty standard normal mapping. And in the end, it's a bumpy texture.
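To put a little code behind those questions, here is just one of the branch points: decoding a normal stored in all three channels versus reconstructing Z from a two-channel (BC5/3Dc-style) map. Plain C++ with a tiny vector struct, purely illustrative; real engines do this in the shader and fold in many of the other choices listed above.

```cpp
// Two of the many variants hinted at above: a normal stored in three channels
// versus reconstructing Z from a two-channel (BC5/3Dc-style) map.
#include <cmath>

struct float3 { float x, y, z; };

// 8-bit-per-channel map storing x, y and z directly, biased into [0, 255].
float3 DecodeRGBNormal(unsigned char r, unsigned char g, unsigned char b)
{
    float3 n = { r / 255.0f * 2.0f - 1.0f,
                 g / 255.0f * 2.0f - 1.0f,
                 b / 255.0f * 2.0f - 1.0f };
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }  // renormalize after quantization
    return n;
}

// Two-channel map: only x and y are stored; z is reconstructed, assuming a
// unit-length, z-positive tangent-space normal.
float3 DecodeTwoChannelNormal(unsigned char r, unsigned char g)
{
    float3 n = { r / 255.0f * 2.0f - 1.0f,
                 g / 255.0f * 2.0f - 1.0f,
                 0.0f };
    float zz = 1.0f - n.x * n.x - n.y * n.y;
    n.z = zz > 0.0f ? std::sqrt(zz) : 0.0f;
    return n;
}
```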

#17 MARS_999   Members   -  Reputation: 1289


Posted 18 March 2011 - 04:09 PM




So, if you want to try and have a discussion, feel free to stick to talking about things you know about instead of trotting out the usual 'game coders are lazy' routine; it's bad enough when gamers do it, never mind people who should be vaguely aware of the workload required to produce these effects.


Chat away, and get off my back; nothing has changed with you in years... still in my face... I now disregard your posts, as the time spent reading them is wasted.

#18 phantom   Moderators   -  Reputation: 7411


Posted 18 March 2011 - 04:12 PM

Ever considered that I'm "on your back" because you are saying things which are wrong?

#19 Moe   Crossbones+   -  Reputation: 1248


Posted 18 March 2011 - 04:47 PM

Something else that hasn't really been mentioned so far in this thread or the article is the law of diminishing returns. Sure, my graphics card might be 10x more powerful... but what good is that power if it is adding 10x more polygons to a scene that already looks pretty good?

Looking over screenshots of DirectX 11 tessellation in that recent Aliens game, I found it somewhat difficult to distinguish between the lower-res model and the tessellated one. It's not that we aren't using that extra graphics horsepower - it's that it isn't easily visible.

On the subject of normal mapping: there was a recent presentation by Crytek about various methods of texture compression (including normals). For their entire art chain, they are attempting to use 16 bits per channel, including normal maps. The difference was subtle, but it was there. Now here's the thing - which is the bigger difference: going from no normal map to an 8-bit normal map, or going from an 8-bit normal map to a 16-bit one?

#20 MARS_999   Members   -  Reputation: 1289


Posted 18 March 2011 - 05:00 PM

Now here's the thing - which is the bigger difference: going from no normal map to an 8-bit normal map, or going from an 8-bit normal map to a 16-bit one?


Agreed... I have tried this myself and saw very little IQ improvement. Maybe it would matter more if you were zoomed in on a surface? I have no idea, but I am guessing it would.



