MARS_999

DX may be dead before long....

42 posts in this topic

http://www.bit-tech.net/hardware/graphics/2011/03/16/farewell-to-directx/1

Comments....

Mine is:
I've been saying DX is a dog for years, and all the DX nut jobs said no, it's fast, you're doing something wrong… Bah, eat crow…

Anyway, I'm for whatever gets us the best IQ and FPS on the hardware gamers spend their hard-earned money on.

DX isn't needed as such anymore. At one point it was everything and the kitchen sink.

But as GPUs developed, everything moved to shader programming. The abstractions introduced by the old pipeline have become redundant in favor of shaders for everything.

The old view of mega-frameworks that run on the full spectrum of hardware has also become mostly irrelevant. While technically possible, it has little market impact. OGL, by trying to abstract the platform completely, isn't doing developers many favors: an emulated pipeline doesn't help them, especially when those details are hidden by the driver.


A very viable model today is something like OGL-ES. Don't build frameworks, libraries, and everything else; those are best left to users or engine developers. Be a ubiquitous, simple, hardware-friendly API aimed at the tasks GPUs actually perform.

This change in focus would be a good thing. Do one thing, but do it well, and think of the hardware. Developers will quickly adjust, engines will have to do less data juggling, and there will be less bloat that isn't needed at that level. After all, nobody programs DX anymore. They use Unreal, UnrealScript + custom shaders. Or Unity. Or Flash. Or ...

DX (and perhaps OGL) is in the same position as WinAPI. There are two companies that actually still need to know it. The rest build on top of third-party engines (not graphics pipeline frameworks) that add actual value to problems that need to be solved.
The more things change, the more they stay the same. Advances in technology typically outstrip our ability to use them.

I began programming when hex/octal/binary was required, stuffing bytes into memory. Then good assemblers helped the effort. Then interpreters (e.g., Basic) were all the rage; they provided a layer between the software and the hardware. Compilers sped things up even more, so programmers could take further advantage of improvements in technology, often targeting specific hardware improvements. As mentioned in the comments to that article, the game world was rampant with "You must have a super-duper-whammy sound card to run this game."

APIs (OpenGL, DirectX, etc.) appeared on the scene to help integrate entire applications, providing an even more generalized isolation layer between software and hardware. Although less so now than a few years ago, a common solution to problems is still "I updated my driver and now it works." However, one big benefit of those APIs was to force manufacturers to design to common interface standards. Without that impetus, shaders would be, if not a lost cause, in the same category as hardware-specific drivers.

Dropping the API? Pfui. AMD would certainly like it if the world reverted to "You must have an AMD super-duper-whammy graphics card to run this game." But, in agreement with phantom, I don't think for a second that'll happen tomorrow or the next day. DirectX and OpenGL will be around until something better comes along, adapting (as they have in the past) to take advantage of technology.

"Something better" will certainly come along, I'm sure. But, for the time being, I'll take my "chances" investing time in DirectX and OpenGL programming.



[quote name='Antheus' timestamp='1300470088' post='4787570']After all, nobody programs DX anymore. They use Unreal, UnrealScript + custom shaders. Or Unity. Or Flash. Or ...[/quote]

That sounds a lot like the same fallacious argument that you frequently find made about managed languages: "nobody programs in C/C++ anymore, these days it's all Java/.NET/Ruby/insert-flavour-of-the-month-here/etc". Where it falls apart is that Unreal, Unity or whatever ultimately have to be written too, and these need an API to be written to. All that you're doing is moving up one level of abstraction, but the very same messy details still exist underneath it all (and still have to be handled by a programmer somewhere - and believe me that we are truly [i]f--ked[/i] if we ever produce a generation of programmers that knows [i]nothing[/i] about the low-level stuff. Who is gonna write the device drivers of tomorrow? That's what I'd like to know.)

I believe the crux of the matter is that there has been no real [i]innovation[/i] on the hardware front in almost 10 years: sometime roundabout 2002/2003/2004 hardware suddenly stopped being crap (this is a generalisation; of course there's still plenty of crap hardware about) and since then it's just been a continuous ramp-up of performance. After all, even a geometry shader is something that can be handled by the CPU; where's all the new paradigm-shifting stuff? The last real break from the past we had was programmable shaders.

On account of this it's natural for some measure of uneasiness to settle in. APIs are offering nothing really new, so why do we need them, etc.? This is gonna last until the next big jump forward, which might be real-time accelerated raytracing or might be something else; I'm not a prophet and don't know. But think about the current situation as being akin to the gigahertz arms race of yore in CPU land and it makes a little more sense.
[quote name='mhagain' timestamp='1300472317' post='4787594']
That sounds a lot like the same fallacious argument that you frequently find made about managed languages: "nobody programs in C/C++ anymore, these days it's all Java/.NET/Ruby/insert-flavour-of-the-month-here/etc". Where it falls apart is that Unreal, Unity or whatever ultimately have to be written too, and these need an API to be written to.[/quote]Which is why I wrote, a sentence later, that only two companies in the world still need access to that low level.

[quote]and believe me that we are truly [i]f--ked[/i] if we ever produce a generation of programmers that knows [i]nothing[/i] about the low-level stuff. Who is gonna write the device drivers of tomorrow? That's what I'd like to know.)[/quote]And that is the same fallacious argument made by low-level people. Without ever-higher levels of abstraction, we'd still be weaving memory, rotating drums, and punching cards.

The same was said about functions (see "goto considered harmful", a blasphemous article of its time whose true meaning has been completely forgotten and is misunderstood each time it's quoted). They were bloated, abstract, inefficient, limiting. The same was said about everything else. But for every guru, there are a million people earning their bread without such skills.

Where are the low-level people who program by encoding FSMs? For the rare cases when they are needed, they are still around. But most developers today don't even know what an FSM is.

To low-level people, DX is a pain and inefficient (so is OGL or any other API). To high-level people, DX (et al.) is too low level.

That was the crux of my argument: most of the functionality DX provides, once used by the majority of developers, isn't needed anymore at that level. It lives either higher or lower. It's more about fragmentation, like everywhere else: rather than one huge monolithic framework, you use small one-thing-done-well libraries.
Games all look similar because most games are using the same damn engine or a select few... e.g. Unreal 3, Source, etc...

And the other problem is that most programmers, or whoever is coding these shaders, are just copying and pasting crap they found on the net or what other engines are using.
[quote name='Antheus' timestamp='1300474321' post='4787624']
That was the crux of my argument: most of the functionality DX provides, once used by the majority of developers, isn't needed anymore at that level.[/quote]


The thing is, if you take DX11, the core functionality it provides is:

[list]
[*]device creation
[*]resource creation / uploading
[*]resource binding
[*]dispatch / drawing
[/list]
Aside from that there are a few debug layers, which are needed to make any sense of what's going on at times, and... well... that's about it. Any replacement API is still going to need that set of functionality, as that is the functionality which is hit the most.
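For reference, here is a minimal sketch of those four steps against the real D3D11 entry points. It's an illustration only: shader compilation, swap-chain setup, error handling, and COM Release() calls are all omitted, and the triangle data is made up.

[code]
// Bare-bones illustration of the four core D3D11 steps listed above.
// Link with d3d11.lib; all error handling and cleanup omitted for brevity.
#include <d3d11.h>

void SketchCoreD3D11Usage()
{
    // 1. Device creation
    ID3D11Device* device = nullptr;
    ID3D11DeviceContext* context = nullptr;
    D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                      nullptr, 0, D3D11_SDK_VERSION, &device, nullptr, &context);

    // 2. Resource creation / uploading (a single made-up triangle)
    float verts[] = { 0.0f, 0.5f, 0.0f,   0.5f, -0.5f, 0.0f,   -0.5f, -0.5f, 0.0f };
    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth = sizeof(verts);
    desc.Usage = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
    D3D11_SUBRESOURCE_DATA init = { verts };
    ID3D11Buffer* vb = nullptr;
    device->CreateBuffer(&desc, &init, &vb);

    // 3. Resource binding (shaders and input layout would also be bound here)
    UINT stride = 3 * sizeof(float), offset = 0;
    context->IASetVertexBuffers(0, 1, &vb, &stride, &offset);
    context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

    // 4. Dispatch / drawing
    context->Draw(3, 0);
}
[/code]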

The only real difference between PC and console in this regard is that, because it's a unified system, we can combine the 3rd and 4th steps by building command buffers ourselves and pointing the GPU at them... although various bits of the 360 and PS3 APIs are used to hook this up.

Having low latency on that last part is pretty much what devs are after, and I agree with them on that... we still need some form of API, however.

On another note: I think you (and many others) underestimate the amount of low-level graphics API work going on, certainly in AAA games, which are the ones that really want this type of thing. Even if you license something like UE3, chances are your dev team is still going to have to pull it apart to add the functionality you require, and that will touch platform-specific graphics APIs.

[quote name='phantom' timestamp='1300478558' post='4787657']
[quote name='MARS_999' timestamp='1300477157' post='4787648']
Games all look similar because most games are using the same damn engine or a select few... e.g. Unreal 3, Source, etc...

And the other problem is that most programmers, or whoever is coding these shaders, are just copying and pasting crap they found on the net or what other engines are using.
[/quote]


Utter balls.

Shaders look the way they do because those in control of art want them to look that way. Art direction is the reason games look alike; it has nothing to do with engines or shaders, and the choice of artwork has more impact than anything else.

And as I work as a graphics programmer, I know how this works from a professional point of view.
[/quote]

No, I don't agree. So you're saying, for example, that normal mapping is different from one engine's code to the next?

Utter balls back at you.

Now artwork, say Borderlands vs. Battlefield 3, yes, there is a difference.
[quote name='phantom' timestamp='1300479095' post='4787661']
On another note: I think you (and many others) underestimate the amount of low-level graphics API work going on, certainly in AAA games, which are the ones that really want this type of thing.[/quote]
Far from it. It's about the level of abstraction provided.

Graphics accelerators were originally just that. Over time, they transitioned into heavy-duty compute units, the fabled 1000-core CPUs. As such, they lost their specialization in just pushing pixels.

Does it still make sense for a graphics API to hide how much memory you have available? For some, yes, but only in the same way UE hides it. Meanwhile, a dedicated developer wanting to truly push the hardware would probably embrace the ability to query this memory.

It's a delicate balance between exposing too much or too little of the underlying concepts. Java was mentioned, and it demonstrates this well: in Java, you don't have a clue how much memory you have. As soon as you hit the limit, the answer is always the same: buy more memory. Which is crap when dealing with upward-unbounded algorithms that could use terabytes if available. Sometimes you do want to know the basic layout of the hardware.
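(For what it's worth, a coarse version of that query does exist on Windows through DXGI, which ships alongside D3D10/11. A minimal sketch; it reports static adapter totals rather than live usage, so it only partly answers the question:)

[code]
// Query the first adapter's memory sizes via DXGI. Link with dxgi.lib.
#include <dxgi.h>
#include <cstdio>

int main()
{
    IDXGIFactory* factory = nullptr;
    CreateDXGIFactory(__uuidof(IDXGIFactory), reinterpret_cast<void**>(&factory));

    IDXGIAdapter* adapter = nullptr;
    factory->EnumAdapters(0, &adapter);          // adapter 0 = primary GPU

    DXGI_ADAPTER_DESC desc = {};
    adapter->GetDesc(&desc);
    std::printf("Dedicated video memory: %zu MB\n", desc.DedicatedVideoMemory >> 20);
    std::printf("Shared system memory:   %zu MB\n", desc.SharedSystemMemory >> 20);

    adapter->Release();
    factory->Release();
}
[/code]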

DX has actually gone this route, from full-blown graphics API to basically a shader wrapper.

I also think that the yellow-journalism style of the article makes it a fluff piece. Without understanding any of the internals, calling for DX's death is empty, as with all other similar declarations. My guess would be that MS realizes that developers, especially those doing any kind of advanced work, prefer a simple, low-level hardware abstraction rather than an enterprise graphics framework. So instead of providing everything in one piece, the future might mean even further streamlining: perhaps exposing everything as compute shaders, on top of which the old graphics pipeline would be built, either by using some third-party engine or by using the API directly. DX11 is already a large step in this direction.

And there is still the extra stuff: is COM still needed at this level? Does it really make sense that the very same API has to cater to everything, from the WPF and Visual Studio front end right down to real-time graphical extravaganzas?

It's more about the soundness of the API. Does presenting everything as an OO model and abstracting things the way they are now still make sense, or is there perhaps a different design better suited? I don't think anyone, at MS or elsewhere, would for a second consider exposing raw hardware again. Except perhaps on consoles; but DX isn't abandoning the desktop, it's simply too tied into it. Then again, mobile, consoles, and all the dedicated hardware inside walled gardens do solve many hardware fragmentation problems. At the same time, MS has never in its history tried to counter natural fragmentation; the company thrives on this anti-Apple concept and is probably one of the few that has managed to be productive in such an ecosystem. So as far as MS+DX goes, death simply doesn't add up; a different style of API, however, does.

The stuff about how things look, or the various broad performance discussions, however, doesn't matter here.
[quote name='phantom' timestamp='1300481013' post='4787683']
[quote name='MARS_999' timestamp='1300480270' post='4787673']
[quote name='phantom' timestamp='1300478558' post='4787657']
[quote name='MARS_999' timestamp='1300477157' post='4787648']
Games all look similar because most games are using the same damn engine or a select few... e.g. Unreal 3, Source, etc...

And the other problem is that most programmers, or whoever is coding these shaders, are just copying and pasting crap they found on the net or what other engines are using.
[/quote]


Utter balls.

Shaders look the way they do because those in control of art want them to look that way. Art direction is the reason games look alike; it has nothing to do with engines or shaders, and the choice of artwork has more impact than anything else.

And as I work as a graphics programmer, I know how this works from a professional point of view.
[/quote]

No, I don't agree. So you're saying, for example, that normal mapping is different from one engine's code to the next?

Utter balls back at you.

Now artwork, say Borderlands vs. Battlefield 3, yes, there is a difference.
[/quote]


Well, yes, it can be, depending on how the normals are encoded, or on what is being combined and how, to produce the effect. So, while normal mapping itself might be easy to do, this doesn't support your assertion that they are just 'copy and pasting crap'; that is just lazy thinking, trying to make out that the coders are being lazy for some reason while utterly ignoring the reality of things: while standard techniques might well exist online, these are generally recreated and recoded directly in order to produce a more optimal output.

So, if you want to try and have a discussion, feel free to stick to talking about things you know of and not trot out the usual 'game coders are lazy' routine; it's bad enough when gamers do it, never mind people who should be vaguely aware of the workload required to produce effects.
[/quote]

Chat away, and get off my back; nothing has changed with you in years... Still in my face... I now disregard your posts, as the time spent reading them is a waste of time.
[quote name='Moe' timestamp='1300488460' post='4787739']
Something else that hasn't really been mentioned so far in this thread or the article is the law of diminishing returns. Sure, my graphics card might be 10x more powerful... but what good is that power if it is adding 10x more polygons to a scene that already looks pretty good?

Looking over screenshots of DirectX 11 tessellation in that recent Aliens game, I found it somewhat difficult to distinguish between the lower-res model and the tessellated one. It's not that we aren't using that extra graphics horsepower - it's that it isn't easily visible.

On the subject of normal mapping: There was a recent presentation done by Crytek about various methods of texture compression (including normals). For their entire art chain, they are attempting to do 16 bits per channel, including normal maps. The difference was subtle, but it was there. Now here's the thing - what's a bigger difference - going from no normal map to an 8-bit normal map or going from an 8-bit normal map to a 16-bit normal map?
[/quote]

Agreed... I have tried this myself and saw very little IQ improvement. Maybe it would matter more if you were zoomed in on a surface? I have no idea, but I am guessing that it would.
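(A rough back-of-envelope illustration of why the 8-bit to 16-bit step is so hard to see; the numbers below are my own, not from the Crytek presentation:)

[code]
// How coarse is one quantisation step of a [-1,1] normal channel at 8 vs 16 bits,
// and roughly what angular tilt does half a step cause for an axis-aligned normal?
#include <cmath>
#include <cstdio>
#include <initializer_list>

int main()
{
    const double pi = 3.14159265358979323846;
    for (int bits : {8, 16})
    {
        double steps = std::pow(2.0, bits) - 1.0;            // intervals across [-1,1]
        double step  = 2.0 / steps;                          // size of one interval
        double tilt  = std::atan(0.5 * step) * 180.0 / pi;   // half-step error as an angle
        std::printf("%2d-bit: step = %.6f, ~%.4f degrees of tilt\n", bits, step, tilt);
    }
    // Prints roughly 0.22 degrees for 8-bit and 0.0009 degrees for 16-bit,
    // which is consistent with the 16-bit difference being subtle at best.
}
[/code]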
[quote name='MARS_999' timestamp='1300489231' post='4787741']
Agreed... I have tried this myself and saw very little IQ improvement. Maybe it would matter more if you were zoomed in on a surface? I have no idea, but I am guessing that it would.
[/quote]
True enough, but with most games, how often do you have time to sit there and zoom in on something? Most of the time you are busy fighting aliens/nazis/zombies/soldiers/robots/ninjas. If a graphical improvement isn't easily noticeable, does it really make that much of a difference?

(I'm just playing the devil's advocate here. I'm all for having better graphics, but there does eventually come a point where throwing more hardware at the problem doesn't have as great an impact as the art direction).
Share on other sites
The thing is, as scenes become closer to 'real', it is the subtle things which can make the difference and give the eye/brain small cues as to what is going on.

Take tessellation, for example; its major use is doing what the various normal-mapping schemes can't, namely adjusting the silhouette of an object. Normal mapping is all well and good for faking effects, but a displaced, tessellated object is going to look better, assuming the art/scene is done right of course. (It is also useful for adding extra detail to things like terrain.)

Another effect would be subsurface scattering; this is, if done correctly, a subtle effect on the skin/surface of certain objects which provides a more lifelike feel to the object. It shouldn't jump out and grab you like when normal mapping or shadows first appeared, but the overall effect should be an improvement.

Also, the argument about DX vs. a new API isn't so much about the graphical output but about the CPU overhead, and about coming up with ways to have the GPU do more work on its own. Larrabee would have been a nice step in that direction: having a GPU re-feed and re-trigger itself, removing the burden from the CPU. So, while lower CPU costs for drawing would allow us to draw more, at the same time it would simplify things (being able to throw a chunk of memory at the driver which was basically [buffer id][buffer id][buffer id][shader id][shader id][count], for example, via one draw call would be nice) and give more CPU time back to gameplay to improve things like AI and non-SIMD/batch-friendly physics (which will hopefully get shifted off to the GPU part of an APU in the future).
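(To make that "chunk of memory" idea concrete, a purely hypothetical sketch; none of these types or the submit call exist in DX or GL, the layout simply illustrates packing per-draw state into one flat buffer:)

[code]
// Hypothetical flat draw-packet layout, mirroring [buffer][buffer][buffer][shader][shader][count].
#include <cstdint>
#include <vector>

struct DrawPacket
{
    uint32_t vertexBufferId;   // [buffer id]
    uint32_t indexBufferId;    // [buffer id]
    uint32_t constantBufferId; // [buffer id]
    uint32_t vertexShaderId;   // [shader id]
    uint32_t pixelShaderId;    // [shader id]
    uint32_t indexCount;       // [count]
};

// The application fills one flat array of packets and hands the whole block over
// in a single call, instead of one state-setting + draw call per object.
void SubmitAll(const std::vector<DrawPacket>& packets)
{
    (void)packets;
    // driverSubmit(packets.data(), packets.size()); // hypothetical single entry point
}

int main()
{
    std::vector<DrawPacket> packets;
    packets.push_back({ 1, 2, 3, 4, 5, 36 });   // ids are arbitrary handles for illustration
    SubmitAll(packets);
}
[/code]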

Edit:

When it comes to the subtle things, right now my biggest issue with characters is their eyes. Take Mass Effect 2 on the 360: the characters look great, move great (much props to the mo-cap and animation guys) and feel quite real, so much so it was scary at times... right up until you look into their eyes, and then it's "oh... yeah...". Something about the lighting on them still isn't right; it's subtle but noticeable, more so when everything else is getting closer to 'real'. (It's probably the combination of a lack of subsurface scattering, diffuse reflection of local light sources, and micro-movement of the various components of the eye which is causing the issue.)
[quote name='Moe' timestamp='1300488460' post='4787739']
Something else that hasn't really been mentioned so far in this thread or the article is the law of diminishing returns. Sure, my graphics card might be 10x more powerful... but what good is that power if it is adding 10x more polygons to a scene that already looks pretty good?

Looking over screenshots of DirectX 11 tessellation in that recent Aliens game, I found it somewhat difficult to distinguish between the lower-res model and the tessellated one. It's not that we aren't using that extra graphics horsepower - it's that it isn't easily visible.

On the subject of normal mapping: There was a recent presentation done by Crytek about various methods of texture compression (including normals). For their entire art chain, they are attempting to do 16 bits per channel, including normal maps. The difference was subtle, but it was there. Now here's the thing - what's a bigger difference - going from no normal map to an 8-bit normal map or going from an 8-bit normal map to a 16-bit normal map?
[/quote]

I think the primary issue here is: why add pretty much anything when:

1) Console hardware can't handle it.
2) PC sales are a relatively small portion of the total.
3) The end user is unlikely to notice anyway.
4) PC users who have extra horsepower to spare can just crank up the resolution, anti-aliasing, etc. to make use of their newer hardware.

As more and more PC games are released first on consoles this issue becomes more noticeable; we will probably see another fairly big jump in visuals when the next generation of consoles hits the market.
The main thing that seems quite restricted on console-to-PC ports these days is the use of graphics memory: texture resolutions are often awfully low (BioWare did at least release a proper high-resolution texture pack for DA2, but most developers don't do that).
As we are here, I would like to point one thing out: many games where people say "oh, it's a port" in fact AREN'T ports. The PC version is developed and maintained alongside the console one, very often for testing reasons if nothing else.

Yes, consoles tend to be the 'lead' platform and, due to lower ROI on PC sales, the PC side tends to get less attention, but generally it needs less attention to make it work as well. (And I say that as the guy at work who spent a couple of weeks sorting out PC issues pre-submission, including fun ones like 'NV need to fix their driver profiles for our game to sanely support SLI'; that is something a new API really needs to expose, as leaving it up to the driver is 'meh'.)

The textures thing, however, is right, and trust me, it's just as annoying to the graphics coders as it is to the end user. At work, one of rendering's demands to art for the next game is that they author textures at PC levels and then we'll use the pipeline to spit out the lower-res console versions. (That said, even on our current game the visual difference between console and PC on high is pretty big; I was honestly blown away the first time I saw it running fullscreen and maxed out, having been looking at the 360 version mostly up until that point.)
Yes, DX11 features are great and I am glad they are finally here. Tessellation is great for adding detail (actual detail, not faked), and this feature is really needed on characters' faces/heads IMO. I agree with phantom for once, in that the meshes for the actual player/enemies need to have their polygon counts increased. The low polygon counts need to be dropped from games' final image rendering completely. With that said, it would also help the movements, meaning when an arm bends you actually have a real-looking elbow vs. the rubber-band effect.

And yes, I really, really wanted Larrabee to take off, as the possibilities were limitless... Here's to hoping for the future.

And no, PC sales aren't dying; they are actually quite healthy.

In fact, EA has said as much about PC gaming:

http://www.techspot.com/news/42755-ea-the-pc-is-an-extremely-healthy-platform.html