Not dead...



More Threads.

Posted 14 February 2010 · 258 views

While I do have a plan to follow up on my last entry with some replies and corrections (I suggest reading the comments if you haven't already), my last attempt to do so ran to 6 pages and 2500+ words, so I need to rethink it a bit [grin]

However, this week I have some very much 'work in progress' code which is related to my last entry on threading. This is very much an experiment (and a very, very basic one right now) so keep that firmly in mind when reading what follows [grin]

As my last entry mentioned, I was toying with the idea of using a single, always active rendering thread which is fed a command list and just dumbly executes it in order. While this is doable with current D3D9 and D3D10 tech (where your command list is effectively a bunch of functions to be called/states to be set), D3D11 makes this much easier with its deferred contexts.

In addition, with the release of the VS2010 RC the Concurrency Runtime also moves towards final and provides us with some tools to try this out, namely Agents.

Agents, in the CR are defined as;
Quote:

The Agents Library is a C++ template library that promotes an actor-based programming model and in-process message passing for fine-grained dataflow and pipelining tasks.


This allows you to set them off and then have them wait for data to appear before processing it and then passing on more data to another buffer and so on.

In this instance we are using an agent as a consumer of data for rendering, with a little feedback to the submitting thread to keep things sane.

Agents are really easy to use, which is an added bonus: simply inherit from the Concurrency::agent class, implement the 'run' method and you are good to go. At which point it's just a matter of calling 'start' on an instance and away it goes.

In this instance I have an agent which sits in a tight loop, reading data from a buffer and then, post-present, writing a value back which tells the sending thread it is ready for more data. The latter step is there to try and prevent the data sender from throwing so much work at the thread in question that it gets too far ahead of the renderer on frames (most likely in a v-sync setup where your update loop takes < 16ms to process; for example, if your loop took 4ms then you could submit 4 frames before the renderer had processed one).
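For reference, here is a minimal sketch of what such an agent class might look like; the member and type names simply mirror the loop shown below, but the exact declaration is my guess rather than the real code:

#include <agents.h>

// Sketch only: the real class may well differ. The agent holds references to the
// two message blocks it uses to talk to the submitting thread.
class RenderingAgent : public Concurrency::agent
{
public:
    RenderingAgent(Concurrency::unbounded_buffer<RendererCommand> & commands,
                   Concurrency::overwrite_buffer<int> & notice)
        : commandList(commands)
        , completionNotice(notice)
    {
    }

protected:
    void run();    // the main loop, shown below

private:
    Concurrency::unbounded_buffer<RendererCommand> & commandList;
    Concurrency::overwrite_buffer<int> & completionNotice;
};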

The main loop for the agent is, currently, very simple;

void RenderingAgent::run()
{
    // Tell the submitting thread we are alive and ready for our first frame of data.
    Concurrency::asend(completionNotice, 1);

    bool shouldQuit = false;
    while(!shouldQuit)
    {
        // Blocks until a command is available in the unbounded_buffer.
        RendererCommand command = Concurrency::receive(commandList);

        switch(command.cmdID)
        {
        case EDrawingCommand_Quit:
            shouldQuit = true;
            break;
        case EDrawingCommand_Present:
            // Swap buffers, then signal the submitter that we are ready for more work.
            g_pSwapChain->Present(1, 0);
            Concurrency::asend(completionNotice, 1);
            break;
        case EDrawingCommand_Render:
            // Replay a pre-built D3D11 command list on the immediate context.
            g_pImmediateContext->ExecuteCommandList(command.cmd, FALSE);
            SAFE_RELEASE( command.cmd );
            break;
        }
    }

    // Move the agent into its 'done' state so agent::wait() can return.
    done();
}




The first 'asend' is used to let the data submitter know the agent is alive and ready for data, at which point it enters the loop and blocks on the 'receive' function.

As soon as data is ready at the receive point the agent is woken up and can process it.

Right now we can only understand 3 messages;
- Quit: which terminates the renderer, calling 'done' to kill the agent
- Present: which performs a buffer swap and, once that is done, tells the data sender we are ready for more data
- Render: which uses a D3D11 command list to do 'something'

'Render' will be the key to this as a D3D11 Command list can deal with a whole chunk of rendering without us having to do anything besides call it and let the context process it.

The main loop itself is currently just as simple;

// Buffers used to talk to the rendering agent: a queue of commands going in,
// and a single 'ready' flag coming back.
Concurrency::unbounded_buffer<RendererCommand> commandList;
Concurrency::overwrite_buffer<int> completionNotice;

RenderingAgent renderer(commandList, completionNotice);
renderer.start();

// Pre-built 'present' command so we don't construct it every frame.
RendererCommand present(EDrawingCommand_Present, 0);

g_pd3dDevice->CreateDeferredContext(0, &g_pDeferredContext);

// Single threaded update type message loop
MSG msg = {0};
DWORD baseTime = timeGetTime();
while(WM_QUIT != msg.message)
{
    if(PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    else
    {
        DWORD newTime = timeGetTime();
        if(newTime - baseTime > 16)
        {
            // Wait for the renderer to signal it has dealt with the last frame.
            Concurrency::receive(completionNotice);

            // Record this frame's work on the deferred context...
            float ClearColor[4] = { 0.0f, 0.125f, 0.3f, 1.0f };
            g_pDeferredContext->ClearRenderTargetView(g_pRenderTargetView, ClearColor);

            // ...bake it into a command list and hand it to the rendering agent.
            ID3D11CommandList * command = NULL;
            g_pDeferredContext->FinishCommandList(FALSE, &command);
            Concurrency::asend(commandList, RendererCommand(EDrawingCommand_Render, command));
            Concurrency::asend(commandList, present);

            baseTime = newTime;
        }
        SwitchToThread();
    }
}

// Shut down: tell the agent to quit and wait for it to reach its 'done' state.
RendererCommand quitCommand(EDrawingCommand_Quit, 0);
Concurrency::send(commandList, quitCommand);

Concurrency::agent::wait(&renderer);




The segment starts by creating two buffers for command sending and sync setup.

The 'unbounded_buffer' allows you to place data into a queue for agents to pull out later. The 'overwrite_buffer' can store only one value, with any new message overwriting the old one.
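As a quick standalone illustration of the difference between the two (not part of the renderer code, just the semantics as I understand them):

#include <agents.h>

void bufferSemantics()
{
    Concurrency::unbounded_buffer<int> queue;
    Concurrency::overwrite_buffer<int> latest;

    Concurrency::send(queue, 1);
    Concurrency::send(queue, 2);
    int a = Concurrency::receive(queue);    // a == 1, messages come out in order
    int b = Concurrency::receive(queue);    // b == 2

    Concurrency::send(latest, 1);
    Concurrency::send(latest, 2);
    int c = Concurrency::receive(latest);   // c == 2, only the newest value is kept
}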

After that we create our agent, start it up and create a 'present' command to save us from constructing it in the loop. Next a deferred context is created and we go into the main loop.

In this case it's hard coded to only update at ~60fps, although changing the value from '16' to '30' does drop the framerate down to ~30fps (both values were checked with PIX).

After that we wait for the renderer to say it is ready, then clear the screen, save that command list off, construct a new RendererCommand containing the pointer and send it off to the renderer. After that it passes the present command to the renderer and goes back around.

The final section is the shut down which is simply a matter of sending the 'quit' command to the renderer and waiting for the agent to enter its 'done' state.

At which point we should be free to shut down D3D and exit the app.

The system works, at least for simple clear-screen setups anyway; I need to expand it a bit to allow for proper drawing [grin] although that's more a case of loading the shaders and doing it than anything else.

The RendererCommand itself is the key link;
- This needs to be 'light' as it is copied around currently. If this proves a problem then pulling commands from a pool and passing a pointer might be a better way to go in the future. Fortunately such a change is pretty much a template change and a couple of changes from '.' to '->' in the renderer.

- The RendererCommand is expandable as well; right now it is only one enum value and a pointer, however it could be expanded to include function pointers or other things which could be dealt with in the renderer itself. This would allow you to send functions to the renderer which execute D3D commands instead of just saved display lists.
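For illustration, the whole command can be as small as the sketch below; this is based purely on how it is used in the code above (the enum type name is my assumption), so treat it as a guide rather than the actual definition:

// A deliberately 'light' command: one enum value plus a pointer, cheap to copy
// into the unbounded_buffer.
enum EDrawingCommand
{
    EDrawingCommand_Quit,
    EDrawingCommand_Present,
    EDrawingCommand_Render,
};

struct RendererCommand
{
    RendererCommand(EDrawingCommand id, ID3D11CommandList * commandList)
        : cmdID(id)
        , cmd(commandList)
    {
    }

    EDrawingCommand     cmdID;
    ID3D11CommandList * cmd;
};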

One of the key things with this system will be the use of a bucket-sorted command list, where each bucket starts with a state setup for that bucket to be processed (saved as a command list) and then each item in the bucket just sets up its local state and does its rendering.

I'm not 100% sure on how I'm going to handle this as yet however.

I'm also currently toying with making the main game loop an agent itself, effectively pushing the message loop into its own startup/window thread and using that to supply the game agent with input data.

There is however another experiment I need to do first with groups of tasks, requeueing and joining them to control program flow as my biggest issues are;
- how to control the transition between the various stages of data processing, specifically when dealing with entity update and scene rendering.
- how to control when scene data submission occurs. This is more than likely going to end up as a task which runs during either the 'sync' or 'update' phase as at that point the data for the rendering segment should be queued up and sorted ready to go.

So, still a few experiments and problems to solve, but as this works I finish the weekend feeling good about some progress [grin]


Direct X vs OpenGL revisited... revisited.

Posted 04 February 2010 · 999 views

A short time ago David over at Wolfire posted a blog entry detailing why we should use OpenGL and not DirectX.

The internet, and indeed his comments, asploded a bit under that. I posted a few comments at the time but didn't revisit the site afterwards, mostly because it wouldn't keep me logged in and the comment system was god awful.

The comments could be split up into the following groups;

- MS haters who want everything they touch to die, largely OSX and Linux users.
- OSX and Linux gamers who, despite not having any technical background, have decided that OpenGL is 'best', for mostly the same reasons as the above.
- Die hard OpenGL supporters, who believe that DX and MS are terrible but offer no technical details as to why.
- Die hard DX supporters who were just as bad as their OpenGL counterparts.
- A few people with both the technical opinion and experience to back up what they were saying - the only bit of signal in a lot of "fanboi" noise.

In a way it was sad to see this as this could have been a good chance to thrash out a few things.

David has since made a follow-up post to address some of the arguments made, and while a bit more balanced it still shows a slight anti-MS, "I'm the only one who can see the truth" bias.

However, we'll come to that post in a bit; firstly I want to cover the initial post, which I meant to do at the time but never got around to.

Before I go on I'd like to make a point or two to clear up my 'position' if you will;
- For those who have been around the site for a few years you'll know that I was the OpenGL forum moderator for some time and was heavily involved in that subforum.
- I've also written two articles on OpenGL for using the (at the time much misunderstood) FBO extension and I've had a chapter published in More OpenGL Game Programming regarding GLSL.

In short I'm not someone who grew up on DX and who lived and breathed MS's every word about the API. I spent some years on the OpenGL side of the fence "defending" it from those who used D3D and attacked it, both on the forums and on IRC. It's only in the last year and a half or so that I've dropped OpenGL in favour of D3D, first D3D10 then D3D11, so I have over 4x more years using OpenGL than I have D3D.

Why you should use OpenGL and not DirectX
(apparently)

The opening of the blog somewhat sets the tone, overly dramatising things and painting a picture that 'open is best!'.

It opens by saying they are met by 'stares of disbelief' and that 'the temperature of the room drops' when they mention they are using OpenGL. As I said, dramatic and I feel the first question is a valid one.

If someone said to me 'I plan to make a game, I plan to use OpenGL' I might well act surprised and at the same time I would ask the important question; Why?.

Not 'why OpenGL?' but why this choice; what technical reason did you have to decide that, yes, for this project OpenGL is the best thing for you. If the answer comes back as 'we plan to target Windows, OSX and Linux', or mentions either of the latter OSes in a development sense, then fair play, carry on, you've made a good technical case for yourself.

Which is somewhat the key point here; the title suggests that everyone should use OpenGL over DirectX. Not 'use it if it's technically right' or 'because our development choice requires it' but because you just should. It then goes on to try to explain these reasons, with a bias I considered quite interesting.

The technical case is a good place to start, because in that initial article he starts off by attacking anyone who 'goes crazy' over MS's newest proprietary API.

Quote:

What kind of bizarro world is this where engineers are not only going crazy over Microsoft's latest proprietary API, but actively denouncing its open-standard competitor?


To which I have to say this; what kind of bizarro world would it be if engineers ignored the best technical solution for them in favour of one which doesn't fit it just because the one which does is 'open'?

This, to a degree, is part of my problem with the whole initial piece; it paints those who use DX as mindless sheep; people who are wooed by shiny presentations and a few pressed hands at a meeting. Maybe some are, in much the same way that many people who use OpenGL do so because 'MS is evil'.

But any true self-respecting engineer or designer would look at things like this, look at the alternative and see how it fits their use case. MS reps could press hands all they want, but if someone goes away, tries it and it sucks... well, it won't catch on.

The History

There are only "minor" factual errors here, the main one painting a picture of OpenGL being on every things out there apart from the XBox. That last bit is true, the XBox doesn't expose OpenGL to the developer (nor is its D3D strictly D3D9) but everywhere?

Firstly, OpenGL|ES is not OpenGL. There are differences in both feature set and the way you would program the two APIs. As much as I'm sure people would love to be able to write precisely the same code for both your desktop and mobile device the reality is there are still differences. OpenGL and OpenGL|ES are working towards unity, but it isn't there yet.

OpenGL also isn't on the Wii or PS3. Well, the PS3 does have an OpenGL|ES layer but it's slow and people just go to the metal. The Wii has an OpenGL-like fixed function interface, however the functions are different, as is coding for it.

So, in reality, OpenGL has Windows, Linux and OSX.
And yes, I won't deny that is more than the Windows you get with just raw D3D (although XNA does tip things again).

I'm not going to argue numbers, I'm not a market analyst or a businessman, I'm just correcting some facts.

Why does everyone use DirectX?

The first section of note, because there is nothing really in the 'network effects' section worth talking about, is the 'FUD' section.

This, predictably, centres around Vista and the slide which, apparently, shows that OpenGL will only work via D3D. He even links to an image here and the HEC presentation to prove the point.

Amusingly both show his point, and indeed the point of the OpenGL community, to be wrong; if there was any FUD then they caused it themselves.
Now, I was around at the time; when I first heard the news I too was outraged at the idea, there may even be a few posts by myself condemning things, so I was taken in by some negative spin as well.

The problem is, let's really, really look at that image; see the 'OpenGL32/OGL->D3D' box? See the line going from that top 'OpenGL32' section to 'OpenGL ICD'? Yep, that's right... the OpenGL subsystem was always linked to the OpenGL ICD, an IHV-written segment of code.

So, if there was any FUD it was caused by an over-reaction to a non-event; an over-reaction NOT caused by MS but by the community itself. Talk about shooting yourself in the foot; if anyone hurt OpenGL that day it was, ironically enough, those who liked it the most.


The misleading marketing campaigns... well, I still feel this is a bit of 'he said, she said'. DX10 did bring some new things to the table but it never really got much of a true run out to decide either way due to how Vista was greeted. However the one thing to come out of all of this was that D3D9 games stuck around; no one jumped on D3D10, but I feel this is less a problem with the API than with the OS it was tied to.

Finally, there is the old favorite in any OpenGL vs DX debate; The John Carmack quote.

In a way I feel sorry for him because ANYTHING he says re: graphics gets jumped on as 'the one true way' and a big thing gets made out of it by any camp who can use it to their own ends.

The quote in question is;
Quote:

“Personally, I wouldn’t jump at something like DX10 right now. I would let things settle out a little bit and wait until there’s a really strong need for it,”


The problem is, the quote is missing context;

Quote:

John Carmack, the lead programmer of id Software and the man behind popular Doom and Quake titles, said he would not like to jump to DirectX 10 hardware, but would rather concentrate on his primary development platform – the Xbox 360 game console.

“Personally, I wouldn’t jump at something like DX10 right now. I would let things settle out a little bit and wait until there’s a really strong need for it,” Mr. Carmack said in an interview with Game Informer Magazine.

This is not the first time when Mr. Carmack takes Microsoft Xbox 360 side, as it is easier to develop new games for the consoles. Mr. Carmack said that graphics cards drivers have been a big headache for him and it became more complicated to determine real performance of application because of multiply “layers of abstraction on the PC”. The lead programmer of id Software called Xbox 360’s more direct approach “refreshing” and even praised Microsoft’s development environment “as easily the best of any of the consoles, thanks to the company's background as a software provider”.

“I especially like the work I’m doing on the [Xbox] 360, and it’s probably the best graphics API as far as a sensibly designed thing that I’ve worked with,” he said.


And thus the reason becomes clear; it's not because he loves OpenGL but because he prefers working on the XBox right now and finds the API, one very much like D3D9 (although with some differences), "refreshing".

The usage of the original quote to try to prove a point is misguided at best, dishonest at worst.

At this point you could point at 'Rage' and say 'ah, but that is in OpenGL, therefore OpenGL MUST be better!'. However any such claims fail to take into account the years of work iD have done with OpenGL, the tools they have, the code base they have AND the technical requirements; exactly the things a proper engineer should weigh up.

Finally, we get down to the 'meat' of the main article, and this is where technical accuracy takes a bit of a dive;

So why do we use OpenGL?

Quote:

... in reality, OpenGL is more powerful than DirectX, supports more platforms, and is essential for the future of games.


Now, more platforms is true, I've covered it already and it can't be argued with, the other two statements however are an issue...

OpenGL is more powerful than DirectX

Well, we'll ignore the poor phrasing, as DirectX is much more than OpenGL from API weight alone, so let's focus on the "facts" presented.

So, D3D9 has slower draw calls than OpenGL; this is true on XP. No one is going to dispute this fact and it is down to a poor design choice of having the driver transition to kernel mode for each draw call. This is practically the reason 'instancing' was invented, and while it was invented to get around the cost of small-object draw calls it did also open up some interesting techniques. OpenGL, while being faster for small-object draw calls, lacked this feature. People asked for it, but it didn't turn up in any official version until OpenGL 3.x (NV might well have had an extension for it before then but that's hardly the same as cross-vendor support).

The thing is, that was XP; Vista changed the driver model to remove this problem and it no longer exists in a modern OS. It's still a consideration if you are doing D3D9/XP development but going forward it simply isn't an issue. More important is the need to reduce your draw calls to stop burning CPU time on them anyway.

This brings us nicely to the issue of 'extensions'.

Even when I was developing with OpenGL I viewed these as a blessing and a curse; ignoring the need to access them via a trivial extension loader, there was the issue of cross-vendor support.

It's no secret that until recently ATI/AMD's OpenGL support was spotty. They didn't support as much as NV did and often lagged behind on newer versions. While I personally developed on an ATI machine this was still a source of annoyance at times (such as the slow appearance of async buffer copies from an FBO to a VBO via a PBO, which appeared first in their Vista driver some time after NV's own effort), even if I did manage to miss most of the bugs.

So while extensions do allow access to newer feature sets there is a cross-vendor cost to pay; whether that is an advantage, or whether the D3D methods of cap bits (D3D9), fixed functionality (D3D10.x) or 'feature levels' (DX11) suit you better, is a personal choice.

D3D also had a rudimentary extension system; ATI mostly included some 'hacks' you could perform to do special operations, such as hardware instancing on older cards and rendering directly to a vertex buffer. These few features carry the same 'cross-vendor' cost as above however.

At which point we get to some downright incorrect 'facts' surrounding tessellation.
The main one was;
Quote:

The tesselation technology that Microsoft is heavily promoting for DirectX 11 has been an OpenGL extension for three years


Unfortunately this is pretty much all wrong. The extension in question, provided only by AMD due to NV currently having no hardware which can do it, didn't appear in the public domain until last year. I know this because I was watching for it to see when it would finally make an appearance.

The other problem is that it is NOT the same thing; D3D11's tessellation setup consists of 3 stages;
- hull shader
- fixed-function tessellator
- domain shader

The AMD extension provides the fixed-function tessellator but not the two programmable stages. Right now, looking at the extension registry, I see nothing to indicate OpenGL supports these features, nor do I believe it will until NV get Fermi out of the door (May this year?) and have their own extension.

There is also this assertion;
Quote:

I don't know what new technologies will be exposed in the next couple years, I know they will be available first in OpenGL.


This is nothing more than hand-wavy feel-good nonsense and is easy to disprove; where is OpenGL's hull and domain shader support? More importantly, on the subject of 'power', where is OpenGL's support for having 'N' deferred contexts onto which I can build "display lists" from 'N' threads and have them drawn on the main thread? Or the other multi-threading things D3D11 brings to the table? Multi-core is the future; even if your final submit has to be on a single thread, the ability to build up your data in advance is very important right now.

Finally, based on recent history and the way things are going, MS are driving the tech now; if it continues as it is then the ONLY way OpenGL is going to get a feature before a D3D version does is if the ARB stop playing 'catch up' with the spec and get a GL version ahead of D3D, OR a vendor releases a card before a D3D release with an OpenGL extension ready to go.

That point also goes hand in hand with the comment about 'the future of games'.
There are two things at work here which go against that statement;
- Firstly OpenGL hasn't been a threat to D3D for some time. D3D has been driving development of hardware forward while the ARB seemed to flounder around arguing internally. They have got better of late, but they are still behind.

- Secondly; consoles. They have the numbers and they are significantly easier to develop for. They are the driving force behind the big games at least and to a degree influence what those coming behind want to do.

Don't get me wrong, I wouldn't declare PC gaming to be 'dead', but it very much plays second fiddle these days.

The final argument put forward in this section is that 50% of users still have XP systems; yep, can't fault that either, the Valve hardware survey it links to does indeed back this up, and this is a fine number to cling to if you are releasing a game now or maybe even in the next few months. However there is a reality here which it ignores; Windows 7.

The uptake of Windows 7 has been nothing if not fantastic, certainly after Vista's panning, and this is a trend expected to continue as new PCs come with it and gamers, seeing a 'proper' update from XP, move across to it. If only half of those on XP with a DX10 card move across to Win7 then the people who can run D3D11-based games, albeit on the D3D10 feature level, will be around 63%, and I suspect this will be true in reasonably short order. In short; if you are starting a game now to be released in a year or two then there isn't a good reason NOT to use D3D/DX11 based on those numbers (provided you are targeting only Windows of course).

OpenGL is cross platform

Yep, agreed, although again we find a Carmack quote pulled out and then twisted to spin positive for OpenGL usage;
Quote:

As John Carmack said when asked if Rage was a DirectX game, "It’s still OpenGL, although we obviously use a D3D-ish API [on the Xbox 360], and CG on the PS3. It’s interesting how little of the technology cares what API you’re using and what generation of the technology you’re on. You’ve got a small handful of files that care about what API they’re on, and millions of lines of code that are agnostic to the platform that they’re on." If you can hit every platform using OpenGL, why shoot yourself in the foot by relying on DirectX?


Again, the final assertion ignores the tools and existing tech iD have when dealing with OpenGL which make it a viable choice for them; it also seems to ignore that they target D3D-ish for the X360 and the native lib for the PS3. Also, as pointed out earlier, OpenGL doesn't get you 'every platform'; more than D3D, yes, however the post then goes on to point out that XP users are the biggest single desktop gaming platform, and with the migration to Win7 well underway it becomes less cut and dried.

OpenGL is better for the future of games

This... well, it reads more as an attack on MS than anything else. Talk of a monopolistic attack and an 'industry too young to protect itself' is more a plea to the heart than a fact-based argument... so let's bring in some facts!

The 'attack' spoken of here would seem to refer to the FUD section earlier, but there are two problems with that.

Firstly, as I pointed out, the FUD over Vista was self inflicted. The community did it to themselves and yet somehow MS got the blame.

The second is the idea that programmers and engineers would use something just because someone showed them some pretty slides and said 'hey, use this' without taking the time to look into it. If D3D didn't deliver then no one would touch it outside of the Xbox, indeed if it hadn't then something else would have come up or the XBox wouldn't have existed in the first place.

Then there is an "industry too yong to protect itself"; the industry isn't that young.
I was playing games back when I was 5 years old, thats 25 years ago. The Atari 2600 was released in 1977, 33 years ago. The industry has been here for over 30 years, over many platforms so the idea that it is 'young' and undefended is strange. If anything programmers are one group who are, tradionally, very resistant to change, doing things they way they did back in the old day because it was good enought then. So for them to switch to D3D from OpenGL means there must have been a good reason.

And there is something which is rarely pointed out; OpenGL was indeed there first. Granted, for 3D acceleration it was beaten out by GLIDE initially, with many games supporting it, however as the ICDs appeared games started to move across to OpenGL and away from GLIDE. Half-Life and Unreal Tournament stand out as two games which had GLIDE, OpenGL and D3D support, with D3D being the lesser choice in those days.

In short, it was OpenGL's position to lose and they lost it. MS might well have had a hand in this when they were on the ARB (I don't know for sure) but they left in 2003 and yet nothing happened.

Which brings us to, what was for me, the highlight of the blog in the final section;

Can OpenGL recover?

Firstly, I would say yes it can, but it will need the features (cross-vendor), the tools on Windows and better support. It will need to give people a reason to switch away from D3D11 (or whatever follows).

However, this isn't the key bit; among the tugs on the heart strings and the 'exists only to stop you getting games on XP, Mac or Linux' rant there was this little gem;

Quote:

If there's anything about OpenGL that you don't like, then just ask the ARB to change it -- they exist to serve you!


This gave me a few minutes of laughter for a good reason; experience has taught me that the ARB couldn't find its own arse with 4 tries and a detailed map.

In fact this is a good entry point into the follow-up article as well...

Quote:

OpenGL 3.0 sucked! It was delayed drastically and didn't deliver on its promises!
OpenGL 3.0 was not the revolutionary upgrade that it was hyped to be, but it was still a substantial improvement. OpenGL 3.1 and 3.2 addressed many of the concerns not addressed by 3.0, and it looks like it's on track to keep improving! If more game developers start using OpenGL again, the ARB will have more incentive and ability to keep improving OpenGL's gaming features.


And between the two, herein lies part of the problem.

Developers, including game developers, were presented with a much improved and above all modern API by the ARB. They told the ARB they loved the direction, gave feedback and generally made a big noise about looking forward to it; after the mess which was OpenGL2.0 and the amount of time it took to get VBOs it finally looked like D3D10 had given them the kick they needed.

You see, D3D10 has been hailed as a great improvement to the API; its usage fell flat due to Vista, however D3D11 is very much a slight refinement of it. Longs Peak was in the same vein; indeed it was better than D3D10, based on what little we had seen.

The ARB talked, we said 'awesome!' and then... well.. I don't know if we'll ever truely know.

It went silent, people asked for updates, nothing happened and finally, after a wall of silence had descended, OpenGL3.0 was released and the OpenGL.org forum asploded. Yep, they had done it again.

This is the problem with OpenGL and the ARB; they do it to themselves.

On the day OpenGL3.0 was announced numerous people, myself included, made a noise and then walked off to D3D10 and D3D11 land. Much like the FUD problems before, the OpenGL community had crippled itself.

I know from a few PMs I had at the time that there were people who worked on the Longs Peak spec who were just as upset about this turn of events... well, more so... than the end users who walked away. As I said, I doubt we'll ever really know; all I do know is that, despite what went around at the time, I was told it wasn't the CAD developers who caused the problem.
(Personally, I think Blizzard and one of the IHVs sunk it... but again, we'll probably never know).

All of which brings me back to the two quotes above; the ARB have shown time and time again they can't get things done. MS, on the other hand, deliver. Between that and the tools, docs and stability of the drivers, I know whose hands I'd rather put my future in.

Quote:

Are you saying that AAA developers use DirectX just because they're too stupid to see through Microsoft's bullshot comparison ads? You're the only one who's smart enough to figure it out?
No, of course not. That kind of marketing primarily affects game developers via gamers. Since gamers and game journalists are not graphics programmers, they believe Microsoft's marketing. Then, when the gaming press and public are all talking about DirectX, it starts to make rational short-term business sense for developers to use DirectX and ride Microsoft's marketing wave, even if it doesn't make sense for other reasons.

On the other hand, game developers are directly targeted by DirectX evangelists and OpenGL FUD campaigns. At game developer conferences, the evangelists are paid to shake your hand and deliver painstakingly-crafted presentations and well-tested arguments about why your studio should use DirectX. Since nobody does this for OpenGL, it can be hard to make a fully informed decision. Also, not even the smartest developers could have known that the plans for dropping OpenGL support in Vista were false, or that the terrible Vista beta drivers were not representative of the real ones. It doesn't leave a bad taste in your mouth to be manipulated like this? It sure does for me.

Are these the only reasons why DirectX is so much more popular than OpenGL? No, but they're a significant factors. As I discussed at length in the previous post, there are many network effects which cause whichever API is more popular to keep becoming more popular, so small factors become very large in the long run.


I somewhat covered this earlier, but I feel it's worth addressing directly.

The first paragraph is certainly true now; gamers do talk a lot more about D3D, however it wasn't always that way. At one point OpenGL was the big name, not as big because it was a few years back and the internet wasn't as connected as it is now, but it was still a major factor. For some years there was a belief that OpenGL games looked better, and I recall the asplosion which occurred when it turned out Half-Life 2 wouldn't support OpenGL.

Which brings up two points;
- OpenGL was popular, yet it lost that position before the "fanboys" and gamers had latched onto D3D
- Developers were already switching across to D3D only at this point, before the marketing factor which exists today kicked in

That alone tells us something about the fight between OpenGL and D3D on a technical merit.

The second paragraph seems to, yet again, cast doubt on the ability of other engineers to make a technical choice. Again, we need to view this with some history attached; before D3D9, DX wasn't really a 'big deal', DX7 sucked and DX8 wasn't better. MS, while putting cash in, wouldn't have been putting anywhere near as much in as they do now and, due to its popularity, OpenGL would have had a high share of the technical knowledge; many people back in 2000 wanted to work with OpenGL simply because they attached the name Carmack to it.

Again, this was OpenGL's position to lose.

The final section about the FUD and beta drivers seems to also continue this theme of engineers and developers being naïve.

The FUD is somewhat forgiveable for the younger engineers and those who didn't look at it closely to start with; someone misread something, a shit storm appeared and it hurt OpenGL, but not from MS, as already mentioned. In fact, I dare say most developers who took the time to look (including myself after a while) would have realised no such dropping of OpenGL was going to happen.

As for the beta drivers... well, read your release notes.
No one should expect 'beta' to be final quality and ATI, at least, pointed out they didn't have any OpenGL drivers. I have a vague memory that NV made a point of saying theirs weren't final as well, but I wouldn't swear to it.

Given this situation I feel it is David who, in his reply, is trying to manipulate the reader over something which, when looked at more closely, never happened or was never really a problem if people had thought about it. So, yes, it does leave a bad taste in my mouth when someone tries to repaint the past and manipulate me.

As for the rest of the reply, well, he does do a decent job of being more honest and less evangelical than before;

- While he points out that you might not want OpenGL if you are doing a console exclusive or an XBox game with a cheap Windows port, I disagree with the assertion that in other cases OpenGL is the logical place to start. Technically speaking D3D11 is the better API, it is cleaner and offers more features, but beyond that there is the issue of what you want to do; even if you are just developing for a home computer that doesn't mean you'll want to support OSX or Linux, and with that consideration OpenGL isn't the logical choice, although it is still a choice.

For me it wouldn't be, which is an example of technical thinking; I want to push cores to the limit and OpenGL just doesn't have the multi-threaded support that D3D11 does; this is a technical barrier and only D3D11 can support what I want to do.

In fact this links into this statement;
Quote:

If we take a larger view, the core functionality of Direct3D and OpenGL are so similar that they are essentially identical


To which I disagree; depending on the level of support and what constraints you are under (such as above) there are large differences between core GL3.2 and DX11.

Quote:

The most important differences are that one is an open standard, and the other proprietary, and that one works on every desktop platform, and the other does not.


Frankly, the first statement is rubbish; open standard vs proprietary doesn't matter if one can do things the other can't. The second statement only matters if it dovetails with your plans, which may or may not be restricted by technical reasons.

After that it is mostly minor issues. There is a retraction on tessellation, albeit with a downplaying of the tech and a claim that OpenGL will be ready when it is important/popular (AvP would like to have a word with you about that one), which while nice doesn't really help developers get on and use it. He also says that older methods can produce the same visible result as the new ones, which might be true, however speed is an issue here and I doubt they'll be as fast (less so on Fermi, as it can tessellate 4 triangles at once via a change in hardware). He questions the fixed-function stage, ignoring the two programmable stages around it, it seems, and also ignoring that there is no need for it to be programmable given its nature. It might well become more programmable in future, although I'm not sure how, but this is a sane stepping stone if that is the case. Finally there is a comment about ATI having had tessellation for a decade with it not being used, for good reason;
- Tru-form wasn't great
- The tessellator on the 360 wasn't great either
- The tessellator in the consumer cards wasn't exposed until last year

By contrast D3D11's setup has one game out using it (Dirt2) and AvP coming soon, which makes heavy use of it, with other games sure to follow.

And I think that covers all the important points.

As I hope you can see, things aren't as cut and dried as some would like you to believe; D3D didn't muscle out the little guy via funding and FUD. The little guy was once the big guy and simply lost because it didn't improve and because it generated its own FUD.

So, to reformat and ask the original blog's question again I think would be a good way to end;

Why should you use OpenGL?

You should use it when it meets your technical demands.

But please; try to leave emotion at the door, it's just an API after all, no need to try and tug at the heart strings.


Threads.

Posted 13 January 2010 · 271 views

As I've mentioned before, I've been working on a highly threaded particle system (not of late, but you know, it's still in the pipeline, as you'll see in a moment), however this has got me thinking about threading in general and trying to make optimal use of the CPU.

Originally my particle system was going to use Intel's Threading Building Blocks, however as I want to release the code most likely under zlib the 'GPL with runtime exception' license TBB is under finally freaked me out enough that I've decided to drop it in favour of using MS's new Concurrency Runtime which is currently shipping with the VS2010 beta.

One thing the CR lets you do is set up a scheduler which controls how many threads are working on things at any given time; whether it matches hardware threads, thread priority, oversubscription etc. are all options which can be set, which grants you much more control over how the threads are used when compared to TBB.
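For example, setting up a scheduler with explicit limits might look something like this; a small sketch of the ConcRT policy API as I understand it, not code from the particle system, and the numbers are just placeholders:

#include <concrt.h>

void createScheduler()
{
    // Ask for between 2 and 4 virtual processors and allow each one to be
    // oversubscribed with up to 2 threads.
    Concurrency::SchedulerPolicy policy(3,
        Concurrency::MinConcurrency, 2,
        Concurrency::MaxConcurrency, 4,
        Concurrency::TargetOversubscriptionFactor, 2);

    // From here on the current context uses this scheduler.
    Concurrency::CurrentScheduler::Create(policy);
}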

Looking at this I got thinking about how to use threads in a game and more importantly how tasks can be applied to them.

If we consider the average single threaded single player game then the loop looks somewhat like this;

update world -> render

There might be variations on how/when the update happens but its basically a linear process.

When you enter the threaded world you can do something like this;

update \ sync
update ---> sync ---> render
update / sync /


Again, when and where the update/sync happens is a side point; the fact is rendering again pulls us back to a single thread. You could run the update/sync threads totally apart from the render thread, however that brings with it problems of scalability and sync.

If you have 4 cores and you spawn 4 threads, one for each update and a render thread, and run them all at once, then you need to sync between them, which will involve a lock of some sort on the world. Scalability also becomes a concern, more so if you assign each thread a fixed task to carry out, as when you throw more cores at it they will go unused.

You could still use a task-based system, however a key thing is that you might not be rendering all the time; you could use those 3 threads to update/sync based on tasks, but for some of the time the rendering thread will go idle, which is time you might be able to use.

For example, assuming your game can render/update at 60fps, your rendering time might only take 4ms of time, which means that for ~12ms a frame a core could very well be idle and not doing useful work.

This is where oversubscription comes into play; creating more threads than we have hardware to deal with.

In a way, if you do a task based system which uses all the cores and you use something like FMOD then you'll already be doing this as it will create at least one thread in the background and other audio APIs do the same.

The key thought behind this is that a device in D3D (and OGL) terms is only ever owned by one thread, so unless you can force a rendering task onto the same thread all the time, issues start to come up. You might be able to grab the device onto a thread and release it again, however even if this is possible it would probably cause bad voodoo. For this reason you are pretty much stuck with one thread you render from.

As you are stuck with a thread anyway then why not create one specifically for the task of rendering? You could feed it work in the form of per-frame rendering data and let it do its thing while you get on and update the next frame of the game.

However, this would impact your performance as you'd have more threads looking for resources to run on than you'd have hardware to run them. So, the question becomes would it be better to lose Xms or would the fighting cost you less in the long run?

The matter of cache also comes up, however the guys who worked on the CR bring up an important point; during your thread's life you are more than likely to be preempted anyway, at which point, if you have affinity and masks set, you'll stall until the CPU has freed that core, or you bounce cores and lose your cache. Chances are, even if you stick around and cost yourself time, your cache is going to be messed with anyway, so it might not be worth the hassle. (The CR will bounce threads as needed between cores to keep things busy for this reason.)

The advent of D3D11 also makes this more practical as you can setup things as follows;


update \ sync \ pre-render
update ---> sync ---> pre-render ---> next frame
update / sync / pre-render /

----- render ------------------------>


In this case the pre-render stage can use tasks and deferred contexts to create the data the render thread will ultimately punt down to the GPU. This could also improve framerate as it allows more object setup and maybe more optimal data to be passed to the GPU.
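A rough sketch of how that pre-render stage might work, assuming one deferred context per task and a task_group from the Concurrency Runtime; the SceneChunk type and its RecordDrawCalls method are made up for illustration, this is not working engine code:

#include <ppl.h>
#include <d3d11.h>
#include <vector>

// Hypothetical per-chunk type; in a real engine this would hold the sorted
// draw data for one section of the scene.
struct SceneChunk
{
    void RecordDrawCalls(ID3D11DeviceContext * context);   // issues states and draws
};

// Build one D3D11 command list per chunk of the scene, in parallel, then hand
// the results over for the render thread to execute in submission order.
void preRender(ID3D11Device * device,
               std::vector<SceneChunk> & chunks,
               std::vector<ID3D11CommandList *> & commandLists)
{
    commandLists.resize(chunks.size(), NULL);

    Concurrency::task_group tasks;
    for(size_t i = 0; i < chunks.size(); ++i)
    {
        tasks.run([&, i]
        {
            // In practice you would probably keep a deferred context per worker
            // rather than creating one every frame.
            ID3D11DeviceContext * deferred = NULL;
            device->CreateDeferredContext(0, &deferred);

            chunks[i].RecordDrawCalls(deferred);

            deferred->FinishCommandList(FALSE, &commandLists[i]);
            deferred->Release();
        });
    }
    tasks.wait();

    // The render thread can now call ExecuteCommandList on each entry against
    // the immediate context and Release it afterwards.
}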

There remains matters of syncing the data to be rendered and what happens if you throw a fixed time step into the mix (although this is most likely solved by having the pre-render step run every loop regardless of update status and have it deal with interpolation) however the idea seems workable to me.

If anyone can see any serious flaws in this idea feel free to comment on them. I probably won't get around to it for a few months as it stands, as I've a few things to do (not least of all the particle system [wink]), but it's certainly an idea I'd like to try out.


Goodbye and Hello..

Posted 31 December 2009 · 132 views

So, 2009... reasonably productive all things considered.

Most of the stuff seems to have been done in the latter half of the year;
- got the start of a TBB particle system going
- learnt about Compute shaders and how to tie them into D3D11
- got a handle on the UDK
- started to get a handle on Unity
- Read about and did some learning for F#

Going into next year my plans are;
- refactor and finish up particle system to get a game made
- Get cracking properly on a UDK game
- Knock up a Unity based game
- Use F# for something productive

A reasonable set of goals I feel... as to whether I manage to carry them out is another matter completely [grin]

Also, I'd like next year to be a year where everything I do at work doesn't get thrown away.

Since I moved projects from the first game I worked on until now, everything I've done has been dropped in some form or another; be it feature cuts or projects getting canned. I'd take it personally if I didn't know my work was solid... and, heck, I get paid... still, it's a bit silly.

So, yeah... new year, new goals... maybe by this time next year I'll have my first game properly done; my grand plan is to have something up for sale by the end of next year... we'll see, eh.

Have a good one boys and girls.


Modern Warfare 2 : No Russian.

Posted 16 November 2009 · 199 views

Having completed Modern Warfare 2 this weekend, in what was for me record time after obtaining a game, I've decided to also weigh in on the 'No Russian' level as many people have been doing.

If you have not finished the game there WILL be spoilers below. Turn away NOW.

So, as you probably know by now, Modern Warfare 2, while being a massive hit, has its fair share of detractors. From complaints about the size of the install vs the length of game play, the lack of 'modern game play', and of course the now infamous 'No Russian' level.

The first two can be handled quickest so I'll cover them first.

The size vs length (and vs cost as well) argument is an interesting one and one which, in my opinion, really isn't a major one. The fact is a large game which takes 5h or so to play (my approximate play time, spread over 3 sittings) is, if nothing else, testament to the amount of assets present in the game. The first focus is, as always, on the graphics, which aren't by any means poor (although I might be a tad biased given my gaming rig), however it doesn't end there. The between-mission cinematics are, like the first game, very impressive, giving you a wealth of secondary information and story detail while the voice overs explain what is going on.

After that comes the soundscape put together; something which is often missed in a number of games. The characters you are fighting with and against feel more real. They have accurate situational things to say and their general interaction with the player is engaging enough that you find yourself becoming somewhat attached to them. We aren't totally out of the 'canned single line response' era yet, as they are still present, but there is a definite improvement in this regard.

All of which feeds back into cost and, let's not ignore it, profit. At the end of the day Activision are a company, they are there to make money, and while I'm sure they have made a profit on this game the fact is it would not have been a cheap game to make by any stretch of the imagination.

Finally there is length and the fact that people are complaining that it is only 5h long. I'm not going to compare it to the cinema, or indeed to other games, however while it might have "only" been 5h long I feel it was pretty much long enough.

You see, the problem with many games is that if they try to go for 'epic game play', then unless they have a story and tasks to match you are just getting padding. It becomes 'oh, another <place> filled with bad guys I have to kill, yay' and then it really does become 'just another shooter'. Two games which are hailed, for reasons I find mind boggling, as the 'best FPS games ever' suffer from this for me. I'm talking of course about Half-Life and Half-Life 2, where I've never made it past the first few levels of the former, and the latter I only played because I had nothing better to do with my time and the later levels just dragged on.

All of which brings me to 'modern game play' elements and, frankly, I'm not sure what this really is. Is it the lack of "physics puzzles" people are talking about? Because if so, GOOD! Those things often turn out to be quite pointless and contrived, there just to show that 'wooo! we have a physics engine'. Once you get beyond that there isn't much else that I can see; I admit I'm not a game designer by a long shot, but I never found myself playing MW2 and thinking "yeah, this game needs X".

What MW2 represents to me is an FPS game doing what an FPS game should and sticking to being an FPS game. You still have to be stealthy at times because that's in the context of the game, but beyond that you have a gun, you have your orders, and you have to carry them out in, what is at times, the cluster fuck of modern combat.

At the end of the day MW2 lives up to its predecessor as a solid FPS game, by no means glorifying war, and is well worth the price of admission.

All of which brings us to 'No Russian'.

As you may or may not know, 'No Russian' is an early mission in the game. Having played the first level to 'learn the game', select your difficulty level and get some combat experience against some enemies, you are recruited to go undercover as someone close to the main villain of the piece (one Vladimir Makarov, a former protégé of Imran Zakhaev from MW1) as part of a multi-national task force (aka some SAS members and some cannon fodder). The rest of the task force, containing one of the other characters you play, 'Roach', is off in Russia obtaining some hardware.

Once you complete that mission you leap back to the undercover guy for the mission in question. The mission plays out with you and 3 other guys, including Makarov, walking through a Moscow airport, cutting down civilians and guards alike with heavy machine gun fire.

This level alone generated a lot of outrage and people calling it "sick"; the thing is I disagree and I'm going to give them a pass on this.

Now, the one complaint I have heard about this level was the lack of setup, and I do agree with that to a degree; a mission beforehand involving yourself and Makarov to introduce things and get you on this 'team' might have been a better introduction to this section of the story, but the level itself I have no issue with.

I'll come straight out and say it; I shot the civies. A fair few in fact, because I felt, given the position I was in and the guy I was with, that if I hadn't done it I might have been punished for it in some way (ironic considering you are shot and killed at the end of the level by Makarov). There have been complaints that there was no reason to shoot them, but given that mindset, and the fact you went into this mission being told that 'you would lose a piece of yourself' (or words to that effect) and that it was for 'the greater good', someone in that situation would probably have been convinced to follow through as well.

The fact of the matter is that the game is set in modern times and terrorism is a part of modern warfare as much as massive battleships, armour and remotely targeted missiles, and this brings home what some might do in the name of their cause.

Now, that this ignites an invasion of the US by the Russians has also been criticised, but let's consider this from a few angles;

Firstly, those in charge of Russia at that point in the game were anti-American and would have been looking for an excuse to do something like this. The ACS system, which I can only assume was a key piece of technology for the mainland defense, gave them the technical ability and the attack gave them the 'moral' ability to get the people behind it.

You might say that an investigation into the shooting would have found out it was a setup, but then if you want an excuse who does a real investigation? If you want "proof" you can look to the real world and the events post-9/11. Now, I'm not going to say the American Government is like the Ultranationalists, however even then, given a single terrorist event on home soil, the USA (and the UK) went on to invade two countries, and that was with a moderate government; imagine if the government had been actively looking for an excuse to go in.

At the end of the day, could "No Russian" have been handled better? Possibly, however I feel this level IS an important landmark in gaming history because it is the first time I can think of where a game has taken on an issue such as terrorism, and the role people might play in it, head on in what I feel was a mature way. There was no shying away from the death and suffering there (a theme which continues in the rest of the game as the realities of war play out). The response will certainly have an effect on how this subject matter is handled in the future, but I feel that with this Infinity Ward have opened a door, and it's one I think we should use if we are going to tell stories set in the modern world; a bit less jingoism and a bit more of a look at how things are done is never a bad thing, and gaming is a powerful way to get that message across.

Interestingly, this whole level probably overshadowed another aspect of the game completely, one I'm surprised didn't get more of a reaction from the Americans out there.

In Modern Warfare 1 the SAS were, pretty much, the guys who got stuff done. The USMC did their part, however their part was pretty much screwing things up and getting nuked. While the SAS weren't the nicest bunch, they got in there, got shit done, and got out again. It isn't until the last mission that things go boobs skywards, and even then they are bailed out by the Russians.

Modern Warfare 2 kind of continues this theme; the fubar mission in Russia, getting their arses kicked on home soil and the shelling of the gulag while Task Force 141 (aka the SAS and some cannon fodder) are still inside, nearly killing them in the process. Meanwhile TF-141 carry out their missions pretty much flawlessly, save Washington (at the cost of the ISS, I admit) and generally continue to run about fixing things.

Frankly, as a non-American it's refreshing to see games which don't show the US in an all-positive light. However, the one thing I was surprised didn't get more coverage was the end of the game where, having completed the objective, you (playing Roach) and Ghost are met by General Shepherd, who shoots you both and sets fire to the bodies (all seen from the eyes of Roach, while Cpt. Price, who you previously rescued, is yelling over the mic that Shepherd isn't to be trusted). The game then jumps you to playing Soap again, with Price, as they escape the ambush, then hunt down and kill Shepherd (and as many people as get in their way).

To me, the most shocking 'what the hell?' moment of the game was that betrayal of the SAS members by the Americans, and I find the lack of complaint about painting the US in a negative light there interesting.

I'm going to end this with some words from Adam Biessener of Game Informer, as summarised by Wikipedia, which I happen to agree with;
Quote:

In his review for Game Informer, Adam Biessener writes that while the level "makes the player a part of truly heinous acts", he also notes that the "mission draws the morality of war and espionage into sharp focus in a way that simply shooting the bad guys cannot". Biessner concludes that it is one of the more emotionally affected moments in the game, is "proud that our medium can address such weighty issues without resorting to adolescent black-and-white absolutes".






