
#5038702 GPU geometry shader particles frustrum culling?

Posted by phantom on 03 March 2013 - 04:39 AM

Unfortunately the geometry shader is horrendous when it comes to performance, and the general IHV advice is "really, don't use it..." - if you are doing a particle system then you'd be better off instancing a single quad, or indeed doing the work in the tessellation stages, rather than using the geometry shader.

(I've promised people at work that if they use a GS in the shaders, the very first thing to happen is that I will appear behind them with a flaming sword demanding to know why they need to use it...)

#5038234 OpenGL Programming Guide 4.3

Posted by phantom on 01 March 2013 - 07:31 PM

AMD have GL4.2 support, so while some 4.3 functionality is currently missing there is no reason to only focus on 3.x...

#5036148 Get unique IDs by reading the Instruction Pointer Register

Posted by phantom on 24 February 2013 - 01:02 PM

This... is a horrible idea...

You have 3 'button' widgets; you run the code to get the instruction pointer - what makes you think this will be different for each instance, given they are all running the same code?

Just use an int and increment it each time you create a control... if you really think 32 bits is too small then you could always use a 64-bit int.
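The incrementing-counter approach is about as simple as it sounds; a minimal sketch (class and member names are illustrative, not from any real toolkit):

```cpp
#include <cstdint>

// Hypothetical widget-ID generator: one 64-bit counter, bumped once
// per control created. Every widget instance gets a distinct ID, which
// the instruction-pointer trick cannot guarantee, since all instances
// run the same code.
class WidgetIdGenerator {
public:
    std::uint64_t next() { return m_next++; }

private:
    std::uint64_t m_next = 0;
};
```

Three 'button' widgets created through the same generator get IDs 0, 1 and 2; no two controls can ever collide.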

#5033320 What's your preferred way of letting objects affect the game?

Posted by phantom on 17 February 2013 - 04:12 AM

Apoch has covered design nicely, but I think this is still worth calling out.

But games are complicated. Objects interact with each other and the entire game's state regularly throughout the game. Units collide, one unit attacks another, a building makes units, a unit can create projectiles, and those projectiles can affect a large number of other units, etc. In other words, when programming this, you need a way for one object to be able to affect another object (or the game state itself, possibly by creating other units or projectiles). But it's not just a one-time case; many different units have different effects and interactions, which can result in messy code when trying to make all these possible interactions and events possible.

The quoted section, in particular the end, is key to this I feel; a single object in the game is not going to touch the whole game state - not ever, and certainly not directly. This is something you go on to basically say yourself with the list of interactions, and it is a key place to start when thinking about it: each object has a pre-defined interaction with the world. If you were going to make an RTS game, for example, your 'tank' super-object (super-object => collection of parts) isn't suddenly going to be marrying The Hulk any time soon ;)

Let's take a closer look at one of the examples you gave, "a building makes units", as a typical factory in an RTS game.
While this might look complicated, it isn't really. At the highest level the building does one thing, 'creates a unit', at which point, beyond throwing out a few notifications, it is done.

Its output might simply be a message (in the generic sense, not a 'message system' sense) which says "I've made vehicle <type> at location <here> for <player>". Other systems would be listening for that event and respond accordingly; a player's unit collection might update and the unit spawner might create a new object so it can be rendered.

At which point the messages could ripple further afield (unit collection => UI to display message/icon; spawner to any AI and physics systems to insert it into their world view; and so on) but the key interaction is done.

As to how you expose this... well, there is the aforementioned 'message system' which you could route all the messages to - however, this global routing might not be what you want. Your 'factory' super-object might instead contain a list of delegates/functions to call when certain events happen, with systems 'subscribing' to them; or you could abstract the whole thing and come up with a processing graph which represents spawning in data. This would still require your super-object to output events, but the hookup of those input and output pins becomes data driven and potentially modified per level.
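The "list of delegates" variant can be sketched in a few lines; all names here are illustrative, not from any real engine:

```cpp
#include <functional>
#include <string>
#include <utility>
#include <vector>

// Hypothetical event payload: "I've made vehicle <type> at <here> for <player>".
struct UnitCreatedEvent {
    std::string unitType;
    float x, y;
    int playerId;
};

// The factory owns a list of callbacks; systems subscribe to its
// 'unit created' event instead of the factory knowing about them.
class Factory {
public:
    using Handler = std::function<void(const UnitCreatedEvent&)>;

    void subscribe(Handler h) { m_handlers.push_back(std::move(h)); }

    // The factory's single job: create a unit, notify listeners, done.
    void createUnit(const std::string& type, float x, float y, int player) {
        UnitCreatedEvent ev{type, x, y, player};
        for (auto& h : m_handlers) h(ev);
    }

private:
    std::vector<Handler> m_handlers;
};
```

A player's unit collection and the unit spawner would each call `subscribe` once at startup; the factory never needs to know who is listening, which is exactly the predetermined, limited interaction with the world described above.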

The key thing, however, is that each unit has a predetermined level of interaction with the world, so it can't mutate all the state itself, nor does it have to deal with every possible thing that can happen. Even something as simple as 'collision' isn't really the unit's problem; it is a problem for the physics under the hood.

#5033186 stack vs heap

Posted by phantom on 16 February 2013 - 06:01 PM

The funny thing is, given the speed difference between CPUs (and the fact they are out-of-order monsters) and memory access, you are probably better off worrying about your memory layout before worrying about every CPU cycle.

Sure, don't do more work than is required but often data access patterns can make a bigger difference than your instruction count.
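A classic illustration of "layout beats instruction count" is structure-of-arrays versus array-of-structures; a minimal sketch (types and names are mine, purely for illustration):

```cpp
#include <cstddef>
#include <vector>

// Structure-of-arrays layout: updating positions touches only the
// position and velocity arrays, so cache lines arrive full of useful
// data. An array of fat structs would drag every unrelated field
// (colour, lifetime, etc.) through the cache on the same loop.
struct ParticlesSoA {
    std::vector<float> x, y;    // positions
    std::vector<float> vx, vy;  // velocities
};

void integrate(ParticlesSoA& p, float dt) {
    for (std::size_t i = 0; i < p.x.size(); ++i) {
        p.x[i] += p.vx[i] * dt;
        p.y[i] += p.vy[i] * dt;
    }
}
```

The instruction count is the same either way; what changes is how much of each fetched cache line is actually used.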

#5032876 PC vs Console

Posted by phantom on 15 February 2013 - 06:47 PM

But I also know that a lot of games (not sure of the percentage) are developed on the XBox and then ported over (I know Skyrim was, hence its terrible UI for PC).

Ugh... I hate how this myth persists... in most cases the console and PC games are developed at precisely the same time from the same code base and the same basic asset sets - there is no 'porting' of code going on. In our games ~95% of the code base is the same between platforms and at any point during development you could compile the game for the target platform and it would run.

If the UI is 'bad' it is because it was a design choice made which probably had very little to do with what platforms were being targeted.

I put up with this myth/misunderstanding on gamer forums, because frankly those guys are laughably clueless at times, but it shouldn't be something which persists on a development forum...

#5032076 Game development on: Linux or Windows

Posted by phantom on 13 February 2013 - 05:51 PM

it isn't perfect, but it works.

Well, "works" in that you have to use it as you've no other choice than to suck the pain and live with it.

That's the thing about OpenGL; it's the best choice if you have no choice.

(OpenGL is also the ONLY API to enforce this model; PS2, PS3, X360, Wii/WiiU, D3D9/10/11 - none of these use this model.)

yes, but the issue is being either tied to Windows or having to write/maintain 2 versions of a renderer isn't ideal either.

The point is that better programming models exist; pointing out minor unrelated flaws with D3D's API (COM and 'not modern C++') does not detract from OpenGL's model being fundamentally broken, which was the original point that apparently triggered the rage.

For the record; I'd personally do two API level interfaces. If you are targeting D3D11 and OpenGL then the workload isn't going to be that high and the rest of your renderer is likely to swamp it out code wise.

(I'm currently involved in a complete ground up re-build of our renderer at work and we've taken the path of doing a renderer backend per platform, which means we need to support at least 4 code paths but it does mean we can do API and platform specific optimisations and paths while getting the best from each API. Unfortunately at some point this will probably mean I'll have to touch OpenGL|ES... *sigh*)

#5031936 Game development on: Linux or Windows

Posted by phantom on 13 February 2013 - 12:44 PM

If you love perspective so much, you should read further then this post, for example at the extremely biased posts by phantom, where my post was mainly made in response of. Calling things 'pants-on-head retarded' is typical behaviour of trolling fanboys. Maybe I should have been more clear in stating who I responded to, but it's odd how trolling fanboys are allowed to post garbage, but it's not allowed to post a response to it in the same manner.

The difference is my post is commenting on a fundamental flaw in the OpenGL programming model, whereas yours touches upon COM (a minor point of D3D11 coding, mostly related to start up) and C++ form, which OpenGL also doesn't promote in a sensible way.

If you can convince me that having to bind a resource to the pipeline in order to edit it is a good idea then I'll be impressed.
Not to mention that binding anything makes it editable, which can lead to fun such as, when you bind a VAO, accidentally changing its contents via a bug just because it is bound.

Pants. On. Head.

The D3D programming model, on the other hand, has both immutable objects AND doesn't require you to have a resource bound to edit it. The rest of the API over that is just gravy.

Oh, and for the record, I spent about 8 years working exclusively with OpenGL, including writing a chapter on GLSL for a book back in 2005, until the ARB finally screwed up one time too many (see GL3.0/Longs Peak) and I took a look at the D3D world where, since D3D10, things are saner to work with.

#5031264 Game development on: Linux or Windows

Posted by phantom on 11 February 2013 - 06:09 PM

If the fact that the Windows SDK and the DirectX SDK are separate (as they very-well should be)

Although they aren't any more; June 2010 was the last DX SDK update for DX11.
With Windows 8, DX/D3D is now part of the platform SDK and will be updated (or not) as that is updated.

#5030489 OpenGL and Mac: No D3D11 level functionality?

Posted by phantom on 09 February 2013 - 03:40 PM

However the number of games currently supporting the feature set is somewhat unimportant; if the feature set isn't exposed then no one can create games which target it anyway...

#5030156 Game development on: Linux or Windows

Posted by phantom on 08 February 2013 - 01:14 PM

What about Direct3D... I heard its API is really crappy, and so is the documentation. So it could be hard to learn, true?

Judging by your opening post about loving Linux, it isn't overly surprising you've heard this; however, it couldn't be further from the truth.

Of the two APIs, D3D11 is the better; it is well documented and altogether saner.

OpenGL, while feature-wise on a level with D3D11, remains tied to the broken bind-to-edit model which makes working with it pants-on-head retarded.

That said, working with OpenGL won't hurt you initially, so you don't have to feel you must swap over to Windows in order to progress; all the basic knowledge of 3D rendering is transferable between the two, you just have to learn a different way of doing things. You'll probably want to pick up D3D at some point, but right now you can focus on expanding your knowledge with OpenGL on Linux.

#5029022 Best way to convert or load custom texture format

Posted by phantom on 05 February 2013 - 07:17 AM

I would recommend just writing a small program to do the conversion for you before runtime.

100000000x this.

Your game/engine/runtime should NOT be dicking about converting image (or any other) data at run time; your texture payload should match the D3D-supported formats so that you can simply load the data in and hand it off directly.

Doing the conversion every time you load and run the program is just wasteful...
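To make the "load it in and hand it off" idea concrete, here is a sketch of what such a preconverted file might look like. The header fields, magic value and function names are all hypothetical, purely for illustration; the point is that loading is "read header, hand the payload straight to the API", with no per-pixel conversion:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical on-disk layout: a small fixed header followed by a
// payload already in a GPU-ready (e.g. BC-compressed) format.
struct TextureHeader {
    std::uint32_t magic;       // identifies/validates the file
    std::uint32_t format;      // engine enum mapping to a DXGI/GL format
    std::uint32_t width;
    std::uint32_t height;
    std::uint32_t mipCount;
    std::uint32_t payloadSize; // bytes of GPU-ready data that follow
};

// Parse a blob written with this layout; no conversion, just pointers.
bool parseTexture(const std::vector<std::uint8_t>& blob,
                  TextureHeader& outHeader,
                  const std::uint8_t*& outPayload) {
    if (blob.size() < sizeof(TextureHeader)) return false;
    std::memcpy(&outHeader, blob.data(), sizeof(TextureHeader));
    if (blob.size() < sizeof(TextureHeader) + outHeader.payloadSize) return false;
    outPayload = blob.data() + sizeof(TextureHeader);
    return true;
}
```

The offline converter tool writes the header and the converted pixels once, at build time; the runtime never pays the conversion cost again.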

#5028988 What are some libraries and techniques that game dev's should know?

Posted by phantom on 05 February 2013 - 04:26 AM

swiftcoder, on 05 Feb 2013 - 00:40, said:
In almost all other cases, the standard library (used correctly) is going to be faster and more robust than a home-grown solution.

There was even a case on this forum when the Doom3 (or Quake3, I forget which) source code was released and someone tested the iD-developed containers against the MS VS 2005 (iirc) implementation, and found the latter beat the former by a considerable margin in many cases.

That's not to say that the containers as provided by the C++ Std. Lib are the perfect solution to everything, but don't go thinking you can beat them on speed without having a very specific set of requirements which a general solution can't be optimised towards.

In other words: use the C++ Standard Library until proven otherwise.

(It still amuses me that C++ remains one of the only languages where people go out of their way to avoid the standard library instead of learning how to use it properly, based on 10-year-old hearsay and rumours...)

#5028471 How to avoid slow loading problem in games

Posted by phantom on 03 February 2013 - 05:22 PM

being nice to all kind of mangled formats/dimensions is useful during production, but once you hit the finish line, store the stuff in the exact format you need in memory.


We don't even allow that.

EVERY resource which is loaded, during every stage of development, is a processed one. All images are converted first to DDS and then to our custom format (basically a smaller header with a platform-specific payload attached, in the exact hardware format required) so they can be streamed in directly, as fast as possible.

The only difference between development and the final build is that for the final build the various packaged files are consolidated into larger volumes which are compressed using zlib.

There is no good reason to be dicking about at runtime with custom formats, and if you aren't giving the GPU DXTn/BCn-compressed data to work with then you are Doing It Wrong™. A GOOD DXTn/BCn compressor can take quite some time to run, so if you are trying to compress at runtime to any decent degree there is no way you'll get fast loading times.

#5027639 OpenCL is very slow comparing to cpu.

Posted by phantom on 31 January 2013 - 03:53 PM

I'm not convinced you are using the GPU how you think you are.

While you are telling OpenCL to launch 0x4000000 work items, you are also telling it that each work group consists of a single thread, which means you are wasting a vast amount of GPU resources: each one-thread work group still occupies a whole warp/wavefront, with only one lane in it doing useful work.

If you run 'clinfo' in a cmd window you should be told the preferred work group size multiple; set the local_ws value to that and you should end up using all the GPU threads instead of just one in each warp/wavefront launched.
(For example, on my card this value is 64, so a work group of fewer than 64 threads is going to waste resources.)
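One wrinkle with this fix: OpenCL 1.x requires the global work size to be a multiple of the local work size, so after raising local_ws you may need to round the global size up and guard the kernel with an `if (gid < n)` check. A small helper for the rounding (the function name is mine; in real code the preferred multiple would come from `clGetKernelWorkGroupInfo` with `CL_KERNEL_PREFERRED_WORK_GROUP_SIZE_MULTIPLE` rather than being hard-coded):

```cpp
#include <cstddef>

// Round the desired global work size up to the next multiple of the
// local work group size, as OpenCL 1.x requires global % local == 0.
std::size_t roundUpGlobalSize(std::size_t desiredGlobal, std::size_t localWs) {
    if (localWs == 0) return desiredGlobal;
    const std::size_t remainder = desiredGlobal % localWs;
    return remainder == 0 ? desiredGlobal
                          : desiredGlobal + (localWs - remainder);
}
```

For the numbers in this thread, 0x4000000 is already a multiple of 64, so the global size is unchanged; only work counts that don't divide evenly get padded up.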