phantom

Member Since 15 Dec 2001
Online Last Active Today, 04:25 AM

#5202023 Separate update and draw code by thread - an Idea

Posted by phantom on 05 January 2015 - 01:05 PM

This isn't an uncommon approach already; the main difficulty is dealing with data which needs to flow between 'update' and 'render' threads - do you copy? double buffer? more?
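For the double buffer option, a minimal sketch (the names are made up, and it assumes the two threads are synchronised once per frame before the swap, e.g. via a barrier or a pair of semaphores) would be something along these lines:

#include <array>
#include <vector>

// Hypothetical per-frame data the update thread produces and the render thread consumes.
struct FrameState
{
    std::vector<float> positions; // whatever the renderer needs to draw the frame
};

// Double buffer: update writes one slot while render reads the other.
// Assumes both threads are held at a frame boundary when swap() runs.
class FrameStateBuffer
{
public:
    FrameState&       writeSlot()      { return m_states[m_writeIndex]; }
    const FrameState& readSlot() const { return m_states[1 - m_writeIndex]; }

    void swap() { m_writeIndex = 1 - m_writeIndex; } // only call when neither thread is mid-frame

private:
    std::array<FrameState, 2> m_states;
    int m_writeIndex = 0;
};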

But as things go, yes, it has been a tried and tested idea for a while now.

The biggest issue, going forward, is that it doesn't scale - you are using two threads and thus two cores, but CPUs these days can come with upwards of 4 cores, so you are under-utilising the CPU. At this point people go off into 'task' systems so that work can be spread even more (although rendering submission is still stuck on a single thread right now).

However, as a starter, yes, this is a sane way to go about things.


#5202015 do most games do a lot of dynamic memory allocation?

Posted by phantom on 05 January 2015 - 12:46 PM

but as i said before, i'm not doing monster engines. i'm mostly doing basic sims with 100 targets max kinda thing. so setting sizes upfront isn't too difficult. start with MAXTGTS=100. maybe kick it to 200 before release, that kind of thing.


Maybe you aren't, but you've rocked up here questioning things as though your minor project is in some way shining a light on The Best Way, when it is just one example of a specific thing done in a specific way which works for you. Let's not pretend that playing the static allocation game is anything less than wasteful bad practice which just happens to work for you.
 
 

as for global, yes its bad. its unsafe - easy to misuse.  the safety lacking in the code syntax must be replaced with coding policies and methodologies which must be well documented, and rigorously followed with strict coder discipline to avoid problems. but its the way i started, so i'm used to it. in the long run, all i really use it for its to avoid calling setter and getter methods everywhere. i suspect that it might be possible to write a game where all data is in private modules acessable only via getter and setter methods, and then you have control code modules that call getter and setter methods and perform operations on the values. the only thing that any data module would have to know about any other module would be any custom data structure definitions used to get and set its values. the only thing controller code modules would need to know would be custom data structures used to get and set values of data modules they use. an extremely modular system.  but as you can see, there would be a lot of get and set calls.


No, what I can see is a straw man argument built up using a lot of poor examples and something which screams 'bad design' at me - a good system design does not have 'getters and setters everywhere' and does not require module after module to know about the structures or internal setup of other systems.

A good system design decouples. A good system design hides. A good system design does not vomit all over your code base, which is typically what happens with global objects.

Fun example: Previous place I worked had a system to manage sharing system textures/render targets. This system was global. It was written by a junior with little experience and it was a mess. I designed and wrote a system to replace it (plus do more) which was not global (because the bloody thing was only used in the renderer anyway). Once the replacement was completed it took a couple of days to unwire the old system which had gotten everywhere. The new system was faster, cleaner, had more functionality and never once had a single bug tracked back to it. Nor did it have loads of 'get and set' functions.


granted , many might get inlined, but i just bypass them and do the assignments directly, IE:


And wasn't it you who, not long ago, had to make a change which required searching all over your code base because the thing you were changing was accessed from so many places?
 

think about it, if you were going to code breakout or galaga, or space invaders or missile command or pong right quick and dirty, you wouldn't break out unity, and start creating CES systems and whatnot - its overkill: "using a tank to squash an ant" as they used to say in design sciences. you'd load up a couple bitmaps, and declare a few variables, and go for it. especially if you'd done it dozens of times before. for me, writing these sims (other than caveman - its a whole different sort of beast) is kind of like that.


Just because I wouldn't use a CES system doesn't mean I'd go around hardcoding things into arrays in data segments either; no, more likely I would grab existing code to read config files... hell, with projects as trivial as that I'd probably just grab Lua and use that for the logic and just plug it into some C++ framework.

But I'm assuming your 'sims' are more detailed than the trivial examples you gave so that is again not a sane comparison nor one which is representative of the scale of things.

The point is, regardless of the scale of things, I would spare some brain time to do it properly, because you don't know where things are going to go; spinning a few brain cells to correctly split up code, rather than vomiting out some monstrosity, is the way I will do things.

By all means continue developing and coding as you've done forever... it's no skin off my nose, and when I do see your code I get a wonderful amusement out of it... but at the same time don't pretend it is in any way, shape, or form 'good practice' to do things your way, because it simply isn't.


#5201329 do most games do a lot of dynamic memory allocation?

Posted by phantom on 02 January 2015 - 10:43 AM

my same exact question.  from the video that inspired the post, it seems as though newing and deleting things like entities, dropped objects, and projectiles was something that games in general were commonly doing on the fly as needed. which struck me as inefficient and error prone.


Except in most cases the new/delete is going via a pre-allocated heap, so the cost of creating and deleting isn't that great; it depends on the nature of the thing being created/destroyed, and most of the overhead is going to be in object deinit, where its child objects are being cleaned up - which, unless you are leaving your objects in some kind of zombie state, you should be doing anyway, so the cost is practically identical.

As mentioned, it depends on the things being allocated and how, of course; big game entities are infrequently allocated and de-allocated, so a sensible clean up via new/delete (placement, in the C++ case) isn't going to cost you much in overall run time. Transient objects, such as structures which only exist for a frame, are likely to be simple and will more than likely use a simpler allocation scheme ('stack allocation' in the sense that you have a scratch buffer objects are placement-new'd into for ease of construction but never deleted from; the allocation pointer is just reset to the start each frame - useful for things like rendering when you need to build temp data structures).
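A rough sketch of that per-frame scratch scheme (the class name and helpers are illustrative, alignment handling is kept to the bare minimum and assumes power-of-two alignments):

#include <cstddef>
#include <new>
#include <utility>

// Hypothetical per-frame scratch allocator: objects are placement-new'd in,
// nothing is individually freed, and the whole thing is reset once per frame.
class FrameAllocator
{
public:
    explicit FrameAllocator(std::size_t bytes)
        : m_buffer(new std::byte[bytes]), m_size(bytes) {}
    ~FrameAllocator() { delete[] m_buffer; }

    void* allocate(std::size_t bytes, std::size_t alignment = alignof(std::max_align_t))
    {
        // bump the offset up to the requested (power of two) alignment
        std::size_t aligned = (m_offset + alignment - 1) & ~(alignment - 1);
        if (aligned + bytes > m_size)
            return nullptr;              // out of scratch space this frame
        m_offset = aligned + bytes;
        return m_buffer + aligned;
    }

    template <typename T, typename... Args>
    T* create(Args&&... args)            // placement-new helper; destructors are never run
    {
        void* p = allocate(sizeof(T), alignof(T));
        return p ? new (p) T(std::forward<Args>(args)...) : nullptr;
    }

    void reset() { m_offset = 0; }       // call once at the start (or end) of each frame

private:
    std::byte*  m_buffer;
    std::size_t m_size;
    std::size_t m_offset = 0;
};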

games taking over memory management when asset size > ram makes much more sense. its what any good software engineer would do.

 
This has nothing to do with 'asset size > ram'; this is all about keeping things clean. If I don't need that memory allocated then why hang on to it? Why effectively waste it? If you know your front end will need 4Meg for something, but only in the front end, then you might as well share that pool with the game and overlap the memory usage, allowing the system (OS) to make better use of the physical ram for other things. If you had a 4Meg chunk in the data segment then that 4Meg is now gone forever, and while it might not seem like much, it's memory the OS could otherwise make use of.

If you are allocating a scratch buffer every frame of the same size then just allocate once, at start up, and just reuse it.

 
i'm saying - that's what i would do.

 
So why does the allocation speed matter? Most allocations are so infrequent as to not make a difference and frequent ones would be spotted and replaced pretty early on (or not done at all if the developers have any amount of experience).

well, fortunately for me, its not that hard to determine required data structure sizes at compile time. caveman has taken about 2-3 man-years to make. MAXPLAYERS, MAXTARGETS, and MAXPROJECILES have never changed since first being defined.  occasionally i'll up MAXTEXTURES from 300 to 400 etc, as the number of assets grows. but that's about it. and i could have simply declared all those really big, then just right-sized them once before release.


And that's a very small subset of the buffers which might exist in a game, and a very specific example where you apparently don't care about wastage or good software design (hint: global things are bad software design, but having seen your code in the past this doesn't surprise me in the least...), but as a general case solution it does not work and it does not scale. Fixed compile time buffers are the devil; the flexibility gained from just pulling from the heap for a value pulled from a config file far outweighs anything else in the general case.


#5201327 do most games do a lot of dynamic memory allocation?

Posted by phantom on 02 January 2015 - 10:20 AM

quite true, one extra de-reference per access, i believe.  hardly anything to write home about, unless you do a couple million or billion per frame unnecessarily.


I'm intrigued as to where you have pulled this 'extra dereference' from?
A pointer to a chunk of dynamic memory and a pointer to a chunk of memory which came in with the exe are going to be the same...


#5201134 do most games do a lot of dynamic memory allocation?

Posted by phantom on 01 January 2015 - 09:12 AM

Yes, a single allocation every frame can soon add up but my question would be; why are you constantly reallocating anyway?

If you are allocating a scratch buffer every frame of the same size then just allocate once, at start up, and just reuse it.

At which point you might say 'well, why not just allocate a static array and be done with it?' and sure, you could (and for some things this is sane because it is a hard limit which you won't want to change for various reasons), but what happens when that buffer is suddenly too small? You now have to rebuild to test and find a sane number; if you had it in a data file and dynamically allocated it, you'd fiddle with one value, re-run the already-compiled exe and see if it works. Job done.
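As a trivial illustration of the difference (the file name and its one-number format are made up; a real project would use whatever config/data system it already has):

#include <cstddef>
#include <fstream>
#include <vector>

struct Particle { float x, y; };

int main()
{
    // "settings.txt" is a stand-in for a real config file.
    std::size_t maxParticles = 4096;               // fallback if the file is missing
    std::ifstream in("settings.txt");
    if (in)
        in >> maxParticles;

    std::vector<Particle> particles(maxParticles); // one allocation, made at start up

    // ... run the game; edit settings.txt and relaunch to resize, no rebuild required.
}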

There are reasons to allocate statically sized buffers, generally architectural ones such as, on the PS3, references to SPUs or on the 360 a buffer to represent GPU constant slots, but for the sake of one allocation, which can be made at start up, the flexibility far outweighs any perceived advantage of trying to guess the amounts in advance.

As for games, on the ones I've worked on, during the normal runtime phase memory allocation is very light; allocations might get made from existing pools (more than likely by allocating chunks in order and then just resetting the pointer at the start/end of the frame so it doesn't persist). Dynamic allocations tend to only happen when data is streamed in, at which point you have to pay the cost (because it is dynamic), and you can control that so you only service so many requests per frame. (and to be fair, even that can come from pre-allocated pools so you don't have to touch the system allocator and instead can use one of the numerous faster allocators out there)


#5200787 Multi-threading for performance gains

Posted by phantom on 30 December 2014 - 07:39 AM

I would argue that in a game, where you have complete control of things, that isn't a great use either; you'll either over-subscribe the cores, meaning you run the risk of important tasks getting starved out or at least delayed, which hurts the update rate (and wastes resources as the OS has to schedule them in, with the resulting overhead of that), OR you'll need to under-use them, meaning that at any given time resources are sitting idle if they didn't have work to do.

If you've got work to do which isn't quick and needs to be run over a number of frames, then write the code in such a way that it can do that: have it use a fixed amount of time per frame to do some work and allow it to continue the next time it is called.

Just because results aren't due in a frame doesn't mean the work can't be broken down to make better use of the cores and control the dispatch of the work.
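A rough sketch of that 'fixed amount of time per frame' approach (the class name, the int payload and the 2ms budget are all arbitrary):

#include <chrono>
#include <cstddef>
#include <vector>

// Hypothetical long-running job: process a large dataset a little at a time,
// resuming where it left off on the next call.
class IncrementalJob
{
public:
    explicit IncrementalJob(std::vector<int> data) : m_data(std::move(data)) {}

    // Called once per frame; does at most 'budget' worth of work, then yields.
    bool update(std::chrono::microseconds budget = std::chrono::microseconds(2000))
    {
        const auto start = std::chrono::steady_clock::now();
        while (m_next < m_data.size())
        {
            processItem(m_data[m_next++]);                    // one unit of work
            if (std::chrono::steady_clock::now() - start >= budget)
                return false;                                 // out of time, continue next frame
        }
        return true;                                          // finished
    }

private:
    void processItem(int&) { /* expensive per-item work goes here */ }

    std::vector<int> m_data;
    std::size_t      m_next = 0;
};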


#5200700 Multi-threading for performance gains

Posted by phantom on 29 December 2014 - 07:14 PM

I'm not sure I buy the latency argument, not when your suggested solution is 'push messages to another thread and have that do some work to kick the read and wait on the result'; the latency difference is unlikely to be all that critical in a game situation anyway when dealing with disk IO which is already pretty high latency.

The solution I came up with, and we are talking a good 3 or 4 years back now so I don't have the code to hand (largely because I'm away from my PC), involved the ReadFile family of functions (probably ReadFileEx, but I'm not 100% on that).

The code involved was very short - in the order of 10 or 20 lines, maybe - and if memory serves was a case of:
- TTB Task requests a chunk of memory to load into, uses ReadFileEx to start the async read and records the handle.
- Handles were collected up
- IO check task was used to check the state of the handles (WaitForMultipleObjects family, immediate time out)
- For all completed file handles, push tasks into the completed task queue for execution

As it was a proof-of-concept it basically only looped on point 3 until all the files were done.
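Not the original code, but a sketch of the same shape using the standard Win32 overlapped-read calls (ReadFile with an OVERLAPPED and an event handle, polled with a zero timeout so nothing ever blocks; it polls handles one at a time here for brevity rather than via WaitForMultipleObjects):

#include <windows.h>
#include <cstdio>
#include <vector>

struct PendingRead
{
    HANDLE            file = INVALID_HANDLE_VALUE;
    OVERLAPPED        ov   = {};
    std::vector<char> buffer;
};

// Kick off an async read of a whole file; returns false on immediate failure.
bool startRead(const char* path, PendingRead& out)
{
    out.file = CreateFileA(path, GENERIC_READ, FILE_SHARE_READ, nullptr,
                           OPEN_EXISTING, FILE_FLAG_OVERLAPPED, nullptr);
    if (out.file == INVALID_HANDLE_VALUE)
        return false;

    out.buffer.resize(GetFileSize(out.file, nullptr));
    out.ov.hEvent = CreateEventA(nullptr, TRUE, FALSE, nullptr);   // signalled when the read completes

    return ReadFile(out.file, out.buffer.data(), (DWORD)out.buffer.size(), nullptr, &out.ov)
        || GetLastError() == ERROR_IO_PENDING;
}

// Per-frame poll: zero timeout, so this never blocks the thread it runs on.
void pollReads(std::vector<PendingRead>& reads)
{
    for (PendingRead& r : reads)
    {
        if (WaitForSingleObject(r.ov.hEvent, 0) != WAIT_OBJECT_0)
            continue;                                              // still in flight, check next frame

        DWORD bytesRead = 0;
        GetOverlappedResult(r.file, &r.ov, &bytesRead, FALSE);
        std::printf("loaded %lu bytes\n", (unsigned long)bytesRead);
        // ... hand the buffer off / push a 'file ready' task here, then close the handles ...
    }
}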

The code really couldn't have been any simpler if I had tried; the most complex bit was setting up the file IO, as I seem to recall the docs being a little less than clear at the time - heck, that was probably the biggest segment of code in the whole test.

I'd be VERY surprised if you could write something with a lower bug count than that (the only likely bugs are going to be post-hand-off in the decoding stage or whatever; the memory was owned by a data structure which got passed about, so it wasn't like it leaked or anything - it was a direct in-place load) and I'd be really surprised if the latency was anything to write home about, considering that I've got one task on one thread polling at most once a frame.

If I remember I'll try to dig the code up when I get back home, although that's likely only going to happen if this thread is still on the front page, as I'm going to be destroying my brain probably 3 more times before I get back to my PC where the code might still live...


#5200473 Is Unity good for learning?

Posted by phantom on 28 December 2014 - 06:58 PM

UDK isn't really a Thing any more; it is no longer being updated.

Instead if you are interested in getting hold of the Unreal Engine to use then you should be looking at UE4 at http://www.unrealengine.com.


#5199921 When to set animations?

Posted by phantom on 24 December 2014 - 08:47 PM

FPS movement would need is_attacking, is_blocking, walk/run/sprint, is_jumping, direction of movement, is_crouching, etc. easier to just fire off the correct ani when you know what ani to play.


Welcome to the world of N-way blending and animation channels.

With a good animation system there should be no conflict and no problems; want to run while drawing a knife? Apply both animations and have your animation system take care of how they are applied; running would apply to legs and arms unless another animation is being applied to the arms which overrides it; in this case the draw knife animation.

While this isn't an easy problem to solve, it has largely been dealt with already in, I'd have said, all major engines by now.
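As a very rough illustration of the 'arms override legs' idea (none of these names come from a real engine, and a proper implementation would slerp quaternions and normalise rather than straight lerp), a channel boils down to a per-bone weighted blend:

#include <cstddef>
#include <vector>

struct BoneTransform { float rotation[4]; float translation[3]; }; // deliberately simplified

// Hypothetical: blend an overlay pose (e.g. 'draw knife') onto a base pose
// (e.g. 'run') using a per-bone mask; 1.0 on the arm bones, 0.0 everywhere else.
std::vector<BoneTransform> blendPoses(const std::vector<BoneTransform>& base,
                                      const std::vector<BoneTransform>& overlay,
                                      const std::vector<float>&         mask)
{
    std::vector<BoneTransform> result(base.size());
    for (std::size_t bone = 0; bone < base.size(); ++bone)
    {
        const float w = mask[bone];
        for (int i = 0; i < 4; ++i)
            result[bone].rotation[i] = base[bone].rotation[i] * (1.0f - w)
                                     + overlay[bone].rotation[i] * w;   // real code: slerp + normalise
        for (int i = 0; i < 3; ++i)
            result[bone].translation[i] = base[bone].translation[i] * (1.0f - w)
                                        + overlay[bone].translation[i] * w;
    }
    return result;
}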


#5199702 Is this correct sRGB conversion?

Posted by phantom on 23 December 2014 - 07:41 AM

... all offscreen buffers as non-sRGB ...
 
... all internal (offscreen buffer) blending will be done linearly...


Unless those offscreen buffers are high precision (R10G10B10A2, float16 or float32), you really don't want to do that; anything which contains diffuse colour information needs to be sRGB encoded when going to RGB8 formats or you'll lose the high end information. (You could do it yourself with a crazy pow(2.2)-style thing, but the hardware will do it for free and correctly.)

Unless you are on DX9 hardware you don't have to worry about blending either; since DX10-class hardware, the blending units all blend in linear space correctly, having first converted from sRGB if needed.

So, the practical upshot is, unless you have a higher precision colour buffer just set it to sRGB and let the hardware Do The Right Thing these days.
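In D3D11 terms, for example, the only change for an 8-bit offscreen target is the format it gets created with (the helper function here is made up):

#include <d3d11.h>

// Hypothetical helper: create an 8-bit-per-channel offscreen render target.
// Using the _SRGB variant of the format is what lets the hardware do the
// linear <-> sRGB conversion on write/read and blend in linear space.
HRESULT createOffscreenTarget(ID3D11Device* device, UINT width, UINT height,
                              ID3D11Texture2D** outTex)
{
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width            = width;
    desc.Height           = height;
    desc.MipLevels        = 1;
    desc.ArraySize        = 1;
    desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM_SRGB; // rather than DXGI_FORMAT_R8G8B8A8_UNORM
    desc.SampleDesc.Count = 1;
    desc.Usage            = D3D11_USAGE_DEFAULT;
    desc.BindFlags        = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

    return device->CreateTexture2D(&desc, nullptr, outTex);
}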


#5198489 The Game Environment: Not just Graphics

Posted by phantom on 16 December 2014 - 03:47 AM

I think humans are mostly visual and that. on average, our eyesight is stronger than our hearing. </speculation>


I think you'll find that's the other way around; our eyesight is pretty poor and limited - good for tracking animals to hunt and for jumping between trees, pretty poor otherwise. (Fun fact: when your eyes move, your brain stops processing visual data until they stabilise again - so when you look left or right as a car driver, all you actually saw was what was in front of you when you started and what you were looking at at the end; your brain made the rest up.)

In fact you'll notice audio drops/glitches much more easily than you'll notice visual ones, simply because your brain isn't doing as much compensation.
(As for dogs: a dog's world is very smell-directed rather than visually based, hence the rabbit problem.)

Graphics are one of those things which are just easier to show off; you get screenshots and wonderful flashy trailers which look good on a 1080p screen and give you lots of Hollywood whizz-bang, whereas the audio setup for most people tends to suck - $10 speakers attached to an onboard sound system with the same fidelity as an ant blowing into a trumpet.

That said, audio tends to get a fair amount of processing time dedicated to it; in OFP:Red River for every graphics frame we were rendering on the console (30fps) you would see at least 5 audio frames of processing happening and eating pretty much a whole core on the X360 so audio can get a pretty good chunk of resources. (The audio guys also put a lot of work into the sound design, the J-DAM explosion sound was a thing of beauty and worked well with the particle system around it.)

As others have said, the audio design in many games is good, you just don't get it thrown in your face because it tends to be more subtle.
My favourite bit which springs to mind is the background computer chatter in Dead Space; sets the tone really well and when you focus in on it then it is damn creepy... Vampire : Bloodlines also had some pretty cool sound design, in fact I hold up the first mission proper in a haunted hotel as one of the best bits of game design I've encountered as the graphics, sound design and pacing are just spot on.


#5198277 VERY weird error. Structs in std::vector being updated more than once

Posted by phantom on 15 December 2014 - 04:12 AM

Another, slightly more major, idea is to use the remove/erase idiom to deal with the removal and std::for_each to deal with the update:
 
particleList.erase(std::remove_if(std::begin(particleList), std::end(particleList), [](particleType const &p) { return p.lifetime < 0.0f; }), std::end(particleList));
std::for_each(std::begin(particleList), std::end(particleList), [](particleType &p) { p.y += 0.1f; });

Lambdas can, of course, be replaced with other function object implementations.


#5197193 Particles: Batching VS instancing

Posted by phantom on 09 December 2014 - 10:22 AM

You don't have to increase your data; you only need one XY value per particle.

In the vertex shader you take the vertex ID as an input (gl_VertexID in GLSL) and use that to figure out which particle you are (int particleID = gl_VertexID / 4, with 4 vertices per particle quad), then use that to index into the UBO to get the data.

#version 330 core

layout(std140) uniform ParticleData
{
    vec2 particlePositions[1024]; // one XY per particle; size is illustrative
};

void main()
{
    int particleID = gl_VertexID / 4; // 4 vertices per particle quad
    vec2 pos = particlePositions[particleID];

    // do things...
}
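On the CPU side the matching draw (a sketch, assuming an index buffer with 6 indices per 4-vertex quad is already bound) is just a single non-instanced call:

glDrawElements(GL_TRIANGLES, particleCount * 6, GL_UNSIGNED_SHORT, nullptr);

With indexed drawing gl_VertexID is the fetched index, so particleID = gl_VertexID / 4 still holds.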



#5195659 OpenGL 5 - Release?

Posted by phantom on 01 December 2014 - 03:23 AM

The reason for a new API, a ground-up 'ditch all the shite' version, can be summed up quite simply:

Currently OpenGL, OpenGL|ES and D3D11 are the only 3 major APIs in the wild which do not support 'going wide' on their command buffer building or do not see any speed up from doing so. (3 of 9 it should be added.)

Next year OpenGL and OpenGL|ES will be the only APIs not to support this.

CPU archs are wide.
Graphics setup is naturally wide.

So, from just an ease of writing and compatibility mindset OpenGL will require a bunch of hoop jumping just to use sanely; maintaining this going forward is not helpful.


#5195191 OpenGL 5 - Release?

Posted by phantom on 28 November 2014 - 08:24 AM

Wow, for a moderator you are a real ******* (I know I am risking getting banned here, who cares with such a community).


Yep, and I'm ok with it...
(and judging a whole community on one person? really? drama much?)

Also, that reply wasn't directed just at you; however, I missed an 's', which is why you could take it as such.

Either way I stand by it because frankly I'm bored of the constant 'MS are evil/bad/whatever' narrative which continues to run rampant (in general, not just with GL) and if someone is willing to buy into that without either doing the research or asking a question before putting forward a strong opinion then, well, you'll get such a reply from me.

And as you are apparently new around here: there is an unofficial agreement in place that if a moderator takes part in a thread they will not moderate in that thread, certainly not for things involving them, which is something I've stuck by - if you have a problem with that post, or indeed any post, feel free to report it; I won't stop you or act on it, and if anyone else in the mod team has a problem with my conduct then they will say something and, if needs be, I'll give up the position.



