Member Since 20 Mar 2012
Offline Last Active May 11 2016 05:21 AM

#4977578 What improves better memory performance?

Posted by lawnjelly on 07 September 2012 - 05:55 AM

Have to say I'm with Hodgman on this one.

Using OS calls for allocation and deallocation at runtime in a game is one of the cardinal sins, but GC too? Urggg!! Do you know what these calls do behind the scenes? I wasn't even going to mention it.

Quote: "If you do look into memory pools, stuff can get quite complicated, so sometimes it might be nice to leave it to the GC platform's memory pool."

Well it would 'be nice' to be lazy and leave everything to a GC, but unfortunately there are reasons why people don't tend to use this kind of thing for time dependent stuff. I understand looking after memory is 'an extra bother' and 'complicated', but it's necessary if you want to make fast, stable code. I've also had to spend weeks sorting out problems caused by 'programmers' who thought memory management was 'a bother', which delayed shipping products and left them bug-ridden messes.

#4977562 What improves better memory performance?

Posted by lawnjelly on 07 September 2012 - 04:54 AM

Fixed-size memory pools FTW. They are great.

The downsides are that you (typically) have to know in advance the maximum number of objects you will need in the worst case. In addition, the memory preallocated for the pool is not available for other uses.

Upsides are that they are blazingly fast, with constant-time allocation and deallocation; there is no fragmentation; and provided you choose the maximums correctly, your program CANNOT crash due to an allocation failure.
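As a concrete illustration, a minimal fixed-size pool might look like the sketch below. The class name and layout are purely my own; the point is the free list threaded through the unused slots, which gives O(1) allocation and free with no OS calls after construction:

```cpp
#include <cassert>
#include <cstddef>
#include <new>

// Minimal fixed-size pool sketch: a free list threaded through unused slots.
template <typename T, std::size_t Capacity>
class FixedPool {
public:
    FixedPool() {
        // Chain every slot onto the free list up front.
        for (std::size_t i = 0; i + 1 < Capacity; ++i)
            m_slots[i].next = &m_slots[i + 1];
        m_slots[Capacity - 1].next = nullptr;
        m_free = &m_slots[0];
    }
    T* alloc() {
        if (!m_free) return nullptr;      // pool exhausted: fail loudly, not slowly
        Slot* s = m_free;
        m_free = s->next;
        return new (s->storage) T();      // placement-new into the recycled slot
    }
    void free(T* p) {
        p->~T();
        Slot* s = reinterpret_cast<Slot*>(p);
        s->next = m_free;                 // push the slot back on the free list
        m_free = s;
    }
private:
    union Slot {                          // a slot is either a free-list link or an object
        Slot* next;
        alignas(T) unsigned char storage[sizeof(T)];
    };
    Slot m_slots[Capacity];
    Slot* m_free;
};
```

Because a freed slot goes straight back on the free list, both operations are a couple of pointer moves, which is where the constant-time claim comes from.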

You can also implement your own heap with buckets but it's not something I'm a fan of.

You can also (in C++) override new and delete to keep track of your allocations. You can use different heaps / counters for different modules, and budget your memory between them. Very useful on consoles and limited-memory devices. This can also report any memory leaks on closing, and which module and file they are from.
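A sketch of the counting idea, using a hypothetical per-module counter and class-scoped operators (a real system would also record file and line via a macro, and report outstanding allocations at shutdown):

```cpp
#include <cassert>
#include <cstdlib>
#include <new>

// Illustrative per-module allocation tracking; names are made up for this sketch.
struct AllocCounter {
    std::size_t current = 0;   // bytes currently outstanding in this module
    std::size_t peak = 0;      // high-water mark, useful for budgeting
};

inline AllocCounter g_gameplayCounter;   // one counter per module / budget

struct GameplayObject {
    static void* operator new(std::size_t size) {
        g_gameplayCounter.current += size;
        if (g_gameplayCounter.current > g_gameplayCounter.peak)
            g_gameplayCounter.peak = g_gameplayCounter.current;
        void* p = std::malloc(size);
        if (!p) throw std::bad_alloc();
        return p;
    }
    // The sized form of operator delete receives the object size automatically,
    // so no per-allocation header is needed for the bookkeeping.
    static void operator delete(void* ptr, std::size_t size) {
        g_gameplayCounter.current -= size;
        std::free(ptr);
    }
    int data[4] = {};
};
```

At shutdown, a non-zero `current` on any counter flags a leak in that module.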

Other tricks include things like: when you load in a game level, load it as a binary file laid out in usable form in memory, then fix up the pointers within it from offsets within the file to actual locations in memory. This gives you super fast loading, no fragmentation, and cache coherency. And of course level size etc. is one of the biggest 'changeables' within a game, so if you can isolate this down to one allocation, you shouldn't really need to do much else in the way of allocation. And even for this you can just pre-allocate a big chunk for the biggest level size; that's what I've tended to do on console-like environments.
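The fixup step could be sketched like this, with a purely hypothetical file layout: the file stores byte offsets from the start of the blob, and after loading the blob into one allocation, each offset is converted in place into a real pointer:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical on-disk structure: nameOffset is an offset before fixup,
// and holds a pointer afterwards.
struct LevelHeader {
    std::uint64_t nameOffset;
    const char* name() const { return reinterpret_cast<const char*>(nameOffset); }
};

// Convert one stored offset into a pointer into the loaded blob.
inline void fixupOffset(std::uint64_t& offset, void* blobStart) {
    if (offset)   // treat 0 as a null reference
        offset = reinterpret_cast<std::uint64_t>(
            static_cast<unsigned char*>(blobStart) + offset);
}
```

Since everything lives inside the one blob, unloading the level is a single free, which is the "no fragmentation" part of the trick.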

Of course this is for game code, where stability and speed are paramount. For tools and apps I'll be a lot more lax, and use dynamic allocation etc (sometimes I don't even override new and delete, when I'm feeling like living life close to the edge).

It's also worth mentioning that there are some allocations you can't avoid, depending on the OS - API allocations such as directx and opengl. You can of course use pooling systems with your API resources too. In addition on consoles you can often completely avoid this problem by using a resource directly from memory as they may be UMA or give you more control over memory.

#4977535 Stuck in late development

Posted by lawnjelly on 07 September 2012 - 02:41 AM

Many (most?) games have elaborate debugging systems / tools built for them that you never see as a player. There are certain things you can debug with an IDE debugger, but other things are incredibly slow and tedious to debug this way (often AI) and can be easier to debug with a custom system, often involving lots of logging, in-game GUIs, etc.

It can be a bit of an effort to pull your finger out and initially write these tools, but once you have them you realise you couldn't live without them.

#4965205 Collision detection / response for enclosed 3d levels

Posted by lawnjelly on 01 August 2012 - 07:57 AM

Navigation meshes are extremely cool and fast, and well worth it for AI creatures (doing real physics on a whole bunch of AI characters uses up a lot of CPU and opens up a can of worms in giving 'solid' behaviour). They *can* also be used for your main character physics, provided you are willing to put up with a lot of limitations. This kind of approach is very valid for e.g. a mobile phone, where it can save you a lot of CPU.

But you have to have a playstyle that will work with it, otherwise you end up needing to implement a full collision detection / physics system in addition to the navmesh. I think the projectiles could be the problem: you are going to end up needing either a full collision mesh for the level, or at least a simplified version of it for the projectile collisions. At which point you have to ask whether it would be better to have full physics for the main character and be done with it.

An alternative is to tweak your game design so you don't need the full collision mesh. Perhaps instead of using projectiles, only allow your characters melee combat.

#4958856 Fixed time step input and smooth rendering of player

Posted by lawnjelly on 13 July 2012 - 12:19 PM


Essentially if you are at time 3.4 (in ticks) then you should have simulated physics ticks 0, 1, 2 and 3. Keep a record of the current physics position and the previous physics position.

Then to render, render exactly 1 tick behind: at time 3.4 you would render at tick 2.4, interpolating a fraction of 0.4 between the tick 2 and tick 3 positions.
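That interpolation step can be sketched as follows (the State struct is illustrative; a real one would hold full transforms):

```cpp
#include <cassert>
#include <cmath>

// One axis of a physics state; real code would interpolate positions,
// and typically slerp orientations.
struct State { float x; };

// Blend the two most recent physics states by the sub-tick remainder.
// E.g. at clock time 3.4 ticks, pass the tick-2 and tick-3 states with
// fraction 0.4 to get the tick-2.4 render position.
inline State renderState(const State& prev, const State& curr, float fraction) {
    State s;
    s.x = prev.x + (curr.x - prev.x) * fraction;
    return s;
}
```

Rendering one tick behind guarantees both endpoints of the interpolation have already been simulated, so the renderer never has to extrapolate.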

#4958699 Thinking of Rolling My own GUI toolkit.

Posted by lawnjelly on 13 July 2012 - 02:25 AM

I agree with felix, there are some quite reasonable 3rd party GUIs so consider using these first.

Off the top of my head: CeGUI, MyGUI, GWEN, then bigger ones like Qt, GTK.
As well as these there are often proprietary solutions for each platform, if you are not multiplatform.

If you want to have a go at rolling your own, it's quite doable, depending on how good your kung fu is, and on how much or how little you want it to do. Each widget needs its own code, so if, for example, you don't need a treeview widget, don't write one till you need it.

I found text rendering to be as big (or bigger) an undertaking as writing the rest of the system. Consider using something like the freetype library to do this for you (I believe many of the other GUIs use freetype).

If you choose to have a go at text rendering, this may help: I wrote my own subpixel text renderer with layout (justification etc.), but used something like BMFont to pre-render fonts at the required sizes. Even so it still took a good week to get working to a decent standard, and getting subpixel rendering to look 'good' is not an easy task. The handling of subpixel spacing, kerning, and using 2 passes for justification makes it slightly more involved than you'd think; there was a lot of debugging of layout problems. And mine (currently) can't handle images interspersed with the text as in HTML.

Then for the actual GUI itself I've probably spent about a week of coding on it so far (on and off), and mine is pretty basic. I know it's probably heresy to say it, but I found it pretty easy, though maybe that's because I had a good idea from the outset of what would be involved and how to do it.

If you use inheritance, once you have the basic widgets / functionality, it becomes easier to build new more complex widgets by deriving from and combining the basic widgets.
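A toy sketch of that inheritance idea, with hypothetical widget names (draw() returns a description string here instead of actually rendering, purely so the behaviour is visible):

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// Base widget: everything the GUI manages derives from this.
struct Widget {
    virtual ~Widget() = default;
    virtual std::string draw() const = 0;
};

struct Label : Widget {
    std::string text;
    explicit Label(std::string t) : text(std::move(t)) {}
    std::string draw() const override { return "[label:" + text + "]"; }
};

// A more complex widget built by deriving from a basic one.
struct Button : Label {
    explicit Button(std::string t) : Label(std::move(t)) {}
    std::string draw() const override { return "[button:" + text + "]"; }
};

// A composite widget built by combining basic widgets: drawing the
// panel just draws all of its children in order.
struct Panel : Widget {
    std::vector<std::unique_ptr<Widget>> children;
    std::string draw() const override {
        std::string out;
        for (const auto& c : children) out += c->draw();
        return out;
    }
};
```

Once the base interface exists, a dialog box or toolbar is just another Panel with particular children, which is where the "complex widgets from basic widgets" payoff comes from.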

#4958689 Converting audio files into numerical outputs to influence game spawns

Posted by lawnjelly on 13 July 2012 - 01:39 AM

I was going to suggest using Ogg to avoid any licensing complications from MP3, but if you want your users to load their own MP3s you'll need a solution. Easiest may be to see if Android / Unity has an inbuilt, fully licensed library for converting MP3 to raw samples.

Essentially you want to use a third party library to convert from MP3 / Ogg to raw data, probably 16-bit stereo integers or stereo floats. Then, as said, you could use something like an FFT (either a library or some source code you find), or you could simply do something like summing every 'window' of 1000 samples, or finding the peak over each window, and use this loudness to determine what happens in the game.
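The peak-per-window approach might be sketched like this (a simple illustration of the idea, not code from the original post), scanning 16-bit samples in fixed windows and recording the peak of each as a crude loudness curve:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Return the peak absolute sample value of each fixed-size window.
// The resulting loudness curve is cheap to compute (no FFT) and can
// drive spawns, obstacles, etc. in the game.
inline std::vector<int> windowPeaks(const std::vector<std::int16_t>& samples,
                                    std::size_t window) {
    std::vector<int> peaks;
    for (std::size_t start = 0; start < samples.size(); start += window) {
        int peak = 0;
        std::size_t end = std::min(samples.size(), start + window);
        for (std::size_t i = start; i < end; ++i) {
            int v = samples[i] < 0 ? -int(samples[i]) : int(samples[i]);
            if (v > peak) peak = v;
        }
        peaks.push_back(peak);
    }
    return peaks;
}
```

One pass over the samples, no transforms, which matters on the mobile targets mentioned below.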

The more info you want to get out of the audio data, the more CPU it's going to use, which may be an issue targeting mobile phones. Things like extracting BPM and pitch are not trivial, but I don't think you really need to do that.

There have been games in the past that do exactly this; I have a hazy memory of a spaceship shooter where the tunnel you are flying through is determined by the music playing, so it's very doable.

#4955698 Efficient GUI rendering

Posted by lawnjelly on 04 July 2012 - 01:09 PM

Quote: "GUI rendering can be slow, because text rendering can be slow if there's a lot of text. So you can look into text rendering optimization.

I know you didn't ask for comments on the reinventing thing, but it's pretty strange that you are reinventing the look of the standard win32 GUI. If you want to release stuff with this GUI, it will kick your ass. And it will kick the users' ass. A GUI that looks like the well-known Windows GUI but doesn't work like it is a major user anti-experience."

I agree about the user experience; there are some well executed and user-friendly third party GUIs, and some that just leave you scratching your head.

In this case I'd already written the basics of the GUI for a lightweight in-game menu system, so I thought why not add regular application menus and try running with it for a simple app. It's all good experience. I'm just planning on using it for a simple internal 3d model editor for now, nothing fancy.

And of course not being tied to win32 leaves you more options, me and a couple of friends have just released an app on iOS that used a simplified version of this.

#4955665 Efficient GUI rendering

Posted by lawnjelly on 04 July 2012 - 10:47 AM

This has probably come up before here, so feel free to link me straight to a good source of info .. but ..

Anyway, for various reasons I have written a little GUI system for a game I'm working on, and decided to have a go at fleshing it out to make it useful for writing tools or little apps, possibly cross platform. Oh gawd, yet another reinvention of the wheel, I hear you cringe. Yes, well, can't say much on that...

Here's a screeny showing I have it working OK; it didn't take too long:

[screenshot of the GUI in action]

For the game I was using directx to do the actual GUI rendering, but I've swapped over to opengl for general use, and it's a long time since I did much opengl, so I'm very out of date with it.

So my question is probably to people who have done this before, what did you find the most efficient way of rendering the GUI objects? I can see there's a few trade offs involved. I was until recently just doing everything in software, writing to one big software surface, then locking a big viewport size texture and copying the RGB data across. Not particularly elegant or fast, but simple and it works.

That is until I realised I wanted to have some opengl 3d viewports displaying 3d models inside the app, with GUI elements possibly overlayed (such as menus, or dialog boxes). And to future proof things it would be nice to not lock the entire viewport every time there is a little change, so it works at reasonable speed.

So my guesses for some alternative methods are:

1) As each 'widget' is changed, I render it to the big software texture, and lock and upload just that part of the texture (using glTexSubImage2D?). This isn't as simple as it could be, though, because it appears the source data can't have a 'pitch' to jump across the x coord on each line (if the viewport is much wider than the 'dirty rectangle'), so I'd have to first copy the dirty rectangle from the big software texture into a temporary smaller one before uploading to opengl.

I could also keep a list of dirty rectangles that need uploading to opengl to avoid uploading the same area more than once in a frame.

2) Same as above, but keep a separate software surface for each 'widget'. That way it can be uploaded without fiddling. However it makes changing the size of widgets potentially more problematic (as the software surface size will change), and it would be nicer to avoid all those unnecessary memory allocations (although I could use memory pools, I spose).

3) Have a separate opengl texture for each widget. Probably faster for rendering but pretty ugly in terms of memory allocation / deletion.

4) Try and render everything directly on the 3d card, without using software textures.
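For what it's worth, the staging copy needed by option 1 might look like the sketch below (purely illustrative names and layout). Note that on desktop opengl, glPixelStorei(GL_UNPACK_ROW_LENGTH, surfaceWidth) lets glTexSubImage2D read straight from a pitched source, so this extra copy is mainly needed on older GLES-class targets that lack that pixel-store parameter:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Copy a dirty rectangle out of a pitched 32-bit RGBA surface into a tightly
// packed buffer suitable for handing to glTexSubImage2D row by row.
inline std::vector<std::uint32_t> copyDirtyRect(
    const std::uint32_t* surface, int surfaceWidth,
    int rectX, int rectY, int rectW, int rectH) {
    std::vector<std::uint32_t> tight(std::size_t(rectW) * rectH);
    for (int row = 0; row < rectH; ++row)
        std::memcpy(&tight[std::size_t(row) * rectW],
                    &surface[std::size_t(rectY + row) * surfaceWidth + rectX],
                    std::size_t(rectW) * sizeof(std::uint32_t));
    return tight;
}
```

One memcpy per row keeps the cost proportional to the dirty area rather than the whole viewport, which is the point of tracking dirty rectangles in the first place.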

So, anyone know is there a standard good way of doing this? Anyone know what CeGUI, MyGUI etc do?
Cheers