lawnjelly
Member Since 20 Mar 2012
Offline. Last Active Aug 24 2016 07:11 AM
God of programming. Except the hard stuff like 3d geometry. And maths. All these bugs creep in. Goddamned floating point. And other hard stuff.
43 years old, going a bit senile. Worked in game dev, now retired.
Basic - Age 9 Spectrum
6502 Machine Code - Age 12 BBC micro
Pascal - Age 19 unix
C - Age 20 PC
C++ - Age 23 or so? PC
Now programming games again, as well as side projects (interactive children's books, DNA stuff, natural language stuff, music software, image editing software). Have dabbled in a huge number of areas, I'm probably more a jack of all trades master of none.
- Group Members
- Active Posts 114
- Profile Views 5,788
- Submitted Links 0
- Member Title Member
- Age 43 years old
- Birthday June 30, 1973
Tearing my hair out debugging 3d geometry.
Semantic networks and natural language processing.
Beating up small children.
Posted by lawnjelly on 25 September 2012 - 06:52 AM
As a caveat I haven't personally dealt with this type of scenario, but here's some guesses:
When it does break down, I believe it is usually best to rely on the server being authoritative. If on the client you 'see' that you've planted a bomb and trapped another player, but as far as the server is concerned that player has already moved out of the way (their input is ahead of yours, for example), you have a choice: either you let the server be authoritative and correct your client, or you wind back the server, say 'hey, this client made a hit', replay everything from there, and send the result to everyone.
I personally wouldn't want to wind back the server, it strikes me as a bit of a nightmare in terms of balancing, potential for cheating, etc .. (what happens if there's a stutter on the client, and the other player is not updated, and thus easier to trap?).
So with an authoritative server you are going to get situations where either the trapper or the trappee is corrected (or both), and it's going to look kind of annoying, like in a FPS when you get shot round a corner or snapped back. Whether this totally mucks up gameplay is a question - some games maybe are not suitable for this sort of multiplayer I guess. You'll just have to try it and see.
It may be something that works as a splitscreen game (with zero lag), but totally falls down when playing with lag.
The other thing you could do, if all else fails, is the easy solution: don't use client side prediction at all. Then the problem is solved. It just means you'd have to either play only on fast connections or LAN games, or try to hide the delay somehow in the gameplay.
There may also be other possible solutions with client-client communication, but I don't know much about that.
Posted by lawnjelly on 24 September 2012 - 07:28 AM
I'm toying with sending inputs in a sort of reliable fashion. It's a queue of inputs, and I keep sending them until they are acknowledged. I also have a bandwidth throttle to keep the bandwidth usage in check.
This is a fairly standard way of doing it, I've been doing it for a while; there's a Gaffer on Games article describing this, I think.
I think (without looking at the code) I send the server calculated position with the ack for the client input. In fact this may all be part of the regular update packet from server to client (with actor positions).
Then when the client gets the ack, it can compare the server calculated position with its own client side predicted, and if it's different, roll back and recalculate the client position based on the input history. And once it has the server ack up to tick 'blah' it only now needs to send input out from tick 'blah' onwards, etc etc.
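To make the queue-until-acked idea concrete, here's a minimal C++ sketch (the types and names here are invented for illustration, not my actual code): unacked inputs sit in a queue, every outgoing packet resends all of them, and a server ack up to tick N lets the client drop everything at or before N.

```cpp
#include <cassert>
#include <cstdint>
#include <deque>
#include <vector>

// Hypothetical input record: tick number plus a button bitmask.
struct Input { uint32_t tick; uint8_t buttons; };

// Keeps unacknowledged inputs; every outgoing packet carries all of them,
// and a server ack up to tick N lets us drop everything at or before N.
class InputQueue {
public:
    void Push(const Input& in) { m_queue.push_back(in); }

    // Called when the server acknowledges all input up to and including 'tick'.
    void Ack(uint32_t tick) {
        while (!m_queue.empty() && m_queue.front().tick <= tick)
            m_queue.pop_front();
    }

    // Everything still unacknowledged goes into the next outgoing packet.
    std::vector<Input> Unacked() const {
        return std::vector<Input>(m_queue.begin(), m_queue.end());
    }

private:
    std::deque<Input> m_queue;
};
```

The rollback part then just re-simulates from the last acked tick using whatever is still in the queue.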
Posted by lawnjelly on 24 September 2012 - 02:31 AM
A question out to all the code-heads out there. I'd like to get a sense of what you as a programmer like to see from designers, other than cold hard cash, before spending a few dozen/hundred hours writing code. Obviously a good idea is a pretty useful thing, but let's get beyond that. When you're reading the classifieds, what do you look for from a team or individual to instill a sense of commitment, and what components indicate a unified design idea? In a priority list, what matters to you? Art, documentation, pre-recorded audio, etc.?
I can't speak for other programmers, particularly beginners (to whom this question might be more relevant).
Being a 'designer' is a very difficult sell when trying to start a project yourself. Unless you have prior experience / a track record, realistically you are probably going to need money to pay people, or other skills (do you have a good looking girlfriend who is *really* committed to the project?).
If we take money out of the equation, other skills would be (as AlterOfScience says) things like producing artwork / programming, and *possibly* running a business (prior experience). I would have thought producing artwork is the avenue most likely to succeed, as that is what most programmers usually can't do themselves / aren't interested in doing. Realistically, if you have any chance of being part of a non-paid project doing purely 'designing' (probably writing scripts or making levels), it's most likely to be by joining an existing project rather than starting one yourself (and consequently working on someone else's ideas, which is what a designer does 99% of the time).
I would suggest to anyone who wants to get started in designing to either start making games themselves, with a game toolkit or mod tool of some kind that doesn't require programming, or to learn to produce artwork of some kind, either 3d or 2d, so they have some marketable 'skillz' to bring to the game. For indie games, artwork doesn't necessarily have to be AAA quality; you just have to be prepared to spend the time learning how to do it, and then spend hours and hours and hours on it.
In short, designer is a hard sell. Designer / artist (primarily) and to a lesser extent designer / programmer is a better bet.
Not really. The game world isn't short of ideas. It's short of people who have the skills / time / commitment to do stuff. But I'm sure this has been discussed ad infinitum.
Obviously a good idea is a pretty useful thing
Posted by lawnjelly on 08 September 2012 - 07:02 AM
Yup, that's pretty much how we ended up doing it too! Snap. I think it probably ended up as a malloc on a reserved heap in the PC build for the level file, and just loading into the prereserved block on consoles.
For the streaming I think I had more chunks (I called them banks), maybe 8 or 16, something like that; then parts had the option to use e.g. 2 banks' worth.
For deciding which banks needed to be loaded, I used a PVS calculated from the artists' levels and portals, and a potentially loadable set derived programmatically from this. There were areas, I'm sure, where the artists had overcooked it and needed to put in visibility blocks of some kind; I think the tool chain alerted them to this. Worked like a charm, especially with decent asynchronous streaming support.
Getting way off topic though there hehe!
Posted by lawnjelly on 08 September 2012 - 01:18 AM
So, I mostly agree with the rest of your post, but this point isn't quite as straightforward as you suggest.
The other is that there is no question over the time taken over a deallocation / allocation. It is determined by your code and can be tightly determined - usually a constant very short time.
Malloc/new are not deterministic. The cost of an individual allocation is generally much higher than that of a garbage collector, and it is not a fixed cost. But you do get the (to my mind, dubious) benefit that the performance cost is incurred at the call site (whereas garbage collection incurs a performance cost at an indeterminate later date).
If you actually need deterministic allocation cost, then you have to go with other solutions (probably ahead-of-time allocation: pool allocators, SLAB allocators, etc.)
Ahha .. this may be where the confusion lies.
I didn't want to suggest 'using malloc / free at runtime is better than garbage collectors'. Far from it... they both have related downsides.
In C++, if you override new, you don't need to use OS calls for memory management. You can use whatever system you want for grabbing memory from wherever you want, and then you have the opportunity to call the constructor yourself with placement new.
In addition there is a distinction between one-off allocations at startup (with their corresponding deletion at shutdown) and dynamic use (i.e. the kind of allocation you might do many times in a frame). The second case is what we are interested in here. For actually reserving your memory at startup you could use whatever you want: an OS heap, a garbage collected system. Ultimately your memory has got to come from somewhere.
(There is also the slightly less stringent case of level load / unload, where you *could* if necessary be a bit more lenient / take some shortcuts on some platforms).
What we are after in games, in an ideal world, for dynamic allocation (things that happen a lot rather than just startup and shutdown) is stability (no failed calls) and constant time (and fast) allocation and deallocation.
Sorry I should have been more clear on this. I would on the whole use things like fixed size memory allocators (and potentially other constant time allocators) for things that need to be created / destroyed dynamically (see my first post on page 1). You can use this for constant time incredibly fast allocations / deallocations, suitable for things like nodes in algorithms, even particle type systems.
For things that are truly variable size (levels etc.) the tradeoff can be to prereserve space at startup for the worst case, and work with that. Admittedly you lose a bit from the theoretical maximum, but you gain in simplicity and stability. On levels without much geometry you can e.g. add more sound or more textures, and vice versa. For your level file you can prepack into the best format possible, with zero fragmentation, and make use of the whole of your budget in megs. If you need more than this, then you need to support streaming of level data on the fly (a whole other topic with similar concerns, and guess what, you can use fixed size bank slots for this too!).
You can do this for GPU resources too .. reserve e.g. 5000 verts for a character and then stick to that budget or lower for your artwork, and you can guarantee they will always fit in that 'slot'.
You can also pre-designate blank 'slots' for various items in the level data RAM allotment to give more flexibility, if it seems a better idea than deciding ahead of time the maximum number of item 'blah'. If you do this you get the benefit of zero fragmentation, and best use of memory for that level.
In short there are lots of handy 'helper' bits of functionality offered to programmers, like 'general purpose' heaps, variable size strings etc. There are whole languages dedicated to making things 'easier' for the programmer, where these things are a given (BASIC, PHP, etc.). In most situations this is a real benefit because it makes you much more productive as a programmer: less code, simpler code, less potential for bugs, and the 'costs' are not going to be apparent to the user.
It's just that in some situations, particularly time critical applications, and those on limited memory devices, it can become worth it to not use some of the helper functionality. An extreme example would be missile control software. You might have limited memory. If your program crashes, people die. If your program takes too long to faff around restructuring the heap, people die. It's only if it works predictably and as per spec that the right people die.
Other examples where you have to be a bit more stringent include things like financial software, medical software, some engineering software.
Would you want the nuke heading towards your neighbour's house programmed in Java with garbage collection, or C++ with no external allocations? I know which one I'd rather have heading towards my neighbours.
(edit) Some good search terms to google in this area are : 'real time programming', and 'mission critical programming'. (/edit)
Posted by lawnjelly on 07 September 2012 - 02:39 PM
Well it would 'be nice' to be lazy and leave everything to a GC,
Sometimes the developer (or team) may not be quite skilled enough to have a choice and I hope to god that if working in a team, I wouldn't have to use some bodgy monstrosity.
In my spare time, I port software to FreeBSD and notice quite a few cases where developers have tried to hand roll their own stuff; nothing flags up bugs in this type of thing better than porting to an entirely new platform (and an older version of GCC). So I suggest using a garbage collector unless you really know how to use the language properly.
While I haven't properly touched managed languages for well over 4 years, the Boehm GC works satisfactorily with C++.
For software which needs no clever memory management, however, the only solution is tr1::shared_ptr!
Yup, don't get me wrong, in the majority of apps I'd be all for using all the tricks in the book to make things simpler. Garbage collection, you name it.
Sorry if I come off as opinionated on the subject, I was a bit unfair on you Karsten .. I've had to deal with the mess caused in the past and it's not been pleasant. It's not very fair when people's jobs are on the line, and their families depending on them etc.
It's just in the specific case of (professional) games, particularly on fixed low memory devices (and some other software on embedded systems), my personal belief is that controlling the memory yourself can be the best option. That doesn't mean it's necessarily the best approach for people learning .. it's more an approach for making a solid professional product.
The two main reasons I would argue for this are:
Stability - no worries about failed allocations .. your game will run each time, every time, no matter how many levels you load, what combinations of objects need to be loaded. There's no, ah but if character B walks round the back of building A, carrying object C and opens the door on level BLAH, then it crashes. Sometimes. Which is pretty much what you don't want to hear about when you are trying to ship something. Or what happens if someone is running such and such a program in the background in a multitasking environment.
Of course it's possible you could get round this to some extent with your Garbage Collection system - if it can allow you to pre-reserve your memory, (depending on its implementation regarding fragmentation), and if you keep a tight handle on your numbers of various objects. But once you get to this extent you are almost doing the work of doing it yourself anyway.
The other is that there is no question over the time taken over a deallocation / allocation. It is determined by your code and can be tightly determined - usually a constant very short time. There's no worry about dropping frames etc. Using a third party allocation / deallocation system leaves you at the mercy of their implementation. That's not to say there aren't good implementations, but there are also bad ones, and worst cases. Windows for example is quite happy to grind to a halt and do some disk swapping when it thinks it's necessary during an allocation / deallocation.
I fully understand that it can be a bit of extra effort (sometimes quite a bit) to manage memory yourself, although it's usually mainly a one off cost setting up your project. But development isn't just the time putting the code together, it's also beta testing, trying lots of different scripts, game levels, combinations of factors. In this situation the more potential problems you can remove the better.
If you are working to a time schedule with milestones and a budget and staff costs to pay, the last thing you want is some vague uncertainty over 'yeah it may take 2 years to beta test this thing'. That's one of the (several) reasons why games get canned / companies go under.
But anyway at the end of the day it's up to whoever is technical lead on a project to make these kind of decisions. Right I'm tired that's enough essaying it's bedtime!
Posted by lawnjelly on 07 September 2012 - 05:55 AM
Using OS calls for allocation and deallocation at runtime in a game is one of the cardinal sins, but GC too? Urggg!! Do you know what these calls do behind the scenes? I wasn't even going to mention it.
If you do look into memory pools, stuff can get quite complicated so sometimes it might be nice to leave it to the GC platform's memory pool.
Well, it would 'be nice' to be lazy and leave everything to a GC, but unfortunately there are reasons why people don't tend to use this kind of thing for time dependent stuff. I understand looking after memory is 'an extra bother' and 'complicated', but it's necessary if you want to make fast, stable code. I've also had to spend weeks sorting out problems caused by 'programmers' who thought memory management was 'a bother', which delayed shipping products and left them bug ridden messes.
Posted by lawnjelly on 07 September 2012 - 04:54 AM
The downsides are that you (typically) have to know in advance the maximum number of objects you will want in the worst case scenario. In addition, the memory preallocated for the pool is not available for other uses.
The upsides are that they are blazingly fast, with constant time allocation and deallocation, there is no fragmentation, and provided you choose the maximums correctly your program CANNOT crash due to an allocation failure.
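As a sketch of what such a fixed size pool can look like in C++ (a free list threaded through preallocated slots; this is a generic illustration, not code from any particular engine):

```cpp
#include <cassert>
#include <cstddef>
#include <new>

// Minimal fixed-size pool: a preallocated array of slots threaded onto a
// free list. Alloc and Free are O(1); Alloc returns nullptr (rather than
// crashing) once the chosen maximum is exhausted.
template <typename T, size_t MAX>
class Pool {
public:
    Pool() {
        for (size_t i = 0; i + 1 < MAX; ++i)
            m_slots[i].next = &m_slots[i + 1];
        m_slots[MAX - 1].next = nullptr;
        m_free = &m_slots[0];
    }
    T* Alloc() {
        if (!m_free) return nullptr;        // pool exhausted: budget exceeded
        Slot* s = m_free;
        m_free = s->next;
        return new (s->storage) T();        // placement-construct in the slot
    }
    void Free(T* p) {
        p->~T();                            // destroy, then thread back on free list
        Slot* s = reinterpret_cast<Slot*>(p);
        s->next = m_free;
        m_free = s;
    }
private:
    union Slot {                            // a slot is either free (next) or live (storage)
        Slot* next;
        alignas(T) unsigned char storage[sizeof(T)];
    };
    Slot m_slots[MAX];
    Slot* m_free;
};
```

Choosing MAX is exactly the 'know your worst case' downside mentioned above.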
You can also implement your own heap with buckets but it's not something I'm a fan of.
You can also (in c++) override new and delete to keep track of your allocations. You can use different heaps / counters for different modules, and budget your memory between them. Very useful on consoles and limited memory devices. This can also report to you any memory leaks on closing, which module and which file they are from.
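The bare bones of tracking via overridden new / delete look something like this (a minimal global version; a real system would tag allocations by module / file via macros, which I'm leaving out):

```cpp
#include <cassert>
#include <cstdlib>
#include <new>

// Simple allocation tracker: a count and byte total bumped in operator new.
// A per-module version would use separate counters selected by a macro.
static std::size_t g_allocCount = 0;
static std::size_t g_allocBytes = 0;

void* operator new(std::size_t size) {
    ++g_allocCount;
    g_allocBytes += size;
    void* p = std::malloc(size);
    if (!p) throw std::bad_alloc();
    return p;
}

void operator delete(void* p) noexcept {
    std::free(p);
}
```

At shutdown you compare counts against frees (or keep a list of live allocations with file / line) to report leaks.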
Other tricks are things like: when you load in a game level, load it as a binary file laid out in usable form in memory, then fix up the pointers within it from offsets within the file to actual locations in memory. This gives you super fast loading, no fragmentation, and cache coherency. And of course level size etc. is one of the biggest variables within a game, so if you can isolate it down to one allocation, you shouldn't really need to do much else in the way of allocation. And even for this you can just pre-allocate a big chunk for the biggest level size; that's what I've tended to do in console-like environments.
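The offset-to-pointer fixup is the key trick; here's a toy version (the 'level file' format here is invented purely for illustration):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Toy 'level file': a header whose 'name' field is stored on disk as a byte
// offset from the start of the block, then fixed up to a real pointer once
// the block is loaded into memory.
struct LevelHeader {
    uint32_t nameOffset;   // on disk: offset from block start
    const char* namePtr;   // zero on disk; filled in by FixupPointers
};

// After loading the whole file into 'block', convert offsets to pointers.
void FixupPointers(uint8_t* block) {
    LevelHeader* h = reinterpret_cast<LevelHeader*>(block);
    h->namePtr = reinterpret_cast<const char*>(block + h->nameOffset);
}
```

A real format would walk a table of pointer locations rather than hardcoding one field, but the principle is the same: one read, one fixup pass, no per-object allocation.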
Of course this is for game code, where stability and speed are paramount. For tools and apps I'll be a lot more lax, and use dynamic allocation etc (sometimes I don't even override new and delete, when I'm feeling like living life close to the edge ).
It's also worth mentioning that there are some allocations you can't avoid, depending on the OS: API allocations such as DirectX and OpenGL. You can of course use pooling systems with your API resources too. In addition, on consoles you can often completely avoid this problem by using a resource directly from memory, as they may be UMA or give you more control over memory.
Posted by lawnjelly on 07 September 2012 - 02:41 AM
It can be a bit of an effort to pull your finger out and initially write these tools, but once you have them you realise you couldn't live without them.
Posted by lawnjelly on 01 August 2012 - 07:57 AM
But you have to have a playstyle that will work with it, otherwise you end up needing to implement a full collision detection / physics system in addition to the navmesh. I think the projectiles could be the problem: you are going to end up needing either a full collision mesh for the level, or at least a simplified version of it built for the projectile collisions. At which point you have to ask whether it would be better to have a fully physics-driven main character and be done with it.
An alternative is to tweak your game design so you don't need the full collision mesh. Perhaps instead of using projectiles, only allow your characters melee combat.
Posted by lawnjelly on 13 July 2012 - 12:19 PM
Essentially if you are at time 3.4 (in ticks) then you should have physics simulated 0, 1, 2 and 3. Keep a record of the current physics position and the previous physics position.
Then to render, render exactly 1 tick behind, so you would render at tick 2.4: interpolate a fraction of 0.4 between the tick 2 and tick 3 positions.
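In code the interpolation is just a lerp between the previous and current physics tick positions (a minimal sketch; the Vec3 type and function names are mine):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Linear interpolation between two positions.
Vec3 Lerp(const Vec3& a, const Vec3& b, float t) {
    return { a.x + (b.x - a.x) * t,
             a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t };
}

// time is in ticks, e.g. 3.4: physics has simulated ticks 0..3, so we
// render one tick behind at 2.4, i.e. fraction 0.4 between the stored
// previous-tick and current-tick positions of the object.
Vec3 RenderPosition(const Vec3& prevTickPos, const Vec3& currTickPos,
                    float timeInTicks) {
    float fraction = timeInTicks - std::floor(timeInTicks);
    return Lerp(prevTickPos, currTickPos, fraction);
}
```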
Posted by lawnjelly on 13 July 2012 - 02:25 AM
Off the top of my head: CEGUI, MyGUI, GWEN, then bigger ones like Qt and GTK.
As well as these, there are often proprietary solutions for each platform, if you are not multiplatform.
If you want to have a go at rolling your own, it's quite doable, depending on how good your kung fu is and how much or how little you want it to do. Each widget needs its own code, so if, for example, you don't need a treeview widget, don't write one till you need one.
I found text rendering to be as big (or bigger) an undertaking as writing the rest of the system. Consider using something like the freetype library to do this for you (I believe many of the other GUIs use freetype).
If you choose to have a go at text rendering, this may help: I wrote my own subpixel text renderer with layout (justification etc.), but used something like BMFont to pre-render fonts at the required sizes. Even so it still took a good week to get working to a decent standard, and getting subpixel rendering to look 'good' is not an easy task. The handling of subpixel spacing, kerning, and using 2 passes for justification makes it slightly more involved than you'd think; there was a lot of debugging of layout problems. And mine (currently) can't handle images interspersed with the text as in HTML.
Then for the actual GUI itself, I've probably spent about a week of coding on it so far (on and off), and mine is pretty basic. I know it's probably heresy to say it, but I found it pretty easy, maybe because I had a good idea of what would be involved / how to do it from the outset.
If you use inheritance, once you have the basic widgets / functionality, it becomes easier to build new more complex widgets by deriving from and combining the basic widgets.
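A toy illustration of that idea (all the names here are invented, not from any real GUI library): a base Widget with a virtual render method, basic widgets derived from it, and a compound widget built by composing them.

```cpp
#include <cassert>
#include <string>

// Base widget: everything renderable derives from this.
struct Widget {
    virtual ~Widget() {}
    virtual std::string Render() const = 0;
};

struct Button : Widget {
    std::string label;
    explicit Button(std::string l) : label(std::move(l)) {}
    std::string Render() const override { return "[" + label + "]"; }
};

struct TextBox : Widget {
    std::string text;
    explicit TextBox(std::string t) : text(std::move(t)) {}
    std::string Render() const override { return "(" + text + ")"; }
};

// A compound widget: still a Widget, but composed of the basic ones,
// so it gets their behaviour for free.
struct Spinner : Widget {
    Button down{"-"};
    TextBox value;
    Button up{"+"};
    explicit Spinner(std::string v) : value(std::move(v)) {}
    std::string Render() const override {
        return down.Render() + value.Render() + up.Render();
    }
};
```

In a real system Render would draw to a surface and widgets would hold child pointers for layout and input routing, but the derive-and-compose structure is the same.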
Posted by lawnjelly on 13 July 2012 - 01:39 AM
Essentially you want to use a third party library to convert from MP3 / Ogg to raw data, probably 16 bit stereo or stereo floats. Then, as said, you could use something like an FFT (either a library or some source code you find), or you could simply do something like sum every 'window' of 1000 samples, or find the peak over a window, and use this loudness to drive what happens in the game.
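The peak-over-a-window approach is only a few lines; here's a sketch over mono float samples (the function name and window size are arbitrary):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Peak amplitude per window of N samples over mono float audio in [-1, 1].
// Each entry of the result is the loudest sample in that window, which can
// then drive game events (spawn an obstacle on a loud beat, etc.).
std::vector<float> WindowPeaks(const std::vector<float>& samples, std::size_t window) {
    std::vector<float> peaks;
    for (std::size_t start = 0; start < samples.size(); start += window) {
        float peak = 0.0f;
        std::size_t end = std::min(start + window, samples.size());
        for (std::size_t i = start; i < end; ++i)
            peak = std::max(peak, std::fabs(samples[i]));
        peaks.push_back(peak);
    }
    return peaks;
}
```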
The more info you want to get out of the audio data, the more CPU it's going to use, which may be an issue targeting mobile phones. Things like extracting BPM and pitch are not trivial, but I don't think you really need to do that.
There have been games in the past that do exactly this; I have a hazy memory of a spaceship shooter where the tunnel you are flying through is determined by the music playing, so it's very doable.
Posted by lawnjelly on 04 July 2012 - 01:09 PM
GUI rendering can be slow, because text rendering can be slow if there's a lot of text. So you can look into text rendering optimisation.
I know you didn't ask for comments on the reinventing thing, but it's pretty strange that you are really reinventing the look of the standard win32 GUI.
If you want to release stuff with this GUI, it will kick your ass. And it will kick the users' ass. A GUI that just looks like the well known Windows GUI but doesn't work like it is a major user-anti-experience.
I agree about the user experience; there are some well executed and user-friendly third party GUIs, and some that just leave you scratching your head.
In this case I'd already written the basics of the GUI for a light weight in-game menu system, so thought why not add regular application menus and try running with it for a simple app. It's all good experience. I'm just planning on using it for a simple internal 3d model editor for now, nothing fancy.
And of course, not being tied to win32 leaves you more options; a couple of friends and I have just released an app on iOS that uses a simplified version of this.
Posted by lawnjelly on 04 July 2012 - 10:47 AM
Anyway, for various reasons I have written a little GUI system for a game I'm working on, and decided to have a go at fleshing it out to make it useful for writing tools or little apps, possibly cross platform. Oh gawd, yet another reinventing of the wheel, I hear you cringe... yes, well, can't say much on that.
Here's a screeny showing I have it working ok, it didn't take too long:
For the game I was using DirectX to do the actual GUI rendering, but I've swapped over to OpenGL for general use, and it's a long time since I did much OpenGL; I'm very out of date with it.
So my question is probably to people who have done this before, what did you find the most efficient way of rendering the GUI objects? I can see there's a few trade offs involved. I was until recently just doing everything in software, writing to one big software surface, then locking a big viewport size texture and copying the RGB data across. Not particularly elegant or fast, but simple and it works.
That is, until I realised I wanted to have some OpenGL 3d viewports displaying 3d models inside the app, with GUI elements possibly overlaid (such as menus or dialog boxes). And to future proof things, it would be nice not to lock the entire viewport every time there is a little change, so it works at reasonable speed.
So my guesses for some alternative methods are:
1) As each 'widget' is changed, render it to the big software texture, and lock and upload just that part of the texture (using glTexSubImage2D?). This isn't as simple as it could be though, because it appears the source data can't have a 'pitch' to jump across the x coord on each line (if the viewport is much wider than the 'dirty rectangle'), so I'd have to first copy the dirty region of the big software texture into a temporary tightly packed buffer before uploading to OpenGL.
I could also keep a list of dirty rectangles that need uploading to opengl to avoid uploading the same area more than once in a frame.
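For (1), the pitched-to-tight copy is straightforward; a sketch, assuming 4-byte RGBA pixels and a pitch measured in pixels (the helper name is mine):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Copies a dirty rectangle out of a big pitched software surface into a
// tightly packed buffer, ready to hand to glTexSubImage2D (which by
// default expects tightly packed source rows).
// Pixels are 4 bytes (RGBA); surfacePitch is the surface width in pixels.
std::vector<uint8_t> ExtractDirtyRect(const uint8_t* surface, int surfacePitch,
                                      int x, int y, int w, int h) {
    std::vector<uint8_t> tight(static_cast<std::size_t>(w) * h * 4);
    for (int row = 0; row < h; ++row) {
        const uint8_t* src = surface + (static_cast<std::size_t>(y + row) * surfacePitch + x) * 4;
        std::memcpy(&tight[static_cast<std::size_t>(row) * w * 4], src,
                    static_cast<std::size_t>(w) * 4);
    }
    return tight;
}
```

As an aside, on desktop OpenGL I believe glPixelStorei(GL_UNPACK_ROW_LENGTH, pitchInPixels) lets glTexSubImage2D read a pitched source directly, which could avoid this copy entirely, though it isn't available on all GL ES versions.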
2) Same as above, but keep a separate software surface for each 'widget'. That way it can be uploaded without fiddling. However, it makes changing the size of widgets potentially more problematic (as the software surface size will change), and it would be nicer to avoid all those unnecessary memory allocations (although I could use memory pools, I suppose).
3) Have a separate OpenGL texture for each widget. Probably faster for rendering, but pretty ugly in terms of memory allocation / deletion.
4) Try and render everything directly on the 3d card, without using software textures.
So, does anyone know if there's a standard good way of doing this? Anyone know what CEGUI, MyGUI etc. do?