

Member Since 15 Dec 2001

#5192959 is there a better way to refer to assets in a game?

Posted by phantom on 15 November 2014 - 03:30 AM

In any case, you shouldn't have code like playwav("some_sound.wav"); though -- more like:
Sound* some_sound = loadwav("some_sound.wav"); //filename processing paid once, pointer obtained
playwav(some_sound); // no details of filesystem involved per frame

1000x this.

Load the sound once; obtain a handle to it; play via the handle, either playwav(handle) or soundObj->play().

At BEST I might have a playprecached("some_sound"); which would do the name hashing and lookup from the resource container there and then, but still with no disk I/O at that point.
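A minimal sketch of the idea, using the names from the example above; the cache, the hashing scheme, and playprecached's internals are my assumptions, not any particular engine's API:

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>

struct Sound { std::string path; };          // stand-in for real sound data

std::unordered_map<std::uint64_t, Sound> g_sounds;  // resource container

// FNV-1a hash so lookups key on a cheap integer, not a string compare.
std::uint64_t hashName(const std::string& name) {
    std::uint64_t h = 14695981039346656037ull;
    for (unsigned char c : name) { h ^= c; h *= 1099511628211ull; }
    return h;
}

Sound* loadwav(const std::string& path) {    // disk I/O happens here, once
    auto it = g_sounds.try_emplace(hashName(path), Sound{path}).first;
    return &it->second;
}

void playwav(Sound* /*s*/) { /* submit to the mixer; no filesystem access */ }

// The "at best" case: hashing and a container lookup per call,
// but still no disk I/O at play time.
void playprecached(const std::string& name) {
    auto it = g_sounds.find(hashName(name));
    if (it != g_sounds.end()) playwav(&it->second);
}
```

The filename cost is paid once in loadwav; every per-frame play call works with a pointer or, at worst, an integer hash lookup.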

#5192476 Why don't you use GCC on windows?

Posted by phantom on 12 November 2014 - 01:33 PM

And as of today, with the release of Visual Studio 2013 Community Edition, which gets you plugin support, I'd say there is even less reason to use anything but VS on Windows.

And the 2015 stuff which is coming is just added gravy on top... (see link).

#5191847 Assets reloading and streaming

Posted by phantom on 08 November 2014 - 03:39 PM

I've worked on a system which used a mixture of the two things you mention.

When a resource is loaded it can request a handle to other resources (so a material might say "give me a handle to a texture with the name 'grass'") and in the process it is registered as 'interested' in that resource.

When a resource is loaded, or unloaded, anyone interested in it is sent a message and can react accordingly - so in the case of a material it could swap in a 'default' texture or just flag itself as 'unrenderable', which would then ripple upwards: the mesh using the material would know it couldn't render and would prevent itself from doing so.

When something was loaded in, anyone interested in it could do whatever 'bind' work was needed on the 'loaded' message - so in the case of the material it would grab the texture handle (for the 3D device being used; this system was cross-platform) and cache it for a later draw call.

This meant that we could remove ANY rendering resource and, with the correct callbacks in place, handle it gracefully. In our environment that meant an artist could make a change to a texture, save it, and with one press of a button it would be cooked to a game-ready format; the game would be told to unload the old one (which would stop it rendering whatever it was on, but only those things), load the new one, and rebind with no hitch at all. The system was so good we could unload the whole scene description for the renderer (what passes) and reload it as if nothing had happened.

The key point being that the bind/unbind caching should only be done once, so there should be zero overhead compared to a normal system.
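The interest-registration scheme described above can be sketched roughly like this; all names here (Resource, ResourceSystem, request, setLoaded) are hypothetical stand-ins for whatever the real system used:

```cpp
#include <functional>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

struct Resource;
// Callback fired when the resource is loaded (true) or unloaded (false).
using Listener = std::function<void(Resource&, bool)>;

struct Resource {
    std::string name;
    bool loaded = false;
    std::vector<Listener> interested;  // registered at handle-request time
};

struct ResourceSystem {
    std::unordered_map<std::string, Resource> resources;

    // Requesting a handle also registers the caller as 'interested'.
    Resource* request(const std::string& name, Listener onChange) {
        Resource& r = resources[name];
        r.name = name;
        r.interested.push_back(std::move(onChange));
        return &r;
    }

    // Load/unload notifies everyone interested so they can rebind,
    // swap in a default, or flag themselves unrenderable.
    void setLoaded(const std::string& name, bool loaded) {
        auto it = resources.find(name);
        if (it == resources.end()) return;
        it->second.loaded = loaded;
        for (auto& cb : it->second.interested) cb(it->second, loaded);
    }
};
```

A material would call request("grass", ...) once and do its device-specific bind work inside the callback, so the expensive rebinding only ever happens on load/unload events, never per frame.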

#5189830 Current state of custom and commercial game engines as of 2014

Posted by phantom on 28 October 2014 - 06:28 PM

Either way, I still believe there are more Unity programmers being "produced" than hardcore ones, and that happens because of this engine monopoly.

However I don't think we've lost anything in this regard; people interested in low level will go low level because they'll have a desire to find out more - Unity and UE4 just open up the game dev world to those who don't care about the details and just want to make games.

OR to put it another way;
Let's say pre-Unity there were 100,000 programmers and 50,000 of them were low level.
Now we might have 400,000 programmers and 60,000 low level - the proportion of low level might have dropped but the overall pool has increased.

Before today, the people doing low-level hardcore stuff were the ones who wanted to learn; in my case I looked at a BBC and wondered 'how does that work...' and went from there. You'll still have those people, you'll always have them; they just make up a lower percentage of programmers overall, and that's OK.

In the end we get more cool games to play and the demands of those games feed back into the tech - people who know the tech will always be needed to build and maintain it but it'll be the games which drive it forward.

As to what you should do, well, follow your passion.
If your thing is tech then go make the coolest tech you can.
If you want to make a game then make a game - pick an engine, make your own, but do it because you want to.

That is what has built this business, after all.

#5188790 Is optimization for performance bad or is optimizating too early bad?

Posted by phantom on 23 October 2014 - 12:49 PM

To quote more than a few different tech leaders, the mentality that "premature optimization is evil" is why Word takes 10x longer to open today than it did on significantly weaker hardware over a decade ago. (I'm not in 100% agreement with that, but not in complete disagreement either.)

Honestly, I would disagree with it completely if that's the full extent of their usage of the quote; the whole point of the 'premature optimisation' thing isn't "don't optimise until you need to" but "don't try to optimise until you have profiled", with the rider "but don't write dumb code to start with either...".

People who throw quotes away in a lazy manner annoy the fuck out of me because they mangle the message to serve their own agenda...

#5188508 Why not use UE4?

Posted by phantom on 22 October 2014 - 06:41 AM

This was just a rendering error in a test scene; G2 looked fine, Nexus 5 was missing some textures as I recall - I've not cycled back around to that problem in a few days.

As for shaders, currently we just compile them on demand on the device; I don't think that will change any time soon, as we have an init step which basically throws a pixel shader at the hardware to figure out what it can and can't deal with and then patches shaders on load to work around issues.

(FYI: don't trust this either - I'm pretty sure the spec says "if a device cannot support a requested precision, compile or link will fail", but I've seen Mali-400 devices silently convert code from highp to mediump with just a note in the log, which then causes fun errors because Mali-400 only has 10 bits of precision at mediump, so you end up with problems such as artefacts in sky domes...)

#5187627 Why not use UE4?

Posted by phantom on 17 October 2014 - 04:48 AM

nowadays there are lots of platforms to target...

And in the case of Android that is basically 'one platform per phone per chipset per driver', as Android isn't one platform - it's an utter fuck tonne of them, all broken in various ways and lying to you in others.

Case in point: the Nexus 5 is, hardware-wise, an LG G2 - the former has rendering issues the latter doesn't... well played, Android, well played.

#5187412 max size for level using floats

Posted by phantom on 16 October 2014 - 08:57 AM

Huge ranges are just bad bad voodoo waiting to happen.


I worked on Operation Flashpoint - Red Dragon and our setup was such that we broke the world up into tiles, each roughly 512*512 units in size (where 1 unit = 1 metre), and we kept 9 of them in memory at once (the tile the player was in plus the 8 around it); everything beyond that was kept in macro tiles (where 9 of the 512*512 tiles = 1 macro tile) with the colour information baked in.


During rendering we had two depth passes: 0.33 -> 333.33 (near/far) for close objects and 333.33 -> 3333 (iirc) for 'far', with fog removing things beyond that.

The 0.33 near plane was thanks to some experimentation of mine to find the closest value which would stop the player clipping but also let the road decals in near-scene rendering work correctly without clipping.

(The roads were rendered in such a way that after a certain distance the verts were translated upwards by an amount which scaled with distance, so as to avoid clipping with the terrain they were being rendered over.)


All objects in a tile were expressed in tile-local coordinates, so rendering required building a world transform; the world's (0,0,0) was rebased every so often (on a 300*300 grid, if memory serves, due to shadow-map issues at the edge of tiles if we didn't rebase more frequently) and was effectively decoupled from the view position, which also underwent this transform change.


Physics also had to be rebased every so often - I believe that was also based around a 300*300 grid - so that physics rebases occurred at the same frequency as world rebases.


With the streaming tech we had, this allowed us to move from one corner of a 24*15 (again, iirc) tile world to the other seamlessly, with zero precision issues.


If you introduce an extra far pass (so near-mid-far) you might be able to get planets and other large objects in scene without clipping issues, but you really do need to work on reducing the range as much as you can. This applies to the camera too: push the near plane out as far as you can without causing clipping issues (I guess with a space game you can push out a fair distance) to get back as much precision as you can from the z-buffer.


Physics islands are probably the thing people think of least, however; you need to keep them as 'local' as you can, for the same precision reasons, and rebase as the world moves around the camera.
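The periodic origin rebasing described above can be sketched roughly as follows; the 300-unit grid step comes from the post, but the structure and names are illustrative, not the shipped code:

```cpp
#include <cmath>
#include <vector>

struct Vec2 { double x, y; };

struct World {
    Vec2 origin{0, 0};             // current rebase origin, in true world space
    Vec2 camera{0, 0};             // camera position, origin-relative
    std::vector<Vec2> positions;   // object positions, origin-relative

    static constexpr double kGrid = 300.0;  // rebase step, as in the post

    // When the camera drifts a whole grid step from the origin, shift the
    // origin and pull every relative coordinate back toward it, keeping
    // all the numbers the renderer/physics see small and precise.
    void maybeRebase() {
        // Snap the shift to whole grid steps so rebasing is deterministic.
        double sx = std::floor(camera.x / kGrid) * kGrid;
        double sy = std::floor(camera.y / kGrid) * kGrid;
        if (sx == 0.0 && sy == 0.0) return;
        origin.x += sx; origin.y += sy;
        camera.x -= sx; camera.y -= sy;
        for (auto& p : positions) { p.x -= sx; p.y -= sy; }
    }
};
```

True world position is always origin + relative position, so nothing visibly moves when a rebase happens; only the split between the two terms changes.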

#5187342 Why not use UE4?

Posted by phantom on 16 October 2014 - 03:28 AM

So, cards on the table, I work on the engine thus apply any bias you think is required to the reply....


So, why not?

Well, off the top of my head the main reason would be because the engine doesn't suit your goals - if your aim is to make a game without all the fiddly stuff then by all means look at UE4 or Unity and evaluate how they stack up for you; UE4 doubly so if you are a student as you can now get it for free for at least a year.


If, however, your goal is to learn the fiddly bits of rendering (or indeed engine design) from the ground up then it probably isn't a good idea; heck, even as a learning resource it might not be great, because it is the product of a good 15+ years of building upon previous changes, which means there are bits of it which aren't pretty.


I personally wouldn't worry too much about the 5% business; while it means you have to hand over some cash after the fact, the overall deal removes a large amount of risk (and it is 5% after a threshold, so you can still take $12,000/year gross). If you don't do big numbers you won't have shelled out a lot of money for no return, and if you do big numbers then you can always use the income to get a different up-front deal for your next game :)


Ultimately I'd say that UE4 etc. are not a great idea if you want to learn the tech, but are a good idea if the feature set closely matches your requirements for a game and shipping a game reasonably quickly is your primary concern.

#5187143 Learning the details of a DAW rather than "preset surfing"

Posted by phantom on 15 October 2014 - 07:05 AM

I'm at the hobbyist/rank-amateur level in all this as well; however, over the last few months I've found various YouTube videos quite interesting/educational when it comes to various synths and VSTs.

#5186765 Do you use UML or other diagrams when working without a team?

Posted by phantom on 13 October 2014 - 03:14 PM

I feel that code structure diagrams are unnecessary, period.

Basically this.

At best I'll sketch out relationships/structure on a pad of paper (not using UML or anything, just boxes and arrows/lines) while designing a system but once that is roughed out and I'm up and running it becomes out of date reasonably quickly.

#5182839 Best comment ever

Posted by phantom on 25 September 2014 - 03:04 AM

From 3rd party code for a game I worked on;

// This is always 2
#define MAX_PLAYERS 3

#5181624 Can you explain to me this code !

Posted by phantom on 19 September 2014 - 03:11 PM

Yes, you should, because that second check is not dependent on the first.

If you type 100, then first of all it checks to see if the input contains a '0' or '1'; if it does, then it stores the number.
THEN it goes on to check if that number is less than 50 - which it isn't, thus the 'greedy' message.

In your second post you say you input '55', which does not contain '0' or '1' thus you go down the 'learn to type' path, which appends 'good job' in the function it calls before quitting.
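The original thread's code isn't shown here, so this is only a guess at its shape, reconstructed from the behaviour described above; every name in it is hypothetical:

```cpp
#include <iostream>
#include <string>

// Stand-in for the function the post says appends 'good job' before quitting.
void goodJobAndQuit() { std::cout << "good job\n"; }

std::string run(const std::string& input) {
    // First check: does the typed text contain a '0' or a '1'?
    if (input.find('0') != std::string::npos ||
        input.find('1') != std::string::npos) {
        int number = std::stoi(input);   // store the number
        // Second, independent check: is it less than 50?
        if (number < 50) return "ok";
        return "greedy";                 // e.g. input "100"
    }
    goodJobAndQuit();                    // e.g. input "55"
    return "learn to type";
}
```

Input "100" passes the contains-'0'-or-'1' check but fails the less-than-50 check, hence 'greedy'; input "55" fails the first check entirely and goes down the 'learn to type' path.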

#5180825 Optimization philosophy and what to do when performance doesn't cut it?

Posted by phantom on 16 September 2014 - 03:08 PM

a much more economical approach would be to optimize "as you go" or basically making things as efficient as possible before moving on.

This is the thing covered in the full quote;

There is no doubt that the grail of efficiency leads to abuse. Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.
Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified.

Trying to make everything as fast as possible as you write it is, and will remain, a waste as you will slow overall progress for little to no real gain.

The speed of the critical path and hotspots on that path are the key points and often rewriting or rethinking the problem in those areas has very little impact on the overall application (assuming you have designed it properly which is a side issue).

Experience in time will guide you, but even experienced programmers will check themselves before making changes and profile to make sure their gut isn't throwing them off course.

Code is read more than it is written, so programming for readability at the cost of some speed (although being sensible still applies) is preferable - you can always come back and make it faster later but making it correct and maintainable is a much harder job.

#5180644 Using C++ codebase

Posted by phantom on 16 September 2014 - 03:10 AM

#include is to do with bringing in files; it is a pre-processor directive which basically copies and pastes the code in the named file into the current .cpp being compiled.

In the case of 'math.h' this is a standard header file and #include will pull it into the file you are compiling.

DLLs have nothing to do with C or C++ directly, this is correct - they are an OS thing.
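To make the #include point concrete, a tiny sketch (the function name is just for illustration); <cmath> is the C++ spelling of the C 'math.h' header mentioned above:

```cpp
#include <cmath>  // the preprocessor pastes this header's contents here

double hypotenuse(double a, double b) {
    // std::sqrt is only visible because <cmath> was included above;
    // remove the #include and this line fails to compile.
    return std::sqrt(a * a + b * b);
}
```

The compiler never sees "math.h" as a separate entity; by the time compilation proper starts, the header's declarations are simply part of the translation unit.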

The UE4 Blueprint stuff does largely work how you expect; you derive your new Blueprint class or function from an existing base class and implement your code there. The docs on Blueprint development might help, although I think you might need to improve your C++ knowledge a bit before trying to code them.

Finally, a note on your noise lib of choice: the UE4 license specifically calls out that you are not allowed to use it with GPL code - while you keep it on your machine you are technically OK, but you can never release your code, nor anything based on it, due to this restriction.