
phantom

Member Since 15 Dec 2001
Offline Last Active Yesterday, 06:02 PM

#5187627 Why not use UE4?

Posted by phantom on 17 October 2014 - 04:48 AM

nowadays there are lots of platforms to target...


And in the case of Android that is basically 'one platform per phone per chipset per driver', as Android isn't one platform; it's an utter fuck tonne of them, all broken in various ways and lying to you in others.

Case in point: the Nexus 5 is, hardware-wise, an LG G2 - the former has rendering issues the latter doesn't have... well played Android, well played.


#5187412 max size for level using floats

Posted by phantom on 16 October 2014 - 08:57 AM

Huge ranges are just bad bad voodoo waiting to happen.

 

I worked on Operation Flashpoint - Red Dragon and our setup broke the world up into tiles, each roughly 512*512 units in size (where 1 unit = 1 metre). We kept 9 of them in memory at once (the tile the player was in plus the 8 around it); everything beyond that was kept as macro tiles (9 of the 512*512 tiles = 1 macro tile) with the colour information baked in.
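To make that concrete, here's a minimal sketch of the 3x3 residency rule; the tile size matches the above, but RequestTileResident is a hypothetical streamer hook, not our actual code:

#include <cmath>

constexpr float kTileSize = 512.0f; // 1 unit = 1 metre

void RequestTileResident(int tileX, int tileY); // hypothetical streamer hook

void UpdateResidentTiles(float playerX, float playerY)
{
    // Which tile is the player standing in?
    const int tx = (int)std::floor(playerX / kTileSize);
    const int ty = (int)std::floor(playerY / kTileSize);

    // Keep that tile plus the 8 around it in memory; anything further
    // out is served by the baked macro tiles.
    for (int y = ty - 1; y <= ty + 1; ++y)
        for (int x = tx - 1; x <= tx + 1; ++x)
            RequestTileResident(x, y);
}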

 

During rendering we had two passes for depth: 0.33 -> 333.33 (near/far) for close objects and 333.33 -> 3333 (iirc) for 'far', with fog removing things beyond that.

The 0.33 near value came from some experimentation of mine to find the closest value which would stop the player clipping but still let the road decals in the near scene render correctly without clipping.

(The roads were rendered in such a way that, after a certain distance, the verts were translated upwards by an amount which scaled with distance, so as to avoid clipping with the terrain they were being rendered over.)
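In sketch form the depth split looked something like this (hypothetical renderer hooks, not our actual code):

enum class Pass { Near, Far };

void SetNearFar(float nearPlane, float farPlane); // hypothetical hooks
void ClearDepth();
void DrawScene(Pass pass);

void RenderDepthPartitioned()
{
    // Far range first...
    SetNearFar(333.33f, 3333.0f);
    DrawScene(Pass::Far);

    // ...then clear depth so the near range gets the full precision
    // of the z-buffer to itself.
    ClearDepth();

    SetNearFar(0.33f, 333.33f);
    DrawScene(Pass::Near);
}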

 

All objects in a tile were expressed in tile-local coordinates, so rendering required building a world transform. The world's (0,0,0) was rebased every so often (on a 300*300 grid if memory serves, due to shadow map issues at the edges of tiles if we didn't rebase more frequently) and was effectively uncoupled from the view position, which also underwent this transform change.
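A minimal sketch of that rebasing, assuming positions are stored relative to the current origin (names illustrative, not the engine's actual code):

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

constexpr float kRebaseGrid = 300.0f;

void MaybeRebase(Vec3& camera, std::vector<Vec3>& positions)
{
    // How far has the camera strayed from the origin cell?
    const float ox = std::floor(camera.x / kRebaseGrid) * kRebaseGrid;
    const float oy = std::floor(camera.y / kRebaseGrid) * kRebaseGrid;
    if (ox == 0.0f && oy == 0.0f)
        return; // still in the origin cell, nothing to do

    // Shift everything (camera included) back towards (0,0,0).
    for (Vec3& p : positions) { p.x -= ox; p.y -= oy; }
    camera.x -= ox;
    camera.y -= oy;
}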

 

Physics also had to be rebased every so often; I believe that was also based on a 300*300 grid, so physics rebases occurred at the world rebase frequency.

 

With the streaming tech we had, this allowed us to move from one corner of a 24*15 (again, iirc) tile world to the other seamlessly, with zero precision issues.

 

If you introduce an extra far pass (so near-mid-far) you might be able to get planets and other large objects in scene without clipping issues, but you really do need to work on reducing the range as much as you can. This also applies to the camera: push the near plane out as far as you can without causing clipping issues (I guess with a space game you can push out a fair distance) to get back as much precision as you can from the z-buffer.
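You can see why this matters from the standard perspective projection, where window-space depth at eye distance d is f*(d - n) / (d*(f - n)); this little standalone test (not from the game) prints how much of the z-buffer a tiny near plane burns right in front of the camera:

#include <cstdio>

float WindowDepth(float n, float f, float d)
{
    return f * (d - n) / (d * (f - n));
}

int main()
{
    // Half the depth range is used up by d = 2n, so pushing near from
    // 0.33 out to 10 moves that 50% point from ~0.66m to ~20m.
    const float nears[] = { 0.33f, 10.0f };
    for (float n : nears)
        std::printf("n=%5.2f  z(2n)=%.3f  z(100m)=%.6f  z(1000m)=%.6f\n",
                    n, WindowDepth(n, 3333.0f, 2.0f * n),
                    WindowDepth(n, 3333.0f, 100.0f),
                    WindowDepth(n, 3333.0f, 1000.0f));
    return 0;
}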

 

Physics islands are probably the thing people think of least, however; you need to keep them as 'local' as you can for the same precision reasons, and rebase as the world moves around the camera.




#5187342 Why not use UE4?

Posted by phantom on 16 October 2014 - 03:28 AM

So, cards on the table: I work on the engine, so apply any bias you think is required to the reply....

 

So, why not?

Well, off the top of my head the main reason would be that the engine doesn't suit your goals - if your aim is to make a game without all the fiddly stuff then by all means look at UE4 or Unity and evaluate how they stack up for you; UE4 doubly so if you are a student, as you can now get it for free for at least a year.

 

If, however, your goal is to learn the fiddly bits of rendering (or indeed engine design) from the ground up, then it probably isn't a good idea. Heck, even as a learning resource it might not be great, because it is the product of a good 15+ years of building upon previous changes, which means there are bits of it which aren't pretty.

 

I personally wouldn't worry too much about the 5% business; while it means you have to hand over some cash after the fact, the overall deal removes a large amount of risk. It is 5% after a threshold ($3,000 per quarter), so you can still take $12,000/year gross before anything is due; if you don't do big numbers you won't have shelled out a lot of money for no return, and if you do big numbers then you can always use the income to negotiate a different up-front deal for your next game :)

 

Ultimately I'd say that UE4 etc. are not a great idea if you want to learn the tech, but are a good idea if the feature set closely matches your requirements for a game and shipping a game reasonably quickly is your primary concern.




#5187143 Learning the details of a DAW rather than "preset surfing"

Posted by phantom on 15 October 2014 - 07:05 AM

I'm at the hobbyist/rank amateur level in all this as well; however, over the last few months I've found various YouTube videos to be quite interesting/educational when it comes to various synths and VSTs.




#5186765 Do you use UML or other diagrams when working without a team?

Posted by phantom on 13 October 2014 - 03:14 PM

I feel that code structure diagrams are unnecessary, period.


Basically this.

At best I'll sketch out relationships/structure on a pad of paper (not using UML or anything, just boxes and arrows/lines) while designing a system, but once that is roughed out and I'm up and running it becomes out of date reasonably quickly.


#5182839 Best comment ever

Posted by phantom on 25 September 2014 - 03:04 AM

From 3rd party code for a game I worked on:

// This is always 2
#define MAX_PLAYERS 3



#5181624 Can you explain to me this code !

Posted by phantom on 19 September 2014 - 03:11 PM

Yes, you should, because that second check is not dependent on the first.

If you type 100, it first checks to see if the input contains '0' or '1'; it does, so it stores the number.
THEN it goes on to check if that number is less than 50, which it isn't, thus the 'greedy' message.

In your second post you say you input '55', which does not contain '0' or '1', so you go down the 'learn to type' path, which appends 'good job' in the function it calls before quitting.
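The original snippet isn't quoted in the thread, so purely as illustration, a hypothetical reconstruction of the flow being described (every name here is a guess):

#include <cstdlib>
#include <iostream>
#include <string>

void LearnToType()
{
    std::cout << "learn to type... good job\n"; // appends 'good job', then quits
    std::exit(0);
}

int main()
{
    std::string input;
    std::cin >> input;

    int number = 0;
    if (input.find('0') != std::string::npos ||
        input.find('1') != std::string::npos)
        number = std::stoi(input); // contains '0' or '1': store the number
    else
        LearnToType();             // '55' takes this path and quits

    // NOT nested inside the first check, so '100' falls through to here
    // and, being >= 50, triggers the 'greedy' message.
    if (number >= 50)
        std::cout << "greedy\n";

    return 0;
}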


#5180825 Optimization philosophy and what to do when performance doesn't cut it?

Posted by phantom on 16 September 2014 - 03:08 PM

a much more economical approach would be to optimize "as you go" or basically making things as efficient as possible before moving on.


This is covered in the full quote:

There is no doubt that the grail of efficiency leads to abuse. Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.
Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified.


Trying to make everything as fast as possible as you write it is, and will remain, a waste, as you will slow overall progress for little to no real gain.

The speed of the critical path and the hotspots on that path are the key points, and often rewriting or rethinking the problem in those areas has very little impact on the overall application (assuming you have designed it properly, which is a side issue).

Experience will guide you in time, but even experienced programmers will check themselves before making changes and profile to make sure their gut isn't throwing them off course.

Code is read more than it is written, so programming for readability at the cost of some speed (although being sensible still applies) is preferable - you can always come back and make it faster later, but making it correct and maintainable is a much harder job.


#5180644 Using C++ codebase

Posted by phantom on 16 September 2014 - 03:10 AM

#include is to do with bringing in files; it is a pre-processor directive which basically copies and pastes the code in the named file into the current .cpp being compiled.

In the case of 'math.h' this is a standard header file and #include will pull it into the file you are compiling.
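For example (nothing special about sqrt here, any math.h function would do):

#include <math.h>  // preprocessor pastes the contents of math.h here
#include <stdio.h>

int main(void)
{
    printf("%f\n", sqrt(2.0)); // sqrt is visible because math.h declared it
    return 0;
}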

DLLs have nothing to do with C or C++ directly, this is correct - they are an OS thing.

The UE4 blueprint stuff does largely work how you expect; you derive your new blueprint class or function from an existing base class and implement your code there. The docs on Blueprint development might help although I think you might need to improve your C++ knowledge a bit before getting into trying to code them.

Finally, a note on your noise lib of choice: the UE4 license specifically calls out that you are not allowed to use it with GPL code - while you keep it on your machine you are technically OK, but you can never release your code, nor anything based on it, due to this restriction.


#5179669 Sorting/structuring renderables and cache locality

Posted by phantom on 11 September 2014 - 01:56 PM

For distance you'll want the second version; however, instead of working out the distance, stick with the squared distance: it is cheaper to calculate as it doesn't need the square root operation, and it does the same job.
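A sketch of the idea; squared distance preserves the ordering of true distance, so comparisons come out the same:

struct Vec3 { float x, y, z; };

inline float DistanceSq(const Vec3& a, const Vec3& b)
{
    const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz; // no sqrt needed for comparing
}

// e.g. DistanceSq(a, cam) < DistanceSq(b, cam) orders exactly as the
// real distances would.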


#5179609 Code organization without the limitations of files

Posted by phantom on 11 September 2014 - 09:17 AM

However if you're asking "I want to see all the code that executes when I cast a fireball" then... well... that's what a debugger is for, surely? You're asking a run-time question.


Not really; you come to a new system, you are trying to add 'something' to it, but in order to do so you need to follow the flow of the code, and this can get complicated FAST when moving across multiple dependent and independent files.

I've had to deal with this a lot recently, and my current method of dealing with it is to use the 'pin tab' functionality of VS to track files I've been in (and when jumping among those files even that isn't great) as well as noting down the flow on paper... all of which is a bit of a faff.

Even a small example, adding ADPCM support for Android in UE4, resulted in my having four or five files pinned while trying to track all the logic going on.

What would have made this easier would have been anything which could have integrated the code fragments into a single view (or a sub-set of them into a couple of views if that made sense); bonus points for being able to display call linkage, either in the same view or in a 'map' view, so I could jump around and see where things go to/come from.

So, in the sound case, I wanted to see how 'createbuffer' was called; my current solution was to use VAX to 'find references' and open the correct file from there. What would have been 'nicer' would be the same initial flow ('find references'), but instead of files the output would be a list of tagged versions, so that I could say 'ahah! AndroidAudioDevice!', double click on it, and get that function inserted below the current code (which I could then optionally drag/drop above so that the flow made more sense); now I can see call site and destination in the same view.

By the end of this session I might have had 7 or 8 functions in the chain on screen - which would otherwise have meant having 4 or 5 files open - but instead all viewable in a single session.

----

This does feel like something which needs to be tackled at the language level too, however; many existing languages rely on the concept of a file and file system structure to control compile order or dependency information, which could cause problems and, ultimately, make it less useful for things like C or C++.

(Although, for languages like that you could maintain the existing file setup and have the IDE work magic to give you the new views and handle changing the original data when you update the functions).

Version control would either have to be based on 'fragments' too, or you'd have to provide a compatibility layer so that, while the fragment 'void foo()' doesn't normally sit in a file, the compatibility layer shoves it into one for version tracking. Being able to track at the fragment level would probably be better, however; bonus points for allowing a check-in to reference multiple fragments - 'changed Audio to support ADPCM', for example, would reference the 7 or 8 fragments changed to support it, which makes it clear what has changed in that check-in.

I think the idea has merit, but you would need to build it in from Day 1 and it would require retooling a few things, as well as a fundamental shift in how people think about code and structure.

I could see it making life easier however when you are tasked with doing Things in a large code base.


#5179111 Sorting/structuring renderables and cache locality

Posted by phantom on 09 September 2014 - 10:46 AM

That assumes one draw call per object; plus, by the time you've got to 'sort draw calls' you'll have already done a lot of dead object removal, so you should never see a 'dead' object in the draw call lists you sort.

At the highest 'game' level you'd be tracking the game entity with which any attached renderables (1 or more draw calls) are associated; when these die the renderer never sees them.

Vis-culling per "camera", again above renderer submission, takes care of visible objects for a given scene.

Only once you get beyond vis-culling do you start breaking renderables down into their draw-call components and start sorting them and routing them to the correct passes for a scene.
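In sketch form (illustrative stand-in types, not a real engine's API):

#include <algorithm>
#include <cstdint>
#include <vector>

struct Renderable { bool visible; std::uint64_t sortKey; };

// Dead entities were removed before this point, so culling only ever
// sees live renderables.
std::vector<Renderable> VisCull(const std::vector<Renderable>& scene)
{
    std::vector<Renderable> out;
    for (const Renderable& r : scene)
        if (r.visible)
            out.push_back(r);
    return out;
}

// Only after culling do renderables get broken down and sorted.
std::vector<Renderable> BuildDrawList(const std::vector<Renderable>& scene)
{
    std::vector<Renderable> draws = VisCull(scene);
    std::sort(draws.begin(), draws.end(),
              [](const Renderable& a, const Renderable& b)
              { return a.sortKey < b.sortKey; });
    return draws;
}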


#5179075 Sorting/structuring renderables and cache locality

Posted by phantom on 09 September 2014 - 06:59 AM

Dead flags, while they might seem like a good idea, require some thought; it is important to always make sure the work you are skipping is worth the cost of the compare and potentially mispredicted jump, as well as the extra cache space you are taking up. This one flag has bulked your data up by 4 bytes.

In some cases you might be better off keeping the 'dead' flag separate from the objects themselves; that way a single cache line can contain information about the alive state of 16 objects and keep the size of your larger structure down, meaning more cache wins and fewer overall trips to memory.
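As a sketch, the separated layout looks something like this (names illustrative):

#include <cstddef>
#include <cstdint>
#include <vector>

struct BigObject { /* hot per-object data, no flag inside */ };

struct Pool
{
    std::vector<BigObject> objects;
    std::vector<std::uint32_t> alive; // parallel array: 16 flags per 64-byte line
};

void Update(Pool& pool)
{
    for (std::size_t i = 0; i < pool.objects.size(); ++i)
        if (pool.alive[i])
        {
            // update pool.objects[i]
        }
}

(Packing the flags down to one bit each would stretch that to 512 objects per cache line, at the cost of a little bit twiddling.)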

In other situations killing and sorting might be a better win: if you can execute your main loop with the assumption that everything is alive, it makes the logic easier there, and a second sweep can be performed to kill any dead elements and compact your data structures.

I built an experimental particle system on this principle, with the 'simulate and draw' phases assuming all particles are alive and a 'kill' phase which would walk the (separate) lifetime information and do a 'mark and compact' scheme on the particle data. So, if you had 1000 particles, you would walk until the first 'dead' particle and remember that point, continue walking until you hit the next 'alive' particle and note that, then walk again until the next 'dead' particle (or the end), at which point you would copy the data (which was in separate memory locations per component) for the 'alive' block down, so all particles were alive up until that point. Repeat from step two until the end of the particles.
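A minimal sketch of that kill phase, assuming per-component (SoA) storage; it compacts element by element rather than block-copying runs, but the walk is the same idea:

#include <cstddef>
#include <vector>

struct Particles
{
    std::vector<float> x, y, z;  // separate memory per component
    std::vector<float> life;     // lifetime in its own array; <= 0 is dead
};

std::size_t CompactDead(Particles& p)
{
    std::size_t write = 0;
    for (std::size_t read = 0; read < p.life.size(); ++read)
    {
        if (p.life[read] <= 0.0f)
            continue;            // dead: leave a gap to be copied over
        if (write != read)       // copy live data down over the gap
        {
            p.x[write] = p.x[read];
            p.y[write] = p.y[read];
            p.z[write] = p.z[read];
            p.life[write] = p.life[read];
        }
        ++write;
    }
    p.x.resize(write); p.y.resize(write); p.z.resize(write);
    p.life.resize(write);
    return write;                // new particle count
}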

Particle data was just data, so the update/draw phase was nice and fast, and the 'are you dead?' phase touched the minimal amount of memory as lifetime information was in its own place in memory. The copy down might have had some cost, but it wouldn't have been paid often, was a trade-off against the fast update, and was a linear access so would have been prefetcher friendly.

(This was a few years ago, but recently an article surfaced which backed up the idea: someone tested the cost of sorting a huge array + linear access vs jumping about in memory (including pools and the like); short version - sort + linear access won.)

But, as with all things, it requires some thought; don't go blindly throwing in 'dead' flags but at the same time don't blindly throw the idea out either.

Know your data.
Know your processing.


#5178848 Sorting/structuring renderables and cache locality

Posted by phantom on 08 September 2014 - 07:27 AM

Btw are you implying that I should sort by both transform and key? Say, I first sort by keys, and then again within all renderables that have the identical key(unlikely) sort again by transforms?


Yes.

You'll want your draw calls ordered by material (shader + textures + constants, for example) and then the instances within that in a rough front-to-back order (a z sort based on the centre point of a bounding sphere would probably do it), which will get you good z coverage (for non-translucent objects; stuff which blends generally needs to be drawn back to front).

However, you can build all this into a single 64-bit sort key, where the top 32 bits hold details about pass, material id and other stuff and the bottom 32 bits contain the depth information, allowing you to sort on a single thing in one pass.
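For example, one possible key layout (field widths illustrative):

#include <cstdint>

std::uint64_t MakeSortKey(std::uint32_t pass, std::uint32_t materialId,
                          float depth, float maxDepth)
{
    // Quantise depth into the low 32 bits; invert these bits if you
    // want back-to-front for translucent passes.
    float t = depth / maxDepth;
    if (t < 0.0f) t = 0.0f;
    if (t > 1.0f) t = 1.0f;
    const std::uint32_t depthBits = (std::uint32_t)(t * 4294967295.0);

    // Pass and material sit above depth so they dominate the ordering.
    return ((std::uint64_t)(pass & 0xFFu) << 56)
         | ((std::uint64_t)(materialId & 0xFFFFFFu) << 32)
         | depthBits;
}

Sorting on the resulting integers then gives you pass, then material, then rough front-to-back, all from one comparison.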


#5178845 Unreal engine 4 free for students !

Posted by phantom on 08 September 2014 - 07:21 AM

One of the nice details about this is that if you use the engine on your course, then even once you've finished the course you can keep using the version of the code you had when you left, without having to pay to get access (you won't get updated code without paying, of course), which means you can effectively make and release a game at zero cost.

(On release you are subject to the normal royalty fees of course, but that is on money made, so your outlay for the engine could still be zero.)



