

Member Since 10 Nov 2005
Offline Last Active Yesterday, 08:40 PM

Posts I've Made

In Topic: An alternative to "namespaces"

18 October 2016 - 04:39 AM

If you want to avoid using namespaces but retain a sense of structure, C++ allows the use of nested classes. Just be sure to make the declaration public so you can access it from outside the enclosing class:

class IGame {
public:                        // nested types default to private inside a class
   struct scene_t { /* ... */ };
};                             // accessible from outside as IGame::scene_t

I rarely use this myself. In fact, I limit nested classes to a more local scope, to encapsulate logic that is (and should be) limited to the parent class itself, but whose members would otherwise unnecessarily pollute the class member list:

enum class eGameState { /* ... */ };   // assumed defined elsewhere in the codebase
enum class eUIState   { /* ... */ };

class IGame {
    // private by design: these structs group related state without
    // polluting IGame's own member list
    struct properties_t {
        eGameState state;
    } properties;

    struct ui_t {
        eUIState state;
    } ui;
};

In Topic: Minimizing draw calls and passing transform/material data to a shader in Open...

11 October 2016 - 05:35 PM

Thanks for the thorough response, TheChubu! I got a chance to look into ARB_shader_draw_parameters and even my GTX 960M doesn't have it, so I don't see a reason to add a codepath for it. I still haven't had the chance to sit down and work on the actual code, but at the very least ARB_base_instance is present, which has been core since GL 4.2. PS - I appreciate the link to your render loop architecture.


As for materials: frankly, I have been working on the deep innards of my game thus far, and the visual side is in dire need of an upgrade from a texture-based to an actual material-enabled approach. I generally try to minimize code iteration and time spent reimplementing features, so I'd like to upgrade my render pipeline to draw meshes with multiple materials and, in one fell swoop, extend it to handle layered materials. I may be misguided here, but given the evidently fairly small maximum array sizes in shaders of the time, this introduces a potentially limiting complication: if handled poorly, the whole effort of minimizing draw calls degenerates into constantly remapping materials, which defeats the purpose. In any case, I haven't gotten around to adding materials to my shader pipeline yet, so this was more of a preparatory "best practices" type of question :).
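
For concreteness, here's a minimal sketch of the kind of draw loop I have in mind - not working code, and all the names (DrawItem, drawAll, materialSlot) are made up. It assumes a GL 4.2 context (or ARB_base_instance), a bound VAO and index buffer, and a material array in a UBO/SSBO that the vertex shader indexes via an instanced attribute:

#include <GL/glew.h>
#include <cstdint>
#include <vector>

struct DrawItem {
    GLsizei indexCount;   // number of indices for this mesh
    GLsizei indexOffset;  // byte offset into the bound index buffer
    GLuint  materialSlot; // index into the material array
};

void drawAll(const std::vector<DrawItem>& items) {
    for (const DrawItem& item : items) {
        // The base instance offsets instanced vertex attributes (those with
        // a non-zero divisor), which the vertex shader can forward as a
        // material index - no per-draw uniform updates needed. Note that
        // gl_InstanceID itself is NOT offset by the base instance.
        glDrawElementsInstancedBaseInstance(
            GL_TRIANGLES, item.indexCount, GL_UNSIGNED_INT,
            reinterpret_cast<const void*>(static_cast<std::uintptr_t>(item.indexOffset)),
            1, item.materialSlot);
    }
}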

In Topic: Calculating distance between very large values.

25 August 2016 - 06:24 AM

Frankly, a loss of precision at these scales is a non-issue - I knew that much before I started this thread :). That being said, on the one hand that's not really the point of the discussion, and on the other hand I hadn't considered directly casting things to doubles, which on closer inspection results in far less error than I initially figured, considerably alleviating the problem without much further thought.
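
Just to convince myself, here's a quick back-of-the-envelope check (the code and numbers are purely illustrative): round-tripping a km-scale 64-bit coordinate through a double shows how small the cast error actually is.

#include <cstdint>
#include <cstdio>

int main() {
    // roughly 1 kiloparsec expressed in km, stored as a 64-bit coordinate
    std::int64_t coord = 30856775814913673LL;

    // doubles carry a 53-bit mantissa; at this magnitude (~2^55) adjacent
    // doubles are 4 km apart, so the cast rounds by at most 2 km
    double d = static_cast<double>(coord);
    std::int64_t roundTrip = static_cast<std::int64_t>(d);

    std::printf("rounding error: %lld km\n",
                static_cast<long long>(coord - roundTrip));
}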


However... I did do some further calculations to see what the limitations would be if I wanted to encapsulate the entire observable universe and calculate distances at a precision that is absolutely minuscule with respect to the numbers involved. And I think I've figured out a somewhat reasonable, mathematically sound scheme to do so, which is in no way, shape or form overly fascinating. Or useful. In fact, without changing the above data structure, AFAICT distances can be calculated to roughly within one micrometer across the entire observable universe, while only accepting some - though arguably considerable - waste of storage space. All of which is something every science nerd needs more of. Right?


tl;dr: please stop reading now. I'm way overthinking this. 




In case the tl;dr didn't discourage you, here are some facts and presumptions:


1) the size (diameter) of the observable universe is 93 billion light years.

1.1) I was initially thinking of using 1 AU (astronomical unit) as the cutoff distance, but I wanted to be able to store a normal-sized solar system within single-precision range, which an AU cannot cover

2) speed and storage are not really issues

3) however, I don't want to use arbitrary precision math, i.e. everything should stay within the pipeline a regular 64-bit CPU can handle

4) the biggest presumption here is that my math is actually correct. Which it may very well not be.


And here's the rundown:


a) assume overflows are unacceptable and it is undesirable to make use of the upper parts of either double precision floating point or some spatially non-symmetrical fixed-point distance metric. The maximum squared distance between two 64-bit coordinates located within a cube hence becomes:


x^2 + x^2 + x^2 = 3*x^2 = 2^63 = 9223372036854775808 km^2
Which resolves to:
x^2 = 3074457345618258602.6(6) km^2 =>
x = 1753413056.1902003251168019428139 km, or =>
x = 5.6824247180601 pc


Which is ~200 times less than the kiloparsec range I was initially trying to stretch the global coordinates to. No worries. Let's just reduce the global scale to the range [1 km; 1 pc].


b) with this there's enough precision to calculate distances directly without fear of overflow. The precision cutoff point of 1 full unit in double precision floating point format (i.e. where the precision dips below one full unit) is around 1e+20, which is notably ~10x larger than the 9223372036854775808 km^2 upper limit for the squared distance. Which is actually a pretty nice fit.
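
In code, the distance calculation itself boils down to something like this sketch (the Vec3i64 type and the names are just for illustration):

#include <cstdint>
#include <cmath>

// hypothetical local-scope coordinate: signed 64-bit km per axis, with the
// axis range clamped as derived above so that 3*x^2 stays below 2^63
struct Vec3i64 { std::int64_t x, y, z; };

double distanceKm(const Vec3i64& a, const Vec3i64& b) {
    // deltas fit comfortably in 64 bits given the clamped range; casting to
    // double before squaring keeps the squared sum well within the precision
    // budget discussed above
    double dx = static_cast<double>(a.x - b.x);
    double dy = static_cast<double>(a.y - b.y);
    double dz = static_cast<double>(a.z - b.z);
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}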


c) the bigger problem here is wasted storage space. Using 64 bits to store distances from 1 km to 1 kiloparsec nets a whopping log2(974932) - log2(1000) = 9.9291577864 bits of wasted space per axis. Reducing the upper limit to one parsec bumps this up to 19.8949420711 bits.


Which is a total of ~60 bits of unused space when storing a single coordinate. However, that's not all.


d) the same logic can be applied to the intergalactic scope, which is also a 64-bit integer 3-vector, boosting the amount of wasted space to around 15 bytes per coordinate. Which is a LOT.


e) that being said, using 44 bits of precision per axis on the intergalactic scale on top of the 1 parsec local scale amounts to a maximum universe size of (2^44 * 3.26) ly / 93000000000 ly = ~616x the size of the observable universe.


Success! Right? Well, yeah - as long as you don't consider that each coordinate wastes more space than a full-blown 3-vector used for rendering occupies.


There are a couple of upshots to this, however:


a) the extra bits can optionally be used to store rotational or other intrinsic data, such as brightness and/or color (see the sketch after this list).


b) assuming most of the universe is procedurally generated and can hence be quickly discarded on demand, the number of objects that need to be stored with extended precision at any one time is actually relatively small. Likely in the upper hundreds or lower thousands. Which doesn't really amount to too much wasted space in the end.
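
To illustrate point a), a packed per-axis layout might look like the following sketch (the 44/20 split and the names are hypothetical):

#include <cstdint>

// hypothetical layout: 44 bits of axis position plus 20 spare bits of
// per-object payload (a brightness/color index, etc.) in a single uint64
constexpr std::uint64_t kPosBits = 44;
constexpr std::uint64_t kPosMask = (1ULL << kPosBits) - 1;

constexpr std::uint64_t pack(std::uint64_t pos44, std::uint64_t payload20) {
    return (payload20 << kPosBits) | (pos44 & kPosMask);
}

constexpr std::uint64_t unpackPos(std::uint64_t v)     { return v & kPosMask; }
constexpr std::uint64_t unpackPayload(std::uint64_t v) { return v >> kPosBits; }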


So - voila. Here's to 2 hours well spent! Because SCIENCE!


Incidentally, if anyone's bored, please feel free to check my math.

In Topic: Example of games with no tutorial (help please!)

20 July 2016 - 02:15 AM

Of the bigger-budget titles, The Witness is an interesting example. It does have "tutorial phases" for its variety of puzzle types, which essentially present you with a series of increasingly difficult puzzles of a single kind. However, it does next to no hand-holding, which means that you'll find yourself staring at a small set of symbols and trying out various combinations in order to figure out the logic behind them.


Frankly, I love The Witness' approach to puzzle solving - the game mechanics themselves are a puzzle that you need to work out through trial and error and deductive thinking before you can go on to solve the actual, more complex puzzles.

In Topic: undefined symbol

04 July 2016 - 01:54 AM

Are you including the header in main.cpp?