

Oberon_Command

Member Since 07 May 2003

#5238216 What would you be willing to trade to get your ideal job in the gaming industry?

Posted by Oberon_Command on Yesterday, 11:21 AM

Also, for those who went to college full time, an 8 hour day is a reprieve. Homework, and even a job on top of class, usually turns out to be more than a 40 hour work week. I know I was constantly staying up late and waking up early to get my 14 credits of work in, to achieve my goal of an A, and to work to get where I am today.


Speaking as someone who went to college full time myself: no, not always. School is not like work. Sitting in a lecture hall watching someone talk at you for a few hours a week is not like solving programming problems 8 hours a day. Doing homework where the only person your failure affects is yourself is not like debugging a crash bug that puts the salary of everyone on the team in danger if you don't solve it in a few hours. I remember leaving my last co-op work term thinking that going back to school was the reprieve! I've always found that I have much less energy after a day of working than I ever did after a day of homework and classes. I think many, or even most, people would agree with me.

Of course, this is with me enjoying my game development job; I'm not particularly interested in changing careers at all. But it is a job.


#5237152 If-else coding style

Posted by Oberon_Command on 27 June 2015 - 03:33 PM

The reason I didn't use "get; private set;" in this case is because the property doesn't actually exist as data: it's returning the result of an expression. Under certain conditions it's not able to do what it's supposed to, hence the "return null".


Why is it even a property in the first place, then? Without more context, this just seems like property abuse to me. If accessing the property can fail or has some kind of side effect, it should be an appropriately-named method to make that clearer.

You seem to be interpreting the review as saying when reading the inside of the function, it's "very misleading".
My guess is that the reviewer is saying that to someone using the function, it's misleading.


This.


#5237131 If-else coding style

Posted by Oberon_Command on 27 June 2015 - 12:21 PM

In some places, if you have a method that can either return some valid value or null, it's considered good practice to call the method "TryAndGetTheThing". That might be part of what was "very misleading" - "GetTheThing" implies that it will always retrieve a valid thing, whereas "TryAndGetTheThing" tells the person using the method that it can fail and return something invalid.

I don't think that really applies to properties, per se. Properties should be named like variables, since that's effectively the interface they provide, which means you wouldn't call your property "GetTheThing" in the first place.
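
To illustrate the naming idea with a rough sketch (C++ rather than the thread's C#, and the names are made up, but the convention is the same):

#include <optional>
#include <string>

// The "Try" prefix tells callers that failure is part of the contract.
std::optional<std::string> TryGetTheThing(int id)
{
    if (id < 0)
        return std::nullopt; // can't produce a valid thing
    return "thing #" + std::to_string(id);
}

A plain GetTheThing() would imply it always succeeds, so callers might never think to check the result.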


#5236422 Is it really as simple as read a book and then try to figure things out?

Posted by Oberon_Command on 23 June 2015 - 02:06 PM

I will point out that no programmer I know would call the ability to type a part of a programmer's skill (except you, apparently - have you never taken a paper exam where you programmed?), and syntax is tangential at best to what I, at least, mean when I refer to "programming." We don't measure the skill of a programmer by their typing speed; we measure it by what they actually accomplish and how.

 
Really? I know plenty of programmers who appear to focus on learning all of the IDE shortcuts and macros available to speed up code generation, while putting no effort into learning algorithms or learning any programming language in more depth. I believe that all falls under prioritizing "the ability to type".


Knowledge of one's tools isn't totally irrelevant, practically speaking. A programmer who is proficient with code browsing tools will probably be faster at learning new systems and fixing bugs, but that's not so much a part of "code generation skills" as it is analytical skill and understanding how to make use of the resources given to you. The point I was making was that "typing", as in the physical act of producing the code, is IMO at most a minor part of programming skill. Being skilled with an editor or keyboard interface isn't being skilled at programming; it's being skilled with that editor. Knowing some macros won't help me if I can't make sound decisions about what good code is for my project. Knowing time-saving features of my editor may increase my productivity, but what good is increased productivity if the product being produced is shit that belongs on The Daily WTF? Then my editing skills are just a laxative.

Suppose you know a programmer who can pump out lots of code quickly, but when their code is reviewed or tested, they have to rewrite all of it multiple times because it's hard to understand, takes days to debug to a working state, and doesn't do what was actually asked. Suppose you also know a programmer who produces code more slowly, but whose code takes less debugging to get right and is straightforward to follow. Which one would you say is the better programmer?


#5235375 Is it really as simple as read a book and then try to figure things out?

Posted by Oberon_Command on 17 June 2015 - 07:55 PM

Fluid intelligence (logical thinking, analytical thinking, pattern recognition, etc.; a factor of general intelligence) has been consistently proven to remain relatively stable throughout adult life, so improving it through practice would be—I'm afraid—sisyphean.


I note that you leave critical thinking out of your definition. Has critical thinking ability been shown to be static over one's life? I rather suspect not, given that my own personal experience is that I have become more and more logical and a better critical thinker as I've aged.
 

If programming skill is to be entirely an acquirable skill that can be improved through training, then it must be distinct from fluid intelligence.


Not necessarily. Part of programming skill is learning to apply your intelligence. Pattern recognition skills are not useful if the patterns being recognized are irrelevant. Analytical skills are useless if you apply them only to minutiae when big-picture thinking is needed, or vice versa. Solving a problem becomes painfully slow if you don't have experience to inform your fluid intelligence, because without experience, solving the problem necessitates inventing (which I note you're not accounting for at all in your posts) something that has already been invented, probably badly. Design patterns books and the like are an attempt to solve that problem somewhat. I'm a little disenchanted with "design patterns" of late, however. From what I've seen of Comp. Sci. majors who study them, they tend to encourage rote memorization and "thinking of the problem in terms of the solution" as opposed to actual programming.

If you want an anecdote: I started learning to program from books at age 9, but my ability didn't really improve beyond copy-pasting blobs of syntax around until I pushed myself into having to solve problems that didn't have neat solutions in the back of the books. It therefore seems obvious to me not only that a huge part of programming is the mindset one brings to the problem solving programming entails, but also that this mindset can be learned and unlearned.
 

If programming skill is to be entirely an acquirable skill that can be improved through training, then it must be distinct from fluid intelligence.

Programming performance then depends on programming skill and fluid intelligence.


If I heard someone talking about a programmer's "performance," that would imply to me a past tense - what a programmer was able to build and how quickly. What you're calling "programming performance" is what everyone I know would call "programming skill/ability." Many people believe that programming ability is something one is born with and that experience only makes you better at it.

A key part of "programming skill" that you've left out is the ability to turn theory into practice and how that is approached. That is vitally important. More than typing, I'd say. See my point above on "mindset."
 

If we thus factor out fluid intelligence,


If. I am unconvinced that factoring it out results in a useful distinction.
 

(1) knowledge of theory (i.e., understanding how different factors work together in different situations etc., this includes understanding gained from experience)


So you concede the point? :)
 

I believe if you re-read my posts with this understanding, and understanding of the terminology I use, you will understand my position.


Redefining words away from their commonly-accepted usage in order to fit them to your model of the situation has indeed made your position clearer. I will point out that no programmer I know would call the ability to type a part of a programmer's skill (except you, apparently - have you never taken a paper exam where you programmed?), and syntax is tangential at best to what I, at least, mean when I refer to "programming." We don't measure the skill of a programmer by their typing speed; we measure it by what they actually accomplish and how.
 

My conclusion was not just “jumped” to, but directly drawn from Kolb's theory.


Which is not without criticism, I note. The point that "empirical support for the model is weak" seems particularly damning.

This person would likely be far more motivated to learn by reading a book, and perhaps even give up when forced to learn too much by doing. But once sufficient knowledge and understanding has been acquired, the person might be keen to get to work, and perform very well; whereas when attempting to learn by doing during that time despite feeling discomfort, the person might perform significantly worse after the same amount of time.


Again, we're talking about different things. You seem to be talking about the process of learning individual things; we're talking about the big picture of learning to be a skillful programmer. At some point, one must leave the books behind to forge one's own path through the quagmire of others' bug-riddled, nigh-unreadable code (until you've learned to work with it) and poor documentation, and actually write some code oneself to become a good programmer. That is the point being made. Apologies if that point was not quite clear.

I believe that there are some aspects of programming that one can only master by doing. Being one who learns faster by reading is irrelevant if there's nothing to read. I say this as one who does prefer to learn new things about programming from books, myself. :P


#5235335 Is it really as simple as read a book and then try to figure things out?

Posted by Oberon_Command on 17 June 2015 - 03:46 PM

There are essentially three components to programming: theory, syntax, typing. The better your theoretical understanding, the more syntax you know, and the faster you can type without errors, the better you can program.


I'm afraid this is very wrong. Programming is much more than that. It is also critical thinking, model forming, and knowledge synthesis. It is making decisions about how to accomplish something given limited resources and a necessarily incomplete understanding of the totality of the system. It is a thought process, a cognitive skill, not a body of knowledge that can be memorized and applied mechanically. One could easily learn what (e.g.) the visitor pattern is, and in what kinds of circumstances one might wish to apply it, but whether or not the visitor pattern is an appropriate implementation choice is a decision based mainly on knowledge of the situation, experience, and critical thinking. Very little of that decision would be based in theory, almost none of it in syntax, and typing? Really? Most programmers spend more time reading code than typing it.

What you're talking about is "programming" in the same way that solving a quadratic equation is "mathematics." Which is to say, one could argue that it isn't really.
 

I am directly comparing gaining a theoretical understanding and learning the syntax of programming to gaining a theoretical understanding and learning the syntax of addition and subtraction.


Except that programming is far more than syntax, so this comparison isn't all that useful. To slightly modify a well-known quote, syntax is to programming what telescopes are to astronomy.
 

Let me share a personal anecdote then. I have consistently outperformed every other student in math class in every school I have visited. “Gaining experience” solving problems that I could already consistently solve correctly (and in a fraction of the time others needed) from the very beginning did practically nothing for me. By reading on however I could expand my theoretical knowledge, and thus expand the variety of problems I could solve accurately and fast.

 
Again, you make the mistake of conflating learning to compute with learning to program well. The two are very different. For starters, in programming there is no single definitively "correct" answer - there are many, some of which are considered preferable to others. If you want a better comparison with a mathematical ability, then I submit that learning to program is more like learning to do proofs.

Of course, even that comparison is imperfect. With proofs you're exploring a formal system to find a meaningful way to show that a statement is consistent with that system; with programming, you're creating a new formal system or (more likely) modifying an old system to do what you need.
 

A general statement that doing x is always best to learn y (for cognitive skills) is failure to acknowledge the diversity of human cognition. It is perhaps true of some or even many people, but certainly not of all.


It's more that a lot of a programmer's skill comes from their experience and exposure to programming situations in general, not from learning knowledge explicitly. What you're talking about seems to be the acquisition of knowledge through schooling. That's not what we're actually discussing, as jHaskell points out. I will let others with more experience fill in other reasons why this is, but suffice to say that experience, not knowledge, is a huge part of what makes a programmer skilled. One could easily learn what heap corruption is, but knowing how to debug heap corruption is much more than that: it's knowing how to use your tools, knowing your codebase, and consequently knowing how and where to start looking. That kind of knowledge comes almost entirely from experience.

At some point, if you want to be a skilled programmer, you need to program lots of things. Otherwise, you won't learn the things you'll need to know that aren't written down anywhere. Theoretical knowledge and understanding of how to solve particular examples of problems only go so far.


#5235316 Is it really as simple as read a book and then try to figure things out?

Posted by Oberon_Command on 17 June 2015 - 12:58 PM

A 2004 literature review identified 71 different learning styles theories.


Which one were you thinking of in particular? And how does it account for the fact that actual programming (not merely knowing the syntax of a programming language) is a combination of creativity, applied experience, and problem solving, much of which I'd argue actually cannot be developed to a useful level by reading alone? I don't think you can learn inventiveness from a book. Like sailing, playing an instrument, and scale modelling, it's a skill that has to be developed by actually using that skill even if the foundations can come from a book.

I would be quite impressed if you found somebody who learned to ride a bicycle only from a book - as in, they read said book, got on a bicycle, and rode off without falling once or having balance problems.

 
It is curious that you replied only 14 minutes after I made my post. Did you have ample time to thoroughly read the 205-page document referenced in your quote from Wikipedia to get a deep and educated overview and understanding of the field, or did you perhaps quote the first sentence on the Wikipedia article that in your lack of understanding seemed to confirm your already preconceived opinion? Take any of the 71 identified learning styles theories you want, they all explain that there are different learning styles—that's kind of the point.

 
My point was that it's difficult to argue against any theory you're advancing if I don't know which one that actually is. I'm well aware that there are different learning styles, but different theories aggregate the different traits of those styles into different categories. I don't need to read the entire link to make that observation based on an executive summary of the material.
 

You are directly comparing cognitive skill to motor skill. I do not think you are aware of the fallaciousness of your argument.


It was an analogy. I'm aware that the two are not the same. The purpose of the analogy was twofold:

1. to show that there exist situations in which a "learning style" is largely irrelevant to how learning happens.
2. to make the case that programming is one such situation. Others can no doubt expound on why better than I can.
 

but good luck explaining to an extremely gifted child who understands addition and subtraction from a simple description in ten seconds flat why s/he needs to “do” whatever you think s/he needs to do in order to learn something s/he has already learned.


1. You're directly comparing learning to program to learning rote addition. Given how very different those cognitive tasks are (about on the level of punching numbers into a calculator vs. discovering a proof of the irrationality of sqrt(2)), I hope that upon reconsideration of the quoted post you will become aware of the fallaciousness of your argument. Programming is not computation. Learning how something (e.g. addition) works is very different from inventing new things.
2. One may choose to believe that they can learn to program faster by reading about programming than by doing, but what an "extremely gifted child" thinks and what is actually true are not necessarily the same thing.

I will also say that a big part of learning to program well and be a useful software developer is learning to read others' code. Reading about programming can certainly help with that, but there is no substitute for actual exposure to large swathes of people's code. Do note that when I say "learning to program," I really mean "learning to program well."


#5235287 Is it really as simple as read a book and then try to figure things out?

Posted by Oberon_Command on 17 June 2015 - 08:47 AM

I find that a dubious claim at best.  Reading is certainly an important part of learning to write code, but I've never met a single proficient developer who hasn't written a ton of code, while I have worked with more than one CompSci Ph.d. that was barely competent at actual software development.

 
In the end, proficiency and expertise are attained by doing.

My claim is grounded in theory, your evidence is anecdotal. ;)

 

A 2004 literature review identified 71 different learning styles theories.


Which one were you thinking of in particular? And how does it account for the fact that actual programming (not merely knowing the syntax of a programming language) is a combination of creativity, applied experience, and problem solving, much of which I'd argue actually cannot be developed to a useful level by reading alone? I don't think you can learn inventiveness from a book. Like sailing, playing an instrument, and scale modelling, it's a skill that has to be developed by actually using that skill even if the foundations can come from a book.

I would be quite impressed if you found somebody who learned to ride a bicycle only from a book - as in, they read said book, got on a bicycle, and rode off without falling once or having balance problems.


#5235198 Why didn't somebody tell me?

Posted by Oberon_Command on 16 June 2015 - 04:29 PM

I just discovered you can toggle bits on/off in the Windows Calculator when it's in programmer mode just by clicking on the 1's and 0's themselves. That's... I can't... why didn't someone tell me?!


*mind blown*




#5234979 Yet Another Asset Manager...

Posted by Oberon_Command on 15 June 2015 - 05:25 PM

What do you mean by parametrizable loader?


Imagine that you want your game to support multiple languages. This will probably mean having several string tables, at the very least, and probably different UI elements for the different languages (since some languages are read left-to-right while others are read right-to-left, etc.). So when you load your assets, you'd need some way of telling your loader to load the assets for the language the user has selected.

Or imagine that you want your game to support user-configurable quality settings. This could mean having several different versions of the same model available. That would mean your loader would have to know somehow what quality setting to use and what that quality setting means in terms of which assets it should load.

These are things that a stateful asset loader might help you deal with, while a stateless asset loader could mean more work to keep all these configuration parameters straight.
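
For instance, here's a rough sketch of a loader that carries that kind of configuration (all names made up for illustration):

#include <string>
#include <utility>

// Hypothetical settings the loader carries so individual call sites don't have to.
struct AssetLoaderConfig
{
    std::string language = "en"; // which string tables / UI layouts to pick
    int qualityLevel = 2;        // which model/texture variants to pick
};

class AssetLoader
{
public:
    explicit AssetLoader(AssetLoaderConfig config) : m_config(std::move(config)) {}

    // The caller just asks for "menu_background"; the loader resolves the actual
    // variant from its configuration, e.g. "menu_background.fr.q2".
    std::string ResolveVariant(const std::string& assetName) const
    {
        return assetName + "." + m_config.language + ".q" + std::to_string(m_config.qualityLevel);
    }

private:
    AssetLoaderConfig m_config;
};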


#5234966 Yet Another Asset Manager...

Posted by Oberon_Command on 15 June 2015 - 03:32 PM

1. Assets tied to a game world. These should be preloaded when the world is loaded. The world itself manages these assets.


YMMV, but in most mid-sized to large game engines I've worked on, the game world itself doesn't own the assets. Instead, the game world is just a description of the current state of the world; assets like models are dealt with purely by the renderer, meaning the game logic doesn't need to know about them (or shouldn't - I've worked on one game where it did, and it caused some nasty problems, though those might be unique to that particular situation), while object definitions and the like are cached together in one place that both the renderer and the game logic can access.
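
A very rough sketch of that separation (all names hypothetical, details elided):

#include <cstdint>
#include <unordered_map>
#include <utility>
#include <vector>

using ModelHandle = std::uint32_t;

// The game logic only knows *which* model an object uses, not the model data itself.
struct GameObject
{
    float x = 0, y = 0, z = 0;
    ModelHandle model = 0;
};

// The world is just a description of current state; it owns no renderer resources.
struct GameWorld
{
    std::vector<GameObject> objects;
};

// The renderer owns the actual model data, keyed by handle.
class Renderer
{
public:
    ModelHandle CreateModel(std::vector<std::uint8_t> data)
    {
        ModelHandle handle = m_nextHandle++;
        m_models[handle] = std::move(data); // real code would upload to the GPU here
        return handle;
    }

    void Draw(const GameWorld& world)
    {
        for (const GameObject& obj : world.objects)
            (void)m_models.at(obj.model); // look the model up by handle and submit it
    }

private:
    ModelHandle m_nextHandle = 1;
    std::unordered_map<ModelHandle, std::vector<std::uint8_t>> m_models;
};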
 

2. Things used on the fly. This comes up due to some happenstance. A model is needed for an enemy, a certain GUI fires off. In this case the management should be by whatever uses it.


What if that certain GUI or enemy is probably needed more than once, but you also don't want to preload it? If you reload it every time the event that owns the asset fires off, then you could be thrashing around loading assets when you don't have to. In that case, you'd want to cache the asset somewhere for a time until you're sure you aren't going to need it anymore.
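
A very rough sketch of what I mean (hypothetical types; a real cache would add a grace period or explicit eviction):

#include <memory>
#include <string>
#include <unordered_map>

struct Asset { /* pixels, vertices, whatever */ };

// Loads on first request, hands out shared ownership afterwards, so repeated
// GUI pop-ups or enemy spawns don't hit the disk every time.
class AssetCache
{
public:
    std::shared_ptr<Asset> Acquire(const std::string& name)
    {
        auto it = m_assets.find(name);
        if (it != m_assets.end())
        {
            if (auto existing = it->second.lock())
                return existing; // still alive somewhere, reuse it
        }

        auto loaded = std::make_shared<Asset>(); // stand-in for the real load
        m_assets[name] = loaded;
        return loaded;
    }

private:
    // weak_ptr so the cache alone doesn't keep assets alive forever.
    std::unordered_map<std::string, std::weak_ptr<Asset>> m_assets;
};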


#5234109 Engine design, global interfaces

Posted by Oberon_Command on 10 June 2015 - 12:22 PM

Well, I know you made a big assumption, and didn't read what I said. Because you are talking as if all singleton usage means you declare a class as singleton. So does Oberon_Command.


Because a class that can have only one instance of itself is literally the definition of a singleton. Anything else is not a singleton. Failure to understand this constitutes failure to pay attention to and understand how modern software engineers use words.
 

vec<SoundData>* OptionsMenu::getAllPlayingSounds()
{
   static vec<SoundData> sounds;
   return &sounds;
}

void OptionsMenu::playSound(SoundData sd)
{
   static vec<SoundData>* sounds = getAllPlayingSounds();
   sounds->add(sd);
}


Why do you think these need to be statics in the first place? Just make them member variables. I can think of no reason these sound lists need to be static.
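
Something like this, reusing the same hypothetical vec/SoundData types from the quoted code:

// Rough sketch: the sound list is just per-instance data; no statics involved.
class OptionsMenu
{
public:
    const vec<SoundData>& getAllPlayingSounds() const { return m_playingSounds; }

    void playSound(SoundData sd) { m_playingSounds.add(sd); }

private:
    vec<SoundData> m_playingSounds; // lives and dies with the menu that owns it
};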
 

This is another form of singleton, nothing to do with a singleton class.


Nobody but you would call what you're posting about a "singleton." If getAllPlayingSounds were a member function of vec<SoundData>, and vec<SoundData> had private constructors so that it could only be instantiated from within getAllPlayingSounds, then this would be an example of a Meyers singleton. But it's not, so it isn't a singleton. Unfortunately, I don't know (for certain) a better term. "Lazy-initialized global," maybe? That seems appropriately descriptive, since this is fundamentally a global that has its initialization hidden in such a way that it isn't initialized until first use.
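
For reference, roughly the distinction I'm drawing (sketch only; AudioSystem is a made-up example class):

// A Meyers singleton: the class itself enforces "only one instance, ever".
class AudioSystem
{
public:
    static AudioSystem& Instance()
    {
        static AudioSystem instance; // constructed on first use
        return instance;
    }

private:
    AudioSystem() = default; // nobody else can construct one
    AudioSystem(const AudioSystem&) = delete;
    AudioSystem& operator=(const AudioSystem&) = delete;
};

// A lazy-initialized global: vec<SoundData> is an ordinary type that anyone can
// instantiate; only this particular access path is global and lazily constructed.
vec<SoundData>& getAllPlayingSounds()
{
    static vec<SoundData> sounds;
    return sounds;
}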
 

The fact is, you cannot avoid this 'pattern'. Any project of decent size will use this, or rely on a library which uses it. It's impossible to avoid because of how C++ works.

 
Yes, you can avoid it by not using static state in the first place and avoiding libraries that do this kind of thing. Which libraries are you thinking of that force you to deal with this problem?
 

because you can't entirely escape singleton use/static dynamic initialization in C++ if you use static objects that is just a fact.


So don't use static objects in the first place. You have given zero valid reasons why static state is necessary, aside from a reference to libraries using it - which is fair enough, but given the choice, why would you use those libraries in the first place? Nobody is disputing that the static initialization order fiasco is a problem. We're only disputing the claims that dealing with it is necessary in every project, ever, and that a well-designed program will eventually need lazy-initialized globals or singletons to deal with it.

Obviously, all these maxims about not using static state go out the window if you're dealing with shitty legacy code that uses static state and that you can't refactor. But you haven't specified that this is the case you're talking about. Your posts suggest that all large programs must eventually use static state. Are you in fact making that claim?

edit: Although, now that I think of it, there is one case in C++ where I could justify some amount of static state - when you have a scripting library/system that's been set up to allow scripts to call arbitrary functions defined in the engine, but those functions are represented as regular old function pointers instead of something like std::function that can handle closures. Then you might need some static state if you wanted to give your game state to the script. But there are better ways around that, and in any case it doesn't require singletons or workarounds for the static initialization order fiasco. My side project uses Lua scripting and works around the problem by simply (gasp) passing the data the scripts need directly from the call site, roughly as sketched below, and not allowing scripts to modify game state directly.
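
A minimal sketch of that call-site passing against the standard Lua C API (the callback name and arguments here are made up):

#include <lua.hpp>

// Push what the script needs as arguments instead of exposing engine state
// through globals or singletons.
void NotifyScriptOfHit(lua_State* L, int damage, int healthRemaining)
{
    lua_getglobal(L, "on_player_hit"); // script-defined callback
    lua_pushinteger(L, damage);
    lua_pushinteger(L, healthRemaining);
    lua_pcall(L, 2, 0, 0);             // 2 arguments, 0 results; error handling omitted
}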


#5234100 IDEs vs editors

Posted by Oberon_Command on 10 June 2015 - 11:47 AM

Cmake is pretty awful. That is one reason I did not stick with Ogre.


Have you ever taken a look at premake? I've seen it used in at least one AAA shop I've worked at and it seemed to work pretty well. Of course, there we had dedicated tools and build engineers to maintain this stuff.
 

You have an extra build step, and at the end you still have to build things.


Every place I've worked at that used tech like this had batch scripts to automate all of it. They had to, in order for the build server to work! Most build server processes I've seen work like this:
- sync to latest code/data
- generate projects for the requested platform and configuration
- build the code and tools using the generated projects
- optionally build the data
- run automated tests on the game
- if the tests passed, check in the built binaries

One extra step that takes almost no time isn't a big deal, here.
 

You are asking why I would like to not use an IDE, and the reason is that it is much, much faster to just fire off a batch of commands instead of opening the IDE (takes a while), loading the solution and all projects (takes even longer), waiting for intellisense to do its parsing, and then finally activating the project that you want to build, selecting the right configuration, and hitting build. Whew, that's a lot of waiting and clicking, isn't it?
Of course, when you are coding, then an IDE is great, because then you actually need the intellisense and the nice UI. Not when you just want to configure and build the damn thing. :)

 
How often are you building the game when you aren't coding? In multiple places where I've worked, simply building the game (e.g. release candidates) is done by a build server that invokes command-line tools like MSBuild, while actual work is done through the IDE. MSBuild exists and works fine; there's no reason to open your IDE just to build.

Working with a raw editor is certainly possible on Windows with cygwin, it's just that there's no particularly convincing reason not to use the IDE if it's there. Imagine that you're given the choice between using a slide-rule and a pocket calculator to do some engineering calculations. Both will work fine, but which one would you use?


#5233813 Engine design, global interfaces

Posted by Oberon_Command on 09 June 2015 - 09:12 AM

Another problem I've thought of. Suppose I did use the "dummy int trick" to initialize everything before main:


namespace
{
    int foo = InitializeEverything();
}

int main()
{
    // ...
}

 

All my singletons are initialized before main, as intended. But what about my other things, the systems that depend on them? I've got to initialize them at some point. I have several options:

1) Initialize them from somewhere in main (preferred)

2) Initialize them in InitializeEverything()

3) Leave them as globals and initialize them whenever - the compiler decides their initialization order.

 

If I choose the first or second option, then there's no difference between taking this approach and simply calling InitializeEverything() as the first thing I do in main(), so this approach is pointless.

If I choose the third option, then I haven't solved the problem - the compiler could still up and decide to initialize the dependencies before InitializeEverything() is called. Maybe some compilers will always put InitializeEverything first, but there's no guarantee that any compiler will, and it's generally bad to depend on compiler quirks to solve your problems. So my only recourse is to put more things in InitializeEverything, which means I'm back to option 2.

If I have other globals that don't depend on the singletons, then they don't really care what order they're initialized in so long as it's before everything else, so they can once again be initialized in main(). I can even instantiate them in main() if my globals are pointers or smart pointers like my code example in the previous post. 

 
So... why are we bothering with this, again? Am I missing something? Does the C++ standard actually guarantee that InitializeEverything would be called before anything else?
 

Though, I recognize that  reworking things to get this initialization happening from within main can be a bit of a pain in shitty legacy codebases that throw globals everywhere like confetti. But in those cases, adding singletons in the first place is like taking the shit and lighting it on fire. Before you had a shit problem, now you have a flaming shit problem that you need to put out with the dummy int trick, when you didn't actually need to light the shit on fire in the first place. And once again, if you can move the initialization of something to some InitializeEverything() function to escape the static initialization order fiasco, you can move it to main() and do the same thing.




#5233704 Engine design, global interfaces

Posted by Oberon_Command on 08 June 2015 - 08:43 PM

Change it to what, though?


To not having static state in the first place?
 

The alternative you offer, doesn't even work.


Which one, specifically? I have actually presented several.
 

Making it static is what caused the problem that needs to be solved. The dynamic static initialization problem.


So don't make it static? Don't put initialization code in constructors. Problem solved.
 

What dependencies? It's guaranteed that the object will always be initialized. It's perfectly fine to use them in that manner.


I'm more pointing out that if you make it convenient to access global state, that will encourage lazy programmers to write code that depends on the global state. Dependencies on global state make it more difficult to reason about code correctness and more difficult to write correct multithreaded code.
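
A contrived but minimal sketch of the difference (nothing engine-specific here):

#include <atomic>
#include <functional>
#include <thread>

int g_hits = 0; // hidden shared state: nothing at the call site tells you it's shared

void CountHitsViaGlobal() // a data race if two threads run this concurrently
{
    for (int i = 0; i < 100000; ++i)
        ++g_hits;
}

void CountHitsViaParameter(std::atomic<int>& hits) // the dependency is explicit,
{                                                  // so the caller chose a thread-safe type
    for (int i = 0; i < 100000; ++i)
        ++hits;
}

int main()
{
    std::atomic<int> hits{0};
    std::thread a(CountHitsViaParameter, std::ref(hits));
    std::thread b(CountHitsViaParameter, std::ref(hits));
    a.join();
    b.join();
    // hits == 200000 is guaranteed here; the global version offers no such guarantee.
}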

I get the distinct sense that your work hasn't involved much in the way of multithreading and that you've never studied a functional language. Then again, I am not psychic, so I could be completely wrong.
 

The reason they can cause problems is because of initialization order. That is not an issue in the case I presented, because it's not a system resources object.


That's not the only reason. See my above point on global state.
 

Dead wrong. That all depends on what sort of programming you do. If you are an API programmer you absolutely have to worry about this. It will come up. If it didn't come up for you, it's because the uber-API you use (like a game engine) already sorted out these 'minor' problems for you. If you make a game engine yourself, you need to worry about it.


For example?
 

This is wrong, too. The code I used doesn't produce any dependencies.


Sure it does. When a programmer uses your singleton, their code depends on your singleton. That's a dependency.
 

You are completely off base with all your comments. If I listened to you then I'd have to make every static object member a global with extern, and that would not guarantee anything whatsoever.


No, if you actually listened to me, you would avoid static members in the first place since they constitute global state, in the sense that is used when discussing multithreading. See my example above that demonstrates how I would handle your need for a list of GUI components that need to be rendered.
 

You're not convinced because you don't know what it's for, why it exists. Google static dynamic initialization. Google iostream.h and how they solved this problem to make sure it's always initialized.


Except that that's not what the singleton pattern is for, as Hodgman pointed out. I could easily abuse your singletons to force initialization out of proper order. What's ACTUALLY solving that is your "dummy int" idea. I'm honestly kind of amused that you're conflating the purposes of your own ideas.

I'm not convinced because you aren't being very convincing. You do have quite an attitude, though. Sometimes that is indicative of knowledge and competence. Then again, the last time I encountered (in person) someone with your attitude, said someone thought doing memset(this, 0, sizeof(this)); in the constructor of a class with virtual functions was a good idea. I don't think I need to point out what's wrong with that. :)
 

Simply deciding you are not going to initialize anything, is not an option.


Yes, it is. I can choose to initialize everything after my static objects are constructed, and thereby avoid the problem entirely. Note that "initialize" is not the same thing as "instantiate." I'm likewise amused that you claim to be such a good programmer, yet apparently don't understand the difference between these two things.
 

Object creation happens before main gets run.


Are you implying that I HAVE to have some global state that must get initialized outside of main? What's stopping me from doing this:

Logger g_logger;
MemoryManager g_memory;

int main (int argc, char *argv[])
{
    g_logger.Initialize();
    g_memory.Initialize();
}


Or even this, if I wanted to preserve the construction = initialization thing (and maybe put cleanup code in the destructors)?


#include <memory> // for std::unique_ptr

// these all start out as null
std::unique_ptr<Logger> g_logger;
std::unique_ptr<MemoryManager> g_memory;

int main (int argc, char *argv[])
{
    // standard library gurus will no doubt call me out on the more idiomatic way of initializing a unique_ptr
    g_memory.reset(new MemoryManager());
    g_logger.reset(new Logger());
}


There, now I can initialize these things in the constructor, clean them up in the destructor, and everything is fine, because I'm not initializing anything before main, so bad things can't happen. Your aversion to the standard library has been noted, however, so I don't expect you've thought of this approach.



