Avoiding Global Variables


71 replies to this topic

#41 Potatoman   Members   -  Reputation: 108


Posted 22 March 2012 - 09:41 AM

Perhaps it's easiest to put it this way: I have never regretted having written a class in such a way that it could have more than one instance; on the other hand, I regularly suffer from classes that were [incorrectly] designed to only have one instance.

It's more difficult to regret something that doesn't exist. Have you regretted running out of time to implement all the features you would have liked to get in? I understand why it's improved in a general sense, to support multiple objects. My contention is, to use an analogy - you don't use gold plated terminals, when copper terminals will do.

had a bit of a laugh seeing all the globals and #defines in there


Yes, there's nothing wrong with globals or defines. They're the preferred way of representing certain data.

That's not why "globals are bad".

Care to elaborate, with a bit of context?

would CryEngine be a better product had it been developed using software design best practices

Maybe they could make a game that wouldn't be forgettable 2 minutes after launch.

You think improved OO design would have helped achieve this?

But making a game with a game engine was never their priority, it's just a tech demo and it wins in that category.


That's really my point, it's about priorities. The code was effective as assessed against the dominant metric.

why everything has to be a black and white/all or nothing proposition.


Such as why there can't possibly be anything wrong with global references?

Yes, though you will need to provide some quotes to illustrate what you mean by that, because I don't believe anyone has posted anything (even remotely) to this effect.

I've never seen any large scale pure C projects, though I imagine there's quite a bit of global data passed about (I could be totally wrong, somewhat liberal assumption on my part). Where I'm heading with this though, is do you consider all pure C projects to be rubbish, on the basis they don't follow modern OO principles?


#42 yckx   Prime Members   -  Reputation: 1210


Posted 22 March 2012 - 10:16 AM

Don't be naive. As soon as app code uses the library that only allows one instance it will make assumptions that there will only be one instance.


In other words, this prognostication doesn't really help you. If your system has an internal design which would allow multiple instances of something, but only ever creates one, and the codebase that uses that system builds on the assumption that it'll only ever create one, that will cause problems when the system is updated to create multiple instances, regardless of whether or not the change to the system required an internal class restructure.

You've misread the quote. Only allowing one instance is not the same as allowing multiple instances while using just one.

#43 Telastyn   Crossbones+   -  Reputation: 3726


Posted 22 March 2012 - 11:44 AM

I'm seeing assumptions from both sides, one that you will need more than one of these objects in future, and the other a simplifying assumption that chances are you won't.

What of the simplifying assumption that this class doesn't support multithreading? Or do I have to support multithreading for all classes because if I scale the system up I will likely need that one day?


If you look at both assumptions though:

Assume that your code likely won't need more than one instance.

You can make it global. If it's global and you were right, you saved negligible time. If it's global and you were wrong, you now have a large amount of painful rework.
You can make it not global. If it's not global and you were right, you spent a little bit of time to make the code more explicit (read: easier to pick up and work with). If it's not global and your assumption was wrong, you pass in the new/different instance.

Since assumptions cannot be reliably predicted in software development due to ever changing requirements, the good design decision is to avoid even the possibility of the huge downside in our 4 possible scenarios.

I just don't understand why everything has to be a black and white/all or nothing proposition.


Very few things are. There are best practices and exceptions. For the examples given in this thread though, globals are bad. It's all or nothing (at the very least) because you should be unit testing that sort of code and making that sort of stuff global prevents doing that effectively.

Can some things be global? Sure. Are you ever going to be meaningfully harmed by making things not-global? I seriously doubt it.

#44 Antheus   Members   -  Reputation: 2397


Posted 22 March 2012 - 12:09 PM

I've heard this said before, but I'm curious if you have any specific examples in mind.


The STL doesn't specify whether the Allocator parameter may be stateful or not, which makes it impossible to define pools or thread-local allocators.

The Allocator parameter basically needs to be global and stateless, even though it may be passed during construction. I seem to recall there were some corner cases behind the ambiguity, but it makes the Allocator parameter unsuitable for many common use cases.

Maybe this has changed in C++11.

#45 VReality   Members   -  Reputation: 436


Posted 22 March 2012 - 03:50 PM


Don't be naive. As soon as app code uses the library that only allows one instance it will make assumptions that there will only be one instance.


In other words, this prognostication doesn't really help you. If your system has an internal design which would allow multiple instances of something, but only ever creates one, and the codebase that uses that system builds on the assumption that it'll only ever create one, that will cause problems when the system is updated to create multiple instances, regardless of whether or not the change to the system required an internal class restructure.

You've misread the quote. Only allowing one instance is not the same as allowing multiple instances while using just one.


The difference and similarity between the two was, in fact, the point in question.

The quote was originally critical of the fact that if your system has any inherent restriction, then there is the danger that client code will come to rely on that restriction, causing problems when it's changed. I was pointing out that if alternatively, you're careful to avoid some particular inherent restriction, but still, as a matter of system behavior, restrict the usage allowed, the same objection still applies. Client code assumptions about system behavior still cause the same problems.

Making assumptions about how a system will work is a problem, but it's a completely different issue than the idea which the quote was originally criticizing - namely, that code doesn't need to speculatively support future changes in the laws of science, especially in light of the notions that redesign is inevitable as requirements change, and that the nature of those changes is unpredictable.

#46 VReality   Members   -  Reputation: 436


Posted 22 March 2012 - 04:26 PM

I'm seeing assumptions from both sides, one that you will need more than one of these objects in future, and the other a simplifying assumption that chances are you won't.


Right. Though we might go a little beyond "chances are".

The example given by the OP was a resource which tracks which of all Widgets is being hovered. Sure, it's theoretically possible that at some point you might want to track which of some subset of Widgets is being hovered. But that wouldn't remove the need for the original resource. A redesign which did would probably be such a major re-think of how the GUI behaved that this issue of some past limiting design choice would be insignificant.

As another example, your platform might require setup and cleanup for the use of Sockets. You might wrap that system resource in a NetworkConnection class. You might want a count of all instances of that class, so you can perform setup when the first one is created, and cleanup when the last one is destroyed. You could make a separate system which owns them all for the purpose of tracking them (which could cause problems if more than one of the system objects were ever to exist), or you could give the class a static int.

Yeah, "chances are" you'll never have more than one Sockets system on your platform which needs to be setup and cleaned up. But I'd say it goes a fair bit beyond "chances are".

#47 Hodgman   Moderators   -  Reputation: 30385


Posted 22 March 2012 - 08:57 PM

My personal measure of code quality weighs heavily whether I can read and understand the code, maintain it, and re-use it on different projects easily. Usually this correlates to small, decoupled items of code.

Globals are bad on all these measures, because they add fixed dependencies (more code to read/understand, more side-effects to be wary of when maintaining, more code to move between projects), they assume it's valid to access global state as they will (possibly impossible to port between projects), and often hide dependencies altogether.

On my current project, if I can't reason about how long a function will execute, which other functions it's going to call, and which parts of RAM will be read and written by that function, then it's bad code and it can't be included in the (real-time) main loop. This enforces an extreme version of "no globals allowed" - you can't make use of any global state at all, which includes common crutches like printf, new, malloc, etc, etc... It also greatly reduces the amount of pointers you can use, as an unbounded pointer value can be used to access global state.

And honestly, working in these seemingly ridiculous constraints... I'm writing some of my best code - it's generally small, simple, easy to understand, easy to reason about, easy to bug-fix, easier to handle resources than with smart pointers, portable everywhere and extremely performant. So not only would I say that it's possible, but I'd say it's a superior methodology.

The arguments for putting up with singletons because "I only need 1 instance right now" seem like a decent appeal to YAGNI, which I'd normally agree with... but only if you're writing "throw away" code, where you don't care at all about quality, and only if it's actually quicker and easier than writing quality code (which it's not).

#48 VReality   Members   -  Reputation: 436


Posted 23 March 2012 - 02:31 AM

The arguments for putting up with singletons because "I only need 1 instance right now" seems like a decent appeal to YAGNI, which I'd normally agree with... but only if you're writing "throw away" code, where you don't care at all about quality, and only if it's actually quicker and easier than writing quality code (which it's not).


A+ to your whole post.

Though I must admit I've never heard of someone using a singleton because "I only need 1 instance right now". I think the pattern exists for when people say, "There'll be problems if there are ever more than one of these." Writing code to enforce a single instance seems like unnecessary trouble for not needing more than one.

#49 Washu   Senior Moderators   -  Reputation: 5189


Posted 23 March 2012 - 04:06 AM

The arguments for putting up with singletons because "I only need 1 instance right now" seems like a decent appeal to YAGNI, which I'd normally agree with... but only if you're writing "throw away" code, where you don't care at all about quality, and only if it's actually quicker and easier than writing quality code (which it's not).


A+ to your whole post.

Though I must admit I've never heard of someone using a singleton because "I only need 1 instance right now". I think the pattern exists for when people say, "There'll be problems if there are ever more than one of these." Writing code to enforce a single instance seems like unnecessary trouble for not needing more than one.


The singleton pattern is not just "I need to enforce that only a single instance of this is ever created," which can be done trivially without using the singleton antipattern. The singleton pattern is two parts... "There is only a single instance of X" and "There is a global access point to that instance." The common example of this is the static "getInstance" method you will see most "singletons" implement.

The important thing to realize here is that you almost never actually need to enforce that there is only ONE instance of something. It's actually a relatively preposterous idea, if you think about it. What you're saying is that "(I'm/You're) such an incompetent developer that I need to ensure that (I/You) don't create more than one of this particular object!" If you don't need or want more than one instance of an object... don't create more than one instance. There are very, very, very few cases where having more than one instance of an object will do any particular harm. Yet you see plenty of people looking for problems to solve with their singletons: "Oh, I'll only ever need one graphics device, let's make it a singleton."

Then we get into global access... globals aren't always a bad idea, but only when used sparingly and in a highly restrictive manner. For instance, you might actually have a "global" graphics instance, but it would not be exposed to the outside components and only accessed internally by the rendering methods (think of it as a "localized global"). That doesn't mean it's a good idea (I can think of plenty of cases where you want more than one D3D device). The thing is that when you make the assumption that something "needs" to be a global, then you have already decided that you "need" to access that something everywhere. Which is usually a good sign that you have a design issue.

Let's look at logging. A "logger" is the most common global I see anymore. The thing is, you often don't actually want a global logger... what you want are logs that are broken up into many pieces. You want informational logs, debugging logs, exception logs, diagnostics/warning logs. For large systems you want your logs broken up by component, module, or even system. Frankly, having a logging global is often more of a hindrance than a helper. There's an adage that I've taken to living by, and it goes as such: "The frequency of reading the logs is inversely proportional to the size of the log." That is: the bigger the log file gets, the less you look at it. Too much cruft and junk to crawl through, even with decent tools. By breaking things down into compartmentalized logs you quickly find that you can get to the bottom of "log-able" issues much faster than you would had you just crammed everything into your "mondologglobalomgimsoawesome" class.

Yeah, "chances are" you'll never have more than one Sockets system on your platform which needs to be setup and cleaned up. But I'd say it goes a fair bit beyond "chances are".

Really? I'm pretty positive that in most games that use sockets you're more likely to encounter multiple sockets being required by the application. For instance you might have a web service call to update a leaderboard, along with matchmaking calls to find a suitable game, and then there's the actual game connection and synchronization, which goes on as you're chatting in the lobby waiting for the game to finish initializing. Making the assumption that you'll need only one socket is a surefire way to get your ass bit.

In time the project grows, the ignorance of its devs it shows, with many a convoluted function, it plunges into deep compunction, the price of failure is high, Washu's mirth is nigh.
ScapeCode - Blog | SlimDX


#50 VReality   Members   -  Reputation: 436


Posted 23 March 2012 - 05:34 AM

The singleton pattern is two parts... "There is only a single instance of X" and "There is a global access point to that instance."


This is a good point. The two parts being orthogonal, as they are, they had become decoupled in my mind, and I was only considering the first part to be the essence of a singleton. If there are those who only think of the second part, then comments about using them out of laziness start to make sense.


The important thing to realize here is that you almost never actually need to enforce that there is only ONE instance of something. Its actually a relatively preposterous idea, if you think about it. What you're saying is that "(I'm/You're) such an incompetent developer that I need to ensure that (I/You) don't create more than one of this particular object!" If you don't need or want more than one instance of an object... don't create more than one instance.


It's only preposterous in the same sense as language enforced type checking (why would you ever use an incompatible type?), or using protected/private members (competent engineers should know better than to access data members directly), etc. If it's harmful to have more than one, it's better to programmatically enforce that limit than to simply assume a codebase will never run afoul of it.


Yeah, "chances are" you'll never have more than one Sockets system on your platform which needs to be setup and cleaned up. But I'd say it goes a fair bit beyond "chances are".


Really? I'm pretty positive that in most games that use sockets you're more likely to encounter multiple sockets being required by the application.


Multiple sockets - yes. Multiple Sockets systems - no.

On Win32 platforms, the "WSAStartup" function initiates use of the Winsock DLL by a process. It must be called in order for the program to use any Sockets functionality. But no matter how many "sockets" the program uses, the DLL should only be initialized once (per cleanup). Off the top of my head, there are three general options for managing this sort of thing:

  • Dump setup and cleanup code into a couple of big setup and cleanup functions for the application.
  • Pass around a factory object whose sole purpose is to own networking objects' lifetimes, so it can do setup and cleanup as needed.
  • Statically "reference count" all networking objects to do setup and cleanup as needed.

I'm not married to it, but I like the last option because it presents the bare-bones simplest interface to the world. Just create, use, and destroy networking objects whenever and however you want.

#51 Antheus   Members   -  Reputation: 2397


Posted 23 March 2012 - 08:56 AM

You think improved OO design would have helped achieve this?


Less OO would help.

No longer code related, but same principles apply to management.

Basic singleton design:
- Joe is in charge of procurement. It works great at a start-up with 5 employees. "Joe, I need a new 1TB disk". The start-up grows. "Joe, we need 20 new blades for the rack". Grows further. "Joe, we need to resupply our 16,000 vendor partners in EMEA in full compliance with the law in each of 43 countries".

Theoretical company works like that. Boss -> Subordinate -> Subordinate -> .... In theory it works. In practice it no longer does. While formal chain of command remains, just about all companies transformed into a flatter structure. Individual units have more control and independence.

So it becomes this:
First there's Joe. Then there's 5 people in charge of procurement. Then there's 20 people. Then there's 200 departments with 50 people each.

The distinction is crucial. In the first case, procurement is about Joe. "Joe, procure ....". As the company grows, it has no process to delegate this to someone else.

In the latter case, procurement is about obtaining stuff, with different hierarchies and structures involved. The process is set up for growth from the start.
---

If you're thinking about some basic trivial project, not thinking about any of this is perfectly fine. But just as the example company above will likely grow to several hundred people, code tends to last for a while.

At the same time, all current studies of corporate development strongly emphasize distributed control and individual involvement rather than a top-down org chart.

Reason is simple. When it comes to org charts, the Big Name Orgs (think 100k employees or more) have it down. If someone needs to get 10,000 people going, they'll do it before breakfast.

So to compete, you need a different strategy, one that competition isn't capable of. And that is breadth.

Management examples aren't made up, they're simple fact of survival. Companies that rely on top-down management either died, exist due to government protection or will die very soon. World no longer tolerates single points of failure.

#52 Washu   Senior Moderators   -  Reputation: 5189


Posted 23 March 2012 - 12:33 PM

Yeah, "chances are" you'll never have more than one Sockets system on your platform which needs to be setup and cleaned up. But I'd say it goes a fair bit beyond "chances are".


Really? I'm pretty positive that in most games that use sockets you're more likely to encounter multiple sockets being required by the application.


Multiple sockets - yes. Multiple Sockets systems - no.

On Win32 platforms, the "WSAStartup" function initiates use of the Winsock DLL by a process. It must be called in order for the program to use any Sockets functionality. But no matter how many "sockets" the program uses, the DLL should only be initialized once (per cleanup). Off the top of my head, there are three general options for managing this sort of thing:
  • Dump setup and cleanup code into a couple of big setup and cleanup functions for the application.
  • Pass around a factory object whose sole purpose is to own networking objects' lifetimes, so it can do setup and cleanup as needed.
  • Statically "reference count" all networking objects to do setup and cleanup as needed.

I'm not married to it, but I like the last option because it presents the bare-bones simplest interface to the world. Just create, use, and destroy networking objects whenever and however you want.

On Win32, you can safely call WSAStartup as many times as you want without harm. This is explicitly noted in the documentation. Furthermore, you need to call WSACleanup once per WSAStartup call. Hint: this sounds like a prime place for a constructor and destructor.



#53 VReality   Members   -  Reputation: 436


Posted 23 March 2012 - 04:36 PM

On Win32, You can safely call WSAStartup as many times as you want without harm. This is explicitly noted in the documentation. Furthermore, you need to call WSACleanup once per WSAStartup call.


Yeah, it says,

...if an application calls WSAStartup three times, it must call WSACleanup three times. The first two calls to WSACleanup do nothing except decrement an internal counter; the final WSACleanup call for the task does all necessary resource deallocation for the task.


Although it should be noted that the reason the documentation mentions for calling it multiple times is to attempt initialization on different WinSock revisions. Making spurious calls throughout execution may or may not be something you want to do. The only thing the documentation is promising is that it's legal. But this is telling us that Win32 is, itself, using this sort of static "reference count" technique for us. And the result is zero complication added to the API for this issue (including no static init nightmares).

Anyway, the point was simply that the notion, of there being things here and there that are quite singular in nature, is not far-fetched (the occasional implementation loophole provided by a Win32 API notwithstanding).

#54 L. Spiro   Crossbones+   -  Reputation: 13595


Posted 23 March 2012 - 07:10 PM

It simply does not matter whether there are true singularities in nature—the only place globals have in coding is where they are the only possible way to achieve something.

I want all of my textures to have a unique ID. A static ID counter and critical section is the only way to achieve this.
If anything is possible without globals, it is wrong to use globals.

You would think that your game can have only one window, and only one game instance. Until you realize that having multiple windows could be helpful for debugging.
You will think it is fine to have only one OpenGL context and keep it as a global. Until you start making tools in Qt and realize that each OpenGL view has its own context and your resources won’t work between them.

I have made a few things globals long ago in my inexperienced days and have been kicking myself in the ass ever since. They sure seemed appropriate at the time. “Gosh, when would I ever need more than one of these?”, I thought. “Oh. When I start taking my project seriously,” I replied.

Not to mention the security risks. A global game class?? Talk about easy hacking. Just find the static base pointer and you are on your way to the characters and enemies, followed by auto-aim, bots, what-have-you. And be sure to use MHS for all your hacking needs.

I have only ever used one menu manager per project ever. But there is no justification for making it a global. It doesn’t make sense. As an instance, I know exactly what can access it, when, and how. I know when it will be freeing its memory and leaving existence and when it will be coming back as a new instance.
Which brings up another point. How often do you destroy your globals? Do you not feel the need to release some of that memory that doesn’t need to be used?

How do you feel about going into a part of the game in which there is no menu, but having residual memory-manager residue laying around in the background?
Do you plan to make a global system to tell each global system when to release and renew its resources? Good luck managing that mess.


Globals: They just don’t make sense.


L. Spiro
It is amazing how often people try to be unique, and yet they are always trying to make others be like them. - L. Spiro 2011
I spent most of my life learning the courage it takes to go out and get what I want. Now that I have it, I am not sure exactly what it is that I want. - L. Spiro 2013
I went to my local Subway once to find some guy yelling at the staff. When someone finally came to take my order and asked, “May I help you?”, I replied, “Yeah, I’ll have one asshole to go.”
L. Spiro Engine: http://lspiroengine.com
L. Spiro Engine Forums: http://lspiroengine.com/forums

#55 Washu   Senior Moderators   -  Reputation: 5189


Posted 23 March 2012 - 07:22 PM

I want all of my textures to have a unique ID. A static ID counter and critical section is the only way to achieve this.

No globals needed there either... pass around the texture cache to the appropriate things that should be loading textures (hint, there aren't that many things that SHOULD ever load a texture).
class TextureCache {
private:
    int currentTextureId;
    std::mutex cacheLock;
//...
public:
    int LoadTexture(std::string const& filename);
    void ReleaseTexture(int textureId);
};
and for those who speak with a Porsche Owners Club accent and go "Ahh yes, but we're on C!"
typedef struct TextureCache_t {
	int currentTextureId;
	MutexHandle mutex;
	//...
}TextureCache, *TextureCachePtr;

int LoadTexture(TextureCachePtr cache, char const* filename);
void ReleaseTexture(TextureCachePtr cache, int textureId);
Similarly, on the WinSock front... no global there either. A static instance, but that's for convenience only. Plus the class can be instantiated more than once without harm. It's also nicely hidden behind an anonymous namespace, and if placed properly within a single compilation unit, will be entirely invisible to the rest of the code.
namespace {
    struct SocketInit {
        SocketInit() {
            WSAStartup(MAKEWORD(2,2), &data);
        }

        ~SocketInit() {
            WSACleanup();
        }
        WSADATA data;
    private:
        static SocketInit socketInit;
    };
    SocketInit SocketInit::socketInit;
}



#56 Madhed   Crossbones+   -  Reputation: 2974


Posted 23 March 2012 - 07:31 PM

Whoa, another thread turning philosophical? Well, minimizing relationships is always good. Global variables mean everything is dependent on everything else... potentially. Keeping dependencies local means invariants are easier to maintain. Theoretically, at least.

#57 L. Spiro   Crossbones+   -  Reputation: 13595


Posted 23 March 2012 - 08:18 PM

No globals needed there either... pass around the texture cache to the appropriate things that should be loading textures (hint, there aren't that many things that SHOULD ever load a texture).

class TextureCache {
private:
	int currentTextureId;
	std::mutex cacheLock;
//...
public:
	int LoadTexture(std::string const& filename);
	void ReleaseTexture(int textureId);
};

That is not my use for texture IDs. The ID is not to be used to identify textures; it is to be used for debugging and optimization only.
The part related to debugging is primarily why every texture needs a unique ID regardless of how many texture caches or managers there are.
Additionally, for optimizations, this ID actually needs to be unique between both the render target systems and the texture systems.
Of course you could have a manager/cache at a lower level where the difference between a texture and a render target is abstract and modify your example accordingly, but that doesn’t take care of the debugging issue, and passing around the lowest-level manager for that system is undesirable—I want users to think of them as separate systems with only a few things in common, so I would be passing a TextureManager and a RenderTargetManager around, but one would really just be a pointer cast of the other and fairly redundant.

Of course just passing around a graphics device would solve that problem, but design issues are not the point here. It is more about using a system-wide ID, unrelated to the number of graphics devices or contexts you have, to aid in certain subsystems, including general-purpose debugging.


L. Spiro

#58 yckx   Prime Members   -  Reputation: 1210


Posted 24 March 2012 - 12:15 AM

The difference and similarity between the two was, in fact, the point in question.

My apologies. Your response started with "In other words…" so I read the whole paragraph from that viewpoint, and believed you were conflating the two options instead of offering an alternate view.


#59 Potatoman   Members   -  Reputation: 108


Posted 24 March 2012 - 11:37 AM

The arguments for putting up with singletons because "I only need 1 instance right now" seems like a decent appeal to YAGNI, which I'd normally agree with... but only if you're writing "throw away" code, where you don't care at all about quality, and only if it's actually quicker and easier than writing quality code (which it's not).


I would argue that fewer lines of code are quicker and easier to write, and using globals demonstrably takes fewer lines of code. I'm not saying to use that metric to decide which approach to take, but it usually is quicker and easier - I feel it devalues your argument to suggest otherwise.

#60 Potatoman   Members   -  Reputation: 108


Posted 24 March 2012 - 11:41 AM


I'm seeing assumptions from both sides, one that you will need more than one of these objects in future, and the other a simplifying assumption that chances are you won't.

What of the simplifying assumption that this class doesn't support multithreading? Or do I have to support multithreading for all classes because if I scale the system up I will likely need that one day?


If you look at both assumptions though:

Assume that your code likely won't need more than one instance.

You can make global. If it's global and you were right, you saved negligible time. If it's global and you were wrong, you now have a large amount of painful rework.
You can make it not global. If it's not global and you were right, you spent a little bit of time to make the code more explicit (read: easier to pick-up/work with). If it's not global and your assumption was wrong, you pass in the new/different instance.


I see largely the reverse, though certainly it depends on the situation. In my experience, if you were wrong, you just wasted a potentially nontrivial amount of time developing and supporting infrastructure you will never need. If you were right, you saved negligible time overall, because you can go in after the fact and update it. The key benefit of building in the support is allowing rapid design changes at late stages of the project, at the cost of spending more time up front. That might be useful for a project, or it might not be.

Can you give an example of a situation where the rework is more significant than simply changing the singleton to be passed in, and that is not the result of poor design in general (as in, it's a painful rework because of the general code structure, not the use of a singleton)? What I mean is, if we're talking about singleton vs passing in the argument, I see no difference between implementing it now or later. Why is it more painful later? At a coarse level, I imagine the 'pain' of implementing it later is proportional to the amount of time saved by making this simplifying assumption in the first place.

making that sort of stuff global prevents doing that effectively.

It doesn't prevent it at all; instead of passing the parameter in, you set the global to that value. It might not be as 'clean' or as intuitive to the reader as you'd like, but I think 'prevents' is a bit strong.

void TestMyContainer()
{
  std::unique_ptr<SharedData> data(new SharedData());
  MyCustomContainer c(data.get());
  c.DoSomething();
}

versus

void TestMyContainer()
{
  std::unique_ptr<SharedData> data(new SharedData());
  MyCustomContainer::setSharedInstance(data.get());

  MyCustomContainer c;
  c.DoSomething();

  MyCustomContainer::setSharedInstance(nullptr);
}

Now again with the disclaimers: I'm not recommending a container class that uses a global shared instance; that's not the point. I'm merely illustrating that it doesn't prevent you from unit testing. Please can we refrain from asking why you would actually have a container taking a shared instance in the manner demonstrated above.



