There's really no debate to be had: that global state is bad is widely agreed upon, and the ways in which it is bad are well understood. Still, it's a convenience (I daresay a crutch) that some programmers and program architectures rely upon.
Some of the problems are:
- Any subsystem can modify any global state that it can see. This makes stringent testing effectively impossible, because the only practical way to "test" anything in such a system is to begin by presuming that certain subsystems do not modify the state even though they can see it. That is obviously less reliable than ensuring, through proper dependency management, that such a subsystem cannot modify it at all.
- Any subsystem can read any global state that it can see. This makes dependency management a matter of convention among fallible programmers, rather than something the compiler helps enforce. What's more, dependencies often develop unnoticed by the inattentive programmer, and in the worst case the resulting web of dependencies couples the entire system together.
- When state is shared globally, there is no good way to delegate different levels of access without subverting the type system. You might think you could protect some global state by marking it const, but what if another subsystem genuinely needs write access to that state? You must either leave it non-const, or mark it const and then subvert the type system (e.g. with a const_cast) to initialize it, if you cannot do so at its declaration. When state is instead passed via parameters, a subsystem can promise not to modify it simply by taking a const reference to the potentially non-const data.
The only arguments in favor of global state are convenience and laziness: it saves you from having to think about any of the issues above, and the myriad others global state can cause. Of course, by not thinking about them up front, you make your bugs unnecessarily hard to reason about later, so whatever you gain in the convenience of writing the code is not infrequently swallowed by the difficulty of debugging it.
That being said, it's sometimes an evil that can be tolerated (in small programs) or must be tolerated. For example, in a large program where significant new features must be added without changing the interfaces of the abstraction layers in between (e.g. you cannot retrofit dependency injection) because you must preserve compatibility for whatever reason, you might create a global object so that you can initialize it in an outer scope but use it in an inner scope. Even so, the fact that scenarios like this can justify a global ex post facto does not endorse its use as a starting point. You'll often see loggers touted as a justifiable global, and the truth of that is a whole debate of its own, but it's not hard to see that the status quo likely emerged for exactly the reason above: no one had thought to build logging in, and so it was added after the fact.