When did allocating memory in a constructor become a bad idea?
Welcome to engineering.
Over the course of years and many projects, a correlation emerged between certain practices and successful projects. There are no hard rules or laws, but some practices have simply been found to work better than others.
One observation was that loosely coupled objects result in code that is easier to modify and test. Projects with such properties allow easier experimentation and shorter turnaround times, providing more end-user value (aka what really matters).
When it comes to resource allocation in constructors, there are two sides to the story which make it undesirable. One is the issue of testability, covered by several Google Tech Talks. While those talks favor a certain development style, the lessons are fairly universal, even if presented in the scope of Java.
The second issue is the non-determinism of resource allocation. "But it's fast enough" doesn't cut it. Resource allocation is almost without exception non-deterministic (it may take an arbitrary amount of time) and has nothing to do with the rest of the code; yet a renderer "obviously" needs to load everything before it's useful, and an algorithm "obviously" needs all its data before it can do useful work.
The effects of non-determinism can be observed in testing: as the project grows, loading times increase even though the algorithms don't change.
These issues are especially important when dealing with real-time systems. An ideal real-time system confines all resource allocation to a decoupled initialization phase, while the operational state allocates nothing (no files, no memory, not even sockets). Memory is frequently considered "fast enough", but even in a well-designed OO system it often becomes a bottleneck (think 30-second Tomcat restart cycles).
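To make the init/operate split concrete, here is a minimal sketch (the class and method names are hypothetical): a processor that acquires all of its memory once, up front, while its hot path never allocates.

```cpp
#include <cstddef>
#include <vector>

// Sketch of a two-phase design: all memory is acquired during the
// initialization phase; process() only reuses what is already there.
class SampleProcessor {
public:
    explicit SampleProcessor(std::size_t max_frame_size)
        : scratch_(max_frame_size) {}      // single up-front allocation

    // Operational phase: no allocation, bounded running time.
    double process(const std::vector<double>& frame) {
        double sum = 0.0;
        for (std::size_t i = 0; i < frame.size() && i < scratch_.size(); ++i) {
            scratch_[i] = frame[i] * 0.5;  // work in the pre-allocated buffer
            sum += scratch_[i];
        }
        return sum;
    }

private:
    std::vector<double> scratch_;          // sized once, reused forever
};
```

The operational method can then be called from a real-time loop without ever touching the allocator, which keeps its running time predictable.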
Finally, from a design perspective, putting all allocation in the constructor makes the object artificially oblivious to the rest of the design. Assume a Window that always creates its Widgets: whenever you create a new window, it ignores the programmer's knowledge that suitable widgets may already exist and recreates them. Since most OO systems favor resource-oblivious encapsulation (ignoring external knowledge), they end up encapsulating more than they should. A new resource typically implies a "new identity", even though many resources are shared and "equal". The JVM exploits this fact by reusing string instances (.intern()), and C++ compilers offer string pooling and constant resources, which may technically violate the expectation that each allocation is unique.
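One way to restore that external knowledge is to inject the widgets instead of constructing them. A minimal sketch, with hypothetical Window/Widget types, where two windows share an already-existing widget rather than each recreating it:

```cpp
#include <cstddef>
#include <memory>
#include <utility>
#include <vector>

struct Widget { int id; };

// Instead of new-ing its own widgets in the constructor, Window
// accepts them from the caller, so existing (possibly shared)
// widgets are not recreated.
class Window {
public:
    explicit Window(std::vector<std::shared_ptr<Widget>> widgets)
        : widgets_(std::move(widgets)) {}

    std::size_t widget_count() const { return widgets_.size(); }

private:
    std::vector<std::shared_ptr<Widget>> widgets_;
};
```

With this shape, the decision of whether widgets are fresh, cached, or shared belongs to the caller, who actually has that knowledge.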
Resource allocation of any kind should be considered strictly orthogonal to the algorithms that operate on the data. It is fairly common to use resource loaders, caches, and different allocators within the same application, which reinforces this observation. While rarely stated explicitly, many projects independently arrive at the need to decouple the two.
As a C++ detail, operator new tends to be overused by programmers coming from Java or C# which results in unneeded verbosity.
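A small illustration of that habit (the function names are made up): the `new`/`delete` pair below buys nothing over plain automatic storage.

```cpp
#include <cstddef>
#include <string>

// Java/C# reflex carried into C++: a needless heap allocation
// plus manual cleanup.
std::size_t length_verbose(const char* text) {
    std::string* s = new std::string(text);
    std::size_t n = s->size();
    delete s;
    return n;
}

// Idiomatic C++: automatic storage, no `new`, no cleanup code.
std::size_t length_idiomatic(const char* text) {
    std::string s(text);
    return s.size();
}
```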
Most of these issues can be solved by parametrizing the "allocator", either at compile time or at run time. STL containers take an allocator parameter; a constructor might take a Factory or Allocator interface, which can act as a cache; the C idiom is to pass an allocator function or a pointer to a memory block (see zlib); dynamic languages may simply swap in a different implementation (SQLite instead of MySQL).
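A sketch of the factory-interface variant (all names here are hypothetical): the consumer asks an injected interface for its resources, and a caching implementation hands back shared instances instead of allocating anew on every request.

```cpp
#include <map>
#include <memory>
#include <string>

// Abstract factory: the consumer never allocates directly.
struct TextureFactory {
    virtual ~TextureFactory() = default;
    virtual std::shared_ptr<std::string> load(const std::string& name) = 0;
};

// A factory that doubles as a cache: repeated requests for the
// same name return the same instance instead of a new one.
class CachingFactory : public TextureFactory {
public:
    std::shared_ptr<std::string> load(const std::string& name) override {
        auto it = cache_.find(name);
        if (it != cache_.end()) return it->second;
        auto tex = std::make_shared<std::string>("texture:" + name);
        cache_[name] = tex;
        return tex;
    }

private:
    std::map<std::string, std::shared_ptr<std::string>> cache_;
};

// The consumer only sees the interface; tests can inject a stub,
// production can inject a cache or a real loader.
class Renderer {
public:
    explicit Renderer(TextureFactory& factory) : factory_(factory) {}
    std::shared_ptr<std::string> texture(const std::string& name) {
        return factory_.load(name);
    }

private:
    TextureFactory& factory_;
};
```

The same shape works for memory allocators, file loaders, or database handles: the algorithm stays ignorant of where its resources come from.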
Some "best practices" come across as FUD, or as something consultants like to repeat to justify their existence. But as with everything, there are both exceptions and grains of truth.
Generalizations do not help much and, truth be told, most projects simply aren't big or important enough for such details to matter. But it helps to be aware of these issues, since they can make or break a certain part of development.