8 hours ago, thatguyiam said:
I've seen quite a few competent developers recently discuss disliking tdd first, even test advocates, as they consider it to impede rapid iteration of an evolving code base.
It does both: it helps and it hinders, depending on how long the code has to live.
TDD is amazing for stuff that has a long tail. I've used it in development for server features and for code with long-term support requirements.
Automated tests ensure permanence: if you modify the behavior you get test failures, which prevents accidentally changing that behavior. For libraries and for long-running code that must be supported for years, this is great. The initial cost is offset by many years of reduced maintenance cost.
Since it ensures permanence, it works against rapid iteration. For game code you intend to throw away on completion, where the code is generally fluid and changes frequently, it is not so great. Instead of a collection of automated tests that remain green while you change implementation details, you are constantly changing visible behavior, so every change also requires changing a test.
Automated tests also have a cost in time and effort. There have been several studies on it. TDD is usually the cheapest, at about 1:1 cost for developing the main code plus developing the tests. Adding tests immediately after writing increases the cost slightly since some things need to be modified, and adding tests to established systems tends to have significantly higher costs since parts of the system must be retooled or adapted. So if you are going to be implementing tests, TDD is the most cost-efficient way to do it.
For your short-lived code in Unity or similar engines, TDD is a poor decision from a business standpoint. The extra investment won't be recovered over time like it would be with a long-lived project. For a long-term system that needs to be resilient against change under maintenance, TDD is wonderful.
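To make the cost concrete, here is a minimal sketch of what test-first looks like with the Unity Test Framework's NUnit runner. The DamageCalculator class and its rules are hypothetical, invented purely for illustration; in TDD these tests would be written first and the class grown to satisfy them.

```csharp
using NUnit.Framework;

// Hypothetical class under test, written after the tests in TDD style.
public class DamageCalculator
{
    public int Apply(int health, int damage, int armor)
    {
        int mitigated = damage - armor;
        if (mitigated < 0) mitigated = 0;

        int remaining = health - mitigated;
        return remaining < 0 ? 0 : remaining;
    }
}

public class DamageCalculatorTests
{
    [Test]
    public void ArmorReducesIncomingDamage()
    {
        var calc = new DamageCalculator();
        Assert.AreEqual(80, calc.Apply(health: 100, damage: 30, armor: 10));
    }

    [Test]
    public void HealthNeverGoesNegative()
    {
        var calc = new DamageCalculator();
        Assert.AreEqual(0, calc.Apply(health: 5, damage: 100, armor: 0));
    }
}
```

Notice the permanence trade-off described above: these assertions lock in the damage rules, which is exactly what you want for long-lived systems and exactly what slows you down when the rules themselves are still in flux.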
8 hours ago, thatguyiam said:
I make extensive use of coroutines and...
Coroutines are an easy way to introduce delayed processing without violating Unity's model, where scripts follow a reliable linear flow. You can use the full suite of multiprocessing/multithreading code in the .NET runtime, but that opens up the standard set of difficulties with parallel code. It adds about an order of magnitude of complexity to your bugs, which is a great reason for the engine's designers to steer beginner and intermediate developers away from it.
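A quick sketch of the pattern, assuming a hypothetical hit-flash effect: the coroutine resumes on the main thread each frame, so you get the delay without any of the locking or race conditions that real threads bring.

```csharp
using System.Collections;
using UnityEngine;

// Hypothetical example: flash a sprite red for half a second after a hit.
public class HitFlash : MonoBehaviour
{
    SpriteRenderer sprite;

    void Awake()
    {
        sprite = GetComponent<SpriteRenderer>();
    }

    public void OnHit()
    {
        // Delayed work without threads; execution stays on the main thread.
        StartCoroutine(Flash());
    }

    IEnumerator Flash()
    {
        sprite.color = Color.red;
        // Suspend here; Unity resumes the coroutine after the delay.
        yield return new WaitForSeconds(0.5f);
        sprite.color = Color.white;
    }
}
```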
8 hours ago, thatguyiam said:
Really the only solid principle I'm not adhering to is dependency inversion, which a lot of people suggest is the most important one
Dependency inversion is extremely important when you have many subclasses: a bunch of code that varies slightly in its implementation details but provides a common interface.
Dependency inversion usually isn't necessary if you have no subclasses, if you have only one type of thing and never need variations under a common interface.
Unity already does this in many ways: the GameObject and Component interfaces, Joint implemented by assorted hinges and springs, Renderer applying to MeshRenderer, CanvasRenderer, BillboardRenderer, ParticleRenderer, and other visual things. In these situations you only need to know the interface; the specific type of object usually doesn't matter.
Many Unity scripts I've written over the years do not need additional interfaces. Some advanced systems do, especially systems with a bunch of interchangeable parts, but when you implement them they usually don't derive from MonoBehaviour.
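Here's a minimal sketch of what the "interchangeable parts" case looks like. The IDamageable, Crate, and Bullet names are hypothetical; the point is that the bullet depends only on the abstraction, never on the concrete types behind it.

```csharp
using UnityEngine;

// Hypothetical abstraction: anything that can take damage.
public interface IDamageable
{
    void TakeDamage(int amount);
}

// One of many possible implementations behind the interface.
public class Crate : MonoBehaviour, IDamageable
{
    public void TakeDamage(int amount)
    {
        Destroy(gameObject);
    }
}

public class Bullet : MonoBehaviour
{
    public int damage = 10;

    void OnCollisionEnter(Collision collision)
    {
        // Depend on the interface, not on Crate, Player, Enemy, etc.
        // GetComponent works with interface types in Unity.
        var target = collision.gameObject.GetComponent<IDamageable>();
        if (target != null)
            target.TakeDamage(damage);

        Destroy(gameObject);
    }
}
```

Swapping Crate for a Player or Enemy implementation requires no change to Bullet, which is the inversion paying off.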
9 hours ago, thatguyiam said:
ps. it annoys me that empty objects I use for maintaining structure and convenience in the hierarchy have transforms.
It is a design decision, but I think a good one. Since nearly everything else requires exactly one transform, building it directly into the object allows for space savings. Many of the organizational objects you describe eventually get converted into prefabs, and then you really want the GameObjects that provide structure to also have a transform.
In non-trivial worlds you constantly add and remove scenes, which means adding a GameObject container, positioning it appropriately, and loading the new scene into it. The position is important to make the world continuous, and the container part is necessary so it can be cleanly unloaded later.
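A rough sketch of that container pattern, assuming a hypothetical SceneChunkLoader: load the scene additively, then reparent its root objects under a positioned container so the chunk can be moved as a unit and cleanly removed later.

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.SceneManagement;

// Hypothetical streaming helper for additive scene chunks.
public class SceneChunkLoader : MonoBehaviour
{
    public IEnumerator LoadChunk(string sceneName, Vector3 worldOffset)
    {
        // Load without replacing the current scene.
        yield return SceneManager.LoadSceneAsync(sceneName, LoadSceneMode.Additive);

        // The container's transform positions the whole chunk in the world.
        var container = new GameObject(sceneName + "_Container");
        container.transform.position = worldOffset;

        // Gather the loaded scene's roots under the container; destroying
        // the container later removes the entire chunk in one step.
        var scene = SceneManager.GetSceneByName(sceneName);
        foreach (var rootObject in scene.GetRootGameObjects())
            rootObject.transform.SetParent(container.transform, false);
    }
}
```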
A few heavily scripted items may live inside a utility GameObject, but they're a tiny number. Generally I've seen a single game object with a unique name and layer living at the root of the hierarchy that serves as the service locator: find that object and use GetComponent<>() to pull off whatever service you need.
The extremely rare inconvenience is far outweighed by the large benefit to every other object in the game world.
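For concreteness, a minimal sketch of that locator, assuming a root object named "ServiceLocator" and a hypothetical AudioManager component hanging off it:

```csharp
using UnityEngine;

// Hypothetical service locator wrapping the find-then-GetComponent pattern.
public static class Services
{
    static GameObject root;

    static GameObject Root
    {
        get
        {
            // Look up the uniquely named root object once, then cache it.
            if (root == null)
                root = GameObject.Find("ServiceLocator");
            return root;
        }
    }

    public static T Get<T>() where T : Component
    {
        return Root != null ? Root.GetComponent<T>() : null;
    }
}

// Usage from any script (AudioManager is hypothetical):
//   var audio = Services.Get<AudioManager>();
```

Caching the lookup matters because GameObject.Find walks the hierarchy and is too slow to call every frame.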
21 hours ago, thatguyiam said:
write clean, modular, scalable code quickly
The common, standard answers apply.
Write with namespaces so one system's code doesn't interfere with another's; Unity does not wrap code in namespaces by default. Configure Unity so scene and asset files are stored as text rather than binary, so they can be more easily merged and integrated. There are two models of version control: one restricts access and locks by default, the other merges on commit and is unlocked by default; you can save headaches by configuring version control so those files (.asset, .scene, .meta, etc.) automatically lock on checkout. Divide your systems into chunks small enough that pieces can be worked on concurrently, such as designers editing prefabs rather than editing scenes, code broken into sufficiently small subsystems, and so on.
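The namespace point is worth a tiny illustration. The MyGame.Inventory and MyGame.Crafting names are hypothetical; the benefit is that two subsystems can both define an Item without colliding.

```csharp
// Hypothetical layout: each subsystem owns its own namespace.
namespace MyGame.Inventory
{
    public class Item
    {
        public string Name;
    }
}

namespace MyGame.Crafting
{
    // A different Item, no conflict with the one above.
    public class Item
    {
        public string Name;
        public int Cost;
    }
}

// Consumers disambiguate with using-aliases or fully qualified names:
//   using InvItem = MyGame.Inventory.Item;
```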
Writing clean code is basically the same regardless of the tool you're using: name things well, keep naming standards and conventions uniform, and otherwise follow standard coding guidelines and practices.