Right, I don't actually advocate that every bit of functionality be added up front, merely that it may be better to do so if there's a good possibility that the functionality will be needed, even if it's not needed now. I'm also not advocating that things be overdesigned or overengineered; that's equally harmful because it complicates debugging both now and in the future.
Furthermore, vectors are somewhat of a poor example simply because they're so well defined -- the penalty for the "mental context switch" is very small for those with a good understanding of vectors and how they work. Something more complex would introduce a greater penalty. Say something like adding reference counting to a resource manager. Here, not only do you have to place code correctly, but you must mentally re-evaluate the manager to make sure that it works as expected, that it meshes with any possible corner cases and generates no ill side-effects of its own. Even this is a relatively simple example. The more complex the system and the more complex the added functionality, the greater the penalty becomes.
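To make the resource-manager example concrete, here is a minimal sketch of what adding reference counting to a manager might look like. All names (Resource, ResourceManager, acquire, release) are illustrative, not from any real library; the point is how many corner cases (double release, unknown names, unload timing) the change forces you to re-evaluate.

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical resource with an embedded reference count.
struct Resource {
    std::string name;
    int refCount = 0;
};

class ResourceManager {
    std::map<std::string, Resource> resources_;
public:
    // acquire() bumps the count, "loading" the resource on first use.
    Resource* acquire(const std::string& name) {
        Resource& r = resources_[name];  // default-constructs on first use
        r.name = name;
        ++r.refCount;
        return &r;                       // std::map keeps element addresses stable
    }
    // release() decrements; the resource is unloaded when the count hits zero.
    // Each branch here is exactly the kind of corner case the post describes.
    void release(const std::string& name) {
        auto it = resources_.find(name);
        if (it == resources_.end()) return;  // unknown name: ignore
        if (--it->second.refCount <= 0)
            resources_.erase(it);            // last user gone: unload
    }
    bool isLoaded(const std::string& name) const {
        return resources_.count(name) != 0;
    }
};
```

Even this toy version raises questions (should a double release be an error? what if a handle outlives the manager?) that illustrate the "mental re-evaluation" penalty.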
It takes a long time to develop the "taste" for when "good enough" really is good enough. It also takes a great deal of experience to write components and systems that are flexible enough to sway in the wind of future changes if necessary, rather than needing to be rewritten from the ground up.
Approach to developing games
I actually use a hybrid model for myself. It's what I call design for tomorrow, build for today.
I start with the memory of the OP's experience (I too had a period where I wrote nothing but inferior subpieces of the dreams I held of grand designs I might one day use).
To answer that I use the Agile mentality as tenet 1. "Develop what you need now, now." Expect to change later, as the need arises.
To that core I add, "think before you act." And so I pull out the pen and paper and plan for what my actual goal is before I start spewing a bunch of classes across my hard-drive.
And throw in a dash of experience about what types of things are easy to change (the gui layout, the highest level app code) and are hard to change (the core domain model, its terms and basic relations) ... and some things in between (the data model, the interfaces I have built along the way).
So now I have an empty project and a clear goal (the initial requirements analysis has been done) ... what to do?
Design! Pull out more paper, or a whiteboard. Draw things, sketch workflows, identify the knowns, the unknowns, etc.
Now I do this at 2 levels. Spy Satellite Level): I make sure EVERYTHING I need in my known goal has a very, very rough place in my design. Somewhere on my paper is an item like "Enemy AI" or "Multiplayer Lobby" at least. Nothing detailed yet, just a sketch of what I'm interested in, and bullet points lying around the periphery for the rest. Skyline Level): I pick 1 or 2 areas I'm going to work on first and I actually flesh out their shape. I define how they looked in my previous projects, how that was ugly, how it was good. I draw how I need them to look, how I'd like them to look. I envision them with features that customers might want in 5 years' time. I decorate them with interactions to modules that aren't in this game, but might be in the next one. I tackle problems in these imaginary future versions, or at least go far enough to see if the problems seem solvable given some time. When I'm satisfied with my grand future vision, then I stop, step back and regress the design. I take my visions of problems and how I might tackle them tomorrow and I group them into "this version", "next version", "someday (maybe)".
Then I take the "this version" vision, and I go into Blueprint Level): labeling everything, drawing up the interfaces, the responsibilities, stick-figures and UML(ish) diagrams ... and CODE, oh glorious code.
Or so I dream (some days it works out too).
But basically I try to strike a balance - OVER designing, but JUST-RIGHT engineering. Partially just because I enjoy the designing so much, and the time wasted that way is so much lower than the time lost to over-engineering.
Never write a document for something easier said in code. Never solve a problem in a text editor easier tackled at the white board.
The debugging version is: Never explain a problem to a coworker for 15 minutes that Google could have solved in 3. Never stare at a screen for a whole afternoon at what you could show a coworker in 15 minutes.
The age-old problem. I find it the most difficult part of programming: finding a framework that is not too convoluted, yet flexible enough that it does not require constant refactoring and redesigns. Also, if you don't start with a clear idea of where you want to go, you may arrive at dead ends, where it will just not work for the intended purpose, and you'd have to rewrite a large chunk of code to make a piece of the puzzle fit (e.g. from recent experience, adding networking, cross-platform support, or multi-threading as an afterthought :)). It's even more difficult when you start working within a team. Then things tend to slip quite a bit, the code starts to rot and smell, and some things don't work the way you need them to work. Too many cooks spoil the broth, no matter how good they are individually!
Good code design patterns and practices. The holy grail, I guess! Plus, I suck at the theoretical part of that :)
I don't like to code too much when I'm just "thinking", so I prototype my ideas in FreeBasic so I can see them in action and get an idea of how they would work best. If I'm at the idea stage, I don't even bother with C++. There are a lot more things to consider when writing in C++, and those extra considerations tend to hinder the creative process.
Once I'm ready to put an idea into action, I start off with a test app and build it out in C++. This keeps the implementation of the module separate from the main app and allows you to run it through a battery of tests without having to work around the main app.
For the main app framework, I use a command pattern and a simple parser that can take text commands and convert them to command objects. During initialization, a script is loaded which fires off the initial commands (video, input, etc.) Once the app is running, hard-coded commands may be called and a built-in console allows commands from the user (good for testing).
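A minimal sketch of that command-pattern idea might look like the following. This is illustrative only: the class and method names (CommandParser, registerCommand, execute) are hypothetical, and real engines would route to richer command objects rather than plain callbacks.

```cpp
#include <functional>
#include <map>
#include <sstream>
#include <string>
#include <vector>

// Text commands ("init video 800 600") are split into a name plus
// arguments and dispatched to a registered handler. A built-in console
// or an init script can both feed lines into execute().
class CommandParser {
    using Handler = std::function<std::string(const std::vector<std::string>&)>;
    std::map<std::string, Handler> handlers_;
public:
    void registerCommand(const std::string& name, Handler fn) {
        handlers_[name] = std::move(fn);
    }
    std::string execute(const std::string& line) {
        std::istringstream in(line);
        std::string name;
        in >> name;                            // first token is the command
        std::vector<std::string> args;
        for (std::string a; in >> a; ) args.push_back(a);
        auto it = handlers_.find(name);
        if (it == handlers_.end()) return "unknown command: " + name;
        return it->second(args);               // run the command object/handler
    }
};
```

Hard-coded calls, the startup script, and the user console all go through the same `execute()` entry point, which is what makes the console so handy for testing.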
Overall, I stick to simplicity when designing/building an idea. I don't have a lot of time for programming outside of work so feature creep is NEVER an issue. Best bet is to build what you know you can handle.
Quote:Original post by Xai
I actually use a hybrid model for myself. It's what I call design for tomorrow, build for today.
I like this a lot.
Quote:Never write a document for something easier said in code.
Which is why a good technical DD is actually partially pseudo-code anyway.
Quote:Never solve a problem in a text editor easier tackled at the white board.
You are now approaching deity status in my book.
Quote:The debugging version is: Never explain a problem to a coworker for 15 minutes that Google could have solved in 3. Never stare at a screen for a whole afternoon at what you could show a coworker in 15 minutes.
Have I told you lately that I love you? I wish you would write that up on parchment for distribution. Or stone tablets. But those are so hard to CC in email. *sigh*
A cousin to refactoring is simply iterative design anyway. This is a big one for me. Obviously there are certain chunks of things you can't iterate away and must do in one leap, but many things can be done in small, workable, testable steps.
I like what Brian Reynolds said in a GDC lecture a few years back: "A random number from 1 to 3 is a perfectly valid temporary AI." There is a lot of wisdom there. Just get it to work for now, then gradually build the intelligence into it.
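That Reynolds quip translates almost literally into code. In this hypothetical sketch (the Action enum and chooseAction are made-up names), the interface callers depend on is real, while the brain behind it is a stub you can swap out later without touching any call site:

```cpp
#include <cstdlib>

// The three things our imaginary agent can do.
enum Action { Attack = 1, Defend = 2, Flee = 3 };

// Temporary AI: a random number from 1 to 3, exactly as the quote says.
// Replace this body with real decision logic later; the callers never change.
Action chooseAction() {
    return static_cast<Action>(std::rand() % 3 + 1);
}
```

The value is that the rest of the game can be built and tested against `chooseAction()` today, and the intelligence grows in behind a stable interface.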
It's very interesting to hear everyone's thoughts on this. I guess what I'm realising is that refactoring a design has become so easy today that the old assumptions about changing design and implementation no longer hold. There is less danger in initially having a too-narrow design.
Quote:Original post by ravyne2001
Say, for example, that in the course of writing your game you decide that you need a vector class. You only need to be able to add and multiply vectors right now. Is it more efficient to write just the functionality you need now, adding additional functions as you discover a need for them (adding a mental context switch,) or would it be easier to write a vector class with all the basic functionality up front? If you anticipate needing more functionality in the future, I would argue that the latter is more efficient in the long run.
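For concreteness, the "just what you need now" version described in that quote might be no more than this sketch (the Vec3 name and float components are illustrative assumptions):

```cpp
// Minimal vector: only the addition and scalar multiplication
// the game needs today. Dot product, length, normalize, etc.
// are deliberately absent until a caller actually needs them.
struct Vec3 {
    float x, y, z;
};

Vec3 operator+(const Vec3& a, const Vec3& b) {
    return {a.x + b.x, a.y + b.y, a.z + b.z};
}

Vec3 operator*(const Vec3& v, float s) {
    return {v.x * s, v.y * s, v.z * s};
}
```

The up-front alternative would add the dozen standard operations immediately; the trade-off the quote describes is precisely whether those extra members are worth writing before anything calls them.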
I think this is a good example of my previous mindset. It might be relatively harmless to anticipate vector functionality and add it right away. But I would argue that it is less efficient in the long run because of the mindset of constantly anticipating problems. If you focus on possible problems and worry about them, you get overwhelmed, and I think that makes you less efficient. Maybe there is a healthy way to anticipate problems while still focusing on solving the ones at hand.
I had an interesting thought. I guess this is a philosophy that can be translated to real life too. A lot of anxiety and stress is caused by fears that come from anticipating or imagining future problems.
Quote:Original post by Opwiz
It's very interesting to hear everyone's thoughts on this. I guess what I'm realising is that refactoring a design has become so easy today that the old assumptions about changing design and implementation no longer hold. There is less danger in initially having a too-narrow design.
I don't think there has been that much 'danger' since the days of punch cards.
Seriously, I think the value of Big Up Front Design came about from the time when programming a computer was a long and arduous business and editing the code was non-trivial. This attitude was then perpetuated by the "programming is engineering" paradigm. I think this arose because academics and/or industry wanted to inject some sort of legitimacy and rigour into the process, but unfortunately it implies that changing a few functions in a code file is as costly and expensive as resurfacing a road or fabricating new pistons for an engine. Sadly, the resulting drive to lock down rigid specifications early in the process makes this a self-fulfilling prophecy, as any later change involves rewriting a load of documentation and propagating any changes throughout the system.
In fact, code is much, much more fluid than any other engineering material and the manipulation of it should be done with that in mind.
Quote:Original post by Opwiz [original post]
Excellent! Unlike 99% of people (including programmers) these days, you actually pay attention to what IS happening to you, try to understand it, and do something about it.
The computer (especially software) industry is going backwards at warp speed, mostly because they get suckered (by promoters, marketers, media, PR) into believing all kinds of crazy assertions. Usually they go "adopt my tool or library or approach and you can be lazy, sloppy, careless, confused and not bother to understand the actual or efficient architecture of your problem --- yet SOMEHOW our projects will become easy, reliable, wonderful, world-class, state-of-the-art, utopia!" In other words, we can all be mental couch-potatoes and let the products/approaches those other people promote do the hard work. Hell, why hire smart people when monkeys can design space shuttles? Result? Even NASA cannot design space shuttles - or go to the moon - anymore.
You are taking responsibility for understanding your own mental processes and their consequences (your work). I wish more than a tiny minority did that.
The issues everyone is talking about in this excellent thread can be formulated and described in many different ways. When it comes to programming, I usually gravitate towards a few key concepts - like "atomic". You and others mentioned this issue in various ways (though I do not recall seeing the term "atomic"). This simply refers to the fact that we can identify certain operations, functions or processes that we need to perform often. Just as the entire universe is nothing but ~100 kinds of atoms in millions of configurations (and their constant actions/changes), many software processes (simple to complex) can be thought of as "atoms" - because we keep seeing them over and over again.
So these are opportunities to write routines that we KNOW we can apply dozens if not hundreds/thousands of times in a programming career. As you suggest, it is perfectly fine to learn as we gain experience - and tweak and hone these routines into one/few forms that more-or-less cover every need that arises. But we always know it is truly open-ended - in the sense that we may always encounter good justifications to tweak the routine a little, or make a new version for a new set of cases, or make a special-purpose one-time version.
This is far too practical and utilitarian for promoters and marketeers. Their entire existence depends on making you [think you] depend on them! Only then can they fake you into the quicksand of dead-end canyons where they GOT YA.
Unfortunately, most people are weak-minded and can only regurgitate ideas, not actually formulate, process and assess them. So the promoters spend lots of time, money, effort and attention on filling the minds of weakminded fools with endless slogans to regurgitate at the appropriate moments. This seems like a silly waste of time, until you realize they end up with millions of advocates this way --- millions of advocates who instantly denigrate any thinking person for merely asking questions honestly and seriously. After all, the one true answer is *obvious* to them - the religiously captivated! Never mind that the next time a new product is released, only THAT can save you - nothing else. And you definitely NEED the new version, of course. And of course you had problems with the old version --- you need the NEW version, which is PERFECT.
Sigh.
What makes me saddest (and happy to read this thread) is the knowledge it has become infinitely more difficult to accomplish ANYTHING [with software] today. Whatever advances *may* have been made in software in the past 30 years, have been counteracted one million fold by incompatibilities and confusion.
Just consider this. No matter HOW much better language x, y or z supposedly is than vanilla C, ask yourself where we would be today if EVERYONE had written nothing but C function libraries and C applications for the last 30 years. Well, let's see. How about this. We would all have 100 million function libraries that we could call functions in. But wait! We no longer NEED such an astronomical number of libraries, because we no longer need hundreds of different versions of each (for every language, OS, tool, scheme, version). Long ago people would have realized that EVERYONE would be way ahead if their efforts (and everyone else's) were applied to never-ending improvements of a small set of carefully crafted libraries - plus occasional new libraries for honestly new work. Just imagine all the great tools we would all have access to by now! This kind-of assumes an open-source approach, but that would be a natural consequence of this scenario.
Instead, we have endless self-proclaimed authorities trying to "herd cats". What we have is endless chaos and endless crap. And only a teeny, tiny percentage of people (and programmers) ever gain enough self-confidence or tendency towards introspection (or self-responsibility) to ask the kind of questions you asked.
I see the problem as part of the world-gone-crazy, hyper-materialistic, hyper-short-term-orientation modus operandi we can see is so dominant today. How many people are even willing to take the time to reflect upon the questions in this thread? Not many --- just read most threads in most forums. And this is an excellent website, far above average.
My other software policy is to write everything myself. And when I do build upon the work of others, I always deal with only the lowest-level interface. Of course it took me many painful experiences before I realized that EVERY time I tried to adopt a "high-level" interface I had endless IRRESOLVABLE problems, as opposed to modest (but resolvable) problems with lowest-level interfaces.
Quote:Original post by bootstrap [...]
I agree with most of what you say, with the exception that I wouldn't call the people 'weakminded', but rather inexperienced. Also, these people don't accept their errors (and therefore forget they happened).
I belonged to this 'weakminded' group and maybe I still do. Yet I have come to realise a lot of things. I now stand more open and am able to accept someone else's views and opinions on topics. Before, it was a "MY WAY OR THE HIGHWAY, DUDE" and "I KNOW WAY BETTER" attitude.
I think the first step to 'improving' yourself is to admit and accept that you can be wrong in the first place...
It is not that I can blame the 'weakminded' people for being as they are; education plays a big part here. In the early days it was more: "Pick a side and stick with it". Now it is growing more towards: "Take a few moments to re-evaluate yourself and your work now and then". Seemingly this last approach leads to a more open-minded attitude, which is better in group-work relations.
I think it is okay to be 'weakminded' as long as you learn from your mistakes. It is certainly not obvious to self-reflect now and then; it is something you need to learn.
Now for the original purpose of the thread:
The software development methodology (SDM) I am currently using, or better said, trying out, is that you first spend enough time writing your ideas down on paper. After that you translate those ideas into goals.
When the goals are ready you design the path straight to the goal. After you build this 'spine', which is not able to work on its own, you can build the necessary 'limbs' (necessary features, like garbage collectors, resource builders/loaders).
Forgot to mention: after you build the spine, you fully test it before continuing to build the limbs, which you afterwards test too, first standalone and then as a whole with the spine.
After the spine and limbs are ready you can refine it all with the addition of 'ribs' (the little extras, like model loaders which can load multiple file formats).
I certainly don't claim that this SDM is the best approach; I am trying it out and will evaluate it at the end. For me it was the most 'obvious' next SDM to try.
Writing code is very easy, thinking of features is easy, implementing every feature is easy; letting every feature work together is quite hard. Making a project where everything works flawlessly together and where the code is also very clean and easy to read is incredibly hard. This is where a programmer's life starts.
Regards,
Xeile
Quote:Deferring work until the last possible minute is rarely the best solution outside programming.
True, but WRT programming it often is; I agree with OrangyTang here.
Never implement something because you think you might need it, or that it will be useful later on.
I wish someone had taught me this many years ago; it's the single most important piece of advice for productivity.