
Approach to developing games


Recommended Posts

Hello, I just wanted to share some of my thoughts about game development. I have been programming for almost 7 years. I started in C, where the approach was very hands-on. Then I started to learn OO programming, and my approach became finding general solutions to my often very fuzzy goals (e.g. write a cross-platform 3D engine so I can make a super MMORPG that does everything). I started to see my productivity sink, and I spent much time writing code for imaginary problems. I kept doing the same thing because I thought it was the right way to do things: I'll probably need feature X sometime, so I'll prepare myself and code it now. Later I realised that it resulted in lots of wasted energy because of unused code.

Later I started to use a new approach where I looked for more specific problems to solve. Instead of searching for problems to solve, I simply took care of the ones at hand: I need feature X right now, so I write code for it. My productivity increased drastically. The problem was that I still tried to find general solutions to my problems. E.g. if I needed to render a 3D model, I wrote a model loader, renderer, render list, event system, transform hierarchy, camera, resource manager, scene manager, effect system, etc.

Now I'm starting to wonder if there is an even better approach: be hands-on, find a specific problem, and write a specific solution to that problem. If I later find that my solution needs to be more general, I simply rewrite it. Conventional wisdom says that this approach will result in spaghetti code with lots of bugs. I'm not sure this is true. I've always assumed it because it sounds true, but now there is an entire development process that involves frequently changing the design and rewriting the code (agile/evolutionary programming).

Anyway, I wonder if people here have any experience with being more hands-on and not trying to find general solutions to problems.
Are you productive?

I use a pencil and paper. If you're just sitting down and hacking this stuff out, you're going to end up with what I like to call "OO spaghetti code," which is to procedural spaghetti code what polonium is to rat poison.

Quote:
Original post by Opwiz
If I later find that my solution needs to be more general, I simply rewrite it. Conventional wisdom says that this approach will result in spaghetti code with lots of bugs. I'm not sure this is true. I've always assumed it because it sounds true, but now there is an entire development process that involves frequently changing the design and rewriting the code (agile/evolutionary programming). Anyway, I wonder if people here have any experience with being more hands-on and not trying to find general solutions to problems. Are you productive?


I have always worked in an experimental and specific fashion, refactoring code to make it more general when needed, and only when needed. However, it still pays to learn the overall structural stuff, so that you can anticipate how things may need to change, if they ever do.

The worst code I read is actually the stuff that is written for the general case - usually so general that it's 90% flexibility scaffolding and only 10% semantic content.

Some of my core development tenets: "YAGNI", "Refactor Mercilessly", "Once, Twice, Refactor", "Zero, One, Infinity". In addition to this, I usually unit-test my modules (and force myself to make the modules unit-testable by writing the tests before the modules).
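A minimal sketch of that test-first habit in C++ (the class and its interface are hypothetical, just for illustration): the assertions are written before the module exists, and the module then grows only the interface those assertions demand, which keeps it unit-testable by construction.

```cpp
#include <cassert>

// Hypothetical module, written AFTER the assertions at the bottom were
// drafted. Per YAGNI it exposes only what the test demands: no save/load,
// no score multipliers, nothing speculative.
class ScoreTracker {
public:
    void add(int points) { total_ += points; }
    int total() const { return total_; }
private:
    int total_ = 0;
};

// The "test written first": it pins down the needed interface.
inline void testScoreTracker() {
    ScoreTracker s;
    s.add(10);
    s.add(5);
    assert(s.total() == 15);
}
```

When a later feature needs, say, per-player scores, that is the moment the test (and then the class) gets extended, not before.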

I keep together a module graph (on paper) which I regularly update from the code, and I have developed two interesting tricks to detect places that need refactoring from that graph.

The code evolves in write-refactor cycles: I write new functionality using existing one, test it, then refactor the involved code areas (testing them along the way) until the "Once, Twice, Refactor" consideration is done. Then, I update the module graph, identify places which need refactoring, and repeat the refactor-and-test loop again until the graph is clean. One last full system build and test later, I commit to my source control, and repeat the process.

The key word here is functionality. I only write code which affects the resulting product. That is, it should be able to display or compute something new in the hands of a user (as opposed to the hands of a programmer), which also includes performance increases. The rule here is simple: if a non-technical end user cannot see the difference between the program before the feature and the program after the feature, then it's not worth adding.

I would add the probably-obvious caveat that you should, to the best of your ability, determine as many of the requirements beforehand as possible. Basically, this amounts to where you set the bar for "immediate need", or the granularity with which you apply your solution. As others have said, it's equally important to anticipate possible changes in your needs as well, just to avoid painting yourself into a corner.

It's also true that these changes will sometimes cascade into other parts of your program. This may make you *feel* more efficient because you're spending a lot of time doing and less time thinking. Ideally, you simply want to be spending less time. Period. If spending 15 minutes thinking saves an hour of doing, then that's the more efficient route.

Say, for example, that in the course of writing your game you decide that you need a vector class. You only need to be able to add and multiply vectors right now. Is it more efficient to write just the functionality you need now, adding additional functions as you discover a need for them (adding a mental context switch), or would it be easier to write a vector class with all the basic functionality up front? If you anticipate needing more functionality in the future, I would argue that the latter is more efficient in the long run.
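For concreteness, here is what the minimal-first side of that trade-off looks like in C++ (type and operator names are illustrative): start with only the two operations needed today, and add the dot product in a later session, when a real need appears.

```cpp
#include <cassert>

// Week 1: only what the game needs right now, addition and scalar multiply.
struct Vec2 {
    float x = 0.0f, y = 0.0f;
};

inline Vec2 operator+(Vec2 a, Vec2 b) { return {a.x + b.x, a.y + b.y}; }
inline Vec2 operator*(Vec2 v, float s) { return {v.x * s, v.y * s}; }

// Week 3: a need for the dot product finally shows up, so it is added now,
// without disturbing the existing operations. The up-front approach argued
// for above would simply have written it in week 1.
inline float dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }
```

Either ordering produces the same class in the end; the debate below is purely about when each member earns its place.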

Ultimately, good practice is not about swinging from one extreme to the other, or subscribing to the development buzzword du jour; it's about finding the balance that works for you.

Quote:
Original post by ravyne2001
Say, for example, that in the course of writing your game you decide that you need a vector class. You only need to be able to add and multiply vectors right now. Is it more efficient to write just the functionality you need now, adding additional functions as you discover a need for them (adding a mental context switch), or would it be easier to write a vector class with all the basic functionality up front? If you anticipate needing more functionality in the future, I would argue that the latter is more efficient in the long run.

There was/is a great comment on the c2 wiki about this kind of thinking, unfortunately I can't find it right now, so I'll paraphrase as much as I can.

It's going to take the same time to write the methods now as it will in two weeks, or two months, when you finally get round to needing them. You could spend half an hour now, or half an hour in a few weeks' time, or you could end up never needing it - in which case you obviously gain from deferring the work until later.

The rebuttal is of course "but I know how to do this now". So you're telling me this code is so complicated that even you won't be able to understand it in a few months' time? In that case you're screwed either way.

The whole idea that you can avoid a "mental context switch" by doing more work now, when you don't need to, is misplaced and IMHO just leads to over-engineered and over-complicated code.

Quote:
Original post by OrangyTang
It's going to take the same time to write the methods now as it will in two weeks, or two months, when you finally get round to needing them. You could spend half an hour now, or half an hour in a few weeks' time, or you could end up never needing it - in which case you obviously gain from deferring the work until later.

What if, unknown to you, adding that extra functionality later will require modifying other parts of the code you haven't written yet? You could spend half an hour now, or two hours in a few weeks time.

If you're working in a team, it's also possible that, a few weeks later, some other members of your team won't be able to continue development until a particular feature is implemented. Half-an-hour of potentially wasted time for one person has become half-an-hour of definitely wasted time for one or more members of your team.

Deferring work until the last possible minute is rarely the best solution outside programming. Deciding when to write a piece of code should be a balancing act between the cost of writing code that isn't needed, and the cost of deferring it until it really is needed.

Black-and-white "don't write any code you don't need right away" advice, like most black-and-white advice, is usually wrong.

Quote:
Original post by Nathan Baum
Deferring work until the last possible minute is rarely the best solution outside programming. Deciding when to write a piece of code should be a balancing act between the cost of writing code that isn't needed, and the cost of deferring it until it really is needed.

Black-and-white "don't write any code you don't need right away" advice, like most black-and-white advice, is usually wrong.

Oh I agree, but I'm not suggesting doing everything as late as possible, but rather that ravyne2001's suggestion of doing everything as early as possible isn't always a good idea.

Quote:
Original post by OrangyTang
Quote:
Original post by Nathan Baum
Deferring work until the last possible minute is rarely the best solution outside programming. Deciding when to write a piece of code should be a balancing act between the cost of writing code that isn't needed, and the cost of deferring it until it really is needed.

Black-and-white "don't write any code you don't need right away" advice, like most black-and-white advice, is usually wrong.

Oh I agree, but I'm not suggesting doing everything as late as possible, but rather that ravyne2001's suggestion of doing everything as early as possible isn't always a good idea.

ravyne2001 didn't actually suggest that. What was said was "if you anticipate needing more functionality in the future" it'll probably be faster in the long run to add it now. Of course it will take experience for your anticipations to match up with reality. In the specific example, though, it is probably reasonable to anticipate that the vector will eventually need a dot product operation, say, even though you don't need it now.

Right, I don't actually advocate that every bit of functionality be added up front, merely that it may be better to do so if there's a good possibility that the functionality will be needed, even if it isn't now. I'm also not advocating that things be overdesigned or overengineered; that's equally harmful because it complicates debugging both now and in the future.

Furthermore, vectors are somewhat of a poor example simply because they're so well defined -- the penalty for the "mental context switch" is very small for those with a good understanding of vectors and how they work. Something more complex would introduce a greater penalty. Say something like adding reference counting to a resource manager. Here, not only do you have to place code correctly, but you must mentally re-evaluate the manager to make sure that it works as expected, that it meshes with any possible corner cases and generates no ill side-effects of its own. Even this is a relatively simple example. The more complex the system and the more complex the added functionality, the greater the penalty becomes.
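A hedged sketch of that reference-counting retrofit in C++ (all names are hypothetical, and real managers track the resources themselves, not just counts). The counting logic itself is tiny; the cost described above is re-evaluating every existing acquire/release path in the manager once this is bolted on later.

```cpp
#include <cassert>
#include <map>
#include <string>

// Illustrative resource manager that gained reference counting after the
// fact. Each acquire bumps a count; the last release is the point where a
// real manager would unload the resource.
class ResourceManager {
public:
    void acquire(const std::string& name) { ++refs_[name]; }

    void release(const std::string& name) {
        auto it = refs_.find(name);
        if (it != refs_.end() && --it->second == 0)
            refs_.erase(it);  // last user gone: safe to unload here
    }

    int refCount(const std::string& name) const {
        auto it = refs_.find(name);
        return it == refs_.end() ? 0 : it->second;
    }

private:
    std::map<std::string, int> refs_;
};
```

The corner cases the post alludes to live outside this snippet: double releases, resources handed out before counting existed, and callers that cached raw pointers all have to be re-checked by hand.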

It takes a long time to develop the "taste" for when "good enough" really is good enough. It also takes a great deal of experience to write components and systems that are flexible enough to sway in the wind of future changes if necessary, rather than needing to be rewritten from the ground up.

I actually use a hybrid model for myself. It's what I call "design for tomorrow, build for today."

I start from the memory of the OP's experience (I too had a period where I wrote nothing but inferior sub-pieces of the grand designs I dreamed I might one day use).

To answer that, I take the Agile mentality as tenet 1: "Develop what you need now, now." Expect to change later, as the need arises.

To that core I add, "think before you act." And so I pull out the pen and paper and plan for what my actual goal is before I start spewing a bunch of classes across my hard-drive.

And throw in a dash of experience about what types of things are easy to change (the gui layout, the highest level app code) and are hard to change (the core domain model, its terms and basic relations) ... and some things in between (the data model, the interfaces I have built along the way).

So now I have an empty project and a clear goal (the initial requirements analysis has been done) ... what to do?

Design! Pull out more paper, or a whiteboard. Draw things, sketch workflows, identify the knowns and the unknowns, etc.

Now I do this at two levels.

Spy Satellite Level: I make sure EVERYTHING I need for my known goal has a very, very rough place in my design. Somewhere on my paper is an item like "Enemy AI" or "Multiplayer Lobby" at least. Nothing detailed yet, just a sketch of what I'm interested in, with bullet points lying around the periphery for the rest.

Skyline Level: I pick 1 or 2 areas I'm going to work on first and actually flesh out their shape. I define how they looked in my previous projects, how that was ugly, how it was good. I draw how I need them to look and how I'd like them to look. I envision them with features that customers might want in 5 years' time. I decorate them with interactions to modules that aren't in this game, but might be in the next one. I tackle problems in these imaginary future versions, or at least go far enough to see if the problems seem solvable given some time. When I'm satisfied with my grand future vision, I stop, step back, and regress the design: I take my visions of problems and how I might tackle them tomorrow, and group them into "this version", "next version", and "someday (maybe)".

Then I take the "this version" vision and go to Blueprint Level: labeling everything, drawing up the interfaces, the responsibilities, stick figures and UML(ish) diagrams ... and CODE, oh glorious code.

Or so I dream (somedays it works out too).

But basically I try to strike a balance: OVER designing, but JUST-RIGHT engineering. Partly just because I enjoy the designing so much, and the time wasted on over-design is so much lower than the time wasted on over-engineering.

Never write a document for something easier said in code. Never solve a problem in a text editor easier tackled at the white board.

The debugging version is: Never spend 15 minutes explaining a problem to a coworker that Google could have solved in 3. Never stare at a screen for a whole afternoon over something you could show a coworker in 15 minutes.

The age-old problem. I find it the most difficult part of programming: finding a framework that is not too convoluted, yet is flexible enough and does not require constant refactoring and redesigns. Also, if you don't start with a clear idea of where you want to go, you may arrive at dead ends where it will just not work for the intended purpose, and you'd have to rewrite a large chunk of code to make a piece of the puzzle fit (e.g., from recent experience, adding networking, cross-platform support, or multi-threading as an afterthought :)). It's even more difficult when you start working within a team. Then things tend to slip quite a bit, the code starts to rot and smell, and some things don't work the way you need them to work. Too many cooks spoil the broth, no matter how good they are individually!

Good code design, patterns and practices. The holy grail, I guess! Plus, I suck at the theoretical part of that :)

I don't like to code too much when I'm just "thinking", so I prototype my ideas in FreeBasic so I can see them in action and get an idea of how they would work best. If I'm at the idea stage, I don't even bother with C++. There are a lot more things to consider when writing in C++, and those extra considerations tend to hinder the creative process.

Once I'm ready to put an idea into action, I start off with a test app and build it out in C++. This keeps the implementation of the module separate from the main app and allows you to run it through a battery of tests without having to work around the main app.

For the main app framework, I use a command pattern and a simple parser that can take text commands and convert them to command objects. During initialization, a script is loaded which fires off the initial commands (video, input, etc.) Once the app is running, hard-coded commands may be called and a built-in console allows commands from the user (good for testing).

Overall, I stick to simplicity when designing/building an idea. I don't have a lot of time for programming outside of work so feature creep is NEVER an issue. Best bet is to build what you know you can handle.
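A rough sketch of the command-pattern-plus-parser setup described above, in C++ (all class and command names are hypothetical): a parser splits a text line into a command name and arguments and dispatches to a registered handler, so the same path can serve the init script, hard-coded calls, and a debug console.

```cpp
#include <functional>
#include <map>
#include <sstream>
#include <string>

// Maps text command names to callables. The init script, engine code, and
// the in-game console all feed lines through the same execute() path.
class CommandDispatcher {
public:
    using Handler = std::function<void(const std::string& args)>;

    void registerCommand(const std::string& name, Handler h) {
        handlers_[name] = std::move(h);
    }

    // Parse "name arg arg..." and dispatch; returns false for unknown names.
    bool execute(const std::string& line) {
        std::istringstream in(line);
        std::string name;
        in >> name;
        auto it = handlers_.find(name);
        if (it == handlers_.end()) return false;
        std::string args;
        std::getline(in, args);  // rest of the line, handler parses further
        it->second(args);
        return true;
    }

private:
    std::map<std::string, Handler> handlers_;
};
```

A real version would likely return command objects (so they can be queued or undone) rather than executing immediately, but the registration-plus-parse shape is the same.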

Quote:
Original post by Xai
I actually use a hybrid model for myself. It's what I call "design for tomorrow, build for today."

I like this a lot.
Quote:
Never write a document for something easier said in code.
Which is why a good technical DD is actually partially pseudo-code anyway.
Quote:
Never solve a problem in a text editor easier tackled at the white board.
You are now approaching deity status in my book.
Quote:
The debugging version is: Never spend 15 minutes explaining a problem to a coworker that Google could have solved in 3. Never stare at a screen for a whole afternoon over something you could show a coworker in 15 minutes.
Have I told you lately that I loved you? I wish you would write that up on parchment for distribution. Or stone tablets. But those are so hard to CC in email. *sigh*

A cousin to refactoring is simply iterative design anyway. This is a big one for me. Obviously there are certain chunks of things you can't iterate away and must do in one leap, but many things can be done in small, workable, testable steps.

I like what Brian Reynolds said in a GDC lecture a few years back: "A random number from 1 to 3 is a perfectly valid temporary AI." There is a lot of wisdom there. Just get it to work for now, then gradually build the intelligence into it.
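The quote translates almost literally into code: a placeholder "AI" that just picks one of three actions at random, good enough to get the rest of the game running, and replaced with real decision-making later (the enum and function names here are illustrative, not from any particular engine).

```cpp
#include <cstdlib>

// Temporary stand-in for the real AI: a random number from 1 to 3,
// exactly as in the Brian Reynolds quote. The rest of the game can be
// built and tested against this before any intelligence exists.
enum class Action { Attack = 1, Defend = 2, Wander = 3 };

inline Action chooseAction() {
    return static_cast<Action>(1 + std::rand() % 3);  // uniform over 1..3
}
```

Because callers only see the `Action` result, swapping in a genuine decision function later is a one-site change.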

It's very interesting to hear everyone's thoughts on this. I guess what I'm realising is that refactoring a design has become so easy today that the old assumptions about changing design and implementation no longer hold. There is less danger in starting with a too-narrow design.

Quote:
Original post by ravyne2001
Say, for example, that in the course of writing your game you decide that you need a vector class. You only need to be able to add and multiply vectors right now. Is it more efficient to write just the functionality you need now, adding additional functions as you discover a need for them (adding a mental context switch), or would it be easier to write a vector class with all the basic functionality up front? If you anticipate needing more functionality in the future, I would argue that the latter is more efficient in the long run.


I think this is a good example of my previous mindset. It might be relatively harmless to anticipate vector functionality and add it right away. But I would argue that it is less efficient in the long run because of the mindset of constantly anticipating problems: you focus on possible problems and worry about them, you get overwhelmed, and I think that makes you less efficient. Maybe there is a healthy way to anticipate problems while still focusing on solving the ones at hand.

I had an interesting thought: I guess this is a philosophy that can be translated to real life too. A lot of anxiety and stress is caused by fears that come from anticipating or imagining future problems.

Quote:
Original post by Opwiz
It's very interesting to hear everyone's thoughts on this. I guess what I'm realising is that refactoring a design has become so easy today that the old assumptions about changing design and implementation no longer hold. There is less danger in starting with a too-narrow design.


I don't think there has been that much 'danger' since the days of punch cards.

Seriously, I think the value of Big Up Front Design came about from the time when programming a computer was a long and arduous business and editing the code was non-trivial. This attitude was then perpetuated by the "programming is engineering" paradigm. I think this arose because academics and/or industry wanted to inject some sort of legitimacy and rigour into the process, but unfortunately it implies that changing a few functions in a code file is as costly and expensive as resurfacing a road or fabricating new pistons for an engine. Sadly, the resulting drive to lock down rigid specifications early in the process makes this a self-fulfilling prophecy, as any later change involves rewriting a load of documentation and propagating changes throughout the system.

In fact, code is much, much more fluid than any other engineering material and the manipulation of it should be done with that in mind.

Quote:
Original post by Opwiz[original post]
Excellent! Unlike 99% of people (including programmers) these days, you actually pay attention to what IS happening to you, try to understand it, and do something about it.

The computer (especially software) industry is going backwards at warp speed, mostly because they get suckered (by promoters, marketers, media, PR) into believing all kinds of crazy assertions. Usually they go: "adopt my tool or library or approach and you can be lazy, sloppy, careless, confused and not bother to understand the actual or efficient architecture of your problem --- yet SOMEHOW our projects will become easy, reliable, wonderful, world-class, state-of-the-art, utopia!" In other words, we can all be mental couch-potatoes and let the products/approaches those other people promote do the hard work. Hell, why hire smart people when monkeys can design space shuttles? The result? Even NASA cannot design space shuttles - or go to the moon - anymore.

You are taking responsibility for understanding your own mental processes and their consequences (your work). I wish more than a tiny minority did that.

The issues everyone is talking about in this excellent thread can be formulated and described in many different ways. When it comes to programming, I usually gravitate towards a few key concepts - like "atomic". You and others mentioned this issue in various ways (though I do not recall seeing the term "atomic"). This simply refers to the fact that we can identify certain operations, functions or processes that we need to perform often. Just as the entire universe is nothing but ~100 atoms in millions of configurations (and their constant actions/changes), many software processes (simple to complex) can be thought of as "atoms" - because we keep seeing them over and over again.

So these are opportunities to write routines that we KNOW we can apply dozens if not hundreds or thousands of times in a programming career. As you suggest, it is perfectly fine to learn as we gain experience - and tweak and hone these routines into one or a few forms that more-or-less cover every need that arises. But we always know it is truly open-ended - in the sense that we may always encounter good justifications to tweak the routine a little, make new versions for a new set of cases, or make a special-purpose one-time version.

This is far too practical and utilitarian for promoters and marketeers. Their entire existence depends on making you [think you] depend on them! Only then can they fake you into the quicksand of dead-end canyons where they GOT YA.

Unfortunately, most people are weak-minded and can only regurgitate ideas, not actually formulate, process and assess them. So the promoters spend lots of time, money, effort and attention on filling the minds of weak-minded fools with endless slogans to regurgitate at the appropriate moments. This seems like a silly waste of time, until you realize they end up with millions of advocates this way --- millions of advocates who instantly denigrate any thinking person for merely asking questions honestly and seriously. After all, the one true answer is *obvious* to them - the religiously captivated! Never mind that the next time a new product is released, only THAT can save you - nothing else. And you definitely NEED the new version, of course. And of course you had problems with the old version --- you need the NEW version, which is PERFECT.

Sigh.

What makes me saddest (and happy to read this thread) is the knowledge it has become infinitely more difficult to accomplish ANYTHING [with software] today. Whatever advances *may* have been made in software in the past 30 years, have been counteracted one million fold by incompatibilities and confusion.

Just consider this. No matter HOW much better language x, y or z supposedly is than vanilla C, ask yourself where we would be today if EVERYONE had written nothing but C function libraries and C applications for the last 30 years. Well, let's see. How about this: we would all have 100 million function libraries that we could call functions in. But wait! We would no longer NEED such an astronomical number of libraries, because we would no longer need hundreds of different versions of each (for every language, OS, tool, scheme, version). Long ago, people would have realized that EVERYONE would be way ahead if their efforts (and everyone else's) were applied to never-ending improvements of a small set of carefully crafted libraries - plus occasional new libraries for honestly new work. Just imagine all the great tools we would all have access to by now! This kind of assumes an open-source approach, but that would be a natural consequence of this scenario.

Instead, we have endless self-proclaimed authorities trying to "herd cats". What we have is endless chaos and endless crap. And only a teeny, tiny percentage of people (and programmers) ever gain enough self-confidence or tendency toward introspection (or self-responsibility) to ask the kind of questions you asked.

I see the problem as part of the world-gone-crazy, hyper-materialistic, hyper-short-term-orientation modus operandi that is so dominant today. How many people are even willing to take the time to reflect upon the questions in this thread? Not many --- just read most threads in most forums. And this is an excellent website, far above average.

My other software policy is to write everything myself. And when I do build upon the work of others, I always deal with only the lowest-level interface. Of course, it took many painful experiences before I realized that EVERY time I tried to adopt a "high-level" interface I had endless IRRESOLVABLE problems, as opposed to modest (but resolvable) problems with lowest-level interfaces.

Quote:
Original post by bootstrap
Quote:
Original post by Opwiz[original post]
Excellent! Unlike 99% of people (including programmers) these days, you actually pay attention to what IS happening to you, try to understand it, and do something about it.

The computer (especially software) industry is going backwards at warp speed, mostly because they get suckered (by promoters, marketers, media, PR) into believing all kinds of crazy assertions. Usually they go "adopt my tool or library or approach and you can be lazy, sloppy, careless, confused and not bother to understand the actual or efficient architecture of your problem --- yet SOMEHOW our projects will become easy, reliable, wonderful, world-class, state-of-the-art, utopia! In other words, we can all be mental couch-potatoes and let the products/approaches those other people promote do the hard work. Hell, why hire smart people when monkeys can design space shuttles? Result? Even NASA cannot design space shuttles - or go to the moon - anymore.

You are taking responsibility for understanding your own mental processes and their consequences (your work). I wish more than a tiny minority did that.

The issues everyone is talking about in this excellent thread can be formulated and described in many different ways. When it comes to programming, I usually gravitate towards a few key concepts - like "atomic". You and others mentioned this issue in various ways (though I do not recall seeing the term "atomic"). This simply refers to the fact that we can identify certain operations, functions or processes that we need to perform often. Just as the entire universe is nothing but ~100 atoms in millions of configurations (and their constant actions/changes), many software processes (simple to complex) can be thought of as "atoms" - because we keep seeing them over and over again.

So these are opportunities to write routines that we KNOW we can apply dozens if not hundreds/thousands of times in a programming career. As you suggest, it is perfectly fine to learn as we gain experience - and tweak and hone these routines into one/few forms that more-or-less cover every need that arises. But we always know it is truly open-ended - in the sense that we may always encounter good justifications to tweak the routine a little, or make a new versions for a new set of cases, or make a special-purpose one-time version.

This is far too practical and utilitarian for promoters and marketeers. Their entire existence depends on making you [think you] depend on them! Only then can they fake you into the quicksand of dead-end canyons where they GOT YA.

Unfortunately, most people are weak-minded and can only regurgitate ideas, not actually formulate, process and asses them. So the promoters spend lots of time, money, effort and attention on filling the minds of weakminded fools with endless slogans to regurgitate at the appropriate moments. This seems like a silly waste of time, until you realize they end up with millions of advocates this way --- millions of advocates who instantly denigrate any thinking person for merely asking questions honestly and seriously. After all, the one true answer is *obvious* to them - the religiously captivated! Never mind that the next time a new product is released, only THAT can save you - nothing else. And you definitely NEED the new version, of course. And of course you had problems with the old version --- you need the NEW version, which is PERFECT.

Sigh.

What makes me saddest (and happy to read this thread) is the knowledge it has become infinitely more difficult to accomplish ANYTHING [with software] today. Whatever advances *may* have been made in software in the past 30 years, have been counteracted one million fold by incompatibilities and confusion.

Just consider this. No matter HOW much better language x, y or z supposedly is than vanilla C, ask yourself where we would be today if EVERYONE had written nothing but C function libraries and C applications for the last 30 years. Well, let's see. How about this. We would all have 100 million function libraries that we could call functions in. But wait! We would no longer NEED such an astronomical number of libraries, because we would no longer need hundreds of different versions of each (for every language, OS, tool, scheme, version). Long ago people would have realized that EVERYONE would be way ahead if their efforts (and everyone else's) were applied to never-ending improvements of a small set of carefully crafted libraries - plus occasional new libraries for honestly new work. Just imagine all the great tools we would all have access to by now! This kind-of assumes an open-source approach, but that would be a natural consequence of this scenario.

Instead, we have endless self-proclaimed authorities trying to "herd cats". What we have is endless chaos and endless crap. And only a teenie, tiny percentage of people (and programmers) ever gain enough self-confidence or tendency towards introspection (or self responsibility) to ask the kind of questions you asked.

I see the problem as part of the world-gone-crazy, hyper-materialistic, hyper-short-term-orientation modus operandi that is so dominant today. How many people are even willing to take the time to reflect upon the questions in this thread? Not many --- just read most threads in most forums. And this is an excellent website, far above average.

My other software policy is to write everything myself. And when I do build upon the work of others, I always deal with only the lowest-level interface. Of course it took me many painful experiences before I realized that EVERY time I tried to adopt a "high-level" interface I had endless IRRESOLVABLE problems, as opposed to modest (but resolvable) problems with lowest-level interfaces.


I agree with most of what you say, with the exception that I wouldn't call the people 'weakminded', but rather inexperienced. Also, these people don't accept their errors (and therefore forget they happened).

I belonged to this 'weakminded' group and maybe I still do. Yet I have come to realise a lot of things. I now stand more open and am able to accept other people's views and opinions about topics. Before it was the "MY WAY OR THE HIGHWAY, DUDE" and "I KNOW WAY BETTER" attitude.
I think the first step to 'improve' yourself is to admit and accept that you can be wrong in the first place...

It is not that I can blame the 'weakminded' people for being as they are; education plays a big part here. In the early days it was more: "Pick a side and stick with it". Now it is growing more towards: "Take a few moments to re-evaluate yourself and your work now and then". Seemingly this last approach leads to a more open-minded attitude, which is better in group-work relations.

I think it is okay to be 'weakminded' as long as you learn from your mistakes. It is certainly not obvious to self-reflect now and then; it is something you need to learn.

Now for the original purpose of the thread:

The software development methodology (SDM) I am currently using, or better said trying out, is to first spend enough time writing your ideas down on paper. After that you translate those ideas into goals.
When the goals are ready you design the path straight to the goal. After you build this 'spine', which is not able to work on its own, you can build the necessary 'limbs' (necessary features, like garbage collectors and resource builders/loaders).
I forgot to mention that after you build the spine, you fully test it before continuing to build the limbs, which you afterwards test too, first standalone and then as a whole with the spine.
After the spine and limbs are ready you can refine the whole with the addition of 'ribs' (the little extras, like model loaders which can load multiple file formats).

I certainly don't claim that this SDM is the best approach; I am trying it out and will evaluate it at the end. For me it was the most 'obvious' next SDM to try.

Writing code is very easy, thinking of features is easy, implementing every feature is easy; letting every feature work together is quite hard. Making a project where everything works flawlessly together, and where the code is also very clean and easy to read, is incredibly hard. This is where a programmer's life starts.

Regards,

Xeile

Quote:
Deferring work until the last possible minute is rarely the best solution outside programming.

True, but WRT programming it often is; I agree with OrangyTang here.
Never implement something because you think you might need it, or that it will be useful later on.
I wish someone had taught me this many years ago - the single most important piece of advice for productivity.

Quote:
Original post by Opwiz
It is not that I can blame the 'weakminded' people for being as they are; education plays a big part here. In the early days it was more: "Pick a side and stick with it". Now it is growing more towards: "Take a few moments to re-evaluate yourself and your work now and then". Seemingly this last approach leads to a more open-minded attitude, which is better in group-work relations.

I think it is okay to be 'weakminded' as long as you learn from your mistakes. It is certainly not obvious to self-reflect now and then; it is something you need to learn.
I believe "not learning from our mistakes" is a major form of weakmindedness, so perhaps we agree more than you realize.

But if not learning from our mistakes is weakminded, what do you call it when people strongly advocate ideas/approaches/technologies/products before they even work with them (and their alternatives) [much]? This is even more weakminded, because the person supports (and fails to question) [popular] fads and opinions before they observe their consequences with their own eyes/work/projects.

Anyone who thinks for himself, seriously applies his own and others' ideas, and then honestly observes and identifies the consequences of their application - is not weakminded. But this is not common in the short-attention-span, instant-gratification, media-hype-PR world of today.

Let me try to be more precise about what weakminded and not weakminded mean. We are not weakminded when we are completely honest with ourselves, including about how strong/certain our opinion can be given the degree of time and effort we've invested in a specific topic. In contrast, weakminded people take positions because they are popular (or somehow easier), and therefore "safe" (or lazy) to advocate (the crowd will not look down on you (or effortless)). Honestly, I think many people have never bothered to force themselves to be ruthlessly independent in their thinking and opinions when it is significantly inconvenient or uncomfortable. They have little idea what this notion means, because they have thoroughly habituated lazy/easy [weakminded] "thinking".

BTW (elsewhere in your message), I too fill dozens if not hundreds of pieces of paper with ideas and drawings before (and during) many complex, non-trivial, important projects (and parts thereof).
