
The Bag of Holding



Inventing Destiny

Posted 04 March 2006 · 181 views

I aspire to greatness.


For as long as I can remember, I've had a sort of thread running in the CPU of my brain, monitoring my activities and recording things the way one might assemble a documentary. Sometimes this thread focuses on the mundane: "Here we see a sickly-looking specimen of Apoch brushing his teeth." Every now and then it gets presumptuous and starts interviewing other parts of my mind, constructing entire timelines of my life, and all the grand things I imagine I might do. It's a sort of biographer, talking to the future me at the end of my time, discussing all that happened and what I think about it - except that none of it has happened yet.

Maybe none of it ever will.

I imagine that many people have aspirations of greatness. It seems a deeply vital part of human society for someone to want to reach beyond, to look at what might be, and to make it so. These are the people that brought us fire, iron tools, and the Internet. These are the people that move us from a grunting herd of barbaric animals into a realm of suits, ties, and power lunches. These are the people who are history.

The term "youthful idealism" is not foreign to me. I think I have a particularly large dose of it, at least by comparison to many of my peers. I'm the annoying guy that always walks into a room and finds five things that could be done better. I'm the one that talks all that boring, level-headed nonsense about planning for the future while everyone else is having a grand old time getting drunk at the party of the week. I literally devote more of my time to figuring out the long-term career effects of my sleeping habits than figuring out what to eat, what to wear, and what to do on Friday night. Hell, on Friday nights I usually read a book.

Aspirations are not really great conversation pieces. They don't make one interesting at parties. Sometimes it seems like aspirations don't do a whole lot besides get in the way and waste a lot of time that could be spent doing stuff. And yet they stubbornly hang on, refusing to submit to the cold realities of practical daily life.

How is it, then, that one can aspire for so long, and wake up to discover that only aspiration has happened, and no accomplishment? How is it that this awakening can occur on a regular basis, even daily, and yet the problem is never cured? These aspirations are grand. These are the sorts of dreams that form history, change the world, utterly reshape the way people think of life. Very few people are privileged with playing such a role, and yet I seem to expect no less of myself.


Some might talk of destiny, of things meant to be, of what a life should involve. These things hold no answers. It is foolish to spend a lifetime thinking intently on what is meant to be, and then slip off into the sleep of death with the sad realization that it never was. What could be a worse torment, than to aspire for so long, and fail to do?

Maybe some things are meant to be. I do not see nearly enough of the flow of reality to answer that question. But I do know that there is more than just meant to be, more than daydreams and aspirations and Time articles in the imagination. The true shapers of things did not merely aspire; indeed, many aspire, and yet never accomplish. To cross over into the realm of real deeds requires much more than mere vision, mere ambition, mere desire. Action is not a simple thought, nor an easy successor to careful planning.


I have no lack of aspiration. Some might even call it egomania, a narcissistic self-obsession, an arrogance. They may be right. I do not fear either lack or excess of ambition; for my purposes, I have a sufficient quantity, and that is comfort enough. I am not concerned with "failure" as it might be called: to aspire, and strive, and yet not meet the expectations of one's dream. Expectations are easy to inflate; reality is not so willing. To simply fall short of expectation is no failure, no loss, no shame. I fear none of this; it does not cost me sleep, nor haunt my thoughts.


I fear only that I may lack that unnamed quantity which moves a person to attempt any action at all. To dream, and be consumed by a dream, and then awaken - this is a harsh shock, a jolt of pain that lingers and aches. I fear awakening from the dreams of my life to find that they never came to be.


Games... and cars... and STUFF!

Posted 02 March 2006 · 216 views

So yesterday I bought a black 2003 Taurus. Not the sexiest ride in the world, but it works, and it's a lot more fun to drive than a Buick. Dropping $13,000 in a single chunk is a bit surreal, but it was fun.

I got the car from CarMax, and the buying experience was great. This is a very good thing, because I've been feeling like total crap for the last three days, and if I'd had any problems buying the car, I would have shot someone. Or maybe just hacked up a lung all over them and given them a communicable disease. (I came close to doing that second one anyways.)


We're nearing the tail end of code cleanup work at Egosoft. Today I went through the script codebase and did a massive file reorganization, basically moving stuff into more sensible directory hierarchies and that sort of thing. There's a few more modules to clean up, and a lot of documentation to write, but for the most part the end is in sight. Later tonight I'll be finishing up some of the less drastic refactoring tweaks, and probably starting on the team's "Official Good Practices" manual, which is currently sorely out of date.


Verbosity

Posted 26 February 2006 · 220 views

It has been brought to my attention that my last post is ridiculously huge. I'm not sure whether to offer to pay hospital bills for those who endure reading the whole thing at a single sitting, or offer prizes.

In other news, shopping for good used cars is a pain in the butt.


Towards A Richer Toolbox

Posted 26 February 2006 · 298 views

Some historical context
NOTE: If you don't care about my navel-gazing musings, you can skip to the good stuff, starting at the "Platforms" heading below.

Those of you who have followed my activities for a while (you sick, voyeuristic freaks!) know that a while ago I was working heavily on research for a realtime raytracing engine. The project was called Freon 2/7, for no really good reason, and basically started out as an attempt to achieve high graphics quality without taking all day to render. Specifically, the project arose as a result of my tinkerings with the POVRay raytracing system; I liked the images it could produce, but I hated the render times.

I started out some research, found some other people to help tinker with the project, and over the space of about three years started to produce some results. Those of you who are familiar with the demoscene know that realtime raytracing has been more or less around for many years - but it works by making many assumptions, and many sacrifices. The goal of the Freon project was to find rendering solutions that were fast, generic enough to be useful for things like realtime games, and most importantly high-quality.

Eventually, I made some interesting discoveries, like some additions to kernel-based photon mapping estimations for global illumination. (Sorry, too lazy to explain that for anyone who hasn't spent a couple years dabbling in light simulation theory [wink]) Early on, it became obvious that dedicated raytracing hardware was the solution. After a lot of thought and investigation, I determined (and remain convinced) that the future of 3D graphics will be in programmable hardware that is specifically designed around raytracing, not polygon rasterization. I even started the early phases of negotiating with hardware designers and investors to actually produce a prototype chipset.
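(For the curious, the kernel-smoothed radiance estimate that this sort of work builds on looks roughly like the following. This is just a sketch of the textbook formulation, not my specific additions:

L_r(x, \omega) \;\approx\; \sum_{p=1}^{N} f_r(x, \omega_p, \omega)\, \Delta\Phi_p\, \frac{K\!\left(\lVert x - x_p \rVert / r\right)}{r^2}

Here r is the photon search radius around x, \Delta\Phi_p is the flux carried by photon p, f_r is the BRDF, and K is a 2D kernel normalized over the unit disc; with the constant kernel K(t) = 1/\pi for t \le 1, this collapses to the familiar \frac{1}{\pi r^2} \sum_p f_r\, \Delta\Phi_p estimate.)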

However, I was largely foiled by two critical problems: lack of money, and lack of time. These problems were, in fact, two facets of a single issue; I lacked time because I had to spend my time doing things that actually earn money. Like having a job. When I buckled from my contractor habit and started my day job, the project basically died completely. What free time I had was already consumed by games programming, so Freon has been untouched for over two years now.

This has always bugged me a little bit, and every now and then the whole deal comes back to mind and I feel compelled to finish what I started. (I had made repeated oaths, to myself and others, that Freon would finally be the first big project I actually finished. Oops.) Even more pressing is the fact that today raytracing hardware remains an unexplored frontier. Five years ago, my plan was to have consumer-ready cards on the market and in the hands of developers by now. Full-scale games targeting the hardware were due as early as 2007. Back in reality, the only really viable raytracing hardware project remains something of a non-starter.

What's ironic (to me at least) is that I've watched the SaarCOR project make a lot of philosophical mistakes that I dodged early on. At the risk of sounding like a pompous ass, I correctly predicted that certain ways of thinking about computer graphics would hamper RTRT efforts, and require ridiculous amounts of hardware power. I have not widely discussed my work on Freon for two reasons: first, because it is somewhat heretical from an academic point of view, and I don't have the energy to defend all of my decisions; and secondly, because I'm still a little smug that I've got ideas for far more efficient solutions than are currently being explored. I continue to feel compelled to work on Freon because I still wholeheartedly believe that I've got a unique - and intensely marketable - concept here.


I don't bring this up to gloat about how I'm smarter than a bunch of highly educated researchers. That would be rather stupid. I bring this up not because I've got it all figured out and they don't, but rather because I made a massive mistake of my own early on - a critical philosophical error that ended up costing me an opportunity to have a tremendously promising product on the market by now.

Originally, I was opposed to programmable hardware. I didn't disagree with the concept, I just didn't see it as being a viable entry point. Programmable logic is hard to build, and even harder to build fast. Doing a dedicated circuit first is cheaper, and educational: it helps illuminate the bottlenecks and weaknesses of a design while the design is still fairly easy to change. So, when I first started writing the Freon emulator software, the idea was to lay out the algorithms for a photon mapping engine in C code, and then translate that to hardware via tools like SystemC. It was a good concept... for a trivial and boring circuit. Realtime graphics is not trivial.

The mistake I made was in clinging to the "generic" requirement, and trying to make it fit with the "dedicated circuit" requirement. I wanted something that could rival (and, of course, surpass) the capabilities of rasterizer hardware. After all, if the raytracing card couldn't look better than Doom III, nobody would buy it. What I failed to realize was that programmable shaders have long since made dedicated circuits obsolete. Today, a single top-end video card can do some amazing things with shaders - things that were once the domain of raytracers. Things that I was counting on being able to do, that Doom III couldn't, so that I could sell my hardware.

Over the past two years it has become painfully clear that I screwed that whole decision up, royally. A dedicated circuit can't be extended and hacked on like a programmable one. A 1960's era telephone will never do more than relay sounds from one point to another; a 21st century cell phone can do all manner of things. The difference is programmability. A dedicated, inflexible raytracer card will never compete with a programmable rasterizer card, in the real world, because programmability is simply too powerful. Maybe the raytracer can do more nifty effects now, but in another year, someone will figure out how to do those same effects on a Radeon, and now your raytracer has no marketable strengths. More games will work on the Radeon, so the raytracer loses, and probably dies altogether.


After having a couple of years to ponder the whole adventure, and after a couple of years of seeing my "only serious competitor" fail to do anything really astounding with the technology, I've come to the conclusion that there's still a chance for Freon - but it will come at the cost of a massive philosophical shift. That shift is to build a platform, not a product. I really hate those terms (they smell like marketing and buzzwords to me), so let me explain what I'm getting at.




Platforms
The term "platform" is a fairly familiar description of a certain type of tools. Specifically, in the programming realm, a platform gives us the ability to develop new, interesting things, by giving us some pre-packaged tools to work with. Platforms exist on many levels, and come in many different sizes.

A computer architecture, like the IA32 architecture, is a platform: it gives us hardware to run programs on. An operating system is a platform: it gives us an easy way to talk to the hardware, and lets us build more programs. C++ is a platform: it's a specification for a set of tools (compilers, etc.) that we can use to build a whole lot of different types of programs. .Net is a platform: it gives us a massive library of prefabricated tools that we can use to make our own programs much more quickly and reliably. Spreadsheets like Excel are also platforms: they can be used to solve a huge variety of problems.

Usually, though, that's where platforms stop - at the "programming tool" level. This is mostly a philosophical issue; someone invents a platform, and wants it to appeal to Just About Everyone, so they make it into a programming tool. (The Unix realm is heavily polluted with this way of thinking.) Programming tools are great, but there's really only so many of them we can handle.


Domain-specific platforms

If we look a little closer, though, we find that there are more platforms out there, lurking under the guise of applications. Interaction tools like Exchange and Lotus Notes are platforms: they are designed to allow other types of software to do interesting things, without having to do the groundwork. However, we often misunderstand these, and just think of them as applications; this misunderstanding can lead to a lot of hate (cf. how pretty much everyone despises the client end of Notes).

Games are also becoming platforms, to an increasing degree. "Modding" tools let players change the way games look, sound, and work. Some modding tools even let people build entirely new games. Traditionally, we don't call these systems "platforms" - we use words like "engine" and "scripting" and "data-driven architecture." But the idea is the same; games designed for heavy moddability are designed, in essence, to be platforms, on which more games can be made.


Seeking a more productive balance

I think software development, as a field, could benefit immensely from looking at this concept in a new light. Platforms are usually extremely generic (Windows, C++, .Net) or extremely domain-specific (the Half Life engine). Platforms often fail because they try to do too many things, and the end result doesn't do much of anything (99% of the Linux distributions out there). Some things fail because they really should be platforms, but people see them instead as applications (Notes... and Freon).

What we need is a balance, a mid-point between these extremes. Game engines are a good example: tools like Torque and OGRE give us the ability to do many things, but still within a specific domain. Torque isn't designed to make spreadsheet applications. Excel isn't so hot at making games. Yet both are extremely useful in their respective domains.

Code reuse is one of the most hyped notions in modern software engineering. It began with structured programming and subroutines, and it's been raging on ever since. Yet how often do we really get it right? Maybe we reuse one class in three projects. That's good. Maybe we make a platform that virtually everyone can reuse, and we produce .Net. That's good too. But there are a lot more than three projects in the world, and very few people have the resources to produce another .Net. Most people probably wouldn't use another .Net; the one we've got works great.

What we need are platforms that are generic enough to be useful for a lot of things, but still recognize that they can't do everything. These mini-platforms need to recognize their own limits, and refrain from collapsing under the weight of their own bloated feature creep. Many such platforms arise out of a sincere desire to achieve good code reuse. Yet many of them could be far more effective if they were thought of as platforms from the very beginning.


I've worked on a lot of programming projects over the years. I've written games, accounting tools, business management systems, store inventory systems, collaboration systems, highly specialized applications. Every time I enter a new problem domain, I wish dearly for a platform - not quite a .Net that does everything under the sun, but some sort of blob of existing, swappable concepts that I can use to avoid repeating a lot of effort. The "Don't Repeat Yourself" principle is burned strongly into me, both as a piece of my personality, and a part of my work ethic. If I've already done it once, I shouldn't have to do it again.

This is not a new concept at all. However, it is usually pursued with wild abandon, and that leads to madness. We like to get carried away, and make our platform do everything for everyone. Inevitably, these attempts make people disgusted. Moderation is the key.


I run a sort of idle curiosity called Tiny KeyCounter. The project innocently counts every keystroke and mouse click on your computer, and lets you compare your numbers with other people. Despite being utterly pointless and stupid, it's pretty popular. At the moment, the server for the project is dead (hardware failure) and I'm strongly considering leaving it dead until I build the fabled Version 2.0 of the software.

I want to do a Version 2.0 because I want to add more statistics: things like how many miles you moved your mouse today. When I first started thinking about TKC2.0, I had some vague notion of making it a "distributed statistics platform" where the software could track any kind of numbers you want, and compare them all through a unified framework. The more I thought about it, the more I realized the idea sucks. Nobody would want to use it.
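The mouse-mileage statistic itself is trivial arithmetic, by the way. Here's a rough sketch in C++ (the 96 DPI figure is just a stand-in; real hardware and OS settings vary, so TKC would have to query or configure it per machine):

#include <cmath>
#include <cstdio>

// Per mouse-move event: Euclidean distance travelled, in pixels.
double MoveDistance(int dx, int dy)
{
    return std::hypot(static_cast<double>(dx), static_cast<double>(dy));
}

// Convert accumulated cursor travel (in pixels) into miles.
double PixelsToMiles(double pixelsMoved, double pixelsPerInch = 96.0)
{
    const double inches = pixelsMoved / pixelsPerInch;
    const double inchesPerMile = 63360.0; // 12 in/ft * 5280 ft/mi
    return inches / inchesPerMile;
}

int main()
{
    // Pretend the input hook saw two million identical little 3x4-pixel moves.
    double totalPixels = 0.0;
    for (int i = 0; i < 2000000; ++i)
        totalPixels += MoveDistance(3, 4);   // 5 pixels each

    std::printf("Mouse mileage today: %.3f miles\n", PixelsToMiles(totalPixels));
    return 0;
}

The interesting part isn't the math, obviously - it's collecting the raw movement data from the input hook and comparing the totals across users.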

I still want to make it more flexible, though. I want to make a platform for tracking silly, meaningless computer usage statistics. TKC has been successful because it doesn't take itself seriously. The project admits (sometimes very loudly) that it's all more or less a big joke. A "distributed statistics platform" would totally negate that, and make the entire concept more or less worthless. What I need is a mini-platform: extensible, so that others can potentially expand on the concept, but humble enough to know that it should not attempt to become a panacea.

The way to do this, I think, is not to just build programs that let you write plugins. TKC1.0 tried that, and nobody wrote any plugins. Writing plugins in C++, VB, C#, etc. is boring and takes work. Even writing plugins in embedded languages takes skill and learning. Instead, I think the mini-platform concept should embrace the idea of highly domain-specific languages.

The essence of the mini-platform notion is to develop a high-level, highly abstract language where the concepts of a domain can be expressed and explored freely. This language should exist not as a binding to an existing language, nor as a language that tries to let one write any conceivable type of program. The language should be specifically and solely for discussing the problem domain, and for doing interesting things inside that domain.
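To make that slightly more concrete, here's a rough sketch (in modern C++, and purely illustrative - this is not TKC code) of the difference in spirit: instead of asking someone to write a plugin, the mini-platform exposes a tiny vocabulary of domain concepts, and a new statistic is just a short declaration against that vocabulary.

#include <cstddef>
#include <cstdio>
#include <functional>
#include <string>
#include <utility>
#include <vector>

// The entire "domain vocabulary": the events the platform knows about.
enum class InputEvent { KeyPress, MouseClick };

// A statistic is just a name plus a rule for folding events into a number.
struct Statistic
{
    std::string name;
    std::function<void(InputEvent, double&)> accumulate;
};

// The mini-platform: it owns the event stream and the registered statistics,
// and deliberately nothing else.
class StatPlatform
{
public:
    void Register(Statistic stat)
    {
        stats_.push_back(std::move(stat));
        values_.push_back(0.0);
    }

    void Feed(InputEvent e)
    {
        for (std::size_t i = 0; i < stats_.size(); ++i)
            stats_[i].accumulate(e, values_[i]);
    }

    void Report() const
    {
        for (std::size_t i = 0; i < stats_.size(); ++i)
            std::printf("%s: %.0f\n", stats_[i].name.c_str(), values_[i]);
    }

private:
    std::vector<Statistic> stats_;
    std::vector<double>    values_;
};

int main()
{
    StatPlatform tkc;

    // Declaring a new silly statistic is a few lines, not a plugin project.
    tkc.Register({ "Keystrokes",
                   [](InputEvent e, double& v) { if (e == InputEvent::KeyPress) v += 1; } });
    tkc.Register({ "Mouse clicks",
                   [](InputEvent e, double& v) { if (e == InputEvent::MouseClick) v += 1; } });

    // Pretend the OS hook fed us a little activity.
    for (int i = 0; i < 42; ++i) tkc.Feed(InputEvent::KeyPress);
    for (int i = 0; i < 7;  ++i) tkc.Feed(InputEvent::MouseClick);

    tkc.Report();
    return 0;
}

A real domain-specific language would go a step further and ditch the host language entirely, but the shape is the same: the platform defines the nouns and verbs of the domain, and contributors only ever speak in those terms.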


I want to do this same thing with Freon. I did a lot of experimentation with new and wacky rendering techniques and algorithms. Usually, changing from one algorithm to another meant massive code rewrites. Since I was trying to build a dedicated application, I lacked dexterity and flexibility in my design philosophy. I have a backlog of dozens of cool algorithms I want to try, but they would take literally months to write from the ground up as new rendering engines.

Towards the end of the project, I started to realize just how desperately I wanted a mini-platform. I wanted a way to easily explore new methods of rendering. I wanted to be able to describe rendering algorithms at a high level, with lots of abstraction, but without having to build them in an existing language like C. Now, almost two years later, I've figured out what I really needed: a programming language that was designed specifically and solely for ray-oriented graphics technology.
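In software terms, the shape I'm picturing looks something like the sketch below (deliberately tiny, modern C++, and not actual Freon code): the platform fixes the vocabulary - rays, hits, a scene you can trace against - and each rendering method is just another strategy plugged into that vocabulary.

#include <cstdio>
#include <memory>

// The shared vocabulary of the platform: rays and surface interactions.
struct Vec3 { float x, y, z; };
struct Ray  { Vec3 origin, direction; };
struct Hit  { bool found; Vec3 position, normal; };

// A stand-in scene interface; a real one would hold geometry and materials.
struct Scene
{
    Hit Trace(const Ray&) const { return { false, {}, {} }; } // placeholder
};

// The platform speaks "rays"; it does not care which rendering method answers.
class Integrator
{
public:
    virtual ~Integrator() = default;
    virtual Vec3 Radiance(const Scene& scene, const Ray& ray) const = 0;
};

// Different rendering methods become interchangeable strategies.
class WhittedIntegrator : public Integrator
{
public:
    Vec3 Radiance(const Scene& scene, const Ray& ray) const override
    {
        Hit hit = scene.Trace(ray);
        return hit.found ? Vec3{ 1, 1, 1 } : Vec3{ 0, 0, 0 }; // stub shading
    }
};

class PathTracingIntegrator : public Integrator
{
public:
    Vec3 Radiance(const Scene& scene, const Ray& ray) const override
    {
        // A real path tracer would loop over bounces and sample the BRDF;
        // the point here is only that it fits the same ray-shaped interface.
        Hit hit = scene.Trace(ray);
        return hit.found ? Vec3{ 0.5f, 0.5f, 0.5f } : Vec3{ 0, 0, 0 };
    }
};

int main()
{
    Scene scene;
    std::unique_ptr<Integrator> renderer = std::make_unique<PathTracingIntegrator>();

    Ray primary{ { 0, 0, 0 }, { 0, 0, 1 } };
    Vec3 c = renderer->Radiance(scene, primary);
    std::printf("radiance: %f %f %f\n", c.x, c.y, c.z);
    return 0;
}

Swapping path tracing for Whitted raytracing or a photon-map gather becomes a one-line change, instead of a months-long rewrite.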

More interestingly, I think that's the future of the hardware, too. A dedicated circuit is a dinosaur that will not compete against programmable technologies. Even a programmable photon mapping engine is limited, though. What I think we really need is a hardware mini-platform: it speaks the language of rays, but does not restrict itself to one particular rendering method. Path tracing, Whitted raytracing, photon mapping, and some of my own techniques should all be possible on the hardware - and not merely possible, but naturally easy to build.



My opinion is that this same pattern could have very good effects in other areas as well. I think we're close already; a lot of software sort of accomplishes an existence as a mini-platform, without ever directly realizing it. I'd like to see some smart people apply some time and thought to this notion, and see where we can go by capitalizing on it.

What kinds of things do you see being useful as mini-platforms? How could existing projects be given more clarity, more direction, and more appeal by doing less? We've had the dream of highly extensible "plugin-centric" software for years, and code reuse is right up there with it. How might a mini-platform concept get us closer to those ideals?

If we set out to build a domain-specific language for every interesting domain, how much more efficient could we be? Most importantly, how many doors could we open for those who know the domain well, but not the existing platforms like C++?


Time Management and Multiple Personalities

Posted 24 February 2006 · 247 views

Originally, I set out to do a sort of "Day in My Life" entry, that broke down what I do at my job on a per-hour basis. I started 2 days ago around 6:30 AM, which is when I woke up. [My sleep schedule has been totally random lately, which is partly really cool, and partly really disorienting.] I then logged what I was doing for the next massive chunk of time (close to 28 hours, not including an incidental nap).

Unfortunately, it didn't work out so well as a journal entry. Often I'd just totally forget to log what happened for a few hours, leaving weird gaps and mysterious inconsistencies in my accounting of time. Once I just fell asleep for 6 hours. Worse, most of the notes I kept were really trifling little blobs; they aren't interesting to anyone who isn't working on the project. After a while, I started just interjecting stupid thoughts and observations, and it became a mass of jumbled idiocy that was neither fun nor informative to read.

So I chucked it out. There really wasn't much use to it, and it wasn't worth the time.


That's not what I find interesting, though. One curious side-effect was how it affected my work patterns. I started to see little web-surfing breaks as massive chunks of time, rather than just a vague "oh, look, it's getting dark out and I still haven't done any work." Quantifying the amount of time I spent working and goofing off really made me feel like an idiot for slacking around too much, so I ended up doing a lot more work instead.
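The mechanical side of this is almost embarrassingly simple - I just jotted down timestamps and notes as I went - but if you wanted to automate it, a throwaway sketch might look like this (not anything I actually ran):

#include <chrono>
#include <cstddef>
#include <cstdio>
#include <map>
#include <string>
#include <vector>

// A throwaway activity logger: record when you switch tasks, then total
// the time spent in each category at the end of the day.
struct LogEntry
{
    std::chrono::steady_clock::time_point when;
    std::string activity;
};

class TimeLog
{
public:
    void Switch(const std::string& activity)
    {
        entries_.push_back({ std::chrono::steady_clock::now(), activity });
    }

    void Report() const
    {
        std::map<std::string, double> totals; // seconds per activity
        for (std::size_t i = 0; i + 1 < entries_.size(); ++i)
        {
            auto span = entries_[i + 1].when - entries_[i].when;
            totals[entries_[i].activity] +=
                std::chrono::duration<double>(span).count();
        }
        for (const auto& t : totals)
            std::printf("%-12s %.1f seconds\n", t.first.c_str(), t.second);
    }

private:
    std::vector<LogEntry> entries_;
};

int main()
{
    TimeLog log;
    log.Switch("refactoring");
    log.Switch("web surfing");   // in real use these calls are hours apart
    log.Switch("refactoring");
    log.Switch("done");
    log.Report();
    return 0;
}

Seeing the goof-off total printed out in cold numbers is remarkably effective.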

That much wasn't too surprising - time logging is a well-known technique for time management. What was really different was the way my thought process was affected. In taking notes on what I was doing, I had to justify aloud some of my design decisions. Now these decisions are all made with discussion and agreement from the rest of the team, so they're by no means arbitrary and off-the-cuff things. Yet, when I started trying to explain them in a way that would make sense to an outsider, I started seeing all kinds of new opportunities to make even better decisions.

Discussing and verbalizing design decisions is not a surprising new technique, either; I've been doing it for years. It's part of why team programming (and micro-scale practices like pair programming) is such an effective tool when used correctly. It's a sort of extra mind to make sure that everything is sane and sensible.

So often, though, that extra mind is an insider, and if one insider is prone to make some oversight or bad decision, another insider is just as vulnerable. We all get so involved in the system that we can't see beyond it, and at some points, even peer-review won't catch the really deep, subtle, and subconscious assumptions that give designs the most trouble.

When I get into a system, learn it, and start to develop a zone-sense for it, I get a sort of holistic, top-down feel for how it all works. I have an intuitive feel for what will happen if I poke the system in certain ways, and that by and large is how I do design. For the most part, this works great; but the danger is that one can get too accustomed to that sense. One might know how the system works now, but when doing refactoring and redesign, that is not enough information. You have to know what else could be. A redesign requires you to step outside the system, look at all of the truly available options, and consider them objectively. Getting stuck in a rut of "how it already works" can be deadly.


So, the next time you go to adjust a design, develop multiple personalities. Create an imaginary friend that has never touched your system before, knows nothing about the design, and maybe even isn't a programmer/designer. Then justify everything you do to this imaginary friend. Avoid the temptation to make excuses and empty rationalizations; really sell this guy on what you're doing, and why it's the best possible choice.


Death Marches, Vision, and Mutant Space Potatoes

Posted 19 February 2006 · 287 views

Writing is a funny thing. Every now and then, I get a weird sense that I have something to say, but I can't quite place what it is. Typically, I'll cast about for a few hours, trying to figure out what exactly it is that's making my brain itch; typically, I fail at this. At some point I figure out that it's just better to start writing about absolutely nothing, and see what comes out. After a few false starts, the original stuff manages to chop its way out of the jungles of my mind, and then I can spew it all into incoherent paragraphs for posterity.

I do not think that right now is one of those times. I think right now is primarily procrastination. So, I invite you to come along with me, and see where my brain ends up taking us. I have a feeling we'll both be surprised; I can't promise any satisfaction or enjoyment, though. There might be mutant space potatoes with ray guns.



Anyone who has ever worked on a software project, even a tiny one, knows about the Death March phase. It's that ugly, dreary chunk of time, usually at the end of the project. It's when everyone is sick and tired of the project, would rather do something else - anything else - and just wants to ship the damn thing and move on. Tempers are short, hours are long, and even the most rabid coffee connoisseurs quit whining about the burnt Folger's in the office coffee pot and drink it just because it has traces of caffeine in it. The Death March is the "make it or break it" period for a project; either you pull through and put out a win, or you sort of phase out and dribble out something that vaguely resembles that product you were dreaming about so eagerly not too long ago. In the worst case, the project may just get canned entirely.

Death Marches occur, in various degrees, in all projects in the software realm. I think they also occur fairly often in other realms, too, but I don't know much about those realms, so I can't say for sure. (This is not to imply that I know anything about the software realm, mind you - I'm just better at pretending to know things about the software realm.) That nifty little applet you're writing for Grandma is going to hit a Death March. Massive corporate systems have Death Marches. Games have Death Marches; in fact, games usually have some of the most publicized and well-documented Death Marches out there.

The reasons for Death Marches are diverse. People much smarter than me have spent much more time than I have thinking about and writing about those reasons. (I've only spent a couple of minutes thinking about it.) My theory is that the biggest cause is feature lock: eventually, you reach a point where you have to stop creating new stuff, envisioning new possibilities, and just finish the thing that you're already committed to. I can't pretend to really know why, but that transition point seems to mark a decided drop-off in enthusiasm and drive. Good project leadership is prepared for this and knows how to handle it. Bad project leadership is likely to just let the whole thing fall into the crapper and drown. (Pleasant metaphor, isn't it?)


Anyways, I'm not really smart enough (or edumacated enough in things like psychology and management) to know how to cure the Death March phenomenon. Hell, I don't think it can be "cured" at all in the sense of just making it go away; I think Death Marches arise from very real shifts in the emphasis of projects over time. I do know that good leadership makes all the difference in whether or not people actually die during Death Marches. (While I have not, in fact, seen anyone really die under bad leadership in a Death March, I know of many cases where people wished they were dead.)

What I'm interested in at the moment isn't so much the Death March phenomenon. I'm not involved in any Death March projects at the moment, so I don't have any fresh thoughts on the whole situation. I also have a bad tendency to totally forget things when I quit dealing with them actively, so I have no particularly pertinent recollections from past Death Marches to share.

The thing that's been brewing in my mind, making my brain itch so to speak, isn't Death Marches - it's a similar phenomenon, but much less documented, at the other end of the time scale. Death Marches begin when it's time to just buckle down and get the stupid thing done. But something else happens, something similar and yet more subtle, and much more insidious, at the beginning of a project. That Thing is what, I think, has been sparking a lot of my own procrastination lately.

I have no good name for it, so I'll make one up (you may wish to avert your eyes): Pre-Vision Boredom. During the most passionate, invigorated, and adrenaline-charged segments of game development, there's something that really keeps the team going; that thing is responsible for getting everyone to work early, keeping them there late, and keeping them raving about it in thoroughly excited tones. That thing is what makes it so awesome to be working on the project. That thing is why we love our jobs (even if the job isn't game development, but some other thing).

That thing is Vision.

Vision is the core of motivation. Communicating vision is the core of leadership. Project leadership is all about rallying the team behind a vision, getting everyone excited about being a part of something. Vision is the ultimate fuel for our deep human need to be a part of something larger than ourselves. Vision is triumphantly getting some chunky images on the screen, and knowing that when the game is all done, it's gonna be so freakin' cool. Vision is looking up the long, hard road of development towards the finished product, and being so caught up in the joy of that product that we stop seeing the road itself, and just walk.

Death Marches occur when vision starts to fade. The feature list is shrinking, and most of the cool stuff is done. The bug list is starting to grow - probably very rapidly, as alpha testing and usability testing starts in full force. Suddenly, the finished product isn't so easy to see anymore. The road starts to fill our sight, and without good leadership to keep the vision alive, the team is liable to stare so hard at the road that they fall face-first into it and crash.

This much is pretty well known. I'm sure some very eloquent and insightful people have said it better, in many places that are not my journal. But I think they've missed an equally dangerous, but much less visible, problem with vision.


The problem is what the team does before they catch the vision. At the beginning of a project, especially an expansion or sequel type project, this is a very real issue. It's something I personally have been hit hard with, especially without the infectious physical presence of enthusiastic coworkers. Vision is so much easier to convey in person; it just seems to lose something when all you've got to refer to is the project Wiki and some IM conference logs.

One of the best bits of advice I've ever heard for self-motivation is to just start - don't worry about getting it done, just get started. Once you start, catching sight of the finished product - the vision - is that much easier. Once you've got the vision, wanting to finish isn't a problem anymore. But if you just can't see that vision, you may never really truly get started, and that's where problems arise.


I've almost caught it. There's some possibilities for this next project at Egosoft - largely of a technical nature - which are really exciting me. When I get into the code, and start doing real refactoring and documentation, real work, I can start to see the edges of the vision coming into focus. There's just so much potential, so much opportunity - but not quite yet enough to hit that critical mass and propel things forward into solid, dedicated work. There is potential for much good, yes, but there is also potential for a lot of failure.

Grabbing on to the Vision is everything. I think it is largely the responsibility of leadership to maintain, keep, and spread Vision. However, it is important to remember that everyone has to have a slice of the Vision to work optimally - and the best way to sell someone on a Vision is to let them play a part in it personally. Egosoft is absolutely exceptional for letting individuals play a part in the Vision, but we still seem to lack a little something in the realm of developing a coherent and compelling Vision up front.

I think that, for any team, the sooner Vision is ready for everyone to start being a part of, the better things are liable to go. Losing the Vision at the moment of truth is a danger, but it can be just as deadly to neglect the Vision when the first critical (and irreversible) decisions of the project are being made.


VS2005 Woes

Posted 16 February 2006 · 225 views

I'm having a very mixed experience with VS2005.

The look is nice, aside from the toolbars, which just look dull. (Why does Microsoft have this compulsive need to invent a new way to render toolbars every year?) I really like the new Start Page over the VS2003 launch mode. The installation was fast (compared to VS2003) and easy, without any weird Prerequisite steps. One thing I never understood about VS2003 was why the Prereqs CD had to be run before the disc actually labeled "Disc 1 - Install." Seems like a silly oversight, and at least that kind of goofiness is gone in 2005.

Other than that, though, I hate it so far. The install failed to properly register solutions to open in VS2005, so all my code tries to open in VS2003, which of course can't read the new solution files. The new "Repair File Associations" feature is nice, but didn't work - .sln files remain stoically associated with VS2003. I had to do some manual registry tweaking to get that to go away.

So once I could actually open solutions, I went to do some compile testing to see how things look. I had to do some twiddling with Include and Library paths, which is somewhat to be expected with all the SDKs I use, although I must say I'm disappointed that nobody at Microsoft has yet realized that they can import my old settings. Every time I upgrade VS, I dream that I won't have to redo all of my personalization. Every time, I'm disappointed.

I had to go back through all the menus, toolbars, and little tear-off panes and arrange them the way I like. I can understand some things not being importable from prior versions, but it's just a nuisance to have to redo all of it. That's all one-off migratory pain, though, which I'm willing to let slide, disappointing though it may be.


The real kicker is that IntelliSense now sucks. Approximately every 2 minutes, VS2005 will "Update IntelliSense." This process takes about 35 seconds to a minute on a project the size of the one I'm working on, and totally pegs the CPU for the duration. It doesn't background it, doesn't thread it to keep it from deadlocking VS - the entire IDE is dead and the rest of my system is sluggish until the update finishes. There is no button or key that will cancel it as near as I can tell.

As if that wasn't bad enough, it also does this spontaneously during builds. What the HELL, Microsoft? How can you not think to disable CPU-intensive tasks during a build? I can't even change build settings for other build configurations while a compile is going on, but somehow it slipped your minds that "updating IntelliSense" during a build is really stupid?

But wait! It gets even better! About 60% of the time, this "update" process totally kills VS. That's right - dead. Locked. Frozen. Kaput. Open task manager again, end process. This has apparently already been investigated and fixed according to the public VS bug tracker. The "workaround" is to remove a DLL so that the entire IntelliSense subsystem fails to run. Err... sorry, no, that's not going to cut it. I may as well use Notepad at that point. IntelliSense controls statement completion, class hierarchy generation, and apparently a bunch of other stuff, because disabling it basically turns VS into a glorified brick.


I want to like VS2005. I really do. Hell, I want to love it. The function caller/callee diagramming stuff is just awesome, and the OpenMP support is very important for our future plans.

Sadly, though, until we get SP1, I think I'm going to be forced to hate it.


Cleaning Stuff Up

Posted 15 February 2006 · 188 views

Well, I found a reasonable service that would haul off my car and get me a tax-deductible charity donation receipt for it. So I don't get any outright cash, but I will get a writeoff worth, at the absolute least, $150, and likely substantially more. I get to find out the actual value once the car is auctioned off; apparently the tax law goes that I can deduct market value for the car, or some such. That'll be a relief, especially since I'm technically self-employed now, and tax writeoffs are welcome in any form.

I've also been doing a lot of cleaning and reorganizing around my flat. It's very refreshing to actually have the energy and motivation to do that kind of stuff, since I very rarely did during the Evil Day Job.




Cleaning up seems to be Theme of the Week in my life. In preparation for Projects Future, we've been doing a massive code analysis and refactoring process. The first phase of this is a dead code hunt; during the hectic crunch mode on the X3 project, a lot of code got shuffled around, orphaned, etc. So I'll be taking care of cutting out excess junk, as well as updating any code that relies on deprecated methods.

After that is a hefty physical architecture change, moving files into new directories, splitting modules into more finely-grained and aptly-named files, and so on. Once that is done we'll be doing actual refactoring to group together some bits of logic that are currently spread out across several modules, as well as eliminating some redundant routines and building a few centralized libraries to replace a lot of special-case hard code. A few previously hard-coded things will be converted to a more data-driven design as well, which will be immensely helpful in the balancing phases of Projects Future.
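As a trivial illustration of what "data-driven" buys us (a generic sketch, nothing to do with the actual codebase - the ship-class idea and file name are made up for the example): instead of a tuning value buried in a switch statement somewhere, the number lives in a plain data file that the balancing folks can edit without touching the compiler.

#include <cstdio>
#include <fstream>
#include <map>
#include <sstream>
#include <string>

// Hard-coded version (the kind of thing being cleaned out):
//     if (shipClass == Class_Fighter) hullStrength = 1200;
//
// Data-driven version: read "key value" pairs from a plain text file, e.g.
//     fighter 1200
//     corvette 4500
std::map<std::string, int> LoadTuning(const std::string& path)
{
    std::map<std::string, int> table;
    std::ifstream file(path);
    std::string line;
    while (std::getline(file, line))
    {
        std::istringstream fields(line);
        std::string key;
        int value = 0;
        if (fields >> key >> value)
            table[key] = value;
    }
    return table;
}

int main()
{
    // The filename is invented for the example.
    auto hull = LoadTuning("hull_strength.txt");
    auto it = hull.find("fighter");
    if (it != hull.end())
        std::printf("fighter hull strength: %d\n", it->second);
    return 0;
}

Multiply that by a few hundred tuning values and the balancing phase gets a lot less painful.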


One of the most promising things that has happened is the adoption of some good project management and documentation tools. We've started using a pair of packages from Atlassian: JIRA for task tracking, and Confluence (a wiki package) for doing broader documentation and specifications. Both are fairly nice; they won't dethrone massive teamware that already exists, but they cost a heck of a lot less, which makes them more than worth it. So far both tools have been perfectly adequate for what we need, and customizable enough that we can adapt them to our own methods. JIRA has already paid for itself just by virtue of the ability to see, at a glance, what work is being done and by whom. It'll probably be a while before we have any considerable amount of content in the Confluence wiki, but it's shaping up to be worth its (digital) weight in gold, too.

Atlassian's products seem to have one niggling little oversight: they require a lot of clicking to get around. On a high-speed server, though, that's not that big of a deal, and the ability to batch-manipulate tasks mitigates that problem quite nicely. There's a few things that could be done better, but the price is most definitely right. Considering what these tools replace (a nasty hacked amalgam of forums, email newsgroups, and tiny custom ASP scripts) it's a huge step upwards.


Day... ah, screw it, I have no idea what day it is

Posted 12 February 2006 · 249 views

Well, my car is dead. There's a rod broken loose in the engine that's pretty much destroyed things, and the timing computer has tried to "compensate" by downtiming the engine so much that it barely runs. It currently sits, dejected and stripped of all my belongings, at a local garage.

They want to make me pay $25 to have it "impounded" and removed, which is total B.S., because there's a good $500 worth of salvageable parts and goodies left on that car, all of which are in perfect condition. Heck, there's four perfectly good door panels on the thing which I know from bad experience are worth a pretty penny. So I'm looking for a way to get some cash back out of it without having to actually do the work of dismantling the usable parts and selling them myself (which I have neither the time nor tools to do).

Incidentally, if anyone knows of any services that will pay for such cars, please let me know - especially any in the Atlanta metro area.




Random thought for the day: why hasn't anybody written a book on software engineering and production methods for games? Anyone with any real world experience knows that different types of software have very different constraints and requirements, and yet it seems like all the good engineering-practices books out there are aimed at business logic or shrinkwrap products. Seems like a prime opportunity for someone to come along and fill that void.


Day... something, plus... something else

Posted 08 February 2006 · 259 views

I've been up since 4PM yesterday, and it's great. It feels kind of weird to be winding down at 6:40 AM, but at least I'm no stranger to bizarre schedules.

I finally figured out how to start a GDNet Gathering group, so now all us other Atlanta area people have a group. Hurray. I really don't expect much of anything to come from this, especially since nobody else has set up a group yet. I mostly did it so I can stop worrying about the GDNet Gathering interface being smarter than me (that Create Group icon is really buried).


Nothing much interesting on the work front; doing some refactoring and code cleanup stuff, and a little bit of research on project coordination systems. Huzzah.

I'm starting to feel a little fuzzy in the brain, so as soon as my Mountain Dew wears off, I'm going to go curl up with my shiny new copy of The Mythical Man Month and absorb some good ol' wisdom.





