I've figured out that, no matter what, I have to pay $1,300 for a year of insurance here so I'll have my car when I come home. Then, if I take it out to Illinois, my premium will go up about $200, plus gas money and wear & tear.
Pros: Don't have to drive places with my roommates, don't have to deal with shipping my belongings to and from school, and I get more value for my money than if I just leave my car here.
Cons: My car (2000 Subaru Impreza) already has 107,000 miles on it, and it's a 1,000-mile drive to school from here. My parents agreed to pay for a hotel halfway in between here and there and gas money - but I'm still working on the $180/semester it costs to park. I'd also have to stop working a day early - which doesn't seem like too much, but it's like an additional 100 bucks or so I don't get.
I think I'm leaning towards taking it - but I need to get my insurance agent to tell me the exact premium, as they've been slightly shady in the past.
Work is going well. Right now I'm concentrating on cleaning up a large body of legacy C code - and although it's slightly aggravating, I think it's good experience.
I think every programmer, as they become more experienced, will go back and look at something they wrote a while ago and be clueless as to what they were doing. Those are the breaks of not writing the most readable, commented, or efficient code while you're learning.
This is kind of a similar thing - only it's someone else's mess. I think looking at and fixing all this stuff is a pretty good way to improve your own coding and debugging skills. Definitely the latter - there's no debugger because this stuff is so close to the metal, so you really have to use your intuition to decide what to attack first, see the results, and try something else.
Now, I'm pretty sure the original authors were pretty competent - but the combination of a change of domain and the passage of time means it needs serious reworking.
Basically, the code achieves the minimum standard possible to be commercially viable - it usually does what you expect, and it's not pretty, but who did they expect was going to look at it? The reality of the way we're using it is that there is no room for bugs - it needs to either work as specified or shut down.
The second part - and the reason it's sometimes a struggle to interpret - I think comes from when it was written. Here are a couple of the things I'm seeing, and a reasonable guess as to the original rationale:
- No header files -
Instead, any external functions or variables are declared at the top of the function with the "extern" keyword. I don't think C was heavily standardized in the early '90s. Also, systems back then were slower, and a full compile of a project with tens of thousands of lines of source could take a long time, so they would have tried to compartmentalize everything so that changing one file just meant recompiling it and relinking - instead of changing a header file that's included across dozens of files.
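To make the pattern concrete, here's a minimal sketch of what I mean - the names (motor_speed, set_motor_speed) are made up for illustration, and I've put both "files" in one listing so it compiles on its own:

```c
/* --- motor.c (hypothetical) --- */
int motor_speed = 50;

void set_motor_speed(int s)
{
    motor_speed = s;
}

/* --- control.c (hypothetical) --- */
/* No motor.h: each user re-declares the symbols it needs by hand.
 * If the real signature ever changes, nothing forces these copies
 * to stay in sync. */
extern int  motor_speed;
extern void set_motor_speed(int);

void slow_down(void)
{
    if (motor_speed > 10)
        set_motor_speed(motor_speed - 10);
}
```

The upside is exactly what I guessed above: touching control.c means recompiling one file and relinking, and no shared header ever has to change. The downside is that every duplicated extern declaration is a chance for the declaration and the definition to silently drift apart.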
- Playing fast and loose with typing -
Any data structure that's reused seems to have one or more void* fields that they stick a different type into with every use. In some places I think this is a cheap hack to get around a bad software design; other times it's sort of a poor man's template.
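A rough sketch of what that looks like - the node/point names are invented for the example, not from the actual codebase:

```c
#include <stddef.h>

/* One reused structure whose untyped `data` field holds a different
 * type at every call site - the "poor man's template". */
struct node {
    struct node *next;
    void        *data;  /* sometimes an int*, sometimes a struct point*, ... */
};

struct point { int x, y; };

/* At one call site, `data` is treated as an int*... */
int node_as_int(const struct node *n)
{
    return *(const int *)n->data;
}

/* ...while at another, the very same field is a struct point*.
 * Nothing in the type system stops you from calling the wrong one. */
int node_point_x(const struct node *n)
{
    return ((const struct point *)n->data)->x;
}
```

The compiler happily accepts either accessor on either node, so the reader has to know (or guess) what's actually behind each void* - which is exactly why this stuff is slow to interpret.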
- No defensive programming and convoluted code for the sake of brevity -
Today we've got loads of space to keep the program in - but obviously that hasn't always been the case. There's a lot of code where they leave out defensive programming constructs (like null checks), reuse variables to save space, or write less-than-readable code to increase speed or reduce the application's footprint.
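Here's a before/after sketch of the null-checking point - a made-up string-copying helper, not code from the actual project:

```c
#include <stdlib.h>
#include <string.h>

/* Legacy style: smallest possible code, but it crashes outright if
 * s is NULL or malloc fails. */
char *copy_name_legacy(const char *s)
{
    char *p = malloc(strlen(s) + 1);
    strcpy(p, s);
    return p;
}

/* Defensive style: a few extra bytes of code buy a predictable
 * failure mode - every bad path returns NULL instead of crashing. */
char *copy_name_checked(const char *s)
{
    char *p;

    if (s == NULL)
        return NULL;
    p = malloc(strlen(s) + 1);
    if (p == NULL)
        return NULL;
    strcpy(p, s);
    return p;
}
```

Given the "work as specified or shut down" requirement I mentioned above, the checked version is the direction the rework has to go, even though the legacy version is smaller and was probably the right trade-off on the original hardware.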
Overall, I think I'm going to come out of this a better developer, and if I ever have to look at legacy code again I'll definitely have a lot of tricks up my sleeve.
Anyway, I need some dinner!