Return to Fast-Land
Epoch language design
A few days ago, the Epoch compiler could self-host in about 60 seconds.
My last run of the self-hosting process clocked in at 6.59 seconds - nearly ten times faster than when I started out. That's not bad for a couple of afternoons' worth of work.
As I suspected, there was a lot of lazy nonsense in the compiler that led to the slowness. The only data structure I implemented was a singly linked list, so lots of things were O(n) lookups or O(n^2) or O(n!) depending on how stupid the surrounding algorithm was.
I've cleaned up the most egregious time-wasters, which accounted for a major portion of the speedup. In addition to just making things less dumb, I've been adding hints and other shortcuts to minimize the number of list traversals being done across the board. There's a lot of code that does bad things, like iterating across the list of all functions to find one function's return type, immediately forgetting where that function lived in the list, and then traversing the whole list from the beginning to look up something else about it.
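The fix for that pattern is simply to hold onto the node once it's found. A minimal sketch of the idea in Python (the names and structures here are illustrative, not the Epoch compiler's actual code):

```python
class Node:
    def __init__(self, name, info):
        self.name = name
        self.info = info          # e.g. {"return_type": ..., "params": ...}
        self.next = None

def find(head, name):
    """O(n) scan from the head; returns the matching node or None."""
    node = head
    while node is not None:
        if node.name == name:
            return node
        node = node.next
    return None

# Wasteful pattern: two full traversals to learn two facts about "foo".
#   rt     = find(head, "foo").info["return_type"]
#   params = find(head, "foo").info["params"]
#
# Cheaper: traverse once, then reuse the node as a hint.
#   node   = find(head, "foo")
#   rt     = node.info["return_type"]
#   params = node.info["params"]
```

Returning the node instead of just the answer turns every follow-up question about the same function into an O(1) field access.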
A little more effort shaves off some time and sets a new best of 6.52 seconds. When 0.07 seconds is enough to get me excited, it's probably time to start looking for other ways to optimize. I have a long way to go to reach my target of sub-second compiles.
One trick I've pulled out a couple of times is rigging up the JIT system to do a kind of crude instrumentation profiling; with a simple #define I can turn on a mode where all JIT code tracks how long it runs for. This greatly inflates execution times because the overhead is nontrivial, but the data is still mostly useful - and probably will continue to be, up until the point where I'm pushing under a second.
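The shape of that instrumentation mode can be sketched with a flag-gated wrapper (a rough Python analogue of the idea - the real compiler toggles it with a #define, and everything below is illustrative):

```python
import time
from collections import defaultdict

PROFILE = True                    # stands in for the #define toggle
timings = defaultdict(float)      # function name -> accumulated seconds

def instrumented(fn):
    """When profiling is on, record how long each call to fn runs."""
    if not PROFILE:
        return fn                 # no wrapper, so zero overhead when off
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            timings[fn.__name__] += time.perf_counter() - start
    return wrapper

@instrumented
def typecheck(n):
    return sum(i * i for i in range(n))   # stand-in for real work
```

The per-call clock reads are exactly the nontrivial overhead mentioned above: absolute times come out inflated, but the relative breakdown between functions stays mostly trustworthy.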
There's a serious danger to trying to make a compiler faster, and that is the dreaded miscompile. Whenever a compiler emits something that is just plain wrong, it's a scary thing; and when relying on a compiler to compile itself, it's easy to accidentally wind up with a chain of bad compilers that have slowly worsening bugs embedded in them.
I actually wasted a couple of hours earlier chasing miscompiles that I introduced during optimization attempts. So there's a fair bit of lost time that could have been spent making things faster, but oh well.
6.24 seconds. It's now 2:45AM and I'm starting to get fuzzy. The type checker is now down to less than 1.7 seconds of runtime, which is well under my goal of 2 seconds for the day.
The clock strikes 3:20AM and I've gotten to 6.18 seconds. A few more tweaks and I hit 6.14.
Twenty more minutes of hackery only nets me 6.13 seconds. One one-hundredth of a second is nice, but not nearly enough. It's time to break out the profiler and study the results closely.
Optimization is often an exercise in judicious caching; the compiler is no exception. Carefully storing off commonly-recomputed data gets me all the way down to 5.9 seconds. A little more and I'm clocking 5.68 seconds.
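The kind of caching involved can be illustrated like this (a hypothetical sketch, not the compiler's actual code): put a dictionary in front of an expensive recomputation so that each distinct input is only ever computed once.

```python
calls = {"count": 0}              # counter just to show the cache working

def expensive_resolve(name):
    """Stand-in for commonly-recomputed data, e.g. a type query."""
    calls["count"] += 1
    return name.upper()

_cache = {}

def resolve(name):
    """Memoized front end: compute each distinct name at most once."""
    if name not in _cache:
        _cache[name] = expensive_resolve(name)
    return _cache[name]
```

The judgment call is in the "judicious" part: caching everything bloats memory and can mask bugs, so it pays to cache only the lookups the profiler says are hot.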
Another important optimization trick is knowing how to balance long chains of conditionals. If the most common case is buried in an "else" all the way at the end, the code has to run through all the less common cases to get to it. Rearranging a few conditionals so that the most common options are first gets me down to 5.59 seconds.
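In a hypothetical token-classifying sketch, the idea looks like this: if identifiers dominate the input, test for them first, so the common case pays for one comparison instead of falling through several.

```python
def classify(tok):
    # In typical source text identifiers vastly outnumber everything
    # else, so check for them first; rarer cases sit further down the
    # chain, where reaching them costs extra comparisons.
    if tok.isidentifier():
        return "identifier"
    elif tok.isdigit():
        return "literal"
    else:
        return "punctuation"
```

The ordering should come from measured frequencies, not guesses - which is another place the instrumentation data earns its keep.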
The lexer is particularly painful at the moment, so I take some time to clean up some of its performance sins. 5.27 seconds. Cleaning up similar sins in the parser gets things down to 5.12 seconds.
It takes until 5:30AM to reach 5.02 seconds. Progress is painfully slow, and I'm starting to get a little discouraged. I had hoped that there was enough low-hanging fruit to get below 5 seconds, but it's starting to look like I'm going to have to dig real deep to get to my desired target time.
One thing that consistently shows up on the profiler as being nasty is the string table system. The Epoch runtime uses a pool of statically known strings for various purposes; during compilation, we need to do many lookups into this pool. Since my only data structure is a linked list... things are painful.
It turns out that converting the search function from recursive walking of the linked list to iterative traversal gains a decent chunk of speed; 4.89 seconds just as the 6:00AM hour rolls around.
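The difference looks roughly like this (a Python sketch; the Epoch code is obviously not Python): the recursive version pays for a call frame per list element, while the iterative version just advances a pointer.

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def find_recursive(node, value):
    """One stack frame per element; very deep lists can even overflow."""
    if node is None:
        return None
    if node.value == value:
        return node
    return find_recursive(node.next, value)

def find_iterative(node, value):
    """Same scan as a plain loop: no call overhead, no stack growth."""
    while node is not None:
        if node.value == value:
            return node
        node = node.next
    return None
```

Both are still O(n) per lookup, which is why this was only a stopgap before replacing the data structure outright.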
I spend the next half hour or so implementing a basic prefix trie data structure. This will be used for lookups into the string table. It's a solid win over even the iterative search function - landing at 3.52 seconds. Throw in another half hour worth of minor fiddling, and we're at 3.46 seconds.
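A prefix trie along these lines can be sketched with nested dictionaries (illustrative Python, not the Epoch implementation): a lookup costs O(length of the key) no matter how many strings are in the table, instead of O(number of strings).

```python
class Trie:
    def __init__(self):
        self.root = {}

    def insert(self, key, value):
        node = self.root
        for ch in key:
            node = node.setdefault(ch, {})  # descend, creating as needed
        node["$"] = value                   # terminal marker holds payload

    def lookup(self, key):
        node = self.root
        for ch in key:
            node = node.get(ch)
            if node is None:
                return None         # nothing in the table starts this way
        return node.get("$")        # None if key is only a prefix
```

For a string table that's queried constantly with mostly-overlapping names, that change in complexity class is exactly where a full second of compile time can vanish.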
Another cursory skim-through of the profiler results shows that we're wasting a lot of time looking up metadata for variables; this can be trimmed back a lot to search a smaller subset of the data, netting a speedup that brings us to 3.42 seconds.
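One way to shrink the searched subset (a hypothetical sketch of the idea, not the compiler's actual scheme): keep a metadata table per scope and walk only the chain of enclosing scopes, rather than scanning one flat list of every variable in the program.

```python
class Scope:
    def __init__(self, parent=None):
        self.vars = {}               # name -> metadata, this scope only
        self.parent = parent

    def lookup(self, name):
        scope = self
        while scope is not None:     # search only the enclosing chain
            if name in scope.vars:
                return scope.vars[name]
            scope = scope.parent
        return None
```

Most lookups then touch a handful of small tables instead of the whole program's worth of variables.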
I found some easy improvements to my trie implementation; minimizing the amount of string processing going on helps attain a compile time of 2.95 seconds. The clock reads 7:15AM and I'm well past the point of being overly tired.
I will threaten to hang up my hat for the night (morning?) and post this entry. Don't be shocked if I wander back in soon and post a comment with another speed record.