Making a Faster Compiler

Posted by ApochPiQ, 16 July 2011

Recently, my major undertaking (outside of work of course) has involved rewriting the Epoch compiler. I'm doing this for a few reasons, but the main one is performance; the Release 11 compilation model involves using boost::spirit::classic to parse the input code 3-4 times, progressively elaborating the syntax tree until it is ready to be turned into bytecode.

This model is fragile, ad hoc, poorly designed, and incredibly slow. So my first goal for Release 12 was to rewrite the parser to use boost::spirit::qi and generate a true Abstract Syntax Tree from the parsed source, then do a series of refinement passes over that AST for actual compilation. This will involve implementing several similar but different AST representations, each one capturing the work done by a separate pass of the compiler. Instead of mutating the AST in place as decorations and improvements are performed, the entire AST will be immutable and converted into a new, parallel representation in each pass.
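To make the pass structure concrete, here is a minimal sketch of the idea. The node types and pass names are hypothetical (not the actual Epoch IRs); the point is simply that each pass consumes one immutable representation and builds the next, rather than decorating a single tree in place.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical node types for illustration only; the real Epoch IRs differ.
struct ParsedExpression { std::string Text; };
struct ParsedProgram    { std::vector<ParsedExpression> Expressions; };

struct TypedExpression  { std::string Text; std::string Type; };
struct TypedProgram     { std::vector<TypedExpression> Expressions; };

// Stand-in for whatever analysis the pass actually performs
std::string InferType(const ParsedExpression& expr)
{
    return "integer";
}

// One refinement pass: the input is never modified; a new, parallel
// representation is built and handed on to the next pass.
TypedProgram TypeCheckPass(const ParsedProgram& input)
{
    TypedProgram output;
    output.Expressions.reserve(input.Expressions.size());
    for (std::size_t i = 0; i < input.Expressions.size(); ++i)
    {
        TypedExpression typed;
        typed.Text = input.Expressions[i].Text;
        typed.Type = InferType(input.Expressions[i]);
        output.Expressions.push_back(typed);
    }
    return output;
}
```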

Most production compilers operate this way, and for good reason; it is far easier to think about how the code works in this model than when a monolithic AST structure is used and manipulated in-place during compilation. Paradoxically, it's also faster - because despite the copying of the AST in each pass, each IR can be optimized to do exactly what it needs to do and no more.

As of a few days ago, I had qi generating a rough AST; pass times dropped from ~10 seconds in the old implementation to ~4 seconds just by upgrading the parser. But 4 seconds for a 20KB input source file is still painfully slow, so I broke out the profilers and started looking for ways to improve on the AST generation pass.

Turns out that the vast majority of execution time was spent allocating and freeing memory, which is due to the default semantics of qi. Essentially, things like lists and variants are constructed every time the parser tries to match a production in the grammar - and then destructed if the match fails. This means that for nontrivial grammars, a huge amount of time is spent allocating memory that is never used.
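For illustration only (this is not the Epoch grammar), a qi rule with a container attribute shows where the cost comes from: every attempt to match the production default-constructs the attribute, and a failed attempt simply throws that work away.

```cpp
#include <boost/spirit/include/qi.hpp>
#include <iostream>
#include <string>
#include <vector>

namespace qi = boost::spirit::qi;

int main()
{
    // The rule's synthesized attribute is a container, so every attempt to
    // match the production default-constructs a std::vector<std::string>
    // (plus std::strings for the elements). If the match fails, for example
    // because an enclosing alternative backtracks and tries another
    // production, all of that memory is freed again without ever being used.
    typedef std::string::const_iterator Iter;
    qi::rule<Iter, std::vector<std::string>(), qi::space_type> identifier_list
        = qi::lexeme[+qi::alpha] % ',';

    std::string input = "alpha, beta, gamma";
    std::vector<std::string> identifiers;

    Iter first = input.begin();
    Iter last = input.end();
    bool matched = qi::phrase_parse(first, last, identifier_list, qi::space, identifiers);

    std::cout << (matched ? "matched " : "failed ")
              << identifiers.size() << " identifiers\n";
    return 0;
}
```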

To attack this, I wrote a simple "deferred construction" template which lazily allocates memory for AST nodes only once the production succeeds. From there on out, the node's contents are copied around and the final AST is constructed using only allocations that are absolutely necessary.
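A minimal sketch of what such a wrapper can look like, assuming a simple Set/Get interface (the real implementation differs): the heap allocation happens only on the first real assignment, so speculative match attempts that never produce a value never allocate.

```cpp
// Sketch of the idea, not the actual Epoch wrapper. The wrapped AST node is
// only allocated once the production has produced something worth keeping.
template <typename T>
class DeferredConstruction
{
public:
    DeferredConstruction()
        : Content(0)
    { }

    DeferredConstruction(const DeferredConstruction& other)
        : Content(other.Content ? new T(*other.Content) : 0)
    { }

    DeferredConstruction& operator=(const DeferredConstruction& other)
    {
        if (this != &other)
        {
            delete Content;
            Content = other.Content ? new T(*other.Content) : 0;
        }
        return *this;
    }

    ~DeferredConstruction()
    {
        delete Content;
    }

    // The first assignment triggers the one allocation we actually need
    void Set(const T& value)
    {
        if (Content)
            *Content = value;
        else
            Content = new T(value);
    }

    bool IsSet() const      { return Content != 0; }
    const T& Get() const    { return *Content; }

private:
    T* Content;
};
```

The hooks that tie a wrapper like this into qi's attribute machinery are omitted here; the point is simply that nothing is allocated until a production actually succeeds.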

This dropped parse times on the test input from ~4 seconds to ~75 milliseconds - which is a very, very nice gain indeed.

My next step is to eliminate excessive copying of nodes once they are successfully allocated; each branch of the AST is immutable once the parser has successfully constructed it, so there's no need to store copies of every branch as parsing continues. This involves changing the deferred construction wrapper from using raw pointers to boost::shared_ptr, and just handing around references to the allocated node instead of making full deep copies.
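Sketching the same wrapper on top of boost::shared_ptr (again with hypothetical names rather than the actual Epoch code): because a successfully parsed branch is immutable, copies of the wrapper can share ownership of the node, so copying bumps a reference count instead of duplicating an entire subtree.

```cpp
#include <boost/shared_ptr.hpp>
#include <boost/make_shared.hpp>

// Sketch only; not the real Epoch implementation.
template <typename T>
class DeferredConstruction
{
public:
    void Set(const T& value)
    {
        // Still a single allocation, and only when a value actually arrives
        Content = boost::make_shared<T>(value);
    }

    bool IsSet() const      { return Content.get() != 0; }
    const T& Get() const    { return *Content; }

    // The compiler-generated copy constructor and assignment operator are now
    // exactly what we want: they copy the shared_ptr, not the node itself.

private:
    boost::shared_ptr<T> Content;
};
```

Shared ownership is used here purely to make handing around references to a finished branch cheap and safe; since the branch is never modified after construction, there is no aliasing hazard to worry about.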

Using the same input source as before, tests reveal that this new minimal-copy approach can generate an AST in an average of 47 milliseconds. At this point, a single pass is just over 200 times faster than it used to be.

Profiling indicates that there are still several points where I could make use of the deferred construction wrapper to further improve things, so I'm going to go ahead and try that next. I'll keep you posted!




Optimizations like this are incredibly gratifying. Seeing all the same work run in that much less time makes programming worth doing.

Very nice. That is a great improvement, and what's even better is that your profiling indicates you can apply the same solution elsewhere for even more performance. Keep up the great work; go go profiling. Making this much progress on performance makes you enjoy your project so much more and relieves so much stress.

19ms and counting!

Excellent!!!
