
Deploying a new lexer

Posted by ApochPiQ, 18 July 2011 · 217 views

When I last left a status update here, I had the new Epoch compiler running on my test case in right around 18.5 milliseconds. For reference, when I started this compiler rewrite, parse times were around 10 seconds. Yes, seconds.

Obviously, a 500-fold speed improvement is nothing to sneeze at, and maybe normal people would have looked at this rocket-disguised-as-compiler and decided that, hey, you know, 18.5 milliseconds is good enough.


Well... I have never claimed to be normal.

18.5 milliseconds may be nice, but it isn't there yet. Profiling the compiler shows that a huge amount of time is spent in the parser doing "backtracking." (For reference, I had to move up to a 2MB file for testing, because the 20KB test module I started with doesn't even register on the profiler anymore.)

Backtracking is when a parser looks at a bit of code, says "hmm, I think I'm looking at a Foo" and then gets another byte down the road and goes "whoops! That wasn't a Foo, it's a Foe!" and has to discard all the work it did assuming it was looking at a Foo.

When you do this on a byte-for-byte basis, in a 2MB file, it's clear that things are not as efficient as they could be.

The correct solution to this is lexical analysis. A lexer is a tool which performs lexical analysis. In a nutshell, this is a way of taking the raw code and treating it as a sequence of atomic "tokens" instead of bytes. Now, the parser doesn't think in terms of "F" followed by "o" followed by "o-- whoops, I mean e." Instead, it sees "Foo" and "Foe" as indivisible and obviously different chunks.

During parsing, this dramatically cuts back on backtracking, because the parser can look at the code at a more coarse level. It's like cutting your input size down by a factor roughly equal to the average length of a token in your code - which, as it happens in my Epoch test, is about 6 bytes.


The Epoch compiler is built on top of boost::spirit::qi, which has a companion library boost::spirit::lex. As you can probably guess from the name, lex is a lexer generator. And since, as we've established, 18.5 milliseconds is just too darn slow, it's time to deploy lex.

As a matter of fact, I started this about 24 hours ago, and I've been dabbling in compiler tweaks ever since. It's slow going, because with all the template library magic that's in the Epoch compiler, the build times of that code are getting into the several-minute range. It's endlessly ironic that I'm sacrificing hours of cumulative compile time in the Epoch compiler to shave milliseconds off the runtime... but oh well.

I've made plenty of brain-dead mistakes in those hours, mainly because I'm splitting my attention between this project and half a dozen other things, so I keep losing track of what I was thinking or doing.


At the moment, I have a hacked version of things running at long last - which is tremendously rewarding. The problem is, it doesn't actually support the Epoch syntax extension mechanism - so all those cool constructs like "if" and "do" and "function" that the standard library provides... don't work.

There seems to be a fundamental assumption someplace in lex that says that you shouldn't ever need to modify the lexing tables on the fly. This strikes me as painfully naive and limiting, so I'm doing my best to hack around it. So far it's slow going, but there is light on the horizon.


I'll poke at this a bit more tonight and then get some sleep and come at it strong tomorrow. Will keep everyone posted!




Doesn't boost::spirit do piles of template black magic to convert the grammar declarations to other stuff? If ::lex follows that same pattern (and it wouldn't be a huge surprise if it did) then no runtime modification would follow from that.

Actually, both spirit and lex have a hefty runtime component in the default use case; all the operator overloading gibberish is actually building the grammar/lexer rules at runtime.

From what I can tell, the main limitation that prevents lex from being mutated at runtime is that it doesn't allow you to specify new precedence orders for the patterns it matches. Specifically, if you have a default rule that matches any token, and then add a more-specific rule that looks for the exact token "foo", the rule added later will never trip because the default rule appears first in the precedence order. This makes it impossible to effectively extend the lexer while it runs.

I'm working on ways around this using custom directive operators for spirit, though; it's actually pretty fun.

Hmm, that makes sense. I've been reading your posts btw, even if I don't often have many comments.

I'm a little pissed off at the moment. Many hours of work and it looks like the lexer is actually slightly slower than running without it!

I'm still looking for obvious mistakes in my implementation that would contribute to this terribleness, but it's pretty disappointing, because it indicates that the implementation of lex isn't good enough to counteract the algorithmic inefficiencies of not lexing before parsing.

I'm hoping I just bungled a copy constructor or something...

Hello,

I am the author of lexertl, the lexer-generator library underneath boost::lex. Note that if you are building lexers at runtime and have a non-trivial lex spec, then your 18.5 milliseconds will probably be dwarfed by building the lexer, never mind parsing your file. It is possible to generate code instead to avoid this start-up cost, but that won't help you if you want to regenerate the lexer on the fly.

Regards,

Ben
