Grammar tweaking for fun and profit
I finally killed off most of the remaining dynamic memory allocations in my parser. There are still a few lurking in the internals of boost::spirit::lex (it supports "fast" parser backtracking by buffering tokens on the fly), but they're no longer a huge chunk of the runtime, so I'm not terribly worried.
It's time to switch over to more mundane things, like algorithmic improvements to the grammar spec itself. I'm planning on writing up a full list of tricks I'm using once I finish, so I won't delve too far into the tweaks I've been doing just yet. In brief, though, I'm basically tuning the rules so that productions fail as quickly as possible and don't test for things that are rare unless they absolutely have to.
For instance, it's common to have "optional" stuff in a given piece of Epoch syntax. One big killer is "preoperation" and "postoperation" statements, where postoperations are the real problem.
Consider an Epoch statement of the following shape (the exact names are illustrative):

foo.bar.baz++
To match this, we first have to see if we have an identifier (check), followed by a member access operator, the dot (check), followed by any number of further such accesses (check), followed by a postfix operator (check).
Now consider this alternative Epoch statement:

foo.bar.baz()
So we have an identifier (check) followed by a dot (check) followed by some repeats (check) followed by a postfix opera--- oops! That's an open parenthesis! Now the parser has to backtrack through all those tokens, discard any work it tried to do in the meantime, and try again from scratch. When the function call rule is tried, it has to re-match all of the member accesses, check again to see if it has a parenthesis (which we should have already known) and then continue.
Needless to say, for a production which appears in the majority of a given program's code, this is painfully inefficient.
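To see the cost concretely, here's a minimal Python sketch of the same ordered-choice-with-backtracking behavior. This is a hand-rolled toy matcher with made-up helper names, not the actual Spirit grammar; `scans` counts how many times the member-access chain gets walked.

```python
# Toy sketch of ordered choice plus backtracking over pre-lexed tokens.
scans = 0

def match_member_access(tokens, pos):
    """identifier ('.' identifier)* -- returns the new position, or None."""
    global scans
    scans += 1
    if pos >= len(tokens) or not tokens[pos].isidentifier():
        return None
    pos += 1
    while pos + 1 < len(tokens) and tokens[pos] == '.' and tokens[pos + 1].isidentifier():
        pos += 2
    return pos

def match_postop(tokens, pos):
    """Member access followed by a postfix operator such as '++'."""
    end = match_member_access(tokens, pos)
    if end is not None and end < len(tokens) and tokens[end] == '++':
        return end + 1
    return None          # fail -> the caller backtracks to pos

def match_call(tokens, pos):
    """Member access followed by '(' ')' -- re-walks the whole chain."""
    end = match_member_access(tokens, pos)
    if end is not None and end + 1 < len(tokens) and tokens[end] == '(' and tokens[end + 1] == ')':
        return end + 2
    return None

def match_statement(tokens, pos):
    # Ordered choice: try the post-op form first, then the call form.
    return match_postop(tokens, pos) or match_call(tokens, pos)

tokens = ['foo', '.', 'bar', '.', 'baz', '(', ')']
result = match_statement(tokens, 0)
print(result, scans)     # 7 2 -- the statement matched, but the chain was walked twice
```

The post-op alternative walks the entire access chain before failing on the open parenthesis, and all that work is thrown away before the call alternative re-walks it.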
Another killer is unary prefixes:
foo = !bar
The rule for this is pretty simple, at least in its naive form:
expression = prefixes* (parenthetical | literal | statement | identifier)
The rule's execution goes something like this:
- Look ahead by a token
- Ok, we have a prefix operator, the !
- Consume the ! and look ahead another token
- Check the prefix operator list again, this time failing because bar is not a prefix operator
- Backtrack 1 token
- Successfully match the Kleene star since we found 1 prefix operator
- Look ahead 1 token
- Is it a parenthetical? Nope.
- How about a literal? Nope.
- A statement? Well, it begins with an identifier!
- Look ahead 1 token
- Is it a parentheses-enclosed list of parameters?
- Nope. Backtrack 1 token
- Look ahead 1 token
- Oh, it's a standalone identifier
- Successfully match the identifier
- Successfully match the RHS expression
- Successfully match the assignment
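The trace above can be sketched as a toy recursive-descent routine in Python (simplified tokens and a step counter; nothing like the real Spirit-generated code):

```python
# Each lookahead or backtrack in the trace bumps `steps`, so the
# wasted work on a two-token expression is visible.
steps = 0
PREFIX_OPS = {'!'}

def parse_expression(tokens, pos):
    global steps
    # prefixes*: peek ahead, consuming while we see a prefix operator
    while True:
        steps += 1                          # look ahead by one token
        if pos < len(tokens) and tokens[pos] in PREFIX_OPS:
            pos += 1                        # consume the prefix
        else:
            break                           # backtrack the failed peek
    tok = tokens[pos]
    steps += 1                              # parenthetical? nope
    if tok == '(':
        raise NotImplementedError('parenthetical')
    steps += 1                              # literal? nope
    if tok.isdigit():
        return pos + 1
    steps += 1                              # statement? it begins with an identifier...
    if tok.isidentifier() and pos + 1 < len(tokens) and tokens[pos + 1] == '(':
        raise NotImplementedError('function call')
    steps += 1                              # backtrack; try standalone identifier
    if tok.isidentifier():
        return pos + 1
    return None

result = parse_expression(['!', 'bar'], 0)
print(result, steps)   # 2 6 -- six peek/backtrack steps to match two tokens
```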
Now consider a chained assignment:

a = b = c = foo()
Now we have to repeat all that backtracking and peeking ahead for every single assignment in the chain. Ouch!
Let's change the rule up a bit.
expressionchunk = parenthetical | literal | statement | identifier
expression = expressionchunk | (prefixes+ expressionchunk)
Now the rule can early-out when it finds an expression that doesn't involve a prefix. In the worst case, if the expression isn't really a valid expression, it looks at the "prefixes+" subrule (which matches one or more prefixes) and can quickly fail there without having to redundantly check for a valid trailing expression chunk.
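A rough way to see the early-out, sketched with a single probe counter. These are toy helpers, with `is_chunk` standing in for the whole parenthetical/literal/statement/identifier alternative:

```python
PREFIXES = {'!', '-'}
probes = 0

def is_chunk(tok):
    """Stand-in for parenthetical | literal | statement | identifier."""
    global probes
    probes += 1
    return tok.isidentifier() or tok.isdigit()

def is_prefix(tok):
    global probes
    probes += 1
    return tok in PREFIXES

def naive_expression(tokens):
    # prefixes* chunk: always probes the prefix set first, even though
    # the common case has no prefix at all.
    pos = 0
    while pos < len(tokens) and is_prefix(tokens[pos]):
        pos += 1
    return pos < len(tokens) and is_chunk(tokens[pos])

def tuned_expression(tokens):
    # chunk | (prefixes+ chunk): the common no-prefix case matches on
    # the very first probe; the prefix branch only runs on failure.
    if is_chunk(tokens[0]):
        return True
    pos = 0
    while pos < len(tokens) and is_prefix(tokens[pos]):
        pos += 1
    return 0 < pos < len(tokens) and is_chunk(tokens[pos])

counts = {}
for parse in (naive_expression, tuned_expression):
    probes = 0
    assert parse(['bar'])          # the common, prefix-free case
    counts[parse.__name__] = probes
print(counts)   # {'naive_expression': 2, 'tuned_expression': 1}
```

One probe instead of two looks trivial, but multiplied over every expression in a 2MB source file, shaving redundant lookahead in the common case adds up.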
This kind of subtle but vital tweaking is pretty much the last bit of fertile ground for optimizing the Epoch parser at this point. I'm still hunting down improvements, and they're yielding solid gains - but it's tediously slow work, because to test an individual tweak I have to rebuild the Epoch compiler (even one subset of it can take a couple of minutes to compile), make sure all the syntactically valid Epoch constructs are still accepted, make sure that illegal constructs are still rejected, and so on.
"Diminishing returns" is the name of the game now. All the easy, obvious stuff was optimized long ago, and much of it was outright deleted. Memory allocations are down to a tiny fraction of what they used to be, and only account for about 15% of the execution time, so there are no major wins left in further culling allocations.
Work is ongoing, but current parse times on the original test case are just over 1000ms. That's just a hair over a second to parse the 2MB test file.