
The Bag of Holding

Epoch self-hosting progress

Posted by ApochPiQ, 25 August 2013
So the Epoch compiler is now written end-to-end in Epoch. There is no C++ left in the bootstrapping process aside from the runtime environment which does garbage collection and such.

Sadly, this doesn't mean we've quite reached self-hosting just yet. Only about a third of the compiler test suite is passing, and much of the remaining work centers on finishing the parser and rigging up the code generation to the AST traversal logic.

As you may or may not recall from previous updates, the code generator is already done and capable of generating the full Epoch compiler from a properly formatted AST. What remains is the glue between parsing and the code generation system: once that's in place and all the tests pass again, we can try the evil experiment of feeding the complete compiler into itself.

There are a number of parser features left to complete before this can happen:
  • Parenthetical expression support
  • Literal function parameters (used for pattern matching)
  • Higher order functions
  • References
  • Entities (flow control, etc.)
  • Function tagging
  • Sum type definitions
  • Templates
  • Some literal cleanup (floating point support being the biggest one for now)
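Parenthetical expressions are first on that list. As a generic illustration (this is ordinary recursive descent in Python, not the Epoch parser or its grammar), parenthesized subexpressions fall out naturally once the innermost grammar rule recurses back to the top:

```python
import re

def tokenize(src):
    # Numbers and single-character operators/parentheses.
    return re.findall(r"\d+|[+\-*/()]", src)

def parse(src):
    # Grammar sketch:
    #   expr   -> term (('+' | '-') term)*
    #   term   -> factor (('*' | '/') factor)*
    #   factor -> NUMBER | '(' expr ')'
    tokens = tokenize(src)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def eat(expected=None):
        nonlocal pos
        cur = peek()
        if expected is not None and cur != expected:
            raise SyntaxError(f"expected {expected!r}, got {cur!r}")
        pos += 1
        return cur

    def factor():
        # Parenthetical support: '(' simply restarts the expression rule.
        if peek() == "(":
            eat("(")
            value = expr()
            eat(")")
            return value
        return int(eat())

    def term():
        value = factor()
        while peek() in ("*", "/"):
            op = eat()
            rhs = factor()
            value = value * rhs if op == "*" else value // rhs
        return value

    def expr():
        value = term()
        while peek() in ("+", "-"):
            op = eat()
            rhs = term()
            value = value + rhs if op == "+" else value - rhs
        return value

    result = expr()
    if pos != len(tokens):
        raise SyntaxError("trailing tokens")
    return result
```

The same shape handles arbitrarily nested parentheses with no extra machinery, which is why this item tends to be quick once the expression grammar is factored correctly.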
After that, some general features still need to be wired into the compiler. These are things that need nontrivial logic to exist between the parser and the code generation layer:
  • Overload resolution
  • Type inference for assignments and parenthetical expressions
  • Chained assignments
  • Function pointer members in structures
  • Pattern matching on values
  • Pattern matching on types
  • Template support
  • Some built-in operators and functions
  • Type demotion (e.g. integer -> 16-bit integer where appropriate)
  • Standalone code blocks
  • Batch test framework
  • Executable binary generation
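A couple of these can be illustrated together. Here's a hypothetical sketch of overload resolution with a single widening promotion (the type names, promotion table, and resolution rules are invented for illustration, not Epoch's actual semantics):

```python
# Allowed implicit widenings; the inverse direction is the "type demotion"
# mentioned above (e.g. integer -> 16-bit integer where appropriate).
PROMOTIONS = {"integer16": "integer"}

def matches(param, arg):
    # An argument matches a parameter exactly, or via an allowed promotion.
    return param == arg or PROMOTIONS.get(arg) == param

def resolve(overloads, arg_types):
    # overloads: list of (mangled name, [parameter types]).
    # Prefer exact matches; otherwise require a unique viable candidate.
    exact = [o for o in overloads if list(o[1]) == list(arg_types)]
    if exact:
        return exact[0]
    viable = [o for o in overloads
              if len(o[1]) == len(arg_types)
              and all(matches(p, a) for p, a in zip(o[1], arg_types))]
    if len(viable) == 1:
        return viable[0]
    raise TypeError("ambiguous or no matching overload")
```

Real resolution has to interact with type inference and pattern matching on values, but the core "exact match, else unique promotable match" shape is the common starting point.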
I also have a couple of known bugs to clean up before it's all said and done, but they're rather boring for the most part.

On the one hand, this amounts to a whole lot of work; on the other hand, it's a measurable checklist and I still have until the end of the year to hit my personal deadline for the project.

It's actually kind of a relief to see a finite list of things left to knock out. "Write a self-hosting compiler" is kind of a tall order, and to finally be in a place where the end is clearly in sight is a great feeling.

There are also some other reasons for me to be excited about the future of Epoch, but I can't really go into detail just yet. Stay tuned, life's about to get interesting.

Epoch Optimizations and Garbage Collection

Posted by ApochPiQ, 10 August 2013
Following the Great Garbage Collection Debug Spree of the past few weeks, I've noticed a general trend towards bad performance in the realtime raytracer benchmark. At one point it was down to ~6.5FPS peak performance, which is just unacceptably bad.

Profiling revealed two main causes of the slowdown. One is the garbage collector invocations themselves; Epoch is designed to have a "greedy" collector which trips frequently and collects aggressively. Unfortunately, the cost of walking the stack and heap for live objects is nontrivial, so tripping the GC all the time lowers overall throughput.

It's worth noting that in standard apps like Era, the difference isn't even visible; I've yet to see Era hitch noticeably with the fixed garbage collector. But for something that wants to peg a CPU core and go flat-out as fast as possible, triggering collection cycles all the time is a bad idea.
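To make the cost concrete, here's an illustrative toy model (the numbers and API are invented, not the Epoch runtime) contrasting a collector that trips at every function return with one gated by an allocation threshold:

```python
class Collector:
    def __init__(self, threshold):
        self.threshold = threshold   # bytes allocated before a collection fires
        self.allocated = 0
        self.collections = 0

    def allocate(self, nbytes):
        self.allocated += nbytes

    def on_function_return(self):
        # Greedy policy: threshold == 0 means collect at every opportunity.
        if self.allocated >= self.threshold:
            self.collections += 1
            self.allocated = 0       # pretend everything unreachable was reclaimed

greedy = Collector(threshold=0)
gated = Collector(threshold=4096)
for _ in range(1000):                # 1000 tiny functions, 16 bytes allocated each
    for c in (greedy, gated):
        c.allocate(16)
        c.on_function_return()
```

With these made-up numbers the greedy policy runs 1,000 collection cycles where the gated one runs 3; each cycle's stack-and-heap walk is what makes the difference visible in a CPU-bound benchmark.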

The other major cause of performance woes is register spilling. Without going into painful amounts of detail, one of the requirements of LLVM's garbage collection support is that you can no longer store pointers to GC-able objects in CPU registers. Instead, all objects that might be collected must be stored in memory on the machine stack. Moreover, any time you want to update those values, it has to be done on the stack, so there is overhead in moving from register values back onto the stack every time you touch a variable that might be garbage collected.

This may not seem like much, but it turns out to be pretty painful in a benchmark that relies heavily on computations done on objects. Since every object can potentially be collected by the GC, every object's pointer must live on the stack at all times. This winds up being a substantial perf hit.

On the upside, both problems can be fixed with a single tactical strike. The key observation is that many Epoch functions never do anything that generates garbage. A typical Epoch program is composed of many small, nearly-trivial functions; and if those functions can't possibly create any garbage, there's no point in checking with the GC when they finish.

As a bonus, if a function never needs to call the GC, and never calls any other functions which need the GC, we can turn off register spilling for that function!

The Shootout: Epoch vs. C++
Tonight I implemented a quick little function tag called [nogc]. This tag tells the JITter two things: first, the function cannot generate garbage, and therefore should not invoke the garbage collector; and secondly, because the GC is never invoked from a [nogc] function, register spilling can be disabled in the body of that function.
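The same observation lends itself to automation. Here's a hypothetical sketch (the post describes a hand-applied tag; the function names and call-graph representation below are invented) of deriving the no-garbage property transitively over the call graph:

```python
def compute_nogc(allocates, calls):
    # allocates: {fn: True if fn itself can generate garbage}
    # calls:     {fn: [functions it calls]}
    # A function qualifies for [nogc]-style treatment only if it allocates
    # nothing AND everything it calls also qualifies, transitively.
    nogc = {fn: not alloc for fn, alloc in allocates.items()}
    changed = True
    while changed:                   # fixed-point iteration over the call graph
        changed = False
        for fn, callees in calls.items():
            if nogc[fn] and any(not nogc[c] for c in callees):
                nogc[fn] = False
                changed = True
    return nogc

# Hypothetical raytracer-flavored call graph: only trace() allocates.
allocates = {"dot": False, "normalize": False, "trace": True, "shade": False}
calls = {"dot": [], "normalize": ["dot"], "trace": ["normalize"], "shade": ["trace"]}
nogc = compute_nogc(allocates, calls)
```

The leaf math functions come out eligible, while anything that transitively reaches an allocation keeps its GC check and its register spilling.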

The results are duly impressive: the raytracer is back to running at ~26.5FPS on my dev machine - a full 4x speedup over the worst-case performance it was exhibiting earlier today.

But it gets even cooler.

Out of sheer curiosity, I wrote a port of the Epoch raytracer in C++. It's not optimal C++, to be fair, and cuts a few corners to make it function more similarly to the Epoch program. But, line for line, it does basically exactly what the Epoch version does.

The C++ port of the raytracer runs at only 23FPS.


Posted by ApochPiQ, 02 August 2013
Turns out my garbage collection woes are over.

I was strongly suspicious of the inliner in my last post, and it turns out that this hunch was (albeit indirectly) completely correct.

The obvious thing to do when facing a bug like this is to compare the code generated; dump out a listing of the version that works, and a listing of the version that doesn't, and run a diff tool to see what's mismatched between the two.

Thankfully, LLVM comes with a handy dump() function that makes it easy to see the IR for a given program, so arranging this is trivial.

Combing through the diff, I noticed that there was indeed some inlining going on - but not of the functions that I suspected were causing problems. Moreover, suppressing inlining on the functions I did think were problematic made no difference!

As I looked closer, I noticed another interesting distinction between the working and broken versions of code: the @llvm.lifetime.start and @llvm.lifetime.end intrinsics.

It took some digging on the googles to figure out what exactly these mean. Semantically, they just define when two variables (supposedly) do not overlap in lifetime, and can theoretically be given the same stack slot. Except if we marked one of those stack slots as containing a GC root... well, I'm sure you can figure out the rest.

The intrinsics are inserted by the optimizers at the IR level but not acted upon until the native code emission phase, so all I needed to do was strip them out of the IR. Thankfully I already have a late-running pass over the optimized code for GC setup purposes, which is actually part of what generates the stack maps in the first place. I modified this pass to obliterate calls to those intrinsics, re-enabled function inlining everywhere I had previously tried to club it to death, and...

Everything works!

Incidentally, there are already bugs filed against the so-called "Stack Coloring" pass which uses these intrinsics. One that actually helped me track this issue down is filed at http://llvm.org/bugs/show_bug.cgi?id=16095

I'm pondering filing a second bug just to note that stack coloring/slot lifetime in the presence of GC roots is a bad idea.
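That stripping step is easiest to illustrate on IR rendered as text. This Python sketch is purely illustrative (the actual fix runs as a pass over the in-memory IR inside LLVM, and these IR lines are invented), but the effect is the same: drop every call to the lifetime intrinsics so stack coloring can no longer merge slots that hold GC roots.

```python
def strip_lifetime_intrinsics(ir_text):
    # Remove any instruction referencing @llvm.lifetime.start or
    # @llvm.lifetime.end; everything else passes through untouched.
    kept = []
    for line in ir_text.splitlines():
        if "@llvm.lifetime.start" in line or "@llvm.lifetime.end" in line:
            continue                 # obliterate the intrinsic call
        kept.append(line)
    return "\n".join(kept)
```

Without the lifetime markers, the stack coloring pass has no license to assign two variables the same slot, so a slot flagged as a GC root can never be silently reused.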

The ongoing mission

Posted by ApochPiQ, 02 August 2013
Nailed down some more quirks in the GC over the past day; unfortunately the last one is a real bugger.

It appears that LLVM's function inlining optimizations cause the GC to go slightly insane. I have yet to ascertain the exact interaction between the inliner and the GC, but disabling inlining entirely makes the GC run beautifully.

I also remembered a lesson that I should have taken to heart a long time ago: deferred operations and garbage collection are a dangerous mix. My particular problem was using PostMessage() to send a string to a widget. In between the call to PostMessage and the message pump actually dispatching the message, the GC would run. It would detect that nothing referenced the string anymore, and correctly free it.

Of course, when the message pump dispatched the notification in question, the control would try and read from the now-freed string, and bogosity would result.

Fixing that was just a matter of a find/replace and swapping in SendMessage, since I didn't really need the deferred behavior of PostMessage in the first place.
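The hazard is easy to reproduce in miniature. This toy model is invented for illustration (it is not Win32 or the Epoch runtime): the deferred queue holds only an object id rather than a GC-visible reference, so a collection between post and dispatch frees the payload out from under it.

```python
class Heap:
    def __init__(self):
        self.objects = {}            # id -> string payload
        self.roots = set()           # ids reachable from live variables

    def alloc(self, oid, payload):
        self.objects[oid] = payload
        self.roots.add(oid)

    def collect(self):
        # Free everything not referenced by a root; the queue isn't a root.
        self.objects = {i: p for i, p in self.objects.items() if i in self.roots}

heap = Heap()
queue = []                           # deferred messages carry ids only, like PostMessage

heap.alloc(1, "caption text")
queue.append(1)                      # "post": queued, but invisible to the GC
heap.roots.discard(1)                # the local variable goes out of scope
heap.collect()                       # the GC runs before the pump dispatches

dispatched = [heap.objects.get(oid) for oid in queue]
# dispatched == [None]: the string was collected before dispatch.
```

A synchronous send has no window for the GC to run in, which is exactly why swapping in SendMessage made the problem vanish.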

So my GC bug list is shrinking once again. I hope that I'm actually fairly close this time and not just shrugging off a "small" issue that turns out to require a major rewrite of something. I really don't want to ship without function inlining support, since that's a huge performance win in a language like Epoch where everything is a tiny function. However, I desperately need to find out why inlined functions cause the GC to vomit, and that's going to take a lot of time and patience I suspect. It may be worth taking the perf hit temporarily to get working software out the door. We shall see.

Now that things seem to be behaving themselves, and soak tests of memory-intensive programs reveal no issues, I have a few things I'd like to get back to working on besides garbage collection.

One is shoring up the Era IDE so that it's actually functional enough to use on a consistent basis. That'll likely involve making random additions and refinements over time, but there's some low-hanging fruit I'd like to hit sooner rather than later (mostly surrounding tabbed editing for now).

After Era reaches a point where I'm comfortable spending a few hours in it at a time, I want to go back to working on the self-hosting compiler and getting the parser and semantic analysis code up to snuff. The major semantic features left center around overload resolution and pattern matching; the parser needs to support basically the entire full language grammar instead of the trivialized subset it recognizes now.

Despite the lengthy diversion, I still feel like I'm on track to finish self-hosting by the end of the year. It may be a tight squeeze, and there may well be a number of bugs to sort out before it's 100%, but I think I can get the majority of the port done by then.

Once self-hosting is reached, there's a lot of simple but tedious optimization to do in the runtime itself. One major thing I've mentioned in the past is moving away from storing Epoch "VM bytecode" in the .EXE file and directly storing LLVM bitcode instead. That'll cut back on the dependencies needed to run Epoch programs, and reduce launch times substantially as well.

From there, it's mostly a matter of introducing language and runtime features. I suspect that once the compiler is done there will be a lot of crufty code that can be dramatically improved with some minor language tweaks. There's also the obvious stuff like separate compilation support, better data structures, parallelism...

Obviously still quite a lot to be done, but things are getting pretty exciting. I'm starting to have semi-realistic daydreams about people joining on with the project as it gains momentum and helping to smooth out some of the many remaining rough edges... hint hint :-P

Mostly though I hope that reaching self-hosting will be enough to gain some serious attention. The language is still very, very far from ready for prime-time, but the more people start playing with it the sooner it'll get there.
