Juicy crunchy tasty sweet

Published January 17, 2010
I've scraped together a couple of free hours and spent some time hacking on Epoch Release 8 this weekend. A couple of minor task-list items have been knocked out, and I've made some architectural improvements to the parser to help boost compile times.

At the moment I'm working on Issue 9, which involves adding thread pool support to the language's native threading features. Currently, any time you fork a task block or invoke a future, the VM spins off an OS thread under the hood. This is fine for really heavyweight work, but for short calculations or fine-grained parallelism it really sucks. In most cases the overhead of spawning the thread is higher than the benefit the parallelism provides.
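
To put that cost in perspective, the current behavior amounts to something like this on the VM side. This is just a C++/Win32 illustration of the idea, not the actual Epoch VM code; RunTask and its context parameter are made-up names:

#include <windows.h>
#include <process.h>

// Hypothetical task body the VM wants to run in parallel
unsigned __stdcall RunTask(void* context);

HANDLE ForkTask(void* context)
{
    // One full OS thread per forked task/future: kernel object creation,
    // a fresh stack, and scheduler involvement are paid on every single fork.
    uintptr_t handle = _beginthreadex(NULL, 0, RunTask, context, 0, NULL);
    return reinterpret_cast<HANDLE>(handle);    // joined later, when the result is actually read
}

For a computation that only takes a few microseconds, all of that setup and teardown dwarfs the work itself.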

To counteract this, we provide the option of thread pools. These are worker threads that are spun off at the beginning of the program's execution (well, technically, they are spun off whenever the threadpool function is invoked) and then sit around in a low-cost wait state until someone wants to use one. Once some code is on the worker thread's work queue, it wakes up, executes the code, does anything necessary to store the response and/or message the results back to the calling thread, and then goes back to sleep.

This reduces the overhead of forking a task or future to the overhead of an OS synchronization primitive (on Windows I'm using a simple Event), which is minimal on any decent platform. The result is that it becomes dirt cheap to use multiple threads all over the place.
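
For the curious, the pattern inside the VM is roughly the classic blocked-worker loop. Here's a minimal C++/Win32 sketch of the idea - illustrative only, not the actual Epoch VM code; the WorkItem/WorkQueue types and the function names are all invented for the example:

#include <windows.h>
#include <process.h>
#include <deque>

struct WorkItem
{
    void (*Execute)(void* context);     // code to run on the worker
    void* Context;                      // captured state / message target
};

struct WorkQueue
{
    CRITICAL_SECTION Lock;              // protects Items
    HANDLE WorkAvailable;               // auto-reset Event, signaled on push
    std::deque<WorkItem> Items;
    volatile bool Shutdown;
};

// Create a pool of numthreads workers - roughly what threadpool("name", n) sets up
void StartPool(WorkQueue& queue, unsigned numthreads)
{
    ::InitializeCriticalSection(&queue.Lock);
    queue.WorkAvailable = ::CreateEvent(NULL, FALSE, FALSE, NULL);   // auto-reset, initially unsignaled
    queue.Shutdown = false;

    for(unsigned i = 0; i < numthreads; ++i)
        _beginthreadex(NULL, 0, WorkerThreadProc, &queue, 0, NULL);
}

// Push work onto the queue and wake one sleeping worker
void QueueWork(WorkQueue& queue, const WorkItem& item)
{
    ::EnterCriticalSection(&queue.Lock);
    queue.Items.push_back(item);
    ::LeaveCriticalSection(&queue.Lock);
    ::SetEvent(queue.WorkAvailable);
}

// Worker thread: sleep cheaply on the Event until there is something to do
unsigned __stdcall WorkerThreadProc(void* param)
{
    WorkQueue& queue = *static_cast<WorkQueue*>(param);

    while(!queue.Shutdown)
    {
        ::WaitForSingleObject(queue.WorkAvailable, INFINITE);

        // Drain everything that has accumulated, then go back to sleep
        for(;;)
        {
            WorkItem item;

            ::EnterCriticalSection(&queue.Lock);
            if(queue.Items.empty())
            {
                ::LeaveCriticalSection(&queue.Lock);
                break;
            }
            item = queue.Items.front();
            queue.Items.pop_front();
            ::LeaveCriticalSection(&queue.Lock);

            item.Execute(item.Context);   // run the forked task/future, then message back the result
        }
    }

    return 0;
}

The only per-fork cost in this scheme is taking the lock and signaling the Event; the thread creation cost was already paid up front when the pool was built.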

All of this is pretty easy to set up. Here's an example program from the upcoming Release 8 package:

//
// THREADPOOL.EPOCH
//
// Demonstration of worker thread pooling in Epoch
//

entrypoint : () -> ()
{
    // Create the thread pool with a limited number of threads
    threadpool("demo pool", 2)

    // Fork off a chunk of work for the thread pool to do
    thread("task1", "demo pool")
    {
        message(caller(), result(pi()))
    }

    thread("task2", "demo pool")
    {
        message(caller(), result(pi()))
    }

    future(r1, pi(), "demo pool")
    future(r2, pi(), "demo pool")

    // Wait for work results to complete and display them
    responsemap(resulthandler)
    {
        result(real(piresult)) => { debugwritestring("Result: " ; cast(string, piresult)) }
    }

    acceptmsg(resulthandler)
    acceptmsg(resulthandler)

    debugwritestring("Future result: " ; cast(string, r1))
    debugwritestring("Future result: " ; cast(string, r2))
}


pi : () -> (real(retval, 0.0))
{
    real(denominator, 1.0)
    boolean(isplus, true)

    do
    {
        real(div, 4.0 / denominator)

        if(isplus)
        {
            retval = retval + div
        }
        else
        {
            retval = retval - div
        }

        isplus = !isplus
        denominator = denominator + 2.0
    } while(denominator < 10000.0)
}


(You can see more example Epoch programs in the Examples repository.)

Note that instead of a typical task we request a thread. The thread construct accepts two parameters, in contrast to task's single parameter. The first parameter is still the internal ID tag of the thread, so you can message it later. The second parameter is the name of the thread pool from which a thread should be used; this must match the name of a pool previously created by invoking the threadpool function.

A similar tweak has been made to the future function: an optional third parameter controls which pool the future is computed in. All in all, using thread pooling is simple, intuitive, and concise - above all a supremely pragmatic sort of feature, which is exactly what Epoch should be.


Working on this has unearthed a couple of parser and validator bugs that I'm fixing; once those are out of the way, I still need to actually build the thread pool support in the VM (for now it still forks a fresh OS thread under the hood).

Following that, there are only a handful of items left in the work list for Release 8:

  • Fix a bug with batch operations and output filenames

  • Add simple critical-section wrappers around the CUDA library to give the CUDA integration some primitive thread safety

  • Add some build configurations to the VS solution to allow building Epoch with no CUDA support (for all the ATi users out there)

  • Perform a final code review, largely for documentation and exception safety purposes


Of these, the only really major chunk of time will be the code review. At this point I plan on doing a cursory pass over the code once to make sure it's bare-minimum acceptable for release, and then doing additional in-depth reviews over things as time permits. That should make sure that I can get this thing out the door by the end of the month, as planned.

Once R8 hits the shelves it's time to hop back into the CUDA integration side of things and really improve the supported features (right now you're kind of stuck with simple arithmetic on a single GPU-side thread, which is fairly useless). That and some other surprises are in store for R9, which will be the official GDC 2010 preview release - meaning I'll probably be cramming stuff into it right up until the first week of March.


Speaking of GDC 2010, I can confirm that I'll be present this year once again, and anyone interested in an Epoch preview kit can contact me here via PM to arrange a meeting at GDC. Ask nicely and I might even give you a hallway preview [wink]


That concludes our show; join us next time for nothing of consequence, or stay tuned to see absolutely nothing.