Vaerydian



Coroutines: Simple HTTP Server

Posted 25 February 2014
Coroutine, Concurrent Programming
Ok, what are we up to this time?
This time we're going to go about making a very simple HTTP server that will serve static content and handle roughly 20,000 requests per second (I know in earlier posts I stated ~30k, but I refined aspects of the server to be more standards-friendly, and lost some performance as a result). To do so we'll need to expand the AsyncCS library we've been working on a bit to include a couple of things: Coroutine Workers and a Resource Pool.

NOTE: Under no circumstances should you ever use this server in a production setting. It is buggy and should not be considered secure. You have been warned.

What is a Coroutine Worker?
Simply put, a Coroutine Worker is just a worker thread, and a worker thread is just a standalone thread that runs in a specialized infinite loop, working only on the set of work it is given. In our case, it pulls its work from our Resource Pool.

What is a Resource Pool?
A Resource Pool is just that, a pool of resources. In our case, it's a set of concurrent collections and some access mechanisms surrounding them.

What is a Concurrent Collection?
Concurrent collections are a special type of thread-safe collection (i.e., generic containers) that allow multiple threads to utilize them at any given time. They make our lives easy. If you want an idea of how .NET goes about implementing them, I suggest you take a gander over at the Mono project and view their source.
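
To make that a bit more concrete, here's a minimal sketch (not from AsyncCS, purely for illustration) of two threads sharing a ConcurrentQueue, the same type our Resource Pool will use below:
using System;
using System.Collections.Concurrent;
using System.Threading;

public static class ConcurrentQueueDemo{

	//a producer thread enqueues while the main thread dequeues; no explicit locks needed
	public static void Run(){
		ConcurrentQueue<int> queue = new ConcurrentQueue<int> ();

		Thread producer = new Thread (() => {
			for (int i = 0; i < 10; i++)
				queue.Enqueue (i);
		});
		producer.Start ();

		int received = 0;
		while (received < 10) {
			int item;
			//TryDequeue is thread-safe and simply returns false if the queue is currently empty
			if (queue.TryDequeue (out item)) {
				Console.WriteLine (item);
				received++;
			}
		}

		producer.Join ();
	}
}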

So how do we go about putting this all together?
Ok, this is going to be mainly a straight-forward construction due to the amount of code involved, so no "as I figured it out" type posts, sorry :/ That said, first let's construct our Resource Pool. For it we'll need a few things: a FIFO structure (a queue) to contain coroutines as they are issued, a way to issue a new coroutine, and a way to place a coroutine back into the queue after we've done some work on it.
public static class ResourcePool{

	public static ConcurrentQueue<Coroutine> coroutine_queue = new ConcurrentQueue<Coroutine> ();

	public static void issue_coroutine(Coroutine coroutine){
		coroutine.initialize (null);
		ResourcePool.coroutine_queue.Enqueue (coroutine);
	}

	public static void issue_coroutine(Coroutine coroutine, object input){
		coroutine.initialize (input);
		ResourcePool.coroutine_queue.Enqueue (coroutine);
	}

	public static void enqueue_coroutine(Coroutine coroutine){
		ResourcePool.coroutine_queue.Enqueue (coroutine);
	}

	public static object[] parameterize(params object[] parameters){
		return parameters;
	}
}
What about the Worker?
The worker is pretty simple. It needs an interruptible infinite loop (i.e., while(bool_variable)), the ability to retrieve a coroutine from the queue, a max number of coroutines to work on before ending its loop-cycle, a period to remain dormant after its loop-cycle is complete, a way to identify it, a way to force it to shut down, and whatever other spurious info you may want (for debugging and testing I added a "tasks_complete" counter). My worker ended up looking like so:
public class Worker{

	public Worker(){
		RUNNING = true;
	}

	//instance and volatile so each worker can be shut down independently from another thread
	//(a static flag would stop every worker at once)
	private volatile bool RUNNING = true;

	public int max_count = 10;

	public int tasks_complete = 0;

	public int ID = 0;

	public long sleep_time = 10000L;

	public void run(){

		Console.WriteLine ("Worker {0} Starting...",ID);

		while (RUNNING) {
			if (ResourcePool.coroutine_queue.Count > 0) {
				for (int i = 0; i < (ResourcePool.coroutine_queue.Count > max_count ? max_count : ResourcePool.coroutine_queue.Count); i++) {

					Coroutine coroutine;

					if (ResourcePool.coroutine_queue.TryDequeue (out coroutine)) {
						coroutine.next ();

						if (coroutine.can_move_next)
							ResourcePool.enqueue_coroutine (coroutine);
						else
							this.tasks_complete++;
					} else
						continue;
				}
			}

			Thread.Sleep (new TimeSpan(sleep_time));
		}
		Console.WriteLine ("Worker {0} Stopping... {1} Coroutines Completed...",ID, this.tasks_complete);
	}

	public void shutdown(){
		RUNNING = false;
	}
}
While this isn't perfect, it's good to show as a way to implement a simple worker thread. One method for improvement would be to create a number of queues equal to the number of workers and then distribute the workload among those, which could allow for more efficient processing (a sketch of that idea follows below). I just wanted to keep it simple and reduce the number of problem areas.
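
As a rough illustration of that multi-queue idea (purely hypothetical, it is not part of AsyncCS or cHTTP), the pool could hold one queue per worker and round-robin new coroutines across them, with each worker pulling only from queues[worker.ID]:
using System.Collections.Concurrent;
using System.Threading;

public static class ShardedResourcePool{

	//one queue per worker (a hypothetical alternative to the single shared queue)
	public static ConcurrentQueue<Coroutine>[] queues;
	private static int next_queue = 0;

	public static void initialize(int worker_count){
		queues = new ConcurrentQueue<Coroutine>[worker_count];
		for (int i = 0; i < worker_count; i++)
			queues [i] = new ConcurrentQueue<Coroutine> ();
	}

	public static void issue_coroutine(Coroutine coroutine, object input = null){
		coroutine.initialize (input);
		//round-robin the coroutine onto one of the per-worker queues
		int index = (Interlocked.Increment (ref next_queue) & int.MaxValue) % queues.Length;
		queues [index].Enqueue (coroutine);
	}
}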

Ok, so how do I use them?
Using them is fairly easy. You basically create a worker, or an array of workers, then start them running on separate threads. You then issue coroutines to be "worked" via the ResourcePool. That code would look like so (also found in my test cases):
public class AdderTask : Coroutine{
	private int val;

	public override IEnumerable<object> process ()
	{
		for (int i = 0; i < 100000; i++) {
			if (i % 2 == 0)
				val += 2;
			else
				val += 3;
		}

		yield return YieldComplete(val);
	}
}

public void do_stuff(){

        //create our worker and assign a thread to run it
	Worker worker = new Worker ();
	Thread thread = new Thread (worker.run);

        //add our coroutines to the queue
	for (int i = 0; i < 100; i++) {
		ResourcePool.issue_coroutine (new AdderTask ());
	}

        //start the thread, wait a second, then kill it
	thread.Start ();
	Thread.Sleep (1000);
	worker.shutdown ();
	thread.Join ();
}


Fairly simple, eh? An improvement here would be to set some sort of boolean stating that the coroutine is "done" and have the Worker set it when it "dumps" the coroutine (a rough sketch follows below).
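
A quick sketch of that "done" flag idea (the finished field and these changes are hypothetical, they are not in the repo), so the issuer can wait on the coroutine it handed off instead of sleeping a fixed amount:
//hypothetical addition to Coroutine:
//    public bool finished = false;

//in Worker.run(), where the coroutine is dropped:
if (coroutine.can_move_next)
	ResourcePool.enqueue_coroutine (coroutine);
else {
	coroutine.finished = true; //tell whoever issued it that we are done with it
	this.tasks_complete++;
}

//in do_stuff(), instead of the fixed Thread.Sleep(1000) before shutdown:
AdderTask task = new AdderTask ();
ResourcePool.issue_coroutine (task);
while (!task.finished)
	Thread.Sleep (1); //poll until the worker has dumped the coroutine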

Easy stuff, now what?
Now we'll work on creating a simple HTTP server. Since we're using coroutines, and .NET by default doesn't use them in its methods, we're going to need to work at a lower level. This means we will not get to use the nice friendly HTTP Server/Listener/Request/Response constructs. We get to use a grittier construct, a more evil construct, the infamous NETWORK SOCKET! MWAAA HAHAHHAHAHAH! err... umm... yea... Seriously though, the socket construct gives us everything we need and lets us blow the doors off the other crap, err... I mean less flexible constructs.

Ok... so how does a HTTP server operate?
It 1) listens for requests on a given port; 2) when a request is received, it determines what the request is for; 3) attempts to fulfill that request; and 4) responds as appropriate once that attempt has been made. It's a fairly simple 4-step process. Since we're going to handle static content we need a few things: a main listen-loop, a way to read data from a Socket, a way to interpret a request, a way to handle a request, a way to read files from the file system, a way to handle a response, and a way to send data to a Socket.

From the top, where do we start?
We start by setting up our workers and listening on a port. If/when we get a request, we toss it to something that can handle it and go back to listening. That construct looks as follows:
class MainClass
{
	private static Socket listener;
	private static List<Worker> workers = new List<Worker>();
	static bool run = true;

	public static void Main (string[] args)
	{
		int numThreads = int.Parse(args[0]);//Environment.ProcessorCount;

		Thread[] threads = new Thread [numThreads];

		for (int i = 0; i < numThreads; i++) {
			Worker worker = new Worker();
			worker.ID = i;
			worker.max_count = 10;
			worker.sleep_time = 10000L;//sleep for 1ms
			workers.Add (worker);
			threads [i] = new Thread (worker.run);
			threads [i].Start ();
		}

		listener = new Socket (AddressFamily.InterNetwork,
		                       SocketType.Stream,
		                       ProtocolType.Tcp);

		//just use standard localhost and http testing port
		IPAddress address = IPAddress.Parse("127.0.0.1");
		IPEndPoint endpoint = new IPEndPoint (address, 8080);
		listener.Bind (endpoint);			

		Console.Out.WriteLine ("listening...");

		//run forever
		while (run) {
			try{
				listener.Listen (1000);
				Socket sock = listener.Accept ();

				ResourcePool.issue_coroutine(new RequestHandler(), sock);

			}catch(Exception){
			}
		}

		listener.Close ();
	}
}
Pretty basic, eh? What is nice here is that Accept() gives us a reference to a socket for that specific connection request.

How do we handle those requests?
We use a special Coroutine to handle them, called RequestHandler. With it we want to read the data from the socket, then use that data to create a basic Request object. We'll then interrogate that object to determine what we need to do, grab the data needed, build a response around that data, and send it over the socket to the computer that requested it. All that looks like this:
public class RequestHandler : Coroutine
{
	private Socket _socket = null;

	public override void initialize (object in_value)
	{
		_socket = (Socket)in_value;

		base.initialize (in_value);
	}

	public override IEnumerable<object> process ()
	{
		//read entire request
		SocketReader sock_reader = new SocketReader ();
		yield return YieldFrom(sock_reader,this._socket);

		string data = sock_reader.data;

		Request req = new Request (data);
		if (!req.parse ()) {
			this._socket.Close ();
			yield return YieldComplete ();
		}

		//double check that this was infact a GET request
		if (req.method != "GET") {
			//not a GET request, terminate request
			this._socket.Close ();
			yield return YieldComplete ();
		}

		//if its a valid resource, retrieve it
		if (req.uri != "") {
			//retrieve resource and then send
			Reader reader = new Reader();
			yield return YieldFrom(reader, req.uri);

			if (reader.data == null) {
				this._socket.Close ();
				yield return YieldComplete ();
			}

			Response response = new Response (reader.data);
			response.prepare_data ();

			if (response.data == null) {
				this._socket.Close ();
				yield return YieldComplete ();
			}

			object[] pkg = new object[2];
			pkg [0] = this._socket;
			pkg [1] = response.data;
			yield return YieldFrom (new SocketSender (), pkg);

		} else {
			//was not a valid resource...
			Console.Out.WriteLine ("Bad Request...");
		}

		//clean up and complete
		this._socket.Close();
		yield return YieldComplete();
	}

}
Nothing fancy, just the basics.
Ok, how do those smaller pieces work? Start with the SocketReader...
This is where coroutines start to show their strength. Their ability to tackle pieces of a larger task becomes valuable during I/O operations, especially socket reads. I/O takes time... a... loooooonnnnngggg.... time.... and you could be doing something useful in that time. Coroutines allow you to do smaller I/O operations, then yield to do other work, then come back where you left off and continue onward. It's these aspects that allow you to achieve higher concurrency than other models when properly implemented. Anyway, enough with the proselytizing, time to show some code:
public class SocketReader : Coroutine
{
	private static int BUFFER_SIZE = 1024;
	private Socket _socket = null;
 	public string data = "";

        public SocketReader ()
	{
	}

	#region implemented abstract members of Coroutine

	public override void initialize (object in_value)
	{
		_socket = (Socket)in_value;
		base.initialize (in_value);
	}

	public override IEnumerable<object> process ()
	{
		byte[] buffer = new byte[BUFFER_SIZE];
		int dataSize = 0;

		//read entire request
		while((dataSize = _socket.Receive (buffer)) == BUFFER_SIZE){

			data += Encoding.UTF8.GetString (buffer, 0, dataSize);

			yield return data;
		}

		//convert to a string
		data += Encoding.UTF8.GetString (buffer, 0, dataSize);

		yield return YieldComplete (data);
	}

	#endregion
}
While nice 'n neat, this code does have an edge case to beware of. If the incoming data is a multiple of BUFFER_SIZE, you could end up calling _socket.Receive() an additional time. That is a blocking operation and would cause the coroutine to hang until the socket times out... fun times... Why did I choose 1024 as my buffer size? No reason especially, I like 1024, it's nice and power-of-two-y, though keeping it small-ish is to your benefit. Remember, more time reading I/O is time you could be doing OTHER stuff!
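
One way to soften that edge case (just a sketch of an alternative process() body, not what cHTTP actually does) is to stop reading once the header-terminating blank line shows up, and to peek at Socket.Available before risking another blocking Receive():
	public override IEnumerable<object> process ()
	{
		byte[] buffer = new byte[BUFFER_SIZE];
		int dataSize = 0;

		//keep reading until we've seen the blank line that ends a (body-less) GET request
		while (!data.Contains ("\r\n\r\n")) {
			dataSize = _socket.Receive (buffer);
			data += Encoding.UTF8.GetString (buffer, 0, dataSize);

			//if the last read didn't fill the buffer and nothing else is waiting, stop;
			//this avoids the extra blocking Receive when the request is an exact multiple of BUFFER_SIZE
			if (dataSize < BUFFER_SIZE && _socket.Available == 0)
				break;

			yield return data;
		}

		yield return YieldComplete (data);
	}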

So what does Request do?
Nothing but parse the data. It's nothing special; it just attempts to loosely follow the HTTP 1.0 spec and parse the basic request info (i.e., request method [GET], resource [/get/this/resource/located/here/dag/nabit], and HTTP version [HTTP/1.0]). You can look at it at the cHTTP link at the end of the article; a rough sketch of the idea follows below.
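
The real class lives in the cHTTP repo; purely as an illustration of what it's doing, a minimal request-line parser could look something like this (a sketch using the same member names RequestHandler relies on above, not the actual cHTTP code):
using System;

public class Request{

	public string method = "";
	public string uri = "";
	public string version = "";

	private string _raw;

	public Request(string raw_data){
		_raw = raw_data;
	}

	//pull apart the request line, e.g. "GET /index.html HTTP/1.0"
	public bool parse(){
		if (string.IsNullOrEmpty (_raw))
			return false;

		string[] lines = _raw.Split (new string[]{ "\r\n" }, StringSplitOptions.None);
		string[] parts = lines [0].Split (' ');

		if (parts.Length != 3)
			return false;

		method = parts [0];
		uri = parts [1];
		version = parts [2];
		return true;
	}
}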

Ok then, What about Reader?
Reader works a lot like SocketReader, except it grabs stuff from the file system. It looks like so:
public class Reader : Coroutine
{
	private string _file_name = "";
	public byte[] data;
	private int BUFFER_SIZE = 1024;

	#region implemented abstract members of Coroutine
	public override void initialize (object in_value)
	{
		_file_name = ((string)in_value).Remove(0,1);

		base.initialize (in_value);
	}

	public override IEnumerable<object> process ()
	{
		if(!File.Exists(_file_name))
		   yield return YieldComplete();

		//open file so that it can be read by multiple threads
		FileStream fs = new FileStream (_file_name, FileMode.Open, FileAccess.Read, FileShare.Read);
		BinaryReader br = new BinaryReader (fs);

		//get length and how many iterations will be required for BUFFER_SIZE
		long length = fs.Length;
		data = new byte[length];

		int index = 0;
		int block_size = data.Length > BUFFER_SIZE ? BUFFER_SIZE : data.Length;

		//grab that file!
		while (br.Read (data, index, block_size) > 0) {
			index += block_size;
			block_size = data.Length - index > BUFFER_SIZE ? BUFFER_SIZE : data.Length - index;
			yield return data;
		}

		br.Close ();
		fs.Close ();

		yield return YieldComplete (data);
	}
	#endregion
}
Notice that we share read access to the file we are reading. This is due to file system read-locks: we tell the file system that we don't want an exclusive lock on the file, which is important, especially for static content given we are not using a cache. It allows other threads to read the file while another thread is reading it, yay! The rest just reads the file piecemeal and yields progressive data chunks.

What about Response?
It's kinda like Request. In its case, it just generates a very basic HTTP 1.0 compliant response (a sketch of the idea follows below). Some additional refactoring could be done to allow it to be created piecemeal in case the data payload is large, allowing more stuff to get done while it's being constructed.
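
Again, the real class is in the cHTTP repo; as a sketch of the idea (the actual headers cHTTP emits will differ), prepare_data() just glues a minimal HTTP/1.0 status line and headers onto the payload:
using System;
using System.Text;

public class Response{

	private byte[] _payload;
	public byte[] data = null;

	public Response(byte[] payload){
		_payload = payload;
	}

	//build "status line + headers + blank line + payload" as one byte array
	public void prepare_data(){
		if (_payload == null)
			return;

		string header = "HTTP/1.0 200 OK\r\n" +
			"Content-Type: text/html\r\n" +
			"Content-Length: " + _payload.Length + "\r\n" +
			"\r\n";

		byte[] header_bytes = Encoding.UTF8.GetBytes (header);
		data = new byte[header_bytes.Length + _payload.Length];
		Buffer.BlockCopy (header_bytes, 0, data, 0, header_bytes.Length);
		Buffer.BlockCopy (_payload, 0, data, header_bytes.Length, _payload.Length);
	}
}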

Ok, on to SocketSender!
SocketSender, like the other I/O classes, sends the data payload piecemeal over the given socket through successive yields. Again, this allows work to happen in-between yields. It looks like so:
public class SocketSender : Coroutine
{
	private int BUFFER_SIZE = 1024;
	private Socket _socket = null;
	private byte[] _data;

	public SocketSender ()
	{
	}

	public override void initialize (object in_value)
	{
		object[] pkg = (object[])in_value;
		this._socket = (Socket)pkg [0];
		this._data = (byte[])pkg [1];
		base.initialize (in_value);
	}

	#region implemented abstract members of Coroutine

	public override IEnumerable<object> process ()
	{
		int index = 0;
		int block_size = _data.Length > BUFFER_SIZE ? BUFFER_SIZE : _data.Length;

		while (_socket.Send(_data, index, block_size, SocketFlags.None) > 0) {
			index += block_size;
			block_size = _data.Length - index > BUFFER_SIZE ? BUFFER_SIZE : _data.Length - index;
			yield return Yield ();
		}

		yield return YieldComplete ();
	}

	#endregion
}
Very similar to our other classes.

Anything Else?
Nope, that is all there is to it. To test it, I compiled it on Kubuntu 14.04 using Mono and ran it with 32 worker threads. I then had ApacheBench toss 100,000 requests its way using the command "ab -n 100000 -c 32 http://127.0.0.1:8080/index.html". The results were as follows:
This is ApacheBench, Version 2.3 <$Revision: 1528965 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient)
Completed 10000 requests
Completed 20000 requests
Completed 30000 requests
Completed 40000 requests
Completed 50000 requests
Completed 60000 requests
Completed 70000 requests
Completed 80000 requests
Completed 90000 requests
Completed 100000 requests
Finished 100000 requests


Server Software:        
Server Hostname:        127.0.0.1
Server Port:            8080

Document Path:          /index.html
Document Length:        708 bytes

Concurrency Level:      32
Time taken for tests:   5.090 seconds
Complete requests:      100000
Failed requests:        0
Total transferred:      75200000 bytes
HTML transferred:       70800000 bytes
Requests per second:    19646.46 [#/sec] (mean)
Time per request:       1.629 [ms] (mean)
Time per request:       0.051 [ms] (mean, across all concurrent requests)
Transfer rate:          14427.87 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.2      0       2
Processing:     0    1   1.0      1      25
Waiting:        0    1   0.9      1      25
Total:          0    2   1.0      1      26

Percentage of the requests served within a certain time (ms)
  50%      1
  66%      2
  75%      2
  80%      2
  90%      2
  95%      3
  98%      5
  99%      6
 100%     26 (longest request)
Not too bad, 95% of the requests were served within 3 ms.

So what is next?
Well, first you can check out the full project over at GitHub; it's called cHTTP. Next I'll go about talking abstractly on how coroutines could be used to create the basis of a concurrent game engine. No promises and nothing fancy, it'll be more of a theory article. I'll try to whip up an SDL2-CS example, but we'll see. SDL2-CS is tricky to use as it is a wrapper around SDL2, but it would be a good place to start and would be familiar ground.

Anyway, that's all for now.


Coroutines: Building a framework in C#

Posted 12 February 2014
Coroutine, C#
Where to start?
So we want to build a Coroutine, where do we start? Firstly, and simply, choose a language which supports some sort of re-entrant behavior natively. C/C++ unfortunately don't really support native re-entrant behavior, at least not in any clean way. Though it's somewhat possible with gotos or an odd Duff's device, the code can get ugly and cumbersome quickly. Luckily, C# does support this natively through iterators.

Understanding C# Iterators
If you've used C# to any degree, it is highly likely you've used an iterator before, likely without knowing it. One very common usage is via the "foreach" keyword, usually over a collection of some sort like a list. Such as:
List<int> foo = new List<int>();
foo.Add(1);
foo.Add(2);
foo.Add(3);

foreach(int i in foo){
    Console.WriteLine(i);
}
But how does that use iterators?
Most collections use iterators by implementing the IEnumerable or IEnumerator interfaces. These interfaces provide the backbone into the C# guts that allows re-entrant programming constructs to be created. IEnumerable is what powers the re-entrant behavior, with IEnumerator providing the majority of the access mechanisms. Combined with some clever scaffolding, they allow for the creation of Coroutines. Before we delve into Coroutines, we'll cover the basics of IEnumerable and IEnumerator.

How does IEnumerable Work?
IEnumerable comes in two default forms: the non-generic IEnumerable and the generic IEnumerable<out T>. We're only interested in the latter, IEnumerable<out T>. This interface (the 'I' in "IEnumerable") tells the CLR (the .NET runtime) that the following code is enumerable (i.e., iterable), meaning that it will produce values upon each successive call. You as the programmer get to specify those values using the special enumeration keywords "yield return" and "yield break". It works kinda like so:
public IEnumerable<int> enumerable_function(int a){
    int b = 0;
    while(true){
        if(b > 100)
            break;
        b += 1;
        yield return a + b;
    }
    yield return b / a;
}
With this demonstration, we yield "a + b" until "b > 100", at which point we then yield "b / a". Notice I don't use a "yield break" in this example. "yield break" is a way to "terminate" an IEnumerable early; it is more of a keyword to the CLR that this "enumeration" is "done". This is useful when you want the logic of your IEnumerable method to terminate right then and there and not yield any new information. Also, "yield break" does not "yield" a value, it just terminates.
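
To make "yield break" concrete, here's a tiny example; the enumeration simply ends at the break, and no value is produced for it:
public IEnumerable<int> enumerable_with_break(int limit){
    yield return 1;
    yield return 2;
    if(limit < 3)
        yield break;  //terminate the enumeration right here; nothing is yielded for this statement
    yield return 3;   //only reached when limit >= 3
}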

How does IEnumerator work?
IEnumerator represents the "calling" mechanism of the CLR for enumerable constructs. This is the way the CLR utilizes an IEnumerable method or class. What is most important here is that an IEnumerator is created from an IEnumerable, meaning an IEnumerator requires the existence of an IEnumerable construct. That said, the IEnumerator exposes two key access mechanisms: the "MoveNext()" method and the "Current" property. The "MoveNext()" method is your main "re-entrant" executor, meaning it calls the method defined by IEnumerable to re-enter and execute until its next "yield return". The "Current" property makes available the last yielded value from the IEnumerable method. In this case, demonstrated as so:
public void do_stuff(){
    IEnumerator<int> foo = bar().GetEnumerator();
    foo.MoveNext();
    Console.WriteLine(foo.Current);//print 1 to console
    foo.MoveNext();
    Console.WriteLine(foo.Current);//print 5 to console
    foo.MoveNext();
    Console.WriteLine(foo.Current);//print 10 to console
}

public IEnumerable<int> bar(){
    yield return 1;
    yield return 5;
    yield return 10;
}
So how do we make a Coroutine with these?
Well, like many things, you build it with abstraction. In this case, a specially constructed abstract class. So, how do we structure this beast? First, let's set up a basic structure: we need a Coroutine class and a standard method we'll call for our "Coroutine code". Additionally, in order to keep things simple and non-confusing, we'll not implement any generics and instead just use return objects. The first setup will look like so:
public abstract class Coroutine{

    public abstract IEnumerable<object> process();
}
So we have our basic "atomic" Coroutine structure. This class is actually somewhat usable right now. If you built a class off of this, you could use it abstractly like so:
int main(){
    Coroutine foo = new Coroutine(); //pretend this is a concrete subclass; Coroutine itself is abstract
    IEnumerator<object> bar = foo.process().GetEnumerator();
    bar.MoveNext();
    Console.WriteLine(bar.Current);     
}
Even if we used it that way, it would get cumbersome, so let's "enhance" this Coroutine with some more usable access mechanisms. In this case, let's add a "next" method that abstracts away both the "MoveNext" method and the "Current" property, such that when called, it re-enters the coroutine and then returns the next value. It would look like so:
public abstract class Coroutine{

    private IEnumerator<object> _enumerator;

    public Coroutine(){
        this._enumerator = this.process().GetEnumerator();
    }    

    public object next(){
        this._enumerator.MoveNext();
        return this._enumerator.Current;
    }

    public abstract IEnumerable<object> process();
}
Yay! now we have something a bit more easy to utilize. Utilizing the previous example, we would now write it as so:
int main(){
    Coroutine foo = new Coroutine(); //again, pretend this is a concrete subclass
    Console.WriteLine(foo.next());     
}
Nice huh?

But wait, we're not done yet!
So, we've got a good base framework for a Coroutine. It works; you can write a stand-alone coroutine without problems. But what if you wanted to yield execution to another Coroutine during execution? This current framework wouldn't really help, would it? As this Coroutine sits, it is fairly simple. We'll need to enhance it a bit if we're going to add that juicy functionality.

So how do we do that?
Well, first we need a way to tell the core abstract Coroutine class that it should execute another Coroutine. So we'll need to set a flag of some sort as a mechanism to tell the coroutine to begin yielding to another. We'll also need to keep some sort of reference to the other coroutine, a "sub coroutine" as it were. What would this look like? It would look like so:
public abstract class Coroutine{

    private IEnumerator<object> _enumerator;
    private bool _do_sub = false;
    private Coroutine _sub_coroutine;

    public Coroutine(){
        this._enumerator = this.process().GetEnumerator();
    }    

    public object YieldFrom(Coroutine coroutine){
        this._do_sub = true;
        this._sub_coroutine = coroutine;
        return this._sub_coroutine.next();
    }

    public object next(){
        if(_do_sub){
            return this._sub_coroutine.next();
        }else{
            this._enumerator.MoveNext();
            return this._enumerator.Current;
        }
    }

    public abstract IEnumerable<object> process();
}
So how would I use that?
Let's use it to define two coroutines and have one yield to another. We'll keep it as simple as possible.
public class Foo: Coroutine{
    public override IEnumerable<object> process(){
        yield return 3;
        yield return 4;
        yield return 5;
    }
}

public class Bar: Coroutine{
    public override IEnumerable<object> process(){
        yield return 1;
        yield return 2;
        yield return YieldFrom(new Foo());
        yield return 6;
    }
}

int main(){
    Bar bar = new Bar();

    Console.WriteLine(bar.next());//prints 1
    Console.WriteLine(bar.next());//prints 2
    Console.WriteLine(bar.next());//prints 3
    Console.WriteLine(bar.next());//prints 4
    Console.WriteLine(bar.next());//prints 5
    Console.WriteLine(bar.next());//prints 5 *huh?
    Console.WriteLine(bar.next());//prints 5 *wait?
    Console.WriteLine(bar.next());//prints 5 *uh oh
    Console.WriteLine(bar.next());//prints 5 *oh noooo!!!
}
Did you catch the problem?

Yea... how are we going to fix that?
How will our 'parent' Coroutine know when its 'child' is done? Well, this is where things get a little complicated, and where introducing another Coroutine construct becomes helpful. In this case, we'll introduce a method called "YieldComplete". It will set a flag that states that "this coroutine" is "done". So instead of using something like "yield break", you'll call "yield return YieldComplete(object)". Now our coroutine starts looking like this:
public abstract class Coroutine{

    private IEnumerator<object> _enumerator;
    private bool _do_sub = false;
    private Coroutine _sub_coroutine;
    public bool is_complete = false;

    public Coroutine(){
        this._enumerator = this.process().GetEnumerator();
    }    

    public object YieldFrom(Coroutine coroutine){
        this._do_sub = true;
        this._sub_coroutine = coroutine;
        return this._sub_coroutine.next();
    }

    public object YieldComplete(object return_value=null){
        this.is_complete = true;
        return return_value;
    }

    public object next(){
        if(_do_sub){
            if(!this._sub_coroutine.is_complete)
                return this._sub_coroutine.next();
            else{
                this._do_sub = false;
                this._enumerator.MoveNext();
                return this._enumerator.Current;
            }
        }else{
            this._enumerator.MoveNext();
            return this._enumerator.Current;
        }
    }

    public abstract IEnumerable<object> process();
}
It gets a little uglier, but now our previous example will work just fine!
public class Foo: Coroutine{
    public override IEnumerable<object> process(){
        yield return 3;
        yield return 4;
        yield return YieldComplete(5);
    }
}

public class Bar: Coroutine{
    public override IEnumerable<object> process(){
        yield return 1;
        yield return 2;
        yield return YieldFrom(new Foo());
        yield return 6;
    }
}

int main(){
    Bar bar = new Bar();

    Console.WriteLine(bar.next());//prints 1
    Console.WriteLine(bar.next());//prints 2
    Console.WriteLine(bar.next());//prints 3
    Console.WriteLine(bar.next());//prints 4
    Console.WriteLine(bar.next());//prints 5
    Console.WriteLine(bar.next());//prints 6 *yay!
}
So is that all to it?
Not quite. There is still a weird edge case you need to catch in regards to whether MoveNext() should be called, to prevent an odd double-value return scenario when switching back from a child to a parent. That can be fixed by tracking the boolean output of MoveNext() and checking it when you're also checking "is_complete". Additionally, it's also nice to be able to actually pass INPUT to a coroutine ;) That also requires a few more widgets to be added. Nothing terrible, but you now need to store and maintain the last "input" value passed. Combining those two together gets you the following:
public abstract class Coroutine{

    private IEnumerator<object> _enumerator;
    private bool _do_sub = false;
    private Coroutine _sub_coroutine;
    public bool is_complete = false;
    public bool can_move_next = true;
    private object _sub_input = null;
    private object _input = null;

    public Coroutine(){
        this._enumerator = this.process().GetEnumerator();
    }

    public object YieldFrom(Coroutine coroutine, object sub_input=null){
        this._do_sub = true;
        this._sub_coroutine = coroutine;
        this._sub_input = sub_input;
        return this._sub_coroutine.next();
    }

    public object YieldComplete(object return_value=null){
        this.is_complete = true;
        return return_value;
    }

    public object next(object in_value=null){
        if (this._do_sub) {
            if (this._sub_coroutine.can_move_next && !this._sub_coroutine.is_complete)
                return this._sub_coroutine.next (this._sub_input);
            else {
                this._do_sub = false;
                this._input = in_value;
                this.can_move_next = this._enumerator.MoveNext ();
                return this._enumerator.Current;
            }
        } else {
            this._input = in_value;
            this.can_move_next = this._enumerator.MoveNext ();
            return this._enumerator.Current;
        }
    }

    public abstract IEnumerable<object> process();
}
Now you have a very usable Coroutine construct! You can easily call it via next(), pass input, and yield to other coroutines! yay!

Is there anything else we should do?
Well, we do use a lot of IEnumerable and IEnumerator machinery... so what if we wanted to turn it up to 11? We could easily make this work in the loop constructs by having Coroutine itself implement IEnumerator<object> and then adding the core MoveNext(), Current, etc. If you did that, you could do things like this:
public class Foo: Coroutine{
    public override IEnumerable<object> process(){
        yield return 3;
        yield return 4;
        yield return YieldComplete(5);
    }
}

public class Bar: Coroutine{
    public override IEnumerable<object> process(){
        yield return 1;
        yield return 2;
        yield return YieldFrom(new Foo());
        yield return 6;
    }
}

int main(){
    Bar bar = new Bar();

    foreach(int i in bar){
        Console.WriteLine(i);//prints 1, 2, 3, 4, 5, 6
    }
}
Slick huh? I'll leave you to explore things over at AsyncCS if you're curious how to lay that foundation (it's really simple btw); a rough sketch of the idea follows below.
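
For the curious, here is a minimal sketch of what that foundation could look like (AsyncCS may do it differently): the fields, next(), YieldFrom(), YieldComplete(), and process() stay exactly as above, and we bolt on the IEnumerator<object> members plus a GetEnumerator() so foreach can drive the coroutine:
public abstract class Coroutine : IEnumerator<object>{

    // ... fields, constructor, YieldFrom(), YieldComplete(), next() and process() as shown above ...

    public object Current { get; private set; }

    public bool MoveNext(){
        if(this.is_complete)
            return false;            //the coroutine already signaled completion
        object value = this.next();  //re-enter; this also updates can_move_next
        if(!this.can_move_next)
            return false;            //the underlying iterator is exhausted
        Current = value;
        return true;
    }

    public void Reset(){
        throw new NotSupportedException();
    }

    public void Dispose(){
    }

    //foreach only needs a GetEnumerator(); returning the coroutine itself is enough here
    public Coroutine GetEnumerator(){
        return this;
    }
}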

So Now What?
Well, that is how easy it is to build a basic Coroutine framework. In my next journal entry I'll go over how to take this framework and expand it into a thread-worker model (very simple) and then use that to implement a very simple HTTP server. If you're curious, you can check out the AsyncCS link above if you want to play around with it. Take note that it is still experimental and does rely on my custom testing framework CSTester (which works very similarly to NUnit, but is much simpler and lighter weight), but it should be easy enough to exclude that subproject if you don't want it.

Anyway, that's all for now!


Coroutines: Weird Little Wonders

Posted 27 January 2014
Coroutines, C#
What is a Coroutine?
In order to understand them, you need to understand a bit of history. The term coroutine was coined back in the '60s by Melvin Conway, who is famous for Conway's Law, but we're not talking about that. Something you should have great familiarity with are routines and subroutines. Coroutines are a special class of subroutine, specifically one allowing multiple entry points.

So, what does multiple entry points mean?
Take a subroutine, for example: when you call it, it executes its contents, then returns. If you call it exactly the same way again, it will behave the same way again.

So if you called:
int subroutine(int a){
   return a + 5;
}
you would expect it always to increment 'a' by 5. Coroutines, however, are a different beast altogether, especially since they have multiple entry points. The easiest way to demonstrate the difference between a coroutine and a subroutine is by looking at the below:
int coroutine(int a){
   return a + 5;
   return a + 4;
   return a + 3;
   return a + 2;
   return a + 1;
}
In this case, if you called the coroutine above, you would expect the following output:
print coroutine(5); //10
print coroutine(5); //9
print coroutine(5); //8
print coroutine(5); //7
print coroutine(5); //6
print coroutine(5); //???could be 6 or error or it starts over or other depending on implementation
That just looks like a state machine!
Well, it's not. What is not shown is that coroutines maintain their state in-between calls; they don't start over each time. So "local" variables assigned earlier in the coroutine are still available in the coroutine's context, like so:
int coroutine(int a){
   int b = a * 2;
   return a + b;
   b = a * 1.5;
   b = return a + b;
   return b - a;
}
Here, you may have noticed I did something weird. On that 2nd-to-last line I wrote "b = return a + b". Depending on the implementation, coroutines can allow you to do all sorts of cool things like that. Python 3.3 in particular has a very cool "yield from" ability that allows something like this. But the main point is that 'b' remains local to the coroutine and doesn't just disappear after the coroutine returns.

Pfff, so it's just a weird iterator!
Yes, yes it is, in a weird way. Think of them more as an enhanced iterator, in that coroutines can yield not only their operations, but yield to other coroutines, and those can yield to others, etc. Take a look at this great Python pseudocode example of two generators (a Python generator is a kind of iterator):
def gen_a(a):
    while True:
        b = yield from gen_b(a)
        yield b + 3


def gen_b(a):
    b = 0
    while True:
        yield b
        if b > 100:
            return b
        b += a
   

def main():
    for i in gen_a(5):
        print(i)
In this example, we iterate over the output of gen_a forever. However, while it's iterating, gen_a yields from gen_b until b is incremented over 100, at which point gen_b returns b and breaks out of its loop, allowing gen_a to then yield b + 3, at which point it all begins again. This example shows two things: how to "terminate" a generator early, and how to yield results from another generator. However, that's not really the important thing. The important thing here is that coroutines are very similar to generators and iterators; in fact, most languages build coroutines off these constructs. Additionally, what is important is the concept of yielding behavior to another coroutine/iterator/generator, which demonstrates the 'co' part of 'coroutine'.

Now we'll take this concept and do something different:
def handle_request():
    request = yield from get_request()
    
    #do some processing or whatever

    f = file(request.uri)
    data = yield from f.read()

    yield from response.send(data)

def server():
    while True:
        yield from handle_request()

def main():
    serv_gen = server()
    while True:
        serv_gen.next()

So, in the above example, I create a forever loop calling server's "next" method, which is typically how you call a coroutine without using a "yield" or "yield from" keyword; it causes a coroutine in Python to re-enter and run to the next yield statement. In this case, it attempts to handle a request, which in turn attempts to get a request, yielding a result into request. The interesting aspect here is that this all happens over multiple calls to serv_gen.next()... This simple server can serve some content (not really, it's just pseudocode), but it takes multiple calls to do it. Think on that a second.

Ok, I thought about it... doesn't that make them take longer?
Perhaps it would take longer if I were just running 1 coroutine vs 1 subroutine, as I am jumping in and out of the iterators/coroutines. But what if I wanted to run 2 or 3 or 8 or 100? All at the same time... do you see it now? Do you see how coroutines start having a significant advantage over subroutines? Because as weird as they may be, they have a strong affinity for concurrent programming, especially when they can yield execution to another coroutine as shown above. Combined with worker threads or event loops, coroutines become extremely powerful, allowing you to do a little work, then yield to the next coroutine to do a little work, then again and again. Since coroutines are stateful, they make concurrent programming dead simple, ensure things execute in order in your program, and avoid the dreaded pyramid code of callback-like models.

Ok, so now what?
Well, this is just to get you thinking about coroutines. In further entries I'll be showing how to build a coroutine framework in C#, as well as demonstrate their use in ways such as building a simple HTTP server capable of serving 30,000+ requests/sec (static content). Eventually, we'll get around to showing how these constructs may be used to create the foundation of a concurrent game engine. But until then, grab Python 3.3 and the tulip library and play around, or if you're curious, check out my in-work library over at GitHub: AsyncCS


Open Sourcing Vaerydian

Posted 31 July 2013
Vaerydian, LGPL, Open Source
Well, now I've gone and done it... I made Alpha 2 of Vaerydian open source under the LGPL v3, so it's open and free to poke around in. I'm still going to keep developing on it, as I see it as an interesting experimentation platform for game ideas here and there. But if you were curious about how to do some of the following, give it a look here: Vaerydian GitHub Page

Vaerydian's Engine shows the following features:
  • Multi-Platform (will compile under Linux & Windows)
  • Procedural maps
  • Entity-Component-System based design
  • Behavior Trees
  • Asynchronous Event-driven threading for dynamic AI switching (Behavior Forest concept)
  • A* Pathing w/ binary heaps
  • Simple Skeletal animation
  • Data Driven design w/ JSON
  • Knowledge Based RPG progression mechanics
  • Screen-based game segmentation w/ threaded loading
  • Glimpse UI framework
  • SAT collision detection and resolution
  • Other odds 'n ends
Enjoy!


Reinventing the Agent Component Bus

Posted 22 June 2013
Vaerydian
Recently I've been teaching myself a lot about Node.js out of curiosity and out of interest in learning more about new areas of web development. I've come to really like how it works, its simplicity, and the huge open source community that has popped up around it. Its event-based async design is quite an effective approach to concurrent programming, so I really took a shine to it. Then I looked at my own Agent Component Bus (ACB), then at Node, then at the ACB, then at Node... yea... ACB, you're going to change...

My earlier work with my ACB was no slouch. It easily handled hundreds of separate agents doing several different processes asynchronously from the main game thread, but it had a very rigid design and implementation path. So, no matter what process you were handling, you always had a component-retrieve cycle, a process cycle, and a commit cycle. This is great if your process is doing component manipulation or game state assessments and taking action based on that assessment, but not so much if you're doing just a read, or just a write, or neither. Also, agents were very fat. They had to have extra logic to keep track of state, which components were active in their process pipeline, etc. Basically, the ACB was a pain to program for; effective, yes, but a pain.

So I stripped it down to its guts and assessed what I needed to change it into an evented-like design. I found that most of my Entity Component System extensions could be simplified, along with their associated ACB components, that the Bus itself wasn't needed, and that all work could be handled by the ResourcePool and TaskWorker. Finally, the Agent only needed to be a data/delegate reference class, stripping the need for state information. When I tested it out, it was fast! When multi-threaded, the new concept code processed work packages within roughly 5% of what the original ACB concept would process, but it did this with a 6x cycle disadvantage, so I knew the new internal architecture was much better. Methods, overrides, and interface calls are about 6x faster than delegate calls in C#; however, there is much less overhead in the new architecture, so it makes back that speed in efficiency. Single-threaded, the new architecture is faster; multi-threaded, it's within 5%.

"So, NetGnome, if its not faster, why would i use it?"
It is faster in practice, however. The numbers above were from the core concept code; in the conceptual implementation they basically performed at the same speed, with the original ACB edging out in performance. However, implementation into the game engine is a MUCH different matter. The new code hardly has any overhead compared to the old code and is extremely flexible.

With the old code you had to have BusComponent classes that managed the data you wanted and what you did with it, fat Agents to watch states and notifications, and a Bus AND TaskWorker to issue tasks and work them, with a ResourcePool keeping track of everything, plus the game engine integration systems.

The new code only requires a light Agent, a TaskWorker to work tasks, callbacks, and events, a ResourcePool to handle data and the queuing of tasks, events, callbacks, etc., and much more simplified engine integration systems.

Most interaction is now done in the following forms:
public static void issueTask(Agent agent, TaskHandler task, CallBackHandler callBack, params object[] parameters)

public static void emit(string eventName, Agent agent, params object[] parameters)
In this way, you can issue tasks like so:
ResourcePool.issueTask(agent, doSomething, delegate(TaskObject to){
  //handle callback
}, 1, DateTime.Now, "foo");

public TaskObject doSomething(TaskObject to){
  //do stuff and return a TO for the callback
  return to;
}
As well as handle and issue events like so:
ResourcePool.on("CUSTOM_EVENT_NAME", agent, delegate(EventObject eo){
  //handle event here
});

ResourcePool.emit("CUSTOM_EVENT_NAME", agent, 1, DateTime.Now, "foo");
What makes it particularly useful is that you can use anonymous delegates or named ones, whatever fits your desire; the new ACB doesn't care.

But, as with anything concurrent, there is always the chance that you'll have to issue something synchronous, like working with List<T> structures in specific ways. That is why I also built in a Synchronous Operation and supporting system. It guarantees that the code passed to it will be performed in-sync on the game engine when the SyncOperationSystem is called to process. It looks kinda like this:
public static void issueSyncOperation(string syncEvent, Agent agent, Operation operation, params object[] parameters)
When it is finished, it will emit an event named syncEvent if you want to handle something after it was called (see the sketch below).
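
Usage might look roughly like this. Note that I'm assuming, for illustration only, that Operation is a void delegate taking the params array (check the repo for its real shape); "LIST_SYNCED" is a made-up event name and someList is a stand-in for whatever shared List<T> you need to touch:
//react after the synchronous operation has been processed
ResourcePool.on ("LIST_SYNCED", agent, delegate(EventObject eo) {
	//the synchronous work is already done at this point
});

//assumed Operation shape: void Operation(object[] parameters) -- illustration only
ResourcePool.issueSyncOperation ("LIST_SYNCED", agent, delegate(object[] parameters) {
	//this body runs in-sync with the game engine when the SyncOperationSystem processes,
	//so touching a List<T> here is safe
	someList.Add ((int)parameters [0]);
}, 42);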

"So, NetGnome, what can i use this for?"
Anything you want. Hell, I actually built a crappy HTTP server out of this and it was able to serve very simple static webpages at about 12k requests per second at a concurrency of 4000 with 8 threads (I haven't upped my open file limit higher, so I don't know if it could take on the C10K challenge). As long as you conform to the delegate structures below, you can run anything you want in it.
public delegate void CallBackHandler(TaskObject taskObject);
public delegate TaskObject TaskHandler(TaskObject taskObject);
public delegate void EventCallBackHandler(EventObject eventObject);

public struct TaskObject{
	public TaskHandler Task;
	public CallBackHandler CallBack;
	public Agent Agent;
	public object[] Parameters;
}

public struct EventObject{
	public string Name;
	public Agent Agent;
	public object[] Parameters;
}
Some other interesting notes: I've set up the task workers to run both by count and by timing. So if you want a worker to only process 5 tasks, 30 events, and 2000 callbacks per cycle, you can do that; or if you would rather it only work for 3000 ticks before the thread sleeps for 0 - n ticks, you can do that too.

Overall, the changes have worked great, performance is great, and programming for it is about 100x easier now.

If you want to check it out, head over to the repo on GitHub (note: you'll also need my ECSFramework to make it work).

Anyway, that's all for now!


Lighting, Tile Maps, and Data Driven Development!

Posted 31 May 2013
Vaerydian, Tile Lighting
I've recently implemented my new tile-based lighting system, with tile shadowing too! You can see the nice results in the video below:



It's nothing fancy, just a simple implementation of a shadow casting algorithm :) It gets the job done.

Additionally, I've converted my map generation tech to use map defs now. So now maps defined like so:
{
			"name":"DUNGEON_DEFAULT",
			"map_type":"DUNGEON",
			"tile_maps":[
				{
					"map_to":"WALL",
					"tiles":[
						{"name":"DUNGEON_WALL","prob":100}
					]
				},
				{
					"map_to":"FLOOR",
					"tiles":[
						{"name":"DUNGEON_FLOOR","prob":100}
					]
				},
				{
					"map_to":"DOOR",
					"tiles":[
						{"name":"DUNGEON_DOOR","prob":100}
					]
				},
				{
					"map_to":"CORRIDOR",
					"tiles":[
						{"name":"DUNGEON_CORRIDOR","prob":100}
					]
				},
				{
					"map_to":"EARTH",
					"tiles":[
						{"name":"DUNGEON_EARTH","prob":100}
					]
				},
				{
					"map_to":"BEDROCK",
					"tiles":[
						{"name":"DUNGEON_BEDROCK","prob":100}
					]
				}
			]
	 	}
can be used to generate maps in the game.

It works by setting tile definitions that are referenced in code, like so for an east room construction:
case EAST:
	//space enough to build it?
	for (int dy = (y-ylen/2); dy < (y+(ylen+1)/2); dy++){
		if (dy < 0 || dy > map.YSize) return false;
		for (int dx = x; dx < (x+xlen); dx++){
			if (dx < 0 || dx > map.XSize) return false;
			if (!MapHelper.isOfTileType(map, map.Terrain[dx,dy], "EARTH")) return false;
		}
	}

	//ok to build room
	for (int dy = (y-ylen/2); dy < (y+(ylen+1)/2); dy++){
		for (int dx = x; dx < (x+xlen); dx++){
			if (dx == x) MapHelper.setTerrain(map.Terrain[dx,dy], map.MapDef.Name, "WALL");
			else if (dx == (x+xlen-1)) MapHelper.setTerrain(map.Terrain[dx,dy], map.MapDef.Name, "WALL");
			else if (dy == (y-ylen/2)) MapHelper.setTerrain(map.Terrain[dx,dy], map.MapDef.Name, "WALL");
			else if (dy == (y+(ylen-1)/2)) MapHelper.setTerrain(map.Terrain[dx,dy], map.MapDef.Name, "WALL");
			else{ MapHelper.setTerrain(map.Terrain[dx,dy], map.MapDef.Name, "FLOOR");}
		}
	}

	break;
and these particular helpers reference definition structs that are loaded and constructed at runtime from the definition files referenced above.

Combining all of this together is shown in this video, where I create a quick face PNG texture and add it to my game.



Anyway, that's all for now!


Musings on Data Configs

Posted 04 May 2013
JSON, Data Driven Games
Haven't done too much over this week. Mainly just working on concepts surrounding creature definition files and ideas on how I'll make map configs work.

Here is how the creatures.v file is forming up so far:
{
  "creature_defs":[
  {
    "name":"BAT",
    "character_def":"BAT",
    "behavior_def":"DEFAULT_ENEMY",
    "acb_def":"DEFAULT_ENEMY",
    "skill_level":0,
    "information":{
      "name":"Bat",
      "general_group":"BAT",
      "variation_group":"NONE",
      "unique_group":"NONE"
    },
    "interactions_def":"DEFAULT_ENEMY",
    "equipment":{
      "weapon_def":"BAT_SONIC",
      "armor_def":"BAT_ARMOR"
    },
    "knowledges":{
    },
    "statistics":{
    },
    "health":{
    },
    "skills":{
    },
    "factions":{
    }			
  }
  ]
}


Not much to see yet, but it captures most of the non-derived/generated component definitions.

The maps are a different affair.

Some early configs just for generation parameters looked like so:
{
  "WORLD" :{
    "x" : 0,
    "y" : 0,
    "dx" : 854,
    "dy" : 480,
    "z" : 5.0,
    "xsize" : 854,
    "ysize" : 480,
    "seed" : 42
  },
  "CAVE" :{
    "x" : 100,
    "y" : 100,
    "prob" : 45,
    "cell_op_spec" : true,
    "iter" : 50000,
    "neighbors" : 4,
    "seed" : 42 
  } 
}



but those just tell the generation routines what to use as defaults. So I thought about how I might start clustering like-minded information together, and came up with my maps.v:
{
  "map_types":{
    "CAVE": 0,
    "DUNGEON": 1,
    "TOWN": 2,
    "CITY": 3,
    "TOWER": 4,
    "OUTPOST": 5,
    "FORT": 6,
    "NEXUS": 7,
    "WORLD": 8,
    "WILDERNESS": 9
  },
  "map_defs":[
    {"name":"NONE", "id":0},
    {"name":"CAVE", "id":1}
  ],
  "map_params":{
    "WORLD" :{
      "world_params_x" : 0,
      "world_params_y" : 0,
      "world_params_dx" : 854,
      "world_params_dy" : 480,
      "world_params_z" : 5.0,
      "world_params_xsize" : 854,
      "world_params_ysize" : 480,
      "world_params_seed" : null
    },
    "CAVE" :{
      "cave_params_x" : 100,
      "cave_params_y" : 100,
      "cave_params_prob" : 45,
      "cave_params_cell_op_spec" : true,
      "cave_params_iter" : 50000,
      "cave_params_neighbors" : 4,
      "cave_params_seed" : null
    }
  }
}

Here you can see I took the params file and just shoved it into an object def location. I also think the config defs like "cave_params_cell_op_spec" need to be shortened, but we'll see if I end up calling them enough times for it to matter. The new sections are just a listing of the types of maps I want players to know they can generate, and a definitions section where the maps will be defined. What I'm thinking of doing is having some default, yet tweakable, parameters for each map definition, maybe not for the world, but definitely for the other maps. The idea would be that the map would expect one-to-many terrain definitions to be set in certain list definitions in the JSON; it would then choose from them during run-time to set the tiles and data structures. So you would have definitions like:
"walls":[
  "TERRAIN_GRANITE",
  "TERRAIN_CARVED_STONE",
  "TERRAIN_STONE_WALL_TORCH"
],
"floors":[
  "TERRAIN_STONE_FLOOR",
  "TERRAIN_TILED_STONE",
  "TERRAIN_CRACKED_STONE_FLOOR"
]
and each of those definitions would be defined in the terrain.v config file like so:
{
  "name" : "BASE_LAND",
  "id" : 1,
  "texture" : "terrain\\default",
  "texture_offset" : [
    0,
    0
  ],
  "color" : [
    255,
    255,
    255
  ],
  "passible" : true,
  "effect" : "NONE",
  "type" : "FLOOR"
}

This way the game would know to randomly select "floor" terrain during generation from what is defined in its data structures, using the specs defined in the terrain.v files (a rough sketch of that selection step follows below). To increase variety I may add some additional info like probability of occurrence, based on either pure randomness or maybe the depth of the player. So you would start out in more wilderness-y maps, then move into caves, then into small dungeons, then into underground cities, etc. I'm not sure how I want to approach that, or if it should just be added to my backlog of things to do after alpha 2 is complete.
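
As a rough sketch of that selection step (illustrative names only, not actual Vaerydian code), the generator could just pull a random entry from whichever list applies, with weighted picks layered on later:
using System;
using System.Collections.Generic;

public static class TerrainPicker{

	private static Random rand = new Random ();

	//uniform pick for now; a weighted version could honor per-entry "prob" values
	public static string choose(List<string> terrain_names){
		return terrain_names [rand.Next (terrain_names.Count)];
	}
}

//e.g. picking a floor from the "floors" list above:
//string floor = TerrainPicker.choose (new List<string> {
//	"TERRAIN_STONE_FLOOR", "TERRAIN_TILED_STONE", "TERRAIN_CRACKED_STONE_FLOOR" });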

Anyway, I think that'll give me a fun and flexible map system to work with.

That's all for now!


Not Dead - Just Quiet

Posted 25 April 2013
Vaerydian, JSON
Well, first off, Vaerydian is still alive. I've done quite a bit with it since I last posted. I probably can't remember everything I've done since then, but most of it has been putting systems or structures in place to make development down the road more streamlined, and much of that revolved around establishing paths to create and drive aspects of Vaerydian from data, especially human-readable data.

With data-driven frameworks a main goal, I needed something that was fast and simple to work with, so I looked around and found an open source project called fastJSON. It's probably one of the fastest JSON tools I've found, and it's fairly simple to use. I even built a few utilities around it to make it even more user friendly, so I can do things like:
  • string json = JsonManager.load("actions.json") etc.
  • JsonObject jo = JsonManager.jsonToJsonObject(json);
  • EnumerableType myEnum = jo["first level","second level", ... , "nth level"].toEnum<EnumerableType>();
All sorts of fun things that make it very simple to use, but that is only part of the story.

The real meat comes down to actually implementing useful data structures and using the json to drive them.

Here is a blurb from my "actions.v" and "damage.v" JSON files. The object of these is to allow me and any future players to create new fun things through definitions in these files, and the game will use them.

action.v outtake:
{
  "name":"RANGED_DMG",
  "action_type":"DAMAGE",
  "impact_type":"DIRECT",
  "damage_def":"RANGED_DMG",
  "modify_type":"NONE",
  "modify_duration":"NONE",
  "creation_type":"NONE",
  "destroy_type":"NONE"
}

damage.v outtake:
{
  "name":"RANGED_DMG",
  "damage_type":"PIERCING",
  "damage_basis":"WEAPON",
  "min":0,
  "max":0,
  "skill_name":"RANGED",
  "stat_type":"FOCUS"		
}

They're fairly straightforward, but I can also do other cool things like store all my skeletal-bone information too.

outtake from animation.v
"character_defs":[
  {
    "name": "BAT",
    "skeletons": [
      "BAT_NORMAL"
    ],
    "current_skeleton":"BAT_NORMAL",
    "current_animation":"FLY"
},

...

{
  "name": "BAT_NORMAL",
  "bones":[
    {
    "name": "BAT_HEAD_NORMAL",
    "texture":"characters\\bat_head",
    "origin_x":12,
    "origin_y":12,
    "rotation":0.0,
    "rotation_x":4,
    "rotation_y":4,
    "time":500,
    "animations":[
      {
        "name": "FLY",
	"animation_def":"BAT_HEAD_NORMAL_FLY"
      }
  ]
},

...

{
"animation_defs":[
  {
    "name": "BAT_HEAD_NORMAL_FLY"
    "key_frames":[
      {"percent":0.0, "x": 0.0,"y":0.0,"rotation":0.0},
      {"percent":1.0, "x": 0.0,"y":0.0,"rotation":0.0}
    ]
},
Yes, it gets a bit verbose, but it works. I've done similar things for map definitions and terrain definitions (not yet utilized, as I'm still pondering exactly how I want them to work).

Other things I've created include a Dungeon map generator which builds many, many, many interconnected rooms. It's not populated with anything beyond the rooms yet, but it's playable. I've also incorporated my simple 2D skeletal animation, giving me easier animation capabilities (you can see early experiments here). Things are simple at the moment, but I don't have to worry about my pixel art skills as much now. Additionally, I've made improvements to the enemy AI to prevent them from getting routed by the player into corners; it incorporates not only my Behavior Tree capability, but also my Agent Component Bus (ACB) capability to recognize situations. The ACB turned out quite well and is very, very fast (see the TCycles/frame in the video below).

Anyway, here is how things are shaping up:

Note: A lot of it is horrible programmer art, so it's going to look and sound bad ;)


[edits: fixed some errors, made the json examples a bit more clear]




