
The development journal of my FPSMMORPG... only joking. Big city full of zombies and Havok physics.

## Everything burns eventually

I've made some progress since my last blog post. Here is a recent screenshot:

I've done a few things:

1) I've put vehicles back in. I took them out while I was refactoring my scene, but it's good to be able to speed around and imagine what the finished product will be like.

2) I've added point lights to my render pipeline. These use the traditional deferred render method of drawing spheres for each light with depth testing disabled and back face culling enabled.

3) I've added a crude particle system; I plan to make this component based, with a list of emitters and a list of forces. The current one shown here is just a crude single vector for the force.

4) I've added a scene node which comprises both a light and a particle system, for making things like fire, explosions, and sparks, which have both a light (to approximate the colour of the particle system) and the system itself. The image shown above is created using this system; it makes the light flicker from reddish orange to whitish orange, and it has to be seen to be appreciated. I should get Fraps or something.
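The flicker itself is easy to sketch: interpolate between the two endpoint colours with a fresh random parameter each frame. This is a minimal illustration, not the engine's code; `flickerColour` and the exact RGB endpoints are my own assumptions.

```cpp
// Sketch only: flicker a light between reddish orange and whitish orange.
// The struct, function name and RGB endpoints are assumptions.
struct Colour { float r, g, b; };

Colour flickerColour(float t) // t in [0,1], e.g. a fresh random each frame
{
    const Colour reddish = { 1.0f, 0.35f, 0.05f };
    const Colour whitish = { 1.0f, 0.85f, 0.60f };
    Colour out;
    out.r = reddish.r + t * (whitish.r - reddish.r);
    out.g = reddish.g + t * (whitish.g - reddish.g);
    out.b = reddish.b + t * (whitish.b - reddish.b);
    return out;
}
```

Feeding `t` from a noise function rather than raw `rand()` would give a less jittery flame.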

## Lights from a render list

Made some structural changes today to prepare my newly created lighting system for use with the scene.

I've got my system gathering spotlights from a render list and drawing as many as possible in a single pass; it performs quite well with around 50 lights per pass. This render list is gathered by the scene (I'll detail my scene system in a later post), but at the moment the lighting is only debug data.

My light manager is capable of taking lights from a render list, batching them up into uniforms, and rendering as many as possible in a single pass. This will cut down on the number of fullscreen quads to be rendered; fillrate is my primary bottleneck right now. I've seen methods which render the AABB of a light system as some kind of volume on screen, which conveniently means projecting it and reducing the number of fragments to render by orders of magnitude; this will likely be a good option.
	void LightManager::startList(RenderList3D& source)
	{
		current_position = source.light_begin();
		last_position = source.light_end();
	}

	LightManager::Step LightManager::stepList(int shader_program)
	{
		static SpotLight dummy_light;
		dummy_light.setOn(true);
		static std::vector<float> pos(3 * available_spotlights);
		static std::vector<float> dir(3 * available_spotlights);
		static std::vector<float> colour(3 * available_spotlights);
		static std::vector<float> radius(available_spotlights);
		static std::vector<float> cosAngle(available_spotlights);
		for(int i = 0; i < available_spotlights; i++)
		{
			SpotLight * l;
			if(current_position != last_position)
			{	// apply each light in turn
				l = current_position->light;
				current_position++;
			}
			else
			{	// fill the remainder of the list with dummy lights
				l = &dummy_light;
			}
			pos[3 * i + 0] = l->pos[0];
			pos[3 * i + 1] = l->pos[1];
			pos[3 * i + 2] = l->pos[2];
			dir[3 * i + 0] = l->dir[0];
			dir[3 * i + 1] = l->dir[1];
			dir[3 * i + 2] = l->dir[2];
			colour[3 * i + 0] = l->rgb[0];
			colour[3 * i + 1] = l->rgb[1];
			colour[3 * i + 2] = l->rgb[2];
			cosAngle[i] = cos(l->angle * 0.0174532925f);
			radius[i] = l->on ? l->radius : 1e-1f;
		}
		GLint location = glGetUniformLocation(shader_program, "uLightPos");
		if(location != -1)
			glUniform3fv(location, available_spotlights, &pos[0]);
		location = glGetUniformLocation(shader_program, "uLightDir");
		if(location != -1)
			glUniform3fv(location, available_spotlights, &dir[0]);
		location = glGetUniformLocation(shader_program, "uLightColour");
		if(location != -1)
			glUniform3fv(location, available_spotlights, &colour[0]);
		location = glGetUniformLocation(shader_program, "uLightRadius");
		if(location != -1)
			glUniform1fv(location, available_spotlights, &radius[0]);
		location = glGetUniformLocation(shader_program, "uLightCosAngle");
		if(location != -1)
			glUniform1fv(location, available_spotlights, &cosAngle[0]);
		if(current_position != last_position)
		{	// we are still iterating our spotlights
			return SPOTLIGHTS;
		}
		else
		{	// we have reached the end of the list
			return FINISHED;
		}
	}

My next step is to work on the way lights are gathered into the scene itself; I've created various code paths for lights in the scene, but not integrated this with my 3D editor. After that: add point lights to the system.

## Multiple passes, deferred lighting

I've now set up my multiple-pass system, using g-buffers for lighting. This means I can render the first pass as a single quad, then use the g-buffers rendered at the same time to perform multiple, far cheaper lighting passes.

Here is a screen shot of the result:

I still don't have properly sorted translucent geometry, but at least it's rendered as part of the whole process; I only wanted translucent geometry for things like shop windows and street lamp glass, so hopefully not too many bits of translucent geometry will be visible. I'll fiddle around with the depth g-buffer and the depth of each translucent fragment to see if I can't figure out some way to do additive blending on translucent geometry without depth testing, while still only rendering it when it's in front of the previous pass. Most translucent geometry in reality will be clear (not coloured) glass with a dirt texture, so the colouration won't be an issue.

## g-buffers and ambient pass

I wasn't going to go with full-blown deferred rendering, so only my lighting pass will be deferred. That is, the first pass (ambient light, sunlight / shadow, and light / shadow from one or more lightning flashes) will be rendered the old-fashioned way; but at the same time as doing this pass, I will generate g-buffers for the second pass to make use of. Here is a screenshot of what I have:

The top left image is the first pass; no shadows as yet, only a directional light. You will notice that I have rendered translucent geometry on this pass. In this pass, different materials have a different shader attached, and geometry is sorted in order of shader to avoid too frequent binding of new shaders and uniforms, which can be a bottleneck.

The second image is my albedo map; this is only the textures and material properties. You will notice that the translucent geometry doesn't write to this buffer; this allows me to use various tricks to get the translucent geometry into some g-buffers but not others. For example, I write to a specular buffer if need be, but miss out the albedo. I may need to separate material properties and texture at some point, but for now they are together.

The bottom two are world-space normals and world-space position; by doing things in world space I've been able to simplify the way lights are gathered: I simply need to render all lights which intersect the view frustum.
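That gathering step can be sketched as a bounding-sphere test against the six world-space frustum planes. The `Plane` layout and function name here are assumptions, not the engine's actual types.

```cpp
// Sketch only: treat each light as a bounding sphere and keep it unless it
// is entirely behind some frustum plane. Plane normals point inward.
struct Plane { float nx, ny, nz, d; };

bool sphereInFrustum(const Plane* planes, int plane_count,
                     float cx, float cy, float cz, float radius)
{
    for (int i = 0; i < plane_count; ++i)
    {
        float dist = planes[i].nx * cx + planes[i].ny * cy
                   + planes[i].nz * cz + planes[i].d;
        if (dist < -radius)
            return false; // fully outside this plane: cull the light
    }
    return true; // inside or intersecting: add it to the render list
}
```

The test is conservative (a sphere can pass all six plane tests yet still miss the frustum near a corner), but false positives only cost a wasted light, not a visual error.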

Hopefully by the end of today I'll be able to render many lights using only the g-buffers, and use additive blending to apply them over the first pass.

## Spot lights and stuff

Today I expanded my shaders / architecture to support more than one light per render pass. It's my plan to implement a multipass forward renderer with some use of separate buffers. This is my plan for rendering things:

1) Render the geometry albedo with a single directional light w/ shadow map to represent illuminating objects in the sky. The ambient lighting will be handled in this pass, whereby I can approximate the ambient light in terms of what is coming from the sky (sun, moon / stars, etc). The shadow map here will be very high resolution, and only recalculated occasionally.

2) I render a normal buffer, which will aid in the lighting passes, since each fragment will already have the appropriate normal after this step. I think this should be a nice optimisation, but I might skip it at first.

3) In several passes, I render up to 6 spotlights at once, adding up the results each time. By fiddling with its input params, this shader can also render a single point light, but with 6 shadow maps all to itself. I do this until there are fewer than 6 spotlights (or one point light) left over. Then, I use similar shaders optimised for 4, 2 or 1 light until all lights have been processed.

4) I combine the results of 1 and 3 and then apply post process effects.
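The batching in step 3 amounts to a small scheduling loop. This sketch (the function name is an assumption) shows how the 6-, 4-, 2- and 1-light shader variants would be picked:

```cpp
#include <vector>

// Sketch only: split N spotlights into full 6-light passes, then mop up
// the remainder with the 4-, 2- and 1-light shader variants.
std::vector<int> planLightPasses(int lights)
{
    std::vector<int> passes;
    while (lights >= 6)        // full 6-light passes first
    {
        passes.push_back(6);
        lights -= 6;
    }
    int sizes[3] = {4, 2, 1};  // then the smaller optimised variants
    for (int s = 0; s < 3; ++s)
        while (lights >= sizes[s])
        {
            passes.push_back(sizes[s]);
            lights -= sizes[s];
        }
    return passes;
}
```

For example, 15 spotlights would become passes of 6, 6, 2 and 1.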

Here is a screenshot of just the scene drawn with some spotlights, without shadow maps or ambient colours; this would be the result of one of the passes in step 3. Due to my decision to calculate ambient lighting in step 1, you will notice there is no ambient light here; the buffer only tells me how much each fragment is lit by various lights in the scene. Ambient light will depend on the intensity of the main light source (the sun or moon, etc.) and will be rendered on the first pass.

Tomorrow:
Get a single directional light, coming from an arbitrary point in the sky, to work.
Figure out how to render a depth texture, which would be the first step to getting shadow maps working.
If possible: combine the two, so I have a sun / moon shadow effect.

## Starting again

It's been a while since I last made a journal post (over a year, I think), mainly because development slowed considerably; also, I've been working on physics rather than graphics until now, so there wasn't much to show.

Here, have some per pixel lighting:

I have a new system, which took me approx. 1 day to write, whereas the previous one took two weeks. My engine is designed to load from two kinds of data. It can load from intermediate files, in which case it must calculate dependencies (I've hacked this up and it's slow, but that's not critical for loading intermediate files), and each resource is stored in a table with a string key. Next, I assign each loaded resource an integer ID, and create a vector of references to each resource; this means I can perform an O(1) resource lookup, and I mean proper O(1), not an amortised one, which is considerably faster than any kind of hash table. I then give each resource links, through these integer IDs, to all their dependencies.

This is a slow process, but it has several advantages; once it's done, I can index resources without having to give each object a pointer or reference to its dependencies. I can bind dependencies instantly by directly accessing the pointer.
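The two-stage lookup described above might be sketched like this (class and method names are mine, not the engine's): string keys are resolved once at load time, after which every lookup is a plain vector index.

```cpp
#include <map>
#include <string>
#include <vector>

// Sketch only: resources are first stored by string key, then assigned
// dense integer IDs so later lookups are a plain vector index -- true
// O(1), no hashing.
class ResourceRegistry
{
public:
    int assignId(const std::string& name)
    {
        std::map<std::string, int>::iterator it = ids.find(name);
        if (it != ids.end())
            return it->second;          // already registered
        int id = (int)names.size();     // next dense ID
        ids[name] = id;
        names.push_back(name);
        return id;
    }
    const std::string& lookup(int id) const { return names[id]; } // O(1)
private:
    std::map<std::string, int> ids;   // slow path, used once at load time
    std::vector<std::string> names;   // fast path, indexed by ID
};
```

In the real system the vector would hold resource references rather than names, but the indexing idea is the same.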

When I serialise my resources into blocks of data, I send the integer IDs along with them; this creates a header file for each block of binary data. I can serialise the entire resource manager, or individual tables. From here on, it's the same as my old resource loading system, except without the need for linking resources to each other. Rather than gathering a shopping list, I lazy load every resource, and demand that every resource table provide a suitable fallback resource if a requested resource is not currently available.

When the game loads from packed blocks, it first loads the headers of each resource table: simply the integer ID, the string name (which is still useful), the block they are within, and the start / end point within that block. When a resource is needed, it will load its data and cache it in GPU memory. There is no need to link resources, because they also know the integer IDs of their dependencies.
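A per-resource header along these lines could look like the following sketch; the field names and `ensureLoaded` helper are assumptions, and the disk read is stubbed out.

```cpp
#include <cstddef>
#include <string>

// Sketch only: the header data needed to lazily load one resource.
struct ResourceHeader
{
    int id;            // dense integer ID
    std::string name;  // string name, still useful for tools and debugging
    int block;         // which packed block the payload lives in
    std::size_t begin; // start of the byte range within that block
    std::size_t end;   // one past the end of that byte range
    bool loaded;       // has the payload been pulled in yet?
};

// Demand-load: only touch the block (stubbed out here) on first use.
bool ensureLoaded(ResourceHeader& h)
{
    if (!h.loaded)
    {
        // real code would read bytes [begin, end) from the block,
        // then cache the result in GPU memory
        h.loaded = true;
    }
    return h.loaded;
}
```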

This should make things simpler, and faster when loading from blocks of data.

## Change of plans

Well, after 3 weeks of working on work for uni, and making one of those ridiculously long job applications, I've not done anything related to spawning characters.

Instead, I spent the last week making a GUI lib. This was a big change in plans, but the mood struck me, and I need a GUI of some kind eventually. Currently implemented are buttons and surfaces to put buttons on. I'm in the middle of creating text box widgets, although for them to be of a production standard I'll need to redo my font rendering system, which currently draws nicely kerned and antialiased text, but by brute force, with no caching of glyphs.

Building this screen took only the following code:

gui_manager = new GUI::Manager(0, 0, 500, 400);
GUI::Button * exit = new GUI::Button("butnExit", "Exit", 100.f, 40.f);
exit->setListener(this);

cmd_history = new GUI::TextBox("txtdHistory","",20,8);
cmd_line = new GUI::TextBox("txtCmd","",20,1);
cmd_line->setListener(this);

gui_manager->addWidget(500 - 110,400 - 50,exit);

I'm quite happy with this for a week's worth of work (less than 10 hours of actual work, all told) and am actually surprised at how quick it was to do. Having done it twice before may have helped.

All the widgets are drawn using vector art, generated from within OpenGL. I can customise the colours, but apart from that it's pretty fixed. However, there is no reason I can't make appearances other than a coloured border, thanks to my skin system. The round corners are a mesh, and the lines between them are a single quad. I've had a lengthy discussion about alternatives to this method and I've decided that I'm right and everybody else is wrong... at least for now. I'm hard coding the details of each skin, with some of the properties loaded from a file such as this:

SKIN
DEFAULT
BACKGROUND Colour 0.1 0.1 0.1 0.7
NORMAL Colour 0.0 0.33 0.71 1.0
ACTIVE Colour 0.0 0.63 0.91 1.0
CLICK Colour 0.3 0.4 0.9 1.0
/DEFAULT

BUTTON
BACKGROUND Colour 0.3 0.3 0.3 1.0
MOUSEOVER Colour 1.0 0.0 0.0 1.0
CLICK Colour 1.0 1.0 1.0 1.0
/BUTTON
/SKIN

Once again I've decided against XML and am rolling my own.
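A hand-rolled parser for one section of the skin format above might look like this sketch; the `Rgba` struct and function name are assumptions, and the format details are inferred from the sample file.

```cpp
#include <map>
#include <sstream>
#include <string>

// Sketch only: each line inside a section maps a widget state (BACKGROUND,
// CLICK, ...) to an RGBA colour; a line starting with '/' ends the section.
struct Rgba { float r, g, b, a; };

std::map<std::string, Rgba> parseSkinSection(std::istream& in)
{
    std::map<std::string, Rgba> colours;
    std::string line;
    while (std::getline(in, line))
    {
        std::istringstream ls(line);
        std::string state, keyword;
        if (!(ls >> state) || state[0] == '/') // "/BUTTON" closes the section
            break;
        Rgba c;
        if (ls >> keyword >> c.r >> c.g >> c.b >> c.a && keyword == "Colour")
            colours[state] = c;
    }
    return colours;
}
```

A full loader would wrap this in a loop over the SKIN / DEFAULT / BUTTON section headers.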

I've also refactored my game once again, removing all traces of the old event system. I'm now using a combination of two event systems; for the GUI communicating back to its owners, I'm using an EventListener system, where the object receiving the events must inherit the interface ButtonListener, TextBoxListener, etc.

To receive the input from that button, I need to override ButtonListener::listenButtonClick, like so:

void ZFrenzy::listenButtonClick(GUI::Button * sender, int x, int y, int button)
{
	exit(0);
}

And there it is: a button that exits the program. Basically this is a cut-down version of the observer pattern, since only one observer can be attached to each widget. There is no reason I couldn't allow more than one, but this is just a prototype GUI system, built from previous experience building GUI systems.
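The wiring behind that callback can be sketched as follows; the class and method names follow the snippets in this entry, but the `click` dispatch helper is an assumption about how the manager invokes the listener.

```cpp
#include <cstddef>

// Sketch only: single-observer wiring between a widget and its listener.
namespace GUI {

class Button; // forward declaration

class ButtonListener
{
public:
    virtual ~ButtonListener() {}
    virtual void listenButtonClick(Button* sender, int x, int y, int button) = 0;
};

class Button
{
public:
    Button() : listener(NULL) {}
    void setListener(ButtonListener* l) { listener = l; }
    // Called by the GUI manager when a click lands on this widget.
    void click(int x, int y, int mouse_button)
    {
        if (listener) // at most one observer per widget
            listener->listenButtonClick(this, x, y, mouse_button);
    }
private:
    ButtonListener* listener;
};

} // namespace GUI
```

Swapping the single pointer for a `std::vector<ButtonListener*>` would turn this into the full observer pattern.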

In the next 3 days I hope to have the font rendering sorted out enough that I can have a decent text input box, and will then create a console; this console will allow me to make interfaces for various parts of the game which are currently hard coded, and form the basics of an in-game editor.

## Factory pattern and such

So, did a few things of interest today:

Wrote a system for loading keymappings from file. The number of config files my game will use is rapidly growing. The file looks like this:

KEYBOARD
w FORWARD
s BACK
a LEFT
d RIGHT
e ACTION1
f ACTION2
1 WEAPON_1
2 WEAPON_2
3 WEAPON_3
4 ITEM_1
5 ITEM_2
6 ITEM_3
7 ITEM_4
8 ITEM_5
SHIFT SPRINT
C CREEP
/KEYBOARD

This lets me have more than one keymapping, and lets users customise their keymappings with little effort from myself. I could provide an in-game interface later if I wish.
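A loader for the keymap format above might be sketched like this (the function name is an assumption; actions are kept as strings here rather than whatever action enum the game really uses).

```cpp
#include <istream>
#include <map>
#include <sstream> // for the usage example below
#include <string>

// Sketch only: read "key ACTION" pairs between KEYBOARD and /KEYBOARD.
std::map<std::string, std::string> loadKeymap(std::istream& in)
{
    std::map<std::string, std::string> bindings; // key -> action name
    std::string key, action;
    in >> key; // consume the leading "KEYBOARD" tag
    while (in >> key && key != "/KEYBOARD")
        if (in >> action)
            bindings[key] = action;
    return bindings;
}
```

Because the result is just a map, supporting several keymappings is a matter of loading several files and picking one.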

Refactored the system so the world is spawning objects and the scene is only aware of what to draw. Thus, each entity has different and unrelated data structures within each system, and systems communicate with each other via event queues. This loose coupling makes refactoring easy.

Created a factory system for spawning a "Thing" from a prototype, just by its name. This loads from a text file like this:

PROTOTYPES
PROTOTYPE NAME "TV" WEIGHT 35.0 MODEL "TV" SCRIPT "TV" /PROTOTYPE
PROTOTYPE NAME "broom" WEIGHT 2.0 MODEL "broom" SCRIPT "broom" /PROTOTYPE
PROTOTYPE NAME "safety_stick" WEIGHT 4.0 MODEL "safety_stick" SCRIPT "safety_stick" /PROTOTYPE
PROTOTYPE NAME "bench" WEIGHT 150.0 MODEL "bench" SCRIPT "chair3" /PROTOTYPE
PROTOTYPE NAME "street lamp" WEIGHT 300.0 MODEL "street lamp" SCRIPT "street lamp" /PROTOTYPE
PROTOTYPE NAME "cine_camera" WEIGHT 45.0 MODEL "cine_camera" SCRIPT "cine camera" /PROTOTYPE
PROTOTYPE NAME "street lamp 2" INHERIT "street lamp" MODEL "street lamp 2" /PROTOTYPE
/PROTOTYPES

The advantage here is that some objects will be able to inherit the properties of another object, and then simply override the ones that are different. Note that "street lamp 2" inherits street lamp but has a different model. The rule here is that an object can inherit the properties of an object that appeared above it in the file. Very simple but creates lots of opportunities.
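The inherit-from-above rule can be sketched like this (struct and class names are assumptions; treating an empty string or zero weight as "not overridden" is also an assumption about the file semantics).

```cpp
#include <map>
#include <string>

// Sketch only: a prototype copies an earlier prototype, then overrides
// whichever fields it sets itself.
struct Prototype
{
    std::string name, model, script;
    float weight;
};

class PrototypeTable
{
public:
    // Base lookup happens at insertion time, so a prototype can only
    // inherit from one that appeared above it in the file.
    bool add(Prototype p, const std::string& inherit = "")
    {
        if (!inherit.empty())
        {
            std::map<std::string, Prototype>::iterator it = table.find(inherit);
            if (it == table.end())
                return false; // base must already be defined
            Prototype base = it->second;
            base.name = p.name;
            if (!p.model.empty())  base.model  = p.model;  // override
            if (!p.script.empty()) base.script = p.script; // override
            if (p.weight > 0.0f)   base.weight = p.weight; // override
            p = base;
        }
        table[p.name] = p;
        return true;
    }
    const Prototype& get(const std::string& name) { return table[name]; }
private:
    std::map<std::string, Prototype> table;
};
```

With this, "street lamp 2" picks up the weight and script of "street lamp" while swapping in its own model, matching the file above.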

I can spawn an object from this system at any time by just having the world send a "this object spawned" event, and the other systems will instantiate the appropriate object within their own ranks to match it.

My interpretation of the factory pattern is quite literal; I have an actual object being the factory, and this object creates the prototypes based on the data in a file, such as the one above. Each prototype knows how to instantiate an object with those properties.

The factory knows how to spawn only one class of object, so for each type of entity I have a different factory object.

Next week's objective: Have characters spawning from a factory, and moving around the world, represented by a piece of debug geometry.

## Progress since November...

I've been ignoring my game project of late due to Christmas and assignments; however, I have made some progress. I've finished the resource loading system, and now have a system whereby I can load any part of the game world, load and keep track of all the materials that part of the world needs, and determine their lower-level dependencies (shaders and textures).

Note how the world is divided into a semi-regular grid of chunks; there is no requirement for me to have these chunks aligned to any particular shape, or even that they are a regular size. I could stagger the chunks in a honeycomb structure, or a dodecahedron, or any other 3D arrangement I choose. The jagged edges you see will be outside the visible region of the level; for example, I could have a 1km viewing distance and load all sections within 1.128 kilometers to avoid any of these jagged edges being visible.
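For a square grid (the text above notes the layout need not be regular, so a uniform grid is a simplifying assumption here, as are the names), picking which chunks to load could be sketched as:

```cpp
#include <cmath>
#include <utility>
#include <vector>

// Sketch only: load every chunk whose centre lies within the load radius
// of the viewer, on a uniform square grid of chunk_size-metre cells.
std::vector<std::pair<int,int> > chunksInRadius(float px, float pz,
                                                float chunk_size,
                                                float radius)
{
    std::vector<std::pair<int,int> > result;
    int span = (int)std::ceil(radius / chunk_size); // cells to scan outward
    int cx = (int)std::floor(px / chunk_size);
    int cz = (int)std::floor(pz / chunk_size);
    for (int x = cx - span; x <= cx + span; ++x)
        for (int z = cz - span; z <= cz + span; ++z)
        {
            float dx = (x + 0.5f) * chunk_size - px; // chunk centre offset
            float dz = (z + 0.5f) * chunk_size - pz;
            if (std::sqrt(dx * dx + dz * dz) <= radius)
                result.push_back(std::make_pair(x, z));
        }
    return result;
}
```

Testing against the chunk's bounding box rather than its centre would be the way to guarantee the jagged-edge margin described above.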

Furthermore, I can spawn the renderable node of any object which has a mesh available into the world, and calculate its dependencies as per the chunks of level. This provides a few opportunities:

1) I can have a "brush" system for geometry that is repeated, for example, there may be several thousand lamp-posts.

2) I can have dynamic objects in the scene, such as things which the player may use to whack zombies, held within a database. These may use the same meshes as static objects, allowing me to have a larger and more detailed world without all objects being physically active.

3) Every time a new section of the world is revealed, the database can be queried for dynamic objects that occur within that section of the world. These objects can then be spawned and moved around, and their dependencies (meshes, materials etc) will be dealt with too.

Here is an image of a bunch of "anti zombie safety sticks" among other objects being spawned in mid air.

Thus far, they do not move and are not affected by gravity, since there is no physics system as yet. I've not totally decided which physics system I should use. I have experience with Newton and Bullet, both of which are suitable. However, I've also considered PhysX and Havok, since it's far easier with these engines to make and calibrate vehicle physics. Any insight into this would be appreciated.

Tomorrow, I'm going to lay the foundation of the "world logic" system. This will involve two things that I've not done before, but which are important to any game developer:

First is a database of object prototypes and their properties; I'll talk about the version of the factory pattern that I've come up with for dealing with this. I hope to have the "factory" for dynamic objects (I'm calling them "Things" at their most abstract level) loading from an SQLite database that I've created.

Tomorrows exercise: using SQLite, load the "Thing" prototypes from the database into a map of

## Untitled

Well, it's a common fact that your prediction for how long something will take is usually far too short. I've started the resource management system, and it went through several iterations. Over the week I decided to work on something unrelated to the goal in the last journal entry.

The first iteration was with just the textures. Once I had textures loading with the correct interface, I realised that I actually needed a templated resource-counting class. So I implemented this instead and re-implemented the texture table using it. This core functionality is repeated so much that a templated class seemed the only way to go. I considered using inheritance but decided to keep this simple.
#pragma once

#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

#include "../Logging/Logging.h"

/*
	Templated storage of a set of objects along with a modifiable usage count.
*/
template <class VALUE> class UseCountTable
{
	public:
		UseCountTable()
		{
		}
		~UseCountTable()
		{
		}
		bool addMember(const std::string& name, int use_count, VALUE& object)
		{	// return false if action cannot be performed
			typename std::map<std::string, std::pair<int,VALUE> >::iterator it = objects.find(name);
			if(it == objects.end())
			{
				objects[name] = std::pair<int,VALUE>(use_count, object);
				return true;
			}
			else
			{
				return false;
			}
		}
		bool updateMember(const std::string& name, int change)
		{	// return false if action cannot be performed
			typename std::map<std::string, std::pair<int,VALUE> >::iterator it = objects.find(name);
			if(it != objects.end())
			{
				it->second.first += change;
				return true;
			}
			else
			{
				return false;
			}
		}
		bool deleteMember(const std::string& name)
		{	// return false if action cannot be performed
			typename std::map<std::string, std::pair<int,VALUE> >::iterator it = objects.find(name);
			if(it != objects.end())
			{
				objects.erase(it);
				return true;
			}
			else
			{
				return false;
			}
		}
		bool exists(const std::string& name)
		{	// return false if the member is not present
			typename std::map<std::string, std::pair<int,VALUE> >::iterator it = objects.find(name);
			if(it != objects.end())
			{
				return true;
			}
			else
			{
				return false;
			}
		}
		bool getMember(const std::string& name, int& use_count, VALUE& object)
		{	// return false if action cannot be performed
			typename std::map<std::string, std::pair<int,VALUE> >::iterator it = objects.find(name);
			if(it != objects.end())
			{
				use_count = it->second.first;
				object = it->second.second;
				return true;
			}
			else
			{
				return false;
			}
		}
		bool getUseCount(const std::string& name, int& use_count)
		{	// return false if action cannot be performed
			typename std::map<std::string, std::pair<int,VALUE> >::iterator it = objects.find(name);
			if(it != objects.end())
			{
				use_count = it->second.first;
				return true;
			}
			else
			{
				return false;
			}
		}
		void consoleDump()
		{
			typename std::map<std::string, std::pair<int,VALUE> >::iterator
				it = objects.begin(),
				end = objects.end();
			for(; it != end; ++it)
			{
				std::cout << "NAME:" << it->first << " USE COUNT:" << it->second.first << std::endl;
			}
		}
		void getAllObjects(std::vector<VALUE>& table)
		{
			typename std::map<std::string, std::pair<int,VALUE> >::iterator
				it = objects.begin(),
				end = objects.end();
			for(; it != end; ++it)
			{
				table.push_back(it->second.second);
			}
		}
	private:
		std::map<std::string, std::pair<int,VALUE> > objects;
};

The other classes in the system will each be a wrapper for this; they will handle the shopping list they are given and load / update the resources under their control accordingly. I've dabbled with different ways of ensuring deletion of the contents, but after looking at one hack or another, decided to let the wrapper classes do it. Any suggestions are welcome, though. I've started a thread discussing this class and the unit test I did for it.

## Zombie safety

Today's progress:

I didn't do anything I planned today because I was too busy with uni work. This is likely to happen a lot. Instead, I've done the following things:

I upgraded my resource loading system to allow several separate brush files. The headers for all of them are loaded during initialisation, and the data can be loaded at will. The advantage here is that I don't have to have a single large file of brushes; I can have one per category of object, for example a file full of weapon meshes, a file of plant meshes, etc.

I also created a zombie safety stick:

This should be used by swinging the weapon and striking the zombie in the head with the pointy bits. This will damage the brain and disable the zombie. Keep out of reach of children. Do not insert into your ear.

Tomorrow's objective: Finish the functionality to spawn an object and get hold of the "shopping list" the object requires.

## I'm Back!

It's been a while since my last journal entry, as I haven't been working on my project for a while. I didn't have any internet access for nearly a month (I cried every night), and I've been settling into a new house and university term.

I did do a little work in the last 3 weeks. I now have command line utilities to compile the .ac file into a Poly Forest file, and a brush file of a similar format. The brush file format will be used for both static geometry that is repeated, and dynamic objects that can be picked up and, for example, used to beat zombies about the head.

During my absence, I also did a little work on another area of the game engine. In order to manage loading resources, I've implemented a "shopping list" system. This system manages the dependencies between resources. For example, the mesh of a bus may use several sub-meshes, including the wheel mesh. A truck may have the same wheels and therefore use the same mesh. The shopping list would know that two objects are using the wheel mesh, and then load it with a use count of 2. If the bus is then unloaded, the use count will be decremented by one.

Furthermore, the wheel mesh is likely to require the wheel material, and may share this material with another kind of wheel, e.g. on a car. The use count for the wheel material will be 2, since 2 separate meshes use the same material.

This continues down the hierarchy, as a material will consist of some shaders and textures, which in turn may be used by more than one material.

The point of the "shopping list" analogy is that an "order" is constructed every time a new section of the level is loaded. Imagine that you are writing a menu, and three of the items are:
Tiramisu,
Bacon and Eggs,
BLT sandwich.

There are
2 counts of egg,
1 count of mascarpone,
1 count of sugar,
2 counts of bacon,
1 count of bread,
etc...

By writing a list of the requirements of each item, we can formulate a single list of the ingredients and the amount required.

This can be done with the aid of the following class:
#ifndef CLASS_ShoppingList_h
#define CLASS_ShoppingList_h

#include <iostream>
#include <sstream>
#include <string>
#include <vector>

namespace Resources
{
	enum EResourceType
	{
		EStaticMesh=0,
		ETexture=1,
		ESound=2,
		EVehicle=3,
		ECharacter=4
	};

	class ShoppingListEntry
	{
		public:
			ShoppingListEntry(EResourceType type, int change, const std::string& name)
				:type(type),change(change),name(name){}
			ShoppingListEntry(){}

			void getData(EResourceType& t, int& c, std::string& n)
				{t=type; c=change; n=name;}

			std::string getName()const
				{return name;}
			EResourceType getType()const
				{return type;}
			int getChange()const
				{return change;}

			void DumpToConsole()
			{
				std::stringstream msg;
				msg << " Resource type = " << (int)type;
				msg << " Name of resource = '" << name << "'" << std::endl;
				std::cout << msg.str();
			}
		private:
			EResourceType type;
			std::string name;
			int change;
			friend class ShoppingList;
	};

	class ShoppingList
	{
		public:
			void addEntry(ShoppingListEntry& newentry)
			{
				std::vector<ShoppingListEntry>::iterator index;
				for(index = items.begin(); index != items.end(); ++index)
				{
					if(index->name==newentry.name && index->type==newentry.type)
					{
						index->change += newentry.change;
						return;
					}
				}
				items.push_back(newentry);
			}
			void dumpToConsole()
			{
				unsigned int index=0, end=items.size();
				for(; index!=end; ++index)
				{
					std::cout << "Entry #" << index;
					items[index].DumpToConsole();
				}
				std::cout << std::endl;
			}

			void extract(EResourceType t, ShoppingList& subset);
			std::vector<ShoppingListEntry> items;
	};
}

#endif

With the aid of this class, I can streamline the loading and unloading of resources as I need to. As yet I have not finalised the resource manager, but it will take items at the top of the hierarchy (vehicle, brush, actor) and load those. The separate loader classes will then each return a new shopping list, which is concatenated, and the next level of the hierarchy (material) will be loaded. This continues until all resources have been loaded, with the correct usage counts.

Any resource that is no longer needed can be removed from GPU RAM, or otherwise sidelined, and marked for death. If it is needed again shortly (which is likely, e.g. if a player goes 128m in one direction then doubles back), it is already loaded from disk, and needs only to be copied back into video memory, thus reducing HDD bandwidth. The criteria by which these are then removed from system memory I have not decided, but I have several ideas.
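The sidelining idea might be sketched as a small residency state machine (the enum and class names are assumptions): a resource falls from video memory to system memory before being dropped entirely, so doubling back only costs a re-upload rather than a disk read.

```cpp
#include <map>
#include <string>

// Sketch only: track where each resource's data currently lives.
enum ResidencyState { ON_GPU, IN_SYSTEM_RAM, UNLOADED };

class ResidencyTracker
{
public:
    void load(const std::string& name) { state[name] = ON_GPU; }

    // Use count hit zero: evict from video memory but keep the bytes.
    void sideline(const std::string& name) { state[name] = IN_SYSTEM_RAM; }

    // Needed again: cheap re-upload instead of a disk read.
    void revive(const std::string& name)
    {
        if (state[name] == IN_SYSTEM_RAM)
            state[name] = ON_GPU;
    }

    ResidencyState get(const std::string& name) { return state[name]; }
private:
    std::map<std::string, ResidencyState> state;
};
```

The undecided part above is the policy for moving IN_SYSTEM_RAM entries to UNLOADED (LRU, memory pressure, distance from the player, etc.).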

By tomorrow I hope to have the ability to spawn a physics object, and process the resulting "shopping list".

## Everything is faster now!

New results are in! I decided not to write the header for the new file format today. Instead, I have been removing debug code from my various classes to see how fast the system runs. It's not yet completely optimised, but it's close.

After those basic improvements were made, I have cut the render-list gather time down to 1 millisecond, and the render time to 6ms. This is a pretty awesome improvement all around compared with yesterday's results.

Next big objective
I need to get started on putting the whole structure into a new file format. The first step is to take the current prototype for the loader and accessor, and turn it into separate classes for loading from certain file formats and holding the mesh in memory ready for loading.

Classes that I will likely need in order to obey single responsibility principle:
class AC3DLoader - loads from the intermediate file format .ac
class NativeLoader - loads from the native file format of the system

WRITERS:
class NativeWriter - writes data to the native file format.
class EntityWriter - puts entities in database.

STORAGE IN MEMORY:
class Streamer - controls the other classes, and transparently loads the world. Manages the loading and unloading of chunks according to the parameters it is given.
class SceneGraph - uses the streamer and a database to load and store the entire world transparently, including entities, and provides an interface to manage the addition of new entities, and the removal of old ones.

The names of the last two classes are uncertain, since I'm not planning to have any streaming functionality in the first prototype. I'm starting a thread in General Programming to discuss exactly what this will all look like.

Objective for tomorrow:
Refactor the current prototype into several classes, each with a single responsibility, all within a namespace.

## Success with VBOs

SUCCESS!

So, it turns out that VBOs are very fast indeed. Keeping arrays of vertices on the GPU side means very little increase in rendering time with a raw increase in polygons. Here is a table of results:

| Num polys | VBOs | Fixed function |
|-----------|------|----------------|
| 18,872    | 24ms | 33ms           |
| 112,680   | 24ms | 110ms          |

The first mesh, with 18k polys, is a hill made using subdivision modelling. The second is the same mesh, after being subdivided to another level, thus multiplying the number of tris by nearly 6.

It's pretty clear that using VBOs cuts the CPU-to-GPU bandwidth considerably, and I didn't notice much of an overhead from building the array. This is an awesome success, but it is early days for serious benchmarking because:

• It is only with a few textures, and large blocks of the file using the same texture. More sorting of VBOs by texture will be required to keep these speeds.

• There are still some overheads in the code caused by some of my braindead STL container choices.

• No use of shaders. Every triangle is being rendered with a single texture and a diffuse colour of (0.7, 0.7, 0.7).

• As yet no culling is being done; the entire file is being rendered. This means I still have long render-list gathering times. The sight distance is far further than what would be practical in-game, as the mesh is 4,000 x 500 x 4,000 meters in dimensions, and I plan to use fogging and a close far plane to give the player a view of no further than 800 meters in any direction.

I expect that the final render times will be more affected by the shaders and other tricks I use than by polygon counts.

Issues to be resolved

Thanks to the use of VBO's I no longer need to construct octrees in order to cull the internals of one chunk. A chunk will have several things in it:

Design file header for first prototype of file format. This should have a list of all the chunks and where each one starts in the file.

## Good day's progress

SUCCESS!

So, it turns out that VBOs are very fast indeed. Keeping arrays of vertices on the GPU side means very little increase in rendering time with a raw increase in polygons. Here is a table of results:

| Num polys | VBOs | Fixed function |
| --- | --- | --- |
| 18,872 | 24 ms | 33 ms |
| 112,680 | 24 ms | 110 ms |

The first mesh, with 18k polys, is a hill made using subdivision modelling. The second is the same mesh, after being subdivided to another level, thus multiplying the number of tris by nearly 6.

It's pretty clear that using VBOs cuts the CPU → GPU bandwidth considerably, and I didn't notice much of an overhead from building the array. This is an awesome success, but it is early days for serious benchmarking because:

• It is only with a few textures, and large blocks of the file using the same texture. More sorting of VBOs by texture will be required to keep these speeds.

• There are still some overheads in the code caused by some of my braindead STL container choices.

• No use of shaders. Every triangle is being rendered with a single texture and a diffuse colour of (0.7, 0.7, 0.7).

• As yet no culling is being done; the entire file is being rendered. This means I still have long render list gathering times. The sight distance is far further than what would be practical in-game, as the mesh is 4,000 x 500 x 4,000 meters in dimensions, and I plan to use fogging and a close far plane to give the player a view of no further than 800 meters in any direction.

I expect that the final render times will be more affected by the shaders and other tricks I use than by polygon counts.

### Issues to be resolved

Thanks to the use of VBOs, I no longer need to construct octrees in order to cull the internals of one chunk. A chunk will have several things in it:

| Object type | How loaded | How culled | Notes |
| --- | --- | --- | --- |
| Unique geometry | Stored in file as list of vertices | Single chunk of approx 128^3 culled | Stores several VBOs, one per texture |
| Brush geometry | Stored as {Matrix, Octree} | Octree compared to frustum | Also stores lights, which will be culled as per lights in unique geometry |
| Entities | Loaded from database | Entities which are loaded will have their own triggers for when to work | |
| Dynamic objects | Same as brush geometry, but with a separate table of physics-only meshes | Bounding sphere | Stored in database rather than in the actual file format; placed in in-game editor |
| Creatures | Not sure; probably stored in database, with resource files loaded in one big gulp | Bounding volume hierarchy | |

Design a file header for the first prototype of the file format. This should have a list of all the chunks and where each one starts in the file.

## Apparently I have readers :-)

I didn't realise I had regular readers. I'll make a point of updating my progress every day rather than when I feel like it. Thanks a lot!

I've been chatting with friends in the know about my performance issues, and came across an NVIDIA paper detailing the issue. Basically, because I am breaking the landscape into separate chunks, there are more "batches" of vertices being sent to the GPU by several orders of magnitude. The GPU therefore spends a lot of time waiting for CPU-side operations to complete, and thus I lose the benefit of a separate GPU.

The solution to this is to modify my render list gather step so that rather than sending one batch per texture per leaf node, and therefore having several thousand render list nodes, I traverse the octree structure and concatenate all batches for each texture into a single large map from texture to triangles.
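As a hedged sketch of that gather step: assuming each leaf stores its vertex data keyed by texture (the `TextureId` and `Leaf` types here are illustrative, not my actual classes), the traversal just appends every leaf's per-texture data into one shared map, so the renderer can issue one draw call per texture instead of one per leaf.

```cpp
#include <map>
#include <vector>

using TextureId = int;
using VertexData = std::vector<float>;          // flat x,y,z triples
using Batch = std::map<TextureId, VertexData>;  // one entry per texture

struct Leaf {
    std::map<TextureId, VertexData> geometry;   // per-leaf, per-texture data
};

// Concatenate every visible leaf's geometry into one batch per texture,
// collapsing thousands of tiny render list nodes into a handful of big ones.
Batch gatherBatches(const std::vector<Leaf>& visibleLeaves) {
    Batch out;
    for (const Leaf& leaf : visibleLeaves) {
        for (const auto& kv : leaf.geometry) {
            VertexData& dst = out[kv.first];
            dst.insert(dst.end(), kv.second.begin(), kv.second.end());
        }
    }
    return out;
}
```

The map does the sorting-by-texture for free; the cost is one extra copy of the vertex data per frame, which is exactly the trade the NVIDIA paper argues is worth making.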

The use of vertex arrays DID provide a performance boost, but not by as much as I had hoped. Rendering now happens within 16-20ms rather than the 20-40 I had previously. Tomorrow I will upgrade it again, this time to VBOs, and report back.

Task for tomorrow: Get it working with VBOs and make it dynamically fall back on vertex arrays on demand.

## Is there anybody out there?

I had abandoned the journal thinking that nobody read it; only 5 kind souls left comments, and 2 of them were from the same person. Well, I've made some progress since 1st August, although I must confess that I did a lot of procrastinating.

My secret second project, which I have only shared with 1 person so far, is modding S.t.a.l.k.e.r to be faster paced. I find that many shots are wasted either because they did not penetrate body armour or because the weapon was not accurate.

If you want to get hold of this mod then PM me, but there is nothing special about it, and there are loads of mods to be found from official sources. I did turn a handgun which you can find early on into a machine pistol which totally pwns anybody at close range but has questionable accuracy at longer ranges.

Right, my game.

I've finalised my system for loading meshes and rendering them in immediate mode. This screenshot shows the different regions, colour coded, rendered by brute force in immediate mode. A mesh of approx 21k triangles renders in between 20 and 40 milliseconds, and the render list can be processed in between 5 and 10 milliseconds. These are relatively slow values, considering that no shaders or lighting are being handled. The render time will be improved by the use of vertex arrays, while the time to gather a render list is a consequence of debug code in the renderer; I am copying data several times in order to track its progress through the pipeline. I'm sure I can drop the gather time to less than 1 millisecond with basic optimisations.

There are some other images in the same folder, with similar terrains at different dimensions of chunk.

It looks like abstract art, but the colours are for debugging purposes. It occurs to me that the boundaries between the areas have a lot of triangular areas; this is not a problem because the "end" of the loaded area will be on the horizon, hidden by the fog plane, and the segments line up perfectly. When textured properly there should be no noticeable boundary between sections. Cutting triangles in half would be a waste of time as they would be close to the end of the loaded world anyway.

I've been learning the method required to use vertex arrays, and by tomorrow I hope to have performance statistics for an environment which still renders chunks by brute force, but this time using vertex arrays internally.

Tomorrow's objective: Get the terrain rendering with vertex arrays, and report any speed gains.

## Time lapse detected

Sorry, I didn't update the blag yesterday. I had a nasty bug in my AC3D file loader and was awake until the small hours of the afternoon (GMT) debugging it. I woke up today at 5pm. My sleeping patterns suck >:(

Yesterday's progress: The new camera class is written.

The idea behind my camera system is that a camera can be passed around the game engine without worrying about the effect on the state of the GPU when the camera is changed. Prior to stepping through its render list, the renderer can set the view matrix for that camera. The camera can also be used to construct a view frustum, enabling the system responsible for CPU-bound culling steps to perform its task without concerning itself with API-specific code. It can be used by the AI to determine who is looking at whom or what, and it can be controlled by the physics engine without the physics engine needing to know about the GPU view matrices.

I'm currently setting up the system mentioned yesterday where I can see a camera move around the level and watch the geometry as it gets culled.

Objective for the end of today:
Get a debug display of the octree boundaries with a moving object to serve as the frustum for culling.

Objective for tomorrow:
Make serious progress on the culling system. Hopefully, this means complete demo of culling taking place.

## Day 6 (technically, look at the times)

Today was a day for reading up on octree culling. I read some good articles on frustum culling:

http://www.crownandcutlass.com/features/technicaldetails/frustum.html

http://www.flipcode.com/archives/Frustum_Culling.shtml

I wrote some pseudocode for generating a frustum and doing the octree culling.
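The core test in that pseudocode can be sketched as a sphere-vs-frustum check in the style of those articles. This is a minimal illustration, not my actual code: the six planes are assumed to be normalised with inward-facing normals, and extracting them from the view-projection matrix is omitted.

```cpp
#include <array>

struct Plane { float a, b, c, d; };  // ax + by + cz + d = 0, normalised

// Returns false as soon as the sphere is entirely on the outside of any
// plane; otherwise the sphere is inside or intersecting the frustum.
bool sphereInFrustum(const std::array<Plane, 6>& frustum,
                     float x, float y, float z, float radius) {
    for (const Plane& p : frustum) {
        float dist = p.a * x + p.b * y + p.c * z + p.d;
        if (dist < -radius)
            return false;  // completely outside this plane
    }
    return true;
}
```

An octree node would run the same test on its bounding sphere (or the eight AABB corners) and skip its whole subtree on a miss, which is where the real savings come from.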

Seeing what was needed here, I decided to make some changes to my renderer so that the view frustum can be calculated once, accessed externally and used for demonstration purposes, and then another frustum calculated for the actual drawing. So, I should be able to make a demo with one camera moving around the scene and show the culling taking place, while another camera in a fixed location views the results. In this demo app you would be able to swap between the moving camera, the view the player would have, and the fixed camera.

However, I have run into some trouble with my camera class. I decided from an early point to have a totally API independent camera, so I can use it for things such as AI, and so that I could have more than one at a time ready to be used to setup the view frustum.

Tomorrows objective: Fix the camera class

## Day 5:

Yesterday's objective: Met

I was able to find the correct configuration of triangle translation, and here we have a screenshot of a mesh that was rendered with the system. What we have here is a low-resolution trimesh with one of the sections textured. It has been loaded into a collection of octrees, each 256m squared, and the alternating squares show roughly where the boundaries are on the horizontal plane. There is a huge gap underneath where no octrees have been created, thus increasing efficiency.

It's difficult to get a sense of the scale of the scene without any characters as a frame of reference, so I added some smiley faces. The large one in the middle is 100m squared, and there are some tiny ones, so small they take only 1 or 2 pixels, which are 1m squared. As a result they got killed by the JPEG compression. The green grass square is approx 256m squared, and the entire scene is 2048m squared. The squares which are black and pink are being bound to the warning texture I mentioned yesterday, which is pretty useful since by chance it has 8x8 squares on it, so each square aligns closely to one of the boundaries of the scene.

FPS data is not meaningful at the moment, because I am not performing any culling operations on the system. It simply loads the entire file and enables me to render it, and it is rendering each entire octree by brute force.

Secondly, each leaf node on the octree is rendering its data with immediate mode. To get real performance statistics, I need to convert the data into a vertex array.

The system does not yet perform the actual task it was designed to do: load chunks from file as they are needed. Instead, to test that it renders properly, I am loading the entire file. However, once I have tested the efficiency of rendering the entire system, including culling and vertex arrays, I will consider whether I should write it to disk and load it from my own file format, or whether it is just as efficient to load from the original .ac file format.

Tomorrow's objective:
Write pseudocode and do some dry runs for frustum culling within the octrees.

## Day 4

Yesterday's objective for today: Failed.

I didn't manage to make it render properly.

### What I am actually getting:

Two things are missing:
1) It doesn't draw the correct tris in the right places. This could be for any of a number of reasons, but I suspect it is because the triangles are not being properly transformed from the world frame of reference to the octree's frame of reference when being passed down the levels of the octree.

2) Textures are not being bound. I didn't have time today to fill the texture table, what with debugging a few strange behaviours. The purple texture you see is not the same as the purple one in the image of the correct picture; it is in fact a pink and black warning texture that is bound when the requested texture cannot be found.

However, I did fix several bugs and get a large portion of the rendering code written. It still contains bugs, of course, but I have a rough idea what the problem is and know what to do about it.

I will debug this problem by colouring each triangle according to the octree it belongs in. It remains to be seen why there is a rectangle being drawn; without rendering in wireframe, I can't really see which triangle is which. I will create debug textures which, when bound properly, will enable me to see where on the model the triangles should be. I will create models with specific shapes for each block.

Tomorrow's objective: Debug, and post new screenshots. Get the mesh rendering properly.

I didn't procrastinate at all today. I spoke to friends from my home town, went shopping, ate a curry. Drank 2 pints of chocolate milkshake.

## No journal entry yesterday...

This is because of my dodgy sleeping patterns. I updated yesterday's journal at close to midnight (GMT+0), so it falls on the 27th. Today's journal entry will be arriving at about 6:00am, GMT+0.

[edit: better make that 6, not 5....]

I will have screenshots by then, I hope. Preliminary tests don't show anything promising but I have 3 hours. Working under pressure is the way forward, for a chronic procrastinator like myself.

## Day 2:

Met yesterday's objective: the file is now being loaded into memory without crashing. Ran into a strange behaviour in MSVC9. Apparently, this is not a warning at level 3:

    switch (blah) {
        case 1:  /* ... */ break;
        case 2:  /* ... */ break;
        case 3:  /* ... */ break;
        default: /* ... */ break;
        CodeThatWillNeverExecute();
    }

It caused an equally bizarre problem, however. In debug mode, the test harness I created seemed to work, although it always passed triangles down to the 100-level limit that I set to prevent superfluous subdivision of the octree. In release mode, it caused the application to eat memory at the rate of 500 MB per second. Apparently this is faster than Vista is capable of guarding against, because not only did it page out everything, but the entire machine froze.

One hard reboot later and I took my problem to the IRC channel, and found that I had been banned, with a link to a reply in the thread I started in the lounge detailing why. I might have to publicly clarify what I meant by "stop me from procrastinating". But your hearts are in the right place, so thank you very much anyway.

Now, I am able to feed a file in AC3D format into the system and transform it into the correct memory format. My next step will be to get it drawing onscreen using immediate mode. This will prove that the triangles were sorted into the correct tree. After that, I will convert it so that it is rendered with vertex arrays instead.

Procrastination today:

• Chatted with an old college friend. (this is allowed of course)
• Ate 2 slices of toast with butter. It's surprising that if you don't move around a lot, you will not need a lot of food.
• Drank chocolate milkshake (easy recipe: throw ice, sugar, cocoa powder and milk into blender. Blend to a thick paste. Then add more milk, and drink)
• Played games for around 2 hours.
• Breathed approximately 2000 gallons of air. Had to open my bedroom window which was glued shut with spider webs.
• Drank roughly 2 pints of water.

It seems that if I continue this trend, by the end of the week I will be able to live on a small amount of air and one glass of water alone. This will surely make me a better programmer!

Tomorrow's objective: Create screenshots of the entire mesh being rendered with textures.

## Day 1: (Sat July 26th 2008)

CURRENT PROGRESS:

I have created and tested one of the lower level constructs, using a nested set of std::pair. Basically, this is a tuple with half the fuss. I then created a wrapper around this which used the tuple as the key to a std::map. The entire thing is named SparseGrid on the suggestion of somebody on #gamedev, I forget who. (Tell me, will you? You deserve some credit.) I have also written most of the system to load from AC3D and sort into a forest of octrees in pseudocode, and done dry runs.
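A minimal sketch of the SparseGrid idea, with illustrative method names (the real class has more to it): a nested std::pair acts as a three-component key into a std::map, and since std::pair already defines operator<, the nested key orders lexicographically by (x, y, z) with no extra code.

```cpp
#include <cstddef>
#include <map>
#include <utility>

using GridKey = std::pair<int, std::pair<int, int>>;  // (x, (y, z))

template <typename T>
class SparseGrid {
public:
    // Inserts a default-constructed cell on first access, like std::map.
    T& at(int x, int y, int z) {
        return cells_[GridKey(x, std::make_pair(y, z))];
    }
    bool contains(int x, int y, int z) const {
        return cells_.count(GridKey(x, std::make_pair(y, z))) != 0;
    }
    std::size_t size() const { return cells_.size(); }

private:
    std::map<GridKey, T> cells_;  // only occupied cells cost memory
};
```

Only occupied cells are stored, which is the whole point for a world that is mostly empty air.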

Drank 7 cups of tea.

Played s.t.a.l.k.e.r. for only 5 hours.

Tomorrow: Fill in the pseudocode with real code, and debug.

## Current development:

Currently, I am developing the new file format. This first prototype will only contain textured polygons, and will be used as a proof of concept. This is how the concept works:

Firstly, I create the mesh as usual in the modelling application. The mesh may be rather large and unwieldy, but this does not matter. It is not distributed with the game.

Then, my loader will load the data and sort it into a 3d grid of chunks of uniform size. Each chunk will contain an octree.
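The sorting step boils down to mapping each world-space vertex to the uniform chunk that owns it. A hedged sketch, assuming 256m chunks (the size I've used elsewhere in this journal) and an illustrative function name:

```cpp
#include <cmath>

struct ChunkCoord { int x, y, z; };

// Map a world position to its chunk's integer grid coordinate.
// std::floor (not integer truncation) is used so that negative
// coordinates land in the correct chunk: x = -1 belongs to chunk -1.
ChunkCoord chunkFor(float wx, float wy, float wz, float chunkSize = 256.0f) {
    return ChunkCoord{
        static_cast<int>(std::floor(wx / chunkSize)),
        static_cast<int>(std::floor(wy / chunkSize)),
        static_cast<int>(std::floor(wz / chunkSize))};
}
```

Each triangle would go into the chunk of, say, its first vertex, and from there down into that chunk's octree.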

Then, I will serialise all the data in those chunks into a std::string, so there is one string per chunk. These strings will be quite large, so I may break them down depending on requirements.

Once that is done, I will create a file header and calculate where in the file each chunk of data will go. The header allows the loader to open the file, and skip to the location of the appropriate chunk. This header is written to file, followed by the strings containing the binary data for the polygon info.
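A hedged sketch of that header-then-chunks layout, using in-memory strings for brevity (a real loader would seek within a file stream, and the field widths here are assumptions): the header stores a count plus n+1 offsets, the extra entry marking the end of the last chunk so lengths can be derived.

```cpp
#include <cstdint>
#include <cstring>
#include <cstddef>
#include <sstream>
#include <string>
#include <vector>

// Serialise: 4-byte chunk count, (n + 1) 8-byte offsets, then chunk data.
std::string writeFile(const std::vector<std::string>& chunks) {
    const std::uint32_t n = static_cast<std::uint32_t>(chunks.size());
    std::uint64_t offset = 4 + 8ull * (n + 1);  // data starts after header
    std::ostringstream out;
    out.write(reinterpret_cast<const char*>(&n), 4);
    for (const std::string& c : chunks) {
        out.write(reinterpret_cast<const char*>(&offset), 8);
        offset += c.size();
    }
    out.write(reinterpret_cast<const char*>(&offset), 8);  // end sentinel
    for (const std::string& c : chunks)
        out.write(c.data(), static_cast<std::streamsize>(c.size()));
    return out.str();
}

// Skip straight to one chunk using its recorded offsets.
std::string readChunk(const std::string& file, std::uint32_t index) {
    std::uint64_t begin = 0, end = 0;
    std::memcpy(&begin, file.data() + 4 + 8ull * index, 8);
    std::memcpy(&end, file.data() + 4 + 8ull * (index + 1), 8);
    return file.substr(static_cast<std::size_t>(begin),
                       static_cast<std::size_t>(end - begin));
}
```

The point of the offset table is that loading chunk 500 never requires reading chunks 0-499.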

On loading, the loader will be provided with a "start position". It will load all chunks into memory that are within a certain radius of that start position. It will be frequently notified of the current position of the focal point.
The physics system will then be required to create a single triangle mesh for each chunk that is in memory.

When there is a change to the position of the focus, the loader will automagically load the new chunks from disk and discard the old ones. The renderer and the physics system will be notified of this change and will "forget" the chunks that went out of focus. Poof. Gone.
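That load/forget step is essentially a set difference over chunk keys. A simplified 2D sketch with illustrative names: compute the set of chunks wanted around the new focus, and anything wanted-but-not-loaded gets read from disk (the symmetric difference gives the chunks to forget).

```cpp
#include <set>
#include <utility>

using ChunkKey = std::pair<int, int>;  // (x, z) grid coordinate

// All chunk keys within a square radius of the focal chunk.
std::set<ChunkKey> chunksAround(int cx, int cz, int radius) {
    std::set<ChunkKey> keys;
    for (int x = cx - radius; x <= cx + radius; ++x)
        for (int z = cz - radius; z <= cz + radius; ++z)
            keys.insert({x, z});
    return keys;
}

// Chunks in 'wanted' but not yet in 'loaded' must be read from disk.
// (Running it with the arguments swapped yields the chunks to forget.)
std::set<ChunkKey> toLoad(const std::set<ChunkKey>& loaded,
                          const std::set<ChunkKey>& wanted) {
    std::set<ChunkKey> out;
    for (const ChunkKey& k : wanted)
        if (!loaded.count(k)) out.insert(k);
    return out;
}
```

Moving the focus one chunk along x only touches one new column of chunks, so the disk traffic per step stays small.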