
About this blog

This is the story of the Geist3D graphics engine and development studio.

Entries in this blog

I have been busy finalizing the rendering pipeline of Geist3D, including HDR, bloom, SSAO etc. I am really glad now that I created a development environment alongside the graphics engine right from the get-go. The built-in GLSL editor and the scripting capabilities helped to streamline the development process for all of the necessary shaders. Soon, I will dedicate a few journal entries to the editor alone. But over the next few entries I will outline all that I have done regarding the rendering pipeline during the past weeks, starting with Screen Space Ambient Occlusion (SSAO) culling. Check out the Geist3D website for downloads, images and movies of the current status.

Examples and discussions of SSAO are plentiful, and I stitched my implementation together after digging around in various forums. In a nutshell, screen space ambient occlusion culling adjusts the intensity of every pixel according to the differences in depth of the surrounding pixels along the viewing axis. I think the idea is that the larger the difference in depth to the neighboring pixels, the more likely it is that the surface is shadowing itself, and thus the darker it becomes. The remarkable thing about SSAO is that it clearly produces a 3D image without there being a light source at all. Yeah, that's right; there is no light source in these images.
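The per-pixel idea can be sketched on the CPU side; this is a hypothetical helper, not the actual shader, and the falloff constant and averaging are my assumptions:

```cpp
#include <vector>

// Hypothetical CPU-side sketch of the per-pixel idea (not the actual shader):
// neighbors that are closer to the camera than the center pixel occlude it,
// and their contribution falls off with the size of the depth difference.
float estimateOcclusion(float centerDepth,
                        const std::vector<float>& sampleDepths,
                        float falloff = 1.0f)
{
    if (sampleDepths.empty())
        return 0.0f;
    float occlusion = 0.0f;
    for (float d : sampleDepths) {
        float diff = centerDepth - d;      // > 0 means the sample is in front
        if (diff > 0.0f)
            occlusion += 1.0f / (1.0f + diff * falloff);
    }
    return occlusion / static_cast<float>(sampleDepths.size());
}
```

A flat neighborhood (all depths equal) yields zero occlusion, which is exactly why a flat, evenly lit wall stays bright.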

In order to implement this algorithm in Geist3D, it became necessary to render the scene into multiple render targets rather than straight into the back buffer. The occlusion algorithm is then applied in screen space to the content of the render targets. For SSAO, I used two GL_RGBA16F render targets to store color and depth as well as the eye space surface normal:

Buffer 1
R - diffuse red
G - diffuse green
B - diffuse blue
A - depth

Buffer 2
R - normal X
G - normal Y
B - normal Z
A - unused

Much of the literature suggests using a random set of sample points within a fixed radius around a pixel. It turns out that a regular distribution of sample points produced better results for me. Somehow this also makes sense; you would want the sample points to cover the surrounding area as evenly as possible.

Another important step I came across is to reflect a sample point over the surface normal if the angle between the sample point and the normal is greater than 90 degrees. The assumption is that for angles greater than 90 degrees the sample point is inside the "surface" and should thus not count. The images below illustrate how much this additional condition improves the quality. The left-hand side shows Galactica with the reflection over the normal, and the right-hand side without it:
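The condition can be sketched as follows. This is the simple variant that negates the sample vector when it points away from the normal; a true reflection over the normal would be r = 2(v·n)n - v. The `Vec3` alias and function name are hypothetical:

```cpp
#include <array>

using Vec3 = std::array<float, 3>;

float dot(const Vec3& a, const Vec3& b)
{
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// If the angle between the sample direction and the surface normal exceeds
// 90 degrees (negative dot product), the sample lies "inside" the surface.
// This variant negates the sample to bring it back into the normal's
// hemisphere instead of discarding it, so no samples are wasted.
Vec3 faceNormalHemisphere(const Vec3& sample, const Vec3& normal)
{
    if (dot(sample, normal) < 0.0f)
        return {-sample[0], -sample[1], -sample[2]};
    return sample;
}
```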

I also realized that it is possible to use the displacement and normal maps to compute depth values even within the texture space of a single triangle. Normally, a flat surface such as a wall would generate a smooth gradient of depth values. However, if the wall is textured with normal and displacement maps, it should contain small variations in depth which are easily computed given the displacement value and surface normal. The left-hand side below shows a wall with depth information at the mesh level, the right-hand side with depth at the pixel level:

There is still a problem when applying SSAO to the planet surface, since the maximum value of a 16-bit floating point number is around 65,504. However, at least for now, you can see much further than that when you are on the planet surface or in low orbit. I am therefore "blending" out SSAO up to the maximum of a 16-bit floating point value. I am not quite sure if I am doing this right, but at least there are no visible artifacts. You can see in the images below that the peak in the distance has no SSAO applied to it at all. As soon as I figure out how, I will encode the depth across two RGBA16F channels.
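The blend-out might look something like this sketch; the linear ramp and the `fadeStart` value are assumptions, with 65504 being the largest finite 16-bit float:

```cpp
#include <algorithm>

// Hypothetical fade: scale the occlusion term down linearly as the stored
// eye-space depth approaches the largest finite half-float value (65504),
// so distant terrain receives no SSAO instead of artifacts. fadeStart is an
// assumption, not a value from the engine.
float fadeOcclusion(float occlusion, float depth,
                    float fadeStart = 32000.0f, float fadeEnd = 65504.0f)
{
    float t = std::clamp((depth - fadeStart) / (fadeEnd - fadeStart),
                         0.0f, 1.0f);
    return occlusion * (1.0f - t);
}
```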

I have to say that in general my SSAO implementation darkens the final images too much. HDR undoes some of that problem, but I am not quite happy with it. For now I will leave it at that, since my goal is to get the entire rendering infrastructure in place first. Later I will make another pass over everything and refine it.

Unlike many other journals, I again have not posted any code, but all of the shaders are accessible via the Geist3D editor. All you have to do is figure out how to use it :)
I am beginning to finalize the lighting model for Geist3D. With more powerful graphics hardware coming online all the time, it is becoming feasible to perform more rendering passes to compute realistic effects such as shadow maps or deferred lighting. For now, I will try to do everything in one pass, but architect for multiple passes in the future. I have been following Ysaneya's thread on this topic, and it seems that he has already decided to include both shadow mapping and deferred lighting in his engine. I will trust his judgement, and it won't be long until I follow in his footsteps in this regard.

As usual, this article is interlaced with a few of the latest screenshots. You can also download an executable here. According to some reports it didn't run on ATI cards, but I think I fixed that now. Soon I will get a 4890 and develop on an ATI platform for a while. That should sort things out.

There is also a download for the editor at the Geist3D site, but it does not contain the latest changes. For anyone interested, all of the shaders are accessible via the GUI, if you can find your way around it. I am hoping that some shader gurus will point out some improvements. There are certainly lots to be had.

At this point, there will be two rendering passes. The object-space pass renders the 3D scene into a series of frame buffers that will contain color, normals, and depth. The screen-space pass will then further process the frame buffers to perform HDR and bloom, depth of field, motion blur and maybe ambient occlusion culling. At first there will definitely be HDR and bloom, and I will add the other effects over time. I have already implemented bloom and HDR effects in the past, so it shouldn't be difficult to add them again. The main problem will be to nicely integrate the screen space shading pipeline with the editor. I always try to build a user interface for all the components in the engine. It is more work, but it will make the tool more versatile. I hope that once (and if) Larrabee takes hold and real time ray tracing comes along, most of this work won't be in vain.

Clearly, there will not be just one lighting model for all objects, since there are so many different kinds of objects in the environment. I will therefore break it down by category.

Space ships, Characters and Objects in general
After digging around at gamedev and looking at the odd GDC presentation for the past couple of years, it has pretty much become clear to me that other than triangles and vertex normals, every object should contain diffuse colors, normal maps, a specular component, a displacement component and potentially an occlusion component, or a self-illumination value. All of this information should fit into 2 texture maps with RGBA channels.

Since the lighting calculations are all done in tangent space, it is only necessary to store the x and y components of the normal; the z component is always positive. The actual normal can then be reconstructed in the pixel shader. So, here is how I suggest assigning the components of the two textures:

Texture 1
R - diffuse red
G - diffuse green
B - diffuse blue
A - displacement

Texture 2
R - normal X
G - normal Y
B - ambient occlusion
A - specular
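The normal reconstruction mentioned above is a one-liner in the shader; here it is as a C++ sketch (the function name is mine):

```cpp
#include <array>
#include <cmath>

// Reconstruct the tangent-space normal from the stored x and y components.
// Since z is always non-negative in tangent space, z = sqrt(1 - x^2 - y^2).
std::array<float, 3> reconstructNormal(float x, float y)
{
    float z2 = 1.0f - x * x - y * y;
    float z = z2 > 0.0f ? std::sqrt(z2) : 0.0f;  // guard against rounding
    return {x, y, z};
}
```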

I know it is a lot to ask of an artist to create these types of textures, but maybe I can write some software to combine different textures and bake them together. I am not quite sure which components can be computed automatically. I think you can create normal maps from diffuse maps, and specular and displacement maps from the normal map. But I am really not certain about that; maybe it's time that I actually start learning how to use a modeling tool. The ambient occlusion component is difficult, as the surrounding geometry must be known to compute the self-shadowing of a pixel. I am also considering implementing screen space ambient occlusion culling, once I have changed the shaders to collect depth information for each pixel during the object-space rendering pass. So, maybe we can skip the occlusion component and add the Z value of the normal back in. In any case, ambient occlusion adds quite a bit of realism.

For metallic objects like spaceships, I will use anisotropic lighting to add a special metallic look. It is quite simple to implement and adds a nice touch.

The planet shaders are much more specialized and really do not concern artists much as there is little that can be imported, other than the planet texture packs. All of the other parameters are changed within the confines of the editor. The shaders for the planet are certainly not finalized and I will still improve them as time goes on. But, I do like what I see so far. Here is breakdown of the components making up the planets:

Atmospheric scattering
This is a tough one and requires some pretty heavy shaders. I have a pretty good handle now on the physics involved, but the equations cannot be computed in real time. O'Neil's demos and articles certainly help a lot! Yet, there is a lot of fudging that must go on there... This will require more trial and error.

For now, there are already cirrus clouds, but I definitely want to implement volumetric clouds which you can fly into as well. This will add quite a bit of realism and make aerial battles so much more fun. The shaders for these clouds also need a lot more work, and the lighting calculations have to tie in with the atmospheric scattering. Obviously, the cloud color is quite dependent on the time of day. Plus, the volumetric clouds will require yet another planet-wide LOD scheme, which I assume is different from the one for the terrain patches, the flora and the collision tree. Here we go, yet another structure...

For some reason water just eludes me. I see so many nice demos on the web, but I haven't been able to properly recreate them yet. Most realistic water demos use environment maps to determine the reflections that come off the water. I think that it would be difficult to create an environment map for a planet, if it is possible at all. They would certainly have to be viewpoint dependent and would thus have to be computed on the fly. Right now I am already using atmospheric scattering to determine the reflection color, which has made the water much more realistic than before. I am thinking that once I have added shadow maps, there will be enough information to make realistic reflections. However, I still have to get the ripples right :)

I am happy with the trees so far. The shaders certainly need more work, since I didn't pay too much attention to them when I added the trees. I also still have to improve the transition a bit between the geometric trees and the billboards. Plus, I haven't even begun to explore the algorithm that generates the geometry of the trees. Soon I will tie the parameters in with slope, elevation and the global noise function, and then there will be different trees in different regions. I will also add grass asap...

I am happy with texturing the planet surface using the texture packs and slope/elevation blending. It would probably be a good idea to use normal mapping here as well. Also, the surface still pops a lot, especially at low level-of-detail settings, so geo-mipmapping is in the pipeline. Plus, I am still thinking about a completely procedural planet texturing method, but the GPU performance isn't there yet.

So, this is basically it. I realize this is not all about lighting, but most of it is relevant.
I think it's about time for an update! I have been working on a few things here and there. But before I go into details, here are some of the latest screenshots. You can find full-size images and additional YouTube videos in the Geist3D gallery.

The planets really need a cloud cover ... It's in the pipeline.


I finally ended up implementing a client-side prediction algorithm to provide first person shooter style interactivity with real-time responses. After tweaking and experimenting for a while, I came up with a pretty robust and efficient solution using UDP. I have also added chatting capabilities and developed a reliable protocol on top of UDP, which is necessary for picking up objects or entering a spaceship.

There are still improvements necessary to launch a demo server, including a user database and some kind of authentication. That will stay on the back burner for a while longer though. First, I would like to see a little more artwork in the demo and add some kind game play, either combat or skill based.

Planet Rendering

I have also improved how terrain patches are selected for splitting and merging. Until now, this decision was exclusively based on the screen space error of a patch. Now it also depends on the number of patches which are still unused out of the 1300 that are initially allocated. This has the effect that the planets are much more detailed when you are looking at them from space.
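As a sketch of the split decision described above; the threshold formula is purely illustrative, not the engine's actual heuristic:

```cpp
// Illustrative split test only; the engine's real heuristic is not shown.
// A patch splits when its screen-space error is large enough, and the
// threshold is relaxed while many of the pre-allocated patches (1300 in
// the engine) are still free, so distant planets pick up extra detail.
bool shouldSplit(float screenSpaceError, int freePatches, int totalPatches,
                 float baseThreshold = 4.0f)
{
    // More free patches -> lower effective threshold -> more detail.
    float budget = static_cast<float>(freePatches) / totalPatches;  // 0..1
    float threshold = baseThreshold * (1.5f - budget);
    return screenSpaceError > threshold;
}
```

With the whole budget free, a patch splits at a lower error than when the pool is nearly exhausted.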

I have also increased the number of threads used to compute the geometry of the terrain patches. There are a lot of concurrency issues, especially since patches need to access their neighbors in order to compute smooth surface normals along the edges. It seems to me that concurrency at the patch level is too coarse, and multi-threading is not the right way to utilize multiple cores. It will be interesting to see what kind of APIs and examples Intel comes up with for Larrabee.

Collision detection

I have created different collision categories which allow for fairly fine control over how objects collide. For example, a shuttle consists of a triangle mesh and a simple collision shape such as a box. The collision shape is only used for collisions with other triangle meshes, while the triangle mesh will collide with other simple collision shapes, but not with other triangle meshes. This way mesh-mesh collisions can be avoided, but avatars and vehicles can still enter the shuttle bay. Here is an image of a vehicle in the back of a shuttle. The shuttle can actually fly away with the cargo secured in the back, land somewhere else, and then the vehicle can exit the bay; all of it completely seamless.

Game Mechanics

Finally, I have also worked on the dynamics of landing a spaceship on the surface of a planet. Since the planet is generated by a noise function, it becomes expensive to compute the exact shape of the terrain below the spaceship. An easier solution is to sample the surface at three points around an object and then create a plane against which to perform collision detection. This works rather nicely, but it causes some strange artifacts such as the shuttle penetrating the terrain or vice versa (left image). This problem becomes even more pronounced on an uneven surface, especially where the features are smaller than the shuttle. I have therefore come to the conclusion that spaceships can only land on flat surfaces (right picture). In order to enforce this, I have come up with a simple sampling technique that determines how uneven the ground is. Depending on this metric, the spaceship will take damage. If the ground is too rough, then it will get destroyed very quickly. I am quite happy with this approach. It will make for some interesting game play. Very large ships that may transport a lot of cargo can only land on certain spots; maybe not at all on some planets. Of course, there is always the possibility to construct a landing platform, where ships can safely land without taking damage.
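The roughness test could be sketched like this; the center-plus-corners sampling pattern and the `height` callback are assumptions for illustration:

```cpp
#include <algorithm>
#include <functional>

// Hypothetical roughness metric: sample the terrain height at the center and
// four corners under the ship and measure the spread. A large spread means
// the ground is too uneven to land on safely; damage could then scale with
// the returned value.
float groundRoughness(const std::function<float(float, float)>& height,
                      float cx, float cz, float radius)
{
    const float offs[4][2] = {{-1, -1}, {1, -1}, {-1, 1}, {1, 1}};
    float minH = height(cx, cz);
    float maxH = minH;
    for (const auto& o : offs) {
        float h = height(cx + o[0] * radius, cz + o[1] * radius);
        minH = std::min(minH, h);
        maxH = std::max(maxH, h);
    }
    return maxH - minH;
}
```

Perfectly flat ground returns zero; a slope or small features under the hull return the height spread across the ship's footprint.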


Today I will talk a little bit about covering a planet surface with trees. I actually did this work a while ago, but instead of reporting my progress on networking, I decided to change the topic. The networking infrastructure is progressing well though, and I will talk about it soon.

I had to rework the tree engine because the location of the trees depended on the tessellation of the planet. The trees were only generated around the viewer's position, using the triangle mesh of the surface to determine the location of each tree. This technique is sufficient for rendering, but not for collision detection. A ball rolling around on the other side of the planet, for example, should still collide with the trees there. Especially in a networked setting, the server will not tessellate the planet at all, but still has to perform collision detection.

The trick was to find an architecture that supports quick searches for the nearest trees at any point on the planet surface. The solution I came up with was to cover the entire sphere with a network of tree patches each containing a random sample of trees. This has reduced the problem to first locating the nearest patch(es) and then searching for the appropriate trees inside the patch. The tree distribution of the patches is pre-determined and optimized for quick searches. It required a lot of experimentation though in order to generate a grid of patches that spans the entire planet. The patches and the random patterns they contain ultimately only specify the longitude and latitude where a tree could potentially be located. Various noise functions still have to be evaluated in order to determine if there actually is a tree and what its final elevation will be.
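Locating the nearest patch reduces to mapping a position to a cell in a grid; a minimal sketch, assuming square patches addressed by longitude and latitude in degrees (the real layout on the sphere is more involved):

```cpp
#include <cmath>
#include <utility>

// Minimal sketch of the patch lookup. Each patch stores a pre-computed
// random tree pattern; noise functions still decide afterwards whether a
// candidate location inside the patch actually holds a tree and at what
// elevation. Longitude is assumed in [-180, 180), latitude in [-90, 90).
std::pair<int, int> treePatchIndex(float lonDeg, float latDeg,
                                   float patchSizeDeg)
{
    int col = static_cast<int>(std::floor((lonDeg + 180.0f) / patchSizeDeg));
    int row = static_cast<int>(std::floor((latDeg + 90.0f) / patchSizeDeg));
    return {col, row};
}
```

Once the patch is known, only its local tree pattern has to be searched, no matter where on the planet the query lands.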

I am pretty happy with the new tree engine now. Although I haven't implemented the collision code yet, I can be certain that it's possible to find any tree on the planet and detect collisions without having to triangulate the surface first. While I was at it, I also improved the transition between the detailed trees and the billboards. The blending has become much smoother and the trees now span so far into the distance that it becomes difficult to notice how they appear.

The images above show a scene that uses 25 tree patches of 6km side length which are constructed around the viewer. Of course, most of the patches aren't visible and only a few are actually drawn. Each patch contains about 6000 trees, but only the nearest 20 or so are rendered as geometries; the rest are all billboards. The images below show a lesser density of trees as it is currently implemented in the download.

Right now the trees are all the same, but Geist3D already uses an algorithm to generate this single tree. Once I get around to it, I will add variations according to elevation, slope, etc.
I am trying to keep up with my journal as much as I can. For the most part though, I find myself reporting more on overall progress rather than technical details like some of the others do. However, some of the details, especially relating to shading programs can be found in the download of the Geist3D editor. You first have to get used to the interface, but double-clicking the shading nodes (Pr.) or the planet and camera nodes in the Treeview will bring up a GLSL editor showing all the different shaders that make up the demos. You can also find a rudimentary manual and a few tutorials on how to use the editor in the Geist3D wiki.

This entry is about networking though, and since this is not too visual, I will just add some intermittent images, although they have nothing to do with the entry. The images are somewhat older too, but then they are just designed to keep things interesting. I will soon write another entry with more recent visuals.

Today, I finished the infrastructure necessary to propagate keyboard and mouse events to the server and all the clients. Before, data was sent after every frame, but now I am using a sliding time window where updates are sent every four frames or later depending on the network traffic. This means that a client who controls an avatar accumulates the keyboard and mouse events over a period of time before they are sent to the server. The server then passes on this input immediately to all the clients, including the one from which it was received in the first place. At the same time, the server uses the input to control its own representation of the character over the given time period. Once a client has received the input, it in turn uses the data to control its own version of the character. For example, once you hit 'w', first the character on the server begins to walk, and then about 100ms later the ones on the clients begin to move. This is working quite well, and if no packets get lost, all versions of the avatars are at exactly the same location about 200ms after the controlling client has released the 'w' key. Just in case, I also send the final position of the avatar a few hundred milliseconds after the 'w' key is released. All collisions with the static environment are resolved locally on the clients and the server. Thus, even if you are walking into a wall, in no instance does the character penetrate the wall on any of the computers running the simulation.

There are some problems with this approach though. First of all, it takes a noticeable amount of time before your character begins to walk when you hit a key, because the keyboard event first has to go to the server and then come back. This effect is even more noticeable when you are trying to turn using the mouse. In my opinion this delay is not acceptable, and I never noticed it during my BF1942 days. That game would have been unplayable with those kinds of delays. So, the solution is that the character on the controlling client begins to walk right away without waiting for the IO state from the server. This is really not a problem and will work well in a purely static environment where there are no other dynamic objects. Since a client does its own collision detection with the static environment, its avatar will in the end still come to rest in the same place as on the server and the other clients. The problem only begins once you introduce other dynamic objects. Right now, the clients don't resolve collisions among dynamic objects, because that didn't work too well before.

So, imagine this: you are standing right in front of a box that is moveable, and now you hit 'w'. If the client waits on keyboard input from the server, it will at the same time get an update on the position and velocities of the box, because the collision will have occurred on the server. In this case, the box will move as if you were hitting it. But there are still some problems, because the server sends you updates on your character's position as well, since you are a dynamic object and you have collided with another dynamic object. The effect is that the client will have to update the character's position according to the keyboard input and subsequent animation sequence and, at the same time, incorporate the correction due to the collision. This could, actually it will, cause the character to jitter. So, I believe the solution must be that the client also resolves collisions among dynamic objects. In this case, the character on the controlling client's side can begin to walk right away without waiting for the IO state from the server, and it will correctly collide with the box and move it out of the way. So far so good, but the problem now is the time delay. Imagine this box is moving and the character on the client collides with it, but by the time the server moves the character, it has missed the box, and the collision thus never occurred on the server. Now things are out of sync; that's why I never wanted to allow dynamic objects on the client to collide in the first place. My original idea was to detect dynamic collisions on the server only and then send new positions and velocities to the clients. The clients then use that information to plot the path of, in this case, the box until a new collision occurs on the server. Remember, collisions with the static environment are always resolved on the client as well. So, there will be no new updates for the box until another collision with a dynamic object occurs.

So, if the client resolves collisions and the server resolves collisions, things can get out of sync, especially if there are other avatars walking around. The solution, I think, lies in how to update the clients. The server will still have to send information about an object's position and velocities, but this information has to be used differently. Right now, I am simply replacing the position and velocities with those from the server. But this can make things even worse. Imagine the server sends an update for an object which causes it to deeply intersect with another dynamic object. The clients' physics engine will then compute correction forces that make the objects explode, even though this never occurred on the server. An update for the other object that caused the explosion might still be on the network, but by the time it arrives it will be too late. The solution here, I think, is not to simply replace the clients' information with that from the server, but rather to apply linear and angular correction forces that ensure that the local object arrives at the position and orientation dictated by the server in a reasonable amount of time. This way, at least objects won't jitter around, and it's possible to start moving the local avatar, shuttle or vehicle right away. By using correction forces, it may also not be necessary for the server to send location and orientation as frequently, because the forces are designed to reach the final position. Things can still get out of sync, but over time they will gradually converge on the same state. Again, the idea is that this will occur smoothly, without jittering. Tomorrow I will begin working on this problem...
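The correction-force idea can be sketched in one dimension; the proportional gain and damping constants are assumptions for illustration, not values from the engine:

```cpp
// One-dimensional sketch of smooth server correction: instead of snapping
// the client object to the server position, nudge its velocity toward the
// error so it converges over several frames without jittering. The gain
// and damping values are illustrative only.
struct State { float pos; float vel; };

State correctTowardServer(State client, float serverPos,
                          float gain, float damping, float dt)
{
    float error = serverPos - client.pos;
    client.vel += gain * error * dt;  // proportional correction "force"
    client.vel *= damping;            // damping prevents oscillation
    client.pos += client.vel * dt;
    return client;
}
```

Called once per frame, this moves the object part of the way toward the server state each step and leaves an already-synchronized object untouched.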

I have also realized that restrictions will have to be placed on the magnitude of the velocities that objects can achieve. Imagine a ball rolling around a corner on the server, but on the client it didn't quite make it. Now, the server will send the client correction information that will lead it right through the wall. But since the client does collision detection, the ball will never make it to the desired position. This is a big problem, and I can think of several others along that line. By limiting the magnitude of the velocities according to the width of the thinnest wall, this may be avoidable. Geist3D should probably also do temporal collision detection, where collisions aren't just resolved at the location where the objects are at a point in time, but along the swept volume between the last position and the current position. I am pretty sure this is standard in most games now, and it is probably not too difficult to implement; at least a rudimentary version. With this type of collision detection, the above scenario can be detected and corrected.

In any case, the dynamic objects in a game are probably limited to avatars, shuttles, vehicles and maybe some fancy projectiles, so I am going a bit overboard with 40 balls being pushed around by an avatar in close quarters. I don't see much value in objects like balls and spheres unless they can add to the game experience.

Well, this was a pretty big entry. Only took a couple of beers to write. I hope it brings across some of the problems that networking introduces to the story.

Improved Planets

I have found some time to improve the planet rendering engine again. Most importantly, the noise function that generates the surface is now much faster than before. I have basically incorporated many of the enhancements to the improved Perlin noise which are posted all over the web, and on this site. I would say that the performance is now an order of magnitude better than before, and it can probably still double one more time by using SIMD instructions.

Right now I am only using one separate thread to compute the terrain patches as needed, but I will probably start a second thread soon. My dual core processor still has lots of resources available, and so far Geist3D is mainly GPU bound. I am looking forward to a quad core processor that can run 8 simultaneous threads. At that point, I will experiment with even more complex noise functions similar to the ones in libnoise.

I have also changed the way the planet is textured. Before, a 2D lookup table was used to determine the color for a pixel with a given slope and elevation. A bit of GPU noise was used to mix the colors and perturb the surface normals. In this new version, the lookup table is used to select a texture in a 4x4 texture pack, just as it is done by Infinity. In the spirit of Geist3D, I have also added a little interface widget to select the texture pack and create the lookup table interactively. Here are a few images of the engine, texture pack and interface widget.

The borders between adjacent textures are still quite rigid and I will have to make another texture lookup in order to blend in the adjacent texture. The lookup table is a 256x256 texture where R and G contain the x-y coordinate of a tile in the texture pack. The slope and elevation for a given pixel correspond to the uv coordinates for the lookup table. The trick is going to be to find the adjacent tile and determine by how much to blend them. I am thinking of using the remaining B and A components in the lookup table to encode the index of the next texture tile by elevation or slope, as well as a blend factor. I haven't gone any further with that idea though....
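A sketch of the lookup, assuming the table is stored as 8-bit RGBA entries; the names `TileEntry` and `tileFromLookup` are mine:

```cpp
#include <array>
#include <cstdint>
#include <vector>

// Hypothetical lookup: each entry of the 256x256 table stores the x-y
// coordinate of a tile in the 4x4 texture pack in its R and G channels;
// slope and elevation (both 0..1) are the uv coordinates into the table.
// The free B and A channels could later encode a neighboring tile and a
// blend factor, as discussed above.
struct TileEntry { std::uint8_t r, g, b, a; };

std::array<int, 2> tileFromLookup(const std::vector<TileEntry>& table,
                                  float slope, float elevation)
{
    int u = static_cast<int>(slope * 255.0f);
    int v = static_cast<int>(elevation * 255.0f);
    const TileEntry& e = table[v * 256 + u];
    return {e.r % 4, e.g % 4};  // clamp to the 4x4 pack
}
```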

I am not quite sure that the texture packs are the final solution and I am still supporting the previous color map based method, although I haven't improved much upon it. In any case, with the editor it is simple enough to choose one method or the other.

There are downloads available at the Geist3D website that demo these improvements. I get pretty good performance on a GeForce 8800 GT. Even at the planet surface the fps stays above 40. I don't know how well you are going to do with an ATI card though.
It has been a while again! I was hoping to do better than 4 posts a year. Maybe next year... Lately, I have been busy working on the editor. Among minor bug fixes, I have also improved the syntax-highlighting for the GLSL and Lua source code editors and added debugging capabilities to the point that double-clicking on an error message will bring up the proper editor and select the line which caused the error.

Most importantly, I have reworked how cameras render the scene and added a Framebuffer object. Now, every camera implicitly renders the scene into every framebuffer it has as a child in the scene tree. It is then possible to use the Lua scripting interface and shader editors to create a whole range of screen space post-processing effects.

The following two images show two examples of post-processing. The window in the left image shows the scene in the background with a bloom effect, and the window in the right image shows how the exposure is decreased according to the distance from the center of the image.

Theoretically, it should now be possible to create all kinds of screen-space effects using the editor including screen space ambient occlusion culling and deferred lighting. However, I will leave that for someone else to try as I am going to work on the planet rendering engine for a while.
I am finally getting around to writing another entry. I hoped that I would post more frequently, but I guess this is the best I can do for now. Recently, I have been busy improving the terrain rendering engine of Geist3D for a research project to model coastal surveillance scenarios. The most interesting challenge was to generate a model of the coastal regions of British Columbia using digital elevation models (DEM) and Landsat 7 satellite images.

I am basically using the same patch-based LOD algorithm as for planet rendering, except that the terrain is flat and the vertices come from a file rather than a noise function. Rendering really wasn't much of a problem, as most of the tessellation algorithm was already implemented in Geist3D. In a nutshell, a separate thread reads terrain patches from a file and then generates the triangle meshes as the user navigates the scene.

The challenge was to generate the huge quad tree necessary to represent the terrain at all levels of detail. For that, I built an application that merges the DEM files and satellite images into a single file in the Geist3D terrain format. Each vertex contains the elevation and three color values. Since the DEM files and pan-sharpened satellite images have approximately the same resolution, each vertex in the final model has a unique color.
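To make that concrete, a per-vertex record along the lines described above might look like this. The field names, types and patch size are hypothetical, chosen only for illustration, not the actual Geist3D terrain format:

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical per-vertex record for the merged terrain file: one elevation
// value from the DEM plus an RGB color sampled from the pan-sharpened
// Landsat7 imagery, so every vertex carries a unique color.
struct TerrainVertex {
    float   elevation;   // meters above sea level, from the DEM
    uint8_t r, g, b;     // color from the satellite imagery
};

// Each quad-tree patch stores a fixed grid of such vertices; 33x33 is an
// assumed resolution, not taken from Geist3D.
constexpr int    kPatchSize        = 33;
constexpr size_t kVerticesPerPatch = kPatchSize * kPatchSize;
```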

The screenshots are from a model covering the entire coastal region of British Columbia at 15 meter spatial resolution. The simulation runs in real time on a dual-core 2.1 GHz Pentium processor using an ATI 1950 Pro (256 MB) graphics adapter. The terrain file is 3.1 GB in size and contains 652,434 patches. After startup, Geist3D pre-allocates 1200 terrain chunks on the GPU, which occupy approximately 68 MB of graphics memory, leaving plenty of room for other objects. Even at the highest resolution, which consumes all 1200 patches, the frame rate consistently remains above 20 frames per second.

There are numerous details involved in building this model, including sharpening the satellite images and smoothing the transitions between different levels of detail. If you have any questions about how it's done, send me a message. You can find a little more information and some movies at www.geist3d.com.

I will post again soon and introduce the recent improvements to the Geist3D planet rendering engine. The planets now include tree cover....
As promised, today I will talk a little bit about tessellating an entire planet. The concept is actually quite simple:

Subdivide each face of a cube into four quads and push the new vertices onto the surface of the sphere. Continue this process for every new quad and you will generate a set of patches that cover the entire sphere. Not evenly, but good enough. Each patch is then covered with a triangle mesh whose vertices are also pushed onto the sphere. Instead of mapping the vertices exactly onto the sphere, a noise function can be used to generate slight displacements that result in surface features. To generate an Earth-sized planet, however, many noise octaves are needed for each vertex. It basically requires a dual-core processor, where one thread constantly computes new terrain patches as they are needed. Some of the vertices can be recycled from the previous patch, but at least half of them have to be computed on the fly.
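The core of the subdivision step boils down to normalizing cube-face points onto the sphere and displacing them radially. Here is a minimal sketch, with a plain height parameter standing in for the multi-octave noise function:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Map a point on a cube face onto a planet surface: normalize it onto the
// unit sphere, then push it out along the radial direction by the planet
// radius plus a terrain displacement. In Geist3D the displacement comes
// from a multi-octave noise function; here it is just a parameter.
Vec3 cubeToSphere(Vec3 p, float radius, float height) {
    float len = std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
    Vec3 n = { p.x / len, p.y / len, p.z / len };  // point on unit sphere
    float r = radius + height;                     // displaced radius
    return { n.x * r, n.y * r, n.z * r };
}
```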

I have looked at a number of other terrain rendering algorithms, in particular ROAM, geoclipmaps, geomipmaps and chunked LOD. ROAM certainly produces the best tessellation, but it is also the most inefficient and simply not practical; at least for now. The algorithm described here is a sort of chunked LOD, except that the borders between terrain patches are properly stitched up with triangles. In order to keep the complexity low, two adjacent terrain patches may only differ by one level of detail. This restriction also makes it possible to link up all the patches and allow fast access to the neighbors, which becomes important when computing normals and tangent space. That is it for today, although there is a lot more to be said about rendering planets; I will cover more aspects in the future. Check out the Geist3D website for movies and actual software downloads.


I was going to talk about planet rendering today, but since I have worked on 3D manipulators and navigation for the past couple of days, I decided to cover this topic while it's still fresh in my mind. The term manipulator, which I believe was coined by the Open Inventor library, refers to an interface widget designed to manipulate 3D content; i.e., to move, scale or reshape objects.

Many 3D modeling tools use four split windows as an interface to edit 3D content. Three windows show the model from viewpoints aligned with each of the primary axes, and the fourth window displays a fully rendered model. The three axis-aligned windows support simple interface widgets to move and scale objects perpendicular to the corresponding axis. In Geist3D, I decided to invert this concept and use only one editing window, but support more complex 3D manipulators that allow you to move and scale objects from any viewpoint.

The above image shows two manipulators. Each constructs a symmetrical, mouse-sensitive editing region covering the six faces of the object's bounding box. Thanks to this symmetry, the same interface is accessible regardless of the viewpoint. The manipulator at the top of the image is designed to rotate and translate an object: dragging any one of the stippled planes translates the object along that plane, while the bars on the edges rotate it around the axis aligned with the bar. The bottom half of the picture shows the scale manipulator; each of the tabs surrounding the object scales along a different axis.

The manipulators also support additional functions which are activated by pressing keys while dragging an editing region. For example, clicking on a face of the translating manipulator while pressing the Alt-key locks that axis so that the object can only be moved along a line rather than a plane. This feature is very useful when laying out dominos or building a brick wall.
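Under the hood, this kind of dragging reduces to intersecting the mouse ray with the manipulator's drag plane, and the Alt-key axis lock to projecting the resulting delta onto a single axis. A rough sketch with illustrative names (not the actual Geist3D API):

```cpp
#include <cmath>

struct V3 { float x, y, z; };

float dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Intersect a ray (origin o, direction d) with a plane (point p0, normal n).
// Returns the ray parameter t of the hit, or -1 if the ray is parallel.
float rayPlane(V3 o, V3 d, V3 p0, V3 n) {
    float denom = dot(d, n);
    if (std::fabs(denom) < 1e-6f) return -1.0f;
    return dot({p0.x - o.x, p0.y - o.y, p0.z - o.z}, n) / denom;
}

// Axis lock: keep only the component of the drag delta along 'axis'
// (assumed unit length), so the object moves on a line, not a plane.
V3 lockToAxis(V3 delta, V3 axis) {
    float s = dot(delta, axis);
    return { axis.x * s, axis.y * s, axis.z * s };
}
```

The hit point from successive frames gives the drag delta; with the axis lock active, that delta is filtered through `lockToAxis` before being applied to the object.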

The disadvantage of this editing style is that it requires you to navigate the viewpoint into the right position in order to edit an object. However, once that skill has been mastered, it becomes much easier to work in very large environments by simply flying to the desired place and editing in place. Of course, a prerequisite is that navigating through the environment is easy enough. Geist3D therefore supports a number of ways to move around, including the traditional w,a,s,d keyboard style as well as flying and orbiting using the mouse pointer, buttons and wheel.

The jury is still out on the interface, but I have become very effective at using it. But then, I am also the one who built it....

The beginning...

The first entry of my development journal is finally being written. I have been wanting to do this for the past five years, and now there is a lot of catching up to do. This journal will basically retrace the development challenges of the Geist3D graphics engine and development studio. I began work on Geist3D about six years ago during my Ph.D. research. The goal back then was to produce a simulation tool for manufacturing systems using Petri Nets and 3D computer graphics. By now, the direction has changed and the goal is to produce a more general graphics engine. To be honest, I would like to see Geist3D used as a game engine for an online multi-user game. But more about that some other time....

I think that I will start the introduction with a section about the graphics engine and one about the development environment, or editor. Each section will contain a few pictures and a short description. In the future, I will add new journal entries to discuss different aspects of the system.

Graphics Engine

The Geist3D graphics engine was built from the ground up using C++ and OpenGL 2.0. The rendering pipeline is based on a standard scene tree architecture where different types of nodes encapsulate graphic artifacts such as geometries, textures and shading programs. The number of nodes has grown considerably to cover a range of features including:

  • Spherical and planar terrains

  • Rigid body physics

  • Skeletal character animation

  • Lua scripting

  • Petri Nets

  • OpenGL shading programs

  • Cameras and textures

  • Collision and proximity sensors

  • 2D user interface widgets

The features are at different levels of completeness, but the architecture is in place to expand each component quickly as need be. Below are a few real-time screenshots and .avi movies of the graphics engine. Most are of planets and flat terrain, but you can find additional information on the Geist3D website. Most of the site is still under construction, and new media will become available over time.

Each of the .avi movies is around 3-5 MB and requires the DivX codec. Note that the popping on the planetary terrain has been reduced quite a bit, but no new movies have been made yet.


Development Studio

The Geist3D development studio supports interactive editing of 3D content, Petri Nets, Lua scripts and GLSL shading programs. It uses the graphics engine as a rendering window and adds 3D user interface widgets, source code editors and a Petri Net layout tool. At this point, Geist3D supports the .3DS file format for triangle meshes and md5 for characters, as well as a variety of heightfield formats for flat terrains. The idea is that the editor will one day be used to develop content for an online virtual world by populating planets with settlements or interactive content such as games and puzzles.

That's it for now. I will try to update this journal once every couple of days by introducing a new aspect of Geist3D. Stay tuned and check out the software downloads at www.geist3d.com. Keep in mind that Geist3D is still in its Alpha stages. You can certainly look at the models included with the download, but I doubt that you will have any luck using the editor to construct your own content. Hopefully that will change soon...


