
Tech Demo Video 2010

It's been many years since the release of the last video showcasing the seamless planetary engine, so I'm happy to release this new video. This is actually a video of the game client, but since there's little gameplay in it, I decided to label it as a "tech demo". It demonstrates an Earth-like planet with a ring, seamless transitions, a little spaceship ( the "Hornet" for those who remember ), a space station and a couple of new effects.

You can view it in the videos section of the gallery.

Making-of the video

Before I get into details of what's actually shown in the video, a few words about the making-of the video itself, which took more time than expected.

What a pain ! First of all, it took many hours to record the video, as each time I forgot to show something. In one case, the framerate was really low due to the heavy stress of dumping a 1280x720 HQ uncompressed video to disk. The raw dataset is around 10 GB for 14 minutes of footage.

14 minutes ? Yep, that video is pretty long. Quite boring too, which is to be expected since there's no action in it. But I hope you'll still find it interesting.

Once the video was recorded, I started the compression process. My initial goal was to upload an HQ version to YouTube and an .FLV for the video player embedded on the website. The second was quite easily done, but the quality after compression was pretty low. The bitrate is capped at 3600 kbps for some reason, and I didn't find a way to increase it. I suspect it's set to this value because it's the standard for Flash videos.

I also wanted to upload an HQ version to YouTube to save bandwidth on the main site, but so far it's been disappointing. I tried many times; each time, YouTube refused to recognize the codec I used for the video ( surprisingly, H264 isn't supported ). After a few attempts I finally found one that YouTube accepted, only to discover that the video was then rejected due to its length: YouTube has a policy of not accepting videos that are more than 10 minutes long. What a waste of time.

So instead I uploaded it to Dailymotion, but it's very low-res and blurry, which I cannot understand since the original resolution is 1280x720; maybe it needs many hours of post-processing, I don't know. There's also now a two-part HQ video uploaded to YouTube: part 1 and part 2. If you're interested in watching it, make sure you switch to full screen :)

Content of the video

The video is basically split in 3 parts:

1. Demonstration of a space station, modelled by WhiteDwarf and using textures from SpAce and Zidane888. Also shows a cockpit made by Zidane888 ( I'll come back to that very soon ) and the Hornet ( textured by Altfuture ).

2. Planetary approach and visit of the ring. Similar to what's already been demonstrated in 2007.

3. Seamless planetary landings.


I've been very hesitant about including the cockpit in the video, simply because of the expectations it could potentially generate. So you must understand that it's an experiment, and in no way guarantees that cockpits will be present for all ships in the game at release time. It's still a very nice feature, especially with the free look around. You will notice that you can still see the hull of your ship outside the canopy, which is excellent for immersion. Note that the cockpit isn't functional; if we do integrate it into the game one day, I would like all instruments to display functional information, buttons to light on/off, etc..


The backgrounds you see in the video ( starfield, nebula ) are dynamically generated and cached into a cube map. This means that if you were located in a different area of the galaxy, the background would be dynamically refreshed and show the galaxy from the correct point of view.

Each star/dot is a star system that will be explorable in game. In the video, as I fly to the asteroid ring, you will see that I click on a couple of stars to show their information. The spectral class is in brackets, followed by the star's name. At the moment, star names use a unique code based on the star's location in the galaxy. It is a triplet formed of lower/upper case characters and numbers, like q7Z-aH2-85n. This is the shortest representation I could find that would uniquely identify a star. The name is then followed by the distance, in light-years ( "ly" ).
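As a rough illustration of how compact such an encoding is ( a sketch of my own; the actual naming scheme isn't described beyond the triplet format ), nine base-62 characters are enough to uniquely number about 1.35e16 stars, far more than a galaxy contains:

```cpp
#include <cstdint>
#include <string>

// Hypothetical encoding: 9 digits in base 62 ( 0-9, a-z, A-Z ),
// grouped into triplets like "q7Z-aH2-85n".
static const char kAlphabet[] =
    "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";

std::string encodeStarName(uint64_t id) {
    char digits[9];
    for (int i = 8; i >= 0; --i) {   // least significant digit last
        digits[i] = kAlphabet[id % 62];
        id /= 62;
    }
    std::string name(digits, 9);
    name.insert(6, "-");             // split into three triplets
    name.insert(3, "-");
    return name;
}
```

Any scheme of this shape maps a star's unique index ( e.g. derived from its location ) to a short, human-readable code.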

I still have to post a dev journal about the procedural rendering of the galaxy on the client side, in which I'll come back to all the problems I've had, especially performance-related ones.


I'm not totally happy with the look of the planet, so it is likely that in the future I will do at least one more update of the planetary engine. There are various precision artifacts at ground level, as the heightmaps are generated on the GPU in a pixel shader ( so they are limited to 32 bits of floating-point precision ). I've also been forced to disable the clouds, which totally sucks as it completely changes the look & feel of a planet seen from space. The reason is that I implemented the Z-buffer precision enhancement trick that I described in a previous dev journal, and it doesn't work as expected. With clouds enabled, the cloud surface horribly Z-fights with the ground surface, which wasn't acceptable for a public video. At the moment I use a 32-bit floating-point Z-buffer, reverse the depth test and swap the near/far clipping planes, which is supposed to maximize Z precision.. but something must have gone wrong in my implementation, as I see no difference with a standard 24-bit fixed-point Z-buffer.
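For reference, here's a small numeric sketch ( my own illustration, not the engine's code ) of what the trick is supposed to achieve: with the near/far planes swapped and the depth test reversed, the near plane maps to depth 1.0 and the far plane to 0.0, so the float format's dense precision near zero covers distant geometry, which is exactly where clouds and ground fight:

```cpp
// Standard perspective depth is d = zFar*(z - zNear) / (z*(zFar - zNear)),
// giving 0 at the near plane and 1 at the far plane. Swapping the planes
// reverses the mapping:
float reversedDepth(float z, float zNear, float zFar) {
    return zNear * (zFar - z) / (z * (zFar - zNear));
}
```

With this mapping, distant depth values land near 0.0, where a 32-bit float has far more representable values than a fixed-point buffer has near 1.0.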

The terrain surface still lacks details ( vegetation, rocks, etc. ). I still have to implement a good instancing system, along with an impostor system, to get acceptable performance while maintaining a high density of ground features.

Look & Feel

Don't think for one second that the "look & feel" of the camera and ship behavior in this video is definitive. I'm pretty happy with the internal view and the cockpit look, but the third-person camera still needs a lot of work. It theoretically uses a non-rigid system, unlike the ICP, but it still needs a lot of improvements.


As you may notice, the ship's thrusters correctly fire depending on the forces acting on the ship and the desired accelerations. Interestingly, at any given point in time, almost all thrusters are firing, but for different reasons. First, the thrusters that are facing the planet fire continuously to counteract gravity. It is possible to power down the ship ( as seen at the end of the video ), in which case the thrusters stop working. Secondly, many thrusters fire to artificially simulate the drag generated by the auto-compensation of inertia. For example, when you rotate your ship to the right and then stop moving the mouse, the rotation will stop after a while. This is done by firing all the thrusters that would generate a rotation to the left. Of course, some parameters must be fine-tuned.
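The selection logic can be sketched like this ( a simplified model of my own, not the actual flight code ): a thruster fires when the force it produces has a positive component along the desired acceleration, with a throttle proportional to the alignment:

```cpp
struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// A thruster helps if its thrust direction points along the desired
// acceleration; opposing thrusters stay off ( throttle clamped to 0 ).
float thrusterThrottle(const Vec3& thrustDir, const Vec3& desiredAccel) {
    float d = dot(thrustDir, desiredAccel);
    return d > 0.0f ? d : 0.0f;
}
```

Hovering is then just the case where the desired acceleration is "up" ( opposite to gravity ), so only the planet-facing thrusters fire; inertia compensation is the same rule applied to a desired counter-rotation.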

When the ship enters the atmosphere at a high velocity, there's a friction/burning effect done in shaders. It still lacks smoke particles and trails.

This video will also give you a first idea of how long it takes to land on or take off from a planet. The dimensions and scales are realistic. Speed is limited at ground level for technical reasons, as higher speeds would make the procedural algorithms lag too far behind, generating unacceptable popping. At ground level, I believe you can fly at modern airplane speeds. A consequence of this system is that if you want to fly to a distant location on the planet, you first have to fly to low orbit, then land again near your destination point.




Anti-hacking idea

I've been thinking about a solution to prevent hackers from releasing a pirated version of the game that gives unfair advantages. Of course, all the critical data and calculations will be done on the server side, but protecting the client from "cheap hacks" ( like aimbots ) is necessary too. This is for an online, client/server-based game.

So here is the idea: each time a client wants to connect to the server, it has to download & replace its client EXE from the server, pretty much like a patch.. but every time it connects. The trick is, the downloaded EXE is different each time and incompatible with its previous version, including the network protocol, function addresses, etc..

When the server detects an incoming connection, it picks one EXE from a pool ( which can be as large as you want ) and sends it to the client. Each EXE uses a different network protocol, and once the download is complete, the server is set to only understand the specific protocol corresponding to that client. If there's still no reconnection after, say, 10 seconds, the process has to restart from the beginning.
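A toy model of the idea ( the names and the XOR "protocol" are placeholders of my own; a real variant would change far more than one key ): each EXE variant encodes packets differently, and the server only accepts packets matching the variant it just sent to that client:

```cpp
#include <cstdint>

// Each generated EXE embeds a different key, standing in for a whole
// incompatible protocol ( different opcodes, addresses, layouts, ... ).
struct ExeVariant {
    uint32_t protocolKey;
};

// Client side: encode a packet the way this EXE variant does.
uint32_t encodePacket(uint32_t payload, const ExeVariant& v) {
    return payload ^ v.protocolKey;
}

// Server side: only the variant handed to this client is understood,
// so packets from an old or hacked EXE simply don't decode.
bool serverAccepts(uint32_t packet, uint32_t expectedPayload,
                   const ExeVariant& sentVariant) {
    return (packet ^ sentVariant.protocolKey) == expectedPayload;
}
```

A stale or redistributed EXE fails the check the moment the server rotates to a new variant, which is the core of the scheme.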

Some drawbacks I can see:
- additional bandwidth (if the EXE is 1 MB, that's 1 MB of bandwidth consumed each time a client connects).
- the server has to understand all the possible protocols at the same time.
- the number of client EXEs has to be pretty large.
- each time the client EXE is patched/improved (by the developers), many versions have to be recompiled.
- not 56K-friendly. ADSL is no problem, since downloading 1 MB only takes a couple of seconds.. but 56K modem users will have to be patient.

The advantages:
- if a hacker modifies his local EXE, the server will send him a new one when he connects. If the hacker ignores the newly downloaded EXE and tries to connect to the server with his hacked version, the protocols will be incompatible and the connection refused.
- if the hacker downloads a new EXE, wants to hack it and connect to the server with this hacked version, he has to do it in less than 10 seconds.
- a hacked version cannot be redistributed because it would be invalidated as soon as average-Joe connects to the server.

The flaws..?
- could the hacker modify the EXE in less than 10 seconds ?
- still does not fix the content-modification problem (like modifying textures/models or external DLLs).





Terrain engine


- clouds don't have a coriolis effect
- no storm effect
- motion blur / bluriness in starfield
- clouds have patterns
- terrain too blue
- atmosphere too thick
- over saturation to white
- areas too sharp / contrasted
- terrain only based on altitude / too randomized
- texture pack for ship is too low contrast / flat
- jagged / aliased shadows
- too strong stars reflections
- lack of bloom
- star field edges are visible
- only one type of clouds

I'm not complaining, just noticing. I'm getting more and more scared of posting updates. The amount of anticipation, hype and expectation is rising, and honestly, while many of those remarks are valid and will be fixed, many of them are just not on my todo list.

Take the comment about jagged shadows, for example. I've explained at great length in past dev journals that a technical choice had to be made between shadow resolution / aliasing, shadowing range and performance. If you increase the range and keep the same number of shadow maps, you'll get more aliasing. If you increase the shadow resolution or the number of shadow maps to decrease the aliasing while keeping the same shadowing range, you'll hurt performance ( which is already quite low ).
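The tradeoff is easy to put in numbers ( illustrative values of my own, not the engine's actual settings ): the world-space size of one shadow-map texel grows linearly with the covered range, so doubling the range at a fixed resolution doubles the aliasing:

```cpp
// World-space width of one shadow-map texel for a map covering a
// given range: range / resolution.
float texelSizeMeters(float coveredRangeMeters, int shadowMapResolution) {
    return coveredRangeMeters / (float)shadowMapResolution;
}
```

A 2048x2048 map covering about 1 km of terrain already gives half-meter texels; cover twice the range and every shadow edge gets twice as blocky, unless you pay for more or bigger maps.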

It's a bit annoying as a programmer to say "this can't be fixed" or "I don't have more time to spend improving that", but really, I have to make progress on the game itself.. all I'm saying is: nit-pick as long as you want, but don't expect everything to be perfect at release time.

Screenshots time

Sorry for the lack of anti-aliasing, I forgot to enable it before taking those pictures, and I didn't want to spend another half an hour just to take a new set.

Behold, lots of pictures today !

Terrain texturing, sun shafts / god rays, vegetation ( not procedural, modeled and textured by SpAce ), etc...




A retrospective on the Infinity project

Hey everybody, long time no see, Ysaneya here ! I haven't posted in the past 6 years, if I count well. Most of you probably don't remember me, but the few of you who do should remember the Infinity project and how it all started back in 2005. It started with a dream, one made of stars and full of procedurally-generated planets to visit. At the time, Elite was a long-forgotten franchise and nobody was working on a procedural universe. I started to work in my spare time on an MMO project called Infinity.

2005 - 2010: procedural dreams

In the first years, I researched procedural planet generation. I also developed an entire engine ( nowadays known as the I-Novae Engine ) to support all the features I'd need for the Infinity project. Including:
A flexible scene-graph
A 3D renderer supporting all the latest-gen features and shaders ( shadow mapping, motion blur, HDR, dynamic lighting.. the usual list.. )
A physics engine ( I settled on ODE )
An audio engine ( OpenAL )
A network engine ( based on UDP )
All the procedural planetary & universe generation technology

In 2007 I released a small free game, simply named the "Infinity Combat Prototype". The goal of that game was to integrate the whole engine into a game, to validate that all the components worked together and that a game ( some Newtonian multiplayer combat in arenas in space ) could be produced. The idea was that it'd be the first step that would eventually lead to the whole MMO.
Unfortunately, it's pretty much at this point that I started to get "lost" in the ambition of the project. I had created the concept of "community contributions", where wannabe artists could submit artwork, 3D models & textures to be used in the game, but it quickly took a dozen hours a week to review all this work and validate/reject it, keeping in mind that 95% of it was at the indie level at best. I was the only programmer on the team, and so progress started to slow down tremendously. We entered a vicious circle where, as the months passed, the cool brand new technology was getting deprecated / looking obsolete, and catching up took months for a single feature. That was the time when I replaced the old-fashioned renderer with a deferred renderer, implemented dynamic lighting and shadow mapping and all sorts of visually cool stuff.. but meanwhile, gameplay progress was at a standstill. I spent some time working on the client/server architecture and databases, but nothing too fancy, and definitely not to the point where it could be used for a full-fledged MMO.

By 2010 it became crystal clear that as the sole programmer of the project, even using procedural technology and an artist community to alleviate the content generation problem, I couldn't keep up. A few programmers offered their help but clearly weren't up to the task, or gave up very quickly after a few months. If you've been an indie relying on external help from volunteers to work on your project, that should ring a bell.

But in early 2010, I met Keith Newton, an ex-developer from Epic Games who worked on the Unreal Engine. He offered to set up an actual company, review our strategy and approach the problem from a professional & business perspective. I was about to give up on the project at that time, so naturally, I listened.

2010 - 2012: Infancy of I-Novae Studios

We formed the company I-Novae Studios, LLC, in early 2010, and started to look for investors that could be interested in the technology.
Or companies interested in partnerships or licensing. Unfortunately, it was bad timing, and we didn't realize it immediately. If you recall, this was right after the economic crisis of 2008. All the people we talked to were very interested in the tech, but none were ready to risk their money on a small company with no revenue. We had a few serious opportunities during these years, but for various reasons nothing ever came out of them. Another problem was that this period was the boom of the mobile market, and most companies we talked to were more interested in doing mobile stuff than, of all things, a PC game.

During these years we also revamped our technology from the ground up to modernize it. We switched to physically-based rendering ( PBR ) at this time, implemented a powerful node-based material system, added an editor ( one thing I simply never worked on pre-2010, due to lack of resources ) and much more. Keith worked approximately two and a half years full time, out of his own savings, to mature the tech and look for business opportunities. Meanwhile, our other artists and I were still working part time.

On the game side, unfortunately, things still weren't looking great. It was our strategy to focus back on the technology and put Infinity on hold. We came to the conclusion that we'd probably need millions to realistically have a shot at producing an MMO at decent quality and in good conditions, and that it couldn't be our first project as a company. In 2012, Kickstarter started to become a popular thing. It was at this time that we started to play with the idea of doing a Kickstarter for a less ambitious project, but one still including our key features: multiplayer components and procedural planetary generation. That was how Infinity: Battlescape was born.

2013 - 2015: Kickstarter, full steam ahead

It took us more than 2 years to prepare our Kickstarter. Yup.
At this point Keith was back to working part time, but I left my job to dedicate myself to the Kickstarter, working full time out of my own savings. To produce the Kickstarter we needed a lot of new content, never shown before, at near-professional quality. This included a ship with a fully textured PBR cockpit, multiple smaller ships/props, asteroids, a gigantic space station, multiple planetary texture packs and a larger cargo ship. We decided pretty early to generate the Kickstarter video in engine, to demonstrate our proprietary technology. It'd show seamless take-offs from a planet, passing through an asteroid field, flying to a massive space station that comes under attack, with lots of pew-pew, explosions and particle effects. IIRC, we iterated over 80 times on this video during the year before the Kickstarter. It's still online, and you can watch it here:

Meanwhile, I was also working on a real-time "concept demo" of Infinity: Battlescape. Our original plan was to send the demo to the media for maximum exposure. It took around 8 months to develop this prototype. It was fully playable, multiplayer, and included the content generated by our artists for the Kickstarter trailer. The player could fly seamlessly between a few planets/moons, in space, around asteroids, or dock at a space station. Fights were also possible, but there were never more than a handful of players on the server, so we could never demonstrate one of the key points of the gameplay: massive space battles involving hundreds of players.

In October 2015, we launched our Kickstarter. It was a success: we gathered more than 6000 backers and $330,000, a little above the $300,000 we were asking for the game. It was one of the top 20 most successful video game Kickstarters of 2015. Our mass-media campaign was a disappointment, though, and we received very little exposure from the press; I understandably blame our "vaporware" history.
The social media campaign, however, was a success, particularly thanks to a few popular streamers and Twitter personalities who brought exposure to us, and to Chris Roberts of Star Citizen, who did a shout-out on his website to help us. But as happy as we are to -finally- have a budget to work with, it was only the beginning..

2016+: Infinity: Battlescape

We started full development in February 2016 after a few months of underestimated post-KS delays ( sorting out legal stuff, proper contracts with salaries for our artists, and figuring out who was staying and who was leaving ). Since then, we've focused on game design, producing placeholders for the game prototype and improving our technology. We're still working on adding proper multithreading to the engine, moving to a modern Entity-Component-System ( ECS ), and figuring out what to do with Vulkan and/or DirectX 12. Meanwhile, we're also working on networking improvements and a more robust client/server architecture. The game is scheduled for release at the end of 2017. All the pictures in this article come from our current pre-alpha.




GPU Terrain generation, cell noise, rivers, crater

GPU Planetary Generation


Until now, the planetary generation algorithm ran on the CPU, synchronously. This means that each time the camera zoomed in on the surface of the planet, each terrain node got split into 4 children, and a heightmap was generated synchronously for each child.

Synchronous generation means that rendering is paused until the data is generated for each child node. We're talking about 10-20 milliseconds here, so it's not that slow; but since 4 children are generated at a time, those numbers are always multiplied by 4. So the cost is around 40-80 ms per node that gets split. Unfortunately, splits happen in cascade, so it's not rare to have no split at all during one second, and then suddenly 2 or 3 nodes get split, resulting in a pause of hundreds of milliseconds in the rendering.

I've addressed this issue by adding asynchronous CPU terrain generation: a thread generates data at its own pace, and rendering isn't affected too harshly anymore. This required introducing new concepts and new interfaces ( like a data generation interface ) to the engine, which took many weeks.
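The scheme can be sketched as a classic producer/consumer setup ( a minimal model of my own; the engine's actual data generation interfaces are more involved ): the render thread queues heightmap requests for split nodes, a worker thread generates them at its own pace, and the render thread picks up finished results whenever they're ready, without ever blocking:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct HeightmapJob { int nodeId; };
struct HeightmapResult { int nodeId; std::vector<float> heights; };

class TerrainGenerator {
public:
    TerrainGenerator() : m_quit(false), m_worker([this] { run(); }) {}
    ~TerrainGenerator() {
        { std::lock_guard<std::mutex> l(m_mutex); m_quit = true; }
        m_cond.notify_one();
        m_worker.join();
    }
    void request(HeightmapJob job) {          // called by the render thread
        std::lock_guard<std::mutex> l(m_mutex);
        m_jobs.push(job);
        m_cond.notify_one();
    }
    bool poll(HeightmapResult& out) {         // non-blocking result pickup
        std::lock_guard<std::mutex> l(m_mutex);
        if (m_results.empty()) return false;
        out = m_results.front();
        m_results.pop();
        return true;
    }
private:
    void run() {                              // worker thread
        std::unique_lock<std::mutex> l(m_mutex);
        while (true) {
            m_cond.wait(l, [this] { return m_quit || !m_jobs.empty(); });
            if (m_quit && m_jobs.empty()) return;
            HeightmapJob job = m_jobs.front();
            m_jobs.pop();
            l.unlock();
            // Placeholder "generation": a flat 33x33 heightmap patch.
            HeightmapResult r{job.nodeId, std::vector<float>(33 * 33, 0.0f)};
            l.lock();
            m_results.push(r);
        }
    }
    std::mutex m_mutex;
    std::condition_variable m_cond;
    std::queue<HeightmapJob> m_jobs;
    std::queue<HeightmapResult> m_results;
    bool m_quit;
    std::thread m_worker;
};
```

The key property is that `poll` never waits: if a heightmap isn't ready yet, the frame simply renders without it and tries again next frame, which is what removes the hundreds-of-milliseconds pauses.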

After that, I prepared a new data generation interface that uses the GPU instead of the CPU. To make it short, I encountered a lot of practical issues with it, like PBOs ( pixel buffer objects ) not behaving as expected on some video cards, or the lack of a synchronization extension on ATI cards ( I ended up using occlusion queries with an empty query to know when a texture has been rendered ), but now it's more or less working.


There are a lot of advantages to generating data on the GPU instead of the CPU: the main one is that, thanks to the higher performance, I will now be able to generate normal maps for the terrain, which was too slow before. This will increase lighting and texturing accuracy, and make planets ( especially when seen from orbit ) much nicer. Until now, planets seen from space weren't looking too good due to per-vertex texturing; noise and clouds helped to hide the problem a bit, but if you look carefully at the old screenshots, you'll see what I mean.

The second advantage is that I can increase the complexity of the generation algorithm itself, and introduce new noise basis types, in particular the cell noise ( see Voronoi diagrams on wikipedia ).

Another advantage is debug time. Previously, playing with planetary algorithms and parameters was taking a lot of time: changing some parameters, recompiling, launching the client, spawning a planet, moving the camera around the planet to see how it looks, rinse and repeat. Now I can just change the shader code, it gets automatically reloaded by the engine and the planet updates on-the-fly: no need to quit the client and recompile. It's a lot easier to play with new planets, experiment, change parameters, etc..

I'm not generating normal maps yet ( I will probably work on that next week ), and there's no texturing; in the coming pictures, all the planet pictures you will see only show the heightmap ( grayscale ) shaded with atmospheric scattering, and set to blue below the water threshold. As incredible as it sounds, normal mapping or diffuse/specular textures are not in yet.

Cell noise

.. aka Voronoi diagrams. The standard CPU implementation uses a precomputed table containing N points and, when sampling a 3D coordinate, checks the 1 or 2 closest distances among the N points. The brute-force implementation is quite slow, but it's possible to optimize it by adding a lookup grid. Doing all of that on the GPU isn't easy, but fortunately there's a simpler alternative: procedurally generating the sample points on-the-fly.

The only thing needed is a 2D texture that contains random values from 0 to 1 in the red/green/blue/alpha channels; nothing else. We can then use a randomization function that takes 3D integer coordinates and returns a 4D random vector:

vec4 gpuGetCell3D(const in int x, const in int y, const in int z)
{
    float u = (x + y * 31) / 256.0;
    float v = (z - x * 3) / 256.0;
    return texture2D(cellRandTex, vec2(u, v));
}

The cell noise function then samples the 27 adjacent cells around the sample point, generates a cell position in 3D from the cell coordinates, and computes the distance to the sample point. Note that distances are kept squared until the last moment to save calculations:

vec2 gpuCellNoise3D(const in vec3 xyz)
{
    int xi = int(floor(xyz.x));
    int yi = int(floor(xyz.y));
    int zi = int(floor(xyz.z));

    float xf = xyz.x - float(xi);
    float yf = xyz.y - float(yi);
    float zf = xyz.z - float(zi);

    float dist1 = 9999999.0;
    float dist2 = 9999999.0;
    vec3 cell;

    // Visit the 3x3x3 neighborhood around the sample point.
    for (int z = -1; z <= 1; z++)
    {
        for (int y = -1; y <= 1; y++)
        {
            for (int x = -1; x <= 1; x++)
            {
                cell = gpuGetCell3D(xi + x, yi + y, zi + z).xyz;
                cell.x += (float(x) - xf);
                cell.y += (float(y) - yf);
                cell.z += (float(z) - zf);
                float dist = dot(cell, cell);   // squared distance
                if (dist < dist1)
                {
                    dist2 = dist1;
                    dist1 = dist;
                }
                else if (dist < dist2)
                    dist2 = dist;
            }
        }
    }
    return vec2(sqrt(dist1), sqrt(dist2));
}

The two closest distances are returned, so you can use F1 and F2 functions ( e.g. the popular F2 - F1 combination is value.y - value.x ). It's in 3D, which is perfect for planets, so seams won't be visible between planetary faces:

New planetary features

Using the cell noise and the GPU terrain generation, I'm now able to create new interesting planetary shapes and features. Have a look yourself:


"Fake" rivers, I'm afraid, as they only use the ocean-level threshold and don't flow from high altitudes to low altitudes, but it's better than nothing. When seen from orbit there is some aliasing, so not all pixels of a river can be seen.

It's simply some cell noise with the input displaced by a fractal ( 4 octaves ):


I've started to experiment with craters. It's a variation of cell noise with 2 differences: extinction ( a density value is passed to the function and used to kill a certain number of cells ), and instead of returning the distance, it returns a function of the distance. This function of distance is modeled to generate a circular, crater-like look.
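For illustration, such a crater-like "function of the distance" could look like this ( the profile shape and constants are my own guesses, not the actual shader ): a depressed bowl inside the rim, a raised rim, and a smooth fade to zero outside so craters blend with the surrounding terrain:

```cpp
// Height offset as a function of distance from the cell center.
// d < 1: bowl ( -1 at the center, 0 at the rim ).
// 1 <= d < 1.5: raised rim falling back to zero.
// d >= 1.5: untouched terrain.
float craterProfile(float dist, float radius) {
    float d = dist / radius;               // 0 at the center, 1 at the rim
    if (d >= 1.5f) return 0.0f;
    if (d < 1.0f) return d * d - 1.0f;     // parabolic bowl
    float t = (1.5f - d) / 0.5f;           // 1 at the rim, 0 at the outer edge
    return 0.3f * 4.0f * t * (1.0f - t);   // small bump peaking mid-rim
}
```

Extinction then just means skipping some cells entirely, so only a fraction of the Voronoi cells produce a crater.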

Here's a quick experiment with 90% extinction. The inputs are also displaced with a small fractal:

And here's the result with a stronger displacement:

The next step is to add more octaves of crater noise:

It doesn't look too good yet, mostly because the craters at different octaves are just added together and not combined properly. More experiments on that later.

Planetary experiments

When adding more octaves and combining different functions together, then adding back atmospheric scattering and the ocean threshold, the results start to look interesting. Keep in mind that all the following pictures are just the grayscale heightmap and nothing else: no normal mapping or texturing yet !




Multimonitor support


Finally, I have recently added multimonitor support.

If you've followed Infinity's development for a while, you'll probably go, "eh ? I thought it already supported multiple monitors !?", thinking of Kaboom22's triple-screen setup.

The trick is that Kaboom's system used a special device, the Matrox TripleHead2Go, to make Windows "believe" there's only one physical monitor instead of 2 or 3. It's completely transparent to the program; Windows just reports a single screen with a wide resolution ( like 3840x1024 ) instead of three.

But if you have a multimonitor setup ( 2 or 3 screens ) without this device, like me.. well.. you're out of luck.

A week ago I decided to have a look at adding support for multiple screens into the I-Novae engine.

Technically, it's not so hard. After all, all you need to do is create two windows, one on each screen ( in the case of a dual-screen setup ), create two cameras, and render the scene twice.

Of course, that naive approach gives you a 2x slow-down ( or 3x in the case of a triple config ).

So I took a different approach. I'm still creating my 2 windows, but I now use the engine's pipeline to create one virtual off-screen buffer, so the scene only has to be rendered once. Much better! Then, in the final stage, the buffer is shared and split between the windows.
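The idea can be modeled with plain arrays ( a toy sketch of my own, with buffers standing in for GPU surfaces ): render once into one wide off-screen buffer, then hand each window its own slice, so adjacent windows show perfectly seamless parts of the same image:

```cpp
#include <vector>

// Split one wide scanline buffer into per-window regions.
// widths[i] is the pixel width of window i; the sum must not
// exceed the wide buffer's width.
std::vector<std::vector<int>> splitRow(const std::vector<int>& wide,
                                       const std::vector<int>& widths) {
    std::vector<std::vector<int>> windows;
    std::size_t offset = 0;
    for (int w : widths) {
        windows.push_back(std::vector<int>(wide.begin() + offset,
                                           wide.begin() + offset + w));
        offset += w;
    }
    return windows;
}
```

Because both windows sample the same buffer, the last column of one window lines up exactly with the first column of the next, which is why the result is seamless.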

It's also a lot faster when your second monitor doesn't have hardware acceleration, because technically the rendering only happens off-screen on the first monitor's device.

To implement all of this I had to tweak many small bits everywhere in the engine and add new concepts: fixing a few subtle bugs when sharing textures between different GL contexts with the texture cache, a monitor lister, management of virtual windows, etc..

The result is pretty cool, and if properly set up, the images are perfectly seamless. For example, consider this one, taken on a dual monitor config, each screen being full-screen 1280x1024 (click for full-screen):

That pic was simply taken with the "Print Screen" key in Windows.

Now, in windowed mode, you can see better how the buffer is split in 2 windows:

Note that nothing forces the windows to get perfectly aligned or to "touch", and they can even be of different resolutions ( meaning that it'll work perfectly even if your monitors have different resolutions ).




New planet video

Finally, it is done.

I've been missing some time to fix all the issues, but I'd say it's "good enough". You might notice a bit of popping in the textures; that's because recording with Fraps decreased the framerate to the 30s, which in turn made the texturing process "lag" a bit. When it's running at over 100 fps, the popping doesn't appear.

In addition, at some view angles the atmosphere is not correctly blended with the ring ( a sort of dark halo appears, which makes the ring look ugly ). I tried not to show it, but that's something I'll have to fix in the future.

The video is a bit long ( 4 minutes ) and weighs 75 MB. Encoded in DivX 4, as usual.

Primary download link

Mirror Link

New screens:




Final Earth model

I think I can safely say that everything is now close to what I envisioned. In no particular order: I fixed the sun color at right angles, the contrast of the starfield and the quality of the cloud textures, and adjusted some parameters. The first screen is an Earth-like planet with an atmosphere thickness of 250 kilometers ( which gives it an "artistic" feeling ); the second one has a thickness of 100 kilometers, which is more realistic, but looks less nice in my opinion.

Reducing the images to fit in this journal makes them a bit blurry. For fun, I upped the resolution to 1600x1200 with 6x antialiasing and 1024x1024 planet textures, and took a screenshot ( in case some of you need a new wallpaper ). It still rendered at more than 60 fps on my machine.

Link to wallpaper-ed image (500 Kb)

I can finally move on to other tasks. I need to work on my 4E4 entry, too..




Nebulae, part III

First of all, the answer to the previous journal's question: the code that generated this strange, mathematical-looking image is the following:

for (TUInt k = 0; k < 128; k++)
{
    for (TUInt j = 0; j < 128; j++)
    {
        for (TUInt i = 0; i < 128; i++)
        {
            SVec3D pos((TFloat)i / 127.0f, (TFloat)j / 127.0f, (TFloat)k / 127.0f);
            pos = pos * 2.0f - 1.0f;
            m_sprites[1]->addSprite(pos * 20.0f, SColor(1, 1, 1, 0.25f));
        }
    }
}

As you can see, it simply creates a 128 x 128 x 128 voxel grid and generates one particle / point sprite for each cell. Nothing more. I'm guessing the strange aspect comes from the particle texture ( which is smoke-like ).

Going back to the "serious" stuff: I've been playing with various ways to light the nebulae. Before, each particle of a nebula had its own color. That would make what is called an "emission nebula", i.e. a nebula that emits light due to the ionization of its particles when receiving starlight. What I've implemented now are "reflection nebulae", i.e. nebulae that reflect the color of the nearest stars, taking into account the position, distance and color of each star.

I've also added negative blending in order to simulate black dust / Bok globules. The drawback is that, as there is no sorting, it can look a bit weird if you move quickly in 3D ( as in the coming video ), but in a skybox it won't be noticeable.

A very short video ( a few seconds long only ) is available here:

The nebulae are now considered close to being "complete". I still have some things to fix and redesign some code a bit, but the graphical aspect won't be improved much.

And, of course, mandatory screenies:




Rant and depth-of-field


Time for a small rant. On various topics. I'll keep it short to not bore people to death. Some things are somewhat irritating:

1) People who want to test the ICP, have a crash, and post or send me e-mails / PMs saying "it does not work, can you help ?". No, I cannot help. At least, not if you don't explain what the problem is. What are your system specs ? Do you have the latest drivers ? Did you use the search function to see if your problem has already been answered ? What happens exactly: can you log in, is it crashing, is it freezing ? Sorry, I don't have telepathy skills; I cannot read your mind. The point is: if you're going to ask for help, be precise ! I don't have time to waste extracting the useful information from you point by point.

2) People who don't read the dev journals. Seriously, I sometimes spend an hour or two writing these journals. Nothing irritates me more than people who criticize something when that thing was described and explained in the journal a few posts up. Things such as "omg vertex popping".. "omg it all looks the same, boring texture", when I explicitly said that geo-morphing isn't implemented yet and that the terrain only uses big tiled textures.. wow.

I certainly have no problems with people commenting and criticizing the features that are shown in the video, but please, don't criticize or comment on things that are not done yet.

And please.. read the dev journal before flooding the thread with tons of questions.

By the way, this section is dedicated to SinKing, who said the motion blur didn't look good and was unrealistic, thinking he was looking at depth-of-field. Hi :)


To be clear, motion blur is the act of blurring pixels in the direction of motion. Play a DVD, pause it, and look at the image on your TV ( especially when the camera or an object is moving fast ). That is motion blur.

Depth-of-field is another kind of blur, this time based on a focal distance. It's the effect you see in movies when the focus is on an object in the middle of the scene: what is close to the camera is blurred, and so is what is far away.

Keep in mind that I'm exaggerating the effects a bit, mostly for debugging.

My implementation of depth-of-field is based on a pretty standard technique: the scene is rendered to a texture ( like many effects ); the scene is also rendered to a depth buffer. In a post-processing pass, those two textures are read back: the depth is sampled, compared with the focal plane distance, and the further the pixel is from this plane, the more blurry the scene pixel becomes.

The quality of the blur isn't very good ( by my standards ), as it does not use a separable Gaussian yet. It's your basic blur, taking N samples in a circular kernel around the center pixel and averaging them together. The size of the kernel depends on the absolute distance to the focal plane ( and some magic formulas to make it "feel" good ).
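The mapping from depth to kernel size could be sketched like this ( a minimal, hedged guess: the focal range, the maximum radius and the linear ramp are made-up tuning constants, not values from the engine ):

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical sketch: map the absolute distance between a pixel's depth and
// the focal plane to a blur-kernel radius in pixels. Pixels within
// 'focalRange' of the focal plane stay sharp; beyond that the radius grows
// linearly and is clamped to 'maxRadius'.
float blurKernelRadius(float pixelDepth, float focalDist,
                       float focalRange = 5.0f, float maxRadius = 8.0f)
{
    const float d = std::fabs(pixelDepth - focalDist);
    const float t = std::max(0.0f, d - focalRange) / focalRange;
    return std::min(maxRadius, t * maxRadius);
}
```

The real code presumably uses a smoother curve than this linear ramp, but the idea is the same: the further from the focal plane, the bigger the kernel.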

One important thing is also missing: the focal plane distance is currently a constant that I can change with 2 keys; in the future, I will have to cast some rays into the scene to determine the focus area near the center of the camera. Nothing too advanced technically; I'm not worried.





Shadow mapping & misc. work

In the past days, I've implemented a nice little feature: automatic reloading of texture maps. The engine uses a texture manager, which checks every N seconds ( currently, N=1 ) whether an image file has changed ( I compare the size and the last modification date ). If it has, the texture is automatically reloaded on-the-fly. It's fun to load the viewer, open Photoshop, modify a texture, and, when alt-tabbing back to the running viewer, see the changes take place without having to reload the viewer.
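The check itself can be sketched like this ( names are hypothetical; the engine's actual manager is not shown in the journal, only the size + modification-date comparison is ):

```cpp
#include <cstdio>
#include <string>
#include <unordered_map>
#include <sys/stat.h>

// Cached size/date for a tracked texture file.
struct STextureRecord
{
    long long m_size;
    long long m_mtime;
};

class CTextureWatcher
{
public:
    // Remember the file's current size and last-modification date at load time.
    void track(const std::string& path)
    {
        struct stat st;
        if (stat(path.c_str(), &st) == 0)
            m_records[path] = { (long long)st.st_size, (long long)st.st_mtime };
    }

    // Returns true if the file changed since track(); the engine would then
    // reload the texture on-the-fly.
    bool hasChanged(const std::string& path)
    {
        struct stat st;
        if (stat(path.c_str(), &st) != 0)
            return false;
        const STextureRecord& r = m_records[path];
        return r.m_size != (long long)st.st_size
            || r.m_mtime != (long long)st.st_mtime;
    }

private:
    std::unordered_map<std::string, STextureRecord> m_records;
};
```

Comparing both size and date is what makes the 1-second polling interval workable: a same-second rewrite is still caught if the size differs.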

This weekend, I started to work on dynamic shadow maps for the station. It's been really exhausting. I found two nice bugs:

- shadow mapping involves a texture projection from light space to world space. When applying the transformation to a vertex in world coordinates, I forgot to divide the projection result by the w component. The x, y and z components were invalid, and I didn't understand why until.. too late.

- the second bug was a typo in the code. I applied a transpose() operation to a matrix, thinking it would act as selfTranspose(). In the first case, the matrix is untouched and the function returns the transposed matrix, while in the second case, the matrix is replaced by its transpose.
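A tiny illustration of the difference ( SMatrix2 and its two methods are hypothetical stand-ins for the engine's math classes ): transpose() returns a copy and leaves the object alone, so calling it while expecting in-place behavior silently keeps the old matrix.

```cpp
// Hypothetical 2x2 matrix showing the transpose() vs selfTranspose() trap.
struct SMatrix2
{
    float m[2][2];

    // Returns the transposed matrix; 'this' is untouched.
    SMatrix2 transpose() const
    {
        SMatrix2 r;
        for (int i = 0; i < 2; i++)
            for (int j = 0; j < 2; j++)
                r.m[i][j] = m[j][i];
        return r;
    }

    // Replaces 'this' by its transpose.
    void selfTranspose() { *this = transpose(); }
};
```

Writing `mat.transpose();` and discarding the return value is a no-op, which is exactly the kind of bug that compiles fine and produces garbage shadows.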

Now, shadow mapping is more or less working. The results aren't very good yet, I'm afraid.. lots of artifacts everywhere. I still have to tweak them and to fit the light frustum to the object frustum ( at the moment some resolution is lost in the shadow maps ). Each object ( the station is composed of 7 objects ) gets its own 1024^2 shadow map. It's fun to move the light in circles around the station and watch the shadows run around.




Deferred lighting and instant radiosity

In the past months, I've been wondering how to approach the problem of lighting inside hangars and on ship hulls. So far, I had only been using a single directional light: the sun. The majority of older games precompute lighting into textures ( called lightmaps ), but clearly this couldn't work well in the context of a procedural game, where content is generated dynamically at run time. Plus, even if it did.. imagine the amount of texture memory needed to store all the lighting information coming from the surfaces of a kilometers-long battleship !

Fortunately, there's a solution to the problem.. enter the fantastic universe of deferred lighting !

Deferred lighting

Traditionally, it is possible to implement dynamic lighting without any precomputation via forward lighting. The algorithm is surprisingly simple: in a first pass, the scene is rendered to the depth buffer and to the color buffer using a constant ambient color. Then, for each light, you render the geometry affected by that light only, with additive blending. This light pass can include many effects, such as normal mapping / per-pixel lighting, shadowing, etc..

This technique, used in games similar to Doom 3, does work well, but is very dependent on the granularity of the geometry. Let's take an object of 5K triangles that is partially affected by 4 lights. This means that to light this object, you will need to render 25K triangles over 5 passes total ( ambient pass + 4 light passes, each 5K ). An obvious optimization is, given one light and one object, to only render the triangles of the object that are affected by the light, but this would require some precomputation that a game such as Infinity cannot afford, due to its dynamic and procedural nature.

Now let's imagine the following situation: you've got a battleship made of a dozen 5K-to-10K-triangle objects, and you want to place a hundred lights on its hull. How many triangles do you need to render to achieve this effect with forward lighting ? Answer: a lot. Really, a lot. Too many.

Another technique that is being used more and more in modern games is deferred lighting. It was a bit impractical before shader model 3.0 video cards, as it required many passes to render the geometry too. But using multiple render targets, it is possible to render all the geometry once, and exactly once, independently of the number of lights in the scene. One light or a hundred lights: you don't need to re-render all the objects affected by them. Sounds magical, doesn't it ?

The idea with deferred lighting is that, in a forward pass, geometric information is rendered to a set of buffers, usually called "geometry buffers" ( abbrev.: G-buffers ). This information usually includes the diffuse color ( albedo ), the normal of the surface, the depth or linear distance between the pixel and the camera, the specular intensity, self-illumination, etc.. Note that no lighting is computed at this stage.

Once this is done, for each light, a bounding volume ( which can be as simple as a 12-triangle box for a point light ) is rendered with additive blending. In the pixel shader, the G-buffers are read to reconstruct the pixel position from the current ray and depth; this position is then used to compute the light color and attenuation, do normal mapping or shadowing, etc..



There are a few tricks and specificities in Infinity. Let's have a quick look at them. First of all, the G-buffers.

I use 4 RGBAF16 buffers. They store the following data:

           R         G         B           A
Buffer 1   FL        FL        FL          Depth
Buffer 2   Diffuse   Diffuse   Diffuse     Self-illum
Buffer 3   Normal    Normal    Normal      Specular
Buffer 4   Velocity  Velocity  Extinction  MatID

'FL' = forward lighting. That's one of Infinity's specificities. I still do one forward lighting pass, for the sun and ambient lighting ( with full per-pixel lighting, normal mapping and shadowing ), and store the result in the RGB channels of the first buffer. I could defer it too, but then I'd have a problem related to atmospheric scattering. At the pixel level, the scattering equation is very simple: it's just modulating by an extinction color ( Fex ) and adding an in-scattering color ( Lin ):

Final = Color * Fex + Lin

Fex and Lin are computed per vertex and require some heavy calculations. Moving those calculations to the pixel level would kill the framerate.

If I didn't have a forward lighting pass, I'd have to store the scattering values in the G-buffers. This would require 6 channels ( 3 for Fex and 3 for Lin ). Here, I can get away with only 4 and use a grayscale 'Extinction' for the deferred lights ( while sun light really needs an RGB color extinction ).

'Velocity' is the view-space velocity vector used for motion blur ( computed by taking the differences of positions of the pixel between the current frame and the last frame ).

'Normal' is stored in 3 channels. I have plans to store it in 2 channels only and recompute the 3rd in the shader. However, this will require encoding the sign bit in one of the two channels, so I haven't implemented it yet. Normals ( and lighting in general ) are computed in view space.

'MatID' is an ID that can be used in the light shader to perform material-dependent calculations.

As you can see, there's no easy way to escape using 4 G-buffers.

As for the format, I use F16. It is necessary both for storing the depth and for encoding values in HDR.


At first, I was a bit disappointed by the performance hit / overhead caused by the G-buffers. There are 4 buffers after all, in F16: that requires a lot of bandwidth. On an ATI X1950 XT, simply setting up the G-buffers and clearing them to a constant color resulted in a framerate of 130 fps at 1280x1024. That's before sending even a single triangle. As expected, changing the screen resolution dramatically changed the framerate, but I found this overhead to be linear with the screen resolution.

I also found yet-another-bug-in-the-ATI-OpenGL-drivers. The performance of clearing the Z-buffer alone was dependent on the number of color attachments. Clearing the Z-buffer with 4 color buffers attached ( even with color writes disabled ) took 4 times longer than clearing it with only 1 color buffer attached. As a "fix", I simply detach all color buffers when I need to clear the Z-buffer alone.

Light pass

Once the forward lighting pass is done and all this data is available in the G-buffers, I perform frustum culling on the CPU to find all the lights that are visible in the current camera's frustum. Those lights are then sorted by type: point lights, spot lights, directional lights and ambient point lights ( more on that last category later ).

The forward lighting ( 'FL' ) color is copied to an accumulation buffer. This is the buffer in which all lights will get accumulated. The depth buffer used in the forward lighting pass is also bound to the deferred lighting pass.

For each light, a "pass" is done. The following states are used:

* depth testing is enabled ( that's why the forward lighting's depth buffer is reattached )
* depth writing is disabled
* culling is enabled
* additive blending is enabled
* if the camera is inside the light volume, the depth test function is set to GREATER, else it uses LESS

A threshold is used to determine whether the camera is inside the light volume. The value of this threshold is chosen to be at least equal to the camera's znear value. Bigger values can even be used, to reduce overdraw a bit. For example, for a point light, a bounding box is used and the test looks like this:

const SBox3DD& bbox = pointLight->getBBoxWorld();
SBox3DD bbox2 = bbox;
bbox2.m_min -= SVec3DD(m_camera->getZNear() * 2.0f);
bbox2.m_max += SVec3DD(m_camera->getZNear() * 2.0f);
bbox2.m_min -= SVec3DD(pointLight->getRadius());
bbox2.m_max += SVec3DD(pointLight->getRadius());
TBool isInBox = bbox2.isIn(m_camera->getPositionWorld());
m_renderer->setDepthTesting(true, isInBox ? C_COMP_GREATER : C_COMP_LESS);

Inverting the depth test to GREATER as the camera enters the volume allows pixels in the background / skybox to be discarded very quickly.

I have experimented with a bounding sphere for point lights too, but found that the reduced overdraw was cancelled out by the larger polycount ( a hundred polygons, against 12 triangles for the box ).

I haven't implemented spot lights yet, but I'll probably use a pyramid or a conic shape as their bounding volume.

As an optimization, all lights of the same type are rendered with the same shader and textures. This means less state changes, as I don't have to change the shader or textures between two lights.

Light shader

For each light, a Z range is determined on the CPU. For point lights, it is simply the distance between the camera and the light center, plus or minus the light radius. When the depth is sampled in the shader, the pixel is discarded if the depth is outside this Z range. This is the very first operation done by the shader. Here's a snippet:

vec4 ColDist = texture2DRect(ColDistTex, gl_FragCoord.xy);
if (ColDist.w < LightRange.x || ColDist.w > LightRange.y)
    discard;

There isn't much to say about the rest of the shader. A ray is generated from the camera's origin / right / up vectors and the current pixel position. This ray is multiplied by the depth value, which gives a position in view space. The light position is uploaded to the shader as a constant in view space; the normal, already stored in view space, is sampled from the G-buffers. It is very easy to implement a lighting equation after that. Don't forget the attenuation ( the color should go to black at the light radius ), or else you'll get seams in the lighting.
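The same math, written as a CPU-side C++ sketch rather than GLSL ( the vector helpers are hypothetical, and the linear falloff is only one possible attenuation curve; the journal only requires that the color reaches black at the radius ):

```cpp
#include <cmath>

struct SVec3 { float x, y, z; };

static SVec3 add(SVec3 a, SVec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static SVec3 mul(SVec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }

// Build a view ray from the camera basis and the pixel's normalized screen
// position ( ndcX/ndcY in [-1, 1] ), then scale by the stored linear depth
// to reconstruct a view-space position.
SVec3 reconstructViewPos(SVec3 right, SVec3 up, SVec3 forward,
                         float ndcX, float ndcY, float depth)
{
    SVec3 ray = add(add(mul(right, ndcX), mul(up, ndcY)), forward);
    return mul(ray, depth);
}

// Linear falloff: 1 at the light center, exactly 0 at the radius,
// which avoids visible seams at the edge of the light volume.
float attenuation(SVec3 pixelPos, SVec3 lightPos, float radius)
{
    const float dx = pixelPos.x - lightPos.x;
    const float dy = pixelPos.y - lightPos.y;
    const float dz = pixelPos.z - lightPos.z;
    const float d = std::sqrt(dx * dx + dy * dy + dz * dz);
    return d >= radius ? 0.0f : 1.0f - d / radius;
}
```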


In a final pass, a shader applies antialiasing to the light accumulation buffer. Nothing particularly innovative here: I used the technique presented in GPU Gems 3 for Tabula Rasa. An edge filter finds edges in either the depth or the normals from the G-buffers, and pixels on those edges are "blurred". The parameters had to be adjusted a bit, but overall I got it working in less than an hour. The quality isn't as good as true antialiasing ( which cannot be done by the hardware in a deferred lighting engine ), but it is acceptable, and the performance is excellent ( a 5-10% hit from what I measured ). Here's a picture showing the edges on which pixels are blurred for antialiasing:

Instant radiosity

Once I got my deferred lighting working, I was surprised to see how well it scaled with the number of lights. In fact, the thing that matters is pixel overdraw, which is of course logical and expected given the nature of deferred lighting, but I still found it amazing that, as long as overdraw remained constant, I could spawn a hundred lights and take less than a 10% framerate hit.

This led me to think about using the power of deferred lighting to add indirect lighting via instant radiosity.

The algorithm is relatively simple: each light casts N photon rays in random directions. At each intersection of a ray with the scene, a photon is generated and stored in a list. The ray is then either killed ( Russian roulette ) or bounces recursively in a new random direction. The photon color at each hit is the original light color multiplied by the surface color at each successive bounce. I sample the diffuse texture with the current hit's barycentric coordinates to get the surface color.

In my tests, I use N = 2048, which results in a few thousand photons in the final list. This step takes around 150 ms. I have found that I could generate around 20000 photons per second in a moderately complex scene ( 100K triangles ), and it's not even optimized to use multiple CPU cores.

In a second step, a regular grid is created and photons that share the same cell are merged ( their colors are simply averaged ). An ambient point light is then generated for each cell containing at least one photon. Depending on N and the granularity of the grid, this can result in anywhere from a few dozen ambient point lights up to thousands. This step is very fast: around one millisecond per thousand photons processed.
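The merge step can be sketched as follows ( type names and the choice of placing each light at its cell center are my assumptions; the journal only specifies binning into a regular grid and averaging colors ):

```cpp
#include <cmath>
#include <map>
#include <tuple>
#include <vector>

struct SPhoton       { float x, y, z; float r, g, b; };
struct SAmbientLight { float x, y, z; float r, g, b; };

// Bin photons into a regular grid of 'cellSize' cells, average the colors of
// photons sharing a cell, and emit one ambient point light per non-empty cell.
std::vector<SAmbientLight> mergePhotons(const std::vector<SPhoton>& photons,
                                        float cellSize)
{
    struct SCell { float r = 0, g = 0, b = 0; int count = 0; };
    std::map<std::tuple<int, int, int>, SCell> grid;

    for (const SPhoton& p : photons)
    {
        auto key = std::make_tuple((int)std::floor(p.x / cellSize),
                                   (int)std::floor(p.y / cellSize),
                                   (int)std::floor(p.z / cellSize));
        SCell& c = grid[key];
        c.r += p.r; c.g += p.g; c.b += p.b; c.count++;
    }

    std::vector<SAmbientLight> lights;
    for (const auto& kv : grid)
    {
        const SCell& c = kv.second;
        SAmbientLight l;
        // Place the light at the cell center ( an assumption ).
        l.x = (std::get<0>(kv.first) + 0.5f) * cellSize;
        l.y = (std::get<1>(kv.first) + 0.5f) * cellSize;
        l.z = (std::get<2>(kv.first) + 0.5f) * cellSize;
        l.r = c.r / c.count; l.g = c.g / c.count; l.b = c.b / c.count;
        lights.push_back(l);
    }
    return lights;
}
```

Shrinking cellSize produces more, smaller lights, which is exactly the trade-off measured in the table below the hangar screenshots.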

You can see indirect lighting in the following screenshot. Note how the red wall leaks light onto the floor and ceiling. Same for the small green box. Also note that no shadows are used for the main light ( located in the center of the room, near the ceiling ), so some light leaks onto the left wall and floor. Finally, note that the ambient occlusion isn't faked: no SSAO or precomputations ! There is one direct point light and around 500 ambient point lights in this picture. Around 44 fps on an NVidia 8800 GTX at 1280x1024 with antialiasing.


I have applied deferred lighting and instant radiosity to Wargrim's hangar. It took me an hour to texture this hangar with SpAce's texture pack. I applied a yellow color to the diffuse texture of some of the side walls you'll see in those screenshots: note how light bounces off them, creating yellow-ish ambient lighting around that area.

There are 25 direct point lights in the hangar. Different settings are used for the instant lighting; as the number of ambient point lights increases, their effective radius decreases. Here are the results for different grid sizes on an 8800 GTX at 1280x1024:

Cell size   # amb. point lights   Framerate
0.2         69                    91
0.1         195                   87
0.05        1496                  46
0.03        5144                  30
0.02        10605                 17
0.01        24159                 8

I think this table is particularly good at illustrating the power of deferred lighting. Five thousand lights running at 30 fps ! And they're all dynamic ( although in this case they're used for ambient lighting, so there would be no point in it ): you can delete or move every single one of them in real time without affecting the framerate !

In the following screenshots, a few hundred ambient point lights were used ( sorry, I don't remember the settings exactly ). You'll see some green dots/spheres in some pics: those highlight the position of ambient lights.

Full lighting: direct lighting + ambient lighting

Direct lighting only

Ambient ( indirect ) lighting only




Gas giants part II

I am continuing my work on gas giants.

I have finished the color table generation code. My first try used a lookup table mapping a density ( in the range [0-1] ) to a color, interpolating between 4 to 7 "color keys". Each color key was determined randomly.. as expected, it didn't look very good. I decided to use another approach: generate a start color for density 0 and an end color for density 1, then use the mid-point displacement algorithm in 1D to generate the intermediate values.
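One channel of such a table could be built like this ( a hedged sketch: the table size, the halving of the random amplitude per recursion level and the function names are my assumptions, not the engine's actual parameters ):

```cpp
#include <cstdlib>
#include <vector>

// 1D mid-point displacement: each midpoint is the average of its two
// endpoints plus a random offset whose amplitude halves at every level.
void midpointDisplace(std::vector<float>& t, int lo, int hi, float amp)
{
    if (hi - lo < 2)
        return;
    const int mid = (lo + hi) / 2;
    const float offset = amp * (2.0f * ((float)std::rand() / RAND_MAX) - 1.0f);
    t[mid] = 0.5f * (t[lo] + t[hi]) + offset;
    midpointDisplace(t, lo, mid, amp * 0.5f);
    midpointDisplace(t, mid, hi, amp * 0.5f);
}

// Builds one channel of a 257-entry table: the start color is pinned at
// density 0, the end color at density 1, and displacement fills the rest.
std::vector<float> buildChannel(float startVal, float endVal, float roughness)
{
    std::vector<float> t(257, 0.0f);
    t[0] = startVal;
    t[256] = endVal;
    midpointDisplace(t, 0, 256, roughness);
    return t;
}
```

Running this once per RGBA channel ( with a shared or per-channel roughness ) gives a full color table; with roughness 0 it degenerates to plain linear interpolation.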

Here's an example of a color table ( stretched in 2D to see the colors better ).

And here's an example of a gas giant texture ( there are 6 per layer and 6 layers total -> 36 of those ) with the color table applied.

I also rewrote the color table lookup code itself to use a simple 256-value array, pre-computing the colors and the alpha for each value. Then, to texture the gas giant, I only need one line: get the density, convert it to an integer in the 0-255 range, then look up the RGBA values. It provided a nice performance boost ( 5 times faster than the previous method ). You wouldn't notice a difference though, because this part of the algorithm wasn't a bottleneck to start with.

I rewrote the atmospheric scattering shaders in GLSL. Here's a gas giant with its 6 layers ( each cloud layer moves at a different speed, although it's subtle ). Notice how the scattering on the top-right makes the atmosphere go yellow-ish:

Last but not least, I've been trying to get good results when entering the atmosphere of the gas giant. It's still very experimental; at the moment my main problem is blending the atmosphere's look from space and from inside.. and I haven't succeeded yet. But here's how it looks:

In the upper atmosphere:

In the medium atmosphere:

In the deep atmosphere, you probably won't be able to go deeper in a spaceship anyway:




Low orbit view

As I've mentioned in a previous entry, the low orbit view is now looking pretty good. I got rid of the visible tiling in the detail textures ( at the cost of more texture popping, but that'll be fixed later ).

I've found a NASA picture on the web showing what the atmosphere looks like from low orbit, and tried to replicate it. Clouds are missing, and the ground features are of course different ( my planet is not Earth, after all ), but the colors match pretty closely, so I'm happy:




Station screenshots 2

Not much to say - a few fixes and details by Shawn, but code-wise, most of my work this weekend has been oriented towards the Minas Tirith patch.

In the last shot, you can see what the station ( which is in orbit at an altitude of 300 km ) looks like from the ground.. definitely huge. Atmospheric shading is missing.




Server emulation

In the past days, I've been performing a network stress test, emulating a server and a lot of clients connecting to it. In the above screenshot, you can see the test window as well as many statistics, which I'll explain in a short while.

The whole server is centralized around two components. The first one is, of course, the network module. I decided to code my own a year ago instead of using an established one ( like RakNet: I heard it didn't support a massive amount of connections, due to its per-packet overhead ). The result is INetwork, an implementation of the RDP protocol ( reliable UDP ) with a few custom optimizations. It has a low CPU processing overhead, automatically determines the quality of each connection in the system ( the "ping" ), and can concatenate packets together to save header space, or even integrate acknowledgment packets into data packets. It supports many types of reliability, from fully unreliable ( pure UDP packets ) to pure reliable packets ( much like TCP ), with reliable but out-of-order packets in between. I/O completion ports are experimental under Windows, but I found that the library performance was still excellent without them.

An interesting thing affecting server performance is the quality of the network card. I already mentioned it in a previous entry, about 6 months ago, but some systems simply do not seem to be capable of handling a large number of clients; the CPU usage rises extremely quickly, struggling at around 100 connections. Fortunately, my test computer seems to have a good network card, and I went up to 2500 connections without too much trouble.

The second "core" component in the server is called IClustering. This is a small library responsible for determining which entities "see" which others. An entity is defined by a position in space and a radius. This radius determines the maximum distance at which the entity is visible to other entities. For example, the entity could be a tree with a visibility radius of 1 km, which means that any other entity ( like a human ) that is closer than 1 km can "see" the tree. Note that the converse isn't necessarily true: if the human has a visibility radius of only 500 meters, the tree will not "see" him. Each entity can be in one of 3 states: fully static, semi-static and dynamic. Only semi-static and dynamic entities can move inside the space ( or change their visibility radius ). Only dynamic bodies are informed about what they see or don't see. Technically, the implementation is based on a recursive regular grid, made of NxNxN cells ( each cell being a sub-grid ).
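The asymmetric visibility rule can be captured in a few lines ( SEntity and canSee are hypothetical names; only the rule itself comes from the journal ):

```cpp
// An entity "sees" a target when the distance between them is smaller than
// the TARGET's visibility radius - so the relation is not symmetric.
struct SEntity { float x, y, z; float visRadius; };

bool canSee(const SEntity& viewer, const SEntity& target)
{
    const float dx = viewer.x - target.x;
    const float dy = viewer.y - target.y;
    const float dz = viewer.z - target.z;
    const float distSq = dx * dx + dy * dy + dz * dz;
    // Compare squared distances to avoid the sqrt.
    return distSq < target.visRadius * target.visRadius;
}
```

With the journal's example ( tree radius 1 km, human radius 500 m, 600 m apart ), the human sees the tree but the tree does not see the human.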

So, with these two components, I "emulated" an MMO server. To do that, I used several concepts:
- zones ( approximately 250 ): entities in different zones cannot see or interact with each other
- places of interest ( the green circles in the screenshot ): these are areas that mostly attract clients. In a Fantasy MMO, these would be cities, buildings or dungeons. In Infinity, they represent planets or stations.
- the clients: the yellow or cyan dots in the screen. They can be in active or idle state.

How it works:
- the server initially creates a number of zones ( 250 in my test ), with a set of places of interest in each one. The places can be recursively created around other places ( to simulate moons orbiting planets in Infinity, or buildings in cities in other games ).
- a client emulator connects a defined number of clients per second ( between 10 and 100 in my test.. yeah, that's a lot of connections/second, but the goal is to stress it after all ).
- each connecting client is assigned to a zone. Zones have different probabilities of "interest" ( to simulate some zones being busier than others ). The most active zones can gather ~30% of all the clients.
- the client is set into the active state and chooses a destination point. There are several probabilities for that: around 30% to choose an existing place of interest, around 50% to warp into another zone, around 20% to choose the location of an existing client. These numbers are for Infinity, where warping into different systems happens often.
- when a client arrives at its destination, it is set into the "idle" mode. If it warps into another zone, that zone is chosen based on the zone probabilities, and the client is moved to a random point in that zone.
- when a client is in the idle state, it has a 0.5% probability to be awakened and choose a new destination point.

- there are normally 2 "update" packets per second ( to simulate sending position + orientation ), each around 200 bytes ( this is randomized a bit too ). These are sent in unreliable mode, to all the clients that an entity ( another client ) is seen by.
- if the client is in idle mode, there are no updates.
- if the client is not seen by anybody, the update rate is lowered by a factor of 4 ( so one update every 2 seconds ).
- two updates per second may look low, but that's what Guild Wars uses. And do not forget that in Infinity, ships cannot change velocity quickly. This should be more than enough even during fights, thanks to dead reckoning.
- when a client starts or stops seeing another entity, a "creation" or "destruction" packet is sent in reliable mode ( ~300 bytes ).
- the clustering visibility computations are updated 10 times per second
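The update-rate rules above can be condensed into one small function ( the constants come from the journal; the function itself is just an illustrative sketch ):

```cpp
// Emulator update-rate rules: active clients send 2 updates/second, idle
// clients send none, and an active client seen by nobody drops to a quarter
// of the rate ( one update every 2 seconds ).
float updatesPerSecond(bool isIdle, int numObservers)
{
    if (isIdle)
        return 0.0f;                 // idle clients send no updates
    const float baseRate = 2.0f;     // 2 position/orientation packets per second
    return numObservers == 0 ? baseRate / 4.0f : baseRate;
}
```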

- the results are currently a bit biased, because I have a dual-core machine and the client emulator takes away a lot of resources from the server.
- some zones get a high number of clients ( up to 150/zone ), most zones get a low number ( 0-5 ), the average being around 20 clients.
- the visibility radius of the clients is very small compared to the zone and place sizes. In consequence, most of the time packets are only sent when some clients are in the same place.
- for 1000 clients, the upload bandwidth is around 100 KB/s and the download bandwidth around 200 KB/s. On average, 40% of the clients are idle. The average amount of bandwidth sent by a client to the server is 400 bytes, and the server returns ~500 to 1500 bytes. When many clients ( 10-15 ) are in the same place, this can rise to 5000-8000 bytes.

This post is becoming long; I'll add more information / results later.




Space combat prototype - phase 3

I've made a lot of progress on the combat prototype this weekend, and that's a Good Thing (tm), even if it wasn't as relaxing as it was supposed to be. I tested the space station interior with Shawn over the network. Despite being a bit late and having difficulties with DLL hell, everything worked fine. For the first time, Shawn was able to "visit" his space station from the inside, with ship physics and collisions. I also showed him how the docking procedure will look, and so far we haven't spotted any problem. It also seriously helped to "feel" the scale of the station, and how some sections are a bit too small even for a small ship to pass through.

I've redesigned a large part of the code. I've introduced the concept of "ship parts" ( which was missing before ), and the ability to attach thrusters, hardpoints and weapons to a specific ship part. I've also recoded the breaking-parts functionality and the collision bodies.

A lot of people have been working on their parts, too. Juan is advancing quite well on the hunter texturing, and today I placed it into the game ( note: not the prototype ) and adjusted the shaders to give it a nicer look; see the screenshot underneath. John also sent the more or less final versions of the sound fx, and the quality is really good for mono sounds. Koshime's still making some fantastic sketches, and Russell's busy on the storyline.

Despite this, there is still a lot to do. I might get the "gameplay" done this week, but I'll probably need more time to add the HUD, 3D sound, particle / shader effects, balancing, etc.. which means it's now quite likely that it'll get delayed by another week.

A side view:




Two planets

I've added an array to hold a variable number of planets. I also reorganized the code to give each planet different parameters. I still have a lot to do in that area. For example, even if it's not that visible, the two planets in the following screenshot have the same set of textures/clouds, but because of the atmosphere they look different. The rendering is not done in the correct order ( I have to fix quite a few things in the 3D engine for that ), so when a part of the atmosphere covers another planet you get strange results.




Ships comparison, persp. sheet

I've made a 3ds max render showing off some of our ships in perspective view so that dimensions can be compared.. click to see the large size image:




Fast Perlin noise

I'm going to present here a "tweak" to the Perlin noise function that multiplies its speed by a factor of 7.

The original Perlin noise function that is used is Perlin's Improved noise function.

Of course, a 7x speed increase means that we're going to lose some of the nice properties / accuracy of the original noise, but everything is a trade-off in life, isn't it ? Whether you want to use the fast version instead of the original is up to you, and depends on whether you need performance over quality..

On a Pentium 4 @ 3 GHz, Improved noise as implemented directly in C++ from the code linked above takes 7270 milliseconds to generate 1024x1024 samples of fBm noise ( the basis function being Improved noise ) with 12 octaves. In other words, for each sample, the Improved noise function is called 12 times. I'm using the 3D version of the algorithm. I ran the test 8 times and averaged the results to get something relevant.
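For reference, the fBm loop being benchmarked looks roughly like this ( a hedged sketch: lacunarity 2 and gain 0.5 are the usual defaults, not values stated in the journal, and the basis here is a cheap placeholder instead of Improved noise ):

```cpp
#include <cmath>

// Placeholder basis in roughly [-1, 1]; the real benchmark calls
// Improved noise here.
float noiseBasis(float x, float y, float z)
{
    return std::sin(x * 12.9898f + y * 78.233f + z * 37.719f);
}

// fBm: sum 'octaves' copies of the basis, doubling the frequency
// ( lacunarity ) and halving the amplitude ( gain ) at each octave.
float fBm(float x, float y, float z, int octaves = 12)
{
    float sum = 0.0f;
    float freq = 1.0f;
    float amp = 0.5f;
    for (int i = 0; i < octaves; i++)
    {
        sum += amp * noiseBasis(x * freq, y * freq, z * freq);
        freq *= 2.0f;   // lacunarity
        amp *= 0.5f;    // gain
    }
    return sum;
}
```

With these amplitudes the result stays inside (-1, 1), since the amplitudes sum to just under 1; this is why each sample costs 12 basis evaluations.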

The first thing that can be improved in Improved noise is the float-to-int casts:

const TInt xtr = floorf(xyz.x);
const TInt ytr = floorf(xyz.y);
const TInt ztr = floorf(xyz.z);

Those are really, really.. REALLY bad. Instead you can use a bit of assembly:

__forceinline TInt __stdcall MFloatToInt(const TFloat x)
{
    TInt t;
    __asm fld x
    __asm fistp t
    return t;
}

and replace the floor calls with:

const TInt xtr = MFloatToInt(xyz.x - 0.5f);
const TInt ytr = MFloatToInt(xyz.y - 0.5f);
const TInt ztr = MFloatToInt(xyz.z - 0.5f);

The same performance test ( 1024x1024 @ 12 octaves ) now takes 3324 milliseconds, a 118% improvement.
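On compilers where x87 inline assembly isn't available ( x64 MSVC, GCC, Clang ), a roughly equivalent portable version — my sketch, not from the article — can use `std::lrintf`, which also rounds using the FPU's current round-to-nearest mode, just like `fld`/`fistp`:

```cpp
#include <cmath>

// Portable stand-in for MFloatToInt(x - 0.5f): lrintf rounds to
// nearest (ties to even) in the default rounding mode, so subtracting
// 0.5f first approximates floor(). Caveat shared with the fistp trick:
// exactly-integral inputs can come out one below floor(x), which the
// noise lattice lookup tolerates.
inline int fast_floor_to_int(float x)
{
    return static_cast<int>(std::lrintf(x - 0.5f));
}
```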

The next trick is an idea from a co-worker, Inigo Quilez, who's heavily involved in the 64 KB demo coding scene. His idea is simple: instead of computing a gradient, just replace it with a lookup table.

The original code looked like this:

return(_lerp(w, _lerp(v, _lerp(u, _grad(ms_p[8][AA], x, y, z),
_grad(ms_p[8][BA], x - 1, y, z)),
_lerp(u, _grad(ms_p[8][AB], x, y - 1, z),
_grad(ms_p[8][BB], x - 1, y - 1, z))),
_lerp(v, _lerp(u, _grad(ms_p[8][AA + 1], x, y, z - 1),
_grad(ms_p[8][BA + 1], x - 1, y, z - 1)),
_lerp(u, _grad(ms_p[8][AB + 1], x, y - 1, z - 1),
_grad(ms_p[8][BB + 1], x - 1, y - 1, z - 1)))));

The "fast" version looks like this:

return(_lerp(w, _lerp(v, _lerp(u, ms_grad4[AA],
                                  ms_grad4[BA]),
                         _lerp(u, ms_grad4[AB],
                                  ms_grad4[BB])),
                _lerp(v, _lerp(u, ms_grad4[AA + 1],
                                  ms_grad4[BA + 1]),
                         _lerp(u, ms_grad4[AB + 1],
                                  ms_grad4[BB + 1]))));

Here, "ms_grad4" is a 512-entry lookup table containing random float values in the [-0.7; +0.7] range.

It is initialized like this:

static TFloat ms_grad4[512];

TFloat kkf[256];
for (TInt i = 0; i < 256; i++)
    kkf[i] = -1.0f + 2.0f * ((TFloat)i / 255.0f);

for (TInt i = 0; i < 512; i++)
{
    ms_grad4[i] = kkf[ms_p2[i]] * 0.7f;
}

At this point you're maybe wondering: why 0.7 ? It's not completely clear to us yet; we suspect it has something to do with the average value you can get from a normal Perlin noise basis. We originally tried a range of [ -1; +1 ], but we found that the fBm saturated to black or white too often, hurting quality a lot.

This "fast" version of noise runs at 1027 milliseconds, a 223% improvement compared to the previous tweak, or a 607% improvement compared to the original Improved noise version.

As for quality, well, everything has a price. There are no visible artifacts in the fast noise, but the features don't seem to be as regular and well distributed as in the improved noise. The following image was generated with the same frequency/scaling values. The features appear different since the lookup table holds different values in the two versions, but that's expected.




Space combat prototype - phase 6

Finally, getting somewhere!

The shots ( I'll try to no longer call them lasers ) have been fixed. They no longer pass through walls. I've also added the correct spawn position for those shots, at the center of each gun. Note that the current turrets have 2 guns each. On the Lancet, with 4 hardpoints, this generates 8 shots per volley..

The debris now works over the network.
The balance / parameters still have to be adjusted.
I still have to implement the victory/defeat conditions, teams and player spawning, but it should not be too hard. I plan to do that today or tomorrow, and to add some basic turret A.I. for the mines and the battleships.

I made a lot of progress this weekend, mostly on the network optimisations. One major change is related to how firing is handled. Before, each weapon was generating a packet ( basically saying: entity ID1, turret ID2, has fired ). You can imagine what happens when two battleships ( 22 turrets each ) and 50 mines are all firing..

Instead, I now send packets on a "change" of firing state. A packet is sent when a weapon starts firing, and another when it stops. In addition, the firing rate is taken into account so that slower-firing guns do not generate more packets than needed. It seems to work very well.
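The edge-triggered idea can be sketched like this ( hypothetical names and packet layout, assuming a per-turret "was firing" flag; not the game's actual networking code ):

```cpp
#include <cstdint>
#include <vector>

// One packet per firing-state *transition*, not per shot.
struct FirePacket { uint16_t entityId; uint8_t turretId; bool firing; };

struct Turret
{
    bool wasFiring = false;

    // Returns true if a packet was queued for this frame's state.
    bool updateFiringState(bool isFiring, uint16_t entityId,
                           uint8_t turretId,
                           std::vector<FirePacket>& outbox)
    {
        if (isFiring == wasFiring)
            return false;                        // no transition -> no traffic
        outbox.push_back({entityId, turretId, isFiring});
        wasFiring = isFiring;
        return true;
    }
};
```

A turret holding its trigger down for a hundred frames thus costs two packets ( start + stop ) instead of a hundred.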

I implemented a similar system for the state of each ship. Instead of sending the full state of a ship ( which parts are broken, the hitpoints for each thruster/gun, etc.. ) every second, I now detect when a state becomes "dirty", and only send data for those parts of the ship that are dirty. Again, it makes a lot of difference for a battleship: I don't have to send the state of all 22 turrets when only one of them takes damage..
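One common way to implement this is a per-ship dirty bitmask; a minimal sketch under assumed names ( the real state is of course richer than a hitpoint array ):

```cpp
#include <cstdint>
#include <vector>

struct Ship
{
    static const int kTurrets = 22;
    uint32_t dirtyMask = 0;          // one bit per turret
    int hitpoints[kTurrets] = {};

    void damageTurret(int i, int amount)
    {
        hitpoints[i] -= amount;
        dirtyMask |= (1u << i);      // mark just this turret dirty
    }

    // Serialize only dirty turrets; returns how many were written.
    int sendDirty(std::vector<int>& wire)
    {
        int sent = 0;
        for (int i = 0; i < kTurrets; ++i)
            if (dirtyMask & (1u << i))
            {
                wire.push_back(i);            // turret index
                wire.push_back(hitpoints[i]); // its new state
                ++sent;
            }
        dirtyMask = 0;                // clean until next change
        return sent;
    }
};
```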

I still have to interpolate the positions/rotations with dead reckoning, but since I have some code for that from a previous prototype, it shouldn't take too long.
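At its core, dead reckoning just extrapolates from the last received state, p(t) = p0 + v0 * (t - t0), until the next packet arrives; a tiny sketch ( illustration only, the prototype's code surely layers smoothing on top ):

```cpp
struct Vec3 { float x, y, z; };

// Predict a remote ship's position dt seconds after its last update.
// Corrections from newer packets would normally be blended in
// gradually to avoid visible snapping.
Vec3 deadReckon(const Vec3& lastPos, const Vec3& lastVel, float dt)
{
    return { lastPos.x + lastVel.x * dt,
             lastPos.y + lastVel.y * dt,
             lastPos.z + lastVel.z * dt };
}
```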

Oh, and... :




Space combat prototype - phase 4

The space combat prototype has been progressing well these last days ( despite a "slow" last week, thanks to Bethesda! ). In a future update I'll describe from a technical standpoint how spaceships are constructed and handled internally.

Now, weapons can be attached to hardpoints, hardpoints to ship parts, and ship parts to a ship entity, and each part can have its own status ( can break or not, take damage, etc.. ). Thrusters are attached to a specific part too. So if a weapon is damaged by a laser, it won't fire anymore, and if a thruster is destroyed, it won't operate, leading to some "fun" situations. Lose a wing of your interceptor, and you'll start to spin like mad in some directions.
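The hierarchy described above can be sketched as follows ( hypothetical type names; a toy model, not the actual entity system ):

```cpp
#include <vector>

struct Weapon    { bool canFire = true; };
struct Hardpoint { Weapon weapon; };
struct Thruster  { bool working = true; };

// A ship part carries hardpoints and thrusters; when the part
// breaks, everything attached to it stops functioning.
struct ShipPart
{
    int hitpoints = 100;
    std::vector<Hardpoint> hardpoints;
    std::vector<Thruster>  thrusters;

    void takeDamage(int amount)
    {
        hitpoints -= amount;
        if (hitpoints <= 0)
        {
            for (auto& h : hardpoints) h.weapon.canFire = false;
            for (auto& t : thrusters)  t.working = false;
        }
    }
};
```

A ship entity would then own a list of `ShipPart`s; losing the "wing" part disables its thrusters, producing exactly the asymmetric-thrust spin mentioned above.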

JoeB has sent in an update of his battleship. It looks really impressive now, especially with the hangar in which you can fly and dock an interceptor.. the sense of scale is really nice.


In other news, I want to start covering in these journals another aspect of the development, which I'll call ( for lack of a better word ) the "social" side. Infinity is getting more and more exposure, and I think it can be interesting for other developers to learn not only about the technical aspects of the game, but also about its "social" ones: the problems encountered in the community/organization, etc.. So do not take the following as a rant or some selfish babbling; if I'm exposing these things publicly, it's only in the hope that it can help other developers. I've been pretty open on the technical side, discussing algorithms/implementations, so why not do the same with the social side ?

One thing that I should mention now is that my official position regarding attacks, flames or simply negative opinions ( founded or not ) that may appear on any external forum ( excepted of course ) is to never comment. It would open a Pandora's box, if you see what I mean..

Today when I connected and quickly checked my logs, I saw a high number of incoming connections from the Eve Online forums. Out of curiosity I had a look at them and saw a thread called "Developer of Infinity bashing EVE !!". That was quite a surprise!

It all started when Betelgeuze posted an answer to a question somebody had asked about the difference between Infinity and Eve ( is Infinity a clone of Eve ? how are the games different ? ). Betelgeuze promptly ( and quite innocently ) answered by detailing the differences between Eve and Infinity, namely:

- seamless landing on planets
- size of the universe (billions of systems)
- newtonian physics/collisions
- owning planets
- owning more than one ship at a time
- the storyline concept with "unique" quests

(For the curious, the thread is here).

There's an error in what Betelgeuze said ( owning more than one ship at a time ), but I believe it was more a shortcut really meant to be read as "you can't control more than one ship at a time", since that post was probably written in 3 minutes.

I don't really know what could be considered "bashing" in that, but apparently some Eve fans think otherwise.

There was quite a lot of criticism in that thread; I'll just list the most frequent comments ( incidentally, they're not specific to the Eve forums. I've seen similar comments pretty much everywhere ). A selection:

- I am lying about the number of worlds in the game universe, because it's not technically possible ( would take too much CPU or hard drive space, your choice ). Coincidence or not, these kinds of comments are generally made by players, not developers.
- it is impossible to develop a game such as Infinity because it'd take an infinite amount of time to design such a big universe
- ( my favorite ) it is impossible because I'm a newbie at programming and obviously have no understanding of how big such a project is
- the screenshots/videos are fake, that cannot run in real-time
- the terrain in the screenshots was made in Terragen ( thank you! )
- it will take years to complete ( an obvious statement; I've been saying this everywhere )
- it will require an insanely powerful machine to run ( despite the fact that it'll run on a medium-spec machine at release time )
- vaporware ( can't really blame them )
- copyright infringement ( huh? )
- trademark infringement ( re-huh? )
- the game is a clone of > ( triple-huh? )
- some ship designs have been copied from > ( given the number of games and movies released to date, it'd be quite hard not to find something that looks remotely similar, don't you think ? )

Don't think I'm annoyed or angry at these kinds of comments, because that's really not the case. As I said, I just ignore them and never reply, unless you contact me by email or on the game forums. Fortunately there's a lot of constructive and positive feedback too ( actually, more than the negative ), but I thought posting this would be interesting for our fellow developers :)




IBL: Incredibly Bad Luck (tm)

Unfortunately, I have some bad news to announce.

Sometimes, things that you thought would never happen, do happen. Today, I learned it the hard way.

I'm a pretty paranoid developer. I always keep 4 copies of my work, as backups, in case something happens. In the worst case, only a few days or weeks of work can be lost. Unless.. fate decides otherwise.

When I woke up this morning, I had a bad surprise. My home computer's external USB drive, which I'd had for 3 years now, had died. I was storing all the Infinity source code and data files on it. Fortunately, I had a backup. Every few days I transfer that backup to my work computer, which is in a different physical location ( in case my apartment burns down, or a horde of werewolves invades it, or something.. ). My workplace is in Brussels too, but unless a nuclear bomb explodes over Brussels, that should be safe, shouldn't it ? So since I had many backups, I wasn't too worried. Except...

When I arrived at work this morning, I found that everybody was already there, and in a great panic. Apparently, during the night some thieves broke into our offices, and since they weren't too tech-savvy, they didn't steal the most expensive hardware but went directly for our work computers. The bastards! The funniest thing is, during the night the office concierge saw some suspicious movement and called the cops. They arrived soon after and chased the thieves into the woods. The thieves probably couldn't escape with all the stolen material, so they threw it over a fence to run faster. I don't know if they were eventually captured or not, but what I do know is that my computer, the one with the backup on it, is badly broken. But fear not! I still have backups. So I still wasn't too worried ( although a bit less confident than the first time. It really smelled fishy ). Except...

I also have a backup on my pocket USB drive. It's only 1 GB, so I don't back up to it as often as to my other computers. A few weeks of lost work: too bad, but not catastrophic either. So I looked around my apartment for the USB key. I was pretty sure I'd had it in hand a few days ago, but couldn't find it.. until I realized it had stayed in my trousers. And I had just washed them yesterday. Guess what ? It didn't survive either.

By now, I was really panicking. But I still had one more chance. One last backup, a .zip archive of the whole project, the one I upload every once in a while to my US server.. in case of a problem ( asteroid fall ? ) in Brussels. So I was saved. The chance that the US server had died during the night was just.. zero.

And I found my file ! Phew. Saved. So I quickly copied it to one of my hard drives ( one that hadn't died ), and started to decompress it. A dialog popped up. Please enter the password. Ah, true, it was password protected, of course. But what was the damn password ? No way to remember it; I always choose passwords that are a random mix of weird symbols. But I had noted this password somewhere.. in a file. Where was that file again ?

Then I started to cry when I realized. You can see it coming. The file was on my USB drive.

...Tomorrow will be another day, I'm sure it was a nightmare, and I'll wake up..


