
Journal of Ysaneya

Patch 0.1.6.0 screenshots

Posted by , 14 October 2016 · 1,083 views

Patch 0.1.6.0 was released a few weeks ago. This patch introduced improved cockpits with lighting & shadowing, the hauler NPC ship ( static at the moment, but there will be A.I. once the game becomes more mature ), and military bases and factories on planets ( currently placeholders: undetailed and untextured ). See the attached screenshots.

 


 

One of our artists, Dan Hutchings, is making a first pass on the space station modules. In Infinity: Battlescape, we designed our space stations, bases and factories to be modular. This means that we model & texture independent modules, which can be attached together in various configuration layouts. Here's one such layout for a space station:

 


 

More layouts / shapes are possible, to create interesting combinations:

 


 

Meanwhile, I've been working on refactoring the client / server code ( most of it was quickly set up for our Kickstarter campaign and still suffers from architectural issues; for example, projectile hit detection is still client-authoritative, which is a big no-no ) and improving networking latency, bandwidth and interpolation. This work is expected to take at least a month, if not more, but during this refactoring I'll also add a bunch of new gameplay elements ( teams, resources/credits generation, etc. ).

 

Work has started on the user interface / HUD too but I'll leave that for a future post.

 

Here are pics of the cargo ship ( hauler ):

 




A retrospective on the Infinity project

Posted by , 26 September 2016 · 1,235 views


Hey everybody, long time no see, Ysaneya here ! I haven't posted in the past 6 years, if I'm counting right. Most of you probably don't remember me, but the few of you who do should remember the Infinity project and how it all started back in 2005. It started with a dream, one made of stars and full of procedurally-generated planets to visit. At the time, Elite was a long-forgotten franchise and nobody was working on a procedural universe. I started to work in my spare time on an MMO project called Infinity.

 


 

2005 - 2010: procedural dreams

 

In the first years, I researched procedural planet generation. I also developed an entire engine ( nowadays known as the I-Novae Engine ) to support all the features I'd need for the Infinity project, including:

  • A flexible scene-graph
  • A 3D renderer supporting all the latest-gen features and shaders ( shadow mapping, motion blur, HDR, dynamic lighting.. the usual list.. )
  • A physics engine ( I settled on ODE )
  • An audio engine ( OpenAL )
  • A network engine ( based on UDP )
  • All the procedural planetary & universe generation technology
In 2007 I released a small free game, simply named the "Infinity Combat Prototype". The goal for that game was to integrate the engine into a game, to validate that all the components worked together and that a game ( some Newtonian multiplayer combat in arenas in space ) could be produced. The idea was that it'd be the first step that would eventually lead to the whole MMO.

 


 

Unfortunately, it's pretty much at this point that I started to get "lost" in the ambition of the project. I had created the concept of "community contributions", where wannabe artists could submit artwork, 3D models & textures to be used in the game, but it quickly took a dozen hours a week to review all this work and validate or reject it, keeping in mind that 95% of it was at the indie level at best.

 

I was the only programmer on the team, so progress started to slow down tremendously. We entered a vicious circle: as the months passed, the cool brand new technology was getting deprecated / looking obsolete, and catching up took months for a single feature. That was the time when I replaced the old-fashioned renderer with a deferred renderer, implemented dynamic lighting and shadow mapping and all sorts of visually cool stuff.. but meanwhile, gameplay progress was at a standstill. I spent some time working on the client/server architecture and databases, but nothing too fancy, and definitely not to the point it could be used for a full-fledged MMO.

 

By 2010 it became crystal clear that as the sole programmer of the project, even using procedural technology and an artist community to alleviate the content generation problem, I couldn't keep up. A few programmers offered their help but clearly weren't up to the task, or gave up after only a few months. If you've been an indie relying on volunteers for external help on your project, that should ring a bell.

 

But in early 2010, I met Keith Newton, an ex-developer from Epic Games who had worked on the Unreal Engine. He offered to set up an actual company, review our strategy and approach the problem from a professional & business perspective. I was about to give up on the project at that time, so naturally, I listened.

 


 

2010 - 2012: Infancy of I-Novae Studios

 

We formed the company, I-Novae Studios, LLC, in early 2010, and started to look for investors that could be interested in the technology, or companies interested in partnerships or licensing.

 

Unfortunately it was bad timing, and we didn't realize that immediately. If you recall, this was right after the economic crisis of 2008. All the people we talked to were very interested in the tech, but none were ready to risk their money on a small company with no revenue. We had a few serious opportunities during those years, but for various reasons nothing ever came out of them. Another problem was that this period was the boom of the mobile market, and most companies we talked to were more interested in doing mobile stuff than in a PC game.

 


 

During these years we also revamped our technology from the ground up to modernize it. We switched to physically-based rendering ( PBR ) at this time, implemented a powerful node-based material system, added an editor ( one thing I had simply never worked on pre-2010, due to lack of resources ) and much more. Keith worked approximately two and a half years full time, out of his own savings, to mature the tech and look for business opportunities. Meanwhile, our other artists and I were still working part time.

 

On the game side, unfortunately things still weren't looking great. Our strategy was to focus back on the technology and put Infinity on hold. We came to the conclusion that we'd probably need millions to realistically have a shot at producing an MMO at decent quality and in good conditions, and that it couldn't be our first project as a company. In 2012, Kickstarter started to become a popular thing. It was at this time that we started to play with the idea of doing a Kickstarter for a less ambitious project, but one still including our key features: a multiplayer component and procedural planetary generation. That was how Infinity: Battlescape was born.

 


 

2013 - 2015: Kickstarter, full steam ahead

 

It took us more than 2 years to prepare our Kickstarter. Yup. At this point Keith was back to working part time, but I left my job to dedicate myself to the Kickstarter, working full time out of my own savings on it.

 

To produce the Kickstarter we needed a lot of new content, never shown before, at near-professional quality. This included a ship with a fully textured PBR cockpit, multiple smaller ships/props, asteroids, a gigantic space station, multiple planetary texture packs and a larger cargo ship. We decided pretty early to generate the Kickstarter video in-engine, to demonstrate our proprietary technology. It'd show seamless take-offs from a planet, passing through an asteroid field, flying to a massive space station that comes under attack, with lots of pew-pew, explosions and particle effects. IIRC we iterated over 80 times on this video during the year before the Kickstarter. It's still online, and you can watch it here:

 

 

Meanwhile, I was also working on a real-time "concept demo" of Infinity: Battlescape. Our original plan was to send the demo to the media for maximum exposure. It took around 8 months to develop this prototype. It was fully playable, multiplayer, and included the content generated by our artists for the Kickstarter trailer. The player could fly seamlessly between a few planets/moons, in space, around asteroids, or dock at a space station. Fights were also possible, but there were never more than a handful of players on the server, so we could never demonstrate one of the key points of the gameplay: massive space battles involving hundreds of players.

 


 

In October 2015, we launched our Kickstarter. It was a success: we gathered more than 6,000 backers and $330,000, a little above the $300,000 we were asking for the game. It was one of the top 20 most successful video game Kickstarters of 2015. Our media campaign was a disappointment, and we received very little exposure from the mass media. I understandably blame our "vaporware" history. The social media campaign, however, was a success, particularly thanks to a few popular streamers and Twitter users who brought exposure to us, and to Chris Roberts of Star Citizen, who did a shout-out on his website to help us.

 

But as happy as we were to -finally- have a budget to work with, it was only the beginning..

 


 

2016+: Infinity Battlescape

 

We started full development in February 2016 after a few months of underestimated post-KS delays ( sorting out legal stuff, proper contracts with salaries for our artists, and figuring out who was staying and who was leaving ).

 

Since then, we've focused on game design, producing placeholders for the game prototype and improving our technology. We're still working on adding proper multithreading to the engine, moving to a modern Entity-Component-System ( ECS ), and figuring out what to do with Vulkan and/or DirectX 12. Meanwhile we're also working on networking improvements and a more robust client/server architecture.

 

The game is scheduled for release in late 2017.

 

All the pictures in this article come from our current pre-alpha.

 

https://www.inovaestudios.com/

 





Tech Demo Video 2010

Posted by , 04 May 2010 · 3,111 views

It's been many years since the release of the last video showcasing the seamless planetary engine, so I'm happy to release this new video. This is actually a video of the game client, but since there's little gameplay in it, I decided to label it as a "tech demo". It demonstrates an Earth-like planet with a ring, seamless transitions, a little spaceship ( the "Hornet" for those who remember ), a space station and a couple of new effects.

You can view it in the videos section of the gallery.

Making-of the video

Before I get into details of what's actually shown in the video, a few words about the making-of the video itself, which took more time than expected.

What a pain ! First of all, it took many hours to record the video, as each time I forgot to show something. In one case the framerate was really low, due to the heavy stress of dumping a 1280x720 HQ uncompressed video to disk. The raw dataset is around 10 GB for 14 minutes of footage.

14 minutes ? Yep, that video is pretty long. Quite boring too, which is to be expected since there's no action in it. But I hope you'll still find it interesting.

Once the video was recorded, I started the compression process. My initial goal was to upload an HQ version to YouTube and a .FLV for the video player embedded on the website. The second was quite easily done, but the quality after compression was pretty low. The bitrate is capped at 3600 kbps for some reason, and I didn't find a way to increase it. I suspect it's set to this value because it's the standard for Flash videos.

I also wanted to upload an HQ version to YouTube to save bandwidth on the main site, but so far it's been disappointing. I tried many times; each time, YouTube refused to recognize the codec I used for the video ( surprisingly, H.264 isn't supported ). After a few attempts I finally found one that YouTube accepted, only to discover that the video was then rejected due to its length: YouTube has a policy of not accepting videos that are more than 10 minutes long. What a waste of time.

So instead I uploaded it to Dailymotion, but it's very low-res and blurry, which I cannot understand since the original resolution is 1280x720; maybe it needs many hours of post-processing, I don't know. There's also now a two-part HQ video uploaded to YouTube: part 1 and part 2 . If you're interested in watching it, make sure you switch to full screen :)

Content of the video

The video is basically split in 3 parts:

1. Demonstration of a space station, modelled by WhiteDwarf and using textures from SpAce and Zidane888. Also shows a cockpit made by Zidane888 ( I'll come back to that very soon ) and the Hornet ( textured by Altfuture ).

2. Planetary approach and visit of the ring. Similar to what's already been demonstrated in 2007.

3. Seamless planetary landings.

Cockpit

I've been very hesitant about including the cockpit in the video, simply because of the expectations it could generate. So you must understand that it's an experiment, and in no way guarantees that cockpits will be present for all ships in the game at release time. It's still a very nice feature, especially with the free look around. You will notice that you can still see the hull of your ship outside the canopy, which is excellent for immersion. Note that the cockpit isn't functional, so if we do integrate it into the game one day, I would like all instruments to display functional information, buttons to light on and off, etc.


Background

The backgrounds you see in the video ( starfield, nebula ) are dynamically generated and cached into a cube map. This means that if you were located in a different area of the galaxy, the background would be dynamically refreshed and show the galaxy from the correct point of view.

Each star/dot is a star system that will be explorable in game. In the video, as I fly to the asteroid ring, you will see that I click on a couple of stars to show their information. The spectral class is in brackets, followed by the star's name. At the moment, star names use a unique code based on the star's location in the galaxy. It is a triplet formed of lower/upper case characters and numbers, like q7Z-aH2-85n. This is the shortest representation I could find that uniquely identifies a star. The name is then followed by the distance, in light-years ( "ly" ).
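As an illustration only ( the actual naming scheme isn't documented here ), such a triplet-based name could be derived from a numeric star identifier with a base-62 encoding; the alphabet, grouping and function name below are my own assumptions:

#include <cstdint>
#include <string>

// Hypothetical sketch: pack a numeric star identifier into three groups of
// three base-62 characters, e.g. "q7Z-aH2-85n". Not the engine's actual code.
static const char kAlphabet[] =
    "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";

std::string EncodeStarName(uint64_t id)
{
    std::string name(11, '-');                                   // "xxx-xxx-xxx"
    static const int slots[9] = { 0, 1, 2, 4, 5, 6, 8, 9, 10 };  // skip the dashes
    for (int i = 8; i >= 0; --i)                                 // least significant digit goes last
    {
        name[slots[i]] = kAlphabet[id % 62];
        id /= 62;
    }
    return name;
}

Nine base-62 characters give roughly 1.3e16 combinations, comfortably more than the few hundred billion stars of the galaxy.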

I still have to post a dev journal about the procedural rendering of the galaxy on the client side, in which I'll come back to all the problems I've had, especially performance-related ones.


Planet

I'm not totally happy with the look of the planet, so it is likely that in the future I will do at least one more update of the planetary engine. There are various precision artifacts at ground level, as the heightmaps are generated on the GPU in a pixel shader ( so they are limited to 32 bits of floating-point precision ). I've also been forced to disable the clouds, which totally sucks, as it completely changes the look & feel of a planet seen from space. The reason is that I implemented the Z-buffer precision enhancement trick that I described in a previous dev journal, and it doesn't work as expected. With clouds enabled, the cloud surface Z-fights horribly with the ground surface, which wasn't acceptable for a public video. At the moment, I use a 32-bit floating-point Z-buffer, reverse the depth test and swap the near/far clipping planes, which is supposed to maximize Z precision.. but something must have gone wrong in my implementation, as I see no difference compared to a standard 24-bit fixed-point Z-buffer.

The terrain surface still lacks details ( vegetation, rocks, etc. ). I still have to implement a good instancing system, along with an impostor system, to get acceptable performance while maintaining a high density of ground features.


Look & Feel

Don't think for one second that the "look & feel" of the camera and ship behavior in this video is final. I'm pretty happy with the internal view and the cockpit look, but the third-person camera still needs a lot of work. It theoretically uses a non-rigid system, unlike the ICP, but it still needs a lot of improvements.

Effects

As you may notice, the ship's thrusters correctly fire depending on the forces acting on the ship and the desired accelerations. Interestingly, at any given point in time, almost all thrusters are firing, but for different reasons. First, the thrusters facing the planet fire continuously to counteract gravity. It is possible to power down the ship ( as seen at the end of the video ), in which case the thrusters stop working. Secondly, many thrusters fire to artificially simulate the drag generated by the auto-compensation of inertia. For example, when you rotate your ship to the right and stop moving the mouse, the rotation will stop after a while. This is done by firing all the thrusters that would generate a rotation to the left. Of course, some parameters must be fine-tuned.

When the ship enters the atmosphere at a high velocity, there's a friction/burning effect done in shaders. It still lacks smoke particles and trails.

This video will also give you a first idea of how long it takes to land on or take off from a planet. The dimensions and scales are realistic. Speed is limited at ground level for technical reasons, as higher speeds would make the procedural algorithms lag too far behind, generating unacceptable popping. At ground level, I believe you can fly at modern airplane speeds. A consequence of this system is that if you want to fly to a distant location on the planet, you first have to fly to low orbit, then land again near your destination point.




ASEToBin 1.0 release

Posted by , 06 October 2009 · 1,115 views

Finally, the long awaited ASEToBin 1.0 has been released !

ASEToBin is a tool that is part of the I-Novae engine ( Infinity's engine ). It allows contributors and artists to export their model from 3DS Max's .ASE file format and to visualize and prepare the 3D model for integration into the game.

This new release represents more or less 200 hours of work, and is filled with tons of new features, like new shaders with environmental lighting, skyboxes, a low-to-high-poly normal mapper, automatic loading/saving of parameters, etc..

ASEToBin Version 1.0 release, 06/10/2009:

http://www.fl-tw.com/Infinity/Docs/SDK/ASEToBin/ASEToBin_v1.0.zip

Changes from 0.9 to 1.0:

- rewrote "final" shader into GLSL; increase of 15% performance (on a Radeon 4890).
- fixed various problems with normal mapping: artifacts, symmetry, lack of coherency between bump and +Z normal maps, etc. Hopefully the last revision. Note that per-vertex interpolation of the tangent space can still lead to smoothing artifacts, but that should only happen in extreme cases (like the cube with 45° smoothed normals) that should be avoided by artists in the first place.
- removed anisotropic fx in the final shader and replaced it by a fresnel effect. Added a slider bar to control the strength of the fresnel reflection ("Fresnel").
- changed the names of the shaders in the rendering modes listbox to be more explicit on what they do.
- set the final shader (now "Full shading") to be the default shader selected when the program is launched.
- added a shader "Normal lighting" that shows the lighting coming from per-pixel bump/normal mapping.
- added support for detail texturing in "Full Shading" shader. The detail texture must be embedded in the alpha channel of the misc map.
- increased accuracy of specular lighting by using the real reflection vector instead of the old lower-precision half vector.
- added support for relative paths.
- added support for paths to textures that are outside the model's directory. You can now "share" textures between different folders.
- added automatic saving and reloading of visual settings. ASEToBin stores those settings in an ascii XML file that is located next to the model's .bin file.
- ase2bin will not exit anymore when some textures could not be located on disk. Instead it will dump the name of the missing textures in the log file and use placeholders.
- fixed a crash bug when using the export option "merge all objects into a single one".
- ambient-occlusion generator now takes into account the interpolated vertex normals instead of the triangle face. This will make the AO map look better (non-facetted) on curved surfaces. Example:
Before 1.0: http://www.infinity-universe.com/Infinity/Docs/SDK/ASEToBin/ao_before.jpg
In 1.0: http://www.infinity-universe.com/Infinity/Docs/SDK/ASEToBin/ao_after.jpg
- added edge expansion to AO generator algorithm, this will help to hide dark edges on contours due to bilinear filtering of the AO map, and will also fix 1-pixel-sized black artifacts. It is *highly recommended* to re-generate all AO maps on models that were generated from previous version of ASEToBin, as the quality increase will be tremendous.
- automatic saving/loading of the camera position when loading/converting a model
- press and hold the 'X' key to zoom the camera (ICP style)
- press the 'R' key to reset the camera to the scene origin
- reduced the znear clipping plane distance. Should make it easier to check small objects.
- program now starts maximized
- added a wireframe checkbox, that can overlay wireframe in red on top of any existing shader mode.
- added a new shader "Vertex lighting" that only shows pure per-vertex lighting
- fixed a crash related to multi-threading when generating an AO map or a normal map while viewing a model at the same time.
- added a skybox dropdown that automatically lists all skyboxes existing in ASEToBin's Data/Textures sub-directories. To create your own skyboxes, create a folder in Data/Textures (the name doesn't matter), create a descr.txt file that contains a short description of the skybox, then place your 6 cube map textures in this directory. They'll be automatically loaded and listed the next time ASEToBin is launched.
- the current skybox is now saved/reloaded automatically for each model
- added a default xml settings file for default ASEToBin settings when no model is loaded yet. This file is located at Data/settings.xml
- removed the annoying dialog box that pops up when an object has more than 64K vertices
- fixed a bug for the parameter LCol that only took the blue component into account for lighting
- added support for environment cube map lighting and reflections. Added a slider bar to change the strength of the environment lighting reflections ("EnvMap"). Added a slider bar to control the strength of the environment ambient color ("EnvAmb").
- added experimental support for a greeble editor. This editor allows placing greeble meshes on top of an object. The greeble is only displayed (and so only consumes CPU/video resources) when the camera gets close to it. This may allow kilometer-sized entities to look more complex than they really are.
- added experimental support for joypads/joysticks. They can now be used to move the camera in the scene. Note that there's no configuration file to customize joystick controls, and the default joystick is the one used. If your joystick doesn't work as expected, please report any problem on the forums.
- added a slider bar for self-illumination intensity ("Illum")
- added a slider bar for the diffuse lighting strength ("Diffuse")
- added a Capture Screenshot button
- added a new shader: checkerboard, to review UV mapping problems (distortions, resolution incoherency, etc..)
- added the number of objects in the scene in the window's title bar
- added a button that can list video memory usage for various resources (textures, buffers, shaders) in the viewer tab
- added a Show Light checkbox in the visualization tab. This will display a yellowish sphere in the 3D viewport in the direction the sun is.
- added new shaders to display individual texture maps of a model, without any effect or lighting (Diffuse Map, Specular Map, Normal Map, Ambient Map, Self-illumination Map, Misc Map, Detail Map)
- fixed numerous memory/resources leaks
- added a button in the visualization tab to unload (reset) the scene.
- added an experimental fix for people who don't have any OpenGL hardware acceleration due to a config problem.
- added a button in the visualization tab to reset the camera to the scene origin
- added a checkbox in the visualization tab to show an overlay grid. Each gray square of the grid represents an area of 100m x 100m. Each graduation on the X and Y axis are 10m. Finally, each light gray square is 1 Km.
- added a feature to generate ambient-occlusion in the alpha channel of a normal map when baking a low-poly to a high-poly mesh. Note: the settings in the "converter" tab are used, even if disabled, so be careful!

Note: Spectre's Phantom model is included as an example in the Examples/ directory !



Tip of the day: logarithmic zbuffer artifacts fix

Posted by , 20 August 2009 · 7,092 views

Logarithmic zbuffer artifacts fix

In cameni's Journal of Lethargic Programmers, I was very interested by his idea of using a logarithmic zbuffer.

Unfortunately, his idea comes with a couple of very annoying artifacts, due to the linear interpolation of the logarithm-based (non-linear) formula. It particularly shows on thin or huge triangles where one or more vertices fall off the edges of the screen. As cameni explains in his journal, for negative Z values the triangles basically tend to pop in/out randomly.

It was suggested to keep a high tessellation of the scene to avoid the problem, or to use geometry shaders to automatically tessellate the geometry.

I'm proposing a solution that is much simpler and that works with pixel shaders 2.0+: simply generate the correct Z value at the pixel shader level.

In the vertex shader, just use an interpolator to pass the vertex position in clip space (GLSL) (here I'm using tex coord interpolator #6):


void main()
{
    // Transform the vertex and pass the clip-space position to the
    // fragment shader through texture coordinate interpolator #6.
    vec4 vertexPosClip = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_Position = vertexPosClip;
    gl_TexCoord[6] = vertexPosClip;
}

Then you override the depth value in the pixel shader:


void main()
{
    gl_FragColor = ...

    // Override the hardware-interpolated depth with a logarithmic distribution.
    const float C = 1.0;
    const float far = 1000000000.0;
    const float offset = 1.0;
    // Note: 1.0 / log(C * far + offset) is a constant and could be precomputed
    // on the CPU and passed in as a uniform.
    gl_FragDepth = (log(C * gl_TexCoord[6].z + offset) / log(C * far + offset));
}

Note that as cameni indicated before, the 1/log(C*far+1.0) can be optimized as a constant. You're only really paying the price for a mad and a log.

Quality-wise, I've found that solution to work perfectly: no artifacts at all. In fact, I went so far as to test a city with centimeter-to-meter details seen from thousands of kilometers away, using a very, very small field-of-view to simulate zooming. I'm amazed by the quality I got. It's almost magical. Z-buffer precision problems will become a thing of the past, even when using large scales such as those needed for a planetary engine.

There's a performance hit due to the fact that fast-Z is disabled, but to be honest, in my tests I haven't seen a difference in the framerate. Plus, tessellating the scene more or using geometry shaders would very likely cost even more performance than that.

I've also found that to control the znear clipping and reduce/remove it, you simply have to adjust the "offset" constant in the code above. Cameni used a value of 1.0, but with a value of 2.0 in my setup scene, it moved the znear clipping to a few centimeters.

Results

Settings of the test:
- znear = 1.0 inch
- zfar = 39370.0 * 100000.0 inches = 100K kilometers
- camera is at 205 kilometers from the scene and uses a field-of-view of 0.01°
- zbuffer = 24 bits

Normal zbuffer:

http://www.infinity-universe.com/Infinity/Media/Misc/zbufflogoff.jpg


Logarithmic zbuffer:
http://www.infinity-universe.com/Infinity/Media/Misc/zbufflogon.jpg

Future works

Could that trick be used to increase precision of shadow maps ?


Seamless filtering across faces of dynamic cube map

Posted by , 19 August 2009 · 3,587 views

Tip of the day

Anybody who has tried to render to a dynamic cube map has probably encountered the problem of filtering across the cube faces. Current hardware does not support filtering across different cube faces AFAIK; it treats each cube face as an independent 2D texture ( so when filtering pixels on an edge, it doesn't take into account the texels of the adjacent faces ).

There are various solutions for pre-processing static cube maps, but I've yet to find one for dynamic (renderable) cube maps.

While experimenting, I've found a trick that has come in very handy and is very easy to implement. To render a dynamic cube map, one usually sets up a perspective camera with a field-of-view of 90 degrees and an aspect ratio of 1.0. By adjusting the field-of-view angle appropriately, rendering to the cube map will duplicate the edges and ensure that the texel colors match.

The formula assumes that texture sampling is done in the center of texels (ala OpenGL) with a 0.5 offset, so this formula may not work in DirectX.

The field-of-view angle should equal:

fov = 2.0 * atan(s / (s - 0.5))

where 's' is half the resolution of the cube (ex.: for a 512x512x6 cube, s = 256).
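As a sanity check, here's a tiny helper ( C++, hypothetical name ) that computes the widened angle:

#include <cmath>

// Widened field-of-view for rendering one face of a dynamic cube map, so that
// edge texels are duplicated across adjacent faces. 'faceResolution' is the
// width/height of a cube face in texels.
double CubeFaceFovRadians(int faceResolution)
{
    const double s = faceResolution * 0.5;      // half the face resolution
    return 2.0 * std::atan(s / (s - 0.5));      // slightly wider than 90 degrees
}

For a 512x512 face this yields roughly 90.11 degrees instead of exactly 90.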

Note that it won't solve the mipmapping case, only bilinear filtering across edges.

Results:
Dynamic 8x8x6 cube without the trick:
http://www.infinity-universe.com/Infinity/Media/Misc/dyncube_seam1.jpg

Dynamic 8x8x6 cube with the trick:
http://www.infinity-universe.com/Infinity/Media/Misc/dyncube_seam2.jpg


Audio engine and various updates

Posted by , 08 July 2009 · 1,039 views

In this journal, no nice pictures, sorry :) But there's a lot to say about various "small" tasks ( depending on your definition of small; most of them are on the weekly scale ), including new developments on the audio engine and particle systems.

Audio engine


As Nutritious released a new sound pack ( of excellent quality! ) and made some sample tests, I used the real-time audio engine to perform those same tests and check whether the results were comparable. They were, with one small difference: when a looping sound started or stopped, you could hear a small crack. It seems this artifact is generated when the sound volume goes from 100% to 0% ( or vice versa ) in a very short amount of time. It isn't related to I-Novae's audio engine in particular, as I could easily replicate the problem in any audio editor ( I use Goldwave ). It also doesn't seem to be hardware-specific, since I tested both on a simple AC'97 integrated board and on a dedicated Sound Blaster Audigy, and I heard the crack in both cases.

A solution to that problem is to use transition phases during which the sound volume smoothly goes from 100% to 0%. It required adding new states to the state machine used in the audio engine, and caused many headaches, but it is now fixed. I've found that with a transition of 0.25s, the crack has almost completely disappeared.
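A minimal sketch of such a fade, assuming a periodic update with a delta time ( the structure and names are illustrative, not the engine's actual state machine ):

// Fade a looping sound in or out over a short transition to avoid the audible
// crack caused by an instantaneous volume jump between 0% and 100%.
struct FadingSound
{
    float volume;        // current volume, 0..1
    float target;        // 0.0 = stopping, 1.0 = starting
    float fadeTime;      // transition length in seconds, e.g. 0.25f
};

void UpdateFade(FadingSound& s, float dt)
{
    const float step = dt / s.fadeTime;              // fraction of the fade per update
    if (s.volume < s.target)
        s.volume = (s.volume + step < s.target) ? s.volume + step : s.target;
    else if (s.volume > s.target)
        s.volume = (s.volume - step > s.target) ? s.volume - step : s.target;
    // Once the volume reaches 0 after a stop request, the source can be released.
}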

One problem quickly became apparent: if the framerate was too low, the sound update ( adjusting the volume during transition phases ) wasn't called often enough and the crack became noticeable again. So I moved the sound update into a separate thread ( which will be good for performance too, especially on multi-core machines ) that updates at a constant rate, independently of the framerate.

Since I was working on the audio engine, I also took some time to fix various bugs and to add support for adjusting the sound pitch dynamically. I'm not sure yet where it will be used, but it's always good to have more options to choose from.

Particle systems


In parallel I've been working on a massive update ( more accurately, a complete rewrite ) of the particle system. So far I was still using the one from the combat prototype ( ICP ), dating from 2006. It wasn't flexible enough: for example, it didn't support multi-texturing or normal mapping / per-pixel lighting. Normal-mapped particles are a very important feature, especially for reimplementing volumetric nebulae or volumetric clouds later.

Particles are updated in system memory in a huge array and "compacted" at render time into a video-memory vertex buffer. I don't use geometry shaders yet, so I generate 4 vertices per particle quad, each vertex being a copy of the particle data with a displacement parameter ( -1,-1 for the bottom-left corner to +1,+1 for the top-right corner ). The vertices are displaced and rotated like a billboard in a vertex shader.
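Here's a rough sketch of that CPU-side expansion, with a hypothetical particle layout ( the real vertex format is certainly richer ):

#include <cstddef>

// One system-memory particle becomes four vertices in the video-memory buffer,
// each carrying a corner displacement that the vertex shader turns into a
// camera-facing quad.
struct Particle
{
    bool  alive;
    float position[3];   // world-space center
    float size;
    float color[4];
};

struct ParticleVertex
{
    float position[3];
    float size;
    float color[4];
    float corner[2];     // billboard displacement: (-1,-1) .. (+1,+1)
};

// Copies live particles into 'out' ( 4 vertices each ) and returns the vertex count.
std::size_t FillVertexBuffer(const Particle* particles, std::size_t count, ParticleVertex* out)
{
    static const float corners[4][2] = { {-1,-1}, {+1,-1}, {+1,+1}, {-1,+1} };
    std::size_t v = 0;
    for (std::size_t i = 0; i < count; ++i)
    {
        if (!particles[i].alive)          // "compaction": dead particles are skipped
            continue;
        for (int c = 0; c < 4; ++c)
        {
            ParticleVertex& pv = out[v++];
            for (int k = 0; k < 3; ++k) pv.position[k] = particles[i].position[k];
            pv.size = particles[i].size;
            for (int k = 0; k < 4; ++k) pv.color[k] = particles[i].color[k];
            pv.corner[0] = corners[c][0];
            pv.corner[1] = corners[c][1];
        }
    }
    return v;                             // 4 vertices per live particle
}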

Performance is decent: around 65K particles at 60-70 fps on a GF 8800 GTS, but I'm a bit baffled that my new Radeon HD 4890 gets similar framerates, as it's supposed to be much faster than an 8800 GTS. I ran a profiler, and most of the time seems to be spent uploading the vertex buffer rather than updating or rendering. I don't know whether I should blame Vista or ATI/AMD...

I still have a few ideas to experiment with to better manage the vertex buffer and avoid re-filling it completely every frame, especially when some particles are static ( for example, all the particles in a nebula ).

Visual Studio 2008


I migrated all my projects to Visual Studio 2008. While doing so, I switched the C runtime library from dynamic to static, hopefully avoiding future problems with missing dependencies. Unfortunately, most of the external libraries I was using were compiled against the dynamic CRT, so I had to update and recompile every single one of those libraries, which took a lot of time. I also used the occasion to move the automatic linking of I-Novae's IKernel from the header files to the cpps.

Normal mapping


SpAce reported normal mapping problems in ASEToBin. He generated a cube in 3ds Max, duplicated it, applied a UV map to one of them and used it as the "low poly" mesh, while the other version served as the "high poly" one. Then he baked the normal maps from the high-poly to the low-poly into a texture and loaded it in ASEToBin. The results were wrong: in 3ds Max the cube rendered as expected, but in ASEToBin there were some strange smoothing/darkening artifacts.

I played with that code for days and was able to improve it, but came to the conclusion that the artifacts were caused by per-vertex interpolation of the tangent space. 3ds Max doesn't interpolate the tangent space per vertex; it actually re-calculates the tangent space per pixel. The only way I could do that in ASEToBin ( or more generally in the game ) is to shift this calculation to the pixel shader, but for various reasons it's a bad idea: it'd hurt performance quite a bit, it'd raise the hardware requirements, etc.

So far I haven't seen any real-time engine/tool that takes 3ds Max's normal map and renders the cube with correct lighting, which comforts me in my conclusion that it can only be fixed by performing the calculations per pixel.

Gathering Texture packs


In the past years, many people have made tiling texture packs. Those texture packs are of variable quality: some of the textures are excellent, others are "good enough", and others aren't so nice. Almost none of them were made with a specific faction in mind - which is partially due to us not providing clear guidelines on the visual style of faction textures. In any case, I think it's time to collect all those textures, filter them by quality, sort them by faction and re-publish them in a single massive pack everybody can use.

It will take a while to sort everything. A few devs are currently working on new textures ( especially SFC textures ), but I think it would be nice if some contributors could help in the coming weeks. We are primarily looking for generic textures, like plating for hulls, greeble, hangar/garage elements, etc. Also, if you have work-in-progress textures sitting on your hard drive in a decent ( usable ) state, now would be a good time to submit them.


Galaxy generation

Posted by , 19 May 2009 · 4,013 views

In the past weeks, I've been focusing my efforts on the server side. A lot of things are going on, especially on the cluster architecture. But one particular area of interest is the procedural galaxy generator. In this journal, I will be speaking of the algorithm used to generate the stars and the various performance/memory experiments I made to stress the galaxy generator.

Overview


Note: video available at the end of the article.

Our galaxy, the Milky Way, contains an estimated 100 to 400 billion stars. As you can imagine, generating those in a pre-processing step is impossible. The procedural galaxy generator must be able to generate star data in specific areas, "regions of interest", usually around the players ( or NPCs, or star systems in which events happen ).

The jumpdrive system will allow a player to select any star and attempt to jump to it. The range doesn't matter; what's important is the mass of the target and the distance to it. Let's start with a simple linear formula where the probability of a successful jump is a function of M / D ( M = target's mass, D = distance ). Of course, the "real" formula is a lot more complicated and isn't linear, but let's forget about that for now.

Under that simple formula, you have the same chance of jumping to a star with a mass of 1.0 located 10 LY ( light-years ) away as you have of jumping to a star of mass 10.0 located 100 LY away.
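In code, the simplified stand-in could look like this ( purely illustrative; 'k' is an arbitrary tuning constant, and the real formula is more complex and non-linear ):

// Probability of a successful jump under the simple linear M / D model.
float JumpProbability(float targetMass, float distanceLY, float k)
{
    float p = k * targetMass / distanceLY;
    return p < 0.0f ? 0.0f : (p > 1.0f ? 1.0f : p);   // clamp to [0..1]
}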

The mass of a star ( for stars on their main sequence ) defines its color. Stars that are many times as massive as the Sun are blue; Sun-like stars are white/yellow; low-mass stars appear reddish and are often called red dwarves.

How does all of that relate to the galaxy generator ? Well, it defines a fundamental constraint: the generator must be hierarchical. In other words, massive blue stars must be generated even when they're very far away, while lighter red dwarves only need to be generated in a much smaller volume around the player.

If you don't fully understand the previous sentence, read it again until you realize what it means, because it's really important. Red dwarves that are far away aren't generated. At all. They're not displayed; they're not even in memory, and they don't consume any memory. More subtly, it is impossible to "force" them to appear until you "physically" get closer to them. This also implies that you will not be able to search for a star by its name unless it's a "special" star stored in the database.

Generating a point cloud of the galaxy


The algorithm is based on an octree. Remember that stars must be generated hierarchically. The octree is subdivided around the player recursively until the maximum depth ( 12 ) is reached. Each node in the octree has a depth level, starting at 0 for the root node and increasing by 1 at each recursion level ( so the maximum is 12 ). This depth level is important because it determines the type of stars that are generated in that node.

This level is used as an index into a probability table. The table stores probabilities for various star classes at different depths. For the root node ( level #0 ) for example, there may be a 40% chance to generate an O-class ( hot blue ) star, a 40% chance to generate a B-class and a 20% chance to generate an A-class star.

That way, it's possible to drive the algorithm to generate the right proportion of star classes, as sketched below.
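Picking a class from one row of such a table is just a cumulative-probability walk ( sketch only, not the engine's actual code; for the root node the row would be something like { 0.40, 0.40, 0.20, 0, ... } for O, B, A, ... ):

// Returns the index of the star class selected by 'random01' ( a uniform random
// number in [0..1] ) using one depth level's row of class probabilities.
int PickStarClass(const float* probabilities, int numClasses, float random01)
{
    float accum = 0.0f;
    for (int c = 0; c < numClasses; ++c)
    {
        accum += probabilities[c];
        if (random01 < accum)
            return c;
    }
    return numClasses - 1;      // guard against rounding error
}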

The potential number of stars per node is only a function of the depth level. At the root level, there are 50 million stars. At the deepest level ( #12 ) there are 200 stars. Note that the actual amount of stars generated will be lower than that, because stars need to pass a decimation test. That's how you shape the galaxy... with a density function.

The density function takes as input some 3D coordinates in the galaxy and returns the probability in [0-1] that a star exists for the given coordinates.

To generate the spherical halo, the distance to the galactic origin is computed and fed into an inverse exponential ( with some parameters to control the shape ).

To generate the spiral arms, the probability is looked up from a "density map" ( similar to a grayscale heightmap ). The 2D coordinates as well as the distance to the galactic plane are then used to determine a density.

To generate globular clusters, the calculation is similar to the spherical halo, except that each cluster has a non-zero origin and a radius on the order of a few dozen light-years.

The final density function is taken as the maximum of all those densities.

To generate stars for a given node, a random 3D coordinate inside the node's bounding box is generated for each potential star. The density is evaluated for this location. Then a random number is generated, and if that number is lower than the density, the star actually gets generated and stored into the node.
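Putting the pieces together, the per-node generation loop could be sketched like this ( all names are hypothetical, and the stub density is a stand-in for the real halo / arms / clusters combination described above ):

#include <cmath>
#include <cstdint>
#include <random>
#include <vector>

struct Star { float x, y, z; int starClass; };
struct Box  { float min[3], max[3]; };

// Placeholder density: inverse-exponential falloff with distance from the
// galactic origin ( made-up scale ), returning a probability in [0..1].
float GalaxyDensity(float x, float y, float z)
{
    const float r = std::sqrt(x * x + y * y + z * z);
    return std::exp(-r / 20000.0f);
}

void GenerateNodeStars(const Box& bounds, int potentialStars, uint32_t seed,
                       std::vector<Star>& out)
{
    std::mt19937 rng(seed);                           // node seed => deterministic result
    std::uniform_real_distribution<float> unit(0.0f, 1.0f);

    for (int i = 0; i < potentialStars; ++i)
    {
        const float x = bounds.min[0] + unit(rng) * (bounds.max[0] - bounds.min[0]);
        const float y = bounds.min[1] + unit(rng) * (bounds.max[1] - bounds.min[1]);
        const float z = bounds.min[2] + unit(rng) * (bounds.max[2] - bounds.min[2]);

        // Decimation test: the candidate becomes a real star only if a random
        // number falls below the local density.
        if (unit(rng) < GalaxyDensity(x, y, z))
            out.push_back(Star{ x, y, z, /* class picked from the depth table */ 0 });
    }
}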

When the node gets recursively split into 8 children, all stars from the parent node gets distributed into the correct child ( selected based on their coordinates ).

As a note, all nodes are assigned a seed, and when a node gets subdivided, a new seed is generated for each child. That seed is used in various places when random numbers need to be generated. Therefore, if the player goes away and a node gets merged, then comes closer again and the node gets split, the exact same stars will be generated. They will have the exact same location, the same color, the same class, etc..

The drawback of procedural generation is that any change made to any parameter of the algorithm ( like the number of stars per node, or the probability tables ) will result in a completely different galaxy. None of the stars will end up in the same place ( or if they do, it's just a coincidence ). So all the probabilities and parameters had better be correctly adjusted before the game gets released, because afterwards, it would lead to the apocalypse..





Performance considerations


The algorithm as described above suffers from performance problems. The reason is quite simple: if for a given node you have 1000 potential stars, then you need to generate 1000 coordinates and test them against the density function at each coordinate, to see if a real star has been generated.

I quickly noticed that in the terminal nodes, the densities were pretty low. Imagine a cube of 100x100x100 LY located in the halo of the galaxy, far away from the origin: the density function over this volume will be pretty regular, and low ( I'm making this up, but let's say 10% ). This means that for 1000 potential stars, the algorithm will end up generating 1000 coordinates and evaluating the density 1000 times, and 10% of the candidates will pass the test, resulting in 100 final stars. Wouldn't it be better to generate only 100 candidates ? That would be 10 times faster !

Fortunately it's possible to apply a simple trick. Let's assume that the density function is relatively uniform over the volume: 10%. Generating 1000 candidates of which 1 in 10 succeeds is statistically equivalent to generating 100 candidates of which 10 out of 10 succeed. In other words, when the density is uniform, you can simply reduce the number of stars by the correct ratio ( 1 / density ), or, said otherwise, multiply the number of stars by the density: 1000 stars * 10% = 100 stars.

Most of the time, the density isn't uniform. The lower the depth level of the node, the larger its volume, and the less chance the density will be uniform over that volume. But even when the density isn't uniform, you can still use its maximum value over the node to reduce the number of potential candidates to generate.

Let's take a node of 1000 candidates where you have a 1% density on one corner and 20% on another corner (the maximum in the volume). It's still statistically equivalent to a node of 200 candidates ( 1000 * 20% ) with a density of 5% on the first corner and 100% on the other corner.

As you can see, there's no way around evaluating the density function for each candidate, but the number of candidates has been reduced by a factor of 5 while, at the same time, the density function has been multiplied by 5. Fewer stars to generate, and for each star, a higher chance to pass the test: a win-win situation !
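The reduction itself is just a rescaling, something like ( illustrative only ):

// 1000 potential stars with a maximum density of 20% become 200 candidates...
int ReducedCandidateCount(int potentialStars, float maxDensity)
{
    return static_cast<int>(potentialStars * maxDensity);
}

// ...tested against the density rescaled by that maximum ( 1% becomes 5%, 20% becomes 100% ).
float RescaledDensity(float density, float maxDensity)
{
    return density / maxDensity;
}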

Memory considerations


Until now, I've explained how the galaxy model is generated and how stars are procedurally distributed on the fly without any pre-processing. But keep in mind that the algorithm is primarily used on the server, and that there won't be just one player, but thousands of them. How does the galaxy generation work with N viewpoints ?

To keep it short, I modified the standard octree algorithm to split nodes as soon as needed, but delayed merging nodes together until more memory is needed.

The galaxy manager works as a least-recently-used ( LRU ) cache. Star data and nodes consume memory. When the target memory budget is reached, a "garbage collector" routine is launched. This routine checks all nodes and determines which ones have been least recently used ( that is, nodes that were generated long ago but aren't currently in use ). Those nodes are then merged and their memory is freed.

It's a bit tricky to stress test the galaxy generator for performance and memory with multiple players, simply because it's extremely dependent on where the players are located in the galaxy. The worst case would probably be players randomly distributed in the galaxy, all far from each other. But, making an educated guess, I don't expect this to be the norm in practice: most players will tend to concentrate around the cores, or around each other, forming small groups. But even then, can we say that 90% of the players will be less than 1000 LY from the cores ? Is it even possible to estimate that before the beta starts ?

Galactic map considerations


I've followed with interest the players' suggestions in the galactic map thread about how the galactic map should look. After testing the galaxy generator, I came to the conclusion that everybody severely under-estimates the number of stars there can be in a volume close to you. For example, within a radius of 100 LY, in the spiral arms with an average density, it's not uncommon to find 5000 stars.

Remember that the jump-drive is not limited exclusively by range. Or more exactly, while distance is a factor, there's no "maximum range". This means that it's perfectly possible to try to jump to a red dwarf that is 5000 LY away. The probability of success is ridiculously small ( lower than your odds of winning the lottery ), but non-zero. Of course, for the galactic map, this means that even stars that are far away should be displayed ( provided that you don't filter them out ). That's an insane number of dots that may appear on your map...

One of the more effective filters, I think, will be the jump-probability filter. That one is a given: only display stars with at least a 50% chance of a successful jump.

In the following screenshots, you can see a blue wireframe sphere. It defines the range in which stars are displayed. It's just an experiment to make people realize how many stars there are at certain ranges; by no means does it show how the galactic map will work ( plus, it's all on the server, remember ! ).

I can select any star by pressing a key, and it gets highlighted in pink. On the top-left, you can see some information about the selected star: first, a unique number that defines the "address" ( in the galaxy ) of the star. On the line below, the 3 values are the X, Y and Z coordinates of the star relative to the galactic origin. Then come the star class, its distance in light-years, and finally the jump probability.

In the coming weeks, I will probably move the galaxy algorithm to the client and start to add some volumetric/particle effects on the stars/dust to "beautify" it. The reason the algorithm will also be running on the client is to avoid having to transfer a million coordinates from the server to the client each time the player opens his galactic map. That wouldn't be kind to our bandwidth...







Video


I produced a demonstration video. Watch it on YouTube in HD ! ( I will also upload it to the website later, once I've converted it to .flv ).