
Space RPG game development log

## DevLog #13 - Engine: Viewport & Camera

Before a scene can be displayed, the viewport and projection transformations have to be applied. The engine has to support multiple viewports (for split screen or PIP), and each viewport may display the same scene as another viewport or a completely different one. When multiple viewports display the same scene, they may show it from different angles. That, in short, sums up the requirements for the interaction of Scene, Camera, and Viewport.

Camera in scene

First, let's take a look at a single scene that should support different view angles. The situation calls for a Camera class, with a Camera object in the scene for every predefined view. The result is something like a movie set: several cameras stand around the scene, and the cut only selects which view appears in the film at which moment. In my implementation the Camera::Render() function applies the camera transformation and then calls Scene::Draw(), so a single call on a camera draws the whole scene from the desired angle and with the desired settings.
As you can see on the class diagram below, the Scene provides methods to create a Camera. The scene knows all its cameras, and each created camera knows which scene it belongs to.

Viewport in window

Now that we can draw a scene from a camera, we still have to perform the viewport transformation. As with cameras, we create a class for viewports. Every viewport on the screen corresponds to one Viewport object. Split screen or PIP (picture in picture) is then achieved by creating at least two Viewport objects and calling Render() on each.

Display camera in viewport

One last thing remains - telling the viewport which scene to display and from which camera. By simply passing a Camera as an argument of the Render call, we tell the viewport: "In this viewport, display the view from that camera."
The Viewport::Render() performs viewport transformation and calls Camera::Render().
The Camera::Render() performs camera transformations and calls Scene::Draw().

Here is an example of what the game code for a two-player split screen could look like:

[spoiler]
// Create a 1280x1024 fullscreen window
Window* window = new Window();
window->Create( "Game title", 1280, 1024, 32, true ); // name, width, height, depth, fullscreen
window->Show();

// Create 2 viewports 1280x512 (horizontal splitscreen)
Viewport* viewportPlayer1 = new Viewport( 0, 0, window->Width(), window->Height()/2 ); // x, y, width, height
Viewport* viewportPlayer2 = new Viewport( 0, window->Height()/2, window->Width(), window->Height()/2 );
float aspect = (float)window->Width() / ( window->Height()/2 ); // cast avoids integer division

// Create scene
Scene* gameScene = new Scene();
Camera* camPlayer1 = gameScene->CreatePerspectiveCamera( position1, 60.0, aspect, 0.1, 500.0 ); // position, FOV, aspect, near, far
Camera* camPlayer2 = gameScene->CreatePerspectiveCamera( position2, 60.0, aspect, 0.1, 500.0 );
// add other objects to Scene

// Draw
viewportPlayer1->Render( camPlayer1 );
viewportPlayer2->Render( camPlayer2 );
Renderer::SwapBuffers( window );
[/spoiler]

## DevLog #12 - Engine engineering 2

The keyboard listener is complete, now with a fixed and improved mask for the special keys ( Ctrl, Alt, Shift ) to make it a bit more flexible. The left Alt was tricky to get working via window messages. I was slowly starting to think about abandoning the idea of callbacks as reactions to messages if it proved impossible to catch messages for left Alt combinations. In the end it turned out that although an Alt + key combination does not send a WM_KEYDOWN message like every other combination does, it sends a WM_SYSKEYDOWN message instead. With that, the keyboard handling system is fully prepared.

Here you can see the structure.

Window
WndProc sends messages to HandleEvent method of Window wrapper.
HandleEvent directly handles messages like window activation/deactivation, discards the screensaver message, processes key press/release messages, etc.
At first I was thinking about letting the game handle all the messages on its own, but in the end these messages are unlikely to be handled differently from game to game. Only some messages trigger a call to a method of the WindowEventHandler class.

WindowEventHandler
When a Window::HandleEvent() receives a certain message, it calls appropriate method of this class.
The game can inherit from this class ( e.g. GameWindowEventHandler ) and override the methods if it wants to handle some messages by itself.

KeyboardListener & KeyHandler
Game can register a callback function for each key ( + combinations of special keys ).
When a KEYDOWN message is received, the KeyPressed function is called. It checks whether a callback is registered for the key and, if so, calls it.

Ship & Character Controller
A static callback method is called from KeyHandler when a key is pressed. The callback takes an instance of the controlled object ( e.g. ship, character, etc. ) and calls its member method to perform the game operation.
For example, while we fly a ship, the game registers the methods from ShipController in the keyboard listener, and when a key is pressed, the appropriate function from ShipController is called directly. If we get off the ship, the game can unregister the old functions and register functions from CharacterController instead. If the same key is pressed now, a function from CharacterController is called.
I think this separates quite nicely the different operations that the same key should perform on different screens. Operations for different screens or game modes will not interfere with each other.
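A minimal sketch of that callback swap, assuming plain function pointers instead of the engine's actual static-callback plumbing. The simplified listener, the controller method names, and the `lastAction` marker below are made up for illustration:

```cpp
#include <map>

// Minimal stand-in for the KeyboardListener described above
// (plain function pointers, no special-key masks).
class KeyboardListener {
public:
    void RegisterCallback(unsigned char key, void (*callback)()) { callbacks[key] = callback; }
    void UnregisterCallback(unsigned char key) { callbacks.erase(key); }
    void KeyPressed(unsigned char key) {
        std::map<unsigned char, void (*)()>::iterator it = callbacks.find(key);
        if (it != callbacks.end()) it->second();   // call the registered callback, if any
    }
private:
    std::map<unsigned char, void (*)()> callbacks;
};

// Hypothetical controllers: each exposes a static callback forwarding
// to the currently controlled object. lastAction only records which
// controller reacted, for illustration.
int lastAction = 0;  // 1 = ship thrust, 2 = character walk

struct ShipController      { static void Thrust() { lastAction = 1; } };
struct CharacterController { static void Walk()   { lastAction = 2; } };

void boardShip(KeyboardListener& l) { l.RegisterCallback('W', &ShipController::Thrust); }
void leaveShip(KeyboardListener& l) {
    l.UnregisterCallback('W');                           // drop the ship binding...
    l.RegisterCallback('W', &CharacterController::Walk); // ...and bind the character instead
}
```

The same key thus triggers different behavior depending on which controller's functions are currently registered.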

Well, that's all for now. Mouse and possibly joystick handling will be done in a very similar way to the keyboard, but I will keep them for later. In the upcoming days I'd like to look at the Renderer and prepare a usable API for it.

## DevLog #11 - Engine engineering

I've spent the last several days separating the engine from the game by creating an engine API.
The code for window handling and keyboard listening is nearly done.

It sounds strange to start talking about creating an engine after many weeks of development. Unfortunately for me, I hadn't drawn the line between the engine part and the game part at the beginning, so no wonder the code of both parts began to weave together. In fact, whenever I needed to draw something from the game part, I accessed the OpenGL API directly. It works, but it creates a mess that is hard to maintain. Although there was a shallow intermediate layer for more complex operations (e.g. depth sorting), the usage of OpenGL was in most cases direct.

I had a tendency to separate the engine before, but it burnt out at the first step - wrapping the window management. Creating a window wrapper is easy until you reach WndProc (the method that processes window messages and has to be registered). To be registered, the WndProc has to be a static method. I searched Google for help, but in my earlier attempts I found only confirmations that there is no way to use a member function for window message processing. This week I tried once more to find a solution, and finally got lucky.

The WndProc stays static, but the only thing it does is pass the message to a member function of the window instance. The only question remaining is how to get the window instance into the static WndProc. Before we have the HWND available, we pass the pointer to our window class using the LPARAM of the window creation message.

[spoiler]
HWND WINAPI CreateWindowEx(
    __in     DWORD     dwExStyle,
    __in_opt LPCTSTR   lpClassName,
    __in_opt LPCTSTR   lpWindowName,
    __in     DWORD     dwStyle,
    __in     int       x,
    __in     int       y,
    __in     int       nWidth,
    __in     int       nHeight,
    __in_opt HWND      hWndParent,
    __in_opt HMENU     hMenu,
    __in_opt HINSTANCE hInstance,
    __in_opt LPVOID    lpParam // This will come as LPARAM of WM_NCCREATE message, so put pointer on window wrapper here
);
[/spoiler]

Whatever we set as lpParam in the CreateWindowEx call will arrive as the LPARAM of the WM_NCCREATE message.
In WndProc we have to catch that message and read the pointer to our window wrapper instance out of the LPARAM.
We then need to save the pointer somewhere else. Because the message also carries the HWND (window handle), we can save the pointer into its USERDATA space. For all later messages received by WndProc, the instance of our window wrapper is read back from the USERDATA of the HWND.

[spoiler]
LRESULT CALLBACK WndProc( HWND hWnd, UINT uMsg, WPARAM wParam, LPARAM lParam )
{
    // Variable for our window wrapper instance
    Window *window = NULL;

    // This message comes after CreateWindowEx() is called
    if ( uMsg == WM_NCCREATE )
    {
        // Read pointer on wrapper from LPARAM ...
        window = reinterpret_cast<Window*>( ((LPCREATESTRUCT)lParam)->lpCreateParams );
        // ... and save it into USERDATA in HWND
        SetWindowLong( hWnd, GWL_USERDATA, reinterpret_cast<LONG>( window ) );
        // Saving HWND into wrapper instance as well
        window->SetHWnd( hWnd );
    }
    else
    {
        // Any other message received will read the wrapper instance from USERDATA in HWND
        window = reinterpret_cast<Window*>( GetWindowLong( hWnd, GWL_USERDATA ) );
    }

    if ( window )
    {
        // Pass the message to our own member function in window wrapper
        return window->HandleWindowEvent( uMsg, wParam, lParam );
    }
    else
    {
        return DefWindowProc( hWnd, uMsg, wParam, lParam );
    }
}
[/spoiler]

With WndProc out of the way, there is nothing preventing us from creating a window handling API that can be easily used:

Window* window = new Window();
window->Create( "My window", 1024, 768, 32, false );
window->Show();

// Switch from windowed to fullscreen
window->Destroy();
window->Create( "My window", 1024, 768, 32, true );
window->Show();

[spoiler]
class Window
{
public:
    Window();
    ~Window();

    bool Create( char* title, int width, int height, int bits, bool fullscreen );
    void Destroy();
    void Show() const;
    bool IsFullscreen() const;
    HDC  GetHDC() const;
    void SetHWnd( HWND hWnd );

    static LRESULT CALLBACK WndProc( HWND hWnd, UINT uMsg, WPARAM wParam, LPARAM lParam );

private:
    LRESULT HandleWindowEvent( UINT uMsg, WPARAM wParam, LPARAM lParam ) const;
};
[/spoiler]

For listening to keyboard events I am using the window messages sent to WndProc. But the reaction to pressed keys is game-dependent, so the message processing function has to pass the key information to some game method. When the user "clicks" a key on the keyboard, the game gets notified about it, but the game often also needs to know whether the key is being "held". I know of two ways to tell whether a key is still pressed:
1) use a boolean array, set the appropriate bool to TRUE on the WM_KEYDOWN message and back to FALSE on the WM_KEYUP message
2) query the actual key status with the GetAsyncKeyState() function

I've picked the first approach - an array of 256 bool values keeping track of all keys. The KeyboardListener stores this array and also provides an API for the game to register callbacks for pressed keys or check the actual status of the keys.

// Example of KeyboardListener usage

// Game functions that should react on different keys
void Funct_A() { ... }
void Funct_Shift_A() { ... }
void Funct_Ctrl_Shift_A() { ... }

void test()
{
    KeyboardListener keyboardListener;

    // Each function is now registered for a specific key (and possibly for a combination of special keys like CTRL, ALT and SHIFT)
    keyboardListener.RegisterCallback( 'A', &Funct_A );
    keyboardListener.RegisterCallback( 'A', SPECIAL_KEY_SHIFT, &Funct_Shift_A );
    keyboardListener.RegisterCallback( 'A', SPECIAL_KEY_CTRL | SPECIAL_KEY_SHIFT, &Funct_Ctrl_Shift_A );

    // The registered functions are automatically called every time their key (or combination) is pressed
    // If some code should execute for the duration of a key being held, it has to check the key status
    if ( keyboardListener.IsPressed( 'A', SPECIAL_KEY_ALT | SPECIAL_KEY_CTRL ) ) { ... }
}

[spoiler]
class KeyboardListener
{
public:
    KeyboardListener();
    ~KeyboardListener();

    void KeyPressed ( unsigned char key, unsigned short specialKeyMask = 0 );
    void KeyReleased( unsigned char key );
    bool IsPressed  ( unsigned char key, unsigned short specialKeyMask = 0 );

    void RegisterCallback  ( unsigned char key, unsigned short specialKeyMask, void(*callback)() );
    void UnregisterCallback( unsigned char key, unsigned short specialKeyMask );

private:
    KeyHandler* keys[ 256 ];
};

class KeyHandler
{
public:
    KeyHandler();
    ~KeyHandler();

    void Press( unsigned short specialKeyMask );
    void Release();
    bool IsPressed( unsigned short specialKeyMask );

    void RegisterCallback( void (*function)(void), unsigned short specialKeyMask = 0 );
    void UnregisterCallback( unsigned short specialKeyMask = 0 );

private:
    bool pressed;
    std::map< unsigned short, void(*)(void) > callbacks;
};
[/spoiler]

Every key can have a callback function. The game registers the callback function through the KeyboardListener API. When the listener is notified by WndProc about a pressed key, it checks whether the key has a callback function registered and calls it. Things get a little more complicated when we want to react to SHIFT, ALT and CTRL combinations. To check a combination of these special keys, a 9-bit mask (stored in an unsigned short) is used:

1 - Ctrl (any)
2 - Alt (any)
4 - Shift (any)
8 - Left Ctrl
16 - Right Ctrl
32 - Left Alt
64 - Right Alt
128 - Left Shift
256 - Right Shift

If a game needs some function to react to Ctrl + A, but does not care whether it is the left or right Ctrl, it registers the function in the KeyboardListener for key 'A' with the special key mask "000000001":

keyboardListener.RegisterCallback( 'A', binary(000000001) , &Function );

When the player hits, for example, left Ctrl and A, the WndProc catches the 'A' key pressed message and checks the state of the Ctrl, Alt and Shift keys. It builds the following mask:

000000001 // because VK_CONTROL is pressed
000001000 // because VK_LCONTROL is pressed
---------
000001001

As you can see, although Ctrl was pressed, the two masks are not equal. The KeyboardListener therefore splits the current mask into two parts - a general part and an LR (left-right) part. The general part takes the lowest 3 bits of the mask, the L-R part takes the upper 6 bits. These two parts are then compared separately against the mask registered by the game.

This lets the game distinguish between the left and right control keys, or ignore the side entirely.
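A sketch of how that split-and-compare could look, using the mask bits from the table above. The helper name and the exact matching rule are assumptions about the engine's implementation:

```cpp
// Special-key mask bits from the table above.
enum {
    SPECIAL_KEY_CTRL   = 1,   SPECIAL_KEY_ALT   = 2,  SPECIAL_KEY_SHIFT = 4,
    SPECIAL_KEY_LCTRL  = 8,   SPECIAL_KEY_RCTRL = 16,
    SPECIAL_KEY_LALT   = 32,  SPECIAL_KEY_RALT  = 64,
    SPECIAL_KEY_LSHIFT = 128, SPECIAL_KEY_RSHIFT = 256
};

// The current mask matches a registered mask if either its general part
// (low 3 bits: Ctrl/Alt/Shift regardless of side) or its left-right part
// (upper 6 bits: side-specific keys) equals what the game registered.
bool MaskMatches(unsigned short current, unsigned short registered) {
    unsigned short generalPart   = current & 0x007;  // bits 1, 2, 4
    unsigned short leftRightPart = current & 0x1F8;  // bits 8 .. 256
    return registered == generalPart || registered == leftRightPart;
}
```

With the example above (left Ctrl + A gives mask 000001001), a callback registered for SPECIAL_KEY_CTRL matches via the general part, and one registered for SPECIAL_KEY_LCTRL matches via the left-right part.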

## DevLog #10 - OpenGL fog

OpenGL provides 3 fog types - GL_LINEAR, GL_EXP and GL_EXP2.
Some time ago I found an image showing the characteristics of these modes, which helped me understand them a bit better. I can't find it anymore, so I tried to recreate it. It isn't absolutely accurate, but it should help illustrate the differences between the fog modes.

There are several fog parameters that can be set. Besides the fog color, they are GL_FOG_START, GL_FOG_END, and GL_FOG_DENSITY.

GL_FOG_START and GL_FOG_END are meaningful only for LINEAR fog.
GL_FOG_DENSITY is meaningful only for EXP and EXP2 fog.

Because LINEAR fog works with the FOG_START / END parameters, it is the easiest type to set up. You simply set the distance where the fog should start and the distance where it should end (covering objects completely).
Setting the density for the EXP fogs is trickier. Mostly it's set & try, over and over, until you find a density that looks good.

From the fog equations we can, however, derive a formula for the desired density.
The EXP fog modes use these formulas:

EXP:  f = e^( -(density * z) )
EXP2: f = e^( -(density * z)^2 )

With a few manipulations we get an equation for the density:

density_EXP  = -ln( fog_end_intensity ) / fog_end
density_EXP2 = sqrt( -ln( fog_end_intensity ) ) / fog_end

where fog_end_intensity is the remaining intensity of the original color at the distance fog_end (e.g. 0.01 or something similarly close to 0).
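As a quick numeric check of the derivation: plugging the derived density back into the fog formula should return the chosen end intensity at the fog end distance. The function names below are just for this check:

```cpp
#include <cmath>

// Derived densities from above: at distance fogEnd, the fog factor f
// should come out equal to the chosen end intensity.
float DensityExp(float fogEnd, float endIntensity)  { return -std::log(endIntensity) / fogEnd; }
float DensityExp2(float fogEnd, float endIntensity) { return std::sqrt(-std::log(endIntensity)) / fogEnd; }

// OpenGL fog factors for the two exponential modes.
float FogExp(float density, float z)  { return std::exp(-(density * z)); }
float FogExp2(float density, float z) { return std::exp(-std::pow(density * z, 2.0f)); }
```

For fogEnd = 100 and endIntensity = 0.01, DensityExp gives 4.60517/100, which matches the 4.605170186f constant in the macro below.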

The easiest thing to do then is to create a macro that gives you back the density for a fog end distance:

// The fog intensity used is 0.01f
#define FOG_EXP_DENSITY_FOR(fogEnd)  4.605170186f/fogEnd
#define FOG_EXP2_DENSITY_FOR(fogEnd) 2.145966026f/fogEnd

Anywhere in the code you can now set the density easily. If you would like, for example, the EXP2 fog to end 100 units from the camera, you set the density like this:

float fogEnd = 100.0f;
glFogf( GL_FOG_DENSITY, FOG_EXP2_DENSITY_FOR( fogEnd ) );

I hope this helped in understanding the exponential fog modes, and that it reduces the time needed to find the proper density for your needs.

## DevLog #9 - Blended objects sorting

Finally done. The blended objects are rendered in proper order. Or at least in most cases they are.

I think everywhere rendering of blended objects is discussed, it is said that you must draw the solid objects (faces) first and only then the translucent/transparent ones. And don't forget that the blended objects must be drawn from back to front. Polygon sorting wasn't something I was happy to get into.

I tried two variants of polygon sorting, both having some positives and negatives, and both relatively easy. They are not perfect (for example, they don't split overlapping polygons), but they appear satisfactory for my needs, so I'll go with them. Just in case, I also turn off writing to the depth buffer for blended objects; sometimes this reduces the damage if something goes wrong.

glDepthMask( GL_FALSE );

STL map

This is the easiest implementation of polygon sorting I can think of, but the easiness costs some performance.

// Pseudo-code
void draw()
{
    if ( camera_moved )
    {
        map.clear();
        for each ( polygon )
        {
            distance = polygonLocation - cameraLocation;
            // The STL map stores its elements sorted by the key in ascending order
            map.put( distance, polygon );
        }
    }
    for ( iterator = map.rbegin(); iterator != map.rend(); ++iterator )
    {
        (*iterator)->drawPolygon();
    }
}

The biggest drawback here is the need to recalculate the distance of every polygon each time the camera moves. On the other hand, the map can easily be extended with new polygons, and moving polygons are easy to update.

BSP tree

BSP (Binary Space Partitioning) is a performance improvement for polygon sorting. I used it to reduce the distance calculations to simple comparisons ( < > ). The drawback of this method is the time needed to build the tree, so the BSP tree is best used for static objects. For dynamically moving objects, reconstructing the tree would take too much time.

Tree construction:
- Always compare polygons just by coordinates of only one axis (by X on 1st level of tree, by Y on 2nd, by Z on 3rd, again by X on 4th level of tree, etc.)
- Find median by which to split set of polygons to two sub-sets
- Median makes a node, sub-set of polygons with coordinate < median goes to left sub-tree, sub-set of polygons with coordinate >= median goes to right sub-tree
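The construction steps above can be sketched like this (a k-d-style median split; positions stand in for whole polygons, and Build/Node are made-up names, not the engine's):

```cpp
#include <algorithm>
#include <vector>

struct Vec3 { float v[3]; };    // v[0] = x, v[1] = y, v[2] = z

struct Node {
    Vec3 point;                 // the median element's position (leaf payload elided)
    Node* left;
    Node* right;
};

// Recursive construction following the steps above: cycle the split axis
// with tree depth, split at the median, coordinates < median go left,
// coordinates >= median go right.
Node* Build(std::vector<Vec3> points, int depth) {
    if (points.empty()) return 0;
    int axis = depth % 3;                 // X on level 0, Y on 1, Z on 2, X again...
    size_t mid = points.size() / 2;
    // Partial sort so the median element lands at index mid,
    // comparing by the coordinate of one axis only.
    std::nth_element(points.begin(), points.begin() + mid, points.end(),
                     [axis](const Vec3& a, const Vec3& b) { return a.v[axis] < b.v[axis]; });
    Node* node = new Node;
    node->point = points[mid];
    node->left  = Build(std::vector<Vec3>(points.begin(), points.begin() + mid), depth + 1);
    node->right = Build(std::vector<Vec3>(points.begin() + mid + 1, points.end()), depth + 1);
    return node;
}
```

With the tree built this way, the draw traversal shown below only compares the camera coordinate against the node's median on one axis per level.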

Drawing the tree:
void draw()
{
    root_node.draw();
}

void node::draw()
{
    if ( is_leaf )
    {
        draw_polygon();
    }
    // AXIS represents either X, Y, or Z depending on the level the node is in
    else if ( camera.AXIS < median.AXIS )
    {
        draw_right_subtree();
        draw_left_subtree();
    }
    else
    {
        draw_left_subtree();
        draw_right_subtree();
    }
}

The picture shows a BSP tree construction (in 2D, using only the X & Y axes) and the order in which nodes are visited while drawing.
On the left is a scene with objects and the camera placement; on the right is the tree created from the scene.
The numbers in the tree represent the order of traversal when drawing for the current camera location.
When the camera moves, the tree does not change; only the order in which the left and right subtrees are traversed during drawing may change.

## DevLog #8 - Bright & dark spots image detection

The simple B&W image reading for generating the galaxy arms presented in DevLog #7 turned out to be a bit too simple for my needs. I needed to reduce the thickness of the dust stream within the arms. I thought about another density map exclusively for the dust, but I needed a larger resolution for it. And while I was thinking about what size would be good enough to depict the thin dust streams, I reached a point where a few pixels more or less of image size wouldn't make a large difference anymore. So why not use the colored source image I already have, instead of relying on density maps?

And so began the detection of bright and dark spots on an image loaded from a JPEG.
On this enlarged fragment of the source image you can see the blur and noise I am up against.

Bright spots

The first step in detecting a bright spot is to find local brightness maxima (pixels that are brighter than their surrounding pixels).
Then it must be determined whether it is a blurred dot (interesting), a spill (not interesting) or noise (not interesting).
I didn't build a very elaborate detection mechanism for this. I just took the close area surrounding the local maximum, calculated the average brightness of pixels at specific distances, and checked that the brightness decreases with increasing distance from the maximum. If it does, the local maximum is the center of a blurred point.

byte pixelBrightness = pixel[ R ] * 0.2126f + pixel[ G ] * 0.7152f + pixel[ B ] * 0.0722f;
...
if ( pixelBrightness > maxSurroundingBrightness                    // Check there is no pixel around brighter than this one
  && avgCloseSurroundingBrightness > avgFarSurroundingBrightness   // Check the brightness is decreasing with greater distance
  && 0.9f * pixelBrightness > avgFarSurroundingBrightness )        // Check the brightness difference is large enough (i.e. filters out noise)
{
    /// code for adding star
}

Dark spots

Now this was a challenge. There are no dots here I could easily detect (and where there are, they aren't interesting for my purpose). Everything - noise, areas between arms, areas outside the Galaxy, areas of dust - is a dark spill. The only clue for detecting the dust was the color tone. The dust on the image is reddish or brownish, while the other areas are bluish. So I read the color channels and compared them against each other. A dominant red channel turned out not to immediately mean red dust; it could be the orange core, or some bright dot leaning toward orange. To filter this out I added weights for each color, and later also an upper limit on the overall pixel brightness, ensuring I take only areas that are clearly reddish but not too bright to be dust. In the end I discovered that some dust areas are more blue than red and had to tweak the weights even more.

if ( 1.03f * pixel[ R ] >= pixel[ B ]                  // Allowing little blue tone
  && 1.25f * pixel[ R ] >= pixel[ G ]                  // Allowing some orange tone
  && pixel[ R ] > MIN_PIXEL_BRIGHTNESS_FOR_DUST        // Avoid black color (and its noise)
  && pixelBrightness < MAX_PIXEL_BRIGHTNESS_FOR_DUST ) // Avoid white color
{
    /// code for adding dust
}

Finally the image was parsed with a relatively satisfying result:

On the left is a small fraction of the Galaxy where I was testing the detection. You can see the white pixels hitting the blurred white spots.
On the right is a test of full galaxy - white pixels representing bright spots, red pixels tracking the dust in arms.

(Because of drawing performance, only every 10th red dot is displayed)

## DevLog #7 - Galaxy arms (image based generating)

Hello again,
it's not very common for me to write entries this often, so I'd better hurry and make this one before I change my mind.
So, what happened in StarDust during the last week? For a long time it looked like I would have nothing, as the spiral arms of the galaxy completely broke the images I presented in the previous log entry. It simply stopped looking good, and I spent an incredible amount of time playing with blending parameters. Either the dust was too faint where the layer was thin, or, with an increased alpha channel, too bright in the thick areas. Finally, today it all came together, and I feel a huge relief at finishing the work (or at least a part of it).

So, lets start...
(This time I'll give you the implementation details right away )

Spiral arms

As I already mentioned in the previous entry, I wanted to create the arms from a galaxy image instead of from a mathematical function alone. At first I was afraid it would be harder than generating from a function, but it turned out not to be such a problem.
I already use one colored galaxy image in the scene for views from a distance. When the camera approaches the disc, the image is blended out; it appears only when leaving the Galaxy disc and looking back at it. This image is 5600x5600 pixels, however, and although it has fine details of dust patches and clusters, going through it and generating dust particles or stars at that resolution would kill my PC at this moment (not to speak of the memory it would take). So for the detection of the spiral arms I use a smaller 214x214 b&w image with increased contrast (I don't know how I came to these numbers ).

The image is parsed pixel by pixel, reading each pixel's intensity. If the intensity is over a chosen threshold, multiple dust clouds are randomly placed in the area covered by this pixel, and their alpha is calculated from the pixel's intensity. The areas between the spiral arms contain dark pixels, mostly under the threshold, so no dust is generated from them. Pixels closer to the arms are brighter, reaching or exceeding the threshold; these already generate dust clouds, but with a low alpha. Pixels in the arms are the brightest, generating dust clouds with the greatest alpha and thus making them look denser.
The only exception is the area of the Galaxy core, where there should be no dust. So although this is the brightest area of the whole image, it is ignored.
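A sketch of that pixel-by-pixel generation, assuming an 8-bit grayscale density map. The threshold, clouds-per-pixel count and core radius below are invented numbers for illustration, not the ones used in StarDust:

```cpp
#include <cstdlib>
#include <vector>

struct DustCloud { float x, z, alpha; };

// Walks a size x size grayscale map (0-255 values). Bright pixels outside
// the core radius spawn several dust clouds whose alpha follows the
// pixel intensity; coordinates here stay in pixel space for simplicity.
std::vector<DustCloud> GenerateDust(const unsigned char* pixels, int size,
                                    int threshold, float coreRadiusPx) {
    std::vector<DustCloud> clouds;
    for (int py = 0; py < size; ++py)
        for (int px = 0; px < size; ++px) {
            unsigned char intensity = pixels[py * size + px];
            if (intensity < threshold) continue;           // between-arm darkness: no dust
            float cx = px - size / 2.0f, cz = py - size / 2.0f;
            if (cx * cx + cz * cz < coreRadiusPx * coreRadiusPx) continue; // skip the core
            for (int i = 0; i < 3; ++i) {                  // several clouds per bright pixel
                DustCloud c;
                c.x = px + std::rand() / (float)RAND_MAX;  // random spot inside the pixel area
                c.z = py + std::rand() / (float)RAND_MAX;
                c.alpha = intensity / 255.0f;              // brighter pixel -> denser-looking dust
                clouds.push_back(c);
            }
        }
    return clouds;
}
```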

The blending for dust clouds is set to (GL_SRC_ALPHA , GL_ONE_MINUS_SRC_ALPHA).

Galaxy prototype preview footage:
The scene contains 70,000 GL_POINTS and 20,000 blended GL_QUADS without optimizations.
The framerate is 2.5 - 3 fps (on a single core AMD Duron 1.8GHz, 2GB RAM, 128 VGA RAM).
The HyperCam reduced the framerate by half.
When disabling the GL_POINTS and keeping only the GL_QUADS, the framerate is around 18 fps.

The video was sped up to make it smoother.


Galaxy glow

The galaxy glow is the same dust as in the previous log entry, using blending parameters (GL_SRC_ALPHA, GL_ONE). This blending makes thicker layers (for example the core) appear to "shine". The glow is also subject to the same density map as the spiral arms (only with a lower threshold), but it is generated differently. The dust for the arms was generated in a pixel-by-pixel pattern.
The glow is generated by taking a random radius from the galaxy center and a random angle in the horizontal (x-z) plane. From the resulting X-Z coordinates it is calculated which pixel of the density map was hit, and that pixel's intensity is used for the alpha channel.
This approach results in an increased density of glow particles around the center of the galaxy (the same as in the previous log entry), even without the density map. The density map is just a minor addition for the glow.
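The radius-angle sampling could look like this (function names and the map-lookup scaling are assumptions). Note why the density rises toward the center: a uniform random radius gives every radius ring the same number of samples regardless of its circumference, so inner rings end up more crowded:

```cpp
#include <cmath>
#include <cstdlib>

// Random point in the disc via uniform radius + uniform angle.
// Uniform radius concentrates samples near the center by itself.
void SampleGlowPoint(float maxRadius, float& x, float& z) {
    float radius = maxRadius * (std::rand() / (float)RAND_MAX);
    float angle  = 2.0f * 3.14159265f * (std::rand() / (float)RAND_MAX);
    x = radius * std::cos(angle);
    z = radius * std::sin(angle);
}

// Convert one X or Z world coordinate into a density-map pixel index,
// to look up the alpha for this glow particle.
int ToPixel(float coord, float maxRadius, int mapSize) {
    float t = coord / maxRadius * 0.5f + 0.5f;         // [-max, max] -> [0, 1]
    int p = (int)(t * (mapSize - 1));
    return p < 0 ? 0 : (p >= mapSize ? mapSize - 1 : p); // clamp to the map
}
```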

Galaxy core

The Milky Way as I am modelling it is a barred spiral galaxy. The core is not a spherical bulge but a long bar. Luckily, the areas on the sides of the bar are empty on the pictures from which I am generating, so I could treat everything within a radius of 10,000 ly from the center of the galaxy as the core, as if it were a spherical bulge. I am not sure how much it will affect the Near and Far 3 kpc arms later, but they look fine for now.
When reading the density map, the core area is ignored for generating dust. The glow, on the other hand, is generated more densely in the core, and with an increased Y-coordinate range. Everywhere else, the dust, glow and star clusters, after their X-Z coordinates are calculated (either from the map or from radius and angle), are given a random Y coordinate within the limits of the galaxy disc thickness. In the core, the Y limit is linearly increased from the core edge to the core center. I also tried cosine interpolation instead of linear, but that looked strange.

Halo

The halo around the galaxy is made of points. The density of the halo should be greatest around the disc, decreasing with growing distance. So I used a similar radius-angle approach as for the galaxy glow, except that I square the radius (intensifying the falloff of density with distance) and use two angles here - a horizontal angle (random 0° to 360°) and a vertical angle (random -90° to 90°). This worked only partially: the density was greatest around the galaxy center, but not around the whole disc. To fix it, I multiplied the final Y coordinate (representing height) by a random value from 0.0 to 1.0.
This pushed the points closer to the galactic plane, making a fine looking halo.
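A sketch of that halo sampling (the function name is an assumption, and squaring the uniform [0,1] radius fraction is one way to read "power the radius by 2"): 

```cpp
#include <cmath>
#include <cstdlib>

// Spherical sampling with a squared radius fraction (concentrating points
// near the center) and a final random squash of Y toward the galactic plane.
void SampleHaloPoint(float maxRadius, float& x, float& y, float& z) {
    float u = std::rand() / (float)RAND_MAX;
    float radius = maxRadius * u * u;                                        // radius "powered by 2"
    float hAngle = 2.0f * 3.14159265f * (std::rand() / (float)RAND_MAX);     // 0 to 360 degrees
    float vAngle = 3.14159265f * (std::rand() / (float)RAND_MAX) - 1.5707963f; // -90 to 90 degrees
    x = radius * std::cos(vAngle) * std::cos(hAngle);
    y = radius * std::sin(vAngle);
    z = radius * std::cos(vAngle) * std::sin(hAngle);
    y *= std::rand() / (float)RAND_MAX;   // push points toward the galactic plane
}
```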

Smoothing the galaxy edges

The galaxy disc had one more flaw - it looked like a thin slice of a column, i.e. it had sharp edges, because the height within the disc was always generated within the limits of the full disc thickness. At first I thought I would simply multiply the Y coordinate by the cosine of the radius from the galaxy center, which would smooth the disc edges. But that wouldn't fix the same problem around the galaxy arms, where it appeared as well. So once more the density map came into play: the pixel intensity now influences not only the alpha channel of the generated element, but also the Y limits. Darker pixels reduce the limits, making the disc thinner, while brighter pixels increase them, making the disc thicker. The smoothing of the edges thus comes from the smoothness of the image and the gradual fading of the galaxy disc (or arms) into darkness.

## DevLog #6 - The Milky-Way

It has been a long time since the last update. Personal life stepped in and development was frozen for a while, as I had no taste for anything. Fortunately I am slowly gathering my powers again and the StarDust project is waking up from its hibernation.

I am bringing you some screenshots of the Milky Way as it appears right now. The star generator presented in previous dev logs was temporarily disconnected, so there aren't many stars inside the disc.
There are still some TODOs for the Galaxy:
- the bulge needs to be made thicker
- dust and stars should respect the arms
- dust particle performance needs improvement
- a dark (light-blocking) dust should be added with the arms

The Galaxy arms are not going to be fully procedurally generated into a completely random spiral galaxy; instead, a Galaxy image will be used to detect the arms and generate them. It is the harder way, but it is more likely to result in something recognizable as the Milky Way, not just a generic galaxy.

## DevLog #5 - Stars make-up

I got a bit bored with all the tiny dots covering approximately the same area on screen with no real comparable differences in size, so I played a bit with the flare appearance of distant stars. The result is visible differences in size: some stars now really look brighter than others.

Each star has a flare (or glow) around its surface that increases as the star gets further from the camera. A star that is quite close can have a glow radius of about 7x its body radius. This makes a fairly large glow around the star body, but does not cover the whole screen (remember the video with the orbiting planet from the previous dev log?). Stars that are many light years away, however, can have this glow increased hundreds or even thousands of times. This increase is crucial to keep a distant star visible. If the glow remained the same size, then even just a light year away it wouldn't be large enough to cover a fraction of a single pixel on the screen.

At the far end, the glow has a fixed size at the star's maximum visibility radius. This size is 4 pixels (in diameter) on screen, ensuring the star is at least a little visible before it completely disappears; it prevents flickering of tiny distant dots over the black background. From the FOV, the current screen settings (width and height) and the maximum visibility distance, it is calculated how large the flare/glow must be to occupy these 4 pixels on screen. The actual size of the flare/glow is then calculated as an interpolation between the 7x body radius and this calculated 4-pixel radius.
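The 4-pixel size can be derived from the vertical FOV and the screen height: at distance d the camera sees a world-space height of 2 * d * tan(fov/2), so one pixel covers that height divided by the screen height in pixels. A sketch (the function name is an assumption):

```cpp
#include <cmath>

// How large (in world units) a glow must be to cover a given number of
// pixels at a given distance, from the vertical FOV and screen height.
float WorldSizeForPixels(float pixels, float distance, float fovDegrees, int screenHeight) {
    float fovRad = fovDegrees * 3.14159265f / 180.0f;
    float worldHeightAtDistance = 2.0f * distance * std::tan(fovRad / 2.0f);
    return pixels * worldHeightAtDistance / screenHeight;
}
```

For example, with a 90-degree FOV and a 1000-pixel-high screen, 4 pixels at distance 500 correspond to 4 world units.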

This technique alone ensures you can see distant stars as tiny dots that won't disappear because they are too small for a pixel on screen. However, all the dots would then appear to be the same tiny size. At a fairly short distance a star is already so small that it is held on screen mostly by that 4-pixel fixation, and the size differences between stars would be +/- 1 or 2 pixels. To make the sizes of distant stars really comparable and the sky nicer looking (at least in my opinion), the interpolation of the glow size is done only in the closer half of the maximum visibility radius, i.e. the glow grows from distance 0 to 1/2 of the visibility range. The glow size at 1/2 of the visibility range is the size calculated as would-be-4-pixels-if-at-the-end. This would-be-4-pixels glow is then fixed and remains the same size from 1/2 of the visibility range to its end. So at the edge of visibility the star has the required 4 pixels on screen, it grows as you get closer because the glow is not shrinking, and once you get close enough, the glow finally begins to shrink toward the star's body radius.
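
The two-phase behavior described above - interpolate over the first half of the visibility range, then hold the glow fixed - can be sketched like this (all names are illustrative, not the actual engine code):

```cpp
// nearGlow: glow radius when close to the star (about 7x the body radius)
// farGlow:  the precomputed would-be-4-pixels radius
// The glow grows from nearGlow at distance 0 to farGlow at half the
// visibility range, then stays at farGlow out to the visibility limit.
double glowRadius(double distance, double nearGlow, double farGlow,
                  double maxVisibility) {
    double half = maxVisibility * 0.5;
    if (distance >= half)
        return farGlow;                 // fixed size in the outer half
    double t = distance / half;         // 0 at the star, 1 at half range
    return nearGlow + t * (farGlow - nearGlow);
}
```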

The last improvement so far is a small flickering of the stars, which makes the screen look more lively and attractive. It is done by applying a random modifier (from 0.9 to 1.1) to the star's glow size. Maybe it won't stay in the game in the end, since flickering stars are not realistic, but for now it adds a fine touch to the sky.
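
The flicker can be as simple as scaling the glow by a random factor each frame - a minimal sketch (the actual random source in StarDust may differ):

```cpp
#include <cstdlib>

// Random flicker factor in [0.9, 1.1], applied to the glow size each frame.
double flickerScale() {
    return 0.9 + 0.2 * (std::rand() / (double)RAND_MAX);
}
```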

I've finally tweaked the star generator parameters a bit to get a screen filled with an amount of stars closer to the final one. There are now about 5000-7000 stars present in 360° around the observer, an amount that is agreed to be visible by the naked eye from Earth (under optimal conditions). The frame rate dropped rapidly, as I expected. So far I didn't have an FPS counter displayed, so I didn't know the exact number until very recently. The frame rate fell to 0.9 FPS. Although I am working on an older piece of HW, it's a very bad result. I tracked down the cause and found out that even without the star draw implementation, the manipulation code issuing the star draw calls was by itself already reducing the frame rate to 3.2 FPS. After removing pieces of code here and there, I found the major limiter: iterating through an STL vector with an iterator. After replacing it with the good old "for" loop, the FPS went up 10 times.

```cpp
// 7000 elements in array/vector

// Iterating through vector of stars (FPS 3.2)
for (vector<C_Star*>::iterator i = stars.begin(); i != stars.end(); i++) {
    (*i)->draw(microseconds, cameraPos);
}

// Old style iteration with direct access to vector (FPS 23)
int count = stars.size();
for (int i = 0; i < count; i++) {
    stars[i]->draw(microseconds, cameraPos);
}

// Old style iteration and array (FPS 32)
C_Star* stars[7000];
for (int i = 0; i < count; i++) {
    stars[i]->draw(microseconds, cameraPos);
}
```

Well, that closes up this week's update. Hope to see you next time again.

Petr Marek

## DevLog #4 - Stars and systems

This log is coming later than I expected, so without any further delay let's have a look at what happened since DevLog #3.

A lot of time has been dedicated to analyzing information about stars: their types, spectral classes, magnitudes (luminosity), sizes, and counts. However, it looks like the source of star counts (ratios of star types in the Galaxy) is not very reliable. Although it assigns a small ratio (or percentage) to the largest and most luminous stars, when the numbers are applied to the Galaxy's parameters, the result is still a huge amount of bright stars within visibility range. Normally about 5000 stars should be visible in a 360° view, but I am getting more than half a million candidates for visibility. That's really a lot. It looks like I'll need to tweak the percentages a bit (or rather a lot), or ask some experts for more accurate estimations than those one can find through Google and Wiki.

For now the number of stars has been reduced significantly to allow work on the project to continue. After a week spent just reading materials and reaching doubtful results, I really needed to take a rest and code a bit. The star generator has been modified from generating absolutely random stars to generating stars of a specific type and spectrum.

So far the recognized star types are:
- Hypergiants (class 0),
- Supergiants (class I),
- Bright Giants (class II),
- Giants (class III),
- Subgiants (class IV),
- Dwarfs (class V),
- Subdwarfs (class VI),
- White dwarfs (class VII).

Spectral classes range from
- red M (<3700 K) and K (3700-5200 K),
- yellow G (5200-6000 K) and F (6000-7500 K),
- white A (7500-10,000 K),
- to blue B (10,000-33,000 K) and O (33,000-52,000 K).

A magnitude (luminosity) based on real data is assigned to each star by its type and class. This determines from how far the star can be seen. Dwarfs are the faintest, giants are middle-ranged, and hypergiants are the most visible stars. Also, a red (cooler) star is less visible than a blue (hotter) star. Since the previous preview the stars have changed a lot in size: from oversized blobs they were scaled down to their real size, and fading of distant stars has been implemented to prevent their sudden pop-up.

The basics for adding planets to these generated stars have been laid as well. Generating planetary systems seems to be the next step, but for now there is a planet at every star, to avoid searching among thousands of stars for the one lucky enough to be granted a planet by the generator.

The last delay of this entry was caused by the need for textures, to show you some better looking images. Instead of propagating a bunch of textures from the upper layers down to the generated stars and planets, a texture manager has been made. Only a reference to this manager is passed to an object, which then uses it to ask for any texture it may need.
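
The idea can be sketched roughly like this (class and method names are my own, not the StarDust API): objects hold a reference to the manager and request textures by name, and the manager loads each texture only once.

```cpp
#include <map>
#include <string>

// Minimal texture-manager sketch: loads each texture once, caches the
// handle, and hands out the cached handle on repeated requests.
class TextureManager {
    std::map<std::string, unsigned> cache;  // name -> texture handle

    unsigned load(const std::string& name) {
        // stand-in for the real disk load / GPU upload; returns a fresh handle
        return (unsigned)cache.size() + 1;
    }

public:
    unsigned get(const std::string& name) {
        std::map<std::string, unsigned>::iterator it = cache.find(name);
        if (it != cache.end())
            return it->second;              // already loaded, reuse it
        unsigned tex = load(name);
        cache[name] = tex;
        return tex;
    }
};
```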

Here is a video of a planet orbiting a small dwarf star. The orbit is very fast now because of testing. In the final game it will be much slower, nearly static. Time is expected to flow about 12x faster than in reality, so Earth in StarDust would rotate around its axis in 2 hours.

[media]

[/media]

Finally, I promised somebody I'd explain a bit more how StarDust generates the stars using the new sectors.
It's not within the power of a normal home PC to store information about all the stars in the Galaxy. Only stars in an area of about 5000 ly (light years) around the observer are really interesting at the moment, because these stars can be visible. This area is covered by the largest StarDust sector (internally called the G2-sector), a cube with a side of 5103 light years. But even an area of 5000 ly contains too many stars, and the vast majority of them are too faint to be visible from a large distance. So the generator generates only the largest and brightest stars (Hypergiants and Supergiants) into the G2-sector. Inside this sector a smaller G1-sector is generated, with a size of 1701 ly. This sector contains smaller and less bright stars that are too faint to be put in the largest sector. However, even the G1-sector contains too many stars, so a similar process is applied to it: G1 holds bright giants (class II) and the most luminous giants (class III), and smaller stars are generated into a smaller M-Sector (567 ly). This breakdown iterates further through the S-Sector (189 ly) and T-Sector (63 ly) down to the Base-Sector (21 ly). So each sector type is responsible for generating stars of only a specific brightness and does not need to be regenerated to bring up smaller stars when the player changes position - generating smaller stars is the task of an independent smaller sector.
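
The sector sizes above follow a simple rule - each level's cube is three times the size of the one below it - so the whole ladder can be derived from the 21 ly Base-Sector (a sketch with illustrative names):

```cpp
// Each sector level is 3x the size of the level below; level 0 is the
// 21 ly Base-Sector, level 5 the 5103 ly G2-sector.
int sectorSizeLy(int level) {
    int size = 21;                      // Base-Sector edge in light years
    for (int i = 0; i < level; i++)
        size *= 3;
    return size;
}
```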

Each sector in the galaxy is given a unique ID that is used to initialize the random generator. Because the generator is always initialized with the same ID for the same sector, it always generates the same results for it.
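
A minimal sketch of that idea, using std::srand/std::rand as a stand-in for whatever generator StarDust actually uses: seeding with the sector ID makes the generated values reproducible.

```cpp
#include <cstdlib>
#include <vector>

// Seed the random generator with the sector ID, then generate the star
// data. The same ID always yields the same sequence, so the same stars
// reappear whenever the sector is regenerated.
std::vector<int> generateStarRolls(unsigned sectorId, int count) {
    std::srand(sectorId);
    std::vector<int> rolls;
    for (int i = 0; i < count; i++)
        rolls.push_back(std::rand());   // stand-in for position/type rolls
    return rolls;
}
```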

Well, that should basically cover how the star generator in StarDust currently works.

So that's all for now. I hope you liked it. See you with next dev-log.
Petr Marek

## DevLog #3 - Weekend in sign of troubles

The weekend is over. The code has gone to Bitbucket repositories, the initial encounter with TortoiseHg was relatively successful, and the optimization of sectors for generating stars brought a significant performance increase. But not everything was so bright. Visual Studio on the second PC began randomly throwing error MSB4014 (can't find MSBuild.exe). Waiting several minutes for a timeout to throw this error, only to be able to start building again and hope it'll pass this time, was frustrating to say the least. On top of this, an STL vector went on strike and was throwing an "incompatible iterator" exception at runtime. The mystery is that there were 6 vectors, and only one was causing trouble, although they were all processed in the same loop.

Below is a code example that was causing problems. The loop processed sectorsM and sectorsS fine, but when it came to sectors, it crashed on the != operator in the inner loop:

```cpp
// (the element type is assumed here; it was lost in the original post's formatting)
vector<C_Sector*> sectors;
vector<C_Sector*> sectorsS;
vector<C_Sector*> sectorsM;
vector<C_Sector*>* actSectorSet;

for (int i = 0; i < 3; i++) {
    switch (i) {
        case 0: actSectorSet = &sectorsM; break;
        case 1: actSectorSet = &sectorsS; break;
        case 2: actSectorSet = &sectors;  break;
    }
    for (vector<C_Sector*>::iterator iter = actSectorSet->begin();
         iter != actSectorSet->end(); iter++) {
        ...
    }
}
```

In the end I added one more vector to be used instead of the malfunctioning one and it magically worked. Really a strange bug.

While fighting with both MSBuild.exe and STL vectors, not much has really been done. But at least the sectors were optimized. If you watched the video in the previous log, you could notice that all sectors had the same size, creating a regular grid. This naturally resulted in browsing through many (several hundred) tiny sectors, reducing the speed significantly. So sectors of 6 different scales have been brought in, reducing the total number of sectors to be drawn at one moment to 117 in full 3D generation.

I was thinking about bringing another video or a picture, but instead of posting another technical render with grey lines around sectors and placeholder blobs for stars, I'll wait until the next entry, when I want to come up with planets for the already generated stars. That will finally be something more interesting to look at.

Have a nice week and see you with the next dev log.

Petr Marek

## Distances and star visibility

Several days have passed and StarDust has moved thousands of light years away from the Solar System for now. The elliptical orbits of planets were done, but with some major changes in the system it looks like the planets will maybe have to go around a star using physical simulation instead. Before the planets and other fun stuff can be developed in more depth, it was necessary to solve a technical issue with the varying scale of distances. Everybody who has tried to model at least the Solar System in 3D with good precision knows that the distances soon become so large that precision is simply lost. The work on StarDust during the last days was about securing a precision of centimeters over a distance of 100,000 light years. With that, StarDust is able to address any centimeter within the whole Galaxy. Whether it really works will show within a few weeks.
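
To see why this is hard with a single floating-point coordinate: at 100,000 ly from the origin, the spacing between adjacent double values is already far coarser than a centimeter. A quick check (the constants are standard; the code is just an illustration, not the engine's solution):

```cpp
#include <cmath>

// Distance resolution of a double at galactic scale.
// 1 light year is about 9.4607e17 cm, so 100,000 ly is about 9.46e22 cm.
double galaxyCm = 100000.0 * 9.4607e17;
// Spacing between galaxyCm and the next representable double:
double ulpCm = std::nextafter(galaxyCm, 1e30) - galaxyCm;
// ulpCm comes out in the range of kilometers, not centimeters, which is
// why a single absolute coordinate cannot address centimeters here.
```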

The Milky Way galaxy is estimated to have about 100-400 billion stars. To be able to come at least close to the real model, a star generating system for StarDust had to be designed. The primary responsibility of the generator is to repeatedly create the same star on the same spot of the Galaxy. Every time you move away from some constellation of stars and then come back, the generator must ensure that you'll find every one of the billions of stars in the place you saw it before. The secondary duty of the generator is to reduce the number of stars your system has to process, even within the visibility range. The need for this comes from the varying visibility of stars: there are dwarf stars that are visible from only several light years away, and giants that you can see from hundreds of light years.

If the visibility were set to several hundred light years, the system would get overwhelmed with thousands of dwarf stars that would be within range but useless, because they don't cover even half a pixel. On the other side, if the visibility were set to just several light years, the system would breathe freely, but we wouldn't see the brightest stars of the sky, which may be further away than the dwarfs. The generator is thus responsible for generating the stars at the right moment, so large stars appear at a greater distance and small stars appear only when the observer comes really close to them.

Here is a sped-up technical animation showing the star generator in action. The animation shows just one of the final 50 horizontal layers. The maximum visibility in the test was 210 light years. Stars have been enlarged so they can be seen better.

[media]

[/media]

And what is next? The next step is adding the remaining star layers and performing some adjustments to the generated star radius and color. Probably some optimizations will be needed, as the amount of work for the machine will multiply with stars expanding up and down. After that, a correction of star density based on the real Milky Way shape is planned, plus an override of the star generator for the area surrounding Sol, so real star data can be used for Earth's neighbourhood instead of the generated ones.

Well, for now that's all. See you next time.

Petr Marek

## Different space simulator

Welcome to the dev log of StarDust. This project lay in my head for quite some time, but I never had time I could dedicate to it. Until relatively recently. Just several weeks ago a concept for the game was finally laid out. The design is still being written, but enough has already been prepared to start coding the basics. In the end, I expect some things will change on the fly.

So what is the game supposed to be about?

StarDust will be a space simulator with some RPG elements. Big importance is given to "simulation" and "realism". But don't be afraid, travel to Mars is not going to take 9 months. The realism lies in the ship handling, the physics, and the ship design. The game should evoke in the player the feeling that he is really in space, in a ship that could really exist in the future.

Despite the realism, StarDust is not going to be a pacifist game. Battles are on the to-do list as well, and with the physics of space they will be something not experienced before. Space combat requires new maneuvers and new tactics compared to ordinary flight simulators. Even for players who have played space simulators before, the ships in StarDust will be something new.

As in all RPGs, the player will be able to buy new equipment for himself or his ship, or buy a brand new ship that is better and larger. Starting from a small fighter, the player can end up with a battlecruiser and several fighter squadrons under his command, all depending on his progress through the missions. His actions in one mission will affect what his next mission will be, where the main story will go, and how it'll end. Every game will thus be different.

Currently I am working on the basics of the engine. Most probably the game will come running with all its game features before the graphical engine with all the additional effects is finished.

My latest work was on the orbital and rotation parameters of the planets of the Solar system. Each planet has not only its real orbital distance, size, rotation and orbital period, but, simply said, also the real angle of its orbital plane and axis tilt. A bit more work is still needed on the elliptical orbit paths.

The picture below shows all the planets except Uranus. Saturn is without rings for now, but hopefully not for long. Pluto was on the scene too, but even with the modifiers reducing the distances of the planets, Pluto is too far and too small to be seen. Maybe I'll bring you a size comparison of Pluto and one of the giants in the next post. For now I must end.

Thanks for reading. I hope you enjoyed it and that I raised at least some interest in you about StarDust.

Petr Marek