About KaRei

  1. Before displaying a scene, the viewport and projection transformations are needed. The engine has to allow multiple viewports (for split screen or PIP). Each viewport may want to display the same, or a completely different, scene. When multiple viewports display the same scene, they may want to show it from different angles. That sums up the requirements for the Scene, Camera, and Viewport interaction.

Camera in scene

First, let's take a look at a single scene that should support different view angles. The situation calls for a Camera class, with one Camera object in the scene for every predefined view. You end up with something like a movie set, where several cameras stand around the stage and the cut only selects which view will be used in the film at which moment. In my implementation the Camera::Render() function performs the camera transformation and then calls Scene::Draw(), so a single call on a camera draws the whole scene from the desired angle and settings. As you may see on the class diagram below, the Scene provides methods to create a Camera. The scene knows all its cameras and each created camera knows which scene it belongs to.

Viewport in window

Now that we can draw a scene from a camera, we still have to perform the viewport transformation. As with cameras, we create a class for viewports. Every viewport on the screen corresponds to one Viewport object. Split screen or PIP (picture in picture) is then made by creating at least two Viewport objects and calling Render() on each.

Display camera in viewport

One last thing remains: tell the viewport which scene to display and from which camera. By simply passing a Camera as an argument of the Render call we tell the viewport: "In this viewport, display the view from that camera." Viewport::Render() performs the viewport transformation and calls Camera::Render().
Camera::Render() performs the camera transformations and calls Scene::Draw(). Here is an example of how the game code for a two-player split screen can look:

[spoiler]
// Create a 1280x1024 fullscreen window
Window* window = new Window();
window->Create( "Game title", 1280, 1024, 32, true ); // name, width, height, depth, fullscreen
window->Show();

// Create 2 viewports 1280x512 (horizontal split screen)
Viewport* viewportPlayer1 = new Viewport( 0, 0, window->Width(), window->Height()/2 ); // x, y, width, height
Viewport* viewportPlayer2 = new Viewport( 0, window->Height()/2, window->Width(), window->Height()/2 );
float aspect = (float)window->Width() / ( window->Height() / 2.0f );

// Create scene
Scene* gameScene = new Scene();
Camera* camPlayer1 = gameScene->CreatePerspectiveCamera( position1, 60.0, aspect, 0.1, 500.0 ); // position, FOV, aspect, near, far
Camera* camPlayer2 = gameScene->CreatePerspectiveCamera( position2, 60.0, aspect, 0.1, 500.0 );
// add other objects to Scene

// Draw
viewportPlayer1->Render( camPlayer1 );
viewportPlayer2->Render( camPlayer2 );
Renderer::SwapBuffers( window );
[/spoiler]
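One detail worth watching in the aspect calculation above: Width() and Height() presumably return integers, so dividing them directly truncates before the assignment to float. A minimal sketch of a safe helper (the function name is mine, not part of the engine's API):

```cpp
#include <cassert>

// Aspect ratio for a half-height split-screen viewport.
// Casting to float BEFORE dividing preserves the fractional part;
// plain int division of 1280 / (1024/2) happens to give 2.5 -> 2 truncated
// for many resolutions (e.g. 1280x720 would yield 3 instead of 3.55).
float AspectForHalfHeight(int width, int height)
{
    return static_cast<float>(width) / (static_cast<float>(height) / 2.0f);
}
```

With the article's 1280x1024 window the helper yields 2.5, matching what the perspective camera should receive.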
  2. The keyboard listener is complete, now with a fixed and improved mask for the special keys ( Ctrl, Alt, Shift ) to be a bit more flexible. The left Alt was a bit tricky to make work using window messages. I was slowly starting to think about abandoning the idea of callbacks as reactions to messages if it wouldn't be possible to catch messages for left Alt combinations. In the end, the Alt + key combination revealed that although it does not send a WM_KEYDOWN message as any other combination does, it sends a WM_SYSKEYDOWN message instead. With that, the keyboard handling system is completely prepared. Here you can see the structure.

Window

WndProc sends messages to the HandleEvent method of the Window wrapper. HandleEvent directly handles messages like window activation/deactivation, dumps the screensaver message, processes key press/release messages, etc. At first I was thinking about letting the game handle all the messages on its own, but in the end these messages are unlikely to be handled differently from game to game. Only some messages invoke a call of methods in the WindowEventHandler class.

WindowEventHandler

When Window::HandleEvent() receives a certain message, it calls the appropriate method of this class. The game can inherit from this class ( e.g. GameWindowEventHandler ) and override the methods if it wants to handle some messages by itself.

KeyboardListener & KeyHandler

The game can register a callback function for each key ( + combinations of special keys ). When a KEYDOWN message is received, the KeyPressed function is called. This checks whether some callback is registered for the key and calls it.

Ship & Character Controller

A static callback method is called from KeyHandler when a key is pressed. The callback takes an instance of the controlled object ( e.g. ship, character, etc. ) and calls its member method to perform the game operation. For example, if we fly a ship, the game registers the methods from ShipController in the keyboard listener, and when a key is pressed, the appropriate function from ShipController is called directly. If we get off the ship, the game can unregister the old functions and register functions from CharacterController instead. If the same key is then pressed, a function from CharacterController is called now. I think this can separate pretty nicely the different operations that should be performed on different screens by the same key. Operations for different screens or game modes will not interfere with each other.

Well, that's all for now. The mouse and possibly joystick handling will be made in a very similar way as the keyboard, but I will keep them for a later time. In the upcoming days I'd like to look at the Renderer and prepare some usable API for it.
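The controller-swapping idea above can be sketched in a few lines. This is a hypothetical simplification, not the engine's actual API: it uses std::function instead of the raw function pointers the post describes, and returns a string only so the effect is observable:

```cpp
#include <functional>
#include <map>
#include <string>

// Hypothetical key registry: the same key can be re-bound from a
// ShipController handler to a CharacterController handler at runtime.
struct KeyRegistry
{
    std::map<char, std::function<std::string()>> callbacks;

    void Register(char key, std::function<std::string()> fn) { callbacks[key] = fn; }
    void Unregister(char key)                                { callbacks.erase(key); }

    // Dispatch a key press; returns "" when nothing is bound.
    std::string Press(char key) const
    {
        auto it = callbacks.find(key);
        return it != callbacks.end() ? it->second() : std::string();
    }
};
```

Re-registering 'W' when the player leaves the ship replaces the thrust handler with a walk handler, exactly the swap described above.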
  3. KaRei

    Processing Alt + Enter

    The problem is you won't receive a WM_KEYDOWN message when you hold (left) Alt, but a WM_SYSKEYDOWN message instead. Alt + some_key is recognized as a system command combination and therefore sends a SYS- message. Ctrl + Alt + some_key isn't a system command combination and therefore sends a non-SYS- message. Note: only the left Alt works this way, i.e. sends WM_SYSKEYDOWN. With the right Alt you'll receive the ordinary WM_KEYDOWN message.
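The practical fix is simply to route both messages into the same key-down path. A minimal, platform-independent sketch (the message values are copied from WinUser.h and redefined here only so the snippet compiles anywhere; the function name is mine):

```cpp
// Values as defined in WinUser.h.
const unsigned WM_KEYDOWN_MSG    = 0x0100; // WM_KEYDOWN
const unsigned WM_SYSKEYDOWN_MSG = 0x0104; // WM_SYSKEYDOWN (sent for left-Alt combos)

// Treat WM_SYSKEYDOWN the same as WM_KEYDOWN so Alt+key combinations
// reach the keyboard listener instead of being lost.
bool IsKeyDownMessage(unsigned uMsg)
{
    return uMsg == WM_KEYDOWN_MSG || uMsg == WM_SYSKEYDOWN_MSG;
}
```

Inside the real WndProc the same check would be a `case WM_KEYDOWN: case WM_SYSKEYDOWN:` fall-through.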
  4. The last several days I've spent separating the engine from the game by creating an engine API. The code for window handling and keyboard listening is nearly done. It sounds strange to start speaking about creating an engine after many weeks of development. Bad for me, I hadn't drawn the line between the engine part and the game part at the beginning, so no wonder the code of both parts began to weave together. In fact, when I needed to draw something from the game part, I accessed the OpenGL API directly. It works, but it creates a mess that is hard to maintain. Although there was some shallow intermediate level for more complex operations (e.g. depth sorting), the usage of OpenGL was in most cases direct.

[subheading]Window handling[/subheading]
I had a tendency to separate the engine before, but it failed at the very first step - wrapping the window management. Creating a window wrapper is easy right up until you reach WndProc (the method processing window messages that has to be registered). To be registered, WndProc has to be a static method. I searched Google for help, but in my earlier attempts I found only confirmations that there is no way to use a member function for window message processing. This week I tried once more to find a solution and finally got lucky. The WndProc stays static, but the only thing it does is pass the message to a member function of the window instance. The only remaining question is how to pass the window instance to the static WndProc. Before we have the HWND available, we pass the pointer to our window class using the LPARAM of the window creation message.
[spoiler]
HWND WINAPI CreateWindowEx(
    __in      DWORD     dwExStyle,
    __in_opt  LPCTSTR   lpClassName,
    __in_opt  LPCTSTR   lpWindowName,
    __in      DWORD     dwStyle,
    __in      int       x,
    __in      int       y,
    __in      int       nWidth,
    __in      int       nHeight,
    __in_opt  HWND      hWndParent,
    __in_opt  HMENU     hMenu,
    __in_opt  HINSTANCE hInstance,
    __in_opt  LPVOID    lpParam  // This will come as the LPARAM of the WM_NCCREATE message, so put the pointer to the window wrapper here
);
[/spoiler]

Whatever we set as lpParam in the CreateWindowEx call will arrive as the LPARAM of the WM_NCCREATE message. In WndProc we have to catch that message and read the pointer to our window wrapper instance out of the LPARAM. We then need to save the pointer somewhere else. Because the message also carries the HWND (window handle), we can save the pointer into its USERDATA space. All later messages received by WndProc will read the instance of our window wrapper from the USERDATA in the HWND.

[spoiler]
LRESULT CALLBACK WndProc( HWND hWnd, UINT uMsg, WPARAM wParam, LPARAM lParam )
{
    // Variable for our window wrapper instance
    Window *window = NULL;

    // This message comes after CreateWindowEx() is called
    if ( uMsg == WM_NCCREATE )
    {
        // Read the pointer to the wrapper from LPARAM ...
        window = reinterpret_cast<Window*>( ((LPCREATESTRUCT)lParam)->lpCreateParams );
        // ... and save it into USERDATA in HWND
        SetWindowLong( hWnd, GWL_USERDATA, reinterpret_cast<LONG>( window ) );
        // Save the HWND into the wrapper instance as well
        window->SetHWnd( hWnd );
    }
    else
    {
        // Any other message: read the wrapper instance from USERDATA in HWND
        window = reinterpret_cast<Window*>( GetWindowLong( hWnd, GWL_USERDATA ) );
    }

    if ( window )
    {
        // Pass the message to our own member function in the window wrapper
        return window->HandleWindowEvent( uMsg, wParam, lParam );
    }
    else
    {
        return DefWindowProc( hWnd, uMsg, wParam, lParam );
    }
}
[/spoiler]

With WndProc out of the way, there is nothing preventing us from creating a window handling API that can be used easily:

Window* window = new Window();
window->Create( "My window", 1024, 768, 32, false );
window->Show();

// Switch from windowed to fullscreen
window->Destroy();
window->Create( "My window", 1024, 768, 32, true );
window->Show();

[spoiler]
class Window
{
public:
    Window();
    ~Window();
    bool Create( char* title, int width, int height, int bits, bool fullscreen );
    void Destroy();
    void Show() const;
    bool IsFullscreen() const;
    HDC  GetHDC() const;
    void SetHWnd( HWND hWnd );
    static LRESULT CALLBACK WndProc( HWND hWnd, UINT uMsg, WPARAM wParam, LPARAM lParam );
private:
    LRESULT HandleWindowEvent( UINT uMsg, WPARAM wParam, LPARAM lParam ) const;
};
[/spoiler]

[subheading]Keyboard listener[/subheading]
For listening to keyboard events I am using the window messages sent to WndProc. But the reaction to pressed keys is game-dependent, so the message processing function has to pass the information about keys to some game method. When the user "clicks" a key on the keyboard the game gets notified about it, but the game often also needs to know whether the key is "held".
There are two methods I know of to tell whether a key is still pressed:
1) use a boolean array: set the appropriate bool to TRUE on the WM_KEYDOWN message and to FALSE on the WM_KEYUP message
2) query the actual status of the key via GetAsyncKeyState()

I've picked the first approach - an array of 256 bool values keeping track of all keys. The KeyboardListener stores this array and also provides an API for the game to register callbacks for pressed keys or to check the actual status of the keys.

// Example of KeyboardListener usage
// Game functions that should react to different keys
void Funct_A() { ... }
void Funct_Shift_A() { ... }
void Funct_Ctrl_Shift_A() { ... }

void test()
{
    KeyboardListener keyboardListener;

    // Each function is registered for a specific key (and possibly for a combination of special keys like CTRL, ALT and SHIFT)
    keyboardListener.RegisterCallback( 'A', &Funct_A );
    keyboardListener.RegisterCallback( 'A', SPECIAL_KEY_SHIFT, &Funct_Shift_A );
    keyboardListener.RegisterCallback( 'A', SPECIAL_KEY_CTRL | SPECIAL_KEY_SHIFT, &Funct_Ctrl_Shift_A );

    // The registered functions are automatically called every time their key (or combination) is pressed.
    // If some code should execute for the duration of a key being held, it has to check the key status:
    if ( keyboardListener.IsPressed( 'A', SPECIAL_KEY_ALT | SPECIAL_KEY_CTRL ) )
    {
        ...
    }
}

[spoiler]
class KeyboardListener
{
public:
    KeyboardListener();
    ~KeyboardListener();
    void KeyPressed ( unsigned char key, unsigned short specialKeyMask = 0 );
    void KeyReleased( unsigned char key );
    bool IsPressed  ( unsigned char key, unsigned short specialKeyMask = 0 );
    void RegisterCallback  ( unsigned char key, unsigned short specialKeyMask, void(*callback)() );
    void UnregisterCallback( unsigned char key, unsigned short specialKeyMask );
private:
    KeyHandler* keys[ 256 ];
};

class KeyHandler
{
public:
    KeyHandler();
    ~KeyHandler();
    void Press( unsigned short specialKeyMask );
    void Release();
    bool IsPressed( unsigned short specialKeyMask );
    void RegisterCallback  ( void (*function)(void), unsigned short specialKeyMask = 0 );
    void UnregisterCallback( unsigned short specialKeyMask = 0 );
private:
    bool pressed;
    std::map< unsigned short, void(*)(void) > callbacks;
};
[/spoiler]

Every key can have a callback function. The game registers the callback function through the KeyboardListener API. When the listener is notified by WndProc about a pressed key, it checks whether the key has a callback function registered and calls it. The thing gets a little complicated when we want to react to SHIFT, ALT and CTRL combinations. To check a combination of these special keys, a 9-bit mask (unsigned short) is used:

1   - Ctrl (any)
2   - Alt (any)
4   - Shift (any)
8   - Left Ctrl
16  - Right Ctrl
32  - Left Alt
64  - Right Alt
128 - Left Shift
256 - Right Shift

If a game needs some function to react to Ctrl + A, but does not care whether it is the left or right Ctrl, it registers the function in the KeyboardListener for key 'A' and a special key mask "000000001":

keyboardListener.RegisterCallback( 'A', binary(000000001), &Function );

When the player hits, for example, left Ctrl and A, WndProc catches the 'A' key-pressed message and checks the state of the Ctrl, Alt and Shift keys. It builds the following mask:

000000001  // because VK_CONTROL is pressed
000001000  // because VK_LCONTROL is pressed
---------
000001001

As you see, although Ctrl was pressed, the two masks are not equal. The KeyboardListener needs to split this mask into two parts - a general part and an L-R (left-right) part. The general part takes the last 3 bits of the mask, the L-R part takes the first 6 bits. These two parts are then compared against what was registered by the game:

General mask: 000000001 - equal to the registered mask, so Function is called
L-R mask:     000001000 - no function is registered with this mask

This enables the game to distinguish between the left and right control keys, or not to bother about their side at all.
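The mask-splitting step above is pure bit masking. A small self-contained sketch (constant and function names are mine; the bit layout is the one described in the post):

```cpp
// Bit layout from the post: bits 0-2 are the "any Ctrl/Alt/Shift" flags,
// bits 3-8 distinguish the left/right variants of those keys.
const unsigned short GENERAL_BITS = 0x0007; // Ctrl(1) | Alt(2) | Shift(4)
const unsigned short SIDE_BITS    = 0x01F8; // LCtrl(8) ... RShift(256)

// General part: compared against side-agnostic registrations.
unsigned short GeneralPart(unsigned short mask) { return mask & GENERAL_BITS; }

// L-R part: compared against side-specific registrations.
unsigned short SidePart(unsigned short mask)    { return mask & SIDE_BITS; }
```

For the post's example (left Ctrl pressed, mask 000001001 = 9), the general part is 1 (matches the side-agnostic Ctrl registration) and the L-R part is 8 (left Ctrl, matched only if a side-specific callback was registered).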
  5. KaRei


    Space simulator and RPG in one. Currently in development.
  6. KaRei

    DevLog #10 - OpenGL fog

    OpenGL provides 3 fog types - GL_LINEAR, GL_EXP and GL_EXP2. Some time ago I found an image showing the characteristics of these modes, which helped me understand them a bit better. I can't find it again now, so I tried to recreate it. It isn't absolutely accurate, but it may help in understanding the differences between the fog modes.

There are several fog parameters that can be set. Besides the fog color there are GL_FOG_START, GL_FOG_END, and GL_FOG_DENSITY. GL_FOG_START and GL_FOG_END are meaningful only for the LINEAR fog. GL_FOG_DENSITY is meaningful only for the EXP and EXP2 fogs.

Because the LINEAR fog works with the FOG_START / FOG_END parameters, it is the easiest type to set. You simply set the distance where the fog should start and the distance where the fog should end (covering objects completely).

Setting the density for the EXP fogs is a bit more tricky. Mostly it's set & try, over and over, until you find a density that looks good. From the fog equations we can, however, derive an equation for the desired density. The EXP fog modes use these formulas:

EXP:  f = e^( -(density * z) )
EXP2: f = e^( -(density * z)^2 )

With a few modifications we get an equation for the density:

density_EXP  = -ln( fog_end_intensity ) / fog_end
density_EXP2 = sqrt( -ln( fog_end_intensity ) ) / fog_end

The fog_end_intensity is the remaining intensity of the scene color at the distance fog_end (e.g. 0.01 or a similar value close to 0). The easiest thing then is to create a macro that gives you back a density for a fog end distance:

// The fog intensity used is 0.01f
#define FOG_EXP_DENSITY_FOR(fogEnd)  4.605170186f/fogEnd
#define FOG_EXP2_DENSITY_FOR(fogEnd) 2.145966026f/fogEnd

Anywhere in the code you can now set the density more easily. If you would like, for example, the EXP2 fog to end 100 units from the camera, you set the density like this:

float fogEnd = 100.0f;
glFogf( GL_FOG_DENSITY, FOG_EXP2_DENSITY_FOR( fogEnd ) );

I hope this helped in understanding the exponential fog modes and also reduces the amount of time needed to find the proper density for your needs.
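The two macros above hard-code the residual intensity at 0.01. The same derivation can be restated as runtime functions for an arbitrary residual intensity (function names are mine, not from the post):

```cpp
#include <cmath>

// density_EXP = -ln(intensity) / fogEnd
// intensity is the fraction of scene color remaining at fogEnd (e.g. 0.01).
float ExpDensity(float fogEnd, float intensity = 0.01f)
{
    return -std::log(intensity) / fogEnd;
}

// density_EXP2 = sqrt(-ln(intensity)) / fogEnd
float Exp2Density(float fogEnd, float intensity = 0.01f)
{
    return std::sqrt(-std::log(intensity)) / fogEnd;
}
```

With intensity 0.01 and fogEnd 100 these reproduce the macro constants: -ln(0.01) = 4.605170186 and sqrt(4.605170186) = 2.145966026, each divided by 100.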
  7. Finally done. The blended objects are rendered in the proper order. Or at least in most cases they are. I think everywhere rendering of blended objects is discussed, it is said that you must draw the solid objects (faces) first and only then draw the translucent/transparent objects (faces). And don't forget that the blended objects must be drawn from back to front.

Polygon sorting wasn't something I was happy to go into. I tried two variants of polygon sorting, both having some positives and negatives, and both being relatively easy. They are not perfect (for example they don't perform splitting of overlapping polygons), but they appear satisfactory for my needs, so I'll go with them. But just in case, I also turn off writes to the depth buffer for blended objects. Sometimes this may reduce the damage if something goes wrong:

glDepthMask( GL_FALSE );

STL map

This is the easiest implementation of polygon sorting I think, but the easiness costs some performance.

// Pseudo-code
void draw()
{
    if ( camera_moved )
    {
        map.clear();
        for each ( polygon )
        {
            distance = polygonLocation - cameraLocation;
            // The STL map stores its elements sorted by the key in ascending order
            map.put( distance, polygon );
        }
    }
    for ( iterator = map.rbegin(); iterator != map.rend(); ++iterator )
    {
        (*iterator)->drawPolygon();
    }
}

The largest negative here is the need to recalculate the distance of every polygon each time the camera moves. On the other hand, the map may easily be expanded by adding new polygons, and moving polygons are easy to update.

BSP tree

BSP (Binary Space Partitioning) is a performance improvement for polygon sorting. I used it to reduce the distance calculations to a simple comparison ( < > ). The drawback of this method is the time needed to create the tree. The BSP tree can be used for objects that are static; for dynamically moving objects, reconstructing the tree would take too much time.

Tree construction:
- Always compare polygons by the coordinate of only one axis (by X on the 1st level of the tree, by Y on the 2nd, by Z on the 3rd, again by X on the 4th level, etc.)
- Find the median by which to split the set of polygons into two sub-sets
- The median makes a node; the sub-set of polygons with coordinate < median goes to the left sub-tree, the sub-set with coordinate >= median goes to the right sub-tree

Drawing the tree:

void draw()
{
    root_node.draw();
}

void node::draw()
{
    if ( is_leaf )
    {
        draw_polygon();
    }
    // AXIS represents either X, Y, or Z depending on the level the node is in
    else if ( camera.AXIS < median.AXIS )
    {
        draw_right_subtree();
        draw_left_subtree();
    }
    else
    {
        draw_left_subtree();
        draw_right_subtree();
    }
}

The picture shows the BSP tree construction (in 2D, using only the X & Y axes) and the order of nodes visited when drawing. On the left is a scene with objects and the camera placement, on the right is the tree created from the scene. The numbers in the tree represent the order of traversal if drawn for the current camera location. When the camera moves, the tree itself does not change; only the order in which the left and right sub-trees are traversed may change.
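The STL map variant above can be made concrete in a few lines. One sketch, with polygons reduced to integer ids so the ordering is visible (names are mine): note that std::multimap rather than std::map is the safer container here, because two polygons at exactly the same distance would otherwise silently overwrite each other.

```cpp
#include <map>
#include <vector>

// Given polygons keyed by (squared) distance to the camera, return their
// ids in back-to-front order. std::multimap keeps keys sorted ascending,
// so iterating in reverse yields the farthest polygon first.
std::vector<int> BackToFrontOrder(const std::multimap<float, int>& byDistance)
{
    std::vector<int> order;
    for (auto it = byDistance.rbegin(); it != byDistance.rend(); ++it)
        order.push_back(it->second);
    return order;
}
```

In a renderer the values would be polygon pointers and the keys squared distances (avoiding the sqrt), rebuilt whenever the camera moves, exactly as in the pseudo-code.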
  8. The simple B&W image reading for generating galaxy arms presented in DevLog #7 turned out to be a bit too simple for my needs. I needed to reduce the thickness of the dust stream within the arms. I thought about another density map exclusively for the dust, but I needed a larger resolution for it. And while I was thinking about what size would be good enough to depict the thin dust streams, I reached a point where a few pixels more or less on the image size wouldn't make a large difference anymore. So why not use the colored source image I already have there, instead of relying on density maps? And so the detection of bright and dark spots on an image taken from a JPEG started. On this enlarged fragment of the source image you can see the blur and noise I am up against.

Bright spots

The first step to detect any bright spot is to find local brightness maxima (pixels that are brighter than their surrounding pixels). Then it must be determined whether it is a blurred dot (interesting), a spill (not interesting) or noise (not interesting). I didn't make a very elaborate detection mechanism here. I just took the close area surrounding the local maximum, calculated the average brightness of pixels at specific distances, and checked that the brightness of pixels decreases with increasing distance from the maximum. If yes, the local maximum is the center of a blurred point.

byte pixelBrightness = pixel[ R ] * 0.2126f + pixel[ G ] * 0.7152f + pixel[ B ] * 0.0722f;
...
if ( pixelBrightness > maxSurroundingBrightness                    // Check there is no brighter pixel around this one
  && avgCloseSurroundingBrightness > avgFarSurroundingBrightness   // Check the brightness decreases with greater distance
  && 0.9f * pixelBrightness > avgFarSurroundingBrightness )        // Check the brightness difference is large enough (i.e. filter out noise)
{
    /// code for adding star
}

Dark spots

Now this was a challenge. Here there are no dots I could easily detect (and if there were, they wouldn't be interesting for my purpose). Everything - noise, areas between arms, areas outside the Galaxy, areas of dust - is a dark spill. The only clue for detecting the dust I had was the color tone. The dust on the image is reddish or brownish, while the other areas are bluish. So here I was reading the color channels and comparing them against each other. The superiority of red turned out not to immediately mean red dust; it could be an orange core, or some bright dot leaning towards orange. To filter this out I added weights for each color and later also a top limit on the overall brightness of a pixel, ensuring I take only areas that are much more reddish, but not too bright, as dust. At the end I discovered that some dust areas are more blue than red and had to tweak the weights even more.

if ( 1.03f * pixel[ R ] >= pixel[ B ]                  // Allow a little blue tone
  && 1.25f * pixel[ R ] >= pixel[ G ]                  // Allow some orange tone
  && pixel[ R ] > MIN_PIXEL_BRIGHTNESS_FOR_DUST        // Avoid black color (and its noise)
  && pixelBrightness < MAX_PIXEL_BRIGHTNESS_FOR_DUST ) // Avoid white color
{
    /// code for adding dust
}

Finally the image was parsed with a relatively satisfying result. On the left is a small fraction of the Galaxy where I was testing the detection. You can see the white pixels hitting the blurred white spots. On the right is a test of the full galaxy - white pixels representing bright spots, red pixels tracking the dust in the arms. (Because of drawing performance, only every 10th red dot is displayed.)
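The brightness formula used throughout both tests is the standard Rec. 709 luma weighting. Extracted as a small helper (the function name is mine):

```cpp
// Perceptual brightness of an RGB pixel using Rec. 709 luma weights,
// matching the weights in the post (0.2126 R + 0.7152 G + 0.0722 B).
// The result is truncated to an 8-bit value like the post's `byte`.
unsigned char Brightness(unsigned char r, unsigned char g, unsigned char b)
{
    return static_cast<unsigned char>(r * 0.2126f + g * 0.7152f + b * 0.0722f);
}
```

These weights make green pixels count far more than blue ones, which is why a bluish inter-arm region can pass the "dark" test even when its blue channel is fairly high.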
  9. Hello again. It's not common for me to write entries too often, so I'd better hurry to make this one before I change my mind. So, what happened in StarDust the last week? For a long time it looked like I would have nothing, as the spiral arms of the galaxy completely broke the images I presented in the previous log entry. It simply stopped looking good and I spent an incredible amount of time playing with blending parameters. Either the dust was too mild where the layer was thin, or, when I increased the alpha channel, it was too bright in the thick areas. Finally today it all came together and I feel a huge relief from finishing the work (or at least a part of it). So, let's start... (This time I'll tell you the implementation details right here.)

[size="3"]Spiral arms[/size]

As I already mentioned in the previous entry, I wanted to create the arms from a galaxy image instead of from a mathematical function only. At first I was afraid it would be harder than generating from a function, but it turned out not to be such a problem. I already use one colored galaxy image in the scene for views from a distance. When the camera approaches the disc, the image is blended out. It appears only when leaving the Galaxy disc and looking back at it. This image is 5600x5600 pixels large, however, and although it has fine details of dust patches and clusters, going through it and generating dust particles or stars at such a resolution would kill my PC at this moment (not to speak of the memory it would take). So for the detection of the spiral arms I use a smaller 214x214 B&W image with increased contrast (I don't know how I came to these numbers). The image is parsed pixel by pixel, reading its intensity. If a pixel's intensity is over a chosen threshold, multiple dust clouds are randomly placed in the area covered by this pixel and their alpha is calculated from the pixel's intensity. E.g. in the areas between the spiral arms the pixels are dark, mostly under the threshold, and no dust is generated from them. Pixels closer to the arms are brighter, reaching or exceeding the threshold. From these the dust clouds are generated, but with low alpha. Pixels in the arms are the brightest, generating dust clouds with the greatest alpha and making them look more dense. The only exception is the area of the Galaxy core, where there should be no dust. So although this is the brightest area of the whole image, it is ignored. The blending for the dust clouds is set to (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA).

Galaxy prototype preview footage: The scene contains 70,000 GL_POINTS and 20,000 blended GL_QUADS without optimizations. The framerate is 2.5 - 3 fps (on a single core AMD Duron 1.8GHz, 2GB RAM, 128MB VGA RAM). HyperCam reduced the framerate by half. When disabling the GL_POINTS and keeping only the GL_QUADS, the framerate is around 18 fps. The video was sped up to make it smoother.

[media][/media]

[size="3"]Galaxy glow[/size]

The galaxy glow is the same dust from the previous log entry, using the blending parameters (GL_SRC_ALPHA, GL_ONE). This blending makes thicker layers (for example the core) appear "shining". The glow is also subject to the same density map as the spiral arms (only with a lower threshold), but it is generated differently. Dust for the arms was generated in a pixel-by-pixel pattern. The glow is generated by taking a random radius from the galaxy center and a random angle in the horizontal (X-Z) plane. From the X-Z coordinates it is then calculated which pixel in the density map has been hit, and the pixel intensity is taken for the alpha channel. This approach results in an increased density of glow particles around the center of the galaxy (the same as in the previous log entry), even without the density map. The density map is just a minor addition for the glow.

[size="3"]Galaxy core[/size]

The Milky Way as I am modelling it is a barred spiral galaxy. The core is not a spherical bulge, but a long bar. Luckily the areas at the sides of the bar are empty on the pictures from which I am generating, so I could treat everything within a radius of 10,000 ly from the center of the galaxy as a core, as if it were a spherical bulge. I am not sure how much it will affect the Near- and Far-3kpc arms later, but they look fine for now. When reading the density map, the area of the core is ignored for generating dust. The glow, on the other hand, is generated more in the core, and with an increased Y-coordinate range. Everywhere else the dust, glow and star clusters, after calculating their X-Z coordinates (either from the map or from radius-angle), are given a random Y coordinate within the limits of the galaxy disc thickness. In the core, the Y-limit is linearly increased from the core edge to the core centre. I also tried cosine interpolation instead of linear, but that looked strange.

[size="3"]Halo[/size]

The halo around the galaxy is made of points. The density of the halo should be largest around the disc and decrease with growing distance. So I used a similar radius-angle approach as in the case of the galaxy glow, except I raise the radius to the power of 2 (intensifying the reduction of density with distance) and use two angles here - a horizontal angle (random 0° to 360°) and a vertical angle (random -90° to 90°). This only partially worked. The density was greatest around the galaxy center, but not around the whole disc. To fix it, I multiplied the final Y coordinate (representing height) by a random value from 0.0 to 1.0. This pushed the points closer to the galactic plane, making a fine looking halo.

[size="3"]Smoothing the galaxy edges[/size]

The galaxy disc had one more flaw - it looked like a thin slice of some column, i.e. it had sharp corners, because the height in the disc was always generated within the limits of the full disc thickness. At first I thought I would multiply the Y coordinate simply by the cosine of the radius from the galaxy center, which would smooth the disc edges. But this wouldn't remove the same problem around the galaxy arms, where it appeared as well. So one more time the density map came into play: the pixel intensity not only influences the alpha channel of the generated element, but also the Y-limits. Darker pixels reduce the limits, making the disc thinner, while brighter pixels increase the limits, making the disc thicker. The smoothing of the edges is thus achieved thanks to the smoothness of the image and the gradual fading of the galaxy disc (or arms) on the image into darkness.
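The edge-smoothing trick can be boiled down to one line of scaling. A hypothetical helper illustrating the idea (name and linear mapping are my assumptions; the post only says darker pixels shrink the limits and brighter pixels grow them):

```cpp
// Scale the vertical placement limit for a dust/glow element by the
// density-map pixel intensity (0..255). A dark inter-arm pixel collapses
// the limit toward 0 (thin disc); a bright arm pixel restores the full
// half-thickness, smoothing the disc edges without a separate cosine falloff.
float YLimitFor(unsigned char pixelIntensity, float fullDiscHalfThickness)
{
    return fullDiscHalfThickness * (pixelIntensity / 255.0f);
}
```

The generated element then gets a random Y in [-limit, +limit], so the disc thickness fades out wherever the image fades into darkness.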
  10. KaRei

    Equipment and clothing

    The character reminds me a lot of a paladin from Diablo II. Looks really good!
  11. KaRei

    DevLog #6 - The Milky-Way

    Here it is ;) [img]http://img545.imageshack.us/img545/3286/wireframe48.jpg[/img]
  12. KaRei

    Distances and stars visibility

    [quote name='evolutional' timestamp='1306754212'] Do you have any time to write about how you approached your star generation algorithms? [/quote] I've noticed now that although I wrote about it in a later dev-log, I missed replying here. So: the generation of stars uses a fictional octree dividing the space - fictional because the structure is never stored (because of its size). From the camera coordinates it is calculated what index the leaf containing the camera would have in the tree. This index is used as a generator seed, ensuring that the random function will always return the same sequence of values for the same sector - i.e. generate the same stars. The same index calculation is also made for all nodes above the leaf containing the camera (i.e. the parents of that node). If the index of some of the sectors containing the camera changes, the stars of that level are regenerated. E.g. if the camera leaves the sector represented by a leaf but still remains in the larger sector represented by the leaf's parent, only the stars for the new leaf are generated. If the camera leaves both the leaf and its parent, the stars for both (the new leaf and its parent) are generated. The generation of the stars itself is then a simple series of calls to the random function. The first value returned is used as the count of stars in the sector. Each star is then given random X-Y-Z coordinates within the limits of the sector, a size, a color, etc.
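The seed-per-sector idea can be sketched concretely. This is a hypothetical illustration, not the author's actual code: the seed mixing constants and the count formula are mine; the point is only that the same sector index always regenerates identical stars:

```cpp
#include <cstdint>
#include <random>
#include <vector>

struct Star { float x, y, z; };

// Deterministically generate the stars of one sector of the "fictional"
// octree. The sector index (ix, iy, iz) is hashed into an RNG seed, so the
// sector's contents never need to be stored: re-entering the sector
// reproduces exactly the same stars.
std::vector<Star> GenerateSector(std::int64_t ix, std::int64_t iy, std::int64_t iz)
{
    std::uint64_t seed = static_cast<std::uint64_t>(ix) * 73856093u
                       ^ static_cast<std::uint64_t>(iy) * 19349663u
                       ^ static_cast<std::uint64_t>(iz) * 83492791u;
    std::mt19937_64 rng(seed);
    std::uniform_real_distribution<float> pos(0.0f, 1.0f); // sector-local coords

    // First draw decides how many stars live in this sector (capped at 63 here).
    std::size_t count = static_cast<std::size_t>(rng() % 64);

    std::vector<Star> stars(count);
    for (Star& s : stars)
        s = { pos(rng), pos(rng), pos(rng) };
    return stars;
}
```

A real implementation would also draw a size and color per star, and offset the local coordinates by the sector's world position.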
  13. KaRei

    DevLog #6 - The Milky-Way

    Oh, and if you're interested in the bright core, it's just a result of additive blending. The dust clouds are simply denser there, so their colors sum up to be brighter when blended than in the less dense surrounding areas.
  14. KaRei

    DevLog #6 - The Milky-Way

    It's all made just by texturing (no shaders used, as I still don't know how to write any). There are a few thousand (I think it was 3,000 at that revision) cloud billboards randomly scattered throughout the disc and blended together using additive blending. Stars are GL_POINTS. For now it's quite slow, and to get a temporary speed-up I've baked all the dust clouds into one display list. Obviously this can't be the final solution, as it doesn't allow dynamic billboard rotation, so later I'll have to switch to impostors or rendering to a sky-box texture.
  15. KaRei

    DevLog #5 - Stars make-up

    Ouch, forgot to switch to the Release configuration, hehe. I'll take one more look at the vectors and will bring updated numbers. Thanks for your comment!
Important Information

By using GameDev.net, you agree to our community Guidelines, Terms of Use, and Privacy Policy.