crancran

Member Since 14 Oct 2009
Offline Last Active Nov 10 2014 03:50 PM

Topics I've Started

Terrain Multitexturing with Alpha Blend Masks

04 February 2014 - 10:55 PM

We have split our terrain into logical sectors, or pages, that let us stream in portions of the terrain based on the camera's location.  Each sector is further subdivided into 256 cells in a 16x16 layout.  Each cell consists of up to 4 color textures (generally 256x256), an alpha blend map for each color layer beyond the first, and a light map.

 

Rendering each of the smaller cells independently is quite easy: I pass the color textures along with an RGB texture that packs up to 3 alpha blend maps into a single texture, and do the blending inside a pixel shader.  While this works quite well, it is far from efficient, mainly because it pushes the batch count extremely high: I am rendering on a small cell-by-cell basis, and no material can be shared between cells because each has its own alpha blend map texture.
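For reference, here is a minimal CPU-side sketch in C++ of the blend math the pixel shader performs per fragment (the Color struct and function are purely illustrative, not the actual shader code):

struct Color { float r, g, b, a; };

// Blend one cell's four color layers using the packed RGB alpha mask.
// layer[0] is the base; the mask's R/G/B channels weight layers 1..3.
Color blendCell(const Color layer[4], const Color& mask)
{
    Color out = layer[0];
    const float w[3] = { mask.r, mask.g, mask.b };
    for (int i = 0; i < 3; ++i) {
        out.r = out.r * (1.0f - w[i]) + layer[i + 1].r * w[i];
        out.g = out.g * (1.0f - w[i]) + layer[i + 1].g * w[i];
        out.b = out.b * (1.0f - w[i]) + layer[i + 1].b * w[i];
    }
    return out;
}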

 

I did consider using a quad-tree based on texture counts to split each terrain sector into quads containing at minimum 1 cell, but more typically an 8x8 patch of cells; dynamically generating a material that references the textures used by that set of cells; and then generating a runtime texture that combines the cells' alpha blend maps into one larger texture.  In the case where an entire sector fits within the texture limits and is therefore a single leaf node, the combined alpha texture is only 1024x1024, and generally I would expect it to be 512x512 or smaller in more detailed, texture-varying areas.

 

The problem with this approach is that I am not sure how to control which 4 textures to sample and blend in my shader.  As a simple example, suppose my algorithm determines that a 2x2 set of grid cells is within the texture limits and can be combined into a single material, so I bind, let's say, 8 textures plus my combined blend texture.  The top-left cell might need to sample textures 1, 3, 5, and 8, while the top-right cell might need to sample textures 2, 5, 6, and 7.  In both cases the same alpha texture is sampled, but the red channel controls the blend weight for texture 1 in the top-left cell and for texture 2 in the top-right.
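To make the question concrete, this is the kind of per-cell indirection data I imagine the merged material would need so that each blend-map channel can drive a different texture in each cell (a rough sketch; all names are illustrative):

#include <cstdint>

// For a merged 2x2 patch with up to 8 bound textures: each cell stores
// which 4 of those textures its blend-map channels select.
struct CellTextureIndices {
    std::uint8_t layer[4];   // indices into the material's bound texture list
};

struct MergedPatchConstants {
    CellTextureIndices cells[2][2];   // one entry per cell in the patch
};

// The shader would derive which cell a fragment falls in from its UV,
// look up that cell's 4 indices (via constants or a small index texture),
// and let the shared alpha map's channels weight those textures.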

 

Is this even possible, and if so, does anyone have examples or suggestions on how I could leverage it?  Or is there a cleaner yet still efficient way to do this without overly complicating the shader logic?
 


Tiled Terrain Rendering with Multiple Textures

31 January 2013 - 10:27 PM

I am currently working on my terrain system and have run into a wall on how to render the terrain efficiently with adequate detail.  Each of our terrain tiles spans 512x512 and consists of 256 smaller blocks of 32x32.  Each block has a predefined list of textures to be applied, along with heightmap, normal map, lightmap, shadowmap, and alpha map data for blending.  I need to be able to load the tile the camera is in plus the 8 surrounding tiles at decent detail.  Tiles beyond that perimeter need to load too, but at far less detail since they begin to be obscured by fog anyway.
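For concreteness, the per-tile data described above could be laid out roughly like this (field names and types are illustrative, not the engine's actual structures):

#include <string>
#include <vector>

struct TerrainBlock {                     // one 32x32 block
    std::vector<std::string> textures;    // predefined texture list for this block
    std::vector<float>       heights;     // heightmap samples for the block
    // normal map, lightmap, shadowmap, and per-layer alpha maps would be
    // referenced here as well (handles or raw pixel data)
};

struct TerrainTile {                      // one 512x512 tile
    TerrainBlock blocks[16][16];          // 256 blocks of 32x32
};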

 

I'm presently using the OGRE3D render engine, and the problem I face is that if I render each of the smaller blocks (32x32) one by one, each with its own material and texture references running through my pixel shader for blending, I generate a batch per block.  Rendering the 9 tiles already exceeds 2300 batches (9 tiles x 256 blocks = 2,304 draw calls), and on lower-end hardware the frame rate hovers around 5-9 FPS, which isn't acceptable.

 

The idea is to minimize the batch count by combining whatever I can on the CPU side through some preprocessing/loading steps.  I've tried a render-to-texture approach for a single tile, but the result was very blurry when the avatar's camera looked down at the ground, compared to my current technique of batching each smaller chunk separately.  Given that I've seen RTT look perfectly sharp in other games, I suspect I've missed something, but scouring the documentation hasn't triggered any ideas.

 

I also considered stitching textures on the CPU by building the larger per-tile texture for each blend layer and passing those tile textures to my shader to blend with the associated color textures, but I wasn't sure whether that was an ideal path either.

 

How have others approached this in their games for large-scale terrain?


Game State Management

20 December 2012 - 11:10 AM

When I researched game state management a while back, I found the traditional stack-based finite state machine highly recommended. I implemented that system in my current game, but there are aspects of the design that feel a bit flawed and often too restrictive.

In a networked game, you might have states such as Login, SelectAvatar, and Play. With a stack-based approach, you might establish your server connection in the Login state; because the stack keeps that state alive, the connection remains open and valid until the Login state is destroyed or some event triggers the destruction of the connection.

But let's assume connectivity is lost during the Play state and the player must be sent back to the login screen. Since we've built up this stack of states, navigating back to the start isn't easy: a simple switch() to another state won't eliminate the states that exist below the Play state. It also seems like bad design to let pop() take a numeric count to remove N states from the stack, particularly when N could vary depending on the circumstances.
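To illustrate, here is a bare-bones sketch of the kind of stack-based manager I'm describing (my own illustrative code, not from any particular engine); the pop(count) overload is exactly the part that feels wrong:

#include <cstddef>
#include <memory>
#include <vector>

struct GameState {
    virtual ~GameState() = default;
    virtual void enter() = 0;
    virtual void exit() = 0;
    virtual void update(float dt) = 0;
};

class StateStack {
public:
    void push(std::unique_ptr<GameState> state) {
        state->enter();
        states_.push_back(std::move(state));
    }

    // How many states to pop after a disconnect depends on how deep the
    // stack happens to be at that moment, which is the awkward part.
    void pop(std::size_t count = 1) {
        while (count-- > 0 && !states_.empty()) {
            states_.back()->exit();
            states_.pop_back();
        }
    }

    void update(float dt) {
        if (!states_.empty()) states_.back()->update(dt);
    }

private:
    std::vector<std::unique_ptr<GameState>> states_;
};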

An alternative would be to avoid the stack altogether and instead move linearly between the three states. Since games often use a layered-systems approach, there would likely be a network layer abstraction or subsystem that holds the connection references and exposes an API the various states can use. In that case a connection established in one state may be cleaned up by another, which again feels like poor design to me, though maybe others consider it typical.

I have seen references where game states are treated like screens, which wraps UI aspects into these states. If you take a stack-based approach to game state, that may or may not work well for UI, because UI is generally not stack-driven. You often have more than one UI screen open at a time, each needing input, logic updates, and so on. Furthermore, UI screens can be opened in varying order, and depending on the order of operations you get different outcomes. For example, opening panel A followed by panel B implies that A should be closed, but if you open panel B first and then panel A, both can remain open with no conflicts. One could also introduce a panel C that can be opened or closed independently of the other two panels' open/close order.

This has all led me to believe that approaching game state management this way is really poor design. I have begun to feel that a different approach is needed, perhaps multiple state machines, one per subsystem, which would interact through some event/messaging system or well-defined subsystem interfaces.

But before I take any approach, I'm curious how others have handled game state around managing UI interactions and the various screens, together with subsystems like networking and audio where you create a connection or sound in one state and it remains active until another state is reached downstream. It could simply be that my stack-based approach is flawed somehow; if so, feel free to correct me where my understanding is inaccurate.

Decoupling Network from Game/UI logic

30 November 2012 - 01:52 PM

I am wondering what design strategies or approaches others have used to decouple networking from game and UI logic in a game client.

I considered the notion that everything sent to the server endpoint would be some form of event. At various points in the game logic, specific events would be dispatched to an event queue. The network layer components would register during initialization for these types of events and, upon receiving one, perform the corresponding network action. Once the network action completes, a network-layer completion handler is invoked that fires an event back onto the event queue, to be distributed the next time the event dispatcher is ticked.
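Roughly what I have in mind, as a sketch (the Event type, handler registration, and names are hypothetical, not an existing API):

#include <functional>
#include <queue>
#include <unordered_map>
#include <vector>

struct Event { int type; /* payload omitted */ };

class EventQueue {
public:
    using Handler = std::function<void(const Event&)>;

    // e.g. the network layer subscribes to "send login request" events here
    void subscribe(int type, Handler handler) {
        handlers_[type].push_back(std::move(handler));
    }

    // game logic (or a network completion handler) posts events here
    void post(const Event& e) { pending_.push(e); }

    // ticked once per frame by the main loop; delivers queued events
    void tick() {
        while (!pending_.empty()) {
            Event e = pending_.front();
            pending_.pop();
            for (auto& handler : handlers_[e.type]) handler(e);
        }
    }

private:
    std::unordered_map<int, std::vector<Handler>> handlers_;
    std::queue<Event> pending_;
};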

One thing I like about the above is that the event dispatcher is the conduit by which any module can speak with the network layer. In fact, my design hinges on this being the way any subsystem in the framework talks to another.

Using Boost.Asio with an io_service worker thread, I can have the service's event loop continually respond to read/write operations as fast as possible and append to an internal queue of message packets received from the server. During the main game loop I simply lock the queue, clone it, clear it, unlock, and then process the cloned list. This lets me keep state updates ordered within the game loop; I don't think it would be wise to let the io_service thread manipulate game state directly, since it has no idea where the main loop is in the simulation.
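The lock/clone/clear step looks roughly like this (names are made up; I show a swap instead of an explicit copy, which empties the shared queue with the same effect):

#include <mutex>
#include <vector>

struct Packet { /* decoded server message */ };

class PacketInbox {
public:
    // called from the Boost.Asio read handler on the io_service worker thread
    void push(Packet p) {
        std::lock_guard<std::mutex> lock(mutex_);
        incoming_.push_back(std::move(p));
    }

    // called once per frame from the main game loop
    std::vector<Packet> drain() {
        std::vector<Packet> drained;
        {
            std::lock_guard<std::mutex> lock(mutex_);
            drained.swap(incoming_);   // take the queued packets and leave it empty
        }
        return drained;                // processed outside the lock
    }

private:
    std::mutex mutex_;
    std::vector<Packet> incoming_;
};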

I'm curious whether there is a better approach to interfacing the network layer with the game loop than the above. The world simulation will exchange a large number of message types with the server, and I'd like a good way to send and receive these messages effectively, with minimal overhead and decent decoupling.

EDIT: Also keep in mind that if a proposed solution does not use an event queue like the above, the game client maintains at most three server connections simultaneously during play: the authentication service, the chat service, and the world simulation service. So I'd need a way to distinguish between multiple server-side endpoints.


Game Engine Architecture Overview

19 April 2012 - 01:59 PM

I recently stumbled onto this article and have a few questions:
http://software.intel.com/en-us/articles/designing-the-framework-of-a-parallel-game-engine/

It seems the engine framework uses the Universal Scene (UScene) and Universal Object (UObject) concepts to loosely couple the objects maintained by each of the engine's systems so that data can be shared/exchanged in some fashion. At one point, in section 3.1.2, the article says:

Another thing to point out is that the universal scene and universal object are responsible for registering all their extensions with the state manager so that the extensions will get notified of changes made by other extensions (ie: other systems). An example would be the graphics extension being registered to receive notification of position and orientation changes made by the physics extension.


I cannot quite visualize how the above, together with the comments in sections 5.2.1, 5.2.2, and 5.2.3, fits together for inter-system communication. It seems that the UScene/UObject somehow get extended by the objects maintained inside the systems.
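Here is how I currently picture the relationship, sketched in C++ (all type names are my own guesses, not the article's actual interfaces):

#include <vector>

struct ISystemObject;   // a system's extension of a universal object

// Registers observer relationships and routes change notifications.
struct IChangeManager {
    virtual ~IChangeManager() = default;
    virtual void registerObserver(ISystemObject* subject,
                                  ISystemObject* observer,
                                  unsigned interestBits) = 0;
};

struct ISystemObject {
    virtual ~ISystemObject() = default;
    // called when a subject this extension observes has changed
    virtual void onChange(ISystemObject* subject, unsigned changeBits) = 0;
};

struct UObject {
    std::vector<ISystemObject*> extensions;   // one per system (graphics, physics, ...)

    // The universal object wires its extensions together, e.g. so the
    // graphics extension hears about position/orientation changes made
    // by the physics extension.
    void registerExtensions(IChangeManager& changeManager) {
        for (ISystemObject* observer : extensions)
            for (ISystemObject* subject : extensions)
                if (subject != observer)
                    changeManager.registerObserver(subject, observer, ~0u);
    }
};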

Anyone have any idea how that relationship looks?
