

Lambdacerro

Member Since 05 Feb 2009
Offline Last Active Jun 25 2013 09:35 AM

Topics I've Started

Shadow Mapping: Understanding Orthographic Projection

21 May 2013 - 04:38 PM

Hello,

 

I'm currently in the process of implementing shadow mapping in my engine. I'm using a deferred renderer for now and it works just fine; my main issue is implementing directional lights.

 

I've managed to set up an orthographic matrix and multiply it with a view matrix to obtain a light camera that can render the scene and draw proper shadows. The problem is that not all of the geometry is covered by the camera's range.
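
To illustrate, here is a minimal sketch of the kind of matrix setup I mean, written against the stock Irrlicht types (core::matrix4, core::vector3df). The lightDir, sceneCenter and sceneRadius values are just placeholders and are not fitted to the player camera's frustum, which is exactly the part I'm missing:

	#include <irrlicht.h>
	using namespace irr;

	core::matrix4 buildDirectionalLightMatrix(core::vector3df lightDir,
	                                          const core::vector3df& sceneCenter,
	                                          f32 sceneRadius)
	{
		lightDir.normalize();

		// View matrix: a virtual camera placed behind the volume, looking along the light.
		core::matrix4 lightView;
		lightView.buildCameraLookAtMatrixLH(
			sceneCenter - lightDir * sceneRadius,   // position
			sceneCenter,                            // target
			core::vector3df(0.f, 1.f, 0.f));        // up

		// Orthographic projection: width/height are the extents of the box the
		// shadow map covers, zNear/zFar bound it along the light direction.
		core::matrix4 lightProj;
		lightProj.buildProjectionMatrixOrthoLH(
			2.f * sceneRadius, 2.f * sceneRadius,   // view volume width, height
			0.f, 2.f * sceneRadius);                // zNear, zFar

		return lightProj * lightView;               // full light view-projection
	}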

 

The solution I've read about is to fit the light camera to the user-controlled camera's frustum. However, I don't quite understand how the orthographic projection works, mainly what each parameter means. Right now I'm using a function that takes four parameters; this is the code (I'm using the Irrlicht math library in my engine since I was already used to it):

 

	template <class T>
	inline CMatrix4<T>& CMatrix4<T>::buildProjectionMatrixOrthoLH(
			float widthOfViewVolume, float heightOfViewVolume, float zNear, float zFar)
	{
		AX_ASSERT_IF(widthOfViewVolume==0.f); //divide by zero
		AX_ASSERT_IF(heightOfViewVolume==0.f); //divide by zero
		AX_ASSERT_IF(zNear==zFar); //divide by zero
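
		// Row-major layout, left-handed (D3D-style) clip space: x is scaled by
		// 2/width and y by 2/height so the view volume maps to [-1,1], while
		// M[10] and M[14] remap z from [zNear,zFar] into [0,1].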
		M[0] = (T)(2/widthOfViewVolume);
		M[1] = 0;
		M[2] = 0;
		M[3] = 0;

		M[4] = 0;
		M[5] = (T)(2/heightOfViewVolume);
		M[6] = 0;
		M[7] = 0;

		M[8] = 0;
		M[9] = 0;
		M[10] = (T)(1/(zFar-zNear));
		M[11] = 0;

		M[12] = 0;
		M[13] = 0;
		M[14] = (T)(zNear/(zNear-zFar));
		M[15] = 1;

#if defined ( USE_MATRIX_TEST )
		definitelyIdentityMatrix=false;
#endif
		return *this;
	}

 

I don't know exactly what each parameter means in terms of the projection, and I'm also not quite sure whether I need a view matrix to create the light camera. Maybe someone can help me with this.

 

Thanks!


Octrees: Precomputed visibility and rendering approach

18 April 2013 - 03:50 PM

Hello guys,

 

Right now I'm working on a game engine for a game I want to make, mainly for research and learning purposes.

 

I have most of the base rendering code done, including loading meshes from a custom format as well as materials and other things. However, over the last few days I've been researching which space partitioning method I should use for my world geometry.

 

I've been reading about kd-trees, octrees (and their variants) and BSP trees. Out of the three, I think I will stick with octrees because they allow efficient culling of both outdoor and indoor geometry (or that's what I understood). The game will be mainly indoor, but because of the nature of the project I want a generic way to handle both types of scenes.

 

I understand the concept of octrees pretty well and I don't think I'll have problems implementing them (I'm a self-taught graphics programmer). However, there are some concepts I don't have a clear picture of, mainly precomputed visibility of nodes and the most efficient way to render each visible node.

 

For the first, I thought of the brute-force way: for each node, precompute which other nodes are visible from it (just like portals). This is quite heavy for a complex scene, but I don't mind spending some time on precomputing visibility. The alternative I thought of is to do the checking on the fly against the camera frustum's bounding box or the frustum planes themselves.
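
To make the on-the-fly idea concrete, here is a rough sketch of the kind of recursive test I have in mind. OctreeNode, Frustum and aabbIntersectsFrustum are made-up placeholder types, not code from my engine:

	#include <vector>

	struct AABB { float min[3], max[3]; };

	struct OctreeNode
	{
		AABB bounds;
		OctreeNode* children[8];   // null when the node is a leaf
		// per-node geometry lives here
	};

	struct Frustum { float planes[6][4]; };  // plane as (a, b, c, d), normals pointing inward

	// Returns false only when the box is completely outside one of the planes.
	bool aabbIntersectsFrustum(const AABB& box, const Frustum& f)
	{
		for (int p = 0; p < 6; ++p)
		{
			// Pick the box corner furthest along the plane normal (the "positive vertex").
			float x = f.planes[p][0] >= 0.f ? box.max[0] : box.min[0];
			float y = f.planes[p][1] >= 0.f ? box.max[1] : box.min[1];
			float z = f.planes[p][2] >= 0.f ? box.max[2] : box.min[2];
			if (f.planes[p][0]*x + f.planes[p][1]*y + f.planes[p][2]*z + f.planes[p][3] < 0.f)
				return false;
		}
		return true;
	}

	// Collect every node whose bounds touch the frustum.
	void collectVisibleNodes(OctreeNode* node, const Frustum& f, std::vector<OctreeNode*>& out)
	{
		if (!node || !aabbIntersectsFrustum(node->bounds, f))
			return;
		out.push_back(node);
		for (int i = 0; i < 8; ++i)
			collectVisibleNodes(node->children[i], f, out);
	}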

 

The other question is: once you determine which nodes are visible, what is the most efficient way to render them?

 

The first approach I thought of was to build a list of indices of the visible triangles and send it to the rendering API to be processed. The main problem with this is that it is somewhat CPU intensive, I guess, since you have to iterate over all the visible nodes, grab the indices of the polygons inside them and copy them into the active index buffer.
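
As a rough sketch of what I mean (the node and buffer types are made-up placeholders, not a real API):

	#include <cstdint>
	#include <vector>

	struct VisibleNode
	{
		std::vector<std::uint32_t> triangleIndices;  // indices into the shared vertex buffer
	};

	std::vector<std::uint32_t> buildFrameIndexList(const std::vector<VisibleNode*>& visible)
	{
		std::vector<std::uint32_t> indices;
		for (const VisibleNode* node : visible)
			indices.insert(indices.end(),
			               node->triangleIndices.begin(),
			               node->triangleIndices.end());
		// The caller would copy 'indices' into a dynamic index buffer and issue a
		// single indexed draw call; this per-frame copy is the CPU cost I mentioned.
		return indices;
	}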

 

The second was to create an index buffer for each node and render each visible node separately. The main problems with this are the memory overhead (I guess D3D/OpenGL uses extra memory for each created index buffer beyond the memory needed to hold the indices themselves) and the increased number of draw calls, both for the drawing itself and for switching between index buffers.
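
Roughly what I picture for this one (again with made-up placeholder types, and the actual draw call stubbed out):

	#include <cstddef>
	#include <vector>

	struct IndexBufferHandle { unsigned id; std::size_t indexCount; };  // hypothetical GPU buffer handle

	struct OctreeLeaf
	{
		IndexBufferHandle indices;   // created once at build time, holds this node's triangles
	};

	void drawIndexed(const IndexBufferHandle&) { /* placeholder for the real API call */ }

	void renderVisibleLeaves(const std::vector<OctreeLeaf*>& visible)
	{
		// One bind + draw per node: no per-frame index copying, but more draw calls
		// and one GPU buffer per leaf, which is the overhead I mentioned above.
		for (const OctreeLeaf* leaf : visible)
			drawIndexed(leaf->indices);
	}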

 

The third is to precompute an index buffer of the polygons visible from each node, which has the same memory issue as the second method.

 

I'd like the help and opinions of the experts here: what would you recommend in order to create the best possible design?

 

Thanks!

