

Member Since 02 May 2013
Offline Last Active Mar 08 2015 09:28 AM

Topics I've Started

Game Master Server - UDP, TCP or both?

08 February 2015 - 09:31 AM

I'm working on a real-time game engine and need to create a master server application to maintain a list of all active servers; however, I'm not sure which network protocol to use.

The engine itself uses a combination of TCP and UDP.

I'm not sure how much overhead TCP has compared to UDP when the connection is idle and packets only have to be sent every couple of minutes at most, so I'm currently leaning towards using UDP.

However, I obviously need to make sure that all game servers actually support both UDP and TCP, and to verify that, the master server would need to speak both protocols as well.


How is this usually done? What else do I have to take into account? Would it even make a mentionable difference?

Issue using 2D depth texture array and regular depth texture in same shader

26 January 2015 - 10:48 AM

I'm using a texture array for cascaded shadow maps, and regular depth textures for spot-lights. They work fine, as long as I don't have them active at the same time in my shader.

In fact, after some testing, I found out that as soon as I bind ANY other depth texture to the shader, the shadows from the texture array are rendered incorrectly.

Here are the relevant snippets from the shaders / code:

Generating the shadow texture array:

unsigned int frameBuffer;

The fragment shader function for the texture lookup:

float CalculateShadowTerm(sampler2DArrayShadow shadowMap, vec4 worldCoord, float bias)
{
    // Pick the first (nearest) cascade whose far plane covers this fragment
    int index = numCascades - 1;
    mat4 vp;
    for(int i = 0; i < numCascades; i++)
    {
        if(gl_FragCoord.z < csmFard[i])
        {
            vp = csmVP[i];
            index = i;
            break;
        }
    }
    vec4 shadowCoord = vp * worldCoord;
    shadowCoord.w = shadowCoord.z * 0.5f + 0.5f - bias; // reference depth for the comparison
    shadowCoord.z = float(index);                       // array layer = cascade index
    shadowCoord.x = shadowCoord.x * 0.5f + 0.5f;
    shadowCoord.y = shadowCoord.y * 0.5f + 0.5f;
    // 'offset' and shadowRatioX/Y come from the surrounding filter kernel (not shown in the post)
    return shadow2DArray(shadowMap, shadowCoord + vec4(
        offset.x * shadowRatioX * shadowCoord.w,
        offset.y * shadowRatioY * shadowCoord.w,
        0.0f, 0.0f)).x;
}

The result is essentially just multiplied with the output color.

This is working as it should (At the top in the image are the contents of the 4 cascades):


However, as soon as I add any texture look-up to a different depth texture anywhere in the fragment shader, it suddenly gets rendered incorrectly.

Here's the second depth texture for testing purposes:

unsigned int frameBuffer;
Fragment Shader:
uniform sampler2D testShadowMap;
void main()
{
    // ... rest of the lighting shader ...
    // The result is never used; the lookup alone triggers the problem
    vec4 test = texture2D(testShadowMap, vec2(0.5, 0.5));
}

This shouldn't do anything whatsoever, and the two textures are bound to different texture units; however, the result suddenly becomes this:


Each cascade is now rendered with a different intensity, even though the actual contents of the depth textures are the same as before.

It looks like 'shadow2DArray' no longer performs the depth comparison and instead just returns the raw depth value (a value between 0 and 1).

If I bind any non-depth texture instead of the test-texture, or if I just comment out the texture lookup ('testShadowMap'), it works again.

What's going on here?
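One thing that may be worth ruling out (an assumption, not a confirmed diagnosis): per the GLSL specification, sampling a depth texture through a plain sampler2D while the texture's GL_TEXTURE_COMPARE_MODE is still enabled gives undefined results, and some drivers react by also breaking the comparison on other shadow samplers in the same program. A state-setup fragment to check (texture ids here are placeholders; this requires an active GL context):

```cpp
// The depth texture read through a plain sampler2D must have the
// depth comparison disabled...
glBindTexture(GL_TEXTURE_2D, testShadowMapTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE);

// ...while the cascade array read through shadow2DArray must have it enabled.
glBindTexture(GL_TEXTURE_2D_ARRAY, cascadeArrayTexture);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);
```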


[GLSL] Prevent lights from casting onto shadows

01 January 2015 - 01:53 PM

I'm trying to implement shadow mapping in my engine. Almost everything is working now; one of the last remaining problems is that I need to prevent my lights from shining through my shadow casters (see attachment #1) onto the shadows.


In my test-case, I'm using a spotlight with a 45 degree angle and a single shadowmap.

I'm using poisson-blur for the shadow map-lookup in the fragment shader:

vec2 poissonDisk[16] = vec2[](
	// ... 16 sample offsets (values omitted in the original post) ...
);

float GetSpotLightShadeScale(sampler2D shadowMap, vec4 shadowCoord, float bias)
{
	int visibility = 0;
	for(int i = 0; i < 16; i++)
	{
		float z = texture2D(shadowMap, (shadowCoord.xy + poissonDisk[i] / 3.0) / shadowCoord.w).z + bias;
		if(z > 0 && z < 1 && z < (shadowCoord.z / shadowCoord.w))
			visibility++; // sample is occluded
	}
	return 1.0 - (float(visibility) / 16.0);
}

The function "GetSpotLightShadeScale" returns a value between 0 and 1, where 0 means the fragment is completely inside the shadow and 1 means it's completely outside.


The color is then calculated as such:

float visibility = GetSpotLightShadeScale(shadowMap, shadowCoord[i], bias);
float shadow = 1.0 - (1.0 - visibility) * 0.32; // 0.32 = minimum brightness for shadowed areas
// 'distScale', 'lambertTerm' and 'spot' attenuate the intensity around the
// cone of the spotlight; 'shadow' decreases it further for the actual shadow
visibility = distScale * lambertTerm * spot * shadow;
outColor += lightColor * diffuseColor * visibility;

The problem is that this essentially casts the light over the shadow. After some experimentation I've come up with this:

float visibility = GetSpotLightShadeScale(shadowMap, shadowCoord[i], bias);
float shadow = 1.0 - (1.0 - visibility) * 0.82;
// Clamp so the total intensity for this fragment can't go above the shadow value
visibility = min(distScale * lambertTerm * spot, shadow);
outColor += lightColor * diffuseColor * visibility;

This does what it's supposed to, but it has the side effect that the poisson blur suddenly looks a lot sharper (I can't figure out why), which in turn leads to graphical artifacts around the edges of the shadow when the camera moves around.



I'm probably missing something very obvious, but I can't quite get this to work.

What's the best way to actually do this?

Occlusion Culling - Combine OctTree with CHC algorithm?

11 December 2014 - 02:03 PM

I'm trying to implement occlusion culling in my engine to improve my framerate. The general idea is this:

The entire scene is split into OctTree nodes where all visible meshes are contained. The scene is very dynamic, therefore some of the nodes in the OctTree change every frame.

There's only a single OctTree for the entire scene.


I want to combine this system with the coherent hierarchical culling algorithm, but I'm unsure how to do so.

The CHC algorithm traverses the OctTree hierarchy, looks for visible nodes, and marks them as such.

The problem is, the CHC algorithm needs to be run several times, once for the view camera, then for each light (to cull shadow casters), again for reflections, etc. That means the CHC implementation has to be separated from the actual OctTree and can't be directly integrated.

It still does, however, need the same hierarchy structure as the OctTree to be able to traverse correctly, which means I essentially have to 'duplicate' the OctTree hierarchy for each CHC instance. This is problematic, because as I've said, the OctTree is dynamic and nodes can change, which would be difficult to reflect in the CHC tree.

More importantly, the duplication and constant synchronization between the OctTree and the CHC Tree seems like an unnecessary drain of resources and speed.


Alternatively I could store the CHC node information directly in each OctTree node, but I'd need some sort of container to account for all CHC instances and that seems highly impractical.


I can't help but feel there has to be a better way, but I'm out of ideas.

Any help would be much appreciated.

Bullet - Setting up a slider constraint (btSliderConstraint) / Using btMatrix.setEulerZYX?

13 November 2014 - 04:25 PM

I'm trying to figure out how to create a slider constraint, but so far none of my attempts have worked properly.


In the fork lift demo from the SDK, the slider constraint for the fork is set up like this:

btTransform localA;
btTransform localB;
localA.getBasis().setEulerZYX(0, 0, M_PI_2);
localA.setOrigin(btVector3(0.0f, -1.9f, 0.05f));
localB.getBasis().setEulerZYX(0, 0, M_PI_2);
localB.setOrigin(btVector3(0.0, 0.0, -0.1));
m_forkSlider = new btSliderConstraint(*m_liftBody, *m_forkBody, localA, localB, true);
m_dynamicsWorld->addConstraint(m_forkSlider, true);

The parameters to create a new slider constraint are:

btSliderConstraint (btRigidBody &rbA, btRigidBody &rbB, const btTransform &frameInA, const btTransform &frameInB, bool useLinearReferenceFrameA)

In the demo, the slide direction is set up with these two lines:

localA.getBasis().setEulerZYX(0, 0, M_PI_2);
localB.getBasis().setEulerZYX(0, 0, M_PI_2);

I'm assuming these are euler angles with z = pitch, y = yaw and x = roll.

In this case the fork is moving upwards and downwards.

Changing it to (M_PI_2, 0, 0) allows it to move left / right, and with (0, M_PI_2, 0) it's forward / backward.
M_PI_2 is a rotation of 90 degrees, so that makes sense to me. The problem is that this only seems to work if the rotation is aligned to a single axis: something like (M_PI_4, M_PI_2, 0) ends up doing the same as (0, M_PI_2, 0), which I don't really understand.
Could someone please explain how to actually use setEulerZYX, or how to set up the slider constraint for a specific (non-world-axis-aligned) axis?