
Silverlan

Member Since 02 May 2013

Topics I've Started

Game Master Server - UDP, TCP or both?

08 February 2015 - 09:31 AM

I'm working on a real-time game engine and need to create a master server application to maintain a list of all active servers; however, I'm not sure which network protocol to use.

The engine itself uses a combination of TCP and UDP.

I'm not sure how much overhead TCP has compared to UDP when the connection is mostly idle and packets only need to be sent every couple of minutes at most, so I'm currently leaning towards UDP.

However, I obviously need to make sure that all game servers actually support both UDP and TCP, so the master server would need to support both as well.
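
To make that concrete, the UDP traffic I have in mind is essentially a small periodic heartbeat from each game server to the master server, something along these lines (a rough sketch using POSIX sockets; the address, port and payload format are made-up placeholders):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <string>

// Rough heartbeat sketch; master address, port and payload format are placeholders.
bool SendHeartbeat(const std::string &payload)
{
    int sock = socket(AF_INET,SOCK_DGRAM,0); // UDP: no connection state to keep alive
    if(sock < 0)
        return false;

    sockaddr_in master = {};
    master.sin_family = AF_INET;
    master.sin_port = htons(27900); // placeholder port
    inet_pton(AF_INET,"203.0.113.10",&master.sin_addr); // placeholder master address

    // One small datagram every couple of minutes; no handshake or teardown.
    ssize_t sent = sendto(sock,payload.data(),payload.size(),0,
        reinterpret_cast<const sockaddr*>(&master),sizeof(master));
    close(sock);
    return sent == static_cast<ssize_t>(payload.size());
}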

 

How is this usually done? What else do I have to take into account? Would it even make a noticeable difference?


Issue using 2D depth texture array and regular depth texture in same shader

26 January 2015 - 10:48 AM

I'm using a texture array for cascaded shadow maps and regular depth textures for spotlights. They work fine as long as they're not both active in the same shader at the same time.

In fact, after some testing, I found out that as soon as I bind ANY other depth texture to the shader, the shadows from the texture array are rendered incorrectly.

Here are the relevant snippets from the shaders / code:

Generating the shadow texture array:

unsigned int frameBuffer;
glGenFramebuffers(1,&frameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER,frameBuffer);

unsigned int texture;
glGenTextures(1,&texture);
glBindTexture(GL_TEXTURE_2D_ARRAY,texture);
 
// One depth layer per cascade; the border parameter must be 0.
glTexImage3D(
    GL_TEXTURE_2D_ARRAY,0,GL_DEPTH_COMPONENT24,
    size,size,GetSplitCount(),0,GL_DEPTH_COMPONENT,GL_FLOAT,NULL
);
glDrawBuffer(GL_NONE); // depth-only framebuffer, no color attachments
glReadBuffer(GL_NONE);
glTexParameteri(GL_TEXTURE_2D_ARRAY,GL_TEXTURE_MIN_FILTER,GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D_ARRAY,GL_TEXTURE_MAG_FILTER,GL_NEAREST);
// Enable hardware depth comparison so the array can be used with sampler2DArrayShadow.
glTexParameteri(GL_TEXTURE_2D_ARRAY,GL_TEXTURE_COMPARE_FUNC,GL_LEQUAL);
glTexParameteri(GL_TEXTURE_2D_ARRAY,GL_TEXTURE_COMPARE_MODE,GL_COMPARE_R_TO_TEXTURE);
 
glTexParameteri(GL_TEXTURE_2D_ARRAY,GL_TEXTURE_WRAP_R,GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_2D_ARRAY,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_2D_ARRAY,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_BORDER);
// Attach layer 0 of the array as the depth attachment.
glFramebufferTextureLayer(GL_FRAMEBUFFER,GL_DEPTH_ATTACHMENT,texture,0,0);

The fragment shader function for the texture lookup:

float CalculateShadowTerm(sampler2DArrayShadow shadowMap,vec4 worldCoord,float bias)
{
    // Pick the cascade this fragment falls into.
    int index = numCascades -1;
    mat4 vp;
    for(int i=0;i<numCascades;i++)
    {
        if(gl_FragCoord.z < csmFard[i])
        {
            vp = csmVP[i];
            index = i;
            break;
        }
    }
    // Transform into light space; .z selects the array layer,
    // .w carries the (biased) reference depth for the comparison.
    vec4 shadowCoord = vp *worldCoord;
    shadowCoord.w = shadowCoord.z *0.5f +0.5f -bias;
    shadowCoord.z = float(index);
    shadowCoord.x = shadowCoord.x *0.5f +0.5f;
    shadowCoord.y = shadowCoord.y *0.5f +0.5f;
 
    return shadow2DArray(shadowMap,shadowCoord +vec4(
        offset.x *shadowRatioX *shadowCoord.w,
        offset.y *shadowRatioY *shadowCoord.w,
        0.0,
        0.0
    )).x;
}
 

The result is essentially just multiplied with the output color.

This is working as it should (at the top of the image are the contents of the 4 cascades):

http://puu.sh/f1p23/d9f5549847.jpg

However, as soon as I add a texture lookup into any other depth texture anywhere in the fragment shader, the shadows are suddenly rendered incorrectly.

Here's the second depth texture for testing purposes:

unsigned int frameBuffer;
glGenFramebuffers(1,&frameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER,frameBuffer);

unsigned int texture;
glGenTextures(1,&texture);
glBindTexture(GL_TEXTURE_2D,texture);
// Plain 16-bit depth texture; no compare mode, sampled with a regular sampler2D.
glTexImage2D(
    GL_TEXTURE_2D,0,
    GL_DEPTH_COMPONENT16,
    128,128,0,
    GL_DEPTH_COMPONENT,GL_FLOAT,(void*)0
);
 
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAX_LEVEL,0);
glFramebufferTexture2D(GL_FRAMEBUFFER,GL_DEPTH_ATTACHMENT,GL_TEXTURE_2D,texture,0);
glDrawBuffer(GL_NONE); // depth-only framebuffer
glReadBuffer(GL_NONE);
 
Fragment Shader:
uniform sampler2D testShadowMap;
void main()
{
    texture(testShadowMap,vec2(0,0));
    [...]
}

This shouldn't do anything whatsoever, and the two textures are bound to different texture units; however, the result suddenly looks like this:

http://puu.sh/f1p9R/92e86e22b7.jpg

Each cascade is now rendered with a different intensity, even though the actual contents of the depth textures are the same as before.

It looks like 'shadow2DArray' suddenly no longer performs the depth comparison and instead just returns the stored depth value (a value between 0 and 1).

If I bind any non-depth texture instead of the test-texture, or if I just comment out the texture lookup ('testShadowMap'), it works again.
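
For reference, this is roughly how the two samplers are bound to separate texture units (the unit indices and variable names here are simplified placeholders, not my exact code):

glUseProgram(shaderProgram);

// Cascaded shadow map array on unit 4 (the sampler2DArrayShadow uniform).
glActiveTexture(GL_TEXTURE4);
glBindTexture(GL_TEXTURE_2D_ARRAY,csmTextureArray);
glUniform1i(glGetUniformLocation(shaderProgram,"shadowMap"),4);

// Test depth texture on unit 5 (the sampler2D uniform 'testShadowMap').
glActiveTexture(GL_TEXTURE5);
glBindTexture(GL_TEXTURE_2D,testDepthTexture);
glUniform1i(glGetUniformLocation(shaderProgram,"testShadowMap"),5);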

What's going on here?

 


[GLSL] Prevent lights from casting onto shadows

01 January 2015 - 01:53 PM

I'm trying to implement shadow mapping in my engine. Almost everything is working now; one of the last remaining problems is that I need to prevent my lights from shining through my shadow casters (see attachment #1) onto the shadows.

 

In my test case, I'm using a spotlight with a 45-degree angle and a single shadow map.

I'm using a Poisson-disk blur for the shadow map lookup in the fragment shader:

vec2 poissonDisk[16] = vec2[](
	vec2(-0.94201624,-0.39906216),
	vec2(0.94558609,-0.76890725),
	vec2(-0.094184101,-0.92938870),
	vec2(0.34495938,0.29387760),
	vec2(-0.91588581,0.45771432),
	vec2(-0.81544232,-0.87912464),
	vec2(-0.38277543,0.27676845),
	vec2(0.97484398,0.75648379),
	vec2(0.44323325,-0.97511554),
	vec2(0.53742981,-0.47373420),
	vec2(-0.26496911,-0.41893023),
	vec2(0.79197514,0.19090188),
	vec2(-0.24188840,0.99706507),
	vec2(-0.81409955,0.91437590),
	vec2(0.19984126,0.78641367),
	vec2(0.14383161,-0.14100790)
);

float GetSpotLightShadeScale(sampler2D shadowMap,vec4 shadowCoord,float bias)
{
	// Count how many of the 16 Poisson-disk samples are occluded.
	int visibility = 0;
	for(int i=0;i<16;i++)
	{
		float z = texture2D(shadowMap,(shadowCoord.xy +poissonDisk[i] /3.0) /shadowCoord.w).z +bias;
		if(z > 0 && z < 1 && z < (shadowCoord.z /shadowCoord.w))
			visibility++;
	}
	// 1.0 = fully lit, 0.0 = fully shadowed.
	return 1.0 -(float(visibility) /16.0);
}

The function "GetSpotLightShadeScale" returns a value between 0 and 1, where 0 means the fragment is completely inside the shadow and 1 means it's completely outside.

 

The color is then calculated as such:

[...]
float visibility = GetSpotLightShadeScale(shadowMap,shadowCoord[i],bias);
float shadow = (1.0 -(1.0 -visibility) *0.32); // 0.32 = Minimum brightness for shadowed areas
// 'distScale', 'lambertTerm' and 'spot' attenuate the light around the spotlight cone;
// 'shadow' decreases the intensity further for shadowed fragments.
visibility = distScale *lambertTerm *spot *shadow;
outColor += lightColor *diffuseColor *visibility;

The problem is that this essentially casts the light over the shadow. After some experimentation, I've come up with this:

[...]
float visibility = GetSpotLightShadeScale(shadowMap,shadowCoord[i],bias);
float shadow = 1.0 -(1.0 -visibility) *0.82;
visibility = min(distScale *lambertTerm *spot,shadow); // This simply makes sure the total intensity for this fragment can't go above the shadow value
outColor += lightColor *diffuseColor *visibility;
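
Written out, the two variants combine the terms as follows, where $v$ is the value returned by GetSpotLightShadeScale and $L = \mathrm{distScale} \cdot \mathrm{lambertTerm} \cdot \mathrm{spot}$:

$$I_{\mathrm{original}} = L \cdot \bigl(1 - 0.32\,(1 - v)\bigr), \qquad I_{\mathrm{modified}} = \min\bigl(L,\; 1 - 0.82\,(1 - v)\bigr)$$

In both cases, lightColor * diffuseColor * I is what gets added to outColor.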

This does what it's supposed to, but it has the side effect that the Poisson blur suddenly looks a lot sharper (I can't figure out why), which in turn leads to graphical artifacts around the edges of the shadow when the camera moves around.

 

 

I'm probably missing something very obvious, but I can't quite get this to work.

What's the best way to actually do this?


Occlusion Culling - Combine OctTree with CHC algorithm?

11 December 2014 - 02:03 PM

I'm trying to implement occlusion culling in my engine to improve my framerate. The general idea is this:

The entire scene is split into OctTree nodes, which contain all of the visible meshes. The scene is very dynamic, so some of the nodes in the OctTree change every frame.

There's only a single OctTree for the entire scene.

 

I want to combine this system with the coherent hierarchical culling algorithm, but I'm unsure how to do so.

The CHC algorithm basically travels through the OctTree hierarchy, looks for visible nodes, and marks them as such.

The problem is that the CHC algorithm needs to be run several times: once for the view camera, then once for each light (to cull shadow casters), again for reflections, and so on. That means the CHC implementation has to be separated from the actual OctTree and can't be directly integrated into it.

It still does, however, need the same hierarchy structure as the OctTree to be able to traverse correctly, which means I essentially have to 'duplicate' the OctTree hierarchy for each CHC instance. This is problematic, because as I've said, the OctTree is dynamic and nodes can change, which would be difficult to reflect in the CHC tree.

More importantly, the duplication and constant synchronization between the OctTree and the CHC Tree seems like an unnecessary drain of resources and speed.

 

Alternatively, I could store the CHC node information directly in each OctTree node, but I'd need some sort of container to account for all CHC instances, and that seems highly impractical.
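
To illustrate what I mean (the type names here are hypothetical, not code I actually have): each OctTree node would have to carry a container of per-instance CHC state, keyed by the CHC instance (view camera, each shadow-casting light, each reflection pass, and so on):

#include <cstdint>
#include <unordered_map>
#include <vector>

// Hypothetical CHC bookkeeping stored per node, one entry per CHC instance.
struct CHCNodeState
{
    bool visible = false;          // result of the last occlusion query
    uint32_t lastQueriedFrame = 0; // frame on which that query was issued
    uint32_t queryId = 0;          // occlusion query object, 0 = none pending
};

struct OctTreeNode
{
    std::vector<OctTreeNode*> children;
    // ... bounds, contained meshes, etc. ...

    // One entry per CHC instance; this is the container that seems impractical,
    // since every node pays for it and it must stay in sync as nodes change.
    std::unordered_map<uint32_t,CHCNodeState> chcState; // key = CHC instance id
};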

 

I can't help but feel there has to be a better way, but I'm out of ideas.

Any help would be much appreciated.


Bullet - Setting up a slider constraint (btSliderConstraint) / Using btMatrix.setEulerZYX?

13 November 2014 - 04:25 PM

I'm trying to figure out how to create a slider constraint, but so far none of my attempts have worked properly.

 

In the fork lift demo from the SDK, the slider constraint for the fork is set up like this:

btTransform localA;
btTransform localB;
[...]
localA.setIdentity();
localB.setIdentity();
localA.getBasis().setEulerZYX(0, 0, M_PI_2);
localA.setOrigin(btVector3(0.0f, -1.9f, 0.05f));
localB.getBasis().setEulerZYX(0, 0, M_PI_2);
localB.setOrigin(btVector3(0.0, 0.0, -0.1));
m_forkSlider = new btSliderConstraint(*m_liftBody, *m_forkBody, localA, localB, true);
m_forkSlider->setLowerLinLimit(0.1f);
m_forkSlider->setUpperLinLimit(0.1f);
m_forkSlider->setLowerAngLimit(0.0f);
m_forkSlider->setUpperAngLimit(0.0f);
m_dynamicsWorld->addConstraint(m_forkSlider, true);

The parameters to create a new slider constraint are:

btSliderConstraint (btRigidBody &rbA, btRigidBody &rbB, const btTransform &frameInA, const btTransform &frameInB, bool useLinearReferenceFrameA)

In the demo, the slide direction is set up with these two lines:

localA.getBasis().setEulerZYX(0, 0, M_PI_2);
[...]
localB.getBasis().setEulerZYX(0, 0, M_PI_2);

I'm assuming these are Euler angles with z = pitch, y = yaw, and x = roll.

In this case the fork is moving upwards and downwards.

Changing it to (M_PI_2, 0, 0) allows it to move left / right, and with (0, M_PI_2, 0) it's forward / backward.
 
M_PI_2 is a rotation of 90 degrees, so that makes sense to me. The problem is that this only seems to work if the rotation is aligned with a single axis: something like (M_PI_4, M_PI_2, 0) ends up doing the same thing as (0, M_PI_2, 0), which I don't really understand.
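
To summarize the observations in code (using the same localA / localB transforms as in the demo snippet above; localB is set the same way in each case):

localA.getBasis().setEulerZYX(0, 0, M_PI_2);      // fork slides up / down
localA.getBasis().setEulerZYX(M_PI_2, 0, 0);      // fork slides left / right
localA.getBasis().setEulerZYX(0, M_PI_2, 0);      // fork slides forward / backward

// This is the part I don't understand:
localA.getBasis().setEulerZYX(M_PI_4, M_PI_2, 0); // behaves the same as (0, M_PI_2, 0)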
 
Could someone please enlighten me on how to actually use setEulerZYX, or how to set up the slider constraint for a specific (non-world-axis-aligned) axis?
 
 
