


#5253403 Game Development Laptop

Posted by KaiserJohan on 22 September 2015 - 01:46 AM

Why do all "professional" / "corporate" marketed laptops have dual-core processors? And why are they expensive as hell for crappier hardware? Something like the Asus Zenbook, which is marketed more as a "gaming" laptop, is both cheaper and outperforms many "professional" laptops...

#5252135 Omni-Directional Soft Shadows

Posted by KaiserJohan on 14 September 2015 - 01:13 AM

What's your shadowmap resolution per face?


Also, when rendering into the shadowmap for each face, what is your projection matrix's zNear, and do you use the light's max radius as zFar?


The aliasing when moving is called shimmering. There should be some good Google hits on it. Higher resolution and tighter bounds help, but you can never fully negate the problem. I believe it also helps if you can make the camera move only in shadowmap-texel-sized increments.
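For reference, here's roughly what that texel-snapping looks like for a directional/orthographic shadow map (a sketch with made-up names, using GLM; for a point light's perspective faces it isn't as straightforward):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Snap the shadow camera so it only ever moves in whole shadowmap-texel steps.
// Assumes an orthographic shadow projection of width 'orthoWidth' covering a
// square map of 'shadowmapSize' texels; names are illustrative.
glm::mat4 SnapShadowView(const glm::mat4& lightView, float orthoWidth, float shadowmapSize)
{
    const float texelSize = orthoWidth / shadowmapSize;     // world units per shadowmap texel

    // where the world origin lands in light-view space
    const glm::vec3 origin = glm::vec3(lightView * glm::vec4(0.0f, 0.0f, 0.0f, 1.0f));

    // offset needed to round that position to a whole number of texels
    glm::vec3 offset = glm::floor(origin / texelSize) * texelSize - origin;
    offset.z = 0.0f;                                        // only snap in the shadowmap plane

    return glm::translate(glm::mat4(1.0f), offset) * lightView;
}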


I've never had aliasing issues bad enough to warrant CSM for point lights. What's the radius of your point light?

#5250227 Write to a texture with a Compute Shader

Posted by KaiserJohan on 02 September 2015 - 01:10 AM

Have you used http://cryengine.com/renderdoc?


You can step through the shader code and inspect the texture in video memory before, during and after the Compute shader. 

#5248757 sRGB on diffuse textures or gbuffer color texture?

Posted by KaiserJohan on 25 August 2015 - 07:42 AM



If you assume that linear->sRGB and sRGB->linear conversion is done by dedicated circuits, and is therefore free, then it's not something to worry about.
Mark source texture views, GBuffer render-target, GBuffer texture view, and backbuffer render-target as sRGB:
(Textures)-- sRGB->Linear --[To GBuffer Shader]-- Linear->sRGB --(GBuffer)-- sRGB->Linear --[Lighting shader]-- no change --(Lighting buffer)-- no change --[Tonemap]-- Linear->sRGB --(Backbuffer)
Mark source texture views and GBuffer render-target as linear (even though they're not!), and mark the GBuffer texture view and backbuffer render-target as sRGB:
(Textures)-- no change --[To GBuffer Shader]-- no change --(GBuffer)-- sRGB->Linear --[Lighting shader]-- no change --(Lighting buffer)-- no change --[Tonemap]-- Linear->sRGB --(Backbuffer)


Awesome, that was exactly what I was looking for.


As long as it's guaranteed to be a free operation (maybe only older cards don't have this feature? or newer but cheaper ones?), the first option seems clearer / less deceptive.
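For reference, marking the views as sRGB in D3D11 is just a matter of picking the _SRGB format when creating them. A minimal sketch of the first option (illustrative names; it assumes the underlying resources were created as DXGI_FORMAT_R8G8B8A8_TYPELESS so both sRGB and non-sRGB views are legal; the GBuffer SRV and backbuffer RTV would be marked the same way):

#include <d3d11.h>

void CreateSrgbViews(ID3D11Device* device,
                     ID3D11Texture2D* diffuseTexture, ID3D11ShaderResourceView** diffuseSRV,
                     ID3D11Texture2D* gbufferTexture, ID3D11RenderTargetView** gbufferRTV)
{
    D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
    srvDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM_SRGB;   // sRGB -> linear when sampling
    srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
    srvDesc.Texture2D.MipLevels = UINT(-1);
    device->CreateShaderResourceView(diffuseTexture, &srvDesc, diffuseSRV);

    D3D11_RENDER_TARGET_VIEW_DESC rtvDesc = {};
    rtvDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM_SRGB;   // linear -> sRGB when writing
    rtvDesc.ViewDimension = D3D11_RTV_DIMENSION_TEXTURE2D;
    device->CreateRenderTargetView(gbufferTexture, &rtvDesc, gbufferRTV);
}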

#5247393 Quaternions for FPS Camera?

Posted by KaiserJohan on 18 August 2015 - 08:39 AM



Orientation returns a Matrix (x,y,z,w), which is two Quaternions multiplied together.


Yes; it takes the resulting quaternion and converts it to a rotation matrix.




 I'm guessing the * has been overloaded to support multiplying Quaternions?






The second part is a little difficult to understand. I can't find a resource to define what the translate method takes in and returns


It applies a translation component (the camera's position vector, mTranslation) to the input matrix (an identity matrix in this case) and returns the result.

See GLM docs http://glm.g-truc.net/0.9.2/api/a00245.html#ga4683c446c8432476750ade56f2537397
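For reference (not from the thread), a tiny example of what glm::translate does:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// glm::translate multiplies the given matrix with a translation matrix built from the vector
glm::mat4 t = glm::translate(glm::mat4(1.0f), glm::vec3(1.0f, 2.0f, 3.0f));
// 't' now moves positions by (1, 2, 3) when applied to them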




Would mTranslation be the vector my camera was currently located at?



#5247359 Quaternions for FPS Camera?

Posted by KaiserJohan on 18 August 2015 - 04:43 AM

Very simple using GLM (http://glm.g-truc.net/0.9.7/index.html)


Here's how to build a rotation matrix using quaternions and vertical/horizontal angles only:

// angles in radians
float mVerticalAngle;
float mHorizontalAngle;

// needs <glm/gtc/quaternion.hpp> (glm::quat, glm::angleAxis) and <glm/gtx/quaternion.hpp> (glm::toMat4)
glm::mat4 Camera::Orientation() const
{
    glm::quat rotation(glm::angleAxis(mVerticalAngle, glm::vec3(1.0f, 0.0f, 0.0f)));
    rotation = rotation * glm::angleAxis(mHorizontalAngle, glm::vec3(0.0f, 1.0f, 0.0f));

    return glm::toMat4(rotation);
}

Multiply this with the FPS camera's translation matrix and you have your view matrix:

glm::vec3 mTranslation;     // the camera's world-space position

// needs <glm/gtc/matrix_transform.hpp> for glm::translate
glm::mat4 Camera::GetCameraTransform() const
{
    return Orientation() * glm::translate(glm::mat4(1.0f), -mTranslation);
}

#5240445 Simulating the sun

Posted by KaiserJohan on 15 July 2015 - 02:46 AM

Here's what I've tried right now:

void Sun::Update()
{
    if (!mIsMoving)
        return;

    const auto now = Clock::now();
    const auto timeDiff = now - mTimeStart;
    // Intervall is a std::chrono duration typedef defined elsewhere; count is the fraction of the full cycle elapsed
    const float count = std::chrono::duration_cast<Intervall>(timeDiff).count();

    assert(mNightDayRatio <= 1.0f);
    if (count >= 0.5f + 0.5f * mNightDayRatio)
        mTimeStart = now;

    const float angle = count * glm::two_pi<float>() - glm::pi<float>();

    mDirLight.mLightDirection = Vec3(glm::cos(angle), glm::sin(angle), glm::sin(angle));
}

The issue is that at low or high angles the shadows get elongated / really long. Is there an easy remedy for this? I want to keep it simple; it doesn't have to be realistic, only believable.

#5236616 Uploading texture data to cubetexture

Posted by KaiserJohan on 24 June 2015 - 02:26 PM

I'm puzzled why this isn't working.


I'm trying to add texture data to each of the cube texture's faces. For some reason, only the first (+X) face works. The MSDN documentation is quite sparse, but it looks like this should do the trick:

// mip-level 0 data
uint32_t sizeWidth = textureWidth * sizeof(uint8_t) * 4;    // row pitch in bytes (R8G8B8A8)
if (isCubeTexture)
{
    for (uint32_t index = 0; index < gCubemapNumTextures; ++index)
    {
        const uint32_t subResourceID = D3D11CalcSubresource(0, index, 1);
        context->UpdateSubresource(mTexture, subResourceID, NULL,
                                   &textureData.at(sizeWidth * textureHeight * index), sizeWidth, 0);
    }
}

When debugging and looking at the faces, it's all just black except the first face. So obviously I am doing something wrong; how do you properly upload cube texture data?


EDIT: R8G8B8A8 texture btw
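For what it's worth, another way to get the data in (a sketch with illustrative names, not the code from the thread) is to hand all six faces to CreateTexture2D as initial data, one D3D11_SUBRESOURCE_DATA entry per array slice:

#include <d3d11.h>
#include <cstdint>

void CreateCubeTexture(ID3D11Device* device, const uint8_t* textureData,
                       uint32_t textureWidth, uint32_t textureHeight,
                       ID3D11Texture2D** outTexture)
{
    const uint32_t rowPitch = textureWidth * 4;             // R8G8B8A8
    const uint32_t facePitch = rowPitch * textureHeight;

    // one entry per subresource: MipLevels (1) * ArraySize (6)
    D3D11_SUBRESOURCE_DATA initData[6];
    for (uint32_t face = 0; face < 6; ++face)
    {
        initData[face].pSysMem = textureData + facePitch * face;
        initData[face].SysMemPitch = rowPitch;
        initData[face].SysMemSlicePitch = 0;
    }

    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width = textureWidth;
    desc.Height = textureHeight;
    desc.MipLevels = 1;
    desc.ArraySize = 6;
    desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.Usage = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
    desc.MiscFlags = D3D11_RESOURCE_MISC_TEXTURECUBE;

    device->CreateTexture2D(&desc, initData, outTexture);
}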

#5226329 Visual Studio and HLSL compiling

Posted by KaiserJohan on 29 April 2015 - 01:42 PM

I've got VS 2013 Pro and currently have one HLSL source file for each PCF kernel, for example pcf2x2.hlsl, pcf3x3.hlsl, pcf5x5.hlsl, ...

VS compiles them automatically at build time, but it leads to some code redundancy and is unpleasant to work with. It would be much better to simply have one source file and recompile it with different macros - but how does one make VS recompile the same file several times?

#5214680 NVIDIA NSight vs Visual Studio 2013 Graphics Debugger?

Posted by KaiserJohan on 05 March 2015 - 03:02 AM

Try RenderDoc (http://cryengine.com/renderdoc). Of the three, it is the one I keep going back to. Free, fast, awesome.

#5208079 Light-weight render queues?

Posted by KaiserJohan on 01 February 2015 - 02:25 PM


Is it a sound idea to do view frustum culling for all 6 faces of a point light? For example, my RenderablePointLight has a collection of meshes for each face.


Is this about a shadow-casting point light which renders a shadow map for each face?


If your culling code has to walk the entire scene, or a hierarchical acceleration structure (such as a quadtree or octree), it will likely be faster to do one spherical culling query first to get all the objects associated with any face of the point light, then test those against the individual face frustums. Profiling will reveal if that's the case.


If it's not a shadow-casting light, you shouldn't need to bother with faces, but just do a single spherical culling query to find the lit objects.



Yeah, it's for shadowmapping. How do you do a spherical culling query?
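For reference, a spherical culling query usually boils down to a sphere-overlap test against each object's bounding volume. A rough sketch assuming sphere bounds (illustrative names, not from the thread):

#include <glm/glm.hpp>
#include <vector>

struct BoundingSphere { glm::vec3 center; float radius; };

// Collect everything whose bounding sphere overlaps the light's sphere of influence.
std::vector<size_t> SphereCull(const std::vector<BoundingSphere>& bounds,
                               const glm::vec3& lightPos, float lightRadius)
{
    std::vector<size_t> visible;
    for (size_t i = 0; i < bounds.size(); ++i)
    {
        const float dist = glm::distance(bounds[i].center, lightPos);
        if (dist <= bounds[i].radius + lightRadius)
            visible.push_back(i);
    }
    return visible;
}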

#5174629 So... C++14 is done :O

Posted by KaiserJohan on 19 August 2014 - 12:43 AM


 I still think the range-based loops were unnecessary. 


Can you elaborate more? Personally I feel it is a lot more readable.

#5172915 Point light shadowmapping

Posted by KaiserJohan on 11 August 2014 - 03:15 PM

Fixed it... in case it helps anyone else:


The TextureCube follows a left-handed coordinate system in DirectX, while it is right-handed in OpenGL. I am using right-handed matrices due to GLM, so it required flipping the +/-Z cube faces:

    const Vec3 CUBEMAP_DIRECTION_VECTORS[DX11PointLightPass::TEXTURE_CUBE_NUM_FACES] = { Vec3(1.0f, 0.0f, 0.0f), Vec3(-1.0f, 0.0f, 0.0f), Vec3(0.0f, 1.0f, 0.0f),
                                                                                         Vec3(0.0f, -1.0f, 0.0f), Vec3(0.0f, 0.0f, -1.0f), Vec3(0.0f, 0.0f, 1.0f) };

    const Vec3 CUBEMAP_UP_VECTORS[DX11PointLightPass::TEXTURE_CUBE_NUM_FACES] = { Vec3(0.0f, 1.0f, 0.0f), Vec3(0.0f, 1.0f, 0.0f), Vec3(0.0f, 0.0f, 1.0f),
                                                                                  Vec3(0.0f, 0.0f, -1.0f), Vec3(0.0f, 1.0f, 0.0f), Vec3(0.0f, 1.0f, 0.0f) };

Likewise, the projection matrix is different between OpenGL and DirectX, as the NDC depth range is different. I altered the old VectorToDepthValue function I used in OpenGL to this:

float VectorToDepthValue(float3 Vec)
{
    float3 AbsVec = abs(Vec);
    float LocalZcomp = max(AbsVec.x, max(AbsVec.y, AbsVec.z));

    const float f = 100.0;
    const float n = 0.1;

    float NormZComp = -(f / (n - f) - (n * f) / (n - f) / LocalZcomp);

    return NormZComp;
}

Although I had to add an extra instruction to the pixel shader:

    float3 cubemapDir = (float3)(worldPosition - gLightPosition);
    cubemapDir.z = -cubemapDir.z;       // TODO: any way to remove this extra instruction?
    float storedDepth = gShadowmap.Sample(gShadowmapSampler, cubemapDir).r;
    float visibility = 0.0;
    if (storedDepth + 0.0001 > VectorToDepthValue(cubemapDir))
        visibility = 1.0;

Any input on how to optimize this away would be awesome.

#5172354 DXGI leak warnings

Posted by KaiserJohan on 08 August 2014 - 03:30 PM

Nevermind, I fixed it. 


In case it helps anyone else: it turns out the Get() methods such as OMGetDepthStencilState() also increase the ref count - and now that I think about it, of course they do... those pointers have to be released as well.
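A sketch of the pattern (illustrative, not the original code):

#include <d3d11.h>

void QueryDepthStencilState(ID3D11DeviceContext* context)
{
    ID3D11DepthStencilState* dsState = nullptr;
    UINT stencilRef = 0;

    // the getter AddRefs the returned interface...
    context->OMGetDepthStencilState(&dsState, &stencilRef);

    // ... use dsState ...

    // ... so it must be Released, or the ref count never drops back down
    if (dsState)
        dsState->Release();
}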

#5171722 Cubemap texture as depth buffer (shadowmapping)

Posted by KaiserJohan on 05 August 2014 - 04:25 PM

I'm converting my OpenGL 4 renderer to DirectX 11. Once I got over making a simple textured triangle, the rest went very easily. I now have a deferred shading setup with ambient/point/directional lights and am now reimplementing the shadow mapping parts, starting with point lights (omnidirectional shadowmaps).


Here's how I create my texture cubemap:

// create shadowmap texture/view/srv
D3D11_TEXTURE2D_DESC depthBufferDesc;
ZeroMemory(&depthBufferDesc, sizeof(D3D11_TEXTURE2D_DESC));
depthBufferDesc.ArraySize = 6;
depthBufferDesc.Format = DXGI_FORMAT_D32_FLOAT;     // potential issue? UNORM instead?
depthBufferDesc.Width = shadowmapSize;
depthBufferDesc.Height = shadowmapSize;
depthBufferDesc.MipLevels = 1;
depthBufferDesc.SampleDesc.Count = 1;
depthBufferDesc.Usage = D3D11_USAGE_DEFAULT;
depthBufferDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL;
depthBufferDesc.MiscFlags = D3D11_RESOURCE_MISC_TEXTURECUBE;
DXCALL(device->CreateTexture2D(&depthBufferDesc, NULL, &mShadowmapTexture));
DXCALL(device->CreateDepthStencilView(mShadowmapTexture, NULL, &mShadowmapView));
DXCALL(device->CreateShaderResourceView(mShadowmapTexture, NULL, &mShadowmapSRV));

I'm currently at a loss as to how to select which face of the cubemap to use as the depth buffer. In OpenGL I would iterate 6 times and select the cubemap face with:


But in DirectX 11, cubemaps seem to be defined as a Texture2D with an array size of 6. Looking at OMSetRenderTargets(), it doesn't specify anywhere which cubemap face to use either.


Any pointers on how to select the proper face to use as the depth texture?


I've been browsing around like mad on MSDN, and it is fine as a reference for structures and functions, but awful for anything else :(
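For reference, the usual way to get at a single face is a depth-stencil view per array slice via D3D11_DEPTH_STENCIL_VIEW_DESC. A sketch (illustrative, not the code from the thread); note it assumes the texture is created with a typeless format such as DXGI_FORMAT_R32_TYPELESS so that a D32_FLOAT DSV and an R32_FLOAT SRV can both be created on it:

#include <d3d11.h>

// One depth-stencil view per cube face, selected through the array-slice fields.
// Assumes mShadowmapTexture was created with ArraySize = 6,
// MiscFlags = D3D11_RESOURCE_MISC_TEXTURECUBE and a typeless format.
void CreateFaceViews(ID3D11Device* device, ID3D11Texture2D* mShadowmapTexture,
                     ID3D11DepthStencilView* faceViews[6])
{
    D3D11_DEPTH_STENCIL_VIEW_DESC dsvDesc = {};
    dsvDesc.Format = DXGI_FORMAT_D32_FLOAT;
    dsvDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2DARRAY;
    dsvDesc.Texture2DArray.MipSlice = 0;
    dsvDesc.Texture2DArray.ArraySize = 1;

    for (UINT face = 0; face < 6; ++face)
    {
        dsvDesc.Texture2DArray.FirstArraySlice = face;
        device->CreateDepthStencilView(mShadowmapTexture, &dsvDesc, &faceViews[face]);
    }
}

Each per-face view can then be bound with OMSetRenderTargets as usual, and for sampling the whole cube the SRV would use D3D11_SRV_DIMENSION_TEXTURECUBE with DXGI_FORMAT_R32_FLOAT.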