KaiserJohan

Member Since 08 Apr 2011
Offline Last Active Today, 08:50 AM

Topics I've Started

Difference between SDSM and PSSM?

Today, 08:38 AM

I've been implementing PSSM (http://http.developer.nvidia.com/GPUGems3/gpugems3_ch10.html) so far for cascaded shadow mapping. I've been reading a bit about SDSM (http://visual-computing.intel-research.net/art/publications/sdsm/sampleDistributionShadowMaps_SIGGRAPH2010_notes.pdf) recently, and I'm not sure I understand the difference between them.

 

From what I gather, PSSM uses a logarithmic division of the splits with static near/far Z bounds (such as the near/far view distance), while SDSM first transforms all visible objects into the camera's view space, finds the min/max Z and uses them in the same (logarithmic) split formula as PSSM.

 

What I find confusing, though, is that the PSSM article then builds a crop matrix in section 10-2 which takes the smaller of the split frustum's AABB and the visible objects' combined AABB... resulting in the same tight frustum as SDSM produces...? Isn't that the exact same thing?

 

Another thing: tightening the frustums in the X/Y dimensions doesn't seem to be mentioned in the SDSM article at all?
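
For reference, this is roughly how I compute the split distances at the moment (a minimal sketch of the "practical split scheme" from the PSSM chapter; the function name, the lambda blend factor and the near/far values are just parameters I picked for illustration):

#include <cmath>
#include <cstdint>
#include <vector>

// Practical split scheme from the PSSM chapter: blend a logarithmic and a uniform
// distribution of the split planes. lambda == 1.0 gives a purely logarithmic split.
std::vector<float> ComputeSplitDistances(float nearZ, float farZ, uint32_t numSplits, float lambda)
{
	std::vector<float> splits(numSplits + 1);
	splits[0] = nearZ;
	splits[numSplits] = farZ;
	for (uint32_t i = 1; i < numSplits; i++)
	{
		const float fraction = static_cast<float>(i) / numSplits;
		const float logSplit = nearZ * std::pow(farZ / nearZ, fraction);
		const float uniformSplit = nearZ + (farZ - nearZ) * fraction;
		splits[i] = lambda * logSplit + (1.0f - lambda) * uniformSplit;
	}
	return splits;
}

My understanding is that SDSM would basically feed the min/max Z it finds into nearZ/farZ here, but that's exactly the part I'm unsure about.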


GDC videos?

Yesterday, 04:21 AM

Not sure if this has been posted elsewhere (if so, I couldn't find it), but is there any publicly available compilation of recordings of developer GDC tech talks? There doesn't seem to be much on YouTube.


Directional light shadow mapping

10 March 2015 - 05:38 PM

I am doing cascaded shadow mapping and having some issues.

 

I have a scene like this (the yellow light is from a point light; there's also a chair just outside the view to the left):

 

[Attached image: 1.png]

 

If I do not use a light view matrix at all, i.e. I only use an orthographic projection matrix when rendering the shadow maps, it looks OK. The cascade order is top left, top right, bottom left, bottom right.

 

[Attached image: asd.png]

 

Now if I use an orthographic projection matrix together with a rotation matrix (as in the code below) based on the light's direction, it instead looks like this:

 

[Attached image: asd2.png]

 

Which is not correct; for example, the boxes are completely missing, even though they are encompassed by the orthographic projection.

 

The resulting shadows then look like this:

 

[Attached image: asd3.png]

 

Some parts of the shadows are correct, but there are major artifacts.

 

Here's how I build the matrix:

Mat4 CreateDirLightVPMatrix(const CameraFrustrum& cameraFrustrum, const Vec3& lightDir /* == Vec3(-1.0f, -1.0f, -1.0f) in this example */)
{
	// "cameraFrustrum" contains the 8 corners of the camera's frustum in world space
	float maxZ = cameraFrustrum[0].z, minZ = cameraFrustrum[0].z;
	float maxX = cameraFrustrum[0].x, minX = cameraFrustrum[0].x;
	float maxY = cameraFrustrum[0].y, minY = cameraFrustrum[0].y;
	for (uint32_t i = 1; i < 8; i++)
	{
		if (cameraFrustrum[i].z > maxZ) maxZ = cameraFrustrum[i].z;
		if (cameraFrustrum[i].z < minZ) minZ = cameraFrustrum[i].z;
		if (cameraFrustrum[i].x > maxX) maxX = cameraFrustrum[i].x;
		if (cameraFrustrum[i].x < minX) minX = cameraFrustrum[i].x;
		if (cameraFrustrum[i].y > maxY) maxY = cameraFrustrum[i].y;
		if (cameraFrustrum[i].y < minY) minY = cameraFrustrum[i].y;
	}

	Vec3 right = glm::normalize(glm::cross(glm::normalize(lightDir), Vec3(0.0f, 1.0f, 0.0f)));
	Vec3 up = glm::normalize(glm::cross(glm::normalize(lightDir), right));

	Mat4 lightViewMatrix = Mat4(Vec4(right, 0.0f),
								Vec4(-up, 0.0f),		// why do I need to negate this btw?
								Vec4(lightDir, 0.0f),
								Vec4(0.0f, 0.0f, 0.0f, 1.0f));

	return OrthographicMatrix(minX, maxX, maxY, minY, maxZ, minZ) * lightViewMatrix;
}

It was my understanding (based on topics like https://www.opengl.org/discussion_boards/showthread.php/155674-Shadow-maps-for-infinite-light-sources) that all I needed for directional light shadow mapping was an orthographic projection matrix and a rotation matrix (with no translation component, since a directional light has no position, and as I've understood it, translation doesn't matter anyway since the projection is orthographic, i.e. there is no perspective).

 

Then what is causing the errors in the 3rd image? Is there something I am missing?
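
For reference, here is the other construction I've seen suggested, where glm::lookAt builds the whole light view matrix instead of the hand-rolled basis above. This is just a sketch: CreateDirLightViewMatrix, frustumCenter and backOffDistance are names/parameters I made up, not something from my actual code.

// Sketch only: build the light view matrix with glm::lookAt, looking along the light
// direction towards some reference point (e.g. the centre of the cascade's frustum).
// frustumCenter and backOffDistance are hypothetical parameters.
Mat4 CreateDirLightViewMatrix(const Vec3& frustumCenter, const Vec3& lightDir, float backOffDistance)
{
	const Vec3 normLightDir = glm::normalize(lightDir);
	const Vec3 lightPosition = frustumCenter - normLightDir * backOffDistance;

	// glm::lookAt builds the orthonormal basis and the translation part in one go
	return glm::lookAt(lightPosition, frustumCenter, Vec3(0.0f, 1.0f, 0.0f));
}

Is something like that actually necessary for a directional light, or should the plain rotation matrix above be enough?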


Directional light position in shadow mapping

04 March 2015 - 04:48 PM

I've got basic cascaded shadow mapping working, but there's a lot to improve. Something that's nagging me is this:

 

Technically, the light must have a position when rendering the shadow maps. But it is not supposed to have one. So how do you determine what position to use when rendering depth for each of the cascade frustums?

The way I imagine it (and do currently) is to simply offset it some distance from the camera's position. It must be trickier than that though, because this does not take geometry into account. Do you gather all the geometry in each cascade to find the max height, and then offset from that? If so, by how much?

 

Generally, the resources I've found on the net simply offset it a bit from the camera, but it must be more complex than that. Any pointers on how to do this?
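
To make it concrete, this is the kind of thing I mean by "offsetting it a bit" (just a sketch using the same CameraFrustrum/Vec3 types as in my other snippet, i.e. the 8 world-space corners of a cascade slice; ComputeDirLightPosition and casterMargin are names I made up):

#include <algorithm>	// for std::max

// Sketch: place the light behind the cascade slice along its own direction, far enough
// back that a sphere enclosing the slice (plus some margin for off-screen casters)
// fits in front of it. casterMargin is a hypothetical fudge value.
Vec3 ComputeDirLightPosition(const CameraFrustrum& cascadeCorners, const Vec3& lightDir, float casterMargin)
{
	// centre of the 8 world-space corners of this cascade's frustum slice
	Vec3 center(0.0f);
	for (uint32_t i = 0; i < 8; i++)
		center += cascadeCorners[i];
	center /= 8.0f;

	// radius of a sphere enclosing the slice
	float radius = 0.0f;
	for (uint32_t i = 0; i < 8; i++)
		radius = std::max(radius, glm::length(cascadeCorners[i] - center));

	// back the light off along its direction far enough to cover the whole slice
	return center - glm::normalize(lightDir) * (radius + casterMargin);
}

But that still doesn't account for shadow casters outside the slice, which is what I'm wondering about.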


EDIT: another, not entirely unrelated question: how do you do culling for directional lights? Do you do view frustum culling for each cascade, every frame?


A more data-oriented tree structure

14 February 2015 - 07:57 AM

After doing some profiling, I find that a lot of time is spent culling and traversing my scene graph. Each model has a hierarchy of nodes, and each node has a collection of meshes. The AABB of a node encompasses all of its children plus all of its meshes. The transform is relative to its parent. This is what it looks like OO-style:

class Model
{
public:
	// ...

	const Node mRootNode;
	const std::string mName;

	
private:
	//
};
class Node
{
public:
	// ...

	std::vector<Mesh>& GetMeshes();
	std::vector<Node>& GetChildNodes();


	const Mat4 mTransform;

	const Vec3 mAABBCenter;
	const Vec3 mAABBExtent;


private:
	std::vector<Mesh> mMeshes;
	std::vector<Node> mChildNodes;
};
class Mesh
{
public:
        // ...	

	const MeshID mMeshID;

	const Vec3 mAABBCenter;
	const Vec3 mAABBExtent;


private:
	// data
};

I've read about data-oriented design, but I'm puzzled about how best to approach this. It is clear that the transform (Mat4) and the AABB (2x Vec3) are both accessed very often, for example during view frustum culling, so rather than a hierarchy of vectors it seems I should just use a flat vector.
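
Something like this is what I'm imagining (just a sketch of a flattened layout; NodePool, CullNodes and the index scheme are all made-up names, and it assumes <vector>, <cstdint> and the Mat4/Vec3 typedefs from the code above):

// Sketch: the data the culling pass touches lives in flat, contiguous arrays, and the
// hierarchy is kept as indices into those arrays instead of nested std::vectors.
struct NodePool
{
	std::vector<Mat4> mWorldTransforms;		// one entry per node, shared index with the AABB arrays
	std::vector<Vec3> mAABBCenters;
	std::vector<Vec3> mAABBExtents;
	std::vector<uint32_t> mParentIndices;	// hierarchy kept as indices rather than nested vectors
};

// culling becomes a linear sweep over the arrays instead of a recursive tree traversal
void CullNodes(const NodePool& nodes, std::vector<uint32_t>& visibleNodeIndices)
{
	const uint32_t numNodes = static_cast<uint32_t>(nodes.mAABBCenters.size());
	for (uint32_t i = 0; i < numNodes; i++)
	{
		// frustum-vs-AABB test against mAABBCenters[i]/mAABBExtents[i] would go here
		visibleNodeIndices.push_back(i);
	}
}

But I'm not sure how to map the Model/Node/Mesh ownership onto that without losing the hierarchy.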

 

I'm sure this problem must have been encountered a dozen times before. Any pointers on how to refactor this while still keeping the Model/Node/Mesh structure?

