
# World Generation

This is a simple world made of chunks of 32³ voxels. The world itself is not static: as you can see at the top of the image, chunks only exist if they contain at least one solid voxel. In other words, the world is dynamic and can hold as many chunks as the player's computer can handle.

In this particular screenshot, the world is generated with a simple vectorial gradient equation that I came up with in high school, though I suppose it already existed. Here's the basic equation:

$$\textit{voxel value} = \frac{ \overrightarrow{\textit{position} } \cdot \overrightarrow{ \textit{gradient}}}{ \overrightarrow{\textit{gradient} } \cdot \overrightarrow{ \textit{gradient}} }$$

That's the equation I came up with and remembered. The denominator, gradient · gradient, can be simplified to the squared magnitude (length) of the gradient:

$$\textit{voxel value} = \frac{ \overrightarrow{\textit{position} } \cdot \overrightarrow{ \textit{gradient}}}{ \left \| \overrightarrow{ \textit{gradient}} \right \| ^{2} }$$

In conclusion, this defines an N-dimensional gradient that maps a position to a single decimal number.
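As a minimal sketch of the formula above (the `GradientDemo` class is hypothetical, not part of the engine), the voxel value is just the projection of the position onto the gradient, normalized by the gradient's squared length:

```java
// Minimal sketch of the gradient formula above (hypothetical class, not part of the engine).
// The voxel value is the projection of the position onto the gradient vector,
// divided by the gradient's squared length.
public final class GradientDemo {

    // dot(position, gradient) / dot(gradient, gradient), in 3D
    public static double voxelValue(double px, double py, double pz,
                                    double gx, double gy, double gz) {
        double dot = px * gx + py * gy + pz * gz;
        double gradientLengthSquared = gx * gx + gy * gy + gz * gz;
        return dot / gradientLengthSquared;
    }

    public static void main(String[] args) {
        // A purely vertical gradient of length 32 ramps the value from 0 to 1
        // as the position's Y goes from 0 to 32.
        System.out.println(voxelValue(0, 16, 0, 0, 32, 0)); // 0.5
    }
}
```

Thresholding this scalar (for example, solid below 0.5, air above) is one way such a gradient can drive chunk generation.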

# Voxel Traversal Algorithm

As for the voxel traversal algorithm, I decided to go with the most popular one, by John Amanatides and Andrew Woo. As much as I like research papers, I also despise them because they lack simplicity, examples and full source code. That's why I had to google implementations of it, and later on I remembered that I had actually already implemented this algorithm a few years ago.

## Summary

The simplest way to understand the algorithm is to imagine a line in a 3D world made of blocks. Which blocks does the line touch? And in which order are they touched, based on the line's start and end positions? The goal is to iteratively traverse the blocks that are touched by the line.

More simply, the logic of the algorithm can be summed up by making a distinction between the components of the ray's direction. Those three components define the importance of their axes in terms of how many blocks need to be traversed in which direction. Think of it with integers: two people are running towards a goal; the fastest runs at 5 km/h, while the slowest runs at 1 km/h. For each time step, i.e. an hour, how many kilometers has each runner traveled? The ratio is 5 : 1, so, to merge the analogy, at each step the ray would traverse 5 blocks on the X axis for every block on the Y axis. Of course, it's more complicated than that, as there are more parameters to it, especially because of edge cases, such as what to do when the components are equal to one another.
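The runner analogy can be sketched in a few lines (the `StepRatioDemo` class is a hypothetical demo, not the engine code): a larger direction component means boundaries on that axis are crossed more often, which is exactly what the per-axis tDelta values encode later on.

```java
// Sketch of the runner analogy above (hypothetical demo, not the engine code):
// for a direction with components (5, 1), the ray crosses X boundaries five
// times as often as Y boundaries, because the t needed to cross one cell
// (tDelta) is five times smaller on the X axis.
public final class StepRatioDemo {

    // Number of whole cells crossed along one axis after time t,
    // for a speed (direction component) of d cells per unit of t.
    public static int cellsCrossed(double d, double t) {
        return (int) Math.floor(d * t);
    }

    public static void main(String[] args) {
        // The fast runner (X axis) at 5 cells per step, the slow one (Y axis) at 1.
        System.out.println(cellsCrossed(5.0, 1.0)); // 5
        System.out.println(cellsCrossed(1.0, 1.0)); // 1
    }
}
```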

## Implementation

The first thing to know about my implementation is that each voxel index is centered within the voxel itself. In other words, the voxel at position (0, 0, 0) starts at (-0.5, -0.5, -0.5) inclusively and ends at (0.5, 0.5, 0.5) exclusively, for a cube extent of 1, naturally. The original research paper doesn't work that way: its voxels start at the lowest corner, i.e. the voxel spans from (0, 0, 0) to (1, 1, 1). Without any further delay, here is the VoxelRay class:

import com.cheesygames.colonysimulation.math.MathExt;
import com.cheesygames.colonysimulation.math.vector.Vector3i;
import com.cheesygames.colonysimulation.world.World;
import com.jme3.math.Vector3f;
import com.jme3.scene.plugins.blender.math.Vector3d;

import java.util.function.Function;

/**
* Ray for ray casting inside a voxel world. Each voxel is considered as a cube within this ray. A ray consists of a starting position, a direction and a length. The voxel distance
* is computed once the method {@link #rayCastLocal(double, Function, Vector3i)} or {@link #rayCast(double, Function)} is called.
*/
public class VoxelRay {

private Vector3d m_start;
private Vector3d m_offsettedStart;
private Vector3d m_direction;
private double m_length;
private int m_voxelDistance;
private boolean m_wasStopped;

/**
* Constructs an invalid {@link VoxelRay} as its direction and length are null. The setters must be called after constructing a {@link VoxelRay} with this constructor.
*/
public VoxelRay() {
this.m_start = new Vector3d();
this.m_offsettedStart = new Vector3d();
this.m_direction = new Vector3d();
this.m_length = 0;
}

/**
* Constructs a {@link VoxelRay} from two points: start and end.
*
* @param start The absolute starting position of the ray.
* @param end   The absolute ending position of the ray.
*/
public VoxelRay(Vector3d start, Vector3d end) {
this.m_start = new Vector3d(start);
this.m_offsettedStart = new Vector3d();
this.m_direction = end.subtract(start);
this.m_length = m_direction.length();
this.m_direction.normalizeLocal();
}

/**
* Constructs a {@link VoxelRay} from two points: start and end.
*
* @param start The absolute starting position of the ray.
* @param end   The absolute ending position of the ray.
*/
public VoxelRay(Vector3f start, Vector3f end) {
this.m_start = new Vector3d(start);
this.m_offsettedStart = new Vector3d();
this.m_direction = new Vector3d(end).subtractLocal(m_start);
this.m_length = m_direction.length();
this.m_direction.normalizeLocal();
}

/**
* Constructs a {@link VoxelRay} from a start, a direction and a length.
*
* @param start     The absolute starting position of the ray.
* @param direction The direction of the ray. Must be normalized.
* @param length    The length of the ray.
*/
public VoxelRay(Vector3d start, Vector3d direction, double length) {
this.m_start = new Vector3d(start);
this.m_offsettedStart = new Vector3d();
this.m_direction = new Vector3d(direction);
this.m_length = length;
}

/**
* Constructs a {@link VoxelRay} from a start, a direction and a length.
*
* @param start     The absolute starting position of the ray.
* @param direction The direction of the ray. Must be normalized.
* @param length    The length of the ray.
*/
public VoxelRay(Vector3f start, Vector3f direction, float length) {
this.m_start = new Vector3d(start);
this.m_offsettedStart = new Vector3d();
this.m_direction = new Vector3d(direction);
this.m_length = length;
}

/**
* Casts the ray from its starting position towards its direction whilst keeping in mind its length. A lambda parameter is supplied and called each time a voxel is traversed.
* This allows the lambda to stop the algorithm's loop at any time.
*
* @param onTraversingVoxel The operation to execute when traversing a voxel. This method is called the same number of times as the value of {@link #getVoxelDistance()}. The
*                          supplied {@link Vector3i} parameter is not a new instance but a local instance, so it is a reference. The return value {@link Boolean} defines if
*                          the algorithm should stop.
*
*/
public void rayCast(Function<Vector3i, Boolean> onTraversingVoxel) {
rayCastLocal(World.VOXEL_HALF_EXTENT, onTraversingVoxel, new Vector3i());
}

/**
* Casts the ray from its starting position towards its direction whilst keeping in mind its length. A lambda parameter is supplied and called each time a voxel is traversed.
* This allows the lambda to stop the algorithm's loop at any time.
*
* @param voxelHalfExtent   The half extent (radius) of a voxel.
* @param onTraversingVoxel The operation to execute when traversing a voxel. This method is called the same number of times as the value of {@link #getVoxelDistance()}. The
*                          supplied {@link Vector3i} parameter is not a new instance but a local instance, so it is a reference. The return value {@link Boolean} defines if
*                          the algorithm should stop.
*
*/
public void rayCast(double voxelHalfExtent, Function<Vector3i, Boolean> onTraversingVoxel) {
rayCastLocal(voxelHalfExtent, onTraversingVoxel, new Vector3i());
}

/**
* Casts the ray from its starting position towards its direction whilst keeping in mind its length. A lambda parameter is supplied and called each time a voxel is traversed.
* This allows the lambda to stop the algorithm's loop at any time.
* <p>
* This method is local because the parameter voxelIndex is locally changed to avoid creating a new instance of {@link Vector3i}.
*
* @param onTraversingVoxel The operation to execute when traversing a voxel. This method is called the same number of times as the value of {@link #getVoxelDistance()}. The
*                          supplied {@link Vector3i} parameter is not a new instance but a local instance, so it is a reference. The return value {@link Boolean} defines if
*                          the algorithm should stop.
* @param voxelIndex        The voxel index to locally modify in order to traverse voxels. This parameter exists simply to avoid creating a new {@link Vector3i} instance.
*
*/
public void rayCastLocal(Function<Vector3i, Boolean> onTraversingVoxel, Vector3i voxelIndex) {
rayCastLocal(World.VOXEL_HALF_EXTENT, onTraversingVoxel, voxelIndex);
}

/**
* Casts the ray from its starting position towards its direction whilst keeping in mind its length. A lambda parameter is supplied and called each time a voxel is traversed.
* This allows the lambda to stop the algorithm's loop at any time.
* <p>
* This method is local because the parameter voxelIndex is locally changed to avoid creating a new instance of {@link Vector3i}.
*
* @param voxelHalfExtent   The half extent (radius) of a voxel.
* @param onTraversingVoxel The operation to execute when traversing a voxel. This method is called the same number of times as the value of {@link #getVoxelDistance()}. The
*                          supplied {@link Vector3i} parameter is not a new instance but a local instance, so it is a reference. The return value {@link Boolean} defines if
*                          the algorithm should stop.
* @param voxelIndex        The voxel index to locally modify in order to traverse voxels. This parameter exists simply to avoid creating a new {@link Vector3i} instance.
*
*/
public void rayCastLocal(double voxelHalfExtent, Function<Vector3i, Boolean> onTraversingVoxel, Vector3i voxelIndex) {
assert !Double.isNaN(voxelHalfExtent);

assert !Double.isNaN(m_start.x);
assert !Double.isNaN(m_start.y);
assert !Double.isNaN(m_start.z);

assert !Double.isNaN(m_direction.x);
assert !Double.isNaN(m_direction.y);
assert !Double.isNaN(m_direction.z);

assert !Double.isNaN(m_length);

m_wasStopped = false;
final double voxelExtent = voxelHalfExtent * 2;

// Offset the start so that the centered voxel indices can be computed with a simple floor division.
m_offsettedStart.set(m_start).addLocal(voxelHalfExtent, voxelHalfExtent, voxelHalfExtent);

// The index of the first/current voxel hit by the ray.
VoxelWorldUtils.getVoxelIndexNoOffsetLocal(voxelExtent, m_offsettedStart, voxelIndex);

computeVoxelDistance(voxelExtent, voxelIndex);
assert m_voxelDistance > 0;

// In which direction the voxel ids are incremented.
int stepX = (int) MathExt.getSignZeroPositive(m_direction.x);
int stepY = (int) MathExt.getSignZeroPositive(m_direction.y);
int stepZ = (int) MathExt.getSignZeroPositive(m_direction.z);

// Position of the next voxel boundary on each axis, used to compute tMaxX, tMaxY and tMaxZ.
double nextVoxelBoundaryX = (voxelIndex.x + (MathExt.getNegativeSign(stepX) + 1)) * voxelExtent;
double nextVoxelBoundaryY = (voxelIndex.y + (MathExt.getNegativeSign(stepY) + 1)) * voxelExtent;
double nextVoxelBoundaryZ = (voxelIndex.z + (MathExt.getNegativeSign(stepZ) + 1)) * voxelExtent;

// tMaxX, tMaxY, tMaxZ -- distance until next intersection with voxel-border
// the value of t at which the ray crosses the first vertical voxel boundary
double tMaxX = (m_direction.x != 0) ? (nextVoxelBoundaryX - m_offsettedStart.x) / m_direction.x : Double.MAX_VALUE;
double tMaxY = (m_direction.y != 0) ? (nextVoxelBoundaryY - m_offsettedStart.y) / m_direction.y : Double.MAX_VALUE;
double tMaxZ = (m_direction.z != 0) ? (nextVoxelBoundaryZ - m_offsettedStart.z) / m_direction.z : Double.MAX_VALUE;

// tDeltaX, tDeltaY, tDeltaZ --
// how far along the ray we must move, in units of t, for each component to cover the extent of a voxel.
// Can only be Double.MAX_VALUE if we never move along that axis.
double tDeltaX = (m_direction.x != 0) ? stepX * voxelExtent / m_direction.x : Double.MAX_VALUE;
double tDeltaY = (m_direction.y != 0) ? stepY * voxelExtent / m_direction.y : Double.MAX_VALUE;
double tDeltaZ = (m_direction.z != 0) ? stepZ * voxelExtent / m_direction.z : Double.MAX_VALUE;

if (onTraversingVoxel.apply(voxelIndex)) {
m_wasStopped = true;
return;
}

int traversedVoxelCount = 0;
while (++traversedVoxelCount < m_voxelDistance) {
if (tMaxX < tMaxY && tMaxX < tMaxZ) {
voxelIndex.x += stepX;
tMaxX += tDeltaX;
}
else if (tMaxY < tMaxZ) {
voxelIndex.y += stepY;
tMaxY += tDeltaY;
}
else {
voxelIndex.z += stepZ;
tMaxZ += tDeltaZ;
}

if (onTraversingVoxel.apply(voxelIndex)) {
m_wasStopped = true;
break;
}
}
}

/**
* Computes the voxel distance, a.k.a. the number of voxels to traverse, for the ray cast.
*
* @param voxelExtent The extent of a voxel, which is for a cube the equivalent of a sphere's diameter.
* @param startIndex The starting position's index.
*/
private void computeVoxelDistance(double voxelExtent, Vector3i startIndex) {
m_voxelDistance = 1 +
MathExt.abs(VoxelWorldUtils.getVoxelIndexNoOffset(voxelExtent, m_offsettedStart.x + m_direction.x * m_length) - startIndex.x) +
MathExt.abs(VoxelWorldUtils.getVoxelIndexNoOffset(voxelExtent, m_offsettedStart.y + m_direction.y * m_length) - startIndex.y) +
MathExt.abs(VoxelWorldUtils.getVoxelIndexNoOffset(voxelExtent, m_offsettedStart.z + m_direction.z * m_length) - startIndex.z);
}

public Vector3d getStart() {
return m_start;
}

public Vector3d getDirection() {
return m_direction;
}

public double getLength() {
return m_length;
}

public int getVoxelDistance() {
return m_voxelDistance;
}

public void setStart(Vector3d start) {
m_start.set(start);
}

public void setStart(Vector3f start) {
m_start.set(start);
}

/**
* Sets the direction.
*
* @param direction The direction to set to the ray. Must be normalized.
*/
public void setDirection(Vector3d direction) {
m_direction.set(direction);
}

/**
* Sets the direction.
*
* @param direction The direction to set to the ray. Must be normalized.
*/
public void setDirection(Vector3f direction) {
m_direction.set(direction);
}

/**
* Sets the length of the ray.
*
* @param length The new length of the ray. Must be positive.
*/
public void setLength(double length) {
m_length = length;
}

/**
* Sets the end position of the ray, which is not a real variable but a way to set the direction and the length at the same time. The start position does matter for this
* method.
*
* @param end Where the ray ends.
*/
public void setEnd(Vector3d end) {
m_direction.set(end).subtractLocal(m_start);
m_length = m_direction.length();
m_direction.normalizeLocal();
}

/**
* Gets if the voxel ray cast was stopped by the "onTraversingVoxel" method call.
*
* @return True if the voxel ray cast was stopped by the "onTraversingVoxel" method call, false otherwise.
*/
public boolean wasStopped() {
return m_wasStopped;
}
}

Here are the external static methods:

/**
* Gets the voxel index of the specified position. This method supposes that the position parameter is already offset by + voxel half extent. This method is local because the
* supplied voxel index will be locally modified and returned.
*
* @param voxelExtent The extent of a voxel, which is for a cube the equivalent of a sphere's diameter.
* @param position    The position to get the voxel index from. Must already be offset by + voxel half extent.
* @param voxelIndex  Where to store the voxel index.
*
* @return The voxel index parameter that is set to the supplied position's voxel index.
*/
public static Vector3i getVoxelIndexNoOffsetLocal(double voxelExtent, Vector3d position, Vector3i voxelIndex) {
return voxelIndex.set(getVoxelIndexNoOffset(voxelExtent, position.x), getVoxelIndexNoOffset(voxelExtent, position.y), getVoxelIndexNoOffset(voxelExtent, position.z));
}
/**
* Gets the sign of the supplied number. The method being "zero positive" means that the sign of zero is 1.
*
* @param number The number to get the sign from.
*
* @return The number's sign.
*/
public static long getSignZeroPositive(double number) {
assert !Double.isNaN(number);
return getNegativeSign(number) | 1;
}
/**
* Gets the negative sign of the supplied number. In other words, if the number is negative, -1 is returned, but if the number is positive or zero, 0 is returned. NaN is only
* checked by an assertion.
*
* @param number The number to get its negative sign.
*
* @return -1 if the number is negative, 0 otherwise.
*/
public static long getNegativeSign(double number) {
assert !Double.isNaN(number);
return Double.doubleToRawLongBits(number) >> BIT_COUNT_EXCLUDING_SIGN_64;
}

The important parts that adjust the algorithm to fit my voxel boundaries are the following:

m_offsettedStart.set(m_start).addLocal(voxelHalfExtent, voxelHalfExtent, voxelHalfExtent);

It is mandatory to add the half extent to the starting position.
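To see why the half-extent offset is needed, here is a minimal sketch (the `CenteredIndexDemo` class is hypothetical, not part of the engine): with indices centered in their voxels, voxel 0 spans [-0.5, 0.5), so the index of a position is the floor of (position + half extent) / extent.

```java
// Sketch of the half-extent offset (hypothetical demo, not the engine code):
// with voxel indices centered in their cubes, voxel 0 spans [-0.5, 0.5),
// so the index of a position is floor((p + halfExtent) / extent).
public final class CenteredIndexDemo {

    public static int voxelIndex(double extent, double halfExtent, double p) {
        return (int) Math.floor((p + halfExtent) / extent);
    }

    public static void main(String[] args) {
        double extent = 1.0, halfExtent = 0.5;
        System.out.println(voxelIndex(extent, halfExtent, 0.49));  // 0
        System.out.println(voxelIndex(extent, halfExtent, 0.5));   // 1
        System.out.println(voxelIndex(extent, halfExtent, -0.51)); // -1
    }
}
```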

double nextVoxelBoundaryX = (voxelIndex.x + (MathExt.getNegativeSign(stepX) + 1)) * voxelExtent;
double nextVoxelBoundaryY = (voxelIndex.y + (MathExt.getNegativeSign(stepY) + 1)) * voxelExtent;
double nextVoxelBoundaryZ = (voxelIndex.z + (MathExt.getNegativeSign(stepZ) + 1)) * voxelExtent;

What the MathExt method call does could be programmed as: (stepX >= 0 ? 1 : 0).
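The bit trick behind those MathExt methods can be verified in isolation (the `SignTrickDemo` class is a hypothetical demo, not the engine code): arithmetic-shifting a double's raw bits by 63 keeps only the sign bit, yielding -1 for negative numbers and 0 otherwise.

```java
// Sketch verifying the sign bit trick behind getNegativeSign and
// getSignZeroPositive (hypothetical demo, not the engine code):
// an arithmetic shift by 63 propagates the sign bit across the whole long.
public final class SignTrickDemo {

    // -1 if the number is negative, 0 otherwise.
    public static long negativeSign(double number) {
        return Double.doubleToRawLongBits(number) >> 63;
    }

    // -1 if the number is negative, 1 if it is zero or positive.
    public static long signZeroPositive(double number) {
        return negativeSign(number) | 1; // -1 stays -1, 0 becomes 1
    }

    public static void main(String[] args) {
        System.out.println(negativeSign(-3.5));    // -1
        System.out.println(negativeSign(2.0));     // 0
        System.out.println(signZeroPositive(0.0)); // 1
    }
}
```

Note that this branchless version treats -0.0 as negative, which a simple comparison would not.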

I don't know how to express how delightful it is when everything starts to fit and work properly :')

Here are some screenshots:

14 minutes ago, swiftcoder said:

They are not scalars but vectors; that's what the arrow on top of a variable means. So they're not scalar multiplications but dot products.

My friend just told me that I can use TeX instead of images for the equations: I'll update that at the same time as I upload the video.


// In which direction the voxel ids are incremented.
double stepX = MathExt.getSignZeroPositive(m_direction.x);

Shouldn't the steps be integers? Actually there is double to int conversion in the inner loop caused by this.

10 hours ago, JoeJ said:

Shouldn't the steps be integers? Actually there is double to int conversion in the inner loop caused by this.

You're right, thank you. I'll fix it.

Very impressive.  Nicely done.  If you don't mind saying, what language did you write this code in?  I can't follow or understand it.  I know English isn't your native language, but would you be willing to offer up a simplified explanation of the logic you used for this blog post?

6 hours ago, Awoken said:

If you don't mind saying, what language did you write this code in?

It's in Java, but everything remains the same as in other C-family languages, except the calls to my personal methods, which is kind of egoistic of me. I just haven't tested yet whether they are more performant than Java's JDK math methods.

6 hours ago, Awoken said:

Would you be willing to offer up a simplified explanation of the logic you used for this blog post?

For the gradient or the algorithm? Well, first of all, for the algorithm, you need to at least read the research paper to get a grasp of the authors' intent and the result they wanted, even if you don't understand everything.

Second of all, the simplest way to understand the algorithm is to imagine a line in a 3D world made of blocks. Which blocks does the line touch? And in which order are they touched, based on the line's start and end positions? The goal is to iteratively traverse the blocks that are touched by the line.

Third of all, the logic of the algorithm can be summed up by making a distinction between the components of the ray's direction. Those three components define the importance of their axes in terms of how many blocks need to be traversed in which direction. Think of it with integers: two people are running towards a goal; the fastest runs at 5 km/h, while the slowest runs at 1 km/h. For each time step, i.e. an hour, how many kilometers has each runner traveled? The ratio is 5 : 1, so, to merge the analogy, at each step the ray would traverse 5 blocks on the X axis for every block on the Y axis. Of course, it's more complicated than that, as there are more parameters to it, especially because of edge cases, such as what to do when the components are equal to one another.

I'll add this explanation to the article ^^

I'm still confused.  Which part of the code you posted actually determines which voxels are hit?

4 hours ago, Awoken said:

I'm still confused.  Which part of the code you posted actually determines which voxels are hit?

if (onTraversingVoxel.apply(voxelIndex)) {
    m_wasStopped = true;
    return;
}

int traversedVoxelCount = 0;
while (++traversedVoxelCount < m_voxelDistance) {
    if (tMaxX < tMaxY && tMaxX < tMaxZ) {
        voxelIndex.x += stepX;
        tMaxX += tDeltaX;
    }
    else if (tMaxY < tMaxZ) {
        voxelIndex.y += stepY;
        tMaxY += tDeltaY;
    }
    else {
        voxelIndex.z += stepZ;
        tMaxZ += tDeltaZ;
    }

    if (onTraversingVoxel.apply(voxelIndex)) {
        m_wasStopped = true;
        break;
    }
}

Look more specifically at onTraversingVoxel.apply(voxelIndex): this line of code invokes a Java functional interface, i.e. a lambda. The supplied parameter is the absolute (world) index of the voxel being traversed.
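The callback contract can be sketched in isolation (the `CallbackDemo` class is a hypothetical stand-in, not the engine code): the traversal calls the lambda for each visited index and stops as soon as the lambda returns true, which is exactly what the loop above does with m_wasStopped.

```java
import java.util.function.Function;

// Hypothetical sketch of the callback contract (not the engine code):
// the traversal calls the lambda for each visited index and stops as soon
// as the lambda returns true, mirroring onTraversingVoxel.apply(voxelIndex).
public final class CallbackDemo {

    // Visits indices 0..max-1; returns the index at which the callback
    // stopped the loop, or -1 if it never asked to stop.
    public static int traverse(int max, Function<Integer, Boolean> onVisit) {
        for (int i = 0; i < max; i++) {
            if (onVisit.apply(i)) {
                return i; // the callback asked to stop, like m_wasStopped
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        // Stop when we "hit" index 3 (standing in for the first solid voxel).
        System.out.println(traverse(10, index -> index == 3)); // 3
    }
}
```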


Oh fantastic explanation.  I now understand the logic and it's remarkably simple, which is a good thing.  Great blog post, very educational.


