
DwarvesH

Member Since 12 Jun 2013
Offline Last Active Aug 11 2015 09:13 AM

Topics I've Started

Question about nDotL across LOD levels

10 August 2015 - 06:26 AM

I know I've been spamming the forums a bit, but please bear with me.

 

I have this old DX9 CPU-based terrain LOD system and I'm updating it to a modern DX11 GPU-based one. Progress has been slow but very fun, since I get to replace hundreds of lines of code doing expensive CPU LOD operations with a few GPU lines, and I also get a performance boost out of it.

 

I am not going to talk about terrain height across LOD and morphing, because those questions have fairly solid answers in my head.

 

It is about lighting. Even with physically based rendering, the good old cosine factor, nDotL, plays a huge role, since the lighting result is multiplied by it. So I want to get it right, but I am getting some weird results as I move across the LOD/MIP levels of the normal map I generate from the height map.
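
(By nDotL I mean the usual clamped cosine term between the surface normal and the direction to the light. As a plain C++ sketch, with the sign convention being my assumption:)

#include <algorithm>
#include <cmath>

struct Float3 { float x, y, z; };

float Dot(Float3 a, Float3 b) {
	return a.x * b.x + a.y * b.y + a.z * b.z;
}

Float3 Normalize(Float3 v) {
	float len = std::sqrt(Dot(v, v));
	return { v.x / len, v.y / len, v.z / len };
}

// Clamped cosine term. Assumes lightDir stores the direction the light
// travels (as in the float3 below), so the vector toward the light is
// its negation; the result is clamped so back-facing surfaces get no light.
float NDotL(Float3 n, Float3 lightDir) {
	Float3 toLight = Normalize({ -lightDir.x, -lightDir.y, -lightDir.z });
	return std::max(0.0f, Dot(Normalize(n), toLight));
}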

 

Supposing that we have a sampling rate of 1, meaning that for NxN vertices we have NxN texels in our height map, and a light direction of float3(0.80, -0.40, 0.0), I have a first question.

 

1. Is the following nDotL output correct and useful for physically based rendering, and should it serve as a general guideline across all LOD levels, meaning that whatever the sampling rate, the nDotL should roughly follow this curve?

 

[Attached image: ndotl1.png]

 

This looks correct for a low resolution (128x128) input texture.

 

But if I double the height map and normal map resolution and halve the sample rate so that the number of vertices remains the same, I get this result, which changes the nDotL curve:

 

[Attached image: ndotl2.png]

 

This is because I am creating the normal map by sampling 4 points in the height map:

float4 GenNM(VertexShaderOutput input) : SV_TARGET {
	// One texel step in UV space; size is the height map resolution.
	float ps = 1 / size;
	//ps *= size / 128;

	// Central differences over two texels; 30 is the world height scale.
	float3 n;
	n.x = permTexture2d.SampleLevel(permSampler2d, float4(input.texCoord.x - ps, input.texCoord.y, 0, 0), 0).x * 30 -
			permTexture2d.SampleLevel(permSampler2d, float4(input.texCoord.x + ps, input.texCoord.y, 0, 0), 0).x * 30;
	n.z = -(permTexture2d.SampleLevel(permSampler2d, float4(input.texCoord.x, input.texCoord.y - ps, 0, 0), 0).x * 30 -
			permTexture2d.SampleLevel(permSampler2d, float4(input.texCoord.x, input.texCoord.y + ps, 0, 0), 0).x * 30);
	// Fixed up component, independent of the sample spacing.
	n.y = 2;
	n = normalize(n);
	// Pack from [-1, 1] into [0, 1] for storage.
	n = n * 0.5 + 0.5;

	return float4(n, 1);
}

permTexture2d is a badly named heightMapTexture, and 30 is the world scale. When I double the height map resolution, the sampling becomes finer and the height points move closer together, so some of the overall curve of the lower resolution height map is lost rather than gaining detail. Am I understanding this right?
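
If that is right, I wonder whether the fixed n.y = 2 is the culprit: the up component should arguably scale with the world-space distance the two samples span, so the slope stays in consistent units across resolutions. A CPU-side sketch of what I mean (my assumption, untested; reusing the Float3/Normalize helpers from the snippet above):

// Placeholder for fetching one height sample on the CPU.
float heightAt(int x, int y);

Float3 HeightmapNormal(int x, int y, float worldHeightScale, float texelWorldSize) {
	// Central differences over two texels, scaled to world height units,
	// matching what GenNM does with the factor of 30.
	float dx = (heightAt(x - 1, y) - heightAt(x + 1, y)) * worldHeightScale;
	float dz = (heightAt(x, y - 1) - heightAt(x, y + 1)) * worldHeightScale;

	// The up component spans two texels in world space. With a fixed
	// value of 2 instead, halving the texel size halves the measured
	// slope, which would flatten the normals much like I am seeing.
	Float3 n = { dx, 2.0f * texelWorldSize, -dz };
	return Normalize(n);
}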

 

Doubling the height map and normal map resolution again and halving the sample rate so that we have the same number of vertices, I get this result:

 

[Attached image: ndotl3.png]

 

By the time I get to the desired resolution, the normals become quite flat and so does the lighting.

 

The reason for generating a high resolution map is that it is the input for LOD 0, the terrain closest to the camera. LOD 0 is rendered using a lot of vertices. As I move away from the camera, I use fewer vertices, spaced further apart. The final LOD has 128x128 vertices, like in the first low resolution screenshot.

 

So the next question is:

 

2. How should the normal map look at full resolution? More like the one in the first screenshot but with more detail, or flat like the one in the last screenshot?

 

3. How consistent should the various MIP levels of the normal map be as I go down in resolution?

 


I guess I managed to mess up GPU Perlin?

07 August 2015 - 08:13 AM

This single octave of simplex noise doesn't look right, does it?

 

[Attached image: noise.png]

 

It is a port of:

https://digitalerr0r.wordpress.com/2011/05/15/xna-shader-programming-tutorial-25-perlin-noise-using-the-gpu/

 

Which implements:

http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter26.html

 

A port of a port, so something was probably lost in translation when switching from XNA/DX9 to C++/DX11.

 

My code that generates the two textures on the CPU and sends them to the shader probably has some issues, most likely with the NormalizedByte4 type:

https://msdn.microsoft.com/en-us/library/microsoft.xna.framework.graphics.packedvector.normalizedbyte4.aspx

 

This type must be matched to a DXGI format, so I'm guessing DXGI_FORMAT_R8G8B8A8_SNORM? It should give a range of [-1..1] in floats.
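
If that guess is right, the CPU-side packing has to produce signed bytes where 127 maps to 1.0 and -127 to -1.0; this is my understanding of how both NormalizedByte4 and R8G8B8A8_SNORM behave, so treat the sketch as an assumption:

#include <algorithm>
#include <cmath>
#include <cstdint>

// Pack a float in [-1, 1] into one channel of an R8G8B8A8_SNORM texel.
int8_t PackSNorm8(float v) {
	v = std::max(-1.0f, std::min(1.0f, v));
	return static_cast<int8_t>(std::lround(v * 127.0f));
}

// Unpack for a round-trip check; -128 clamps to -1.0 just like -127.
float UnpackSNorm8(int8_t b) {
	return std::max(-1.0f, b / 127.0f);
}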

 


How to use D3DX11CreateShaderResourceViewFromFile to create a CPU readable texture?

06 August 2015 - 08:51 AM

Hello everybody!

 

I have a small problem: I can't figure out how to create a texture that is readable on the CPU side after loading. I have googled the issue and the usual instructions are to change the Usage and CpuAccessFlags, but as soon as I touch those flags, D3DX11CreateShaderResourceViewFromFile fails.

 

I need to read the texture for two things:

1. I have finally transitioned to VTF. Thousands of lines of CPU code for building and managing terrain LOD have been moved to the GPU. This means that I need my terrain data in a GPU readable format: textures. I read my terrain map into a texture and I would like to split it up into small chunks of 128x128 (the exact size is not important). So I want to load one or more large textures, lock them, use the data on the CPU to build all the mip levels of each chunk, create a default usage and bind flags texture for each chunk, and then discard the large textures (one GPU-side alternative is sketched below).

2. I have several special partitioning schemes whose data is saved on disk as a texture. I need to read that texture, process it, and then discard it.

 

Both steps are done once at load time so performance is not a problem.
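
For (1), one route I am considering is to skip the CPU round trip entirely and carve the chunks out on the GPU with CopySubresourceRegion; a rough sketch, assuming a default-usage source texture and that both textures share the same format:

// Copy a 128x128 window of the large terrain texture into a freshly
// created chunk texture, entirely on the GPU.
ID3D11Texture2D* CreateChunk(ID3D11Device* device, ID3D11DeviceContext* ctx,
                             ID3D11Texture2D* large, DXGI_FORMAT format,
                             UINT srcX, UINT srcY) {
	D3D11_TEXTURE2D_DESC desc = {};
	desc.Width = 128;
	desc.Height = 128;
	desc.MipLevels = 1;		// mips would still need to be generated
	desc.ArraySize = 1;
	desc.Format = format;		// must match the source texture
	desc.SampleDesc.Count = 1;
	desc.Usage = D3D11_USAGE_DEFAULT;
	desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;

	ID3D11Texture2D* chunk = nullptr;
	if (FAILED(device->CreateTexture2D(&desc, nullptr, &chunk)))
		return nullptr;

	// Source box fields: left, top, front, right, bottom, back (mip 0).
	D3D11_BOX box = { srcX, srcY, 0, srcX + 128, srcY + 128, 1 };
	ctx->CopySubresourceRegion(chunk, 0, 0, 0, 0, large, 0, &box);
	return chunk;
}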

 

I have a simple wrapper for the texture class:

bool Create(wchar_t* path) {
	HRESULT result;

	// Load the texture in; D3DX11_DEFAULT leaves a field up to the loader.
	D3DX11_IMAGE_LOAD_INFO info;
	info.Width = D3DX11_DEFAULT;
	info.Height = D3DX11_DEFAULT;
	info.Depth = D3DX11_DEFAULT;
	info.FirstMipLevel = D3DX11_DEFAULT;
	info.MipLevels = D3DX11_DEFAULT;
	info.Usage = (D3D11_USAGE) D3DX11_DEFAULT;
	info.BindFlags = D3DX11_DEFAULT;
	info.CpuAccessFlags = D3D11_CPU_ACCESS_READ;
	info.MiscFlags = D3DX11_DEFAULT;
	info.Format = DXGI_FORMAT_FROM_FILE;
	info.Filter = D3DX11_DEFAULT;
	info.MipFilter = D3DX11_DEFAULT;
	info.pSrcInfo = NULL;
	//info.CpuAccessFlags = D3D11_CPU_ACCESS_WRITE | D3D11_CPU_ACCESS_READ;
	result = D3DX11CreateShaderResourceViewFromFile(DeviceSingleton, path, &info, NULL, &Handle, NULL);
	if(FAILED(result))
		return false;

	// Query the texture behind the view to read back its dimensions.
	ID3D11Texture2D* tex;
	Handle->GetResource((ID3D11Resource**)&tex);
	D3D11_TEXTURE2D_DESC desc;
	tex->GetDesc(&desc);
	tex->Release();

	Width = desc.Width;
	Height = desc.Height;

	return true;
}

I tried all combinations of Usage and CpuAccessFlags and the creation fails. It only works with D3DX11_DEFAULT for all values.

 

And if I leave everything at the defaults, my Lock method fails:

void* Lock() {
	ID3D11Resource* res = nullptr;

	Handle->GetResource(&res);
	res->QueryInterface(&TexHandle);

	// Map the texture so it can be written to. (WRITE_DISCARD requires
	// dynamic usage, which may be why this fails on the loaded texture.)
	D3D11_MAPPED_SUBRESOURCE mappedResource;
	HRESULT result = DeviceContextSingleton->Map(TexHandle, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource);
	if(FAILED(result))
		return nullptr;

	return mappedResource.pData;
}

The fields are:

ID3D11ShaderResourceView* Handle;
ID3D11Texture2D* TexHandle;
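
For completeness, the direction I will probably try next: leave the loaded texture at its default usage and copy it into a staging texture, since staging resources accept D3D11_CPU_ACCESS_READ but no bind flags, which I suspect is why the loader rejects my flag combinations. A sketch using my DeviceSingleton/DeviceContextSingleton globals, untested:

// Copy the default-usage texture into a staging texture and map that
// for reading. The caller must Unmap and Release the staging texture.
void* LockViaStaging(ID3D11Texture2D* source, ID3D11Texture2D** outStaging,
                     UINT* outRowPitch) {
	D3D11_TEXTURE2D_DESC desc;
	source->GetDesc(&desc);
	desc.Usage = D3D11_USAGE_STAGING;
	desc.BindFlags = 0;
	desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
	desc.MiscFlags = 0;

	ID3D11Texture2D* staging = nullptr;
	if (FAILED(DeviceSingleton->CreateTexture2D(&desc, nullptr, &staging)))
		return nullptr;

	DeviceContextSingleton->CopyResource(staging, source);

	D3D11_MAPPED_SUBRESOURCE mapped;
	if (FAILED(DeviceContextSingleton->Map(staging, 0, D3D11_MAP_READ, 0, &mapped))) {
		staging->Release();
		return nullptr;
	}

	*outStaging = staging;
	*outRowPitch = mapped.RowPitch;	// rows may be padded; respect the pitch
	return mapped.pData;
}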

Thank you for your time reading this!


Is there a way to draw super precise lines?

27 January 2015 - 09:34 AM

So I'm creating a top-down/2.5D game: polygons plus a top-down camera.

 

I snap the polygons to a grid but the game is not pixely.

 

I would also like to add borders to walls. The way I did this is by drawing the walls a second time, this time with lines.

 

But I have found that the rasterizer does not behave the same when drawing lines as it does when filling polygons. Seemingly at random, end points are not drawn, especially for horizontal lines.

 

You can see this in the screenshots if you zoom in a bit:

https://dl.dropboxusercontent.com/u/45638513/sprite15.png

https://dl.dropboxusercontent.com/u/45638513/sprite16.spr.png

 

The outlines look a bit rounder at corners.

 

Is there a way to get DirectX to fill the exact border of a triangle with a line, in a portable way?

 

Or maybe I'm overthinking things and shouldn't really care about one pixel.
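
In case it matters, the workaround I am considering is to expand each border line into a thin quad so it goes through the same triangle fill rules as the walls themselves; a rough sketch of the vertex math, with Vec2 being a hypothetical type:

#include <cmath>

struct Vec2 { float x, y; };

// Expand the segment p0 -> p1 into a quad of the given width (same units
// as the endpoints). Drawn as two triangles (0,1,2) and (0,2,3), the
// border is then filled by the triangle rasterizer, not the line one.
void LineToQuad(Vec2 p0, Vec2 p1, float width, Vec2 out[4]) {
	Vec2 d = { p1.x - p0.x, p1.y - p0.y };
	float len = std::sqrt(d.x * d.x + d.y * d.y);
	// Perpendicular offset of half the width to each side.
	Vec2 n = { -d.y / len * width * 0.5f, d.x / len * width * 0.5f };
	out[0] = { p0.x + n.x, p0.y + n.y };
	out[1] = { p1.x + n.x, p1.y + n.y };
	out[2] = { p1.x - n.x, p1.y - n.y };
	out[3] = { p0.x - n.x, p0.y - n.y };
}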

 


How to properly switch to fullscreen with DXGI?

16 January 2015 - 09:13 AM

So I finally managed to create a stable Hello World for DirectX 11 (for some reason 10 and 11 never wanted to cooperate with me).

 

Yet most of the basic functionality is not working as expected.

 

For starters, switching to fullscreen. All the documentation and other posts on the Internet say that DirectX 10/11 is so easy and low maintenance compared to 9, and that you should just let DXGI take care of everything. Well, things are finicky to say the least.

 

So for starters, I would like to get DXGI resolution and fullscreen switching to work as expected.

 

Here is the scenario:

1. I have a RenderForm class that creates a window. Specifying WS_POPUP is key to the behavior I am experiencing:

handle = CreateWindowEx(WS_EX_APPWINDOW, L"D3D10WindowClass", aTitle.c_str(), WS_CLIPSIBLINGS | WS_CLIPCHILDREN 
		| WS_POPUP /*WS_OVERLAPPEDWINDOW*/, 0, 0, CW_USEDEFAULT, CW_USEDEFAULT, nullptr, nullptr, hInstance, nullptr);

2. To better test everything, I am running in "fake" fullscreen mode, so the window has no frame and has the same dimensions as my desktop. I get 670 FPS.

 

3. I switch to fullscreen mode with DXGI (Alt-Enter). I get 630 FPS. Normally one expects the FPS to go up when switching to fullscreen. The swap chain is created with the flag that allows mode switching. Without it, if I change from a less-than-native resolution to fullscreen, I get ugly artifacts I have never seen before.

swapChainDesc.Flags = DXGI_SWAP_CHAIN_FLAG_ALLOW_MODE_SWITCH;

If I create the swap chain in fullscreen mode directly, I get 1370 FPS. The framerate is almost double compared to switching to fullscreen after a windowed start. But that is not the extent of the problems. If I switch from this initial fullscreen mode to windowed, I get a white screen. If I switch back again, I get the 630 FPS fullscreen mode, the original high performance one being lost.

 

I do have code that resizes the swap chain on window resize, but because of WS_POPUP it is never called. If I create the window with WS_OVERLAPPEDWINDOW, things go to hell: the fullscreen window content is all squashed horizontally in the middle of the screen.

 

I could upload the code, but I think it is better to first determine the correct way to handle going fullscreen with DXGI and then try it out to see how it behaves.
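
For reference, here is the sequence I believe DXGI expects, pieced together from the documentation, so treat it as an assumption (swapChain, newWidth and newHeight are stand-ins for my own variables):

// 1) Describe the target mode and let DXGI resize the window first.
DXGI_MODE_DESC mode = {};
mode.Width = 1920;				// desired resolution
mode.Height = 1080;
mode.Format = DXGI_FORMAT_R8G8B8A8_UNORM;	// must match the swap chain
swapChain->ResizeTarget(&mode);

// 2) Switch to fullscreen; nullptr lets DXGI pick the output.
swapChain->SetFullscreenState(TRUE, nullptr);

// 3) On every WM_SIZE: release all back buffer references (render target
// views and so on), then resize the swap chain buffers. This is the step
// my WS_POPUP window never seems to trigger.
swapChain->ResizeBuffers(0, newWidth, newHeight, DXGI_FORMAT_UNKNOWN,
                         DXGI_SWAP_CHAIN_FLAG_ALLOW_MODE_SWITCH);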

