dxdude

DX11 texture looks pixelated at close view


Good day,
I have a problem with my textures in my DirectX 11 app.
In the distance the texture filtering seems to work like it should, but the textures look very pixelated when I am very close to them.
 
See this picture: 98555950.jpg

And this one: 70372621.jpg
 
I am using "D3D11_FILTER_MIN_MAG_MIP_LINEAR" to create the texture sampler.
 
//Code for sampler creation
D3D11_SAMPLER_DESC SamDesc;
ZeroMemory(&SamDesc, sizeof(D3D11_SAMPLER_DESC));
SamDesc.Filter		= D3D11_FILTER_MIN_MAG_MIP_LINEAR;
SamDesc.AddressU	= D3D11_TEXTURE_ADDRESS_WRAP;
SamDesc.AddressV	= D3D11_TEXTURE_ADDRESS_WRAP;
SamDesc.AddressW	= D3D11_TEXTURE_ADDRESS_WRAP;
SamDesc.MipLODBias	= 0.0f;
SamDesc.MaxAnisotropy	= 1;
SamDesc.ComparisonFunc	= D3D11_COMPARISON_NEVER;
SamDesc.BorderColor[0]	= SamDesc.BorderColor[1] = SamDesc.BorderColor[2] = SamDesc.BorderColor[3] = 0;
SamDesc.MinLOD		= 0;
SamDesc.MaxLOD		= D3D11_FLOAT32_MAX;

I also tested the filter "D3D11_FILTER_ANISOTROPIC" with a MaxAnisotropy of 4 or 8, and the texture still looks pixelated at very close view.
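For reference, the anisotropic variant was set up roughly like this (a minimal sketch; only Filter and MaxAnisotropy differ from the linear sampler above, and pAnisoSampler is a placeholder name):

//Code for the anisotropic sampler variant that was tested
D3D11_SAMPLER_DESC AnisoDesc;
ZeroMemory(&AnisoDesc, sizeof(D3D11_SAMPLER_DESC));
AnisoDesc.Filter         = D3D11_FILTER_ANISOTROPIC;
AnisoDesc.MaxAnisotropy  = 8; // 4 was tested as well
AnisoDesc.AddressU       = D3D11_TEXTURE_ADDRESS_WRAP;
AnisoDesc.AddressV       = D3D11_TEXTURE_ADDRESS_WRAP;
AnisoDesc.AddressW       = D3D11_TEXTURE_ADDRESS_WRAP;
AnisoDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;
AnisoDesc.MinLOD         = 0;
AnisoDesc.MaxLOD         = D3D11_FLOAT32_MAX;

ID3D11SamplerState* pAnisoSampler = NULL; // placeholder name
HRESULT hr = m_pDevice->CreateSamplerState(&AnisoDesc, &pAnisoSampler);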
The texture loading is done with "D3DX11CreateShaderResourceViewFromFile".
 
//Code for texture loading
D3DX11_IMAGE_LOAD_INFO imageInfo;
imageInfo.Width		= D3DX11_DEFAULT;
imageInfo.Height	= D3DX11_DEFAULT;
imageInfo.Depth		= D3DX11_DEFAULT;
imageInfo.FirstMipLevel	= D3DX11_DEFAULT;
imageInfo.MipLevels	= D3DX11_DEFAULT;
imageInfo.Usage		= D3D11_USAGE_DEFAULT;
imageInfo.BindFlags	= D3D11_BIND_SHADER_RESOURCE;
imageInfo.Format	= DXGI_FORMAT_R8G8B8A8_UNORM;
imageInfo.MipFilter	= D3DX11_FILTER_LINEAR;
imageInfo.Filter	= D3DX11_FILTER_LINEAR;

D3DX11CreateShaderResourceViewFromFile(..)
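The mip chain of the loaded texture can also be double-checked like this (a minimal debug sketch; pSRV is a placeholder for the view created by the call above):

//Debug sketch: verify the loaded texture got a full mip chain
ID3D11Resource*  pResource  = NULL;
ID3D11Texture2D* pTexture2D = NULL;
pSRV->GetResource(&pResource);
pResource->QueryInterface(__uuidof(ID3D11Texture2D), (void**)&pTexture2D);

D3D11_TEXTURE2D_DESC desc;
pTexture2D->GetDesc(&desc);
// For a 512x512 source, a complete chain is 10 levels (512 down to 1).
OutputDebugStringA(desc.MipLevels == 10 ? "full mip chain\n" : "mip chain incomplete\n");

pTexture2D->Release();
pResource->Release();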

It doesn't matter what kind of filter I set, or whether I use default values for the 'D3DX11_IMAGE_LOAD_INFO' structure; it is still pixelated at close view.
 
First of all, I tested whether the swap chain, back buffer, and depth texture are the same size as the created window.
They were all matching.
 
//Code window creation
RECT rc = { 0, 0, 1600, 960 };
AdjustWindowRect( &rc, WS_OVERLAPPEDWINDOW, FALSE );

hWnd = CreateWindow(L"myclass", L"myapp", WS_OVERLAPPEDWINDOW,
      CW_USEDEFAULT, CW_USEDEFAULT, rc.right - rc.left, rc.bottom - rc.top, NULL, NULL, hInstance, NULL);

if (!hWnd)
    return FALSE;

g_WindowHWND = hWnd;
ShowWindow(g_WindowHWND, nCmdShow);

//Code buffer creation after window is created
m_hWndMainRenderTarget = hwnd;

// Get the render target size from the window client rect
RECT rcRenderTarget;
GetClientRect( hwnd, &rcRenderTarget);
m_uiRenderTargetWidth	= rcRenderTarget.right-rcRenderTarget.left;
m_uiRenderTargetHeight	= rcRenderTarget.bottom-rcRenderTarget.top;


// Create swapchain settings
DXGI_SWAP_CHAIN_DESC sSwapChainDesc;
ZeroMemory( &sSwapChainDesc, sizeof( sSwapChainDesc ) );

sSwapChainDesc.BufferCount		= 1;
sSwapChainDesc.BufferDesc.Width		= m_uiRenderTargetWidth;
sSwapChainDesc.BufferDesc.Height	= m_uiRenderTargetHeight;
sSwapChainDesc.BufferDesc.Format	= DXGI_FORMAT_R8G8B8A8_UNORM;
sSwapChainDesc.BufferUsage		= DXGI_USAGE_RENDER_TARGET_OUTPUT;
sSwapChainDesc.OutputWindow		= m_hWndMainRenderTarget;
sSwapChainDesc.SampleDesc.Count		= 1;
sSwapChainDesc.SampleDesc.Quality	= 0;
sSwapChainDesc.Windowed			= m_bWindowed;
sSwapChainDesc.SwapEffect		= DXGI_SWAP_EFFECT_DISCARD;

// Retrieve the device, adapter, and factory that were created with the device
IDXGIDevice * pDXGIDevice;
hr = m_pDevice->QueryInterface(__uuidof(IDXGIDevice), (void **)&pDXGIDevice);

IDXGIAdapter * pDXGIAdapter;
hr = pDXGIDevice->GetParent(__uuidof(IDXGIAdapter), (void **)&pDXGIAdapter);

IDXGIFactory * pIDXGIFactory;
pDXGIAdapter->GetParent(__uuidof(IDXGIFactory), (void **)&pIDXGIFactory);

// Create the swap chain separately
if(FAILED( hr = pIDXGIFactory->CreateSwapChain( pDXGIDevice, &sSwapChainDesc, &m_pSwapChain ) ))
{
         return hr;
}
// Get a pointer to the back buffer
if(FAILED( hr = m_pSwapChain->GetBuffer( 0, __uuidof( ID3D11Texture2D ), ( LPVOID* )&m_pBackBuffer )))
{
	return hr;
}

// Create a render-target view
if(FAILED( hr = m_pDevice->CreateRenderTargetView( m_pBackBuffer, NULL, &m_pRenderTargetView )))
{
	return hr;
}

D3D11_TEXTURE2D_DESC	sDepthStencilTextureDesc;

// Create depth stencil texture etc..
sDepthStencilTextureDesc.Width			= m_uiRenderTargetWidth;
sDepthStencilTextureDesc.Height			= m_uiRenderTargetHeight;
sDepthStencilTextureDesc.MipLevels		= 1;
sDepthStencilTextureDesc.ArraySize		= 1;
sDepthStencilTextureDesc.Format			= DXGI_FORMAT_D24_UNORM_S8_UINT;
sDepthStencilTextureDesc.SampleDesc.Count	= 1;
sDepthStencilTextureDesc.SampleDesc.Quality	= 0;
sDepthStencilTextureDesc.Usage			= D3D11_USAGE_DEFAULT;
sDepthStencilTextureDesc.BindFlags		= D3D11_BIND_DEPTH_STENCIL;
sDepthStencilTextureDesc.CPUAccessFlags		= 0;
sDepthStencilTextureDesc.MiscFlags		= 0;

if(FAILED( hr = m_pDevice->CreateTexture2D( &sDepthStencilTextureDesc, NULL, &m_pDepthStencilTexture )))
{
	return hr;
}
	
D3D11_DEPTH_STENCIL_VIEW_DESC	sDepthStencilViewDesc;

// Depth stencil view desc...
ZeroMemory( &sDepthStencilViewDesc, sizeof( sDepthStencilViewDesc ) );
sDepthStencilViewDesc.Format			= DXGI_FORMAT_D24_UNORM_S8_UINT;
sDepthStencilViewDesc.ViewDimension		= D3D11_DSV_DIMENSION_TEXTURE2D;
sDepthStencilViewDesc.Texture2D.MipSlice	= 0;
sDepthStencilViewDesc.Flags			= 0;

if(FAILED( hr = m_pDevice->CreateDepthStencilView( m_pDepthStencilTexture, &sDepthStencilViewDesc,&m_pDepthStencilView )))
{
	return hr;
}

// Bind the view
m_pDeviceContext->OMSetRenderTargets( 1, &m_pRenderTargetView, m_pDepthStencilView );

D3D11_VIEWPORT sViewPort;

// Setup the viewport
sViewPort.Width		= (FLOAT)m_uiRenderTargetWidth;
sViewPort.Height	= (FLOAT)m_uiRenderTargetHeight;
sViewPort.MinDepth	= 0.0f;
sViewPort.MaxDepth	= 1.0f;
sViewPort.TopLeftX	= 0;
sViewPort.TopLeftY	= 0;

m_pDeviceContext->RSSetViewports( 1, &sViewPort );

During a debug session I tested whether the dimensions of all created buffers match the window client rect.
Every created buffer has the same size as the client area of the created window.
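The check was essentially this (a minimal sketch of the comparison, using GetClientRect and IDXGISwapChain::GetDesc):

//Debug sketch of that comparison
RECT rcClient;
GetClientRect(m_hWndMainRenderTarget, &rcClient);

DXGI_SWAP_CHAIN_DESC scDesc;
m_pSwapChain->GetDesc(&scDesc);

// Both asserts held in my tests (requires <cassert>).
assert(scDesc.BufferDesc.Width  == (UINT)(rcClient.right  - rcClient.left));
assert(scDesc.BufferDesc.Height == (UINT)(rcClient.bottom - rcClient.top));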
The window itself is very basic: no menu, no bars, just a simple title bar.
Text displayed on screen by my font engine is pixel perfect, so I think the buffer sizes match the window client area size.
At this point I have to say that other applications, like the samples from the Microsoft SDK, work as expected.
 
Since I am running out of ideas, here is some more code that could be relevant.
 
//Code projection matrix creation. zNear = 1.0f and zFar = 12000.0f
int width	= g_CDevice.GetRenderTargetWidth();
int height	= g_CDevice.GetRenderTargetHeight();
float fov	= 0.785398163f;
float aspectRatio = width / (float)height;
D3DXMatrixPerspectiveFovLH(&g_mProjection, fov, aspectRatio, g_fZNear, g_fZFar);

The pixelated effect normally starts when I am getting close to the zNear value, something like 2 units above the zNear setting.

Here is the pixel shader I am using.
It is a combination of detail textures with slope-based texturing and some blending of mixed UV values for the detail textures.
They are also combined via a noise blend map that mixes the slope-based texture values to get rid of the repeat effect of the detail textures.
All textures used here are 512x512.
The textures are all loaded the same way as described at the beginning of this post and use the same texture sampler.
 
//Code pixelshader
Texture2D txColorMap_1 : register( t0 ); // grass
Texture2D txColorMap_2 : register( t1 ); // dirt
Texture2D txColorMap_3 : register( t2 ); // rock
Texture2D txColorMap_4 : register( t3 ); // rgba random blend map
Texture2D txColorMap_5 : register( t4 ); // noisy normal map

// Texture sampler - D3D11_FILTER_MIN_MAG_MIP_LINEAR / ADDRESS: D3D11_TEXTURE_ADDRESS_WRAP
SamplerState samLinear2D_1 : register( s0 );

struct PixelInputType
{
    float4 position : SV_POSITION;
    float3 normal   : TEXCOORD0;
    float2 tex_1    : TEXCOORD1; // uv for detail
    float2 tex_2    : TEXCOORD2; // uv for noise map
};


float4 PSMain(PixelInputType input) : SV_Target
{
	const float uvDetail = 32.0f; // detail uv in the range of 0.125f - 0.25f
	const float4 vLightDir = float4(-0.5f, 1.0f, 1.0f, 1.0f); // Directional light for testing
	const float4 vLightColor = float4(0.8f, 0.8f, 0.8f, 1.0f);
	const float4 vAmbientColor = float4(0.2f, 0.2f, 0.2f, 1.0f);

	float4 finalColor;
	float blendAmount;
	
	float4 t_1 =  txColorMap_4.Sample( samLinear2D_1, input.tex_2 * 0.125f);// random map blend values
	float4 col;

	// Read detail texture
	float4 c1 = txColorMap_1.Sample( samLinear2D_1, input.tex_1 * uvDetail);
	float4 c2 = txColorMap_2.Sample( samLinear2D_1, input.tex_1 * uvDetail);
	float4 c3 = txColorMap_3.Sample( samLinear2D_1, input.tex_1 * uvDetail);
	
	// Read detail texture with lower uv values for mixing
	float4 c4 = txColorMap_1.Sample( samLinear2D_1, input.tex_1 * uvDetail * 0.25f);
	float4 c5 = txColorMap_2.Sample( samLinear2D_1, input.tex_1 * uvDetail * 0.125f);
	float4 c6 = txColorMap_3.Sample( samLinear2D_1, input.tex_1 * uvDetail * 0.125f);
	
	// lerp the values with the blend values of the random blend map
	c1 = lerp(c1*c4, c2, t_1.r);
	c2 = lerp(c2*c5, c3, t_1.g);
	c3 = lerp(c3,c6, t_1.b);

	//  Slope calculation based on rastertek tutorial
	float slope = 1.0f - input.normal.y;

    if(slope < 0.2)
    {
        blendAmount = slope / 0.2f;
        col = lerp(c1, c2, blendAmount);
    }
	
    if((slope < 0.7) && (slope >= 0.2f))
    {
        blendAmount = (slope - 0.2f) * (1.0f / (0.7f - 0.2f));
        col = lerp(c2, c3, blendAmount);
    }

    if(slope >= 0.7) 
    {
        col = c3;
    }
	
	// add normal map noise values for some cheap bump effect
	float3 n1 = txColorMap_5.Sample(samLinear2D_1, input.tex_1 * uvDetail).rgb;
	float3 n2 = txColorMap_5.Sample(samLinear2D_1, input.tex_1 * uvDetail * 0.25f).rgb;
	n1 = lerp(n1, n2, t_1.r);
	float  d = dot((float3)vLightDir,n1);
	col *= d;
	
	// calculate directional lighting
	finalColor = saturate( dot( (float3)vLightDir,input.normal) * vLightColor) * col;
	finalColor += col * vAmbientColor;

	return finalColor;
}

That is all the code I think could be relevant for guessing what kind of bug I have.
Thanks in advance to anyone here.

The one thing I can't see mentioned here is the UV coordinates you are using, which may mean you are trying to debug code that doesn't actually need debugging. Have a look (if you haven't already) to see whether the UV coords you are using cover the full resolution of the texture.

 

Aimee

Thank you for the answer, AmzBee.

I can't test any code here at work at the moment, so I will do it when I'm back home later.

The UVs are generated inside my vertex shader.

 

Can very low UV coordinates cause problems?

So when mapping from 0 to 1, and the UVs for it are like 0.0078125 - 0.01...?
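To put numbers on that (rough arithmetic; the screen-pixel count is just an assumption for illustration):

//Rough texel-coverage arithmetic for those numbers
const float textureSize = 512.0f;               // the textures are 512x512
const float uvSpan      = 0.0078125f;           // UV extent in question
const float texels      = uvSpan * textureSize; // = 4 texels

// If that span fills ~400 screen pixels (assumed figure), each texel is
// stretched across ~100 pixels - heavy magnification, which looks
// pixelated no matter which mip/minification filter is used.
const float screenPixels   = 400.0f;
const float pixelsPerTexel = screenPixels / texels; // = 100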

 

But I'll check my vertex shader later.

Thank you very much.

Have a look (if you haven't already) to see whether the UV coords you are using cover the full resolution of the texture.

Aimee

Woot, I solved it now.

There was a bug in the UV generation in the vertex shader.

Thank you very much.

And sorry for the big post :)

Can very low UV coordinates cause problems?
So when mapping from 0 to 1, and the UVs for it are like 0.0078125 - 0.01...?
 
I don't do shaders, but I follow pretty much everything you're doing.
 
Looks like you might have a classic case of "5 foot texture, 10 foot rock", as I call it. I.e. your texture is not high-res enough for the size of the mesh it's mapped onto.
 
When this occurs, you can get very small U,V coords for a given tri, such as you mention.
 
That, and your first image (a classic case of a low-res texture), are what make me suspect "5 foot texture, 10 foot rock", or "1 meter texture, 2 meter rock" if you prefer.
 
If that's the case, then the image in your texture is a picture of a piece of land that's not as big as the area it gets mapped onto in your world, so it gets stretched. Instead of vertex coords 0 to 1 mapping to UVs 0 to 1, you get UVs 0 to 0.1, or 0.01, etc.
 
So you can do one or a combination of things:
1. Increase texture resolution. I run 256x256 for speed, but have tested up to 4096x4096 on the exact same case you're working on (ground textures). I saw that first image of yours, and I was like "yeah, been there, done that": walking up to rocks and ground that sticks up, and figuring out how to get high enough resolution with the lowest-res textures possible, while tweaking texture wrap, quad size, and seamless textures. Increasing texture size costs memory, and bigger textures run slower (at least in the fixed-function pipeline).

2. Decrease quad/triangle size. Half as big with the same mapping means twice the resolution from the same texture. Seamless textures may be required.

3. Repeat the texture more than once across a quad (see the sketch after this list); this usually requires a seamless texture. It gets you the look of a high-res texture without the memory hit, and doesn't increase your triangle count. The downside is possible moiré patterns from repeating the same texture two or more times across a surface.
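Here's a minimal sketch of option 3 (illustrative names, not code from either of our projects): scale the corner UVs past 1.0 and let wrap addressing (your D3D11_TEXTURE_ADDRESS_WRAP) repeat the texture.

//Minimal sketch of option 3: tile a seamless texture across a quad
struct Vertex { float x, y, z, u, v; };

// Corner UVs run 0..repeats instead of 0..1; with wrap addressing the
// sampler tiles the texture, raising the effective resolution with no
// extra memory and no extra triangles.
void SetQuadUVs(Vertex quad[4], float repeats)
{
    quad[0].u = 0.0f;    quad[0].v = 0.0f;
    quad[1].u = repeats; quad[1].v = 0.0f;
    quad[2].u = 0.0f;    quad[2].v = repeats;
    quad[3].u = repeats; quad[3].v = repeats;
}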
 
Here's my load code for the texture (D3D9/D3DX):
 
// load texture - creates mipmaps. no image filtering, box filtering for mipmaps.
void Zloadtex(char *s)
{
    HRESULT h;
    char s2[100];
    h = D3DXCreateTextureFromFileExA(Zd3d_device_ptr, s,
                                     D3DX_DEFAULT,     // width
                                     D3DX_DEFAULT,     // height
                                     D3DX_DEFAULT,     // mip levels (default = complete chain)
                                     0,                // usage (0 = not render target, not dynamic)
                                     D3DFMT_A8R8G8B8,
                                     mempool,          // memory pool of your choice - managed, most likely
                                     D3DX_FILTER_NONE, // image filter (default = tri + dither)
                                     D3DX_DEFAULT,     // mip filter (default = box)
                                     0,                // color key (0 = none)
                                     NULL, NULL, &(Ztex[numtextures].tex));
    if (h != D3D_OK) { strcpy_s(s2, 100, "Error loading "); strcat_s(s2, 100, s); Zmsg2(s2); exit(1); }
    strcpy_s(Ztex[numtextures].name, s);
    numtextures++;
}
 
 
And here are the states of the pipeline when drawing:
 
// no blending, turn on gouraud shading
Zd3d_device_ptr->SetTextureStageState( 0, D3DTSS_COLOROP,   D3DTOP_MODULATE );
Zd3d_device_ptr->SetTextureStageState( 0, D3DTSS_COLORARG1, D3DTA_TEXTURE );
Zd3d_device_ptr->SetTextureStageState( 0, D3DTSS_COLORARG2, D3DTA_DIFFUSE );
Zd3d_device_ptr->SetRenderState(D3DRS_SHADEMODE, D3DSHADE_GOURAUD);
Zmipmaps(1);           // Zd3d_device_ptr->SetSamplerState(0, D3DSAMP_MIPFILTER, D3DTEXF_LINEAR);
Zminmagfilter(2);      // 0 = point, 1 = linear, 2 = aniso min and magnification
Zambient(255,255,255); // I control ambient with materials
Znormalize(1);         // normalize normals on. I do scaling of meshes on the fly.
Zspecular(1);          // turn on specular
Zalphablend(0);        // alpha blend off
 
 
And here are the results with a 256x256 texture on a 10x10 quad: when I get up close and personal like in your first image, I get about 4x the resolution you are getting (i.e. just slightly pixelated).
 
Well, I was going to post an image, but the image button doesn't work. I have a couple of screenshots in my gallery if you want to check them out. Not sure how well they show the ground though.
 
 

It is a combination of detail textures with slope-based texturing and some blending of mixed UV values for the detail textures.
They are also combined via a noise blend map that mixes the slope-based texture values to get rid of the repeat effect of the detail textures.

 
I believe this is the only place where we're doing things differently. I do a simple aniso texture-mapping of a seamless texture onto a height-mapped quad.
 
As I said, I'm using a 256x256 texture on a 10x10 quad. Based on your first image, if you're using a 512x512 texture, it looks like you're mapping it onto a quad size of about 80x80, whereas 512x512 mapped onto 20x20 would be the equivalent of what I'm doing (4 times the resolution).
 
From my testing, I found I couldn't go bigger than 10x10 with a 256x256 texture without unacceptable levels of pixelation at the closest ranges. That would be the equivalent of a 20x20 quad using 512x512 textures.
 
Don't forget that the size of the image in the texture can be a factor too.
 
If you're mapping a 512x512 texture onto a 20x20 quad, at 1 meter per D3D or OGL unit, and the image on your texture is only a section of land 5 meters across, you'll also get results similar to your first image.
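Putting those comparisons into one formula (a rough sketch; the 80x80 quad size is just my estimate from your screenshot):

//Texel density for the cases above, in texels per world unit
float TexelsPerUnit(float textureSize, float repeats, float quadSize)
{
    return textureSize * repeats / quadSize;
}

// my setup:             256 * 1 / 10 = 25.6 texels/unit
// equivalent with 512:  512 * 1 / 20 = 25.6 texels/unit
// your estimated setup: 512 * 1 / 80 =  6.4 texels/unit (about 1/4 the density)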

Woot, I solved it now.
There was a bug in the UV generation in the vertex shader.
Thank you very much.
And sorry for the big post :)

No worries, sometimes the biggest problem can be solved with the smallest fix, lol :P

There was a bug in the UV generation in the vertex shader.

UV coords too small, eh?

Nice info, Norman.
They were not too small, but I made a small mistake.

Since I update a small constant buffer for each patch that gets rendered, I also supply sector data from the patch.
The sector data is used to generate continuous UV coordinates.
But I had a small bug in the update process for the constant buffer.
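A minimal sketch of the kind of per-patch update involved (the struct layout and the names here are made up for illustration, not my actual code):

//Sketch of the per-patch constant buffer update (illustrative names)
struct PatchConstants
{
    float sectorX;    // sector offset; the vertex shader uses it to
    float sectorY;    // generate continuous UVs across patch borders
    float padding[2]; // constant buffers must be a multiple of 16 bytes
};

PatchConstants cb;
cb.sectorX = (float)patch.sectorX; // 'patch' is a placeholder
cb.sectorY = (float)patch.sectorY;

// If this update supplies stale or wrong sector data, every patch still
// renders fine, but the generated UVs are off - exactly this class of bug.
m_pDeviceContext->UpdateSubresource(m_pPatchConstantBuffer, 0, NULL, &cb, 0, 0);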

 

Nice info you supplied. Thank you very much.


But I had a small bug in the update process for the constant buffer.

 

It's always the little stuff, isn't it?

 

I was working on blending heightmap edges the other day. It helps if you calculate the average height at an edge as (h1+h2)/2 as opposed to h1 + h2/2; I forgot the parentheses. That one took a few hours to find, tracing up and down the call stack (so to speak, purely by code inspection).
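In code, the difference is just one pair of parentheses (h1 and h2 being the two edge heights):

// the precedence slip in one line:
float wrong = h1 + h2 / 2.0f;   // division binds first: h1 + (h2 / 2)
float right = (h1 + h2) / 2.0f; // the intended average
// with h1 = 10 and h2 = 20: wrong = 20, right = 15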
