# DX12 Casting a cubemap to be used as 6 separate 2d texture srvs

## Recommended Posts

I was wondering if it is possible to use the individual faces of a cubemap to create 2D SRVs. The underlying ID3D12Resource would be the cubemap, but we could also access it as 6 separate 2D textures in shader code.

```cpp
ID3D12Resource* pCubeMap = createCubeMap();

SRV cubeMapFace1 = {};
cubeMapFace1.dimension = 2D;
cubeMapFace1.xxx = 0; // This will be the face index (is this the plane slice field
                      // in D3D12_TEX2D_SRV? How does it map to Vulkan, since there
                      // is no plane slice in VkImageViewCreateInfo?)

// Plan is to create an SRV for only the first face of the cubemap
// and use it as a normal 2D texture in shader code
createSRV(pCubeMap, &cubeMapFace1);
```

Thank you


##### Share on other sites

Have a look at https://github.com/NightCreature/SpaceSim/blob/master/SpaceSim/Graphics/RenderSystem.cpp and the function initialiseCubemapRendererAndResources.

This initialises a cubemap and its SRVs and RTVs for use in a cubemap renderer, where the target and resource are cubemaps but are rendered to as single-target RTs. This is D3D11, but I have a feeling this will extend into D3D12 too.


##### Share on other sites

An additional D3D11 example using cube map arrays for which an RTV is created for each face:

##### Share on other sites

I just wanted to confirm that it's basically the same in D3D12 as it is in D3D11: you just need to specify which array slice the SRV will read from by filling out the "Texture2DArray.FirstArraySlice" and "Texture2DArray.ArraySize" members of the D3D12_SHADER_RESOURCE_VIEW_DESC structure. You can also do this to make a view into a sub-array if you want, or to make a view that only sees a subset of the mip levels.
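As a concrete sketch of the above (not from the thread; it assumes a non-mipmapped RGBA8 cubemap, and pCubeMap, device and cpuHandle are placeholders for resources created elsewhere):

```cpp
// Windows-only D3D12 fragment: view face 2 of a cubemap as a single 2D slice.
D3D12_SHADER_RESOURCE_VIEW_DESC desc = {};
desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;  // must match the resource format
desc.ViewDimension = D3D12_SRV_DIMENSION_TEXTURE2DARRAY;
desc.Shader4ComponentMapping = D3D12_DEFAULT_SHADER_4_COMPONENT_MAPPING;
desc.Texture2DArray.MostDetailedMip = 0;
desc.Texture2DArray.MipLevels = 1;
desc.Texture2DArray.FirstArraySlice = 2;   // face index, 0-based
desc.Texture2DArray.ArraySize = 1;         // expose just that one face
desc.Texture2DArray.PlaneSlice = 0;        // planes are for multi-planar formats (e.g. NV12), not faces
desc.Texture2DArray.ResourceMinLODClamp = 0.0f;
device->CreateShaderResourceView(pCubeMap, &desc, cpuHandle);
```

This also answers the PlaneSlice question from the opening post: the face goes in FirstArraySlice, not PlaneSlice. On the HLSL side the safe match for this view dimension is a Texture2DArray sampled at array index 0, and the Vulkan counterpart of the face index is baseArrayLayer in VkImageSubresourceRange.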

##### Share on other sites

Thanks for posting these links. They have given a lot of insight.

So just to confirm: if I want to create a 2D view of face 3 of a cubemap, would the code look something like this?

```cpp
SRV_DESC desc = {};
desc.dimension = TEXTURE_2D_ARRAY;
desc.tex2DArray.firstArraySlice = 3; // Face 3 in the cubemap
desc.tex2DArray.arraySize = 1;       // Only need one face
addSRV(pCubeMapResource, &desc);
```

Thank you

##### Share on other sites
> 5 minutes ago, mark_braga said:
>
> So just to confirm if I want to create a 2D view of face 3 of a cubemap, will the code look something like this?
>
> ```cpp
> SRV_DESC desc = {};
> desc.dimension = TEXTURE_2D_ARRAY;
> desc.tex2DArray.firstArraySlice = 3; // Face 3 in the cubemap
> desc.tex2DArray.arraySize = 1;       // Only need one face
> addSRV(pCubeMapResource, &desc);
> ```

Use index 2, because in computer science we start counting at 0 (assuming "face 3" means the third face). Also remember that the faces sit in the cubemap exactly as you render into them, so if face 0 is the positive Z axis, that is the image you will get back.


##### Share on other sites

My local transforms look in the direction of the positive z-axis. The y-axis points up and the x-axis points to the left. This results in the following transformation matrices and indices for each of the six faces of the cube map.

```cpp
static const XMMATRIX rotations[6] = {
    XMMatrixRotationY(-XM_PIDIV2), // Look: +x  Index: 0
    XMMatrixRotationY( XM_PIDIV2), // Look: -x  Index: 1
    XMMatrixRotationX( XM_PIDIV2), // Look: +y  Index: 2
    XMMatrixRotationX(-XM_PIDIV2), // Look: -y  Index: 3
    XMMatrixIdentity(),            // Look: +z  Index: 4
    XMMatrixRotationY(XM_PI),      // Look: -z  Index: 5
};
```


### Similar Content

• Hi.
I wanted to experiment with D3D12 development and decided to run some tutorials: Microsoft DirectX-Graphics-Samples, Braynzar Soft, 3dgep... Whatever sample I run, I get the same crash.
All the initialization goes well, no errors, return codes OK, but as soon as the Present method is invoked on the swap chain, I'm encountering a crash with the following call stack:
The crash is an access violation to a null pointer (with an offset of 0x80).
I'm working on a notebook, a Toshiba Qosmio X870 with two GPUs: an integrated Intel HD 4000 and a dedicated NVIDIA GTX 670M (Fermi based). The HD 4000 is DX11 only and, as far as I understand, the GTX 670M is DX12 with a feature level of 11_0.
I checked that the correct adapter was chosen by the sample, and when the D3D12 device is requested in the sample with an 11_0 FL, it is created with no problem. Same for all the required interfaces (swap chain, command queue...).
I tried a lot of things to solve the problem or get some info, like forcing the notebook to always use the NVIDIA GPU, disabling the debug layer, and asking for a different feature level (by the way, 11_0 is the only one that allows me to create the device; any other FL will fail at device creation)...
I have the latest NVIDIA drivers (391.35), the latest Windows 10 SDK (10.0.17134.0) and I'm working under
Visual Studio 2017 Community.
Thanks to anybody who can help me find the problem...
• By _void_
Hi guys!
In a lot of samples found on the internet, when people initialize D3D12_SHADER_RESOURCE_VIEW_DESC for a resource with array size 1, they normally set its dimension to Texture2D. If the array size is greater than 1, they use the Texture2DArray dimension, for example.
If I declare the SRV in the shader as Texture2DArray but create the SRV as Texture2D (the array has only 1 texture), following the same principle as above, would this be OK? I guess this should work as long as I am using array index 0 to access my texture?
Thanks!
• By _void_
Hey!

What is the recommended upper bound for the number of commands to record in a command list bundle?
According to MSDN it is supposed to be a small number, but they do not elaborate on the actual number.
I am wondering whether I should pre-record commands in the command buffer and use ExecuteIndirect, or maybe bundles instead.
The number of commands to record in my case could vary greatly.

Thanks!

• While working on a project using D3D12 I was getting an exception being thrown while trying to get a D3D12_CPU_DESCRIPTOR_HANDLE. The project is using plain C so it uses the COBJMACROS. The following application replicates the problem happening in the project.
```c
#define COBJMACROS
#pragma warning(push, 3)
#include <Windows.h>
#include <d3d12.h>
#include <dxgi1_4.h>
#pragma warning(pop)

IDXGIFactory4 *factory;
ID3D12Device *device;
ID3D12DescriptorHeap *rtv_heap;

int WINAPI wWinMain(HINSTANCE hinst, HINSTANCE pinst, PWSTR cline, int cshow)
{
    (hinst), (pinst), (cline), (cshow);

    HRESULT hr = CreateDXGIFactory1(&IID_IDXGIFactory4, (void **)&factory);
    hr = D3D12CreateDevice(0, D3D_FEATURE_LEVEL_11_0, &IID_ID3D12Device, &device);

    D3D12_DESCRIPTOR_HEAP_DESC desc;
    desc.NumDescriptors = 1;
    desc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_RTV;
    desc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_NONE;
    desc.NodeMask = 0;
    hr = ID3D12Device_CreateDescriptorHeap(device, &desc, &IID_ID3D12DescriptorHeap, (void **)&rtv_heap);

    D3D12_CPU_DESCRIPTOR_HANDLE rtv = ID3D12DescriptorHeap_GetCPUDescriptorHandleForHeapStart(rtv_heap);
    (rtv);
}
```
The call to ID3D12DescriptorHeap_GetCPUDescriptorHandleForHeapStart throws an exception. Stepping into the disassembly for ID3D12DescriptorHeap_GetCPUDescriptorHandleForHeapStart shows that the error occurs on the instruction
```
mov  qword ptr [rdx],rax
```
which seems odd since rdx doesn't appear to be used. Any help would be greatly appreciated. Thank you.

• By lubbe75
As far as I understand there is no real random or noise function in HLSL.
I have a big water polygon, and I'd like to fake water wave normals in my pixel shader. I know it's not efficient and the standard way is really to use a pre-calculated noise texture, but anyway...
Does anyone have any quick and dirty HLSL shader code that fakes water normals, and that doesn't look too repetitious?