Search the Community

Showing results for tags 'DX11' in content posted in Graphics and GPU Programming.
Found 1349 results

  1. I'm attempting to get a triangle to display in DirectX 11, but it seems like I can only get it to show when I have created and set a viewport. Is having at least one viewport set required in DirectX 11? If not, what have I done wrong, given that my triangle's vertex data looks like this?

     Vertex vertices[] =
     {
         { 0.0f,   0.0f,   0.0f, 1.0f, Color(1.0f, 0.0f, 0.0f, 1.0f) },
         { 100.0f, 0.0f,   0.0f, 1.0f, Color(1.0f, 0.0f, 0.0f, 1.0f) },
         { 100.0f, 100.0f, 0.0f, 1.0f, Color(1.0f, 0.0f, 0.0f, 1.0f) }
     };
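     For context on the viewport question: the rasterizer maps clip space to render-target pixels through the bound viewport, and with no viewport bound there is no defined mapping, so nothing is drawn; at least one viewport is effectively required. A minimal sketch of binding one before drawing (the name context and the 800x600 size are assumptions, not from the post):

     // Minimal sketch: bind one viewport covering the render target.
     // Assumes an existing ID3D11DeviceContext* named context and an
     // 800x600 back buffer; match these to your swap chain.
     D3D11_VIEWPORT viewport = {};
     viewport.TopLeftX = 0.0f;
     viewport.TopLeftY = 0.0f;
     viewport.Width    = 800.0f;
     viewport.Height   = 600.0f;
     viewport.MinDepth = 0.0f;
     viewport.MaxDepth = 1.0f;
     context->RSSetViewports(1, &viewport);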
  2. Hi all, I've just implemented a screenshot function for my engine, using DX11. This all works fine, also when the render target has MSAA applied; in that case I use ResolveSubresource into an additional render target. Basically, without AA:

     - copy primary RT to an RT with usage STAGING and CPU access (staging texture)
     - map the RT, copy the pixel data and unmap

     And with AA:

     - copy primary RT with AA to a temporary RT without AA (usage DEFAULT, no CPU access) (resolve texture)
     - copy temporary RT without AA to a new RT with usage STAGING and CPU access (staging texture)
     - map the RT, copy the pixel data and unmap

     So it all works fine; I applied a branch for MSAA enabled/disabled. My question: according to the MSDN documentation, ResolveSubresource should only work when the source has >1 samples and the dest has 1 sample (in the desc). But somehow the path with the resolve texture, including the ResolveSubresource call, also works when I don't have MSAA enabled, so both source and dest have 1 sample. I would expect the D3D debug layer to give an error or at least a warning. Below is the code in which this applies (don't mind it not being optimized/cleaned up yet, just for illustration):

     std::vector<char> DX11RenderTarget::GetRGBByteArray() const
     {
         CComPtr<ID3D11Device> myDev;
         CComPtr<ID3D11DeviceContext> myDevContext;
         mBuffer->GetDevice(&myDev);
         myDev->GetImmediateContext(&myDevContext);

         // resolve texture
         D3D11_TEXTURE2D_DESC tempRTDesc;
         tempRTDesc.Width = mWidth;
         tempRTDesc.Height = mHeight;
         tempRTDesc.MipLevels = 1;
         tempRTDesc.ArraySize = 1;
         tempRTDesc.Format = mDXGIFormat;
         tempRTDesc.BindFlags = 0;
         tempRTDesc.MiscFlags = 0;
         tempRTDesc.SampleDesc.Count = 1;
         tempRTDesc.SampleDesc.Quality = 0;
         tempRTDesc.Usage = D3D11_USAGE_DEFAULT;
         tempRTDesc.CPUAccessFlags = 0;

         CComPtr<ID3D11Texture2D> tempTex;
         CComPtr<ID3D11Texture2D> anotherTempTex;
         myDev->CreateTexture2D(&tempRTDesc, 0, &tempTex); // add check if(FAILED), logging
         myDevContext->ResolveSubresource(tempTex, 0, mBuffer, 0,
             GetDXGIFormat(mBufferFormat)); // format = DXGI_FORMAT_R8G8B8A8_UNORM_SRGB; not flexible now

         // staging texture
         tempRTDesc.Usage = D3D11_USAGE_STAGING;
         tempRTDesc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
         myDev->CreateTexture2D(&tempRTDesc, 0, &anotherTempTex); // add check if(FAILED), logging
         myDevContext->CopyResource(anotherTempTex, tempTex);

         // map, copy pixels and unmap
         D3D11_MAPPED_SUBRESOURCE loadPixels;
         myDevContext->Map(anotherTempTex, 0, D3D11_MAP_READ, 0, &loadPixels);

         std::vector<char> image(mWidth * mHeight * 3);
         char* texPtr = (char*)loadPixels.pData;
         int currPos = 0;
         for (size_t row = 0; row < mHeight; ++row)
         {
             texPtr = (char*)loadPixels.pData + loadPixels.RowPitch * row;
             for (size_t j = 0; j < mWidth * 4; j += 4) // RGBA, skip alpha
             {
                 image[currPos]     = texPtr[j];
                 image[currPos + 1] = texPtr[j + 1];
                 image[currPos + 2] = texPtr[j + 2];
                 currPos += 3;
             }
         }
         myDevContext->Unmap(anotherTempTex, 0);
         return image;
     }
  3. I am trying to add a normal map to my project. I have an example with a cube, and I have normals in my shader, I think. Then I set the shader resource view for the texture (NOT the bump map):

     device.ImmediateContext.PixelShader.SetShaderResource(0, textureView);
     device.ImmediateContext.Draw(VerticesCount, 0);

     What should I do to set my normal map, and how is this generally done in DX11? A C++ example would help.
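     In D3D11 a normal map is simply a second texture bound to another shader resource slot. A minimal sketch of the idea in C++ terms (diffuseSRV, normalMapSRV and context are assumed names, not from the post; in SharpDX the equivalent is PixelShader.SetShaderResource(1, normalView)):

     // Bind the diffuse map to t0 and the normal map to t1; the slot
     // indices here must match the register(tN) declarations in the
     // pixel shader, e.g.
     //   Texture2D diffuseMap : register(t0);
     //   Texture2D normalMap  : register(t1);
     ID3D11ShaderResourceView* views[2] = { diffuseSRV, normalMapSRV };
     context->PSSetShaderResources(0, 2, views);

     In the shader, the sampled normal-map value is then remapped from [0,1] to [-1,1] and transformed by the tangent-space basis before lighting.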
  4. Dears, I am having shadow shimmering in my scene. I am using the shadow mapping technique. I know that there are a lot of posts all over the internet dealing with this subject, but my issue is slightly different. I know that the problem comes from a sub-texel issue when moving the camera, BUT my scene uses a different setup: the camera and the light sources are stable, and the objects inside the scene move using the Relative To Eye (RTE) concept. The issue is that when implementing cascaded shadow mapping and variance shadow mapping as stated in the DirectX examples, everything goes well except that the shadows are shimmering (flickering). The shimmering comes from the objects moving, not the camera. So when I tried to solve the problem by adjusting for the sub-texel issue based on camera movement, it didn't help, as the camera is stable but the objects are not. Any help will be appreciated. Thanks in advance.
  5. I started my "small game" a few days ago. I use DirectX 9 (for some reason I chose this old version of DX) as my graphics engine, and I need an effect like the one shown below. I think I may need some depth sorting algorithm? The text seems to be drawn on a 2D surface (sorted and always in front)? I'm a beginner in DX without too much experience... Any suggestion is welcome, thanks.
  6. DX11 Dynamic IBL

    Hi guys, I implemented IBL in my engine. Currently I precompute cubemaps offline and use them in game. This works well, but it's only static. I would like to implement dynamic cubemap creation and convolution, and I more or less know how to do it. But: my current workflow is: render an HDR cubemap in 3ds Max with mental ray (white material for everything), convolute it with IBL Baker, use it in game. Or capture a probe in game (only once), convolute it with IBL Baker and use it without changing. This is used for every "ambient" light in game. On top of that I'm rendering "normal" lights (with ambient and specular). I would like to capture and convolute cubemaps dynamically in game: so capture the cubemap in 3ds Max once, use it in game, and regenerate cubemaps there at some point. This sounds easy, but as I said, I first render ambient lights and on top of that normal lights. Then I create a cubemap from that and use it in the next frame for ambient light, and add normal lights... creating infinite feedback. Is there any way around it? I believe games are using realtime-generated IBL cubemaps. Or is it done completely differently?
  7. I've gotten to the part in my DirectX 11 project where I need to pass the MVP matrices to my vertex shader, and I'm a little lost when it comes to the use of the constant buffer with the vertex shader. I understand I need to set up the constant buffer just like any other buffer:

     1. Create a buffer description with the D3D11_BIND_CONSTANT_BUFFER flag
     2. Map my matrix data into the constant buffer
     3. Use VSSetConstantBuffers to actually use the buffer

     But I get lost at the vertex shader part: how does my vertex shader know to use this constant buffer when we get to the shader side of things? In the example I'm following they have the vertex shader below, but I don't understand how the shader knows to use the MatrixBuffer cbuffer; they just use the members directly. What if there were multiple cbuffer declarations, as the Microsoft documentation says you can have?

     // Inside vertex shader
     cbuffer MatrixBuffer
     {
         matrix worldMatrix;
         matrix viewMatrix;
         matrix projectionMatrix;
     };

     struct VertexInputType
     {
         float4 position : POSITION;
         float4 color : COLOR;
     };

     struct PixelInputType
     {
         float4 position : SV_POSITION;
         float4 color : COLOR;
     };

     PixelInputType ColorVertexShader(VertexInputType input)
     {
         PixelInputType output;

         // Change the position vector to be 4 units for proper matrix calculations.
         input.position.w = 1.0f;

         // Calculate the position of the vertex against the world, view, and projection matrices.
         output.position = mul(input.position, worldMatrix);
         output.position = mul(output.position, viewMatrix);
         output.position = mul(output.position, projectionMatrix);

         // Store the input color for the pixel shader to use.
         output.color = input.color;

         return output;
     }
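     The link between the two sides is the register slot, not the cbuffer's name. A minimal sketch of the convention (the buffer names here are assumptions):

     // Each cbuffer occupies a constant-buffer register (b0, b1, ...).
     // With a single cbuffer and no explicit register, the compiler
     // assigns b0, which is why the example "just works". With several
     // cbuffers, it is safer to pin the slots in HLSL:
     //   cbuffer MatrixBuffer : register(b0) { ... };
     //   cbuffer LightBuffer  : register(b1) { ... };
     // The first argument of VSSetConstantBuffers is that slot index.
     // matrixBuffer and lightBuffer are assumed ID3D11Buffer* names.
     context->VSSetConstantBuffers(0, 1, &matrixBuffer); // -> register(b0)
     context->VSSetConstantBuffers(1, 1, &lightBuffer);  // -> register(b1)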
  8. How do I open an image to use it as Texture2D information without D3DX11CreateShaderResourceViewFromFile? And how does it work for different formats like JPG, PNG, BMP, DDS, etc.? I have a 512 x 512 image with font letters, and I also have the position and texcoord of every letter. The main idea is that I want to obtain the image's pixel info, use the positions and texcoords to create a new texture with one letter, and render it. Or am I wrong about something?
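     One common route (a sketch of one option, not the only one; DirectXTK's WICTextureLoader and DDSTextureLoader cover the same ground) is to decode the file to raw RGBA with a library such as stb_image and hand the pixels to CreateTexture2D. The names device and font.png below are assumptions:

     // stb_image decodes JPG/PNG/BMP (but not DDS) into a tightly packed
     // RGBA8 buffer; DDS needs a dedicated loader since it is already in
     // GPU-ready (often block-compressed) form.
     #define STB_IMAGE_IMPLEMENTATION
     #include "stb_image.h"

     int w, h, channels;
     unsigned char* pixels = stbi_load("font.png", &w, &h, &channels, 4); // force RGBA

     D3D11_TEXTURE2D_DESC desc = {};
     desc.Width = w;
     desc.Height = h;
     desc.MipLevels = 1;
     desc.ArraySize = 1;
     desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
     desc.SampleDesc.Count = 1;
     desc.Usage = D3D11_USAGE_IMMUTABLE;
     desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;

     D3D11_SUBRESOURCE_DATA init = { pixels, UINT(w * 4), 0 }; // row pitch = width * 4 bytes
     ID3D11Texture2D* tex = nullptr;
     device->CreateTexture2D(&desc, &init, &tex);
     // ...then CreateShaderResourceView(tex, nullptr, &srv), and free:
     stbi_image_free(pixels);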
  9. Hey, I found a very interesting blog post here: https://bartwronski.com/2017/04/13/cull-that-cone/ However, I didn't really get how to use his "TestConeVsSphere" test in 3D (the last piece of code in his post). I have the frustum corners of a 2D tile cell in view space and my 3D cone origin and direction, so where do I place the "testSphere"? I thought about also moving the cone into view space and putting the sphere at the center of the cell with a radius of half the cell size, but what about depth? A sphere does not have infinite depth, does it? Am I missing anything? Any ideas? Thx, Thomas
  10. HRESULT FBXLoader::Open(HWND hWnd, char* Filename)
      {
          HRESULT hr = S_OK;
          if (FBXM)
          {
              FBXIOS = FbxIOSettings::Create(FBXM, IOSROOT);
              FBXM->SetIOSettings(FBXIOS);
              FBXI = FbxImporter::Create(FBXM, "");

              if (!(FBXI->Initialize(Filename, -1, FBXIOS)))
                  MessageBox(hWnd, (wchar_t*)FBXI->GetStatus().GetErrorString(), TEXT("ALM"), MB_OK);

              FBXS = FbxScene::Create(FBXM, "MCS");
              if (!FBXS)
                  MessageBox(hWnd, TEXT("Failed to create the scene"), TEXT("ALM"), MB_OK);

              if (!(FBXI->Import(FBXS)))
                  MessageBox(hWnd, TEXT("Failed to import fbx file content into the scene"), TEXT("ALM"), MB_OK);

              if (FBXI)
                  FBXI->Destroy();

              FbxNode* MainNode = FBXS->GetRootNode();
              int NumKids = MainNode->GetChildCount();
              FbxNode* ChildNode = NULL;

              for (int i = 0; i < NumKids; i++)
              {
                  ChildNode = MainNode->GetChild(i);
                  FbxNodeAttribute* NodeAttribute = ChildNode->GetNodeAttribute();

                  if (NodeAttribute->GetAttributeType() == FbxNodeAttribute::eMesh)
                  {
                      FbxMesh* Mesh = ChildNode->GetMesh();

                      NumVertices = Mesh->GetControlPointsCount(); // number of vertices
                      MyV = new FBXVTX[NumVertices];
                      for (DWORD j = 0; j < NumVertices; j++)
                      {
                          FbxVector4 Vertex = Mesh->GetControlPointAt(j); // gets the control point at the specified index
                          MyV[j].Position = XMFLOAT3((float)Vertex.mData[0], (float)Vertex.mData[1], (float)Vertex.mData[2]);
                      }

                      NumIndices = Mesh->GetPolygonVertexCount(); // number of indices; for cube 20
                      MyI = new DWORD[NumIndices];
                      MyI = (DWORD*)Mesh->GetPolygonVertices(); // index array

                      NumFaces = Mesh->GetPolygonCount();
                      MyF = new FBXFACEX[NumFaces];
                      for (int l = 0; l < NumFaces; l++)
                      {
                          MyF[l].Vertices[0] = MyI[4 * l];
                          MyF[l].Vertices[1] = MyI[4 * l + 1];
                          MyF[l].Vertices[2] = MyI[4 * l + 2];
                          MyF[l].Vertices[3] = MyI[4 * l + 3];
                      }

                      UV = new XMFLOAT2[NumIndices];
                      for (int i = 0; i < Mesh->GetPolygonCount(); i++) // polygon (= mostly rectangle) count
                      {
                          FbxLayerElementArrayTemplate<FbxVector2>* uvVertices = NULL;
                          Mesh->GetTextureUV(&uvVertices);
                          for (int j = 0; j < Mesh->GetPolygonSize(i); j++) // number of vertices in a polygon
                          {
                              FbxVector2 uv = uvVertices->GetAt(Mesh->GetTextureUVIndex(i, j));
                              UV[4 * i + j] = XMFLOAT2((float)uv.mData[0], (float)uv.mData[1]);
                          }
                      }
                  }
              }
          }
          else
              MessageBox(hWnd, TEXT("Failed to create the FBX Manager"), TEXT("ALM"), MB_OK);

          return hr;
      }

      I've been trying to load FBX files (cube.fbx) into my program, but I get this. Can someone please help me?
  11. I want to rotate my camera around the target horizontally and vertically. For example, when Q/E are pressed it should only rotate horizontally, and when 1/2 are pressed it should rotate vertically around its direction. Example camera view:

      public Matrix View { get; set; }
      public Vector3 eye { get; set; } = new Vector3(0, 0, -5);
      public Vector3 target { get; set; } = new Vector3(0, 0, 0);
      public Vector3 Translation = new Vector3(0, 0, 0);

      View = Matrix.Translation(Translation.X, Translation.Y, Translation.Z) *
             Matrix.LookAtLH(eye, target, Vector3.UnitY);

      I think the rotation must be based on the target point, not the camera itself.
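      The usual approach is indeed to rotate the eye position around the target and rebuild the view matrix each frame. A minimal sketch in DirectXMath terms (yawDelta/pitchDelta are assumed per-frame inputs driven by the keys; SharpDX has direct equivalents such as Matrix.RotationY):

      // Orbit: rotate the (eye - target) offset, then look back at the target.
      // Pitch here rotates about the world X axis for brevity; rotating about
      // the camera's right vector avoids distortion away from the Z axis.
      XMVECTOR eye    = XMLoadFloat3(&eyePos);
      XMVECTOR target = XMLoadFloat3(&targetPos);
      XMVECTOR offset = XMVectorSubtract(eye, target);

      XMMATRIX rot = XMMatrixRotationX(pitchDelta) * XMMatrixRotationY(yawDelta);
      offset = XMVector3Transform(offset, rot);

      eye = XMVectorAdd(target, offset);
      XMStoreFloat3(&eyePos, eye);
      XMMATRIX view = XMMatrixLookAtLH(eye, target, XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f));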
  12. Hi, I'm new to this forum and was wondering if there are any good places to start learning DirectX 11. I bought Frank D. Luna's book, but it's really outdated and the projects won't even compile. I was excited to start learning from this book because it gives detailed explanations of the functions being used as well as the mathematics. Are there any tutorials/courses/books that are up to date and go over the 3D math and functions in a detailed manner? Or where did anyone here learn DirectX 11? I've followed some tutorials from this website, http://www.directxtutorial.com/LessonList.aspx?listid=11, which did a nice job, but it doesn't explain what's happening with the math, so I feel like I'm not actually learning, and it only goes up to color blending. Rasteriks tutorials don't go over the functions or the math involved much either. I'd really appreciate it if anyone can point me in the right direction; I feel really lost. Thank you.
  13. I'm trying to get, basically, a screenshot (every 1 second, without saving) of a Direct3D 11 application. The code works fine on my PC (Intel CPU, Radeon GPU) but crashes after a few iterations on 2 others (Intel CPU + Intel integrated GPU, Intel CPU + Nvidia GPU).

      void extractBitmap(void* texture)
      {
          if (texture)
          {
              ID3D11Texture2D* d3dtex = (ID3D11Texture2D*)texture;
              ID3D11Texture2D* pNewTexture = NULL;

              D3D11_TEXTURE2D_DESC desc;
              d3dtex->GetDesc(&desc);
              desc.BindFlags = 0;
              desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ | D3D11_CPU_ACCESS_WRITE;
              desc.Usage = D3D11_USAGE_STAGING;
              desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM_SRGB;

              HRESULT hRes = D3D11Device->CreateTexture2D(&desc, NULL, &pNewTexture);
              if (FAILED(hRes))
              {
                  printCon(std::string("CreateTexture2D FAILED:" + format_error(hRes)).c_str());
                  if (hRes == DXGI_ERROR_DEVICE_REMOVED)
                      printCon(std::string("DXGI_ERROR_DEVICE_REMOVED -- " + format_error(D3D11Device->GetDeviceRemovedReason())).c_str());
              }
              else
              {
                  if (pNewTexture)
                  {
                      D3D11DeviceContext->CopyResource(pNewTexture, d3dtex);
                      // Working with texture
                      pNewTexture->Release();
                  }
              }
          }
          return;
      }

      D3D11SwapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), reinterpret_cast<void**>(&pBackBuffer));
      extractBitmap(pBackBuffer);
      pBackBuffer->Release();

      Crash log:

      CreateTexture2D FAILED:887a0005
      DXGI_ERROR_DEVICE_REMOVED -- 887a0020

      Once I comment out D3D11DeviceContext->CopyResource(pNewTexture, d3dtex); the code works fine on all 3 PCs.
  14. Imagine that we have a vertex structure that looks like this:

      struct Vertex
      {
          XMFLOAT3 position;
          XMFLOAT4 color;
      };

      The vertex shader looks like this:

      cbuffer MatrixBuffer
      {
          matrix world;
          matrix view;
          matrix projection;
      };

      struct VertexInput
      {
          float4 position : POSITION;
          float4 color : COLOR;
      };

      struct PixelInput
      {
          float4 position : SV_POSITION;
          float4 color : COLOR;
      };

      PixelInput main(VertexInput input)
      {
          PixelInput output;

          input.position.w = 1.0f;
          output.position = mul(input.position, world);
          output.position = mul(output.position, view);
          output.position = mul(output.position, projection);
          output.color = input.color;

          return output;
      }

      And the pixel shader looks like this:

      struct PixelInput
      {
          float4 position : SV_POSITION;
          float4 color : COLOR;
      };

      float4 main(PixelInput input) : SV_TARGET
      {
          return input.color;
      }

      Now let's create a quad consisting of 2 triangles and the vertices A, B, C and D:

      // Vertex A.
      vertices[0].position = XMFLOAT3(-1.0f, 1.0f, 0.0f);
      vertices[0].color = XMFLOAT4(0.5f, 0.5f, 0.5f, 1.0f);

      // Vertex B.
      vertices[1].position = XMFLOAT3(1.0f, 1.0f, 0.0f);
      vertices[1].color = XMFLOAT4(0.5f, 0.5f, 0.5f, 1.0f);

      // Vertex C.
      vertices[2].position = XMFLOAT3(-1.0f, -1.0f, 0.0f);
      vertices[2].color = XMFLOAT4(0.5f, 0.5f, 0.5f, 1.0f);

      // Vertex D.
      vertices[3].position = XMFLOAT3(1.0f, -1.0f, 0.0f);
      vertices[3].color = XMFLOAT4(0.5f, 0.5f, 0.5f, 1.0f);

      // 1st triangle.
      indices[0] = 0; // Vertex A.
      indices[1] = 3; // Vertex D.
      indices[2] = 2; // Vertex C.

      // 2nd triangle.
      indices[3] = 0; // Vertex A.
      indices[4] = 1; // Vertex B.
      indices[5] = 3; // Vertex D.

      This will result in a grey quad as shown in the image below. I've outlined the edges in red color to better illustrate the triangles. Now imagine that we'd want our quad to have a different color in vertex A:

      // Vertex A.
      vertices[0].position = XMFLOAT3(-1.0f, 1.0f, 0.0f);
      vertices[0].color = XMFLOAT4(0.0f, 0.0f, 0.0f, 1.0f);

      That works as expected, since there's now an interpolation between the black color in vertex A and the grey color in vertices B, C and D. Let's revert the previous changes and instead change the color of vertex C:

      // Vertex C.
      vertices[2].position = XMFLOAT3(-1.0f, -1.0f, 0.0f);
      vertices[2].color = XMFLOAT4(0.0f, 0.0f, 0.0f, 1.0f);

      As you can see, the interpolation is only done halfway across the first triangle and not across the entire quad. This is because there's no edge between vertex C and vertex B. Which brings us to my question: I want the interpolation to go across the entire quad, and not only across the triangle. So regardless of which vertex we decide to change the color of, the color interpolation should always go across the entire quad. Is there any efficient way of achieving this without adding more vertices and triangles? An illustration of what I'm trying to achieve is shown in the image below.

      Background: this is just a very brief explanation of the problem's background, in case that makes it easier to understand the problem's roots and maybe helps with finding a better solution. I'm trying to texture a terrain mesh in DirectX 11. It's working, but I'm a bit unsatisfied with the result: when changing the terrain texture of a single vertex, the interpolation with the other vertices results in a hexagonal shape instead of a squared one. As the red arrows illustrate, I'd like the texture to be interpolated all the way into the corners of the quads.
  15. Hello! I have a texture in 4K resolution and I need to downsample it to get a 1x1 resulting texture. I know that there are intermediate downsamplings before getting to the 1x1 texture, but how does downsampling work, and how do I have to code my pixel shader to downsample my texture?
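      The standard approach is a chain of passes, each rendering a full-screen quad into a target half the size of the previous one until 1x1 is reached. A hedged sketch of the per-pass pixel shader, leaning on the fact that a single bilinear sample placed at the center of a 2x2 source block already averages those four texels (assumptions: power-of-two sizes and a linear-filtering sampler bound at s0):

      Texture2D    srcTex  : register(t0); // previous (larger) level
      SamplerState linSamp : register(s0); // bilinear, clamp addressing

      float4 DownsamplePS(float4 pos : SV_POSITION,
                          float2 uv  : TEXCOORD0) : SV_TARGET
      {
          // uv is the center of the destination pixel, which maps to the
          // center of a 2x2 block in the source; bilinear filtering then
          // returns the average of those four texels in one fetch.
          return srcTex.Sample(linSamp, uv);
      }

      Repeating the pass log2(N) times reduces a 4096x4096 texture to 1x1. If a plain average is all that's needed, ID3D11DeviceContext::GenerateMips builds the same chain in one call (the texture needs the D3D11_RESOURCE_MISC_GENERATE_MIPS flag plus render-target and shader-resource bindings).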
  16. I want to render TrueType fonts in DirectX 11. Right now, in one program, I'm using stb_truetype.h to get the position and texcoords of every character I want. And in the other one, I have DirectX 11 with a texture that I can fill with some information, in this case the .ttf info. But how do I need to open the .ttf to tell the texture something like: "Hey, these are the coordinates you need, copy those pixels and fill yourself"?
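      One way to connect the two (a sketch under assumptions: ttfData already holds the file contents in memory, and device is an ID3D11Device*) is to let stb_truetype rasterize the glyphs into an 8-bit atlas and upload that atlas as a single-channel texture:

      #define STB_TRUETYPE_IMPLEMENTATION
      #include "stb_truetype.h"

      unsigned char atlas[512 * 512]; // 8-bit coverage bitmap
      stbtt_bakedchar glyphs[96];     // metrics for ASCII 32..126
      stbtt_BakeFontBitmap(ttfData, 0, 32.0f, // 32px glyph height
                           atlas, 512, 512, 32, 96, glyphs);

      // Upload the atlas as a single-channel texture; sample it in the
      // pixel shader and use the value as alpha/coverage for blending.
      D3D11_TEXTURE2D_DESC desc = {};
      desc.Width = 512;
      desc.Height = 512;
      desc.MipLevels = 1;
      desc.ArraySize = 1;
      desc.Format = DXGI_FORMAT_R8_UNORM;
      desc.SampleDesc.Count = 1;
      desc.Usage = D3D11_USAGE_IMMUTABLE;
      desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;

      D3D11_SUBRESOURCE_DATA init = { atlas, 512, 0 }; // row pitch = 512 bytes
      ID3D11Texture2D* fontTex = nullptr;
      device->CreateTexture2D(&desc, &init, &fontTex);

      // stbtt_GetBakedQuad(glyphs, 512, 512, c - 32, &x, &y, &q, 1) then
      // yields per-character screen positions and texcoords for the quad.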
  17. Hi, when reading Frank Luna's book Introduction to 3D Game Programming with DX11, chapter 10 on stenciling, the author makes a demo to achieve a mirror reflection effect. He builds the transformation matrix and transforms the directional lights as follows:

      // Build reflection matrix to reflect the skull.
      XMVECTOR mirrorPlane = XMVectorSet(0.0f, 0.0f, 1.0f, 0.0f); // xy plane
      XMMATRIX R = XMMatrixReflect(mirrorPlane);
      XMMATRIX world = XMLoadFloat4x4(&mSkullWorld) * R;
      // ...

      // Reflect the light source as well.
      // Cache the old light directions, and reflect the light directions.
      XMFLOAT3 oldLightDirections[3];
      for (int i = 0; i < 3; ++i)
      {
          oldLightDirections[i] = mDirLights[i].Direction;

          XMVECTOR lightDir = XMLoadFloat3(&mDirLights[i].Direction);
          XMVECTOR reflectedLightDir = XMVector3TransformNormal(lightDir, R);
          XMStoreFloat3(&mDirLights[i].Direction, reflectedLightDir);
      }

      But I think the direction of a light source is no more than a vector, not a normal. Why did the author use XMVector3TransformNormal to flip the lights? Thanks!
  18. Hi guys, I would like to double-check with you whether I am correct. I have a buffer of unsigned integers of format DXGI_FORMAT_R16_UINT which I would like to read from the shader. Since there is no dedicated 16-bit unsigned int type in HLSL, I guess we should go with the HLSL uint (32-bit) type:

      Buffer<uint> g_InputBuffer : register(t0); // DXGI_FORMAT_R16_UINT

      In this case, will each element of the input buffer be automatically converted into a corresponding uint (32-bit) element? And vice versa: if I want to output to a buffer of type DXGI_FORMAT_R16_UINT, I guess the HLSL uint will be automatically converted to 16-bit unsigned?

      RWBuffer<uint> g_OutputBuffer : register(u0); // DXGI_FORMAT_R16_UINT

      uint value = ...
      g_OutputBuffer[outIndex] = value;

      Thanks!
  19. In Direct3D 11, can I define two RWTexture2D resource variables which share the same register in a pixel shader, as below?

      RWTexture2D<float4> tex1 : register(u0);
      RWTexture2D<float4> tex2 : register(u0);

      Compile error:

      error X4500: overlapping register semantics not yet implemented 'u0'

      It seems that it is illegal to define them as above. The reason for defining them like this is that I want these two resource variables to bind to the same texture. I tried to create two different unordered access views whose resources are the same texture, but two UAVs conflict if they share a subresource (and therefore share the same resource); see https://msdn.microsoft.com/en-us/library/windows/desktop/ff476465(v=vs.85).aspx
  20. I got a strange result calling UpdateTileMappings and UpdateTiles every frame like this, which is the same as what the Mars Tiled Resource sample does:

      void CalledEveryFrame()
      {
          vector<D3D11_TILED_RESOURCE_COORDINATE> coordinates; // 4 different coordinates
          vector<D3D11_TILE_REGION_SIZE> regions;              // 4 as well, but only NumTiles = 1 is set
          vector<UINT> offsets(4, 0);
          vector<UINT> rangeFlags(4, 0);
          vector<UINT> rangeTileCounts(4, 1);

          deviceContext->UpdateTileMappings(myMiscTexture, 4, coordinates.data(), regions.data(),
                                            myMiscBuffer, 4, rangeFlags.data(), offsets.data(),
                                            rangeTileCounts.data(), D3D11_TILE_MAPPING_NO_OVERWRITE);

          map<int, vector<float>> myTileData; // 4 entries with different data
          for (int index = 0; index < coordinates.size(); index++)
          {
              D3D11_TILED_RESOURCE_COORDINATE coord = coordinates[index];
              D3D11_TILE_REGION_SIZE region = regions[index];
              deviceContext->UpdateTiles(myMiscTexture, &coord, &region,
                                         myTileData[index].data(), D3D11_TILE_COPY_NO_OVERWRITE);
          }
      }

      I always get the same tile data shown at the 4 different coordinates. And if I duplicate the code above with another 4 coordinates, like this:

      void CalledEveryFrame()
      {
          // pseudocode
          UpdateTileMappings(coordinates);
          for (coordinates) { UpdateTiles(TileData); }

          UpdateTileMappings(coordinates2);
          for (coordinates2) { UpdateTiles(TileData2); }
      }

      I always get the same tile data shown in all 8 places. Why?
  21. After getting "black squares" rendered on my "bloom" post process, I searched and read this thread where I think L. Spiro had the same problem. Then I searched my HLSL programs to find where I could have NaNs, and found one problem in my point light shader:

      float4 posVS = gbPosition_tex.Sample(sampPointClamp, tc);

      // this has no effect!
      if (IsNAN(posVS))
          posVS = float4(0.0f, 0.0f, 0.0f, 1.0f);
      if (any(isinf(posVS)))
          posVS = float4(0.0f, 0.0f, 0.0f, 1.0f);

      ... // code skipped, nothing else touches posVS

      float3 v = normalize(-posVS.xyz);

      // but if I uncomment this there are NO more black squares!
      //if (IsNAN(v))
      //    v = float3(0.0f, 0.0f, 0.0f);

      ... // rest of lighting computation

      Please read the comments in the code. IsNAN is defined like this (I can't use the "native" isnan function because I can't set the /Gis option for the fxc compiler inside VS2013):

      bool IsNAN(float n)
      {
          return (n < 0.0f || n > 0.0f || n == 0.0f) ? false : true;
      }
      bool IsNAN(float2 v) { return (IsNAN(v.x) || IsNAN(v.y)) ? true : false; }
      bool IsNAN(float3 v) { return (IsNAN(v.x) || IsNAN(v.y) || IsNAN(v.z)) ? true : false; }
      bool IsNAN(float4 v) { return (IsNAN(v.x) || IsNAN(v.y) || IsNAN(v.z) || IsNAN(v.w)) ? true : false; }

      wtf is going on?
  22. Hi, I am new to Direct3D 11 and I want to get help from you. In the Direct3D samples, we usually define RWTexture2D variables globally and use them in functions directly. Is it legal to pass such a variable as a parameter of a user-defined function?

      RWTexture2D<float4> texture : register(u0);

      void setData(RWTexture2D<float4> tex, float2 data)
      {
          tex[uint2(0, 0)] = data;
      }

      PS_OUTPUT main(PS_INPUT input)
      {
          setData(texture, float2(0.0, 0.0));
          PS_OUTPUT output;
          return output;
      }
  23. I am writing code for capturing Skype video calls on Windows 10, and I am facing the following issue: Skype uses Direct3D 11, and I am able to hook into IDXGISwapChain::Present and use the code below to capture a screenshot:

      ID3D11Device* device = NULL;
      HRESULT hr = pswap->GetDevice(__uuidof(ID3D11Device), (void**)&device);
      if (hr != S_OK)
      {
          OutputDebugStringA("++++++++ GetDevice failed ++++++");
          return;
      }

      ID3D11DeviceContext* context;
      device->GetImmediateContext(&context);

      ID3D11Texture2D* backBuffer = nullptr;
      hr = pswap->GetBuffer(0, __uuidof(*backBuffer), (LPVOID*)&backBuffer);
      if (hr != S_OK)
      {
          OutputDebugStringA("++++++++ IDXGISwapChain::GetBuffer failed++++++");
          return;
      }

      // Save to a JPG file
      hr = SaveWICTextureToFile(context, backBuffer, GUID_ContainerFormatJpeg, strScreenPath);

      SafeRelease(device);
      SafeRelease(context);
      SafeRelease(backBuffer);

      The problem is that what I capture is either the local video frame (camera) or the remote video frame; what I want to capture is the composited image, just like what I see in the Skype window during a video call. I remember that in D3D9 there is GetRenderTargetData() to retrieve the raw data from the back buffer, and it contains the rendered scene, the same as what I see in the Skype window during the video call, but I can't find a way to do this in D3D11... Any advice is appreciated.
  24. I need to resolve the depth buffer texture. I get this depth/stencil buffer (ID3D11DepthStencilView*) every frame with the following parameters (from its desc):

      SampleDesc.Count = 2;
      SampleDesc.Quality = 0;
      ArraySize = 1;
      Format = DXGI_FORMAT_R24G8_TYPELESS;

      And in the end I need a simple non-MSAA depth buffer, for use with a non-MSAA render target (which I already have). As I understand it, I cannot do this with a hardware resolve, i.e. with DeviceContext->ResolveSubresource(...). It should be possible to do this through a pixel shader(?), but I still do not understand how. If anyone has a solution to this problem (resolving a depth texture) through a pixel shader, please show it. Thanks.
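      For what it's worth, a common sketch of the pixel shader approach (assumptions: the depth texture was created with D3D11_BIND_SHADER_RESOURCE so it can be viewed as DXGI_FORMAT_R24_UNORM_X8_TYPELESS, a full-screen triangle is drawn into the non-MSAA depth target, and depth writes are enabled):

      Texture2DMS<float> msaaDepth : register(t0); // 2 samples per pixel here

      float ResolveDepthPS(float4 pos : SV_POSITION) : SV_DEPTH
      {
          // Read each sample at this pixel and keep the nearest one;
          // which reduction is "right" (min, max, or just sample 0)
          // depends on what the resolved buffer is used for.
          float d = msaaDepth.Load(int2(pos.xy), 0);
          d = min(d, msaaDepth.Load(int2(pos.xy), 1)); // SampleDesc.Count = 2
          return d;
      }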
  25. Hi guys, I was wondering if any of you ever had an encounter with 0xCCCCCCCC as a return code/result when calling D3D11CreateDeviceAndSwapChain or D3D11CreateDevice. My code worked just fine as of yesterday, and did for months, but I now get 0xCCCCCCCC instead of the usual S_OK (or rarely S_FALSE) for some reason. Literally nothing in my code changed; I woke up to that return code. Whatever it is, it's in no way preventing my application from working as it should, but I do have to bypass the error checking I've put in place because of how unusual the error code is. Can anyone shed some light on this undocumented return code?