Search the Community

Showing results for tags 'DX11' in content posted in Graphics and GPU Programming.



More search options

  • Search By Tags

    Type tags separated by commas.
  • Search By Author

Content Type


Categories

  • Audio
    • Music and Sound FX
  • Business
    • Business and Law
    • Career Development
    • Production and Management
  • Game Design
    • Game Design and Theory
    • Writing for Games
    • UX for Games
  • Industry
    • Interviews
    • Event Coverage
  • Programming
    • Artificial Intelligence
    • General and Gameplay Programming
    • Graphics and GPU Programming
    • Engines and Middleware
    • Math and Physics
    • Networking and Multiplayer
  • Visual Arts
  • Archive

Categories

  • News

Categories

  • Audio
  • Visual Arts
  • Programming
  • Writing

Categories

  • Audio Jobs
  • Business Jobs
  • Game Design Jobs
  • Programming Jobs
  • Visual Arts Jobs

Categories

  • GameDev Unboxed

Forums

  • Audio
    • Music and Sound FX
  • Business
    • Games Career Development
    • Production and Management
    • Games Business and Law
  • Game Design
    • Game Design and Theory
    • Writing for Games
  • Programming
    • Artificial Intelligence
    • Engines and Middleware
    • General and Gameplay Programming
    • Graphics and GPU Programming
    • Math and Physics
    • Networking and Multiplayer
  • Visual Arts
    • 2D and 3D Art
    • Critique and Feedback
  • Topical
    • Virtual and Augmented Reality
    • News
  • Community
    • GameDev Challenges
    • For Beginners
    • GDNet+ Member Forum
    • GDNet Lounge
    • GDNet Comments, Suggestions, and Ideas
    • Coding Horrors
    • Your Announcements
    • Hobby Project Classifieds
    • Indie Showcase
    • Article Writing
  • Affiliates
    • NeHe Productions
    • AngelCode
  • Workshops
    • C# Workshop
    • CPP Workshop
    • Freehand Drawing Workshop
    • Hands-On Interactive Game Development
    • SICP Workshop
    • XNA 4.0 Workshop
  • Archive
    • Topical
    • Affiliates
    • Contests
    • Technical

Calendars

  • Community Calendar
  • Games Industry Events
  • Game Jams

Blogs

There are no results to display.

There are no results to display.

Marker Groups

  • Members

Group


About Me


Website


Industry Role


Twitter


Github


Twitch


Steam

Found 1342 results

  1. Hi guys, just having a play with the profiler for VS 2015. It makes sense so far, but there is a bit I am not quite sure about. What is the purple part of the image? It just shows up as 'GPU Work', and only periodically. Is this GPU work something that is happening in another program, perhaps? It doesn't fit with my render cycle at all. Thanks in advance.
  2. So I'm a little confused about how to use textures in DirectX. I'm currently loading an image from a file, which creates an ID3D11Texture2D, but is this the final step to actually use the texture? Looking at this tutorial I see that there is something called an ID3D11ShaderResourceView. What is the purpose of this? It looks like that's how they send the texture to the shader, but is it possible to just bind the ID3D11Texture2D and use it directly? If not, does this mean I need to create an ID3D11ShaderResourceView for each texture I load?
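     A minimal sketch of the usual pattern (names assumed, error handling omitted): shaders read resources through views, so each texture you want to sample needs a shader resource view created over it and bound to a slot.

        // The texture must have been created with D3D11_BIND_SHADER_RESOURCE.
        // Passing nullptr for the view description derives the view from the
        // texture's own format and mip count.
        ID3D11ShaderResourceView* srv = nullptr;
        HRESULT hr = device->CreateShaderResourceView(texture2D, nullptr, &srv);

        // Bind it for the pixel shader; the HLSL side sees it as a Texture2D in register(t0).
        context->PSSetShaderResources(0, 1, &srv);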
  3. This works for planes, but not for solid objects:

        FbxLayerElementArrayTemplate<FbxVector2>* uvVertices = NULL;
        Mesh->GetTextureUV(&uvVertices);
        MyUV = new XMFLOAT2[uvVertices->GetCount()];
        for (int i = 0; i < Mesh->GetPolygonCount(); i++)      // polygon (mostly quad) count
        {
            for (int j = 0; j < Mesh->GetPolygonSize(i); j++)  // number of vertices in this polygon
            {
                FbxVector2 uv = uvVertices->GetAt(Mesh->GetTextureUVIndex(i, j));
                MyUV[Mesh->GetTextureUVIndex(i, j)] = XMFLOAT2((float)uv.mData[0], (float)uv.mData[1]);
            }
        }

     I can load the 3D model but I can't wrap a texture around it. How do I duplicate vertices easily? Please help.
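     A minimal sketch of one way to do the duplication (names assumed): instead of writing UVs back by UV index, emit one vertex per polygon corner, duplicating the position so every corner keeps its own UV.

        // Build an un-indexed vertex array: one vertex per polygon corner.
        // Requires <vector> and the FBX SDK headers.
        std::vector<XMFLOAT3> positions;
        std::vector<XMFLOAT2> uvs;
        FbxVector4* controlPoints = Mesh->GetControlPoints();
        for (int i = 0; i < Mesh->GetPolygonCount(); i++)
        {
            for (int j = 0; j < Mesh->GetPolygonSize(i); j++)
            {
                int cpIndex = Mesh->GetPolygonVertex(i, j);   // control point (position) index
                FbxVector4 p = controlPoints[cpIndex];
                FbxVector2 uv = uvVertices->GetAt(Mesh->GetTextureUVIndex(i, j));
                positions.push_back(XMFLOAT3((float)p[0], (float)p[1], (float)p[2]));
                uvs.push_back(XMFLOAT2((float)uv.mData[0], (float)uv.mData[1]));
            }
        }
        // Triangulate the mesh first (e.g. FbxGeometryConverter) or fan-split
        // polygons with more than three corners before drawing.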
  4. I've got a quick question about buffers when it comes to DirectX 11. If I bind a buffer using a command like IASetVertexBuffers, IASetIndexBuffer, VSSetConstantBuffers or PSSetConstantBuffers, and then later on I update that bound buffer's data using Map/Unmap or any of the other update commands, do I need to rebind the buffer in order for my update to take effect? If I don't rebind, is that really bad, as in I get a performance hit? My thought process is that if the buffer is already bound, why do I need to rebind it? I'm using that same buffer, it just has different data.
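     A minimal sketch of the behaviour I'd expect here (names assumed, buffer created with D3D11_USAGE_DYNAMIC): the binding refers to the buffer object itself, not to a snapshot of its contents, so updating the data through Map/Unmap does not require rebinding.

        // 'cbuffer' was bound earlier with VSSetConstantBuffers(0, 1, &cbuffer).
        D3D11_MAPPED_SUBRESOURCE mapped;
        context->Map(cbuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
        memcpy(mapped.pData, &newData, sizeof(newData));
        context->Unmap(cbuffer, 0);
        context->DrawIndexed(indexCount, 0, 0);   // draws with the updated data; no rebind needed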
  5. Hello, I managed so far to implement NVIDIA's NDF filtering at a basic level (the paper can be found here). Here is my code so far:

        //...
        // project the half vector onto the normal (?)
        float3 hppWS = halfVector / dot(halfVector, geometricNormal);
        float2 hpp = float2(dot(hppWS, wTangent), dot(hppWS, wBitangent));
        // compute the pixel footprint
        float2x2 dhduv = float2x2(ddx(hpp), ddy(hpp));
        // compute the rectangular area of the pixel footprint
        float2 rectFp = min((abs(dhduv[0]) + abs(dhduv[1])) * 0.5, 0.3);
        // map the area to GGX roughness
        float2 covMx = rectFp * rectFp * 2;
        roughness = sqrt(roughness * roughness + covMx);
        //...

     Now I want to combine this with LEAN mapping, as stated in Chapter 5.5 of the NDF paper, but I struggle to understand what these sections actually mean in code. I suppose the first-order moments are the B coefficient of the LEAN map, however things like

        float3 hppWS = halfVector / dot(halfVector, float3(lean_B, 0));

     don't produce anything useful. Next there's the covariance matrix. Does it simply mean this?

        // M and B are the coefficients from the LEAN map
        float2x2 sigma_mat = float2x2(
            M.x - B.x * B.x, M.z - B.x * B.y,
            M.z - B.x * B.y, M.y - B.y * B.y);

     Finally, the part that confuses me the most: how am I supposed to convolve two matrices? I know the concept of convolution in terms of functions, not matrices. Should I multiply them? That didn't produce any useful output. I hope someone can help with this maybe too specific question; I'm really desperate to make this work and I've spent too many hours of trial & error... Cheers, Julian
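     A possible reading of the convolution step (my assumption, not something stated in the post): if both the pixel-footprint filter and the LEAN-mapped NDF are treated as 2D Gaussians, then "convolving" them means convolving two Gaussian distributions, and the result is again a Gaussian whose covariance matrices simply add:

        N(mu1, Sigma1) convolved with N(mu2, Sigma2) = N(mu1 + mu2, Sigma1 + Sigma2)

     so under that reading the two 2x2 matrices are added element-wise rather than multiplied.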
  6. In DirectX 11 we have a 24-bit integer depth + 8-bit stencil format for depth-stencil resources (DXGI_FORMAT_D24_UNORM_S8_UINT). However, in an AMD GPU documentation for consoles I have seen it mentioned that internally this format is implemented as a 64-bit resource with 32 bits for depth (but truncated to 24 bits) and 32 bits for stencil (truncated to 8 bits). AMD recommends using a 32-bit floating-point depth buffer with 8-bit stencil instead, which is this format: DXGI_FORMAT_D32_FLOAT_S8X24_UINT. Does anyone know why this is? What is the usual way of doing this, just follow the recommendation and use a 64-bit depth-stencil? Are there performance considerations, or is it just recommended so as not to waste memory? What about NVIDIA and Intel, is using a 24-bit depth buffer relevant on their hardware? Cheers!
  7. Hi, I have read a lot about binding a constant buffer to a shader, but something is still unclear to me. E.g. when performing

        vertexshader.setConstantbuffer(buffer, slot)

     is the buffer bound (a) to the vertex shader stage, or (b) to the vertex shader that is currently set as the active vertex shader? Is it possible to bind a constant buffer to a vertex shader, e.g. VS_A, and keep this binding even after the active vertex shader has changed? I mean I want to bind constantbuffer_A to VS_A and constantbuffer_B to VS_B, and then only use UpdateSubresource without issuing a setConstantbuffer command every time. Look at this example:

        SetVertexShader(VS_A)
        UpdateSubresource(buffer_A)
        vertexshader.setConstantbuffer(buffer_A, slot_A)
        perform draw call            // buffer_A is used

        SetVertexShader(VS_B)
        UpdateSubresource(buffer_B)
        vertexshader.setConstantbuffer(buffer_B, slot_A)
        perform draw call            // buffer_B is used

        SetVertexShader(VS_A)
        perform draw call            // now which buffer is used???

     I ask this question because I have made a custom render engine and want to reduce UpdateSubresource and setConstantbuffer calls to the minimum.
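     A minimal sketch of what I believe the raw D3D11 API does (names assumed): constant-buffer bindings are state on the device context's VS stage, not on the shader object, so they survive shader changes and the last buffer set on a slot wins.

        context->VSSetShader(vsA, nullptr, 0);
        context->VSSetConstantBuffers(0, 1, &bufferA);
        context->Draw(vertexCount, 0);            // VS_A reads bufferA from slot 0

        context->VSSetShader(vsB, nullptr, 0);    // slot 0 still holds bufferA at this point
        context->VSSetConstantBuffers(0, 1, &bufferB);
        context->Draw(vertexCount, 0);            // VS_B reads bufferB

        context->VSSetShader(vsA, nullptr, 0);
        context->Draw(vertexCount, 0);            // slot 0 still holds bufferB, so VS_A now reads bufferB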
  8. Hi, I am writing a linear allocator of per-frame constants using the DirectX 11.1 API. My plan is to replace the traditional constant allocation strategy, where most of the work is done by the driver behind my back, with a manual one inspired by the DirectX 12 and Vulkan APIs. In brief, the allocator maintains a list of 64K pages; each page owns a constant buffer managed as a ring buffer. Each page has a history of the N previous frames. At the beginning of a new frame, the allocator retires the frames that have been processed by the GPU and frees up the corresponding space in each page. I use DirectX 11 queries for detecting when a frame is complete and the ID3D11DeviceContext1::VS/PSSetConstantBuffers1 methods for binding constant buffers with an offset. The new allocator appears to be working, but I am not 100% confident it is actually correct. In particular:

     1) It relies on queries, which I am not too familiar with. Are they 100% reliable?

     2) It maps/unmaps the constant buffer of each page at the beginning of a new frame and then writes the mapped memory as the frame is built. In pseudo code:

        BeginFrame:
            page.data = device.Map(page.buffer)
            device.Unmap(page.buffer)
        RenderFrame:
            Alloc(size, initData)
                ...
                memcpy(page.data + page.start, initData, size)
            Alloc(size, initData)
                ...
                memcpy(page.data + page.start, initData, size)

     (Note: calling Unmap at the end of a frame prevents binding the mapped constant buffers and triggers an error in the debug layer.) Is this valid?

     3) I don't fully understand how many frames I should keep in the history. My intuition says it should be equal to the maximum latency reported by IDXGIDevice1::GetMaximumFrameLatency, which is 3 on my machine. But while this value works fine in a unit test, in a more complex demo I need to manually set it to 5, otherwise the allocator starts overwriting previous frames that have not completed yet. Shouldn't the swap chain Present method block the CPU in this case?

     4) Should I expect this approach to be more efficient than the one managed by the driver? I don't have meaningful profile data yet.

     Is anybody familiar with the approach described above who can answer my questions and discuss the pros and cons of this technique based on their experience? For reference, I've uploaded the (WIP) allocator code at https://paste.ofcode.org/Bq98ujP6zaAuKyjv4X7HSv. Feel free to adapt it in your engine and please let me know if you spot any mistakes. Thanks, Stefano Lanza
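     A minimal sketch of the 11.1 offset-binding call the post refers to (names assumed): the first-constant and num-constants arguments are counted in 16-byte constants and must be multiples of 16, i.e. 256-byte aligned.

        // Bind a 256-byte-aligned sub-range of the page's large constant buffer to VS slot 0.
        UINT firstConstant = allocByteOffset / 16;                 // byte offset -> 16-byte constants
        UINT numConstants  = 16 * ((allocByteSize + 255) / 256);   // round the size up to 256 bytes
        context1->VSSetConstantBuffers1(0, 1, &page.buffer, &firstConstant, &numConstants);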
  9. I am really stuck with something that should be very simple in DirectX 11. 1. I can draw lines using PC (position, color) vertices and a simple shader just fine. 2. I can draw 3D triangles using PCN (position, color, normal) vertices just fine (even transparency and specular Blinn-Phong shaders). However, if I'm using my 3D shader and I want to draw my PC lines in the same scene, how can I do that? If I change my lines to PCN and pass them to the 3D shader with my triangles, the lighting screws them all up. I only want the lighting for the 3D triangles, but no specular Blinn-Phong/lighting for the lines (just PC). I am sure this is because if I change the lines to PCN there is not really a correct "normal" for the lines. I assume I somehow need to draw the 3D triangles using one shader, and then "switch" to another shader and draw the lines? But I have no clue how to use two different shaders in the same scene. And then are the lines just drawn on top of the triangles, or vice versa (maybe draw-order dependent)? I must be missing something really basic, so if anyone can just point me in the right direction (or link to an example showing the implementation of multiple shaders) that would be REALLY appreciated. I'm also more than happy to post my simple test code if that helps as well! THANKS SO MUCH IN ADVANCE!!!
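     A minimal sketch of switching shaders within one frame (names assumed): draw the lit triangles with one shader pair, then change the input layout, shaders and topology and draw the unlit lines.

        // Lit PCN triangles
        context->IASetInputLayout(layoutPCN);
        context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
        context->VSSetShader(vsLit, nullptr, 0);
        context->PSSetShader(psBlinnPhong, nullptr, 0);
        // ... bind the triangle vertex buffer ...
        context->Draw(triangleVertexCount, 0);

        // Unlit PC lines
        context->IASetInputLayout(layoutPC);
        context->IASetPrimitiveTopology(D3D11_TOPOLOGY_LINELIST == 0 ? D3D11_PRIMITIVE_TOPOLOGY_LINELIST : D3D11_PRIMITIVE_TOPOLOGY_LINELIST);
        context->VSSetShader(vsColor, nullptr, 0);
        context->PSSetShader(psColor, nullptr, 0);
        // ... bind the line vertex buffer ...
        context->Draw(lineVertexCount, 0);
        // With depth testing enabled, draw order does not matter for opaque geometry.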
  10. Hey all. I've been working with compute shaders lately, and was hoping to build out some libraries to reuse code. As a prerequisite for my current project, I needed to sort a big array of data in my compute shader, so I was going to implement quicksort as a library function. My implementation was going to use an inout array to apply the changes to the referenced array. I spent half the day yesterday debugging in Visual Studio before I realized that the sort, while it worked INSIDE the function, reverted to the original state after returning from the function. My hack fix was just to inline the code, but this is not a great solution for the future. Any ideas? I've considered just returning an array of ints that represents the sorted indices.
  11. I was thinking about how to render multiple objects: things like sprites, truck models, plane models, boat models, etc. And I'm not too sure about this process. Let's say I have a vector of Model objects:

        class Model
        {
            Matrix4    modelMat;
            VertexData vertices;
            Texture    texture;
            Shader     shader;
        };

     Since each model has its own model matrix, as all models should, does this mean I now need one draw call per model? Because each model that needs to be drawn could change the MVP matrix used by the bound vertex shader, meaning I have to keep updating/mapping the constant buffer my MVP matrix is stored in, which is used by the vertex shader. Am I thinking about all of this wrong? Isn't this horribly inefficient?
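     A minimal sketch of the loop this implies (member and variable names assumed): one constant-buffer update and one draw per model is the straightforward version, and it is fine for a moderate number of objects; instancing or pre-built per-object constant buffers are the usual next step if it becomes a bottleneck.

        for (const Model& m : models)
        {
            Matrix4 mvp = projection * view * m.modelMat;   // assumed column-major convention

            // Upload this model's MVP into the dynamic constant buffer.
            D3D11_MAPPED_SUBRESOURCE mapped;
            context->Map(mvpCBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
            memcpy(mapped.pData, &mvp, sizeof(mvp));
            context->Unmap(mvpCBuffer, 0);

            // Bind the model's texture and geometry, then draw.
            context->PSSetShaderResources(0, 1, &m.texture.srv);
            context->Draw(m.vertices.count, 0);
        }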
  12. I am making a game using a custom graphics engine written in Direct3D 11. I've been working to lower the amount of system RAM the game uses, and I noticed something that, to me, was surprising and went against my expectations: Textures that are using D3D11_USAGE_DEFAULT or D3D11_USAGE_IMMUTABLE (along with a D3D11_CPU_ACCESS_FLAG of 0) are increasing my system RAM usage according to the size of the texture (i.e., a 1024x1024x32bpp texture adds about 4MB of system RAM usage). I had thought that the point of the D3D11_USAGE_DEFAULT and (especially) D3D11_USAGE_IMMUTABLE usage modes was to put the texture in VRAM instead of system RAM? I might expect this behavior on a system with integrated graphics and shared memory, but I'm seeing this on a desktop with no integrated graphics and only a GTX 1070 GPU. So am I just not understanding how this works? Is there any way I can make sure textures are allocated only in VRAM? Thanks for your help!
  13. I'm trying to port my engine to DirectX and I'm currently having issues with depth reconstruction. It works perfectly in OpenGL (even though I use a bit of an expensive method). Every part besides the depth reconstruction works so far. I use GLM because it's a good math library that has no need to install any dependencies or anything for the user. So basically I get my GLM matrices:

        struct DefferedUBO {
            glm::mat4 view;
            glm::mat4 invProj;
            glm::vec4 eyePos;
            glm::vec4 resolution;
        };

        DefferedUBO deffUBOBuffer;
        // ...
        glm::mat4 projection = glm::perspective(engine.settings.fov, aspectRatio, 0.1f, 100.0f);

        // Get my camera
        CTransform *transform = &engine.transformSystem.components[engine.entities[entityID].components[COMPONENT_TRANSFORM]];

        // Get the view matrix
        glm::mat4 view = glm::lookAt(
            transform->GetPosition(),
            transform->GetPosition() + transform->GetForward(),
            transform->GetUp()
        );

        deffUBOBuffer.invProj = glm::inverse(projection);
        deffUBOBuffer.view = glm::inverse(view);
        if (engine.settings.graphicsLanguage == GRAPHICS_DIRECTX) {
            deffUBOBuffer.invProj = glm::transpose(deffUBOBuffer.invProj);
            deffUBOBuffer.view = glm::transpose(deffUBOBuffer.view);
        }

        // Abstracted so I can use OGL, DX, VK, or even Metal when I get around to it.
        deffUBO->UpdateUniformBuffer(&deffUBOBuffer);
        deffUBO->Bind();

     Then in HLSL, I simply use the following:

        cbuffer MatrixInfoType {
            matrix invView;
            matrix invProj;
            float4 eyePos;
            float4 resolution;
        };

        float4 ViewPosFromDepth(float depth, float2 TexCoord) {
            float z = depth; // * 2.0 - 1.0;
            float4 clipSpacePosition = float4(TexCoord * 2.0 - 1.0, z, 1.0);
            float4 viewSpacePosition = mul(invProj, clipSpacePosition);
            viewSpacePosition /= viewSpacePosition.w;
            return viewSpacePosition;
        }

        float3 WorldPosFromViewPos(float4 view) {
            float4 worldSpacePosition = mul(invView, view);
            return worldSpacePosition.xyz;
        }

        float3 WorldPosFromDepth(float depth, float2 TexCoord) {
            return WorldPosFromViewPos(ViewPosFromDepth(depth, TexCoord));
        }

        // ...
        // Sample the hardware depth buffer.
        float depth = shaderTexture[3].Sample(SampleType[0], input.texCoord).r;
        float3 position = WorldPosFromDepth(depth, input.texCoord).rgb;

     Here's the result: it just looks like random colors multiplied with the depth. Ironically, when I remove the transposing I get something closer to the truth, but not quite. You're looking at Crytek Sponza. As you can see, the green area moves and rotates with the bottom of the camera. I have no idea at all why. The correct version, along with Albedo, Specular, and Normals, is also attached.
  14. I am attempting to rotate a triangle around the X/Y axis, but for some reason I cannot get the triangle to display in some cases (the screen is blank). Specific example: if I try to rotate around the Y axis by 45 degrees, nothing shows, BUT using -45 degrees I see my triangle rotated correctly. If I try to rotate around the X axis by 45 degrees, nothing shows, BUT using -45 degrees I see my triangle rotated correctly. Am I missing something? I don't understand why I can't see it in one direction but the other is fine. I understand that back-face culling happens by default, but in this case I should be able to see the triangle because I have not gone past the back-face culling threshold.

        // Vertex data used
        Vertex vertices[] = {
            {   0.0f,   0.0f, 1.0f, Color(1.0f, 0.0f, 0.0f, 1.0f) },
            {   0.0f, 100.0f, 1.0f, Color(1.0f, 0.0f, 0.0f, 1.0f) },
            { 100.0f, 100.0f, 1.0f, Color(1.0f, 0.0f, 0.0f, 1.0f) }
        };

        // Matrix setup. Matrices are column-major.
        // My Matrix4 class is the identity matrix by default.
        Matrix4 model;
        model.rotateY(45); // rotate 45 degrees and disappear :( Same happens if the call is changed to rotateX
        Matrix4 view;
        Matrix4 projection;
        projection.makeOrthoLH(); // using an orthographic projection

        // Create the MVP
        Matrix4 mvp = projection * view * model;

        // Map the MVP matrix to the constant buffer
        D3D11_MAPPED_SUBRESOURCE mapResource;
        HRESULT mapped = deviceContext->Map(cbuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapResource);
        for (int i = 0; i < 16; ++i)
            ((FLOAT*)mapResource.pData)[i] = mvp.data[i];
        deviceContext->Unmap(cbuffer, 0);

     makeOrthoLH method:

        void Matrix4::makeOrthoLH()
        {
            // LH ortho hardcoded for 800x600, where near is 0.0f and far is 100.0f
            FLOAT w = 800.0f;
            FLOAT h = 600.0f;
            FLOAT n = 0.0f;
            FLOAT f = 100.0f;

            data[0]  = 2.0f / w; data[1]  = 0.0f;     data[2]  = 0.0f;         data[3]  = 0.0f;
            data[4]  = 0.0f;     data[5]  = 2.0f / h; data[6]  = 0.0f;         data[7]  = 0.0f;
            data[8]  = 0.0f;     data[9]  = 0.0f;     data[10] = 1 / (f - n);  data[11] = 0.0f;
            data[12] = 0;        data[13] = 0;        data[14] = -n / (f - n); data[15] = 1.0f;
        }

     rotateY method:

        void Matrix4::rotateY(FLOAT degrees)
        {
            FLOAT cosVal = cosf(degrees * PI / 180.0f);
            FLOAT sinVal = sinf(degrees * PI / 180.0f);
            FLOAT rot[16] = {
                cosVal, 0, -sinVal, 0,
                0,      1,  0,      0,
                sinVal, 0,  cosVal, 0,
                0,      0,  0,      1
            };
            Matrix4 r(rot);
            for (int i = 0; i < 16; ++i)
                data[i] = r.data[i];
        }

     Shader being used:

        cbuffer mvpBuffer : register(b0)
        {
            matrix mvp;
        };

        struct VOut
        {
            float4 position : SV_POSITION;
            float4 color : COLOR;
        };

        VOut VShader(float3 position : POSITION, float4 color : COLOR)
        {
            VOut output;
            output.position = mul(mvp, float4(position, 1.0f));
            output.color = color;
            return output;
        }

        float4 PShader(float4 position : SV_POSITION, float4 color : COLOR) : SV_TARGET
        {
            return color;
        }
  15. I'm trying to render the characters of a bitmap texture that I generated with stb_truetype.h. Currently I have the texcoords and the longitude (width) of each character; my DirectX Texture2D is 512x512, normalized from -1 to 1. So I send the character "C" to render, creating my triangulation starting from coords (0,0) -> (X0,Y0). Then I get my X1 by adding X0 + longitude. NOTE: I transformed my longitude to screen-space coordinates by dividing by 512 (the texture width) beforehand. This is my bitmap texture (image attached). This is my vertex buffer:

        float sizeX = static_cast<float>(tempInfo.longitude) / 512;
        float sizeY = 0.0625f; // 32/512 (32 -> height of font)

        spritePtr[0].Pos = XMFLOAT3(0.0f + sizeX, 0.0f + sizeY, 1.0f);
        spritePtr[1].Pos = XMFLOAT3(0.0f + sizeX, 0.0f, 1.0f);
        spritePtr[2].Pos = XMFLOAT3(0.0f, 0.0f, 1.0f);
        spritePtr[3].Pos = XMFLOAT3(0.0f, 0.0f, 1.0f);
        spritePtr[4].Pos = XMFLOAT3(0.0f, 0.0f + sizeY, 1.0f);
        spritePtr[5].Pos = XMFLOAT3(0.0f + sizeX, 0.0f + sizeY, 1.0f);

        spritePtr[0].Tex = XMFLOAT2(tempInfo.Tex_u1, tempInfo.Tex_v0);
        spritePtr[1].Tex = XMFLOAT2(tempInfo.Tex_u1, tempInfo.Tex_v1);
        spritePtr[2].Tex = XMFLOAT2(tempInfo.Tex_u0, tempInfo.Tex_v1);
        spritePtr[3].Tex = XMFLOAT2(tempInfo.Tex_u0, tempInfo.Tex_v1);
        spritePtr[4].Tex = XMFLOAT2(tempInfo.Tex_u0, tempInfo.Tex_v0);
        spritePtr[5].Tex = XMFLOAT2(tempInfo.Tex_u1, tempInfo.Tex_v0);

     NOTE: spritePtr is the pointer to my dynamic buffer that I map and unmap. And this is my result (image attached): I don't understand why it is so small compared to my bitmap, and if I expand the triangulation I get a pixelated character.
  16. Hi all, I want to enable and disable shaders in MPC-HC (Media Player Classic). MPC-HC has a shader option using HLSL shaders. I want the player to check each file name before it plays the file: if the video file name is video.GR.Mp4 it should play it with the grayscale shader, and if it is a standard file name (Video.Mp4, without the GR. marker) it should play it without the shader. Here is the shader I have for grayscale:

        // $MinimumShaderProfile: ps_2_0
        sampler s0 : register(s0);

        float4 main(float2 tex : TEXCOORD0) : COLOR
        {
            float c0 = dot(tex2D(s0, tex), float4(0.299, 0.587, 0.114, 0));
            return c0;
        }

     I want to add an if/block statement or a boolean to detect the file name before the shader is called, in order to either run it or skip it entirely. Any thoughts or help?
  17. Hey, I just had the idea to use a RWTexture2D within the pixel shader of my particles to try to reduce overdraw/fill rate. I have created an R32_FLOAT texture with a UAV and bound it together with the render target. In my pixel shader I just add a constant value to the pixel of the current fragment, while checking against a maximum at the beginning. However, it does not work. It seems that the texture is not getting written. What am I doing wrong? Or is it not possible to read/write at the same time in a pixel shader? Thx, Thomas
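     A minimal sketch of how I understand the binding has to be done (names assumed): a UAV written from a pixel shader is not bound with PSSetShaderResources; it has to be bound alongside the render targets, and its slot index must come after the render-target slots. In HLSL the RWTexture2D then needs the matching register, e.g. register(u1).

        ID3D11UnorderedAccessView* uavs[] = { overdrawUAV };   // view over the R32_FLOAT texture
        UINT initialCounts[] = { (UINT)-1 };                   // -1 = keep counters (not an append buffer anyway)
        context->OMSetRenderTargetsAndUnorderedAccessViews(
            1, &rtv, dsv,      // one render target plus depth
            1, 1, uavs,        // UAV start slot 1 (right after the single RTV), one UAV
            initialCounts);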
  18. I am developing a test application with DirectX 11 and feature level 10.1. Everything is working as expected and fine, but when I maximize the window with my graphics in it, the time per frame increases drastically, like 1 ms to 40 ms. If it stays in a lower resolution range it works perfectly, but I need to support the maximized resolution as well. Destination hardware and software specs: NVS 300 graphics card, Windows 7 32-bit, resolution 1920x1080. The application draws a few sine curves with Direct3D, in C# via SharpDX, using Windows Forms with a control and a SharpDX-initialized swap chain, programmed to change the back buffer on the resize event (the problem would occur without that too, though). I used a System.Diagnostics.Stopwatch to find the issue at the code line mSwapChain.Present(1, PresentFlags.None); the time it needs suddenly increases by a lot when maximized. If I drag and resize the window manually, the frame time jumps at some resolution, which seems weird. If I comment out the drawing code, I get the same behavior. On my local development machine with an HD 4400 I don't have this issue; there it works and the frame time isn't affected by resizing at all. Any help is appreciated! I am fairly new to programming in C#, Windows and DirectX, so please be kind.
  19. I'm attempting to get a triangle to display in DirectX 11, but it seems like I can only get it to show when I have created and set a viewport. Is having at least one viewport set required in DirectX 11? If not, what have I done wrong, given that my triangle's vertex data looks like:

        Vertex vertices[] = {
            {   0.0f,   0.0f, 0.0f, 1.0f, Color(1.0f, 0.0f, 0.0f, 1.0f) },
            { 100.0f,   0.0f, 0.0f, 1.0f, Color(1.0f, 0.0f, 0.0f, 1.0f) },
            { 100.0f, 100.0f, 0.0f, 1.0f, Color(1.0f, 0.0f, 0.0f, 1.0f) }
        };
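     A minimal sketch for reference (back-buffer size assumed): D3D11 does not set a default viewport, so without one the rasterizer has a zero-sized area and nothing is drawn. Setting one that matches the back buffer before drawing is required.

        D3D11_VIEWPORT vp = {};
        vp.TopLeftX = 0.0f;
        vp.TopLeftY = 0.0f;
        vp.Width    = 800.0f;   // assumed back-buffer width
        vp.Height   = 600.0f;   // assumed back-buffer height
        vp.MinDepth = 0.0f;
        vp.MaxDepth = 1.0f;
        context->RSSetViewports(1, &vp);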
  20. Hi all, I've just implemented a screenshot function for my engine, using DX11. This all works fine; when the render target has MSAA applied, I use ResolveSubresource into an additional render target. Basically, without AA:
     - copy the primary RT to an RT with usage STAGING and CPU access (staging texture)
     - map the RT, copy the pixel data and unmap
     And with AA:
     - copy the primary RT with AA to a temporary one without AA (usage DEFAULT, no CPU access) (resolve texture)
     - copy the temporary RT without AA to a new RT with usage STAGING and CPU access (staging texture)
     - map the RT, copy the pixel data and unmap
     So it all works fine; I applied a branch for MSAA enabled/disabled. My question: according to the MSDN documentation, ResolveSubresource should only work when the source has more than 1 sample and the destination has 1 sample (in the desc). But somehow the process with the resolve texture, including the ResolveSubresource call, also works when I don't have MSAA enabled, so both source and dest have 1 sample. I would expect the D3D debugger/logging to give an error or at least a warning. Below is the code in which this applies (don't mind it not being optimized/cleaned up yet, just for illustration):

        std::vector<char> DX11RenderTarget::GetRGBByteArray() const
        {
            CComPtr<ID3D11Device> myDev;
            CComPtr<ID3D11DeviceContext> myDevContext;
            mBuffer->GetDevice(&myDev);
            myDev->GetImmediateContext(&myDevContext);

            // resolve texture
            D3D11_TEXTURE2D_DESC tempRTDesc;
            tempRTDesc.Width = mWidth;
            tempRTDesc.Height = mHeight;
            tempRTDesc.MipLevels = 1;
            tempRTDesc.ArraySize = 1;
            tempRTDesc.Format = mDXGIFormat;
            tempRTDesc.BindFlags = 0;
            tempRTDesc.MiscFlags = 0;
            tempRTDesc.SampleDesc.Count = 1;
            tempRTDesc.SampleDesc.Quality = 0;
            tempRTDesc.Usage = D3D11_USAGE_DEFAULT;
            tempRTDesc.CPUAccessFlags = 0;

            CComPtr<ID3D11Texture2D> tempTex;
            CComPtr<ID3D11Texture2D> anotherTempTex;
            myDev->CreateTexture2D(&tempRTDesc, 0, &tempTex); // add check if(FAILED), logging
            myDevContext->ResolveSubresource(tempTex, 0, mBuffer, 0, GetDXGIFormat(mBufferFormat));
            // format = DXGI_FORMAT_R8G8B8A8_UNORM_SRGB; not flexible for now

            // staging texture
            tempRTDesc.Usage = D3D11_USAGE_STAGING;
            tempRTDesc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
            myDev->CreateTexture2D(&tempRTDesc, 0, &anotherTempTex); // add check if(FAILED), logging
            myDevContext->CopyResource(anotherTempTex, tempTex);

            // map, copy pixels and unmap
            D3D11_MAPPED_SUBRESOURCE loadPixels;
            myDevContext->Map(anotherTempTex, 0, D3D11_MAP_READ, 0, &loadPixels);
            std::vector<char> image(mWidth * mHeight * 3);
            char *texPtr = (char*)loadPixels.pData;
            int currPos = 0;
            for (size_t row = 0; row < mHeight; ++row)
            {
                texPtr = (char*)loadPixels.pData + loadPixels.RowPitch * row;
                for (size_t j = 0; j < mWidth * 4; j += 4) // RGBA, skip Alpha
                {
                    image[currPos]     = texPtr[j];
                    image[currPos + 1] = texPtr[j + 1];
                    image[currPos + 2] = texPtr[j + 2];
                    currPos += 3;
                }
            }
            myDevContext->Unmap(anotherTempTex, 0);
            return image;
        }
  21. I am trying to add a normal map to my project. I have an example of a cube, and I think I have normals in my shader. Then I set the shader resource view for the texture (NOT the bump map):

        device.ImmediateContext.PixelShader.SetShaderResource(0, textureView);
        device.ImmediateContext.Draw(VerticesCount, 0);

     What should I do to set my normal map, and how is it generally done in DX11? A C++ example would help.
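     A minimal C++ sketch of the binding side (names assumed): the normal map is just a second shader resource bound next to the diffuse texture; the pixel shader reads it from the matching register (t1 here) and transforms the sampled vector with a tangent-space basis before lighting.

        ID3D11ShaderResourceView* srvs[] = { diffuseSRV, normalMapSRV };
        context->PSSetShaderResources(0, 2, srvs);   // t0 = diffuse, t1 = normal map
        context->Draw(vertexCount, 0);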
  22. Dears, I am having shadow shimmering in my scene. I am using the shadow mapping technique. I know that there are a lot of posts on the internet dealing with this subject, but my issue is slightly different. I know that the problem normally comes from a sub-texel issue when moving the camera, BUT my scene uses a different setup: the camera and the light sources are stable and the objects inside the scene move, following the Relative To Eye (RTE) concept. The issue is that when implementing cascaded shadow mapping and variance shadow mapping as stated in the DirectX examples, everything goes well except that the shadows are shimmering (flickering). The shimmering comes from the fact that the objects are moving, not the camera. So when I tried to fix the sub-texel problem based on camera movement, it didn't solve anything, because the camera is stable but the objects are not. Any help will be appreciated. Thanks in advance.
  23. I started my "small game" a few days ago. I use DirectX 9 (for some reason I chose this old version of DX) as my graphics engine, and I need an effect like the one shown below. I think I may need some depth sorting algorithm? The text seems to be drawn on a 2D surface (sorted and always in front)? I'm a beginner in DX without much experience... Any suggestions are welcome, thanks.
  24. DX11 Dynamic IBL

     Hi guys, I implemented IBL in my engine. Currently I precompute cubemaps offline and use them in game. This works well, but it's only static. I would like to implement dynamic cubemap creation and convolution, and I more or less know how to do it. But: my current workflow is: render an HDR cubemap in 3ds Max with mental ray (white material for everything), convolve it with IBLBaker, use it in game. Capture a probe in game (only once), convolve it with IBLBaker and use it without changing. This is used for every "ambient" light in game. On top of that I'm rendering "normal" lights (with ambient and specular). I would like to capture and convolve cubemaps dynamically in game: capture the cubemap in 3ds Max once, use it in game, and generate cubemaps there at some point. This sounds easy, but as I said, I first render the ambient lights and on top of that the normal lights. Then I create a cubemap from that and use it in the next frame for the ambient light and add the normal lights... creating an infinite feedback loop. Is there any way around it? I believe games are using realtime-generated IBL cubemaps. Or is it done completely differently?
  25. I've gotten to the part in my DirectX 11 project where I need to pass the MVP matrices to my vertex shader, and I'm a little lost when it comes to the use of the constant buffer with the vertex shader. I understand I need to set up the constant buffer just like any other buffer: 1. Create a buffer description with the D3D11_BIND_CONSTANT_BUFFER flag. 2. Map my matrix data into the constant buffer. 3. Use VSSetConstantBuffers to actually use the buffer. But I get lost at the vertex shader part: how does my vertex shader know to use this constant buffer when we get to the shader side of things? In the example I'm following I see they have this as their vertex shader, but I don't understand how the shader knows to use the MatrixBuffer cbuffer; they just use the members directly. What if there were multiple cbuffer declarations, as the Microsoft documentation says you can have?

        // Inside vertex shader
        cbuffer MatrixBuffer
        {
            matrix worldMatrix;
            matrix viewMatrix;
            matrix projectionMatrix;
        };

        struct VertexInputType
        {
            float4 position : POSITION;
            float4 color : COLOR;
        };

        struct PixelInputType
        {
            float4 position : SV_POSITION;
            float4 color : COLOR;
        };

        PixelInputType ColorVertexShader(VertexInputType input)
        {
            PixelInputType output;

            // Change the position vector to be 4 units for proper matrix calculations.
            input.position.w = 1.0f;

            // Calculate the position of the vertex against the world, view, and projection matrices.
            output.position = mul(input.position, worldMatrix);
            output.position = mul(output.position, viewMatrix);
            output.position = mul(output.position, projectionMatrix);

            // Store the input color for the pixel shader to use.
            output.color = input.color;

            return output;
        }
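     A minimal sketch of the link between the two sides (names assumed): every cbuffer is assigned a constant-buffer register (b0, b1, ...), either explicitly with register(bN) or by the compiler in declaration order, and the slot argument of VSSetConstantBuffers is that register index. The shader does not look the buffer up by name at runtime; it simply reads whatever is bound to that slot.

        // C++ side: slot 0 feeds the cbuffer living in register b0 (MatrixBuffer above).
        context->VSSetConstantBuffers(0, 1, &matrixBuffer);
        // If there were a second cbuffer, e.g. "cbuffer LightBuffer : register(b1)",
        // it would be fed from slot 1:
        // context->VSSetConstantBuffers(1, 1, &lightBuffer);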