About _Camus_

  1. In fact it was wrong: I was sending the same face every time, and updating every mip level isn't needed since GenerateMips fills in the chain. I also followed your advice and used the desc to get the mipmap count! Thanks, this is the working code:

```cpp
D3D11_TEXTURE2D_DESC pDesc;
Tex->GetDesc(&pDesc);
int MipMapCount = pDesc.MipLevels;

if (cil_props & CIL_CUBE_MAP)
{
    for (int i = 0; i < 6; i++)
    {
        D3D11DeviceContext->UpdateSubresource(Tex.Get(),
            D3D11CalcSubresource(0, i, MipMapCount), 0,
            initData[i].pSysMem, initData[i].SysMemPitch, 0);
    }
}
else
{
    D3D11DeviceContext->UpdateSubresource(Tex.Get(), 0, 0,
        buffer, initData[0].SysMemPitch, 0);
}
```

And the correct output: [screenshot] Again, thanks for your time.
  2. Thank you so much! I was doing it wrong; I changed it to:

```cpp
int MipMapCount = 1 + floor(log10((float)max(this->x, this->y)) / log10(2.0));

if (cil_props & CIL_CUBE_MAP)
{
    for (int i = 0; i < 6; i++)
    {
        for (int j = 0; j < MipMapCount; j++)
        {
            D3D11DeviceContext->UpdateSubresource(Tex.Get(),
                D3D11CalcSubresource(j, i, MipMapCount), 0,
                initData[0].pSysMem, initData[0].SysMemPitch, size);
        }
    }
}
else
{
    D3D11DeviceContext->UpdateSubresource(Tex.Get(), 0, 0,
        buffer, initData[0].SysMemPitch, 0);
}
```

And now it's working just fine! Thank you once again!
  3. Hello, I would like to generate mipmaps for an RGBA8 texture, but it only works for one face. Here is my source:

```cpp
D3D11_TEXTURE2D_DESC desc = { 0 };
desc.Width = this->x;
desc.Height = this->y;
if (cil_props & CIL_CUBE_MAP)
    desc.ArraySize = 6;
else
    desc.ArraySize = 1;
if (this->props & TEXT_BASIC_FORMAT::CH_ALPHA)
    desc.Format = DXGI_FORMAT_R8_UNORM;
else
    desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
desc.MiscFlags = 0;
if (cil_props & CIL_CUBE_MAP)
{
    desc.MiscFlags = D3D11_RESOURCE_MISC_TEXTURECUBE;
}
desc.MipLevels = 0;
desc.MiscFlags |= D3D11_RESOURCE_MISC_GENERATE_MIPS;

HRESULT hr = D3D11Device->CreateTexture2D(&desc, nullptr, Tex.GetAddressOf());
if (hr != S_OK)
{
    this->id = -1;
    return;
}

D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc{};
srvDesc.Format = desc.Format;
if (cil_props & CIL_CUBE_MAP)
{
    srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURECUBE;
    srvDesc.TextureCube.MipLevels = -1;
}
else
{
    srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
    srvDesc.Texture2D.MipLevels = -1;
}
D3D11Device->CreateShaderResourceView(Tex.Get(), &srvDesc, pSRVTex.GetAddressOf());

D3D11_SUBRESOURCE_DATA initData[6];
int bufferSize = this->size / 6;
if (cil_props & CIL_CUBE_MAP)
{
    unsigned char *pHead = buffer;
    for (int i = 0; i < 6; i++)
    {
        initData[i].pSysMem = pHead;
        initData[i].SysMemPitch = sizeof(unsigned char) * this->x * 4;
        pHead += bufferSize;
    }
}
else
{
    initData[0].pSysMem = buffer;
    initData[0].SysMemPitch = sizeof(unsigned char) * this->x * 4;
}

if (cil_props & CIL_CUBE_MAP)
{
    for (int i = 0; i < 6; i++)
    {
        D3D11DeviceContext->UpdateSubresource(Tex.Get(), i, 0,
            initData[i].pSysMem, initData[i].SysMemPitch, size); // size is the size of the entire buffer
    }
}
else
{
    D3D11DeviceContext->UpdateSubresource(Tex.Get(), 0, 0,
        buffer, initData[0].SysMemPitch, 0);
}

D3D11DeviceContext->GenerateMips(pSRVTex.Get());
```

This code assumes RGBA (32 bpp) and no mipmaps in the buffer. This is how it looks without mipmaps: [screenshot] This is how it looks with the code above; only one face is updated, but curiously, that face alone does have mipmaps: [screenshot] Can anyone help me? I can add mipmaps offline, but this should work; I would like to handle the case where the buffer is a cubemap and doesn't have any mipmaps. I tried on Intel and Nvidia, so I guess it's not a driver issue but something I am doing wrong. Thanks!
  4. Hello, I would like to ask some questions about corner cases when validating a GPU driver in a virtualized environment: what kind of test would stress the GPU in ways that a non-virtualized environment would not? I mean from the perspective of the driver, not the hypervisor. I am not asking for specifics, just points to explore; I know this kind of knowledge isn't common and in most cases is vendor specific, I just need some guidance on topics to study. Thank you!
  5. How fast!    Thank you, it worked!
  6. Hello, I've been looking for info but couldn't find any about this. On D3D9, using the FVF, I was able to pack color into unsigned bytes, 8 bits per component; the semantic in the HLSL shader was just COLOR and everything worked out. On D3D11, however, using the input layout interface, I'm declaring:

```cpp
static D3D11_INPUT_ELEMENT_DESC vertexDesc_2[] =
{
    { "POSITION", 0, DXGI_FORMAT_R32G32_FLOAT,  0, 0,  D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,  0, 8,  D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "COLOR",    0, DXGI_FORMAT_R8G8B8A8_UINT, 0, 16, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};
```

Just two components for the position X and Y, 4 bytes each; two components for U and V, 4 bytes each; and color, 4 components of 1 byte each. In my vertex program:

```hlsl
struct VertexShaderInput
{
    float2 pos      : POSITION;
    float2 texcoord : TEXCOORD;
    float4 color    : COLOR;
};
```

And obviously it doesn't work; the debugger tells me: [...] I understand the problem but I don't know how to proceed; there is very little info about this, and I really don't want to waste 16 bytes by expanding the colors to one float per component. I know it's possible because that's how blend weights and blend indices work for skinning; this is the same thing, just for colors. Thank you for your help!
  7. GF460GTX SLI runs Visual Studio flawlessly
  8. _Camus_

    100 billion polygons realtime?

    Ok, thanks for pointing that out, I understand now. Interesting without a doubt, and I agree: it looks different and I kind of like it. It's new technology, new ways of doing the traditional process; it's totally worth it. I am tired of Unreal, I think it's overrated; just because its tools are good and easy to use doesn't make it the best engine. It's the most common and traditional: it uses light maps, PRT, and thousands of filters to make it pretty, glow everywhere. I don't know, maybe that's impressive for some gamers, not for me. id Software, on the other hand, is impressive; again, effort spent on new technology is progress, not the same thing with more glow all over... BTW, the label of the video should say "100 billion polygons in the content creation process"; they tend to make us think it's in real time, which it is not.
  9. _Camus_

    100 billion polygons realtime?

    I understand the concept of paging and memory virtualization. The problem I see is: where is the data stored? 100 billion? There is no way to store that amount of data... Let's say it were possible to store it (9 hard drives of 1.5 TB?); just imagine the time spent searching for which polygons of which page or segment need to be cached for rendering. It can't be 100 billion. 100 million sounds real and possible using memory virtualization, because only a few thousand will be cached and rendered, and there is space on a hard drive, even in RAM, to store 100 million polygons. But 100 billion? No way.
  10. This afternoon a friend of mine came to tell me about this video: http://www.youtube.c...h?v=M04SMNkTx9E After seeing it, and especially the title: 100 billion polys?? Really?? One thing is a big megatexture of 128,000 x 128,000, which can take 32 GB or more on disk, maybe less with some sort of compression; 100 billion polygons is ANOTHER thing entirely. In the best-case scenario, let's say a classic 44-byte vertex composed of position, normal, texture coordinates, and tangent; that geometry would take up to 12 TB!!

    100,000,000,000 polygons x 3 vertices per poly = 300,000,000,000 vertices
    300,000,000,000 vertices x 44 bytes per vertex = 13,200,000,000,000 bytes
    = 12,890,625,000 KB = 12,588,501 MB = 12,293 GB = about 12 TB

    How could this be possible? I'm missing something, that's for sure, because this was a talk at GDC; I don't believe I am the only one who noticed that 100 billion is a really big number... Another thing: at second 0:11, the red couch has hard edges, even the boat has them, and the leaves are simple planes with alpha. I can't believe that a demo with as many polygons as stars in the galaxy can't have modeled leaves, not just planes! From second 1:31 to 1:33, look at the bottom left corner: those are billboards, not full geometry; clearly those planes rotate to face the camera all the time...
  11. _Camus_

    Play animation once in DirectX

    In fact it's really easy: take a look at D3DXCreateCompressedAnimationSet and the D3DXPLAY_ONCE flag.
  12. Yes, I believe it's D3D11_FILTER_MIN_MAG_MIP_POINT. Because you are dealing with depth to reconstruct position, if you have any kind of linear/anisotropic filtering, your depth value will be interpolated when you access it via tex2D; you will get an interpolated depth, which means your reconstructed position will be a mess. Filtering is useful with shadow maps, but for a G-Buffer that's not the case.
  13. Also, you shouldn't use any filtering at all, i.e. avoid D3D11_FILTER_MIN_MAG_MIP_LINEAR.
  14. _Camus_

    Best way to sample depth maps

    Yes, so far I do the same thing:

```hlsl
float2 ShadowTexC = 0.5 * Coords.xy / Coords.w + float2(0.5, 0.5);
ShadowTexC.y = 1.0f - ShadowTexC.y;

if (ShadowTexC.x > 1.0 || ShadowTexC.y > 1.0 ||
    ShadowTexC.x < 0.0 || ShadowTexC.y < 0.0 ||
    Coords.w < 0.0)
    return 1.0f;

// ... depth comparison
```

In this case, pixels outside of and behind the light frustum do not perform the depth comparison. If someone has a better way to do this, it would be appreciated.

    Quote: "I'm using XNA 4.0, so DirectX 9 and Shader Model 3. I have clamp on right now; I think that results in the texture sampler returning a value at the edge of the texture. Border was doing something similar. In any event, without using that silly nested statement I get shadows EVERYWHERE that the texture doesn't explicitly say there is none."
  15. _Camus_

    MRT & Z Pre-Pass manually ?

    Hi, thanks @Hodgman, that DX9 secret capability is awesome. And yes, it was my mistake: it's D3DFMT_R32F, I was thinking about depth-stencil too much. Now I can read it directly; in fact the Z pre-pass is working now:

    1 - Render to depth
    2 - Clear only the render target, disabling Z writing and setting ZFunc to LessEqual

    Using 3 point lights, the Z pre-pass saves me about 0.1 ms on the second pass. Now that I can also read directly from depth, I was curious to do the depth test myself in the fragment program. First, to ensure I was using the right projective texture coordinates, I did the following, expecting to get z-fighting:

```hlsl
float2 Texcoords = 0.5 * In.Other.xy / In.Other.w + float2(0.5, 0.5);
Texcoords.y = 1.0f - Texcoords.y;
float dist = In.Other.z / In.Other.w;
if (dist == tex2D(nolerpsampler, Texcoords).r)
    return float4(1.0f, 0.0f, 0.0f, 1.0f);
```

And I was right: [screenshot] Well, I thought it would be a matter of adding bias to the distance and doing the test:

```hlsl
float dist = (In.Other.z + 0.001f) / In.Other.w;
if (dist > tex2D(nolerpsampler, Texcoords).r)
    return float4(1.0f, 0.0f, 0.0f, 1.0f);
```

And I was wrong: [screenshot] Why? It is supposed to be (dist + bias), not (dist - bias); why does it work inverse to what I expected? Thanks again