ErnstH

Member · Content Count: 17 · Community Reputation: 244 Neutral
  1. Thank you unbird & kauna for your help.   As you can see the stencil buffer is working fine now!
  2. One of the things I tried was to create soft shadows by blurring the stencil buffer. This does not work well: not all edges should be blurred, and a constant blur radius does not look realistic.
  3. Aha, the StencilRef value is hidden in the OMSetDepthStencilState method. Thanks! I was expecting to discover great new effects by reading the stencil in my shaders. No luck so far; I have only been using it for standard shadows, mirrors and masking.
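For reference, a minimal sketch of what that looks like, assuming a device and immediate context named mDevice and mContext as in the other snippets in this thread. The comparison function lives in the depth-stencil state object, and the reference value is the second argument of OMSetDepthStencilState:

[source]
// Minimal sketch: the D3D11 equivalent of D3DRS_STENCILREF / D3DRS_STENCILFUNC.
D3D11_DEPTH_STENCIL_DESC d;
ZeroMemory(&d, sizeof(d));
d.DepthEnable = TRUE;
d.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
d.DepthFunc = D3D11_COMPARISON_LESS;
d.StencilEnable = TRUE;
d.StencilReadMask = 0xff;
d.StencilWriteMask = 0xff;
d.FrontFace.StencilFunc = D3D11_COMPARISON_LESS_EQUAL;   // was D3DCMP_LESSEQUAL
d.FrontFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
d.FrontFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
d.FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_KEEP;
d.BackFace = d.FrontFace;

ID3D11DepthStencilState* State = nullptr;
HRESULT hr = mDevice->CreateDepthStencilState(&d, &State);

// The second parameter is the reference value (was D3DRS_STENCILREF).
if(SUCCEEDED(hr)) mContext->OMSetDepthStencilState(State, 128);
[/source]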
  4. I am reading the stencil buffer in my shader with this function:

[source]
uint MGetStencilValue(float2 inUV){
    int x = Viewport.x + inUV.x * (Viewport.y - Viewport.x);
    int y = Viewport.z + inUV.y * (Viewport.w - Viewport.z);
    return MyTexture3.Load(int3(x, y, 0)).y;
}
[/source]

For a simple stencil mask it is used in a pixel shader like this:

[source]
if(MGetStencilValue(uv) < 128) discard;
[/source]

I find it strange that I have to convert the floating-point UV coordinates to integer pixel coordinates and do the viewport calculations myself, but it works!

However, when I want to use the stencil buffer as a mask for a mirror, I have to include the above code in ALL shaders that are used to render the scene in the mirror. I wonder if there's a global way to make any shader use the stencil mask. In DirectX 9 you could do this:

[source]
mDevice->SetRenderState(D3DRS_STENCILREF, 128);
mDevice->SetRenderState(D3DRS_STENCILFUNC, D3DCMP_LESSEQUAL);
[/source]

I can't find a way to do this in DirectX 11. Do I really have to include my stencil code in all my shaders?
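For context: shader code like the above can only Load stencil values if the depth buffer was created with a typeless format and given a stencil-reading shader resource view. A minimal sketch of that setup, assuming a device named mDevice, hypothetical Width/Height variables, and the usual 24-bit depth / 8-bit stencil layout:

[source]
// Sketch: create a depth buffer whose stencil can be read in a shader.
// R24G8_TYPELESS lets us create both a depth-stencil view and an SRV on it.
D3D11_TEXTURE2D_DESC td;
ZeroMemory(&td, sizeof(td));
td.Width = Width;
td.Height = Height;
td.MipLevels = 1;
td.ArraySize = 1;
td.Format = DXGI_FORMAT_R24G8_TYPELESS;
td.SampleDesc.Count = 1;
td.Usage = D3D11_USAGE_DEFAULT;
td.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;

ID3D11Texture2D* DepthTexture = nullptr;
HRESULT hr = mDevice->CreateTexture2D(&td, nullptr, &DepthTexture);

// The depth-stencil view uses the concrete depth format...
D3D11_DEPTH_STENCIL_VIEW_DESC dd;
ZeroMemory(&dd, sizeof(dd));
dd.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
dd.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
hr = mDevice->CreateDepthStencilView(DepthTexture, &dd, &mDepthStencilView);

// ...while the SRV exposes the stencil bits in the green channel, which is
// why the shader above reads .y from MyTexture3. Note the DSV must be
// unbound before the SRV is sampled.
D3D11_SHADER_RESOURCE_VIEW_DESC sd;
ZeroMemory(&sd, sizeof(sd));
sd.Format = DXGI_FORMAT_X24_TYPELESS_G8_UINT;
sd.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
sd.Texture2D.MipLevels = 1;
hr = mDevice->CreateShaderResourceView(DepthTexture, &sd, &mStencilSRV);
[/source]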
  5. Oh wow, that's great. Thank you!!! Love this forum.
  6. My mesh vertices are stored this way:

[source]
D3D11_INPUT_ELEMENT_DESC layout[] = {
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,                            D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "NORMAL",   0, DXGI_FORMAT_R32G32B32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "COLOR",    0, DXGI_FORMAT_R32_UINT,        0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,    0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "TANGENT",  0, DXGI_FORMAT_R32G32B32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};
[/source]

It is presented to my shaders this way:

[source]
struct VS_INPUT{
    float4 pos:     POSITION;
    float4 normal:  NORMAL;
    uint   col:     COLOR;
    float2 tex:     TEXCOORD0;
    float4 tangent: TANGENT;
};
[/source]

The 3D vectors are magically converted into 4D with the w component set to 1. No idea where this is done, but it works. I wonder if there's a similar trick to convert my UINT into a float4. The rest of my HLSL code needs colours as float4, so it has to be converted somewhere.

This is the function I use now. It works fine, but something tells me that it is, as MJP would say, "extremely abnormal" and "off the beaten path". There must be a more efficient and cleaner way.

[source]
float4 MGetVertexColour(uint inCol){
    if(inCol == 0xffffffff) return float4(1, 1, 1, 1);

    float a = ((inCol & 0xff000000) >> 24);
    float r = ((inCol & 0xff0000)   >> 16);
    float g = ((inCol & 0xff00)     >> 8);
    float b = ((inCol & 0xff));
    return float4(r, g, b, a) / 255.0;
}
[/source]
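The same input-assembler conversion that expands the float3 positions can handle the colour as well; a sketch, assuming the packed dword is D3DCOLOR-style 0xAARRGGBB as MGetVertexColour implies:

[source]
// Sketch: declare the colour as a normalized 8-bit format so the input
// assembler converts it to float4 in the 0..1 range automatically.
// B8G8R8A8_UNORM matches the 0xAARRGGBB byte order on little-endian CPUs;
// BGRA vertex-buffer support is optional on some hardware, so repacking the
// bytes to RGBA order and using DXGI_FORMAT_R8G8B8A8_UNORM is the
// always-supported route.
D3D11_INPUT_ELEMENT_DESC colourElement =
    { "COLOR", 0, DXGI_FORMAT_B8G8R8A8_UNORM, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 };
[/source]

The HLSL side can then declare the semantic directly as float4 col: COLOR; and MGetVertexColour is no longer needed.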
  7. When you render a quad with UV coordinates (0,0) top left to (1,1) bottom right, you can render your cubemap as a cylinder panorama with a pixel shader like this:

[source]
float4 PS(VS_OUTPUT input): SV_Target{
    float heading = input.tex.x * cPI * 2;
    float pitch   = cPI * 0.5 - input.tex.y * cPI;

    float3 ReflectionVector;
    ReflectionVector.x = sin(heading) * cos(pitch);
    ReflectionVector.y = sin(pitch);
    ReflectionVector.z = cos(heading) * cos(pitch);

    return MyTexture0.Sample(MySampler0, ReflectionVector);
}
[/source]
  8. Wow, that feels like streaming your vertices. Thanks for the info!   I'm not porting from OpenGL, but from DirectX 9 using IDirect3DDevice9::DrawIndexedPrimitiveUP with D3DPT_TRIANGLELIST: http://msdn.microsoft.com/en-us/library/windows/desktop/bb174370(v=vs.85).aspx
  9. Interesting technique. I found more info about it on this page: http://msdn.microsoft.com/en-us/library/windows/desktop/dn508285(v=vs.85).aspx However, you can't dynamically change the size of the buffer. What I really would like is a render-and-forget function. I can't build that myself without using the WaitForGPU2Finish() function. It could have been so easy if the pipeline used refcounting, releasing the buffer after use.
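A common workaround for the fixed size, sketched here under assumptions (the names mVB and mCapacity are hypothetical, not from this thread): keep one dynamic buffer and recreate it with more capacity whenever a mesh doesn't fit.

[source]
// Hypothetical grow-on-demand dynamic vertex buffer: a buffer's size is
// fixed at creation, so recreate it with more capacity when needed.
ID3D11Buffer* mVB = nullptr;
UINT mCapacity = 0;   // current capacity in bytes

HRESULT EnsureCapacity(ID3D11Device* inDevice, UINT inBytes){
    if(inBytes <= mCapacity) return S_OK;     // current buffer is big enough

    if(mVB){ mVB->Release(); mVB = nullptr; }

    D3D11_BUFFER_DESC d;
    ZeroMemory(&d, sizeof(d));
    d.ByteWidth = inBytes * 2;                // grow with headroom to avoid churn
    d.Usage = D3D11_USAGE_DYNAMIC;
    d.BindFlags = D3D11_BIND_VERTEX_BUFFER;
    d.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;

    HRESULT hr = inDevice->CreateBuffer(&d, nullptr, &mVB);
    if(SUCCEEDED(hr)) mCapacity = d.ByteWidth;
    return hr;
}
[/source]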
  10. Questions about billboards

      A low-tech way is to rotate the model coordinates in your vertex shader before applying the world matrix. The easiest way is to multiply them with the transpose of your view matrix with the translations set to 0 (the rotation part of the view matrix is orthogonal, so its transpose is its inverse):

[source]
float4 RotateTowardsEye(float4 inModelPos){
    // Build the transpose of the view matrix...
    float4x4 Q;
    for(int i=0; i<4; i++){
        for(int j=0; j<4; j++){
            Q[j][i] = View[i][j];
        }
    }
    // ...and zero out the translation (now in the fourth column).
    Q[0][3] = 0;
    Q[1][3] = 0;
    Q[2][3] = 0;

    return mul(inModelPos, Q);
}
[/source]

      This way you can use 3D models as particles/billboards as well, not just simple shapes like quads.
  11. Thank you for your replies. I will look into using dynamic buffers. I hope I no longer need the WaitForGPU2Finish() function when using their Map method.
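That is indeed the promise of the discard pattern; a minimal sketch, assuming a dynamic vertex buffer mVB and hypothetical Vertices/NumVertices data. With D3D11_MAP_WRITE_DISCARD the runtime hands the CPU a fresh region of memory while the GPU keeps reading the old contents, so no explicit wait is needed:

[source]
// Sketch: updating a dynamic buffer each frame without waiting on the GPU.
// WRITE_DISCARD tells the runtime to rename the buffer: the CPU gets fresh
// memory immediately while the GPU finishes with the previous contents.
D3D11_MAPPED_SUBRESOURCE m;
HRESULT hr = mContext->Map(mVB, 0, D3D11_MAP_WRITE_DISCARD, 0, &m);
if(SUCCEEDED(hr)){
    memcpy(m.pData, Vertices, NumVertices * sizeof(Vertex));
    mContext->Unmap(mVB, 0);
}

UINT stride = sizeof(Vertex), offset = 0;
mContext->IASetVertexBuffers(0, 1, &mVB, &stride, &offset);
mContext->Draw(NumVertices, 0);
[/source]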
  12. I'm generating meshes during rendering for a lot of effects:
      - lightning flashes
      - extruded silhouette edges for light beams and stencil shadows
      - marching-cubes particle blobs
      - voxel models (optimized for view and transparency bounds)

      I guess some of them could be generated with geometry shaders, but others are too complex or need too much data. And I run out of video memory if I keep every instance, so I have to delete some during rendering. The question is when?
  13. Most of my meshes are created before rendering starts and deleted afterwards. Some have to be recreated every frame. And some have to be recreated multiple times during a frame.

      The last category flickers: meshes are invisible at random moments and sometimes even take on the appearance of other meshes!!!

      When I say "mesh" I mean a class encapsulating a vertex and an index buffer, both using an ID3D11Buffer. When the mesh is deleted, those buffers are released. The loop looks like this:

[source]
for(int i=0; i<10; i++){
    delete Mesh;
    Mesh = new Mesh(MyParameters);
    Mesh->Render();
}
[/source]

      The flickering can be fixed by adding a Sleep before deleting the mesh:

[source]
for(int i=0; i<10; i++){
    Sleep(10);
    delete Mesh;
    Mesh = new Mesh(MyParameters);
    Mesh->Render();
}
[/source]

      So I have to wait for something before I can delete my mesh. But what? My guess is that I have to wait for the mesh to be rendered:

[source]
for(int i=0; i<10; i++){
    WaitForGPU2Finish();
    delete Mesh;
    Mesh = new Mesh(MyParameters);
    Mesh->Render();
}
[/source]

      This is my WaitForGPU2Finish function:

[source]
void MDirectX_Device::WaitForGPU2Finish() const{
    D3D11_QUERY_DESC d;
    ZeroMemory(&d, sizeof(d));
    d.Query = D3D11_QUERY_EVENT;

    ID3D11Query* Q = nullptr;
    HRESULT hr = mDevice->CreateQuery(&d, &Q);
    if(FAILED(hr)) return;

    mContext->End(Q);

    BOOL data = 0;
    while(true){
        hr = mContext->GetData(Q, &data, sizeof(data), 0);
        if(hr == S_OK && data) break;
    }
    Q->Release();
}
[/source]

      This also works, but since Sleep works equally well, something totally different could be going on. I worry there's something fundamental I do not understand about DirectX. I create and delete buffers all the time and wonder if I should be more careful.

      WaitForGPU2Finish is very cryptic, and if it does what I hope it does, it is also not very efficient, because it waits for everything. I only want to wait for a specific buffer to be processed by the queue. Does anyone know what's going on here?
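A sketch, not from this thread, of how the wait could be narrowed to a single mesh: issue an event query right after the mesh's draw call and poll only that query before deleting it. The Mesh members here are hypothetical.

[source]
// Hypothetical per-mesh fence: an event query issued right after this mesh's
// draw call completes exactly when the GPU has passed that point in the
// command stream, so only this mesh's work is waited on, not the whole queue.
class Mesh {
public:
    void Render(ID3D11DeviceContext* inContext){
        // ... IASetVertexBuffers / DrawIndexed as usual ...
        if(!mFence){
            D3D11_QUERY_DESC d;
            ZeroMemory(&d, sizeof(d));
            d.Query = D3D11_QUERY_EVENT;
            mDevice->CreateQuery(&d, &mFence);
        }
        inContext->End(mFence);   // signals once the GPU passes this point
    }

    void WaitUntilGPUDone(ID3D11DeviceContext* inContext){
        if(!mFence) return;
        BOOL data = FALSE;
        // Flags = 0 lets GetData flush pending commands so the query can finish.
        while(inContext->GetData(mFence, &data, sizeof(data), 0) != S_OK || !data){}
    }

private:
    ID3D11Device* mDevice = nullptr;   // assumed to be set elsewhere
    ID3D11Query*  mFence  = nullptr;   // released in the destructor
};
[/source]

For what it's worth, the D3D11 runtime is documented to defer destroying resources that queued commands still reference, so releasing a buffer right after a draw call should normally be safe; if that holds, the flicker likely has another cause, and a per-mesh fence like this at least narrows down where the wait happens.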
  14. I managed to add mipmapping to the BC1 encoded DDS cubemaps. Here's what I do:

      1) Create a texture using the CreateDDSTextureFromFile function:

[source]
ID3D11Resource* Texture = nullptr;

HRESULT hr = CreateDDSTextureFromFile(mDevice, inPath, &Texture, &mShaderResourceView);
[/source]

      2) Get the resolution (the resource has to be queried as an ID3D11Texture2D before its description can be read):

[source]
ID3D11Texture2D* Texture2D = nullptr;
hr = Texture->QueryInterface(__uuidof(ID3D11Texture2D), (void**)&Texture2D);

D3D11_TEXTURE2D_DESC d;
ZeroMemory(&d, sizeof(d));

Texture2D->GetDesc(&d);
mWidth = d.Width;
mHeight = d.Height;
Texture2D->Release();
[/source]

      3) Create a new cubemap texture with the same resolution:

[source]
mFormat = DXGI_FORMAT_B8G8R8A8_UNORM;

D3D11_TEXTURE2D_DESC d;
ZeroMemory(&d, sizeof(d));

d.Width = mWidth;
d.Height = mHeight;
d.MipLevels = 0;                 // 0 = allocate the full mip chain
d.ArraySize = 6;
d.Format = mFormat;
d.SampleDesc.Count = 1;
d.SampleDesc.Quality = 0;
d.Usage = D3D11_USAGE_DEFAULT;
d.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
d.CPUAccessFlags = 0;
d.MiscFlags = D3D11_RESOURCE_MISC_TEXTURECUBE | D3D11_RESOURCE_MISC_GENERATE_MIPS;

HRESULT hr = mDevice->GetDevice()->CreateTexture2D(&d, nullptr, &mTexture);
[/source]

      4) Create a shader resource view:

[source]
D3D11_SHADER_RESOURCE_VIEW_DESC d;
ZeroMemory(&d, sizeof(d));

d.Format = mFormat;
d.ViewDimension = D3D11_SRV_DIMENSION_TEXTURECUBE;
d.TextureCube.MipLevels = -1;    // use all mip levels

HRESULT hr = mDevice->CreateShaderResourceView(mTexture, &d, &mShaderResourceView);
[/source]

      5) For all 6 faces: create a render target:

[source]
D3D11_RENDER_TARGET_VIEW_DESC d;
ZeroMemory(&d, sizeof(d));

d.Format = mFormat;
d.ViewDimension = D3D11_RTV_DIMENSION_TEXTURE2DARRAY;
d.Texture2DArray.ArraySize = 1;
d.Texture2DArray.FirstArraySlice = inFace;

HRESULT hr = mDevice->CreateRenderTargetView(mTexture, &d, &mRenderTargetView);
[/source]

      6) For all 6 faces: select this render target:

[source]
mContext->OMSetRenderTargets(1, &mRenderTargetView, nullptr);
[/source]

      7) For all 6 faces, I use my framework to render a square on this render target using the ShaderResourceView created in step 1. When your square mesh has UV coordinates ranging from (0,0) top left to (1,1) bottom right, the pixel shader looks something like this:

[source]
float4 PS(VS_OUTPUT input): SV_Target{
    float4 c = float4(1, 0, 0, 1);
    if(TextureResolution[0].x > 0){
        float2 uv = input.tex;
        float3 v = float3(0, 0, 0);
        int f = round(Prop[0].x);
        if(f == 0){         //+X
            v.x = 1;
            v.y = 1 - uv.y * 2;
            v.z = 1 - uv.x * 2;
        }
        else if(f == 1){    //-X
            v.x = -1;
            v.y = 1 - uv.y * 2;
            v.z = -1 + uv.x * 2;
        }
        else if(f == 2){    //+Y
            v.x = -1 + uv.x * 2;
            v.y = 1;
            v.z = -1 + uv.y * 2;
        }
        else if(f == 3){    //-Y
            v.x = -1 + uv.x * 2;
            v.y = -1;
            v.z = 1 - uv.y * 2;
        }
        else if(f == 4){    //+Z
            v.x = -1 + uv.x * 2;
            v.y = 1 - uv.y * 2;
            v.z = 1;
        }
        else if(f == 5){    //-Z
            v.x = 1 - uv.x * 2;
            v.y = 1 - uv.y * 2;
            v.z = -1;
        }
        c = MyTexture0.Sample(MySampler0, normalize(v));
    }
    return c;
}
[/source]

      8) After rendering all 6 faces, create the mipmaps (GenerateMips is a device-context method):

[source]
mContext->GenerateMips(mShaderResourceView);
[/source]

      That's it. Thanks to MJP for pointing me in the right direction!
  15. Thank you for the info. I think I'll try the latter option first. I'll let you know how it goes.