
Community Reputation

143 Neutral

About YixunLiu

  1. Thanks for the reminder.
  2. Hi, I have a surface mesh and I want to use a cone to cut a hole in it. Does anybody know a fast method to compute the intersection boundary of these two geometries? Thanks. YL
  3. Many thanks, Infinisearch, for your detailed information!
  4. Got it. Thanks. I got a similar answer elsewhere: https://gamedev.stackexchange.com/questions/26719/when-does-depth-testing-happen "This is the idea behind the Early-Z optimization, where if you're rendering a pixel whose pixel shader doesn't change the depth, the hardware may never actually run the pixel shader (or, more likely, if a full 2x2 quad of pixels is occluded, then none of them will be run through the pixel shader). This is why you want to render a fully-opaque scene from front to back."
  5. I am confused about "it doesn't need to be run at all if the fragment fails the depth test". The output-merger stage takes the color and depth output by the pixel shader and then performs the depth and stencil tests. I think this means the pixel shader runs whether or not the fragment passes the depth and stencil tests, right? Thanks.
  6. Hi, I would like to know whether rendering opaque surfaces in front-to-back order (closer ones first, more distant ones last) can improve performance much. By 'opaque' I mean surfaces for which the DepthWriteMask is set to one in the depth-stencil state. I think the pixel shader needs to run no matter how we order the draws; the difference is only in the output-merger stage. Does ordering improve performance much? Thanks. YL
  7. Thanks, Hodgman. I am not clear about the first method you mentioned. For example, I can run the following code on the main thread:

     m_d3dContext->Map(vertexBuffer2.Get(), 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource);
     // Update the vertex buffer here.
     memcpy(mappedResource.pData, vertices, sizeof(vertices));
     // Re-enable GPU access to the vertex buffer data.
     m_d3dContext->Unmap(vertexBuffer2.Get(), 0);

     To run this across two threads, I would call Map and Unmap on the main thread and run the memcpy on another thread. However, is it safe to call Unmap while another thread is still updating the buffer? Thanks.
  8. DX11 Free-form line in 3D space

    Got it. Thanks.
  9. Hi, I want to render an object with a dynamic vertex buffer, and I do the rendering on the UI thread. I am wondering whether it is possible to change this vertex buffer's contents on a non-UI thread using Map and Unmap. Thanks. YL
  10. Hi, I want to use a 3D mouse to draw a free-form line in 3D space using DirectX 11. I have a question about how to create the vertex buffer. Please see the comments in the following code:

      struct SimpleVertexCombined
      {
          XMFLOAT3 Pos;
          XMFLOAT3 Col;
      };

      D3D11_BUFFER_DESC bufferDesc;
      bufferDesc.Usage = D3D11_USAGE_DYNAMIC;
      bufferDesc.ByteWidth = ???; // the size will change dynamically; how do I fill this in?
      bufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
      bufferDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
      bufferDesc.MiscFlags = 0;

      After I create a dynamic vertex buffer, I use Map and Unmap to update it. How do I incrementally update the buffer? Any comments about how to do free-form line drawing in 3D space are really appreciated. Many thanks. YL
  11. The problem is that I do not have the sphere model. What I have is the slice and a photo of the sphere taken by a real camera. The margin of the hole looks very real. Is it produced using shadows?
  12. Hi, I found this video, which creates a hole in a real wall; through the hole you can see the sky and mountains outside: https://hololens.reality.news/news/video-holelenz-adds-magic-windows-hololens-gives-portals-new-worlds-0176281/ Any idea how to achieve this kind of effect? My speculation: first scan the wall to get a surface mesh, then cut a hole in that mesh to generate the hole's margin, and then render the hole margin and a skybox (sky or mountains). Any comments are really appreciated. Best, YL
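For the cone-cut question in item 2 above, one common approach is implicit classification: evaluate the cone's implicit function at every mesh vertex, then for each mesh edge whose endpoints land on opposite sides, locate the surface crossing along the edge; chaining these crossings across adjacent triangles traces the hole boundary. A minimal, API-free sketch of that idea (the `Vec3`/`Cone` helpers are hypothetical, assuming an infinite cone given by apex, unit axis, and half-angle):

```cpp
#include <cassert>
#include <cmath>

// Hypothetical minimal 3D vector helpers (not from any specific library).
struct Vec3 { double x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 lerp(Vec3 a, Vec3 b, double t) {
    return {a.x + t * (b.x - a.x), a.y + t * (b.y - a.y), a.z + t * (b.z - a.z)};
}
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static double len(Vec3 a) { return std::sqrt(dot(a, a)); }

// Infinite cone: apex, unit axis direction, half-angle in radians.
struct Cone { Vec3 apex; Vec3 axis; double halfAngle; };

// Signed classification: > 0 inside the cone, < 0 outside, ~0 on the surface.
static double coneValue(const Cone& c, Vec3 p) {
    Vec3 v = sub(p, c.apex);
    return dot(v, c.axis) - len(v) * std::cos(c.halfAngle);
}

// For a mesh edge (p0, p1) whose endpoints lie on opposite sides of the cone,
// bisect along the edge until we land on the cone surface. (coneValue is not
// linear along the edge, so a single linear interpolation would only give an
// approximate crossing; bisection refines it.)
static Vec3 edgeSurfacePoint(const Cone& c, Vec3 p0, Vec3 p1) {
    double t0 = 0.0, t1 = 1.0;
    double f0 = coneValue(c, p0);
    for (int i = 0; i < 60; ++i) {
        double tm = 0.5 * (t0 + t1);
        double fm = coneValue(c, lerp(p0, p1, tm));
        if ((fm > 0.0) == (f0 > 0.0)) { t0 = tm; f0 = fm; } else { t1 = tm; }
    }
    return lerp(p0, p1, 0.5 * (t0 + t1));
}
```

Collecting one such point per crossed edge, and walking from triangle to neighboring triangle through shared crossed edges, yields an ordered polyline approximating the intersection boundary.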
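On the depth-test discussion in items 4 through 6: logically the depth/stencil test belongs to the output-merger stage, but when the pixel shader does not output depth (and does not discard), hardware is free to perform the depth test before shading (early-Z) and skip the pixel shader for occluded fragments, which is exactly why submitting opaque geometry front to back can save pixel-shader work. A toy single-pixel model of that bookkeeping (not real D3D, just the idea):

```cpp
#include <cassert>

// Toy model of one pixel of a depth buffer. shaderRuns counts how many times
// the "pixel shader" actually executes for this pixel.
struct Pixel {
    float depth = 1.0f;  // cleared to the far plane
    int shaderRuns = 0;

    // Submit one opaque fragment. With earlyZ enabled (legal when the shader
    // does not write depth), the depth test runs BEFORE shading, so occluded
    // fragments never invoke the shader. Depth func LESS, depth writes on.
    void submit(float fragDepth, bool earlyZ) {
        if (earlyZ && !(fragDepth < depth)) return;  // rejected before shading
        ++shaderRuns;                                // pixel shader executes
        if (fragDepth < depth) depth = fragDepth;    // output-merger test + write
    }
};
```

With two overlapping opaque surfaces, front-to-back submission shades the pixel once, while back-to-front shades it twice; the final depth value is identical either way.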
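On the threaded Map/Unmap question in item 7: the mapped range is only guaranteed valid between Map and Unmap, so Unmap must not run while another thread is still writing through the mapped pointer; the thread that maps has to wait for the worker to finish (a join, or an event/condition variable) before it calls Unmap. A D3D-free sketch of that ordering, where the `mapped` vector stands in for `mappedResource.pData`:

```cpp
#include <cstddef>
#include <thread>
#include <vector>

// Simulated mapped range: in real code this would be the memory behind
// mappedResource.pData obtained from Map(..., D3D11_MAP_WRITE_DISCARD, ...)
// on the render thread. The key rule: Unmap happens strictly AFTER the
// worker thread has finished writing, enforced here with join().
std::vector<float> updateOnWorker(std::size_t count) {
    std::vector<float> mapped(count);           // stands in for the mapped range
    std::thread worker([&mapped] {
        for (std::size_t i = 0; i < mapped.size(); ++i)
            mapped[i] = static_cast<float>(i);  // worker fills the buffer
    });
    worker.join();  // render thread waits for the writes to complete...
    // ...and only now would it call m_d3dContext->Unmap(vertexBuffer2.Get(), 0).
    return mapped;
}
```

The same shape works with an event the worker signals when done; the only requirement is a happens-before edge between the last write and the Unmap call.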
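On ByteWidth in item 10: a buffer's ByteWidth is fixed at creation, so a common pattern for a growing line is to create the buffer with some vertex capacity and recreate it with a larger ByteWidth whenever the point count outgrows it, doubling the capacity like std::vector so recreations stay rare; within the current capacity, new points can be appended with D3D11_MAP_WRITE_NO_OVERWRITE, reserving D3D11_MAP_WRITE_DISCARD for starting over. A D3D-free sketch of just the capacity bookkeeping (the struct and its names are mine, not a D3D API):

```cpp
#include <cassert>
#include <cstddef>

// Tracks how many vertices fit in the current buffer. append() returns true
// when the buffer must be recreated with a larger ByteWidth
// (capacity * sizeof(SimpleVertexCombined)) before the new data fits.
struct VertexBufferCapacity {
    std::size_t capacity;   // vertices the current buffer can hold
    std::size_t count = 0;  // vertices used so far

    bool append(std::size_t n) {
        bool recreate = false;
        while (count + n > capacity) {
            capacity *= 2;  // geometric growth keeps recreations rare
            recreate = true;
        }
        count += n;
        return recreate;
    }
};
```

When `append` returns true, the real code would create a new dynamic buffer with the larger ByteWidth, re-upload the existing points with DISCARD, and release the old buffer.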
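On the hole-in-the-wall effect in item 12: the speculation there matches the usual stencil-portal recipe. First render the hole cutout into the stencil buffer, then draw the skybox with a stencil test so it only appears inside the hole, leaving the wall (on HoloLens, the real camera view) everywhere else. A toy one-row model of the two passes (not real D3D stencil state, just the idea):

```cpp
#include <array>
#include <cassert>

constexpr int W = 8;  // one row of pixels

// Pass 1: rasterize the hole geometry into the stencil buffer.
// Pass 2: stencil-tested composite: skybox where stencil == 1, wall elsewhere.
std::array<char, W> renderRow(int holeBegin, int holeEnd) {
    std::array<int, W> stencil{};              // stencil cleared to 0
    for (int x = holeBegin; x < holeEnd; ++x)  // pass 1: mark the hole
        stencil[x] = 1;
    std::array<char, W> color{};
    for (int x = 0; x < W; ++x)                // pass 2: stencil-tested draw
        color[x] = stencil[x] ? 'S' : 'W';     // S = skybox, W = wall
    return color;
}
```

In the real effect the "hole geometry" in pass 1 would be the cutout region of the scanned wall mesh, and pass 2 would draw the skybox and hole-margin geometry with a stencil-equal test in the depth-stencil state.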