# zhangdoa

Member

- Website: zhangdoa
- Role: Programmer
- Interests: Programming
- GitHub: zhangdoa


1. ## R&D Yet another question about correct PBR fresnel usage

You are (almost) correct; there is just one thing you may have missed: in microfacet theory we always give the surface two normals, the macro-normal n and the micro-normal m. The SIGGRAPH 2013 course "Physically Based Shading in Theory and Practice", in the talk "Background: Physics and Math of Shading", covers this; I suggest you read the "Surface Reflectance (Specular Term)" section from p. 12 thoroughly, and I hope it makes clear why it's VdotH: we use the halfway vector h as the assumed normal of the micro-surface. The second answer at https://computergraphics.stackexchange.com/questions/2494/in-a-physically-based-brdf-what-vector-should-be-used-to-compute-the-fresnel-co explains the same thing; it's crystal clear :)

The reason we don't need kS is not that "it is included in the F term"; rather, the F term *is* kS. That's the nature of specular reflectance. When you split out each term and try to visualize it, you can only expect the result to depend on the parameters of that term's formula, not to reproduce the complete visual appearance. When you put a point light behind the sphere, you can only count on the macro visibility term (or macro geometry term) to occlude the light, and in microfacet theory the macro V term is built from the micro D and G terms. The F term does nothing here.
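To make the VdotH point concrete, here is a minimal CPU-side sketch (C++; the function and type names are my own, not from any particular engine) of the Schlick approximation evaluated with the halfway vector rather than the macro-normal:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 normalize(Vec3 v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// The halfway vector H is the micro-normal assumed by the specular BRDF:
// only microfacets whose normal m equals H reflect L into V.
Vec3 halfway(Vec3 v, Vec3 l)
{
    return normalize({ v.x + l.x, v.y + l.y, v.z + l.z });
}

// Schlick's approximation of the Fresnel term.
// cosTheta is the cosine between V and H (VdotH), not between V and N.
float fresnelSchlick(float cosTheta, float F0)
{
    return F0 + (1.0f - F0) * std::pow(1.0f - cosTheta, 5.0f);
}
```

At normal incidence (V == L == H) VdotH is 1 and F collapses to F0; toward grazing angles F rises to 1, which is the whole specular "kS" weight.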
2. ## 3D Modern rendering process

There are a couple of nice SIGGRAPH courses that cover the topic to varying degrees, for example: "Physically Based Shading at Disney", "Extending the Disney BRDF to a BSDF with Integrated Subsurface Scattering", and "Practical Multilayered Materials in Call of Duty: Infinite Warfare".

4. ## 3D understanding clipspace w coordinate division practically

Your texCoord is unused; you are actually using input.TexCoord. Besides, your texture-coordinate calculation doesn't look correct. If you read the articles carefully, you should see that perspective division converts a point from homogeneous coordinates to Cartesian coordinates, or, from a more mathematical point of view, transforms an element of projective space into Euclidean space. After the projection matrix, all points in your 3D Euclidean space go to a 4D projective space, so you need to transform them back before deriving your texture coordinates. Also note that the perspective division has already been applied to SV_POSITION by the time you read it in the pixel shader (https://docs.microsoft.com/en-us/windows/win32/direct3dhlsl/dx-graphics-hlsl-semantics).
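The homogeneous-to-Cartesian step can be sketched on the CPU like this (C++; names are mine, and I'm assuming D3D conventions, where NDC y points up while the texture v axis points down):

```cpp
#include <cassert>
#include <cmath>

struct Vec4 { float x, y, z, w; };
struct Vec2 { float u, v; };

// Perspective division: homogeneous clip space -> Cartesian NDC.
// Only meaningful for w != 0 (points that survived projection).
Vec4 clipToNdc(Vec4 clip)
{
    return { clip.x / clip.w, clip.y / clip.w, clip.z / clip.w, 1.0f };
}

// NDC xy in [-1, 1] -> texture coordinates in [0, 1].
// v is flipped because D3D texture coordinates start at the top-left.
Vec2 ndcToTexCoord(Vec4 ndc)
{
    return { ndc.x * 0.5f + 0.5f, -ndc.y * 0.5f + 0.5f };
}
```

In the pixel shader you only need the second step for SV_POSITION-derived values, since the rasterizer has already done the division; for a clip-space position you interpolated yourself, you must do both.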
5. ## Speeding up voxelization from a high poly model

What MJP talked about is basically the GPU voxelization algorithm; chapter 22 of "OpenGL Insights", "Octree-Based Sparse Voxelization Using the GPU Hardware Rasterizer", covers all the fundamental theory and implementation. You may need conservative rasterization for a better voxelization result, whether handcrafted in the geometry shader or via a hardware alternative, depending on the specific API you use.
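To show what "conservative" means here, below is a deliberately crude CPU sketch (C++; entirely my own illustration, not from the chapter): it marks every voxel touched by a triangle's axis-aligned bounding box. That is a strict superset of the voxels the triangle overlaps, so it never misses a voxel; a real implementation would then refine each candidate with an exact triangle/box test (e.g. separating axis):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct Vec3 { float x, y, z; };

// Over-conservative voxelization of one triangle into a cubic grid of
// gridSize^3 voxels, each voxelSize wide, anchored at the origin.
// Returns flattened voxel indices (x + y*gridSize + z*gridSize^2).
std::vector<int> voxelizeTriangleAabb(const Vec3 tri[3], int gridSize, float voxelSize)
{
    auto toCell = [&](float v) {
        int c = static_cast<int>(v / voxelSize);
        return std::clamp(c, 0, gridSize - 1);
    };
    int x0 = toCell(std::min({tri[0].x, tri[1].x, tri[2].x}));
    int x1 = toCell(std::max({tri[0].x, tri[1].x, tri[2].x}));
    int y0 = toCell(std::min({tri[0].y, tri[1].y, tri[2].y}));
    int y1 = toCell(std::max({tri[0].y, tri[1].y, tri[2].y}));
    int z0 = toCell(std::min({tri[0].z, tri[1].z, tri[2].z}));
    int z1 = toCell(std::max({tri[0].z, tri[1].z, tri[2].z}));

    std::vector<int> cells;
    for (int z = z0; z <= z1; ++z)
        for (int y = y0; y <= y1; ++y)
            for (int x = x0; x <= x1; ++x)
                cells.push_back(x + y * gridSize + z * gridSize * gridSize);
    return cells;
}
```

The GPU version in the chapter gets the same guarantee by enlarging each triangle in the geometry shader (or using hardware conservative rasterization) so the rasterizer emits a fragment for every voxel column the triangle touches.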
6. ## 3D Garbage Pointer When Setting Pipeline State

In the garbage data, m_DS and m_GS point into the memory range of d3d11_3SDKLayers.dll, which looks like a linkage error. I've run into a similar (very rare) scenario where incremental linking was enabled and the linker couldn't resolve a segment offset correctly, so the debug runtime reported weird memory addresses. Try a full, clean rebuild of the solution and see if that solves it.

8. ## DX12 Use Texture2DArray in D3D12

If you're familiar with DDSTextureLoader for DirectX 11, then you could take a look at the same tool in DirectXTK12.
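Whichever loader you use, once a Texture2DArray lives in D3D12 you address its subresources with a flattened index; the formula below mirrors the D3D12CalcSubresource helper from d3dx12.h (reimplemented here so the sketch compiles standalone, without the D3D12 headers):

```cpp
#include <cassert>

// Flattened subresource index for a texture array, matching the layout
// D3D12 uses: mips vary fastest, then array slices, then planes.
// Reimplements the inline D3D12CalcSubresource helper from d3dx12.h.
unsigned calcSubresource(unsigned mipSlice, unsigned arraySlice, unsigned planeSlice,
                         unsigned mipLevels, unsigned arraySize)
{
    return mipSlice + arraySlice * mipLevels + planeSlice * mipLevels * arraySize;
}
```

For example, mip 1 of array slice 2 in a 4-mip array is subresource 1 + 2*4 = 9; this is the index you pass when copying or transitioning a single slice.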

11. ## making sound support in my engine

A broadly compatible choice is OpenAL, an audio counterpart of OpenGL; the vendor or community implementation (OpenAL Soft) can cover all your engine's target platforms. A few cons: macOS deprecates OpenAL support as of version 10.15; the community is less active than OpenGL's; and there is no successor standard so far. Personally, I recommend you take a look at FMOD or Wwise: both are widely adopted by the industry nowadays and have advantages in maintenance, user/developer community, and software maturity. Wwise encapsulates the low-level audio machinery more tightly than FMOD; by contrast, it's easier to get your hands dirty with FMOD. Both use an event-driven design for high-level communication between the host application and themselves, and both have an (almost) unified implementation across platforms. But since they target shipping products, their APIs are more verbose and messy, so the learning curve is a little steep.
12. ## Rendering 1,000,000 sprites. I'm stuck profiling

CPU-related questions:

- What's the cost of MapBuffer() in a Debug build?
- What's the difference between the compiler optimization levels?
- Do you need to optimize the O(m*n) for-loop?
- Do you have to submit the data of every single sprite every frame?
- Can you identify and optimize out any unnecessary temporary variables, like the return value of std::vector<T>::size(), or any unnecessary, expensive copy constructions?

GPU-related questions:

- What buffer-usage pattern (GL_STATIC_DRAW, etc.) did you specify when creating and uploading the vertex buffer?
- What mapping flags (GL_MAP_PERSISTENT_BIT, etc.) did you specify when mapping the vertex buffer?
- How do you handle the CPU-GPU synchronization between the CPU write and the GPU read? Have you implemented any double or triple buffering?

And could you share the blog post you're referencing?
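On the double/triple buffering point: the usual pattern is to keep N copies of the dynamic vertex buffer and cycle through them, so the CPU never writes a region the GPU may still be reading. A minimal sketch of the ring-index bookkeeping (C++; the class is my own illustration, and the per-slot fence wait that a real GL/D3D implementation needs is only described in the comments):

```cpp
#include <cassert>
#include <cstddef>

// Ring of N per-frame buffer slots. Each frame the CPU advances to the next
// slot while the GPU may still be reading the previous ones; with N slots the
// CPU only reuses a slot after N-1 intervening frames, which a fence wait on
// that slot would guarantee is safe.
template <std::size_t N>
class FrameRing
{
public:
    std::size_t current() const { return m_index; }

    // Call once per frame, after waiting on the fence of the slot about to
    // be reused (fence omitted in this sketch).
    std::size_t advance()
    {
        m_index = (m_index + 1) % N;
        return m_index;
    }

private:
    std::size_t m_index = 0;
};
```

With persistent mapping (GL_MAP_PERSISTENT_BIT), the same idea applies to N sub-ranges of one mapped buffer instead of N separate buffers.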