# hustruan

1. ## Boat wakes on projected grid water

Due to the projected grid method, there is no water mesh grid data on the CPU, so it's not easy to run a fully dynamic wave simulation like Tessendorf's iWave algorithm. What I want is a simple wake propagation method that I can run entirely in a pixel shader: simulate the wake propagation in a pixel shader to generate a wave height map, then generate a normal map from the height map, and combine the wake normal map when rendering the water.
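The pipeline described above (propagate a height field, then derive normals from it) can be sketched on the CPU with the classic two-buffer height-field wave update. This is a simpler scheme than iWave, but it maps directly onto a ping-pong pixel-shader pass; all sizes and constants here are made-up illustration values:

```python
import numpy as np

def propagate(prev, curr, damping=0.99):
    """One step of the two-buffer wave update:
    next = 2*curr - prev + c*laplacian(curr), then damp.
    In a shader this is one ping-pong pass sampling 4 neighbour texels."""
    lap = (np.roll(curr, 1, 0) + np.roll(curr, -1, 0) +
           np.roll(curr, 1, 1) + np.roll(curr, -1, 1) - 4.0 * curr)
    nxt = (2.0 * curr - prev + 0.5 * lap) * damping
    return curr, nxt  # new (prev, curr) pair

def height_to_normal(h, scale=1.0):
    """Central-difference gradient -> per-texel normal, as the
    normal-map generation pass would do."""
    dx = (np.roll(h, -1, 1) - np.roll(h, 1, 1)) * 0.5 * scale
    dy = (np.roll(h, -1, 0) - np.roll(h, 1, 0)) * 0.5 * scale
    n = np.stack([-dx, -dy, np.ones_like(h)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

prev = np.zeros((64, 64))
curr = np.zeros((64, 64))
curr[32, 32] = 1.0                 # inject a "wake" impulse behind the boat
for _ in range(10):
    prev, curr = propagate(prev, curr)
normals = height_to_normal(curr)   # combine these with the FFT water normals
```

A V-shaped wake would come from injecting impulses along the boat's path each frame rather than a single drop, but the propagation and normal-generation passes stay the same.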
2. ## Boat wakes on projected grid water

I have implemented a water system which uses FFT to generate water waves and a projected grid to generate the water mesh. Now I want to add V-shaped boat wakes; does anyone have suggestions on this? I have read Tessendorf's iWave paper, but it works on a regular grid and doesn't seem to work with projected grid water. Thanks!
3. ## vertex shader output interpolation problem

Shader Model 1 doesn't even have pixel shaders. If you're using D3D11, then you'll be using SM 2, 4 or 5 (depending on whether you use the 9, 10 or 11 feature level). Perspective-correct interpolation works the same in every shader model, except that in 4/5 you can use the modifiers that you mention.

In SM 2/3, without these modifiers, I guess that if you want to get interpolated(z)/interpolated(w) per pixel, you'd output z*w and w in the vertex shader:

```hlsl
// VS output:
pos = float4(x, y, z, w);
o   = float2(z * w, w);
```

The interpolator performs, per pixel:

```
o.x = interpolate(z*w/w)/interpolate(1/w) == interpolate(z)/interpolate(1/w)
o.y = interpolate(w/w)/interpolate(1/w)   == 1/interpolate(1/w)
```

PS code:

```hlsl
float depthBuf = o.x / o.y / o.y;
// depthBuf == interpolate(z)/interpolate(1/w) / (1/interpolate(1/w)) / (1/interpolate(1/w))
// depthBuf == interpolate(z)/interpolate(1/w) * interpolate(1/w) * interpolate(1/w)
// depthBuf == interpolate(z) * interpolate(1/w)
```

This is confusing though; I might have made a mistake. Why not just use a depth buffer?

Yeah, instead of outputting a non-linear depth buffer, I can use the system back depth buffer directly. I just wanted to know why the version 1 shader is wrong. I have found that dividing by w in the vertex shader is the wrong way, because w may be less than 0. So version 2 is right.
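The algebra in that reply can be checked numerically by modelling the rasterizer's perspective-correct interpolation along one triangle edge. This is only a sketch with made-up vertex values, not a rasterizer:

```python
def lerp(a, b, t):
    return a + (b - a) * t

def persp_interp(a0, a1, w0, w1, t):
    """Perspective-correct interpolation of attribute a:
    linearly interpolate a/w and 1/w in screen space, then divide."""
    return lerp(a0 / w0, a1 / w1, t) / lerp(1.0 / w0, 1.0 / w1, t)

# two vertices with clip-space depth z and w (arbitrary example values)
z0, w0 = 2.0, 4.0
z1, w1 = 9.0, 10.0
t = 0.3  # interpolation parameter along the edge

# the VS outputs (z*w, w); the rasterizer interpolates them perspective-correctly
ox = persp_interp(z0 * w0, z1 * w1, w0, w1, t)
oy = persp_interp(w0, w1, w0, w1, t)
depth_buf = ox / oy / oy

# the derivation in the post: depth_buf == interpolate(z) * interpolate(1/w)
expected = lerp(z0, z1, t) * lerp(1.0 / w0, 1.0 / w1, t)
assert abs(depth_buf - expected) < 1e-12
```

So the derivation holds; whether `interpolate(z) * interpolate(1/w)` is the depth value you actually want is a separate question, as the reply itself admits.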

5. ## vertex shader output interpolation problem

You are talking just about output positions; oDepth is untouched? So it divides by w after the vertex shader and then, just before the pixel shader, multiplies by w (restoring the old value)?

http://www.comp.nus.edu.sg/~lowkl/publications/lowk_persp_interp_techrep.pdf

This page explains what perspective-correct interpolation is.
6. ## vertex shader output interpolation problem

Now you've got me confused. I thought that just the output position is affected "behind the scenes"? If we save z/w into oDepth in the vertex shader, oDepth should not be affected later on? Or am I wrong? I am interested in the DX9 pipeline; I don't know about DX11.

Actually, perspective-correct interpolation means that in order to do linear interpolation, it first needs to divide by SV_Position's w; later, it will multiply back before passing the output to the pixel shader. So if we save z/w into oDepth in the vertex shader, oDepth equals the interpolated z/w in the pixel shader.
7. ## vertex shader output interpolation problem

Really? I don't think so. The rasterizer interpolates vertex shader outputs, and the default is perspective-correct interpolation. Consider rasterizing a triangle. For the first version, the final value of a fragment in the triangle is a*(Z1/W1) + b*(Z2/W2) + c*(Z3/W3), where (a, b, c) is the barycentric coordinate and Z/W are the values at each triangle vertex. But for the second version, the final value is (a*Z1 + b*Z2 + c*Z3) / (a*W1 + b*W2 + c*W3). Obviously not the same value.
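The difference between the two versions is easy to confirm with concrete numbers. This sketch plugs made-up barycentric weights and vertex values into the two interpolation formulas from the post (it models only those formulas, not a full rasterizer):

```python
# Barycentric coordinates of some fragment inside the triangle
a, b, c = 0.2, 0.3, 0.5

# Clip-space z and w at the three triangle vertices (made-up values)
z = [1.0, 4.0, 9.0]
w = [2.0, 5.0, 10.0]

# Version 1: divide in the vertex shader, interpolate the ratio z/w
v1 = a * (z[0] / w[0]) + b * (z[1] / w[1]) + c * (z[2] / w[2])

# Version 2: interpolate z and w separately, divide in the pixel shader
v2 = (a * z[0] + b * z[1] + c * z[2]) / (a * w[0] + b * w[1] + c * w[2])

# v1 = 0.79, v2 = 5.9/6.9 ~ 0.855: the two shaders produce different depths
```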
8. ## vertex shader output interpolation problem

I have a question about vertex shader output interpolation. For example, when implementing shadow mapping, we need a depth map. Let's first suppose that we need a non-linear depth map, Z/W. Consider the following two shaders: which is the correct way to output a non-linear depth map? I have seen online that some people use the first version, while others use the second. But I think only the first version is mathematically right.

```hlsl
// Version 1: divide by w in the vertex shader
void NonLinearDepthVS(in float3 iPos : POSITION,
                      out float4 oPos : SV_Position,
                      out float oDepth : TEXCOORD0)
{
    oPos = mul(float4(iPos, 1.0f), mul(World, ViewProj));
    oDepth = oPos.z / oPos.w;
}

float4 NonLinearDepthPS(in float oDepth : TEXCOORD0) : SV_Target0
{
    return float4(oDepth, 0, 0, 0);
}

// Version 2: pass z and w through, divide in the pixel shader
void NonLinearDepthVS(in float3 iPos : POSITION,
                      out float4 oPos : SV_Position,
                      out float2 oDepth : TEXCOORD0)
{
    oPos = mul(float4(iPos, 1.0f), mul(World, ViewProj));
    oDepth = oPos.zw;
}

float4 NonLinearDepthPS(in float2 oDepth : TEXCOORD0) : SV_Target0
{
    return float4(oDepth.x / oDepth.y, 0, 0, 0);
}
```