
jajcek

Member Since 13 Aug 2010
Offline Last Active Jul 30 2014 09:16 AM

Topics I've Started

Refractive water

29 May 2014 - 01:52 AM

Hey,

 

I want to optimize my water rendering based on this article: http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter19.html

 

However, I don't understand what the last step should look like when rendering the final scene. Three approaches came to mind, but I don't know which one is the correct way to go. Could you give me some suggestions, please?

 

I

1. Render everything except the water to the S texture

2. Render the water into the same S texture, using the texture being rendered to for the refractions (is this even possible?)

3. Render the texture onto a full-screen plane orthogonal to the camera

 

II

1. Render everything except the water to the S texture; the alpha channel stores which parts of the water are visible

2. Render the texture onto a full-screen plane orthogonal to the camera

3. Render the water to the main back buffer (in front of the plane), using the alpha-channel mask to clip what is not visible

 

III

1. Render everything (without the water) to the S texture and the back buffer at once using MRT

2. Render the water directly to the back buffer, using the texture for the refractions
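
To make variant III concrete, here is a rough sketch of what I imagine step 2's water pixel shader could look like, assuming the scene without the water is already available as a shader resource (the "S texture"). All names, registers and the distortion constant below are placeholders I made up, not code from the article:

cbuffer WaterParams : register(b0)
{
    float2 screenSize;  // back buffer dimensions in pixels (assumed)
    float3 waterTint;   // tint applied to the refracted scene (assumed)
};

Texture2D sceneTexture     : register(t0);  // scene rendered without the water
Texture2D waterNormalMap   : register(t1);  // water surface detail normals
SamplerState linearSampler : register(s0);

float4 WaterPS(float4 screenPos : SV_POSITION,
               float2 texCoord  : TEXCOORD0) : SV_TARGET
{
    // Map the pixel position to [0,1] UVs into the scene texture.
    float2 screenUV = screenPos.xy / screenSize;

    // Perturb the UVs with the surface normal to fake the refraction.
    float2 perturbation = waterNormalMap.Sample(linearSampler, texCoord).xy * 2.0f - 1.0f;
    float2 refractedUV  = screenUV + perturbation * 0.02f;  // 0.02f = distortion strength

    float4 refracted = sceneTexture.Sample(linearSampler, refractedUV);
    return float4(refracted.rgb * waterTint, 1.0f);
}

I suspect step 2 of variant I is the problematic one, since as far as I know D3D11 does not allow a texture to be bound as a render target and a shader resource at the same time.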

 

Thanks!


Splitting terrain vertex buffer into 2 buffers

01 March 2014 - 01:32 PM

Hello,

 

I am currently working on terrain generation (such a common problem that you're probably bored already :)). I recently read in a gamedev.net topic (can't find it anymore :/) that it is possible to have one buffer containing the X and Z positions (which are always the same; the real positioning is done later with a translation) and send only the Y values to the shader. I have been wondering about this method for a while, but I can't find any way to merge these buffers in a shader. If I send only the Y values to the shader, how can I obtain the X and Z values?

 

One thing that came to mind while writing this question was to send the X and Z values through a constant buffer. Is this the correct way?
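
To make the question concrete, here is a rough sketch of what I imagine the merged vertex shader could look like if the shared X/Z grid and the per-patch heights came in as two separate vertex buffers (two input slots). All names and semantics are placeholders I made up:

cbuffer MatrixBuffer : register(b0)
{
    float4x4 worldViewProj;  // assumed; would include the per-patch translation
};

// Slot 0 carries the shared X/Z grid, slot 1 the per-patch heights.
struct TerrainVSInput
{
    float2 xz : POSITION0;  // from the reusable grid buffer (slot 0)
    float  y  : POSITION1;  // from the per-patch height buffer (slot 1)
};

float4 TerrainVS(TerrainVSInput input) : SV_POSITION
{
    // Merge the two streams back into a full local position.
    float3 localPos = float3(input.xz.x, input.y, input.xz.y);
    return mul(float4(localPos, 1.0f), worldViewProj);
}

On the C++ side the two buffers would presumably be bound together with IASetVertexBuffers, with the input layout assigning the first element to input slot 0 and the second to input slot 1.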

 

Thank you for any help.


Binormal (probably) artifacts

17 September 2013 - 08:46 AM

Hello,

 

I have recently added normal mapping to my terrain engine, but after diagnosing a bit, it looks like there is some problem with the binormal (it is somehow too sharp in some places). Here are some screenshots:

 

Full (there are visible artifacts in some places): http://img833.imageshack.us/img833/89/lzwv.png

Binormal color (compare it with the full screenshot): http://img543.imageshack.us/img543/3219/dyu7.png
 
Why does this happen? This is the code responsible for generating the TBN:
 
- The terrain has shared vertices.
- vertexPosition below is a position not transformed by any matrix.
- normal is calculated on the CPU and is sent (not transformed by anything) to the VS, which passes it to the PS as is.
float3 computeNormalWithTBN(float3 vertexPosition, float2 texCoord, float3 normal ) {
    // Screen-space derivatives of the position and the texture coordinates.
    float3 p_dx = ddx(vertexPosition);
    float3 p_dy = ddy(vertexPosition);

    float2 tc_dx = ddx(texCoord);
    float2 tc_dy = ddy(texCoord);

    // Estimate the tangent and binormal from the derivatives.
    float3 t = normalize( tc_dy.y * p_dx - tc_dx.y * p_dy );
    float3 b = normalize( tc_dy.x * p_dx - tc_dx.x * p_dy );

    // Make the tangent orthogonal to the interpolated normal.
    float3 n = normalize(normal);
    float3 x = cross(n, t);
    t = cross(x, n);
    t = normalize(t);

    // Make the binormal orthogonal to the normal as well.
    x = cross(b, n);
    b = cross(n, x);
    b = normalize(b);

    // Expand the detail normal from [0,1] to [-1,1] and strengthen it.
    float4 detail = normalMap.Sample( SampleType, texCoord );
    detail = (detail * 2.0f) - 1.0f;
    detail *= 6.0f;
    return normalize( normal + detail.x * t + detail.y * b );
}

main PS function:

    // ...
    input.normal = computeNormalWithTBN( input.rawPosition.xyz, input.tex.xy, input.normal );
    float light = saturate( dot( input.normal, float3( 0, 0.73, -0.69 ) ) );

    float4 color = 0.3f;
    color += light;
    return color;
}

Thank you for any hints!


Normal mapping dependent on view.

14 September 2013 - 08:13 AM

Hi,

 

I've got a problem: the normal mapping is view-dependent. It looks like this (sorry for the watermarks): http://www.youtube.com/watch?v=-V_2Pp4kiLM&feature=youtu.be

 

The normal mapping is calculated in the pixel shader as follows (it is actually taken from http://stackoverflow.com/questions/5255806/how-to-calculate-tangent-and-binormal):

 

- vertexPosition is the position coming in through the SV_POSITION semantic

- the vertices are shared between the triangles

 

float3 computeNormalWithTBN(float3 vertexPosition, float2 texCoord, float3 normal ) {
    float3 p_dx = ddx(vertexPosition);
    float3 p_dy = ddy(vertexPosition);

    float2 tc_dx = ddx(texCoord);
    float2 tc_dy = ddy(texCoord);

    float3 t = normalize( tc_dy.y * p_dx - tc_dx.y * p_dy );
    float3 b = normalize( tc_dy.x * p_dx - tc_dx.x * p_dy );

    float3 n = normalize(normal);
    float3 x = cross(n, t);
    t = cross(x, n);
    t = normalize(t);

    x = cross(b, n);
    b = cross(n, x);
    b = normalize(b);

    float4 detail = normalMap.Sample( SampleType, texCoord );
    detail = (detail * 2.0f) - 1.0f;
    detail *= 6.0f;
    return normalize( normal + detail.x * t + detail.y * b );
}
// in main function
    // ...
    input.normal = computeNormalWithTBN( input.position.xyz, input.tex.xy, input.normal );
    float light = saturate( dot( input.normal, float3( 0, 0.73, -0.69 ) ) );


    float4 color = 0.3f;
    color += light;
    return color;
}
 

 

Why does this happen?

 

Thanks for the help.


Passing an array for each vertex

29 August 2013 - 01:01 PM

Hello,

 

I'm looking for some flexibility in sending data for each vertex. In my situation I'd like to pass an array with every vertex to specify which texture it should use (the array won't be too big: 5, 6, maybe 7 values; I cannot send just a single texture id because I need blending between the textures), like:

 

[1.0, 0.0, 0.0 ...] - first texture

[0.0, 1.0, 0.0 ...] - second texture and so on

 

Currently I am packing these values into some TEXCOORD etc., but it is not flexible, because whenever I want to add another texture I need to change the vertex structure and find somewhere in the shader to pack the value.

 

So, the short question: is it possible to send an array with each vertex in DirectX/HLSL?
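
To illustrate the kind of workaround I am considering: pass the weights as a fixed number of float4 attributes (two here, for up to eight textures) and blend in the pixel shader. Everything below (the semantics, the Texture2DArray, all names) is just a placeholder sketch, not my actual code:

struct VSInput
{
    float3 position : POSITION;
    float4 color    : COLOR;
    float3 normal   : NORMAL;
    float2 texCoord : TEXCOORD0;
    float4 weights0 : TEXCOORD1;  // blend weights for textures 0..3
    float4 weights1 : TEXCOORD2;  // blend weights for textures 4..7
};

Texture2DArray terrainTextures : register(t0);  // one slice per terrain texture
SamplerState   linearSampler   : register(s0);

float4 blendTerrain(float2 uv, float4 w0, float4 w1)
{
    float4 color = 0.0f;
    [unroll]
    for (int i = 0; i < 4; ++i)
    {
        // Each texture contributes according to its interpolated per-vertex weight.
        color += w0[i] * terrainTextures.Sample(linearSampler, float3(uv, i));
        color += w1[i] * terrainTextures.Sample(linearSampler, float3(uv, i + 4));
    }
    return color;
}

With a texture array, adding another texture would only mean adding a slice and a weight, but it is still not a truly variable-length per-vertex array.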

 

EDIT:

 

I have this structure (the variable t is unused, but it works with it):

 

struct TerrainVertex {
    DirectX::XMFLOAT3 position;
    DirectX::XMFLOAT4 color;
    DirectX::XMFLOAT3 normals;
    DirectX::XMFLOAT2 texture;
    DirectX::XMFLOAT2 t;
};

and changed it to (for testing):

struct TerrainVertex {
    DirectX::XMFLOAT3 position;
    DirectX::XMFLOAT4 color;
    DirectX::XMFLOAT3 normals;
    DirectX::XMFLOAT2 texture;
    float t[2];
};

but my whole terrain disappears and I get random artifacts on the screen (plus a Microsoft C++ exception: _com_error at memory location 0x002EF170).

 

input layout:

 

D3D11_INPUT_ELEMENT_DESC inputLayout[] = 
{
    { "POSITION", 0U, DXGI_FORMAT_R32G32B32_FLOAT,    0U, 0U,                           D3D11_INPUT_PER_VERTEX_DATA, 0U },
    { "COLOR",    0U, DXGI_FORMAT_R32G32B32A32_FLOAT, 0U, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0U },
    { "NORMAL",   0U, DXGI_FORMAT_R32G32B32_FLOAT,    0U, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0U },
    { "TEXCOORD", 0U, DXGI_FORMAT_R32G32_FLOAT,       0U, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0U },
    { "TEXCOORD", 1U, DXGI_FORMAT_R32G32_FLOAT,       0U, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0U }
};

 

Thanks for the help!

