

Jake Rivers


Topics I've Started

Should I use the gradient to get the normal of a sin wave?

12 January 2015 - 03:27 PM

Hi,

 

I'm making a small 3D waves demo that computes the waves based on each vertex's distance from the camera.

 

According to this older post, to compute the normal I should compute the partial derivatives of the function I'm using for the waves. I tried the example they use (a 2D wave) and it works just fine.

 

The formula I'm using is quite simple:

 

f(x,z) = amplitude * sin( (sqrt( x^2 + z^2 ) + phase) * frequency );

 
So the partial derivatives for each component are (everything in terms of x and z, since y is the height):
 
fx = amplitude * frequency * x * cos( (sqrt( x^2 + z^2 ) + phase) * frequency ) / sqrt( x^2 + z^2 );
fy = 0;
fz = amplitude * frequency * z * cos( (sqrt( x^2 + z^2 ) + phase) * frequency ) / sqrt( x^2 + z^2 );
 

Thus the gradient vector would be <fx, fy, fz>, which according to some math references should by itself be the normal of the surface at any point.

 

However, if I normalize it and use it in my vertex shader, the lighting doesn't look right.
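For reference, here's a minimal C++ sketch of exactly what I'm computing (Vec3 and the function name are just illustrative; my shader version does the same math):

#include <cmath>

struct Vec3 { float x, y, z; };

// Gradient-based "normal" for f(x,z) = amplitude * sin((sqrt(x^2+z^2) + phase) * frequency),
// built exactly as derived above: <df/dx, 0, df/dz>, then normalized.
Vec3 WaveNormal( float x, float z, float amplitude, float frequency, float phase )
{
    float r = std::sqrt( x * x + z * z );          // distance in the XZ plane (must be non-zero)
    float c = std::cos( (r + phase) * frequency ); // cosine factor shared by both partials

    Vec3 n;
    n.x = amplitude * frequency * x * c / r; // df/dx
    n.y = 0.0f;                              // fy = 0, as above
    n.z = amplitude * frequency * z * c / r; // df/dz

    float len = std::sqrt( n.x * n.x + n.y * n.y + n.z * n.z );
    n.x /= len; n.y /= len; n.z /= len;      // normalize before lighting
    return n;
}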

 

Am I not doing it right? Am I missing something?

 

Thanks


Using some ConstantBuffer values causes CreateShader to fail

17 August 2013 - 02:56 PM

I think I still need to understand better how to use constant buffers.

 

I have a constant buffer with just two floats declared:

 

C++ struct:

struct ConstantBuffer
{
    float value0;
    float value1;
} g_CBData;


 

And in the vertex shader I'm just trying to multiply the color by the sin() of either of these two values:

cbuffer cbPerFrame : register (b0)
{
    float  value0;
    float  value1;
};


struct VSInput
{
    float3 Pos : POSITION;
    float4 Col : COLOR;
};


struct VSOutput
{
    float4 Pos : SV_POSITION;
    float4 Col : COLOR;
};


VSOutput main( VSInput vs_in )
{
    VSOutput vs_out;

    vs_out.Pos = float4( vs_in.Pos, 1 );
    vs_out.Col = vs_in.Col * sin( value1 ); // "value0" runs ok, "value1" causes E_INVALIDARG

    return vs_out;
}

 

If I use "value0", everything runs fine; however, if I use "value1", I get E_INVALIDARG when creating the vertex shader:

R = g_pDirect3D->CreateVertexShader( pVSData, vsSize, nullptr, &g_pVertexShader );


 

The weird thing is that if I don't use the sin() function (i.e., I just multiply the color by the value):

vs_out.Col = vs_in.Col * value0; // runs ok
vs_out.Col = vs_in.Col * value1; // runs ok

 

Then the values are read correctly.

 

Also, I can't multiply value0 * value1, nor add them, nor combine these two values in any other way; when I do, I just get the E_INVALIDARG error.

 

Is there some strange rule about using values from a constant buffer?
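In case it's relevant, this is roughly how I create and fill the buffer (a minimal sketch, assuming the same global names as above, error checking omitted):

D3D11_BUFFER_DESC cbDesc = {};
cbDesc.ByteWidth      = (sizeof(ConstantBuffer) + 15) & ~15u; // D3D11 wants a multiple of 16 bytes
cbDesc.Usage          = D3D11_USAGE_DYNAMIC;
cbDesc.BindFlags      = D3D11_BIND_CONSTANT_BUFFER;
cbDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
g_pDirect3D->CreateBuffer( &cbDesc, nullptr, &g_pConstantBuffer );

// per frame: write the two floats and bind to slot b0
D3D11_MAPPED_SUBRESOURCE mapped;
g_pDeviceContext->Map( g_pConstantBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped );
CopyMemory( mapped.pData, &g_CBData, sizeof(ConstantBuffer) );
g_pDeviceContext->Unmap( g_pConstantBuffer, 0 );
g_pDeviceContext->VSSetConstantBuffers( 0, 1, &g_pConstantBuffer );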

 

Thanks

 

 


Constant buffer data layout mismatch between app/shader

17 August 2013 - 03:31 AM

I started making a basic D3D11 application for Win8 without the Effects framework. Everything was OK until I tried to pass constant buffer data to the shader: the data I'm sending doesn't match the actual data the shader is reading.

 

 

I have a self-made Matrix structure, and a struct containing a float value and that Matrix:

struct Matrix
{
    float m00, m01, m02, m03;
    float m10, m11, m12, m13;
    float m20, m21, m22, m23;
    float m30, m31, m32, m33;
};


struct ConstantBuffer
{
    float  value1;
    Matrix value2;
} g_CBData;


 

 

After creating the constant buffer object, I update the data in my shader using:

//---------- update constant buffer ----------//
Matrix m =
{
  1.0f, 1.0f, 1.0f, 1.0f,
  0.0f, 1.0f, 0.0f, 0.0f,
  0.0f, 0.0f, 1.0f, 0.0f,
  0.0f, 0.0f, 0.0f, 1.0f
};

g_CBData.value1 = 1.0f;
g_CBData.value2 = m;

D3D11_MAPPED_SUBRESOURCE mappedResource;
g_pDeviceContext->Map(g_pConstantBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource);
CopyMemory(mappedResource.pData, &g_CBData, sizeof(ConstantBuffer) );
g_pDeviceContext->Unmap(g_pConstantBuffer, 0);

g_pDeviceContext->VSSetConstantBuffers( 0, 1, &g_pConstantBuffer );


 

 

However, if I try to read the data I'm sending to the vertex shader using something like this:

cbuffer cbPerFrame : register (b0)
{
    float  value1;
    float4x4 value2;
};


struct VSInput
{
    float3 Pos : POSITION;
    float4 Col : COLOR;
};


struct VSOutput
{
    float4 Pos : SV_POSITION;
    float4 Col : COLOR;
};


VSOutput main( VSInput vs_in )
{
    VSOutput vs_out;

    vs_out.Pos = float4( vs_in.Pos, 1 );
    vs_out.Col = vs_in.Col * value2._m11; // <- checking "value1" and the values in "value2"

    return vs_out;
}


 

the data layout simply does not match what I'm sending!

 

The weird thing is that if the constant buffer contains ONLY the "float value1" or ONLY the "Matrix value2", the shader reads the corresponding data properly (the matrix gets transposed, which I read is expected), but if I include both values in the structure, the binary layout seems to get scrambled inside the vertex shader, with no apparent logical order.
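On the C++ side the struct is tightly packed; a quick sanity check (just compile-time asserts, nothing D3D-specific):

#include <cstddef>

// layout of my C++ struct as the compiler sees it (all members are 4-byte-aligned floats, so no padding)
static_assert( offsetof(ConstantBuffer, value1) == 0,  "value1 at offset 0" );
static_assert( offsetof(ConstantBuffer, value2) == 4,  "value2 right after the float" );
static_assert( sizeof(ConstantBuffer)           == 68, "4 + 64 bytes, tightly packed" );

So whatever reordering is happening, it has to be on the shader side.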

 

What am I doing wrong? How am I supposed to pass the data to the shader so it reads properly?

 

Thanks!

 

 


DXT1 + mipmaps .dds file encoding

23 June 2013 - 01:00 PM

Hi,

 

I was trying to manually read a .dds texture file which has mipmaps, but something about the size of the data doesn't seem to match.

 

As far as I understand, the .dds file stores all the bytes for the top-level texture first, then the next mipmap level contiguously, and so on.
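The way I'm reading it is something like this (a minimal sketch with no error handling; the function name is just illustrative):

#include <cstdio>
#include <vector>

// skip the 128-byte header (magic + DDS_HEADER); the mip chain follows
// as one contiguous run of bytes, top level first
std::vector<unsigned char> ReadDDSData( const char* path )
{
    std::FILE* f = std::fopen( path, "rb" );
    std::fseek( f, 0, SEEK_END );
    long size = std::ftell( f );
    std::fseek( f, 128, SEEK_SET );

    std::vector<unsigned char> data( size - 128 );
    std::fread( data.data(), 1, data.size(), f );
    std::fclose( f );
    return data;
}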

 

So if I have a 16x16 texture, I have 256 pixels in the top mipmap level, which with block compression (BC1) comes to:

 

256 pixels / 16 pixels per block * 8 bytes per block = 128 bytes

 

Everything ok so far.

 

However, the texture has 4 more mip levels (5 including the top one), so that would mean I should have:

 

Lvl 0: 16x16 = 256

Lvl 1: 8x8 = 64

Lvl 2: 4x4 = 16

Lvl 3: 2x2 = 4

Lvl 4: 1x1 = 1

TOTAL = 341 pixels / 16 * 8 ≈ 170 bytes
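In code form, that's simply this (a quick sketch of my current assumption that the data is packed purely per pixel, 8 bytes per 16 pixels):

#include <cstdio>

int main()
{
    // total pixels across all mip levels, at 8 bytes per 16 pixels (one BC1 block)
    unsigned totalPixels = 0;
    for (unsigned size = 16; size >= 1; size /= 2) // 16x16 down to 1x1
        totalPixels += size * size;                // 256 + 64 + 16 + 4 + 1 = 341

    std::printf( "%f bytes\n", totalPixels / 16.0 * 8.0 ); // prints 170.5
}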

 

However, the actual data size I have is 184 bytes (plus the 128-byte header that's 312 bytes, which is indeed the size of the file).
 
 
So something tells me the mipmap encoding doesn't work the way I was expecting.
 
Does anybody know why this is happening?
 
Thanks!

Performance improvements from Dx9 to Dx10/11

03 May 2013 - 04:07 PM

Hi,

 

I was in an interview last week, and one of the questions I was asked was "What performance improvements were made to DirectX in the move from Dx9 to Dx10/11?"

 

I have worked with Dx9 and Dx11, but this question totally freaked me out, because I don't know (maybe I'm still very new to this); however, I remember having read that it reduced "micro-batching" (although I don't know how or why).

 

So I thought it would be a good idea to ask you guys and learn a little about it. Basically, that's the question: what improvements were made to the API/architecture going from Dx9 to Dx10/11 that helped performance?

 

Thanks!

