

はとぶ

Member Since 23 Oct 2000
Offline Last Active Aug 15 2014 11:17 PM

Topics I've Started

Using some ConstantBuffer values causes CreateShader to fail

17 August 2013 - 02:56 PM

I think I still need to understand better how to use constant buffers.

 

I have a constant buffer with just two floats declared:

 

C++ struct:

struct ConstantBuffer
{
    float value0;
    float value1;
} g_CBData;


 

And in the vertex shader I'm just trying to multiply the color by sin() of either of these two values:

cbuffer cbPerFrame : register (b0)
{
    float  value0;
    float  value1;
};


struct VSInput
{
    float3 Pos : POSITION;
    float4 Col : COLOR;
};


struct VSOutput
{
    float4 Pos : SV_POSITION;
    float4 Col : COLOR;
};


VSOutput main( VSInput vs_in )
{
    VSOutput vs_out;

    vs_out.Pos = float4( vs_in.Pos, 1 );
    vs_out.Col = vs_in.Col * sin( value1 ); // "value0" runs ok, "value1" causes E_INVALIDARG

    return vs_out;
}

 

If I use "value0" everything runs fine; however, if I use "value1" I get E_INVALIDARG when creating the vertex shader:

R = g_pDirect3D->CreateVertexShader( pVSData, vsSize, nullptr, &g_pVertexShader );


 

The weird thing is that if I don't use the sin() function (i.e. I just multiply the color by the value directly):

vs_out.Col = vs_in.Col * value0; // runs ok
vs_out.Col = vs_in.Col * value1; // runs ok

 

Then the values are read correctly.

 

Also, I can't multiply value0 * value1, nor add them, nor do any other operation between these two values; whenever I do, I get the same E_INVALIDARG error.

 

Is there any strange rule when using the values from a constant buffer?

 

Thanks

 

 


constant buffer data layout mismatch between app/shader

17 August 2013 - 03:31 AM

I started making a basic D3D11 application for Win8 without the Effects framework. Everything was OK until I tried to pass constant buffer data to the shader: the data I'm sending doesn't match the actual data the shader is reading.

 

 

I have a self made Matrix structure, and a struct containing a float value and the Matrix structure:

struct Matrix
{
    float m00, m01, m02, m03;
    float m10, m11, m12, m13;
    float m20, m21, m22, m23;
    float m30, m31, m32, m33;
};


struct ConstantBuffer
{
    float  value1;
    Matrix value2;
} g_CBData;


 

 

After creating the constant buffer object, I update the data in my shader using:

//---------- update constant buffer ----------//
Matrix m =
{
  1.0f, 1.0f, 1.0f, 1.0f,
  0.0f, 1.0f, 0.0f, 0.0f,
  0.0f, 0.0f, 1.0f, 0.0f,
  0.0f, 0.0f, 0.0f, 1.0f
};

g_CBData.value1 = 1.0f;
g_CBData.value2 = m;

D3D11_MAPPED_SUBRESOURCE mappedResource;
g_pDeviceContext->Map(g_pConstantBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource);
CopyMemory(mappedResource.pData, &g_CBData, sizeof(ConstantBuffer) );
g_pDeviceContext->Unmap(g_pConstantBuffer, 0);

g_pDeviceContext->VSSetConstantBuffers( 0, 1, &g_pConstantBuffer );


 

 

However, if I try to read the data I'm sending to the vertex shader using something like this:

cbuffer cbPerFrame : register (b0)
{
    float  value1;
    float4x4 value2;
};


struct VSInput
{
    float3 Pos : POSITION;
    float4 Col : COLOR;
};


struct VSOutput
{
    float4 Pos : SV_POSITION;
    float4 Col : COLOR;
};


VSOutput main( VSInput vs_in )
{
    VSOutput vs_out;

    vs_out.Pos = float4( vs_in.Pos, 1 );
    vs_out.Col = vs_in.Col * value2._m11; // <- checking "value1" and the values in "value2"

    return vs_out;
}


 

the data layout simply does not match what I'm sending to it!

 

The weird thing is that if the constant buffer contains ONLY the "float value1" or ONLY the "Matrix value2", the shader reads the corresponding data properly (the matrix comes through transposed, which I've read is expected), but if the structure includes both values, the binary layout seems to get scrambled inside the vertex shader, with no apparent logical order.

 

So... what am I doing wrong? How am I supposed to pass the data to the shader so it reads properly?

 

Thanks!

 

 


DXT1 + mipmaps .dds file encoding

23 June 2013 - 01:00 PM

Hi,

 

I was trying to manually read a .dds texture file which has mipmaps, but something about the size of the data doesn't seem to match.

 

As far as I understand, the .dds file stores all the bytes for the top-level texture first, then the next mipmap level contiguously, and so on.

 

So, if I have a 16x16 texture, the top mip level has 256 pixels, which with block compression (BC1) comes out to:

 

256 pixels / 16 pixels per block * 8 bytes per block = 128 bytes

 

Everything ok so far.

 

However, the texture has 4 more mip levels (5 including the top one), so that would mean I should have:

 

Lvl 0: 16x16 = 256

Lvl 1:  8x8  =  64

Lvl 2:  4x4  =  16

Lvl 3:  2x2  =   4

Lvl 4:  1x1  =   1

TOTAL = 341 pixels / 16 * 8 = 170 bytes

 

However, the actual data size I have is 184 bytes (plus the 128-byte header that's 312, which is indeed the size of the file).
 
 
So, something tells me the mipmap encoding is not working the way I was expecting.
 
Does anybody know why this is happening?
 
Thanks!

Performance improvements from Dx9 to Dx10/11

03 May 2013 - 04:07 PM

Hi,

 

I was in an interview last week, and one of the questions I was asked was "What performance improvements were made in DirectX when moving from Dx9 to Dx10/11?"...

 

I have worked with Dx9 and Dx11, but this question totally freaked me out, because I don't know (maybe I'm still very new to this); however, I remember having read that it reduced "micro-batching" (although I don't know how it does so, or why).

 

So, I thought it would be a good idea to ask you guys and learn a little about it. Basically, that's the question: what improvements were made in the API / architecture when moving from Dx9 to Dx10/11 that helped performance?

 

Thanks!


About GPU-Memory interaction

29 January 2013 - 10:55 AM

Hi all,

I want to better understand how the GPU works, especially what causes its performance issues, and why and when they occur.

For example:

A) If a texture needed for a triangle is stored in VRAM, does that mean that when the shader executes a tex2D(...) instruction, the GPU stalls waiting to fetch the appropriate texel from VRAM? Or does the whole texture get stored in cache? If so, does that mean all the textures used are stored in cache (bump, diffuse, etc.)?

B) When rendering, the GPU needs to write to the appropriate render target. Would the whole RT also be in a local cache? Does that mean that when changing RTs it needs to send the old RT back to VRAM and bring the new one into cache?

C) When changing render states, I believe this would be a matter of just flipping a flag in the GPU, so it wouldn't cause any performance issues, would it? That is, could I go crazy changing states (without changing RTs, textures, or shader code) without any relevant penalty?

D) If VRAM runs out of space, would the textures be stored in system RAM?

Thanks!
