DX11 Couple of DX10 Questions


So when I first learned DirectX it was DX11, but there wasn't a ton of resources available, so I started going off Frank Luna's DX10 book. I feel that I understand everything besides a few select things (mainly the differences between DX10 and DX11).

Anyway, here they go. First, for vertex and index buffers, I noticed this:


D3D10_BUFFER_DESC vbd;
vbd.Usage = D3D10_USAGE_IMMUTABLE;
vbd.ByteWidth = sizeof(Vertex) * mNumVertices;
vbd.BindFlags = D3D10_BIND_VERTEX_BUFFER;
vbd.CPUAccessFlags = 0;
vbd.MiscFlags = 0;
D3D10_SUBRESOURCE_DATA vinitData;
vinitData.pSysMem = &vertices[0]; <-------
HR(md3dDevice->CreateBuffer(&vbd, &vinitData, &mVB));


What exactly does the line I have an arrow pointing at do? In DX11 I think it was done completely differently (three lines or so); we used something like:


D3D11_MAPPED_SUBRESOURCE ms;
devcon->Map(pVBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &ms); // map the buffer
memcpy(ms.pData, OurVertices, sizeof(OurVertices)); // copy the data
devcon->Unmap(pVBuffer, 0);



OK, secondly, dealing with HLSL: in DX11 I would compile the shader and do something like this:


ID3D10Blob *VS, *PS;
D3DX11CompileFromFile(L"shaders.hlsl", 0, 0, "VShader", "vs_5_0", 0, 0, 0, &VS, 0, 0);
D3DX11CompileFromFile(L"shaders.hlsl", 0, 0, "PShader", "ps_5_0", 0, 0, 0, &PS, 0, 0);

// create the shader objects
dev->CreateVertexShader(VS->GetBufferPointer(), VS->GetBufferSize(), NULL, &pVS);
dev->CreatePixelShader(PS->GetBufferPointer(), PS->GetBufferSize(), NULL, &pPS);

// set the shader objects
devcon->VSSetShader(pVS, 0, 0);
devcon->PSSetShader(pPS, 0, 0);



but in the DX10 code it's like:


D3D10_TECHNIQUE_DESC techDesc;
mTech->GetDesc( &techDesc );
for(UINT p = 0; p < techDesc.Passes; ++p)
{
    mWVP = mView*mProj;
    mfxWVPVar->SetMatrix((float*)&mWVP);
    mTech->GetPassByIndex( p )->Apply(0);
    mLand.draw();
}



and this:


mTech = mFX->GetTechniqueByName("ColorTech");

mfxWVPVar = mFX->GetVariableByName("gWVP")->AsMatrix();



I've looked around the book, and it doesn't really explain what these do. Also, here's the HLSL from the DX10 book for reference:


//=============================================================================
// color.fx by Frank Luna (C) 2008 All Rights Reserved.
//
// Transforms and colors geometry.
//=============================================================================


cbuffer cbPerObject
{
    float4x4 gWVP;
};

void VS(float3 iPosL  : POSITION,
        float4 iColor : COLOR,
        out float4 oPosH  : SV_POSITION,
        out float4 oColor : COLOR)
{
    // Transform to homogeneous clip space.
    oPosH = mul(float4(iPosL, 1.0f), gWVP);

    // Just pass vertex color into the pixel shader.
    oColor = iColor;
}

float4 PS(float4 posH  : SV_POSITION,
          float4 color : COLOR) : SV_Target
{
    return color;
}

/*
RasterizerState Wireframe
{
    FillMode = Wireframe;
    CullMode = Back;
    FrontCounterClockwise = false;
};
*/

technique10 ColorTech
{
    pass P0
    {
        SetVertexShader( CompileShader( vs_4_0, VS() ) );
        SetGeometryShader( NULL );
        SetPixelShader( CompileShader( ps_4_0, PS() ) );

        //SetRasterizerState(Wireframe);
    }
}



I've never seen a "technique" mentioned in DX11, and I'd definitely never seen anything like this:

mTech = mFX->GetTechniqueByName("ColorTech");

mfxWVPVar = mFX->GetVariableByName("gWVP")->AsMatrix();

It seems like creating an input layout and such is a lot easier in DX11.

Anyone care to explain what these are?

The line the arrow is pointing at just stores the address of your first vertex in vinitData.pSysMem. That pointer is how CreateBuffer knows where to find the initial data to copy into the new buffer; it's one of the things vinitData describes. If I'm not wrong, you can also just map it like in DX11.
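For comparison, here's the DX11 version of that same initial-data path (a sketch, untested; I'm assuming a std::vector<Vertex> called vertices and an ID3D11Device* called dev):

```cpp
// DX11 equivalent of pSysMem initialization: the data is copied into the
// buffer at creation time, so no Map/memcpy is needed afterwards.
D3D11_BUFFER_DESC vbd = {};
vbd.Usage = D3D11_USAGE_IMMUTABLE;                 // filled once, GPU read-only
vbd.ByteWidth = UINT(sizeof(Vertex) * vertices.size());
vbd.BindFlags = D3D11_BIND_VERTEX_BUFFER;

D3D11_SUBRESOURCE_DATA vinitData = {};
vinitData.pSysMem = &vertices[0];                  // where to copy the data from

ID3D11Buffer* mVB = nullptr;
dev->CreateBuffer(&vbd, &vinitData, &mVB);
```

Note that an IMMUTABLE buffer can't be mapped at all, so if you want the Map/WRITE_DISCARD route instead, the usage has to be DYNAMIC with CPU write access.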

The part you show about HLSL in DX11 is initialization code, while the DX10 code you posted is for drawing.

Techniques are indeed not used in DX11 (although you can still use them via the Effects framework). In DX11 you work directly with the vertex and pixel shaders; you can also do this in DX10, and then you don't have to use techniques.
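A rough sketch of that direct (no-effects) DX10 path, mirroring your DX11 snippet (untested; the vs_4_0/ps_4_0 profiles, file name, and variable names are my assumptions):

```cpp
// Compile and bind DX10 shaders directly, without the effects framework.
ID3D10Blob *vsBlob = 0, *psBlob = 0;
D3DX10CompileFromFile(L"color.hlsl", 0, 0, "VS", "vs_4_0", 0, 0, 0, &vsBlob, 0, 0);
D3DX10CompileFromFile(L"color.hlsl", 0, 0, "PS", "ps_4_0", 0, 0, 0, &psBlob, 0, 0);

ID3D10VertexShader* pVS = 0;
ID3D10PixelShader*  pPS = 0;
md3dDevice->CreateVertexShader(vsBlob->GetBufferPointer(), vsBlob->GetBufferSize(), &pVS);
md3dDevice->CreatePixelShader(psBlob->GetBufferPointer(), psBlob->GetBufferSize(), &pPS);

// In DX10 the device issues draw state directly (there is no separate device
// context), and the Set calls take only the shader pointer.
md3dDevice->VSSetShader(pVS);
md3dDevice->PSSetShader(pPS);
```

The catch is that without the effects framework you also lose GetVariableByName, so constant buffers like gWVP have to be created and updated by hand (UpdateSubresource or Map).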

I don't have much experience with it, so I don't know whether you can use those variables without an effect in DX10.

Sorry that I edited this post a few times; I accidentally posted it too early.

vinitData describes the initial data to place inside the buffer once it's been created; pSysMem is the pointer to that data.

And techniques are ways of calling different shaders. For example:


technique10 Tech1
{
    pass P0
    {
        SetVertexShader(CompileShader(vs_4_0, VS()));
        SetGeometryShader(NULL);
        SetPixelShader(CompileShader(ps_4_0, PixelShaderWithNoEffects())); // <---
    }
}

technique10 Tech2
{
    pass P0
    {
        SetVertexShader(CompileShader(vs_4_0, VS()));
        SetGeometryShader(NULL);
        SetPixelShader(CompileShader(ps_4_0, PixelShaderWithEffects())); // <---
    }
}

And then in your main code you would just select whichever technique you want to use.
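In case it helps, picking a technique at draw time might look something like this (a sketch; useEffects and the other names are made up):

```cpp
// Choose one of the two techniques from the compiled effect, then run its passes.
ID3D10EffectTechnique* tech = useEffects
    ? mFX->GetTechniqueByName("Tech2")
    : mFX->GetTechniqueByName("Tech1");

D3D10_TECHNIQUE_DESC techDesc;
tech->GetDesc(&techDesc);
for (UINT p = 0; p < techDesc.Passes; ++p)
{
    tech->GetPassByIndex(p)->Apply(0); // binds that pass's shaders and render state
    // ... issue your draw calls here ...
}
```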

You can still use Map/memcpy to get data into a buffer. The 2nd parameter in md3dDevice->CreateBuffer(&vbd, &vinitData, &mVB) is just used to set the initial data in the buffer; you can set it to NULL instead: md3dDevice->CreateBuffer(&vbd, NULL, &mVB).

Oh, and fail in my last post...

Edit:

Here's an example of copying data in DX10:

bool CVertexBuffer::MapIndexData(UINT numIndex, void* pData)
{
    void* pVoid;
    IndexBuffer->Map(D3D10_MAP_WRITE_DISCARD, 0, &pVoid); // map the index buffer
    memcpy(pVoid, pData, sizeof(int) * numIndex);         // copy the indices into the buffer
    IndexBuffer->Unmap();

    this->numIndexes = numIndex;

    return true;
}

^^ Yeah, that's what I did in DX11. OK, that makes sense then.

The technique thing for DX10 is a whole different mess, though.


^^ Yeah, that's what I did in DX11. OK, that makes sense then.

The technique thing for DX10 is a whole different mess, though.


Haha, yeah, it's basically just a way to call different shader functions. It's mainly used for selecting which shaders to use on different graphics cards. If I remember correctly, DX actually provides you with a function that decides which technique is better.
