# DX11 Getting NVIDIA SDK Samples to Compile


## Recommended Posts

Hi,

I've been trying to compile examples from the NVIDIA Graphics SDK 11 ([url="http://developer.nvidia.com/nvidia-graphics-sdk-11"]http://developer.nvidia.com/nvidia-graphics-sdk-11[/url]). I have Visual Studio 2010 Ultimate, but was getting tons of linker errors. The SDK says it wants to be statically linked, but when I changed the runtime library from /MD to /MT, I got even more linker problems.

I read (in the context of the CUDA SDK) that the samples, or at least the CUDA SDK, depend on the older compiler in Visual Studio 2008. I downloaded and installed Visual Studio 2008, and after correcting some superficial problems, I still get linker errors in various projects. The main ones seem to be:
[code]LINK : fatal error LNK1104: cannot open file 'd3dx11effectsd.lib'[/code]
and
[code]LINK : fatal error LNK1104: cannot open file 'DXUT10d.lib'[/code]
The examples run just fine on my computer--but from the precompiled binaries. I need to be able to compile them myself to make changes and understand how they work. Help!

Thanks,
-G

##### Share on other sites
Those 2 libs should be included with the sample. Or if you installed the full SDK, they're in ProgramData\NVIDIA Corporation\NVIDIA Direct3D SDK 11\Lib. Just set up your project's linker settings to include that directory and you should be good.
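As a side note (this is only an illustration, not something from the SDK docs): with MSVC you can also pull those libraries in from code via #pragma comment(lib, ...). The release library names below are inferred from the debug names in the errors above, and the SDK's Lib directory still needs to be listed under the linker's Additional Library Directories (or you can put full paths in the pragmas).
[code]
// Illustrative sketch only (MSVC-specific): link the NVIDIA sample libraries from code.
// Assumes ProgramData\NVIDIA Corporation\NVIDIA Direct3D SDK 11\Lib is already listed
// under the project's Additional Library Directories.
#ifdef _DEBUG
#pragma comment(lib, "d3dx11effectsd.lib") // debug Effects11 build
#pragma comment(lib, "DXUT10d.lib")        // debug DXUT build
#else
#pragma comment(lib, "d3dx11effects.lib")  // release names inferred, not verified
#pragma comment(lib, "DXUT10.lib")
#endif

int main() { return 0; } // stand-in entry point so the fragment is self-contained
[/code]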

##### Share on other sites
[quote name='MJP' timestamp='1306219916' post='4814950']Those 2 libs should be included with the sample. Or if you installed the full SDK, they're in ProgramData\NVIDIA Corporation\NVIDIA Direct3D SDK 11\Lib. Just set up your project's linker settings to include that directory and you should be good.[/quote]Awesome, that solves a huge part of it! It wants DXUT10d.lib, not DXUT10.lib, so it only compiles in release mode. Do the debug versions of these libraries just not exist?

Thanks,
-G

##### Share on other sites
[quote name='Geometrian' timestamp='1306246805' post='4815128']It wants DXUT10d.lib, not DXUT10.lib, so it only compiles in release mode. Do the debug versions of these libraries just not exist?[/quote]

Sorry, I have no idea. I can't find them either.

##### Share on other sites
[quote name='MJP' timestamp='1306259096' post='4815214']Sorry, I have no idea. I can't find them either.[/quote]

They do, but they're part of the DirectX SDK rather than the NVIDIA SDK. I assume they tell you somewhere in the release notes that you should have the latest DX SDK installed, and where to find it if you haven't installed it.

##### Share on other sites
[quote name='NightCreature83' timestamp='1306272445' post='4815315']They do, but they're part of the DirectX SDK rather than the NVIDIA SDK. I assume they tell you somewhere in the release notes that you should have the latest DX SDK installed, and where to find it if you haven't installed it.[/quote]

The SDK only comes with source + projects for DXUT and Effects11; it doesn't have any precompiled static libs.

##### Share on other sites
[quote name='NightCreature83' timestamp='1306272445' post='4815315']They do, but they're part of the DirectX SDK rather than the NVIDIA SDK. I assume they tell you somewhere in the release notes that you should have the latest DX SDK installed, and where to find it if you haven't installed it.[/quote]Well, I have had the (latest) DirectX SDK installed for some time now. Any clues on where they might be in the SDK? I didn't see them.

Thanks,

##### Share on other sites
Source for the libraries you need is in \NVIDIA Direct3D SDK 11\source\Common

As per usual, if building with 2008 you will need to add $(DXSDK_DIR)/include; to the front of the Additional Include Directories for each project.

Build debug and the 'd' versions of the libs will show up in NVIDIA Direct3D SDK 11\Lib

##### Share on other sites
[quote name='djmips' timestamp='1329201894' post='4912913']
Source for the libraries you need is in \NVIDIA Direct3D SDK 11\source\Common

As per usual, if building with 2008 you will need to add $(DXSDK_DIR)/include; to the front of the Additional Include Directories for each project.

Build debug and the 'd' versions of the libs will show up in NVIDIA Direct3D SDK 11\Lib
[/quote]
Which version of the DirectX SDK do I need? It doesn't work with "Microsoft DirectX SDK (March 2009)"; I get errors:
[CODE]
error C2065: "D3D11_BUFFER_UAV_FLAG_COUNTER": undeclared identifier
......

[/CODE]

##### Share on other sites
Is there any good reason not to have the newest SDK, which is June 2010?

##### Share on other sites
At the time of my first reply (February 2012), I was using the Microsoft DirectX SDK (June 2010).

### Similar Content

• Having some issues with a geometry shader in a very basic DX app.
We have an assignment where we are supposed to render a rotating textured quad, and in the geometry shader duplicate this quad and offset it by its normal. Very basic stuff essentially.
My issue is that the duplicated quad, when rendered in front of the original quad, seems to fail the Z test and thus the original quad is rendered on top of it.
What's even weirder is that this only happens for one of the triangles in the duplicated quad, against one of the original quad's triangles.

Here's a video to show you what happens: Video (ignore the stretched textures)

Here's my GS: (VS is simple passthrough shader and PS is just as basic)
[code]
struct VS_OUT
{
    float4 Pos : SV_POSITION;
    float2 UV  : TEXCOORD;
};

struct VS_IN
{
    float4 Pos : POSITION;
    float2 UV  : TEXCOORD;
};

cbuffer cbPerObject : register(b0)
{
    float4x4 WVP;
};

[maxvertexcount(6)]
void main(triangle VS_IN input[3], inout TriangleStream<VS_OUT> output)
{
    // Calculate normal
    float4 faceEdgeA = input[1].Pos - input[0].Pos;
    float4 faceEdgeB = input[2].Pos - input[0].Pos;
    float3 faceNormal = normalize(cross(faceEdgeA.xyz, faceEdgeB.xyz));

    // Input triangle, transformed
    for (uint i = 0; i < 3; i++)
    {
        VS_OUT element;
        VS_IN vert = input[i];
        element.Pos = mul(vert.Pos, WVP);
        element.UV = vert.UV;
        output.Append(element);
    }

    output.RestartStrip();

    // Duplicated triangle, offset along the face normal
    for (uint j = 0; j < 3; j++)
    {
        VS_OUT element;
        VS_IN vert = input[j];
        element.Pos = mul(vert.Pos + float4(faceNormal, 0.0f), WVP);
        element.Pos.xyz;
        element.UV = vert.UV;
        output.Append(element);
    }
}
[/code]
I haven't used geometry shaders much, so I'm not 100% sure what happens behind the scenes.
Any tips appreciated!

• Hi, I'm building a game engine using DirectX11 in C++.
I need a basic physics engine to handle collisions and motion, and have no time to write my own.
What is the easiest solution for this? Bullet and PhysX both seem too complicated, and would still require writing my own wrapper classes, it seems.
I found this thing called PAL (Physics Abstraction Layer), which can support Bullet, PhysX, etc., but it's so old and there's no info on how to download or install it.
The simpler the better. Please let me know, thanks!
• By Hexaa
I'm trying to draw lines with different thicknesses using the geometry shader approach from here:
It seems to work great on my development machine (some Intel HD). However, if I try it on my target (Nvidia NVS 300, yes it's old) I get different results; see the attached images. There seem to be gaps in my sine signal on the NVS 300, while the Intel does what I want and expect in the other picture.
It's a shame, because I just can't figure out why; I expect the results to be the same. I get no error in the debug output, with native debugging enabled. I disabled culling with CullMode.None. Could it be some z-fighting? I have little clue about it, but I tried playing around with the RasterizerStateDescription and DepthBias properties with no success, no change at all. Maybe I'm missing something there?
I develop the application with SharpDX btw.
Any clues or help is very welcome

• Hi,
I'm currently trying to write a shader which should compute a fast Fourier transform of some data, manipulate the transformed data, do an inverse FFT and then display the result as vertex offset and color. I use Unity3D and HLSL as the shader language. One of the main problems is that the data should not be passed from CPU to GPU every frame if possible. My original plan was to use a vertex shader and do the FFT there, but I fail to find out how to store changing data between shader calls/passes. I found a technique called ping-ponging, which seems to be based on writing to and exchanging render targets, but I couldn't find an example for HLSL as a vertex shader yet.
The examples I did find seem to use COLOR0 and COLOR1 as such render targets.
Is it even possible to do such calculations on the GPU only (in this shader stage)? I need the result of the calculation to modify the vertex offsets there.
I also saw the use of compute shaders in similar projects (ocean wave simulation); do they really copy data between CPU and GPU every frame?
How does this ping-ponging / render-target switching technique work in HLSL?
Have you seen an example of usage?
Thank you
appswert
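On the ping-ponging question above: the technique boils down to two textures whose read/write roles are swapped every pass, so the intermediate data never travels back to the CPU. Below is a hypothetical C++/D3D11 sketch of the CPU-side bookkeeping (PingPongTargets and RunPass are made-up names for illustration; resource creation and the FFT shaders themselves are omitted). The HLSL side simply samples whichever texture is bound as the shader resource that pass. Compute shaders can avoid the swapping altogether, since a UAV written by one Dispatch can be read by the next without any CPU round trip.
[code]
// Hypothetical sketch of render-target "ping-ponging" in C++/D3D11. Two textures
// alternate roles each pass: one is bound as the render target (written), the other
// as a shader resource (read). Creating the textures/views and the shaders is omitted.
#include <d3d11.h>

struct PingPongTargets
{
    ID3D11RenderTargetView*   rtv[2] = {}; // write views for the two textures
    ID3D11ShaderResourceView* srv[2] = {}; // read views for the same two textures
    int write = 0;                         // index of the texture currently written

    void Swap() { write = 1 - write; }     // flip read/write roles after a pass
};

// One processing step: read the previous result, write the next one into the other texture.
void RunPass(ID3D11DeviceContext* ctx, PingPongTargets& pp)
{
    const int read = 1 - pp.write;

    // Clear the SRV slot first so the texture we are about to render into is not
    // still bound as an input from the previous pass.
    ID3D11ShaderResourceView* nullSRV = nullptr;
    ctx->PSSetShaderResources(0, 1, &nullSRV);

    ctx->OMSetRenderTargets(1, &pp.rtv[pp.write], nullptr);
    ctx->PSSetShaderResources(0, 1, &pp.srv[read]);

    // Draw a full-screen triangle/quad here with the processing pixel shader bound,
    // e.g. ctx->Draw(3, 0);

    pp.Swap(); // the next pass will read what was just written
}
[/code]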
• Hi,
Should atomic operations (InterlockedAdd in my case) work without any issues on a RWByteAddressBuffer and be globally coherent?
I've come over from the CUDA world and committed a fairly simple kernel that does some job; the pseudo-code is as follows:
(Both kernels use that same RWByteAddressBuffer.)
The first kernel does some job and sets Result[0] = 0
(using Result.Store(0, 0)).
I've checked with the debugger, and indeed the value stored at dword 0 is 0.
Now my second kernel:
[code]
RWByteAddressBuffer Result;

[numthreads(8, 8, 8)]
void main()
{
    for (int i = 0; i < 5; i++)
    {
        uint4 v0 = DoSomeCalculations1();
        uint4 v1 = DoSomeCalculations2();
        uint4 v2 = DoSomeCalculations3();

        if (v0.w == 0 && v1.w == 0 && v2.w)
            continue;

        // increment counter by 3, and get its previous value
        // this should basically allocate space for 3 uint4 values in the buffer
        uint prev;
        Result.InterlockedAdd(0, 3, prev);

        // this fills the buffer with 3 uint4 values (+1 is here as the first 16 bytes is occupied by DrawInstancedIndirect data)
        Result.Store4((prev + 0 + 1) * 16, v0);
        Result.Store4((prev + 1 + 1) * 16, v1);
        Result.Store4((prev + 2 + 1) * 16, v2);
    }
}
[/code]
Now I invoke it with Dispatch(4, 4, 4).
Now I use DrawInstancedIndirect to draw the buffer, but occasionally there is a missed triangle here and there for a frame, as if the atomic counter does not work as expected.
Do I need any additional synchronization there?
I've tried AllMemoryBarrierWithGroupSync at the end of the kernel, but without effect.
If I do not use the atomic counter, and instead just output empty vertices (that will turn into degenerate triangles), then all is OK. It's as if I'm missing some form of synchronization, but I do not see such a thing in DX11.
I've tested on both old and new NVIDIA hardware (680M and 1080); the behaviour is the same.
