Icebone1000

DX11 on a DX10 card: how to handle effects/HLSL?


I'm getting so confused with DirectX... My video card is a DirectX 10 card (feature level 10_0), which supports shader model 4_0. So what should I do? Since I'm working with DirectX 11, how should I handle this? Since d3dx11effect is a "separate" framework, if I find that the current feature level is 10_0, do I have to use d3dx10effect instead, with DirectX 10 pipeline shaders?
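For reference, here is a minimal sketch (with hypothetical variable names, not my actual code) of how a D3D11 device can be created on downlevel hardware: you pass a feature-level array to D3D11CreateDevice and check which level the hardware actually gives back.

#include <d3d11.h>

// Sketch: request the highest available feature level and see what we get.
D3D_FEATURE_LEVEL requested[] = {
    D3D_FEATURE_LEVEL_11_0,
    D3D_FEATURE_LEVEL_10_1,
    D3D_FEATURE_LEVEL_10_0
};
D3D_FEATURE_LEVEL obtained;
ID3D11Device* pDevice = NULL;
ID3D11DeviceContext* pContext = NULL;

HRESULT hr = D3D11CreateDevice(
    NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, 0,
    requested, sizeof(requested)/sizeof(requested[0]), D3D11_SDK_VERSION,
    &pDevice, &obtained, &pContext );

if( SUCCEEDED(hr) && obtained < D3D_FEATURE_LEVEL_11_0 )
{
    // On a 10_0 card the D3D11 API still works, but shaders must be
    // compiled with 4_0 targets and 11_0-only features must be avoided.
}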

I'm getting this error:
Effects11: Effect version is unrecognized. This runtime supports fx_5_0 to fx_5_0.

This happens at:
hr = D3DX11CreateEffectFromMemory(g_pShader, g_pShader->GetBufferSize(), NULL, g_pDevice, &g_pEffect);

Here's the relevant part of the code:

if( D3DX11CompileFromFile(L"Tutorial02.fx", NULL, NULL, "Render","fx_5_0",D3D10_SHADER_ENABLE_STRICTNESS|D3D10_SHADER_DEBUG, NULL, NULL,
&g_pShader, &g_pErrorMsgs, &hr ) != S_OK ) return E_FAIL;


hr = D3DX11CreateEffectFromMemory( g_pShader, g_pShader->GetBufferSize(), NULL,
g_pDevice, &g_pEffect);
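As an aside, when the compile call itself fails it helps to dump the compiler output instead of just returning E_FAIL. A small sketch using the g_pErrorMsgs blob already passed above:

// Print the HLSL compiler messages (profile errors, syntax errors, ...)
if( g_pErrorMsgs )
    OutputDebugStringA( (const char*)g_pErrorMsgs->GetBufferPointer() );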




Here's my .fx file (I copied it from a DX10 tutorial):


//--------------------------------------------------------------------------------------
// Vertex Shader
//--------------------------------------------------------------------------------------
float4 VS( float4 Pos : POSITION ) : SV_POSITION
{
return Pos;
}


//--------------------------------------------------------------------------------------
// Pixel Shader
//--------------------------------------------------------------------------------------
float4 PS( float4 Pos : SV_POSITION ) : SV_Target
{
return float4( 1.0f, 1.0f, 0.0f, 1.0f ); // Yellow, with Alpha = 1
}


//--------------------------------------------------------------------------------------
technique10 Render
{
pass P0
{
SetVertexShader( CompileShader( vs_4_0, VS() ) );
SetGeometryShader( NULL );
SetPixelShader( CompileShader( ps_4_0, PS() ) );
}
}





It doesn't matter whether I set the profile to "fx_5_0" or "fx_4_0"; it still says "Effect version is unrecognized. This runtime supports fx_5_0 to fx_5_0."...
What is happening?

--EDIT---

I just figured out that I have to pass ID3D10Blob->GetBufferPointer(), not the ID3D10Blob itself.
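In other words, the create call becomes:

hr = D3DX11CreateEffectFromMemory( g_pShader->GetBufferPointer(), g_pShader->GetBufferSize(), NULL,
                                   g_pDevice, &g_pEffect );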

I'm getting the following DX errors:

D3D11: ERROR: ID3D11Device::CreateVertexShader: The pClassLinkage parameter must be NULL, unless GetFeatureLevel returns D3D_FEATURE_LEVEL_11_0 or greater.

D3D11: ERROR: ID3D11Device::CreatePixelShader: The pClassLinkage parameter must be NULL, unless GetFeatureLevel returns D3D_FEATURE_LEVEL_11_0 or greater.

D3D11: ERROR: ID3D11DeviceContext::Draw: A Vertex Shader is always required when drawing, but none is currently bound.

D3D11: ERROR: ID3D11DeviceContext::Draw: Rasterization Unit is enabled (PixelShader is not NULL or Depth/Stencil test is enabled and RasterizedStream is not D3D11_SO_NO_RASTERIZED_STREAM) but position is not provided by the last shader before the Rasterization Unit.


I don't know how to fix the first two, since I'm using effects and not creating the shaders myself...

For the last two I have no idea (maybe they happen because of the first two?). I'm trying to find out whether there is anything I should do differently for DX11 (since I'm using DX10 samples as a reference).

Here are my functions (tell me if you need more code):

HRESULT InitIA(){

    //create input buffers
    ID3D11Buffer *pInputBuffer = NULL;
    if( SetTriangleBuffer( pInputBuffer ) != S_OK ) return E_FAIL;

    //compile fx file, create effect and shader objects:
    if( SetEffect() != S_OK ) return E_FAIL;

    //create input layout object
    D3D11_INPUT_ELEMENT_DESC layout[] = {
        { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
        { "COLOR",    0, DXGI_FORMAT_R32G32B32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 }
    };

    D3DX11_PASS_DESC pass = {0};
    g_pEPass->GetDesc( &pass );
    if( g_pDevice->CreateInputLayout( layout, 2, pass.pIAInputSignature, pass.IAInputSignatureSize, &g_pInputLayout ) != S_OK ) return E_FAIL;

    //bind objects to the IA stage
    UINT stride = sizeof( vposcolor );
    UINT offset = 0;
    g_pDIContext->IASetVertexBuffers( 0, 1, &pInputBuffer, &stride, &offset );
    g_pDIContext->IASetInputLayout( g_pInputLayout );

    //specify the primitive type
    g_pDIContext->IASetPrimitiveTopology( D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST );

    return S_OK;
}
//===========
HRESULT SetEffect(){

    HRESULT hr = S_OK;

    //compile fx file
    if( D3DX11CompileFromFile( L"Tutorial02.fx", NULL, NULL, "Render",
        "fx_5_0", D3D10_SHADER_ENABLE_STRICTNESS|D3D10_SHADER_DEBUG, NULL, NULL,
        &g_pShader, &g_pErrorMsgs, &hr ) != S_OK ) return E_FAIL;

    //create effect
    if( D3DX11CreateEffectFromMemory( g_pShader->GetBufferPointer(), g_pShader->GetBufferSize(), NULL,
        g_pDevice, &g_pEffect ) != S_OK ) return E_FAIL;

    //get technique
    g_pETech = g_pEffect->GetTechniqueByName( "Render" );

    //get pass
    g_pEPass = g_pETech->GetPassByIndex( 0 );

    return S_OK;
}
//===========
VOID Render(){

    FLOAT fClearColor[] = { 0.0f, 0.0f, 1.0f, 1.0f };

    g_pDIContext->ClearRenderTargetView( g_pRTV, fClearColor );

    g_pEPass->Apply( 0, g_pDIContext );
    g_pDIContext->Draw( 3, 0 );

    g_pSwapChain->Present( 0, 0 );
}



The first two errors happen at D3DX11CreateEffectFromMemory, the last two at the draw call... Anything I'm missing?

You need the following code in order to fix the class linkage problem:

File: EffectNonRuntime.cpp

// This is a regular shader
D3D_FEATURE_LEVEL level = m_pDevice->GetFeatureLevel();
if( pShader->pReflectionData->RasterizedStream == D3D11_SO_NO_RASTERIZED_STREAM )
{
    pShader->IsValid = FALSE;
}
else if( level == D3D_FEATURE_LEVEL_11_0 )
{
    // Feature level 11_0: class linkage is allowed
    if( FAILED( (m_pDevice->*(pShader->pVT->pCreateShader))(
            (UINT*) pShader->pReflectionData->pBytecode,
            pShader->pReflectionData->BytecodeLength,
            m_pClassLinkage,
            &pShader->pD3DObject ) ) )
        pShader->IsValid = FALSE;
}
else
{
    // Downlevel feature levels: pClassLinkage must be NULL
    if( FAILED( (m_pDevice->*(pShader->pVT->pCreateShader))(
            (UINT*) pShader->pReflectionData->pBytecode,
            pShader->pReflectionData->BytecodeLength,
            NULL,
            &pShader->pD3DObject ) ) )
        pShader->IsValid = FALSE;
}

