About auto.magician

  1. auto.magician

    DX11 IDXGIAdapter.CheckInterfaceSupport bug

    Hi, yes, that was the first thing I thought of, but there's only one card and only one adapter returned. There's no integrated motherboard GPU or CPU/GPU combination that might create an extra adapter entry.
  2. auto.magician


    Hiya, I'm not sure I'd be much help here as I'm throwing this out from memory, but from what I remember the two layouts, your C++ and your shader, should match exactly. However, your C++ is set up with

    ```cpp
    layout[0].Format = DXGI_FORMAT_R32G32B32_FLOAT; // a 3-component input
    ```

    and the shader is expecting

    ```hlsl
    float4 position : POSITION; // a 4-component input
    ```

    though I can't remember if this makes a difference or not.

    The semantic index is separate from the semantic name in the C++ input layout, so you don't use "COLOR1" as a semantic name on the C++ side. For COLOR1:

    ```cpp
    layout[4].SemanticName         = "COLOR";
    layout[4].SemanticIndex        = 1;
    layout[4].Format               = DXGI_FORMAT_R32G32B32A32_FLOAT;
    layout[4].InputSlot            = 0;
    layout[4].AlignedByteOffset    = D3D11_APPEND_ALIGNED_ELEMENT;
    layout[4].InputSlotClass       = D3D11_INPUT_PER_VERTEX_DATA;
    layout[4].InstanceDataStepRate = 0;
    ```

    Then you should be able to use COLOR1 in the shader.

    Using PIX will show you the exact layout and the data that's being passed through each stage of the render pipeline, including shader inputs and variables, so I'd start there to see what's going on GPU-side.

    SV_TARGET is a semantic for the currently set render target. You can append an index value to direct pixel output to the corresponding render target; of course, the render targets need to be valid and have been created on the C++ side first. It can be used for writing to multiple render targets in one pass, typically with, but not limited to, deferred renderers.
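    Since the layout above relies on D3D11_APPEND_ALIGNED_ELEMENT, a quick sketch of the offsets it resolves to may help. This is a hypothetical five-element layout: only elements 0 and 4 appear in the post, the middle three (NORMAL, TEXCOORD0, COLOR0) are assumed for illustration.

    ```cpp
    #include <cassert>
    #include <cstddef>

    // Byte sizes of the DXGI formats in a hypothetical five-element layout
    // (only elements 0 and 4 appear in the post above; 1-3 are assumed).
    constexpr size_t kSizes[5] = {
        12, // layout[0] POSITION  DXGI_FORMAT_R32G32B32_FLOAT    (3 x 4 bytes)
        12, // layout[1] NORMAL    DXGI_FORMAT_R32G32B32_FLOAT    (assumed)
        8,  // layout[2] TEXCOORD0 DXGI_FORMAT_R32G32_FLOAT       (assumed)
        16, // layout[3] COLOR0    DXGI_FORMAT_R32G32B32A32_FLOAT (assumed)
        16, // layout[4] COLOR1    DXGI_FORMAT_R32G32B32A32_FLOAT
    };

    // D3D11_APPEND_ALIGNED_ELEMENT packs each element directly after the
    // previous one in the same input slot, so the resolved byte offset is
    // just the running sum of the earlier elements' sizes.
    constexpr size_t appendOffset(int element)
    {
        size_t off = 0;
        for (int i = 0; i < element; ++i)
            off += kSizes[i];
        return off;
    }

    int main()
    {
        assert(appendOffset(0) == 0);   // POSITION starts at byte 0
        assert(appendOffset(4) == 48);  // COLOR1 starts 48 bytes into the vertex
        assert(appendOffset(5) == 64);  // total vertex stride
        return 0;
    }
    ```

    The same running sum is what the vertex-buffer stride passed to IASetVertexBuffers has to match.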
  3. Hiya, I had a problem with IDXGIAdapter.CheckInterfaceSupport returning that a Radeon Dx11 gpu didn't support the Dx11 interface, though it was fine with 10.1. However, Dx11 was available through D3D11CreateDevice, so after a quick work-around I changed my code to use that function. After I had my code working correctly and reporting as expected, I scouted the internet, only to find references to this issue with Dx10 devices back in 2009/2010. Does anyone know whether this bug is supposed to have been fixed yet? I would ask over on the MSDN, but... not meaning to sound rude, it seems a waste of time asking anything there lately. Many thanks in advance. Dave.
  4. Hiya, You can get a reasonably detailed error report from the compiler by passing a Blob interface in for the 2nd-from-last parameter of the D3DXCompileFromFile function. If D3DXCompileFromFile(...) returns an error, that Blob buffer pointer will point to a standard C string containing any errors from compilation. I would check the return codes of the CreatexxxxxShader functions too. It may be that your gpu card doesn't support shader model 5.0?
  5. auto.magician

    Copying from 2DTexture to 1DTexture

    Oh my god!! After nearly half an hour writing a lengthy explanation and making some detailed pics, I've been tripped up by the shader compilation! In the shader file I have 2 texture definitions and various shader functions. In my code I was thinking about the 2 texture slots and setting the depth resource to slot 1 (as opposed to slot 0) in the PSSetShaderResources function, not realising that because the first texture isn't used in this particular shader function it gets culled out of the compilation, so the shader ends up using something from I don't know where. It must be a different texture from a previous stage of the process! LOL. I've been tripped up by that in the past already. Thank you for replying, I feel so dumbfounded. Dave
  6. auto.magician

    Copying from 2DTexture to 1DTexture

    Hiya. With your help I got this working, as I'm now able to create a 1D depth map from a 2D image using the depth buffer for depth testing. It works great, so a big thank you. But I now have another problem, which again I think is to do with the texture coords: I want to use the information from the 1D depth map and build a 2D image from it. I'm successfully sampling from the 1D depth map as a shader resource, but my problem is that the depth information is being applied to the 2D texture only on the diagonal from 0,0 to 1,1. I thought the rasterizer would sample the 1D texture only at the specified incoming u coord and apply it to all vertical lines? This is a cut-back version of the sampling:

    ```hlsl
    Texture1D inDepth;

    struct PS_IN
    {
        float4 pos : SV_POSITION;
        float2 tex : TEXCOORD0;
    };

    float4 ShadowMapPS(PS_IN input)
    {
        [...]
        float shadowMapDistance = inDepth.Sample(Sampler, input.tex.x).r;
        [...]
        float light = shadowMapDistance;
        [...]
        return float4(light, light, light, 1);
    }
    ```

    I'm passing a standard full quad with standard uvs in the vertex buffer (I understand your optimisation above, but I need the tools). I've also tried changing the uv y coord for the 4 vertices to force it to sample from various fixed v, i.e. 0 or 0.5, but the 2D sampled texture only gets affected across the diagonal, as if it's sampling correctly but writing to the same v coord as the u coord. Your help is always appreciated. Dave.
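    For reference, a minimal sketch of the intended behaviour (hypothetical names, and it assumes the 1D map really is bound to the slot the shader expects): Texture1D.Sample takes a single float coordinate, so only u influences the fetch, and every row of the 2D target should receive the same value for a given column.

    ```hlsl
    Texture1D<float> inDepth : register(t0); // assumed slot
    SamplerState     Sampler : register(s0);

    float4 Rebuild2DPS(float4 pos : SV_POSITION,
                       float2 tex : TEXCOORD0) : SV_TARGET
    {
        // The 1D sampler only sees tex.x; tex.y plays no part in the fetch,
        // so every pixel in column u gets the same depth value.
        float d = inDepth.Sample(Sampler, tex.x).r;
        return float4(d, d, d, 1);
    }
    ```

    If the output still varies with v, the value being sampled is coming from somewhere other than the 1D resource (e.g. a stale binding), not from the coordinate math.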
  7. Hiya, Hehe, if I may butt in here... I used the same statue model when I learned shadow mapping, although the base is different. It's available in .x form via the 'source code' link at the bottom of this tutorial page: Shadow Mapping Tutorial. It is an awesome model.
  8. auto.magician

    Copying from 2DTexture to 1DTexture

    Wow. Thank you. I didn't expect you to knock up some code like that, and I really appreciate you taking time out of your coffee break to do it. I've read through it and I do understand what you're doing. I've also learned about generating vertices within the shader from it. I'll have a go in the next few days as I'm working ridiculous hours at the moment (non-IT related). Thanks again. Dave
  9. auto.magician

    Copying from 2DTexture to 1DTexture

    Hiya Nik02, I was using the DrawInstanced call to generate the geometry (256x1 (dest) quads for a 256x256 (source)), then using the SV_InstanceID and some math to generate the v coord, and just passing through the same u coord; these uv would be used to read from the source texture. I did get it working when (as a test) copying from a 256x256 to a 256x256 using

    ```hlsl
    struct ShadowMapPixelInput
    {
        float4 pos    : SV_POSITION;
        float2 tex    : TEXCOORD0;
        float  instID : TEXCOORD1;
    };

    float4 inCol = inTex.Load(uint3(input.tex.x * 256.0, input.tex.y * 256.0, 0));
    ```

    but when I use the SV_InstanceID to get the v value it just doesn't work:

    ```hlsl
    float4 inCol = inTex.Load(uint3(input.tex.x * 256.0, instID, 0));
    ```

    In PIX it shows that the value output from the vs for instID : TEXCOORD1 is the value of the SV_InstanceID (integer cast to a float), but not in the ps. Am I using it correctly in the Texture.Load function? PIX says that the value may be shown incorrectly as it's an SV_ value. Thank you for your help.

    EDIT: Lol, I understand now! When using the instID, the v data is the same for each vertex, which is incorrect. I'll re-work the math and try again. Dave
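    The instance-to-row mapping itself is just integer math and can be checked on the CPU. A small sketch (hypothetical helper names, assuming a 256x256 source reduced to 256 one-texel-high rows, instance i addressing source row i):

    ```cpp
    #include <cassert>

    // For a 256x256 source read row-by-row, instance i addresses texel row i.
    // Texture.Load takes an integer texel coordinate, so the row is just the
    // instance id; Sample takes a normalized coordinate, so the centre of
    // row i is (i + 0.5) / height.
    constexpr int kHeight = 256;

    constexpr int loadRow(int instanceId) { return instanceId; }

    constexpr float sampleV(int instanceId)
    {
        return (instanceId + 0.5f) / kHeight;
    }

    int main()
    {
        assert(loadRow(0)   == 0);                // first instance, first row
        assert(loadRow(255) == 255);              // last instance, last row
        assert(sampleV(0)   == 0.5f / 256.0f);    // centre of the first row
        assert(sampleV(255) == 255.5f / 256.0f);  // centre of the last row
        return 0;
    }
    ```

    The +0.5 texel offset only matters for Sample; Load bypasses filtering and takes the raw row index.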
  10. auto.magician

    Copying from 2DTexture to 1DTexture

    Hmm, I have to eat humble pie as I can't get this to work. What I'm struggling with is conceptualising how to send the correct uv (or v only) values from the vs to the ps. I send in the current vertex id, but then how does that equate to the ps rasterising correctly across the incoming texture? Maybe I need an advanced texturing tutorial to teach me the fundamentals of how the vs and ps work together? All help is very much appreciated. Many thanks. Dave.
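    On the vs/ps question above, the key fact is that the rasterizer linearly interpolates every vertex-shader output across each primitive, so the ps receives a unique uv per pixel even though the vs only ran at the corners. A minimal sketch with hypothetical names:

    ```hlsl
    Texture2D    srcTex  : register(t0); // assumed bindings
    SamplerState Sampler : register(s0);

    struct VS_OUT
    {
        float4 pos : SV_POSITION;
        float2 tex : TEXCOORD0;
    };

    // The vertex shader only sets uv at the quad's corner vertices...
    VS_OUT PassThroughVS(float4 pos : POSITION, float2 tex : TEXCOORD0)
    {
        VS_OUT o;
        o.pos = pos;
        o.tex = tex;   // e.g. (0,0), (1,0), (0,1), (1,1) for a full quad
        return o;
    }

    // ...and the rasterizer interpolates them, so every pixel arrives here
    // with its own uv; no manual looping is needed in the shader.
    float4 CopyPS(VS_OUT i) : SV_TARGET
    {
        return srcTex.Sample(Sampler, i.tex);
    }
    ```

    Anything that must stay constant per primitive (like an instance-derived row index) should be passed unmodified on all of that primitive's vertices, so interpolation leaves it unchanged.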
  11. auto.magician

    vs_4_0 optimisations

    Thank you for your help and advice. Very informative.
  12. auto.magician

    vs_4_0 optimisations

    Wow, thank you. It increased the instruction slots to 31 but brought the framerate back up to 80. I've read about those commands in the docs, but I thought it would make things slower as more instruction slots would be used. Do you know where I could find information regarding the speed of the shader commands and functions? Thank you for that tip and for fixing it up! And I've learned something new too. Thanks again. Dave.
  13. auto.magician

    vs_4_0 optimisations

    Hiya MJP, Yes, I'm 100% sure it's the pixel shader. Or at least I think I am. I'm compiling the vs and ps separately and changing the compilation flag only for the ps; I'm not using the fx framework at all. Without optimisations the ps ends up with 45 instruction slots, which includes 2 'if else endif' blocks nested one inside the other. The optimised version is only 26 instruction slots with no nesting or branching, but it's almost 20% slower. TBH, the assembly was the first place I looked. Is it worth me posting the assembly output here? Are you thinking it might be something stalling in the pipeline?
  14. auto.magician

    DX11 vs_4_0 optimisations

    Hiya, Sorry, the title should read ps_4_0 optimisations... I've searched the forums for this problem and couldn't find anything related. I'm writing a vs and ps in Dx11.0 using shader level 4. If I use optimisation level 3 in the D3DCompile function, the fps is around 65; however, if I turn off optimisations using the skip-optimisations flag then I get 80fps! Has anyone heard of this kind of thing? Could it simply be the drivers for my gpu, or the gpu itself? The shaders are nothing special; the vs is a basic 'pass-through' to the ps. Any ideas and help are appreciated. The ps is below:

    ```hlsl
    cbuffer ScreenDim
    {
        float  screenWidth;
        float  screenHeight;
        float2 padding;
    };

    struct PixelLightingType
    {
        float4 position : SV_POSITION;
        float2 tex      : TEXCOORDS0;
        float4 lightPR  : TEXCOORDS1;
        float4 lightCI  : TEXCOORDS2;
    };

    Texture2D inTex[2];
    SamplerState Sampler;

    float4 LightingPixelShader(PixelLightingType input) : SV_TARGET
    {
        float4 outColor;

        float  depth  = inTex[0].Sample(Sampler, input.tex).r;
        float3 normal = inTex[1].Sample(Sampler, input.tex).rgb;
        normal = normal * 2 - 1;
        normal = normalize(normal);

        float3 pixel;
        pixel.x = screenWidth  * input.tex.x;
        pixel.y = screenHeight * input.tex.y;
        pixel.z = depth;

        float3 shading  = 0;
        float3 lightDir = input.lightPR.xyz - pixel;
        float  cone     = saturate(1 - length(lightDir) / input.lightPR.w);
        if (cone > 0)
        {
            float distance = 1 / length(lightDir) * input.lightCI.w;
            float amount   = max(dot(normal + depth, normalize(distance)), 0);
            shading        = distance * amount * cone * input.lightCI.rgb;
        }
        outColor = float4(shading, 1);
        return outColor;
    }
    ```

    Thanks in advance. Dave
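    Separately from the optimiser question, the falloff term in the shader above is plain arithmetic and can be sanity-checked on the CPU. A sketch of the cone factor, saturate(1 - d / radius), with hypothetical function names:

    ```cpp
    #include <cassert>

    // CPU version of the shader's falloff term:
    //   cone = saturate(1 - length(lightDir) / lightRadius)
    // saturate() clamps to [0, 1], so pixels beyond the light radius get 0
    // and the shading branch in the shader is skipped for them.
    constexpr float saturate(float x)
    {
        return x < 0.0f ? 0.0f : (x > 1.0f ? 1.0f : x);
    }

    constexpr float coneFactor(float distance, float radius)
    {
        return saturate(1.0f - distance / radius);
    }

    int main()
    {
        assert(coneFactor(0.0f,  10.0f) == 1.0f);  // at the light: full strength
        assert(coneFactor(5.0f,  10.0f) == 0.5f);  // halfway out: half strength
        assert(coneFactor(20.0f, 10.0f) == 0.0f);  // beyond the radius: clamped
        return 0;
    }
    ```

    Terms like this are branch-free once the clamp is expressed as saturate, which is one reason the `if (cone > 0)` in the shader is purely an optimisation rather than a correctness requirement.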
  15. auto.magician

    Copying from 2DTexture to 1DTexture

    Thanks Nik02. I thought I'd thought of everything, and then you point out that I'd assumed MJP's suggestions involved manual looping. Although I'm not a complete newbie with HLSL, I'm no master either. I'll give it a go and report back here, whether I get it working or not. Thanks again. Dave.