daVinci

Members
  • Content count

    36
  • Joined

  • Last visited

Community Reputation

122 Neutral

About daVinci

  • Rank
    Member
  1. CreateVertexShader fails

    Quote: Original post by Nik02
    I'm guessing that you try to load the object file as text.

    That was fast! Are you clairvoyant? It's exactly as you said: I missed the 'b' in fopen. Many, many thanks! Daniel
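    The fix Nik02 points at boils down to one character. A minimal sketch (the file name and helper name here are made up for illustration): on Windows, opening with "r" translates CRLF pairs and treats a 0x1A byte as end-of-file, which silently corrupts shader bytecode, while "rb" reads the bytes verbatim.

    ```c
    #include <stdio.h>
    #include <stdlib.h>
    #include <assert.h>

    /* Read an entire file in binary mode; returns a malloc'd buffer and sets *size. */
    static unsigned char *read_binary_file(const char *path, long *size)
    {
        FILE *f = fopen(path, "rb");   /* "rb", not "r": no newline/EOF translation */
        if (!f) return NULL;
        fseek(f, 0, SEEK_END);
        *size = ftell(f);
        fseek(f, 0, SEEK_SET);
        unsigned char *buf = malloc((size_t)*size);
        if (buf && fread(buf, 1, (size_t)*size, f) != (size_t)*size) {
            free(buf);
            buf = NULL;
        }
        fclose(f);
        return buf;
    }

    int main(void)
    {
        /* Demo payload: bytes that text mode would mangle on Windows (CR, LF, 0x1A). */
        const unsigned char data[] = { 0x00, 0x0D, 0x0A, 0x1A, 0xFF };
        FILE *f = fopen("demo.bin", "wb");
        fwrite(data, 1, sizeof data, f);
        fclose(f);

        long size = 0;
        unsigned char *buf = read_binary_file("demo.bin", &size);
        assert(buf && size == 5 && buf[3] == 0x1A);  /* all 5 bytes arrive intact */
        free(buf);
        return 0;
    }
    ```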
  2. I have written a simple vertex shader and compiled it with fxc.exe. Output:

        //
        // Generated by Microsoft (R) HLSL Shader Compiler 9.26.952.2844
        //
        //   fxc /Vd /T vs_2_0 /E main
        //
        //
        // Parameters:
        //
        //   float4 ambient;
        //   float4 diffuse;
        //   float glossiness;
        //   float3 lightDirection;
        //   float4 specular;
        //   float specularPower;
        //   float3 viewDirection;
        //   float4x4 world;
        //   float4x4 worldViewProjection;
        //
        //
        // Registers:
        //
        //   Name                Reg   Size
        //   ------------------- ----- ----
        //   ambient             c0    1
        //   diffuse             c1    1
        //   specular            c2    1
        //   glossiness          c4    1
        //   specularPower       c5    1
        //   worldViewProjection c6    4
        //   world               c10   3
        //   lightDirection      c14   1
        //   viewDirection       c15   1
        //

        vs_2_0
        def c3, 1, 0, 0, 0
        dcl_position v0
        dcl_normal v1
        mul r0, v0.y, c7
        mad r0, c6, v0.x, r0
        mad r0, c8, v0.z, r0
        mad oPos, c9, v0.w, r0
        mov r0.xyz, c14
        add r0.xyz, r0, c15
        nrm r1.xyz, r0
        dp3 r0.x, v1, c10
        dp3 r0.y, v1, c11
        dp3 r0.z, v1, c12
        nrm r2.xyz, r0
        dp3 r0.x, r2, r1
        dp3 r0.y, r2, c14
        mov r1.xyz, c1
        mad r0.yzw, r1.xxyz, r0.y, c0.xxyz
        pow r1.x, r0.x, c4.x
        mul r0.x, r1.x, c5.x
        mad oD0.xyz, c2, r0.x, r0.yzww
        mov oD0.w, c3.x

        // approximately 25 instruction slots used

     Now I load the object file and pass the data to CreateVertexShader. The function fails with the message: "Direct3D9: Shader Validator: X305: (Instruction Error) (Statement 2) Reserved bit(s) set in instruction parameter token! Aborting validation." What's wrong? :(
  3. I use the following shader:

        // Input image
        Texture2D inputImage : register(t0);
        SamplerState inputImageSampler : register(s0);

        float4 main(float2 nonused : TEXCOORD0, float2 uv : TEXCOORD1) : COLOR
        {
            return inputImage.Sample(inputImageSampler, uv);
        }

     This is a 1-to-1 mapping between pixels and texture samples (the UVs are calculated to ensure it). The format is BGRA 32-bit, not compressed. Could this happen because of the large amount of GPU memory involved? I mean, maybe with large textures the GPU has to move the texture in memory before sampling?
  4. Hello. I have a pixel shader which just samples a texture and returns the value. Depending on the size of the texture, performance degrades dramatically (the output size stays constant). I used 640x480 px and 3800x2800 px textures. Why? And how can I improve the performance? P.S. I have tried generating mipmaps, but it doesn't help. Thanks, Daniel
  5. Thank you for your replies. To trasseltass: I do it all in a single thread. To Josh: I call Release whenever it is required, as written in the docs. I'm pretty sure I have no leaks, because I have thorough unit tests, and when the device is released the D3D debug layer reports only two live objects (a pixel shader and a mesh of 4 points). If only I could enumerate all live objects at a given moment... Unfortunately, I cannot just call UnsetAllDeviceObjects; I still need some resources to continue working. In D3D9 (in WPF) this occurs: [7112] Direct3D9: (ERROR) :BitBlt or StretchBlt failed in Present. It does not occur when I use small textures, or when I work with large images but 'slowly'. What tools could you suggest for profiling this issue? For example, I would like to see how much free video memory I have at any given moment.
  6. Hi, guys. I have a very serious issue that is blocking my development completely. I have an application with a D3D10 renderer. The application is for image processing, so I have to create and release a lot of textures. Everything works fine, except... I interop with a D3D9 device (interop with WPF, just for your information). As you know, D3D9 (default pool) is limited by dedicated memory (D3D10 uses virtualized video memory). What I experience is that D3D9 (WPF) crashes if I render in D3D10 intensively. I have no memory leaks, for sure: I call resource->Release() and device->Flush(). I think that D3D10 does not actually free dedicated memory after Release and Flush. D3D10 holds on to it: if I create the same texture again, it just returns the old one. (I checked this; it is true.) This is a very good optimization, but... after some time the D3D9 device runs out of free memory and crashes! I am really hoping for your help! Thanks, Daniel
  7. Shared textures are out of sync

    Now I'm trying to copy 1x1 px to a staging resource and map it... I haven't figured out yet whether it helps or not...
  8. I create two devices: D3D10.1 and D3D9Ex. First, I create a surface with D3D9Ex (surface A) with a shared handle, and also create a D3D10.1 surface (B) based on this handle. Second, I render into an intermediate render target (C) and copy the result to surface B. Third, I invoke the D3D10.1 device's Flush and copy the contents of texture A to system memory, and the contents look old (from the previous frame). Is Flush not enough? What do I need to do to be sure that all operations have completed?
  9. How to use D2D with D3D11?

    Thank you for the detailed info, DieterVW. One more question: if I want to use D2D and D3D with WARP, I have to create a D3D10.1 device, right?
  10. I write something like the following:

        // Use the texture to obtain a DXGI surface.
        IDXGISurface *pDxgiSurface = NULL;
        renderTarget->QueryInterface(&pDxgiSurface);
        if (pDxgiSurface == NULL)
            return NULL;

        // Create a D2D render target which can draw into it.
        D2D1_RENDER_TARGET_PROPERTIES props = D2D1::RenderTargetProperties(
            D2D1_RENDER_TARGET_TYPE_DEFAULT,
            D2D1::PixelFormat(DXGI_FORMAT_UNKNOWN, D2D1_ALPHA_MODE_PREMULTIPLIED),
            96, 96);
        HRESULT result = GetDirect2DFactory()->CreateDxgiSurfaceRenderTarget(
            pDxgiSurface, &props, &renderTargetDirect2D);

      I get E_NOINTERFACE in result. It must be that D2D integrates with D3D10.1 only. How, with minimal effort, can we render with D2D to a D3D11 surface? The D3D11 device was created with the flag D3D11_CREATE_DEVICE_BGRA_SUPPORT. The render target:

        D3D11_TEXTURE2D_DESC desc;
        ZeroMemory(&desc, sizeof(desc));
        desc.Width = width;
        desc.Height = height;
        desc.MipLevels = 1;
        desc.ArraySize = 1;
        desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
        desc.SampleDesc.Count = 1;
        desc.Usage = D3D11_USAGE_DEFAULT;
        desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
        desc.MiscFlags = 0;
  11. Mipmap generation with D3D10

    Quote: Original post by ET3D
    Note that generating a full set of MIP maps doesn't mean that content is generated for them, just that the surfaces themselves are generated. You also need D3D10_RESOURCE_MISC_GENERATE_MIPS for MiscFlags (take a look at that flag in the docs to see the other requirements).

    Yes, with D3D10_RESOURCE_MISC_GENERATE_MIPS it works. Thanks
  12. I'm trying to generate mipmaps automatically. MSDN says about D3D10_TEXTURE2D_DESC::MipLevels: "Use 0 to generate a full set of subtextures." I do the following:

        D3D10_TEXTURE2D_DESC desc;
        desc.Width = imageWidth;
        desc.Height = imageHeight;
        desc.MipLevels = 0;
        desc.ArraySize = 1;
        desc.Format = format;
        desc.SampleDesc.Count = 1;
        desc.Usage = D3D10_USAGE_DEFAULT;
        desc.BindFlags = D3D10_BIND_SHADER_RESOURCE;

        pixels = new D3D10_SUBRESOURCE_DATA();
        pixels->pSysMem = initialData;
        pixels->SysMemPitch = pitch;

        GetDevice()->CreateTexture2D(&desc, pixels, &renderTarget);

      The D3D10 debug layer said: ERROR: ID3D10Device::CreateTexture2D: pInitialData[2].pSysMem cannot be NULL
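    The error names pInitialData[2] because MipLevels = 0 requests a full chain, and initial data must then be supplied for every level, not just level 0. How many levels that is follows from repeatedly halving the largest dimension down to 1; a small sketch (the helper name is made up):

    ```c
    #include <assert.h>

    /* Number of mip levels in a full chain for a w x h texture:
       each level halves both dimensions (rounding down) until 1x1. */
    static unsigned full_mip_levels(unsigned w, unsigned h)
    {
        unsigned levels = 1;
        unsigned m = w > h ? w : h;
        while (m > 1) {
            m >>= 1;
            ++levels;
        }
        return levels;
    }

    int main(void)
    {
        assert(full_mip_levels(1, 1) == 1);
        assert(full_mip_levels(4, 4) == 3);      /* 4x4, 2x2, 1x1 */
        assert(full_mip_levels(640, 480) == 10); /* 640, 320, ..., 2, 1 */
        return 0;
    }
    ```

    As post 11 above concludes, the simpler route in D3D10 is to provide only level 0 (or no initial data at all), set D3D10_RESOURCE_MISC_GENERATE_MIPS in MiscFlags together with the render-target bind flag, and let GenerateMips fill in the rest.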
  13. Several of my pixel shaders produce the warning: "Gradient-based operations must be moved out of flow control to prevent divergence. Performance may improve by using a non-gradient operation." For example, for this shader:

        sampler2D original : register(s0);
        sampler1D curve : register(s1);

        float4 main(float2 uv : TEXCOORD) : COLOR
        {
            float4 color = tex2D(original, uv);
            float luminosity = (color.r + color.g + color.b) / 3.0;
            float mappedluminosity = tex1D(curve, luminosity).x;

            // Absolutely black color
            if (luminosity == 0.0)
                return float4(mappedluminosity, mappedluminosity, mappedluminosity, color.a);

            return float4(color.rgb * (mappedluminosity / luminosity), color.a); // WARNING points here
        }

      Does anyone know how to fix it?
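    One way to address the warning (a hedged sketch, not an answer from the thread) is to remove the branch entirely, so the compiler never has to place a texture fetch inside divergent flow control. The black-pixel special case can be folded into arithmetic with step/lerp, and the division guarded with max:

    ```hlsl
    sampler2D original : register(s0);
    sampler1D curve : register(s1);

    float4 main(float2 uv : TEXCOORD) : COLOR
    {
        float4 color = tex2D(original, uv);
        float luminosity = (color.r + color.g + color.b) / 3.0;

        // Both texture reads happen unconditionally, outside any flow control.
        float mappedluminosity = tex1D(curve, luminosity).x;

        // No branch: for black pixels select the mapped grey via step/lerp,
        // and guard the division so luminosity == 0 cannot divide by zero.
        float isBlack = step(luminosity, 0.0);
        float scale = mappedluminosity / max(luminosity, 1e-6);
        float3 rgb = lerp(color.rgb * scale, mappedluminosity.xxx, isBlack);
        return float4(rgb, color.a);
    }
    ```

    The trade-off is that both paths are always evaluated, which is usually cheaper on GPUs than divergent branching around a gradient instruction.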
  14. > You can't

      MSDN says that I can, see here: http://msdn.microsoft.com/en-us/library/bb173628(VS.85).aspx (the same applies to ID3D11DeviceContext::VSSetShader()).

      > you need a pixel shader

      Yes, I have a lot of pixel shaders :)
  15. How do I render without a vertex shader correctly? (I want the vertices to reach the pixel shader unmodified.) Previously I did this with D3D9 successfully. In the D3D10 case, ID3D10Device::VSSetShader(NULL) disables the shader for this pipeline stage (see http://msdn.microsoft.com/en-us/library/bb173628(VS.85).aspx). The main trouble: how do I set up the input layout correctly? ID3D10Device::CreateInputLayout() needs shader bytecode, but I haven't any. Could someone write a little code snippet? Thanks, Daniel
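    A common workaround (a sketch under assumptions, not from the thread; the struct and semantic names are made up) is to compile a trivial pass-through vertex shader anyway: its bytecode gives CreateInputLayout the input signature it needs, and binding it instead of NULL reproduces the D3D9 "no transform" behaviour:

    ```hlsl
    struct VSInput
    {
        float4 pos : POSITION;
        float2 uv  : TEXCOORD0;
    };

    struct VSOutput
    {
        float4 pos : SV_Position;
        float2 uv  : TEXCOORD0;
    };

    // Pass-through: positions are forwarded as already-transformed
    // clip-space coordinates, so vertices arrive at the pixel shader
    // exactly as they were placed in the vertex buffer.
    VSOutput main(VSInput input)
    {
        VSOutput output;
        output.pos = input.pos;
        output.uv = input.uv;
        return output;
    }
    ```

    The compiled blob is then passed both to ID3D10Device::CreateInputLayout() (as the signature) and to CreateVertexShader()/VSSetShader(), so the input-layout elements can be validated against real semantics.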