About Trigle
  1. Hi guys,

     During my time at university I created an application using DirectX 10. I needed audio support for the following:

     1. Background music
     2. Ambient repeating samples
     3. Effect samples (play once)

     I followed all the Microsoft examples in order to achieve this, and the audio runs on a separate thread from the application's real-time logic.

     1. The background music doesn't impact performance (great!)
     2. If I use anything that's not in xWMA format, the voices only play for around 30 seconds. The application thinks it's still playing sound; no errors are returned or logged by the XAudio2 debug DLL.
     3. Playing lots of short samples (play once) sequentially results in similar behaviour: the sound just cuts out.

     I have tried this using both the release and debug versions of the DLL, and the behaviour is the same. I also appear to have no memory leaks. Originally I thought it was because I was using a submix voice, so I removed it from the chain and now send directly to the mastering voice.

     Is XAudio2 just a poor library? Any suggestions for a replacement?
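One plausible cause of play-once effects cutting out when fired rapidly is exhausting a fixed pool of source voices (or never reusing voices whose buffers have finished). The sketch below is a hypothetical, pure-logic model of such a pool, not real XAudio2 calls: `VoicePool`, `play`, and `tick` are invented names for illustration only.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch (no actual XAudio2 API used): a fixed pool of
// "source voices" for play-once effects. Firing a sample grabs a free
// voice; if every voice is still busy, the sample is silently dropped --
// one plausible way short effects can appear to "cut out" when fired
// in quick succession.
struct VoicePool {
    std::vector<int> remainingFrames; // 0 = voice is free

    explicit VoicePool(std::size_t count) : remainingFrames(count, 0) {}

    // Returns true if a free voice was found and the sample started.
    bool play(int durationFrames) {
        for (int& r : remainingFrames) {
            if (r == 0) { r = durationFrames; return true; }
        }
        return false; // pool exhausted: this effect is dropped
    }

    // Advance playback by one frame; finished voices become free again.
    void tick() {
        for (int& r : remainingFrames) {
            if (r > 0) --r;
        }
    }
};
```

In real XAudio2 terms the analogous fix would be creating enough source voices up front, or recycling voices once their buffers have ended (e.g. via the voice callback), rather than assuming XAudio2 itself is at fault.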
  2. OK, point taken; still, it was nice to actually understand why it was happening.
  3. Thanks for the idea, but it seems that changing this value in my sampler has solved the problem: sampDesc.MaxLOD = 6; It seems to have worked like this with respect to my borders (in my case at least, as explained previously by Hodgman):

     Level 0 - 64px
     Level 1 - 32px
     Level 2 - 16px
     Level 3 - 8px
     Level 4 - 4px
     Level 5 - 2px
     Level 6 - 1px

     Sweet
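The level table above is just successive halving of the 64px tile, clamped at 1px; a one-line helper makes the arithmetic explicit (the function name is mine, for illustration):

```cpp
#include <algorithm>

// Size in pixels of one 64px atlas tile at a given mip level.
// Each level halves the dimensions, clamped to a minimum of 1px,
// matching the Level 0..6 table above (64, 32, 16, ..., 1).
int tileMipSize(int basePixels, int level) {
    return std::max(1, basePixels >> level);
}
```

Since level 6 is the 1px floor for a 64px tile, clamping the sampler with MaxLOD = 6 stops the hardware from ever selecting a level where the tile no longer exists.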
  4. Trigle

    DXUT and high resolutions

    Have you tried D3D_FEATURE_LEVEL_9_1? Just a stab in the dark really.
  5. OK, I have tried this as a solution and unfortunately I'm still having the problem. Here are the vertex and pixel shaders respectively.

     cbuffer main
     {
         matrix mWorldViewProj;
     };

     struct VS_INPUT
     {
         float3 position : POSITION;
         float3 normal : NORMAL;
         float2 texcoord : TEXCOORD0;
         float2 atlasSector : TEXCOORD1;
     };

     struct VS_OUTPUT
     {
         float4 position : SV_POSITION;
         float4 color : COLOR;
         float2 texcoord : TEXCOORD0;
         float2 atlasSector : TEXCOORD1;
     };

     VS_OUTPUT vsMain(in VS_INPUT vIn)
     {
         VS_OUTPUT vOut = (VS_OUTPUT)0;
         vOut.position = mul(float4(vIn.position, 1.0f), mWorldViewProj);
         vOut.texcoord = vIn.texcoord;
         vOut.atlasSector = vIn.atlasSector;
         return vOut;
     }

     sampler s0 : register(s0);

     cbuffer main
     {
         texture2D groundTex;
     };

     struct VS_OUTPUT
     {
         float4 position : SV_POSITION;
         float4 color : COLOR;
         float2 texcoord : TEXCOORD0;
         float2 atlasSector : TEXCOORD1;
     };

     float4 psMain(in VS_OUTPUT vOut) : SV_TARGET
     {
         float2 uv = frac(vOut.texcoord);  // perform wrapping into the 0.0 to 1.0 range
         uv *= 0.25;                       // resize from a 512x512 area to a 128x128 area
         uv += float2(0.125, 0.125);       // offset by the 64px border
         uv += vOut.atlasSector * 0.5;     // offset into the desired quadrant
         vOut.color = groundTex.SampleGrad(s0, uv, ddx(vOut.texcoord * 0.25), ddy(vOut.texcoord * 0.25));
         return vOut.color;
     }

     And finally the sampler description:

     D3D10_SAMPLER_DESC sampDesc;
     ZeroMemory(&sampDesc, sizeof(sampDesc));
     sampDesc.Filter = D3D10_FILTER_MIN_MAG_MIP_LINEAR;
     sampDesc.AddressU = D3D10_TEXTURE_ADDRESS_CLAMP;
     sampDesc.AddressV = D3D10_TEXTURE_ADDRESS_CLAMP;
     sampDesc.AddressW = D3D10_TEXTURE_ADDRESS_CLAMP;
     sampDesc.ComparisonFunc = D3D10_COMPARISON_NEVER;
     sampDesc.MinLOD = 0;
     sampDesc.MaxLOD = D3D10_FLOAT32_MAX;

     Here's the screenshot and my texture screenshot (with guides from Photoshop to show the texture position): the problem still persists.
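The pixel shader's atlas remapping is plain arithmetic, so its intermediate values can be checked on the CPU. The sketch below mirrors that arithmetic only (it is an illustrative stand-in, not the actual pipeline; `Float2` and `atlasRemap` are names I made up): wrap into [0,1), shrink to the 128px payload of the 512px atlas (x0.25), skip the 64px border (+0.125 = 64/512), then jump to the requested quadrant.

```cpp
#include <cmath>

struct Float2 { float x, y; };

// CPU mirror of the shader's atlas remapping (illustrative sketch only):
// frac -> *0.25 -> +0.125 border offset -> +sector * 0.5 quadrant offset.
Float2 atlasRemap(Float2 t, Float2 sector) {
    auto fracf = [](float v) { return v - std::floor(v); };
    Float2 uv{fracf(t.x), fracf(t.y)};
    uv.x = uv.x * 0.25f + 0.125f + sector.x * 0.5f;
    uv.y = uv.y * 0.25f + 0.125f + sector.y * 0.5f;
    return uv;
}
```

For example, a tile-local coordinate of 0 in the (0,0) quadrant lands exactly on the inner edge of the 64px border (u = 0.125), which is what the +0.125 offset is for.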
  6. Wow, I didn't think of the bordering in that way, to be honest. I'll do as you suggest and see how it works out.

     [quote]
     struct VS_INPUT
     {
         float3 position : POSITION;
         float3 normal : NORMAL;
         float2 texcoord : TEXCOORD0;    // regular tex-coords, as if you were using regular (non-atlas) textures
         float2 atlasSector : TEXCOORD1; // either (0,0), (1,0), (0,1) or (1,1) -- which part of the atlas to use
     };
     [/quote]

     Yeah, I was thinking along similar lines. Thanks for the idea; you've just inspired me towards a cool solution.
  7. Originally I just created a cube with unshared vertices, manually applied my tex-coords, and I was done. I'd create a 'crucifix' style cube map and wrap the texture around it. I'd make my box a much larger scale than my surroundings whilst also transforming its position to the camera position. I'm assuming people use spheres to create their sky so that they can do faux reflections off surfaces? What are people's thoughts on how I should implement my skybox?
  8. Thanks Hodgman, I knew you'd be good for it.

     [quote]Does this mean that each sub-image has a 64px wide border around it? If so, what's the 0.01f offset trick do?[/quote]

     Sorry, this should have been 256x256 (sorry, I've been slow since I got out of hospital, mate ;)). The 0.01f offset is added to the start or deducted from the end of every single generated tex-coord to simulate a border; it should also mean that sampling done by linear samplers does not pick pixels outside of the sub-texture. Sure, I should put in a real border of the same value to prevent seam issues; note taken. So that would be 0.01f x 256, which is 2.56 pixels for each border, giving me a 3 pixel border for each texture, yeah?

     [quote]It would help if you posted up some of your shader code, but I'll just pretend this is your code ;)[/quote]

     You see, I'm generating the tex-coords when I generate the mesh; it's not how I want to do it. I have always considered this an unnecessary approach to texturing, as all of those 'duplicate' tex-coords are being stored against each vertex, which is a massive waste of memory. I will be changing this, but I haven't yet settled on an alternative approach: determining the tex-coords within the vertex shader (from the y component of each vertex's untransformed position, i.e. its height) and sampling in the pixel shader.
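The border arithmetic in the post above is easy to verify: a 0.01 tex-coord inset on a 256px sub-image covers 0.01 x 256 = 2.56 pixels, so a whole-pixel border has to round up to 3px. A tiny helper (the name is mine, for illustration):

```cpp
#include <cmath>

// Whole-pixel border implied by a fractional tex-coord inset:
// an inset of 0.01 on a 256px sub-image covers 2.56 pixels,
// so a safe integer border rounds up with ceil.
int borderPixels(float inset, int subImageSize) {
    return static_cast<int>(std::ceil(inset * subImageSize));
}
```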
     [quote]You can fix it by doing the rate-of-change calculations yourself at the appropriate stage:

     float2 uv = input.uv;
     float2 ddx_uv = ddx(uv);
     float2 ddy_uv = ddy(uv);
     uv = /*do wrapping / atlas logic*/;
     float4 colour = tex.SampleGrad( S, uv, ddx_uv, ddy_uv );
     [/quote]

     What are the default input tex-coords that you've provided each vertex, assuming ddx and ddy need valid values in order to make a difference? My idea to solve this would be to allocate each vertex a 'texture reference and a triangle reference', from which the shader could instantly know which area of the texture atlas needed to be sampled. Each reference could be extracted using bitwise ops from a 32-bit integer as two 16-bit integers, saving myself space per vertex over what is traditionally used for storing the UV coords. This could then be looked up in an array of values to get the corresponding tex-coords, unless there's an easier way to do that. I have only started my DX10 version this week, so my shader is very basic.
     Vertex Shader

     cbuffer main
     {
         matrix WorldViewProj;
     };

     struct VS_INPUT
     {
         float3 position : POSITION;
         float3 normal : NORMAL;
         float2 texcoord : TEXCOORD0;
     };

     struct VS_OUTPUT
     {
         float4 position : SV_POSITION;
         float2 texcoord : TEXCOORD0;
     };

     VS_OUTPUT vsMain(in VS_INPUT vIn)
     {
         VS_OUTPUT vOut = (VS_OUTPUT)0;
         vOut.position = mul(float4(vIn.position, 1.0f), WorldViewProj);
         vOut.texcoord = vIn.texcoord;
         return vOut;
     }

     Pixel Shader

     sampler s0 : register(s0);

     cbuffer main
     {
         texture2D groundTex;
     };

     struct VS_OUTPUT
     {
         float4 position : SV_POSITION;
         float2 texcoord : TEXCOORD0;
     };

     float4 psMain(in VS_OUTPUT vOut) : SV_TARGET
     {
         // VS_OUTPUT has no color member here, so return the sample directly
         return groundTex.Sample(s0, vOut.texcoord);
     }

     I am using shader model 4.
  9. One day... one day I hope to be as talented with this as you, behc. That is one seriously impressive demo; it's better than a tech demo from a graphics card manufacturer!
  10. OK, it's definitely something to do with the linear sampling and mip-mapping. I have read a bit more about this: apparently, because I'm using a texture atlas, the tex-coords change so abruptly at tile boundaries that it confuses the mip-map level selection. Although I don't entirely understand this, I have discovered the functions SampleGrad and SampleLevel (I'm using shader model 4). I can't find many relevant examples of SampleGrad out there; can anyone explain to me how I could use it to improve my sampling at far distances? Maybe I am missing something fundamental to texture atlas approaches. I have already added and subtracted an offset of 0.01f on either side of the sub-texture to remove any borders at mip-map level 0, but as the terrain gets further away the sampling causes obvious issues. It wasn't as bad in my DX9 implementation, but in my DX10 one it is much more visible. I'm really at my wit's end with this one.
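SampleGrad takes explicit screen-space derivatives instead of letting the hardware derive them, which is what lets you pass gradients of the continuous (pre-frac) tex-coord so they stay smooth across tile seams. The sketch below is only a rough approximation of how a mip level falls out of those gradients; the real D3D10 LOD formula is more involved, and `approxMipLevel` is a name I made up for illustration.

```cpp
#include <algorithm>
#include <cmath>

// Rough approximation of the mip level implied by the gradients passed
// to SampleGrad: convert the per-pixel UV deltas to texels, take the
// longer axis, and log2 it (clamped at level 0). Sketch only -- the
// actual D3D10 LOD calculation is more involved.
float approxMipLevel(float ddxU, float ddxV, float ddyU, float ddyV,
                     float texSize) {
    float lenX = std::sqrt(ddxU * ddxU + ddxV * ddxV) * texSize;
    float lenY = std::sqrt(ddyU * ddyU + ddyV * ddyV) * texSize;
    float t = std::max(lenX, lenY);
    return std::max(0.0f, std::log2(t));
}
```

The point for an atlas: frac() makes the UV jump by nearly a whole unit at tile edges, so hardware-derived gradients briefly explode and the sampler drops to a tiny mip for those pixels. Gradients computed from the un-wrapped tex-coord (scaled by the atlas factor, e.g. ddx(texcoord * 0.25)) avoid that spike.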
  11. Trigle

    Realtime Trianglemesh

    Oh, so this is PhysX related? I found this on a quick Google search: http://pastebin.com/sPudh22G
  12. Trigle

    DX11 Subdivision Surfaces

    Wow, thanks MJP, that was a really interesting read.
  13. Trigle

    OpenGL DX9 install displaying framerate

    Dude, he was only trying to help; you could have reinstalled before posting here. This is a programming forum, not a miscellaneous "I've screwed up DirectX" support forum.
  14. Trigle

    Realtime Trianglemesh

    If I have understood you correctly, as far as I am aware you will have to do the following:

    1. Create a mesh of your collision map (or many, as an optimisation)
    2. Calculate which vertices have moved from the user's interaction with the voxel terrain
    3. Lock the vertex buffer of the mesh
    4. Copy in the changed vertices
    5. Unlock the vertex buffer

    The D3DX10CreateMesh function will allow you to create this mesh, assuming you don't already have one ;)
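The copy step in the list above can be sketched with a plain array standing in for the locked vertex buffer. This is an illustrative stand-in only: real D3D10 code would obtain `mapped` by locking/mapping the mesh's buffer, and `Vertex` and `copyChangedVertices` are names I invented.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

struct Vertex { float x, y, z; };

// Sketch of step 4: write only the changed vertices into the (here
// simulated) locked vertex buffer, instead of re-uploading everything.
// `changes` pairs each buffer index with its new vertex value.
void copyChangedVertices(Vertex* mapped,
                         const std::vector<std::pair<std::size_t, Vertex>>& changes) {
    for (const auto& c : changes) {
        mapped[c.first] = c.second;
    }
}
```

Updating only the touched region is the design point: for a large voxel terrain, re-copying the whole buffer every frame would dwarf the cost of the deformation itself.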
  15. Hi all, I am new to this forum and I think I'll start off with a relatively tame post. I tend to try to solve all my own bugs, and it's frustrating to have to ask about this one, but I've tried almost everything I can think of to resolve the problem. It may be a misconception of mine, or maybe it's a common issue; I don't know, I don't know many 3D graphics peeps. Anyway, on to the issue at hand:

      I have a manually generated terrain mesh, and my terrain is rather large. I've upped my far plane to some number like 10000 so I can view all of my terrain without having to move around to check that it's all rendered correctly; that's no problem. However, I have noticed that when sampling from a texture with a MipLODBias of 0 using a linear sampler, I get a moiré effect at far distances. If I provide a negative LOD bias I can avoid this, but then I get a shimmering effect from mesh polys closer to the camera. At first I thought my texture might not be big enough, so I tried a bigger texture; no joy. I am using a texture atlas of 512x512 pixels, which means each of the 4 images is 256x256 pixels in size.

      The negative LOD bias fixes this temporarily, but I am aware that driver control panels allow users to 'clamp' this value. I'm not sure what it would be clamped to, and it may still solve the problem, but I am curious whether there is another solution for this, or is my far plane simply too far? Any help would be appreciated, thank you.

      P.S. I am not using the Effect framework and not using DXUT.