
jonathantompson

Members
  • Content count

    14
  • Joined

  • Last visited

Community Reputation

100 Neutral

About jonathantompson

  • Rank
    Member
  1. OK, now I'm getting somewhere... My camera far plane was 2.5 and my near plane was 0.01. All my objects were very close to the camera and they were all very small (<0.01 in width!). The view space coordinates were correct; it was just a matter of scale. The R, G and B values were therefore < 1, which is why everything just looked black. I was right, I was doing something stupid. So I've made all my world objects 100x larger and changed the camera near and far planes to 1.0 and 250.0. Now the view space coordinates look like this:
View Space Buffer: [img]http://farm6.static.flickr.com/5101/5556274705_8c9cf9b9c3.jpg[/img]
Occlusion buffer: [img]http://farm6.static.flickr.com/5018/5556281485_1775f7d931.jpg[/img]
So there's something wrong with the occlusion buffer, but at least I know the inputs are now OK. I'll continue to debug and get back to you guys.
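A quick way to sanity-check the position buffer regardless of scene scale is to rescale it for display only. Below is a minimal debug-only pixel shader sketch; it assumes a gCameraNearFar constant holding the near/far planes and the PPSampSource sampler bound to the position render target (names chosen to match the snippets in this thread, but treat them as illustrative):
[code]
// Debug-only visualization of the view-space position buffer.
// Dividing by the far plane keeps the output visible even when the scene is tiny.
uniform extern float2 gCameraNearFar;   // x = near, y = far (assumed constant)
sampler PPSampSource;                   // position render target (assumed binding)

float4 VisualizePositionPS(float2 tex0 : TEXCOORD0) : COLOR
{
    float3 posView = tex2D(PPSampSource, tex0).xyz;
    // abs() so negative x/y still show up; divide by far so values land roughly in [0,1]
    return float4(abs(posView) / gCameraNearFar.y, 1.0f);
}
[/code]
This only changes what is written to the back buffer for inspection; the float render target itself should keep the raw, unscaled view-space values.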
  2. Hmm... I'm having trouble working out why it should look like that. Firstly, if (viewpos.x, viewpos.y, viewpos.z) are stored as (R, G, B), shouldn't objects further from the camera have a stronger blue component? In your view space rendering it looks like blue is constant across the lower portion of the screen. When you're rendering the position data, are you rendering it like this?
[code]
// Sample the texture
float3 xyz = tex2D(PPSampSource, tex0).xyz;
return float4(xyz, 1.0);
[/code]
Or are you rendering some other representation of the data? As a side note, what format are you using to store your view-space position buffer? I'm guessing floating point... I'm really stuck on this. I feel like I must be doing something very stupid.
  3. Thanks again for your help. Does the view space position need to be normalized from 0 to 1? That's the only way I can imagine that view space positions would give 4 colored squares. Otherwise, some view values will be negative (DirectX view-space x and y run from -1 to 1, I think), which will appear dark on screen. Does depth need to be normalized as well? Sorry for bombarding you with questions... and I appreciate the help.
  4. I'm pretty sure it's correct. Stepping through the code, the matrices look OK. I even calculated the transforms in Matlab and compared them against the pixel shader results in PIX. This is the code that sets the view & projection matrices per frame:
[code]
void camera::Update(void)
{
    m_vUp = g_UI->GetSetting<D3DXVECTOR3>(&var_startingUp);
    util::GetCorrectUp(&m_vLookAtPt, &m_vEyePt, &m_vUp); // To correct for arbitrary eye direction
    D3DXMatrixLookAtLH(&m_matView, &m_vEyePt, &m_vLookAtPt, &m_vUp);

    // Fit near and far to scene objects' world bounds.
    // Important for cascaded shadow maps to reduce aliasing.
    g_objectManager->FitViewMatrixNearFarToRBObjects(&m_matView, &zNearFar.x, &zNearFar.y, zNearMin, zFarMax);
    zNearToFar = zNearFar.y - zNearFar.x;

    g_UI->GetWidthHeight(&width, &height);
    D3DXMatrixPerspectiveFovLH(&m_matProjection, g_UI->GetSetting<float>(&var_fieldOfView),
                               (float)width / (float)height, zNearFar.x, zNearFar.y);

    m_ViewProj = m_matView * m_matProjection;
}//Update
[/code]
This is the code that sets the matrices and draws a meshed object:
[code]
void renderer::DrawTexturedMeshPosNorm(rbobjectMeshData * meshData, D3DXMATRIXA16 * matWorld)
{
    UINT numPasses = 0;
    HR(m_FX->Begin(&numPasses, 0), L"Render::DrawTexturedMeshPosNorm() - m_FX->Begin Failed: ");
    HR(m_FX->BeginPass(0), L"Render::DrawTexturedMeshPosNorm() - m_FX->BeginPass Failed: ");

    D3DXMATRIX WV;
    D3DXMatrixMultiply(&WV, matWorld, &g_objectManager->GetCamera()->m_matView); // matWorld is current RBO model->world transform
    HR(m_FX->SetMatrix(m_FXHandles.m_hWV, &WV), L"Render::SetPosNormMatricies() - Failed to set m_hWV matrix: ");

    D3DXMATRIX WVP;
    D3DXMatrixMultiply(&WVP, &WV, &g_objectManager->GetCamera()->m_matProjection);
    HR(m_FX->SetMatrix(m_FXHandles.m_hWVP, &WVP), L"Render::SetPosNormMatricies() - Failed to set m_hWVP matrix: ");

    HR(m_FX->CommitChanges(), L"Render::DrawTexturedMeshPosNorm() - CommitChanges failed: ");

    for (UINT j = 0; j < meshData->materials->Size(); ++j)
    {
        HR(meshData->pMesh->DrawSubset(j), L"Render::DrawTexturedMeshPosNorm() - DrawSubset failed: ");
    }

    HR(m_FX->EndPass(), L"Render::DrawTexturedMeshPosNorm() - m_FX->EndPass Failed: ");
    HR(m_FX->End(), L"Render::DrawTexturedMeshPosNorm() - m_FX->End Failed: ");
}
[/code]
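If stepping through the matrices still doesn't expose the problem, one more check is to transform a known vertex into view space on the CPU with the same matrices and compare it against the PosWV value PIX reports for that vertex. A rough sketch using D3DX (the test vertex is just an example value):
[code]
// Debug check: transform a test point into view space on the CPU and compare
// against the view-space position the vertex shader outputs for the same vertex.
D3DXVECTOR3 testPosLocal(0.5f, 0.0f, 0.0f);   // a known model-space vertex (example value)

D3DXMATRIX WV;
D3DXMatrixMultiply(&WV, matWorld, &g_objectManager->GetCamera()->m_matView);

D3DXVECTOR3 testPosView;
D3DXVec3TransformCoord(&testPosView, &testPosLocal, &WV);   // includes the translation row

// If testPosView doesn't match what PIX shows for PosWV, the matrices being set
// on the effect are not the ones actually used to draw this mesh.
[/code]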
  5. Just as an update to anyone who comes across this: I was never able to get floating point texture filtering working on my 7800GTX machine. The caps say it is supported, and when enabled no DX9 warning or error is thrown, yet it just doesn't work. I've tried the same code on other cards (an ATI 5850) and the filtering looks great. For now, I'm using bilinear filtering in the shader on the 7800GTX machine as a workaround:
[code]
// Bilinear interpolation texture lookup
float4 tex2DBilinear(sampler textureSampler, float2 uv)
{
    float4 tl = tex2D(textureSampler, uv);
    float4 tr = tex2D(textureSampler, uv + float2(gTexelSize, 0));
    float4 bl = tex2D(textureSampler, uv + float2(0, gTexelSize));
    float4 br = tex2D(textureSampler, uv + float2(gTexelSize, gTexelSize));
    float2 f = frac(uv.xy * gTextureSize);   // get the decimal part
    float4 tA = lerp(tl, tr, f.x);           // will interpolate the red dot in the image
    float4 tB = lerp(bl, br, f.x);           // will interpolate the blue dot in the image
    return lerp(tA, tB, f.y);                // will interpolate the green dot in the image
}
[/code]
It's slower, but at least I get some shadow map filtering!
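If the result of this manual path ever looks shifted by half a texel, a common variant snaps the four taps to texel centers before blending. A sketch, assuming the same scalar gTexelSize / gTextureSize constants as above (and square textures):
[code]
// Manual bilinear lookup that blends between texel centers.
// gTextureSize = texture dimension in texels, gTexelSize = 1 / gTextureSize.
float4 tex2DBilinearCentered(sampler textureSampler, float2 uv)
{
    float2 texelPos = uv * gTextureSize - 0.5f;            // position in texel units, relative to texel centers
    float2 f = frac(texelPos);                             // blend weights
    float2 base = (floor(texelPos) + 0.5f) * gTexelSize;   // uv of the upper-left texel center

    float4 tl = tex2D(textureSampler, base);
    float4 tr = tex2D(textureSampler, base + float2(gTexelSize, 0));
    float4 bl = tex2D(textureSampler, base + float2(0, gTexelSize));
    float4 br = tex2D(textureSampler, base + float2(gTexelSize, gTexelSize));

    return lerp(lerp(tl, tr, f.x), lerp(bl, br, f.x), f.y);
}
[/code]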
  6. I've been tearing my hair out for days trying to get my DX9 SSAO code working. I know that there have been a million posts on this, but despite reading them 1000 times I'm still having trouble! I'm using the algorithm described in [url="http://archive.gamedev.net/reference/articles/article2753.asp"]A Simple and Practical Approach to SSAO[/url]. I think the issue stems from my calculation of view space normals and positions. Is the view space position supposed to be normalized? Here is the code for storing view space normals and positions to two D3DFMT_A32B32G32R32F render targets:
[b]Position and Normal Map Generation:[/b]
[code]
struct PosNormMap_PSIn
{
    float4 PosWVP  : POSITION0;  // Homogeneous position
    float4 PosWV   : TEXCOORD0;  // View space position
    float4 NormWV  : TEXCOORD1;  // View space normal
};

struct PosNormMap_PSOut // 128 bit buffers
{
    float4 Pos  : COLOR0;  // View space position
    float4 Norm : COLOR1;  // View space normal
};

// Vertex Shader
PosNormMap_PSIn BuildPosNormMapVS(float3 Position : POSITION, float3 Normal : NORMAL0)
{
    PosNormMap_PSIn Output;
    Output.PosWVP = mul(float4(Position, 1.0f), gWVP);
    Output.PosWV  = mul(float4(Position, 1.0f), gWV);
    Output.NormWV = mul(float4(Normal, 0.0f), gWV);
    return Output;
}

// Normalize into [0,1]... Not really necessary for floating point textures, but required for integer textures
float4 NormalToTexture(float4 norm)
{
    return norm * 0.5f + 0.5f;
}

// Pixel Shader
PosNormMap_PSOut BuildPosNormMapPS(PosNormMap_PSIn Input)
{
    PosNormMap_PSOut Output;
    Output.Pos = Input.PosWV;
    //Output.Pos.xy = Output.Pos.xy / Output.Pos.w;
    //Output.Pos.z = linstep(gCameraNearFar.x, gCameraNearFar.y, Output.Pos.z); // Rescale from 0 to 1
    Output.Norm = NormalToTexture(normalize(Input.NormWV)); // Interpolated normals can become unnormal--so normalize and put into [0,1] range
    return Output;
}

technique BuildPosNormMap_Tech
{
    pass P0
    {
        vertexShader = compile vs_3_0 BuildPosNormMapVS();
        pixelShader  = compile ps_3_0 BuildPosNormMapPS();
    }
}
[/code]
[b]Here is the code for populating the occlusion buffer (D3DFMT_R32F):[/b]
[code]
uniform extern float2 gVectorNoiseSize;   // default: 64x64
uniform extern float2 gScreenSize;
uniform extern float  gSSAOSampleRadius;  // default: 0.5-->2.0
uniform extern float  gSSAOBias;          // default: 0.05
uniform extern float  gSSAOIntensity;     // default: 3.0
uniform extern float  gSSAOScale;         // default: 1.0-->2.0

sampler PPSampPosition = sampler_state
{
    Texture = <gPPPosition>;
    MipFilter = POINT; MinFilter = POINT; MagFilter = POINT;
    MaxAnisotropy = 1;
    AddressU = CLAMP; AddressV = CLAMP;
};

sampler PPSampNormal = sampler_state
{
    Texture = <gPPNormal>;
    MipFilter = POINT; MinFilter = POINT; MagFilter = POINT;
    MaxAnisotropy = 1;
    AddressU = CLAMP; AddressV = CLAMP;
};

sampler PPSampVectorNoise = sampler_state
{
    Texture = <gPPVectorNoise>;
    MipFilter = LINEAR; MinFilter = LINEAR; MagFilter = LINEAR;
    MaxAnisotropy = 1;
    AddressU = WRAP; AddressV = WRAP;
};

// Vertex Shader
void OcclusionMap_VS(float3 pos0 : POSITION, float2 tex0 : TEXCOORD0,
                     out float4 oPos0 : POSITION0, out float2 oTex0 : TEXCOORD1)
{
    // Pass on texture and position coords to PS
    oTex0 = tex0;
    oPos0 = float4(pos0, 1.0f);
}

float3 getPosition(in float2 tex0)
{
    return tex2D(PPSampPosition, tex0).xyz;
}

float3 getNormal(in float2 tex0)
{
    return normalize(tex2D(PPSampNormal, tex0).xyz * 2.0f - 1.0f);
    //return tex2D(PPSampNormal, tex0).xyz * 2.0f - 1.0f;
}

float2 getRandom(in float2 tex0)
{
    return normalize(tex2D(PPSampVectorNoise, gScreenSize * tex0 / gVectorNoiseSize).xy * 2.0f - 1.0f);
}

float doAmbientOcclusion(in float2 tcoord, in float2 tex0, in float3 p, in float3 cnorm)
{
    float3 diff = getPosition(tcoord + tex0) - p;
    const float3 v = normalize(diff);
    const float d = length(diff) * gSSAOScale;
    return max(0.0, dot(cnorm, v) - gSSAOBias) * (1.0 / (1.0 + d)) * gSSAOIntensity;
}

// Pixel Shader
float4 OcclusionMap_PS(float2 tex0 : TEXCOORD1) : COLOR
{
    const float2 vec[4] = { float2(1,0), float2(-1,0), float2(0,1), float2(0,-1) };
    float3 p = getPosition(tex0);
    float3 n = getNormal(tex0);
    float2 rand = getRandom(tex0);
    float ao = 0.0f;
    float rad = gSSAOSampleRadius / p.z;

    // SSAO Calculation
    const int iterations = 4;
    for (int j = 0; j < iterations; ++j)
    {
        float2 coord1 = reflect(vec[j], rand) * rad;
        //float2 coord2 = float2(coord1.x*0.707 - coord1.y*0.707, coord1.x*0.707 + coord1.y*0.707);
        float2 coord2 = float2(coord1.x - coord1.y, coord1.x + coord1.y) * 0.707f;

        ao += doAmbientOcclusion(tex0, coord1 * 0.25, p, n);
        ao += doAmbientOcclusion(tex0, coord2 * 0.5,  p, n);
        ao += doAmbientOcclusion(tex0, coord1 * 0.75, p, n);
        ao += doAmbientOcclusion(tex0, coord2,        p, n);
    }
    ao /= (float)iterations * 4.0;
    // END

    return float4(ao, 1.0f, 1.0f, 1.0f);
}

// Technique
technique OcclusionMap_Tech
{
    pass P0
    {
        // Specify the vertex and pixel shader associated with this pass.
        vertexShader = compile vs_3_0 OcclusionMap_VS();
        pixelShader  = compile ps_3_0 OcclusionMap_PS();
    }
}
[/code]
The normal buffer output looks OK, but the view buffer output looks off. I think each object is close to the camera near plane, so Z << 1, and that's why everything looks dark. Also, what filters should be used for the position and normal buffers? When calculating the occlusion buffer I'm rendering a full screen quad the same size as the position and normal buffers, so 1:1 texel mapping. I'm guessing POINT, with CLAMP addressing, is correct?
[b]Viewspace Position Buffer (62.5% of original size):[/b]
[img]http://farm6.static.flickr.com/5053/5556177450_b0ef5a57cd.jpg[/img]
[b]Viewspace Normal Buffer (62.5% of original size):[/b]
[img]http://farm6.static.flickr.com/5300/5556177422_01738ecd6f.jpg[/img]
[b]Occlusion Buffer (62.5% of original size):[/b]
[img]http://farm6.static.flickr.com/5295/5555591431_795576c6d3.jpg[/img]
[b]Thanks in advance for the help![/b]
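On the 1:1 mapping question: POINT filtering with CLAMP addressing is the usual choice, but in D3D9 a full screen quad also needs the half-pixel offset between pixel centers and texel centers, otherwise every lookup lands between four texels. A minimal sketch of one way to handle it in the quad vertex shader, reusing the gScreenSize constant above (the alternative is shifting the quad's vertex positions by -0.5 pixels on the CPU):
[code]
// D3D9 places pixel centers half a pixel away from texel centers, so shift the
// quad's texture coordinates by half a texel to get true 1:1 sampling.
void OcclusionMap_VS(float3 pos0 : POSITION, float2 tex0 : TEXCOORD0,
                     out float4 oPos0 : POSITION0, out float2 oTex0 : TEXCOORD1)
{
    oPos0 = float4(pos0, 1.0f);
    oTex0 = tex0 + 0.5f / gScreenSize;   // assumes the G-buffers are screen sized
}
[/code]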
  7. [quote name='glaeken' timestamp='1300223011' post='4786190'] Haven't looked at the code much but are you sure you video card supports floating point filtering for RG32F? Check the caps to make sure. [/quote] I guess a potential follow up question would be: If RG32F is not available, then what format do people use for variance shadow map textures? When I try 16bit floating point formats I get HORRIBLE artifacts... Is the only option then to manually filter bi or trilinearly in the fragment shader? If this is the case, then I don't see the advantage over regular shadow maps and PCF...
  8. [quote name='glaeken' timestamp='1300223011' post='4786190'] Haven't looked at the code much but are you sure you video card supports floating point filtering for RG32F? Check the caps to make sure. [/quote] You might be on the right track. My work computer doesn't list D3DUSAGE_QUERY_FILTER for this format, but my home computer does, and yet the rendering results I get are the same on both. Maybe it's some other issue?
My work computer's caps don't seem to list hardware filtering for RG32F... (Windows XP):
[img]http://farm6.static.flickr.com/5298/5529813157_2ee5cee845_b.jpg[/img]
However, my home computer does list support for G32R32F filtering (Windows 7):
[img]http://farm6.static.flickr.com/5055/5529822105_144bb9fe8d_b.jpg[/img]
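For anyone comparing machines like this, the same query the caps viewer performs can be done in code at startup, so the renderer can pick the manual-filtering fallback automatically. A rough sketch (adapter index, device type and display format are hard-coded assumptions here):
[code]
// Returns true if the device reports linear filtering support for D3DFMT_G32R32F.
// If it returns false, fall back to point sampling + manual bilinear in the shader.
bool SupportsFP32Filtering(IDirect3D9* pD3D)
{
    HRESULT hr = pD3D->CheckDeviceFormat(D3DADAPTER_DEFAULT,
                                         D3DDEVTYPE_HAL,
                                         D3DFMT_X8R8G8B8,        // display/adapter format (assumed)
                                         D3DUSAGE_QUERY_FILTER,  // ask specifically about filtering
                                         D3DRTYPE_TEXTURE,
                                         D3DFMT_G32R32F);
    return SUCCEEDED(hr);
}
[/code]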
  9. I am putting together a DX9 renderer with cascade variance shadow maps. Even though the shadows are rendering correctly, I am having trouble getting any sort of texture filtering to work. The aim of course (and the whole point of using the variance shadow maps) is to soften and anti-alias the projected shadow map with hardware sampling... I understand the theory that storing 2 moments and using Chebyshev inequality to approximate shadow percentage allows you to do linear sampling (which can be hardware accelerated). The shadow map texture format is D3DFMT_G32R32F, though I have tried others... However no matter what sampler_state I include in the effect, I can't seem to get any visible hardware sampling. I've included all the relevant HLSL code below as well as a few screen shots. I'm using a single low res shadow map to accentuate the aliasing. Basically, I'm unsure why "Mag/MagFilter = xxx;" (POINT, LINEAR, ANISOTROPIC) doesn't affect the texture sampler. Any help would be greatly appreciated: [b]Here is the vertex shader code TO BUILD THE SHADOW MAP:[/b] [code] Depth_PSIn BuildShadowMapVS(float3 Position : POSITION ) // Object space position { Depth_PSIn Output; Output.Position = mul(float4(Position, 1), gWVP); Output.PosView = mul(float4(Position, 1), gWV).xyz; return Output; } [/code] [b]Here is the pixel shader code TO BUILD THE SHADOW MAP:[/b] [code] float linstep(float min, float max, float v) { return clamp((v - min) / (max - min), 0, 1); } // Rescale into [0, 1] float RescaleDistToLight(float Distance) { return linstep(gLight.nearFar.x, gLight.nearFar.y, Distance); } float2 GetFPBias() { //return float2(0.5, 0); return float2(0, 0); } float2 ComputeMoments(float Depth) { float dx = ddx(Depth); float dy = ddy(Depth); // Compute first few moments of depth float2 Moments; Moments.x = Depth; Moments.y = Depth * Depth + 0.25*(dx*dx + dy*dy); ; return Moments; } float4 BuildShadowMapPS(Depth_PSIn Input) : COLOR { float Depth = RescaleDistToLight(length(Input.PosView)) + gVSMDepthEpsilon; float2 Moments = ComputeMoments(Depth) + GetFPBias(); return float4(Moments.x, Moments.y, 0.0f, 0.0f); } [/code] [b]Here is the vertex shader code WHEN RENDERING A TEXTURED MESH:[/b] [code] void MeshTextured_SpotLight_VS(float3 posL : POSITION, float3 normalL : NORMAL0, float2 tex0 : TEXCOORD0, out float4 oPosH : POSITION0, out float3 oPosW : TEXCOORD0, out float3 oNormalW : TEXCOORD1, out float3 oToEyeW : TEXCOORD2, out float2 oTex0 : TEXCOORD3, out float oSliceDepth : TEXCOORD4) { // Transform to homogeneous clip space. oPosH = mul(float4(posL, 1.0f), gWVP); // Transform vertex position to world space. oPosW = mul(float4(posL, 1.0f), gW).xyz; // Transform normal to world space (assume no non-uniform scaling). oNormalW = mul(float4(normalL, 0.0f), gW).xyz; // Compute the unit vector from the vertex to the eye. oToEyeW = gEyePosW - oPosW; // Pass on texture coords to PS oTex0 = tex0; // Calculate the slice depth oSliceDepth = oPosH.z; } [/code] [b]Here is the pixel shader code WHEN RENDERING A TEXTURED MESH:[/b] [code] // Per-pixel shading. Diffuse, ambient and specular void PerPixelShading_SpotLight( float3 posW, float3 normalW, float3 toEyeW, float3 lightVecW, float4 color, out float3 spec, out float3 diffuse, out float3 ambient, out float spot) { // Compute the reflection vector. float3 r = reflect(-lightVecW, normalW); // Determine how much (if any) specular light makes it into the eye. 
float t = pow(max(dot(r, toEyeW), 0.0f), gMtrl.specPower); // Determine the diffuse light intensity that strikes the vertex. float s = max(dot(lightVecW, normalW), 0.0f); // Compute the ambient, diffuse and specular terms separately. spec = t*(gMtrl.spec*gLight.spec).rgb; diffuse = s*(gMtrl.diffuse.rgb*gLight.diffuse.rgb); ambient = gMtrl.ambient.rgb*gLight.ambient.rgb; // Compute spotlight coefficient. spot = pow(max( dot(-lightVecW, gLight.dirW), 0.0000001f), gLight.spotPower); } // Per-pixel Shadow. void PerPixelShadowing_SpotLight( float3 posW, float sliceDepth, float DistToLight, out int Split, out float shadowCoeff ) { // Compute which split we're in: // (slideDepth > dist_0) + (slideDepth > dist_1) + (slideDepth > dist_2) + (slideDepth > dist_3) Split = dot(1, sliceDepth > gSplitDistances); // Project using the associated matrix float4 PosInLight = mul(float4(posW, 1), gSplitVPMatrices[Split]); float2 LightTexCoord = (PosInLight.xy / PosInLight.w) * float2(0.5, -0.5) + 0.5; // SHADOW CODE if(gDoShadowing) { // Sample the correct shadow map float2 Moments; if(Split == 0) Moments = tex2D(ShadowMapS0, LightTexCoord).xy; if(Split == 1) Moments = tex2D(ShadowMapS1, LightTexCoord).xy; if(Split == 2) Moments = tex2D(ShadowMapS2, LightTexCoord).xy; if(Split == 3) Moments = tex2D(ShadowMapS3, LightTexCoord).xy; if(Split == 4) Moments = tex2D(ShadowMapS4, LightTexCoord).xy; Moments = Moments + GetFPBias(); float RescaledDist = RescaleDistToLight(DistToLight); // VARIANCE SHADOW MAPS shadowCoeff = ChebyshevUpperBound(Moments, RescaledDist, gVSMMinVariance); shadowCoeff = LBR(shadowCoeff); } else { shadowCoeff = 1.0f; } } float4 MeshTextured_SpotLight_PS(float3 posW : TEXCOORD0, float3 normalW : TEXCOORD1, float3 toEyeW : TEXCOORD2, float2 tex0 : TEXCOORD3, float sliceDepth : TEXCOORD4) : COLOR { // Interpolated normals can become unnormal--so normalize. normalW = normalize(normalW); toEyeW = normalize(toEyeW); // Calculate normalized light vector and distance to light float3 lightVecW = gLight.posW - posW; float DistToLight = length(lightVecW); lightVecW /= DistToLight; // Sample Texture map. float4 texColor = tex2D(TexS, tex0); // Calculate per-pixel shading for spot light float3 spec; float3 diffuse; float3 ambient; float spot; PerPixelShading_SpotLight(posW, normalW, toEyeW, lightVecW, texColor, spec, diffuse, ambient, spot); // Calculate Shadow int Split; float shadowCoeff; PerPixelShadowing_SpotLight(posW, sliceDepth, DistToLight, Split, shadowCoeff); // Light/Texture pixel. Note that shadow coefficient only affects diffuse/spec. float3 litColor = spot*(ambient*texColor.rgb + shadowCoeff*(diffuse*texColor.rgb + spec)); // Visualize the splits by adding a linearly interpolated color offset if (gVisualizeSplits) { litColor = lerp(litColor, SplitColors[Split], 0.5); } return float4(litColor, gMtrl.diffuse.a*texColor.a); } [/code] [b]POINT Filter, 1 cascade, 512x512 textures, No texture blur:[/b] [img]http://farm6.static.flickr.com/5213/5529568603_e6566cf386.jpg[/img] [b]LINEAR Filter, 1 cascade, 512x512 textures, No texture blur:[/b] [img]http://farm6.static.flickr.com/5057/5529568217_f22450507c.jpg[/img] [b]ANISOTROPIC Filter, 1 cascade, 512x512 textures, No texture blur:[/b] [img]http://farm6.static.flickr.com/5060/5530155428_5bf7bc1008.jpg[/img] [b][b]POINT Filter, 4 cascades, 512x512 textures, 5x5 box blur:[/b][/b] [img]http://farm6.static.flickr.com/5053/5530156448_c6e9df444b.jpg[/img]
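The pixel shader above calls ChebyshevUpperBound and LBR without listing them. For reference, a typical variance shadow map implementation of those two helpers looks roughly like this; gVSMMinVariance is the constant already used above, while the 0.3 light-bleeding cutoff is just an assumed tuning value:
[code]
// Standard VSM bound: P(occluder depth >= t) <= variance / (variance + (t - mean)^2)
float ChebyshevUpperBound(float2 Moments, float t, float MinVariance)
{
    float p = (t <= Moments.x);                     // fully lit if the receiver is closer than the mean
    float Variance = Moments.y - Moments.x * Moments.x;
    Variance = max(Variance, MinVariance);          // clamp to avoid numerical problems
    float d = t - Moments.x;
    float pMax = Variance / (Variance + d * d);
    return max(p, pMax);
}

// Light-bleeding reduction: cut off the low tail of pMax and rescale the rest.
float LBR(float pMax)
{
    const float Amount = 0.3f;                      // tweakable cutoff (assumed value)
    return clamp((pMax - Amount) / (1.0f - Amount), 0.0f, 1.0f);
}
[/code]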
  10. Are you really that surprised? I'm an EE PhD and I know that some of my E&M textbooks were >$200. Low print volume + large page count make these things expensive.
  11. OK, thanks a lot guys. Looks like my wallet is going to be $80 lighter ;-)
  12. Hey guys, does anyone have this book, and could they offer some insight into whether or not it is a good purchase? I'm looking for a good reference book for real-time graphics algorithms: http://www.amazon.com/Real-Time-Rendering-Third-Tomas-Akenine-Moller/dp/1568814240 Is there a better resource? Thanks, Jonathan
  13. MY QUESTION: WHAT IS THE BEST WAY TO IMPLEMENT PLAYER CONTROLS IN A PHYSICS ENGINE?
BACKGROUND: I'm putting together a quick XNA game. The physics engine is simply an RK4 integrator + bounding spheres with impulse-based collision response. I'm currently implementing the player controls and I am not entirely sure how to do it. As I see it there are two options:
1. Add the input as an impulse before integration:
   a) object.V += someImpulse
   b) RK4Integrate(F, V, etc.)
2. Add the input as a force:
   a) Attach the force to the object.
   b) RK4Integrate(F, V, etc.)
   When the player stops moving the object:
   a) Find the force that stops the object in Tstop time.
   b) Attach the force.
   c) RK4Integrate(F, V, etc.)
I can see issues with both approaches. Option 1 looks less natural (player motion is discontinuous), but it's easier for the player to control. With option 2, if the physics time step gets longer than Tstop there are potential stability issues, and you also need to deal with the fact that the player may never stop (i.e., it could oscillate around a point).
WHAT IS THE USUAL APPROACH?
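A common middle ground between the two options is to drive the body toward a desired velocity: each step, compute the force that would close the velocity gap over that step and clamp it by a maximum acceleration, then hand it to the integrator. A small C++-style sketch; RigidBody, AccumulatedForce and Length() are illustrative names, not from any engine mentioned above:
[code]
// Velocity-target player controller: apply a force that closes the gap to the
// desired velocity over one step, clamped by a maximum acceleration.
void ApplyPlayerControlForce(RigidBody& body, const vector3& desiredVel,
                             float maxAccel, float dt)
{
    vector3 dv    = desiredVel - body.V;           // velocity error
    vector3 accel = dv * (1.0f / dt);              // acceleration needed to close it this step
    float len = Length(accel);
    if (len > maxAccel)
        accel = accel * (maxAccel / len);          // clamp so the force stays bounded and stable
    body.AccumulatedForce += accel * body.mass;    // consumed by RK4Integrate(F, V, etc.)
}
[/code]
Because the applied force never exceeds mass * maxAccel and never asks for more velocity change than the remaining gap, a long time step can't overshoot into oscillation the way an exact "stop in Tstop" force can, and setting desiredVel to zero handles the stopping case through the same code path.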
  14. I am trying to implement an OBB bounding volume hierarchy as in S. Gottschalk et al., "OBBTree: A Hierarchical Structure for Rapid Interference Detection". I have successfully generated the OBB tree from a polygon soup, but I am having trouble testing OBB nodes for collision.
My first question:
1. My collision code is reporting incorrect collisions. I think the issue stems from the fact that in my OBB code each OBB node has a rotation matrix and translation vector that describe its orientation and position within the model frame, and then the model frame itself has a rotation and translation within the world frame. In Gottschalk's code, there is only one rotation and one translation for each OBB. Please let me know if any of the following is incorrect. My OBB tree is represented by something like this:
[code]
class obbox
{
    vector3   boxCenter;    // Box center in model frame
    vector3   boxDimension; // Box half lengths
    matrix3x3 orientMat;    // 3 columns are eigenvectors of covariance matrix
};
[/code]
But the OBB tree is part of a rigid body object that has its own rotation and translation:
[code]
class rbobject
{
    vector3   Trans; // Translation (model -> world)
    matrix3x3 Rot;   // Rotation (model -> world)
};
[/code]
I have been using the detection code in Christer Ericson's book, Real-Time Collision Detection (pages 103-105): http://books.google.com/books?id=WGpL6Sk9qNAC&lpg=PA101&ots=Pl4MjF4ciO&dq=Real-Time%20Collision%20Detection%20oriented%20bounding%20boxes&pg=PA101#v=onepage&q=Real-Time%20Collision%20Detection%20oriented%20bounding%20boxes&f=false
In this, he has one rotation matrix and one translation vector to describe each OBB element. My confusion stems from the fact that I have two rotations and two translations for each OBB node. Instead, I use the code below to find the R and T values used in Ericson's code:
[code]
matrix3x3 R_a = rbobject_a.Rot * obbox_a.orientMat;
matrix3x3 R_b = rbobject_b.Rot * obbox_b.orientMat;

// Matrix describing B in A's coordinate frame
// A^-1 * B = A' * B for orthonormal A
matrix3x3 R = Transpose(R_a) * R_b;

vector3 t_a_world = (rbobject_a.Rot * obbox_a.boxCenter) + rbobject_a.Trans;
vector3 t_b_world = (rbobject_b.Rot * obbox_b.boxCenter) + rbobject_b.Trans;
vector3 t_a_b_world = t_b_world - t_a_world;

// Bring the translation vector into A's coordinate frame
vector3 t = Transpose(R_a) * t_a_b_world;
[/code]
Then the rest of the code is the same as in the textbook. Is this correct? I can't see anything wrong with the matrix / vector manipulations, but I am getting incorrect collisions reported.
My second question:
2. Can you test for leaf node collisions by using an OBB of near-zero thickness that lies in the plane of the leaf triangle? I can't see any issue with this approach. Maybe the OBB collision test can be simplified if one of the lengths is known to be zero (as described in Gottschalk's thesis), but the detection code should still work. Right?
Thanks, Jonathan
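One way to check the composition independently of the SAT test itself is to take a test point from B's box frame into A's box frame two different ways (through world space, and via the composed R and t) and compare. A small sketch reusing the names above, assuming the column-vector convention p' = R*p + t:
[code]
// Debug check: the two routes below should agree to within floating point error.
vector3 p_in_b(0.1f, 0.2f, 0.3f);   // arbitrary test point in B's box frame

// Route 1: B box frame -> world
vector3 p_world = rbobject_b.Rot * (obbox_b.orientMat * p_in_b + obbox_b.boxCenter) + rbobject_b.Trans;

// Route 2: world -> A box frame, using the full per-object transforms
vector3 p_in_a_direct = Transpose(R_a) * (p_world - t_a_world);

// Route 3: B box frame -> A box frame, using the composed R and t
vector3 p_in_a_composed = R * p_in_b + t;

// If p_in_a_direct and p_in_a_composed disagree, the R/t composition is wrong;
// if they agree but collisions are still wrong, look for a row- vs column-vector
// (or transposed orientMat) convention mismatch elsewhere in the math library.
[/code]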