About jrh2365

  1. I would not expect you to be able to reproduce it because the problem seems to be specific to this one machine. I cannot reproduce it on my other machine. I was just using the alt key in notepad to demonstrate that the problem exists at a lower level than my (or MonoGame's) input handling. The problem occurs with every key.
  2. I have managed to reproduce it in notepad. It doesn't happen when typing (probably because notepad uses the repeat events, which do not seem to be firing?), but the problem is visible with the alt key. Key down on alt will display underlines under the shortcut keys for the menu items, and key up will then highlight the first menu item (File). When I quickly tap alt, the shortcut underlines are displayed, but the menu item is never highlighted.   The fact that there are no repeat events seems strange. Are those generated by Windows, or by the keyboard driver?
  3. I was doing some game development on a different machine than usual when I noticed that keys would occasionally stick after I released them, causing my character to keep running. At first I thought it was a bug in the new framework I was using, but I was unable to reproduce it on another machine, and I tracked the problem back to the Raw Input events that were being fired.   If I tap a key quickly, then I see the key down event fire, but the key up event isn't firing until the next time I press a key (it doesn't need to be the same key). This seems to be the case for every key on this device (E7440).   Any suggestions on how to address this? How common is this sort of low-level issue in the wild?
  4. Let me see if I can explain this better. I'm looking for approaches for controlling the movement of a player character (guided by the user) that do not use a physics engine (using the collision portion of Bullet is fine). So a non-physical or non-dynamic character controller, which would seem to be considered a kinematic character controller? (Though I'm not sure if the second approach below would be considered a kinematic character controller.)   1. The approach that I have attempted to implement in the past used only the collision portion of Bullet to perform sphere-casts against a triangle mesh representing the world. It calculates how far a sphere can move in a certain direction, and if that distance is less than the desired movement distance, it projects the movement vector onto the surface that was hit and performs another sphere-cast to allow for sliding collision. It might repeat that a few times depending on the geometry, and it also does another sphere-cast in the vertical direction to handle y-velocity and stepping down (to avoid floating/bouncing while moving down an incline). I believe that was also used to determine whether the player was standing on the ground. There were a number of issues that I ran into with this approach, including getting caught when sliding along certain geometry, jittering in obtuse corners, and bouncing along the edge between the ground and a surface that is considered too steep to walk on.   2. The other approach is one that I have not attempted to implement yet. It would involve using a navmesh similar to what would typically be used to handle NPCs, except for the player. This is slightly more limiting than #1 in that the player is constrained to the navmesh. Even though the player could jump, they could not, say, jump across the edges of the mesh (ex. over the edge of a staircase back to the floor). They basically move around within a triangulated 3D polygon. 
There would be no real collision detection between the player and the world geometry. One of my current uncertainties with this approach would be the handling of other obstacles (ex. enemies moving around).   So I'm wondering: Are the above approaches actually used? What other approaches exist? Are there any good resources on implementing these approaches robustly?     Rough sketch of #2 [attachment=24007:second.png]  
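To make the collide-and-slide loop in #1 concrete, here is a minimal sketch (Python for brevity; `cast` is a hypothetical stand-in for a Bullet sphere-cast that returns the fraction of the movement that can be completed plus the hit normal, or `None` if nothing was hit):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def slide(v, n):
    # Remove the component of v pointing into the surface: v' = v - (v.n)n
    d = dot(v, n)
    return tuple(vi - d * ni for vi, ni in zip(v, n))

def move(pos, v, cast, max_iters=3):
    # cast(pos, v) -> (fraction of v that can be travelled, hit normal or None)
    for _ in range(max_iters):
        t, n = cast(pos, v)
        pos = tuple(p + t * vi for p, vi in zip(pos, v))
        if n is None:
            break
        # Project the *remaining* movement onto the hit surface and try again.
        v = slide(tuple(vi * (1 - t) for vi in v), n)
    return pos
```

The `max_iters` cap is what keeps the loop from oscillating forever in acute corners, which is one of the failure modes described above.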
  5. I'm curious as to what sort of techniques exist for handling player movement in a 3D environment without the use of a physics engine.   One approach I've tried in the past was using sphere casting (performing multiple iterations to handle sliding), but it had a number of quirks that I couldn't easily resolve.   Another thought I had was to just move the player along a navmesh, as described in http://digestingduck.blogspot.com/2010/07/constrained-movement-along-navmesh-pt-3.html. The main thing I'm still unsure about is how to handle dynamic obstacles (ex. enemies). And most of the navmesh resources I've found so far pertain to the implementations in Unity, Unreal, or Source, rather than how to actually implement it.   Any thoughts on the above or alternative techniques? Or good resources on implementing them?   Also, are there any commercial games using similar approaches? From the small amount of it that I've played, I think Kingdoms of Amalur: Reckoning might be using an approach similar to the second one I described, but I'm not sure. 
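A rough sketch of the navmesh-constrained idea, reduced to a single 2D triangle with the position clamped at the boundary (a real implementation would instead walk across shared edges into the neighboring polygon, as in the linked article):

```python
def sign(p, a, b):
    # 2D cross product: which side of edge a->b the point p lies on.
    return (p[0] - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (p[1] - b[1])

def inside(p, tri):
    # p is inside (or on the boundary) if the edge tests don't have mixed signs.
    d = [sign(p, tri[i], tri[(i + 1) % 3]) for i in range(3)]
    return not (any(x < 0 for x in d) and any(x > 0 for x in d))

def constrained_move(pos, target, tri):
    # pos is assumed to start inside tri. If target leaves the triangle,
    # binary-search for the largest step along pos->target that stays inside.
    if inside(target, tri):
        return target
    lo, hi = 0.0, 1.0
    for _ in range(20):
        mid = (lo + hi) / 2
        p = tuple(a + mid * (b - a) for a, b in zip(pos, target))
        if inside(p, tri):
            lo = mid
        else:
            hi = mid
    return tuple(a + lo * (b - a) for a, b in zip(pos, target))
```

This only illustrates the "player can never leave the mesh" property; it says nothing about dynamic obstacles, which is exactly the open question above.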
  6. Unity Reflection problem in C#

    You could use generic methods for arithmetic operators such as those provided by MiscUtil (http://www.yoda.arachsys.com/csharp/miscutil/usage/genericoperators.html), invoking, for example, Multiply<T>(T, T) through reflection.   However, that seems a bit overkill. Is there any reason you can't just cast the stat value from object to float when you're performing the arithmetic?
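As an analogy only (the question is C#/Unity, so this is just to illustrate the idea, not the actual MiscUtil API): looking an operator up by name at runtime, versus simply converting the value first:

```python
import operator

def apply_stat(stat_value, factor, op_name="mul"):
    # Look the operator up by name, analogous to invoking a generic
    # Multiply<T>(T, T) through reflection.
    op = getattr(operator, op_name)
    # The simpler fix from the post: just convert the stored value to float.
    return op(float(stat_value), factor)
```

In most cases the plain conversion/cast is all that's needed; the reflection route only pays off when the operand type genuinely isn't known until runtime.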
  7. V-sync and crispy textures

    Wouldn't grainy / crispy textures indicate a lack of mipmaps? Or maybe incorrect texture sampler settings?   [Edit] Oops, Voidmancer beat me to it.
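For reference, a full mip chain halves the texture until 1x1, so it has floor(log2(max(w, h))) + 1 levels; minified/distant surfaces sample the smaller levels, which is what suppresses the grainy shimmer. A quick sketch of the count:

```python
import math

def mip_count(width, height):
    # Full mip chain: halve until 1x1, e.g. 256x256 -> 9 levels (256..1).
    return int(math.log2(max(width, height))) + 1
```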
  8. DX11 Terrain rendering problem

    Also, before that line there's a "Access violation reading location 0xFEEEFEEE", which would mean that something is referencing memory that has already been freed.   http://en.wikipedia.org/wiki/Magic_number_(programming)#Magic_debug_values
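The linked page lists several such fill patterns; a small lookup for the common MSVC/Windows ones (values as documented there) shows how recognizable these addresses are in a crash:

```python
# Well-known debug fill patterns on Windows / MSVC debug builds.
MAGIC_DEBUG_VALUES = {
    0xFEEEFEEE: "freed heap memory (Windows HeapFree)",
    0xCDCDCDCD: "uninitialized heap memory (MSVC debug CRT)",
    0xCCCCCCCC: "uninitialized stack memory (MSVC /RTC)",
    0xDDDDDDDD: "freed heap memory (MSVC debug CRT)",
}

def diagnose(address):
    return MAGIC_DEBUG_VALUES.get(address, "no known debug pattern")
```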
  9. DX11 C++ DX API, help me get it?

    Allow me to pick off part of #6:   D3D11CreateDeviceAndSwapChain has two parameters that accept pointers to D3D_FEATURE_LEVEL. The first one is a pointer because it is actually looking for an array of D3D_FEATURE_LEVEL (and the parameter following that one is the number of elements in the array). The second one is a pointer because it is an output parameter to where the feature level that was actually selected can be stored.   http://msdn.microsoft.com/en-us/library/windows/desktop/ff476083(v=vs.85).aspx   [EDIT] Also, I expect that a lot of the reasons for the API being structured how it is are due to http://en.wikipedia.org/wiki/Component_Object_Model
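The in-array / out-value convention can be mimicked in Python to make the semantics concrete (a sketch of the selection logic only, not the actual D3D11 API):

```python
def create_device(requested_levels, supported):
    # Mirrors the D3D semantics: try the requested feature levels in array
    # order and report which one was actually selected (in C++, that result
    # is written through the second D3D_FEATURE_LEVEL* out parameter).
    for level in requested_levels:
        if level in supported:
            return level
    raise RuntimeError("no requested feature level is supported")
```

This is also why the order of the array matters: the first supported entry wins.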
  10. Direct x failing

    Is the issue resolved? Your code runs fine here. If not, which OS are you on?
  11. Direct x failing

    Have you tried stepping through with the debugger to ensure that things are behaving as expected? (ex. hwnd is not null, width and height have the expected values, etc.) Is it still failing at the same spot?   Also, you have this: result = g_pSwapChain->GetBuffer(0, __uuidof(ID3D10Texture2D), (LPVOID*)&p_RT); I think you want ID3D11Texture2D there, not ID3D10Texture2D.
  12. Direct x failing

    I believe the problem is that when you are calculating the height, you should be doing (bottom - top), not (top - bottom).   Also, two other things: You are passing in 3 as the feature level count to D3D11CreateDeviceAndSwapChain, meaning it will only look at the first 3 elements of your feature level array. Your feature level array is backwards. It will attempt to use feature level 9.1 first, then 9.2, etc.
  13. Thanks, pulling the saturate calls out and using another effect parameter works fine.
  14. DX11 I can't see anything?

    In your input layout, you specified DXGI_FORMAT_R32G32B32A32_FLOAT for position, but your triangle has 3 floats for position, not 4. Try changing it to DXGI_FORMAT_R32G32B32_FLOAT instead.
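The mismatch matters because the format determines how many bytes the input assembler reads per element, so an R32G32B32A32 position over float3 data misaligns every following vertex. A quick sketch of the stride arithmetic (sizes per the DXGI_FORMAT definitions):

```python
# Bytes per element for a few common DXGI formats.
FORMAT_SIZES = {
    "DXGI_FORMAT_R32G32B32A32_FLOAT": 16,  # 4 floats
    "DXGI_FORMAT_R32G32B32_FLOAT": 12,     # 3 floats
    "DXGI_FORMAT_R32G32_FLOAT": 8,         # 2 floats
}

def vertex_stride(formats):
    # Sum the element sizes to get the byte stride the input layout implies.
    return sum(FORMAT_SIZES[f] for f in formats)
```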
  15. I have an effect that I'm using in XNA that fails to compile in release mode, but works fine in debug mode. I was able to reproduce the behavior using the version of fxc included in the March 2009 DirectX SDK. Compilation fails unless preshaders are disabled (/Op).   This is the output, which isn't particularly helpful:

error : (202,17): ID3DXEffectCompiler::CompileEffect: There was an error compiling expression
error : ID3DXEffectCompiler: Compilation failed

The problem seems related to the if/elseif/else block at the top of TransformPixel().   Any suggestions? XNA doesn't seem to have a way to disable preshaders but keep other optimizations. It also applies some post-processing to the compiled effect code. I was thinking I could replace XNA's EffectProcessor with one that uses a compiler from a more recent DirectX SDK, and then call XNA's post processor through reflection. I did a quick test of this and was able to load an effect compiled using the June 2010 SDK, but I'm not sure if there are cases where the output isn't compatible.
Here is the effect code, from http://animationcomponents.codeplex.com/

float4x4 World;
float4x4 View;
float4x4 Projection;
float3 DiffuseColor;
float3 SpecularColor;
float3 AmbientLightColor = float3(0,0,0);
float3 EmissiveColor;
float3 EyePosition;
float3 FogColor;
bool FogEnable;
float FogStart;
float FogEnd;
bool DirLight0Enable;
bool DirLight1Enable;
extern bool DirLight2Enable;
float3 DirLight0Direction;
float3 DirLight1Direction;
float3 DirLight2Direction;
float3 DirLight0DiffuseColor;
float3 DirLight1DiffuseColor;
float3 DirLight2DiffuseColor;
float3 DirLight0SpecularColor;
float3 DirLight1SpecularColor;
float3 DirLight2SpecularColor;
float Alpha;
float4x4 MatrixPalette[56];
float SpecularPower;
bool TextureEnabled;
bool LightingEnable = false;

texture BasicTexture;
sampler TextureSampler = sampler_state { Texture = (BasicTexture); };

struct VS_INPUT
{
    float4 position : POSITION;
    float4 color : COLOR;
    float2 texcoord : TEXCOORD0;
    float3 normal : NORMAL0;
    half4 indices : BLENDINDICES0;
    float4 weights : BLENDWEIGHT0;
};

struct VS_OUTPUT
{
    float4 position : POSITION;
    float4 color : COLOR;
    float2 texcoord : TEXCOORD0;
    float distance : TEXCOORD1;
};

struct PS_INPUT
{
    float4 color : COLOR;
    float2 texcoord : TEXCOORD0;
    float distance : TEXCOORD1;
};

struct PS_OUTPUT
{
    float4 color : COLOR;
};

struct SKIN_OUTPUT
{
    float4 position;
    float4 normal;
};

SKIN_OUTPUT Skin4(const VS_INPUT input)
{
    SKIN_OUTPUT output = (SKIN_OUTPUT)0;
    float lastWeight = 1.0;
    float weight = 0;
    for (int i = 0; i < 3; ++i)
    {
        weight = input.weights[i];
        lastWeight -= weight;
        output.position += mul(input.position, MatrixPalette[input.indices[i]]) * weight;
        output.normal += mul(input.normal, MatrixPalette[input.indices[i]]) * weight;
    }
    output.position += mul(input.position, MatrixPalette[input.indices[3]]) * lastWeight;
    output.normal += mul(input.normal, MatrixPalette[input.indices[3]]) * lastWeight;
    return output;
};

void TransformVertex(in VS_INPUT input, out VS_OUTPUT output)
{
    float3 inputN = normalize(input.normal);
    SKIN_OUTPUT skin = Skin4(input);
    output.position = skin.position;
    float3 normal = skin.normal;
    normal = normalize(mul(normal, World));
    float3 totalDiffuse = DiffuseColor *
        ((DirLight0Enable ? dot(-DirLight0Direction, normal) * DirLight0DiffuseColor : 0) +
         (DirLight1Enable ? dot(-DirLight1Direction, normal) * DirLight1DiffuseColor : 0) +
         (DirLight2Enable ? dot(-DirLight2Direction, normal) * DirLight2DiffuseColor : 0));
    float3 viewDirection = normalize(EyePosition - mul(output.position, World));
    float3 spec0, spec1, spec2;
    if (DirLight0Enable)
    {
        float val = dot(-DirLight0Direction, normal);
        if (val < 0)
        {
            spec0 = float3(0,0,0);
        }
        else
        {
            spec0 = DirLight0SpecularColor * (pow(val * dot(reflect(DirLight0Direction, normal), viewDirection), SpecularPower));
        }
    }
    else
        spec0 = float3(0,0,0);
    if (DirLight1Enable)
    {
        float val = dot(-DirLight1Direction, normal);
        if (val < 0)
        {
            spec1 = float3(0,0,0);
        }
        else
        {
            spec1 = DirLight1SpecularColor * (pow(val * dot(reflect(DirLight1Direction, normal), viewDirection), SpecularPower));
        }
    }
    else
        spec1 = float3(0,0,0);
    if (DirLight2Enable)
    {
        float val = dot(-DirLight2Direction, normal);
        if (val < 0)
        {
            spec2 = float3(0,0,0);
        }
        else
        {
            spec2 = DirLight2SpecularColor * (pow(val * dot(reflect(DirLight2Direction, normal), viewDirection), SpecularPower));
        }
    }
    else
        spec2 = float3(0,0,0);
    float3 totalSpecular = SpecularColor * (spec0 + spec1 + spec2);
    output.color.xyz = saturate(AmbientLightColor + totalDiffuse + totalSpecular);
    output.color.w = 1.0;
    output.texcoord = input.texcoord;
    output.position = mul(output.position, World);
    output.distance = distance(EyePosition, output.position.xyz);
    output.position = mul(output.position, mul(View, Projection));
}

void TransformPixel(in PS_INPUT input, out PS_OUTPUT output)
{
    if (LightingEnable == false && TextureEnabled)
    {
        output.color.xyz = tex2D(TextureSampler, input.texcoord).xyz * saturate(EmissiveColor + DiffuseColor);
    }
    else if (LightingEnable == false)
    {
        output.color.xyz = saturate(EmissiveColor + DiffuseColor);
    }
    else
    {
        output.color.xyz = TextureEnabled ? tex2D(TextureSampler, input.texcoord).xyz * input.color.xyz : input.color.xyz;
    }
    output.color.w = TextureEnabled ? tex2D(TextureSampler, input.texcoord).w * Alpha : Alpha;
    if (FogEnable)
    {
        float dist = (input.distance - FogStart) / (FogEnd - FogStart);
        dist = saturate(dist);
        float3 distv = float3(dist, dist, dist);
        distv = lerp(output.color.xyz, FogColor, distv);
        output.color.xyz = distv;
    }
}

technique TransformTechnique
{
    pass P0
    {
        VertexShader = compile vs_2_0 TransformVertex();
        PixelShader = compile ps_2_0 TransformPixel();
    }
}
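For reference, the fog block at the end of TransformPixel is standard linear fog; the same math in Python (only an illustration of the shader's arithmetic, not shader code):

```python
def apply_fog(color, fog_color, dist, fog_start, fog_end):
    # t = saturate((d - start) / (end - start)), then lerp(color, fog_color, t),
    # matching the shader's saturate + lerp sequence.
    t = (dist - fog_start) / (fog_end - fog_start)
    t = max(0.0, min(1.0, t))
    return tuple(c + (f - c) * t for c, f in zip(color, fog_color))
```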