
Community Reputation: 146 Neutral

About ShawMishrak

  1. Direct3D 10 Debug Output

    Interesting, the ID3D10Buffer allocation did print an INFO message in the Output window. I guess that implies my D3D code is right (at least legal), so there must be another reason for the resize problem. Is there a way to turn on memory validation to detect when resources are not released at the time of device destruction, like in Direct3D 9 debug output? That's what originally made me question whether the debug output was working, since I purposely tried to not release any D3D resources to see what the output would be. I'm currently getting nothing about un-released resources.
  2. Is there a trick to getting the Direct3D 10 debug runtime to write debug information to the Visual Studio Output window? Or does the Direct3D 10 debug output just not say much? I'm creating the ID3D10Device instance with the D3D10_CREATE_DEVICE_DEBUG flag, and I've tried both "Application Controlled" and "Forced On" in the DX Control Panel. I have all of the "mute" options unchecked as well. Yet, I get no debug output in Visual Studio. Even if I never release any resources, I still get no debug output. I'm having problems with a swap chain resize event, and I would like the debug runtime to give me some indication as to what the problem is. I tried to use the ID3D10InfoQueue interface to write to the debug log (AddApplicationMessage), and that appears fine in the Output window. Does the Direct3D 10 debug output just not do as good a job as the Direct3D 9 debug output?
  3. To get this to work with the basic new/delete keywords, add this after your #includes:

    #define DEBUG_NEW new(_NORMAL_BLOCK, __FILE__, __LINE__)
    #define new DEBUG_NEW

    This will override the default new keyword and allow the debug memory heap tracking functionality to work. [EDIT: You beat me to it!]
  4. Ageia chip without the sdk?

    Wow. It seems like all of the big-name physics middleware providers are being bought up.
  5. Quite frankly, I'm tired of hearing it from both sides. If I want to use C++, I'll use C++. If I want to use C#, I'll use C#. Just because someone prefers C# doesn't make C++ any less viable of a language.
  6. wxWidgets or C#

    If the tools will be calling into a large majority of your engine code, then I would definitely recommend C++/wxWidgets. wxWidgets + DialogBlocks is pretty easy to use and you don't have to worry about passing data between native and managed code.
  7. Also keep in mind the language differences: XNA is managed-only on Xbox. (No VMX access!) PS3 is native, with C/C++/assembly.
  8. Problems with Z-Buffer and Depth pass

    To get around the z-fighting issues, you can set a depth bias, in GraphicsDevice.RenderState.DepthBias.
  9. Problems with Z-Buffer and Depth pass

    #define D3DCLEAR_TARGET  0x00000001l /* Clear target surface */
    #define D3DCLEAR_ZBUFFER 0x00000002l /* Clear target z buffer */
    #define D3DCLEAR_STENCIL 0x00000004l /* Clear stencil planes */

    So yes, 0x00000003 would be D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER. I guess what you would have to do is bind a dummy depth buffer along with the final render target, clear them, *then* assign the real depth buffer to the device. I'm not sure if a Clear happens when you bind a new depth surface, but it's worth a try.
  10. Quote: Original post by daviangel
    I thought one of the main selling points for XNA 2.0 was VS 2005/VS 2008 support finally?

    VS 2005 integration, yes. VS 2008 integration, no.
  11. GLSL: glGetUniformLocationARB

    Thanks for the suggestion, but unfortunately glGetError() returns 0 everywhere.
  12. Here is a problem that's been bothering me for a while now. I compile/link some GLSL shaders, then use glGetUniformLocationARB to get access to the uniform variables (constants, textures, etc.). When I originally wrote the code a couple of years ago, I was using a Radeon 9800 Pro card, and everything worked fine. Lately, I dug the code back out and tried it on my new 8800 GTX, and some of the glGetUniformLocationARB calls are now failing. First off, the failing variables cannot be optimized out of the shaders. Second, and this is the part that really gets me, not all variables for a given shader will fail here. For instance, one of my shaders has the following fragment program:

    uniform sampler2D bumpMap;
    uniform samplerCube cubeMap;
    uniform sampler2D baseMap;
    uniform float kAmbient;
    uniform float kDiffuse;
    uniform float kSpecular;
    uniform float kSpecPower;
    uniform float kEnvMap;
    uniform vec4 specularColor;

    varying vec3 fragLightDir;
    varying vec3 fragViewDir;
    varying vec3 norm;
    varying vec3 viewVec;
    varying vec3 debugNormal;
    varying vec3 debugTangent;
    varying vec3 debugBinormal;

    void main(void)
    {
        // Get base diffuse color
        vec4 diffuseColor = texture2D(baseMap, gl_TexCoord[0].st);
        vec4 bumpDiffuse, envMapDiffuse, bumpSpec;

        // Normalize light/eye vectors, which are in tangent space
        vec3 normFragLightDir = normalize(fragLightDir);
        vec3 normFragViewDir = normalize(fragViewDir);

        // Get tangent space normal, mapping it from [0,1] to [-1,1]
        vec3 normal = normalize((texture2D(bumpMap, gl_TexCoord[0].st).xyz * 2.0) - 1.0);

        // Perform n DOT l calculation
        float nDotL = dot(normal, normFragLightDir);

        // Calculate reflection vector for specular highlights
        vec3 reflectionSpec = normalize(((2.0 * normal) * nDotL) - normFragLightDir);
        float rDotV = max(dot(reflectionSpec, normFragViewDir), 0.0);

        // Determine diffuse/specular color
        bumpDiffuse = nDotL * kDiffuse * diffuseColor;
        bumpSpec = kSpecular * pow(rDotV, kSpecPower) * specularColor;

        // Normalize the object space normal and view vector
        vec3 normViewVec = normalize(viewVec);
        vec3 envNormal = normalize(norm);

        // Reflect view vector around the normal
        vec3 reflVec = reflect(normViewVec, envNormal);

        // Get environment map contribution for this point
        envMapDiffuse = kEnvMap * textureCube(cubeMap, -normalize(reflVec).xyz) * diffuseColor;

        // Get final diffuse color
        vec4 finalDiffuse;
        if (kEnvMap < 0.1)
            finalDiffuse = bumpDiffuse;
        else if (kDiffuse < 0.1)
            finalDiffuse = envMapDiffuse;
        else
            finalDiffuse = mix(bumpDiffuse, envMapDiffuse, 0.5);

        // Get ambient/specular colors
        vec4 finalAmbient = kAmbient * diffuseColor;
        vec4 finalSpecular = bumpSpec;

        // Sum 'em all up
        gl_FragColor = finalDiffuse + finalAmbient + finalSpecular;
        //gl_FragColor = vec4(normalize(debugBinormal), 1.0);
    }

    Clearly, the baseMap uniform is used in a texture lookup, which then goes on to determine the final fragment color. Yet, when I use glGetUniformLocationARB to get handles to the uniforms, the call fails for the baseMap variable. All of the other calls to glGetUniformLocationARB for the other uniforms succeed. I use glGetObjectParameterivARB (with GL_COMPILE_STATUS, GL_LINK_STATUS, and GL_VALIDATE_STATUS, where appropriate) to check for errors, and nothing comes back as an error. What would cause this? I don't see it being a shader compilation error, since I can successfully query for the other uniforms. I have several programs (old school assignments) that exhibit this behavior. Sometimes it's a texture uniform, other times it's a float uniform.
  13. In a way. Depending on the conditions, you're probably breaking the pipeline anyway just by having the function call, unless it's inlined or the processor is really good at predicting jumps. With function pointers, the usual big loss is that the call requires an additional memory load (the pointer itself), which can miss in the cache, plus an indirect branch that is harder to predict.
  14. Has anyone else experienced exceptionally slow floating-point performance on Xbox with XNA? I was trying to run some of my physics code on Xbox for the first time, and found the performance to be poor, to say the least (>60 FPS on Windows down to 0.5 FPS on Xbox). To take the garbage collector overhead out of the equation (since I know there are problems with my code there), I tried running a benchmark of 10 million matrix multiplies (XNA matrices) on Windows and Xbox, and I was getting over a 50% performance drop on Xbox compared to my three-year-old P4 3.0GHz desktop with a DVD movie playing in the background. I used the 'ref/out' version of the matrix multiply routine to prevent excessive copying on the Compact Framework. I know that the Compact Framework on Xbox is still fairly unoptimized for floating-point operations and that it does not take advantage of AltiVec yet, but I'm still surprised that the performance drop is so severe. Even a custom-written matrix multiplication routine on XNA matrices (to take out the possibility of XNA on Windows using SSE) runs significantly faster on my P4 than the same code on Xbox. Is this a common problem, or am I missing something here? Thanks.
  15. PhysX - Running Without System Software Install?

    Quote: Original post by ShotgunNinja
    How the hell (excuse my French) did you get an Ageia PhysX card into a SCHOOL computer? (By the way, I'm at school too, lol) Or *DID* you install the card? (You know that you have to do that first, don't you?) Anyway, the PhysX program and DLLs actively tell the processor to shift all physics-related functions (it uses the "compatible function" list of the Havok Engine) to the port of the PhysX card, instead of the core processor(s)/graphics processor(s). The PhysX program, without the card, is referencing a port which doesn't exist, or at least scanning for a port which doesn't exist. So it is effectively useless. I hate to be rude, because I'm relying on my school computer to develop as well, but do you even have a home computer? (If you don't, you're in the same boat as me.) You should be developing on that if possible.

    No, this has nothing to do with installing the PhysX hardware processor. I want it to run in software mode, as a physics engine like Newton or ODE, which it is perfectly capable of doing. I have a home computer and I do most of my development on it, but it would be nice to be able to do stuff at school as well. It gives me something to do in my sometimes 2+ hours between classes. If I cannot get it to work, no big deal. Compiling is not a problem; I can do that easily by copying the .lib/.h files. It's just getting the PhysXLoader to find the actual PhysX binary that is the problem. I was hoping that just having the PhysXCore.dll file in the same folder as the executable and PhysXLoader.dll files would do the trick, but I guess not... Thanks for the help!