ShawMishrak

Members
  • Content count: 95
  • Joined

  • Last visited

Community Reputation

146 Neutral

About ShawMishrak

  • Rank: Member
  1. Interesting, the ID3D10Buffer allocation did print an INFO message in the Output window. I guess that implies my D3D code is right (or at least legal), so there must be another reason for the resize problem. Is there a way to turn on memory validation to detect resources that are not released at device destruction, like the Direct3D 9 debug output does? That's what originally made me question whether the debug output was working, since I purposely avoided releasing any D3D resources to see what the output would be. I'm currently getting nothing about unreleased resources.
  2. Is there a trick to getting the Direct3D 10 debug runtime to write debug information to the Visual Studio Output window? Or does the Direct3D 10 debug output just not say much? I'm creating the ID3D10Device instance with the D3D10_CREATE_DEVICE_DEBUG flag, and I've tried both "Application Controlled" and "Forced On" in the DX Control Panel. I have all of the "mute" options unchecked as well. Yet, I get no debug output in Visual Studio. Even if I never release any resources, I still get no debug output. I'm having problems with a swap chain resize event, and I would like the debug runtime to give me some indication as to what the problem is. I tried to use the ID3D10InfoQueue interface to write to the debug log (AddApplicationMessage) and that appears fine in the Output window. Does the Direct3D 10 debug output just not do as good of a job as the Direct3D 9 debug output?
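     In case it helps anyone hitting the same wall, the workaround I've been experimenting with is to pull the stored messages out of the ID3D10InfoQueue by hand instead of relying on the Output window. This is only a sketch (error handling trimmed; the device-creation parameters are placeholders for whatever your app actually uses):

         #include <windows.h>
         #include <d3d10.h>
         #include <d3d10sdklayers.h>  // ID3D10InfoQueue
         #ifdef GetMessage
         #undef GetMessage  // winuser.h macro collides with ID3D10InfoQueue::GetMessage
         #endif
         #include <cstdio>
         #include <cstdlib>

         // Create the device with the debug layer enabled.
         ID3D10Device* CreateDebugDevice()
         {
             ID3D10Device* device = NULL;
             D3D10CreateDevice(NULL, D3D10_DRIVER_TYPE_HARDWARE, NULL,
                               D3D10_CREATE_DEVICE_DEBUG,
                               D3D10_SDK_VERSION, &device);
             return device;
         }

         // Drain whatever the debug layer has queued (call e.g. once per frame).
         void DumpInfoQueue(ID3D10Device* device)
         {
             ID3D10InfoQueue* queue = NULL;
             if (FAILED(device->QueryInterface(__uuidof(ID3D10InfoQueue), (void**)&queue)))
                 return;

             const UINT64 count = queue->GetNumStoredMessages();
             for (UINT64 i = 0; i < count; ++i)
             {
                 SIZE_T length = 0;
                 queue->GetMessage(i, NULL, &length);   // query the required size first
                 D3D10_MESSAGE* msg = (D3D10_MESSAGE*)malloc(length);
                 if (msg && SUCCEEDED(queue->GetMessage(i, msg, &length)))
                     printf("D3D10: %.*s\n", (int)msg->DescriptionByteLength, msg->pDescription);
                 free(msg);
             }
             queue->ClearStoredMessages();
             queue->Release();
         }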
  3. To get this to work with the basic new/delete keywords, add this after your #includes:

         #define DEBUG_NEW new(_NORMAL_BLOCK, __FILE__, __LINE__)
         #define new DEBUG_NEW

     This will override the default new keyword and allow the debug memory heap tracking functionality to work. [EDIT: You beat me to it!]
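     If you want a self-contained version of this outside of MFC, the CRT's debug heap can be driven directly. A minimal sketch (MSVC debug builds only; treat the exact flag set as a starting point, not gospel):

         // Minimal CRT leak-detection sketch for MSVC debug builds.
         #define _CRTDBG_MAP_ALLOC
         #include <cstdlib>
         #include <crtdbg.h>

         // Route file/line info through the debug operator new so that the
         // leak report points at the allocating line.
         #define DEBUG_NEW new(_NORMAL_BLOCK, __FILE__, __LINE__)
         #define new DEBUG_NEW

         int main()
         {
             // Ask the CRT to dump any blocks still allocated at process exit.
             _CrtSetDbgFlag(_CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF);

             int* leaked = new int[16];  // deliberately never deleted
             (void)leaked;
             return 0;  // leak report appears in the VS Output window with file/line
         }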
  4. Wow. It seems like all of the big-name physics middleware providers are being bought up.
  5. Quite frankly, I'm tired of hearing it from both sides. If I want to use C++, I'll use C++. If I want to use C#, I'll use C#. Just because someone prefers C# doesn't make C++ any less viable of a language.
  6. If the tools will be calling into a large majority of your engine code, then I would definitely recommend C++/wxWidgets. wxWidgets + DialogBlocks is pretty easy to use and you don't have to worry about passing data between native and managed code.
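     For what it's worth, the skeleton of a wxWidgets tool is tiny; something like this (written from memory, so double-check it against the wx docs for your version):

         // Bare-bones wxWidgets shell for a native engine tool (sketch).
         #include <wx/wx.h>

         class ToolFrame : public wxFrame
         {
         public:
             ToolFrame() : wxFrame(NULL, wxID_ANY, wxT("Engine Tool")) {}
         };

         class ToolApp : public wxApp
         {
         public:
             virtual bool OnInit()
             {
                 // Engine init can happen right here; everything stays native
                 // C++, so no managed/native marshaling is needed.
                 ToolFrame* frame = new ToolFrame();
                 frame->Show(true);
                 return true;
             }
         };

         IMPLEMENT_APP(ToolApp)  // expands to the platform entry point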
  7. Also keep in mind the language differences: XNA is managed-only on Xbox. (No VMX access!) PS3 is native, with C/C++/assembly.
  8. To get around the z-fighting issues, you can set a depth bias via GraphicsDevice.RenderState.DepthBias.
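     For reference, if anyone needs the same fix in native Direct3D 9 (which is what XNA's RenderState maps to on Windows, as far as I know), it's roughly this sketch:

         #include <d3d9.h>

         // Apply a small depth bias to reduce z-fighting on coplanar geometry.
         // D3DRS_DEPTHBIAS expects the float's bit pattern passed as a DWORD;
         // the magnitude/sign is scene- and hardware-dependent, so tune it.
         void ApplyDepthBias(IDirect3DDevice9* device, float bias)
         {
             device->SetRenderState(D3DRS_DEPTHBIAS,
                                    *reinterpret_cast<DWORD*>(&bias));
         }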
  9. From the DirectX headers:

         #define D3DCLEAR_TARGET   0x00000001l  /* Clear target surface */
         #define D3DCLEAR_ZBUFFER  0x00000002l  /* Clear target z buffer */
         #define D3DCLEAR_STENCIL  0x00000004l  /* Clear stencil planes */

     So yes, 0x00000003 would be D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER. I guess what you would have to do is bind a dummy depth buffer along with the final render target, do the Clear, *then* assign the real depth buffer to the device. I'm not sure if a Clear happens when you bind a new depth surface, but it's worth a try.
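     In code, the combined clear is just this (minimal sketch, assuming 'device' is a valid IDirect3DDevice9 with the intended render target and depth buffer bound):

         #include <d3d9.h>

         // Clear the color target and z-buffer in one call.
         void ClearTargetAndDepth(IDirect3DDevice9* device)
         {
             device->Clear(0, NULL,
                           D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER,  // == 0x00000003
                           D3DCOLOR_XRGB(0, 0, 0),              // clear color
                           1.0f,                                // depth clear value
                           0);                                  // stencil (unused here)
         }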
  10. Quote: Original post by daviangel
      "I thought one of the main selling points for XNA 2.0 was VS 2005/VS 2008 support, finally?"

      VS 2005 integration, yes. VS 2008 integration, no.
  11. Thanks for the suggestion, but unfortunately glGetError() returns 0 everywhere.
  12. Here is a problem that's been bothering me for a while now. I compile/link some GLSL shaders, then use glGetUniformLocationARB to get access to the uniform variables (constants, textures, etc.). When I originally wrote the code a couple of years ago, I was using a Radeon 9800 Pro card, and everything worked fine. Lately, I dug the code back out and tried it on my new 8800 GTX, and some of the glGetUniformLocationARB calls are now failing. First off, the failing variables cannot have been optimized out of the shaders. Second, the part that really gets me is that not all variables for a given shader fail. For instance, one of my shaders has the following fragment program:

          uniform sampler2D bumpMap;
          uniform samplerCube cubeMap;
          uniform sampler2D baseMap;
          uniform float kAmbient;
          uniform float kDiffuse;
          uniform float kSpecular;
          uniform float kSpecPower;
          uniform float kEnvMap;
          uniform vec4 specularColor;

          varying vec3 fragLightDir;
          varying vec3 fragViewDir;
          varying vec3 norm;
          varying vec3 viewVec;
          varying vec3 debugNormal;
          varying vec3 debugTangent;
          varying vec3 debugBinormal;

          void main(void)
          {
              // Get base diffuse color
              vec4 diffuseColor = texture2D(baseMap, gl_TexCoord[0].st);
              vec4 bumpDiffuse, envMapDiffuse, bumpSpec;

              // Normalize light/eye vectors, which are in tangent space
              vec3 normFragLightDir = normalize(fragLightDir);
              vec3 normFragViewDir = normalize(fragViewDir);

              // Get tangent space normal, mapping it from [0,1] to [-1,1]
              vec3 normal = normalize((texture2D(bumpMap, gl_TexCoord[0].st).xyz * 2.0) - 1.0);

              // Perform n DOT l calculation
              float nDotL = dot(normal, normFragLightDir);

              // Calculate reflection vector for specular highlights
              vec3 reflectionSpec = normalize(((2.0 * normal) * nDotL) - normFragLightDir);
              float rDotV = max(dot(reflectionSpec, normFragViewDir), 0.0);

              // Determine diffuse/specular color
              bumpDiffuse = nDotL * kDiffuse * diffuseColor;
              bumpSpec = kSpecular * pow(rDotV, kSpecPower) * specularColor;

              // Normalize the object space normal and view vector
              vec3 normViewVec = normalize(viewVec);
              vec3 envNormal = normalize(norm);

              // Reflect view vector around the normal
              vec3 reflVec = reflect(normViewVec, envNormal);

              // Get environment map contribution for this point
              envMapDiffuse = kEnvMap * textureCube(cubeMap, -normalize(reflVec).xyz) * diffuseColor;

              // Get final diffuse color
              vec4 finalDiffuse;
              if(kEnvMap < 0.1)
                  finalDiffuse = bumpDiffuse;
              else if(kDiffuse < 0.1)
                  finalDiffuse = envMapDiffuse;
              else
                  finalDiffuse = mix(bumpDiffuse, envMapDiffuse, 0.5);

              // Get ambient/specular colors
              vec4 finalAmbient = kAmbient * diffuseColor;
              vec4 finalSpecular = bumpSpec;

              // Sum 'em all up
              gl_FragColor = finalDiffuse + finalAmbient + finalSpecular;
              //gl_FragColor = vec4(normalize(debugBinormal), 1.0);
          }

      Clearly, the baseMap uniform is used in a texture lookup, which then goes on to determine the final fragment color. Yet, when I use glGetUniformLocationARB to get handles to the uniforms, the call fails for the baseMap variable, while all of the other glGetUniformLocationARB calls succeed. I use glGetObjectParameterivARB (with GL_COMPILE_STATUS, GL_LINK_STATUS, and GL_VALIDATE_STATUS, where appropriate) to check for errors, and nothing comes back as an error. What would cause this? I don't see it being a shader compilation error, since I can successfully query the other uniforms. I have several programs (old school assignments) that exhibit this behavior. Sometimes it's a texture uniform, other times it's a float uniform.
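      One diagnostic worth running is to ask the driver what it actually linked and compare that against the uniforms that fail. A sketch (assumes the ARB_shader_objects entry points are already loaded, e.g. via GLEW or wglGetProcAddress, and that 'program' is the linked program handle):

          // Dump every active uniform the linker kept, with its reported location.
          #include <cstdio>
          #include <vector>
          #include <GL/glew.h>  // or however you load the ARB entry points

          void DumpActiveUniforms(GLhandleARB program)
          {
              GLint count = 0, maxLen = 0;
              glGetObjectParameterivARB(program, GL_OBJECT_ACTIVE_UNIFORMS_ARB, &count);
              glGetObjectParameterivARB(program, GL_OBJECT_ACTIVE_UNIFORM_MAX_LENGTH_ARB, &maxLen);

              std::vector<char> name(maxLen > 0 ? maxLen : 1);
              for (GLint i = 0; i < count; ++i)
              {
                  GLsizei length = 0;
                  GLint size = 0;
                  GLenum type = 0;
                  glGetActiveUniformARB(program, i, (GLsizei)name.size(),
                                        &length, &size, &type, &name[0]);
                  printf("uniform %d: %s -> location %d\n", i, &name[0],
                         glGetUniformLocationARB(program, &name[0]));
              }
          }

      If baseMap doesn't show up in the active-uniform list at all, the linker dropped it; if it shows up but the location query still fails, that points at a driver issue.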
  13. In a way. Depending on the conditions, you're probably breaking the pipeline anyway just by making the function call, unless it's inlined or the processor is really good at pipelining jump instructions. With function pointers, the usual big loss is the additional memory load to fetch the pointer itself, which can miss in the cache.
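      To make the difference concrete, a trivial sketch:

          // Direct call: the target is known at compile time and can be inlined.
          int AddDirect(int a, int b) { return a + b; }

          // Indirect call: the target must first be loaded from memory at runtime,
          // so it generally can't be inlined, and the pointer load itself can miss
          // in the cache before the branch even happens.
          int (*addFn)(int, int) = AddDirect;

          int Demo()
          {
              int x = AddDirect(1, 2);  // may be inlined away entirely
              int y = addFn(3, 4);      // load the pointer, then call through it
              return x + y;
          }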
  14. Has anyone else experienced exceptionally slow floating-point performance on Xbox with XNA? I tried running some of my physics code on Xbox for the first time and found the performance to be poor, to say the least: over 60 FPS on Windows down to 0.5 FPS on Xbox. To take garbage collector overhead out of the equation (since I know there are problems with my code there), I ran a benchmark of 10 million matrix multiplies (XNA matrices) on both Windows and Xbox, and saw more than a 50% performance drop on Xbox compared to my three-year-old P4 3.0 GHz desktop, even with a DVD movie playing in the background on the desktop. I used the 'ref/out' version of the matrix multiply routine to prevent excessive copying on the Compact Framework. I know that the Compact Framework on Xbox is still fairly unoptimized for floating-point operations and that it does not take advantage of AltiVec yet, but I'm still surprised that the performance drop is so severe. Even a custom-written matrix multiplication routine on XNA matrices (to rule out XNA on Windows using SSE) runs significantly faster on my P4 than the same code on Xbox. Is this a common problem, or am I missing something here? Thanks.
  15. Quote: Original post by ShotgunNinja
      "How the hell (excuse my French) did you get an Ageia PhysX card into a SCHOOL computer? (By the way, I'm at school too, lol) Or *DID* you install the card? (You know that you have to do that first, don't you?) Anyway, the PhysX program and DLLs actively tell the processor to shift all physics-related functions (it uses the "compatible function" list of the Havok Engine) to the port of the PhysX card, instead of the core processor(s)/graphics processor(s). Without the card, the PhysX program is referencing, or at least scanning for, a port which doesn't exist. So it is effectively useless. I hate to be rude, because I'm relying on my school computer to develop as well, but do you even have a home computer? (If you don't, you're in the same boat as me.) You should be developing on that if possible."

      No, this has nothing to do with installing the PhysX hardware processor. I want it to run in software mode, as a physics engine like Newton or ODE, which it is perfectly capable of doing. I have a home computer and I do most of my development on it, but it would be nice to be able to work at school as well; it gives me something to do in my sometimes 2+ hours between classes. If I cannot get it to work, no big deal. Compiling is not a problem; I can do that easily by copying the .lib/.h files. It's just getting the PhysXLoader to find the actual PhysX binary that is the problem. I was hoping that just having the PhysXCore.dll file in the same folder as the executable and PhysXLoader.dll files would do the trick, but I guess not... Thanks for the help!
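      For reference, the initialization I'm attempting is nothing exotic; roughly this (sketch from my test app, error handling trimmed):

          // Initialize the Ageia PhysX SDK in pure software mode (no PPU needed).
          #include <cstdio>
          #include "NxPhysics.h"

          int main()
          {
              // PhysXLoader resolves PhysXCore.dll at this point; a NULL return
              // is exactly the "can't find the runtime" failure described above.
              NxPhysicsSDK* sdk = NxCreatePhysicsSDK(NX_PHYSICS_SDK_VERSION);
              if (!sdk)
              {
                  printf("NxCreatePhysicsSDK failed: PhysX runtime not found.\n");
                  return 1;
              }

              // ... create scenes and actors here, all simulated on the CPU ...

              NxReleasePhysicsSDK(sdk);
              return 0;
          }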