Tim Coolman

Members

  • Content count: 24
  • Community Reputation: 171 Neutral

About Tim Coolman

  • Rank: Member

  1. Thanks unbird and menohack for your suggestions. The texture atlas may be problematic because these aren't static textures that I can lay out into an atlas resource in advance - these textures are first rendered by prior Draw calls and may be redrawn frequently. Basically I am drawing many things to these off-screen textures and then compositing them to the screen as quads, which I would like to do with instancing. I will consider the Texture2DArray suggestion, using the largest texture for the dimensions - a rough sketch of that idea is below.
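     Roughly what I'm picturing, assuming D3D11 and that every rendered texture fits within the largest slice (the format and all names here are placeholders): create the array once at the largest size, then recopy a slice whenever its source texture is redrawn.

[source lang="cpp"]
#include <d3d11.h>

// Create a Texture2DArray big enough for the largest source texture.
// width/height are the largest texture's dimensions; count is the
// number of off-screen textures to composite (all assumptions).
ID3D11Texture2D* CreateAtlasArray(ID3D11Device* device,
                                  UINT width, UINT height, UINT count)
{
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width            = width;
    desc.Height           = height;
    desc.MipLevels        = 1;
    desc.ArraySize        = count;                        // one slice per texture
    desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;   // shared format (placeholder)
    desc.SampleDesc.Count = 1;
    desc.Usage            = D3D11_USAGE_DEFAULT;
    desc.BindFlags        = D3D11_BIND_SHADER_RESOURCE;

    ID3D11Texture2D* array = nullptr;
    device->CreateTexture2D(&desc, nullptr, &array);
    return array;
}

// After re-rendering one of the off-screen textures, copy it into its slice.
void UpdateSlice(ID3D11DeviceContext* context, ID3D11Texture2D* array,
                 ID3D11Texture2D* source, UINT slice)
{
    // Destination subresource = mip 0 of the given array slice; the source
    // is a single-mip, non-array texture, so its subresource index is 0.
    context->CopySubresourceRegion(array, D3D11CalcSubresource(0, slice, 1),
                                   0, 0, 0, source, 0, nullptr);
}
[/source]

     Since the smaller textures would only fill part of their slice, the per-instance data would also need a UV scale so the shader samples just the valid region.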
  2.   Thanks for the suggestion. It would be possible, but I'm still hoping for a more straightforward solution.
  3. In my DirectX 11 application, I would like to draw a scene consisting of many textured quads. For the sake of efficiency, my first thought was to use instancing to pull this off in a single draw call - four common vertices and an instance buffer containing transformation matrices to handle positioning of each instance, and an index for which texture to sample from. I had hoped I could do this using a single Texture2DArray resource for storing my collection of textures, but the textures all vary in size (though would share the same format). This does not appear to be possible with a Texture2DArray.   I would really like to avoid a separate draw call for each of these quads. From what I understand there is overhead involved in draw calls that can create a CPU bottleneck, especially considering I would only be drawing two triangles per call.   Anyone have suggestions on the most efficient way to do this?
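     To make the idea concrete, here is roughly the instance layout I have in mind (a sketch - the semantic names, slot numbers, and the float4x4-packed-as-four-rows convention are my assumptions):

[source lang="cpp"]
#include <d3d11.h>

// Vertex layout: per-vertex position/UV from slot 0, and per-instance
// data from slot 1 - a 4x4 transform passed as four float4 rows plus a
// texture index. InstanceDataStepRate = 1 advances once per instance.
static const D3D11_INPUT_ELEMENT_DESC layout[] =
{
    { "POSITION",  0, DXGI_FORMAT_R32G32B32_FLOAT,    0,  0, D3D11_INPUT_PER_VERTEX_DATA,   0 },
    { "TEXCOORD",  0, DXGI_FORMAT_R32G32_FLOAT,       0, 12, D3D11_INPUT_PER_VERTEX_DATA,   0 },
    { "TRANSFORM", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 1,  0, D3D11_INPUT_PER_INSTANCE_DATA, 1 },
    { "TRANSFORM", 1, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 16, D3D11_INPUT_PER_INSTANCE_DATA, 1 },
    { "TRANSFORM", 2, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 32, D3D11_INPUT_PER_INSTANCE_DATA, 1 },
    { "TRANSFORM", 3, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 48, D3D11_INPUT_PER_INSTANCE_DATA, 1 },
    { "TEXINDEX",  0, DXGI_FORMAT_R32_UINT,           1, 64, D3D11_INPUT_PER_INSTANCE_DATA, 1 },
};

// One call would then draw every quad: 6 indices (two triangles),
// quadCount instances, e.g.
//   context->DrawIndexedInstanced(6, quadCount, 0, 0, 0);
[/source]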
  4.   You were right! That line appeared to do the trick. Any idea if there is a way to take care of this on the export from Blender so I don't have to modify the cull mode?
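     (For reference, the fix above was changing the cull mode in XNA; in plain Direct3D 11 the equivalent knob would be the rasterizer state - a sketch, not taken from this thread:)

[source lang="cpp"]
#include <d3d11.h>

// Direct3D 11 version of the same fix: treat counter-clockwise triangles
// as front-facing (alternatively, switch CullMode) so the exported
// winding order matches what the rasterizer keeps.
void UseFlippedWinding(ID3D11Device* device, ID3D11DeviceContext* context)
{
    D3D11_RASTERIZER_DESC desc = {};
    desc.FillMode              = D3D11_FILL_SOLID;
    desc.CullMode              = D3D11_CULL_BACK;
    desc.FrontCounterClockwise = TRUE;   // flip which winding counts as front
    desc.DepthClipEnable       = TRUE;

    ID3D11RasterizerState* state = nullptr;
    if (SUCCEEDED(device->CreateRasterizerState(&desc, &state)))
        context->RSSetState(state);
}
[/source]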
  5. I have recently been playing around with XNA for the first time. I have some experience with DirectX 10 and 11, and have also gone through some modeling tutorials for Blender. But this is the first time I've tried to import a model created in Blender.   In the project I'm experimenting with, I am drawing a jet model provided in a Microsoft example, and a simple house model I created in Blender and exported to a .x file. The problem I'm having is that the perspective of the house is the opposite of what it should be, relative to the camera. If the house model is in the center of the viewing area, it looks fine - all I see is the front surface of the model. As the model moves to the right of the camera (translation only, no rotation applied), I should begin to see some of the side of the model that is closest to the camera. Instead, the opposite side becomes visible. The same happens with up and down movement.   The jet model behaves correctly, even though I'm using the same view and projection matrices for both models.   Here are some screenshots to demonstrate what I'm talking about. It's hard to tell with the jet, but the issue with the house is pretty clear. I'm just looking for some tips as to why this might happen. It's hard for me to understand how the model itself could be the problem, but since I'm using the same matrices for both models, I feel like there must be something wrong with the way I exported the model or something. Thanks in advance for any time given to help me out!   [attachment=14031:1.png][attachment=14032:2.png][attachment=14033:3.png][attachment=14034:4.png][attachment=14035:5.png][attachment=14036:6.png]
  6. I'll bump this once just because I posted this topic late on a Friday afternoon. Anyone have any ideas on this?
  7. I posted this question to the nVidia developer forum under NSight Visual Studio, and I got this response from a moderator. [quote]Debugging DirectCompute shaders is the similar process as to debugging any other shader. Please take a look at the user's guide, under Graphics Debugger > Shader Debugger.[/quote] Simple answer: I had just overlooked this, assuming compute debugging would be more like CUDA debugging. I followed these instructions and it works great.
  8. Well, after trying a few other ways to do this, I put it back to how I had it and... now it works! Magic. I have no idea what changed since my first attempt, but it is now working as I'd originally expected. I apologize, as I feel like I wasted your time with this question. But now, using a [b]DXGI_FORMAT_R32G32B32A32_FLOAT[/b] texture, I'm able to store UINT values, using asfloat() and asuint() to convert back and forth between the pixel and compute shaders.
  9. I am writing Windows DirectX 11 software in C++ for which I would like to receive input from both a regular keyboard and a 10-key keypad. I would like a secondary user to be able to input from a 10-key keypad without disrupting the use of the full keyboard by the primary user. For example, if the primary user is typing into a text box, I would like the secondary user to be able to send 10-key data to the software to be handled separately so it does not affect the text box input. I am currently using DirectInput for both mouse and keyboard. But if anyone knows of a solution through the Windows API, I would consider that as well. When I create my keyboard device in DirectInput, I am currently using the GUID_SysKeyboard value, which lumps both keyboards into one so that my software can't discern the source of keyboard input. Is it possible to use EnumDevices to identify the two keyboards and create separate DirectInput devices? I imagine it would be, but I'm not sure how to go about identifying each device from the DIDEVICEINSTANCE structure provided to the EnumDevices callback. I would like to make this as generic as possible so it can be used with different combinations and models/brands of keyboards. Thanks in advance for any help or suggestions! (Note: I posted this same question on [url="http://stackoverflow.com/questions/13652811/separate-input-for-additional-10-key-keypad"]StackOverflow[/url])
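     A sketch of the EnumDevices idea, under the assumption that each physical keyboard is reported as a separate DI8DEVCLASS_KEYBOARD instance (the callback and helper names are mine):

[source lang="cpp"]
#define DIRECTINPUT_VERSION 0x0800
#include <dinput.h>    // link with dinput8.lib and dxguid.lib
#include <vector>

// Collect the instance GUID of every attached keyboard-class device.
static BOOL CALLBACK OnKeyboard(LPCDIDEVICEINSTANCE inst, LPVOID ctx)
{
    auto* guids = static_cast<std::vector<GUID>*>(ctx);
    guids->push_back(inst->guidInstance);   // identifies this physical device
    return DIENUM_CONTINUE;                 // keep enumerating
}

// di is an already-created IDirectInput8* (from DirectInput8Create).
std::vector<GUID> EnumerateKeyboards(IDirectInput8* di)
{
    std::vector<GUID> guids;
    di->EnumDevices(DI8DEVCLASS_KEYBOARD, OnKeyboard, &guids,
                    DIEDFL_ATTACHEDONLY);
    return guids;
}
[/source]

     From there, each GUID could be passed to CreateDevice in place of GUID_SysKeyboard, with SetDataFormat(&c_dfDIKeyboard) as usual; the DIDEVICEINSTANCE fields tszInstanceName and guidProduct look like the place to tell the 10-key pad apart from the main keyboard.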
  10. Okay. The values I'd like to store consist of one float and three uint values. Do you think using [b]DXGI_FORMAT_R32G32B32A32_TYPELESS[/b] instead of [b]DXGI_FORMAT_R32G32B32A32_FLOAT[/b] would prevent unexpected conversions from occurring? A sketch of what I mean is below.
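     Purely speculative on my part: the TYPELESS texture would get a FLOAT render target view for the pixel shader and a UINT shader resource view for the compute shader, so the bits are never reinterpreted as colors in between.

[source lang="cpp"]
#include <d3d11.h>

// tex was created with Format = DXGI_FORMAT_R32G32B32A32_TYPELESS and
// BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE.
void CreateViews(ID3D11Device* device, ID3D11Texture2D* tex,
                 ID3D11RenderTargetView** rtvOut,
                 ID3D11ShaderResourceView** srvOut)
{
    // The pixel shader writes through a FLOAT view...
    D3D11_RENDER_TARGET_VIEW_DESC rtv = {};
    rtv.Format        = DXGI_FORMAT_R32G32B32A32_FLOAT;
    rtv.ViewDimension = D3D11_RTV_DIMENSION_TEXTURE2D;
    device->CreateRenderTargetView(tex, &rtv, rtvOut);

    // ...while the compute shader reads the same bits through a UINT view,
    // so Y/Z/W come back as raw uints with no asuint() needed.
    D3D11_SHADER_RESOURCE_VIEW_DESC srv = {};
    srv.Format              = DXGI_FORMAT_R32G32B32A32_UINT;
    srv.ViewDimension       = D3D11_SRV_DIMENSION_TEXTURE2D;
    srv.Texture2D.MipLevels = 1;
    device->CreateShaderResourceView(tex, &srv, srvOut);
}
[/source]

     (With this split the X component would come back as raw bits too, so the compute shader would asfloat() that one instead.)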
  11. [quote name='MJP' timestamp='1352764056' post='5000386'] In your case anything except for a 32-bit FLOAT format will mess up your values, because conversions will happen. [/quote] What kind of conversions? I just figured that since I was using the asfloat() function to store my UINT values, the texture would accept it as a float - how would the texture know the difference, given that it is actually a binary representation of a UINT? Unless the texture requires that the value be a valid color-component value between 0.0 and 1.0. [quote name='Jason Z' timestamp='1352771199' post='5000418'] Have you considered writing to a UAV in your pixel shader? [/quote] I'll have to think about this. The reason I'm doing it this way is that I actually am storing graphical data - I still take advantage of the way the pixel shader projects the data onto my texture using transformation matrices, and I also need it to take care of depth buffering and resolution. However, I don't care about color - instead I have other data to keep track of, which is why I was trying to use the color-component values to store other information. (A rough sketch of what the suggested UAV binding might look like is below.)
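     If I understand the suggestion, the API side would look something like this (a sketch with made-up names, not something from this thread):

[source lang="cpp"]
#include <d3d11.h>

// Bind one color target plus a UAV the pixel shader can write raw data to.
// RTVs and UAVs share output slots, so with the render target in slot 0
// the UAV lands in slot 1 (register u1 in the pixel shader).
void BindTargetAndUAV(ID3D11DeviceContext* context,
                      ID3D11RenderTargetView* rtv,
                      ID3D11DepthStencilView* dsv,
                      ID3D11UnorderedAccessView* dataUAV)  // hypothetical UAV
{
    UINT keepCount = (UINT)-1;   // -1: leave any append/consume counter alone
    context->OMSetRenderTargetsAndUnorderedAccessViews(
        1, &rtv, dsv,            // one render target + depth, as before
        1, 1, &dataUAV,          // UAVStartSlot = 1, NumUAVs = 1
        &keepCount);
}
[/source]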
  12. I am using a pixel shader to put some data into a texture. Typically, with a [i]float4[/i] formatted texture, you would output RGBA color data to the texture, where each color component is a 0.0 - 1.0 float value. I'm trying to use the pixel shader to store non-color data. This texture is not meant for display. Instead, once the texture is filled, I convert the texture texels to a different binary format using a compute shader (due to the nature of the data, it makes sense for me to output this data with a pixel shader). When outputting to the texture from my pixel shader, I would like to store some [i]uint[/i] values instead of floats in the Y, Z, W components. So here is an example of how I'm trying to return from the pixel shader:

[source lang="cpp"]
// Keep X as a real float; carry the three uints through Y/Z/W by
// reinterpreting their bits as floats (no numeric conversion).
return float4(floatValue, asfloat(firstUintValue), asfloat(secondUintValue), asfloat(thirdUintValue));
[/source]

I do this because I don't want to cast the [i]uint[/i] values to [i]float[/i], but rather maintain their binary equivalent. However, when I read from the texture in my compute shader and convert these values back to [i]uint[/i] using [i]asuint(texel.Y)[/i], they do not seem to be the same values I attempted to store in the first place. Actually, most of the time I seem to get ZERO values out of this. I know that I have supplied my compute shader with the texture as a shader resource properly, because I am able to retrieve the X component of the texels, which you'll notice above was a regular float (between 0.0 and 1.0). Does the pixel shader require output to be 0.0 - 1.0 floats and make automatic adjustments otherwise? Thank you for your time and assistance.
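     For clarity, this is the bit-level behavior I'm relying on - the same reinterpretation that asfloat()/asuint() perform, sketched CPU-side in C++ (helper names are made up):

[source lang="cpp"]
#include <cstdint>
#include <cstring>
#include <cstdio>

// What HLSL's asfloat()/asuint() do: reinterpret the 32 bits with no
// numeric conversion. memcpy is the portable way to do that in C++.
static float    asfloat_bits(uint32_t u) { float f; std::memcpy(&f, &u, sizeof f); return f; }
static uint32_t asuint_bits(float f)     { uint32_t u; std::memcpy(&u, &f, sizeof u); return u; }

int main()
{
    uint32_t original = 0xDEADBEEF;
    float    stored   = asfloat_bits(original);   // what the pixel shader writes
    uint32_t restored = asuint_bits(stored);      // what the compute shader reads back
    std::printf("%08X -> %08X\n", original, restored);  // identical, as long as
    return 0;                                           // nothing converts the float
}
[/source]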
  13. I will also ask if anyone can recommend other methods of compute shader debugging. If possible, I'd really like to be able to debug my shader in the context of my application, so that I can see for certain the data and parameters it has been given by my application.
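     One tool-free fallback: copy the compute shader's output into a staging resource and inspect it on the CPU under the debugger. A rough D3D11 sketch, assuming the output lives in a buffer (names are placeholders):

[source lang="cpp"]
#include <d3d11.h>

// Copy a GPU-only buffer into a staging buffer and map it on the CPU.
// gpuBuffer is the buffer the compute shader wrote (hypothetical).
void DumpBuffer(ID3D11Device* device, ID3D11DeviceContext* context,
                ID3D11Buffer* gpuBuffer)
{
    D3D11_BUFFER_DESC desc = {};
    gpuBuffer->GetDesc(&desc);
    desc.Usage          = D3D11_USAGE_STAGING;   // CPU-readable copy
    desc.BindFlags      = 0;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
    desc.MiscFlags      = 0;

    ID3D11Buffer* staging = nullptr;
    if (FAILED(device->CreateBuffer(&desc, nullptr, &staging))) return;

    context->CopyResource(staging, gpuBuffer);   // GPU -> staging

    D3D11_MAPPED_SUBRESOURCE mapped = {};
    if (SUCCEEDED(context->Map(staging, 0, D3D11_MAP_READ, 0, &mapped)))
    {
        // Set a breakpoint here and inspect mapped.pData in the watch window.
        context->Unmap(staging, 0);
    }
    staging->Release();
}
[/source]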
  14. I would like to debug my DirectCompute shader. NVIDIA's NSight website claims that it supports DirectCompute for GPGPU debugging, but their documentation only shows how to debug CUDA C++ code. I have successfully used NSight to do graphics debugging and it works great - I run NSight on my laptop, which copies and launches my application on my desktop PC, and allows me to debug remotely. I can't seem to figure out how to get compute shader debugging to work, though. I tried putting a breakpoint inside the compute shader function of my .fx file, but it doesn't trigger when my C++ application calls Dispatch for that shader. Could it have something to do with the fact that my application compiles all my shaders at runtime? Has anyone had any success debugging their DirectCompute HLSL code using NVIDIA NSight? If so, any guidance would be much appreciated! Thanks, Tim
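     In case the runtime-compile angle matters, one thing worth ruling out (an assumption on my part, not something from the NSight docs): make sure the shaders are compiled with debug info and without optimization, since that is what lets a debugger map instructions back to HLSL source. A sketch with a hypothetical file path and entry point:

[source lang="cpp"]
#include <d3dcompiler.h>   // link with d3dcompiler.lib

// Compile the compute shader with embedded source-level debug info.
HRESULT CompileForDebug(ID3DBlob** bytecode, ID3DBlob** errors)
{
    // D3DCOMPILE_DEBUG embeds debug info; SKIP_OPTIMIZATION keeps the
    // generated instructions mappable back to individual HLSL lines.
    UINT flags = D3DCOMPILE_DEBUG | D3DCOMPILE_SKIP_OPTIMIZATION;
    return D3DCompileFromFile(
        L"MyShaders.fx",      // hypothetical path
        nullptr, nullptr,     // no defines, no custom include handler
        "CSMain", "cs_5_0",   // hypothetical entry point; compute target
        flags, 0, bytecode, errors);
}
[/source]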
  15. Thanks to both MikeBMcL and MJP for your input. This really helps clarify things for me. Always helps to understand a little bit better how things work, even the things that happen "behind the scenes".