dreijer

Members
  • Content count: 14
  • Joined
  • Last visited

Community Reputation

128 Neutral

About dreijer

  • Rank: Member

  1. I want to render the back buffer to another texture (performing scaling and other operations). If the back buffer wasn't created with the D3D10_BIND_SHADER_RESOURCE flag, is my only option then to create an intermediate texture (where I *do* set D3D10_BIND_SHADER_RESOURCE), use CopyResource() to copy the back buffer to the intermediate texture, and then bind the intermediate texture for rendering to my render target texture?
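     For reference, here is a minimal sketch of the intermediate-texture approach described above (D3D10; pDevice and pBackBuffer are hypothetical names, and the back buffer is assumed not to be multisampled):
     [code]
     // Describe an intermediate texture that matches the back buffer but can be bound as a shader resource.
     D3D10_TEXTURE2D_DESC desc;
     pBackBuffer->GetDesc(&desc);                       // pBackBuffer: ID3D10Texture2D* from the swap chain
     desc.BindFlags = D3D10_BIND_SHADER_RESOURCE;
     desc.Usage = D3D10_USAGE_DEFAULT;
     desc.CPUAccessFlags = 0;

     ID3D10Texture2D* pIntermediate = NULL;
     ID3D10ShaderResourceView* pSRV = NULL;
     if (SUCCEEDED(pDevice->CreateTexture2D(&desc, NULL, &pIntermediate)))
     {
         pDevice->CopyResource(pIntermediate, pBackBuffer);             // GPU-side copy, no CPU/GPU sync
         pDevice->CreateShaderResourceView(pIntermediate, NULL, &pSRV);
         pDevice->PSSetShaderResources(0, 1, &pSRV);                    // sample this while rendering into the target texture
     }
     [/code]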
  2. Haha, I've been reading the docs up and down -- even sideways -- and I still missed this. I'm glad somebody else knows how to read. :P I guess what threw me off is that the paragraph you linked is missing from the ExecuteCommandList() documentation, which is where it's most relevant, in my opinion.
  3. I took my first stab at using deferred contexts in DirectX 11 the other day. My use-case for deferred contexts is probably somewhat different from the common scenario though; I'm interested in rendering a bunch of things on a deferred context, have them executed on the immediate context and then have the API reset the immediate context to what it was before I executed my commands (i.e. basically restoring the render state). To test this, I created my deferred context using [b]CreateDeferredContext()[/b] and then rendered a simple triangle strip with it. Early on in my test application, I call [b]OMSetRenderTargets()[/b] on the immediate context in order to render to the swap chain's back buffer. Now, after having read the documentation on MSDN about deferred contexts, I assumed that calling [b]ExecuteCommandList()[/b] on the immediate context would execute all of the deferred commands as "an extension" to the commands that had already been executed on the immediate context, i.e. the triangle strip I rendered in the deferred context would be rendered to the swap chain's back buffer when I executed the generated command list on the immediate context. That didn't seem to be the case, however. Instead, I had to manually pull out the immediate context's render target view (using [b]OMGetRenderTargets()[/b]) and then set it on the deferred context with [b]OMSetRenderTargets()[/b]. Am I doing something wrong or is that the way deferred contexts work?
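     For reference, here is a minimal sketch of the workaround described above (D3D11; pImmediateCtx and pDeferredCtx are hypothetical names):
     [code]
     // A deferred context starts out with default state, so the immediate context's
     // render targets have to be bound on it explicitly before recording.
     ID3D11RenderTargetView* pRTV = NULL;
     ID3D11DepthStencilView* pDSV = NULL;
     pImmediateCtx->OMGetRenderTargets(1, &pRTV, &pDSV);
     pDeferredCtx->OMSetRenderTargets(1, &pRTV, pDSV);

     // ... record draw calls on pDeferredCtx ...

     ID3D11CommandList* pCmdList = NULL;
     pDeferredCtx->FinishCommandList(FALSE, &pCmdList);
     pImmediateCtx->ExecuteCommandList(pCmdList, TRUE);   // TRUE = restore the immediate context's state afterwards
     pCmdList->Release();
     if (pRTV) pRTV->Release();
     if (pDSV) pDSV->Release();
     [/code]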
  4. In case others are interested, I stumbled upon an interface called ID3D10StateBlock, which presumably does exactly what I want.
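     For reference, a minimal sketch of the ID3D10StateBlock usage mentioned above (pDevice is a hypothetical ID3D10Device*):
     [code]
     D3D10_STATE_BLOCK_MASK mask;
     D3D10StateBlockMaskEnableAll(&mask);           // track every piece of pipeline state

     ID3D10StateBlock* pStateBlock = NULL;
     if (SUCCEEDED(D3D10CreateStateBlock(pDevice, &mask, &pStateBlock)))
     {
         pStateBlock->Capture();                    // snapshot the application's current state
         // ... set our own state and render the overlay ...
         pStateBlock->Apply();                      // restore the captured state
         pStateBlock->Release();
     }
     [/code]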
  5. [quote name='mhagain' timestamp='1330108471' post='4916280'] I haven't benchmarked the use of such a Get call, but I would expect that all you're doing is getting a handle to the object from the runtime (rather than reading back data from the GPU) so it should be OK. [/quote] Right, I haven't benchmarked them either, so they might just be really simple wrappers and pretty cheap to call each frame. I just didn't think of going down that road since I know OpenGL's GetXX() functions are really slow and not recommended on a per-frame basis (although I know the DirectX API is fundamentally different). [quote name='mhagain' timestamp='1330108471' post='4916280']With D3D11 I'd create a deferred context, record my stuff in a command list, then play it back and destroy the context. FinishCommandList can be told to save and restore the previous states for you, so you won't need to worry about any of that. [/quote] That's really cool. That's exactly what I was looking for. I need to read up on deferred contexts, but ideally I'd be able to create the context once and reuse it each frame.
  6. [quote name='iedoc' timestamp='1330106689' post='4916271'] to set the state back to the default, you can just pass NULL as the parameter. the application should set the state to whatever it needs before it draws it's stuff though. it's possible it set the state during initialization time, so that when you change the state, it never goes back to the state that was set in the initialization of the scene.[/quote] That's pretty much the core problem. Having done this for quite a few Direct3D 9 games, I know for a fact that some games only set their state initially rather than on a per-frame basis, and thus modifying the state so that I can render my stuff will mess up the game itself. One thing I've learned when interacting with games is that you can never expect game developers to Do The Right Thing (TM) in code, and you always have to consider the worst-case scenario. It's therefore really important that I'm able to restore the game's render state after I'm done rendering my things. So, there's no way to do this automatically in DX10 like I can in DX9 with state blocks? The only solution, then, would be to hook all the state functions that I change so that I know what values to set back once I'm done rendering. That's a bunch of work though...
  7. Well, the problem here is that I don't actually set the initial render state myself. I'm essentially rendering an overlay over the Direct3D application, which means I'm piggybacking off of the application's existing render state (i.e. whatever the application had already set), and I want to make sure that when I'm done rendering, the state is set back to what it was before I did my thing.
  8. I've used state blocks (http://msdn.microsoft.com/en-us/library/windows/desktop/bb206121%28v=vs.85%29.aspx) with Direct3D 9 in the past to save off the current render state, make some state changes, render some primitives, and then reset the render state to what it was previously. How do I do something similar in Direct3D 10/11? I basically want to take the current render state (e.g. shaders, input layout), save it off, change the state to whatever I need in order to render something else, and then set the state back to what it was before I rendered.
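     For reference, a minimal sketch of the Direct3D 9 state block pattern referenced above (pDevice is a hypothetical IDirect3DDevice9*):
     [code]
     // CreateStateBlock(D3DSBT_ALL, ...) records the device state as it is right now,
     // so Apply() later puts everything back the way it was at this point.
     IDirect3DStateBlock9* pStateBlock = NULL;
     if (SUCCEEDED(pDevice->CreateStateBlock(D3DSBT_ALL, &pStateBlock)))
     {
         // ... change shaders, render states, etc. and draw ...
         pStateBlock->Apply();       // restore the saved state
         pStateBlock->Release();
     }
     [/code]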
  9. I have an application that creates a Direct3D 9 device in fullscreen mode and then starts presenting. At a later point, after having created the first device, I temporarily create a new Direct3D device in windowed mode (on the same thread but for a different window). I destroy this device immediately again, but somehow I'm then no longer able to Alt-Tab out of the fullscreen application. The application just stays on top rather than dropping to the background, although it looks like it no longer has focus. If I create my temporary device as [b]D3DDEVTYPE_NULLREF[/b], I'm suddenly able to Alt-Tab out. Does anybody have an idea why that is, and if so, how I can create a second temporary device without messing up the existing device?
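     For reference, a minimal sketch of the D3DDEVTYPE_NULLREF workaround mentioned above (pD3D and hTempWnd are hypothetical names; this only shows the temporary device creation, not why the focus behavior changes):
     [code]
     D3DPRESENT_PARAMETERS pp = {};
     pp.Windowed = TRUE;
     pp.SwapEffect = D3DSWAPEFFECT_DISCARD;
     pp.BackBufferFormat = D3DFMT_UNKNOWN;          // use the current display format in windowed mode
     pp.hDeviceWindow = hTempWnd;

     IDirect3DDevice9* pTempDevice = NULL;
     HRESULT hr = pD3D->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_NULLREF, hTempWnd,
                                     D3DCREATE_SOFTWARE_VERTEXPROCESSING, &pp, &pTempDevice);
     if (SUCCEEDED(hr))
         pTempDevice->Release();                    // temporary device is destroyed again right away
     [/code]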
  10. I need to copy the back buffer of a game to a system memory surface. As we all know, that's horrible for performance since it requires syncing the CPU and GPU. Due to certain restrictions, however, I cannot use GetRenderTarget() in the application itself to read the back buffer. Instead, I do the following:
     * Create a shared render target surface (DirectX9Ex) in video memory in a different application.
     * In the context of the game, blt the contents of the back buffer into the shared resource.
     * Call GetRenderTargetData() on the shared surface in the other application to read the contents into system memory.
     This works fine, but it made me curious about the performance of the readback operation in general. Since the actual readback happens on a different surface than the game's back buffer, does this mean the game will be able to continue rendering while I'm reading back from the shared surface, or will the CPU<->GPU sync affect everything?
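     For reference, a minimal sketch of the flow described above (hypothetical names; pSharedSurface is assumed to be a D3D9Ex render target surface created/opened via a shared handle in each process):
     [code]
     // In the game's process: blit the back buffer into the shared render target (stays on the GPU).
     IDirect3DSurface9* pBackBuffer = NULL;
     pGameDevice->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &pBackBuffer);
     pGameDevice->StretchRect(pBackBuffer, NULL, pSharedSurface, NULL, D3DTEXF_NONE);
     pBackBuffer->Release();

     // In the other process: read the shared surface back into a system memory surface.
     IDirect3DSurface9* pSysMem = NULL;
     pOtherDevice->CreateOffscreenPlainSurface(width, height, D3DFMT_A8R8G8B8,
                                               D3DPOOL_SYSTEMMEM, &pSysMem, NULL);
     pOtherDevice->GetRenderTargetData(pSharedSurface, pSysMem);   // this is the CPU/GPU sync point
     [/code]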
  11. [quote name='MJP' timestamp='1308770376' post='4826548'] In general creating resources is slow. I don't know offhand how slow it is for an offscreen CPU-readable surface, but I would assume that it isn't fast.[/quote] My idea was that it's generally pretty fast to allocate a chunk of video memory and copy data from one video memory location to another -- as opposed to the slow process of transferring it to system memory. [quote name='MJP' timestamp='1308770376' post='4826548'] The real bottleneck from reading back render target data isn't so much the data transfer, but rather the potential GPU stalling that comes from not waiting long enough to read the data.[/quote] Right, the stall is definitely a major killer here -- I just thought the actual transfer would be just as slow as creating a video memory texture. Ultimately, the potential win of doing it is probably going to be so small that it won't be noticeable. Thanks for the quick response!
  12. I need to read back the contents of a swap chain's back buffer in my application. I'm using GetRenderTargetData() to do so. The back buffer could potentially be large-ish, so I'd like to optimize the readback as much as I can. For example's sake, let's say my back buffer is 1024x1024 but I know for a fact that only [0,0,1024,32] changed since the last frame; that is, the first 32 rows of the buffer. As we all know, transferring data between the GPU and CPU is expensive, so I'm now left with two options for reading the data back:
     1) Simply use GetRenderTargetData() to read back the entire back buffer into a 1024x1024 system memory surface.
     2) Create a temporary D3DPOOL_DEFAULT render target with the dirty rect dimensions, i.e. [0,0,1024,32], copy the contents of the back buffer to the temporary buffer and [i]then[/i] call GetRenderTargetData() on the smaller render target.
     Questions:
     * How expensive is it to create a temporary, smaller render target on the fly for use in my readback operation? Is it so expensive that it defeats the purpose and I'd be better off just reading back the entire back buffer with GetRenderTargetData()?
     * Alternatively, I could create several differently-sized render targets up front, such as 32x32, 512x512, etc., so I wouldn't have to create them on the fly, but that wastes video memory. Thoughts? Comments?
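     For reference, a minimal sketch of option 2 above (hypothetical names; assumes an A8R8G8B8 back buffer):
     [code]
     RECT dirty = { 0, 0, 1024, 32 };               // the region that actually changed

     // Small D3DPOOL_DEFAULT render target matching the dirty rect (ideally created once and reused).
     IDirect3DSurface9* pSmallRT = NULL;
     pDevice->CreateRenderTarget(1024, 32, D3DFMT_A8R8G8B8, D3DMULTISAMPLE_NONE, 0,
                                 FALSE, &pSmallRT, NULL);

     // Matching system memory surface to receive the readback.
     IDirect3DSurface9* pSmallSysMem = NULL;
     pDevice->CreateOffscreenPlainSurface(1024, 32, D3DFMT_A8R8G8B8,
                                          D3DPOOL_SYSTEMMEM, &pSmallSysMem, NULL);

     pDevice->StretchRect(pBackBuffer, &dirty, pSmallRT, NULL, D3DTEXF_NONE);   // GPU-side copy of the dirty rows
     pDevice->GetRenderTargetData(pSmallRT, pSmallSysMem);                      // read back only 1024x32
     [/code]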
  13. When using the debug runtime and linking against D3dx9d, the only thing that's printed in the debugger is the following:
     Direct3D9: (ERROR) :Cannot create Vidmem or Driver managed vertex buffer. Will ***NOT*** failover to Sysmem.
     Direct3D9: (ERROR) :Failure trying to create Vertex Buffer
     Looks very much like a driver issue to me...?
  14. I've been playing around with shared resources in Direct3D9Ex. I've made an application that creates a shared vertex buffer and a shared render target, which I then use from a second application. I just ran the first application (the one that creates the shared surfaces) on another machine I had lying around running Windows Vista with an ATI Radeon X1400 Mobility card, and I get the following error when attempting to create the shared vertex buffer: 80004005 [DDERR_GENERIC]. The code looks as follows:
     [code]
     device->CreateVertexBuffer(4 * sizeof(CUSTOMVERTEX),
                                D3DUSAGE_DYNAMIC | D3DUSAGE_WRITEONLY,
                                D3DFVF_XYZRHW | D3DFVF_TEX1,
                                D3DPOOL_DEFAULT,
                                &g_vertexBuffer2,
                                &g_hSharedVertexBuffer);
     [/code]
     Creating the shared surface works just fine on that same machine; it's only the vertex buffer that fails. Does anybody know if this is a driver bug or if I'm missing something about shared resources? So far my test application has worked on machines with NVIDIA cards and failed on two machines with ATI cards. (I haven't been able to test on other ATI boxes at this time.)