• Announcements

    • khawk

      Download the Game Design and Indie Game Marketing Freebook   07/19/17

      GameDev.net and CRC Press have teamed up to bring a free ebook of content curated from top titles published by CRC Press. The freebook, Practices of Game Design & Indie Game Marketing, includes chapters from The Art of Game Design: A Book of Lenses, A Practical Guide to Indie Game Marketing, and An Architectural Approach to Level Design. The GameDev.net FreeBook is relevant to game designers, developers, and those interested in learning more about the challenges in game development. We know game development can be a tough discipline and business, so we picked several chapters from CRC Press titles that we thought would be of interest to you, the GameDev.net audience, in your journey to design, develop, and market your next game. The free ebook is available through CRC Press by clicking here.

      The Curated Books

      • The Art of Game Design: A Book of Lenses, Second Edition, by Jesse Schell. Presents 100+ sets of questions, or different lenses, for viewing a game's design, encompassing diverse fields such as psychology, architecture, music, film, software engineering, theme park design, mathematics, anthropology, and more. Written by one of the world's top game designers, this book describes the deepest and most fundamental principles of game design, demonstrating how tactics used in board, card, and athletic games also work in video games. It provides practical instruction on creating world-class games that will be played again and again. View it here.

      • A Practical Guide to Indie Game Marketing, by Joel Dreskin. Marketing is an essential but too frequently overlooked or minimized component of the release plan for indie games. A Practical Guide to Indie Game Marketing provides you with the tools needed to build visibility and sell your indie games. With special focus on those developers with small budgets and limited staff and resources, this book is packed with tangible recommendations and techniques that you can put to use immediately. As a seasoned professional of the indie game arena, author Joel Dreskin gives you insight into practical, real-world experiences of marketing numerous successful games and also provides stories of the failures. View it here.

      • An Architectural Approach to Level Design. This is one of the first books to integrate architectural and spatial design theory with the field of level design. The book presents architectural techniques and theories for level designers to use in their own work. It connects architecture and level design in different ways that address the practical elements of how designers construct space and the experiential elements of how and why humans interact with this space. Throughout the text, readers learn skills for spatial layout, evoking emotion through gamespaces, and creating better levels through architectural theory. View it here.

      Learn more and download the ebook by clicking here.

      Did you know? GameDev.net and CRC Press also recently teamed up to bring GDNet+ Members up to a 20% discount on all CRC Press books. Learn more about this and other benefits here.

circlesoft

Members
  • Content count: 3565
  • Community Reputation: 1178 Excellent

About circlesoft
  • Rank: Contributor
  1. Quote: Original post by MrSparkle27: "Thank you for your quick reply. I'm using MDX, so I use the SurfaceRenderer class to render offscreen surfaces and a swap chain to render to a backbuffer, as suggested in an earlier posting. How are you guys rendering post effects? Are you rendering the scene to an offscreen surface or directly to the backbuffer, and how do you pass the scene texture to the post effect? Thanks, Christian"

     You have to create a texture with D3DUSAGE_RENDERTARGET (with only 1 mip level). Then get its surface using IDirect3DTexture9::GetSurfaceLevel(0). Render to this surface, and then pass the texture to your post-processing effect. It isn't 50% slower than normal.
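The render-to-texture setup described in that post can be sketched roughly as follows (D3D9, C++). This is a hedged sketch, not a complete program: device, width, height, backBufferSurf, and postEffect are assumed to exist, and "g_SceneTexture" is an illustrative effect parameter name. Error handling is omitted.

```cpp
// Create a render-target texture with a single mip level.
IDirect3DTexture9* sceneTex  = NULL;
IDirect3DSurface9* sceneSurf = NULL;

device->CreateTexture(width, height, 1,        // 1 mip level, as noted above
                      D3DUSAGE_RENDERTARGET,
                      D3DFMT_A8R8G8B8,
                      D3DPOOL_DEFAULT,         // render targets live in the default pool
                      &sceneTex, NULL);

// Grab the top-level surface so we can render into it.
sceneTex->GetSurfaceLevel(0, &sceneSurf);

// Render the scene into the texture's surface...
device->SetRenderTarget(0, sceneSurf);
// ...draw the scene here...

// ...then restore the backbuffer and feed the texture to the post effect.
device->SetRenderTarget(0, backBufferSurf);
postEffect->SetTexture("g_SceneTexture", sceneTex);
// ...draw the full-screen post-processing pass here...
```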
  2. Quote: Original post by Drunken_Coder: "Unfortunately the settings are identical on both screens. The only solution I can see is to run two devices, but it seems from reading the docs that this would require separate resources, separate windows, and separate Present() calls. Is there any way to make two devices split two halves of the same window, and can I make them share resources (texture and vertex buffers)?"

     The whole multi-monitor area has always been kind of blurry with D3D (I guess it was never a big issue, seeing as few, if any, commercial games use it), but I am guessing that with D3D9, no. All multimon apps I've seen have to create two or more devices and clone resources appropriately.
  3. If you said it works in debug, but not in release, try checking out these sites: Debugging Release Mode Problems Surviving the Release Version It is possible to interactively debug a release build, so you can try that at least.
  4. From the docs: Quote: "The destination surface must be either an off-screen plain surface or a level of a texture (mipmap or cube texture) created with D3DPOOL_SYSTEMMEM. The source surface must be a regular render target or a level of a render-target texture (mipmap or cube texture) created with D3DPOOL_DEFAULT. This method will fail if: The render target is multisampled. The source render target is a different size than the destination surface. The source render target and destination surface formats do not match."

     Check the debug runtime output.
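A minimal sketch of reading back a render target under those rules (D3D9, C++); device, width, height, and renderTargetSurf are assumed to exist and must match the source's size and format, and error checks are omitted:

```cpp
// Destination must be a SYSTEMMEM off-screen plain surface.
IDirect3DSurface9* sysmemSurf = NULL;
device->CreateOffscreenPlainSurface(width, height, D3DFMT_A8R8G8B8,
                                    D3DPOOL_SYSTEMMEM, &sysmemSurf, NULL);

// Fails if the source is multisampled or if size/format differ.
device->GetRenderTargetData(renderTargetSurf, sysmemSurf);

// The SYSTEMMEM copy can now be locked and read on the CPU.
D3DLOCKED_RECT lr;
sysmemSurf->LockRect(&lr, NULL, D3DLOCK_READONLY);
// ...read pixels via lr.pBits, one row per lr.Pitch bytes...
sysmemSurf->UnlockRect();
```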
  5. I noticed that you only rely on windows messages to inform you of when the device should be lost/reset. For robustness, use IDirect3DDevice9::TestCooperativeLevel() before calling BeginScene(). This will tell you if the device is ready, and if not, when you can reset it.
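A sketch of that per-frame check (D3D9, C++). Assumptions are flagged in the comments: presentParams is the original D3DPRESENT_PARAMETERS, and the Release/Recreate helpers for D3DPOOL_DEFAULT resources are hypothetical names for whatever your app uses.

```cpp
// Call each frame before rendering, rather than relying on window messages.
HRESULT hr = device->TestCooperativeLevel();
if (hr == D3DERR_DEVICELOST)
{
    // Device is lost and cannot be reset yet; skip this frame.
    Sleep(50);
    return;
}
if (hr == D3DERR_DEVICENOTRESET)
{
    // Default-pool resources must be released before Reset() can succeed.
    ReleaseDefaultPoolResources();           // assumed helper
    if (FAILED(device->Reset(&presentParams)))
        return;                              // try again next frame
    RecreateDefaultPoolResources();          // assumed helper
}

device->BeginScene();
// ...render...
device->EndScene();
device->Present(NULL, NULL, NULL, NULL);
```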
  6. Quote: Original post by xissburg: "Sorry if I misunderstood but, the drivers act like a Just-in-Time compiler? Then, they all follow a standard right?"

     I wouldn't go that far (about both the JIT and the standard [wink]). But the drivers will modify shaders based on the strengths and weaknesses (and bugs) of each card. This doesn't apply just to shaders, either. For example, I remember when the NV cards didn't perform hardware decompression correctly on certain DDS formats. The driver just did it manually first in software. They can be quite tricky sometimes. Microsoft likes to put out standards, but as we have already seen in the past, they are more like "guidelines" hehe
  7. Which file are the errors in? Are you possibly trying to compile a HLSL/FX file with the C/C++ compiler?
  8. There are a lot of factors at play here. First, make sure that your LOD call actually succeeded and isn't failing or something. How many verts/tris did the mesh have to begin with? It is likely that you are simply not throwing enough at your card to make an even noticeable dent in performance. The low framerate could be coming from a limit in batching or the shaders.
  9. Quote: Original post by jollyjeffers: "There's not much point upgrading to D3D10 if you've no interest in making use of the new features..."

     Yea, this is another good point. Many people speak of upgrading/rewriting their renderers to use D3D10, while I'm not so sure that they actually assess if they even need the new features that 10 offers hehe Other than the book that Jack and I are a part of, I have no commercial involvement with it, although we are starting to look there for the future. Anyone in middleware rendering should surely consider it.
  10. It depends on the hardware - some dual-monitor cards can't actually have 2 fullscreen devices going at once. Plus, remember that clicking on one will probably minimize the other. Generally, it's more realistic to create two maximized windowed devices. Here is a tutorial on multiple devices. Below that one is a nifty tutorial on doing it with swap chains.
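The swap-chain variant mentioned at the end of that post can be sketched like this (D3D9, C++): one device drives two windowed views via an additional swap chain. basePresentParams and hwndSecondWindow are assumed to exist, and error handling is omitted.

```cpp
// Describe the second window's presentation; copy the base settings.
D3DPRESENT_PARAMETERS pp = basePresentParams;    // assumed existing params
pp.hDeviceWindow = hwndSecondWindow;             // assumed second window handle
pp.Windowed = TRUE;

// Create an extra swap chain on the same device.
IDirect3DSwapChain9* chain2 = NULL;
device->CreateAdditionalSwapChain(&pp, &chain2);

// Render into the second chain's backbuffer...
IDirect3DSurface9* backBuf2 = NULL;
chain2->GetBackBuffer(0, D3DBACKBUFFER_TYPE_MONO, &backBuf2);
device->SetRenderTarget(0, backBuf2);
// ...draw the second view here...

// ...and present it to the second window.
chain2->Present(NULL, NULL, hwndSecondWindow, NULL, 0);
```

Because both chains share one device, textures and vertex buffers do not need to be duplicated, which addresses the resource-sharing question from the earlier post.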
  11. This isn't really a DX-related problem (directly at least), so you could always try mailing the people at ATI. There is also Nvidia's FX Composer, which I have always preferred anyways [wink]
  12. Yep, although it is notoriously tricky to get right. Here is a short tutorial about it, right here on Gamedev. If you search the forums, you will find tons and tons of topics about it, as it is pretty common. You will also need a decent exporter, such as Pandasoft or kW Xport.
  13. I normally keep a VS2003 project space around and fall back to that when I need to debug a shader, since I find that debugger to be a bit nicer and more streamlined than the PIX one. The main thing is, if I am having a shader problem, it most likely has to do with setting up state, so it's nice to be able to step from the application, into the shader, and back out again. But then again, keeping around VS2003 is a pain in the butt too (unless you need it for other reasons, like me).
  14. Quote: Original post by amtri: "Once I write to the stencil/depth buffers, is there a way I can find out exactly what the values of each 32-bits for each pixel in this buffer?"

     Unless you use the D3DFMT_D[16/32]_LOCKABLE format, you won't be able to lock the surface directly. You won't be able to use any APIs to copy it to a lockable surface, either (such as StretchRect()). If you need to get the values from a z-buffer, it is more practical to use multiple render targets and just write the depth out to a R32F surface.
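The MRT approach suggested there can be sketched as follows (D3D9, C++). This assumes device, width, height, and an existing color target colorSurf; the pixel shader (not shown) would write view-space depth on its second color output (COLOR1).

```cpp
// Create an R32F render-target texture to receive depth values.
IDirect3DTexture9* depthTex  = NULL;
IDirect3DSurface9* depthSurf = NULL;
device->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                      D3DFMT_R32F, D3DPOOL_DEFAULT, &depthTex, NULL);
depthTex->GetSurfaceLevel(0, &depthSurf);

// Bind color on slot 0 and the depth texture on slot 1.
device->SetRenderTarget(0, colorSurf);   // assumed existing color target
device->SetRenderTarget(1, depthSurf);   // shader's COLOR1 output lands here
// ...draw the scene; afterwards depthTex can be read back or sampled...
```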
  15. Quote: Original post by rpsathe: "a coworker told me this may be useful.... ID3D10Device::OMGetBlendState. Get the blend state of the output-merger stage. void OMGetBlendState( ID3D10BlendState **ppBlendState, FLOAT BlendFactor[4], UINT *pSampleMask );"

     Yea, this is what I was talking about. If you want to run in PIX and check this at the same time, you probably want to output this stuff to a log file, or something like that. The reference device isn't exactly interactive [wink]