gjaegy

Members
  • Content count: 151
  • Community Reputation: 126 Neutral

About gjaegy

  • Rank: Member
  1. I should add that I have also tried D3D10CompileEffectFromMemory(), but that one fails with a different error (NOT_IMPL), so I guess it is simply not implemented...
  2. I am very confused at the moment. I have switched to Visual Studio 2012. I am using SpeedTree, which relies on the Effects library, and I am using D3D10. I can successfully compile the effects using D3DCompile() and the "fx_4_0" profile. However, D3D10CreateEffectFromMemory() no longer accepts the compiled effect and returns E_FAIL, with no information beyond the error code. I had a look at the Effects11 library, but it requires an ID3D11Device. Any ideas?
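     For reference, a minimal sketch of the failing sequence (simplified: error handling is trimmed, the file name is made up, and pDevice is assumed to be a valid ID3D10Device*):

        #include <d3d10.h>
        #include <d3dcompiler.h>

        HRESULT CompileAndCreateEffect(ID3D10Device* pDevice,
                                       const char* fxSource, size_t fxLen,
                                       ID3D10Effect** ppEffect)
        {
            ID3DBlob* pCode = NULL;
            ID3DBlob* pErrors = NULL;

            // Compile the whole effect with the fx_4_0 profile.
            // The entry point must be NULL for effect profiles.
            HRESULT hr = D3DCompile(fxSource, fxLen, "tree.fx",
                                    NULL, NULL, NULL, "fx_4_0",
                                    0, 0, &pCode, &pErrors);
            if (FAILED(hr))
            {
                // pErrors->GetBufferPointer() holds the compiler log here.
                if (pErrors) pErrors->Release();
                return hr;
            }

            // This is the call that now fails with E_FAIL for me.
            hr = D3D10CreateEffectFromMemory(pCode->GetBufferPointer(),
                                             pCode->GetBufferSize(),
                                             0, pDevice, NULL, ppEffect);
            pCode->Release();
            if (pErrors) pErrors->Release();
            return hr;
        }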
  3. I would second CDLOD as well; this is what I implemented. It's much easier to understand, and performance is great. TBH I never really understood why geometry clipmaps would achieve better results than a chunk-based method (but I might be dumb). Also, CDLOD gives you geomorphing without any further effort, which is a very nice-to-have feature (see the sketch below).
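     A rough sketch of the geomorphing idea, for anyone curious (plain C++ mirroring the shader math, simplified from the CDLOD paper; names and the morph-region bounds are illustrative):

        #include <algorithm>

        // 0 near the camera (fine grid), 1 at the outer edge of the LOD
        // level (vertex fully snapped to the next-coarser grid).
        float MorphFactor(float distToCamera, float morphStart, float morphEnd)
        {
            float k = (distToCamera - morphStart) / (morphEnd - morphStart);
            return std::min(std::max(k, 0.0f), 1.0f);
        }

        // Blend each vertex between its fine-grid position and the
        // position it would have on the coarser grid.
        float MorphCoord(float fineCoord, float coarseCoord, float k)
        {
            return fineCoord + (coarseCoord - fineCoord) * k;
        }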
  4. Hi Nick, thanks for your answer. I think I have found a good trade-off. Basically, I resolve all includes myself, which results in one big HLSL source text with no #include directives left. I then check the shader cache, which contains a pair of files for each original shader: the HLSL source code and the compiled byte code. If an entry already exists in the cache, I compare the current source text with the cached HLSL source file. If the two strings are identical, I assume I can safely load the existing compiled byte code and use it; otherwise the shader gets compiled and the result is saved to disk (both files again). It seems to work pretty well.
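     In sketch form, the cache check looks roughly like this (simplified: the file naming is illustrative and the includes are assumed to be already resolved into flatSource):

        #include <fstream>
        #include <sstream>
        #include <string>
        #include <vector>

        static std::string ReadFile(const std::string& path)
        {
            std::ifstream f(path.c_str(), std::ios::binary);
            std::ostringstream ss;
            ss << f.rdbuf();
            return ss.str();
        }

        // Returns the cached byte code if the flattened source is
        // unchanged; otherwise an empty vector, and the caller compiles
        // the shader and refreshes both cache files.
        std::vector<char> LoadCachedShader(const std::string& cacheName,
                                           const std::string& flatSource)
        {
            std::string cachedSource = ReadFile(cacheName + ".hlsl");
            if (!cachedSource.empty() && cachedSource == flatSource)
            {
                std::string blob = ReadFile(cacheName + ".bin");
                return std::vector<char>(blob.begin(), blob.end());
            }
            return std::vector<char>();
        }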
  5. Hi, I am trying to reduce the load time of my game. Right now I compile all the shaders at runtime, when the game is started, which takes quite some time. I use D3DX10CompileFromMemory to compile my shaders. I am thinking about writing the content of the blob returned by this function to disk the first time a shader is compiled, and re-using it the next time the game is started. That part is very straightforward. However, I would also like to automatically detect any change in the shader source, and re-compile the shader if a change occurred. So, my initial idea was to save two files: the source code version used for compilation, and the resulting compiled shader blob. At runtime, I would load the shader source file and compare it with the source version used for compilation. If there is no difference, I can use the pre-compiled blob; otherwise I compile the shader again. However, this approach gets more complicated when considering "#include" directives, along with the usage of an ID3D10Include object. The only way I see would be to manually resolve all the "#include" directives, in order to get an include-free HLSL source code I could use to compare the current version with the pre-compiled cache version. Is there any other option I haven't seen? How do you guys handle this problem?
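     To make the include problem concrete, here is a sketch of an ID3D10Include implementation that records everything it hands to the compiler, so the concatenation could be folded into the comparison (untested; no search paths or error handling):

        #include <d3d10.h>
        #include <cstring>
        #include <fstream>
        #include <sstream>
        #include <string>

        class RecordingInclude : public ID3D10Include
        {
        public:
            std::string allIncludedText; // grows on every Open()

            STDMETHOD(Open)(D3D10_INCLUDE_TYPE /*type*/, LPCSTR pFileName,
                            LPCVOID /*pParentData*/, LPCVOID* ppData,
                            UINT* pBytes)
            {
                std::ifstream f(pFileName, std::ios::binary);
                if (!f) return E_FAIL;
                std::ostringstream ss;
                ss << f.rdbuf();
                std::string text = ss.str();
                allIncludedText += text; // remember it for the comparison
                char* buf = new char[text.size()];
                memcpy(buf, text.data(), text.size());
                *ppData = buf;
                *pBytes = static_cast<UINT>(text.size());
                return S_OK;
            }

            STDMETHOD(Close)(LPCVOID pData)
            {
                delete[] static_cast<const char*>(pData);
                return S_OK;
            }
        };

     Passing an instance of this to the compile call, then comparing allIncludedText plus the top-level source against the cached copy, would catch edits inside the included files too.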
  6. In case you are interested in the import functionality only, I would recommend you also check Assimp: http://assimp.sourceforge.net/
  7. Hi guys, I am currently looking for a real-time tree rendering SDK. I am aware of SpeedTree; however, due to some restrictions in their license and their high license costs, I would like to check out other libraries that offer similar features. Are you guys aware of such competitors? Cheers, Gregory
  8. FYI, I am using boost::mutex and their implementation is quite good. I guess you don't want to introduce any dependency on Boost in the library, but it might give you some implementation inspiration ;)
  9. I would second the advice to remove all global variables; they are evil and not good practice anyway. I would suggest you first make DevIL object-oriented. That alone removes the need for these global variables and consolidates DevIL's architecture. Once the project has been better structured, you will be able to identify the pieces of code that actually need mutex protection. This might sound like a lot of work (even if not that many classes would be required, from what I know of the source code), but I think it is actually mandatory to make DevIL stable, well structured and easier to use. Also, once that is done, making it thread-safe will be much easier than in its current state. Just my personal opinion though...
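     To make that concrete, a tiny hypothetical sketch of the direction I mean (illustrative only, not actual DevIL code):

        #include <boost/thread/mutex.hpp>
        #include <string>
        #include <vector>

        // Everything that used to be a global lives in an instance now,
        // so each thread can simply own its own context.
        class ImageContext
        {
        public:
            ImageContext() : m_width(0), m_height(0) {}

            bool LoadFromFile(const std::string& path)
            {
                // A lock is only needed where an instance really is shared.
                boost::mutex::scoped_lock lock(m_mutex);
                // ... decode the file into m_pixels ...
                return !path.empty();
            }

        private:
            std::vector<unsigned char> m_pixels;
            int m_width, m_height;
            boost::mutex m_mutex;
        };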
  10. What you could do is load the resources you need (textures, VBOs, etc.) into system-memory buffers only; you can do this asynchronously, in a loading thread. Once all the resources have been loaded into RAM, unload your previous module (including its GPU objects), then simply create your GPU buffers and fill them from the RAM buffers. That should be pretty fast; at least, it should be the fastest solution, apart from sharing resources between modules (i.e. re-using texture objects/VBOs), but that is nearly impossible as they would all have to be the same size...
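     A sketch of the two-phase idea (illustrative names, no error handling; assumes an extension loader such as GLEW, since glGenBuffers and friends are not in the core Windows GL headers):

        #include <GL/glew.h>
        #include <vector>

        struct PendingBuffer
        {
            std::vector<unsigned char> data; // filled by the loading thread
        };

        // Phase 2, on the thread that owns the GL context: turn the RAM
        // copy into a GPU buffer. Assumes data is non-empty.
        GLuint UploadVertexBuffer(const PendingBuffer& pending)
        {
            GLuint vbo = 0;
            glGenBuffers(1, &vbo);
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glBufferData(GL_ARRAY_BUFFER, pending.data.size(),
                         &pending.data[0], GL_STATIC_DRAW);
            glBindBuffer(GL_ARRAY_BUFFER, 0);
            return vbo;
        }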
  11. Hi, I would like to know if someone has already implemented the following paper and would like to share the source code for it, or knows of an existing open source implementation (I don't want to use ATI Tootle): Fast Triangle Reordering for Vertex Locality and Reduced Overdraw, http://www.cs.princeton.edu/gfx/pubs/Sander_2007_FTR/tipsy.pdf What do you think of this algorithm? Cheers, Gregory
  12. Hi, does anybody use the "nedmalloc" memory allocator? Has anybody tried it on Vista/VC++ and benchmarked it? Their website makes it look great; what about in reality?
  13. Hmmm, I think I spoke too quickly. I still have this light bleeding problem. The only way I have found to solve it is to set the light bleeding reduction amount to a very high value (0.8), which reduces the quality of the shadow transitions... Any ideas? Thanks!
  14. I would like to share the solution I found to my issue. First of all, I noticed this light bleeding was only visible where shadows cast by aircraft and shadows cast by buildings/trees meet; there is no visible artifact where shadows cast by buildings meet each other. This is because the difference between the aircraft Z and the building Z is large (the aircraft are quite high in the sky), so when the shadow map is blurred, the artifacts get worse. Then I re-read the part of the article that concerns light bleeding, and this time tried to understand the math (a little bit ;). The article says the light bleeding issue is proportional to Δa/Δb (figure 8-7), and that is exactly what happens in my case! Well done, Sherlock (hmm, actually everything was written in the article, so it wasn't a big victory ;) Anyway, the only solution I can see to decrease the light bleeding is to reduce the light-space Z difference between the aircraft and the buildings/ground. In my case, aircraft casting their shadow on the ground are often outside the frustum (not visible in the main view), which is why I get a high light-space Z range. I came up with a solution that remaps the light-space Z values, as shown in the drawing below: with it, I can now use a reasonable light bleeding reduction amount (0.3) and solve all the problems. Maybe this will help somebody. Anyway, thanks to everybody for your help, and I would be happy to get any feedback/suggestions!
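     For anyone finding this later, the two pieces of math involved, in sketch form (plain C++ mirroring the shader code; the reduction is the standard VSM one from the article, while the remap breakpoints are scene-specific and the constants illustrative):

        #include <algorithm>

        static float Clamp01(float v)
        {
            return std::min(std::max(v, 0.0f), 1.0f);
        }

        static float LinStep(float lo, float hi, float v)
        {
            return Clamp01((v - lo) / (hi - lo));
        }

        // Standard VSM light-bleeding reduction: cut off the low end of
        // the Chebyshev upper bound and rescale. A higher 'amount' means
        // less bleeding but harder shadow transitions (my problem at 0.8).
        float ReduceLightBleeding(float pMax, float amount)
        {
            return LinStep(amount, 1.0f, pMax);
        }

        // The remapping idea: compress the empty depth band between the
        // ground clutter (z < zLow) and the aircraft (z > zHigh), so the
        // stored depth range, and with it the article's Δa/Δb, stays small.
        float RemapLightDepth(float z, float zLow, float zHigh)
        {
            const float kBand = 0.1f; // compressed size of the empty band
            if (z <= zLow)  return z;
            if (z >= zHigh) return zLow + kBand + (z - zHigh);
            return zLow + kBand * (z - zLow) / (zHigh - zLow);
        }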