Community Reputation: 126 (Neutral)

About gjaegy

  1. gjaegy

    D3D10, effects and VS2012

    I should add that I have also tried D3D10CompileEffectFromMemory(); however, I get a different error (NOT_IMPL), so I guess that function simply isn't implemented...
  2. I am very confused at the moment. I have switched to Visual Studio 2012. I am using SpeedTree, which relies on the Effects library, and I am using D3D10. I can successfully compile the effects using D3DCompile() and the "fx_4_0" profile. However, D3D10CreateEffectFromMemory() no longer accepts the compiled effect and returns E_FAIL, with no information beyond the error code. I had a look at the Effects11 library, but it requires an ID3D11Device. Any ideas?
  3. I would second CDLOD as well; this is what I implemented. It's much easier to understand, and performance is great. TBH, I never really understood why geometry clipmaps would achieve better results than a chunk-based method (but I might be dumb ;). Also, CDLOD gives you geomorphing without any further effort, which is a very nice-to-have feature.
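Since geomorphing came up: a minimal sketch of how a CDLOD-style per-vertex morph factor can work, assuming hypothetical morphStart/morphEnd distances for a LOD ring (this is an illustration, not code from any particular implementation):

```cpp
#include <algorithm>

// Hypothetical sketch of a CDLOD-style geomorphing factor: vertices blend
// toward the coarser LOD as they approach the outer edge of their LOD ring.
// The morphStart/morphEnd thresholds are illustrative parameters.
float MorphFactor(float distance, float morphStart, float morphEnd)
{
    // 0 = full detail, 1 = fully morphed to the coarser level.
    float t = (distance - morphStart) / (morphEnd - morphStart);
    return std::clamp(t, 0.0f, 1.0f);
}

// A morphed vertex attribute (e.g. height) is then lerped between its
// fine-level and coarse-level values:
float MorphedHeight(float fineH, float coarseH, float morph)
{
    return fineH + (coarseH - fineH) * morph;
}
```

Because the factor varies smoothly with distance, LOD transitions have no visible pop, which is the "free" geomorphing benefit mentioned above.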
  4. Hi Nick, thanks for your answer. I think I have found a good trade-off. Basically, I resolve all includes myself, which results in one big HLSL source text with no #include directives left. I then check the shader cache, which contains a pair of files for each original shader: the HLSL source code and the compiled byte code. If an entry already exists in the cache, I compare the current source text with the cached source file. If the two strings are identical, I assume I can safely load the existing compiled byte code and use it. Otherwise the shader gets compiled and the result is saved to disk (both the source and the compiled byte code). It seems to work pretty well.
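The include-flattening step described above could be sketched like this; the std::map is a stand-in for whatever actually supplies included file contents (an ID3D10Include implementation in the real code), and the names are illustrative:

```cpp
#include <map>
#include <sstream>
#include <string>

// Sketch: recursively inline #include "file" directives so the cache can
// compare a single flattened source string. The std::map stands in for the
// real file source (e.g. an ID3D10Include implementation).
std::string ResolveIncludes(const std::string& source,
                            const std::map<std::string, std::string>& files)
{
    std::ostringstream out;
    std::istringstream in(source);
    std::string line;
    while (std::getline(in, line))
    {
        size_t pos = line.find("#include");
        size_t open = (pos == std::string::npos) ? pos : line.find('"', pos);
        size_t close = (open == std::string::npos) ? open : line.find('"', open + 1);
        if (close != std::string::npos)
        {
            std::string name = line.substr(open + 1, close - open - 1);
            // Recurse so nested includes are flattened too.
            out << ResolveIncludes(files.at(name), files) << '\n';
        }
        else
        {
            out << line << '\n';
        }
    }
    return out.str();
}
```

Any edit to an included file then changes the flattened string, so the plain string comparison against the cached source catches it.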
  5. Hi, I am trying to reduce the load time of my game. Right now, I compile all the shaders at runtime when the game is started, which takes quite some time. I use D3DX10CompileFromMemory to compile my shaders. I am thinking about writing the content of the blob returned by this function to disk the first time the shader is compiled, and reusing it the next time the game is started. This is very straightforward. However, I would also like to automatically detect any change in the shader source and recompile the shader if any change occurred. So my initial idea was to save two files: the source code used for compilation and the resulting compiled shader blob. At runtime, I would load the shader source file and compare it with the source version used for compilation. If there is no difference, I can use the pre-compiled blob; otherwise I compile the shader again. However, this approach gets more complicated when considering "#include" directives together with an ID3D10Include object. The only way I see would be to manually resolve all the "#include" directives in order to get an include-free HLSL source I could compare against the cached version. Is there any other option I haven't seen? How do you guys handle this problem?
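For illustration, the cache lookup described above might be sketched as follows, with CompileShader standing in for the real D3DX10CompileFromMemory call and a two-files-per-shader naming scheme ("<name>.hlsl" / "<name>.bin") that is purely an assumption:

```cpp
#include <fstream>
#include <optional>
#include <sstream>
#include <string>

// Reads a whole file, or nullopt if it does not exist.
std::optional<std::string> ReadFile(const std::string& path)
{
    std::ifstream f(path, std::ios::binary);
    if (!f) return std::nullopt;
    std::ostringstream ss;
    ss << f.rdbuf();
    return ss.str();
}

// Hedged sketch of the cache decision: reuse the stored blob only when the
// stored source is byte-identical to the current source. CompileShader is a
// placeholder for the real compiler call.
std::string LoadOrCompile(const std::string& name,
                          const std::string& currentSource,
                          std::string (*CompileShader)(const std::string&))
{
    auto cachedSource = ReadFile(name + ".hlsl");
    auto cachedBlob   = ReadFile(name + ".bin");
    if (cachedSource && cachedBlob && *cachedSource == currentSource)
        return *cachedBlob;            // cache hit: source unchanged

    std::string blob = CompileShader(currentSource);
    std::ofstream(name + ".hlsl", std::ios::binary) << currentSource;
    std::ofstream(name + ".bin",  std::ios::binary) << blob;
    return blob;
}
```

A variant would be to store only a hash of the source instead of the full text; comparing full strings, as above, just avoids any worry about hash collisions.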
  6. gjaegy

    Where is the COLLADA SDK?

    If you are only interested in the import functionality, I would recommend you also check Assimp: http://assimp.sourceforge.net/
  7. Hi guys, I am currently looking for a real-time tree rendering SDK. I am aware of SpeedTree; however, due to some restrictions in their license and their high license costs, I would like to check other libraries that offer similar features. Are you aware of such competitors? Cheers, Gregory
  8. FYI, I am using boost::mutex and its implementation is quite good. I guess you don't want to introduce a dependency on Boost in the library, but it might give you inspiration for another implementation ;)
  9. I would second the advice to remove all global variables. They are evil and not good practice anyway. I would suggest you first make DevIL object-oriented. That would remove the need for these global variables and consolidate DevIL's architecture. Once the project is better structured, you will be able to identify the pieces of code that need mutex protection. This might sound like a lot of work (even if, from what I know of the source code, not that many classes would be required), but I think it is actually mandatory to make DevIL stable, structured, and easier to use. Also, once that is done, making it thread-safe will be much easier than in its current state. Just my personal opinion, though...
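To illustrate the idea (the names below are hypothetical, not DevIL's actual API): once each image owns its own state, two threads can work on two Image objects without sharing anything, so most of the need for locking disappears:

```cpp
#include <string>
#include <vector>

// Illustrative sketch only: per-instance state replaces module-level globals
// ("current image", last error code, ...). Not the real DevIL interface.
class Image
{
public:
    bool Load(const std::vector<unsigned char>& bytes)
    {
        if (bytes.empty()) { m_lastError = "empty input"; return false; }
        m_pixels = bytes;          // actual decoding elided in this sketch
        m_lastError.clear();
        return true;
    }
    const std::string& LastError() const { return m_lastError; }
    size_t SizeInBytes() const { return m_pixels.size(); }

private:
    std::vector<unsigned char> m_pixels;  // was: a global "current image"
    std::string m_lastError;              // was: a global error code
};
```

Only the genuinely shared pieces (e.g. a codec registry) would still need a mutex after such a refactoring.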
  10. What you could do is load the resources you need (textures, VBOs, etc.) into system-memory buffers only; you can do this asynchronously in a loading thread. Once all the resources have been loaded into RAM, unload your previous module (including its GPU objects), then simply create your GPU buffers and fill them from your RAM buffers. That should be pretty fast; at least it should be the fastest solution (apart from sharing resources between modules, i.e. re-using texture objects/VBOs, but that is nearly impossible as they would all have to be the same size...)
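A hedged sketch of this two-phase scheme, with the disk I/O and the GPU upload both simulated (all names are illustrative):

```cpp
#include <string>
#include <thread>
#include <vector>

// Phase 1 fills system-memory buffers on a worker thread; phase 2 runs on
// the main thread and creates/fills the (here simulated) GPU objects.
struct Resource
{
    std::string name;
    std::vector<unsigned char> ramCopy;   // filled by the loader thread
    bool onGpu = false;                   // set on the main thread
};

void LoadAllToRam(std::vector<Resource>& resources)
{
    std::thread loader([&resources] {
        for (auto& r : resources)
            r.ramCopy.assign(64, 0xAB);   // stands in for disk I/O + decode
    });
    loader.join();                        // wait until everything is in RAM
}

void UploadAllToGpu(std::vector<Resource>& resources)
{
    // Main thread only: create GPU objects and fill them from RAM.
    for (auto& r : resources)
        r.onGpu = !r.ramCopy.empty();
}
```

Keeping all GPU calls on the main thread is what makes the loading thread safe even with an API that has no multithreaded device access.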
  11. Hi, I would like to know if someone has already implemented the following paper and would like to share the source code for it, or knows of an existing open-source implementation (I don't want to use ATI Tootle): Fast Triangle Reordering for Vertex Locality and Reduced Overdraw, http://www.cs.princeton.edu/gfx/pubs/Sander_2007_%3ETR/tipsy.pdf What do you think of this algorithm? Cheers, Gregory
  12. Hi, does anybody use the "nedmalloc" memory allocator? Has anybody tried it on Vista/VC++ and benchmarked it? From their website it looks great; what about in reality?
  13. gjaegy

    issue with PSVSM

    Hmmm, I think I spoke too quickly. I still have this light bleeding problem. The only way of solving it is to set the light bleeding reduction amount to a very high value (0.8), which degrades the quality of the shadow transition... Any ideas? Thanks!
  14. gjaegy

    issue with PSVSM

    I would like to share the solution I found for my issue. First of all, I noticed this light bleeding was only visible where shadows cast by aircraft and shadows cast by buildings/trees meet; there is no visible artifact where shadows cast by buildings meet each other. This is because the difference between the aircraft Z and the building Z is large (the aircraft are quite high in the sky), so when blurred, the artifacts get worse. Then I re-read the part of the article that concerns light bleeding, and this time tried to understand the math (a little bit ;). The article says the light bleeding is proportional to Δa/Δb (figure 8-7), and that is exactly what happens in my case! Well done, Sherlock (hmm, actually everything was written in the article, so it wasn't a big victory ;). Anyway, the only solution I can see to decrease the light bleeding is to reduce the light-space Z difference between the aircraft and the buildings/ground. In my case, aircraft casting their shadow on the ground are often outside the frustum (not visible in the main view), which is why I get a large light-space Z range. I came up with a solution that remaps the light-space Z values, as shown on the drawing below: it seems that I can now use a reasonable light bleeding reduction amount (0.3) and solve all the problems. Maybe this will help somebody. Anyway, thanks to everybody for your help, and I would be happy to get any feedback/suggestions!
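For what it's worth, the remapping idea could be sketched as a piecewise-linear function that compresses the (assumed) empty depth interval between the casters' band and the receivers' band, which shrinks the delta that drives VSM light bleeding. All band boundaries and the compression factor below are illustrative values, not taken from the post:

```cpp
#include <algorithm>

// Sketch of a monotonic light-space Z remap: depths up to bandEnd are kept,
// the mostly-empty interval (bandEnd, gapEnd] is squeezed by `compression`,
// and everything beyond gapEnd is shifted down accordingly.
float RemapZ(float z, float bandEnd, float gapEnd, float compression)
{
    if (z <= bandEnd)
        return z;                                  // caster band: unchanged
    float gap = std::min(z, gapEnd) - bandEnd;     // portion inside the gap
    float rest = std::max(z - gapEnd, 0.0f);       // portion past the gap
    return bandEnd + gap * compression + rest;     // gap squeezed, rest shifted
}
```

Because the mapping is monotonic, depth ordering in the shadow map is preserved; only the variance across the gap is reduced.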
