About simonjacoby

  1. Thanks for the tip, I was looking for something like that. However, I've encountered a fatal flaw with the way I was trying to do things, so I'm dropping it now and trying something else. Thanks anyway!
  2. Hi guys, I have a problem where I want to sort faces by depth so that I can render them without a depth buffer (or even a depth component in the vertex position; I just want x and y in the position). However, I still want a nice index buffer so that I don't have to treat them as separate faces, for memory (and performance, but mostly memory) reasons. Basically, I just want to make a single draw call that draws the entire mesh, knowing that it will be drawn back-to-front (overdraw is a smaller problem than the vertex processing in my specific case), and indexed so that I process as few verts as possible. Does anyone have any ideas on how to accomplish that? Thanks, Simon
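One approach to the question above (a rough sketch, all names hypothetical, not the poster's code): keep the shared vertex buffer fixed and re-sort only the index buffer each frame, using per-triangle centroid depth as the sort key. The mesh then still renders with one indexed draw call.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical sketch: sort an index buffer back-to-front by triangle
// centroid depth, leaving the shared vertex buffer untouched.
struct Vertex { float x, y, z; };  // z = view-space depth here

void SortTrianglesBackToFront(const std::vector<Vertex>& verts,
                              std::vector<uint16_t>& indices)
{
    struct Tri { uint16_t i0, i1, i2; float depth; };
    std::vector<Tri> tris;
    tris.reserve(indices.size() / 3);

    // Compute a depth key per triangle (average of its three vertices).
    for (size_t i = 0; i + 2 < indices.size(); i += 3) {
        const float d = (verts[indices[i]].z +
                         verts[indices[i + 1]].z +
                         verts[indices[i + 2]].z) / 3.0f;
        tris.push_back({ indices[i], indices[i + 1], indices[i + 2], d });
    }

    // Back-to-front: larger depth (farther away) drawn first.
    std::sort(tris.begin(), tris.end(),
              [](const Tri& a, const Tri& b) { return a.depth > b.depth; });

    // Write the reordered triangles back into the index buffer.
    for (size_t t = 0; t < tris.size(); ++t) {
        indices[t * 3 + 0] = tris[t].i0;
        indices[t * 3 + 1] = tris[t].i1;
        indices[t * 3 + 2] = tris[t].i2;
    }
}
```

The sort is O(n log n) per frame on the CPU, but the vertex data never moves, so memory stays compact and each vertex is still processed once.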
  3. Hi all, I'm having a PIX issue that's confusing me. It only happens if I select "Single frame capture whenever F12 is pressed", and only while the application is running. If I capture a frame and look at it, all the API calls are there, and the data and rendering are correct for each call. The problem is that while the application is running, the rendering becomes complete garbage: primitives become distorted, UVs stretched, and draw calls seem to be interrupted mid-call (such as text rendering stopping after a couple of characters). When running it from Visual Studio, I'm using the debug libs, I've #define'd D3D_DEBUG_INFO before the includes, cranked the D3D control panel up to maximum debug output level and maximum validation, enabled break on memory leaks, break on D3D9 error and shader debugging, and I've also run it through the REF rasterizer. Not a single warning (yes, I've eliminated all redundant state setting), and running through REF looks perfect. I've run it on both AMD and Nvidia cards, and the behaviour is the same: no warnings, looks and works fine standalone and from Visual Studio (in both debug and release builds), garbage when running through PIX with "Single frame capture..." enabled, yet the API calls in the captured frame look fine. Thankful for any insight you might have on this.
  4. I am using the effects framework, as in "From an effect I can get a handle to a technique, from that technique..." etc ;) I use it to load effects because it's convenient, but I don't want to use the SetFloatXXX functions when actually setting the shader parameters, because of the overhead of the D3DX effect framework. Anyway, thanks for your input! Using the constant table interface will be the way to go. /S
  5. Hi, is it possible to find out which register a shader parameter is mapped to, without using ID3DXConstantTable, and without explicitly assigning a register to the shader parameter? I'd like to load shaders as effects, but not use the effect framework to update my shader constants. From an effect I can get a handle to a technique, from that technique I can get a handle to a pass, and from that pass I can get the vertex/pixel shader function being used. From that I can get a constant table. Using the constant table I can get a handle to a constant by its name, and using that handle I can get a D3DXCONSTANT_DESC, which has a RegisterIndex member containing the index of the register it's mapped to. This just seems like a really long way to go just to get the register mappings of the parameters. Is there a simpler or more direct way? Thankful for any tips! /Simon
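The chain described above might be sketched roughly like this (D3DX9; error checks omitted, the technique name "MyTechnique" is a placeholder, and this is an illustration of the steps rather than a confirmed implementation). Since shader bytecode doesn't change at runtime, the result can be cached once at load time.

```cpp
// Sketch of the effect -> technique -> pass -> shader -> constant table
// chain. All error handling omitted for brevity.
#include <d3dx9.h>

UINT GetRegisterIndex(ID3DXEffect* effect, const char* paramName)
{
    D3DXHANDLE hTech = effect->GetTechniqueByName("MyTechnique");
    D3DXHANDLE hPass = effect->GetPass(hTech, 0);

    D3DXPASS_DESC passDesc;
    effect->GetPassDesc(hPass, &passDesc);

    // Build a constant table from the vertex shader bytecode.
    ID3DXConstantTable* table = NULL;
    D3DXGetShaderConstantTable(passDesc.pVertexShaderFunction, &table);

    D3DXHANDLE hConst = table->GetConstantByName(NULL, paramName);
    D3DXCONSTANT_DESC desc;
    UINT count = 1;
    table->GetConstantDesc(hConst, &desc, &count);

    table->Release();
    return desc.RegisterIndex;  // the register the parameter is mapped to
}
```

Doing this once per shader at load time and caching the indices means the per-frame cost is just raw SetVertexShaderConstantF calls.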
  6. simonjacoby

    Logarithm with any base in PS3.0

    Great, thanks! Now that I see it, I vaguely remember that from math class years ago. Thanks again! /Simon
  7. Hi, the topic says it all. Does anyone know a trick so that I can (efficiently) perform a logarithm with any base using only the instructions available in pixel shader 3.0? Basically I want an inverse of pow(x,y). Afaik PS3.0 only has log, log10 and log2, which perform logarithms of base e, 10 and 2 respectively, but no general version that works for an arbitrary base. Any ideas are very much appreciated! /Simon
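The answer alluded to in the reply above is the change-of-base identity: log_b(x) = log2(x) / log2(b). A minimal C-style sketch of the formula (the HLSL version is the same two operations; if the base is a constant, precompute 1/log2(b) so it collapses to a single log2 and a multiply):

```cpp
#include <cmath>

// Change of base: log_b(x) = log2(x) / log2(b).
// In a PS3.0 shader this is two log2 instructions and a divide;
// for a constant base b, bake 1.0f / log2f(b) into a shader constant.
float log_base(float x, float b)
{
    return log2f(x) / log2f(b);
}
```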
  8. simonjacoby

    [SOLVED] Depth-based DOF and reflections

    So I tried it, and it works perfectly (and without raytracing or other heavy machinery). See attached screenshot. The reflection is now blurred and the water surface is still crisp. The DOF for the water reflection is just the reflection texture downsampled twice and then blended with the original reflection texture using a CoC per pixel calculated from depth. I think the reason I didn't think of this in the first place is that I assumed it would be too expensive. However, if you consider everything else going on (tonemapping, bloom, motion blur, DOF, color correction passes and whatever else you might have), then adding a simplified DOF filter at half res on top of the reflection rendering isn't that big a deal. Anyway, thanks for your replies, and happy coding! EDIT: Changed to a nicer screenshot.
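The per-pixel blend described above might look roughly like this (a sketch only; focusDepth and focusRange are hypothetical camera parameters, and the math is written in plain C++ standing in for the shader):

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical sketch of the fix described above: a circle-of-confusion
// factor computed from the *reflected* depth selects between the sharp
// reflection texture and its downsampled (blurred) copy, per pixel.
float ComputeCoC(float depth, float focusDepth, float focusRange)
{
    // 0 at the focal plane, saturating to 1 when fully out of focus.
    return std::min(std::fabs(depth - focusDepth) / focusRange, 1.0f);
}

float BlendChannel(float sharp, float blurred, float coc)
{
    return sharp + (blurred - sharp) * coc;  // lerp(sharp, blurred, coc)
}
```

Since the reflection texture is typically a fraction of the backbuffer size, the two downsample passes and the blend add very little on top of an existing post-processing chain.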
  9. simonjacoby

    [SOLVED] Depth-based DOF and reflections

    Here's a photo I found after a quick google, just for reference. Notice how the image in the reflection matches the CoC of the real world. EDIT: Sorry for spamming like this, but I just realized that I could just run the DOF post processing (or a simplified variant of it) on the reflection texture when I render it. Since the reflection texture is about a quarter of the main backbuffer, maybe performance won't be horrible. I'll try it and post my findings.
  10. simonjacoby

    [SOLVED] Depth-based DOF and reflections

    @styves: Thanks for the suggestion. I think that is in line with my thinking with the CoC per pixel. However, I realized that it won't be correct either: it will fix the reflection, but the surface itself will then become incorrectly blurred instead. The reflection on a water surface is just one example where it fails; a worse case would probably be a reflective marble floor. I realized this problem is a variant of the problem of overlapping transparent surfaces (particles, for example) having a single depth value for the CoC calculation. I started looking around, and after some digging I think that to properly solve it you have to use either raytracing or stochastic rasterization, both of which are still out of reach performance-wise. Maybe it's possible to do some hack until then. I'll try storing depth or CoC for the reflection like you suggested and see how it turns out. @Sikkpin1: I don't think so. See the attached screenshot for an example; this is from my landscape renderer. The water surface should stay crisp the way it is, but the reflection should be blurry, just like the real landscape is.
  11. Hi, I've added a standard depth-based DOF post-processing pass to my terrain renderer, and noticed a horrible artifact: the CoC for the reflection in the water is completely wrong. This is because the CoC is based on the pixel depth of the water surface, instead of the reflected surface. The problem is most apparent when the water surface is undistorted and the focal plane is directly over it. I googled around, and apparently it's a limitation of all depth-based DOF filters. I found a page that explains the problem more clearly, with images. The page also describes a solution, but unfortunately it's completely unusable for real-time renderers (basically jittering the camera position and rendering many frames). So, does anyone have a proper solution to this problem? I've been thinking about storing a CoC value in the alpha channel of each texel of the reflection, but that seems overly complex, and would probably cause problems with alpha blending other stuff. Thankful for any insight you might have! Cheers, Simon
  12. I never even considered baking a LUT, but that's a really good idea if the pow2/int conversions take a lot of cycles. Thanks again!
  13. Thanks for the tips! Now to see what the shader compiler makes of that formula ;) Cheers, Simon
  14. Hi, I need to check if individual bits are set in a byte (single channel 8-bit texture). Is it possible to do this using arithmetic instead of bit operations in PS2.0/3.0? I know there's real bit ops in 4.0/5.0, but I have PS2.0/3.0 platform constraints. Thankful for any advice! Cheers, Simon
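One classic arithmetic trick for the question above (written here in plain C as a sketch of what the shader would compute; in HLSL it maps to exp2, floor and fmod): divide by 2^bit to "shift" right, then isolate the low bit with floor and fmod. Note that an 8-bit channel sampled as 0..1 needs to be scaled by 255 first.

```cpp
#include <cmath>

// Arithmetic bit test without bit ops, PS2.0/3.0-friendly:
// floor(v / 2^bit) mod 2 is 1 if the bit is set, 0 otherwise.
float BitIsSet(float byteValue, int bit)
{
    return std::fmod(std::floor(byteValue / std::exp2((float)bit)), 2.0f);
}
```

The divide can become a multiply by a per-bit constant (1/2^bit) known at compile time, so the per-test cost is just a mul, a floor and an fmod.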
  15. simonjacoby

    Shader-Based Engine

    I would opt for option #2. It's not as bad as it sounds, because you can create your own library/utility functions that your shaders re-use, so when you create a new effect, you just #include the needed files (like you would if coding regular C/C++). You could write utility files for shadow mapping, environment mapping, lighting models and skinning (just examples, there's lots more you could write of course). You can then combine these in different ways to get new effects, and if you need custom stuff you just build upon these functions. And if a new awesome shadow mapping technique comes out, you just rewrite your shadow mapping routines and it will be applied in every effect that includes the shadow mapping utility code.

    A simple way to let the engine easily interface with different shader code is to specify pre-defined names for a standard set of parameters that most of your shaders use: for example world/view/projection transforms, and diffuse/normal map/shadow map samplers. As long as a shader uses the name you pre-defined for a variable (for example g_mWorld for a world transform), your shader framework can automatically get a handle to it and the engine can automatically update it. (Sidenote: some might argue that this is exactly the kind of thing annotations are used for, and to them I respond that not all platforms with shader support necessarily have support for effects, and simply using standardized names is a simple way to save yourself a lot of headaches :))

    Some of the more advanced commercial engines do this automatically via material editors. Many use a node-based system where each node represents a certain effect (in actuality, a small code fragment). An artist can then combine different nodes to get new effects, just by connecting them and specifying inputs (textures etc.) and outputs. The framework then generates the real shader binaries from this graph automatically, with all the combinations needed for different types and numbers of lights etc. If this is the first time you're writing shaders, a node-based system should probably not be the first thing you try. Write a straightforward code base first, and then expand it into something more general (and automated) in the future.
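The "pre-defined names" idea above can be sketched as a simple lookup table (g_mWorld etc. are the example names from the post; the enum and functions here are hypothetical engine-side code, not from any particular engine):

```cpp
#include <map>
#include <string>

// Hypothetical sketch: the engine keeps a table of well-known parameter
// names; when a shader is loaded, any parameter whose name appears in
// the table gets updated automatically each frame.
enum class AutoParam { WorldMatrix, ViewMatrix, ProjMatrix, DiffuseMap };

const std::map<std::string, AutoParam>& StandardParams()
{
    static const std::map<std::string, AutoParam> table = {
        { "g_mWorld",   AutoParam::WorldMatrix },
        { "g_mView",    AutoParam::ViewMatrix },
        { "g_mProj",    AutoParam::ProjMatrix },
        { "g_tDiffuse", AutoParam::DiffuseMap },
    };
    return table;
}

// Returns true (and fills 'out') if the parameter should be bound
// and updated automatically by the engine.
bool IsAutoParam(const std::string& name, AutoParam& out)
{
    auto it = StandardParams().find(name);
    if (it == StandardParams().end()) return false;
    out = it->second;
    return true;
}
```

At shader load time the engine walks the shader's parameters, calls IsAutoParam on each name, and stores the matching handles; everything else is treated as a material-specific parameter.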