WizardOfOzzz
  1. WizardOfOzzz

    Directx ddx/ddy

    The Nvidia derivatives are just one type in use today. ATI cards actually use only three values (the pixels around the top-left corner of the quad) to determine the derivatives for the entire quad. There is an in-depth discussion of this in the GPU Pro 2 book chapter "Shader Amortization using Pixel Quad Message Passing". That chapter also discusses how you can use the Nvidia-style derivatives to do 1/4 of the work in some cases (e.g. 1/4 the shadow map samples), as sketched below. For fine/coarse derivatives, I figure those are there to distinguish between the two types (Nvidia = fine, ATI = coarse). To get even more accurate derivatives you would need to look at values outside the pixel quad, which would seem to be quite a challenge given that quads are rendered in parallel.
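    For concreteness, here is a minimal sketch of the quad message passing idea from that chapter, assuming shader model 5 (which exposes ddx_fine/ddy_fine) and an SV_Position input; SampleShadowOnce is a hypothetical per-pixel shadow tap. Each pixel recovers its three quad neighbours' values from fine derivatives, so four pixels can share four samples while taking only one each:

    ```hlsl
    // Sketch: average a per-pixel value across the 2x2 quad using fine
    // derivatives (SM5). Each pixel computes one value; the quad shares four.
    float QuadAverage(float v, float4 svPos)
    {
        uint2 q = uint2(svPos.xy) & 1;        // position within the 2x2 quad

        // Fine derivatives are per-row (ddx) / per-column (ddy) differences,
        // so adding or subtracting them yields a neighbour's value.
        float sx = (q.x == 0) ? 1.0 : -1.0;
        float sy = (q.y == 0) ? 1.0 : -1.0;

        float hN = v  + sx * ddx_fine(v);     // horizontal neighbour
        float vN = v  + sy * ddy_fine(v);     // vertical neighbour
        float dN = hN + sy * ddy_fine(hN);    // diagonal neighbour

        return 0.25 * (v + hN + vN + dN);
    }

    // Usage (hypothetical): one jittered shadow tap per pixel, four shared.
    // float shadow = QuadAverage(SampleShadowOnce(uv), svPos);
    ```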
  2. WizardOfOzzz

    ddx_coarse in D3D11?

    Hey guys, I was reading over the docs in the D3D11 preview. One interesting thing I noticed was the addition of two separate derivative instructions. There is now ddx/ddy and ddx_coarse/ddy_coarse. Unfortunately they don't specify how they are different. Any ideas out there? I'm figuring the current ddx/ddy are already coarse (computed on 2x2 pixel blocks), so maybe they have figured out how to use a central differencing scheme or something instead? Seems like that would cost a lot in terms of the extra fragments needed to compute such a derivative. Any ideas? Eric
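    For reference, the variants that eventually shipped in shader model 5 behave roughly the way the fine/coarse split described in the first post above suggests. A comment-level contrast (the shader body is illustrative only):

    ```hlsl
    float4 PS(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
    {
        // ddx_coarse/ddy_coarse: one derivative pair may be computed for the
        // whole 2x2 quad, so all four pixels can see the same value.
        float2 dCoarse = float2(ddx_coarse(uv.x), ddy_coarse(uv.x));

        // ddx_fine/ddy_fine: ddx is computed per row and ddy per column of
        // the quad, so the two rows/columns can see different values.
        float2 dFine = float2(ddx_fine(uv.x), ddy_fine(uv.x));

        // Plain ddx/ddy leave the choice to the implementation.
        float2 dAny = float2(ddx(uv.x), ddy(uv.x));

        return float4(abs(dCoarse - dFine), abs(dAny.x - dFine.x), 1.0);
    }
    ```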
  3. WizardOfOzzz

    directX10 possible on XP?

    Interestingly, in the DX11 technical preview they mention supporting limited features of D3D10 and D3D11 on D3D9-class hardware. It sounds like they are moving back to something like the old system of CAPS bits, except it will be all-or-nothing levels of support. I'm not sure if this will carry back to XP, but I certainly hope so. I think a huge issue with adopting D3D10 is the requirement of writing a rendering abstraction just to support XP! Especially considering how badly Vista has flopped so far. Smaller developers or non-game developers just don't have the resources for targeting a bunch of APIs. Eric
  4. WizardOfOzzz

    HDR, tone mapping and gamma

    Hey. I'm not familiar with Reinhard's tone mapping, but I have used other tone mapping algorithms that do bake in the gamma correction. If that is the case you definitely should not apply gamma correction. Eric <Correction> I didn't see Wolf's post, so it looks like you do need it. It all depends on the assumptions of the algorithm.
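    For what it's worth, a minimal sketch of the distinction, assuming the basic Reinhard operator x/(1+x) (which works in linear light and does not bake in gamma); HdrScene and LinearSampler are placeholder resources:

    ```hlsl
    Texture2D    HdrScene      : register(t0);
    SamplerState LinearSampler : register(s0);

    float3 ToneMapReinhard(float3 c)
    {
        return c / (1.0 + c);   // output is still linear light
    }

    float4 PS(float2 uv : TEXCOORD0) : SV_Target
    {
        float3 ldr = ToneMapReinhard(HdrScene.Sample(LinearSampler, uv).rgb);

        // Only encode gamma here if neither the tone mapper nor the render
        // target (e.g. an sRGB format) already applies it - otherwise you
        // would be correcting twice.
        return float4(pow(ldr, 1.0 / 2.2), 1.0);
    }
    ```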
  5. Do you have any other knowledge of how the images were created? E.g. the focal length of each image? For the most part I agree with your approach, but I can think of some difficult cases. For example, think of a foreground object over a background object. When the background is in focus, the foreground image will blur over the background. When the foreground is in focus, the edge between foreground and background will be in focus, but the background itself will be out of focus. I think for the purposes of the assignment your approach is as valid as one could expect, as solving the above problem would really require seeing around the foreground object, which isn't easy! Eric
  6. WizardOfOzzz

    Does D3D10 Support trilinear PCF?

    It definitely supports bilinear PCF, but I'm wondering about trilinear from a mipmapped/volume texture, or even multi-sample anisotropic PCF. It definitely looks like the API supports it, but I was hoping someone had tried it out on some of the latest hardware.
  7. I was looking at the documentation and it looks like you can specify MIN_MAG_MIP_LINEAR for a sampler and then sample with SampleCmp. Would this result in tri-linear percentage closer filtering? (assuming you have a mip-mapped depth buffer). Has anyone tried this out? I'm still using D3D9, but I might be making the switch soon.
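    For anyone trying the same thing, this is roughly what the shader side would look like (untested; the filter itself is set from the API, where the comparison equivalent is COMPARISON_MIN_MAG_MIP_LINEAR):

    ```hlsl
    Texture2D<float>       ShadowMap : register(t0);
    SamplerComparisonState ShadowCmp : register(s0);
    // App side: Filter = COMPARISON_MIN_MAG_MIP_LINEAR on a mip-mapped
    // depth texture, ComparisonFunc = LESS_EQUAL.

    float ShadowFactor(float3 shadowPos)  // xy = shadow UV, z = receiver depth
    {
        // SampleCmp picks mip levels from the UV derivatives, performs the
        // depth comparison per tap, then filters the comparison results
        // (PCF). With a linear mip filter this should blend PCF across two
        // mip levels, i.e. trilinear PCF.
        return ShadowMap.SampleCmp(ShadowCmp, shadowPos.xy, shadowPos.z);
    }
    ```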
  8. WizardOfOzzz

    Quadbuffered stereo in D3D10 ?

    Check this out for hacking the DWM: http://siwu.info/66/hacking-into-vistas-desktop-window-manager-dwm.html I also hooked the entire d3d9.dll for a display wall system: www.cpsc.ucalgary.ca/~pennere/projects Eric Penner
  9. Thanks Sc4Freak. I have scoured the net and can't find any documentation about texture filtering precision in D3D10. I know there are constraints for 32-bit textures but I can't find out for lower. The reason for doing this is memory considerations. I'm doing volume rendering with volumes up to 512^3. I have some data I want to store in the volume but it doesn't need high precision. I need at least 11 bits for the primary data, so if I could use RG for that and B for the other stuff I would save 128MB of memory (compared to one 16-bit and one 8-bit texture)! Eric PS> I noticed D3D has a 1-bit texture... That would be cool to use too, so I wonder what the precision is for that. Gotta find some time to test this stuff!
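    To make the packing idea concrete: one plausible reading (my assumption, not stated in the post) is a 16-bit B5G6R5-style format, with the 11 primary bits split as 5 in R and 6 in G, leaving B's 5 bits for the other data. A hypothetical decode, with one big caveat: splitting one value across channels only round-trips under point sampling, since linear filtering blends the halves independently and breaks the carries between them.

    ```hlsl
    Texture3D<float4> Volume   : register(t0);
    SamplerState      PointSmp : register(s0);  // MIN_MAG_MIP_POINT assumed

    float2 SampleVolume11(float3 uvw)
    {
        float4 t = Volume.Sample(PointSmp, uvw);
        uint hi = (uint)round(t.r * 31.0);           // 5 high bits
        uint lo = (uint)round(t.g * 63.0);           // 6 low bits
        float primary = ((hi << 6) | lo) / 2047.0;   // 11-bit value in [0,1]
        return float2(primary, t.b);                 // t.b = 5-bit extra data
    }

    // Memory math: 512^3 = ~134M texels, so each byte per texel costs 128MB.
    // One 2-byte volume vs. a 16-bit plus an 8-bit volume (3 bytes) saves
    // exactly the 128MB mentioned above.
    ```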
  10. WizardOfOzzz

    dynamic frametime LOD

    This brings me to a related question. Is there a way to make the GPU stall if an operation doesn't take long enough? Then you could do something like:

      Tb = 0
      Frame 1: Draw scene and env map A, taking at least Tb, but query real frame time Ta
      Frame 2: Draw scene and env map B, taking at least Ta, but query real frame time Tb
      Repeat

    That would give consistent frame times, and there would only be a one-frame lag before the time is adjusted for lower scene complexity etc. This is really important for other systems in the game, like the physics engine. Unfortunately, I don't know of any mechanism to perform that type of stall/timing operation without costly GPU/CPU synchronization. Eric
  11. WizardOfOzzz

    dynamic frametime LOD

    Regarding adjusting LODs based on frame time, I would just use some type of worst-case test scene and then recommend a quality setting. The example in Oblivion with lots of trees is usually solved during level design by sticking to a budget for static objects. In a multi-player game it gets tricky because you can't control things as easily, so I suppose that would be the one time you might want to tweak the LOD. Eric
  12. This is a really common problem. You can potentially do the following: render the transparent objects a second time into the depth buffer. Then you will only blur the closest object. Depending on how transparent it is, that might not look perfect. If you have something like a billboard then it will look awful. For billboards or objects that have an alpha channel, use the LOD bias instead as you render them into the scene. So the final process could be something like (see the sketch after this list for the billboard step):
      - Render your opaque objects
      - Sort/render translucent objects
      -- If the object is a billboard then use mip-lod-bias based on depth
      - Render translucent objects again into the depth buffer
      -- Excluding billboards etc. from above
      - Post-process DOF
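    A sketch of the billboard step above ("use mip-lod-bias based on depth"): biasing toward blurrier mip levels fakes DOF for alpha-blended quads without touching the depth buffer. The depth-to-bias mapping and the FocalDepth/FocusRange parameters are placeholders, not anything from the original post:

    ```hlsl
    Texture2D    BillboardTex : register(t0);
    SamplerState TrilinearSmp : register(s0);   // MIN_MAG_MIP_LINEAR

    cbuffer DofParams { float FocalDepth; float FocusRange; };

    float CocBias(float viewDepth)
    {
        // Hypothetical mapping: no bias at the focal plane, up to +4 mip
        // levels when far out of focus.
        return 4.0 * saturate(abs(viewDepth - FocalDepth) / FocusRange);
    }

    float4 PSBillboard(float2 uv    : TEXCOORD0,
                       float  depth : TEXCOORD1) : SV_Target
    {
        // A positive bias selects a smaller, pre-blurred mip level.
        return BillboardTex.SampleBias(TrilinearSmp, uv, CocBias(depth));
    }
    ```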
  13. WizardOfOzzz

    Doubled Shadow maps?

    That sounds kind of strange. Are you intending to just use a spotlight? If you are just using a single projective shadow map, it will only be valid for the area inside the frustum of the projection. For a spotlight you want to make sure the light is completely attenuated before reaching the edges of the shadow map. If you want an omnidirectional light then you will need to use a cube map or dual-paraboloid maps. Cheers! Eric
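    One way to guarantee the "completely attenuated before the edges" condition is to fade the light by how close the projected shadow coordinate gets to the border of the map, so nothing outside the projection frustum is ever lit. A small sketch, independent of the rest of the shadow setup:

    ```hlsl
    float EdgeAttenuation(float2 shadowUV)
    {
        // 0 at the centre of the shadow map, 1 at its border.
        float2 fromCenter = abs(shadowUV - 0.5) * 2.0;
        float edge = max(fromCenter.x, fromCenter.y);

        // Start fading at 80% of the way out; reach zero at the border
        // (the 0.8 threshold is arbitrary).
        return 1.0 - smoothstep(0.8, 1.0, edge);
    }

    // lightColor *= EdgeAttenuation(shadowPos.xy); // on top of distance falloff
    ```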
  14. Looks like they mentioned that in the previous post too... I'm out of ideas other than that. The mip-lod should work for your example of smoke, though. Here's an overview of the best solution I have seen thus far:
      - Render your opaque objects
      - Sort/render translucent objects
      -- If the object is a billboard then use mip-lod-bias based on depth
      - Render translucent objects again into the depth buffer
      -- Excluding billboards etc. from above
      - Post-process DOF
      If an object is just a translucent mesh then it might be worth rendering its depth and blurring it as if it were opaque. This will only blur the closest object, so you should only do it if it is fairly opaque. Definitely don't do it for things like billboards and the like that have an alpha channel. On another note, the basic downsample-and-mix based on depth has issues with color bleeding too. For example, a sharp foreground object will blur into a blurry background object and vice versa. The only solution to that I can think of is to use a really expensive full-screen effect that checks each pixel's depth before using it in the blur (sketched below). Eric [Edited by - WizardOfOzzz on February 8, 2008 5:02:45 PM]
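    For completeness, a sketch of that expensive depth-checked blur: each tap is down-weighted when its depth differs from the centre pixel's, which is what limits the bleeding between layers. The kernel size and the 0.05 threshold are illustrative only:

    ```hlsl
    Texture2D        SceneTex : register(t0);
    Texture2D<float> DepthTex : register(t1);
    SamplerState     PointSmp : register(s0);

    float4 DepthAwareBlur(float2 uv, float2 texelSize)
    {
        float  centerZ = DepthTex.Sample(PointSmp, uv);
        float4 sum     = 0;
        float  wSum    = 0;

        [unroll] for (int y = -2; y <= 2; ++y)
        [unroll] for (int x = -2; x <= 2; ++x)
        {
            float2 tapUV = uv + float2(x, y) * texelSize;
            float  tapZ  = DepthTex.Sample(PointSmp, tapUV);

            // Down-weight taps that come from a different depth layer.
            float w = saturate(1.0 - abs(tapZ - centerZ) / 0.05);
            sum  += SceneTex.Sample(PointSmp, tapUV) * w;
            wSum += w;
        }
        return sum / max(wSum, 1e-4);
    }
    ```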
  15. My friend ran into this problem while implementing this, and we found it quite a pain to deal with. One solution we thought of was to blur opaque objects like normal and then, when rendering translucent objects, use the mip-lod-bias DOF trick (it doesn't work on triangle edges). I figure it should work okay so long as all your translucent objects fade to completely transparent at their borders. Cheers! Eric