


Community Reputation

271 Neutral

About jameszhao00

  1. jameszhao00

    Reprojection cache precision

    Ah, thanks. I think I know why I came to the wrong conclusion in my earlier tests: I was reading from and sampling textures in the same kernel, and either that's undefined behavior or it got optimized out.
  2. jameszhao00

    Lens blur in frequency space

    Have you tried anisotropic diffusion? From experience it's fast, gives a constant-time-per-pixel variable-size blur, and isn't too hard to implement. You will need to manually add in bokeh shapes. I've also wanted to try out this adaptive manifold bilateral filter approximation; I think it could work for DoF.
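The diffusion idea above can be sketched in a few lines. This is a minimal 1D Perona-Malik-style step with made-up parameters; a real DoF blur would drive the conduction from a per-pixel circle-of-confusion rather than a fixed kappa:

```python
# Minimal 1D anisotropic (Perona-Malik style) diffusion sketch.
# kappa/dt/iterations are illustrative values, not tuned for DoF.

def diffuse(signal, iterations=5, kappa=0.2, dt=0.2):
    s = list(signal)
    for _ in range(iterations):
        out = s[:]
        for i in range(1, len(s) - 1):
            # Gradients toward left/right neighbors.
            gl = s[i - 1] - s[i]
            gr = s[i + 1] - s[i]
            # Conduction falls off across strong edges, preserving them.
            cl = 1.0 / (1.0 + (gl / kappa) ** 2)
            cr = 1.0 / (1.0 + (gr / kappa) ** 2)
            out[i] = s[i] + dt * (cl * gl + cr * gr)
        s = out
    return s

# A single spike spreads to its neighbors over the iterations.
blurred = diffuse([0.0, 0.0, 1.0, 0.0, 0.0])
```

Running more iterations in some regions than others is what gives the variable-size blur at roughly constant per-pixel cost.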
  3. jameszhao00

    Reprojection cache precision

    Thank you for the tip. I set CUDA to use unnormalized texture coordinates, though. CUDA lets you use either normalized coordinates [0.0, 1.0 - 1/N] or unnormalized coordinates [0, N-1]. CUDA's normalized coordinates seem a bit different from OpenGL/DX ones anyway: to purely sample the 2nd pixel in a 2x1 image using normalized coordinates, we hit coordinate (0.5, 0) [as anything higher goes into border behavior]. Edit: Should probably test this... as there might be some undocumented intricacies.
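My reading of the two addressing modes, sketched as nearest-neighbor fetches (this is an illustrative model of the coordinate mapping, not actual hardware behavior, which may round differently):

```python
# Sketch of nearest-neighbor texture fetches under CUDA's two
# addressing modes. Clamp-to-edge is assumed for out-of-range coords.

def fetch_unnormalized(tex, x):
    # Unnormalized: coordinate in [0, N); texel i covers [i, i+1).
    i = int(x)
    return tex[max(0, min(i, len(tex) - 1))]

def fetch_normalized(tex, u):
    # Normalized: u in [0, 1) is scaled by N, then truncated.
    n = len(tex)
    i = int(u * n)
    return tex[max(0, min(i, n - 1))]

tex = [10, 20]                                # a 2x1 image
second = fetch_normalized(tex, 0.5)           # second texel starts at u = 0.5
```

Under this model the second texel of a 2x1 image is indeed first reached at normalized u = 0.5, matching the (0.5, 0) example above.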
  4. jameszhao00

    Reprojection cache precision

    The previous frame is indexed using floating-point pixel coordinates like (300, 400). Thought about this a bit more, and I think it's some other issue on my end. Purely positive imprecision like +0.00001 shouldn't make the image drift toward the bottom right (as a pixel would only be pulling from values to the right/bottom of itself). The screen is in integer coordinates; I'm deriving the floating-point read locations from the integer screen coordinates.
  5. jameszhao00

    Reprojection cache precision

    I'm doing this reprojection in CUDA with CUDA textures (aka CUDA arrays). I'm pretty sure it's not the half-pixel-offset issue, because if I use the original screen positions (rather than the screen->world->screen computed ones) the picture is stable.
  6. Has anyone experienced major precision issues with the reprojection cache? I'm currently caching indirect lighting computations temporally, and going from screen->world->screen always gives me some minuscule precision issues in xy (around +0.0001 in pixel coordinates). I tested this by leaving the scene alone and not moving the camera: in that situation, using 100% of the cached values causes the rendered image to drift right/down. Have people experienced this before? If so, how did you mitigate it?
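The round trip loses a few ulps because the intermediate world-space values aren't exactly representable. A toy 1D version with hypothetical intrinsics (a real reprojection goes through full 4x4 view-projection matrices, where the same effect appears, and float32 on the GPU makes it much larger than float64 here):

```python
# Toy 1D screen -> world -> screen round trip to illustrate
# floating-point drift. focal/center/depth are made-up values.

focal = 1234.5
center = 640.0
depth = 3.7

def screen_to_world(px):
    return (px - center) / focal * depth

def world_to_screen(wx):
    return wx / depth * focal + center

worst = 0.0
for px in range(1280):
    px2 = world_to_screen(screen_to_world(float(px)))
    worst = max(worst, abs(px2 - px))
```

In float64 the worst-case drift here is far below a pixel; in float32 shader math, errors in the ~1e-4 pixel range described above are plausible.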
  7. jameszhao00

    HLSL Multiply High

    I wrote some test code in OpenCL, and it seems to emit 64-bit multiplies on my GPU. I went and read the HLSL assembly reference, and the instruction umul http://msdn.microsof...0(v=vs.85).aspx does exactly what I want. Is there a way to 'get at' that instruction from HLSL?
  8. jameszhao00

    HLSL Multiply High

    Can you do something similar to a multiply-high in HLSL? (mul_hi in OpenCL.) What it is: http://stackoverflow...ply-high-signed
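For reference, multiply-high is just the top 32 bits of the full 64-bit product of two 32-bit operands. A sketch of the unsigned variant (function name is mine):

```python
# Unsigned 32-bit multiply-high (what OpenCL's mul_hi computes):
# the upper 32 bits of the 64-bit product a * b.

def umul_hi(a, b):
    a &= 0xFFFFFFFF
    b &= 0xFFFFFFFF
    return ((a * b) >> 32) & 0xFFFFFFFF

# 0xFFFFFFFF * 0xFFFFFFFF = 0xFFFFFFFE00000001,
# so the high word is 0xFFFFFFFE.
hi = umul_hi(0xFFFFFFFF, 0xFFFFFFFF)
```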
  9. jameszhao00

    Line Algorithm for Ray Tracing

    What you're looking for is 3D DDA. I remember implementing this a while ago... be careful: this type of thing is extremely hard to implement well, due to precision/logic errors and debugging difficulties. Essentially we want to walk the grid while keeping track of the distance to the next grid cell in each dimension. Say, in a grid of 1x1 cells, we're at (0, 0.5) (cell (0,0)) with direction <1, 1>. The distance to the next cell in the X direction (DX) is sqrt(2), while the distance to the next cell in the Y direction (DY) is sqrt(1/2). Because DY is smaller, we go up one cell to cell (0, 1) (entering it at (0.5, 1)) and recalculate DX and DY. After we've gone up one cell, DX is now smaller, so we visit the next cell in the X direction, and so forth. You can optimize this by precomputing a whole set of values; in my implementation, dda_vals precomputes the values and dda_next walks to the next cell given the current cell/info. Please note again that this kind of thing is really frustrating/difficult to develop (my implementation is not correct in some cases), so use a preexisting intersection library if at all possible (embree is great).
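The walk above can be sketched in 2D (my own illustrative code, not the dda_vals/dda_next implementation referenced; it assumes positive direction components for brevity, and tracks the parametric distance t instead of Euclidean distance, which gives the same ordering):

```python
# 2D grid DDA (Amanatides & Woo style) over unit cells.
# Assumes dx, dy > 0; a full version handles signs and zero components.

def dda_walk(ox, oy, dx, dy, steps):
    cx, cy = int(ox), int(oy)            # starting cell
    # Parametric distance along the ray to the next vertical /
    # horizontal cell boundary.
    t_max_x = ((cx + 1) - ox) / dx
    t_max_y = ((cy + 1) - oy) / dy
    # Parametric distance spanning one full cell in each axis.
    t_delta_x = 1.0 / dx
    t_delta_y = 1.0 / dy
    cells = [(cx, cy)]
    for _ in range(steps):
        if t_max_x < t_max_y:            # X boundary is closer: step in X
            cx += 1
            t_max_x += t_delta_x
        else:                            # Y boundary is closer: step in Y
            cy += 1
            t_max_y += t_delta_y
        cells.append((cx, cy))
    return cells

# The example above: start at (0, 0.5), direction <1, 1>.
path = dda_walk(0.0, 0.5, 1.0, 1.0, 3)
```

This reproduces the walk described: from cell (0,0) the ray first steps up to (0,1), then right to (1,1).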
  10. jameszhao00

    Results on Voxel Cone Tracing

    Cone tracing voxel mipmaps means you progressively look at higher and higher mipmap levels. A higher-level mip stores an occlusion distribution built from its child voxels (not from concrete occluders). In his case, I think he's just storing an average occlusion per voxel, not something that varies by direction/position/etc.
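Those per-voxel average opacities are typically combined front-to-back along the cone with the standard alpha-compositing recurrence. A sketch (sample values are made up; a real trace would fetch them from increasing mip levels as the cone widens):

```python
# Front-to-back occlusion accumulation along a cone, as commonly
# done in voxel cone tracing.

def accumulate_occlusion(samples):
    occlusion = 0.0
    for a in samples:
        # Each new sample is attenuated by what is already occluded.
        occlusion += (1.0 - occlusion) * a
        if occlusion >= 0.99:            # early out once effectively opaque
            break
    return occlusion

occ = accumulate_occlusion([0.1, 0.25, 0.5])
```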
  11. jameszhao00

    Results on Voxel Cone Tracing

    This looks pretty cool!
    - How are you choosing your grid bounds?
    - Is the voxelization done at runtime? I assume no? ("voxelization is performed on the CPU with raytracing")
    - The intersection generated by the +X ray is injected into the -X 3D texture?
    - During the cone trace, how are you dealing with occlusion?
    - "which makes them look displaced from the scene even for glossy reflections." What does this mean? Shouldn't Eye <-> Glossy <-> Diffuse <-> Light work?
    Also, is there SSAO in those screenshots? Awesome stuff
  12. jameszhao00

    Path Tracing Weights

    By rays, do you mean a path? So are you generating only 1 path per pixel (i.e. 1 sample per pixel)? Also, a key point of RR is:
    float3 color = blah blah blah
    r = 0.3 (or some other ratio)
    if(rand() < r)
        color /= r    <---- key
    else
        terminate
    next bounce...
    Unless I'm missing some context, your weighting scheme is what everyone does as part of the standard lighting calculation.
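A runnable sketch of that key point, in a toy setting where every surviving path would contribute 1.0 (illustrative code, not anyone's renderer): dividing by the survival probability is exactly what keeps the estimator unbiased despite random termination.

```python
import random

# Russian roulette: terminate paths probabilistically, but divide
# surviving contributions by the survival probability.

def rr_estimate(n_paths, survive_p=0.3, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        contribution = 1.0
        if rng.random() < survive_p:
            contribution /= survive_p    # <---- the key compensation
            total += contribution
        # else: path terminated, contributes 0
    return total / n_paths

mean = rr_estimate(200_000)              # hovers near the true value 1.0
```

Dropping the division would bias the estimate down to survive_p times the true value.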
  13. I suggest you attempt to write a mirror/shadow-only ray tracer. Nothing beyond high school level (or college, depending on POV) math is required. This isn't true GI, but it's a very good start.
  14. Paper's at Seems extremely tricky to implement well.
  15. jameszhao00

    Path Tracing Weights

    Hmm, but the point of RR is unbiased results... whereas your weighting scheme (by the way, that doesn't look like a weighting scheme, just a simplified normal lighting-bounce calculation) stops at N bounces... what about N+1 bounces? "Russian roulette needs at least as many rays as there are depths" - this statement doesn't quite make sense.
