shiqiu1105

  1. Hi folks, I am trying to write a basic ray tracer in CUDA. What I have implemented so far is simply 1 sample per pixel, with each sample assigned to one CUDA thread, and each thread tracing its own ray. I am writing to ask about more advanced and efficient ways of doing this. For example, what's the best strategy for parallelizing all the tasks? When multiple samples per pixel are used, should I assign all of a pixel's samples to one thread, or parallelize the samples within a pixel across threads and render the pixels sequentially? I have also heard that it's better to trace rays in a breadth-first manner; why, and is there any tutorial on how to do this? Anyway, I'd appreciate any advice and ideas, thank you~
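    A minimal sketch of one layout I'm considering (all of a pixel's samples accumulated inside one thread), just so the question is concrete. Ray, generateCameraRay, and traceRay are hypothetical stand-ins for my own types and helpers, and the whole thing is meant to be compiled as a .cu file:
    [code]
    struct Ray { float3 o, d; };
    __device__ Ray generateCameraRay(int x, int y, int s); // hypothetical helper
    __device__ float3 traceRay(const Ray& r);              // hypothetical helper

    // One thread per pixel; that pixel's samples run sequentially in the thread.
    __global__ void renderKernel(float3* framebuffer, int width, int height,
                                 int samplesPerPixel) {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= height) return;

        float3 sum = make_float3(0.f, 0.f, 0.f);
        for (int s = 0; s < samplesPerPixel; ++s) {
            float3 c = traceRay(generateCameraRay(x, y, s));
            sum.x += c.x; sum.y += c.y; sum.z += c.z;
        }
        float inv = 1.f / samplesPerPixel;
        framebuffer[y * width + x] = make_float3(sum.x * inv, sum.y * inv, sum.z * inv);
    }
    [/code]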
  2. [quote name='jameszhao00' timestamp='1341496372' post='4955977'] A uniformly sampled bidirectional tracer is noisier in mostly directly lit situations. On debugging: you're on the right track with the reference image comparison. Get ready for a boatload of pain, though. I really suggest you download the Intel compiler trial; it saves a lot of debugging time (the generated code pretty much beats VC++ by 10-20% usually). For BDPT, the weights for the path configurations of a given path length must sum to 1. So we can have any weights we want, as long as they sum to 1. Some things I would try:
    - render with path length = 3 only, and compare uniform BDPT with the reference
    - render with path length = 3 only, with the 4 eye verts / 0 light verts configuration weighted to 1
    - render with path length = 3 only, with the 3 eye verts / 1 light vert configuration weighted to 1
    Also, I've found it extremely helpful to posterize the images in a photo editor (I use Paint.NET) and compare; it lets me see the lighting gradients very well. Having a crapload of asserts is extremely helpful too (check your understanding, detect undesirable NaNs/infs). I also have some stuff on my blog from when I implemented BDPT (I have not yet written about MIS BDPT). Here's my reference uniform BDPT code, by the way: [url="https://github.com/jameszhao00/lightway/blob/master/sln/lightway/lightway/bdpt.cpp"]https://github.com/j...ghtway/bdpt.cpp[/url] MIS BDPT (does not handle specular / escaped rays): [url="https://github.com/jameszhao00/lightway/blob/master/sln/lightway/lightway/bdptMIS.cpp"]https://github.com/j...way/bdptMIS.cpp[/url] My implementation requires at least 2 eye vertices (so the weights change slightly). [/quote] Hi, I want to ask you something: how do I weight the first eye vertex if I am using a pinhole camera (no intersectable area on the lens)? Veach's thesis suggests setting it to 1 / area, but in this case there is no area. Should I just set the probability of the first vertex to 1.0?
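    To check my own understanding of the "weights must sum to 1" rule, here is how I currently read the uniform weighting, as a minimal sketch (assuming every s/t split of an n-vertex path is sampled; with your at-least-2-eye-vertices requirement the number of strategies would shrink accordingly):
    [code]
    // A path with n vertices can be built from s light vertices and
    // t = n - s eye vertices, s = 0..n, i.e. n + 1 strategies in total.
    // Giving each strategy the same weight makes the weights sum to 1.
    double uniformBdptWeight(int numPathVertices) {
        int numStrategies = numPathVertices + 1;
        return 1.0 / numStrategies;
    }
    [/code]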
  3. How do you implement the SSS effect? Is it an approximation?
  4. Discussion on photon mapping.

    Does nobody know anything related to this topic?
  5. I have tried both the uniform and the MIS approach, but I am not sure I calculated the path weights correctly, so I will take a look at your implementation, thanks a lot!! I think they should converge to the same result even with the uniform approach (it just takes a lot more samples), so the color differences and noise clearly indicate there are errors. This bug has been there for nearly 8 months, OMG..
  6. Hi guys, I have been reading pbrt and writing my own ray tracer, but I have bumped into a question. pbrt implements a low-discrepancy sampler that uses (0,2)-sequences to generate high-quality samples. It also states that when a low-discrepancy pattern is used, the best image results come from a box filter (or no reconstruction filter at all). Does this mean that if I use low-discrepancy samples, it's not necessary to add any filter to my ray tracer at all? pbrt also seems to say that the low-discrepancy sampler is the best, so a filter would not be needed to get the best image quality. I also noticed pbrt saying that quasi-Monte Carlo techniques such as low-discrepancy sequences cannot be used together with Russian roulette. And is it really necessary to further shuffle the generated low-discrepancy sequence given that the samples are already scrambled? The author of pbrt says it's necessary, otherwise there will be unexpected artifacts, but I have tried skipping it and the images look just fine to me. Can someone explain these points to me? I am quite confused.
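    For reference, the kind of sampler I mean, reconstructed from memory after pbrt's scrambled (0,2)-sequence code (treat this as a sketch, not the book's exact listing):
    [code]
    #include <cstdint>

    // x dimension: base-2 radical inverse (van der Corput), digit-scrambled.
    inline float vanDerCorput(uint32_t n, uint32_t scramble) {
        n = (n << 16) | (n >> 16);                               // reverse bits of n
        n = ((n & 0x00ff00ffu) << 8) | ((n & 0xff00ff00u) >> 8);
        n = ((n & 0x0f0f0f0fu) << 4) | ((n & 0xf0f0f0f0u) >> 4);
        n = ((n & 0x33333333u) << 2) | ((n & 0xccccccccu) >> 2);
        n = ((n & 0x55555555u) << 1) | ((n & 0xaaaaaaaau) >> 1);
        n ^= scramble;                                           // random digit scramble
        return (n >> 8) / float(1 << 24);                        // top 24 bits -> [0,1)
    }

    // y dimension: second Sobol' dimension, digit-scrambled.
    inline float sobol2(uint32_t n, uint32_t scramble) {
        for (uint32_t v = 1u << 31; n != 0; n >>= 1, v ^= v >> 1)
            if (n & 1u) scramble ^= v;
        return (scramble >> 8) / float(1 << 24);
    }

    // The i-th point of the scrambled (0,2)-sequence.
    inline void sample02(uint32_t i, const uint32_t scramble[2], float sample[2]) {
        sample[0] = vanDerCorput(i, scramble[0]);
        sample[1] = sobol2(i, scramble[1]);
    }
    [/code]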
  7. Hi guys, I have been trying to get my bidirectional path tracer working for a long while, but there are always bugs that I have a hard time debugging! The result images are below; my bidirectional path tracer generates noisier images than my plain path tracer when the same number of samples is used. Bidir: [img]http://www.edxgraphics.com/uploads/8/5/2/1/8521459/3031187.jpg?623[/img] Normal path tracer: [img]http://www.edxgraphics.com/uploads/8/5/2/1/8521459/7394597.jpg?624[/img] Notice that not only is the bidir version noisier, the two also seem to converge to different colors. Any idea how to debug this? The bidir version is particularly noisy near the edges, as you can see. Or should I put up the source code so that you guys can help?
  8. Lately I have been trying to add a photon mapper to my ray tracer, taking pbrt and Jensen's book "Realistic Image Synthesis Using Photon Mapping" as my primary references. While reading, a few questions came up.
    1. pbrt's PM implementation differs a lot from Jensen's. For example, Jensen's implementation builds 2 photon maps (caustics and global illumination), while pbrt's version builds 3 (caustics, indirect lighting, and radiance). They also differ in how they partition the light transport equation and how they approximate the lighting.
    2. The way photons are shot is also different: pbrt shoots photons with a uniform distribution over the hemisphere around the normal, while Jensen uses a cosine-weighted distribution. I understand that Jensen's method wins here (it's an importance sampling issue); I just don't understand why pbrt samples the light source with a uniform distribution.
    3. The way photons are scattered is also very different. Jensen uses Russian roulette to decide whether a photon is diffusely reflected, specularly reflected, or terminated, with probabilities that depend on the surface's reflectance (which I don't know how to compute; my current guess is sketched at the end of this post, somebody please check it), while pbrt uses a totally different technique.
    4. pbrt doesn't use techniques like the "projection map".
    So I want to discuss with you folks, before I start coding, which strategies are best when implementing my own PM. BTW, now that ompf.org is gone, what's the hottest place for ray tracing geeks to ask questions? I know [url="http://igad2.nhtv.nl/ompf2/index.php"]http://igad2.nhtv.nl/ompf2/index.php[/url] is a nice replacement, but there really aren't many active people on it....
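    For question 3 above, my current guess at Jensen-style Russian roulette, with each event probability taken as the average of the corresponding reflectance's color channels (that averaging is my assumption, please correct me):
    [code]
    #include <random>

    struct Rgb { float r, g, b; };
    inline float average(const Rgb& c) { return (c.r + c.g + c.b) / 3.0f; }

    enum class PhotonEvent { DiffuseReflect, SpecularReflect, Absorb };

    // Decide a photon's fate from the surface's diffuse (kd) and specular (ks)
    // reflectances. Surviving photons get divided by their event probability
    // elsewhere, so the expected photon power stays unchanged.
    PhotonEvent scatterPhoton(const Rgb& kd, const Rgb& ks, std::mt19937& rng) {
        float pd = average(kd);   // probability of diffuse reflection
        float ps = average(ks);   // probability of specular reflection
        float xi = std::uniform_real_distribution<float>(0.f, 1.f)(rng);
        if (xi < pd)      return PhotonEvent::DiffuseReflect;
        if (xi < pd + ps) return PhotonEvent::SpecularReflect;
        return PhotonEvent::Absorb;   // terminate the photon
    }
    [/code]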
  9. I want it too. Does anyone have it?
  10. Can anyone post the content of the CD-ROM online?
  11. [quote name='griffin77' timestamp='1307730799' post='4821801'] [quote name='shiqiu1105' timestamp='1307717351' post='4821728'] I read the book [url="http://www.advancedrenderingtechniques.com/"]Advanced Lighting and Materials with Shaders[/url] recently, which I found really fascinating! It gave me huge insight into spherical harmonics lighting and ray tracing. So I'd really like to investigate the source code further, but I can't find it on Google or anywhere else. Can anyone point me in the right direction? [/quote] I do, but I don't think I can post it online. You could also try this paper, which covers a lot of the same material and includes source code: [url="http://www.research.scea.com/gdc2003/spherical-harmonic-lighting.pdf"]http://www.research....ic-lighting.pdf[/url] [/quote] It is so hard to find someone who actually has access to that code; I have searched so much. I'd really appreciate it if you could send it to my mailbox, I mean, if it won't cause you too much trouble. My address is [u][color="#0000ff"]51826272@qq.com[/color][/u]. I checked the paper you gave me; it is awesome, more detailed than the book I mentioned. I am having difficulty understanding the rotation of SH, though, and there doesn't seem to be anything about ray tracing, which I'm also interested in. Thanks a lot.
  12. I read the book [url="http://www.advancedrenderingtechniques.com/"]Advanced Lighting and Materials with Shaders[/url] recently, which I found really fascinating! It gave me huge insight into spherical harmonics lighting and ray tracing. So I'd really like to investigate the source code further, but I can't find it on Google or anywhere else. Can anyone point me in the right direction?
  13. I'm currently working on deferred shading, where I need to reconstruct position data from depth data. After doing some research, I got caught up in the relationship between normalized device coordinates and clip-space coordinates. [url="http://trac.bookofhook.com/bookofhook/trac.cgi/wiki/MousePicking"]http://trac.bookofhook.com/bookofhook/trac.cgi/wiki/MousePicking[/url] The link above says the position in normalized device space is the clip-space position divided by its w component: PN = PC.xyz / PC.w. To get the clip-space position PC we do PC = ProjectionMatrix * PV, so it seems natural that to get the view-space coordinates we should do PV = ProjectionInverse * PC. But what the link teaches is PV = ProjectionInverse * PN; it's the normalized device coordinates, not the clip-space position, that get multiplied by the inverse projection... why?
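    For reference, the reconstruction I mean looks like this (using GLM as a stand-in math library); the comment is my current guess at why the NDC version works:
    [code]
    #include <glm/glm.hpp>

    // Since PN = PC / PC.w, we get inverse(P) * PN = PV / PC.w: the true
    // view-space position up to a uniform scale. Dividing the result by its
    // own w component cancels that scale and recovers PV exactly.
    glm::vec3 reconstructViewPos(const glm::mat4& proj, const glm::vec3& ndc) {
        glm::vec4 v = glm::inverse(proj) * glm::vec4(ndc, 1.0f);
        return glm::vec3(v) / v.w;
    }
    [/code]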
  14. I'm working on a chess game in XNA, and I wonder how to combine several effects such as shadows and HDR. It sounds like a small rendering engine already. Books have covered these techniques individually, but haven't talked about how to integrate them into a whole. Any suggestions? And is XNA really that inefficient? I can do DX11 too, but I figure this isn't that big a project, so I'd rather use XNA. Now I'm kind of wavering, though, because of the inefficiency of managed code.. XNA...