nicmenz

Member
  • Content Count: 79
  • Joined
  • Last visited
  • Community Reputation: 169 Neutral

About nicmenz
  • Rank: Member
  1. Hey guys! I was thinking long and hard about how to implement realistic translucency. It doesn't even have to be real-time, but precomputation must not take more than 1-5 seconds. I know TSM and the diffuse dipole approximation, but the problem is that this scattering approximation is only plausible for convex geometry. I even implemented a combination of depth peeling and TSM, but gathering the irradiance is non-trivial then (and the diffuse dipole is no longer valid, either).

     I was thinking about the disc approach by Bunnell in GPU Gems 1 and 2, but it seems to produce a lot of artifacts. Also, it doesn't compute visibility at all, which I assume will result in scene-dependent parameter tuning in the end. My next idea was a dense set of random points in the bounding box of the object. In this approach, only points inside the object can transport energy, which would lead to a shooting approach like the ones used in radiosity or photon mapping. But since points have neither normals nor surface area, how should I compute a form factor? I pretty much gave up at this point. I am grateful for *any* idea you guys can offer me. Best, Nick
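     One idea I have not fully dismissed: if I stay on the surface instead of filling the bounding box, I could borrow the missing normals and areas from the mesh, distributing samples over the surface and treating each one as a small disc, like Bunnell does. A rough HLSL sketch of a disc-style form factor (the function name and the per-vertex area heuristic are mine, not from the chapter):

        // Disc-style form factor approximation (in the spirit of GPU Gems 2,
        // ch. 14). Assumes each emitter sample carries a normal and an area,
        // e.g. one disc per mesh vertex with a third of the area of its
        // adjacent triangles.
        float DiscFormFactor(float3 receiverPos, float3 receiverNormal,
                             float3 emitterPos,  float3 emitterNormal,
                             float  emitterArea)
        {
            float3 v     = emitterPos - receiverPos;
            float  dist2 = dot(v, v);
            float3 dir   = v * rsqrt(dist2);

            // Clamped cosines at both ends; back-facing discs contribute nothing.
            float cosR = saturate(dot(receiverNormal,  dir));
            float cosE = saturate(dot(emitterNormal, -dir));

            return emitterArea * cosR * cosE / (3.14159265 * dist2 + emitterArea);
        }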
  2. Wow, this looks good, thanks a lot!
  3. Hello everybody. In comparison to CUDA, OpenCL is often praised as being independent of platform and hardware. But the AMD FAQs say that a program compiled with their Stream technology does not run on another GPU vendor's hardware. This means that people with NVIDIA hardware would need to recompile with NVIDIA's drivers (and SDK, I guess), which is of course not an option if I want to ship professional software to customers with different hardware.

     Searching the web for this issue, I found the possibility of dynamically linking the OpenCL.dll that comes with the specific video driver. What frightened me is that apparently there have been (still are?) different calling conventions (stdcall/cdecl) arbitrarily mixed across NVIDIA/AMD and 32-bit/64-bit. Do these issues still exist in the latest drivers and SDKs? My question: is it possible, at this time, to ship software with different DLLs (32/64-bit, AMD/NVIDIA) and to dynamically link them to support heterogeneous systems? Thanks a lot, Nicolas
  4. Correct lighting always has to be computed over the entire hemisphere. If you only sample the area of the light source, surface points end up shadowed whenever the sun is occluded; that is not the case when you sample the hemisphere of all incoming directions. If I understand you right, you probably need something like this to separate important sampling points from less important ones. Can I PM you about your scattering, btw? Nick
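     For reference, a minimal HLSL sketch of hemisphere sampling with the "important directions first" idea built in: cosine-weighted samples around the normal. The function name and the basis construction are my own:

        // Cosine-weighted direction on the hemisphere around the normal N,
        // built from two uniform random numbers u1, u2 in [0,1). The pdf is
        // cos(theta)/pi, so for a Lambertian surface the Monte Carlo
        // estimator reduces to averaging the incoming radiance.
        float3 CosineSampleHemisphere(float3 N, float u1, float u2)
        {
            float r   = sqrt(u1);
            float phi = 6.2831853 * u2;

            // Sample in tangent space, z pointing along the normal.
            float3 s = float3(r * cos(phi), r * sin(phi), sqrt(1.0 - u1));

            // Orthonormal basis around N (assumes N is normalized).
            float3 up = abs(N.y) < 0.99 ? float3(0, 1, 0) : float3(1, 0, 0);
            float3 t  = normalize(cross(up, N));
            float3 b  = cross(N, t);

            return s.x * t + s.y * b + s.z * N;
        }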
  5. Hi, first of all, I know that there are plenty of threads about atmospheric scattering, but I could not find any information on how the wavelength-dependent scattering coefficients are created! I implemented the ATI paper "Rendering Outdoor Light Scattering in Real Time", where the original terrain color is attenuated by an extinction factor and an in-scattering term is added: L_o = L_terrain * Extinction(Distance) + InScattering(Distance, AngleToSun). That's pretty simple! In the function InScattering(), the Rayleigh and Mie terms have to be computed, both depending on the scalar value AngleToSun. This is simple, too.

     My problem is that I only have grayscale images so far! I know that the atmosphere scatters blue light more strongly than other wavelengths, but where do I get these numbers from? The Preetham paper has a table for 5 different wavelengths, but this didn't work out properly :-/ I would need something like float3 RayleighCoeff( float Theta ) { return ... } since the scattering parameters depend on the angle to the light source. Thanks, Nicolas
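     What I am after is something like the following sketch, where the wavelength dependence sits in an angle-independent coefficient (proportional to 1/lambda^4) and the angle dependence sits in the phase function. The RGB values below are the commonly quoted sea-level Rayleigh coefficients for roughly (680, 550, 440) nm, and the in-scattering form assumes a homogeneous atmosphere:

        // Sea-level Rayleigh scattering coefficients in 1/meters.
        static const float3 BetaRayleigh = float3(5.8e-6, 13.5e-6, 33.1e-6);

        // Rayleigh phase function; cosTheta = dot(viewDir, sunDir).
        float RayleighPhase(float cosTheta)
        {
            return (3.0 / (16.0 * 3.14159265)) * (1.0 + cosTheta * cosTheta);
        }

        // In-scattered color over a path of length s (in meters). The color
        // comes from BetaRayleigh, the angular shape from the phase function.
        float3 InScatterRayleigh(float s, float cosTheta)
        {
            float3 extinction = exp(-BetaRayleigh * s);
            return RayleighPhase(cosTheta) * (1.0 - extinction);
        }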
  6. Hi, I want to map a two-dimensional texture over a three-dimensional object, in this case a simple sphere. The texture's scale should depend on the sphere's scale. It can be understood like a stamp or brush that just depends on the sphere's position in space. I made a drawing to make myself clearer. I want to write an HLSL shader for this task. I tried to map the world coordinates to (0,1), but this didn't work. Any help is appreciated! Best, Nico
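     Roughly, this is the shader I am trying to write, as a minimal sketch: a planar projection scaled by the sphere's world-space radius (SphereCenter, SphereRadius and StampSampler are placeholders the application would set):

        // Planar projection of a 2D "stamp" texture onto the sphere along
        // the world Z axis, scaled by the sphere's radius.
        float3  SphereCenter;
        float   SphereRadius;
        sampler StampSampler;

        float4 StampPS(float3 worldPos : TEXCOORD0) : COLOR
        {
            // Remap the local XY offset from [-SphereRadius, SphereRadius]
            // to the [0,1] texture range.
            float2 local = (worldPos.xy - SphereCenter.xy) / SphereRadius; // [-1,1]
            float2 uv    = local * 0.5 + 0.5;                              // [0,1]
            return tex2D(StampSampler, uv);
        }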
  7. Thank you VERY much for this answer!! rating++
  8. Hi everybody, I have a simple question: is it possible to render points (from a single vertex buffer) with variable, user-defined sizes? In all implementations I've seen so far, the point size has to be set in the renderstate settings and seems to be fixed. Thanks, Nicolas [Edited by - nicmenz on January 15, 2010 7:59:32 AM]
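     Something like this is what I am hoping for, sketched in Direct3D 9 HLSL with the PSIZE output semantic, where the size comes from the vertex buffer (D3DDECLUSAGE_PSIZE in the vertex declaration); the names are placeholders:

        // Per-vertex point sizes via the PSIZE semantic; the
        // D3DRS_POINTSIZE renderstate is only the fixed-size fallback.
        struct VSIn  { float4 pos : POSITION; float size : PSIZE; };
        struct VSOut { float4 pos : POSITION; float size : PSIZE; };

        float4x4 WorldViewProj;

        VSOut PointVS(VSIn v)
        {
            VSOut o;
            o.pos  = mul(v.pos, WorldViewProj);
            o.size = v.size;   // screen-space point size, taken per vertex
            return o;
        }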
  9. Re: SSAO no halo artifacts. Great post, but assuming that the surface is flat seems to be a major disadvantage, especially when you include displacement maps and normal maps in the calculation of the occlusion factor (which greatly improves the appearance and detail). With high-frequency displacement maps, you will almost never have a flat surface. rating++, though.
  10. Thanks viik for your demo! Here are my results for the scene. Unfortunately, I currently pay a high price for sampling very distant pixels: if I turn off random access to the normal texture and limit the sampling range, my fps immediately doubles. I have to work on the speed; it's only 30 fps at 512x512 on a GTX 295 :-( The problem really is the texture caching :-( Regards, Nicolas
  11. Here is the atrium with the Buddha as an .x file: Sponza Buddha Scene. I use the following scene parameters in Direct3D:

      Camera Position   = (0.0f, 0.0f, 100.0f)
      FOV               = PI / 4.0 = 45 degrees
      Near Plane        = 0.0
      Far Plane         = 1000.0
      Model Translation = (-10.0f, -20.0f, 50.0f)
      Model Rotation    = (0.2f, 1.37f, 0.0f)

      Nicolas
  12. Thanks viik, I read your thread before; your results are really impressive! Do you think it is possible that we render the same scene/model and compare our results? Best, Nicolas
  13. Hi, I implemented the smart blur. At first I used Gaussian weights, but then I noticed a problem: if the depth difference is too big, the current sample is dismissed, and then the Gaussian weights no longer sum to 1.0. So I simply divided the accumulated color by the number of samples instead. Anyway, I think the smart blur works very well; here is the result.

      Here is how it works: my algorithm guarantees a statistically optimal normal offset distribution for each normal/pixel. Since this leads to artifacts, because every normal is sampled the same way, I came up with a permutation shader that varies each sample of a planar area.

      UPDATE: I just compared 8, 16 and 32 samples. The optimal distribution works much better than I thought! See how small the differences are and how much detail is preserved. I am sure that the artifacts in the 8-sample image can be removed by a larger blur kernel (you can see the two directions of the separable blur). Regards, Nicolas [Edited by - nicmenz on August 30, 2009 9:30:42 AM]
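      A condensed sketch of the blur pass as described above, showing one direction of the separable blur; it renormalizes by the weights actually used rather than dividing by the raw sample count, which amounts to the same fix. Names, kernel size and constants are placeholders, not my exact code:

         // Depth-aware ("smart") blur: samples whose depth differs too much
         // from the center pixel are rejected, and the result is divided by
         // the sum of the weights actually used, so the rejected Gaussian
         // weights cannot darken the image.
         sampler AOSampler;
         sampler DepthSampler;
         float2  TexelStep;       // e.g. float2(1.0 / 512.0, 0.0) for the horizontal pass
         float   DepthThreshold;  // maximum accepted depth difference

         static const int   R = 4;
         static const float Gauss[9] = { 0.05, 0.09, 0.12, 0.15, 0.18,
                                         0.15, 0.12, 0.09, 0.05 };

         float4 SmartBlurPS(float2 uv : TEXCOORD0) : COLOR
         {
             float centerDepth = tex2D(DepthSampler, uv).r;
             float sum    = 0.0;
             float weight = 0.0;

             for (int i = -R; i <= R; ++i)
             {
                 float2 suv = uv + i * TexelStep;
                 if (abs(tex2D(DepthSampler, suv).r - centerDepth) < DepthThreshold)
                 {
                     float w = Gauss[i + R];
                     sum    += w * tex2D(AOSampler, suv).r;
                     weight += w;
                 }
             }

             float ao = sum / max(weight, 1e-4);  // renormalize by used weights
             return float4(ao.xxx, 1.0);
         }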
  14. If you multiply values inside [-1,1] by 0.5, you get values inside [-0.5,0.5]. If you SUBTRACT an offset of 0.5, you get values in the range [-1,0], which isn't the way you address a texture! You have to ADD the 0.5 to land in [0,1].
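      In HLSL terms (ndc being the clip-space position after the perspective divide):

         // Scale from [-1,1] to [-0.5,0.5], then ADD 0.5 to land in [0,1].
         // The Y flip accounts for Direct3D's top-left texture origin.
         float2 uv = ndc.xy * float2(0.5, -0.5) + 0.5;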
  15. Hi, and thanks for the quick response! I quickly re-created the scene from your example and rendered it with my shader. Theoretically, my approach should be superior to existing techniques that use a reflection texture. As you can see in the lower figure, I am able to preserve high frequencies as well as distant occluders (in image space, of course), as can be seen between the pillars in the background and the "cube arrangement" in the front. Preserving high frequencies is even more important for such detailed geometry as the Buddha. Unfortunately, my approach produces a lot of noise. I am curious whether a depth-map-based blur can change that :-( but somehow I doubt it. Thanks for the links, I will check them out :-) Nicolas [Edited by - nicmenz on August 29, 2009 11:26:48 AM]