Everything posted by nicmenz

  1. hey guys! I was thinking long and hard about how to implement realistic translucency. It doesn't even have to be real-time, but precomputation must not take more than 1-5 seconds. I know TSM and the diffuse dipole approximation, but the problem is that this scattering approximation is only plausible for convex geometry. I even implemented a combination of depth peeling and TSM, but gathering the irradiance is non-trivial then (and the diffuse dipole isn't valid anymore). I was thinking about the disc approach by Bunnell in GPU Gems 1 + 2, but it seems to have a lot of artifacts. Also, it doesn't compute visibility at all, which I assume will result in scene-dependent parameter tuning in the end. My next idea was a dense set of random points in the bounding box of the object. In this approach, only points inside the object can transport energy, which would lead to a shooting approach such as those used in radiosity or photon mapping. But: since points have neither normals nor surface area, how should I compute a form factor? I pretty much gave up at this point. I am grateful for *any* idea you guys can offer me. best, nick
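One common answer to the form-factor question, sketched here under assumptions: treat each surface sample as a small oriented disc, in the spirit of Bunnell's GPU Gems 2 chapter, and use a point-to-disc approximation; interior points without normals could borrow the normal and area of their nearest surface sample. The function name and the example scene below are illustrative, not the chapter's actual code:

```python
import math

def disc_form_factor(p_r, n_r, p_e, n_e, area_e):
    """Approximate point-to-disc form factor (after Bunnell's disc
    approximation). p_r, n_r: receiver position and unit normal;
    p_e, n_e: emitter disc position and unit normal; area_e: disc area.
    The +area term in the denominator keeps the estimate bounded as d -> 0."""
    d = [e - r for r, e in zip(p_r, p_e)]
    dist2 = sum(c * c for c in d)
    length = math.sqrt(dist2) or 1e-6            # guard the degenerate case d = 0
    v = [c / length for c in d]                  # unit vector receiver -> emitter
    cos_r = max(0.0, sum(a * b for a, b in zip(n_r, v)))    # receiver faces emitter
    cos_e = max(0.0, -sum(a * b for a, b in zip(n_e, v)))   # emitter faces receiver
    return area_e * cos_r * cos_e / (math.pi * dist2 + area_e)

# two parallel discs of area 0.1, one unit apart, facing each other
f = disc_form_factor((0, 0, 0), (0, 0, 1), (0, 0, 1), (0, 0, -1), 0.1)
```

Summing these factors over all emitter samples gives the gather/shoot weights; visibility still has to be handled separately, as the post notes.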
  2. Wow, this looks good, thanks a lot!
  3. Hello everybody, compared to CUDA, OpenCL is often praised as being independent of platform and hardware. But the AMD FAQ says that a program compiled with their Stream technology does not run on another GPU vendor's hardware. This means that people with NVIDIA hardware need to recompile with their drivers (and SDK, I guess). This is, of course, not possible if I wanted to ship professional software to customers with different hardware. Searching the web for this issue, I found the possibility of dynamically linking the OpenCL.dll that comes with the specific video driver. What frightened me is that apparently there have been (still are?) different calling conventions (stdcall/cdecl) arbitrarily mixed across NVIDIA/AMD and 32-bit/64-bit. Do these issues still exist in the latest drivers and SDKs? My question: is it possible, at this time, to ship software with different DLLs (32/64-bit, AMD/NVIDIA) and to link them dynamically to support heterogeneous systems? Thanks a lot, Nicolas
  4. correct lighting always has to be computed over the entire hemisphere. When you only sample the area of the light source, surface points will be shadowed whenever the sun is occluded. However, this is not the case when you sample the hemisphere of all incoming directions. If I understand you correctly, you probably need something like this to separate important sampling directions from less important ones. Can I PM you about your scattering, btw? nick
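The usual first step for separating important from unimportant hemisphere directions is cosine-weighted sampling, since directions near the normal carry most of the irradiance. A minimal sketch using Malley's method (sample a disc uniformly, project up onto the hemisphere); the helper names are illustrative:

```python
import math
import random

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def normalize(u):
    l = math.sqrt(sum(c * c for c in u))
    return tuple(c / l for c in u)

def cosine_weighted_direction(n):
    """Sample a direction on the hemisphere around unit normal n with
    probability density proportional to cos(theta)."""
    r = math.sqrt(random.random())           # uniform point on the unit disc
    phi = 2.0 * math.pi * random.random()
    x, y = r * math.cos(phi), r * math.sin(phi)
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))   # project up onto the hemisphere
    # build an orthonormal basis (t, b, n) around the normal
    a = (1.0, 0.0, 0.0) if abs(n[0]) < 0.9 else (0.0, 1.0, 0.0)
    t = normalize(cross(a, n))
    b = cross(n, t)
    return tuple(x * t[i] + y * b[i] + z * n[i] for i in range(3))

d = cosine_weighted_direction((0.0, 0.0, 1.0))
```

With this density the cos(theta) factor cancels out of the estimator, so fewer samples are wasted on grazing directions.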
  5. hi, first of all, I know there are plenty of threads about atmospheric scattering, but I could not find any information on how the wavelength-dependent scattering coefficients are derived! I implemented the ATI paper "Rendering Outdoor Light Scattering in Real-Time", where the original terrain color is multiplied by an extinction factor and an inscattering factor: L_o = L_in * Extinction( Distance ) * InScattering( Distance, AngleToSun ). That's pretty simple! In the function InScattering(), Rayleigh and Mie functions have to be computed, both depending on the scalar value AngleToSun. This is simple, too. My problem is that I only have grayscale images so far! I know that the atmosphere scatters blue light more strongly than other wavelengths, but where do I get these numbers? The Preetham paper has a table for 5 different wavelengths, but this didn't work out properly :-/ I would need something like float3 RayleighCoeff( float Theta ) { return ... } since the scattering parameters depend on the angle to the light source. thanks, nicolas
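One possible way to get those numbers: the Rayleigh coefficient is a per-wavelength constant, usually computed as beta(lambda) = 8 pi^3 (n^2 - 1)^2 / (3 N lambda^4); the angle to the sun enters only through a separate, normalized phase function. A Python sketch; the refractive index, molecular density, and RGB wavelengths below are commonly used but illustrative values, not taken from the ATI paper:

```python
import math

def rayleigh_beta(lambda_m, n=1.0003, N=2.545e25):
    """Total Rayleigh scattering coefficient (1/m) for wavelength lambda_m
    in meters. n: refractive index of air, N: molecules per m^3 at sea
    level (depolarization ignored) -- assumed illustrative constants."""
    return (8.0 * math.pi ** 3 * (n * n - 1.0) ** 2) / (3.0 * N * lambda_m ** 4)

# per-channel coefficients for assumed RGB wavelengths 680 / 550 / 440 nm;
# the 1/lambda^4 falloff makes blue scatter far more than red
beta_rgb = [rayleigh_beta(l) for l in (680e-9, 550e-9, 440e-9)]

def rayleigh_phase(cos_theta):
    """Normalized Rayleigh phase function: this is where AngleToSun enters."""
    return 3.0 / (16.0 * math.pi) * (1.0 + cos_theta * cos_theta)
```

In a shader, beta_rgb would be a constant float3 and only the phase function would be evaluated per pixel.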
  6. hi, I want to map a two-dimensional texture over a three-dimensional object, in this case a simple sphere. The texture's scale should depend on the sphere's scale. This can be thought of as a stamp or brush, depending only on the position of the sphere in space. I made a drawing to make myself clearer: I want to write an HLSL shader for this task. I tried to map the world coordinates to (0,1), but this didn't work. Any help is appreciated! best, nico
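The step that usually goes wrong in this kind of mapping is the scale-and-offset: the world position has to be expressed relative to the sphere's center and radius first, and only then shifted into [0,1]. A minimal sketch of the arithmetic; the sphere parameters and the choice of the XY projection plane are assumptions, and in an HLSL pixel shader the same math would run on a float3 worldPos:

```python
def stamp_uv(world_pos, sphere_center, sphere_radius):
    """Project a world-space position onto a plane through the sphere's
    center and rescale by its radius, so the 'stamp' follows the sphere's
    position and scale."""
    # offset relative to the sphere, in units of its radius -> [-1, 1]
    u = (world_pos[0] - sphere_center[0]) / sphere_radius
    v = (world_pos[1] - sphere_center[1]) / sphere_radius
    # remap [-1, 1] -> [0, 1] for texture addressing
    return (u * 0.5 + 0.5, v * 0.5 + 0.5)

uv = stamp_uv((2.0, 5.0, 0.0), (2.0, 5.0, 0.0), 3.0)   # sphere center -> (0.5, 0.5)
```

A fixed projection plane stretches near the silhouette; triplanar blending is the usual extension if that matters.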
  7. hi everybody, I have a simple question: is it possible to render points (in a single vertex buffer) with variable, user-defined sizes? In all implementations I've seen so far, the point size has to be set in the render state settings and seems fixed. thanks, nicolas [Edited by - nicmenz on January 15, 2010 7:59:32 AM]
  8. thank you VERY much for this answer!! rating++
  9. SSAO no halo artifacts.

    great post, but assuming that the surface is flat seems to be a major disadvantage, especially when you include displacement maps and normal maps in the calculation of the occlusion factor (which greatly improves the appearance and detail). With high-frequency displacement maps, you will almost never have a flat surface. rating++, though.
  10. hi, I worked on my own implementation of SSAO. Of course I want to compare my technique to existing approaches, such as Crytek's SSAO or the StarCraft II SSAO. I found a shader on the web that uses random reflection maps just like Crytek does. This approach is shown on the right side of the image, whereas my approach is on the left. Both images were rendered with 16 samples per pixel. I blurred them with Photoshop, because I have not implemented a smart blur yet. Now my question: does the right image look like the Crytek approach? Or is it too bad? Can anyone provide me with an HLSL shader of the Crytek or SC2 SSAO for further comparison? EDIT: (image removed, see postings below) regards, Nicolas [Edited by - nicmenz on September 1, 2009 6:04:47 AM]
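For reference, the occlusion estimate this family of techniques computes can be sketched in a few lines: sample random offsets in a sphere around the pixel's view-space position and count how many samples land behind the depth buffer. This is an illustrative reconstruction of the idea, not Crytek's actual shader; depth() stands in for the depth-texture lookup:

```python
import random

def ssao_occlusion(depth, p, radius=1.0, num_samples=16):
    """Ambient occlusion estimate for one pixel at view-space position p:
    1.0 = fully open, 0.0 = fully occluded. depth(x, y) stands in for a
    depth-texture fetch; radius and sample count are illustrative."""
    occluded = 0
    for _ in range(num_samples):
        # random offset inside a cube around p, scaled by the sample radius
        o = [random.uniform(-1.0, 1.0) for _ in range(3)]
        s = (p[0] + o[0] * radius, p[1] + o[1] * radius, p[2] + o[2] * radius)
        if depth(s[0], s[1]) < s[2]:     # scene surface is closer than the sample
            occluded += 1
    return 1.0 - occluded / num_samples

# flat floor at depth 5: roughly half the sphere of samples lies below it,
# which is exactly the self-occlusion bias the random reflection map variants fight
ao = ssao_occlusion(lambda x, y: 5.0, (0.0, 0.0, 5.0))
```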
  11. thanks viik for your demo! Here are my results for the scene: unfortunately, I currently pay a high price for sampling very distant pixels. If I turn off random access to the normal texture and limit the sampling range, my fps immediately doubles. I have to work on the speed, it's only 30 fps at 512x512 on a GTX 295 :-( the problem really is the texture caching :-( regards, nicolas
  12. here is the atrium with the buddha as .x file: Sponza Buddha Scene. I use the following scene parameters in Direct3D:

    Camera Position   = ( 0.0f, 0.0f, 100.0f )
    FOV               = PI / 4.0 = 45 degrees
    Near Plane        = 0.0
    Far Plane         = 1000.0
    Model Translation = ( -10.0f, -20.0f, 50.0f )
    Model Rotation    = ( 0.2f, 1.37f, 0.0f )

    Nicolas
  13. thanks viik, I read your thread before, your results are really impressive! Do you think it would be possible for us to render the same scene/model and compare our results? Best, Nicolas
  14. hi, I implemented the smart blur. At first I used Gaussian weights, but then I noticed a problem: if the depth difference is too big, the current sample is dismissed. But then the sum of the Gaussian weights (= 1.0) isn't right anymore, so I simply had to divide the accumulated color by the number of samples. Anyway, I think the smart blur works very well, here is the result: Here is how it works: my algorithm guarantees a statistically optimal normal offset distribution for each normal/pixel. Since this leads to artifacts because each normal is sampled the same way, I came up with a permutation shader that varies each sample of a planar area. UPDATE: I just compared 8 samples, 16 samples and 32 samples. The optimal distribution works much better than I thought! See how small the differences are and how much detail is preserved: I am sure the artifacts in the 8-sample image can be removed with a larger blur kernel (you can see the two directions of the separable blur). regards, nicolas [Edited by - nicmenz on August 30, 2009 9:30:42 AM]
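A common alternative to dividing by the sample count is to renormalize by the sum of the weights that were actually accepted: rejected samples simply drop out of both the numerator and the denominator, so the remaining weights still average correctly. A one-dimensional sketch; the kernel width and depth threshold are illustrative values:

```python
import math

def smart_blur(values, depths, center, sigma=2.0, depth_eps=0.1):
    """Depth-aware Gaussian blur along one scanline: samples whose depth
    differs too much from the center are dismissed, and the result is
    divided by the sum of the *accepted* weights, not the full kernel sum."""
    total, weight_sum = 0.0, 0.0
    for i, (v, d) in enumerate(zip(values, depths)):
        if abs(d - depths[center]) > depth_eps:
            continue                          # dismiss across depth discontinuities
        w = math.exp(-((i - center) ** 2) / (2.0 * sigma * sigma))
        total += w * v
        weight_sum += w
    return total / weight_sum                 # center sample always accepted, so > 0

# a constant region blurs to itself; across the edge only same-depth samples mix
out = smart_blur([1, 1, 1, 0, 0], [0.5, 0.5, 0.5, 0.9, 0.9], center=1)
```

The center sample has zero depth difference, so the denominator can never be zero.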
  15. if you multiply values inside [-1,1] by 0.5, you get values inside [-0.5,0.5]. If you then SUBTRACT an offset of 0.5, you get values in the range [-1,0], which isn't how you address a texture! You have to ADD the 0.5.
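In code, the correct remap from a signed [-1,1] value to a [0,1] texture coordinate is scale by 0.5, then add 0.5:

```python
def to_tex_coord(v):
    """Remap a value in [-1, 1] to a texture coordinate in [0, 1]:
    halve the range, then shift it up by 0.5 (not down)."""
    return v * 0.5 + 0.5
```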
  16. hi and thanks for the quick response! I quickly re-created the scene from your example and rendered it with my shader. Theoretically, my approach should be superior to existing techniques that use a reflection texture. As you can see in the lower figure, I am able to preserve high frequencies as well as distant occluders (in image space, of course), as can be seen between the pillars in the background and the "cube arrangement" in the front. Preserving high frequencies is even more important for geometry as detailed as the buddha. Unfortunately, my approach produces a lot of noise. I am curious whether a depth-map-based blur can change that :-( but somehow I doubt it. thanks for the links, I will check them out :-) Nicolas [Edited by - nicmenz on August 29, 2009 11:26:48 AM]
  17. XNA MRT problem (solved)

    thanks a lot, really, the tool DebugView told me that I have to use my own vs_3_0-compiled vertex shader instead of the built-in XNA vs_2_0 shader! rating++ :-) nicolas
  18. hi, I write multiple outputs (depth, normals, world position) into four render targets using the following code:

    private void DrawSceneMRT( Matrix View, Matrix Projection, Effect Material,
                               RenderTarget2D DestColor, RenderTarget2D DestWorld,
                               RenderTarget2D DestNormalDepth, RenderTarget2D DestSpherical )
    {
        AssignEffect( _Scene, Material );

        GraphicsDevice.SetRenderTarget( 0, DestColor );
        GraphicsDevice.SetRenderTarget( 1, DestWorld );
        GraphicsDevice.SetRenderTarget( 2, DestNormalDepth );
        GraphicsDevice.SetRenderTarget( 3, DestSpherical );
        GraphicsDevice.Clear( ClearOptions.Target | ClearOptions.DepthBuffer, Vector4.Zero, 1, 0 );

        Matrix World = Matrix.Identity
                     * Matrix.CreateScale( _ModelScale )
                     * Matrix.CreateRotationY( _ModelRot.Y )
                     * Matrix.CreateRotationX( _ModelRot.X )
                     * Matrix.CreateRotationZ( _ModelRot.Z )
                     * Matrix.CreateTranslation( _ModelTrans );
        Matrix WorldIT = Matrix.Invert( Matrix.Transpose( World ) );
        Matrix Wvp = World * View * Projection;

        foreach( ModelMesh mesh in _Scene.Meshes )
        {
            foreach( ModelMeshPart part in mesh.MeshParts )
            {
                Effect e = part.Effect;
                e.CurrentTechnique = e.Techniques[ 0 ];
                e.Parameters[ "gWorldXf" ].SetValue( World );
                e.Parameters[ "gWorldITXf" ].SetValue( WorldIT );
                e.Parameters[ "gViewXf" ].SetValue( View );
                e.Parameters[ "gWvpXf" ].SetValue( Wvp );
                e.Parameters[ "gClipDistance" ].SetValue( 1000.0f );
                e.Parameters[ "gTexLightMap" ].SetValue( _TexLightMap );
            }
            mesh.Draw( );
        }

        GraphicsDevice.SetRenderTarget( 0, null );
        GraphicsDevice.SetRenderTarget( 1, null );
        GraphicsDevice.SetRenderTarget( 2, null );
        GraphicsDevice.SetRenderTarget( 3, null );
    }

    as you can see, I resolve the four render targets at the end of the code, so everything should be fine. But when I try to run the following code immediately after the sample above...
    private void PostSSAO( RenderTarget2D NormalDepth, Texture2D Random, RenderTarget2D Destination )
    {
        GraphicsDevice.SetRenderTarget( 0, Destination );
        GraphicsDevice.Clear( Color.Black );

        Effect e = _Crytek;
        e.CurrentTechnique = e.Techniques[ 0 ];
        e.Parameters[ "gTexNormals" ].SetValue( NormalDepth.GetTexture( ) );
        e.Parameters[ "gTexRandom" ].SetValue( Random );

        EffectPass pass = e.CurrentTechnique.Passes[ 0 ];
        e.Begin( );
        spriteBatch.Begin( SpriteBlendMode.None, SpriteSortMode.Immediate, SaveStateMode.SaveState );
        pass.Begin( );
        spriteBatch.Draw( NormalDepth.GetTexture( ), Vector2.Zero, Color.White );
        pass.End( );
        spriteBatch.End( );
        e.End( );

        GraphicsDevice.SetRenderTarget( 0, null );
    }

    ...I get an "unexpected error" in the line spriteBatch.End(). If I remove the lines pass.Begin() and pass.End(), the texture is drawn normally (but not processed, of course). Anybody got an idea? Thanks, Nicolas [Edited by - nicmenz on August 28, 2009 9:54:05 AM]
  19. XNA MRT problem (solved)

    where do I find the debug output? I never actually used the debug mode :-(
  20. XNA MRT problem (solved)

    hi, thanks for your help, it seems I used the wrong order all the time. Unfortunately, this did not solve the problem. I could fix it, but I can't believe the reason: I changed the DirectX mode from debug to release using DXCPL.EXE! Apparently the debug runtime causes XNA to stop working properly. Has anyone else encountered this problem? nicolas
  21. hi, for my new project I want to use CUDA instead of HLSL. My idea at the moment is that I render color/depth/normals to render targets as usual. Then I want to transfer these render targets to my CUDA application for post-processing. I read that in version 1.1, only vertex buffers could be shared with CUDA. Does this constraint still exist? In the CUDA SDK browser there is a post-processing example for OpenGL, but not for DirectX. Or do I have to switch to OpenGL? Thanks, Nicolas
  22. God Rays

    krokhin, your demo seems to render corrupted on my computer. I run an 8800 GTX, I guess you have an ATI card? I don't see any volumetric lighting either. What technique do you use? I am currently (desperately) trying to perform ray casting from the viewing perspective, but it doesn't work :-(
  23. God Rays

    thanks, changed :-)
  24. God Rays

    hello, I implemented this effect three weeks ago, just have a look at Volumetric Lighting with XNA. Just send me a PM with your email and I'll send you the code and any explanations. greetings, nicolas [Edited by - nicmenz on May 6, 2009 4:43:31 AM]
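The usual screen-space variant of this effect (in the spirit of the "volumetric light scattering as a post-process" approach) marches from each pixel toward the light's screen position, accumulating decayed samples of an occlusion-masked brightness buffer. A hedged sketch of the loop; the parameter values and the sample_brightness callback are illustrative stand-ins for the texture fetch:

```python
def god_rays(sample_brightness, uv, light_uv, num_samples=32, density=1.0, decay=0.95):
    """Screen-space radial blur toward the light: step from the pixel's uv
    toward light_uv, accumulating brightness samples whose weight decays
    along the ray. sample_brightness(pos) stands in for a texture fetch
    from an occlusion-masked bright-pass buffer."""
    du = ((uv[0] - light_uv[0]) * density / num_samples,
          (uv[1] - light_uv[1]) * density / num_samples)
    pos, accum, weight = list(uv), 0.0, 1.0
    for _ in range(num_samples):
        pos[0] -= du[0]
        pos[1] -= du[1]
        accum += sample_brightness(pos) * weight
        weight *= decay                   # samples further along contribute less
    return accum / num_samples

# a fully unoccluded ray toward the light accumulates a partial sum of decays
out = god_rays(lambda p: 1.0, (0.5, 0.9), (0.5, 0.1))
```

Occluders darken the brightness buffer, so blocked rays accumulate less and the familiar shafts appear for free.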
  25. hey! I was looking around the web for an impressive sun shader for the space shooter I want to write (as a background effect). Unfortunately, I did not find anything except the FX Composer "aurora" effect. But I wanted something really cool, really BRIGHT :-) In the current version I use an albedo cubemap, two interference cubemaps to simulate solar wind/heat, and the Kawase multi-resolution Gaussian blur (fxxxxking hell to implement!). So in this thread I have no serious question, I just want to show you my results so far and get some feedback if you like. The shader is not finished yet, I am currently working on refraction effects in areas of lower heat. The last thing to do is crepuscular rays to make it really, really bright :-D (Did I mention before that I like it bright?) So just check it out on YouTube, any idea is appreciated! XNA Sun Shader thanks, nick
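The Kawase blur mentioned above approximates a wide Gaussian by iterating cheap four-tap passes with growing offsets instead of one large kernel. A pure-Python sketch of a single pass on a tiny grayscale grid (grid size and offsets are illustrative; on the GPU each tap would be a bilinear fetch at a half-texel offset, which this integer version ignores):

```python
def kawase_pass(img, offset):
    """One Kawase blur pass: each output pixel averages four diagonal taps
    at the given offset, with clamped addressing at the borders. Successive
    passes with growing offsets approximate a wide Gaussian cheaply."""
    h, w = len(img), len(img[0])

    def tap(x, y):
        return img[min(max(y, 0), h - 1)][min(max(x, 0), w - 1)]

    d = offset
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            row.append(0.25 * (tap(x - d, y - d) + tap(x + d, y - d) +
                               tap(x - d, y + d) + tap(x + d, y + d)))
        out.append(row)
    return out

# an impulse spread by three passes with growing offsets
img = [[0.0] * 8 for _ in range(8)]
img[4][4] = 1.0
for off in (1, 2, 3):
    img = kawase_pass(img, off)
```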