Bucky32

Shader precision problems...


We're currently doing raycasting on the GPU, more precisely volume rendering. Our big problem right now is precision: the volume is jumpy when rendered up close (more specifically, from inside it), and it shows strange ring-like artefacts (apart from the normal and expected sampling artefacts of volume rendering).

We're rendering several passes, storing intermediate results in RGBA16F textures; FLOAT_RGB32_NV doesn't really work since it does not support GL_LINEAR. Our shaders all use floats, and where possible the data sent from the vertex shader to the fragment shader is passed in texCoords (we've seen somewhere that this gives you higher precision than sending it as color). The data is stored in 3D textures, and since it's quite big we've settled for 16F, but that should be enough and shouldn't produce the artefacts we're getting. We're using both a GeForce 6800 and a Quadro FX 4500.

So, we are wondering: what general things can be done to improve overall precision in GPGPU? Thanks in advance!

[Edited by - Bucky32 on April 6, 2006 10:26:54 PM]
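For reference, here is a minimal sketch of how an RGBA16F intermediate target like ours can be allocated with EXT_framebuffer_object (the helper name is illustrative, not from our actual code):

    // Minimal sketch (assumed setup, not the actual application code):
    // allocate an RGBA16F texture and attach it to an FBO so intermediate
    // pass results keep half-float precision while still supporting
    // GL_LINEAR filtering.
    #include <GL/glew.h>

    GLuint createFloat16Target(int width, int height, GLuint* fboOut)
    {
        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F_ARB, width, height, 0,
                     GL_RGBA, GL_FLOAT, 0);

        glGenFramebuffersEXT(1, fboOut);
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, *fboOut);
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                                  GL_TEXTURE_2D, tex, 0);
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
        return tex;
    }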

Mach banding is unavoidable, even when using a floating-point render buffer. I think some type of dithering algorithm might work, but I haven't had time to implement one.

This screenshot shows the banding problem from my volumetric renderer. It sounds like we are using the exact same techniques:
http://cwiki.org/uploads/e/e0/Untitled-1.jpg
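In case it helps, a minimal sketch of one way to set up that kind of dithering (an assumed approach, not code from either project): generate a small tiled noise texture on the CPU, then have the ray-casting fragment shader jitter each ray's start position by a random fraction of one step.

    // Build an 8-bit tiled noise texture; the fragment shader samples it with
    // window coordinates (e.g. WPOS in Cg) and offsets the ray entry point by
    // noise * stepSize, trading visible banding for less objectionable noise.
    #include <GL/glew.h>
    #include <cstdlib>
    #include <vector>

    GLuint createJitterTexture(int size)
    {
        std::vector<GLubyte> noise(size * size);
        for (int i = 0; i < size * size; ++i)
            noise[i] = static_cast<GLubyte>(std::rand() % 256);

        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE8, size, size, 0,
                     GL_LUMINANCE, GL_UNSIGNED_BYTE, &noise[0]);
        return tex;
    }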

Guest Anonymous Poster


Hi taby!

Yes, we can see signs of the same problem in our rendering, but we must be doing something else wrong, because the artifacts we have are a lot more severe.

They are hard to describe and are best seen when the volume is in motion (not in a screenshot). One of the artifacts is that, from certain angles, we can see rings that spread out on top of the volume rendering. The other is that when we're rotating inside the volume, the entire rendering output seems to shift a few pixels and then back again over two frames.

The only thing we can think of is that it must be some sort of precision problem, perhaps one that affects our direction texture, which is calculated as in the article "Acceleration Techniques for GPU-based Volume Rendering" by Krüger et al.

All help appreciated! =)

If you would like me to test it and see what I can conclude, I'd be more than happy to. I've definitely gone through my share of dead ends on this project.

Alternatively, you can check out my shader source at:
http://cwiki.org/index.php/Image:Gpuvol4.rar

Also, the most current C++/shader source is at:
http://cwiki.org/index.php/Image:Gpuvol-04-06-06.zip

I'm currently developing an IR-ROYGBIV-UV colour system for use with the Fresnel equations and Cauchy's equation. Once I've got that finished, I'll flip you a copy of the source if you like.

BTW, this is all OpenGL 2.0 code.

That's a really nice project you've got going there, taby!

We had a look at your code. It seems we aren't calculating the ray directions the same way. But we saw that you have a look_at vector in the shader that's updated via the color attribute. How do you actually calculate this vector?

This is the shader code we use for our two-pass direction calculation. In the first pass we render the cube's front faces, and in the second pass its back faces.

First pass:

struct vertex
{
    half4 position : POSITION;
};

struct fragment
{
    half4 position  : POSITION;
    half4 color     : COLOR;
    half4 texCoords : TEXCOORD0;
};

fragment main(vertex IN,
              uniform half2   screenResolution,
              uniform half4x4 modelViewProj)
{
    fragment OUT;

    // Transform position into clip space
    OUT.position = mul(modelViewProj, IN.position);

    // Create texture coords: remap the clip-space position from [-w, w]
    // to [0, w] for the projective lookup in the second pass
    OUT.texCoords = (OUT.position.xyzw + OUT.position.wwww) * 0.5;

    // Rescale texture coords to the window size (samplerRECT uses pixel coords)
    OUT.texCoords.xy *= screenResolution;

    // Per-vertex color = object-space position (the volume entry/exit point)
    OUT.color = IN.position;

    return OUT;
}




Second pass (with the same vertex shader as above still active):

struct fragment
{
    half4 color     : COLOR;
    half4 texCoords : TEXCOORD0;
};

struct pixel
{
    half4 color : COLOR;
};

pixel main(fragment IN,
           uniform samplerRECT entryTexture)
{
    pixel OUT;

    // Difference between the back-face and front-face positions of the cube
    // = (unnormalized) ray direction through the volume
    half4 ray = IN.color - texRECTproj(entryTexture, IN.texCoords);

    OUT.color.xyz = normalize(ray.xyz);

    // W component = length of the ray through the cube
    OUT.color.w = length(ray);

    return OUT;
}




As our volume renderer is part of a larger application, we can't upload it here. Instead, here are some screenshots showing some of the artifact problems.

Outside
Inside volume
Another volume

The method I used is fairly common, and is modeled after classic image-plane construction.

The uv_rig.cpp camera is essentially an orbit camera that is always looking at (0, 0, 0). To calculate the image-plane corners, I do the standard tan() ratio, with the assumption that the distance from the eye to the image plane is 1.0.

I then rotate and translate the camera's four look-at vectors to achieve the final camera position. Since I'm calculating the four look-at vectors before they are rotated, the math stays extremely simple.

Once I have the rotated/translated four corner vectors, I draw a fullscreen quad, using each corner's xyz direction as its rgb colour value. Since the image plane is planar (duh), the linear colour interpolation that occurs then generates the appropriate ray direction for each fragment.

I hope that makes sense?
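If it helps, here's a minimal sketch of that corner-ray construction (simplified from what uv_rig.cpp actually does; the names and the single-axis rotation are just for illustration):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    static Vec3 vec3(float x, float y, float z)
    {
        Vec3 v = { x, y, z };
        return v;
    }

    // Rotate v about the y axis; a real orbit camera composes yaw and pitch.
    static Vec3 rotateY(const Vec3& v, float angle)
    {
        float c = std::cos(angle), s = std::sin(angle);
        return vec3(c * v.x + s * v.z, v.y, -s * v.x + c * v.z);
    }

    // fovY is in radians, aspect = width / height. With the image plane at
    // distance 1.0 from the eye, its half-extents fall out of tan().
    void cornerRays(float fovY, float aspect, float yaw, Vec3 corners[4])
    {
        float halfH = std::tan(fovY * 0.5f);
        float halfW = halfH * aspect;

        corners[0] = vec3(-halfW, -halfH, -1.0f); // bottom-left
        corners[1] = vec3( halfW, -halfH, -1.0f); // bottom-right
        corners[2] = vec3( halfW,  halfH, -1.0f); // top-right
        corners[3] = vec3(-halfW,  halfH, -1.0f); // top-left

        for (int i = 0; i < 4; ++i)
            corners[i] = rotateY(corners[i], yaw);
    }

Each corner then goes out as the rgb colour of the matching fullscreen-quad vertex, and the fragment shader normalizes the interpolated value to get its ray direction.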

Those screenshots are amazing!
