
Radiosity Aliasing Issue


9 replies to this topic

#1 arijan   Members   -  Reputation: 104


Posted 23 August 2011 - 06:31 PM

Hi,

For the screenshot below, I ran 17 bounces on geometry with 22k polygons (in my renderer, each polygon is a patch).

Other than using smaller and more numerous polygons, what would you suggest to get a smooth output?

I've read about interpolated shading, but I already treat each polygon as a patch. Perhaps an object-space anti-aliasing, in which I use the average of the total color of a 4x4 (or larger) neighbourhood as the color value for each patch?


[screenshot of the radiosity output]



#2 Krypt0n   Crossbones+   -  Reputation: 2684


Posted 24 August 2011 - 04:33 AM

Probably the best approach would be to write those patch colors out into a texture and let the texture units do all the interpolation.
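To illustrate what the texture units would do once the patch colors are stored as texels, here is a minimal Python sketch of bilinear filtering. The `bilerp` helper and single-channel (scalar) colors are assumptions for illustration, and texel-center addressing is simplified:

```python
def bilerp(tex, u, v):
    """Sample a 2D grid of colors with bilinear filtering -- the same
    interpolation a GPU texture unit performs between neighbouring texels.
    tex is a list of rows of scalar colors; u and v are in [0, 1]."""
    h, w = len(tex), len(tex[0])
    # Map UV to texel space (real hardware samples at texel centers;
    # this sketch simplifies by mapping corners to corner texels).
    x = u * (w - 1)
    y = v * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    # Blend horizontally on the two rows, then vertically between them.
    top = tex[y0][x0] * (1 - fx) + tex[y0][x1] * fx
    bot = tex[y1][x0] * (1 - fx) + tex[y1][x1] * fx
    return top * (1 - fy) + bot * fy
```

Sampling between two patch texels yields a smooth ramp instead of the hard patch boundary, which is exactly what hides the blocky artifacts in the screenshot.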

#3 arijan   Members   -  Reputation: 104


Posted 24 August 2011 - 01:49 PM

How can I do that?

Besides, if I don't write to a texture, how can I do the interpolation?

#4 Emergent   Members   -  Reputation: 971


Posted 24 August 2011 - 08:34 PM

How can I do that ?

How exactly depends on the API, but more-or-less, each "patch" for radiosity purposes would just be a texel.

Besides, if I don't write to a texture, how can I do the interpolation?


You could also think in terms of vertices instead of polygons, and let the API interpolate the vertex colors across the triangles for you. Not that there's any reason to do this instead of texturing.
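The per-vertex alternative can be sketched the same way: the rasterizer blends vertex colors with barycentric weights. This helper (`interp_vertex_color`, a name chosen here for illustration) shows the interpolation in 2D with scalar colors:

```python
def interp_vertex_color(p, tri, colors):
    """Interpolate per-vertex colors across a triangle using barycentric
    coordinates -- what the API does for you with Gouraud-style shading.
    p: an (x, y) point; tri: three (x, y) vertices; colors: three scalars."""
    (x1, y1), (x2, y2), (x3, y3) = tri
    px, py = p
    # Barycentric weights from the standard area-ratio formula.
    denom = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (px - x3) + (x3 - x2) * (py - y3)) / denom
    w2 = ((y3 - y1) * (px - x3) + (x1 - x3) * (py - y3)) / denom
    w3 = 1.0 - w1 - w2
    return w1 * colors[0] + w2 * colors[1] + w3 * colors[2]
```

At a vertex the weight of that vertex is 1 and the others are 0, so vertex colors are reproduced exactly and blended smoothly everywhere else.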

#5 arijan   Members   -  Reputation: 104


Posted 24 August 2011 - 09:13 PM


How can I do that ?

How exactly depends on the API, but more-or-less, each "patch" for radiosity purposes would just be a texel.

You could also think in terms of vertices instead of polygons, and let the API interpolate the vertex colors across the triangles for you. Not that there's any reason to do this instead of texturing.



OK, I can easily create textures per surface in D3D9. But after that, I guess what I need to do is set the sampler state to bilinear filtering (for both minification and magnification), or linear/anisotropic filtering?

#6 arijan   Members   -  Reputation: 104


Posted 25 August 2011 - 09:47 PM

Am I on the right track ?

#7 B_old   Members   -  Reputation: 668


Posted 26 August 2011 - 12:18 AM

Am I on the right track ?

If I understand Emergent right, his last suggestion was to calculate and store radiosity per vertex. When you render those vertices later, D3D will automatically interpolate the color between them. If your scene is adequately tessellated (which seems to be the case), I believe the results can be quite decent.

Or were you asking about light maps specifically?



#8 Krypt0n   Crossbones+   -  Reputation: 2684


Posted 26 August 2011 - 07:39 AM

Vertex colors have the disadvantage that you'll draw each patch as two triangles, and the interpolation won't look proper; you'll get issues similar to Gouraud shading. It depends on what you're doing and what limitations you have: on mobile platforms that might be quite OK, on consoles/PC you might expect more.




  • Create a texture in memory first (basically a 2D array of colors).
  • Create a UV set for that texture (in a simple case you can just planar-project your surface).
  • Based on the UVs, draw the color of each patch into the texture.
  • Not strictly needed, but quality improves with a dilation step that fills the still-unfilled pixels with neighbour colors (you can repeat this step several times).
  • Some people additionally blur the texture; that can hide artifacts if your patches aren't watertight (i.e. if partially occluded patches exist).
  • Also optional, but quality improves if you create the lightmap at a higher resolution and scale it down to the desired resolution as a final step.
  • Lastly, create a device texture and copy your local texture into it.
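The dilation step mentioned above can be sketched in a few lines. This is a minimal single-pass version (the `dilate` name and the use of `None` to mark unfilled texels are assumptions for illustration); repeating the pass grows the filled region outward:

```python
def dilate(tex, empty=None):
    """One dilation pass over a lightmap: every still-unfilled texel
    (marked `empty`) takes the average of its filled 4-neighbours.
    tex is a list of rows of scalar colors; returns a new grid."""
    h, w = len(tex), len(tex[0])
    out = [row[:] for row in tex]  # copy so reads stay on the old pass
    for y in range(h):
        for x in range(w):
            if tex[y][x] is not empty:
                continue  # already filled by the patch rasterization
            neighbours = []
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and tex[ny][nx] is not empty:
                    neighbours.append(tex[ny][nx])
            if neighbours:
                out[y][x] = sum(neighbours) / len(neighbours)
    return out
```

Without dilation, the bilinear filter bleeds the empty (usually black) texels into the patch borders, which shows up as dark seams along UV chart edges.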




Here's an old-school tutorial that might help:

http://www.flipcode.com/archives/Light_Mapping_Theory_and_Implementation.shtml




You make me want to work on my radiosity calculator again http://twitpic.com/4khy8e :)

#9 rumble   Members   -  Reputation: 118


Posted 26 August 2011 - 01:49 PM

Other than increasing tessellation, would storing at the vertex level improve the problem much?

I'm reminded of spherical harmonics lighting, where you typically calculate lighting at the vertex level and let the graphics card interpolate across the triangle. Unless the lighting/shadow variation on your surfaces changes very slowly, the outlines of typical shadows become zigzaggy.

The reason is that if a shadow edge cuts through the middle of a triangle, vertex interpolation can't reproduce that edge. Might there be some work addressing this?

I'm really interested in seeing demos showcasing how good a result either radiosity or spherical harmonics can achieve, and the amount of tessellation required.
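The mid-triangle shadow-edge problem raised here can be seen in a minimal 1-D sketch (the function names are illustrative): a hard edge between two vertices collapses into a linear ramp, because interpolation can only reconstruct what the vertex samples captured.

```python
def true_shadow(x):
    """Hard shadow edge at x = 0.5: fully lit (1.0) to the left,
    fully dark (0.0) to the right."""
    return 1.0 if x < 0.5 else 0.0

def vertex_interpolated(x):
    """What the rasterizer reconstructs from samples taken only at the
    two endpoints (x = 0 and x = 1): a linear ramp that smears the
    discontinuity across the whole span instead of reproducing the edge."""
    return true_shadow(0.0) * (1 - x) + true_shadow(1.0) * x
```

At x = 0.25 the surface is actually fully lit, but the interpolated value is 0.75; in 2D this smearing is what produces the zigzag shadow outlines along triangle edges.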

#10 MJP   Moderators   -  Reputation: 11790


Posted 26 August 2011 - 05:43 PM

On a basic level, there's really not a ton of difference between storing your lighting in your vertices or in a texture. In both cases you have a sparse set of sample points covering a large set of surfaces, and you interpolate between those sample points to help alleviate the aliasing artifacts. However you need a high sample point resolution if you want to capture high-frequency lighting changes, which typically occur in shadows from direct lighting. The advantages of using textures for this are...
  • Your sample points are regularly spaced on a grid, while with vertices they are located based on the topology of the mesh
  • You can increase the sample resolution for certain surfaces without having to add more vertices, which would mean adding more position/normal/tangent/whatever data as well as increasing the number of triangles you need to render (it would also mean smaller triangles, which are less efficient to rasterize)
  • You can compress textures
  • You can mipmap textures
  • Typically you can stream textures more easily than vertex data, or you can also use them in conjunction with virtual textures
Of course it's typically easier to get up and running with per-vertex data since you don't need to parametrize all of the meshes in your scene, or deal with lightmap atlasing/packing.

Also, the resolution needed for your lighting samples depends on what lighting you're calculating. For shadows or small-scale AO you'll need quite a bit of resolution for it to look good...even with lightmaps you typically end up with pretty blurry shadows that don't match your real-time shadows. However if you don't store direct lighting and only store indirect lighting or indirect + sky lighting, you can get pretty good results once you composite in some real-time lighting with shadows. Naughty Dog did this for a lot of their outdoor scenes in Uncharted 2, with lighting stored per-vertex. I also did this with my radiosity sample. If you look at the pictures you can see the indirect + sky lighting looks pretty messy on its own, but combined with the sun lighting + shadows it ends up looking pretty nice. Using a basis like H-basis also helps a lot, since you still get some high-frequency variation from the normal maps. One of the Uncharted 2 presentations also has some pictures like that and they have similar results.

EDIT: I forgot to mention this recent publication that attempts to address issues with interpolating vertex-baked samples: http://www.ppsloan.org/publications/vb-egsr11.pdf



