
30 replies to this topic

### #1w_poons  Members   -  Reputation: 122


Posted 13 February 2007 - 02:40 AM

In the last couple of days I stumbled across a paper (can't remember the title) that mentioned that hierarchical and progressive radiosity can be combined, and here lies my problem. As far as I understood, progressive radiosity tries to reduce the number of form factor calculations that are necessary. On the other hand, hierarchical radiosity needs the form factors to decide whether an area has to be subdivided before the radiosity calculation. Isn't this a contradiction? Maybe someone can clarify my error in reasoning. Thank you!

### #2Zipster  Crossbones+   -  Reputation: 398


Posted 13 February 2007 - 03:17 AM

The difference between progressive radiosity and traditional radiosity is that progressive radiosity uses a scatter approach (light is shot out from a patch to all other patches) while traditional radiosity uses a gather approach (each patch gathers light from every other patch). The benefit of progressive radiosity is that it can be done in iterations, slowly converging toward a stable solution, and each iteration gives you a better intermediate result you can display. Usually only a few patches are emitting light in the first couple of passes, meaning that you can greatly reduce the number of transfers initially while still achieving a decent result. With the traditional radiosity approach, it's all or nothing: you need to iterate over each patch and gather light from every other patch, no exceptions. Theoretically progressive radiosity has the same O(n²) worst case, but that rarely happens in practice and can be avoided by controlling the number of iterations.
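
The shooting loop described here can be sketched roughly as follows (a single-channel toy sketch; the `Patch` layout and the constant `form_factor` are hypothetical stand-ins, not any particular paper's formulation):

```python
from dataclasses import dataclass

@dataclass
class Patch:
    emission: float          # self-emitted energy (one channel for brevity)
    reflectance: float       # diffuse reflectance in [0, 1]
    area: float
    radiosity: float = 0.0   # accumulated radiosity
    unshot: float = 0.0      # energy received but not yet shot to the scene

def form_factor(receiver: Patch, shooter: Patch) -> float:
    """Placeholder: a real implementation computes geometry and visibility."""
    return 0.1

def progressive_radiosity(patches, iterations=100, eps=1e-4):
    for p in patches:
        p.radiosity = p.unshot = p.emission
    for _ in range(iterations):
        # Pick the patch with the most unshot energy (weighted by area).
        shooter = max(patches, key=lambda p: p.unshot * p.area)
        if shooter.unshot * shooter.area < eps:
            break  # remaining unshot energy is negligible: converged
        for p in patches:
            if p is shooter:
                continue
            delta = p.reflectance * shooter.unshot * form_factor(p, shooter)
            p.radiosity += delta
            p.unshot += delta
        shooter.unshot = 0.0
```

Note how only the single brightest patch does work each iteration, which is where the early-out over the traditional all-pairs gather comes from.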

The hierarchical radiosity approach (if I understand you correctly) tries to minimize the overall number of patches by using some heuristic to dynamically determine where higher patch densities are required, usually on surfaces with high-frequency color variation. However, it should work with either scatter or gather radiosity, since those simply determine how the light is transferred, while hierarchical radiosity (which I've always seen referred to as adaptive subdivision) determines how patches are generated.

### #3w_poons  Members   -  Reputation: 122


Posted 13 February 2007 - 09:01 PM

But my problem still remains. As far as I understand it, I need the form factors to decide whether a polygon has to be subdivided or not. But on the other hand, progressive radiosity avoids calculating all the form factors; that's where the speed-up comes from.

My idea was to first perform an adaptive subdivision (maybe hierarchical radiosity was the wrong term) in areas where higher densities are needed (shadow boundaries, ...) and then start a progressive radiosity algorithm. But I need the form factors for this subdivision. Or is there another possibility?

### #4Zipster  Crossbones+   -  Reputation: 398


Posted 14 February 2007 - 02:01 AM

The trick is that you don't have to subdivide patches that aren't being lit. So what you do is perform the progressive radiosity, and as patches are being lit you decide whether to subdivide them. You don't even touch patches that aren't lit.
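
As a rough illustration of that control flow, here is a toy sketch. Everything in it is hypothetical: flat `Quad` patches on a plane, and an inverse-square `irradiance` function standing in for the real shooter-to-receiver transfer. The point is only that subdivision happens while shooting, driven by how much the received light varies across a patch:

```python
from dataclasses import dataclass

@dataclass
class Quad:
    # An axis-aligned patch on the z = 0 plane, from (x0, y0) to (x1, y1).
    x0: float
    y0: float
    x1: float
    y1: float
    level: int = 0
    radiosity: float = 0.0

    def corners(self):
        return [(self.x0, self.y0), (self.x1, self.y0),
                (self.x0, self.y1), (self.x1, self.y1)]

def irradiance(light, point):
    # Stand-in for the shooter-to-receiver transfer: simple
    # inverse-square falloff from a point light in the patch plane.
    dx, dy = point[0] - light[0], point[1] - light[1]
    return 1.0 / (1.0 + dx * dx + dy * dy)

def shoot_adaptive(light, quads, threshold, max_level=4):
    out = []
    for q in quads:
        samples = [irradiance(light, c) for c in q.corners()]
        # Only patches receiving strongly varying light are subdivided;
        # patches with near-uniform (or no) lighting are left untouched.
        if max(samples) - min(samples) > threshold and q.level < max_level:
            mx, my = (q.x0 + q.x1) / 2, (q.y0 + q.y1) / 2
            kids = [Quad(q.x0, q.y0, mx, my, q.level + 1),
                    Quad(mx, q.y0, q.x1, my, q.level + 1),
                    Quad(q.x0, my, mx, q.y1, q.level + 1),
                    Quad(mx, my, q.x1, q.y1, q.level + 1)]
            out.extend(shoot_adaptive(light, kids, threshold, max_level))
        else:
            q.radiosity += sum(samples) / 4.0
            out.append(q)
    return out
```

A patch far from the light sees nearly uniform (tiny) transfers and is never split, which is exactly the "don't even touch patches that aren't lit" behaviour.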

### #5w_poons  Members   -  Reputation: 122


Posted 14 February 2007 - 02:07 AM

OK. So I don't make the subdivision before the progressive radiosity step; instead I make the subdivision during the progressive radiosity. That's why it is called adaptive subdivision ;)

Thanks again.


### #7 Anonymous Poster   Guests


Posted 25 February 2007 - 02:39 AM

@ Zipster

Many papers say that you should subdivide if "the radiosity varies too much over the extent of a patch". What exactly does that mean? And how can I compute whether this is the case?

Thanks
gammastrahler

### #8Zipster  Crossbones+   -  Reputation: 398


Posted 25 February 2007 - 07:33 AM

I'm not sure what the standard way is (if there is one), but if I were implementing it I would keep track of the minimum and maximum light transfers for each patch as a vector. Then if length(max - min) increases beyond a certain threshold, that's my signal that there's either too much color difference or too much intensity difference on the patch.
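
That criterion could look something like the following minimal sketch. The `transfers` argument, a list of the RGB transfers sampled across one patch, is my assumption about how the bookkeeping would be kept:

```python
import math

def needs_subdivision(transfers, threshold):
    """transfers: list of (r, g, b) light transfers sampled over one patch.
    Signals subdivision when length(max - min) exceeds the threshold."""
    mins = [min(channel) for channel in zip(*transfers)]
    maxs = [max(channel) for channel in zip(*transfers)]
    return math.dist(maxs, mins) > threshold
```

A patch that receives both dark and bright samples gets flagged; a uniformly lit patch does not.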

### #9Poons  Members   -  Reputation: 122


Posted 07 March 2007 - 01:48 AM

Instead of starting a new topic I will just post the question here.

For progressive radiosity you have to determine the next shooter. This is the patch with the highest unshot energy. I'm solving the radiosity for the three color channels (R, G and B). How can I calculate the correct energy of a patch? Just adding up the values seems too easy.

### #10gamma_strahler  Members   -  Reputation: 122


Posted 07 March 2007 - 04:02 AM

Hi,

in my routine for finding the patch with the highest energy, I currently just add up the components. That works fine for me.

Another, more accurate method would be to convert to another color space (HSB) or even to wavelength.

This involves three steps:

- clamp each radiosity channel to the range [0, 255] (an exposure function is recommended)
- convert to HSB or wavelength/frequency
- compare the results

This should work, but in my experience it is sufficient to just add up the three values.

[Edited by - gamma_strahler on March 7, 2007 10:02:10 AM]
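
The simple summation being discussed can be sketched in a few lines (the `Patch` shape here is a hypothetical placeholder):

```python
from dataclasses import dataclass

@dataclass
class Patch:
    unshot: tuple   # (r, g, b) unshot radiosity
    area: float

def next_shooter(patches):
    """Pick the patch with the most unshot energy: sum the R, G and B
    channels and weight by the patch area."""
    return max(patches, key=lambda p: sum(p.unshot) * p.area)
```

Swapping the key function for a perceptual weighting (or an HSB conversion) changes only the ordering heuristic, not the rest of the shooting loop.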

### #11Poons  Members   -  Reputation: 122


Posted 07 March 2007 - 07:52 PM

Thanks! I think for now I will go with the easy summation and switch to your proposed method later on.

### #12Sharlin  Members   -  Reputation: 860


Posted 07 March 2007 - 08:28 PM

A marginally better result might be obtained by weighting the components according to the sensitivity of the human eye:

`luminance = 0.30*red + 0.59*green + 0.11*blue`

### #13Poons  Members   -  Reputation: 122


Posted 16 March 2007 - 02:58 AM

I have to use this thread once again. Maybe somebody here on the forums has read the paper by Coombe et al. about "Radiosity on Graphics Hardware".

I'm not sure what they mean by:
"Next, each polygon that might have received energy from the shooter is rendered orthographically to a frame buffer at the same resolution as its associated radiosity texture."

I'm not sure why that must be done, because if I already have a texture, rendering it to a frame buffer with the same size should give me the same result. There must be something that I'm missing.

Maybe someone can help me.

### #14ViLiO  Members   -  Reputation: 1326


Posted 16 March 2007 - 03:22 AM

It means you set your viewport to the resolution of that polygon's texture. You then bind the polygon's surface (its radiosity texture) as a render target and render the polygon with an orthogonal projection matrix such that each texel of the polygon's texture corresponds directly to one fragment in the pixel shader. Then in the pixel shader you can back-project and look up in the ID map to see whether that texel is visible or not.
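
The one-texel-per-fragment correspondence falls out of the projection setup itself. As a small sketch (my own construction, not code from the paper): fit a glOrtho-style frustum to the polygon's bounds in its own plane, and the polygon maps exactly onto NDC [-1, 1]²; with the viewport set to the texture resolution, each texel then lands on exactly one fragment.

```python
def ortho(l, r, b, t, n=-1.0, f=1.0):
    """glOrtho-style orthographic projection matrix (rows of a 4x4)."""
    return [
        [2 / (r - l), 0.0, 0.0, -(r + l) / (r - l)],
        [0.0, 2 / (t - b), 0.0, -(t + b) / (t - b)],
        [0.0, 0.0, -2 / (f - n), -(f + n) / (f - n)],
        [0.0, 0.0, 0.0, 1.0],
    ]

def transform(m, p):
    # Multiply a 4x4 matrix by a homogeneous point.
    return [sum(m[i][j] * p[j] for j in range(4)) for i in range(4)]

# A frustum fitted to a polygon spanning (0, 0)..(4, 4) in its plane
# sends those bounds to the corners of NDC, filling the viewport.
m = ortho(0.0, 4.0, 0.0, 4.0)
```

This is why the polygon "fills the viewport": its extent and the NDC cube coincide by construction.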

That is a great paper btw, the way they pick the next emitter is seriously clever [smile]

Regards,
ViLiO

### #15Poons  Members   -  Reputation: 122


Posted 16 March 2007 - 03:57 AM

Do I understand you correctly that this has to be done in order to perform the back-projection in the pixel shader? And after performing the form factor calculations, are the values in the frame buffer stored back into the radiosity texture of that polygon?

### #16ViLiO  Members   -  Reputation: 1326


Posted 16 March 2007 - 04:04 AM

Yep, that is pretty much it. The back-projection is merely for the visibility lookup; you still need to calculate the rest of the form factor in the pixel shader, and the resulting values you output as the colour value are the incident light for that texel of the polygon's texture.

Regards,
ViLiO

### #17Poons  Members   -  Reputation: 122


Posted 16 March 2007 - 04:24 AM

Thanks!

Maybe you can clarify one last question: if I bind the polygon as a render target, can I access the world coordinates of the fragment in a pixel shader? Or is this wrong? Or maybe the better question is: how can I access the world coordinates?

### #18ViLiO  Members   -  Reputation: 1326


Posted 16 March 2007 - 04:33 AM

You still render the polygon in its world-space position; you just have to use a view matrix and an orthogonal projection matrix so that it is screen-aligned and fills the viewport.

Regards,
ViLiO

### #19Zipster  Crossbones+   -  Reputation: 398


Posted 16 March 2007 - 07:49 AM

As an aside, I'm also implementing this paper as part of a project for my GPU architecture class, as well as adding a few new extensions... so check back in a few months :)

### #20ViLiO  Members   -  Reputation: 1326


Posted 16 March 2007 - 09:30 AM

Quote:
Original post by Zipster: As an aside, I'm also implementing this paper as part of a project for my GPU architecture class, as well as adding a few new extensions... so check back in a few months :)
Me too (but just for fun though, not for any real project/class) [lol]

This is as far as I got with it before taking a break to do some XNA stuff [smile]