Butabee

Radiosity Idea


I was thinking that radiosity could be done in a pixel shader by storing each rendered pixel's world position in one image and its color in a separate image, then sampling the surrounding pixels within a given range and calculating the lighting based on those.
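A minimal CPU-side sketch of that gather, just to make the idea concrete. The buffer layout here (2D lists of position/normal/color tuples) and the function name are my own illustrative assumptions, not engine code; a shader version would do the same loop per fragment:

```python
import math

def gather_bounce(x, y, positions, normals, colors, radius=2):
    """One-bounce gather at pixel (x, y) from nearby pixels, using the
    world-position and color buffers described above. Buffer layout is
    an assumption for illustration: 2D lists indexed [row][col] holding
    (x, y, z) tuples for positions/normals and RGB tuples for colors."""
    h, w = len(positions), len(positions[0])
    px, py, pz = positions[y][x]
    nx, ny, nz = normals[y][x]
    bounce = [0.0, 0.0, 0.0]
    for sy in range(max(0, y - radius), min(h, y + radius + 1)):
        for sx in range(max(0, x - radius), min(w, x + radius + 1)):
            if (sx, sy) == (x, y):
                continue
            qx, qy, qz = positions[sy][sx]
            dx, dy, dz = qx - px, qy - py, qz - pz
            dist2 = dx * dx + dy * dy + dz * dz
            if dist2 < 1e-8:
                continue           # same world position: skip
            inv = 1.0 / math.sqrt(dist2)
            # Lambert term: how directly the receiver faces the sender.
            cos_r = max(0.0, (dx * nx + dy * ny + dz * nz) * inv)
            weight = cos_r / (1.0 + dist2)   # crude distance falloff
            for c in range(3):
                bounce[c] += colors[sy][sx][c] * weight
    return bounce
```

Note that screen-space proximity stands in for world-space proximity here, which is exactly the weakness discussed in the replies below.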

I'm in the process of making a voxel engine where the GPU isn't used for much else so I was thinking this could be a way to put the GPU to use.

Whaddya think?

In general, the surfaces that reflect light onto one particular point will not be coherent in screen space. So you can't just sample the surrounding pixels and expect to get good results. In a lot of cases the surfaces you need to sample won't be in your texture at all, because they're off-screen or facing away from the camera.

What I was planning on was to make every point a light source, so each point would shoot light in all directions within 180 degrees. I would do this by taking the cross product of the direction to a real light source (point - lightpos) and the direction to the camera (point - campos) to get the surface direction. Then I'd take the dot product of the normalized (point - otherpoint) and the surface normal.


I think this would work, anyway.


There still is the problem of the off-screen stuff, though; I'm not sure what to do about that. Maybe I'll just see how things look without it.
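The dot-product test described above could be sketched like this (a rough illustration; `vpl_contribution` and the falloff term are my own naming and assumptions, not a fixed recipe):

```python
import math

def vpl_contribution(point, normal, other_point, other_color):
    """Contribution of one emitting point to a receiver, following the
    dot-product idea above: cosine between the receiver's surface normal
    and the normalized direction toward the sender, times a falloff."""
    d = [o - p for o, p in zip(other_point, point)]
    dist = math.sqrt(sum(c * c for c in d))
    if dist < 1e-8:
        return (0.0, 0.0, 0.0)
    direction = [c / dist for c in d]
    # Clamp to zero so senders behind the surface contribute nothing
    # (the "180 degrees" hemisphere restriction).
    cosine = max(0.0, sum(a * b for a, b in zip(direction, normal)))
    falloff = 1.0 / (1.0 + dist * dist)
    return tuple(c * cosine * falloff for c in other_color)
```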

If you are already working with voxels, there is no reason not to simply raytrace the global illumination. If you are using an octree, you wouldn't have to traverse the whole tree, since you probably wouldn't notice the difference.
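As a sketch of what "just raytrace it" could mean against a dense voxel grid (fixed-step marching kept deliberately simple; a real engine would use a DDA or the octree traversal mentioned above):

```python
import math

def trace_voxels(grid, start, direction, max_steps=64):
    """Fixed-step march through a dense boolean grid (grid[z][y][x]);
    returns the first solid voxel hit as (x, y, z), or None if the ray
    escapes. A sketch only: a real engine would use a DDA or an octree
    traversal instead of fixed steps."""
    x, y, z = start
    dx, dy, dz = direction
    for _ in range(max_steps):
        x += dx; y += dy; z += dz
        ix, iy, iz = math.floor(x), math.floor(y), math.floor(z)
        if not (0 <= iz < len(grid) and 0 <= iy < len(grid[0])
                and 0 <= ix < len(grid[0][0])):
            return None            # left the grid: the ray escapes
        if grid[iz][iy][ix]:
            return (ix, iy, iz)    # first solid voxel along the ray
    return None
```

For GI you would shoot a handful of these rays over the hemisphere at each shading point and accumulate the light from whatever they hit.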


> If you are already working with voxels, there is no reason not to simply raytrace the global illumination. If you are using an octree, you wouldn't have to traverse the whole tree, since you probably wouldn't notice the difference.



I'm not using an octree, but I do have an idea that would probably give a better-looking effect on the CPU. I'm just trying to think how costly it would be. I guess I'll try to do as much as I can on the CPU before trying to improve performance by using the GPU. It would be awesome if I could do a full CPU version that has nice performance and looks good.

What I would do on the CPU is actually have light voxels that overlay solid ones and then just check the light voxel at a given point to see how much lighting the solid voxel has.
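A minimal sketch of that overlay idea (`LightGrid` and its methods are illustrative names, not engine API): a second grid with the same dimensions as the solid one, indexed identically, so shading is a lookup.

```python
class LightGrid:
    """A 'light voxel' layer parallel to the solid grid, as described
    above: same dimensions, same (x, y, z) indexing, so shading a solid
    voxel is just a lookup into the light layer."""

    def __init__(self, size):
        # One float of accumulated light per voxel, all starting dark.
        self.light = [[[0.0] * size for _ in range(size)]
                      for _ in range(size)]

    def add_light(self, x, y, z, amount):
        # Called by whatever propagation pass deposits light into cells.
        self.light[z][y][x] += amount

    def shade(self, x, y, z):
        # Lighting for the solid voxel at (x, y, z) is a direct lookup.
        return self.light[z][y][x]
```

The nice property is that the propagation pass (however it works) and the shading pass are decoupled: shading cost is constant per voxel regardless of how the light got there.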


> What I was planning on was to make every point a light source, so each point would shoot light in all directions within 180 degrees. I would do this by taking the cross product of the direction to a real light source (point - lightpos) and the direction to the camera (point - campos) to get the surface direction. Then I'd take the dot product of the normalized (point - otherpoint) and the surface normal.
>
> I think this would work, anyway.
>
> There still is the problem of the off-screen stuff, though; I'm not sure what to do about that. Maybe I'll just see how things look without it.


That category of GI techniques is called "Virtual Point Lights", or "VPLs" for short. If you do some searching you can find a lot of research in this area, which can give you some ideas. You can also read up on instant radiosity, and Crytek's recent foray into real-time GI. IIRC they used a reflective shadow map (RSM) to generate their initial VPLs, which limits the number of light sources you can use but doesn't explicitly tie you to what's visible in screen space.

However, I agree with lpcsrt... if you already have a voxelized representation of your scene, it seems silly not to use it for GI.
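The core of the VPL family can be sketched in a few lines. The `(position, flux)` layout here is a deliberately minimal assumption of mine; an RSM-based version would also carry each VPL's normal and color, and would test visibility:

```python
import math

def shade_with_vpls(point, normal, vpls):
    """Sum contributions from a list of virtual point lights, the core
    of the instant-radiosity / VPL family. Each VPL is a (position,
    flux) pair -- a minimal illustrative layout, not a full RSM record."""
    total = 0.0
    for pos, flux in vpls:
        d = [a - b for a, b in zip(pos, point)]
        dist2 = sum(c * c for c in d)
        if dist2 < 1e-8:
            continue
        cosine = max(0.0, sum(a * b for a, b in zip(d, normal))
                     / math.sqrt(dist2))
        # Clamping the falloff tames the bright spikes near-surface
        # VPLs are notorious for.
        total += flux * cosine / max(dist2, 0.25)
    return total
```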

I've actually done this before; you end up having to sample too much to get a decent amount of light in. You pretty much have to sample the whole screen against the whole screen to get a decent result, so it goes REALLY slowly.

I figured that a few years from now, maybe if you calculated a "cube view" out from the camera, you'd get rid of some of the hidden-from-view problem. I hope you dig what I did here; the idea actually works :) and looks pretty damn cool.

There was a nice paper about real-time GI which takes the basic idea of the reflective shadow map and builds on it:

Real-Time Bidirectional Path Tracing via Rasterization
http://www.square-enix.com/jp/info/library/
