
## Recommended Posts

I had this idea a while back and thought of sharing it with you guys. Imagine you have a donut, and an omni light directly overhead casting a shadow on the floor. On the floor you should see an "o", the shadow of the donut. Now, if you could pre-compute which edges of the object (in this case, the donut) were involved in that calculation, you could accelerate the entire process.

I propose the following process to pre-process an object that will cast volume shadows: create a sphere with n points and insert the object into the sphere so that it is totally enclosed by it. Now, we treat each point of the sphere as a light source, and we calculate which edges form the silhouette of the shadow for that particular light position. That was the pre-processing.

While in the game, we have a "virtual" sphere, with the same n points, that goes wherever our object goes and suffers the same transforms it does. Then, when we need to create a shadow volume from that object, all we have to do is ask which of the sphere's points is closest to the light source, and that tells us which edges to work with.

Did any of this sound logical? Your thoughts please...
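A minimal sketch of what the pre-processing could look like, assuming a simple indexed triangle mesh. All names here (`Vec3`, `Edge`, `precomputeSilhouettes`) are made up for illustration and are not from the original post; the silhouette test is the standard one (an edge is on the silhouette when one adjacent triangle faces the light and the other faces away):

```cpp
#include <array>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// An edge shared by two triangles of the mesh.
struct Edge { int v0, v1; int tri0, tri1; };

// Silhouette test: the edge is on the silhouette when exactly one of its
// two adjacent triangles faces the light.
bool isSilhouetteEdge(const Edge& e,
                      const std::vector<std::array<Vec3, 3>>& tris,
                      Vec3 lightPos)
{
    auto facesLight = [&](int t) {
        Vec3 n = cross(sub(tris[t][1], tris[t][0]),
                       sub(tris[t][2], tris[t][0]));
        return dot(n, sub(lightPos, tris[t][0])) > 0.0f;
    };
    return facesLight(e.tri0) != facesLight(e.tri1);
}

// Pre-processing: for every sample point ("node") on the bounding sphere,
// record which edges would form the silhouette if the light sat at that node.
std::vector<std::vector<int>>
precomputeSilhouettes(const std::vector<Vec3>& nodes,
                      const std::vector<Edge>& edges,
                      const std::vector<std::array<Vec3, 3>>& tris)
{
    std::vector<std::vector<int>> table(nodes.size());
    for (std::size_t n = 0; n < nodes.size(); ++n)
        for (std::size_t e = 0; e < edges.size(); ++e)
            if (isSilhouetteEdge(edges[e], tris, nodes[n]))
                table[n].push_back(static_cast<int>(e));
    return table;
}
```

At runtime you would transform the light into object space, find the nearest node, and read `table[node]` directly instead of scanning every edge of the mesh.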

[Hugo Ferreira][Positronic Dreams]
All your code are belong to us!

##### Share on other sites
Sounds like it could work, but there might be artifacts if the number of points on your bounding sphere is not high enough.

However, the bottleneck in shadow volumes is generally fillrate, not silhouette determination, so this doesn't solve the main problem of shadow volumes.

Another optimization is to use a silhouette cache. When the light doesn't move, there is no need to recompute the silhouette. It's very simple to implement and, in some situations, can do miracles.
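A sketch of such a cache, assuming the light position is already in object space; `LightCache`, `getSilhouette`, and `computeSilhouette` are illustrative names, with `computeSilhouette` standing in for whatever extraction routine is already in use:

```cpp
#include <vector>

// Remembers the last light position and the silhouette computed for it.
struct LightCache {
    float lx = 0, ly = 0, lz = 0;
    bool valid = false;
    std::vector<int> silhouette;   // cached edge indices
};

// Recompute only when the (object-space) light has actually moved;
// otherwise return the cached edge list.
template <typename F>
const std::vector<int>& getSilhouette(LightCache& c,
                                      float lx, float ly, float lz,
                                      F computeSilhouette)
{
    if (!c.valid || lx != c.lx || ly != c.ly || lz != c.lz) {
        c.silhouette = computeSilhouette(lx, ly, lz);
        c.lx = lx; c.ly = ly; c.lz = lz;
        c.valid = true;
    }
    return c.silhouette;
}
```

For a static scene with a static light this skips the extraction entirely after the first frame, which is why it "can do miracles" in the right situations.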

Y.

##### Share on other sites
While possibly feasible, I don't think this would necessarily be a good idea: if you have an object with many vertices (and thus many edges) and a high-accuracy "lighting sphere thing", you'll have a lot of potential edge lists to store, each containing a lot of edges...

just my 2c

-jonnii
=========
jon@voodooextreme.com
www.voodooextreme.com

##### Share on other sites
It also would get more complicated with animated meshes.

##### Share on other sites
This won't work.
When you look at an object from one direction but from different distances, the edges you need for the shadow volume can change.

Imagine this case:

```
             .     <- light point

        /
       /
      /      <-- you see this as edge

             .     <- pivot point of the object
```

But if you move the light closer to the line (polygon), still from the same direction relative to the object's pivot point:

```
             .     <- light point
          /   <-- now you see this as edge
         /
        /
             .     <- pivot point of the object
```

My Site

[edited by - Quasar3D on September 22, 2003 10:27:27 AM]

[edited by - Quasar3D on September 22, 2003 5:49:06 PM]

##### Share on other sites
From this point onward, the Sphere Points will be called Nodes.

quote:
Original post by quasar3d
This won't work.

You forget that there are other Nodes around the object. As your light source moves, the assigned Node changes.
The projected volume itself changes from position to position, but not the silhouette edges themselves. That's what we are storing: for a certain Node, which are the silhouette edges?

quote:
(...) you'll have a lot of potential edge lists to store (...)

Well, to solve this I came up with an algorithm that only stores the difference between Nodes.
Therefore you only store the change in edges, per Node, see?

Also, sometimes it might happen that two nodes very close to each other have the same edges, so they merge, becoming one.
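The difference idea could look roughly like this: keep the full edge set for one reference node and, for every other node, just the edges that enter and leave the silhouette. `NodeDelta` and both helpers are hypothetical names, not the poster's actual code:

```cpp
#include <set>
#include <vector>

// Per-node difference against a neighbouring (or reference) node.
struct NodeDelta {
    std::vector<int> added;    // edges that join the silhouette at this node
    std::vector<int> removed;  // edges that leave it
};

// Build the delta that transforms one silhouette edge set into another.
NodeDelta makeDelta(const std::set<int>& from, const std::set<int>& to)
{
    NodeDelta d;
    for (int e : to)   if (!from.count(e)) d.added.push_back(e);
    for (int e : from) if (!to.count(e))   d.removed.push_back(e);
    return d;
}

// Reconstruct a node's full edge set by applying its delta.
std::set<int> applyDelta(std::set<int> silhouette, const NodeDelta& d)
{
    for (int e : d.removed) silhouette.erase(e);
    for (int e : d.added)   silhouette.insert(e);
    return silhouette;
}
```

Since neighbouring nodes see nearly the same silhouette, the deltas stay small, which is exactly where the storage saving comes from; a node whose delta is empty is the "merge into one node" case mentioned above.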

quote:
It also would get more complicated with animated meshes

The same algorithm can be used here, by only storing the difference from one Node in a frame to the same Node in the next frame.

Also, with this pre-processing we can store self-shadowing information...

I think that if all the quirks get ironed out, this could evolve into a very useful thing...

[Hugo Ferreira][Positronic Dreams]
All your code are belong to us!

##### Share on other sites
quote:
Original post by pentium3id
From this point onward, the Sphere Points will be called Nodes.

quote:
Original post by quasar3d
This won't work.

You forget that there are other Nodes around the object. As your light source moves, the assigned Node changes.
The projected volume itself changes from position to position, but not the silhouette edges themselves. That's what we are storing: for a certain Node, which are the silhouette edges?

If I understand you correctly, you have a sphere surrounding your object and then you project the position of the camera onto that sphere to find a node. But when you move the camera straight back from the center of the sphere, the projection on the sphere will still be the same, and so you will still be in the same node, while the edges you really need might have changed.

It's the same principle as the reason you can't do real reflections with an environment map.

Am I missing something?

My Site

[edited by - Quasar3D on September 22, 2003 5:48:29 PM]

##### Share on other sites
Yea, I think you're missing something.
Shadows aren't camera-dependent, they are light-source-dependent.

You use the current light source to find the node; altering the position of your light is what changes it.
Each Node "points" to a data structure that tells us which edges belong to the current Node.

More thoughts?

[Hugo Ferreira][Positronic Dreams]
All your code are belong to us!

##### Share on other sites

Great idea pent!! It looks like it will work quite well for directional lights, but perhaps not for point lights: just scaling your light vector from the origin of your object (moving a point light along the same direction) can give a different set of edges.

A suggestion though: you may still have artifacts when moving from one node to another (edge jumps). If you were to store slightly redundant edges and then, at runtime, still compute the silhouettability (?!?) of that limited set of edges, you could still get a good performance gain.

Of course, the only real issue I see here is matching the light vector to one of the nodes in your data structure. How do you perform the search for the appropriate node based on your light vector?
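One straightforward answer, sketched here as a brute-force scan over the nodes (`Dir` and `nearestNode` are illustrative names; for a large n you would likely bucket the sphere or use some spatial structure over directions instead):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Dir { float x, y, z; };

// nodes are unit vectors on the pre-processing sphere; return the index of
// the node whose direction is closest to the light direction, i.e. the one
// with the largest dot product against the normalized light vector.
std::size_t nearestNode(const std::vector<Dir>& nodes, Dir light)
{
    float len = std::sqrt(light.x*light.x + light.y*light.y + light.z*light.z);
    Dir l = { light.x / len, light.y / len, light.z / len };

    std::size_t best = 0;
    float bestDot = -2.0f;   // below the minimum possible dot of -1
    for (std::size_t i = 0; i < nodes.size(); ++i) {
        float d = nodes[i].x*l.x + nodes[i].y*l.y + nodes[i].z*l.z;
        if (d > bestDot) { bestDot = d; best = i; }
    }
    return best;
}
```

Note the light must first be transformed into the object's space (since the "virtual" sphere suffers the same transforms as the object); after that the search is purely a direction comparison.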

##### Share on other sites
quote:
Original post by pentium3id
Yea, I think you're missing something.
Shadows aren't camera-dependent, they are light-source-dependent.

You use the current light source to find the node; altering the position of your light is what changes it.
Each Node "points" to a data structure that tells us which edges belong to the current Node.

More thoughts?

[Hugo Ferreira][Positronic Dreams]
All your code are belong to us!

Oops, with the camera in my previous post I meant the light source. I pictured the light as a camera looking at the center of the object, but of course the camera of the viewer doesn't have anything to do with it.

As the previous poster said, it will indeed work for directional lights, but I don't think it will work for point lights and spot lights.

But maybe I'm wrong, so if you don't have it implemented yet, go do it; and if you have, please show us. I am very interested in seeing it working!

My Site
