sienaiwun

Multi-cameras grouping

Hi!

This sounds to me a little bit like Instant Radiosity / Imperfect Shadow Maps. What do you need the rendered images for? If you want to create shadow maps for virtual point lights in order to estimate visibility for certain directions over a hemisphere, you should look into Imperfect Shadow Maps. It can render all shadow maps in a single draw call. (So, no geometry in the scene is drawn twice.) On the downside the shadow maps are somewhat dirty, but it is really fast.

To summarize briefly, it goes like this:

  • In a preprocess, uniformly distribute a few hundred thousand points on the scene geometry. For dynamic geometry, store for each point the barycentric coordinates and the triangle index so that you can do a fast lookup from a transformed vertex buffer (e.g. streamed out after skinning).
  • During rendering, distribute virtual point lights starting from your lights. For coherence you should prefer quasi-Monte Carlo methods, like Halton sequences (see the first sketch after this list). You can use a raytracer for that, or you can take a reflective shadow map and do importance sampling on it (this requires a summed-area table of the direct-lighting buffer to create the cumulative distribution function).
  • Next is the creation of the shadow map atlas. For this, you render all points in the scene. Each VPL uses different points for its shadow map estimate, so based on the vertex ID you can decide into which frustum to transform the respective point. (You can store the viewport transformations in a texture buffer.)
  • The resulting shadow maps have many holes, so you do a pull-push step to fill in the gaps (see the second sketch after this list). Pull: average only the valid depths into the next higher mip level. Push: copy the averaged depths from the higher mip level into the invalid atlas pixels of the lower mip level. Using points to render the shadow maps is a few times faster than using triangles, which absorbs the cost of the pull-push step.
  • Well, and then you can do the lighting + shadow mapping.
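
As a side note on the VPL distribution bullet: a minimal GLSL sketch of a base-2/base-3 Halton pair could look as follows (the function names are illustrative, not part of the original method description):

// Reflects the digits of i in the given base about the decimal point,
// yielding the i-th element of the radical-inverse sequence in [0,1).
float radicalInverse(uint i, uint base)
{
    float invBase = 1.0 / float(base);
    float f = invBase;
    float result = 0.0;
    while (i > 0u)
    {
        result += f * float(i % base);
        i /= base;
        f *= invBase;
    }
    return result;
}

// the i-th 2D Halton sample in [0,1)^2, using bases 2 and 3
vec2 halton23(uint i)
{
    return vec2(radicalInverse(i, 2u), radicalInverse(i, 3u));
}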
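
And a minimal sketch of the pull phase from the pull-push bullet, assuming one fragment shader invocation per texel of the coarser mip level and a cleared depth of 1.0 marking a hole (all names are illustrative):

#version 330 core
uniform sampler2D finerMip; // mip level i of the depth atlas
out float coarserDepth;     // rendered into mip level i+1

void main()
{
    // the four children of this texel in the finer level
    ivec2 base = ivec2(gl_FragCoord.xy) * 2;
    float sum = 0.0;
    float count = 0.0;
    for (int y = 0; y < 2; ++y)
        for (int x = 0; x < 2; ++x)
        {
            float d = texelFetch(finerMip, base + ivec2(x, y), 0).r;
            if (d < 1.0) // 1.0 = cleared depth, i.e. a hole
            {
                sum += d;
                count += 1.0;
            }
        }
    // average of the valid depths, or still a hole if all four children are invalid
    coarserDepth = (count > 0.0) ? sum / count : 1.0;
}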

Your approach sounds interesting, though. It is another way to increase the quality of the shadow maps. (The standard way would be to simply use more points.) How do you decide into which camera frustum of the group to transform the geometry? Is it somehow marked on your triangles in the grouping step?

Best regards!


Thanks for your reply. I use the image rendered by each camera frustum as a texture to simulate a refraction effect, and I wonder how the approach you mentioned above could be adapted to this. I plan to decide in the fragment shader which camera frustum a fragment falls into. It is like this:
vertex shader
{
    // pass the geometry info through to the fragment shader unchanged
}
fragment shader
{
    if (in frustum 1) // tested against frustum 1's modelview and projection matrices
    {
        // shade as in picture 1
    }
    else if (in frustum 2)
    {
        // ...
    }
    // ...
}
This method may be naive; I would like to improve it.
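
Concretely, the "in frustum" test could be written like this (a minimal GLSL sketch; insideFrustum, viewProj and worldPos are illustrative names, not from the post):

// A point lies inside the canonical view volume iff all of its clip-space
// coordinates are within ±w (and it is in front of the camera).
bool insideFrustum(mat4 viewProj, vec3 worldPos)
{
    vec4 c = viewProj * vec4(worldPos, 1.0); // transform to clip space
    return c.w > 0.0 && all(lessThanEqual(abs(c.xyz), vec3(c.w)));
}

The fragment shader would then call this once per frustum with the respective modelview-projection matrix, giving exactly the chain of ifs sketched above.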


I use the image rendered by each camera frustum as a texture to simulate a refraction effect

Interesting! How is the refraction meant to work? Do you place the cameras on the refractive surface? Is the idea to approximate the shooting of refraction rays by rasterizing the scene behind the surface, in order to exploit the coherence of the rays?
How do you decide which camera, and which pixel of it, to use when composing the final image?


I wonder how the approach you mentioned above could be adapted to this.

If you can live with an approximation of the refractive scene (e.g. by blurring it a little), I think the Imperfect Shadow Mapping approach can be applied here as well.


I plan to decide in the fragment shader which camera frustum a fragment falls into. It is like this:
vertex shader
{
    // pass the geometry info through to the fragment shader unchanged
}
fragment shader
{
    if (in frustum 1) // tested against frustum 1's modelview and projection matrices
    {
        // shade as in picture 1
    }
    else if (in frustum 2)
    {
        // ...
    }
    // ...
}
This method may be naive; I would like to improve it.

Hm, I’m afraid you have to decide on a frustum in the vertex shader, since rasterization takes place before the fragment shader. If you render triangles this might be difficult, since the vertices can end up in different frustums, which would mess up the transformation of the triangle. If you’d render points (like with ISM) you’d be fine.
As mentioned before, in ISM each camera gets its own set of points, so the code would be much easier:
int cameraID = vertexID / verticesPerCamera; // decide on which camera to use based on the ID of the vertex
matrix matWVP = matrices[cameraID]; // read the transformation matrix from a uniform buffer or a texture
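
Fleshed out as a minimal GLSL vertex shader, it might look like this (the uniform block, the array size of 64 and gl_VertexID standing in for the vertex ID are assumptions for the sketch; the offset into each camera's atlas tile is omitted):

#version 330 core
layout(location = 0) in vec3 position;  // one of the preprocessed scene points
uniform int verticesPerCamera;          // how many points each camera owns
uniform Cameras { mat4 viewProj[64]; }; // one view-projection matrix per frustum

void main()
{
    // decide on which camera to use based on the ID of the vertex
    int cameraID = gl_VertexID / verticesPerCamera;
    gl_Position = viewProj[cameraID] * vec4(position, 1.0);
}

The remaining piece would be scaling and biasing gl_Position into the camera's tile of the shadow map atlas, using the viewport transformations stored in the texture buffer mentioned above.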


Best regards!


If you can live with an approximation of the refractive scene (e.g. by blurring it a little), I think the Imperfect Shadow Mapping approach can be applied here as well

Thanks for the reply. As far as I know, this method (ISM) works because the only thing that has to be judged is whether a point is visible from a virtual point light; even though the shadow maps are not sharp, they are still good enough for that visibility test. However, I need a clear view from every virtual camera, so I am afraid it might not work in this case.


I need a clear view from every virtual camera, so I am afraid it might not work in this case.
Whether it's applicable or not depends on the number of points, but I guess it is very unlikely that you will get a "perfect" refraction/reflection. Due to the pull-push step, details will be blurred out. I assume that for refractive objects far away from the camera it might be okay.


And I wonder if multi-perspective rendering (http://research.micr...t.aspx?id=70525) could work?

Looks definitely promising. They stated refractions as one application of their approach.
