
3D Spherical Harmonics and Lightmaps


Hey guys,

Are lightmaps still the best way to handle static diffuse irradiance, or is SH used for both diffuse and specular irradiance now?

Also, do any modern games still put direct light in lightmaps, or is all direct lighting handled by shadow maps now?

Finally, how is SH usually baked?

Thanks!


4 hours ago, KarimIO said:

Are lightmaps still the best way to handle static diffuse irradiance, or is SH used for both diffuse and specular irradiance now?

You can have lightmaps where each texel is an SH instead of just a single color. The advantage would be directional information, so you can apply indirect light to normal-mapped surfaces properly.

For specular you typically don't have the memory to store a highly detailed environment at every lightmap texel, which is why a sparse set of high-resolution environment probes is common for this. (A 3-band SH has less detail than a cubemap with 6 faces of 2x2 texels, and a 2-band SH is similar to a cubemap of 6 single texels: not enough for sharp reflections.)

In contrast, sampling diffuse indirect lighting from a sparse set of environment probes would be much worse than using lightmaps. Lightmaps have high resolution on the surface, so they can store accurate local details like shadows and the darkening in corners. High-resolution directional information is not necessary.

But there are many games that use either the sparse-probe or the dense-lightmap approach for everything, others use sparse 3D texture bricks instead of 2D lightmaps, and many do not use precomputed lighting at all.
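To make the "SH in a lightmap texel" idea concrete, here is a minimal sketch of evaluating diffuse irradiance from a 3-band (9-coefficient) SH texel with a normal-mapped normal, using the cosine-convolution constants from Ramamoorthi and Hanrahan. The types and coefficient layout are illustrative assumptions, not any particular engine's:

```cpp
// Minimal sketch: diffuse irradiance from one 9-coefficient SH lightmap
// texel, evaluated with the (possibly normal-mapped) surface normal n.
// Coefficient order: [0]=L00, [1]=L1-1, [2]=L10, [3]=L11, [4]=L2-2,
// [5]=L2-1, [6]=L20, [7]=L21, [8]=L22. Run once per color channel.
struct Vec3 { float x, y, z; };

float EvalSHIrradiance(const float sh[9], Vec3 n)
{
    // Constants from Ramamoorthi & Hanrahan, "An Efficient Representation
    // for Irradiance Environment Maps".
    const float c1 = 0.429043f, c2 = 0.511664f;
    const float c3 = 0.743125f, c4 = 0.886227f, c5 = 0.247708f;

    return c4 * sh[0]
         + 2.0f * c2 * (sh[3] * n.x + sh[1] * n.y + sh[2] * n.z)
         + c3 * sh[6] * n.z * n.z - c5 * sh[6]
         + c1 * sh[8] * (n.x * n.x - n.y * n.y)
         + 2.0f * c1 * (sh[4] * n.x * n.y + sh[7] * n.x * n.z + sh[5] * n.y * n.z);
}
```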

4 hours ago, KarimIO said:

Finally, how is SH usually baked?

Typically you generate n rays evenly distributed over the sphere, for each ray calculate the incoming light (e.g. by path tracing), multiply by the SH basis functions, and sum up. Finally you normalize the SH and you're done.

So you could also accumulate each texel of a cubemap into an SH the same way: just calculate the direction to each texel and a weight (texels at the corners cover less solid angle than those at a face center, and the weight should compensate for this).
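As a rough illustration of that bake (not anyone's production code), here is a sketch that projects uniformly distributed ray samples into 3-band SH; TraceRadiance() is a placeholder for whatever computes the incoming light per direction, e.g. a path tracer or a cubemap fetch:

```cpp
#include <cmath>
#include <random>

// Real-valued SH basis, bands 0-2, evaluated for a unit direction.
void EvalSHBasis9(float x, float y, float z, float out[9])
{
    out[0] = 0.282095f;                         // Y00
    out[1] = 0.488603f * y;                     // Y1-1
    out[2] = 0.488603f * z;                     // Y10
    out[3] = 0.488603f * x;                     // Y11
    out[4] = 1.092548f * x * y;                 // Y2-2
    out[5] = 1.092548f * y * z;                 // Y2-1
    out[6] = 0.315392f * (3.0f * z * z - 1.0f); // Y20
    out[7] = 1.092548f * x * z;                 // Y21
    out[8] = 0.546274f * (x * x - y * y);       // Y22
}

// Placeholder for the per-ray lighting (path trace, cubemap lookup, ...).
// Returns a constant here only so the sketch compiles.
static float TraceRadiance(float, float, float) { return 1.0f; }

// Monte Carlo projection: average L(dir) * Y_i(dir) over uniform sphere
// samples, then multiply by the sphere's 4*pi measure (pdf = 1 / (4*pi)).
void ProjectRadianceToSH(int numSamples, float shOut[9])
{
    std::mt19937 rng(1234);
    std::uniform_real_distribution<float> u01(0.0f, 1.0f);
    for (int i = 0; i < 9; ++i) shOut[i] = 0.0f;

    for (int s = 0; s < numSamples; ++s)
    {
        // Uniform direction on the unit sphere.
        float z   = 1.0f - 2.0f * u01(rng);
        float phi = 6.28318531f * u01(rng);
        float r   = std::sqrt(std::fmax(0.0f, 1.0f - z * z));
        float x = r * std::cos(phi), y = r * std::sin(phi);

        float basis[9];
        EvalSHBasis9(x, y, z, basis);
        float L = TraceRadiance(x, y, z);
        for (int i = 0; i < 9; ++i) shOut[i] += L * basis[i];
    }

    const float norm = 4.0f * 3.14159265f / float(numSamples);
    for (int i = 0; i < 9; ++i) shOut[i] *= norm;
}
```

For the cubemap variant described above, you would loop over texels instead of random rays and replace the uniform weight with each texel's solid angle.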

 

There are other options besides SH, e.g. Spherical Gaussians (https://mynameismjp.wordpress.com/2016/10/09/new-blog-series-lightmap-baking-and-spherical-gaussians/), low-res cubemaps (Valve's ambient cube), or sphere maps.

 

6 hours ago, JoeJ said:

You can have lightmaps where each texel is an SH instead of just a single color. The advantage would be directional information, so you can apply indirect light to normal-mapped surfaces properly.

But then this isn't a lightmap; it's just SH projected onto surfaces :P

6 hours ago, JoeJ said:

For specular you typically don't have the memory to store a highly detailed environment at every lightmap texel, which is why a sparse set of high-resolution environment probes is common for this. (A 3-band SH has less detail than a cubemap with 6 faces of 2x2 texels, and a 2-band SH is similar to a cubemap of 6 single texels: not enough for sharp reflections.)

Yeah, I was specifically talking about irradiance, or indirect light.

6 hours ago, JoeJ said:

In contrast, sampling diffuse indirect lighting from a sparse set of environment probes would be much worse than using lightmaps. Lightmaps have high resolution on the surface, so they can store accurate local details like shadows and the darkening in corners. High-resolution directional information is not necessary.

Okay, so this kinda does answer the major part of my question.

My current plan is to use:

• CSM + regular shadow maps for direct-light shadows
• A normal-mapped lightmap (Valve's technique, but with only two targets: ambient color and direction) for indirect diffuse on static geometry
• SH for indirect specular on all geometry, plus indirect diffuse on dynamic geometry
• "The Last of Us"-style occluders for dynamic indirect shadows
• Parallax cubemaps, planar reflections, and SSAO for direct specular (planar reflections being mostly used for water)

I just want to verify that these techniques are reasonable and won't cause issues.

6 hours ago, JoeJ said:

There are other options besides SH, e.g. Spherical Gaussians (https://mynameismjp.wordpress.com/2016/10/09/new-blog-series-lightmap-baking-and-spherical-gaussians/), low-res cubemaps (Valve's ambient cube), or sphere maps.

I saw this, but from what I understand, spherical Gaussians are just a broken-down SH (where the directions aren't intrinsically calculated but provided). They don't seem to be much of a solution.

1 hour ago, KarimIOAH said:
7 hours ago, JoeJ said:

You can have lightmaps where each texel is an SH instead of just a single color. The advantage would be directional information, so you can apply indirect light to normal-mapped surfaces properly.

But then this isn't a lightmap; it's just SH projected onto surfaces

The SH data is already generated from the surface, so there's no need for projection. Your original question was lightmap or SH, but you can store SHs in the lightmap (which would look better than just ambient plus a primary light direction, but whether that's worth the memory depends on the game).

1 hour ago, KarimIOAH said:

My current plan is to use:

• CSM + regular shadow maps for direct-light shadows
• A normal-mapped lightmap (Valve's technique, but with only two targets: ambient color and direction) for indirect diffuse on static geometry
• SH for indirect specular on all geometry, plus indirect diffuse on dynamic geometry
• "The Last of Us"-style occluders for dynamic indirect shadows
• Parallax cubemaps, planar reflections, and SSAO for direct specular (planar reflections being mostly used for water)

I just want to verify that these techniques are reasonable and won't cause issues.

That's a large collection of hacks... of course it will cause issues, I guess :)

But it looks pretty state of the art to me, so see how it goes... (I do not understand the distinction between SH indirect specular and cubemaps for reflections; you might want to explain that in more detail.)

2 hours ago, KarimIOAH said:

 

8 hours ago, JoeJ said:

There are other options besides SH, e.g. Spherical Gaussians (https://mynameismjp.wordpress.com/2016/10/09/new-blog-series-lightmap-baking-and-spherical-gaussians/), low-res cubemaps (Valve's ambient cube), or sphere maps.

I saw this, but from what I understand, spherical Gaussians are just a broken-down SH (where the directions aren't intrinsically calculated but provided). They don't seem to be much of a solution.

I agree it does not matter that much, but I see advantages of SG vs. SH. What matters is the idea of storing the entire environment at dense locations on the surface to achieve the highest quality, and being able to calculate the full BRDF from that data without the need to read from some cubemaps, do some screen-space ray tracing, do some other traces in volume data, and whatever other hacks we come up with.

1 hour ago, JoeJ said:

The SH data is already generated from the surface, so there's no need for projection. Your original question was lightmap or SH, but you can store SHs in the lightmap (which would look better than just ambient plus a primary light direction, but whether that's worth the memory depends on the game).

Makes sense. The only annoying bit is that I think you'll store half the information pointing into the surface rather than away from it, which seems like a waste.

1 hour ago, JoeJ said:

That's a large collection of hacks... of course it will cause issues, I guess :)

Well, all of realtime CGI is hacks xD But what do you mean more specifically? Do you see any problems with it?

1 hour ago, JoeJ said:

(I do not understand the distinction between SH indirect specular and cubemaps for reflections; you might want to explain that in more detail.)

Well, you've got both diffuse and specular, each based off direct and indirect light. For specular, I want to handle direct with SSAO + cubemaps, and ambient with SH, because low-res cubemaps aren't very good for ambient.

 

20 minutes ago, KarimIOAH said:

Makes sense. The only annoying bit is that I think you'll store half the information pointing into the surface rather than away from it, which seems like a waste.

Yes, that's one argument for SG (you could align all its directions to the hemisphere) or tiny sphere maps (which is what I do, but in realtime: a 4x4-texel environment per lightmap texel; even 2x2 would be better than a primary light direction if the scene is colorful).

It's also possible to modify the SH idea so it only covers the hemisphere, but I guess the win is not worth the effort of fully understanding the math :)

31 minutes ago, KarimIOAH said:

For specular, I want to handle direct with SSAO + cubemaps, and ambient with SH.

I'm still confused. I would define 'direct specular' as the reflection of light sources, which cannot practically be stored in SH.

I would define 'ambient specular' as the reflection of the lit scene excluding light sources.

So how and why would you blend between cubemap and SH representations of the same thing? Why SH at all and not just a cube mip? (Or 4 samples from a higher cube mip, to fight the cube's axis alignment.)

Maybe this indicates a flaw in your design, but maybe I just miss something (I know more about generating the data than using it).

53 minutes ago, JoeJ said:

Yes, that's one argument for SG (you could align all its directions to the hemisphere) or tiny sphere maps (which is what I do, but in realtime: a 4x4-texel environment per lightmap texel; even 2x2 would be better than a primary light direction if the scene is colorful).

It's also possible to modify the SH idea so it only covers the hemisphere, but I guess the win is not worth the effort of fully understanding the math :)

I don't think you'll be saving much with SG, considering you still need to include the direction.

Anyhoo, you were right, I just got confused xD I'm actually working on something else, so I didn't get my facts straight. But yeah: SSAO/cubemaps for indirect specular, SH for indirect diffuse, maybe with an option for lightmaps instead. And maybe, in addition, some options for dynamic indirect diffuse, whether through voxel GI or otherwise.

Thanks, JoeJ!

11 hours ago, KarimIOAH said:

SSAO/cubemaps for indirect specular, SH for indirect diffuse, maybe with an option for lightmaps instead. And maybe, in addition, some options for dynamic indirect diffuse, whether through voxel GI or otherwise.

Sounds good. I liked CryEngine's dynamic voxel GI approach - IIRC it's essentially reflective shadow maps using voxels for occlusion, so still expensive, but the voxels require only one bit each. Details are on their manual pages.

On the precalculated side, this one is interesting (static geometry but dynamic lighting): https://users.aalto.fi/~silvena4/Projects/RTGI/index.html


I was considering only using those kinds of approaches on smaller areas, if at all (RSM, voxel GI, and the like would be supported in volumes), because they're so expensive. But I'll check out your link, thanks!


I just wanted to chime in on a few things, since I've lost too much of my time to this particular subject. :)

  • I'm sure plenty of games still bake the direct contribution for at least some of their lights. We certainly did this for The Order, and did it again for Lone Echo. Each of our lights has flags that control whether or not the diffuse and indirect lighting is baked, so the lighting artists could choose to fully bake the light if it was unimportant and/or it would only ever need to affect static geometry. We also always baked area lights, since we didn't have run-time support for area lights on either of those games. For the sun shadows we also bake the shadow term to a separate lightmap. We'll typically use this for surfaces that are past the last cascade of dynamic runtime shadows, so that they still have something to fall back on. Here's a video if you want to see what it looks like in practice: https://youtu.be/zxPuZYMIzuQ?t=5059
  • It's common to store irradiance in a lightmap (or possibly a distribution of irradiance values about a hemisphere in modern games), but if you want to compute a specular term then you need to store a radiance distribution in your lightmap. Radiance tells you "if I pick a direction, how much light is coming in from that direction?", while irradiance tells you "if I have a surface with a normal oriented in this direction, what's the total amount of cosine-weighted light hitting the surface?". You can use irradiance to reconstruct Lambertian diffuse (since the BRDF is just a constant term), but that's about it. Any more complicated BRDFs, including specular BRDFs, require that you calculate Integral(radiance * BRDF) over all directions on the hemisphere surrounding the surface you're shading. How to do this efficiently depends entirely on the basis function that you use to approximate radiance in your lightmap.
  • If you want SH but only on a hemisphere, then you can check out H-basis. It's basically SH reformulated to only exist on the hemisphere surrounding the Z axis, and there's a simple conversion from SH -> H-basis. You can also project directly into H-basis if you want to. I have some shader code here for projecting and converting. You can also do a least-squares fitting on SH to get coefficients that are optimized for the upper hemisphere. That said, I'm sure you would be fine with the Last of Us approach of ambient + dominant direction (I believe they kept using that on Uncharted 4), but it's nice to know all of your options before making a decision.
  • You don't necessarily have to store directions for a set of SGs in a lightmap. We assume a fixed set of directions in tangent space, which saves on storage and makes the solve easier. But that's really the nicest part of SGs: you have a lot of flexibility in how you can use them, as opposed to SH, which has a fixed set of orthogonal basis functions. For instance you could store a direction for one SG, and use implicit directions for the rest. (There's a small sketch of evaluating fixed-direction SGs just after this list.)
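To illustrate the fixed-direction idea, here is a minimal sketch of evaluating such a set of SGs for diffuse. The lobe axes, sharpness, and the simple clamped-cosine weighting are illustrative assumptions (a proper implementation would use an SG-times-clamped-cosine inner product, as covered in MJP's series), not the actual setup from The Order or Lone Echo:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
static float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// With fixed tangent-space axes, only the per-lobe amplitudes need to be
// stored in the lightmap; the axes and sharpness live in shader constants.
struct SGLobe { Vec3 axisTS; float sharpness; };

// Energy of a unit-amplitude SG over the sphere:
// integral of exp(lambda * (dot(axis, w) - 1)) dw = 2*pi/lambda * (1 - e^(-2*lambda)).
static float SGEnergy(float lambda)
{
    return 2.0f * 3.14159265f / lambda * (1.0f - std::exp(-2.0f * lambda));
}

// Crude diffuse evaluation: weight each lobe's energy by a clamped cosine
// against the tangent-space shading normal. A stand-in for the exact
// SG * clamped-cosine inner product.
float EvalSGDiffuse(const SGLobe* lobes, const float* amplitudes,
                    int count, Vec3 normalTS)
{
    float result = 0.0f;
    for (int i = 0; i < count; ++i)
    {
        float cosTerm = std::fmax(Dot(lobes[i].axisTS, normalTS), 0.0f);
        result += amplitudes[i] * SGEnergy(lobes[i].sharpness) * cosTerm;
    }
    return result / 3.14159265f; // Lambertian BRDF = albedo/pi (albedo omitted)
}
```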


Sorry for the late reply!

On 1/22/2018 at 5:22 AM, MJP said:

I'm sure plenty of games still bake the direct contribution for at least some of their lights. We certainly did this for The Order, and did it again for Lone Echo. Each of our lights has flags that control whether or not the diffuse and indirect lighting is baked, so the lighting artists could choose to fully bake the light if it was unimportant and/or it would only ever need to affect static geometry.

Fair enough, I'll try to break down the granularity of choice to your level.

On 1/22/2018 at 5:22 AM, MJP said:

We also always baked area lights, since we didn't have run-time support for area lights on either of those games.

I'll look into it eventually. I'm sorry to say I haven't played The Order or Lone Echo (I don't have a PlayStation for the former, and I'm pretty broke either way :P), but I'm pretty sure The Order doesn't have much need for area lights given its setting. I'm hoping to generalize my engine a bit, so I'll probably want to handle them eventually, but it's not a priority right now.

On 1/22/2018 at 5:22 AM, MJP said:

For the sun shadows we also bake the shadow term to a separate lightmap. We'll typically use this for surfaces that are past the last cascade of dynamic runtime shadows, so that they still have something to fall back on.

I'm not sure I agree with this method, though I haven't tried it out, so I can't say much about it.

On 1/22/2018 at 5:22 AM, MJP said:

if you want to compute a specular term then you need to store a radiance distribution in your lightmap

I don't get what you mean by this. Specular irradiance is very rarely handled in lightmaps as far as I'm aware; I don't really see the point of it.

On 1/22/2018 at 5:22 AM, MJP said:

If you want SH but only on a hemisphere, then you can check out H-basis. It's basically SH reformulated to only exist on the hemisphere surrounding the Z axis, and there's a simple conversion from SH -> H-basis. You can also project directly into H-basis if you want to. I have some shader code here for projecting and converting. You can also do a least-squares fitting on SH to get coefficients that are optimized for the upper hemisphere. That said, I'm sure you would be fine with the Last of Us approach of ambient + dominant direction (I believe they kept using that on Uncharted 4), but it's nice to know all of your options before making a decision.

Sounds quite interesting! I recently found a Call of Duty paper breaking down their use of hemispheres in lightmaps (among other things), so I'll check out all these sources a bit later, when I've actually got a preliminary lightmap working.

On 1/22/2018 at 5:22 AM, MJP said:

You don't necessarily have to store directions for a set of SGs in a lightmap. We assume a fixed set of directions in tangent space, which saves on storage and makes the solve easier. But that's really the nicest part of SGs: you have a lot of flexibility in how you can use them, as opposed to SH, which has a fixed set of orthogonal basis functions. For instance you could store a direction for one SG, and use implicit directions for the rest.

Very good point I hadn't considered.

-------

On a separate note, there's one thing I can't figure out: how can I use multiple irradiance probes (SH) on a single mesh? Do I have to pass a statically sized array of SH primitives? I'm worried that'll take up far too much memory.

11 minutes ago, KarimIOAH said:

I don't get what you mean by this. Specular irradiance is very rarely handled in lightmaps as far as I'm aware; I don't really see the point of it.

If you want glossy surfaces to look right, you need to be able to reproduce both diffuse and specular lighting for lightmapped surfaces. Sure, using probes for reflections is common, but not suitable everywhere. IIRC, The Order used probes for extremely glossy surfaces and lightmaps for everything else, as lightmaps have better spatial density than probes. COD (Black Ops?) "normalises" their probes by dividing by the average colour, and then recolours them using lightmap data to make it seem like their probes are higher density than they really are. Many games retrieve the dominant lighting direction / colour and use it to do a specular highlight from a fake directional light. Unity has a light-baking mode that stores the dominant lighting direction for the same purpose. With the HL2 basis you can use a weighted average of the three basis vectors/lightmap values to get a fake specular light direction. On an early PS3 game we stored the direction to the closest baked point light in the lightmap for doing specular. At the least, you would probably want some kind of specular occlusion value in your lightmaps these days to modulate your probes with.
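As an aside, that HL2-basis trick might look something like this minimal sketch (the basis vectors are Valve's standard tangent-space directions; the luminance weighting is an illustrative choice):

```cpp
struct Vec3 { float x, y, z; };

// Valve's HL2 tangent-space basis: three directions tilted up from the
// surface, 120 degrees apart in azimuth.
static const Vec3 kHL2Basis[3] = {
    {  0.8164966f,  0.0f,        0.5773503f },  // (  sqrt(2/3),          0, 1/sqrt(3) )
    { -0.4082483f,  0.7071068f,  0.5773503f },  // ( -1/sqrt(6),  1/sqrt(2), 1/sqrt(3) )
    { -0.4082483f, -0.7071068f,  0.5773503f },  // ( -1/sqrt(6), -1/sqrt(2), 1/sqrt(3) )
};

static float Luminance(Vec3 c) { return 0.2126f * c.x + 0.7152f * c.y + 0.0722f * c.z; }

// Weight the three basis directions by the brightness of their lightmap
// colors; the (unnormalized) result approximates a dominant light direction
// in tangent space, usable for a fake specular highlight.
Vec3 FakeSpecularDirection(const Vec3 lightmapColor[3])
{
    Vec3 d = { 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < 3; ++i)
    {
        float w = Luminance(lightmapColor[i]);
        d.x += w * kHL2Basis[i].x;
        d.y += w * kHL2Basis[i].y;
        d.z += w * kHL2Basis[i].z;
    }
    return d; // normalize before use
}
```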

20 minutes ago, KarimIOAH said:

On a separate note, there's one thing I can't figure out: how can I use multiple irradiance probes (SH) on a single mesh? Do I have to pass a statically sized array of SH primitives? I'm worried that'll take up far too much memory.

That's similar to the problem of using many lights on one mesh. You can break the screen into tiles and store a list per tile. You can put them all in a big buffer and store indices per vertex, or just pre-bake/blend them into the vertices. You can store a list of them per mesh in a cbuffer. You can put them in a 3D volume texture and do linear filtered samples.
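For instance, the "big buffer plus per-mesh list" option might look roughly like this sketch (sizes, names, and the distance-based weights are all illustrative):

```cpp
#include <cstdint>
#include <vector>

struct SHProbe { float coeffs[9][3]; };   // 3-band SH, RGB

struct MeshProbeSet
{
    static const int kMaxProbes = 8;      // small fixed cap, cbuffer friendly
    uint32_t probeIndex[kMaxProbes];      // indices into the big probe array
    float    weight[kMaxProbes];          // e.g. from distance falloff
    int      count;
};

// Blend the mesh's probes into one SH that the shader then evaluates.
SHProbe BlendProbes(const std::vector<SHProbe>& allProbes, const MeshProbeSet& set)
{
    SHProbe out = {};
    float total = 0.0f;
    for (int i = 0; i < set.count; ++i) total += set.weight[i];
    if (total <= 0.0f) return out;

    for (int i = 0; i < set.count; ++i)
    {
        float w = set.weight[i] / total;
        const SHProbe& p = allProbes[set.probeIndex[i]];
        for (int c = 0; c < 9; ++c)
            for (int ch = 0; ch < 3; ++ch)
                out.coeffs[c][ch] += w * p.coeffs[c][ch];
    }
    return out;
}
```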

12 minutes ago, Hodgman said:

If you want glossy surfaces to look right, you need to be able to reproduce both diffuse and specular lighting for lightmapped surfaces. Sure, using probes for reflections is common, but not suitable everywhere. IIRC, The Order used probes for extremely glossy surfaces and lightmaps for everything else, as lightmaps have better spatial density than probes. COD (Black Ops?) "normalises" their probes by dividing by the average colour, and then recolours them using lightmap data to make it seem like their probes are higher density than they really are. Many games retrieve the dominant lighting direction / colour and use it to do a specular highlight from a fake directional light. Unity has a light-baking mode that stores the dominant lighting direction for the same purpose. With the HL2 basis you can use a weighted average of the three basis vectors/lightmap values to get a fake specular light direction. On an early PS3 game we stored the direction to the closest baked point light in the lightmap for doing specular. At the least, you would probably want some kind of specular occlusion value in your lightmaps these days to modulate your probes with.

Oh alright, I get what you're talking about, but it took me until halfway through the paragraph :P It's also required for normal mapping, though, right? But my confusion is what it has to do with specular specifically, and with glossy surfaces at all.

19 minutes ago, Hodgman said:

That's similar to the problem of using many lights on one mesh. You can break the screen into tiles and store a list per tile. You can put them all in a big buffer and store indices per vertex, or just pre-bake/blend them into the vertices. You can store a list of them per mesh in a cbuffer. You can put them in a 3D volume texture and do linear filtered samples.

Dammit. I was looking forward to programming the SH solution until now :P So far I'm not using a tile- or cluster-based deferred solution, just basic deferred rendering, so the first suggestion would be further down the line. I can't see the second suggestion working on anything with little tessellation. The third suggestion sounds very restrictive, and that kind of restriction is one of the main reasons we abandoned forward lighting in general. The last suggestion is also quite restrictive, as I don't want to be reliant on uniform probe grids, but it's probably the one I'll try until I can get the first working.

1 hour ago, KarimIOAH said:

Oh alright, I get what you're talking about, but it took me until halfway through the paragraph :P It's also required for normal mapping, though, right? But my confusion is what it has to do with specular specifically, and with glossy surfaces at all.

 

Cubemaps are only sampled from one spatial point, maybe two or so if you're blending between them. An H-basis lightmap, say, would sample light at each texel. You just contribute whatever specular response you can from your spherical harmonics to help with the fact that the cubemap is almost certainly going to be incorrect to some degree. For rough surfaces the entire specular response can come from the lightmap, and thus (except for dynamic stuff) be entirely correct position-wise.

Doing all this helps correlate your diffuse color with your specular response, which will become uncorrelated the more incorrect your cubemaps become.

BTW, if you're curious, I'd consider "state of the art" to be Remedy's sparse SH grid used in Quantum Break: https://users.aalto.fi/~silvena4/Publications/SIGGRAPH_2015_Remedy_Notes.pdf

The idea is to voxelize your level into a sparse voxel grid, then place SH (or SG, or whatever) probes at each relevant grid point. The overall spatial resolution is less than a lightmap's, but it's much easier to change the lighting in realtime, and it uses the exact same lighting terms for static and dynamic objects. It might not seem intuitive, but having a uniform lighting response across all objects gives a nice look compared to the kind of disjointed look you get from high-detail lightmaps sitting right next to dynamic objects with less detailed indirect lighting.

2 hours ago, KarimIOAH said:

just basic deferred rendering,

You can treat your probes like typical deferred point lights in that case. To solve the issue where multiple of these "ambient point lights" overlap, you can have each of them add 1.0 to the alpha channel, and then divide the lighting buffer by alpha after drawing all the ambient lights but before any normal lights.
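Written out as a sketch (a CPU loop for clarity; in practice this would be a full-screen pass), the normalization step might look like this:

```cpp
// Each "ambient point light" accumulated its probe lighting into RGB and
// added 1.0 to alpha. Dividing by alpha afterwards averages the overlapping
// contributions instead of letting them sum up too bright.
struct Pixel { float r, g, b, a; };

void NormalizeAmbientLighting(Pixel* lightBuffer, int pixelCount)
{
    for (int i = 0; i < pixelCount; ++i)
    {
        Pixel& p = lightBuffer[i];
        if (p.a > 0.0f)  // pixels no ambient light touched stay black
        {
            p.r /= p.a;
            p.g /= p.a;
            p.b /= p.a;
            p.a = 1.0f;
        }
    }
}
```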

2 hours ago, KarimIOAH said:

It's also required for normal mapping, though, right? But my confusion is what it has to do with specular specifically, and glossy surfaces at all.

Yeah, even just the addition of normal maps means that you suddenly need some form of advanced lightmapping, unless your normal maps are lower resolution than your lightmaps...

So, at one extreme, you can imagine storing a full light probe at each texel of the lightmap - obviously not feasible, but it does let you do correct lighting. At the other extreme, you reduce all of those probes to a single RGB value - this lets you have correct diffuse, as you can do the cosine weighting during baking (assuming no normal maps!), but it does not let you do specular at all. All of the above techniques are middle grounds that try to allow for normal maps (the normal is not known at baking time) and specular (you need to be able to integrate radiance over different sized/oriented cones at runtime).

15 hours ago, FreneticPonE said:

Cubemaps are only sampled from one spatial point, maybe two or so if you're blending between them. An H-basis lightmap, say, would sample light at each texel. You just contribute whatever specular response you can from your spherical harmonics to help with the fact that the cubemap is almost certainly going to be incorrect to some degree. For rough surfaces the entire specular response can come from the lightmap, and thus (except for dynamic stuff) be entirely correct position-wise.

But those issues can be mitigated with features like parallax cubemapping and SSAO. Not fully, to be sure, but when it's the high-frequency data that matters, I don't see how cramming in more low-frequency data can help all that much. And how would they even be combined?

15 hours ago, FreneticPonE said:

BTW, if you're curious, I'd consider "state of the art" to be Remedy's sparse SH grid used in Quantum Break: https://users.aalto.fi/~silvena4/Publications/SIGGRAPH_2015_Remedy_Notes.pdf

I skimmed through most of this and it's quite interesting so far. 

15 hours ago, FreneticPonE said:

It might not seem intuitive, but having a uniform lighting response across all objects gives a nice look compared to the kind of disjointed look you get from high-detail lightmaps sitting right next to dynamic objects with less detailed indirect lighting.

That was my thought process for the most part, but I think many games are able to combine lightmaps and SH quite well.

14 hours ago, Hodgman said:

You can treat your probes like typical deferred point lights in that case. To solve the issue where multiple of these "ambient point lights" overlap, you can have each of them add 1.0 to the alpha channel, and then divide the lighting buffer by alpha after drawing all the ambient lights but before any normal lights.

Sounds interesting! I'll try this out then. Do you have any articles about this technique?


Cubemaps only offer low-frequency spatial data (ultra low frequency, no matter how much angular frequency they offer). Invariably, the farther away you get from the sample point, or if something is just behind a pole or the like, the less correct the data will be, no matter how high the resolution. Lightmaps are ultra-high-frequency spatial data; even if their angular data is low frequency, it can still be more correct than a cubemap, no matter how many tricks you pull. And SSAO only works with on-screen data, and only for darkening things.

Most modern SH/SG lightmaps are used to somewhat correct or supplement cubemaps.

17 hours ago, KarimIO said:

Sounds interesting! I'll try this out then. Do you have any articles about this technique?

Check out this one. The author even supports runtime dynamic updates to the probes in a very efficient manner (only updating the lighting, without re-rendering the probes, via cubemap GBuffer caching):

http://codeflow.org/entries/2012/aug/25/webgl-deferred-irradiance-volumes/

8 hours ago, FreneticPonE said:

Cubemaps only offer low-frequency spatial data (ultra low frequency, no matter how much angular frequency they offer). Invariably, the farther away you get from the sample point, or if something is just behind a pole or the like, the less correct the data will be, no matter how high the resolution.

Check out this neat extension to probes that achieves awesome spatial resolution (at quite some expense...):

http://graphics.cs.williams.edu/papers/LightFieldI3D17/

http://casual-effects.com/research/McGuire2017LightField/index.html

13 hours ago, Hodgman said:

Check out this neat extension to probes that achieves awesome spatial resolution (at quite some expense...):

http://graphics.cs.williams.edu/papers/LightFieldI3D17/

http://casual-effects.com/research/McGuire2017LightField/index.html

Oof, I remember that second one. At that point, more traditional path tracing is just as fast or faster, doesn't have any missing-data problems, and would probably use less memory, as there'd be no multiple copies of the same data.

2 hours ago, FreneticPonE said:

Oof, I remember that second one. At that point, more traditional path tracing is just as fast or faster, doesn't have any missing-data problems, and would probably use less memory, as there'd be no multiple copies of the same data.

For complex scenes it gets expensive, but they do Sponza in half a millisecond (on good hardware, of course). In the same future direction, there's also this one, which is a great marriage of "lightmaps" and probes:

https://users.aalto.fi/~silvena4/Projects/RTGI/index.html

