SIGGRAPH 2013 Master Thread


Those are some mighty fine looking materials. Damn, you must have a lot of GPU horsepower lying around on these new consoles...

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]


Yeah, it's pretty nice. :D

Like most modern GPUs, there's plenty of ALU available while you're waiting on reads from memory, and complex BRDFs are a pretty straightforward way to take advantage of that.
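For anyone curious what that ALU-heavy work looks like, here's a minimal C++ sketch of a GGX microfacet specular term, the kind of "complex BRDF" being discussed. It's pure math with no texture fetches, which is exactly why it overlaps so well with memory reads. The particular D/G/F terms vary from studio to studio, so treat this as illustrative rather than any specific presentation's formulation:

    #include <algorithm>
    #include <cmath>

    struct Vec3 { float x, y, z; };
    static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // n = normal, v = view, l = light, h = half vector; roughness in [0,1],
    // f0 = reflectance at normal incidence. All vectors assumed normalized.
    float ggxSpecular(Vec3 n, Vec3 v, Vec3 l, Vec3 h, float roughness, float f0)
    {
        const float a  = roughness * roughness;
        const float a2 = a * a;
        const float nh = std::max(dot(n, h), 0.0f);
        const float nv = std::max(dot(n, v), 1e-4f);
        const float nl = std::max(dot(n, l), 1e-4f);
        const float vh = std::max(dot(v, h), 0.0f);

        // D: GGX (Trowbridge-Reitz) normal distribution function
        const float dTerm = nh * nh * (a2 - 1.0f) + 1.0f;
        const float d = a2 / (3.14159265f * dTerm * dTerm);

        // G: Smith shadowing-masking, Schlick-GGX approximation (k = a/2)
        const float k = a * 0.5f;
        const float g = (nv / (nv * (1.0f - k) + k)) * (nl / (nl * (1.0f - k) + k));

        // F: Schlick Fresnel
        const float f = f0 + (1.0f - f0) * std::pow(1.0f - vh, 5.0f);

        return d * g * f / (4.0f * nv * nl);
    }

A couple dozen multiplies, a divide or two, and a pow, all register-resident -- nothing in there touches memory.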

What are you guys doing for indirect specular?

There are details in the course notes, but the short answer is specular probes.

Graphics Programmer - Ready At Dawn Studios

I really don't love the specular probe thing, but I adore IBL, and it seems like specular probes are the closest we're going to get in real-time production in the near future. I would like to see more work on that front, though... the stuff out there still feels so hacked.

SlimDX | Ventspace Blog | Twitter | Diverse teams make better games. I am currently hiring capable C++ engine developers in Baltimore, MD.

Yeah, I'm not a fan of the probes either. You can set up cases where they work fairly well, but the standard methods for pre-integrating the specular BRDF result in very large error at grazing angles. And then of course there are all of the standard issues with placement, blending between probes, incorrect parallax, memory usage, and so on, which make probes very difficult to work with for both artists and programmers. We've been researching alternatives, but I'm not sure if we'll come up with something in the near future. I'd been kind of hoping that someone else would have a cool idea in this area, but so far I haven't heard of anything outside of the voxel cone-tracing stuff.
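To make the grazing-angle problem concrete, here's a rough sketch of the standard pre-integration step for a specular probe: importance-sample the GGX lobe and average the environment, once per roughness/mip level. The procedural sky is a stand-in for a real cubemap fetch and all the names are mine; the key line is the N = V = R simplification, which is what lets the result fit in a single cubemap mip chain, and also what throws away the long, stretched reflection lobe you should see at grazing incidence:

    #include <algorithm>
    #include <cmath>
    #include <cstdint>

    struct Vec3 { float x, y, z; };
    static Vec3  operator+(Vec3 a, Vec3 b) { return {a.x+b.x, a.y+b.y, a.z+b.z}; }
    static Vec3  operator*(Vec3 a, float s) { return {a.x*s, a.y*s, a.z*s}; }
    static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static Vec3  cross(Vec3 a, Vec3 b) { return {a.y*b.z-a.z*b.y, a.z*b.x-a.x*b.z, a.x*b.y-a.y*b.x}; }
    static Vec3  normalize(Vec3 v) { return v * (1.0f / std::sqrt(dot(v, v))); }

    // Stand-in for the source cubemap: a simple procedural gradient sky.
    static Vec3 sampleEnvironment(Vec3 d) { float t = 0.5f * (d.y + 1.0f); return {t, t, 1.0f}; }

    // Van der Corput radical inverse, for a Hammersley low-discrepancy sequence.
    static float radicalInverse(uint32_t bits)
    {
        bits = (bits << 16u) | (bits >> 16u);
        bits = ((bits & 0x55555555u) << 1u) | ((bits & 0xAAAAAAAAu) >> 1u);
        bits = ((bits & 0x33333333u) << 2u) | ((bits & 0xCCCCCCCCu) >> 2u);
        bits = ((bits & 0x0F0F0F0Fu) << 4u) | ((bits & 0xF0F0F0F0u) >> 4u);
        bits = ((bits & 0x00FF00FFu) << 8u) | ((bits & 0xFF00FF00u) >> 8u);
        return bits * 2.3283064365386963e-10f; // 1 / 2^32
    }

    // Importance-sample a GGX half vector around n.
    static Vec3 importanceSampleGGX(float u1, float u2, float roughness, Vec3 n)
    {
        const float a        = roughness * roughness;
        const float phi      = 6.2831853f * u1;
        const float cosTheta = std::sqrt((1.0f - u2) / (1.0f + (a*a - 1.0f) * u2));
        const float sinTheta = std::sqrt(1.0f - cosTheta * cosTheta);
        const Vec3  hLocal   = { sinTheta * std::cos(phi), sinTheta * std::sin(phi), cosTheta };
        const Vec3  up = std::fabs(n.z) < 0.999f ? Vec3{0, 0, 1} : Vec3{1, 0, 0};
        const Vec3  tx = normalize(cross(up, n));
        const Vec3  ty = cross(n, tx);
        return normalize(tx * hLocal.x + ty * hLocal.y + n * hLocal.z);
    }

    Vec3 prefilterEnvMap(Vec3 r, float roughness, int numSamples)
    {
        // The standard simplification: assume N = V = R. This makes the result
        // storable per direction, and is the source of the grazing-angle error.
        const Vec3 n = r, v = r;
        Vec3 sum = {0, 0, 0};
        float weight = 0.0f;
        for (int i = 0; i < numSamples; ++i) {
            const float u1 = (i + 0.5f) / float(numSamples);
            const float u2 = radicalInverse(uint32_t(i));
            const Vec3  h  = importanceSampleGGX(u1, u2, roughness, n);
            const Vec3  l  = normalize(h * (2.0f * dot(v, h)) + v * -1.0f); // reflect v about h
            const float nl = dot(n, l);
            if (nl > 0.0f) { sum = sum + sampleEnvironment(l) * nl; weight += nl; }
        }
        return sum * (1.0f / std::max(weight, 1e-4f));
    }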

One of the papers that caught my eye this year in a bad way was the pre-computed cloth simulation. Essentially they took a (small) set of known character animations, put a cloak on the character, and precomputed every potential pose for the cloth and saved it in some kind of compressed 70 MB graph structure. Their proposal is that instead of trying to run cloth at runtime, you do it offline (4500 hours for this, if memory serves) and look up the results in the structure.
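Going purely off that description, the runtime side would presumably look something like the sketch below: no simulation at all, just walking a baked graph and decompressing a stored pose. All of the names and the layout here are made up for illustration; I have no idea what their actual structure looks like.

    #include <cstdint>
    #include <vector>

    // One precomputed cloth state: quantized vertex positions, decompressed on use.
    struct ClothPose {
        std::vector<uint16_t> quantizedPositions;
    };

    // A node per reachable cloth state, with one outgoing edge per possible
    // animation transition out of the pose that produced it.
    struct GraphNode {
        ClothPose pose;
        std::vector<uint32_t> nextNode; // indices into ClothGraph::nodes
    };

    struct ClothGraph {
        std::vector<GraphNode> nodes;
        uint32_t current = 0;

        // Advance one animation frame: no cloth solve, just follow the baked edge.
        const ClothPose& step(uint32_t animTransitionIndex) {
            current = nodes[current].nextNode[animTransitionIndex];
            return nodes[current].pose;
        }
    };

The lookup itself is obviously cheap; the problem is everything around it -- 70 MB of graph resident in memory, and thousands of offline hours every time an animator touches the source set.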

The paper tries to pitch this for upcoming-generation real-time use, which I find comical. The set of animations is small, the data is enormous to be traversing at runtime, and betting memory against compute power just as GPU compute is coming fully into its own is tone-deaf at best -- if not completely insane. (Never mind the implications for a production pipeline when the animation requires several thousand hours to regenerate.) While the technical aspects of the work may be well done, I really hate the core idea, and IMO it's a good example of bad academic work being pitched to industry.

Part of my mind went towards "how about for mobile?" But then, nah... I really think it's the wrong approach. Still, the data structure might have some interesting information in it.

If that's not the help you're after then you're going to have to explain the problem better than what you have. - joanusdmentia

My Page davepermen.net | My Music on Bandcamp and on Soundcloud

Geomerics, a third-party GI middleware company, had this video I stumbled across a while ago: vimeo.com/geomerics/review/60838484/d2817c3548 (it doesn't want to embed). Maybe it's some environment-probe hack, but it sure doesn't look like it.

It also reminds me of something Epic did. For their Samaritan demo they managed to pre-bake static occlusion for their impostor reflections. I remember a blog post mentioning something about tracing against a point cloud, but if so, I can't find what paper that was gleaned from. Still, while it was static only and in the demo only gave occlusion info, it wasn't an environment probe, was high resolution, worked with arbitrary materials, and was presumably faster than voxel cone tracing (though that's just an assumption).


IIRC, they bake the scene geometry into a 3D signed distance field. Tracing rays through that kind of structure is very efficient, except in cases where a ray comes close to a boundary but doesn't quite touch it. When a ray travels nearly parallel to a surface, very close to it, the trace becomes extremely inefficient, but all other cases are pretty good...
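A minimal sketch of that kind of sphere tracing, with a toy analytic SDF standing in for the baked 3D texture, shows exactly why that grazing case hurts: the safe step size is the distance to the nearest surface, so a ray skimming along a wall takes hundreds of tiny steps.

    #include <cmath>

    struct Vec3 { float x, y, z; };
    static Vec3  add(Vec3 a, Vec3 b) { return {a.x+b.x, a.y+b.y, a.z+b.z}; }
    static Vec3  mul(Vec3 a, float s) { return {a.x*s, a.y*s, a.z*s}; }
    static float length(Vec3 v) { return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); }

    // Toy SDF: a unit sphere at the origin. The real thing samples a 3D texture.
    static float sceneSDF(Vec3 p) { return length(p) - 1.0f; }

    // March along the ray, stepping by the distance to the nearest surface.
    // Returns distance to the hit, or -1 on a miss (or on giving up).
    float sphereTrace(Vec3 origin, Vec3 dir, float maxDist)
    {
        float t = 0.0f;
        for (int i = 0; i < 128; ++i) {   // iteration cap bounds the bad grazing cases
            const float d = sceneSDF(add(origin, mul(dir, t)));
            if (d < 1e-3f) return t;      // close enough to a surface: call it a hit
            t += d;                       // safe step: cannot overshoot any surface
            if (t > maxDist) return -1.0f;
        }
        return -1.0f;
    }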

(Again, IIRC:) They trace reflections against a global cube-map, and dozens of reflection quads (either hand-drawn textures, like the neon lights, or slices of geometry that have been captured with artist-placed ortho frustums), using that 3D geometry texture to deal with occlusions.
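Putting those two pieces together, the quad tracing would presumably look something like this (reusing Vec3, add, mul, and sphereTrace from the sketch above; the structure and names are my guesses, not Epic's actual code):

    static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static Vec3  sub(Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }

    // An artist-placed, textured reflection quad in world space.
    struct ReflectionQuad {
        Vec3 center, normal, tangentU, tangentV;
        float halfWidth, halfHeight;
    };

    // Ray/quad intersection: plane hit, then a bounds test in the quad's frame.
    // Returns distance along the ray, or -1 on a miss.
    float intersectQuad(Vec3 origin, Vec3 dir, const ReflectionQuad& q)
    {
        const float denom = dot(dir, q.normal);
        if (std::fabs(denom) < 1e-6f) return -1.0f;      // ray parallel to the quad
        const float t = dot(sub(q.center, origin), q.normal) / denom;
        if (t <= 0.0f) return -1.0f;                     // quad is behind the ray
        const Vec3 local = sub(add(origin, mul(dir, t)), q.center);
        if (std::fabs(dot(local, q.tangentU)) > q.halfWidth)  return -1.0f;
        if (std::fabs(dot(local, q.tangentV)) > q.halfHeight) return -1.0f;
        return t;
    }

    // Usage: the quad only contributes if the distance field doesn't block it.
    //   float tQuad = intersectQuad(p, reflectedDir, quad);
    //   if (tQuad > 0.0f && sphereTrace(p, reflectedDir, tQuad) < 0.0f)
    //       /* sample the quad's texture at the hit point */;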

[edit] P.S. Congrats on the awesome presentation and work, MJP :D

I thought I had remembered someone telling me that Geomerics was generating cubemaps at runtime, but I have no particular insight into what they're doing these days. And Hodgman is correct: Epic traced signed distance fields to get static occlusion for the Samaritan demo, coupled with an even more expensive solution that allowed dynamic objects to occlude reflections on planar surfaces. I'm not sure if they're still doing that for UE4, since it sounds fairly expensive.
