vlj

Object Space Lighting


Recommended Posts

Oxide Games released the slides of their GDC presentation:

http://oxidegames.com/2016/03/19/object-space-lighting-following-film-rendering-2-decades-later-in-real-time/

 

As far as I understand, OSL moves all of the lighting into texture space. For every object in the scene, a shaded texture is produced using world-space normal and position textures, and then the normal rasterization pass occurs.
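If I read it right, the final on-screen pass then reduces to a plain textured draw. A minimal sketch of that fragment stage (the names are mine, not Oxide's):

    #version 450

    // Final rasterization pass: no lighting here, just sample the region
    // of the shared atlas that the shading pass already filled for this object.
    in vec2 vAtlasUV;                // object UVs remapped into its atlas sub-rect

    uniform sampler2D uShadedAtlas;  // pre-shaded colors, written earlier

    layout(location = 0) out vec4 outColor;

    void main()
    {
        outColor = texture(uShadedAtlas, vAtlasUV);
    }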

There are several benefits: first, MSAA is quite straightforward, since from the rasterizer's point of view it's just a forward pass. Of course, shading now occurs in texture space rather than screen space, but with correct mipmap selection the difference can be made barely noticeable.

A second benefit is that shading doesn't even need to be recomputed every frame, which makes it a good candidate for async compute. Shading doesn't change much in texture space even when a unit is moving.
 

On the other hand, there are several points that I didn't find addressed in the slides. Since no two objects can share a shaded texture, I assume memory consumption must be high. The provided benchmark numbers show a Fury X beating a 980 Ti by a not-so-small margin; I'm guessing this is due not only to async compute but also to high memory bandwidth pressure.

Another point is the authoring pipeline: every object must have "well flattened" textures. Flattening textures is generally difficult and often requires manual tweaking.

 

What do you think ?


I went over the presentation yesterday, and it's fascinating. I'm all in favor of exploring alternative approaches to shading, and this is definitely a very different set of trade-offs. That said, I don't get the comparisons to REYES and overall it seems like a very special purpose, application-specific approach to rendering. They spend a good amount of time talking about fixing it to work for terrain, after all. What on earth would happen to it if you gave it general purpose world geometry, like you'd see in a Battlefield game? And as you correctly mentioned, it's terribly sensitive to the structure of texture space for the models you feed it.

 

I simply can't see it as a basis for a general purpose rendering pipeline.



That said, I don't get the comparisons to REYES and overall it seems like a very special purpose, application-specific approach to rendering

 

Yeah, I agree that the frequent mentioning of REYES is misleading. The only real commonality with REYES is the idea of not shading per-pixel, and even in that regard REYES has a very different approach (dicing into micropolygons followed by stochastic rasterization).

 

I also agree that it's pretty well tailored to their specific style of game and the general requirements of that genre (big terrain, small meshes, almost no overdraw). I would imagine that to adopt something similar for more general scenes, you would need to do a much better job of allocating appropriately-sized tiles, and you would need to account for occlusion. I could see maybe going down the megatexture approach of rasterizing out tile IDs, and then analyzing that on the CPU or GPU to allocate memory for tiles. However, this implies latency, unless you do it all on the GPU and rasterize your scene twice. Doing it all on the GPU would rule out any form of tiled resources/sparse textures, since you can't update page tables from the GPU.
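To make that concrete, a minimal sketch of such a feedback pass; the packing scheme and all names are hypothetical, nothing here is from the slides:

    #version 450

    // Feedback pass (megatexture style): rasterize the scene, and for each
    // pixel write out which virtual-texture tile it touches and at which
    // mip level. The output is then analyzed (CPU or GPU) to decide which
    // tiles to allocate memory for and shade.
    in vec2 vUV;                       // the object's flattened UVs
    flat in uint vObjectId;

    layout(location = 0) out uint outPageId;

    uniform vec2 uVirtualSizeTexels;   // virtual texture size in texels
    const float kTileSizeTexels = 128.0;

    void main()
    {
        // Standard mip selection from screen-space UV derivatives.
        vec2  dx  = dFdx(vUV) * uVirtualSizeTexels;
        vec2  dy  = dFdy(vUV) * uVirtualSizeTexels;
        float lod = max(0.0, 0.5 * log2(max(dot(dx, dx), dot(dy, dy))));

        // Tile coordinates at that mip level.
        uvec2 tile = uvec2(vUV * uVirtualSizeTexels / (kTileSizeTexels * exp2(floor(lod))));

        // Pack object, tile and mip into one ID for later analysis.
        outPageId = (vObjectId << 20) | (tile.y << 12) | (tile.x << 4) | uint(lod);
    }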

ptex would be nice for avoiding UV issues (it would also be nice for any kind of arbitrary-rate surface calculations, such as pre-computed lightmaps or real-time GI), but it's currently a PITA to use on the GPU (you need borders for HW filtering, and you need quad->page mappings and lookups).


To reiterate from Twitter: you could cull texture patches from a virtual texture atlas by tying the patches to something like poly clusters (DICE's GDC paper), cull the clusters, and then you'd know which texture patches to shade without a lot of overdraw.

 

I like the idea of separating out which shaders to run, but this just goes back to a virtualized texture atlas, then re-ordering patches into coherent tiles of common materials and running the shading on each tile. Eventually you'd just ditch the whole "pre-shading" part anyway, and it starts to look more like this stuff: https://t.co/hXCfJtnwWi


Poly clusters would also help to avoid the stitching problem: for each cluster, detect all visible mip levels and shade all of them.

 

An additional idea would be to decouple lighting and materials again and pre-shade only the lightmaps.
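A minimal sketch of what the final pass could look like with that split; only the lightmap is pre-shaded at its own rate, and the cheap material part runs per pixel (all names are hypothetical):

    #version 450

    // Lighting/material decoupling: the lightmap is pre-shaded in texture
    // space (asynchronously, at its own rate); albedo is applied at full
    // rate in the final fragment shader.
    in vec2 vUV;            // material UVs
    in vec2 vLightmapUV;    // lightmap chart UVs

    uniform sampler2D uPreShadedLightmap;  // updated rarely / async
    uniform sampler2D uAlbedo;

    layout(location = 0) out vec4 outColor;

    void main()
    {
        vec3 irradiance = texture(uPreShadedLightmap, vLightmapUV).rgb;
        vec3 albedo     = texture(uAlbedo, vUV).rgb;
        outColor = vec4(irradiance * albedo, 1.0);
    }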

There are lots of complexities: the need for automatic charting and clustering, different UVs for each mip level to save space, directional lightmaps to support some camera movement without a shading update, ...

 

But we really need to find ways to decouple shading rate from frame rate. Any hardware progress is more than eaten up by demands like 4K and VR.


I didn't get any of this. So they have a ton of 4K x 4K textures. What is in those textures? A bunch of pre-computed light maps?
 

They compute the screen-space size of an object.

Somehow this reserves some space in a 4K x 4K texture.

What is in the texture?

What becomes of the texture? Is the object re-drawn using its normal vertex data, applying the portion of the 4K texture to it?

Or does the 4K texture hold something else that is used in some other way?

 

 


They compute the screen-space size of an object.

Somehow this reserves some space in a 4K x 4K texture.

What is in the texture?

What becomes of the texture? Is the object re-drawn using its normal vertex data, applying the portion of the 4K texture to it?

Or does the 4K texture hold something else that is used in some other way?

 

The object is only drawn once.

 

What happens is that they allocate some space in the 4K x 4K texture based on the object's screen-space size (but without drawing it).

Then a compute shader fills this space.

Then the object is drawn on screen for real. The allocated space is passed as a texture to the fragment shader (which is very simple, i.e. "gl_FragColor = texture(tex0, uv);").

 

What happens is that the compute shader renders the shaded color into the suballocated space instead of doing it in the fragment shader. In this case it does all the lighting computation using normal and diffuse color textures. Since there is no vertex data, the position information is passed via a precomputed "position" texture as well (which means this technique makes deformable meshes quite impractical).
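In GLSL terms, my mental model of that compute pass is something like the sketch below. The atlas layout, the bindings, and the single point light are assumptions for illustration, not Oxide's actual code:

    #version 450
    layout(local_size_x = 8, local_size_y = 8) in;

    // Shades one texel of the object's suballocated rectangle in the atlas.
    // Position and normal come from precomputed object textures, which is
    // exactly why deformable meshes are a problem: they are baked per mesh.
    layout(binding = 0, rgba16f) uniform writeonly image2D uShadedAtlas;
    layout(binding = 1) uniform sampler2D uPositionTex;  // baked positions
    layout(binding = 2) uniform sampler2D uNormalTex;    // baked normals
    layout(binding = 3) uniform sampler2D uDiffuseTex;

    uniform ivec2 uRectOffset;  // suballocated rect inside the atlas
    uniform ivec2 uRectSize;
    uniform vec3  uLightPos;    // one point light, for brevity
    uniform vec3  uLightColor;

    void main()
    {
        ivec2 texel = ivec2(gl_GlobalInvocationID.xy);
        if (any(greaterThanEqual(texel, uRectSize))) return;

        vec2 uv       = (vec2(texel) + 0.5) / vec2(uRectSize);
        vec3 worldPos = textureLod(uPositionTex, uv, 0.0).xyz;
        vec3 normal   = normalize(textureLod(uNormalTex, uv, 0.0).xyz * 2.0 - 1.0);
        vec3 albedo   = textureLod(uDiffuseTex, uv, 0.0).rgb;

        // The real shader would loop over a light list; one light here.
        vec3  toLight = uLightPos - worldPos;
        float atten   = 1.0 / (1.0 + dot(toLight, toLight));
        float ndotl   = max(dot(normal, normalize(toLight)), 0.0);

        imageStore(uShadedAtlas, uRectOffset + texel,
                   vec4(albedo * uLightColor * ndotl * atten, 1.0));
    }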

They mention supporting a large number of point lights. Anyone want to wager a guess at how they determine which lights to evaluate when shading a texel?

Something like forward+ / tiled-deferred, where you break the "shading buffer" down into tiles, compute the bounds of each tile, then test every tile against every light to produce per-tile light lists?
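Entirely speculative, but that guess could look roughly like the compute sketch below: one workgroup per 16x16 tile of the shading buffer, reducing the tile's baked world positions to a bounding sphere, then testing every light against it. The layouts and limits are made up, and it assumes the buffer dimensions are multiples of 16:

    #version 450
    layout(local_size_x = 16, local_size_y = 16) in;

    struct PointLight { vec4 posRadius; };   // xyz = position, w = radius

    layout(binding = 0) uniform sampler2D uPositionTex;   // baked world positions
    layout(std430, binding = 1) readonly  buffer Lights    { PointLight lights[]; };
    layout(std430, binding = 2) writeonly buffer TileLists { uint tileLights[]; };

    uniform uint uLightCount;
    const uint MAX_LIGHTS_PER_TILE = 63u;    // +1 slot per tile holds the count

    shared vec3 sMin[256];
    shared vec3 sMax[256];
    shared uint sCount;

    void main()
    {
        uint idx = gl_LocalInvocationIndex;
        vec3 p = texelFetch(uPositionTex, ivec2(gl_GlobalInvocationID.xy), 0).xyz;
        sMin[idx] = p;
        sMax[idx] = p;
        if (idx == 0u) sCount = 0u;
        barrier();

        // Tree reduction to the tile's world-space bounding box.
        for (uint s = 128u; s > 0u; s >>= 1u) {
            if (idx < s) {
                sMin[idx] = min(sMin[idx], sMin[idx + s]);
                sMax[idx] = max(sMax[idx], sMax[idx + s]);
            }
            barrier();
        }
        vec3  center = 0.5 * (sMin[0] + sMax[0]);
        float radius = 0.5 * length(sMax[0] - sMin[0]);

        // Each thread strides over the lights; survivors go in the tile list.
        uint tile = gl_WorkGroupID.y * gl_NumWorkGroups.x + gl_WorkGroupID.x;
        uint base = tile * (MAX_LIGHTS_PER_TILE + 1u);
        for (uint i = idx; i < uLightCount; i += 256u) {
            if (distance(lights[i].posRadius.xyz, center) < lights[i].posRadius.w + radius) {
                uint slot = atomicAdd(sCount, 1u);
                if (slot < MAX_LIGHTS_PER_TILE)
                    tileLights[base + 1u + slot] = i;
            }
        }
        barrier();
        if (idx == 0u) tileLights[base] = min(sCount, MAX_LIGHTS_PER_TILE);
    }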

I actually did texture-space lighting for static geometry on a PS3/360 game, but just brute-forced it by evaluating every light for every texel... Because it was done asynchronously and only needed to be updated once every few minutes.

[edit] Scratch that. They mention putting thousands of lights into an octree and then just walking it per texel? Sounds expensive, but they say it isn't. I guess it would be similar to walking a linked list of lights...


That's what I thought it was, but I didn't get the benefit, because you can't run pixel shaders over several 4K textures every frame. You could choose not to update certain things, but anything moving/rotating has to be updated. And specular won't work if the camera is moving.
