
Vilem Otte

Member Since 11 May 2006
Offline Last Active Jul 22 2016 06:35 PM

Posts I've Made

In Topic: Area Lights with Deferred Renderer?

12 April 2016 - 08:32 PM

I actually implemented Arkano's method a long time ago (around the time he posted it) and extended it further.

 

The original idea there is to compute attenuation based on the distance from a given object (as far as I remember he used only planes, but this can easily be extended to spheres, tubes, and triangles). Specular lighting can be implemented in a nice (Phong-like) way by using a single ray to perform a "real" reflection of the geometric object. Diffuse still looks good when using just a single point (e.g. the center of the object), unless the light is too big. Use this with textures, ideally projecting them along with the diffuse (and using mip maps to blur them), and you've got an awesome lighting system.
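Below is a minimal sketch of the single-reflection-ray specular idea for a rectangular light, assuming glm and a hypothetical RectLight layout of my own (this is not Arkano's actual code):

    #include <glm/glm.hpp>

    struct RectLight {
        glm::vec3 center, normal;      // light plane (normal is unit length)
        glm::vec3 tangent, bitangent;  // in-plane unit axes
        glm::vec2 halfSize;            // half extents along tangent/bitangent
    };

    // Reflect the view direction at the shaded point and test whether the mirror
    // ray "sees" the light rectangle; if it does, 'hit' is the point on the light.
    bool specularHit(const RectLight& L, glm::vec3 P, glm::vec3 N, glm::vec3 V, glm::vec3& hit)
    {
        glm::vec3 R = glm::reflect(-V, N);           // ideal mirror direction
        float denom = glm::dot(R, L.normal);
        if (glm::abs(denom) < 1e-5f) return false;   // ray parallel to the light plane
        float t = glm::dot(L.center - P, L.normal) / denom;
        if (t <= 0.0f) return false;                 // light plane is behind the ray
        hit = P + t * R;
        glm::vec3 d = hit - L.center;
        return glm::abs(glm::dot(d, L.tangent))   <= L.halfSize.x &&
               glm::abs(glm::dot(d, L.bitangent)) <= L.halfSize.y;
    }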

 

If you want a more physically based solution, you should look at how it is done in physically based renderers (read: path tracers). What we do there is sample the light and calculate the diffuse term for each of the samples (this can also be done for the previously mentioned approach - calculating Lambert N times is still really cheap). Specular lighting could use the same trick, although the reflection mentioned above simply looks better, because unless you have a lot of samples you will get noise, which is a problem.
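A hedged sketch of that sampled diffuse term, reusing the hypothetical RectLight struct from the sketch above (the sample count and jitter source are arbitrary placeholders):

    #include <glm/glm.hpp>
    #include <random>

    // Average Lambert over N points scattered across the light's surface.
    // Each sample is just a dot product and a distance falloff, so this stays cheap.
    glm::vec3 sampledDiffuse(const RectLight& L, glm::vec3 P, glm::vec3 N,
                             glm::vec3 lightColor, int numSamples)
    {
        std::mt19937 rng(1234);
        std::uniform_real_distribution<float> u(-1.0f, 1.0f);
        glm::vec3 sum(0.0f);
        for (int i = 0; i < numSamples; ++i) {
            glm::vec3 S = L.center + u(rng) * L.halfSize.x * L.tangent
                                   + u(rng) * L.halfSize.y * L.bitangent;
            glm::vec3 toLight = S - P;
            float dist2 = glm::dot(toLight, toLight);
            toLight /= glm::sqrt(dist2);
            sum += lightColor * glm::max(glm::dot(N, toLight), 0.0f) / glm::max(dist2, 1e-4f);
        }
        return sum / float(numSamples);
    }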

 

The real problem here is shadows. Of course ray tracing is the ultimate answer (yet I assume your scene data are not stored in a way that makes real-time ray tracing possible), so you have to stick with shadow maps - actually omnidirectional shadow maps (cube shadow maps). For me, percentage-closer soft shadows (PCSS) worked really nicely; again it is not physically correct (though you can get quite close by modifying it), but it achieves nice, plausible shadows. Other than that, I've seen people get really good soft shadows using the shadow-volumes approach with penumbra wedges, but it was expensive as hell even for less complex scenes.
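The part of PCSS that makes the penumbra grow with occluder distance is a single similar-triangles estimate; a sketch of just that step (the blocker search and the PCF filtering around it are omitted, and the names are mine):

    // After the blocker search returns the average depth of occluders found around
    // the shaded point, the filter radius follows from similar triangles between
    // the light, the blocker and the receiver.
    float penumbraRadius(float receiverDepth, float avgBlockerDepth, float lightSize)
    {
        return lightSize * (receiverDepth - avgBlockerDepth) / avgBlockerDepth;
    }
    // The returned radius drives the PCF kernel size, so contact shadows stay sharp
    // while shadows farther from the occluder blur out.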

 

 

Example:

[Image: gpuray.png]

 

This is an interactively path-traced scene (at around 30-50 spp per second on the GPU) with an area light. Note the shadows (there are "caustics" here, though not very strong, as the material is not clear and the IoR is not high enough). You should be able to see that even for quite a small light the shadows are fully blurred for the parts that are closer to the light, which is a problem for practically any fast shadow-map algorithm these days.


In Topic: Reflections from area light sources

07 January 2016 - 08:51 AM

I have never read that paper, but I do have an implementation of area lights (which has solid performance, though it is definitely not 100% accurate).

 

The basic idea goes as follows - every area light (whatever its shape) can be decomposed into basic primitives (for me rectangles, i.e. quads, were enough, although I've decided to rework parts of this and support triangles directly). The ideas were implemented as follows:

 

Implementing a single rectangular untextured light - there are two light components that need to be calculated: diffuse and specular.

For specular I've used simple math - as I use a deferred renderer, I have the position and normal of each pixel in the scene; from there I calculate the reflection vector and do a ray cast. If I get an intersection, I can calculate the color using a BRDF-like function. Of course this yields a hard reflection, but one can always use multiple samples and randomness (based on surface roughness) to achieve rough surfaces. Nevertheless, with normal mapping it really looks good.
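A sketch of that roughness trick: jitter the reflection direction per sample, cast one ray per sample and average the results (the cone-spread mapping from roughness and the castRay callback are placeholders of mine, not the post's exact code):

    #include <glm/glm.hpp>
    #include <functional>
    #include <random>

    // Perturb the ideal mirror direction inside a cone that widens with roughness,
    // cast one ray per sample and average whatever the rays return.
    glm::vec3 roughReflection(glm::vec3 P, glm::vec3 N, glm::vec3 V, float roughness,
                              int numSamples,
                              const std::function<glm::vec3(glm::vec3, glm::vec3)>& castRay)
    {
        glm::vec3 R = glm::reflect(-V, N);
        std::mt19937 rng(42);
        std::uniform_real_distribution<float> u(-1.0f, 1.0f);
        glm::vec3 sum(0.0f);
        for (int i = 0; i < numSamples; ++i) {
            glm::vec3 jitter(u(rng), u(rng), u(rng));
            glm::vec3 dir = glm::normalize(R + roughness * jitter);
            if (glm::dot(dir, N) < 0.0f) dir = R;   // keep samples above the surface
            sum += castRay(P, dir);                 // e.g. the rectangle test against the light
        }
        return sum / float(numSamples);
    }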

 

For diffuse, it has been a bit trickier - for small area lights you can calculate Lambert against the center point of the light, and for attenuation use the distance to the closest point on the rectangle (which means projecting the point onto the light's plane and checking whether it lies within the bounds of the rectangle). For larger area lights, again, multiple points (i.e. sampling) work well.
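A sketch of that closest-point attenuation, with a hypothetical RectLight layout of my own (plane origin plus unit in-plane axes and half extents):

    #include <glm/glm.hpp>

    struct RectLight {
        glm::vec3 center, normal, tangent, bitangent;  // plane origin and unit axes
        glm::vec2 halfSize;                            // half extents along tangent/bitangent
    };

    // Express the shaded point in the light's in-plane axes, clamp to the half
    // extents (that is the "am I inside the rectangle" check), reconstruct the
    // clamped point and measure the distance to it.
    float distanceToRect(const RectLight& L, glm::vec3 P)
    {
        glm::vec3 d = P - L.center;
        float x = glm::clamp(glm::dot(d, L.tangent),   -L.halfSize.x, L.halfSize.x);
        float y = glm::clamp(glm::dot(d, L.bitangent), -L.halfSize.y, L.halfSize.y);
        glm::vec3 closest = L.center + x * L.tangent + y * L.bitangent;
        return glm::length(P - closest);
    }
    // Attenuation can then be something like 1 - distance / range, clamped to [0, 1].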

 

Now, there were three real challenges - changing the shape of the light, adding a projective texture for the light, and supporting multiple lights.

 

The first two can be handled quite easily - you can project each point onto the light's plane and use that for projective texturing (with clever mip-map selection based on distance it looks really cool); changing the shape of the light then becomes straightforward - just change the texture (I use the alpha channel, but you could also use a color key). Dang, two things handled simply.
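A sketch of that projection for the projective texture, reusing the hypothetical RectLight struct above (the mip scale constant is an arbitrary placeholder):

    #include <glm/glm.hpp>

    // Map a world-space point (or the reflection hit) into the rectangle's local
    // [0,1]^2 space and pick a mip level that grows with distance, so the projected
    // texture blurs out the farther the receiver is from the light.
    glm::vec3 lightTextureUvLod(const RectLight& L, glm::vec3 P, float mipScale)
    {
        glm::vec3 d = P - L.center;
        glm::vec2 uv(glm::dot(d, L.tangent)   / (2.0f * L.halfSize.x) + 0.5f,
                     glm::dot(d, L.bitangent) / (2.0f * L.halfSize.y) + 0.5f);
        float dist = glm::abs(glm::dot(d, L.normal));     // distance to the light plane
        float lod  = glm::log2(glm::max(1.0f, dist * mipScale));
        return glm::vec3(uv, lod);                        // feed into textureLod(...)
    }
    // The shape change then lives entirely in the texture: alpha = 0 simply means
    // "no light" for that texel.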

 

The last one is tricky - if the number of lights is quite low, you can brute-force it. For a high number of lights I go with a BVH approach (i.e. build a bounding volume hierarchy over the lights to speed up the ray casts).
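A hedged sketch of how the reflection ray could be tested against many lights through a pre-built BVH (the node layout and helper names are mine, and building the hierarchy is omitted):

    #include <glm/glm.hpp>
    #include <vector>

    struct BvhNode {
        glm::vec3 boundsMin, boundsMax;
        int left = -1, right = -1;   // child node indices, -1 when this is a leaf
        int light = -1;              // light index stored in a leaf
    };

    // Standard slab test for a ray against a node's AABB.
    static bool hitAabb(glm::vec3 o, glm::vec3 invDir, glm::vec3 bmin, glm::vec3 bmax)
    {
        glm::vec3 t0 = (bmin - o) * invDir;
        glm::vec3 t1 = (bmax - o) * invDir;
        glm::vec3 tmin = glm::min(t0, t1), tmax = glm::max(t0, t1);
        float tNear = glm::max(tmin.x, glm::max(tmin.y, tmin.z));
        float tFar  = glm::min(tmax.x, glm::min(tmax.y, tmax.z));
        return tNear <= tFar && tFar >= 0.0f;
    }

    // Walk the hierarchy with an explicit stack; only lights whose boxes the
    // reflection ray touches get the exact rectangle intersection test.
    template <typename ExactTest>
    void traverseLights(const std::vector<BvhNode>& nodes, glm::vec3 origin,
                        glm::vec3 dir, const ExactTest& testLight)
    {
        glm::vec3 invDir = 1.0f / dir;
        int stack[64];
        int top = 0;
        stack[top++] = 0;            // root node index
        while (top > 0) {
            const BvhNode& n = nodes[stack[--top]];
            if (!hitAabb(origin, invDir, n.boundsMin, n.boundsMax)) continue;
            if (n.light >= 0) { testLight(n.light); continue; }
            if (n.left  >= 0) stack[top++] = n.left;
            if (n.right >= 0) stack[top++] = n.right;
        }
    }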

 

There is one more thing to solve: the light (specular and diffuse) is visible through walls, etc. But with a good shadowing algorithm this problem can be dealt with quickly.

 

If you are particularly interested in some of these cases, I can provide the math with an explanation (and possibly pseudo-code/code in case you want to implement it).


In Topic: Rendering UI with OpenGL with C++

01 December 2015 - 12:01 PM

I have a custom UI implemented inside my software; let me try to explain how I work with it...

 

My UI builder basically takes some kind of file and builds the user interface out of it (windows with buttons, etc.). This is all stored only in memory; along with that there is a process function that is called on each input event and processes the whole active user interface. None of this is rendered, and therefore it can be processed in a separate thread (of course some synchronization is needed, as we have to be thread-safe).

 

The process function is straightforward - each UI is some kind of graph (in my case it is always a tree): you have a root node and nodes under it. On each event (mouse move, mouse click, etc.) you propagate the event through the graph and process each node with it. When certain conditions are met, you do something. All the processing is therefore done in a separate thread (but beware - sometimes you need to mutex-lock some data to avoid race conditions).
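A minimal sketch of that widget tree and event propagation, with hypothetical types of my own (the real UI obviously carries much more state):

    #include <functional>
    #include <memory>
    #include <vector>

    struct Event {
        enum class Type { MouseMove, MouseDown, MouseUp } type;
        float x = 0.0f, y = 0.0f;
    };

    struct Widget {
        float x = 0, y = 0, width = 0, height = 0;
        std::function<void(const Event&)> onEvent;      // the "do something" hook
        std::vector<std::unique_ptr<Widget>> children;

        bool contains(float px, float py) const {
            return px >= x && px <= x + width && py >= y && py <= y + height;
        }

        // Propagate an event down the tree: each node reacts if it is hit,
        // then forwards the event to its children.
        void process(const Event& e) {
            if (contains(e.x, e.y) && onEvent) onEvent(e);
            for (auto& child : children) child->process(e);
        }
    };

This process() is what the separate UI thread would run; any data it shares with the renderer is exactly what needs the mutex mentioned above.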

 

During UI initialization (or a UI update - i.e. when some widget and its children become visible) the widgets are inserted into a UI scene instance (so the UI scene is rebuilt). A scene on my side is something that holds entities (an entity is, for example, a light, a camera, a static mesh, a 2D widget rectangle, etc.). This scene therefore contains a single camera and the UI widgets -> it is rendered into a frame buffer and composited with the other outputs (note that it actually needs to be re-rendered only when something changes; I'm currently rendering it every frame, but technically I don't need to - I could keep it in a texture).
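A sketch of the "re-render only when something changes" point, with the two engine calls as hypothetical placeholders rather than the actual API:

    #include <atomic>

    // Hypothetical hooks into the renderer, declared only to keep the sketch complete.
    void renderUiSceneToFramebuffer();   // draws the UI scene into an offscreen FBO/texture
    void composeWithSceneOutput();       // blends the cached UI texture over the 3D frame

    std::atomic<bool> uiDirty{true};     // set by the UI thread whenever widgets change

    void frame()
    {
        if (uiDirty.exchange(false)) {   // refresh the cached UI texture only on change
            renderUiSceneToFramebuffer();
        }
        composeWithSceneOutput();        // every frame just composites the cached texture
    }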


In Topic: Need help with GLSL and Raytrace

06 October 2015 - 08:06 PM

I will do some shameless self-promotion.

 

If I recall correctly, they are using a ray-sphere intersection derived from the geometric view rather than the analytic view (it ends up with code that uses fewer instructions in total and is more precise than a naive implementation of the analytic derivation; note that the two are equivalent).
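A sketch of that geometric derivation (project the sphere center onto the ray, then use the Pythagorean relation to get the half-chord); the naming is mine:

    #include <glm/glm.hpp>

    // tca = distance along the ray to the point closest to the sphere center,
    // d2  = squared distance from the center to the ray,
    // thc = half-chord length inside the sphere.
    bool raySphere(glm::vec3 origin, glm::vec3 dir /* unit length */,
                   glm::vec3 center, float radius, float& t)
    {
        glm::vec3 L = center - origin;
        float tca = glm::dot(L, dir);
        float d2  = glm::dot(L, L) - tca * tca;
        float r2  = radius * radius;
        if (d2 > r2) return false;          // ray misses the sphere
        float thc = glm::sqrt(r2 - d2);
        t = tca - thc;                      // near hit; tca + thc is the far hit
        if (t < 0.0f) t = tca + thc;        // origin is inside the sphere
        return t >= 0.0f;
    }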

 

Now for the self-promotion - check out the article: http://www.gamedev.net/page/resources/_/technical/math-and-physics/intersection-math-algorithms-learn-to-derive-r3033

 

I did the derivation of the geometry-based one in there.


In Topic: Volume rendering uing 3D texture as intermediate storage

06 October 2015 - 08:00 PM

There are multiple ways to 'voxelize' your data into a 3D texture.

 

Everything depends on what your data are. In general, the most common cases are:

  • Point cloud

The point cloud is a bit tricky - everything comes down to the question of what your points represent. If they represent fully opaque points in space, then you might consider the naive algorithm: divide a part of space into voxels; if there is one or more points in a voxel, set it to opaque, otherwise it stays clear (an empty voxel). The color can be the average color of all the points within the given voxel's volume (see the sketch after this list).

 

Of course, a more complex algorithm can be used when you are dealing with, for example, fog, or semi-transparent surfaces in general - which is where the fun begins (using some crazy functions to describe the opacity of your voxel, etc.).

  • Triangular geometry

Now, one of the common approaches is to render slice by slice using conservative rasterization (this way a triangle generates voxels such that there is no hole in the result). You can rasterize with color, of course.

 

Another approach is to cast rays from one side of the voxel cube (from the center of each voxel on that side) and write the voxels that lie inside a geometric object as opaque (of course this way you can also assign color).

 

An approach where you generate a point cloud from your geometry and perform point-cloud-based voxelization might also be viable (e.g. when the triangle data is far too complex).

 

And of course, many others...
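A sketch of the naive point-cloud binning mentioned under the first bullet (the grid resolution, bounds and colour averaging are arbitrary choices of mine):

    #include <glm/glm.hpp>
    #include <vector>

    struct Point { glm::vec3 position, color; };

    // Bin points into an N^3 grid over [gridMin, gridMax]; a voxel becomes opaque if
    // it receives at least one point, and its colour is the average of those points.
    // The resulting array can be uploaded directly as an RGBA 3D texture.
    void voxelizePoints(const std::vector<Point>& points, int N,
                        glm::vec3 gridMin, glm::vec3 gridMax,
                        std::vector<glm::vec4>& voxels /* size N*N*N, rgb + opacity */)
    {
        voxels.assign(size_t(N) * N * N, glm::vec4(0.0f));
        std::vector<int> counts(voxels.size(), 0);
        glm::vec3 invCell = float(N) / (gridMax - gridMin);
        for (const Point& p : points) {
            glm::ivec3 c = glm::clamp(glm::ivec3((p.position - gridMin) * invCell),
                                      glm::ivec3(0), glm::ivec3(N - 1));
            size_t idx = size_t(c.z) * N * N + size_t(c.y) * N + c.x;
            voxels[idx] += glm::vec4(p.color, 0.0f);
            counts[idx] += 1;
        }
        for (size_t i = 0; i < voxels.size(); ++i)
            if (counts[i] > 0)
                voxels[i] = glm::vec4(glm::vec3(voxels[i]) / float(counts[i]), 1.0f);
    }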

