Though instead of averaging all sampled ray distances, I split them into small 4x4 sets of 8-bit precision values, so as to break the uniform thickness up into several view-dependent values. These grids fit snugly into a uint32 vertex map component and can be sampled in a vertex shader based on the camera-to-vertex view tangent.
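As a rough sketch of what that packing could look like (the names and the four-values-per-component split are my own assumptions, not the actual implementation): four 8-bit thickness samples byte-packed into one uint32, then selected by a quantized view-direction index.

```python
# Hypothetical sketch: pack four 8-bit thickness samples into a single
# uint32 (a full 4x4 grid would span four such components), then fetch
# one of them based on a quantized view-direction index.

def pack_thickness(samples):
    """Pack four 0..255 thickness values into one uint32, byte 0 first."""
    assert len(samples) == 4
    packed = 0
    for i, s in enumerate(samples):
        packed |= (s & 0xFF) << (i * 8)
    return packed

def sample_thickness(packed, index):
    """Unpack the 8-bit value at `index` (0..3) and rescale to 0..1."""
    return ((packed >> (index * 8)) & 0xFF) / 255.0
```

In a shader the same unpacking would just be a shift, mask, and multiply on the uint32 attribute.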
Updated my lightmapper to bake full indirect diffuse as opposed to only an ambient occlusion term. It now also captures a viewpoint-dependent occlusion mask that lets the environment cast a silhouette onto any reflective surfaces:
Also added a 'bias & gain' option to the multitexturing interface that lets you shift the gradient ramp and contrast of vertex map masks. This makes them usable as e.g. damage/wear maps, despite their lo-res nature:
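One common formulation for controls like this is Schlick's fast bias/gain pair; I'm assuming something equivalent here (the tool's actual curves may differ). Bias shifts the midpoint of the ramp, gain adjusts contrast around 0.5:

```python
# Schlick's bias/gain curves - a guess at what a 'bias & gain' mask
# control might use. Both map [0,1] to [0,1]; 0.5 is the identity.

def bias(t, b):
    """Shift the midpoint of the ramp; b=0.5 leaves t unchanged."""
    return t / ((1.0 / b - 2.0) * (1.0 - t) + 1.0)

def gain(t, g):
    """Adjust contrast around t=0.5 by mirroring bias about the middle."""
    if t < 0.5:
        return bias(2.0 * t, g) * 0.5
    return 1.0 - bias(2.0 - 2.0 * t, g) * 0.5
```

Both are cheap enough to evaluate per-texel (or per-vertex) in a shader, which is presumably the point of exposing them in the multitexturing UI.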
Wasted an entire week trying to salvage this plan I had for a new depth of field shader. To up the performance, I came up with this really awkward mipmapping scheme, because for some reason I was convinced the weighted blending filter I needed wasn't separable. A few minutes reconsidering this just now and it turns out a two-pass method is perfectly feasible - so a quick rewrite later and I have this:
This version gets rid of the worst bleeding artifacts in both the near and far field by adjusting any texel's circle-of-confusion radius based on the depth values of neighboring (and occluding) texels.
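A minimal 1D sketch of that idea, with made-up names (the real shader works on 2D neighbourhoods): a texel's circle-of-confusion radius gets limited by the smallest CoC among nearer, occluding neighbours, which is what stops sharp in-focus foreground edges from bleeding into the blur.

```python
# Hypothetical sketch: shrink a texel's circle-of-confusion (CoC) radius
# whenever a nearer (occluding) neighbour is in focus, so sharp edges
# don't bleed into the blurred field.

def clamp_coc(depths, cocs, radius=2):
    """Return adjusted CoC values: each texel's CoC is limited by the
    smallest CoC among strictly nearer neighbours within `radius` texels."""
    out = list(cocs)
    n = len(cocs)
    for i in range(n):
        for j in range(max(0, i - radius), min(n, i + radius + 1)):
            if depths[j] < depths[i]:      # neighbour is nearer: it occludes
                out[i] = min(out[i], cocs[j])
    return out
```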
I apply a basic dilation filter to bring out the brighter values, but I think I'm probably going to have to render some small point sprites to really have those distinct iris blade patterns pop out.
Something I've vowed several times I'd never do again: hand-paint a skybox.
The outdoor cloud-ish type specifically:
Painting clouds is all fun and fulfilling, until you need to fill out a full 360-degree view with dozens of them.
I prefer to paint a cloud scene as a spherical environment map; the stretching around the outer regions is somewhat easier to deal with than the discontinuities along the borders of the six faces of a cube map. (I always do my painting in Painter, so no clever 3D painting across seams for me.) Then you get the obvious issues with lighting (time of day etc.), which often require a complete repaint if you don't plan properly.
Anyway, the interface of the 'cloud tracer'-tool as it is now:
I skipped the simulation pass I initially planned on implementing, and simply modelled and replicated a series of basic cloud shapes. These cloud parts are all made up of smaller convex elements, which makes ray-testing fast and robust (occlusion tests against translucent convex objects make up the bulk of the work). I've also been meaning to try and run a radiosity solver on a low-resolution proxy mesh and have its results mapped back onto higher detail geometry - these convex bits seemed like a good candidate.
Illumination comes from a single light probe for now.
I got to doodle some more clouds by hand: geometry is expanded using these normal mapped 'depth billboards', which then have results from the lighting pass projected onto them. They're cheaper to render than volumetric textures, as there's no ray-marching involved. Also, 2D textures are far more straightforward to produce. I might come back to volume textures for other reasons later though.
Then there's some stuff about global cloud density and fog values for space in the scene that's not occupied by cloud geometry.
- Clouds need to be a bit more wispy here and there.
- The in-modeller point cloud replicator I used does a fairly good job at coming up with interesting cloud shapes, but I feel a custom distributor could still do better.
- The radiosity pass finishes quickly, as I had hoped, but tracing the final 2048 x 2048 image takes a while running on the CPU - needs more Compute.
Apologies for all the going back-and-forth with the camera.
Lastly, a bit of blatant self-promotion: I've been looking to do some graphics/tools programming for a U.S./European studio. If anyone can think of someone who might be interested, I'd love to hear (doesn't have to be game related per se).
The thing with architecture is that so much about it is up to whim; any procedural creation algorithm needs to be guided by a large number of rules and grammars before it'll know how to churn out any remotely usable pieces of real estate.
I got to the point where I could generate objects similar to these:
These were actually the most acceptable out of several hundred bakes (only three front doors!) and I'm afraid getting the tool to generate acceptable output at all times is becoming way too time-consuming. Instead, I'm simply going to hand-model and texture a series of basic solids and then pass sets of these through the CSG boolean tool:
There's no way to deform these yet (e.g. stretching them out without distorting them) apart from orienting and scaling them, but it serves for now. I also tried merging the whole scene into a single solid (trees and foliage are still instanced) to trim away as much overdraw as possible - the terrain especially is texfetch-expensive.
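The boolean tool itself works on meshes, but the semantics of the three CSG operations are easiest to illustrate on signed distance functions, where they reduce to min/max (this is purely illustrative, not how the mesh tool is implemented):

```python
# Illustration only: CSG booleans expressed on signed distance functions
# (negative inside, positive outside). The actual tool operates on meshes.

def csg_union(a, b):     return lambda p: min(a(p), b(p))
def csg_intersect(a, b): return lambda p: max(a(p), b(p))
def csg_subtract(a, b):  return lambda p: max(a(p), -b(p))

def sphere(center, r):
    """Signed distance function of a sphere."""
    def sdf(p):
        d = sum((pi - ci) ** 2 for pi, ci in zip(p, center)) ** 0.5
        return d - r
    return sdf
```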