Finally, here comes the Outerra tech demo, together with the alpha release of our game Anteworld.
The latest download links can be found at demo.outerra.com
This alpha release features:
A complete, real scale planet Earth that can be explored
Created from real elevation data with a resolution of 90m where available and 1km for the oceans; the data are dynamically downloaded as you go
Further refined by fractal-based procedural techniques down to centimeter-level details
Vector-based road system that integrates with the procedurally generated terrain
Ability to place static stock objects and drive provided vehicles
The demo comes with the whole planet Earth that can be explored in a free-camera mode or in an 8-wheeler truck. People who like it and/or want to support us and the development of the Outerra engine can buy the alpha release of Anteworld at a discounted price ($15), half the price of the final release. Doing so will give you access to regularly released alpha/beta updates of the game, together with the final version when it's done. The price will gradually rise with each major release. You will also become our beta testers, with the ability to influence the priorities of the development.
The full game also includes a plane and a helicopter, and basic sandbox tools that allow you to create roads and runways and place stock objects. A model importer and vehicle configurator that will allow creating custom models and vehicles will come soon in an update to the game.
The demo contains a few locations around the world (a couple of them were created by our tester Pico). Data for the default location are already included in the installer; the rest will be downloaded automatically on demand as you explore the world (note: proxy servers aren't supported yet for data download). The total size of the data is around 12GB, but normally you'll download just a fraction of that, unless you traverse the whole planet surface at a low altitude.
Outerra engine runs on OpenGL 3.3 and requires recent graphics drivers. It will warn you if your drivers are outdated, or even refuse to run in case you've got old ATI drivers that are known not to work at all. The minimum requirements are:
[color=#800000]Nvidia 8800GT or better, ATI [s]4850[/s] (discontinued support by AMD) 57xx or better[/color]
[color=#800000]512MB GPU memory[/color]
[color=#800000]a 2-core CPU[/color]
Nvidia 460GTX or better, ATI 6850 or better
1GB GPU memory
Limitations of the current alpha state of the technology
[color=#800000]This alpha release comes out to show the potential of the engine, but it still lacks many features commonly found in other engines; the effects especially are postponed until the major features are implemented. The demo currently comes with just a single biome - northern-type forests. There are no rivers or lakes implemented yet, and no weather. Almost all areas are a work in progress.[/color]
There are still some driver issues with ATI cards, the most problematic being the 4xxx line, where some seemingly random crashes still occur. The alpha state of the engine also means that it's not very optimized yet: it consumes more GPU memory than it should, spends time rendering things that aren't ultimately visible, etc.
Anteworld[color=#0088ff]*[/color] is a world-building game on a massive, true-to-life scale of our planet. Returning aboard an interstellar colonizer ship built in the Golden Age of Mankind, players arrive on planet Earth to discover civilization and humanity vanished. They will have to rebuild civilization - exploring, fighting, and competing for resources while searching for clues to the disappearance of humanity.
The game will contain several modes. The basic one will be a single-player game, but with player-built locations synchronized and replicated between clients. That means a player can settle in a free location of his choice where he can build and play, and when he goes exploring he'll be able to observe and visit sites where other players are building their worlds.
There's also going to be a multiplayer mode for gaming in the existing world. A sim-connect mode should allow using Anteworld as an image generator for another simulation program. In fact, Anteworld is meant to create the basis for an Outerra game/sim platform, allowing the creation of mods and new game modules that would run on the existing backend.
[color=#0088ff]*[/color]The name comes from the Latin prefix ante-, meaning prior-to in time. A world that was. There's going to be an accompanying novella written by C. Shawn Smith that should be loosely tied to the game. Here's a sample, the epilogue: The Outerra Initiative - Epilogue
3D Engine Design for Virtual Globes is a book by Patrick Cozzi and Kevin Ring describing the essential techniques and algorithms used for the design of planetary-scale 3D engines. It's interesting to note that even though virtual globes gained popularity a long time ago with software like Google Earth or NASA World Wind, there wasn't any book dealing with this topic until now.
As the topic of the book is also relevant for planetary engines like Outerra, I would like to do a short review here. I was initially contacted by Patrick to review the chapter about depth precision, and later he also asked for permission to include some images from Outerra. You can check out the sample chapters, for example the one on Level of Detail.
Behind the simple title you'll find an almost surprisingly in-depth analysis of techniques essential for the design of virtual globes and planetary-scale 3D engines. After the intro, the book starts with the fundamentals: the basic math apparatus and the basic building blocks of a modern, hardware-friendly 3D renderer. The fundamentals conclude with a chapter about globe rendering - the ways of tessellating the globe in order to be able to feed it to the renderer, together with appropriate globe texturing and lighting.
Part II of the book guides you to an area that you cannot afford to neglect if you don't want to hit a wall further along in your design: precision. Regardless of what spatial units you are using, it's the range of detail expressible in the floating-point values supported by 3D hardware that limits you. If you want to achieve both a global view of a planet from space and a ground-level view of its surface, then without handling the precision you'll get jitter as you zoom in, and it soon becomes unusable. The book introduces several approaches used to solve these vertex precision issues, each possibly suited to different areas.
Another precision issue that affects the rendering of large areas is the precision of the depth buffer. Because of an old, non-ideal hardware design that reuses values from the perspective division for the depth values it writes, depth buffer issues show up even in games with larger outdoor levels. In planetary engines that also want human-scale detail, this problem grows beyond all bounds. The chapter on depth buffer precision compares several algorithms that more or less solve this problem, including the algorithm we use in Outerra - the logarithmic depth buffer. Who knows, maybe one day we'll get direct hardware support for it, as per Thatcher Ulrich's suggestion, and it will become a thing of the past.
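As a side note, the core idea of a logarithmic depth buffer fits in a few lines: depth resolution is spent proportionally to distance instead of being concentrated near the camera. The following is only an illustrative sketch - the constants and the exact mapping are made-up examples, not Outerra's actual settings:

```python
import math

def log_depth(w, far=1.0e7, C=1.0):
    """Map an eye-space distance w to a [0,1] logarithmic depth value.

    Resolution is distributed proportionally to distance, so both
    close-up and planetary scales fit in one buffer. 'far' and 'C'
    are illustrative values, not the engine's actual settings.
    """
    return math.log(C * w + 1.0) / math.log(C * far + 1.0)

# a 1m step near the camera consumes far more of the depth range
# than a 1m step 1000km away - but the distant step is still nonzero,
# instead of collapsing into a single depth value
near_step = log_depth(2.0) - log_depth(1.0)
far_step = log_depth(1.0e6 + 1.0) - log_depth(1.0e6)
```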
The third part of the book concerns the rendering of vector data in virtual globes, used to render things like country boundaries or rivers, or polygon overlays highlighting areas of interest. It also deals with the rendering of billboards (marks) on terrain, and the rendering of text labels on virtual globes.
The last chapter in this part, Exploiting Parallelism in Resource Preparation, deals with an important issue popping up in virtual globes: utilizing parallelism in the management of content and resources. Being able to load data in the background without interfering with the main rendering is one of the crucial requirements here.
The last part of the book talks about the rendering of massive terrains in a hardware-friendly manner: the representation of terrain, preprocessing, and level of detail. Two major rendering approaches have their own dedicated chapters: geometry clipmapping and chunked LOD, together with a comparison. Of course, the book also comes with a comprehensive list of external resources in each chapter.
We've received many questions from people who wanted to know how we started programming our engine, what problems we encountered, or how we solved this or that. Many of them I can now direct to this book, which really covers the essential stuff one needs to know here.
I've never been much of a speaker, not in my native language and even less so in English. When Markus Völter, the man behind SE Radio and the omega tau science & technology podcasts, contacted me to make a podcast about Outerra and some of the technology behind it, I initially hesitated. But then I decided that it couldn't hurt, and that I must force myself to train my tongue a bit.
So, after some time we recorded an hour-long interview, and you can listen to it here:
Our old terrain mapping and compression tool has recently been replaced by a new one, developed from scratch. The old tool was the only piece not done completely by us (the core Outerra people), and as a result it felt somewhat "detached" and not entirely designed in line with our concepts. It was quite slow and contained several bugs that caused artifacts, mainly in coastal regions.
What does the tool do? Its purpose is to convert terrain data from the usual WGS84 projection into the variant of the quadrilateralized spherical cube projection we are using, compressing the data with wavelets in the process. It takes ~70GB of raw data and processes it into a 14GB dataset usable in Outerra, endowing it with the ability to be streamed effectively and to provide the needed level of detail.
With the aforementioned defects in mind, and with the need to compile a new dataset with better detail for northern regions above 60° latitude, we decided to rework the tool in order to speed it up and extend its functionality as well.
I originally planned to implement it using CUDA or OpenCL, but after analyzing it more deeply I decided to make it a part of the engine, using OpenGL 3.x shaders for the processing. This will later allow for an integrated, interactive planet or terrain creator tool, which is worth it in itself.
The results are surprisingly good. For comparison: to process the data for the whole Earth, the old CPU-only tool needed to run continuously for one week (!) on a 4-core processor. The same thing now takes just one hour, using a single CPU core for preparing the data and running the bitplane compressor, and a GTX 460 GPU for mapping and the computation of wavelet coefficients. In fact, the new tool processes more data, as the northern parts of Scandinavia, Russia and more are included in the new dataset.
All in all it represents roughly a 200X speedup, which is way more than we expected and hoped for. Although GPU processing plays a significant role in it, without the other improvements it would show much less. The old tool was often bound by I/O transfers - it processed and streamed the data synchronously. The new one does things asynchronously; additionally, it now reads the source data directly in packed form, saving disk I/O bandwidth - it can do the unpacking without losing time because the main load has moved from the CPU to the GPU. Another thing that contributed to the speedup is a much better caching mechanism that plays nicely with the GPU job.
There's another interesting piece used in the new tool - unlike the old one, it traverses the terrain using adaptive Hilbert curves.
A Hilbert curve is a continuous fractal space-filling curve with an interesting property: despite being just a line, it can fill a whole enclosed 2D area. Space-filling curves were discovered after the mathematician Georg Cantor found out that the infinite set of points in a unit interval has the same cardinality as the infinite set of points in any finite-dimensional enclosed surface (manifold). In other words, there is a 1:1 mapping from the points of a line segment onto the points of a 2D rectangle. These functions belong to our beloved family of functions - fractals.
In the mapping tool it's being used in the form of a hierarchical, recursive, adaptive Hilbert curve. While any recursive quad-tree traversal method would work, the Hilbert curve was used because it preserves locality better (which has a positive effect on cache management), and because it is cool. Here is a video showing it in action - the tool shows the progress of data processing on the map:
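For the curious, the basic index-to-cell mapping of a Hilbert curve is short to implement. Below is the standard iterative form in Python (the tool itself uses a hierarchical, adaptive variant, not this simple fixed-order one); consecutive indices always map to adjacent cells, which is where the locality comes from:

```python
def hilbert_d2xy(order, d):
    """Convert distance d along a Hilbert curve covering a
    2^order x 2^order grid into (x, y) cell coordinates."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        # rotate/flip the quadrant so the sub-curve joins up correctly
        if ry == 0:
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y
```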
Apart from the speedup, the new dataset compiled with the tool is also smaller - the size fell by 2GB to ~12GB, despite containing more detailed terrain for all parts of the world.
I'm not complaining, but I'm not entirely sure why that is. There was one minor optimization in the wavelet encoding that can't explain it. The main suspect is that the old tool encoded wide coastal areas with higher resolution than actually needed.
Despite the base terrain resolution being the same in both cases (3" or roughly 90m spacing), the new dataset comes with much better erosion shapes that were previously rather washed out. The new data come from multiple sources, mainly the original SRTM data and data from Viewfinder Panoramas, which provides enhanced data for Eurasia. It appears that the old data were somehow blurred, and the fractal algorithms that refine the terrain didn't like it. The difference shows best in the Himalayas - the screens below are from there, starting with Mt. Everest.
old | new
There are also finer, 1" (~30m) resolution data for some mountainous areas of the world, and we plan to test these too - we're interested to see how it affects the size and changes the look.
There are two types of waves mixed: open-sea waves following the direction of the wind (fixed for now), and shore waves (the surf) that orient themselves perpendicularly to the shore, appearing as the result of an oscillating water volume that gets compressed by the rising underwater terrain.
Open-sea waves are simulated in the usual way, by summing a bunch of trochoidal (Gerstner) waves with various frequencies over a 2D texture that is then tiled over the sea surface. Obviously, the texture should be seamlessly tileable, and that puts some constraints on the possible frequencies of the waves. Basically, the wave should peak on each point of the grid. This can be satisfied by guaranteeing that the wave has an integral number of peaks in both the u and v texture directions. The resulting wave frequency is then
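The peak constraint can also be expressed in code: choosing integer peak counts (i, j) across the tile fixes the admissible wave vectors, and any such wave repeats exactly at the tile borders. A small sketch (the tile size is an assumed parameter, not the engine's actual value):

```python
import math

def tileable_wave_vector(i, j, tile_size=64.0):
    """Wave vector for a seamlessly tileable wave: an integral number
    of peaks (i along u, j along v) across the tile guarantees the
    wave lines up at the tile borders. tile_size is an assumed tile
    edge length in meters."""
    kx = 2.0 * math.pi * i / tile_size
    ky = 2.0 * math.pi * j / tile_size
    return kx, ky, math.hypot(kx, ky)  # components and spatial frequency
```

Any integer pair (i, j) then produces a wave whose phase advances by a whole multiple of 2π across the tile, so the tiled texture is seamless.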
Other wave parameters depend on the frequency. Generally, the wave amplitude should be kept below 1/20th of the wavelength, as larger waves would break. The wave speed for deep waves can be computed from the wavelength λ as:
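The standard deep-water relation is v = √(gλ/2π); together with the 1/20 amplitude limit it can be sketched as:

```python
import math

def deep_wave_speed(wavelength, g=9.81):
    """Phase speed of a deep-water gravity wave: v = sqrt(g * lambda / 2*pi).
    Longer waves travel faster, proportionally to sqrt(wavelength)."""
    return math.sqrt(g * wavelength / (2.0 * math.pi))

def max_amplitude(wavelength):
    """Keep the amplitude below 1/20th of the wavelength so the wave
    doesn't break."""
    return wavelength / 20.0
```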
The direction of the waves can be controlled by manipulating the amplitudes of the generated waves; for example, the directions that lie closer to the direction of the wind can have larger amplitudes than the ones flowing in the opposite direction. Opposite wave directions can even be suppressed completely, which may be usable e.g. for rivers.
Shore waves form as the terrain rises and the water slows down, while the wave amplitude rises. These waves tend to be perpendicular to shore lines.
In order to make the beach waves we need to know the distance from a particular point in the water to the shore. Additionally, a direction vector is needed to animate the foam.
The distance from the shore is used as an argument to a wave shape function stored in a texture. This shape is again trochoidal, but to simulate a breaking wave the equation has been extended to a skewed trochoidal wave by adding another parameter determining the skew. Here's how it affects the wave shape:
The equation for skewed trochoidal wave is:
A skew of 1 gives a normal Gerstner wave. Several differently skewed waves are precomputed in a small helper texture, and the algorithm chooses the right one depending on the water depth.
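The exact skewed equation from the engine isn't reproduced here, but the idea can be illustrated with a plain trochoid plus a hypothetical skew factor on the horizontal term: at skew 1 it reduces to the ordinary Gerstner profile, while larger values sharpen the crest until it curls over like a breaking wave. This is purely an illustrative guess at the shape, not the engine's formula:

```python
import math

def trochoid_point(t, a=1.0, b=0.6, skew=1.0):
    """Point on a (skewed) trochoidal profile, parametrized by t.

    skew=1.0 gives the plain trochoid (Gerstner profile); larger skew
    exaggerates the horizontal displacement, sharpening the crest.
    The skew term is an illustrative assumption, not the exact
    equation used in the engine.
    """
    x = a * t - b * skew * math.sin(t)
    y = a - b * math.cos(t)
    return x, y
```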
The distance map is computed for terrain tiles that contain a shore, i.e. those with the maximum elevation above sea level and the minimum elevation below it. A shader finds the nearest point of the opposite type (above or below sea level) and outputs the distance. The resulting distance map is filtered to smooth it out. Gradient vectors are computed by applying a Sobel filter to the distance map.
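The Sobel step itself is standard; on a scalar distance-to-shore map it looks like this (a minimal CPU sketch on a plain 2D list rather than a GPU texture):

```python
def sobel_gradient(grid, x, y):
    """Gradient (gx, gy) of a scalar field - e.g. a distance-to-shore
    map - at an interior cell, using the 3x3 Sobel kernels."""
    gx = (grid[y - 1][x + 1] + 2 * grid[y][x + 1] + grid[y + 1][x + 1]
        - grid[y - 1][x - 1] - 2 * grid[y][x - 1] - grid[y + 1][x - 1])
    gy = (grid[y + 1][x - 1] + 2 * grid[y + 1][x] + grid[y + 1][x + 1]
        - grid[y - 1][x - 1] - 2 * grid[y - 1][x] - grid[y - 1][x + 1])
    return gx, gy
```

Normalizing the resulting vector gives the direction towards deeper water, which is what the foam animation needs.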
Gradient field created from the Gaussian-filtered distance map
Both wave types are then added together. The beach waves are conditioned using another texture with a mask changing in time, so that they aren't continuous all around the shore.
Water color is determined by several indirect parameters, most importantly by the absorption of the color components under water. For most of the screenshots shown here it was set to values of 7/30/70m for the R/G/B components, respectively. These values specify the distances at which the respective light components get reduced to approximately one third of their original value.
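These parameters translate to a simple exponential (Beer-Lambert) falloff per channel; a sketch using the values quoted above, taking 1/e (≈0.37) as the "approximately one third":

```python
import math

def water_transmittance(depth_m, absorb_lengths=(7.0, 30.0, 70.0)):
    """Fraction of each R/G/B component surviving a light path of
    depth_m meters, given per-channel absorption lengths - the
    distance at which intensity falls to 1/e of its original value."""
    return tuple(math.exp(-depth_m / L) for L in absorb_lengths)
```

Since red is absorbed over the shortest distance, it disappears first with depth, which is why deep water shifts towards blue-green.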
Another parameter is a reflectivity coefficient that tells how much light is scattered towards the viewer. Interestingly, the scattering effect in pure water is negligible in comparison with the effect of light absorption. The main contributor to the observed scattering is dissolved organic matter, followed by inorganic compounds. This also gives seas their slightly different colors.
Water rendering is not yet finished, and this should be considered a first version. Here's a TODO list of things that will be enhanced:
Better effects for wave breaking. This will probably require additional geometry; maybe a tessellation shader could be used for that.
An enhanced wave spectrum - currently the spectrum is flat, which doesn't correspond to reality. Wave frequencies could even be generated adaptively, reflecting the detail needed by the viewer.
Fixing various errors - underwater lighting, waves against the horizon, lighting of objects on and under the water, LOD level switching ...
Support for other types of wave breaking
Integrating climate type support into the engine, which will allow different sea parameters across the world
UI for setting water parameters
Reflecting the waves in the physics for boats
A few ocean sunset and underwater screenshots that were posted on the forums during the development.
Our friend from Texas, artist and science-fiction writer writing under the pen name of C. Shawn Smith, made this nice compilation from pieces of our published and some unpublished videos created during the year 2010.
Here's also an older video showing the Apache AH-64, after support for helicopters was recently added to the JSBSim library. There still seem to be some bugs, and our parameters for the model aren't entirely right either, so the behavior might not be absolutely correct.
Still, flying the helicopter is great fun. I wasn't able to fly a helicopter in a simulator before, probably just because I never tried hard enough. But flying it here - over the forests, through the canyons and close to the rocky walls - gives it a completely different feeling and experience, so even types like me can easily lose track of time while wandering over unknown lands.
Finally, here comes the video we promised some time ago, with Lukla airport and more. The video is quite long, over 13 minutes, and that's after some heavy trimming.
For those not interested in the flying parts, here's just the last part with Tatra trucks:
Making of the video
It consumed three music tracks, and it consists of three main parts as well:
landing at Lukla, a short flyover and then a takeoff with mountain scenery views
approaching and landing at a fictional airport built at the edge of Lesser Himalayas
driving a Tatra truck to a log cabin in the woods on a nearby hill
Simple LAN networking was added to make the scenes where two vehicles can be seen moving: two Cessna planes in Lukla, a Cessna flying over the truck in the second part, and finally two trucks in the last part. The network connects automatically whenever the other machine is reachable. This led to several surprising encounters while we were each just debugging and tuning our code, suddenly seeing the other one go by - like when one of us was working on the pavements in Lukla and the other suddenly showed up, practicing landings at the airport. It was quite a refreshment during the development.
Otherwise, the making of the video resembled real movie making - each scene was shot several times from various angles, capturing two video streams in each run. There was probably 20 times more material shot than used for the final scenes. Not counting the shots where we forgot to adjust the sun position chosen for a given location.
The video shows the current state of the engine, and given that it's still in an alpha stage, there are some noticeable bugs. Bear in mind that we aren't scenery designers, so basically everything you see there is programmer's art.
Here are also some images from the locations used.
Fictional airport & city
Roads & Woods
Woods & Mountains
[size="1"]Used music: Fresh Air / Airports and Hotels / Great Outdoors by ibaudio.com; Meadow ambient sound by eric5335[/size]
Outerra on Twitter | Outerra on Facebook
[size="1"]Note: this is a repost from outerra.blogspot.com testing the blog feature on gamedev[/size]
Another collection of screen shots from the recent development.
Star rendering was added to make the background more interesting. It uses a real star database of more than 100k stars. It will need some additional effects to make it nicer, like a halo around bright stars. There should also be adaptive HDR to manage the large differences in luminance.
The terrain is quite dark at night even when the sky is still lit; it doesn't take secondary scattering into account yet.
Something from another corner - the following image shows fractal blending of two materials and how it looks from up close and from a distance. A similar approach will be used to blend land classes.
A quick hack of the terrain material system to make it look like there is snow on the mountains. The snow covers the surface if it is flatter than a critical slope, where the critical slope also depends on the elevation. This way the transition to lower altitudes looks better, although it still needs some tuning.
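A rule like that can be sketched as follows - note that every threshold here is a made-up illustrative number, not the engine's actual tuning:

```python
def snow_coverage(slope_deg, elevation_m,
                  base_slope=30.0, snowline_m=2000.0, full_m=4000.0):
    """Return 1.0 for snow, 0.0 for bare ground.

    A surface gets snow when it is flatter than a critical slope, and
    the critical slope grows with elevation - so high peaks hold snow
    even on steep faces, while near the snowline only flat ground is
    covered. All constants are illustrative assumptions.
    """
    if elevation_m <= snowline_m:
        return 0.0
    t = min(1.0, (elevation_m - snowline_m) / (full_m - snowline_m))
    critical = base_slope + t * (90.0 - base_slope)
    return 1.0 if slope_deg < critical else 0.0
```

In a shader the hard threshold would typically be a smooth blend instead, to avoid a razor-sharp snow edge.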
At the moment the snow doesn't have any thickness, but later a build-up of snow will be possible, along with temporary tracks behind vehicles.
The last images show distant marked peaks and how they are visible against the background, viewed from an altitude of 7000m (~23,000ft):
Here are some screenshots from the development that we collected during the past week.
The cab view is working, and the speed indicator in the Tatra works as well. The basics of scene editing are also there, so we could create this small airfield with some hangars and scattered containers. Somewhere in Nepal, I think.
In the previous post about roads in Outerra I mentioned that different road types can be created by using road profiles. Here comes one example.
Dirt roads use the very same mechanism as normal roads; in fact, the only difference is that the road nodes contain a different road type identifier. That identifier is used to look up the corresponding road profile in the shaders that generate the roads, and it determines other things as well - the pavement and border materials, for example.
A road profile specifies "exactness" values across the road width, in the range from 0 to 1. Normal roads have the value 1 across the whole road width, meaning that the elevation given by the road spline interpolation is exact and the resulting surface will be smooth and level. Values less than 1 cause the computed surface elevation to be blended with the actual underlying terrain elevation using the given blending coefficient. Additionally, the less exact the road surface elevation is, the more it is randomized by fractal noise to make it rougher.
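The blending the profile describes can be sketched in a few lines - this is a hedged, CPU-side illustration of the idea, not the actual shader code:

```python
def road_surface_elevation(spline_h, terrain_h, exactness, noise):
    """Blend the spline-interpolated road elevation with the
    underlying terrain using the profile's exactness in [0, 1].

    exactness 1.0 -> the spline elevation is used exactly (smooth,
    level road); lower values let the terrain show through, and the
    same (1 - exactness) factor also scales the fractal roughness.
    'noise' stands in for a fractal noise sample.
    """
    blended = exactness * spline_h + (1.0 - exactness) * terrain_h
    roughness = (1.0 - exactness) * noise  # less exact -> rougher
    return blended + roughness
```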
The roughness created by the fractal noise can vary with the road type. The depth of the furrows can be controlled by setting the road surface below the terrain by a specific amount, and it can change node by node. On the sample road above it creates deeper and shallower parts - or even a ridge, where the way points are not set densely enough to follow the terrain accurately.
Here is a road created from way points lying 0.4m (on average) below the terrain surface, with high roughness:
Another one that was put only slightly under the terrain, with low roughness:
Lastly, a few screenshots of how it all looks in the terrain.
To utilize the generated detail we also set a finer level to be used for vehicle physics. In the previous video with the Tatra truck the resolution for physics was around 1.2m, and it was quite visible. In the following video the resolution for physics is around 0.15m, 8 times better. The wheels are simulated as simple ray casts, so that may be occasionally visible. Also, Angrypig toyed with the suspension and I left it in for the video; the Tatra truck now sways way too much, but at least it serves to show the response to the terrain shape.
You may remember an earlier blog post about roads, where the approach and algorithms for creating spline-based roads were outlined. At that time it was just a concept of how it might work, with many unresolved issues.
In the past weeks I occupied myself with integrating the road system into the engine, making it more robust and powerful along the way. The road way points now contain additional information such as the road type, width, marking style and more, which will allow for a variety of road types, including, for example, forest roads with ruts.
As you can see in the following screenshots, the roads are completely integrated into the terrain - no terrain poking through the road surface, ever. The integration also modifies the surrounding terrain to make the embedment natural. The side of the road is initially smooth up to a specified border width (which can be set per road spline node), and after that the fractal roughness gradually takes over.
Depressions are filled and an embankment is created.
Routing a road through a rock or hill makes rocky slopes with a defined steepness on the side(s).
Some screenshots showing the creation process and the results, at various levels of detail.
Trees are automatically removed from the road surface and its border area, although sometimes you can see branches hanging over the road.
Splines allow for some mad but still fitting roads too :)
The terrain under the road is initially roughly prepared - gaps are filled and excess volume is cut out. This is done at a tile resolution of ~10m, so that the fractal algorithm can refine it down naturally. Previously, when the road cut into a hill, the cut was unnaturally smooth. Now only the road border is smooth, gradually transitioning into the rougher fractal structure.
In the following screenshots you can see the process documented:
I. Placing the way points - currently this is done by moving over the terrain while pressing a build-road key; later a built-in editor will allow for better editing modes and options.
II. Rough terrain treatment - note that this is actually hidden within the process; I've visualized it here just to show the effect it has on the terrain.
III. Finalizing the road
The engine has no problems with terrain resolution (as is apparently the case with many planetary engines), to the extent that the asphalt can have an actual thickness - I think it's 3cm now (~1.2in).
Other notable features:
vector data are automatically cut and indexed into the quad-tree managing the terrain
overlay dirt textures per road type
roads can connect; additional helper code will be needed to adjust the spline points and their attributes so that the connection is smooth
the same system is also used for terrain leveling, with or without pavement placing, or with a gravel surface, etc.
Finally, the process can be seen in the YouTube video. The original uploaded video was 500MB with good quality, but the recompression step on YouTube reduced it considerably.
The same system is also used for runways; here's a teaser screenshot for an upcoming video where Angrypig tries to properly land at .. you can guess where this is from :-)
The corresponding discussion topic on Outerra Forums can be found here
This article describes a method of recording in-game HD videos without the large frame-rate impact of external video capture software.
The method used in this approach was inspired by the article Real-Time YCoCg-DXT Compression, which presented a real-time GPU compression algorithm for DXT formats. Standard DXT texture formats aren't very suitable for the compression of general images like game frames; the higher contrast results in artifacts like color bleeding and color blocking. The article introduced the YCoCg-DXT format that encodes colors in the YCoCg color space (intensity plus orange and green chrominance). It also contains the source code for real-time GPU compression and a comparison of the achieved results.
The YCoCg format is suitable for decompression on the GPU, because decoding YCoCg values back to RGB takes only a few shader instructions. However, for the purpose of decoding the frame data in a video codec, a YUV-based format is better, as it allows decoding the data directly to the video surface without additional conversions. The best format for this seemed to be YUYV with 16 bits per sample, which means there's one U and V value per 2 horizontal samples. The compression algorithm differs from the YCoCg-DXT one in the initial color space conversion to YUYV, and in that it encodes the 4x4 YY, U and V blocks the way the alpha component is encoded in the DXT5 format.
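The YUYV layout itself is easy to demonstrate: two adjacent RGB pixels share one U and one V sample, giving 16 bits per sample on average. A sketch of the packing, using BT.601 full-range coefficients as an assumption - the actual codec's constants may differ:

```python
def rgb_pair_to_yuyv(rgb0, rgb1):
    """Pack two adjacent RGB pixels (components 0-255) into one YUYV
    macropixel (Y0, U, Y1, V): full-rate luma, chroma shared by the
    pair. BT.601 full-range coefficients are assumed here, not
    necessarily the codec's exact constants."""
    def y(r, g, b):  return  0.299 * r + 0.587 * g + 0.114 * b
    def u(r, g, b):  return -0.169 * r - 0.331 * g + 0.500 * b + 128.0
    def v(r, g, b):  return  0.500 * r - 0.419 * g - 0.081 * b + 128.0
    # the shared chroma is taken from the average of the two pixels
    ar = (rgb0[0] + rgb1[0]) / 2.0
    ag = (rgb0[1] + rgb1[1]) / 2.0
    ab = (rgb0[2] + rgb1[2]) / 2.0
    return (round(y(*rgb0)), round(u(ar, ag, ab)),
            round(y(*rgb1)), round(v(ar, ag, ab)))
```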
The algorithm is as follows:
Video frames are compressed with a fragment shader to the YUYV-DXT format using a render-to-texture technique, reducing the data to 1/3 of its original size
The compressed textures are asynchronously read back to CPU
The data are continuously written to disk
The compression on the GPU reduces the bandwidth needed between the CPU and GPU, but more importantly also the bandwidth needed for disk writes. The sustainable write speed of a SATA drive is somewhere around 55MB/s; transferring a raw 1280x720/30fps video takes 79.1MB/s, while the DXT-compressed video takes only 26.4MB/s. A compressed Full-HD video stream is 59.3MB/s.
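The arithmetic behind these figures can be checked directly, assuming 24-bit RGB frames, a 3:1 DXT compression ratio, and 1MB = 2^20 bytes:

```python
def stream_mb_per_s(width, height, fps, bytes_per_pixel=3.0):
    """Uncompressed video stream bandwidth in MB/s (1 MB = 2^20 bytes),
    assuming 24-bit RGB frames."""
    return width * height * bytes_per_pixel * fps / (1024 * 1024)

raw_720p  = stream_mb_per_s(1280, 720, 30)         # ~79.1 MB/s raw
dxt_720p  = raw_720p / 3.0                         # ~26.4 MB/s after 3:1 DXT
dxt_1080p = stream_mb_per_s(1920, 1080, 30) / 3.0  # ~59.3 MB/s, still above 55 MB/s
```

Note that even compressed Full-HD slightly exceeds the quoted 55MB/s sustainable SATA write speed, while compressed 720p fits comfortably.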
To capture the frame buffer data, the application first renders to an intermediate target. The compression shader uses this as the input texture, rendering to a uint4 target with one quarter of the width and height of the original resolution, which is then read back to CPU memory.
The next step is decoding the captured video. To make this easy I've written a custom video codec and video format plugin for the ffmpeg library. The format was named Yog (from YCoCg), as the encoding was originally in the YCoCg format and only later changed to YUYV. The game produces *.yog video files that can be directly replayed by ffplay or converted to another video format with the ffmpeg utility. They are also recognized by any video processing software that uses the ffmpeg or ffplay executables or the avcodec and avformat dlls from the suite, such as WinFF, FFe and many others.
After starting the video recording in our game the frame rate drops only by a few fps, and the game remains normally playable, unlike when recording with, for example, Fraps. The disadvantage is that this has to be integrated into the renderer path. Quality-wise the results are quite good, as can be seen in the following screenshots:
YUYV compressed - note this is slightly lighter because of an issue in ffmpeg that is yet to be solved.
The difference, 4X amplified
The source code and further implementation details can be found at outerra.com/video/index.html
The previous video with the Cessna test flight apparently made some good (for most) waves in the flight sim scene. Here comes another video, this time with the Tatra T813 truck, focused on its independent swing suspension exercising on bumpy slopes somewhere in the High Tatras mountains (it's not a coincidence - the truck got its name after the mountain range).
Suspension limits and parameters were set only approximately, so the behavior may not be accurate. It is, however, normal for an unloaded Tatra truck to have its rear wheels in a V shape. Also, the dynamically refined terrain has a higher resolution than the mesh used for physics, which may be slightly visible in close-ups.
As a technical note, both videos were captured using GPU-based compression of the video frames to reduce the bandwidth needed for sustained disk writes without imposing a noticeable performance penalty. Since the frames are compressed on the GPU, the GPU->CPU transfers are smaller as well, which is a good thing too. I also wrote a custom video codec for ffmpeg that can decode these videos, so the output can be recompressed directly with ffmpeg or any other tool that uses the libavcodec dll. There will be a separate blog entry with a more technical description and the code, so others can use it.
It's been some time since I blogged about the Outerra engine, so here comes the latest info about what we've been doing.
Lots of work has gone into the visuals - shadows are finally in, and even though they are not finished yet, the output is much nicer with them. The implementation uses a randomized lookup into the shadow map, but the blurring pass is not there yet, so close-ups show noisy shadow edges. This will go away once the blurring pass on the shadow map is in.
The ugly patterns on the ground visible in older screenshots, resulting from tiled textures, have been suppressed with more fractal magic - a free fractal channel is used to mix three textures (daisies, grass and a lighter grass) together, and the pattern is almost completely gone.
Another thing that helped a lot was the transformation to linear color space. This covered both the input (loading the textures in sRGB format and computing the mipmaps correctly) and setting up the render target accordingly. The fix is most obvious on the atmosphere, which now looks more natural.
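For reference, the sRGB decode involved is the standard piecewise transfer function; on the GPU it happens implicitly through sRGB texture and framebuffer formats, so the snippet below is just the textbook formula, not engine code:

```python
def srgb_to_linear(c):
    """Standard sRGB decode (IEC 61966-2-1) for a channel value in [0, 1]."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

print(srgb_to_linear(0.0))            # 0.0
print(srgb_to_linear(1.0))            # 1.0
print(round(srgb_to_linear(0.5), 3))  # 0.214: mid grey is much darker in linear
```

Mixing, filtering and mipmapping only behave correctly on the linear values; the inverse transform is applied when writing to the sRGB render target.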
The trees are also slightly randomly colorized to break the monotony.
The material system has progressed as well, as can be seen on the new truck model here.
Also new is the support for dirty glass windows.
A gun is mounted on the roof, with a separate controller. It should also become functional soon, along with some flying prey to bring down [smile]
The new model appearing in the shots is the Tatra T813 8x8 heavy all-terrain truck with its unique independent swing half-axles. I wrote specialized code to handle its physics, and it works quite nicely. It's much better seen in motion; a video will be coming soon.
I had always thought that using a floating-point depth buffer on modern hardware would solve all the depth buffer problems, similarly to the logarithmic depth buffer but without requiring any changes in shader code, and without the artifacts and potential performance losses of its workarounds. So I was quite surprised when swiftcoder mentioned in this thread that he had found the floating-point depth buffer to have insufficient resolution for a planetary renderer.
The value that gets written into the Z-buffer is the value of z/w after projection, and it has an unfortunate shape that gives enormous precision to a very narrow region close to the near plane. In fact, almost half of the representable values lie between the near plane and twice its distance. In the picture below it's the red "curve". The logarithmic distribution (blue curve), on the other hand, is optimal with regard to the object sizes that can be visible at a given distance.
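The "half the values below twice the near distance" claim follows directly from the standard window-depth mapping d(z) = f(z-n)/(z(f-n)); a quick numeric check (the plane distances are illustrative, not engine values):

```python
def depth(z, n, f):
    """Standard window-space depth in [0,1]: the z/w curve after projection."""
    return f * (z - n) / (z * (f - n))

n, f = 0.1, 1.0e7            # 10 cm near plane, 10000 km far plane
print(depth(2 * n, n, f))    # ~0.5: half the depth range is spent
                             # between the near plane and twice its distance
print(depth(1000.0, n, f))   # ~0.9999: at 1 km, already crammed against 1.0
```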
A floating-point depth buffer should be able to handle the original z/w curve, because the exponent part of a float effectively corresponds to the logarithm of the number.
However, here's the catch. Depth buffer values converge towards 1.0; in fact most of the depth range yields values very close to it. The resolution of a floating-point number around 1.0 is entirely determined by the mantissa, and it's approximately 1e-7. That is not enough for planetary rendering, given the ugly shape of z/w.
The solution, however, is quite easy. Swapping the values of the far and near planes and changing the depth function to "greater" inverts the z/w shape so that it converges towards zero with rising distance, where there is plenty of floating-point resolution.
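What the swap does to the curve can be sketched with the reversed window-depth formula, obtained by exchanging n and f in the standard mapping (again with illustrative plane distances, not engine values):

```python
def reversed_depth(z, n, f):
    """Window depth with near/far swapped (depth func flipped to
    'greater'): 1.0 at the near plane, falling to 0.0 at the far one."""
    return n * (f - z) / (z * (f - n))

n, f = 0.1, 1.0e7                    # 10 cm near, 10000 km far
print(reversed_depth(n, n, f))       # 1.0 at the near plane
print(reversed_depth(1000.0, n, f))  # ~1e-4: a 1 km distance now maps
                                     # near zero, where smaller float32
                                     # exponents supply extra resolution
```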
I've also found an earlier post by Humus where he says the same thing, and also gives more insight into the old W-buffers and various Z-buffer properties and optimizations.
I tried to compute the depth resolution of various Z-buffers for the whole range from 0 to 10,000 kilometers. Here's the result (if I didn't make a mistake), showing the resolution at a given distance (lower is better):
The red curve shows the resolution of the logarithmic Z-buffer. It scales linearly with distance, which means it is constant in screen space. That's also the ideal behavior.
The blue curve shows the resolution of a 32-bit floating-point depth buffer with swapped far and near planes. The spikes appear with each decrement of the exponent. The floating-point depth buffer has worse resolution than the logarithmic one, but it should still be sufficient. The green line is for an ideal Z-buffer with a depth resolution of 0.12 millimeters at one kilometer; it also happens to be the worst-case bound for the floating-point depth buffer.
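The 0.12mm bound can be reproduced with a back-of-envelope computation: far away, the reversed curve behaves like d ≈ n/z, so stepping d by one float32 ulp (a relative step of about 2^-23) moves z by roughly 2^-23 · z, independently of the near plane distance (illustrative numbers, not the exact plot data):

```python
EPS32 = 2.0 ** -23   # float32 mantissa step: relative spacing ~1.19e-7

# Far from the camera the reversed depth behaves like d ~ n/z, so
# dz = ulp(d) / |dd/dz| = (d * EPS32) / (n / z**2) = EPS32 * z:
# the error grows linearly with distance, like the logarithmic buffer's.
z_m = 1000.0                         # one kilometer
print(EPS32)                         # ~1.19e-07 resolution near d = 1.0
print(round(EPS32 * z_m * 1000, 2))  # ~0.12 (mm): worst case at 1 km
```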
First test of vehicle physics in the engine. The camera isn't attached to the vehicle yet, so I had to chase it, without much success I admit [smile] Also, it's really easy to lose sight of the truck in the big world.
Click on the image below to see the video of the Gaz-66 truck skidding down the bumpy slopes.
Angrypig managed to get the Collada importer into beta stage, so here are the first screens with the new Cessna model and the Gaz-66 truck. Too bad that neither AO nor shadows are ready yet; the screens would be much nicer with them.
The Cessna cockpit has quite a high resolution; the indicators should come alive soon.
Views from the outside. The propeller is already turning, but there's no motion blur on it yet. Transparency works too, though there's a bug in the model. The windows are asking for a bit of dirt here and there, too.
Recently I've been working on the code that computes the fractal data - a bit of reorganization to get more independent fractal channels, which are used everywhere in the engine, as I was already running short of them. Previously we had one channel where the terrain elevation fractal was computed, another channel with the low-pass filtered terrain slope, and two independent fractal channels. After the redesign we have four independent fractal channels, and additionally the filtered slope is computed independently for the u and v terrain directions.
The two-dimensional slope values allow for a better horizontal displacement effect on the generated terrain, because it's now possible to make the rock bulge out of the hill slope in the right direction. In the previous version only the absolute slope value was known, so the fractals extruded the mesh independently in two orthogonal directions, and of course that did not always look good.
In addition, the displacement equation was parametrized to get more effects out of it. Currently it's possible to vary the dominant wavelength, bias and amplitude of the fractal used.
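The engine's actual displacement equation isn't shown here, but the role of the three parameters can be sketched with a toy model (all names and the single-octave noise are hypothetical stand-ins, not engine code):

```python
import math

def displace(u, slope, wavelength=19.0, bias=0.5, amplitude=1.0):
    """Toy model of the parametrized horizontal displacement: a single
    sine octave stands in for the engine's real fractal noise.
    Positive bias pushes sloped parts outwards from the hill,
    negative bias carves them in; amplitude emphasizes overhangs."""
    noise = math.sin(2.0 * math.pi * u / wavelength)  # in [-1, 1]
    return amplitude * (noise + bias) * slope

# Flat terrain (slope 0) is never displaced; steeper terrain moves more,
# and raising the bias shifts the whole displacement outwards.
print(displace(3.0, 0.0))  # 0.0
print(displace(3.0, 1.0, bias=1.5) > displace(3.0, 1.0, bias=-1.0))  # True
```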
Here is a quick comparison of what the parameters do.
Bias +0.5 (bulging outwards from the slope), wavelength 19m
In the last screenshot the fractal used for the displacement is only slightly visible, and the bulbous shape of a purely slope-dependent displacement shows up. Here's how the displacement looks when the bias is even larger, i.e. when the sloped parts are pushed outwards even more (bias +1.5):
It's also possible to use negative bias values, which carve the sloped parts into the hill (bias -1.0):
On the other hand, amplitude boost can emphasize the effect of the fractal, creating more visible overhangs here and there:
For comparison, here's how the same terrain looks without any horizontal fractal effect at all:
... and without the vertical fractal, with only a bicubic subdivision of the original 76m terrain grid:
The next thing to try could be a texture containing the rough shape of a specific type of erosion one would like to achieve. The current technique still cannot generate proper cliff and canyon walls, but combining it with a shape-map lookup should theoretically do the job.
Some time ago I had an idea to make the web site with a white background. The idea was inspired by a bug in early atmosphere rendering that made the Earth look as if it were submerged in watery milk. I remembered it, managed to reproduce the bug in RenderMonkey, tuned it, and here's how it looks:
The whole new look built around that idea can be seen here: outerra.com
It's been one year since we released the first videos from the Outerra engine, so it's about time to release another short one, featuring a lifeless Jalapeno and a pilotless Cessna plane with a broken propeller [cool]