Community Reputation

487 Neutral

About cameni


  1. Thanks for the submissions! I've published the results in two blog posts so far:   grass - blocks -
  2. Hi everybody, I'd be very grateful if you could help us gather OpenGL performance data with 3 low-level benchmarks we made to test various procedural and non-procedural rendering approaches across different hardware.

There are two procedural-type tests, grass and buildings, that primarily measure achievable triangle throughput vs. the instance mesh size. Here are some performance results from our procedural grass test: Raw performance, Mtris/s; Performance/price, Mtris/s per dollar (using for prices)

The third test (cubes) focuses on finding the optimal settings for non-procedural model rendering. It measures the achievable triangle throughput and number of draw calls in several rendering modes and with varying mesh sizes. There are more than 150 test combinations in total, each taking roughly 5 seconds, so the whole run can take 13-15 minutes.

If you would like to help us gather performance data, the test suite can be downloaded here: Outerra perf tests download

After unpacking, run the three test batches there: grass_test.bat, buildings_test.bat and cubes_test.bat. When each test completes, you will see your score and you will be asked to send the results to us, so we can incorporate data from different hardware into our results.

Please note that the test program requires OpenGL 4.x drivers and may not run on some older cards that only support OpenGL 3.x. The test program also requires 64-bit Windows. On some older AMD cards the cubes test will likely crash the drivers, and not all of the modes work (bindless textures will currently crash any AMD card/driver, and basevertex mode won't work on AMD either).

Thanks for your help! Complete results will be published on the Outerra blog together with comments. Sources for the tests are at
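The throughput metric the tests report can be reproduced from raw frame statistics. A minimal sketch (the function name and the sample numbers are illustrative, not taken from the actual test suite):

```python
def mtris_per_second(tris_per_frame, frames, elapsed_s):
    """Triangle throughput in millions of triangles per second."""
    return tris_per_frame * frames / elapsed_s / 1e6

# hypothetical run: 2M triangles per frame, 600 frames in 5 seconds
print(mtris_per_second(2_000_000, 600, 5.0))  # → 240.0
```

Dividing the result by the card's price gives the Mtris/s-per-dollar figure used in the performance/price chart.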
  3. No. Though you can, if your use case makes it advantageous to do so.   Think of "channels" as meaning "layers" of noise in this context. This.   Basically, we are generating 1+4 independent layers of fractal noise: the first one is used for terrain height modulation, and the four less precise ones (each with a specific spectrum) are used for various tasks within the generator. The generated noise is cached in textures for perf reasons, though you don't have to, as swiftcoder wrote.   Running out of channels here means that for some (new) stuff we don't have enough of these layers and would need more, maybe with different characteristics. But that means more GPU memory would be needed for the cached data, which is not very good since Outerra already consumes quite a lot of it.   There are some tricks that I hope to use, like combining several noise channels of different spectra into one and then separating them with a filter. That would reduce the precision though, so I need to experiment.
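The layered-noise idea above can be sketched as plain fBm (fractional Brownian motion) layers, each built from independently seeded octaves. This is a toy numpy version to show the structure (nearest-neighbour upsampling stands in for proper interpolation; it is not Outerra's actual generator):

```python
import numpy as np

def fbm_layer(shape, octaves, seed, gain=0.5, lacunarity=2):
    """One layer of fractal value noise: a sum of re-seeded white-noise
    octaves, each upsampled to the target grid and attenuated by `gain`."""
    rng = np.random.default_rng(seed)
    out = np.zeros(shape)
    amp, res = 1.0, 4
    for _ in range(octaves):
        grid = rng.random((res, res))
        # crude nearest-neighbour upsample to the full grid
        reps = (shape[0] // res, shape[1] // res)
        out += amp * np.kron(grid, np.ones(reps))
        amp *= gain       # higher octaves contribute less amplitude
        res *= lacunarity # ...but higher spatial frequency
    return out

# 1 height layer + 4 auxiliary layers; distinct seeds make the
# "channels" statistically independent, as described above
layers = [fbm_layer((64, 64), octaves=3, seed=s) for s in range(5)]
```

Caching each layer in a texture then trades GPU memory for generation speed, which is exactly the tension the post describes.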
  4. It's a variant of chunked LOD - though I didn't know the technique had that name; I only found out after talking with Thatcher Ulrich about something else and looking it up afterwards :)
  5. Top of the line vs Old crap

    You don't need to keep the view space position for the logarithmic depth. You can use the value of w, since it already contains the view space depth after projection:

        output.Position.z = log(0.001 * output.Position.w + 1) / log(0.001 * FarPlane + 1) * output.Position.w;

    That's because the projection matrix (D3DXMatrixPerspectiveFovLH) is:

        w  0  0              0
        0  h  0              0
        0  0  zf/(zf-zn)     1
        0  0  -zn*zf/(zf-zn) 0

    and thus w receives the view space z - w ends up with the view space depth.
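The normalization term of that logarithmic depth formula (everything before the final multiply by w) is easy to sanity-check in plain Python; FarPlane = 10000 is an arbitrary example value, not taken from the post:

```python
import math

def log_depth(w, far, c=0.001):
    """Normalized logarithmic depth in [0, 1] for view-space depth w,
    matching log(c*w + 1) / log(c*far + 1) from the shader snippet."""
    return math.log(c * w + 1) / math.log(c * far + 1)

far = 10000.0
print(log_depth(1.0, far))  # near geometry maps very close to 0
print(log_depth(far, far))  # → 1.0
```

Near geometry gets most of the depth range, which is why this distribution behaves so much better than the standard hyperbolic z/w mapping over planetary distances.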
  6. This is actually pretty cool, with simple rules and quite nice results. Of course, technically it's still just a concept, and I guess the devil will show up in the details: mapping it onto a sphere, getting a better set of rules that respect the energies involved (occasionally there's a sudden large-scale unrealistic shift). It would be best to move this process to the GPU. At one-kilometer resolution on an Earth-sized planet it would need to simulate a 40000x20000 region. That would take quite a long time with this algorithm on the CPU, but optimized on the GPU it could run quite fast. On the other hand, even 1km resolution is too detailed for this anyway (without simulating erosion), so maybe a much coarser grid would be sufficient as well. However, the fractal algorithms in Outerra are currently tuned for below-100m detail, so it would not look as good as it could. It would be best to augment it with some erosion simulation later. Btw, no need to be shy about it - I only found it by chance; you should have come to our forum without hesitation. Yet another forum can't be considered "all over the internet" anyway
  7. Bow shock - A summary of work done so far

    [quote]I get chills up my spine whenever I think of atmospheric scattering. I spent so much time tweaking, re-tweaking, dropping everything and starting from scratch that I had enough of it. First I implemented Nishita's method from his original paper, then moved to O'Neil's implementation, and finally I dropped everything and settled on Bruneton's method for multiple scattering, which also took some code-porting, tweaking and bugfixing. I am pleased with this final method and I hope I will never have to touch that part of the code again.[/quote] This is so true... I laughed when I read it because it quite accurately describes what I went through as well. I think Ysaneya mentioned the same thing about his atmospheric code... Otherwise, great work ^.^
  8. Yep, the ATI installer sometimes messes it up, not overwriting the DLL files depending on some heuristics or something. It most often happens when you have installed a beta driver some time ago.
  9. [color=#0088FF]Outerra Tech Demo[/color] Finally, here comes the Outerra tech demo, together with the alpha release of our game Anteworld. Recent download links can be found here:

This alpha release features:
  • A complete, real-scale planet Earth that can be explored
  • Terrain created from real elevation data with 90m resolution where available and 1km resolution for oceans; data are dynamically downloaded as you go
  • Terrain further refined by fractal-based procedural techniques down to centimeter-level detail
  • A vector-based road system that integrates with the procedurally generated terrain
  • The ability to place static stock objects and drive the provided vehicles

The demo comes with the whole planet Earth, which can be explored in free-camera mode or in an 8-wheeler truck. People who like it and/or want to support us and the development of the Outerra engine can buy the alpha release of Anteworld at a discounted price ($15), half the price of the final release. Doing so will give you access to regularly released alpha/beta updates of the game, together with the final version when it's done. The price will gradually rise with each major release. You will also become our beta testers, with the ability to influence the priorities of the development. The full game also includes a plane and a helicopter, and basic sandbox tools that allow you to create roads and runways and place stock objects. A model importer and vehicle configurator that will allow creating custom models and vehicles will be coming soon in an update to the game. The demo contains a few locations around the world (a couple of them were created by our tester Pico). Data for the default location are already included in the installer; the rest will be downloaded automatically on demand as you explore the world (note: proxy servers aren't supported yet for data download).
The total size of the data is around 12GB, but normally you'll download just a fraction of that, unless you traverse the whole planet surface at a low altitude.

Hardware requirements

The Outerra engine runs on OpenGL 3.3 and requires recent graphics drivers. It will warn you if your drivers are outdated, or even refuse to run in case you've got old ATI drivers that are known not to work at all.

Minimum requirements:
  • [color=#800000]Nvidia 8800GT or better, ATI [s]4850[/s] (discontinued support by AMD) 57xx or better[/color]
  • [color=#800000]512MB GPU memory[/color]
  • [color=#800000]a 2-core CPU[/color]

Recommended:
  • Nvidia 460GTX or better, ATI 6850 or better
  • 1GB GPU memory

Limitations of the current alpha state of the technology

[color=#800000]This alpha release comes out to show the potential of the engine, but it still lacks many features commonly found in other engines; in particular, effects are postponed until the major features are implemented. The demo currently comes with just a single biome - northern-type forests. There are no rivers or lakes implemented yet, and no weather yet. Almost all areas are work in progress.[/color]

Known Issues

There are still some driver issues with ATI cards, the most problematic being the 4xxx line, where there are still some seemingly random crashes. The alpha state of the engine also means that it's not very optimized yet. It consumes more GPU memory than it should, spends time rendering things that are not eventually visible, etc.

[color=#0088FF]Anteworld game[/color]

Anteworld[color=#0088ff]*[/color] is a world-building game on a massive, true-to-life scale of our planet. Returning aboard an interstellar colonizer ship built in the Golden Age of Mankind, players arrive on planet Earth to discover civilization and humanity vanished. They will have to rebuild civilization - exploring, fighting, and competing for resources while searching for clues to the disappearance of humanity.
The game will contain several modes. The basic one will be a single-player game, but with player-built locations being synchronized and replicated between clients. That means a player can settle in a free location of his choice where he can build and play, and when he goes exploring he'll be able to observe and visit other sites where other players are building their world. There's also going to be a multiplayer mode for gaming in the existing world. A sim-connect mode should allow using Anteworld as an image generator for another simulation program. In fact, Anteworld is meant to create the basis for an Outerra game/sim platform, allowing the creation of mods and new game modules that would run on the existing backend.

[color=#0088ff]*[/color]The name comes from the Latin prefix ante-, meaning prior-to in time. A world that was. There's going to be an accompanying novella written by C. Shawn Smith that should be loosely tied to the game. Here's a sample, the epilogue: The Outerra Initiative - Epilogue
  10. Integrating Vector Data - Roads

    Separate vectors define the roads; the terrain height is generated from them, with a 3cm asphalt layer on top.
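The idea of deriving terrain height from road vectors can be sketched roughly as follows; the half-width, falloff band, and function names are illustrative assumptions, not Outerra's actual implementation:

```python
def point_segment_dist(px, py, ax, ay, bx, by):
    """Distance from point (px, py) to the road segment (a)-(b)."""
    abx, aby = bx - ax, by - ay
    t = ((px - ax) * abx + (py - ay) * aby) / (abx * abx + aby * aby)
    t = max(0.0, min(1.0, t))                 # clamp to the segment
    cx, cy = ax + t * abx, ay + t * aby       # closest point on segment
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def road_height(terrain_h, road_h, dist, half_width=4.0, falloff=6.0,
                asphalt=0.03):
    """Flatten terrain to the road profile near the centreline.
    Inside the road: road height plus a 3 cm asphalt layer.
    In the falloff band: linear blend back to the original terrain."""
    if dist <= half_width:
        return road_h + asphalt
    if dist >= half_width + falloff:
        return terrain_h
    t = (dist - half_width) / falloff
    return (road_h + asphalt) * (1 - t) + terrain_h * t
```

Evaluating this per heightmap sample against the nearest road segment makes the road carve itself into whatever the procedural terrain generates underneath.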
  11. 3D Engine Design for Virtual Globes is a book by Patrick Cozzi and Kevin Ring describing the essential techniques and algorithms used in the design of planetary-scale 3D engines. It's interesting to note that even though virtual globes gained popularity a long time ago with software like Google Earth or NASA World Wind, there wasn't any book dealing with this topic until now. As the topic of the book is also relevant for planetary engines like Outerra, I would like to do a short review here. I was initially contacted by Patrick to review the chapter about depth precision, and later he also asked for permission to include some images from Outerra. You can check out the sample chapters, for example the one on Level of Detail.

Behind the simple title you'll find an almost surprisingly in-depth analysis of techniques essential for the design of virtual globes and planetary-scale 3D engines. After the intro, the book starts with the fundamentals: the basic math apparatus, and the basic building blocks of a modern, hardware-friendly 3D renderer. The fundamentals conclude with a chapter about globe rendering - ways of tessellating the globe in order to be able to feed it to the renderer, together with appropriate globe texturing and lighting.

Part II of the book guides you into an area that you cannot afford to neglect if you don't want to hit a wall further along in your design - precision. Regardless of what spatial units you are using, it's the range of detail expressible in the floating-point values supported by 3D hardware that limits you. If you want to achieve both a global view of a planet from space and a ground-level view of its surface, then without handling the precision you'll get jitter as you zoom in, and it soon becomes unusable. The book introduces several approaches used to solve these vertex precision issues, each possibly suited to different areas.

Another precision issue that affects the rendering of large areas is the precision of the depth buffer. Because of an old, non-ideal hardware design that reuses values from the perspective division also for the depth values it writes, depth buffer issues show up even in games with larger outdoor levels. In planetary engines that also want human-scale detail, this problem grows beyond all bounds. The chapter on depth buffer precision compares several algorithms that more or less solve this problem, including the algorithm we use in Outerra - the logarithmic depth buffer. Who knows, maybe one day we'll get direct hardware support for it, as per Thatcher Ulrich's suggestion, and it will become a thing of the past.

The third part of the book concerns the rendering of vector data in virtual globes, used to render things like country boundaries or rivers, or polygon overlays to highlight areas of interest. It also deals with the rendering of billboards (marks) on terrain, and the rendering of text labels on virtual globes. The last chapter in this part, Exploiting Parallelism in Resource Preparation, deals with an important issue popping up in virtual globes: utilizing parallelism in the management of content and resources. Being able to load data in the background without interfering with the main rendering is one of the crucial requirements here.

The last part of the book talks about rendering massive terrains in a hardware-friendly manner: the representation of terrain, preprocessing, and level of detail. Two major rendering approaches have dedicated chapters: geometry clipmapping and chunked LOD, together with a comparison. Of course, the book also comes with a comprehensive list of external resources in each chapter.

We've received many questions from people who wanted to know how we started programming our engine, what problems we encountered, or how we solved this or that. Many of them I can now direct to this book, which really covers the essential stuff one needs to know here.
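The vertex-precision problem described above is easy to demonstrate: single-precision floats, as commonly used for GPU vertex positions, cannot resolve a centimeter offset at planetary distances from the origin, which is exactly what produces the zoom-in jitter. A small self-contained illustration (the Earth radius is used only as an example magnitude):

```python
import numpy as np

earth_radius = 6_378_137.0   # metres; example magnitude only
offset = 0.01                # one centimetre

a = np.float32(earth_radius)
b = np.float32(earth_radius + offset)
print(b - a)                 # → 0.0  (float32 spacing here is 0.5 m,
                             #  so the centimetre is lost entirely)

# keeping coordinates relative to the eye keeps them small,
# so the same centimetre survives:
print(np.float32(offset))    # ~0.01, representable just fine
```

This is why the approaches the book surveys (rendering relative to the eye or to patch centers, etc.) all amount to keeping the values the GPU sees small.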
  12. I've never been much of a speaker, not in my native language and even less so in English. When Markus Volter, the man behind the SE Radio and omega tau science & technology podcasts, contacted me to make a podcast about Outerra and some of the technology behind it, I initially hesitated. But then I decided that it couldn't hurt, and that I must force myself to train my tongue a bit. So, after some time, we recorded an hour-long interview, and you can listen to it here: Beware though that I'm a slow speaker with a monotonous voice. Enjoy
  13. [quote name='RSI' timestamp='1302850906'] I thought you should be encouraged. The before and after pics show quite an improvement. And yes, the video was trippy. Thanks... I actually watched the whole thing. [/quote] Yeah, I caught myself staring at the Hilbert curve going around there for way too long too
  14. Our old terrain mapping and compression tool has recently been replaced by a new one, developed from scratch. The old tool was the only piece not done completely by us (the core Outerra people), and as a result it felt somewhat "detached" and not entirely designed in line with our concepts. It was quite slow and contained several bugs that caused artifacts, mainly in coastal regions.

What does the tool do? Its purpose is to convert terrain data from the usual WGS84 projection into the variant of quadrilateralized spherical cube projection we are using, along with wavelet-based compression of the data during the process. It takes ~70GB of raw data and processes it into a 14GB dataset usable in Outerra, giving it the ability to be streamed effectively and to provide the needed level of detail.

With the aforementioned defects in mind, and with the need to compile a new dataset with better detail for northern regions above 60° latitude, we decided to rework the tool, in order to speed it up and extend its functionality as well. I originally planned to implement it using CUDA or OpenCL, but after analyzing it more deeply I decided to make it a part of the engine, using OpenGL 3.x shaders for the processing. This will later allow for creating an integrated and interactive planet or terrain creator tool, which is worth it in itself.

The results are surprisingly good. For comparison: to process the data for the whole Earth, the old CPU-only tool needed to run continuously for one week (!) on a 4-core processor. The same thing now takes just one hour, using a single CPU core for preparing the data and running the bitplane compressor, and a GTX 460 GPU for mapping and computation of wavelet coefficients. In fact, the new tool is processing more data, as the northern parts of Scandinavia, Russia and more are also included in the new dataset. All in all it represents roughly a 200X speedup, which is way more than we expected and hoped for.

Although GPU processing plays a significant role in it, without the other improvements the gain would be much smaller. The old tool was often bound by I/O transfers - it processed and streamed the data synchronously. The new one does things asynchronously; additionally, it now reads the source data directly in packed form, saving disk I/O bandwidth - it can do the unpacking without losing time because the main load has been moved from the CPU to the GPU. Another thing that contributed to the speedup is a much better caching mechanism that plays nicely with the GPU job.

There's another interesting piece used in the new tool - unlike the old one, it traverses the terrain using adaptive Hilbert curves. A Hilbert curve is a continuous fractal space-filling curve with an interesting property - despite being just a line, it can fill a whole enclosed 2D area. Space-filling curves were discovered after the mathematician Georg Cantor found that the infinite set of points in a unit interval has the same cardinality as the infinite set of points in any finite-dimensional enclosed surface (manifold) - in other words, that there is a 1:1 mapping from the points of a line segment to the points of a 2D rectangle. These functions belong to our beloved family of functions - fractals. In the mapping tool it is used in the form of a hierarchical, recursive and adaptive Hilbert curve. While any recursive quad-tree traversal method would work effectively, the Hilbert curve was used because it preserves locality better (which has a positive effect on cache management), and because it is cool. Here is a video showing it in action - the tool shows the progress of data processing on the map:

Apart from the speedup, the new dataset compiled with the tool is also smaller - the size fell by 2GB to ~12GB, despite containing more detailed terrain for all parts of the world. I'm not complaining, but I'm not entirely sure why that is. There was one minor optimization in the wavelet encoding that can't explain it. The main suspect is that the old tool was encoding wide coastal areas with higher resolution than actually needed.

***

Despite the base terrain resolution being the same in both cases (3" or roughly 90m spacing), the new dataset comes with much better erosion shapes that were previously rather washed out. The new data come from multiple sources, mainly the original SRTM data and data from Viewfinder Panoramas, which provides enhanced data for Eurasia. It appears that the old data were somehow blurred, and the fractal algorithms that refine the terrain further down didn't like that. The difference shows best in the Himalayas - the screens below are from there, starting with Mt. Everest.

old | new

There are also finer, 1" (~30m) resolution data for some mountainous areas of the world, and we plan to test these too - interested to see how it affects the size and changes the look.
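The locality property that motivated the Hilbert traversal is easy to see in code. Below is the classic iterative distance-to-coordinate conversion for a plain (non-adaptive) Hilbert curve - a textbook sketch, not the hierarchical adaptive variant the tool uses:

```python
def hilbert_d2xy(order, d):
    """Map distance d along a Hilbert curve over a 2^order x 2^order grid
    to (x, y) cell coordinates, via the standard iterative algorithm."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:            # rotate/flip the quadrant as needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# consecutive d values always land on adjacent cells - the locality
# that helps cache behaviour during the terrain traversal
pts = [hilbert_d2xy(4, d) for d in range(256)]
```

Each step of the curve moves exactly one cell, so data touched close together in time is also close together in space, which is what makes the caching play nicely with the GPU job.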
  15. [quote name='BenS1' timestamp='1298884235' post='4780006'] Do you have any plans for more technical explanations of your Outerra engine? The water article was excellent... now if only there was a similar article on how you achieve your zooming all the way from planet scale down to cm scale, that would be very interesting too. [/quote] I actually plan to write something about that too, but it will be a bit longer. Also, I'm usually motivated to document stuff (by blogging about it) after finishing some piece, but this is a wider topic encompassing it all... I guess it will take some time until I kick myself into finishing it. The basics are easy though: starting zoomed out, a [url=""]cube[/url] with a quad-tree on each face. Once the quad-tree subdivision works there's no problem with zooming in on the planet. No, I'm lying. There are thousands of problems to make it work across such a range of detail, but conceptually it's simple. There's plenty of devil in the details.
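The cube-with-quadtrees basics mentioned above can be sketched in a few lines. The fixed refinement level stands in for a real screen-space-error test driven by camera distance; all names here are illustrative, not the engine's actual code:

```python
class QuadNode:
    """One quad-tree patch on a cube face; children cover its 4 quadrants."""
    def __init__(self, x, y, size, level):
        self.x, self.y, self.size, self.level = x, y, size, level
        self.children = []

    def subdivide(self, wanted_level):
        """Refine down to wanted_level (a stand-in for an error metric)."""
        if self.level >= wanted_level:
            return
        half = self.size / 2
        self.children = [
            QuadNode(self.x + dx * half, self.y + dy * half, half,
                     self.level + 1)
            for dy in (0, 1) for dx in (0, 1)
        ]
        for c in self.children:
            c.subdivide(wanted_level)

def count_leaves(n):
    return 1 if not n.children else sum(count_leaves(c) for c in n.children)

# six root nodes, one per cube face; refine one face a few levels,
# as if the camera were zooming in on it
faces = [QuadNode(0.0, 0.0, 1.0, 0) for _ in range(6)]
faces[0].subdivide(3)
print(count_leaves(faces[0]))  # → 64  (4^3 leaf patches on that face)
```

Projecting each face's patches onto the sphere gives the planet; the thousands of problems (cracks between LOD levels, precision, streaming) all live beyond this sketch.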