# Daishim


2. ## Procedural Terrain Texturing

I have kept a polygon soup; I just use the logical patches (which store some basic LOD values and the rendering indices for that patch) to actually render from the terrain mesh as a whole. I've seen a lot of methods that insist on completely dividing the terrain mesh into tiny subdivisions, but this seems grossly inefficient to me, and, as you mentioned, you lose a lot of the functionality of having the whole mesh to work with. For example, when calculating average normals or interpolating the overall slope of an area, the edges of the patches/subdivisions lose precision since they do not have access to adjacent data. I'm doing some more research into more natural texture generation, using slope and other factors as determinants as well as height. I hope to have some more stuff soon.
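To illustrate the adjacency point above: because the whole heightmap stays accessible, a vertex on a patch boundary can still sample neighbours that live in the adjacent patch when computing its normal. A minimal sketch, with hypothetical names (the clamping only happens at the true border of the map, not at patch seams):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical whole-mesh heightmap. Patches are only logical groupings,
// so a boundary vertex can read samples from the neighbouring patch.
struct Heightmap {
    int width, depth;
    std::vector<float> h;                 // row-major height samples
    float at(int x, int z) const {
        // clamp only at the map border, where neighbours truly end
        if (x < 0) x = 0; if (x >= width) x = width - 1;
        if (z < 0) z = 0; if (z >= depth) z = depth - 1;
        return h[z * width + x];
    }
};

// Central-difference normal; gridSpacing is world units between samples.
void normalAt(const Heightmap& m, int x, int z, float gridSpacing,
              float& nx, float& ny, float& nz) {
    float dx = m.at(x + 1, z) - m.at(x - 1, z);
    float dz = m.at(x, z + 1) - m.at(x, z - 1);
    nx = -dx; ny = 2.0f * gridSpacing; nz = -dz;
    float len = std::sqrt(nx * nx + ny * ny + nz * nz);
    nx /= len; ny /= len; nz /= len;
}
```

If the terrain were chopped into independent subdivisions, the `at()` lookups at a patch edge would have nothing to read on the far side, which is exactly the precision loss described above.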
3. ## Procedural Terrain Texturing

I've been working on this terrain engine for quite some time, and it has also been a long learning project for me. I started the engine knowing basically only how to draw some simple shapes to the screen using OpenGL, and now I'm generating terrain textures procedurally in a fresh new engine architecture. I'm using a fairly simple algorithm based on Trent Polack's implementation as described in his book "3D Terrain Programming". The algorithm iterates through each triangle row and column, interpolates the height of the surface at that point, and determines its pixel from the textures that affect that height. For example, say we have 3 textures, grass, dirt/rock, and snow, that affect the following height ranges:

| Texture   | Min | Optimal | Max |
|-----------|-----|---------|-----|
| Grass     | -15 | 20      | 50  |
| Dirt/Rock | 20  | 115     | 175 |
| Snow      | 115 | 175     | 255 |

Pseudo overview:

```
Loop (through each element row of the map)
    Loop (through each element column of the map)
        Loop (through each texel of the corresponding destination texture area)
            Loop (through each layer)
                Calculate blend percentage of layer based on height
                Calculate blended RGB value from source texel
                Add blended RGB to overall texel RGB
            Set texel RGB in destination texture
```

This algorithm has generated the following images from my terrain editor (work in progress [wink]): Shot 1, Shot 2, Shot 3. There is no detail texture yet, and I have disabled lighting from my previous work to simplify testing of the new engine architecture and the procedural texture generation. I plan to do some more research into procedural texture generation and to work slope variation and other variables into the algorithm to produce a more natural looking terrain. That's all for now.
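The "blend percentage of layer based on height" step above can be sketched as a simple ramp: the weight rises from 0 at the layer's Min to 1 at Optimal, then falls back to 0 at Max. The function name and signature here are my own, not from the engine, and it assumes Min < Optimal < Max:

```cpp
#include <cassert>
#include <cmath>

// Per-layer blend weight for a given terrain height. A layer contributes
// nothing outside [minH, maxH], fully at optimalH, and linearly in between.
float regionPercent(float minH, float optimalH, float maxH, float height) {
    if (height < minH || height > maxH)
        return 0.0f;                                  // layer inactive here
    if (height <= optimalH)
        return (height - minH) / (optimalH - minH);   // rising edge
    return (maxH - height) / (maxH - optimalH);       // falling edge
}
```

With the grass range above (-15/20/50), a height of 20 weights grass at 100%, while a height of 35 gives 50% grass, which the dirt/rock layer's rising edge then fills in.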
4. ## Engine Architecture Postmortem

I've been fairly busy working on my terrain engine, which is coming along pretty well. I've done a lot of restructuring of the engine, mostly as it applies to data transfer through the engine. My original architecture was to separate every object from every other object, except its dependencies (pretty much only inherited objects or global types). Objects just contained their data, knew some basic operations on that data, and knew how to give the data out. The renderer knew how to draw basic things. The engine core was left up to getting the data from the object and then handing it to the renderer to be drawn. For encapsulation and data management in a large project (like a game engine), this seemed like a good idea.

However, this didn't turn out to be the case. Every time I wanted to draw something, say the terrain, I had to strip together indices for rendering the patches. This required prompting the terrain map object to generate these indices every frame, which got kind of expensive (even though it was mostly just a fetch of patch indices and then a degenerate triangle to stitch them together). This also made it very difficult to implement level of detail, blending, and all those nifty graphical tricks, since objects didn't know how to draw themselves. The engine had to get the texture token(s) from the terrain map, get the vertex array/buffer token(s), bind them, enable blending, retrieve the indices, and perform the draw. The engine was constantly busy trafficking data back and forth between various components, and it got insanely cluttered really fast. Unfortunately, I couldn't really find a light at the end of the tunnel for this architecture and it was just getting out of hand. So I took a few steps back and reevaluated some things that I was doing. I had somewhat neglected the use of interfaces in my engine, mostly due to my ignorance of their extreme usefulness even for some of the simplest of objects.
So, I ran back down the tunnel to the architecture-decision fork and took a different branch. I wrapped a lot of my components with interfaces and allowed components to know about the interfaces of the other components they communicate with. This lets me simply invoke a function and pass the object to communicate with to the component that needs to do the communicating, and the interfaces take care of keeping GUI objects from knowing about OpenGL and other things they really don't need to know specifically about. This keeps the code cleaner, and I can completely rewrite engine components using different APIs, or roll my own, without having much, if any, effect on the rest of the engine. As far as performance goes, I've increased the efficiency of the engine and its overall feel of interaction to something more befitting an actual game engine. Frame rates have increased, and performing graphical effects has become much easier since objects control their own rendering. I've relieved the core of the engine from playing messenger and having to decipher and translate, leaving it up to the objects and components to handle that, and leaving the core to simply orchestrate the interaction. The biggest beneficiary of this switch is probably my eyes. They aren't screaming at me for having to read through the cluttered code anymore ;-).
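The interface arrangement described above can be sketched roughly like this. All names here are illustrative, not the engine's actual classes: the object draws itself through an abstract renderer, so the core never shuttles textures and indices around, and the API-specific code stays behind the interface.

```cpp
#include <cassert>

// Abstract renderer: components see only this, never OpenGL directly,
// so a Direct3D or software renderer could be swapped in behind it.
class IRenderer {
public:
    virtual ~IRenderer() = default;
    virtual void drawIndexed(int vertexBufferToken, int indexCount) = 0;
};

// Anything drawable controls its own rendering through the interface.
class IRenderable {
public:
    virtual ~IRenderable() = default;
    virtual void render(IRenderer& r) = 0;
};

class TerrainPatch : public IRenderable {
    int bufferToken_ = 7;     // hypothetical cached-buffer token
    int indexCount_  = 384;   // hypothetical index count for this patch
public:
    void render(IRenderer& r) override {
        // The patch knows its own tokens and indices; the core just
        // hands it a renderer and asks it to draw itself.
        r.drawIndexed(bufferToken_, indexCount_);
    }
};
```

The payoff is exactly the one described in the post: the core orchestrates (`patch.render(renderer)`) instead of fetching tokens, binding state, and issuing the draw itself.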
5. ## Basic Terrain Rendering

I've finally gathered the time to get back to working on my engine, now that the semester is over. My current dilemma with the engine is what model and terrain/world geometry formats I should use to accomplish what I need. After doing some research, it looks like I'm going to be rolling my own terrain/world geometry format, which is okay by me. I've actually got some basic functionality of a heightmap terrain renderer up and running, as demonstrated in the following screenshots. It still needs a LOT of work, but in its basic form, it's working. I need to add texturing, smoothing, LOD, and much much more. I'm planning to combine this with a BSP format also, which I'm still working out the details of. The other half of my dilemma is a model format. I have a very basic (again) static mesh rendering function in place that uses Wavefront OBJ files. I have been researching the MD# model formats and have found them to be interesting, to say the least. I haven't had much time to spend in this area yet to do much research, so this will probably be my next focus. I'm still not sure which format is going to win out in the end, or if I'm going to end up rolling my own. I'll have to experiment with a few and see what works out best for what I need. The only thing I know for sure so far is that I will need a skeletally animated format. I still need to finish implementing lighting in the engine. I may start implementing shaders for some more advanced detailing as I get more features up and running. I have managed to obtain the Ageia PhysX physics SDK. This will probably take up some of my model-researching time, to make sure that I pick a format, adapt one, or roll my own that will easily work with the physics code. Many more things to come still... I'm kind of flying by the seat of my pants at the moment, experimenting with all kinds of things, but hopefully I can bring some sane order to it once I'm done playing with things and lock down a solid design and implement it.
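The basic heightmap renderer mentioned above boils down to expanding a grid of height samples into world-space vertices. A minimal sketch of that step, with names and scaling factors that are my own assumptions rather than the engine's:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vertex { float x, y, z; };

// Turn a raw 8-bit heightmap into a renderable vertex grid.
// spacing is world units between samples; heightScale maps 0-255 to height.
std::vector<Vertex> buildTerrainVertices(const std::vector<unsigned char>& heights,
                                         int width, int depth,
                                         float spacing, float heightScale) {
    std::vector<Vertex> verts;
    verts.reserve(static_cast<size_t>(width) * depth);
    for (int z = 0; z < depth; ++z)
        for (int x = 0; x < width; ++x)
            verts.push_back({ x * spacing,
                              heights[z * width + x] * heightScale,
                              z * spacing });
    return verts;
}
```

Triangle indices for rendering (and later LOD and smoothing) would then be built over this grid; a rolled-your-own format really only needs to store the sample array plus these two scale factors.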
6. ## The beginnings of a powertrain

So, I have begun design and implementation of an engine of my own. I have decided to create my own engine for several reasons. The first and most important of those reasons is that I want to better understand and appreciate what an engine is. Simply using another engine just doesn't quite give you the same appreciation. I also want the experience of having created a large-scale project, while creating a dynamic engine that has the abilities and features that I think are important. My last reason is that I want to help revolutionize the gaming industry, and not just repeat the already repeated. I have broken the design and implementation into several parts. First, I will begin to write the renderer. I have chosen to write this first because it is something that is not necessarily dependent on the rest of the engine. I have decided to implement the renderer as the baseline and have the rest of the engine interface with it. The renderer will be implemented using OpenGL initially. Once the OpenGL renderer is finished and the rest of the engine is under way, I may toy with implementing a parallel Direct3D renderer to plug in as well. However, I'm not holding my breath on this one. As for the audio subsystem, I'm going to go with OpenAL. OpenAL seems to be a very powerful audio API that is very similar in design to OpenGL. OpenAL also allows for EAX extensions and has been used in some large-scale commercial products. I am shooting to implement a full surround sound environment with sound physics, which I will discuss later on as I learn more about the physics SDK and OpenAL. I would like to attach a physics engine to the engine structure as well. I have been quite attracted to Ageia's PhysX physics SDK by the good word that has been going around about it and their quite astonishing hardware physics processor that is now on the market.
If I remember correctly, a modern CPU can support full life-like physics for approximately 10-15 objects simultaneously, whereas the Ageia PhysX processor is capable of handling 40,000 objects. I only vaguely remember this number, so I'd take it with a grain of salt. If you want an idea of what I'm talking about, refer to the Cell Factor demo video. As for the core of the engine itself, this is still somewhat up in the air. I would like to implement a threaded architecture to support multi-core systems. I would like to separate the I/O systems, such as the renderer, input, and audio, from CPU-intensive functions such as physics, AI, and resource loading. I would also like to write in support for heightmaps for wide-open terrain, as well as BSP support for tight, high-resolution, high-polygon areas. Perhaps I'll merge the two into a unique format for crossing the boundaries between tight close quarters and wide-open areas. I have earnestly begun work on the renderer, and have a good chunk in place currently. I have the ability to cache vertex arrays in the renderer, with the option of using vertex buffers to store vertex array data in video RAM. The renderer has basic texturing capabilities and a quite half-assed, very basic lighting implementation, just to test it. I will discuss the renderer in a later post; I just wanted to get some stuff out of my head and into writing.
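The vertex-array caching mentioned above can be sketched as a token-based store: the renderer keeps the data once and hands back an opaque token, so callers never touch the array (or the video-RAM buffer behind it) directly. A minimal sketch with hypothetical names:

```cpp
#include <cassert>
#include <map>
#include <utility>
#include <vector>

// Token-based vertex-array cache. Callers submit data once and refer to
// it by token afterwards; whether it lives in system RAM or in a VBO in
// video RAM stays an internal detail of the renderer.
class VertexCache {
    std::map<int, std::vector<float>> arrays_;
    int nextToken_ = 1;
public:
    int cache(std::vector<float> data) {
        int token = nextToken_++;
        arrays_[token] = std::move(data);
        // In the real renderer, this is also where the data could be
        // uploaded to video RAM (e.g. via OpenGL vertex buffer objects).
        return token;
    }
    const std::vector<float>* lookup(int token) const {
        auto it = arrays_.find(token);
        return it == arrays_.end() ? nullptr : &it->second;
    }
};
```

This is the same token idea that appears in the earlier postmortem post: components pass tokens around instead of raw geometry, which keeps the API-specific storage decision inside the renderer.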