JTippetts
- Member Since: 04 Jul 2003
- Group: Moderators
- Active Posts: 3,165
- Profile Views: 19,027
- Member Title: Moderator - Visual Arts
Posted by JTippetts on 24 September 2011 - 07:52 AM
My implementation uses FNV-1A hashing.
Posted by JTippetts on 23 September 2011 - 10:38 AM
Posted by JTippetts on 23 September 2011 - 09:24 AM
> If the doubling-up is used to avoid overflow, why not simply change the array references to modulo (e.g. AA=p[A & 255]+Z)?
> I think I'm starting to get it, though. I've always considered "permutation" to be bound to probability calculations in college math classes, but I never thought to see it as just "one permutation" and divorce it from the standard practice of calculating all of the permutations of a list. With that out of the way, I get why the "hashing" does what it does.
You probably could do that, I suppose. In my own implementation, I use a FNV-1A hash instead of the permutation table, since I never really liked the idea of limiting myself to a domain of [0,255]. Using the permutation table as in the reference implementation, with mod'ed coordinates as input, means that the noise pattern repeats every 256 units, and since I've done a lot of procedural planet and other large-scale generation, that was undesirable behavior. But basically any kind of hash will work as long as it maps (X,Y,Z) to a pseudo-random gradient array index in a deterministic fashion.
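To illustrate the idea, here is a minimal sketch of the standard 32-bit FNV-1a applied to coordinates (not my library's exact code; the function names are made up for illustration):

#include <cstddef>
#include <cstdint>

// Standard 32-bit FNV-1a over a byte buffer.
uint32_t fnv1a(const void* data, size_t len)
{
    const uint8_t* bytes = static_cast<const uint8_t*>(data);
    uint32_t hash = 2166136261u;   // FNV offset basis
    for (size_t i = 0; i < len; ++i) {
        hash ^= bytes[i];
        hash *= 16777619u;         // FNV prime
    }
    return hash;
}

// Map integer coordinates (plus a seed, folded in one possible way here)
// to an index into a table of gradients. There is no 256-unit repeat:
// the whole integer domain hashes deterministically.
uint32_t gradientIndex(int32_t x, int32_t y, int32_t z, uint32_t seed, uint32_t numGradients)
{
    int32_t coords[3] = { x, y, z };
    return (fnv1a(coords, sizeof(coords)) ^ seed) % numGradients;
}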
> Haha, I'll bet. Thankfully, this implementation is trivially easy to implement in C#, so I didn't run into any problems (except for the slow-to-come realization that it was mapping to [-1,1], but that's easily fixable). I just don't feel very comfortable implementing algorithms I don't understand.
[-1,1] can in some cases be highly desirable, for instance in the case of turbulence. A noise function can be used to perturb the inputs to another function, and in such a case it can be useful to have the perturbation be centered about the original point. For example, in my procedural island generation, if I use noise in the range [0,1] to perturb instead of [-1,1], it has the effect of offsetting the island away from the center of the mapped region.
So in my library, I use [-1,1] noise output by default, but offer a function that can remap the range to [0,1] or whatever arbitrary range I may require.
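The remap itself is just a linear rescale; a one-function sketch (the name is invented for illustration, not my library's actual API):

// Linearly rescale noise output from [-1,1] to an arbitrary [lo,hi].
double remapNoise(double n, double lo, double hi)
{
    double t = n * 0.5 + 0.5;   // [-1,1] -> [0,1]
    return lo + t * (hi - lo);  // [0,1]  -> [lo,hi]
}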
Posted by JTippetts on 23 September 2011 - 07:47 AM
Consider the case where, say, X=255, Y=140 and Z=24. If you perform the operation A=p[X]+Y, then (since p[255] happens to be 180 in the reference permutation table) the result will be 180+140, or 320. So then you perform the operation AA=p[A]+Z. If p weren't a doubled-up version of permutation in this case, then indexing p[320] would overflow the 256-entry array. But since the maximum value stored in p is 255, and the maximum value that any coordinate can be is 255, the maximum array index ever used is 255+255, or 510. Doubling the permutation table to 512 entries prevents overflow.
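To make the arithmetic concrete, here is a sketch of how the doubled table is typically built (this mirrors the setup loop in the reference implementation; the function name is mine):

#include <array>

// Build the 512-entry doubled table. perm is any permutation of 0..255.
std::array<int, 512> buildDoubledTable(const std::array<int, 256>& perm)
{
    std::array<int, 512> p{};
    for (int i = 0; i < 256; ++i)
        p[i] = p[i + 256] = perm[i];   // duplicate the 256 entries into the upper half
    return p;
}

// Worst case: p[X] is at most 255 and Y is at most 255, so A = p[X] + Y
// is at most 510, which is still a valid index into the 512-entry table.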
The big, ugly return statement is simply interpolating the corners of a cube. Consider the 2D case, where each of the corners of a square is assigned a noise value. The final result is calculated by interpolating the 4 corners. First, the top-left and bottom-left values are interpolated using the fractional part of the Y coordinate. Then the top-right and bottom-right corners are interpolated, again using fractional Y. Finally, these two intermediate results are interpolated using the fractional X coordinate, to get the final output value. By the same token, the 8 corners of the cube are interpolated, using the fractional X, Y, and Z components. Since there are more points, more dimensions, it requires more interpolation operations. The exact number of interpolations is equal to 2^N - 1, where N is the dimensionality of the noise being generated. So if you think the 3D version is hairy, you should see the 4D version.
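For the curious, a sketch of those 2^3 - 1 = 7 interpolations in the 3D case (variable names are illustrative, not from the reference code):

// c[x][y][z] holds the eight corner values of the cell;
// fx, fy, fz are the fractional coordinates within the cell.
float lerp(float a, float b, float t) { return a + t * (b - a); }

float trilerp(const float c[2][2][2], float fx, float fy, float fz)
{
    float y00 = lerp(c[0][0][0], c[0][1][0], fy);  // 4 lerps along Y
    float y01 = lerp(c[0][0][1], c[0][1][1], fy);
    float y10 = lerp(c[1][0][0], c[1][1][0], fy);
    float y11 = lerp(c[1][0][1], c[1][1][1], fy);
    float x0  = lerp(y00, y10, fx);                // 2 lerps along X
    float x1  = lerp(y01, y11, fx);
    return lerp(x0, x1, fz);                       // 1 final lerp along Z
}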
Posted by JTippetts on 22 September 2011 - 07:15 AM
I think the main problem that I have with discussions like this is that the discussion takes one (relatively small) aspect of a larger system and analyzes it in a vacuum. You hand-wave away the rest of the content, the "equipment, maps, enemies", by assuming it is there, and then ignore it, when in reality you can't ignore it. Player progression isn't just the one vector called experience level. The game needs to provide rewards in a structured manner along many different vectors. Some of these rewards come from gaining a level. Some from gaining a new piece of equipment. Some from seeing a new spell. Some from encountering a new enemy type with new abilities. Some from finding a cool new area. Still others are visceral: dropping a pile of crates on an enemy with a satisfying crunch and seeing the XP bar tick upward a little bit. In a well-designed game, even the simple act of interacting with the UI should provide rewards, however small they are: responsive clicks when buttons are pushed, sounds when items are used, etc...
Game design is all about building a macro structure to house all of these rewards, and to deliver them on a timely schedule. The most satisfying, long-lived games build this design as a coherent whole, where each piece plays its part. This is why, in my opinion, it is counter-productive to try to theorize and calculate about the power progression in isolation, without the macro structure in place and feedback from the rest of the system contributing to your decisions. Because, really, just about any power progression will work fine as long as it is supported by the rest of the system and as long as its rewards mesh well with the schedule and with the rewards provided by the rest of the system.
Posted by JTippetts on 20 September 2011 - 09:22 PM
Out of curiosity, why is -> indirection unsuitable?
Posted by JTippetts on 20 September 2011 - 05:30 PM
Many people base terrain texturing on simplistic factors such as elevation (low-lying areas in green that graduate to brown then white at the caps of mountains) or, as you tried, the normal (steep areas go to rock, not-steep areas go to grass/dirt/whatever). While this can provide a good start, the artificiality of such a terrain can be quite painfully obvious.
Vegetation density and composition are direct results of various factors: Rainfall/moisture quantity, elevation, temperature, wind exposure, depth of soil, etc... Some of these can be simulated through non-physical means, others can be simulated as a byproduct of other systems, still others can be merely approximated. For example, soil depth can be modeled as a function of ground steepness coupled with erosion-deposition patterns computed by an erosion simulation. Moisture levels can be calculated as a function of elevation, proximity to water bodies, and a simple model of rainfall "shadowing" that calculates the aridity of the land as a gradient with shadowed areas cast in the lee of mountains. Mountains tend to filter off rainfall from storms, leaving the areas in their rain shadows much drier than the areas facing the prevailing storms. Steepness of terrain, too, can affect the water content of a location, leading to swampy flats where water is allowed to pool on ground that has a relatively deep soil bed.
Coloration of the rock is a function of the underlying rock strata in real life. I typically say bollox to simulating that deeply, and just choose a set of rock/stone textures that suit the theme and color scheme of the level. Exposure of rock would be a function of soil depth, in the case of bedrock, and allocation of rock outcroppings, in the case of prop/scenery object placement.
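To make that concrete, here is a purely hypothetical sketch of folding a few of those factors into texture blend weights; every name and weighting in it is invented for illustration:

#include <algorithm>

// Hypothetical inputs, all normalized to [0,1]; these just stand in for
// the factors described above (steepness, moisture, soil depth).
struct TerrainSample { float slope, moisture, soilDepth; };
struct TextureWeights { float rock, dirt, grass, marsh; };

TextureWeights classify(const TerrainSample& s)
{
    // Rock shows where the ground is steep or the soil is thin.
    float rock = std::max(std::clamp((s.slope - 0.5f) * 4.0f, 0.0f, 1.0f),
                          1.0f - s.soilDepth);
    float ground = 1.0f - rock;
    float flat = 1.0f - s.slope;

    TextureWeights w;
    w.rock  = rock;
    w.marsh = ground * s.moisture * flat * flat; // wet, flat, deep-soiled ground pools water
    w.grass = ground * s.moisture - w.marsh;     // the remaining wet ground grows grass
    w.dirt  = ground * (1.0f - s.moisture);      // dry ground stays bare
    return w;                                    // weights sum to 1 by construction
}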
Posted by JTippetts on 18 September 2011 - 09:41 PM
s="for i=1,10,1 do print(i) end" chunk=loadstring(s) chunk() 1 2 3 4 5 6 7 8 9 10
Posted by JTippetts on 16 September 2011 - 10:03 AM
However, if you operate on the assumption that your world is a grid of cubes, and that your camera is fixed in orientation, there are optimizations you can make to the rendering process that a general 3D scene manager cannot. If you find yourself fighting for every last bit of performance your engine can eke out, then perhaps it would behoove you to write your own scene manager. Otherwise, a general 3D engine should be fine for your needs.
Posted by JTippetts on 15 September 2011 - 07:54 AM
In order to get the best performance, you may need to write your own scene manager. Standard engines such as Ogre, Horde3D, etc... include scene management that is built for the general 3D case, but there are plenty of optimizations to be made if you roll your own, given that the camera in an isometric game never changes orientation. This becomes even more of an issue if, as I do, you use anti-aliased and smoothly-blended sprites. In a general 3D engine, these types of sprites require a sort of all objects from back to front, which can be expensive. With a special-case isometric engine, the sorting for most objects can be done as a side-effect of how the visible part of the scene is traversed for rendering, rather than as a separate pass with the overhead of a sorting function.
Given the orthographic projection and the fixed camera, some of this performance overhead is mitigated. However, an isometric back-to-front scene with alpha blending also requires lots and lots of overdraw, which can use up fill-rate in a hurry. If objects are not blended using partial translucency, they can be drawn front-to-back to allow early discarding of fragments based on z-buffer testing, eliminating most of the overdraw, but in my own tests I was unable to use this optimization due to my stubborn insistence on using smoothly anti-aliased renders. After all, the ability to use alpha-blended sprites to generate a smooth scene is half the reason I like isometric games in the first place.
Another performance gain of rolling your own is that you can use a specialized scene traversal computed directly from the camera, rather than relying on a general-case engine's scene traversal, which may require expensive frustum intersection tests against a large spatial graph. In the general 3D case, you can't really make any assumptions about what the camera is looking at, so visibility culling has to be done scene-wide. With an isometric camera, you can make assumptions about the visible set by pre-calculating some volume data at game start, then taking into account the translation of the camera. In my approach, I do this calculation by reverse-projecting the corners of the screen against two planes representing the upper and lower bounds of the Y coordinate in game space. This set of eight points can be used to define a volume or an area, which can then be used to calculate an iterative structure to traverse a portion of the scene graph back-to-front.
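A rough sketch of that corner projection, assuming minimal hand-rolled vector types (a real engine would use its own math library and unproject call):

#include <array>

struct Vec3 { float x, y, z; };
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

struct Ray { Vec3 origin, dir; };  // one ray per screen corner

// Intersect a ray with the horizontal plane y = planeY.
Vec3 hitPlaneY(const Ray& r, float planeY)
{
    float t = (planeY - r.origin.y) / r.dir.y;  // the camera looks down, so dir.y != 0
    return r.origin + r.dir * t;
}

// Four rays through the screen corners, intersected with the upper and lower
// Y bounds of the world, give the eight points bounding the visible volume.
std::array<Vec3, 8> visibleVolume(const std::array<Ray, 4>& cornerRays,
                                  float yMin, float yMax)
{
    std::array<Vec3, 8> pts;
    for (int i = 0; i < 4; ++i) {
        pts[i]     = hitPlaneY(cornerRays[i], yMax);
        pts[i + 4] = hitPlaneY(cornerRays[i], yMin);
    }
    return pts;
}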
As I said earlier, I am currently in the process of evaluating Horde3D for this purpose. So far I've gotten acceptable results, but I am still in the process of putting together a real use-case test of it. Even though the framerate remains acceptable on my crappy lappy, it still represents a rather drastic drop in framerate compared to my custom-built isometric engine. While some aspects of using a general 3D engine are nice (encapsulation of shaders into a material system, abstraction of the underlying renderer, etc...), I strongly advise choosing a 3D library that gives you support for creating a custom-built scene manager. So check the documentation and/or source of any candidate library to see for yourself how easy or hard it might be to do so.
Posted by JTippetts on 12 September 2011 - 07:31 AM
I think that any kind of attack that costs innocent lives is worthy of thorough and impartial investigation into cause and responsibility.
Posted by JTippetts on 08 September 2011 - 10:23 AM
A couple possibly useful links:
Digital Sculpting and Modeling
Specific Links that I Really Like
Posted by JTippetts on 08 September 2011 - 08:53 AM
The old FlipCode fixed timestep loop
The problem, as you have deduced for yourself, is that you can't count on the computer to be exactly as fast or as slow as you need it to be. The two above linked articles detail common approaches to solving the problem of game loops. My own personal preference leans toward fixing the logical framerate and allowing the visual frame rate to run as fast as it can, in the manner of the Flipcode loop.
The trick with this approach is to use a timer class to track elapsed time, and when enough time has elapsed, perform a logic update. Outside of this timed loop, the main loop continues to run, updating input and drawing the frame. In order to achieve a smooth framerate, each visible "thing" in the world retains a memory of its last position/orientation/etc... as well as a memory of its current position/orientation/etc... An interpolant is calculated based on how long it is until the next logic update, and this is used to smoothly interpolate between last and current state for rendering. The above links should explain the concepts of game loops pretty well.
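Here is a minimal sketch of such a loop (the input/logic/render calls are placeholder stubs, and the 30 Hz logic rate is just an example):

#include <chrono>

// Placeholder stubs, not from any real engine.
void processInput() {}
void updateLogic() {}               // copy current state to "last", then step the simulation
void render(double interpolant) {}  // draw each object between last and current state

int main()
{
    using clock = std::chrono::steady_clock;
    const std::chrono::duration<double> logicStep(1.0 / 30.0); // 30 logic updates per second

    auto previous = clock::now();
    std::chrono::duration<double> accumulator(0.0);

    while (true) {  // a real loop would check for a quit event
        auto now = clock::now();
        accumulator += now - previous;
        previous = now;

        processInput();

        // Run fixed-size logic steps until we've caught up with real time.
        while (accumulator >= logicStep) {
            updateLogic();
            accumulator -= logicStep;
        }

        // How far we are between the previous logic state and the next one.
        double alpha = accumulator / logicStep;
        render(alpha);
    }
}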
Posted by JTippetts on 04 September 2011 - 12:47 AM
So, I want to make a cliff. I begin the process by roughing in the basic shape and creating a crude, relatively low poly form. Something like this:
The above is rendered flat-shaded to make clear how low-poly it is. This is approximately the level of detail I can see in your screenshots. The basic shape of the cliff is there. I like to start with basic primitives (planes, cubes, etc...) and apply judicious use of the Multiresolution modifier in Blender, as well as Blender's Sculpt mode, to rough out the shape.
(Note on Blender's Multiresolution modifier: If you aren't familiar with Blender, the Multires modifier is a modifier applied to a mesh object that allows you to add levels of subdivision to the mesh, as many as you require. Sculpting can be performed on any of the levels as desired. I make heavy use of it.)
Now, at this point the beginner is tempted to slap a texture on there and call it done. Many will scrounge around the internet's many "free texture" sites, find something that looks like stone, and just drape it on. This appears to be the method you used in the above. For the sake of illustration, I've done just that: I scrounged around on my drive, found a texture that I snipped from a photo of a lava flow I took once, and slapped it on:
As far as I can tell, that's about what you have done. Trust me, plenty of people have done it this way, including myself. However, if you ask me, it just looks bad. This is the problem, in my opinion, with using "real world" textures. The shading and detail incorporated into the texture are appropriate for the cliff or rock that you photographed, but have no correlation whatsoever with the underlying shape of the rock you are modeling. Sometimes this method works out well as a sort of happy accident, but we don't really want to rely on happy accidents to get a good result. So rather than derive surface texture from some slapped-on image, we'll derive it instead from the cliff we are trying to model. In order to do that, we need more detail. Lots and lots more detail.
So, we throw away the lava texture, go back to the cliff's Multiresolution options, and add several more levels of subdivision. We don't care if the mesh ends up super dense; in fact that's what we want. This high-resolution mesh won't be the object that we see in the game, so just go crazy with it. If you have a powerful enough computer, you can subdivide pretty deep and do lots of intricate detail. There will come a point when further subdivision won't make any visible difference, though. Once we've turned up the detail, we'll go back into sculpt mode and go crazy. We'll add bumps, fissures, cracks, and so forth.
Now, even though that cliff is colored a simple grey, to me it looks a lot more "cliff-like" than the one with the lava texture. The areas of shadow correspond to detail features on the cliff, rather than detail features from some random lava outcrop in Flagstaff, Arizona.
At this point, it is time to paint some color and bump information on this thing.
For this stage, I commonly use "real world" colormap textures. The real world provides plenty of variety in coloration, etc... However, I want to avoid textures, like the lava texture, that have lots of bright/dark shadow detail. I don't want a whole lot of non-correlated shadow detail polluting the texture, so instead I'll try to find textures that have good color and no low-frequency shading detail. If I can't find a texture I like, then often I will use a procedural texture. Either way, I can usually get a pretty good result.
So that gives us a little color. Now we want to add surface bump.
It is usually not feasible to physically model every bump and every tiny detail, especially on large cliffs. So rather than try, we apply a bump-map. A bump-map, of course, is just a texture that affects the rendered normal of the mesh. We can obtain bump information from real world sources or, again, from procedural sources. Bump maps are applied relative to the surface to which they are mapped, so they appear to be features of the cliff, with correlation to the underlying shape and form. Here is the cliff with the former lava texture applied as a bump map:
Just that easily, we now have a surface that looks a whole lot like rock. Applying the lava texture as bump rather than as color means that the information encoded in the texture reads as features of the underlying shape.
Now, this is all well and good, but how do we make use of this? Certainly we can't use that high-detail model in a game. The answer is "baking." We can bake detail from this high-resolution version onto the low-resolution version. Basically, what baking does is extract information (color, ambient occlusion shading, and normal are the ones that interest us) from the high-resolution version and "bake" or copy that information into a set of textures that are applied to the low-resolution version. In order to do this, we can duplicate the high-res cliff object, then on the duplicate we can reduce the Multiresolution modifier to the desired final detail level of the cliff object, and apply the modifier to get the final mesh. Then we can use the Bake menu in Blender to bake the details. I won't really go into too much detail here; to learn more, you can read this article ( http://www.gamedev.net/blog/33/entry-2250095-indie-game-graphics-on-the-cheap/ ) that I wrote, which pertains to 3D characters but which is still relevant. After baking, we end up with the following texture maps:
Typically, to save texture space, you will combine the AO and Color maps via multiplication:
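The multiply itself is simple per-channel arithmetic; a sketch assuming 8-bit channels, with types and layout invented for illustration (in practice you'd do this with a multiply layer in your paint program or in Blender's compositor):

#include <cstdint>
#include <vector>

// Combine an RGB color map with a single-channel AO map of the same size.
std::vector<uint8_t> combineColorAO(const std::vector<uint8_t>& rgb,  // 3 bytes per pixel
                                    const std::vector<uint8_t>& ao)   // 1 byte per pixel
{
    std::vector<uint8_t> out(rgb.size());
    for (size_t px = 0; px < ao.size(); ++px)
        for (size_t c = 0; c < 3; ++c)
            // Scale each color channel by occlusion, keeping the [0,255] range.
            out[px * 3 + c] = static_cast<uint8_t>(rgb[px * 3 + c] * ao[px] / 255);
    return out;
}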
And that's basically it. The baked texture maps can now be applied to the low-poly model. The Color/AO map contributes color and ambient occlusion shading, and the normal map provides the detail that is missing in the low-poly version of the mesh:
In the above we have the low-poly version with the color and normal maps applied, sitting in front of the high poly version. You can see that even though the mesh is low-resolution, many of the missing details are supplied by the color and normal maps. And the detail that is provided is consistent with the shape of the cliff, rather than just being detail applied willy-nilly by some seat-of-our-pants texture. Now, this example was done hurriedly using very low-resolution textures, and put together in the space of about 20 minutes or so. If the time is taken to use appropriate high-detail textures, and to model the details of the cliff more carefully, the results can be quite amazing.