
# FLeBlanc

Member Since 10 Sep 2011
Offline Last Active Mar 21 2014 04:56 PM

### #4979332 [SOLVED] 2D coordinate rotation in 3D space

Posted by on 12 September 2012 - 09:24 AM

Rotations are performed around an axis. 2D rotation is a simplified form of 3D rotation, in which the axis of rotation is a conceptual axis extending normal to the 2D plane. That is, if your 2D plane is the XY plane, the axis of rotation is the Z axis. Z coordinates are ignored, and the math simplifies to a single 2D transform.
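As a concrete illustration (a minimal Python sketch, not tied to any particular engine or library), rotating a point about that conceptual Z axis is just the standard 2D rotation:

```python
import math

def rotate_2d(x, y, angle):
    """Rotate point (x, y) about the origin (the conceptual Z axis) by `angle` radians."""
    c, s = math.cos(angle), math.sin(angle)
    return (x * c - y * s, x * s + y * c)

# Rotating (1, 0) by 90 degrees lands on (0, 1), up to float rounding.
print(rotate_2d(1.0, 0.0, math.pi / 2))
```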

In 3D, it gets a bit more complicated because the rotation can be performed around any arbitrary axis. Using axis-angle rotation, you simply need to establish the axis around which you want to rotate, and specify the angle of rotation.
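A sketch of axis-angle rotation via Rodrigues' formula, for illustration only (the function name and signature are my own, not any particular library's API):

```python
import math

def rotate_axis_angle(v, axis, angle):
    """Rotate vector v about a unit-length axis by `angle` radians (Rodrigues' formula)."""
    ax, ay, az = axis
    vx, vy, vz = v
    c, s = math.cos(angle), math.sin(angle)
    dot = ax * vx + ay * vy + az * vz                      # axis . v
    cross = (ay * vz - az * vy,                             # axis x v
             az * vx - ax * vz,
             ax * vy - ay * vx)
    return tuple(v_i * c + cr_i * s + a_i * dot * (1 - c)
                 for v_i, cr_i, a_i in zip(v, cross, axis))

# With axis (0, 0, 1) this reduces to the familiar 2D rotation in the XY plane.
print(rotate_axis_angle((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2))
```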

### #4978003 What is the guy called with all the money? boss?

Posted by on 08 September 2012 - 09:44 AM

In short, if you want to be the money man, be the money man. Invest in a good team, let them do what they do best, and sit back to reap the profits. But don't try to be the designer, because you lack the skills and your interference will harm the project. If you do want to be the designer, then listen to the feedback of those on your team with more experience, and don't arbitrarily supersede them because you think your ideas are somehow better than theirs; maybe you will learn something.

### #4977982 Perspective to Ortho Collision fails.

Posted by on 08 September 2012 - 08:22 AM

It's not a matter of converting ray position to screen space. It's a matter of generating a ray for every screen position. For perspective, imagine the rays as streams of water squirting from a single nozzle and fanning out in a cone shape. The rays start from a common point and diverge. For orthographic, imagine the rays as coming from one of those fancy shower heads with hundreds of little nozzles all in a neat array. All the streams are parallel to each other, and the resulting spray shape is more cylindrical than conic. The rays never start from the same point, and in fact never cross each other (barring gravity, of course). It's the same idea with an ortho matrix. The idea of a single ray start position really makes no sense for an ortho projection. If you move your camera straight back, things on the screen remain exactly the same; there is no dwindling in size with distance, as there would be with rays that diverge from a single point.
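To make the contrast concrete, here is a rough Python sketch of per-pixel ray generation for both projections; all names and parameters (`fov_scale`, the view volume sizes) are illustrative assumptions, not anyone's actual API:

```python
def perspective_ray(px, py, width, height, eye, fov_scale):
    # Perspective: every ray starts at the eye; only the direction varies per pixel.
    nx = (2.0 * px / width - 1.0) * fov_scale    # normalized device x in [-1, 1]
    ny = (1.0 - 2.0 * py / height) * fov_scale   # normalized device y, flipped
    return eye, (nx, ny, -1.0)

def ortho_ray(px, py, width, height, view_w, view_h):
    # Orthographic: every ray shares one direction; only the origin varies per pixel.
    ox = (px / width - 0.5) * view_w
    oy = (0.5 - py / height) * view_h
    return (ox, oy, 0.0), (0.0, 0.0, -1.0)
```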

### #4977839 How to use Perlin noise in terrain generation

Posted by on 07 September 2012 - 08:33 PM

1D noise is useful if you want to do just simple rolling terrain. 2D is useful if you want to do things like caves and overhangs. Again, I refer you to this article written by JTippetts. It's pretty interesting, and I think you can do some cool stuff with it. I hope he doesn't mind me linking to his images, but I think they really help to make the point. (JTippetts, if you don't want me linking to them, let me know and I'll remove them.) The premise of it is this: a function differentiates between ground and sky (anything below a certain level is ground, anything above is sky). Normally this threshold is a flat line at a certain level (Y=0, for example). A 1D noise function is used to push the ground level up or down by a certain amount.
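The ground/sky threshold idea can be sketched in a few lines; `fake_noise_1d` below is a stand-in for a real 1D noise function, since the exact library doesn't matter for the premise:

```python
import math

def fake_noise_1d(x):
    # Stand-in for a real 1D noise function; any smooth function works for the demo.
    return math.sin(x * 0.7) * 0.5 + math.sin(x * 1.9) * 0.25

def is_ground(x, y, base_level=0.0, amplitude=4.0):
    # Solid if the point lies below the ground line, which the noise pushes up or down.
    return y < base_level + fake_noise_1d(x) * amplitude
```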

Here is the basic ground/sky:

The white part (which is hard to see, of course, on the white background here) is the ground.

The article talks about using a 1D function to perturb the ground, which would end up looking like this:

It also talks about using a 2D function to perturb it, which would look like this:

The second one, imo, is "cooler" looking, but it does result in floating islands.

The article also talks about using a second function to impose caves upon the terrain:

It's really an interesting article, and while it is written sort of centric to his own library, the idea of it should be usable with any library.

### #4977730 How to use Perlin noise in terrain generation

Posted by on 07 September 2012 - 11:09 AM

I wouldn't say that's the larger problem. It doesn't matter how many layers of detail you stack on; unless you do your mapping correctly, all you're adding is ever-smaller detail that still gets lost in the mapping. The first layer defines the overall feature size; successive layers add detail. Fix the mapping first, then add more detail.

### #4977701 How to use Perlin noise in terrain generation

Posted by on 07 September 2012 - 10:29 AM

What you are doing is a common mistake:

```
for (int x = 0; x < world.GetLength(0) - 1; x++)
{
    for (int y = 0; y < world.GetLength(1) - 1; y++)
    {
        diamond[x, y] = Noise.Generate(x, y);
    }
}
```

Perlin noise is generated by interpolating values that are generated at integer boundaries. If you sample the noise at integer coordinates, then, you won't get the smooth in-between values you are expecting; what you get instead is basically white noise, as you have discovered.

Something like this might get you better results:
```
float frequency = 1.0f / (float)world.GetLength(0);
for (int x = 0; x < world.GetLength(0) - 1; x++)
{
    for (int y = 0; y < world.GetLength(1) - 1; y++)
    {
        diamond[x, y] = Noise.Generate((float)x * frequency, (float)y * frequency);
    }
}
```

You do need to tweak the exact mapping by playing with the frequency value.

A good rule of thumb for choosing the frequency: a Perlin function has roughly one feature (a hill or a valley) per unit of input. So if you want a single hill across the whole map, scale your coordinates so the map spans one unit, i.e. frequency = 1/worldSize as above. If you want more hills, use a larger frequency.

As far as using noise in a sane manner to generate terrain, you might check out this article.

### #4976953 Worried about how many assets we need

Posted by on 05 September 2012 - 01:43 PM

One thing you could do is contract for the models+rigging+animations alone, then use Blender or another 3D app to render out your sprite frames yourself. You can construct a stage (camera in the isometric position, lighting rig set up for consistency across models, etc...) then import the models and render out the frames, spin the camera, render more, etc... These kinds of jobs can be highly batched and automated, so that the grunt work is done by the renderer. But it does cost you a bit of time setting up the stage.

You can duplicate sprites or make things similar to one another, and in fact this is a common trick. Particularly, you can have a single model and multiple variants of the texture mapped to it, or 'skins'. Merely by modifying the coloration of the skin texture you can create, for example, creature variants. For your giant above, you could alter the hue to have a giant with a more grayish skin, another with a more bluish skin for a frost giant. And so forth.
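The skin-variant trick is essentially a hue shift over the texture's colors. A minimal Python sketch using the standard `colorsys` module (the palette values here are made up for illustration):

```python
import colorsys

def tint_palette(palette_rgb, hue_shift):
    """Shift the hue of every palette entry; RGB components are in [0, 1]."""
    out = []
    for r, g, b in palette_rgb:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        out.append(colorsys.hsv_to_rgb((h + hue_shift) % 1.0, s, v))
    return out

# A greenish giant skin shifted toward blue for a "frost giant" variant.
print(tint_palette([(0.3, 0.6, 0.3)], 0.35))
```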

You might also consider contracting for "monster kits". That is, kits of base model + equipment for a given monster archetype that allows for variants, paper-doll fashion. Consider a giant wielding an axe as opposed to one hurling boulders. Same base model, different animations and poses, and equipment that can be mixed/matched.

A final solution might be to just use 3D models directly in your game. The transition to 3D is not too difficult, if the game is done right, and you save yourself a crap-ton of storage and graphics RAM usage by eschewing those thousands of frames in favor of a single model+texture+rig+animation set. You do lose some nice things (easy anti-aliasing using alpha blending, for example) but you gain quite a bit in return: vastly reduced video RAM usage, not limited to 8 facing directions, elimination of the time-sink of rendering nearly 5000 frames of sprites, etc...

### #4969928 I'd like to see games made around the following plot

Posted by on 15 August 2012 - 02:10 PM

Yeah I guess not. I guess this series would just make the NRA grab onto their guns even tighter. The natural response of NRA affiliated persons to violence is to load their guns, after all. I thought it might make them stop and think, but I guess I misjudged their rationale for wanting guns. It's not just to defend against crazies... it's to defend against their critics as well.

I'm a member of the NRA. I own three guns: a handgun, a rifle and a shotgun. I hunt elk, when I get lucky enough to draw a tag and have the time to do so. I have never had the inclination, desire, or even random thought to draw a weapon on another human being. I could conceivably do so in defense of self or family, but such a thing would not be done lightly, if at all. I have far too much respect for the potential destructive power of firearms, and the necessity of using them responsibly. But, yeah, all of us NRA members are crazies. Just a bunch of dumb rednecks, just itching for an excuse to shoot somebody. You, sir, seem to have fallen victim to propaganda, and I truly feel sorry for your lack of critical thinking skills or basic ability to reason for yourself. Sorry you're dumb, bro. Maybe you'll get better.

### #4969926 Designing a 2D isometric engine with OpenGL

Posted by on 15 August 2012 - 02:02 PM

### #4969840 When am I qualified?

Posted by on 15 August 2012 - 08:24 AM

Get a job. If you don't get fired in the first couple of months, you're probably qualified for that particular job. There really aren't any hard and fast rules for this sort of thing.

### #4969608 Orthographic matrix won't display textured quad

Posted by on 14 August 2012 - 03:53 PM

Something that people always seem to forget is that there is a 1:1 relationship between world units and screen units (pixels) when you do an orthographic transform, if the ortho bounds are the same as the screen resolution. What this means is that if you draw an object that is 1x1x1 unit in size, it will appear approximately 1 pixel in size on the screen, regardless of the distance from the camera. In the code you present above, you are drawing an object that is 0.2x0.2x0.2 in size. That means, onscreen it is going to appear to be less than one pixel in size. Make that bad boy bigger.
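The 1:1 relationship is easy to verify: with the ortho bounds equal to the screen resolution, one world unit maps to exactly one pixel. A tiny sketch (the function is illustrative, not any API):

```python
def ortho_scale(left, right, bottom, top, screen_w, screen_h):
    """Pixels covered by one world unit under an orthographic projection."""
    px_per_unit_x = screen_w / (right - left)
    px_per_unit_y = screen_h / (top - bottom)
    return px_per_unit_x, px_per_unit_y

# Bounds matching an 800x600 screen: one world unit spans exactly one pixel,
# so a 0.2-unit quad covers only a fifth of a pixel on screen.
print(ortho_scale(0, 800, 0, 600, 800, 600))
```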

### #4968102 Texture Randomization

Posted by on 10 August 2012 - 09:30 AM

The main source of patterns, or artifacts, in Perlin noise comes from the fact that the noise is generated on a grid lattice. Perlin specifically implemented his gradient and simplex noise variants to try to reduce the occurrence of grid artifacts. Nevertheless, the grid structure is still there. However, there are a few tricks you can do to help eliminate them.

1) Use a lacunarity that is not an integer. In most implementations, the term lacunarity is used to describe how the successive octaves of a fractal line up. For instance, a lacunarity of 2 (commonly used) means that each octave will be 2x the frequency of the preceding one. Integral lacunarity values will cause the grid boundaries of successive octaves to line up exactly, exacerbating the grid artifacts at each level. Here is an example of ridged multi-fractal noise. The first half of the image uses a lacunarity of 2, the second half uses a lacunarity of 2.333:

You can see that the change to lacunarity did help a little bit, but there are still artifacts. To take it further, you could alter the lacunarity at each step. However, this will only reduce the artifacts. It won't eliminate them completely.

2) Apply a rotation to the octaves. Each time an octave layer is sampled, rotate the input coordinates by some amount first. Each octave should have a different rotation (or you'll just end up with the lattice problem again). This is a more effective method than merely changing the lacunarity, but it may have some repercussions as far as the character of the noise itself. Here is an example of the same ridged noise as above, but with randomized octave rotations:

You can see that the grid artifacts are basically gone.
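Both tricks, non-integer lacunarity and per-octave domain rotation, slot into an ordinary fractal sum. The sketch below uses a stand-in noise function (`fake_noise_2d`), since the exact noise implementation doesn't matter for the structure:

```python
import math

def fake_noise_2d(x, y):
    # Stand-in for a real gradient-noise function.
    return math.sin(x * 1.3 + math.cos(y * 0.7)) * math.cos(y * 1.1)

def fbm(x, y, octaves=4, lacunarity=2.333, gain=0.5, rotations=None):
    """Fractal sum with non-integer lacunarity and an optional per-octave rotation."""
    total, freq, amp = 0.0, 1.0, 1.0
    for i in range(octaves):
        angle = rotations[i] if rotations else 0.0
        c, s = math.cos(angle), math.sin(angle)
        rx = x * c - y * s          # rotate the input coordinates for this octave
        ry = x * s + y * c
        total += fake_noise_2d(rx * freq, ry * freq) * amp
        freq *= lacunarity          # non-integer: octave lattices no longer line up
        amp *= gain
    return total
```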

Another source of artifacts in using noise to generate textures is in the algorithm that is used to generate a seamless tile from an existing image. There are typically three commonly used ways people generate tiling textures from noise, and two of the ways are well implemented in the Gimp.

1) Alter the grid lattice of values under-pinning the noise function so that each octave wraps around. This is how Gimp's Seamless Noise filter works. Each octave of the function is itself a tiling pattern. This works well for noise that has integral lacunarity and no domain rotation, and not so well for other noise, so I have found that it doesn't work so great when used in conjunction with the above techniques for reducing artifacts.

2) Perform a 4-way blend of noise patterns. Given a chunk of non-tiling noise, you can conceptually split it into 4 quadrants, and perform a specialized blend using gradients so that edges blend from one quadrant to the next, and the result will tile. There is no inherent support for this in Gimp, but it can be done using gradient layers and Multiplicative layer blending.

3) Duplicate your layer of noise some number of times and offset the duplicates, then perform a blend between the duplicates to eliminate seams. This is what Gimp's Make Seamless filter does. The main drawback of this is that characteristics of the texture pattern are repeated several times in the same texture, which is a no-no if you want to eliminate repetition.

(2) and (3) have additional drawbacks in that the character of the function is altered by the blending. If you have, for example, a high-contrast texture consisting of black and white, the blending process will corrupt the contrast, giving you a mottled pattern of blended grays instead. Worse, the blending will occur more in the center of the image, and less at the edges. Here is an example using Gimp's Make Seamless. On the left is the original texture, on the right the seamless version:

See how the blend caused the nice contrasting texture to turn ugly? Not only did it corrupt our nice blacks and whites, it introduced "pinch" patterns where the seam blending was done. These blending techniques are suitable only for lower-contrast textures where the additional artifacts will be less noticeable.

Now, our very own JTippetts here at gamedev has done quite a bit with noise, including implementing his own library. Some time back he wrote a journal post about seamless noise that includes a discussion of these very drawbacks. In his article and his noise library, he proposes a solution to the blending problem for seamless high-contrast functions by using higher dimensionality of noise functions and sampling them in "circles" to generate a seamless pattern. The technique works, and works very well, for high contrast textures (kinda hurts my brain, though); however it has the drawback of requiring, for example, 4D noise in order to generate a seamlessly tiling 2D texture. (And 6D noise, if you want to generate a seamlessly tiling 3D texture. Yikes.)
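The circle-sampling idea can be sketched directly: map each 2D texture axis onto a circle in a higher-dimensional noise domain, so walking off one edge walks back onto the other. Here `fake_noise_4d` stands in for a real 4D noise function; the radii control feature scale:

```python
import math

def fake_noise_4d(x, y, z, w):
    # Stand-in for a real 4D noise function.
    return math.sin(x + 1.7 * y) * math.cos(z - 0.6 * w)

def seamless_2d(u, v, r1=1.0, r2=1.0):
    """Sample 4D noise on two circles so the result tiles in both u and v (u, v in [0, 1))."""
    a = 2.0 * math.pi * u
    b = 2.0 * math.pi * v
    return fake_noise_4d(r1 * math.cos(a), r1 * math.sin(a),
                         r2 * math.cos(b), r2 * math.sin(b))
```

Because u = 0 and u = 1 land on the same point of the circle, the texture wraps with no blending and no loss of contrast, which is exactly what the blend-based methods cannot offer.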

Of course, his technique isn't implemented in Gimp or in any image editing package, to my knowledge. The higher dimensionality requirement might make it a bit more intensive to implement in a shader as well.

So by using a combination of techniques (domain rotation for octave inputs, altering lacunarity, using higher-order functions to generate the seamless mappings) it is possible to eliminate any grid artifacts inherent in the texture itself. The only thing left will be hiding the repeating pattern generated by tiling the texture multiple times.

For these artifacts, there are solutions such as Wang tiles and aperiodic tiling that can be used. Generating Wang tiles is kind of difficult, and works best with the blending methods of seamless tiling. You generate multiple variants of a texture, then use gradient blending to blend in the proper edge given a particular edge pattern. Of course, this would result in the same blending artifacts as the seamless algorithm does, so it would not be very good for high-contrast textures. This means that all the work of using JTippetts' method is just thrown out the window anyway. I still haven't wrapped my head around his technique well enough to figure out if there is any way you could use his method to generate Wang tiles directly. It kind of fries my brain just thinking about it. I have a hard time thinking in coordinate spaces higher than 3D.

Wang tiles can be used to tile a surface aperiodically, meaning that there will be no recurring macro patterns. However, you are still limited by the number of tiles in your Wang set, so given a large enough visible area, the user is probably still going to see that you reuse tiles. But still, it seems to be about the best you can get with a tiling system.

Now, if you implement shader-based in-place procedural materials, there is no requirement of actually generating tiles (seamless or otherwise). However, other limitations arise in this case. By using complex sequences of noise functions, you can generate some very elaborate texture patterns. For an example of this, I again refer you to JTippetts' journal. At that link you will see a mosaic of grayscale textures he generated via his noise library. The problem is, there are some patterns you can generate using an offline library like that, that would be very difficult to generate within the constraints of a shader environment. It could be done in most cases, I think, but the processing overhead could be quite drastic. In my experience, shader-based procedural textures tend to be rather simplistic indeed. As hardware becomes more powerful, though, this option becomes more and more viable.

### #4967134 Using Visual C++ 6

Posted by on 07 August 2012 - 02:47 PM

The point isn't if your code compiles, it's whether or not standard code will compile on your compiler. Trust me, there is plenty of perfectly standard code that VC6 will choke on like a baby eating a jawbreaker. You're using an obsolete and provably broken tool. If you want to convince yourself that your reasons for using it aren't stupid, that's fine. Just don't expect the rest of us to think you're anything other than an idiot.

### #4967106 Creating detailed environments with tiles, suggestions/advice

Posted by on 07 August 2012 - 01:16 PM

One thing to remember that is important not only with tiled backgrounds, but with game worlds in general, is that the most important part of the game is not the background, but the characters upon it. The player, the enemies, etc... The background is merely a backdrop. In light of that fact, it is best that the backdrop not overwhelm the foreground objects, be it by high contrast, clashing colors, etc... The main problem I see with your tiles is that they are very noisy and very high-contrast. Any object viewed against them is going to sort of disappear.

For the most part, you should use "cooler" colors for the background, "warmer" colors for the foreground objects. You should keep contrasts on the background relatively low. Keep the colors slightly desaturated in comparison to the foreground. Also, you want to avoid using the upper and lower ends of the value scale for very common usage, and reserve those extremes for highlights and details. By this, what I mean is that you shouldn't make your floor such a huge chunk of very dark black. Very dark black should be reserved for shadows and lighting, rather than actual coloration. Keep your tones more in the middle ranges of the value scale, and by contrast the shadows will be more emphasized. For example, your pillar sprite has a soft shadow, but when the sprite is placed against the very dark background, the shadow is almost completely unnoticeable. Shading is VERY important for properly conveying shape and form, but against a black background the nuances of shadow are lost.

Another problem I see is that your stone floor simply doesn't look like stone. It looks more like shiny bits of metal floating in blackness. The pillars seem to be made of an entirely different material altogether. In reality, if you were making a dungeon or castle, you would use a lot of the same stone for the floor and the supports. The exception would be for things meant to be decorative, in which case you would choose a material meant to complement the base material: colorful white marble against gray stone, etc...

Now, your lava tiles can be warm and bright; if viewed alongside a more muted stone, it would make them "pop" more than they currently do. But the problem with the torches and lava is that their emitted light doesn't affect the scene at all. To see what I mean, take a look at this screenshot from Diablo 2:

You see how the light of the lava casts a reddish hue on the stone immediately adjacent to it? Some lighting highlights applied to the tiles around a lava pool in your tileset would help immensely. Similarly with your lights. Whether you accomplish this through actual lighting, or whether you accomplish it with specially lit tiles, the results would definitely be worth the effort.

A final criticism is that I find the viewing angle of the column to be jarring. The column is viewed at a slightly oblique angle, showing the front side, but the floor is strictly top down. Perhaps you could construct some tiles to show the sides of the pit the lava is in, to maintain consistency.

### #4967055 2D terrain with elevation

Posted by on 07 August 2012 - 10:12 AM

What really bites you in the ass when faking 3D with strictly 2D tiles is permutations. Regarding the Stronghold video posted above: I never played the game, don't know anything about it, but doing something like that with strictly 2D tiles would require a lot of tiny little tiles.

Here are the issues of 2D tiles:

1) Terrain transitions. Going from Grass to Dirt, Dirt to Rocky, Grass to Rocky, Dirt to Sand, Grass to Sand, Rocky to Sand, etc... Requires a lot of little transitional tiles, or a layering scheme of alpha-blended transitional boundaries.

2) Elevation transitions. Going from Level 0 to Level 1, Level 1 to Level 2, Level 0 to Level 2, etc... Requires a lot of tile sets. Many games use different shades of texture coloring to represent the elevation tiers, out of necessity. This helps the user differentiate between grassy ground on Level 0 and grassy ground on Level 2. It provides a visual cue to help the player play the game, but it adds even more tile permutations, because now you have to take into account the elevation slopes, the different elevation colors, and the transitions from (1).

3) Smooth elevation vs. Cliff elevation. Some terrain is smoothly sloped, some is sharp enough to be a cliff, and some terrain is transitional from smooth to cliff. You can see all of it in the Stronghold video posted above. So now, not only do you have to worry about the permutations from the previous categories, but now you have to factor in transitions from all those permutations to Cliff terrain.

Handling this kind of complexity requires either a vast amount of time spent by the artists juggling the permutations, or a set of specially-designed tools that can handle some of the grunt work. Either way, though, it requires a considerable time investment to get right. It becomes a quite bewildering mess when you try to tackle all of it, which is why so many tile-based games simplify things. If Stronghold was done using pure 2D tiles, kudos to them for being one of the more complex 2D tile schemes I've ever seen. However...

All of this is made almost absurdly easy (by comparison) if you only make the change to full 3D. All of these weird permutation problems go away. So many exploding permutations can be reduced to a simple set of primitives, with a bit of judicious blending thrown in. You don't need to make it look 3D, though. You can use an orthographic camera, keep it tile-based, etc... You can make it look just like those old school games. It's just that modern 3D graphics pipelines provide so many additional tools that old school 2D formats didn't have access to, tools that vastly simplify the complexity and make it more like something that a lone wolf developer could easily handle on his own, without spending hours and hours juggling tile permutations.

There is a reason that all of those old popular 2D franchises (Age of Empires, Diablo, Starcraft, etc...) eventually ditched their 2D systems as soon as they could and went full 3D without ever looking back.
