
Member Since 10 Sep 2011
Offline Last Active Mar 21 2014 04:56 PM

#4877688 Questions on Noise

Posted by on 27 October 2011 - 03:32 PM

JTippetts here at gamedev wrote some stuff about this that I found while browsing around a little while ago:


I haven't played with Perlin noise a whole lot until recently, so I'm not really an expert, but from what I gather from those articles, the scaling means scaling the domain in one coordinate direction. For example, if you call a 2D Perlin noise function with an (X,Y) coordinate but scale the Y coordinate by 0, you end up with essentially a 1D function (which is what JTippetts is doing in the first linked article for 2D terrains). You can use small but non-zero scalings of Y to get a function that has some Y-axis variation; by keeping the scale small, that variation is limited and thus less likely to result in floating geometry.
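To make the idea concrete, here is a minimal Python sketch (not code from the linked articles) using simple value noise as a stand-in for Perlin noise. The names `value_noise_2d` and `scaled_noise` are made up for illustration; the point is only that scaling the Y input by 0 before sampling collapses the 2D function to an effectively 1D one:

```python
import math

def hash01(ix, iy):
    # Deterministic pseudo-random value in [0, 1) for an integer lattice point.
    n = (ix * 374761393 + iy * 668265263) & 0xFFFFFFFF
    n = ((n ^ (n >> 13)) * 1274126177) & 0xFFFFFFFF
    return (n & 0xFFFF) / 65536.0

def smoothstep(t):
    return t * t * (3.0 - 2.0 * t)

def value_noise_2d(x, y):
    # Bilinearly interpolated value noise (a stand-in for Perlin noise).
    ix, iy = math.floor(x), math.floor(y)
    fx, fy = x - ix, y - iy
    sx, sy = smoothstep(fx), smoothstep(fy)
    top = hash01(ix, iy) + sx * (hash01(ix + 1, iy) - hash01(ix, iy))
    bot = hash01(ix, iy + 1) + sx * (hash01(ix + 1, iy + 1) - hash01(ix, iy + 1))
    return top + sy * (bot - top)

def scaled_noise(x, y, y_scale):
    # Scaling the Y input squashes variation along that axis;
    # y_scale == 0 collapses the function to effectively 1D.
    return value_noise_2d(x, y * y_scale)
```

With `y_scale = 0`, the result no longer depends on Y at all; with a small `y_scale` like 0.1, Y still matters, but only slowly.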

#4877242 Why Downsampling : 4 8 16 32

Posted by on 26 October 2011 - 09:52 AM

Likely it's because power-of-two downsampling is vastly simpler (and thus less computationally expensive) than arbitrary downsampling. If you, say, halve the size of an image via downsampling, then each pixel in the destination is an evenly weighted average of a nice, tidy little 2x2 neighborhood of pixels in the source (if using linear sampling). Arbitrary downsampling requires calculating weighting factors for sometimes odd-shaped neighborhoods in the source, because destination pixels don't always fall on convenient sampling locations in the source image.
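A quick sketch of the halving case in plain Python (grayscale image as a list of rows; the function name is just for illustration) shows why the 2x2 case is so tidy:

```python
def downsample_half(img):
    # Halve an image by averaging each 2x2 block (a box filter).
    # Dimensions must be even -- which power-of-two sizes guarantee.
    h, w = len(img), len(img[0])
    assert h % 2 == 0 and w % 2 == 0
    out = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            # Each destination pixel is an evenly weighted sum of a tidy
            # 2x2 source neighborhood -- no fractional pixel coverage needed.
            row.append((img[y][x] + img[y][x + 1] +
                        img[y + 1][x] + img[y + 1][x + 1]) / 4.0)
        out.append(row)
    return out
```

For an arbitrary ratio like 3:2, destination pixels straddle source pixels, so each one needs its own fractional coverage weights; none of that bookkeeping appears above.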

#4876355 GLSL learning need Help

Posted by on 24 October 2011 - 10:54 AM

Technicality is just part of game programming, I'm afraid. Admittedly, though, shader programming can be a little complex to wrap your head around at first. However, once you understand how shaders work and how everything fits together, shader-based programming is vastly simpler than trying to do everything you want via the fixed-function pipeline.

Understand that the basic structure of a shader-based pipeline goes as such:

Vertex shader->Geometry shader->Fragment shader

Geometry shaders are still relatively new and supported only on newer hardware. Unless you have a need to perform dynamic generation or amplification of geometry via the graphics hardware, you don't need to worry about them (And I would say that, given the apparent development of your skill-set, trying to implement geometry shaders right now would be an error).

So, removing geometry shaders from the equation, you are left with vertex and fragment shaders.

Vertex shaders accept streams of input in the form of vertex definitions. You know, your vertex positions, normals, texture coordinates, etc... The shader is a program that operates on these vertices in some way. Typical operations include transforming them by the model/view/projection matrix chain, transforming texture coordinates via texture matrices, etc... Other calculations can be performed, such as vertex-based lighting, etc... The output of a vertex shader includes data that is to be interpolated by the rasterizer in order to generate the inputs for the fragment shader.
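The vertex stage can be sketched as a plain function per vertex. This is illustrative Python, not real GLSL (a real vertex shader runs on the GPU), and the names `vertex_shader` and `mvp` are assumptions for the example:

```python
def mat4_mul_vec4(m, v):
    # Multiply a 4x4 matrix (row-major list of rows) by a 4-component vector.
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def vertex_shader(position, uv, mvp):
    # Transform one vertex by the combined model/view/projection matrix
    # and pass the texture coordinates through for interpolation.
    clip_pos = mat4_mul_vec4(mvp, position)
    return {"clip_pos": clip_pos, "uv": uv}
```

The returned values are exactly what the rasterizer then interpolates across the triangle to produce per-fragment inputs.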

The fragment shader operates on fragments, which roughly correspond to "pixels", hence the alternative name of pixel shader. It accepts input as the interpolated values output by the vertex shader, including interpolated vertex positions, interpolated vertex colors, interpolated texture coordinates, etc... These inputs are then used to generate the final color/depth/alpha values for the given pixel that are drawn to the render buffer and depth buffer.

In fragment shaders, you are allowed to access texture data. (Later versions of the shading model also allow texture access in vertex shaders, e.g. for displacement mapping and other effects, but we'll not worry about that here.) Given the computed texture coordinates, a texture can be sampled to obtain a value that can then be operated upon in some way.

Looking at the example you linked, you can see that the tutorial presents a very simple vertex shader that accepts the input vertex data, transforms it by the matrix chain, then passes on the transformed vertex and passes through a single set of texture coordinates. Then in the fragment shader, the interpolated texture coordinates are used to sample the texture. The remainder of the shader then simply does a test to see if the color value sampled from the texture is equal to the color red (red=1, green=0, blue=0). If this is the case, the fragment is discarded; ie, the shader exits without writing any output data to the render buffer.
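The logic of that fragment shader can be mimicked in Python for clarity (this is a sketch of the tutorial's idea, not its actual GLSL; the sampler functions here are made-up one-texel stand-ins):

```python
RED_KEY = (1.0, 0.0, 0.0)  # red=1, green=0, blue=0

def fragment_shader(sample_texture, uv):
    # Sample the texture at the interpolated UV; if the texel is pure red,
    # 'discard' the fragment (write nothing to the render buffer).
    color = sample_texture(uv)
    if color == RED_KEY:
        return None
    return color

# Stand-in one-texel "texture" samplers for illustration:
red_texture = lambda uv: (1.0, 0.0, 0.0)
blue_texture = lambda uv: (0.0, 0.0, 1.0)
```

Real GLSL would use `texture2D(myTexture, uv)` and the `discard` keyword, but the control flow is the same.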

It is a very simple example, but it requires a few things. First, you need to ensure that your hardware actually supports the required shader language version, that GLEW (or whatever extension-loading library is used) is initialized (yes, I remember your previous thread), and that your graphics environment is all good to go. If GLEW fails to obtain the extension pointers, then you cannot use shaders. Once you are assured that the relevant extensions are supported, the programs compile successfully, etc... then you can bind your vertex streams, your texture samplers, etc... and render your primitives.

In order to see anything on screen, you need to ensure that your model/view/projection matrices are set to sane values, and that the rendered geometry is actually visible on screen. You need to ensure that you have a texture bound to the named sampler in the GLSL shader (in this case it is called myTexture). If you don't bind a texture, it won't work.

Now, all that being said, I feel like I need to add this: shader-based colorkeying is kind of ass-backwards. You can write a utility function to convert a color key to an alpha channel, and use simple fixed-function with alpha blending to exclude the color keyed parts, no shaders needed. If color-keying is your sole reason for needing a shader, then perhaps you can make do without. Of course, given that shaders are the way of the future, you'll probably want to learn them anyway. :D
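That preprocessing utility is straightforward. A hedged sketch in Python (the function name is invented; pixels are RGB tuples with 0-255 channels):

```python
def colorkey_to_alpha(pixels, key):
    # Convert an RGB image to RGBA, making color-keyed texels fully
    # transparent so ordinary alpha blending can skip them at draw time.
    out = []
    for row in pixels:
        out.append([(r, g, b, 0 if (r, g, b) == key else 255)
                    for (r, g, b) in row])
    return out
```

Run this once at texture-load time, upload the RGBA result, and fixed-function alpha blending (or alpha testing) handles the rest with no shader involved.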

#4875142 Problem with skills

Posted by on 21 October 2011 - 01:40 PM

I visited a brick works once, and the failure rate is actually quite high, and was very much higher back in the days when they would cook bricks in huge stacks, kindling a fire at the center and letting them bake for a couple weeks. The "over-cooked" ones were often called clinkers, and were the glazed-looking blackened bricks you can sometimes find if you are doing demolition on older buildings. They were used on non-visible applications, being structurally sound but ugly. The good bricks were graded and priced according to their finish.

Also, the clay going into a brick has an effect, as does the temperature it is kilned at, etc... If you think the real world doesn't have multiple types of bricks, then might I recommend you take a job with a stone-mason for a summer? That'll set you straight. Try building a high-temp fireplace with a basic red-clay structure brick sometime, then play a fun game of "dodge the exploding fragments of overheated brick."

Now, from where I sit, the skill of a craftsman would come into play in every single area which you seem determined to exclude:

1) Material consumption. To follow the brick analogy, a lesser-skilled brickmaker might not screen his clay carefully enough, resulting in a spoiled batch here and there that is unsuited even for non-visible usage. He might be sloppy in filling the forms, resulting in lost clay during the moulding process. He might botch the firing and burn too many bricks, resulting in spoilage.

2) Production quantity and rate. While the firing might take the same length of time regardless of the skill of the brick-maker, it is likely that the preparation phases (clay processing, forming, stacking) would go more quickly for an expert than for an amateur. Thus, a higher-skilled craftsman could feasibly have more stacks firing at any given time, resulting in more production for less time.

3) Quality. A skilled craftsman would have a better eye for good clay, and a better sense for the best way to stack, the optimal temperature to kiln, and the optimal duration of firing.

So from where I sit, instead of limiting the field you should instead try to implement everything you seem intent on excluding.

#4872590 Will this new rpg work?

Posted by on 14 October 2011 - 11:22 AM

It sounds to me like you just don't like RPGs. So why are you making one? Who says that doing things like "getting armor or buying items" are frustrating? I certainly don't think that, not by any stretch. I also love the thrill of leveling up, of getting new toys to play with, not having toys taken away; we are all just kids inside, and what kid likes to have their toys taken away? Maybe the idea does have merit. Personally, I just don't see it.

#4871860 Will this new rpg work?

Posted by on 12 October 2011 - 08:56 AM

As a long-time RPG fan, I am offended on the deepest levels by this idea. If you sold this game to me as an RPG, I would only play it for just exactly as long as it took me to figure out what kind of bastard trick you were playing on me. After that I would dust-bin it, and make a note of the developer so that I wouldn't accidentally buy one of their games again.

#4867590 the problem with coding?

Posted by on 30 September 2011 - 08:47 AM

Awesome sense of entitlement you've got there, bro.

#4866088 Dynamic tree generation

Posted by on 26 September 2011 - 09:10 AM

Check out Modelling the Mighty Maple. Also, you might look into BlobTree Trees.

#4864412 Thousands protest in Wall street.., media blockade ?

Posted by on 21 September 2011 - 04:19 PM

It's not being covered up, per se. It's just that no one gives a shit.

#4862814 Ray picking fails with glTranslatef? Why

Posted by on 17 September 2011 - 09:06 AM

Instead of pissing and moaning about nobody knowing about picking, why don't you try posting some code so we can see what you are screwing up.