Likely it's because power-of-2 downsampling is vastly simpler (and thus less computationally expensive) than arbitrary downsampling. If you, say, halve the size of an image via downsampling, then each pixel in the destination is an evenly weighted sum of a nice, tidy little 2x2 neighborhood of pixels in the source (if using linear sampling). Arbitrary downsampling requires calculating weighting factors for sometimes odd-shaped neighborhoods in the source, because destination pixels don't always fall in convenient sampling locations in the source image.
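To make the difference concrete, here's a rough sketch in Python (helper names are my own invention, not from any library). Halving uses fixed 0.25 weights over a tidy 2x2 block; arbitrary scaling has to compute a fractional footprint and per-pixel weights:

```python
# Halving: every destination pixel is the evenly weighted average of a
# fixed 2x2 block in the source -- the weights are always 0.25 each.
def halve(image):
    """image: 2D list of grayscale values with even dimensions."""
    h, w = len(image), len(image[0])
    return [
        [
            (image[2 * y][2 * x] + image[2 * y][2 * x + 1]
             + image[2 * y + 1][2 * x] + image[2 * y + 1][2 * x + 1]) / 4.0
            for x in range(w // 2)
        ]
        for y in range(h // 2)
    ]

# Arbitrary scaling: each destination pixel covers a fractional footprint
# in the source, so coverage weights must be computed per pixel (1D here).
def footprint_weights(dst_index, scale):
    """Coverage weights of the source pixels under one destination pixel."""
    start, end = dst_index * scale, (dst_index + 1) * scale
    weights = {}
    src = int(start)
    while src < end:
        overlap = min(end, src + 1) - max(start, src)
        weights[src] = overlap / scale
        src += 1
    return weights
```

For a 1.5x reduction, `footprint_weights(0, 1.5)` has to split its weight unevenly across two source pixels (2/3 and 1/3), and the split changes from one destination pixel to the next; that bookkeeping is exactly what the power-of-2 case avoids.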
Technicality is just part of game programming, I'm afraid. Admittedly, though, shader programming can be a little complex to wrap your head around at first. However, once you understand how shaders work and how everything goes together, shader-based programming is vastly simpler than trying to do everything you want to do via fixed function.
Understand that the basic structure of a shader-based pipeline goes as such:
Vertex shader->Geometry shader->Fragment shader
Geometry shaders are still relatively new and supported only on newer hardware. Unless you have a need to perform dynamic generation or amplification of geometry via the graphics hardware, you don't need to worry about them (And I would say that, given the apparent development of your skill-set, trying to implement geometry shaders right now would be an error).
So, removing geometry shaders from the equation, you are left with vertex and fragment shaders.
Vertex shaders accept streams of input in the form of vertex definitions. You know, your vertex positions, normals, texture coordinates, etc... The shader is a program that operates on these vertices in some way. Typical operations include transforming them by the model/view/projection matrix chain, transforming texture coordinates via texture matrices, and performing other per-vertex calculations such as vertex-based lighting. The output of a vertex shader includes data that is to be interpolated by the rasterizer in order to generate the inputs for the fragment shader.
The fragment shader operates on fragments, which roughly correspond to "pixels", hence the alternative name of pixel shader. It accepts input as the interpolated values output by the vertex shader, including interpolated vertex positions, interpolated vertex colors, interpolated texture coordinates, etc... These inputs are then used to generate the final color/depth/alpha values for the given pixel that are drawn to the render buffer and depth buffer.
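The interpolation step between the two shaders can be sketched like this (a conceptual Python illustration with made-up names; the real rasterizer does this in hardware using barycentric coordinates):

```python
# How the rasterizer turns vertex-shader outputs into fragment-shader
# inputs: each fragment's input is a weighted blend of the values the
# vertex shader emitted at the triangle's three corners.
def interpolate(attrs, bary):
    """attrs: per-vertex attribute tuples for one triangle;
    bary: barycentric weights (a, b, c) of the fragment, summing to 1."""
    a, b, c = bary
    return tuple(
        a * v0 + b * v1 + c * v2
        for v0, v1, v2 in zip(*attrs)
    )

# A fragment at the triangle's centroid receives the average of the
# three vertex colors:
colors = [(1.0, 0.0), (0.0, 1.0), (0.0, 0.0)]  # (red, green) per vertex
frag_in = interpolate(colors, (1 / 3, 1 / 3, 1 / 3))
```

This is why the fragment shader sees smoothly varying colors and texture coordinates even though the vertex shader only ran at three points.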
In fragment shaders, you are allowed to access texture data. (Later versions of the shading paradigm allow texture access in vertex shaders, e.g. for displacement mapping and other effects, but we'll not worry about that.) Given the computed texture coordinates, a texture can be sampled to obtain a texture value that can then be operated upon in some way.
Looking at the example you linked, you can see that the tutorial presents a very simple vertex shader that accepts the input vertex data, transforms it by the matrix chain, then outputs the transformed vertex and passes through a single set of texture coordinates. Then in the fragment shader, the interpolated texture coordinates are used to sample the texture. The remainder of the shader then simply tests whether the color value sampled from the texture is equal to the color red (red=1, green=0, blue=0). If this is the case, the fragment is discarded; i.e., the shader exits without writing any output data to the render buffer.
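The logic of that fragment shader, sketched per-pixel in Python for clarity (the real thing is GLSL running on the GPU; the names and the nearest-neighbor sampling here are my own simplifications):

```python
# Per-fragment sketch of the tutorial's colorkey test.
RED = (1.0, 0.0, 0.0)

def shade_fragment(texture, u, v):
    """Sample the texture at (u, v); return None to 'discard' the fragment."""
    h, w = len(texture), len(texture[0])
    # Nearest-neighbor sampling, for simplicity.
    texel = texture[min(int(v * h), h - 1)][min(int(u * w), w - 1)]
    if texel == RED:
        return None   # discard: write nothing to the render buffer
    return texel      # otherwise output the sampled color

texture = [[RED, (0.0, 1.0, 0.0)],
           [(0.0, 0.0, 1.0), RED]]
```

Every fragment whose sampled texel is pure red simply produces no output, which is what punches the "holes" in the rendered sprite.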
It is a very simple example, but it requires a few things. First, you need to ensure that your hardware actually supports the shader language required, that GLEW (or whatever extension-assistant library is used) is initialized (yes, I remember your previous thread), and that your graphics environment is all good to go. If GLEW fails to obtain the extension pointers, then you cannot use shaders. Once you are assured that the relevant extensions are supported, the programs compile successfully, etc... then you can bind your vertex streams, your texture samplers, etc... and render your primitives.
In order to see anything on screen, you need to ensure that your model/view/projection matrices are set to sane values, and that the rendered geometry is actually visible on screen. You need to ensure that you have a texture bound to the named sampler in the GLSL shader (in this case it is called myTexture). If you don't bind a texture, it won't work.
Now, all that being said, I feel like I need to add this: shader-based color-keying is kind of ass-backwards. You can write a utility function to convert a color key to an alpha channel, and use simple fixed-function rendering with alpha blending to exclude the color-keyed parts, no shaders needed. If color-keying is your sole reason for needing a shader, then perhaps you can make do without. Of course, given that shaders are the way of the future, you'll probably want to learn them anyway.
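That utility function is a one-time preprocessing pass over the image. A minimal sketch in Python (the key color and the plain-list pixel format are assumptions for illustration; in practice you'd do this on your loaded image data before uploading the texture):

```python
# Bake the color key into an alpha channel once, then render with
# ordinary alpha blending -- no shader required.
KEY = (255, 0, 0)  # the colorkey, e.g. pure red

def key_to_alpha(pixels):
    """pixels: 2D list of (r, g, b) tuples -> 2D list of (r, g, b, a),
    with alpha 0 wherever the pixel matches the key color."""
    return [
        [(r, g, b, 0 if (r, g, b) == KEY else 255) for (r, g, b) in row]
        for row in pixels
    ]
```

With the alpha baked in, enabling blending (or even simple alpha testing) in the fixed-function pipeline gets you the same visual result as the discard shader.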
I visited a brickworks once, and the failure rate is actually quite high; it was very much higher back in the days when they would cook bricks in huge stacks, kindling a fire at the center and letting them bake for a couple of weeks. The "over-cooked" ones were often called clinkers: the glazed-looking, blackened bricks you can sometimes find if you are doing demolition on older buildings. They were used in non-visible applications, being structurally sound but ugly. The good bricks were graded and priced according to their finish.
Also, the clay going into a brick has an effect, as does the temperature it is fired at, etc... If you think the real world doesn't have multiple types of bricks, then might I recommend you take a job with a stonemason for a summer? That'll set you straight. Try building a high-temp fireplace out of basic red-clay structural brick sometime, then play a fun game of "dodge the exploding fragments of overheated brick."
Now, from where I sit, the skill of a craftsman would come into play in every single area which you seem determined to exclude:
1) Material consumption. To follow the brick analogy, a lesser-skilled brickmaker might not screen his clay carefully enough, resulting in a spoiled batch here and there that is unsuited even for non-visible usage. He might be sloppy in filling the forms, resulting in lost clay during the moulding process. He might botch the firing and burn too many bricks, resulting in spoilage.
2) Production quantity and rate. While the firing might take the same length of time regardless of the skill of the brick-maker, it is likely that the preparation phases (clay processing, forming, stacking) would go more quickly for an expert than for an amateur. Thus, a higher-skilled craftsman could feasibly have more stacks firing at any given time, resulting in more production in less time.
3) Quality. A skilled craftsman would have a better eye for good clay, and a better sense for the best way to stack, the optimal temperature to kiln, and the optimal duration of firing.
So from where I sit, instead of limiting the field you should instead try to implement everything you seem intent on excluding.
It sounds to me like you just don't like RPGs. So why are you making one? Who says that things like "getting armor or buying items" are frustrating? I certainly don't think that, not by any stretch. I also love the thrill of leveling up, of getting new toys to play with, not of having toys taken away; we are all just kids inside, and what kid likes to have their toys taken away? Maybe the idea does have merit. Personally, I just don't see it.
As a long-time RPG fan, I am offended on the deepest levels by this idea. If you sold this game to me as an RPG, I would play it only for exactly as long as it took me to figure out what kind of bastard trick you were playing on me. After that I would dust-bin it, and make a note of the developer so that I wouldn't accidentally buy one of their games again.