So now that I think I understand what you want, I guess we can go back to the original question: determining the frequency of a signal.
In all but the very trivial cases where the signal is basically just a tone, either pure or with a few weak overtones, this problem is not well defined unless you specify what you actually mean by the frequency of a signal. A pure tone contains a single frequency, but anything else contains multiple frequencies, so there is no such thing as "the" frequency.
Two examples could be the dominant frequency, or the pitch of the signal. The dominant frequency can be determined as the frequency bin from a DFT with the largest magnitude; for that, you can look for a DFT library, such as FFTW. You can also approximate it by counting zero-crossings (for example, your signal in the picture has one zero-crossing; two periods would have three zero-crossings, but you have to be careful with the corner cases at the borders of the buffer). The pitch is the perceived frequency of a signal. Without going into details, it is, extremely simplified, the average of all frequencies weighted by their corresponding magnitudes. Again, you would need a DFT library to determine the frequencies and magnitudes.
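To make the dominant-frequency idea concrete, here is a minimal sketch using NumPy's FFT instead of FFTW (the 5 Hz test tone, the 1000 Hz sample rate, and the one-second buffer are all made-up example values):

```python
# Dominant frequency = the DFT bin with the largest magnitude.
import numpy as np

def dominant_frequency(signal, sample_rate):
    # rfft returns only the non-negative frequency bins of a real signal
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

sample_rate = 1000.0                       # samples per second (example)
t = np.arange(0, 1.0, 1.0 / sample_rate)   # one second of samples
signal = np.sin(2 * np.pi * 5.0 * t)       # a 5 Hz test tone

print(dominant_frequency(signal, sample_rate))  # -> 5.0
```

For a real-world signal the peak bin jumps around with noise and windowing, which is exactly the "not well defined" problem described above.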
As I said, the problem is not well defined for arbitrary signals, only for very trivial signals that are mostly tonal.
It would also repeat once every second, so the fundamental frequency would still be 1 Hz. But it also contains overtones, since it's not a pure tone, and those are multiples of the fundamental frequency. So you would hear not only 1 Hz, but also 2 Hz, 3 Hz and so on, but at much lower amplitudes.
Although the optimum isn't unique, you can restrict yourself to a unique minimum by choosing the minimizer that itself has smallest norm. This means choosing a minimizer orthogonal to A's nullspace -- i.e., in the row space of A.
I am not super-familiar with the numerical method Numsgil describes, but I assume it works along these lines. The technique I have seen is the following: let A = U S V^T be the singular value decomposition of A, and let S' be the matrix you get by replacing each nonzero singular value with its reciprocal. Then construct the pseudoinverse as A' = V S' U^T. The solution is x = A' b. Although this works, I assume that computing the full SVD is doing more work than you need to.
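The construction above is easy to sketch in a few lines of NumPy (the matrix values here are arbitrary example data, and the tolerance for "nonzero" is an assumption):

```python
# Pseudoinverse via SVD: A = U S V^T, invert the nonzero singular
# values, then A' = V S' U^T.
import numpy as np

def pinv_via_svd(A, tol=1e-12):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Reciprocal of each singular value that is not numerically zero
    s_inv = np.array([1.0 / v if v > tol else 0.0 for v in s])
    return Vt.T @ np.diag(s_inv) @ U.T

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])    # 2 equations, 3 unknowns
b = np.array([1.0, 2.0])

x = pinv_via_svd(A) @ b        # the minimum-norm solution of A x = b
print(np.allclose(A @ x, b))   # -> True
```

The result matches NumPy's own np.linalg.pinv, which uses the same SVD-based construction internally.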
That way of calculating the pseudo-inverse appears correct. The solution you get is the one with the minimum norm you mentioned in the first quote. There are, however, other solutions that satisfy other definitions of "minimum", one common one being the minimum non-zero solution.
An interesting side-note to this, which many Matlab users appear to be unaware of, is the difference between the various ways of solving equation systems in Matlab:
x = inv(A)*b
x = pinv(A)*b
x = A\b
All three are equivalent for square, full-rank matrices A, which is also the only case where 1 is possible. For overdetermined systems, solutions 2 and 3 are equivalent and give the least-squares solution. But for underdetermined systems, 2 gives the minimum norm solution, while 3 gives the minimum non-zero solution: the solution with the maximum number of zero elements in the solution vector x.
But that was just a side-note, that there are many solutions and different methods give different solutions.
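The minimum-norm property is easy to see numerically. Here is a small sketch, with NumPy's np.linalg.pinv standing in for Matlab's pinv (the system and the nullspace vector are arbitrary example data):

```python
# For an underdetermined system, the pseudoinverse solution has the
# smallest norm among all solutions: any other solution differs by a
# nullspace vector, which only adds to the norm.
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])      # 2 equations, 3 unknowns
b = np.array([2.0, 2.0])

x_min = np.linalg.pinv(A) @ b        # minimum-norm solution

n = np.array([1.0, -1.0, 1.0])       # A @ n == 0, a nullspace vector
x_other = x_min + n                  # also solves A x = b

print(np.allclose(A @ x_other, b))                       # -> True
print(np.linalg.norm(x_min) < np.linalg.norm(x_other))   # -> True
```

Since x_min lies in the row space of A, it is orthogonal to n, so adding any nullspace component can only increase the norm.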
Nothing should happen when you press Z, because your keyboard handler checks for D. But ignoring that, I don't see how the position set in the keyboard handler can have any effect on the cube. You do set a new position in the keyboard handler, but in the draw function, you set a new position for every cube you draw.
If you want to distribute the money inversely proportional to the distance, then assign the weight w_n = 1/distance_n to person number n. Then normalize the weights so they add up to unity: w'_n = w_n / sum(w_n). Now you just distribute the money as 100 dollars * w'_n to person n.
The method works for any weighting; just change the 1/distance to whatever weighting function you want.
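The scheme above in a few lines of Python (the distances and the 100-dollar total are example values):

```python
# Inverse-distance weights, normalized to sum to one, applied to the total.
total = 100.0
distances = [1.0, 2.0, 4.0]

weights = [1.0 / d for d in distances]         # w_n = 1/distance_n
norm = sum(weights)
shares = [total * w / norm for w in weights]   # total * w'_n

print(shares)   # the closest person gets the largest share
print(sum(shares))  # -> 100.0, nothing lost in the normalization
```

Swapping the list comprehension for, say, `1.0 / d**2` gives an inverse-square weighting with no other changes.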
A quick glance at the GLM manual suggests that you use glm::value_ptr() to get a pointer to the data of the vector or the matrix.
Anyway, you can use pretty much any library. I use CML myself. Also, if you like the linear algebra library in DirectX, then I'm not aware of any problems using it with OpenGL either. It is, as far as I understand, a stand-alone library not directly tied to or dependent on the other libraries.
I understand that these lines enable blending so translucency effect happens, but it seems that the alpha value just changes the brightness of the color.
For example, in lesson 8, the blending enable code is like this:
glColor4f(1.0f, 1.0f, 1.0f, 0.5f); // Full Brightness. 50% Alpha
glBlendFunc(GL_SRC_ALPHA,GL_ONE); // Set The Blending Function For Translucency
If I set the alpha variable in glColor4f to 0.0f, the cube becomes invisible, which makes sense as it should be completely transparent. However, when setting alpha to 1.0f, instead of having an opaque cube, it seems to be again 50% transparent, but with lighter colors. I know that if I want opaque textures I just have to turn off blending (as shown in the same code of lesson 8), but what if I want, for example, 33% or 85% transparency? Setting 0.33f or 0.85f in the 4th argument just makes the transparent cube lighter or darker, with no effect on translucency at all.
Probably I'm missing something, but as I said before, I'm a complete newbie and I would like to learn from this; that's why I'm asking.
Thank you so much for your help.
The parameters to glBlendFunc control how much of the source color (what you draw) is blended with the destination color (what is already in the frame buffer). The first factor, GL_SRC_ALPHA, means the source color is multiplied by the source alpha value. The second factor, GL_ONE, means the destination color is multiplied by the constant factor 1. These two colors are then added to produce the final color that is written back to the frame buffer.
So if alpha is zero, the final color is srcColor*0 + dstColor*1, which is the same as dstColor. The object thus appears invisible, since the result of the blending stage is whatever was in the frame buffer before the second object was blended in. If alpha is one, the final color is srcColor*1 + dstColor*1, which is an equal blend of the source and destination colors. The object appears brighter since the total additive factor is 2, and half transparent since the final color is one part source and one part destination color. Or, in other words, a 50/50 blend.
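The blend equation is plain arithmetic, so here is a sketch of it for one color channel (the color values are made-up examples; the GPU does this per channel during blending):

```python
# final = src * srcFactor + dst * dstFactor, with the factors from the
# question: GL_SRC_ALPHA for the source and GL_ONE for the destination.
def blend(src, dst, alpha):
    return src * alpha + dst * 1.0

dst = 0.5   # whatever is already in the frame buffer

print(blend(0.75, dst, 0.0))  # -> 0.5   (object invisible)
print(blend(0.75, dst, 1.0))  # -> 1.25  (brighter than either input)
```

Note that with GL_ONE as the destination factor the result can exceed 1.0, which is why the cube looks washed out at high alpha.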
What you probably want is to decrease the contribution from the destination color as the alpha increases. Luckily, there is a constant GL_ONE_MINUS_SRC_ALPHA, and I'll let you figure out what it means, where to use it, and how it affects the final blended color.
This is only a problem with dynamic textures that you continuously need to feed with new data.
You have no way to control the internal format beyond specifying how many color components you want. The external format can be controlled, but unless you're continuously updating the texture at run-time, this is only a one-time conversion performed automatically by the driver, and you should trust the driver to pick a proper internal format (not only should you trust the driver, you have to, since you cannot control the color order of the internal format).
Where in the process of loading the file, storing the data in memory and passing the data to OpenGL are you concerned about the color order?
If you are using the core profile on GL 3.2, then you must use a VAO when you call glBindBuffer and glVertexAttribPointer.
VAO? Vertex arrays? I thought vertex arrays were older and got replaced by VBOs. Weird. Do you have any links where I can get more info on these functions requiring VAOs? The OpenGL man page doesn't mention it as far as I can tell.
To clarify the terms here:
Vertex arrays (VA) are the mechanism by which you store objects in arrays and draw vertices in batches. Vertex arrays are most certainly not old and replaced. They are, in fact, the only way to draw something today.
A vertex buffer object (VBO) is an object that stores vertex array data. It is the OpenGL equivalent to, for example, new or malloc. Its only purpose is to provide storage for your vertex arrays.
A vertex array object (VAO) is an object that stores VBO bindings. You can see it as an object that stores the calls to glVertexAttribPointer and glEnableVertexAttribArray, so you can quickly re-bind and enable a set of arrays by binding the VAO, instead of binding several VBOs and resetting the pointer for every attribute.
All three of these are quite useless on their own today, but they all exist to work together. For example, in previous versions you could use VAs alone by passing pointers to your own memory. That is not possible anymore, and now you need to store the vertex arrays in a VBO instead.
The functions for VAO you want to look for are glGenVertexArrays and glBindVertexArray.
First you need to ensure that the image is not altered in any way before it ends up in OpenGL as a texture. gluBuild2DMipmaps will resize the image, which includes a filtering operation, if your source image dimensions are not powers of two. Then you need to be very careful with how you set up your coordinate system so that you actually sample the texture exactly on a texel, or you will have yet another filtering operation from the linear interpolation.
The first two things to do are to drop mipmaps, or at least not let gluBuild2DMipmaps generate them, and to enable nearest filtering instead of linear filtering.
Your shader is not important; you just need to make sure that you pass the image to glTexImage correctly.
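To make the "sample exactly on a texel" point concrete, here is a small sketch (pure arithmetic, no OpenGL; the 8-texel width is an example value). The center of texel i in an N-texel-wide texture sits at texture coordinate (i + 0.5)/N:

```python
N = 8  # example texture width in texels

# Texture coordinate of the center of each texel
centers = [(i + 0.5) / N for i in range(N)]

# Mapping back recovers exact integer texel indices, so the sample
# lands exactly on a texel and linear interpolation has no neighbor
# to blend in.
texels = [u * N - 0.5 for u in centers]
print(texels)  # -> [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
```

If your vertices or texture coordinates are off by even half a texel, GL_LINEAR will blend adjacent texels and the image will no longer be pixel-exact.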
There is nothing that outright prevents you from passing the glyph bitmap data directly to glTexImage. In some cases, there are just some things you need to be aware of, but nothing that applies directly in your case. It mostly concerns padding and differences between bitmap width and actual row pitch.
The call to glTexImage you showed should work just fine, given that you have adjusted the unpack alignment (which, by the way, was one correct way to do it). You only need to consider that bitmap.buffer is a pointer to the image data, so &bitmap.buffer is a pointer to a pointer to the image data. glTexImage accepts only the former, so drop the & to avoid ending up with a double pointer.
None of that has anything to do with FreeType returning a padded bitmap. Are you really sure the glyph bitmap is padded? As I said, my experience is that the glyph bitmaps are not padded in any way.
Do you really mean to say that the texture contains padding, and not the glyph bitmap? That would be more natural, since you round the size up to (what appears to be) the next power of two. That introduces padding in the texture, not in the glyph bitmap from FreeType, but that is also entirely under your own control. If you don't want that padding, then don't pad the texture in the first place.