When I've tested it in the past it helped a bit. In my font wrapper I added if (alpha == 0) discard; which improved performance when a lot of pixels are drawn, though not by very much. I've seen the same result with sprites. It probably depends on the graphics card etc.
There's a difference between blending and discarding, as the z-buffer will still get updated with blending, but not for discarded pixels.
SrcBlend = operation performed on source color, that is the color returned from your pixel shader
DestBlend = operation performed on destination color, that is the current value in the back-buffer
BlendOp = what to do with the results from Src and Dest before outputting the final result
I'm not sure, but I would guess neither, and that r4.yzw = v0.y * c0[a0.w].xyz. The docs at http://msdn.microsoft.com/en-us/library/windows/desktop/hh447193(v=vs.85).aspx say the destination register has a write mask, which is not the same as a swizzle. I interpret it as: the calculation is done on all 4 components, and each component of the result is either written to the destination register or skipped depending on the mask. That is, xxyz * v0.y is computed into a temporary xyzw result, and the x component is skipped on write to the destination since it's not in the mask.
Do you recreate the vertex buffers every frame also?
You should probably have a static number of vertex buffers always created and then just refill them with new data when a chunk is switched out and another takes its place. How big is the heightmap?
Perhaps you can store all the vertices at least in RAM all the time, and just update vertex buffers when changes occur.
If your heightmap isn't very very large then you can probably even store all the vertices in a vertex-buffer statically, and just use different index buffers to draw different LOD levels to improve performance. If your heightmap is so large that you run out of memory, then look into reusing the same memory for a new chunk instead of reallocating things.
You could read back the depth buffer value of the pixel under the cursor to get Z, and use that to unproject the point and get the world position. If you don't want to do that, you can write a function that calculates the height on the fly while doing the intersection test, in the same way as the shader, by keeping a copy of the heightmap in memory.
The problem is that I get alpha values between 0 and 1, but I have a pixel culling algorithm in the fragment shader that depends on the alpha being either 0 or 1.
I am not sure if this design can be combined with the use of mipmaps?
You can create the mipmaps manually, using an algorithm that doesn't average. For example, instead of averaging 4 pixels, you can take the maximum value, or the minimum value, or the median, or the average of only the pixels with alpha = 1, or similar. How do you create your mipmaps? glTexImage2D's level parameter can be used to manually specify a separate image for each mip level, which gives you full control over how the different mip levels look.
Of course, comparing a float for exact equality with 0 is sometimes dangerous. But in this case the threshold seems to be 0.33 (after some trial-and-error testing).
When your mipmaps are created the mipmap pixels are set to the average of several pixels in the larger mip-levels, ...
Thanks for the suggestion, which is the obvious answer, but probably not relevant as I don't use interpolation or averaging.
OpenGL uses averaging when creating mipmaps. Your problem has nothing to do with floating-point comparison (in which case 0.33 would be a ridiculously high threshold for a number between 0 and 1), but with the fact that mipmaps are created by averaging even when you sample with GL_NEAREST_MIPMAP_NEAREST. Mipmaps are defined that way: level 2 is always half as wide and high as level 1, and each pixel in level 2 is the average of 4 pixels in level 1. That way your alpha values could be 0, 0.25, 0.5, 0.75, or 1.0 in the first generated mipmap level. In the next level you can get any possible average of 4 of the previous level's values.
If you have the available port and get an additional monitor, it works perfectly fine to use two different GPUs simultaneously on Windows 7. Just connect them both, install both drivers, and they will work side by side as a normal multi-monitor environment. I currently have one NVidia and one AMD running.
If you get a very old card and a newer card from the same vendor there might be driver collisions. I haven't tried that.
If you want only one monitor and two graphics cards and want to switch between them, that should also be possible: put the DVI cable into one card and the VGA cable into the other, and let Windows think they are two different monitors. Then switch between which card/input you use with the monitors control panel and the monitor's input settings.
EDIT: The monitor/input that you use for OpenGL must be set to the 'primary monitor' in the Windows monitors control panel. So to switch between the cards for OpenGL, switch which one is the primary monitor.
The caller is responsible for allocating space for parameters to the callee, and must always allocate sufficient space for the 4 register parameters, even if the callee doesn’t have that many parameters.
So something like this:

sub rsp, 32 ; shadow space for the 4 register parameters
mov rcx, GL_COLOR_BUFFER_BIT ; first parameter goes in rcx
call glClear
add rsp, 32 ; release the shadow space
Definitely use more tutorials, and you will understand more and more as you go along. Most things have been written thousands of times already by different people, and by learning from those with experience you avoid making a thousand unnecessary mistakes.
That said it's always very good for learning purposes to try and solve something using the knowledge you already have, before looking up the answer, but if you hit a wall then don't hesitate to search for an answer, and if you can't find one, ask for help. Then when it works, try to understand the answer.
The "sky-quad" or sky-triangle works by multiplying each vertex by the inverse of the camerarotation * projection matrix, and using the resultant vectors as 3D texture-coords for a cube-map lookup.
I don't think there are any tradeoffs you have to worry about for the simple case, and if you do more advanced effects you will probably end up with a method that's needed for the particular effect you want to achieve, so the choice will be made for you.