I can't think of any blend function that will give you exactly what you want there.
Have you considered that you can probably get a good look without doing any blending? You could put caps only on lines that have no connecting lines, and otherwise bisect your 'line quads' at the half angle between their directions where they meet. This would ensure that there's no drawing overlap between two lines.
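As a minimal sketch of that half-angle bisection (the `Vec2` and `miterDirection` names are illustrative, and this assumes 2D segments): the shared edge between the two line quads at a joint runs along the normalized sum of the two segments' normals, i.e. the miter direction.

```cpp
#include <cmath>

struct Vec2 { float x, y; };

static Vec2 normalize(Vec2 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y);
    return { v.x / len, v.y / len };
}

// Direction of the shared quad edge at the joint between two segments.
// dirA points into the joint, dirB points out of it. The half-angle
// bisector of the two quad edges is the normalized sum of the two
// segment normals -- the classic "miter" direction.
static Vec2 miterDirection(Vec2 dirA, Vec2 dirB) {
    Vec2 a = normalize(dirA);
    Vec2 b = normalize(dirB);
    Vec2 nA = { -a.y, a.x };  // perpendicular of segment A
    Vec2 nB = { -b.y, b.x };  // perpendicular of segment B
    return normalize({ nA.x + nB.x, nA.y + nB.y });
}
```

Splitting each quad along this direction makes the two quads share an edge exactly, so neither fragment gets drawn twice.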
Are you storing the normals in a floating-point texture or a regular color texture? If you try to stuff a normal into an RGBA8 texture, all negative components will be clamped to zero. That would fit the problem you're describing: lights work where the normal components are positive, while negative components are silently lost.
A common trick for this is to normalize the normal before writing, remap the components from [-1, 1] to [0, 1], and then do the reverse remap after reading from the texture.
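That remap looks something like this (a small sketch with illustrative names; the same math would normally live in your shaders):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Remap a unit normal from [-1, 1] to [0, 1] before storing in RGBA8,
// so negative components survive the unsigned texture format.
static Vec3 encodeNormal(Vec3 n) {
    n = normalize(n);
    return { n.x * 0.5f + 0.5f, n.y * 0.5f + 0.5f, n.z * 0.5f + 0.5f };
}

// Reverse: [0, 1] back to [-1, 1] after sampling the texture.
static Vec3 decodeNormal(Vec3 c) {
    return { c.x * 2.0f - 1.0f, c.y * 2.0f - 1.0f, c.z * 2.0f - 1.0f };
}
```

Note that an 8-bit channel only gives you 256 steps per component, which is usually fine for lighting but can show up as banding in tight speculars.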
No, you can't draw the data with separate index streams per attribute. You have to perform a conversion like the one you've done to get OBJ-formatted data into a vertex buffer.
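For reference, the usual conversion collapses each unique (position, texcoord, normal) index triple from the OBJ faces into a single vertex, duplicating data only when a new combination appears. A sketch with hypothetical names:

```cpp
#include <cstdint>
#include <map>
#include <tuple>
#include <vector>

// One OBJ face corner: separate indices into the position, texcoord,
// and normal arrays (OBJ's f v/vt/vn syntax).
struct ObjIndex { int p, t, n; };

// Result: one unique vertex per distinct (p, t, n) combination, plus a
// single index stream suitable for glDrawElements.
struct FlatMesh {
    std::vector<ObjIndex> vertices;
    std::vector<uint32_t> indices;
};

static FlatMesh flatten(const std::vector<ObjIndex>& corners) {
    FlatMesh mesh;
    std::map<std::tuple<int, int, int>, uint32_t> seen;
    for (const ObjIndex& c : corners) {
        auto key = std::make_tuple(c.p, c.t, c.n);
        auto it = seen.find(key);
        if (it == seen.end()) {
            uint32_t newIndex = (uint32_t)mesh.vertices.size();
            seen.emplace(key, newIndex);
            mesh.vertices.push_back(c);
            mesh.indices.push_back(newIndex);
        } else {
            mesh.indices.push_back(it->second);
        }
    }
    return mesh;
}
```

In a real loader you'd store the fetched position/uv/normal values rather than the `ObjIndex` triples, but the deduplication logic is the same.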
I don't know the particulars of your algorithm, but I would expect loading a medium-sized OBJ file to take on the order of a few seconds with a well-known importer like Assimp. 45 seconds might not be unusual for one as large as yours.
If you want the fastest possible solution, you shouldn't use OBJ directly in your engine. A good approach is to parse the OBJ into an OpenGL-ready binary format in a separate pre-build step, and then load that binary data at runtime instead of the OBJ. A system like that could probably cut your load times by 95%.
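The runtime side of such a cache is just a header read plus one block read, with no text parsing at all. A minimal sketch (hypothetical `writeCache`/`readCache` names; a real format would also carry vertex layout, index data, and a version field):

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Pre-build step: dump the already-parsed vertex data as raw floats.
static bool writeCache(const char* path, const std::vector<float>& verts) {
    FILE* f = std::fopen(path, "wb");
    if (!f) return false;
    uint32_t count = (uint32_t)verts.size();
    std::fwrite(&count, sizeof(count), 1, f);
    std::fwrite(verts.data(), sizeof(float), count, f);
    std::fclose(f);
    return true;
}

// Runtime: read the count, then the whole float block in one call.
static std::vector<float> readCache(const char* path) {
    std::vector<float> verts;
    FILE* f = std::fopen(path, "rb");
    if (!f) return verts;
    uint32_t count = 0;
    if (std::fread(&count, sizeof(count), 1, f) == 1) {
        verts.resize(count);
        if (std::fread(verts.data(), sizeof(float), count, f) != count)
            verts.clear();  // truncated file: treat as a cache miss
    }
    std::fclose(f);
    return verts;
}
```

The returned buffer can go straight into `glBufferData`, which is where the 95% saving comes from: all the string parsing and index deduplication happened offline.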
If you want to move an object left/right or up/down in view space, then you just transform it into view space via the modelview matrix, apply the translation, then send that through the projection matrix. Now if you want to figure out the direction of this vector in world space, then you have to multiply it by the inverse of the view matrix.
You'll have to keep the model and view matrices separate if you want to do that, because you need a pure view matrix to get back to world space from view space. Otherwise, transforming the view-space direction by the inverse modelview will put your direction in model space, which may or may not be what you want.
You'll find the list of vertex shader inputs on page 7. The value you're looking for is gl_MultiTexCoord0 through gl_MultiTexCoord7.
Whether you go to slot 0,1,2...7 depends on if you call glClientActiveTexture. The default texcoord is gl_MultiTexCoord0.
Looks to me like you either don't have depth testing enabled correctly, or you're culling the front-facing polygons. You can see the lid of the teapot through the rim from behind, where it's properly obscured in the second image.
Post your render code if you can't figure out how to fix that.
For a first-person camera, you should always rotate around the Y axis first, then around the X. Don't compound matrices across several frames; you always want your Y rotation to be around the global up vector, not the local one.
Also, I realize that if you're talking about constructing a view matrix, you have to do the opposite of that. Or just construct your camera like a regular object and invert its model matrix to get the view matrix.
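The yaw-then-pitch construction can be sketched like this (illustrative names; rotation-only 3x3 matrices in the column-vector convention, so the view rotation is just the transpose of the camera rotation):

```cpp
#include <cmath>

struct Mat3 { float m[3][3]; };

static Mat3 mul(const Mat3& a, const Mat3& b) {
    Mat3 r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                r.m[i][j] += a.m[i][k] * b.m[k][j];
    return r;
}

// Rebuild the camera orientation from the two angles every frame,
// never accumulating, so yaw always stays around the global up vector.
// Column-vector convention: rotY * rotX pitches locally, then yaws
// globally.
static Mat3 cameraRotation(float yaw, float pitch) {
    float cy = std::cos(yaw),   sy = std::sin(yaw);
    float cp = std::cos(pitch), sp = std::sin(pitch);
    Mat3 rotY = {{{ cy, 0, sy }, { 0, 1, 0 }, { -sy, 0, cy }}};
    Mat3 rotX = {{{ 1, 0, 0 }, { 0, cp, -sp }, { 0, sp, cp }}};
    return mul(rotY, rotX);
}

// For a pure rotation, the inverse (the view rotation) is the transpose.
static Mat3 transpose(const Mat3& a) {
    Mat3 r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            r.m[i][j] = a.m[j][i];
    return r;
}
```

The full view matrix would also subtract the camera position before rotating, but the rotation part is where the ordering mistake usually happens.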
(My shader also might be somewhat wrong in how it handles a 12-byte input. It seems to work, but my transformations don't apply unless I add another vec4(0.0, 0.0, 0.0, 1.0). If it's obviously wrong in some way, shape, or form, that would be good to know.)
I don't think I really understand what you're saying here; I think you need to post your OpenGL client code.
1) You call LoadIdentity at the beginning of render on an unspecified matrix stack. Does this clear the projection matrix on the first pass, since that stack is still bound when you exit the setup function?
2) What color is your quad supposed to be? Why is texturing enabled?
3) Your quad is lying on the near plane. I'm not sure exactly how the clipping calculations work, so I can't say for certain whether geometry lying exactly on the near plane gets clipped or not.
@jyk: When I stated row-major, I was referring to the fact that each row of a rotation matrix represents the direction vector of a rotated axis in terms of the original reference coordinates, which is the case for pre-multiplication-style transformation matrices, as opposed to the post-multiplication style where each column represents the rotated axis vectors. I was not talking about the layout of the matrix in memory, an area I honestly don't understand well. I think I may have used the wrong term to express my intended meaning. Is there a more technically correct term for that?
I believe the correct term is 'row vectors' vs 'column vectors'.