For direct control over cel shading, use a 1D texture containing all the tones you want. Since the dot product is in the range 0 to 1, use it as the 1D UV coordinate into that texture. You can then change from 2 tones to 4, 8, and so on at any time without touching the shader.
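A rough sketch of the lookup in Python, with a plain list standing in for the 1D texture and nearest-neighbor sampling; `sample_tone` is a hypothetical name, not from the original post:

```python
def sample_tone(tones, ndotl):
    """Nearest-neighbor sample of a 1D tone 'texture' (a list of
    intensities) using the clamped N.L value as the UV coordinate."""
    u = max(0.0, min(1.0, ndotl))
    index = min(int(u * len(tones)), len(tones) - 1)
    return tones[index]

# Two-tone ramp: dark below 0.5, light above.
two_tone = [0.2, 1.0]
print(sample_tone(two_tone, 0.3))  # 0.2
print(sample_tone(two_tone, 0.8))  # 1.0

# Switching to four tones needs no shader change, just a new table.
four_tone = [0.1, 0.4, 0.7, 1.0]
print(sample_tone(four_tone, 0.6))  # 0.7
```

In a real shader the list is a 1D texture sampled with nearest filtering, and `ndotl` is `dot(normal, lightDir)` clamped to [0, 1].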
For outlines it depends how specific you want the outlining to be. Consider the case where the camera is in front of you and you have your hand over your chest: you either want just the silhouette of the entire body, or you also want outlines drawn around the hand (the hand is inside the silhouette of the chest/body).
For just the silhouette, draw your model twice: once normally, and once as lines (GL_LINES), increasing the line width via glLineWidth depending on how thick you want the outline.
For the second case of all outlines, take the normal relative to the eye/screen. As its dot product with the view direction approaches 0, the normal is starting to point away from you:
if( dot(normal, vec3(0.0, 0.0, 1.0)) < VALUE )
    gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0); // black
where VALUE would be between 0 and maybe 0.3, depending on how thick you want the outline to be.
A simple way, without using a line drawing algorithm like Bresenham's:
Create a 2D vector from the start brush position to the end brush position.
Find the length in pixels of that vector using the Pythagorean theorem.
float timeStep = 1.0 / lengthInPixels;
float time = 0.0;
for (int i = 0; i <= lengthInPixels; i++)
{
    Vec2 fillPos = start + time * brushVector;
    // stamp the brush at fillPos here
    time += timeStep;
}
When time = 1, fillPos is the endpoint.
When time = 0.5, fillPos is halfway to the endpoint.
Based on your drawing, you are trying to fill in the points between the last brush position and the current one, so you know where the user is drawing.
The only way to fix this is to stamp the brush at each pixel in between the circles. If you use a step bigger than that you won't get a smooth brush stroke; you will get more of a caterpillar-looking stroke, like the medium drawing you posted.
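The stepping above can be sketched in runnable Python; `interpolate_stroke` is a hypothetical name, and the brush stamp is replaced by collecting positions:

```python
import math

def interpolate_stroke(start, end):
    """Stamp positions for a brush stroke by stepping from start to end
    in roughly pixel-sized increments, as in the pseudocode above."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    length = math.hypot(dx, dy)        # length in pixels (Pythagorean theorem)
    steps = max(1, int(math.ceil(length)))
    positions = []
    for i in range(steps + 1):         # include the endpoint (time = 1)
        t = i / steps
        positions.append((start[0] + t * dx, start[1] + t * dy))
    return positions

stroke = interpolate_stroke((0.0, 0.0), (4.0, 3.0))  # length is 5 px
print(stroke[0])    # (0.0, 0.0)  start
print(stroke[-1])   # (4.0, 3.0)  end
print(len(stroke))  # 6 stamps, roughly one per pixel of line length
```

In the real app each position would be passed to whatever routine draws one brush circle.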
You are most likely doing it wrong, because shadow mapping does not require a vector to the sun at all, and it has nothing to do with normalizing one either.
When learning shadow mapping, try projecting an image texture instead of a depth buffer. That way you can get the math part down first and then just swap the image for a depth buffer, because sometimes the depth buffer you are using is not what you thought it was.
I would post your shader, or at least the shadow portion of it.
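The projection math the exercise above shares with shadow mapping can be sketched in Python. This is a minimal sketch, assuming a hypothetical orthographic light whose view-projection matrix `light_vp` maps world x and y in [-10, 10] to clip space; `shadow_uv` is an invented name:

```python
def mat_vec4(m, v):
    """Multiply a 4x4 matrix (row-major list of rows) by a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# Hypothetical orthographic light view-projection matrix (identity view,
# scaling world [-10, 10] into clip space [-1, 1]).
light_vp = [
    [0.1, 0.0, 0.0, 0.0],
    [0.0, 0.1, 0.0, 0.0],
    [0.0, 0.0, 0.1, 0.0],
    [0.0, 0.0, 0.0, 1.0],
]

def shadow_uv(world_pos):
    """Project a world position into the light's [0, 1] texture space.
    The same math projects an ordinary image texture, which is why the
    image is a good stand-in while learning."""
    clip = mat_vec4(light_vp, [*world_pos, 1.0])
    ndc = [clip[i] / clip[3] for i in range(3)]   # perspective divide
    return [(c + 1.0) * 0.5 for c in ndc[:2]]     # bias [-1, 1] -> [0, 1]

print(shadow_uv((0.0, 0.0, 0.0)))     # [0.5, 0.5]  center of the map
print(shadow_uv((10.0, -10.0, 0.0)))  # [1.0, 0.0]  a corner
```

Once these UVs fetch the right part of a test image, replacing the image with the light's depth buffer gives the shadow-map comparison.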
Wouldn't to the west and up (as per my prior description) be (-1, 0, 1)?
Assuming that for whatever reason I had translated to some arbitrary point, making it the center (say, [x, y, z]), the resulting light would require the light direction to be (x - (-1), y - 0, z - 1) => (x + 1, y, z - 1), correct?
A light with w = 0 is directional and not affected by translation; basically, 0 * translation is what the math comes out to. If you want a positional light, such as a lamp post, then yes, something like that.
I'm pretty sure the light's position is normalized if you use GL without shaders. The vector (1, 0, 1) is longer than (1, 0, 0); if you don't know the Pythagorean theorem, that's exactly what it's for.
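The w = 0 point is easy to verify with a 4x4 translation matrix; a small sketch (the matrix and helper are illustrative, not from the post):

```python
def mat_vec4(m, v):
    """Multiply a 4x4 matrix (row-major list of rows) by a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# A translation by (5, 2, 3) as a row-major 4x4 matrix.
translate = [
    [1.0, 0.0, 0.0, 5.0],
    [0.0, 1.0, 0.0, 2.0],
    [0.0, 0.0, 1.0, 3.0],
    [0.0, 0.0, 0.0, 1.0],
]

direction = [-1.0, 0.0, 1.0, 0.0]  # w = 0: directional, e.g. the sun
position  = [-1.0, 0.0, 1.0, 1.0]  # w = 1: positional, e.g. a lamp post

# The translation column is multiplied by w, so w = 0 kills it.
print(mat_vec4(translate, direction))  # [-1.0, 0.0, 1.0, 0.0]  unchanged
print(mat_vec4(translate, position))   # [4.0, 2.0, 4.0, 1.0]   translated
```

The same thing happens when fixed-function GL transforms the light by the modelview matrix: w = 0 makes translation a no-op.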
It doesn't make a whole lot of sense that you need to learn Maya for an indie place; you can always export your models from Blender and import them into Maya. Your question about learning it doesn't really make sense, though. What specifically don't you get? Edge loops? UV unwrapping? Why can't you google those things for Maya and read about them? You asked, "I know Blender, so how do I learn Maya?" You learn Maya; Blender has nothing to do with it. It's not like someone here wrote a book called Blender to Maya. You just have to read about it and figure out the hotkeys.
You have to keep both vectors in the same space. Right now your sun vector is in world space, while a_Normal is static, in object space. As the model rotates, you need to multiply a_Normal by the model matrix; this keeps it in world space. If you fold the view matrix in as well, then the normal is in view space, relative to the viewer.
So either multiply a_Normal by the model matrix and you are fine, or
multiply a_Normal by the modelview matrix and the sun by the view matrix to get them both into view space.
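A minimal sketch of why the spaces matter, assuming a hypothetical model matrix that rotates the mesh 90 degrees about Y (rotation part only, so multiplying the normal by it directly is fine):

```python
import math

def mat3_vec3(m, v):
    """Multiply a 3x3 matrix (row-major list of rows) by a 3-vector."""
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Hypothetical model matrix: rotate the model 90 degrees about Y.
c, s = math.cos(math.pi / 2), math.sin(math.pi / 2)
model = [
    [c,   0.0, s],
    [0.0, 1.0, 0.0],
    [-s,  0.0, c],
]

sun = [0.0, 0.0, 1.0]       # world-space sun direction
a_normal = [0.0, 0.0, 1.0]  # object-space normal, faces +Z

# Wrong: object-space normal against a world-space sun; lighting
# never changes no matter how the model rotates.
print(round(dot(a_normal, sun), 6))       # 1.0

# Right: bring the normal into world space first.
world_normal = mat3_vec3(model, a_normal)
print(round(dot(world_normal, sun), 6))   # ~0.0, the face now points sideways
```

With non-uniform scaling you would use the inverse-transpose of the model matrix instead, but for pure rotation the matrix itself works.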