
# How to visualize Linear Algebra or any math


## Recommended Posts

Trying to understand how one visualizes linear algebra. I've been trying to get better at reading, writing, and understanding shaders that use linear algebra. But I have a feeling that if I want to be proficient at writing shaders, I need a better idea of what the calculations are doing visually (I am a visual learner; it helps to know what's going on). For example, while looking at ADS code I can somewhat understand the following:

vec4 eyeCoords = ModelViewMatrix * vec4(VertexPosition, 1.0);
vec3 s = normalize(vec3(Light.position - eyeCoords));


So essentially, a vector is being made that points from the geometry surface to the light source. That vector is then shortened to a length of 1.
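That subtract-then-normalize step can be sketched in plain Python (the positions here are made up; `normalize` mirrors GLSL's built-in of the same name):

```python
import math

def normalize(v):
    """Scale a vector so its length becomes 1, keeping its direction."""
    length = math.sqrt(sum(c * c for c in v))
    return [c / length for c in v]

light_pos = [10.0, 10.0, 0.0]   # light position in eye space (made up)
eye_coords = [2.0, 4.0, 0.0]    # vertex position in eye space (made up)

# Vector from the surface point toward the light...
s = [l - e for l, e in zip(light_pos, eye_coords)]   # [8.0, 6.0, 0.0]
# ...shrunk to unit length (an 8-6-10 triangle, so its length is 10).
s_unit = normalize(s)   # approximately [0.8, 0.6, 0.0]
```

Normalizing throws away the distance to the light and keeps only the direction, which is all the later dot products need.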

But I am confused at what this line does

vec3 ambient = Light.La * Material.Ka   // What does the multiplication operation do? Why wouldn't addition suffice? What does this look like visually?


or

vec3 tnorm = normalize(NormalMatrix * VertexNormal);    // I don't understand what NormalMatrix * VertexNormal does. What does it look like visually?


Do I have the wrong mindset when reading shaders? Is trying to visualize everything line by line the wrong approach? How can I get better at reading shaders? Also, is it normal to write a Phong shader program completely from memory, without referring to an equation or an explanation? (I have a feeling proficient graphics programmers can write one without referencing anything, but I digress.)

Here is the vertex shader that I am referencing, for those wondering.

<pre>
layout (location = 0) in vec3 VertexPosition;
layout (location = 1) in vec3 VertexNormal;

out vec3 LightIntensity;

struct LightInfo {
    vec4 Position; // Light position in eye coords.
    vec3 La;       // Ambient light intensity
    vec3 Ld;       // Diffuse light intensity
    vec3 Ls;       // Specular light intensity
};
uniform LightInfo Light;

struct MaterialInfo {
    vec3 Ka;            // Ambient reflectivity
    vec3 Kd;            // Diffuse reflectivity
    vec3 Ks;            // Specular reflectivity
    float Shininess;    // Specular shininess factor
};
uniform MaterialInfo Material;

uniform mat4 ModelViewMatrix;
uniform mat3 NormalMatrix;
uniform mat4 ProjectionMatrix;
uniform mat4 MVP;

void main()
{
    vec3 tnorm = normalize(NormalMatrix * VertexNormal);
    vec4 eyeCoords = ModelViewMatrix * vec4(VertexPosition, 1.0);
    vec3 s = normalize(vec3(Light.Position - eyeCoords));
    vec3 v = normalize(-eyeCoords.xyz);
    vec3 r = reflect(-s, tnorm);
    float sDotN = max(dot(s, tnorm), 0.0);
    vec3 ambient = Light.La * Material.Ka;
    vec3 diffuse = Light.Ld * Material.Kd * sDotN;
    vec3 spec = vec3(0.0);
    if (sDotN > 0.0)
        spec = Light.Ls * Material.Ks *
               pow(max(dot(r, v), 0.0), Material.Shininess);

    LightIntensity = ambient + diffuse + spec;
    gl_Position = MVP * vec4(VertexPosition, 1.0);
}</pre>


##### Share on other sites

But I am confused at what this line does

vec3 ambient = Light.La * Material.Ka   // What does the multiplication operation do? Why couldn't have addition sufficed. What does this visually look like


Those are RGB channel values: ambient light intensity times the material's ambient reflectivity. The multiplication is done per channel, so the reflectivity scales each channel of the ambient light color.
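That per-channel scaling is easy to sketch in Python (the RGB values here are made up for illustration):

```python
La = [1.0, 0.9, 0.8]   # ambient light intensity, one value per RGB channel
Ka = [0.5, 0.5, 0.2]   # fraction of each channel the material reflects back

# Each channel of the light is scaled by the matching channel of Ka.
ambient = [light * reflect for light, reflect in zip(La, Ka)]
# approximately [0.5, 0.45, 0.16] -- the blue channel is dimmed the most
```

A material with a low blue Ka absorbs most of the blue light, which is why the result skews toward the material's own color.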

vec3 tnorm = normalize(NormalMatrix * VertexNormal);    // Don't understand what NormalMatrix * VertexNormal does? What does it look like visually?


Assuming NormalMatrix is the transform intended for the normals, this reorients VertexNormal to match the object's orientation in the scene (again, assuming that is the nature of the matrix).
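Visually, a matrix-times-vector product like that is a rotation (plus possibly scale) of the arrow. A tiny Python sketch, using a made-up 90-degree Z rotation as a stand-in for a real NormalMatrix:

```python
def mat3_mul_vec3(m, v):
    """Multiply a 3x3 matrix (given as rows) by a column vector."""
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

# 90-degree rotation about the Z axis (illustrative stand-in for NormalMatrix).
rot_z_90 = [
    [0.0, -1.0, 0.0],
    [1.0,  0.0, 0.0],
    [0.0,  0.0, 1.0],
]

# A normal pointing along +X ends up pointing along +Y after the rotation.
n = mat3_mul_vec3(rot_z_90, [1.0, 0.0, 0.0])   # [0.0, 1.0, 0.0]
```

Each row of the matrix says where one axis of the old coordinate frame ends up, so the product re-expresses the normal in the rotated frame.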

##### Share on other sites

By learning the underlying graphics theories they implement.

is it normal to write a phong shader program completely from memory without using an equation or an explanation

as i recall, phong is a rather straightforward calculation. once one has the phong formula / theory and the shader syntax memorized, i would not be at all surprised if one could write a basic phong shader from memory. in fact, i'd be rather surprised if one couldn't.

Here's a GLSL implementation of phong, adapted from

https://en.wikipedia.org/wiki/Specular_highlight

#version 440
out vec4 color;
in vec3 frag_pos;
in vec3 normal;
uniform float ambient_intensity;
uniform float specular_intensity;
uniform float specular_factor;   // shininess exponent
uniform vec3 light_pos;
uniform vec3 view_pos;
uniform vec3 light_color;
uniform vec3 object_color;
void main()
{
    /* Calculate the ambient component. */
    vec3 ambient = ambient_intensity * light_color;
    /* Calculate the diffuse component. */
    vec3 norm = normalize(normal);
    vec3 light_direction = normalize(light_pos - frag_pos);
    float diff = max(dot(norm, light_direction), 0.0);
    vec3 diffuse = diff * light_color;
    /* Calculate the specular component: reflect the light about the normal,
       then compare the reflection against the view direction. */
    vec3 view_direction = normalize(view_pos - frag_pos);
    vec3 light_reflection = reflect(-light_direction, norm);
    float spec = pow(max(dot(light_reflection, view_direction), 0.0), specular_factor);
    vec3 specular = specular_intensity * spec * light_color;
    vec3 result = (ambient + diffuse + specular) * object_color;
    color = vec4(result, 1.0);
}



as you can see, it's only about a dozen lines of code, plus variable declarations.

Edited by Norman Barrows

##### Share on other sites

I think I have just what you're looking for.

My YouTube channel covers basically what you're asking, including a detailed walkthrough of Blinn-Phong shading in my HLSL series. I walk through all the math. I find the Blinn-Phong math to be a little ugly. I can tell you off the top of my head the basic idea of how it works, but getting into the specific math, and why it works, requires me to relearn it every time. That's another reason I did the HLSL series: so I could remind myself why it works the way it does when I need to. It's the Phong specular that's the ugly part mathematically. The Gouraud shading is actually super straightforward.

I start off with Vector and Matrix videos and draw a lot of it out so you can visualize it. Then it gets into the HLSL series. The Vector and Matrix videos are not specific to any framework, SDK, or computer language. The HLSL series is obviously HLSL, and you're working in GLSL. I've been meaning to do a video that ties GLSL into the HLSL series; the math is the same in both, but the two languages have pretty different syntax.

Here's my GLSL shader that's almost exactly the same as my HLSL shader that you have by the end of the HLSL series.

#version 450 core
layout (location = 0) in vec3 Pos;
layout (location = 1) in vec2 UV;
layout (location = 2) in vec3 Normal;
layout (location = 3) in vec4 Color;

uniform mat4 WorldMatrix;
uniform mat4 ViewMatrix;
uniform mat4 ProjectionMatrix;

smooth out vec2 TextureCoordinates;
smooth out vec3 VertexNormal;
smooth out vec4 RGBAColor;
smooth out vec4 PositionRelativeToCamera;
out vec3 WorldSpacePosition;

void main()
{
    gl_Position = WorldMatrix * vec4(Pos, 1.0f);     //Apply the object's world matrix.
    WorldSpacePosition = gl_Position.xyz;            //Save the vertex's position in the 3D world. Converted to vec3 because it will be used with other vec3's.
    gl_Position = ViewMatrix * gl_Position;          //Apply the view matrix for the camera.
    PositionRelativeToCamera = gl_Position;
    gl_Position = ProjectionMatrix * gl_Position;    //Apply the projection matrix to project it onto a 2D plane.
    TextureCoordinates = UV;                         //Pass the texture coordinates through to the fragment shader.
    VertexNormal = mat3(WorldMatrix) * Normal;       //Rotate the normal according to how the model is oriented in the 3D world.
    RGBAColor = Color;                               //Pass the color through to the fragment shader.
}

#version 450 core

in vec2 TextureCoordinates;
in vec3 VertexNormal;
in vec4 RGBAColor;
in float FogFactor;
in vec4 PositionRelativeToCamera;
in vec3 WorldSpacePosition;

layout (location = 0) out vec4 OutputColor;

uniform vec4 AmbientLightColor;
uniform vec3 DiffuseLightDirection;
uniform vec4 DiffuseLightColor;
uniform vec3 CameraPosition;
uniform float SpecularPower;
uniform vec4 FogColor;
uniform float FogStartDistance;
uniform float FogMaxDistance;
uniform bool UseTexture;
uniform sampler2D Texture0;

vec4 BlinnSpecular(in vec3 LightDirection, in vec4 LightColor, in vec3 PixelNormal, in vec3 CameraDirection, in float SpecularPower)
{
    vec3 HalfwayNormal;
    vec4 SpecularLight;
    float SpecularHighlightAmount;

    HalfwayNormal = normalize(LightDirection + CameraDirection);    //Blinn's half vector between the light and camera directions.
    SpecularHighlightAmount = pow(clamp(dot(PixelNormal, HalfwayNormal), 0.0, 1.0), SpecularPower);
    SpecularLight = SpecularHighlightAmount * LightColor;

    return SpecularLight;
}

vec4 PhongSpecular(in vec3 LightDirection, in vec4 LightColor, in vec3 PixelNormal, in vec3 CameraDirection, in float SpecularPower)
{
    vec3 ReflectedLightDirection;
    vec4 SpecularLight;
    float SpecularHighlightAmount;

    ReflectedLightDirection = 2.0 * PixelNormal * clamp(dot(PixelNormal, LightDirection), 0.0, 1.0) - LightDirection;
    SpecularHighlightAmount = pow(clamp(dot(ReflectedLightDirection, CameraDirection), 0.0, 1.0), SpecularPower);
    SpecularLight = SpecularHighlightAmount * LightColor;

    return SpecularLight;
}

void main()
{
    vec3 LightDirection;
    float DiffuseLightPercentage;
    vec4 SpecularColor;
    vec3 CameraDirection;    //vec3 because the w component really doesn't belong in a 3D vector normal.
    vec4 AmbientLight;
    vec4 DiffuseLight;
    vec4 InputColor;

    if (UseTexture)
    {
        InputColor = texture(Texture0, TextureCoordinates);
    }
    else
    {
        InputColor = RGBAColor; // vec4(0.0, 0.0, 0.0, 1.0);
    }

    LightDirection = -normalize(DiffuseLightDirection);    //The normal must face into the light, rather than WITH the light, to be lit up.
    DiffuseLightPercentage = max(dot(VertexNormal, LightDirection), 0.0);    //Percentage is based on the angle between the direction of the light and the vertex's normal.
    DiffuseLight = clamp((DiffuseLightColor * InputColor) * DiffuseLightPercentage, 0.0, 1.0);    //Apply only that percentage of the diffuse color. clamp() keeps the output between 0.0 and 1.0 (HLSL's saturate).

    CameraDirection = normalize(CameraPosition - WorldSpacePosition);    //Create a normal that points from the pixel toward the camera.

    if (DiffuseLightPercentage == 0.0f)
    {
        SpecularColor = vec4(0.0f, 0.0f, 0.0f, 1.0f);
    }
    else
    {
        //SpecularColor = BlinnSpecular(LightDirection, DiffuseLightColor, normalize(VertexNormal), CameraDirection, SpecularPower);
        SpecularColor = PhongSpecular(LightDirection, DiffuseLightColor, normalize(VertexNormal), CameraDirection, SpecularPower);
    }

    float FogDensity = 0.01f;
    float LOG2 = 1.442695f;
    float FogFactor = exp2(-FogDensity * FogDensity * PositionRelativeToCamera.z * PositionRelativeToCamera.z * LOG2);
    FogFactor = 1.0 - FogFactor;
    //float FogFactor = clamp((FogMaxDistance - PositionRelativeToCamera.z) / (FogMaxDistance - FogStartDistance), 0.0, 1.0);

    OutputColor = RGBAColor * (AmbientLightColor * InputColor) + DiffuseLight + SpecularColor;
    OutputColor = mix(OutputColor, FogColor, FogFactor);
}

I tried to write these shaders out in a way that makes it pretty self-evident what they're doing, which is pretty uncommon, especially in the Blinn-Phong part.

There are actually two different specular models here: Phong is the original, as I recall, and Blinn came up with the half-vector idea to "improve" on Phong's. That's why it's known as Blinn-Phong; it's basically the same shader with slightly different math. I have both written as functions here, so you can comment out the one you don't want to use.
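The difference between the two is easier to see outside GLSL. A tiny Python sketch of both specular terms, with made-up unit vectors (`n` = pixel normal, `l` = direction to the light, `v` = direction to the camera):

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return [c / length for c in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong_specular(n, l, v, power):
    # Phong: reflect the light direction about the normal, R = 2N(N.L) - L,
    # then raise R.V to the shininess power.
    n_dot_l = max(dot(n, l), 0.0)
    r = [2.0 * n_dot_l * nc - lc for nc, lc in zip(n, l)]
    return max(dot(r, v), 0.0) ** power

def blinn_specular(n, l, v, power):
    # Blinn: build the halfway vector H = normalize(L + V), raise N.H instead.
    h = normalize([lc + vc for lc, vc in zip(l, v)])
    return max(dot(n, h), 0.0) ** power

# Mirror-reflection geometry: light comes in at 45 degrees, camera sits at the
# mirrored 45 degrees, and the surface normal points straight up.
n = [0.0, 1.0, 0.0]
l = normalize([1.0, 1.0, 0.0])
v = normalize([-1.0, 1.0, 0.0])
```

In this mirror setup both models return their maximum highlight; away from it they fall off at slightly different rates, which is why the same shininess exponent looks a bit different under each model.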

Also, I added fog to this GLSL shader, which is something I didn't cover in the HLSL series. There are a couple different ways to do fog mathematically.
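For reference, the two fog formulas in play (the exp2 one the shader uses and the commented-out linear one) can be sketched in Python with the same constants; the sample distances are made up:

```python
import math

def exp2_fog(distance, density=0.01):
    """Exponential-squared fog, mirroring the exp2() line in the shader."""
    LOG2 = 1.442695  # log2(e), so 2^(x * LOG2) equals e^x
    fog = math.pow(2.0, -density * density * distance * distance * LOG2)
    return 1.0 - fog  # 0.0 = no fog, 1.0 = fully fogged

def linear_fog(distance, start, end):
    """Linear fog, matching the commented-out clamp() line."""
    visibility = max(0.0, min(1.0, (end - distance) / (end - start)))
    return 1.0 - visibility

# Fog thickens with distance in both models.
near, far = exp2_fog(10.0), exp2_fog(200.0)
```

The exponential version has no hard start/end distance, just a density knob; the linear version ramps from `start` to `end` and then saturates.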

I've also been meaning to put together a high-level video explaining that you feed the vertex shader a buffer of vertices that define your model, and it outputs the altered vertices to the rasterizer (and to stages I don't use, like geometry and tessellation). At that stage my shader has converted the vertices from the 3D game world to 2D normalized device coordinates that can be drawn on the computer screen, and the next stages actually work in 2D. Rasterization is mostly done for you: it shades the area between every 3 vertices to form triangles that look 3D on the 2D drawing surface. As it does this, it sends every pixel to the fragment/pixel shader and lets you alter the color of that individual pixel; your output there is a single pixel color and nothing more. There is a blend stage after that whose state you can set, but which you otherwise don't really write in your shaders. Then the back buffer is presented to the screen and you see the image. The whole thing is about drawing triangles.

Anyway, the way they usually write out Phong, it's nearly impossible to understand. I've written it out here to be as understandable as possible and it's still pretty ugly:

vec4 PhongSpecular(in vec3 LightDirection, in vec4 LightColor, in vec3 PixelNormal, in vec3 CameraDirection, in float SpecularPower)
{
    vec3 ReflectedLightDirection;
    vec4 SpecularLight;
    float SpecularHighlightAmount;

    ReflectedLightDirection = 2.0 * PixelNormal * clamp(dot(PixelNormal, LightDirection), 0.0, 1.0) - LightDirection;
    SpecularHighlightAmount = pow(clamp(dot(ReflectedLightDirection, CameraDirection), 0.0, 1.0), SpecularPower);
    SpecularLight = SpecularHighlightAmount * LightColor;

    return SpecularLight;
}


You're basically determining the color, and thus the amount of specular light, to apply to each pixel individually. The pixel gets a percentage of the light based on how closely the camera direction aligns with the angle the light reflects off the pixel at. (Imagine the pixel in 3D space, with its normal as the direction it faces.)

vec3 ambient = Light.La * Material.Ka


The code you provided is cut off a bit prematurely and written in a way that's a bit difficult to follow. But it appears you have a light intensity color and a material reflectivity color for the ambient light. Multiplying two colors channel by channel is going to basically give you the weaker of the two: imagine white being all 1's and the other color being 0.4's; multiplying 0.4 by 1.0 gives 0.4. So it blends the two colors in a way that is weighted toward the weaker color. Adding colors tends to blow them out, because the sum would be 1.4 and 1.0 is already white; you can't get any whiter than white. But there are some places where you do add them.
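The multiply-versus-add point is easy to check numerically; a quick Python sketch with made-up colors:

```python
def clamp01(x):
    """Clamp a channel to the displayable [0, 1] range."""
    return max(0.0, min(1.0, x))

white = [1.0, 1.0, 1.0]
grey = [0.4, 0.4, 0.4]

# Multiplying is weighted toward the weaker color and can never overshoot...
multiplied = [a * b for a, b in zip(white, grey)]       # [0.4, 0.4, 0.4]
# ...while adding sums to 1.4 per channel and has to be clamped back to white.
added = [clamp01(a + b) for a, b in zip(white, grey)]   # [1.0, 1.0, 1.0]
```

That clipping is the "blown out" look: once every channel saturates at 1.0, all detail in the brighter regions is gone.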

Ambient light is super straightforward: it's just a color, and you're done. Here they appear to complicate it a bit by giving you some control over the brightness of that color, and it's unclear what they mean by "reflectivity"; there is no reflectivity in Gouraud shading. They seem to be using it to control the brightness of the ambient color. I just specify a color that in theory takes all that into account: when I originally specify the color, I can say how bright I want it to be and then just hand the shader the color. Ambient light is just a color that gets applied to everything; by itself it results in a silhouette, and it doesn't look any more 3D than a shadow.

With this line:

vec3 tnorm = normalize( NormalMatrix * VertexNormal);


This one is confusing again. (Note that in the shader you posted this line is actually in the vertex shader; in a per-pixel version it would live in the fragment shader.) Values passed to the fragment/pixel shader are interpolated across the face of the triangle you're drawing: they are given a weighted average dependent on the distance from each vertex, blending them between the triangle's three vertices. So the fragment shader receives an interpolated "pixel normal" derived from the three vertex normals: a weighted average of the directions the three vertices face, based on the distance to each vertex, which gives you the direction the pixel faces.
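That interpolation can be sketched in Python: weight the three vertex normals by barycentric coordinates (three weights that sum to 1), then renormalize. All the values here are made up:

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return [c / length for c in v]

# Normals of the triangle's three vertices (made up, each unit length).
n0 = [0.0, 0.0, 1.0]
n1 = [1.0, 0.0, 0.0]
n2 = [0.0, 1.0, 0.0]

# Barycentric weights for one pixel inside the triangle; a vertex the pixel
# sits closer to gets a bigger weight, and the three weights sum to 1.
w0, w1, w2 = 0.5, 0.25, 0.25

blended = [w0 * a + w1 * b + w2 * c for a, b, c in zip(n0, n1, n2)]
pixel_normal = normalize(blended)  # interpolation shrinks it, so renormalize
```

The renormalize step matters: averaging unit vectors produces a vector shorter than 1, which is why shaders call `normalize()` on interpolated normals before lighting with them.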

It doesn't explain what a "normal matrix" is. So, I can only guess.

Probably what's going on here is that your vertex normals were defined in your modeling program like Maya, Max, or Blender. They were imported as a file where the normals and vertices are in the positions and directions defined before you exported the model file. They call that "model space". Your world/object matrix modified those vertex positions to place and orient the model in the 3D world. The view matrix further modified them to simulate a camera moving through the scene. And the Projection matrix modified the vertex positions to project them onto the 2D screen as normalized device coordinates for actual drawing. They call that the MVP matrix in your code there. They give you those matrices in different levels of being combined. You can see how I handle that in my shader which is a bit different.

But I believe the "normal matrix" exists because all those matrices moved the vertex positions, not their normals. So the normals still point in the directions they had in your modeling program before you exported the file, which is not helpful at all. You need to reorient them to match how the model was placed in the scene so that they point in the correct directions. For a world matrix that is pure rotation (plus translation), applying the world matrix itself works; in general, the normal matrix is the inverse transpose of the upper-left 3x3 of that matrix, which keeps normals perpendicular to the surface even under non-uniform scaling.
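As an aside, transforming normals with the model/world matrix itself is only safe when that matrix is a pure rotation; with non-uniform scale you need its inverse transpose instead. A tiny 2D Python check with made-up numbers (for a diagonal scale matrix, the inverse transpose is just the reciprocal scale):

```python
import math

def normalize2(v):
    length = math.hypot(v[0], v[1])
    return [c / length for c in v]

def dot2(a, b):
    return a[0] * b[0] + a[1] * b[1]

# A 45-degree surface: tangent (1, -1), normal (1, 1) perpendicular to it.
normal = normalize2([1.0, 1.0])

# Squash Y by 0.5 with the scale matrix S = diag(1, 0.5).
# The squashed surface tangent is S * (1, -1) = (1, -0.5).
tangent_after = [1.0, -0.5]

# Wrong: transform the normal by S itself (it tilts with the squash).
wrong = normalize2([1.0 * normal[0], 0.5 * normal[1]])
# Right: transform by the inverse transpose of S, which is diag(1, 2).
right = normalize2([1.0 * normal[0], 2.0 * normal[1]])

# Only the inverse-transpose version stays perpendicular to the surface.
```

For a pure rotation matrix the inverse transpose is the matrix itself, which is why the shortcut of reusing the world matrix so often appears to work.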

Oh sorry. They call it the ModelViewProjection Matrix rather than the WorldViewProjection Matrix. 6 of one and half a dozen of the other. Model, Object, World, it doesn't matter what you call it; it's the matrix that positions the model in the scene.

Anyway, I suggest going through my HLSL series. It won't hurt you to learn HLSL since some of the books are going to have HLSL instead of GLSL. The math is all the same either way. You can probably skip the first video as it is just talking about the calling code, which will be different from OpenGL anyway. But you can look at the GLSL shader above to compare it to what's in the HLSL video.

Off the top of my head, the primary difference is that HLSL calls them constant buffers, and GLSL's uniforms appear to be basically the same thing. You don't deal with individual registers in GLSL either, which was always the semi-confusing part of HLSL, so that's kind of nice. GLSL calls it a fragment shader; HLSL calls it a pixel shader. The way you tell the shader how your vertex buffer is laid out is slightly different between the two. Other than that, it's mostly the same thing with slightly different syntax. Having the GLSL version of the exact same shader given above should help bring the two together.

I have the entire project and source code in OGL 4.5 here, so you can see the OGL code that calls the shader as well. I haven't had time to comment and clean up that code as much as I would like, but it should still be somewhat self-documenting. On that web page there is a link to a video showing what the program that uses the shader above looks like running.

Edited by BBeck

##### Share on other sites

For learning to visualize linear algebra, you can try to go through this course: http://www.3blue1brown.com/essence-of-linear-algebra/

If you still have specific questions after going through that, feel free to ask again and I'll be happy to take a closer look.

Thanks! That's some pretty good stuff. I put a link to it on my website.

##### Share on other sites

Wow, great resource, Alvaro. Thank you. And thanks for the well-detailed response, BBeck.
