Is upper division Linear Algebra a good idea or necessary?

Started by
17 comments, last by BitMaster 9 years, 6 months ago

I have a few options for mathematics this spring and I would like your input. I have posted something like this before but now I will be a bit more specific.

I've taken linear algebra and did very well -- it was probably about 30-40% proofs, and the rest was practical problem solving. Now I can take the following:

  • Calc 3 (multivariable)
  • Differential Equations
  • Linear Algebra (upper div with lots of proofs) (last chance to take it also)
  • Number Theory

Which, if any or all, might best serve me for game dev, specifically graphics and/or AI? Is another proof-laden linear algebra class necessary or overkill? Thanks.


They're all useful. Number theory tends to be a little more esoteric, but it's useful for understanding things like the minimum number of bits needed to store (complex) information, or how much accuracy you can rely on maintaining through a series of floating-point operations (and strategies to maximize accuracy by ordering operations differently).

Linear algebra will have the most direct application to the geometry transformations used in rendering and physics. Diff. Eq. and calculus are more in the realm of physics and statistics, with applications in AI and graphics as well.

The more math the better, though. Plan to learn all four if you can; take now what will serve you best and most immediately, or what will leave you in better shape to pursue the others later. If you're in college, there's no rule against taking more classes than you need -- it'll only cost you money and time. Just be careful about keeping your grades up if you intend to take a heavier course load than usual.

throw table_exception("(╯°□°)╯︵ ┻━┻");

I found multivariable calculus to be far more useful than one would initially imagine. I like linear algebra, but the later material tended to be (as you mentioned) more proofs and less applicable to games.

You are not a career. Study what you're most interested in.

Stephen M. Webb
Professional Free Software Developer

I scored a 1 in Linear Algebra in my first semester at MatFyz (which surprised me). Some people here argued with me over the axiom that a vector has an inverse vector which, when the two are multiplied, yields the multiplicative identity vector. They objected that there is no such thing as "multiplication of vectors", missing the point that multiplication and addition are operations to be defined on a space (that is how you obtain a vector space, or fail to).

A vector space over a scalar field is a space where:

- a vector added to a vector is a vector of the same space
- for each vector there exists a unique vector that, when added to it, yields the additive identity (the zero vector)
- for each vector there exists a unique multiplicative inverse vector (which, when multiplied with it, yields the multiplicative identity vector I, so that I*x = x)
- a vector multiplied by a vector is a vector of the space
- ... and a few more axioms

In OpenGL, multiplication of vectors is defined component-wise, i.e. v*v1 = (v.x*v1.x, v.y*v1.y, ...), yielding a vector.

This breaks the associativity of vector/matrix multiplication in the GLSL/HLSL vector space, which I was recently unable to explain, being forced to accept the less general rules of linear algebra.

I have to accept this, though, since linear algebra in the US studies only three-dimensional spaces, while at my school it studied general spaces, so I was forced to settle for a much narrower definition.

Computer graphics is like 70% linear algebra, 20% calculus, 10% misc. But that doesn't mean a course in linear algebra is necessarily useful, partly because a lot of courses and books on linear algebra are !@#$ing terrible. The proofs in particular are completely pointless unless you have aspirations toward a master's degree or PhD in mathematics.

SlimDX | Ventspace Blog | Twitter | Diverse teams make better games. I am currently hiring capable C++ engine developers in Baltimore, MD.

I scored a 1 in Linear Algebra in my first semester at MatFyz (which surprised me). Some people here argued with me over the axiom that a vector has an inverse vector which, when the two are multiplied, yields the multiplicative identity vector. They objected that there is no such thing as "multiplication of vectors", missing the point that multiplication and addition are operations to be defined on a space (that is how you obtain a vector space, or fail to).

A vector space over a scalar field is a space where:

- a vector added to a vector is a vector of the same space
- for each vector there exists a unique vector that, when added to it, yields the additive identity (the zero vector)
- for each vector there exists a unique multiplicative inverse vector (which, when multiplied with it, yields the multiplicative identity vector I, so that I*x = x)
- a vector multiplied by a vector is a vector of the space
- ... and a few more axioms

In OpenGL, multiplication of vectors is defined component-wise, i.e. v*v1 = (v.x*v1.x, v.y*v1.y, ...), yielding a vector.

This breaks the associativity of vector/matrix multiplication in the GLSL/HLSL vector space, which I was recently unable to explain, being forced to accept the less general rules of linear algebra.

I have to accept this, though, since linear algebra in the US studies only three-dimensional spaces, while at my school it studied general spaces, so I was forced to settle for a much narrower definition.


The people who told you "there is no such thing as multiplication of vectors" were correct. The axioms that define what a vector space is talk about sum and difference of vectors and about multiplying a vector by a scalar. You can look it up anywhere.

You are spreading misinformation.
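For reference, the standard vector space axioms for V over a field F (u, v, w vectors; a, b scalars) are usually stated as:

```latex
% Vector space axioms: note there is no product of two vectors anywhere,
% only vector addition and multiplication of a vector by a scalar.
\begin{align*}
& u + (v + w) = (u + v) + w              && \text{associativity of addition} \\
& u + v = v + u                          && \text{commutativity of addition} \\
& \exists\, 0 \in V:\; v + 0 = v         && \text{additive identity} \\
& \forall v\; \exists\, (-v):\; v + (-v) = 0 && \text{additive inverse} \\
& a(bv) = (ab)v                          && \text{compatibility of scalar mult.} \\
& 1v = v                                 && \text{scalar identity} \\
& a(u + v) = au + av                     && \text{distributivity over vector add.} \\
& (a + b)v = av + bv                     && \text{distributivity over scalar add.}
\end{align*}
```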

They're all useful. Number theory tends to be a little more essoteric, but is useful for understanding things like the minimum number of bits needed to store (complex) information, or, in understanding how much accuracy you can rely on maintaining through a series of floating-point operations (and strategies to maximize accuracy by ordering operations differently).


The exact course contents vary between colleges/universities, but usually number theory is about integers, e.g., prime numbers and such. At my university, a course dealing with floating-point approximations of real numbers and the like is called Numerical Analysis.

The people who told you "there is no such thing as multiplication of vectors" were correct. The axioms that define what a vector space is talk about sum and difference of vectors and about multiplying a vector by a scalar. You can look it up anywhere.

I am not word-picking; I will rephrase: if one defines multiplication of vectors on a vector space, it will stop being a vector space if the multiplication operation breaks the axioms mentioned.

If it is so that vectors do not multiply and there is no rule for doing so, then why were we all the time speaking about matrix multiplication being associative, about a vector being a special case of a matrix, and distinguishing column and row vectors and their actual dimensions? Those relations exist only if multiplication of vectors is defined.

But I insist that in linear algebra generally, there are axioms about the multiplication operation if multiplication gets introduced onto a vector space, which must be fulfilled by the new space and operations for it to remain a vector space.

In OpenGL, the operation is a component-wise multiply. Though it makes no sense for geometric purposes, it yields a vector space where column and row vectors are the same, implying a certain impropriety in talking about transposing a matrix, since a matrix effectively transposes depending on its order in the multiply operation. (That is impossible in standard linear algebra because of the NxM dimension requirement when multiplying matrices/vectors, which also implies vectors are nx1 or 1xm. But that algebra has its own definition of the operation, and OpenGL differs from it: it will multiply vectors without their dimensions being introduced. Thus OpenGL has a vector space with different properties, where multiplication is not associative, but it is a vector space.)

If it is so that vectors do not multiply and there is no rule for doing so, then why were we all the time speaking about matrix multiplication being associative, about a vector being a special case of a matrix, and distinguishing column and row vectors and their actual dimensions? Those relations exist only if multiplication of vectors is defined.


I can tell you that. In modern mathematics (after Grothendieck, I believe), you don't just study objects like vector spaces: You also study the morphisms between them that preserve the structure. In the case of vector spaces, the corresponding morphisms are linear mappings(*). When we talk about matrix multiplication being associative and all those things you mention, we are looking at matrices as representing linear mappings between vector spaces. A column vector represents a vector, and a row vector represents a co-vector, which is a linear mapping from the vector space to the field of scalars. The whole thing makes perfect sense without introducing a mechanism to multiply vectors.


(*)- Other examples are sets and functions, topological spaces and continuous functions, differentiable manifolds and smooth functions, and groups and group morphisms (they are just called that).

This topic is closed to new replies.
