How is Discrete Math used in video games?


Hi everyone, I am a college student working my way toward becoming a game programmer. I am taking my first computer science course (C++ focused), Calc 1, and Discrete Math 1. I am just looking for insight into how discrete math is used in making the algorithms for video games. I understand how calculus is used, just not discrete math.


Surprise, surprise, but calculus is not actually used anywhere in computer programming(1), because it is based on the concept of a continuous function over the real (or complex) domain.  Digital computers are capable only of performing discrete math, which is sort of the equivalent of calculus over the integer (or rational) domain.

Discrete math covers pretty much everything you do in a video game, from the convolution of a compute kernel over a manifold (the general term for things like projecting vertices and shading surfaces in 3D shooters, or the currently trendy "artificial intelligence" of edge detection and eigenvalue computation used in self-driving cars) to integrating the components of a fast Fourier transform (as when playing digital music).  Computer programming is 100% discrete math, even the physics.

The more discrete math you learn, the better a programmer you will be, especially in 10 or 20 years when novel applications of decades-old or millennia-old discrete math concepts are the hot new trend.  Trust me, I graduated more than 30 years ago and I have lived this advice more than once.

(1) A lot of calculus is used in the theoretical underpinnings of much of modern programming.  It's very much worth knowing and understanding, especially if you want to extend current capabilities.  Convolving a compute kernel over a manifold is theoretically just applying a system of ordinary differential equations, except over a discrete domain instead of a continuous one.


First of all, it's very possible different people have different ideas of what "discrete math" means. I wouldn't call any of Bregma's examples discrete math: they are mostly examples of linear algebra or calculus.

I wouldn't worry too much about the immediate applicability of any math you learn to any field. You learn math so that when you encounter new situations you can reason about them and solve your problems. There might be specific content in a math class that you can use directly in video games, but if you studied only that, you wouldn't have a diverse enough library of examples to be able to face new situations.

27 minutes ago, alvaro said:

First of all, it's very possible different people have different ideas of what "discrete math" means.

Of course.

The dictionary definition of "discrete" is "individually separate and distinct" which when combined with the term "mathematics" refers to that broad category of logical reasoning about anything that is not a continuous function.  So, technically, "discrete math" refers to all of mathematics disjoint from the calculus of continuous functions.  It's a very broad and vague term, so of course people will use it to mean what they like and not be incorrect.

When I was in university, the courses titled "discrete math" were about either linear programming with the simplex method, linear algebra, or in one case inductive reasoning used to prove program correctness (I think the latter involved a professor who ignored the departmental syllabus and taught what he was interested in, but I'll never know... rather wretched class as I recall).

The most important thing you will learn by studying any math is not the actual topic (which could also be vitally important, make no mistake) but rather the mental discipline required for problem solving.  Being able to lay out the logical progression and showing your work for a mathematical proof is very much the same as writing a computer program.

12 hours ago, Bregma said:

Surprise surprise but calculus is not actually used anywhere in computer programming

I use calculus regularly. Not daily, but often enough. I work with coworkers who try to avoid it, but really it isn't that hard.

Any time you're working with change over time you have the option to use it.  Sadly, the vast majority of gameplay programmers will accumulate values over time: every update they compute how much things changed over the frame's time slice and accumulate the tiny result.  Many times it is far better, in both compute time and numerical error, to do the math before coding the function and compute the solution directly.

Most of us remember doing the exercise as a student where we had to compute and tabulate a bunch of individual slices, then compare them to the directly computed value. I know I hated wasting the time and effort on all the tiny little slices, and I assume most others did as well. Yet we have no difficulty telling the computer to grind through those numbers even when a direct solution is readily available.

Similarly we want things to flow smoothly, but instead of taking a few seconds on paper to figure out C2 or C3 continuity, or use a sigmoidal function that can be easily tuned by designers for a smooth continuous flow, many programmers who only think in iterative terms will instead turn to an arbitrary scalar value, perhaps "Just multiply by 0.95 every update, and stop when motion is less than 0.001".

Learn the math. Use it when people talk about rates of change, or how things flow, slide, or animate.  Accumulation is the source of many subtle unnecessary defects.


Hoo boy, I know this is a topic on discrete math, but I just wanted to chime in and say that I use calculus all the time in programming. Any type of signal processing is going to need calculus. Fourier transforms in particular are extremely useful for, say, virtual reality or EEG programming. Dead-reckoning positional tracking is essentially all calculus. Any time you have anything that is sample based over time, you're going to be using calculus. This applies to both EE and general comp sci.

3 hours ago, The Perfect K said:

Any time you have anything that is sample based over time, you're going to be using calculus.

Funny thing: if you're processing discrete samples over time, you're using discrete math.  If you're doing dead reckoning on a computer, you're using discrete math.  Digital computers are not capable of doing anything except discrete math.  In fact, it's the only thing they're capable of doing.

It's great that you understand the theory of what's going on, because calculus is really useful to describe it.  No argument.  The problem is that if you don't understand the discrete math you're actually using to implement these things, you can run into all kinds of problems.  For example, the accumulation of error in your dead reckoning: such a thing doesn't exist in the theoretical world of calculus, with its infinite-precision real numbers and continuous functions, but in the real world of discrete math it can add up quickly until it overwhelms the signal if you take any of the more common naive implementations.

Knowing and understanding the difference between theory (calculus) and application (programming) is important.  You need to study both if you want to be a programmer, and know when each is relevant.

2 hours ago, Bregma said:

Funny thing, that if you're processing discrete samples over time, you're using discrete math.  If you're doing dead reckoning on a computer, you're using discrete math.

Heh, I do suppose that's true, and much to the sub-point at the end of your original post. Funny thing, though: the very first bit of calculus I learned was pretty much this, how to transform to discrete values in order to solve equations by hand (or with a calculator). It blurs the line, in my mind at least, of where calculus ends and discrete math begins.

But hey, at least through this back and forth we gave the OP another example of how discrete computation is important in computer programming.

