Nicholas Kong

How to analyze run time complexity in code in a trivial way

10 posts in this topic

How can I tell, in a trivial way, whether an algorithm's run-time is one of the four common types: quadratic, linear, constant, or logarithmic, based on the code itself and the definitions of the four terms?

 

I did some research on the definitions of the four types of run-time, but I am putting them in my own words, so hopefully my phrasing is correct:

1) An algorithm's run-time is constant if the run-time is the same no matter the size of the input.

I have a question about this one: if the run-time is constant, why is it written O(1)? Why choose 1 of all numbers?

2) An algorithm's run-time is linear if the run-time grows in direct proportion to the input size.

3) An algorithm's run-time is logarithmic if the run-time grows in proportion to the logarithm of the input size. I have a question about this too: are we taking the log to base 10, or is the base arbitrary?

4) An algorithm's run-time is quadratic if the run-time grows in proportion to the square of the input size.
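To make the four definitions concrete, here is a minimal C++ sketch of the loop shapes that typically produce each growth rate (an illustration added for this writeup, not code from the thread):

#include <cstdio>

void constant(int n)    { std::printf("%d\n", n); }            // O(1): no loop, same work for any n
void linear(int n)      { for (int i = 0; i < n; ++i) {} }     // O(n): one pass over the input
void logarithmic(int n) { for (int i = n; i > 1; i /= 2) {} }  // O(log n): remaining work halves each step
void quadratic(int n)   { for (int i = 0; i < n; ++i)          // O(n^2): a full pass for each element
                              for (int j = 0; j < n; ++j) {} }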

 

On a side note, what is the difference between lg and log?

 

Here's an example:

lg lg n = _(lg lg(n^3))  <- I think the example is taking the log of the log of n.

What should n be assumed to be in order to evaluate the statement in a trivial way?

 

On a side note, I'm not familiar with this notation 'lg'. What does lg mean?

 

Example:

n * lg n = _(n * lg lg n)


"a trivial way"

The only way I see is to change the input set and check the run-time performance. The standard way would be to analyse the algorithm on paper; it is not without reason that this is part of the theoretical branch of computer science :)
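A minimal sketch of that empirical approach, assuming the algorithm can be instrumented with an operation counter (count_ops and its quadratic body are hypothetical stand-ins): double the input size and watch how the count grows.

#include <cstdio>

// Hypothetical algorithm under test; returns how many basic operations it performed.
long long count_ops(int n) {
    long long ops = 0;
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            ++ops;                       // the work being counted
    return ops;
}

int main() {
    // The ratio of consecutive counts when n doubles hints at the growth rate:
    // ~1x -> constant, ~2x -> linear, ~4x -> quadratic; logarithmic grows by a constant amount.
    for (int n = 1000; n <= 16000; n *= 2)
        std::printf("n = %5d  ops = %lld\n", n, count_ops(n));
    return 0;
}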


No loops and no recursive calls = constant.

One loop without recursive calls = linear.

Others usually need deeper analysis, but most of the time it's not hard.

 

  1. O is some constant, so O(1) means constant*1;
  2. Yes, but caches and other things might make it slower than linear when you compare different amounts of data;
  3. It doesn't matter; the point is that it's between constant and linear. It also means that the more data you add, the smaller the difference in time will be (assuming just log n); see the binary-search sketch below.

lg is the logarithm with base 10; ln is with base e.
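To illustrate the logarithmic case, here is a minimal binary-search sketch (an added example, not code from the thread): each iteration halves the remaining range, so a sorted array of n elements needs about log2(n) steps.

#include <vector>

// Binary search over a sorted vector: O(log n), because the
// search range is halved on every iteration.
int binary_search(const std::vector<int>& v, int target) {
    int lo = 0, hi = (int)v.size() - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (v[mid] == target) return mid;
        if (v[mid] < target)  lo = mid + 1;   // discard the lower half
        else                  hi = mid - 1;   // discard the upper half
    }
    return -1; // not found
}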


"O is some constant, so O(1) means constant*1;"

O doesn't mean "some constant". It means "any term of a lower order" (there is a formal definition involving limits at infinity). So if you have an algorithm whose running time is (2n)^2 + n log(n), the asymptotic complexity is O(n^2).
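For reference, the usual textbook formulation of that definition (added here for completeness; the post itself only alludes to it) can be written in LaTeX as:

f(n) = O(g(n)) \iff \exists\, c > 0,\ n_0 \ge 0 \ \text{such that}\ 0 \le f(n) \le c \cdot g(n) \ \text{for all}\ n \ge n_0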

 

We use O(1) by convention; since 1 is independent of the problem size n, you could choose any nonzero finite number and get the same asymptotic complexity as per the definition. 1 just seemed the most meaningful choice. If another number had been chosen, people would be asking "why O(3)?", so...


"On a side note, what is the difference between lg and log?"

Different subjects regularly deal with different bases, and so people tend to be lazy and invent short notation.

 

'log' written on its own should be assumed to be base 10 if a base isn't specified, 'ln' is to the base e, and 'lg' is, I think, commonly base 2, but that notation is fairly informal (as I understand it).


Where I live, we learned that ln is base e, lg is base 10, and ld is base 2. lg being (mis-)used as base 2 is new to me; inside O() I just assumed people used whatever logarithm let them write one character less. This is what I found while searching for it:

http://mathworld.wolfram.com/BinaryLogarithm.html

http://en.wikipedia.org/wiki/Binary_logarithm

 

There is not just the O notation; there are related notions like o, Theta, big Omega, and small omega, which are less popular although they can give other clues about complexity. http://en.wikipedia.org/wiki/O_notation

It can also be frustrating to find two algorithms advertised at the same complexity when one is secretly much slower: the people who calculated the complexity must have had more exact data to prove it, only to throw half of it away, leaving everyone else to speculate about which is better. I'd very much like to know whether one is n^2 - 100*n*log n and the other 10*(n^2 + 1000*n*log n), or whether both are n log n on average, with one being best case n and worst case n log n while the other is best case n log n and worst case n^2.


I would think the 1 comes from n^0 = 1. Any other number would be substitutable, but arbitrary. As for evaluating, you could also walk through a small data set and count how many times it loops: basically, how many times does the innermost loop run in the worst-case scenario? Then compare that number with your n. Though I am not very well versed in Big O, so there may be a reason why this would not work.


For a very large number of the algorithms you'll end up writing in everyday work, it's pretty easy: it tends to be simply a matter of counting up the number of times a loop iterates, possibly accounting for nested loops.
 
O(n):

for (int i = 0; i < n; ++i)
{
    ...
}

 
O(n^2):

for (int i = 0; i < n; ++i)
{
    for (int j = 0; j < n; ++j)
    {
        ...
    }
}

 
The same method extends to any function calls made within the loop bodies. Say, for instance:

for (int i = 0; i < n; ++i)
{
    for (int j = 0; j < n; ++j)
    {
        f(j, n); // f() is O(n).
    }
}

The function call is basically another nested loop, hence O(n^3).

But this kind of analysis really only works for very simple algorithms that do simple iteration. Once you get into recursion, you need to start doing more math to arrive at asymptotically tight analyses. For some illustrative examples, take a look at some of the material here: http://www.cs.uiuc.edu/~jeffe/teaching/algorithms/

 

In particular:

 

http://www.cs.uiuc.edu/~jeffe/teaching/algorithms/notes/99-recurrences.pdf

http://www.cs.uiuc.edu/~jeffe/teaching/algorithms/notes/98-induction.pdf

 

http://www.cs.uiuc.edu/~jeffe/teaching/algorithms/notes/00-intro.pdf

http://www.cs.uiuc.edu/~jeffe/teaching/algorithms/notes/05-dynprog.pdf

http://www.cs.uiuc.edu/~jeffe/teaching/algorithms/notes/06-sparsedynprog.pdf

http://www.cs.uiuc.edu/~jeffe/teaching/algorithms/notes/14-amortize.pdf

http://www.cs.uiuc.edu/~jeffe/teaching/algorithms/notes/16-unionfind.pdf

http://www.cs.uiuc.edu/~jeffe/teaching/algorithms/notes/27-lowerbounds.pdf

 

The first two links are particularly important and relevant; the rest just show you the analyses in action. I would highly recommend knowing how to solve recurrences: you cannot hope to analyze anything but the simplest algorithms without that skill.
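As a small illustration of why recursion calls for recurrences (an added sketch in the spirit of the linked notes, not taken from them): a divide-and-conquer routine that recurses on both halves and does linear work to combine them satisfies T(n) = 2T(n/2) + O(n), which solves to O(n log n).

#include <cstdio>
#include <vector>

// Merge-sort-shaped skeleton that counts its own work (illustrative only).
// Recurrence: T(n) = 2 T(n/2) + O(n)  =>  T(n) = O(n log n).
long long solve(const std::vector<int>& v, int lo, int hi) {
    if (hi - lo <= 1) return 1;                 // base case: O(1)
    int mid = lo + (hi - lo) / 2;
    long long work = solve(v, lo, mid)          // T(n/2)
                   + solve(v, mid, hi);         // T(n/2)
    for (int i = lo; i < hi; ++i) ++work;       // combine step: O(n)
    return work;
}

int main() {
    std::vector<int> v(1024);
    std::printf("work = %lld\n", solve(v, 0, (int)v.size()));
    return 0;
}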


"On a side note, what is the difference between lg and log?"

Asymptotically, none. Logs to any base are big theta of each other.
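The reason is the change-of-base identity: any two logarithms differ only by a constant factor, which asymptotic notation absorbs. In LaTeX form:

\log_b n = \frac{\ln n}{\ln b} = \frac{1}{\ln b} \cdot \ln n \quad\Longrightarrow\quad \log_b n = \Theta(\log_c n) \ \text{for any bases}\ b, c > 1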

"It can also be frustrating to find two algorithms advertised at the same complexity when one is secretly much slower: the people who calculated the complexity must have had more exact data to prove it, only to throw half of it away, leaving everyone else to speculate about which is better. I'd very much like to know whether one is n^2 - 100*n*log n and the other 10*(n^2 + 1000*n*log n), or whether both are n log n on average, with one being best case n and worst case n log n while the other is best case n log n and worst case n^2."

But, this.

