Functions in asymptotic notation

When we use asymptotic notation to express the rate of growth of an algorithm's running time in terms of the input size n, it's good to bear a few things in mind.
Let's start with something easy. Suppose that an algorithm took a constant amount of time, regardless of the input size. For example, if you were given an array that is already sorted into increasing order and you had to find the minimum element, it would take constant time, since the minimum element must be at index 0. Since we like to use a function of n in asymptotic notation, you could say that this algorithm runs in Θ(n^0) time. Why? Because n^0 = 1, and the algorithm's running time is within some constant factor of 1. In practice, we don't write Θ(n^0), however; we write Θ(1).
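As a minimal sketch (the function name and sample array are illustrative, not from the original article), here is what that constant-time operation looks like in Python:

```python
def minimum_of_sorted(arr):
    """Return the minimum of an array sorted in increasing order.

    The amount of work does not depend on len(arr), so this runs in Θ(1) time.
    """
    return arr[0]  # in a sorted array, the smallest element sits at index 0

print(minimum_of_sorted([2, 3, 5, 7, 11]))  # prints 2
```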
Now suppose an algorithm took Θ(log_10 n) time. You could also say that it took Θ(log_2 n) time. Whenever the base of the logarithm is a constant, it doesn't matter what base we use in asymptotic notation. Why not? Because there's a mathematical formula that says

log_a n = (log_b n) / (log_b a)

for all positive numbers a, b, and n. Therefore, if a and b are constants, then log_a n and log_b n differ only by a factor of log_b a, and that's a constant factor which we can ignore in asymptotic notation.
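As a quick numerical check of that constant factor, here is a small Python sketch (the choice of n and of bases 2 and 10 is arbitrary):

```python
import math

n = 1_000_000
print(math.log2(n) / math.log10(n))  # ratio of log_2 n to log_10 n: about 3.3219
print(math.log2(10))                 # log_2 10 is also about 3.3219

# The ratio equals log_2 10 regardless of n, so changing the base only changes
# the running time by a constant factor, which asymptotic notation ignores.
```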
Therefore, we can say that the worst-case running time of binary search is Θ(log_a n) for any positive constant a. Why? The number of guesses is at most log_2 n + 1, generating and testing each guess takes constant time, and setting up and returning take constant time. However, as a matter of practice, we often write that binary search takes Θ(log_2 n) time because computer scientists like to think in powers of 2.
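Here is a minimal Python sketch of binary search that also counts its guesses, to make the log_2 n + 1 bound concrete (the guess counter and the test values are illustrative additions):

```python
def binary_search(arr, target):
    """Search a sorted array; return (index or -1, number of guesses made)."""
    low, high = 0, len(arr) - 1
    guesses = 0
    while low <= high:
        guesses += 1
        mid = (low + high) // 2      # each guess halves the remaining range
        if arr[mid] == target:
            return mid, guesses
        elif arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1, guesses

arr = list(range(1024))          # n = 1024, so log_2 n + 1 = 11
print(binary_search(arr, 9999))  # (-1, 11): an unsuccessful search needs at most 11 guesses
```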
There is an order to the functions that we often see when we analyze algorithms using asymptotic notation. If a and b are constants and a < b, then a running time of Θ(n^a) grows more slowly than a running time of Θ(n^b). For example, a running time of Θ(n), which is Θ(n^1), grows more slowly than a running time of Θ(n^2). The exponents don't have to be integers, either. For example, a running time of Θ(n^2) grows more slowly than a running time of Θ(n^2 √n), which is Θ(n^2.5).
The following graph compares the growth of n, n^2, and n^2.5:
[Graph comparing n, n^2, and n^2.5]
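One way to see the gap widen numerically (a minimal sketch; the sample values of n are arbitrary):

```python
for n in (100, 10_000, 1_000_000):
    # n^2.5 / n^2 = sqrt(n), so the gap between the two keeps growing with n
    print(n, n ** 2, n ** 2.5, n ** 2.5 / n ** 2)
```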
Logarithms grow more slowly than polynomials. That is, Θ(log_2 n) grows more slowly than Θ(n^a) for any positive constant a. But since the value of log_2 n increases as n increases, Θ(log_2 n) grows faster than Θ(1).
The following graph compares the growth of 1, n, and log_2 n:
[Graph comparing 1, log_2 n, and n]
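To see numerically that even a tiny polynomial power eventually overtakes the logarithm, here is a small sketch (the exponent 0.1 and the sample points are arbitrary choices):

```python
import math

for k in (10, 20, 40, 60, 80):
    n = 2 ** k
    print(k, math.log2(n), n ** 0.1)  # log_2 n = k, while n^0.1 = 2^(k/10)

# log_2 n is larger at first, but n^0.1 overtakes it once k/10 exceeds log_2 k
# (around k = 60, i.e. n = 2^60).
```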
Here's a list of functions in asymptotic notation that we often encounter when analyzing algorithms, ordered from slowest to fastest growing:
  1. Θ(1)
  2. Θ(log_2 n)
  3. Θ(n)
  4. Θ(n log_2 n)
  5. Θ(n^2)
  6. Θ(n^2 log_2 n)
  7. Θ(n^3)
  8. Θ(2^n)
  9. Θ(n!)
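As a rough numerical illustration of this ordering, here is a small Python sketch (the sample sizes and the table formatting are arbitrary choices):

```python
import math

print(f"{'n':>4} {'log2 n':>8} {'n log2 n':>10} {'n^2':>8} {'n^3':>10} {'2^n':>10} {'n!':>10}")
for n in (16, 32, 64):
    print(f"{n:>4} {math.log2(n):>8.0f} {n * math.log2(n):>10.0f} "
          f"{n**2:>8} {n**3:>10} {2**n:>10.2e} {math.factorial(n):>10.2e}")

# At these sizes each column is already larger than the one before it,
# reflecting the slowest-to-fastest ordering in the list above.
```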
Note that an exponential function a^n, where a > 1, grows faster than any polynomial function n^b, where b is any constant.
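For example, even n^10 is eventually overtaken by 2^n. This small sketch finds the crossover point (the exponent 10 is just an arbitrary illustration):

```python
# Starting from n = 2 (at n = 1 the exponential is trivially larger),
# find the first n where 2^n overtakes n^10 for good.
n = 2
while n ** 10 >= 2 ** n:
    n += 1
print(n)  # 59: from n = 59 onward, 2^n stays larger than n^10
```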
The list above is not exhaustive; there are many other functions with running times that are not listed there. You'll hopefully run into a few of them on your computer science journey.

This content is a collaboration of Dartmouth Computer Science professors Thomas Cormen and Devin Balkcom, plus the Khan Academy computing curriculum team. The content is licensed CC-BY-NC-SA.
