# Analysis of merge sort

Given that the `merge` function runs in $\Theta(n)$ time when merging $n$ elements, how do we get to showing that `mergeSort` runs in $\Theta(n \lg n)$ time? We start by thinking about the three parts of divide-and-conquer and how to account for their running times. We assume that we're sorting a total of $n$ elements in the entire array.

- The divide step takes constant time, regardless of the subarray size. After all, the divide step just computes the midpoint $q$ of the indices $p$ and $r$. Recall that in big-Θ notation, we indicate constant time by $\Theta(1)$.
- The conquer step, where we recursively sort two subarrays of approximately $n/2$ elements each, takes some amount of time, but we'll account for that time when we consider the subproblems.
- The combine step merges a total of $n$ elements, taking $\Theta(n)$ time.
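The three steps above can be sketched in code. Here's a minimal Python sketch (not the article's exact implementation, but it has the same divide/conquer/combine structure, with the two copied halves playing the role of `lowHalf` and `highHalf`):

```python
def merge_sort(arr, p, r):
    """Sort arr[p..r] in place using merge sort."""
    if p < r:                      # base case: a subarray of size 1 is sorted
        q = (p + r) // 2           # divide: compute the midpoint, Theta(1)
        merge_sort(arr, p, q)      # conquer: recursively sort the low half
        merge_sort(arr, q + 1, r)  # conquer: recursively sort the high half
        merge(arr, p, q, r)        # combine: merge the halves, Theta(n)

def merge(arr, p, q, r):
    """Merge the sorted subarrays arr[p..q] and arr[q+1..r]."""
    low_half = arr[p:q + 1]        # copy out the two sorted halves
    high_half = arr[q + 1:r + 1]
    i = j = 0
    k = p
    while i < len(low_half) and j < len(high_half):
        if low_half[i] <= high_half[j]:
            arr[k] = low_half[i]; i += 1
        else:
            arr[k] = high_half[j]; j += 1
        k += 1
    # copy any leftovers (at most one of these loops does work)
    while i < len(low_half):
        arr[k] = low_half[i]; i += 1; k += 1
    while j < len(high_half):
        arr[k] = high_half[j]; j += 1; k += 1

a = [14, 7, 3, 12, 9, 11, 6, 2]
merge_sort(a, 0, len(a) - 1)
print(a)  # [2, 3, 6, 7, 9, 11, 12, 14]
```

Note that each call to `merge` walks over every element of its subarray a constant number of times, which is where the $\Theta(n)$ merging cost comes from.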

If we think about the divide and combine steps together, the $\Theta(1)$ running time for the divide step is a low-order term when compared with the $\Theta(n)$ running time of the combine step. So let's think of the divide and combine steps together as taking $\Theta(n)$ time. To make things more concrete, let's say that the divide and combine steps together take $cn$ time for some constant $c$.

To keep things reasonably simple, let's assume that if $n > 1$, then $n$ is always even, so that when we need to think about $n/2$, it's an integer. (Accounting for the case in which $n$ is odd doesn't change the result in terms of big-Θ notation.) So now we can think of the running time of `mergeSort` on an $n$-element subarray as being the sum of twice the running time of `mergeSort` on an $(n/2)$-element subarray (for the conquer step) plus $cn$ (for the divide and combine steps, really for just the merging).

Now we have to figure out the running time of the two recursive calls on $n/2$ elements. Each of these two recursive calls takes twice the running time of `mergeSort` on an $(n/4)$-element subarray (because we have to halve $n/2$) plus $cn/2$ to merge. We have two subproblems of size $n/2$, and each takes $cn/2$ time to merge, and so the total time we spend merging for subproblems of size $n/2$ is $2 \cdot cn/2 = cn$.

Let's draw out the merging times in a "tree":

Computer scientists draw trees upside-down from how actual trees grow. A **tree** is a graph with no cycles (paths that start and end at the same place). Convention is to call the vertices in a tree its **nodes**. The **root** node is on top; here, the root is labeled $n$ for the subarray size of the original $n$-element array. Below the root are two **child** nodes, each labeled $n/2$ to represent the subarray sizes for the two subproblems of size $n/2$.

Each of the subproblems of size $n/2$ recursively sorts two subarrays of size $(n/2)/2$, or $n/4$. Because there are two subproblems of size $n/2$, there are four subproblems of size $n/4$. Each of these four subproblems merges a total of $n/4$ elements, and so the merging time for each of the four subproblems is $cn/4$. Summed up over the four subproblems, we see that the total merging time for all subproblems of size $n/4$ is $4 \cdot cn/4 = cn$.

What do you think would happen for the subproblems of size $n/8$? There will be eight of them, and the merging time for each will be $cn/8$, for a total merging time of $8 \cdot cn/8 = cn$.

As the subproblems get smaller, the number of subproblems doubles at each "level" of the recursion, but the merging time halves. The doubling and halving cancel each other out, and so the total merging time is $cn$ at each level of recursion. Eventually, we get down to subproblems of size 1: the base case. We have to spend $\Theta(1)$ time to sort subarrays of size 1, because we have to test whether $p < r$, and this test takes $\Theta(1)$ time. How many subarrays of size 1 are there? Since we started with $n$ elements, there must be $n$ of them. Since each base case takes $\Theta(1)$ time, let's say that altogether, the base cases take $cn$ time.
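We can check the doubling-and-halving cancellation empirically by instrumenting a merge sort to tally how many elements are merged at each recursion level. This sketch uses Python's built-in `sorted` in place of an explicit merge, just to keep the tally code short; the count of elements touched per level is the same either way:

```python
from collections import defaultdict

def merge_sort_counting(arr, p, r, depth=0, merged=None):
    """Sort arr[p..r], tallying how many elements get merged at each
    recursion level (a sketch to check the cn-per-level claim)."""
    if merged is None:
        merged = defaultdict(int)
    if p < r:
        q = (p + r) // 2
        merge_sort_counting(arr, p, q, depth + 1, merged)
        merge_sort_counting(arr, q + 1, r, depth + 1, merged)
        merged[depth] += r - p + 1   # this merge touches r - p + 1 elements
        # stand-in for the actual merge: the halves are already sorted
        arr[p:r + 1] = sorted(arr[p:r + 1])
    return merged

a = list(range(16, 0, -1))           # n = 16 elements, reverse sorted
per_level = merge_sort_counting(a, 0, len(a) - 1)
print(dict(per_level))  # {3: 16, 2: 16, 1: 16, 0: 16}
```

Every level that performs merging (depths 0 through 3 here; depth 4 holds the size-1 base cases) touches exactly $n = 16$ elements in total, matching the $cn$-per-level claim.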

Now we know how long merging takes for each subproblem size. The total time for `mergeSort` is the sum of the merging times for all the levels. If there are $l$ levels in the tree, then the total merging time is $l \cdot cn$. So what is $l$? We start with subproblems of size $n$ and repeatedly halve until we get down to subproblems of size 1. We saw this characteristic when we analyzed binary search, and the answer is $l = \lg n + 1$. For example, if $n = 8$, then $\lg n + 1 = 4$, and sure enough, the tree has four levels: $n = 8, 4, 2, 1$. The total time for `mergeSort`, then, is $cn(\lg n + 1)$. When we use big-Θ notation to describe this running time, we can discard the low-order term ($cn$) and the constant coefficient ($c$), giving us a running time of $\Theta(n \lg n)$, as promised.

One other thing about merge sort is worth noting. During merging, it makes a copy of the entire array being sorted, with one half in `lowHalf` and the other half in `highHalf`. Because it copies more than a constant number of elements at some time, we say that merge sort does not work **in place**. By contrast, both selection sort and insertion sort do work in place, since they never make a copy of more than a constant number of array elements at any one time. Computer scientists like to consider whether an algorithm works in place, because there are some systems where space is at a premium, and thus in-place algorithms are preferred.

This content is a collaboration of Dartmouth Computer Science professors Thomas Cormen and Devin Balkcom, plus the Khan Academy computing curriculum team. The content is licensed CC-BY-NC-SA.