# Algorithms

There are many algorithms that can accomplish the same task, but depending on the situation, some are more efficient than others. To analyze efficiency, we first introduce basic complexity analysis.

## Complexity analysis

There are many different mathematical tools one can use to analyze an algorithm. Below we describe the most prevalent: Big-O notation.

### Big-O notation

When analyzing a function, often we only care about the asymptotic behavior and growth rate. This is the premise behind Big-O notation. Formally, for two positive, monotonically increasing functions $$f$$ and $$g$$, we say that $$f(n) = O(g(n))$$ if there exist positive constants $$L$$ and $$M$$ such that for all $$n\geq L$$, we have $$f(n) \leq M\cdot g(n)$$. In other words, this means $$g$$ grows at least as fast asymptotically as $$f$$.

For example, $$2n^2 + 6\log n = O(n^2)$$. Although we lose a lot of information this way, it makes complex algorithms easier to analyze, and allows us to make statements such as "doubling the size of the input will roughly quadruple the run time." This does not mean that short-term and medium-term behavior should be neglected, especially if small or moderate inputs are the primary use case of your algorithm.
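The example above can be checked numerically with explicit witness constants. One valid choice (an illustration, not the only one) is $$M = 3$$ and $$L = 3$$, since $$6\log n \leq n^2$$ from $$n = 3$$ onward:

```python
import math

def f(n):
    # The example function f(n) = 2n^2 + 6 log n
    return 2 * n**2 + 6 * math.log(n)

def g(n):
    # The bounding function g(n) = n^2
    return n**2

# Witness constants for f(n) = O(g(n)): with M = 3 and L = 3,
# f(n) <= M * g(n) holds for all n >= L (checked here up to 10,000).
M, L = 3, 3
for n in range(L, 10_000):
    assert f(n) <= M * g(n)

# Since f is dominated by its n^2 term, doubling the input size
# roughly quadruples the value for large n.
assert abs(f(1_000_000) / f(500_000) - 4) < 1e-6
```

The same pattern works for any claimed Big-O bound: exhibit $$M$$ and $$L$$, then verify the inequality for $$n \geq L$$.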

There are many properties of Big-O notation that are easy to prove and useful for analysis. Some notable properties are provided below.

• $$O(cf(n)) = O(f(n))$$ for all $$c>0$$
• $$f_1(n) = O(g_1(n))$$ and $$f_2(n) = O(g_2(n))$$ imply
    • $$f_1(n) + f_2(n) = O(g_1(n) + g_2(n))$$
    • $$f_1(n)\times f_2(n) = O(g_1(n)\times g_2(n))$$
• $$n^p = O(n^q)$$ for all $$q\ge p$$
• $$\log n = O(n^\epsilon)$$ for all $$\epsilon > 0$$
• $$n^p = O(a^n)$$ for all $$p \ge 0$$ and $$a > 1$$
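The property $$\log n = O(n^\epsilon)$$ can look surprising for small $$\epsilon$$, but it is easy to observe numerically: the ratio $$\log n / n^\epsilon$$ shrinks toward zero as $$n$$ grows. A quick sketch with $$\epsilon = 0.5$$:

```python
import math

# For any eps > 0, log(n) / n**eps tends to 0 as n grows,
# which is what log n = O(n^eps) captures. Shown with eps = 0.5.
eps = 0.5
ratios = [math.log(n) / n**eps for n in (10**2, 10**4, 10**6, 10**8, 10**10)]

# The ratios are strictly decreasing and become very small.
assert all(a > b for a, b in zip(ratios, ratios[1:]))
assert ratios[-1] < 1e-3
```

Smaller values of $$\epsilon$$ just push the crossover point further out; the asymptotic statement is unchanged.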

Usually, when we write $$f(n) = O(g(n))$$, the function $$g$$ is chosen to be as tight and compact as possible, and is therefore usually a simple monomial expression.

Big-O notation is used to estimate how run time and memory usage grow with respect to the size of the input, counting the number of basic operations performed. These include arithmetic operations on both floating-point numbers and integers, bitwise operations, and memory reads/writes.
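As a sketch of what counting basic operations looks like in practice, the toy functions below (illustrative, not a standard library API) count the comparisons made by linear and binary search on a sorted list; in the worst case the counts grow like $$n$$ and $$\log n$$ respectively:

```python
def linear_search(arr, target):
    """Return (index, comparison count); index is -1 if absent."""
    comparisons = 0
    for i, x in enumerate(arr):
        comparisons += 1
        if x == target:
            return i, comparisons
    return -1, comparisons

def binary_search(arr, target):
    """Binary search on a sorted list, also counting comparisons."""
    comparisons = 0
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1
        if arr[mid] == target:
            return mid, comparisons
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, comparisons

arr = list(range(1_000_000))
_, c_lin = linear_search(arr, -1)  # worst case: target absent
_, c_bin = binary_search(arr, -1)
assert c_lin == 1_000_000          # about n comparisons: O(n)
assert c_bin <= 21                 # about log2(n) comparisons: O(log n)
```

Here only comparisons are counted; a fuller accounting would also tally the index arithmetic, but since each loop iteration performs a constant number of such operations, the Big-O bounds are unaffected.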

### Amortized analysis

Even within the realm of Big-O notation, there are several ways to analyze runtime complexity when an algorithm's efficiency depends on its current state. One is to consider the average case, but it can sometimes be difficult to define a meaningful notion of probability over use cases. Another is to consider worst-case efficiency, in which the algorithm is analyzed under the worst possible state; this makes analysis more straightforward, but sometimes provides an overly conservative estimate.

The generally accepted solution to this is amortization. The analysis is still done over the worst case, but looks at performing the algorithm repeatedly and sequentially, then analyzing its efficiency in aggregate. In this way, the numerous "good" states where the algorithm is efficient make up for the rare "bad" cases.

For example, consider an array implementation of a list of integers. The capacity of the internal array is fixed until more room is needed, in which case its contents are copied to a new array of doubled size. In most cases this does not occur, and adding an element to the list can be done in constant time. The "bad" cases take a linear amount of time, due to copying the entire contents of the array. Consider adding $$k$$ integers to the list from scratch, where for simplicity $$k$$ is a power of two. The total number of operations includes $$k$$ writes to the array, $$k$$ updates to the list size, and at most $$k$$ copies in total across all of the doublings, since $$1 + 2 + 4 + \cdots + k/2 < k$$. Therefore, the amortized cost of inserting a new integer is at most $$3k/k = O(1)$$.
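The doubling argument can be made concrete with a small instrumented class (a toy sketch; Python's built-in `list` already uses a similar over-allocation strategy internally):

```python
class DynamicArray:
    """Toy doubling array that counts write and copy operations."""

    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.data = [None] * self.capacity
        self.ops = 0  # total basic operations performed

    def append(self, value):
        if self.size == self.capacity:
            # "Bad" case: copy everything into an array of doubled size.
            new_data = [None] * (2 * self.capacity)
            for i in range(self.size):
                new_data[i] = self.data[i]
                self.ops += 1          # one copy per existing element
            self.data = new_data
            self.capacity *= 2
        self.data[self.size] = value   # write the new element
        self.size += 1                 # update the list size
        self.ops += 2

k = 2**16
arr = DynamicArray()
for v in range(k):
    arr.append(v)

# Copies across all doublings total 1 + 2 + 4 + ... + k/2 = k - 1,
# so the grand total is 2k + (k - 1) = 3k - 1 operations: the
# amortized cost per append is O(1).
assert arr.ops == 3 * k - 1
```

Note that the doubling is essential: growing the array by a fixed increment instead would make the total number of copies quadratic in $$k$$, losing the constant amortized cost.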