## Sorting

> Selection Sort

> Quicksort

## Searching

> Sequential Search

> Binary Search

## Geometric

> Package Wrapping

> Graham's Scan

## Root Finding

> Bisection Method

> Newton's Method

# Selection Sort

### Brute Force

Selection sort is a simple sorting algorithm that works by selecting the smallest unsorted element on each iteration and placing it in the correct location.


Selection Sort is a very simple algorithm, as it works the way people generally sort objects: find the smallest value and put it at the front, then the next smallest, and so on. It is also an in-place algorithm, meaning its space complexity (or memory requirement) is only *O(1)*. It minimises the number of swaps required to sort the array (at most *n* - 1) compared to more advanced sorting methods, which may be advantageous where swapping is costly.

Despite these advantages, Selection Sort is a very slow algorithm, with *O(n^{2})* performance in its best, worst and average cases. Insertion Sort should generally be preferred where a simple in-place sorting algorithm is required.
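The procedure above can be sketched in Python (the function name and in-place style are illustrative):

```python
def selection_sort(arr):
    """Sort arr in place by repeatedly selecting the smallest unsorted element."""
    n = len(arr)
    for i in range(n - 1):
        # Find the index of the smallest element in the unsorted suffix.
        smallest = i
        for j in range(i + 1, n):
            if arr[j] < arr[smallest]:
                smallest = j
        # A single swap per pass moves it to the front of the unsorted region.
        if smallest != i:
            arr[i], arr[smallest] = arr[smallest], arr[i]
    return arr
```

Note that each pass performs at most one swap, which is where the low swap count comes from.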

# Quicksort

### Divide-and-conquer

Quicksort is an efficient divide-and-conquer sorting algorithm. It works by partitioning the array around a pivot and recursively sorting the resulting two sub-arrays.


Quicksort was developed in 1959 and is still commonly used today, as it is significantly faster than rudimentary sorts such as insertion and selection sort.

In its best and average cases quicksort is very fast, performing in *O(n log n)*. The best case occurs when each chosen pivot is the median of its sub-array, so every partition is perfectly balanced.

The main disadvantage of quicksort is its very poor worst case performance of *O(n^{2})*. This occurs when each partition is as unbalanced as possible. It can be mitigated by choosing a better pivot, such as comparing the leftmost, rightmost and central elements in the array and choosing the median of them as the pivot (the median-of-three rule).

Quicksort is not a stable sort, as the relative order of elements with the same value is not preserved.
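A Python sketch using the median-of-three pivot mitigation described above, with a Hoare-style partition (both choices are illustrative; many quicksort variants exist):

```python
def quicksort(arr, lo=0, hi=None):
    """Sort arr in place; median-of-three pivot, Hoare-style partition."""
    if hi is None:
        hi = len(arr) - 1
    if lo >= hi:
        return arr
    # Median-of-three: order arr[lo], arr[mid], arr[hi] so arr[mid] is their median.
    mid = (lo + hi) // 2
    if arr[mid] < arr[lo]:
        arr[lo], arr[mid] = arr[mid], arr[lo]
    if arr[hi] < arr[lo]:
        arr[lo], arr[hi] = arr[hi], arr[lo]
    if arr[hi] < arr[mid]:
        arr[mid], arr[hi] = arr[hi], arr[mid]
    pivot = arr[mid]
    # Partition: advance the two indices toward each other, swapping
    # elements that are on the wrong side of the pivot.
    i, j = lo, hi
    while i <= j:
        while arr[i] < pivot:
            i += 1
        while arr[j] > pivot:
            j -= 1
        if i <= j:
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
            j -= 1
    # Recursively sort the two sub-arrays either side of the partition.
    quicksort(arr, lo, j)
    quicksort(arr, i, hi)
    return arr
```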

# Sequential Search

### Brute Force

Sequential Search is a brute force algorithm for finding an element in an array. It sequentially checks each element until a match is found.


Sequential search, or linear search, is the simplest searching algorithm to understand and implement. It is useful mainly for unsorted or very small arrays; in general there are many more efficient searching algorithms.

Best case performance is *O(1)*, where the first element in the array is the target value (obviously a rare occurrence). The performance in the average and worst cases is *O(n)*.
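The algorithm is a single pass over the array; a minimal Python sketch:

```python
def sequential_search(arr, target):
    """Return the index of target in arr, or -1 if it is not present."""
    # Check each element in turn until a match is found.
    for i, value in enumerate(arr):
        if value == target:
            return i
    return -1
```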

# Binary Search

### Divide-and-conquer

Binary Search is an efficient algorithm for finding an element in a sorted array. It works by repeatedly dividing the array in half and checking the middle element of the sub-array that could contain the element.


Binary Search is an intuitive algorithm that people generally use to search through a sorted list (e.g. finding a word in a dictionary). It is a very efficient method for fixed arrays; however, the overhead of keeping the array sorted when inserting new data can significantly lower overall efficiency.

Best case performance is *O(1)*, where the target value is in the middle of the array (the first element checked). The performance in average and worst cases is *O(log n)*.

Although hashing is a faster algorithm for searching with *O(1)* performance in the majority of cases, it does not support approximate matches such as the next smallest or largest values. Binary search is ideal for such cases.
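A Python sketch of the halving procedure, assuming the input list is already sorted:

```python
def binary_search(arr, target):
    """Return the index of target in the sorted list arr, or -1 if absent."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            # Target can only be in the upper half.
            lo = mid + 1
        else:
            # Target can only be in the lower half.
            hi = mid - 1
    return -1
```

The same loop can be adapted for approximate matches: when the search fails, `hi` and `lo` bracket where the target would be, giving the next smallest and next largest values.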

# Package Wrapping Algorithm

### Brute Force

Package Wrapping is a brute force algorithm for computing the convex hull of a set of points. It works by starting with the lowest point, and iteratively finding the next point on the hull by minimum anti-clockwise angle.


The package wrapping algorithm (also called the gift wrapping algorithm, or Jarvis march when used in two dimensions) has an average and worst case performance of *O(n^{2})*, occurring when every point (or a large number of points in the average case) lies on the convex hull (shown in the *Large Hull* run), making it an output-sensitive algorithm.

Runtime complexity can also be defined as *O(nh)*, where *h* is the number of points on the hull. In the best case (*h*=3), the algorithm performs in *O(n)* time.
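A two-dimensional Python sketch, assuming the points are in general position (collinear and duplicate points are not handled specially):

```python
def cross(o, a, b):
    """Z-component of (a - o) x (b - o); positive when o->a->b turns anti-clockwise."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def jarvis_march(points):
    """Return the convex hull of a list of (x, y) tuples in anti-clockwise order."""
    if len(points) < 3:
        return list(points)
    # The leftmost point is guaranteed to be on the hull.
    start = min(points)
    hull, p = [], start
    while True:
        hull.append(p)
        # Keep the candidate q such that no other point lies clockwise of p->q.
        q = points[0] if points[0] != p else points[1]
        for r in points:
            if cross(p, q, r) < 0:
                q = r
        p = q
        if p == start:  # wrapped all the way around
            break
    return hull
```

Each trip around the `while` loop finds one hull point in *O(n)* time, giving the *O(nh)* bound above.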

# Graham's Scan

### Backtracking

Graham's Scan is an efficient algorithm for computing the convex hull of a set of points. It works by sorting the points by angle from the lowest point, adding each to the hull, and backtracking if previously added elements cannot be on the hull.


Graham's scan is an efficient method for computing the convex hull, with a time complexity of *O(n log n)* in all cases (dominated by the initial sort).

Unlike the package wrapping algorithm, it requires the set of points to be sorted in increasing order of their anti-clockwise angle to the anchor point (the point with the lowest y value).

Graham's scan then adds each point to the hull, backtracking to remove any points that cannot be points on the convex hull.
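A Python sketch of the sort-then-backtrack procedure; collinear and duplicate points are not handled specially:

```python
from math import atan2

def cross(o, a, b):
    """Z-component of (a - o) x (b - o); positive when o->a->b turns anti-clockwise."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def graham_scan(points):
    """Return the convex hull in anti-clockwise order, starting at the anchor."""
    # Anchor: lowest y value (leftmost on ties); guaranteed to be on the hull.
    anchor = min(points, key=lambda p: (p[1], p[0]))
    # Sort the remaining points by anti-clockwise angle from the anchor.
    rest = sorted((p for p in points if p != anchor),
                  key=lambda p: atan2(p[1] - anchor[1], p[0] - anchor[0]))
    hull = [anchor]
    for p in rest:
        # Backtrack: pop points that would make the hull turn clockwise.
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()
        hull.append(p)
    return hull
```

Every point is pushed and popped at most once, so the scan itself is *O(n)*; the sort dominates.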

# Bisection Method for Root Finding

### Greedy

The Bisection Method is an algorithm for finding a root of a given function. It works by choosing two initial points whose function values have opposite signs, giving an interval that contains a root. The interval is repeatedly cut in half until a stopping criterion is met (a root is found).

Example function: *f(x) = x^{3} - 3x + 1*

The bisection method relies on the Intermediate Value Theorem: if the two initial function values have opposite signs, the graph lies above the x-axis at one endpoint and below it at the other, so (assuming *f* is continuous) a root must lie somewhere between them.

The algorithm has three stopping criteria: *x0* and *x1* are very close; for a new midpoint *m*, f(*m*) is close to zero; or the maximum number of iterations has been reached. As directly comparing real numbers for equality is not reliable in floating-point arithmetic, a tolerance function (*close()*) is used.

The performance of the bisection method is usually described by its rate of convergence rather than Big-O: if the error in the current estimate of the root is *h*, one iteration reduces it to *h*/2 (linear convergence).
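A Python sketch of the method, using the example function above; the tolerance comparisons play the role of the *close()* function:

```python
def bisect_root(f, x0, x1, tol=1e-9, max_iter=100):
    """Find a root of f in [x0, x1]; f(x0) and f(x1) must have opposite signs."""
    if f(x0) * f(x1) > 0:
        raise ValueError("f(x0) and f(x1) must have opposite signs")
    for _ in range(max_iter):
        m = (x0 + x1) / 2
        fm = f(m)
        # Stopping criteria: residual near zero, or interval within tolerance.
        if abs(fm) < tol or abs(x1 - x0) / 2 < tol:
            return m
        # Keep the half-interval whose endpoint values still straddle the root.
        if f(x0) * fm < 0:
            x1 = m
        else:
            x0 = m
    return (x0 + x1) / 2
```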

# Newton's Method for Root Finding

### Greedy

Newton's Method is an efficient algorithm for root finding that starts with an approximation for the root, and iteratively calculates a new approximation for the root using the tangent at the current point.

Example function: *f(x) = x^{3} - 3x + 1*

Newton's method uses differentiation to converge much faster towards a root than the bisection method.

The performance of Newton's method is likewise described by its rate of convergence: if the error in the current estimate is *h*, one iteration reduces it to the order of *h^{2}* (quadratic convergence), significantly faster than the bisection method.

However, for every iteration of Newton's method f' must be evaluated, a costly calculation that is absent in the bisection method. Also, if f'(x)=0 the method fails, as the horizontal tangent will never intersect the x-axis. This can mostly be mitigated by carefully choosing initial values to avoid flat spots, runaways and cycles.
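The iteration *x ← x − f(x)/f'(x)* can be sketched in Python, assuming the derivative is supplied explicitly:

```python
def newton_root(f, df, x, tol=1e-9, max_iter=50):
    """Iterate x <- x - f(x)/df(x) until f(x) is near zero."""
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        d = df(x)
        if d == 0:
            # Horizontal tangent: no x-intercept, the method fails.
            raise ZeroDivisionError("f'(x) is zero; choose a different start")
        # Move to where the tangent at (x, f(x)) crosses the x-axis.
        x = x - fx / d
    return x
```

Starting from *x* = 0.5 on the example function above, a handful of iterations suffice, versus roughly 30 halvings for the bisection method at the same tolerance.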