*This blog post is part of a series as I research how to move from Automation Development to being a Software Developer in Test. These next few posts will deal with algorithms.*

What is the difference? ... Well, the main thing is that as an automated tester, I haven't needed to worry about Big-O notation and sorting algorithms, such as the ones found on Eric Drowell's Big-O Cheat Sheet.

## Big-O Complexity Chart

*(The chart image from the cheat sheet is not reproduced here; its color legend ranks complexities from `Excellent` through `Good`, `Fair`, and `Bad` to `Horrible`.)*

## Array Sorting Algorithms

Algorithm | Time (Best) | Time (Average) | Time (Worst) | Space (Worst)
---|---|---|---|---
Quicksort | `O(n log(n))` | `O(n log(n))` | `O(n^2)` | `O(log(n))`
Mergesort | `O(n log(n))` | `O(n log(n))` | `O(n log(n))` | `O(n)`
Timsort | `O(n)` | `O(n log(n))` | `O(n log(n))` | `O(n)`
Heapsort | `O(n log(n))` | `O(n log(n))` | `O(n log(n))` | `O(1)`
Bubble Sort | `O(n)` | `O(n^2)` | `O(n^2)` | `O(1)`
Insertion Sort | `O(n)` | `O(n^2)` | `O(n^2)` | `O(1)`
Selection Sort | `O(n^2)` | `O(n^2)` | `O(n^2)` | `O(1)`
Tree Sort | `O(n log(n))` | `O(n log(n))` | `O(n^2)` | `O(n)`
Shell Sort | `O(n log(n))` | `O(n(log(n))^2)` | `O(n(log(n))^2)` | `O(1)`
Bucket Sort | `O(n+k)` | `O(n+k)` | `O(n^2)` | `O(n)`
Radix Sort | `O(nk)` | `O(nk)` | `O(nk)` | `O(n+k)`
Counting Sort | `O(n+k)` | `O(n+k)` | `O(n+k)` | `O(k)`
Cubesort | `O(n)` | `O(n log(n))` | `O(n log(n))` | `O(n)`

*The Big-O Cheat Sheet by Eric Drowell*

In this blog post, we are going to focus on walking through the sorting algorithms **Insertion Sort**, **Mergesort**, and **Quicksort**, and implementing them in Java code.

First things first, though...
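Before hand-rolling anything, it's worth noting that Java's standard library already provides sorting: `Arrays.sort` uses a dual-pivot Quicksort for primitive arrays and Timsort for object arrays. These built-ins make a handy baseline to check our own implementations against. A minimal sketch:

```java
import java.util.Arrays;

public class SortBaseline {
    public static void main(String[] args) {
        int[] primitives = {5, 2, 9, 1, 7};
        Arrays.sort(primitives);               // dual-pivot Quicksort for primitives
        System.out.println(Arrays.toString(primitives)); // [1, 2, 5, 7, 9]

        Integer[] boxed = {5, 2, 9, 1, 7};
        Arrays.sort(boxed);                    // Timsort for object arrays
        System.out.println(Arrays.toString(boxed));      // [1, 2, 5, 7, 9]
    }
}
```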

## What is Big O Notation?

Big O notation is how computer scientists compare and contrast the efficiency of sorting and searching algorithms, the tried-and-true recipes used to write programs that search through massive amounts of data.

The concept of Big O notation was introduced and then popularized by German mathematicians: first by Paul Gustav Bachmann, in his book on analytic number theory, *Analytische Zahlentheorie*, and then by Edmund Landau.

The "O" symbol used in the book was originally an omicron, the 15th letter of the Greek alphabet, used in a formula to denote the rate of growth of a function.

"Big O notation (with a capital letter O, not a zero), also called Landau's symbol, is a symbolism used in complexity theory, computer science, and mathematics to describe the asymptotic behavior of functions. Basically, it tells you how fast a function grows or declines.

"Landau's symbol comes from the name of the German number theoretician Edmund Landau who invented the notation [...]

"For example, when analyzing some algorithm, one might find that the time (or the number of steps) it takes to complete a problem of size n is given by T(n) = 4n^2 - 2n + 2.

"If we ignore constants (which makes sense because those depend on the particular hardware the program is run on) and slower growing terms, we could say 'T(n) grows at the order of n^2' and write: T(n) = O(n^2)." - MIT 16.070, Introduction to Computers and Programming.
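To see where a step-count formula like T(n) = 4n^2 - 2n + 2 comes from, here is a hypothetical step-counting sketch (the exact constants depend on what you treat as one "step"; the point is that the n^2 term dominates, so doubling the input roughly quadruples the work):

```java
public class StepCount {
    // counts iterations of a doubly-nested loop: exactly n * n basic steps
    static long steps(int n) {
        long count = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                count++; // one basic operation per inner iteration
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // doubling n quadruples the work: the hallmark of O(n^2)
        System.out.println(steps(10));  // 100
        System.out.println(steps(20));  // 400
    }
}
```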

Here is a list of functions, from slow growing to fast growing. If you are dealing with massive amounts of data, you want to match the most effective searching or sorting algorithm with its Big O notation.

- **Constant growth**: O(1)
- **Logarithmic growth**: O(log n)
- **Linear growth**: O(n)
- **Quadratic growth**: O(n^2)
- **Polynomial growth**: O(n^c)
- **Exponential growth**: O(c^n)

## What Was That About Complexity?

From **The Idiot's Guide to the Big O** at http://www.corejavainterviewquestions.com/idiots-guide-big-o/:

"O(1)/Constant Complexity: Constant. This means irrelevant of the size of the data set the algorithm will always take a constant time. 1 item takes 1 second, 10 items takes 1 second, 100 items takes 1 second. It always takes the same amount of time.

"O(log n)/Logarithmic Complexity: Not as good as constant, but still pretty good. The time taken increases with the size of the data set, but not proportionately so. This means the algorithm takes longer per item on smaller datasets relative to larger ones. 1 item takes 1 second, 10 items takes 2 seconds, 100 items takes 3 seconds. If your dataset has 10 items, each item causes 0.2 seconds latency. If your dataset has 100, it only takes 0.03 seconds extra per item. This makes log n algorithms very scalable.
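Binary search is the classic O(log n) algorithm: each comparison halves the remaining range, so even a million-element array needs at most about 20 comparisons. A minimal sketch (note the array must already be sorted):

```java
public class BinarySearchDemo {
    // returns the index of target in a sorted array, or -1 if absent;
    // each loop iteration halves the search range, giving O(log n) steps
    static int binarySearch(int[] sorted, int target) {
        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;     // avoids int overflow vs (lo + hi) / 2
            if (sorted[mid] == target) return mid;
            if (sorted[mid] < target) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] data = {1, 3, 5, 7, 9, 11};
        System.out.println(binarySearch(data, 7));   // 3
        System.out.println(binarySearch(data, 4));   // -1
    }
}
```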

"O(n)/Linear Complexity: The larger the data set, the time taken grows proportionately. 1 item takes 1 second, 10 items takes 10 seconds, 100 items takes 100 seconds.
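A linear scan over an unsorted array is the textbook O(n) case: in the worst case every element is examined exactly once. A sketch:

```java
public class LinearSearchDemo {
    // returns the index of target, or -1; examines each element at most once (O(n))
    static int linearSearch(int[] items, int target) {
        for (int i = 0; i < items.length; i++) {
            if (items[i] == target) return i;
        }
        return -1; // worst case: every element was checked
    }

    public static void main(String[] args) {
        int[] data = {8, 3, 9, 1};
        System.out.println(linearSearch(data, 9));  // 2
        System.out.println(linearSearch(data, 4));  // -1
    }
}
```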

"O(n log n): A nice combination of the previous two. Normally there's 2 parts to the sort, the first loop is O(n), the second is O(log n), combining to form O(n log n). 1 item takes 2 seconds, 10 items takes 12 seconds, 100 items takes 103 seconds.
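That "two parts" intuition can be made concrete with a toy work counter: roughly log2(n) passes (like the merge levels of a merge sort), each pass touching all n items. This is a hypothetical illustration of the growth pattern, not a real sort:

```java
public class NLogNDemo {
    // counts "work units" for an O(n log n) pattern:
    // the outer loop runs ~log2(n) times (span doubles each pass),
    // and each pass does n units of work
    static long nLogNWork(int n) {
        long work = 0;
        for (int span = 1; span < n; span *= 2) {  // ~log2(n) passes
            for (int i = 0; i < n; i++) {
                work++;                             // O(n) work per pass
            }
        }
        return work;
    }

    public static void main(String[] args) {
        System.out.println(nLogNWork(8));   // 24  (3 passes x 8 items = 8 * log2(8))
        System.out.println(nLogNWork(16));  // 64  (4 passes x 16 items)
    }
}
```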

"O(n^2)/Quadratic Complexity: Things are getting extra slow. 1 item takes 1 second, 10 items takes 100, 100 items takes 10000.
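O(n^2) typically shows up when every element must be compared against every other element, which is exactly the shape of the nested loops inside the simple sorts. A sketch that counts the pairwise comparisons:

```java
public class QuadraticDemo {
    // counts all ordered pairs (i, j) with i != j: exactly n * (n - 1) comparisons
    static long comparePairs(int[] items) {
        long comparisons = 0;
        for (int i = 0; i < items.length; i++) {
            for (int j = 0; j < items.length; j++) {
                if (i != j) comparisons++; // one comparison per ordered pair
            }
        }
        return comparisons;
    }

    public static void main(String[] args) {
        // a 10x larger input does ~100x the work
        System.out.println(comparePairs(new int[10]));   // 90
        System.out.println(comparePairs(new int[100]));  // 9900
    }
}
```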

"O(2^n): Exponential Growth! The algorithm takes twice as long for every new element added. 1 item takes 1 second, 10 items takes 1024 seconds, 100 items takes 1267650600228229401496703205376 seconds.
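That doubling is easy to reproduce: enumerating every subset of an n-element set takes 2^n steps, because each element independently is either in or out. A sketch (matching the quote's 10 items -> 1024):

```java
public class ExponentialDemo {
    // counts the subsets of an n-element set recursively:
    // each new element doubles the count (it is either included or excluded)
    static long countSubsets(int n) {
        if (n == 0) return 1;              // only the empty set
        return 2 * countSubsets(n - 1);    // include or exclude the nth element
    }

    public static void main(String[] args) {
        System.out.println(countSubsets(1));   // 2
        System.out.println(countSubsets(10));  // 1024
    }
}
```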

"It is important to notice that the above is not ordered by the best to worst complexity. There is no 'best' algorithm, as it completely hinges on the size of the dataset and the task at hand".

## Visualizing Growth

If you are more of a visual person (like me), maybe this chart will help. It is taken from Richard Meyr's lecture on **Discrete Mathematics**.