At first glance, Big O notation sounded like a concept reserved for computer science majors, a group that does not include me. However, after much reading, I can decipher the once cryptic symbols. As with many blog posts, I write this selfishly to cement my understanding, but I also write this in hopes that my perspective will help a fellow programmer read the notations without thinking ‘it’s all Greek to me’ (unless you happen to read Greek, then Αυτά μου φαίνονται αλαμπουρνέζικα — "this looks like gibberish to me").

So what is Big O notation? Big O notation measures the time and/or space complexity of an algorithm. Big O refers to the worst case scenario. For example, imagine trying to select a red skittle from a bag without looking. In the best case scenario, you would get a red skittle on the first try. On the other hand, you might not pick a red skittle until all the other colors are gone. For this blog post, I will be focusing on the time complexity aspect of three common notations:

**O(n)**

This is the standard formatting for Big O notation: the O wraps an expression in terms of n, the size of the input, that describes how the running time grows. I think of this specific notation, O(n), as like reading a book or a list in order. Here’s an example of a function with O(n) time complexity:

```
function contains(array, value) {
  for (var i = 0; i < array.length; i++) {
    if (array[i] === value) return true;
  }
  return false;
}
```

It is possible that this function would return true on the first go; however, with Big O, we have to consider the worst case scenario, which is going through every one of the ‘n’ items in the array once. It’s like looking for something in a cluttered drawer and not finding it until it’s the last remaining item. Thus, this function has an O(n) notation.

**O(n²)**

This one is like checking a shopping list with items in your cart. Here’s an example:

```
function hasMatch(shoppingList, cartList) {
  for (let s = 0; s < shoppingList.length; s++) {
    for (let c = 0; c < cartList.length; c++) {
      if (shoppingList[s] === cartList[c]) return true;
    }
  }
  return false;
}
```
```

Imagine having a shopping list with these items: milk, butter, and dark chocolate. You want to check whether you have at least one of the items on your shopping list, so you look at the first item – milk – and then you check to see if milk matches the first item in your cart – oranges. It doesn’t match, so you move on to the next item – carrots. Hm, it still doesn’t match, so you go on to the next item… apples. That was the last item in your cart. Since milk did not match any of the items, you repeat the same process with the next shopping list item – butter. And for some reason, you suffer from selective grocery amnesia, so you try to match butter with the first item in your cart – oranges. As you can see, this process can get repetitive. What’s worse, the shopping list and the cart have completely different items, so you will have to look through the whole cart three times (once for every item on the shopping list).

Often, O(n²) is the brute force solution. This notation is less optimal than O(n), where you’re only going through the items once. You might think O(n²) can be fast if it gets two small arrays with exactly the same items, but since the input is dynamic, we’re concerned about the worst case scenario. This is why Big O uses an abstract variable ‘n’ instead of a concrete number to represent its input.

**O(log n)**

First, let’s establish what a log is. It’s certainly not a part of a wooden cabin. It might smell like something you would find in your old high school calculus textbook:

log₂ 8 = ?

This equation is a logarithm, which is the inverse of an exponent. The logarithm asks: to what power must the base 2 be raised to get 8? The answer is 3, because 2 * 2 * 2 = 8.

log₂ 8 = 3, because 2³ = 8
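If you want to sanity-check a logarithm in code, JavaScript’s built-in `Math.log2` computes base-2 logarithms directly:

```javascript
// Math.log2 answers: to what power must 2 be raised to get this number?
Math.log2(8);  // 3, because 2 * 2 * 2 = 8
Math.log2(16); // 4, because 2 ** 4 === 16
```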

So back to Big O. O(log n) is like finding a word in a dictionary. Let’s say you want to find the word ‘hippo’. You would not start from the beginning and keep reading until you hit the word hippo! No, what you would do instead is go to the ‘h’ section of the dictionary, and try to find the word ‘hippo’. It’s a divide and conquer method. You focused your search on a part of the dictionary. This is essentially what you are doing with O(log n).

You have this array [2, 5, 6, 9, 14, 17, 26] and you are trying to find the number ‘17’. If you know the numbers are in order, you can find the number in O(log n) time by comparing 17 with the number in the middle ‘9’. Since 17 is greater than 9, you ignore all the numbers on the left, and only look at the numbers on the right, giving you [14, 17, 26]. Now, you again compare 17 to the middle number, which in this case is 17. You can now stop looking! Since you do not have to look at every element, you can see how this Big O notation is more optimal than O(n) and O(n²).
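The steps above describe binary search. Here is a minimal sketch of it (the function name `binarySearch` is my own; it assumes the input array is already sorted):

```javascript
// Binary search: repeatedly halve the search range of a sorted array.
function binarySearch(sortedArray, value) {
  let low = 0;
  let high = sortedArray.length - 1;
  while (low <= high) {
    // Compare against the middle element of the current range.
    const mid = Math.floor((low + high) / 2);
    if (sortedArray[mid] === value) return true;
    if (sortedArray[mid] < value) {
      low = mid + 1;  // value can only be in the right half
    } else {
      high = mid - 1; // value can only be in the left half
    }
  }
  return false;
}

binarySearch([2, 5, 6, 9, 14, 17, 26], 17); // true
```

Each pass through the loop throws away half of the remaining numbers, which is why the running time grows with log n rather than n.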

**Conclusion**

See this blog post as a stepping stone to understanding Big O notation, rather than a Rosetta stone that will transform your thinking. I deepened my knowledge of Big O by identifying the notation of algorithms I encountered and asking myself if there was a way I could optimize them. Reading can only get you so far; practice is key.