Algorithm: Big O Notation
Big O notation describes how efficient an algorithm is. Specifically, it describes how the time an algorithm takes to execute grows with the size of its input, conventionally under the worst-case scenario.
Given a sequence of statements:

```
statement 1
statement 2
...
statement k
```

If we assume each statement is a constant-time operation, then the total time for running the above sequence of statements is as follows:

total time = time(statement 1) + time(statement 2) + ... + time(statement k)
When presented with an if-elif-else statement:

```
if (condition 1): block 1 (sequence of statements)
elif (condition 2): block 2 (sequence of statements)
else: block 3 (sequence of statements)
```

the worst-case time is the slowest of the three possibilities:

max(time(block 1), time(block 2), time(block 3))
An algorithm is referred to as constant time, O(1), if it executes in the same amount of time regardless of the input value provided.

```python
## check_int: check if the supplied value is an integer type.
def check_int(value):
    return isinstance(value, int)
```

An algorithm is referred to as linear time, O(n), if the time it takes to execute is directly proportional to the input size.
```python
## print_range: print values within the supplied 'value' range.
def print_range(value):
    for x in range(value):
        print(x)
```

An algorithm is referred to as quadratic time, O(n^2), if the time it takes to execute is directly proportional to the square of the input size.
```python
## print_nested_range: print value pairs within the supplied 'value' range.
def print_nested_range(value):
    for x in range(value):
        for y in range(value):
            print(x, y)
```

Note: in the above example, range(value) is identical for both loops. If we instead had z nested loops over the same range(value), the running time would be O(n^z). However, if the two loops in the above example iterated over different ranges, the running time would be O(N * M).
An algorithm is referred to as exponential time if the time it takes to execute grows exponentially with the input size. Therefore, with each unit increase in input size, the runtime increases by a factor of a, where a > 1.
The following is O(2^n) exponential time:
```python
## fibonacci: return the nth fibonacci number.
def fibonacci(value):
    if value == 0:
        return 0
    elif value == 1:
        return 1
    else:
        return fibonacci(value - 1) + fibonacci(value - 2)
```

An algorithm is referred to as factorial time, O(n!), if the time it takes to execute is directly proportional to the factorial of the input size.
```python
## factorial: return the factorial for the given value, printing each step.
def factorial(value):
    if value < 1:
        return 1
    else:
        returnNumber = value * factorial(value - 1)
        print(str(value) + '! = ' + str(returnNumber))
        return returnNumber
```

Note: factorial time is slower than exponential time. (Strictly speaking, the function above computes n! using only O(n) recursive calls; a classic example of an algorithm that itself runs in factorial time is generating all permutations of n items.)
An algorithm is referred to as logarithmic time, O(log n), if the time it takes to execute is logarithmically proportional to the input size.
A binary search tree consists of a single root node, where each node (including the root) may contain at most two child nodes (left, or right). By definition, the left child node is less than its parent node, while the right child node is greater than or equal to its parent node.
Note: we could arbitrarily restrict the left node to be less than or equal to its parent node. This would mean the right node would be strictly greater than its parent node.
When searching for a value within a binary search tree, the logic is as follows:
Base Case: if the value equals the root node, then stop
if the value is less than the root, traverse to the left child and enter the induction
if the value is greater than or equal to the root, traverse to the right child and enter the induction
Induction: if the value equals the current node, then stop
if the value is less than the current node, traverse to the left child and repeat the induction
if the value is greater than or equal to the current node, traverse to the right child and repeat the induction
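The search logic above can be sketched as follows. The Node class is an assumption introduced here for illustration; the original text does not specify a tree representation.

```python
## Node: a minimal binary search tree node, assumed for this sketch.
class Node:
    def __init__(self, key, left=None, right=None):
        self.key = key
        self.left = left
        self.right = right

## bst_search: return True if 'value' exists in the tree rooted at 'node'.
def bst_search(node, value):
    ## An empty subtree means the value is absent.
    if node is None:
        return False
    ## Base case / induction: the value equals the current node, so stop.
    if value == node.key:
        return True
    ## Less than the current node: traverse the left child.
    if value < node.key:
        return bst_search(node.left, value)
    ## Greater than or equal: traverse the right child.
    return bst_search(node.right, value)
```

Each recursive call descends one level, discarding roughly half of the remaining search space in a balanced tree.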
More specifically, when trying to find a value within a tree, the search space, n, is divided by 2 repeatedly until the value is found. The worst-case scenario is therefore the maximum number of times the search space can be split in half before only one element remains.
Algorithmically, this can be represented as follows:

n / (2^x) = 1

And, can be rewritten as follows:

2^x = n
log(2^x) = log(n)
x log(2) = log(n)
x = log(n) / log(2) = log_2(n)
This means the maximum number of steps required to search a space of n elements is log_2(n). Since log_2(n) and log_10(n) differ only by a constant factor, it is acceptable to generalize binary search tree lookup to O(log(n)).
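As a quick check of this result, the following snippet (not part of the original text) counts how many times a search space of n elements can be halved before a single element remains:

```python
import math

## halving_steps: count how many times a search space of n elements
## can be halved (integer division) until one element remains.
def halving_steps(n):
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

## For powers of two, the count matches log_2(n) exactly; for other
## values it is the floor of log_2(n).
```

For example, a search space of 1024 elements can be halved 10 times, matching log_2(1024) = 10.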