Date Added: October 14, 2022
Date Added: October 14, 2022
The history of human civilization holds several gems from various cultures that shaped our world, from the decimal system and printing to algorithms. An algorithm's efficiency determines how many computational resources it uses and how long it takes to generate the intended output; an algorithm is considered efficient if it uses fewer resources and computes the result in less time. Before the invention of computers, algorithms served the same purpose: to make mathematical calculations easier, even for large numbers. The algorithmic methods developed in the Arab world reached Europe through Fibonacci, whose Fibonacci numbers remain a classic example of recursion in data structures, finding applications in architecture, economics, the arts, biology, demography, and many other fields.
Before algorithms were invented, it was difficult to perform mathematical operations on even small expressions. Similarly, before the advent of data structures and the implementation of algorithms on modern computational devices, the precise values of higher Fibonacci numbers were out of reach. Even today, with an exponential algorithm, finding higher Fibonacci numbers such as F100 or F200 can take years. Since the running time of fib1(n) grows like 2^(0.694n) ≈ (1.6)^n, computing F(n+1) requires about 1.6 times as much time as computing F(n). The NEC Earth Simulator, once the world's fastest computer, clocks in at 40 trillion steps per second; even on this machine, fib1(200) would require at least 4.95 × 10^27 seconds. If the calculation began today, it would still be running long after the sun transforms into a red giant. And even if computers keep doubling in speed every 1.75 years, that hardware progress buys only about one additional Fibonacci number per year.
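The exponential algorithm referred to above is the naive recursive definition. A minimal sketch (the name fib1 follows the text; the implementation details are an assumption):

```python
def fib1(n):
    """Naive recursion: running time grows roughly like (1.6)^n,
    so each successive Fibonacci number costs ~1.6x more time."""
    if n == 0:
        return 0
    if n == 1:
        return 1
    # The two recursive calls repeat enormous amounts of work:
    # fib1(n - 2) is recomputed again inside fib1(n - 1).
    return fib1(n - 1) + fib1(n - 2)

print(fib1(10))  # prints 55; fib1(200) would never finish
```

The redundancy is the whole problem: the same subproblems are solved over and over, which is what makes fib1(200) hopeless on any hardware.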
This is where a polynomial algorithm comes into play. Its inner loop performs a single computing step and runs n times, so fib2 uses a number of steps that is linear in n. The transition from exponential to polynomial running time is a significant advance: it now makes perfect sense to compute F200, or even F200,000. The running time of an algorithm is the number of steps it takes to reach its conclusion, but counting one arithmetic operation as one step is not entirely honest. The nth Fibonacci number is about 0.694n bits long, and as n grows, these values outgrow the standard 32-bit word; a computer cannot perform arithmetic on arbitrarily large numbers in a single, constant-time step. So while reducing the number of steps shrinks the running time dramatically, the assumption that every step takes the same amount of time is untrue. Even so, this algorithm must be counted as a breakthrough.
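The linear-time algorithm keeps only the last two values and fills the sequence forward. A sketch of fib2 as described (the exact implementation is an assumption):

```python
def fib2(n):
    """Iterative version: the loop body does a constant amount of
    work and runs n - 1 times, so the step count is linear in n."""
    if n == 0:
        return 0
    prev, curr = 0, 1  # F(0) and F(1)
    for _ in range(n - 1):
        prev, curr = curr, prev + curr  # advance one position
    return curr

print(fib2(200))  # returns instantly, unlike fib1(200)
```

Note that Python integers are arbitrary-precision, which hides the 32-bit word-size issue raised above; the cost shows up instead as additions on progressively longer numbers, illustrating the point that not all "single steps" take the same amount of time.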
Oversimplifying the processing time of each step can introduce inaccuracy on multiple fronts, and big-O notation exists to keep these simplifications honest when comparing algorithms. The alternative, accounting for the exact dependencies of each machine's architecture, is a nightmarishly complex exercise that does not produce results applicable to all computers. It therefore makes more sense to seek a clean, machine-independent assessment of an algorithm's effectiveness. To do this, we measure running time as a function of the input size by counting fundamental computer operations, and big-O notation helps us concentrate on what matters: a complicated running-time function is replaced with O(f(n)), where f(n) is as simple as possible.
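One way to see this machine-independent measure in action is to instrument the iterative algorithm with an explicit operation counter, taking one loop iteration as the fundamental operation (the counter granularity and the helper name fib2_steps are assumptions for illustration):

```python
def fib2_steps(n):
    """Return (F(n), steps), counting one step per loop iteration.
    The count is exactly n - 1 for n >= 1: linear, i.e. O(n)."""
    if n == 0:
        return 0, 0
    prev, curr = 0, 1
    steps = 0
    for _ in range(n - 1):
        prev, curr = curr, prev + curr
        steps += 1
    return curr, steps

# Doubling n doubles the step count -- the signature of O(n):
for n in (100, 200, 400):
    _, steps = fib2_steps(n)
    print(n, steps)
```

The count is independent of clock speed, word size, or compiler, which is exactly why this style of analysis transfers across machines; big-O then discards the "- 1" and any constant factors, leaving just O(n).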