Fri Apr 29 2022
Asymptotic analysis provides the mathematical foundation for describing an algorithm's running-time performance. If an algorithm takes no input, it will always run in constant time. In short, asymptotic analysis expresses the running time of a process or algorithm in mathematical terms.
As we saw in the analysing-the-algorithm article, there are two types of analysis:
Empirical analysis - Run the Algorithm and measure how much processor time and storage are needed.
Theoretical analysis - Mathematically computing how much time and space are needed as a function of input size.
We have also seen that empirical analysis is often impractical: you would have to obtain different hardware and software, run the algorithm on each setup, and record the running time and storage it consumes.
So the process of buying new hardware and re-running the algorithm under different software is a tedious task.
The running time of an algorithm depends on two factors:
Input size
Nature of input
Generally, the running time of an algorithm grows with the input size; for example, sorting 100 numbers will take less time than sorting $$10^{10}$$ numbers. So the running time of an algorithm is usually measured as a function of the input size.
Instead of measuring the actual time required to execute each statement in the code, we consider how many times each statement is executed. So the theoretical complexity of an algorithm is measured in terms of the number of steps/primitive operations performed.
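As a minimal sketch of this idea (the `sumWithCount` helper is a hypothetical name, not from the article), a loop can report its own operation count instead of being timed with a clock:

```javascript
// Count how many times the loop body executes while summing an array,
// rather than measuring wall-clock time.
function sumWithCount(array) {
  let operations = 0; // number of primitive steps performed
  let sum = 0;
  for (let i = 0; i < array.length; i++) {
    sum += array[i];
    operations++; // one step per element
  }
  return { sum, operations };
}

console.log(sumWithCount([1, 2, 3, 4])); // → { sum: 10, operations: 4 }
```

The operation count equals the input size here, so this loop performs n steps for n elements regardless of which machine runs it.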
Let's see what a problem and its instance are.
The instance of the problem consists of the input needed to compute the solution to the problem.
For example:
Problem :- Multiply two numbers 8 and 9.
Instance :- 8 & 9
The instance size is any integer (generally n) that in some way measures the number of components in an instance.
For example:
Sorting problem - Instance size will be the number of elements to be sorted
Graph problem - Instance size will be the number of nodes/edges in the graph.
A linear search or sequential search is a method for finding an element within a list. It sequentially checks each element of the list until a match is found or the whole list has been searched.
Consider the below array of numbers; you need to find 41 among them.
To do so, you go through the positions of the array one by one, and stop the iteration once you find 41.
Algorithm
function linearSearch(array, toFind) {
  for (var i = 0; i < array.length; i++) {
    if (array[i] == toFind) {
      // return the position of the found number
      return i;
    }
  }
  // the number was not found
  return -1;
}
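A quick usage sketch of the search (the sample array values are illustrative, not the ones from the article's figure):

```javascript
// Sequentially scan the array; return the index of the match, or -1.
function linearSearch(array, toFind) {
  for (let i = 0; i < array.length; i++) {
    if (array[i] === toFind) {
      return i; // position of the found number
    }
  }
  return -1; // the number was not found
}

const numbers = [45, 13, 8, 41, 90]; // illustrative sample array
console.log(linearSearch(numbers, 41)); // → 3
console.log(linearSearch(numbers, 7)); // → -1
```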
There are three cases of algorithmic complexity:
Best Case - Minimum number of comparisons required
Average Case - Average number of comparisons required
Worst Case - Maximum number of comparisons required
Best Case | Average Case | Worst Case |
---|---|---|
Resource use is minimum | Resource use is average | Resource use is maximum |
Algorithm's behaviour under optimal conditions | Algorithm's behaviour under random conditions | Algorithm's behaviour under the worst conditions |
Minimum number of steps required | Average number of steps required | Maximum number of steps required |
Lower bound on running time | Average bound on running time | Upper bound on running time |
Generally does not occur in practice | Mostly used in real-life analysis | Mostly used in real-life analysis |
Consider the below array of numbers.
The best case for linear search on the above array is finding the number 45: it is at the first position, so a single comparison is required.
The average case for linear search on the above array is finding any number lying between 45 and 90.
The worst case for linear search on the above array is finding 90, as it requires the maximum number of comparisons.
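Since the pictured array is not reproduced here, assume an illustrative array that starts with 45 and ends with 90; an instrumented search (the `linearSearchWithCount` helper is hypothetical) makes the comparison counts concrete:

```javascript
// Linear search that also reports how many comparisons it performed.
function linearSearchWithCount(array, toFind) {
  let comparisons = 0;
  for (let i = 0; i < array.length; i++) {
    comparisons++;
    if (array[i] === toFind) return { index: i, comparisons };
  }
  return { index: -1, comparisons };
}

const numbers = [45, 62, 13, 88, 90]; // illustrative stand-in for the pictured array

console.log(linearSearchWithCount(numbers, 45).comparisons); // → 1 (best case)
console.log(linearSearchWithCount(numbers, 90).comparisons); // → 5 (worst case)
```

Searching for a value that is absent also costs the full 5 comparisons, which is why the worst case is an upper bound.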
Computing the running time of an algorithm's operations in mathematical units of computation, and defining a mathematical formula for its run-time performance, is referred to as asymptotic analysis.
An algorithm may not perform the same for different types of input; a change in the input's size or nature can change the overall performance of the algorithm. As we saw in the case of linear search, the running time depends entirely on the input given to the algorithm.
Asymptotic analysis studies how the performance of an algorithm changes as the order of the input size changes. Using asymptotic analysis, we can precisely define the best case, average case, and worst case of an algorithm.
There are three main mathematical notations used in asymptotic analysis.
These are also known as the algorithm's growth rates; let's see them one by one.
Big-O notation, written $$O(g(n))$$, is the formal way to express the upper bound of an algorithm's running time. It measures the worst-case time complexity, i.e. the longest amount of time an algorithm can possibly take to complete.
$$O(g(n)) = \{\, f(n) : \text{there exist positive constants } c \text{ and } n_0 \text{ such that } 0 \le f(n) \le c\,g(n) \text{ for all } n \ge n_0 \,\}$$
$$g(n)$$ is an asymptotic upper bound for $$f(n)$$.
An upper bound $$g(n)$$ of an algorithm defines the maximum time required; we can always solve the problem in at most $$g(n)$$ time.
The time taken by a known algorithm on its worst-case input gives this upper bound.
$$n$$ | $$f(n) = n^2$$ | $$g(n) = 2^n$$ | Comparison |
---|---|---|---|
1 | 1 | 2 | $$f(n) < g(n)$$ |
2 | 4 | 4 | $$f(n) = g(n)$$ |
3 | 9 | 8 | $$f(n) > g(n)$$ |
4 | 16 | 16 | $$f(n) = g(n)$$ |
5 | 25 | 32 | $$f(n) < g(n)$$ |
6 | 36 | 64 | $$f(n) < g(n)$$ |
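The table suggests that from n = 4 onward, n² never again exceeds 2ⁿ. This can be spot-checked numerically with the constants c = 1 and n₀ = 4 (a sketch over a finite range, not a proof):

```javascript
// Check 0 <= f(n) <= c * g(n) for f(n) = n^2 and g(n) = 2^n,
// using constants c = 1 and n0 = 4, for n up to 30.
const c = 1;
const n0 = 4;
let holds = true;
for (let n = n0; n <= 30; n++) {
  if (n * n > c * 2 ** n) holds = false; // would violate the Big-O condition
}
console.log(holds); // → true, consistent with n^2 = O(2^n)
```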
Big-Omega notation (Ω) is used to define the lower bound of an algorithm, i.e. its best case. It indicates the minimum time required by the algorithm over all input values. When the time complexity of an algorithm is represented in big-Ω form, it means the algorithm will take at least this much time to complete its execution; it can certainly take more time than this.
$$\Omega(g(n)) = \{\, f(n) : \text{there exist positive constants } c \text{ and } n_0 \text{ such that } 0 \le c\,g(n) \le f(n) \text{ for all } n \ge n_0 \,\}$$
$$g(n)$$ is an asymptotic lower bound for $$f(n)$$.
$$f(n) = \Omega(g(n))$$ implies $$f(n) \ge c \cdot g(n)$$ for all $$n \ge n_0$$.
A lower bound $$g(n)$$ of an algorithm defines the minimum time required; no input can make the algorithm finish in less than $$g(n)$$ time (up to the constant factor $$c$$).
$$n$$ | $$g(n) = 2^n$$ | $$f(n) = n^2$$ | Comparison |
---|---|---|---|
1 | 2 | 1 | $$f(n) < g(n)$$ |
2 | 4 | 4 | $$f(n) = g(n)$$ |
3 | 8 | 9 | $$f(n) > g(n)$$ |
4 | 16 | 16 | $$f(n) = g(n)$$ |
5 | 32 | 25 | $$f(n) < g(n)$$ |
6 | 64 | 36 | $$f(n) < g(n)$$ |
The notation $$\Theta(g(n))$$ is the formal way to enclose both the lower bound and the upper bound of an algorithm's running time. Since it represents both the upper and the lower bound of the running time, it is often used for analysing the average-case complexity of an algorithm. The time complexity represented by Big-$$\Theta$$ notation is the range within which the actual running time of the algorithm lies, so it defines the exact asymptotic behaviour of an algorithm.
$$\Theta(g(n))$$ is a set; we can write $$f(n) \in \Theta(g(n))$$ to indicate that $$f(n)$$ is a member of $$\Theta(g(n))$$.
$$g(n)$$ is an asymptotically tight bound for $$f(n)$$.
$$f(n) = \Theta(g(n))$$ implies $$c_1 \cdot g(n) \le f(n) \le c_2 \cdot g(n)$$ for all $$n \ge n_0$$.
$$O(g(n)) = \{\, f(n) : \text{there exist positive constants } c \text{ and } n_0 \text{ such that } 0 \le f(n) \le c\,g(n) \text{ for all } n \ge n_0 \,\}$$
i.e. $$f(n) = O(g(n))$$
$$\Omega(g(n)) = \{\, f(n) : \text{there exist positive constants } c \text{ and } n_0 \text{ such that } 0 \le c\,g(n) \le f(n) \text{ for all } n \ge n_0 \,\}$$
i.e. $$f(n) = \Omega(g(n))$$
$$\Theta(g(n)) = \{\, f(n) : \text{there exist positive constants } c_1, c_2 \text{ and } n_0 \text{ such that } 0 \le c_1\,g(n) \le f(n) \le c_2\,g(n) \text{ for all } n \ge n_0 \,\}$$
i.e. $$f(n) = \Theta(g(n))$$
Let's suppose we have two functions $$f(n) = n^2$$ and $$g(n) = n$$. By looking at the functions we can say that $$f(n) \ge g(n) \implies f(n) = \Omega(g(n))$$.
For another example, suppose we have two functions $$f(n) = n$$ and $$g(n) = n^2$$. By looking at the functions we can say that $$f(n) \le g(n) \implies f(n) = O(g(n))$$.
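Both claims can be spot-checked numerically, taking c = 1 and n₀ = 1 (again a finite-range sketch, not a proof):

```javascript
// Example 1: f(n) = n^2, g(n) = n   →  f(n) >= c * g(n), the Omega condition.
// Example 2: f(n) = n,   g(n) = n^2 →  f(n) <= c * g(n), the Big-O condition.
const c = 1;
let omegaHolds = true;
let bigOHolds = true;
for (let n = 1; n <= 1000; n++) {
  if (n * n < c * n) omegaHolds = false; // would break f(n) = Omega(g(n))
  if (n > c * n * n) bigOHolds = false; // would break f(n) = O(g(n))
}
console.log(omegaHolds, bigOHolds); // → true true
```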
The computer-science community has defined some common orders of magnitude, listed below along with their values.
$$n$$ | $$\log n$$ | $$n \log n$$ | $$n^2$$ | $$n^3$$ | $$2^n$$ | $$n!$$ |
---|---|---|---|---|---|---|
4 | 2 | 8 | 16 | 64 | 16 | 24 |
16 | 4 | 64 | 256 | 4096 | 65536 | $$2.09 \times 10^{13}$$ |
64 | 6 | 384 | 4096 | 262144 | $$1.84 \times 10^{19}$$ | $$1.26 \times 10^{89}$$ |
256 | 8 | 2048 | 65536 | 16777216 | $$1.15 \times 10^{77}$$ | $$\infty$$ |
1024 | 10 | 10240 | 1048576 | $$1.07 \times 10^{9}$$ | $$1.79 \times 10^{308}$$ | $$\infty$$ |
4096 | 12 | 49152 | 16777216 | $$6.87 \times 10^{10}$$ | $$10^{1233}$$ | $$\infty$$ |
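The columns of the table can be recomputed directly (log is base 2 here, matching the table; the ∞ entries correspond to values too large to represent, which in JavaScript overflow to `Infinity`):

```javascript
// Compute one row of the orders-of-magnitude table for a given n.
function growthRow(n) {
  return {
    n,
    logN: Math.log2(n),
    nLogN: n * Math.log2(n),
    nSquared: n ** 2,
    nCubed: n ** 3,
    twoToN: 2 ** n, // becomes Infinity once 2^n exceeds the double range
  };
}

console.log(growthRow(16).nSquared); // → 256
console.log(growthRow(16).twoToN); // → 65536
console.log(growthRow(4096).twoToN); // → Infinity
```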
We have seen an introduction to algorithm analysis and its types. We have also seen the asymptotic notations along with examples, common orders of magnitude, and more.
Thanks for reading the article.
© 2021 dsabyte. All rights reserved