Sun May 01 2022
Amortized analysis considers not just a single operation but a sequence of operations on a given data structure or database.
It is used for algorithms where an occasional operation is very slow, but most of the other operations are much faster.
The time required to perform a sequence of data-structure operations is averaged over all the operations performed.
In amortized analysis, we analyze a sequence of operations and guarantee a worst-case average time that is lower than the worst-case time of a particularly expensive operation.
So, amortized analysis can be used to show that the average cost of an operation is small even though a single operation in the sequence may be expensive.
There are three common techniques of amortized analysis:
The aggregate method
A sequence of n operations takes worst-case time T(n) in total
The amortized cost per operation is then T(n)/n
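As a worked instance (my own illustration, not from the original): for the cost pattern of the doubling dynamic array discussed below, where most operations cost 1 but the operation that fills an array of size 2^j additionally costs 2^j, the aggregate method gives:

```latex
T(n) \;\le\; \underbrace{\sum_{i=1}^{n} 1}_{\text{writes}}
  \;+\; \underbrace{\sum_{j=0}^{\lfloor \log_2 n \rfloor} 2^{j}}_{\text{copies}}
  \;\le\; n + 2n \;=\; 3n,
\qquad
\frac{T(n)}{n} \;\le\; 3 \;=\; O(1).
```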
The accounting method
Assign each type of operation an amortized cost
Overcharge some operations
Store the overcharge as credits on specific objects
Then use the stored credit to compensate for some later operations
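As a small illustration of the accounting method (a sketch with hypothetical names, not from the original article): for dynamic-array pushes, charge every push an amortized cost of 3 units — 1 pays for writing the element, and 2 are banked as credit to pay for copying it during a future doubling. The invariant is that the banked credit never goes negative:

```java
// Sketch: accounting method for dynamic-array push (hypothetical names).
// Each push is charged 3 units: 1 for the write, 2 banked as credit.
// A doubling from capacity k to 2k copies k elements and spends k credits.
class AccountingDemo {
    static long credit = 0;     // banked units
    static int size = 0;        // elements currently stored
    static int capacity = 1;    // current array capacity

    static void push() {
        credit += 3;            // amortized charge for this push
        if (size == capacity) { // array is full: double and pay for copies
            credit -= capacity; // copying 'capacity' elements costs that many units
            capacity *= 2;
        }
        credit -= 1;            // pay for writing the new element
        size += 1;
        assert credit >= 0 : "credit must never go negative";
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) push();
        System.out.println("size=" + size + " capacity=" + capacity
                + " credit=" + credit);
    }
}
```

Because the credit invariant holds for every prefix of the sequence, the total real cost never exceeds the total amortized charge of 3 per push, which is the accounting-method argument that push is O(1) amortized.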
The potential method
Same as accounting method
But store the credit as "potential energy" of the data structure as a whole, rather than attaching it to individual objects
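In symbols (the standard formulation, which the list above only sketches): with a potential function Φ on the data-structure state, the amortized cost of the i-th operation is its actual cost plus the change in potential:

```latex
\hat{c}_i = c_i + \Phi(D_i) - \Phi(D_{i-1}),
\qquad
\sum_{i=1}^{n} \hat{c}_i
  = \sum_{i=1}^{n} c_i + \Phi(D_n) - \Phi(D_0)
  \;\ge\; \sum_{i=1}^{n} c_i
\quad \text{when } \Phi(D_n) \ge \Phi(D_0).
```

For the dynamic array, a common choice is Φ(D) = 2·size − capacity, which is non-negative once the array is at least half full and releases exactly enough potential to pay for each doubling.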
Amortized analysis requires knowledge of which series of operations are possible. This is most commonly the case with data structures, which have state that persists between operations. The basic idea is that a worst-case operation can alter the state in such a way that the worst case cannot occur again for a long time, thus "amortizing" its cost.
Consider a dynamic array that grows in size as more elements are added to it, such as ArrayList in Java or std::vector in C++. If we started out with a dynamic array of size 4, we could push 4 elements onto it, and each operation would take constant time. Yet pushing a fifth element onto that array would take longer as the array would have to create a new array of double the current size (8), copy the old elements onto the new array, and then add the new element. The next three push operations would similarly take constant time, and then the subsequent addition would require another slow doubling of the array size.
In general, if we consider an arbitrary sequence of n pushes to a dynamic array, we notice that the push operations take constant time except for those that trigger a doubling, each of which takes O(k) time to copy the k elements currently in the array. The doubling costs form the geometric series 1 + 2 + 4 + ... + n/2 + n < 2n, so the whole sequence takes O(n) time. Since there were n operations in total, we can take the average and find that pushing n elements onto the dynamic array takes O(n)/n = O(1), i.e. constant, amortized time per push.
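To make the aggregate argument concrete, the following sketch (hypothetical helper name; the cost unit is one element write or copy) simulates n pushes into a doubling array and counts the total cost, which stays below 3n:

```java
import java.util.Arrays;

// Sketch: count the element-copy cost of n pushes into a doubling array.
// Cost model: writing the new element costs 1; a doubling that copies k
// elements costs k. The total stays under 3n, so amortized cost is O(1).
class DoublingCost {
    static long costOfPushes(int n) {
        long cost = 0;
        int[] arr = new int[1];
        int size = 0;
        for (int i = 0; i < n; i++) {
            if (size == arr.length) {             // full: double the capacity
                cost += size;                     // copying 'size' elements
                arr = Arrays.copyOf(arr, arr.length * 2);
            }
            arr[size++] = i;                      // write the new element
            cost += 1;
        }
        return cost;
    }

    public static void main(String[] args) {
        long c = costOfPushes(1_000_000);
        // c / 1e6 is a small constant (< 3), illustrating O(1) amortized cost.
        System.out.println("total cost for 1e6 pushes: " + c);
    }
}
```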
Another classic example is a queue implemented with two arrays, input and output: elements are enqueued onto the input array and dequeued from the output array. The enqueue operation just pushes an element onto the input array; this operation does not depend on the lengths of either input or output and therefore runs in constant time.
However, the dequeue operation is more complicated. If the output array already has some elements in it, then dequeue runs in constant time; otherwise, dequeue takes O(n) time to move all the elements from the input array onto the output array, where n is the current length of the input array. After copying n elements from input, we can perform n dequeue operations, each taking constant time, before the output array is empty again. Thus, we can perform a sequence of n dequeue operations in only O(n) time, which implies that the amortized time of each dequeue operation is O(1).
Alternatively, we can charge the cost of copying any item from the input array to the output array to the earlier enqueue operation for that item. This charging scheme doubles the amortized time for enqueue but reduces the amortized time for dequeue to O(1).
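A minimal sketch of the queue described above (class and method names are my own), using two stacks as the input and output arrays:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch: a queue built from two stacks. enqueue pushes onto 'input';
// dequeue pops from 'output', refilling it from 'input' only when empty.
// Each element moves at most once from input to output, so n enqueues
// followed by n dequeues cost O(n) in total: O(1) amortized each.
class TwoStackQueue<T> {
    private final Deque<T> input = new ArrayDeque<>();
    private final Deque<T> output = new ArrayDeque<>();

    void enqueue(T x) {
        input.push(x);                 // always O(1)
    }

    T dequeue() {
        if (output.isEmpty()) {        // occasional O(n) transfer
            while (!input.isEmpty()) {
                output.push(input.pop());
            }
        }
        return output.pop();           // O(1) otherwise
    }

    public static void main(String[] args) {
        TwoStackQueue<Integer> q = new TwoStackQueue<>();
        q.enqueue(1);
        q.enqueue(2);
        q.enqueue(3);
        System.out.println(q.dequeue()); // 1: this dequeue pays the transfer
        System.out.println(q.dequeue()); // 2: subsequent dequeues are O(1)
    }
}
```

Note that elements come out in FIFO order even though both underlying structures are LIFO stacks: the transfer from input to output reverses them exactly once.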
We have seen an introduction to amortized analysis, the techniques used to perform it, and worked examples of applying it.
Thanks for reading this article.
© 2021 dsabyte. All rights reserved