Algorithms, Web Page 06A
Amortizing (not in Levitin)
Back from the last of Sorting.
You should be familiar with the implementation of classes like ArrayList in
Java, where elements are stored in an array; when there is no space to add
the next element, a larger array is allocated and all n current elements are
copied. This copying operation is O(n), but we still refer to adding an
element to a list as an O(1) operation. Technically, adding an element is
O(1) on average.
The same holds for adding an element to a hash table, even though the
hash table may occasionally need to be resized. Array doubling, as a way to
allow arbitrary growth of a stack stored in an array, is the
example for the topic of amortizing, where we look at bounds for the
average time of operations that in some instances take much more
time than in other instances.
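As a concrete picture of this scheme, here is a minimal sketch in Python (the class and counter names are mine, not from any standard library) of an array-based stack that doubles its storage when full:

```python
class ArrayStack:
    """Stack in a fixed-size backing array that doubles when full."""

    def __init__(self):
        self.array = [None]   # backing storage, initial capacity 1
        self.n = 0            # number of elements currently stored
        self.copies = 0       # total elements copied during resizes

    def push(self, x):
        if self.n == len(self.array):        # full: allocate a double-size array
            bigger = [None] * (2 * len(self.array))
            for i in range(self.n):          # copy all n current elements
                bigger[i] = self.array[i]
            self.copies += self.n
            self.array = bigger
        self.array[self.n] = x               # the basic O(1) write
        self.n += 1

    def pop(self):
        self.n -= 1
        return self.array[self.n]
```

After 10 pushes starting from capacity 1, the doublings copy 1 + 2 + 4 + 8 = 15 elements in total, already hinting that the copying work stays proportional to n.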
We take array doubling as a central example. We deal with
an operation that in the general case has a basic cost. For
push on an array-based stack, that is the cost of copying the element
into the next free place in the array. Sometimes this basic
operation is not enough (when the array is already full) and a special
extra operation of a higher order of cost needs to be done (like
copying n elements, not 1, to a new array). In a naive
analysis this screws up most anything with a stack: doing n
pushes when the worst-case time for one is O(n) gives a quick and dirty
bound for all the pushes together of O(n^2).
A better analysis sees that the larger cost is incurred only
a small fraction of the time, and takes that into account.
For a whole algorithm, only the average cost of any
particular operation is important (except in some real-time applications).
The average is what we are trying to bound.
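To see why the average is the right thing to bound, a small sketch (my own helper, with an assumed cost model of 1 per element written plus 1 per element copied at a resize, capacity starting at 1) totals the cost of n pushes:

```python
def total_push_cost(n):
    """Total cost of n pushes under array doubling:
    1 per element written plus 1 per element copied at each resize."""
    cost, capacity, size = 0, 1, 0
    for _ in range(n):
        if size == capacity:   # full: double, paying to copy `size` elements
            cost += size
            capacity *= 2
        cost += 1              # basic cost of writing the new element
        size += 1
    return cost
```

For n = 10 this gives 25, and in general the total stays below 3n, so the average per push is a small constant even though a single push can cost n.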
1. The cost of an employee:
while working: salary + a contribution to a pension fund;
after retirement: paid entirely from the pension.
Here think of the working years as the "normal" ones, and the retirement
years as the special cases:
the idea is to set the pension contribution high enough to pay for
retirement. (Don't look at the state of Illinois for a good example.)
2. What routine deposits do you make to a bank
account being kept for emergencies, so you never overdraw the account
in an emergency?
One common situation in programming is to have a normal low actual
cost, plus deposits to a bank that covers occasional larger costs.
The amortized cost is the normal basic cost + the normal
bank deposit. This only makes sense if the bank is
never overdrawn in the special cases/emergencies.
For our purposes, we do not always need to know the exact bank balance.
We only want to prove that the amount in the bank will never go negative.
The term accounting cost is used for the bank transactions:
deposits are positive in the usual cases and negative in the
special cases, where we want to offset high actual costs.
Array doubling example in detail, two ways
Assume the normal cost is 1, to transfer one number into the storage array.
Assume the real excess cost of changing the array size is the number
of elements copied. We want to calculate the
size of the normal bank deposit, call it c, needed to absorb the extra cost of
a push when the array is already full and needs to be doubled in size.
For simplicity assume the initial array size is 1, so all the
sizes at which it needs to be doubled are powers of 2. Thus the
special cases only come when n = 2^k for some k.
Pops only relieve the problem, so the worst case has no pops. We will
consider that worst case, when there are only pushes.
You could try to figure this out by trial and error. For
example, assume the initial array size is 1, that it costs 1 for
each element copied to a new array, and that
the accounting cost c = 1. The columns show the state after each
action, with n being the number of elements in the stack, and N being
the array size. The calculation in the accounting column
shows the old balance plus the routine deposit of 1, minus the special
cost at array doubling/copying time, which comes when the stack grows
just beyond a power of 2:

Action   n  N  Accounting
(start)  0  1  0
Push     1  1  0+1=1
Push     2  2  1+1-1=1
Push     3  4  1+1-2=0
Push     4  4  0+1=1
Push     5  8  1+1-4=-2  oops, accounting cost 1 does not work!
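The trial can also be run mechanically. This sketch (my own helper, under the same cost assumptions as the table) tracks the lowest balance seen while pushing with a given per-push deposit c:

```python
def min_balance(n_pushes, c):
    """Lowest accounting balance over n_pushes, depositing c per push and
    withdrawing the number of elements copied at each doubling
    (capacity starts at 1)."""
    balance, low, capacity, size = 0, 0, 1, 0
    for _ in range(n_pushes):
        balance += c              # routine deposit for this push
        if size == capacity:      # doubling: withdraw to copy `size` elements
            balance -= size
            capacity *= 2
        low = min(low, balance)
        size += 1
    return low
```

Five pushes with c = 1 bottom out at -2, exactly the "oops" above, while c = 2 never dips below zero.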
Let us try to solve this in general, assuming a constant accounting
cost for each push. Call it c. The total positive
contributions for n pushes are cn. The balance can only go
negative after a debit, so only consider n = 2^k + 1,
when the previous array size 2^k is
too small and it must be doubled to 2^(k+1),
copying the 2^k previous
elements. The total debits are not only for this most recent
doubling, but for all the doublings before it, at all the powers of two
up to 2^k:

  1 + 2 + 4 + ... + 2^k = 2^(k+1) - 1

To avoid debt we need all the deposits so far to be at least as big as all the
debits so far:

  cn >= 2^(k+1) - 1  for each k = 0, 1, 2, ...
To reduce the number of symbols, use n = 2^k + 1, so
the right-hand side of the required inequality can be rewritten as

  2^(k+1) - 1 = 2(2^k) - 1 = 2(n-1) - 1 = 2n - 3

for arbitrarily large values of n.
Clearly, to keep cn >= 2n - 3 for all n, c = 2 is the best (smallest
sufficient) constant value.
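The algebra can be double-checked at the critical points n = 2^k + 1: with c = 2 the deposits exceed the accumulated debits by exactly the constant 3 found above. A one-line check (helper name mine):

```python
def surplus_at_critical_point(k):
    """Deposits minus debits with c = 2, right after the push that
    forces the full array of size 2**k to double."""
    n = 2**k + 1                      # first push past a full array of size 2^k
    total_debits = 2**(k + 1) - 1     # 1 + 2 + 4 + ... + 2^k
    return 2 * n - total_debits       # cn with c = 2, minus all debits so far
```

The surplus is 3 for every k, matching 2n - (2n - 3) = 3; any constant c < 2 would let the surplus drift negative as n grows.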
This approach works because it is easy to simplify to a final state
with one clear worst path to it (doubling the space,
ending with n = 2^k + 1, with no pops). In such cases we can add all the
contributions along this
worst path and calculate what we need for amortizing. In many
cases there will be multiple different program paths that you want to
amortize over, and no obvious worst-case path to a specific
endpoint. (The homework exercises below have a more complicated case, where
you consider both growing and shrinking an array.)
We now consider an alternate inductive approach that does not
depend on the whole history of the program.
The balance must always be
nonnegative, and in particular, right after we make a withdrawal
from the amortizing account, it must be nonnegative. We may be
able to use an inductive approach where we only look from one
action requiring a withdrawal to the next. If we deposit enough
between the withdrawals to sustain the next withdrawal, we are covered.
Induction basis: the starting balance is 0.
Induction hypothesis: through k deductions, the amortizing balance is
nonnegative.
Induction step: show that the additions to the amortizing balance from
after the kth deduction through the (k+1)st deduction are enough to pay
for the (k+1)st deduction. Hence the amortizing balance after the
(k+1)st deduction will be at least as high as after the kth deduction,
so still nonnegative.
Conclusion: the amortizing balance will be nonnegative after any number
of deductions.
Let us see how this approach works for the array doubling example above. We
make no further assumption about the balance after the previous array
expansion, other than that it must be nonnegative, and assume the next
expansion comes after the array fills up
at size N. The last expansion was caused when an array of size
N/2 was full, and another element needed to be added. That
triggered the allocation of an array of length N, receiving N/2 + 1
elements and possibly leaving no accounting balance. The next expansion,
from N to N+1 elements, will require copying N elements into an array
of size 2N, at a
real cost of N, and the accounting balance needs to pay for it.
Pushes increased the size from N/2+1 to N+1, for a total of N/2 pushes,
which contribute cN/2 to the accounting balance. It is enough
if these most recent additions at least pay for the next doubling,
which must cover the
copying cost N: cN/2 >= N.
Again c = 2.
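The inductive step here is just arithmetic, and can be spelled out (helper name mine) for the stretch between two consecutive doublings:

```python
def deposits_between_doublings(N, c):
    """Deposits collected between the doubling that produced an array of
    size N (holding N/2 + 1 elements) and the doubling at N + 1 elements."""
    pushes_between = (N + 1) - (N // 2 + 1)   # = N/2 pushes in between
    return c * pushes_between
```

With c = 2 the deposits equal N, exactly the cost of copying N elements at the next doubling, for every (even) array size N.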
What about adding a constant
K to the array size each time it fills up?
Assume the initial array size is K. What would happen if we
try to find the value, c, for the accounting cost of each push?
Following the first analysis: the worst case will
be pushes up
just past some multiple of K, say bK, so n = bK + 1 and b = (n-1)/K.
We need

  cn >= K + 2K + ... + bK = Kb(b+1)/2 = (n-1)((n-1)/K + 1)/2,

which is Θ(n^2).
If such a constant c existed, it would imply n^2
is O(n), which is false.
We cannot have a constant amortization of the costs if we add a fixed
amount to the array size each time it fills up. The multiplicative
growth of the length in our first version is important!
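A quick experiment confirms the conclusion. This sketch (my own, same cost model as before) totals the cost of n pushes when a full array grows by a fixed K instead of doubling:

```python
def total_cost_additive(n, K):
    """Total cost of n pushes when a full array grows by K (initial size K):
    1 per element written plus 1 per element copied at each resize."""
    cost, capacity, size = 0, K, 0
    for _ in range(n):
        if size == capacity:    # full: grow by K, paying to copy `size` elements
            cost += size
            capacity += K
        cost += 1               # basic cost of writing the new element
        size += 1
    return cost
```

With K = 10, the average cost per push is about 50 for n = 1000 and about 100 for n = 2000: it doubles when n doubles, the signature of Θ(n) per operation rather than O(1).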
This sort of analysis, with an extra payment for each "normal" operation,
allows a stack or a growable list to be used in any larger algorithm
while always assuming that pushes and pops average a constant amount of
time. There are other, more sophisticated approaches to amortizing
(for instance in the
proof of the overall time for union-find in Ch 9), but we will not
look in detail at any other sort of examples in this course, so we stop here.
The array doubling algorithm allows the storage to start off small and grow
as needed, but it may later take up far more space than needed:
a stack could grow large and then have many fewer elements
after many pops. It could be useful to shrink the size of
the array if too little of it is used. It is important to be
careful about the model for shrinking, making sure it interacts
appropriately with the doubling mechanism.
Exercise: Suppose the shrinking mechanism is the inverse of the growing
one: as soon as
the stack uses less than half of its space, halve the size of the
array. Look at the interaction with the doubling part, and show
that the worst-case average behavior of this stack's operations is O(n),
not O(1). If you
do not see this right away, more systematically consider breaking the
problem into four cases, as in the next problem.
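A simulation is not a substitute for the argument the exercise asks for, but it makes the bad interaction easy to see. This sketch (my own) counts elements copied under the doubling-plus-eager-halving policy; alternating a few pushes and pops around the size boundary copies on the order of n elements per operation:

```python
class ShrinkStack:
    """Counts copying costs for a stack that doubles when full and halves
    as soon as it is less than half full (the policy in this exercise)."""

    def __init__(self):
        self.capacity, self.n, self.copies = 1, 0, 0

    def _resize(self, new_capacity):
        self.copies += self.n        # cost model: copy all current elements
        self.capacity = new_capacity

    def push(self):
        if self.n == self.capacity:          # full: double
            self._resize(2 * self.capacity)
        self.n += 1

    def pop(self):
        self.n -= 1
        if self.n < self.capacity // 2:      # under half full: halve
            self._resize(self.capacity // 2)
```

Filling the stack to n = 1024 costs 1023 copies in total, but each subsequent push, pop, pop, push cycle at the boundary costs 1024 + 1023 copies, so the average per operation in that phase is Θ(n), not O(1).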
What if the array only shrinks to half its size when only 1/4 of the
array is occupied? Is the average cost for pushes and pops O(1) here?
Hints: You can use my simpler approach to accounting, that only looks
from one withdrawal (array size change) to the next. In the case
where the array only doubled, there was one clear worst case. Now
there are several patterns. Still, there are only a finite number of
situations: grow after growing, shrink after growing, shrink
after shrinking, and grow after shrinking. The overall worst case
is the worst possibility among these 4. Let c, the regular bank
contribution, be a parameter and solve an inequality (4 times).
Make clear in your argument the distinction between the symbols
for the current size of the array and the current number of elements in
it, vs. the number of elements at the next or previous array size change.
On to some dynamic data structures.