DAA Introduction , Divide
&Conquer Technique
B.Sailaja,
Asst Prof,Dept of CSE,
Vidya Jyothi Institute of
Technology
Algorithm
Definition
An algorithm is a finite set of instructions that
accomplishes a particular task.
All algorithms must satisfy the following criteria.
Criteria
1. Input : Zero or more quantities are
externally supplied.
2. Output : At least one quantity is produced.
3. Definiteness : Each instruction is clear and
unambiguous.
Statements such as “add 6 or 7 to x”
or “compute 5/0” are not permitted.
4. Finiteness : The algorithm should terminate
after a finite number of steps.
5. Effectiveness : Instruction is basic enough to be
carried out.
Pseudo code for expressing algorithms
We present most of our algorithms using pseudo code that
resembles C and Pascal.
1. Comments begin with // and continue until end of the line.
2. Blocks are indicated with matching braces: { and }.
i. A compound statement
ii. Body of a procedure.
3.
i. An identifier begins with a letter.
ii. The data types of variables are not explicitly declared.
iii. The types will be clear from the context.
iv. Whether a variable is global or local to a procedure will also be evident from
the context.
v. We assume simple data types such as integer, float, char, boolean, and so
on.
vi. Compound data types can be formed with records.
node = record
{
datatype_1 data_1;
:
datatype_n data_n;
node *link;
}
Data items of a record can be accessed
with the arrow ( -> ) and period ( . ) operators.
C style :-
struct node
{
datatype_1 data_1;
:
datatype_n data_n;
struct node *link;
};
4. Assignment of values to variables is done using the assignment
statement.
< variable > := < expression >
5. There are two boolean values true and false. To produce
these values, logical operators and, or and not and the
relational operators <, ≤,=, ≠, ≥ and > are provided.
6. Elements of multidimensional arrays are accessed using
[ and ]. For example the (i,j)th element of the array A is
denoted as A[i,j].
7. The following looping statements are used: for, while, and
repeat until.
The general form of a while loop:
while( condition ) do
{
statement_1;
:
statement_n;
}
The general form of a for loop:
for variable := value1 to value2 step step do
◦ Here value1, value2, and step are arithmetic expressions.
◦ The clause “step step” is optional and taken as +1 if it does not occur.
◦ step could be either positive or negative.
◦ variable is tested for termination at the start of each iteration.
The for loop can be implemented as a
while loop as follows:
variable:=value1;
incr:=step;
while( ( variable – value2)*step ≤ 0) do
{
<statement 1>
:
<statement n>
variable :=variable+incr;
}
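As a concrete check of this translation, here is a small Python sketch (Python is used here only as an executable stand-in for the pseudocode; the function name `for_as_while` is mine). It collects the values the loop variable takes, following the while-loop form exactly:

```python
def for_as_while(value1, value2, step):
    # Mirror of the while-loop translation of:
    #   for variable := value1 to value2 step step do
    variable = value1
    incr = step
    visited = []
    # The termination test (variable - value2)*step <= 0 works for
    # both positive and negative step values.
    while (variable - value2) * step <= 0:
        visited.append(variable)
        variable += incr
    return visited
```

For example, `for_as_while(5, 1, -1)` walks the loop variable down from 5 to 1, matching a for loop with a negative step.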
The general form of a repeat-until loop:
repeat
<statement 1>
:
<statement n>
until ( condition )
The statements are executed as long as
condition is false.
8. A conditional statement has the following
forms:
if< condition > then < statement >
if< condition > then < statement 1> else
< statement 2>
9. Input and output are done using the instructions read and
write.
10. Procedure or function starts with the word
Algorithm.
General form :
Algorithm Name( <parameter list> )
{
body
}
where Name is the name of the procedure.
◦ Simple variables to procedures are passed by value.
◦ Arrays and records are passed by reference
Ex:-Algorithm that finds and returns the
maximum of n given numbers
Algorithm Max(a,n)
// a is an array of size n
{
Result:=a[1];
for i:=2 to n do
if a[i]>Result then Result:=a[i];
return Result;
}
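The same algorithm can be sketched in Python (the name `find_max` is mine; Python's 0-based indexing replaces the pseudocode's 1-based indexing):

```python
def find_max(a):
    # Result starts at the first element; every later element
    # is compared against it, as in the pseudocode.
    result = a[0]
    for i in range(1, len(a)):
        if a[i] > result:
            result = a[i]
    return result
```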
Ex:-Write an algorithm to sort an array of n
integers using bubble sort.
Algorithm BubbleSort(a,n)
// a is an array of size n
{
for i:=1 to n-1 do
{
for j:=1 to n-i do
{
if( a[j] > a[j+1] ) then
{
t:=a[j]; a[j]:=a[j+1]; a[j+1]:=t;
}
}
}
}
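A Python sketch of the same bubble sort (the name `bubble_sort` is mine):

```python
def bubble_sort(a):
    # n-1 passes; after pass i the largest i elements are already
    # in their final positions, so the inner loop shrinks each pass.
    n = len(a)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]  # swap
    return a
```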
Performance Analysis
(machine independent)
There are many things upon which the
performance will depend.
◦ Does the program efficiently use primary and secondary
storage?
◦ Is the program's running time acceptable for the task?
◦ Does it do what we want it to do?
◦ Does it work correctly according to the specifications of
the task?
◦ Does the program contain documentation that shows how
to use it and how it works?
◦ Is the program's code readable?
How to achieve them?
◦ Good programming style, experience, and practice
◦ Discuss and think.
Practice, practice, practice.
Think, think, think.
Space Complexity
The space needed by a program has the following components.
◦ Instruction space
◦ Instruction space is the space needed to store the compiled version of the program instructions.
◦ The amount of instruction space that is needed depends on the compiler used to compile the
program into machine code.
◦ Data space
◦ Data space is the space needed to store all constant and variable values. Data space has two
components.
◦ Space needed by constants (e.g., 1 and 2 in the max-of-n-numbers algorithm) and simple
variables (such as i, j, n, etc.).
◦ Space needed by dynamically allocated objects such as arrays and class instances.
◦ Space taken by the variables and constants varies from language to language and platform to
platform.
◦ Environmental stack space
◦ The environment stack is used to save information needed to resume execution of partially
completed functions.
◦ Each time a function is invoked the following data are saved on the environment stack.
◦ The return address .
◦ The values of all local variables and formal parameters in the function being
invoked( necessary for recursive functions only).
Recursive algorithm
Algorithm rfactorial(n)
// n is an integer
{
fact:=1;
if( n=0 or n=1 ) then return fact;
else
fact:=n*rfactorial(n-1);
return fact;
}
Each time the recursive function rfactorial is invoked,
the current values of n and fact and the program location to return to on
completion are saved in the environment stack.
void main()
{
int R = rfactorial(3);
printf("%d", R);
}
[Figure: the execution (environment) stack during rfactorial(3). One
frame is pushed per active call: a frame for n=3 (return location 2000,
fact = 3*...), a frame for n=2 (return location 3000, fact = 2*...),
and a frame for n=1 (return location 4000, fact = 1). Each frame saves
the location to return to and the current values of n and fact; the
calls then unwind as fact = 1, fact = 2*1, fact = 3*2.]
Summary of space complexity
The space needed by a program depends on several factors.
We cannot make an accurate analysis of the space requirements
of a program unless we know the computer and compiler that will
be used.
However, we can determine the components that depend on the
characteristics of the problem instance (e.g., the number of inputs and
outputs or the magnitude of the numbers involved) to be solved.
Ex:-1. Space requirements of a program that sorts n elements can be
expressed as a function of n.
2. Space requirements of a program that adds two m×n
matrices can be expressed as a function of m, n.
◦ The size of the instruction space is independent of the problem instance to
be solved.
◦ The contribution of the constants and simple variables to the data space is
also independent of the problem instance to be solved.
◦ Most of the dynamically allocated memory( ex., arrays, class instances etc)
depends on problem instance to be solved.
◦ The environmental stack space is generally independent of the problem
instance unless recursive functions are in use.
Contd..
Contd..
Therefore, We can divide the total space needed by a
program into two parts:
i) Fixed Space Requirements (C)
Independent of the characteristics of the problem instance ( I )
◦ instruction space
◦ space for simple variables and constants.
ii) Variable Space Requirements (SP(I))
depend on the characteristics of the problem instance ( I )
◦ Number of inputs and outputs associated with I
◦ recursive stack space ( formal parameters, local variables, return
address ).
◦ Therefore, the space requirement of any problem P can be
written as
S(p)=C +Sp( Instance characteristics )
Note:
◦ When analyzing the space complexity of an algorithm, we
concentrate only on estimating
Sp (Instance characteristics ).
◦ We do not concentrate on estimating fixed part c .
◦ We need to identify the instance characteristics of the
problem to measure Sp
Example1
Algorithm abc(a,b,c)
{
return a+b+b*c+(a+b-c)/(a+b)+4.0;
}
Problem instance characterized by the specific values of a,b,and c.
If we assume one word (4 bytes) is adequate to store the values of each
a, b, and c , then the space needed by abc is independent of the
instance characteristics.
Therefore, Sabc( instance characteristics)=0
Example2
Algorithm sum(a,n)
{
s:=0;
for i:=1 to n do
s:=s+a[i];
return s;
}
Problem instance characterized by n.
The amount of space needed does not depend on the value of n.
Therefore, Ssum(n)=0
Recall: Address of the
first element of the array will
be passed .
Example3
Algorithm RSum(a,n)
{
if(n ≤ 0) then return 0;
else return RSum(a,n-1)+a[n];
}
Type                              Name   Number of bytes
formal parameter: int             a      2
formal parameter: int             n      2
return address (used internally)         2
Total per recursive call                 6

The depth of recursion is n+1 (calls for n, n-1, …, 0), therefore SRSum(n) = 6(n+1).
Example4
Calculate space complexity sp(n) for the following algorithm
Algorithm rfactorial(n)
{
fact=1;
if (n=0 or n=1)return fact;
else
fact=n*rfactorial(n-1);
return fact;
}
Contd..
Time Complexity:
T(P)=C+TP(I)
◦ The time, T(P), taken by a program, P, is the sum of its
compile time C and its run (or execution) time, TP(I).
◦ The compile time does not depend on the instance
characteristics.
◦ We will concentrate on estimating run time Tp(I).
If we know the characteristics of the compiler to be used, we can
determine the
◦ No. of additions , subtractions, multiplications, divisions, compares, and so
on.
Then we can obtain an expression for TP(n) of the form
TP(n) = ca·ADD(n) + cs·SUB(n) + cm·MUL(n) + cd·DIV(n) + ……..
where,
◦ n denotes the instance characteristics.
◦ ca, cs, cm, cd and so on denote the time needed for an addition,
subtraction, multiplication, division and so on. And
◦ ADD, SUB, MUL, DIV, and so on are functions whose values are the no.of
additions, subtractions, multiplications, divisions, and so on.
Obtaining such an exact formula is an impossible task, since the
time needed for an addition, subtraction, and so on often
depends on the numbers being added, subtracted, and so on.
The value of Tp(n) for any given n can be obtained only
experimentally.
Even with the experimental approach, one could face
difficulties.
In a multiuser system the execution time of a program p
depends on the number of other programs running on the
computer at the time program p is running.
Contd..
As there were some problems in determining the execution time using
earlier methods, we will go one step further and count only the number
of program steps.
program step
program step is loosely defined as a syntactically or
semantically meaningful program segment whose execution
time is independent of the instance characteristics.
◦ Example
result = a + b + b * c + (a + b - c) / (a + b) + 4.0;
sum = a + b + c;
The number of steps assigned to any program statement
depends on the kind of statement.
Comments are counted as zero number of steps.
An assignment statement which does not involve any calls to
other functions counted as one step.
For loops, such as the for, while, and repeat-until, we consider
the step counts only for the control part of the statement.
The control parts for for and while statements have the
following forms:
for i:= <expr1> to <expr2> do
while ( <expr> ) do
Each execution of the control part of a while statement is one, unless
<expr> is a function of instance characteristics.
The step count for each execution of the control part of a for statement is
one, unless <expr1> and <expr2> are functions of the instance
characteristics.
The step count for each execution of the condition of a conditional
statements is one, unless condition is a function of instance characteristics.
If any statement ( assignment statement, control part, condition etc.)
involves function calls, then the step count is equal to the number of steps
assignable to the function plus one.
Methods to compute the step count
1) Introduce global variable count into programs with
initial value zero.
◦ Statements to increment count by the appropriate amount are
introduced into the program.
◦ The value of the count by the time program terminates is the
number steps taken by the program.
2) Tabular method
◦ Determine the total number of steps contributed by each
statement as (steps per execution) × frequency
◦ Add up the contribution of all statements
Method-I: Introduce variable count into programs
EX:- Iterative sum of n numbers
Algorithm sum(a, n)
{
s:=0;
count:=count+1; // for assignment statement
for i:=1 to n do
{ count:=count+1; // For for
s:=s+a[i];
count:=count+1; // for assignment statement
}
count:=count+1; // for last time of for
return s;
count:=count+1; // for return
}
2n + 3 steps
Note : Step count tells us how the run time for a program changes with
changes in the instance characteristics.
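The counting method above can be run directly in Python (a sketch; the name `sum_with_count` is mine, and a local counter stands in for the global variable count):

```python
def sum_with_count(a):
    # count mimics the count:=count+1 statements inserted
    # into the pseudocode for Algorithm sum(a, n).
    count = 0
    s = 0
    count += 1            # for the assignment s := 0
    for x in a:
        count += 1        # for the for-loop control, executed n times
        s += x
        count += 1        # for the assignment inside the loop
    count += 1            # for the last test of the for loop
    count += 1            # for the return
    return s, count
```

For an array of n elements the counter finishes at 2n + 3, matching the step count derived on the slide.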
Ex:- Addition of two m×n matrices
Algorithm Add(a,b,c,m,n)
{
for i:=1 to m do
{
for j:=1 to n do
{
c[i,j]:=a[i,j]+b[i,j];
}
}
}
2mn + 2m+1 steps
EX:- Recursive sum of n numbers
Algorithm RSum(a,n)
{
count:=count+1; // for the if conditional
if(n ≤ 0) then
{
count:=count+1; // for the return
return 0;
}
else
{
count:=count+1; // for the addition, function invocation and return
return RSum(a,n-1)+a[n];
}
}
When analyzing a recursive program for its step count, we often obtain
a recursive formula for the step count.
We obtain the following recursive formula for above (RSum) algorithm.
tRSum(n) = 2                 if n = 0
           2 + tRSum(n-1)    if n > 0
One way of solving such recursive formula is by repeated substitutions for
each occurrence of the function tRSum on the right side until all such
occurrences disappear:
tRSum (n) = 2+tRSum(n-1)
= 2+2+tRSum(n-2)
= 2(2)+tRSum(n-2)
:
:
= n(2) +tRSum(n-n)
=2n+tRSum(0)
= 2n+2
The step count for Rsum is 2n+2
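The recursive step count can be verified with a Python sketch (names are mine; a one-element list plays the role of the global count so the recursive calls share it):

```python
def rsum_with_count(a, n, counter):
    # counter[0] mimics the global variable count.
    counter[0] += 1          # for the if conditional
    if n <= 0:
        counter[0] += 1      # for the return 0
        return 0
    counter[0] += 1          # for the addition, invocation, and return
    return rsum_with_count(a, n - 1, counter) + a[n - 1]
```

Running it for n elements leaves the counter at 2n + 2, matching the closed form obtained by repeated substitution.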
Method-II: Tabular method
EX:- Iterative sum of n numbers
Statement              s/e   frequency   Total steps
Algorithm sum(a, n)     0       --           0
{                       0       --           0
  s:=0;                 1        1           1
  for i:=1 to n do      1       n+1          n+1
    s:=s+a[i];          1        n           n
  return s;             1        1           1
}                       0       --           0
Total                                        2n+3
EX:- Addition of two m×n matrices
Statement                      s/e   frequency   Total steps
Algorithm Add(a,b,c,m,n)        0       --           0
{                               0       --           0
  for i:=1 to m do              1       m+1          m+1
    for j:=1 to n do            1      m(n+1)        mn+m
      c[i,j]:=a[i,j]+b[i,j];    1       mn           mn
}                               0       --           0
Total                                                2mn+2m+1
EX:- Recursive sum of n numbers
Statement                 s/e    frequency        Total steps
                                 n=0    n>0       n=0    n>0
Algorithm RSum(a,n)        0     --     --         0      0
{                          0     --     --         0      0
  if( n ≤ 0 ) then         1      1      1         1      1
    return 0;              1      1      0         1      0
  else return
    RSum(a,n-1)+a[n];     1+x     0      1         0     1+x
}                          0     --     --         0      0
Total                                              2     2+x

where x = tRSum(n-1)
Best, Worst, Average Cases
 Not all inputs of a given size take the same number of program steps.
 Sequential search for K in an array of n integers:
• Begin at first element in array and look at each element in turn until
K is found.
1. Best-Case Step count:-
Minimum number of steps executed by the algorithm for the given
parameters.
2. Worst-Case Step count:-
Maximum number of steps executed by the algorithm for the given
parameters.
3.Average-Case Step count:-
Average number of steps executed by an algorithm.
Contd..
[Figure: running times (1 ms to 5 ms) of the same algorithm on the
seven possible inputs A through G, with the best-case and worst-case
times marked.]
Average-case time =
( A time + B time + ……….. + G time ) / 7 ( total number of possible inputs )
Inexactness of step count.
• Both the instructions x:=y; and x:=y+z+(x/y); count as one step.
• Therefore, two analysts may arrive at 4n²+6n+2 and 7n²+3n+4 as the
step count for the same program.
• Any step count of the form c1n²+c2n+c3 could be a correct step count
for the program.
• Because of the inexactness of what a step count stands for, the exact
step count is not very useful for comparison of algorithms.
Asymptotic efficiency
Asymptotic efficiency means study of algorithms
efficiency for large inputs.
To compare two algorithms with running times f(n) and
g(n), we need a rough measure that characterizes how
fast each function grows as n grows.
Hint: use rate of growth
Compare functions asymptotically!
(i.e., for large values of n)
Rate of Growth
Ex:- f(n) = n² + 100n + log10n + 1000

n        f(n)            n² (value, %)           100n (value, %)      log10n   1000 (value, %)
1        1,101           1           (0.1)       100        (9.1)     0        1,000 (90.8)
10       2,101           100         (4.76)      1,000      (47.6)    1        1,000 (47.6)
100      21,002          10,000      (47.6)      10,000     (47.6)    2        1,000 (4.76)
1,000    1,101,003       1,000,000   (90.8)      100,000    (9.1)     3        1,000 (0.09)
10,000   101,001,004     100,000,000 (99.0)      1,000,000  (0.99)    4        1,000 (0.001)
100,000  10,010,001,005  10,000,000,000 (99.9)   10,000,000 (0.099)   5        1,000 (0.00)

The low-order terms and constants in a function are relatively insignificant for
large n:
n² + 100n + log10n + 1000 ~ n²
i.e., we say that n² + 100n + log10n + 1000 and n² have the same rate of growth.
Some more examples
n⁴ + 100n² + 10n + 50 is ~n⁴
10n³ + 2n² is ~n³
n³ - n² is ~n³
constants
◦ 10 is ~1
◦ 1273 is ~1
Asymptotic Notations
Asymptotic notation describes the behavior of
functions for the large inputs.
Big Oh(O) notation:
◦ The big oh notation describes an upper bound on the
asymptotic growth rate of the function f.
Definition: [Big “oh’’]
◦ f(n) = O(g(n)) (read as “f of n is big oh of g of n”)
iff there exist positive constants c and n0 such
that
f(n) ≤ c·g(n) for all n, n ≥ n0.
The definition states that the function f(n) is at most c times the
function g(n) except when n is smaller than n0.
In other words, f(n) “grows slower than or at the same rate as” g(n).
When providing an upper-bound function g for f, we normally
use a single term in n.
Examples
◦ f(n) = 3n+2
   3n + 2 ≤ 4n for all n ≥ 2, so 3n + 2 = O(n)
◦ f(n) = 10n² + 4n + 2
   10n² + 4n + 2 ≤ 11n² for all n ≥ 5, so 10n² + 4n + 2 = O(n²)
◦ f(n) = 6·2ⁿ + n² = O(2ⁿ)   /* 6·2ⁿ + n² ≤ 7·2ⁿ for n ≥ 4 */
It is also possible to write 10n² + 4n + 2 = O(n³), since
10n² + 4n + 2 ≤ 7n³ for n ≥ 2.
Although n³ is an upper bound for 10n² + 4n + 2, it is not a
tight upper bound; we can find a smaller function (n²) that
satisfies the big oh relation.
But we cannot write 10n² + 4n + 2 = O(n), since it does not
satisfy the big oh relation for sufficiently large input.
Omega () notation:
◦ The omega notation describes a lower bound on the
asymptotic growth rate of the function f.
Definition: [Omega]
◦ f(n) = (g(n)) (read as “f of n is omega
of g of n”) iff there exist positive
constants c and n0 such that f(n) 
cg(n) for all n,
n  n0.
The definition states that the function f(n) is at least c times the function g(n)
except when n is smaller than n0.
In other words,f(n) grows faster than or same rate as” g(n).
Examples
◦ f(n) = 3n+2
   3n + 2 ≥ 3n for all n ≥ 1, so 3n + 2 = Ω(n)
◦ f(n) = 10n² + 4n + 2
   10n² + 4n + 2 ≥ n² for all n ≥ 1, so 10n² + 4n + 2 = Ω(n²)
It is also possible to write 10n² + 4n + 2 = Ω(n), since 10n² + 4n + 2 ≥ n for n ≥ 0.
Although n is a lower bound for 10n² + 4n + 2, it is not a tight lower bound; we
can find a larger function (n²) that satisfies the omega relation.
But we cannot write 10n² + 4n + 2 = Ω(n³), since it does not satisfy the omega
relation for sufficiently large input.
Theta () notation:
◦ The Theta notation describes a tight bound on the
asymptotic growth rate of the function f.
Definition: [Theta]
◦ f(n) = (g(n)) (read as “f of n is theta of g of n”)
iff there exist positive constants c1, c2, and n0
such that c1g(n)  f(n)  c2g(n) for all n, n  n0.
The definition states that the function f(n) lies between c1 times the function
g(n) and c2 times the function g(n) except when n is smaller than n0.
In other words,f(n) grows same rate as” g(n).
Examples:-
◦ f(n) = 3n+2
◦ 3n <= 3n + 2 <= 4n, for all n >= 2,  3n + 2 =  (n)
◦ f(n) = 10n2
+4n+2
◦ n2
<= 10n2
+4n+2 <= 11n2
, for all n >= 5,  10n2
+4n+2 =  (n2
)
But, we can not write either 10n2
+4n+2= (n) or 10n2
+4n+2= (n3
), since
neither of these will satisfy the theta relation.
Little oh (o) notation:
◦ The little oh notation describes a strict upper bound
on the asymptotic growth rate of the function f.
Definition: [Little “oh”]
◦ f(n) = o(g(n)) (read as “f of n is little oh of g of n”)
iff lim (n→∞) f(n)/g(n) = 0.
The definition states that for every positive constant c, f(n) is less
than c·g(n) for all sufficiently large n.
In other words, f(n) “grows strictly slower than” g(n).
Examples
◦ f(n) = 3n+2 = o(n²), since lim (n→∞) (3n+2)/n² = 0.
◦ However, 3n+2 ≠ o(n), since lim (n→∞) (3n+2)/n = 3 ≠ 0.
Big-Oh, Theta, Omega and Little-oh
Tips :
Think of O(g(n)) as “less than or equal to” g(n)
◦ Upper bound: “grows slower than or same rate as” g(n)
Think of Ω(g(n)) as “greater than or equal to” g(n)
◦ Lower bound: “grows faster than or same rate as” g(n)
Think of Θ(g(n)) as “equal to” g(n)
◦ “Tight” bound: same growth rate
Think of o(g(n)) as “strictly less than” g(n)
◦ Strict upper bound: “grows strictly slower than” g(n)
(True for large N)
Functions ordered by growth rate
Function Name
1 Growth is constant
logn Growth is logarithmic
n Growth is linear
nlogn Growth is n-log-n
n²        Growth is quadratic
n³        Growth is cubic
2ⁿ        Growth is exponential
n!        Growth is factorial

1 < logn < n < nlogn < n² < n³ < 2ⁿ < n!

◦ To get a feel for how the various functions grow with n, you are advised to study the
following figures.
The following figure gives the time needed by a 1 billion
instructions per second computer to execute a program of
complexity f(n) instructions.
Probabilistic analysis
The Hiring Problem
You are using an employment agency to hire a new assistant.
The agency sends you one candidate each day.
You interview the candidate and must immediately decide whether or not to
hire that person. But if you hire, you must also fire your current office assistant
—even if it’s someone you have recently hired.
Cost to interview is ci per candidate.
Cost to hire is ch per candidate.
You want to have, at all times, the best candidate seen so far.
When you interview a candidate who is better than your current assistant, you
fire the current assistant and hire the candidate.
You will always hire the first candidate that you interview.
Problem: What is the cost of this strategy?
Pseudo-code to Model the Scenario
Hire-Assistant(n)
best := 0 // candidate 0 is a least-qualified dummy candidate
for i := 1 to n
do interview candidate i
if candidate i is better than candidate best
then best := i
hire candidate i
Cost Model:
•We are not concerned with the running time of Hire-Assistant.
•We want to determine the total cost of hiring the best candidate.
• If n candidates interviewed and m hired, then cost is nci+mch.
• Have to pay nci to interview, no matter how many we hire.
• So, focus on analyzing the hiring cost mch.
• m varies with order of candidates.
Best-Case Analysis
Best case is when the agency sends you the best
applicant on the first day.
Cost is nci+ch.
Worst-case Analysis
In the worst case, we hire all n candidates.
This happens if each candidate is better than all those who came
before. Candidates come in increasing order of quality.
Cost is nci+nch.
If this happens, we fire the agency.
What should happen in the average case?
Probabilistic analysis
Probabilistic analysis is the use of probability in the analysis of
problems.
Generally, we use probabilistic analysis to analyze the running
time of an algorithm.
Sometimes, we can also use it to analyze quantities such as the
hiring cost in procedure Hire-Assistant.
Hiring problem-Average –case Analysis
An input to the hiring problem is an ordering of the n applicants.
There are n! different inputs.
To use probabilistic analysis, we assume that the candidates arrive in a
random order .
Then we analyze our algorithm by computing an expected running time.
The expectation is taken over the distribution of the possible inputs.
Thus, we are averaging the running time over all possible inputs.
Indicator Random Variable
A powerful technique for computing the expected value of a
random variable.
Convenient method for converting between probabilities and
expectations.
Takes only 2 values, 1 and 0.
Indicator Random Variable XA=I{A} for an event A of a sample
space is defined as:
The expected value of an indicator Random Variable associated
with an event A is equal to the probability that A occurs.
X_A = I{A} = 1 if A occurs,
             0 if A does not occur.
Indicator RV – Example1
Problem: Determine the expected number of heads in one
coin flip.
Sample space is S = {H, T}, each outcome with probability 1/2.
Indicator random variable
XA = I{ coin coming up heads }
The expected number of heads obtained in one flip of the
coin is equal to the expected value of the indicator random
variable:
E[XA] = Pr{ coin comes up heads } = 1/2
Indicator RV – Example2
Problem: Determine the expected number of heads in n coin flips.
Let X be the random variable denoting the total number of heads in
n coin flips.
Define n indicator random variables, Xi, 1 ≤ i ≤ n.
Let Xi be the indicator random variable for the event that the i-th
flip results in a Head.
Xi = I{ the i-th flip results in H }
Then X = X1 + X2 + … + Xn = Σ(i=1..n) Xi.
E[Xi] = Pr{H} = 1/2, 1 ≤ i ≤ n.
Expected number of heads is
E[X] = E[ Σ(i=1..n) Xi ] = Σ(i=1..n) E[Xi] = Σ(i=1..n) 1/2 = n/2.
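The value n/2 can be confirmed by brute force in Python (a sketch; the name `expected_heads` is mine): enumerate all 2ⁿ equally likely flip sequences and average the head counts.

```python
from itertools import product
from fractions import Fraction

def expected_heads(n):
    # Enumerate every equally likely sequence of n flips and
    # average the number of heads; this is exactly the expectation
    # the indicator-variable argument computes as n terms of 1/2.
    outcomes = list(product("HT", repeat=n))
    total = sum(seq.count("H") for seq in outcomes)
    return Fraction(total, len(outcomes))
```

Exact fractions are used so the result is n/2 exactly, not a floating-point approximation.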
Back to the Hiring Problem
We want to know the expected cost of our hiring algorithm,
in terms of how many times we hire an applicant.
Random variable X(s) is the number of applicants that are
hired, given the input sequence s.
Indicator random variable Xi for applicant i: Xi = 1 if applicant i
is hired, 0 otherwise.
What is E[X]?
Probability of Hiring i-th Applicant
Probability of hiring i is probability that i is better than the
previous i-1 applicants…
Pr[Xi = 1] = 1/i.
Suppose n = 4 and i = 3.
Ex:-Probability that 3rd applicant is better than the 2 previous
ones.
1234  1243  1324  1342  1423  1432
2134  2143  2314  2341  2413  2431
3124  3142  3214  3241  3412  3421
4123  4132  4213  4231  4312  4321
Permutations in which the 3rd applicant is better than the first two: 8/24 = 1/3
Analysis of the Hiring Problem
Compute E[X], the number of candidates we expect to
hire.
E[X] = E[ Σ(i=1..n) Xi ]
     = Σ(i=1..n) E[Xi]
     = Σ(i=1..n) 1/i
     = ln n + O(1)

Expected hiring cost = O(ch·ln n).
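The expected number of hires (the harmonic number Hn = Σ(i=1..n) 1/i, which is ln n + O(1)) can be checked exhaustively in Python for small n (a sketch; the name `expected_hires` is mine):

```python
from itertools import permutations
from math import factorial
from fractions import Fraction

def expected_hires(n):
    # Average, over all n! equally likely arrival orders, of how
    # many times a new best-so-far candidate appears; each such
    # event is one hire in Hire-Assistant.
    total = 0
    for order in permutations(range(1, n + 1)):
        best = 0
        for rank in order:
            if rank > best:      # better than everyone seen so far
                best = rank
                total += 1       # one more hire
    return Fraction(total, factorial(n))
```

For n = 4 this returns 1 + 1/2 + 1/3 + 1/4 = 25/12, agreeing with the indicator-variable analysis.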
Amortized Analysis
Data structures are subjected to a sequence of operations
rather than one operation.
In this sequence, one operation possibly performs certain
modifications that have an impact on the run time of the
next operation in the sequence.
One way of assessing the worst case run time of the entire
sequence is to add worst case efficiencies for each
operation.
This may result in an excessively large and unrealistic time
complexity.
Amortized analysis concentrates on finding the average
complexity of a worst case sequence of operations.
Amortized analysis considers interdependence between
operations.
◦ For example, if an array is sorted and only a very few new elements
are added, then re-sorting this array will be much faster than sorting
it for the first time.
Amortized Analysis
AMORTIZED ANALYSIS:
You try to estimate an upper bound of the total
work T(n) required for a sequence of n operations…
Some operations may be cheap some may be expensive.
Overall, your algorithm does T(n) of work for n operations …
Therefore, by simple reasoning, the amortized cost of each
operation is T(n)/n
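The classic illustration of this is a dynamic array that doubles its capacity when full. The sketch below (not from the slides; the cost model and name `append_cost_with_doubling` are mine) counts the total work for n appends, charging 1 per element written and size-many units per reallocation:

```python
def append_cost_with_doubling(n):
    # Cost model: writing one element costs 1; when the array is
    # full, capacity doubles and all current elements are copied
    # (cost = old size). Returns total cost of n appends.
    capacity, size, total = 1, 0, 0
    for _ in range(n):
        if size == capacity:
            total += size      # copy everything into the new array
            capacity *= 2
        total += 1             # write the new element
        size += 1
    return total
```

Although a single append can cost Θ(n), the total T(n) never exceeds 3n, so the amortized cost T(n)/n per append is O(1).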
Disjoint Sets
Two sets A and B are said to be disjoint if there
are no common elements, i.e., A ∩ B = ∅.
Example:
1) S1={1,7,8,9}, S2={2,5,10}, and S3={3,4,6}.
are three disjoint sets.
We identify a set by choosing a representative element of the set.
It doesn’t matter which element we choose, but once chosen, it
can’t be changed.
Disjoint set operations:
◦ FIND-SET(x): Returns the representative of the set
containing x.
◦ UNION(i,j): Combines the two sets i and j into one new
set. A new representative is selected.
( Here i and j are the representatives of the sets )
Disjoint Sets Example
Make-Set(1)
Make-Set(2)
Make-Set(3 )
Make-Set(4)
Union(1,2)
Union(3,4)
Find-Set(2) returns 1
Find-Set(4) returns 3
Union(2,4)
[Figure: up-trees after the operations — 2 is a child of root 1, 4 is a
child of root 3; Union(2,4) then links the two trees together.]
Up-Trees
A simple data structure for implementing disjoint sets is the up-
tree.
tree.
[Figure: three up-trees. S1: root 1 with children 7, 8, 9. S2: root 5
with children 2 and 10. S3: root 3 with children 4 and 6.]
1, 7, 8, and 9 belong to the same set; 1 is the representative.
Union operation
To obtain union of two sets, set the parent field of one
of the roots to the other root.
[Figure: S1 ∪ S2 formed either by making 5 a child of root 1 (1 then
has children 7, 8, 9, and 5, with 5 keeping children 2 and 10) or,
symmetrically, by making 1 a child of root 5.]
Array Representation of Up-tree
Assume each element is associated with an integer i=1…n. From now on, we deal
only with i.
Create an integer array, p[1:n]
An array entry is the element’s parent
A -1 entry signifies that element i is the representative element.
S1={1,7,8,9}, S2={2,5,10}, and S3={3,4,6}.
Ex:-
i      1   2   3   4   5   6   7   8   9   10
p[i]  -1   5  -1   3  -1   3   1   1   1   5
Now we can implement Find by following the indices, starting at i
until we reach a node with parent value -1.
Ex:- Find(6) moves to 6’s parent, 3. Since p[3] is negative, we
have reached the root.
The operation Union(i,j) is very simple. We pass two trees with
roots i and j. The statement p[i]:=j; accomplishes the union.
Simple union algorithm :
Algorithm SimpleUnion(i,j)
{
p[i]:=j; // attaches i to j
}
Simple find algorithm
Algorithm SimpleFind(i)
{
while(p[i] > 0) do i:=p[i];
return i;
}
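The two algorithms can be sketched in Python (names are mine; 0-based indices replace the slides' 1-based ones, with roots marked by a negative parent value as above):

```python
def simple_union(p, i, j):
    # Attach root i to root j, as in SimpleUnion.
    p[i] = j

def simple_find(p, i):
    # Follow parent links until a node with a negative
    # p-value (a root) is reached.
    while p[i] >= 0:
        i = p[i]
    return i
```

With the example sets S1={1,7,8,9}, S2={2,5,10}, S3={3,4,6} shifted to indices 0-9, `simple_find` returns index 2 (element 3) for index 5 (element 6).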
Performance: ???
The performance characteristics of these algorithms are not very
good.
For example, start with n elements, each in its own set, then
process the following sequence of union-find operations:
Union(1,2), Union(2,3), Union(3,4), ……, Union(n-1,n)
Find(1), Find(2),………, Find(n)
This sequence results in the degenerate tree shown below.
[Figure: a degenerate up-tree — a single chain with root n, in which
each element i points to i+1, so element 1 sits at the bottom.]
The time taken for a union is O(1), so the n-1 unions can be
processed in time O(n).
However, each find requires following a sequence of parent
pointers from the element to be found to the root.
Since the time required to process a find for an element at level i
of a tree is O(i), the total time needed to process the n finds is
We can improve the performance of union and find algorithms by
avoiding the creation of degenerate trees.
To do this, we use weighting rule for union and collapsing rule for
find.
O( Σ(1≤i≤n) i ) = O(n²)
Weighting rule for Union(i,j)
Def:-If the number of nodes in the tree with root i
is less than the number in the tree with root j,
then make j the parent of i; otherwise make i
the parent of j.
OR
Always attach smaller tree to larger.
If we use the weighting rule, we obtain the following trees.
[Figure: starting from n singleton trees, Union(1,2), Union(1,3), …,
Union(1,n) each attach the new singleton as a child of root 1, so the
tree keeps height 2 instead of degenerating into a chain.]
To implement weighting rule, we maintain a count field in the
root of every tree.
If i is a root node, then count[i] equals the number of nodes in
the tree.
Since all nodes other than the roots of trees have a positive
number in the p field, we can maintain the count in the p field of
the roots as a negative number.
Algorithm WeightedUnion(i,j)
// Union sets with roots i, and j, i ≠ j, using
// the weighting rule. p[i]= -count[i] and p[j]= -count[j].
{
temp:=p[ i ]+p[ j ];
if( p[ i ]>p[ j ] ) then
{ // i has fewer nodes.
p[i]:=j; p[j]:=temp;
}
else
{ // j has fewer or equal nodes.
p[ j ]:=i; p[ i ]:=temp;
}
}
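A Python sketch of WeightedUnion (the name is kept; 0-based indices, with p[root] = -count as in the pseudocode):

```python
def weighted_union(p, i, j):
    # p[root] holds the negated node count. Attach the root of the
    # smaller tree to the root of the larger one and update counts.
    temp = p[i] + p[j]          # -(count[i] + count[j])
    if p[i] > p[j]:             # i has fewer nodes (less negative)
        p[i] = j
        p[j] = temp
    else:                       # j has fewer or equal nodes
        p[j] = i
        p[i] = temp
```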
In this algorithm, the time required to perform a union is again O(1),
so the n-1 unions can be processed in time O(n).
The find algorithm is unchanged.
Lemma:-
Assume that we start with a forest of trees, each having one node. Let T be a
tree with n nodes created as a result of a sequence of unions, each performed
using WeightedUnion. Then the height of T is no greater than ⌊log2 n⌋ + 1.
The maximum time required to perform a find is O(log n).
If there are n find operations, then the time required to perform this sequence is
O(n log n).
Ex:-Consider the behavior of WeightedUnion on the following
sequence of unions starting from the initial configuration
p[i]= -count[i]= -1, 1≤ i ≤ 8=n .
Union(1,2), Union(3,4), Union(5,6), Union(7,8),
Union(1,3), Union(5,7), Union(1,5)
[Figure:
a) Initial: eight height-1 trees 1..8, each with p-value [-1].
b) After Union(1,2), (3,4), (5,6), (7,8): four height-2 trees with
   roots 1, 3, 5, 7, each with count [-2].
c) After Union(1,3) and Union(5,7): two height-3 trees with roots 1
   and 5, each with count [-4].
d) After Union(1,5): one height-4 tree with root 1 and count [-8].]
Collapsing rule for find
Def:- When we do a find on an element i, we make each
node in the path from i to the root directly point to
the root.
[Figure: before and after collapsing — a chain c → d → e → f becomes a
tree in which c, d, and e all point directly at the root f.]
Ex:- Consider the previous tree created by WeightedUnion on the
sequence of unions. Now process the following finds:
Find(8), Find(7), Find(5),Find(8)
Find(8) requires going up three
parent links, Find(7) requires
going up two parent links, and
Find(5) requires one parent link.
The total cost for the above
sequence is 9 moves.
[Figure: the height-4 tree rooted at 1 (count [-8]) before collapsing.]
When CollapsingFind is used, the first Find(8) requires going up three
links and then resetting two links; after that, Find(7), Find(5), and the
second Find(8) each cost one move.
Now the total cost for the above sequence is 6 moves.
[Figure: the tree after CollapsingFind(8) — nodes 7 and 8 now point directly at root 1.]
Algorithm CollapsingFind(i)
// Find the root of the tree containing element i.
// Use the collapsing rule to collapse all nodes from i to the root.
{
r:=i;
while( p[ r ] >0 ) do r:=p[r]; // Find root.
while( i ≠ r ) do // collapse nodes from i to r.
{
s:=p[ i ]; p[ i ]:=r; i:=s;
}
return r;
}
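The two routines above can be combined into one working structure. The following Python sketch (the class and method names are mine, not from the slides) mirrors WeightedUnion and CollapsingFind, storing parent links and negated counts in a single array p[1..n]:

```python
class UnionFind:
    def __init__(self, n):
        # p[i] = -1 means i is the root of a one-node tree;
        # for a root, p[i] holds -count[i]; otherwise a parent link.
        self.p = [-1] * (n + 1)

    def weighted_union(self, i, j):
        # i and j must be roots; attach the tree with fewer nodes
        # to the root of the other (the weighting rule).
        total = self.p[i] + self.p[j]      # negated combined count
        if self.p[i] > self.p[j]:          # i has fewer nodes
            self.p[i] = j
            self.p[j] = total
        else:                              # j has fewer or equal nodes
            self.p[j] = i
            self.p[i] = total

    def collapsing_find(self, i):
        # Find the root, then make every node on the path point to it.
        r = i
        while self.p[r] > 0:
            r = self.p[r]
        while i != r:
            s = self.p[i]
            self.p[i] = r
            i = s
        return r
```

Running the example sequence Union(1,2), (3,4), (5,6), (7,8), (1,3), (5,7), (1,5) leaves root 1 with p[1] = -8, and the first collapsing_find(8) repoints 7 and 8 directly at the root.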
Divide and Conquer Technique
General Method:
 The Divide and Conquer Technique splits n inputs into k subsets, 1 < k ≤ n,
yielding k subproblems.
 These subproblems are solved, and the sub-solutions are then combined, using
a separate method, into a solution to the whole problem.
 If the subproblems are large, then the Divide and Conquer Technique will be
reapplied.
 Often subproblems resulting from a Divide and Conquer Technique are of
the same type as the original problem.
For those cases the reapplication of the Divide and Conquer Technique
is naturally expressed by a recursive algorithm.
Now smaller and smaller problems of the same kind are generated
until subproblems that are small enough to be solved without splitting
are produced.
Control Abstraction/general method for Divide and Conquer
Technique
Algorithm DAndC(p)
{
if Small(p) then return s(p);
else
{
divide p into smaller instances p1,p2,…….,pk, k≥1;
Apply DAndC to each of these subproblems;
return Combine(DAndC(p1), DAndC(p2),……,DAndC(pk));
}
}
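As one concrete instance of this control abstraction, the following Python sketch (an illustrative example, not from the slides) finds the maximum of a list by dividing it into two halves and combining the sub-answers:

```python
def dandc_max(a, lo, hi):
    # Small(p): a single element is its own maximum.
    if lo == hi:
        return a[lo]
    # Divide p into two smaller instances.
    mid = (lo + hi) // 2
    # Apply DAndC to each subproblem, then Combine with max().
    return max(dandc_max(a, lo, mid), dandc_max(a, mid + 1, hi))
```

Here Small(p) is "one element", the divide step splits the range in half, and Combine is simply max of the two sub-answers.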
If the size of p is n and the sizes of the k subproblems are n1, n2, …, nk, then the
computing time of DAndC is described by the recurrence relation

T(n) = g(n)                                  n small
T(n) = T(n1) + T(n2) + … + T(nk) + f(n)      otherwise

where T(n) is the time for DAndC on any input of size n and g(n) is the time to
compute the answer directly for small inputs.
The function f(n) is the time for dividing p and combining the solutions to
subproblems.
The complexity of many divide-and-conquer algorithms is
given by recurrences of the form

T(n) = c                  n small
T(n) = aT(n/b) + f(n)     otherwise

where a, b, and c are known constants and n is a power of b (i.e. n = bᵏ).
Applications
1.Binary search Algorithm
Iterative algorithm
Algorithm BinSearch(a,n,x)
{
low:=1; high:=n;
while( low ≤ high ) do
{
mid := ⌊(low+high)/2⌋;
if( x<a[mid] ) then high:=mid-1;
else if( x> a[mid] ) then low:=mid+1;
else return mid;
}
return 0;
}
Recursive Algorithm ( Divide and Conquer Technique)
Algorithm BinSrch (a,i,l,x)
//given an array a [i:l]of elements in nondecreasing
//order,1≤i≤l,determine whether x is present,and
//if so, return j such that x=a[j]; else return 0.
{
if(l=i) then // If small(P)
{
if(x=a[i]) then return i;
else return 0;
}
else
{ //Reduce p into a smaller subproblem.
mid := ⌊(i+l)/2⌋;
if(x=a[mid]) then return mid;
else if (x<a[mid]) then
return BinSrch (a,i,mid-1,x);
else return BinSrch(a,mid+1,l,x);
}
}
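The iterative algorithm above translates directly into a runnable sketch. This Python version (0-based indices, returning -1 instead of 0 for an unsuccessful search) is one way to write it:

```python
def bin_search(a, x):
    # Binary search over a sorted list a; returns the index of x,
    # or -1 if x is absent.
    low, high = 0, len(a) - 1
    while low <= high:
        mid = (low + high) // 2     # compare against the middle element
        if x < a[mid]:
            high = mid - 1          # search the left half
        elif x > a[mid]:
            low = mid + 1           # search the right half
        else:
            return mid
    return -1
```

On the sorted list used later in the notes, [179, 254, 285, 310, 351, 423, 450, 520, 652, 861], searching for 423 probes at most ⌈log₂ 10⌉ + 1 positions.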
Time complexity of Binary Search
If the time for dividing the list is a constant, then the computing time for binary
search is described by the recurrence relation

T(n) = c1             n = 1, c1 is a constant
T(n) = T(n/2) + c2    n > 1, c2 is a constant

Assume n = 2ᵏ. Then
T(n) = T(n/2) + c2
     = T(n/4) + c2 + c2
     = T(n/8) + c2 + c2 + c2
     …
     = T(1) + k·c2
     = c1 + c2·log₂ n = O(log n)
Time Complexity of Binary Search
Successful searches:
best average worst
O(1) O(log n) O( log n)
Unsuccessful searches :
best average worst
O(log n) O(log n) O( log n)
2. Merge Sort
1. Base case: solve the problem directly
if it is small enough (only one element).
2. Divide the problem into two or more
similar and smaller subproblems.
3. Recursively solve the subproblems.
4. Combine solutions to the subproblems.
Merge Sort: Idea
[Figure: the array A is divided into two sublists, FirstPart and SecondPart;
each part is recursively sorted; the two sorted parts are then merged, and
A is sorted.]
Trace of Merge-Sort(A, 0, 7) on A = [6, 2, 8, 4, 3, 7, 5, 1]:
 Divide: [6 2 8 4] and [3 7 5 1]
 Merge-Sort(A, 0, 3) divides [6 2 8 4] into [6 2] and [8 4]
 Merge-Sort(A, 0, 1) divides [6 2] into the base cases [6] and [2];
Merge(A, 0, 0, 1) gives [2 6]
 Merge-Sort(A, 2, 3) divides [8 4] into the base cases [8] and [4];
Merge(A, 2, 2, 3) gives [4 8]
 Merge(A, 0, 1, 3) combines [2 6] and [4 8] into [2 4 6 8];
Merge-Sort(A, 0, 3) returns
 Merge-Sort(A, 4, 7) similarly sorts [3 7 5 1]; Merge(A, 4, 5, 7)
completes it to [1 3 5 7]
 Merge(A, 0, 3, 7) combines the two halves into [1 2 3 4 5 6 7 8] —
Merge-Sort(A, 0, 7) is done.
Ex:- [ 179, 254, 285, 310, 351, 423, 450, 520, 652, 861 ]
Tree of calls of merge sort:
(1,10) → (1,5), (6,10)
(1,5) → (1,3), (4,5);   (6,10) → (6,8), (9,10)
(1,3) → (1,2), (3,3);   (4,5) → (4,4), (5,5);   (6,8) → (6,7), (8,8);   (9,10) → (9,9), (10,10)
(1,2) → (1,1), (2,2);   (6,7) → (6,6), (7,7)
Merge Sort: Algorithm
Algorithm MergeSort(low, high)
// Sorts the elements a[low], …, a[high], which reside in the global array
// a[1:n], into ascending order.
// Small(p) is true if there is only one element to sort; in this case the list is
// already sorted.
{
if( low < high ) then // if there is more than one element
{
mid := ⌊(low+high)/2⌋;
MergeSort(low, mid);
MergeSort(mid+1, high);
Merge(low, mid, high);
}
}
Merge-Sort: Merge Example
[Figure: Merge(low, mid, high) applied to A = [2, 3, 7, 8, 1, 4, 5, 6], whose
sorted halves are L = a[low..mid] = [2, 3, 7, 8] and R = a[mid+1..high] =
[1, 4, 5, 6]. Index i scans L starting at i = low, j scans R starting at
j = mid+1, and k indexes the auxiliary array B starting at k = low. At each
step the smaller of a[i] and a[j] is copied into B[k] and the corresponding
index advances; B fills up as 1, 2, 3, 4, 5, 6, 7, 8, and is then copied
back into A.]
Algorithm Merge(low,mid,high)
// a[low:high] is a global array containing two sorted subsets in a[low:mid]
// and in a[mid+1:high]. The goal is to merge these two sets into a single set
// residing in a [low:high]. b[ ] is a temporary global array.
{
h:=low; i:=low; j:=mid+1;
while( h ≤ mid ) and ( j ≤ high ) do
{
if( a[h] ≤ a[j] ) then
{
b[i]:=a[h]; h:=h+1;
}
else
{
b[i]:=a[j]; j:=j+1;
}
i:=i+1;
}
if( h > mid ) then
for k:=j to high do
{
b[i] := a[k]; i:= i+1;
}
else
for k:=h to mid do
{
b[i] := a[k]; i:= i+1;
}
for k:= low to high do a[k]:=b[k];
}
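Putting MergeSort and Merge together, a runnable Python sketch of the same scheme (inclusive indices as in the pseudocode; the function names are mine) looks like this:

```python
def merge(a, low, mid, high):
    # Merge the sorted runs a[low..mid] and a[mid+1..high] (inclusive)
    # through a temporary list b, then copy b back into a.
    b = []
    h, j = low, mid + 1
    while h <= mid and j <= high:
        if a[h] <= a[j]:
            b.append(a[h]); h += 1
        else:
            b.append(a[j]); j += 1
    b.extend(a[h:mid + 1])      # one of these two extends is empty
    b.extend(a[j:high + 1])
    a[low:high + 1] = b

def merge_sort(a, low, high):
    # Recursively split until one element remains, then merge back up.
    if low < high:
        mid = (low + high) // 2
        merge_sort(a, low, mid)
        merge_sort(a, mid + 1, high)
        merge(a, low, mid, high)
```

Calling merge_sort(a, 0, len(a) - 1) on A = [6, 2, 8, 4, 3, 7, 5, 1] reproduces the trace above and sorts the list in place.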
Merge-Sort Analysis
[Recursion tree: a problem of size n splits into two of size n/2, four of size
n/4, …, down to subproblems of size ≤ 2; each level does O(n) merging work.]
Merge-Sort Time Complexity
If the time for the merging operation is proportional to n, then the
computing time for merge sort is described by the recurrence relation

T(n) = c1               n = 1, c1 is a constant
T(n) = 2T(n/2) + c2·n   n > 1, c2 is a constant

Assume n = 2ᵏ. Then
T(n) = 2T(n/2) + c2·n
     = 2(2T(n/4) + c2·n/2) + c2·n
     = 4T(n/4) + 2c2·n
     …
     = 2ᵏ·T(1) + k·c2·n
     = c1·n + c2·n·log₂ n = O(n log n)
Summary
Merge-Sort
◦ Most of the work is done in combining the solutions.
◦ Best case takes O(n log n) time
◦ Average case takes O(n log n) time
◦ Worst case takes O(n log n) time
3. Quick Sort
Divide:
◦ Pick any element as the pivot, e.g., the first element
◦ Partition the remaining elements into
FirstPart, which contains all elements < pivot
SecondPart, which contains all elements > pivot
Recursively sort FirstPart and SecondPart.
Combine: no work is necessary, since sorting is done in place.
The pivot divides a into two sublists x and y.
[Figure: for a = [4, 2, 7, 8, 1, 9, 3, 6, 5] with pivot 4, partitioning yields
x = [2, 1, 3] and y = [7, 8, 9, 6, 5] around 4; recursively sorting each
sublist produces 1 2 3 4 5 6 7 8 9.]
Process:
Keep moving i from the left side as long as a[ i ] < pivot, and j from the
right side as long as a[ j ] > pivot.
Ex:- a = [85, 24, 63, 95, 17, 31, 45, 98] with pivot 85:
i stops at 95 (the first element ≥ the pivot) and j stops at 45 (the first
element ≤ the pivot, scanning from the right).
If i < j, interchange the ith and jth elements and then continue the process:
85 24 63 45 17 31 95 98
Continuing, i stops at 95 and j stops at 31, so now i ≥ j.
If i ≥ j, interchange the jth and pivot elements and then divide the list
into two sublists:
31 24 63 45 17   85   95 98
Two sublists: [31, 24, 63, 45, 17] and [95, 98], with 85 in its final position j.
Recursively sort FirstPart and SecondPart:
QuickSort( low, j-1 )   QuickSort( j+1, high )
Quick Sort Algorithm :
Algorithm QuickSort(low,high)
// Sorts the elements a[low], …, a[high], which reside
// in the global array a[1:n], into ascending order;
// a[n+1] is considered to be defined and must be ≥ all the
// elements in a[1:n].
{
if( low < high ) then // if there is more than one element
{ // divide p into two subproblems.
j :=Partition(low,high);
// j is the position of the partitioning element.
QuickSort(low,j-1);
QuickSort(j+1,high);
// There is no need for combining solutions.
}
}
Algorithm Partition(l,h)
// Partition a[l:h] about the pivot a[l]; returns the pivot's final position.
{
pivot := a[l]; i := l; j := h+1;
while( i < j ) do
{
i++;
while( a[ i ] < pivot ) do i++;
j--;
while( a[ j ] > pivot ) do j--;
if( i < j ) then Interchange(i,j); // interchange ith and jth elements.
}
Interchange(l,j); return j; // interchange pivot and jth element.
}
Algorithm Interchange(x,y)
{
temp := a[x];
a[x] := a[y];
a[y] := temp;
}
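A runnable Python version of the same partitioning scheme (0-based indices; the j-scan compares with a[j] > pivot, and an explicit bound check on i stands in for the a[n+1] sentinel) might look like:

```python
def partition(a, l, h):
    # Partition a[l..h] (inclusive) about the pivot a[l];
    # returns the pivot's final position.
    pivot = a[l]
    i, j = l, h + 1
    while True:
        i += 1
        while i <= h and a[i] < pivot:
            i += 1
        j -= 1
        while a[j] > pivot:
            j -= 1
        if i >= j:
            break
        a[i], a[j] = a[j], a[i]   # interchange ith and jth elements
    a[l], a[j] = a[j], a[l]       # interchange pivot and jth element
    return j

def quick_sort(a, low, high):
    # Sort a[low..high] in place; no combine step is needed.
    if low < high:
        j = partition(a, low, high)
        quick_sort(a, low, j - 1)
        quick_sort(a, j + 1, high)
```

On [85, 24, 63, 95, 17, 31, 45, 98] the first call to partition leaves 85 at index 5 with [31, 24, 63, 45, 17] on its left and [95, 98] on its right, matching the walkthrough above.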
Time complexity analysis
A worst/bad case
[Figure: on the input 1, 2, …, 8, every partition strips off only the pivot,
leaving sublists 2…8, 3…8, …, 8 of sizes n-1, n-2, …, 1.]
The partitioning costs at the successive levels are c·n, c·(n-1), c·(n-2), …,
3c, 2c, 1c, so the total time is
c( n + (n-1) + … + 2 + 1 ) = O(n²).
This happens only if
• the input is sorted
• the input is reversely sorted
A best/good case
It occurs only if each partition divides the list into two equal-size sublists.
[Recursion tree: n splits into two sublists of size n/2, four of size n/4, …;
each level costs O(n) and there are about log₂ n levels.]
Total time: O(n log n)
Summary
Quick-Sort
◦ Most of the work is done in partitioning
◦ Best case takes O(n log n) time
◦ Average case takes O(n log n) time
◦ Worst case takes O(n²) time
4. Strassen’s Matrix Multiplication
Basic Matrix Multiplication
void matrix_mult()
{
for (i = 1; i <= N; i++) {
for (j = 1; j <= N; j++) {
C[i,j] = 0;
for (k = 1; k <= N; k++) {
C[i,j] = C[i,j] + A[i,k]*B[k,j];
}
}
}
}
The time complexity of the above algorithm is T(n) = O(n³).
Let A and B be two n×n matrices. The product C=AB is also an n×n matrix.
Divide and Conquer technique
We want to compute the product C=AB, where each of A,B, and C are
n×n matrices.
Assume n is a power of 2.
If n is not a power of 2, add enough rows and columns of zeros.
We divide each of A,B, and C into four n/2×n/2 matrices, rewriting the
equation C=AB as follows:
Then,
C11=A11B11+A12B21
C12=A11B12+A12B22
C21=A21B11+A22B21
C22=A21B12+A22B22
Each of these four equations specifies two multiplications of n/2×n/2 matrices
and the addition of their n/2×n/2 products.
We can derive the following recurrence relation for the time T(n) to multiply
two n×n matrices:

T(n) = c1                  n ≤ 2
T(n) = 8T(n/2) + c2·n²     n > 2

T(n) = O(n³)
• This method is no faster than the ordinary method.
[Figure: the partitioned matrices in block form:
C11 C12     A11 A12     B11 B12
C21 C22  =  A21 A22  ×  B21 B22 ]
T(n) = 8T(n/2) + c2·n²
     = 8( 8T(n/4) + c2·(n/2)² ) + c2·n²
     = 8²·T(n/4) + 2c2·n² + c2·n²
     = 8²( 8T(n/8) + c2·(n/4)² ) + 2c2·n² + c2·n²
     = 8³·T(n/8) + 4c2·n² + 2c2·n² + c2·n²
     :
     = 8ᵏ·T(1) + ( 2ᵏ⁻¹ + … + 4 + 2 + 1 )·c2·n²
     = 8^(log₂ n)·c1 + (n-1)·c2·n²
     = n³·c1 + c2·n³ - c2·n² = O(n³)
Strassen’s method
Matrix multiplications are more expensive than matrix additions or
subtractions ( O(n³) versus O(n²) ).
Strassen discovered a way to compute the product using only 7
multiplications and 18 additions or subtractions.
His method involves computing seven n/2×n/2 matrices M1, M2, M3, M4, M5,
M6, and M7; the Cij's are then calculated from these matrices.
Formulas for Strassen’s Algorithm
M1 = (A11 + A22) × (B11 + B22)
M2 = (A21 + A22) × B11
M3 = A11 × (B12 – B22)
M4 = A22 × (B21 – B11)
M5 = (A11 + A12) × B22
M6 = (A21 – A11) × (B11 + B12)
M7 = (A12 – A22) × (B21 + B22)

C11 = M1 + M4 - M5 + M7
C12 = M3 + M5
C21 = M2 + M4
C22 = M1 + M3 - M2 + M6

C11 C12     A11 A12     B11 B12     M1 + M4 - M5 + M7    M3 + M5
C21 C22  =  A21 A22  ×  B21 B22  =  M2 + M4              M1 + M3 - M2 + M6
The resulting recurrence relation for T(n) is

T(n) = c1                  n ≤ 2
T(n) = 7T(n/2) + c2·n²     n > 2

T(n) = 7ᵏ·T(1) + c2·n²·( 1 + 7/4 + (7/4)² + (7/4)³ + … + (7/4)ᵏ⁻¹ )
     = 7^(log₂ n)·c1 + O( c2·n²·(7/4)^(log₂ n) )
     = n^(log₂ 7)·c1 + O( c2·n²·n^(log₂ 7 - log₂ 4) )
     = O( n^(log₂ 7) ) ≈ O( n^2.81 )
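The formulas above can be turned into a small recursive implementation. This Python sketch (assuming n is a power of 2, using plain nested lists, and recursing down to 1×1 blocks) computes the seven M products and assembles C from them:

```python
def add(X, Y):
    # Elementwise matrix addition.
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def sub(X, Y):
    # Elementwise matrix subtraction.
    return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def strassen(A, B):
    # Multiply two n x n matrices (n a power of 2) with 7 recursive
    # products, following the M1..M7 formulas above.
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    m = n // 2
    # Split A and B into n/2 x n/2 quadrants.
    A11 = [r[:m] for r in A[:m]]; A12 = [r[m:] for r in A[:m]]
    A21 = [r[:m] for r in A[m:]]; A22 = [r[m:] for r in A[m:]]
    B11 = [r[:m] for r in B[:m]]; B12 = [r[m:] for r in B[:m]]
    B21 = [r[:m] for r in B[m:]]; B22 = [r[m:] for r in B[m:]]
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    C11 = add(sub(add(M1, M4), M5), M7)   # M1 + M4 - M5 + M7
    C12 = add(M3, M5)                     # M3 + M5
    C21 = add(M2, M4)                     # M2 + M4
    C22 = add(sub(add(M1, M3), M2), M6)   # M1 + M3 - M2 + M6
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bot
```

For the 2×2 case this reduces to the seven scalar products; e.g. multiplying [[1,2],[3,4]] by [[5,6],[7,8]] yields [[19,22],[43,50]], as with the ordinary method.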
DAA-Introduction-Divide and Conquer Technique

  • 1. DAA Introduction , Divide &Conquer Technique B.Sailaja, Asst Prof,Dept of CSE, Vidya Jyothi Institute of Technology
  • 2. Algorithm Definition An algorithm is a finite set of instructions that accomplishes a particular task. All algorithms must satisfy the following criteria. Criteria 1. Input : Zero or more quantities are externally supplied. 2. Output : At least one quantity is produced. 3. Definiteness : Each instruction is clear and unambiguous. Statements such as “add 6 or 7 to x” or “Compute 5/0” are not permitted”
  • 3. 4. Finiteness : The algorithm should terminate after a finite number of steps. 5. Effectiveness : Instruction is basic enough to be carried out.
  • 4. Pseudo code for expressing algorithms We present most of our algorithms using pseudo code that resembles C and Pascal. 1. Comments begin with // and continue until end of the line. 2. Blocks are indicated with matching braces: { and }. i. A compound statement ii. Body of a procedure. 3. i. An identifier begins with a letter. ii. The data types of variables are not explicitly declared. iii. The types will be clear from the context. iv. Whether a variable is global or local to a procedure will also be evident from the context. v. We assume simple data types such as integer, float, char, boolean, and so on. vi. Compound data types can be formed with records.
  • 5. node = record { datatype_1 data_1; : datatype_n data_n; node *link } data items of a record can be accessed with  and period( . ) C style :- struct node { datatype_1 data_1; : datatype_n data_n; struct node *link }
  • 6. 4. Assignment of values to variables is done using the assignment statement. < variable > := < expression > 5. There are two boolean values true and false. To produce these values, logical operators and, or and not and the relational operators <, ≤,=, ≠, ≥ and > are provided. 6. Elements of multidimensional arrays are accessed using [ and ]. For example the (i,j)th element of the array A is denoted as A[i,j]. 7. The following looping statements are used: for, while, and repeat until.
  • 7. The general form of a while loop: while( condition ) do { statement_1; : statement_n; }
  • 8. The general form of a for loop: for variable := value1 to value2 step step do ◦ Here value1, value2, and step are arithmetic expressions. ◦ The clause “step step” is optional and taken as +1 if it does not occur. ◦ step could be either positive or negative. ◦ variable is tested for termination at the start of each iteration.
  • 9. The for loop can be implemented as a while loop as follows: variable:=value1; incr:=step; while( ( variable – value2)*step ≤ 0) do { <statement 1> : <statement n> variable :=variable+incr; }
  • 10. The general form of a repeat-until loop: repeat <statement 1> : <statement n> until ( condition ) The statements are executed as long as condition is false.
  • 11. 8. A conditional statement has the following forms: if< condition > then < statement > if< condition > then < statement 1> else < statement 2> 9. Input and output are done using the instructions read and write.
  • 12. 10. Procedure or function starts with the word Algorithm. General form : Algorithm Name( <parameter list> ) { body } where Name is the name of the procedure. ◦ Simple variables to procedures are passed by value. ◦ Arrays and records are passed by reference
  • 13. Ex:-Algorithm that finds and returns the maximum of n given numbers Algorithm(a,n) // a is an array of size n { Result:=a[1]; for i:=2 to n do if a[i]>Result then Result:=a[i]; return Result; }
  • 14. Ex:-Write an algorithm to sort an array of n integers using bubble sort. Algorithm(a,n) // a is an array of size n { for i:=1 to n-1 do { for j:=1 to n-i do { if( a[j] >a[j+1] ) then t=:a[j]; a[j]:=a[j+1]; a[j]:=t; } } }
  • 15. Performance Analysis (machine independent) There are many things upon which the performance will depend. ◦ Does the program efficiently use primary and secondary storage? ◦ Is the program's running time acceptable for the task? ◦ Does it do what we want it to do? ◦ Does it work correctly according to the specifications of the task? ◦ Does the program contain documentation that shows how to use it and how it works? ◦ Is the program's code readable?
  • 16. How to achieve them? ◦ Good programming style, experience, and practice ◦ Discuss and think. Practice Practice Practice Think Think Think
  • 17. Space Complexity The space needed by a program has the following components. ◦ Instruction space ◦ Instruction space is the space needed to store the compiled version of the program instructions. ◦ The amount of instruction space that is needed depends on the compiler used to compile the program into machine code.
  • 18. ◦ Data space ◦ Data space is the space needed to store all constant and variable values. Data space has two components. ◦ Space needed by constants( ex; 1 and 2 in max of n num algorithm) and simple variables( such as i, j, n etc). ◦ Space needed by a dynamically allocated objects such as arrays and class instances. ◦ Space taken by the variables and constants varies from language to language and platform to platform.
  • 19. ◦ Environmental stack space ◦ The environment stack is used to save information needed to resume execution of partially completed functions. ◦ Each time a function is invoked the following data are saved on the environment stack. ◦ The return address . ◦ The values of all local variables and formal parameters in the function being invoked( necessary for recursive functions only).
  • 20. Recursive algorithm Algorithm rfactorial(n) // n is an integer { fact=1; if(n=1 or n=0) return fact; else fact=n*rfactorial(n-1); return fact; } Each time the recursive function rfactorial is invoked, the current values of n and fact and the program location to return to on completion are saved in the environment stack.
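As a runnable counterpart to the pseudocode (a minimal Python sketch; Python likewise saves n and the return location on its call stack for each invocation, as in the trace that follows):

```python
def rfactorial(n):
    """Recursive factorial; each invocation pushes n and a return
    location onto the call stack, mirroring the environment stack."""
    if n == 0 or n == 1:
        return 1
    return n * rfactorial(n - 1)
```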
  • 21. void main() {int R=rfactorial(3); printf(R);} R: int rfactorial(n){ fact=1; if (n=0 or n=1)return fact; else fact=n*rfactorial(n-1); return fact; } n : 3 int rfactorial(n){ fact=1 if (n=0 or n=1)return fact else fact=n*rfactorial(n-1); return fact; } int rfactorial(n){ if (n=0 or n=1) return 1; else fact=n*rfactorial(n-1); return fact; } n : 2 n : 1 2000 3 1 fact n fact=3* 3000 2 1 fact n fact=2* 4000 1 1 fact n fact=1 location to return Execution Stack
  • 22. Summary of space complexity The space needed by a program depends on several factors. We cannot make an accurate analysis of the space requirements of a program unless we know the computer or compiler that will be used. However, we can determine the components that depend on the characteristics of the problem instance (e.g., the number of inputs and outputs or the magnitude of the numbers involved) to be solved. Ex:- 1. Space requirements of a program that sorts n elements can be expressed as a function of n. 2. Space requirements of a program that adds two m×n matrices can be expressed as a function of m and n.
  • 23. ◦ The size of the instruction space is independent of the problem instance to be solved. ◦ The contribution of the constants and simple variables to the data space is also independent of the problem instance to be solved. ◦ Most of the dynamically allocated memory( ex., arrays, class instances etc) depends on problem instance to be solved. ◦ The environmental stack space is generally independent of the problem instance unless recursive functions are in use. Contd..
  • 24. Contd.. Therefore, we can divide the total space needed by a program into two parts: i) Fixed Space Requirements (C) Independent of the characteristics of the problem instance ( I ) ◦ instruction space ◦ space for simple variables and constants. ii) Variable Space Requirements (SP(I)) Depend on the characteristics of the problem instance ( I ) ◦ number of inputs and outputs associated with I ◦ recursive stack space ( formal parameters, local variables, return address ). ◦ Therefore, the space requirement of any program P can be written as S(P) = C + SP( instance characteristics )
  • 25. Note: ◦ When analyzing the space complexity of an algorithm, we concentrate only on estimating Sp (Instance characteristics ). ◦ We do not concentrate on estimating fixed part c . ◦ We need to identify the instance characteristics of the problem to measure Sp
  • 26. Example1 Algorithm abc(a,b,c) { return a+b+b*c+(a+b-c)/(a+b)+4.0; } Problem instance characterized by the specific values of a,b,and c. If we assume one word (4 bytes) is adequate to store the values of each a, b, and c , then the space needed by abc is independent of the instance characteristics. Therefore, Sabc( instance characteristics)=0
  • 27. Example2 Algorithm sum(a,n) { s:=0; for i:=1 to n do s:=s+a[i]; return s; } Problem instance characterized by n. The amount of space needed does not depend on the value of n. Therefore, Ssum(n)=0 Recall: Address of the first element of the array will be passed .
  • 28. Example3 Algorithm RSum(a,n) { if(n ≤ 0) then return 0; else return RSum(a,n-1)+a[n]; }
Type                              Name   Number of bytes
formal parameter: int             a      2
formal parameter: int             n      2
return address (used internally)         2
Total per one recursive call             6
The depth of recursion is n+1 (calls for n, n-1, …, 1, 0), therefore SRSum(n) = 6(n+1).
  • 29. Example4 Calculate space complexity sp(n) for the following algorithm Algorithm rfactorial(n) { fact=1; if (n=0 or n=1)return fact; else fact=n*rfactorial(n-1); return fact; }
  • 30. Contd.. Time Complexity: T(P)=C+TP(I) ◦ The time, T(P), taken by a program, P, is the sum of its compile time C and its run (or execution) time, TP(I). ◦ The compile time does not depend on the instance characteristics. ◦ We will concentrate on estimating run time Tp(I).
  • 31. If we know the characteristics of the compiler to be used, we can determine the ◦ No. of additions , subtractions, multiplications, divisions, compares, and so on. Then we can obtain an expression for Tp(n) Of the form TP(n)=caADD(n)+csSUB(n)+cmMUL(n)+cdDIV(n)+…….. where, ◦ n denotes the instance characteristics. ◦ ca, cs, cm, cd and so on denote the time needed for an addition, subtraction, multiplication, division and so on. And ◦ ADD, SUB, MUL, DIV, and so on are functions whose values are the no.of additions, subtractions, multiplications, divisions, and so on.
  • 32. Obtaining such an exact formula is an impossible task, since the time needed for an addition, subtraction, and so on depends on the numbers being added, subtracted, and so on. The value of Tp(n) for any given n can be obtained only experimentally. Even with the experimental approach, one could face difficulties. In a multiuser system, the execution time of a program p depends on the number of other programs running on the computer at the time program p is running.
  • 33. Contd.. As there were some problems in determining the execution time using the earlier methods, we go one step further and count only the number of program steps. Program step: a program step is loosely defined as a syntactically or semantically meaningful program segment whose execution time is independent of the instance characteristics. ◦ Example result := a + b + b * c + (a + b - c) / (a + b) + 4.0; sum := a + b + c;
  • 34. The number of steps assigned to any program statement depends on the kind of statement. Comments are counted as zero number of steps. An assignment statement which does not involve any calls to other functions counted as one step. For loops, such as the for, while, and repeat-until, we consider the step counts only for the control part of the statement. The control parts for for and while statements have the following forms: for i:= <expr1> to <expr2> do while ( <expr> ) do
  • 35. Each execution of the control part of a while statement is one, unless <expr> is a function of instance characteristics. The step count for each execution of the control part of a for statement is one, unless <expr1> and <expr2> are functions of the instance characteristics. The step count for each execution of the condition of a conditional statements is one, unless condition is a function of instance characteristics. If any statement ( assignment statement, control part, condition etc.) involves function calls, then the step count is equal to the number of steps assignable to the function plus one.
  • 36. Methods to compute the step count 1) Introduce a global variable count into the program with initial value zero. ◦ Statements to increment count by the appropriate amount are introduced into the program. ◦ The value of count by the time the program terminates is the number of steps taken by the program. 2) Tabular method ◦ Determine the total number of steps contributed by each statement as steps per execution × frequency ◦ Add up the contributions of all statements
  • 37. Method-I: Introduce variable count into programs EX:- Iterative sum of n numbers Algorithm sum(a, n) { s:=0; count:=count+1; // for the assignment statement for i:=1 to n do { count:=count+1; // for the for-statement control s:=s+a[i]; count:=count+1; // for the assignment statement } count:=count+1; // for the last test of the for loop count:=count+1; // for the return return s; } 2n + 3 steps Note : Step count tells us how the run time for a program changes with changes in the instance characteristics.
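The same count instrumentation can be reproduced in Python (0-based loop; the helper name is mine) to confirm the 2n + 3 total:

```python
def sum_with_count(a):
    """Iterative sum instrumented with a step counter, mirroring the
    pseudocode; returns (sum, step count), and the count comes to 2n+3."""
    count = 0
    s = 0
    count += 1              # for the assignment s := 0
    for x in a:
        count += 1          # for each test of the for-loop control
        s += x
        count += 1          # for the assignment s := s + a[i]
    count += 1              # for the last (failing) test of the for loop
    count += 1              # for the return
    return s, count
```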
  • 38. Ex:- Addition of two m×n matrices Algorithm Add(a,b,c,m,n) { for i:=1 to m do { for j:=1 to n do { c[i,j]:=a[i,j]+b[i,j]; } } } 2mn + 2m + 1 steps
  • 39. EX:- Recursive sum of n numbers Algorithm RSum(a,n) { count:=count+1; // for the if conditional if(n ≤ 0) then { count:=count+1; // for the return return 0; } else { count:=count+1; // for the addition, function invocation and return return RSum(a,n-1)+a[n]; } }
  • 40. When analyzing a recursive program for its step count, we often obtain a recursive formula for the step count. We obtain the following recursive formula for the above (RSum) algorithm: tRSum(n) = 2 if n=0; tRSum(n) = 2 + tRSum(n-1) if n>0.
  • 41. One way of solving such a recursive formula is by repeated substitution for each occurrence of the function tRSum on the right side until all such occurrences disappear: tRSum(n) = 2 + tRSum(n-1) = 2 + 2 + tRSum(n-2) = 2(2) + tRSum(n-2) : : = n(2) + tRSum(n-n) = 2n + tRSum(0) = 2n + 2 The step count for RSum is 2n+2.
  • 42. Method-II: Tabular method EX:- Iterative sum of n numbers
Statement              s/e   frequency   Total steps
Algorithm sum(a, n)    0     --          0
{                      0     --          0
  s:=0;                1     1           1
  for i:=1 to n do     1     n+1         n+1
    s:=s+a[i];         1     n           n
  return s;            1     1           1
}                      0     --          0
Total                                    2n+3
  • 43. EX:- Addition of two m×n matrices
Statement                      s/e   frequency   Total steps
Algorithm Add(a,b,c,m,n)       0     --          0
{                              0     --          0
  for i:=1 to m do             1     m+1         m+1
    for j:=1 to n do           1     m(n+1)      mn+m
      c[i,j]:=a[i,j]+b[i,j];   1     mn          mn
}                              0     --          0
Total                                            2mn+2m+1
  • 44. EX:- Recursive sum of n numbers
Statement                          s/e   freq(n=0)   freq(n>0)   Total(n=0)   Total(n>0)
Algorithm RSum(a,n)                0     --          --          0            0
{                                  0     --          --          0            0
  if( n ≤ 0 ) then                 1     1           1           1            1
    return 0;                      1     1           0           1            0
  else return RSum(a,n-1)+a[n];    1+x   0           1           0            1+x
}                                  0     --          --          0            0
Total                                                            2            2+x
where x = tRSum(n-1)
  • 45. Best, Worst, Average Cases  Not all inputs of a given size take the same number of program steps.  Sequential search for K in an array of n integers: • Begin at first element in array and look at each element in turn until K is found. 1. Best-Case Step count:- Minimum number of steps executed by the algorithm for the given parameters. 2. Worst-Case Step count:- Maximum number of steps executed by the algorithm for the given parameters. 3.Average-Case Step count:- Average number of steps executed by an algorithm.
  • 46. Contd.. [Figure: running times from 1 ms to 5 ms for seven possible inputs A through G, with the best-case and worst-case times marked.] Average-case time = ( A time + B time + … + G time ) / 7 ( total number of possible inputs )
  • 47. Inexactness of step count. • Both the instructions x:=y; and x:=y+z+(x/y); count as one step. • Therefore, two analysts may arrive at 4n^2+6n+2 and 7n^2+3n+4 as the step count for the same program. • Any step count of the form c1·n^2+c2·n+c3 could be a correct step count for the program. • Because of the inexactness of what a step count stands for, the exact step count is not very useful for comparison of algorithms.
  • 48. Asymptotic efficiency Asymptotic efficiency means the study of an algorithm's efficiency for large inputs. To compare two algorithms with running times f(n) and g(n), we need a rough measure that characterizes how fast each function grows as n grows. Hint: use the rate of growth. Compare functions asymptotically! (i.e., for large values of n)
  • 49. Rate of Growth Ex:- f(n) = n^2 + 100n + log10(n) + 1000
n         f(n)             n^2: value (%)           100n: value (%)      log10(n): value (%)   1000: value (%)
1         1,101            1 (0.1)                  100 (9.1)            0 (0.0)               1,000 (90.83)
10        2,101            100 (4.76)               1,000 (47.6)         1 (0.05)              1,000 (47.60)
100       21,002           10,000 (47.6)            10,000 (47.6)        2 (0.01)              1,000 (4.76)
1,000     1,101,003        1,000,000 (90.8)         100,000 (9.1)        3 (0.0003)            1,000 (0.09)
10,000    101,001,004      100,000,000 (99.0)       1,000,000 (0.99)     4 (0.0)               1,000 (0.001)
100,000   10,010,001,005   10,000,000,000 (99.9)    10,000,000 (0.099)   5 (0.0)               1,000 (0.00)
  • 50. The low order terms and constants in a function are relatively insignificant for large n: n^2 + 100n + log10(n) + 1000 ~ n^2, i.e., we say that n^2 + 100n + log10(n) + 1000 and n^2 have the same rate of growth. Some more examples: n^4 + 100n^2 + 10n + 50 is ~n^4; 10n^3 + 2n^2 is ~n^3; n^3 - n^2 is ~n^3; constants: ◦ 10 is ~1 ◦ 1273 is ~1
  • 51. Asymptotic Notations Asymptotic notation describes the behavior of functions for large inputs. Big Oh (O) notation: ◦ The big oh notation describes an upper bound on the asymptotic growth rate of the function f. Definition: [Big “oh”] ◦ f(n) = O(g(n)) (read as “f of n is big oh of g of n”) iff there exist positive constants c and n0 such that f(n) ≤ c·g(n) for all n, n ≥ n0.
  • 52. The definition states that the function f(n) is at most c times the function g(n) except when n is smaller than n0. In other words, f(n) “grows slower than or at the same rate as” g(n). When providing an upper-bound function g for f, we normally use a single term in n. Examples ◦ f(n) = 3n+2: 3n+2 ≤ 4n for all n ≥ 2, so 3n+2 = O(n). ◦ f(n) = 10n^2+4n+2: 10n^2+4n+2 ≤ 11n^2 for all n ≥ 5, so 10n^2+4n+2 = O(n^2). ◦ f(n) = 6·2^n + n^2 = O(2^n) /* 6·2^n + n^2 ≤ 7·2^n for n ≥ 4 */
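These inequalities can be spot-checked numerically; a small Python sketch (the helper name and the finite test range are my own choices — a finite check is evidence, not a proof):

```python
def holds(f, g, c, n0, upto=1000):
    """Numerically check f(n) <= c*g(n) for all n0 <= n <= upto --
    finite evidence (not a proof) that f(n) = O(g(n))."""
    return all(f(n) <= c * g(n) for n in range(n0, upto + 1))

# 3n+2 <= 4n for all n >= 2, so 3n+2 = O(n)
ex1 = holds(lambda n: 3 * n + 2, lambda n: n, c=4, n0=2)
# 10n^2+4n+2 <= 11n^2 for all n >= 5, so 10n^2+4n+2 = O(n^2)
ex2 = holds(lambda n: 10 * n * n + 4 * n + 2, lambda n: n * n, c=11, n0=5)
# 6*2^n + n^2 <= 7*2^n for all n >= 4, so 6*2^n + n^2 = O(2^n)
ex3 = holds(lambda n: 6 * 2**n + n * n, lambda n: 2**n, c=7, n0=4, upto=64)
```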
  • 53. It is also possible to write 10n^2+4n+2 = O(n^3), since 10n^2+4n+2 ≤ 7n^3 for n ≥ 2. Although n^3 is an upper bound for 10n^2+4n+2, it is not a tight upper bound; we can find a smaller function (n^2) that satisfies the big oh relation. But we cannot write 10n^2+4n+2 = O(n), since it does not satisfy the big oh relation for sufficiently large input.
  • 54. Omega (Ω) notation: ◦ The omega notation describes a lower bound on the asymptotic growth rate of the function f. Definition: [Omega] ◦ f(n) = Ω(g(n)) (read as “f of n is omega of g of n”) iff there exist positive constants c and n0 such that f(n) ≥ c·g(n) for all n, n ≥ n0.
  • 55. The definition states that the function f(n) is at least c times the function g(n) except when n is smaller than n0. In other words, f(n) “grows faster than or at the same rate as” g(n). Examples ◦ f(n) = 3n+2: 3n+2 ≥ 3n for all n ≥ 1, so 3n+2 = Ω(n). ◦ f(n) = 10n^2+4n+2: 10n^2+4n+2 ≥ n^2 for all n ≥ 1, so 10n^2+4n+2 = Ω(n^2). It is also possible to write 10n^2+4n+2 = Ω(n), since 10n^2+4n+2 ≥ n for n ≥ 0. Although n is a lower bound for 10n^2+4n+2, it is not a tight lower bound; we can find a larger function (n^2) that satisfies the omega relation. But we cannot write 10n^2+4n+2 = Ω(n^3), since it does not satisfy the omega relation for sufficiently large input.
  • 56. Theta (Θ) notation: ◦ The theta notation describes a tight bound on the asymptotic growth rate of the function f. Definition: [Theta] ◦ f(n) = Θ(g(n)) (read as “f of n is theta of g of n”) iff there exist positive constants c1, c2, and n0 such that c1·g(n) ≤ f(n) ≤ c2·g(n) for all n, n ≥ n0.
  • 57. The definition states that the function f(n) lies between c1 times the function g(n) and c2 times the function g(n) except when n is smaller than n0. In other words, f(n) “grows at the same rate as” g(n). Examples:- ◦ f(n) = 3n+2: 3n ≤ 3n+2 ≤ 4n for all n ≥ 2, so 3n+2 = Θ(n). ◦ f(n) = 10n^2+4n+2: n^2 ≤ 10n^2+4n+2 ≤ 11n^2 for all n ≥ 5, so 10n^2+4n+2 = Θ(n^2). But we cannot write either 10n^2+4n+2 = Θ(n) or 10n^2+4n+2 = Θ(n^3), since neither of these will satisfy the theta relation.
  • 58. Little oh (o) notation: ◦ The little oh notation describes a strict upper bound on the asymptotic growth rate of the function f. Definition: [Little “oh”] ◦ f(n) = o(g(n)) (read as “f of n is little oh of g of n”) iff lim(n→∞) f(n)/g(n) = 0.
  • 59. The definition states that, for every positive constant c, f(n) is less than c times the function g(n) for all sufficiently large n. In other words, f(n) “grows strictly slower than” g(n). Examples ◦ f(n) = 3n+2 = o(n^2), since lim(n→∞) (3n+2)/n^2 = 0. ◦ However, 3n+2 ≠ o(n).
  • 60. Big-Oh, Theta, Omega and Little-oh Tips : Think of O(g(n)) as “less than or equal to” g(n) ◦ Upper bound: “grows slower than or at the same rate as” g(n) Think of Ω(g(n)) as “greater than or equal to” g(n) ◦ Lower bound: “grows faster than or at the same rate as” g(n) Think of Θ(g(n)) as “equal to” g(n) ◦ “Tight” bound: same growth rate Think of o(g(n)) as “strictly less than” g(n) ◦ Strict upper bound: “grows slower than” g(n) (true for large n)
  • 61. Functions ordered by growth rate
Function   Name
1          Growth is constant
log n      Growth is logarithmic
n          Growth is linear
n log n    Growth is n-log-n
n^2        Growth is quadratic
n^3        Growth is cubic
2^n        Growth is exponential
n!         Growth is factorial
1 < log n < n < n log n < n^2 < n^3 < 2^n < n!
  • 62. ◦ To get a feel for how the various functions grow with n, you are advised to study the following figs:
  • 64. The following fig gives the time needed by a 1 billion instructions per second computer to execute a program of complexity f(n) instructions.
  • 66. The Hiring Problem You are using an employment agency to hire a new assistant. The agency sends you one candidate each day. You interview the candidate and must immediately decide whether or not to hire that person. But if you hire, you must also fire your current office assistant —even if it’s someone you have recently hired. Cost to interview is ci per candidate. Cost to hire is ch per candidate. You want to have, at all times, the best candidate seen so far. When you interview a candidate who is better than your current assistant, you fire the current assistant and hire the candidate. You will always hire the first candidate that you interview. Problem: What is the cost of this strategy?
  • 67. Pseudo-code to Model the Scenario Hire-Assistant(n) best ← 0 ;; candidate 0 is a least-qualified dummy candidate for i ← 1 to n do interview candidate i if candidate i is better than candidate best then best ← i hire candidate i Cost Model: •We are not concerned with the running time of Hire-Assistant. •We want to determine the total cost of hiring the best candidate. • If n candidates are interviewed and m hired, then the cost is n·ci + m·ch. • We have to pay n·ci to interview, no matter how many we hire. • So, focus on analyzing the hiring cost m·ch. • m varies with the order of candidates.
  • 68. Best-Case Analysis Best case is when the agency sends you the best applicant on the first day. Cost is nci+ch.
  • 69. Worst-case Analysis In the worst case, we hire all n candidates. This happens if each candidate is better than all those who came before. Candidates come in increasing order of quality. Cost is nci+nch. If this happens, we fire the agency. What should happen in the average case?
  • 70. Probabilistic analysis Probabilistic analysis is the use of probability in the analysis of problems. Generally, we use probabilistic analysis to analyze the running time of an algorithm. Sometimes, we can also use it to analyze quantities such as the hiring cost in procedure Hire-Assistant.
  • 71. Hiring problem-Average –case Analysis An input to the hiring problem is an ordering of the n applicants. There are n! different inputs. To use probabilistic analysis, we assume that the candidates arrive in a random order . Then we analyze our algorithm by computing an expected running time. The expectation is taken over the distribution of the possible inputs. Thus, we are averaging the running time over all possible inputs.
  • 72. Indicator Random Variable A powerful technique for computing the expected value of a random variable. Convenient method for converting between probabilities and expectations. Takes only 2 values, 1 and 0. The indicator random variable XA = I{A} for an event A of a sample space is defined as: I{A} = 1 if A occurs, 0 if A does not occur. The expected value of an indicator random variable associated with an event A is equal to the probability that A occurs: E[XA] = Pr{A}.
  • 73. Indicator RV – Example1 Problem: Determine the expected number of heads in one coin flip. The sample space is S = { H, T }. Let XA = I{ the coin comes up heads }. The expected number of heads obtained in one flip of the coin is equal to the expected value of the indicator random variable: E[XA] = Pr{H} = 1/2.
  • 74. Indicator RV – Example2 Problem: Determine the expected number of heads in n coin flips. Let X be the random variable denoting the total number of heads in n coin flips. Define n indicator random variables, Xi, 1 ≤ i ≤ n. Let Xi be the indicator random variable for the event that the ith flip results in a head: Xi = I{the ith flip results in H}. Then X = X1 + X2 + … + Xn = Σ(i=1..n) Xi. E[Xi] = Pr{H} = 1/2, 1 ≤ i ≤ n. The expected number of heads is E[X] = E[Σ(i=1..n) Xi] = Σ(i=1..n) E[Xi] = Σ(i=1..n) 1/2 = n/2.
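For small n, the n/2 result can be confirmed by exact enumeration of all 2^n equally likely flip sequences (a Python sketch; the function name is mine):

```python
from itertools import product

def expected_heads(n):
    """Exact expected number of heads, averaged over all 2^n
    equally likely sequences of n coin flips."""
    outcomes = list(product("HT", repeat=n))
    return sum(seq.count("H") for seq in outcomes) / len(outcomes)
```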
  • 75. Back to the Hiring Problem We want to know the expected cost of our hiring algorithm, in terms of how many times we hire an applicant. Random variable X(s) is the number of applicants that are hired, given the input sequence s. Indicator random variable Xi for applicant i: Xi = 1 if applicant i is hired, 0 otherwise. What is E[X]?
  • 76. Probability of Hiring i-th Applicant Probability of hiring i is probability that i is better than the previous i-1 applicants… Pr[Xi = 1] = 1/i. Suppose n = 4 and i = 3. Ex:-Probability that 3rd applicant is better than the 2 previous ones. 1234 1243 1324 1342 1423 1432 2134 2143 2314 2341 2413 2431 3124 3142 3214 3241 3412 3421 4123 4132 4213 4231 4312 4321 8/24 = 1/3
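The 8/24 count above can be reproduced by enumerating permutations (a Python sketch; the helper name is mine, and ranks 1..n with higher meaning better are assumed):

```python
from fractions import Fraction
from itertools import permutations
from math import factorial

def prob_ith_is_best_so_far(n, i):
    """Fraction of the n! orderings (ranks 1..n, higher = better) in
    which position i holds the best candidate among the first i."""
    hits = sum(1 for p in permutations(range(1, n + 1))
               if p[i - 1] == max(p[:i]))
    return Fraction(hits, factorial(n))
```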
  • 77. Analysis of the Hiring Problem Compute E[X], the number of candidates we expect to hire. E[X] = E[Σ(i=1..n) Xi] = Σ(i=1..n) E[Xi] = Σ(i=1..n) 1/i = ln n + O(1). Expected hiring cost = O(ch·ln n).
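For small n, the harmonic-sum answer can be verified exactly by running the hiring strategy over all n! candidate orders (a Python sketch; names and the rank encoding are mine):

```python
from fractions import Fraction
from itertools import permutations
from math import factorial

def expected_hires(n):
    """Average number of hires made by Hire-Assistant over all n!
    equally likely candidate orders (ranks 1..n, higher = better)."""
    total = 0
    for order in permutations(range(1, n + 1)):
        best, hires = 0, 0
        for cand in order:
            if cand > best:          # better than current assistant: hire
                best, hires = cand, hires + 1
        total += hires
    return Fraction(total, factorial(n))

def harmonic(n):
    """H_n = 1 + 1/2 + ... + 1/n, the exact expected number of hires."""
    return sum(Fraction(1, i) for i in range(1, n + 1))
```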
  • 78. Amortized Analysis Data structures are subjected to a sequence of operations rather than one operation. In this sequence, one operation possibly performs certain modifications that have an impact on the run time of the next operation in the sequence. One way of assessing the worst case run time of the entire sequence is to add worst case efficiencies for each operation. This may result in an excessively large and unrealistic time complexity.
  • 79. Amortized analysis concentrates on finding the average complexity of a worst case sequence of operations. Amortized analysis considers interdependence between operations. ◦ For example, if an array is sorted and only a very few new elements are added, then re-sorting this array will be much faster than sorting it for the first time.
  • 80. Amortized Analysis AMORTIZED ANALYSIS: You try to estimate an upper bound of the total work T(n) required for a sequence of n operations… Some operations may be cheap some may be expensive. Overall, your algorithm does T(n) of work for n operations … Therefore, by simple reasoning, the amortized cost of each operation is T(n)/n
  • 81. Disjoint Sets Two sets A and B are said to be disjoint if there are no common elements, i.e., A ∩ B = ∅. Example: 1) S1={1,7,8,9}, S2={2,5,10}, and S3={3,4,6} are three disjoint sets.
  • 82. We identify a set by choosing a representative element of the set. It doesn’t matter which element we choose, but once chosen, it can’t be changed. Disjoint set operations: ◦ FIND-SET(x): Returns the representative of the set containing x. ◦ UNION(i,j): Combines the two sets i and j into one new set. A new representative is selected. ( Here i and j are the representatives of the sets )
  • 83. Disjoint Sets Example Make-Set(1) Make-Set(2) Make-Set(3 ) Make-Set(4) Union(1,2) Union(3,4) Find-Set(2) Find-Set(4) Union(2,4) 2 1 3 4 returns 1 returns 3
  • 84. Up-Trees A simple data structure for implementing disjoint sets is the up-tree. [Figure: three up-trees — S1 rooted at 1 with children 7, 8, 9; S2 rooted at 5 with children 2, 10; S3 rooted at 3 with children 4, 6.] 1, 7, 8, and 9 belong to the same set. 1 is the representative.
  • 85. Union operation To obtain the union of two sets, set the parent field of one of the roots to the other root. [Figure: S1 ∪ S2 shown two ways — either root 5 becomes a child of root 1, or root 1 becomes a child of root 5.]
  • 86. Array Representation of Up-tree Assume each element is associated with an integer i=1…n. From now on, we deal only with i. Create an integer array, p[1:n]. An array entry is the element's parent. A -1 entry signifies that element i is the representative element. Ex:- S1={1,7,8,9}, S2={2,5,10}, and S3={3,4,6}.
i:    1   2   3   4   5   6   7   8   9   10
p[i]: -1  5   -1  3   -1  3   1   1   1   5
  • 87. Now we can implement Find by following the indices, starting at i until we reach a node with parent value -1. Ex:- Find(6) moves to 6’s parent, 3. Since p[3] is negative, we have reached the root. The operation Union(i,j) is very simple. We pass two trees with roots i and j. The statement p[i]:=j; accomplishes the union.
  • 88. Simple union algorithm : Algorithm SimpleUnion(i,j) { p[i]:=j; // attaches i to j } Simple find algorithm Algorithm SimpleFind(i) { while(p[i] > 0) do i:=p[i]; return i; } Performance: ???
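A direct Python transcription of the two algorithms, using the slides' convention of 1-based elements with -1 marking a root (index 0 of the list is unused):

```python
def simple_union(p, i, j):
    """Attach root i under root j (no weighting)."""
    p[i] = j

def simple_find(p, i):
    """Follow parent links from i until a root (negative entry)."""
    while p[i] > 0:
        i = p[i]
    return i
```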
  • 89. The performance characteristics of these algorithms are not very good. For example, start with n elements, each in its own set, then process the following sequence of union-find operations: Union(1,2), Union(2,3), Union(3,4), ……, Union(n-1,n), Find(1), Find(2),………, Find(n). This sequence results in the degenerate tree shown below. [Figure: a degenerate chain 1 → 2 → … → n-1 → n.]
  • 90. The time taken for a union is O(1), so the n-1 unions can be processed in time O(n). However, each find requires following a sequence of parent pointers from the element to be found to the root. Since the time required to process a find for an element at level i of a tree is O(i), the total time needed to process the n finds is O( Σ(1≤i≤n) i ) = O(n^2). We can improve the performance of the union and find algorithms by avoiding the creation of degenerate trees. To do this, we use the weighting rule for union and the collapsing rule for find.
  • 91. Weighting rule for Union(i,j) Def:-If the number of nodes in the tree with root i is less than the number in the tree with root j, then make j the parent of i; otherwise make i the parent of j. OR Always attach smaller tree to larger.
  • 92. If we use the weighting rule, we obtain the following trees. [Figure: starting from n singleton trees, Union(1,2), Union(1,3), …, Union(1,n) keep 1 as the root with 2, 3, …, n as its children.]
  • 93. To implement weighting rule, we maintain a count field in the root of every tree. If i is a root node, then count[i] equals the number of nodes in the tree. Since all nodes other than the roots of trees have a positive number in the p field, we can maintain the count in the p field of the roots as a negative number.
  • 94. Algorithm WeightedUnion(i,j) // Union sets with roots i, and j, i ≠ j, using // the weighting rule. p[i]= -count[i] and p[j]= -count[j]. { temp:=p[ i ]+p[ j ]; if( p[ i ]>p[ j ] ) then { // i has fewer nodes. p[i]:=j; p[j]:=temp; } else { // j has fewer or equal nodes. p[ j ]:=i; p[ i ]:=temp; } }
  • 95. In this algorithm, again the time required to perform a union is O(1), so the n-1 unions can be processed in time O(n). The find algorithm is unchanged. Lemma:- Assume that we start with a forest of trees, each having one node. Let T be a tree with n nodes created as a result of a sequence of unions each performed using WeightedUnion. Then the height of T is no greater than ⌊log2 n⌋ + 1. The maximum time required to perform a find is O(log n). If there are n find operations, then the time required to perform this sequence is O(n log n).
  • 96. Ex:-Consider the behavior of WeightedUnion on the following sequence of unions starting from the initial configuration p[i]= -count[i]= -1, 1≤ i ≤ 8=n . Union(1,2), Union(3,4), Union(5,6), Union(7,8), Union(1,3), Union(5,7), Union(1,5) 1 2 [-1] [-1] 3 4 [-1] [-1] 5 6 [-1] [-1] 7 8 [-1] [-1] a) Initial height-1 trees 1 2 [-2] 3 4 [-2] 5 6 [-2] 7 8 [-2] b) height-2 trees following Union(1,2), (3,4), (5,6), and (7,8)
  • 97. 1 2 [- 4] 3 4 5 6 [- 4] 7 8 c) height-3 trees following Union(1,3), and (5,7) 1 2 [- 8] 3 4 5 6 7 8 d) height-4 tree following Union(1,5)
  • 98. Collapsing rule for find Def:- When we do a find on an element i, we make each node in the path from i to the root directly point to the root. [Figure: before collapsing, c → d → e → f (root); after collapsing, c, d, and e each point directly to f.]
  • 99. Ex:- Consider the previous tree created by WeightedUnion on the sequence of unions. Now process the following finds: Find(8), Find(7), Find(5), Find(8). Find(8) requires going up three parent link fields, Find(7) requires going up two, Find(5) requires one, and the second Find(8) again requires three. The total cost for the above sequence is 9 moves. [Figure: the height-4 tree rooted at 1, with count [-8].]
  • 100. When CollapsingFind is used, the first Find(8) requires going up three links and then resetting two links; after that, Find(7), Find(5), and Find(8) each require going up only one link. Now the total cost for the above sequence is 6 moves. [Figure: after collapsing, nodes 5, 7, and 8 point directly at root 1.]
  • 101. Algorithm CollapsingFind(i) // Find the root of the tree containing element i. // Use the collapsing rule to collapse all nodes from i to the root. { r:=i; while( p[ r ] >0 ) do r:=p[r]; // Find root. while( i ≠ r ) do // collapse nodes from i to r. { s:=p[ i ]; p[ i ]:=r; i:=s; } return r; }
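WeightedUnion and CollapsingFind translate directly to Python (same array convention as above: 1-based elements, roots store -count, index 0 unused):

```python
def weighted_union(p, i, j):
    """Union of roots i and j: attach the smaller tree to the larger.
    Roots hold the negated node count, as in the slides."""
    temp = p[i] + p[j]          # -(total node count of the merged tree)
    if p[i] > p[j]:             # i has fewer nodes
        p[i] = j
        p[j] = temp
    else:                       # j has fewer or equal nodes
        p[j] = i
        p[i] = temp

def collapsing_find(p, i):
    """Find the root of i, then make every node on the path point
    directly at the root (collapsing rule)."""
    r = i
    while p[r] > 0:             # find root
        r = p[r]
    while i != r:               # collapse nodes from i to r
        s = p[i]
        p[i] = r
        i = s
    return r
```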
  • 102. Divide and Conquer Technique General Method:  The Divide and Conquer Technique splits n inputs into k subsets , 1< k ≤ n, yielding k subproblems.  These subproblems will be solved and then combined by using a separate method to get a solution to a whole problem.  If the subproblems are large, then the Divide and Conquer Technique will be reapplied.  Often subproblems resulting from a Divide and Conquer Technique are of the same type as the original problem.
  • 103. For those cases the reapplication of the Divide and Conquer Technique is naturally expressed by a recursive algorithm. Now smaller and smaller problems of the same kind are generated until subproblems that are small enough to be solved without splitting are produced.
  • 104. Control Abstraction/general method for Divide and Conquer Technique Algorithm DAndC(p) { if Small(p) then return s(p); else { divide p into smaller instances p1,p2,…….,pk, k≥1; Apply DAndC to each of these subproblems; return Combine(DAndC(p1), DAndC(p2),……,DAndC(pk)); } }
  • 105. If the size of p is n and the sizes of the k subproblems are n1, n2, …, nk, then the computing time of DAndC is described by the recurrence relation T(n) = g(n) for small n; T(n) = T(n1) + T(n2) + … + T(nk) + f(n) otherwise, where T(n) is the time for DAndC on any input of size n and g(n) is the time to compute the answer directly for small inputs. The function f(n) is the time for dividing p and combining the solutions to subproblems.
  • 106. The complexity of many divide-and-conquer algorithms is given by recurrences of the form T(n) = c for small n; T(n) = a·T(n/b) + f(n) otherwise, where a, b, and c are known constants and n is a power of b (i.e., n = b^k).
  • 107. Applications 1.Binary search Algorithm Iterative algorithm Algorithm BinSearch(a,n,x) { low:=1; high:=n; while(low≤high) { mid:=(low+high)/2; if( x<a[mid] ) then high:=mid-1; else if( x> a[mid] ) then low:=mid+1; else return mid; } return 0; }
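A runnable Python version of the iterative algorithm (0-based indexing; it returns -1 instead of 0 for a miss, since 0 is a valid index here):

```python
def bin_search(a, x):
    """Iterative binary search on a sorted list; returns an index
    of x, or -1 if x is not present."""
    low, high = 0, len(a) - 1
    while low <= high:
        mid = (low + high) // 2
        if x < a[mid]:
            high = mid - 1
        elif x > a[mid]:
            low = mid + 1
        else:
            return mid
    return -1
```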
  • 108. Recursive Algorithm ( Divide and Conquer Technique) Algorithm BinSrch (a,i,l,x) //given an array a [i:l]of elements in nondecreasing //order,1≤i≤l,determine whether x is present,and //if so, return j such that x=a[j]; else return 0. { if(l=i) then // If small(P) { if(x=a[i]) then return i; else return 0; } else
  • 109. { // Reduce p into a smaller subproblem. mid:=(i+l)/2; if( x=a[mid] ) then return mid; else if( x<a[mid] ) then return BinSrch(a,i,mid-1,x); else return BinSrch(a,mid+1,l,x); } }
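The recursive version can likewise be sketched in Python (0-based indices; an empty range is used as the failure base case instead of the l = i test in the pseudocode):

```python
def bin_srch(a, i, l, x):
    """Recursive divide-and-conquer binary search on a[i..l] (inclusive,
    0-based): return the index of x, or -1 if x is absent."""
    if i > l:                        # empty range: x is not present
        return -1
    mid = (i + l) // 2
    if x == a[mid]:
        return mid
    if x < a[mid]:
        return bin_srch(a, i, mid - 1, x)   # search the left half
    return bin_srch(a, mid + 1, l, x)       # search the right half
```

The recursion reduces the problem to one subproblem of half the size, so no Combine step is needed.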
  • 110. Time complexity of Binary Search If the time for dividing the list is a constant, then the computing time for binary search is described by the recurrence relation T(n) = c1 for n = 1 (c1 a constant), and T(n) = T(n/2) + c2 for n > 1 (c2 a constant). Assume n = 2^k. Then T(n) = T(n/2) + c2 = T(n/4) + 2c2 = T(n/8) + 3c2 = … = T(1) + kc2 = c1 + c2 log n = O(log n).
  • 111. Time Complexity of Binary Search Successful searches: best O(1), average O(log n), worst O(log n). Unsuccessful searches: best O(log n), average O(log n), worst O(log n).
  • 112. 2. Merge Sort 1. Base case: solve the problem directly if it is small enough (only one element). 2. Divide the problem into two or more similar and smaller subproblems. 3. Recursively solve the subproblems. 4. Combine the solutions to the subproblems.
  • 113. Merge Sort: Idea Divide A into two sublists (FirstPart and SecondPart), recursively sort each part, then merge the two sorted parts, after which A is sorted.
  • 114. Example: Merge-Sort(A, 0, 7) on A = [6, 2, 8, 4, 3, 7, 5, 1] (0-based indices). Divide A into [6, 2, 8, 4] and [3, 7, 5, 1]. Merge-Sort(A, 0, 3) divides into [6, 2] and [8, 4]; Merge-Sort(A, 0, 1) reaches the base cases A[0] and A[1], and Merge(A, 0, 0, 1) gives [2, 6]; Merge-Sort(A, 2, 3) similarly gives [4, 8] via Merge(A, 2, 2, 3); Merge(A, 0, 1, 3) then gives [2, 4, 6, 8]. Merge-Sort(A, 4, 7) sorts the second half the same way, ending with Merge(A, 4, 5, 7) to give [1, 3, 5, 7]. Finally Merge(A, 0, 3, 7) produces [1, 2, 3, 4, 5, 6, 7, 8], and Merge-Sort(A, 0, 7) is done!
  • 136. Ex: [179, 254, 285, 310, 351, 423, 450, 520, 652, 861]. Tree of calls of merge sort (each node is a pair low,high): (1,10) splits into (1,5) and (6,10); (1,5) into (1,3) and (4,5); (6,10) into (6,8) and (9,10); (1,3) into (1,2) and (3,3); (4,5) into (4,4) and (5,5); (6,8) into (6,7) and (8,8); (9,10) into (9,9) and (10,10); (1,2) into (1,1) and (2,2); (6,7) into (6,6) and (7,7).
  • 137. Merge Sort: Algorithm MergeSort(low, high) // sorts the elements a[low],…,a[high], which reside in the global array // a[1:n], into ascending order. // Small(p) is true if there is only one element to sort; in this case the list is // already sorted. { if ( low < high ) then // if there is more than one element { mid := (low+high)/2; MergeSort(low, mid); MergeSort(mid+1, high); // recursive calls Merge(low, mid, high); } }
  • 138. Merge-Sort: Merge Example Merging the sorted runs L = [3, 5, 15, 28] and R = [6, 10, 14, 22] of A: indices start at i = low into L, j = mid+1 into R, and k = low into the auxiliary array B. At each step the smaller of L[i] and R[j] is copied into B[k] and the corresponding index advances, giving B = [3, 5, 6, 10, 14, 15, 22, 28]; B is finally copied back into A[low:high].
  • 149. Algorithm Merge(low,mid,high) // a[low:high] is a global array containing two sorted subsets in a[low:mid] // and in a[mid+1:high]. The goal is to merge these two sets into a single set // residing in a [low:high]. b[ ] is a temporary global array. { h:=low; i:=low; j:=mid+1; while( h ≤ mid ) and ( j ≤ high ) do { if( a[h] ≤ a[j] ) then { b[i]:=a[h]; h:=h+1; } else { b[i]:=a[j]; j:=j+1; } i:=i+1; }
  • 150. if( h > mid ) then for k:=j to high do { b[i] := a[k]; i:= i+1; } else for k:=h to mid do { b[i] := a[k]; i:= i+1; } for k:= low to high do a[k]:=b[k]; }
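The Merge and MergeSort pseudocode above can be mirrored in runnable Python (0-based indices; the auxiliary list b plays the role of the temporary global array):

```python
def merge(a, b, low, mid, high):
    # Merge the sorted runs a[low..mid] and a[mid+1..high] into the
    # auxiliary list b, then copy the result back into a.
    h, i, j = low, low, mid + 1
    while h <= mid and j <= high:
        if a[h] <= a[j]:
            b[i] = a[h]; h += 1
        else:
            b[i] = a[j]; j += 1
        i += 1
    if h > mid:                       # left run exhausted: copy the rest of the right run
        for k in range(j, high + 1):
            b[i] = a[k]; i += 1
    else:                             # right run exhausted: copy the rest of the left run
        for k in range(h, mid + 1):
            b[i] = a[k]; i += 1
    for k in range(low, high + 1):    # copy the merged run back into a
        a[k] = b[k]

def merge_sort(a, low, high, b=None):
    if b is None:
        b = [0] * len(a)              # one shared auxiliary array
    if low < high:                    # more than one element
        mid = (low + high) // 2
        merge_sort(a, low, mid, b)
        merge_sort(a, mid + 1, high, b)
        merge(a, b, low, mid, high)

data = [6, 2, 8, 4, 3, 7, 5, 1]
merge_sort(data, 0, len(data) - 1)
```

Running this on the example array from the trace above yields [1, 2, 3, 4, 5, 6, 7, 8].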
  • 151. Merge-Sort Analysis The recursion tree has a problem of size n at the root, two subproblems of size n/2, four of size n/4, and so on, down to the small subproblems at the leaves; there are about log n levels.
  • 152. Merge-Sort Time Complexity If the time for the merging operation is proportional to n, then the computing time for merge sort is described by the recurrence relation T(n) = c1 for n = 1 (c1 a constant), and T(n) = 2T(n/2) + c2n for n > 1 (c2 a constant). Assume n = 2^k. Then T(n) = 2T(n/2) + c2n = 2(2T(n/4) + c2n/2) + c2n = 4T(n/4) + 2c2n = … = 2^k T(1) + kc2n = c1n + c2n log n = O(n log n).
  • 153. Summary Merge-Sort ◦ Most of the work is done in combining the solutions. ◦ Best case takes O(n log n) time ◦ Average case takes O(n log n) time ◦ Worst case takes O(n log n) time
  • 154. 3. Quick Sort Divide: ◦ Pick any element as the pivot, e.g., the first element ◦ Partition the remaining elements into FirstPart, which contains all elements < pivot, and SecondPart, which contains all elements > pivot. Recursively sort FirstPart and SecondPart. Combine: no work is necessary, since sorting is done in place.
  • 155. The pivot divides a into two sublists x and y. Ex: a = [4, 2, 7, 8, 1, 9, 3, 6, 5] with pivot 4: x holds the elements smaller than the pivot and y the elements larger than it.
  • 156. The whole process: [4, 2, 7, 8, 1, 9, 3, 6, 5] → [2, 1, 3] 4 [7, 8, 9, 6, 5] → each pivot settles into its final position and the sublists are partitioned in turn, until the fully sorted list [1, 2, 3, 4, 5, 6, 7, 8, 9] is obtained.
  • 157. Process (pivot = 85, a = [85, 24, 63, 95, 17, 31, 45, 98]): keep moving i from the left as long as a[i] < pivot, and j from the right as long as a[j] > pivot. Here i stops at 95 and j stops at 45. Since i < j, interchange the i-th and j-th elements, giving [85, 24, 63, 45, 17, 31, 95, 98], and continue the process: now i stops at 95 and j stops at 31. Since i ≥ j, interchange the j-th element and the pivot, giving [31, 24, 63, 45, 17, 85, 95, 98], and divide the list into the two sublists [31, 24, 63, 45, 17] and [95, 98] around 85. Recursively sort FirstPart and SecondPart: QuickSort(low, j-1); QuickSort(j+1, high).
  • 161. Quick Sort Algorithm: Algorithm QuickSort(low,high) // Sorts the elements a[low],…,a[high], which reside // in the global array a[1:n], into ascending order; // a[n+1] is considered to be defined and must be ≥ all the // elements in a[1:n]. { if( low < high ) then // if there is more than one element { // divide p into two subproblems. j := Partition(low,high); // j is the position of the partitioning element. QuickSort(low,j-1); QuickSort(j+1,high); // There is no need for combining solutions. } }
  • 162. Algorithm Partition(l,h) { pivot:=a[l]; i:=l; j:=h+1; while( i < j ) do { i:=i+1; while( a[i] < pivot ) do i:=i+1; j:=j-1; while( a[j] > pivot ) do j:=j-1; if( i < j ) then Interchange(i,j); // interchange i-th and j-th elements } Interchange(l,j); // interchange pivot and j-th element return j; }
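A runnable Python sketch of Partition and QuickSort (0-based indices; an explicit bounds check on i replaces the a[n+1] sentinel assumed by the pseudocode):

```python
def partition(a, l, h):
    # Partition a[l..h] around the pivot a[l]; returns the pivot's
    # final position j, with smaller elements to its left and larger
    # ones to its right.
    pivot = a[l]
    i, j = l, h + 1
    while True:
        i += 1
        while i <= h and a[i] < pivot:   # move i right past smaller elements
            i += 1
        j -= 1
        while a[j] > pivot:              # move j left past larger elements
            j -= 1
        if i >= j:
            break
        a[i], a[j] = a[j], a[i]          # interchange i-th and j-th elements
    a[l], a[j] = a[j], a[l]              # interchange pivot and j-th element
    return j

def quick_sort(a, low, high):
    if low < high:                       # more than one element
        j = partition(a, low, high)      # pivot ends up at position j
        quick_sort(a, low, j - 1)
        quick_sort(a, j + 1, high)

data = [85, 24, 63, 95, 17, 31, 45, 98]
quick_sort(data, 0, len(data) - 1)
```

On the example array from the partition trace above, the first call to partition swaps 95 with 45, then swaps the pivot 85 into position 5, matching the hand trace.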
  • 163. Algorithm Interchange(x,y) { temp:=a[x]; a[x]:=a[y]; a[y]:=temp; }
  • 164. Time complexity analysis A worst/bad case: on an input such as [8, 7, 6, 5, 4, 3, 2, 1], each partition peels off only a single element, so the recursion works on sublists of sizes n, n-1, n-2, …, 2, 1 and the partitioning cost at those levels is cn + c(n-1) + c(n-2) + … + 2c + 1c. This happens when the input is already sorted or reversely sorted. Total time: O(n^2).
  • 166. A best/good case occurs when each partition divides the list into two equal-size sublists. The recursion tree then has sizes n; n/2, n/2; n/4, n/4, n/4, n/4; and so on, with about log n levels and O(n) partitioning work per level. Total time: O(n log n).
  • 168. Summary Quick-Sort ◦ Most of the work is done in partitioning ◦ Best case takes O(n log n) time ◦ Average case takes O(n log n) time ◦ Worst case takes O(n^2) time
  • 170. Basic Matrix Multiplication Let A and B be two n×n matrices. The product C=AB is also an n×n matrix. void matrix_mult() { for (i = 1; i <= N; i++) { for (j = 1; j <= N; j++) { for (k = 1; k <= N; k++) { C[i,j] = C[i,j] + A[i,k] * B[k,j]; } } } } The time complexity of the above algorithm is T(n) = O(n^3).
  • 171. Divide and Conquer technique We want to compute the product C=AB, where each of A, B, and C is an n×n matrix. Assume n is a power of 2; if n is not a power of 2, add enough rows and columns of zeros. We divide each of A, B, and C into four n/2×n/2 matrices, rewriting the equation C=AB in block form as follows:
  • 172. Writing the blocks as [C11 C12; C21 C22] = [A11 A12; A21 A22][B11 B12; B21 B22], we get C11=A11B11+A12B21, C12=A11B12+A12B22, C21=A21B11+A22B21, C22=A21B12+A22B22. Each of these four equations specifies two multiplications of n/2×n/2 matrices and the addition of their n/2×n/2 products. We can derive the following recurrence relation for the time T(n) to multiply two n×n matrices: T(n) = c1 if n ≤ 2, and T(n) = 8T(n/2) + c2n^2 if n > 2, which gives T(n) = O(n^3). • This method is no faster than the ordinary method.
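The eight-multiplication block recursion can be sketched in Python for n a power of 2, using plain lists (the split/add helper names are illustrative):

```python
def split(M):
    # Split an n x n matrix (n a power of 2) into four n/2 x n/2 blocks.
    n = len(M) // 2
    return ([r[:n] for r in M[:n]], [r[n:] for r in M[:n]],
            [r[:n] for r in M[n:]], [r[n:] for r in M[n:]])

def add(X, Y):
    # Elementwise matrix addition.
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def mat_mult(A, B):
    n = len(A)
    if n == 1:                                   # small(P): 1x1 product
        return [[A[0][0] * B[0][0]]]
    A11, A12, A21, A22 = split(A)
    B11, B12, B21, B22 = split(B)
    # Four block equations, each with two recursive multiplications.
    C11 = add(mat_mult(A11, B11), mat_mult(A12, B21))
    C12 = add(mat_mult(A11, B12), mat_mult(A12, B22))
    C21 = add(mat_mult(A21, B11), mat_mult(A22, B21))
    C22 = add(mat_mult(A21, B12), mat_mult(A22, B22))
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]  # reassemble the blocks
    bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bot
```

The eight recursive calls per level are what produce the 8T(n/2) term in the recurrence, so this sketch is still O(n^3).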
  • 173. T(n) = 8T(n/2) + c2n^2 = 8(8T(n/4) + c2(n/2)^2) + c2n^2 = 8^2 T(n/4) + 2c2n^2 + c2n^2 = 8^2 (8T(n/8) + c2(n/4)^2) + 2c2n^2 + c2n^2 = 8^3 T(n/8) + 4c2n^2 + 2c2n^2 + c2n^2 = … = 8^k T(1) + c2n^2 (2^(k-1) + … + 4 + 2 + 1) = 8^(log2 n) c1 + c2n^2 (n - 1) = c1n^3 + c2n^3 - c2n^2 = O(n^3).
  • 174. Strassen’s method Matrix multiplications are more expensive than matrix additions or subtractions (O(n^3) versus O(n^2)). Strassen discovered a way to compute the Cij using only 7 multiplications and 18 additions or subtractions. His method involves computing seven n/2×n/2 matrices M1, M2, M3, M4, M5, M6 and M7; the Cij are then calculated from these matrices.
  • 175. Formulas for Strassen’s Algorithm M1 = (A11 + A22) × (B11 + B22); M2 = (A21 + A22) × B11; M3 = A11 × (B12 – B22); M4 = A22 × (B21 – B11); M5 = (A11 + A12) × B22; M6 = (A21 – A11) × (B11 + B12); M7 = (A12 – A22) × (B21 + B22). C11 = M1 + M4 - M5 + M7; C12 = M3 + M5; C21 = M2 + M4; C22 = M1 + M3 - M2 + M6.
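Strassen's seven-product recursion can be sketched in Python for n a power of 2, using plain lists (helper names are illustrative):

```python
def split(M):
    # Split an n x n matrix (n a power of 2) into four n/2 x n/2 blocks.
    n = len(M) // 2
    return ([r[:n] for r in M[:n]], [r[n:] for r in M[:n]],
            [r[:n] for r in M[n:]], [r[n:] for r in M[n:]])

def add(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def sub(X, Y):
    return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def strassen(A, B):
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    A11, A12, A21, A22 = split(A)
    B11, B12, B21, B22 = split(B)
    # The seven products from the formulas above.
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    # Combine into the four C blocks.
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bot
```

Only seven recursive calls are made per level instead of eight, which yields the 7T(n/2) recurrence and the O(n^2.81) bound derived below.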
  • 176. In block form, [C11 C12; C21 C22] = [A11 A12; A21 A22] * [B11 B12; B21 B22] = [M1 + M4 - M5 + M7, M3 + M5; M2 + M4, M1 + M3 - M2 + M6]. The resulting recurrence relation for T(n) is T(n) = c1 for n ≤ 2, and T(n) = 7T(n/2) + c2n^2 for n > 2.
  • 177. T(n) = 7^k T(1) + c2n^2 [1 + 7/4 + (7/4)^2 + … + (7/4)^(k-1)] ≤ 7^(log2 n) c1 + c c2 n^2 (7/4)^(log2 n) for some constant c, since the geometric sum is bounded by a constant times its largest term. This equals c1 n^(log2 7) + c c2 n^(log2 4) · n^(log2 7 - log2 4) = O(n^(log2 7)) ≈ O(n^2.81).

Editor's Notes

  • #45: Best: Find at first position. Cost is 1 compare. Worst: Find at last position. Cost is n compares. Average: (n+1)/2 compares IF we assume the element with value K is equally likely to be in any position in the array.
  • #86: Example: V = {0,1,2,3,4,5,6,7} array: -1, -1, -1, -1, -1, -1, -1, -1, -1 union(4,5) union(6,7) union(4,6) now, E={(5,4),(7,6),(6,4)} array: -1, -1, -1, -1, -1, 4, 4, 6
  • #113: Divide: if the initial array A has at least two elements (nothing needs to be done if A has zero or one element), divide A into two subarrays, each containing about half of the elements of A. Conquer: sort the two subarrays using Merge Sort. Combine: merge the two sorted subarrays into one sorted array.
  • #153: No specific input triggers worst-case behavior; the worst case is determined only by the output of the random-number generator. Disadvantage: unstable in running time. Finding the “pivot” element is a big issue!
  • #154: since sorting is done in place