1. DAA Introduction, Divide & Conquer Technique
B. Sailaja,
Asst. Prof., Dept. of CSE,
Vidya Jyothi Institute of Technology
2. Algorithm
Definition
An algorithm is a finite set of instructions that accomplishes a particular task.
All algorithms must satisfy the following criteria.
Criteria
1. Input: Zero or more quantities are externally supplied.
2. Output: At least one quantity is produced.
3. Definiteness: Each instruction is clear and unambiguous.
Statements such as “add 6 or 7 to x” or “compute 5/0” are not permitted.
3. 4. Finiteness : The algorithm should terminate
after a finite number of steps.
5. Effectiveness : Instruction is basic enough to be
carried out.
4. Pseudo code for expressing algorithms
We present most of our algorithms using pseudo code that
resembles C and Pascal.
1. Comments begin with // and continue until end of the line.
2. Blocks are indicated with matching braces: { and }.
i. A compound statement
ii. Body of a procedure.
3.
i. An identifier begins with a letter.
ii. The data types of variables are not explicitly declared.
iii. The types will be clear from the context.
iv. Whether a variable is global or local to a procedure will also be evident from
the context.
v. We assume simple data types such as integer, float, char, boolean, and so
on.
vi. Compound data types can be formed with records.
5. node = record
{
datatype_1 data_1;
:
datatype_n data_n;
node *link;
}
Data items of a record can be accessed with the record name and a period ( . ).
C style:
struct node
{
datatype_1 data_1;
:
datatype_n data_n;
struct node *link;
};
6. 4. Assignment of values to variables is done using the assignment
statement.
< variable > := < expression >
5. There are two boolean values true and false. To produce
these values, logical operators and, or and not and the
relational operators <, ≤,=, ≠, ≥ and > are provided.
6. Elements of multidimensional arrays are accessed using
[ and ]. For example the (i,j)th element of the array A is
denoted as A[i,j].
7. The following looping statements are used: for, while, and
repeat until.
7. The general form of a while loop:
while( condition ) do
{
statement_1;
:
statement_n;
}
8. The general form of a for loop:
for variable := value1 to value2 step step do
◦ Here value1, value2, and step are arithmetic expressions.
◦ The clause “step step” is optional and taken as +1 if it does not occur.
◦ step could be either positive or negative.
◦ variable is tested for termination at the start of each iteration.
9. The for loop can be implemented as a
while loop as follows:
variable:=value1;
incr:=step;
while( ( variable – value2)*step ≤ 0) do
{
<statement 1>
:
<statement n>
variable :=variable+incr;
}
10. The general form of a repeat-until loop:
repeat
<statement 1>
:
<statement n>
until ( condition )
The statements are executed as long as
condition is false.
11. 8. A conditional statement has the following
forms:
if< condition > then < statement >
if< condition > then < statement 1> else
< statement 2>
9. Input and output are done using the instructions read and
write.
12. 10. Procedure or function starts with the word
Algorithm.
General form :
Algorithm Name( <parameter list> )
{
body
}
where Name is the name of the procedure.
◦ Simple variables are passed to procedures by value.
◦ Arrays and records are passed by reference.
13. Ex:- Algorithm that finds and returns the maximum of n given numbers
Algorithm Max(a, n)
// a is an array of size n
{
Result := a[1];
for i := 2 to n do
if a[i] > Result then Result := a[i];
return Result;
}
14. Ex:- Write an algorithm to sort an array of n integers using bubble sort.
Algorithm BubbleSort(a, n)
// a is an array of size n
{
for i := 1 to n-1 do
{
for j := 1 to n-i do
{
if ( a[j] > a[j+1] ) then
{ t := a[j]; a[j] := a[j+1]; a[j+1] := t; }
}
}
}
15. Performance Analysis
(machine independent)
There are many things upon which the
performance will depend.
◦ Does the program efficiently use primary and secondary storage?
◦ Is the program's running time acceptable for the task?
◦ Does it do what we want it to do?
◦ Does it work correctly according to the specifications of
the task?
◦ Does the program contain documentation that shows how
to use it and how it works?
◦ Is the program's code readable?
16. How to achieve them?
◦ Good programming style, experience, and practice
◦ Discuss and think.
Practice
Practice
Practice
Think
Think
Think
17. Space Complexity
The space needed by a program has the following components.
◦ Instruction space
◦ Instruction space is the space needed to store the compiled version of the program instructions.
◦ The amount of instruction space that is needed depends on the compiler used to compile the
program into machine code.
18. ◦ Data space
◦ Data space is the space needed to store all constant and variable values. Data space has two
components.
◦ Space needed by constants (e.g., 1 and 2 in the max-of-n-numbers algorithm) and simple variables (such as i, j, n, etc.).
◦ Space needed by dynamically allocated objects such as arrays and class instances.
◦ Space taken by the variables and constants varies from language to language and platform to
platform.
19. ◦ Environmental stack space
◦ The environment stack is used to save information needed to resume execution of partially
completed functions.
◦ Each time a function is invoked the following data are saved on the environment stack.
◦ The return address .
◦ The values of all local variables and formal parameters in the function being
invoked( necessary for recursive functions only).
20. Recursive algorithm
Algorithm rfactorial(n)
// n is an integer
{ fact=1;
if(n=1 or n=0) return fact;
else
fact=n*rfactorial(n-1);
return fact;
}
Each time the recursive function rfactorial is invoked,
the current values of n and fact and the program location to return to on
completion are saved in the environment stack.
21. void main()
{
int R = rfactorial(3);
printf(R);
}
[Figure: the execution stack during rfactorial(3). Each invocation pushes a frame holding the return location and the current values of n and fact: the frame for n=3 (return location 2000) waits on fact=3*rfactorial(2), the frame for n=2 (location 3000) waits on fact=2*rfactorial(1), and the frame for n=1 (location 4000) computes fact=1 and starts the chain of returns.]
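The frame-per-invocation behaviour can be made visible in Python (an illustrative sketch, not from the slides; the optional `frames` list stands in for the environment stack and records the n saved by each call):

```python
def rfactorial(n, frames=None):
    """Recursive factorial; `frames` records the n held by each stack frame."""
    if frames is not None:
        frames.append(n)          # one saved frame per invocation
    if n == 0 or n == 1:
        return 1
    return n * rfactorial(n - 1, frames)

frames = []
print(rfactorial(3, frames))   # 6
print(frames)                  # [3, 2, 1]: one frame per invocation
```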
22. Summary of space complexity
The space needed by a program depends on several factors.
We cannot make an accurate analysis of the space requirements
of a program unless we know the computer or compiler that will
be used.
However, we can determine the components that depend on the
characteristics of the problem instance (e.g., the number of inputs and
outputs or magnitude of the numbers involved) to be solved.
Ex:-1. Space requirements of a program that sorts n elements can be
expressed as a function of n.
2. Space requirements of a program that adds two m×n
matrices can be expressed as a function of m, n.
23. ◦ The size of the instruction space is independent of the problem instance to
be solved.
◦ The contribution of the constants and simple variables to the data space is
also independent of the problem instance to be solved.
◦ Most of the dynamically allocated memory( ex., arrays, class instances etc)
depends on problem instance to be solved.
◦ The environmental stack space is generally independent of the problem
instance unless recursive functions are in use.
Contd..
24. Contd..
Therefore, We can divide the total space needed by a
program into two parts:
i) Fixed Space Requirements (C)
Independent of the characteristics of the problem instance ( I )
◦ instruction space
◦ space for simple variables and constants.
ii) Variable Space Requirements (SP(I))
depend on the characteristics of the problem instance ( I )
◦ Number of inputs and outputs associated with I
◦ recursive stack space ( formal parameters, local variables, return
address ).
◦ Therefore, the space requirement of any problem P can be
written as
S(p)=C +Sp( Instance characteristics )
25. Note:
◦ When analyzing the space complexity of an algorithm, we
concentrate only on estimating
Sp (Instance characteristics ).
◦ We do not concentrate on estimating the fixed part C.
◦ We need to identify the instance characteristics of the
problem to measure Sp
26. Example1
Algorithm abc(a,b,c)
{
return a+b+b*c+(a+b-c)/(a+b)+4.0;
}
Problem instance characterized by the specific values of a,b,and c.
If we assume one word (4 bytes) is adequate to store the values of each
a, b, and c , then the space needed by abc is independent of the
instance characteristics.
Therefore, Sabc( instance characteristics)=0
27. Example2
Algorithm sum(a,n)
{
s:=0;
for i:=1 to n do
s:=s+a[i];
return s;
}
Problem instance characterized by n.
The amount of space needed does not depend on the value of n.
Therefore, Ssum(n)=0
Recall: Address of the
first element of the array will
be passed .
28. Example3
Algorithm RSum(a,n)
{
if(n ≤ 0) then return 0;
else return RSum(a,n-1)+a[n];
}
Type                              Number of bytes
formal parameter: int a                 2
formal parameter: int n                 2
return address (used internally)        2
Total per one recursive call            6
Total no.of recursive calls n, therefore SRSum (n)=6(n+1)
29. Example4
Calculate space complexity sp(n) for the following algorithm
Algorithm rfactorial(n)
{
fact=1;
if (n=0 or n=1)return fact;
else
fact=n*rfactorial(n-1);
return fact;
}
30. Contd..
Time Complexity:
T(P)=C+TP(I)
◦ The time, T(P), taken by a program, P, is the sum of its
compile time C and its run (or execution) time, TP(I).
◦ The compile time does not depend on the instance
characteristics.
◦ We will concentrate on estimating run time Tp(I).
31. If we know the characteristics of the compiler to be used, we can
determine the
◦ No. of additions , subtractions, multiplications, divisions, compares, and so
on.
Then we can obtain an expression for Tp(n) Of the form
TP(n)=caADD(n)+csSUB(n)+cmMUL(n)+cdDIV(n)+……..
where,
◦ n denotes the instance characteristics.
◦ ca, cs, cm, cd and so on denote the time needed for an addition,
subtraction, multiplication, division and so on. And
◦ ADD, SUB, MUL, DIV, and so on are functions whose values are the no.of
additions, subtractions, multiplications, divisions, and so on.
32. Obtaining such an exact formula is an impossible task, since the
time needed for an addition, subtraction, and so on,
depends on the numbers being added, subtracted, and so on.
The value of Tp(n) for any given n can be obtained only
experimentally.
Even with the experimental approach, one could face
difficulties.
In a multiuser system the execution time of a program p
depends on the number of other programs running on the
computer at the time program p is running.
33. Contd..
As there were some problems in determining the execution time using
earlier methods, we will go one step further and count only the number
of program steps.
program step
program step is loosely defined as a syntactically or
semantically meaningful program segment whose execution
time is independent of the instance characteristics.
◦ Example
result = a + b + b * c + (a + b - c) / (a + b) + 4.0;
sum = a + b + c;
34. The number of steps assigned to any program statement
depends on the kind of statement.
Comments are counted as zero number of steps.
An assignment statement which does not involve any calls to
other functions counted as one step.
For loops, such as the for, while, and repeat-until, we consider
the step counts only for the control part of the statement.
The control parts for for and while statements have the
following forms:
for i:= <expr1> to <expr2> do
while ( <expr> ) do
35. Each execution of the control part of a while statement is one, unless
<expr> is a function of instance characteristics.
The step count for each execution of the control part of a for statement is
one, unless <expr1> and <expr2> are functions of the instance
characteristics.
The step count for each execution of the condition of a conditional
statements is one, unless condition is a function of instance characteristics.
If any statement ( assignment statement, control part, condition etc.)
involves function calls, then the step count is equal to the number of steps
assignable to the function plus one.
36. Methods to compute the step count
1) Introduce global variable count into programs with
initial value zero.
◦ Statements to increment count by the appropriate amount are
introduced into the program.
◦ The value of the count by the time program terminates is the
number steps taken by the program.
2) Tabular method
◦ Determine the total number of steps contributed by each
statement per execution frequency
◦ Add up the contribution of all statements
37. Method-I: Introduce variable count into programs
EX:- Iterative sum of n numbers
Algorithm sum(a, n)
{
s:=0;
count:=count+1; // for assignment statement
for i:=1 to n do
{ count:=count+1; // For for
s:=s+a[i];
count:=count+1; // for assignment statement
}
count:=count+1; // for last time of for
return s;
count:=count+1; // for return
}
2n + 3 steps
Note : Step count tells us how the run time for a program changes with
changes in the instance characteristics.
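Method-I can be checked mechanically. The sketch below (an illustrative Python translation, not part of the slides) carries the same `count` increments as the pseudocode and returns the sum together with the step count, which comes out to 2n + 3:

```python
def sum_with_count(a, n):
    count = 0
    s = 0
    count += 1                 # for the assignment s := 0
    for i in range(n):
        count += 1             # for the for-loop control, once per iteration
        s += a[i]
        count += 1             # for the assignment inside the loop
    count += 1                 # for the last (failing) test of the for-loop
    count += 1                 # for the return
    return s, count

print(sum_with_count([4, 1, 2], 3))   # (7, 9): 2*3 + 3 = 9 steps
```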
38. Ex:- Addition of two m×n matrices
Algorithm Add(a,b,c,,m,n)
{
for i:=1 to m do
{
for j:=1 to n do
{
c[i,j]:=a[i,j]+b[i,j];
}
}
}
2mn + 2m+1 steps
39. EX:- Recursive sum of n numbers
Algorithm RSum(a,n)
{
count:=count+1; // for the if conditional
if(n ≤ 0) then
{
count:=count+1; // for the return
return 0;
}
else
{
count:=count+1; // for the addition, function invocation and return
return RSum(a,n-1)+a[n];
}
}
40. When analyzing a recursive program for its step count, we often obtain
a recursive formula for the step count.
We obtain the following recursive formula for above (RSum) algorithm.
tRSum(n) = 2                 if n = 0
tRSum(n) = 2 + tRSum(n-1)    if n > 0
41. One way of solving such recursive formula is by repeated substitutions for
each occurrence of the function tRSum on the right side until all such
occurrences disappear:
tRSum (n) = 2+tRSum(n-1)
= 2+2+tRSum(n-2)
= 2(2)+tRSum(n-2)
:
:
= n(2) +tRSum(n-n)
=2n+tRSum(0)
= 2n+2
The step count for RSum is 2n+2
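The repeated-substitution result can be verified directly: a small Python sketch (illustrative, not from the slides) evaluates the recurrence and compares it with the closed form 2n + 2:

```python
def t_rsum(n):
    # tRSum(0) = 2;  tRSum(n) = 2 + tRSum(n-1) for n > 0
    return 2 if n == 0 else 2 + t_rsum(n - 1)

print(all(t_rsum(n) == 2 * n + 2 for n in range(100)))   # True
```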
42. Method-II: Tabular method
EX:- Iterative sum of n numbers
Statement               s/e    frequency    Total steps
Algorithm sum(a, n)      0        --            0
{                        0        --            0
  s:=0;                  1        1             1
  for i:=1 to n do       1        n+1           n+1
    s:=s+a[i];           1        n             n
  return s;              1        1             1
}                        0        --            0
Total                                           2n+3
43. EX:- Addition of two m×n matrices
Statement                      s/e    frequency    Total steps
Algorithm Add(a,b,c,m,n)        0        --            0
{                               0        --            0
  for i:=1 to m do              1        m+1           m+1
    for j:=1 to n do            1        m(n+1)        mn+m
      c[i,j]:=a[i,j]+b[i,j];    1        mn            mn
}                               0        --            0
Total                                                 2mn+2m+1
44. EX:- Recursive sum of n numbers
Statement              s/e    Frequency     Total steps
                              n=0    n>0    n=0    n>0
Algorithm RSum(a,n)     0     --     --     0      0
{                       0     --     --     0      0
  if( n ≤ 0 ) then      1     1      1      1      1
    return 0;           1     1      0      1      0
  else return
    RSum(a,n-1)+a[n];  1+x    0      1      0      1+x
}                       0     --     --     0      0
Total                                       2      2+x
x = tRSum(n-1)
45. Best, Worst, Average Cases
Not all inputs of a given size take the same number of program steps.
Sequential search for K in an array of n integers:
• Begin at first element in array and look at each element in turn until
K is found.
1. Best-Case Step count:-
Minimum number of steps executed by the algorithm for the given
parameters.
2. Worst-Case Step count:-
Maximum number of steps executed by the algorithm for the given
parameters.
3.Average-Case Step count:-
Average number of steps executed by an algorithm.
46. Contd..
[Figure: running times between 1 ms and 5 ms for the possible inputs A through G, with the best-case and worst-case times marked.]
Average-case time =
( A time + B time + ……….. + G time ) / 7 ( total number of possible inputs )
47. Inexactness of step count.
• Both the instructions x=y; and x=y+z+(x/y) count as one step.
• Therefore, two analysts may arrive at 4n²+6n+2 and 7n²+3n+4 as the
step count for the same program.
• Any step count of the form c1n²+c2n+c3 could be a correct step count
for the program.
• Because of the inexactness of what a step count stands for, the exact
step count is not very useful for comparison of algorithms.
48. Asymptotic efficiency
Asymptotic efficiency means study of algorithms
efficiency for large inputs.
To compare two algorithms with running times f(n) and
g(n), we need a rough measure that characterizes how
fast each function grows as n grows.
Hint: use rate of growth
Compare functions asymptotically!
(i.e., for large values of n)
50. The low order terms and constants in a function are relatively insignificant for
large n
n² + 100n + log10n + 1000 ~ n²
i.e., we say that n² + 100n + log10n + 1000 and n² have the same rate of growth
Some more examples
n⁴ + 100n² + 10n + 50 is ~n⁴
10n³ + 2n² is ~n³
n³ - n² is ~n³
constants
◦ 10 is ~1
◦ 1273 is ~1
51. Asymptotic Notations
Asymptotic notation describes the behavior of
functions for the large inputs.
Big Oh(O) notation:
◦ The big oh notation describes an upper bound on the
asymptotic growth rate of the function f.
Definition: [Big "oh"]
◦ f(n) = O(g(n)) (read as "f of n is big oh of g of n")
iff there exist positive constants c and n0 such
that f(n) ≤ c·g(n) for all n, n ≥ n0.
52. The definition states that the function f(n) is at most c times the
function g(n) except when n is smaller than n0.
In other words, f(n) "grows slower than or at the same rate as" g(n).
When providing an upper-bound function g for f, we normally
use a single term in n.
Examples
◦ f(n) = 3n+2
◦ 3n + 2 ≤ 4n, for all n ≥ 2, so 3n + 2 = O(n)
◦ f(n) = 10n²+4n+2
◦ 10n²+4n+2 ≤ 11n², for all n ≥ 5, so 10n²+4n+2 = O(n²)
◦ f(n) = 6·2ⁿ+n² = O(2ⁿ) /* 6·2ⁿ+n² ≤ 7·2ⁿ for n ≥ 4 */
53. It is also possible to write 10n²+4n+2 = O(n³) since 10n²+4n+2 ≤ 7n³
for n ≥ 2.
Although n³ is an upper bound for 10n²+4n+2, it is not a
tight upper bound; we can find a smaller function (n²) that
satisfies the big oh relation.
But, we can not write 10n²+4n+2 = O(n), since it does not
satisfy the big oh relation for sufficiently large input.
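The witness constants (c, n0) in these examples can be spot-checked numerically. The sketch below (illustrative; a finite check is evidence, not a proof) tests f(n) ≤ c·g(n) over a range starting at n0:

```python
def holds_big_oh(f, g, c, n0, upto=1000):
    """Spot-check f(n) <= c*g(n) for all n0 <= n <= upto."""
    return all(f(n) <= c * g(n) for n in range(n0, upto + 1))

print(holds_big_oh(lambda n: 3*n + 2, lambda n: n, c=4, n0=2))              # True
print(holds_big_oh(lambda n: 10*n*n + 4*n + 2, lambda n: n*n, c=11, n0=5))  # True
# 10n^2+4n+2 = O(n) fails: no constant works for large n
print(holds_big_oh(lambda n: 10*n*n + 4*n + 2, lambda n: n, c=100, n0=1))   # False
```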
54. Omega (Ω) notation:
◦ The omega notation describes a lower bound on the
asymptotic growth rate of the function f.
Definition: [Omega]
◦ f(n) = Ω(g(n)) (read as "f of n is omega of g of n")
iff there exist positive constants c and n0 such
that f(n) ≥ c·g(n) for all n, n ≥ n0.
55. The definition states that the function f(n) is at least c times the function g(n)
except when n is smaller than n0.
In other words, f(n) "grows faster than or at the same rate as" g(n).
Examples
◦ f(n) = 3n+2
◦ 3n + 2 ≥ 3n, for all n ≥ 1, so 3n + 2 = Ω(n)
◦ f(n) = 10n²+4n+2
◦ 10n²+4n+2 ≥ n², for all n ≥ 1, so 10n²+4n+2 = Ω(n²)
It is also possible to write 10n²+4n+2 = Ω(n) since 10n²+4n+2 ≥ n for n ≥ 0.
Although n is a lower bound for 10n²+4n+2, it is not a tight lower bound; we
can find a larger function (n²) that satisfies the omega relation.
But, we can not write 10n²+4n+2 = Ω(n³), since it does not satisfy the omega
relation for sufficiently large input.
56. Theta (Θ) notation:
◦ The Theta notation describes a tight bound on the
asymptotic growth rate of the function f.
Definition: [Theta]
◦ f(n) = Θ(g(n)) (read as "f of n is theta of g of n")
iff there exist positive constants c1, c2, and n0
such that c1·g(n) ≤ f(n) ≤ c2·g(n) for all n, n ≥ n0.
57. The definition states that the function f(n) lies between c1 times the function
g(n) and c2 times the function g(n) except when n is smaller than n0.
In other words, f(n) "grows at the same rate as" g(n).
Examples:-
◦ f(n) = 3n+2
◦ 3n ≤ 3n + 2 ≤ 4n, for all n ≥ 2, so 3n + 2 = Θ(n)
◦ f(n) = 10n²+4n+2
◦ n² ≤ 10n²+4n+2 ≤ 11n², for all n ≥ 5, so 10n²+4n+2 = Θ(n²)
But, we can not write either 10n²+4n+2 = Θ(n) or 10n²+4n+2 = Θ(n³), since
neither of these will satisfy the theta relation.
58. Little oh (o) notation:
◦ The little oh notation describes a strict upper bound
on the asymptotic growth rate of the function f.
Definition: [Little "oh"]
◦ f(n) = o(g(n)) (read as "f of n is little oh of g of n") iff
lim (n→∞) f(n)/g(n) = 0
59. The definition states that the function f(n) is less than c times the function g(n)
for every positive constant c, once n is large enough.
In other words, f(n) "grows slower than" g(n).
Examples
◦ f(n) = 3n+2 = o(n²)
◦ since lim (n→∞) (3n+2)/n² = 0
◦ However, 3n+2 ≠ o(n)
60. Big-Oh, Theta, Omega and Little-oh
Tips :
Think of O(g(n)) as “less than or equal to” g(n)
◦ Upper bound: “grows slower than or same rate as” g(n)
Think of Ω(g(n)) as “greater than or equal to” g(n)
◦ Lower bound: “grows faster than or same rate as” g(n)
Think of Θ(g(n)) as “equal to” g(n)
◦ “Tight” bound: same growth rate
Think of o(g(n)) as "less than" g(n)
◦ Strict upper bound: "grows slower than" g(n)
(True for large N)
61. Functions ordered by growth rate
Function    Name
1           Growth is constant
logn        Growth is logarithmic
n           Growth is linear
nlogn       Growth is n-log-n
n²          Growth is quadratic
n³          Growth is cubic
2ⁿ          Growth is exponential
n!          Growth is factorial
1 < logn < n < nlogn < n² < n³ < 2ⁿ < n!
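Evaluating these functions at a single moderate n already shows the ordering (an illustrative Python sketch; the ordering is an asymptotic claim, but it holds at n = 20):

```python
import math

n = 20
values = {
    "1": 1,
    "log n": math.log2(n),
    "n": n,
    "n log n": n * math.log2(n),
    "n^2": n**2,
    "n^3": n**3,
    "2^n": 2**n,
    "n!": math.factorial(n),
}
ordered = sorted(values, key=values.get)   # names sorted by their value at n
print(ordered)
```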
62. ◦ To get a feel for how the various functions grow with n, you are advised to study the
following figs:
66. The Hiring Problem
You are using an employment agency to hire a new assistant.
The agency sends you one candidate each day.
You interview the candidate and must immediately decide whether or not to
hire that person. But if you hire, you must also fire your current office assistant,
even if it's someone you have recently hired.
Cost to interview is ci per candidate.
Cost to hire is ch per candidate.
You want to have, at all times, the best candidate seen so far.
When you interview a candidate who is better than your current assistant, you
fire the current assistant and hire the candidate.
You will always hire the first candidate that you interview.
Problem: What is the cost of this strategy?
67. Pseudo-code to Model the Scenario
Hire-Assistant (n)
best ← 0 ;; candidate 0 is a least qualified candidate
for i ← 1 to n
do interview candidate i
if candidate i is better than candidate best
then best ← i
hire candidate i
Cost Model:
•We are not concerned with the running time of Hire-Assistant.
•We want to determine the total cost of hiring the best candidate.
• If n candidates interviewed and m hired, then cost is nci+mch.
• Have to pay nci to interview, no matter how many we hire.
• So, focus on analyzing the hiring cost mch.
• m varies with order of candidates.
69. Worst-case Analysis
In the worst case, we hire all n candidates.
This happens if each candidate is better than all those who came
before. Candidates come in increasing order of quality.
Cost is nci+nch.
If this happens, we fire the agency.
What should happen in the average case?
70. Probabilistic analysis
Probabilistic analysis is the use of probability in the analysis of
problems.
Generally, we use probabilistic analysis to analyze the running
time of an algorithm.
Sometimes, we can also use it to analyze quantities such as the
hiring cost in procedure Hire-Assistant.
71. Hiring problem-Average –case Analysis
An input to the hiring problem is an ordering of the n applicants.
There are n! different inputs.
To use probabilistic analysis, we assume that the candidates arrive in a
random order .
Then we analyze our algorithm by computing an expected running time.
The expectation is taken over the distribution of the possible inputs.
Thus, we are averaging the running time over all possible inputs.
72. Indicator Random Variable
A powerful technique for computing the expected value of a
random variable.
Convenient method for converting between probabilities and
expectations.
Takes only 2 values, 1 and 0.
Indicator Random Variable XA = I{A} for an event A of a sample
space is defined as:
XA = I{A} = 1 if A occurs, 0 if A does not occur.
The expected value of an indicator Random Variable associated
with an event A is equal to the probability that A occurs.
73. Indicator RV – Example1
Problem: Determine the expected number of heads in one
coin flip.
Sample space is S = {H, T}
Indicator random variable
XA = I{ coin coming up with heads }
The expected number of heads obtained in one flip of the
coin is equal to the expected value of the indicator random
variable.
Therefore, E[XA] = Pr{H} = 1/2
74. Indicator RV – Example2
Problem: Determine the expected number of heads in n coin flips.
Let X be the random variable denoting the total number of heads in
n coin flips.
Define n indicator random variables, Xi, 1 ≤ i ≤ n.
Let Xi be the indicator random variable for the event that the i-th
flip results in a Head.
Xi = I{ the i-th flip results in H }
Then X = X1 + X2 + … + Xn = Σi=1..n Xi.
E[Xi] = Pr{H} = 1/2, 1 ≤ i ≤ n.
Expected number of heads is
E[X] = E[Σi=1..n Xi] = Σi=1..n E[Xi] = Σi=1..n 1/2 = n/2.
75. Back to the Hiring Problem
We want to know the expected cost of our hiring algorithm,
in terms of how many times we hire an applicant.
Random variable X(s) is the number of applicants that are
hired, given the input sequence s.
Indicator random variable Xi for applicant i: Xi = 1 if applicant i
is hired, 0 otherwise.
What is E[X]?
76. Probability of Hiring i-th Applicant
Probability of hiring i is probability that i is better than the
previous i-1 applicants…
Pr[Xi = 1] = 1/i.
Suppose n = 4 and i = 3.
Ex:- Probability that the 3rd applicant is better than the 2 previous ones.
Of the 24 orderings of {1,2,3,4}:
1234 1243 1324 1342 1423 1432
2134 2143 2314 2341 2413 2431
3124 3142 3214 3241 3412 3421
4123 4132 4213 4231 4312 4321
In 8 of them the 3rd applicant is the best of the first three: 8/24 = 1/3
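The 8/24 count can be reproduced by brute force. The sketch below (illustrative, not part of the slides) enumerates all orderings of four ranked applicants and counts those where the 3rd is the best of the first three:

```python
from fractions import Fraction
from itertools import permutations

n, i = 4, 3
perms = list(permutations(range(1, n + 1)))   # all 4! = 24 arrival orders
# count orderings whose i-th applicant beats all earlier ones
hits = sum(1 for p in perms if p[i - 1] == max(p[:i]))
print(hits, len(perms), Fraction(hits, len(perms)))   # 8 24 1/3
```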
77. Analysis of the Hiring Problem
Compute E[X], the number of candidates we expect to hire.
E[X] = E[ Σi=1..n Xi ]
     = Σi=1..n E[Xi]
     = Σi=1..n 1/i
     = ln n + O(1)
Expected hiring cost = O(ch·ln n).
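For small n the expectation can be checked exactly: averaging the number of hires over all n! arrival orders must give the harmonic number Hn = Σ 1/i. An illustrative Python sketch (not part of the slides), using exact rational arithmetic:

```python
from fractions import Fraction
from itertools import permutations

def hires(order):
    """Number of times Hire-Assistant hires, for one arrival order of ranks."""
    best, count = 0, 0
    for candidate in order:
        if candidate > best:          # better than everyone seen so far
            best, count = candidate, count + 1
    return count

n = 5
total = sum(hires(p) for p in permutations(range(1, n + 1)))
average = Fraction(total, 120)        # 5! = 120 equally likely orderings
harmonic = sum(Fraction(1, i) for i in range(1, n + 1))
print(average, harmonic)              # both 137/60
```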
78. Amortized Analysis
Data structures are subjected to a sequence of operations
rather than one operation.
In this sequence, one operation possibly performs certain
modifications that have an impact on the run time of the
next operation in the sequence.
One way of assessing the worst case run time of the entire
sequence is to add worst case efficiencies for each
operation.
This may result in an excessively large and unrealistic time
complexity.
79. Amortized analysis concentrates on finding the average
complexity of a worst case sequence of operations.
Amortized analysis considers interdependence between
operations.
◦ For example, if an array is sorted and only a very few new elements
are added, then re-sorting this array will be much faster than sorting
it for the first time.
80. Amortized Analysis
AMORTIZED ANALYSIS:
You try to estimate an upper bound of the total
work T(n) required for a sequence of n operations…
Some operations may be cheap some may be expensive.
Overall, your algorithm does T(n) of work for n operations …
Therefore, by simple reasoning, the amortized cost of each
operation is T(n)/n
81. Disjoint Sets
Two sets A and B are said to be disjoint if there are
no common elements, i.e., A ∩ B = ∅.
Example:
1) S1={1,7,8,9}, S2={2,5,10}, and S3={3,4,6}
are three disjoint sets.
82. We identify a set by choosing a representative element of the set.
It doesn’t matter which element we choose, but once chosen, it
can’t be changed.
Disjoint set operations:
◦ FIND-SET(x): Returns the representative of the set
containing x.
◦ UNION(i,j): Combines the two sets i and j into one new
set. A new representative is selected.
( Here i and j are the representatives of the sets )
84. Up-Trees
A simple data structure for implementing disjoint sets is the up-tree.
[Figure: three up-trees. S1: root 1 with children 7, 8, 9. S2: root 5 with children 2, 10. S3: root 3 with children 4, 6.]
1, 7, 8, and 9 belong to the same set. 1 is the representative.
85. Union operation
To obtain the union of two sets, set the parent field of one
of the roots to the other root.
[Figure: S1 ∪ S2 drawn two ways: root 5 attached as a child of root 1, or root 1 attached as a child of root 5.]
86. Array Representation of Up-tree
Assume each element is associated with an integer i=1…n. From now on, we deal
only with i.
Create an integer array, p[1:n]
An array entry is the element’s parent
A -1 entry signifies that element i is the representative element.
S1={1,7,8,9}, S2={2,5,10}, and S3={3,4,6}.
Ex:-
i      1   2   3   4   5   6   7   8   9  10
p[i]  -1   5  -1   3  -1   3   1   1   1   5
87. Now we can implement Find by following the indices, starting at i
until we reach a node with parent value -1.
Ex:- Find(6) moves to 6’s parent, 3. Since p[3] is negative, we
have reached the root.
The operation Union(i,j) is very simple. We pass two trees with
roots i and j. The statement p[i]:=j; accomplishes the union.
88. Simple union algorithm :
Algorithm SimpleUnion(i,j)
{
p[i] = j; // attaches i to j
}
Simple find algorithm
Algorithm SimpleFind(i)
{
while(p[i] > 0) do i:=p[i];
return i;
}
Performance: ???
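A direct Python translation of the two algorithms (illustrative; index 0 is a dummy so the array stays 1-based, and -1 marks a root) makes the degenerate-tree behaviour of the next slide easy to reproduce:

```python
def simple_union(p, i, j):
    p[i] = j                 # attach root i below root j

def simple_find(p, i):
    while p[i] > 0:          # follow parent links until a root (negative entry)
        i = p[i]
    return i

n = 5
p = [0] + [-1] * n                     # five singleton sets 1..5
for k in range(1, n):
    simple_union(p, k, k + 1)          # Union(1,2), Union(2,3), ... builds a chain
print(p)                               # [0, 2, 3, 4, 5, -1]
print(simple_find(p, 1))               # 5: the find walks the whole chain
```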
89. The performance characteristics of these algorithms are not very
good.
For example, start with q elements each is in its own set, then
process the following sequence of union-find operations:
Union(1,2), Union(2,3), Union(3,4), ……, Union(n-1,n)
Find(1), Find(2),………, Find(n).
This sequence results in the degenerate tree shown below.
1
.
.
.
n-1
n
90. The time taken for a union is O(1), so the n-1 unions can be
processed in time O(n).
However, each find requires following a sequence of parent
pointers from the element to be found to the root.
Since the time required to process a find for an element at level i
of a tree is O(i), the total time needed to process the n finds is
O( Σ1≤i≤n i ) = O(n²)
We can improve the performance of the union and find algorithms by
avoiding the creation of degenerate trees.
To do this, we use the weighting rule for union and the collapsing rule for
find.
91. Weighting rule for Union(i,j)
Def:-If the number of nodes in the tree with root i
is less than the number in the tree with root j,
then make j the parent of i; otherwise make i
the parent of j.
OR
Always attach smaller tree to larger.
92. If we use the weighting rule, we obtain the following trees.
[Figure: starting from singletons 1, 2, …, n, Union(1,2) attaches 2 under 1, Union(1,3) attaches 3 under 1, …, Union(1,n) attaches n under 1, so the tree rooted at 1 keeps height 2.]
93. To implement weighting rule, we maintain a count field in the
root of every tree.
If i is a root node, then count[i] equals the number of nodes in
the tree.
Since all nodes other than the roots of trees have a positive
number in the p field, we can maintain the count in the p field of
the roots as a negative number.
94. Algorithm WeightedUnion(i,j)
// Union sets with roots i, and j, i ≠ j, using
// the weighting rule. p[i]= -count[i] and p[j]= -count[j].
{
temp:=p[ i ]+p[ j ];
if( p[ i ]>p[ j ] ) then
{ // i has fewer nodes.
p[i]:=j; p[j]:=temp;
}
else
{ // j has fewer or equal nodes.
p[ j ]:=i; p[ i ]:=temp;
}
}
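The same algorithm in Python (an illustrative 1-based translation; p[root] holds -count, exactly as the slides describe):

```python
def weighted_union(p, i, j):
    """Union of roots i and j; attach the tree with fewer nodes under the other."""
    temp = p[i] + p[j]       # -(total node count) of the merged tree
    if p[i] > p[j]:          # i has fewer nodes (its count is less negative)
        p[i] = j
        p[j] = temp
    else:                    # j has fewer or equal nodes
        p[j] = i
        p[i] = temp

p = [0] + [-1] * 4                 # four singleton sets 1..4
weighted_union(p, 1, 2)            # {1,2}, root 1, count 2
weighted_union(p, 3, 4)            # {3,4}, root 3, count 2
weighted_union(p, 1, 3)            # {1,2,3,4}, root 1, count 4
print(p)                           # [0, -4, 1, 1, 3]
```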
95. In this algorithm, again the time required to perform a union is O(1), so
the n-1 unions can be processed in time O(n).
The find algorithm is unchanged.
Lemma:-
Assume that we start with a forest of trees, each having one node. Let T be a
tree with n nodes created as a result of a sequence of unions each performed
using WeightedUnion. Then the height of T is no greater than ⌊log2 n⌋ + 1.
The maximum time required to perform a find is O(logn).
If there are n find operations, then the time required to perform this sequence is
O(nlogn).
96. Ex:-Consider the behavior of WeightedUnion on the following
sequence of unions starting from the initial configuration
p[i]= -count[i]= -1, 1≤ i ≤ 8=n .
Union(1,2), Union(3,4), Union(5,6), Union(7,8),
Union(1,3), Union(5,7), Union(1,5)
[Figure: eight singleton nodes 1 through 8, each with p-value [-1].]
a) Initial height-1 trees
[Figure: four height-2 trees with roots 1, 3, 5, 7, each marked [-2], and children 2, 4, 6, 8 respectively.]
b) height-2 trees following Union(1,2), (3,4), (5,6), and (7,8)
97. [Figure: two height-3 trees. Root 1 is marked [-4] with children 2 and 3, and 4 under 3; root 5 is marked [-4] with children 6 and 7, and 8 under 7.]
c) height-3 trees following Union(1,3), and (5,7)
[Figure: a single height-4 tree. Root 1 is marked [-8]; node 5, with its subtree, is attached under 1.]
d) height-4 tree following Union(1,5)
98. Collapsing rule for find
Def:- When we do a find on an element i, we make each
node in the path from i to the root directly point to
the root.
[Figure: before the find, the path from c runs c → d → e → f (the root); after collapsing, c, d, and e all point directly at f.]
99. Ex:- Consider the previous tree created by WeightedUnion on the
sequence of unions. Now process the following finds:
Find(8), Find(7), Find(5),Find(8)
Find(8) requires going up three
parent link fields, Find(7) requires
going up two parent fields, and
Find(5) requires one parent field.
The total cost for the above
sequence is 9 moves.
[Figure: the height-4 tree rooted at 1 (count [-8]) from the previous slide.]
100. When CollapsingFind is used, the first Find(8) requires going up three
links and then resetting two links; after that, each remaining find needs
only one move.
Now the total cost for the above sequence is 6 moves.
[Figure: after the collapsing Find(8), nodes 5, 7, and 8 all point directly at root 1 (count [-8]).]
101. Algorithm CollapsingFind(i)
// Find the root of the tree containing element i.
// Use the collapsing rule to collapse all nodes from i to the root.
{
r:=i;
while( p[ r ] >0 ) do r:=p[r]; // Find root.
while( i ≠ r ) do // collapse nodes from i to r.
{
s:=p[ i ]; p[ i ]:=r; i:=s;
}
return r;
}
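In Python (an illustrative translation; -1 marks the root), the second while-loop is what repoints every node on the search path:

```python
def collapsing_find(p, i):
    r = i
    while p[r] > 0:          # first pass: find the root
        r = p[r]
    while i != r:            # second pass: point every path node at the root
        s = p[i]
        p[i] = r
        i = s
    return r

# chain 4 -> 3 -> 2 -> 1 (root)
p = [0, -1, 1, 2, 3]
print(collapsing_find(p, 4))   # 1
print(p)                       # [0, -1, 1, 1, 1]: 2, 3, 4 now point directly at 1
```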
102. Divide and Conquer Technique
General Method:
The Divide and Conquer Technique splits n inputs into k subsets , 1< k ≤ n,
yielding k subproblems.
These subproblems will be solved and then combined by using a separate
method to get a solution to a whole problem.
If the subproblems are large, then the Divide and Conquer Technique will be
reapplied.
Often subproblems resulting from a Divide and Conquer Technique are of
the same type as the original problem.
103. For those cases the reapplication of the Divide and Conquer Technique
is naturally expressed by a recursive algorithm.
Now smaller and smaller problems of the same kind are generated
until subproblems that are small enough to be solved without splitting
are produced.
104. Control Abstraction/general method for Divide and Conquer
Technique
Algorithm DAndC(p)
{
if Small(p) then return s(p);
else
{
divide p into smaller instances p1,p2,…….,pk, k≥1;
Apply DAndC to each of these subproblems;
return Combine(DAndC(p1), DAndC(p2),……,DAndC(pk));
}
}
105. If the size of p is n and the sizes of the k subproblems are n1,n2,….,nk,then the
computing time of DAndC is described by the recurrence relation
T(n) = g(n)                              n small
T(n) = T(n1)+T(n2)+……+T(nk)+f(n)         otherwise
Where T(n) is the time for DAndC on any input of size n and g(n) is the time to
compute the answer directly for small inputs.
The function f(n) is the time for dividing p and combining the solutions to
subproblems.
106. The complexity of many divide-and-conquer algorithms is
given by recurrences of the form
T(n) = c                  n small
T(n) = a·T(n/b) + f(n)    otherwise
where a, b, and c are known constants
and n is a power of b (i.e., n = b^k).
108. Recursive Algorithm ( Divide and Conquer Technique)
Algorithm BinSrch (a,i,l,x)
//given an array a [i:l]of elements in nondecreasing
//order,1≤i≤l,determine whether x is present,and
//if so, return j such that x=a[j]; else return 0.
{
if(l=i) then // If small(P)
{
if(x=a[i]) then return i;
else return 0;
}
else
109. { //Reduce p into a smaller subproblem.
mid:= ⌊(i+l)/2⌋;
if(x=a[mid]) then return mid;
else if (x<a[mid]) then
return BinSrch (a,i,mid-1,x);
else return BinSrch(a,mid+1,l,x);
}
}
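A runnable Python version of BinSrch (illustrative; it keeps the pseudocode's 1-based, inclusive bounds by leaving index 0 unused, and adds an explicit empty-range guard that the pseudocode leaves implicit):

```python
def bin_srch(a, i, l, x):
    """Search sorted a[i..l] (1-based, inclusive) for x; return its index or 0."""
    if l < i:                          # empty range: x is not present
        return 0
    if l == i:                         # Small(P): one element left
        return i if a[i] == x else 0
    mid = (i + l) // 2
    if a[mid] == x:
        return mid
    if x < a[mid]:
        return bin_srch(a, i, mid - 1, x)
    return bin_srch(a, mid + 1, l, x)

a = [None, 2, 5, 8, 12, 16, 23]        # index 0 unused to match 1-based indexing
print(bin_srch(a, 1, 6, 12))   # 4
print(bin_srch(a, 1, 6, 7))    # 0
```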
110. Time complexity of Binary Search
If the time for dividing the list is a constant, then the computing time for binary
search is described by the recurrence relation
T(n) = c1                n=1, c1 is a constant
T(n) = T(n/2) + c2       n>1, c2 is a constant
Assume n = 2^k, then
T(n) = T(n/2) + c2
     = T(n/4) + c2 + c2
     = T(n/8) + c2 + c2 + c2
     …..
     = T(1) + k·c2
     = c1 + k·c2 = c1 + logn·c2 = O(logn)
111. Time Complexity of Binary Search
Successful searches:
best average worst
O(1) O(log n) O( log n)
Unsuccessful searches :
best average worst
O(log n) O(log n) O( log n)
112. 2. Merge Sort
1. Base Case, solve the problem directly
if it is small enough(only one element).
2. Divide the problem into two or more
similar and smaller subproblems.
3. Recursively solve the subproblems.
4. Combine solutions to the subproblems.
137. Merge Sort: Algorithm
Algorithm MergeSort(low, high)
// sorts the elements a[low],…,a[high] which reside in the global array
// a[1:n] into ascending order.
// Small(p) is true if there is only one element to sort. In this case the list is
// already sorted.
{
if ( low < high ) then // if there are more than one element
{
mid := ⌊(low+high)/2⌋;
MergeSort(low, mid);       // recursive calls on the two halves
MergeSort(mid+1, high);
Merge(low, mid, high);     // combine the sorted halves
}
}
149. Algorithm Merge(low,mid,high)
// a[low:high] is a global array containing two sorted subsets in a[low:mid]
// and in a[mid+1:high]. The goal is to merge these two sets into a single set
// residing in a [low:high]. b[ ] is a temporary global array.
{
h:=low; i:=low; j:=mid+1;
while( h ≤ mid ) and ( j ≤ high ) do
{
if( a[h] ≤ a[j] ) then
{
b[i]:=a[h]; h:=h+1;
}
else
{
b[i]:=a[j]; j:=j+1;
}
i:=i+1;
}
150. if( h > mid ) then
for k:=j to high do
{
b[i] := a[k]; i:= i+1;
}
else
for k:=h to mid do
{
b[i] := a[k]; i:= i+1;
}
for k:= low to high do a[k]:=b[k];
}
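The MergeSort/Merge pair can be sketched in Python (illustrative; this version returns a new list instead of using the slides' global arrays a and b, but the merge loop mirrors the pseudocode's h/j scan):

```python
def merge_sort(a):
    """Top-down merge sort; returns a new sorted list."""
    if len(a) <= 1:                     # Small(p): already sorted
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    # merge the two sorted halves, like Algorithm Merge's h/j scan
    b, h, j = [], 0, 0
    while h < len(left) and j < len(right):
        if left[h] <= right[j]:
            b.append(left[h]); h += 1
        else:
            b.append(right[j]); j += 1
    b.extend(left[h:])                  # copy whichever half has elements left
    b.extend(right[j:])
    return b

print(merge_sort([310, 285, 179, 652, 351, 423, 861, 254, 450, 520]))
```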
152. Merge-Sort Time Complexity
If the time for the merging operation is proportional to n, then the
computing time for merge sort is described by the recurrence relation
n>1, c2 is a constant
n=1, c1 is a constant
2T(n/2) + c2n
c1
T(n) =
Assume n=2k
, then
T(n) =2T(n/2) + c2n
=2(2T(n/4)+c2n/2)+cn
=4T(n/4)+2c2n
…..
…..
=2k
T(1)+ kc2n
= c1n+c2nlogn = = O(nlogn)
153. Summary
Merge-Sort
◦ Most of the work done in combining the solutions.
◦ Best case takes O(n log(n)) time
◦ Average case takes O(n log(n)) time
◦ Worst case takes O(n log(n)) time
154. 3. Quick Sort
Divide:
◦ Pick any element as the pivot, e.g, the first element
◦ Partition the remaining elements into
FirstPart, which contains all elements < pivot
SecondPart, which contains all elements > pivot
Recursively sort FirstPart and SecondPart.
Combine: no work is necessary since sorting is done in place.
The pivot divides a into two sublists x and y.
4 2 7 8 1 9 3 6 5      (pivot = 4)
After partitioning: x (elements < 4) | 4 | y (elements > 4)
157. Process: keep moving i from the left side as long as a[ i ] < pivot, and
j from the right side as long as a[ j ] > pivot.
85 24 63 95 17 31 45 98      (pivot = 85)
i moves right past 24 and 63 and stops at 95 (95 ≥ pivot)
j moves left past 98 and stops at 45 (45 < pivot)
158. If i < j, interchange the i-th and j-th elements, and then continue the process.
After interchanging 95 and 45:
85 24 63 45 17 31 95 98
i continues right past 45 and 17 and stops at 95; j continues left and stops at 31.
159. If i ≥ j, interchange the j-th element and the pivot, and then divide the
list into two sublists.
After interchanging pivot 85 and a[ j ] = 31:
31 24 63 45 17 85 95 98
161. Quick Sort Algorithm :
Algorithm QuickSort(low,high)
// Sorts the elements a[low],…,a[high], which reside
// in the global array a[1:n], into ascending order.
// a[n+1] is considered to be defined and must be ≥ all the
// elements in a[1:n].
{
if( low < high ) then // if there is more than one element
{ // divide the problem into two subproblems.
j :=Partition(low,high);
// j is the position of the partitioning element.
QuickSort(low,j-1);
QuickSort(j+1,high);
// There is no need for combining solutions.
}
}
162. Algorithm Partition(l,h)
{
pivot:= a[l] ; i:=l; j:= h+1;
while( i < j ) do
{
i++;
while( a[ i ] < pivot) do
i++;
j--;
while( a[ j ] >= pivot ) do
j--;
if ( i < j ) then Interchange(i, j); // interchange the i-th and j-th elements.
}
Interchange(l, j); return j; // interchange the pivot and the j-th element.
}
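A runnable C version of QuickSort and Partition might look like this (a sketch with 0-based indexing; the scan for i is bounded explicitly by high, so the sentinel element a[n+1] assumed by the slides is not needed):

```c
static void swap_int(int *x, int *y) { int t = *x; *x = *y; *y = t; }

/* Partition a[low..high] around pivot = a[low]; returns the pivot's
   final index, with smaller elements to its left and larger to its right. */
static int partition(int a[], int low, int high)
{
    int pivot = a[low], i = low, j = high + 1;
    while (i < j) {
        do { i++; } while (i <= high && a[i] < pivot); /* scan from the left  */
        do { j--; } while (a[j] > pivot);              /* scan from the right */
        if (i < j) swap_int(&a[i], &a[j]);
    }
    swap_int(&a[low], &a[j]);   /* put the pivot between the two sublists */
    return j;
}

static void quick_sort(int a[], int low, int high)
{
    if (low < high) {
        int j = partition(a, low, high);  /* pivot's final position */
        quick_sort(a, low, j - 1);
        quick_sort(a, j + 1, high);
    }
}
```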
168. Summary
Quick-Sort
◦ Most of the work done in partitioning
◦ Best case takes O(n log(n)) time
◦ Average case takes O(n log(n)) time
◦ Worst case takes O(n²) time
170. Basic Matrix Multiplication
Let A and B be two n×n matrices. The product C = AB is also an n×n matrix.

void matrix_mult() {
  for (i = 1; i <= N; i++) {
    for (j = 1; j <= N; j++) {
      for (k = 1; k <= N; k++) {
        C[i][j] = C[i][j] + A[i][k] * B[k][j];
      }
    }
  }
}

The time complexity of the above algorithm is T(n) = O(n³).
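A self-contained, testable rendering of the triple loop (with C's 2-D indexing C[i][j] and a fixed size for illustration) might be:

```c
#define N 2

/* C = A * B for N x N matrices: the classic triple loop, O(N^3) time. */
static void matrix_mult(double A[N][N], double B[N][N], double C[N][N])
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            C[i][j] = 0.0;
            for (int k = 0; k < N; k++)
                C[i][j] += A[i][k] * B[k][j];  /* inner product of row i, col j */
        }
}
```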
171. Divide and Conquer technique
We want to compute the product C=AB, where each of A,B, and C are
n×n matrices.
Assume n is a power of 2.
If n is not a power of 2, add enough rows and columns of zeros.
We divide each of A, B, and C into four n/2×n/2 matrices, rewriting the
equation C = AB as follows:

[ C11 C12 ]   [ A11 A12 ] [ B11 B12 ]
[ C21 C22 ] = [ A21 A22 ] [ B21 B22 ]
172. Then,
C11=A11B11+A12B21
C12=A11B12+A12B22
C21=A21B11+A22B21
C22=A21B12+A22B22
Each of these four equations specifies two multiplications of n/2×n/2 matrices
and the addition of their n/2×n/2 products.
We can derive the following recurrence relation for the time T(n) to multiply
two n×n matrices:

T(n) = c1                if n ≤ 2
T(n) = 8T(n/2) + c2n²    if n > 2

Solving it gives T(n) = O(n³).
• This method is no faster than the ordinary method.
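For n = 2 each block is a single element, so the four block equations reduce to ordinary scalar arithmetic; a minimal sketch (the function name is hypothetical) that can be checked directly:

```c
/* For n = 2 the blocks C_ij, A_ij, B_ij are single numbers, and the four
   block equations give the ordinary 2x2 product:
   8 scalar multiplications and 4 additions, matching 8T(n/2) + c2*n^2. */
static void block_mult_2x2(double A[2][2], double B[2][2], double C[2][2])
{
    C[0][0] = A[0][0]*B[0][0] + A[0][1]*B[1][0];  /* C11 = A11 B11 + A12 B21 */
    C[0][1] = A[0][0]*B[0][1] + A[0][1]*B[1][1];  /* C12 = A11 B12 + A12 B22 */
    C[1][0] = A[1][0]*B[0][0] + A[1][1]*B[1][0];  /* C21 = A21 B11 + A22 B21 */
    C[1][1] = A[1][0]*B[0][1] + A[1][1]*B[1][1];  /* C22 = A21 B12 + A22 B22 */
}
```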
[Figure: the matrices C, A, and B partitioned into the four n/2×n/2 blocks
C11, C12, C21, C22; A11, A12, A21, A22; and B11, B12, B21, B22.]
174. Strassen’s method
Matrix multiplications are more expensive than matrix additions or
subtractions (O(n³) versus O(n²)).
Strassen discovered a way to compute the product using only 7
multiplications and 18 additions or subtractions.
His method computes seven n/2×n/2 matrices M1, M2, M3, M4, M5, M6, and M7;
the blocks cij are then calculated from these matrices.
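The slides do not list the seven products. In the standard formulation (shown here as a sketch for n = 2, where each block is a scalar) they are:

```c
/* Strassen's method on a 2x2 matrix (blocks are scalars):
   7 multiplications instead of 8, plus 18 additions/subtractions. */
static void strassen_2x2(double A[2][2], double B[2][2], double C[2][2])
{
    double M1 = (A[0][0] + A[1][1]) * (B[0][0] + B[1][1]);
    double M2 = (A[1][0] + A[1][1]) *  B[0][0];
    double M3 =  A[0][0]            * (B[0][1] - B[1][1]);
    double M4 =  A[1][1]            * (B[1][0] - B[0][0]);
    double M5 = (A[0][0] + A[0][1]) *  B[1][1];
    double M6 = (A[1][0] - A[0][0]) * (B[0][0] + B[0][1]);
    double M7 = (A[0][1] - A[1][1]) * (B[1][0] + B[1][1]);

    C[0][0] = M1 + M4 - M5 + M7;   /* c11 */
    C[0][1] = M3 + M5;             /* c12 */
    C[1][0] = M2 + M4;             /* c21 */
    C[1][1] = M1 - M2 + M3 + M6;   /* c22 */
}
```

Applied recursively to n/2×n/2 blocks, this yields the recurrence T(n) = 7T(n/2) + O(n²) = O(n^log2(7)) ≈ O(n^2.81).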
#45: Best: Find at first position. Cost is 1 compare.
Worst: Find at last position. Cost is n compares.
Average: (n+1)/2 compares IF we assume the element with value K is equally likely to be in any position in the array.
#113: Divide: if the initial array A has at least two elements (nothing needs to be done if A has zero or one element), divide A into two subarrays, each containing about half of the elements of A.
Conquer: sort the two subarrays using Merge Sort.
Combine: merge the two sorted subarrays into one sorted array.
#168: With a randomly chosen pivot, no specific input triggers worst-case behavior;
the worst case is determined only by the output of the random-number generator.
Disadvantage:
Unstable in running time
Choosing the "pivot" element is a big issue!