1. An algorithm is a finite set of precise instructions for performing a computation or solving a problem. Some problems are easy, such as finding the maximum value in a list, while others are harder, such as finding the shortest path between two locations.
2. The document gives pseudocode for several common algorithms: finding the maximum element of a list, linear search, binary search, bubble sort, and insertion sort. It also lists key properties of algorithms, such as input, output, definiteness, correctness, finiteness, effectiveness, and generality.
3. Worst-case running times are analyzed: linear search takes n steps, binary search takes log2 n steps, and bubble sort and insertion sort each take about n^2 steps. Faster sorting algorithms such as heap sort, quick sort, and merge sort take about n log2 n steps.
2. 2
What is an algorithm?
An algorithm is “a finite set of precise instructions for performing a computation or for solving a problem”
A program is one type of algorithm
All programs are algorithms
Not all algorithms are programs!
Directions to somebody’s house is an algorithm
A recipe for cooking a cake is an algorithm
The steps to compute the cosine of 90° are an algorithm
3. 3
Some algorithms are harder than others
Some algorithms are easy
Finding the largest (or smallest) value in a list
Finding a specific value in a list
Some algorithms are a bit harder
Sorting a list
Some algorithms are very hard
Finding the shortest path between Miami and Seattle
Some algorithms are essentially impossible
Factoring large composite numbers
In section 2.2, we’ll see how to rate how “hard” algorithms are
4. 4
Algorithm 1: Maximum element
Given a list, how do we find the maximum element in the list?
To express the algorithm, we’ll use pseudocode
Pseudocode is similar to a programming language, but less formal
5. 5
Algorithm 1: Maximum element
Algorithm for finding the maximum element in a list:
procedure max (a1, a2, …, an: integers)
max := a1
for i := 2 to n
if max < ai then max := ai
{max is the largest element}
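As a concrete illustration (an addition, not part of the original slides), here is a minimal Python sketch that follows the pseudocode above: scan the list once and keep a running maximum. The function name find_max is my own choice.

# Minimal Python sketch of Algorithm 1 (maximum element), following the pseudocode above
def find_max(a):
    """Return the largest element of a non-empty list a."""
    largest = a[0]                 # max := a1
    for i in range(1, len(a)):     # for i := 2 to n
        if largest < a[i]:         # if max < ai then max := ai
            largest = a[i]
    return largest

# Example: the list traced on the next slide
print(find_max([4, 1, 7, 0, 5, 2, 9, 3, 6, 8]))   # prints 9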
6. 6
Algorithm 1: Maximum element (trace)
procedure max (a1, a2, …, an: integers)
max := a1
for i := 2 to n
if max < ai then max := ai
Trace on the list 4 1 7 0 5 2 9 3 6 8 (a1 … a10): i runs from 2 to 10, and max takes the values 4, then 7, then 9.
7. 7
Maximum element running time
How long does this take?
If the list has n elements, the worst case is that it takes n “steps”
Here, a “step” means examining a single element of the list
8. 8
Properties of algorithms
Algorithms generally share a set of properties:
Input: what the algorithm takes in as input
Output: what the algorithm produces as output
Definiteness: the steps are defined precisely
Correctness: should produce the correct output
Finiteness: the steps required should be finite
Effectiveness: each step must be able to be performed in a finite amount of time
Generality: the algorithm should be applicable to all problems of a similar form
9. 9
Searching algorithms
Given a list, find a specific element in the list
We will see two types
Linear search
a.k.a. sequential search
Binary search
10. 10
Algorithm 2: Linear search
Given a list, find a specific element in the list
List does NOT have to be sorted!
procedure linear_search (x: integer; a1, a2, …, an: integers)
i := 1
while ( i ≤ n and x ≠ ai )
i := i + 1
if i ≤ n then location := i
else location := 0
{location is the subscript of the term that equals x, or it is 0 if x is not found}
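For reference, a short Python sketch of the same linear search (an addition, not from the slides). It returns a 1-based location as in the pseudocode, or 0 if x is absent; the name linear_search mirrors the pseudocode.

def linear_search(x, a):
    """Return the 1-based position of x in list a, or 0 if x is not found."""
    i = 0
    while i < len(a) and a[i] != x:    # while ( i <= n and x != ai )
        i += 1
    return i + 1 if i < len(a) else 0  # location := i if found, else 0

# Examples matching the traces on the next two slides
data = [4, 1, 7, 0, 5, 2, 9, 3, 6, 8]
print(linear_search(3, data))    # prints 8
print(linear_search(11, data))   # prints 0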
11. 11
Algorithm 2: Linear search, take 1
procedure linear_search (x: integer; a1, a2, …, an: integers)
i := 1
while ( i ≤ n and x ≠ ai )
i := i + 1
if i ≤ n then location := i
else location := 0
Trace on the list 4 1 7 0 5 2 9 3 6 8 with x = 3: i runs from 1 up to 8, where a8 = 3, so location := 8.
12. 12
Algorithm 2: Linear search, take 2
procedure linear_search (x: integer; a1, a2, …, an: integers)
i := 1
while ( i ≤ n and x ≠ ai )
i := i + 1
if i ≤ n then location := i
else location := 0
Trace on the same list with x = 11: i runs from 1 to 11 without finding a match, so location := 0.
13. 13
Linear search running time
How long does this take?
If the list has n elements, the worst case is that it takes n “steps”
Here, a “step” means examining a single element of the list
14. 14
Algorithm 3: Binary search
Given a list, find a specific element in the list
List MUST be sorted!
Each time it iterates through, it cuts the list in half
procedure binary_search (x: integer; a1, a2, …, an: increasing integers)
i := 1 { i is left endpoint of search interval }
j := n { j is right endpoint of search interval }
while i < j
begin
m := ⌊(i+j)/2⌋ { m is the point in the middle, rounded down }
if x > am then i := m+1
else j := m
end
if x = ai then location := i
else location := 0
{location is the subscript of the term that equals x, or it is 0 if x is not found}
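A short Python sketch of this binary search (added for illustration, not from the slides). It mirrors the pseudocode: maintain a shrinking interval [i, j] and test for x only after the loop ends. The name binary_search mirrors the pseudocode.

def binary_search(x, a):
    """Return the 1-based position of x in the sorted list a, or 0 if x is not found."""
    i, j = 0, len(a) - 1           # i, j: endpoints of the search interval
    while i < j:
        m = (i + j) // 2           # midpoint, rounded down
        if x > a[m]:
            i = m + 1              # x must lie in the upper half
        else:
            j = m                  # x, if present, lies in the lower half
    return i + 1 if a and a[i] == x else 0

# Examples matching the traces on the next two slides
data = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
print(binary_search(14, data))   # prints 7
print(binary_search(15, data))   # prints 0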
15. 15
Algorithm 3: Binary search, take 1
procedure binary_search (x: integer; a1, a2, …, an: increasing integers)
i := 1
j := n
while i < j
begin
m := ⌊(i+j)/2⌋
if x > am then i := m+1
else j := m
end
if x = ai then location := i
else location := 0
Trace on the list 2 4 6 8 10 12 14 16 18 20 with x = 14: the interval [i, j] shrinks from [1, 10] to [6, 10], [6, 8], [6, 7], and finally [7, 7]; since a7 = 14, location := 7.
16. 16
Algorithm 3: Binary search, take 2
procedure binary_search (x: integer; a1, a2, …, an: increasing integers)
i := 1
j := n
while i < j
begin
m := ⌊(i+j)/2⌋
if x > am then i := m+1
else j := m
end
if x = ai then location := i
else location := 0
Trace on the same list with x = 15: the interval [i, j] shrinks from [1, 10] to [6, 10], [6, 8], and finally [8, 8]; since a8 = 16 ≠ 15, location := 0.
18. 18
Binary search running time
How long does this take (worst case)?
If the list has 8 elements
It takes 3 steps
If the list has 16 elements
It takes 4 steps
If the list has 64 elements
It takes 6 steps
If the list has n elements
It takes log2 n steps
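As a quick check (an added aside, not from the slides), these step counts are just log2 n rounded up:

import math

# Worst-case number of halving steps for a list of n elements: about log2(n)
for n in [8, 16, 64, 1024]:
    print(n, math.ceil(math.log2(n)))   # 8 -> 3, 16 -> 4, 64 -> 6, 1024 -> 10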
19. 19
Sorting algorithms
Given a list, put it into some order
Numerical, lexicographic, etc.
We will see two types
Bubble sort
Insertion sort
20. 20
Algorithm 4: Bubble sort
One of the simplest sorting algorithms
Also one of the least efficient
It takes successive elements and “bubbles” them up the list
procedure bubble_sort (a1, a2, …, an)
for i := 1 to n-1
for j := 1 to n-i
if aj > aj+1
then interchange aj and aj+1
{ a1, …, an are in increasing order }
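A short Python sketch of the bubble sort above (added for illustration, not from the slides); indices are shifted to 0-based, but the loop structure matches the pseudocode.

def bubble_sort(a):
    """Sort the list a in place into increasing order."""
    n = len(a)
    for i in range(n - 1):             # for i := 1 to n-1
        for j in range(n - 1 - i):     # for j := 1 to n-i
            if a[j] > a[j + 1]:        # if aj > aj+1 then interchange aj and aj+1
                a[j], a[j + 1] = a[j + 1], a[j]

data = [4, 1, 7, 0, 5, 2, 9, 3, 6, 8]
bubble_sort(data)
print(data)   # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]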
22. 23
Bubble sort running time
Bubble sort algorithm:
for i := 1 to n-1
for j := 1 to n-i
if aj > aj+1
then interchange aj and aj+1
Outer for loop does n-1 iterations
Inner for loop does
n-1 iterations the first time
n-2 iterations the second time
…
1 iteration the last time
Total: (n-1) + (n-2) + (n-3) + … + 2 + 1 = (n^2-n)/2
We can say that’s “about” n^2 time
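As a sanity check on this count (an added aside, not from the slides), we can sum the inner-loop iterations directly and compare against (n^2-n)/2:

def bubble_sort_comparisons(n):
    """Number of comparisons bubble sort makes on a list of n elements."""
    return sum(n - i for i in range(1, n))   # (n-1) + (n-2) + ... + 1

for n in [5, 10, 100]:
    print(n, bubble_sort_comparisons(n), (n * n - n) // 2)   # the two counts agree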
23. 25
Algorithm 5: Insertion sort
Another simple (and inefficient) algorithm
It starts with a list with one element, and inserts new elements into their proper place in the sorted part of the list
procedure insertion_sort (a1, a2, …, an)
for j := 2 to n
begin
i := 1
while aj > ai
i := i + 1
m := aj
for k := 0 to j-i-1
aj-k := aj-k-1
ai := m
end { a1, a2, …, an are sorted }
In words: take successive elements in the list; find where that element should be in the sorted portion of the list; move all elements in the sorted portion that are greater than the current element up by one; put the current element into its proper place in the sorted portion of the list
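A short Python sketch of this insertion sort (added for illustration, not from the slides). Rather than reproduce the subscript shuffling literally, it uses the same idea: find where each element belongs in the already-sorted prefix, shift the larger elements up by one, and insert it.

def insertion_sort(a):
    """Sort the list a in place into increasing order."""
    for j in range(1, len(a)):      # for j := 2 to n
        current = a[j]              # m := aj
        i = 0
        while a[i] < current:       # find where current belongs in the sorted prefix
            i += 1
        a[i + 1:j + 1] = a[i:j]     # shift larger elements of the prefix up by one
        a[i] = current              # ai := m

data = [4, 1, 7, 0, 5, 2, 9, 3, 6, 8]
insertion_sort(data)
print(data)   # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]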
24. 26
Insertion sort running time
for j := 2 to n begin
i := 1
while aj > ai
i := i + 1
m := aj
for k := 0 to j-i-1
aj-k := aj-k-1
ai := m
end { a1, a2, …, an are sorted }
Outer for loop runs n-1 times
In the inner for loop:
Worst case is when the while loop keeps i at 1, so the inner for loop runs as many times as possible
If i stays at 1, the inner for loop runs 1 time (k goes from 0 to 0) when j = 2, 2 times when j = 3, and up to n-1 times when j = n
Total: 1 + 2 + … + (n-1) = (n^2-n)/2
We can say that’s “about” n^2 time
25. 27
Comparison of running times
Searches
Linear: n steps
Binary: log2 n steps
Binary search is about as fast as you can get
Sorts
Bubble: n^2 steps
Insertion: n^2 steps
There are other, more efficient, sorting techniques
In principle, the fastest are heap sort, quick sort, and merge sort
These each take n * log2 n steps
In practice, quick sort is the fastest, followed by merge sort
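Since the slides stop at naming the faster sorts, here is a brief Python sketch of merge sort (an addition, not part of the original deck) as one example of an n * log2 n algorithm: split the list in half, sort each half recursively, then merge the two sorted halves in linear time.

def merge_sort(a):
    """Return a new list containing the elements of a in increasing order."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])       # sort each half recursively
    right = merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge the two sorted halves
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])          # append whatever remains of either half
    merged.extend(right[j:])
    return merged

print(merge_sort([4, 1, 7, 0, 5, 2, 9, 3, 6, 8]))   # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]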