Deep learning is a type of machine learning that uses neural networks inspired by the human brain. It has been successfully applied to problems like image recognition, speech recognition, and natural language processing. Deep learning requires large datasets, clear goals, computing power, and neural network architectures. Popular deep learning models include convolutional neural networks and recurrent neural networks. Researchers like Geoffrey Hinton and companies like Google have advanced the field through innovations that have won image recognition challenges. Deep learning will continue solving harder artificial intelligence problems by learning from massive amounts of data.
Dynamic Itemset Counting (DIC) is an algorithm for efficiently mining frequent itemsets from transactional data that improves upon the Apriori algorithm. DIC starts counting an itemset as soon as it is suspected to be frequent, rather than waiting until the end of each pass as Apriori does. DIC uses different markings, such as solid/dashed boxes and circles, to track the counting status of itemsets. It can generate frequent itemsets and association rules using conviction in fewer passes over the data than Apriori.
Fundamentals of the Analysis of Algorithm Efficiency by Saranya Natarajan
This document discusses analyzing the efficiency of algorithms. It introduces the framework for analyzing algorithms in terms of time and space complexity. Time complexity indicates how fast an algorithm runs, while space complexity measures the memory required. The document outlines steps for analyzing algorithms, including measuring input size, determining the basic operations, calculating frequency counts of operations, and expressing efficiency as an order of growth in Big O notation. Worst-case, best-case, and average-case time complexities are also discussed.
The document provides an overview of the Internet of Things (IoT). It defines IoT as the network of physical objects embedded with sensors that collect and exchange data. It discusses how IoT works by connecting devices through sensors, processors and communication hardware. Examples of applications include building automation, manufacturing, healthcare, transportation and more. The document also outlines some current technological challenges of IoT like scalability, standardization and security/privacy issues. It concludes with a discussion of the future prospects and criticisms of expanding IoT connectivity.
The document discusses the knapsack problem and greedy algorithms. It defines the knapsack problem as an optimization problem where, given constraints and an objective function, the goal is to find the feasible solution that maximizes or minimizes the objective. It describes the knapsack problem as having two versions: 0-1, where items are indivisible, and fractional, where items can be divided. The fractional knapsack problem can be solved using a greedy approach by sorting items by value-to-weight ratio and filling the knapsack accordingly until full, as in the sketch below.
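A minimal Python sketch of that greedy strategy (the function name and sample items are illustrative, not taken from the summarized document):

```python
def fractional_knapsack(items, capacity):
    """items: list of (value, weight) pairs; returns the max total value."""
    # Consider items in order of decreasing value per unit weight.
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)      # whole item, or a fraction of it
        total += value * (take / weight)
        capacity -= take
    return total

print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], capacity=50))  # 240.0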
The document discusses Python's four main collection data types: lists, tuples, sets, and dictionaries. It provides details on lists, including that they are ordered and changeable collections that allow duplicate members. Lists can be indexed, sliced, modified using methods like append() and insert(), and have various built-in functions that can be used on them. Examples are provided to demonstrate list indexing, slicing, changing elements, adding elements, removing elements, and built-in list methods.
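For instance, a short Python session demonstrating the list operations mentioned above (the values are arbitrary):

```python
fruits = ["apple", "banana", "cherry", "date"]

print(fruits[0])      # indexing -> 'apple'
print(fruits[1:3])    # slicing  -> ['banana', 'cherry']

fruits[1] = "blueberry"          # changing an element
fruits.append("elderberry")      # adding at the end
fruits.insert(1, "apricot")      # adding at a position
fruits.remove("cherry")          # removing by value

print(len(fruits), sorted(fruits))  # built-in functions work on lists
```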
This document provides information about an e-commerce presentation given by a group of students. It introduces the group members and defines e-commerce as buying and selling goods over the internet. It describes the features of the group's e-commerce website system, including browsing products, shopping cart, checkout, and payment gateway. It outlines the technologies used to build the system like HTML, CSS, PHP and MySQL. It also discusses the advantages and disadvantages of e-commerce, and future plans to improve the system by adding more user-friendly interfaces and social media login.
This document provides an introduction to asymptotic analysis of algorithms. It discusses analyzing algorithms based on how their running time increases with the size of the input problem. The key points are:
- Algorithms are compared based on their asymptotic running time as the input size increases, which is more useful than actual running times on a specific computer.
- The main types of analysis are worst-case, best-case, and average-case running times.
- Asymptotic notations like Big-O, Omega, and Theta are used to classify algorithms based on their rate of growth as the input increases.
- Common orders of growth include constant, logarithmic, linear, quadratic, and exponential time.
Little-oh (o) notation expresses an upper bound on the growth rate of an algorithm's running time that is not asymptotically tight; for this reason, little-oh is also called a loose upper bound. We use little-oh notation to denote an upper bound that is asymptotically not tight, as the numeric sketch below illustrates.
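For intuition, f(n) = o(g(n)) exactly when f(n)/g(n) tends to 0 as n grows. A small numeric sketch (the example functions are my own):

```python
# 30n + 8 = o(n^2): the ratio to n^2 tends to 0, so n^2 is a loose bound.
# 30n + 8 is NOT o(n): the ratio to n tends to 30, so the linear bound is tight.

for n in [10, 1_000, 100_000]:
    f = 30 * n + 8
    print(n, f / n**2, f / n)
```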
Decision Tree Induction / Decision Tree Algorithm with Example | Data Science by MaryamRehman6
This Decision Tree Algorithm in Machine Learning presentation will help you understand the basics of decision trees, including what machine learning is, what a decision tree is, the advantages and disadvantages of decision trees, how the decision tree algorithm works with solved examples, and, at the end, a decision tree use case/demo in Python for loan payment. This decision tree tutorial is suitable for both beginners and experts who want to learn machine learning algorithms.
This document discusses randomized algorithms. It begins by listing different categories of algorithms, including randomized algorithms. Randomized algorithms introduce randomness into the algorithm to avoid worst-case behavior and find efficient approximate solutions. Quicksort is presented as an example randomized algorithm, where a random pivot choice avoids the quadratic worst case and gives an expected O(n log n) runtime. The document also discusses the randomized closest pair algorithm and a randomized algorithm for primality testing. Both introduce randomness to improve efficiency compared to deterministic algorithms for the same problems.
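A compact randomized quicksort sketch in Python (my own illustration, not the summarized document's code):

```python
import random

def quicksort(a):
    """Randomized quicksort: a random pivot avoids the adversarial
    worst case, giving expected O(n log n) time on any input."""
    if len(a) <= 1:
        return a
    pivot = random.choice(a)
    less    = [x for x in a if x < pivot]
    equal   = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([3, 6, 1, 8, 2, 9, 4]))  # [1, 2, 3, 4, 6, 8, 9]
```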
Digital signatures provide authentication of digital messages or documents. There are three main algorithms involved: hashing, signature generation, and signature verification. Common digital signature schemes include ElGamal, Schnorr, and the Digital Signature Standard (DSS). The DSS is based on ElGamal and Schnorr schemes. It uses smaller signatures than ElGamal by employing two moduli, one smaller than the other. Digital signatures are widely used to provide authentication in protocols like IPSec, SSL/TLS, and S/MIME.
This document discusses bi-connected components in graphs. It defines an articulation point as a vertex in a connected graph whose removal would disconnect the graph. A bi-connected component is a maximal subgraph that contains no articulation points. The document presents algorithms for identifying articulation points and bi-connected components in a graph using depth-first search (DFS). It introduces the concepts of tree edges, back edges, forward edges and cross edges in a DFS tree and explains how to use these edge types to determine if a vertex is an articulation point based on the minimum discovery time of its descendant vertices.
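A sketch of the DFS-based articulation point computation described above, comparing each vertex's discovery time with the minimum discovery time ("low" value) reachable from its descendants (the graph and function names are illustrative):

```python
import sys

def articulation_points(graph):
    """graph: adjacency dict {vertex: [neighbors]}. Returns the set of
    articulation points found via DFS discovery times and low values."""
    sys.setrecursionlimit(10_000)
    disc, low, points = {}, {}, set()
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]; timer[0] += 1
        children = 0
        for v in graph[u]:
            if v not in disc:                  # tree edge
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                # non-root u is an articulation point if some child's
                # subtree cannot reach above u via a back edge
                if parent is not None and low[v] >= disc[u]:
                    points.add(u)
            elif v != parent:                  # back edge
                low[u] = min(low[u], disc[v])
        # the root is an articulation point iff it has 2+ DFS children
        if parent is None and children > 1:
            points.add(u)

    for s in graph:
        if s not in disc:
            dfs(s, None)
    return points

g = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}
print(articulation_points(g))  # {2, 3}
```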
This document discusses the traveling salesman problem and a dynamic programming approach to solving it. It was presented by Maharaj Dey, a 6th semester CSE student with university roll number 11500117099 for the paper CS-681 (SEMINAR). The document concludes with a thank you.
Growth of Functions
CMSC 56 | Discrete Mathematical Structure for Computer Science
October 6, 2018
Instructor: Allyn Joy D. Calcaben
College of Arts & Sciences
University of the Philippines Visayas
The document discusses the branch and bound algorithm for solving the 15-puzzle problem. It describes the key components of branch and bound, including live nodes, E-nodes, and dead nodes. It also defines the cost function used to evaluate nodes as the sum of the path length and the number of misplaced tiles. The algorithm generates all possible child nodes from the current node and prunes the search tree by comparing node costs to avoid exploring subtrees without solutions.
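A minimal sketch of that cost function in Python (the goal layout and sample board are assumptions for illustration):

```python
# cost(node) = g(node) + h(node): g is the path length from the start,
# h counts misplaced tiles (0 denotes the blank and is not counted).

GOAL = (1, 2, 3, 4,
        5, 6, 7, 8,
        9, 10, 11, 12,
        13, 14, 15, 0)

def cost(board, path_length):
    misplaced = sum(1 for tile, want in zip(board, GOAL)
                    if tile != 0 and tile != want)
    return path_length + misplaced

start = (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 15, 14, 0)
print(cost(start, 0))  # two misplaced tiles -> cost 2
```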
Elliptic Curve Cryptography was presented by Ajithkumar Vyasarao. He began with an introduction to ECC, noting its advantages over RSA like smaller key sizes providing equal security. He described how ECC works using elliptic curves over real numbers and finite fields. He demonstrated point addition and scalar multiplication on curves. ECC can be used for applications like smart cards and mobile devices. For key exchange, Alice and Bob can agree on a starting point and generate secret keys by multiplying a private value with the shared point. ECC provides security through the difficulty of solving the elliptic curve discrete logarithm problem.
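A toy Python sketch of that key exchange idea on the small textbook curve y² = x³ + 2x + 2 (mod 17) with base point G = (5, 1); the curve, the private scalars, and all function names here are illustrative choices, and real deployments use far larger curves:

```python
P, A = 17, 2  # curve y^2 = x^3 + A*x + 2 over GF(P)

def add(p, q):
    """Elliptic-curve point addition; None is the point at infinity."""
    if p is None: return q
    if q is None: return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p == q:
        m = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (m * m - x1 - x2) % P
    return (x3, (m * (x1 - x3) - y1) % P)

def mul(k, p):
    """Scalar multiplication by double-and-add."""
    r = None
    while k:
        if k & 1: r = add(r, p)
        p = add(p, p); k >>= 1
    return r

G = (5, 1)
a_priv, b_priv = 3, 9                    # Alice's and Bob's secret scalars
A_pub, B_pub = mul(a_priv, G), mul(b_priv, G)
print(mul(a_priv, B_pub) == mul(b_priv, A_pub))  # True: shared secret agrees
```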
Greedy algorithms make locally optimal choices at each step to try to find a global optimum. They choose the best option available at each state. For the travelling salesman problem, the greedy approach is to always choose the nearest unvisited city from the current city to build the route. While greedy algorithms are simple to implement, they do not always find the true optimal solution, especially for large complex problems, as they only consider the best choice at each step rather than the overall route.
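A nearest-neighbor sketch in Python (the distance matrix is made up for illustration):

```python
def greedy_tsp(dist, start=0):
    """dist: symmetric matrix of pairwise distances. Repeatedly hops to
    the nearest unvisited city; simple, but not guaranteed optimal."""
    n = len(dist)
    route, visited = [start], {start}
    while len(route) < n:
        here = route[-1]
        nxt = min((c for c in range(n) if c not in visited),
                  key=lambda c: dist[here][c])
        route.append(nxt); visited.add(nxt)
    return route + [start]   # return to the starting city

dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(greedy_tsp(dist))  # [0, 1, 3, 2, 0]
```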
Knowledge representation and reasoning (KR) is the field of artificial intelligence (AI) dedicated to representing information about the world in a form that a computer system can utilize to solve complex tasks such as diagnosing a medical condition or having a dialog in a natural language.
This document discusses the job sequencing problem, where the goal is to schedule jobs to be completed by their deadlines to maximize total profit. It provides an example problem with 4 jobs, their profits, deadlines, and the optimal solution of scheduling jobs J1 and J2 to earn a total profit of 140.
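A sketch of the usual greedy schedule for this problem; the individual profits and deadlines below are invented, chosen only so that the outcome (J1 and J2 scheduled for a total profit of 140) matches the summary's figures:

```python
def job_sequencing(jobs):
    """jobs: list of (name, profit, deadline). Greedy: consider jobs in
    decreasing profit order, placing each in the latest free unit-time
    slot on or before its deadline."""
    max_deadline = max(d for _, _, d in jobs)
    slots = [None] * (max_deadline + 1)        # slot t covers time (t-1, t]
    for name, profit, deadline in sorted(jobs, key=lambda j: -j[1]):
        for t in range(min(deadline, max_deadline), 0, -1):
            if slots[t] is None:
                slots[t] = (name, profit)
                break
    scheduled = [s for s in slots if s]
    return scheduled, sum(p for _, p in scheduled)

jobs = [("J1", 100, 2), ("J2", 40, 1), ("J3", 20, 2), ("J4", 30, 1)]
print(job_sequencing(jobs))  # ([('J2', 40), ('J1', 100)], 140)
```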
- Recurrences describe functions in terms of their values on smaller inputs and arise when algorithms contain recursive calls to themselves.
- To analyze the running time of recursive algorithms, the recurrence must be solved to find an explicit formula or bound the expression in terms of n.
- Examples of recurrences and their solutions are given, including binary search (O(log n)), dividing the input in half at each step (O(n)), and dividing the input in half but examining all items (O(n)).
- Methods for solving recurrences include iteration, substitution, and using recursion trees to "guess" the solution; a numeric sketch of the iteration idea follows.
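The sketch below unrolls two of the recurrences mentioned above and compares the exact counts against the claimed bounds (the function names and base cases are my own):

```python
import math

def T_binary_search(n):          # T(n) = T(n/2) + 1, T(1) = 1 -> O(log n)
    return 1 if n <= 1 else T_binary_search(n // 2) + 1

def T_halve_but_touch_all(n):    # T(n) = T(n/2) + n, T(1) = 1 -> O(n)
    return 1 if n <= 1 else T_halve_but_touch_all(n // 2) + n

for n in [16, 1024]:
    print(n, T_binary_search(n), math.log2(n),        # ~ log n
          T_halve_but_touch_all(n), 2 * n)            # bounded by 2n
```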
This document discusses asymptotic analysis and big-O notation for analyzing algorithms. It provides examples of analyzing runtimes of algorithms as functions of input size n and comparing their growth rates. The key points covered are:
- Algorithms are analyzed by expressing their runtime as a function of input size n, and comparing these functions asymptotically (for large n) using notations like O(), Ω(), and Θ().
- Common orders of growth include O(1), O(log n), O(n), O(n log n), O(n^2), O(2^n), O(n!).
- Big-O notation describes an upper bound, Ω() a lower bound, and Θ() a tight bound.
This document discusses algorithms and their analysis. It begins by defining an algorithm and analyzing its time and space complexity. It then discusses different asymptotic notations used to describe an algorithm's runtime such as Big-O, Omega, and Theta notations. Examples are provided to illustrate how to determine the tight asymptotic bound of functions. The document also covers algorithm design techniques like divide-and-conquer and analyzes merge sort as an example. It concludes by defining recurrences used to describe algorithms and provides an example recurrence for merge sort.
This document discusses algorithm analysis and complexity. It defines key terms like algorithm, asymptotic complexity, Big-O notation, and time complexity. It provides examples of analyzing simple algorithms like summing array elements. The running time is expressed as a function of input size n. Common complexities like constant, linear, quadratic, and exponential time are introduced. Nested loops and sequences of statements are analyzed. The goal of analysis is to classify algorithms into complexity classes to understand how input size affects runtime.
How to calculate the time complexity of an algorithm by Sajid Marwat
This document discusses algorithm analysis and complexity. It defines key terms like asymptotic complexity, Big-O notation, and time complexity. It provides examples of analyzing simple algorithms like a sum function to determine their time complexity. Common analyses include looking at loops, nested loops, and sequences of statements. The goal is to classify algorithms according to their complexity, which is important for large inputs and machine-independent. Algorithms are classified based on worst, average, and best case analyses.
The document discusses asymptotic analysis and big-O, Ω, and Θ notation for analyzing how algorithms scale with increasing input size. It defines asymptotic analysis as depicting the running time of an algorithm as the input size increases without bound. It provides examples of using asymptotic notation to classify functions based on their growth rates and compares the orders of common functions like n, n², n³, and log n.
The document discusses data structures and algorithms. It defines key concepts like algorithms, programs, data structures, and asymptotic analysis. It explains how to analyze algorithms to determine their efficiency, including analyzing best, worst, and average cases. Common notations for describing asymptotic running time like Big-O, Big-Omega, and Big-Theta are introduced. The document provides examples of analyzing sorting algorithms like insertion sort and calculating running times. It also discusses techniques for proving an algorithm's correctness like assertions and loop invariants.
The document discusses fundamentals of analyzing algorithm efficiency, including:
- Measuring an algorithm's time efficiency based on input size and number of basic operations.
- Using asymptotic notations like O, Ω, Θ to classify algorithms by order of growth.
- Analyzing worst-case, best-case, and average-case efficiencies.
- Setting up recurrence relations to analyze recursive algorithms like merge sort.
The document discusses algorithms and algorithm analysis. It provides examples to illustrate key concepts in algorithm analysis including worst-case, average-case, and best-case running times. The document also introduces asymptotic notation such as Big-O, Big-Omega, and Big-Theta to analyze the growth rates of algorithms. Common growth rates like constant, logarithmic, linear, quadratic, and exponential functions are discussed. Rules for analyzing loops and consecutive statements are provided. Finally, algorithms for two problems - selection and maximum subsequence sum - are analyzed to demonstrate algorithm analysis techniques.
This document discusses analyzing the efficiency and complexity of algorithms. It begins by explaining that running time depends on input size and nature, and is generally measured by the number of steps or operations. Different examples are provided to demonstrate analyzing loops and recursive functions to derive asymptotic complexity bounds. Key points covered include using Big-O notation to classify algorithms according to worst-case running time, analyzing nested loops, sequences of statements, and conditional statements. The document emphasizes that asymptotic complexity focuses on higher-order terms as input size increases.
This document provides an introduction to algorithm analysis. It discusses why algorithm analysis is important, as an inefficient program may have poor running time even if it is functionally correct. It introduces different algorithm analysis approaches like empirical, simulational, and analytical. Key concepts in algorithm analysis like worst-case, average-case, and best-case running times are explained. Different asymptotic notations like Big-O, Big-Omega, and Big-Theta that are used to describe the limiting behavior of functions are also introduced along with examples. Common algorithms like linear search and binary search are analyzed to demonstrate how to determine the time complexity of algorithms.
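For example, a standard binary search in Python with its complexity noted (my own sketch, not the document's code):

```python
def binary_search(a, target):
    """Binary search on a sorted list: halves the search range each
    step, so the worst case is O(log n) comparisons (linear search on
    the same list would be O(n))."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        if a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1    # not found

print(binary_search([2, 5, 8, 12, 16, 23, 38], 16))  # 4
```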
The document discusses algorithm analysis and asymptotic notation. It defines algorithm analysis as comparing algorithms based on running time and other factors as problem size increases. Asymptotic notation such as Big-O, Big-Omega, and Big-Theta are introduced to classify algorithms based on how their running times grow relative to input size. Common time complexities like constant, logarithmic, linear, quadratic, and exponential are also covered. The properties and uses of asymptotic notation for equations and inequalities are explained.
This document discusses asymptotic analysis and big-O notation for analyzing the time complexity of algorithms. It begins by defining key concepts like growth rate, asymptotic notations such as O(n), Ω(n) and Θ(n). It then provides examples of analyzing the time efficiency of different algorithms like finding the maximum element in an array and computing prefix averages. The document explains how to determine the asymptotic complexity by counting the total number of operations and expressing it using big-O notation. It also discusses properties of big-O notation like rules for dropping constant factors and lower order terms.
This document discusses using data mining classifiers and attribute reduction techniques to predict chronic kidney disease (CKD) more accurately and efficiently. It first provides background on CKD and the need for early detection. It then discusses data mining, classification algorithms, attribute selection filters and wrappers. The document analyzes several studies that predicted CKD using techniques like decision trees, SVM and Naive Bayes. It describes the dataset used from the UCI repository and evaluation metrics. The results section compares J48, Decision Tree and IBK classifiers with and without attribute reduction using CfsSubsetEval, ClassifierSubsetEval and WrapperSubsetEval. Attribute reduction improved accuracy, especially for IBK which achieved 100% accuracy with 72% fewer attributes.
The document discusses graph traversal algorithms depth-first search (DFS) and breadth-first search (BFS). DFS uses a stack and visits nodes by traversing as deep as possible before backtracking. BFS uses a queue and visits all nodes at each level from the starting node before moving to the next level. Examples are given applying DFS and BFS to a sample graph. Applications of DFS and BFS are also listed such as computing distances, checking for cycles/bipartiteness, and topological sorting.
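Minimal queue-based BFS and stack-based DFS sketches (the sample graph is arbitrary):

```python
from collections import deque

def bfs(graph, start):
    """Queue-based BFS: visits vertices level by level from start."""
    order, seen, q = [], {start}, deque([start])
    while q:
        u = q.popleft()
        order.append(u)
        for v in graph[u]:
            if v not in seen:
                seen.add(v); q.append(v)
    return order

def dfs(graph, start):
    """Stack-based DFS: goes as deep as possible before backtracking."""
    order, seen, stack = [], set(), [start]
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u); order.append(u)
            stack.extend(reversed(graph[u]))  # pop neighbors left-to-right
    return order

g = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(bfs(g, 0), dfs(g, 0))  # [0, 1, 2, 3] [0, 1, 3, 2]
```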
AVL Tree in Data Structures: a height-balanced tree in which every node has a balance factor of 1, -1, or 0. The rotations used to rebalance this tree are RR, LL, RL, and LR.
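A sketch of the single rotations that fix the LL and RR cases (the node class and keys are illustrative):

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def rotate_right(y):
    """LL case: a left-heavy subtree rooted at y is fixed by one right
    rotation; y's left child x becomes the new subtree root."""
    x = y.left
    y.left = x.right
    x.right = y
    return x

def rotate_left(x):
    """RR case: the mirror-image single left rotation."""
    y = x.right
    x.right = y.left
    y.left = x
    return y

# LR and RL combine the two: e.g. LR = rotate_left on the left child,
# then rotate_right on the root.
root = Node(30, left=Node(20, left=Node(10)))   # left-left heavy
root = rotate_right(root)
print(root.key, root.left.key, root.right.key)  # 20 10 30
```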
This document contains a quiz on operating system concepts with 25 multiple choice questions. Some key topics covered include processes, process states, process scheduling, interprocess communication, memory management, and operating system functions like file management and resource management. The questions test understanding of fundamental OS concepts like processes, threads, scheduling queues, and the role of the operating system.
The document discusses deadlocks in computer systems. It defines deadlock, presents examples, and describes the four conditions required for deadlock to occur. Several methods for handling deadlocks are discussed, including prevention, avoidance, detection, and recovery. Prevention methods aim to ensure deadlocks never occur, while avoidance dynamically keeps the system out of unsafe states. Detection identifies when the system is in a deadlocked state.
The document discusses various software engineering methodologies including the waterfall model, iterative model, Rational Unified Process (RUP), and agile methodologies like extreme programming (XP) and Scrum. It provides detailed descriptions of each methodology's phases and workflows. The waterfall model divides the life cycle into sequential phases while iterative models allow revisiting previous phases. RUP includes inception, elaboration, construction, and transition phases. Agile prioritizes customer satisfaction, working software, and flexibility over documentation and processes.
An enterprise is a large organization, and an enterprise application is software that helps an organization solve business problems. Enterprise applications can be categorized by their customer visibility (upstream, downstream, business enabler), the industry and business functions they support, how they process data (OLTP, OLAP), whether they are custom-built or commercial, and if they are host-centric or distributed. They must enhance efficiency, ensure security, handle large data volumes, and be easily maintained. Challenges for enterprise applications include automating business processes, integrating applications, maintaining security, and providing rich user experiences.
2. Analysis of Algorithms
An algorithm is a finite set of precise instructions for performing a computation or for solving a problem.
What is the goal of analysis of algorithms?
To compare algorithms mainly in terms of running time, but also in terms of other factors (e.g., memory requirements, programmer's effort, etc.)
What do we mean by running time analysis?
Determine how running time increases as the size of the problem increases.
3. Input Size
Input size (number of elements in the input):
size of an array
# of elements in a matrix
# of bits in the binary representation of the input
vertices and edges in a graph
4. Types of Analysis
Worst case
Provides an upper bound on running time
An absolute guarantee that the algorithm would not run longer, no matter what the inputs are
Best case
Provides a lower bound on running time
Input is the one for which the algorithm runs the fastest
Average case
Provides a prediction about the running time
Assumes that the input is random
Lower Bound ≤ Running Time ≤ Upper Bound
5. Types of Analysis: Example
Example: Linear Search Complexity
Best Case: Item found at the beginning: one comparison
Worst Case: Item found at the end: n comparisons
Average Case: Item may be found at index 0, or 1, or 2, ..., or n - 1
Average number of comparisons: (1 + 2 + ... + n) / n = (n+1) / 2
Worst, average, and best complexities of common sorting algorithms:
Method          Worst Case   Average Case   Best Case
Selection Sort  n²           n²             n²
Insertion Sort  n²           n²             n
Merge Sort      n log n      n log n        n log n
Quick Sort      n²           n log n        n log n
6. How do we compare algorithms?
We need to define a number of objective measures.
(1) Compare execution times?
Not good: times are specific to a particular computer!
(2) Count the number of statements executed?
Not good: the number of statements varies with the programming language as well as the style of the individual programmer.
7. Ideal Solution
Express running time as a function of the input size n (i.e., f(n)).
Compare different functions corresponding to running times.
Such an analysis is independent of machine time, programming style, etc.
8. Example
Associate a "cost" with each statement.
Find the "total cost" by finding the total number of times each statement is executed.

Algorithm 1                Cost
arr[0] = 0;                c1
arr[1] = 0;                c1
arr[2] = 0;                c1
...
arr[N-1] = 0;              c1
Total: c1 + c1 + ... + c1 = c1 x N

Algorithm 2                Cost
for(i=0; i<N; i++)         c2
    arr[i] = 0;            c1
Total: (N+1) x c2 + N x c1 = (c2 + c1) x N + c2
9. Another Example
Algorithm 3                Cost
sum = 0;                   c1
for(i=0; i<N; i++)         c2
    for(j=0; j<N; j++)     c2
        sum += arr[i][j];  c3
Total: c1 + c2 x (N+1) + c2 x N x (N+1) + c3 x N²
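A quick cross-check of the cost expression above in Python (my own sketch): count how often the inner-loop body runs for a few values of N.

```python
def count_inner(N):
    count = 0
    for i in range(N):
        for j in range(N):
            count += 1        # stands in for sum += arr[i][j]
    return count

for N in [10, 100, 1000]:
    print(N, count_inner(N))  # exactly N*N: the c3 x N^2 term dominates
```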
10. Asymptotic Analysis
To compare two algorithms with running times f(n) and g(n), we need a rough measure that characterizes how fast each function grows.
Hint: use rate of growth.
Compare functions in the limit, that is, asymptotically! (i.e., for large values of n)
11. Rate of Growth
Consider the example of buying elephants and goldfish:
Cost: cost_of_elephants + cost_of_goldfish
Cost ~ cost_of_elephants (approximation)
The low order terms in a function are relatively insignificant for large n:
n⁴ + 100n² + 10n + 50 ~ n⁴
i.e., we say that n⁴ + 100n² + 10n + 50 and n⁴ have the same rate of growth.
17. Big-O Notation
We say fA(n) = 30n+8 is order n, or O(n). It is, at most, roughly proportional to n.
fB(n) = n²+1 is order n², or O(n²). It is, at most, roughly proportional to n².
In general, any O(n²) function is faster-growing than any O(n) function.
18. Visualizing Orders of Growth
On a graph, as you go to the right, a faster growing function eventually becomes larger...
[Plot: value of function vs. increasing n; fB(n) = n²+1 eventually exceeds fA(n) = 30n+8]
19. More Examples ...
n⁴ + 100n² + 10n + 50 is O(n⁴)
10n³ + 2n² is O(n³)
n³ - n² is O(n³)
Constants:
10 is O(1)
1273 is O(1)
20. Back to Our Example
Algorithm 1                Cost
arr[0] = 0;                c1
arr[1] = 0;                c1
arr[2] = 0;                c1
...
arr[N-1] = 0;              c1
Total: c1 + c1 + ... + c1 = c1 x N

Algorithm 2                Cost
for(i=0; i<N; i++)         c2
    arr[i] = 0;            c1
Total: (N+1) x c2 + N x c1 = (c2 + c1) x N + c2

Both algorithms are of the same order: O(N)
21. Example (cont'd)
Algorithm 3                Cost
sum = 0;                   c1
for(i=0; i<N; i++)         c2
    for(j=0; j<N; j++)     c2
        sum += arr[i][j];  c3
Total: c1 + c2 x (N+1) + c2 x N x (N+1) + c3 x N² = O(N²)
24. Examples
2n² = O(n³): 2n² ≤ cn³ ⇒ 2 ≤ cn ⇒ c = 1 and n0 = 2
n² = O(n²): n² ≤ cn² ⇒ c ≥ 1 ⇒ c = 1 and n0 = 1
1000n² + 1000n = O(n²): 1000n² + 1000n ≤ 1000n² + n² = 1001n² ⇒ c = 1001 and n0 = 1000
n = O(n²): n ≤ cn² ⇒ cn ≥ 1 ⇒ c = 1 and n0 = 1
25. More Examples
Show that 30n+8 is O(n).
Show ∃c, n0: 30n+8 ≤ cn, ∀n > n0.
Let c = 31, n0 = 8. Assume n > n0 = 8. Then cn = 31n = 30n + n > 30n+8, so 30n+8 < cn.
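Numerically spot-checking the proof above in Python (my own sketch): with c = 31 and n0 = 8, the strict inequality 30n+8 < 31n should hold for every n > 8 and fail at n = 8, where equality holds.

```python
c, n0 = 31, 8
print(all(30 * n + 8 < c * n for n in range(n0 + 1, 10_000)))  # True for n > 8
print(30 * n0 + 8 < c * n0)                                    # False: 248 = 248 at n = 8
```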
26. Big-O example, graphically
Note 30n+8 isn't less than n anywhere (n > 0). It isn't even less than 31n everywhere. But it is less than 31n everywhere to the right of n = 8, so 30n+8 ∈ O(n).
[Plot: value of function vs. increasing n; 30n+8 lies below cn = 31n for n > n0 = 8]
27. No Uniqueness
There is no unique set of values for n0 and c in proving the asymptotic bounds.
Prove that 100n + 5 = O(n²):
100n + 5 ≤ 100n + n = 101n ≤ 101n² for all n ≥ 5 ⇒ n0 = 5 and c = 101 is a solution
100n + 5 ≤ 100n + 5n = 105n ≤ 105n² for all n ≥ 1 ⇒ n0 = 1 and c = 105 is also a solution
Must find SOME constants c and n0 that satisfy the asymptotic notation relation.
31. Examples
n²/2 - n/2 = Θ(n²):
½n² - ½n ≤ ½n² for all n ≥ 0 ⇒ c2 = ½
½n² - ½n ≥ ½n² - ½n · ½n = ¼n² for all n ≥ 2 ⇒ c1 = ¼
n ≠ Θ(n²): c1·n² ≤ n ≤ c2·n² only holds for n ≤ 1/c1
33. Relations Between Different Sets
Subset relations between order-of-growth sets (functions R→R).
[Venn diagram: Θ(f) is the intersection of O(f) and Ω(f), and f itself lies in Θ(f)]
34. Logarithms and properties
In algorithm analysis we often use the notation "log n" without specifying the base.
Binary logarithm: lg n = log₂ n
Natural logarithm: ln n = log_e n
lg^k n = (lg n)^k
lg lg n = lg(lg n)
Properties:
log x^y = y log x
log(xy) = log x + log y
log(x/y) = log x - log y
log_b x = log_a x / log_a b
a^(log_b x) = x^(log_b a)
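The identities above can be sanity-checked numerically (a small sketch of mine; the values of x, y, a, b are arbitrary):

```python
import math

x, y, a, b = 7.0, 3.0, 2.0, 10.0

print(math.isclose(math.log(x * y), math.log(x) + math.log(y)))        # log(xy)
print(math.isclose(math.log(x / y), math.log(x) - math.log(y)))        # log(x/y)
print(math.isclose(math.log(x ** y), y * math.log(x)))                 # log x^y
print(math.isclose(math.log(x, b), math.log(x, a) / math.log(b, a)))   # base change
print(math.isclose(a ** math.log(x, b), x ** math.log(a, b)))          # a^(log_b x) = x^(log_b a)
```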
35. More Examples
For each of the following pairs of functions, either f(n) is O(g(n)), f(n) is Ω(g(n)), or f(n) = Θ(g(n)). Determine which relationship is correct.
f(n) = log n²; g(n) = log n + 5        → f(n) = Θ(g(n))
f(n) = n; g(n) = log n²                → f(n) = Ω(g(n))
f(n) = log log n; g(n) = log n         → f(n) = O(g(n))
f(n) = n; g(n) = log² n                → f(n) = Ω(g(n))
f(n) = n log n + n; g(n) = log n       → f(n) = Ω(g(n))
f(n) = 10; g(n) = log 10               → f(n) = Θ(g(n))
f(n) = 2^n; g(n) = 10n²                → f(n) = Ω(g(n))
f(n) = 2^n; g(n) = 3^n                 → f(n) = O(g(n))
36. Properties
Theorem: f(n) = Θ(g(n)) ⇔ f(n) = O(g(n)) and f(n) = Ω(g(n))
Transitivity: f(n) = Θ(g(n)) and g(n) = Θ(h(n)) ⇒ f(n) = Θ(h(n)); same for O and Ω
Reflexivity: f(n) = Θ(f(n)); same for O and Ω
Symmetry: f(n) = Θ(g(n)) if and only if g(n) = Θ(f(n))
Transpose symmetry: f(n) = O(g(n)) if and only if g(n) = Ω(f(n))