This document discusses dynamic programming and provides examples of serial and parallel formulations for several problems. It introduces classifications for dynamic programming problems based on whether the formulation is serial/non-serial and monadic/polyadic. Examples of serial monadic problems include the shortest path problem and 0/1 knapsack problem. The longest common subsequence problem is an example of a non-serial monadic problem. Floyd's all-pairs shortest path is a serial polyadic problem, while the optimal matrix parenthesization problem is non-serial polyadic. Parallel formulations are provided for several of these examples.
2. Topic Overview
• Overview of Serial Dynamic Programming
• Serial Monadic DP Formulations
• Nonserial Monadic DP Formulations
• Serial Polyadic DP Formulations
• Nonserial Polyadic DP Formulations
3. Overview of Serial Dynamic Programming
• Dynamic programming (DP) is used to solve a wide
variety of discrete optimization problems such as
scheduling, string-editing, packaging, and inventory
management.
• Break problems into subproblems and combine their
solutions into solutions to larger problems.
• In contrast to divide-and-conquer, there may be
relationships across subproblems.
4. Dynamic Programming: Example
• Consider the problem of finding a shortest path between
a pair of vertices in an acyclic graph.
• An edge connecting node i to node j has cost c(i,j).
• The graph contains n nodes numbered 0,1,…, n-1, and
has an edge from node i to node j only if i < j. Node 0 is
the source and node n-1 is the destination.
• Let f(x) be the cost of the shortest path from node 0 to
node x.
6. Dynamic Programming
• The solution to a DP problem is typically expressed as a
minimum (or maximum) of possible alternate solutions.
• If r represents the cost of a solution composed of
subproblems x1, x2,…, xl, then r can be written as
r = g(f(x1), f(x2),…, f(xl)). Here, g is the composition function.
• If the optimal solution to each problem is determined by
composing optimal solutions to the subproblems and
selecting the minimum (or maximum), the formulation is
said to be a DP formulation.
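As an illustration of such a formulation, here is a minimal serial sketch (Python; the graph, its cost table, and the helper f are invented for illustration, not taken from the slides) that evaluates the shortest-path recurrence f(x) = min { f(j) + c(j,x) } top-down with memoization:

```python
from functools import lru_cache
import math

# Hypothetical acyclic graph: cost[(i, j)] is c(i,j); edges go from i to j only if i < j.
cost = {(0, 1): 2, (0, 2): 5, (1, 2): 1, (1, 3): 6, (2, 3): 2}
n = 4  # nodes 0..n-1; node 0 is the source, node n-1 the destination

@lru_cache(maxsize=None)
def f(x):
    """Cost of the shortest path from node 0 to node x."""
    if x == 0:
        return 0
    # Compose subproblem solutions f(j) with edge costs c(j,x),
    # then select the minimum over the alternatives -- a monadic formulation.
    candidates = [f(j) + c for (j, k), c in cost.items() if k == x]
    return min(candidates) if candidates else math.inf

print(f(n - 1))  # 5, via the path 0 -> 1 -> 2 -> 3
```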
8. Dynamic Programming
• The recursive DP equation is also called the functional
equation or optimization equation.
• In the equation for the shortest path problem the
composition function is f(j) + c(j,x). This contains a single
recursive term (f(j)). Such a formulation is called
monadic.
• If the RHS has multiple recursive terms, the DP
formulation is called polyadic.
9. Dynamic Programming
• The dependencies between subproblems can be
expressed as a graph.
• If the graph can be levelized (i.e., solutions to problems
at a level depend only on solutions to problems at the
previous level), the formulation is called serial, else it is
called non-serial.
• Based on these two criteria, we can classify DP
formulations into four categories - serial-monadic, serial-
polyadic, non-serial-monadic, non-serial-polyadic.
• This classification is useful since it identifies concurrency
and dependencies that guide parallel formulations.
10. Serial Monadic DP Formulations
• It is difficult to derive canonical parallel formulations for
the entire class of formulations.
• For this reason, we select two representative examples,
the shortest-path problem for a multistage graph and the
0/1 knapsack problem.
• We derive parallel formulations for these problems and
identify common principles guiding design within the
class.
11. Shortest-Path Problem
• Special class of shortest path problem where the graph
is a weighted multistage graph of r + 1 levels.
• Each level is assumed to have n nodes and every node
at level i is connected to every node at level i + 1.
• Levels zero and r contain only one node, the source and
destination nodes, respectively.
• The objective of this problem is to find the shortest path
from S to R.
12. Shortest-Path Problem
An example of a serial monadic DP formulation for finding
the shortest path in a graph whose nodes can be
organized into levels.
13. Shortest-Path Problem
• The ith node at level l in the graph is labeled v_i^l and the
cost of an edge connecting v_i^l to node v_j^{l+1} is labeled c_{i,j}^l.
• The cost of reaching the goal node R from any node v_i^l is
represented by C_i^l.
• If there are n nodes at level l, the vector
[C_0^l, C_1^l,…, C_{n-1}^l]^T is referred to as C^l. Note that
C^0 = [C_0^0].
• We have C_i^l = min { (c_{i,j}^l + C_j^{l+1}) | j is a node at level l + 1 }.
14. Shortest-Path Problem
• Since all nodes v_j^{r-1} have only one edge connecting them
to the goal node R at level r, the cost C_j^{r-1} is equal to c_{j,R}^{r-1}.
• We have: C_j^{r-1} = c_{j,R}^{r-1}, and for l < r - 1,
C_i^l = min { (c_{i,j}^l + C_j^{l+1}) | j is a node at level l + 1 }.
Notice that this problem is serial and monadic.
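A minimal serial sketch of this backward sweep (Python; the four-level graph below is an invented example, not the one in the figure): starting from the destination, each C^l is obtained from C^{l+1}.

```python
# cost[l][i][j] is c_{i,j}^l: the edge cost from node i at level l to node j at level l+1.
# Invented example: r + 1 = 4 levels, two nodes at each interior level.
cost = [
    [[3, 1]],          # level 0 (source S) to level 1
    [[4, 2], [1, 5]],  # level 1 to level 2
    [[6], [2]],        # level 2 to level 3 (destination R)
]

C = [0]  # cost of reaching R from level r (R itself)
for level in reversed(cost):  # backward sweep: C^l from C^{l+1}
    C = [min(c + C[j] for j, c in enumerate(row)) for row in level]

print(C[0])  # 7: S -> node 0 at level 1 -> node 1 at level 2 -> R
```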
16. Shortest-Path Problem
• We can express the solution to the problem as a
modified sequence of matrix-vector products.
• Replacing the addition operation by minimization and the
multiplication operation by addition, the preceding set of
equations becomes C^l = M_{l,l+1} × C^{l+1},
where C^l and C^{l+1} are n × 1 vectors representing the cost
of reaching the goal node from each node at levels l and
l + 1.
17. Shortest-Path Problem
• Matrix M_{l,l+1} is an n × n matrix in which entry (i, j) stores
the cost of the edge connecting node i at level l to node j
at level l + 1.
• The shortest path problem has been formulated as a
sequence of r matrix-vector products.
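The same sweep expressed as r modified matrix-vector products, a sketch reusing the invented graph from the previous example:

```python
def min_plus_matvec(M, v):
    """Modified matrix-vector product: result[i] = min_j (M[i][j] + v[j])."""
    return [min(m + x for m, x in zip(row, v)) for row in M]

# M_{l,l+1}[i][j] is the cost of the edge from node i at level l to node j
# at level l+1 (same invented 4-level graph as in the previous sketch).
matrices = [
    [[3, 1]],          # M_{0,1}
    [[4, 2], [1, 5]],  # M_{1,2}
    [[6], [2]],        # M_{2,3}
]

C = [0]  # C^r: cost of reaching R from R itself
for M in reversed(matrices):  # r products: C^l = M_{l,l+1} "x" C^{l+1}
    C = min_plus_matvec(M, C)
print(C[0])  # 7, matching the direct recurrence
```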
18. Parallel Shortest-Path
• We can parallelize this algorithm using the parallel
algorithms for the matrix-vector product.
• Θ(n) processing elements can compute each vector C^l in
time Θ(n) and solve the entire problem in time Θ(rn).
• In many instances of this problem, the matrix M may be
sparse. For such problems, it is highly desirable to use
sparse matrix techniques.
19. 0/1 Knapsack Problem
• We are given a knapsack of capacity c and a set of n objects
numbered 1, 2,…, n. Each object i has weight w_i and profit p_i.
• Let v = [v_1, v_2,…, v_n] be a solution vector in which v_i = 0 if object i is
not in the knapsack, and v_i = 1 if it is in the knapsack.
• The goal is to find a subset of objects to put into the knapsack so
that Σ_{i=1..n} w_i v_i ≤ c (that is, the objects fit into the knapsack) and
Σ_{i=1..n} p_i v_i is maximized (that is, the profit is maximized).
20. 0/1 Knapsack Problem
• The naive method is to consider all 2^n possible subsets
of the n objects and choose the one that fits into the
knapsack and maximizes the profit.
• Let F[i,x] be the maximum profit for a knapsack of
capacity x using only objects {1, 2,…, i}. The DP
formulation is:
F[i,x] = F[i-1,x] if w_i > x,
F[i,x] = max { F[i-1,x], F[i-1,x-w_i] + p_i } if w_i ≤ x,
with F[0,x] = 0 for all x.
21. 0/1 Knapsack Problem
• Construct a table F of size n x c in row-major order.
• Filling an entry in a row requires two entries from the
previous row: one from the same column and one from
the column offset by the weight of the object
corresponding to the row.
• Computing each entry takes constant time; the
sequential run time of this algorithm is Θ(nc).
• The formulation is serial-monadic.
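A minimal serial sketch of this table construction (Python; the three-object instance is invented for illustration):

```python
def knapsack(weights, profits, c):
    """Fill F row by row: F[i][x] = max profit with objects 1..i and capacity x."""
    n = len(weights)
    F = [[0] * (c + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        w, p = weights[i - 1], profits[i - 1]
        for x in range(c + 1):
            F[i][x] = F[i - 1][x]                          # same column, previous row
            if w <= x:                                     # column offset by the weight
                F[i][x] = max(F[i][x], F[i - 1][x - w] + p)
    return F[n][c]

# Invented instance: capacity 5, objects of weight 2, 3, 4 and profit 3, 4, 5.
print(knapsack([2, 3, 4], [3, 4, 5], 5))  # 7: take the first two objects
```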
22. 0/1 Knapsack Problem
Computing entries of table F for the 0/1 knapsack problem. The computation of
entry F[i,j] requires communication with processing elements containing
entries F[i-1,j] and F[i-1,j-wi].
23. 0/1 Knapsack Problem
• Using c processors in a PRAM, we can derive a simple
parallel algorithm that runs in O(n) time by partitioning
the columns across processors.
• In a distributed-memory machine, in the jth iteration, for
computing F[j,r] at processing element P_{r-1}, F[j-1,r] is
available locally but F[j-1,r-w_j] must be fetched.
• The communication operation is a circular shift and the
time is given by (t_s + t_w) log c. The time per iteration is
therefore t_c + (t_s + t_w) log c.
• Across all n iterations (rows), the parallel time is O(n log c).
Note that this is not cost-optimal: the processor-time product
Θ(cn log c) exceeds the Θ(nc) serial complexity.
24. 0/1 Knapsack Problem
• Using p processing elements, each processing element
computes c/p elements of the table in each iteration.
• The corresponding shift operation takes time (2t_s + t_w c/p),
since the data block may be partitioned across two
processors, but the total volume of data is c/p.
• The corresponding parallel time is n(t_c c/p + 2t_s + t_w c/p),
or O(nc/p) (which is cost-optimal).
• Note that there is an upper bound on the efficiency of
this formulation, made explicit below.
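That bound can be made explicit; a sketch of the algebra, assuming the serial time is n·c·t_c:

```latex
E = \frac{n c\, t_c}{p \cdot n\left(t_c \frac{c}{p} + 2t_s + t_w \frac{c}{p}\right)}
  = \frac{t_c}{t_c + t_w + 2 t_s\, p / c}
  < \frac{t_c}{t_c + t_w}
```

So even with negligible startup cost (t_s → 0 or c ≫ p), the efficiency cannot exceed t_c/(t_c + t_w).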
25. Nonserial Monadic DP Formulations: Longest-
Common-Subsequence
• Given a sequence A = <a1, a2,…, an>, a subsequence of
A can be formed by deleting some entries from A.
• Given two sequences A = <a1, a2,…, an> and B = <b1, b2,
…, bm>, find the longest sequence that is a subsequence
of both A and B.
• If A = <c,a,d,b,r,z> and B = <a,s,b,z>, the longest
common subsequence of A and B is <a,b,z>.
26. Longest-Common-Subsequence Problem
• Let F[i,j] denote the length of the longest common
subsequence of the first i elements of A and the first j
elements of B. The objective of the LCS problem is to
find F[n,m].
• We can write:
F[i,j] = 0 if i = 0 or j = 0;
F[i,j] = F[i-1,j-1] + 1 if i, j > 0 and a_i = b_j;
F[i,j] = max { F[i,j-1], F[i-1,j] } if i, j > 0 and a_i ≠ b_j.
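A minimal serial sketch of this table computation (Python), checked against the amino-acid example that appears on a later slide:

```python
def lcs_table(A, B):
    """F[i][j] = length of the longest common subsequence of A[:i] and B[:j]."""
    n, m = len(A), len(B)
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if A[i - 1] == B[j - 1]:
                F[i][j] = F[i - 1][j - 1] + 1        # extend the LCS by a_i = b_j
            else:
                F[i][j] = max(F[i - 1][j], F[i][j - 1])
    return F

# Example from slide 29: the LCS of HEAGAWGHEE and PAWHEAE is AWHEE (length 5).
print(lcs_table("HEAGAWGHEE", "PAWHEAE")[-1][-1])  # 5
```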
27. Longest-Common-Subsequence Problem
• The algorithm computes the two-dimensional F table in a
row- or column-major fashion. The complexity is Θ(nm).
• Treating nodes along a diagonal as belonging to one
level, each node depends on two subproblems at the
preceding level and one subproblem two levels prior.
• This DP formulation is nonserial monadic.
28. Longest-Common-Subsequence Problem
(a) Computing entries of table for the longest-common-
subsequence problem. Computation proceeds along the dotted
diagonal lines. (b) Mapping elements of the table to processing
elements.
29. Longest-Common-Subsequence: Example
• Consider the LCS of two amino-acid sequences H E A G A W G H E E and P A W H E A E. For the interested reader,
the names of the corresponding amino-acids are A: Alanine, E: Glutamic acid, G: Glycine, H: Histidine, P: Proline,
and W: Tryptophan.
• The F table for computing the LCS of the sequences. The LCS is A W H E E.
30. Parallel Longest-Common-Subsequence
• Table entries are computed in a diagonal sweep from the
top-left to the bottom-right corner.
• Using n processors in a PRAM, each entry in a diagonal
can be computed in constant time.
• For two sequences of length n, there are 2n-1 diagonals.
• The parallel run time is Θ(n) and the algorithm is cost-
optimal.
31. Parallel Longest-Common-Subsequence
• Consider a (logical) linear array of processors. Processing
element P_i is responsible for the (i+1)th column of the table.
• To compute F[i,j], processing element P_{j-1} may need either
F[i-1,j-1] or F[i,j-1] from the processing element to its left.
This communication takes time t_s + t_w.
• The computation takes constant time (t_c).
• We have a parallel run time of (2n - 1)(t_c + t_s + t_w), since
entries are computed over 2n - 1 diagonal steps.
• Note that this formulation is cost-optimal; however, its
efficiency is upper-bounded by 0.5!
• Can you think of how to fix this?
32. Serial Polyadic DP Formulation: Floyd's All-
Pairs Shortest Path
• Given weighted graph G(V,E), Floyd's algorithm determines
the cost d_{i,j} of the shortest path between each pair of nodes in V.
• Let d_{i,j}^k be the minimum cost of a path from node i to node j,
using only the nodes v_1, v_2,…, v_k as intermediates.
• We have d_{i,j}^0 = c(i,j) and, for k ≥ 1:
d_{i,j}^k = min { d_{i,j}^{k-1}, d_{i,k}^{k-1} + d_{k,j}^{k-1} }.
• Each iteration requires time Θ(n²) and the overall run time of
the sequential algorithm is Θ(n³).
33. Serial Polyadic DP Formulation: Floyd's All-
Pairs Shortest Path
• A PRAM formulation of this algorithm uses n² processors
in a logical 2D mesh. Processor P_{i,j} computes the value
of d_{i,j}^k for k = 1, 2,…, n in constant time.
• The parallel runtime is Θ(n) and it is cost-optimal.
• The algorithm can easily be adapted to practical
architectures, as discussed in our treatment of Graph
Algorithms.
34. Nonserial Polyadic DP Formulation: Optimal Matrix-
Parenthesization Problem
• When multiplying a sequence of matrices, the order of
multiplication significantly impacts operation count.
• Let C[i,j] be the optimal cost of multiplying the matrices
A_i,…, A_j.
• The chain of matrices can be expressed as a product of
two smaller chains, A_i, A_{i+1},…, A_k and A_{k+1},…, A_j.
• The chain A_i, A_{i+1},…, A_k results in a matrix of dimensions
r_{i-1} × r_k, and the chain A_{k+1},…, A_j results in a matrix of
dimensions r_k × r_j.
• The cost of multiplying these two matrices is r_{i-1} r_k r_j.
36. Optimal Matrix-Parenthesization Problem
A nonserial polyadic DP formulation for finding an optimal matrix
parenthesization for a chain of four matrices. A square node represents
the optimal cost of multiplying a matrix chain. A circle node represents a
possible parenthesization.
37. Optimal Matrix-Parenthesization Problem
• The goal of finding C[1,n] is accomplished in a bottom-up
fashion.
• Visualize this by thinking of filling in the C table
diagonally. Entries in diagonal l correspond to the cost
of multiplying matrix chains of length l + 1.
• The value of C[i,j] is computed as min { C[i,k] + C[k+1,j] +
r_{i-1} r_k r_j }, where k can take values from i to j - 1.
• Computing C[i,j] requires that we evaluate (j - i) terms and
select their minimum.
• The computation of each term takes time t_c, and the
computation of C[i,j] takes time (j - i)t_c. Each entry in
diagonal l can be computed in time l·t_c.
38. Optimal Matrix-Parenthesization Problem
• The algorithm computes (n - 1) chains of length two. This
takes time (n - 1)t_c; computing (n - 2) chains of length three
takes time (n - 2)t_c. In the final step, the algorithm
computes one chain of length n in time (n - 1)t_c.
• It follows that the serial time is Θ(n³).
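A minimal serial sketch of the diagonal-by-diagonal computation (Python; the four-matrix chain of dimensions 10×20, 20×5, 5×30, 30×8 is invented):

```python
def matrix_chain_cost(r):
    """C[i][j] = optimal cost of multiplying A_i ... A_j, where A_i is r[i-1] x r[i].
    The table is filled diagonal by diagonal: chains of length 2, 3, ..., n."""
    n = len(r) - 1
    C = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            C[i][j] = min(C[i][k] + C[k + 1][j] + r[i - 1] * r[k] * r[j]
                          for k in range(i, j))   # (j - i) terms, take the minimum
    return C[1][n]

print(matrix_chain_cost([10, 20, 5, 30, 8]))  # 2600: parenthesization ((A1 A2)(A3 A4))
```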
40. Parallel Optimal Matrix-Parenthesization
Problem
• Consider a logical ring of processors. In step l, each processor computes a
single element belonging to the lth diagonal.
• On computing the assigned value of the element in table C, each processor
sends its value to all other processors using an all-to-all broadcast.
• The next value can then be computed locally.
• The total time required to compute the entries along diagonal l is
l·t_c + t_s log n + t_w(n - 1).
• Summing over the n - 1 diagonals, the corresponding parallel time is:
T_P = Σ_{l=1}^{n-1} (l·t_c + t_s log n + t_w(n - 1))
= (n(n - 1)/2)·t_c + (n - 1)·t_s log n + (n - 1)²·t_w.
41. Parallel Optimal Matrix-Parenthesization
Problem
• When using p (< n) processors, each processor stores n/p nodes.
• The time taken for all-to-all broadcast of n/p words is
t_s log p + t_w (n/p)(p - 1) ≈ t_s log p + t_w n,
and the time to compute the n/p entries of the table in the lth
diagonal is l·t_c·n/p.
• This formulation can be improved to use up to n(n + 1)/2 processors using
pipelining.
42. Discussion of Parallel Dynamic Programming
Algorithms
• By representing computation as a graph, we identify
three sources of parallelism: parallelism within nodes,
parallelism across nodes at a level, and pipelining nodes
across multiple levels. The first two are available in serial
formulations and the third one in non-serial formulations.
• Data locality is critical for performance. Different DP
formulations, by the very nature of the problem instance,
have different degrees of locality.