Greedy with Task Scheduling Algorithm.ppt
Ruchika Sinha
A greedy algorithm is any algorithm that follows the problem-solving heuristic of making the locally optimal choice at each stage. In many problems, a greedy strategy does not produce an optimal solution, but a greedy heuristic can yield locally optimal solutions that approximate a globally optimal solution in a reasonable amount of time.
2. The Greedy Method
Outline and Reading
The Greedy Method Technique (§5.1)
Fractional Knapsack Problem (§5.1.1)
Task Scheduling (§5.1.2)
Minimum Spanning Trees (§7.3) [future lecture]
3. The Greedy Method
The Greedy Method
Technique
The greedy method is a general algorithm design paradigm, built on the following elements:
• configurations: the different choices, collections, or values to find
• objective function: a score assigned to configurations, which we want to either maximize or minimize
It works best when applied to problems with the greedy-choice property:
• a globally-optimal solution can always be found by a series of local improvements from a starting configuration.
4. The Greedy Method
Making Change
Problem: A dollar amount to reach and a collection of coin amounts to use to get there.
Configuration: A dollar amount yet to return to a customer, plus the coins already returned.
Objective function: Minimize the number of coins returned.
Greedy solution: Always return the largest coin you can.
Example 1: Coins are valued $.32, $.08, $.01
• Has the greedy-choice property, since no amount over $.32 can be made with a minimum number of coins by omitting a $.32 coin (and similarly for amounts over $.08 but under $.32).
Example 2: Coins are valued $.30, $.20, $.05, $.01
• Does not have the greedy-choice property, since $.40 is best made with two $.20's, but the greedy solution will pick three coins (which ones?)
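The greedy change-maker described above can be sketched in a few lines (the function name is ours; coin values in cents are the slide's two examples):

```python
def greedy_change(coins, amount):
    """Greedy rule: always return the largest coin that still fits."""
    result = []
    for c in sorted(coins, reverse=True):  # try coins from largest to smallest
        while amount >= c:
            amount -= c
            result.append(c)
    return result  # assumes a 1-cent coin exists, so amount always reaches 0

# Example 1: the greedy-choice property holds, so greedy is optimal.
print(greedy_change([32, 8, 1], 40))      # [32, 8]

# Example 2: the property fails; greedy picks 3 coins, but [20, 20] uses 2.
print(greedy_change([30, 20, 5, 1], 40))  # [30, 5, 5]
```

This answers the slide's question: on $.40 the greedy method returns $.30, $.05, $.05.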
5. The Greedy Method
The Fractional Knapsack Problem
Given: A set S of n items, with each item i having:
• b_i - a positive benefit
• w_i - a positive weight
Goal: Choose items with maximum total benefit but with weight at most W.
If we are allowed to take fractional amounts, then this is the fractional knapsack problem. In this case, we let x_i denote the amount we take of item i.
Objective: maximize Σ_{i∈S} b_i (x_i / w_i)
Constraint: Σ_{i∈S} x_i ≤ W
6. The Greedy Method
Example
Given: A set S of n items, with each item i having:
• b_i - a positive benefit
• w_i - a positive weight
Goal: Choose items with maximum total benefit but with weight at most W (here the "knapsack" holds 10 ml).

Items:      1      2      3      4      5
Weight:   4 ml   8 ml   2 ml   6 ml   1 ml
Benefit:   $12    $32    $40    $30    $50
Value:       3      4     20      5     50
($ per ml)

Solution:
• 1 ml of item 5
• 2 ml of item 3
• 6 ml of item 4
• 1 ml of item 2
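The solution above can be checked with a little arithmetic; this sketch recomputes each item's value (benefit per ml) and the total benefit of the amounts listed:

```python
# Items 1..5 from the example, as item -> (benefit in $, weight in ml).
items = {1: (12, 4), 2: (32, 8), 3: (40, 2), 4: (30, 6), 5: (50, 1)}

# Value = benefit / weight ($ per ml): the key the greedy choice ranks by.
values = {i: b / w for i, (b, w) in items.items()}
# item 5 -> 50, item 3 -> 20, item 4 -> 5, item 2 -> 4, item 1 -> 3

# The solution shown on the slide: ml taken of each item.
taken = {5: 1, 3: 2, 4: 6, 2: 1}
total_weight = sum(taken.values())                            # 10 ml
total_benefit = sum(values[i] * x for i, x in taken.items())  # $124
print(total_weight, total_benefit)
```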
7. The Greedy Method
The Fractional Knapsack Algorithm
Greedy choice: keep taking the item with the highest value v_i = b_i / w_i (benefit-to-weight ratio), since Σ_{i∈S} b_i (x_i / w_i) = Σ_{i∈S} (b_i / w_i) x_i, so each unit of weight contributes the most benefit when spent on the highest-value item.
Run time: O(n log n). Why? (Keeping the items in a heap keyed on v_i, or sorting them by value, dominates the cost.)
Correctness: Suppose there is a better solution. Then there is an item i with higher value than a chosen item j, i.e., v_i > v_j, but x_i < w_i and x_j > 0. If we replace min{w_i − x_i, x_j} units of j with the same amount of i, we get a strictly better solution. Thus, there is no better solution than the greedy one.
Algorithm fractionalKnapsack(S, W)
  Input: set S of items w/ benefit b_i and weight w_i; max. weight W
  Output: amount x_i of each item i to maximize benefit w/ weight at most W
  for each item i in S
    x_i ← 0
    v_i ← b_i / w_i      {value}
  w ← 0                  {total weight}
  while w < W and S is not empty
    remove item i w/ highest v_i
    x_i ← min{w_i, W − w}
    w ← w + x_i
Note: Σ_{i∈S} b_i (x_i / w_i) = Σ_{i∈S} (b_i / w_i) x_i
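The pseudocode translates directly to a runnable sketch (names are ours), here using a sort by value instead of a heap, which keeps the O(n log n) bound; it is run on the 10-ml example from the previous slide:

```python
def fractional_knapsack(items, W):
    """items: list of (benefit, weight) pairs.
    Returns (total benefit, list of amounts x_i taken of each item)."""
    # Greedy choice: consider items by decreasing value v_i = b_i / w_i.
    order = sorted(range(len(items)),
                   key=lambda i: items[i][0] / items[i][1], reverse=True)
    x = [0.0] * len(items)
    w = 0.0                       # total weight taken so far
    for i in order:
        if w >= W:                # knapsack full
            break
        b_i, w_i = items[i]
        x[i] = min(w_i, W - w)    # take as much of item i as still fits
        w += x[i]
    benefit = sum((b / wt) * x[i] for i, (b, wt) in enumerate(items))
    return benefit, x

# Items 1..5 from the example: (benefit $, weight ml); knapsack W = 10 ml.
items = [(12, 4), (32, 8), (40, 2), (30, 6), (50, 1)]
benefit, amounts = fractional_knapsack(items, 10)
print(benefit)   # 124.0, matching the slide's solution
```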
8. The Greedy Method
Task Scheduling
Given: a set T of n tasks, each having:
• a start time, s_i
• a finish time, f_i (where s_i < f_i)
Goal: Perform all the tasks using a minimum number of "machines."
[Figure: seven tasks scheduled without conflicts on Machines 1-3, on a timeline running from 1 to 9]
9. The Greedy Method
Task Scheduling Algorithm
Greedy choice: consider tasks by their start time and use as few machines as possible with this order.
Run time: O(n log n). Why?
Correctness: Suppose there is a better schedule that uses only k−1 machines, while the algorithm uses k. Let i be the first task scheduled on machine k. Task i must conflict with k−1 other tasks, one already running on each of machines 1 through k−1 when i starts (otherwise the algorithm would have reused one of them). So k tasks are all in progress at the same moment, which means there is no non-conflicting schedule using k−1 machines.
Algorithm taskSchedule(T)
  Input: set T of tasks w/ start time s_i and finish time f_i
  Output: non-conflicting schedule with minimum number of machines
  m ← 0      {no. of machines}
  while T is not empty
    remove task i w/ smallest s_i
    if there's a machine j free for i then
      schedule i on machine j
    else
      m ← m + 1
      schedule i on machine m
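A runnable sketch of this loop (names are ours) keeps a min-heap of machine finish times, so the check "is there a machine j free for i" costs O(log n) and the whole algorithm runs in O(n log n):

```python
import heapq

def task_schedule(tasks):
    """tasks: list of (start, finish) pairs.
    Returns the minimum number of machines needed."""
    free = []      # min-heap of finish times, one entry per machine in use
    machines = 0
    for s, f in sorted(tasks):          # greedy: consider tasks by start time
        if free and free[0] <= s:       # some machine finishes by time s
            heapq.heapreplace(free, f)  # reuse it for this task
        else:
            machines += 1               # no free machine: open a new one
            heapq.heappush(free, f)
    return machines

# The example from the next slide: 7 tasks, answer is 3 machines.
tasks = [(1, 4), (1, 3), (2, 5), (3, 7), (4, 7), (6, 9), (7, 8)]
print(task_schedule(tasks))   # 3
```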
10. The Greedy Method
Example
Given: a set T of n tasks, each having:
• a start time, s_i
• a finish time, f_i (where s_i < f_i)
Tasks: [1,4], [1,3], [2,5], [3,7], [4,7], [6,9], [7,8] (ordered by start)
Goal: Perform all tasks on a minimum number of machines
[Figure: the seven tasks above scheduled without conflicts on Machines 1-3, timeline 1 to 9]