Introduction to Operating Systems:
Memory Management of Early Systems
Nana Kofi Annan
Learning Objectives
The basic functionality of the three memory allocation schemes presented in this section: fixed partitions,
dynamic partitions, relocatable dynamic partitions
Best-fit memory allocation as well as first-fit memory allocation schemes
How a memory list keeps track of available memory
The importance of deallocation of memory in a dynamic partition system
The importance of the bounds register in memory allocation schemes
The role of compaction and how it improves memory allocation efficiency
Memory Management
• The management of main memory is critical. In fact,
from a historical perspective, the performance of
the entire system has been directly dependent on
two things:
• How much memory is available
• How it is optimized while jobs are being
processed.
Memory Management
• These early memory
management schemes are
seldom used by today’s operating
systems, but they are important
to study because each one
introduced fundamental
concepts that helped memory
management evolve.
Single-User Contiguous Scheme
• The first memory allocation scheme worked like this: Each
program to be processed was loaded in its entirety into memory
and allocated as much contiguous space in memory as it needed,
as shown in Figure 2.2. The keywords here are entirety and
contiguous. If the program was too large and didn’t fit the
available memory space, it couldn’t be executed. And, although
early computers were physically large, they had very little memory.
Single-User Contiguous Scheme
• A single-user scheme supports one user
on one computer running one job at a
time. Sharing isn’t possible.
Single-User Contiguous Scheme
• This demonstrates a significant limiting factor of all computers—they
have only a finite amount of memory and if a program doesn’t fit, then
either the size of the main memory must be increased or the program
must be modified. It’s usually modified by making it smaller or by using
methods that allow program segments (partitions made to the program)
to be overlaid. (To overlay is to transfer segments of a program from
secondary storage into main memory for execution, so that two or more
segments take turns occupying the same memory locations.)
Single-User Contiguous Scheme
• Single-user systems in a
nonnetworked environment work the
same way. Each user is given access
to all available main memory for
each job, and jobs are processed
sequentially, one after the other. To
allocate memory, the operating
system uses a simple algorithm
(step-by-step procedure to solve a
problem):
Algorithm to Load a Job in a Single-User System
INTRODUCTION TO OPERATING SYSTEM_LESSON_2_SHARE.pdf
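The loading algorithm itself appears as a figure in the original slides and isn't reproduced here. A minimal Python sketch of the loop it describes follows; the memory size, the base address of the user area, and the one-location-at-a-time read are illustrative assumptions, not values from the text.

```python
# Sketch of the single-user contiguous loading algorithm: one base register
# plus an accumulator that tracks the job's size as it is read into memory.
# MEMORY_SIZE and BASE_ADDRESS are hypothetical values for illustration.

MEMORY_SIZE = 1000   # assumed total main-memory size
BASE_ADDRESS = 100   # assumed start of the user area, past the resident OS

def load_job(job_size, memory_size=MEMORY_SIZE, base=BASE_ADDRESS):
    """Return the load address if the entire job fits contiguously, else None."""
    accumulator = 0
    while accumulator < job_size:
        if base + accumulator >= memory_size:
            return None          # counter exceeded memory size: reject the job
        accumulator += 1         # "read" one more location of the job
    return base                  # the whole job fit; it starts at the base
```

A job larger than the remaining memory is simply rejected, mirroring the text: if the program doesn't fit in its entirety, it can't be executed.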
Single-User Contiguous Scheme
• Notice that the amount of work done by the operating system’s Memory
Manager is minimal, the code to perform the functions is straightforward,
and the logic is quite simple. Only two hardware items are needed: a
register to store the base address and an accumulator to keep track of
the size of the program as it’s being read into memory. Once the program
is entirely loaded into memory, it remains there until execution is
complete, either through normal termination or by intervention of the
operating system.
Single-User Contiguous Scheme
• One major problem with this type of memory allocation scheme is that it
doesn’t support multiprogramming or networking; it can handle only one
job at a time. When these single-user configurations were first made
available commercially in the late 1940s and early 1950s, they were used
in research institutions but proved unacceptable for the business
community—it wasn’t cost effective to spend almost $200,000 for a piece
of equipment that could be used by only one person at a time. Therefore,
in the late 1950s and early 1960s a new scheme was needed to manage
memory, which used partitions to take advantage of the computer
system’s resources by overlapping independent operations.
Fixed Partitions
• The first attempt to allow for
multiprogramming used fixed partitions (also
called static partitions) within the main
memory—one partition for each job.
Because the size of each partition was
designated when the system was powered
on, each partition could only be reconfigured
when the computer system was shut down,
reconfigured, and restarted. Thus, once the
system was in operation the partition sizes
remained static.
Fixed Partitions
• Each partition could be used by only
one program. The size of each partition
was set in advance by the computer
operator so sizes couldn't be changed
without restarting the system.
Fixed Partitions
• A critical factor was introduced with this scheme: protection of
the job’s memory space. Once a partition was assigned to a job,
no other job could be allowed to enter its boundaries, either
accidentally or intentionally. This problem of partition intrusion
didn’t exist in single-user contiguous allocation schemes because
only one job was present in main memory at any given time so
only the portion of the operating system residing in main memory
had to be protected. However, for the fixed partition allocation
schemes, protection was mandatory for each partition present in
main memory. Typically this was the joint responsibility of the
hardware of the computer and the operating system.
Fixed Partitions
• The algorithm used to store jobs in memory
requires a few more steps than the one used
for a single-user system because the size of
the job must be matched with the size of the
partition to make sure it fits completely.
Then, when a block of sufficient size is
located, the status of the partition must be
checked to see if it’s available.
Algorithm to Load a Job in a Fixed Partition
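The extra steps just described, matching the job against each partition's size and then checking its status, can be sketched as follows. The partition table values are hypothetical, loosely modeled on the size/address/status layout of Table 2.1.

```python
# Sketch of the fixed-partition loading algorithm: find the first partition
# that is both large enough and free. The table entries are illustrative.

def load_job_fixed(job_size, partitions):
    """partitions: list of dicts with 'size', 'address', 'status' ('free'/'busy').
    Returns the address of the partition the job is loaded into, or None."""
    for p in partitions:
        if job_size > p['size']:     # the job must fit the partition completely
            continue
        if p['status'] == 'busy':    # large enough, but is it available?
            continue
        p['status'] = 'busy'         # allocate: mark the partition busy
        return p['address']
    return None                      # no free partition is large enough: job waits

table = [
    {'size': 100, 'address': 200, 'status': 'busy'},
    {'size': 25,  'address': 300, 'status': 'free'},
    {'size': 25,  'address': 325, 'status': 'free'},
    {'size': 50,  'address': 350, 'status': 'free'},
]
```

Note that a 30K job here skips two free partitions that are too small and lands in the 50K one, leaving 20K of internal fragmentation inside it.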
Fixed Partitions
• This partition scheme is more flexible than the single-user scheme
because it allows several programs to be in memory at the same
time. However, it still requires that the entire program be stored
contiguously and in memory from the beginning to the end of its
execution. In order to allocate memory spaces to jobs, the
operating system’s Memory Manager must keep a table, such as
Table 2.1, which shows each memory partition size, its address,
its access restrictions, and its current status (free or busy) for the
system illustrated in Figure 2.3. (In Table 2.1 K stands for kilobyte,
which is 1,024 bytes.)
Fixed Partitions
• The fixed partition scheme works well if all of the jobs run on the system
are of the same size or if the sizes are known ahead of time and don’t vary
between reconfigurations. Ideally, that would require accurate advance
knowledge of all the jobs to be run on the system in the coming hours,
days, or weeks. However, unless the operator can accurately predict the
future, the sizes of the partitions are determined in an arbitrary fashion
and they might be too small or too large for the jobs coming in.
Fixed Partitions
• There are significant consequences if the partition sizes are too small;
larger jobs will be rejected if they’re too big to fit into the largest partitions
or will wait if the large partitions are busy. As a result, large jobs may have
a longer turnaround time as they wait for free partitions of sufficient size or
may never run.
Fixed Partitions
• On the other hand, if the partition
sizes are too big, memory is
wasted. If a job does not occupy
the entire partition, the unused
memory in the partition will remain
idle; it can’t be given to another job
because each partition is
allocated to only one job at a time.
It’s an indivisible unit. Figure 2.3
demonstrates one such
circumstance.
Fixed Partitions
• This phenomenon of partial usage of fixed partitions and the coinciding
creation of unused spaces within the partition is called internal
fragmentation, and is a major drawback to the fixed partition memory
allocation scheme.
Fixed Partitions
• There are two types of fragmentation: internal and external. The type depends on the location of the wasted space.
Dynamic Partitions
• With dynamic partitions, available memory is
still kept in contiguous blocks but jobs are given
only as much memory as they request when
they are loaded for processing. Although this is a
significant improvement over fixed partitions
because memory isn’t wasted within the
partition, it doesn’t entirely eliminate the
problem.
Dynamic Partitions
• As shown in Figure 2.4, a dynamic partition scheme fully utilizes
memory when the first jobs are loaded. But as new jobs enter the
system that are not the same size as those that just vacated
memory, they are fit into the available spaces on a priority basis.
Figure 2.4 demonstrates first-come, first-served priority.
Therefore, the subsequent allocation of memory creates
fragments of free memory between blocks of allocated memory.
This problem is called external fragmentation and, like internal
fragmentation, lets memory go to waste.
Dynamic Partitions
• In the last snapshot, (e) in Figure 2.4, there are three free
partitions of 5K, 10K, and 20K—35K in all—enough to
accommodate Job 8, which only requires 30K. However
they are not contiguous and, because the jobs are loaded
in a contiguous manner, this scheme forces Job 8 to wait.
Dynamic Partitions
• Before we go to the next allocation scheme, let’s examine
how the operating system keeps track of the free sections
of memory.
Best-Fit Versus First-Fit Allocation
• For both fixed and dynamic memory allocation schemes, the
operating system must keep lists of each memory location noting
which are free and which are busy. Then as new jobs come into
the system, the free partitions must be allocated.
Best-Fit Versus First-Fit Allocation
• These partitions may be allocated on the basis of first-fit memory
allocation (first partition fitting the requirements) or best-fit
memory allocation (least wasted space, the smallest partition
fitting the requirements). For both schemes, the Memory Manager
organizes the memory lists of the free and used partitions
(free/busy) either by size or by location. The best-fit allocation
method keeps the free/busy lists in order by size, smallest to
largest. The first-fit method keeps the free/busy lists organized by
memory locations, low-order memory to high-order memory. Each
has advantages depending on the needs of the particular
allocation scheme— best-fit usually makes the best use of
memory space; first-fit is faster in making the allocation.
Best-Fit Versus First-Fit Allocation
• The first-fit algorithm assumes that the Memory Manager keeps
two lists, one for free memory blocks and one for busy memory
blocks. The operation consists of a simple loop that compares the
size of each job to the size of each memory block until a block is
found that’s large enough to fit the job. Then the job is stored into
that block of memory, and the Memory Manager moves out of the
loop to fetch the next job from the entry queue. If the entire list is
searched in vain, then the job is placed into a waiting queue. The
Memory Manager then fetches the next job and repeats the
process.
Best-Fit Versus First-Fit Allocation
• In Table 2.2, a request for a block of 200 spaces has just been
given to the Memory Manager. (The spaces may be words, bytes,
or any other unit the system handles.) Using the first-fit algorithm
and starting from the top of the list, the Memory Manager locates
the first block of memory large enough to accommodate the job,
which is at location 6785. The job is then loaded, starting at
location 6785 and occupying the next 200 spaces. The next step is
to adjust the free list to indicate that the block of free memory now
starts at location 6985 (not 6785 as before) and that it contains
only 400 spaces (not 600 as before).
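The first-fit loop just described can be sketched in Python. The free list is kept in order by memory location; the block at 6785 with 600 spaces matches the walkthrough above, while the other entries are illustrative.

```python
# Sketch of first-fit allocation over a free list ordered by location.
# Each entry is a [start, size] pair; sizes other than 6785/600 are made up.

def first_fit(free_list, job_size):
    """Allocate the first block large enough for the job.
    Returns the job's load address, or None if no block fits."""
    for block in free_list:
        start, size = block
        if size >= job_size:
            block[0] = start + job_size   # free block now begins past the job
            block[1] = size - job_size    # ...and is smaller by the job's size
            return start                  # (a 0-size leftover would become a
    return None                           #  null entry in a fuller version)

free = [[4075, 105], [5225, 5], [6785, 600], [7560, 20], [7600, 205]]
```

Running it with a 200-space request reproduces the adjustment described above: the block that began at 6785 with 600 spaces now begins at 6985 with 400.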
Best-Fit Versus First-Fit Allocation
• The algorithm for best-fit is slightly more
complex because the goal is to find the
smallest memory block into which the job will
fit:
Best-Fit Versus First-Fit Allocation
• One of the problems with the best-fit algorithm is that the entire
table must be searched before the allocation can be made
because the memory blocks are physically stored in sequence
according to their location in memory (and not by memory block
sizes as shown in Figure 2.6). The system could execute an
algorithm to continuously rearrange the list in ascending order by
memory block size, but that would add more overhead and might
not be an efficient use of processing time in the long run.
Best-Fit Versus First-Fit Allocation
• The best-fit algorithm is illustrated showing only the list of free
memory blocks. Table 2.3 shows the free list before and after the
best-fit block has been allocated to the same request presented
in Table 2.2.
Best-Fit Versus First-Fit Allocation
• In Table 2.3, a request for a block of 200 spaces has just been
given to the Memory Manager. Using the best-fit algorithm and
starting from the top of the list, the Memory Manager searches the
entire list and locates a block of memory starting at location 7600,
which is the smallest block that’s large enough to accommodate
the job. The choice of this block minimizes the wasted space (only
5 spaces are wasted, which is less than in the four alternative
blocks). The job is then stored, starting at location 7600 and
occupying the next 200 spaces. Now the free list must be adjusted
to show that the block of free memory starts at location 7800 (not
7600 as before) and that it contains only 5 spaces (not 205 as
before).
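The best-fit variant can be sketched the same way; the difference is that the entire list is scanned before allocating. The free list reuses the same hypothetical entries as the first-fit sketch, with the 7600/205 block matching the walkthrough above.

```python
# Sketch of best-fit allocation: scan the whole free list and pick the
# smallest block that still fits the job. Entries are [start, size] pairs.

def best_fit(free_list, job_size):
    """Return the job's load address from the smallest adequate block, or None."""
    best = None
    for block in free_list:
        # Keep the smallest block seen so far that can hold the job.
        if block[1] >= job_size and (best is None or block[1] < best[1]):
            best = block
    if best is None:
        return None                 # searched the whole list in vain: job waits
    start = best[0]
    best[0] += job_size             # shrink the chosen block, as in first-fit
    best[1] -= job_size
    return start

free = [[4075, 105], [5225, 5], [6785, 600], [7560, 20], [7600, 205]]
```

The same 200-space request now lands at 7600, leaving only the 5-space sliver at 7800 rather than carving 200 out of the 600-space block.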
Best-Fit Versus First-Fit Allocation
• Which is best—first-fit or best-fit? For many years there was no
way to answer such a general question because performance
depends on the job mix. Note that while the best-fit resulted in a
better fit, it also resulted (and does so in the general case) in a
smaller free space (5 spaces), which is known as a sliver.
Best-Fit Versus First-Fit Allocation
• In recent years, access times have become so fast that the
scheme that saves the more valuable resource, memory space,
may be the best in some cases. Research continues to focus on
finding the optimum allocation scheme. This includes optimum
page size.
Deallocation
• Deallocation is the release of memory space.
Deallocation
• For a fixed partition system, the process is quite straightforward.
When the job is completed, the Memory Manager resets the
status of the memory block where the job was stored to “free.” Any
code—for example, binary values with 0 indicating free and 1
indicating busy—may be used so the mechanical task of
deallocating a block of memory is relatively simple.
Deallocation
• A dynamic partition system uses a more complex algorithm
because the algorithm tries to combine free areas of memory
whenever possible. Therefore, the system must be prepared for
three alternative situations:
• Case 1. When the block to be deallocated is adjacent to another free
block
• Case 2. When the block to be deallocated is between two free blocks
• Case 3. When the block to be deallocated is isolated from other free
blocks
Deallocation
• The deallocation algorithm must be prepared for all three
eventualities with a set of nested conditionals. The following
algorithm is based on the fact that memory locations are listed
using a lowest-to-highest address scheme. The algorithm would
have to be modified to accommodate a different organization of
memory locations. In this algorithm, job_size is the amount of
memory being released by the terminating job, and
beginning_address is the location of the first instruction for the
job.
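The three cases can be sketched as one routine over a free list of [start, size] pairs kept in lowest-to-highest address order, as the text assumes. Removing a merged entry from the Python list stands in for the null-entry mechanism described later; the values in the usage below echo the Case 1 and Case 2 walkthroughs.

```python
# Sketch of the dynamic-partition deallocation algorithm: release the job's
# block and merge it with any adjacent free block(s).

def deallocate(free_list, beginning_address, job_size):
    """free_list: [start, size] pairs sorted by start address (mutated in place)."""
    end = beginning_address + job_size
    prev = next_blk = None
    for block in free_list:
        if block[0] + block[1] == beginning_address:
            prev = block                  # free block immediately below the job
        if block[0] == end:
            next_blk = block              # free block immediately above the job
    if prev and next_blk:                 # Case 2: join three free blocks
        prev[1] += job_size + next_blk[1]
        free_list.remove(next_blk)        # stands in for a null entry
    elif prev:                            # Case 1: merge with the lower neighbor
        prev[1] += job_size
    elif next_blk:                        # Case 1: merge with the upper neighbor
        next_blk[0] = beginning_address
        next_blk[1] += job_size
    else:                                 # Case 3: isolated block
        free_list.append([beginning_address, job_size])
        free_list.sort()                  # keep the list ordered by address
```

For Case 1, releasing a 200-space job at 7600 next to a free block at 7800 of size 5 yields one free block of 205 at 7600; for Case 2, a 20-space job between free blocks of 20 and 205 yields 245 at the smallest beginning address.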
Deallocation
• Case 1: Joining Two Free Blocks
• Table 2.4 shows how deallocation occurs in a dynamic memory
allocation system when the job to be deallocated is next to one
free memory block.
• (Although the numbers in parentheses don’t appear in the free list, they’ve been inserted here for clarity. The job size is 200 and its beginning location is 7600.)
Deallocation
• After deallocation the free list looks like the one shown in Table
2.5.
Deallocation
• Using the deallocation algorithm, the system sees that the
memory to be released is next to a free memory block, which
starts at location 7800. Therefore, the list must be changed to
reflect the starting address of the new free block, 7600, which was
the address of the first instruction of the job that just released this
block. In addition, the memory block size for this new free space
must be changed to show its new size, which is the combined
total of the two free partitions (200 + 5).
Deallocation
• Case 2: Joining Three Free Blocks
Deallocation
• When the deallocated memory space is between two free
memory blocks, the process is similar, as shown in Table 2.6.
• Using the deallocation algorithm, the system learns that the
memory to be deallocated is between two free blocks of memory.
Therefore, the sizes of the three free partitions (20 + 20 + 205)
must be combined and the total stored with the smallest
beginning address, 7560.
Deallocation
• Because the entry at location 7600 has been combined with the
previous entry, we must empty out this entry. We do that by
changing the status to null entry, with no beginning address and
no memory block size as indicated by an asterisk in Table 2.7. This
negates the need to rearrange the list at the expense of memory.
Deallocation
• Case 3: Deallocating an Isolated Block
• The third alternative is when the space to be deallocated is
isolated from all other free areas.
Deallocation
• For this example, we need to know more about how the busy
memory list is configured. To simplify matters, let’s look at the
busy list for the memory area between locations 7560 and 10250.
Remember that, starting at 7560, there’s a free memory block of
245, so the busy memory area includes everything from location
7805 (7560 + 245) to 10250, which is the address of the next free
block. The free list and busy list are shown in Table 2.8 and Table
2.9.
Deallocation
• Using the deallocation algorithm, the system learns that the memory
block to be released is not adjacent to any free blocks of memory;
instead it is between two other busy areas. Therefore, the system must
search the table for a null entry.
• The scheme presented in this example creates null entries in both the
busy and the free lists during the process of allocation or deallocation
of memory. An example of a null entry occurring as a result of
deallocation was presented in Case 2. A null entry in the busy list
occurs when a memory block between two other busy memory blocks
is returned to the free list, as shown in Table 2.10. This mechanism
ensures that all blocks are entered in the lists according to the
beginning address of their memory location from smallest to largest.
Deallocation
• When the null entry is found, the beginning memory location of
the terminating job is entered in the beginning address column,
the job size is entered under the memory block size column, and
the status is changed from a null entry to free to indicate that a
new block of memory is available, as shown in Table 2.11.
Relocatable Dynamic Partitions
• Both the fixed and dynamic memory allocation schemes
described thus far shared some unacceptable fragmentation
characteristics that had to be resolved before the number of jobs
waiting to be accepted became unwieldy. In addition, there was a
growing need to use all the slivers of memory often left over.
Relocatable Dynamic Partitions
• The solution to both problems was the development of
relocatable dynamic partitions. With this memory allocation
scheme, the Memory Manager relocates programs to gather
together all of the empty blocks and compact them to make one
block of memory large enough to accommodate some or all of the
jobs waiting to get in.
Relocatable Dynamic Partitions
• The compaction of memory, sometimes referred to as garbage
collection or defragmentation, is performed by the operating
system to reclaim fragmented sections of the memory space.
Remember our earlier example of the makeshift lending library? If
you stopped lending books for a few moments and rearranged the
books in the most effective order, you would be compacting your
collection. But this demonstrates its disadvantage—it’s an
overhead process, so that while compaction is being done
everything else must wait.
Relocatable Dynamic Partitions
• Compaction isn’t an easy task. First, every program in memory
must be relocated so they’re contiguous, and then every address,
and every reference to an address, within each program must be
adjusted to account for the program’s new location in memory.
However, all other values within the program (such as data values)
must be left alone. In other words, the operating system must
distinguish between addresses and data values, and the
distinctions are not obvious once the program has been loaded
into memory.
Relocatable Dynamic Partitions
• To appreciate the complexity of relocation, let’s look at a typical
program. Remember, all numbers are stored in memory as binary
values, and in any given program instruction it’s not uncommon to
find addresses as well as data values. For example, an assembly
language program might include the instruction to add the integer
1 to I. The source code instruction looks like this:
ADDI I, 1
Relocatable Dynamic Partitions
• However, after it has been translated into actual code it could look
like this (for readability purposes the values are represented here
in octal code, not binary code):
000007 271 01 0 00 000001
• It’s not immediately obvious which elements are addresses and
which are instruction codes or data values. In fact, the address is
the number on the left (000007). The instruction code is next
(271), and the data value is on the right (000001).
Relocatable Dynamic Partitions
• The operating system can tell the function of each group of digits
by its location in the line and the operation code. However, if the
program is to be moved to another place in memory, each address
must be identified, or flagged. Later, the number of memory
locations by which the program has been displaced must be
added to (or subtracted from) all of the original addresses in the
program.
Relocatable Dynamic Partitions
• This becomes particularly important when the program includes
loop sequences, decision sequences, and branching sequences,
as well as data references. If, by chance, every address was not
adjusted by the same value, the program would branch to the
wrong section of the program or to a section of another program,
or it would reference the wrong data.
Relocatable Dynamic Partitions
• The program in Figure 2.7 and Figure 2.8 shows how the operating
system flags the addresses so that they can be adjusted if and
when a program is relocated. Internally, the addresses are marked
with a special symbol (indicated in Figure 2.8 by apostrophes) so
the Memory Manager will know to adjust them by the value stored
in the relocation register. All of the other values (data values) are
not marked and won’t be changed after relocation. Other numbers
in the program, those indicating instructions, registers, or
constants used in the instruction, are also left alone.
Relocatable Dynamic Partitions
• Figure 2.9 illustrates what happens to a program in memory during
compaction and relocation.
• (figure 2.9) Three snapshots of memory before and after
compaction with the operating system occupying the first 10K of
memory. When Job 6 arrives requiring 84K, the initial memory
layout in (a) shows external fragmentation totaling 96K of space.
Immediately after compaction (b), external fragmentation has
been eliminated, making room for Job 6 which, after loading, is
shown in (c).
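Compaction can be sketched as sliding each job in the busy list down toward the operating system's boundary and recording how far each one moved. The job names and sizes below are hypothetical, chosen so that the second job reproduces the Job 4 numbers worked through with Figure 2.10 later in the text.

```python
# Sketch of memory compaction: relocate every job so the jobs are contiguous
# above the OS, and report each job's displacement (its relocation value).

def compact(busy_list, os_size):
    """busy_list: [name, start, size] entries sorted by start (mutated in place).
    Returns ({name: offset}, start_of_consolidated_free_block)."""
    next_free = os_size                 # first location past the resident OS
    offsets = {}
    for entry in busy_list:
        name, start, size = entry
        offsets[name] = next_free - start   # negative: moved toward low memory
        entry[1] = next_free                # job's new starting address
        next_free += size
    return offsets, next_free

# Hypothetical layout: OS in the first 10K (10 * 1024 locations), one job
# already flush against it, and a 32K job originally loaded at 30K.
busy = [["Job 1", 10240, 8192], ["Job 4", 30720, 32768]]
```

With these values, Job 4 moves from 30720 down to 18432, a displacement of −12288, and the consolidated free block begins at 51200, matching the relocation-register discussion that follows.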
Relocatable Dynamic Partitions
• This discussion of compaction raises three questions:
• What goes on behind the scenes when relocation and compaction take
place?
• What keeps track of how far each job has moved from its original storage
area?
• What lists have to be updated?
Relocatable Dynamic Partitions
• The last question is easiest to answer. After relocation and
compaction, both the free list and the busy list are updated. The
free list is changed to show the partition for the new block of free
memory: the one formed as a result of compaction that will be
located in memory starting after the last location used by the last
job. The busy list is changed to show the new locations for all of
the jobs already in progress that were relocated. Each job will
have a new address except for those that were already residing at
the lowest memory locations.
Relocatable Dynamic Partitions
• To answer the other two questions we must learn more about the
hardware components of a computer, specifically the registers.
Special-purpose registers are used to help with the relocation. In
some computers, two special registers are set aside for this
purpose: the bounds register and the relocation register.
Relocatable Dynamic Partitions
• The bounds register is used to store the highest (or lowest,
depending on the specific system) location in memory accessible
by each program. This ensures that during execution, a program
won’t try to access memory locations that don’t belong to it—that
is, those that are out of bounds. The relocation register contains
the value that must be added to each address referenced in the
program so that the system will be able to access the correct
memory addresses after relocation. If the program isn’t relocated,
the value stored in the program’s relocation register is zero.
Relocatable Dynamic Partitions
• Figure 2.10 illustrates what happens during relocation by using the
relocation register (all values are shown in decimal form).
(figure 2.10) Contents of relocation register and close-up of Job 4 memory area (a) before
relocation and (b) after relocation and compaction.
Relocatable Dynamic Partitions
• Originally, Job 4 was loaded into memory starting at memory
location 30K. (1K equals 1,024 bytes. Therefore, the exact starting
address is: 30 * 1024 = 30,720.) It required a block of memory of
32K (or 32 * 1024 = 32,768) addressable locations. Therefore,
when it was originally loaded, the job occupied the space from
memory location 30720 to memory location 63488-1. Now,
suppose that within the program, at memory location 31744,
there’s an instruction that looks like this:
LOAD 4, ANSWER
Relocatable Dynamic Partitions
• This assembly language command asks that the data value known
as ANSWER be loaded into Register 4 for later computation.
ANSWER, the value 37, is stored at memory location 53248. (In
this example, Register 4 is a working/computation register, which
is distinct from either the relocation or the bounds register.)
Relocatable Dynamic Partitions
• After relocation, Job 4 has been moved to a new starting memory
address of 18K (actually 18 * 1024 = 18,432). Of course, the job
still has its 32K addressable locations, so it now occupies
memory from location 18432 to location 51200-1 and, thanks to
the relocation register, all of the addresses will be adjusted
accordingly.
Relocatable Dynamic Partitions
• What does the relocation register contain? In this example, it
contains the value –12288. As calculated previously, 12288 is the
size of the free block that has been moved forward toward the high
addressable end of memory. The sign is negative because Job 4
has been moved back, closer to the low addressable end of
memory, as shown at the top of Figure 2.10(b).
Relocatable Dynamic Partitions
• However, the program instruction (LOAD 4, ANSWER) has not
been changed. The original address 53248 where ANSWER had
been stored remains the same in the program no matter how
many times it is relocated. Before the instruction is executed,
however, the true address must be computed by adding the value
stored in the relocation register to the address found at that
instruction. If the addresses are not adjusted by the value stored
in the relocation register, then even though memory location
31744 is still part of the job’s accessible set of memory locations,
it would not contain the LOAD command.
Relocatable Dynamic Partitions
• Not only that, but location 53248 is now out of bounds. The
instruction that was originally at 31744 has been moved to
location 19456. That’s because all of the instructions in this
program have been moved back by 12K (12 * 1024 = 12,288),
which is the size of the free block. Therefore, location 53248 has
been displaced by –12288 and ANSWER, the data value 37, is now
located at address 40960.
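The address arithmetic above can be collected into a small sketch of how the relocation and bounds registers are applied to every memory reference. The constants come from the Job 4 example (the bounds value 51200 follows from 18432 + 32768); the function and names are Python stand-ins for what the hardware does.

```python
# Sketch of address translation with the relocation and bounds registers,
# using the Job 4 values from the text.

RELOCATION = -12288   # Job 4 moved back 12K toward low memory
BOUNDS = 51200        # highest location accessible after relocation (18432 + 32768)

def effective_address(program_address, relocation=RELOCATION, bounds=BOUNDS):
    """Add the relocation register to the address referenced by the program,
    then check the result against the bounds register before access."""
    addr = program_address + relocation
    if addr >= bounds:
        raise MemoryError("address out of bounds")
    return addr
```

The unchanged program addresses translate exactly as the text describes: the LOAD instruction at 31744 is now found at 19456, and ANSWER at 53248 is now at 40960, while the untranslated 53248 itself would trip the bounds check.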
Relocatable Dynamic Partitions
• In effect, by compacting and relocating, the Memory Manager
optimizes the use of memory and thus improves throughput—one
of the measures of system performance. An unfortunate side
effect is that more overhead is incurred than with the two previous
memory allocation schemes. The crucial factor here is the timing
of the compaction—when and how often it should be done. There
are three options:
Relocatable Dynamic Partitions
• One approach is to do it when a certain percentage of memory
becomes busy, say 75 percent. The disadvantage of this approach
is that the system would incur unnecessary overhead if no jobs
were waiting to use the remaining 25 percent.
Relocatable Dynamic Partitions
• A second approach is to compact memory only when there are
jobs waiting to get in. This would entail constant checking of the
entry queue, which might result in unnecessary overhead and
slow down the processing of jobs already in the system
Relocatable Dynamic Partitions
• A third approach is to do it after a prescribed amount of time has
elapsed. If the amount of time chosen is too small, however, then
the system will spend more time on compaction than on
processing. If it’s too large, too many jobs will congregate in the
waiting queue and the advantages of compaction are lost
Relocatable Dynamic Partitions
• As you can see, each option has its good and bad points. The best
choice for any system is decided by the operating system designer
who, based on the job mix and other factors, tries to optimize both
processing time and memory use while keeping overhead as low
as possible.
Summary
• Four memory management techniques were presented in this
chapter: single-user systems, fixed partitions, dynamic partitions,
and relocatable dynamic partitions. They have three things in
common: They all require that the entire program (1) be loaded
into memory, (2) be stored contiguously, and (3) remain in memory
until the job is completed.
• Consequently, each puts severe restrictions on the size of the jobs
because they can only be as large as the biggest partitions in
memory.
Summary
• These schemes were sufficient for the first three generations of
computers, which processed jobs in batch mode. Turnaround
time was measured in hours, or sometimes days, but that was a
period when users expected such delays between the submission
of their jobs and pick up of output. As you’ll see in the next
chapter, a new trend emerged during the third-generation
computers of the late 1960s and early 1970s: Users were able to
connect directly with the central processing unit via remote job
entry stations, loading their jobs from online terminals that could
interact more directly with the system. New methods of memory
management were needed to accommodate them.
Summary
• We’ll see that the memory allocation schemes that followed had
two new things in common. First, programs didn’t have to be
stored in contiguous memory locations—they could be divided
into segments of variable sizes or pages of equal size. Each page,
or segment, could be stored wherever there was an empty block
big enough to hold it. Second, not all the pages, or segments, had
to reside in memory during the execution of the job. These were
significant advances for system designers, operators, and users
alike.
Key Terms
• address: a number that designates a particular memory location.
• best-fit memory allocation: a main memory allocation scheme
that considers all free blocks and selects for allocation the one
that will result in the least amount of wasted space.
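The best-fit definition above can be sketched as a few lines of Python. This is an illustrative sketch only; the free-list representation as (start_address, size) tuples and all names are assumptions, not from the text.

```python
def best_fit(free_list, request):
    """Select the free block that leaves the least wasted space.

    free_list: list of (start_address, size) tuples describing free blocks.
    Returns the chosen block, or None if no block is large enough.
    """
    candidates = [block for block in free_list if block[1] >= request]
    if not candidates:
        return None
    # Minimize leftover space: size minus the requested amount.
    return min(candidates, key=lambda block: block[1] - request)

free = [(0, 300), (400, 120), (600, 500)]
# A 100-unit request picks the 120-unit block, wasting only 20 units
# instead of 200 or 400.
print(best_fit(free, 100))  # (400, 120)
```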
• bounds register: a register used to store the highest location in
memory legally accessible by each program.
• compaction: the process of collecting fragments of available
memory space into contiguous blocks by moving programs and
data in a computer’s memory or disk. Also called garbage
collection.
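Compaction as defined above—gathering fragments into one contiguous block by moving programs—can be illustrated with a short sketch. The job representation and names below are hypothetical; the offsets it records are what each job's relocation register would need to hold.

```python
def compact(jobs):
    """Slide every job toward address 0, recording each relocation offset.

    jobs: list of dicts with 'start' and 'size' keys.
    Returns (new_layout, offsets); each offset is the value to load into
    that job's relocation register (usually negative, a move toward 0).
    """
    next_free, layout, offsets = 0, [], []
    for job in sorted(jobs, key=lambda j: j['start']):
        offsets.append(next_free - job['start'])
        layout.append({'start': next_free, 'size': job['size']})
        next_free += job['size']          # next job packs in right after
    return layout, offsets

# Two jobs with a 150-unit hole between them:
jobs = [{'start': 100, 'size': 50}, {'start': 300, 'size': 200}]
layout, offsets = compact(jobs)
print(layout)   # [{'start': 0, 'size': 50}, {'start': 50, 'size': 200}]
print(offsets)  # [-100, -250]
```

After the move, all free memory sits in one block above address 250, which is the whole point of compaction.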
Key Terms
• deallocation: the process of freeing an allocated resource, whether
memory space, a device, a file, or a CPU.
• dynamic partitions: a memory allocation scheme in which jobs are
given as much memory as they request when they are loaded for
processing, thus creating their own partitions in main memory.
• external fragmentation: a situation in which the dynamic allocation of
memory creates unusable fragments of free memory between blocks
of busy, or allocated, memory.
• first come first served (FCFS): a nonpreemptive process scheduling
policy that handles jobs according to their arrival time; the first job in
the READY queue is processed first.
Key Terms
• first-fit memory allocation: a main memory allocation scheme
that searches from the beginning of the free block list and selects
for allocation the first block of memory large enough to fulfill the
request.
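First-fit, by contrast with best-fit, stops at the first adequate block rather than examining them all. A minimal sketch, under the same assumed (start_address, size) free-list representation as above:

```python
def first_fit(free_list, request):
    """Return the first free block large enough, scanning from the start.

    free_list is assumed ordered by memory address; the search ends at
    the first match, which makes allocation fast but can waste space.
    """
    for block in free_list:
        if block[1] >= request:
            return block
    return None

free = [(0, 300), (400, 120), (600, 500)]
# The same 100-unit request now takes the first adequate block,
# leaving 200 units unused there.
print(first_fit(free, 100))  # (0, 300)
```

Note how the same request that best-fit would place in the 120-unit block lands in the 300-unit block here: first-fit trades wasted space for a shorter search.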
• fixed partitions: a memory allocation scheme in which main
memory is sectioned off, with portions assigned to each job.
• internal fragmentation: a situation in which a fixed partition is
only partially used by the program; the remaining space within the
partition is unavailable to any other job and is therefore wasted.
• kilobyte (K): a unit of memory or storage space equal to 1,024
bytes (2^10 bytes).
Key Terms
• main memory: the unit that works directly with the CPU and in which the
data and instructions must reside in order to be processed. Also called
random access memory (RAM), primary storage, or internal memory.
• null entry: an empty entry in a list.
• relocatable dynamic partitions: a memory allocation scheme in which the
system relocates programs in memory to gather together all of the empty
blocks and compact them to make one block of memory that’s large enough
to accommodate some or all of the jobs waiting for memory.
• relocation: (1) the process of moving a program from one area of memory to
another; or (2) the process of adjusting address references in a program, by
either software or hardware means, to allow the program to execute correctly
when loaded in different sections of memory.
Key Terms
• relocation register: a register that contains the value that must
be added to each address referenced in the program so that it will
be able to access the correct memory addresses after relocation.
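The relocation register and bounds register work together on every memory reference: the bounds register catches addresses outside the job's space, and the relocation register adjusts legal ones. A hedged sketch—the function name and the specific numbers are illustrative only:

```python
def access(logical_address, relocation_register, bounds_register):
    """Translate one program address, enforcing the bounds check first.

    bounds_register holds the highest address the job may legally use;
    relocation_register holds the value added to every reference after
    the job has been moved.
    """
    if logical_address > bounds_register:
        raise MemoryError("address outside the job's allocated space")
    return logical_address + relocation_register

# A job moved 200 bytes toward address 0 by compaction:
# its old reference to 5000 now resolves to 4800.
print(access(5000, -200, 6000))  # 4800
```

A reference beyond the bounds register (say, 7000 here) would be rejected before translation, which is how the system protects neighboring partitions after relocation.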
• static partitions: another term for fixed partitions.
END
INTRODUCTION TO OPERATING SYSTEM_LESSON_2_SHARE.pdf

  • 1. Introduction to Operation System: Memory Management of Early Systems Nana Kofi Annan
  • 2. Learning Objectives The basic functionality of the three memory allocation schemes presented in this section: fixed partitions, dynamic partitions, relocatable dynamic partitions Best-fit memory allocation as well as first-fit memory allocation schemes How a memory list keeps track of available memory The importance of deallocation of memory in a dynamic partition system The importance of the bounds register in memory allocation schemes The role of compaction and how it improves memory allocation efficiency
  • 3. Memory Management • The management of main memory is critical. In fact, from a historical perspective, the performance of the entire system has been directly dependent on two things: • How much memory is available • How it is optimized while jobs are being processed.
  • 5. Memory Management • These early memory management schemes are seldom used by today’s operating systems, but they are important to study because each one introduced fundamental concepts that helped memory management evolve.
  • 6. Single-User Contiguous Scheme • The first memory allocation scheme worked like this: Each program to be processed was loaded in its entirety into memory and allocated as much contiguous space in memory as it needed, as shown in Figure 2.2. The keywords here are entirety and contiguous. If the program was too large and didn’t fit the available memory space, it couldn’t be executed. And, although early computers were physically large, they had very little memory
  • 7. Single-User Contiguous Scheme • A single-user scheme supports one user on one computer running one job at a time. Sharing isn’t possible.
  • 9. Single-User Contiguous Scheme • This demonstrates a significant limiting factor of all computers—they have only a finite amount of memory and if a program doesn’t fit, then either the size of the main memory must be increased or the program must be modified. It’s usually modified by making it smaller or by using methods that allow program segments (partitions made to the program) to be overlaid. (To overlay is to transfer segments of a program from secondary storage into main memory for execution, so that two or more segments take turns occupying the same memory locations.)
  • 10. Single-User Contiguous Scheme • Single-user systems in a nonnetworked environment work the same way. Each user is given access to all available main memory for each job, and jobs are processed sequentially, one after the other. To allocate memory, the operating system uses a simple algorithm (step-by-step procedure to solve a problem):
  • 11. Algorithm to Load a Job in a Single-User System
  • 13. Single-User Contiguous Scheme • Notice that the amount of work done by the operating system’s Memory Manager is minimal, the code to perform the functions is straightforward, and the logic is quite simple. Only two hardware items are needed: a register to store the base address and an accumulator to keep track of the size of the program as it’s being read into memory. Once the program is entirely loaded into memory, it remains there until execution is complete, either through normal termination or by intervention of the operating system.
  • 14. Single-User Contiguous Scheme • One major problem with this type of memory allocation scheme is that it doesn’t support multiprogramming or networking; it can handle only one job at a time. When these single-user configurations were first made available commercially in the late 1940s and early 1950s, they were used in research institutions but proved unacceptable for the business community—it wasn’t cost effective to spend almost $200,000 for a piece of equipment that could be used by only one person at a time. Therefore, in the late 1950s and early 1960s a new scheme was needed to manage memory, which used partitions to take advantage of the computer system’s resources by overlapping independent operations
  • 16. Fixed Partitions • The first attempt to allow for multiprogramming used fixed partitions (also called static partitions) within the main memory—one partition for each job. Because the size of each partition was designated when the system was powered on, each partition could only be reconfigured when the computer system was shut down, reconfigured, and restarted. Thus, once the system was in operation the partition sizes remained static.
  • 17. Fixed Partitions • Each partition could be used by only one program. The size of each partition was set in advance by the computer operator so sizes couldn't be changed without restarting the system.
  • 18. Fixed Partitions • A critical factor was introduced with this scheme: protection of the job’s memory space. Once a partition was assigned to a job, no other job could be allowed to enter its boundaries, either accidentally or intentionally. This problem of partition intrusion didn’t exist in single-user contiguous allocation schemes because only one job was present in main memory at any given time so only the portion of the operating system residing in main memory had to be protected. However, for the fixed partition allocation schemes, protection was mandatory for each partition present in main memory. Typically this was the joint responsibility of the hardware of the computer and the operating system.
  • 19. Fixed Partitions • The algorithm used to store jobs in memory requires a few more steps than the one used for a single-user system because the size of the job must be matched with the size of the partition to make sure it fits completely. Then, when a block of sufficient size is located, the status of the partition must be checked to see if it’s available.
  • 20. Algorithm to Load a Job in a Fixed Partition
  • 22. Fixed Partitions • This partition scheme is more flexible than the single-user scheme because it allows several programs to be in memory at the same time. However, it still requires that the entire program be stored contiguously and in memory from the beginning to the end of its execution. In order to allocate memory spaces to jobs, the operating system’s Memory Manager must keep a table, such as Table 2.1, which shows each memory partition size, its address, its access restrictions, and its current status (free or busy) for the system illustrated in Figure 2.3. (In Table 2.1 K stands for kilobyte, which is 1,024 bytes.”)
  • 25. Fixed Partitions • The fixed partition scheme works well if all of the jobs run on the system are of the same size or if the sizes are known ahead of time and don’t vary between reconfigurations. Ideally, that would require accurate advance knowledge of all the jobs to be run on the system in the coming hours, days, or weeks. However, unless the operator can accurately predict the future, the sizes of the partitions are determined in an arbitrary fashion and they might be too small or too large for the jobs coming in.
  • 26. Fixed Partitions • There are significant consequences if the partition sizes are too small; larger jobs will be rejected if they’re too big to fit into the largest partitions or will wait if the large partitions are busy. As a result, large jobs may have a longer turnaround time as they wait for free partitions of sufficient size or may never run.
  • 27. Fixed Partitions • On the other hand, if the partition sizes are too big, memory is wasted. If a job does not occupy the entire partition, the unused memory in the partition will remain idle; it can’t be given to another job because each partition is allocated to only one job at a time. It’s an indivisible unit. Figure 2.3 demonstrates one such circumstance.
  • 28. Fixed Partitions • This phenomenon of partial usage of fixed partitions and the coinciding creation of unused spaces within the partition is called internal fragmentation, and is a major drawback to the fixed partition memory allocation scheme.
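As a tiny numeric illustration (the sizes here are made up, not taken from Figure 2.3), internal fragmentation is simply the unused space left inside each busy partition:

```python
def internal_fragmentation(partition_sizes, job_sizes):
    """job_sizes[i] is the size of the job in partition i (0 if empty).
    Only busy partitions waste space internally; empty ones remain usable."""
    return sum(p - j for p, j in zip(partition_sizes, job_sizes) if j > 0)

# e.g. a 100K partition holding a 70K job wastes 30K that no other job can use
```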
  • 29. Fixed Partitions • There are two types of fragmentation: internal and external. The type depends on the location of the wasted space.
  • 31. Dynamic Partitions • With dynamic partitions, available memory is still kept in contiguous blocks but jobs are given only as much memory as they request when they are loaded for processing. Although this is a significant improvement over fixed partitions because memory isn’t wasted within the partition, it doesn’t entirely eliminate the problem.
  • 32. Dynamic Partitions • As shown in Figure 2.4, a dynamic partition scheme fully utilizes memory when the first jobs are loaded. But as new jobs enter the system that are not the same size as those that just vacated memory, they are fit into the available spaces on a priority basis. Figure 2.4 demonstrates first-come, first-served priority. Therefore, the subsequent allocation of memory creates fragments of free memory between blocks of allocated memory. This problem is called external fragmentation and, like internal fragmentation, lets memory go to waste.
  • 37. Dynamic Partitions • In the last snapshot, (e) in Figure 2.4, there are three free partitions of 5K, 10K, and 20K—35K in all—enough to accommodate Job 8, which only requires 30K. However, they are not contiguous and, because the jobs are loaded in a contiguous manner, this scheme forces Job 8 to wait.
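The Job 8 situation can be checked mechanically: the total free space exceeds the request, yet no single block is big enough. A minimal sketch:

```python
def can_load_contiguously(free_block_sizes, job_size):
    """A job fits only if some single free block is large enough."""
    return any(size >= job_size for size in free_block_sizes)

free_blocks = [5, 10, 20]       # the three free partitions of Figure 2.4(e), in K
total_free = sum(free_blocks)   # 35K in all, more than Job 8's 30K request
```

Even though `total_free` covers the request, `can_load_contiguously(free_blocks, 30)` is False; that gap between total free memory and usable free memory is external fragmentation.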
  • 38. Dynamic Partitions • Before we go to the next allocation scheme, let’s examine how the operating system keeps track of the free sections of memory.
  • 40. Best-Fit Versus First-Fit Allocation • For both fixed and dynamic memory allocation schemes, the operating system must keep lists of each memory location noting which are free and which are busy. Then as new jobs come into the system, the free partitions must be allocated.
  • 41. Best-Fit Versus First-Fit Allocation • These partitions may be allocated on the basis of first-fit memory allocation (first partition fitting the requirements) or best-fit memory allocation (least wasted space, the smallest partition fitting the requirements). For both schemes, the Memory Manager organizes the memory lists of the free and used partitions (free/busy) either by size or by location. The best-fit allocation method keeps the free/busy lists in order by size, smallest to largest. The first-fit method keeps the free/busy lists organized by memory locations, low-order memory to high-order memory. Each has advantages depending on the needs of the particular allocation scheme— best-fit usually makes the best use of memory space; first-fit is faster in making the allocation.
  • 43. Best-Fit Versus First-Fit Allocation • The first-fit algorithm assumes that the Memory Manager keeps two lists, one for free memory blocks and one for busy memory blocks. The operation consists of a simple loop that compares the size of each job to the size of each memory block until a block is found that’s large enough to fit the job. Then the job is stored into that block of memory, and the Memory Manager moves out of the loop to fetch the next job from the entry queue. If the entire list is searched in vain, then the job is placed into a waiting queue. The Memory Manager then fetches the next job and repeats the process.
  • 46. Best-Fit Versus First-Fit Allocation • In Table 2.2, a request for a block of 200 spaces has just been given to the Memory Manager. (The spaces may be words, bytes, or any other unit the system handles.) Using the first-fit algorithm and starting from the top of the list, the Memory Manager locates the first block of memory large enough to accommodate the job, which is at location 6785. The job is then loaded, starting at location 6785 and occupying the next 200 spaces. The next step is to adjust the free list to indicate that the block of free memory now starts at location 6985 (not 6785 as before) and that it contains only 400 spaces (not 600 as before).
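The Table 2.2 walk-through can be sketched as follows. Only the 6785/600 block comes from the text; the other free-list entries are placeholder values standing in for the actual table.

```python
def first_fit(free_list, job_size):
    """free_list: mutable [beginning_address, block_size] pairs, ordered by
    memory location. Take the first block large enough, shrink it in place."""
    for block in free_list:
        if block[1] >= job_size:
            load_address = block[0]
            block[0] += job_size   # free space now begins past the job
            block[1] -= job_size
            return load_address
    return None                    # searched in vain: job goes to waiting queue

free_list = [[4075, 105], [5225, 5], [6785, 600], [7600, 205]]
```

A request for 200 spaces skips the first two blocks (too small), loads at 6785, and the entry becomes [6985, 400], exactly the free-list adjustment the slide describes.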
  • 47. Best-Fit Versus First-Fit Allocation • The algorithm for best-fit is slightly more complex because the goal is to find the smallest memory block into which the job will fit:
  • 49. Best-Fit Versus First-Fit Allocation • One of the problems with the best-fit algorithm is that the entire table must be searched before the allocation can be made because the memory blocks are physically stored in sequence according to their location in memory (and not by memory block sizes as shown in Figure 2.6). The system could execute an algorithm to continuously rearrange the list in ascending order by memory block size, but that would add more overhead and might not be an efficient use of processing time in the long run.
  • 50. Best-Fit Versus First-Fit Allocation • The best-fit algorithm is illustrated showing only the list of free memory blocks. Table 2.3 shows the free list before and after the best-fit block has been allocated to the same request presented in Table 2.2.
  • 52. Best-Fit Versus First-Fit Allocation • In Table 2.3, a request for a block of 200 spaces has just been given to the Memory Manager. Using the best-fit algorithm and starting from the top of the list, the Memory Manager searches the entire list and locates a block of memory starting at location 7600, which is the smallest block that’s large enough to accommodate the job. The choice of this block minimizes the wasted space (only 5 spaces are wasted, which is less than in the four alternative blocks). The job is then stored, starting at location 7600 and occupying the next 200 spaces. Now the free list must be adjusted to show that the block of free memory starts at location 7800 (not 7600 as before) and that it contains only 5 spaces (not 205 as before).
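The same 200-space request against a free list like Table 2.3 illustrates best-fit; again only the 7600/205 entry is taken from the text, and the rest are placeholders.

```python
def best_fit(free_list, job_size):
    """Search the ENTIRE free list for the smallest block that still fits."""
    best = None
    for block in free_list:
        if block[1] >= job_size and (best is None or block[1] < best[1]):
            best = block
    if best is None:
        return None            # no block fits; job waits
    load_address = best[0]
    best[0] += job_size        # shrink the chosen block in place
    best[1] -= job_size
    return load_address

free_list = [[4075, 105], [6785, 600], [7600, 205], [10250, 4050]]
```

The request is placed at 7600, leaving only a 5-space sliver at 7800, whereas first-fit would have left 400 unused spaces at 6985. Note that every entry had to be examined to find it.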
  • 53. Best-Fit Versus First-Fit Allocation • Which is best—first-fit or best-fit? For many years there was no way to answer such a general question because performance depends on the job mix. Note that while the best-fit resulted in a better fit, it also resulted (and does so in the general case) in a smaller free space (5 spaces), which is known as a sliver.
  • 54. Best-Fit Versus First-Fit Allocation • In recent years, access times have become so fast that the scheme that saves the more valuable resource, memory space, may be the best in some cases. Research continues to focus on finding the optimum allocation scheme. This includes optimum page size.
  • 55. Deallocation • Deallocation is the release of memory space.
  • 56. Deallocation • For a fixed partition system, the process is quite straightforward. When the job is completed, the Memory Manager resets the status of the memory block where the job was stored to “free.” Any code—for example, binary values with 0 indicating free and 1 indicating busy—may be used so the mechanical task of deallocating a block of memory is relatively simple.
  • 57. Deallocation • A dynamic partition system uses a more complex algorithm because the algorithm tries to combine free areas of memory whenever possible. Therefore, the system must be prepared for three alternative situations: • Case 1. When the block to be deallocated is adjacent to another free block • Case 2. When the block to be deallocated is between two free blocks • Case 3. When the block to be deallocated is isolated from other free blocks
  • 58. Deallocation • The deallocation algorithm must be prepared for all three eventualities with a set of nested conditionals. The following algorithm is based on the fact that memory locations are listed using a lowest-to-highest address scheme. The algorithm would have to be modified to accommodate a different organization of memory locations. In this algorithm, job_size is the amount of memory being released by the terminating job, and beginning_address is the location of the first instruction for the job.
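The three cases can be sketched as a merge routine over an address-ordered free list. This is a simplified sketch: where the text's algorithm leaves a null entry behind after combining blocks, this version removes the merged entry outright.

```python
def deallocate(free_list, beginning_address, job_size):
    """free_list: [address, size] pairs sorted lowest-to-highest by address."""
    end = beginning_address + job_size
    # free blocks immediately below and above the released space, if any
    prev = next((b for b in free_list if b[0] + b[1] == beginning_address), None)
    nxt = next((b for b in free_list if b[0] == end), None)
    if prev and nxt:                   # Case 2: between two free blocks
        prev[1] += job_size + nxt[1]   # combine all three sizes
        free_list.remove(nxt)          # (the text keeps a null entry instead)
    elif prev:                         # Case 1: free block just below
        prev[1] += job_size
    elif nxt:                          # Case 1: free block just above
        nxt[0] = beginning_address
        nxt[1] += job_size
    else:                              # Case 3: isolated block
        free_list.append([beginning_address, job_size])
        free_list.sort()
```

Releasing the 200-space job at 7600 next to the 5-space free block at 7800 (the Case 1 example of Tables 2.4 and 2.5) yields a single 205-space block starting at 7600.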
  • 60. Deallocation • Case 1: Joining Two Free Blocks • Table 2.4 shows how deallocation occurs in a dynamic memory allocation system when the job to be deallocated is next to one free memory block.
  • 61. 1 Although the numbers in parentheses don’t appear in the free list, they’ve been inserted here for clarity. The job size is 200 and its beginning location is 7600.
  • 62. Deallocation • After deallocation the free list looks like the one shown in Table 2.5
  • 64. Deallocation • Using the deallocation algorithm, the system sees that the memory to be released is next to a free memory block, which starts at location 7800. Therefore, the list must be changed to reflect the starting address of the new free block, 7600, which was the address of the first instruction of the job that just released this block. In addition, the memory block size for this new free space must be changed to show its new size, which is the combined total of the two free partitions (200 + 5).
  • 65. Deallocation • Case 2: Joining Three Free Blocks
  • 66. Deallocation • When the deallocated memory space is between two free memory blocks, the process is similar, as shown in Table 2.6. • Using the deallocation algorithm, the system learns that the memory to be deallocated is between two free blocks of memory. Therefore, the sizes of the three free partitions (20 + 20 + 205) must be combined and the total stored with the smallest beginning address, 7560
  • 68. Deallocation • Because the entry at location 7600 has been combined with the previous entry, we must empty out this entry. We do that by changing the status to null entry, with no beginning address and no memory block size as indicated by an asterisk in Table 2.7. This negates the need to rearrange the list at the expense of memory.
  • 70. Deallocation • Case 3: Deallocating an Isolated Block • The third alternative is when the space to be deallocated is isolated from all other free areas.
  • 71. Deallocation • For this example, we need to know more about how the busy memory list is configured. To simplify matters, let’s look at the busy list for the memory area between locations 7560 and 10250. Remember that, starting at 7560, there’s a free memory block of 245, so the busy memory area includes everything from location 7805 (7560 + 245) to 10250, which is the address of the next free block. The free list and busy list are shown in Table 2.8 and Table 2.9.
  • 73. Deallocation • Using the deallocation algorithm, the system learns that the memory block to be released is not adjacent to any free blocks of memory; instead it is between two other busy areas. Therefore, the system must search the table for a null entry. • The scheme presented in this example creates null entries in both the busy and the free lists during the process of allocation or deallocation of memory. An example of a null entry occurring as a result of deallocation was presented in Case 2. A null entry in the busy list occurs when a memory block between two other busy memory blocks is returned to the free list, as shown in Table 2.10. This mechanism ensures that all blocks are entered in the lists according to the beginning address of their memory location from smallest to largest.
  • 75. Deallocation • When the null entry is found, the beginning memory location of the terminating job is entered in the beginning address column, the job size is entered under the memory block size column, and the status is changed from a null entry to free to indicate that a new block of memory is available, as shown in Table 2.11.
  • 78. Relocatable Dynamic Partitions • Both of the fixed and dynamic memory allocation schemes described thus far shared some unacceptable fragmentation characteristics that had to be resolved before the number of jobs waiting to be accepted became unwieldy. In addition, there was a growing need to use all the slivers of memory often left over.
  • 79. Relocatable Dynamic Partitions • The solution to both problems was the development of relocatable dynamic partitions. With this memory allocation scheme, the Memory Manager relocates programs to gather together all of the empty blocks and compact them to make one block of memory large enough to accommodate some or all of the jobs waiting to get in.
  • 80. Relocatable Dynamic Partitions • The compaction of memory, sometimes referred to as garbage collection or defragmentation, is performed by the operating system to reclaim fragmented sections of the memory space. Remember our earlier example of the makeshift lending library? If you stopped lending books for a few moments and rearranged the books in the most effective order, you would be compacting your collection. But this demonstrates its disadvantage—it’s an overhead process, so that while compaction is being done everything else must wait.
  • 81. Relocatable Dynamic Partitions • Compaction isn’t an easy task. First, every program in memory must be relocated so they’re contiguous, and then every address, and every reference to an address, within each program must be adjusted to account for the program’s new location in memory. However, all other values within the program (such as data values) must be left alone. In other words, the operating system must distinguish between addresses and data values, and the distinctions are not obvious once the program has been loaded into memory.
  • 82. Relocatable Dynamic Partitions • To appreciate the complexity of relocation, let’s look at a typical program. Remember, all numbers are stored in memory as binary values, and in any given program instruction it’s not uncommon to find addresses as well as data values. For example, an assembly language program might include the instruction to add the integer 1 to I. The source code instruction looks like this: ADDI I, 1
  • 83. Relocatable Dynamic Partitions • However, after it has been translated into actual code it could look like this (for readability purposes the values are represented here in octal code, not binary code): 000007 271 01 0 00 000001 • It’s not immediately obvious which elements are addresses and which are instruction codes or data values. In fact, the address is the number on the left (000007), the instruction code is next (271), and the data value is on the right (000001).
  • 84. Relocatable Dynamic Partitions • The operating system can tell the function of each group of digits by its location in the line and the operation code. However, if the program is to be moved to another place in memory, each address must be identified, or flagged. So later the amount of memory locations by which the program has been displaced must be added to (or subtracted from) all of the original addresses in the program.
  • 85. Relocatable Dynamic Partitions • This becomes particularly important when the program includes loop sequences, decision sequences, and branching sequences, as well as data references. If, by chance, every address was not adjusted by the same value, the program would branch to the wrong section of the program or to a section of another program, or it would reference the wrong data.
  • 86. Relocatable Dynamic Partitions • The program in Figure 2.7 and Figure 2.8 shows how the operating system flags the addresses so that they can be adjusted if and when a program is relocated. Internally, the addresses are marked with a special symbol (indicated in Figure 2.8 by apostrophes) so the Memory Manager will know to adjust them by the value stored in the relocation register. All of the other values (data values) are not marked and won’t be changed after relocation. Other numbers in the program, those indicating instructions, registers, or constants used in the instruction, are also left alone.
  • 89. Relocatable Dynamic Partitions • Figure 2.9 illustrates what happens to a program in memory during compaction and relocation. • (figure 2.9) Three snapshots of memory before and after compaction with the operating system occupying the first 10K of memory. When Job 6 arrives requiring 84K, the initial memory layout in (a) shows external fragmentation totaling 96K of space. Immediately after compaction (b), external fragmentation has been eliminated, making room for Job 6 which, after loading, is shown in (c).
  • 91. Relocatable Dynamic Partitions • This discussion of compaction raises three questions: • What goes on behind the scenes when relocation and compaction take place? • What keeps track of how far each job has moved from its original storage area? • What lists have to be updated?
  • 92. Relocatable Dynamic Partitions • The last question is easiest to answer. After relocation and compaction, both the free list and the busy list are updated. The free list is changed to show the partition for the new block of free memory: the one formed as a result of compaction that will be located in memory starting after the last location used by the last job. The busy list is changed to show the new locations for all of the jobs already in progress that were relocated. Each job will have a new address except for those that were already residing at the lowest memory locations.
  • 93. Relocatable Dynamic Partitions • To answer the other two questions we must learn more about the hardware components of a computer, specifically the registers. Special-purpose registers are used to help with the relocation. In some computers, two special registers are set aside for this purpose: the bounds register and the relocation register.
  • 94. Relocatable Dynamic Partitions • The bounds register is used to store the highest (or lowest, depending on the specific system) location in memory accessible by each program. This ensures that during execution, a program won’t try to access memory locations that don’t belong to it—that is, those that are out of bounds. The relocation register contains the value that must be added to each address referenced in the program so that the system will be able to access the correct memory addresses after relocation. If the program isn’t relocated, the value stored in the program’s relocation register is zero.
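A minimal sketch of the two registers' roles at access time, assuming (as one common convention) that the bounds register holds the highest location legally accessible by the job:

```python
def translate(original_address, relocation_register, bounds_register):
    """Add the relocation value to every address the program references,
    then refuse any result that falls beyond the job's bounds."""
    physical = original_address + relocation_register
    if physical > bounds_register:
        raise MemoryError("address out of bounds")
    return physical
```

If the program hasn't been relocated, the relocation register holds zero and translation is the identity; a systems that stores the lowest legal location instead would also check the lower bound.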
  • 95. Relocatable Dynamic Partitions • Figure 2.10 illustrates what happens during relocation by using the relocation register (all values are shown in decimal form).
  • 96. (figure 2.10) Contents of relocation register and close-up of Job 4 memory area (a) before relocation and (b) after relocation and compaction.
  • 97. Relocatable Dynamic Partitions • Originally, Job 4 was loaded into memory starting at memory location 30K. (1K equals 1,024 bytes. Therefore, the exact starting address is: 30 * 1024 = 30,720.) It required a block of memory of 32K (or 32 * 1024 = 32,768) addressable locations. Therefore, when it was originally loaded, the job occupied the space from memory location 30720 to memory location 63488-1. Now, suppose that within the program, at memory location 31744, there’s an instruction that looks like this: LOAD 4, ANSWER
  • 98. Relocatable Dynamic Partitions • This assembly language command asks that the data value known as ANSWER be loaded into Register 4 for later computation. ANSWER, the value 37, is stored at memory location 53248. (In this example, Register 4 is a working/computation register, which is distinct from either the relocation or the bounds register.)
  • 99. Relocatable Dynamic Partitions • After relocation, Job 4 has been moved to a new starting memory address of 18K (actually 18 * 1024 = 18,432). Of course, the job still has its 32K addressable locations, so it now occupies memory from location 18432 to location 51200-1 and, thanks to the relocation register, all of the addresses will be adjusted accordingly.
  • 100. Relocatable Dynamic Partitions • What does the relocation register contain? In this example, it contains the value –12288. As calculated previously, 12288 is the size of the free block that has been moved forward toward the high addressable end of memory. The sign is negative because Job 4 has been moved back, closer to the low addressable end of memory, as shown at the top of Figure 2.10(b).
  • 101. Relocatable Dynamic Partitions • However, the program instruction (LOAD 4, ANSWER) has not been changed. The original address 53248 where ANSWER had been stored remains the same in the program no matter how many times it is relocated. Before the instruction is executed, however, the true address must be computed by adding the value stored in the relocation register to the address found at that instruction. If the addresses are not adjusted by the value stored in the relocation register, then even though memory location 31744 is still part of the job’s accessible set of memory locations, it would not contain the LOAD command.
  • 102. Relocatable Dynamic Partitions • Not only that, but location 53248 is now out of bounds. The instruction that was originally at 31744 has been moved to location 19456. That’s because all of the instructions in this program have been moved back by 12K (12 * 1024 = 12,288), which is the size of the free block. Therefore, location 53248 has been displaced by –12288 and ANSWER, the data value 37, is now located at address 40960.
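All of the values in the Job 4 example come straight from the text, so the arithmetic can be checked line by line:

```python
K = 1024
old_start = 30 * K                 # Job 4 originally loaded at 30K
job_size = 32 * K                  # 32K addressable locations
assert old_start == 30720
assert old_start + job_size == 63488   # occupies 30720 .. 63488-1

new_start = 18 * K                 # starting address after compaction
relocation = new_start - old_start # value placed in the relocation register
assert relocation == -12288        # negative: moved toward low memory by 12K

assert 31744 + relocation == 19456   # the LOAD instruction's new location
assert 53248 + relocation == 40960   # ANSWER's new location
```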
  • 103. Relocatable Dynamic Partitions • In effect, by compacting and relocating, the Memory Manager optimizes the use of memory and thus improves throughput—one of the measures of system performance. An unfortunate side effect is that more overhead is incurred than with the two previous memory allocation schemes. The crucial factor here is the timing of the compaction—when and how often it should be done. There are three options
  • 104. Relocatable Dynamic Partitions • One approach is to do it when a certain percentage of memory becomes busy, say 75 percent. The disadvantage of this approach is that the system would incur unnecessary overhead if no jobs were waiting to use the remaining 25 percent.
  • 105. Relocatable Dynamic Partitions • A second approach is to compact memory only when there are jobs waiting to get in. This would entail constant checking of the entry queue, which might result in unnecessary overhead and slow down the processing of jobs already in the system.
  • 106. Relocatable Dynamic Partitions • A third approach is to do it after a prescribed amount of time has elapsed. If the amount of time chosen is too small, however, then the system will spend more time on compaction than on processing. If it’s too large, too many jobs will congregate in the waiting queue and the advantages of compaction are lost.
  • 107. Relocatable Dynamic Partitions • As you can see, each option has its good and bad points. The best choice for any system is decided by the operating system designer who, based on the job mix and other factors, tries to optimize both processing time and memory use while keeping overhead as low as possible.
  • 108. Summary • Four memory management techniques were presented in this chapter: single-user systems, fixed partitions, dynamic partitions, and relocatable dynamic partitions. They have three things in common: They all require that the entire program (1) be loaded into memory, (2) be stored contiguously, and (3) remain in memory until the job is completed. • Consequently, each puts severe restrictions on the size of the jobs because they can only be as large as the biggest partitions in memory.
  • 109. Summary • These schemes were sufficient for the first three generations of computers, which processed jobs in batch mode. Turnaround time was measured in hours, or sometimes days, but that was a period when users expected such delays between the submission of their jobs and pick up of output. As you’ll see in the next chapter, a new trend emerged during the third-generation computers of the late 1960s and early 1970s: Users were able to connect directly with the central processing unit via remote job entry stations, loading their jobs from online terminals that could interact more directly with the system. New methods of memory management were needed to accommodate them.
  • 110. Summary • We’ll see that the memory allocation schemes that followed had two new things in common. First, programs didn’t have to be stored in contiguous memory locations— they could be divided into segments of variable sizes or pages of equal size. Each page, or segment, could be stored wherever there was an empty block big enough to hold it. Second, not all the pages, or segments, had to reside in memory during the execution of the job. These were significant advances for system designers, operators, and users alike.
  • 111. Key Terms • address: a number that designates a particular memory location. • best-fit memory allocation: a main memory allocation scheme that considers all free blocks and selects for allocation the one that will result in the least amount of wasted space. • bounds register: a register used to store the highest location in memory legally accessible by each program. • compaction: the process of collecting fragments of available memory space into contiguous blocks by moving programs and data in a computer’s memory or disk. Also called garbage collection.
  • 112. Key Terms • deallocation: the process of freeing an allocated resource, whether memory space, a device, a file, or a CPU. • dynamic partitions: a memory allocation scheme in which jobs are given as much memory as they request when they are loaded for processing, thus creating their own partitions in main memory. • external fragmentation: a situation in which the dynamic allocation of memory creates unusable fragments of free memory between blocks of busy, or allocated, memory. • first come first served (FCFS): a nonpreemptive process scheduling policy that handles jobs according to their arrival time; the first job in the READY queue is processed first.
  • 113. Key Terms • first-fit memory allocation: a main memory allocation scheme that searches from the beginning of the free block list and selects for allocation the first block of memory large enough to fulfill the request. • fixed partitions: a memory allocation scheme in which main memory is sectioned off, with portions assigned to each job. • internal fragmentation: a situation in which a fixed partition is only partially used by the program; the remaining space within the partition is unavailable to any other job and is therefore wasted. • kilobyte (K): a unit of memory or storage space equal to 1,024 (2^10) bytes.
  • 114. Key Terms • main memory: the unit that works directly with the CPU and in which the data and instructions must reside in order to be processed. Also called random access memory (RAM), primary storage, or internal memory. • null entry: an empty entry in a list. • relocatable dynamic partitions: a memory allocation scheme in which the system relocates programs in memory to gather together all of the empty blocks and compact them to make one block of memory that’s large enough to accommodate some or all of the jobs waiting for memory. • relocation: (1) the process of moving a program from one area of memory to another; or (2) the process of adjusting address references in a program, by either software or hardware means, to allow the program to execute correctly when loaded in different sections of memory
  • 115. Key Terms • relocation register: a register that contains the value that must be added to each address referenced in the program so that it will be able to access the correct memory addresses after relocation. • static partitions: another term for fixed partitions.
  • 116. END