2. I/O Interface
• The method used to transfer information between internal storage and external
I/O devices is known as the I/O interface.
• It is used to resolve the differences between the CPU and the peripherals.
• Data transfer to and from the peripherals may be done in any of three
possible ways:
1. Programmed I/O
2. Interrupt-initiated I/O
3. Direct memory access (DMA)
3. Programmed I/O
• Each data item transfer is initiated by an instruction in the program.
• The transfer is between a CPU register and memory.
• Constant monitoring of the peripheral devices by the CPU is necessary.
• The I/O device does not have direct access to the memory unit.
• A transfer from an I/O device to memory requires the execution of several
instructions by the CPU, including an input instruction to transfer the data
from the device to the CPU and a store instruction to transfer the data from
the CPU to memory.
• In programmed I/O, the CPU stays in a program loop until the I/O unit
indicates that it is ready for data transfer.
• This is a time-consuming process since it needlessly keeps the CPU busy.
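As a rough illustration of this busy-wait behaviour, here is a minimal C sketch of programmed I/O; the register addresses and the READY bit are invented for the example and do not refer to any particular device.

```c
#include <stdint.h>

/* Hypothetical memory-mapped device registers (addresses are made up). */
#define DEV_STATUS  (*(volatile uint8_t *)0x4000)  /* bit 0 = READY         */
#define DEV_DATA    (*(volatile uint8_t *)0x4001)  /* one byte of input     */

/* Programmed I/O: the CPU itself moves every byte and busy-waits in between. */
void read_block_programmed_io(uint8_t *buf, int n)
{
    for (int i = 0; i < n; i++) {
        while ((DEV_STATUS & 0x01) == 0)   /* poll until the device is ready   */
            ;                              /* CPU is kept busy doing nothing   */
        buf[i] = DEV_DATA;                 /* input instruction: device -> CPU,
                                              then store: CPU register -> memory */
    }
}
```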
5. Interrupt-initiated I/O
• In the previous case we saw that the CPU is kept busy unnecessarily.
• Using an interrupt-driven method for data transfer avoids this
situation.
• Whenever the device is ready for data transfer, it
initiates an interrupt request signal to the computer.
• In the meantime, the CPU can proceed with any other program execution.
• Upon detection of an external interrupt signal, the CPU momentarily stops
the task that it was performing, branches to the
service program to process the I/O transfer, and then returns to the
task it was originally performing.
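For contrast, a minimal sketch of the interrupt-driven version is shown below; the register address, the IRQ number, and the register_irq_handler() call are hypothetical placeholders for whatever the platform actually provides.

```c
#include <stdint.h>

#define DEV_DATA (*(volatile uint8_t *)0x4001)   /* hypothetical data register */

static volatile uint8_t  rx_buf[256];
static volatile unsigned rx_head;

/* Interrupt service routine: invoked by hardware only when the device raises
 * its interrupt-request line, so no busy-waiting is needed.                  */
void device_isr(void)
{
    rx_buf[rx_head++ % 256] = DEV_DATA;   /* one input + store per interrupt   */
    /* returning from the ISR resumes whatever task was interrupted            */
}

/* How the ISR is attached depends on the platform; the registration call and
 * IRQ number below are assumptions made for the sketch.                       */
extern void register_irq_handler(int irq, void (*handler)(void));

void init_device_io(void)
{
    register_irq_handler(5, device_isr);  /* CPU is free for other work after this */
}
```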
7. Drawbacks
• Both programmed I/O and interrupt-driven I/O require active intervention
by the processor to transfer data between memory and the I/O module.
• The I/O transfer rate is limited by the speed with which the processor
can test and service a device.
• The processor is tied up in managing an I/O transfer; a number of
instructions must be executed for each I/O transfer.
12. Direct Memory Access (DMA)
• The data transfer between a fast storage medium such as a magnetic disk and
the memory unit is limited by the speed of the CPU.
• DMA allows the peripherals to communicate directly with the memory using
the memory buses, removing the intervention of the CPU.
• During a DMA transfer the CPU is idle and has no control over the memory buses.
• The DMA controller takes over the buses to manage the transfer directly
between the I/O devices and the memory unit.
• The DMA controller is a special-purpose processor which controls data
transfer between memory and I/O, as it generates the address and control
signals for memory.
• DMA can work even while an instruction is being executed by the CPU.
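A hedged sketch of how a driver might program such a controller is given below; the register layout (address, count, control, status) is invented for illustration and does not correspond to a specific DMA chip.

```c
#include <stdint.h>

/* Hypothetical DMA controller registers (memory-mapped, addresses made up). */
#define DMA_ADDR   (*(volatile uint32_t *)0x5000)  /* starting memory address    */
#define DMA_COUNT  (*(volatile uint32_t *)0x5004)  /* number of words to move    */
#define DMA_CTRL   (*(volatile uint32_t *)0x5008)  /* bit 0 = start              */
#define DMA_STATUS (*(volatile uint32_t *)0x500C)  /* bit 0 = transfer complete  */

/* The CPU programs the controller once; the controller then generates the
 * memory addresses and control signals itself and raises an interrupt when
 * the whole block has been moved.                                             */
void start_dma_read(void *dest, uint32_t words)
{
    DMA_ADDR  = (uint32_t)(uintptr_t)dest;
    DMA_COUNT = words;
    DMA_CTRL  = 0x1;    /* start: device -> memory transfer                     */
    /* The CPU is now free; completion is signalled by an interrupt whose
       handler would read DMA_STATUS instead of polling in a loop.             */
}
```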
14. Burst Transfer
DMA returns the bus only after the complete block of data has been transferred.
Steps involved are:
1. Bus grant request time.
2. Transfer of the entire block of data at the transfer rate of the device,
because the device is usually slower than the speed at which data can be
transferred to the CPU.
3. Release of control of the bus back to the CPU.
Tx = time required to prepare the data
Ty = time required to transfer the data
% of time CPU is idle/blocked = Ty/(Tx+Ty) * 100
% of time CPU is busy = Tx/(Tx+Ty) * 100
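For example, with made-up values Tx = 90 µs and Ty = 10 µs, the CPU is blocked
for 10/(90+10) * 100 = 10% of the time and busy for 90/(90+10) * 100 = 90%.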
15. Cycle Stealing
• An alternative method in which the DMA controller transfers one word at a time,
after which it must return control of the buses to the CPU.
• The CPU delays its operation for only one memory cycle, allowing the direct
memory I/O transfer to “steal” one memory cycle.
• Steps involved are:
1. The device buffers the byte in its buffer.
2. Inform the CPU that the device has 1 byte to transfer (i.e. bus grant
request).
3. Transfer the byte (at system bus speed).
4. Release control of the bus back to the CPU.
% of time CPU is idle/blocked = Ty/Tx * 100
% of time CPU is busy = (Tx-Ty)/Tx * 100
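With the same made-up values (Tx = 90 µs, Ty = 10 µs), the CPU is blocked for
10/90 * 100 ≈ 11.1% of the time and busy for (90-10)/90 * 100 ≈ 88.9%.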
16. Interleaving DMA
• The DMA controller takes over the system bus when the microprocessor is
not using it.
• The CPU is not blocked due to DMA.
• This mode requires the maximum time for a data transfer.
• Time required for data transfer: interleaving > cycle stealing > burst mode
• Speed of data transfer: burst mode > cycle stealing > interleaving
17. Vectored Interrupts
• An interrupting device sends a special code to the processor to identify
itself.
• The code/address points to the starting address of the ISR for that
device.
• The size of the code typically varies from 4 to 8 bits.
• The processor can immediately start executing the ISR.
• This scheme of handling interrupts is called vectored interrupts.
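Conceptually, the vector supplied by the device indexes a table of ISR entry points. The C sketch below models that table as an array of function pointers; the table size, vector codes, and device names are assumptions made for the example.

```c
#include <stdio.h>

#define NUM_VECTORS 16                      /* a 4-bit vector code is assumed   */

typedef void (*isr_t)(void);

static void keyboard_isr(void) { puts("keyboard service routine"); }
static void disk_isr(void)     { puts("disk service routine");     }

/* Vector table: the code sent by the interrupting device selects the entry,
 * so the CPU can branch to the right ISR without asking each device in turn. */
static isr_t vector_table[NUM_VECTORS] = {
    [2] = keyboard_isr,                     /* vector codes are made up         */
    [5] = disk_isr,
};

void dispatch_interrupt(unsigned vector)
{
    if (vector < NUM_VECTORS && vector_table[vector] != NULL)
        vector_table[vector]();             /* immediate start of the ISR       */
}
```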
18. Interrupt Priority
• I/O devices are grouped in priority order.
• Priority levels are used, ranging from high-priority to low-priority
devices.
• The interrupt requests from high-priority devices are served
first.
• If two devices send an IRQ at the same time, the processor
resolves the conflict by priority and selects the device with the highest priority.
• The priorities can be fixed or programmable with
privileged instructions.
21. SOFTWARE METHOD – POLLING
• In this method, all interrupts are serviced by branching to the same service
program.
• This program then checks each device to see if it is the one generating the
interrupt.
• The order of checking is determined by the priority that has been set.
• The device having the highest priority is checked first, and then devices are
checked in descending order of priority.
• If a device is found to be generating the interrupt, another service
program is called which works specifically for that particular device.
• The major disadvantage of this method is that it is quite slow. To overcome
this, we can use a hardware solution, one of which involves connecting the
devices in series. This is called the daisy-chaining method.
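A minimal sketch of such a common, priority-ordered polling handler is shown below; the device names, status-check helpers, and priority order are hypothetical.

```c
#include <stdbool.h>

/* Hypothetical status checks: each returns true if that device is the one
 * requesting service. In hardware these would read device status registers.  */
extern bool disk_requesting(void);
extern bool network_requesting(void);
extern bool keyboard_requesting(void);

extern void disk_service(void);
extern void network_service(void);
extern void keyboard_service(void);

/* One common handler for all interrupts: devices are checked in descending
 * priority, so the highest-priority requester is always serviced first.      */
void common_interrupt_handler(void)
{
    if (disk_requesting())          disk_service();      /* highest priority   */
    else if (network_requesting())  network_service();
    else if (keyboard_requesting()) keyboard_service();  /* lowest priority    */
}
```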
24. Daisy Chaining – Priority Interrupt
• Also called serial chaining; it is used to handle priority interrupts.
• All devices are connected in order of their priority.
• The highest-priority device is directly connected to the CPU’s INTACK signal.
• INTACK sends a 1 if any request has been made.
• If a device has requested access:
the 1 is consumed and P0 = 0
else
the 1 is passed on and P0 = 1
• This disables all the other, lower-priority requests.
• The selected device then sends its vectored address to the CPU.
28. Cont…
• A DMA controller sends a Bus Request (BR) to the processor.
• If the processor is ready to grant access in response to the BR, it
sends a Bus Grant (BG1) signal to the first connected DMA controller,
informing it that it may use the bus once the bus is free.
• DMA controller 1 receives the acknowledgement from the
processor. If DMA controller 1 had requested the bus, it becomes the bus
master; otherwise it forwards the acknowledgement to the next
DMA controller over the BG2 signal.
• This mechanism of consuming the acknowledgement if it belongs to the
requesting device, or else forwarding it to the next device, is
called a daisy chain.
• The processor asserts a Bus Busy signal to prevent other devices from
accessing the bus.
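The pass-or-consume behaviour of the bus-grant chain can be modelled with a simple loop over the controllers, as in the sketch below; the controller count and data layout are illustrative only.

```c
#include <stdbool.h>
#include <stdio.h>

#define NUM_DMA 3

typedef struct {
    bool requested;    /* this controller has asserted BR                      */
} dma_ctrl_t;

/* The grant (BG) enters at controller 0 and is either consumed by the first
 * controller that actually requested the bus or forwarded to the next one.   */
int propagate_bus_grant(const dma_ctrl_t chain[], int n)
{
    for (int i = 0; i < n; i++) {
        if (chain[i].requested)
            return i;          /* becomes bus master; the grant stops here     */
        /* otherwise BG(i+1) is forwarded to the next controller               */
    }
    return -1;                 /* no controller claimed the grant              */
}

int main(void)
{
    dma_ctrl_t chain[NUM_DMA] = { {false}, {true}, {true} };
    printf("bus master: DMA controller %d\n", propagate_bus_grant(chain, NUM_DMA));
    return 0;
}
```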
32. Parallel Chaining – Priority Interrupt
• IST = 1 if any device has generated an interrupt
• IST = 0 if none of the devices has generated an interrupt
• IEN = 1 if the CPU is ready to handle the interrupt
• IEN = 0 if the CPU is not ready to handle the interrupt
• Both IST and IEN must be 1 for the CPU to handle the interrupt
• This drives the INTACK signal to 1
• Together, these three signals (IST, IEN and INTACK) enable the vector
address (VAD) onto the bus for the CPU
33. Synchronous Bus
Synchronous bus (e.g., processor-memory buses)
• All devices derive timing information from a common clock.
• The bus operates in equal time intervals (clock cycles).
Advantage:
• It requires very little control logic and can run very fast.
Disadvantages:
• Every device communicating on the bus must use the same clock
rate.
• To avoid clock skew, synchronous buses cannot be long if they are fast.
34. Asynchronous Bus
An asynchronous bus is not clocked, so a handshaking protocol is required
and additional control lines are needed.
Advantages:
• It can accommodate a wide range of devices and device speeds.
• It can be lengthened without worrying about clock skew or
synchronization problems.
Disadvantage: it is slower than a synchronous bus.
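As a rough model of the handshaking idea, the sketch below walks through one transfer as four handshake phases using two made-up control flags; a real asynchronous bus implements these as dedicated control lines between two independent devices.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Made-up handshake "lines" shared by the source and destination devices. */
static bool    master_ready;  /* source asserts: data on the bus is valid   */
static bool    slave_ack;     /* destination asserts: data has been taken   */
static uint8_t data_bus;

/* One complete asynchronous transfer, written as the four handshake phases.
 * Neither side relies on a clock; each phase simply responds to the other.  */
void transfer_one_byte(uint8_t value, uint8_t *dest)
{
    data_bus = value;
    master_ready = true;          /* phase 1: source signals "data valid"    */

    *dest = data_bus;
    slave_ack = true;             /* phase 2: destination signals "taken"    */

    master_ready = false;         /* phase 3: source withdraws its signal    */
    slave_ack = false;            /* phase 4: destination completes the cycle */
}

int main(void)
{
    uint8_t received;
    transfer_one_byte(0x42, &received);
    printf("received 0x%02X\n", received);
    return 0;
}
```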