BUS Arbitration in Computer Organization
Last Updated :
09 Apr, 2024
Introduction :
In a computer system, multiple devices, such as the CPU, memory, and I/O controllers, are connected to a common communication pathway, known as a bus. In order to transfer data between these devices, they need to have access to the bus. Bus arbitration is the process of resolving conflicts that arise when multiple devices attempt to access the bus at the same time.
When multiple devices try to use the bus simultaneously, it can lead to data corruption and system instability. To prevent this, a bus arbitration mechanism is used to ensure that only one device has access to the bus at any given time.
There are two broad approaches to bus arbitration: centralized and distributed. In centralized arbitration, a single device, known as the bus controller or arbiter, is responsible for managing access to the bus. In distributed arbitration, there is no central arbiter; all devices participate in selecting the next bus master, typically by placing their priority or identification number on shared arbitration lines and comparing it against competing requests.
Bus Arbitration refers to the process by which the current bus master relinquishes control of the bus and passes it to another requesting processor unit. The controller that has access to the bus at a given instant is known as the bus master.
A conflict may arise if multiple DMA controllers, other controllers, or processors try to access the common bus at the same time, but access can be granted to only one of them. Only one processor or controller can be the bus master at any point in time. To resolve such conflicts, a bus arbitration procedure is implemented to coordinate the activities of all devices requesting memory transfers. The selection of the bus master must take into account the needs of the various devices by establishing a priority system for gaining access to the bus. The bus arbiter decides which device becomes the current bus master.
Applications of bus arbitration in computer organization:
Shared Memory Systems: In shared memory systems, multiple devices need to access the memory to read or write data. Bus arbitration allows multiple devices to access the memory without interfering with each other.
Multi-Processor Systems: In multi-processor systems, multiple processors need to communicate with each other to share data and coordinate processing. Bus arbitration allows multiple processors to share access to the bus to communicate with each other and with shared memory.
Input/Output Devices: Input/Output devices such as keyboards, mice, and printers need to communicate with the processor to exchange data. Bus arbitration allows multiple input/output devices to share access to the bus to communicate with the processor and memory.
Real-time Systems: In real-time systems, data needs to be transferred between devices and memory within a specific time frame to ensure timely processing. Bus arbitration can help to ensure that data transfer occurs within a specific time frame by managing access to the bus.
Embedded Systems: In embedded systems, multiple devices such as sensors, actuators, and controllers need to communicate with the processor to control and monitor the system. Bus arbitration allows multiple devices to share access to the bus to communicate with the processor and memory.
There are two approaches to bus arbitration:
- Centralized bus arbitration -
A single bus arbiter performs the required arbitration.
- Distributed bus arbitration -
All devices participate in the selection of the next bus master.
Methods of Centralized BUS Arbitration:
There are three bus arbitration methods:
(i) Daisy Chaining method: It is a simple and cheaper method where all the bus masters use the same line for making bus requests. The bus grant signal serially propagates through each master until it encounters the first one that is requesting access to the bus. This master blocks the propagation of the bus grant signal, therefore any other requesting module will not receive the grant signal and hence cannot access the bus.
During any bus cycle, the bus master may be any device connected to the bus, such as the processor or a DMA controller unit.

Advantages:
- Simplicity and Scalability.
- The user can add more devices anywhere along the chain, up to a certain maximum value.
Disadvantages:
- The priority assigned to a device depends on its position in the chain, not on its actual urgency.
- Propagation delay arises in this method.
- If one device fails then the entire system will stop working.
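The positional-priority behaviour of daisy chaining can be illustrated with a minimal Python sketch. This is a hypothetical software model, not real arbiter hardware: the `daisy_chain_grant` function name and list-based request representation are assumptions for illustration. The grant signal starts at the device nearest the controller and stops at the first requester it reaches.

```python
# Minimal sketch of daisy-chain arbitration (hypothetical model).
# Devices are ordered by position in the chain; the grant signal
# propagates from position 0 and stops at the first requester.

def daisy_chain_grant(requests):
    """requests[i] is True if the device at position i wants the bus.
    Returns the position of the device that captures the grant, or None."""
    for position, requesting in enumerate(requests):
        if requesting:
            return position  # this device blocks further propagation
    return None  # no requester; the grant passes through unused

# Device 1 wins even though device 3 also requested: priority is positional.
print(daisy_chain_grant([False, True, False, True]))  # -> 1
```

Note how device 3 is starved as long as device 1 keeps requesting, which is exactly the position-dependent-priority disadvantage listed above.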
(ii) Polling or Rotating Priority method: In this method, the controller generates a unique address for each master; the number of address lines required depends on the number of masters connected in the system. The controller generates a sequence of master addresses. When a requesting master recognizes its own address, it activates the busy line and begins to use the bus.

Advantages -
- This method does not favor any particular device or processor.
- The method is also quite simple.
- If one device fails then the entire system will not stop working.
Disadvantages -
- Adding bus masters is difficult, as it increases the number of address lines in the circuit.
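The fairness of rotating priority comes from where the polling sequence restarts. A minimal Python sketch, assuming a hypothetical `poll_next_master` helper: the controller resumes polling at the address after the most recent winner, so no master is permanently favored.

```python
# Minimal sketch of polling / rotating-priority arbitration
# (hypothetical model). The controller polls master addresses in a
# circular sequence starting just after the last winner; the first
# requesting master whose address comes up takes the bus.

def poll_next_master(requests, last_winner):
    """requests[i] is True if master i wants the bus.
    Returns the next master to be granted the bus, or None."""
    n = len(requests)
    for offset in range(1, n + 1):
        candidate = (last_winner + offset) % n
        if requests[candidate]:
            return candidate  # this master asserts busy and uses the bus
    return None

# After master 0 used the bus, master 2 is polled before master 0 again.
print(poll_next_master([True, False, True, False], last_winner=0))  # -> 2
```

Because the starting point rotates, a continuously requesting master cannot starve the others, which matches the "does not favor any particular device" advantage above.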
(iii) Fixed priority or Independent Request method -
In this method, each master has a separate pair of bus request and bus grant lines, and each pair has a priority assigned to it.
The built-in priority decoder within the controller selects the highest priority request and asserts the corresponding bus grant signal.

Advantages -
- This method generates a fast response.
Disadvantages -
- Hardware cost is high, as a large number of control lines is required (one request/grant pair per master).
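The priority decoder at the heart of the independent request method can be sketched in a few lines of Python. This is an illustrative model, not hardware: the `priority_encoder` name and list representation of the request/grant lines are assumptions. Because every master has its own dedicated line, the arbiter resolves all requests in a single step, which is why this method is fast.

```python
# Minimal sketch of a fixed-priority (independent request) arbiter
# (hypothetical model). Each master drives its own request line, and
# the priority encoder asserts exactly one grant line: the one for
# the highest-priority (here, lowest-numbered) active request.

def priority_encoder(request_lines):
    """request_lines[i] is True if master i is requesting the bus.
    Returns the grant lines, with at most one asserted."""
    grants = [False] * len(request_lines)
    for i, req in enumerate(request_lines):
        if req:
            grants[i] = True  # assert only this master's grant line
            break
    return grants

# Masters 1 and 3 request; master 1 (higher priority) gets the grant.
print(priority_encoder([False, True, False, True]))  # -> [False, True, False, False]
```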
Distributed BUS Arbitration :
In this approach, all devices participate in the selection of the next bus master. Each device on the bus is assigned a 4-bit identification number, and the priority of the device is determined by this ID: when devices compete for the bus, the device with the highest ID wins.
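A minimal Python sketch of this self-selection idea, under the assumption (not stated in the original) that each competing device asserts the arbitration line corresponding to its 4-bit ID and then backs off if it sees a higher line asserted; the `distributed_arbitrate` helper name is hypothetical.

```python
# Minimal sketch of distributed arbitration (hypothetical model).
# Each competing device asserts the shared arbitration line matching
# its 4-bit ID, then checks whether any higher-numbered line is also
# asserted; if so, it backs off. No central arbiter is involved.

def distributed_arbitrate(requesting_ids):
    """requesting_ids: set of 4-bit device IDs currently competing.
    Returns the winning ID, or None if no device is requesting."""
    lines = [False] * 16
    for dev_id in requesting_ids:   # each device drives its own line
        lines[dev_id] = True
    winners = [dev_id for dev_id in requesting_ids
               if not any(lines[dev_id + 1:])]  # no higher line asserted
    return winners[0] if winners else None

# Devices 3, 5, and 12 compete; device 12 (highest ID) takes the bus.
print(distributed_arbitrate({3, 5, 12}))  # -> 12
```

Every device reaches the same conclusion independently by observing the shared lines, which is what removes the need for a central arbiter (and the single point of failure that comes with one).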
Uses of BUS Arbitration in Computer Organization :
Bus arbitration is a critical process in computer organization that has several uses and benefits, including:
- Efficient use of system resources: By regulating access to the bus, bus arbitration ensures that each device has fair access to system resources, preventing any single device from monopolizing the bus and causing system slowdowns or crashes.
- Minimizing data corruption: Bus arbitration helps prevent data corruption by ensuring that only one device has access to the bus at a time, which minimizes the risk of multiple devices writing to the same location in memory simultaneously.
- Support for multiple devices: Bus arbitration enables multiple devices to share a common communication pathway, which is essential for modern computer systems with multiple peripherals, such as printers, scanners, and external storage devices.
- Real-time system support: In real-time systems, bus arbitration is essential to ensure that high-priority tasks are executed quickly and efficiently. By prioritizing access to the bus, bus arbitration can ensure that critical tasks are given the resources they need to execute in a timely manner.
- Improved system stability: By preventing conflicts between devices, bus arbitration helps to improve system stability and reliability. This is especially important in mission-critical systems where downtime or data corruption could have severe consequences.
Issues of BUS Arbitration in Computer Organization :
Despite these benefits, bus arbitration also introduces several issues and trade-offs:
- Arbitration overhead: the request-grant handshake adds latency to every bus transaction, and in the daisy-chaining method the grant signal suffers propagation delay as it passes through each device in the chain.
- Starvation of low-priority devices: in fixed-priority schemes, a low-priority device may wait indefinitely if higher-priority devices keep requesting the bus.
- Single point of failure: in centralized arbitration, failure of the bus controller can halt the whole system, and in daisy chaining the failure of any one device in the chain breaks grant propagation for the devices behind it.
- Hardware cost and scalability: the independent request method requires a separate pair of request and grant lines for every master, and the polling method requires additional address lines as more masters are added.
- Inflexible priorities: in daisy chaining, a device's priority is fixed by its physical position in the chain and cannot easily be changed to reflect actual urgency.