Packet-Switching Networks
Packet-Switching Networks Network Services and Internal Network Operation Packet Network Topology Datagrams and Virtual Circuits Routing in Packet Networks Shortest Path Routing ATM Networks Traffic Management
Packet-Switching Networks Network Services and Internal Network Operation
Network Layer Network Layer: the most complex layer Requires the coordinated actions of multiple, geographically distributed network elements (switches & routers) Must be able to deal with very large scales Billions of users (people & communicating devices) Biggest Challenges Addressing:  where should information be directed to? Routing:  what path should be used to get information there?
Packet Switching Transfer of information as payload in data packets Packets undergo random delays & possible loss Different applications impose differing requirements on the transfer of information t 0 t 1 Network
Network Service Network layer can offer a variety of services to transport layer Connection-oriented service or connectionless service Best-effort or delay/loss guarantees (Figure: network service seen by the transport layers at end systems α and β; messages become segments, which cross network-layer entities at intermediate nodes.)
Network Service vs. Operation Network Service Connectionless Datagram Transfer Connection-Oriented Reliable and possibly constant bit rate transfer Internal Network Operation Connectionless IP Connection-Oriented Telephone connection ATM Various combinations are possible Connection-oriented service over Connectionless operation Connectionless service over Connection-Oriented operation Context & requirements determine what makes sense
Complexity at the Edge or in the Core? (Figure: end systems α and β communicating across a network of nodes A, B, C over a shared medium. Legend: 1 = physical layer entity, 2 = data link layer entity, 3 = network layer entity, 4 = transport layer entity.)
The End-to-End Argument for System Design An end-to-end function is best implemented at a higher level than at a lower level End-to-end service requires all intermediate components to work properly Higher-level better positioned to ensure correct operation Example:  stream transfer service Establishing an explicit connection for each stream across network requires all network elements (NEs) to be aware of connection;  All NEs have to be involved in re-establishment of connections in case of network fault In connectionless network operation, NEs do not deal with each explicit connection and hence are much simpler in design
Network Layer Functions Essential Routing :  mechanisms for determining the set of best paths for routing packets requires the collaboration of network elements Forwarding :  transfer of packets from NE inputs to outputs Priority & Scheduling :  determining order of packet transmission in each NE Optional:  congestion control, segmentation & reassembly, security
Chapter 7 Packet-Switching Networks Packet Network Topology
End-to-End Packet Network Packet networks very different than telephone networks Individual packet streams are highly bursty  Statistical multiplexing is used to concentrate streams User demand can undergo dramatic change Peer-to-peer applications stimulated huge growth in traffic volumes Internet structure highly decentralized Paths traversed by packets can go through many networks controlled by different organizations No single entity responsible for end-to-end service
Access Multiplexing Packet traffic from users multiplexed at access to network into aggregated streams DSL traffic multiplexed at DSL Access Mux Cable modem traffic multiplexed at Cable Modem Termination System Access MUX To packet network
Oversubscription Access Multiplexer N subscribers connected @ c bps to mux Each subscriber active r/c of time Mux has C = nc bps to network Oversubscription rate: N/n Find n so that at most 1% overflow probability Feasible oversubscription rate increases with size:
N    r/c   n   N/n
10   0.01  1   10    (10 extremely lightly loaded users)
10   0.05  3   3.3   (10 very lightly loaded users)
10   0.1   4   2.5   (10 lightly loaded users)
20   0.1   6   3.3   (20 lightly loaded users)
40   0.1   9   4.4   (40 lightly loaded users)
100  0.1   18  5.5   (100 lightly loaded users)
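The 1% criterion in the table can be checked directly: if subscribers are assumed independent and each active with probability p = r/c, the number simultaneously active is binomial, and n is the smallest trunk count whose overflow (tail) probability is at most 1%. A minimal sketch (the function name and the independence assumption are mine):

```python
from math import comb

def min_trunks(N, p, overflow=0.01):
    """Smallest n such that P[more than n of N subscribers active] <= overflow.
    Assumes independent subscribers, each active with probability p = r/c."""
    for n in range(N + 1):
        tail = sum(comb(N, k) * p**k * (1 - p)**(N - k) for k in range(n + 1, N + 1))
        if tail <= overflow:
            return n
    return N

for N, p in [(10, 0.01), (10, 0.05), (10, 0.1), (20, 0.1), (40, 0.1), (100, 0.1)]:
    n = min_trunks(N, p)
    print(f"N={N:3d} p={p:.2f} -> n={n:2d}, oversubscription N/n = {N/n:.1f}")
```

Running this reproduces the table rows, e.g. N=10, p=0.01 gives n=1 (overflow probability ≈ 0.4%), and N=100, p=0.1 gives n=18.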
Home LANs Home Router LAN Access using Ethernet or WiFi (IEEE 802.11) Private IP addresses in Home (192.168.0.x) using Network Address Translation (NAT) Single global IP address from ISP issued using Dynamic Host Configuration Protocol (DHCP) Home Router To packet network WiFi Ethernet
LAN Concentration LAN hubs and switches in the access network also aggregate packet streams that flow into switches and routers
Campus Network Only outgoing packets leave LAN through router High-speed campus backbone net connects dept routers Servers have redundant connectivity to backbone (Figure: departmental servers and LAN switches connect through routers R to the campus backbone; organization servers and a gateway router connect the backbone to the Internet or wide area network.)
Connecting to Internet Service Provider Interdomain level Intradomain level Autonomous system or domain Border routers Border routers Internet service provider s s s LAN Campus Network network administered by single organization
Internet Backbone Network Access Points:  set up during original commercialization of Internet to facilitate exchange of traffic Private Peering Points: two-party inter-ISP agreements to exchange traffic National Service Provider A National Service Provider B National Service Provider C NAP NAP Private peering
(Figure: (a) NAPs interconnecting national service providers A, B, and C, with private peering shown separately; (b) inside a NAP, a LAN with a route server interconnects border routers R_A, R_B, R_C.)
Key Role of Routing How to get packet from here to there? Decentralized nature of Internet makes routing a major challenge Interior gateway protocols (IGPs) are used to determine routes within a domain  Exterior gateway protocols (EGPs) are used to determine routes across domains Routes must be consistent & produce stable flows Scalability required to accommodate growth Hierarchical structure of IP addresses essential to keeping size of routing tables manageable
Chapter 7 Packet-Switching Networks Datagrams and Virtual Circuits
The Switching Function Dynamic interconnection of inputs to outputs Enables dynamic sharing of transmission resource Two fundamental approaches: Connectionless Connection-Oriented:  Call setup control, Connection control Backbone Network Access Network Switch
Packet Switching Network Packet switching network Transfers packets between users Transmission lines + packet switches (routers) Origin in message switching Two modes of operation: Connectionless Virtual Circuit Packet switch Network Transmission line User
Connectionless Message Switching Message switching invented for telegraphy Entire messages multiplexed onto shared lines, stored & forwarded Headers for source & destination addresses Routing at message switches
Message Switching Delay Minimum delay = 3τ + 3T over source → switch 1 → switch 2 → destination (τ = propagation delay per hop, T = message transmission time) Additional queueing delays possible at each link
Long Messages vs. Packets How many bits need to be transmitted to deliver a 1-Mbit message over two hops, each with BER p = 10⁻⁶? Approach 1: send the 1 Mbit message whole. Probability the message arrives correctly is (1 − p)^(10^6) ≈ e⁻¹ ≈ 1/3, so on average it takes about 3 transmissions/hop; total # bits transmitted ≈ 6 Mbits. Approach 2: send 10 100-kbit packets. Probability a packet arrives correctly is (1 − p)^(10^5) ≈ e⁻⁰·¹ ≈ 0.9, so on average it takes about 1.1 transmissions/hop; total # bits transmitted ≈ 2.2 Mbits.
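A quick numeric check of these figures, assuming independent bit errors and a geometric retransmission model over the 2-hop path of the figure:

```python
# Expected transmissions per hop = 1 / P[no bit errors in the unit sent].
p = 1e-6  # bit error rate per hop

for bits, label in [(10**6, "1 Mbit message"), (10**5, "100 kbit packet")]:
    p_ok = (1 - p) ** bits  # ~ exp(-p * bits)
    print(f"{label}: P[ok] = {p_ok:.3f}, ~{1 / p_ok:.2f} transmissions/hop")

# Message: ~2.7 transmissions/hop * 1 Mbit * 2 hops ~ 5.4 Mbits (slide rounds to 6).
# Packets: ~1.105 transmissions/hop * 1 Mbit * 2 hops ~ 2.2 Mbits.
```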
Packet Switching - Datagram Messages broken into smaller units (packets) Source & destination addresses in packet header Connectionless, packets routed independently (datagram) Packets may arrive out of order Pipelining of packets across network can reduce delay, increase throughput Lower delay than message switching, suitable for interactive traffic
Packet Switching Delay Assume three packets corresponding to one message traverse same path Minimum delay = 3τ + 5(T/3) (single path assumed) Additional queueing delays possible at each link Packet pipelining enables message to arrive sooner
Delay for k-Packet Message over L Hops Over 3 hops (source → switch 1 → switch 2 → destination): first bit released at 3τ + 3(T/3), last bit released at 3τ + 5(T/3). Over L hops: first bit released at Lτ + LP, last bit released at Lτ + LP + (k − 1)P, where T = kP and P is the packet transmission time.
Routing Tables in Datagram Networks Route determined by table lookup Routing decision involves finding next hop in route to given destination Routing table has an entry for each destination specifying output port that leads to next hop Size of table becomes impractical for very large number of destinations (Example table: destination 1345 → output port 12; 2458 → 7; 0785 → 6; 1566 → 12.)
Example:  Internet Routing Internet protocol uses datagram packet switching across networks Networks are treated as data links Hosts have two-part IP address: Network address + Host address Routers do table lookup on network address This reduces size of routing table In addition, network addresses are assigned so that they can also be aggregated Discussed as CIDR in Chapter 8
Packet Switching – Virtual Circuit Call set-up phase sets up pointers in a fixed path along network All packets for a connection follow the same path Abbreviated header identifies connection on each link Packets queue for transmission Variable bit rates possible, negotiated during call set-up Delays variable, cannot be less than circuit switching
Connection Setup Signaling messages propagate as route is selected Signaling messages identify connection and setup tables in switches Typically a connection is identified by a local tag, Virtual Circuit Identifier (VCI) Each switch only needs to know how to relate an incoming tag in one input to an outgoing tag in the corresponding output  Once tables are setup, packets can flow along path SW 1 SW 2 SW  n Connect request Connect request Connect request Connect confirm Connect confirm Connect confirm …
Connection Setup Delay Connection setup delay is incurred before any packet can be transferred Delay is acceptable for sustained transfer of large number of packets This delay may be unacceptably high if only a few packets are being transferred (Figure: connect request propagates hop by hop, connect confirm returns, packets 1–3 then flow, followed by release.)
Virtual Circuit Forwarding Tables Each input port of packet switch has a forwarding table Lookup entry for VCI of incoming packet Determine output port (next hop) and insert VCI for next link Very high speeds are possible Table can also include priority or other information about how packet should be treated (Example: input VCI 15 → output port 15, VCI 58; input VCI 13 → port 13, VCI 7; input VCI 27 → port 12, VCI 44; input VCI 23 → port 16, VCI 34.)
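A sketch of one input port's forwarding table as a dictionary; the entries mirror the example table on this slide, and the function name is mine:

```python
# Per-input-port VC forwarding: incoming VCI -> (output port, outgoing VCI).
forwarding = {
    15: (15, 58),
    13: (13, 7),
    27: (12, 44),
    23: (16, 34),
}

def switch_cell(in_vci):
    out_port, out_vci = forwarding[in_vci]  # single O(1) lookup per packet
    return out_port, out_vci                # header rewritten with out_vci

print(switch_cell(27))  # (12, 44)
```

The locally defined tag keeps the table small: it only needs entries for connections currently set up through this port, not for every possible destination.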
Cut-Through Switching Some networks perform error checking on header only, so packet can be forwarded as soon as header is received & processed Delays reduced further with cut-through switching: minimum delay = 3τ + T
Message vs. Packet Minimum Delay Message: Lτ + LT = Lτ + (L − 1)T + T Packet: Lτ + LP + (k − 1)P = Lτ + (L − 1)P + T Cut-through packet (immediate forwarding after header): Lτ + T The above neglect header processing delays
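These three formulas can be compared numerically; a small sketch with illustrative parameters of my choosing (3 hops, 1 ms propagation per hop, a 0.3 s message split into 3 packets):

```python
def message_delay(L, tau, T):
    return L * tau + L * T            # store-and-forward of the whole message

def packet_delay(L, tau, T, k):
    P = T / k                         # per-packet transmission time
    return L * tau + L * P + (k - 1) * P   # = L*tau + (L-1)*P + T

def cut_through_delay(L, tau, T):
    return L * tau + T                # forward as soon as header is processed

L, tau, T, k = 3, 0.001, 0.3, 3
print(message_delay(L, tau, T))       # 0.903 s
print(packet_delay(L, tau, T, k))     # 0.503 s
print(cut_through_delay(L, tau, T))   # 0.303 s
```

Pipelining removes (L − 1)(T − P) of the store-and-forward delay; cut-through removes the remaining (L − 1)P as well.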
Example:  ATM Networks All information mapped into short fixed-length packets called  cells Connections set up across network Virtual circuits established across networks Tables setup at ATM switches Several types of network services offered Constant bit rate connections Variable bit rate connections
Chapter 7 Packet-Switching Networks Datagrams and Virtual Circuits Structure of a Packet Switch
Packet Switch:  Intersection where Traffic Flows Meet 1 2 N 1 2 N       Inputs contain multiplexed flows from access muxs & other packet switches Flows demultiplexed at input, routed and/or forwarded to output ports Packets buffered, prioritized, and multiplexed on output lines •  • • •  • • •  • •
Generic Packet Switch  “ Unfolded” View of Switch Ingress Line Cards Header processing Demultiplexing Routing in large switches Controller Routing in small switches Signalling & resource allocation Interconnection Fabric Transfer packets between line cards Egress Line Cards Scheduling & priority Multiplexing Controller 1 2 3 N Line card Line card Line card Line card Interconnection fabric Line card Line card Line card Line card 1 2 3 N Input ports Output ports Data path Control path (a) … … … …
Line Cards Folded View 1 circuit board is ingress/egress line card Physical layer processing Data link layer processing Network header processing Physical layer across fabric + framing Interconnection fabric Transceiver Transceiver Framer Framer Network processor Backplane transceivers To physical ports To switch fabric To other line cards
Shared Memory Packet Switch 1 2 3 N 1 2 3 N … Shared Memory Queue Control Ingress  Processing Connection Control … Output Buffering Small switches can be built by reading/writing into shared memory
Crossbar Switches 1 2 3 N 1 2 3 N Inputs Outputs (a) Input buffering 3 8 3 … … 1 2 3 N 1 2 3 N Inputs Outputs (b) Output buffering … … Large switches built from crossbar & multistage space switches Requires centralized controller/scheduler (who sends to whom when) Can buffer at input, output, or both (performance vs complexity)
Self-Routing Switches Self-routing switches do not require a controller The output port number determines the route Example: output 101 → lower port at stage 1, upper port at stage 2, lower port at stage 3 (Figure: 8×8 three-stage banyan fabric, inputs and outputs 0–7.)
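The routing rule can be stated compactly: at stage k, each 2×2 element looks only at bit k of the destination port. A sketch, assuming the convention (consistent with the 101 example above) that bit 1 selects the lower output and bit 0 the upper:

```python
def self_route(dest_port, stages=3):
    """Exit taken at each 2x2 element of a banyan fabric.
    Bit k of the destination port (MSB first) steers stage k: 0=upper, 1=lower."""
    bits = format(dest_port, f"0{stages}b")
    return ["lower" if b == "1" else "upper" for b in bits]

print(self_route(0b101))  # ['lower', 'upper', 'lower'] -- the slide's example
```

No central scheduler is consulted; each element decides locally from one header bit, which is what makes very high-speed fabrics possible.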
Chapter 7 Packet-Switching Networks Routing in Packet Networks
Routing in Packet Networks Three possible (loopfree) routes from 1 to 6: 1-3-6, 1-4-5-6, 1-2-5-6 Which is “best”? Min delay?  Min hop? Max bandwidth? Min cost?  Max reliability? 1 2 3 4 5 6 Node  (switch or router)
Creating the Routing Tables Need information on state of links Link up/down; congested; delay or other metrics Need to distribute link state information using a routing protocol What information is exchanged? How often? Exchange with neighbors; Broadcast or flood Need to compute routes based on information Single metric;  multiple metrics Single route; alternate routes
Routing Algorithm Requirements  Responsiveness to changes Topology or bandwidth changes, congestion  Rapid convergence of routers to consistent set of routes Freedom from persistent loops Optimality Resource utilization, path length  Robustness Continues working under high load, congestion, faults, equipment failures, incorrect implementations Simplicity Efficient software implementation, reasonable processing load
Centralized vs Distributed Routing Centralized Routing All routes determined by a central node All state information sent to central node Problems adapting to frequent topology changes Does not scale Distributed Routing Routes determined by routers using distributed algorithm State information exchanged by routers Adapts to topology and other changes Better scalability
Static vs Dynamic Routing Static Routing Set up manually, do not change;  requires administration Works when traffic predictable & network is simple Used to override some routes set by dynamic algorithm Used to provide default router Dynamic Routing Adapt to changes in network conditions Automated Calculates routes based on received updated network state information
Routing in Virtual-Circuit Packet Networks Route determined during connection setup Tables in switches implement forwarding that realizes selected route 1 2 3 4 5 6 A B C D 1 5 2 3 7 1 8 5 4 2 3 6 5 2 Switch or router Host VCI
Routing Tables in VC Packet Networks Example: VCI from A to D: (A, VCI 5) → (3, VCI 3) → (4, VCI 4) → (5, VCI 5) → (D, VCI 2). Each entry maps an incoming (node, VCI) pair to an outgoing (node, VCI) pair:
Node 1: (A,1)→(3,2); (A,5)→(3,3); (3,2)→(A,1); (3,3)→(A,5)
Node 2: (C,6)→(4,3); (4,3)→(C,6)
Node 3: (1,2)→(6,7); (1,3)→(4,4); (4,2)→(6,1); (6,7)→(1,2); (6,1)→(4,2); (4,4)→(1,3)
Node 4: (2,3)→(3,2); (3,4)→(5,5); (3,2)→(2,3); (5,5)→(3,4)
Node 5: (4,5)→(D,2); (D,2)→(4,5)
Node 6: (3,7)→(B,8); (3,1)→(B,5); (B,5)→(3,1); (B,8)→(3,7)
Routing Tables in Datagram Packet Networks (destination → next node):
Node 1: 2→2, 3→3, 4→4, 5→2, 6→3
Node 2: 1→1, 3→1, 4→4, 5→5, 6→5
Node 3: 1→1, 2→4, 4→4, 5→6, 6→6
Node 4: 1→1, 2→2, 3→3, 5→5, 6→3
Node 5: 1→4, 2→2, 3→4, 4→4, 6→6
Node 6: 1→3, 2→5, 3→3, 4→3, 5→5
Non-Hierarchical Addresses and Routing No relationship between addresses & routing proximity: 16 hosts with addresses 0000–1111 scattered across 4 sites Routing tables require 16 entries each, one per host (e.g. at R1: 0000→1, 0111→1, 1010→1, …; at R2: 0001→4, 0100→4, 1011→4, …)
Hierarchical Addresses and Routing Prefix indicates network where host is attached: each site holds a contiguous block (0000–0011, 0100–0111, 1000–1011, 1100–1111) Routing tables require 4 entries each, one per prefix (at R1: 00→1, 01→3, 10→2, 11→3; at R2: 00→3, 01→4, 10→3, 11→5)
Flat vs Hierarchical Routing Flat Routing All routers are peers Does not scale Hierarchical Routing Partitioning:  Domains, autonomous systems, areas...  Some routers part of routing backbone Some routers only communicate within an area Efficient because it matches typical traffic flow patterns Scales
Specialized Routing Flooding Useful in starting up network Useful in propagating information to all nodes Deflection Routing Fixed, preset routing procedure No route synthesis
Flooding Send a packet to all nodes in a network No routing tables available Need to broadcast packet to all nodes (e.g. to propagate link state information) Approach Send packet on all ports except one where it arrived Exponential growth in packet transmissions
Flooding is initiated from Node 1: Hop 1 transmissions
Flooding is initiated from Node 1: Hop 2 transmissions
Flooding is initiated from Node 1: Hop 3 transmissions
Limited Flooding Time-to-Live field in each packet limits number of hops to certain diameter Each switch adds its ID before flooding; discards repeats Source puts sequence number in each packet; switches record source address and sequence number and discard repeats
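A sketch combining two of these limits, TTL and (source, sequence-number) duplicate suppression. The Node class and the edge list (read off the earlier 6-node figure) are my own scaffolding:

```python
class Node:
    """Flooding node with a TTL limit and (src, seq) duplicate suppression."""
    def __init__(self, name):
        self.name = name
        self.neighbors = []      # adjacent Node objects
        self.seen = set()        # (src, seq) pairs already forwarded

    def receive(self, src, seq, ttl, from_node=None):
        if ttl <= 0 or (src, seq) in self.seen:
            return               # hop limit reached, or a repeat: discard
        self.seen.add((src, seq))
        for nb in self.neighbors:
            if nb is not from_node:                 # all ports except arrival port
                nb.receive(src, seq, ttl - 1, from_node=self)

nodes = {k: Node(k) for k in range(1, 7)}
for a, b in [(1, 2), (1, 3), (1, 4), (2, 5), (3, 4), (3, 6), (4, 5), (5, 6)]:
    nodes[a].neighbors.append(nodes[b]); nodes[b].neighbors.append(nodes[a])
nodes[1].receive(src=1, seq=0, ttl=4)
print(sorted(n.name for n in nodes.values() if (1, 0) in n.seen))  # [1..6]
```

Without the `seen` set, the transmission count grows exponentially with hop count, as the previous slides illustrate.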
Deflection Routing Network nodes forward packets to preferred port If preferred port busy, deflect packet to another port Works well with regular topologies Manhattan street network Rectangular array of nodes Nodes designated (i,j)  Rows alternate as one-way streets Columns alternate as one-way avenues Bufferless operation is possible Proposed for optical packet networks All-optical buffering currently not viable
Tunnel from last column to first column or vice versa (Figure: 4×4 Manhattan street network with nodes (0,0) through (3,3).)
Example: Node (0,2) → (1,0); the preferred link is busy, so the packet is deflected along an alternate port.
Chapter 7 Packet-Switching Networks Shortest Path Routing
Shortest Paths & Routing Many possible paths connect any given source to any given destination Routing involves the selection of the path to be used to accomplish a given transfer Typically it is possible to attach a cost or distance to a link connecting two nodes Routing can then be posed as a shortest path problem
Routing Metrics Means for measuring desirability of a path Path Length = sum of costs or distances Possible metrics Hop count:  rough measure of resources used Reliability:  link availability; BER Delay:  sum of delays along path;  complex & dynamic Bandwidth:  “available capacity” in a path Load:  Link & router utilization along path Cost:  $$$
Shortest Path Approaches Distance Vector Protocols Neighbors exchange list of distances to destinations Best next-hop determined for each destination Ford-Fulkerson (distributed) shortest path algorithm Link State Protocols Link state information flooded to all routers Routers have complete topology information Shortest path (& hence next hop) calculated  Dijkstra (centralized) shortest path algorithm
Distance Vector Do you know the way to San Jose? (Figure: signposts at neighboring nodes advertise distances: San Jose 392, San Jose 596, San Jose 294, San Jose 250.)
Distance Vector Local signpost gives direction and distance Routing table: for each destination, list next node and distance Table synthesis: neighbors exchange table entries; determine current best next hop; inform neighbors periodically and after changes
Shortest Path to SJ Focus on how nodes find their shortest path to a given destination node, i.e. SJ If D_i is the shortest distance to SJ from i, and j is a neighbor on the shortest path, then D_i = C_ij + D_j
But we don’t know the shortest paths Node i only has local info from its neighbors j, j', j'': link costs C_ij, C_ij', C_ij'' and reported distances D_j, D_j', D_j'' Pick the current shortest path: D_i = min over neighbors j of (C_ij + D_j)
Why Distance Vector Works SJ sends accurate info Hop-1 nodes calculate current (next hop, dist) & send to neighbors Accurate info about SJ ripples across network, one ring of nodes (1 hop, 2 hops, 3 hops from SJ) at a time; shortest paths converge
Bellman-Ford Algorithm Consider computations for one destination d Initialization: each node table has 1 row for destination d; distance of node d to itself is zero: D_d = 0; distance of other node j to d is infinite: D_j = ∞ for j ≠ d; next-hop node n_j = −1 to indicate not yet defined for j ≠ d Send step: send new distance vector to immediate neighbors across local link Receive step: at node i, find the neighbor that gives the minimum distance to d: min over neighbors j of {C_ij + D_j}; replace old (n_i, D_i) by new (n_i*, D_i*) if a new next node or distance is found Go to send step
Bellman-Ford Algorithm Now consider parallel computations for all destinations d Initialization: each node has 1 row for each destination d; distance of node d to itself is zero: D_d(d) = 0; distance of other node j to d is infinite: D_j(d) = ∞ for j ≠ d; next node n_j = −1 since not yet defined Send step: send new distance vector to immediate neighbors across local link Receive step: for each destination d, find the neighbor that gives the minimum distance to d: min over neighbors j of {C_ij + D_j(d)}; replace old (n_i, D_i(d)) by new (n_i*, D_i*(d)) if a new next node or distance is found Go to send step
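A sketch of the iteration for one destination. The edge costs are my reading of the figure used in the following slides (destination = node 6, San Jose); the final distances match the iteration table there:

```python
INF = float("inf")

def bellman_ford(nodes, cost, dest):
    """Iterative distance-vector computation toward one destination.
    cost[(i, j)] = cost of link i-j (stored in both orders)."""
    D = {i: (0 if i == dest else INF) for i in nodes}
    nxt = {i: None for i in nodes}
    for _ in range(len(nodes) - 1):          # converges within |V|-1 rounds
        for i in nodes:
            if i == dest:
                continue
            for j in nodes:
                c = cost.get((i, j))
                if c is not None and c + D[j] < D[i]:
                    D[i], nxt[i] = c + D[j], j
    return D, nxt

edges = {(1, 2): 3, (1, 3): 2, (2, 4): 1, (2, 5): 4,
         (3, 4): 2, (3, 6): 1, (4, 5): 3, (5, 6): 2}
cost = {**edges, **{(j, i): c for (i, j), c in edges.items()}}
D, nxt = bellman_ford(range(1, 7), cost, dest=6)
print(D)    # {1: 3, 2: 4, 3: 1, 4: 3, 5: 2, 6: 0}
print(nxt)  # next hops: 1->3, 2->4, 3->6, 4->3, 5->6
```

In the real protocol each node runs only its own row of this loop and learns the D_j values from its neighbors' periodic advertisements.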
Destination: San Jose (node 6). Each table entry at a node for destination SJ is a (next hop, distance) pair:
Iteration — Node 1, Node 2, Node 3, Node 4, Node 5:
Initial: (-1,∞) (-1,∞) (-1,∞) (-1,∞) (-1,∞)
Iteration 1: (-1,∞) (-1,∞) (6,1) (-1,∞) (6,2)   [D6 = 0, so D3 = D6 + 1, n3 = 6; D5 = D6 + 2, n5 = 6]
Iteration 2: (3,3) (5,6) (6,1) (3,3) (6,2)
Iteration 3: (3,3) (4,4) (6,1) (3,3) (6,2)
Now the link between node 3 and San Jose (node 6) fails, and a loop is created between nodes 3 and 4 while the bad news propagates:
Iteration — Node 1, Node 2, Node 3, Node 4, Node 5:
Initial: (3,3) (4,4) (6,1) (3,3) (6,2)
Iteration 1: (3,3) (4,4) (4,5) (3,3) (6,2)   [node 3 switches to node 4]
Iteration 2: (3,7) (4,4) (4,5) (5,5) (6,2)   [node 4 could have chosen 2 as next node because of tie]
Iteration 3: (3,7) (4,6) (4,7) (5,5) (6,2)   [node 2 could have chosen 5 as next node because of tie]
Iteration 4: (2,9) (4,6) (4,7) (5,5) (6,2)   [node 1 could have chosen 3 as next node because of tie]
Counting to Infinity Problem Nodes believe best path is through each other (destination is node 4; the link 3–4 breaks):
Update — Node 1, Node 2, Node 3:
Before break: (2,3) (3,2) (4,1)
After break: (2,3) (3,2) (2,3)
Update 1: (2,3) (3,4) (2,3)
Update 2: (2,5) (3,4) (2,5)
Update 3: (2,5) (3,6) (2,5)
Update 4: (2,7) (3,6) (2,7)
Update 5: (2,7) (3,8) (2,7)
…
Problem:  Bad News Travels Slowly Remedies Split Horizon Do not report route to a destination to the neighbor from which route was learned Poisoned Reverse Report route to a destination to the neighbor from which route was learned, but with infinite distance Breaks erroneous direct loops immediately Does not work on some indirect loops
Split Horizon with Poison Reverse Nodes believe best path is through each other (destination is node 4; the link 3–4 breaks):
Update — Node 1, Node 2, Node 3:
Before break: (2,3) (3,2) (4,1)
After break: (2,3) (3,2) (-1,∞) — node 2 advertises its route to 4 to node 3 as having distance infinity; node 3 finds there is no route to 4
Update 1: (2,3) (-1,∞) (-1,∞) — node 1 advertises its route to 4 to node 2 as having distance infinity; node 2 finds there is no route to 4
Update 2: (-1,∞) (-1,∞) (-1,∞) — node 1 finds there is no route to 4
Link-State Algorithm Basic idea: two step procedure Each source node gets a map of all nodes and link metrics (link state) of the entire network Find the shortest path on the map from the source node to all destination nodes Broadcast of link-state information: every node i in the network broadcasts to every other node the IDs of its neighbors (N_i = set of neighbors of i) and the distances to its neighbors { C_ij | j ∈ N_i } Flooding is a popular method of broadcasting packets
Dijkstra Algorithm:  Finding shortest paths in order Find shortest paths from source s to all other destinations The closest node to s is one hop away The 2nd closest node to s is one hop away from s or from the closest node The 3rd closest node to s is one hop away from s or from one of the two closest nodes, and so on
Dijkstra’s algorithm N: set of nodes for which shortest path already found Initialization (start with source node s): N = {s}, D_s = 0 ("s is distance zero from itself"), D_j = C_sj for all j ≠ s (distances of directly-connected neighbors; ∞ if j is not a neighbor) Step A (find next closest node i): find i ∉ N such that D_i = min D_j over j ∉ N; add i to N; if N contains all the nodes, stop Step B (update minimum costs): for each node j ∉ N, D_j = min(D_j, D_i + C_ij), the minimum distance from s to j through node i in N Go to Step A
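A compact sketch using a priority queue instead of the slide's linear scan for Step A (same result, faster lookup of the next closest node). The link costs below are my reading of the 6-node figure behind the next slide's table:

```python
import heapq

def dijkstra(adj, s):
    """Shortest distances from s; adj[u] = list of (neighbor, link cost)."""
    dist, prev = {s: 0}, {}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                        # stale queue entry; u already done
        for v, c in adj[u]:
            if d + c < dist.get(v, float("inf")):
                dist[v], prev[v] = d + c, u
                heapq.heappush(pq, (d + c, v))
    return dist, prev

edges = {(1, 2): 3, (1, 3): 2, (1, 4): 5, (2, 5): 4,
         (3, 4): 2, (3, 6): 1, (4, 5): 3, (5, 6): 2}
adj = {n: [] for n in range(1, 7)}
for (a, b), c in edges.items():
    adj[a].append((b, c)); adj[b].append((a, c))
print(dijkstra(adj, 1)[0])  # distances 1:0, 2:3, 3:2, 4:4, 5:5, 6:3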
Execution of Dijkstra’s algorithm (source node 1):
Iteration | N | D2 D3 D4 D5 D6
Initial | {1} | 3 2 5 ∞ ∞
1 | {1,3} | 3 2 4 ∞ 3
2 | {1,2,3} | 3 2 4 7 3
3 | {1,2,3,6} | 3 2 4 5 3
4 | {1,2,3,4,6} | 3 2 4 5 3
5 | {1,2,3,4,5,6} | 3 2 4 5 3
Shortest Paths in Dijkstra’s Algorithm (Figure: the shortest-path tree rooted at node 1 grows by one node at each iteration.)
Reaction to Failure If a link fails, Router sets link distance to infinity & floods the network with an update packet All routers immediately update their link database & recalculate their shortest paths Recovery very quick But watch out for old update messages  Add time stamp or sequence # to each update message Check whether each received update message is new If new, add it to database and broadcast If older, send update message on arriving link
Why is Link State Better? Fast, loopless convergence Support for precise metrics, and multiple metrics if necessary (throughput, delay, cost, reliability) Support for multiple paths to a destination algorithm can be modified to find best two paths
Source Routing Source host selects path that is to be followed by a packet Strict:  sequence of nodes in path inserted into header Loose:  subsequence of nodes in path specified Intermediate switches read next-hop address and remove address Source host needs link state information or access to a route server Source routing allows the host to control the paths that its information traverses in the network Potentially the means for customers to select what service providers they use
Example: Source host A inserts the route (1, 3, 6, B) into the packet header; node 1 forwards it with header (3, 6, B), node 3 with (6, B), node 6 with (B), and the packet is delivered to destination host B.
Chapter 7 Packet-Switching Networks ATM Networks
Asynchronous Transfer Mode (ATM) Packet multiplexing and switching Fixed-length packets: “cells” Connection-oriented Rich Quality of Service support Conceived as end-to-end Supporting wide range of services Real time voice and video Circuit emulation for digital transport Data traffic with bandwidth guarantees Detailed discussion in Chapter 9
ATM Networking End-to-end information transport using cells 53-byte cells provide low delay and fine multiplexing granularity Support for many services (video, packet voice, …) through the ATM Adaptation Layer
TDM vs. Packet Multiplexing:
                  | TDM                      | Packet
Variable bit rate | Multirate only           | Easily handled
Delay             | Low, fixed               | Variable
Burst traffic     | Inefficient              | Efficient
Processing        | Minimal, very high speed | Header & packet processing required*
* In mid-1980s, packet processing mainly in software and hence slow; by late 1990s, very high speed packet processing possible.
ATM:  Attributes of TDM & Packet Switching Packet structure gives flexibility & efficiency Synchronous slot transmission gives high speed & density (Figure: voice, data packets, and images multiplexed; TDM wastes bandwidth on empty slots, while ATM fills slots with labeled cells, each carrying a packet header.)
ATM Switching Switch carries out table translation and routing ATM switches can be implemented using shared memory, shared backplanes, or self-routing multi-stage fabrics (Figure: incoming cells for video, voice, and data flows have their VCIs translated and are routed to output ports.)
Virtual connections setup across network Connections identified by locally-defined tags ATM Header contains virtual connection information: 8-bit Virtual Path Identifier 16-bit Virtual Channel Identifier Powerful traffic grooming capabilities Multiple VCs can be bundled within a VP  Similar to tributaries with SONET, except variable bit rates possible ATM Virtual Connections Physical link Virtual paths Virtual channels
VPI/VCI switching & multiplexing Connections a, b, c bundled into a VP at switch 1 Crossconnect switches the VP without looking at VCIs VP unbundled at switch 2; VC switching thereafter VPI/VCI structure allows creation of virtual networks (Figure: ATM switches 1–4 and an ATM cross-connect carrying connections a–e over VPIs 1, 2, 3, 5.)
MPLS & ATM ATM initially touted as more scalable than packet switching; ATM envisioned speeds of 150–600 Mbps Advances in optical transmission proved ATM to be the less scalable: at 10 Gbps, segmentation & reassembly of messages & streams into 48-byte cell payloads is difficult & inefficient, and a header must be processed every 53 bytes vs. 500 bytes on average for packets Delay due to a 1250-byte packet at 10 Gbps = 1 μs; delay due to a 53-byte cell at 150 Mbps ≈ 3 μs MPLS (Chapter 10) uses tags to transfer packets across virtual circuits in the Internet
Chapter 7 Packet-Switching Networks Traffic Management  Packet Level Flow Level Flow-Aggregate Level
Traffic Management  Vehicular traffic management Traffic lights & signals control flow of traffic in city street system Objective is to maximize flow with tolerable delays Priority Services Police sirens Cavalcade for dignitaries Bus & High-usage lanes Trucks allowed only at night Packet traffic management Multiplexing & access mechanisms to control flow of packet traffic Objective is make efficient use of network resources & deliver QoS Priority Fault-recovery packets Real-time traffic Enterprise (high-revenue) traffic High bandwidth traffic
Time Scales & Granularities Packet Level Queueing & scheduling at multiplexing points Determines relative performance offered to packets over a short time scale (microseconds) Flow Level Management of traffic flows & resource allocation to ensure delivery of QoS (milliseconds to seconds) Matching traffic flows to resources available; congestion control Flow-Aggregate Level Routing of aggregate traffic flows across the network for efficient utilization of resources and meeting of service levels “ Traffic Engineering”, at scale of minutes to days
End-to-End QoS A packet traversing the network encounters delay and possible loss at various multiplexing points End-to-end performance is the accumulation of per-hop performances (Figure: packet buffers at multiplexing points 1, 2, …, N − 1, N along the path.)
Scheduling & QoS End-to-End QoS & Resource Control Buffer & bandwidth control  ->  Performance Admission control to regulate traffic level Scheduling Concepts fairness/isolation priority, aggregation,  Fair Queueing & Variations WFQ, PGPS Guaranteed Service  WFQ, Rate-control Packet Dropping aggregation, drop priorities
FIFO Queueing All packet flows share the same buffer Transmission Discipline:  First-In, First-Out Buffering Discipline:  Discard arriving packets if buffer is full (Alternative:  random discard; pushout head-of-line, i.e. oldest, packet) Packet buffer Transmission link Arriving packets Packet discard when full
FIFO Queueing Cannot provide differential QoS to different packet flows Different packet flows interact strongly Statistical delay guarantees via load control Restrict number of flows allowed (connection admission control) Difficult to determine performance delivered Finite buffer determines a maximum possible delay Buffer size determines loss probability But depends on arrival & packet length statistics Variation:  packet enqueueing based on queue thresholds some packet flows encounter blocking before others higher loss, lower delay
FIFO Queueing with Discard Priority Packet buffer Transmission link Arriving packets Packet discard when full Packet buffer Transmission link Arriving packets Class 1 discard when full Class 2 discard when threshold exceeded (a) (b)
HOL Priority Queueing High priority queue serviced until empty High priority queue has lower waiting time Buffers can be dimensioned for different loss probabilities Surge in high priority queue can cause low priority queue to saturate Transmission link Packet discard when full High-priority packets Low-priority packets Packet discard when full When high-priority queue empty
HOL Priority Features Provides differential QoS Pre-emptive priority: lower classes invisible Non-preemptive priority: lower classes impact higher classes through residual service times High-priority classes can hog all of the bandwidth & starve lower priority classes Need to provide some isolation between classes (Figure: delay vs. per-class load.)
Earliest Due Date Scheduling Queue in order of “due date” packets requiring low delay get earlier due date packets without delay get indefinite or very long due dates Sorted packet buffer Transmission link Arriving packets Packet discard when full Tagging unit
Each flow has its own logical queue:  prevents hogging; allows differential loss probabilities C bits/sec allocated equally among non-empty queues transmission rate = C / n(t),  where n(t)=# non-empty queues Idealized system assumes fluid flow from queues Implementation requires approximation:  simulate fluid system; sort packets according to completion time in ideal system Fair Queueing / Generalized Processor Sharing … … C  bits/second Transmission link Packet flow 1 Packet flow 2 Packet flow  n Approximated bit-level round robin service
Example: one-unit packets arrive at t = 0 in buffer 1 and buffer 2. Fluid-flow system: both packets served at rate 1/2; both complete service at t = 2. Packet-by-packet system: buffer 1's packet served first at rate 1, then buffer 2's packet served at rate 1 (buffer 2's packet waits one time unit).
Example: a one-unit packet in buffer 1 and a two-unit packet in buffer 2 arrive at t = 0. Fluid-flow system: both served at rate 1/2; buffer 1's packet completes at t = 2, after which buffer 2's packet is served at rate 1 and completes at t = 3. Packet-by-packet fair queueing: the packet with the earlier fluid completion time (buffer 1's) is served first at rate 1, then buffer 2's packet is served at rate 1; buffer 2's packet waits.
Example (weighted): buffer 2 has three times the weight of buffer 1, both packets one unit. Fluid-flow system: buffer 1's packet served at rate 1/4 and buffer 2's at rate 3/4; when buffer 2's packet completes, buffer 1's packet is served at rate 1. Packet-by-packet weighted fair queueing: buffer 2's packet served first at rate 1, then buffer 1's packet at rate 1 (buffer 1's packet waits).
Packetized GPS/WFQ Compute packet completion time in ideal system add tag to packet sort packet in queue according to tag serve according to HOL Sorted packet buffer Transmission link Arriving packets Packet discard when full Tagging unit
Bit-by-Bit Fair Queueing Assume n flows, n queues 1 round = 1 cycle serving all n queues If each queue gets 1 bit per cycle, then one round takes a number of bit-times equal to the number of active queues Round number = number of cycles of service that have been completed If packet arrives to idle queue: finishing time = round number + packet size in bits If packet arrives to active queue: finishing time = finishing time of last packet in queue + packet size
Differential Service: If a traffic flow is to receive twice as much bandwidth as a regular flow, then its packet completion time would be half (Figure: round number vs. time; number of rounds = number of bit transmission opportunities; a packet of length k bits that begins transmission at the current round completes k rounds later.)
Computing the Finishing Time F(i,k,t) = finish time of kth packet that arrives at time t to flow i P(i,k,t) = size of kth packet that arrives at time t to flow i R(t) = round number at time t Fair queueing (takes care of both idle and active cases): F(i,k,t) = max{F(i,k−1,t), R(t)} + P(i,k,t) Weighted fair queueing: F(i,k,t) = max{F(i,k−1,t), R(t)} + P(i,k,t)/w_i Generalize so R(t) is continuous, not discrete: R(t) grows at a rate inversely proportional to n(t)
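A sketch of the tag bookkeeping; class and parameter names are mine. Packets are then transmitted in increasing order of finish tag:

```python
class WFQTagger:
    """Finish-tag computation for packetized (weighted) fair queueing."""
    def __init__(self):
        self.F = {}                               # last finish tag per flow

    def tag(self, flow, size, R, weight=1.0):
        """F(i,k) = max{F(i,k-1), R(t)} + P(i,k)/w_i  (w_i = 1 gives plain FQ)."""
        f = max(self.F.get(flow, 0.0), R) + size / weight
        self.F[flow] = f
        return f

t = WFQTagger()
print(t.tag("a", size=1000, R=0.0))               # 1000.0
print(t.tag("b", size=1000, R=0.0, weight=2.0))   # 500.0 -- flow b goes first
print(t.tag("a", size=1000, R=200.0))             # 2000.0 (flow a still active)
```

The max{·} handles both cases from the previous slide: an idle flow starts at the current round number, an active flow starts where its previous packet finished.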
WFQ and Packet QoS WFQ and its many variations form the basis for providing QoS in packet networks Very high-speed implementations available, up to 10 Gbps and possibly higher WFQ must be combined with other mechanisms to provide end-to-end QoS (next section)
Buffer Management Packet drop strategy: which packet to drop when buffers full Fairness: protect behaving sources from misbehaving sources Aggregation: per-flow buffers protect flows from misbehaving flows; full aggregation provides no protection; aggregation into classes provides intermediate protection Drop priorities: drop packets from buffer according to priorities Maximizes network utilization & application QoS Examples: layered video, policing at network edge Controlling sources at the edge
Early or Overloaded Drop Random early detection: drop pkts if short-term avg of queue exceeds threshold pkt drop probability increases linearly with queue length mark offending pkts  improves performance of cooperating TCP sources increases loss probability of misbehaving sources
Random Early Detection (RED) Packets produced by TCP will reduce input rate in response to network congestion Early drop:  discard packets before buffers are full Random drop causes some sources to reduce rate before others, causing gradual reduction in aggregate input rate Algorithm: Maintain running average of queue length If Q avg  < minthreshold, do nothing If Q avg  > maxthreshold, drop packet If in between, drop packet according to probability Flows that send more packets are more likely to have packets dropped
Packet Drop Profile in RED (Figure: probability of packet drop vs. average queue length — zero below min_th, increasing linearly between min_th and max_th, and 1 from max_th to full.)
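A sketch of the drop decision implementing this profile; the averaging weight w and the maximum ramp probability p_max are tunable parameters of my choosing, not values from the slide:

```python
import random

def update_avg(q_avg, q_now, w=0.002):
    """Exponentially weighted running average of instantaneous queue length."""
    return (1 - w) * q_avg + w * q_now

def red_drop(q_avg, min_th, max_th, p_max=0.1):
    """RED decision from the average queue length, per the profile above."""
    if q_avg < min_th:
        return False                               # do nothing
    if q_avg >= max_th:
        return True                                # always drop (or mark)
    p = p_max * (q_avg - min_th) / (max_th - min_th)   # linear ramp
    return random.random() < p
```

Because the probability ramps up gradually, heavier flows see proportionally more drops, which nudges cooperating TCP sources to slow down before the buffer actually fills.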
Chapter 7 Packet-Switching Networks Traffic Management at the Flow Level
Congestion Congestion occurs when a surge of traffic overloads network resources Approaches to congestion control: Preventive approaches: scheduling & reservations Reactive approaches: detect & throttle/discard
Ideal effect of congestion control: resources used efficiently up to capacity available (Figure: throughput vs. offered load for controlled and uncontrolled networks; uncontrolled throughput collapses beyond capacity.)
Open-Loop Control Network performance is guaranteed to all traffic flows that have been admitted into the network Initially for connection-oriented networks Key Mechanisms Admission Control Policing Traffic Shaping Traffic Scheduling
Admission Control Flows negotiate contract with network Specify requirements: peak, average, minimum bit rate; maximum burst size; delay & loss requirements Network computes resources needed: “effective” bandwidth If flow accepted, network allocates resources to ensure QoS delivered as long as source conforms to contract (Figure: typical bit rate demanded by a variable bit rate information source over time, with peak rate and average rate marked.)
Policing Network monitors traffic flows continuously to ensure they meet their traffic contract When a packet violates the contract, network can discard or tag the packet giving it lower priority If congestion occurs, tagged packets are discarded first Leaky Bucket Algorithm  is the most commonly used policing mechanism Bucket has specified leak rate for average contracted rate Bucket has specified depth to accommodate variations in arrival rate Arriving packet is  conforming  if it does not result in overflow
Leaky Bucket algorithm can be used to police arrival rate of a packet stream Let X = bucket content at the last conforming packet arrival and t_a = the current packet's arrival time; the bucket has drained by (t_a − LCT) since then, since water drains at a constant rate (1 packet per unit time) Leak rate corresponds to long-term rate Bucket depth corresponds to maximum allowable burst (Analogy: water poured irregularly into a leaky bucket; assume constant-length packets, as in ATM.)
Leaky Bucket Algorithm Depletion rate: 1 packet per unit time L + I = bucket depth I = increment per arrival, nominal interarrival time X = value of the leaky bucket counter, X' = auxiliary variable, LCT = last conformance time On arrival of a packet at time t_a: X' = X − (t_a − LCT); if X' < 0, set X' = 0 (bucket empty) If X' > L: the arriving packet would cause overflow, so it is nonconforming and X, LCT are unchanged Otherwise the packet is conforming: X = X' + I, LCT = t_a
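A direct transcription of this flowchart; the closure-based packaging and the first-packet initialization (start with an empty bucket) are my own choices:

```python
def make_policer(I, L):
    """Continuous-state leaky bucket policer: I = increment per packet
    (nominal interarrival time), L = limit (bucket depth minus I)."""
    state = {"X": 0.0, "LCT": None}     # bucket counter, last conformance time

    def arrives(t_a):
        if state["LCT"] is None:        # first packet: empty bucket, conforms
            state["X"], state["LCT"] = I, t_a
            return True
        Xp = max(0.0, state["X"] - (t_a - state["LCT"]))  # drain since LCT
        if Xp > L:
            return False                # nonconforming: X, LCT unchanged
        state["X"], state["LCT"] = Xp + I, t_a
        return True
    return arrives

police = make_policer(I=4, L=6)
print([police(t) for t in (0, 1, 2, 3, 8)])  # [True, True, True, False, True]
```

This is the same continuous-state automaton used for ATM's GCRA: conforming packets add I to the counter, and a packet that would push the drained counter above L is rejected without changing the state.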
Leaky Bucket Example I = 4, L = 6 Non-conforming packets not allowed into bucket & hence not included in calculations (Figure: bucket content over time for a sequence of packet arrivals; arrivals that would overflow the bucket are marked nonconforming.)
Policing Parameters T = 1 / peak rate I = nominal interarrival time = 1 / sustainable rate MBS = maximum burst size (Figure: a burst of MBS packets spaced T apart, followed by idle time; L determines how large the burst can be.)
Dual Leaky Bucket Dual leaky bucket to police PCR, SCR, and MBS: Tagged or dropped Untagged traffic Incoming traffic Untagged traffic PCR = peak cell rate CDVT = cell delay variation tolerance SCR = sustainable cell rate MBS = maximum burst size Leaky bucket 1 SCR and MBS Leaky bucket 2 PCR and CDVT Tagged or dropped
Traffic Shaping Networks police the incoming traffic flow Traffic shaping  is used to ensure that a packet stream conforms to specific parameters Networks can shape their traffic prior to passing it to another network Network C Network A Network B Traffic shaping Traffic shaping Policing Policing 1 2 3 4
Leaky Bucket Traffic Shaper Buffer incoming packets Play out periodically to conform to parameters Surges in arrivals are buffered & smoothed out Possible packet loss due to buffer overflow Too restrictive, since conforming traffic does not need to be completely smooth (Figure: incoming traffic → buffer of size N → server → shaped traffic.)
Token Bucket Traffic Shaper Token rate regulates transfer of packets If sufficient tokens available, packets enter network without delay Token bucket size K determines how much burstiness is allowed into the network An incoming packet must obtain a token before admission into the network (Figure: tokens arrive periodically into a bucket of size K; packets queue in a buffer of size N until a token is available.)
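A packet-granularity sketch (one token per packet; class name, tick-based timing, and parameter packaging are mine):

```python
import collections

class TokenBucketShaper:
    """Token-bucket shaper: tokens accrue at rate r up to depth K;
    a queued packet departs only when a token is available."""
    def __init__(self, r, K, N):
        self.r, self.K, self.N = r, K, N   # token rate, bucket size, buffer size
        self.tokens = K
        self.queue = collections.deque()

    def tick(self, arrivals):
        """One time unit: accrue tokens, accept arrivals, release packets."""
        self.tokens = min(self.K, self.tokens + self.r)
        for pkt in arrivals:
            if len(self.queue) < self.N:
                self.queue.append(pkt)     # else: buffer overflow, packet lost
        out = []
        while self.queue and self.tokens >= 1:
            self.tokens -= 1
            out.append(self.queue.popleft())
        return out                         # a burst of up to K packets possible

shaper = TokenBucketShaper(r=1, K=3, N=10)
print([len(shaper.tick(["p"] * a)) for a in (5, 0, 0, 0)])  # [3, 1, 1, 0]
```

Unlike the leaky bucket shaper, an idle source banks up to K tokens, so a conforming burst passes through without being smoothed flat.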
Token Bucket Shaping Effect The token bucket constrains the traffic from a source to at most b + rt bytes in any interval of length t: b bytes can be sent instantly, plus r bytes/second thereafter
Packet transfer with Delay Guarantees Token shaper followed by multiplexers allocating bit rate > R > r (e.g., using WFQ) Assume fluid flow for information Token bucket allows a burst of b bytes into mux 1 and then r bytes/second Since R > r, the buffer content at mux 1 is never greater than b bytes Thus delay at mux 1 < b/R Rate into second mux is r < R, so bytes are never delayed there (Figure: arrival curve A(t) = b + rt vs. service rate R(t); buffer occupancy at mux 1 bounded by b and draining to empty, buffer at mux 2 showing no backlog of packets.)
Delay Bounds with WFQ / PGPS Assume traffic shaped to parameters b & r; schedulers give the flow at least rate R > r; H-hop path; m is the maximum packet size for the given flow; M the maximum packet size in the network; R_j the transmission rate in the jth hop. The maximum end-to-end delay that can be experienced by a packet from flow i is given below.
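The formula itself is in the original figure; for reference, the standard PGPS guaranteed-service bound with these parameters (as in the IETF guaranteed-service model, RFC 2212) has the form

```latex
D_{\text{end-to-end}} \;\le\; \frac{b}{R} \;+\; \frac{(H-1)\,m}{R} \;+\; \sum_{j=1}^{H} \frac{M}{R_j}
```

The first term is the time to drain the worst-case burst at the allocated rate, the second accounts for packetization of the flow's own packets at each downstream hop, and the sum accounts for one maximum-size packet's transmission time at each hop's line rate.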
Scheduling for Guaranteed Service Suppose guaranteed bounds on end-to-end delay across the network are to be provided A call admission control procedure is required to allocate resources & set schedulers Traffic flows from sources must be shaped/regulated so that they do not exceed their allocated resources Strict delay bounds can be met
Current View of Router Function Classifier Input driver Internet forwarder Pkt. scheduler Output driver Routing Agent Reservation Agent Mgmt. Agent Admission Control [Routing database] [Traffic control database]
Closed-Loop Flow Control Congestion control feedback information to regulate flow from sources into network Based on buffer content, link utilization, etc. Examples:  TCP at transport layer; congestion control at ATM level End-to-end vs. Hop-by-hop Delay in effecting control Implicit vs. Explicit Feedback Source deduces congestion from observed behavior Routers/switches generate messages alerting to congestion
End-to-End vs. Hop-by-Hop Congestion Control Source Destination Feedback information Packet flow Source Destination (a) (b)
Traffic Engineering Management exerted at flow aggregate level Distribution of flows in network to achieve efficient utilization of resources (bandwidth) Shortest path algorithm to route a given flow is not enough: it does not take into account the requirements of a flow, e.g. its bandwidth requirement, and does not take into account the interplay between different flows Must take into account aggregate demand from all flows
Shortest path routing congests link 4 to 8; a better flow allocation distributes flows more uniformly (Figure: (a) all flows routed over link 4–8 vs. (b) flows spread over the 8-node network.)
Computer network lesson plan
sangusajjan
 
Question bank cn2
Question bank cn2Question bank cn2
Question bank cn2
sangusajjan
 
Profile
ProfileProfile
Profile
sangusajjan
 
VII VoIP
VII VoIPVII VoIP
VII VoIP
sangusajjan
 
VII Compression Introduction
VII Compression IntroductionVII Compression Introduction
VII Compression Introduction
sangusajjan
 
UNIT II tramission control
UNIT II tramission controlUNIT II tramission control
UNIT II tramission control
sangusajjan
 
Unit VI Overlays
Unit VI OverlaysUnit VI Overlays
Unit VI Overlays
sangusajjan
 
Unit V network management and security
Unit V network management and securityUnit V network management and security
Unit V network management and security
sangusajjan
 
Unit III IPV6 UDP
Unit III IPV6 UDPUnit III IPV6 UDP
Unit III IPV6 UDP
sangusajjan
 
Vivpn pp tfinal
Vivpn pp tfinalVivpn pp tfinal
Vivpn pp tfinal
sangusajjan
 
UnIT VIII manet
UnIT VIII manetUnIT VIII manet
UnIT VIII manet
sangusajjan
 
Unit VIII wireless sensor networks
Unit VIII wireless sensor networksUnit VIII wireless sensor networks
Unit VIII wireless sensor networks
sangusajjan
 
Unit iv atm networks
Unit iv atm networksUnit iv atm networks
Unit iv atm networks
sangusajjan
 
VoIP and multimedia networking
VoIP and multimedia networkingVoIP and multimedia networking
VoIP and multimedia networking
sangusajjan
 
Network management
Network managementNetwork management
Network management
sangusajjan
 
Compression of digital voice and video
Compression of digital voice and videoCompression of digital voice and video
Compression of digital voice and video
sangusajjan
 
Computer network lesson plan
Computer network lesson planComputer network lesson plan
Computer network lesson plan
sangusajjan
 
Question bank cn2
Question bank cn2Question bank cn2
Question bank cn2
sangusajjan
 
VII Compression Introduction
VII Compression IntroductionVII Compression Introduction
VII Compression Introduction
sangusajjan
 
UNIT II tramission control
UNIT II tramission controlUNIT II tramission control
UNIT II tramission control
sangusajjan
 
Unit VI Overlays
Unit VI OverlaysUnit VI Overlays
Unit VI Overlays
sangusajjan
 
Unit V network management and security
Unit V network management and securityUnit V network management and security
Unit V network management and security
sangusajjan
 
Unit III IPV6 UDP
Unit III IPV6 UDPUnit III IPV6 UDP
Unit III IPV6 UDP
sangusajjan
 
Unit VIII wireless sensor networks
Unit VIII wireless sensor networksUnit VIII wireless sensor networks
Unit VIII wireless sensor networks
sangusajjan
 

Unit i packet switching networks

  • 13. Access Multiplexing Packet traffic from users multiplexed at access to network into aggregated streams DSL traffic multiplexed at DSL Access Mux Cable modem traffic multiplexed at Cable Modem Termination System Access MUX To packet network
  • 14. Oversubscription Access Multiplexer N subscribers connected @ c bps to mux Each subscriber active r/c of time Mux has C = nc bps to network Oversubscription rate: N/n Find n so that there is at most 1% overflow probability Feasible oversubscription rate increases with size:
        N     r/c    n    N/n
        10    0.01   1    10     10 extremely lightly loaded users
        10    0.05   3    3.3    10 very lightly loaded users
        10    0.1    4    2.5    10 lightly loaded users
        20    0.1    6    3.3    20 lightly loaded users
        40    0.1    9    4.4    40 lightly loaded users
        100   0.1    18   5.5    100 lightly loaded users
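The table can be reproduced with the binomial model the slide implies: each of the N subscribers is independently active with probability p = r/c, and n is the smallest trunk count for which more than n simultaneously active subscribers (an overflow) occurs with probability at most 1%. A minimal sketch, assuming that independence model:

    from math import comb

    def min_trunks(N, p, overflow=0.01):
        # Smallest n with P(more than n of N subscribers active) <= overflow,
        # each subscriber independently active with probability p = r/c.
        for n in range(N + 1):
            tail = sum(comb(N, k) * p**k * (1 - p)**(N - k)
                       for k in range(n + 1, N + 1))
            if tail <= overflow:
                return n
        return N

    for N, p in [(10, 0.01), (10, 0.05), (10, 0.1), (20, 0.1), (40, 0.1), (100, 0.1)]:
        n = min_trunks(N, p)
        print(f"N={N:3d}  r/c={p:.2f}  n={n:2d}  N/n={N/n:.1f}")

Running it reproduces each (n, N/n) row of the table above.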
  • 15. Home LANs Home Router LAN Access using Ethernet or WiFi (IEEE 802.11) Private IP addresses in Home (192.168.0.x) using Network Address Translation (NAT) Single global IP address from ISP issued using Dynamic Host Configuration Protocol (DHCP) Home Router To packet network WiFi Ethernet
  • 16. LAN Concentration LAN hubs and switches in the access network also aggregate packet streams that flow into switches and routers
  • 17. Campus Network Only outgoing packets leave LAN through router High-speed campus backbone net connects dept routers Servers have redundant connectivity to backbone [Figure: hosts and departmental servers hang off switches (s), which connect through departmental routers (R) to the campus backbone; organization servers and a gateway to the Internet or wide area network also attach to the backbone]
  • 18. Connecting to Internet Service Provider Campus network: network administered by a single organization (an autonomous system or domain) Intradomain level: routing inside the domain Interdomain level: border routers of the campus network connect to border routers of the Internet service provider
  • 19. Internet Backbone Network Access Points: set up during original commercialization of Internet to facilitate exchange of traffic Private Peering Points: two-party inter-ISP agreements to exchange traffic National Service Provider A National Service Provider B National Service Provider C NAP NAP Private peering
  • 20. [Figure: (a) routers RA, RB, RC of National Service Providers A, B, and C exchanging traffic through a NAP with a route server; (b) NAPs and private peering links interconnecting the three providers' backbones]
  • 21. Key Role of Routing How to get packet from here to there? Decentralized nature of Internet makes routing a major challenge Interior gateway protocols (IGPs) are used to determine routes within a domain Exterior gateway protocols (EGPs) are used to determine routes across domains Routes must be consistent & produce stable flows Scalability required to accommodate growth Hierarchical structure of IP addresses essential to keeping size of routing tables manageable
  • 22. Chapter 7 Packet-Switching Networks Datagrams and Virtual Circuits
  • 23. The Switching Function Dynamic interconnection of inputs to outputs Enables dynamic sharing of transmission resource Two fundamental approaches: Connectionless Connection-Oriented: Call setup control, Connection control Backbone Network Access Network Switch
  • 24. Packet Switching Network Packet switching network Transfers packets between users Transmission lines + packet switches (routers) Origin in message switching Two modes of operation: Connectionless Virtual Circuit Packet switch Network Transmission line User
  • 25. Message switching invented for telegraphy Entire messages multiplexed onto shared lines, stored & forwarded Headers for source & destination addresses Routing at message switches Connectionless Message Switching Switches Message Destination Source Message Message Message
  • 26. Message Switching Delay Additional queueing delays possible at each link Minimum delay = 3τ + 3T for the path Source → Switch 1 → Switch 2 → Destination, where τ is the per-hop propagation delay and T the message transmission time
  • 27. Long Messages vs. Packets How many bits need to be transmitted to deliver a 1 Mbit message over two hops, each with BER p = 10^-6? Approach 1: send one 1 Mbit message Probability message arrives correctly ≈ (1 − p)^1000000 ≈ e^−1 ≈ 0.37 On average it takes about 3 transmissions/hop Total # bits transmitted ≈ 6 Mbits Approach 2: send 10 100-kbit packets Probability a packet arrives correctly ≈ (1 − p)^100000 ≈ e^−0.1 ≈ 0.90 On average it takes about 1.1 transmissions/hop Total # bits transmitted ≈ 2.2 Mbits
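The arithmetic follows from a geometric-retransmission model: a block of B bits crosses one hop intact with probability (1 − p)^B, so it needs 1/(1 − p)^B attempts per hop on average. A small check, assuming the two-hop path shown on the slide:

    p = 1e-6

    def expected_bits(block_bits, n_blocks, hops=2):
        p_ok = (1 - p) ** block_bits                 # block survives one hop intact
        return n_blocks * block_bits * hops / p_ok   # geometric retries per hop

    print(f"{expected_bits(1_000_000, 1):.2e}")   # ~5.4e6 bits (~2.7 tx/hop; slide rounds to 3)
    print(f"{expected_bits(100_000, 10):.2e}")    # ~2.2e6 bits (~1.1 tx/hop)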
  • 28. Packet Switching - Datagram Messages broken into smaller units (packets) Source & destination addresses in packet header Connectionless, packets routed independently (datagram) Packets may arrive out of order Pipelining of packets across network can reduce delay, increase throughput Lower delay than message switching, suitable for interactive traffic
  • 29. Packet Switching Delay Assume three packets corresponding to one message traverse the same path Minimum delay = 3τ + 5(T/3) (single path assumed) Additional queueing delays possible at each link Packet pipelining enables message to arrive sooner
  • 30. Delay for k-Packet Message over L Hops For 3 hops: first bit released at 3τ + 3(T/3), last bit released at 3τ + 5(T/3) For L hops: first bit released at Lτ + LP, last bit released at Lτ + LP + (k − 1)P, where T = kP and P is the transmission time of one packet
  • 31. Routing Tables in Datagram Networks Route determined by table lookup Routing decision involves finding next hop in route to given destination Routing table has an entry for each destination specifying output port that leads to next hop Size of table becomes impractical for very large number of destinations Example entries (destination address → output port): 1345 → 12, 2458 → 7, 0785 → 6, 1566 → 12
  • 32. Example: Internet Routing Internet protocol uses datagram packet switching across networks Networks are treated as data links Hosts have two-part IP address: Network address + Host address Routers do table lookup on network address This reduces size of routing table In addition, network addresses are assigned so that they can also be aggregated Discussed as CIDR in Chapter 8
  • 33. Packet Switching – Virtual Circuit Call set-up phase sets up pointers along a fixed path across the network All packets for a connection follow the same path Abbreviated header identifies connection on each link Packets queue for transmission Variable bit rates possible, negotiated during call set-up Delays variable, cannot be less than circuit switching
  • 34. Connection Setup Signaling messages propagate as route is selected Signaling messages identify connection and setup tables in switches Typically a connection is identified by a local tag, Virtual Circuit Identifier (VCI) Each switch only needs to know how to relate an incoming tag in one input to an outgoing tag in the corresponding output Once tables are setup, packets can flow along path SW 1 SW 2 SW n Connect request Connect request Connect request Connect confirm Connect confirm Connect confirm …
  • 35. Connection Setup Delay Connection setup delay is incurred before any packet can be transferred Delay is acceptable for sustained transfer of large number of packets This delay may be unacceptably high if only a few packets are being transferred t t t t 3 1 2 3 1 2 3 2 1 Release Connect request CR CR Connect confirm CC CC
  • 36. Virtual Circuit Forwarding Tables Each input port of packet switch has a forwarding table Lookup entry for VCI of incoming packet Determine output port (next hop) and insert VCI for next link Very high speeds are possible Table can also include priority or other information about how packet should be treated Example entries:
        Input VCI   Output port   Output VCI
        15          15            58
        13          13            7
        27          12            44
        23          16            34
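A sketch of this lookup as a per-input-port dictionary; the entries are copied from the table above, while the cell format is just an illustration:

    # Incoming VCI -> (output port, outgoing VCI); one exact-match lookup per cell.
    vc_table = {15: (15, 58), 13: (13, 7), 27: (12, 44), 23: (16, 34)}

    def forward(cell):
        vci, payload = cell
        out_port, out_vci = vc_table[vci]   # relabel the cell and switch it
        return out_port, (out_vci, payload)

    print(forward((27, "data")))            # (12, (44, 'data'))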
  • 37. Cut-Through Switching Some networks perform error checking on header only, so packet can be forwarded as soon as header is received & processed Delays reduced further with cut-through switching: minimum delay = 3τ + T
  • 38. Message vs. Packet Minimum Delay Message: Lτ + LT = Lτ + (L − 1)T + T Packet: Lτ + LP + (k − 1)P = Lτ + (L − 1)P + T Cut-through packet (immediate forwarding after header): Lτ + T The above neglect header processing delays
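A quick check of the three formulas side by side; the numeric values below are arbitrary illustrative choices, not from the slides:

    def message_delay(L, tau, T):
        return L * tau + L * T                  # store-and-forward of whole message

    def packet_delay(L, tau, T, k):
        P = T / k                               # per-packet transmission time
        return L * tau + L * P + (k - 1) * P    # pipelined packets

    def cut_through_delay(L, tau, T):
        return L * tau + T                      # forward as soon as header handled

    L, tau, T, k = 3, 1.0, 30.0, 3              # e.g. milliseconds
    print(message_delay(L, tau, T))             # 93.0
    print(packet_delay(L, tau, T, k))           # 53.0 = 3*tau + 5*(T/3)
    print(cut_through_delay(L, tau, T))         # 33.0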
  • 39. Example: ATM Networks All information mapped into short fixed-length packets called cells Connections set up across network Virtual circuits established across networks Tables setup at ATM switches Several types of network services offered Constant bit rate connections Variable bit rate connections
  • 40. Chapter 7 Packet-Switching Networks Datagrams and Virtual Circuits Structure of a Packet Switch
  • 41. Packet Switch: Intersection where Traffic Flows Meet 1 2 N 1 2 N       Inputs contain multiplexed flows from access muxs & other packet switches Flows demultiplexed at input, routed and/or forwarded to output ports Packets buffered, prioritized, and multiplexed on output lines • • • • • • • • •
  • 42. Generic Packet Switch “ Unfolded” View of Switch Ingress Line Cards Header processing Demultiplexing Routing in large switches Controller Routing in small switches Signalling & resource allocation Interconnection Fabric Transfer packets between line cards Egress Line Cards Scheduling & priority Multiplexing Controller 1 2 3 N Line card Line card Line card Line card Interconnection fabric Line card Line card Line card Line card 1 2 3 N Input ports Output ports Data path Control path (a) … … … …
  • 43. Line Cards Folded View 1 circuit board is ingress/egress line card Physical layer processing Data link layer processing Network header processing Physical layer across fabric + framing Interconnection fabric Transceiver Transceiver Framer Framer Network processor Backplane transceivers To physical ports To switch fabric To other line cards
  • 44. Shared Memory Packet Switch 1 2 3 N 1 2 3 N … Shared Memory Queue Control Ingress Processing Connection Control … Output Buffering Small switches can be built by reading/writing into shared memory
  • 45. Crossbar Switches 1 2 3 N 1 2 3 N Inputs Outputs (a) Input buffering 3 8 3 … … 1 2 3 N 1 2 3 N Inputs Outputs (b) Output buffering … … Large switches built from crossbar & multistage space switches Requires centralized controller/scheduler (who sends to whom when) Can buffer at input, output, or both (performance vs complexity)
  • 46. Self-Routing Switches Self-routing switches do not require controller Output port number determines route 101 -> (1) lower port, (2) upper port, (3) lower port 0 1 2 Inputs Outputs 3 4 5 6 7 0 1 2 3 4 5 6 7 Stage 1 Stage 2 Stage 3
  • 47. Chapter 7 Packet-Switching Networks Routing in Packet Networks
  • 48. Routing in Packet Networks Three possible (loopfree) routes from 1 to 6: 1-3-6, 1-4-5-6, 1-2-5-6 Which is “best”? Min delay? Min hop? Max bandwidth? Min cost? Max reliability? 1 2 3 4 5 6 Node (switch or router)
  • 49. Creating the Routing Tables Need information on state of links Link up/down; congested; delay or other metrics Need to distribute link state information using a routing protocol What information is exchanged? How often? Exchange with neighbors; Broadcast or flood Need to compute routes based on information Single metric; multiple metrics Single route; alternate routes
  • 50. Routing Algorithm Requirements Responsiveness to changes Topology or bandwidth changes, congestion Rapid convergence of routers to consistent set of routes Freedom from persistent loops Optimality Resource utilization, path length Robustness Continues working under high load, congestion, faults, equipment failures, incorrect implementations Simplicity Efficient software implementation, reasonable processing load
  • 51. Centralized vs Distributed Routing Centralized Routing All routes determined by a central node All state information sent to central node Problems adapting to frequent topology changes Does not scale Distributed Routing Routes determined by routers using distributed algorithm State information exchanged by routers Adapts to topology and other changes Better scalability
  • 52. Static vs Dynamic Routing Static Routing Set up manually, do not change; requires administration Works when traffic predictable & network is simple Used to override some routes set by dynamic algorithm Used to provide default router Dynamic Routing Adapt to changes in network conditions Automated Calculates routes based on received updated network state information
  • 53. Routing in Virtual-Circuit Packet Networks Route determined during connection setup Tables in switches implement forwarding that realizes selected route 1 2 3 4 5 6 A B C D 1 5 2 3 7 1 8 5 4 2 3 6 5 2 Switch or router Host VCI
  • 54. Routing Tables in VC Packet Networks Example: VC from A to D From A & VCI 5 -> 3 & VCI 3 -> 4 & VCI 4 -> 5 & VCI 5 -> D & VCI 2 Each node holds a translation table mapping (incoming node, VCI) to (outgoing node, VCI), with a mirror entry for the reverse direction; e.g. node 1 maps (A, 5) -> (3, 3) and node 3 maps (1, 3) -> (4, 4) along this circuit [Original figure shows the full tables for nodes 1 through 6]
  • 55. Routing Tables in Datagram Packet Networks Each node keeps a (destination → next node) table:
        Node 1: 2→2, 3→3, 4→4, 5→2, 6→3
        Node 2: 1→1, 3→1, 4→4, 5→5, 6→5
        Node 3: 1→1, 2→4, 4→4, 5→6, 6→6
        Node 4: 1→1, 2→2, 3→3, 5→5, 6→3
        Node 5: 1→4, 2→2, 3→4, 4→4, 6→6
        Node 6: 1→3, 2→5, 3→3, 4→3, 5→5
  • 56. Non-Hierarchical Addresses and Routing No relationship between addresses & routing proximity Routing tables require 16 entries each, one per host address: e.g. at R1, 0000 → 1, 0111 → 1, 1010 → 1, …, 0001 → 4, 0100 → 4, 1011 → 4, …
  • 57. Hierarchical Addresses and Routing Prefix indicates network where host is attached Routing tables require 4 entries each: R1: 00 → 1, 01 → 3, 10 → 2, 11 → 3; R2: 00 → 3, 01 → 4, 10 → 3, 11 → 5
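A sketch of the prefix-based lookup these tables imply, using R1's four entries; treating addresses as 4-bit strings with a fixed 2-bit network prefix is an assumption made for illustration:

    r1 = {"00": 1, "01": 3, "10": 2, "11": 3}   # network prefix -> output port

    def next_port(address):
        return r1[address[:2]]                  # match on the network part only

    print(next_port("0111"))                    # 3: host 0111 lives in network 01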
  • 58. Flat vs Hierarchical Routing Flat Routing All routers are peers Does not scale Hierarchical Routing Partitioning: Domains, autonomous systems, areas... Some routers part of routing backbone Some routers only communicate within an area Efficient because it matches typical traffic flow patterns Scales
  • 59. Specialized Routing Flooding Useful in starting up network Useful in propagating information to all nodes Deflection Routing Fixed, preset routing procedure No route synthesis
  • 60. Flooding Send a packet to all nodes in a network No routing tables available Need to broadcast packet to all nodes (e.g. to propagate link state information) Approach Send packet on all ports except one where it arrived Exponential growth in packet transmissions
  • 61. Flooding is initiated from Node 1: Hop 1 transmissions 1 2 3 4 5 6
  • 62. Flooding is initiated from Node 1: Hop 2 transmissions 1 2 3 4 5 6
  • 63. 1 2 3 4 5 6 Flooding is initiated from Node 1: Hop 3 transmissions
  • 64. Limited Flooding Time-to-Live field in each packet limits number of hops to certain diameter Each switch adds its ID before flooding; discards repeats Source puts sequence number in each packet; switches record source address and sequence number and discard repeats
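A minimal sketch of the sequence-number variant with a TTL bound; the packet layout and port model here are illustrative assumptions:

    def make_switch(ports):
        seen = set()                             # (source, seq) pairs already flooded
        def flood(in_port, src, seq, ttl, payload):
            if (src, seq) in seen or ttl <= 0:
                return []                        # duplicate or expired: discard
            seen.add((src, seq))
            return [(p, (src, seq, ttl - 1, payload))
                    for p in ports if p != in_port]   # all ports except arrival port
        return flood

    sw = make_switch(ports=[1, 2, 3])
    print(sw(1, "A", 7, ttl=3, payload="LSA"))   # forwarded on ports 2 and 3
    print(sw(2, "A", 7, ttl=2, payload="LSA"))   # [] - duplicate suppressed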
  • 65. Deflection Routing Network nodes forward packets to preferred port If preferred port busy, deflect packet to another port Works well with regular topologies Manhattan street network Rectangular array of nodes Nodes designated (i,j) Rows alternate as one-way streets Columns alternate as one-way avenues Bufferless operation is possible Proposed for optical packet networks All-optical buffering currently not viable
  • 66. Tunnel from last column to first column or vice versa 0,0 0,1 0,2 0,3 1,0 1,1 1,2 1,3 2,0 2,1 2,2 2,3 3,0 3,1 3,2 3,3
  • 67. Example: Node (0,2) ->(1,0) 0,0 0,1 0,2 0,3 1,0 1,1 1,2 1,3 2,0 2,1 2,2 2,3 3,0 3,1 3,2 3,3 busy
  • 68. Chapter 7 Packet-Switching Networks Shortest Path Routing
  • 69. Shortest Paths & Routing Many possible paths connect any given source to any given destination Routing involves the selection of the path to be used to accomplish a given transfer Typically it is possible to attach a cost or distance to a link connecting two nodes Routing can then be posed as a shortest path problem
  • 70. Routing Metrics Means for measuring desirability of a path Path Length = sum of costs or distances Possible metrics Hop count: rough measure of resources used Reliability: link availability; BER Delay: sum of delays along path; complex & dynamic Bandwidth: “available capacity” in a path Load: Link & router utilization along path Cost: $$$
  • 71. Shortest Path Approaches Distance Vector Protocols Neighbors exchange list of distances to destinations Best next-hop determined for each destination Ford-Fulkerson (distributed) shortest path algorithm Link State Protocols Link state information flooded to all routers Routers have complete topology information Shortest path (& hence next hop) calculated Dijkstra (centralized) shortest path algorithm
  • 72. Distance Vector Do you know the way to San Jose? San Jose 392 San Jose 596 San Jose 294 San Jose 250
  • 73. Distance Vector Local Signpost Direction Distance Routing Table For each destination list: Next Node Distance Table Synthesis Neighbors exchange table entries Determine current best next hop Inform neighbors Periodically After changes dest next dist
  • 74. Shortest Path to SJ San Jose C ij D j D i If D i is the shortest distance to SJ from i and if j is a neighbor on the shortest path, then D i = C ij + D j Focus on how nodes find their shortest path to a given destination node, i.e. SJ i j
  • 75. But we don't know the shortest paths Node i only has local info from its neighbors j, j', j'': the link costs C_ij, C_ij', C_ij'' and the neighbors' reported distances D_j, D_j', D_j'' Pick current shortest path
  • 76. Why Distance Vector Works 1 Hop From SJ 2 Hops From SJ 3 Hops From SJ Hop-1 nodes calculate current (next hop, dist), & send to neighbors San Jose Accurate info about SJ ripples across network, Shortest Path Converges SJ sends accurate info
  • 77. Bellman-Ford Algorithm Consider computations for one destination d Initialization Each node table has 1 row for destination d Distance of node d to itself is zero: D_d = 0 Distance of other node j to d is infinite: D_j = ∞, for j ≠ d Next hop node n_j = −1 to indicate not yet defined for j ≠ d Send Step Send new distance vector to immediate neighbors across local link Receive Step At node i, find the next hop that gives the minimum distance to d: min over neighbors j of { C_ij + D_j } Replace old (n_i, D_i) by new (n_i*, D_i*) if a new next node or distance is found Go to send step
  • 78. Bellman-Ford Algorithm Now consider parallel computations for all destinations d Initialization Each node has 1 row for each destination d Distance of node d to itself is zero: D_d(d) = 0 Distance of other node j to d is infinite: D_j(d) = ∞, for j ≠ d Next node n_j = −1 since not yet defined Send Step Send new distance vector to immediate neighbors across local link Receive Step For each destination d, node i finds the next hop that gives the minimum distance to d: min over neighbors j of { C_ij + D_j(d) } Replace old (n_i, D_i(d)) by new (n_i*, D_i*(d)) if new next node or distance found Go to send step
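A compact sketch of this computation for a single destination, using the 6-node example of the slides that follow; the link costs are reconstructed from the figure, and a synchronous sweep stands in for the asynchronous send/receive exchanges:

    import math

    links = {(1, 2): 3, (1, 3): 2, (1, 4): 5, (2, 4): 1, (2, 5): 4,
             (3, 4): 2, (3, 6): 1, (4, 5): 3, (5, 6): 2}
    cost = {}
    for (a, b), c in links.items():
        cost[(a, b)] = cost[(b, a)] = c          # undirected links
    nodes = {1, 2, 3, 4, 5, 6}

    def distance_vector(dest):
        dist = {n: (0 if n == dest else math.inf) for n in nodes}
        nxt = {n: -1 for n in nodes}             # -1 = next hop not yet defined
        nxt[dest] = dest
        changed = True
        while changed:                           # repeat until tables stabilize
            changed = False
            for i in sorted(nodes - {dest}):
                for j in sorted(nodes):          # scan i's neighbors
                    if (i, j) in cost and cost[(i, j)] + dist[j] < dist[i]:
                        dist[i], nxt[i] = cost[(i, j)] + dist[j], j
                        changed = True
        return {i: (nxt[i], dist[i]) for i in nodes}

    # Next hop and distance per node toward node 6 ("San Jose"); the result
    # matches the converged (iteration 3) row of the tables on the next slides.
    print(distance_vector(6))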
  • 79. Destination: San Jose (node 6). Each node maintains a table entry (next hop, distance) for dest SJ, e.g. a table entry @ node 1 and @ node 3:
        Iteration   Node 1    Node 2    Node 3    Node 4    Node 5
        Initial     (-1, ∞)   (-1, ∞)   (-1, ∞)   (-1, ∞)   (-1, ∞)
        (iterations 1-3 are filled in on the next slides)
  • 80. Iteration 1: SJ's neighbors learn D6 = 0, so D3 = D6 + 1 with n3 = 6, and D5 = D6 + 2 with n5 = 6:
        Iteration   Node 1    Node 2    Node 3    Node 4    Node 5
        Initial     (-1, ∞)   (-1, ∞)   (-1, ∞)   (-1, ∞)   (-1, ∞)
        1           (-1, ∞)   (-1, ∞)   (6, 1)    (-1, ∞)   (6, 2)
  • 81. Iteration 2:
        2           (3, 3)    (5, 6)    (6, 1)    (3, 3)    (6, 2)
  • 82. Iteration 3 (converged):
        3           (3, 3)    (4, 4)    (6, 1)    (3, 3)    (6, 2)
  • 83. After a link failure the network is disconnected; a loop is created between nodes 3 and 4:
        Iteration   Node 1    Node 2    Node 3    Node 4    Node 5
        Initial     (3, 3)    (4, 4)    (6, 1)    (3, 3)    (6, 2)
        1           (3, 3)    (4, 4)    (4, 5)    (3, 3)    (6, 2)
  • 84. Node 4 could have chosen 2 as next node because of a tie:
        2           (3, 7)    (4, 4)    (4, 5)    (5, 5)    (6, 2)
  • 85. Node 2 could have chosen 5 as next node because of a tie:
        3           (3, 7)    (4, 6)    (4, 7)    (5, 5)    (6, 2)
  • 86. Node 1 could have chosen 3 as next node because of a tie:
        Iteration   Node 1    Node 2    Node 3    Node 4    Node 5
        1           (3, 3)    (4, 4)    (4, 5)    (3, 3)    (6, 2)
        2           (3, 7)    (4, 4)    (4, 5)    (2, 5)    (6, 2)
        3           (3, 7)    (4, 6)    (4, 7)    (5, 5)    (6, 2)
        4           (2, 9)    (4, 6)    (4, 7)    (5, 5)    (6, 2)
  • 87. Counting to Infinity Problem Nodes believe the best path to the destination (node 4) is through each other; after the link to node 4 breaks, the (next hop, distance) entries count upward:
        Update         Node 1   Node 2   Node 3
        Before break   (2, 3)   (3, 2)   (4, 1)
        After break    (2, 3)   (3, 2)   (2, 3)
        1              (2, 3)   (3, 4)   (2, 3)
        2              (2, 5)   (3, 4)   (2, 5)
        3              (2, 5)   (3, 6)   (2, 5)
        4              (2, 7)   (3, 6)   (2, 7)
        5              (2, 7)   (3, 8)   (2, 7)
        …              …        …        …
  • 88. Problem: Bad News Travels Slowly Remedies Split Horizon Do not report route to a destination to the neighbor from which route was learned Poisoned Reverse Report route to a destination to the neighbor from which route was learned, but with infinite distance Breaks erroneous direct loops immediately Does not work on some indirect loops
  • 89. Split Horizon with Poison Reverse Nodes believe best path is through each other; the link to destination node 4 breaks:
        Update         Node 1    Node 2    Node 3
        Before break   (2, 3)    (3, 2)    (4, 1)
        After break    (2, 3)    (3, 2)    (-1, ∞)   Node 2 advertises its route to 4 to node 3 as having distance infinity; node 3 finds there is no route to 4
        1              (2, 3)    (-1, ∞)   (-1, ∞)   Node 1 advertises its route to 4 to node 2 as having distance infinity; node 2 finds there is no route to 4
        2              (-1, ∞)   (-1, ∞)   (-1, ∞)   Node 1 finds there is no route to 4
  • 90. Link-State Algorithm Basic idea: two step procedure Each source node gets a map of all nodes and link metrics (link state) of the entire network Find the shortest path on the map from the source node to all destination nodes Broadcast of link-state information Every node i in the network broadcasts to every other node in the network: IDs of its neighbors: N_i = set of neighbors of i Distances to its neighbors: { C_ij | j ∈ N_i } Flooding is a popular method of broadcasting packets
  • 91. Dijkstra Algorithm: Finding shortest paths in order Find shortest paths from source s to all other destinations Closest node to s is 1 hop away 2nd closest node to s is 1 hop away from s or w'' 3rd closest node to s is 1 hop away from s, w'', or x
  • 92. Dijkstra's algorithm N: set of nodes for which shortest path already found Initialization: (start with source node s) N = {s}, D_s = 0, "s is distance zero from itself" D_j = C_sj for all j ≠ s, distances of directly-connected neighbors Step A: (find next closest node i) Find i ∉ N such that D_i = min D_j for j ∉ N Add i to N If N contains all the nodes, stop Step B: (update minimum costs) For each node j ∉ N, D_j = min(D_j, D_i + C_ij), the minimum distance from s to j through node i in N Go to Step A
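A sketch of the algorithm on the same 6-node example network (source = node 1), using a heap in place of the explicit min-search of Step A; the distances should match the final row of the table on the next slide:

    import heapq

    links = {(1, 2): 3, (1, 3): 2, (1, 4): 5, (2, 4): 1, (2, 5): 4,
             (3, 4): 2, (3, 6): 1, (4, 5): 3, (5, 6): 2}
    adj = {}
    for (a, b), c in links.items():
        adj.setdefault(a, []).append((b, c))
        adj.setdefault(b, []).append((a, c))

    def dijkstra(s):
        dist = {n: float("inf") for n in adj}
        dist[s] = 0
        heap = [(0, s)]
        done = set()                      # the set N of finalized nodes
        while heap:
            d, i = heapq.heappop(heap)
            if i in done:
                continue                  # stale heap entry
            done.add(i)                   # Step A: i's shortest distance is final
            for j, c in adj[i]:           # Step B: relax i's neighbors
                if d + c < dist[j]:
                    dist[j] = d + c
                    heapq.heappush(heap, (dist[j], j))
        return dist

    print(dijkstra(1))   # {1: 0, 2: 3, 3: 2, 4: 4, 5: 5, 6: 3}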
  • 93. Execution of Dijkstra's algorithm on the example network (source node 1):
        Iteration   N               D2   D3   D4   D5   D6
        Initial     {1}             3    2    5    ∞    ∞
        1           {1,3}           3    2    4    ∞    3
        2           {1,2,3}         3    2    4    7    3
        3           {1,2,3,6}       3    2    4    5    3
        4           {1,2,3,4,6}     3    2    4    5    3
        5           {1,2,3,4,5,6}   3    2    4    5    3
  • 94. Shortest Paths in Dijkstra's Algorithm [Figure: the example network redrawn after each iteration, with the edges of the shortest-path tree from node 1 highlighted as they are added]
  • 95. Reaction to Failure If a link fails, Router sets link distance to infinity & floods the network with an update packet All routers immediately update their link database & recalculate their shortest paths Recovery very quick But watch out for old update messages Add time stamp or sequence # to each update message Check whether each received update message is new If new, add it to database and broadcast If older, send update message on arriving link
  • 96. Why is Link State Better? Fast, loopless convergence Support for precise metrics, and multiple metrics if necessary (throughput, delay, cost, reliability) Support for multiple paths to a destination algorithm can be modified to find best two paths
  • 97. Source Routing Source host selects path that is to be followed by a packet Strict: sequence of nodes in path inserted into header Loose: subsequence of nodes in path specified Intermediate switches read next-hop address and remove address Source host needs link state information or access to a route server Source routing allows the host to control the paths that its information traverses in the network Potentially the means for customers to select what service providers they use
  • 98. Example 1 2 3 4 5 6 A B Source host Destination host 1,3,6,B 3,6,B 6,B B
  • 99. Chapter 7 Packet-Switching Networks ATM Networks
  • 100. Asynchronous Transfer Mode (ATM) Packet multiplexing and switching Fixed-length packets: "cells" Connection-oriented Rich Quality of Service support Conceived as end-to-end Supporting wide range of services Real time voice and video Circuit emulation for digital transport Data traffic with bandwidth guarantees Detailed discussion in Chapter 9
  • 101. ATM Networking End-to-end information transport using cells 53-byte cells provide low delay and fine multiplexing granularity Support for many services through the ATM Adaptation Layer [Figure: video, packet, and voice streams enter and leave the ATM network through an ATM Adaptation Layer at each end]
  • 102. TDM vs. Packet Multiplexing
                            TDM                        Packet
        Variable bit rate   Multirate only             Easily handled
        Delay               Low, fixed                 Variable
        Burst traffic       Inefficient                Efficient
        Processing          Minimal, very high speed   Header & packet processing required *
        * In mid-1980s, packet processing mainly in software and hence slow; by late 1990s, very high speed packet processing possible
  • 103. ATM: Attributes of TDM & Packet Switching Packet structure gives flexibility & efficiency Synchronous slot transmission gives high speed & density Packet Header MUX Wasted bandwidth ATM TDM 3 2 1 4 3 2 1 4 3 2 1 4 3 1 3 2 2 1 Voice Data packets Images 1 2 3 4
  • 104. ATM Switching Switch carries out table translation and routing ATM switches can be implemented using shared memory, shared backplanes, or self-routing multi-stage fabrics [Figure: voice, data, and video cells arriving on input ports 1–N have their VCIs translated and are switched to the appropriate output ports]
  • 105. Virtual connections setup across network Connections identified by locally-defined tags ATM Header contains virtual connection information: 8-bit Virtual Path Identifier 16-bit Virtual Channel Identifier Powerful traffic grooming capabilities Multiple VCs can be bundled within a VP Similar to tributaries with SONET, except variable bit rates possible ATM Virtual Connections Physical link Virtual paths Virtual channels
  • 106. VPI/VCI switching & multiplexing Connections a, b, c bundled into a VP at switch 1 Crossconnect switches the VP without looking at VCIs VP unbundled at switch 2; VC switching thereafter VPI/VCI structure allows creation of virtual networks [Figure: connections a–e carried across ATM switches 1–4 and an ATM crossconnect over virtual paths VPI 1, 2, 3, 5]
  • 107. MPLS & ATM ATM initially touted as more scalable than packet switching ATM envisioned speeds of 150–600 Mbps Advances in optical transmission proved ATM to be the less scalable: @ 10 Gbps Segmentation & reassembly of messages & streams into 48-byte cell payloads difficult & inefficient Header must be processed every 53 bytes vs. 500 bytes on average for packets Delay due to a 1250-byte packet at 10 Gbps = 1 μs; delay due to a 53-byte cell @ 150 Mbps ≈ 3 μs MPLS (Chapter 10) uses tags to transfer packets across virtual circuits in Internet
  • 108. Chapter 7 Packet-Switching Networks Traffic Management Packet Level Flow Level Flow-Aggregate Level
  • 109. Traffic Management Vehicular traffic management Traffic lights & signals control flow of traffic in city street system Objective is to maximize flow with tolerable delays Priority Services Police sirens Cavalcade for dignitaries Bus & High-usage lanes Trucks allowed only at night Packet traffic management Multiplexing & access mechanisms to control flow of packet traffic Objective is to make efficient use of network resources & deliver QoS Priority Fault-recovery packets Real-time traffic Enterprise (high-revenue) traffic High bandwidth traffic
  • 110. Time Scales & Granularities Packet Level Queueing & scheduling at multiplexing points Determines relative performance offered to packets over a short time scale (microseconds) Flow Level Management of traffic flows & resource allocation to ensure delivery of QoS (milliseconds to seconds) Matching traffic flows to resources available; congestion control Flow-Aggregate Level Routing of aggregate traffic flows across the network for efficient utilization of resources and meeting of service levels “ Traffic Engineering”, at scale of minutes to days
  • 111. End-to-End QoS A packet traversing network encounters delay and possible loss at various multiplexing points End-to-end performance is accumulation of per-hop performances 1 2 N N – 1 Packet buffer …
  • 112. Scheduling & QoS End-to-End QoS & Resource Control Buffer & bandwidth control -> Performance Admission control to regulate traffic level Scheduling Concepts fairness/isolation priority, aggregation, Fair Queueing & Variations WFQ, PGPS Guaranteed Service WFQ, Rate-control Packet Dropping aggregation, drop priorities
  • 113. FIFO Queueing All packet flows share the same buffer Transmission Discipline: First-In, First-Out Buffering Discipline: Discard arriving packets if buffer is full (Alternative: random discard; pushout head-of-line, i.e. oldest, packet) Packet buffer Transmission link Arriving packets Packet discard when full
  • 114. FIFO Queueing Cannot provide differential QoS to different packet flows Different packet flows interact strongly Statistical delay guarantees via load control Restrict number of flows allowed (connection admission control) Difficult to determine performance delivered Finite buffer determines a maximum possible delay Buffer size determines loss probability But depends on arrival & packet length statistics Variation: packet enqueueing based on queue thresholds some packet flows encounter blocking before others higher loss, lower delay
  • 115. FIFO Queueing with Discard Priority Packet buffer Transmission link Arriving packets Packet discard when full Packet buffer Transmission link Arriving packets Class 1 discard when full Class 2 discard when threshold exceeded (a) (b)
  • 116. HOL Priority Queueing High priority queue serviced until empty High priority queue has lower waiting time Buffers can be dimensioned for different loss probabilities Surge in high priority queue can cause low priority queue to saturate Transmission link Packet discard when full High-priority packets Low-priority packets Packet discard when full When high-priority queue empty
  • 117. HOL Priority Features Provides differential QoS Pre-emptive priority: lower classes invisible Non-preemptive priority: lower classes impact higher classes through residual service times High-priority classes can hog all of the bandwidth & starve lower priority classes Need to provide some isolation between classes [Plot: delay vs. per-class loads]
  • 118. Earliest Due Date Scheduling Queue in order of “due date” packets requiring low delay get earlier due date packets without delay get indefinite or very long due dates Sorted packet buffer Transmission link Arriving packets Packet discard when full Tagging unit
  • 119. Each flow has its own logical queue: prevents hogging; allows differential loss probabilities C bits/sec allocated equally among non-empty queues transmission rate = C / n(t), where n(t)=# non-empty queues Idealized system assumes fluid flow from queues Implementation requires approximation: simulate fluid system; sort packets according to completion time in ideal system Fair Queueing / Generalized Processor Sharing … … C bits/second Transmission link Packet flow 1 Packet flow 2 Packet flow n Approximated bit-level round robin service
  • 120. Buffer 1 at t =0 Buffer 2 at t =0 1 t 1 2 Fluid-flow system: both packets served at rate 1/2 Both packets complete service at t = 2 0 1 t 1 2 Packet-by-packet system: buffer 1 served first at rate 1; then buffer 2 served at rate 1. Packet from buffer 2 being served Packet from buffer 1 being served Packet from buffer 2 waiting 0
  • 121. Buffer 1 at t =0 Buffer 2 at t =0 2 1 t 3 Fluid-flow system: both packets served at rate 1/2 Packet from buffer 2 served at rate 1 0 2 1 t 1 2 Packet-by-packet fair queueing: buffer 2 served at rate 1 Packet from buffer 1 served at rate 1 Packet from buffer 2 waiting 0 3
  • 122. Buffer 1 at t =0 Buffer 2 at t =0 1 t 1 2 Fluid-flow system: packet from buffer 1 served at rate 1/4; Packet from buffer 1 served at rate 1 Packet from buffer 2 served at rate 3/4 0 1 t 1 2 Packet from buffer 1 served at rate 1 Packet from buffer 2 served at rate 1 Packet from buffer 1 waiting 0 Packet-by-packet weighted fair queueing: buffer 2 served first at rate 1; then buffer 1 served at rate 1
  • 123. Packetized GPS/WFQ Compute packet completion time in ideal system add tag to packet sort packet in queue according to tag serve according to HOL Sorted packet buffer Transmission link Arriving packets Packet discard when full Tagging unit
  • 124. Bit-by-Bit Fair Queueing Assume n flows, n queues 1 round = 1 cycle serving all n queues If each queue gets 1 bit per cycle, then 1 round = # active queues Round number = number of cycles of service that have been completed If packet arrives to idle queue: Finishing time = round number + packet size in bits If packet arrives to active queue: Finishing time = finishing time of last packet in queue + packet size rounds Current Round #
  • 125. Differential Service: If a traffic flow is to receive twice as much bandwidth as a regular flow, then its packet completion time would be half Buffer 1 Buffer 2 Buffer n Number of rounds = Number of bit transmission opportunities Rounds Packet of length k bits begins transmission at this time Packet completes transmission k rounds later …
  • 126. F(i,k,t) = finish time of k th packet that arrives at time t to flow i P(i,k,t) = size of k th packet that arrives at time t to flow i R(t) = round number at time t Computing the Finishing Time Fair Queueing:(take care of both idle and active cases) F(i,k,t) = max{F(i,k-1,t), R(t)} + P(i,k,t) Weighted Fair Queueing: F(i,k,t) = max{F(i,k-1,t), R(t)} + P(i,k,t)/w i Generalize so R(t) continuous, not discrete R(t) grows at rate inversely proportional to n(t) rounds
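A minimal sketch of the tag bookkeeping these formulas describe. The round number R(t) is assumed to be supplied by the fluid-system simulation (it is frozen at 0 here purely for illustration), and the flow names, packet sizes, and weights are made up:

    import heapq

    last_finish = {}      # flow id -> finish tag F of its most recent packet

    def tag(flow, size_bits, R_now, weight=1.0):
        start = max(last_finish.get(flow, 0.0), R_now)   # active vs. idle queue
        last_finish[flow] = start + size_bits / weight   # F = max{F_prev, R} + P/w
        return last_finish[flow]

    # Two equal-weight flows, both backlogged at R(t) = 0, 1000-bit packets:
    q = []
    heapq.heappush(q, (tag("A", 1000, 0.0), "A"))
    heapq.heappush(q, (tag("B", 1000, 0.0), "B"))
    heapq.heappush(q, (tag("A", 1000, 0.0), "A"))   # queues behind A's first packet
    print([heapq.heappop(q)[1] for _ in range(3)])  # service order: ['A', 'B', 'A']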
  • 127. WFQ and Packet QoS WFQ and its many variations form the basis for providing QoS in packet networks Very high-speed implementations available, up to 10 Gbps and possibly higher WFQ must be combined with other mechanisms to provide end-to-end QoS (next section)
  • 128. Buffer Management Packet drop strategy: Which packet to drop when buffers full Fairness: protect behaving sources from misbehaving sources Aggregation: Per-flow buffers protect flows from misbehaving flows Full aggregation provides no protection Aggregation into classes provides intermediate protection Drop priorities: Drop packets from buffer according to priorities Maximizes network utilization & application QoS Examples: layered video, policing at network edge Controlling sources at the edge
  • 129. Early or Overloaded Drop Random early detection: drop pkts if short-term avg of queue exceeds threshold pkt drop probability increases linearly with queue length mark offending pkts improves performance of cooperating TCP sources increases loss probability of misbehaving sources
  • 130. Random Early Detection (RED) TCP sources reduce their input rate in response to network congestion Early drop: discard packets before buffers are full Random drop causes some sources to reduce rate before others, causing gradual reduction in aggregate input rate Algorithm: Maintain running average of queue length If Q_avg < minthreshold, do nothing If Q_avg > maxthreshold, drop packet If in between, drop packet with a probability that increases with Q_avg Flows that send more packets are more likely to have packets dropped
  • 131. Packet Drop Profile in RED [Plot: probability of packet drop vs. average queue length — 0 below min_th, rising linearly between min_th and max_th, and reaching 1 when the buffer is full]
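A sketch of the arrival-time drop decision under this profile; the thresholds, maximum ramp probability, and averaging weight below are illustrative placeholders, not values from the slides:

    import random

    MIN_TH, MAX_TH, MAX_P, WEIGHT = 5.0, 15.0, 0.1, 0.002

    avg = 0.0   # running (exponentially weighted) average queue length

    def on_packet_arrival(queue_len):
        """Return True if the arriving packet should be dropped."""
        global avg
        avg = (1 - WEIGHT) * avg + WEIGHT * queue_len
        if avg < MIN_TH:
            return False                       # below min threshold: do nothing
        if avg > MAX_TH:
            return True                        # above max threshold: drop
        p = MAX_P * (avg - MIN_TH) / (MAX_TH - MIN_TH)
        return random.random() < p             # in between: linear drop probability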
  • 132. Chapter 7 Packet-Switching Networks Traffic Management at the Flow Level
  • 133. Congestion occurs when a surge of traffic overloads network resources Approaches to Congestion Control: Preventive Approaches: Scheduling & Reservations Reactive Approaches: Detect & Throttle/Discard 4 8 6 3 2 1 5 7 Congestion
  • 134. Ideal effect of congestion control: Resources used efficiently up to capacity available Offered load Throughput Controlled Uncontrolled
  • 135. Open-Loop Control Network performance is guaranteed to all traffic flows that have been admitted into the network Initially for connection-oriented networks Key Mechanisms Admission Control Policing Traffic Shaping Traffic Scheduling
  • 136. Admission Control Flows negotiate contract with network Specify requirements: Peak, Avg., Min Bit rate Maximum burst size Delay, Loss requirement Network computes resources needed “ Effective” bandwidth If flow accepted, network allocates resources to ensure QoS delivered as long as source conforms to contract Typical bit rate demanded by a variable bit rate information source Time Bits/second Peak rate Average rate
  • 137. Policing Network monitors traffic flows continuously to ensure they meet their traffic contract When a packet violates the contract, network can discard or tag the packet giving it lower priority If congestion occurs, tagged packets are discarded first Leaky Bucket Algorithm is the most commonly used policing mechanism Bucket has specified leak rate for average contracted rate Bucket has specified depth to accommodate variations in arrival rate Arriving packet is conforming if it does not result in overflow
  • 138. Leaky Bucket algorithm can be used to police arrival rate of a packet stream Let X = bucket content at last conforming packet arrival; the time elapsed since that arrival, t_a − LCT, is the depletion of the bucket, since water drains at a constant rate (1 packet per unit time) while water is poured in irregularly Leak rate corresponds to long-term contracted rate Bucket depth corresponds to maximum allowable burst Assume constant-length packets, as in ATM
  • 139. Leaky Bucket Algorithm Depletion rate: 1 packet per unit time L + I = bucket depth I = increment per arrival, nominal interarrival time X = value of the leaky bucket counter, X' = auxiliary variable, LCT = last conformance time On arrival of a packet at time t_a: X' = X − (t_a − LCT); if X' < 0, set X' = 0 If X' > L: nonconforming packet (X and LCT unchanged) Else: conforming packet; X = X' + I; LCT = t_a
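The flowchart translates almost directly into code; a minimal sketch, using the I = 4, L = 6 values of the example on the next slide:

    def make_policer(I, L):
        state = {"X": 0.0, "LCT": None}
        def arrives(ta):
            """Return True if the packet arriving at time ta conforms."""
            if state["LCT"] is None:                # first packet always conforms
                state["X"], state["LCT"] = float(I), ta
                return True
            Xp = max(0.0, state["X"] - (ta - state["LCT"]))   # drain since LCT
            if Xp > L:
                return False                        # would overflow: nonconforming
            state["X"], state["LCT"] = Xp + I, ta   # conforming: add I, update LCT
            return True
        return arrives

    police = make_policer(I=4, L=6)
    print([police(t) for t in [0, 1, 2, 3, 4]])     # [True, True, True, False, False]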
  • 140. Leaky Bucket Example I = 4 L = 6 Non-conforming packets not allowed into bucket & hence not included in calculations [Plot: bucket content vs. time — each conforming arrival adds I = 4, the bucket drains at 1 per unit time, and arrivals that would push the content above L + I are marked nonconforming (*)]
  • 141. Policing Parameters T = 1 / peak rate MBS = maximum burst size I = nominal interarrival time = 1 / sustainable rate Time MBS T L I
  • 142. Dual Leaky Bucket Dual leaky bucket to police PCR, SCR, and MBS: Tagged or dropped Untagged traffic Incoming traffic Untagged traffic PCR = peak cell rate CDVT = cell delay variation tolerance SCR = sustainable cell rate MBS = maximum burst size Leaky bucket 1 SCR and MBS Leaky bucket 2 PCR and CDVT Tagged or dropped
  • 143. Traffic Shaping Networks police the incoming traffic flow Traffic shaping is used to ensure that a packet stream conforms to specific parameters Networks can shape their traffic prior to passing it to another network Network C Network A Network B Traffic shaping Traffic shaping Policing Policing 1 2 3 4
  • 144. Leaky Bucket Traffic Shaper Buffer incoming packets Play out periodically to conform to parameters Surges in arrivals are buffered & smoothed out Possible packet loss due to buffer overflow Too restrictive, since conforming traffic does not need to be completely smooth Incoming traffic Shaped traffic Size N Packet Server
  • 145. Token Bucket Traffic Shaper Token rate regulates transfer of packets If sufficient tokens available, packets enter network without delay K determines how much burstiness allowed into the network An incoming packet must have sufficient tokens before admission into the network Incoming traffic Shaped traffic Size N Size K Tokens arrive periodically Server Packet Token
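A sketch of the shaper's timing under simplifying assumptions (one token per packet, fluid token accrual at rate r, bucket depth K; the function and the example values are illustrative, not from the slides):

    def shape(arrivals, r, K):
        """arrivals: time-ordered packet arrival times; returns departure times."""
        tokens, clock = K, 0.0                  # bucket starts full
        out = []
        for t in arrivals:
            tokens = min(K, tokens + max(0.0, t - clock) * r)   # replenish
            clock = max(clock, t)
            if tokens < 1:                      # buffer until one token accrues
                clock += (1 - tokens) / r
                tokens = 1.0
            tokens -= 1                         # spend a token, release packet
            out.append(clock)
        return out

    # A 3-packet burst drains the depth-2 bucket; the third packet waits:
    print(shape([0.0, 0.1, 0.2, 3.0], r=1.0, K=2))   # [0.0, 0.1, 1.0, 3.0]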
  • 146. Token Bucket Shaping Effect The token bucket constrains the traffic from a source to at most b + rt bits in any interval of length t: b bits may pass instantly, followed by r bits/second
  • 147. Packet Transfer with Delay Guarantees Token shaper feeding two multiplexers that each serve the flow at bit rate R > r (e.g., using WFQ); assume fluid flow for information Token bucket allows a burst of b bytes into mux 1 & then r bytes/second Since R > r, buffer content @ mux 1 never exceeds b bytes Thus delay @ mux 1 < b/R Rate into second mux is r < R, so bytes are never delayed there [Plot: arrival curve A(t) = b + rt against the service rate R at mux 1 — the backlog empties after the burst; buffer occupancy at mux 2 stays at zero]
  • 148. Delay Bounds with WFQ / PGPS Assume traffic shaped to parameters b & r schedulers give flow at least rate R > r H hop path m is maximum packet size for the given flow M maximum packet size in the network R_j transmission rate in jth hop Maximum end-to-end delay that can be experienced by a packet from flow i: D ≤ b/R + (H − 1)m/R + Σ_{j=1..H} M/R_j
  • 149. Scheduling for Guaranteed Service Suppose guaranteed bounds on end-to-end delay across the network are to be provided A call admission control procedure is required to allocate resources & set schedulers Traffic flows from sources must be shaped/regulated so that they do not exceed their allocated resources Strict delay bounds can be met
  • 150. Current View of Router Function Classifier Input driver Internet forwarder Pkt. scheduler Output driver Routing Agent Reservation Agent Mgmt. Agent Admission Control [Routing database] [Traffic control database]
  • 151. Closed-Loop Flow Control Congestion control feedback information to regulate flow from sources into network Based on buffer content, link utilization, etc. Examples: TCP at transport layer; congestion control at ATM level End-to-end vs. Hop-by-hop Delay in effecting control Implicit vs. Explicit Feedback Source deduces congestion from observed behavior Routers/switches generate messages alerting to congestion
  • 152. End-to-End vs. Hop-by-Hop Congestion Control Source Destination Feedback information Packet flow Source Destination (a) (b)
  • 153. Traffic Engineering Management exerted at flow aggregate level Distribution of flows in network to achieve efficient utilization of resources (bandwidth) Shortest path algorithm to route a given flow not enough Does not take into account requirements of a flow, e.g. bandwidth requirement Does not take into account interplay between different flows Must take into account aggregate demand from all flows
  • 154. Shortest path routing congests the link from node 4 to node 8; a better flow allocation distributes flows more uniformly [Figure: (a) all flows funneled onto link 4–8 by shortest-path routing; (b) the same flows spread across alternate paths]
