Network throughput

{{See also|Peak information rate}}
 
Users of telecommunications devices, systems designers, and researchers in communication theory are often interested in knowing the expected performance of a system. From a user perspective, this is often phrased as either "which device will get my data there most effectively for my needs?" or "which device will deliver the most data per unit cost?". Systems designers are often interested in selecting the most effective architecture or design constraints for a system, which drive its final performance. In most cases, the benchmark of what a system is capable of, or its "maximum performance", is what the user or designer is interested in. The term maximum throughput is frequently used when discussing end-user maximum throughput tests.
 
Maximum throughput is essentially synonymous with [[digital bandwidth capacity]].
 
Four different values are relevant in the context of "maximum throughput", used in comparing the conceptual 'upper limit' performance of multiple systems. They are 'maximum theoretical throughput', 'maximum achievable throughput', 'peak measured throughput', and 'maximum sustained throughput'. These values represent different quantities, and care must be taken that the same definitions are used when comparing different 'maximum throughput' values. Each bit must carry the same amount of information if throughput values are to be compared. [[Data compression]] can significantly alter throughput calculations, including generating values exceeding 100% in some cases. If the communication is mediated by several links in series with different bit rates, the maximum throughput of the overall link is lower than or equal to the lowest bit rate. The lowest-rate link in the series is referred to as the [[bottleneck (traffic)|bottleneck]].
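The series-link rule can be stated as a minimal sketch; the individual link rates below are hypothetical example values.

<syntaxhighlight lang="python">
# Minimal sketch: the end-to-end maximum throughput of links in series is
# bounded by the slowest (bottleneck) link.  Link rates are example values.
link_rates_bit_per_s = [1e9, 100e6, 54e6]   # e.g. backbone, Ethernet, Wi-Fi hop
bottleneck = min(link_rates_bit_per_s)
print(f"End-to-end maximum throughput <= {bottleneck / 1e6:.0f} Mbit/s")
</syntaxhighlight>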
 
===Maximum theoretical throughput===
 
===Asymptotic throughput===
The '''asymptotic throughput''' (less formally ''asymptotic bandwidth'') for a packet-mode [[communication network]] is the value of the [[maximum throughput]] function when the incoming network load approaches [[infinity]], either because the [[Message passing|message size]] approaches [[infinity]],<ref>''Modeling Message Passing Overhead'' by C.Y Chou et al. in Advances in Grid and Pervasive Computing: First International Conference, GPC 2006 edited by Yeh-Ching Chung and José E. Moreira {{ISBN|3540338098}} pages 299-307</ref> or because the number of data sources is very large. As with other [[bit rate]]s and [[data bandwidth]]s, the asymptotic throughput is measured in [[bits per second]] (bit/s), and very seldom in [[byte]]s per second (B/s), where 1 B/s is 8 bit/s. [[Decimal prefix]]es are used, meaning that 1 Mbit/s is 1,000,000 bit/s.
 
Asymptotic throughput is usually estimated by sending or [[network simulation|simulating]] a very large message (sequence of data packets) through the network, using a [[greedy source]] and no [[flow control (data)|flow control]] mechanism (i.e., [[User Datagram Protocol|UDP]] rather than [[Transmission Control Protocol|TCP]]), and measuring the network path throughput at the destination node. Traffic between other sources may reduce this maximum network path throughput. Alternatively, a large number of sources and sinks may be modeled, with or without flow control, and the aggregate maximum network throughput measured (the sum of traffic reaching its destinations). In a network simulation model with infinite packet queues, the asymptotic throughput occurs when the [[Network latency|latency]] (the packet queuing time) goes to infinity; if the packet queues are limited, or the network is a multi-drop network where collisions may occur, the packet-dropping rate instead approaches 100%.
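A minimal sketch of such a measurement over the loopback interface is shown below; the host, port, payload size, and duration are arbitrary example values, and a real test would run the sender and receiver on separate hosts across the path under test.

<syntaxhighlight lang="python">
# Greedy-source throughput probe over UDP (no flow control), with the
# throughput measured at the destination socket.  All parameters are
# illustrative; datagrams dropped by full buffers simply do not count.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 50007   # loopback test values (assumption)
PAYLOAD = b"\x00" * 1400          # datagram payload, below a typical MTU
DURATION = 2.0                    # seconds of greedy sending

received_bytes = 0

def receiver(sock):
    global received_bytes
    sock.settimeout(1.0)          # stop once the sender has gone quiet
    while True:
        try:
            data, _ = sock.recvfrom(2048)
            received_bytes += len(data)
        except socket.timeout:
            return

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind((HOST, PORT))
thread = threading.Thread(target=receiver, args=(rx,))
thread.start()

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
deadline = time.time() + DURATION
while time.time() < deadline:     # greedy source: send as fast as possible
    tx.sendto(PAYLOAD, (HOST, PORT))
thread.join()

print(f"Delivered ~ {received_bytes * 8 / DURATION / 1e6:.1f} Mbit/s at the destination")
</syntaxhighlight>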
 
A well-known application of asymptotic throughput is in modeling [[point-to-point communication]] where (following Hockney) [[latency (engineering)|message latency]] T(N) is modeled as a function of message length N as T(N) = (M + N)/A, where A is the asymptotic bandwidth and M is the half-peak length.<ref>''Recent Advances in Parallel Virtual Machine and Message Passing Interface'' by Jack Dongarra, Emilio Luque and Tomas Margalef 1999 {{ISBN|3540665498}} page 134</ref>
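Rearranging this expression shows why ''M'' is called the half-peak length: the effective throughput for a message of length ''N'' is

<math>r(N) = \frac{N}{T(N)} = \frac{A\,N}{M + N},</math>

which approaches the asymptotic bandwidth ''A'' as ''N'' grows without bound and equals ''A''/2 exactly when ''N'' = ''M''.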
 
As well as its use in general network modeling, asymptotic throughput is used in modeling performance on [[massively parallel]] computer systems, where system operation is highly dependent on communication overhead as well as processor performance.<ref>M. Resch et al. ''A comparison of MPI performance on different MPPs'' in Recent Advances in Parallel Virtual Machine and Message Passing Interface, Lecture Notes in Computer Science, 1997, Volume 1332/1997, 25-32</ref> In these applications, asymptotic throughput is used in the Xu and Hwang model (more general than Hockney's approach), which includes the number of processors, so that both the latency and the asymptotic throughput are functions of the number of processors.<ref>''High-Performance Computing and Networking'' edited by Angelo Mañas, Bernardo Tafalla and Rou Rey Jay Pallones 1998 {{ISBN|3540644431}} page 935</ref>
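A hedged sketch of what such a generalization looks like (the exact published parametrization may differ) is obtained by letting both terms of the Hockney expression depend on the processor count ''n'':

<math>T(n, N) = T_0(n) + \frac{N}{A(n)},</math>

where ''T''<sub>0</sub>(''n'') is the latency and ''A''(''n'') the asymptotic throughput observed with ''n'' processors.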
 
===Maximum sustained throughput===
This value is the throughput averaged or integrated over a long time (sometimes considered infinity). For high duty cycle networks, this is likely to be the most accurate indicator of system performance. The maximum throughput is defined as the [[asymptotic throughput]] when the load (the amount of incoming data) is very large. In [[packet switched]] systems where the load and the throughput are always equal (where [[packet loss]] does not occur), the maximum throughput may be defined as the minimum load in bit/s that causes the delivery time (the [[Latency (engineering)|latency]]) to become unstable and increase towards infinity. This value can also be used deceptively in relation to peak measured throughput to conceal [[packet shaping]].
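The divergence of delivery time near the maximum sustainable load can be illustrated with a textbook single-server queue (an M/M/1 model, used here only as an illustration and not implied by the definition above):

<syntaxhighlight lang="python">
# Illustration: in an M/M/1 queue the mean time in the system is 1/(mu - lam),
# which grows without bound as the offered load lam approaches the capacity mu.
mu = 1.0e6                                  # service capacity, packets/s (example value)
for utilisation in (0.5, 0.9, 0.99, 0.999):
    lam = utilisation * mu                  # offered load in packets/s
    mean_delay = 1.0 / (mu - lam)           # seconds spent in the system
    print(f"load {utilisation:6.1%}: mean delay {mean_delay * 1e6:9.1f} microseconds")
</syntaxhighlight>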
 
==Channel utilization and efficiency==
Throughput is sometimes normalized and measured as a percentage, but normalization may cause confusion regarding what the percentage refers to. ''[[Channel utilization]]'', ''[[channel efficiency]]'' and ''[[packet drop rate]]'' expressed as percentages are less ambiguous terms.
 
The channel efficiency, also known as [[bandwidth utilization efficiency]], is the percentage of the [[net bit rate]] (in bit/s) of a digital [[communication channel]] that is actually achieved as throughput. For example, if the throughput is 70&nbsp;Mbit/s in a 100&nbsp;Mbit/s Ethernet connection, the channel efficiency is 70%. In this example, effectively 70&nbsp;Mbit of data are transmitted every second.
 
Channel utilization is instead a term related to the use of the channel, disregarding the throughput. It counts not only the data bits but also the overhead that makes use of the channel. The transmission overhead consists of preamble sequences, frame headers and acknowledgement packets. The definitions assume a noiseless channel; otherwise, the throughput would be associated not only with the nature (efficiency) of the protocol but also with retransmissions resulting from the quality of the channel. In a simplistic approach, channel efficiency can be equal to channel utilization, assuming that acknowledgement packets are zero-length and that no bandwidth is consumed by retransmissions or headers. Therefore, certain texts distinguish between channel utilization and protocol efficiency.
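The per-frame bookkeeping can be sketched with standard Ethernet framing sizes (preamble and start-of-frame delimiter 8 bytes, MAC header 14 bytes, FCS 4 bytes, interframe gap 12 bytes); acknowledgements and retransmissions are ignored, as in the simplistic approach above.

<syntaxhighlight lang="python">
# Sketch: on an always-busy, noiseless Ethernet-like link the channel is fully
# utilised, while the share of the line rate carrying payload is lower because
# of the fixed per-frame overhead.
PREAMBLE, HEADER, FCS, IFG = 8, 14, 4, 12   # bytes of overhead per frame
LINE_RATE = 100e6                           # bit/s (example: Fast Ethernet)

for payload in (46, 512, 1500):             # payload sizes in bytes
    on_wire = PREAMBLE + HEADER + payload + FCS + IFG
    payload_share = payload / on_wire
    print(f"{payload:>4} B payload: {payload_share:6.1%} of the line rate, "
          f"{payload_share * LINE_RATE / 1e6:5.1f} Mbit/s of payload")
</syntaxhighlight>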
where ''T''<sub>''r''</sub> is the 10% to 90% rise time, and ''K'' is a constant of proportionality related to the pulse shape, equal to 0.35 for an exponential rise and 0.338 for a Gaussian rise.
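As a numeric illustration, assuming the relation takes the usual form of bandwidth &asymp; ''K''/''T''<sub>''r''</sub>: with the exponential-rise constant ''K'' = 0.35, a rise time of 1&nbsp;ns corresponds to a bandwidth of roughly

<math>BW \approx \frac{0.35}{1\,\text{ns}} = 350\,\text{MHz}.</math>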
 
*RC losses: Wires have an inherent resistance, and an inherent [[capacitance]] when measured with respect to ground. This leads to effects called [[parasitic capacitance]], causing all wires and cables to act as RC lowpass filters; a cutoff-frequency sketch follows this list.
*[[Skin effect]]: As frequency increases, electric charges migrate to the edges of wires or cables. This reduces the effective cross-sectional area available for carrying current, increasing resistance and reducing the signal-to-noise ratio. For [[American wire gauge|AWG]] 24 wire (of the type commonly found in [[Cat 5e]] cable), the skin effect frequency becomes dominant over the inherent resistivity of the wire at 100&nbsp;kHz. At 1&nbsp;GHz the resistivity has increased to 0.1 ohms/inch.<ref>Johnson, 1993, 154</ref>
*Termination and ringing: Long wires (wires longer than 1/6 of a wavelength can be considered long) must be modeled as [[transmission line]]s, with termination taken into account. Unless this is done, reflected signals will travel back and forth across the wire, positively or negatively interfering with the information-carrying signal.<ref>Johnson, 1993, 160-170</ref>
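The RC lowpass behaviour mentioned in the first bullet sets a cutoff frequency of 1/(2&pi;RC); the resistance and capacitance figures below are purely illustrative.

<syntaxhighlight lang="python">
# Sketch: a wire's series resistance R and capacitance to ground C form an RC
# low-pass filter with a -3 dB cutoff at f_c = 1 / (2 * pi * R * C).
import math

R = 50.0        # ohms of series resistance (example value)
C = 100e-12     # farads of capacitance to ground (example value)
f_c = 1.0 / (2.0 * math.pi * R * C)
print(f"-3 dB cutoff ~ {f_c / 1e6:.0f} MHz")   # about 32 MHz for these values
</syntaxhighlight>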
 
===IC hardware considerations===
Computational systems have finite processing power and can drive finite current. Limited current drive capability can limit the effective signal-to-noise ratio for high-[[capacitance]] links.
 
Large data loads that require processing impose data processing requirements on hardware (such as routers). For example, a gateway router supporting a populated [[class B subnet]], handling 10 × 100 Mbit/s Ethernet channels, must examine 16 bits of address to determine the destination port for each packet. This translates into 81913 packets per second (assuming maximum data payload per packet); with a table of 2^16 addresses, this requires the router to be able to perform 5.368 billion lookup operations per second. In a worst-case scenario, where the payloads of each Ethernet packet are reduced to 100 bytes, this number of operations per second jumps to 520 billion. This router would require a multi-teraflop processing core to be able to handle such a load.
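The packet-rate arithmetic can be sketched as follows, assuming maximum-size Ethernet frames of 1526 bytes on the wire (1500-byte payload plus preamble, header and FCS, with the interframe gap ignored) and a naive linear scan of the full 2<sup>16</sup>-entry address table for every packet:

<syntaxhighlight lang="python">
# Sketch of the packet-rate arithmetic above.  Frame size and the brute-force
# linear table scan are assumptions made for illustration.
aggregate_rate = 10 * 100e6             # bit/s from ten 100 Mbit/s channels
frame_bits = 1526 * 8                   # maximum-size frame on the wire
packets_per_s = aggregate_rate / frame_bits
lookups_per_s = packets_per_s * 2**16   # one full linear scan per packet
print(f"{packets_per_s:,.0f} packets/s -> {lookups_per_s / 1e9:.2f} billion lookups/s")
</syntaxhighlight>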
{{main|Goodput}}
 
The maximum throughput is often an unreliable measurement of perceived bandwidth, for example the file transmission data rate in bits per second. As pointed out above, the achieved throughput is often lower than the maximum throughput. Also, the protocol overhead affects the perceived bandwidth. The throughput is not a well-defined metric when it comes to how to deal with protocol overhead. It is typically measured at a reference point below the network layer and above the physical layer. The simplest definition is the number of bits per second that are physically delivered. A typical example where this definition is applied is an Ethernet network. In this case, the maximum throughput is the [[gross bit rate]] or raw bit rate.
 
However, in schemes that include [[forward error correction codes]] (channel coding), the redundant error code is normally excluded from the throughput. An example is [[modem]] communication, where the throughput is typically measured at the interface between the [[Point-to-Point Protocol]] (PPP) and the circuit-switched modem connection. In this case, the maximum throughput is often called the [[net bit rate]] or useful bit rate.
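The different reference points can be sketched with purely illustrative numbers: a gross (physical-layer) bit rate, a forward error correction code rate, and per-packet protocol headers; none of these figures come from a specific standard.

<syntaxhighlight lang="python">
# Sketch of the reference points discussed above, with illustrative numbers.
gross_bit_rate = 11e6          # bit/s on the physical layer (example value)
fec_code_rate = 3 / 4          # fraction of coded bits that carry data (assumption)
payload, headers = 1460, 40    # bytes of user data vs protocol headers per packet

net_bit_rate = gross_bit_rate * fec_code_rate              # after channel coding
goodput = net_bit_rate * payload / (payload + headers)     # user data delivered
print(f"gross {gross_bit_rate / 1e6:.2f} Mbit/s, net {net_bit_rate / 1e6:.2f} Mbit/s, "
      f"goodput {goodput / 1e6:.2f} Mbit/s")
</syntaxhighlight>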