'''Network throughput''' (or just '''throughput''', when in context) refers to the rate of message delivery over a [[communication channel]] in a [[communication network]], such as [[Ethernet]] or [[packet radio]]. The data that these messages contain may be delivered over physical or logical links, or through [[network nodes]]. Throughput is usually measured in [[bits per second]] ({{nowrap|bit/s}}, sometimes abbreviated bps), and sometimes in '''packets per second''' ({{nowrap|p/s}} or pps) or data packets per [[time-division multiplexing|time slot]].
 
The '''system throughput''' or '''aggregate throughput''' is the sum of the data rates that are delivered over all channels in a network.<ref>[[Guowang Miao]], Jens Zander, K-W Sung, and Ben Slimane, Fundamentals of Mobile Data Networks, Cambridge University Press, {{ISBN|1107143217}}, 2016.</ref> Throughput represents [[Bandwidth (computing)|digital bandwidth]] consumption.
Four different values relevant in the context of maximum throughput are used in comparing the conceptual ''upper limit'' performance of multiple systems. They are ''maximum theoretical throughput'', ''maximum achievable throughput'', ''peak measured throughput'', and ''maximum sustained throughput''. These values represent different qualities, and care must be taken that the same definitions are used when comparing different ''maximum throughput'' values.
 
Each bit must carry the same amount of information if throughput values are to be compared. [[Data compression]] can significantly alter throughput calculations, including generating values exceeding 100% in some cases.
 
If the communication is mediated by several links in series with different bit rates, the maximum throughput of the overall link is lower than or equal to the lowest bit rate. The lowest value link in the series is referred to as the [[bottleneck (traffic)|bottleneck]].
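A minimal sketch of this bottleneck rule, with hypothetical link rates chosen purely for illustration:

<syntaxhighlight lang="python">
# The end-to-end maximum throughput of links in series is bounded by
# the slowest link (the bottleneck). Rates below are illustrative.
link_rates_bit_s = [100e6, 1e9, 54e6]  # e.g. Fast Ethernet, GigE, 802.11g

max_end_to_end = min(link_rates_bit_s)
print(f"Bottleneck rate: {max_end_to_end / 1e6:.0f} Mbit/s")  # 54 Mbit/s
</syntaxhighlight>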
 
===Maximum theoretical throughput===
Maximum theoretical throughput is closely related to the [[channel capacity]] of the system,<ref>Blahut, 2004, p.4</ref> and is the maximum possible quantity of data that can be transmitted under ideal circumstances. In some cases, this number is reported as equal to the channel capacity, though this can be deceptive, as only non-packetized (asynchronous) technologies can achieve this without data compression. Maximum theoretical throughput is more accurately reported taking into account format and specification [[protocol overhead|overhead]] with best-case assumptions. This number, like the closely related term ''maximum achievable throughput'' below, is primarily used as a rough calculated value, such as for determining bounds on possible performance early in a system design phase.
 
===Asymptotic throughput===
The '''asymptotic throughput''' (less formally '''asymptotic bandwidth''') for a packet-mode [[communication network]] is the value of the [[maximum throughput]] function when the incoming network load approaches [[infinity]], either due to the [[Message passing|message size]] approaching infinity,<ref>''Modeling Message Passing Overhead'' by C.Y Chou et al. in Advances in Grid and Pervasive Computing: First International Conference, GPC 2006 edited by Yeh-Ching Chung and José E. Moreira {{ISBN|3540338098}} pages 299-307</ref> or due to the number of data sources. As with other [[bit rate]]s and [[data bandwidth]]s, the asymptotic throughput is measured in [[bits per second]] {{nowrap|(bit/s)}} or (rarely) [[byte]]s per second {{nowrap|(B/s)}}, where {{nowrap|1 B/s}} is {{nowrap|8 bit/s}}. [[Decimal prefix]]es are used, meaning that {{nowrap|1&nbsp;Mbit/s}} is {{nowrap|1000000 bit/s}}.
 
Asymptotic throughput is usually estimated by sending or [[network simulation|simulating]] a very large message (sequence of data packets) through the network, using a [[greedy source]] and no [[flow control (data)|flow control]] mechanism (i.e., [[User Datagram Protocol|UDP]] rather than [[Transmission Control Protocol|TCP]]), and measuring the volume of data received at the destination node. Traffic load from other sources may reduce this maximum network path throughput. Alternatively, a large number of sources and sinks may be modeled, with or without flow control, and the aggregate maximum network throughput measured (the sum of traffic reaching its destinations). In a network simulation model with infinitely large packet queues, the asymptotic throughput occurs when the [[Network latency|latency]] (the packet queuing time) goes to infinity, while if the packet queues are limited, or the network is a multi-drop network where collisions may occur, the packet-dropping rate approaches 100%.
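The following is a minimal sketch of the receiver side of such a measurement, assuming a hypothetical port number and measurement window; a real estimate would also need a greedy UDP sender and longer, repeated runs:

<syntaxhighlight lang="python">
import socket
import time

# Receiver side: count the volume of UDP data (no flow control)
# arriving from a greedy source, then report the measured throughput.
PORT = 5005        # hypothetical port
DURATION = 10.0    # measurement window in seconds

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", PORT))
sock.settimeout(1.0)

received = 0
start = time.monotonic()
while time.monotonic() - start < DURATION:
    try:
        data, _ = sock.recvfrom(65536)
        received += len(data)
    except socket.timeout:
        pass  # keep waiting until the window ends

elapsed = time.monotonic() - start
print(f"Throughput: {8 * received / elapsed / 1e6:.1f} Mbit/s")
</syntaxhighlight>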
 
A well-known application of asymptotic throughput is in modeling [[point-to-point communication]] where (following Hockney) [[Network latency|message latency]] <math>T(N)</math> is modeled as a function of message length <math>N</math> as <math>T(N) = (M + N)/A</math> where <math>A</math> is the asymptotic bandwidth and <math>M</math> is the half-peak length.<ref>''Recent Advances in Parallel Virtual Machine and Message Passing Interface'' by Jack Dongarra, Emilio Luque and Tomas Margalef 1999 {{ISBN|3540665498}} page 134</ref>
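A short sketch of this model, using illustrative values for <math>A</math> and <math>M</math>: the effective throughput <math>N/T(N) = AN/(M+N)</math> approaches <math>A</math> for large messages and equals <math>A/2</math> when <math>N = M</math> (hence "half-peak length"):

<syntaxhighlight lang="python">
# Hockney model: latency T(N) = (M + N) / A, so the effective
# throughput N / T(N) = A * N / (M + N) approaches A asymptotically.
A = 1e9    # asymptotic bandwidth, bit/s (illustrative)
M = 8e3    # half-peak length, bits (illustrative)

for N in (1e3, 8e3, 1e6, 1e9):  # message lengths in bits
    throughput = A * N / (M + N)
    print(f"N = {N:>10.0f} bits -> {throughput / 1e6:8.1f} Mbit/s")
# At N == M the throughput is exactly A / 2; for N >> M it nears A.
</syntaxhighlight>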
 
As well as its use in general network modeling, asymptotic throughput is used in modeling performance on [[massively parallel]] computer systems, where system operation is highly dependent on communication overhead as well as processor performance.<ref>M. Resch et al., ''A comparison of MPI performance on different MPPs'', in Recent Advances in Parallel Virtual Machine and Message Passing Interface, Lecture Notes in Computer Science, 1997, Volume 1332/1997, 25-32</ref> In these applications, asymptotic throughput is used in the Xu and Hwang model (more general than Hockney's approach), which includes the number of processors, so that both the latency and the asymptotic throughput are functions of the number of processors.<ref>''High-Performance Computing and Networking'' edited by Angelo Mañas, Bernardo Tafalla and Rou Rey Jay Pallones 1998 {{ISBN|3540644431}} page 935</ref>
 
===Peak measured throughput===
{{unsourced section|date=May 2025}}
Where asymptotic throughput is a theoretical or calculated capacity, ''peak measured throughput'' is throughput measured on a real, implemented system, or on a simulated system. The value is the throughput measured over a short period of time; mathematically, this is the limit taken with respect to throughput as time approaches zero. This term is synonymous with ''instantaneous throughput''. This number is useful for systems that rely on burst data transmission; however, for systems with a high [[duty cycle]], it is less likely to be a useful measure of system performance.
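As an illustration, instantaneous throughput can be approximated from a packet trace by averaging over a short window, in contrast to the long-run average; the trace, timestamps and window below are fabricated for illustration:

<syntaxhighlight lang="python">
# Approximate instantaneous (peak) throughput from (timestamp, bytes)
# samples by averaging over a short window; the long-run average is
# shown for contrast. The trace is a fabricated burst then silence.
trace = [(0.00, 1500), (0.01, 1500), (0.02, 1500), (0.03, 1500),
         (5.00, 1500)]

WINDOW = 0.05  # "short period of time" in seconds
burst = [size for t, size in trace if t < WINDOW]
peak = 8 * sum(burst) / WINDOW
total_time = trace[-1][0] or 1.0
sustained = 8 * sum(size for _, size in trace) / total_time

print(f"Peak (50 ms window): {peak / 1e6:.2f} Mbit/s")    # ~0.96 Mbit/s
print(f"Long-run average:    {sustained / 1e6:.4f} Mbit/s")
</syntaxhighlight>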
 
===Maximum sustained throughput===
Maximum sustained throughput is the throughput averaged or integrated over a long time (sometimes considered infinity). For networks under constant load, this is likely to be the most accurate indicator of system performance. The maximum throughput is defined as the [[asymptotic throughput]] when the load (the amount of incoming data) is very large. In [[packet-switched network]]s, as long as [[packet loss]] is not occurring, the load and the throughput are equal. The maximum throughput may then be defined as the minimum load in {{nowrap|bit/s}} that causes [[packet loss]] or causes the [[Network latency|latency]] to become unstable and increase towards infinity. This value can also be used deceptively in relation to peak measured throughput to conceal [[packet shaping]].
 
==Channel utilization and efficiency==
Throughput is sometimes normalized and measured in percentage, but normalization may cause confusion regarding what the percentage is related to. ''Channel utilization'', ''channel efficiency'' and ''[[Packet loss|packet drop rate]]'' in percentage are less ambiguous terms.
 
The channel efficiency, also known as [[bandwidth utilization efficiency]], is the percentage of the [[net bit rate]] (in {{nowrap|bit/s}}) of a digital [[communication channel]] that goes to the actually achieved throughput. For example, if the throughput is {{nowrap|70&nbsp;Mbit/s}} over a {{nowrap|100&nbsp;Mbit/s}} Ethernet connection, the channel efficiency is 70%. In this example, effectively 70&nbsp;Mbit of data are transmitted every second.
 
Channel utilization, in contrast, refers to the total use of the channel, disregarding the throughput: it counts both the data bits and the transmission overhead in the channel. The transmission overhead consists of preamble sequences, frame headers and acknowledgment packets. The definitions assume a noiseless channel; otherwise, the throughput would be associated not only with the nature (efficiency) of the protocol, but also with retransmissions resulting from the quality of the channel. In a simplistic approach, channel efficiency can be equal to channel utilization, assuming that acknowledgment packets are zero-length and that the communications provider will not see any bandwidth relative to retransmissions or headers. Therefore, certain texts mark a difference between channel utilization and protocol efficiency.
 
In a point-to-point or [[point-to-multipoint communication]] link, where only one terminal is transmitting, the maximum throughput is often equivalent to or very near the physical data rate (the [[channel capacity]]), since the channel utilization can be almost 100% in such a network, except for a small [[inter-frame gap]].
 
For example, the maximum frame size in Ethernet is 1526 bytes: up to 1500 bytes for the payload, eight bytes for the preamble, 14 bytes for the header, and 4 bytes for the trailer. An additional minimum interframe gap corresponding to 12 bytes is inserted after each frame. This corresponds to a maximum channel utilization of 1526&nbsp;/ (1526&nbsp;+ 12)&nbsp;× 100%&nbsp;= 99.22%, or a maximum channel use of {{nowrap|99.22&nbsp;Mbit/s}} inclusive of Ethernet datalink layer protocol overhead over a {{nowrap|100&nbsp;Mbit/s}} Ethernet connection. The maximum throughput or channel efficiency is then 1500&nbsp;/ (1526&nbsp;+ 12)&nbsp;= 97.5%, exclusive of the Ethernet protocol overhead.
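A short sketch reproducing these utilization and efficiency figures from the frame-size constants above:

<syntaxhighlight lang="python">
# Reproduce the Ethernet utilization/efficiency figures above.
PAYLOAD  = 1500   # maximum payload, bytes
PREAMBLE = 8      # preamble, bytes
HEADER   = 14     # header, bytes
TRAILER  = 4      # trailer, bytes
GAP      = 12     # minimum interframe gap, bytes

frame = PAYLOAD + PREAMBLE + HEADER + TRAILER  # 1526 bytes
utilization = frame / (frame + GAP)            # data bits plus overhead
efficiency  = PAYLOAD / (frame + GAP)          # data bits only

print(f"Channel utilization: {utilization:.2%}")  # 99.22%
print(f"Channel efficiency:  {efficiency:.2%}")   # 97.53%
</syntaxhighlight>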
 
==Factors affecting throughput==
The maximum achievable throughput (the channel capacity) is affected by the bandwidth in hertz and [[signal-to-noise ratio]] of the analog physical medium.
 
Despite the conceptual simplicity of digital information, all electrical signals traveling over wires are analog. The analog limitations of wires or wireless systems inevitably provide an upper bound on the amount of information that can be sent. The dominant equation here is the [[Shannon–Hartley theorem]], and analog limitations of this type can be understood as factors that affect either the analog bandwidth of a signal or as factors that affect the signal-to-noise ratio. The bandwidth of wired systems can be in fact surprisingly{{according to whom?|date=May 2025}} narrow, with the bandwidth of Ethernet wire limited to approximately 1&nbsp;GHz, and PCB traces limited to a similar figure.
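As an illustration, the Shannon–Hartley bound <math>C = B \log_2(1 + S/N)</math> for a channel of roughly this bandwidth, with an assumed (illustrative) signal-to-noise ratio:

<syntaxhighlight lang="python">
import math

# Shannon–Hartley theorem: C = B * log2(1 + S/N), where B is the
# analog bandwidth in hertz and S/N the linear signal-to-noise ratio.
B = 1e9        # bandwidth, Hz (roughly the Ethernet-wire figure above)
snr_db = 30    # signal-to-noise ratio, dB (illustrative assumption)

snr = 10 ** (snr_db / 10)
C = B * math.log2(1 + snr)
print(f"Capacity upper bound: {C / 1e9:.2f} Gbit/s")  # ~9.97 Gbit/s
</syntaxhighlight>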
 
Digital systems refer to the 'knee frequency',<ref>Johnson, 1993, 2-5</ref> which is set by the rise time <math>T_r</math>, the amount of time for the digital voltage to rise from 10% of a nominal digital '0' to 90% of a nominal digital '1'. The knee frequency is related to the required bandwidth of a channel, and can be related to the [[3 dB bandwidth]] of a system by the equation:<ref>Johnson, 1993, 9</ref> <math>F_{3\,\mathrm{dB}} \approx K/T_r</math>, where <math>K</math> is a constant of proportionality.
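A small sketch of this relation, assuming <math>K \approx 0.35</math>, a commonly used value for a 10–90% rise time:

<syntaxhighlight lang="python">
# 3 dB bandwidth estimated from rise time: F_3dB ~= K / T_r,
# with K ~= 0.35 as a common rule of thumb for a 10-90% rise time.
K = 0.35
T_r = 1e-9  # rise time, seconds (illustrative: 1 ns)

F_3dB = K / T_r
print(f"F_3dB ~= {F_3dB / 1e6:.0f} MHz")  # 350 MHz
</syntaxhighlight>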
Computational systems have finite processing power and can drive finite current. Limited current drive capability can limit the effective signal-to-noise ratio for high-[[capacitance]] links.
 
Large data loads that require processing impose data processing requirements on hardware (such as routers). For example, a gateway router supporting a populated [[class B subnet]], handling 10 × {{nowrap|100&nbsp;Mbit/s}} Ethernet channels, must examine 16 bits of address to determine the destination port for each packet. This translates into 81,913 packets per second (assuming maximum data payload per packet); with a table of 2<sup>16</sup> addresses, the router must be able to perform 5.368 billion lookup operations per second. In a worst-case scenario, where the payloads of each Ethernet packet are reduced to 100 bytes, this number of operations per second jumps to 520 billion. This router would require a multi-teraflop processing core to be able to handle such a load.
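The arithmetic behind these figures, assuming maximum-size (1526-byte) frames and a naive linear scan of the full address table (one lookup operation per entry per packet):

<syntaxhighlight lang="python">
# Reproduce the router-load arithmetic above: 10 x 100 Mbit/s
# channels, maximum-size Ethernet frames, and a naive linear scan
# of a 2^16-entry address table.
channels = 10
rate_bit_s = 100e6
frame_bytes = 1526      # maximum Ethernet frame size (see above)
table_size = 2 ** 16

packets_per_s = channels * rate_bit_s / 8 / frame_bytes
lookups_per_s = packets_per_s * table_size
print(f"{packets_per_s:,.0f} packets/s")               # ~81,913
print(f"{lookups_per_s / 1e9:.3f} billion lookups/s")  # ~5.368
</syntaxhighlight>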
 
* [[CSMA/CD]] and [[CSMA/CA]] "backoff" waiting time and frame retransmissions after detected collisions. This may occur in Ethernet bus networks and hub networks, as well as in wireless networks.