}}</ref>]]
'''Multiqueue NICs''' provide multiple transmit and receive [[Queue (abstract data type)|queues]], allowing packets received by the NIC to be assigned to one of its receive queues. The NIC may distribute incoming traffic among the receive queues using a [[hash function]]. Each receive queue is assigned to a separate [[interrupt]]; by routing each of those interrupts to different [[CPU]]s or [[Multi-core processor|CPU cores]], the processing of interrupt requests triggered by traffic received on a single NIC can be distributed across processors, improving performance.<ref name="linux-net-scaling">{{cite web
| url = https://www.kernel.org/doc/Documentation/networking/scaling.txt
| title = Linux kernel documentation: Documentation/networking/scaling.txt
| publisher = [[Intel]] }}</ref>
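The hash-based queue assignment described above can be sketched as follows. This is an illustrative model only: the flow 4-tuple fields, the hash choice, and the queue count are assumptions for demonstration, while real NICs commonly implement a Toeplitz hash with a configurable key in hardware.

```python
import hashlib

NUM_RX_QUEUES = 4  # hypothetical NIC with four receive queues

def rx_queue_for_flow(src_ip: str, dst_ip: str,
                      src_port: int, dst_port: int) -> int:
    """Hash the flow 4-tuple and pick a receive queue index."""
    flow = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    digest = hashlib.sha256(flow).digest()
    return int.from_bytes(digest[:4], "big") % NUM_RX_QUEUES

# Packets belonging to the same flow always hash to the same queue,
# so the CPU core handling that queue's interrupt sees the whole flow.
q1 = rx_queue_for_flow("10.0.0.1", "10.0.0.2", 40000, 80)
q2 = rx_queue_for_flow("10.0.0.1", "10.0.0.2", 40000, 80)
assert q1 == q2
```

Because the queue index depends only on the flow identity, in-order delivery within a flow is preserved even though different flows are processed on different cores.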
The hardware-based distribution of the interrupts, described above, is referred to as '''receive-side scaling''' (RSS).<ref name="intel-grantley">{{cite web
| url = http://www.intel.com/content/dam/technology-provider/secure/us/en/documents/product-marketing-information/tst-grantley-launch-presentation-2014.pdf
| title = Intel Look Inside: Intel Ethernet
| archive-url = https://web.archive.org/web/20150326095816/http://www.intel.com/content/dam/technology-provider/secure/us/en/documents/product-marketing-information/tst-grantley-launch-presentation-2014.pdf
| archive-date = March 26, 2015
}}</ref>{{rp|82}} Purely software implementations also exist, such as the [[receive packet steering]] (RPS)
| url = https://www.kernel.org/doc/Documentation/networking/ixgbe.txt
| title = Linux kernel documentation: Documentation/networking/ixgbe.txt
| title = Introduction to Intel Ethernet Flow Director and Memcached Performance
| date = October 14, 2014 | access-date = October 11, 2015
| publisher = [[Intel]] }}</ref> Further performance improvements can be achieved by routing the interrupt requests to the CPUs or cores executing the applications that are the ultimate destinations of the [[network packet]]s that generated those interrupts. This technique improves [[locality of reference]], yielding higher overall performance, lower latency, and better hardware utilization through greater use of [[CPU cache]]s and fewer required [[context switch]]es.
With multiqueue NICs, additional performance improvements can be achieved by distributing outgoing traffic among the different transmit queues. Assigning each transmit queue to a particular CPU or CPU core avoids contention for transmit resources inside the operating system. This approach is usually referred to as '''transmit packet steering''' (XPS).<ref name="linux-net-scaling" />
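As an illustration of how RPS and XPS are configured in practice, the Linux kernel documentation cited above describes per-queue sysfs files (rps_cpus for receive queues, xps_cpus for transmit queues) that accept a hexadecimal bitmask of CPUs. The device name and queue numbers in the comments below are examples, not defaults; the helper merely builds such a mask.

```python
def cpu_mask(cpus):
    """Return the hexadecimal bitmask string for a set of CPU indices,
    in the format accepted by the rps_cpus and xps_cpus sysfs files."""
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return format(mask, "x")

# Steer receive queue 0 of a hypothetical "eth0" to CPUs 0-3:
#   echo f > /sys/class/net/eth0/queues/rx-0/rps_cpus
# Steer transmit queue 0 to CPUs 4-7:
#   echo f0 > /sys/class/net/eth0/queues/tx-0/xps_cpus
assert cpu_mask([0, 1, 2, 3]) == "f"
assert cpu_mask([4, 5, 6, 7]) == "f0"
```

Each bit position in the mask corresponds to one CPU, so disjoint masks on different queues spread the packet-steering work across distinct cores.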
Some products feature '''NIC partitioning''' ('''NPAR''', also known as '''port partitioning''') that uses [[SR-IOV]] virtualization to divide a single 10 Gigabit Ethernet NIC into multiple discrete virtual NICs with dedicated bandwidth, which are presented to the firmware and operating system as separate [[PCI device function]]s.<ref name="Dell">{{cite web
| url = http://www.dell.com/downloads/global/products/pedge/en/Dell-Broadcom-NPAR-White-Paper.pdf
| title = Enhancing Scalability Through Network Interface Card Partitioning
| publisher = [[Intel]] }}</ref>
Some NICs provide a [[TCP offload engine]]
| url = https://lwn.net/Articles/243949/
| title = Large receive offload
{{Anchor|SOLARFLARE|OPENONLOAD|USER-LEVEL-NETWORKING}}
Some NICs offer integrated [[field-programmable gate array]]s (FPGAs) for user-programmable processing of network traffic before it reaches the host computer, allowing for significantly reduced [[Latency (engineering)|latencies]] in time-sensitive workloads.<ref>{{cite web|title=High Performance Solutions for Cyber Security|url=http://newwavedv.com/markets/defense/cyber-security/|website=New Wave Design & Verification|publisher=New Wave DV}}</ref> Moreover, some NICs offer complete low-latency [[TCP/IP stack]]s running on integrated FPGAs in combination with [[userspace]] libraries that intercept networking operations usually performed by the [[operating system kernel]]; Solarflare's open-source '''OpenOnload''' network stack that runs on [[Linux]] is an example. This kind of functionality is usually referred to as '''user-level networking'''.<ref>{{cite web
| url = https://www.theregister.co.uk/2012/02/08/solarflare_application_onload_engine/
| title = Solarflare turns network adapters into servers: When a CPU just isn't fast enough
|