As shown in the figure beside, packets coming into the [[Network_interface_controller|network interface card (NIC)]] are processed and loaded into the receive queues managed by the cores.
The main objective is to leverage all the cores available within the processor to handle incoming packets, while also improving performance metrics such as [[Latency (engineering)|latency]] and [[Network throughput|throughput]].
<ref name="RSS kernel linux docs">{{Cite web|title=RSS kernel linux docs|url=https://www.kernel.org/doc/html/v5.1/networking/scaling.html#rss-receive-side-scaling|access-date=2025-07-08|website=kernel.org|language=en-US}}</ref><ref name="RFS by redhat">{{Cite web|title=RFS by redhat|url=https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/performance_tuning_guide/network-rfs|access-date=2025-07-08|website=docs.redhat.com|language=en-US}}</ref><ref name="RFS by nvidea">{{Cite web|title=RFS by nvidea|url=https://docs.nvidia.com/networking/display/mlnxofedv23070512/flow+steering|access-date=2025-07-08|website=docs.nvidia.com|language=en-US}}</ref><ref name="RSS overview by microsoft">{{Cite web|title=RSS overview by microsoft|url=https://learn.microsoft.com/en-us/windows-hardware/drivers/network/introduction-to-receive-side-scaling|access-date=2025-07-08|website=learn.microsoft.com|language=en-US}}</ref><ref name="RSS++">{{Cite journal |last=Barbette |first=Tom |last2=Katsikas |first2=Georgios P. |last3=Maguire |first3=Gerald Q. |last4=Kostić |first4=Dejan |date=2019-12-03 |title=RSS++: load and state-aware receive side scaling |url=https://dl.acm.org/doi/10.1145/3359989.3365412 |journal=Proceedings of the 15th International Conference on Emerging Networking Experiments And Technologies |series=CoNEXT '19 |___location=New York, NY, USA |publisher=Association for Computing Machinery |pages=318–333 |doi=10.1145/3359989.3365412 |isbn=978-1-4503-6998-5}}</ref><ref>{{Citation |last=Madden |first=Michael M. |title=Challenges Using the Linux Network Stack for Real-Time Communication |date=2019-01-06 |work=AIAA Scitech 2019 Forum |url=https://arc.aiaa.org/doi/10.2514/6.2019-0503 |access-date=2025-07-10 |series=AIAA SciTech Forum |publisher=American Institute of Aeronautics and Astronautics |doi=10.2514/6.2019-0503 |pages=9-11}}</ref><ref>{{Cite web |last=Herbert |first=Tom |date=2025-02-24 |title=The alphabet soup of receive packet steering: RSS, RPS, RFS, and aRFS |url=https://medium.com/@tom_84912/the-alphabet-soup-of-receive-packet-steering-rss-rps-rfs-and-arfs-c84347156d68 |access-date=2025-07-10 |website=Medium |language=en}}</ref><ref>{{Cite journal |last=Wu |first=Wenji |last2=DeMar |first2=Phil |last3=Crawford |first3=Matt |date=2011-02-01 |title=Why Can Some Advanced Ethernet NICs Cause Packet Reordering? |url=https://ieeexplore.ieee.org/document/5673999/ |journal=IEEE Communications Letters |volume=15 |issue=2 |pages=253–255 |doi=10.1109/LCOMM.2011.122010.102022 |issn=1558-2558}}</ref>
<ref name="RSS kernel linux docs">{{Cite web|title=RSS kernel linux docs|url=https://www.kernel.org/doc/html/v5.1/networking/scaling.html#rss-receive-side-scaling|access-date=2025-07-08|website=kernel.org|language=en-US}}</ref>▼
<ref name="RFS by redhat">{{Cite web|title=RFS by redhat|url=https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/performance_tuning_guide/network-rfs|access-date=2025-07-08|website=docs.redhat.com|language=en-US}}</ref>▼
<ref name="RFS by nvidea">{{Cite web|title=RFS by nvidea|url=https://docs.nvidia.com/networking/display/mlnxofedv23070512/flow+steering|access-date=2025-07-08|website=docs.nvidia.com|language=en-US}}</ref>▼
<ref name="RSS overview by microsoft">{{Cite web|title=RSS overview by microsoft|url=https://learn.microsoft.com/en-us/windows-hardware/drivers/network/introduction-to-receive-side-scaling|access-date=2025-07-08|website=learn.microsoft.com|language=en-US}}</ref>▼
== Hardware techniques ==
Hardware-accelerated techniques like RSS and aRFS are used to route and load balance incoming [[Network_packet|packets]] across the per-core receive queues of a processor.<br>
These hardware-supported methods achieve very low latencies and reduce the load on the CPU compared to the software-based ones. However, they require specialized hardware integrated within the [[Network_interface_controller|network interface controller]] (for example, a [[Data_processing_unit|SmartNIC]]).
=== RSS ===
In RSS, the NIC computes a hash over each incoming packet's flow identifiers (typically the IP addresses and port numbers) and uses it, through an indirection table, to select the receive queue.
In this way, packets belonging to the same flow are directed to the same receive queue, preserving their original order and avoiding [[Out-of-order delivery|out-of-order delivery]]. Moreover, all incoming flows are [[Load balancing (computing)|load balanced]] across the available cores thanks to the hash function. <br>
Another important feature introduced by the indirection table is the ability to change the mapping of flows to cores without modifying the hash function, by simply updating the table entries.
<ref>{{Cite web|title=RSS intel doc|url=https://www.intel.com/content/dam/support/us/en/documents/network/sb/318483001us2.pdf|access-date=2025-07-08|website=intel.com|language=en-US}}</ref><ref name="RSS overview by microsoft" /><ref>{{Cite web|title=RSS by redhat|url=https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/performance_tuning_guide/network-rss|access-date=2025-07-08|website=docs.redhat.com|language=en-US}}</ref><ref name="RSS kernel linux docs" /><ref name="RSS++" />
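The following minimal sketch (in Python, for illustration only) shows the idea of the hash plus indirection-table lookup; a real NIC computes a Toeplitz hash over the packet headers, while here CRC32 is used as a stand-in and all names are hypothetical.
<syntaxhighlight lang="python">
import zlib  # CRC32 stands in for the Toeplitz hash computed by the NIC

NUM_QUEUES = 4
# Indirection table: remapping flows to cores only requires rewriting these entries.
indirection_table = [i % NUM_QUEUES for i in range(128)]

def rss_queue(src_ip, dst_ip, src_port, dst_port):
    """Pick the receive queue for a packet from its flow 4-tuple."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    flow_hash = zlib.crc32(key)                       # same flow -> same hash value
    return indirection_table[flow_hash % len(indirection_table)]

# Every packet of this flow lands on the same queue, preserving its order.
print(rss_queue("10.0.0.1", "10.0.0.2", 40000, 443))
</syntaxhighlight>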
=== aRFS ===
RSS simply load balances incoming traffic across the cores; however, if a packet flow is directed to ''core i'' (as a result of the hash function) while the application consuming the received packets is running on ''core j'', many cache misses can be avoided by forcing ''i=j'', so that packets are received exactly where they are needed. <br>
To achieve this, accelerated Receive Flow Steering (aRFS) does not steer packets directly according to the result of the hash function; instead, it uses a configurable steering table (which can be filled and updated, for instance, by the [[Scheduling (computing)|scheduler]] through an [[API]]) so that packet flows can be directed to the specific consuming core.
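A minimal sketch of this idea (in Python, for illustration only; real aRFS installs flow-steering rules in the NIC, and the names below are hypothetical) is the following: when the steering table has an entry for a flow, it overrides the queue chosen by the hash function.
<syntaxhighlight lang="python">
# Steering table: flow identifier -> core running the consuming application.
flow_to_core = {}

def steer_flow(flow, core):
    """Called, e.g. by the scheduler through an API, when the consumer of a flow
    is (re)scheduled on a given core."""
    flow_to_core[flow] = core

def arfs_queue(flow, hash_based_queue):
    """Prefer the consumer's core; fall back to the RSS hash result otherwise."""
    return flow_to_core.get(flow, hash_based_queue)

flow = ("10.0.0.1", 40000, "10.0.0.2", 443)
steer_flow(flow, 2)           # the consuming application runs on core 2
print(arfs_queue(flow, 0))    # -> 2, overriding the hash-based queue 0
</syntaxhighlight>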
== Software techniques ==
Software techniques like RPS and RFS employ one of the CPU cores to steer incoming packets across the other cores of the processor. This comes at the cost of introducing additional [[Inter-processor interrupt|inter-processor interrupts (IPIs)]]; however, the number of hardware interrupts does not increase and, thanks to [[Interrupt coalescing|interrupt aggregation]], it may even be reduced.<br>
The benefit of a software solution is its ease of implementation: no component of the existing architecture (such as the [[Network_interface_controller|NIC]]) has to be changed, as it is enough to deploy the proper [[Loadable kernel module|kernel module]]. This can be crucial when the server machine cannot be customized or accessed (as in [[Cloud computing#Infrastructure as a service (IaaS)|cloud computing]] environments), even though network performance may be lower than with the hardware-supported techniques.
<ref name="RPS linux news (LWM)">{{Cite web|title=RPS linux news (LWM)|url=https://lwn.net/Articles/362339/|access-date=2025-07-08|website=lwn.net|language=en-US}}</ref><ref name="RPS by redhat">{{Cite web|title=RPS by redhat|url=https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/performance_tuning_guide/network-rps|access-date=2025-07-08|website=docs.redhat.com|language=en-US}}</ref><ref name="RFS by nvidea" />
=== RPS ===
Receive Packet Steering (RPS) is a software implementation of RSS: the flow hash is computed in software and used to select the core that will process the packet.
This is usually done in the kernel, right after the NIC driver: once the network interrupt has been handled and before the packet is processed further, the packet is placed on the receive queue of a core, which is then notified through an inter-processor interrupt. <br>
RPS can be used in conjunction with RSS when the number of hardware queues is lower than the number of cores. In this case, after the incoming packets have been distributed across the hardware queues, a pool of cores can be assigned to each queue and RPS spreads the incoming flows across the cores of the corresponding pool.
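A minimal sketch of the dispatch step (in Python, for illustration only; the real mechanism lives in the kernel and the names below are hypothetical) is shown here: the core handling the interrupt enqueues the packet on the chosen core's backlog and notifies it with an IPI, optionally restricting the choice to the pool of cores assigned to a hardware queue.
<syntaxhighlight lang="python">
from collections import deque

NUM_CORES = 4
backlog = [deque() for _ in range(NUM_CORES)]   # per-core receive backlogs

def send_ipi(core):
    """Placeholder for the real inter-processor interrupt."""
    print(f"IPI -> core {core}")

def rps_dispatch(packet, flow_hash, cpu_pool=None):
    """Steer a packet to one core of the allowed pool based on its flow hash."""
    pool = list(cpu_pool) if cpu_pool is not None else list(range(NUM_CORES))
    core = pool[flow_hash % len(pool)]
    backlog[core].append(packet)                # enqueue on the target core's backlog
    send_ipi(core)                              # wake the target core

# With RSS+RPS, the pool of the hardware queue (here cores 2 and 3) bounds the choice.
rps_dispatch(b"payload", flow_hash=0xBEEF, cpu_pool=[2, 3])
</syntaxhighlight>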
=== RFS ===
Receive Flow Steering (RFS) extends RPS by steering the packets of each flow to the core on which the consuming application is running, using a table that maps flows to their consuming cores.
This table is managed by the scheduler, which updates its entries when application processes are moved between cores.
The overall CPU load distribution remains balanced as long as the applications in [[User space and kernel space|user-space]] are evenly distributed across the cores.
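A minimal sketch of the table lookup (in Python, for illustration only; the names below are hypothetical and the real table is kept in the kernel) is the following: entries record the core of the consuming application, and packets of unknown flows fall back to the RPS choice.
<syntaxhighlight lang="python">
# Flow table: flow hash -> core on which the consuming application runs.
rfs_table = {}

def record_consumer(flow_hash, core):
    """Updated when the consuming process runs on (or is migrated to) a core."""
    rfs_table[flow_hash] = core

def rfs_target(flow_hash, rps_core):
    """Steer the packet to the consumer's core if known, else use the RPS result."""
    return rfs_table.get(flow_hash, rps_core)

record_consumer(0xBEEF, 3)      # the application consuming this flow runs on core 3
print(rfs_target(0xBEEF, 1))    # -> 3 instead of the RPS-chosen core 1
</syntaxhighlight>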
=== XPS (in transmission) ===
Transmit Packet Steering (XPS) is the counterpart of RPS for the transmit path: each core is mapped to a set of transmit queues, so that outgoing packets are sent through a queue associated with the core that generated them, reducing contention on the transmit queues.
<ref>{{Cite web|title=XPS intel overview|url=https://www.intel.com/content/www/us/en/docs/programmable/683517/21-4/transmit-packet-steering-xps.html|access-date=2025-07-08|website=intel.com|language=en-US}}</ref><ref>{{Cite web|title=XPS linux news (LWM)|url=https://lwn.net/Articles/412062/|access-date=2025-07-08|website=lwn.net|language=en-US}}</ref><ref>{{Cite web|title=XPS kernel linux docs|url=https://www.kernel.org/doc/html/v5.1/networking/scaling.html#xps-transmit-packet-steering|access-date=2025-07-08|website=kernel.org|language=en-US}}</ref>
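A minimal sketch of the queue selection (in Python, for illustration only; the one-to-one mapping below is a hypothetical assumption) is the following: each core picks a transmit queue among those assigned to it.
<syntaxhighlight lang="python">
# Hypothetical mapping of each core to its set of transmit queues.
cpu_to_tx_queues = {0: [0], 1: [1], 2: [2], 3: [3]}

def xps_queue(cpu, flow_hash):
    """Choose a transmit queue among those assigned to the sending core."""
    queues = cpu_to_tx_queues[cpu]
    return queues[flow_hash % len(queues)]   # keep a flow on one queue to avoid reordering

print(xps_queue(2, 0xBEEF))   # core 2 transmits through its own queue
</syntaxhighlight>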
== See also ==