[[File:RPS logic.png|upright=1.7|thumb|Diagram showing how RPS load-balances incoming packets across the CPU cores]]
Receive Packet Steering (RPS) is the software counterpart of RSS. All packets received by the NIC are load-balanced across per-core queues by a hash function computed over configurable header fields (such as the layer-3 source and destination IP addresses and the layer-4 source and destination ports), in the same fashion as RSS.
Moreover, thanks to the properties of the hash, packets belonging to the same flow are always steered to the same core.<ref name="RPS by redhat" /><br>
This is done in the kernel, right after the NIC driver: once the hardware interrupt has been handled, and before protocol processing, the packet is placed on the backlog queue of the selected core, which is then notified through an inter-processor interrupt.<ref name="RPS linux news (LWM)" /><br>
RPS can be used in conjunction with RSS when the number of queues managed by the hardware is lower than the number of cores. In this case, after RSS has distributed the incoming packets across its queues, a pool of cores can be assigned to each queue and RPS spreads the incoming flows across that pool.
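The steering decision described above can be modeled with a short Python sketch. This is purely illustrative, not kernel code: the kernel uses a fast hash such as jhash (or the hardware-computed Toeplitz hash), and the CPU pool shown here is an assumption.

```python
import hashlib

def flow_hash(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> int:
    """Hash the configurable layer-3/layer-4 header fields of a packet.

    SHA-256 is used here only for illustration; the kernel uses a much
    faster hash (e.g. jhash) or reuses the NIC's Toeplitz hash.
    """
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big")

def steer_to_cpu(packet: tuple, cpu_pool: list) -> int:
    """Map a packet to one core of the configured pool via its flow hash."""
    return cpu_pool[flow_hash(*packet) % len(cpu_pool)]

# Packets of the same TCP flow always land on the same core,
# while distinct flows are spread across the pool.
pool = [0, 1, 2, 3]
flow_a = ("10.0.0.1", "10.0.0.2", 40000, 80)
assert steer_to_cpu(flow_a, pool) == steer_to_cpu(flow_a, pool)
assert steer_to_cpu(flow_a, pool) in pool
```

On Linux the set of cores eligible for a receive queue is configured by writing a CPU bitmask to <code>/sys/class/net/&lt;dev&gt;/queues/rx-&lt;n&gt;/rps_cpus</code>.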
=== RFS ===
[[File:RFS logic.png|upright=1.7|thumb|Diagram showing how the RFS logic distributes each incoming packet to the core running the corresponding application]]
Receive Flow Steering (RFS) extends RPS in the same direction as the hardware-based aRFS solution.
By routing each packet flow to the CPU core running the consuming application, cache locality can be exploited, avoiding many misses and reducing the latency of retrieving data from [[Memory hierarchy|main memory]].<ref name="RFS by redhat">{{Cite web|title=RFS by redhat|url=https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/performance_tuning_guide/network-rfs|access-date=2025-07-08|website=docs.redhat.com|publisher=Red Hat Documentation|language=en-US}}</ref><br>
To do this, the hash computed over the header fields of the current packet is used to index a lookup table.
This table is managed by the scheduler, which updates its entries when the application processes are moved between the cores.<ref name="RFS kernel linux docs">{{Cite web|title=RFS kernel linux docs|url=https://www.kernel.org/doc/html/v5.1/networking/scaling.html#rfs-receive-flow-steering|access-date=2025-07-08|website=kernel.org|publisher=The Linux Kernel documentation|language=en-US}}</ref><br>
The overall CPU load distribution is balanced as long as the applications in [[User space and kernel space|user-space]] are evenly distributed across the multiple cores.
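The lookup-table mechanism can be sketched in Python as follows. The class name, table size and hash are hypothetical; in the real kernel the table entries are recorded during socket processing on behalf of the application, not by an explicit call.

```python
import hashlib

def flow_hash(src_ip, dst_ip, src_port, dst_port):
    # Illustrative stand-in for the kernel's fast flow hash.
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big")

class RFSTable:
    """Hypothetical model of the RFS flow table: flow hash -> desired CPU."""

    def __init__(self, size=4096):
        self.size = size
        self.desired_cpu = [None] * size

    def record_app_cpu(self, flow, cpu):
        # Updated when the consuming application runs on (or is migrated to) `cpu`.
        self.desired_cpu[flow_hash(*flow) % self.size] = cpu

    def steer(self, flow, default_cpu=0):
        # Route the packet to the core where its consumer last ran.
        cpu = self.desired_cpu[flow_hash(*flow) % self.size]
        return cpu if cpu is not None else default_cpu

table = RFSTable()
flow = ("10.0.0.1", "10.0.0.2", 40000, 80)
table.record_app_cpu(flow, cpu=3)   # the consumer now runs on core 3
assert table.steer(flow) == 3       # packets follow the application
```

On Linux the global table size is set via <code>/proc/sys/net/core/rps_sock_flow_entries</code>, with a per-queue limit in <code>/sys/class/net/&lt;dev&gt;/queues/rx-&lt;n&gt;/rps_flow_cnt</code>.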
=== XPS (in transmission) ===
Transmit Packet Steering (XPS), unlike the previous mechanisms, operates in transmission. When packets have to be placed on one of the transmission queues exposed by the NIC, several optimizations are again possible.<ref>{{Cite web|title=XPS linux news (LWM)|url=https://lwn.net/Articles/412062/|access-date=2025-07-08|website=lwn.net|publisher=Linux Weekly News|language=en-US}}</ref><br>
For instance, if multiple transmission queues are assigned to a single core, a hash function can be used to load-balance outgoing packets across those queues (similarly to what RPS does in reception).
Moreover, in order to improve cache locality and hit rate (similarly to RFS), XPS ensures that applications producing outgoing traffic on ''core i'' favor the transmit queues associated with the same ''core i''. This reduces inter-core communication and cache-coherency protocol overhead, resulting in better performance under heavy load.<ref>{{Cite web|title=XPS intel overview|url=https://www.intel.com/content/www/us/en/docs/programmable/683517/21-4/transmit-packet-steering-xps.html|access-date=2025-07-08|website=intel.com|publisher=Intel corp}}</ref>
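The per-core queue preference can be illustrated with a small Python sketch; the core-to-queue mapping below is an assumption, mirroring what an administrator would configure in practice.

```python
# Hypothetical XPS map: producing core -> transmit queues it may use.
xps_map = {0: [0], 1: [1], 2: [2, 3], 3: [2, 3]}

def select_tx_queue(producing_core: int, flow_hash: int) -> int:
    """Pick a transmit queue among those associated with the producing core.

    When a core owns several queues, the flow hash keeps each flow on a
    single queue, which avoids reordering packets of the same flow.
    """
    queues = xps_map[producing_core]
    return queues[flow_hash % len(queues)]

assert select_tx_queue(0, 12345) == 0      # core 0 always uses queue 0
assert select_tx_queue(2, 7) in (2, 3)     # core 2 stays within its own set
```

On Linux this mapping is configured by writing a CPU bitmask to <code>/sys/class/net/&lt;dev&gt;/queues/tx-&lt;n&gt;/xps_cpus</code>.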
== See also ==