Remote direct memory access

This is an old revision of this page, as edited by Mclayto (talk | contribs) at 16:03, 16 June 2005.

Remote Direct Memory Access (RDMA) is a concept whereby two or more computers communicate via Direct Memory Access directly from the main memory of one system to the main memory of another. Because no CPU time, cache, or context-switching overhead is needed to perform the transfer, and transfers can continue in parallel with other system operations, RDMA is particularly useful in applications that need high-throughput, low-latency networking, such as massively parallel Linux clusters. The most common RDMA implementation is over InfiniBand. Although RDMA over InfiniBand is technologically superior to most alternatives, it faces an uncertain commercial future.

RDMA over TCP/IP

An alternative proposal is RDMA over TCP/IP, in which the TCP/IP protocol is used to move the data over a commodity networking technology such as Gigabit Ethernet. Unlike conventional TCP/IP implementations, an RDMA implementation would have its TCP/IP stack implemented on the network adapter card, which would thus act as an I/O processor and take on the load of RDMA processing.

This approach also has the advantage that software-based RDMA emulation is possible, allowing interoperation between systems with dedicated RDMA hardware and those without. One example might be a server equipped with RDMA hardware serving a large number of clients that use software-emulated RDMA implementations.
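The idea of software-emulated RDMA over TCP/IP can be sketched as follows. This is a minimal illustration only, not a real RDMA wire protocol: the message format, the `serve` function, and the `RdmaClient` class are all invented for this example. The server exposes a region of its memory, and the client reads and writes that memory by offset over an ordinary TCP connection, emulating in software what RDMA hardware would do without involving the remote CPU.

```python
import socket
import struct
import threading

# Hypothetical wire format for the emulation layer (not a real RDMA protocol):
# 1-byte opcode (0 = read, 1 = write), 8-byte offset, 8-byte length, then payload.
HDR = struct.Struct("!BQQ")

def serve(listener, memory):
    """Serve one connection, applying remote reads/writes directly to `memory`."""
    conn, _ = listener.accept()
    with conn:
        while True:
            hdr = conn.recv(HDR.size, socket.MSG_WAITALL)
            if not hdr:
                break
            op, offset, length = HDR.unpack(hdr)
            if op == 1:  # remote write: copy the payload into the exposed memory
                memory[offset:offset + length] = conn.recv(length, socket.MSG_WAITALL)
            else:        # remote read: send back the requested region
                conn.sendall(bytes(memory[offset:offset + length]))

class RdmaClient:
    """Client side of the emulation: read/write remote memory by offset."""
    def __init__(self, addr):
        self.sock = socket.create_connection(addr)

    def write(self, offset, data):
        self.sock.sendall(HDR.pack(1, offset, len(data)) + data)

    def read(self, offset, length):
        self.sock.sendall(HDR.pack(0, offset, length))
        return self.sock.recv(length, socket.MSG_WAITALL)

# Demo: the server exposes 64 bytes of "remote memory"; the client writes
# into it and reads the same bytes back.
memory = bytearray(64)
listener = socket.create_server(("127.0.0.1", 0))
threading.Thread(target=serve, args=(listener, memory), daemon=True).start()

client = RdmaClient(listener.getsockname())
client.write(8, b"hello")
print(client.read(8, 5))    # b'hello'
print(bytes(memory[8:13]))  # b'hello' -- landed directly in the server's buffer
```

In real hardware the transfer would bypass the remote host's CPU entirely; here the `serve` loop plays the role of the network adapter's offloaded stack, which is why a hardware-equipped server can interoperate with purely software clients speaking the same protocol.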
