The '''Recursive InterNetwork Architecture (RINA)''' is a computer [[network architecture]] that unifies [[distributed computing]] and [[telecommunications]]. RINA's fundamental principle is that [[computer network]]ing is just [[Inter-Process Communication]] (IPC). RINA reconstructs the overall structure of the [[Internet]], forming a model that comprises a single repeating layer, the DIF (Distributed IPC Facility), which is the minimal set of components required to allow distributed IPC between application processes. RINA inherently supports mobility, [[multihoming|multi-homing]] and [[Quality of Service]] without the need for extra mechanisms, provides a secure and programmable environment, encourages a more competitive marketplace, and allows for seamless adoption.
==History and Motivation==
The principles behind RINA were first presented by [[John Day (computer scientist)|John Day]] in his 2008 book ''Patterns in Network Architecture: A Return to Fundamentals''.
From the early days of telephony to the present, the [[telecommunications]] and computing industries have evolved significantly. However, they have been following separate paths, without achieving full integration that can optimally support [[distributed computing]]; the paradigm shift from [[telephony]] to distributed applications is still not complete. Telecoms have been focusing on connecting devices, perpetuating the telephony model where devices and applications are the same. A look at the current [[Internet protocol suite]] shows many symptoms of this thinking:<ref name="INWG">A. McKenzie, “INWG and the Conception of the Internet: An Eyewitness Account”; IEEE Annals of the History of Computing, vol. 33, no. 1, pp. 66-71, 2011</ref>
* The network routes data between interfaces of computers, as the public switched telephone network switched calls between phone terminals. However, it is not the source and destination ''interfaces'' that wish to communicate, but the distributed ''applications''.
* Applications have no way of expressing their desired service characteristics to the network, other than choosing a reliable ([[Transmission Control Protocol|TCP]]) or unreliable ([[User Datagram Protocol|UDP]]) type of transport. The network assumes that applications are homogeneous by providing only a single quality of service.
* The network has no notion of application names, and has to use a combination of the interface address and transport layer port number to identify different applications. In other words, the network identifies applications by where they happen to be attached, not by what they are.
Several attempts have been made to propose architectures that overcome the current [[Internet]] limitations, under the umbrella of the [[Future Internet]] research efforts. However, most proposals argue that requirements have changed and that the current architecture can no longer meet them, rather than questioning whether the original design decisions were sound in the first place. The following timeline reviews some of those decisions.
'''1972. Multi-homing not supported by the ARPANET'''. In 1972 [[Tinker Air Force Base]] wanted connections to two different IMPs ([[Interface Message Processor]]s, the predecessors of today's routers) for redundancy. [[ARPANET]] designers realized that they couldn't support this feature because host addresses were the addresses of the IMP port the host was connected to (borrowing from telephony). To the ARPANET, two interfaces of the same host had completely different addresses; the address named a point of attachment rather than the node itself, so multi-homing could not be expressed.
'''1978. [[Transmission Control Protocol]] (TCP) split from the [[Internet Protocol]] (IP)'''. Initial TCP versions performed the error- and flow-control (current TCP) and relaying and multiplexing (IP) functions in the same protocol. In 1978 TCP was split from IP, although the two resulting layers had the same scope. This would not be a problem if: i) the two layers were independent, and ii) the two layers did not contain repeated functions. However, neither condition holds: in order to operate effectively, IP needs to know what TCP is doing. IP fragmentation, and the MTU-discovery workaround that TCP performs to avoid it, is a clear example of this issue. In fact, as early as 1987 the networking community was well aware of the IP fragmentation problems, to the point of considering fragmentation harmful.<ref>C.A. Kent and J.C. Mogul. Fragmentation considered harmful. Proceedings of Frontiers in Computer Communications Technologies, ACM SIGCOMM, 1987</ref> However, this was not understood as a symptom that TCP and IP were interdependent, and that splitting them into two layers of the same scope had not been a good decision.
'''1981. Watson's fundamental results ignored'''. Richard Watson in 1981 provided a fundamental theory of reliable transport,<ref>R. Watson. Timer-based mechanism in reliable transport protocol connection management. Computer Networks, 5:47–56, 1981</ref> whereby connection management requires only timers bounded by a small factor of the Maximum Packet Lifetime (MPL). Based on this theory, Watson et al. developed the Delta-t protocol,<ref name="deltat">R. Watson. Delta-t protocol specification. Technical Report UCID-19293, Lawrence Livermore National Laboratory, December 1981</ref> in which the state of a connection at the sender and receiver can be safely removed once the connection-state timers expire, without the need for explicit removal messages; likewise, new connections are established without an explicit handshaking phase. TCP, in contrast, uses explicit handshaking as well as more limited timer-based management of the connection's state. Had TCP incorporated Watson's results it would be more efficient, robust and secure, eliminating the use of SYNs and FINs and therefore all the associated complexities and vulnerabilities to attack (such as the [[SYN flood]]).
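The timer-based scheme described above can be illustrated with a minimal sketch. This is not the Delta-t specification: all names are hypothetical and the timer values are placeholders, chosen only to show Watson's bound (state must be retained for MPL plus the sender's maximum retransmission time plus the receiver's maximum acknowledgement-withholding time) and the absence of SYN/FIN-style messages.

```python
# Sketch of timer-based connection management in the style of Delta-t.
# Names and timer values are illustrative, not taken from the protocol spec.

MPL = 2.0  # assumed Maximum Packet Lifetime (seconds)
R = 1.0    # assumed bound on how long a sender keeps retransmitting
A = 1.0    # assumed bound on how long a receiver withholds an ack


class TimerBasedManager:
    """Connection state is created implicitly by traffic (no handshake)
    and discarded once a timer bounded by a small factor of MPL expires
    (no explicit close messages)."""

    STATE_BOUND = MPL + R + A  # Watson's bound on how long state must be kept

    def __init__(self):
        self.last_activity = {}  # conn_id -> time of last packet seen

    def on_packet(self, conn_id, now):
        # Receiving a packet creates or refreshes connection state:
        # there is no SYN-style connection-establishment phase.
        self.last_activity[conn_id] = now

    def purge(self, now):
        # State whose timer has expired is removed unilaterally; this is
        # safe because no packet of that connection can still be in
        # flight once MPL + R + A has elapsed since the last activity.
        expired = [cid for cid, t in self.last_activity.items()
                   if now - t > self.STATE_BOUND]
        for cid in expired:
            del self.last_activity[cid]
        return expired
```

With these placeholder bounds, a connection last seen at time 0 is silently discarded by any `purge` call after time 4.0, with no FIN/ACK exchange and none of the associated attack surface.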
'''1983. Internetwork layer lost, the Internet ceases to be an Internet'''. Early in 1972 the International Network Working Group (INWG) was created to bring together the nascent network research community. One of its early accomplishments was voting on an international network transport protocol, which was approved in 1976.<ref name="INWG"/> A remarkable aspect is that the selected option, as well as all the other candidates, had an architecture composed of three layers of increasing scope: data link, network and internetwork, each with its own addresses (Figure 1). However, when [[Network Control Program|NCP]] was shut down in 1983 and TCP/IP took over, TCP/IP came to occupy the role of the network layer; the internetwork layer was lost, and with it the notion of the Internet as a network of networks.
[[File:INWG-arch.png|thumb|350px|Figure1. The Internet architecture as seen by the INWG]]
'''1983. First opportunity to fix addressing missed'''. The need for application names and distributed directories that mapped application names to internetwork addresses was well understood since the mid-1970s. However, application names were not introduced: in 1983 the [[Domain Name System]] (DNS) was designed and deployed, providing names for hosts and their interfaces rather than for applications, and well-known transport ports continued to be the only means of identifying applications.
'''1986. [[Congestion collapse]] takes the Internet by surprise'''. Although the phenomenon was known from earlier networks, the Internet architecture had no provision for detecting or reacting to congestion, and the congestion collapses of 1986 caught it unprepared. The adopted fix, placing congestion avoidance mechanisms inside TCP, has at least two undesirable consequences:
# congestion avoidance mechanisms are predatory: by definition they need to cause congestion in order to act;
# congestion avoidance mechanisms may be triggered when the network is not congested, causing a downgrade in performance.

'''1992. Second opportunity to fix addressing missed'''. In 1992 the [[Internet Architecture Board]] (IAB) produced a series of recommendations to resolve the scaling problems of the [[IPv4]]-based Internet: address space exhaustion and routing table explosion. The eventual outcome, [[IPv6]], retained the same addressing model, still naming the interface rather than the node, and thus a second opportunity to fix addressing was missed.
RINA proponents point to further decisions that have resulted in long-term problems for the current Internet, such as:
* In 1988 the IAB recommended using the [[Simple Network Management Protocol]] (SNMP) as the initial network management protocol for the Internet, to later transition to the object-oriented approach of the [[Common Management Information Protocol]] (CMIP).<ref>Internet Architecture Board. IAB Recommendations for the Development of Internet Network Management Standards. RFC 1052, April 1988</ref> The transition to CMIP never took place, and SNMP remains the dominant network management protocol despite its limitations.
* The inability to provide efficient solutions to security problems such as authentication, access control, integrity and confidentiality, since they were not part of the initial design, a limitation acknowledged as early as 1991.<ref>D. Clark, L. Chapin, V. Cerf, R. Braden and R. Hobby. Towards the Future Internet Architecture. RFC 1287 (Informational), December 1991</ref>
==Terminology==