==History and motivation==
The principles behind RINA were first presented by [[John Day (computer scientist)|John Day]] in his book ''Patterns in Network Architecture: A Return to Fundamentals''.<ref name="PNA">''Patterns in Network Architecture: A Return to Fundamentals'', John Day (2008), Prentice Hall.</ref>
From the early days of telephony to the present, the [[telecommunications]] and computing industries have evolved significantly. However, they have followed separate paths, without achieving the full integration needed to optimally support [[distributed computing]]: the paradigm shift from [[telephony]] to distributed applications is still not complete. Telecoms have focused on connecting devices, perpetuating the telephony model in which devices and applications are the same. A look at the current [[Internet protocol suite]] shows many symptoms of this thinking:<ref name="INWG">A. McKenzie, “INWG and the Conception of the Internet: An Eyewitness Account”; IEEE Annals of the History of Computing, vol. 33, no. 1, pp. 66–71, 2011</ref>
Several attempts have been made to propose architectures that overcome the current [[Internet]] limitations, under the umbrella of the [[Future Internet]] research efforts. However, most proposals argue that requirements have changed and that the Internet is therefore no longer capable of coping with them. While it is true that the environment in which the technologies that support the Internet live today is very different from when they were conceived in the late 1970s, changing requirements are not the only reason behind the Internet's problems with multihoming, mobility, security or QoS, to name a few. The root of the problems may be that the current Internet is based on a tradition focused on keeping the original [[ARPANET]] demo working and fundamentally unchanged, as illustrated by the following paragraphs.
'''1972. Multi-homing not supported by the ARPANET'''. In 1972 [[Tinker Air Force Base]] wanted connections to two different IMPs ([[Interface Message Processors]], the predecessors of today's routers) for redundancy. The [[ARPANET]] designers realized that they could not support this feature, because host addresses were the addresses of the IMP port the host was connected to (borrowing from telephony). To the ARPANET, two interfaces of the same host had different addresses; it therefore had no way of knowing that they belonged to the same host. The solution was obvious: as in operating systems, a logical address space naming the nodes (hosts and routers) was required on top of the physical interface address space. However, the implementation of this solution was left for future work, and it is still not done today: “IP addresses of all types are assigned to interfaces, not to nodes”.<ref name="IPv6">R. Hinden and S. Deering. "IP Version 6 Addressing Architecture".</ref>
'''1978. [[Transmission Control Protocol]] (TCP) split from the [[Internet Protocol]] (IP).''' Initial TCP versions performed the error- and flow-control (current TCP) and relaying and multiplexing (IP) functions in the same protocol. In 1978 TCP was split from IP, even though the two resulting layers had the same scope. This would not be a problem if i) the two layers were independent and ii) the two layers did not contain repeated functions. However, neither condition holds: in order to operate effectively, IP needs to know what TCP is doing. IP fragmentation, and the path MTU discovery workaround that TCP performs to avoid it, is a clear example of this issue. In fact, as early as 1987 the networking community was well aware of the problems of IP fragmentation, to the point of considering it harmful.<ref>C.A. Kent and J.C. Mogul. Fragmentation considered harmful. Proceedings of Frontiers in Computer Communications Technologies, ACM SIGCOMM, 1987</ref> However, this was not understood as a symptom that TCP and IP were interdependent, and that splitting them into two layers of the same scope had therefore not been a good decision.
'''1981. Watson's fundamental results ignored'''. In 1981 Richard Watson provided a fundamental theory of reliable transport,<ref>R. Watson. Timer-based mechanism in reliable transport protocol connection management. Computer Networks, 5:47–56, 1981</ref> whereby connection management requires only timers bounded by a small factor of the Maximum Packet Lifetime (MPL). Based on this theory, Watson et al. developed the Delta-t protocol,<ref name="deltat">R. Watson. Delta-t protocol specification. Technical Report UCID-19293, Lawrence Livermore National Laboratory, December 1981</ref> in which the state of a connection at the sender and receiver can be safely removed once the connection-state timers expire, without the need for explicit removal messages, and in which new connections are established without an explicit handshaking phase. TCP, by contrast, uses both explicit handshaking and a more limited timer-based management of the connection's state. Had TCP incorporated Watson's results, it would be more efficient, robust and secure, eliminating the use of SYNs and FINs and therefore all the associated complexities and vulnerabilities to attack (such as [[SYN flood]]).
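The timer-based idea can be sketched in a few lines. This is an illustrative model of the principle, not the Delta-t specification: the class names and the MPL constant below are hypothetical, and a real implementation would also bound retransmission and acknowledgement timers by MPL.

```python
import time

MPL = 2.0                # assumed Maximum Packet Lifetime, in seconds
STATE_TIMEOUT = 3 * MPL  # a small factor of MPL bounds how long state must live

class Connection:
    def __init__(self, peer):
        self.peer = peer
        self.last_activity = time.monotonic()

    def touch(self):
        # Any valid packet refreshes the timer; no explicit handshake is needed.
        self.last_activity = time.monotonic()

    def expired(self, now):
        return now - self.last_activity > STATE_TIMEOUT

class DeltaTLikeEndpoint:
    def __init__(self):
        self.connections = {}

    def on_packet(self, peer):
        # Receiving data implicitly creates or refreshes connection state,
        # so there is no SYN-like connection setup exchange.
        conn = self.connections.setdefault(peer, Connection(peer))
        conn.touch()

    def reap(self):
        # Timer expiry alone removes state -- no FIN/ACK teardown messages.
        now = time.monotonic()
        for peer in [p for p, c in self.connections.items() if c.expired(now)]:
            del self.connections[peer]
```

Because state appears and disappears purely as a function of traffic and bounded timers, there is no half-open-connection state for an attacker to exhaust, which is the contrast with TCP's SYN/FIN machinery drawn above.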
'''1983. Internetwork layer lost, the Internet ceases to be an Internet'''. Early in 1972 the International Network Working Group (INWG) was created to bring together the nascent network research community. One of its early accomplishments was voting on an international network transport protocol, which was approved in 1976.<ref name="INWG"/> A remarkable aspect is that the selected option, like all the other candidates, had an architecture composed of three layers of increasing scope: data link (to handle different types of physical media), network (to handle different types of networks) and internetwork (to handle a network of networks), each layer with its own addresses. When TCP/IP was introduced, it ran at the internetwork layer on top of the [[Network Control Program]] and other network technologies. But when NCP was shut down, TCP/IP took the network role and the internetwork layer was lost.<ref name="lostlayer">J. Day. How in the Heck Do You Lose a Layer!? 2nd IFIP International Conference of the Network of the Future, Paris, France, 2011</ref> As a result, the Internet ceased to be an Internet and became a concatenation of IP networks with an end-to-end transport layer on top. A consequence of this decision is the complex routing system required today, with both intra-___domain and inter-___domain routing happening at the network layer.<ref name="EGP">E.C. Rosen. Exterior Gateway Protocol (EGP).</ref>
[[File:INWG-arch.png|thumb|350px|Figure 1. The Internet architecture as seen by the INWG]]
There are still more wrong decisions{{According to whom|date=October 2015}} that have resulted in long-term problems for the current Internet, such as:
* In 1988 the IAB recommended using the [[Simple Network Management Protocol]] (SNMP) as the initial network management protocol for the Internet, to later transition to the object-oriented approach of the [[Common Management Information Protocol]] (CMIP).<ref>Internet Architecture Board. IAB Recommendations for the Development of Internet Network Management Standards.</ref>
* Since IPv6 didn't solve the multi-homing problem and naming the node was not accepted, the major theory pursued by the field is that the IP address semantics are overloaded with both identity and ___location information, and that the solution is therefore to separate the two, leading to the work on the [[Locator/Identifier Separation Protocol]] (LISP). However, all approaches based on LISP have scaling problems,<ref name="lispis">D. Meyer and D. Lewis. Architectural implications of Locator/ID separation. Draft Meyer Loc Id implications, January 2009</ref> because i) they are based on a false distinction (identity vs. ___location) and ii) they do not route packets to the end destination (LISP uses the locator, which is an interface address, for routing; therefore the multi-homing problem is still there).<ref name="lispno">J. Day. Why loc/id split isn’t the answer, 2008. Available online at http://rina.tssg.org/docs/LocIDSplit090309.pdf</ref>
* The discovery of [[bufferbloat]], caused by the use of large buffers in the network. Since the beginning of the 1980s it was already known that buffer sizes should be the minimum needed to damp out transient traffic bursts, and no larger,<ref>L. Pouzin. Methods, tools and observations on flow control in packet-switched data networks. IEEE Transactions on Communications, 29(4): 413–426, 1981</ref> since bigger buffers increase the transit delay of packets within the network.
* The inability to provide efficient solutions to security problems such as authentication, access control, integrity and confidentiality, since these were not part of the initial design.<ref>D. Clark, L. Chapin, V. Cerf, R. Braden and R. Hobby. Towards the Future Internet Architecture.</ref>
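The buffer-sizing observation above can be made concrete with the classic bandwidth-delay-product (BDP) rule of thumb: a bottleneck buffer needs roughly one BDP to absorb transient bursts, and anything beyond that only adds queueing delay. The link parameters below are hypothetical, chosen for illustration.

```python
def bdp_bytes(bandwidth_bps, rtt_s):
    # One round-trip time's worth of data at line rate, in bytes.
    return int(bandwidth_bps * rtt_s / 8)

def queueing_delay_ms(buffer_bytes, bandwidth_bps):
    # Worst-case extra delay a standing full buffer adds at the bottleneck.
    return buffer_bytes * 8 / bandwidth_bps * 1000

link_bw = 10_000_000   # hypothetical 10 Mbit/s bottleneck link
rtt = 0.050            # hypothetical 50 ms round-trip time

right_size = bdp_bytes(link_bw, rtt)   # 62_500 bytes: one BDP
oversized = 32 * right_size            # a "bloated" 2 MB buffer

# A BDP-sized buffer adds at most about one RTT (50 ms) of queueing delay,
# while the oversized buffer can add 1.6 seconds to every packet's transit.
print(queueing_delay_ms(right_size, link_bw))  # 50.0
print(queueing_delay_ms(oversized, link_bw))   # 1600.0
```

This is exactly the bufferbloat effect: the oversized buffer does not improve throughput at the bottleneck, it only inflates latency.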
==Terminology==
[[File:TCPIP-arch.png|thumb|350px|Figure 4. Functional layering of the TCP/IP architecture]]
The current architecture provides just two scopes: data link (the scope of layers 1 and 2) and global (the scope of layers 3 and 4). However, layer 4 is implemented only in the hosts, so the “network side” of the Internet ends at layer 3. This means that the current Internet can handle a network of heterogeneous physical links, but it is not designed to handle heterogeneous networks, although that is precisely what it is supposed to do. Doing so would require an “internetwork” scope, which is now missing.<ref name="lostlayer"/> As ironic as it may sound, the current Internet is not really an internetwork, but a concatenation of IP networks with an end-to-end transport layer on top of them. The consequences of this flaw are several: both inter-___domain and intra-___domain routing have to happen within the network layer, whose scope had to be artificially partitioned through the introduction of the [[Autonomous System (Internet)|Autonomous System]] concept and an [[Exterior Gateway Protocol]];<ref name="EGP"/> and [[Network Address Translation]] (NAT) boxes appeared as middleboxes in order to provide a means of partitioning and reusing parts of the single IP address space.<ref>K. Egevang and P. Francis. The IP Network Address Translator (NAT).</ref>
With an internetwork layer none of this would be necessary: inter-___domain routing would happen at the internetwork layer, while intra-___domain routing would occur at each network's own network layer. NATs would not be required, since each network could have its own internal address space; only the addresses at the internetwork layer would need to be common. Moreover, congestion could be confined to individual networks, instead of having to be dealt with at global scope as it is today. The internetwork layer was present in earlier internetwork architectures, for example the INWG architecture depicted in Figure 3, which was designed in 1976. It was lost when the [[Network Control Program]] was phased out and the Internet officially started in 1983.
The current Internet architecture has an incomplete naming and addressing schema, which is the reason why mobility and multi-homing require ad-hoc solutions and protocols tailored to different operational environments. The only names provided are Point of Attachment (PoA) names (IP addresses), which are commonly mistaken for node names. As a result, the network has no way of knowing that the two or more IP addresses of a multi-homed node belong to the same node, making multi-homing hard. The same choice of naming the interface instead of the node forces the Internet to route at the interface level rather than the node level, resulting in routing tables much bigger than they need to be. Mobility, which can be seen as dynamic multi-homing, is another feature that suffers from the incomplete naming schema.
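The difference between naming the interface and naming the node can be sketched in a few lines. This is an illustrative model only, not any real protocol's data structures; all names and addresses below are hypothetical.

```python
# Today: the network knows only PoA names (IP addresses). Nothing relates
# the two addresses below, even though they belong to the same host.
poa_routes = {
    "192.0.2.1": "via ISP-A",
    "198.51.100.7": "via ISP-B",   # same physical host, but nothing says so
}

# With a node name space: one node name bound to several PoAs.
node_to_poas = {
    "server-1": ["192.0.2.1", "198.51.100.7"],
}

def route_to_node(node, up_poas):
    # Route on the node name; pick any point of attachment that is still up.
    for poa in node_to_poas[node]:
        if poa in up_poas:
            return poa
    raise RuntimeError("node unreachable")

# If ISP-A's link fails, traffic survives, because the node name abstracts
# over its points of attachment:
assert route_to_node("server-1", {"198.51.100.7"}) == "198.51.100.7"
```

In the PoA-only model a link failure looks like the destination disappearing; in the node-name model it is just a binding change, which is also why mobility (a PoA changing over time) becomes straightforward.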
In 1982, [[Jerry Saltzer]], in his work “On the Naming and Binding of Network Destinations”,<ref name="Saltzer">J. Saltzer. On the Naming and Binding of Network Destinations. 1982.</ref> identified four entities that need to be named in a computer network: services and users, nodes, network attachment points, and paths, with each name resolved to one or more names of the entity below it.
[[File:Saltzer-naming.png|thumb|350px|Figure 5. Saltzer's point of view on naming and addressing in computer networks.]]