Net neutrality

This is an old revision of this page, as edited by RichardBennett (talk | contribs) at 01:05, 26 February 2007 (The most notable and only legal definition is the one supported by Save the Internet. Censoring this because of nationalistic UK patriotism is misleading.). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Network neutrality (equivalently "net neutrality", "internet neutrality" or "NN") refers to a principle applied to residential broadband networks that provide Internet access, telephone service, and television programming. Precise definitions vary, but a broadband network free of arbitrary restrictions on the kinds of equipment attached and the modes of communication allowed would be considered neutral by most advocates, provided it met additional tests relating to the degradation of various communication streams by others. Arguably, no network is completely neutral; neutrality thus represents an ideal condition toward which networks and their operators should strive.[1][2][3]

The term was coined in European telecommunications law around 2003[citation needed] and imported to the US as the FCC commenced consideration of re-classifying residential DSL as an Information Service, consistent with Cable Internet. Advocates did not dispute consistent regulation, but believed that both should be regulated according to the stricter Telecommunications Service guidelines traditionally applied to services provided by telephone companies.

In order to compete with Broadband Cable's Triple Play service, telecommunications companies have proposed segregating telephone and television traffic from normal Internet traffic. As this practice is widely viewed as legitimate on Cable networks, its prohibition on DSL and other telephone company-provided networks is seen by net neutrality critics as arbitrary. Accordingly, "net neutrality" has been accused of being "a solution in search of a problem" and of eliminating incentives to upgrade networks and launch next generation Internet services.[4] Bob Kahn, the Internet's primary inventor, says net neutrality is a dogmatic slogan that would halt experimentation and improvement in the Internet's core [5]. Kahn's view is shared by most senior internetworking engineers, with the notable exception of Google employee Vint Cerf [6].

However, activists fear that telecom companies may also use this power to discriminate between traffic types, charging tolls on content from some content providers (i.e. websites, services, protocols), particularly competitors. Their worry is that failure to pay the tolls would result in poor service or no service for certain websites or certain types of applications. At least one major American telecommunications company has supported this idea.[7] Neutrality proponents claim that telecom companies seek to impose the tiered service model more to profit from their control of the pipeline than to meet any demand for their content or services.[8] Others have stated that they believe "net neutrality" to be important primarily as a preservation of current freedoms.[9]

Yet a third group finds the terms of both sides of this debate dubious.[10]

Definitions of Network Neutrality

There are several related but not identical definitions of Network Neutrality that are in use (in rough date order):

Tim Wu

(academic credited with popularising the term)

"Network neutrality is best defined as a network design principle. The idea is that a maximally useful public information network aspires to treat all content, sites, and platforms equally. This allows the network to carry every form of information and support every kind of application. The principle suggests that information networks are often more valuable when they are less specialized – when they are a platform for multiple uses, present and future." [2]

Sir Tim Berners-Lee

(inventor of the World Wide Web and director of the World Wide Web Consortium)

Sir Tim Berners-Lee's position is that different levels of service have always been available and doubtless always will be. He defines Network Neutrality as: "If I pay to connect to the net with a given quality of service, and you pay to connect to the net with the same or higher quality of service, then you and I can communicate across the net, with that quality of service,"[1].

Susan Crawford

(member of board of directors, ICANN)

The Internet's transport layer should not be shaped in accordance with particular applications but should rather provide only the transport service appropriate to the careful file transfer that was defined in the early 1970s as the Internet's canonical application. Timing of packet delivery is a form of anti-competitive discrimination. Open access (or unbundling) would promote network neutrality.

Google's definition

"Network neutrality is the principle that Internet users should be in control of what content they view and what applications they use on the Internet." [3]

Save the Internet's definition

The major consumer-oriented coalition in favor of net neutrality defines it according to the AT&T/BellSouth merger agreement's stipulations on broadband provider conduct: "not to provide or to sell to Internet content, application or service providers ... any service that privileges, degrades or prioritizes any (data) packet transmitted over AT&T/BellSouth's wireline broadband Internet access service based on its source, ownership or destination." This is the first such definition to appear in law.[11]

Bob Kahn

Bob Kahn, the Internet's primary inventor, has said net neutrality is a dogmatic slogan that means: "nothing interesting can happen inside the net" [6].

(See also Network neutrality in the United States for the legal situation there).

Applications of net neutrality

Timeline

  • The term "net neutrality" was coined only recently, but the concept existed in the age of the telegraph. In 1860, a US federal law subsidizing a coast-to-coast telegraph line stated that

...messages received from any individual, company, or corporation, or from any telegraph lines connecting with this line at either of its termini, shall be impartially transmitted in the order of their reception, excepting that the dispatches of the government shall have priority.

— An act to facilitate communication between the Atlantic and Pacific states by electric telegraph., June 16, 1860

  • The automatic telephone exchange was created by Almon Brown Strowger in 1888 as a way to bypass biased telephone operators who diverted unsuspecting customers to his competitors. This automation created a "neutral" environment for telephone users, freer from unseen tampering.[12]
  • The early roots of the Internet were created by DARPA with ongoing support from government officials as a United States-funded (hence publicly funded) research network governed by an Acceptable Use Policy (AUP) prohibiting commercial activity. In the early 1990s, it was privatized and the AUP was lifted for commercial users.
  • The end-to-end principle of Internet networking, articulated as early as 1983, argued that intelligence in the network did not remove the need for intelligence in end systems, which allows the network to be both "dumb" and functional for many purposes.
  • The Internet2 project concluded, in 2001, that QoS protocols were probably not deployable on the Abilene network with equipment available at the time.
  • In 2003 Tim Wu published and popularized a proposal for a net neutrality rule, in his paper "Network Neutrality, Broadband Discrimination."[13] The paper considered Network Neutrality in terms of neutrality between applications, as well as neutrality between data and QOS sensitive traffic, and proposed some legislation to potentially deal with these issues.
  • In early 2005, in the Madison River case, the FCC for the first time showed a willingness to enforce its network neutrality principles by opening an investigation about Madison River Communications, a local telephone carrier that was blocking voice over IP service.
  • On August 5, 2005, the FCC adopted a policy statement stating its adherence to four principles of network neutrality.
  • In November 2005 Edward Whitacre, Jr., then CEO of SBC, stated 'there's going to have to be some mechanism for these [internet upstarts] who use these pipes to pay for the portion they're using', even though both the users and Google were already paying for their usage of the Internet, and that 'The Internet can't be free in that sense, because we and the cable companies have made an investment',[7] sparking a furious debate. SBC spokesman Michael Balmoris said that Whitacre was misinterpreted and his comments referred only to new tiered services.[14]
  • In 2006, over 1,000,000 signatures were delivered to Congress in favor of network neutrality legislation.
  • "Internet Freedom and Nondiscrimination Act of 2006" Makes it a violation of the Clayton Antitrust Act for broadband providers to discriminate against any web traffic, refuse to connect to other providers, block or impair specific (legal) content; prohibits the use of admission control to determine network traffic priority. Approved 20-13 by the House Judiciary committee on May 25, 2006.
  • A bill called "Communications Opportunity, Promotion and Enhancement Act of 2006" was introduced in the US House of Representatives, which referenced the principles enunciated by the FCC and authorized fines up to $750,000 for infractions. It was passed 321-101 by the full House of Representatives on June 8, 2006.
  • The Center for American Progress held a 90 minute debate on Monday July 17, 2006 in Washington.
  • Bob Kahn, inventor of TCP and father of the Internet, declared his opposition to net neutrality in a talk at the Computer History Museum in January 2007.[5]

Some contemporary trends in the use and provision of internet services addressed by the debate are:

Users

  • The requirements of Voice over IP and online games for low-latency connections.
  • The increasing use of high bandwidth applications, such as online games, and music and video downloading.
  • The increasing use of wireless home networks, which allow neighbors to share an Internet connection, thereby (in some cases) reducing revenues for the service providers. In urban areas this factor can be significant, with many people sharing one individual's connection, although performance is often poor.

Service providers

  • Increasing use of traffic shaping by many or most broadband providers to control P2P and other services.
  • Improvements in networking technology, which make providing broadband service, on the aggregate, cheaper.
  • High bandwidth video and audio telecommunications over the Internet (including Voice Over IP technology) which threaten the land line revenues of Telco Internet service providers.
  • Deploying content filtering technology to stop spam and other attacks.[4]

Governments

  • The trend of governments funding the construction of high-speed networks in countries like South Korea and France, and for cities to build their own wireless networks, and their more gradual deployment in many areas of the U.S.

Uses of non-neutral networks

Non-neutral (or discriminatory) networks can be said to have a purpose in certain problematic situations. At times internet traffic has caused internet services to fail (see congestion collapse and slashdot effect). Such events are predicted to become more common as the use of multimedia applications requiring the transmission of real-time video and audio data increases. In such cases, high latency connections result in interruption of services. An environment in which a content provider can provide a guaranteed quality of service to customers throughout the network opens up opportunities for independent content providers to compete with traditional content providers in areas such as television and music broadcast, telephony, and video on demand.

Bram Cohen believes that the next generation of BitTorrent technology being developed by him and Cachelogic may violate some definitions of net neutrality[5].

One of the clearest examples of the need for highly reliable bandwidth is the developing technology of Remote surgery, where a surgeon can use robotics and communications technology to operate on a patient thousands of miles away.[15] Using dedicated circuits is highly desirable in this situation, as the penalty for a communications failure could be death, so they are used wherever available; where they are not, prioritized bandwidth would be preferred to best-effort bandwidth. Another example where non-neutrality could be useful is prioritizing emergency calls to fire and police.[16]

Alleged current discriminatory practices

Violations of the principle of network neutrality occur in the censorship of political, immoral or religious material around the world.[6] For example, China[7] and Saudi Arabia[8] both filter content on the Internet, preventing access to certain types of websites. Singapore has network blocks on more than 100 sites.[17] In Britain and Norway, telecommunications companies block access to websites that depict sexually explicit images of children (see pedophilia).[18] Germany also blocks foreign sites for copyright and other reasons.[19]

One often cited U.S. example:

In 2004, a small North Carolina telecom company, Madison River Communications, blocked their DSL customers from using the Vonage VoIP service. Service was restored after the FCC intervened and entered into a consent decree that had Madison River pay a fine of $15,000.[9] The FCC retains this authority under all telecommunications legislation pending in the US Congress, with or without "net neutrality" amendments, with an increase in fines to $500,000 under the House bill and $750,000 under the Senate bill.

Worldwide, the BitTorrent application is widely given reduced bandwidth, or in some cases blocked entirely.[20]

"Dumb" versus "intelligent" networks

Advocates of network neutrality insist that it is a theory of network design closely related to the end-to-end principle. Under this principle, a neutral network is a dumb network, merely passing packets according to the needs of applications. This point of view was expressed by David S. Isenberg in his seminal paper, The Rise of the Stupid Network,[21] to wit:

A new network "philosophy and architecture," is replacing the vision of an Intelligent Network. The vision is one in which the public communications network would be engineered for "always-on" use, not intermittence and scarcity. It would be engineered for intelligence at the end-user's device, not in the network. And the network would be engineered simply to "Deliver the Bits, Stupid," not for fancy network routing or "smart" number translation. . . . In the Stupid Network, the data would tell the network where it needs to go. (In contrast, in a circuit network, the network tells the data where to go.) In a Stupid Network, the data on it would be the boss. . . .End user devices would be free to behave flexibly because, in the Stupid Network the data is boss, bits are essentially free, and there is no assumption that the data is of a single data rate or data type.

These terms merely signify the network's level of knowledge about and influence over the packets it handles - they carry no connotations of stupidity, inferiority or superiority.

Critics charge that Isenberg reads too much philosophical significance into a principle of a purely technical nature.[citation needed] The seminal paper on the End-to-End Principle, End-to-end arguments in system design by Saltzer, Reed, and Clark,[22] actually argues that network intelligence does not relieve end systems of the requirement to check inbound data for errors and to rate-limit the sender; it does not argue for a wholesale removal of intelligence from the network core. End-to-end is one of many design tools, not the universal one:

The end-to-end argument does not tell us where to put the early checks, since either layer can do this performance-enhancement job. Placing the early retry protocol in the file transfer application simplifies the communication system, but may increase overall cost, since the communication system is shared by other applications and each application must now provide its own reliability enhancement. Placing the early retry protocol in the communication system may be more efficient, since it may be performed inside the network on a hop-by-hop basis, reducing the delay involved in correcting a failure. At the same time, there may be some application that finds the cost of the enhancement is not worth the result but it now has no choice in the matter.

The appropriate placement of functions in a protocol stack depends on many factors.
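
The file-transfer example above can be sketched in a few lines. Even when the network performs hop-by-hop checks, the end-to-end argument holds that the receiving application must still verify the complete transfer itself. A minimal illustration in Python (the helper names are this article's, not from the Saltzer/Reed/Clark paper):

```python
import hashlib

def sha256_of(data: bytes) -> str:
    # End-to-end integrity check: a digest computed over the complete file,
    # independent of any hop-by-hop checksums the network may apply.
    return hashlib.sha256(data).hexdigest()

def transfer_ok(sent: bytes, received: bytes) -> bool:
    # The receiver compares its own digest against one supplied by the sender.
    # Only this final end-to-end comparison catches corruption introduced
    # anywhere along the path, including inside "intelligent" network elements.
    return sha256_of(sent) == sha256_of(received)

print(transfer_ok(b"payload", b"payload"))  # intact transfer
print(transfer_ok(b"payload", b"payl0ad"))  # corrupted in transit
```

Hop-by-hop retries (the "early checks" in the quotation) can reduce the cost of recovering from a failure, but they cannot replace this final application-level check.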

Quality of Service and Internet Protocols

Early Internet routers typically forwarded packets on a "best-effort" basis, without regard for application needs, but this is changing. Many private networks using Internet protocols now employ Quality of Service (QoS), and Network Service Providers frequently enter into Service Level Agreements with each other embracing some sort of QoS.

The IP datagram includes a 3-bit wide Precedence field which may be used to request a level of service, consistent with the notion that protocols in a layered architecture offer services through Service Access Points. Obeying this field is optional and it has rarely been used across public links, although it is commonly used in private networks, especially those including WiFi networks where priority is enforced. Indeed, no single standard describing exactly how such requests would be upheld across independently functioning Internet routers has successfully gained dominance, although SIP, RSVP, IEEE 802.11e, and MPLS define this behavior.
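
The 3-bit Precedence field occupies the top bits of the 8-bit Type of Service octet in the IPv4 header. Extracting it is plain bit arithmetic, as this small sketch shows (not tied to any particular router implementation):

```python
def precedence(tos_byte: int) -> int:
    # IP Precedence is the top 3 bits of the 8-bit TOS octet (RFC 791).
    return (tos_byte >> 5) & 0b111

# Example: a TOS octet of 0xB8 carries precedence 5
# ("CRITIC/ECP" in RFC 791 terminology).
print(precedence(0xB8))
```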

Router manufacturers have begun to introduce routers with logic enabling them to route traffic for various Classes of Service at "wire speed".

With the emergence of multimedia and VoIP and applications that would benefit from low latency, various attempts to address this oversight have arisen, including the proposition of offering differing, priced levels of service that would shape Internet transmissions at the network layer based on application type. These efforts are ongoing, and are starting to yield results as wholesale Internet transport providers begin to amend service agreements to include service levels.[23]

Network neutrality is sometimes used as a technical term, although it has no history in the design documents (RFCs) describing the Internet protocols. In this usage, it is claimed to represent a property of protocol layering in which higher-layer protocols may not communicate service requirements to lower-layer protocols, a highly idiosyncratic interpretation of protocol engineering. (In conventional network engineering practice, each protocol in a layered system exposes Service Access Points to higher layers that can be used to request a level of service appropriate to the needs of higher-layer protocols.)

Gary Bachula's Testimony

Gary Bachula, Vice President for External Affairs for Internet2, asserts that specific QoS protocols are unnecessary in the core network as long as the core network links are "over-provisioned" to the point that network traffic never encounters delay.

The Internet2 project concluded, in 2001, that the QoS protocols were probably not deployable on its Abilene network with equipment available at that time. While newer routers are capable of following QoS protocols with no loss of performance, equipment available at the time relied on software to implement QoS. The Internet2 Abilene network group also predicted that "logistical, financial, and organizational barriers will block the way toward any bandwidth guarantees" by protocol modifications aimed at QoS.[24][25] In essence they believe that the economics would be likely to make the network providers deliberately erode the quality of best effort traffic as a way to push customers to higher priced QoS services.
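
The over-provisioning argument can be illustrated with a textbook M/M/1 queueing model (an assumption introduced here for illustration; it is not taken from the Internet2 study): mean delay grows without bound as utilization approaches capacity, so keeping links lightly loaded keeps delay negligible without any QoS machinery.

```python
def mm1_delay(load: float, service_rate: float = 1.0) -> float:
    # Mean time in an M/M/1 queue: W = 1 / (mu - lambda), valid for load < 1,
    # where load = lambda / mu is the link utilization.
    if not 0 <= load < 1:
        raise ValueError("load must be in [0, 1)")
    return 1.0 / (service_rate * (1.0 - load))

for load in (0.5, 0.9, 0.99):
    print(f"utilization {load:.2f}: mean delay {mm1_delay(load):.1f}x service time")
```

Under this model, delay at 99% utilization is fifty times worse than at 50%, which is the intuition behind over-provisioning: buy enough capacity that traffic never operates near the knee of this curve.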

The Abilene network study was the basis for the testimony of Gary Bachula to the Senate Commerce Committee's Hearing on Network Neutrality in early 2006. He expressed the opinion that adding more bandwidth was more effective than any of the various schemes for accomplishing QoS they examined.[26]

Bachula's testimony has been cited by proponents of a law banning Quality of Service as proof that no legitimate purpose is served by such an offering. This argument depends on the assumption that over-provisioning is always possible. Factors such as natural disasters, delays in installation caused by zoning, domestic politics, and construction permits all affect the ability to pursue an over-provisioned network, though these are short-term and temporary setbacks.

Quality of Service Procedures

Over-provisioning is not without controversy. Unlike the Internet2 Abilene Network, the Internet's core is owned and managed by a number of different Network Service Providers, not a single entity, and hence its behavior is much more stochastic and unpredictable. Research therefore continues on QoS procedures that are deployable in large, diverse networks.

There are two principal approaches to QoS in modern packet-switched networks, a parameterized system based on an exchange of application requirements with the network, and a prioritized system where each packet identifies a desired service level to the network.

On the Internet, Integrated services ("IntServ") implements the parameterized approach. In this model, applications use the Resource Reservation Protocol (RSVP) to request and reserve resources through a network.

Differentiated services ("DiffServ") implements the prioritized model. DiffServ marks packets according to the type of service they need. In response to these markings, routers and switches use various queuing strategies to tailor performance to requirements. (At the IP layer, differentiated services code point (DSCP) markings use the first 6 bits in the TOS field of the IP packet header. At the MAC layer, VLAN IEEE 802.1q and IEEE 802.1D can be used to carry essentially the same information.)
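
The DSCP layout described above can be sketched with plain bit arithmetic (the DSCP names and values below come from the DiffServ RFCs; the packing function itself is this article's illustration):

```python
# DSCP occupies the top 6 bits of the former TOS octet; the low 2 bits
# are now used for Explicit Congestion Notification (RFC 3168).
DSCP_EF = 46           # Expedited Forwarding, typically used for voice
DSCP_BEST_EFFORT = 0   # default, unmarked traffic

def tos_octet(dscp: int, ecn: int = 0) -> int:
    # Pack a DSCP value and ECN bits into the 8-bit field carried in
    # every IPv4 header.
    assert 0 <= dscp < 64 and 0 <= ecn < 4
    return (dscp << 2) | ecn

print(hex(tos_octet(DSCP_EF)))  # 0xb8, the octet routers match for EF
```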

For a fuller discussion of these issues, see the Quality of Service entry.

Pricing models

Broadband Internet access has most often been provided to users based on bandwidth capacity. Some argue that if ISPs can provide varying levels of service to websites at various prices, this may be a way to manage the costs of unused capacity (or "leverage price discrimination to recoup costs of 'consumer surplus'"). However, purchasers of connectivity on the basis of bandwidth capacity must buy enough capacity to meet their communications requirements; this is how high-traffic websites meet demand.

Current practice in interconnection

While the network neutrality debate continues, network providers often enter into peering arrangements among themselves. These agreements often stipulate how certain information flows should be treated. In addition, network providers often implement various policies such as blocking of port 25 to prevent insecure systems from serving as spam relays, or other ports commonly used by decentralized music search applications (often called "P2P" though all applications on the Internet are essentially peer-to-peer). They also present "terms of service" that often include rules about the use of certain applications as part of their contracts with users. Most "consumer Internet" providers implement policies like these.

However, the effect of peering arrangements among network providers is only local to the peers that enter into the arrangements, and cannot affect traffic flow outside their scope.

Other aspects of neutrality

Columbia University Law School professor Tim Wu observed the Internet is not neutral in terms of its impact on applications having different requirements. It is more beneficial for data applications than for applications that require low latency and low jitter, such as voice and real-time video: "In a universe of applications, including both latency-sensitive and insensitive applications, it is difficult to regard the IP suite as truly neutral." In presenting this analysis Wu shifts focus away from the design of the network for application flexibility. He has proposed regulations on Internet access networks that define net neutrality as equal treatment among similar applications, rather than neutral transmissions regardless of applications. He proposes allowing broadband operators to make reasonable tradeoffs between the requirements of different applications, while regulators carefully scrutinize network operator behavior where local networks interconnect.[27]

In Wu's view of net neutrality, the network should adapt to the diverse needs of emerging applications; in Crawford's view the network's traditional service structure provides a flexible transport designed to support a broad variety of applications.

Professor Rob Frieden of Penn State University[10] offers an assessment of the network neutrality debate with emphasis on the business and operational orientations of managers of telephone and data carriers' physical networks. Professor Frieden also assesses the strengths and weaknesses of positions articulated by Professors Tim Wu and Chris Yoo.[28]

Given these complexities and a rapidly changing technological and market environment, many in the public policy area question the government's ability to make and maintain meaningful regulation that doesn't cause more harm than good.[11] For example, fair queuing would actually be illegal under several proposals as it requires prioritization of packets based on criteria other than that permitted by the proposed law. Quoting Bram Cohen, the creator of BitTorrent,"I most definitely do not want the internet to become like television where there's actual censorship... however it is very difficult to actually create network neutrality laws which don't result in an absurdity like making it so that ISPs can't drop spam or stop... (hacker) attacks." [12] A laissez-faire approach would instead let the market enforce the desires of Internet users through the standard tools of consumer choice and contract law.
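
Fair queuing, which the paragraph above notes could run afoul of strict neutrality language, does indeed prioritize on per-flow criteria rather than treating the aggregate as FIFO. A minimal round-robin sketch (the flow names are hypothetical):

```python
from collections import deque

def fair_dequeue(flows):
    # Round-robin fair queuing: take at most one packet per active flow per
    # round, so a heavy flow cannot starve a light one. This is inherently
    # non-FIFO, and therefore "discriminatory" under some proposed definitions.
    out = []
    while any(flows.values()):
        for name, queue in flows.items():
            if queue:
                out.append(queue.popleft())
    return out

flows = {
    "voip": deque(["v1", "v2"]),
    "bulk": deque(["b1", "b2", "b3", "b4"]),
}
print(fair_dequeue(flows))  # ['v1', 'b1', 'v2', 'b2', 'b3', 'b4']
```

The interleaved output shows the point Cohen's quote gestures at: even this benign scheduler inspects and reorders traffic by flow, which a literal non-prioritization rule could forbid.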

Other concerns exist that strict neutrality laws would restrict ISPs' ability to adapt to a possible future scenario in which growth in demand for high-definition video exceeds current bandwidth capacity, particularly if net neutrality laws have a detrimental effect on investment in new broadband networks. A Wall Street Journal op-ed described the amount of data produced globally in exabytes, calling the potential bandwidth crunch the "exaflood".[13]

Changes in carrier technology regulation

The topic is further complicated by the differences between the Internet and earlier communications systems in their regulatory histories. Essentially no new regulations accompanied the Internet when its technology was first made available to private carriers and the public, while the technical operations of most telecommunications services were regulated from their beginnings.[citation needed]

Some of the arguments associated with network neutrality regulations came into prominence in mid 2002, offered by the "High Tech Broadband Coalition", a group comprising developers for Amazon.com, Google, and Microsoft. However, the fuller concept of "Network neutrality" was developed mainly by regulators and legal academics, most prominently law professors Tim Wu and Lawrence Lessig and Federal Communications Commission Chairman Michael Powell, most often while speaking at the Annual Digital Broadband Migration conference or writing in the pages of the Journal of Telecommunications and High Technology Law,[29] both of the University of Colorado School of Law. It is worth noting, however, that the ideas underlying Network Neutrality have a long pedigree in telecommunications practice and regulation.

Proposals for network neutrality laws are generally opposed by the cable television and telephone industries, as well as some network engineers and free-market scholars ranging from conservative to libertarian, including Christopher Yoo and Adam Thierer. Opponents argue that (1) network neutrality regulations severely limit the Internet's usefulness; (2) network neutrality regulations threaten to set a precedent for even more intrusive regulation of the Internet; (3) imposing such regulation will chill investment in competitive networks (e.g., wireless broadband) and deny network providers the ability to differentiate their services; and (4) network neutrality regulations confuse the unregulated Internet with the highly regulated telecom lines that it has shared with voice and cable customers for most of its history.

According to this view, the Internet has succeeded in attracting users and applications because it has been an oasis of deregulation in the midst of a highly regulated telecom market. Critics of Internet regulation in the name of "net neutrality" also say the Internet is much less neutral than proponents claim, pointing to such practices as the Type of Service header in the IP Datagram, the practice of active queuing described in RFC 2309 and the existence of Integrated Services and Differentiated Services enabling Quality of Service over IP. According to this view, the Internet is still very weak at meeting the needs of real-time and multimedia applications, and its continued evolution is stymied by the onerous regulations proposed in the name of network neutrality.

These views may be said to contrast with the historical development of network neutrality, which involves a retreat from intrusive regulation and expanded investment in network construction, consumer and business subscriptions, and the technology sector, which requires an open and neutral platform for its business model. They may also be said to more accurately describe the Internet as it has been and may become if not stifled by overzealous regulation.

There is also the issue of regulatory capture, in which the supposedly regulated entities manipulate the system to their advantage (through political power gained by campaign contributions or independent expenditures), either over competitors or in collusion with them, largely to increase profits and/or exclude market entrants (particularly those employing new technologies). Such exclusion and control has historically been shown to work to the ultimate detriment of consumers, both through higher cost and through slowed innovation.[citation needed]

Law in the US

See Network neutrality in the US.

There has been, and continues to be, ongoing legal and political wrangling in the US.

In the meantime the FCC has claimed some jurisdiction over the issue and has laid down guideline rules that it expects the telecommunications industry to follow.

Law outside the U.S.

Net neutrality in the common carrier sense has been instantiated into law in many countries, including the United Kingdom, South Korea, and Japan.

In Japan, the nation's largest phone company, Nippon Telegraph and Telephone, operates a service called Flet's Square over its FTTH high-speed internet connections that serves video on demand at speeds and levels of service higher than generic Internet traffic.

Cultural References

Two 2006 episodes of The Daily Show with Jon Stewart discussed the proposed legislation. In the latter, John Hodgman appeared to discuss the issue and was led by Jon Stewart to utter his "I'm a PC" line from Apple's Get a Mac advertising campaign.

See also

References

  1. ^ a b Sir Tim Berners Lee Blog entry on Network Neutrality real mp4
  2. ^ a b Tim Wu's page on Network Neutrality
  3. ^ a b Net Neutrality
  4. ^ "The Web's Worst New Idea," Wall Street Journal, 18 May 2006
  5. ^ a b "An Evening With Robert Kahn," video from Computer History Museum, 9 Jan 2007 Cite error: The named reference "KAHNVID" was defined multiple times with different content (see the help page).
  6. ^ a b "Father of Internet warns against Net Neutrality," The Register 18 January, 2007
  7. ^ a b Business week-Online "At SBC, It's All About "Scale and Scope"
  8. ^ Four Eyed Monsters :: Humanity Lobotomy - Net Neutrality Open Source Documentary
  9. ^ "No Tolls On The Internet"
  10. ^ "No Neutral Ground In This Battle". Retrieved 2006-12-15.
  11. ^ News report
  12. ^ Net Neutrality: The Technical Side of the Debate: A White Paper
  13. ^ NETWORK NEUTRALITY, BROADBAND DISCRIMINATION by Tim Wu
  14. ^ Washington Post- SBC Head Ignites Access Debate
  15. ^ Robert Hahn and Scott Wallsten, "The Economics of Net Neutrality," The Economists' Voice, June 2006, p. 4
  16. ^ Tom Giovanetti, "Network Neutrality? Welcome to the Stupid Internet," Institute for Policy Innovation, June 9, 2006
  17. ^ [1]
  18. ^ [2]
  19. ^ [3]
  20. ^ BitTorrent: Shedding no tiers
  21. ^ Isenberg, David (1996-08-01). "The Rise of the Stupid Network". Retrieved 2006-08-19.
  22. ^ "End-to-end arguments in system design", Jerome H. Saltzer, David P. Reed, and David D. Clark, ACM Transactions on Computer Systems 2, 4 (November 1984) pages 277-288
  23. ^ http://www.lightreading.com/document.asp?doc_id=101271
  24. ^ Oram, Andy (2002-06-11). "A Nice Way to Get Network Quality of Service?". O'Reilly Network. Retrieved 2006-07-07.
  25. ^ http://qbone.internet2.edu/papers/non-architectural-problems.txt
  26. ^ Bachula, Gary (2006-02-07). "Testimony of Gary R. Bachula, Vice President, Internet2" (PDF). p. 5. Retrieved 2006-07-07.
  27. ^ Wu, Tim (2003). "Network Neutrality, Broadband Discrimination". Journal of Telecommunications and High Technology Law. 2: 141. doi:10.2139/ssrn.388863. SSRN 388863.
  28. ^ Network Neutrality or Bias? -Handicapping the Odds for a Tiered and Branded Internet; see also Internet 3.0: Identifying Problems and Solutions to the Network Neutrality Debate, http://papers.ssrn.com/sol3/papers.cfm?abstract_id=962181
  29. ^ Videos from the Digital Broadband Migration conference and papers from the Journal of Telecommunications and High Technology Law about Net Neutrality law are collected at neutralitylaw.com, http://neutralitylaw.com