Innovation at the core of the Internet
“Everything that can be invented has been invented” is a statement attributed to Charles H. Duell, Commissioner of the U.S. Patent Office, in 1899. No one really expects this to be true…
But oddly enough, when it comes to the Internet, incorporating new innovation comes with some unique caveats.
It is almost as if we want Charles Duell’s statement to be true!
To explain this further: the Internet is a platform, and in a platform, innovation (and thus change) occurs at the edge of the network. This makes the Internet a vibrant vehicle and a catalyst for new innovation.
But what about innovation at the core (in the platform) itself?
Here, we appear NOT to want to rock the boat, and Charles Duell’s statement may well apply. Consider, for example, the woes of IPv6 deployment; there are many other instances of the same problem as well. Changes to the core of the Internet are hard!
Thus, we want to keep the core of the Internet very simple and static (implying, in effect, that everything that can be invented has already been invented), and we hope that all innovation happens at the edge (and not the core) of the Internet.
But the core cannot remain static forever. And when it does change, we want the change to be standardized, since that is what keeps the Internet available to all.
So, how do we balance these two perspectives?
The significance of overlay networks
There is a very interesting paper I read recently called Overlay Networks and the Future of the Internet, by Dave Clark, Bill Lehr, Steve Bauer, Peyman Faratin and Rahul Sami (Massachusetts Institute of Technology) and John Wroclawski (University of Southern California, Information Sciences Institute) (FOUND HERE).
I summarise the discussion below:
In Overlay Networks and the Future of the Internet, Clark et al. discuss the implications of overlays for the evolution of the Internet architecture.
The Internet started out as a government-funded research network running on top of the Public Switched Telecommunications Network (PSTN). Thus, ironically, the Internet itself could be viewed as an ‘overlay’ over the PSTN. Most of the incremental investment in routers, servers, and access devices (PCs) was undertaken by new types of providers (Internet Service Providers, or ISPs) and by end-users. Over time, the ‘overlay’ became the basic architecture. The Internet grew and led to innovation because of the end-to-end principle, which basically advocates ‘dumb pipes and smart nodes’, i.e. intelligence sits at the edge of the network rather than within it (except for performance optimization).
The paper offers a definition of an overlay network as follows:
“An overlay is a set of servers deployed across the internet that:
a) Provide infrastructure to one or more applications,
b) Take responsibility for the forwarding and handling of application data in ways that are different from or in competition with what is part of the basic internet,
c) Can be operated in an organized and coherent way by third parties (which may include collections of end-users)”
The end-to-end principle can be thought of as operating at two levels in the internet, the packet level and the application level. At the packet level, the routers know nothing about applications, but just forward packets. Knowledge of the applications is confined to the end-nodes. However, at the application level, the end-to-end principle could be interpreted as leading to an application architecture in which data is transferred directly between two nodes without an intermediary.
By this definition, many real applications today are not consistent with the end-to-end principle, because they rely on intermediary servers (SMTP servers, Web servers, etc.).
Thus, in practice, we need components other than end nodes and routers: SMTP servers (for email), web servers (for the web), and also caches and proxies. These intermediary devices, such as the mail and web servers, have some important characteristics that distinguish them from conventional end nodes. First, from the perspective of a router they may be end-nodes, but from the perspective of the application they are infrastructure, because they provide a way for the application to run, i.e. they support the end nodes. Second, they tend to be provided and operated by third parties.
Thus, while overlays are distinct from conventional end nodes, these organized intermediary devices (overlays) are a means to add new functionality to the Internet beyond the core Internet protocols (IP, TCP, UDP, DNS, BGP). While they could be seen to blur the end-to-end principle (at the application level), they are better than application-specific solutions, and they still preserve the workings of the end-to-end principle (at the packet level).
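The forwarding behaviour in part (b) of the definition above can be sketched with a toy in-process model: overlay nodes are ordinary end-hosts as far as the underlay (IP) is concerned, but to the application they form infrastructure, relaying data along overlay links of their own choosing. The class and method names below are hypothetical, for illustration only; a real overlay would run each node as a server connected to its neighbours over ordinary TCP across the public internet.

```python
from collections import deque

class OverlayNode:
    """A toy overlay node: an end-host to the underlay, infrastructure to the app."""

    def __init__(self, name):
        self.name = name
        self.neighbours = {}   # overlay links: name -> OverlayNode
        self.delivered = []    # (sender, payload) pairs that terminated here

    def link(self, other):
        # An overlay link is just an agreement between two nodes to relay
        # each other's traffic; underneath it would be a TCP connection.
        self.neighbours[other.name] = other
        other.neighbours[self.name] = self

    def _route(self, dest):
        # Overlay routing: breadth-first search over overlay links only.
        # The underlay's own routing (BGP, IP) never sees this topology.
        seen = {self.name}
        queue = deque([[self]])
        while queue:
            path = queue.popleft()
            node = path[-1]
            if node.name == dest:
                return path
            for nb in node.neighbours.values():
                if nb.name not in seen:
                    seen.add(nb.name)
                    queue.append(path + [nb])
        return None

    def send(self, dest, payload):
        path = self._route(dest)
        if path is None:
            raise ValueError(f"no overlay path from {self.name} to {dest}")
        path[-1].delivered.append((self.name, payload))
        return [n.name for n in path]

# Build a small overlay: a -- b -- c
a, b, c = OverlayNode("a"), OverlayNode("b"), OverlayNode("c")
a.link(b)
b.link(c)
print(a.send("c", "hello"))  # -> ['a', 'b', 'c']: data relayed via b
```

Note how node b behaves exactly like the mail and web servers discussed above: a plain end-node from the router’s point of view, but application-level infrastructure that takes responsibility for forwarding data on a route the basic internet did not choose.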
The question this raises is: what are the implications of these overlay networks for standards? I will discuss this in an upcoming post.