This article originally appeared in the Feb. 2011 issue of INTERNET TELEPHONY Magazine.
As complicated as the networked world seems today, there is actually a rather simple way to break it all down: the Open Systems Interconnection (OSI) model.
“The Open Systems Interconnection model (OSI model) is a product of the Open Systems Interconnection effort at the International Organization for Standardization. It is a way of sub-dividing a communications system into smaller parts called layers. A layer is a collection of conceptually similar functions that provide services to the layer above it and receive services from the layer below it. On each layer, an instance provides services to the instances at the layer above and requests service from the layer below.
For example, a layer that provides error-free communications across a network provides the path needed by applications above it, while it calls the next lower layer to send and receive the packets that make up the contents of the path. Conceptually, two instances at one layer are connected by a horizontal protocol connection on that layer.
Most network protocols used in the market today are based on TCP/IP stacks.”
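The layering described in the quote above can be sketched in a few lines of code. This is a toy illustration, not any real protocol stack: each layer on the sending side wraps the payload handed down from above with its own (hypothetical) header, and its peer on the receiving side strips that header off in reverse order.

```python
# Toy sketch of OSI-style encapsulation: each layer wraps the data
# from the layer above with its own header on the way down the stack,
# and the peer layer removes that header on the way back up.

LAYERS = ["application", "presentation", "session",
          "transport", "network", "data link", "physical"]

def encapsulate(payload: str) -> str:
    """Walk down the stack: each layer adds its own header."""
    for layer in LAYERS:
        payload = f"[{layer}]{payload}"
    return payload

def decapsulate(frame: str) -> str:
    """Walk back up the stack: each layer strips the header
    added by its peer on the sending side."""
    for layer in reversed(LAYERS):
        header = f"[{layer}]"
        assert frame.startswith(header), f"expected {header}"
        frame = frame[len(header):]
    return frame

wire = encapsulate("hello")
print(wire)               # the physical-layer header is outermost
print(decapsulate(wire))  # hello
```

The point of the sketch is the horizontal relationship the quote describes: `decapsulate` only works because each layer's strip step is matched against the header added by the same layer on the other side.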
Most people in the networked world are familiar with the OSI model, but typically those who are experts in one layer are not so well versed in the others. This is a result of the depth of each layer and the various functions, services and providers that must be understood. The good news is that the OSI model is a standard, and it is meant to allow for the hand-off from one layer to the next in a seamless fashion. So, basically, an expert in one particular layer need not be an expert in another for the entire system to reach optimal performance.
Although there may be slight differences between layers at the hand-off points, there are dramatic differences between fully separated layers such as the physical layer and the application layer. Usually the network and IT people who are responsible for Ethernet and IP administration can perform functions in the local and wide area at layers 2 and 3. These people are typically not application or software engineers, but they can get an office network going to support those applications. Software developers may have an understanding of Ethernet and IP, but that is not necessarily their expertise, as they focus more on programming languages at layer 7.
Neither of these two groups possesses subject matter expertise on building the physical links that make all that they know and do possible.
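That seamless hand-off between layers is exactly what a layer-7 developer experiences in practice. In the hedged sketch below (local loopback only, no real network required), the application code speaks only to the transport layer through the standard socket API; the IP routing, Ethernet framing and physical link underneath are handled entirely by the operating system and hardware.

```python
# Application-layer code (layer 7) talking to the transport layer
# through the socket API. Everything below transport -- IP, Ethernet,
# the physical link -- is the OS's problem, not this program's.
import socket
import threading

def echo_server(srv: socket.socket) -> None:
    """Accept one connection and echo back whatever arrives."""
    conn, _ = srv.accept()
    with conn:
        conn.sendall(conn.recv(1024))

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen(1)
threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

with socket.create_connection(srv.getsockname()) as cli:
    cli.sendall(b"hello, layer 7")
    reply = cli.recv(1024)
srv.close()
print(reply.decode())  # hello, layer 7
```

Notice that nothing in the program names an Ethernet interface, a MAC address or a cable: the application developer can be entirely ignorant of layers 1 and 2 and the exchange still works, which is the article's point about experts in one layer not needing expertise in another.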
Infrastructure peering is, in essence, this layered model. It is the physical infrastructure that supports the OSI stacks in the same way that the physical human body supports the DNA within it. Of course, without the DNA, there is no purpose for the physical. The same can be said for physical objects of any kind and the laws of physics. They are intertwined, and their relationship is inherent.
Without acknowledging each and every law or layer, and having a basic understanding and appreciation of each and its respective role, a prudent network plan cannot be effectuated. The peering that is taking place, whether on a P2P level between people or machines, or on a network-to-network IP peering level, actually happens physically at one or many points of infrastructure. When the lowest layer is overlooked, or forgotten, even the best laid network plans will go awry. (In the old days of dial-up, the World-Wide-Wait and America-On-Hold sum it up. Today it would be the impact of the iPhone on the AT&T wireless network.)
One of the most important things in the world when making any type of plan is predictability. Having the information to know cause and effect, input and output, is the only way to know and then mitigate the risk of loss, or failure. These are the elements required to maximize the return on any investment. Each layer of the OSI model is bound to this same principle in and of itself as much as they are bound to it between and among each other. All investments in any aspect of information technology, devices, equipment and networks of all kinds public and private are therefore bound to this as well.
In a totally local, fully private scenario, where the end result is to run a single application connected and functioning only within a single environment unto itself, perhaps some of the risk is already mitigated, but alas this is not the norm. In fact it is far from it. The reality is that everything of any meaningful value to society outside of a lab is connected. To be disconnected is to be non-existent.
Unfortunately, over the past ten years the vast majority of businesses have been conditioned not to ask about anything beyond their own internal networks, and to rely on the public Internet and the ISPs to provide them access for interconnection to all things not on their own IP networks. This encouraged ignorance has created a very detrimental situation in the United States: the net neutrality debate. The irony is that net neutrality has nothing to do with the Internet itself (the realm of public layer 3 up through layer 7), but rather with network access to the Internet (the physical link, layers 1 and 2). The issue is that, due to this ignorance, the FCC is now attempting to regulate the public Internet instead of attempting to create a real plan to resolve the issue of independent, physical access to it.
To make matters worse, no one can really have an educated opinion about the subject, since everyone has been misinformed for so many years. Being told repeatedly by the mass media that VoIP means voice over the Internet, and actually believing it, has contributed to our disastrous present reality. Internet protocol is not the Internet. The Internet is not your broadband cable connection. Your cable provider provides you access to the Internet. If you are an end-user consumer, you may not have many, or any, choices, so you are subject to your provider's discretion. This discretion is what is now in question at the FCC, but that is not and should not be a question of the Internet itself.
If “you” are a business, you might have other options, and if high-speed access to the Internet is something that your business requires to operate optimally then you will seek the best possible connection even going to the extent of moving your office location to a building or state that has a greater number of better, more economical options for access. The issue is that both consumers and businesses are connecting to the same Internet, but rules being created to supposedly help one group, the consumers, will have an impact on the other group, the businesses, or basically anything that is not a consumer. Unless….
Fortunately, the OSI model is a standard, and it provides a level of predictability for everyone who wishes to apply its rules. If the FCC truly wants to protect end users' rights to “legal” content (legal as defined by whom? Maybe WikiLeaks becomes deemed an illegal terrorist website and gets blocked by the U.S. government? That's for another article), then all of those in favor of better access to the Internet can organize and build it. This is the basis of every community fiber network build in the U.S. and abroad, including the Australian NBN.
The tools and the plans are all out there. The return on the investment is known as long as there is a path to physical interconnection. A positive outcome through a proper plan is as predictable as the negative impact of inaction. It is just a matter of knowledge and execution.