Note: Material furnished herein is copyright protected. © 2015 Dhiman Deb Chowdhury. All Rights Reserved.
In the previous article, I discussed the historical perspective of system virtualization ( http://dhimanchowdhury.blogspot.com/2015/07/network-virtualization-101-prelude.html ): how the need to integrate IBM's disparate systems to share resources and enable multitasking led to the development of system-level virtualization. If you have not read that article, I suggest that you do: it will help you understand the developments in network virtualization and the benefits thereof.
Today, system-level virtualization has advanced far beyond the conception embraced in IBM's CP/40, encouraging development work in network- and storage-level virtualization. The primary goal, however, remains the same: share resources and enable dynamic capabilities. For network virtualization, complexity has marred widespread deployment of the NVE (Network Virtualization Environment), especially in enterprise networks. Service providers and network operators are more intrigued by the advent of network virtualization technologies, and more open to deploying them as they reconfigure their networks, than enterprises are. For service providers (including traditional network operators), an overhaul of age-old networks is imperative to accommodate the growing demand for diversified services. Moreover, network virtualization is perceived as instrumental in eradicating the ossifying forces of the current internet by introducing disruptive technologies. To network operators and service providers, network virtualization promises to reduce CAPEX (capital expenditure) and OPEX (operational expenditure) in addition to augmenting their service capabilities. For example, NFV (Network Function Virtualization) servers can now offload various network functions, such as CPE, edge signaling (SBC, IMS, authentication and control) and mobile and core network nodes (BNG, CG-NAT, HLR/HSS, MME, SGSN, GGSN/PDN-GW, RNC, Node B, eNode B, DPI, QoS, IPsec/SSL, etc.), from traditional network nodes that are expensive to replace. With this approach, service providers are not constrained by the limited capabilities of a given network node. Additionally, such virtualization provides the portability, flexibility and scalability that traditional network nodes lack. To realize these perceived benefits, network operators collectively created an accepted NFV standard through ETSI (the European Telecommunications Standards Institute).
In contrast, enterprises have been slow to adopt network virtualization technologies, and much of the hesitation comes from increased complexity and a lack of standardization and case studies. The question is not whether an enterprise wants network virtualization, or for that matter SDN, but how to get it and, more importantly, whether the technology can alleviate concerns about disruption, security, scalability and reliability. That said, the advent of the white box concept in networking gear is drawing increasing interest from enterprise customers. I am observing a growing interest from larger enterprise customers in buying white box switches. The majority of them are interested in the reduced CAPEX/OPEX and in hybrid capabilities (a traditional protocol stack alongside RESTful APIs or OpenFlow agents), so as to realize the goal of a virtualized network environment or, for that matter, SDN (Software Defined Networking). Smaller enterprises are more interested in the traditional protocol suites and the pricing model of white box switches than in the SDN-type capabilities of the system. Additionally, the prevailing alphabet soup of network virtualization technologies adds much confusion: buzzwords and unsubstantiated claims are not helping. As a result, network virtualization deployment in the enterprise remains a piecemeal effort that lacks both strategic focus and a wholehearted approach.
In this article, I aim to achieve two goals: first, to introduce readers to network virtualization technologies through a survey of the most commonly used approaches, such as NVE, SDN, VNF, NFV and overlays, exploring their origins, differences and use cases; and second, to provide the know-how needed to understand how these technologies interplay. Once we move to the network configuration sections in the subsequent articles of this series, this foundational know-how will greatly help readers develop advanced network virtualization configurations from the ground up.
Having said that, let us begin our survey with NVE (Network Virtualization Environment).
1.1 NVE (Network Virtualization Environment)
In system virtualization, different components of the hardware platform (e.g., CPU, memory and NIC) are shared with the VMs, as shown in the figure below. For the purpose of our discussion, the Xen hypervisor architecture is considered. Xen is open source and is considered a "Type I" hypervisor or VMM, meaning it is bare metal and sits directly on top of the hardware. Xen implements virtual memory (vMemory), virtual CPUs (vCPU), event channels and shared memory on top of the hardware, and controls I/O (input/output) and memory access to devices (Sailer et al., 2005).
Figure 1. Xen I/O
operations (Ram, Santos & Turner, 2010; Li & Li, 2009; Migeon, 2011).
The inner workings of Xen, as presented in the figure above, depict how various hardware components are presented to the VMs for easier access and utilization; for example, the physical CPU is presented as a vCPU (virtual CPU) and physical memory as virtual memory. The benefit of such system virtualization is that each VM can be perceived as an isolated system serving specific applications without being constrained by resource limitations.
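Purely as an illustrative aside (a minimal sketch, not part of the Xen architecture discussed above), the libvirt Python bindings can be used to observe how a hypervisor reports the vCPU and vMemory carved out for each VM; the connection URI xen:///system and the presence of libvirt-python on the host are assumptions about your environment.

import libvirt  # pip install libvirt-python; assumes libvirtd manages the Xen host

# Read-only connection to a local Xen hypervisor; "qemu:///system" would be the
# equivalent URI on a KVM host. Both URIs are environment-specific assumptions.
conn = libvirt.openReadOnly("xen:///system")

for dom in conn.listAllDomains():
    # dom.info() returns (state, maxMemKiB, memKiB, nrVirtCpu, cpuTimeNs)
    state, max_mem_kib, mem_kib, vcpus, cpu_time_ns = dom.info()
    print(f"{dom.name()}: {vcpus} vCPU(s), {mem_kib // 1024} MiB vMemory")

conn.close()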
This approach to system virtualization benefits users by presenting a single physical machine or server as multiple machines or servers. In essence, network virtualization offers a similar abstraction; however, the approaches used to achieve comparable benefits at the network level are different. For example, in an NVE (Network Virtualization Environment) architecture, virtualization is achieved by allowing multiple heterogeneous networks to cohabit on a shared physical substrate (Chowdhury & Boutaba, 2009). The concept is nothing new: the process of abstraction that separates logical network behavior from the physical network is already present in VLANs, VPNs and overlay networks. Drawing upon those precursors, scholars proposed NVE as a next-generation network architecture, and the concept of NVE is inclusive of the network programmability implied in SDN (Software Defined Networking) as well as the logical network concept. The design goals of NVE are flexibility, manageability, scalability, isolation, programmability, stability and convergence. It should be noted that NVE is not a technology but a method of achieving a next-generation network architecture that takes advantage of available technologies. In contrast, the notion of SDN is, to some pundits, the technological advent that disaggregates the network by separating the control plane from the data plane, while for others SDN is an evolving, encompassing term that goes beyond programmability, centralization of the control plane and disaggregation. NFV, on the other hand, as discussed earlier, focuses on offloading network functions from traditional networking gear to servers or to new generations of white box switches, especially those using containerized process isolation for multiple applications.
In an NVE architecture, the VNE (Virtual Network Element), sometimes referred to as a VN (Virtual Network), is considered the basic entity (Carapinha & Jiménez, 2009; Chowdhury & Boutaba, 2009). A VNE or VN can be understood as a collection of virtual nodes connected by virtual links.
Figure 2. Network
Virtualization Environment (NVE) architecture.
A virtual node acts the same way a router or switch behaves in a physical network: its main functionality is to forward packets according to the protocol of the virtual network (Chowdhury & Boutaba, 2008). A virtual link, on the other hand, connects virtual nodes the same way a physical link connects physical routers. Although NVE is primarily focused on next-generation internet design, it is applicable to other networks as well. The notion of NVE, comprising physical infrastructure, virtual links and virtual nodes, is innately applicable to any type of network virtualization. As discussed earlier, the notion of NVE is not new; from a historical perspective, VLANs, VPNs and overlay networks, which separate the logical topology from the physical infrastructure, can be considered its precursors.
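To make the terminology concrete, here is a toy sketch of my own (not drawn from the cited papers): a virtual network is modeled as virtual nodes hosted on physical nodes and virtual links realized as paths across the physical substrate. All node and link names below are invented for illustration.

# Toy model of a virtual network mapped onto a physical substrate.
# All names (P1..P4, V1, V2) are invented for illustration only.

physical_nodes = {"P1", "P2", "P3", "P4"}
physical_links = {("P1", "P2"), ("P2", "P3"), ("P3", "P4")}

# Each virtual node is hosted on exactly one physical node.
virtual_node_hosting = {"V1": "P1", "V2": "P4"}

# Each virtual link is realized as a path of one or more physical links.
virtual_link_paths = {
    ("V1", "V2"): [("P1", "P2"), ("P2", "P3"), ("P3", "P4")],
}

def substrate_path(virtual_link):
    """Return the sequence of physical links that realizes a virtual link."""
    return virtual_link_paths[virtual_link]

if __name__ == "__main__":
    # The virtual topology sees a single hop V1-V2, while the substrate
    # carries that virtual link over three physical links.
    print(substrate_path(("V1", "V2")))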
Assuming that readers are conversant with VLANs and the various VPN technologies, I will briefly cover the remaining precursor, overlay networks, since this term is perhaps new to some; for others, this section can serve as a refresher.
1.2 Overlay Networks
Following up on the notion of NVE, we can take a pragmatic view of a virtual network: a logical network that sits on top of a physical network. For example, you can consider an existing IP network as the underlay network and a tunnel running from one endpoint to the other over this underlay network. This logical tunneling capability is known as an overlay network. From a definitional perspective, an overlay network is a virtual computer network that creates a virtual topology on top of the physical topology of another network (Chowdhury & Boutaba, 2008).
Therefore, an overlay network can be as simple as a VXLAN, NVGRE or MPLSoGRE tunnel that connects one data center to another over an IP network, or it can involve a multitude of design considerations that deploy network features over existing networks, such as QoS guarantees, performance and availability assurance, multicasting, protection from denial-of-service attacks, content distribution and file sharing.
Figure 3. Typical Data Center Overlay Networks using
VxLAN, NVGRE or MPLSoGRE.
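As a small preview of the configuration articles, the sketch below shows one way a simple VXLAN overlay segment might be created on a Linux host by driving iproute2 from Python. The interface name eth0, the remote VTEP address, the VNI and the bridge name are all placeholder assumptions, and the commands require root privileges.

import subprocess

VNI = 42                     # VXLAN Network Identifier (placeholder)
REMOTE_VTEP = "192.0.2.10"   # remote tunnel endpoint over the IP underlay (placeholder)
UNDERLAY_IF = "eth0"         # underlay-facing interface (placeholder)

def run(cmd):
    """Run an iproute2 command and fail loudly if it returns an error."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Create the VXLAN tunnel endpoint on top of the underlay interface
# (UDP destination port 4789 is the IANA-assigned VXLAN port).
run(["ip", "link", "add", "vxlan42", "type", "vxlan",
     "id", str(VNI), "remote", REMOTE_VTEP, "dstport", "4789", "dev", UNDERLAY_IF])

# Attach the overlay segment to a local bridge so workloads can join it,
# then bring both interfaces up.
run(["ip", "link", "add", "br-overlay", "type", "bridge"])
run(["ip", "link", "set", "vxlan42", "master", "br-overlay"])
run(["ip", "link", "set", "vxlan42", "up"])
run(["ip", "link", "set", "br-overlay", "up"])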
Clark et al. (2006) presented a table depicting a survey of overlay network possibilities, as shown below:
Table 1. Overlay network examples (Clark et al., 2006).
Type | Function/Purpose | Example
Peer-to-peer (P2P) | File sharing | Napster, Gnutella
Content Delivery Network (CDN) | Content caching to reduce access delays and transport costs | Akamai, Digital Island
Routing | Reduce routing delays; resilient routing overlays | Resilient Overlay Network (RON), Akamai SureRoute
Security | Enhance end-user security and privacy | Virtual private networks (VPNs), onion routing (Tor, I2P), anonymous content storage (Freenet, Entropy), censorship-resistant overlays (Publius, Infranet, Tangler)
Experimental | Facilitate innovation, implementation of new technologies, experimentation | General purpose (PlanetLab, I3)
Other | Various | Email, VoIP (Skype), multicast (MBone, 6Bone, TRIAD, IP-NL), delay-tolerant networks, etc.
To understand how the overlay network possibilities presented in the table above can be realized in a network deployment, we can consider Akamai's overlay network framework, as presented in the following diagram.
Figure 4. Akamai's
Overlay Network framework (Sitaraman et al., n.d.)
Similar to the notion of a VM discussed earlier, a virtual network (the overlay) can be built over an existing underlay network (e.g., the internet) to provide various services. Akamai's framework shows how social networking, e-commerce, media, file sharing and web portal services are offered through an overlay network over the internet (the underlay network).
In the network configuration sections of the succeeding articles, readers can explore how an overlay network can be realized on top of an existing IP network, especially using the VXLAN or NVGRE protocol. Until then, hold your thoughts about benefits, constraints and network configuration parameters. This section provided an overview of overlay networks to prepare readers for hands-on experience in later articles.
1.3 Software Defined Networking (SDN)
As discussed in the earlier sections, VLANs and VPNs, which separate the logical from the physical network, can be considered precursors to NVE and other types of network virtualization, including overlays and SDN. The historical perspective is important: it allows readers to understand the evolution of networks and the reasoning behind the advent of a technological approach such as SDN, a necessary conduit to network programmability and disaggregation. Some pundits argue that making computer networks more programmable enables innovation in network management and lowers the barrier to deploying new services (Feamster, Rexford & Zegura, 2014). Toward this goal, research was undertaken in various projects to achieve network programmability. In this section, I review the work on network programmability in three stages (Feamster, Rexford & Zegura, 2014) rather than in strict chronological order. Each stage made its own contribution to SDN history: active networks; control and data plane separation; and OpenFlow and network virtualization.
Figure 5. Works on
network programmability: a historical perspective (Feamster, Rexford &
Zegura, 2014).
The timeline presented in the figure above is not comprehensive; rather, it is intended to depict important research work on network programmability.
Active Networks: As internet traffic increased in the mid-1990s, network operators faced numerous issues, including route caching, preferential treatment/filtering and traffic engineering. Networking gear at the time did not offer the capabilities to manage network traffic effectively. To provide some control over traffic passing through an otherwise passive network, researchers undertook the work of creating network-level APIs to bring some programmability to an otherwise static network core. Tennenhouse & Wetherall (1996) were the first to introduce the notion of an "active network," in which users inject customized programs into network nodes. These programs are interpreted by the network nodes to perform the desired operations on the data flowing through the network. Further details of active network efforts, including ANTS and NetScript, are available at this URL: http://www.cse.wustl.edu/~jain/cis788-97/ftp/active_nets/ .
The idea of active networks was radical at the time but did not gain much traction. However, the work later found validity in programs such as GENI (Global Environment for Network Innovations), NSF's FIND (Future Internet Design) and the EU's FIRE (Future Internet Research and Experimentation) initiative.
Control and Data Plane Separation: In the early 2000s, network operators introduced traffic engineering (a commonly known practice for controlling the path of traffic forwarding) to manage increasing traffic and to improve the performance and reliability of the network. But the approach was primitive at best, and growing frustration with improving traffic engineering led researchers to look for alternatives that decouple the control plane from the data plane. Among several proposals, open control interfaces between the control and data planes, such as ForCES (Forwarding and Control Element Separation), and centralized control of the network, such as the RCP (Routing Control Platform), SoftRouter and Ethane, are important to note.
ForCES (RFC 3746; https://tools.ietf.org/html/rfc3746 ) discusses how the control plane and forwarding plane can be separated within a router by placing the route control mechanism on control blades.
Figure 6. Control
and Forwarding element separation within a router as presented in RFC 3746.
Many L2/L3 chassis designs to date use this notion of control and forwarding element separation as part of their architectural framework, as shown in the figure above. The RCP (Routing Control Platform), on the other hand, proposes separating interdomain routing from the IP routers. The reasoning behind this is that route convergence should be faster, eliminating limits on scale, especially in the case of iBGP (Caesar et al., 2005). Though researchers attempted to make BGP more flexible through path attributes such as MED and to scale it to larger networks, such mechanisms cause routers to perform complex path calculations, introducing potential inconsistencies and errors. Feamster et al. (2004) argue that today's router should only perform "lookup and forwarding," much like a switch, without being concerned with path calculation. Instead, they proposed the RCP as a separate element outside of the router/switch that takes over BGP route selection for each router in a domain (e.g., an AS) and exchanges routing information with RCPs in other domains.
In contrast, Ethane is a flow-based policy enforcement mechanism that protects enterprise networks from threats (including namespace concerns, e.g. hostname bindings). It was a research project at Stanford University that aimed to bring policy enforcement to enterprise networks through a flow-based network managed by a central domain controller that secures bindings and enforces policy, flow and access control. Interested readers can learn more about this research project at http://yuba.stanford.edu/ethane/pubs.html .
OpenFlow & NOS: These prior works on network programmability and on the separation of the control and data planes gained further momentum during the mid-2000s, as some chip vendors, such as Broadcom, offered open APIs allowing programmers to control certain forwarding behaviors (Feamster, Rexford & Zegura, 2014). The genesis of OpenFlow can be attributed to the tireless work of network operators, equipment vendors and networking researchers, which created a technology push and pull toward network programmability and disaggregation. Compared with its intellectual predecessors, it achieved far broader industry adoption (Feamster, Rexford & Zegura, 2014). Parallel to this development, academia also felt the need to let students experiment with new ideas on real networks at scale. This aspiration led a group of researchers at Stanford University to begin working on the initial concept of OpenFlow in 2008. By December 2009, the OpenFlow version 1.0 specification was released. Since its inception, the development and standardization of OpenFlow have been managed by the Open Networking Foundation (ONF), a user-led organization. The OpenFlow group at Stanford deployed testbeds to demonstrate the protocol's capabilities within a single campus and over the WAN to multiple campuses. A real SDN use case thus materialized on those campuses, and by 2012 experimental OpenFlow deployments had begun in other realms, including data center networking.
Figure 7. OpenFlow communication and interworking between the switch agent and the server-based controller.
The OpenFlow protocol is considered the fundamental element of Software Defined Networking (SDN). The protocol facilitates communication between the OpenFlow switching agent in the switch and the OpenFlow controller. The agent includes the OpenFlow channel and the flow table elements. Once rules and action parameters are devised, they are pushed from the controller to the flow agent on the switch using the OpenFlow protocol. The switch, or packet forwarder, can then use the forwarding instructions in the table to apply the matching action/rule profile to a specific packet. When multiple controllers are present, the Role-Request message can be used to coordinate them. The OpenFlow 1.4 specification provides the following flow chart to depict how packet forwarding decisions are made at the switch: an incoming packet is matched against entries in multiple tables, and the switch forwards or drops it accordingly.
Figure 8. Incoming packets are matched against entries in multiple tables to determine the forwarding action (OpenFlow v1.4, 2013).
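To make the controller-to-switch exchange concrete, here is a minimal sketch of a controller application written for the open source Ryu framework (assuming OpenFlow 1.3 rather than 1.0 or 1.4): when a switch connects and reports its features, the application pushes a lowest-priority table-miss flow entry that sends unmatched packets to the controller.

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class TableMissApp(app_manager.RyuApp):
    """Push a table-miss flow entry to every switch that connects."""
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        datapath = ev.msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser

        # Match everything; send unmatched packets to the controller.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                          ofproto.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS, actions)]

        # Priority 0 so any explicit flow entry always wins over the table miss.
        mod = parser.OFPFlowMod(datapath=datapath, priority=0,
                                match=match, instructions=inst)
        datapath.send_msg(mod)

Launched with ryu-manager, the application installs this fallback rule on every switch that connects; more specific match/action entries would be pushed to the flow tables in the same way.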
OpenFlow communication is initiated as a TCP handshake between the switch and the controller: in a given switch (that supports OpenFlow 1.0 or higher), the IP address of the controller is specified to initiate communication over TCP port 6633. Details about OpenFlow configuration will be discussed in the succeeding articles on network configuration. The notion of OpenFlow-based SDN replaces, or intends to replace, the control plane, or for that matter the traditional NOS (Network Operating System), in the switch. But the concept does not eliminate the NOS completely; instead, it resides in the OpenFlow controller (on a server somewhere in the network). However, the majority of white box and brand-name switches provide hybrid solutions, meaning the switch includes both a traditional NOS and an OpenFlow agent. Some switches (with a hybrid NOS) implement an arbitration API, allowing the user to set flow priorities so as to eliminate conflicts between flows directed by traditional protocols and those set in the OF (OpenFlow) flow tables.
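On the switch side, the sketch below shows how an Open vSwitch bridge might be pointed at such a controller over TCP port 6633; the bridge name and controller address are placeholder assumptions, and a hardware switch's hybrid NOS would expose equivalent commands of its own.

import subprocess

CONTROLLER = "tcp:192.0.2.1:6633"   # placeholder controller address on the classic OpenFlow port

def ovs(*args):
    """Thin wrapper around ovs-vsctl (requires Open vSwitch and root privileges)."""
    cmd = ["ovs-vsctl"] + list(args)
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

ovs("add-br", "br0")                                  # create an OpenFlow-capable bridge
ovs("set", "bridge", "br0", "protocols=OpenFlow13")   # advertise OpenFlow 1.3 to the controller
ovs("set-controller", "br0", CONTROLLER)              # switch agent initiates the TCP session
ovs("set-fail-mode", "br0", "secure")                 # do not fall back to standalone L2 forwarding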
On the controller side, some of the early NOSes were ONIX, NOX/POX and Beacon. Today, a number of open source controller NOSes are available: some support the OpenFlow protocol, such as Floodlight, OpenDaylight and Ryu, while others use RESTful APIs/plugins or protocols such as NETCONF/YANG to connect to the data forwarding plane (the switch), such as OpenStack with Neutron ML2 plugins, ONOS and OpenContrail.
Interestingly, a controller NOS such as OpenContrail uses its own switch agent, known as the vRouter, similar in concept to Open vSwitch, which can operate both as a soft switch running within the hypervisor and as the control stack for switching silicon. Further details on Open vSwitch are available at http://openvswitch.org/ .
I hope this brief overview of NVE, network overlays and the historical perspective on SDN and OpenFlow is helpful. In the next article, I will extend the discussion further, emphasizing network virtualization as depicted in the timeline (Figure 5), along with VNF (Virtual Network Function) and NFV (Network Function Virtualization). Please stay tuned and follow me on LinkedIn (https://www.linkedin.com/in/dhiman1 ), Twitter @dchowdhu ( https://twitter.com/dchowdhu ) and Google Plus (https://plus.google.com/u/0/+DhimanChowdhury/posts ). You may also subscribe to all these feeds through Agema Systems' LinkedIn page at https://www.linkedin.com/company/agema-systems-inc?trk=top_nav_home
References
[Caesar et al., 2005] Caesar, M., Caldwell, D., Feamster, N.
& Rexford, J., 2005. Design and
Implementation of a Routing Control Platform. USENIX Association. NSDI ’05:
2nd Symposium on Networked Systems Design & Implementation.
[Carapinha & Jiménez, 2009] Carapinha, J. & Jiménez, J., 2009. Network Virtualization: A View from the Bottom. VISA '09: Proceedings of the 1st ACM Workshop on Virtualized Infrastructure Systems and Architectures. ACM Digital Library.
[Chowdhury & Boutaba, 2008] Chowdhury, K.M.M.N. & Boutaba, R., 2008. A Survey of Network Virtualization. Technical Report CS-2008-25. University of Waterloo.
[Chowdhury & Boutaba, 2009] Chowdhury, K.M.M.N. & Boutaba, R., 2009. Network Virtualization: State of the Art and Research Challenges. IEEE Communications Magazine.
[Clark et al., 2006] Clark, D., Lehr, B., Bauer, S.,
Faratin, P., Sami, R. & Wroclawski, J., 2006. Overlay Networks and the Future of the Internet. Communications
& Strategies, no. 63, 3rd quarter 2006, p. 109.
[Feamster, N., Rexford, J. & Zegura, E., 2014] Feamster,
N., Rexford, J. & Zegura, E., 2014. The
Road to SDN: An Intellectual History of Programmable Networks. ACM Queue,
2014.
[Feamster et al., 2004] Feamster, N., Balakrishnan, H., Rexford, J., Shaikh, A. & van der Merwe, J., 2004. The Case for Separating Routing from Routers. SIGCOMM '04 Workshops, Aug. 30-Sept. 3, 2004, Portland, Oregon, USA.
[OpenFlow v1.4, 2013] Openflow v1.4, 2013. OpenFlow Switch Specification Version 1.4.
ONF TS-012. Open Networking Foundation. Available online at https://www.opennetworking.org/images/stories/downloads/sdn-resources/onf-specifications/openflow/openflow-spec-v1.4.0.pdf
.
[Sitaraman et al., n.d.] Sitaraman, R.K., Kasbekar, M., Lichtenstein, W. & Jain, M., n.d. Overlay Networks: An Akamai Perspective. Akamai Technologies Inc. and University of Massachusetts, Amherst.
[Tennenhouse & Wetherall, 1996] Tennenhouse, D.L. & Wetherall, D.J., 1996. Towards an Active Network Architecture. ACM SIGCOMM Computer Communication Review, 26(2):5-18, Apr. 1996.