SDN API and Protocols: ForCES (Forwarding and Control Element Separation)

This article is part of my “Network Virtualization 101” series. For other articles in this series, please visit http://www.dhimanchowdhury.com or my blog at http://www.dhimanchowdhury.blogspot.com.
SDN (Software Defined Networking) is quite the buzzword nowadays; for some it is simply the decoupling of the control and data planes, while for others it is an all-encompassing abstraction for cloudification. If you are scratching your head, you are not alone. The simplistic view may be decoupling, but the real purpose is network programmability: the work on network programmability developed the notion of decoupling as a means to attain it. Relating SDN to that degree of granularity is important for understanding why the industry is pondering so many open-source projects and why a protocol such as OpenFlow gained momentum so quickly as the de facto southbound protocol. The popularity of the OpenDaylight and Ryu controllers certainly made OpenFlow a well-known name and the most sought-after southbound protocol for network programmability. But does this mean OpenFlow is the best southbound protocol there is?
In this article, I introduce a somewhat forgotten yet important framework and protocol for control and data plane separation (decoupling): ForCES (Forwarding and Control Element Separation). ForCES is an undertaking of the IETF (Internet Engineering Task Force) and is defined through a number of RFCs (Requests for Comments). In my previous article at https://www.linkedin.com/pulse/network-virtualization-101-nve-overlay-sdn-dhiman-chowdhury?trk=pulse_spock-articles , I discussed the historical significance of ForCES. It is important to note that the notion of decoupling, originating from the scholarly work on ForCES, led to the research of the Ethane project in 2007 and to OpenFlow in 2008. The following diagram depicts the timeline for each.

Figure 1: Timeline depicting historical significance of ForCES.
In early experimentation, scholars proposed separating the control and data planes of a network processor (Olsson et al., n.d.) [1], which paved the way for programmable networks and programmability in network devices. Today the context has expanded further, to the design and architecture of next-generation networks. It is a good idea and in fact essential for the future of networking. However, there are major predicaments to such abstraction, e.g. network latency and scalability issues, especially when the control element is centralized and separated from the data plane. This concern has bogged down widespread deployment of some commonly used protocols such as OpenFlow, which relies mainly on wildcard matches to push "packet flow" rules to the networking device. Such an approach requires that wildcard matches be applied through TCAMs in the switch.
TCAM (Ternary Content Addressable Memory) and OpenFlow Issues
TCAM (Ternary Content Addressable Memory) is a specialized high-speed memory that allows a search of its entire contents within a single clock cycle. However, TCAM size is often limited in network devices due to the high cost of external TCAMs, so this approach limits the number of flow table entries that can be installed on a network device, raising the question of scalability (Braun & Menth, 2014) [2].
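To make the TCAM constraint concrete, here is a minimal software sketch of ternary (value/mask) matching with a fixed capacity. It is purely illustrative; the class names and the `send-to-controller` miss behavior are my own assumptions, not any switch vendor's API.

```python
# Illustrative software model of TCAM-style ternary matching.
# Each entry stores a value and a mask; a header matches an entry
# when all bits selected by the mask agree.

class TcamEntry:
    def __init__(self, value, mask, action):
        self.value = value    # bits to compare
        self.mask = mask      # 1-bits are "care" bits, 0-bits are wildcards
        self.action = action

class Tcam:
    def __init__(self, capacity):
        self.capacity = capacity  # hardware TCAMs are small and costly
        self.entries = []         # ordered by priority: first match wins

    def install(self, entry):
        if len(self.entries) >= self.capacity:
            # This is the flow-table scalability limit discussed above.
            raise MemoryError("TCAM full")
        self.entries.append(entry)

    def lookup(self, header):
        # A real TCAM compares every entry in one clock cycle; here we
        # emulate the same first-match semantics sequentially.
        for e in self.entries:
            if header & e.mask == e.value & e.mask:
                return e.action
        return "send-to-controller"  # table miss

tcam = Tcam(capacity=2)
tcam.install(TcamEntry(value=0x0A000000, mask=0xFF000000, action="forward-port-1"))
print(tcam.lookup(0x0A0000FF))  # matches the 10.0.0.0/8-style wildcard entry
```

The point of the sketch is the `capacity` check: every wildcard rule consumes a TCAM slot, so a controller that pushes many fine-grained flows quickly exhausts the table.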
All is not bad; whitebox switches are good news for OpenFlow implementations, as some of these products pack server-like capabilities into a 1 RU box with increased CPU capacity, memory, and TCAM sizes (Bifulco & Matsiuk, 2015) [3]. For example, Agema's AGC7648 family of products supports two external TCAMs, allowing more than 1.25 million IPv4 route lookups. The AGC7648 combines a relatively powerful CPU with a 750K UFT (Unified Forwarding Table) internal to the switching chip, two external TCAMs, and 6 GB of external port buffer: plenty of horsepower packed into a 1 RU box. On such a platform, an OpenFlow controller has a better chance of implementing a significant number of flows than on products without external TCAMs. However, latency and performance remain a big challenge for OpenFlow-based deployments, despite the claim that OF-TTP (OpenFlow Table Type Pattern) resolves scalability and performance issues.
Bifulco & Matsiuk (2015) [3] argue that a combination of a software-based switching function (which they denote a shadow switch) and TCAM would greatly improve throughput in OpenFlow deployments, and their proposal thus addresses the performance concern.
Why ForCES?
Some scholars consider wildcard matching, which requires significant TCAM capacity on networking devices, a handicapped approach. Bifulco & Matsiuk (2015) [3] believe this drawback can be mitigated by combining a software switching function with TCAM to increase overall throughput. An alternative to OpenFlow in the network programmability field, Haleplidis et al. (2015) [4] argue, is ForCES. Though it remains largely experimental in the academic world as an elastic routing architecture, its lack of presence in the commercial world does not make it a bad alternative. It may even surprise you to learn that some of the world's largest service providers, and to my knowledge a few mega data centers, are deploying the ForCES framework. A few startups have either developed a ForCES-based controller or are working towards one.
To that end, you may find this news interesting: https://www.sdxcentral.com/articles/news/verizon-uses-radisys-mojatatu-sdn-nfv/2016/06/ .
ForCES provides a framework and defines open APIs/protocols that clearly separate the control and forwarding planes. Although many such APIs/protocols have been developed or proposed (e.g. OpenFlow and REST APIs), the real strength of ForCES lies in its model, which enables the description of new data-path functionality without changing the protocol between the control and forwarding planes (Haleplidis et al., 2015) [4]. In contrast, OpenFlow requires implementation of the defined protocol at both the controller and the forwarding plane (e.g. an OpenFlow agent on the networking device). ForCES extends the notion of an elastic routing architecture with the belief that data plane packet processing may need additional functionality programmed in the future, after the switches are already deployed in the network. This abstraction innately differentiates it from other existing SDN protocols/APIs such as OpenFlow.
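The idea of describing new data-path functionality without changing the protocol can be sketched in a few lines. The sketch below is a loose illustration of the modelling concept only; the class names, component names, and the generic get/set operations are my assumptions and do not reproduce the actual RFC data model.

```python
# Hypothetical sketch of the ForCES modelling idea: new data-path
# functionality is described as a new LFB class (data), while the
# CE-FE protocol operations (generic get/set on component paths)
# stay unchanged. Names here are illustrative, not from the RFCs.

LFB_LIBRARY = {}  # model library: LFB class name -> list of components

def define_lfb_class(name, components):
    """Registering a new LFB class extends the data-path description
    without touching the control/forwarding protocol itself."""
    LFB_LIBRARY[name] = components

class LfbInstance:
    def __init__(self, lfb_class):
        self.schema = LFB_LIBRARY[lfb_class]
        self.state = {c: None for c in self.schema}

    # The protocol only ever needs these generic operations:
    def set_component(self, path, value):
        if path not in self.state:
            raise KeyError(f"unknown component {path}")
        self.state[path] = value

    def get_component(self, path):
        return self.state[path]

# A vendor can later model, say, a VXLAN encapsulator as just another class:
define_lfb_class("EtherClassifier", ["EtherTypes", "Stats"])
define_lfb_class("VxlanEncap", ["VNI", "RemoteVTEP"])

encap = LfbInstance("VxlanEncap")
encap.set_component("VNI", 5001)
print(encap.get_component("VNI"))  # 5001
```

Contrast this with OpenFlow, where supporting a new packet-processing function typically means a new protocol version or extension at both the controller and the switch agent.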
In my view, ForCES serves as a good framework upon which to develop an elastic routing architecture for networking devices, treating forwarding elements and control elements as parts of one system. In that way, a centralized controller and networking devices are capable of exchanging capabilities and becoming part of an elastic routing architecture.
ForCES Framework
The IETF formed the ForCES working group in 2001, during the era of dedicated network processors, which led to its main objective: standardizing open programmable interfaces for off-the-shelf network processor devices. However, the framework was later influenced by programmable networking and the IEEE P1520 initiatives (Tourrilhes et al., 2014) [5].
 Figure 2:  ForCES Timeline: a brief overview.
Several RFCs have been written since then, as depicted in the timeline above. The following table lists the ForCES RFCs and their purposes.

Table 1: ForCES RFC Information.
In the ForCES framework, the elastic routing architecture, or network element (NE), comprises two parts: the FE (Forwarding Element) and the CE (Control Element). Both of these elements are implemented through a ForCES agent, which includes a set of protocols and models.
 Figure 3: ForCES Framework.
The ForCES framework allows developers to define their own FE abstraction models for implementation. This does not constrain the CE to controlling and managing only ForCES-modeled FEs. More importantly, the ForCES framework is also protocol agnostic, meaning a vendor may choose a traditional networking protocol for communication between CEs and FEs. In theory, the CE can implement routing protocols such as OSPF and BGP, and packets that cannot be handled by FEs are redirected to CEs (Haleplidis et al., 2015) [4].
ForCES Protocol
RFC 5810 specifies two essential elements of the ForCES protocol for communication between FEs and CEs: the PL (Protocol Layer) and the TML (Transport Mapping Layer). The Protocol Layer is in fact the ForCES protocol itself, which relies on a TML over TCP or other underlying transport protocols for communication between FEs and CEs.
  Figure 4: ForCES Protocol Implementation.
The implementation of the PL and TML may differ between the control plane and the management plane, and in high-availability scenarios. Figure 5 shows the various interface points that are referenced in RFC 5810; the expected behavior of the PL/TML at each point is also suggested in the RFC. Further discussion of those reference interfaces is available in RFC 3746.
FEs rely on an underlying hardware abstraction layer to utilize hardware resources for packet processing. The handling of packets is directed either by a single controller or by multiple controllers over the ForCES interface, depending on the network deployment. The ForCES interface (herein the PL and TML; see Figure 4) may use either TCP or UDP for communication between FE and CE. For example, a ForCES implementation may choose to use TCP port 6653 (like OpenFlow) for FE-CE communication.
 Figure 5: ForCES interfaces in a high availability scenario as defined in RFC5810.
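The PL/TML split described above can be sketched as a layering of concerns: the PL builds and interprets messages, while the TML only carries framed bytes over the chosen transport. The framing below (a type field plus a length-prefixed body) is a simplified illustration of my own; the real ForCES PL header layout is defined in RFC 5810 and differs from this.

```python
# Minimal sketch of the PL/TML separation (illustrative framing only;
# not the actual RFC 5810 message format).

import struct

def pl_encode(msg_type: int, payload: bytes) -> bytes:
    # PL concern: message semantics and framing (type + length + body)
    return struct.pack("!HI", msg_type, len(payload)) + payload

def pl_decode(frame: bytes):
    msg_type, length = struct.unpack("!HI", frame[:6])
    return msg_type, frame[6:6 + length]

def tml_send(sock, frame: bytes):
    # TML concern: reliable delivery over the chosen transport (here TCP);
    # swapping in UDP or SCTP would not change the PL functions above.
    sock.sendall(frame)

msg = pl_encode(0x01, b"AssociationSetup")
print(pl_decode(msg))  # (1, b'AssociationSetup')
```

Because only `tml_send` touches the transport, the same PL code runs unchanged whether the TML uses TCP, UDP, or SCTP; that is the separation RFC 5810 is after.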
One important distinction between OpenFlow and ForCES is the use of logical functional blocks (LFBs) in the FE, which enable several functions, including packet processing and capability exchange. Rather than treating the network device as dumb (as in an OpenFlow implementation), ForCES allows capability exchange to occur between CE and FE, as depicted in Figure 6. As such, the FE can reach out to the CE, indicating its ability or inability to process a packet. In cases where the FE is unable to process a packet, perhaps due to a lack of underlying hardware support (e.g. for VXLAN or MPLS), the CE performs the function on behalf of the FE.
 Figure 6: Implementation of LFB and capability exchange between FE and CE.
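The capability-exchange behavior just described can be sketched as follows. The class and method names are hypothetical shorthand for the idea, not the RFC's actual message types: the FE advertises what it can process, and the CE handles anything the FE's hardware cannot.

```python
# Illustrative sketch (names are hypothetical) of capability exchange:
# the FE advertises its abilities, and the CE processes on behalf of
# the FE anything the underlying hardware cannot handle.

class ForwardingElement:
    def __init__(self, capabilities):
        self.capabilities = set(capabilities)  # advertised to the CE

    def process(self, packet_type):
        if packet_type in self.capabilities:
            return f"FE processed {packet_type} in hardware"
        return None  # signal inability; the packet is redirected to the CE

class ControlElement:
    def __init__(self, fe):
        # Capability exchange: the CE learns the FE's abilities when
        # the association is established.
        self.fe = fe
        self.fe_capabilities = fe.capabilities

    def handle(self, packet_type):
        result = self.fe.process(packet_type)
        if result is None:
            # e.g. no VXLAN support in the FE hardware
            result = f"CE processed {packet_type} on behalf of the FE"
        return result

fe = ForwardingElement(capabilities={"ipv4", "mpls"})
ce = ControlElement(fe)
print(ce.handle("ipv4"))   # handled in the FE
print(ce.handle("vxlan"))  # redirected to the CE
```

In an OpenFlow deployment the controller has no comparable negotiation: it pushes flow rules downward and learns of a device's limits mostly through table-miss events and errors.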
Such capability and state exchanges are an important consideration for next-generation architecture and could serve as an ideal foundation upon which a truly elastic routing architecture can be developed.
In summary, I recommend that developers and vendors explore the ForCES framework further for the development of a next-generation elastic routing architecture, one that can be implemented in a single chassis (as in ForCES) as well as in a disaggregated model in which the CE is implemented in a server-based controller and the FE in a networking device. More importantly, the lessons from ForCES should be applied towards developing a flexible routing architecture that makes use of traditional routing and transport protocols and allows network devices and controllers to exchange capability parameters. The work of ForCES is worth the praise and is truly an outstanding framework for elastic routing architecture development.

References:

[1] Olsson, R., Hagsand, O., Laas, J. & Gorden, B., n.d. Control and forwarding-plane separation of an open-source router. Uppsala University and KTH. Available online at http://www.inter.uadm.uu.se/digitalAssets/21/21239_opensourcerouting.pdf .
[2] Braun, W. & Menth, M., 2014. Software-Defined Networking Using OpenFlow: Protocols, Applications and Architectural Design Choices. Future Internet, 6, 302-336; doi:10.3390/fi6020302.
[3] Bifulco, R. & Matsiuk, A., 2015. Towards Scalable SDN Switches: Enabling Faster Flow Table Entries Installation. SIGCOMM '15, August 17-21, 2015, London, United Kingdom.
[4] Haleplidis, E., Salim, J.H., Halpern, J.M., Hares, S., Pentikousis, K., Ogawa, K., Wang, W., Denazis, S. & Koufopavlou, O., 2015. Network Programmability With ForCES. IEEE Communications Surveys & Tutorials, vol. 17, no. 3, third quarter 2015.
[5] Tourrilhes, J., Sharma, P., Banerjee, S. & Pettit, J., 2014. SDN and OpenFlow Evolution: A Standards Perspective. Computer, vol. 47, no. 11, pp. 22-29, Nov. 2014.