Some Initial Thoughts on the Software-Defined Network (SDN).

              · · · · ·

At one of the Ericsson R&D days, Professor Scott Shenker - an academic at the University of California, Berkeley - presented on a concept that he calls the "software-defined network". If you haven't seen the presentation, it's definitely worth watching (it's on YouTube, here); it provides quite an engaging look at the problem of network scaling from the perspective of academia, especially in comparison to the more rigorous disciplines of computer science, such as OS design.

Now, there are some interesting parallels between the "software-defined network" concept and a couple of issues that I've previously been discussing, working on, or simply had some interest in.

When considering a network with a decoupled control-plane, there are clear parallels to the argument around centralised management vs. distributed/dynamic management - in particular, the idea that a centralised control-plane has an overview of exactly what the whole network is doing. I blogged about this issue previously, albeit through the guise of considering how one provides useful operational tools for MPLS-TE networks. The question in that case is whether dynamic path placement, controlled by a distributed set of nodes (essentially with each head-end LSR being responsible for its own path placement, within the constraints set by the network), is better than utilising a centralised, off-line computation mechanism whereby path placement is computed for all network elements and then rolled out to them. The latter approach has some distinct advantages - it can take a more holistic view of the problem, and consider the interdependencies of LSPs - but it results in a complex (and therefore often expensive) centralised element in the system. Note, though, that in this case we are not decoupling the network to the extent that the SDN would want to: we are merely computing the ways that traffic should flow, while the actual signalling, FIB programming, and protocol configuration are still provided on a per-element basis. This leaves us with a set of distributed systems that have already had a complex additional layer deployed alongside them to solve the traffic placement problem - surely the worst of both worlds?
The question that interests me about the SDN is whether pushing all path-decision functionality to a central network control-plane results in a simplification of the elements within the network (and, through removing this complexity, adds some further robustness), or whether removing each node's responsibility for its own survivability introduces a single point of failure, whereby erroneous behaviour affects all elements rather than a subset.
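To make the two path-placement models concrete, here is a toy Python sketch. The topology, demands, and function names are entirely hypothetical - a real CSPF implementation in a head-end LSR is vastly more involved - but it illustrates the distinction: in the distributed model each head-end simply runs its own constrained SPF, while an off-line, centralised computation can place all LSPs in one pass, debiting bandwidth as it goes and so accounting for their interdependencies.

```python
# Toy sketch of distributed vs centralised path placement. All names and
# the topology here are hypothetical illustrations, not any real system.
import heapq

def cspf(graph, src, dst, demand):
    """Constrained SPF: shortest path ignoring links with insufficient bandwidth."""
    dist = {src: 0}
    prev = {}
    pq = [(0, src)]
    visited = set()
    while pq:
        d, node = heapq.heappop(pq)
        if node in visited:
            continue
        visited.add(node)
        if node == dst:
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return list(reversed(path))
        for nbr, (cost, avail_bw) in graph.get(node, {}).items():
            if avail_bw < demand:
                continue  # constraint: prune links that cannot carry this LSP
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(pq, (nd, nbr))
    return None  # no feasible path

# graph[a][b] = (igp_cost, available_bandwidth) - a hypothetical 4-node topology
graph = {
    "A": {"B": (10, 100), "C": (10, 40)},
    "B": {"D": (10, 100)},
    "C": {"D": (10, 40)},
}

# Distributed model: each head-end runs cspf() independently, with no view
# of what the other head-ends are placing.
cspf(graph, "A", "D", demand=50)  # A->B->D, since A->C->D lacks bandwidth

# Centralised model: an off-line computation places all LSPs together,
# debiting reserved bandwidth so later placements see earlier ones.
def place_all(graph, lsps):
    placements = {}
    for name, (src, dst, demand) in lsps.items():
        path = cspf(graph, src, dst, demand)
        placements[name] = path
        if path:
            for a, b in zip(path, path[1:]):
                cost, bw = graph[a][b]
                graph[a][b] = (cost, bw - demand)  # debit reserved bandwidth
    return placements
```

The interdependency point shows up directly: in `place_all`, the order in which LSPs are placed changes what bandwidth later LSPs see - exactly the kind of holistic trade-off a distributed set of head-ends cannot make.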

Another issue that interests me in this area is scalability. Right now we have a tight coupling between where the control-plane functionality for a particular interconnection (be it a UNI or NNI) is deployed, and where the physical interconnection takes place. Sure, there are some interim layers that might exist (consider, for instance, the extension of a Layer 3 node by a Layer 2 - or even Layer 1 - domain to backhaul and/or aggregate connectivity) - however, even where we are able to do this, we essentially have a single point of interconnection into our Layer 3 domain (be it IP/MPLS or pure IP). This interconnection point needs to maintain both connectivity back into the network - i.e. how do I reach each other exit point that I need to be aware of - and the functionality required to support the UNI. It therefore becomes a pinch-point quite quickly.

At the recent IETF armd working-group meeting in Quebec City, this was spoken about at some length, particularly focused on the scale concerns of network elements for the "Cloud". Let's dispense with the poorly defined term "Cloud" here, and define the problem as the interconnection of increasingly dense sets of hosts, which require increasingly many Layer 4+ service nodes, and which are increasingly multi-tenant. The key point of the discussion was the problem that I described above: the existence of a single point of interconnection which must support any of the FHRP, address resolution, and PE-CE routing protocol functionality required for the interconnect, as well as meshing into the SP network topology. Whilst armd perhaps focuses more on the address resolution element of this, I (and a couple of other network operators - watch this space on this one) think that the problem is rather more generic. So, how does this tie into the SDN concept? The decoupling of the control-plane and the forwarding-plane provides us with a new toolset to solve this problem. If we can "outsource" the routing decision functionality from the physical interconnection point to another element (which may or may not be centralised for the entire SP network), as the SDN would want to do, this starts to give us some flexibility to scale the control-plane independently. This gets around the problem of a very dense physical interconnection point, since we can simply stack up control-planes to provide the functionality we require there. Using the SDN as a "centralised" element-manager-type solution is also interesting, since this implies some tolerance to latency between the network manager and the nodes - which means that it may be possible to place the control-plane in a physically disparate location from the forwarding plane. An interesting new concept.
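As a rough illustration of this "outsourcing", here's a minimal Python sketch of a forwarding element whose only control interface is a route-install API, with the routing decision made in a (potentially remote, potentially replicated) controller. The class and method names are my own invention for illustration - this is not any real controller protocol.

```python
# Hypothetical sketch: routing decisions removed from the interconnection
# point and made by a separately-scalable controller. Not a real API.

class ForwardingElement:
    """A dumb edge box: holds a FIB and forwards; makes no routing decisions."""
    def __init__(self, name):
        self.name = name
        self.fib = {}  # prefix -> next-hop

    def install_route(self, prefix, next_hop):
        # The only control interface the element needs to expose.
        self.fib[prefix] = next_hop


class Controller:
    """Centralised control-plane: runs the routing logic for N forwarding
    elements, and can be scaled (or physically placed) independently of them."""
    def __init__(self):
        self.elements = {}

    def register(self, element):
        self.elements[element.name] = element

    def announce(self, prefix, next_hops):
        # The routing decision is made here, once, with a network-wide view...
        # ...and the result is pushed to each element over the install API.
        for name, nh in next_hops.items():
            self.elements[name].install_route(prefix, nh)


ctrl = Controller()
edge = ForwardingElement("pe1")
ctrl.register(edge)
ctrl.announce("192.0.2.0/24", {"pe1": "198.51.100.1"})
```

The point of the shape is that the dense physical interconnect (`pe1`) carries none of the decision logic - to add control-plane capacity, you add or move `Controller` instances, without touching the forwarding hardware.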

There's another benefit of such a disconnected control-plane, even if we just consider a smaller-scale concept (one that might look a bit more achievable than the entire-network deployment that Prof. Shenker proposes). At the moment, we're seeing great demand for FTTx and the deployment of more intelligent network elements closer and closer to the edge - this is motivating work like Seamless MPLS. However, this means deploying many relatively complex (and therefore expensive!) systems - perhaps workable where one can offset the costs of one element against additional revenue made by another, but where this isn't possible, the startup costs of such a deployment are high. If, instead, we consider deploying forwarding-only elements that have an API towards a central control-plane, then we can do two things. Firstly, the edge element can be cheaper - it need only perform those functions needed right at the edge (FIB programming, OAM and QoS) without any control or management functionality for these. This is a concept we're already seeing in the industry, so nothing new, I think. Secondly, if we have N of these forwarding elements, we can look at combining this with hypervisor-esque virtualisation - at this point, our control-plane CPU resources can be timeshared, giving us statmux for CPU time, just as the drive towards virtualisation has done for hosts. An interesting concept for lower initial-cost builds where large numbers of elements are needed.
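To give a flavour of just how little such a forwarding-only element needs to keep locally, here is a toy Python sketch of its core function: a FIB lookup via longest-prefix match, with the table programmed entirely from outside (i.e. by the controller over the API). The names are hypothetical, and a production FIB would of course use hardware tables or a trie, not a list scan.

```python
# Toy sketch of the local forwarding function of a control-plane-less edge
# element: a FIB programmed from outside, queried by longest-prefix match.
import ipaddress

class Fib:
    def __init__(self):
        self.entries = []  # (network, next_hop), programmed by the controller

    def program(self, prefix, next_hop):
        # Called over the element's API; no local route computation happens.
        self.entries.append((ipaddress.ip_network(prefix), next_hop))

    def lookup(self, dst):
        addr = ipaddress.ip_address(dst)
        matches = [(net, nh) for net, nh in self.entries if addr in net]
        if not matches:
            return None
        # Longest (most specific) prefix wins.
        return max(matches, key=lambda m: m[0].prefixlen)[1]

fib = Fib()
fib.program("0.0.0.0/0", "core-gw")   # default route towards the core
fib.program("10.0.0.0/8", "agg-1")
fib.program("10.1.0.0/16", "agg-2")

fib.lookup("10.1.2.3")   # -> "agg-2" (the /16 beats the /8 and the default)
fib.lookup("192.0.2.1")  # -> "core-gw" (only the default matches)
```

Everything else - where those routes come from, OAM policy, configuration - lives behind the API, which is precisely what makes the element cheap enough to deploy in large numbers.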

This discussion rambled on a bit for some initial thoughts, but there are definitely some interesting points that Prof. Shenker raises. I need to think about the availability question some more - especially with a view to how centralised management works within MPLS-TE (and probably even more importantly, MPLS-TP) networks. As always, I'm interested in discussing my view of this - clearly this is something being presented out of academia into the standardisation/design/R&D arena, and hence perhaps doesn't have a clear, public, operational model yet - so it's interesting to consider how it might apply to "real world" networks!