Can OpenStack Neutron really control the Physical Network?

This is a question I've been hearing a lot when we present OpenStack to a new client, mostly from the guys who control the Networking infrastructure. So, can the OpenStack Neutron module really control and configure the Physical Network? The answer might disappoint you. It depends! One thing is for sure - there is no better way to make a group of people put on their poker faces than to try to explain how OpenStack Neutron works to a Networking Operations team.

There are 3 of us doing the technical part of the OpenStack presentation:

  • OpenStack Architect. Typically this will be a young fella, enthusiastic about everything, who gives the impression that he is completely ignoring how a Data Center is traditionally built, and whose answer to almost every question is - "OpenStack will control that too!"
  • Virtualization Engineer. Seen as open-minded by the traditional Mainframe experts, and completely ignored by the OpenStack guy.
  • Network Engineer (me, in our case). Seen as a dinosaur and a progress-stopper by both the OpenStack and the Virtualization guys.

This is what happens to us, in 100% of the cases: We start doing the presentation. The entire Networking department has their laptops open, and they pay zero attention to us. The Architects and the open-minded bosses are the ones who want to understand it, and they have many questions throughout the presentation. Once we're almost done, one of the networkers "wakes up". This is where we enter the storm of crossed looks and weird facial expressions, and we "get the chance to" repeat around half of the presentation again, for the Networking guys. It all ends with a single question on everyone's mind - can OpenStack Neutron really control the Physical Network?

To answer this, let's start with ML2. ML2 is the OpenStack plugin designed to enable Neutron to, among other things, control the Physical Switches. This can be done using manual API calls to the switches, or a vendor-designed ML2-compatible driver for the particular switch model.
Before getting deep into ML2, here are the popular plug-ins:

  • Open vSwitch
  • Cisco UCS/Nexus
  • Linux Bridge
  • Arista
  • Ryu OpenFlow Controller
  • NEC OpenFlow

ML2 Plugin vs Mechanism Driver: the ML2 plugin works with the existing agents. There is an ML2-compatible L2 agent for each of Linux bridge, Open vSwitch and Hyper-V. In the future, all these agents should be replaced by a single Modular Agent.
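As an illustration of how modular this is, choosing the mechanism drivers is just a configuration change in the ML2 plugin. The sketch below shows the general shape of an ml2_conf.ini; the physical network label "physnet1" and the VLAN range are made-up example values:

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini (sketch - values are examples)
[ml2]
type_drivers = vlan,vxlan
tenant_network_types = vlan
# Open vSwitch handles the hypervisor side, the Cisco Nexus MD the physical side
mechanism_drivers = openvswitch,cisco_nexus

[ml2_type_vlan]
# "physnet1" is an arbitrary label, mapped to an actual NIC/bridge
# in the L2 agent's own configuration; 100:199 is an example VLAN pool
network_vlan_ranges = physnet1:100:199
```

With this layout, one Neutron API call can fan out to several mechanism drivers at once - each one programs its own piece of the data path.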


The Cisco Nexus driver for OpenStack Neutron allows customers to easily build their infrastructure-as-a-service (IaaS) networks. It is capable of configuring Layer 2 tenant networks on the physical Nexus switches using either VLAN or VXLAN networks.

Note: This driver supports the VLAN network type for Cisco Nexus models 3000 – 9000 and the VXLAN overlay network type for the Cisco Nexus 3100 and 9000 switches only.

Cisco Nexus Mechanism Driver (MD) for the Modular Layer 2 (ML2) plugin can configure VLANs on Cisco Nexus switches through OpenStack Neutron. The Cisco Nexus MD provides a driver interface to communicate with Cisco Nexus switches. The driver uses the standard Network Configuration Protocol (Netconf) interface to send configuration requests to program the switches.
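To give an idea of what the Nexus MD needs to know, its configuration ties each switch (by management IP) to credentials and a mapping of compute hosts to switch ports. Everything below - the IP, the credentials, the host names and port numbers - is a placeholder for the example:

```ini
# Sketch of a Cisco Nexus MD section - one per managed switch.
# All values here are placeholders, not a working configuration.
[ml2_mech_cisco_nexus:192.168.1.1]
username = admin
password = mySecretPassword
ssh_port = 22
# host-to-port mapping: when a VM lands on compute1, the driver
# knows it must trunk the tenant VLAN on Ethernet 1/8
compute1 = 1/8
compute2 = 1/9
```

This mapping is exactly why the driver can be surgical: it only touches the ports where the tenant's VMs actually live.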

The Cisco mechanism driver also supports multi-homed hosts in a vPC setup, as long as the interconnection requirements are fulfilled: the data interfaces on the host must be bonded, and this bonded interface must be attached to the external bridge.

There are APIs for each of these modules so that the Tenant can "talk" to them. Cisco switches, for example, are connected to the Neutron module via the plug-in, which enables OpenStack to communicate with them and configure them. There is a driver in Neutron for the Nexus switches (the ML2 driver), and the switches can be configured from OpenStack thanks to this driver. This way the resources of Nova are saved, because we are offloading the routing onto the switch.
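What the tenant (or rather the admin, since provider attributes are admin-only) actually sends to Neutron is an ordinary REST call. Here is a minimal sketch of the request body for a POST to /v2.0/networks that creates a provider VLAN network; the network name, "physnet1" and VLAN 142 are example values I picked for illustration:

```python
import json

# Body of a POST to the Neutron endpoint /v2.0/networks.
# The mechanism drivers (e.g. the Cisco Nexus MD) react once
# Neutron commits this network - that is where the switch gets configured.
payload = {
    "network": {
        "name": "tenant-red-vlan142",           # example name
        "provider:network_type": "vlan",
        "provider:physical_network": "physnet1",  # example label
        "provider:segmentation_id": 142,          # example VLAN ID
    }
}

body = json.dumps(payload)
print(body)
```

Nothing in this call mentions a switch model or a port - the admin expresses intent ("VLAN 142 on physnet1"), and the ML2 drivers translate it into device-specific configuration.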

Using all these drivers and plug-ins, OpenStack Neutron can manage the connectivity and configure the networking within the physical infrastructure. It can add/remove/extend VLANs, and manage the 802.1q trunk ports and Port-Channels. The question is - what happens in a bigger network, can this "scale"? And the answer is - NO! Not yet, at least. Yes, you can provision all the VLANs you want, even extend them, if you have just a few switches and no need for some of the "advanced" control or security protocols. But what happens with Spanning Tree? What controls the routing? What if you have a few OSPF areas in the "middle", and you need to advertise a newly configured network? What happens to the load balancing between the VLANs - is one switch always the Root Bridge of all the VLANs created by OpenStack?
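For context, the per-network configuration that the Nexus driver ends up pushing over Netconf is roughly equivalent to this handful of NX-OS CLI lines (VLAN ID and interface are example values, matching nothing in particular):

```
vlan 142
  name q-142
interface Ethernet1/8
  switchport trunk allowed vlan add 142
```

Notice what is NOT here: no Spanning Tree tuning, no Root Bridge placement, no routing or redistribution - which is precisely the scaling gap described above.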

There is a way for OpenStack to provision all the networking it needs, but in order to make it "scalable" (meaning - not just a PoC in a greenfield) - we need a Controlled Fabric. It can be an SDN, such as Cisco ACI or VMware NSX (almost), or it can be the client's Networking Team simply assigning a group of VLANs for OpenStack to use. This might change in the future, but for now - always consider OpenStack + SDN.
