Let's get a few things clear about OpenStack before we even start:
- “OpenStack is a collection of open source technologies delivering a massively scalable cloud operating system” (openstack.org).
- Open source and open APIs allow the customer to avoid being locked into a single vendor.
- Keep in mind that OpenStack is made for applications built specifically for the Cloud; you should not even consider moving all your virtual workloads to OpenStack.
- Anyone who has gone a bit deeper into the concept of a Private Cloud and OpenStack, how they operate and the basic use cases, understands that Neutron just wasn't designed to handle all of OpenStack Networking.
To back this up, I'll get into a bit of a "bloggers' loop" here by telling you to read this post by Scott Lowe, where he actually refers to one of my posts about Neutron and OVS.
There are 2 main advantages of OpenStack + ACI integration:
- ACI handles all the Physical and Virtual Networking that OpenStack requires.
- There are OpenStack plugins for OpenDaylight as well, but they require much, much more manual work and "tuning". There is also the question of who gives you technical support when something goes wrong.
The concept of how the OpenStack and Cisco ACI integration works is shown in the diagram below.
- From OpenStack Horizon we create the Networks and Routers. The ACI OpFlex Plugin translates this into the EPG/Contract language that ACI understands, and these "instructions" on how to configure the connectivity are sent to the APIC controller.
- APIC sends the instructions to the Physical and Virtual network elements in order to configure them in accordance with OpenStack's needs (a rough CLI-level example follows).
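To make that a bit more concrete, here is roughly what it looks like from the OpenStack side, using the Kilo-era neutron CLI. The names and prefixes are just placeholders, and exactly which EPGs and Contracts show up in the APIC depends on the mapping mode of the plugin version you use:

```bash
# Create a tenant network and subnet; the OpFlex/APIC ML2 driver renders the
# network as an EPG on the fabric (names and prefixes are placeholders).
neutron net-create web-net
neutron subnet-create web-net 10.10.1.0/24 --name web-subnet

# Create a router and attach the subnet; routed connectivity between networks
# ends up as EPGs tied together by a Contract in the corresponding ACI Tenant/ANP.
neutron router-create web-router
neutron router-interface-add web-router web-subnet
```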
To be fair, we used completely different environments to deploy OpenStack before we started the Cisco ACI integration. I hope this makes it clear that we did not and cannot compare the performance here, only the way it integrates and the features.
There are 2 ways of integrating OpenStack with Cisco ACI: using the ML2 Driver, and using GBP (Group Based Policy). The second one is still in a BETA phase, and even though we did try it, and its concept is much more in line with the Cisco ACI Policy Model (read: recommended and to be used in the future), I would highly recommend you stick to the ML2 driver until GBP gets somewhat stable and supported. The differences are shown in the diagram below:
There are currently 3 OpenStack distributions officially supported by Cisco ACI. Out of these 3, we focused on testing the Red Hat and Mirantis distribution integrations.
Red Hat OpenStack (RHOS/RDO)
- KILO Release.
- VxLAN mode (not fully supported by Cisco ACI at this moment).
- Deployed with Packstack and Red Hat Director (a minimal Packstack sketch follows this list).
- UCS B Series (Director in a single Blade), FIs directly connected to Cisco Leafs.
- Control and Compute Nodes on Blades.
- We chose a mode where OpenStack creates a single ACI Tenant, and each OpenStack Project maps to an ACI ANP within that OpenStack Tenant (this used to be the default mode, but no longer is).
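For reference, the Packstack part of the deployment is not much more than generating an answer file, adjusting it and running it. The hostnames, IPs and answer-file keys below are placeholders from our lab rather than anything ACI-specific (the ACI-related settings come later, from the plugin guide):

```bash
# Generate an answer file, point it at the Controller/Compute blades, then run
# the deployment (values are placeholders; key names can vary between releases).
packstack --gen-answer-file=aci-lab-answers.txt

# Typical edits before running it:
#   CONFIG_CONTROLLER_HOST=172.16.10.11
#   CONFIG_COMPUTE_HOSTS=172.16.10.21,172.16.10.22
#   CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
#   CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan

packstack --answer-file=aci-lab-answers.txt
```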
Mirantis OpenStack
- KILO Release.
- VLAN mode.
- Deployed in a VMware vSphere environment.
- IBM Flex Chassis connected to Cisco Leafs.
- Control and Compute Nodes on VMs.
TIP: When you deploy OpenStack in a VMware environment, you need to "tune" your vSwitch/VDS to allow the LLDP packets between ACI and the OpenStack Nodes, by following these steps:
- Make sure the adapter passes the LLDP packets (in the case of UCS C Series, disable LLDP on the VIC through CIMC).
- Disable LLDP/CDP on the vSwitch (or a VDS, if that's what you are using).
- Make the Port Group and vSwitch "promiscuous" (a rough sketch of these settings from the ESXi shell follows this list).
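This is a minimal sketch of those settings from the ESXi shell, assuming a standard vSwitch called vSwitch0; on a VDS the LLDP/CDP and security settings live in vCenter instead:

```bash
# Stop the vSwitch from consuming discovery frames so they reach the node VMs
# (standard vSwitch only; on a VDS disable LLDP/CDP in the switch settings).
esxcli network vswitch standard set --vswitch-name=vSwitch0 --cdp-status=down

# Allow the OpenStack node VMs to see LLDP frames and traffic not addressed to them.
esxcli network vswitch standard policy security set --vswitch-name=vSwitch0 \
    --allow-promiscuous=true --allow-forged-transmits=true --allow-mac-change=true

# Verify the effective security policy.
esxcli network vswitch standard policy security get --vswitch-name=vSwitch0
```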
INTEGRATION PROCEDURE
DISCLAIMER: In both of these cases we followed the official integration guides (in the References below). Keep in mind that these plugins are being continuously updated, and you will often find that the integration guide doesn't correspond to the plugin you can currently download.
You should know that these plugins are designed by Cisco, Red Hat and Mirantis, so it's a mutual effort. If you have problems with the documentation, or encounter a bug, we found that it's much easier to ask for Cisco's support, as the Cisco Lab guys really seem to be on top of things.
Red Hat Integration
You can follow the step-by-step integration guide, but keep in mind that you will often not be sure what a step does or why you are doing it. This will get better in time, but for now you'd better sit your network engineers with ACI knowledge and your Linux experts with OpenStack knowledge together, make them talk and work together on every step, or you will not really be able to make it work.
Before you even start, define the External, Floating IP and SNAT subnets, and configure your L3_Out from the ACI Fabric. In our case we did OSPF with a Nexus 5500. Once our OpenStack was fully integrated, the Nexus 5500 "learned" the SNAT and Floating IP subnets from ACI via OSPF.
TIP: The External Network is a VLAN you extend from your production network for Director, Horizon etc. and it does NOT go out using the same route.
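On the OpenStack side this roughly translates into the external network and its subnet. The names, VLAN ID and prefixes below are placeholders (our Floating IP and SNAT ranges came from the subnets we defined up front):

```bash
# External network mapped to the VLAN extended from the production network
# (names, VLAN ID and prefixes are placeholders).
neutron net-create ext-net --router:external \
    --provider:network_type vlan --provider:physical_network physnet1 \
    --provider:segmentation_id 100
neutron subnet-create ext-net 192.0.2.0/24 --name ext-subnet --disable-dhcp \
    --gateway 192.0.2.1 --allocation-pool start=192.0.2.50,end=192.0.2.200

# Tenant routers then use it as their gateway, so Floating IP and SNAT traffic
# leaves through the L3_Out.
neutron router-gateway-set web-router ext-net
```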
In Red Hat you need to manually:
- Install and deploy the plugin on both nodes.
- Replace the Neutron Agents with the OpFlex ones.
- Configure all the parameters of the interconnections and protocols (a rough sketch of these settings follows the list).
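To give you an idea of what those parameters look like, here is a heavily trimmed sketch based on the Kilo-era apic_ml2 driver. The driver, option and service names change between plugin versions, so treat everything here as illustrative rather than as the authoritative configuration:

```bash
# Illustrative only: edit the ML2 config to load the APIC mechanism driver and
# tell it where the APICs are (crudini is just one way to edit the ini files).
ML2_CONF=/etc/neutron/plugins/ml2/ml2_conf.ini

crudini --set $ML2_CONF ml2 mechanism_drivers cisco_apic_ml2
crudini --set $ML2_CONF ml2_cisco_apic apic_hosts 10.0.0.101,10.0.0.102,10.0.0.103
crudini --set $ML2_CONF ml2_cisco_apic apic_username admin
crudini --set $ML2_CONF ml2_cisco_apic apic_password '<apic-password>'
crudini --set $ML2_CONF ml2_cisco_apic apic_system_id openstack-lab

# Swap the stock OVS agent for the OpFlex agent on each node
# (exact unit names depend on the plugin packages you install).
systemctl stop neutron-openvswitch-agent && systemctl disable neutron-openvswitch-agent
systemctl enable neutron-opflex-agent && systemctl start neutron-opflex-agent
```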
During the integration process we felt like the guide was made for a very specific environment, and that many of the steps were poorly documented or not explained at all. Many times we had to stop, make a diagram of how the Linux guys and the Network guys understand the current step, and reach a conclusion together. I think this will happen in many organisations, as the Network and System engineers do not really "speak the same language", so to say.
This is what you will see once your nodes get their IPs (VTEPs, actually) via DHCP from the ACI Infrastructure VLAN (in our case VLAN 3456, 172.1.0.0/16) and establish LLDP connectivity with the ACI Leafs. Your OpenStack Nodes will show up as OpenStack Hypervisors, and you will be able to see all the VMs from ACI:
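From the node side you can sanity-check the same thing before looking at the APIC. A minimal sketch, assuming the example sub-interface name below (yours will differ) and the lldpd package installed on the node:

```bash
# The infra VLAN sub-interface should have received a VTEP address via DHCP
# from the fabric (interface name is an example; our infra VLAN was 3456).
ip addr show dev eth1.3456

# The ACI Leaf should show up as an LLDP neighbour from the node's perspective.
lldpctl
```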
TIP: Since we used the VxLAN mode, all the traffic between the ACI Leafs and the OpenStack Nodes (Hypervisors) goes via the Infrastructure VLAN that carries the VxLAN traffic, so be sure that you have Jumbo Frames enabled VTEP-to-VTEP (this includes the Blades, FIs and all the Switches you might have in the path).
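A quick way to check this on the nodes, assuming the example interface name and a placeholder VTEP address on the far end:

```bash
# Raise the MTU on the uplink carrying the Infrastructure VLAN...
ip link set dev eth1 mtu 9000

# ...and verify it end-to-end with a do-not-fragment ping towards the far VTEP:
# 8972 bytes of payload + 20 (IP header) + 8 (ICMP header) = 9000.
ping -M do -s 8972 -c 3 172.1.0.30
```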
For some reason the L3_EPG within the OpenStack Tenant did not correctly "pick up" the L3_Out. Once I manually assigned the OSPF peering that I had created before in the Common Tenant, OpenStack got the "ping" to the outside network working.
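Once the peering was fixed, a quick way to confirm the whole external path is to hang a Floating IP off an instance port and test it from outside. The IDs below are placeholders for the ones neutron gives you:

```bash
# Allocate a Floating IP from the external network and bind it to an instance
# port; when the L3_Out and OSPF peering are right, it answers from outside.
neutron floatingip-create ext-net
neutron floatingip-associate <FLOATINGIP_ID> <INSTANCE_PORT_ID>
```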
Once you have your OpenStack Tenant in ACI, you will be able to add other VMs or physical servers from other Domains (physical or virtual, such as VMware or Hyper-V) to the Networks (EPGs). In the diagram below, you will see how the EPGs named "group1" and "group2" contain both OpenStack and VMware domains.
Mirantis Integration
Mirantis OpenStack was deployed in the VLAN mode (fully supported by Cisco ACI at this point), but it was deployed in a virtual environment. We were therefore expecting some operational difficulties.
Mirantis integration was a really pleasant surprise. While in Red Hat you need to manually install the plugin on both nodes, replace the Neutron agents with the OpFlex ones, and then make sure all the correct services are running, in Mirantis you have a graphical interface where you just import the OpFlex plugin, after which the OpFlex options auto-magically appear when you want to deploy a new OpenStack Environment.
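If you prefer (or need) to do it from the command line, the plugin can also be installed on the Fuel master with the Fuel CLI; the package file name below is a placeholder for whatever the OpFlex plugin is called in the version you download:

```bash
# On the Fuel master node: install the OpFlex plugin package, then it appears
# as an option when you create a new OpenStack Environment in the Fuel GUI.
fuel plugins --install ./cisco-aci-opflex-fuel-plugin.rpm

# List installed plugins to confirm it registered.
fuel plugins
```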
While deploying a new OpenStack Environment you simply configure all the integration parameters, and the environment is built in accordance with the integration policy from the start. It all felt so easy, until we reached the point where it was all deployed correctly and Mirantis was giving the "all correct" messages, but our OpenStack simply wasn't appearing in ACI. To be fair, this was a virtual environment installation, so we were kind of expecting this type of problem.
After we applied the VMware workaround described in the introduction of this post, we got visibility, and the OpenStack Hypervisors showed up in the APIC GUI.
References
There is a lot of documentation out there for Kilo. These are the ones we used: