Cisco ACI and OpenStack Integration: RedHat vs Mirantis

Note: This post requires basic knowledge of Cisco ACI architecture and ACI logical elements, as well as understanding of what OpenStack is, what the OpenStack elements (Projects) do, and the principles of what OVS and Neutron are and how they work. If you wish to get more information about these technologies, check out the Cisco ACI and OpenStack Section within the "SNArchs.COM Blog Map".

Let's get one thing clear about OpenStack before we even start:
  • “OpenStack is a collection of open source technologies delivering a massively scalable cloud operating system” openstack.org.
  • Open source and open APIs allow customers to avoid being locked into a single vendor.
  • One thing to keep in mind is that OpenStack is made for applications designed specifically for the cloud; you should not even consider moving all your virtual workloads to OpenStack.
  • Everyone who has dug a bit deeper into the concepts of a private cloud and OpenStack, how they operate and their basic use cases, understands that Neutron just wasn't designed to handle all of OpenStack networking.
To back this up, I'll get into a bit of a "blogger's loop" here by telling you to read this post by Scott Lowe, where he actually refers to one of my posts about Neutron and OVS.


There are 2 main advantages of OpenStack + ACI integration:
  • ACI handles all the Physical and Virtual Networking that OpenStack requires.
  • There are OpenStack plugins for OpenDaylight as well, but they require much, much more manual work and "tuning". There is also the question of who gives you technical support when something goes wrong.




The concept of how OpenStack and Cisco ACI integration works is shown in the diagram below.


  1. From OpenStack Horizon we create the Networks and Routers (see the sketch after this list). The ACI OpFlex plugin translates these into the EPG/Contract language that ACI understands, and these "instructions" on how to configure the connectivity are sent to the APIC controller.
  2. APIC sends the instructions to the physical and virtual network elements in order to configure them in accordance with OpenStack's needs.
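For illustration, here is what step 1 looks like when you drive Neutron through its API instead of Horizon; with the ACI/OpFlex ML2 driver loaded, the same translation happens behind the scenes. This is only a minimal sketch using the Kilo-era python-neutronclient, with placeholder credentials, URLs and names:

```python
# Minimal sketch: create a network, subnet and router through Neutron.
# With the ACI OpFlex/APIC ML2 driver loaded, these get translated into the
# corresponding objects on the APIC side. Credentials and names are placeholders.
from neutronclient.v2_0 import client

neutron = client.Client(username='admin',
                        password='secret',
                        tenant_name='demo',
                        auth_url='http://controller:5000/v2.0')

net = neutron.create_network({'network': {'name': 'web-net'}})['network']
subnet = neutron.create_subnet({'subnet': {'network_id': net['id'],
                                           'ip_version': 4,
                                           'cidr': '10.10.10.0/24',
                                           'name': 'web-subnet'}})['subnet']

router = neutron.create_router({'router': {'name': 'web-router'}})['router']
neutron.add_interface_router(router['id'], {'subnet_id': subnet['id']})
```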


To be fair, we used completely different environments to deploy the two OpenStack distributions before we started the Cisco ACI integration. I hope this makes it clear that we did not and could not compare performance here, only the way the integration works and the features.

There are 2 ways of integrating OpenStack with Cisco ACI: using the ML2 driver, or using Group-Based Policy (GBP). The second one is still in a BETA phase, and even though we did try it, and its concept is much more in line with the Cisco ACI policy model (read: recommended and to be used in the future) - I would highly recommend you stick to the ML2 driver until GBP becomes stable and properly supported. The differences are shown in the diagram below:




There are currently 3 OpenStack distributions officially supported by Cisco ACI. Out of these 3, we focused on testing the integration with the Red Hat and Mirantis distributions.






Red Hat OpenStack (RHOS/RDO)
  • Kilo release.
  • VxLAN mode (not fully supported by Cisco ACI at this moment).
  • Deployed with Packstack and Red Hat Director.
  • UCS B Series (Director in a single Blade), FIs directly connected to Cisco Leafs.
  • Control and Compute Nodes on Blades.
  • We chose a mode where OpenStack creates a single ACI Tenant, and each OpenStack Project maps to an ACI ANP (Application Network Profile) within that tenant (this used to be the default mode, but it no longer is).


Mirantis OpenStack
  • Kilo release.
  • VLAN mode.
  • Deployed in VMware vSphere environment.
  • IBM Flex Chassis connected to Cisco Leafs.
  • Control and Compute Nodes on VMs.

TIP: When you deploy OpenStack in a VMware environment, you need to "tune" your vSwitch/VDS in order to allow the LLDP packets between ACI and the OpenStack nodes, by following these steps (a scripted sketch of step 3 follows the list):
  1. Make sure the adapter passes the LLDP packets (in the case of UCS C-Series, disable LLDP on the VIC through CIMC).
  2. Disable LLDP/CDP on the vSwitch (or the VDS, if that's what you are using).
  3. Make the Port Group and vSwitch "promiscuous".
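Step 3 can be done from the vSphere (Web) Client, or scripted. Here is a hedged pyVmomi sketch that flips promiscuous mode on the relevant standard vSwitch; the vCenter/ESXi names, credentials and vSwitch name are placeholders, and if you use a VDS the equivalent setting lives in the distributed port group's security policy:

```python
# Hedged sketch (step 3): enable promiscuous mode on the standard vSwitch that
# carries the OpenStack node VMs, using pyVmomi. Host names, credentials and
# the vSwitch name are placeholders - adjust them to your environment. The
# sketch assumes the vSwitch already has a security policy object attached.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()      # lab only - skips certificate checks
si = SmartConnect(host='vcenter.lab.local',
                  user='administrator@vsphere.local',
                  pwd='secret', sslContext=ctx)

content = si.RetrieveContent()
esx = content.searchIndex.FindByDnsName(dnsName='esxi01.lab.local', vmSearch=False)

net_sys = esx.configManager.networkSystem
for vswitch in net_sys.networkInfo.vswitch:
    if vswitch.name == 'vSwitch1':          # the vSwitch used by the OpenStack nodes
        spec = vswitch.spec
        spec.policy.security.allowPromiscuous = True
        net_sys.UpdateVirtualSwitch(vswitchName=vswitch.name, spec=spec)

Disconnect(si)
```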

INTEGRATION PROCEDURE


DISCLAIMER: In both of these cases we followed the official integration guides (in the References below). Keep in mind that these plugins are being continuously updated, and you will often find that the integration guide doesn't correspond to the plugin you can currently download.

You should know that these plugins are designed by Cisco, Red Hat and Mirantis, so it's a mutual effort. If you have problems with the documentation, or encounter a bug, we found that it's much easier to ask for Cisco's support, as the Cisco lab guys really seem to be on top of things.


RedHat Integration


You can follow the step-by-step integration guide, but keep in mind that you will often not be sure what a step does or why it is needed. This will get better in time, but for now - you'd better sit your networkers with ACI knowledge and your Linux experts with OpenStack knowledge together and make them talk and work together on every step, or you will not really be able to make it work.

Before you even start, define the External, Floating IP and SNAT subnets, and configure your L3_Out from the ACI Fabric. In our case we did OSPF with a Nexus 5500. Once our OpenStack was fully integrated, the Nexus 5500 "learned" the SNAT and Floating IP subnets from ACI via OSPF.

TIP: The External Network is a VLAN you extend from your production network for Director, Horizon etc., and it does NOT go out using the same route as the Floating IP/SNAT traffic.

In RedHat you need to manually:
  • Install and deploy the plugin on both nodes.
  • Replace the Neutron Agents with the OpFlex ones.
  • Configure all the parameters of the interconnections and protocols (a hedged configuration sketch follows this list).
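For the last bullet, most of the work is editing the Neutron ML2/APIC configuration. The sketch below only illustrates the kind of APIC-related options involved; the file path, section and option names follow the Kilo-era cisco_apic_ml2/OpFlex plugin as we used it and may well differ in the plugin version you download, so treat them as assumptions and follow your own guide:

```python
# Hedged sketch: the kind of APIC-related options the Kilo-era cisco_apic_ml2 /
# OpFlex plugin expects in the Neutron ML2 configuration. The file path and the
# section/option names may differ in your plugin release - always follow the
# guide that matches the plugin you actually downloaded.
import configparser   # 'ConfigParser' on the Python 2 nodes of that era

CONF = '/etc/neutron/plugins/ml2/ml2_conf_cisco_apic.ini'   # path is an assumption

cfg = configparser.RawConfigParser()
cfg.read(CONF)

if not cfg.has_section('ml2_cisco_apic'):
    cfg.add_section('ml2_cisco_apic')

cfg.set('ml2_cisco_apic', 'apic_hosts', '10.0.0.1,10.0.0.2,10.0.0.3')  # APIC cluster IPs
cfg.set('ml2_cisco_apic', 'apic_username', 'admin')
cfg.set('ml2_cisco_apic', 'apic_password', 'secret')
cfg.set('ml2_cisco_apic', 'apic_system_id', 'openstack1')  # the identifier used on the ACI side

with open(CONF, 'w') as f:
    cfg.write(f)
```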

During the integration process we felt like the guide was made for one very specific environment, and that many of the steps were poorly documented or not explained at all. Many times we had to stop, make a diagram of how the Linux guys and the network guys understood the current step, and reach a conclusion together. I think this will happen in many organisations, as network and systems engineers do not really "speak the same language", so to speak.

This is what you will see once your nodes get their IPs (VTEPs, actually) via DHCP from the ACI Infrastructure VLAN (in our case VLAN 3456, 172.1.0.0/16) and establish LLDP connectivity with the ACI Leafs. Your OpenStack nodes will show up as OpenStack Hypervisors, and you will be able to see all the VMs from ACI:









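Besides the APIC GUI, you can check the same thing through the APIC REST API. A minimal sketch, assuming the standard aaaLogin endpoint and a class query for hypervisors; the compHv class name, APIC address and credentials are assumptions to verify against your APIC version:

```python
# Minimal sketch: log in to the APIC and list the hypervisors (the OpenStack
# nodes should appear here once OpFlex/LLDP is up). APIC address, credentials
# and the compHv class query are assumptions - adjust to your environment.
import requests

APIC = 'https://apic.lab.local'
session = requests.Session()
session.verify = False                       # lab only - self-signed certs

# Standard APIC login; the session keeps the auth cookie for later calls
login = {'aaaUser': {'attributes': {'name': 'admin', 'pwd': 'secret'}}}
session.post(APIC + '/api/aaaLogin.json', json=login).raise_for_status()

# Query all hypervisors known to the controller/VMM domains
resp = session.get(APIC + '/api/node/class/compHv.json')
for mo in resp.json().get('imdata', []):
    attrs = mo['compHv']['attributes']
    print(attrs['name'], attrs['dn'])
```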
TIP: Since we used VxLAN mode, all the traffic between the ACI Leafs and the OpenStack nodes (hypervisors) goes over the Infrastructure VLAN that carries the VxLAN traffic, so be sure that you have Jumbo Frames enabled VTEP-to-VTEP (this includes the blades, FIs and all the switches you might have in the path).
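A quick way to verify the MTU end-to-end is a don't-fragment ping between the nodes with a jumbo-sized payload; a small sketch follows (the peer VTEP address is a placeholder, and the exact payload size you need depends on your target MTU):

```python
# Quick MTU sanity check between VTEPs: send a don't-fragment ping with a
# payload sized for ~9000-byte frames. If any device in the path has a smaller
# MTU, the ping fails. Peer address is a placeholder; run this from one node.
import subprocess

PEER_VTEP = '172.1.0.12'     # the other node's VTEP address (placeholder)
PAYLOAD = '8972'             # 9000 bytes minus 20 (IP) and 8 (ICMP) header bytes

result = subprocess.run(
    ['ping', '-c', '3', '-M', 'do', '-s', PAYLOAD, PEER_VTEP],
    capture_output=True, text=True)

print(result.stdout)
print('Jumbo frames OK' if result.returncode == 0 else 'MTU problem in the path')
```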

For some reason the L3_EPG within the OpenStack tenant did not correctly "pick up" the L3 Out. Once I manually assigned the OSPF peering that I had created earlier in the Common Tenant, OpenStack's "ping" to the outside network started working.

Once you have your OpenStack Tenant in ACI, you will be able to add other VMs or physical servers from other domains (physical or virtual, such as VMware or Hyper-V) to the Networks (EPGs). In the diagram below, you can see how the EPGs named "group1" and "group2" contain both OpenStack and VMware domains.

MIRANTIS Integration

Mirantis OpenStack was deployed in the VLAN mode (fully supported at this point by Cisco ACI), but it was deployed in a virtual environment, so we were expecting some operational difficulties.

Mirantis integration was a really pleasant surprise. While in RedHat you need to manually install the plugin on both nodes, replace the Neutron agents with the OpFlex ones, and then make sure all the correct services are running, in Mirantis you have a graphical interface where you just import the OpFlex plugin, after which the OpFlex options auto-magically appear when you deploy a new OpenStack environment.

While deploying a new OpenStack environment you simply configure all the integration parameters, and the environment is built in accordance with the integration policy from the start. It all felt so easy, until we reached the point where everything was deployed correctly and Mirantis was giving the "all correct" messages, but our OpenStack simply wasn't appearing in ACI. To be fair - this was a virtual environment installation, so we were kind of expecting this type of problem.

After we applied the VMware workaround described earlier in this post, we got visibility, and the OpenStack hypervisors showed up in the APIC GUI.



References

There is a lot of documentation out there for Kilo. These are the ones we used:





Cisco ACI Service Graph (L4-7), ADC: F5 vs NetScaler

Note: This post requires basic knowledge of Cisco ACI architecture and ACI logical elements, as well as understanding of what ADC is, and the basic principles of Load Balancing and SSL. If you wish to get more information about these technologies, check out the Cisco ACI Section within the "SNArchs.COM Blog Map".

I will not go all "security is super important" on you; I assume that if you are reading this post, you already know that. Let's just skip that part then, and go directly to the facts we have so far:

  • ACI does not permit flows that we do not explicitly allow; ACI therefore acts as a stateless firewall itself.
  • ACI Filters cover the basic L3-L4 firewall rules. All additional L4-L7 "features" can be deployed in the form of a Service Graph.
  • A Service Graph is attached directly to a Contract between 2 EPGs (End Point Groups); a minimal API sketch of a Contract with a basic Filter follows this list.
  • Cisco ACI integrates with all the big L4-7 services vendors using the "Device Package". A Device Package is a plugin that is deployed directly to the APIC controller, and allows a 3rd-party device to be "instantiated" and later applied as a Service Graph.
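To give you a feel for what Filters and Contracts look like behind the scenes, here is a hedged sketch that creates a basic TCP/80 Filter and a Contract that uses it, via the APIC REST API; the APIC address, credentials and tenant name are placeholders:

```python
# Hedged sketch: create a basic L4 Filter (TCP/80) and a Contract that uses it
# in an existing tenant, via the APIC REST API. APIC address, credentials and
# tenant name are placeholders; the vz* class names follow the standard ACI
# contract model.
import requests

APIC = 'https://apic.lab.local'
TENANT = 'Demo-Tenant'          # an existing tenant (placeholder)
s = requests.Session()
s.verify = False                # lab only

s.post(APIC + '/api/aaaLogin.json',
       json={'aaaUser': {'attributes': {'name': 'admin', 'pwd': 'secret'}}})

# Filter permitting TCP/80
http_filter = {'vzFilter': {'attributes': {'name': 'http-filter'}, 'children': [
    {'vzEntry': {'attributes': {'name': 'http', 'etherT': 'ip', 'prot': 'tcp',
                                'dFromPort': 'http', 'dToPort': 'http'}}}]}}
s.post(APIC + '/api/mo/uni/tn-%s.json' % TENANT, json=http_filter)

# Contract with one subject referencing that filter
contract = {'vzBrCP': {'attributes': {'name': 'web-contract'}, 'children': [
    {'vzSubj': {'attributes': {'name': 'web-subject'}, 'children': [
        {'vzRsSubjFiltAtt': {'attributes': {'tnVzFilterName': 'http-filter'}}}]}}]}}
s.post(APIC + '/api/mo/uni/tn-%s.json' % TENANT, json=contract)
```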

Most of the "big players" in the area of Security have evolved (some more, some less) with the ACI integration:




Now that the concepts are clear, let's get a bit deeper into how CITRIX and F5 handle the ACI Integration.

1. What Service Functions do we get?









CITRIX allows you to deploy the NetScaler as a virtual (VPX) or a physical (SDX) appliance. The Device Package is the same for both. Once the Device Package is deployed, you will be able to see all the Functions or Services that NetScaler lets you deploy within the ACI Fabric. It was a pleasant surprise to see a big variety of ADC functions:



































F5, on the other hand, has 2 ways of integrating with Cisco ACI:
- Direct BIG IP integration.
- BIG IP + BIG IQ integration.

We had the chance to test both of these, and I must say that I'm personally a big fan of how the second option works. Let me get deeper into that. Once I got the Device Package installed, I was a bit disappointed to see that the only Services we could deploy were basic Load Balancing and Microsoft SharePoint (for some reason...).






Good thing I did some more digging and discovered the BigIQ integration. This is where F5 really impressed me. You basically first need to configure the BigIP + BigIQ integration, before you even deploy the Device Package. If you are not familiar with the concept of iApps - you should most definitely check them out. They allow you to create a template of whatever Service Functions you need in your organisation, no matter how complex. Once you create these from BigIQ, you generate a "personalized" Device Package that you then deploy in APIC. Now you get all the iApps you created as separate Service Functions that you can deploy between your EPGs.











2. FLEXIBILITY

The state of the art is currently at a beginner/intermediate level, so it might just be a bit early to talk about flexibility, but even so - there are a few things worth mentioning:
  • Virtual Devices: This applies to both the NetScaler VPX and the virtual BigIP - they only support a single tenant for now. This is partly a vSphere/Hyper-V limitation as well, as they still do not allow a PortGroup/VM Network to carry a few VLANs of our choice, so APIC cannot send a command to "add this VLAN to a PortGroup on an ADC interface". Instead, every time you deploy a new Service Graph - the VLAN gets replaced, and the old Service Graph stops working. Silly, right? Good thing I have the insider information that this will change soon :)
  • Service Functions: Both vendors found a way to give us a variety of Functions that we can apply via a Service Graph. CITRIX does it natively, while F5 uses iApps + BigIQ. Each has its advantages and disadvantages, but I predict a bright future for both, so - good job Citrix and F5!

NetScaler has a "bonus feature" - an online Configuration Converter, which converts your NetScaler CLI configuration into a XML/JSON, which you can later deploy as an API CALL. You must admit that this is just cool:















3. USABILITY

One of the most common questions I get during ACI demos is - "It's all super impressive, but should we really consider using it in a production environment? How do you troubleshoot if something goes wrong?". This brings us to the question of usability.

The biggest problem we face when we try to apply a Service Graph Template to an ACI Contract is an entirely new interface for configuring a Service. Instead of the BigIP/NetScaler interface we are used to, we get just a group of parameters with no comments or descriptions. Some of these are semi-intuitive, like Virtual IP and Port, while others are just awkward (fvPar2 or something like that). There is no doubt that this will evolve in the future, but right now - you'd better know exactly what you are doing. Just so you can get a "taste" of what these parameters look like, here are screenshots of a NetScaler (1st screenshot) and a BigIP (2nd screenshot) implementation of a basic LBaaS:




























An alternative is using REST API calls. I'm personally a huge fan of this method, because it's just so fast and easy. Yes, in the beginning you will not trust it, and it does have a learning curve, but once you "get it" - that's it for you, no going back to the parameters. You will most probably do what I did - start making your own API library, and tune it with love :)
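To illustrate, the same calls you would keep in a PostMan collection fit in a few lines of Python. This is a minimal sketch, assuming the standard APIC aaaLogin endpoint; the APIC address, credentials, target DN and file name are placeholders:

```python
# Minimal sketch of a reusable "API library" call: authenticate to the APIC and
# push a JSON payload that was previously saved from a working Service Graph
# (or generated by the NetScaler configuration converter). The APIC address,
# credentials, target DN and file name are placeholders.
import json
import requests

APIC = 'https://apic.lab.local'
s = requests.Session()
s.verify = False   # lab only


def apic_login(user, password):
    body = {'aaaUser': {'attributes': {'name': user, 'pwd': password}}}
    s.post(APIC + '/api/aaaLogin.json', json=body).raise_for_status()


def apic_post(dn, payload):
    # dn example: 'uni/tn-Demo-Tenant' - the parent object the payload hangs off
    r = s.post(APIC + '/api/mo/%s.json' % dn, json=payload)
    r.raise_for_status()
    return r.json()


if __name__ == '__main__':
    apic_login('admin', 'secret')
    with open('saved_service_graph.json') as f:    # JSON saved from the APIC GUI
        apic_post('uni/tn-Demo-Tenant', json.load(f))
```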


Here are some examples of the NetScaler Function API CALLs in PostMan:



If you wish to use my Libraries, feel free to download the repository from my GitHub:

Temporary Link [Official NetScaler Library]: https://github.com/citrix/netscaler_aci_poc_kit
Target Link: [Will be updated soon]

DISCLAIMER: These are designed for personal use, and even though I decided to share them with the community, I take no responsibility if they do not work correctly in certain environments.

Bottom line, both NetScaler and BigIP work more or less the same way here. Once you figure out which parameters are obligatory and what exactly they do - you will have no problem configuring any Service Function. You will later log into the NetScaler/BigIP device to make sure that the configuration is accurate and that the parameters are set correctly. For now there are parameters that you can only configure from the local interface, but I'm pretty sure that in time all of these will be added to the ACI Device Package.

So far, neither of the two vendors has issued a complete list of the parameters, together with how to use them. We sincerely hope they are working on it.

IMPORTANT TIP: Once you get your Service Graph deployed and working, you can right-click the deployed Service Graph and save its configuration in XML/JSON format. Why is this awesome? Because you can just add it to your PostMan library and later deploy LBaaS with a single click. If you're not impressed yet - you need to try this; trust me, you will be!


So, which one is better then?

In my personal opinion, both of these are at more or less the same level of integration with Cisco ACI. This is really good if you want to keep using the same ADC that you've been using so far. It may be a bit disappointing if you are trying to choose one of the two based on how it integrates with ACI, because in my opinion a lot of work is yet to be done.
