- Cisco Nexus 9000 Switches form the ACI Fabric, which acts as both the Control and the Data plane of the ACI Architecture.
- The main components of the ACI Architecture are the Bridge Domain (BD), the Endpoint Group (EPG) and the Private Network.
- VXLAN is the encapsulation mechanism that enables remote L2 connectivity across the ACI fabric.
If you have any doubts about any of the "facts" on the list, you should read my previous post about the ACI Fundamentals: Components.
The N9k can run in one of two Operational Modes:
- NX-OS Mode (the default)
- ACI Mode
There are 3 types of chips in the 9k devices. You should be very careful when buying these switches because, depending on the N9k models you buy, you might get only one or two of the possible ASIC chipsets:
- T2 ASIC by Broadcom is the default chipset, used when the Nexus runs in standalone mode (NX-OS mode).
- ALE – APIC Leaf Engine (ALE performs the ACI leaf node functions when the Nexus 9500 switch is deployed as a leaf node in an ACI infrastructure).
- ASE – APIC Spine Engine, used when the 9k is deployed as a Spine Switch.
The Nexus 9000 supports Python in Interactive and Scripting mode. Cisco chose Python because of its robust selection of Libraries. Data is returned from NX-OS as XML or JSON, not as raw command output. You can invoke a Python script from the CLI with a simple command:
# python bootflash:script.py
You can also use the NX-OS NX-API, which is basically a tool that allows you to program the NX-OS platform in Python, and it supports all the Libraries you would normally use. It can be downloaded from GitHub.
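To make the "XML or JSON, not raw commands" point concrete, here is a minimal sketch of an off-box NX-API call. NX-API accepts JSON-RPC 2.0 POSTs on its /ins endpoint; the switch IP, credentials, and the use of the `requests` library are assumptions for illustration:

```python
# Build a JSON-RPC 2.0 body for the NX-API /ins endpoint.
import json

def build_nxapi_payload(commands):
    """Return a list of JSON-RPC requests, one per CLI command."""
    return [
        {
            "jsonrpc": "2.0",
            "method": "cli",                      # run a CLI command, get structured JSON back
            "params": {"cmd": cmd, "version": 1},
            "id": i + 1,
        }
        for i, cmd in enumerate(commands)
    ]

body = json.dumps(build_nxapi_payload(["show version", "show interface brief"]))

# Sending it would look roughly like this (needs the `requests` library and
# a reachable switch -- both assumptions, so it stays commented out):
# import requests
# resp = requests.post("http://192.0.2.10/ins", data=body,
#                      headers={"content-type": "application/json-rpc"},
#                      auth=("admin", "password"))
# print(resp.json())
```

The switch parses each `cli` request and returns the command output as structured JSON, which is far easier to consume in a script than screen-scraped text.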
Leaf Switching Tables: LST and GST
These two tables are found on the Leafs, and they represent:
- LST (Leaf Switching Table): All hosts attached to the Leaf
- GST (Global Switching Table): Local cache of fabric endpoints, or all the endpoints that are reachable via Fabric on the N9K in ACI mode.
These tables are used for Bridge Domain ARP floods and Unknown Host requests, and they hold the MAC and IP forwarding entries. Within the ACI fabric, ARP is carried as multicast traffic, and it is flooded everywhere the same BD is configured.
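The lookup order these two tables imply can be sketched as a toy model (the table contents and the spine-proxy fallback below are illustrative assumptions, not actual NX-OS structures):

```python
# Toy model of a leaf's forwarding lookup: first the Local Station Table
# (LST, endpoints attached to this leaf), then the Global Station Table
# (GST, a local cache of remote fabric endpoints). On a miss, the leaf
# forwards toward the spine proxy, which knows every endpoint in the fabric.
LST = {"00:aa:bb:cc:dd:01": "eth1/1"}      # locally attached hosts -> port
GST = {"00:aa:bb:cc:dd:02": "leaf-102"}    # cached remote endpoints -> leaf

def lookup(dst_mac):
    if dst_mac in LST:
        return ("local", LST[dst_mac])          # deliver on the local port
    if dst_mac in GST:
        return ("fabric", GST[dst_mac])         # tunnel to the remote leaf
    return ("spine-proxy", None)                # unknown: let the spine resolve it

print(lookup("00:aa:bb:cc:dd:01"))  # known local endpoint
print(lookup("00:aa:bb:cc:dd:03"))  # unknown endpoint
```

The key point is that the GST is only a cache: a miss is not an error, it simply hands the decision up to the spine.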
Sharing the Workload: The concept of SHARDS
A Shard is a unit of Data Management, a concept reminiscent of Slices in the NSX Architecture. Data is placed into shards; each shard has a Primary and 2 Replicas. If we, for example, use the ACI Multi-Data Center Architecture, this is where we see the full added value of this concept: every task is sliced into Shards, and each part of the Fabric handles the Shards assigned to it. This improves performance drastically.
Application Network Profiles (ANP) and Contracts
ANP (Application Network Profile) is a very simple, but a VERY IMPORTANT concept within the ACI architecture. An ANP is a combination of the EPGs and the Contracts that define the communication between them, where the Provider-Consumer relationship defines the connectivity in application terms. An EPG is a “child” of the Application Profile. Have in mind that an EPG can provide and consume more than one Contract.
An Application Profile is essentially a logical grouping of policies. For example, you may create an Application Profile for “Network Services”; this Application Profile could contain standard services such as DNS, LDAP, TACACS, etc. Here is an example of the Application Profile Cisqueros_Basic, and two EPGs:
A Contract is a Policy Definition; it consists of a number of Subjects (such as, for example, "Web Traffic") and contains:
- Filters (Example: UDP port 666).
- Action (Permit, Deny, Redirect, Log, Copy, Mark…).
- Label (additional optional identifier, for some additional capabilities).
One thing that we, Network Engineers, often fail to understand is the actual difference between the Provider and the Consumer within a Contract. I think that the example that "paints the picture" in the most logical way is an Application Server accessing a Database Server. The DB Server provides its data to another entity, and is therefore the Provider. We can define a Provider Contract on the DB Server which allows a group of Application Servers to access its Databases. The Application Server is, on the other hand, the Consumer of the DB, and it therefore requires a Consumer Contract towards the DB Server(s).
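The App-Server/DB-Server example can be sketched as a toy data model (the EPG names, contract name, and port are made up for illustration; this is not how the APIC stores policy):

```python
# Toy model of the provider/consumer relationship: the DB EPG provides a
# contract with a filter, the App EPG consumes it, and a flow is permitted
# only when a contract links the two in the right direction.
contracts = {
    "db-access": {"filters": [("tcp", 1433)]},  # hypothetical DB port filter
}
epgs = {
    "app-servers": {"consumes": {"db-access"}, "provides": set()},
    "db-servers":  {"consumes": set(), "provides": {"db-access"}},
}

def permitted(src_epg, dst_epg, proto, port):
    """A flow is permitted if the destination EPG provides a contract that
    the source EPG consumes, and one of its filters matches the flow."""
    shared = epgs[src_epg]["consumes"] & epgs[dst_epg]["provides"]
    return any((proto, port) in contracts[c]["filters"] for c in shared)

print(permitted("app-servers", "db-servers", "tcp", 1433))  # True
print(permitted("db-servers", "app-servers", "tcp", 1433))  # False
```

Note the asymmetry: swapping source and destination flips the result, which is exactly the Provider/Consumer distinction the paragraph above describes.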
A Taboo is a group of Filters that DENIES the communication you need denied, and it's applied before any other filter, regardless of whether the ANP is Enforced (default) or not.
Below is the Diagram of how an Application Network Profile could be implemented on the level of the entire Application Architecture:
Service Contract and Service Graph
Service Contract and Service Graph represent the way Cisco ACI integrates the L4-L7 Services into the Contracts between the Client (Consumer) EPG and the Server (Provider) EPG. The basic way it works is shown using Policy-Based Redirection, illustrated on the diagram below.
The Cisco APIC offers a graphical drag and drop GUI to easily create L4-L7 Service Graphs that specify network traffic routing; any of the L4-L7 ADC features available in the NetScaler device package can be included in a Service Graph definition, allowing comprehensive NetScaler integration with the Cisco APIC.
Once created, a Service Graph can be assigned to an Application Profile and contracted to a data center tenant, thereby defining the network traffic flow for that specific application and tenant.
L3 Routing in ACI
L3 Routing to the External Networks can be done using iBGP or OSPF for now; eBGP and EIGRP will be added soon (Dec 2015). The routes learned from peering routers will be marked as “outside”. The decisions are made with a Single Data Plane and Two Control Planes in mind:
- Inside Networks are used for Tenants and their Bridge Domains (BDs).
- Outside Networks are associated with the Tenants on which the Peering with the External Routers is configured.
Route Redistribution is done on the Leaves. Right now MP-BGP is not configured by default, but it can be configured (Fabric -> Fabric Policies). If you configure it for the entire Fabric, the Spines take the role of BGP Route Reflectors, reflecting all the BGP prefixes to the Leaves.
If there is OSPF between the Leaf and the External Router, the Leaf redistributes BGP into OSPF, and OSPF into BGP. Have in mind that the OSPF area has to be configured as an NSSA.
For Routing Enthusiasts such as myself: Check out the following article regarding Connecting Application Centric Infrastructure (ACI) to Outside Layer 2 and 3 Networks.