DMVPN is documented under "Security and VPN" in the Cisco IOS 12.4T configuration guides.
TIP: If you need to clear the NHRP cache because you changed something in the configuration, bounce all the tunnel interfaces (shutdown/no shutdown).
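A lighter-weight alternative on most IOS versions (worth trying before bouncing the tunnels) is flushing the dynamic NHRP entries directly; the spokes then simply re-register on their own:
#clear ip nhrp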
TIP: Decide early whether you need 1 or 2 Hubs in Phase 1 of DMVPN, based on your actual redundancy needs.
Let's start by defining DMVPN. From a high level, DMVPN is a Point to Multipoint (PMP) Tunnel, and it's an Overlay Tunnel, which means that it's not a Peer to Peer VPN, but a Tunnel which is independent of the underlying transport. Basically DMVPN is a GRE over IPsec site-to-site tunnel that allows you to use Dynamic Routing Protocols.
DMVPN is a Hub and Spoke network based on an Overlay, where each one of the Spokes establishes a GRE tunnel with the Hub. Spokes can then run EIGRP, OSPF or BGP over the tunnel with the Hub. Spokes communicate with the Hub using the statically configured Tunnel, just like a normal P2P GRE Tunnel, and communicate with the other Spokes using Dynamic, ON-DEMAND Tunnels. Therefore we don't need a full mesh of Tunnels, so that's a relief.
Advantages of DMVPN (taken from INE's DMVPN Introduction, which can be found on YouTube):
- Simple configuration, where Spokes use the standard template configuration.
- Scalable (new spokes easily added, and hub doesn't need to be reconfigured).
- Supports MULTIPLE passenger protocols inside of the tunnel (IPv4, IPv6, Multicast, Routing Protocols).
- Independent of the Transport, you just need any kind of IP connectivity between sites.
- In newer IOS versions, DMVPN supports spokes behind NAT (NAT Traversal).
- Spoke-to-Spoke tunnels are only formed when they are needed (On-Demand); the spokes are logically "dialing" each other.
- For large scalability, GET VPN is supported on top of DMVPN, so we can use GROUP encryption if we have many devices.
When would we actually use DMVPN, instead of simple Static IPsec tunnels?
- When the number of sites is too big for static IPsec, because with static tunnels we need to manually specify the source, the destination, and define the traffic that goes through each tunnel.
- When the number of sites keeps changing, DMVPN is the better choice because of its scalability.
- When MPLS L3VPN adds too much routing complexity, because the CE needs to run a protocol (typically MP-BGP) with the PE, so we probably need to redistribute our LAN prefixes into the provider's network. With DMVPN we're only running the tunnel on top of the IP infrastructure we already have, so we can just extend the routing protocol we're using within our LAN.
- MPLS sometimes needs interoperability between providers, if your offices are not all in countries where a certain provider is available. DMVPN uses the Internet as the underlying transport, so we're safe here.
- MPLS L2VPN with VPLS (Virtual Private LAN Services) limits the connectivity options (typically Ethernet only). With DMVPN you can use any type of connectivity, including DSL.
Ok, now we're reaching the actual technical part. Let's start by defining the components of DMVPN:
1. Traffic Routing over the DMVPN Tunnels, where two main protocols are involved:
- mGRE (Multipoint GRE)
- NHRP (Next Hop Resolution Protocol)
2. Traffic Encryption (optional, but strongly recommended because you might be using a public network, such as the Internet, as the underlay network):
- IPsec
The DMVPN Hub is considered the NHRP Server (NHS), and the DMVPN Spokes are NHRP Clients. Clients (Spokes) manually specify the Hub's IP address in the configuration, while the Hub dynamically learns each Spoke's VPN address (the PRIVATE address, the one that's inside of the tunnel) and NBMA address (the PUBLIC address).
Spoke-to-Spoke Routing is not simple, because the Spokes (R4, R6, R7) do not know about other Spokes; the IPs of the other Spokes are not in their routing tables, they just know about the statically configured Hub (R5). This is where NHRP comes in. The Hub is in charge of sending the other Spokes' details to each of the Spokes, so that Spokes don't need to know each other's addresses unless they actually need to send something to each other. The Hub is therefore used only for the control plane exchange, ergo - NHRP, while the data plane goes from Spoke to Spoke directly.
PHASE 1: The Spokes do not communicate directly, but need to send the traffic to the Hub in order for it to be forwarded to the other Spokes.
PHASE 2 and 3:
What we want is for the Spokes to send the traffic DIRECTLY to other Spokes, and this is considered DMVPN Phase 2, or Phase 3. Spokes only need to build a tunnel to another Spoke when they have traffic to send, which is why it's considered an On-Demand Tunnel.
If you choose to use one Hub per DMVPN tunnel (the simplest design), each hub router is the NHRP server for the subnet it controls, and propagates routing information between spokes. The problem with this is the following: Spokes learn the other Spokes' routes via the IGP running over the Tunnel established with the Hub, but they do not know the other Spokes' REAL addresses, so there has to be a way to MAP the PRIVATE (VPN) address to a PUBLIC (NBMA) address. This resolution is done using the NHRP Protocol, which works much like the Frame-Relay IP to DLCI mapping.
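To illustrate the analogy (the addresses are the hypothetical ones used later in this post): a Frame-Relay map ties a next-hop IP to a DLCI, while an NHRP map ties a VPN (tunnel) IP to an NBMA (public) IP:
frame-relay map ip 10.0.0.1 201 broadcast <- next-hop IP mapped to DLCI 201
ip nhrp map 10.0.0.1 200.17.0.1 <- VPN (tunnel) IP mapped to NBMA (public) IP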
GRE keys are mandatory in Phase 2 deployments (otherwise the spokes cannot decipher which tunnel the other spoke router was using), causing performance degradation if the hub routers don't support GRE keys in hardware (the Catalyst 6500 doesn't). This problem is solved in the multiple-hubs-in-a-single-DMVPN-tunnel architecture (all hub routers act as NHRP servers and propagate routing information between the spokes), while the downside is that implementing primary/backup hub routers is a tricky task (you have to use routing protocol tricks like per-neighbour cost in OSPF or per-interface offset lists in EIGRP). Another pretty important downside of a one-hub-per-tunnel design is that when you lose a hub router, NHRP registrations on the tunnel fail completely, and routing across the affected tunnel stops once the spoke routers figure out that their routing protocol neighbour is gone. [from ipspace.net]
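As a rough sketch of the "routing protocol tricks" mentioned above (the interface names, process numbers and metric values are hypothetical, only for illustration): with EIGRP you can penalize routes learned over the backup hub's tunnel with an offset-list, and with OSPF running point-to-multipoint non-broadcast you can set a per-neighbour cost:
router eigrp 100
 offset-list 0 in 1000 Tunnel1 <- penalize routes learned via the backup hub's tunnel
!
router ospf 1
 neighbor 10.0.0.1 cost 10 <- primary hub, preferred
 neighbor 10.0.1.1 cost 100 <- backup hub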
DMVPN Configuration
First we need to make sure that we have basic connectivity over the Public Network. The Public Network uses publicly routable addresses (NBMA addresses), while behind our Hub and Spokes we have the Private addressing (VPN addresses).
TIP: Since the configuration is pretty standard, the easiest way is to just copy the Hub and Spoke configuration example from the Cisco Docs, remove the IPsec configuration (it should be configured last), and adjust everything to your own topology.
HUB Configuration:
*The MTU is normally set to 1400 if you are using GRE+IPsec, in order to avoid additional CPU usage for fragmentation on the other end (see the two extra tunnel-interface lines after the config below).
interface Tunnel0
ip address 10.0.0.1 255.255.255.0
!
! The following 3 parameters are mandatory, and they need to match on the HUB and all the SPOKEs (strictly speaking the network-id is only locally significant, but keeping it consistent everywhere is the safest practice):
ip nhrp authentication AUTHENT_PASSWORD
ip nhrp network-id 99
tunnel key 100000 <- must match on all nodes that want to use this mGRE tunnel
ip nhrp map multicast dynamic <- replicate multicast (routing protocol) traffic to all dynamically registered Spokes
tunnel source Ethernet0 <- the interface with the PUBLIC (NBMA) address
tunnel mode gre multipoint <- Defining the type of the tunnel to be mGRE
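Following the MTU note above, you would typically also add these two lines under the Tunnel0 interface, on the Hub and on every Spoke (1400/1360 are the values commonly used for GRE+IPsec, not something mandated by the platform):
ip mtu 1400
ip tcp adjust-mss 1360 <- clamp TCP MSS so that TCP sessions fit into the 1400-byte MTU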
SPOKE Configuration:
interface Tunnel0
bandwidth 1000
ip address 10.0.0.2 255.255.255.0
ip nhrp authentication AUTHENT_PASSWORD
ip nhrp map multicast 200.17.0.1 <- send multicast (routing protocol) traffic to the HUB's Public address
ip nhrp network-id 99
ip nhrp nhs 10.0.0.1 <- Next Hop Server (the HUB's Private Address)
ip nhrp map 10.0.0.1 200.17.0.1 <- Static Mapping of the HUB's Private Address to its Public Address
tunnel source Ethernet0
tunnel mode gre multipoint
tunnel key 100000 <-must match on all nodes that want to use this mGRE tunnel
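To see the "standard template" advantage from the list above in practice: a second Spoke (with hypothetical addressing) differs only in its tunnel IP address, everything else is copy-paste:
interface Tunnel0
bandwidth 1000
ip address 10.0.0.3 255.255.255.0 <- the only line that changes per Spoke
ip nhrp authentication AUTHENT_PASSWORD
ip nhrp map multicast 200.17.0.1
ip nhrp network-id 99
ip nhrp nhs 10.0.0.1
ip nhrp map 10.0.0.1 200.17.0.1
tunnel source Ethernet0
tunnel mode gre multipoint
tunnel key 100000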
To check the VPN to NBMA mapping and see the destination of the tunnel, do:
#show ip nhrp
On the SPOKE do:
#show ip nhrp dynamic [IP of the other Spokes]
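On IOS releases that include it, there is also a single command that summarizes the tunnel state and the NBMA/VPN peers per Spoke:
#show dmvpn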
The problem with routing protocol updates (such as EIGRP) between the SPOKEs is that, since the update is received on the Tunnel interface, Split Horizon prevents the update from being advertised back out of the same interface (the Tunnel interface). This is why, when we're using EIGRP, Split Horizon needs to be switched off under the Tunnel interface:
(config-if)#no ip split-horizon eigrp 100
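Putting it together, a minimal EIGRP setup on the Hub might look like this (the AS number 100 and the LAN prefix are assumptions for illustration):
router eigrp 100
network 10.0.0.0 0.0.0.255 <- the tunnel subnet
network 192.168.5.0 0.0.0.255 <- the HUB's LAN, hypothetical
!
interface Tunnel0
no ip split-horizon eigrp 100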
PHASE 1: The Hub is in the Data Plane for all of the traffic. No traffic goes from Spoke to Spoke directly. If you need the SPOKE-TO-SPOKE traffic to be routed directly, instead of sending everything via the Hub - you need PHASE 2.
In order to implement an actual Phase 2, the Next Hop needs to be set to the IP of the other Spoke (it must not be rewritten to the HUB's IP, as it would be by default, because the update is coming from the Hub; see the EIGRP sketch after the list below). The key here is that the Hub is ONLY in the Control plane, and it's not used for traffic forwarding, only for NHRP resolution between the SPOKEs. The procedure will be more or less like this:
1. The Spoke that wants to send traffic to another Spoke starts by sending an NHRP Resolution Request with the destination's Private IP to the Hub
2. The Hub answers with the Public IP of the other Spoke
3. The ON-DEMAND tunnel is formed between the Spokes. This tunnel expires after the NHRP "holdtime", which is 2 hours by default.
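For EIGRP, preserving the Spoke's next hop is done with one extra command on the Hub's tunnel interface (shown here together with the split-horizon command from above):
interface Tunnel0
no ip split-horizon eigrp 100
no ip next-hop-self eigrp 100 <- keep the originating Spoke as the next hop in the updates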
IPsec Configuration
IPsec is not required from the design point of view, and it creates certain scalability problems once you reach a certain number of tunnels (maybe you should consider GET VPN if you need a lot of scalability), but it should be configured, because DMVPN is running as an overlay over a non-secure public IP network, such as the Internet.
The Crypto config consists of 4 different steps, and it is the same on the Hub and all of the Spokes:
Step 1: Key Negotiation (Tunnel Encryption Negotiation). In an actual production network you would issue certificates, but something like that will not be required on the exam, so a pre-shared key is used here.
crypto isakmp policy 1
encr aes
authentication pre-share
group 14 <- the Diffie-Hellman group, i.e. how big and complicated the KEY is
crypto isakmp key PASSWORD_PHASE1 address 0.0.0.0 0.0.0.0 <- wildcard pre-shared key, matches any peer
!
Step 2: Specify the algorithm you're using when you do the encryption/decryption:
crypto ipsec transform-set trans2 esp-des esp-md5-hmac <- DES/MD5 as in the Cisco doc example; prefer esp-aes and esp-sha-hmac where supported
 mode transport <- transport mode saves 20 bytes of overhead, since the GRE endpoints and the crypto endpoints are the same
!
Step 3: Create a Crypto Profile and attach the transform set to it:
crypto ipsec profile vpnprof
set transform-set trans2
!
Step 4: Apply the profile under the tunnel interface:
interface Tunnel0
tunnel protection ipsec profile vpnprof
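One caveat worth knowing (relevant when a router terminates more than one DMVPN cloud on the same source interface, which is not the case in this simple topology): the profile then has to be applied with the shared keyword:
tunnel protection ipsec profile vpnprof shared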
There will be a static IPsec tunnel going from each one of the Spokes to the Hub.
To check out the Security Associations (SAs):
#show crypto isakmp sa
OR
#show crypto ipsec sa