Above we have one router that represents the HQ, and there are four branch offices. Multipoint GRE, as the name implies, allows us to have multiple destinations. When we use it, our picture could look like this:
When we use multipoint GRE, there will be only one tunnel interface on each router. The HQ, for example, has one tunnel interface with each branch office as a possible destination. Right now we have a hub-and-spoke topology, but when there is traffic between the branch offices, we can tunnel it directly instead of sending it through the HQ router.
This sounds pretty cool, but it introduces some problems. Each router is connected to the Internet and has a public IP address. On the multipoint GRE tunnel interface we use a single subnet with private IP addresses. Our hub router will be the NHRP server and all other routers will be the spokes. Above we have two spoke routers (NHRP clients) which establish a tunnel to the hub router. The hub router will dynamically accept registrations from spoke routers.
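As an illustration of the hub's role as NHRP server, here is a minimal sketch of a hub mGRE tunnel configuration. All addresses and interface names are hypothetical, since the original addressing tables were not reproduced here:

```
interface Tunnel0
 ip address 172.16.0.1 255.255.255.0
 ip nhrp network-id 1
 ip nhrp map multicast dynamic
 tunnel source GigabitEthernet0/0
 tunnel mode gre multipoint
 tunnel key 1
```

The `ip nhrp map multicast dynamic` line lets the hub replicate multicast traffic (for example routing protocol hellos) to every spoke that registers dynamically.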
A few seconds later, spoke1 decides that it wants to send something to spoke2. It needs to figure out the destination public IP address of spoke2, so it sends an NHRP resolution request asking the hub router what the public IP address of spoke2 is. The hub router checks its cache, finds an entry for spoke2, and sends an NHRP resolution reply to spoke1 with the public IP address of spoke2. This is great: we only required the hub to figure out what the public IP address is, and all traffic can be sent from spoke to spoke directly.
With phase 1 we use NHRP so that spokes can register themselves with the hub. The hub is the only router that is using a multipoint GRE interface, all spokes will be using regular point-to-point GRE tunnel interfaces. This means that there will be no direct spoke-to-spoke communication, all traffic has to go through the hub! Since our traffic has to go through the hub, our routing configuration will be quite simple. Spoke routers only need a summary or default route to the hub to reach other spoke routers.
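A phase 1 spoke can be sketched as follows, assuming hypothetical addresses (203.0.113.1 for the hub's public address and 172.16.0.1 for its tunnel address). Note the static tunnel destination and the simple routing: a single default or summary route pointing at the hub.

```
interface Tunnel0
 ip address 172.16.0.2 255.255.255.0
 ip nhrp network-id 1
 ip nhrp nhs 172.16.0.1
 ip nhrp map 172.16.0.1 203.0.113.1
 ip nhrp map multicast 203.0.113.1
 tunnel source GigabitEthernet0/0
 tunnel destination 203.0.113.1
 tunnel key 1
!
! Everything behind other spokes is reached via the hub's tunnel address
ip route 0.0.0.0 0.0.0.0 172.16.0.1
```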
The disadvantage of phase 1 is that there are no direct spoke-to-spoke tunnels. In phase 2, all spoke routers use multipoint GRE tunnels, so we do have direct spoke-to-spoke tunneling.
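Compared to the phase 1 spoke, the only structural change on a phase 2 spoke is the tunnel mode: the static tunnel destination is replaced by multipoint GRE, so NHRP can resolve remote spokes on demand (addresses again hypothetical):

```
interface Tunnel0
 ip address 172.16.0.2 255.255.255.0
 ip nhrp network-id 1
 ip nhrp nhs 172.16.0.1
 ip nhrp map 172.16.0.1 203.0.113.1
 ip nhrp map multicast 203.0.113.1
 tunnel source GigabitEthernet0/0
 tunnel mode gre multipoint
 tunnel key 1
```

Keep in mind the phase 2 restriction: spokes need specific routes with the remote spoke's tunnel address as next hop, so the hub cannot simply advertise a summary.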
When would we choose to use Phase 1, 2, or 3, and why? I understand the differences between the three, but do we gain any benefit from implementing one or the other that is noticeable to end users? Initially, and that is the key word, all spoke-to-spoke packets are switched across the hub. It also enables the system to control the state of the tunnel interface based on the health of the DMVPN tunnels.
Although the MIB notification control object is not persisted in the MIB itself, its setting is effectively persistent because the same information is also captured via the configuration command-line interface (CLI). Notifications are generated when:

- A spoke perceives that a hub has gone down. This can occur even if the spoke was not previously registered with the hub.
- A spoke-to-spoke tunnel goes down.
- A spoke-to-spoke tunnel comes up.
- The rate limit set for NHRP packets on the interface is exceeded.

The agent implementation of the MIB provides a means to enable and disable specific traps, from either the network management system or the CLI. The trap categories are:

- Next-hop state change events. The severity level for these messages is set to critical.
- NHRP resolution events, for example when a spoke sends a resolution request to a remote spoke, or when an NHRP resolution times out without receiving a response. The severity level for these messages is set to informational.
- DMVPN cryptography events. The severity level for these messages is set to notification.
- DMVPN error notifications, such as a received error indication. The error message includes the IP address of the node where the error originates, the source nonbroadcast multiaccess (NBMA) address, and the destination address. The severity level for these messages is set to warning.

The Interface State Control feature allows NHRP to control the state of the interface based on whether the tunnels on the interface are live. If NHRP detects that any one of the NHSs configured on the interface is up, it can change the state of the interface to up.
When NHRP changes the interface state, other Cisco services can react to the state change. For example:

- If the interface state changes to down, the Cisco IOS backup interface feature can be initiated to allow the system to use another interface to provide an alternative path to the failed primary path.
- If the interface state changes to down, the system generates an update that is sent to all dynamic routing protocols.
- If the interface state changes to down, the system clears any static routes that use the mGRE interface as the next hop.

The diagram below illustrates how the system behaves when the Interface State Control feature is initialized. The system reevaluates the protocol state and changes the state to line up and protocol down if none of the configured NHSs is responding.
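On platforms that support the feature, Interface State Control is enabled per tunnel interface with the `if-state nhrp` command. A minimal sketch, assuming an NHS is already configured and using hypothetical addressing:

```
interface Tunnel0
 ip nhrp network-id 1
 ip nhrp nhs 172.16.0.1
 if-state nhrp
 tunnel source GigabitEthernet0/0
 tunnel mode gre multipoint
```

With `if-state nhrp` configured, the tunnel's line protocol tracks NHS reachability instead of staying up unconditionally, which is what lets backup interfaces, routing protocols, and static routes react as described above.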
On successful completion of the tunnel negotiation process, the system sends an IPsec Session Up message. In addition, you can configure the system to send the traps to particular trap receivers.

Every time I show this to people, I see eyes light up and little light bulbs turn on in their brains about where they can use this. Since they are on the same mGRE tunnel, how would we do that?
Now what happens? Really just that simple. Neat, eh? This will show you the basics of the NHRP registration request. I'm tossing this in here close to the sniffer trace so you can match up IP addresses better. (This post has been updated as well as reformatted.)
Thank you! Added to the list. Going to be doing a bunch of IWAN stuff next.
Will get back around to this after that. Your session rocked!! I had a similar question as gadam1. How do we set up spoke-to-spoke QoS where spoke A has 2 Mbps of bandwidth and spoke B has 4 Mbps? Would we be able to push a policy where the QoS shaping is set to the lower bandwidth? I have heard that Cisco is looking into what additional things to add here. I know this thread is 3.
This setup shapes and prioritizes outbound traffic from the hub. Outbound traffic from the spoke is shaped and prioritized with a service-policy tied directly to the tunnel source interface of the spoke router. I have thought actually about going for some CCNP stuff later this year. Hi Fish, thanks for this.
To confirm with you again: for the 5 Mbps policy map, I presume we need the starred line of configuration, similar to the others. How can we handle a spoke site which has, for example, 2 Mbps download and only 1 Mbps upload?
Now what?

DMVPN consists of two main deployment designs. In both cases, the hub router is assigned a static public IP address, while the branch routers (spokes) can be assigned static or dynamic public IP addresses.
The hub router undertakes the role of the server while the spoke routers act as the clients. It is important to note that mGRE interfaces do not have a tunnel destination defined, and for this reason they cannot be used on their own. DMVPN provides a number of benefits which have made it very popular and highly recommended.
These include:. As stated, DMVPN greatly reduces the necessary configuration in a large scale VPN network by eliminating the necessity for crypto maps and other configuration requirements.
The following requirements have been calculated for a traditional VPN network of a company with a central hub and 30 remote offices.
All spokes connect directly to the hub using a tunnel interface. The hub router is configured with three separate tunnel interfaces, one for each spoke. In addition, the hub router has three GRE tunnels configured, one for each spoke, making the overall configuration more complicated. If no routing protocol is used in our VPN network, the addition of one more spoke would mean configuration changes to all routers so that the new spoke is reachable by everyone.
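As a sketch of why this does not scale (interface names and addresses are hypothetical), a three-spoke hub built with point-to-point GRE needs one tunnel block per spoke, each with its own destination:

```
interface Tunnel1
 ip address 172.16.1.1 255.255.255.252
 tunnel source GigabitEthernet0/0
 tunnel destination 203.0.113.11
!
interface Tunnel2
 ip address 172.16.2.1 255.255.255.252
 tunnel source GigabitEthernet0/0
 tunnel destination 203.0.113.12
!
interface Tunnel3
 ip address 172.16.3.1 255.255.255.252
 tunnel source GigabitEthernet0/0
 tunnel destination 203.0.113.13
```

Every new spoke means another tunnel interface, another point-to-point subnet, and another configuration change on the hub.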
Lastly, traffic between spokes in a point-to-point GRE VPN network must pass through the hub, wasting valuable bandwidth and introducing unnecessary bottlenecks. With mGRE, all spokes are configured with only one tunnel interface, no matter how many spokes they can connect to. All tunnel interfaces are part of the same network.
In our diagram below, all tunnel interfaces share the same network. Furthermore, spoke-to-spoke traffic no longer needs to pass through the hub router but is sent directly from one spoke to another.
The flexibility, stability and easy setup it provides are second to none, making it pretty much the best VPN solution available these days for any type of network.

In the lab, I have been introducing VRF environments into everything that I do. On top of that, I went a step further and started trying a few different combinations when it came to interface configurations:
Being able to understand how a basic DMVPN hub-and-spoke environment works is key, first and foremost. I always refer back to the Cisco documentation page when I am stuck on a command. You can get a sample config for a hub and a spoke on that page. Definitely a good one to have bookmarked for when you need it. On to the topic at hand. Now to the three different scenarios I previously mentioned, to break it down in detail:
Try it in a lab environment and let me know if there are any questions. Once your lab is done, write erase everything and do it again. Repetition is key to remembering these types of things.
That being said, that documentation page I shared before is a great one to keep around from the official Cisco command reference.
April 17. I make no claims about the accuracy of this post. This information was gathered by reading Cisco documentation and testing in a lab environment. These settings were eventually deployed to a production environment and worked in the situation I was in. You are responsible in every way for confirming my information if you happen to use it in a real-world scenario. So I started working on figuring out why we were seeing these issues. This was an easy issue to fix.
This is the incorrect transform set. This was causing fragmentation for large packets. So in addition to the list below there is going to be 20 bytes of overhead for TCP and another 20 bytes for the original IP header. Adding everything up we get bytes of overhead on each segment of payload. However the tunnel needs to add the GRE header, in this case an additional 28 bytes. So the tunnel interface needs to fragment the original packet into two packets just to fit the GRE header on.
After that happens, it is sent to the crypto process, which adds 56 bytes of overhead to each fragment, plus another 20 because of tunnel mode. The larger of the two fragments from earlier will once again be over the IP MTU of the physical interface.
So the encrypted fragment is actually fragmented again. We now have three fragments for the original one. The one fragment that was fragmented again after encryption will need to be buffered and reassembled before decryption on the other end.
While the smaller of the two original fragments will be received and decrypted immediately. Look at this picture from the Cisco documentation for a good explanation of everything that happens.
Also, once again, I recommend reading this. The fix was fairly straightforward. The expected size of the final packet should be around bytes. It would be possible to tune this a little more to get right up to or closer to it. After these changes were made on the hub and spoke routers that made up the DMVPN network, performance increased, CPU usage dropped and leveled out, and fragmentation counters were barely incrementing.
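The commonly used values on the tunnel interface, assuming a 1500-byte physical MTU with enough headroom for GRE plus IPsec overhead, are an IP MTU of 1400 and a TCP MSS clamp of 1360 (1400 minus 20 bytes of IP header and 20 bytes of TCP header):

```
interface Tunnel0
 ip mtu 1400
 ip tcp adjust-mss 1360
```

With these two lines, TCP sessions negotiate a payload size that still fits inside the tunnel after encapsulation, so TCP traffic avoids fragmentation entirely; the `ip mtu` value still governs large non-TCP packets.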
I have a question on the total amount of overhead shown in your document.

In an old post I explained various types of VPN technologies. Cisco DMVPN uses a centralized architecture to provide easier implementation and management for deployments that require granular access controls for diverse user communities, including mobile workers, telecommuters, and extranet users.
Key components are:

Note: Because all spoke-to-spoke traffic in DMVPN Phase 1 always traverses the hub, it is actually inefficient to even send the entire routing table from the hub to the spokes.

Phase 2: Allows spokes to build a spoke-to-spoke tunnel on demand, with this restriction: the spokes must receive specific routes for all remote spoke subnets.
For instance, to reach a subnet behind Spoke3, the resolution request gets forwarded from the hub to Spoke3. Spoke3 replies directly to Spoke2 with its mapping information. At this point, the spokes can modify their routing table entries to reflect the NHRP shortcut route and use it to reach the remote spoke.
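The shortcut behavior described above corresponds to DMVPN phase 3, which is enabled with a pair of NHRP commands: `ip nhrp redirect` on the hub's tunnel interface and `ip nhrp shortcut` on the spokes' tunnel interfaces. A sketch showing only the relevant lines:

```
! Hub tunnel interface
interface Tunnel0
 ip nhrp redirect
!
! Spoke tunnel interface
interface Tunnel0
 ip nhrp shortcut
```

The redirect tells a spoke that a better path exists; the shortcut lets the spoke overwrite its forwarding entry with the resolved next hop, which is why the hub can go back to advertising a summary or default route in phase 3.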
Jun 26. An article by Fabio Semperboni (Tutorial).