Saturday, January 24, 2009

Conditional Debug

I came across this useful nugget the other day and wished I'd known about it years ago.

One of the first considerations when turning on a debug is that you can potentially bring a router to its knees. In theory, you should only log to the buffer, assign an access list to the debug, and adjust the scheduler to ensure that a certain amount of time is dedicated to essential processes, blah, blah, blah.

Even when following these precautions, it can be gut-wrenching to even consider running a debug that you know is going to generate A LOT of output.

Take, for example, debug ppp packet. What if you have a router with tens, or even hundreds, of interfaces running ppp? Yet you need something beyond debug ppp authentication and debug ppp negotiation. There is no option to specify an ip address on debug ppp packet, like there is for debug ip packet. No wonder, since ppp is layer 2.

There is still a safe way to do this (disclaimer: test this in the lab first, and don't blame me if you muck up your production network!).

debug condition interface [interface] allows you to only perform the debug on a specified interface.

For example, say you have a native 6500 with 336 interfaces, and you want to debug ip packet on interface f2/41. You don't really want to use an access list because you're more interested in all traffic on the interface, rather than a specific set of IPs.

You can do the following:

debug condition interface f2/41
debug ip packet

And the result is you'll only see ip packets associated with f2/41, instead of all 336 interfaces.
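A couple of related commands are worth knowing for verification and cleanup. These are from memory, so double-check them on your platform:

show debug condition
undebug all
no debug condition interface f2/41

The first shows which conditions are currently active; the last two turn off the debugs and then remove the condition itself.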

Further details can be found here.

Sunday, January 18, 2009

IE Vol 1 Auto-RP Filtering Candidate RPs Complete

Ok, it's obvious whoever wrote this part of the workbook wasn't paying close attention to subnet masks. There are several mistakes here.

To filter, from the mapping agent side, which RPs will be announced for a particular group, use the ip pim rp-announce-filter command. The tricky part is that it's not enough to add an entry for each RP and the groups it should be allowed to announce. An extra entry explicitly denying the remaining RPs and groups must be added as well, to prevent rogue RPs.
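A sketch of what I mean, with made-up RP and group addresses. The first filter permits one RP for one group range; the second matches every other RP and denies all groups, so an unlisted (rogue) RP isn't accepted by default:

access-list 10 permit 150.1.5.5
access-list 20 permit 225.0.0.0 0.255.255.255
ip pim rp-announce-filter rp-list 10 group-list 20
!
access-list 11 permit any
access-list 21 deny any
ip pim rp-announce-filter rp-list 11 group-list 21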

IE Vol 1 Multiple Candidate RPs Complete

In this lab, two candidate RPs are created, which only advertise a portion of the multicast addresses. This requires the group-list option of the ip pim send-rp-announce command. Not much to it, just configure the access list to only allow the multicast groups that the router wants to be an RP for.

There is one thing to keep in mind on this command. Typically, whenever creating a multicast access-list, you want to use the destination address, since a multicast group can only be a destination, and not a source. But we are not filtering here, we are telling the RP which groups to advertise. Therefore, the groups to be advertised are configured as the source of the access-list. Perhaps more easily, a standard ip access-list can be used.

On a minor irritating note, the lab specifies a /4 address, when it should have used a /5. Class A occupies 0.0.0.0/1, class B occupies 128.0.0.0/2, class C occupies 192.0.0.0/3, and the entire class D is 224.0.0.0/4. To split class D down the middle, a /5 is required.
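For the record, advertising only the lower half of class D looks something like this (the interface and scope are my own placeholders); the 7.255.255.255 wildcard is the /5:

access-list 1 permit 224.0.0.0 7.255.255.255
ip pim send-rp-announce Loopback0 scope 16 group-list 1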

IE Vol 1 Auto-RP Complete

Again, nothing fancy here. First the interfaces are configured for sparse-dense mode. Once configured this way, the multicast flow works, since it falls back to dense mode when an RP is not known for a flow.

So the trick is to get that RP learned dynamically. There are two dynamic RP protocols: Auto-RP and bootstrap RP. This exercise uses auto-rp, but they are both very similar.

Two things are required for the dynamic registration to happen. First, at least one router must advertise one of its interfaces as a candidate RP, using ip pim send-rp-announce. This advertisement happens via multicast group 224.0.1.39. Since there is no RP for this group, it uses dense mode and will get flooded/pruned.

Next, a router must be configured to listen for these messages. This is done via the ip pim send-rp-discovery command, and this router is called a mapping agent. It can be the same router as a candidate RP or a different one. Once added, the mapping agent acts as a receiver on 224.0.1.39, so the routers must graft it onto the tree for that group. This means any RPF checks for the group must succeed to ensure messages from the candidate RP(s) are being received by the mapping agent.

The mapping agent decides on a candidate RP and advertises its selection via dense mode to group 224.0.1.40. Through this group, all routers learn the RP for the group, and the flow can use sparse mode.
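The whole dynamic registration boils down to two commands. The interface and scope values here are just examples:

! on the candidate RP
ip pim send-rp-announce Loopback0 scope 16
! on the mapping agent
ip pim send-rp-discovery Loopback0 scope 16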

Saturday, January 17, 2009

IE Vol 1 Multicast RPF Failure

This is probably my single biggest weakness in multicast: troubleshooting RPF failure issues. I actually got through this lab pretty quickly, but here is the sequence of steps to find and fix an RPF failure.

1. Do a sh ip mroute on each router along the path from the source until you find a router which does not show an incoming interface for the S,G flow.

(,, 00:01:25/00:01:42, flags:
Incoming interface: Null, RPF nbr

2. Turn ip mroute-cache off on each pim interface. Then, turn on debug ip mpacket. This message spells out the issue.

*Jan 18 01:59:52.590: IP(0): s= (Serial2/0.2) d= id=0, ttl=250, prot=17, len=48(44), not RPF interface

3. sh ip mroute count is another way to verify, although it's not the most intuitive output ever created.

2 groups, 0.50 average sources per group
Forwarding Counts: Pkt Count/Pkts(neg(-) = Drops) per second/Avg Pkt Size/Kilobits per second
Other counts: Total/RPF failed/Other drops(OIF-null, rate-limit etc)

Group:, Source count: 1, Packets forwarded: 0, Packets received: 21
Source:, Forwarding: 0/0/0/0, Other: 21/21/0

So from all this output, it's evident that source is being received on port S2/0.2, which is not the RPF interface. So what is the RPF interface? sh ip route tells you that answer:
*, from, 00:14:03 ago, via Ethernet1/0
Route metric is 3, traffic share count is 1

There it is. The router is expecting to see flows from coming in on E1/0, but instead they arrive on port S2/0.2.

Now, there are two ways to fix this. One way would be to adjust IGP metrics so the current path is preferred in the routing table, thus resolving the RPF issue. The other way is more of a workaround, in which a static mroute is added to allow another interface to pass the RPF check. I chose that route.

R5(config)#ip mroute s2/0.2

And as soon as that's added:

*Jan 18 02:07:27.570: IP(0): s= (Serial2/0.2) d= (Ethernet0/0) id=0, ttl=250, prot=17, len=44(44), mforward

And now, everything looks wonderful:


Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to, timeout is 2 seconds:

Reply to request 0 from, 64 ms

Incidentally, for the lazy folks out there, you can just specify all IPs in the mroute:
R5(config)#ip mroute s2/0.2

Or even add a default mroute to every single interface on the router to be sure no RPF checks are failing. On the negative side, keep in mind that RPF is used as a loop-prevention mechanism. If you go overboard in removing RPF checks, you may end up DoS'ing your network with a routing loop. This can therefore be a bad approach in a production network.
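For reference, the catch-all version looks something like this (the interface is just an example):

ip mroute 0.0.0.0 0.0.0.0 s2/0.2

This makes the named interface pass the RPF check for every source, which is exactly why it should be used sparingly.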

IE Vol1 PIM Sparse Mode Complete

My rustiness with multicast is evident in the amount of time it's taking me to troubleshoot silly issues. The fundamental difference in configuring sparse mode vs. dense mode is that an RP is required. When given the option, don't bother messing with dynamic protocols and just assign a static RP.

No problem there. I throw the static RP on each router, verify pim neighbors, see the flow--everything seems to be working great. One problem: the flow from R7 doesn't appear anywhere. Debugs don't seem to be telling me anything special.

And then it dawns on me. Do the routers actually have a route to the RP? No! I used R2's loopback, but never actually assigned the loopback into OSPF. I fix that and bam! everything comes up as expected.
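For the record, the fix amounts to two pieces (addresses are made up to illustrate): the static RP assignment, which goes on every PIM router, and the missing network statement on R2:

ip pim rp-address 150.1.2.2
!
router ospf 1
 network 150.1.2.2 0.0.0.0 area 0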

Bottom line, I still don't have any sort of troubleshooting methodology for multicast. Most likely because I simply haven't spent enough time troubleshooting multicast issues. Hopefully by the time I get through the rest of the Volume I exercises I'll be more comfortable.

IE Vol1 PIM Dense Mode Complete

This exercise was pretty simple, but I really enjoyed going through the show commands and debugs to refresh my memory on how to troubleshoot multicast. In R&S I definitely felt as if multicast was my weakest subject. I want to spend a lot more time on it before the SP lab to make sure it's second nature to me.

There really isn't a whole lot to dense mode. But the exercise had me bring up ip sla to start a flow, and then use debug ip pim and sh ip mroute to watch the flow get pruned back because there are no receivers. After watching this, I added a receiver to the far end of the network. Sure enough, the debugs showed graft messages and sh ip mroute showed that the path was no longer pruned. Finally, turning off ip mroute-cache and debug ip mpacket showed the packets flowing.

I Just Don't Feel Like It!!!

I do not feel like studying lately. Last year I got a heck of a lot of studying done over Christmas, New Year's, and the following weekends. But this year I just don't seem to have the energy. I'm working a lot more hours these days and my son is a year older. I'm not sure how much all of that has to do with it. Last year I was still going to grad school, so I would think I'd have more energy this year.

It's not a feeling of being overwhelmed--I actually feel like I'm somewhat close. Maybe that's part of the problem. The fear and accompanying adrenalin hasn't hit yet.

Interestingly, a lot of the other CCIE blogs seem to be pretty quiet the last few weeks. I'm probably not the only one having a hard time getting motivated right now.

I'm gonna force myself to go through a multicast lab or two to at least get something done.

Saturday, January 10, 2009

IE Vol 2 Lab 1 Wrap-up

Some thoughts on this lab:

Layer 2:
Everything but Cell Mode MPLS was simple. I would expect switching and frame-relay to be MUCH simpler than what was on the R&S lab.
Regarding Cell Mode, this was literally the first time I touched it. I realized quickly I had absolutely no clue what I was doing.
Next step: Watch the CoD

This section was simple as well. Not much to single area OSPF.
Next step: Read the message board to see why point-to-multipoint non-broadcast was used instead of non-broadcast

Everything was straightforward here until task 3.4, bestpath selection. I did put some time into this and couldn't figure it out. I'm looking at the solution and I'm not sure about it.
Next step: Lab this up in dynamips.

I skipped TE mostly because it was only 3 points and a lot of work for those 3 points.
Next step: Redo a Vol 1 TE lab as penance.

The Internet Access task was the first time I did route leaking. But I understood the theory well enough to get it working relatively easily. The key is that, when the static route points out a multipoint interface, both the outgoing interface and the next-hop ip must be stated.
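In other words, something like this (prefix, interface, and next hop are made up), where the route names both the multipoint interface and the next-hop address:

ip route 112.0.0.0 255.0.0.0 Serial0/0 54.1.1.254

Without the next-hop IP, the router has no way to resolve the layer 2 address on a multipoint interface.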

This was the first time I've configured VRF NAT. I almost had it, but debugs were complaining about host unreachable. Turned out I completely missed the route-target import so I didn't have a route back to the source. If I were more confident in NAT I would have continued troubleshooting--it was certainly solvable if I wasn't so "sure" it was a NAT issue.

Next step: Watch NAT CoD

IP Multicast:
I apparently did a pretty big brain dump on Multicast.

Next step:
Do Vol 1 Multicast labs

QoS: Skipped
Next step: Do Vol1 QoS lab

Security: Skipped
Next step: Review tasks, research doccd for anything that's not obvious

System Management: Skipped
Next step: Review tasks, research doccd for anything that's not obvious

All in all, I am pretty pleased with the way this went. It was a HECK of a lot better than my first ever R&S full lab. A few takeaways but nothing extravagant. Once I complete the next steps I'll move on to lab 2, but may come back to repeat this one later.

Final Score: 55/100

IE Vol 2 Lab 1

7:00am Lab Started
7:30am Diagrams and read-through complete
7:55am 1.1 complete, 3 points
8:01am 1.2 complete, 3 points (switchport protected)
8:08am 1.3 complete, 2 points
8:13am 1.4 complete, 2 points
8:17am 1.5 complete, 2 points
9:10am 1.6 complete, 0 points (cell mode mpls, had to cheat)

9:27am 2.1 complete, 3 points
9:34am 2.2 complete, 2 points
9:46am 2.3 complete, 3 points (assuming non-broadcast)

10:02 am 3.1 complete, 3 points
10:11am 3.2 complete, 3 points
10:26am 3.3 complete, 3 points
10:47am skipping 3.4 until mpls is complete, 0 points

11:02am 4.1 complete, 3 points
11:03am 4.2 complete, 3 points
11:05am 4.3 complete, 3 points


1:49pm back from break

skipping 4.4 MPLS TE

1:57 5.1 complete, 3 points
2:06 5.2 complete, 3 points
2:14 5.3 complete, 3 points
2:39 5.4 complete, 3 points
3:23 5.5 complete, 0 points (cheated)
3:32 5.6 complete, 3 points
3:42 5.7 complete, 2 points

That's it. My energy is gone so I'm calling it a day.

Tuesday, January 6, 2009

IE Vol 1: MPLS TE Unequal Cost Load Balancing Complete

This time, we create two tunnels. One uses explicit paths, the other uses dynamic paths. Setting the traffic-eng bandwidth different on each tunnel causes unequal-cost load-balancing to occur.

One troubleshooting note, there are three requirements to have the TE tunnels show in the routing table:
  • tunnel mpls traffic-eng autoroute announce must be configured on each tunnel
  • the TE tunnels must show up/up in sh mpls traf tun brie
  • the TE tunnels must have ip unnumbered or an IP address assigned
Not having ip unnumbered assigned to the tunnel interfaces caused me a lot of grief. Everything showed as up and functioning, but I didn't see the tunnels in the routing table. Once I added ip unnumbered loo0, everything looked great. Note that the traffic share counts differ between the two paths. Try doing that in OSPF without MPLS-TE!

R4#sh ip route
Routing entry for
Known via "ospf 1", distance 110, metric 5, type intra area
Routing Descriptor Blocks:
* directly connected, via Tunnel0
Route metric is 5, traffic share count is 2
directly connected, via Tunnel1
Route metric is 5, traffic share count is 1
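For reference, the two tunnels end up looking roughly like this. The destination, bandwidths, and explicit path name here are my own placeholders; the 2:1 traffic-eng bandwidth ratio is what produces the 2:1 traffic share counts:

interface Tunnel0
 ip unnumbered Loopback0
 tunnel mode mpls traffic-eng
 tunnel destination 150.1.4.4
 tunnel mpls traffic-eng autoroute announce
 tunnel mpls traffic-eng bandwidth 200
 tunnel mpls traffic-eng path-option 10 explicit name PATH1
!
interface Tunnel1
 ip unnumbered Loopback0
 tunnel mode mpls traffic-eng
 tunnel destination 150.1.4.4
 tunnel mpls traffic-eng autoroute announce
 tunnel mpls traffic-eng bandwidth 100
 tunnel mpls traffic-eng path-option 10 dynamic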

IE Vol 1: MPLS TE Explicit PE-PE Tunnels Complete

This is the same as the last lab, but instead of dynamic tunnels, explicit tunnels are used.

To do this, an ip explicit path list needs to be created, which lists each ingress interface on each router in the path.

When I set it up, I used one of the wrong IPs. The nice part was the sh mpls tra tun told me which was incorrect:

Last Error: PCALC:: Explicit path has unknown address,

MPLS Traffic Engineering: PE to PE Tunnels Complete

I must confess, I made it through all the written exams so far without touching Traffic Engineering. I knew a tiny bit of theory, such as that it relies on RSVP and inserts yet another label on the stack, but that was about it.

This first TE lab was a pretty good introduction. This link describes the configuration process really well.

In a nutshell, a tunnel interface needs to be created on the two endpoints. The tunnel gets configured as mode mpls traffic-eng, along with a bunch of other options under the tunnel interface.

Then, the mpls network needs to be configured to support this tunnel. This is accomplished through the monotonous configuration, on each router, of:
  • enabling mpls traffic engineering globally
  • enabling mpls traffic engineering on each interface between the tunnel endpoints
  • configuring ip rsvp bandwidth on each interface between the tunnel endpoints
  • configuring ospf (or isis) for traffic engineering on each mpls router
It seems if you miss a link in the path, little info is given as to which one you've missed. Instead, all you get is a message saying that the path isn't valid when you do a show mpls tra tun. Once all the links have been completed, the tunnel interface comes up/up.
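Sketching the per-router monotony described above (interface, bandwidth, and process numbers are just examples):

mpls traffic-eng tunnels
!
interface Serial0/0
 mpls traffic-eng tunnels
 ip rsvp bandwidth 512
!
router ospf 1
 mpls traffic-eng router-id Loopback0
 mpls traffic-eng area 0

This block has to be repeated on every router along the path between the tunnel endpoints.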

IE Vol 1 Controlling MPLS label distribution Complete

Nothing fancy here, the lab just has you send MPLS labels only for the BGP endpoints. Those two are the only IPs required for the vpn labels to have igp labels to ride. So getting rid of all the extraneous labels has no effect on the vpn functionality.
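The configuration amounts to something like this (the loopback addresses are made up): turn off label advertisement entirely, then allow it back for just the BGP endpoints.

no mpls ldp advertise-labels
mpls ldp advertise-labels for 1
!
access-list 1 permit 150.1.5.5
access-list 1 permit 150.1.6.6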

Saturday, January 3, 2009

IE Vol 1: VRF-lite Complete

One would think with a name like VRF-lite, things wouldn't be too bad. That really wasn't the case. I'd hate to see what VRF-heavy looks like.

In actuality, VRF-lite refers to having VRFs without running MPLS. We're keeping separate routing tables, but are separating the traffic on the link using dot1q tags or DLCIs instead of labels.

There really weren't any tricks here, just a lot of stuff going on at the same time.

The setup starts with a basic iBGP vpnv4 session between two routers. But this time, the two BGP routers have separate subinterfaces for each vrf.

Next, the next downstream router has two subinterfaces mapped to vrf's as well. That router is running the customer IGP, but keeping them separate via subinterfaces, rather than running bgp vpnv4 to the upstream router to learn labels and sending traffic via MPLS.

So in effect, this is bringing the MPLS domain one more router upstream from the edge.
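The downstream router ends up with something like this per VRF (names, tags, and addresses are made up): a VRF definition plus a subinterface mapped into it with a dot1q tag, and no MPLS anywhere.

ip vrf VPN_A
 rd 100:1
!
interface FastEthernet0/0.100
 encapsulation dot1Q 100
 ip vrf forwarding VPN_A
 ip address 10.1.1.1 255.255.255.0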

Friday, January 2, 2009

IE Vol 1: Carrier Supporting Carrier - Hierarchical MPLS VPNs Complete

Nothing fancy going on in this lab. The configuration ends up with two separate vpnv4 sessions, I'm just a little unsure of how to refer to the two. Perhaps by going with the old Access/Distribution/Core model we could say the Core P (CP) network, Distribution P (DP) network, and consider the C network to be access? It's not a perfect analogy but good enough...

So that gives us a vpnv4 session on the Core P network to carry the Distribution P routes. Then there's a Distribution vpnv4 network to carry the C routes.

Like anything else, the key is just to build from the ground up and verify each step along the way.