How to hurt yourself with EIGRP

As long as all your routing nodes are Cisco-branded, EIGRP (Cisco's proprietary routing protocol) is very easy to implement. You pretty much just turn it on and it works, like the old AppleTalk/PhoneNet networks in the pre-OS X days.

But if you have a machine that's all loaded up with static routes, and you accidentally redistribute them back to the machine the routes point to, the network gets pretty loopy. Little network geek joke there: round and round she goes, and where she stops, nobody knows.
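A minimal sketch of the misconfiguration, with hypothetical routers R1 and R2 and made-up addresses. R1 holds a static route whose next hop is R2, then redistributes its statics into EIGRP:

```
! On R1: static route for 192.168.10.0/24 points at R2 (10.0.0.2)
ip route 192.168.10.0 255.255.255.0 10.0.0.2
!
router eigrp 100
 network 10.0.0.0
 ! redistributing statics advertises that route back toward R2
 redistribute static metric 10000 100 255 1 1500
```

R2 now learns via EIGRP that 192.168.10.0/24 lives behind R1, while R1's static route says it lives behind R2. A packet for that prefix ping-pongs between the two until its TTL expires.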

DIY Ground-based Ion Cannon

Hobbit’s netcat can be used to vomit forth network traffic as fast as your machine can generate it. We don’t need no steenkin’ LOIC!

Anyway, I needed to test a WAN pipe to see if Comcast was delivering the bandwidth we’re paying for – we’re supposed to have a 200 Mbps link to Boston.

[root@monster ~]# yes | nc -4 -u -v -v remotehost.boston.com 9

The yes command just screams “yes!” incessantly, like a teenage boy’s dream girlfriend. We pipe the output to netcat, and force it to use UDP and IPv4 to send all the yes traffic to a host in Boston. UDP port 9 is the “discard” service, of course, so the machine at the other end just throws the traffic away. We already constantly monitor all the routing nodes in the path so we can see and graph what happens to the packets in real time.
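If you'd rather see the send rate from the sending host itself instead of reading it off router graphs, the same blast-at-the-discard-port idea is easy to sketch in a few lines of Python. This is a hedged stand-in, not the author's tooling; the function name and the local sink socket are made up for illustration, and a real run would target the remote host's UDP port 9 instead of localhost:

```python
import socket
import time

def udp_blast(host, port, seconds=1.0, payload=b"y\n" * 700):
    """Send UDP datagrams to host:port as fast as possible for the
    given duration and return the achieved send rate in Mbps."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sent = 0
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        sent += sock.sendto(payload, (host, port))
    sock.close()
    return sent * 8 / seconds / 1e6  # bytes -> bits -> Mbps

# Dry run against a local sink socket standing in for the discard
# service; against a real target you'd use its address and UDP port 9.
sink = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sink.bind(("127.0.0.1", 0))
port = sink.getsockname()[1]
print(f"{udp_blast('127.0.0.1', port, seconds=0.5):.0f} Mbps generated locally")
sink.close()
```

Like the netcat one-liner, this only measures what the sender can generate; you still need counters at the far end (or on the routers in between) to see how much survives the trip.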

Turns out the host can generate 80 Mbps, sustainable indefinitely. That goes into the 200 Mbps Comcast pipe… and only 4 Mbps comes out the other end! Thanks, netcat! Time to call Comcast!

Don’t do this if you aren’t ready to deal with the repercussions of completely smashing your network. Saturating interfaces, routers and pipes will severely impact normal business operations, and should be saved as a last resort.

Setting default gateway on Cisco 2960 switches

Since The Dawn Of Time ™ it’s been possible for a networked device to have a default route. Way back then, before our beards turned thick and grey, all routers were called “gateways” so the default route was called a default gateway in those ancient times.

The purpose of the default route is to provide a last-ditch option when the device does not know what else to do. Basically, whenever a networked device doesn’t know where to send some data, it can do the equivalent of a Hail Mary pass and just chuck it blindly at a mysterious place where, hopefully, there will be a router or modem of some sort that is part of the global Internet. This is actually how the vast majority of Internet traffic is handled, believe it or not; PCs, Macs and web servers typically don’t know anything about how to reach other things on the Internet. The router that sits at the end of their default route handles it for them.
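That fallback behavior is just longest-prefix matching with a catch-all entry. Here's a toy sketch of it; the table, router names, and addresses are all hypothetical:

```python
import ipaddress

# Toy routing table: the 0.0.0.0/0 entry is the "default gateway"
# that catches anything the more specific routes miss.
routing_table = {
    ipaddress.ip_network("10.1.0.0/16"): "core12",
    ipaddress.ip_network("192.168.5.0/24"): "edge3",
    ipaddress.ip_network("0.0.0.0/0"): "isp-gw",  # default route
}

def next_hop(dst):
    """Return the next hop for dst using longest-prefix match."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routing_table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return routing_table[best]

print(next_hop("10.1.2.3"))  # -> core12 (matches the /16)
print(next_hop("8.8.8.8"))   # -> isp-gw (only the default matches)
```

A /0 prefix matches every address but loses to anything more specific, which is exactly why it works as a last resort.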

The Cisco 2960 is a commodity network switch that has recently been given some routing capabilities by a software update. They are quite commonplace; there are a couple of stacks of them around my job site, hanging off the larger Nexus fabrics.

The 2960 has brought some fresh confusion to the terminology, because for reasons unknown Cisco has provided these three commands:

ip default-gateway (when IP routing is disabled)
ip default-network (when IP routing is enabled)
ip route 0.0.0.0 0.0.0.0 (when load balancing across multiple routes is enabled)
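For the record, here's a sketch of what the three spellings look like in a config, assuming a hypothetical upstream router at 10.0.0.1:

```
! Layer-2 mode (ip routing disabled):
ip default-gateway 10.0.0.1
! Routed mode, the classic default route:
ip route 0.0.0.0 0.0.0.0 10.0.0.1
! Routed mode, flagging a classful network as the candidate default
! (note it takes a network, not a next-hop address):
ip default-network 10.0.0.0
```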

To an experienced networking professional, those are all the same thing. If I say “hey, Melvin, set route zero mask zero on your box to point to the core12 router” it means the same as if I say “Melvin, you dolt, your default gateway needs to be core12” or even “the default net should be core12, Melvin!” So this is a remarkably non-intuitive set of configuration options.

“So what,” you say; with a Cisco router you just use the tab-completion and question-mark help features of the command line to learn what to do, right? Who needs documentation, Cisco rocks. Er, except that in the current version of the software there’s no help text at all for ip default-gateway, you can’t use ip default-network until routing is enabled, and the box will accept an ip route to 0.0.0.0 without actually using it as a default. So, not so much. Thankfully Keith Barker has a more helpful post than mine, if you haven’t already figured out what you need from this one.

MPLS is back up, Cisco WIC at fault

Verizon wasn’t calling us back or being helpful for the first 24 hours while our network was down, so we started yelling at them. After three hours of this, continuously on the phone with their support group and working through four “escalations”, they eventually gave us some useful attention. With their help we determined that, unusually enough, the problem was not in Verizon’s equipment, although it’s not unlikely that the whole sequence of events started in their network. An embarrassing situation all around, really. Verizon did a great job once they got started, and from their point of view our system was the one failing and not theirs, but you still shouldn’t have to harangue people to get them to fulfill the most basic requirements of customer service.

Anyway, the MPLS circuit terminates at a Cisco 2811 router with four FastEthernet interfaces. The 2-port Fast Ethernet WAN Interface Card (WIC) on that system was reporting completely false packet counts and diagnostics – pretending to work perfectly while actually generating no outgoing data packets at all, and ignoring all incoming packets. There was literally no way to diagnose this without physical loopbacks and other caveman tricks, since rebooting the router during business hours would have wreaked even more havoc.

Shutting down the router, pulling the power cord, reseating the WIC and restarting everything fixed it. But of course we had to fly Jay from Philly to Boston to do that, and the network was crippled for 41 hours before the situation resolved, at right about 5 AM. Three engineers working 19-hour shifts is not fun for people over 30!

Since this WIC is clearly unreliable, we moved the line to a dedicated port and set up a hot-spare router next to the problem child.