Associated Press jumps the shark

Everybody has been saying that the Internet is killing journalism, but I see it as more of a suicide.

“The argument for lowercasing Internet is that it has become wholly generic, like electricity and the telephone. It never was trademarked and is not based on any proper noun,” Tom Kent, AP Standards Editor, said in a statement. “The best reason for capitalizing it in the past may have been that the term was new. At one point, we understand, ‘Phonograph’ was capitalized.”

Mr. Kent has become an arbiter of journalistic composition despite being apparently unfamiliar with the concept called “research.” Which explains the death of journalism better than the Internet, or indeed any number of internets.

Don’t be a .local yokel

Wikipedia has a nice technical write-up that explains why you should never, ever use the .local suffix the way Microsoft has frequently recommended.

But I like this politically incorrect version better:

Microsoft: “Gee, nobody is using the .local piece of the globally shared Internet namespace, so let’s tell all our customers that it’s best practice to use it for our totally super cool version of Kerberized LDAP service called Active Directory!”

Novell: “Oh noes, Microsoft has made an inferior competitor to our flagship technology! It’ll probably destroy our market advantage just like their inferior networking stack did!”

Linux/Unix: “Oh noes, when somebody attaches the new Microsoft technology to an existing mature standards-based network, Kerberos breaks!”

Microsoft: “HA HA HA HA HA HA HA we are totally following the standard, lusers!”

Linux/Unix: “grumble whine we will patch Kerberos even though we don’t agree.”

Microsoft: “whatevs. Did you notice we broke your DNS too? :)”

Apple: “Hey, IETF, we have this cool new zeroconf technology. We want to reserve the .local namespace for it.”

IETF: “OK, sure, you’ve filled out all the forms and attended all the meetings and there’s two independent implementations so you’ve done everything correctly. We have no valid reason to deny this allocation.”

Novell: “Hey, we were using SLP already, what did you just do?”

Apple: “Oh, whoopsie, did we just eat your lunch? HA HA HA HA HA”

Microsoft: “Hey, what just happened?”

Apple: “HA HA HA HA HA HA HA HA HA HA HA RFC6762, lusers!”

Linux/Unix: “grumble mumble whatevs. We can do mDNS.”

Microsoft customers: “OH NOES WE ARE SCREWZ0RRED”

Microsoft: “Meh, you didn’t really want Apple products on your networks anyway.”

:TEN YEARS LATER:

Microsoft customers: “How much would it cost to fix this network?”

Microsoft: “What, were you talking to us? Everything’s fine here. Windows 10 forever!”

Comodo up to more tricks

People occasionally ask me who they should buy security certificates from. I absolutely will not recommend anyone in particular – even the most honest and honorable Certificate Authorities are inherently swindlers, because the trade itself is pretty much a legalized extortion scheme – but I am willing to say who I don’t recommend – Comodo is the worst CA, hands down. Witness their latest hijinks:

When you install Comodo Internet Security, by default a new browser called Chromodo is installed and set as the default browser. Additionally, all shortcuts are replaced with Chromodo links and all settings, cookies, etc are imported from Chrome. They also hijack DNS settings, among other shady practices.
[Link to Chromodo download elided]
Chromodo is described as “highest levels of speed, security and privacy”, but actually disables all web security. Let me repeat that, they ***disable the same origin policy***…. ?!?..

This certainly isn’t the first time Comodo’s been caught doing things they shouldn’t, but somehow they still control around a third of the world’s certificate issuance. People need to stop giving business to known bad actors, even when it’s unclear whether the actions stem from malice or incompetence.
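If you ever want to check which CA is actually standing behind a certificate, the issuer field will tell you. A quick sketch (cert.pem is a placeholder path; point it at any PEM-format certificate, for instance one captured with openssl s_client):

```shell
# Print who issued a certificate and its validity window.
# To capture a live server's cert first:
#   openssl s_client -connect example.com:443 </dev/null \
#     | openssl x509 > cert.pem
openssl x509 -in cert.pem -noout -issuer -dates
```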

Query all non-subscribed RHEL7 repos at once

The old Red Hat Network was simple and easy to use. The RHN website presented a list of systems in your web browser, with counts of outstanding patches and outdated packages. You could click on a specific system name and do various things like subscribe to specific repositories (channels) etc.

The current Red Hat Network is a glittering javascript tour-de-force that multiplies the number of clicks and the amount of specialized knowledge you will need to manage your systems. You can pay extra for add-on capabilities such as the ability to select groups of systems and apply a set of operations to all of them, which is almost certainly necessary if you have a large number of systems. It’s a sad travesty of the much-maligned system it replaced.

If you’re completely entangled in the new RHN with your Red Hat Enterprise Linux 7 systems (by which I mean that you haven’t managed to exit the Red Hat ecosystem for a more cost-effective infrastructure yet) you might want to do something like figure out which of the various poorly named repos (such as -extras, -optional, and -supplementary) contains some particular package you want.

Command line to the rescue! Ignore all RHN’s useless beauty and use ugly, reliable GNU awk. This, for example, finds the repo where the git-daemon package has been hidden away.

subscription-manager repos --list | gawk '/^Repo ID/{print "yum --showduplicates list available --disablerepo=\"*\" --enablerepo=" $3}' | bash | grep -i git-daemon

After several minutes (there’s a lot of network traffic involved) you’ll find that versions of git-daemon are in five different repos.

git19-git-daemon.x86_64 1.9.4-2.el7 rhel-server-rhscl-7-eus-rpms
git19-git-daemon.x86_64 1.9.4-3.el7 rhel-server-rhscl-7-eus-rpms
git19-git-daemon.x86_64 1.9.4-3.el7.1 rhel-server-rhscl-7-eus-rpms
git-daemon.x86_64 1.8.3.1-5.el7 rhel-7-server-optional-fastrack-rpms
git-daemon.x86_64 1.8.3.1-4.el7 rhel-7-server-optional-rpms
git-daemon.x86_64 1.8.3.1-5.el7 rhel-7-server-optional-rpms
git-daemon.x86_64 1.8.3.1-6.el7 rhel-7-server-optional-rpms
git19-git-daemon.x86_64 1.9.4-2.el7 rhel-server-rhscl-7-rpms
git19-git-daemon.x86_64 1.9.4-3.el7 rhel-server-rhscl-7-rpms
git19-git-daemon.x86_64 1.9.4-3.el7.1 rhel-server-rhscl-7-rpms
git-daemon.x86_64 1.8.3.1-5.el7 rhel-7-server-optional-beta-rpms

So, you query the Red Hat Package Manager, rpm, to find out what version of git you have.

rpm -q git
git-1.8.3.1-6.el7.x86_64

Since 1.8.3.1-6.el7 matches the latest version of git-daemon available from the rhel-7-server-optional-rpms repository, that’s the repo you need to enable in order to install git-daemon.

subscription-manager repos --enable rhel-7-server-optional-rpms
yum install git-daemon

This process is much easier than using the Red Hat Network web gui, and requires less specialized knowledge. Which is pretty sad, considering how arcane these incantations are.
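Incidentally, if piping machine-generated commands straight into bash makes you nervous, the same hunt can be written as a plain shell loop. A sketch (git-daemon is just the example package again):

```shell
pkg="git-daemon"   # package to hunt for
# Pull the repo IDs out of subscription-manager, then query each repo
# individually instead of generating a script and piping it to bash.
subscription-manager repos --list \
  | gawk '/^Repo ID/{print $3}' \
  | while read -r repo; do
      yum --showduplicates list available \
          --disablerepo="*" --enablerepo="$repo" 2>/dev/null \
        | grep -i "$pkg"
    done
```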

rsyslog & systemd

The ancient Berkeley syslog is a functionally impoverished logging mechanism, but the protocol is well understood and widely supported. You can use a modern version of the daemon (Rainer Gerhards’ rsyslog or syslog-ng, for example) and work around the shortcomings of the protocol itself.

I’ve been working with a Red Hat Enterprise Linux version 7 spin-up, and since systemd is basically a Red Hat product it should come as no surprise that RHEL7 thoroughly embeds systemd.

Here’s a section of the rsyslog documentation that describes how it copes with the systemd journal:

Some versions of systemd journal have problems with database corruption, which leads to the journal to return the same data endlessly in a tight loop. This results in massive message duplication inside rsyslog probably resulting in a denial-of-service when the system resources get exhausted. This can be somewhat mitigated by using proper rate-limiters, but even then there are spikes of old data which are endlessly repeated. By default, ratelimiting is activated and permits to process 20,000 messages within 10 minutes, what should be well enough for most use cases.
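Those rate-limiter knobs live on rsyslog’s journal input module. A sketch of what tuning them looks like (parameter names from the imjournal documentation; the values shown are the defaults quoted above):

```
# /etc/rsyslog.conf -- imjournal with an explicit rate limiter:
# at most 20,000 messages per 600-second window.
module(load="imjournal"
       Ratelimit.Interval="600"
       Ratelimit.Burst="20000")
```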

Traceroute vs Tracert

Van Jacobson’s traceroute utility is not the same thing as Windows tracert, and the MS-Windows tool is probably more academically correct. The version of traceroute that is included with most linux and BSD operating systems can do both kinds of tracing, but does the Van Jacobson UDP-style trace by default (use traceroute -I to get the Windows-style ICMP trace).

People have occasionally given routers silly names to produce amusing traces.

RMS is online

Richard Stallman finally figured out a way to get online that was ideologically acceptable.

…he now connects to websites from his own computer – via Tor and using a free software browser. Previously, he used a complicated workaround to more or less email webpages to himself. The announcement brought a surprised gasp and a round of applause from the 300-plus attendees.

“At one point, I used to believe that the Firefox trademark license was incompatible with free software. I found out I was mistaken – it does allow the redistribution of unmodified copies,” he said.

Terminology: routes and gateways

Originally, back when the ARPAnet merged with SRI, BBN, NSFnet and MERIT to become the Internet, and dinosaurs still roamed the earth, there was no such thing as a “network router”. How can that be? Meh, it’s just semantics. The terminology has evolved.

Internet-connected systems that routed traffic (which was most of them, back in the day) usually ran a program called “gated” (that’s the GATEway Daemon, written at Cornell and later maintained by the Merit GateD Consortium) that routed IP traffic between networks. A lot of those oldtimey networks were connected by UUCP dial-up links that were only live between 11pm and midnight to save money, so the code was written to support poor-quality network links that came and went somewhat randomly.

Any physical network connection that would accept packets bound for some remote network was called a gateway. Gateways were defined by their network addresses. A data structure was created to hold information about which gateways led to which networks – this is called the routing table. The individual entries in that table are created by specifying a set of target IP addresses (using a network address and a mask), a target gateway, and which physical connection to use to reach that target gateway. That terminology is still in use in some commands, such as the “route” command. The individual routing table entries quickly came to be called routes.

At some point somebody at Stanford or MIT came up with the concept of the default gateway. This was a hack that has become a crucially important networking concept today. No matter what kind of OS they were running, network-connected computers already had routing tables that held networks, masks, and gateways – so a special “fake network” was defined for the purpose of putting a default gateway into the existing tables. It has an address/mask pair that makes no sense at all – 0.0.0.0/0.0.0.0 – and this is intentional, so the fake network entry can’t possibly interfere with any real networks.

The network stacks of all modern systems (post-1979) will look for a route to a target address, and if they don’t find one, they will use the route defined by the 0.0.0.0/0.0.0.0 routing table entry. It’s a wild swing, the Hail Mary pass: you just throw it out there and hope for the best.

Since the default route fits the format that is used for all other routes (it just has an impossible ip/netmask pair) it can be carried on any dynamic routing protocol – BGP, EIGRP, OSPF, RIPv2, you name it. This usually causes more problems than it’s worth, so most places do not distribute default routes dynamically. Instead they are configured by DHCP or defined manually, and cannot fluctuate.

Anyway, today, individual people have their own computers, instead of sharing a computer with 500 other people using dumb terminals, so most of our hosts don’t route, so their routing tables are almost empty. They will typically have two entries:

1) the default route, still called the default gateway in many implementations
2) the route to the local net, which is specified by the host’s IP address and mask, and uses the physical ethernet port as the gateway.

A host that has no default route can only talk to machines on networks for which it holds specific routes.
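The lookup logic described above is easy to sketch. Here is a toy longest-prefix match in Python (hypothetical addresses; the real kernel data structures are rather more elaborate), showing how the impossible 0.0.0.0/0 entry only wins when nothing else matches:

```python
import ipaddress

# A toy routing table: (destination network, gateway). The 0.0.0.0/0
# entry is the "fake network" described above -- it matches every
# address, but with prefix length 0, so any real route beats it.
routes = [
    (ipaddress.ip_network("192.168.1.0/24"), "direct (local ethernet)"),
    (ipaddress.ip_network("10.0.0.0/8"),     "10.255.255.1"),
    (ipaddress.ip_network("0.0.0.0/0"),      "192.168.1.1"),  # default gateway
]

def next_hop(addr):
    """Longest-prefix match; the default route only wins when nothing else does."""
    dest = ipaddress.ip_address(addr)
    matches = [(net, gw) for net, gw in routes if dest in net]
    if not matches:
        return None  # no route at all (only possible without a default route)
    net, gw = max(matches, key=lambda m: m[0].prefixlen)
    return gw

print(next_hop("192.168.1.77"))  # local net
print(next_hop("10.1.2.3"))      # specific route
print(next_hop("8.8.8.8"))       # falls through to the default route
```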

Multicast-capable hosts (like linux and Windows machines) may also have multicast routes in their routing tables, but that is something you usually only see on servers at this point. It will become more common on end user desktops in the future, though; MacOSX and Ubuntu already have multicast capabilities turned on from the factory.

So today any network-capable widget might have static routes, defined by the system administrators, and those static routes might include a default route. It might also have dynamic routes, learned by communicating over the network with other systems, and those dynamic routes might include a default route. You can still call the target of the default route the default gateway if you wish, or you can call it the default route’s next hop, but most networking pros will just say default route or default gateway interchangeably. We’re a little sloppy with the language.

Oddly, over time computers have become less and less capable of dealing with multiple default routes. The pre-v2 linux kernels handled it effortlessly, but modern linux is just as bad in this respect as Windows.

Language evolves, although not always for the better. I personally have found it advantageous to adopt or at least be fluent in the terms and notations used by the youngest generation of technologists. I try to say folder instead of directory, for instance, because directory now means a backend database accessed by LDAP, instead of an on-disk filesystem data structure. I insist on using only international date notation. And I would like to train myself to pronounce router the same as rooter – which is almost certainly going to be the standard pronunciation before I manage to retire – but I haven’t got that programmed into my wetware yet. And I try to always say route instead of gateway whenever possible. The only time I want to use the word gateway is when I’m specifically talking about the target of a route. It’s not that the term is wrong in all other contexts, it’s just that it’s somewhat sloppy and very old-fashioned; it’s like calling your car a flivver instead of a beater.

Office not so 365

Microsoft’s Azure Cloud service failed at almost exactly midnight last night, taking down hundreds of websites whose operators may have thought that hardware redundancy could magically protect them from sysadmin oopses, as well as users of Xbox Live and Microsoft’s flagship service Office 365.

Viva Zorggroep, a Dutch healthcare organisation with 4,000 employees, said it had also been affected as a consequence of adopting Microsoft’s online apps.

“At this time, our supporting departments such as finance, HR, education, IT et cetera are working with Office 365,” said Dave Thijssen, an IT manager at the company.

“This morning these servers were unresponsive, which means users were not able to log in to Office 365.

“As a result they had no access to email, calendars, or – most importantly – their documents and Office Online applications.

“We also had trouble reporting the outage to our users as most of digital communication – email, Lync, intranet/Sharepoint – was out.”

The outage persisted for over five hours for some customers and apparently there are still latency issues at this time. This is of course a violation of the Service Level Agreement… so you can keep a nickel or two of your monthly rent, I bet.

How to hurt yourself with EIGRP

As long as all your routing nodes are Cisco branded, EIGRP (Cisco’s proprietary routing protocol) is very easy to implement. You pretty much just turn it on and it works, like the old AppleTalk/PhoneNET networks in the pre-OSX days.

But if you have a machine that’s all loaded up with static routes, and you accidentally redistribute them back to the machine the routes point to, the network gets pretty loopy. Little network geek joke there. Round and round she goes, and where she stops nobody knows.
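For the curious, the foot-gun looks roughly like this in IOS (a hypothetical sketch with documentation addresses, not a config you should copy):

```
! Static routes pointing at a neighbor...
ip route 10.20.0.0 255.255.0.0 192.0.2.2
! ...then handed right back to that neighbor via EIGRP:
router eigrp 100
 network 192.0.2.0 0.0.0.255
 redistribute static metric 10000 100 255 1 1500
```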

DIY Ground-based Ion Cannon

Hobbit’s netcat can be used to vomit forth network traffic as fast as your machine can generate it. We don’t need no steenkin’ LOIC!

Anyway, I needed to test a WAN pipe to see if Comcast was delivering the bandwidth we’re paying for – we’re supposed to have a 200 Mbps link to Boston.

[root@monster ~]# yes | nc -4 -u -v -v remotehost.boston.com 9

The yes command just screams “yes!” incessantly, like a teenage boy’s dream girlfriend. We pipe the output to netcat, and force it to use UDP and IPv4 to send all the yes traffic to a host in Boston. UDP port 9 is the “discard” service, of course, so the machine at the other end just throws the traffic away. We already constantly monitor all the routing nodes in the path so we can see and graph what happens to the packets in real time.

Turns out the host can generate 80Mbps, sustainable indefinitely. That goes into the 200Mbps Comcast pipe… and only 4Mbps comes out the other end! Thanks, netcat! Time to call Comcast!
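Before blaming the pipe, by the way, it’s worth checking what the host itself can generate. A quick local sketch that leaves the network out of it entirely:

```shell
# Time how long this box takes to generate 100 MB of yes-traffic
# locally; divide to get the ceiling on what netcat could ever send.
time sh -c 'yes | head -c 100000000 > /dev/null'
```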

Don’t do this if you aren’t ready to deal with the repercussions of completely smashing your network. Saturating interfaces, routers and pipes will severely impact normal business routines, and should be saved as a last resort.

“Man Ass”

Unix-derived operating systems have a tradition of making commands short and easily typed regardless of social conventions.

So, in order to consult the manual page for the Autonomous System Scanner, you would type “man ass” at the command line. People involved with AS work would not find this remarkably odd or offensive – we’ve already got jobs to do, that don’t involve complaining about other people’s sense of propriety.

However, if one creates a site that automatically generates HTML-formatted web pages from the man pages of the Ubuntu V13.04 linux distribution, popularly called Raring Ringtail, one ends up hosting a page describing “raring man ass”.

The Internet being what it is, such a page may have unexpected effects on your google analytics results…

STOP USING ACTIVEX.

One of the horrors remaining from the browser wars of the late 90s is Microsoft’s “ActiveX” technology. ActiveX, not DirectX, although maybe the latter needs to die too.

ActiveX in browsers is based on the idea that your computer should be able to download and execute completely random binary images from the Internet without your permission. What a great basic architecture, huh? It was created because Microsoft’s implementations of COM and OLE technologies were so unnecessarily complex and fundamentally user-hostile that nobody sane wanted to use them. Microsoft needed an alternative, one that could be integrated with the web, since they wanted to crush Netscape and take over the Internet. Browser technology was critically important to them and ActiveX was a way to prevent the creation of a level browser playing field based on shared standards.

To give a more generous interpretation of the same events, Microsoft was faced with a desire to provide a richer web experience to their customers and an inability to deliver their vision using existing web standards. ActiveX was an early attempt to work around the inadequacy of HTML, and while it had many issues (security being a big one, and lack of support for non-Intel platforms another) Microsoft has worked continuously and diligently to remediate those issues and support current and former users of their products.

Personally I’m completely happy with either of those interpretations of the events surrounding the birth of ActiveX. Who cares? Those bodies are all buried now… or at least they should be. NO WAIT. ActiveX is still stinking up the room!

If you use ActiveX in your websites, or allow your browser to execute ActiveX controls, you are part of the problem. Please, I’m begging you, for the love of God, stop it! Just let this hideous thing die, will you?

There’s nothing that ActiveX provides that can’t be provided using current web standards and technologies. You don’t have to keep hurting yourself, and your readership. Just stop already.

Whenever you purchase any software with a web server in it, or sign up for any service that has a web interface, you need to routinely insist that the product you are buying must be usable with any browser, not merely Microsoft Internet Explorer with ActiveX enabled running on 32-bit Microsoft Windows on an x86 chipset. Make the seller put that in writing, so you don’t get stuck supporting ActiveX against your own will. It’s a shame you have to do this – you don’t have to specify in writing that there will be no incontinent rabid monkeys in the back seat when you purchase a car – but it’s necessary. ActiveX must be destroyed.

Setting default gateway on Cisco 2960 switches

Since The Dawn Of Time ™ it’s been possible for a networked device to have a default route. Way back then, before our beards turned thick and grey, all routers were called “gateways” so the default route was called a default gateway in those ancient times.

The purpose of the default route is to provide a last ditch option when the device does not know what to do. Basically, whenever a networked device doesn’t know where to send some data, it can do the equivalent of a hail mary pass, and just chuck it blindly at a mysterious place where hopefully there will be a router or modem of some sort which is part of the global Internet. This is actually how the vast majority of Internet traffic is handled, believe it or not; PCs, Macs and webservers typically don’t know anything about how to reach other things on the Internet. The router that sits at the end of their default route handles it for them.

The Cisco 2960 is a commodity network switch that has recently been given some routing capabilities by a software update. They are quite commonplace; there’s a couple stacks of them around my job site, hanging off the larger Nexus fabrics.

The 2960 has brought some fresh confusion to the terminology, because for reasons unknown Cisco has provided these three commands:

ip default-gateway (when IP routing is disabled)
ip default-network (when IP routing is enabled)
ip route 0.0.0.0 0.0.0.0 (when load balancing across multiple routes is enabled)

To an experienced networking professional, those are all the same thing. If I say “hey, Melvin, set route zero mask zero on your box to point to the core12 router” it means the same as if I say “Melvin, you dolt, your default gateway needs to be core12” or even “the default net should be core12, Melvin!” So this is a remarkably non-intuitive set of configuration options here.

“So what” you say, with a Cisco router you just use the tab-completion and question-mark help features of the command line to learn what to do, right? Who needs documentation, Cisco rocks. Er, except in the current version of the software there’s no help text at all for ip default-gateway, and you can’t use ip default-network until routing is enabled, and it’ll accept ip routes to 0.0.0.0 without using them as a default. So, not so much. Thankfully Keith Barker has a more helpful post than mine, if you haven’t already figured out what you need from this one.
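For reference, here is roughly what each spelling looks like in practice (a sketch with hypothetical documentation addresses; check your IOS version before trusting any of it):

```
! Layer-2 mode (ip routing disabled): management traffic only
ip default-gateway 192.0.2.1

! Layer-3 mode: enable routing, then mark a classful candidate default network
ip routing
ip default-network 192.0.2.0

! Layer-3 mode: the route-zero-mask-zero form
ip routing
ip route 0.0.0.0 0.0.0.0 192.0.2.1
```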

MPLS is back up, Cisco WIC at fault

Verizon wasn’t calling us back or being helpful for the first 24 hours while our network was down, so we started yelling at them. After three hours of this, continuously on the phone with their support group, and working through four “escalations”, they eventually gave us some useful attention. With their help we determined that (unusually enough) the problem was not in Verizon’s equipment, although it’s not unlikely that the whole sequence of events started in their network. Embarrassing situation all around, really. Verizon did a great job once they got started, and from their point of view our system was the one failing and not theirs, but you still shouldn’t have to harangue people to get them to fulfill the most basic requirements of customer service.

Anyway, the MPLS circuit terminates at a Cisco 2811 router with 4 FastEthernet interfaces. The 2-port FastEthernet WAN Interface Card on that system was reporting completely false packet counts and diagnostics – pretending to work perfectly while actually generating no outgoing data packets at all, and ignoring all incoming packets. There was literally no way to diagnose this without using physical loopbacks and other caveman tricks, since rebooting the router during business hours would wreak even more havoc.

Shutting down the router, pulling the power cord, reseating the WIC and restarting everything fixed it. But of course we had to fly Jay from Philly to Boston in order to do that, and the network was crippled for 41 hours before the situation resolved at right about 5AM. Three engineers working 19 hour shifts is not fun for people over 30!

Since clearly this WIC is unreliable we moved the line to a dedicated port and set up a hot spare router next to the problem child.