Don’t be a .local yokel

Wikipedia has a nice technical write-up that explains why you should never, ever use the .local suffix the way Microsoft has frequently recommended.

But I like this politically incorrect version better:

Microsoft: “Gee, nobody is using the .local piece of the globally shared Internet namespace, so let’s tell all our customers that it’s best practice to use it for our totally super cool version of Kerberized LDAP service called Active Directory!”

Novell: “Oh noes, Microsoft has made an inferior competitor to our flagship technology! It’ll probably destroy our market advantage just like their inferior networking stack did!”

Linux/Unix: “Oh noes, when somebody attaches the new Microsoft technology to an existing mature standards-based network, Kerberos breaks!”

Microsoft: “HA HA HA HA HA HA HA we are totally following the standard, lusers!”

Linux/Unix: “grumble whine we will patch Kerberos even though we don’t agree.”

Microsoft: “whatevs. Did you notice we broke your DNS too? :)”

Apple: “Hey, IETF, we have this cool new zeroconf technology. We want to reserve the .local namespace for it.”

IETF: “OK, sure, you’ve filled out all the forms and attended all the meetings and there’s two independent implementations so you’ve done everything correctly. We have no valid reason to deny this allocation.”

Novell: “Hey, we were using SLP already, what did you just do?”

Apple: “Oh, whoopsie, did we just eat your lunch? HA HA HA HA HA”

Microsoft: “Hey, what just happened?”

Apple: “HA HA HA HA HA HA HA HA HA HA HA RFC6762, lusers!”

Linux/Unix: “grumble mumble whatevs. We can do mDNS.”

Microsoft customers: “OH NOES WE ARE SCREWZ0RRED”

Microsoft: “Meh, you didn’t really want Apple products on your networks anyway.”

:TEN YEARS LATER:

Microsoft customers: “How much would it cost to fix this network?”

Microsoft: “What, were you talking to us? Everything’s fine here. Windows 10 forever!”

Query all non-subscribed RHEL7 repos at once

The old Red Hat Network was simple and easy to use. The RHN website presented a list of systems in your web browser, with counts of outstanding patches and outdated packages. You could click on a specific system name and do various things like subscribe to specific repositories (channels) etc.

The current Red Hat Network is a glittering javascript tour-de-force that multiplies the number of clicks and the amount of specialized knowledge you will need to manage your systems. You can pay extra for add-on capabilities such as the ability to select groups of systems and apply a set of operations to all of them, which is almost certainly necessary if you have a large number of systems. It’s a sad travesty of the much-maligned system it replaced.

If you’re completely entangled in the new RHN with your Red Hat Enterprise Linux 7 systems (by which I mean that you haven’t managed to exit the Red Hat ecosystem for a more cost-effective infrastructure yet) you might want to do something like figure out which of the various poorly named repos (such as -extras, -optional, and -supplementary) contains some particular package you want.

Command line to the rescue! Ignore all RHN’s useless beauty and use ugly, reliable GNU awk. This, for example, finds the repo where the git-daemon package has been hidden away.

subscription-manager repos --list | gawk '/^Repo ID/{print "yum --showduplicates list available --disablerepo=\"*\" --enablerepo=" $3}' | bash | grep -i git-daemon
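If piping machine-generated commands straight into bash makes you nervous (it should), drop the final “| bash” and audit the output first. Here’s a sketch using canned sample output, since the real repo IDs depend on your subscriptions; plain awk works as well as gawk here:

```shell
# Simulated "subscription-manager repos --list" output; real Repo IDs
# will vary with your subscriptions.
sample='Repo ID:   rhel-7-server-rpms
Repo ID:   rhel-7-server-optional-rpms'

# Same awk as the one-liner above, minus the "| bash": just print the
# yum commands that would be executed, one per repo.
printf '%s\n' "$sample" |
  awk '/^Repo ID/{print "yum --showduplicates list available --disablerepo=\"*\" --enablerepo=" $3}'
```

Once the printed commands look sane, add the “| bash” back on.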

After several minutes (there’s a lot of network traffic involved) you’ll find that versions of git-daemon are in five different repos.

git19-git-daemon.x86_64 1.9.4-2.el7 rhel-server-rhscl-7-eus-rpms
git19-git-daemon.x86_64 1.9.4-3.el7 rhel-server-rhscl-7-eus-rpms
git19-git-daemon.x86_64 1.9.4-3.el7.1 rhel-server-rhscl-7-eus-rpms
git-daemon.x86_64 1.8.3.1-5.el7 rhel-7-server-optional-fastrack-rpms
git-daemon.x86_64 1.8.3.1-4.el7 rhel-7-server-optional-rpms
git-daemon.x86_64 1.8.3.1-5.el7 rhel-7-server-optional-rpms
git-daemon.x86_64 1.8.3.1-6.el7 rhel-7-server-optional-rpms
git19-git-daemon.x86_64 1.9.4-2.el7 rhel-server-rhscl-7-rpms
git19-git-daemon.x86_64 1.9.4-3.el7 rhel-server-rhscl-7-rpms
git19-git-daemon.x86_64 1.9.4-3.el7.1 rhel-server-rhscl-7-rpms
git-daemon.x86_64 1.8.3.1-5.el7 rhel-7-server-optional-beta-rpms

So, you query the Red Hat Package Manager, rpm, to find out what version of git you have.

rpm -q git
git-1.8.3.1-6.el7.x86_64

Since 1.8.3.1-6.el7 matches the latest version of git-daemon available from the rhel-7-server-optional-rpms repository, that’s the one you need to add in order to load git-daemon.

subscription-manager repos --enable rhel-7-server-optional-rpms
yum install git-daemon

This process is much easier than using the Red Hat Network web gui, and requires less specialized knowledge. Which is pretty sad, considering how arcane these incantations are.

rsyslog & systemd

The ancient Berkeley syslog is a functionally impoverished logging mechanism, but the protocol is well understood and widely supported. You can use a modern version of the daemon (Rainer Gerhards’ rsyslog or syslog-ng, for example) and work around the shortcomings of the protocol itself.

I’ve been working with a Red Hat Enterprise Linux version 7 spin-up, and since systemd is basically a Red Hat product it should come as no surprise that RHEL7 thoroughly embeds systemd.

Here’s a section of the documentation that describes how the error logging works:

Some versions of systemd journal have problems with database corruption, which leads to the journal to return the same data endlessly in a tight loop. This results in massive message duplication inside rsyslog probably resulting in a denial-of-service when the system resources get exhausted. This can be somewhat mitigated by using proper rate-limiters, but even then there are spikes of old data which are endlessly repeated. By default, ratelimiting is activated and permits to process 20,000 messages within 10 minutes, what should be well enough for most use cases.
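The quoted defaults (20,000 messages per 10 minutes) can be pinned down explicitly in rsyslog.conf. A sketch of the imjournal rate-limiter settings; the parameter names come from the rsyslog imjournal module documentation, so verify them against the version you’re actually running:

```
# Spell out the quoted defaults: 20,000 journal messages per 600 seconds
module(load="imjournal"
       Ratelimit.Interval="600"     # window, in seconds (10 minutes)
       Ratelimit.Burst="20000")     # messages allowed per window
```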

annoying git

I’ve been installing git on some corporate servers with the idea of converting existing CVS and ad-hoc code management systems into something reasonably fast and modern.

It’s been somewhat tedious and painful, but supposedly once I’m done the installation will be stable and maintainable. For an enterprise SCM that’s a lot more important than ease of installation, at least in theory. (I ran OpenLDAP for a decade or more, so I can appreciate the value of putting all the pain up front.)

Today’s annoyance is that the gitolite documentation and web site refer to a “hosting user” but the toolset and other web sites describing gitolite installation talk about an “admin user”. After wasting several hours with Google trying to find out exactly what the difference was, I created a new user account for the admin user and executed the commands – at which point it became immediately obvious that THOSE ARE THE SAME DAMN THING.

Curse you, gitolite. I WANTED US TO BE FRIENDS.

Terminology: routes and gateways

Originally, back when the ARPAnet merged with SRI, BBN, NSFnet and MERIT to become the Internet, and dinosaurs still roamed the earth, there was no such thing as a “network router”. How can that be? Meh, it’s just semantics. The terminology has evolved.

Internet-connected systems that routed traffic (which was most of them, back in the day) usually ran a program called “gated” (that’s the GATEway Daemon, developed at Cornell and later shepherded by the Merit GateD Consortium) that routed IP traffic between networks. A lot of those oldtimey networks were connected by UUCP dial-up links that were only live between 11pm and midnight to save money, so the code was written to support poor quality network links that came and went somewhat randomly.

Any physical network connection that would accept packets bound for some remote network was called a gateway. Gateways were defined by their network addresses. A data structure was created to hold information about which gateways led to which networks – this is called the routing table. The individual entries in that table are created by specifying a set of target IP addresses (using a network address and a mask), a target gateway, and which physical connection to use to reach that target gateway. That terminology is still in use in some commands, such as the “route” command. The individual routing table entries quickly came to be called routes.

At some point somebody at Stanford or MIT came up with the concept of the default gateway. This was a hack that has become a crucially important networking concept today. No matter what kind of OS they were running, network-connected computers already had routing tables that held networks, masks, and gateways – so a special “fake network” was defined for the purpose of putting a default gateway into the existing tables. It has an address/mask pair that makes no sense at all – 0.0.0.0/0.0.0.0 – this is intentional, so the fake network entry can’t possibly interfere with any real networks.

The network stacks of all modern systems (post 1979) will look for a route to a target address, and if they don’t find one, they will use the route defined by the 0.0.0.0/0.0.0.0 routing table entry. It’s a wild swing, the hail mary pass, you just throw it out there and hope for the best.

Since the default route fits the format that is used for all other routes (it just has an impossible ip/netmask pair) it can be carried on any dynamic routing protocol – BGP, EIGRP, OSPF, RIPv2, you name it. This usually causes more problems than it’s worth, so most places do not distribute default routes dynamically. Instead they are configured by DHCP or defined manually, and cannot fluctuate.

Anyway, today, individual people have their own computers, instead of sharing a computer with 500 other people using dumb terminals, so most of our hosts don’t route, so their routing tables are almost empty. They will typically have two entries:

1) the default route, still called the default gateway in many implementations
2) the route to the local net, which is specified by the host’s IP address and mask, and uses the physical ethernet port as the gateway.

A host that has no default route can only talk to machines on networks for which it holds specific routes.
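You can see both entries with the route command or its modern successor. Below is a sketch built around a canned, hypothetical “ip route” listing (the addresses and interface name are invented), plus a one-liner that pulls out the default gateway:

```shell
# Hypothetical "ip route show" output for a single-NIC desktop:
rt='default via 192.168.1.1 dev eth0
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.42'

# Entry 1 is the default route (the 0.0.0.0/0 fake network, printed as
# the word "default"); entry 2 is the local-net route. Extract the
# default gateway from the first entry:
printf '%s\n' "$rt" | awk '/^default/{print $3}'
# prints 192.168.1.1
```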

Multicast-capable hosts (like linux and Windows machines) may also have multicast routes in their routing tables, but that is something you usually only see on servers at this point. It will become more common on end user desktops in the future, though; MacOSX and Ubuntu already have multicast capabilities turned on from the factory.

So today any network-capable widget might have static routes, defined by the system administrators, and those static routes might include a default route. It might also have dynamic routes, learned by communicating over the network with other systems, and those dynamic routes might include a default route. You can still call the target of the default route the default gateway if you wish, or you can call it the default route’s next hop, but most networking pros will just say default route or default gateway interchangeably. We’re a little sloppy with the language.

Oddly, over time computers have become less and less capable of dealing with multiple default routes. The pre-v2 linux kernels handled it effortlessly, but modern linux is just as bad in this respect as Windows.

Language evolves, although not always for the better. I personally have found it advantageous to adopt or at least be fluent in the terms and notations used by the youngest generation of technologists. I try to say folder instead of directory, for instance, because directory now means a backend database accessed by LDAP, instead of an on-disk filesystem data structure. I insist on using only international date notation. And I would like to train myself to pronounce router the same as rooter – which is almost certainly going to be the standard pronunciation before I manage to retire – but I haven’t got that programmed into my wetware yet. And I try to always say route instead of gateway whenever possible. The only time I want to use the word gateway is when I’m specifically talking about the target of a route. It’s not that the term is wrong in all other contexts, it’s just that it’s somewhat sloppy and very old-fashioned; it’s like calling your car a flivver instead of a beater.

DIY Ground-based Ion Cannon

Hobbit’s netcat can be used to vomit forth network traffic as fast as your machine can generate it. We don’t need no steenkin’ LOIC!

Anyway, I needed to test a WAN pipe to see if Comcast was delivering the bandwidth we’re paying for – we’re supposed to have a 200 Mbps link to Boston.

[root@monster ~]# yes | nc -4 -u -v -v remotehost.boston.com 9

The yes command just screams “yes!” incessantly, like a teenage boy’s dream girlfriend. We pipe the output to netcat, and force it to use UDP and IPv4 to send all the yes traffic to a host in Boston. UDP port 9 is the “discard” service, of course, so the machine at the other end just throws the traffic away. We already constantly monitor all the routing nodes in the path so we can see and graph what happens to the packets in real time.

Turns out the host can generate 80Mbps, sustainable indefinitely. That goes into the 200Mbps Comcast pipe… and only 4Mbps comes out the other end! Thanks, netcat! Time to call Comcast!

Don’t do this if you aren’t ready to deal with the repercussions of completely smashing your network. Saturating interfaces, routers and pipes will severely impact normal business routines, and should be saved as a last resort.

“Man Ass”

Unix-derived operating systems have a tradition of making commands short and easily typed regardless of social conventions.

So, in order to consult the manual page for the Autonomous System Scanner, you would type “man ass” at the command line. People involved with AS work would not find this remarkably odd or offensive – we’ve already got jobs to do, that don’t involve complaining about other people’s sense of propriety.

However, if one creates a site that automatically generates HTML-formatted web pages from the man pages of the Ubuntu V13.04 linux distribution, popularly called Raring Ringtail, one ends up hosting a page describing “raring man ass”.

The Internet being what it is, such a page may have unexpected effects on your google analytics results…

Automotive Grade Linux might save your life

A standard Linux-based software platform for the connected car would be huge, and at this point could even be a life-saving development.

Automotive Grade Linux is a collaborative open source project developing a common, Linux-based software stack for the connected car. The community’s first open source software release is now available for download, bringing the industry one step closer to realizing the benefits of open automotive innovation.

Read the press release or visit the AGL Wiki to learn more and download the code.

Recent Windows-based dashboards (for example the Nissan Leaf) are an abomination only slightly less dangerous than even-more-hideous automaker proprietary dashboards (for example the Toyota Prius Plug-in). With all the data that exists about the dangers of distracted driving, and state legislatures passing draconian laws against texting behind the wheel, why is it legal for auto vendors to create these potentially lethal user interfaces? How can a pure touch-screen interface, that must be visually examined to be used, possibly be less dangerous than texting while driving? I can drop or ignore a smartphone, or just turn the bloody thing off, but I am forced to interact with my dashboard!

A step in the right direction is to open up the dashboard software ecosystem, so sane designs have an opportunity to compete for driver approval. After all, you can’t expect the same people who designed backwards fake stickshifts (as commonly found in Nissans and Toyotas) to create a good user interface; these people have already demonstrated that they aren’t capable of understanding the task, much less reaching the goal. But a robust community of Open Source hackers would allow the computerized automotive dashboard to progress in the same way that automobile clubs, hot rod enthusiasts, and similar communities have driven innovation historically in the rest of the car industry – by finding more alternatives, and demonstrating them in action.

For every good design there will probably need to be a lot of bad ones. Let’s stop limiting ourselves to the bad (are you listening, Ford?) and start working on a dashboard that’s less likely to kill people.

Fix uart boot errors on M1000e blade chassis

Somebody else figured it out in 2009, and I’m late to the party.

Basically, if you are running Red Hat Enterprise Linux (or one of its clone siblings) on a Dell M600 blade, you’ll need to modify the default BIOS settings in a non-intuitive way or you’ll get an error on every boot-up.

IRQ handler type mismatch for IRQ 12

Call Trace:
[] setup_irq+0x1b7/0x1cf
[] serial8250_interrupt+0x0/0xfe
[] request_irq+0xb0/0xd6
[] serial8250_startup+0x43d/0x5dc
[] uart_startup+0x76/0x16c
[] uart_open+0x19e/0x427
[] tty_open+0x1e8/0x3b0
[] chrdev_open+0x14d/0x183
[] open_namei+0x2be/0x6ba
[] chrdev_open+0x0/0x183
[] __dentry_open+0xd9/0x1dc
[] do_filp_open+0x2a/0x38
[] do_sys_open+0x44/0xbe
[] tracesys+0xd5/0xdf

I knew this was a serial port issue from the second and fifth lines of the trace, but I couldn’t figure out why it was involving IRQ 12, which is normally used for SCSI cards or PS/2 mice.

bochs is still the box

I’m surprised and pleased to learn that bochs still exists and is still being actively developed and improved. Lots of people said it would die once hardware-accelerated virtualization became commonplace, since pure software emulation of a PC is so much slower than using a hypervisor. But not only is bochs still popular, it’s got competition!

New MariaDB & Linux kernel releases

The Linux 3.14 kernel has yet another process scheduler, a new network packet scheduler intended to combat bufferbloat, kernel address space layout randomization, and the usual plethora of other improvements.

MariaDB 10 has speed improvements, parallel replication, sharding, and NoSQL support. Looks like Oracle’s MySQL is truly irrelevant at this point, despite Sun having paid Monty one billion US dollars for it back in 2008.

A moment of extreme computer geekery

Almost certainly of no interest to anyone… well, maybe DNS experts who have occasional need of Perl. Net::DNS::RR::CNAME->set_rrsort_func is pretty incredibly obscure, though.


#!/usr/bin/perl -w -T -W
#
#  DNS zone transfer and output CNAMEs sorted by target host
#  Charlie Brooks 2014-01-08

use strict;
use Net::DNS;
use Net::DNS qw(rrsort);  # why don't I get this automatically?

my @domains = qw/typinganimal.net egbt.org hell.com/;

# Use system defaults from resolv.conf to find nameservers
my $res = Net::DNS::Resolver->new;

foreach my $namespace (@domains) {

  # do a zone transfer, loading resource records into an array
  # (axfr is standard BIND style, not djbdns style)
  my @zone = $res->axfr($namespace);

  # Red Hat's perl-Net-DNS-0.59-3.el5 package doesn't seem
  # to have a usable rrsort for CNAMEs (it tries to do a
  # "<=>" flying saucer instead of "cmp") and the examples
  # in the doco for custom sort methods flat out don't work,
  # but I flailed around until I found a way to do it.  It's
  # weirdly simple if you stumble upon the magic incantation.

  # dumping the CNAMEs sorted by target requires a custom sort function
  Net::DNS::RR::CNAME->set_rrsort_func('cnamet',
      sub { ($a, $b) = ($Net::DNS::a, $Net::DNS::b);
            $a->{'cname'} cmp $b->{'cname'} });

  foreach my $cname (rrsort("CNAME", "cnamet", @zone)) {
    $cname->print;
  }
}
exit;

A very tiny apocalypse

Oracle’s finally going to make good on their threat to stop allowing unsigned Java code to run from web browsers.

This may wreak great havoc in the world of lame web-launched java-based applications. Such as those infesting governments, hospitals and large corporations who aren’t savvy enough to use LAMP for their web development.

Good software will not be in any way impacted by this event.

SSL/TLS certificates, formats and file types

This stuff is a stack. You can’t skip the middle part and expect to understand any of it.

SSL (Secure Sockets Layer) is a type of secure communications channel that you can push anything you want through. It is mostly used by web browsers to talk to web servers, but it has infinite other uses. It was invented so that you could use a credit card online, and that is still the #1 use for it.

When a web address starts with “HTTPS” instead of “HTTP” it’s using SSL. You might see a little padlock icon in your browser when you go there.

SSL and TLS (Transport Layer Security) are pretty much the same thing. Everything I say here about SSL also applies to TLS.

PKI really means Paired Key Infrastructure even though officially the “P” stands for “Public”. I use lots of different PKIs, you probably do too. SSH uses one, SSL uses a different one, etc.

X.509 is a PKI standard for using linked pairs of cryptographic keys to ensure two separate things: #1, that you are talking to exactly who you think you are talking to, not some random criminal; and #2, that nobody can listen in on the conversation.

The security and reliability of X.509 depend on the non-existent virtuousness of commercial Certificate Authorities, so it’s not as great as you could hope, but good enough for buying stuff on Amazon or protecting PHI. The NSA and Unit 8200 are totally inside it all the time, but they don’t care about your Amazon wish list.

X.509 specifies only how key pairs are used, and not how they are stored on your disk drive. There are many formats for storage, but we have to stack up some more knowledge before we can talk intelligently about that.

As usual in paired key crypto, one key is chosen to be “public” (doesn’t matter which one) and one key is chosen to be “private”. Data encrypted with one can only be decrypted with the other, and vice versa. Bigger keys are better. Most people aren’t using big enough keys.

X.509 adds the extra wrinkle that the key chosen to be public will be time-stamped and signed by a Certificate Authority. A signed, stamped public key is called a certificate. The time stamp is there so CAs can charge absurdly high fees when certificates expire; it serves no other real purpose and don’t let them tell you different.

Don’t worry about what “signed” means. All that matters is that your web browser can always tell if your certificate was signed by a real commercial CA, or by your employer’s private CA, or is self-signed, or was signed by some random unknown system that might be criminal, or is expired.

When certificates are passed around from one system to another on the wires (like, from Amazon to your web browser, or in a Certificate Signing Request submitted to a CA, or whatever) they use Abstract Syntax Notation One’s Distinguished Encoding Rules (ASN.1 DER). If you really want to understand everything about standardized arbitrary data structure representation go to Wikipedia and start reading at ASN.1, which is sort of the ground rules everything else rests on. But you don’t really need to know the air:fuel mixture in your car is about 14.7:1 to fix a carburetor, and you won’t need to know ASN.1 or DER to build a great web service.

Major point here: When you say “SSL certificate” you are saying “X.509 ASN.1 DER timestamped signed public key”, in the same way that when you say “living woman” you are saying “breathing mammalian human female person”. You don’t add any information by saying DER or X.509, those are already known when you say “SSL certificate”. Which is why I get annoyed whenever I read vendor documentation to see what format they want their certs in, because they always say something useless like “DER” or “X.509”. I already knew that!

Certificates and keys can be stored on disk in a bewildering number of different formats. Tomcat/Java, Apache, IIS/AD, and HP-UX’s webserver all use different formats with mostly stupid names following no particularly obvious pattern.

I’m only going to talk about the storage formats you might actually need to use, and I’m going to ignore lots of details.

PEM (used by lots of stuff) is the easiest way to store certs and keys and the least secure. You have to be super careful when you use PEM; making minor mistakes with file permissions or user privileges can be equivalent to leaving the root password written on a postit stuck to the side of your keyboard. Poorly written software may require you to put both the (public) certificate and the (private) key in a single PEM file which is unnecessarily dangerous. There are no non-printable characters in a PEM cert, it’s all human-readable gibberish that you can cut and paste.

PKCS#12 (Public Key Cryptography Standard number 12, the “Personal Information Exchange Syntax Standard”) is a password-protected format that can hold multiple sets of both (public) certs and (private) keys. The encryption is not marvelously strong so you still have to protect a PKCS#12 file, but it’s strong enough that you sure don’t want to lose the password! It’s a very good format for moving certificates and keys from system to system and used by many Microsoft products.
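Both formats are easy to poke at with openssl. A sketch: generate a throwaway self-signed PEM pair, look at it, then bundle it into a password-protected PKCS#12 file. All the file names, the subject, and the password here are invented; a real certificate would come from a CSR signed by a CA.

```shell
# Throwaway self-signed cert + key in PEM (illustration only)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
        -subj "/CN=example.test" -keyout toy.key -out toy.crt 2>/dev/null

head -1 toy.crt    # PEM is all printable text: -----BEGIN CERTIFICATE-----
openssl x509 -in toy.crt -noout -subject -dates   # decode the certificate

# Bundle cert + private key into one password-protected PKCS#12 file,
# suitable for carrying both to another system
openssl pkcs12 -export -in toy.crt -inkey toy.key \
        -passout pass:hunter2 -out toy.p12
```

Note how the PEM pieces are two world-readable-looking text files, while the .p12 is a single binary blob you can’t use without the password.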

JKS (Java Keystore) is supposedly PKCS#12… but in my experience, using various versions of Tomcat, you have to build your Java keystore with the Java keytool that came with the version of the Java SDK that was used to build your Java application (such as Tomcat) which is a pain in the butt. It’s password-protected, so you need the passphrase used to build it in order to use it. The Java keytool can’t extract the private key to another file but there are plenty of other tools that can, so it’s not like this adds any real extra security, it’s mostly just annoying.

PKCS#7 (Public Key Cryptography Standard number seven, the “Cryptographic Message Syntax Standard”) is used a lot in the deep deep infrastructure. It cannot hold private keys, only certs, but it can hold a “cert chain” of any length, so for example CertX signed by CertZ, plus CertZ signed by some CA, plus the CA cert all in one file. I occasionally need to put certificates into this format for stuff like complex multi-OS LDAP architectures, and CAs use it, but most people will never need to work with it.
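If you do get stuck producing one of these, openssl can wrap certs into PKCS#7 via the slightly odd crl2pkcs7 command. A sketch with invented file names, using a single throwaway self-signed cert as a stand-in for a real chain (a real one would be your cert plus the intermediate and CA certs):

```shell
# Stand-in cert; purely illustrative
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
        -subj "/CN=example.test" -keyout toy.key -out toy.crt 2>/dev/null

# Wrap the cert(s) into a PKCS#7 structure -- no private keys allowed
openssl crl2pkcs7 -nocrl -certfile toy.crt -out toy.p7b

# List what ended up inside
openssl pkcs7 -in toy.p7b -print_certs -noout
```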

<Curmudgeonly Digression> An unfortunate result of Microsoft’s market dominance is that otherwise well-informed people often think that the last four characters of file names are deeply magical. This is because Apple used to have better filesystems than Microsoft (and arguably they still do). Apple filesystems implemented a resource fork as an extension to file metadata; the resource fork allows users, applications or operating systems to mark what program(s) should be used to process a file, so that you can just click on a file created by Excel and it will open in Excel, or whatever. Microsoft made a really crappy lame fake of this capability by creating a list of three-character codes and assigning each one to a piece of software, so that when you click on a file ending in .xls the operating system fires up Excel. If you think about this really deeply, you’ll realize it’s a truly horrible idea that Microsoft’s success has conditioned everyone to believe is reasonable – sort of like the way people used to be conditioned to think it was totally reasonable to test for witchcraft by dunking people in water. Nowadays Microsoft takes this stupidity a step further by hiding the last four characters from the user (unless you change the file viewer settings, which you definitely should), most likely because they are ashamed of the utter boneheadedness of it.
</Curmudgeonly Digression>

So anyway, although file “types” aren’t really types at all, but merely arbitrary strings preceded by dots on the ends of file names, that are used in Microsoft systems to do Dumb Things™, we humans generally use names and labels to encode useful hints to other humans and that’s all very well and good. I always end my perl sources with .pl for example, even though the perl interpreter couldn’t care less. It’s a useful hint to my co-workers about content.

These are the most commonly used file types for X.509:

something.key = PEM format private key for something
something.csr = PEM format “certificate signing request” to submit to a CA
something.crt = PEM format signed certificate

whatever.p7b = PKCS#7 format certificate chain

whatever.p12 = PKCS#12 password-protected keystore
whatever.pfx = either a PKCS#12 keystore or an obsolete Microsoft PFX keystore
tomcat.jks = a Java Keystore, probably for Tomcat, possibly PKCS#12 format

Unfortunately, there are hundreds of exceptions to the common usages – and Network Security Services (née Netscape), which is used in Firefox and HP-UX and lots of other places, can use files with names like cert7.db, secmod.db, key3.db, that use formats I haven’t even bothered to explain (use PEM format to import and export certs and keys into NSS and don’t worry about it).

Here are the takeaways:

#1 Crypto isn’t simple. Every vendor believes they are doing it right and nobody else is, although really they are pretty much all doing it partly wrong… in various different ways.

#2 If you start thinking .cer or .der or .spc means something outside a very limited space, you aren’t doing yourself any favors. File names are poor hints only. Never ask someone for a .DER formatted file, it makes you sound like an idiot.

#3 You can use well known vendor-independent language that does have real meaning – Here’s a list of the PKCS number standards and what they are used for. If you use that language, you can communicate effectively (and also sound like you might know what you’re talking about).

#4 Make sure you thoroughly document any non-standard formats that you’re forced to use by vendors so your co-workers aren’t cursing your name whenever you’re on vacation.

#5 Be fanatical about securing your private keys, and don’t lose the passwords to your keystores.

Sort your /etc/passwd and /etc/shadow files!

It’s very convenient to have your local user accounts sorted by uidNumber, but if you’re running the shadow suite there’s no uidNumber field in /etc/shadow to sort on. Something something something Ted Codd and the horse he rode in on.

This should work on anything with GNU sort, grep and awk and no hoary old NIS nonsense in /etc/passwd. It’s worked on every linux distro I’ve ever used, all the way back to Yggdrasil, although in Ubuntu gawk is not necessarily included by default (which is weird, but easily dealt with using your package manager du jour, e.g. sudo apt-get install gawk).

touch passwd.sorted shadow.sorted
chmod 644 passwd.sorted
chmod 600 shadow.sorted
sort -t: -n -k3,3 /etc/passwd >passwd.sorted
gawk -F: '{system("grep \"^" $1 ":\" /etc/shadow")}' passwd.sorted >shadow.sorted

If you don’t trust my mad gawk skillz (or your own transcription skills) you can crudely check the results with wc, because the number of lines, words and characters will be unchanged by a clean sort.

wc /etc/shadow shadow.sorted
wc /etc/passwd passwd.sorted

After you have carefully checked the output, save off a backup copy of the old files and overwrite them with the sorted ones.

cp -a /etc/passwd /root/passwd.`date -I`
cp -a /etc/shadow /root/shadow.`date -I`
mv passwd.sorted /etc/passwd && mv shadow.sorted /etc/shadow

If you’re running selinux (of course you are, my bright little star!) you need to make sure you reset the file security contexts, right quick.

restorecon -v /etc/passwd
restorecon -v /etc/shadow

Keep in mind that mucking about with primary user authentication sources is not something you should do unless you are an expert (or want to become one). And you’re going to have to be the root superuser to do this, or type “sudo” a whole lot. The consequences of error may be severe! For example, if you have selinux in enforcing mode and you reboot without resetting the security context on /etc/shadow… yeah, good luck with that.

The same procedure can be used for /etc/group and /etc/gshadow, natürlich.
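A sketch of the group version, same trick with the field numbers unchanged since gidNumber is also field 3 (plain awk works here as well as gawk); as with shadow, you’ll need root to read /etc/gshadow:

```shell
# Sort groups by gidNumber (field 3), then pull /etc/gshadow lines into
# the same order -- the same dance as the passwd/shadow pair above
touch group.sorted gshadow.sorted
chmod 644 group.sorted
chmod 600 gshadow.sorted
sort -t: -n -k3,3 /etc/group >group.sorted
awk -F: '{system("grep \"^" $1 ":\" /etc/gshadow")}' group.sorted >gshadow.sorted
```

Check with wc, back up, move into place, and restorecon, exactly as above.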

Red Hat Enterprise Linux 6.4 installs samba4-libs by default

I’m arguing with Red Hat again… the latest downloadable DVD of RHEL6 by default installs part of samba 4, which is supposed to be an unsupported “technology preview” and not a mainline package. In what world does it make sense for your flagship product, for which you sell expensive support contracts, to depend on a chunk of code you decline to support? How is that not bad craziness?

If you try to tear it out with rpm -e you’ll get sssd dependency errors. And ghods, I hate the way RHEL6 and up basically force you to run half-baked name and authentication service caching daemons – my networks worked faster and better without caching, because we actually had a high performance LDAP infrastructure that didn’t need such Microsofty complications. But that’s another rant entirely.

ANYway, if you say OK I will upgrade to Samba 4 to avoid dependency hell, you trigger bug 984727 which Red Hat has set to CLOSED WONTFIX.

Update: Andreas Schneider of Red Hat and the Samba Team has clarified the matter. Since FreeIPA (Red Hat’s Active Directory implementation) and sssd (Red Hat’s new authentication daemon, much like PADL’s PAM and NSS modules only rawer and more oriented toward caching) both require the samba4-libs library in RHEL6, that single package is now officially supported – although version 4 of the Samba Suite is otherwise still a “technology preview”.

dkms patches go live

My fixes for Dell’s Dynamic Kernel Module System made it into their git tree.

Mario’s still reviewing my rewrite of the autoinstall loop, but that’s not actually very important from a functional standpoint. Presumably there will be a fresh release as soon as he’s rejected or accepted it.

I’ve already distributed RPMs at work with the fixed AoE and DKMS packages, so we’re stable at this point.