
Openssh taking minutes to become available, booting takes half an hour ... because your server waits for a few bytes of randomness

Linux

So, your machine now needs minutes to boot before you can ssh in, where it used to take seconds before the Debian Buster update?

Problem

Linux 3.17 (2014-10-05) learnt a new syscall getrandom() that, well, gets bytes from the entropy pool. Glibc learnt about this with 2.25 (2017-02-05), and, two tries and four years after the kernel, OpenSSL used that functionality from release 1.1.1 (2018-09-11). OpenSSH implemented this natively for the 7.8 release (2018-08-24) as well.

Now the getrandom() syscall will block [1] if the kernel can't provide enough entropy. And that's frequently the case during boot. Especially with VMs that have no input devices or IO jitter to source the pseudo random number generator from.

First seen in the wild January 2017

I vividly remember not seeing my Alpine Linux VMs back on the net after the Alpine 3.5 upgrade. That was basically the same issue.

Systemd. Yeah.

Systemd makes this behaviour worse, see issues #4271, #4513 and #10621.
Basically as of now the entropy file saved as /var/lib/systemd/random-seed will not - drumroll - add entropy to the random pool when played back during boot. Actually it will. It will just not be accounted for. So Linux doesn't know. And continues blocking getrandom(). This is obviously different from SysVinit times [2] when /var/lib/urandom/random-seed (that you still have lying around on updated systems) made sure the system carried enough entropy over reboot to continue working right after enough of the system was booted.

#4167 is a re-opened discussion about systemd eating randomness early at boot (hashmaps in PID 1...). Some Debian folks participate in the recent discussion and it is worth reading if you want to learn about the mess that booting a Linux system has become.

While we're talking systemd ... #10676 also means systems will use RDRAND in the future despite Ted Ts'o's warning on RDRAND [Archive.org mirror and mirrored locally as 130905_Ted_Tso_on_RDRAND.pdf, 205kB as Google+ will be discontinued in April 2019].
Update: RDRAND doesn't return random data on pre-Ryzen AMD CPUs (AMD CPU family <23) as per systemd bug #11810. It will always be 0xFFFFFFFFFFFFFFFF (2^64-1). This is a known issue since 2014, see kernel bug #85991.

Debian

Debian is seeing the same issue working up towards the Buster release, e.g. Bug #912087.

The typical issue is:

[    4.428797] EXT4-fs (vda1): mounted filesystem with ordered data mode. Opts: data=ordered
[ 130.970863] random: crng init done

with delays up to tens of minutes on systems with very little external random sources.

This is what it should look like:

[    1.616819] random: fast init done
[    2.299314] random: crng init done

Check dmesg | grep -E "(rng|random)" to see how your systems are doing.

If this is not fully solved before the Buster release, I hope some of the below can end up in the release notes [3].

Solutions

You need to get entropy into the random pool earlier at boot. There are many ways to achieve this and - currently - all require action by the system administrator.
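A quick way to see how starved a system currently is (the proc interface below is available on any recent Linux kernel):

cat /proc/sys/kernel/random/entropy_avail

Values in the low hundreds or below during early boot point at the starvation discussed above; once the pool is initialized this typically sits at a few thousand.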

Kernel boot parameter

From kernel 4.19 (Debian Buster currently runs 4.18 [Update: but will be getting 4.19 before release according to Ben via Mika]) you can set RANDOM_TRUST_CPU at compile time or random.trust_cpu=on on the kernel command line. This will make recent Intel / AMD systems trust RDRAND and fill the entropy pool with it. See the warning from Ted Ts'o linked above.

Update: Since Linux kernel build 4.19.20-1 CONFIG_RANDOM_TRUST_CPU has been enabled by default in Debian.
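In case you want to set the parameter yourself, a minimal sketch for a Debian-style system booting via GRUB (adjust for other boot loaders):

# /etc/default/grub (excerpt): append the parameter to the existing line
GRUB_CMDLINE_LINUX_DEFAULT="quiet random.trust_cpu=on"
# regenerate grub.cfg and reboot (run as root)
update-grub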

Using a TPM

The Trusted Platform Module has an embedded random number generator that can be used. Of course you need to have one on your board for this to be useful. It's a hardware device.

Load the tpm-rng module (ideally from initrd) or compile it into the kernel (config HW_RANDOM_TPM). Now, the kernel does not "trust" the TPM RNG by default, so you need to add

rng_core.default_quality=1000

to the kernel command line. 1000 means "trust", 0 means "don't use". So you can choose any value in between that works for you depending on how much you consider your TPM to be unbugged.
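Put together, a simplified sketch for a Debian-style system booting via GRUB (for loading the module from the initrd consult your initramfs tooling):

# load the TPM RNG driver at boot
echo "tpm-rng" >> /etc/modules
# trust the TPM RNG, /etc/default/grub (excerpt):
GRUB_CMDLINE_LINUX_DEFAULT="quiet rng_core.default_quality=1000"
# regenerate grub.cfg (run as root) and reboot
update-grub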

VirtIO (KVM, QEMU, ...)

For Virtual Machines (VMs) you can forward entropy from the host (that should be running longer than the VMs and have enough entropy) via virtio_rng.

So on the host, you do:

kvm ... -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,bus=pci.0,addr=0x7

and within the VM newer kernels should automatically load virtio_rng and use that.

You can confirm with dmesg as per above.

Or check:

# cat /sys/devices/virtual/misc/hw_random/rng_available
virtio_rng.0
# cat /sys/devices/virtual/misc/hw_random/rng_current
virtio_rng.0
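If you manage your VMs through libvirt rather than a raw kvm command line, the equivalent is an rng device in the guest definition. A sketch ("myguest" is a placeholder domain name):

virsh edit myguest
# add inside the <devices> section:
#   <rng model='virtio'>
#     <backend model='random'>/dev/urandom</backend>
#   </rng>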

Patching systemd

The Fedora bugtracker has a bash / python script that replaces the systemd random seeding with a (better) working one. The script can also serve as a good starting point if you need to script your own solution, e.g. for reading from an entropy provider available within your (secure) network.

Chaoskey

The wonderful Keith Packard and Bdale Garbee have developed a USB dongle, ChaosKey, that supplies entropy to the kernel. Hard- and software are open source.

Jitterentropy_RNG

Kernel 4.2 introduced jitterentropy_rng which will use the jitter in CPU timings to generate randomness.

modprobe jitterentropy_rng

This apparently needs a userspace daemon though (read: design mistake) so

apt install jitterentropy-rngd (available from Buster/testing).

The current version 1.0.8-3 installs nicely on Stretch. dpkg -i is your friend.

But - drumroll - that daemon doesn't seem to use the kernel module at all.

That's where I stopped looking at that solution. At least for now. There are extensive docs if you want to dig into this yourself.

Update: The Linux kernel 5.3 will have an updated jitterentropy_rng as per Commit 4d2fa8b44. This is based on the upstream version 2.1.2 and should be worth another look.

Haveged

apt install haveged

Haveged is a user-space daemon that gathers entropy through the timing jitter any CPU has. It will only run "late" in boot but may still get your openssh back online within seconds and not minutes.

It is also - to the best of my knowledge - not verified at all regarding the quality of randomness it generates. The haveged design and history page provides an interesting read and I wouldn't recommend haveged if you have alternatives. If you have none, haveged is a wonderful solution though as it works reliably. And unverified entropy is better than no entropy. Just forget this is 2018 2019 :-).
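If you go this route, a quick sketch to make sure the daemon runs and to watch the effect (unit name as shipped with the Debian package):

systemctl enable --now haveged
cat /proc/sys/kernel/random/entropy_avail   # should jump to a few thousand shortly after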

early-rng-init-tools

Thorsten Glaser has posted newly developed early-rng-init-tools in a debian-devel thread. He provides packages at http://fish.mirbsd.org/~tg/Debs/dists/sid/wtf/Pkgs/early-rng-init-tools/ .

First he deserves kudos for naming a tool for what it does. This makes it much more easily discoverable than the trend to name things after girlfriends, pets or anime characters. The implementation hooks into the early boot via initrd integration and carries over a seed generated during the previous shutdown. This and some other implementation details are not ideal and there has been quite extensive scrutiny but none that discovered serious issues. Early-rng-init-tools look like a good option for non-RDRAND (~CONFIG_RANDOM_TRUST_CPU) capable platforms.

Linus to the rescue

Luckily, at the end of September 2019, Linus Torvalds was fed up with the entropy starvation issue and the non-conclusive discussions about (mostly) who's at fault and ... started coding.

With the kernel 5.4 release on 25.11.2019 his patch has made it into mainline. He created a try_to_generate_entropy function that uses CPU jitter to generate seed entropy for the PRNG early in boot.

In the merge commit Linus explains:

This is admittedly partly "for discussion". We need to have a way forward for the boot time deadlocks where user space ends up waiting for more entropy, but no entropy is forthcoming because the system is entirely idle just waiting for something to happen.

While this was triggered by what is arguably a user space bug with GDM/gnome-session asking for secure randomness during early boot, when they didn't even need any such truly secure thing, the issue ends up being that our "getrandom()" interface is prone to that kind of confusion, because people don't think very hard about whether they want to block for sufficient amounts of entropy.

The approach here-in is to decide to not just passively wait for entropy to happen, but to start actively collecting it if it is missing. This is not necessarily always possible, but if the architecture has a CPU cycle counter, there is a fair amount of noise in the exact timings of reasonably complex loads.

We may end up tweaking the load and the entropy estimates, but this should be at least a reasonable starting point.

So once this kernel is available in your distribution, you should be safe from entropy starvation at boot on any platform that has hardware timers (I haven't encountered one that does not in the last decade).

Ted Ts'o reviewed the approach and was fine with it, and Ahmed Dawish did some testing of the quality of the randomness generated, which seems fine, too.

Updates

14.01.2019

Stefan Fritsch, the Apache2 maintainer in Debian, OpenBSD developer and a former Debian security team member stumbled over the systemd issue preventing Apache's libssl from initializing at boot in Debian bug #916690 - apache2: getrandom call blocks on first startup, systemd kills with timeout.

The bug has been retitled "document getrandom changes causing entropy starvation" hinting at not fixing the underlying issue but documenting it in the Debian Buster release notes.

Unhappy with this "minimal compromise" Stefan wrote a comprehensive summary of the current situation to the Debian-devel mailing list. The discussion spans December 2018 and January 2019 and mostly reiterated what had been written above already. The discussion has - so far - not reached any consensus. There is still the "systemd stance" (not our problem, fix the daemons) and the "ssh/apache stance" (fix systemd, credit entropy).

The "document in release notes" minimal compromise was brought up again and Stefan warned of the problems this would create for Buster users:

> I'd prefer having this documented in the release notes:
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=916690
> with possible solutions like installing haveged, configuring virtio-rng,
> etc. depending on the situation.

That would be an extremely user-unfriendly "solution" and would lead to 
countless hours of debugging and useless bug reports.

This is exactly why I wrote this blog entry and keep it updated. We need to either fix this or tell everybody we can reach before upgrading to Buster. Otherwise this will lead to huge amounts of systems dead on the network after what looked like a successful upgrade.

Some interesting tidbits were mentioned within the thread:

Raphael Hertzog fixed the issue for Kali Linux by installing haveged by default. Michael Prokop did the same for the grml distribution within its December 2018 release.

Ben Hutchings pointed to an interesting thread on the debian-release mailing list he kicked off in May 2018. Multiple people summarized the options and the fact that there is no "general solution that is both correct and easy" at the time.

Sam Hartman identified Debian Buster VMs running under VMware as an issue, because that hypervisor does not provide virtio-rng. So Debian VMs wouldn't boot into ssh availability within a reasonable time. This is an issue for real world use cases albeit running a proprietary product as the hypervisor.

16.01.2019

Daniel Kahn Gillmor wrote in to explain a risk for VMs starting right after the boot of the host OS:

If that pool is used by the guest to generate long-term secrets because it appears to be well-initialized, that could be a serious problem.
(e.g. "Mining your P's and Q's" by Heninger et al -- https://factorable.net/weakkeys12.extended.pdf)
I've just opened https://bugs.launchpad.net/qemu/+bug/1811758 to report a way to improve that situation in qemu by default.

So ... make sure that your host OS has access to a hardware random number generator or at least carries over its random seed properly across reboots. You could also delay VM starts until the crng on the host Linux is fully initialized (random: crng init done).
Otherwise your VMs may get insufficiently generated pseudo-random numbers and won't even know.
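A crude sketch of such a delay on the host, polling the kernel log before your VM start-up scripts continue:

until dmesg | grep -q "random: crng init done"; do
        sleep 1
done
# ... now start the VMs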

12.03.2019

Stefan Fritsch revived the thread on debian-devel again and got a few more interesting tidbits out of the developer community:

Ben Hutchings has enabled CONFIG_RANDOM_TRUST_CPU for Debian kernels from 4.19.20-1 so the problem is somewhat contained for recent CPU AMD64 systems (RDRAND capable) in Buster.

Thorsten Glaser developed early-rng-init-tools which combine a few options to try and get entropy carried across boot and generated early during boot. He received some scrutiny as can be expected but none that would discourage me from using it. He explains that this is for early boot and thus has initrd integration. It complements safer randomness sources or haveged.

16.04.2019

The Debian installer for Buster is running into the same problem now as indicated in the release notes for RC1. Bug #923675 has details. Essentially choose-mirror waits several minutes for entropy when used with https mirrors.

08.05.2019

The RDRAND use introduced in systemd to bypass the kernel random number generator during boot falls for an AMD pre-Ryzen bug as RDRAND on these systems doesn't return random data after a suspend / resume cycle. Added an update note to the systemd section above.

03.06.2019

Bastian Blank reports the issue is affecting Debian cloud images as well, because cloud-init generates ssh keys during boot.

10.07.2019

Added the update of jitterentropy_rng to a version based on upstream v2.1.2 into the Jitterentropy section above.

16.09.2019

The Linux Kernel Mailing List (LKML) is re-iterating the entropy starvation issue and the unwillingness of systemd to fix its usage of randomness in early boot. Ahmed S. Darwish has reported the issue leading to ext4 reproducibly blocking boot with kernel 5.3-rc8. There are a few patches floated and the whole discussion is worth reading albeit non-conclusive as of now.

Ted Ts'o says "I really very strongly believe that the idea of making getrandom(2) non-blocking and to blindly assume that we can load up the buffer with 'best efforts' randomness to be a terrible, terrible idea that is going to cause major security problems that we will potentially regret very badly. Linus Torvalds believes I am an incompetent systems designer." in this email.

In case you needed a teaser to really start reading the thread! Linus Torvalds also mentions the issue (and a primer on what "never break userspace" means) in the Linux kernel 5.3 release notes.

18.09.2019

... and Martin Steigerwald kindly noticed that I update this blog post with the relevant discussions I come across as this entropy starvation mess continues to haunt us.

25.11.2019

Added the "Linus to the rescue" section after the Linux kernel 5.4 has been released.

02.04.2020

I ran into the same issue on a Gentoo system today. Luckily OpenRC handled this gracefully but it delayed booting: syslog-ng actually hangs the boot for some time ... waiting for entropy. Argh. The Gentoo forums thread on the topic clearly listed the options:

  1. Make syslog-ng depend on haveged by adding rc_syslog_ng_need="haveged" to /etc/rc.conf (and obviously having haveged installed)
  2. Re-compiling the kernel with CONFIG_RANDOM_TRUST_CPU=y where that is an option

  1. it will return with EAGAIN in the GRND_NONBLOCK use case. The blocking behaviour when lacking entropy is a security measure as per Bug #1559 of Google's Project Zero

  2. Update 18.12.2018: "SysVinit times" ::= "The times when most Linux distros used SysVinit over other init systems." So Wheezy and previous for Debian. Some people objected to the statement, so I added this footnote as a clarification. See the discussion in the comments below. 

  3. there is no Buster branch in the release notes repository yet (17.12.2018). Update: I wrote a section for the release notes 06.05.2019 and Paul Gevers amended and committed that. So when users of affected systems read the release notes before upgrading to Buster they will hopefully not be surprised (and worried) by the long boot delays. 

Prevent Ubuntu from phoning home

Linux

Ubuntu unfortunately has decided again to implement another "phone home" feature, this time transferring your lsb_release information, CPU model and speed (from /proc/cpuinfo), uptime output, most of uname -a and curl version to an Ubuntu news web-service.

Here is the Launchpad bug report #1637800 introducing this ... web bug.

This thing runs both systemd-timer based (via /lib/systemd/system/motd-news.service and /lib/systemd/system/motd-news.timer) and on request when you log in (via /etc/update-motd.d/50-motd-news).

Ubuntu news on ssh login

There has even been a bug filed about the motd advertising HBO's Silicon Valley show.

To prevent this from running (it is enabled by default on Ubuntu 17.04 and will probably propagate down to earlier versions as well), edit /etc/default/motd-news to include

ENABLED=0

so

sed -i "s/ENABLED=1/ENABLED=0/" /etc/default/motd-news # run as root

for your automated installs.
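If you additionally want to make sure the systemd timer never fires again, you can disable or mask it (a sketch using the unit name mentioned above):

systemctl disable --now motd-news.timer # run as root
# or, more drastically:
systemctl mask motd-news.timer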

Update:

02.07.2017: Dustin Kirkland responded to a YC "hacker news" mention of his motd spam. He mentions:

You're welcome to propose your own messages for merging, if you have a well formatted, informative message for Ubuntu users.
We'll be happy to review and include them in the future.

What could possibly go wrong?

Updating the Dell XPS 13 9360 Thunderbolt firmware to get VGA and HDMI working

IT

Last year I bought the wonderful Dell XPS 13 9360 as it is certified to work with Ubuntu Linux and is just all around an awesome device. Dell made me buy the Windows version as only that got a 1 TB NVMe-SSD option. Linux apparently is only worthy of the 512GB and below models. What product manager comes up with such a stupid idea? Are SKUs that precious? Anyways ... so I bought a Windows version and that got wiped with a Linux install immediately as that was and is its intended purpose.

Dell DA200 USB-C to HDMI/VGA/Ethernet/USB 3.0 adapter

I purchased a DA200 with the system which is Dell's USB-C to anything (HDMI/VGA/Ethernet/USB 3.0) dongle. When I got the laptop the Ethernet port and USB 3.0 via the DA200 were working right out of the box. The VGA and HDMI ports were detected by Ubuntu but there was no way to get connected screens working. They stayed black.

The device was shipped with Thunderbolt firmware NVM18 and we've been told rather quickly by Dell this would be fixed with an update. And lo and behold Dell published the firmware version NVM21 right for Christmas 2016. Now unfortunately while their BIOS updates are Windows / DOS executables that can be just shoved at the Dell UEFI flash updater and thus the main BIOS can be updated from any OS, including Linux, without any hassle, the Intel provided Thunderbolt update needs Windows to get installed. Or, well, there is a convoluted way to compile an out-of-tree Linux kernel module, download and compile a few sets of software and do it via Linux. That description was so lengthy, I didn't even try it. Additionally there seems to have been no progress at all in getting this more mainline in the last three months, so I chose the cheap route and installed Windows 10 on a USB thumb drive [1].

This is done via the (unfortunately Windows only) Win2USB software (the free version is sufficient).

Update: There's a new bash script windows2usb that looks good and should work to get you a bootable Windows USB thumb drive in Linux. WinUSB (which stopped working sometime in the Win10 era) has also been forked and updated into WoeUSB. And there is WinToUSBLinux, yet another shell script. Give them a try.

Once Windows has rebooted often enough to finish its own installation, you can work with the USB thumb drive install as with any Windows 10. Nice.

Dell TPM 1.2 to 2.0 firmware update

Put all the files you downloaded from Dell to update your XPS 13 into a directory on the USB thumb drive. That way Windows does not need to have any network connectivity.

I first updated the TPM 1.2 firmware to a TPM 2.0 version (DellTpm2.0_Fw1.3.2.8_V1_64.exe at the time of writing this blog entry). Now this is quite hilarious as the Windows installer doesn't do anything but put a UEFI firmware update into the EFI partition that runs on reboot. Duh. You do need to manually clear the TPM in the BIOS' security settings section (there's a clear checkbox) to be able to program new firmware onto it.
Thunderbolt firmware upgrade progress bar
Thunderbolt firmware upgrade successful

Now back in Windows install the Thunderbolt drivers (Chipset_Driver_J95RR_WN32_16.2.55.275_A01.exe at the time of writing this) and then run Intel_TBT3_FW_UPDATE_NVM21_0THFT_A00_3.21.00.008.exe, which is the NVM21 Thunderbolt firmware update (or a later version).

Reboot again (into Linux if you want to) and (drumroll) the VGA and HDMI ports are working. Awesome.

An update log can be found on the USB thumb drive at Dell\UpdatePackage\Log\Intel_TBT3_FW_UPDATE_NVM21_0THFT_A00_3.log:

*** Dell Thunderbolt firmware update started on 4/6/2017 at 12:56:56***
Command: C:\Install\Intel_TBT3_FW_UPDATE_NVM21_0THFT_A00_3.21.00.008.exe 

Starting FW Update....
***TBT GPIO Power is Turning On:  No Dock or DockInfo.
***TBT GPIO power is turned on.

Thunderbolt Firmware Update SUCCEEDED
TBT Items Registry creation is Success at \SOFTWARE\Dell\ManageableUpdatePackage\Thunderbolt Controller:
User selected OK for reboot
System TBT NVM Current Version:BCD:00000018: New Version:BCD:00000021

Exit Code = 0 (Success) 
***Thunderbolt Firmware flash finished at 4/6/2017 at 13:00:23***

If Windows has added its boot loader entry into your UEFI options, you can easily remove that again with the Dell UEFI BIOS or efibootmgr from within Linux.
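The efibootmgr route is roughly (a sketch; the boot entry number is an example, yours will differ):

efibootmgr -v          # list boot entries, note the BootXXXX number of the Windows Boot Manager
efibootmgr -b 0003 -B  # delete entry Boot0003 (run as root)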

The whole process took me less than 30 minutes. And most of that was creating the Windows USB thumb drive. I'll keep that for future updates until Intel and Dell have sorted out the Thunderbolt update process in Linux.

Updates:

18.05.17: Intel has published a large patchset on LKML to enable Thunderbolt security levels (thus preventing DMA attacks) and get NVM firmware upgrades mainlined. Yeah!

02.05.18: Added a link to the windows2usb bash script that should remove the need to create a bootable Windows USB thumb drive with a Windows only software.

11.06.18: Added a link to WoeUSB which is currently packaged for Ubuntu in a PPA.

08.04.20: Added a link to WinToUSBLinux. A recently released shell script to create a bootable Windows USB stick from Linux.


  1. If you go the Linux route please post a minimal image somewhere (kernel, initrd, squashfs or FAT16/32 raw image) and put a link into a comment below this blog post. Thanks. 

Security is hard, open source security unnecessarily harder

IT

Now it is a commonplace that security is hard. It involves advanced mathematics and a single, tiny mistake or omission in implementation can spoil everything.

And the only sane IT security can be open source security. Because you need to assess the algorithms and their implementation and you need to be able to completely verify the implementation. You simply can't if you don't have the code and can compile it yourself to produce a trusted (ideally reproducible) build. A no-brainer for everybody in the field.

But we make it unbelievably hard for people to use security tools. Because these have grown over decades fostered by highly intelligent people with no interest in UX.
"It was hard to write, so it should be hard to use as well."
And then complain about adoption.

PGP / gpg has received quite some fire this year and the good news is this has resulted in funding for the sole gpg developer. Which will obviously not solve the UX problem.

But the much worse offender is OpenSSL. It is so hard to use that even experienced hackers fail.

IRC wallop on hackint

Now, securely encrypting a mass communication medium like IRC is not possible at all. Read Trust is not transitive: or why IRC over SSL is pointless [1].
Still it makes wiretapping harder and that may be a good thing these days.

LibreSSL has forked the OpenSSL code base "with goals of modernizing the codebase, improving security, and applying best practice development processes". No UX improvement. A cleaner code for the chosen few. Duh.

I predict the re-implementations and gradual improvement scenarios will fail. The nearly-impossible-to-use-right situation with both gpg and (much more importantly) OpenSSL cannot be fixed by gradual improvements and code reviews, however thorough they may be.

Now the "there's an App for this" security movement won't work out on a grand scale either:

  1. Most often not open source. Notable exceptions: ChatSecure, TextSecure.
  2. No reference implementations with excellent test servers and well documented test suites but products. "Use my App.", "No, use MY App!!!".
  3. Only secures chat or email. So the VC-powered ("next WhatsApp") mass-adoption markets but not the really interesting things to improve upon (CA, code signing, FDE, ...).
  4. While everybody is focusing on mobile adoption the heavy lifting is still on servers. We need sane libraries and APIs. No App for that.

So we need a new development, a new code, a new open source product. Sadly, the Core Infrastructure Initiative so far only funds existing open source projects in dire need and people bug hunting.

It basically makes the bad solutions of today a bit more secure and ensures maintenance of decade old crufty code bases. That way it extends the suffering of everybody using the inadequate solutions of today.

That's inevitable until we have a better stack but we need to look into getting rid of gpg and OpenSSL and replacing them with something new. Something designed well from the ground up, technically and from a user experience perspective.

Now who's in for a five year funding plan? $3m [2] annually. ROCE 0. But a very good chance to get the OBE awarded.

Keep calm and enjoy the silence

Updates:

10.06.22: Carl Tashian made a GUI mockup to show the complexity of the OpenSSL "user interface".

21.07.19: A current essay on "The PGP problem" is making rounds and lists some valid issues with the file format, RFCs and the gpg implementation. The GnuPG-users mailing list has a discussion thread on the issues listed in the essay.

19.01.19: Daniel Kahn Gillmor, a Senior Staff Technologist at the ACLU, tried to get his gpg key transition correct. He put a huge amount of thought and preparation into the transition. To support Autocrypt (another try to get GPG usable for more people than a small technical elite), he specifically created different identities for him as a person and his two main email addresses. Two days later he has to invalidate his new gpg key and back-off to less "modern" identity layouts because many of the brittle pieces of infrastructure around gpg from emacs to gpg signature management frontends to mailing list managers fell over dead.

28.11.18: Changed the Quakenet link on why encrypting IRC is useless to an archive.org one as they have removed the original content.

13.03.17: Chris Wellons writes about why GPG is a failure and created a small portable application Enchive to replace it for asymmetric encryption.

24.02.17: Stefan Marsiske has written a blog article: On PGP. He argues about adversary models and when gpg is "probably" [3] still good enough to use. To me a security tool can never be a sane choice if the UI is so convoluted that only a chosen few stand at least a chance of using it correctly. Doesn't matter who or what your adversary is.
Stefan concludes his blog article:

PGP for encryption as in RFC 4880 should be retired, some sunk-cost-biases to be coped with, but we all should rejoice that the last 3-4 years had so much innovation in this field, that RFC 4880 is being rewritten[Citation needed] with many of the above in mind and that hopefully there'll be more and better tools. [..]

He gives an extensive list of tools he considers worth watching in his article. Go and check whether something in there looks like a possible replacement for gpg to you. Stefan also gave a talk on the OpenPGP conference 2016 with similar content, slides.

14.02.17: James Stanley has written up a nice account of his two hour venture to get encrypted email set up. The process is speckled with bugs and inconsistent nomenclature capable of confusing even a technically inclined person. There has been no progress in the last ~two years since I wrote this piece. We're all still riding dead horses. James summarizes:

Encrypted email is nothing new (PGP was initially released in 1991 - 26 years ago!), but it still has a huge barrier to entry for anyone who isn't already familiar with how to use it.

04.09.16: Greg Kroah-Hartman ends an analysis of the Evil32 PGP keyid collisions with:

gpg really is horrible to use and almost impossible to use correctly.

14.11.15:
Scott Ruoti, Jeff Andersen, Daniel Zappala and Kent Seamons of BYU, Utah, have analysed the usability [local mirror, 173kB] of Mailvelope, a webmail PGP/GPG add-on based on a Javascript PGP implementation. They describe the results as "disheartening":

In our study of 20 participants, grouped into 10 pairs of participants who attempted to exchange encrypted email, only one pair was able to successfully complete the assigned tasks using Mailvelope. All other participants were unable to complete the assigned task in the one hour allotted to the study. Even though a decade has passed since the last formal study of PGP, our results show that Johnny has still not gotten any closer to encrypt his email using PGP.

  1. Quakenet has removed that article citing "near constant misrepresentation of the presented argument" sometime in 2018. The contents (not misrepresented) are still valid so I have added and archive.org Wayback machine link instead. 

  2. The estimate was $2m until end of 2018. The longer we wait, the more expensive it'll get. And - obviously - ever harder. E.g. nobody needed to care about sidechannel attacks on big-LITTLE five years ago. But now they start to hit servers and security-sensitive edge devices. 

  3. Stefan says "probably" five times in one paragraph. Probably needs an editor. The person not the application. 

Apple Timemachine backups on Debian 8 (Jessie)

Debian

Upgrading Debian 7 (Wheezy) servers to Debian 8 (Jessie) proves (unexpectedly) quite rough around the edges.

That's what you get for using a version x.0, we should have known better :-).

And - of course - the release notes follow the common practice of not even mentioning any of the issues we encountered so far.

Ah, well, let's go through the first one:

In Debian 7 (Wheezy) there was netatalk 2.2.2 (packages link). Now during upgrades that package may or may not get removed. There is no netatalk in Debian 8 (Jessie) anymore. Duh. There is in sid (aka Debian unstable) (packages link) so we may see a backport some time. Or not. In any case this is still 2.2.5 at the time of writing and as Adrian Knoth put it in the three year old bug asking for an upgrade to Netatalk 3:

Let's not ship another release without netatalk3, it's embarrassing.

Yes. It is. Removing a working version and not even mentioning it in the release notes is even worse though. So no cookies there.

Luckily the absolutely awesome Debian and greater FLOSS community have sorted 90% of the problem out for us already:

A quick Google search turns up an excellent article on netatalk's wiki that details installing Netatalk 3.1.7 on Debian 8 Jessie. There are two shortcomings to this: First it doesn't compile to .debs but installs outside of apt and friends. And second it compiles with Spotlight search, courtesy of Gnome tracker, which doesn't really work well on servers yet. Hence we're lucky that Adrian Knoth's debified install has not yet added the tracker dependencies. It does compile for systemd use (the default for Debian 8 Jessie). If you want to continue using SysVInit, you need to modify debian/rules.

Compiling to .debs becomes as easy as:

# get build dependencies and a few helpers
apt-get install build-essential devscripts debhelper cdbs autotools-dev dh-buildinfo libdb-dev libwrap0-dev libpam0g-dev libcups2-dev libkrb5-dev libltdl3-dev libgcrypt11-dev libcrack2-dev libavahi-client-dev libldap2-dev libacl1-dev libevent-dev d-shlibs dh-systemd
# in case you want to try the tracker support (you need to amend the debian/ build config as well)
# apt-get install tracker libtracker-sparql-1.0-dev libtracker-miner-1.0-dev  
git clone https://github.com/adiknoth/netatalk-debian
cd netatalk-debian
debuild -b -uc -us

This should leave you with (at the time of writing this):

libatalk-dev_3.1.7-1_amd64.deb (Development files for the libatalk library, dev only)
  md5:  e5a465e39a8560c919d8db85c8e5a83b
  sha1: 0b924cf75f22ab42406289c6f18ae0243d6396a3
libatalk16_3.1.7-1_amd64.deb (libatalk library, needed)
  md5:  17a3d677ed0b3df1c2f4c1a8ab9045fd
  sha1: 7345ed3edd442716c99c2fe979140703204c0826
netatalk_3.1.7-1_amd64.deb (netatalk daemons, needed)
  md5:  c694abca7f3cdc0070b2b3e7d528324a
  sha1: 932d1e3d5899958f29e79a7ba40e858d4ac272e8

Obviously you can download the files above if you run the AMD64 architecture and trust me enough to compile them for you.
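Installing the built (or downloaded) packages is then just a matter of (file names as produced above; the apt-get call fixes up any missing runtime dependencies):

dpkg -i libatalk16_3.1.7-1_amd64.deb netatalk_3.1.7-1_amd64.deb
apt-get install -f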

Continue reading "Apple Timemachine backups on Debian 8 (Jessie)"

Managing a project consisting of multiple git repositories

IT

The core team organizing DebConf, the annual Debian developer conference, reached out to me two weeks ago to help support this year's effort a bit.

I'm very happy to do so as Debian is a cornerstone of everything I do in the Open Source/Free Software space.

Screenshot of git_pull_all with color

To get me started I got access to a lot of mailing lists and irc channels. And even more git repositories. So many that the DebConf team even has an instruction page on how the repositories all fit together.

It's unfortunately quite common to split a bigger project into many git repositories to ease access rights management and reduce the noise and data transfer volume for the average user. The downside is, everybody ends up with a dozen or more individual repositories to keep pulling. And then there's git annex for yet another level of indirection.

Joey Hess, a former Debian developer, has even written an extensive tool, myrepos, to meta-manage the different repositories and it can do quite some magic across different SCMSs [1]. In my case this is a bit of an overkill though.

And using myrepos may get you confused at some point about whether to run mr or git directly for each batch of repos you have inherited over time working on multiple projects. [2] Thus I prefer the simple route:

Check out each repository into a common top-level directory (~/debconf/ in this case) and then put the following two lines into an executable script git_pull_all into that top level directory:

#!/bin/sh
find ~/debconf -mindepth 1 -maxdepth 1 -type d -exec sh -c "cd {}; test -r .git/config && git pull $*" \;

This will allow you to pull all git repos with one command and keep the normal syntax for everything else you do with each repo.

The -mindepth and -maxdepth options will instruct find to run your git pull only inside each direct child of the top level directory. So recursion depth = 1. That is the single trick there is to this.
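Usage then boils down to (assuming the script lives in ~/debconf as described):

chmod +x ~/debconf/git_pull_all
cd ~/debconf && ./git_pull_all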

Updates:

If you like to have some color and a bit of a spaced layout for improved readability, try:

#!/bin/sh
find ~/debconf -mindepth 1 -maxdepth 1 -type d -exec sh -c "cd {}; test -r .git/config && (printf \"\033[1m\033[34m%-50s\033[0m\" \"\${PWD}:\" ; git pull $*)" \;

When you have pull.rebase=true set in your .gitconfig, you can run ./git_pull_all --no-rebase to avoid rebases in case you work somewhere and want to have the merge commits.

P.S.: The DebConf15 Heidelberg registration just opened, please check the DebConf15 homepage for news, venue information and please register if you want to come around.


  1. Source Code Management Systems, like git, mercurial (hg) or subversion (svn). Or God forbid ... cvs. I don't like the (D)VCS (Distributed) Version Control Systems moniker. Because that's not really all these systems do. Not even the most important piece of what they do these days. 

  2. With myrepos you can still work with each individual repository via git. Just so nobody will write in "but...". 

Security by policy does not work

Management

The laptop systems aboard the International Space Station (ISS) have been infected by computer viruses and worms multiple times. The W32.Gammima.AG virus made it to space in July 2008. And it happily spread from laptop to laptop onboard the ISS. The virus has been written to steal credentials for some common games. It is unknown how many of these were run in orbit. The latency would kill the experience for sure.

I am sure there have been policies in place to prevent astronauts carrying personal soft- and hardware up to the ISS. Personal items must be explicitly applied for and will only be approved after severe scrutiny of each item. Even beyond the obvious security considerations, this is necessary as the launch weight needs to be calculated exactly.
NASA and Roscosmos both have very strict policies for their personnel and strict training to make sure they know and follow policy. The group of astronauts primarily affected by the policy is very well known and counts a few dozen heads.

Still at least one infected USB stick made it up to the ISS and could spread its malware. Other infections have happened and we can assume similar infection vectors.

So the policy has proven unenforceable. It is broken. It is still correct per se. There is nothing wrong with prohibiting personal soft- and hardware in a high risk environment. So the policy stays in place. NASA still needed to make sure it relied much less on its effectiveness.

Hence NASA did the only sane thing: Move from an unenforceable policy to a technically feasible solution, significantly reducing the security exposure. In May 2013 NASA announced the ISS laptops are being migrated to Debian 6. Imagine how much pressure Microsoft must have put up to prevent such a technical decision due to the adverse marketing message it provides along the way. And still the engineers at NASA saw this as the best way forward.

The take-away message here is: Security by policy does not work.

Continue reading "Security by policy does not work"

Firefox opens directories in RawTherapee ... or a tale of colliding assumptions

Internet

After installing RawTherapee, a raw photo to jpg image converter, on my Kubuntu 13.04 desktop, Mozilla Firefox started to open the Downloads directory in ... RawTherapee. Gosh. Now that's what you'd want to browse a downloads folder with.

Update-alternatives did not help as RawTherapee is not managed by the Debian alternatives system. Hm, xdg-open still works as intended and opens dolphin.

Asking Google turns up many non-working solution proposals. Hm.

Firefox: Context menu (right click) -> Open Containing Folder

But, searching the collective memory is not completely useless. There is a solved bug report on the RawTherapee bug tracker that explains the issue. The .desktop file RawTherapee installs contains inode/directory as a supported mime type. Well, that's technically correct but makes no sense in the way mime associations are used. Thankfully the bug reporter was able to convince the developer of this and get this fixed for future RawTherapee releases.

So option 1: edit /usr/share/applications/rawtherapee.desktop and remove the inode/directory entry from the MimeType=-line. Then regenerate the various mimetype caches.

This still leaves the question why Firefox does not use the inode/directory entry from dolphin.desktop. Also, the next update from Debian/Ubuntu will probably overwrite the fixed rawtherapee.desktop file again as the patch from the Rawtherapee developer will only be included in new releases (and these may or may not be backported to Debian/Ubuntu). So you'd need to copy that fixed rawtherapee.desktop file to ~/.local/share/applications/ or the like.
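Put into commands, option 1 could look like this (a sketch; paths as on Kubuntu/Debian, update-desktop-database comes from desktop-file-utils):

# work on a local copy so package updates don't revert the fix
cp /usr/share/applications/rawtherapee.desktop ~/.local/share/applications/
sed -i 's|inode/directory;||' ~/.local/share/applications/rawtherapee.desktop
update-desktop-database ~/.local/share/applications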

Option 2: find out what the real issue is...

Continue reading "Firefox opens directories in RawTherapee ... or a tale of colliding assumptions"

Securing the grub boot loader

Open Source

Since version 2.0 the behaviour of grub regarding passwords has changed quite substantially. It can be nicely used to secure the boot process so that an X display manager (gdm, kdm, lightdm, ...) or login prompt cannot be circumvented by editing the Linux kernel boot command line parameters. The documentation is concise but many old how-tos may lead you down the wrong GNU grub "legacy" (the pre-2.0 versions) path.

So this assumes you have a grub installed and working. I.e. if you press Shift during boot, you get a grub menu and can edit menu entries via the e key.

First you need to setup grub users and corresponding passwords:

Run grub-mkpasswd-pbkdf2 to hash every password you want to use for grub users (which are technically unrelated to Linux system users at this time).
You'll get a string like 'grub.pbkdf2.sha512.10000...'. It will replace the plain text passwords.

In '/etc/grub.d/40_custom' add lines like:

# These users can change the config at boot time and run any menuentry:
set superusers="root user1"
password_pbkdf2 root grub.pbkdf2.sha512.10000.aaa...
password_pbkdf2 user1 grub.pbkdf2.sha512.10000.bbb...
# This user can only run specifically designated menuentries (not a superuser):
password_pbkdf2 user2 grub.pbkdf2.sha512.10000.ccc...

Now once you did this grub v. 2.0+ will ask for a superuser password every time you want to boot any menu item. This is a changed behaviour from v. 1.9x which defaulted to allowing all entries if no user restriction was specified. So you need to add '--unrestricted' to all 'menuentries' that any user shall be able to boot. You can edit '/boot/grub/grub.cfg' and add --unrestricted to (the default) menuentries. Or you can edit the 'linux_entry ()' function in '/etc/grub.d/10_linux' so that the 'echo "menuentry ..."' lines include --unrestricted by default:

[...]
echo "menuentry '$(echo "$title" | grub_quote)' --unrestricted ${CLASS} \$menuentry_id_option 'gnulinux-$version-$type-$boot_device_id' {" | sed "s/^/$submenu_indentation/"
else
echo "menuentry '$(echo "$os" | grub_quote)' --unrestricted ${CLASS} \$menuentry_id_option 'gnulinux-simple-$boot_device_id' {" | sed "s/^/$submenu_indentation/"
[...]

Make a backup of this file as it will be overwritten by grub updates. This way all Linux kernels detected by the script will be available to all users without identifying to grub via username / password.

Now issue update-grub to re-generate 'grub.cfg' with the amended menuentries.

If everything worked well, your system can now be booted unrestricted but the grub configuration can only be changed from the grub superusers after identifying with their username and password at the grub prompt.

Bonus point:

If you want to create menuentries that user2 (and any superuser) from the above example user list can run, add blocks like these to the end of '40_custom':

menuentry "Only user2 (or superuser) can run this Windows installation" --users user2 {
set root=(hd1,1)
chainloader +1
}

Update

16.12.2015:
Hector Marco and Ismael Ripoll have found a nearly unbelievable exploit in Grub2 that allows you to tap backspace 28 times to get a rescue shell and that way bypass a password prompt. Time to update!
Read the excellent analysis of the bug and the exploit vector in Hector Marco's blog post.

SyncPlaces, SortPlaces ... preserving these and other excellent Firefox add-ons

Internet

A few days ago I was re-installing a laptop of mine. Downloading the default add-ons for Firefox came to a stall as I could not find SyncPlaces, which I've been using for a few years to synchronize my bookmarks between the systems I use. As I knew the author, Andy Halford, has a homepage I surfed there and found...

"So long Mozilla and thanks for all the fish

If you are here looking for any of my extensions (SyncPlaces, SortPlaces, CheckPlaces or SearchPlaces) then I'm sad to inform you that these are no longer available.

Unfortunately I do not have the time or the motivation to continue to support these in light of the close-minded, short-sighted and increasingly inflexible organisation that Mozilla has become. Apparently spending 1000's of hours promoting their products and providing my extensions to the general public to fill the gaps in their product, and repair bookmarks destroyed by Firefox Sync, counts for little and does not deserve any flexibility or support from them. Apparently add-on developers are second-class citizens to them whatever they may say. Surprising given that these are the main USP of Firefox, and developers are providing their time and efforts for free. My decision was taken after a series of problems the last one being the final straw."
Andy Halford, 17.06.12 (screenshot)

I was unable to find any details on what caused this rant and the serious decision to remove his excellent add-ons from the Internet. There are a few discussions on the Mozillazine forums but nothing enlightening so far. Andy's personal homepage, family homepage and the TotalValidator product he and his wife offer are all still online, he just pulled all his popular Firefox add-ons.

Andy states on his homepage

"I will not be answering any further emails on this matter, I suggest that you direct any questions to Mozilla themselves and ask them why they were happier for me to leave than to use a little common sense." [ib.]

I still tried to contact him and asked for an interview but he has not come back to me so far. I've sent a second email with the URL of this blog post.

The SyncPlaces plug-in is still prominently featured on Wikipedia's Comparison of browser synchronizers at press time.

Now this is a mess. With some ungraciously licensed closed source software, you're probably used to planning migrations once license terms run out or you have to upgrade to the much improved successor product because "extended support" for your perfectly fine current version gets ridiculously expensive.

But with open source you have more rights and e.g. the security to use the product as long as you wish. If you have kept a copy of the code (and the license file). Because in this case the Mozilla Firefox installer .xpi's are hard to come by. Mozilla has a centralized add-on repository and once the files are pulled from there, they quickly vanish off the net. Now don't get me wrong, these plug-ins are now unmaintained software, so you will have to migrate at some point in time, but not necessarily now. You have some time for planning and evaluation of alternatives.

Continue reading "SyncPlaces, SortPlaces ... preserving these and other excellent Firefox add-ons"

Apple iPhone ring tones Linux style

Open Source

Apple has crippled the iPhone to not allow normal music files as ringtones. Business decision. Technically any sub 40 second MP4 audio file will do once you rename it to *.m4r and drag-and-drop it to the ringtones folder of your phone in iTunes. Longer ones will work, too. But you'd need a jailbroken iPhone for that as iTunes will refuse to transfer the ringtone file if it's too long. Not much of an issue imho, who keeps ringing your phone for 40 seconds or more?

There's a gazillion websites available telling you how to convert a single .mp3-file to a ringtone with or without iTunes help and there are hundreds of tools doing that for you if you can't find out how to do it with just iTunes itself. Still the ones I tried failed for me as I wanted to convert my 20 or so standard ringtones from the good old Motorola K3 to iPhone ringtones all in one go. Without having to edit each one by hand. They are already nice ringtones and have served me well for years, just too long for the iPhone and in .mp3 format.

The basic processing sequence needed is

  1. Cut the .mp3 down to 39s
  2. Convert the .mp3 -> .wav (with mplayer, normalize output gain while we're at it)
  3. Convert the .wav -> .mp4 (with faac)
  4. Clean up, GOTO 1 for next file

So below is the free shell script to create multiple ringtones in one go on any Linux system. You need to install cutmp3, mplayer and faac for it, so apt-get install cutmp3 mplayer faac on Debian or Ubuntu. cutmp3 is currently not in the portage tree for Gentoo, but you can download an ebuild from Polynomial-C's overlay (mirror). Or you just download the cutmp3 binary from Jochen Puchalla's homepage. There's no error checking in the script, so know your way around the shell before running it.

Without further ado:

#!/bin/sh
#
# convert_to_ringtone file1.mp3 [file2.mp3, ...]
# Placed into the public domain by Daniel Lange, 2011.
#

for arg
do
        echo "Processing $arg..."
        cutmp3 -c -a 0:0.0 -b 0:39.0 -i "$arg" -O "$arg.tmp"
        mplayer -vo null -vc null -af volnorm -ao pcm:fast:file=tmpfile.wav "$arg.tmp"
        faac -b 128 -c 44100 -w tmpfile.wav
        name=`echo $arg| sed 's/.mp3//g'| sed 's/ /_/g'`
        mv tmpfile.m4a $name.m4r
        rm tmpfile.wav
        rm "$arg.tmp"
        echo "$arg done."
done
 

Wikipedia article on Apple's 1984 ad.

Update

23.12.14 Apparently the faac package in Debian and Ubuntu has had the MP4 writing capability removed in v1.28-5 and later due to a minor license incompatibility. See the Debian Changelogs. Duh.

faac (1.28-5) unstable; urgency=low
  [ Andres Mejia ]
  * Disable mp4v2 support.
    This only disables mp4v2 for the faac utility program. The faac
    utility is GPL-2 but the mp4v2 library is MPL-1.1. The two licenses
    are incompatible with each other.

So ... unfortunately you have to built faac from source yourself or pin the v1.28-4 version which is identical except for the castration anyways.

Random distro dev: "Why oh why doesn't my distro ever head mainstream...?"
Hint: Because of stuff like this.

Keeping IRC nicks active

IRC

Typical IRC services usually allow you to register with nickserv and link a number of nicks to a personal account. It's quite common to have nick, nick_ and nick__ as many IRC clients auto-append underscores if the primary nickname is already in use when connecting. Obviously you can set these alternate nicknames to almost anything you like in a decent client.

Some folks also group a "vanity" nickname or two for whatever reason. To keep these active, people do the "nick shuffle" (/nick newnick, /nick oldnick) all the time:

nick shuffle on freenode

People who forget the occasional nick shuffle may end up losing a grouped nick because it became inactive. While freenode staff try to contact people before dropping linked nicks, there are occasional prunes of "old data" from the services database. And then nobody can really ask upfront.

So before the next big purge comes up, I wrote a small bash script that logs into a nickserv account and cycles through the linked nicks. A few friends and I have used it successfully for many months now.

Grab a copy of keepnick (2.4kB) and drop it into /usr/local/bin.

Keepnick expects to have an accountname, the corresponding password and then a sequence of linked nicks given on its command line.

Something like

/usr/local/bin/keepnick accountname passw0rd linked_nick linked_nick_ vanity_nick MyOtherNick

should work.

For regular use, you need to set up a cron job to call keepnick e.g. every week. So put something like the following script into /etc/cron.weekly/keepnicks_irc or create a corresponding crontab entry for keepnicks_irc if you do not have the convenient cron.* directories set up:

#!/bin/bash
#
# run keepnick for user(s) irc account(s)
# intended to be run from cron, e.g. through /etc/cron.weekly
#

KEEPNICK="/usr/local/bin/keepnick"
# better safe than sorry
PATH="/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin"
export PATH

$KEEPNICK accountname1 passw0rd1 linked_nick1 linked_nick1_ linked_nick1__
$KEEPNICK accountname2 passw0rd2 linked_nick2 linked_nick2_ linked_nick2__
 

You should see keepnick in action now every week like this:

keepnick in action

What happens here is that the IRC services package tells you keepnick has just authenticated to your account and will now shuffle through all nicks you asked it to. The big advantage is that it does this outside of channels, so it does not annoy any users. The cron job should make sure you don't forget the nick shuffle anymore.

Making sure your bash supports network connections

Stock bash will support network connections but on Debian and old (=pre-karmic) Ubuntu that capability was disabled at compile time.

If you need to check whether your bash is compiled with network support, type cat < /dev/tcp/time.nist.gov/13 into a bash terminal.

In case that gives you an RFC-867 time string, you're all fine. If not, re-compile your bash with --enable-net-redirections.
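Rebuilding from the upstream sources is roughly (a sketch; the version number is just an example, grab a current tarball from ftp.gnu.org/gnu/bash/):

tar xf bash-4.4.tar.gz
cd bash-4.4
./configure --enable-net-redirections
make
make install   # installs to /usr/local by default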

Now for something more advanced (but entirely optional):

Continue reading "Keeping IRC nicks active"

Binding applications to a specific IP

Linux

These days many systems are multi-homed in the sense that they have more than one IP address bound at the same time.
I.e. for different network cards, virtual IPs for shared servers or just using WiFi and a wired network connection at the same time on a laptop.

Murphy of course makes sure that your system will choose the worst IP (i.e. the one on slow WiFi or the one reserved for admin access) when an application does not specifically support binding to a selected IP address. And Mozilla Firefox for example doesn't.

The kernel chooses an outgoing IP from those in the routing table with the same metric:

daniel@server:~$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.0.2.1         0.0.0.0         U     0      0        0 eth0
0.0.0.0         192.0.2.2         0.0.0.0         U     0      0        0 eth1
0.0.0.0         192.0.2.3         0.0.0.0         U     0      0        0 eth2
0.0.0.0         192.0.2.4         0.0.0.0         U     0      0        0 eth3

You can obviously play around with the metric and make the kernel router prefer the desired interface above others. This will affect all applications though. Some people use the firewall to NAT all packets to port 80 onto the network interface desired for web browsing. Gee, beware the http://somewebsite.tld:8080 links...

Thankfully Daniel Ryde has solved the problem via a LD_PRELOAD shim. With his code you can run

daniel@laptop:~$ BIND_ADDR="192.0.2.100" LD_PRELOAD=/usr/lib/bind.so firefox (*)

and happily surf away.

To compile his code (3.3kB, local copy, see note 1) you need to run

gcc -nostartfiles -fpic -shared bind.c -o bind.so -ldl -D_GNU_SOURCE
strip bind.so
cp -i bind.so /usr/lib/

and you're set to go.

If you don't have gcc available (and trust me) you can download pre-compiled 32bit and 64bit (glibc-2) bind.so libraries here (4.5kB).

I guess because Daniel Ryde hid his code so well on his webpage, Robert J. McKay wrote another LD_PRELOAD shim, called Bindhack (4.5kB, local mirror). This will - as is - only compile on 32bit machines. But YMMV.

Run the above command (*) with your desired (and locally bound) IP address in bash and visit MyIP.dk or DNStools.ch or any of the other services that show your external IP to see whether you've succeeded.

Notes:

  1. Daniel Ryde did not specify the -D_GNU_SOURCE in the comments section of bind.c. Modern glibc/gcc need that as he used RTLD_NEXT which is Unix98 and not POSIX. I amended the local copy of bind.c and sent him an email so he can update his.
  2. Both are IPv4 only, no IPv6 support.

Updates:

19.03.15 madmakz wrote in to clarify that all of the bind LD_PRELOAD shims only work with TCP connections. So not with UDP.
I'm not aware of a shim that manipulates UDP sockets.

14.01.14 Christian Pellegrin wrote a superb article on how to achieve per-application routing with the help of Linux network namespaces.

16.06.13 showip.be seems to be gone, so I replaced it with dnstools.ch in the text above. There are plenty of others as well.

22.06.12 Lennart Poettering has a IPv4 only version of a shim and a rather good readme available at his site.

29.11.10 Catalin M. Boie wrote another LD_PRELOAD shim, force_bind. I have not tested this one. It's capable of handling IPv6 binds.

11.01.09 Daniel Ryde has replied to my email and updated his local copy now as well.

Google GMail dominating the email market

Other

Google's GMail was launched in April 2004 and Google only dropped its invite system to open up to the general public in February 2007, according to Wikipedia's history of GMail. That's some five years of operations up to now.

It kind of amazed me how many people I know have GMail as their primary mail provider. So I took the chance today to get a bit of statistics to check my gut feelings:

A friend of mine selected some (mostly American) bloggers that have indicated specific interests in a topic related to his Doctoral thesis. This sample ended up being 1,375 people. These folks have 295 different email domains. Only.

A whopping 46% of the (rather random) sample use GMail, 12% Yahoo, 8% Hotmail and about 3% AOL. While Yahoo has some foreign domains in the sample (yahoo.co.uk, yahoo.ca, see mostly American bloggers above), these add up to around 0.1% of the sample so it's not really significant.

Distribution of American blogger's email domains

This data is in no way representative, but still wow. Google basically has a monopoly on search and now seems to have a close-to-majority footprint in personal email.

I guess the dominance is currently larger in the States than in Europe or Asia as GMail has only gradually learned languages beyond English.
Large local providers should also have some foothold in these markets. Similar to the Comcast and SBC customers still significant in the sample depicted above. Just the local providers in Europe and Asia will be somewhat stronger (for now). Google is also aggressively targeting corporations with hosted email and apps now so one can expect further and accelerated growth in that area. Quite a number of companies are considering using hosted email instead of the conventional mail system they have operated on site for many years now.

So while Gina Trapani recommends "Break Google's Monopoly on Your Data: Switch to Yahoo Search", may I humbly point out: It's becoming quite impossible to just keep your emails between the recipient and the addressee these days.

Even if you personally do not use GMail, Google can (technically) still profile you because a huge chunk of people you communicate with send from GMail and receive and store your emails there.

Nearly all email that is sent also passes spam filters before delivery. Google bought the Postini spam filter in 2007. That anti-spam service is used by many enterprises and even city governments, see here.

So time to consider (unencrypted) email as what it has always been: The digital equivalent of a postcard.
Just now Google has become the postmen. All of them, every second shift. You should hope they're not nosey. Or send letters.

Update:

11.05.2014: Benjamin Mako Hill has written a blog entry Google Has Most of My Email Because It Has All of Yours doing analysis for his own email box. He found a third of his inbox emails come from Google and - as he doesn't usually reply to newsletters and the like - more than half of his own email replies (57% in 2013) end up at GMail. He published his code in case you want to do the analysis on our own email.

httpdate - set local date and time from a web server

Linux

While ntp may be a great protocol, I find it quite bloated and slow for the simple purpose of just setting a local date and time to a reference clock. I do not need 20ms accuracy on a notebook's clock :-). Thus I use(d) rdate for a decade now but the public rdate servers are slowly dying out. So I'm replacing it more and more with htpdate which works quite nicely. It's written in C and a perl alternative is available on the author's site. There is also a forked windows version of it available.

While developing a somewhat larger bash script (which syncs a few servers), I wondered whether I could realize the time sync part in bash as well.

It's quite possible:

# open a tcp connection to www.google.com
exec 3<>/dev/tcp/www.google.com/80
# say hello HTTP-style
echo -e "GET / HTTP/1.0\n\n">&3
# parse for a Date: line and with a bit of magic throw the date-string at the date command
LC_ALL=C LANG=en date --rfc-2822 --utc -s "$(head <&3 | grep -i "Date: " | sed -e s/Date\:\ //I)"
# close the tcp connection
exec 3<&-

Simple, eh?

Continue reading "httpdate - set local date and time from a web server"