
Prevent Ubuntu from phoning home

Linux

Ubuntu has unfortunately decided to implement yet another "phone home" feature, this time transferring your lsb_release information, CPU model and speed (from /proc/cpuinfo), the uptime output, most of uname -a and the curl version to an Ubuntu news web service.

Here is the Launchpad bug report #1637800 introducing this ... web bug.

This thing runs both systemd-timer based (via /lib/systemd/system/motd-news.service and /lib/systemd/system/motd-news.timer) and on request when you log in (via /etc/update-motd.d/50-motd-news).

Ubuntu news on ssh login

There has even been a bug filed about the motd advertising HBO's Silicon Valley show.

To prevent this from running (it is enabled by default on Ubuntu 17.04 and will probably be backported to earlier releases as well), edit /etc/default/motd-news to include

ENABLED=0

For your automated installs, this can be scripted:

sed -i "s/ENABLED=1/ENABLED=0/" /etc/default/motd-news # run as root
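To be extra sure the news fetcher never fires again, you can also stop and disable the systemd units mentioned above (a sketch using the unit names as shipped on Ubuntu 17.04; run as root):

systemctl stop motd-news.timer
systemctl disable motd-news.timer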

Update:

02.07.2017: Dustin Kirkland responded to a YC "Hacker News" mention of his motd spam. He writes:

You're welcome to propose your own messages for merging, if you have a well formatted, informative message for Ubuntu users.
We'll be happy to review and include them in the future.

What could possibly go wrong?

Updating the Dell XPS 13 9360 Thunderbolt firmware to get VGA and HDMI working

IT

Last year I bought the wonderful Dell XPS 13 9360 as it is certified to work with Ubuntu Linux and is just all around an awesome device. Dell made me buy the Windows version as only that got a 1 TB NVMe-SSD option. Linux apparently is only worthy of the 512GB and below models. What product manager comes up with such a stupid idea? Are SKUs that precious? Anyways ... so I bought a Windows version and that got wiped with a Linux install immediately as that was and is its intended purpose.

Dell DA200 USB-C to HDMI/VGA/Ethernet/USB 3.0 adapter

I purchased a DA200 with the system, which is Dell's USB-C-to-anything (HDMI/VGA/Ethernet/USB 3.0) dongle. When I got the laptop, the Ethernet port and USB 3.0 via the DA200 were working right out of the box. The VGA and HDMI ports were detected by Ubuntu but there was no way to get connected screens working. They stayed black.

The device was shipped with Thunderbolt firmware NVM18 and we were told rather quickly by Dell that this would be fixed with an update. And lo and behold, Dell published firmware version NVM21 just in time for Christmas 2016. Unfortunately, while Dell's BIOS updates are Windows / DOS executables that can simply be fed to the Dell UEFI flash updater, so the main BIOS can be updated from any OS, including Linux, without any hassle, the Intel-provided Thunderbolt update needs Windows to get installed. Or, well, there is a convoluted way to compile an out-of-tree Linux kernel module, download and compile a few sets of software and do it via Linux. That description read so lengthy that I didn't even try it. Additionally there seems to have been no progress at all in getting this mainlined in the last three months, so I chose the cheap route and installed Windows 10 on a USB thumb drive [1].

This is done via the (unfortunately Windows only) Win2USB software (the free version is sufficient).

Update: There's a new bash script windows2usb that looks good and should get you a bootable Windows USB thumb drive in Linux. WinUSB (which stopped working sometime in the Windows 10 era) has also been forked and updated into WoeUSB. And there is WinToUSBLinux, yet another shell script. Give them a try.

Once Windows has rebooted often enough to finish its own installation, you can work with the USB thumb drive install as with any Windows 10. Nice.

Dell TPM 1.2 to 2.0 firmware update

Put all the files you downloaded from Dell to update your XPS 13 into a directory on the USB thumb drive. That way Windows does not need to have any network connectivity.

I first updated the TPM 1.2 firmware to a TPM 2.0 version (DellTpm2.0_Fw1.3.2.8_V1_64.exe at the time of writing this blog entry). Now this is quite hilarious as the Windows installer doesn't do anything but put a UEFI firmware update into the EFI partition, which then runs on reboot. Duh. You do need to manually clear the TPM in the BIOS' security settings section (there's a clear checkbox) to be able to program new firmware onto it.
Thunderbolt firmware upgrade progress bar / Thunderbolt firmware upgrade successful

Now back in Windows, install the Thunderbolt drivers (Chipset_Driver_J95RR_WN32_16.2.55.275_A01.exe at the time of writing this) and then run Intel_TBT3_FW_UPDATE_NVM21_0THFT_A00_3.21.00.008.exe, which is the NVM21 Thunderbolt firmware update (or a later version).

Reboot again (into Linux if you want to) and (drumroll) the VGA and HDMI ports are working. Awesome.

An update log can be found on the USB thumb drive at Dell\UpdatePackage\Log\Intel_TBT3_FW_UPDATE_NVM21_0THFT_A00_3.log:

*** Dell Thunderbolt firmware update started on 4/6/2017 at 12:56:56***
Command: C:\Install\Intel_TBT3_FW_UPDATE_NVM21_0THFT_A00_3.21.00.008.exe 

Starting FW Update....
***TBT GPIO Power is Turning On:  No Dock or DockInfo.
***TBT GPIO power is turned on.

Thunderbolt Firmware Update SUCCEEDED
TBT Items Registry creation is Success at \SOFTWARE\Dell\ManageableUpdatePackage\Thunderbolt Controller:
User selected OK for reboot
System TBT NVM Current Version:BCD:00000018: New Version:BCD:00000021

Exit Code = 0 (Success) 
***Thunderbolt Firmware flash finished at 4/6/2017 at 13:00:23***

If Windows has added its boot loader entry into your UEFI options, you can easily remove that again with the Dell UEFI BIOS or efibootmgr from within Linux.
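With efibootmgr that boils down to (a sketch; the Boot#### entry number will differ on your system):

efibootmgr # list the UEFI boot entries, note the "Windows Boot Manager" number
efibootmgr -b 0001 -B # delete boot entry 0001 (substitute the number from your listing)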

The whole process took me less than 30 minutes. And most of that was creating the Windows USB thumb drive. I'll keep that for future updates until Intel and Dell have sorted out the Thunderbolt update process in Linux.

Updates:

18.05.17: Intel has published a large patchset on LKML to enable Thunderbolt security levels (thus preventing DMA attacks) and get NVM firmware upgrades mainlined. Yeah!

02.05.18: Added a link to the windows2usb bash script that should remove the need to create a bootable Windows USB thumb drive with Windows-only software.

11.06.18: Added a link to WoeUSB which is currently packaged for Ubuntu in a PPA.

08.04.20: Added a link to WinToUSBLinux. A recently released shell script to create a bootable Windows USB stick from Linux.


  1. If you go the Linux route please post a minimal image somewhere (kernel, initrd, squashfs or FAT16/32 raw image) and put a link into a comment below this blog post. Thanks. 

Security is hard, open source security unnecessarily harder

IT

Now it is a commonplace that security is hard. It involves advanced mathematics and a single, tiny mistake or omission in implementation can spoil everything.

And the only sane IT security can be open source security. Because you need to assess the algorithms and their implementation, and you need to be able to completely verify that implementation. You simply can't do that unless you have the code and can compile it yourself to produce a trusted (ideally reproducible) build. A no-brainer for everybody in the field.

But we make it unbelievably hard for people to use security tools. Because these have grown over decades fostered by highly intelligent people with no interest in UX.
"It was hard to write, so it should be hard to use as well."
And then complain about adoption.

PGP / gpg has received quite some fire this year and the good news is this has resulted in funding for the sole gpg developer. Which will obviously not solve the UX problem.

But the much worse offender is OpenSSL. It is so hard to use that even experienced hackers fail.

IRC wallop on hackint

Now, securely encrypting a mass communication medium like IRC is not possible at all. Read Trust is not transitive: or why IRC over SSL is pointless [1].
Still it makes wiretapping harder and that may be a good thing these days.

LibreSSL has forked the OpenSSL code base "with goals of modernizing the codebase, improving security, and applying best practice development processes". No UX improvement. Cleaner code for the chosen few. Duh.

I predict the re-implementations and gradual improvement scenarios will fail. The nearly-impossible-to-use-right situation with both gpg and (much more importantly) OpenSSL cannot be fixed by gradual improvements and code reviews, however thorough.

Now the "there's an App for this" security movement won't work out on a grand scale either:

  1. Most often not open source. Notable exceptions: ChatSecure, TextSecure.
  2. No reference implementations with excellent test servers and well documented test suites but products. "Use my App.", "No, use MY App!!!".
  3. Only secures chat or email. So they cover the VC-powered ("next WhatsApp") mass-adoption markets, but not the really interesting things to improve upon (CA, code signing, FDE, ...).
  4. While everybody is focusing on mobile adoption the heavy lifting is still on servers. We need sane libraries and APIs. No App for that.

So we need a new development, a new code base, a new open source product. Sadly the Core Infrastructure Initiative so far only funds existing open source projects in dire need and people hunting bugs.

It basically makes the bad solutions of today a bit more secure and ensures maintenance of decade old crufty code bases. That way it extends the suffering of everybody using the inadequate solutions of today.

That's inevitable until we have a better stack, but we need to look into getting rid of gpg and OpenSSL and replacing them with something new. Something designed well from the ground up, technically and from a user experience perspective.

Now who's in for a five year funding plan? $3m [2] annually. ROCE 0. But a very good chance to get the OBE awarded.

Keep calm and enjoy the silence

Updates:

10.06.22: Carl Tashian made a GUI mockup to show the complexity of the OpenSSL "user interface".

21.07.19: A current essay on "The PGP problem" is making rounds and lists some valid issues with the file format, RFCs and the gpg implementation. The GnuPG-users mailing list has a discussion thread on the issues listed in the essay.

19.01.19: Daniel Kahn Gillmor, a Senior Staff Technologist at the ACLU, tried to get his gpg key transition right. He put a huge amount of thought and preparation into the transition. To support Autocrypt (another try to get GPG usable for more people than a small technical elite), he specifically created different identities for himself as a person and his two main email addresses. Two days later he had to invalidate his new gpg key and back off to less "modern" identity layouts because many of the brittle pieces of infrastructure around gpg, from emacs to gpg signature management frontends to mailing list managers, fell over dead.

28.11.18: Changed the Quakenet link on why encrypting IRC is useless to an archive.org one as they have removed the original content.

13.03.17: Chris Wellons writes about why GPG is a failure and created a small portable application Enchive to replace it for asymmetric encryption.

24.02.17: Stefan Marsiske has written a blog article: On PGP. He argues about adversary models and when gpg is "probably" [3] still good enough to use. To me a security tool can never be a sane choice if the UI is so convoluted that only a chosen few stand at least a chance of using it correctly. It doesn't matter who or what your adversary is.
Stefan concludes his blog article:

PGP for encryption as in RFC 4880 should be retired, some sunk-cost-biases to be coped with, but we all should rejoice that the last 3-4 years had so much innovation in this field, that RFC 4880 is being rewritten[Citation needed] with many of the above in mind and that hopefully there'll be more and better tools. [..]

He gives an extensive list of tools he considers worth watching in his article. Go and check whether something in there looks like a possible replacement for gpg to you. Stefan also gave a talk at the OpenPGP conference 2016 with similar content (slides).

14.02.17: James Stanley has written up a nice account of his two hour venture to get encrypted email set up. The process is speckled with bugs and inconsistent nomenclature capable of confusing even a technically inclined person. There has been no progress in the last ~two years since I wrote this piece. We're all still riding dead horses. James summarizes:

Encrypted email is nothing new (PGP was initially released in 1991 - 26 years ago!), but it still has a huge barrier to entry for anyone who isn't already familiar with how to use it.

04.09.16: Greg Kroah-Hartman ends an analysis of the Evil32 PGP keyid collisions with:

gpg really is horrible to use and almost impossible to use correctly.

14.11.15:
Scott Ruoti, Jeff Andersen, Daniel Zappala and Kent Seamons of BYU, Utah, have analysed the usability [local mirror, 173kB] of Mailvelope, a webmail PGP/GPG add-on based on a Javascript PGP implementation. They describe the results as "disheartening":

In our study of 20 participants, grouped into 10 pairs of participants who attempted to exchange encrypted email, only one pair was able to successfully complete the assigned tasks using Mailvelope. All other participants were unable to complete the assigned task in the one hour allotted to the study. Even though a decade has passed since the last formal study of PGP, our results show that Johnny has still not gotten any closer to encrypt his email using PGP.

  1. Quakenet has removed that article citing "near constant misrepresentation of the presented argument" sometime in 2018. The contents (not misrepresented) are still valid, so I have added an archive.org Wayback Machine link instead. 

  2. The estimate was $2m until end of 2018. The longer we wait, the more expensive it'll get. And - obviously - ever harder. E.g. nobody needed to care about side-channel attacks on big.LITTLE five years ago. But now they start to hit servers and security-sensitive edge devices. 

  3. Stefan says "probably" five times in one paragraph. Probably needs an editor. The person not the application. 

Apple Timemachine backups on Debian 8 (Jessie)

Debian

Upgrading Debian 7 (Wheezy) servers to Debian 8 (Jessie) proves (unexpectedly) quite rough around the edges.

That's what you get for using a version x.0, we should have known better :-).

And - of course - the release notes follow the common practice of not even mentioning any of the issues we encountered so far.

Ah, well, let's go through the first one:

In Debian 7 (Wheezy) there was netatalk 2.2.2 (packages link). Now during upgrades that package may or may not get removed. There is no netatalk in Debian 8 (Jessie) anymore. Duh. There is in sid (aka Debian unstable) (packages link), so we may see a backport some time. Or not. In any case this is still 2.2.5 at the time of writing, and as Adrian Knoth put it in the three year old bug asking for an upgrade to Netatalk 3:

Let's not ship another release without netatalk3, it's embarrassing.

Yes. It is. Removing a working version and not even mentioning it in the release notes is even worse though. So no cookies there.

Luckily the absolutely awesome Debian and greater FLOSS community have sorted 90% of the problem out for us already:

A quick Google search turns up an excellent article on netatalk's wiki that details installing Netatalk 3.1.7 on Debian 8 Jessie. There are two shortcomings to this: First, it doesn't compile to .debs but installs alongside apt and friends, bypassing the package manager. Second, it compiles in Spotlight search, courtesy of Gnome tracker, which doesn't really work well on servers yet. Hence we're lucky that Adrian Knoth's debified install has not yet added the tracker dependencies. It does compile for systemd use (the default for Debian 8 Jessie). If you want to continue using SysVInit, you need to modify debian/rules.

Compiling to .debs becomes as easy as:

# get build dependencies and a few helpers
apt-get install build-essential devscripts debhelper cdbs autotools-dev dh-buildinfo libdb-dev libwrap0-dev libpam0g-dev libcups2-dev libkrb5-dev libltdl3-dev libgcrypt11-dev libcrack2-dev libavahi-client-dev libldap2-dev libacl1-dev libevent-dev d-shlibs dh-systemd
# in case you want to try the tracker support (you need to amend the debian/ build config as well)
# apt-get install tracker libtracker-sparql-1.0-dev libtracker-miner-1.0-dev  
git clone https://github.com/adiknoth/netatalk-debian
cd netatalk-debian
debuild -b -uc -us

This should leave you with (at the time of writing this):

libatalk-dev_3.1.7-1_amd64.deb (development files for the libatalk library, dev only)
  md5:  e5a465e39a8560c919d8db85c8e5a83b
  sha1: 0b924cf75f22ab42406289c6f18ae0243d6396a3

libatalk16_3.1.7-1_amd64.deb (libatalk library, needed)
  md5:  17a3d677ed0b3df1c2f4c1a8ab9045fd
  sha1: 7345ed3edd442716c99c2fe979140703204c0826

netatalk_3.1.7-1_amd64.deb (netatalk daemons, needed)
  md5:  c694abca7f3cdc0070b2b3e7d528324a
  sha1: 932d1e3d5899958f29e79a7ba40e858d4ac272e8

Obviously you can download the files above if you run the AMD64 architecture and trust me enough to compile them for you.
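Installing the needed packages is then just a matter of (a sketch, assuming the file names from the table above; run as root):

dpkg -i libatalk16_3.1.7-1_amd64.deb netatalk_3.1.7-1_amd64.deb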

Continue reading "Apple Timemachine backups on Debian 8 (Jessie)"

Managing a project consisting of multiple git repositories

IT

The core team organizing DebConf, the annual Debian developer conference, reached out to me two weeks ago to help support this year's effort a bit.

I'm very happy to do so as Debian is a cornerstone of everything I do in the Open Source/Free Software space.

Screenshot of git_pull_all with color

To get me started I got access to a lot of mailing lists and irc channels. And even more git repositories. So many that the DebConf team even has an instruction page on how the repositories all fit together.

It's unfortunately quite common to split a bigger project into many git repositories to ease access rights management and reduce the noise and data transfer volume for the average user. The downside is, everybody ends up with a dozen or more individual repositories to keep pulling. And then there's git annex for yet another level of indirection.

Joey Hess, a former Debian developer, has even written an extensive tool, myrepos, to meta-manage the different repositories, and it can do quite some magic across different SCMSs [1]. In my case this is a bit of overkill though.

And using myrepos may leave you confused at some point whether to run mr or git directly for each batch of repos you have inherited over time while working on multiple projects. [2] Thus I prefer the simple route:

Check out each repository into a common top-level directory (~/debconf/ in this case) and then put the following two lines into an executable script git_pull_all into that top level directory:

#!/bin/sh
find ~/debconf -mindepth 1 -maxdepth 1 -type d -exec sh -c "cd {}; test -r .git/config && git pull $*" \;

This will allow you to pull all git repos with one command and keep the normal syntax for everything else you do with each repo.

The -mindepth and -maxdepth options instruct find to run your git pull only inside each direct child of the top level directory. So recursion depth = 1. That is the single trick there is to this.
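Usage is as simple as this (any arguments are passed through to git pull via the $* in the script):

chmod +x ~/debconf/git_pull_all
~/debconf/git_pull_all
~/debconf/git_pull_all --rebase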

Updates:

If you like to have some color and a bit of a spaced layout for improved readability, try:

#!/bin/sh
find ~/debconf -mindepth 1 -maxdepth 1 -type d -exec sh -c "cd {}; test -r .git/config && (printf \"\033[1m\033[34m%-50s\033[0m\" \"\${PWD}:\" ; git pull $*)" \;

When you have pull.rebase=true set in your .gitconfig, you can run ./git_pull_all --no-rebase to avoid rebases in case you work somewhere and want to have the merge commits.

P.S.: The DebConf15 Heidelberg registration just opened, please check the DebConf15 homepage for news, venue information and please register if you want to come around.


  1. Source Code Management Systems, like git, mercurial (hg) or subversion (svn). Or God forbid ... cvs. I don't like the (D)VCS (Distributed) Version Control Systems moniker. Because that's not really all these systems do. Not even the most important piece of what they do these days. 

  2. With myrepos you can still work with each individual repository via git. Just so nobody will write in "but...". 

Security by policy does not work

Management

The laptop systems aboard the International Space Station (ISS) have been infected by computer viruses and worms multiple times. The W32.Gammima.AG virus made it to space in July 2008. And it happily spread from laptop to laptop onboard the ISS. The virus has been written to steal credentials for some common games. It is unknown how many of these were run in orbit. The latency would kill the experience for sure.

I am sure there have been policies in place to prevent astronauts carrying personal soft- and hardware up to the ISS. Personal items must be explicitly applied for and will only be approved after severe scrutiny of each item. Even beyond the obvious security considerations, this is necessary as the launch weight needs to be calculated exactly.
NASA and Roscosmos both have very strict policies for their personnel and strict training to make sure they know and follow policy. The group of astronauts primarily affected by the policy is very well known and counts a few dozen heads.

Still at least one infected USB stick made it up to the ISS and could spread its malware. Other infections have happened and we can assume similar infection vectors.

So the policy has proven unenforceable. It is broken. It is still correct per se: there is nothing wrong with prohibiting personal soft- and hardware in a high risk environment. So the policy stays in place. But NASA needed to make sure it relies much less on the policy's effectiveness.

Hence NASA did the only sane thing: move from an unenforceable policy to a technically feasible solution, significantly reducing the security exposure. In May 2013 NASA announced that the ISS laptops are being migrated to Debian 6. Imagine how much pressure Microsoft must have applied to prevent such a technical decision, given the adverse marketing message it sends along the way. And still the engineers at NASA saw this as the best way forward.

The take-away message here is: Security by policy does not work.

Continue reading "Security by policy does not work"

Firefox opens directories in RawTherapee ... or a tale of colliding assumptions

Internet

After installing RawTherapee, a raw photo to jpg image converter, on my Kubuntu 13.04 desktop, Mozilla Firefox started to open the Downloads directory in ... RawTherapee. Gosh. Now that's what you'd want to browse a downloads folder with.

Update-alternatives did not help as RawTherapee is not managed by the Debian alternatives system. Hm, xdg-open still works as intended and opens dolphin.

Asking Google turns up many non-working solution proposals. Hm.

Firefox: Context menu (right click) -> Open Containing Folder

But, searching the collective memory is not completely useless. There is a solved bug report on the RawTherapee bug tracker that explains the issue. The .desktop file RawTherapee installs contains inode/directory as a supported mime type. Well, that's technically correct but makes no sense in the way mime associations are used. Thankfully the bug reporter was able to convince the developer of this and get this fixed for future RawTherapee releases.

So option 1: edit /usr/share/applications/rawtherapee.desktop and remove the inode/directory entry from the MimeType=-line. Then regenerate the various mimetype caches.
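A sketch of that (assuming the stock file location and that the MimeType= line uses the usual trailing semicolons; run as root):

sed -i 's|inode/directory;||' /usr/share/applications/rawtherapee.desktop
update-desktop-database /usr/share/applications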

This still raises the question why Firefox does not use the inode/directory entry from dolphin.desktop. Also, the next update from Debian/Ubuntu will probably overwrite the fixed rawtherapee.desktop file again, as the patch from the RawTherapee developer will only be included in new releases (and these may or may not be backported to Debian/Ubuntu). So you'd need to copy that fixed rawtherapee.desktop file to ~/.local/share/applications/ or the like.

Option 2: find out what the real issue is...

Continue reading "Firefox opens directories in RawTherapee ... or a tale of colliding assumptions"

Securing the grub boot loader

Open Source

Since version 2.0 the behaviour of grub regarding passwords has changed quite substantially. It can be nicely used to secure the boot process so that an X display manager (gdm, kdm, lightdm, ...) or login prompt cannot be circumvented by editing the Linux kernel boot command line parameters. The documentation is concise, but many old how-tos may lead you down the wrong GNU grub "legacy" (the pre-2.0 versions) path.

So this assumes you have grub installed and working, i.e. if you press Shift during boot, you get a grub menu and can edit menu entries via the e key.

First you need to set up grub users and corresponding passwords:

Run grub-mkpasswd-pbkdf2 to hash every password you want to use for grub users (which are technically unrelated to Linux system users at this time).
You'll get a string like 'grub.pbkdf2.sha512.10000...'. It will replace the plain text passwords.
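A sample session (hash shortened):

$ grub-mkpasswd-pbkdf2
Enter password:
Reenter password:
PBKDF2 hash of your password is grub.pbkdf2.sha512.10000.aaa...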

In '/etc/grub.d/40_custom' add lines like:

# These users can change the config at boot time and run any menuentry:
set superusers="root user1"
password_pbkdf2 root grub.pbkdf2.sha512.10000.aaa...
password_pbkdf2 user1 grub.pbkdf2.sha512.10000.bbb...
# This user can only run specifically designated menuentries (not a superuser):
password_pbkdf2 user2 grub.pbkdf2.sha512.10000.ccc...

Once you did this, grub v. 2.0+ will ask for a superuser password every time you want to boot any menu item. This is a changed behaviour from v. 1.9x, which defaulted to allowing all entries if no user restriction was specified. So you need to add '--unrestricted' to all 'menuentries' that any user shall be able to boot. You can edit '/boot/grub/grub.cfg' and add --unrestricted to (the default) menuentries. Or you can edit the 'linux_entry ()' function in '/etc/grub.d/10_linux' so that the 'echo "menuentry ..."' lines include --unrestricted by default:

[...]
echo "menuentry '$(echo "$title" | grub_quote)' --unrestricted ${CLASS} \$menuentry_id_option 'gnulinux-$version-$type-$boot_device_id' {" | sed "s/^/$submenu_indentation/"
else
echo "menuentry '$(echo "$os" | grub_quote)' --unrestricted ${CLASS} \$menuentry_id_option 'gnulinux-simple-$boot_device_id' {" | sed "s/^/$submenu_indentation/"
[...]

Make a backup of this file as it will be overwritten by grub updates. This way all Linux kernels detected by the script will be available to all users without identifying to grub via username / password.

Now issue update-grub to re-generate 'grub.cfg' with the amended menuentries.

If everything worked well, your system can now be booted unrestricted but the grub configuration can only be changed from the grub superusers after identifying with their username and password at the grub prompt.

Bonus point:

If you want to create menuentries that user2 (and any superuser) from the above example user list can run, add blocks like these to the end of '40_custom':

menuentry "Only user2 (or superuser) can run this Windows installation" --users user2 {
set root=(hd1,1)
chainloader +1
}

Update

16.12.2015:
Hector Marco and Ismael Ripoll have found a nearly unbelievable exploit in Grub2 that allows you to tap backspace 28 times to get a rescue shell and that way bypass a password prompt. Time to update!
Read the excellent analysis of the bug and the exploit vector in Hector Marco's blog post.

SyncPlaces, SortPlaces ... preserving these and other excellent Firefox add-ons

Internet

A few days ago I was re-installing a laptop of mine. Downloading the default add-ons for Firefox came to a stall as I could not find SyncPlaces, which I've been using for a few years to synchronize my bookmarks between the systems I use. As I knew the author, Andy Halford, has a homepage I surfed there and found...

"So long Mozilla and thanks for all the fish

If you are here looking for any of my extensions (SyncPlaces, SortPlaces, CheckPlaces or SearchPlaces) then I'm sad to inform you that these are no longer available.

Unfortunately I do not have the time or the motivation to continue to support these in light of the close-minded, short-sighted and increasingly inflexible organisation that Mozilla has become. Apparently spending 1000's of hours promoting their products and providing my extensions to the general public to fill the gaps in their product, and repair bookmarks destroyed by Firefox Sync, counts for little and does not deserve any flexibility or support from them. Apparently add-on developers are second-class citizens to them whatever they may say. Surprising given that these are the main USP of Firefox, and developers are providing their time and efforts for free. My decision was taken after a series of problems the last one being the final straw."
Andy Halford, 17.06.12 (screenshot)

I was unable to find any details on what caused this rant and the serious decision to remove his excellent add-ons from the Internet. There are a few discussions on the Mozillazine forums but nothing enlightening so far. Andy's personal homepage, family homepage and the TotalValidator product he and his wife offer are all still online; he just pulled all his popular Firefox add-ons.

Andy states on his homepage

"I will not be answering any further emails on this matter, I suggest that you direct any questions to Mozilla themselves and ask them why they were happier for me to leave than to use a little common sense." [ib.]

I still tried to contact him and asked for an interview but he has not come back to me so far. I've sent a second email with the URL of this blog post.

The SyncPlaces plug-in is still prominently featured on Wikipedia's Comparison of browser synchronizers at press time.

Now this is a mess. With some ungraciously licensed closed source software, you're probably used to planning migrations once license terms run out, or to upgrading to the much improved successor product because "extended support" for your perfectly fine current version gets ridiculously expensive.

But with open source you have more rights and e.g. the security to use the product as long as you wish. If you have kept a copy of the code (and the license file). Because in this case the Mozilla Firefox installer .xpi's are hard to come by. Mozilla has a centralized add-on repository and once the files are pulled from there, they quickly vanish off the net. Now don't get me wrong, these plug-ins are now unmaintained software, so you will have to migrate at some point in time, but not necessarily now. You have some time for planning and evaluation of alternatives.

Continue reading "SyncPlaces, SortPlaces ... preserving these and other excellent Firefox add-ons"

Apple iPhone ring tones Linux style

Open Source

Apple has crippled the iPhone to not allow normal music files as ringtones. Business decision. Technically any sub 40 second MP4 audio file will do once you rename it to *.m4r and drag-and-drop it to the ringtones folder of your phone in iTunes. Longer ones will work, too. But you'd need a jailbroken iPhone for that as iTunes will refuse to transfer the ringtone file if it's too long. Not much of an issue imho, who keeps ringing your phone for 40 seconds or more?

There's a gazillion websites available telling you how to convert a single .mp3-file to a ringtone with or without iTunes help and there are hundreds of tools doing that for you if you can't find out how to do it with just iTunes itself. Still the ones I tried failed for me as I wanted to convert my 20 or so standard ringtones from the good old Motorola K3 to iPhone ringtones all in one go. Without having to edit each one by hand. They are already nice ringtones and have served me well for years, just too long for the iPhone and in .mp3 format.

The basic processing sequence needed is

  1. Cut the .mp3 down to 39s
  2. Convert the .mp3 -> .wav (with mplayer, normalize output gain while we're at it)
  3. Convert the .wav -> .mp4 (with faac)
  4. Clean up, GOTO 1 for next file

So below is the free shell script to create multiple ringtones in one go on any Linux system. You need to install cutmp3, mplayer and faac for it, so apt-get install cutmp3 mplayer faac on Debian or Ubuntu. cutmp3 is currently not in the portage tree for Gentoo, but you can download an ebuild from Polynomial-C's overlay (mirror). Or you just download the cutmp3 binary from Jochen Puchalla's homepage. There's no error checking in the script, so know your way around the shell before running it.

Without further ado:

#!/bin/sh
#
# convert_to_ringtone file1.mp3 [file2.mp3, ...]
# Placed into the public domain by Daniel Lange, 2011.
#

for arg
do
        echo "Processing $arg..."
        # cut the mp3 down to the first 39 seconds
        cutmp3 -c -a 0:0.0 -b 0:39.0 -i "$arg" -O "$arg.tmp"
        # decode to wav, normalizing the output gain on the way
        mplayer -vo null -vc null -af volnorm -ao pcm:fast:file=tmpfile.wav "$arg.tmp"
        # encode to AAC wrapped in an MP4 container (produces tmpfile.m4a)
        faac -b 128 -c 44100 -w tmpfile.wav
        # strip the .mp3 extension and replace spaces with underscores
        name=$(echo "$arg" | sed -e 's/\.mp3$//' -e 's/ /_/g')
        mv tmpfile.m4a "$name.m4r"
        rm tmpfile.wav
        rm "$arg.tmp"
        echo "$arg done."
done
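Call it with all the files you want converted in one go, e.g.:

./convert_to_ringtone *.mp3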
 

Wikipedia article on Apple's 1984 ad.

Update

23.12.14 Apparently the faac package in Debian and Ubuntu has had the MP4 writing capability removed in v1.28-5 and later due to a minor license incompatibility. See the Debian Changelogs. Duh.

faac (1.28-5) unstable; urgency=low
  [ Andres Mejia ]
  * Disable mp4v2 support.
    This only disables mp4v2 for the faac utility program. The faac
    utility is GPL-2 but the mp4v2 library is MPL-1.1. The two licenses
    are incompatible with each other.

So ... unfortunately you have to build faac from source yourself or pin the v1.28-4 version, which is identical except for the castration anyways.

Random distro dev: "Why oh why doesn't my distro ever head mainstream...?"
Hint: Because of stuff like this.

Keeping IRC nicks active

IRC

Typical IRC services usually allow you to register with nickserv and link a number of nicks to a personal account. It's quite common to have nick, nick_ and nick__ as many IRC clients auto-append underscores if the primary nickname is already in use when connecting. Obviously you can set these alternate nicknames to almost anything you like in a decent client.

Some folks also group a "vanity" nickname or two for whatever reason. To keep these active, people do the "nick shuffle" (/nick newnick, /nick oldnick) all the time:

nick shuffle on freenode

People who forget the occasional nick shuffle may end up losing a grouped nick because it became inactive. While freenode staff try to contact people before dropping linked nicks, there are occasional prunes of "old data" from the services database. And then nobody can really ask upfront.

So before the next big purge comes up, I wrote a small bash script that logs into a nickserv account and cycles through the linked nicks. A few friends and I have used it successfully for many months now.

Grab a copy of keepnick (2.4kB) and drop it into /usr/local/bin.

Keepnick expects to have an accountname, the corresponding password and then a sequence of linked nicks given on its command line.

Something like

/usr/local/bin/keepnick accountname passw0rd linked_nick linked_nick_ vanity_nick MyOtherNick

should work.

For regular use, you need to set up a cron job to call keepnick e.g. every week. So put something like the following script into /etc/cron.weekly/keepnicks_irc or create a corresponding crontab entry for keepnicks_irc if you do not have the convenient cron.* directories set up:

#!/bin/bash
#
# run keepnick for user(s) irc account(s)
# intended to be run from cron, e.g. through /etc/cron.weekly
#

KEEPNICK="/usr/local/bin/keepnick"
# better safe than sorry
PATH="/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin"
export PATH

$KEEPNICK accountname1 passw0rd1 linked_nick1 linked_nick1_ linked_nick1__
$KEEPNICK accountname2 passw0rd2 linked_nick2 linked_nick2_ linked_nick2__
 

You should see keepnick in action now every week like this:

keepnick in action

What happens here is that the IRC services package tells you that keepnick has just authenticated to your account and will now shuffle through all the nicks you asked it to. The big advantage is that it does this outside of channels, so it does not annoy any users. The cron job should make sure you don't forget the nick shuffle anymore.
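If you'd rather use a single crontab entry than the cron.weekly script above, something like this will do (a sketch; runs Mondays at 04:23):

23 4 * * 1    /usr/local/bin/keepnick accountname passw0rd linked_nick linked_nick_ vanity_nick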

Making sure your bash supports network connections

Stock bash supports network connections, but on Debian and old (pre-Karmic) Ubuntu that capability was disabled at compile time.

If you need to check whether your bash is compiled with network support, type cat < /dev/tcp/time.nist.gov/13 into a bash terminal.

In case that gives you an RFC-867 time string, you're all fine. If not, re-compile your bash with --enable-net-redirections.
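A slightly more explicit check as a script (a sketch; note that a failed network connection will also trigger the error branch):

#!/bin/bash
# time.nist.gov port 13 speaks the RFC-867 daytime protocol
if cat < /dev/tcp/time.nist.gov/13; then
    echo "network redirections: OK"
else
    echo "this bash was probably built without --enable-net-redirections" >&2
fi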

Now for something more advanced (but entirely optional):

Continue reading "Keeping IRC nicks active"

Binding applications to a specific IP

Linux

These days many systems are multi-homed in the sense that they have more than one IP address bound at the same time.
E.g. for different network cards, virtual IPs for shared servers, or just using WiFi and a wired network connection at the same time on a laptop.

Murphy of course makes sure that your system will choose the worst IP (i.e. the one on slow WiFi or the one reserved for admin access) when an application does not specifically support binding to a selected IP address. And Mozilla Firefox, for example, doesn't.

The kernel chooses an outgoing IP from those in the routing table with the same metric:

daniel@server:~$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.0.2.1         0.0.0.0         U     0      0        0 eth0
0.0.0.0         192.0.2.2         0.0.0.0         U     0      0        0 eth1
0.0.0.0         192.0.2.3         0.0.0.0         U     0      0        0 eth2
0.0.0.0         192.0.2.4         0.0.0.0         U     0      0        0 eth3

You can obviously play around with the metric and make the kernel prefer the desired interface over the others. This will affect all applications though. Some people use the firewall to NAT all packets to port 80 onto the network interface desired for web browsing. Gee, beware the http://somewebsite.tld:8080 links...
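Demoting one of the default routes with iproute2 would look like this (a sketch with the example addresses from above; run as root, and remember it affects all applications):

ip route del default via 192.0.2.2 dev eth1
ip route add default via 192.0.2.2 dev eth1 metric 100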

Thankfully Daniel Ryde has solved the problem via an LD_PRELOAD shim. With his code you can run

daniel@laptop:~$ BIND_ADDR="192.0.2.100" LD_PRELOAD=/usr/lib/bind.so firefox (*)

and happily surf away.

To compile his code (3.3kB, local copy, see note 1) you need to run

gcc -nostartfiles -fpic -shared bind.c -o bind.so -ldl -D_GNU_SOURCE
strip bind.so
cp -i bind.so /usr/lib/

and you're set to go.

If you don't have gcc available (and trust me) you can download pre-compiled 32bit and 64bit (glibc-2) bind.so libraries here (4.5kB).

I guess because Daniel Ryde hid his code so well on his webpage, Robert J. McKay wrote another LD_PRELOAD shim, called Bindhack (4.5kB, local mirror). This will - as is - only compile on 32bit machines. But YMMV.

Run the above command (*) with your desired (and locally bound) IP address in bash and visit MyIP.dk or DNStools.ch or any of the other services that show your external IP to see whether you've succeeded.

Notes:

  1. Daniel Ryde did not specify the -D_GNU_SOURCE in the comments section of bind.c. Modern glibc/gcc need that as he used RTLD_NEXT which is Unix98 and not POSIX. I amended the local copy of bind.c and sent him an email so he can update his.
  2. Both are IPv4 only, no IPv6 support.

Updates:

19.03.15 madmakz wrote in to clarify that all of the bind LD_PRELOAD shims only work with TCP connections. So not with UDP.
I'm not aware of a shim that manipulates UDP sockets.

14.01.14 Christian Pellegrin wrote a superb article on how to achieve per-application routing with the help of Linux network namespaces.

16.06.13 showip.be seems to be gone, so I replaced it with dnstools.ch in the text above. There are plenty of others as well.

22.06.12 Lennart Poettering has an IPv4 only version of a shim and a rather good readme available at his site.

29.11.10 Catalin M. Boie wrote another LD_PRELOAD shim, force_bind. I have not tested this one. It's capable of handling IPv6 binds.

11.01.09 Daniel Ryde has replied to my email and updated his local copy now as well.

Google GMail dominating the email market

Other

Google's GMail was launched in April 2004, and only in February 2007 did Google drop its invite system to open up to the general public, according to Wikipedia's history of GMail. That's some five years of operations up to now.

It kind of amazed me how many people I know have GMail as their primary mail provider. So I took the chance today to get a bit of statistics to check my gut feelings:

A friend of mine selected some (mostly American) bloggers that have indicated specific interests in a topic related to his doctoral thesis. This sample ended up being 1,375 people. These folks have 295 different email domains. Only.

A whopping 46% of the (rather random) sample use GMail, 12% Yahoo, 8% Hotmail and about 3% AOL. While Yahoo has some foreign domains in the sample (yahoo.co.uk, yahoo.ca, see "mostly American bloggers" above), these add up to around 0.1% of the sample, so they're not really significant.

Distribution of American blogger's email domains

This data is in no way representative, but still wow. Google basically has a monopoly on search and now seems to have a close-to-majority footprint in personal email.

I guess the dominance is currently larger in the States than in Europe or Asia as GMail has only gradually learned languages beyond English.
Large local providers should also have some foothold in these markets, similar to the Comcast and SBC customers still significant in the sample depicted above. Just the local providers in Europe and Asia will be somewhat stronger (for now). Google is also aggressively targeting corporations with hosted email and apps now, so one can expect further and accelerated growth in that area. Quite a number of companies are considering hosted email instead of the conventional mail system they have operated on site for many years.

So while Gina Trapani recommends "Break Google's Monopoly on Your Data: Switch to Yahoo Search", may I humbly point out: it's becoming quite impossible to keep your emails just between the sender and the addressee these days.

Even if you personally do not use GMail, Google can (technically) still profile you because a huge chunk of people you communicate with send from GMail and receive and store your emails there.

Nearly all email that is sent also passes spam filters before delivery. Google bought the Postini spam filter in 2007. That anti-spam service is used by many enterprises and even city governments, see here.

So time to consider (unencrypted) email as what it has always been: The digital equivalent of a postcard.
Just now Google has become the postmen. All of them, every second shift. You should hope they're not nosey. Or send letters.

Update:

11.05.2014: Benjamin Mako Hill has written a blog entry Google Has Most of My Email Because It Has All of Yours doing the analysis for his own email box. He found a third of his inbox emails come from Google and - as he doesn't usually reply to newsletters and the like - more than half of his own email replies (57% in 2013) end up at GMail. He published his code in case you want to do the analysis on your own email.

httpdate - set local date and time from a web server

Linux

While ntp may be a great protocol, I find it quite bloated and slow for the simple purpose of setting the local date and time from a reference clock. I do not need 20ms accuracy on a notebook's clock :-). Thus I have used rdate for a decade now, but the public rdate servers are slowly dying out. So I'm replacing it more and more with htpdate, which works quite nicely. It's written in C and a perl alternative is available on the author's site. There is also a forked Windows version of it available.

Developing a somewhat larger bash script (which syncs a few servers), I wondered whether I could realize the time sync part in bash as well.

It's quite possible:

# open a tcp connection to www.google.com
exec 3<>/dev/tcp/www.google.com/80
# say hello HTTP-style
echo -e "GET / HTTP/1.0\n\n" >&3
# parse out the Date: header and throw the date string at the date command
LC_ALL=C LANG=en date --rfc-2822 --utc -s "$(head <&3 | grep -i "Date: " | sed -e 's/Date: //I')"
# close the tcp connection
exec 3<&-

Simple, eh?

Continue reading "httpdate - set local date and time from a web server"

Disabling a group policy'd screensaver on Windows

IT

I guess many people know the issue of having a screen saver forced active after some time through a group policy in a corporate environment. This is usually done to make sure systems are locked during breaks if people forget to press Win+L (or Ctrl+Alt+Del and then Enter). While that may well help IT security, it turns problematic when giving presentations for extended periods of time. Having to move the mouse via the presentation pointer every few minutes, or to dash back to the PC once the screen saver has kicked in, again, is simply annoying. On your company's systems you may be able to get the system admins to allow configuring the interval or disabling the screen saver, but on foreign systems you're often lost. But...

Continue reading "Disabling a group policy'd screensaver on Windows"