
Security is hard, open source security unnecessarily harder

IT

Now it is a commonplace that security is hard. It involves advanced mathematics, and a single, tiny mistake or omission in the implementation can spoil everything.

And the only sane IT security can be open source security. Because you need to assess the algorithms and their implementation and you need to be able to completely verify the implementation. You simply can't if you don't have the code and cannot compile it yourself to produce a trusted (ideally reproducible) build. A no-brainer for everybody in the field.

But we make it unbelievably hard for people to use security tools. Because these have grown over decades fostered by highly intelligent people with no interest in UX.
"It was hard to write, so it should be hard to use as well."
And then they complain about adoption.

PGP / gpg has received quite some fire this year and the good news is this has resulted in funding for the sole gpg developer. Which will obviously not solve the UX problem.

But the much worse offender is OpenSSL. It is so hard to use that even experienced hackers fail.

IRC wallop on hackint

Now, securely encrypting a mass communication medium like IRC is not possible at all. Read Trust is not transitive: or why IRC over SSL is pointless1.
Still it makes wiretapping harder and that may be a good thing these days.

LibreSSL has forked the OpenSSL code base "with goals of modernizing the codebase, improving security, and applying best practice development processes". No UX improvement. A cleaner code base for the chosen few. Duh.

I predict the re-implementations and gradual improvement scenarios will fail. The nearly-impossible-to-use-right situation with both gpg and (much more importantly) OpenSSL cannot be fixed by gradual improvements and code reviews, however thorough.

Now the "there's an App for this" security movement won't work out on a grand scale either:

  1. Most often not open source. Notable exceptions: ChatSecure, TextSecure.
  2. No reference implementations with excellent test servers and well-documented test suites, but products instead. "Use my App.", "No, use MY App!!!".
  3. Only secures chat or email. So it targets the VC-powered ("next WhatsApp") mass-adoption markets but not the really interesting things to improve upon (CA, code signing, FDE, ...).
  4. While everybody is focusing on mobile adoption the heavy lifting is still on servers. We need sane libraries and APIs. No App for that.

So we need a new development, a new code base, a new open source product. Sadly the Core Infrastructure Initiative so far only funds existing open source projects in dire need and people hunting bugs.

It basically makes the bad solutions of today a bit more secure and ensures maintenance of decades-old crufty code bases. That way it extends the suffering of everybody using the inadequate solutions of today.

That's inevitable until we have a better stack, but we need to look into getting rid of gpg and OpenSSL and replacing them with something new. Something designed well from the ground up, technically and from a user experience perspective.

Now who's in for a five year funding plan? $3m2 annually. ROCE 0. But a very good chance to get the OBE awarded.

Keep calm and enjoy the silence

Updates:

10.06.22: Carl Tashian made a GUI mockup to show the complexity of the OpenSSL "user interface".

21.07.19: A current essay on "The PGP problem" is making rounds and lists some valid issues with the file format, RFCs and the gpg implementation. The GnuPG-users mailing list has a discussion thread on the issues listed in the essay.

19.01.19: Daniel Kahn Gillmor, a Senior Staff Technologist at the ACLU, tried to get his gpg key transition right. He put a huge amount of thought and preparation into the transition. To support Autocrypt (another attempt to make GPG usable for more people than a small technical elite), he specifically created different identities for himself as a person and his two main email addresses. Two days later he had to invalidate his new gpg key and back off to less "modern" identity layouts because many of the brittle pieces of infrastructure around gpg, from emacs to gpg signature management frontends to mailing list managers, fell over dead.

28.11.18: Changed the Quakenet link on why encrypting IRC is useless to an archive.org one as they have removed the original content.

13.03.17: Chris Wellons writes about why GPG is a failure and created a small portable application Enchive to replace it for asymmetric encryption.

24.02.17: Stefan Marsiske has written a blog article: On PGP. He argues about adversary models and when gpg is "probably" 3 still good enough to use. To me a security tool can never be a sane choice if the UI is so convoluted that only a chosen few stand at least a chance of using it correctly. Doesn't matter who or what your adversary is.
Stefan concludes his blog article:

PGP for encryption as in RFC 4880 should be retired, some sunk-cost-biases to be coped with, but we all should rejoice that the last 3-4 years had so much innovation in this field, that RFC 4880 is being rewritten[Citation needed] with many of the above in mind and that hopefully there'll be more and better tools. [..]

He gives an extensive list of tools he considers worth watching in his article. Go and check whether something in there looks like a possible replacement for gpg to you. Stefan also gave a talk on the OpenPGP conference 2016 with similar content, slides.

14.02.17: James Stanley has written up a nice account of his two hour venture to get encrypted email set up. The process is speckled with bugs and inconsistent nomenclature capable of confusing even a technically inclined person. There has been no progress in the last ~two years since I wrote this piece. We're all still riding dead horses. James summarizes:

Encrypted email is nothing new (PGP was initially released in 1991 - 26 years ago!), but it still has a huge barrier to entry for anyone who isn't already familiar with how to use it.

04.09.16: Greg Kroah-Hartman ends an analysis of the Evil32 PGP keyid collisions with:

gpg really is horrible to use and almost impossible to use correctly.

14.11.15:
Scott Ruoti, Jeff Andersen, Daniel Zappala and Kent Seamons of BYU, Utah, have analysed the usability [local mirror, 173kB] of Mailvelope, a webmail PGP/GPG add-on based on a Javascript PGP implementation. They describe the results as "disheartening":

In our study of 20 participants, grouped into 10 pairs of participants who attempted to exchange encrypted email, only one pair was able to successfully complete the assigned tasks using Mailvelope. All other participants were unable to complete the assigned task in the one hour allotted to the study. Even though a decade has passed since the last formal study of PGP, our results show that Johnny has still not gotten any closer to encrypt his email using PGP.

  1. Quakenet has removed that article citing "near constant misrepresentation of the presented argument" sometime in 2018. The contents (not misrepresented) are still valid so I have added an archive.org Wayback Machine link instead. 

  2. The estimate was $2m until end of 2018. The longer we wait, the more expensive it'll get. And - obviously - ever harder. E.g. nobody needed to care about side-channel attacks on big.LITTLE five years ago. But now they start to hit servers and security-sensitive edge devices. 

  3. Stefan says "probably" five times in one paragraph. Probably needs an editor. The person not the application. 

Apple Timemachine backups on Debian 8 (Jessie)

Debian

Upgrading Debian 7 (Wheezy) servers to Debian 8 (Jessie) proves (unexpectedly) quite rough around the edges.

That's what you get for using a version x.0, we should have known better :-).

And - of course - the release notes follow the common practice of not even mentioning any of the issues we encountered so far.

Ah, well, let's go through the first one:

In Debian 7 (Wheezy) there was netatalk 2.2.2 (packages link). Now during upgrades that package may or may not get removed. There is no netatalk in Debian 8 (Jessie) anymore. Duh. There is in sid (aka Debian unstable) (packages link) so we may see a backport some time. Or not. In any case this is still 2.2.5 at the time of writing and as Adrian Knoth put it in the three-year-old bug asking for an upgrade to Netatalk 3:

Let's not ship another release without netatalk3, it's embarrassing.

Yes. It is. Removing a working version and not even mentioning it in the release notes is even worse though. So no cookies there.

Luckily the absolutely awesome Debian and greater FLOSS community have sorted 90% of the problem out for us already:

A quick Google search turns up an excellent article on netatalk's wiki that details installing Netatalk 3.1.7 on Debian 8 Jessie. There are two shortcomings to this: First, it doesn't compile to .debs but installs beside apt and friends, i.e. outside the package management. And second, it compiles with Spotlight search, courtesy of Gnome tracker, which doesn't really work well on servers yet. Hence we're lucky that Adrian Knoth's debified install has not yet added the tracker dependencies. It does compile for systemd use (the default for Debian 8 Jessie). If you want to continue using SysVInit, you need to modify debian/rules.

Compiling to .debs becomes as easy as:

# get build dependencies and a few helpers
apt-get install build-essential devscripts debhelper cdbs autotools-dev dh-buildinfo libdb-dev libwrap0-dev libpam0g-dev libcups2-dev libkrb5-dev libltdl3-dev libgcrypt11-dev libcrack2-dev libavahi-client-dev libldap2-dev libacl1-dev libevent-dev d-shlibs dh-systemd
# in case you want to try the tracker support (you need to amend the debian/ build config as well)
# apt-get install tracker libtracker-sparql-1.0-dev libtracker-miner-1.0-dev  
git clone https://github.com/adiknoth/netatalk-debian
cd netatalk-debian
debuild -b -uc -us

This should leave you with (at the time of writing this):

libatalk-dev_3.1.7-1_amd64.deb (development files for the libatalk library, dev only)
  md5:  e5a465e39a8560c919d8db85c8e5a83b
  sha1: 0b924cf75f22ab42406289c6f18ae0243d6396a3

libatalk16_3.1.7-1_amd64.deb (libatalk library, needed)
  md5:  17a3d677ed0b3df1c2f4c1a8ab9045fd
  sha1: 7345ed3edd442716c99c2fe979140703204c0826

netatalk_3.1.7-1_amd64.deb (netatalk daemons, needed)
  md5:  c694abca7f3cdc0070b2b3e7d528324a
  sha1: 932d1e3d5899958f29e79a7ba40e858d4ac272e8

Obviously you can download the files above if you run the AMD64 architecture and trust me enough to compile them for you.
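
If you do, it is worth checking the downloads against the checksums listed above. A minimal way to do that (shown for libatalk16, repeat for the other two files):

# compare a downloaded .deb against the md5 and sha1 listed above
echo "17a3d677ed0b3df1c2f4c1a8ab9045fd  libatalk16_3.1.7-1_amd64.deb" | md5sum -c -
echo "7345ed3edd442716c99c2fe979140703204c0826  libatalk16_3.1.7-1_amd64.deb" | sha1sum -c -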

Continue reading "Apple Timemachine backups on Debian 8 (Jessie)"

Managing a project consisting of multiple git repositories

IT

The core team organizing DebConf, the annual Debian developer conference, reached out to me two weeks ago to help support this year's effort a bit.

I'm very happy to do so as Debian is a cornerstone of everything I do in the Open Source/Free Software space.

Screenshot of git_pull_all with color

To get me started I got access to a lot of mailing lists and irc channels. And even more git repositories. So many that the DebConf team even has an instruction page on how the repositories all fit together.

It's unfortunately quite common to split a bigger project into many git repositories to ease access rights management and reduce the noise and data transfer volume for the average user. The downside is, everybody ends up with a dozen or more individual repositories to keep pulling. And then there's git annex for yet another level of indirection.

Joey Hess, a former Debian developer, has even written an extensive tool, myrepos, to meta-manage the different repositories and it can do quite some magic across different SCMSs1. In my case this is a bit of an overkill though.

And using myrepos may leave you confused at some point about whether to run mr or git directly for each batch of repos you have inherited over time while working on multiple projects.2 Thus I prefer the simple route:

Check out each repository into a common top-level directory (~/debconf/ in this case) and then put the following two lines into an executable script git_pull_all into that top level directory:

#!/bin/sh
find ~/debconf -mindepth 1 -maxdepth 1 -type d -exec sh -c "cd {}; test -r .git/config && git pull $*" \;

This will allow you to pull all git repos with one command and keep the normal syntax for everything else you do with each repo.

The -mindepth and -maxdepth options instruct find to run your git pull only inside each direct child of the top-level directory. So recursion depth = 1. That is the single trick there is to this.

Updates:

If you like to have some color and a bit of a spaced layout for improved readability, try:

#!/bin/sh
find ~/debconf -mindepth 1 -maxdepth 1 -type d -exec sh -c "cd {}; test -r .git/config && (printf \"\033[1m\033[34m%-50s\033[0m\" \"\${PWD}:\" ; git pull $*)" \;

When you have pull.rebase=true set in your .gitconfig, you can run ./git_pull_all --no-rebase to avoid rebases in case you work somewhere and want to have the merge commits.

P.S.: The DebConf15 Heidelberg registration just opened, please check the DebConf15 homepage for news, venue information and please register if you want to come around.


  1. Source Code Management Systems, like git, mercurial (hg) or subversion (svn). Or God forbid ... cvs. I don't like the (D)VCS (Distributed) Version Control Systems moniker. Because that's not really all these systems do. Not even the most important piece of what they do these days. 

  2. With myrepos you can still work with each individual repository via git. Just so nobody will write in "but...". 

scp (secure copy) a file to the same path on a remote system

IT

I've been copying files from one server to its symmetrical cluster partner a lot over the last week.

It's always

 scp /path/to/directory/file remote:/path/to/directory/

That gets boring after a while but does not really warrant setting up a full-blown config management solution like salt, puppet or ansible.

So here is scpover[1.5kB].

It reduces the effort to

 scpover /path/to/directory/file

This will copy "file" from "/path/to/directory/" into exactly the same directory on the remote cluster partner. Which system to target is detected from the hostname of the local system and hard-coded into the script.

Scpover can also take multiple filepaths on one command line so you can beam over a few related config files from multiple locations in one go.
Not an atomic change but at least all within a reasonably short timespan.
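
For the curious: the core idea fits in a few lines of shell. This is a minimal sketch, not the actual scpover script; the hostname-to-partner mapping below is a made-up example, the real script hard-codes its own cluster pairs.

#!/bin/sh
# sketch: copy each given file to the same absolute path on the cluster partner
case "$(hostname -s)" in
    node-a) PEER=node-b ;;   # example mapping, adjust to your cluster
    node-b) PEER=node-a ;;
    *) echo "No cluster partner configured for this host." >&2; exit 1 ;;
esac

for FILE in "$@"; do
    DIR=$(dirname "$(readlink -f "$FILE")")   # absolute directory of the file
    scp "$FILE" "${PEER}:${DIR}/"
done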

Continue reading "scp (secure copy) a file to the same path on a remote system"

How much memory does a process use on Linux?

Linux

Sometimes the easy questions are the hardest to answer.

Memory can mean RSS (Resident Set Size) which is the memory of a process held in RAM (so not swapped out). That does include shared memory allocations. So if you add two RSS numbers, you're probably wrong already. Still this is usually the number we look for in most practical investigations.

Then there is VSZ (Virtual Set siZe), also called SIZE. The VSZ includes code, data and stack segments a process has allocated. And again that will count some shared address space. So usually bash will have a VSZ that's higher than its RSS.

man ps will also tell you:

   The SIZE and RSS fields don't count some parts of a process including the page tables, kernel stack, struct
   thread_info, and struct task_struct.  This is usually at least 20 KiB of memory that is always resident.

In most (if not all) practical scenarios that difference won't matter. If it did, you'd be using valgrind to look into the memory usage of your application in minute detail. Wouldn't you?

If you want to have an as-detailed-as-possible look into the memory allocations of a process pmap <pid> will give you the information. The summary at the end is a gross over-estimation of the total memory a process has allocated as it counts all mapped memory (and may still be wrong due to de-duplication and other factors). But that number may well serve as an upper bound if you need something like that.
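
For example, a quick way to grab just that summary line (a sketch; bash is only used as an example process and the exact columns vary between procps versions):

# detailed mappings of a running bash, keeping only the "total" summary line
pmap -x "$(pidof -s bash)" | tail -n 1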

For running processes


ps -eo 'pid user rss:8 size:8 cmd' --sort 'rss'
 

will give you a nice sorted list of processes and their RSS and VSZ (SIZE) in kiB (old school kB...).

For short running commands GNU time (not the bash built-in time command; apt install time on Debian-based systems) has a nice capability that's not widely known yet:


/usr/bin/time -f "RSS: %MkiB" <command>
 

will tell you the maximum RSS size the <command> has had during its lifetime. That's better than top or watch ps and trying to spot the process.

iTunes starts but does not show the main application window

IT

Every once in a while iTunes on Windows decides to start but does not show the main application window when trying to sync an iPhone (or any other iDevice).
Maybe it pops up after half an hour, maybe not.

This behavior is caused by iTunes waiting for its Bonjour zeroconf network service. Unfortunately, although iTunes is updated very frequently, that bug has persisted for years now.
The work-around is easy though:

Open up CMD as Administrator and type

net stop "bonjour service"

and iTunes should pop up its window a few moments later. It will complain about Bonjour not running, but Bonjour is not needed for anything but network self-discovery. Which you usually don't need anyway.

If you do, you can start Bonjour again with

net start "bonjour service"

anytime, also while iTunes is running. Once it decides to show its main window, it's good to go for the session.

Screenshot of iTunes and the CMD window

Security by policy does not work

Management

The laptop systems aboard the International Space Station (ISS) have been infected by computer viruses and worms multiple times. The W32.Gammima.AG virus made it to space in July 2008. And it happily spread from laptop to laptop onboard the ISS. The virus was written to steal credentials for some common games. It is unknown how many of these were run in orbit. The latency would kill the experience for sure.

I am sure there have been policies in place to prevent astronauts carrying personal soft- and hardware up to the ISS. Personal items must be explicitly applied for and will only be approved after severe scrutiny of each item. Even beyond the obvious security considerations, this is necessary as the launch weight needs to be calculated exactly.
NASA and Roscosmos both have very strict policies for their personnel and strict training to make sure they know and follow policy. The group of astronauts primarily affected by the policy is very well known and counts a few dozen heads.

Still at least one infected USB stick made it up to the ISS and could spread its malware. Other infections have happened and we can assume similar infection vectors.

So the policy has proven unenforceable. It is broken. It is still correct per se. There is nothing wrong with prohibiting personal soft- and hardware in a high-risk environment. So the policy stays in place. But NASA still needed to rely much less on its effectiveness.

Hence NASA did the only sane thing: move from an unenforceable policy to a technically feasible solution, significantly reducing the security exposure. In May 2013 NASA announced the ISS laptops are being migrated to Debian 6. Imagine how much pressure Microsoft must have applied to prevent such a technical decision, given the adverse marketing message it sends along the way. And still the engineers at NASA saw this as the best way forward.

The take-away message here is: Security by policy does not work.

Continue reading "Security by policy does not work"

Encrypting files with openssl for synchronization across the Internet

Linux

Well, shortly after I wrote about encrypting files with a keyfile / passphrase with gpg, people asked about a solution with openssl.

You should prefer to use the gpg version linked above, but if you can't, below is a script offering the same functionality with openssl.

You basically call crypt_openssl <file> [<files...>] to encrypt file to file.aes using the same keyfile as used in the gpg script (~/.gnupg/mykey001 per default).

A simple crypt_openssl -d <file.aes> [<files.aes...>] will restore the original files from the encrypted AES256 version that you can safely transfer over the Internet even using insecure channels.

Please note that you should feed compressed data to crypt_openssl whenever you can. So preferably use it on .zip or .tar.gz files.
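
Under the hood such a wrapper boils down to a pair of openssl enc invocations. A minimal sketch of the idea (the keyfile path matches the default mentioned above; the exact cipher options are my assumption, not necessarily what crypt_openssl uses):

# encrypt: file -> file.aes, key material read from the keyfile
openssl enc -aes-256-cbc -salt -pass file:"$HOME/.gnupg/mykey001" -in file -out file.aes
# decrypt: file.aes -> file
openssl enc -d -aes-256-cbc -pass file:"$HOME/.gnupg/mykey001" -in file.aes -out file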

Continue reading "Encrypting files with openssl for synchronization across the Internet"

Encrypting files with gpg for synchronization across the Internet

Linux

Automatically transferring (syncing) files between multiple computers is easy these days. Dropbox, owncloud or bitpocket to name a few. You can imagine I use the latter (if you want a recommendation)1.

In any case you want to encrypt what you send to be stored in "the cloud" even if it is just for a short time. There are many options how to encrypt the "in flight" data. Symmetric ciphers are probably the safest and most widely researched cryptography these days and easier to use than asymmetric key pairs in this context as well.

Encryption is notoriously hard to implement correctly and worthless when the implementation is flawed. So I looked at gpg, a well-known reference implementation, and was amazed that it can neither use a proper keyfile for symmetric encryption (you can only supply a passphrase via --passphrase-file) nor does it handle multiple files on the command line consistently. You can use --multifile (wondering... why does a command need that at all?) with --decrypt and --encrypt (asymmetric public/private key pair encryption) but not with --symmetric (symmetric shared key encryption). Duh!

With a bit of scripting around the gpg shortcomings, you end up with crypt_gpg that can nicely encrypt or decrypt multiple files (symmetric cipher) in one go.
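
The core of such a wrapper is little more than a loop around gpg's symmetric mode. A minimal sketch (the keyfile path is the default mentioned above; everything else is illustrative and not the exact crypt_gpg code):

# encrypt each file with AES256 using the shared keyfile as passphrase
for FILE in "$@"; do
    gpg --batch --yes --symmetric --cipher-algo AES256 \
        --passphrase-file "$HOME/.gnupg/mykey001" -o "${FILE}.gpg" "$FILE"
done
# note: GnuPG 2.1+ may additionally need --pinentry-mode loopback for --passphrase-file to work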


  1. Dropbox is closed source so it cannot be assessed for its security. Owncloud needs a thorough code review before I would dare to run it on my systems. 

Continue reading "Encrypting files with gpg for synchronization across the Internet"

Firefox opens directories in RawTherapee ... or a tale of colliding assumptions

Internet

After installing RawTherapee, a raw photo to jpg image converter, on my Kubuntu 13.04 desktop, Mozilla Firefox started to open the Downloads directory in ... RawTherapee. Gosh. Now that's what you'd want to browse a downloads folder with.

Update-alternatives did not help as RawTherapee is not managed by the Debian alternatives system. Hm, xdg-open still works as intended and opens dolphin.

Asking Google turns up many non-working solution proposals. Hm.

Firefox: Context menu (right click) -> Open Containing Folder

But, searching the collective memory is not completely useless. There is a solved bug report on the RawTherapee bug tracker that explains the issue. The .desktop file RawTherapee installs contains inode/directory as a supported mime type. Well, that's technically correct but makes no sense in the way mime associations are used. Thankfully the bug reporter was able to convince the developer of this and get it fixed for future RawTherapee releases.

So option 1: edit /usr/share/applications/rawtherapee.desktop and remove the inode/directory entry from the MimeType=-line. Then regenerate the various mimetype caches.
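
A minimal sketch of that edit (the sed expression and the cache refresh are the usual way to do this, but double-check the .desktop file before overwriting it):

# drop the inode/directory mime type from the system-wide .desktop file
sudo sed -i 's|inode/directory;||' /usr/share/applications/rawtherapee.desktop
# rebuild the mimeinfo cache so the change is picked up
sudo update-desktop-database /usr/share/applications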

This still leaves the question why Firefox does not use the inode/directory entry from dolphin.desktop. Also, the next update from Debian/Ubuntu will probably overwrite the fixed rawtherapee.desktop file again as the patch from the RawTherapee developer will only be included in new releases (and these may or may not be backported to Debian/Ubuntu). So you'd need to copy that fixed rawtherapee.desktop file to ~/.local/share/applications/ or the like.

Option 2: find out what the real issue is...

Continue reading "Firefox opens directories in RawTherapee ... or a tale of colliding assumptions"

Replicating BIND DNS slave configurations from BIND DNS master servers

Internet

Manually copying the BIND master zone configuration to slave servers is a tedious and error-prone process. There are configuration management systems available for large deployments and BIND can support database backends as well which allow synchronization virtually behind the scenes.

For my use these options are overly complex and have unwanted risks associated with them.

The better option is to automate generating the BIND slave configuration from the BIND master named.conf.local.
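
The basic idea can be sketched with a few lines of sed that rewrite the master's zone stanzas into slave stanzas. This is only a sketch under simplifying assumptions (192.0.2.1 stands in for the master's IP, zone file paths are rewritten to relative ones); the script in the full article handles more cases:

# turn "type master;" stanzas from the master's named.conf.local into slave stanzas
sed -e 's/type master;/type slave;\n        masters { 192.0.2.1; };/' \
    -e 's|file "/etc/bind/|file "|' \
    /etc/bind/named.conf.local > named.conf.local.slave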

Continue reading "Replicating BIND DNS slave configurations from BIND DNS master servers"

Securing the grub boot loader

Open Source

Since version 2.0 the behaviour of grub regarding passwords has changed quite substantially. It can be nicely used to secure the boot process so that an X display manager (gdm, kdm, lightdm, ...) or login prompt cannot be circumvented by editing the Linux kernel boot command line parameters. The documentation is concise but many old how-tos may lead you down the wrong GNU grub "legacy" (the pre-2.0 versions) path.

So this assumes you have grub installed and working. I.e. if you press Shift during boot, you get a grub menu and can edit menu entries via the e key.

First you need to setup grub users and corresponding passwords:

Run grub-mkpasswd-pbkdf2 to hash every password you want to use for grub users (which are technically unrelated to Linux system users at this time).
You'll get a string like 'grub.pbkdf2.sha512.10000...'. It will replace the plain-text passwords.

In '/etc/grub.d/40_custom' add lines like:

# These users can change the config at boot time and run any menuentry:
set superusers="root user1"
password_pbkdf2 root grub.pbkdf2.sha512.10000.aaa...
password_pbkdf2 user1 grub.pbkdf2.sha512.10000.bbb...
# This user can only run specifically designated menuentries (not a superuser):
password_pbkdf2 user2 grub.pbkdf2.sha512.10000.ccc...

Now once you did this, grub v. 2.0+ will ask for a superuser password every time you want to boot any menu item. This is a changed behavior from v. 1.9x, which defaulted to allowing all entries if no user restriction was specified. So you need to add '--unrestricted' to all 'menuentries' that any user shall be able to boot. You can edit '/boot/grub/grub.cfg' and add --unrestricted to (the default) menuentries. Or you can edit the 'linux_entry ()' function in '/etc/grub.d/10_linux' so that the 'echo "menuentry ..."' lines include --unrestricted by default:

[...]
echo "menuentry '$(echo "$title" | grub_quote)' --unrestricted ${CLASS} \$menuentry_id_option 'gnulinux-$version-$type-$boot_device_id' {" | sed "s/^/$submenu_indentation/"
else
echo "menuentry '$(echo "$os" | grub_quote)' --unrestricted ${CLASS} \$menuentry_id_option 'gnulinux-simple-$boot_device_id' {" | sed "s/^/$submenu_indentation/"
[...]

Make a backup of this file as it will be overwritten by grub updates. This way all Linux kernels detected by the script will be available to all users without identifying to grub via username / password.

Now issue update-grub to re-generate 'grub.cfg' with the amended menuentries.

If everything worked well, your system can now be booted unrestricted but the grub configuration can only be changed from the grub superusers after identifying with their username and password at the grub prompt.

Bonus point:

If you want to create menuentries that user2 (and any superuser) from the above example user list can run, add blocks like these to the end of '40_custom':

menuentry "Only user2 (or superuser) can run this Windows installation" --users user2 {
set root=(hd1,1)
chainloader +1
}

Update

16.12.2015:
Hector Marco and Ismael Ripoll have found a nearly unbelievable exploit in Grub2 that allows you to tap backspace 28 times to get a rescue shell and that way bypass a password prompt. Time to update!
Read the excellent analysis of the bug and the exploit vector in Hector Marco's blog post.

855 years and counting

Fun

Dear marketing managers:

  1. Test your scripts before using them on real customers
  2. Server images don't show in emails these days anymore

Dear developers:

  1. Test your scripts before deploying them in production
  2. Add (configurable) sanity checks to customer visible output according to the underlying business logic
SPG mailing fail

SyncPlaces, SortPlaces ... preserving these and other excellent Firefox add-ons

Internet

A few days ago I was re-installing a laptop of mine. Downloading the default add-ons for Firefox came to a stall as I could not find SyncPlaces, which I've been using for a few years to synchronize my bookmarks between the systems I use. As I knew the author, Andy Halford, has a homepage, I surfed there and found...

"So long Mozilla and thanks for all the fish

If you are here looking for any of my extensions (SyncPlaces, SortPlaces, CheckPlaces or SearchPlaces) then I'm sad to inform you that these are no longer available.

Unfortunately I do not have the time or the motivation to continue to support these in light of the close-minded, short-sighted and increasingly inflexible organisation that Mozilla has become. Apparently spending 1000's of hours promoting their products and providing my extensions to the general public to fill the gaps in their product, and repair bookmarks destroyed by Firefox Sync, counts for little and does not deserve any flexibility or support from them. Apparently add-on developers are second-class citizens to them whatever they may say. Surprising given that these are the main USP of Firefox, and developers are providing their time and efforts for free. My decision was taken after a series of problems the last one being the final straw."
Andy Halford, 17.06.12 screenshot

I was unable to find any details on what caused this rant and the serious decision to remove his excellent add-ons from the Internet. There are a few discussions on the Mozillazine forums but nothing enlightening so far. Andy's personal homepage, family homepage and the TotalValidator product he and his wife offer are all still online, he just pulled all his popular Firefox add-ons.

Andy states on his homepage

"I will not be answering any further emails on this matter, I suggest that you direct any questions to Mozilla themselves and ask them why they were happier for me to leave than to use a little common sense." [ib.]

I still tried to contact him and asked for an interview but he has not come back to me so far. I've sent a second email with the URL of this blog post.

The SyncPlaces plug-in is still prominently featured on Wikipedia's Comparison of browser synchronizers at press time.

Now this is a mess. With some ungraciously licensed closed source software, you're probably used to planning migrations once license terms run out, or to upgrading to the much-improved successor product because "extended support" for your perfectly fine current version gets ridiculously expensive.

But with open source you have more rights and e.g. the security to use the product as long as you wish. If you have kept a copy of the code (and the license file). Because in this case the Mozilla Firefox installer .xpi's are hard to come by. Mozilla has a centralized add-on repository and once the files are pulled from there, they quickly vanish off the net. Now don't get me wrong, these plug-ins are now unmaintained software, so you will have to migrate at some point in time, but not necessarily now. You have some time for planning and evaluation of alternatives.

Continue reading "SyncPlaces, SortPlaces ... preserving these and other excellent Firefox add-ons"

The XY problem in IT management

Management

Online community users know the XY problem:

A person has a problem X and tries to solve it with Y. He asks for help with Y online.
Often X has a straightforward solution which is not Y. But the person asking doesn't describe X.1

The term "XY problem" was implicitly coined when the Open Source philosopher Eric S. Raymond wrote his "How To Ask Questions The Smart Way" text and added "How can I use X to do Y?" to the "Questions Not To Ask" section.

ESR himself states in "How To Ask Questions The Smart Way":

Describe the problem's symptoms, not your guesses

It's not useful to tell hackers what you think is causing your problem. (If your diagnostic theories were such hot stuff, would you be consulting others for help?) So, make sure you're telling them the raw symptoms of what goes wrong, rather than your interpretations and theories. Let them do the interpretation and diagnosis. If you feel it's important to state your guess, clearly label it as such and describe why that answer isn't working for you.

[...]

Since the preceding point seems to be a tough one for many people to grasp, here's a phrase to remind you: "All diagnosticians are from Missouri." That US state's official motto is "Show me" (earned in 1899, when Congressman Willard D. Vandiver said "I come from a country that raises corn and cotton and cockleburs and Democrats, and frothy eloquence neither convinces nor satisfies me. I'm from Missouri. You've got to show me.") In diagnosticians' case, it's not a matter of skepticism, but rather a literal, functional need to see whatever is as close as possible to the same raw evidence that you see, rather than your surmises and summaries. Show us.
Source

As online communities like IRC or forums are quite aware of the XY problem by now, the people involved will often quickly focus on getting behind the issue presented. They will ask questions around the "why": why do you want to do Y, what is the reason for you seeking help on Y, etc., trying to uncover X.

They have been trained by numerous long-winded discussions of why somebody would want to solve something as awkward as Y, which slowly led to uncovering the unarticulated underlying problems X.2


  1. There are many alternate definitions for the XY problem available on PerlMonks. Some - like ESR - define Y to be the original problem and X the offered solution. I stuck with Greg Wooledge and John D. Porter and used X to be the underlying problem and Y the exposed question or request. That occurs more intuitively to me and seems to be the more frequent definition. As they don't change the message both nomenclatures are fine and time will tell which one prevails. 

  2. Greg "GreyCat" Wooledge has collected examples of the XY problem in IRC communities in his Wiki. 

Continue reading "The XY problem in IT management"