
Linux kernel USB errors -71 and -110

Linux

After an upgrade of my PC's mainboard BIOS, booting would take a minute or more to complete, and sometimes the lightdm login screen would sit there not accepting keyboard input for another minute or so. Then the keyboard would get enabled and I could log in normally. Everything worked fine once that bootup struggle had completed. This was fully reproducible and persisted across reboots. Weird.

The kernel dmesg log showed entries that looked suspicious:

dmesg log excerpt showing USB error messages

Googling these error -110 and error -71 messages is a bit hard. Why the USB driver does not emit useful error messages instead of archaic errno-style numbers escapes me. This is not the 80s anymore.

The wisdom of the crowd (citation needed, Wikipedia style) says error -110 is something around "the USB port power supply was exceeded" [source].

Now lsusb -tv shows device 1-7 ... to be my USB keyboard. I somehow doubt it wants more power than the hub is willing to provide.

The Arch Linux BBS forums recommend piecing together information from drivers/usb/host/ohci.h and (updated from their post, which is from 2012) tools/include/uapi/asm-generic/errno.h. This is why some people consider -110 to mean "Connection timed out". Nah, not likely either.
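If you just want to translate such a number into its errno name, the errno tool from the moreutils package saves you the header spelunking (assuming your distribution packages moreutils):

sudo apt install moreutils   # provides errno(1)
errno 110                    # ETIMEDOUT, "Connection timed out"
errno 71                     # EPROTO, "Protocol error"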

Reading through the kernel source around drivers/usb/host did not enlighten me either. To the contrary. Uuugly. There seems to be no comprehensive list of what these error codes mean. And the numbers are assigned to error conditions quite arbitrarily. And - of course - there is no documentation. "It was hard to do, so it should be hard to understand as well."

Luckily some of the random musings I read through contained a curious piece of advice: power cycle the host. So I did, and that did not make the error go away. Other people insisted on pulling cables out of wall sockets, unplugging everything and conducting esoteric rituals. That made it dawn on me: the mainboard of course keeps the USB ports nicely powered even in the "off" state. So switching the power supply off (yes, these have a separate switch, go find yours), waiting a bit for the capacitors to drain, switching things back on and ... the errors were gone, the system booted within seconds again.

So the takeaway message: If you get random error messages like

device descriptor read/64, error -110
device not accepting address 42, error -71

on devices that previously worked fine ... completely remove power from the host, the hubs and the USB devices, so they forget they have seen each other on the bus before. When they meet again after that blackout, they will happily negotiate the protocol details with each other again successfully.

Install "kept back" updates on Ubuntu

Linux

Canonical has implemented a staged roll-out for some Ubuntu package updates.

I find that rather annoying at times, e.g. when preparing the laptop for traveling.

So for my memory and for the benefit of others:

# disable the phased roll-out feature on this apt upgrade run
sudo apt -o "APT::Get::Always-Include-Phased-Updates=true" dist-upgrade

Screenshot of apt with the option to disable staged rollouts

This can - for permanent use - be put into a config file, e.g. Gerrit Heim puts it into /etc/apt/apt.conf.d/99-Phased-Updates [German]. Some other options around this staged roll-out feature are "documented" within a thread in the Ubuntu discourse forum.
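For reference, such a file would contain a single line in the usual apt.conf syntax (the file name is just a convention; any file under /etc/apt/apt.conf.d/ is read):

APT::Get::Always-Include-Phased-Updates "true";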

Installing System Rescue (CD) to a flash drive

Linux

System Rescue, the project formerly known as System Rescue CD, has moved from being based on Gentoo to being built on Arch Linux packages.

With this their ISO layout changed substantially, so when updating my trusty rescue USB flash drive I could not just update the kernel, initrd and the root filesystem image as I had typically done every other year before.

The "Installing on a USB memory stick" documentation is good for Windows (use Rufus, it's nice) but rather useless for Linux. They recommend a dd or the fancy graphical version of that, called usbimager.

I much prefer to have a flash drive that I can write to over an image of a CD (ISO) written 1:1 onto the flash media.

The basic idea is to use the bulk of the System Rescue ISO contents but amend them with your own grub and syslinux so they work as intended, in place of the supplied ones that are bound a bit too tightly to the ISO layout.

I did this on Debian Buster but with some adjustments to paths and what packages to install, any recent Linux distribution should do:

Continue reading "Installing System Rescue (CD) to a flash drive"

Wiping harddisks in 2019

Linux

Wiping hard disks is part of my company's policy when returning servers. No exceptions.

Good providers will wipe what they have received back from a customer, but we don't trust that, as the hosting / cloud business is under constant budget pressure and cutting corners (wipefs) is a likely consequence.

With modern SSDs there is "security erase" (man hdparm or see the - as always well maintained - Arch wiki), which is useful if the device is encrypt-by-default. These devices basically "forget" the encryption key, but using this means trusting the device's implementation security. Which doesn't seem warranted. Still, after wiping and trimming, a secure erase can't be a bad idea :-).

Still there are a few things to be aware of when wiping modern hard disks:

  1. Don't forget to add bs=4096 (block size) to dd as it still defaults to 512 bytes, and that makes writing even zeros run at less than half the maximum possible speed (see the example after this list). SSDs may benefit from larger block sizes matched to their flash page structure, which is usually 128 kB, 256 kB, 512 kB, 1 MB, 2 MB or 4 MB these days.1
  2. All disks can usually be written to in parallel. screen is your friend.
  3. The write speed varies greatly by disk region, so use 2 hours per TB and wipe pass as a conservative estimate. This is better than extrapolating from what you see initially in the fastest region of a spinning disk.
  4. The disks have become huge (we run 12 TB disks in production now) but the write speed is still somewhere between 100 MB/s and 300 MB/s. So wiping servers on the last day before returning them is not possible anymore with disks larger than 4 TB each (and three passes). Or 12 TB and one pass (where e.g. fully encrypted content allows to just do a final zero-wipe).
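A minimal wipe run could look like this (a sketch: replace /dev/sdX and /dev/sdY with the disks to wipe; this is destructive, and status=progress needs GNU coreutils 8.24 or later):

# wipe a single disk with zeros, showing progress
dd if=/dev/zero of=/dev/sdX bs=4096 status=progress
# wipe two disks in parallel, each in a detached screen session
screen -dmS wipe-sdX dd if=/dev/zero of=/dev/sdX bs=4096
screen -dmS wipe-sdY dd if=/dev/zero of=/dev/sdY bs=4096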

hard disk size   one pass   three passes
 1 TB             2 h         6 h
 2 TB             4 h        12 h
 3 TB             6 h        18 h
 4 TB             8 h        24 h (one day)
 5 TB            10 h        30 h
 6 TB            12 h        36 h
 8 TB            16 h        48 h (two days)
10 TB            20 h        60 h
12 TB            24 h        72 h (three days)
14 TB            28 h        84 h
16 TB            32 h        96 h (four days)
18 TB            36 h       108 h
20 TB            40 h       120 h (five days)

Hard disk wipe animation


  1. As Douglas pointed out correctly in the comment below, these are IT Kilobytes and Megabytes, so 2^10 bytes and 2^20 bytes. So Kibibytes and Mebibytes for those firmly in SI territory. 

Xfce 4.12 not suspending on laptop-lid close

Linux

Xfce 4.12, the default in Ubuntu/Xubuntu 18.04 LTS, did not suspend a laptop after closing the lid. In fact, running xfce4-power-manager --quit ; xfce4-power-manager --no-daemon --debug showed that xfce4 wasn't seeing a laptop lid close event at all.

On the contrary, acpi_listen nicely reports button/lid LID close and button/lid LID open events when folding the screen down and opening it up again.

As so often, the wonderful docs / community of Arch Linux come to the rescue. This forum thread from 2015 received the correct answer in 2017:

Xfce4 basically recognizes systemd and thus disables its built-in power-management options for handling these "button events" (but doesn't tell you so in the power-manager config UI). Systemd is configured to handle these events by default (/etc/systemd/logind.conf has HandleLidSwitch=suspend) but for unknown reasons decides not to honor that.
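For reference, on a stock install the relevant lines in /etc/systemd/logind.conf are commented out, i.e. they show the compiled-in defaults (as seen on my systems; yours may differ):

#HandleLidSwitch=suspend
#HandleLidSwitchDocked=ignore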

So best is to teach Xfce4 to handle the events again as in pre-systemd times:

xfconf-query -c xfce4-power-manager -p /xfce4-power-manager/logind-handle-lid-switch -s false

Now the UI options will work again as intended and the laptop suspends on lid close and resumes on lid open.

Update:

07.01.19: Changed XFCE -> Xfce as per Corsac's suggestion in the comments below. Thank you!

Background info:

The name "XFCE" was originally an acronym for "XForms Common Environment", but since that time it has been rewritten twice and no longer uses the XForms toolkit. The name survived, but it is no longer capitalized as "XFCE", but rather as "Xfce". The developers' current stance is that the initialism no longer stands for anything specific. After noting this, the FAQ on the Xfce Wiki comments "(suggestion: X Freakin' Cool Environment)".

(quoted from Wikipedia's Xfce article also found in the Xfce docs FAQ).

Openssh taking minutes to become available, booting takes half an hour ... because your server waits for a few bytes of randomness

Linux

So, your machine now needs minutes to boot before you can ssh in where it used to be seconds before the Debian Buster update?

Problem

Linux 3.17 (2014-10-05) learnt a new syscall getrandom() that, well, gets bytes from the entropy pool. Glibc learnt about it with 2.25 (2017-02-05) and, two tries and four years after the kernel, OpenSSL used that functionality from release 1.1.1 (2018-09-11). OpenSSH implemented it natively for the 7.8 release (2018-08-24) as well.

Now the getrandom() syscall will block1 if the kernel can't provide enough entropy. And that's frequently the case during boot. Especially with VMs that have no input devices or IO jitter to feed the pseudo random number generator from.

First seen in the wild January 2017

I vividly remember not seeing my Alpine Linux VMs back on the net after the Alpine 3.5 upgrade. That was basically the same issue.

Systemd. Yeah.

Systemd makes this behaviour worse, see issues #4271, #4513 and #10621.
Basically, as of now, the entropy file saved as /var/lib/systemd/random-seed will not - drumroll - add entropy to the random pool when played back during boot. Actually it will. It will just not be credited. So Linux doesn't know, and continues blocking getrandom(). This is obviously different from SysVinit times2 when /var/lib/urandom/random-seed (which you still have lying around on updated systems) made sure the system carried enough entropy over a reboot to continue working right after enough of the system was booted.

#4167 is a re-opened discussion about systemd eating randomness early at boot (hashmaps in PID 1...). Some Debian folks participate in the recent discussion and it is worth reading if you want to learn about the mess that booting a Linux system has become.

While we're talking systemd ... #10676 also means systems will use RDRAND in the future despite Ted Ts'o's warning on RDRAND [Archive.org mirror and mirrored locally as 130905_Ted_Tso_on_RDRAND.pdf, 205kB as Google+ will be discontinued in April 2019].
Update: RDRAND doesn't return random data on pre-Ryzen AMD CPUs (AMD CPU family < 23) as per systemd bug #11810. It will always return 0xFFFFFFFFFFFFFFFF (2^64 - 1). This is a known issue since 2014, see kernel bug #85991.

Debian

Debian is seeing the same issue working up towards the Buster release, e.g. Bug #912087.

The typical issue is:

[    4.428797] EXT4-fs (vda1): mounted filesystem with ordered data mode. Opts: data=ordered
[ 130.970863] random: crng init done

with delays up to tens of minutes on systems with very little external random sources.

This is what it should look like:

[    1.616819] random: fast init done
[    2.299314] random: crng init done

Check dmesg | grep -E "(rng|random)" to see how your systems are doing.

If this is not fully solved before the Buster release, I hope some of the below can end up in the release notes3.

Solutions

You need to get entropy into the random pool earlier at boot. There are many ways to achieve this and - currently - all require action by the system administrator.
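A quick way to see how starved a system currently is: the kernel exports its entropy estimate via procfs. Values in the low hundreds during boot mean getrandom() callers will block:

cat /proc/sys/kernel/random/entropy_avail   # e.g. 3781 once initialized, low hundreds while starved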

Kernel boot parameter

From kernel 4.19 (Debian Buster currently runs 4.18 [Update: but will be getting 4.19 before release according to Ben via Mika]) you can set CONFIG_RANDOM_TRUST_CPU at compile time or random.trust_cpu=on on the kernel command line. This will make recent Intel / AMD systems trust RDRAND and fill the entropy pool with it. See the warning from Ted Ts'o linked above.

Update: Since Linux kernel build 4.19.20-1 CONFIG_RANDOM_TRUST_CPU has been enabled by default in Debian.

Using a TPM

The Trusted Platform Module has an embedded random number generator that can be used. Of course you need to have one on your board for this to be useful. It's a hardware device.

Load the tpm-rng module (ideally from initrd) or compile it into the kernel (config HW_RANDOM_TPM). Now, the kernel does not "trust" the TPM RNG by default, so you need to add

rng_core.default_quality=1000

to the kernel command line. 1000 means "trust fully", 0 means "don't use". So you can choose any value in between that works for you, depending on how much you consider your TPM to be unbugged.

VirtIO (KVM, QEMU, ...)

For Virtual Machines (VMs) you can forward entropy from the host (that should be running longer than the VMs and have enough entropy) via virtio_rng.

So on the host, you do:

kvm ... -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,bus=pci.0,addr=0x7

and within the VM newer kernels should automatically load virtio_rng and use that.
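If you manage VMs through libvirt instead of raw kvm command lines, the equivalent domain XML snippet would be (a sketch, assuming a reasonably recent libvirt; add it within the <devices> section via virsh edit):

<rng model='virtio'>
  <backend model='random'>/dev/urandom</backend>
</rng>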

You can confirm with dmesg as per above.

Or check:

# cat /sys/devices/virtual/misc/hw_random/rng_available
virtio_rng.0
# cat /sys/devices/virtual/misc/hw_random/rng_current
virtio_rng.0

Patching systemd

The Fedora bugtracker has a bash / python script that replaces the systemd random seeding with a (better) working one. The script can also serve as a good starting point if you need to script your own solution, e.g. for reading from an entropy provider available within your (secure) network.

Chaoskey

The wonderful Keith Packard and Bdale Garbee have developed a USB dongle, ChaosKey, that supplies entropy to the kernel. Hard- and software are open source.

Jitterentropy_RNG

Kernel 4.2 introduced jitterentropy_rng, which uses the jitter in CPU timings to generate randomness.

modprobe jitterentropy_rng

This apparently needs a userspace daemon though (read: design mistake) so

apt install jitterentropy-rngd (available from Buster/testing).

The current version 1.0.8-3 installs nicely on Stretch. dpkg -i is your friend.

But - drumroll - that daemon doesn't seem to use the kernel module at all.

That's where I stopped looking at that solution. At least for now. There are extensive docs if you want to dig into this yourself.

Update: The Linux kernel 5.3 will have an updated jitterentropy_rng as per Commit 4d2fa8b44. This is based on the upstream version 2.1.2 and should be worth another look.

Haveged

apt install haveged

Haveged is a user-space daemon that gathers entropy through the timing jitter any CPU has. It will only run "late" in boot but may still get your openssh back online within seconds rather than minutes.

It is also - to the best of my knowledge - not verified at all regarding the quality of the randomness it generates. The haveged design and history page provides an interesting read and I wouldn't recommend haveged if you have alternatives. If you have none, haveged is a wonderful solution though, as it works reliably. And unverified entropy is better than no entropy. Just forget this is 2018 2019 :-).

early-rng-init-tools

Thorsten Glaser has posted newly developed early-rng-init-tools in a debian-devel thread. He provides packages at http://fish.mirbsd.org/~tg/Debs/dists/sid/wtf/Pkgs/early-rng-init-tools/ .

First, he deserves kudos for naming a tool after what it does. This makes it much more easily discoverable than the trend of naming things after girlfriends, pets or anime characters. The implementation hooks into early boot via initrd integration and carries over a seed generated during the previous shutdown. This and some other implementation details are not ideal and there has been quite extensive scrutiny, but none that discovered serious issues. Early-rng-init-tools looks like a good option for platforms that are not RDRAND (~CONFIG_RANDOM_TRUST_CPU) capable.

Linus to the rescue

Luckily, at the end of September, Linus Torvalds was fed up with the entropy starvation issue and the non-conclusive discussions about (mostly) who's at fault and ... started coding.

With the kernel 5.4 release on 25.11.2019 his patch made it into mainline. He created a try_to_generate_entropy function that uses CPU jitter to generate seed entropy for the PRNG early in boot.

In the merge commit Linus explains:

This is admittedly partly "for discussion". We need to have a way forward for the boot time deadlocks where user space ends up waiting for more entropy, but no entropy is forthcoming because the system is entirely idle just waiting for something to happen.

While this was triggered by what is arguably a user space bug with GDM/gnome-session asking for secure randomness during early boot, when they didn't even need any such truly secure thing, the issue ends up being that our "getrandom()" interface is prone to that kind of confusion, because people don't think very hard about whether they want to block for sufficient amounts of entropy.

The approach here-in is to decide to not just passively wait for entropy to happen, but to start actively collecting it if it is missing. This is not necessarily always possible, but if the architecture has a CPU cycle counter, there is a fair amount of noise in the exact timings of reasonably complex loads.

We may end up tweaking the load and the entropy estimates, but this should be at least a reasonable starting point.

So once this kernel is available in your distribution, you should be safe from entropy starvation at boot on any platform that has hardware timers (I haven't encountered one that does not in the last decade).

Ted Ts'o reviewed the approach and was fine with it, and Ahmed Darwish did some testing of the quality of the randomness generated, which seems fine, too.

Updates

14.01.2019

Stefan Fritsch, the Apache2 maintainer in Debian, OpenBSD developer and a former Debian security team member, stumbled over the systemd issue preventing Apache's libssl from initializing at boot, filed as Debian bug #916690 - apache2: getrandom call blocks on first startup, systemd kills with timeout.

The bug has been retitled "document getrandom changes causing entropy starvation" hinting at not fixing the underlying issue but documenting it in the Debian Buster release notes.

Unhappy with this "minimal compromise" Stefan wrote a comprehensive summary of the current situation to the Debian-devel mailing list. The discussion spans December 2018 and January 2019 and mostly reiterated what has been written above already. The discussion has - so far - not reached any consensus. There is still the "systemd stance" (not our problem, fix the daemons) and the "ssh/apache stance" (fix systemd, credit entropy).

The "document in release notes" minimal compromise was brought up again and Stefan warned of the problems this would create for Buster users:

> I'd prefer having this documented in the release notes:
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=916690
> with possible solutions like installing haveged, configuring virtio-rng,
> etc. depending on the situation.

That would be an extremely user-unfriendly "solution" and would lead to 
countless hours of debugging and useless bug reports.

This is exactly why I wrote this blog entry and keep it updated. We need to either fix this or tell everybody we can reach before they upgrade to Buster. Otherwise this will lead to huge numbers of systems dead on the network after what looked like a successful upgrade.

Some interesting tidbits were mentioned within the thread:

Raphael Hertzog fixed the issue for Kali Linux by installing haveged by default. Michael Prokop did the same for the grml distribution within its December 2018 release.

Ben Hutchings pointed to an interesting thread on the debian-release mailing list he kicked off in May 2018. Multiple people summarized the options and the fact that there is no "general solution that is both correct and easy" at the time.

Sam Hartman identified Debian Buster VMs running under VMware as an issue, because that hypervisor does not provide virtio-rng. So Debian VMs wouldn't boot into ssh availability within a reasonable time. This is an issue for real world use cases, albeit ones running a proprietary product as the hypervisor.

16.01.2019

Daniel Kahn Gillmor wrote in to explain a risk for VMs starting right after the boot of the host OS:

If that pool is used by the guest to generate long-term secrets because it appears to be well-initialized, that could be a serious problem.
(e.g. "Mining your P's and Q's" by Heninger et al -- https://factorable.net/weakkeys12.extended.pdf)
I've just opened https://bugs.launchpad.net/qemu/+bug/1811758 to report a way to improve that situation in qemu by default.

So ... make sure that your host OS has access to a hardware random number generator or at least carries over its random seed properly across reboots. You could also delay VM starts until the crng on the host Linux is fully initialized (random: crng init done).
Otherwise your VMs may get insufficiently generated pseudo-random numbers and won't even know.
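A crude way to implement such a delay on the host (a sketch; poll the host's kernel log before starting any guests):

# block until the host kernel reports its CRNG as initialized
while ! dmesg | grep -q "crng init done"; do
    sleep 5
done
# ... start the VMs here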

12.03.2019

Stefan Fritsch revived the thread on debian-devel again and got a few more interesting tidbits out of the developer community:

Ben Hutchings has enabled CONFIG_RANDOM_TRUST_CPU for Debian kernels from 4.19.20-1, so the problem is somewhat contained for recent (RDRAND capable) AMD64 systems in Buster.

Thorsten Glaser developed early-rng-init-tools which combine a few options to try and get entropy carried across boot and generated early during boot. He received some scrutiny as can be expected but none that would discourage me from using it. He explains that this is for early boot and thus has initrd integration. It complements safer randomness sources or haveged.

16.04.2019

The Debian installer for Buster is running into the same problem now, as indicated in the release notes for RC1. Bug #923675 has details. Essentially choose-mirror waits several minutes for entropy when used with https mirrors.

08.05.2019

The RDRAND use introduced in systemd to bypass the kernel random number generator during boot falls for an AMD pre-Ryzen bug, as RDRAND on these systems doesn't return random data after a suspend / resume cycle. Added an update note to the systemd section above.

03.06.2019

Bastian Blank reports the issue is affecting Debian cloud images now as well, as cloud-init generates ssh keys during boot.

10.07.2019

Added the update of jitterentropy_rng to a version based on upstream v2.1.2 into the Jitterentropy section above.

16.09.2019

The Linux Kernel Mailing List (LKML) is re-iterating the entropy starvation issue and the unwillingness of systemd to fix its usage of randomness in early boot. Ahmed S. Darwish has reported the issue leading to ext4 reproducibly blocking boot with kernel 5.3-rc8. There are a few patches floated and the whole discussion is worth reading, albeit non-conclusive as of now.

Ted Ts'o says "I really very strongly believe that the idea of making getrandom(2) non-blocking and to blindly assume that we can load up the buffer with 'best efforts' randomness to be a terrible, terrible idea that is going to cause major security problems that we will potentially regret very badly. Linus Torvalds believes I am an incompetent systems designer." in this email.

In case you needed a teaser to really start reading the thread! Linus Torvalds also mentions the issue (and a primer on what "never break userspace" means) in the Linux kernel 5.3 release notes.

18.09.2019

... and Martin Steigerwald kindly noticed that I update this blog post with the relevant discussions I come across as this entropy starvation mess continues to haunt us.

25.11.2019

Added the "Linus to the rescue" section after the Linux kernel 5.4 has been released.

02.04.2020

I ran into the same issue on a Gentoo system today. Luckily OpenRC handled this gracefully, but it delayed booting: syslog-ng actually hangs the boot for some time ... waiting for entropy. Argh. The Gentoo forums thread on the topic clearly listed the options:

  1. Make syslog-ng depend on haveged by adding rc_syslog_ng_need="haveged" to /etc/rc.conf (and obviously having haveged installed)
  2. Re-compiling the kernel with CONFIG_RANDOM_TRUST_CPU=y where that is an option

  1. it will return with EAGAIN in the GRND_NONBLOCK use case. The blocking behaviour when lacking entropy is a security measure as per Bug #1559 of Google's Project Zero

  2. Update 18.12.2018: "SysVinit times" ::= "The times when most Linux distros used SysVinit over other init systems." So Wheezy and previous for Debian. Some people objected to the statement, so I added this footnote as a clarification. See the discussion in the comments below. 

  3. there is no Buster branch in the release notes repository yet (17.12.2018). Update: I wrote a section for the release notes on 06.05.2019 and Paul Gevers amended and committed it. So when users of affected systems read the release notes before upgrading to Buster they will hopefully not be surprised (and worried) by the long boot delays. 

Firefox asking to be made the default browser again and again

Linux

Firefox on Linux can develop the habit of (rather randomly) asking again and again to be made the default browser. E.g. when started from Thunderbird by clicking a link it asks, but when started from a shell all is fine.

The reason for this is often two (or more) .desktop entries competing with each other.

So, walkthrough: (GOTO 10 in case you are sure to have all the basics right)

update-alternatives --display x-www-browser
update-alternatives --display gnome-www-browser

should both show firefox for you. If not, run

update-alternatives --config <entry>

for the offending entry to fix the preference to /usr/bin/firefox.

Check (where available)

exo-preferred-applications

that the "Internet Browser" is "Firefox".

Check (where available)

xfce4-mime-settings

that anything containing "html" points to Firefox (or is left at a non-user set default).

Check (where available)

xdg-settings get default-web-browser

that you get firefox.desktop. If not, run

xdg-settings set default-web-browser firefox.desktop

If you are running Gnome, check

xdg-settings get default-url-scheme-handler http

and the same for https.

LABEL 10:

Run

sensible-editor ~/.config/mimeapps.list

and remove all entries that contain something like userapp-Firefox-<random>.desktop.

Run

find ~/.local/share/applications -iname "userapp-firefox*.desktop"

and delete these files or move them away.

Done.

Once you have it working again, consider disabling the option for Firefox to check whether it is the default browser, because otherwise it will create those pesky userapp-Firefox-<random>.desktop files again.
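If you manage profiles centrally, the corresponding preference can be pinned in a user.js file in the profile directory (an assumption based on current Firefox versions; the checkbox in the preferences UI toggles the same setting):

user_pref("browser.shell.checkDefaultBrowser", false);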

Configuring Linux is easy, innit?

Unbalanced volume (channels) on headset audio

Linux

I use a headset to make phone calls, and when they are mono, the great awesomeness of the Linux audio stack seems to change the volume only on the active channel (e.g. the right channel). So when I listen to some music (stereo) afterwards, the channels are not balanced anymore and one side is louder than the other. And this persists thanks to the preferences being saved across reboots. Duh.

As usual, checking Pulseaudio (pavucontrol) is useless; it shows balanced channels.

But checking Alsa (alsamixer) revealed the issue and alsamixer can fix this, too:

Step 1: run alsamixer in a terminal and select your headset after pressing [F6]:

Alsamixer: Select sound card

Step 2: Select the headset audio output with [<-] and [->] cursor keys:

Alsamixer: Unbalanced channels on the headset (left / right channel loudness are different)

Step 3: Press [b] to balance the left and right channels:

Alsamixer: Balanced channels (left / right channel loudness) again

Step 4: Press [Esc] to exit alsamixer, which will keep the changed settings (... great choice of key, [q] raises the left channel's loudness ...).

Step 5: Save this setting by running sudo alsactl store which should update /var/lib/alsa/asound.state with the fixed settings so they persist across reboots.

Step 6: Enjoy music again :-).

If you need to script this, amixer is the tool to use, e.g. amixer -c 1 set "Headset" 36.
1 is the card number you see in alsamixer, "Headset" is the channel name, also from alsamixer (it can contain blanks, hence the quotes around the name), and 36 is the desired loudness level for both channels. See the screenshots above for where to find the data, or run aplay -l to see the cards in your PC and amixer -c 1 (with your card id) to see the channels that (virtual, USB) sound card has.

Prevent Ubuntu from phoning home

Linux

Ubuntu has unfortunately decided to implement yet another "phone home" feature, this time transferring your lsb_release information, CPU model and speed (from /proc/cpuinfo), uptime output, most of uname -a and the curl version to an Ubuntu news web service.

Here is the Launchpad bug report #1637800 introducing this ... web bug.

This thing runs both systemd-timer based (via /lib/systemd/system/motd-news.service and /lib/systemd/system/motd-news.timer) and on request when you log in (via /etc/update-motd.d/50-motd-news).

Ubuntu news on ssh login

There has even been a bug filed about the motd advertising HBO's Silicon Valley show.

To prevent this from running (it is enabled by default on Ubuntu 17.04 and will probably propagate down to earlier versions as well), edit /etc/default/motd-news to include

ENABLED=0

so

sed -i "s/ENABLED=1/ENABLED=0/" /etc/default/motd-news # run as root

for your automated installs.
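Alternatively you can stop the systemd timer directly (this covers the timer-based runs; the login-time hook still honors /etc/default/motd-news, so the setting above remains the more complete fix):

sudo systemctl disable --now motd-news.timer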

Update:

02.07.2017: Dustin Kirkland responded to a YC "hacker news" mention of his motd spam. He mentions:

You're welcome to propose your own messages for merging, if you have a well formatted, informative message for Ubuntu users.
We'll be happy to review and include them in the future.

What could possibly go wrong?

How much memory does a process use on Linux?

Linux

Sometimes the easy questions are the hardest to answer.

Memory can mean RSS (Resident Set Size), which is the memory of a process held in RAM (so not swapped out). That does include shared memory allocations. So if you add up two RSS numbers, you're probably wrong already. Still, this is usually the number we look for in most practical investigations.

Then there is VSZ (Virtual Set siZe), also called SIZE. The VSZ includes the code, data and stack segments a process has allocated, and again that will count some shared address space. As it counts everything mapped rather than just what is resident, a process's VSZ is usually noticeably higher than its RSS.

man ps will also tell you:

   The SIZE and RSS fields don't count some parts of a process including the page tables, kernel stack, struct
   thread_info, and struct task_struct.  This is usually at least 20 KiB of memory that is always resident.

In most (if not all) practical scenarios that difference won't matter. If it did, you'd be using valgrind to look into the memory usage of your application in minute detail. Wouldn't you?

If you want an as-detailed-as-possible look into the memory allocations of a process, pmap <pid> will give you that information. The summary at the end is a gross over-estimation of the total memory a process has allocated, as it counts all mapped memory (and may still be wrong due to de-duplication and other factors). But that number may well serve as an upper bound if you need something like that.
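E.g. for a running bash (the PID lookup via pidof and the output line are illustrative; your numbers will differ):

pmap $(pidof -s bash) | tail -n 1
#  total           25812K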

For running processes


ps -eo 'pid user rss:8 size:8 cmd' --sort 'rss'
 

will give you a nice sorted list of processes with their RSS and VSZ (SIZE) in KiB (old-school kB...).

For short-running commands, GNU time (not the bash built-in time command; apt install time on Debian-based systems) has a nice capability that's not widely known yet:


/usr/bin/time -f "RSS: %MkiB" <command>
 

will tell you the maximum RSS the <command> had during its lifetime. That's better than top or watch ps and trying to spot the process.

Encrypting files with openssl for synchronization across the Internet

Linux

Well, shortly after I wrote about encrypting files with a keyfile / passphrase using gpg, people asked about a solution with openssl.

You should prefer to use the gpg version linked above, but if you can't, below is a script offering the same functionality with openssl.

You basically call crypt_openssl <file> [<files...>] to encrypt file to file.aes using the same keyfile as used in the gpg script (~/.gnupg/mykey001 per default).

A simple crypt_openssl -d <file.aes> [<files.aes...>] will restore the original files from the encrypted AES256 version that you can safely transfer over the Internet even using insecure channels.

Please note that you should feed compressed data to crypt_openssl whenever you can. So preferably use it on .zip or .tar.gz files.
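At its core, such a script boils down to one openssl enc call per file, roughly like this (a sketch, not the actual script; -pbkdf2 assumes OpenSSL 1.1.1 or later, older releases silently fall back to a weak key derivation without it):

# encrypt file -> file.aes
openssl enc -aes-256-cbc -salt -pbkdf2 -pass file:"$HOME/.gnupg/mykey001" -in file -out file.aes
# decrypt file.aes -> file
openssl enc -d -aes-256-cbc -pbkdf2 -pass file:"$HOME/.gnupg/mykey001" -in file.aes -out file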

Continue reading "Encrypting files with openssl for synchronization across the Internet"

Encrypting files with gpg for synchronization across the Internet

Linux

Automatically transferring (syncing) files between multiple computers is easy these days. Dropbox, owncloud or bitpocket to name a few. You can imagine I use the latter (if you want a recommendation)1.

In any case you want to encrypt what you send to be stored in "the cloud", even if it is just for a short time. There are many options for encrypting the "in flight" data. Symmetric ciphers are probably the safest and most widely researched cryptography these days, and easier to use than asymmetric key pairs in this context as well.

Encryption is notoriously hard to implement correctly and worthless when the implementation is flawed. So I looked at gpg, a well known reference implementation, and was amazed that it can neither use a proper keyfile for symmetric encryption (you can just supply a passphrase via --passphrase-file) nor handle multiple files on the command line consistently. You can use --multifile (wondering... why does a command need that at all?) with --decrypt and --encrypt (asymmetric public/private key pair encryption) but not with --symmetric (symmetric shared-key encryption). Duh!

With a bit of scripting around the gpg shortcomings, you end up with crypt_gpg that can nicely encrypt or decrypt multiple files (symmetric cipher) in one go.
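The heart of such a wrapper is just a loop calling gpg once per file, roughly like this (a sketch under the same keyfile convention, not the actual script; GnuPG 2.1+ may additionally want --pinentry-mode loopback):

for f in "$@"; do
    gpg --batch --symmetric --cipher-algo AES256 \
        --passphrase-file "$HOME/.gnupg/mykey001" \
        --output "$f.gpg" "$f"
done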


  1. Dropbox is closed source so it cannot be assessed for its security. Owncloud needs a thorough code review before I would dare to run it on my systems. 

Continue reading "Encrypting files with gpg for synchronization across the Internet"

Binding applications to a specific IP

Linux

These days many systems are multi-homed in the sense that they have more than one IP address bound at the same time,
i.e. for different network cards, virtual IPs for shared servers or just using WiFi and a wired network connection at the same time on a laptop.

Murphy of course makes sure that your system will choose the worst IP (i.e. the one on slow WiFi or the one reserved for admin access) when an application does not specifically support binding to a selected IP address. And Mozilla Firefox for example doesn't.

The kernel chooses an outgoing IP from those in the routing table with the same metric:

daniel@server:~$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.0.2.1         0.0.0.0         U     0      0        0 eth0
0.0.0.0         192.0.2.2         0.0.0.0         U     0      0        0 eth1
0.0.0.0         192.0.2.3         0.0.0.0         U     0      0        0 eth2
0.0.0.0         192.0.2.4         0.0.0.0         U     0      0        0 eth3

You can obviously play around with the metric and make the kernel prefer the desired interface over the others (see below). This will affect all applications though. Some people use the firewall to NAT all packets to port 80 onto the network interface desired for web browsing. Gee, beware the http://somewebsite.tld:8080 links...
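With modern iproute2 that metric tweak would look like this (a sketch using the addresses from the example above; the route with the lowest metric wins):

sudo ip route replace default via 192.0.2.1 dev eth0 metric 50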

Thankfully Daniel Ryde has solved the problem via a LD_PRELOAD shim. With his code you can run

daniel@laptop:~$ BIND_ADDR="192.0.2.100" LD_PRELOAD=/usr/lib/bind.so firefox (*)

and happily surf away.

To compile his code (3.3kB, local copy, see note 1) you need to run

gcc -nostartfiles -fpic -shared bind.c -o bind.so -ldl -D_GNU_SOURCE
strip bind.so
cp -i bind.so /usr/lib/

and you're set to go.

If you don't have gcc available (and trust me) you can download pre-compiled 32bit and 64bit (glibc-2) bind.so libraries here (4.5kB).

I guess because Daniel Ryde hid his code so well on his webpage, Robert J. McKay wrote another LD_PRELOAD shim, called Bindhack (4.5kB, local mirror). This will - as is - only compile on 32bit machines. But YMMV.

Run the above command (*) with your desired (and locally bound) IP address in bash and visit MyIP.dk or DNStools.ch or any of the other services that show your external IP to see whether you've succeeded.

Notes:

  1. Daniel Ryde did not specify -D_GNU_SOURCE in the comments section of bind.c. Modern glibc/gcc need that, as he used RTLD_NEXT, which is Unix98 and not POSIX. I amended the local copy of bind.c and sent him an email so he can update his.
  2. Both are IPv4 only, no IPv6 support.

Updates:

19.03.15 madmakz wrote in to clarify that all of the bind LD_PRELOAD shims only work with TCP connections. So not with UDP.
I'm not aware of a shim that manipulates UDP sockets.

14.01.14 Christian Pellegrin wrote a superb article on how to achieve per-application routing with the help of Linux network namespaces.

16.06.13 showip.be seems to be gone, so I replaced it with dnstools.ch in the text above. There are plenty of others as well.

22.06.12 Lennart Poettering has a IPv4 only version of a shim and a rather good readme available at his site.

29.11.10 Catalin M. Boie wrote another LD_PRELOAD shim, force_bind. I have not tested this one. It's capable of handling IPv6 binds.

11.01.09 Daniel Ryde has replied to my email and updated his local copy now as well.

Ubuntu Karmic 9.10 Bluetooth UMTS Dial-up (DUN)

Linux

Using a mobile phone's Bluetooth dial-up network (DUN) to connect to the Internet (UMTS/GPRS) while on the road is quite convenient for me. Sadly this is not supported out-of-the-box in Ubuntu Karmic 9.10 (Netbook Remix), as it uses Network-Manager to handle - well - network connections. And that is not quite there yet for Bluetooth-managed devices.

While the default solution (rfcomm and Gnome-PPP) still works, it's ugly to set up. Sadly, zillions of Ubuntu forum threads and blog entries still detail this solution - or the issues encountered with it along the way.

The much better solution is Blueman, an improved Gnome-Bluetooth primarily developed by Valmantas Palikša. It brings the right UDEV magic along to teach Network-Manager about the Bluetooth devices it handles.

Blueman Screenshot on Ubuntu Karmic 9.10 Netbook Edition

Just follow the steps on their downloads page to set up the Blueman PPA (Personal Package Archive) to get things working.

Kubuntu 9.10 (karmic) 64bit firefox java plugin

Linux

For some unknown reason the (K)Ubuntu developers did not update the Java plugin for firefox after jaunty (yet?).

The version that Karmic (9.10) pulls out of the multiverse repository is still jaunty's (9.04).

So when you try:

apt-get install sun-java6-plugin

you'll get something like

   Reading package lists... Done
   Building dependency tree
   Reading state information... Done
   Some packages could not be installed. This may mean that you have
   requested an impossible situation or if you are using the unstable
   distribution that some required packages have not yet been created
   or been moved out of Incoming.
   The following information may help to resolve the situation:
   
   The following packages have unmet dependencies:
     sun-java6-plugin: Depends: sun-java6-bin (= 6-15-1) but 6-16-0ubuntu1.9.04 is to be installed
   E: Broken packages

Duh.

Actually, if you have the Java Runtime Environment (JRE, package name sun-java6-jre) installed, all the files needed are already present.
They are just not put in the right place on the filesystem.

So, run:

sudo apt-get install sun-java6-jre   # install JRE if needed
sudo ln -s /usr/lib/jvm/java-6-sun/jre/lib/amd64/libnpjp2.so /usr/lib/mozilla/plugins/

This will install the JRE (if it's not already installed) and will symlink the firefox plugin for java in place so that it'll be found after a browser restart.